[Pkg-clamav-commits] [SCM] Debian repository for ClamAV branch, debian/unstable, updated. debian/0.95+dfsg-1-6156-g094ec9b

Török Edvin edwin at clamav.net
Sun Apr 4 01:09:53 UTC 2010


The following commit has been merged in the debian/unstable branch:
commit 5518b7d41f18a4fc56a90e1e67ef45edd4f397c5
Author: Török Edvin <edwin at clamav.net>
Date:   Fri Nov 27 12:44:52 2009 +0200

    Merge LLVM upstream r90002
    
    Squashed commit of the following:
    
    commit 9dbc06e034aaee26c33551a84ca04d9e114124de
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 08:40:14 2009 +0000
    
        add comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90002 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a722fabf41a89cb27561345640d46479af4d5baa
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 08:37:22 2009 +0000
    
        reduce nesting, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90001 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0e065c4ce2614aec3c57122de80b601fe8349033
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 08:32:52 2009 +0000
    
        limit the recursion depth of GetLinearExpression.  This
        fixes a crash analyzing consumer-lame, which had a "%X = add %X, 1"
        in unreachable code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90000 91177308-0d34-0410-b5e6-96231b3b80d8
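
        As an illustration (not from the commit), a minimal sketch in the
        typed-pointer IR syntax of this era of the self-referential pattern
        that triggered the infinite recursion; function and label names are
        hypothetical:

        define i32 @f(i32* %p) {
        entry:
          ret i32 0

        dead:                      ; unreachable from entry
          ; an instruction may use itself only in unreachable code; a
          ; linear-expression walk that follows operands without a depth
          ; limit never terminates on %X
          %X = add i32 %X, 1
          %g = getelementptr i32* %p, i32 %X
          %v = load i32* %g
          br label %dead
        }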
    
    commit 248818e2c5a71ebc4b393506a52f1e78293799f2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 08:25:10 2009 +0000
    
        teach GVN's load PRE to insert computations of the address in predecessors
        where it is not available.  It's unclear how to get this inserted
        computation into GVN's scalar availability sets, Owen, help? :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89997 91177308-0d34-0410-b5e6-96231b3b80d8
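
        A hedged sketch of the situation being handled (hypothetical
        example): the load is fully redundant along one predecessor, but
        its address has never been computed along the other, so PRE must
        first materialize the address there:

        define i32 @pre(i32* %p, i1 %c) {
        entry:
          br i1 %c, label %then, label %merge
        then:
          %a1 = getelementptr i32* %p, i32 1
          %x = load i32* %a1
          br label %merge
        merge:
          ; the load below is available along %then; to PRE it, GVN now
          ; inserts a copy of the gep (and the load) on the other incoming
          ; edge, where the address was not previously available
          %a2 = getelementptr i32* %p, i32 1
          %v = load i32* %a2
          ret i32 %v
        }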
    
    commit 36063d789bb3ffd23e346b1d32f0aace6f8c1bce
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 06:42:42 2009 +0000
    
        add some tests for memdep phi translation + PRE.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89996 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a8eeda9fd32714cf5a68346369d22343ce80e0dc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 06:36:28 2009 +0000
    
        this test is failing, and is expected to.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89995 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 05d7bd8ed8e65d45f92c44f9c1b3ea9a40fe9713
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 06:33:09 2009 +0000
    
        filecheckize
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89994 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fc06fec923d4e2a8f22e45b040615b8bc8473459
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 06:31:55 2009 +0000
    
        rename test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89993 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6766b8a852dd9de4781976cf2df532abceba70a5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 06:31:14 2009 +0000
    
        Fix phi translation in load PRE to agree with the phi
        translation done by memdep, and reenable gep translation
        again.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89992 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 295a27f1fb3e5ef271e1120bc8ebe18bdc5e1e70
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 05:53:01 2009 +0000
    
        redisable this, my bootstrap worked because it wasn't an optimized build, whoops.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89991 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2be6441fdb29f9e5d79e8319a8477c5df22ef780
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 05:19:56 2009 +0000
    
        try again.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89990 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8cc37e033917c220f8a163d124312f923631abdf
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 01:52:22 2009 +0000
    
        this is causing buildbot failures, disable for now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89985 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fdfa6e8e96177aac789502e20f297081863a91aa
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 00:35:04 2009 +0000
    
        this (and probably several others) are now done.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89982 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 97791a78617fc5c90d62de15894ac6a2f3e4651b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 00:34:38 2009 +0000
    
        teach phi translation of GEPs to simplify geps like 'gep x, 0'.
        This allows us to compile the example from PR5313 into:
    
        LBB1_2:                                                     ## %bb
        	incl	%ecx
        	movb	%al, (%rsi)
        	movslq	%ecx, %rax
        	movb	(%rdi,%rax), %al
        	testb	%al, %al
        	jne	LBB1_2
    
        instead of:
    
        LBB1_2:                                                     ## %bb
        	movslq	%eax, %rcx
        	incl	%eax
        	movb	(%rdi,%rcx), %cl
        	movb	%cl, (%rsi)
        	movslq	%eax, %rcx
        	cmpb	$0, (%rdi,%rcx)
        	jne	LBB1_2
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89981 91177308-0d34-0410-b5e6-96231b3b80d8
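
        For illustration (hypothetical example), the simplification in
        question: a gep whose indices are all zero adds no offset, so phi
        translation can fold it away instead of inserting a redundant
        instruction:

        define i8* @zero_gep(i8* %x) {
        entry:
          ; "gep x, 0" is just %x; translating such a gep through a phi
          ; now yields the translated pointer directly
          %g = getelementptr i8* %x, i32 0
          ret i8* %g
        }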
    
    commit 5594a48160450e0a82a5dc59633edc9e862c46d2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 00:29:05 2009 +0000
    
        factor some instcombine simplifications for getelementptr out to a new
        SimplifyGEPInst method in InstructionSimplify.h.  No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89980 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8e4b727624fd4aa595f6bec77bb83e73a189119a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 00:07:37 2009 +0000
    
        teach memdep to do trivial PHI translation of GEPs.  More to
        come.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89979 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 21b2340ceefa352a63357ae1b26f0a74303e4246
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 23:41:07 2009 +0000
    
        Teach memdep to phi translate bitcasts.  This allows us to compile
        the example in GCC PR16799 to:
    
        LBB1_2:                                                     ## %bb1
        	movl	%eax, %eax
        	subq	%rax, %rdi
        	movq	%rdi, (%rcx)
        	movl	(%rdi), %eax
        	testl	%eax, %eax
        	je	LBB1_2
    
        instead of:
    
        LBB1_2:                                                     ## %bb1
        	movl	(%rdi), %ecx
        	subq	%rcx, %rdi
        	movq	%rdi, (%rax)
        	cmpl	$0, (%rdi)
        	je	LBB1_2
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89978 91177308-0d34-0410-b5e6-96231b3b80d8
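
        A hedged sketch (hypothetical names) of what phi translating a
        bitcast means: when the queried pointer is a bitcast of a phi,
        memdep can restate the query as a bitcast of each incoming value
        and continue into the predecessors:

        define i32 @f(i8* %a, i8* %b, i1 %c) {
        entry:
          br i1 %c, label %merge, label %other
        other:
          br label %merge
        merge:
          %p = phi i8* [ %a, %entry ], [ %b, %other ]
          ; "%q = bitcast %p" translates to "bitcast %a" in %entry and
          ; "bitcast %b" in %other, so the load's dependencies can be
          ; found along each edge
          %q = bitcast i8* %p to i32*
          %v = load i32* %q
          ret i32 %v
        }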
    
    commit 514f5e5b9e56069ce04c8d2aa9fda599591010b2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 23:32:59 2009 +0000
    
        convert to filecheck
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89977 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c957c54b4e8307fa2498d55e7f6cbb457ad2da73
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Thu Nov 26 23:19:05 2009 +0000
    
        Fix typo spotted by Gabor Greif.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89976 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 66c2c4ebf1c0520ceabf1c3be9d26129ae4ffc97
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 23:18:49 2009 +0000
    
        factor some code out into some helper functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89975 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9012af4c975acf28f6c88d23d33e5842d6a416ec
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Thu Nov 26 22:54:26 2009 +0000
    
        Clean up file, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89974 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b068994837ad30a810301e0bcf5cb249e3cc68e5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 22:48:23 2009 +0000
    
        Add a hack for PR5601, a crash on obsolete syntax that we plan to
        remove in LLVM 3.0
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89973 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 65e3484a37d5db9be72ee1f9b6df0b38174b9465
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 22:08:06 2009 +0000
    
        fix crash on Transforms/InstCombine/intrinsics.ll introduced by r89970
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89972 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fa787569976fb0cd260c333246f3ec08a202ee1b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 22:04:42 2009 +0000
    
        Fix PR5471 by removing an instcombine xform.  Some pieces of the code
        generates store to undef and some generates store to null as the idiom
        for undefined behavior.  Since simplifycfg zaps both, don't remove the
        undefined behavior in instcombine.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89971 91177308-0d34-0410-b5e6-96231b3b80d8
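
        The two idioms mentioned above, side by side (a minimal sketch,
        not from the commit):

        define void @ub_idioms() {
        entry:
          ; both stores are canonical "this path is undefined" markers;
          ; since simplifycfg zaps either form, instcombine no longer
          ; rewrites one into the other
          store i32 0, i32* null
          store i32 0, i32* undef
          ret void
        }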
    
    commit 0b45226d223a0527b42841a3579a3477b160339a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 21:42:47 2009 +0000
    
        implement a bunch of xforms for overflow intrinsics, based on a patch
        by Alastair Lynn.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89970 91177308-0d34-0410-b5e6-96231b3b80d8
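
        One plausible fold from this family (an assumed example, not
        necessarily the exact set in the patch): an unsigned add of zero
        can never overflow, so both results are known:

        declare { i32, i1 } @llvm.uadd.with.overflow.i32(i32, i32)

        define i32 @no_overflow(i32 %x) {
        entry:
          ; folds to { %x, false }: adding 0 cannot overflow, so any
          ; branch on the overflow bit becomes dead
          %r = call { i32, i1 } @llvm.uadd.with.overflow.i32(i32 %x, i32 0)
          %sum = extractvalue { i32, i1 } %r, 0
          ret i32 %sum
        }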
    
    commit 2204a60625dec63703edbdebe4cf5f90460db2e5
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Nov 26 21:38:41 2009 +0000
    
        Rename new TailDuplicationPass to avoid name conflict with the old one.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89968 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 10211456369f3b23084bcbca76eb3ca54074105f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 19:25:46 2009 +0000
    
        @test9 is a testcase for r89958.  Before 89958, we misanalyzed the
        first expression as P+4+4*i which we considered to possibly alias
        P+4*j.  Now we correctly analyze the former one as P+1+4*i.
    
        @test10 is a sanity test that verifies that we know that P+4+4*i != P+4*i.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89960 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2d00688def6b7c358a78b578d0dfd398238512bf
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 18:53:33 2009 +0000
    
        Implement PR1143 (at -m64) by making basicaa look through extensions.  We
        previously already handled it at -m32 because there were no i32->i64
        extensions for addressing.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89959 91177308-0d34-0410-b5e6-96231b3b80d8
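
        A hedged sketch of the -m64 case (hypothetical example): the
        32-bit induction variable is sign-extended to i64 before indexing,
        and basicaa now looks through that extension when decomposing the
        offsets:

        define void @f([100 x i32]* %P, i32 %i) {
        entry:
          %e0 = sext i32 %i to i64
          %p0 = getelementptr [100 x i32]* %P, i64 0, i64 %e0
          %i1 = add i32 %i, 1
          %e1 = sext i32 %i1 to i64
          %p1 = getelementptr [100 x i32]* %P, i64 0, i64 %e1
          ; looking through the sexts, the offsets are 4*i and 4*i+4,
          ; so the two stores do not alias
          store i32 1, i32* %p0
          store i32 2, i32* %p1
          ret void
        }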
    
    commit b4fb3a8361062e7f26cf2a3e63243f8e790cc3f0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 18:35:46 2009 +0000
    
        fix two transposed lines Duncan caught and add an explanatory comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89958 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e8836c5fe676c088a0f9f0600eeaf44e18924471
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 17:14:10 2009 +0000
    
        this todo is resolved.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89957 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dd824f5233a7a96e416ed36f05b59d997ed92926
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 17:12:50 2009 +0000
    
        move DecomposeGEPExpression out into ValueTracking.cpp
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89956 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1692755c9b104b0ac1209c89dfb549842adb15e8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 17:00:01 2009 +0000
    
        teach GetLinearExpression to be a bit more aggressive.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89955 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7d6d362e41018fa2a787a3fe04d1596bfd8610e7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 16:52:32 2009 +0000
    
        resolve a fixme.  I haven't figured out how to write a testcase
        to exercise this though.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89954 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c29d0d5eb1077c274b2c7186c0096201c8bac87c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 16:42:00 2009 +0000
    
        update status of this.  basicaa is much improved now,
        only missing the one form (in this testcase).  Dan, do you
        consider this example to be important?
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89953 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bd7537176664464b5ef0a4aa121f0c7bedfac16a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 16:26:43 2009 +0000
    
        Teach basicaa that x|c == x+c when the c bits of x are clear.  This
        allows us to compile the example in readme.txt into:
    
        LBB1_1:                                                     ## %bb
        	movl	4(%rdx,%rax), %ecx
        	movl	%ecx, %esi
        	imull	(%rdx,%rax), %esi
        	imull	%esi, %ecx
        	movl	%esi, 8(%rdx,%rax)
        	imull	%ecx, %esi
        	movl	%ecx, 12(%rdx,%rax)
        	movl	%esi, 16(%rdx,%rax)
        	imull	%ecx, %esi
        	movl	%esi, 20(%rdx,%rax)
        	addq	$16, %rax
        	cmpq	$4000, %rax
        	jne	LBB1_1
    
        instead of:
    
        LBB1_1:
        	movl	(%rdx,%rax), %ecx
        	imull	4(%rdx,%rax), %ecx
        	movl	%ecx, 8(%rdx,%rax)
        	imull	4(%rdx,%rax), %ecx
        	movl	%ecx, 12(%rdx,%rax)
        	imull	8(%rdx,%rax), %ecx
        	movl	%ecx, 16(%rdx,%rax)
        	imull	12(%rdx,%rax), %ecx
        	movl	%ecx, 20(%rdx,%rax)
        	addq	$16, %rax
        	cmpq	$4000, %rax
        	jne	LBB1_1
    
        GCC (4.2) doesn't seem to be able to eliminate the loads in this
        testcase either, it generates:
    
        L2:
        	movl	(%rdx), %eax
        	imull	4(%rdx), %eax
        	movl	%eax, 8(%rdx)
        	imull	4(%rdx), %eax
        	movl	%eax, 12(%rdx)
        	imull	8(%rdx), %eax
        	movl	%eax, 16(%rdx)
        	imull	12(%rdx), %eax
        	movl	%eax, 20(%rdx)
        	addl	$4, %ecx
        	addq	$16, %rdx
        	cmpl	$1002, %ecx
        	jne	L2
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89952 91177308-0d34-0410-b5e6-96231b3b80d8
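
        A reduced illustration of the x|c == x+c reasoning (hypothetical
        example): after the shift, the low bits of the index are known
        zero, so or-ing in a small constant is the same as adding it:

        define void @f(i8* %p, i64 %i) {
        entry:
          %i4 = shl i64 %i, 2        ; low two bits are known zero
          %j  = or i64 %i4, 1        ; hence equivalent to add %i4, 1
          %a  = getelementptr i8* %p, i64 %i4
          %b  = getelementptr i8* %p, i64 %j
          ; offsets 4*i and 4*i+1 can never be equal, so NoAlias
          store i8 0, i8* %a
          store i8 1, i8* %b
          ret void
        }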
    
    commit 6f2c591e34adfbedadd2276f16f29203b75bc305
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 16:18:10 2009 +0000
    
        teach basicaa that A[i] != A[i+1].
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89951 91177308-0d34-0410-b5e6-96231b3b80d8
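
        In IR terms, a minimal sketch of the new fact (hypothetical
        example):

        define void @f([10 x i32]* %A, i64 %i) {
        entry:
          %p = getelementptr [10 x i32]* %A, i64 0, i64 %i
          %j = add i64 %i, 1
          %q = getelementptr [10 x i32]* %A, i64 0, i64 %j
          ; same base, indices differ by a non-zero constant, so the
          ; byte offsets always differ by 4: NoAlias
          store i32 0, i32* %p
          store i32 1, i32* %q
          ret void
        }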
    
    commit 3124e22ade9d96af8e2cb3b1708ae9da9bee9636
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 16:08:41 2009 +0000
    
        rename test
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89950 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8200930b6b24a8625e2a49b84e6df1e293c107ca
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 02:17:34 2009 +0000
    
        Change the other half of aliasGEP (which handles GEP differencing)
        to use DecomposeGEPExpression.  This dramatically simplifies and
        shrinks the code by eliminating the horrible CheckGEPInstructions
        method, fixes a miscompilation (@test3), and makes the code more
        aggressive.  In particular, we now handle the @test4 case, which
        is reduced from the SmallPtrSet constructor.  Missing this caused
        us to emit a variable-length memset instead of a fixed-size one.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89922 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 01526336b664739e81f2e14d8a6792dee2e2f619
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 02:16:28 2009 +0000
    
        add a new random feature test
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89921 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3c689d61206a26a2ec7e5f8ba5ca4d8f67f411e8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 02:14:59 2009 +0000
    
        Generalize DecomposeGEPExpression to exactly handle what
        Value::getUnderlyingObject does (when TD is around).  This allows
        us to avoid calling DecomposeGEPExpression unless the ultimate
        alias check we care about passes, speeding up BasicAA a bit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89920 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8334e9b223221e05f9c12eb3f1122032f1edc7c2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 02:13:03 2009 +0000
    
        Implement a new DecomposeGEPExpression method, which decomposes a
        GEP into a list of scaled offsets.  Use this to eliminate some
        previous ad-hoc code which was subtly broken (it assumed all
        Constant*'s were non-zero, but strange constant expressions could
        be zero).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89915 91177308-0d34-0410-b5e6-96231b3b80d8
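
        A hedged sketch of what the decomposition produces (hypothetical
        example; layout assumptions: i64 is 8 bytes, i32 is 4, no
        padding). The gep below decomposes into base %S, a constant byte
        offset of 12 (8 for field 1 plus 4 for element 1), and one scaled
        variable term, 24 * %i (the struct size times the outer index):

        define i32* @decomp({ i64, [4 x i32] }* %S, i64 %i) {
        entry:
          %p = getelementptr { i64, [4 x i32] }* %S, i64 %i, i32 1, i32 1
          ret i32* %p
        }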
    
    commit 5d8f955b656d9743fa32c087848ee9c26338e6c2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 02:11:08 2009 +0000
    
        Use GEPOperator more pervasively to simplify code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89914 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 41ec62554fd2b914d230c270963c3da30047be28
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 01:51:18 2009 +0000
    
        update some notes slightly
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89913 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 69272603d447a590a78832283bc8c67b1793acaf
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 26 01:50:12 2009 +0000
    
        remove some redundant braces
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89912 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 72524f2dc0f78d922c585c29e1d0947822e0db25
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Nov 26 00:35:01 2009 +0000
    
        Test for 89905.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89906 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1b563ac33aadc7fa11ddc7e28b7e2e067436fa00
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Nov 26 00:32:36 2009 +0000
    
        When all defs of a vr are implicit_def, delete all of the defs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89905 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 810ced7daebe78a8d84b94fac07e320a02cecb71
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Nov 26 00:32:21 2009 +0000
    
        Split tail duplication into a separate pass.  This is needed to avoid
        running tail duplication when doing branch folding for if-conversion, and
        we also want to be able to run tail duplication earlier to fix some
        reg alloc problems.  Move the CanFallThrough function from BranchFolding
        to MachineBasicBlock so that it can be shared by TailDuplication.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89904 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 97658f0fa49d6a3d8fd5f14010589e53474d2ba4
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Nov 25 23:50:09 2009 +0000
    
        Test for llvm-gcc checkin 89898.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89899 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cacef07c20f9dbeb8568426a4e84d8e45c14fa21
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Nov 25 23:28:01 2009 +0000
    
        Update to reflect recent debugging information encoding changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89896 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5214e0d77ee3b90decf455cea0509ac32286e1a0
    Author: Viktor Kutuzov <vkutuzov at accesssoftek.com>
    Date:   Wed Nov 25 22:44:18 2009 +0000
    
        Roll back changes from r89516: added two
        SubtargetFeatures::AddFeatures methods, which accept a
        comma-separated string or already-parsed command line parameters
        as input, plus some code refactoring to use these new methods.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89893 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ae719525ee29c8611887e5fe0153b81d1f0b56b7
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Nov 25 21:13:39 2009 +0000
    
        ProcessImplicitDefs should watch out for invalidated iterator and extra implicit operands on copies.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89880 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aea35a5ebae86ac620ec0ebfe5ee152b46c343f2
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Nov 25 19:57:14 2009 +0000
    
        Tail duplicate indirect branches for PowerPC, too.
        With the testcase for PR3120, the "threaded interpreter" runtime
        decreases from 1788 to 1413 with this change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89877 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 62b818848c2415d32c82957322cba81948b7f409
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Wed Nov 25 18:26:09 2009 +0000
    
        Avoid some possibly unsafe uses of StringRef::data().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89873 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7f75bbe16505f364edb9e7bc96ed6e44694e8723
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Nov 25 17:36:49 2009 +0000
    
        Use StringRef (again) in DebugInfo interface.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89866 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c46ed7f495edb1c8cf3bbf7603c78d06835f0d7d
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Nov 25 17:27:53 2009 +0000
    
        Based on the testcase for PR3120, running on my MacPro with Xeon processors,
        it is definitely profitable to tail duplicate indirect branches for x86.
        This is likely to be true to various degrees for all modern x86 processors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89865 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eba44ceacbbaa1cdbeb45ac660b50c024a3d1b9c
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Wed Nov 25 12:17:58 2009 +0000
    
        Support PIC loading of constant pool entries
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89863 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9ee5871e40d328473ec5b7733968eee6e80dc528
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Wed Nov 25 12:00:34 2009 +0000
    
        Adjust comments to new semantics.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89862 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit caa136e3c783ed03e83cb951e799614209c53b55
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 25 06:53:08 2009 +0000
    
        Sketch structure for X86 disassembler.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89850 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit df5801320dfcdbb02d66d110e91e59590e39ee5d
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Wed Nov 25 06:32:19 2009 +0000
    
        API change: rename Path::isSpecialFile to Path::isRegularFile;
        improve semantics per the post-review comments on r89765.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89848 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 58f610eff4cf913a5e6026283fe899bd6c6c1f33
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Wed Nov 25 06:04:18 2009 +0000
    
        Perform explicit instantiations in the proper namespace, since
        Clang diagnoses this ill-formedness.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89846 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f4a14e453252f061df2bf35dbe5d216161b378cb
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Wed Nov 25 05:38:41 2009 +0000
    
        Reverting patch in revision 89758; the initial attempt at fixing
        PR5373 has proven to be bogus.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89844 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d262856c12f952f5c258f8b33c09009e09f0153f
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 25 04:46:58 2009 +0000
    
        Add the rest of the build system logic for optional target disassemblers
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89841 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0e18f26816cf75f2f3173ed51f44b708762414a4
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 25 04:37:28 2009 +0000
    
        Regenerate configure
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89840 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4b0b0e476d4b9d5e4ffc502962b6057454c1f99d
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 25 04:30:13 2009 +0000
    
        Add CMake and configure logic to create llvm/Config/Disassemblers.defs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89839 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6d494001c9402f321ee3b539408c82e94771870f
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 25 02:13:23 2009 +0000
    
        Sketch TableGen disassembler emitter, based on patch by Sean Callanan.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89833 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 421b70020c09398a185cec1bebbcc524ea5fb167
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Wed Nov 25 01:05:25 2009 +0000
    
        Use endianness-dependent offsets for load/store of doubles when
        using two swc/lwc instead of sdc/ldc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89826 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 65ee191f8d37af9b2c9c44487804b7e71cd7fa94
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Nov 25 00:58:21 2009 +0000
    
        Fix compiler warnings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89824 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 649dcac5e84130548fd46c239c12c3f531d64309
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Wed Nov 25 00:47:43 2009 +0000
    
        Only include the sub-registers in the callee-saved regs, to avoid
        unnecessary save/restore.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89823 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 15b39c364d7acaab815964e8bb37f4f1883278ee
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Wed Nov 25 00:36:00 2009 +0000
    
        Add proper emission of load/store double to stack slots for mips1 targets!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89821 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit acdb878fcc4d19f3158b205f0acf2ff1660fc1d5
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Nov 25 00:31:13 2009 +0000
    
        Revert r89803.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89819 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 68722a84a011e2e2a8be3b3c6a09a2bbb477d390
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Nov 24 23:35:49 2009 +0000
    
        Refactor target hook for tail duplication as requested by Chris.
        Make tail duplication of indirect branches much more aggressive (for targets
        that indicate that it is profitable), based on further experience with
        this transformation.  I compiled 3 large applications with and without
        this more aggressive tail duplication and measured minimal changes in code
        size.  ("size" on Darwin seems to round the text size up to the nearest
        page boundary, so I can only say that any code size increase was less than
        one 4k page.) Radar 7421267.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89814 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9902d09e04bc3d2c1389e4fdabe5efbf14efc9ec
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Tue Nov 24 22:59:02 2009 +0000
    
        Do not store R31 into the caller's link area on PPC.
        This violates the ABI (that area is "reserved"), and
        while it is safe if all code is generated with current
        compilers, there is some very old code around that uses
        that slot for something else, and breaks if it is stored
        into.  Adjust testcases looking for current behavior.
        I've verified that the stack frame size is right in all
        testcases, whether it changed or not.  7311323.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89811 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b0eba4bdd3c9fa5a2f5a82b907439dafe0411358
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Nov 24 21:38:54 2009 +0000
    
        Enable debug info for ppc-darwin.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89803 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c10337d20080fca89c19acd69ae11f4a28e61cc8
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Nov 24 19:42:17 2009 +0000
    
        Use StringRef instead of std::string in DIEString.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89793 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 815c35cdb070cb876d01f3837f56b4d73b87f036
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Nov 24 19:37:07 2009 +0000
    
        Remove DebugLabelFolder pass. It is not used by dwarf writer anymore.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89790 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6f2bdd58f2cee41a5de62fe5515642d0371fecf8
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Nov 24 19:18:41 2009 +0000
    
        Switch to the pubtypes section before emitting pub types.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89787 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8bb856794be3bbdc21e72ea90f8226efce096ebe
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Nov 24 19:03:33 2009 +0000
    
        Remove bogus error handling code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89786 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6901c962de266a478d056a8c5be5c8901ca0896e
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Tue Nov 24 16:29:23 2009 +0000
    
        Fix comments as per post-review for rev. 89765.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89770 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f21c0e1a6da8926d0cadc61320c0931485f765ee
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Tue Nov 24 15:19:10 2009 +0000
    
        Provide Path::isSpecialFile interface for PR5568.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89765 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b4016f5de54742c18653294b490c28ae55688d15
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Tue Nov 24 11:51:52 2009 +0000
    
        Fix for PR5373, Credit to Jakub Staszak.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89758 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6b2b23a3569d8cefb165601454cc1dede80d30d6
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 24 08:06:15 2009 +0000
    
        Enable predication of NEON instructions in Thumb2 mode.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89748 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de17b23e8ab96fee43298f52d643383476994e96
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Nov 24 02:11:14 2009 +0000
    
        Oops. Re-disable JITTest.NoStubs on ARM and PPC since they still use stubs to
        make far calls work.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89733 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 003f9445a475809f93a5141770aa871eb1c548b2
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Nov 24 01:48:15 2009 +0000
    
        Delete some dead and non-obvious code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89729 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ec13b4fffb1742d8acd6e07a388b1e54dfd7c1c9
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Nov 24 01:14:22 2009 +0000
    
        Emit pubtypes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89725 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b21c0db31a04aacc19787dd6767983e91f007114
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Tue Nov 24 01:09:07 2009 +0000
    
        Make capitalization of names starting "is" more consistent.
        No functional change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89724 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9f433ab362f406dc8336ae2445bcfebbd133a8a1
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 24 01:05:23 2009 +0000
    
        Data type suffix must come after predicate.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89723 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 115a53c6df56d2befe8c19e602e31ffc2515b93a
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Tue Nov 24 00:59:08 2009 +0000
    
        <rdar://problem/6721894>. Allow multiple registers to be renamed together (super and sub) if necessary to break an anti-dependence.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89722 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a414f36ad4dca3554e1313a4715b3856ac4066b9
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Tue Nov 24 00:44:37 2009 +0000
    
        Materialize global addresses via movt/movw pair, this is always better
        than doing the same via constpool:
        1. Load from constpool costs 3 cycles on A9, movt/movw pair - just 2.
        2. Load from constpool might stall up to 300 cycles due to cache miss.
        3. Movt/movw does not use load/store unit.
        4. Fewer constpool entries => better compiler performance.
    
        This is only enabled on ELF systems, since Darwin does not have
        the needed relocations (yet).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89720 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 71465ac298b594109d9f6e9c8b4fc97051840a8b
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Nov 24 00:20:27 2009 +0000
    
        80 column violations
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89718 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f8f9acf65f39634e24604568f1cab3cca14ff18b
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Mon Nov 23 23:35:19 2009 +0000
    
        * Move stub allocation inside the JITEmitter, instead of exposing a
        way for each TargetJITInfo subclass to allocate its own stubs. This
        means stubs aren't as exactly-sized anymore, but it lets us get rid of
        TargetJITInfo::emitFunctionStubAtAddr(), which lets ARM and PPC
        support the eager JIT, fixing http://llvm.org/PR4816.
    
        * Rename the JITEmitter's stub creation functions to describe the kind
        of stub they create. So far, all of them create lazy-compilation
        stubs, but they sometimes get used when far-call stubs are needed.
        Fixing http://llvm.org/PR5201 will involve fixing this.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89715 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bf095c3301eabc23c9be57bd86209f98afc4fd13
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 23 23:25:54 2009 +0000
    
        enable iv-users simplification by default
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89713 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c61b56c8b54d3abdf197464c00ea9b3613aa5889
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 23 23:20:51 2009 +0000
    
        Remove ISD::DEBUG_LOC and ISD::DBG_LABEL, which are no longer used.
        Note that "hasDotLocAndDotFile"-style debug info was already broken;
        people wanting this functionality should implement it in the
        AsmPrinter/DwarfWriter code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89711 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 89acd2fc3e3c28cedeb8f5b5d5d378425b6d2f49
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Mon Nov 23 22:49:00 2009 +0000
    
        Allow more than one stub to be being generated at the same time.
    
        It's probably better in the long run to replace the
        indirect-GlobalVariable system. That'll be done after a subsequent
        patch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89708 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 09c61b35e2c8a6cf9bcdf42f8d07a8ac26238eac
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Nov 23 21:57:23 2009 +0000
    
        Massive refactoring of NEON instructions.  Separate the opcode
        from the data size specifier suffix, move \t upstream into the
        instruction format, and fix more 80-column violations.
        This fixes the NEON asm printing so the "predicate" field is
        printed between the opcode and the data type suffix.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89706 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8d34f97d8b78e04cad93ab4fb27f19963c0b2f70
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 23 21:30:55 2009 +0000
    
        Simplify this code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89702 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3a271d8ee93e2de1277abaa40fe3614c458905be
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 23 21:29:08 2009 +0000
    
        Print the debug info line and column in MachineInstr::print even when there's
        no filename. This situation is apparently fairly common right now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89701 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8d450b25083d0f488b2e797e79eb92172eb2c48d
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 23 21:08:25 2009 +0000
    
        move fconst[sd] to UAL. <rdar://7414913>
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89700 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9ee642f0b2272dab1086aa3421d1958e63a066f6
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Mon Nov 23 21:00:43 2009 +0000
    
        Partially revert r84730 by removing N2VDup from ARMInstrFormats.td and modifying
        VDUPLND and VDUPLNQ to derive from N2V instead of N2VDup.  VDUPLND and VDUPLNQ
        now expect op19_18 and op17_16 as the first two args.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89699 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7b7663763bdc058d678f7d459dfa49cc97d6f4c6
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 23 20:39:53 2009 +0000
    
        update test for 89694
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89695 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 66e70cdbb4097df5f9f9341663441fc431015307
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 23 20:35:53 2009 +0000
    
        fold immediate of a + Const into the user as a subtract if it can fit as a negated two-part immediate.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89694 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6c6fa9a41918f91b9ea6765cfc0eb566e68b2de0
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Mon Nov 23 20:09:13 2009 +0000
    
        Revert r84572 by removing N3VImm from ARMInstrFormats.td now that we can specify
        {?,?,?,?} as op11_8 for VEXTd and VEXTq.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89693 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c3201132cadd122f71500f6cbc88130cfa912af3
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Nov 23 19:11:20 2009 +0000
    
        Add CreateLocation variant that accepts MDNode (with a default value).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89689 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b125c6e04f2a851b4ed6a361a705816d0374ce4b
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Nov 23 18:43:37 2009 +0000
    
        Revert r89487.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89686 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 46f784e7a92f51878af9c6c0cdb8312b2d188303
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Mon Nov 23 18:16:16 2009 +0000
    
        Partially revert r89377 by removing NLdStLN class definition from
        ARMInstrFormats.td and fixing VLD[234]LN* and VST[234]LN* to derive from NLdSt
        instead of NLdStLN.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89684 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7d3b3e40e16069bbc07c2acb20594cfffe406572
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 23 18:12:11 2009 +0000
    
        Move CopyCatchInfo into FunctionLoweringInfo.cpp too, for consistency.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89683 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7ec2051989d074e3ffef5fe74fed228d61b5f428
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 23 18:04:58 2009 +0000
    
        Rename SelectionDAGLowering to SelectionDAGBuilder, and rename
        SelectionDAGBuild.cpp to SelectionDAGBuilder.cpp.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89681 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0c1d2c1183243d983ee60d2fa0e30c1aef4981f4
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Mon Nov 23 17:48:17 2009 +0000
    
        Make it clear that the index bit(s) of Vector Get Lane and Vector Set Lane
        should be left unspecified now that Bob Wilson has fixed PR5470.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89676 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 78105759c1482cdea69c13d7bedcd1867c40efad
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 23 17:46:23 2009 +0000
    
        Move RegsForValue to an anonymous namespace, since it is only used
        in this file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89675 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c073abf312a796da1dd5c567aa92affe77f62e1b
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 23 17:42:46 2009 +0000
    
        Move some more code out of SelectionDAGBuild.cpp and into
        FunctionLoweringInfo.cpp.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89674 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dfaea1cb45cae9e1ca8dd6b3398ab39dbab074d3
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Mon Nov 23 17:34:12 2009 +0000
    
        Minor itinerary fixes for FP instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89672 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ead611c1b544ed84281bfb255c2677ad58d63494
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Mon Nov 23 17:26:04 2009 +0000
    
        Update CMake file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89671 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 39a0cdff39ccce6633deb6ca4427a96796fef1b3
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 23 17:16:22 2009 +0000
    
        Move the FunctionLoweringInfo class and some related utility functions out
        of SelectionDAGBuild.h/cpp into its own files, to help separate
        general lowering logic from SelectionDAG-specific lowering logic.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89667 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4abfe40a9e41146760e8fdc2270abf60540b1e90
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 23 17:07:35 2009 +0000
    
        fix comment, thanks all :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89666 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 913c8adf87bf3584d4c9d2fd7ad8beb19e3fcf11
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 23 16:46:41 2009 +0000
    
        use the new isNoAlias method to simplify some code, only do an escaping check if
        we have a non-constant pointer.  Constant pointers can't be local.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89665 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 042c6856f5af58f684d6cf7b692b87be1bf36f0a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 23 16:45:27 2009 +0000
    
        whitespace cleanup, tidying
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89664 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0ed47b7960f240169bcb847c7cde0b52096a1c2c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 23 16:44:43 2009 +0000
    
        speed up BasicAA a bit by implementing a long-standing TODO.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89663 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 03c010bff7008ca11acd4576c058b690d8bc6a03
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 23 16:38:54 2009 +0000
    
        add a helper
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89662 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 68761bbb60a826ebd53c006ee75d90dea55fea88
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 23 16:24:18 2009 +0000
    
        Move FunctionPassManagerImpl's dumpArguments and dumpPasses calls
        out of its run function and into its doInitialization method, so
        that it does the dump once instead of once per function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89660 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4f77c3e8551139a6bf4b6460794a9ef6b312ef52
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 23 16:22:21 2009 +0000
    
        Make ConstantFoldConstantExpression recursively visit the entire
        ConstantExpr, not just the top-level operator. This allows it to
        fold many more constants.
    
        Also, make GlobalOpt call ConstantFoldConstantExpression on
        GlobalVariable initializers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89659 91177308-0d34-0410-b5e6-96231b3b80d8
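
        An illustrative case (hypothetical globals): the top-level add
        cannot be folded by itself, but recursively folding the inner sub
        to 0 lets the whole initializer collapse to 1:

        @g = global [8 x i8] zeroinitializer
        @h = global i64 add (i64 sub (i64 ptrtoint ([8 x i8]* @g to i64),
                                      i64 ptrtoint ([8 x i8]* @g to i64)),
                             i64 1)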
    
    commit 3d97a63506213bf48ad9e942b9f689fb78f5eeb0
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 23 16:13:39 2009 +0000
    
        Fix a use of an invalidated iterator in the case where there are multiple
        adjacent uses of a dead basic block from the same user. This fixes PR5596.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89658 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d301b553c36bae9bdd8d41b768f093760cb3f7e9
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Mon Nov 23 10:49:03 2009 +0000
    
        I forgot to update the prototype for LLVMBuildIntCast when correcting
        the body to not pass the name for the isSigned parameter.  However it
        seems that changing prototypes is a big no-no, so here I revert the
        previous change and pass "true" for isSigned, meaning this always does
        a signed cast, which was the previous behaviour assuming the name was
        not NULL!  Some other C function needs to be introduced for the general
        case of signed or unsigned casts.  This hopefully unbreaks the
        OCaml binding.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89648 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 951cb61e2fc34e1f6c2324b9613985191e87fe49
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Mon Nov 23 04:52:00 2009 +0000
    
        Start catching LLVMContext misuse in the verifier.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89646 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1df6ea033855b9c8ff66bdfc5f479fe8d2afb272
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Mon Nov 23 03:50:44 2009 +0000
    
        Pull LLVMContext out of PromoteMemToReg.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89645 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c11ad4295ebbb1add9f5ab4c406a2c73851341a4
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Mon Nov 23 03:34:29 2009 +0000
    
        Remove LLVMContext and its include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89644 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6e8bc9ccd4e6d8c9a7f6e4f4ae1a664abcb076a5
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Mon Nov 23 03:29:18 2009 +0000
    
        Remove unused LLVMContext.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89642 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d1d0c185a48ffd53cbd5b671008738447d878bc9
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Mon Nov 23 03:26:09 2009 +0000
    
        Remove dead LLVMContext argument.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89641 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ef61f69747de3b32a7edb82ea753091baf605683
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Mon Nov 23 03:17:33 2009 +0000
    
        Reapply r88830 with a bugfix: this transform only applies to icmp eq/ne. This
        fixes part of PR5438.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89639 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b01c8c362988e5a59116d31764300a3abc8982d4
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Mon Nov 23 00:40:39 2009 +0000
    
        CMake: Updated library dependencies.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89637 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 760e1e5f52d218629214db44d60494f612918086
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Mon Nov 23 00:32:42 2009 +0000
    
        CMake: Do not try to install a target before it is defined.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89636 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d59ab13c1166497b768f8c05522f4d7abf117deb
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Mon Nov 23 00:21:43 2009 +0000
    
        CMake: generate targets for tools and examples even when
        LLVM_BUILD_TOOLS or LLVM_BUILD_EXAMPLES are OFF.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89635 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ca55327ca9c480c28037c6a1a19b72e224ca5604
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 22 22:59:26 2009 +0000
    
        FileCheck, PR5239: Try to find the intended match on failures by
        looking for a good nearby fuzzy match.  Frequently the input is
        nearly correct, and just showing the user a nearby sensible match
        is enough to diagnose the problem.
         - The "fuzziness" is pretty simple and arbitrary, but worked on my
           three test cases.  If you encounter problems, or places you think
           FileCheck should have guessed but didn't, please add test cases
           to PR5239.
    
        For example, previously FileCheck would report this:
        --
        t.cpp:21:55: error: expected string not found in input
        // CHECK: define void @_Z2f25f2_s1([[i64_i64_ty]] %a0)
                                                              ^
        <stdin>:19:30: note: scanning from here
        define void @_Z2f15f1_s1(%1) nounwind {
                                     ^
        <stdin>:19:30: note: with variable "i64_i64_ty" equal to "%0"
        --
    
        and now it also reports this:
        --
        <stdin>:27:1: note: possible intended match here
        define void @_Z2f25f2_s1(%0) nounwind {
        ^
        --
    
        which makes it clear that the CHECK just has an extra ' %a0' in it, without
        having to check the input.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89631 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d2e397d1b44bc0eee6ee27bc578e1f54f9bfb599
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 22 22:08:06 2009 +0000
    
        FileCheck: When a string using variable references fails to match, print
        additional information about the current definitions of the variables used in
        the string.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89628 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dd5bbc12814ad7b8c6273e2f0f321e160ee98894
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 22 22:08:00 2009 +0000
    
        SourceMgr: Add ShowLine argument to PrintMessage, to allow suppressing the source line output.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89627 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 716aea013d24f2b814ced60e799a636f4037ce5f
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 22 22:07:50 2009 +0000
    
        Allow '_' in FileCheck variable names; it is nice to have at least
        one separator character.
         - Chris, OK?
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89626 91177308-0d34-0410-b5e6-96231b3b80d8
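
        A minimal sketch (hypothetical test line): underscores allow
        readable multi-word pattern variable names, as in the
        [[i64_i64_ty]] example quoted earlier in this log:

        ; CHECK: define void @f([[i64_i64_ty:%[0-9]+]] %a0)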
    
    commit 587d488cbfbfdba6a5e9cebff4f6b614856fde58
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sun Nov 22 20:14:00 2009 +0000
    
        Add getFrameIndexReference() to TargetRegisterInfo, which allows targets to
        tell debug info which base register to use to reference a frame index on a
        per-index basis. This is useful, for example, in the presence of dynamic
        stack realignment when local variables are indexed via the stack pointer and
        stack-based arguments via the frame pointer.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89620 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit afc3436c0a1089f70f879e539a464a6ac281ba21
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sun Nov 22 20:05:32 2009 +0000
    
        Move default FrameReg val to getFrameIndexReference(). Otherwise, debug info can get bogus values.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89618 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b23f242b6ee5ec88e39ef0d58f7165eed9df7a8c
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sun Nov 22 19:20:36 2009 +0000
    
        80-column cleanup
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89612 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ed9ebc2d93025f3fed6be267258d1f2abbd7f892
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Sun Nov 22 18:28:04 2009 +0000
    
        Teach MachineBasicBlock::updateTerminator() to handle a failing TII->ReverseBranchCondition(Cond) call.
    
        This fixes the MallocBench/cfrac test case regression.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89608 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ecc701d1f6585ff576317af22c2f9363865177bd
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 22 18:27:51 2009 +0000
    
        Update doc re: LLVM_BUILD_EXAMPLES.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89607 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1790623d43a92b20bff15ed6d9a4edd44c44ff32
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 22 18:27:43 2009 +0000
    
        Use ExtractElementInst::Create instead of new; patch by Artur Pietrek!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89606 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1017a6d8e59599dad4c0d4f8d34c0fcf3e5b1cd2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 22 16:16:48 2009 +0000
    
        add fixme for dubious code.  Duncan, what do you think?
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89602 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7a639bf8ac9a22b5ed969ac3530cb931ffaf78cc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 22 16:15:59 2009 +0000
    
        remove a silly condition that doesn't make a lot of sense anymore.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89601 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3fda168358ac665f2756201cfd7598b6367d7f2e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 22 16:05:05 2009 +0000
    
        reduce indentation, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89600 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3b803758388cc969c52e12e968f22f00eecdc1fb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 22 16:01:44 2009 +0000
    
        Remove the AliasAnalysis::getMustAliases method, which is dead.
    
        The hasNoModRefInfoForCalls method isn't worth keeping as a filter
        because basicaa provides mod/ref info and everything chains to it, so
        remove it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89599 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 194cfb2c761c1c7c943527d5e843a070d580fa94
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Sun Nov 22 15:35:28 2009 +0000
    
        Miss two, PR5307.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89596 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c5a7ee56626c6c3b0fa8b4393e3c4cf214a5ea50
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Sun Nov 22 15:18:27 2009 +0000
    
        Convert Thumb2 tests to FileCheck for PR5307.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89595 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f6ddf7961a5dc35af0202a9877a5c03dfc13509f
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Sun Nov 22 15:15:52 2009 +0000
    
        Turns out stuff gets allocated to different registers depending on the subtarget.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89594 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 16f1ecc2ab9b1bda31da4ec0900b1d80bf075dc8
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Sun Nov 22 14:23:33 2009 +0000
    
        Convert ARM tests to FileCheck for PR5307.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89593 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5bbb84f365f9104e8091e45ac2e11513ef0c09f8
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Sun Nov 22 13:16:36 2009 +0000
    
        Convert test to FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89589 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 79b7b8a06d35cf5ea601dbd7d2c2ddd6a832bf70
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Sun Nov 22 13:09:48 2009 +0000
    
        Forgot to alter RUN line when converting to FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89588 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ba92a65cdc5c316f94576e321b19ebf7902420d8
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Sun Nov 22 12:50:05 2009 +0000
    
        Fix bad FileCheck conversions from revision 89584.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89586 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bebe181781a0f6464328ab66838e92814345c318
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Sun Nov 22 11:45:44 2009 +0000
    
        Convert a few tests to FileCheck for PR5307.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89584 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 537fe619ae0e817ea0c88a00e034f0228be147c2
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Sun Nov 22 04:24:42 2009 +0000
    
        Fix whitespace.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89582 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6e9a4bc4dc27a5bb203c14e98ce78df5a8ccd63b
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Sun Nov 22 03:58:57 2009 +0000
    
        Fix PR5470. TableGen handles template arguments by temporarily setting their
        values, resolving references to them, and then removing the definitions.
        If a template argument is set to an undefined value, we need to resolve
        references to that argument to an explicit undefined value.  The current code
        leaves the reference to the template argument as it is, which causes an
        assertion failure later when the definition of the template argument is
        removed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89581 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 00b4eef951eedc4b234dd466069a54430b4d1bb8
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Nov 22 02:38:11 2009 +0000
    
        Remove dead code. While there, also turn a few 'T* ' into 'T *' to match the
        rest of the file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89577 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5902f6d18693f3a787615a98e7d3e36b8888ce62
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sun Nov 22 02:32:29 2009 +0000
    
        Generate more correct debug info for frame indices.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89576 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3aa832bd08262d07a332f8f69359f5d6f8368a08
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Nov 22 01:14:08 2009 +0000
    
        Minor optimization: when doing eq/ne comparisons and the RHS is a constant, swap operands; this will allow us to fold the immediate into the comparison.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89574 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ef4818da6ffd4f013694072800db48b72b7cf9d4
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Nov 22 01:13:54 2009 +0000
    
        Drop unsupported imm operands
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89573 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6e56869809bfe5bcfaa3e6bcf7200a228d239007
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Nov 22 01:13:39 2009 +0000
    
        Use 2-byte alignment for functions; 4 bytes is clearly overkill here.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89572 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 16cbc57dbb8113d45d2b600c549bb43655a1d9d8
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Nov 22 01:12:49 2009 +0000
    
        Use semicolon as assembler comment string
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89571 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eee8617c0f283222d9f2f0d9b24c9ff0b2eb7d9e
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Nov 21 23:34:09 2009 +0000
    
        Revert 89562. We're being sneakier than I was giving us credit for, and this
        isn't necessary.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89568 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 652b7433b68fdc0371902b0211be5030d56056be
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Nov 21 23:12:12 2009 +0000
    
        remove trailing whitespace
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89567 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 38a5cd273053ce399fc980317f25fc5b33e4a73a
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Sat Nov 21 22:44:20 2009 +0000
    
        Fix some spelling in comments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89566 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2d18aab483a2b6bb199a9476c1bf4ae2c14a5768
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Sat Nov 21 22:39:27 2009 +0000
    
        Avoid a redundant assertion.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89565 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3af6d702411a66e6d02ebfc566083fb78336d5e3
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Nov 21 21:40:08 2009 +0000
    
        Darwin requires a frame pointer for all non-leaf functions to support correct
        backtraces.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89562 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 67abceccf77faa1bbaab2e22c4d7e173927ee12e
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Nov 21 06:21:52 2009 +0000
    
        Add predicate operand to NEON instructions. Fix lots (but not all) 80 col violations in ARMInstrNEON.td.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89542 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 76fe98917d56fbbc1087c2275d3ab3a8ee4abd2b
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Nov 21 06:20:26 2009 +0000
    
        Allow target to disable if-converting predicable instructions. e.g. NEON instructions under ARM mode.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89541 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c50078e7edda4caf0443225ffde587ae36e1a5ad
    Author: Devang Patel <dpatel at apple.com>
    Date:   Sat Nov 21 02:48:08 2009 +0000
    
        Cosmetic changes, which were long overdue, in DwarfDebug.cpp.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89537 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1432b62b04c33ad89f4249ef73f2e73740db18c7
    Author: Devang Patel <dpatel at apple.com>
    Date:   Sat Nov 21 02:46:55 2009 +0000
    
        We are not using DBG_STOPPOINT anymore.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89536 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c591f900f958babf2e8043084a60f91d4b043836
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Nov 21 02:32:35 2009 +0000
    
        Maintain stylistic consistency.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89535 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e8aaa272391a7f5aae92ecf3f64bd94365a57f7a
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Sat Nov 21 02:05:31 2009 +0000
    
        Don't leave temporary files in the test directory.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89531 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e79ff41f7b22dc50740c3c87d0e6801e5fae5924
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Sat Nov 21 02:05:21 2009 +0000
    
        Be more clever about calculating live variables through new basic blocks.
    
        When splitting a critical edge, the registers live through the edge are:
    
        - Used in a PHI instruction, or
        - Live out from the predecessor, and
        - Live in to the successor.
    
        This allows the coalescer to eliminate even more phi joins.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89530 91177308-0d34-0410-b5e6-96231b3b80d8
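
        (The rule above, restated as a minimal sketch over plain std::set;
        the pass itself queries LiveVariables rather than building sets.)
        --
        #include <set>

        typedef unsigned Reg;

        // Registers live through the block placed on a split critical edge:
        // anything used by a PHI instruction, plus anything both live out of
        // the predecessor and live in to the successor.
        std::set<Reg> liveThroughSplitEdge(const std::set<Reg> &UsedInPHI,
                                           const std::set<Reg> &LiveOutPred,
                                           const std::set<Reg> &LiveInSucc) {
          std::set<Reg> Live = UsedInPHI;
          for (Reg R : LiveOutPred)
            if (LiveInSucc.count(R))
              Live.insert(R);
          return Live;
        }
        --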
    
    commit 543b8df69c6e1c707e4f16a28665befd53b78da7
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Nov 21 02:01:24 2009 +0000
    
        Allow SmallString to implicitly convert to StringRef.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89529 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4473796bf84fc8f009c250ba74f18c5f54712a48
    Author: Eric Christopher <echristo at apple.com>
    Date:   Sat Nov 21 01:01:30 2009 +0000
    
        Add more optimizations for object size checking, enable handling of
        object size intrinsic and verify return type is correct. Collect various
        code in one place.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89523 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 76c1a107d9e2fb6eec90a3fffc8a39c11a4b0317
    Author: Devang Patel <dpatel at apple.com>
    Date:   Sat Nov 21 00:54:03 2009 +0000
    
        Remove dead code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89522 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fa1ce6a3b1b8d9f6ee587449741239c59f2b107e
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Sat Nov 21 00:53:23 2009 +0000
    
        When generating a vector the really slow way, via loads
        and stores, correctly handle the case where the element
        size is not a valid target type (PPC).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89521 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1233f91cb593d825c7efebcb8b3571a502976f23
    Author: Devang Patel <dpatel at apple.com>
    Date:   Sat Nov 21 00:31:03 2009 +0000
    
        There is no need to use FoldingSet to unique DIEs.
        DIEs are created from MDNodes, which are already uniqued, and DwarfDebug already uses ValueMaps to find and reuse the existing DIE for a given MDNode.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89518 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1718fe60e36c17d72fa7b96dcb381fe8586cab0e
    Author: Viktor Kutuzov <vkutuzov at accesssoftek.com>
    Date:   Sat Nov 21 00:00:02 2009 +0000
    
        Added two SubtargetFeatures::AddFeatures methods, which accept either a comma-separated string or already-parsed command line parameters as input, and refactored some code to use these new methods.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89516 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2737ee18866f4e52cc1cd1411d2273d76794374a
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Fri Nov 20 23:33:54 2009 +0000
    
        Restructure code to allow renaming of multiple-register groups for anti-dep breaking.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89511 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1004f9502ae653e8a4752e606c9136060c218068
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Nov 20 23:31:34 2009 +0000
    
        Enable hoisting load from constant memories.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89510 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ffb50c1d0c2d0c999082f02c746a3e6ae3030fcd
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 20 23:30:32 2009 +0000
    
        Fix a thinko that caused spurious @GOTOFFs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89509 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9b9d2a5f2053a53da30f4bc42f18df77646ca3a9
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 20 23:21:00 2009 +0000
    
        Update for new getBlockAddress signature.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89507 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 885793baaed110c848ccfdfb460bbc91774d8981
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 20 23:18:13 2009 +0000
    
        Target-independent support for TargetFlags on BlockAddress operands,
        and support for blockaddresses in x86-32 PIC mode.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89506 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b02aec51b2ea2206de1d61e7f9abd59b18daac1c
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Fri Nov 20 22:28:42 2009 +0000
    
        Recommitting PALIGNR shift width fixes.
        Thanks to Daniel Dunbar for fixing clang intrinsics:
          http://llvm.org/viewvc/llvm-project?view=rev&revision=89499
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89500 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 56187db10c250ffe39acb633bdf719c8fe66a273
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Fri Nov 20 22:16:40 2009 +0000
    
        Remove an incorrect overaggressive optimization
        (PPC specific).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89496 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a7c408162231566c5eff408a28585063f152ca27
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Fri Nov 20 22:09:28 2009 +0000
    
        Reverting PALIGNR fix until I figure out how this
        broke the Clang testsuite.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89495 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d2dbcba3aae7f29b61b6e778d01936f1e99e94ec
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Fri Nov 20 21:40:28 2009 +0000
    
        Fixed PALIGNR to take 8-bit rotations in all cases.
        Also fixed the corresponding testcase, and the PALIGNR
          intrinsic (tested for correctness with llvm-gcc).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89491 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d90672c248edda0cab8f84db6f9b16fe8fe4b281
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Nov 20 21:37:22 2009 +0000
    
        Do not hold on to a map slot while new entries may be inserted into the map.
        Use ValueMap instead of std::map.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89490 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 197e2fcd5b2b0a4d391208380dbe71dbcee48e04
    Author: David Greene <greened at obbligato.org>
    Date:   Fri Nov 20 21:13:27 2009 +0000
    
        Cleanups.
    
        Make things a little more efficient as suggested by Evan.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89489 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8268f72bb23ff56e4e72260fa1e6575941f46c15
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Nov 20 21:05:37 2009 +0000
    
        There is no need to emit source location info for DW_TAG_pointer_type.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89487 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0379c26c10c7e3090ec806a8333f6d3b5ebf8822
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 20 20:51:18 2009 +0000
    
        Make Loop::getLoopLatch() work on loops which don't have preheaders, as
        it may be used in contexts where preheader insertion may have failed due
        to an indirectbr.
    
        Make LoopSimplify's LoopSimplify::SeparateNestedLoop properly fail in
        the case that it would require splitting an indirectbr edge.
    
        These fix PR5502.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89484 91177308-0d34-0410-b5e6-96231b3b80d8
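
        (Why no preheader is needed for the latch query, as a sketch over a
        hypothetical CFG type; this is not the LoopInfo API.)
        --
        #include <vector>

        struct Block { std::vector<Block *> Preds; };

        // The latch is the unique in-loop predecessor of the header. Edges
        // from outside the loop are simply skipped, so nothing here depends
        // on a preheader having been inserted.
        Block *getLoopLatchSketch(Block *Header, bool (*InLoop)(Block *)) {
          Block *Latch = nullptr;
          for (Block *P : Header->Preds) {
            if (!InLoop(P))
              continue;
            if (Latch)
              return nullptr;  // two backedges: no unique latch
            Latch = P;
          }
          return Latch;
        }
        --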
    
    commit 082f11b74a4bc1ed1434757aa57582a4232e8f29
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 20 20:19:14 2009 +0000
    
        Fix IPSCCP's code for deleting dead blocks to tolerate outstanding
        blockaddress users. This fixes PR5569.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89483 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9d5e4247ca66b21f37c9417c865bd715e93ba960
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Nov 20 20:17:30 2009 +0000
    
        Revert "Add some rough optimizations for checking routines.", it buildeth not.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89482 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 92b73c9b0c67894b541ad7e8aaebb663c73f9864
    Author: Eric Christopher <echristo at apple.com>
    Date:   Fri Nov 20 19:57:37 2009 +0000
    
        Add some rough optimizations for checking routines.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89479 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2f6bfd4a20638089c8d00a40c2f4cd9e8b63e691
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Nov 20 19:57:15 2009 +0000
    
        Remat VLDRD from constpool. Clean up some instruction property specifications.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89478 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ffd15dce5570f5c05754fc4dd7962ffe27faaac6
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Nov 20 19:55:37 2009 +0000
    
        Add option -licm-const-load to hoist all loads from constant memory.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89477 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b3de86391c62a6519caa0d6ea12197326efbb793
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Nov 20 19:37:38 2009 +0000
    
        The verify() call of CPEIsInRange() isn't right for the assertion check of
        constant pool ranges, as CPEIsInRange() makes conservative assumptions about
        the potential alignment changes from branch adjustments. The verification,
        on the other hand, runs after those branch adjustments are made, so the
        effects on alignment are known and already taken into account. The sanity
        check in verify should check the range directly instead.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89473 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 58bb6d922787ef2477a325c7106c95367d034af9
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 20 19:33:16 2009 +0000
    
        Use stripPointerCasts(). Thanks Duncan!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89472 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d34903a350e0bb37ab31d6cb8449b419535bfe8a
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Fri Nov 20 19:32:48 2009 +0000
    
        Remove some old experimental code that is no longer needed. Remove the additional, speculative scheduling pass, as its cost did not translate into a significant performance improvement. Minor tweaks.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89471 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ff47428dd421e3fa20990dcb4a26a1b8b287ac3b
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Fri Nov 20 18:54:59 2009 +0000
    
        More consistent labelling of basic blocks in debug output
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89470 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cd91143935f6fc55ba9af90054f1d6dc4d4fcff3
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 20 17:50:21 2009 +0000
    
        Revert the rule that considers comparisons between two pointers in the
        same object to be a non-capture; Duncan pointed out a way that such
        a comparison could be a capture.
    
        Make the rule that considers a comparison against null more specific,
        and only consider noalias return values compared against null. This
        still supports test/Transforms/GVN/nonescaping-malloc.ll, and is not
        susceptible to the problem Duncan pointed out with noalias arguments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89468 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c8cb0d95e6125d2c62c76e822e3edf09c6891aae
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Fri Nov 20 17:23:17 2009 +0000
    
        Move the handling of CommaSeparated options into ProvideOption.
    
        Makes '--comma-separated val1,val2' mean the same thing as
        '--comma-separated=val1,val2' (that is, 'val1' and 'val2' are not lumped
        together as 'val1,val2'). Also declutters the main loop a bit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89463 91177308-0d34-0410-b5e6-96231b3b80d8
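
        (A sketch of the equivalence being established, with a hypothetical
        helper rather than the cl::opt internals: after this change, both
        spellings hand the option the same split values.)
        --
        #include <string>
        #include <vector>

        // Split "val1,val2,..." into individual values. Doing this in
        // ProvideOption means it happens whether the text came from
        // "--opt val1,val2" or "--opt=val1,val2".
        std::vector<std::string> splitCommaSeparated(const std::string &Arg) {
          std::vector<std::string> Vals;
          std::string::size_type Start = 0, Comma;
          while ((Comma = Arg.find(',', Start)) != std::string::npos) {
            Vals.push_back(Arg.substr(Start, Comma - Start));
            Start = Comma + 1;
          }
          Vals.push_back(Arg.substr(Start));
          return Vals;
        }
        --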
    
    commit e7f89b0b44347e999695b1cafe8ac0d363cb2ba2
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Fri Nov 20 13:19:51 2009 +0000
    
        Fix PR5563, an expensive checks failure when running on
        tests/Transforms/InstCombine/shufflemask-undef.ll.  If
        anyone cares, the use of 2*e here (and the equivalent
        all over the place in instcombine) seems wrong, though
        harmless: it should really be twice the length of the
        input vector.  I think shufflevector used to require
        that the mask have the same length as the input, but I
        don't think that's true any more.  I don't care enough
        about vectors to do anything about this...
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89456 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6888fe797e7b7ee3cd9028cbc8d54aa67a878147
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Fri Nov 20 10:45:10 2009 +0000
    
        Fix PR5558, which was caused by a wrong fix for PR3393 (see commit 63048),
        which was an expensive checks failure due to a bug in the checking.  This
        patch in essence reverts the original fix for PR3393, and refixes it by a
        tweak to the way expensive checking is done.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89454 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit edb2ded475aba1a45d5bc80ded559e9d91bf6f8a
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Fri Nov 20 09:53:25 2009 +0000
    
        Try to work around grep's "Binary file (standard input) matches" complaints seen
        on ppc buildbot.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89452 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1b6aa45f821c6991b616a8deb340171a83a6bc0a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Nov 20 02:52:08 2009 +0000
    
        Fix -march= name for x86-64.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89445 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 40b0a2e6bb0787bc7c36a8fc79910a33e1a58965
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 20 02:51:26 2009 +0000
    
        Fix fast-isel to avoid selecting the return instruction if a
        tail call has been encountered.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89444 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 871fc2c9d35287f0feaeb077f4ce892da9436d06
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Nov 20 02:32:06 2009 +0000
    
        Remove verifySizes() since it's not adding much value.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89443 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ef4c5b2c4971e741cd583f5c351e8328e72e93fa
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Nov 20 02:10:27 2009 +0000
    
        Also CSE non-pic loads from constant pools.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89440 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c0bb0ae7d0154008e836e31d8f8cf77fa2ebb726
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 20 02:03:44 2009 +0000
    
        Add an experimental option to run gep-splitting and no-load GVN
        just before codegen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89439 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7a958d37a1ed34b2160bbdc7371b1d15641acb2d
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 20 01:34:03 2009 +0000
    
        Simplify this code; it's not necessary to check isIdentifiedObject here
        because if the results from getUnderlyingObject match, the values must
        be from the same underlying object, even if we don't know what that
        object is.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89434 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 222f68489f8cf17fa316ad1028c285326d830214
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Fri Nov 20 01:17:03 2009 +0000
    
        Add MachineBasicBlock::getName, and use it in place of getBasicBlock()->getName.
    
        Fix debug code that assumes getBasicBlock never returns NULL.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89428 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b1b77e7c3e5adb80863c97c4d24aa598c5fc1515
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 20 01:09:34 2009 +0000
    
        Teach getSmallConstantTripMultiple about Shl operators.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89426 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 69aaa0084163fa280cdae34786d3dd67695753c9
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Nov 20 00:54:03 2009 +0000
    
        Fix codegen of conditional move of immediates. We were not making use of the immediate forms of cmov instructions at all.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89423 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d7c28c8c5c1de49d0b0413f6ff4fada2b80c1ec0
    Author: Lang Hames <lhames at gmail.com>
    Date:   Fri Nov 20 00:53:30 2009 +0000
    
        Removed references to LiveStacks from Spiller.*. They're no longer needed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89422 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1e927d7b1f0097ad715d2a4375b2bd7e40dd3c89
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 20 00:50:47 2009 +0000
    
        Refine the capture tracking rules for comparisons to be more
        careful about crazy methods of capturing pointers using comparisons.
        Comparisons of identified objects with null in the default address
        space are not captures. And, comparisons of two pointers within the
        same identified object are not captures.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89421 91177308-0d34-0410-b5e6-96231b3b80d8
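
        (The two rules restated as a predicate sketch with invented field
        names; note that the same-object rule is reverted again in r89468,
        above.)
        --
        struct CmpFacts {
          bool PtrIsIdentifiedObject;  // e.g. an alloca or noalias call
          bool ComparedAgainstNull;
          bool InDefaultAddressSpace;
          bool SameIdentifiedObject;   // both operands point into one object
        };

        // Does a pointer comparison capture the pointer?
        bool comparisonCaptures(const CmpFacts &C) {
          if (C.PtrIsIdentifiedObject && C.ComparedAgainstNull &&
              C.InDefaultAddressSpace)
            return false;  // null-compare of an identified object
          if (C.SameIdentifiedObject)
            return false;  // intra-object compare (later reverted, r89468)
          return true;     // conservatively a capture otherwise
        }
        --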
    
    commit 0f2b0e7a5096767455d0dd6591e2085c518c72c2
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 20 00:43:11 2009 +0000
    
        Use isVoidTy().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89419 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2aef1e6b29c5d584bfdb1a1bc1d674f986108771
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Nov 20 00:40:21 2009 +0000
    
        Specify proper arch and triple for 64-bit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89418 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d45acc97e4b6a2854810c0a8076df3907e7be6e7
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Nov 20 00:32:16 2009 +0000
    
        Testcase for r89415.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89417 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0fbd6ad19b836a8138007e82176581755e206c7f
    Author: Eric Christopher <echristo at apple.com>
    Date:   Fri Nov 20 00:21:55 2009 +0000
    
        Update comment to reflect instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89414 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ac1d868496cea420332f07cde2d57cf3d9744eb
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 19 23:53:49 2009 +0000
    
        Refine this to only apply to null in the default address space.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89411 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ef7c365276e64d3ec8f3f8b2b251057186c73bc4
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Nov 19 23:42:58 2009 +0000
    
        Try to fix JITTest.FarCallToKnownFunction on ARM and PPC.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89410 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a5619db5fa424bd31f0fb1f4647ef479585a1aba
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Thu Nov 19 23:21:43 2009 +0000
    
        Use CMAKE_DL_LIBS instead of raw library name. Fixes bug 5536.
    
        Patch by Tobias Grosser!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89406 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 25d6435df004aa2890aee502e67242b158c744e7
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Thu Nov 19 23:12:37 2009 +0000
    
        Fix a couple of problems with maintaining liveness information for antidep breaking.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89404 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ca890b1973ef53d6c7ece027c037f6de1260cb4b
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Nov 19 23:10:28 2009 +0000
    
        When placing constant islands and adjusting for alignment padding, inline
        assembly can confuse things utterly, as it's assumed that instructions in
        inline assembly are 4 bytes wide. For Thumb mode, that's often not true,
        so the calculations for when alignment padding will be present get thrown off,
        ultimately leading to out of range constant pool entry references. Making
        more conservative assumptions that padding may be necessary when inline asm
        is present avoids this situation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89403 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c4f555a862e61afbb08604327f70ced9f9e6b934
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 19 21:57:48 2009 +0000
    
        Extend CaptureTracking to indicate when a value is never stored, even
        if it is not ultimately captured. Teach BasicAliasAnalysis that a
        local object address which does not escape and is never stored does
        not alias with a value resulting from a load.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89398 91177308-0d34-0410-b5e6-96231b3b80d8
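
        (A small source-level illustration of the case this makes precise;
        the example is ours, not from the commit.)
        --
        int f(int *P) {
          int Local = 0;
          int *Q = &Local;  // the address is taken, but only ever used for
          *Q = 5;           // direct loads and stores: it is never stored to
                            // memory, returned, or otherwise escaped,
          int V = *P;       // so this load cannot read from Local, and *P
          return *Q + V;    // can be treated as NoAlias with *Q.
        }
        --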
    
    commit 3327e8265b77a0ddf6d59dfb891cd311c571ac91
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Nov 19 21:45:22 2009 +0000
    
        Refactor cmov selection code out to a separate function. No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89396 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 907856d48f376196e91573fa1d9c5c5e87d86d0b
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 19 21:34:07 2009 +0000
    
        Comparing a pointer with null is not a capture.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89389 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ba2f80467ebfd343f8fdbe28745d92a5dd9447f3
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Thu Nov 19 20:48:14 2009 +0000
    
        Only run this mutex test if threading is enabled.  This
        fixes PR5395.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89385 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f9cd9bd206387e98c2897c47b29fd4d31df2f223
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Nov 19 19:42:16 2009 +0000
    
        Place new basic blocks immediately after their predecessor when splitting
        critical edges in PHIElimination.
    
        This has a huge impact on regalloc performance, and we recover almost all of
        the 10% compile time regression that edge splitting introduced.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89381 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f0e65b8d3493b044f266aa620cead7a1c91b3ccf
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Nov 19 19:21:09 2009 +0000
    
        Reverting the EH table patches.
    
        $ svn merge -c -89279 https://llvm.org/svn/llvm-project/llvm/trunk
        --- Reverse-merging r89279 into '.':
        U    lib/CodeGen/AsmPrinter/DwarfException.cpp
        U    lib/Target/TargetLoweringObjectFile.cpp
        $ svn merge -c -89270 https://llvm.org/svn/llvm-project/llvm/trunk
        --- Reverse-merging r89270 into '.':
        G    lib/CodeGen/AsmPrinter/DwarfException.cpp
        G    lib/Target/TargetLoweringObjectFile.cpp
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89379 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9a5dc8bb0dbc4fab3b9594c9a94edcb82e5f6eac
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Thu Nov 19 19:20:17 2009 +0000
    
        Added NLdStLN which is similar to NLdSt with the exception that op7_4 is not
        fully specified at this level.  Subclasses of NLdStLN can specify selective
        bit(s) for Inst{7-4}, as is done for VLD[234]LN* and VST[234]LN* inside
        ARMInstrNEON.td.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89377 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit af9a21d1a69f3b3ef222674d760309001f4827b7
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Nov 19 19:09:39 2009 +0000
    
        Fix a small bug.
    
        Fix one case we missed to make sure we reserve registers from
        allocation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89376 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3f74e9c0b2a9ca331220ef8d8dd377e46c7a1ee8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 19 19:00:10 2009 +0000
    
        Enable hoisting of loads from constant memory by default. In cases where
        they are lowered to instruction sequences more complex than a simple
        load, such that CodeGen cannot rematerialize them, a reload from a
        spill slot is likely to be cheaper than the complex sequence.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89374 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c6bb9ad2a7a5486e4f375e35d537f1cd45ce3015
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Nov 19 18:53:18 2009 +0000
    
        Use StringRef::min instead of std::min.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89372 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 49f755b9ed789fa5088c1340e5c9d7b026c89263
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Nov 19 18:23:19 2009 +0000
    
        fix typo
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89369 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7cc0fd2839a6b5f01cf103317e7dd9cef1058617
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Nov 19 18:22:16 2009 +0000
    
        TableGen/OptParser: When ordering options, make "sentinel" options appear before
        everything else.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89368 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 766a36de6e28e0113a56aa51ab00268de0203e42
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Thu Nov 19 17:29:36 2009 +0000
    
        Trailing whitespace.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89364 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f6c09a5fec44233d337ac34910f2f3e5df2f9939
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Thu Nov 19 17:29:25 2009 +0000
    
        Make example/Hello compile again.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89363 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3a5f981b1c49eeaa6698480c1b90d6e9382b9568
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 19 16:35:11 2009 +0000
    
        Fix a typo in a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89360 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 86f092e5f20940eae910c4e60608406134a48831
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Nov 19 16:08:04 2009 +0000
    
        cstdlib is not automatically included with StringRef anymore.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89359 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f42d3bbff53b7c8de06a0e23eebaf4a9aac2ff00
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Nov 19 16:04:41 2009 +0000
    
        Reenable Split2 StringRef test with Apple gcc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89357 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c71d1f03e0c46be0eee00263f6b1609fd84bd235
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Nov 19 15:55:49 2009 +0000
    
        Add support for spreading register allocation.
    
        Add a -linearscan-skip-count argument (default to 0) that tells the
        allocator to remember the last N registers it allocated and skip them
        when looking for a register candidate.  This tends to spread out
        register usage and free up post-allocation scheduling at the cost of
        slightly more register pressure.  The primary benefit is the ability
        to backschedule reloads.
    
        This is turned off by default.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89356 91177308-0d34-0410-b5e6-96231b3b80d8
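
        (Roughly, in sketch form; the names are invented and the real logic
        lives in RegAllocLinearScan.)
        --
        #include <deque>
        #include <vector>

        typedef unsigned Reg;

        struct SpreadingPicker {
          unsigned SkipCount;      // -linearscan-skip-count (0 disables this)
          std::deque<Reg> Recent;  // the last SkipCount registers allocated

          // Prefer the first candidate not handed out recently; if every
          // candidate is recent, fall back to the first one. Assumes the
          // candidate list is non-empty.
          Reg pick(const std::vector<Reg> &Candidates) {
            for (Reg R : Candidates) {
              bool RecentlyUsed = false;
              for (Reg Old : Recent)
                RecentlyUsed |= (Old == R);
              if (!RecentlyUsed)
                return remember(R);
            }
            return remember(Candidates.front());
          }

          Reg remember(Reg R) {
            Recent.push_back(R);
            if (Recent.size() > SkipCount)
              Recent.pop_front();
            return R;
          }
        };
        --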
    
    commit dd629ec8b36ef375f35386a190c4c301dda09f44
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Nov 19 15:48:14 2009 +0000
    
        Remove the now obsolete algorithm include from StringRef.h.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89354 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 449529d5cfab41b1899d04273a326065c4420223
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Thu Nov 19 15:39:50 2009 +0000
    
        Work around PR5482: all the gcc versions that I had (4.2.4, 4.3.4,
        4.4.2) were miscompiling StringRef.
        The workaround is to use a local min/max implementation that takes an
        integer param by value, not a reference to an integer param (as std::min does).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89352 91177308-0d34-0410-b5e6-96231b3b80d8
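
        (The workaround pattern, sketched with an invented helper name:
        taking the arguments by value means no reference is ever bound, so
        the miscompiled code path is never reached.)
        --
        // Unlike std::min(const T &, const T &), this binds no references.
        template <typename T>
        static T local_min(T A, T B) {
          return B < A ? B : A;
        }

        // e.g.: size_t N = local_min(ThisLen, OtherLen);
        --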
    
    commit 02cec4c48f3c3daf922ee0d8aae5621c12ea3e81
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Nov 19 12:17:31 2009 +0000
    
        Unbreak x64 MSVC build. Patch by Nicolas Capens!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89341 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5d446e5cc22993e05211b803cea43f09299d7f96
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Thu Nov 19 11:59:00 2009 +0000
    
        Add PS3 Triple class; credit to John Thompson.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89339 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cc7bfb11acd970d4cd4da36ae281065b22be5123
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Nov 19 08:16:50 2009 +0000
    
        80 col violation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89337 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8ea6a674a73ae8fdb75e15d692c90a048d7af1e0
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Nov 19 07:18:49 2009 +0000
    
        Unbreak test, Bruno please check.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89329 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 91fd9e4f6f0cd25369fafa3a81312fba451519cc
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Nov 19 06:57:41 2009 +0000
    
        More consistent thumb1 asm printing.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89328 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8ec49859a25324df48adf6b899d26b679e7a6ab2
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Nov 19 06:32:27 2009 +0000
    
        Shrink ldr / str [sp, imm0-1024] to 16-bit instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89326 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 462517808063f450416dd5693adbc31293e9ba9a
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Nov 19 06:31:26 2009 +0000
    
        Eliminate more * 4 in Thumb1 asm printing for consistency's sake.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89325 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 59b2354e6b6079cc6bd912770ad84a681d3082e0
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Thu Nov 19 06:06:13 2009 +0000
    
        - Add subregister logic to handle f64=(f32,f32).
        - Support mips1-like load/store of doubles:
    
        Instead of:
          sdc $f0, X($3)
        Generate:
          swc $f0, X($3)
          swc $f1, X+4($3)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89322 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 31a8a8c2a44d05f0f6d79f2ff57eb06ed02f6952
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Thu Nov 19 05:28:18 2009 +0000
    
        Only use small sections for non-Linux targets!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89316 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7e105d8389347468aaa57d612d4d20c0168464ec
    Author: Lang Hames <lhames at gmail.com>
    Date:   Thu Nov 19 04:15:33 2009 +0000
    
        Added a new Spiller implementation which wraps LiveIntervals::addIntervalsForSpills.
        All spiller calls in RegAllocLinearScan now go through the new Spiller interface.
        The "-new-spill-framework" command line option has been removed. To use the trivial in-place spiller you should now pass "-spiller=trivial -rewriter=trivial".
        (Note the trivial spiller/rewriter are only meant to serve as examples of the new in-place modification work. Enabling them will yield terrible, though hopefully functional, code).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89311 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4c00995baf288899fa940756099b2e5710702841
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Thu Nov 19 02:25:50 2009 +0000
    
        autoconf config.* claims not to know about the auroraux triple.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89301 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 51d2daf6108cda81a81f3c59e44a836a3f4cc8b7
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Nov 19 02:05:44 2009 +0000
    
        Teach IVUsers to keep things simpler and track loop-invariant strides only
        for uses inside the loop. This works better with LSR. Disabled behind
        -simplify-iv-users while benchmarking.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89299 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a6b28c2ca5ac9fa8b911e5b68fe07ee47bf054ce
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Nov 19 02:03:18 2009 +0000
    
        Eliminate duplicate phi nodes in loops. Loop rotation, for example, can introduce these, and it's beneficial to later passes to clean them up.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89298 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ba8e8e58a471dd3252b5f7e3da42501e2508f17f
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Nov 19 02:02:10 2009 +0000
    
        Make EliminateDuplicatePHINodes() available as a utility function
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89297 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 73fb12e94ad66fee4f8a56049b18113f604cee36
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Nov 19 01:33:57 2009 +0000
    
        Test from Dhrystone to make sure that we're not emitting an aligned load for a
        string that's aligned at 8 bytes instead of 16 bytes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89295 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1b6a195cbe09d455be646027b83b039c7fa6b06d
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Nov 19 00:14:53 2009 +0000
    
        Add TOOLALIAS makefile variable; this defines an alternate name for a program
        which the makefiles will create by symlinking the actual tool to.
         - For use by clang, where we want to make 'clang++' an alias for clang (which
           enables C++ support in the driver)
    
         - Not sure this is the best approach; alternative suggestions welcome!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89282 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 121e71656e195b7fe04fcc6416b69393cf0ef33e
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Nov 19 00:09:14 2009 +0000
    
        The "ReadOnlyWithRel" enum seems to apply more to what Darwin does with the EH
        exception table than DataRel.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89279 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bc0de436deae8197ed48639ff40595e9fa4e55b3
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Nov 19 00:04:43 2009 +0000
    
        Twine: Stores kinds as uchar instead of bitfield to be friendlier to the
        optimizer.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89278 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d7a62cc10ff8a5d40b15d2750b62b1f66e507779
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Nov 18 23:48:57 2009 +0000
    
        There should be no need to keep renumbering blocks during tail duplication.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89275 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ac2a6d3db6062f0f5d4908c62d354981bc75378f
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Nov 18 23:30:38 2009 +0000
    
        Fix buildbots.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89274 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c40a1de1e088f0e0c8d854cf53247e41bfa6075c
    Author: Richard Osborne <richard at xmos.com>
    Date:   Wed Nov 18 23:20:42 2009 +0000
    
        Add XCore support for indirectbr / blockaddress.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89273 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eede1cfd80ead986dea190267a853c00eeff63a1
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Wed Nov 18 23:20:09 2009 +0000
    
        De-bork CMake build
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89272 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit da739ea07a1f130adcc9ad3d51e9c895af374dc6
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Nov 18 23:18:46 2009 +0000
    
        Attempt #2:
    
        Place the EH table in the __TEXT section on MachO. It saves space.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89270 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5a57a12064094766ae271d09fe365c882d48468b
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Nov 18 22:52:37 2009 +0000
    
        Tail duplication still needs to iterate.  Duplicating new instructions onto
        the tail of a block may make that block a new candidate for duplication.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89264 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7fda037ed8e65b25550b1b439af1153b40df0739
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Nov 18 22:12:31 2009 +0000
    
        Add another statistic to measure code size due to tail duplication.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89254 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0359a09551cd3dee071945a565def98fa61b48e3
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Wed Nov 18 22:04:44 2009 +0000
    
        Remove spurious @verbatim.  Patch by Timo Juhani Lindfors!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89252 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d69a01fbd2dcea207cdb20c28de1031556cc4681
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Nov 18 21:54:13 2009 +0000
    
        Not all ASM has # for comments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89250 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 678387e294c678f1cdc602864e5f590ad6fdf65a
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Wed Nov 18 21:33:35 2009 +0000
    
        Fix PR5300.
    
        When TwoAddressInstructionPass deletes a dead instruction, make sure that all
        register kills are accounted for. The 2-addr register does not get special
        treatment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89246 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5e906a43ec271475b7d5c33ea8c354a29e14b92e
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 18 21:29:51 2009 +0000
    
        TableGen: Add initial backend for clang Driver's option parsing.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89245 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 390afed1012b60baf06eac8904fb72af3ccaedaa
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Wed Nov 18 20:36:57 2009 +0000
    
        Allow the machine verifier to be run outside the PassManager.
    
        Verify LiveVariables information when present.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89241 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c272c74658d1f521d0a67ccdf8155d4566380f75
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Wed Nov 18 20:36:47 2009 +0000
    
        Remove the -early-coalescing option
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89240 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9ec5663bc8a70b50a0b7b5e08140577ee1f3957f
    Author: Lang Hames <lhames at gmail.com>
    Date:   Wed Nov 18 20:31:20 2009 +0000
    
        Fixed the in-place spiller and trivial rewriter, which had been broken by the recent SlotIndexes work.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89238 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 07fce117258e85a921db78e5656c4039f48829fc
    Author: Viktor Kutuzov <vkutuzov at accesssoftek.com>
    Date:   Wed Nov 18 20:20:05 2009 +0000
    
        Added a getDefaultSubtargetFeatures method to the SubtargetFeatures class, which returns a correct feature string for a given triple.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89236 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ec816259573936142fdc922b8c8fa98e77a3c998
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Nov 18 19:29:37 2009 +0000
    
        Add statistics for tail duplication.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89225 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eb368aa30f54a1388c341e6f443c9c6f96216fdc
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Wed Nov 18 18:39:57 2009 +0000
    
        Add ARMv6 itineraries.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89218 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ecce4b7c8a9afdf9ca01e9c017871492a406ebfe
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Nov 18 18:10:35 2009 +0000
    
        Fix a few places that were missed when we converted to unified syntax.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89214 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f1a48f57d4a251f639165b4fa8f5b25de598e6c2
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Wed Nov 18 18:01:35 2009 +0000
    
        Don't require LiveVariables for PHIElimination. Enable critical edge splitting
        when LiveVariables is available.
    
        The -split-phi-edges option is now gone, and so is the hack to disable it when using
        the local register allocator. The PHIElimination pass no longer has
        LiveVariables as a prerequisite - that is what broke the local allocator.
        Instead we do critical edge splitting when possible - that is when
        LiveVariables is available.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89213 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 82262041840cecaa4a3d015d124d166a7b87a995
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 18 17:42:22 2009 +0000
    
        Turn LLVM_BUILD_EXAMPLES off by default in CMake builds, to match Makefiles &
        Clang.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89211 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0b3f26dbec6c2ca5a680f7d42795408f3a044bee
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 18 17:42:17 2009 +0000
    
        lit: Fix exclude dirs functionality.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89210 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 64954241b84be8ee24450c4a27e01e836eee2fa2
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Wed Nov 18 05:43:15 2009 +0000
    
        Fix passing of float arguments through ffi.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89198 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 19194a2f4d58b62e438b2c28f4b42966931cd397
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Nov 18 03:34:27 2009 +0000
    
        Add a target hook to allow changing the tail duplication limit based on the
        contents of the block to be duplicated.  Use this for ARM Cortex A8/9 to
        be more aggressive about tail duplicating indirect branches, since it makes it
        much more likely that they will be predicted in the branch target buffer.
        Testcase coming soon.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89187 91177308-0d34-0410-b5e6-96231b3b80d8
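
        A rough sketch of what such a hook amounts to (invented names, not the
        actual target interface): the generic limit becomes a virtual method
        that a target can override based on the block's contents.

        struct TargetTailDupInfo {
          // Blocks ending in an indirect branch get a higher duplication
          // limit, since duplicating them makes branch-target-buffer
          // prediction much more likely to succeed.
          virtual unsigned getTailDuplicateLimit(bool EndsInIndirectBr) const {
            return EndsInIndirectBr ? 4 : 1;
          }
          virtual ~TargetTailDupInfo() {}
        };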
    
    commit c3bd9875c4cffb6514b54839c0220cfeccb5b7a9
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Nov 18 01:03:56 2009 +0000
    
        The llvm-gcc front-end and the pass manager use two separate TargetData objects.
        This is probably not confined to *just* these two things.
    
        Anyway, the llvm-gcc front-end may look up the structure layout information for
        an abstract type. That information will be stored into a table with the FE's
        TD. Instruction combine can come along and also ask for information on that
        abstract type, but for a separate TD (the one associated with the pass manager).
    
        After the type is refined, the old structure layout information in the pass
        manager's TD is out of date. If a new type is allocated in the same space
        as the old, unrefined type, then the structure type information in the pass
        manager's TD will be wrong, and nothing will know it.
    
        Fix this by making the TD's structure type information an abstract type user.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89176 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 10fd2878665f7ae15848e0d5166a5ea873e2bac8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Nov 18 00:58:27 2009 +0000
    
        Simplify ComputeMultiple so that it doesn't depend on TargetData.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89175 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9321a6db532fd0d5437d854e19e8a9fdc1f59290
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Wed Nov 18 00:02:18 2009 +0000
    
        Fix inverted test and add testcase from failing self-host.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89167 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 39a0f07ef82b6cc70ce87c038620921d87297ced
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Nov 17 22:39:08 2009 +0000
    
        Remove dead code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89156 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b064d5efec51117f9819c7ffabb60970549025b1
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Nov 17 21:58:16 2009 +0000
    
        Add ability to set code model within the execution engine builders
        and creation interfaces.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89151 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9007bcee6efaa3d528edd8e47cfeb9a2fe47071a
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Nov 17 21:52:40 2009 +0000
    
        Remove fragile test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89150 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e65e2a55c76fd80f137adc6595057cf1a4a4d8fd
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Nov 17 21:37:04 2009 +0000
    
        grammar
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89145 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 35ccbb7056c9c20ecc158d701fb3b00fdd795601
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Nov 17 21:24:11 2009 +0000
    
        Enable ARM jump table adjustment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89143 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 05930c4f222e11e44989c1040e7b9912e662ad86
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Nov 17 21:23:49 2009 +0000
    
        Disable -split-phi-edges to unbreak the buildbots
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89142 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 82ef39981df3f6d75e2ba52517452ff8df4a8415
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Nov 17 20:46:00 2009 +0000
    
        Never call UpdateTerminator() when AnalyzeBranch would fail.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89139 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0a4f0a3bd881d83fed218587dbe0d1beb885b7a3
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Tue Nov 17 20:38:36 2009 +0000
    
        Forgot to commit test fixes
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89138 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aa4af890f3ecfcb21e9a0ce4f21304aec7fa087d
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Tue Nov 17 20:04:59 2009 +0000
    
        Both Darwin as and GNU as violate the ARM docs wrt printing of the addrmode6
        alignment imm (in the same way). Fix asm printing for non-Darwin platforms.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89137 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 18f204f62c038bf35ed4887486bd051d60224825
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 17 19:19:59 2009 +0000
    
        Add a WriteAsOperand for MachineBasicBlock so MachineLoopInfo dump looks sane.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89130 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 71faf6a347d7bc9076951bf2e317058024d80697
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 17 19:19:01 2009 +0000
    
        Fix comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89129 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 226f307c9fd011e31a1b9194017941265ce644d9
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Nov 17 19:15:50 2009 +0000
    
        Enable -split-phi-edges by default, except when -regalloc=local.
    
        The local register allocator doesn't like it when LiveVariables is run.
        We should also disable edge splitting under -O0, but that has to wait a bit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89125 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dc4c7583e32dcab06bcb54fce793230b7c20eead
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Nov 17 19:05:35 2009 +0000
    
        80-column violations
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89123 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ee140c4ce226ab811c551e85a3302a9e0ba5f301
    Author: Viktor Kutuzov <vkutuzov at accesssoftek.com>
    Date:   Tue Nov 17 18:48:27 2009 +0000
    
        Added a getArchNameForAssembler method to the Triple class which returns an OS- and vendor-independent target assembler arch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89122 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5da4d89ecae4eb432275e4c186811bbb2116a35b
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Nov 17 18:30:09 2009 +0000
    
        Remove a special case for tail merging that seems to be both broken and
        unnecessary.  It is broken because the "isIdenticalTo" check should be
        negated.  If that is fixed, this code causes the CodeGen/X86/tail-opts.ll
        test to fail, in the dont_merge_oddly function.  And, I confirmed that the
        regression is real -- the generated code is worse.  As far as I can tell,
        that tail-opts.ll test is checking for what this code is supposed to handle
        and we're doing the right thing anyway.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89121 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a6454d6b3a54455960df969ee7d3cb4725df645a
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 17 18:10:11 2009 +0000
    
        Generalize OptimizeLoopTermCond to optimize more loop-terminating icmps to use the post-increment IV.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89116 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4b167eeef90ff3d7c5195ce6f2b1acb4478d522c
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Nov 17 18:04:15 2009 +0000
    
        Set MadeChange instead of MadeChangeThisIteration.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89114 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c77d9a4bdcb38493ce2b210b0e8e306181608657
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Tue Nov 17 17:57:04 2009 +0000
    
        Revert CPU detection code to return "generic" instead of an empty string in case
        of failure. The x86 target didn't like empty cpu names and broke x86 tests on
        non-x86 buildbots.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89111 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f492c7348325f8f039047ab11cd8a44b46cb9a6e
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Nov 17 17:53:56 2009 +0000
    
        Remove trailing whitespace
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89110 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c9622b2ee6ee8bf9057d5d17bc4b6dc70d8fa874
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Nov 17 17:40:31 2009 +0000
    
        Update a comment, now that tail duplication happens after other branch
        folding optimizations.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89109 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3f6474095adef9b9dc65ef94b481785caa086ea7
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Tue Nov 17 17:17:50 2009 +0000
    
        Set Inst{15-12} (Rd/Rt) to 0b1111 (PC) for BR_JTadd, BR_JTr, and BR_JTm to
        distinguish between them and the more generic instructions (add, mov, and ldr).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89108 91177308-0d34-0410-b5e6-96231b3b80d8
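
        Stamping a fixed value into a bit range like Inst{15-12} is a plain
        mask-and-or; a standalone sketch (helper name invented):

        #include <cstdint>

        // Write Val into bits 15..12 of a 32-bit instruction word, e.g.
        // setRdField(Inst, 0xF) stamps 0b1111 (PC) into Rd/Rt.
        uint32_t setRdField(uint32_t Inst, uint32_t Val) {
          return (Inst & ~(0xFu << 12)) | ((Val & 0xFu) << 12);
        }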
    
    commit 9cc7b872e96371d21aeff515d9f94e6c11955a07
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Nov 17 17:06:18 2009 +0000
    
        Perform tail duplication only once, after tail merging is complete.
        It was too difficult to keep the heuristics for merging and duplication
        consistent.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89105 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 47d60dc7125c5c8d2fc0e752b2448792db668751
    Author: Nuno Lopes <nunoplopes at sapo.pt>
    Date:   Tue Nov 17 15:35:39 2009 +0000
    
        add Case() with 5 args
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89099 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aeb0ef3dba79ca51c55331915e22ad6a3d2e9531
    Author: Jay Foad <jay.foad at gmail.com>
    Date:   Tue Nov 17 13:13:59 2009 +0000
    
        Fix HTML formatting.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89093 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bed96f986a22dd0d457fc5054733ec85e7ac9061
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Nov 17 10:54:25 2009 +0000
    
        1.  Allow SCCIterator to work with GraphT types that are constant.
        2.  Allow SCCIterator to work with inverse graphs.
        3.  Fix an incorrect comment in GraphTraits.h (the type in the comment
        was given as GraphType* when it is actually const GraphType &).
        Patch by Patrick Alexander Simmons.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89091 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 18d0288933ba946f480d62c4233028538b8dc7df
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Nov 17 10:20:22 2009 +0000
    
        Make bugpoint pass -load arguments to LLI.  This lets one use bugpoint with
        programs that depend on native shared libraries.  Patch by Timo Lindfors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89087 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 91bbf5ea7a0dd374f65ba1db5a2e868cfa08065c
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 17 09:55:52 2009 +0000
    
        Revert 89021. It's miscompiling the llvm-gcc driver at -O0.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89082 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8e66471c0fac08232fa3464324f2b3e42aec3fcb
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 17 09:51:18 2009 +0000
    
        Re-apply 89011. It's not to blame.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89081 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cff9162abbf0e246c71f29299165c6ee1a6e615a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Nov 17 09:29:59 2009 +0000
    
        "XFAIL" the Split2 StringReft test with Apple gcc, which miscompiles it.
         - I plan on fixing/workarounding this, but until then I'd like the bots to stay
           green.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89077 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 04af0cbb1d29d16253d4e5d3be8dca4b57ce447e
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 17 09:20:28 2009 +0000
    
        Revert 89011. Buildbot thinks it might be breaking stuff.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89076 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 58e64e179fc32f88dc3b1b447c6a2225ed22e55f
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Tue Nov 17 09:17:08 2009 +0000
    
        Remove VISIBILITY_HIDDEN from the classes in this directory. Fixes bug 5507.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89075 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 26459c2bf024b2a7dc92ab376679ebc4abf47e61
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Nov 17 08:34:52 2009 +0000
    
        Following a suggestion of Daniel Dunbar, stop people passing the name
        as the isSigned bool to CreateIntCast by having this resolve to a call
        to a private method, rather than by using a gcc attribute.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89067 91177308-0d34-0410-b5e6-96231b3b80d8
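
        The trick relies on C++ overload resolution: a string literal converts
        to const char* (a pointer conversion) in preference to bool (a boolean
        conversion), so a private overload catches the mistake at compile time.
        A simplified standalone sketch, not the real IRBuilder:

        struct Value;
        struct Type;

        class BuilderSketch {
        public:
          // The intended signature: isSigned is a bool, the name comes last.
          Value *CreateIntCast(Value *V, Type *DestTy, bool isSigned,
                               const char *Name = "");
        private:
          // CreateIntCast(V, Ty, "tmp") resolves here instead of to the bool
          // overload -- and fails to compile because the method is private
          // (and it is deliberately left undefined as a backstop).
          Value *CreateIntCast(Value *V, Type *DestTy, const char *Name);
        };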
    
    commit ee68f45fe77ad7ce3079eb9e8222075dee099f08
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Tue Nov 17 08:11:44 2009 +0000
    
        Revert r88939.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89066 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 928d16d438165dc6459fdde2365d36e64642ed1e
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Tue Nov 17 07:52:09 2009 +0000
    
        Fail less mysteriously; inform the user that their LLVM was not built with
        libffi support and that the interpreter can't call external functions without
        it. Patch by Timo Juhani Lindfors! Fixes PR5466.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89062 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d06c8c5777c8bda4fac5bebc55324fe5fd7bc6dd
    Author: Lang Hames <lhames at gmail.com>
    Date:   Tue Nov 17 07:19:50 2009 +0000
    
        Fixed call to wrong constructor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89059 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6171d2bb7ed78f04b91d05998ceef80c86fdbedd
    Author: Owen Anderson <resistor at mac.com>
    Date:   Tue Nov 17 07:06:10 2009 +0000
    
        Fix a race condition in the Timer class.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89056 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a9cc7bfa480cba73c84b03353fd134735d8021d4
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Nov 17 01:23:53 2009 +0000
    
        Refactor the code that creates the "dot-label" difference. This may be used in
        more than one place. No intended functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89024 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bd75ded296f631b4b59f5461340de1e1a2beae01
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Nov 17 01:21:04 2009 +0000
    
        When moving a block for table jumps, make sure the prior block terminator
        is analyzable so it can be updated. If it's not, be safe and don't move the
        block.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89022 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3bf51602527c76684b69fb21070cdd4ba50edec3
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Nov 17 01:07:22 2009 +0000
    
        Enable -split-phi-edges by default
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89021 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3c1a4c55964f71831f36162320b421ebb82ec56f
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 17 00:55:55 2009 +0000
    
        MOV64rm should be marked isReMaterializable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89019 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7f9c46ffbb2f96175858d78814fa52a3416ee81e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Nov 17 00:47:23 2009 +0000
    
        Remove the optimizations that convert BRCOND and BR_CC into
        unconditional branches or fallthroughs. Instcombine/SimplifyCFG
        should be simplifying branches with known conditions.
    
        This fixes some problems caused by these transformations not
        updating the MachineBasicBlock CFG.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89017 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dadd6cd96d573649490516c5840139af923d9176
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Nov 17 00:47:06 2009 +0000
    
        Remove debug info attached to an instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89016 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e4e00b81b5a07e01722211f03b31a5941d4d731e
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Nov 17 00:43:13 2009 +0000
    
        In GlobalVariable::setInitializer, assert that the initializer has the
        right type.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89014 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1fb3d35636d3bf4700533a22ab2a13f2b98fc9d2
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 17 00:23:22 2009 +0000
    
        A few more instructions that should be marked re-materializable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89011 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5df3462936ef55f159ad4cdc28ddaac77790b862
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Nov 17 00:20:26 2009 +0000
    
        Convert to FileCheck
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89007 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2effb21501a21f1213986797396be850fa3da2eb
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Nov 17 00:03:38 2009 +0000
    
        Convert to FileCheck
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89002 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a5e2ab84cce52357c4b51c35b25cf9b840883a4f
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Nov 17 00:00:33 2009 +0000
    
        Cleanup. Missed removing these when converting. Oops.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89001 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ff43a627bcc5f09f467393f4041d8969efe87949
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Mon Nov 16 23:57:56 2009 +0000
    
        Set Rm bits of BX_RET to 0b1110 (R14); and set condition code bits of BRIND to
        0b1110 (ALways).  This is so that the disassembler decoder can distinguish among
        BX_RET, BRIND, and BXr9.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89000 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a7cbdd85b600e9c24be78eb84de633055226f0dc
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 16 23:49:55 2009 +0000
    
        Fix this test - there don't appear to be any actual Reload Reuses
        in this testcase.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88998 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2d5efd2014e22fb4cf39f54d1832e352a3b585ab
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 16 23:43:42 2009 +0000
    
        Revert r87049, which was the workaround for the regression triggered
        by the recent FixedStackPseudoSourceValue-related changes, now that
        the specific bug that affected it is fixed, in r88954.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88997 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d99c0dfeddad8a81ef891c6bdd51ad5032fb4418
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Mon Nov 16 23:32:30 2009 +0000
    
        Revert the test from r88984. It relies on being able to mmap 16GB of
        address space (though it only uses a small fraction of that), and the
        buildbots disallow that.
    
        Also add a comment to the Makefile's ulimit line warning future
        developers that changing it won't work.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88994 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ead0f2d62cb30219483230c810f5ad05678ff5d
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 16 23:19:29 2009 +0000
    
        Convert to FileCheck
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88991 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 16330ea4da1a4eba625c735d6247450b4afe56cd
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 16 22:49:38 2009 +0000
    
        Initialize the new AsmPrinterFlags field to 0, fixing uses of
        uninitialized memory.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88985 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e233d8a0c6e61da9468080079d7b840a9ee05a72
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Mon Nov 16 22:41:33 2009 +0000
    
        Make X86-64 in the Large model always emit 64-bit calls.
        The large code model is documented at
        http://www.x86-64.org/documentation/abi.pdf and says that calls should
        assume their target doesn't live within the 32-bit pc-relative offset
        that fits in the call instruction.
    
        To do this, we turn off the global-address->target-global-address
        conversion in X86TargetLowering::LowerCall(). The first attempt at
        this broke the lazy JIT because it can separate the movabs(imm->reg)
        from the actual call instruction. The lazy JIT receives the address of
        the movabs as a relocation and needs to record the return address from
        the call; and then when that call happens, it needs to patch the
        movabs with the newly-compiled target. We could thread the call
        instruction into the relocation and record the movabs<->call mapping
        explicitly, but that seems to require at least as much new
        complication in the code generator as this change.
    
        To fix this, we make lazy functions _always_ go through a call
        stub. You'd think we'd only have to force lazy calls through a stub on
        difficult platforms, but that turns out to break indirect calls
        through a function pointer. The right fix for that is to distinguish
        between calls and address-of operations on uncompiled functions, but
        that's complex enough to leave for someone else to do.
    
        Another attempt at this defined a new CALL64i pseudo-instruction,
        which expanded to a 2-instruction sequence in the assembly output and
        was special-cased in the X86CodeEmitter's emitInstruction()
        function. That broke indirect calls in the same way as above.
    
        This patch also removes a hack forcing Darwin to the small code model.
        Without far-call-stubs, the small code model requires things of the
        JITMemoryManager that the DefaultJITMemoryManager can't provide.
    
        Thanks to echristo for lots of testing!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88984 91177308-0d34-0410-b5e6-96231b3b80d8
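
        The underlying constraint is just displacement range: a direct call on
        x86-64 encodes a signed 32-bit pc-relative offset measured from the end
        of the 5-byte instruction. A minimal standalone check:

        #include <cstdint>

        // Can a 'call rel32' at CallSite reach Target? The large code model
        // assumes it cannot, hence the movabs+call sequence described above.
        bool reachableByRel32(uint64_t CallSite, uint64_t Target) {
          int64_t Delta = (int64_t)Target - (int64_t)(CallSite + 5);
          return Delta >= INT32_MIN && Delta <= INT32_MAX;
        }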
    
    commit c577b71a6d398d64f3fdc51ffc607ea2882f2cb5
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Nov 16 22:38:00 2009 +0000
    
        Don't build examples by default; use BUILD_EXAMPLES=1 to build them. The
        only utility of this is testing that we keep the examples up to date, so I
        will just make the buildbots run with this flag.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88979 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dbe77e5158a4729643feac6871dadd08e3f1c4b4
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Nov 16 22:37:52 2009 +0000
    
        Add "Unoptimized" build (NO_DEBUG_SYMBOLS=1 ENABLE_OPTIMIZED=1), for reducing
        disk space, and increasing battery lifetime. :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88978 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7c5bade5afd43e2fefe0287dd19d4874f2eac704
    Author: Eric Christopher <echristo at apple.com>
    Date:   Mon Nov 16 22:34:32 2009 +0000
    
        Fix unused-variable warnings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88977 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9d7cd4ecfc39fc323edaf91e1c17d04feec6b728
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Nov 16 21:56:03 2009 +0000
    
        - Check memoperand alignment instead of checking stack alignment. Most load / store folding instructions are not referencing spill stack slots.
        - Mark MOVUPSrm re-materializable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88974 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4cb32c36830adc6833d7e459e3a8a9f784e235bb
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Nov 16 21:53:40 2009 +0000
    
        Revert r88939.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88973 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b736b5dd0a3cfd23ad96365beac5bf918805f850
    Author: David Greene <greened at obbligato.org>
    Date:   Mon Nov 16 21:52:23 2009 +0000
    
        Fix an expensive-checks error.
    
        The Mask and LHSMask may not be of the same size, so don't do the
        transformation if they're different.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88972 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7fcb07aa007c643df80ffaac78ed860fe7c0ceea
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 16 21:13:22 2009 +0000
    
        Make the pass class name more explicit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88964 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6b4b40736acfba22d0befa1badf2b3f851e53c13
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 16 21:03:58 2009 +0000
    
        make pass name a bit more clear
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88961 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ca78b6fa4a7a87aa8823ecf8d97fbb3f6f7730ac
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 16 20:45:50 2009 +0000
    
        Revert 88957. This file uses CodeGenOpt, which is defined in TargetMachine.h.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88959 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5cd155b41f86ccb3019fde539d43798a4019c3b6
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 16 20:41:12 2009 +0000
    
        Remove an unnecessary #include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88957 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8450a9090b38c0faf1401326dd7ec95358e995cc
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 16 20:40:47 2009 +0000
    
        Sink a #include <map> to where it's actually needed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88956 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8a50988004d9f2614faf8f8de30c7f2f71bffb09
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 16 20:40:06 2009 +0000
    
        Make PseudoSourceValue's classof recognize
        FixedStackPseudoSourceValueVal, to respect this isa relationship.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88954 91177308-0d34-0410-b5e6-96231b3b80d8
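
        The isa relationship follows LLVM's hand-rolled RTTI idiom: a base
        class's classof must accept every value kind that derives from it.
        A simplified sketch with invented names:

        struct SV {
          enum Kind { PseudoSourceValueVal, FixedStackPseudoSourceValueVal };
          Kind K;
          explicit SV(Kind K) : K(K) {}
        };

        struct PSV : SV {
          explicit PSV(Kind K) : SV(K) {}
          // Returning true for the fixed-stack kind too is what makes
          // isa<PSV> hold for fixed-stack values -- the fix described above.
          static bool classof(const SV *V) {
            return V->K == PseudoSourceValueVal ||
                   V->K == FixedStackPseudoSourceValueVal;
          }
        };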
    
    commit 57d7aeffb1f29269aefed5240d6637436676184b
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 16 20:35:59 2009 +0000
    
        Fix a typo in a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88953 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 073faa1a39a976617625078b326a004ac2592b97
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 16 20:04:15 2009 +0000
    
        Convert to FileCheck
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88947 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ae9735e748d6b5095d87744f28b337fb44d4872f
    Author: Lang Hames <lhames at gmail.com>
    Date:   Mon Nov 16 20:03:13 2009 +0000
    
        Added a testcase for PR5495.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88946 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3890bf0cfe614495230a3f3c0aac3a9278f0d844
    Author: Rafael Espindola <rafael.espindola at gmail.com>
    Date:   Mon Nov 16 19:46:55 2009 +0000
    
        Add configure options for specifying where to look for libstdc++.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88943 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fcfac37c9ca325ddd39c82eee1b80133328b9544
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 16 19:46:46 2009 +0000
    
        Convert to FileCheck
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88942 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dceacb2a529c2bcdfd59ed0585684dff24c9054b
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Nov 16 19:33:27 2009 +0000
    
        Fix a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88940 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ac402d8db30d059030a6410a5dbc7bef59c3fa32
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Nov 16 19:20:48 2009 +0000
    
        Add VISIBILITY_HIDDEN marker.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88939 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 896373b0146c9a789b003bb7a15618747db484bb
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 16 18:58:52 2009 +0000
    
        Simplify thumb2 jump table adjustments. Remove unnecessary calculation and
        usage of block sizes and offsets.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88935 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 98f9f85767a7837d985e0dc421ea825f14469378
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 16 18:55:47 2009 +0000
    
        clarify comment
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88933 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f4ae3216ba93bd253044cdcbfa7e7645677de6d7
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Nov 16 18:54:08 2009 +0000
    
        Fix some comments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88932 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c81c2552bd5410dee67cbf3cb220fbac2ed3c2ac
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Nov 16 18:08:46 2009 +0000
    
        Whitespace: be consistent with pointer syntax.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88929 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d904455c24c3f96288e950109730e5f3c6e6009f
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Nov 16 17:56:13 2009 +0000
    
        Clean up whitespace.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88927 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d1225c9a6179c4429b4eb7d937ee40dedf39d3de
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 16 17:24:45 2009 +0000
    
        tbb opt off by default
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88921 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fad80ae623fc6897ea32966208814e9a4c70802e
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 16 17:17:48 2009 +0000
    
        back off for a bit. tracking down weirdness
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88919 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 313cc22c734cfef8794dbfaf8a4ac9040a5cd9f6
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 16 17:10:56 2009 +0000
    
        Analyze has to be before checking the condition, obviously. Properly construct an iterator for prior.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88917 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d826aa969474c97c8522d440d82cfc0710f9ae2c
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Mon Nov 16 16:56:48 2009 +0000
    
        Make ERROR_IF_USED macro work with GCC <= 4.2, Apple GCCs
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88916 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a07e9ea3a1a0136632007d2cf93b70e3ebb4dc97
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Mon Nov 16 15:28:17 2009 +0000
    
        Make sure that if anyone passes a name by accident for the isSigned
        parameter of CreateIntCast then they get an error from the compiler
        (or from the linker with a non-gcc compiler).  Another possibility
        is to flip the order of the DestTy and isSigned parameters, since you
        should then get a compiler warning if you try to use a char* for a
        Type*.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88913 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2e7b993b008be1bcca4cec4803092d876b21595a
    Author: David Greene <greened at obbligato.org>
    Date:   Mon Nov 16 15:12:23 2009 +0000
    
        Support spill comments.
    
        Have the asm printer emit a comment if an instruction is a spill or
        reload, and have the spiller mark copies it introduces so the asm printer
        can also annotate those.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88911 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6ac7ca342ff2b023d6521c6b9a90a883b7947e21
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Mon Nov 16 13:15:28 2009 +0000
    
        BuildIntCast takes an additional parameter, isSigned.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88910 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9a0789a0d7e1648791c921e87abae3012969fbf4
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Mon Nov 16 12:32:28 2009 +0000
    
        CreateIntCast takes an "isSigned" parameter.  Pass "true" for it, rather than
        a name.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88908 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 871746fa4d93eed8573c219360241a4f1841f5a2
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Nov 16 07:10:36 2009 +0000
    
        Special case FixedStackPseudoSourceValueVal as well. Do we really need to differentiate PseudoSourceValueVal from FixedStackPseudoSourceValueVal at this level?
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88902 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit edf605d92d7e3529548911797e93397b81d1489e
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Nov 16 06:31:49 2009 +0000
    
        Check if subreg index is zero.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88899 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8a4ccce190f441cf2a178dbe4b74abdbf92221fa
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Nov 16 05:52:06 2009 +0000
    
        For some targets, a copy can use a register multiple times, e.g. ppc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88895 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aedc2e9d64b536d329cd406cc1d6551cce9b70fe
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Nov 16 05:44:04 2009 +0000
    
        xfail for now. It has been failing.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88892 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b3d4e5e6d5b0b8d857c111166e3d0defd860d437
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Mon Nov 16 04:35:29 2009 +0000
    
        Disable ldc1/sdc1 instructions for mips1 targets.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88887 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5ad52d038d3d31c43455d4a072678736bdfdc5f4
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Mon Nov 16 04:33:42 2009 +0000
    
        - Fix a small bug while handling target constant pools (one param was missing).
        - Add a smarter constant pool loading, instead of:
    
        lui $2, %hi($CPI1_0)
        addiu $2, $2, %lo($CPI1_0)
        lwc1 $f0, 0($2)
    
        Generate:
    
        lui $2, %hi($CPI1_0)
        lwc1 $f0, %lo($CPI1_0)($2)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88886 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 00c4cadea398906831e85a9444a2803c7483661d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 16 03:51:42 2009 +0000
    
        typo spotted by duncan.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88884 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8d0591c3acd4208faff96f90ddfba096e6c20dc3
    Author: Lang Hames <lhames at gmail.com>
    Date:   Mon Nov 16 02:07:31 2009 +0000
    
        Fixes the bug exposed by Anton's test case in PR 5495:
        Make sure that when ProcessImplicitDefs removes a copy which kills its source
        reg, it removes the copy from said reg's Kills list.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88881 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 248a854e1e7ba8c2ab6595e24948dabed45c280e
    Author: Lang Hames <lhames at gmail.com>
    Date:   Mon Nov 16 02:00:09 2009 +0000
    
        Fix for the original bug in PR5495 - Look at uses as well as defs when determining the PHI-copy insert point.
    
        - Patch by Andrew Canis!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88880 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 860fb5bb3ded49293473d043b5206742c14b3476
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sun Nov 15 21:45:34 2009 +0000
    
        Detect the need for stack autoalignment earlier to catch spills more
        conservatively. Adjust the eliminateFrameIndex() machinery to handle
        addressing mode 6 (vld1/vst1) used for spills. Fix tests to expect
        aligned Q-reg spilling.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88874 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4b93798c0fd881d607fba728f7c138f8def59315
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sun Nov 15 21:05:07 2009 +0000
    
        set the def of the VLD1q64 properly
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88873 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0c7aff7f75b0277e755005715910cc54058ee8e2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 15 20:03:53 2009 +0000
    
        disable copying, enforce some invariants.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88870 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c7178235fa11abb93ec377646da59cd105b97793
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 15 20:02:12 2009 +0000
    
        teach LVI to infer edge information from switch instructions.
        This allows JT to eliminate a ton of infeasible edges when
        handling code like the templates in PatternMatch.h
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88869 91177308-0d34-0410-b5e6-96231b3b80d8
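
        Concretely, each switch edge pins down the switched-on value, which is
        what lets jump threading prune successors. In source terms:

        int classify(int X) {
          switch (X) {
          case 0:  return 10;  // on this edge X == 0
          case 1:  return 20;  // here X == 1
          default: break;      // here X != 0 && X != 1
          }
          // A test of X == 1 down here is provably never true, so the
          // corresponding edge is infeasible and can be eliminated.
          return X == 1 ? 99 : 30;
        }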
    
    commit 37d31f5b76f2a9175d52c3e05dc032ee142a395e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 15 20:01:24 2009 +0000
    
        fix a logic error that would cause LVI-JT to miscompile
        some conditionals
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88868 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 84c81704932e9a6ec43dccd9f7337ddf9b586f2a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 15 20:00:52 2009 +0000
    
        implement the first stab at caching queries.  This isn't correct
        (because the invalidation logic is missing) but LVI isn't enabled
        by default anyway.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88867 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 56b7f9e99d3085fbe9f56f000c3b8cb8c0d71e8f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 15 19:59:49 2009 +0000
    
        refactor a bunch of code forming the new LazyValueInfoCache
        and LVIQuery classes, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88866 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c3e3a5da45bad95d74336408f3bcbf0ef5dfb801
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 15 19:58:31 2009 +0000
    
        make PRE of loads preserve the alignment of the moved load instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88865 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de8ec70490a2344987913332ad11618bb6eb463e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 15 19:57:43 2009 +0000
    
        fix a bug handling 'not x' when x is undef.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88864 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8483451fe05bf3308c3fc1f171f1d95a09e79cec
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 15 19:56:28 2009 +0000
    
        mark getIntrinsicID() 'readonly'.  This allows various classof methods
        (like DbgDeclareInst's) to shrink substantially.  It sucks that we have
        to pull Compiler.h into such a public header, but at least Compiler.h
        doesn't pull anything else in.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88863 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a50e9237b7d8cdeace748653d7f96b9899fe22b7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 15 19:54:31 2009 +0000
    
        add attributes for readnone/readonly functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88862 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3ecb882db35446d5ee649d4d75b5e2de542c8c40
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 15 19:52:43 2009 +0000
    
        add a version of array_pod_sort that takes a custom comparator function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88861 91177308-0d34-0410-b5e6-96231b3b80d8
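
        The comparator follows the familiar qsort contract (negative, zero, or
        positive). A standalone illustration of that contract using plain
        qsort, which is what array_pod_sort builds on:

        #include <cstdlib>

        static int cmpInts(const void *A, const void *B) {
          int L = *static_cast<const int *>(A);
          int R = *static_cast<const int *>(B);
          return (L < R) ? -1 : (L > R) ? 1 : 0;
        }

        int main() {
          int Vals[] = {3, 1, 2};
          qsort(Vals, sizeof(Vals) / sizeof(Vals[0]), sizeof(int), cmpInts);
          return 0;
        }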
    
    commit 78799101733621457222fe31d32291664e8aa350
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Nov 15 17:51:23 2009 +0000
    
        Add a complex missed optimization opportunity I came across while investigating
        bug 5438.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88855 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0e3a24525cce403014d8bf531017cf632f6a1a06
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Sun Nov 15 10:18:17 2009 +0000
    
        Add PSP OS Target to Triple, Credit to Bruno Cardoso Lopes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88849 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4d0c998ab9affd1d2fd66fa94726b3d171ab0303
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 15 08:10:29 2009 +0000
    
        lit: Factor a new OneCommandPerFileTest out of SyntaxCheckTest.
         - Used for running a single fixed command on a directory of files, with the
           option of deriving a temporary input file from the test source.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88844 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eb7d651805dd1b62eaea656e570ba8da682b8ca1
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Nov 15 07:47:32 2009 +0000
    
        Revert r88830 and r88831, which appear to have caused a selfhost buildbot
        some grief. I suspect this patch merely exposed a bug elsewhere.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88841 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d71aeb7ade8a57594e422b669b689a9b7e402ed0
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 15 07:22:58 2009 +0000
    
        Remove duplicate implementation of excludes functionality, and support excluding
        dirnames.
    
        Also, add support for the 'unsupported' config property.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88838 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e0410a11814d5923395775f889a11b11405dd8f1
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Nov 15 06:16:57 2009 +0000
    
        Correct typo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88831 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d8e3063c9c13bad8152942da6c362db2180fb6a0
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Nov 15 05:55:17 2009 +0000
    
        Teach instcombine to look for booleans in wider integers when it encounters a
        zext(icmp). It may be able to optimize that away. This fixes one of the cases
        in PR5438.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88830 91177308-0d34-0410-b5e6-96231b3b80d8
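
        The pattern in source form: a boolean materialized into a wider integer
        and then tested again, which instcombine can now fold back to the
        original compare.

        bool isBig(unsigned X) {
          unsigned B = (X > 100);  // zext(icmp ugt X, 100): an i1 widened to i32
          return B == 1;           // re-testing the widened bool; instcombine
                                   // can see through the zext and use X > 100
        }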
    
    commit ed6d6e7ec31ebd4b2a7cf8117082bd4395e6bd91
    Author: Lang Hames <lhames at gmail.com>
    Date:   Sun Nov 15 04:39:51 2009 +0000
    
        Added an assert to the PBQP allocator to catch infinite cost solutions which might otherwise lead to miscompilations.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88829 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e22eb6b2ec6cb14f8fd750ca0582bb5788da783a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 15 01:02:09 2009 +0000
    
        lit: Add --repeat=N option, for running each test N times.
         - Currently just useful for timing, although it could be extended as one (bad) way to deal with flaky tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88827 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9fa13439112fd8cec6b2a3ad6b642664e5922068
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Nov 14 22:04:42 2009 +0000
    
        Remove bogus corei7 and atom entries, the family was incorrect.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88818 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4823e959d80382898b5dcfb7a67f75ec56781516
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Nov 14 21:57:35 2009 +0000
    
        remove xfail
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88817 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b8ed2bb105e79bbc6f3347fb2f513127d1abedc7
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Nov 14 21:36:19 2009 +0000
    
        Fill out X86 table, although we are missing lots of names for things. We now
        properly detect my Xeon box though.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88814 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d31559cc26ed09e6ea8b7a77d59fa343d9f92795
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Nov 14 21:36:07 2009 +0000
    
        Report the detected host CPU in --version.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88813 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b4b383b2502314ea3868c5646d201ba463974752
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Nov 14 21:33:37 2009 +0000
    
        cleanup.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88812 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 37957d5ea068c53cda4586c3391a66b355cadddf
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Nov 14 20:15:03 2009 +0000
    
        Do not merge jump tables this early. Branch folding will do any necessary
        merges, and until then, it's useful to keep the tables separate for ease
        of manipulation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88806 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1b8810510f177de84116be20460126c81b4e2575
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Nov 14 20:10:18 2009 +0000
    
        Cleanup flow, and only update the jump table we're analyzing when replacing a destination MBB.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88805 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 47387c55bcb1613a65de19780f7fe900253f3c6d
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Nov 14 20:09:13 2009 +0000
    
        Add function to replace a destination MBB in a single jump table
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88804 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c53a0c29398ff20c94d466093405662b50bad47
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Sat Nov 14 19:51:20 2009 +0000
    
        Remove dead variable found by clang++.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88803 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ca574ca06afe9019e0a955b4ebd8555fc3622357
    Author: Richard Osborne <richard at xmos.com>
    Date:   Sat Nov 14 19:33:35 2009 +0000
    
        Add XCore support for arbitrary-sized aggregate returns.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88802 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c28999496ea409dae005b6278b5730b6e421d552
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Nov 14 18:01:41 2009 +0000
    
        Temporarily disable the error - it seems to be too conservative.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88800 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0f5690b2a5307bf3ca135f41ca2a42123026d12e
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Sat Nov 14 16:37:18 2009 +0000
    
        Implement DISABLE_INLINE for MSVC. This required changing its position in all
        forward declarations and patching tblgen to emit it right. Patch by Amine Khaldi!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88798 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 653ec44e67895e6a844808a07cf98bfcecf46da4
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Sat Nov 14 15:15:39 2009 +0000
    
        This test doesn't work on arm either.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88794 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f0bb845e635bd183b9526dff7a60f5c8edfd3392
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Sat Nov 14 14:14:58 2009 +0000
    
        Make NORETURN work with MSVC. MSVC only accepts NORETURN in front of the
        decl, so move it there. GCC accepts it both before and after decls.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88791 91177308-0d34-0410-b5e6-96231b3b80d8
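
        A sketch of the portability macro in question (spelling as named in the
        message above; the point is that MSVC's __declspec must precede the
        declaration, while GCC's attribute may precede or follow it):

        #if defined(_MSC_VER)
        #define NORETURN __declspec(noreturn)
        #elif defined(__GNUC__)
        #define NORETURN __attribute__((noreturn))
        #else
        #define NORETURN
        #endif

        // Placed in front, both compilers are happy:
        NORETURN void fatalError(const char *Msg);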
    
    commit 91c44416fdc2fb7eb5969bd011fcb35862f19125
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Nov 14 10:09:12 2009 +0000
    
        Add llvm::sys::getHostCPUName, for detecting the LLVM name for the host CPU.
         - This is an initial step towards -march=native support in Clang, and towards
           eliminating host dependencies in the targets. See PR5389.
    
         - Patch by Roman Divacky!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88768 91177308-0d34-0410-b5e6-96231b3b80d8
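
        A minimal usage sketch, assuming the header location and std::string
        return type of this era (the System library later became Support):

        #include "llvm/System/Host.h"
        #include <iostream>

        int main() {
          // After r89111 above, this yields "generic" rather than an empty
          // string when detection fails.
          std::string CPU = llvm::sys::getHostCPUName();
          std::cout << CPU << "\n";
          return 0;
        }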
    
    commit 216b9ea902b6a96fa246f8212aa63df272c832b9
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sat Nov 14 07:25:54 2009 +0000
    
        Remove LLVMContext from reassociate. It was threaded through every function but
        ultimately never used.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88763 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5461b52004980be0ec9abac2199abb4371dcfb3f
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Sat Nov 14 07:22:25 2009 +0000
    
        revert 88761 as it breaks builds.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88762 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8bd6558bac94a6e57c6ccb051feed72d7124470a
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Sat Nov 14 06:19:49 2009 +0000
    
        Fix debug info crashes for PIC16.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88761 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b341c078583a302c2a8c8b99fe41d1cb9ee38e5b
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sat Nov 14 06:15:14 2009 +0000
    
        Teach BasicAA that a constant expression can't alias memory provably not
        allocated until runtime (such as an alloca). Patch by Hans Wennborg!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88760 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f177bd4187684d81daae0889083f1a17dbea9616
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Nov 14 03:42:17 2009 +0000
    
        Added getSubRegIndex(A,B) that returns subreg index of A to B. Use it to replace broken code in VirtRegRewriter.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88753 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a88d1acbaefb5416a147dcdd2199bb43ab70983d
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Nov 14 02:55:43 2009 +0000
    
        - Change TargetInstrInfo::reMaterialize to pass in TargetRegisterInfo.
        - If destination is a physical register and it has a subreg index, use the
          sub-register instead.
        This fixes PR5423.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88745 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c8d2665eb0e8faf17ba618c946f542c6d9c35e4a
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Nov 14 02:27:51 2009 +0000
    
        Add an option for running GVN with redundant load processing disabled.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88742 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1196b1407d835a21fc3b9181c821822b4602ef04
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Nov 14 02:11:32 2009 +0000
    
        Add radar number.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88739 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5da2b872edfc21b7483158f064a9f98636302c78
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Nov 14 02:09:09 2009 +0000
    
        Fix PR5412: Fix an inverted check and another missing sub-register check.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88738 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c0c961121927e428d6279512dd45c634aafd1593
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Nov 14 02:06:30 2009 +0000
    
        Enable the tail call optimization when the caller returns undef.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88737 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 770b4d6907dab297589a12d3bf89f926c0b400e9
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Nov 14 01:50:00 2009 +0000
    
        When expanding t2STRDi8 r, r to two stores, add kill markers correctly.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88734 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 257ed14adea832ee52d55613bd9f8ce3d2a8f43d
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Sat Nov 14 00:38:13 2009 +0000
    
        Fix bug in -split-phi-edges.
    
        When splitting an edge after a machine basic block with fall-through, we
        forgot to insert a jump instruction. Fix this by calling updateTerminator() on
        the fall-through block when relevant.
    
        Also be more precise in PHIElimination::isLiveIn.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88728 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4e3a74162079124f41f7b17b28d8c2e96ce76a16
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Sat Nov 14 00:38:06 2009 +0000
    
        Update MachineDominator information
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88727 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0f1c0cea5912f6b4297d4557b5d13f5d1d39f522
    Author: Lang Hames <lhames at gmail.com>
    Date:   Sat Nov 14 00:02:51 2009 +0000
    
        Added an API to the SlotIndexes pass to allow new instructions to be inserted into the numbering.
    
        PreAllocSplitting is now using this API to insert code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88725 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 02406c496c6b5242eeb2d3d623d0cf50193f30e7
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Nov 13 23:16:41 2009 +0000
    
        Fix PR5411. Bug in UpdateKills. A reg def partially defines its super-registers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88719 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a974c30e47670933e452a2eaa35ecacbcedae572
    Author: Eric Christopher <echristo at apple.com>
    Date:   Fri Nov 13 23:08:47 2009 +0000
    
        Remove extraneous commit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88716 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e9d5a80c80238bf6b3fce6507ba8a9568bc5b093
    Author: Eric Christopher <echristo at apple.com>
    Date:   Fri Nov 13 23:00:14 2009 +0000
    
        Print out something, even if it's non-parseable later when we've
        got ghost linkage. It's better than aborting.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88715 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c29f6d0a24c3dada6f50d47f6ccc90293a9840e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 13 22:24:13 2009 +0000
    
        Move the FixedStackPseudoSourceValueVal enum value before InstructionVal
        so that isa<Instruction> doesn't return true for FixedStackPseudoSourceValue
        values. This fixes a variety of problems, including crashes with -debug
        and -print-machineinstrs. Also, add a comment to warn about this.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88711 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ee787d256e654edaad2c585042e60c8cfa761fd3
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Nov 13 21:58:54 2009 +0000
    
        Disable the JITTest.NoStubs test for Darwin PPC. It apparently doesn't implement
        emitFunctionStubAtAddr.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88708 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit abb5622f9edb5b9a82ac2b0b93031a30c5b358e0
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Fri Nov 13 21:56:15 2009 +0000
    
        Fix PHIElimination optimization that uses MBB->getBasicBlock.
    
        The BasicBlock associated with a MachineBasicBlock does not necessarily
        correspond to the code in the MBB.
    
        Don't insert a new IR BasicBlock when splitting critical edges. We are not
        supposed to modify the IR during codegen, and we should be able to do just
        fine with a NULL BB.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88707 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9fea16565f83b155fcd7edc5faa8ef7b1a7dfea1
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Fri Nov 13 21:56:09 2009 +0000
    
        Add MachineFunction::verify() to call the machine code verifier directly.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88706 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c43aee1e131787dcea2d7f32494d80583723e92f
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Fri Nov 13 21:56:01 2009 +0000
    
        The instruction pointer %RIP is a reserved register on x86_64.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88705 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 14b7fce7a45ed35c18811033e25a200a53ad49e1
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Fri Nov 13 21:55:54 2009 +0000
    
        Fix polarity of a CFG check in machine verifier.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88704 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a5d3935ad09cafa5d8fe40f1b65ab398e4e7b0e4
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 13 21:55:31 2009 +0000
    
        Use .data() instead of .c_str() when nul-termination is not needed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88703 91177308-0d34-0410-b5e6-96231b3b80d8
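
        A small sketch of the distinction (write() is POSIX; the point is that
        a (pointer, length) consumer never relies on the trailing NUL that
        c_str() guarantees):

            #include <string>
            #include <unistd.h>

            void emit(int FD, const std::string &S) {
              // The length is passed explicitly, so .data() is enough;
              // .c_str() would merely promise an unneeded NUL terminator.
              write(FD, S.data(), S.size());
            }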
    
    commit 3c0334d78ba363c2ab2edb2ac7a9f4b340764f1b
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Nov 13 21:45:04 2009 +0000
    
        Do not use value handle to wrap MDNode in DIDescriptor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88700 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ca9b04ba02e321977d03e79a5bbd4c95eb23a0d8
    Author: David Greene <greened at obbligato.org>
    Date:   Fri Nov 13 21:34:57 2009 +0000
    
        Move DebugInfo checks into EmitComments and remove them from
        target-specific AsmPrinters.  Not all comments need DebugInfo.
    
        Re-enable the line numbers comment test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88697 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cd6a8e23207f276f915f20cad9f3905a862df5d9
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 13 21:02:15 2009 +0000
    
        When optimizing for size, don't tail-merge unless it's likely to be a
        code-size win, and not when it's only likely to be code-size neutral,
        such as when only a single instruction would be eliminated and a new
        branch would be required.
    
        This fixes rdar://7392894.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88692 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5cec5f651fae320adef8d66650851fa86587a88d
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Nov 13 20:36:40 2009 +0000
    
        Fix PR5410: LiveVariables lost subreg def:
    
        D0<def,dead> = ...
        ...
                     = S0<use, kill>
        S0<def>      = ...
        ...
        D0<def>      =
    
        The first D0 def is correctly marked dead; however, livevariables should have
        added an implicit def of S0 or we end up with a use without a def.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88690 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e6e30350b685bd4c9eddf4a88cb5dac41ee61eb8
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Fri Nov 13 19:52:48 2009 +0000
    
        Allow target to specify regclass for which antideps will only be broken along the critical path.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88682 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e149835a3e9026a8ee4765e81a67d3961e62fc6c
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Fri Nov 13 18:49:59 2009 +0000
    
        Support fp64 immediate zero; this fixes only part of PR5445
        because the testcase triggers one more bug.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88674 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3510e118fbdf09887c4b804f1eba0cb62a2d3493
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 13 18:49:38 2009 +0000
    
        Don't let a noalias difference disrupt the tailcall optimization.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88672 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9c0d846c1608882657e4adf01703a7bc907baf48
    Author: David Greene <greened at obbligato.org>
    Date:   Fri Nov 13 14:42:06 2009 +0000
    
        Remove duplicate APIs and state WRT spill objects.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87106 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ce91e0acd570dd9d2e5ef054cec34e820f1a4f75
    Author: Rafael Espindola <rafael.espindola at gmail.com>
    Date:   Fri Nov 13 04:55:09 2009 +0000
    
        Distinguish "a," from "a". The first one splits into "a" + "" and the second one into
        "a" + 0.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87084 91177308-0d34-0410-b5e6-96231b3b80d8
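
        Concretely, with the vector-producing split (a sketch; KeepEmpty
        defaults to true):

            #include "llvm/ADT/SmallVector.h"
            #include "llvm/ADT/StringRef.h"
            using namespace llvm;

            void demo() {
              SmallVector<StringRef, 4> Parts;
              StringRef("a,").split(Parts, ",");
              // Parts == {"a", ""}: the trailing separator yields an empty piece.
              Parts.clear();
              StringRef("a").split(Parts, ",");
              // Parts == {"a"}: no separator, so a single piece and no "".
            }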
    
    commit 7e78567b46273b04541d5b68e5eb11987b54d8a5
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Nov 13 02:27:33 2009 +0000
    
        Revert r87059 for now. It is failing clang tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87070 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ab9a0687312516700d6aae598e894db4560a48b0
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Nov 13 02:25:26 2009 +0000
    
        Ignore nameless variables.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87069 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3d49596b97b7363bb783adb0d02a22e99bc0b48e
    Author: Rafael Espindola <rafael.espindola at gmail.com>
    Date:   Fri Nov 13 02:18:25 2009 +0000
    
        Switch to SmallVector. Also fix an issue with using unsigned for MaxSplit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87068 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 48fd1e4422178b0f4989d6c06fc8abfb5b9fd12d
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Fri Nov 13 01:45:18 2009 +0000
    
        Adjust isConstantSplat to allow for big-endian targets.
        PPC is such a target; make it work.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87060 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 44b3c3ef7154bcc1a0637b1cf253d14864e96cc3
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Fri Nov 13 01:44:55 2009 +0000
    
        Remove unnecessary llvm.dbg.declare bitcast
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87059 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 76af158707679cc7764fa8b91f1924a0f3976114
    Author: Rafael Espindola <rafael.espindola at gmail.com>
    Date:   Fri Nov 13 01:24:40 2009 +0000
    
        Add a new split method to StringRef that puts the substrings in a vector.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87058 91177308-0d34-0410-b5e6-96231b3b80d8
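
        A sketch of the new method (an out-parameter vector plus optional
        MaxSplit and KeepEmpty arguments):

            #include "llvm/ADT/SmallVector.h"
            #include "llvm/ADT/StringRef.h"
            using namespace llvm;

            void fields() {
              SmallVector<StringRef, 8> F;
              // Each piece aliases the original buffer; nothing is copied.
              StringRef("a,b,c,d").split(F, ",", /*MaxSplit=*/2);
              // F == {"a", "b", "c,d"}: MaxSplit bounds the number of splits.
            }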
    
    commit 08ca9519152539cb987a447473da7bffb0ddada1
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Nov 13 01:19:24 2009 +0000
    
        Block renumbering
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87056 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0b6f987c42f5224cbba161d8f8705a2085548515
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Nov 13 01:17:22 2009 +0000
    
        use lower case for readability
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87054 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ee41cf2347a7798f9ddde2281e72f34b2cc2e916
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Nov 13 01:01:58 2009 +0000
    
        Update test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87049 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 98c70f7c2db8426127d953589a1f3d2490646137
    Author: David Greene <greened at obbligato.org>
    Date:   Fri Nov 13 00:29:53 2009 +0000
    
        Fix a bootstrap failure.
    
        Provide special isLoadFromStackSlotPostFE and isStoreToStackSlotPostFE
        interfaces to explicitly request checking for post-frame ptr elimination
        operands.  This uses a heuristic so it isn't reliable for correctness.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87047 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a5d9ade2f815552c50c991ffed2128f376e00338
    Author: Owen Anderson <resistor at mac.com>
    Date:   Thu Nov 12 23:22:41 2009 +0000
    
        Re-enable this code, since redundant PHIs are now being better nuked.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87042 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5b8472da2572659b4ed6b9c4bef4b71ca2d1626f
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Nov 12 23:13:08 2009 +0000
    
        Simplify code a bit
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87040 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7354a3e048bb8ca2281d38778368bfbbfb05c2b6
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Nov 12 21:59:20 2009 +0000
    
        Refactor code that checks if it's a call to a "nounwind" function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87036 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 17909ebface5a88cea9259ab16689ea4baf7464a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 12 21:58:18 2009 +0000
    
        use isInstructionTriviallyDead, as pointed out by Duncan
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87035 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9b5fbf226159fc889516396ec979f408a182c136
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Nov 12 21:49:55 2009 +0000
    
        Do some cleanups suggested by Chris.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87034 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d6cace3a4708a5e41259f53b254a68f2b9e0ca28
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Nov 12 21:26:11 2009 +0000
    
        StringRef(const char*) should not be used to turn null pointers into empty
        strings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87031 91177308-0d34-0410-b5e6-96231b3b80d8
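
        The pitfall, sketched (after this change the constructor asserts on a
        null pointer instead of silently producing an empty string):

            #include "llvm/ADT/StringRef.h"
            using namespace llvm;

            StringRef fromCString(const char *P) {
              // StringRef(P) with P == 0 now asserts; say what you mean.
              return P ? StringRef(P) : StringRef();
            }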
    
    commit 0fb68ff9424b68ae0d41d9d58372f9075a35b2c1
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Nov 12 21:07:54 2009 +0000
    
        Set the ReloadReuse AsmPrinter flag where appropriate.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87030 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3e820e14feed174af0008f03ae3adc37add9b445
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Nov 12 21:07:02 2009 +0000
    
        Remove my Value.h build fix.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87029 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7cf8534ab3c029dc5d792d8e36357c8028278a67
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Nov 12 21:04:19 2009 +0000
    
        Fix a build error by providing a missing enum value.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87028 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8ba3999a322f98167aab91baf54a4119b1fa518b
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Nov 12 21:00:03 2009 +0000
    
        Make the MachineFunction argument of getFrameRegister const.
    
        This also fixes a build error.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87027 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 138ae5347fabad22459cd18576e44b630859e389
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Nov 12 20:55:29 2009 +0000
    
        Add hasLoadFromStackSlot and hasStoreToStackSlot to return whether a
        machine instruction loads or stores from/to a stack slot.  Unlike
        isLoadFromStackSlot and isStoreToStackSlot, the instruction may be
        something other than a pure load/store (e.g. it may be an arithmetic
        operation with a memory operand).  This helps AsmPrinter determine when
        to print a spill/reload comment.
    
        This is only a hint since we may not be able to figure this out in all
        cases.  As such, it should not be relied upon for correctness.
    
        Implement for X86.  Return false by default for other architectures.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87026 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 99182191f2a0ce322f3576a4211d6b6fcaec5a78
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Nov 12 20:53:56 2009 +0000
    
        Attempt to unbreak LLVM build, David G. please check.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87025 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3824bc8f0c4ba2970f9100e871692e389351fa55
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Nov 12 20:53:43 2009 +0000
    
        Fix -Asserts warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87024 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e32637bcbd717a8233de28566aeaa0b2db1d659b
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Nov 12 20:51:53 2009 +0000
    
        If there's more than one function operand to a call instruction, be conservative
        and don't assume that the call doesn't throw. It would be nice if there were a
        way to determine which is the callee and which is a parameter. In practice,
        the architectures we care about (x86 and ARM) normally have only one operand
        for a call instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87023 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6424ab9738972c0a9d5f588c59645f85782cf68c
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Nov 12 20:49:22 2009 +0000
    
        Add a bool flag to StackObjects telling whether they reference spill
        slots.  The AsmPrinter will use this information to determine whether to
        print a spill/reload comment.
    
        Remove default argument values.  It's too easy to pass a wrong argument
        value when multiple arguments have default values.  Make everything
        explicit to trap bugs early.
    
        Update all targets to adhere to the new interfaces.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87022 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ea862b03de56f3746e10b93b4aaea3f5c781fd21
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Nov 12 20:36:59 2009 +0000
    
        Add compare_lower and equals_lower methods to StringRef. Switch all users of
        StringsEqualNoCase (from StringExtras.h) to it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87020 91177308-0d34-0410-b5e6-96231b3b80d8
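
        A minimal sketch of the new methods (names as of this commit; they
        were renamed to *_insensitive in much later LLVM releases):

            #include "llvm/ADT/StringRef.h"
            using namespace llvm;

            bool isHelpFlag(StringRef Arg) {
              // Case-insensitive equality: matches "-help", "-HELP", "-Help"...
              return Arg.equals_lower("-help");
            }

            // compare_lower is the three-way form:
            //   StringRef("apple").compare_lower("APPLE") == 0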
    
    commit 5b11c7069ffa39e6276b66d9dbf91ecd3844f286
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Nov 12 20:25:07 2009 +0000
    
        Make FixedStackPseudoSourceValue a first-class PseudoSourceValue by
        making it visible to clients and adding LLVM-style cast capability.
        This will be used by AsmPrinter to determine when to emit spill comments
        for an instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87019 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 145fd183554981733e64ccce959b8b4ea9b66ed3
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Nov 12 20:21:09 2009 +0000
    
        Add AsmPrinter comment flags to machine instructions so that AsmPrinter
        can emit extra information in comments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87018 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0ea46c5af1b82dc5af51ced679e665d87129c5ba
    Author: David Greene <greened at obbligato.org>
    Date:   Thu Nov 12 20:13:34 2009 +0000
    
        Add comment flags so AsmPrinter can output additional information when
        emitting comments.  These flags carry semantic information not otherwise
        easily derivable from the IR text.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87016 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1f708fd24678e7f44057dd08643f78dd1541aa5a
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Thu Nov 12 19:08:21 2009 +0000
    
        Rename registers to break output dependencies in addition to anti-dependencies.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87015 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4a7ef8dcbe4c986ba7f4267b56d4779567bc91ff
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Nov 12 19:02:56 2009 +0000
    
        "Attach debug info with llvm instructions" mode was enabled a month ago. Now make it permanent and remove old way of inserting intrinsics to encode debug info for line number and scopes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87014 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 28bc2ec029ce31dc4acc3da2a082c407f8a982ef
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 12 18:36:19 2009 +0000
    
        Mark DBG_LABEL, EH_LABEL, and GC_LABEL as not-duplicable, since
        they really are not duplicable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87009 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c3de6e216c54e25c6275e55797087db22d0da82e
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Nov 12 17:59:45 2009 +0000
    
        Silence a warning on targets with unsigned chars.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87002 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2b0dd3fed34eb77a58c42aee574e171fe5d983df
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Nov 12 17:25:07 2009 +0000
    
        Update TB[BH] layout optimization. Add support for moving the target block
        to directly follow the jump table. Move the layout changes to prior to any
        constant island handling.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86999 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e8ffe253d56acda3d573471d0d97f916ca3f0879
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Nov 12 17:19:09 2009 +0000
    
        Clean up the testcase a bit. Simplify case blocks and adjust the switch instruction to not take an undefined value as input.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86997 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c07cafee23409d04581c2873bcbb58d8f383939a
    Author: Nuno Lopes <nunoplopes at sapo.pt>
    Date:   Thu Nov 12 15:10:33 2009 +0000
    
        fix crash in my previous patch
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86987 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1e03e8568fcb49eadc7d0baf9640d28fb2e0444f
    Author: Nuno Lopes <nunoplopes at sapo.pt>
    Date:   Thu Nov 12 14:53:53 2009 +0000
    
        implement shl, ashr, and lshr methods. shl is not fully implemented as it is quite tricky.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86986 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cbc070fd1e24e88edc865429c356f5f53de020af
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Nov 12 12:35:27 2009 +0000
    
        Fix typo in run line.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86984 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9adc091f450070795e2e1f8f83eead54ca000b55
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Thu Nov 12 09:44:17 2009 +0000
    
        typo
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86980 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 83cd6cbb1c6ad22198d156c6d43b1158eeca4ee8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 12 07:56:08 2009 +0000
    
        implement a nice little efficiency hack in the inliner.  Since we're now
        running IPSCCP early, and we run functionattrs interlaced with the inliner,
        we often (particularly for small or noop functions) completely propagate
        all of the information about a call to its call site in IPSCCP (making a call
        dead) and functionattrs is smart enough to realize that the function is
        readonly (because it is interlaced with inliner).
    
        To improve compile time and make the inliner threshold more accurate, realize
        that we don't have to inline dead readonly function calls.  Instead, just
        delete the call.  This happens all the time for C++ codes, here are some
        counters from opt/llvm-ld counting the number of times calls were deleted vs
        inlined on various apps:
    
        Tramp3d opt:
          5033 inline                - Number of call sites deleted, not inlined
         24596 inline                - Number of functions inlined
        llvm-ld:
          667 inline           - Number of functions deleted because all callers found
          699 inline           - Number of functions inlined
    
        483.xalancbmk opt:
          8096 inline                - Number of call sites deleted, not inlined
         62528 inline                - Number of functions inlined
        llvm-ld:
           217 inline           - Number of allocas merged together
          2158 inline           - Number of functions inlined
    
        471.omnetpp:
          331 inline                - Number of call sites deleted, not inlined
         8981 inline                - Number of functions inlined
        llvm-ld:
          171 inline           - Number of functions deleted because all callers found
          629 inline           - Number of functions inlined
    
        Deleting a call is much faster than inlining it, and is insensitive to the
        size of the callee. :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86975 91177308-0d34-0410-b5e6-96231b3b80d8
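
        The heart of the hack, as a sketch (not the inliner's verbatim code;
        see also the isInstructionTriviallyDead cleanup earlier in this log):

            #include "llvm/Support/CallSite.h"
            #include "llvm/Transforms/Utils/Local.h"
            using namespace llvm;

            // Returns true if the call was deleted rather than considered for
            // inlining: a dead call to a readonly function has no effect, and
            // deleting it is insensitive to the size of the callee.
            bool deleteDeadCall(CallSite CS) {
              if (!isInstructionTriviallyDead(CS.getInstruction()))
                return false;
              CS.getInstruction()->eraseFromParent();
              return true;
            }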
    
    commit cdac8bdcab6df408da67ef9f8a32a6298a8a523c
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Nov 12 07:49:10 2009 +0000
    
        RegScavenger::enterBasicBlock should always reset register state.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86972 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6d70200b12f1e1f28c87f18e8cfcad09490dfb10
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Nov 12 07:35:05 2009 +0000
    
        - Teach LSR to avoid changing cmp iv stride if it will create an immediate that
          cannot be folded into target cmp instruction.
        - Avoid a phase ordering issue where early cmp optimization would prevent the
          later count-to-zero optimization.
        - Add missing checks which could cause LSR to reuse stride that does not have
          users.
        - Fix a bug in count-to-zero optimization code which failed to find the pre-inc
          iv's phi node.
        - Remove, tighten, or loosen some incorrect checks that disabled valid transformations.
        - Quite a bit of code clean up.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86969 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ae466e3c7583e38941203c6de310f4749ccc529e
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Nov 12 07:16:34 2009 +0000
    
        Use table to separate opcode from operands.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86965 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3a2ce50346bc55508a38e13d389bcafab7eb4c29
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Nov 12 07:13:11 2009 +0000
    
        isLegalICmpImmediate should take a signed integer; code clean up.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86964 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a8d851e5c0812d5b3c93b8180b2e85e50411d6b5
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Thu Nov 12 06:48:09 2009 +0000
    
        CMake: Hopefully unbreak the build by mimicking the changes made on the
        other build system for the new C_INCLUDE_DIRS configure option.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86960 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 354924df7e9bffc35f8eccff8152c678f17a9885
    Author: Rafael Espindola <rafael.espindola at gmail.com>
    Date:   Thu Nov 12 05:46:09 2009 +0000
    
        Add the --with-c-include-dirs to llvm's configure.
        The clang patch is next.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86955 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 469f048845803a7e0a71fc315e6962f3978ff466
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Thu Nov 12 05:36:09 2009 +0000
    
        CMake: Pass -lm to check_symbol_exists for detecting several math
        functions like floorf, ceilf, ... Add test for detecting nearbyintf.
    
        This change was prompted by test/Transforms/SimplifyLibCalls/floor.ll
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86954 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 670358cf61e1f7b5c6753dc2fc08b336634dac1e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 12 05:24:05 2009 +0000
    
        use getPredicateOnEdge to fold comparisons through PHI nodes,
        which implements GCC PR18046.  This also gets us 360 more
        jump threads on 176.gcc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86953 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 45dfc9458f8565fd2e9a3974569e6b37e56fa9f5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 12 04:57:13 2009 +0000
    
        various fixes to the lattice transfer functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86952 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e876ad0cacd6c9809e4924d53a0fa8d2c6d8816e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 12 04:37:50 2009 +0000
    
        switch jump threading to use getPredicateOnEdge in one place,
        making the new LVI stuff smart enough to subsume some special
        cases in the old code.  Disable them when LVI is around; the
        testcase still passes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86951 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 90fedb1f17790459fdbb555fe278f05e39bf1dac
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 12 04:36:58 2009 +0000
    
        Add a new getPredicateOnEdge method which returns more rich information for
        constant constraints.  Improve the LVI lattice to include inequality
        constraints.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86950 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 04e67d0e2a2d7bcde1028998b06990c935dd95b2
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Nov 12 03:55:33 2009 +0000
    
        Move the utility function UpdateTerminator() from CodePlacementOpt() into
        MachineBasicBlock so other passes can utilize it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86947 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f7ecb28b62f3fd1539b294e07c6cf3ab9b61f1bb
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Nov 12 03:28:35 2009 +0000
    
        Revert 86857. It's causing consumer-typeset to fail, and there's a better way to do it forthcoming anyway.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86945 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ea644f7a484cc773e729614db11a760b62c857db
    Author: Eric Christopher <echristo at apple.com>
    Date:   Thu Nov 12 03:12:18 2009 +0000
    
        Use stubs when we have them, otherwise use code we already have,
        otherwise create a stub.
    
        Add a test to make sure we don't create extraneous stubs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86941 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1b6922036dab420a5eddf28e887c2e44c6c0f340
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Nov 12 02:52:56 2009 +0000
    
        Add the braces gcc suggested.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86933 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7b3a128b658b346c36d5e212922b432dcb5b3dfa
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Thu Nov 12 02:08:11 2009 +0000
    
        Add CreateNUWAdd and CreateNUWSub to complement the existing CreateNSWAdd and
        CreateNSWSub functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86930 91177308-0d34-0410-b5e6-96231b3b80d8
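
        Sketch of the new builder calls (header path as of this era of LLVM):

            #include "llvm/Support/IRBuilder.h"
            using namespace llvm;

            Value *addThenSub(IRBuilder<> &B, Value *X, Value *Y) {
              // Emits "add nuw" / "sub nuw": the result is undefined if the
              // unsigned computation wraps, mirroring the existing NSW forms.
              Value *S = B.CreateNUWAdd(X, Y, "s");
              return B.CreateNUWSub(S, Y, "d");
            }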
    
    commit d74192e646c8d5fa743cc6b120d4cd5aef19e08a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 12 02:04:17 2009 +0000
    
        should not commit when distracted.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86929 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b53550dee876b4de5472e30c701034694d0e5d24
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 12 01:59:26 2009 +0000
    
        Make the BranchFolderPass class local to BranchFolding.cpp.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86928 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 52ba84cea50c305d9ce2e0371554b246048f2a7b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 12 01:55:20 2009 +0000
    
        We now thread some impossible condition information with LVI.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86927 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7f14744c3d06ad94c6a47e1258e73d724b81a2b6
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 12 01:51:28 2009 +0000
    
        Minor code cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86926 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9a90e1559353cc51cd1acfa8ba7e34e263ae4cd3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 12 01:41:34 2009 +0000
    
        with the new code we can thread non-instruction values.  This
        allows us to handle the test10 testcase.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86924 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 45404958d705d78cc6068356f8ee293b27b15219
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 12 01:37:43 2009 +0000
    
        this argument can be an arbitrary value; it doesn't need to be an instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86923 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2859e1cd581ff16359db8dfa133317715a398fa1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 12 01:29:10 2009 +0000
    
        expose edge information and switch j-t to use it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86920 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2df5c60d9171a173f7bad2e3d53c8c35c85b96d5
    Author: Lang Hames <lhames at gmail.com>
    Date:   Thu Nov 12 01:24:08 2009 +0000
    
        Fixed an iteration condition in PreAllocSplitting. This should fix some miscompilations caused by PreAllocSplitting.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86919 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b5133689def7aab253f03da63975c6b0dbe3169a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 12 01:22:16 2009 +0000
    
        move some stuff into DEBUG's and turn on lazy-value-info for
        the basic.ll testcase.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86918 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bdd18e596e8433c00b9ba325c9ba623ab7f08004
    Author: Eric Christopher <echristo at apple.com>
    Date:   Thu Nov 12 01:06:08 2009 +0000
    
        Fix typo, cleanup whitespace.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86917 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5f1c0f3a937b5034b5cbc59b98de5d1208f94313
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Nov 12 00:50:58 2009 +0000
    
        Do not use StringRef in DebugInfo interface.
        This allows StringRef to skip the controversial if(str) check in the constructor.
        Buildbots, wait for corresponding clang and llvm-gcc FE check-ins!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86914 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 08982a392a0eaba71c37087eb53b071aa29b55e8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 12 00:39:10 2009 +0000
    
        Tail merge at any size when there are two potential blocks and one
        can be made to fall through into the other.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86909 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d225fdf94c2b6eeed4f5511a134750cf6970b377
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Nov 11 23:17:02 2009 +0000
    
        Don't mark a call as potentially throwing if the function it's calling has the
        "nounwind" attribute.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86897 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 60ee2143d4989e3a6897c10f998b697bd8a41cf2
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Wed Nov 11 23:09:33 2009 +0000
    
        A real solution for the first part of PR5445
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86895 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6bc1ebed0e03786c25f380ed0dc397013f17360e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 11 22:48:44 2009 +0000
    
        make LazyValueInfo actually do some stuff.  This isn't very well tested but
        improves strswitch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86889 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2deeebcd534ee7de0ea3cab6bce2dde77dfc59ce
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 11 22:31:38 2009 +0000
    
        pass TD into a SimplifyCmpInst call.  Add another case that
        uses LVI info when -enable-jump-threading-lvi is passed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86886 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e7bfb8796a10f92b4749d8c2510437693a8722e7
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Nov 11 21:57:02 2009 +0000
    
        Promote MergePotentialsElt and SameTailElt to be regular classes
        instead of typedefs for std::pair. This simplifies the type of
        SameTails, which previously was std::vector<std::pair<std::vector<std::pair<unsigned, MachineBasicBlock *> >::iterator, MachineBasicBlock::iterator> >.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86885 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cec9c63631255076665e2adec25fc785e6945e34
    Author: Kenneth Uildriks <kennethuil at gmail.com>
    Date:   Wed Nov 11 19:59:24 2009 +0000
    
        x86 users can now return arbitrary-sized structs.  Structs too large to fit in return registers will be returned through a hidden sret parameter introduced during SelectionDAG construction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86876 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 36f0ea91aca18d364fde01604ebb6b1de21331b0
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Nov 11 19:56:05 2009 +0000
    
        Revert this line of 86871.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86875 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1b4d683a5818182cbb69cd3e4b2e7a7585cdefc3
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Nov 11 19:55:08 2009 +0000
    
        If doesSupportDebugInformation() is false then do not try to emit dwarf debug info.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86874 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 47faa8df5d82bb3a11b334b589cf957ebc1e43a5
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Nov 11 19:49:34 2009 +0000
    
        Check in the changes to this file too.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86873 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6638ea7611f1fa8da067372147aa7bbec6d2e562
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Nov 11 19:48:59 2009 +0000
    
        Add support for tail duplication to BranchFolding, and extend
        tail merging support to handle more cases.
         - Recognize several cases where tail merging is beneficial even when
           the tail size is smaller than the generic threshold.
         - Make use of MachineInstrDesc::isBarrier to help detect
           non-fallthrough blocks.
         - Check for and avoid disrupting fall-through edges in more cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86871 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 409558d2c4f941068e3d00eed5e0242ddeb90fba
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Wed Nov 11 19:31:31 2009 +0000
    
        Fix liveness calculation when splitting critical edges during PHI elimination.
    
        - Edges are split before any phis are eliminated, so the code is SSA.
    
        - Create a proper IR BasicBlock for the split edges.
    
        - LiveVariables::addNewBlock now has same syntax as
          MachineDominatorTree::addNewBlock. Algorithm calculates predecessor live-out
          set rather than successor live-in set.
    
        This feature still causes some miscompilations.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86867 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2c86b6b0413e8ddf58ffea4a9917a81835606569
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Nov 11 19:08:42 2009 +0000
    
        Reenable StackTrace.cpp test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86861 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e1ce565254502ef7472b125abc6918bd67fedcc4
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Nov 11 19:06:06 2009 +0000
    
        Add SetDebugLocation() variant to
        add debug info location to an instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86859 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2621469af6386370b3bf8d41f9a76733b7a4e94b
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Nov 11 19:05:52 2009 +0000
    
        Add TargetLowering::isLegalICmpImmediate. It tells LSR what immediate can be folded into target icmp instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86858 91177308-0d34-0410-b5e6-96231b3b80d8
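
        What a target override might look like (MyTargetLowering and the 8-bit
        range are hypothetical; the signed parameter reflects the follow-up
        signedness fix earlier in this log):

            // MyTargetLowering is a hypothetical TargetLowering subclass.
            bool MyTargetLowering::isLegalICmpImmediate(int64_t Imm) const {
              // Suppose cmp encodes only 8-bit unsigned immediates; anything
              // larger would force LSR to materialize the constant.
              return Imm >= 0 && Imm < 256;
            }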
    
    commit 9c7d61e9f372f1a6daccb4df389520193d58de67
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Nov 11 19:04:24 2009 +0000
    
        Do jump table adjustment before constant island allocation
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86857 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0fc27d2431b70c05018c84432f6bbc13ea28cf91
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Nov 11 18:42:28 2009 +0000
    
        Fix indentation level.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86856 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7a6aa7b82c2ce16ee86ffb5e0ccd3b6007922397
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Nov 11 18:38:14 2009 +0000
    
        Whitespace cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86855 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aa4f4fa4cae85ae252660d03d9af0b4ebe596169
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Nov 11 18:23:17 2009 +0000
    
        Prefix MBB numbers with "BB#" in debug output to make it clear what
        the numbers mean.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86854 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a0045b0723b852027e2a6cf3fe8a0876a7e06280
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Nov 11 18:18:34 2009 +0000
    
        Minor code simplification.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86853 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 333246bfa9cfc484925a0b1487dd316e00a6cf50
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Nov 11 18:14:02 2009 +0000
    
        Fix a copy+pasto in a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86852 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 40685553381a0c38ff0c6c79d191f9df5caec623
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Nov 11 18:11:07 2009 +0000
    
        Set isBarrier = 1 on return instructions, as they are control barriers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86851 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8112b94b215d777db473b38ece871f3f6ed09032
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Nov 11 18:07:16 2009 +0000
    
        Use a tab in INT3's asm string, for consistency.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86850 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4adaab13a9902c5c02c85afc051a6f858e003128
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 11 17:54:02 2009 +0000
    
        another const prop failure.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86848 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d6b0ee1baa2aeaeaf8334d0c4e2b71ca0872712e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 11 17:51:27 2009 +0000
    
        add a note
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86847 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 16d2fcb1d839722c7f4458425eaa029faba019a2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 11 17:37:02 2009 +0000
    
        Reject duplicate case values in a switch, PR5450.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86846 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87825dc679818a7b0a12be3832da6a8ed2358578
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Wed Nov 11 15:34:13 2009 +0000
    
        Don't trivially delete unused calls to llvm.invariant.start.  This allows
        llvm.invariant.start to be used without necessarily being paired with a call
        to llvm.invariant.end.  If you run the entire optimization pipeline then such
        calls are in fact deleted (adce does it), but that's actually a good thing since
        we probably do want them to be zapped late in the game.  There should really be
        an integration test that checks that the llvm.invariant.start call lasts long
        enough that all passes that do interesting things with it get to do their stuff
        before it is deleted.  But since no passes do anything interesting with it yet
        this will have to wait for later.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86840 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 808b980ce8e153f4718b2b9f5210bd9a1aa03e3b
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Nov 11 07:11:02 2009 +0000
    
        Add nounwind.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86814 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 124efd4dc3d0638c2eee94d105b5f2a7a58bf80b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 11 05:56:35 2009 +0000
    
        remove the now dead condprop pass, PR3906.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86810 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 223f53f58f5b3de1b341e25802920048751aaf15
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Wed Nov 11 05:30:02 2009 +0000
    
        Fix JITTest.ModuleDeletion in -Asserts mode (which turns off JITEmitDebugInfo
        by default).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86807 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e1c54d796895b812ca506547359d1fe6a07dc62f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 11 05:25:16 2009 +0000
    
        remove condprop testcases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86804 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c731e6c716e7193f0e3e0da2d5622e4e85c24802
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 11 05:19:11 2009 +0000
    
        Add StringRef::split(StringRef), to complement StringRef::split(char).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86803 91177308-0d34-0410-b5e6-96231b3b80d8
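
        Sketch of the pair-returning form with a multi-character separator:

            #include "llvm/ADT/StringRef.h"
            #include <utility>
            using namespace llvm;

            void parseAssign(StringRef Line) {
              // Splits "key := value" at the first occurrence of " := ".
              std::pair<StringRef, StringRef> KV = Line.split(" := ");
              StringRef Key = KV.first;    // "key"
              StringRef Value = KV.second; // "value"
              (void)Key; (void)Value;
            }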
    
    commit 6051856ac7cb0552bf0d1132f37cbb9b65153ae1
    Author: Rafael Espindola <rafael.espindola at gmail.com>
    Date:   Wed Nov 11 04:10:24 2009 +0000
    
        Remove dead code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86802 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8e51aebc45d335f03d31525f2c41130cd6068c9a
    Author: Sandeep Patel <deeppatel1987 at gmail.com>
    Date:   Wed Nov 11 03:23:46 2009 +0000
    
        Show command-line args and features passed into backend in debug output. Approved by Evan Cheng.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86797 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 046dc77ed41a73b26064791659fed94e7a7ef86a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 11 03:10:03 2009 +0000
    
        Add missing run line. Devang, please check.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86795 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c9f2d2426db8ea833e540eeb26ac3e25f1741e4a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 11 03:09:50 2009 +0000
    
        Fix -Asserts warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86794 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6ae593cd1381ab2f6aff6eb1309a1a7b5270c8c6
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Nov 11 02:47:19 2009 +0000
    
        The TBB and TBH instructions for Thumb2 are really handy for jump tables, but
        can only branch forward. To best take advantage of them, we'd like to adjust
        the basic blocks around a bit when reasonable. This patch puts the basics in place
        to do that, with a super-simple algorithm for backwards jump table targets that
        creates a new branch after the jump table which branches backwards. Real
        heuristics for reordering blocks or other modifications rather than inserting
        branches will follow.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86791 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 42677804895d83660b0ba380227e0ed1dc1b522f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 11 02:08:33 2009 +0000
    
        stub out some LazyValueInfo interfaces, and have JumpThreading
        start using them in a trivial way when -enable-jump-threading-lvi
        is passed.  enable-jump-threading-lvi will be my playground for
        a while.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86789 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit db541c3878dc672baaef7a0f9fa6894b72b3d67a
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Nov 11 01:44:22 2009 +0000
    
        Fix test to work on every platform.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86786 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5198ca6c3aa6c3c4ba06123da6193f856b5e37fa
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Nov 11 01:41:32 2009 +0000
    
        Fix test to work on every platform.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86785 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 01267264dd298b39aaeacb3e97796c87084cd491
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Nov 11 01:41:10 2009 +0000
    
        XFAIL for now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86784 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6c9df2b8d4ebb458bf3bd6088f0b9be26b2abc86
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Nov 11 01:24:59 2009 +0000
    
        Make sure that the exception handling data has the same visibility as the
        function it's generated for.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86779 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ae1135926a1e4e2e3a7f812fad28b66305b271cc
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 11 00:43:14 2009 +0000
    
        Add Triple::str() which returns the contents of the Triple as a string, as a more readable alternative to getTriple().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86773 91177308-0d34-0410-b5e6-96231b3b80d8
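
        (A minimal usage sketch, not from the commit itself; it assumes a
        Triple constructed from a triple string:)

            llvm::Triple T("i386-apple-darwin9");
            const std::string &S = T.str();   // "i386-apple-darwin9"
            // Previously one would have written T.getTriple() for the same data.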
    
    commit ce8986f9627a92ccebb067284b3cf4092d737149
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Nov 11 00:31:36 2009 +0000
    
        Do not assume the first function scope seen represents the current function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86771 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 609f3f8a0e727ee25be1f4ef17ab81b5db72fa24
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 11 00:28:53 2009 +0000
    
        Add From arguments to StringRef search functions, and tweak doxyments.
    
        Also, add unittests for find_first_of and find_first_not_of.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86770 91177308-0d34-0410-b5e6-96231b3b80d8
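
        (A small sketch of the extended interface; it assumes the new From
        parameter defaults to 0 so existing callers are unaffected:)

            llvm::StringRef S("abcabc");
            size_t First  = S.find('b');              // 1
            size_t Second = S.find('b', /*From=*/2);  // 4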
    
    commit 7eef4c72249fd13ce9e766fc463efa6c44d25c0c
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 11 00:28:38 2009 +0000
    
        llvm-gcc/clang don't (won't?) need this hack.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86769 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1a0948830344dae797979c37dfc904cd932a43be
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 11 00:27:54 2009 +0000
    
        oops, didn't mean to commit this, no harm, but add a todo
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86768 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 73c80094c819a5266794496cf39fa4b6a9b39e14
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 11 00:22:30 2009 +0000
    
        Stub out a new lazy value info pass, which will eventually
        vend value constraint information to the optimizer.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86767 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5a0858228d368aaf25e514c8fbb857c0808c22dc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 11 00:21:58 2009 +0000
    
        add a fixme
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86766 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b0585b20d136dd5ac675f7664dcb84d76c04ec26
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 11 00:21:21 2009 +0000
    
        remove redundant forward declaration.  This function is already in
        Analysis/Passes.h
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86765 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 53addbf446f848c0f165e5d2cbaca31fff796b36
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Nov 11 00:18:40 2009 +0000
    
        While creating DbgScopes, do not forget the parent scope.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86763 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b31ea598704a4e5a07fbe3b9d8b2f4b95f713088
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Nov 11 00:00:21 2009 +0000
    
        Block terminator may be a switch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86761 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de578195f23510abf1bb3d29994f4835798ee5d2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 23:54:10 2009 +0000
    
        jump threading now does everything that condprop does.  This passes
        bootstrap on darwin i386.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86758 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8106001c35356931ca296340ca27253d7c25c51a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 23:47:45 2009 +0000
    
        add a note
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86756 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 759f058d1ef755e2bbfa851797e70a2c90bb93d9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 23:40:49 2009 +0000
    
        I did this a week or two ago
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86754 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bad42264107804ba40844000f12d9e0a23ea563b
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Nov 10 23:20:04 2009 +0000
    
        Ignore variable if scope info is not available.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86753 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 263c64f826938b1d5cdd747317a66898a1ad7031
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Nov 10 23:18:33 2009 +0000
    
        Test this on Darwin only.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86752 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8b8c35b2d8a380c7ce06cce6618bb5875f5c5294
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Tue Nov 10 23:16:41 2009 +0000
    
        Emit correct code when making a ConstantPool entry for a vector
        constant whose component type is not a legal type for the target.
        (If the target ConstantPool cannot handle this type either, it has
        an opportunity to merge elements.  In practice any target with
        8-bit bytes must support i8 *as data*).  7320806 (partial).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86751 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 90a0fe3bd6043f897285b967b196f6ab26dfdcae
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Nov 10 23:06:00 2009 +0000
    
        Implement support for debugging inlined functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86748 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d2df0e391758a9c98d063c8e44c0cf1021383a02
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 22:56:15 2009 +0000
    
        in -dot-cfg and -dot-cfg-only, when rendering switch instructions,
        put the switch value in the successor boxes like we put T/F for branches.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86747 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5df5bb90f8aa6a3f5eab335cf87cdb9f928d6f93
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 22:39:16 2009 +0000
    
        implement a TODO by teaching jump threading about "xor x, 1".
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86739 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fe96c5f78e17e7562622fe026d641d31f948de1c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 22:26:15 2009 +0000
    
        move some generally useful functions out of jump threading
        into libanalysis and transformutils.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86735 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7a94dac821f39910b311b206a4b0f23649c826e2
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Nov 10 22:16:57 2009 +0000
    
        Don't mark conditional branch instructions as control barriers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86732 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7562d076e99129679edb7037d7a6a840448be217
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Nov 10 22:14:04 2009 +0000
    
        Modify how the prologue encodes the "move" information for the FDE. GCC
        generates a sequence similar to this:
    
        __Z4funci:
        LFB2:
                mflr r0
        LCFI0:
                stmw r30,-8(r1)
        LCFI1:
                stw r0,8(r1)
        LCFI2:
                stwu r1,-80(r1)
        LCFI3:
                mr r30,r1
        LCFI4:
    
        where LCFI3 and LCFI4 are used by the FDE to indicate what the FP, LR, and other
        things are. We generated something more like this:
    
        Leh_func_begin1:
                mflr r0
                stw r31, 20(r1)
                stw r0, 8(r1)
        Llabel1:
                stwu r1, -80(r1)
        Llabel2:
                mr r31, r1
    
        Note that we are missing the "mr" instruction. This patch makes it more like the
        GCC output.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86729 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e940b1d8d14af6d3006f6fbad95b9ff332469df4
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Nov 10 22:05:35 2009 +0000
    
        Process InlinedAt location info.
        Update InsertDeclare to return the newly inserted llvm.dbg.declare intrinsic.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86727 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1b8b05938fb8d9aacc01f526f010580668e028f5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 22:02:09 2009 +0000
    
        fix a crash in SCCP handling extractvalue of an array, pointed out and
        tracked down by Stephan Reiter!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86726 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit be9cdbf981991737b49e4474701e820bc881cbfd
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Nov 10 22:01:05 2009 +0000
    
        Teach PHIElimination to split critical edges when -split-phi-edges is enabled.
    
        Critical edges leading to a PHI node are split when the PHI source variable is
        live out from the predecessor block. This helps the coalescer eliminate more
        PHI joins.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86725 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4e45f87bcb425c0f18e92f80eac28a2dbc6be555
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Nov 10 22:00:56 2009 +0000
    
        Refactoring: Extract method PHIElimination::isLiveOut().
        Clean up some whitespace.
        No functional changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86724 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c89abd1e17beb5886a27ec0e45615e0d21de7fb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 21:45:09 2009 +0000
    
        improve comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86723 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7ca174c49440e447ac90ce2cc4c70d950db28cb2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 21:40:01 2009 +0000
    
        Make jump threading eliminate blocks that just contain phi nodes,
        debug intrinsics, and an unconditional branch when possible.  This
        reuses the TryToSimplifyUncondBranchFromEmptyBlock function split
        out of simplifycfg.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86722 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8a0ab8392c91e1bae7901cedd6a12ba3aee2a1fd
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 10 21:14:05 2009 +0000
    
        Generalize the lsr code that optimizes loops to count down towards zero.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86715 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fc6409ff5b7c36c6e4fb3c3c03c4766554296a57
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Nov 10 21:02:18 2009 +0000
    
        Optimize test more.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86714 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 053fd3c45f72e248641e349197db27d717c2be6e
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Tue Nov 10 19:53:28 2009 +0000
    
        make this correctly handle redefinition of the malloc function with a different prototype
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86712 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4137d7cdf726c8d94ef376603d909b69037a9e91
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 10 19:48:13 2009 +0000
    
        Change Thumb1 address mode printing: instead of
        [r0, #2 * 4]
        it now prints
        [r0, #8]

        This makes the assembly more uniform with Thumb2, and frankly the scale doesn't add much.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86707 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 979c7ab12dfb8913593c05cb303b947c5b06d1f8
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 10 19:44:56 2009 +0000
    
        Add a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86706 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 260558f8510c33aa746818f43259bcdc74597ebc
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Nov 10 19:36:40 2009 +0000
    
        Add defensive break.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86705 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 95734fc03391268e8791350141b2e4bb4f11bb03
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Nov 10 18:24:37 2009 +0000
    
        Add a monstrous hack to improve X86ISelDAGToDAG compile time.
         - Force NDEBUG on in any Release build. This drops the compile time to ~100s
           from ~600s, in Release mode.
    
         - This may just be a temporary workaround, I don't know the true nature of the
           gcc-4.2 compile time performance problem.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86695 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit df5999a933bdc33038405dded11d5df3f9dd5ff6
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Nov 10 18:21:37 2009 +0000
    
        Fix obvious typo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86694 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bc5d013abcc18fb77376ed402e24d3b8d22252d6
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 17:00:47 2009 +0000
    
        clarify logic.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86689 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7e7b7d7c190786cc8844432d23e0bd0beed8c191
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Tue Nov 10 15:30:33 2009 +0000
    
        CMake: Add Darwin-specific linker flags for building loadable modules
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86684 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b73594ca0d4eff2cc6ec0d9a7fa234b0a31a2bdf
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Nov 10 13:49:50 2009 +0000
    
        Teach DSE to eliminate useless trampolines.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86683 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ebb5cc581b2298046a670bfd273ffc3d0e47ac0f
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Nov 10 09:32:10 2009 +0000
    
        Add brackets to make gcc-4.4 happy.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86681 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bb0ae064da8981a29f17d289d341d6fa981e2976
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Nov 10 09:08:09 2009 +0000
    
        Codegen support for the llvm.invariant/lifetime.start/end intrinsics:
        just throw them away.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86678 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 43af76a34cd64afcaf7579be626b0309a6f1cc0d
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Tue Nov 10 08:32:25 2009 +0000
    
        Update computeArraySize() to use ComputeMultiple() to determine the array
        size associated with a malloc.  Also extend PerformHeapAllocSRoA() to check
        whether the optimized malloc's argument had its highest bit set, so that it
        is safe for ComputeMultiple() to look through sext instructions while
        determining the optimized malloc's array size.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86676 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fc57ab9a76f2ed92b1cc169e1feab30b920e6d87
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Tue Nov 10 08:28:35 2009 +0000
    
        Add ComputeMultiple() analysis function that recursively determines if a Value V is a multiple of unsigned Base
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86675 91177308-0d34-0410-b5e6-96231b3b80d8
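
        (A rough sketch of a call site; the exact signature and the names V,
        Multiple are assumptions based on this description, not a quote of
        the patch:)

            llvm::Value *Multiple = 0;
            bool Known = llvm::ComputeMultiple(V, /*Base=*/8, Multiple,
                                               /*LookThroughSExt=*/true);
            // If Known is true, V is Multiple * 8.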
    
    commit 34d37223ac5283d3e95010a858ffd905d6e79b36
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 07:44:36 2009 +0000
    
        optimize test
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86672 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d001109874b60167b56116a54c0a08bf935cd062
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 07:23:37 2009 +0000
    
        unify the code that determines whether it is a good idea to change the type
        of a computation.  This fixes some infinite loops when dealing with TD that
        has no native types.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86670 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 754b8dd86b9602a6fd2a01ae824d7f6abc3b8da3
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Tue Nov 10 07:00:43 2009 +0000
    
        Simplify.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86668 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f55c62cdb87a0d5bab5dc9a2aff770d6323f5831
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Tue Nov 10 06:46:40 2009 +0000
    
        Reapply r86359, "Teach dead store elimination that certain intrinsics write to
        memory just like a store" with the bug fixed (partial-overwrite.ll is the
        regression test).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86667 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c006fc03f5592d1eaeef4eb090b0056e2420e9a6
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 05:59:26 2009 +0000
    
        refactor TryToSimplifyUncondBranchFromEmptyBlock out of SimplifyCFG.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86666 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dc174b18352c1bb89d7871fb4d0de7998472e60f
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Tue Nov 10 02:45:37 2009 +0000
    
        CMake: Support for building llvm loadable modules.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86656 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 861f9d4ecfe8412da341a88fd38c5e2ba0ac990e
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Nov 10 02:41:27 2009 +0000
    
        lit: Start documentation testing architecture.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86655 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de18cd2603d56c4d99ae5f3659bb133ffa1b07c7
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Nov 10 02:41:17 2009 +0000
    
        lit: Add ExampleTests, for testing lit and demonstrating test suite features.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86654 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 25869df0693d50591c28c86424341ffc64a2522f
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Nov 10 02:40:21 2009 +0000
    
        lit: Fix bug in --show-suites which accidentally overrode the list of tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86653 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1700ffa4eaeb0f11cdd6d8ef6537ec0cf08ffc5a
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Tue Nov 10 02:35:13 2009 +0000
    
        Fix PR5445
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86651 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 70f4e8f2d2740f7233c85726450a253c806ebfff
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 02:04:54 2009 +0000
    
        I misread the parens; not so redundant after all.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86648 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a66ac18f9e84664d748f52a82d350287b6a0b629
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 01:57:31 2009 +0000
    
        make jump threading recursively simplify expressions instead of doing it
        just one level deep.  On the testcase we go from getting this:
    
        F1:                                               ; preds = %T2
          %F = and i1 true, %cond                         ; <i1> [#uses=1]
          br i1 %F, label %X, label %Y
    
        to a fully threaded:
    
        F1:                                               ; preds = %T2
          br label %Y
    
        This change gets us to the point where we're forming (too many) switch
        instructions on Doug's strswitch testcase.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86646 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c0e5b0cbed7a7bb83ef652042d1964d263f3c89f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 01:56:04 2009 +0000
    
        remove some redundant parens.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86645 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 42d0954b5c5a3e29f7e010b48f0f47ea83f6d4a5
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Tue Nov 10 01:45:05 2009 +0000
    
        CMake: Remove unnecessary `unset' which was not supported by old cmake
        releases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86644 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 91f5efc47691a931743775b97103f84879d121e5
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Nov 10 01:37:57 2009 +0000
    
        Remove an unused variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86642 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9b3225207423344cbac9fa367c077055d0cdeaf9
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Nov 10 01:36:20 2009 +0000
    
        Minor code simplification.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86641 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1439df1156885faa7395fba6671731fa12ef6f31
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Nov 10 01:33:08 2009 +0000
    
        Trim a bunch of unneeded code from this testcase.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86640 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e9b70547cbc08570254283673cfe0873fd8c935f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 01:19:06 2009 +0000
    
        don't invalidate PN; a rewrite of this code is in progress anyway.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86639 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2d8a0bbc8ec438abd9c9daa1387ac7e2fa2255c0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 01:08:51 2009 +0000
    
        add a new SimplifyInstruction API, which is like ConstantFoldInstruction,
        except that the result may not be a constant.  Switch jump threading to
        use it so that it gets things like (X & 0) -> 0, which occur when phi preds
        are deleted and the remaining phi pred was a zero.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86637 91177308-0d34-0410-b5e6-96231b3b80d8
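
        (A minimal sketch of the intended call pattern, assuming the API
        parallels ConstantFoldInstruction as described; I and TD are
        placeholder names for the instruction and optional TargetData:)

            // Returns a simpler value if one exists, or null otherwise.
            if (llvm::Value *V = llvm::SimplifyInstruction(I, TD)) {
              I->replaceAllUsesWith(V);
              I->eraseFromParent();
            }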
    
    commit 8154d2e023fe3137363f8bbc9dae2dff7188dccb
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Nov 10 01:02:17 2009 +0000
    
        Fix DenseMap iterator constness.
    
        This patch forbids implicit conversion of DenseMap::const_iterator to
        DenseMap::iterator, which was possible because DenseMapIterator inherited
        (publicly) from DenseMapConstIterator. Conversion in the other direction is
        now allowed, as one would expect.
    
        The template DenseMapConstIterator is removed and the template parameter
        IsConst which specifies whether the iterator is constant is added to
        DenseMapIterator.
    
        Actually, the IsConst parameter is not necessary, since the constness can be
        determined from KeyT, but this is not relevant to the fix and can be
        addressed later.
    
        Patch by Victor Zverovich!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86636 91177308-0d34-0410-b5e6-96231b3b80d8
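
        (A small illustration of the new rules; the int-to-int map here is
        hypothetical:)

            llvm::DenseMap<int, int> M;
            const llvm::DenseMap<int, int> &CM = M;

            // iterator -> const_iterator still converts, as one would expect:
            llvm::DenseMap<int, int>::const_iterator CI = M.begin();
            // const_iterator -> iterator no longer compiles:
            // llvm::DenseMap<int, int>::iterator I = CM.begin();  // error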
    
    commit a3e46f6405c5b7c3284325514b8f573e197d9b4a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 10 00:55:12 2009 +0000
    
        factor simplification logic for AND and OR out to InstSimplify from instcombine.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86635 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2a0ca3b82e4e78705e37f2b3d9092bc9767b8306
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Tue Nov 10 00:48:55 2009 +0000
    
        Fixed to address code review. No functional changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86634 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 80ab6551aafc61c32f496dcce7531a4621be653b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Nov 10 00:43:58 2009 +0000
    
        Fix MemoryBuffer::getSTDIN to *not* return null if stdin is empty; that was a lame API.
    
        Also, Stringrefify some more MemoryBuffer functions, and add two performance FIXMEs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86630 91177308-0d34-0410-b5e6-96231b3b80d8
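
        (A usage sketch under the new contract; the error-handling details
        here are an assumption, not quoted from the patch:)

            llvm::MemoryBuffer *Buf = llvm::MemoryBuffer::getSTDIN();
            if (Buf && Buf->getBufferSize() == 0) {
              // Empty stdin now yields an empty buffer, distinguishable
              // from an actual failure (null).
            }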
    
    commit 526dd742a7832e90f52ac99c24a4370d3d98d347
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Tue Nov 10 00:15:47 2009 +0000
    
        Allow targets to specify register classes whose member registers should not be renamed to break anti-dependencies.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86628 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 54c21357f32f867012ff2556b4a7c9be21ffa759
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 23:55:12 2009 +0000
    
        pull a bunch of logic out of instcombine into instsimplify for compare
        simplification; this handles the foldable fcmp x,x cases among many others.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86627 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 51568464cf57a67ff3d07caae6d47dddc761c614
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 9 23:34:17 2009 +0000
    
        Pass the (optional) TargetData object to ConstantFoldInstOperands
        and ConstantFoldCompareInstOperands.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86626 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 454d7a02c0d2ca8f3a2ce2b6ba64f7f271a9f391
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 23:31:49 2009 +0000
    
        inline a simple function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86625 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a9333562d9551aa8c14a2dc3a463c73a1531dc3c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 23:28:39 2009 +0000
    
        rename SimplifyCompare -> SimplifyCmpInst and split it into
        Simplify[IF]Cmp pieces.  Add some predicates to CmpInst to
        determine whether a predicate is fp or int.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86624 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit af0ec43f7322c579e2abb859bae4cc8075ec8e0a
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 9 23:11:45 2009 +0000
    
        Now that the default is 'enabled,' a separate command line option for ARM is
        not necessary.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86621 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6e15fef0c220e7fd35df09834e3ebf761bc6e1c4
    Author: Mike Stump <mrs at apple.com>
    Date:   Mon Nov 9 23:10:49 2009 +0000
    
        Add testcase for recent checkin.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86620 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 10381be95c87ab8d2ec9ac2acc7a7b7a75b45ecf
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 23:06:58 2009 +0000
    
        fix ConstantFoldCompareInstOperands to take the LHS/RHS as
        individual operands instead of taking a temporary array
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86619 91177308-0d34-0410-b5e6-96231b3b80d8
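
        (A sketch of the new call shape; Pred, LHS, RHS and TD are
        placeholder names, not quoted from the patch:)

            llvm::Constant *Res =
                llvm::ConstantFoldCompareInstOperands(Pred, LHS, RHS, TD);
            // Previously the two operands had to be packed into a
            // temporary Constant* array first.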
    
    commit b806339fc15e8fa03aedbbebefd7674bca853265
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Nov 9 23:05:44 2009 +0000
    
        Add StringSwitch::Cases overloads, for matching multiple strings to a single
        value.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86618 91177308-0d34-0410-b5e6-96231b3b80d8
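
        (A minimal sketch of the new overloads; the enum and the strings are
        hypothetical. StringSwitch lives in llvm/ADT/StringSwitch.h:)

            enum Color { Red, Blue, Unknown };
            Color C = llvm::StringSwitch<Color>(Name)
                          .Cases("red", "crimson", Red)  // several strings, one value
                          .Case("blue", Blue)
                          .Default(Unknown);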
    
    commit a337def3a456f08bc01d114efa1bd7f4146a3150
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 23:00:14 2009 +0000
    
        use instructionsimplify instead of a weak clone of ad-hoc folding stuff.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86616 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a7b16df9f4ff6e92ad9bd9c70ff9e5d5ce062803
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 9 22:59:01 2009 +0000
    
        Update test
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86614 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8f6ee0a82ccfbdab33acd3fc787c04141ec05449
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 22:57:59 2009 +0000
    
        stub out a new libanalysis "instruction simplify" interface that
        takes decimated instructions and applies identities to them.  This
        is pretty minimal at this point, but I plan to pull some instcombine
        logic out into these and similar routines.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86613 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 35294ef681cd8a790a015b5e430e62e04634bcba
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Mon Nov 9 22:34:19 2009 +0000
    
        Remove dlsym stubs, with Nate Begeman's permission.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86606 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8674f03b85cbf65f4e15902a68e089341b9f1743
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 9 22:32:40 2009 +0000
    
        Enable dynamic stack realignment by default.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86604 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2610615be6d53b20c12ff4e1e806b4912e5345fd
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 22:32:36 2009 +0000
    
        stub out a new form of BasicBlock::RemovePredecessorAndSimplify which
        simplifies instruction users of PHIs when the phi is eliminated.  This
        will be moved to transforms/utils after some other refactoring.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86603 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 65d8800c4b8ae965422e75b0f3309191ae23fbc3
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 9 22:32:03 2009 +0000
    
        Set dynamic stack realignment to real values.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86602 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de65e53debdf767545ba31e5bb8f2fd7095f9e8f
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 9 22:28:30 2009 +0000
    
        Remove an unneeded #include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86601 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0dd119b2f1709f252b0ba64f6966bc2077f14ac8
    Author: Mike Stump <mrs at apple.com>
    Date:   Mon Nov 9 22:28:21 2009 +0000
    
        Fix for 64-bit builds.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86600 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5019fe068d1092dbe9276b45b487fa0db337c70a
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Nov 9 21:45:26 2009 +0000
    
        Similar to r86588, but for Darwin this time.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86592 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6fb58e1e57b08abe754ef2de14a01012acaaecc5
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Nov 9 21:20:14 2009 +0000
    
        The jump table was being generated before the end label for exception handling
        was generated. This caused code like this:
    
        ## The asm code for the function
                .section        __TEXT,__const
                .align  2
        lJTI11_0:
        LJTI11_0:
                .long    LBB11_16
                .long    LBB11_4
                .long    LBB11_5
                .long    LBB11_6
                .long    LBB11_7
                .long    LBB11_8
                .long    LBB11_9
                .long    LBB11_10
                .long    LBB11_11
                .long    LBB11_12
                .long    LBB11_13
                .long    LBB11_14
        Leh_func_end11:   ## <---now in the wrong section!
    
        The `Leh_func_end11' would then end up in the wrong section, causing the
        resulting EH frame information to be wrong:
    
        __ZL11CheckRightsjPKcbRbRP6NSData.eh:
            .set    Lset500eh,Leh_frame_end11-Leh_frame_begin11
            .long   Lset500eh  ; Length of Frame Information Entry
        Leh_frame_begin11:
            .long   Leh_frame_begin11-Leh_frame_common
            .long   Leh_func_begin11-.
            .set    Lset501eh,Leh_func_end11-Leh_func_begin11
            .long   Lset501eh                                   ; FDE address range
        `Lset501eh' is now something huge instead of the real value.
    
        The X86 back-end generates the jump table after the EH information is
        emitted. Do the same here.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86588 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 80164f2cd037e165924e023aa3a8b8b9b533e8f7
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 9 19:38:45 2009 +0000
    
        Print "..." instead of all the uninteresting register clobbers on call
        instructions. This makes CodeGen dumps significantly less noisy.
    
        Example before:
          BL <ga:@bar>, %R0<imp-def>, %R1<imp-def,dead>, %R2<imp-def,dead>, %R3<imp-def,dead>, %R12<imp-def,dead>, %LR<imp-def,dead>, %D0<imp-def,dead>, %D1<imp-def,dead>, %D2<imp-def,dead>, %D3<imp-def,dead>, %D4<imp-def,dead>, %D5<imp-def,dead>, %D6<imp-def,dead>, %D7<imp-def,dead>, %D16<imp-def,dead>, %D17<imp-def,dead>, %D18<imp-def,dead>, %D19<imp-def,dead>, %D20<imp-def,dead>, %D21<imp-def,dead>, %D22<imp-def,dead>, %D23<imp-def,dead>, %D24<imp-def,dead>, %D25<imp-def,dead>, %D26<imp-def,dead>, %D27<imp-def,dead>, %D28<imp-def,dead>, %D29<imp-def,dead>, %D30<imp-def,dead>, %D31<imp-def,dead>, %CPSR<imp-def,dead>, %FPSCR<imp-def,dead>
    
        Same example after:
          BL <ga:@bar>, %R0<imp-def>, %R1<imp-def,dead>, %LR<imp-def,dead>, %CPSR<imp-def,dead>, ...
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86583 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 35b0b02d641f0a7035fc325a0cd1dd91b2358be4
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 9 19:29:11 2009 +0000
    
        Default-addressspace null pointers don't alias anything. This allows
        GVN to be more aggressive. Patch by Hans Wennborg! (with a comment added by me)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86582 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1eb5ad3173bac40e253c4bd8346745a0b9d35217
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Mon Nov 9 19:22:17 2009 +0000
    
        Fix dependencies added to model memory aliasing for post-RA scheduling. The dependencies were overly conservative for memory accesses that are known not to alias.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86580 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87ec5ad9f1b09e47d8fd16fdd671787052782566
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 9 19:01:53 2009 +0000
    
        The inbounds keyword isn't relevant to overindexing of
        static array types. Thanks to Duncan for pointing this out!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86576 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0273f182cb12b8627e629784e04b2e05d982eddd
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 9 18:59:22 2009 +0000
    
        Fix a typo in a comment that Duncan noticed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86575 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c668e25ae2f85eb43d2240723bf86e8d2d827bae
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 9 18:40:39 2009 +0000
    
        Remove the "special case" for zero-length arrays, and rephrase this
        paragraph to be more precise.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86572 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2fb4ae554502621f3d5b3945f650e6b5ee66ffc7
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 9 18:28:24 2009 +0000
    
        Generalize LCSSA to handle loops with exits with predecessors outside
        the loop. This is needed because with indirectbr it may not be possible
        for LoopSimplify to guarantee that all loop exit predecessors are
        inside the loop. This fixes PR5437.
    
        LCSSA no longer actually requires LoopSimplify form, but for now it
        must still have the dependency because the PassManager doesn't know
        how to schedule LoopSimplify otherwise.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86569 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit af255824880d1a779f31df72f5a52e1419f773f7
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 9 18:20:38 2009 +0000
    
        Fix an 80-column violation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86567 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fe7d6aaba103e48c15cbf44070b99562039ae29d
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 9 18:19:43 2009 +0000
    
        Minor tidiness fixes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86565 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a39e7acd54fdc09f0b8b3db176a4fbf53c1679a1
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 9 18:18:49 2009 +0000
    
        Constify MachineFunctionAnalysis' TargetMachine reference.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86564 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2e2c064bd56e45608c077950636fff02033475f2
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 9 17:06:51 2009 +0000
    
        Fix a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86558 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7740e0f64d3b91e6579b3595340a993da0abc84e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 9 17:06:23 2009 +0000
    
        Suppress implicit copy ctor and copy assignment for MachineFunction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86557 91177308-0d34-0410-b5e6-96231b3b80d8
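
        (The usual C++98 idiom for this, sketched rather than quoted from
        the patch: declare the members private and leave them undefined, so
        accidental copies fail to compile or link.)

            class MachineFunction {
              // Intentionally private and never defined.
              MachineFunction(const MachineFunction &);   // DO NOT IMPLEMENT
              void operator=(const MachineFunction &);    // DO NOT IMPLEMENT
              // ...
            };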
    
    commit 5a7b8f0a83d0b584cfb5a565011fa500e7260638
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Nov 9 16:38:15 2009 +0000
    
        Use ',' separation in XFAILs; lit doesn't evaluate them as regexes (easy to
        add, but might as well use the more standard syntax).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86553 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d49d0fd052f4f586956ffba0cb5c7b146632c3ef
    Author: Nuno Lopes <nunoplopes at sapo.pt>
    Date:   Mon Nov 9 15:36:28 2009 +0000
    
        add zextOrTrunc and sextOrTrunc methods, similar to the ones in APInt
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86549 91177308-0d34-0410-b5e6-96231b3b80d8
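
        (For reference, the APInt behavior the new methods mirror; a small
        sketch, not part of the commit:)

            llvm::APInt A(32, 300);
            llvm::APInt B = A.zextOrTrunc(8);   // truncates to 8 bits   -> 44
            llvm::APInt C = A.zextOrTrunc(64);  // zero-extends to 64 bits -> 300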
    
    commit cdc498061580523723269ee368d6b7adbbb02699
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 9 15:27:51 2009 +0000
    
        Work around the assembler not recognizing the #0.0 form immediate for vcmp
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86548 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5b35c97ee4fc66ebee2771ddef1dfdc3eff92077
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Mon Nov 9 15:26:40 2009 +0000
    
        CMake: Detect gv, circo, twopi, neato, fdp, dot and dotty.
    
        Patch by Arnaud Allard de Grandmaison!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86547 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 666eecfd9e3463f981d935d8fb082a3716604f61
    Author: Xerxes Ranby <xerxes at zafena.se>
    Date:   Mon Nov 9 14:50:34 2009 +0000
    
        Make the lib/Support/Debug.cpp SetCurrentDebugType implementation part of the llvm namespace to match the function declaration in Debug.h.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86544 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b0273e0a346ffd639eb062e6cc752a2799a29826
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Mon Nov 9 14:27:49 2009 +0000
    
        Fix PR5149.
    
        http://llvm.org/bugs/show_bug.cgi?id=5149
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86543 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e0efca5e6470972e944ea2fc546690652eb4a6ea
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 07:12:01 2009 +0000
    
        make this correctly handle redefinition of malloc with a different prototype.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86525 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 69a7075dc8caf9332f5c105577d0ea36a1cfd00e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 07:07:56 2009 +0000
    
        if a 'with overflow' intrinsic just has the normal result used, simplify
        it to a normal binop.  Patch by Alastair Lynn, testcase by me.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86524 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 48ac7b971f7e6cbd60c4d1c70663cb0d328cacf2
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Nov 9 06:49:37 2009 +0000
    
        Hide a couple of options.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86522 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a241af8f12c65c26a4a631c3428cc7daefed4318
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Nov 9 06:49:22 2009 +0000
    
        80 col.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86521 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7452f6a39e6980c100521593a0b9bcee17b5ad3d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 04:57:04 2009 +0000
    
        fix PR5104: when printing a single character, return the result of
        putchar in case there is an error.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86515 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit acb19b052cad5c3583209cb708463a17c8dc6217
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 04:47:27 2009 +0000
    
        fix some bogus asserts, PR5049
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86514 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5993faa112338898200ae47e6d9de0071b8ebde3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 04:18:23 2009 +0000
    
        random tidy
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86511 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fa188de533488e8238ca4b9346b9deb3b9a5a8af
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 04:15:28 2009 +0000
    
        remove a redundant printout; LinkInArchive prints this as well.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86510 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 073c12c61ebbb3b88b550dc1a2ebf87abcab9927
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 01:38:00 2009 +0000
    
        enhance PHI slicing to handle the case when a sliceable PHI is being
        used by a chain of other PHIs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86503 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1675912cad699198af854de0af1bc8f93ca4730f
    Author: Owen Anderson <resistor at mac.com>
    Date:   Mon Nov 9 00:48:15 2009 +0000
    
        Small cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86499 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cd1c8db070a428d72c35335d4a30218e3f24c277
    Author: Owen Anderson <resistor at mac.com>
    Date:   Mon Nov 9 00:44:44 2009 +0000
    
        Revert my previous patch to ABCD and fix things the right way.  There are two problems addressed
        here:
    
        1) We need to avoid processing sigma nodes as phi nodes for constraint generation.
        2) We need to generate constraints for comparisons against constants properly.
    
        This includes our first working ABCD test!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86498 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 28075886973ba6292e5dc3964aa6206520337741
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 9 00:41:49 2009 +0000
    
        comment typos pointed out by Duncan
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86497 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e2fda536ff4bd38be9f9ebea6ad90ce0b48d0e12
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 9 00:11:35 2009 +0000
    
        Use Unified Assembly Syntax for the ARM backend.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86494 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 57cf21e12d890dd0ac45c9369fb501bbef3a0050
    Author: Owen Anderson <resistor at mac.com>
    Date:   Sun Nov 8 22:36:55 2009 +0000
    
        Fix an issue where the ordering of blocks within a function could lead to different constraint
        graphs being produced.  The cause was that we were incorrectly marking sigma instructions as
        processed after handling the sigma-specific constraints for them, potentially neglecting to
        process them as normal instructions as well.
    
        Unfortunately, the testcase that inspired this still doesn't work because of a bug in the solver,
        which is next on the list to debug.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86486 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6235a52b83705e463a0ba08e28d8a288ac462b43
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 8 21:51:53 2009 +0000
    
        Add a 'zkill' script, which is more-or-less a fancy (although not necessarily
        very robust) version of killall. Because I like making shiny new wheels out of
        spare parts.
    
        For use by buildbots when people insist on making cc1 infinite loop. :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86484 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4ca7390a7c462bb3e7cb11239e22c3ec7b93080b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 8 21:20:06 2009 +0000
    
        Teach an instcombine to not pull trunc instructions through PHI nodes
        when both the source and dest are illegal types, since it would cause
        the phi to grow (for example, we shouldn't transform test14b's phi to
        a phi on i320).  This fixes an infinite loop on i686 bootstrap with
        phi slicing turned on, so turn it back on.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86483 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5987b829927111762b6354e379f331a2476c7240
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Sun Nov 8 20:55:48 2009 +0000
    
        Revert commit 81144, and add a comment.  It caused bugpoint timeouts
        not to work any more on linux.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86481 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1cd526bc4bb02aec231107278b004f5801e2af4b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 8 19:23:30 2009 +0000
    
        reapply r8644[3-5] with only the scary part
        (SliceUpIllegalIntegerPHI) disabled.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86480 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bfa5deef8b8798068cda5c4989793d178d6f4a4b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 8 17:52:47 2009 +0000
    
        Speculatively revert r8644[3-5]; they seem to be leading to infinite loops in
        llvm-gcc bootstrap.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86478 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c5073681095c40377e1086ea5c2d66d5a0e539e2
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Nov 8 15:33:12 2009 +0000
    
        Add and-not (bic) patterns. Based heavily on patch by Brian Lucas!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86471 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 185c213be4fda6313031905f207fadb446d155de
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Nov 8 15:32:44 2009 +0000
    
        Move OR patterns up, next to all the other logical stuff. No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86470 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e3b260e9514d07bd9e9b66a5a907785a1178ff27
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Nov 8 15:32:28 2009 +0000
    
        Some nice peephole patterns. Based on patch by Brian Lucas!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86469 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b5d6f65c43ebfb2e77786c70c07987a9e7b12d80
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Nov 8 15:32:11 2009 +0000
    
        Print a tab before the operand of jcc
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86468 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fc5c66b384e5ac8089737a40b97a02ae32176142
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Nov 8 14:27:38 2009 +0000
    
        Fix invalid operand updates & implement post-inc memory operands
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86466 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4bd4ef0b0aee36cf5f4f7b1398722a2a0fe92e7e
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Nov 8 12:58:40 2009 +0000
    
        Throw an error when stack realignment stuff fails instead of silently
        miscompiling code
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86463 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e9439104f0c4bd0fb348b0739d51ddf01d2ac994
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Nov 8 12:14:54 2009 +0000
    
        It is invalid to infer the value type from result #0 of the node,
        since the instruction might use the other result, which has a different type.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86462 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 353ad44681895cd7799966ccafa1e572ca63ec17
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 8 09:46:57 2009 +0000
    
        Remove ByteswapSCANFResults, it is dead.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86458 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6c331cc294d08c58407ae962723a85e676b04b28
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 8 09:34:14 2009 +0000
    
        NNT: Remove DejaGNU test from NewNightlyTest reports; this aspect of
        testing is handled by buildbots now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86454 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bb3fe4a85b912f48c3b4b2e713d894387b9abc75
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 8 09:29:52 2009 +0000
    
        Two small fixes for site.exp for cmake.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86453 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3221d7689547e97fa387f5619fef29a42a8da732
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 8 09:08:00 2009 +0000
    
        Derive the right paths to use during testing instead of passing them in via make.
    
        Also, fix a few other details of the cmake test target and rename it to
        'check'. CMake tests now work for the most part, but there are a handful of
        failures left due to missing site.exp bits.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86452 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b4dbd965fd3a5175f701f86ccda73bf8271ef6a5
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 8 09:07:51 2009 +0000
    
        Switch to using 'lit.site.cfg.in' for the site config template for Unit tests,
        and generate it for CMake builds as well.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86451 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0448c48560ce06959994e7a5df6f9ff2e9eac448
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 8 09:07:42 2009 +0000
    
        Clean up some unused RUN lines.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86450 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7948379df3422b4c9d98c3943a864293d4ff0527
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 8 09:07:33 2009 +0000
    
        lit: Hardcode the whence seek value; os.SEEK_END isn't always available.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86449 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5a6655647ddb18e15038155a4565ab771fff35a6
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 8 09:07:26 2009 +0000
    
        lit: Warn when a test suite contains no tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86448 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7613ba5737a340d3cf88311c7a954625dae0bad1
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 8 09:07:13 2009 +0000
    
        lit: Drop require_and_and support.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86447 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2a7f8c576a1490e1b2cc6bc04b5c8333435902bd
    Author: Lang Hames <lhames at gmail.com>
    Date:   Sun Nov 8 08:49:59 2009 +0000
    
        Moved some ManagedStatics out of the SlotIndexes header.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86446 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bfdcf999c67b880b0ba6f92b2cd7c3529a216c6c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 8 08:36:40 2009 +0000
    
        another more interesting test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86445 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e9ad9a4948b4d65d68c5614b2e599b94d7ef8831
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 8 08:30:58 2009 +0000
    
        feature test for the new transformation in r86443
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86444 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c42200ecbc9761eb2e237f998ebe1408ddc40104
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 8 08:21:13 2009 +0000
    
        teach a couple of instcombine transformations involving PHIs to
        not turn a PHI of a legal type into a PHI of an illegal type, and
        add a new optimization that breaks up insane integer PHI nodes into
        small pieces (PR3451).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86443 91177308-0d34-0410-b5e6-96231b3b80d8
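
    Illustrative aside, not code from the patch: "breaking up" a wide integer
    means rewriting one illegal-width value as several legal-width words. A
    minimal hand-written C++ sketch of the resulting shape, with invented
    names:

        #include <cstdint>

        struct I96 {            // one "insane" i96 value...
          uint32_t w0, w1, w2;  // ...held as three legal i32 words
        };

        // Adding two i96 values using only 32-bit operations, carrying
        // between words -- roughly the shape left behind once a wide PHI
        // has been split into per-word PHIs.
        I96 add96(I96 a, I96 b) {
          I96 r;
          uint64_t t = (uint64_t)a.w0 + b.w0;
          r.w0 = (uint32_t)t;
          t = (uint64_t)a.w1 + b.w1 + (t >> 32);
          r.w1 = (uint32_t)t;
          t = (uint64_t)a.w2 + b.w2 + (t >> 32);
          r.w2 = (uint32_t)t;
          return r;
        }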
    
    commit 16f6916607396e205984e25c26798f6bf6c4c2c9
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Nov 8 05:45:04 2009 +0000
    
        We don't need to byteswap; the interpreter assumes the program is running
        natively anyway. This fixes a crash when using %d and similar in a scanf call.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86440 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e342f10769875f34a07053cf77ace1ccd9cedd42
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 8 03:43:06 2009 +0000
    
        lit: Workaround a Win32/subprocess bug when appending.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86437 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 377d801fa2f6dbd4a5ccef74224c7be00cad308a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 8 03:35:19 2009 +0000
    
        lit: Preserve the PATHEXT variable when running subcommands, this is important on Win32
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86436 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c97274fc4693cd67b6e4c51ec3baa21f70f039e6
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 8 02:32:01 2009 +0000
    
        Make TargetData::getStringRepresentation spit out native integer types;
        this gives llvm-gcc-generated modules the right data.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86435 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bbf10b58aa5df695476f2be89db4b92063e1c1b0
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Nov 8 02:23:15 2009 +0000
    
        Remove test. Execution tests are slow and generally not worth it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86434 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8f36edca3cfe963255d7766cd3f963605cabf32e
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Nov 8 01:04:45 2009 +0000
    
        Fix run line.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86429 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 543de5cfe67417377df9bf97f94bd27837bc3259
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Nov 8 00:45:29 2009 +0000
    
        Fix the interpreter to not crash due to zeroext/signext
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86428 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 322873dbdbca38b44b0f3216dd8a0f91e81da137
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 8 00:34:22 2009 +0000
    
        Prevent warning spew about -fPIC when using CMake generated Xcode project files.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86427 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bacc516181e4772892c2a61ce0acc17074fedf1e
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sun Nov 8 00:27:19 2009 +0000
    
        Use aligned load/store instructions for spilling Q registers when we know the stack slot is 128 bit aligned
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86425 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit be9db751021d3849e4cd2590e4e4c7ec4f50b221
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Nov 8 00:15:23 2009 +0000
    
        Refactor code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86423 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e8f69491b675eaea14eb21c9cea799c424e9082c
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Nov 7 23:52:27 2009 +0000
    
        Fix CMake reporting of target triple.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86419 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b38d0a0852e059b49987cabec86a05f82d64979f
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Nov 7 23:52:20 2009 +0000
    
        Stop running get_target_triple more than we need to.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86418 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 630690c2fe8d0efa0a7ea1f54334dd4182c34c18
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Nov 7 23:51:55 2009 +0000
    
        Fix MSVC warning ( | with bool and unsigned int).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86417 91177308-0d34-0410-b5e6-96231b3b80d8
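
    A minimal reproduction of the MSVC diagnostic being silenced here
    (illustrative code, not the patched line):

        #include <cstdio>

        int main() {
          bool Flag = true;
          unsigned Bits = 0x2;
          // MSVC warns about mixing bool and unsigned with '|'; widening
          // the bool explicitly keeps the same value and quiets it.
          unsigned Noisy = Bits | Flag;              // warns under MSVC
          unsigned Quiet = Bits | (Flag ? 1u : 0u);  // no warning
          std::printf("%u %u\n", Noisy, Quiet);
          return 0;
        }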
    
    commit 8acf7a18664fd8abcdd979e10354bf5d2b11b400
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Nov 7 23:21:30 2009 +0000
    
        Fix class -> struct tag.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86416 91177308-0d34-0410-b5e6-96231b3b80d8
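
    The underlying issue, sketched with a hypothetical type: a type
    forward-declared with one tag and defined with the other is legal C++,
    but MSVC (C4099) and clang++ warn, so the fix is to make the tags match.

        class Foo;    // forward declaration says 'class'...

        struct Foo {  // ...definition says 'struct': legal, but warned about
          int X;
        };

        Foo makeFoo() {  // hypothetical use, fine either way
          Foo F;
          F.X = 1;
          return F;
        }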
    
    commit b13034d606a6e67a8cb78d28574a8f9a5da681bf
    Author: Nate Begeman <natebegeman at mac.com>
    Date:   Sat Nov 7 23:17:15 2009 +0000
    
        x86 vector shuffle cleanup/fixes:
    
        1. rename the movhp patfrag to movlhps, since that's what it actually matches
        2. eliminate the bogus movhps load and store patterns; they were incorrect.  The load transforms are already handled (correctly) by shufps/unpack.
        3. revert a recent test change to its correct form.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86415 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0e426c27d48dd7f1890bfa71b7566a7cfb05c4e2
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Nov 7 22:00:39 2009 +0000
    
        80-column cleanup of file header comments
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86408 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 04d92822f7e9437186a873e4f6335b4c379c5d65
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Nov 7 21:25:39 2009 +0000
    
        Support alignment specifier for NEON vld/vst instructions
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86404 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ded7c4a4c0f0df94bd7abfc98a6e0123c5849efe
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sat Nov 7 21:10:15 2009 +0000
    
        Improve tail call elimination to handle the switch statement.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86403 91177308-0d34-0410-b5e6-96231b3b80d8
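
    Roughly what handling the switch statement buys, as a hand-written
    sketch with an invented type (not code from the patch): a self call in
    tail position inside a switch arm can now become a loop back edge.

        struct Node { int Kind; Node *Next; };  // hypothetical type

        // Before: the tail call sits inside a switch, which tail call
        // elimination previously would not look through.
        Node *skipZeros(Node *N) {
          if (!N) return 0;
          switch (N->Kind) {
          case 0:  return skipZeros(N->Next);  // tail call in a switch arm
          default: return N;
          }
        }

        // After the transformation, conceptually:
        Node *skipZerosIter(Node *N) {
          while (N && N->Kind == 0)
            N = N->Next;  // the recursion became a back edge
          return N;
        }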
    
    commit 812c299cb8ac1c341552584fbbdfca9a75ffe764
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Nov 7 19:40:04 2009 +0000
    
        t2ldrpci_pic can be used for blockaddress as well.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86400 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 07600e1c9abb174d4984f50c79b0e45062e93c8f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 19:13:17 2009 +0000
    
        temporarily remove these tests, as they are breaking on the buildbot.
        Eric, please investigate.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86399 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2e9f5d0556117a5d0537efa3f018ddd920f899c1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 19:11:46 2009 +0000
    
        make instcombine only rewrite a chain of computation
        (eliminating some extends) if the new type of the
        computation is legal or if both the source and dest
        are illegal.  This prevents instcombine from changing big
        chains of computation into i64 on 32-bit targets for
        example.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86398 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1b60b1603b3be4b41b2e3a6820fe116835ec7dd9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 19:07:32 2009 +0000
    
        indicate what the native integer types for the target are.
        Please verify.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86397 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 932568fad8ba37fec6fbab9f874d1be2d7623c8f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 18:53:00 2009 +0000
    
        all targets should be required to declare legal integer types.  My plan to
        make it optional doesn't work out.  If you don't want to specify this, don't
        specify a TD string at all.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86394 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8055a3c54bc88921ab4b3051a01112710ef9f215
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 18:03:32 2009 +0000
    
        remove empty files.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86392 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f822a2f4f3b5f91ae0be604b1202622a186ea544
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 17:59:32 2009 +0000
    
        Revert r86359, it is breaking the self host on the
        llvm-gcc-i386-darwin9 build bot.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86391 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a0e695bd1f1afe129a7e294b02cdba45629f0c9c
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Nov 7 17:15:25 2009 +0000
    
        First try at post-inc operand handling... Not fully working yet, though :(
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86386 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a6d97be420bb6fe615998154f555b06be9cc1cf5
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Nov 7 17:15:06 2009 +0000
    
        Add some dummy support for post-incremented loads
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86385 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0e459b4d02c73518bf209236d7e406259e56408e
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Nov 7 17:14:39 2009 +0000
    
        Add 8 bit libcalls and make use of them for msp430
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86384 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit daa66e5edbb1eba5a6a602200946c5e2f27c3dbd
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Nov 7 17:13:57 2009 +0000
    
        Add few pseudo-source-values
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86383 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c7e166ffb74e4504175767260bd1e23e35f98b7
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Nov 7 17:13:35 2009 +0000
    
        Initial support for addrmode handling. Tests by Brian Lucas!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86382 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c51a65b683d9ae5c0963eaa31cd0cd71241bd536
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Nov 7 17:12:58 2009 +0000
    
        Some preliminary variable asmprinting
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86381 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4266e28b8b014310ee00746ff5f0e012b111720e
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Nov 7 17:12:38 2009 +0000
    
        Use '.L' for global private prefix (as mspgcc)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86380 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit db3ab2f46c9e75a41f3ba2021b7537af4bd813f4
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Nov 7 17:12:21 2009 +0000
    
        Drop old asmprinter stuff
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86379 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 59ab8afb46d74604f438d78213a641c038c519b8
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Nov 7 15:20:32 2009 +0000
    
        It turns out that the testcase in question uncovered a subreg-handling bug.
        Add an assert in the asmprinter to catch such cases and xfail the tests.
        A PR is to be filed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86375 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ece4180609c6ec2e21dfef5f27eb9bf7b65351e4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 09:35:34 2009 +0000
    
        add the ability for TargetData to return information about legal integer
        datatypes on a given CPU.  This is intended to allow instcombine and other
        transformations to avoid converting big sequences of operations to an
        inconvenient width, and will help clean up after SRoA.  See also "Adding
        legal integer sizes to TargetData" on Feb 1, 2009 on llvmdev, and PR3451.
    
        Comments welcome.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86370 91177308-0d34-0410-b5e6-96231b3b80d8
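
    The new information travels in the target data string. A hedged sketch
    of what such a string looks like with the integer-width spec added
    (component spellings as I understand this series; illustrative only):

        // An x86-32-style target data ("TD") string.  The trailing "n"
        // entry declares that i8, i16 and i32 are the CPU's native integer
        // widths, so passes like instcombine can avoid widening computation
        // past them.
        const char *ExampleTD =
            "e"            // little endian
            "-p:32:32:32"  // 32-bit pointers, 32-bit aligned
            "-i32:32:32"   // i32: 32-bit ABI and preferred alignment
            "-n8:16:32";   // native integer widths (the new spec)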
    
    commit 683309f29784560cf55826b443f44503318876f1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 09:23:04 2009 +0000
    
        more cleanup.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86369 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2aaa75e234660532df77f2a630509bfee77652d9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 09:20:54 2009 +0000
    
        add some missing #includes
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86367 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 044fe3e5132220650eda5e8f876739fb055455bc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 09:13:23 2009 +0000
    
        rewrite TargetData to use StringRef/raw_ostream instead of thrashing std::strings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86366 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9ea9331c67f53b9deb5aa58e0b27c2bddd21a200
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 09:07:01 2009 +0000
    
        prune #include / layering violation
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86365 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e63aa1f6906831c4a1b8bf2f2bc1ee23a4e3c3c6
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Sat Nov 7 08:51:52 2009 +0000
    
        Make the need-stub variables accurate and consistent.  In the case of
        MachineRelocations, "stub" always refers to a far-call stub or a
        load-a-faraway-global stub, so this patch adds "Far" to the term. (Other stubs
        are used for lazy compilation and dlsym address replacement.) The variable was
        also inconsistent between the positive and negative sense, and the positive
        sense ("NeedStub") was more demanding than is accurate (since a nearby-enough
        function can be called directly even if the platform often requires a stub).
        Since the negative sense causes double-negatives, I switched to
        "MayNeedFarStub" globally.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86363 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f36502964777b9941c5a4c531990cc6f2488dc23
    Author: Eric Christopher <echristo at apple.com>
    Date:   Sat Nov 7 08:45:53 2009 +0000
    
        Fix a couple of shuffle patterns to use movhlps instead
        of movhps as the constraint.  Changes optimizations so
        update testcases as appropriate as well.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86360 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 69a3bcea684c35eb0a394e0dbcfa97b25b33c0d6
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sat Nov 7 08:34:40 2009 +0000
    
        Teach dead store elimination that certain intrinsics write to memory just like
        a store.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86359 91177308-0d34-0410-b5e6-96231b3b80d8
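
    The kind of case this enables, sketched in plain C++ (illustrative,
    not a test from the patch): once DSE knows the intrinsic writes memory
    like a store, a store fully overwritten by a later memset is dead.

        #include <cstring>

        void init(char *Buf) {
          Buf[0] = 1;               // dead: the memset below overwrites it
          std::memset(Buf, 0, 16);  // llvm.memset writes these bytes just
        }                           // like a wide store would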
    
    commit 09a32aa09eeec1d1894333940228d2d097c10007
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 08:31:52 2009 +0000
    
        remove the win32 tree, it's stale and confusing.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86358 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d4fddc700e7bb0387a520cbc2acbc01d29eac122
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 08:05:03 2009 +0000
    
        reapply 86289, 86278, 86270, 86267, 86266 & 86264 plus a fix
        (making pred factoring only happen if threading is guaranteed
        to be successful).
    
        This now survives an X86-64 bootstrap of llvm-gcc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86355 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87c965591afc72dca6b805427a322cd5adf40d11
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 07:50:34 2009 +0000
    
        Fix PR5421 by APInt'izing switch lowering.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86354 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5031fbdde50deb47304882f92ce6b893887cd324
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sat Nov 7 07:42:38 2009 +0000
    
        Oops, FunctionContainsEscapingAllocas is really used to mean two different
        things. Back out part of r86349 for a moment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86353 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 10b606f2cf73f5cfb1d50fa48f890c09bec38b59
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sat Nov 7 07:10:01 2009 +0000
    
        Dust off tail recursion elimination. Fix a fixme by applying CaptureTracking
        and add a .ll to demo the new capability.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86349 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7e7dd487592ee788e07b8dc0279293969a0bda07
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sat Nov 7 06:33:58 2009 +0000
    
        llvmc: Add a '-time' option.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86348 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 05858c503a855f067a256a31c7e811543765aaec
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sat Nov 7 06:33:12 2009 +0000
    
        Trailing whitespace.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86347 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 75f7b8dca3aaaf0a4ffacfb4d4edccd80000eb4e
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sat Nov 7 06:33:01 2009 +0000
    
        80-col violation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86346 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 147f0500f9f9d9e0091dc61347b094334280fd83
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 7 06:19:20 2009 +0000
    
        merge cmp1 into cmp0 and filecheckize.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86345 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a1c6ba05e0a2b1e604f2cae702a60cf8e568d40b
    Author: Lang Hames <lhames at gmail.com>
    Date:   Sat Nov 7 05:50:28 2009 +0000
    
        Update some globals to use ManagedStatic.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86342 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e6f60f14e1adc950bfd1529325db1acedf9cf97e
    Author: Mon P Wang <wangmp at apple.com>
    Date:   Sat Nov 7 04:46:25 2009 +0000
    
        Fix memoizing of CvtRndSatSDNode
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86340 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 01fa52d302ba6f4939b6907d0ae10fee5d61200c
    Author: Mon P Wang <wangmp at apple.com>
    Date:   Sat Nov 7 04:07:33 2009 +0000
    
        Fixed Overload table bug noticed by Jakob
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86332 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 10268e847d84c45a601e68bb02eba4dca773331b
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Nov 7 04:07:30 2009 +0000
    
        Missed this.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86331 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ca529232f661d674617b7d5e51102cf4ee9a6ffb
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Nov 7 04:04:34 2009 +0000
    
        Refactor code. Fix a potential missing check. Teach isIdentical() about tLDRpci_pic.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86330 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 626474d3ddba098f44d1a372314418a63c152540
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Nov 7 03:52:02 2009 +0000
    
        - Add TargetInstrInfo::isIdentical(). It's similar to MachineInstr::isIdentical
          except it doesn't care if the definitions' virtual registers differ. This is
          used by machine LICM and other MI passes to perform CSE.
        - Teach Thumb2InstrInfo::isIdentical() to check whether two t2LDRpci_pic are
          identical.  Since pc-relative constantpool entries are always different,
          this requires checking whether the underlying values are actually the same.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86328 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 80994a152341401f8151ece592fd0a07dd14ae1f
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Sat Nov 7 03:26:59 2009 +0000
    
        Update CMake file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86325 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87d0426c569d606dca1d32dc23cae9dee245fc76
    Author: Kenneth Uildriks <kennethuil at gmail.com>
    Date:   Sat Nov 7 02:11:54 2009 +0000
    
        Add code to check at SelectionDAGISel::LowerArguments time to see if return values can be lowered to registers.  Coming soon: code to perform sret-demotion when return values cannot be lowered to registers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86324 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d1332dcf1421bd9a7b2e9527056cb1934aea041c
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Sat Nov 7 01:58:40 2009 +0000
    
        Fix inverted conflict test in -early-coalesce.
    
        A non-identity copy cannot be coalesced when the phi join destination register
        is live at the copy site.
    
        Also verify the condition that the PHI join source register is only used in
        the PHI join. Otherwise the coalescing is invalid.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86322 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cbc9debd4e99fc06ef89fdd5817c81bc92ac6a85
    Author: Devang Patel <dpatel at apple.com>
    Date:   Sat Nov 7 01:32:59 2009 +0000
    
        Revert the following patches to fix the llvm-gcc bootstrap:
        86289, 86278, 86270, 86267, 86266 & 86264.
        Chris, please take a look.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86321 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d139b9495d8e6a18ac8c7d3a7767f7e0db76a673
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Sat Nov 7 00:54:36 2009 +0000
    
        My previous patch (r84124) for setting the encoding bits 4 and 7 of DPSoRegFrm
        was wrong and too aggressive in the sense that DPSoRegFrm includes both constant
        shifts (with Inst{4} = 0) and register controlled shifts (with Inst{4} = 1 and
        Inst{7} = 0).  The 'rr' fragment of the multiclass definitions actually means
        register/register with no shift, see A8-11.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86319 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f1e1f4c9a3c02d44a9ea7f82c986794318b20415
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Sat Nov 7 00:41:19 2009 +0000
    
        - new SROA mallocs should have the mallocs running-or'ed, not the malloc's bitcast
        - fix ProcessInternalGlobal() debug output
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86317 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b035016639fa72a2a698d3f362fff7f9eee4e5ca
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Sat Nov 7 00:36:50 2009 +0000
    
        Fit in 80 columns
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86316 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7f287f83693b6f419210d1a3560c45515b9e690c
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Sat Nov 7 00:26:47 2009 +0000
    
        Avoid "ambiguous 'else'" warning from gcc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86314 91177308-0d34-0410-b5e6-96231b3b80d8
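
    The classic shape of that warning, for reference (hypothetical helpers,
    not the patched code):

        void step1();
        void step2();

        void f(bool A, bool B) {
          if (A)
            if (B)
              step1();
          else        // binds to the inner 'if', not the outer one the
            step2();  // indentation suggests -- hence gcc's warning
        }

        void fFixed(bool A, bool B) {
          if (A) {    // braces make the intended nesting explicit
            if (B)
              step1();
          } else
            step2();
        }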
    
    commit 955449e468395f33a2debd72e3e6f8f5a1dc1b46
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Sat Nov 7 00:16:28 2009 +0000
    
        Re-commit r86077 now that r86290 fixes the 179.art and 175.vpr ARM regressions.
    
        Here is the original commit message:
    
        This commit updates malloc optimizations to operate on malloc calls that have constant int size arguments.
    
        Update CreateMalloc so that its callers specify the size to allocate:
        MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
        Optimization users use TargetData to compute the allocation size.
    
        Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
        Extend getMallocType() to support malloc calls that have non-bitcast uses.
    
        Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses.  The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
    
        Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses.  The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
    
        Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
    
        Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86311 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ab81483d946285672c43aa85f78f912321e95348
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Nov 7 00:13:30 2009 +0000
    
        80-columns
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86310 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 49c5ea4e4216aa9545345efe476bbc2dc32db9d8
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Sat Nov 7 00:00:10 2009 +0000
    
        Give the JITResolver a direct pointer to its JITEmitter, and use that instead
        of going through the global TheJIT variable.  This makes it easier to use
        features of JITEmitter that aren't in JITCodeEmitter for fixing PR5201.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86305 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7921e58057a3a19e15877cfd18fcbeb828e919f9
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Nov 6 23:52:48 2009 +0000
    
        - Add pseudo instructions tLDRpci_pic and t2LDRpci_pic which do a pc-relative
          load of a GV from the constantpool and then add pc. This allows the code
          sequence to be rematerialized so it can be hoisted by machine LICM.
        - Add a late pass to break these pseudo instructions into a number of real
          instructions. Also move the code in the Thumb2 IT pass that breaks up
          t2MOVi32imm to this pass. This is done before post-regalloc scheduling to
          allow the scheduler to properly schedule these instructions. It also allows
          them to be if-converted and shrunk by later passes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86304 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d271d8b61f3747846a4ac40f9c49a9b9ca71bc67
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Fri Nov 6 23:45:15 2009 +0000
    
        Honour subreg machine operands during asmprinting
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86303 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6a14a00bb348755ff7974c56ff8df9845deb68f3
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Nov 6 23:33:28 2009 +0000
    
        Print VMOV (immediate) operands as hexadecimal values.  Apple's assembler
        will not accept negative values for these.  LLVM's default operand printing
        sign extends values, so that valid unsigned values appear as negative
        immediates.  Print all VMOV immediate operands as hex values to resolve this.
        Radar 7372576.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86301 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f676a031cac638f43285ad86a7e6e884a4d399a0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 6 23:19:58 2009 +0000
    
        Fix a bug where we'd call SplitBlockPredecessors with a pred in the
        set only once even if it has multiple edges to BB.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86299 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e9538bb41adbd94f658cbf5fdf7cabe3b4ee5920
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Nov 6 23:06:42 2009 +0000
    
        Fix a broken test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86298 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cda028f643df712e635d48b9f19af6e5dad2cd54
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Nov 6 22:38:38 2009 +0000
    
        Fix comment typos.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86295 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e4582d9e08187fb0d75c51ddb139f3ad3a45642d
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Nov 6 22:24:13 2009 +0000
    
        Remove ARMPCLabelIndex from ARMISelLowering. Use ARMFunctionInfo::createConstPoolEntryUId() instead.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86294 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e725b7d6aa1d922f7045b072edc1025ef3622bd0
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Fri Nov 6 21:43:21 2009 +0000
    
        CallInst::CreateMalloc() and CallInst::CreateFree() need to create calls with the correct calling convention
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86290 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c42f1cb6754214561a0eacb03feea6146ebe6f84
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Fri Nov 6 21:24:57 2009 +0000
    
        Remove function left over from other jump threading cleanup.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86289 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c79bcffd435ca296c8006c50f2b86be1ec45a3a5
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Fri Nov 6 20:10:46 2009 +0000
    
        fix typo
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86281 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d67140691f38c21ee6da9b2d4d3302c31078b6a3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 6 19:21:48 2009 +0000
    
        Fix a problem discovered on self host.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86278 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 593376d7428da212386f6f583ca2973b03c13d73
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 6 18:24:32 2009 +0000
    
        remove more code subsumed by r86264
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86270 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d3a1626f2fc865dceffc949662bf31dc37e64a95
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Nov 6 18:24:05 2009 +0000
    
        Tolerate invalid derived type.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86269 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 52814ea97e6546388dc2460712ea6233389de475
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 6 18:22:54 2009 +0000
    
        eliminate some more code subsumed by r86264
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86267 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e41dbc99d28bdadb78ae8a6515963ba62814239e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 6 18:20:58 2009 +0000
    
        remove now redundant code, r86264 handles this case.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86266 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a00d671e18b86d5366609965dcc181b8995dbc8d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 6 18:15:14 2009 +0000
    
        Extend jump threading to support much more general threading
        predicates.  This allows us to jump thread things like:
    
        _ZN12StringSwitchI5ColorE4CaseILj7EEERS1_RAT__KcRKS0_.exit119:
          %tmp1.i24166 = phi i8 [ 1, %bb5.i117 ], [ %tmp1.i24165, %_Z....exit ], [ %tmp1.i24165, %bb4.i114 ]
          %toBoolnot.i87 = icmp eq i8 %tmp1.i24166, 0     ; <i1> [#uses=1]
          %tmp4.i90 = icmp eq i32 %tmp2.i, 6              ; <i1> [#uses=1]
          %or.cond173 = and i1 %toBoolnot.i87, %tmp4.i90  ; <i1> [#uses=1]
          br i1 %or.cond173, label %bb4.i96, label %_ZN12...
    
        Where it is "obvious" that, when coming from %bb5.i117, the 'and' is always
        false.  This triggers a surprisingly high number of times in the testsuite,
        and gets us closer to generating good code for Doug's strswitch testcase.

        This also makes a bunch of other code in jump threading redundant, which
        I'll rip out in the next patch.  This survived an enable-checking llvm-gcc
        bootstrap.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86264 91177308-0d34-0410-b5e6-96231b3b80d8
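
    In source terms, the newly threaded pattern looks roughly like this
    invented sketch: a flag set on one path makes a later combined
    condition statically known for that predecessor, so the edge can be
    routed straight past the test.

        int classify(int X);  // hypothetical helpers
        void slowPath();
        void fastPath();

        void dispatch(int X) {
          bool FromBB5 = false;
          if (classify(X) == 5)
            FromBB5 = true;  // the flag is known true on this path only
          // Coming from the 'then' above, '!FromBB5' -- and therefore the
          // whole '&&' -- is known false, so jump threading can send that
          // edge directly to fastPath().
          if (!FromBB5 && classify(X) == 6)
            slowPath();
          else
            fastPath();
        }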
    
    commit 400718724c2abc7eab43ccf09a4e970a23846017
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 6 18:03:10 2009 +0000
    
        Use WriteAsOperand to print GlobalAddress MachineOperands. This
        prints them with the leading '@'.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86261 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 83e42c7163cc29cf8afd36082164ffb8a2e4b454
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Nov 6 17:58:12 2009 +0000
    
        Do not bother to emit debug info for nameless global variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86259 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de46a5b60ea8cf3db0b613172667a7038470c578
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Nov 6 10:58:06 2009 +0000
    
        Pass StringRef by value.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86251 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d71e0318a7639a78ea87cc0f6eabf13358fd4c9e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 6 06:33:01 2009 +0000
    
        clang++ points out that this is pointless.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86239 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 002e65d73c9060d48f73b259ccfcc483352e859a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 6 05:59:53 2009 +0000
    
        remove some more Context arguments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86235 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6070c017ba421c2862fc0c25d9770169e858c7e3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 6 04:27:31 2009 +0000
    
        remove a bunch of extraneous LLVMContext arguments
        from various APIs, addressing PR5325.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86231 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3209d11ed21bf3bc4e82a55509cfcc17f18fb6e4
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Nov 6 04:12:13 2009 +0000
    
        NewNightlyTest: Fix timestamp format to actually make sense (it was missing the hour).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86229 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0fa6ca54e53fa3ebee13f92e1c422f8749e7b0f3
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Nov 6 04:12:07 2009 +0000
    
        NewNightlyTest: Add -noclean option, which doesn't run 'make clean' before building LLVM (for testing).
    
        Also, switch to always running 'make clean' in the test-suite directories.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86228 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 59db302d26871b9aae9538e78940474cea811791
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Nov 6 04:12:02 2009 +0000
    
        NewNightlyTest: Unbreak passing the build directory via a positional argument.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86227 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4c419e78067be2b1dd7f9de8cc6db5e827746b3b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Nov 6 04:11:29 2009 +0000
    
        NewNightlyTest: Add -llvmgccdir as alternative to environment variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86226 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2243fcd787afdb7b7f59f388778ba99d57b6a468
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Fri Nov 6 01:33:24 2009 +0000
    
        Revert r86077 because it caused crashes in 179.art and 175.vpr on ARM
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86213 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fabc47c0671013d23e636704db0fb6a34e106615
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Nov 6 01:30:04 2009 +0000
    
        Do not try to emit debug info entry for dead global variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86212 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0ffe7349180f102968461625f010f1895afabe3e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 6 00:19:43 2009 +0000
    
        Don't print a redundant tab for inline asm, and do use the new printKill.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86206 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 94023e15c7231aa8362d75d5c55961314c3ff796
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Fri Nov 6 00:12:53 2009 +0000
    
        Add a bunch of missing "template" keywords to disambiguate dependent template names. GCC eats this ill-formed code; Clang does not. I already filed PR5404 to improve recovery in this case.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86204 91177308-0d34-0410-b5e6-96231b3b80d8
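
    A minimal illustration of the missing-'template' problem, with
    hypothetical names: naming a member template through a dependent type
    requires the 'template' keyword, which Clang enforces and GCC did not.

        template <typename T> struct Alloc {
          template <typename U> struct rebind { typedef Alloc<U> other; };
        };

        template <typename A> struct User {
          // Without 'template', the '<' after 'rebind' would parse as
          // less-than, since 'rebind' depends on A.
          typedef typename A::template rebind<int>::other IntAlloc;
        };

        User<Alloc<char> >::IntAlloc G;  // instantiates fine as written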
    
    commit 45535af451f897e642dcaeae79b3c8f75c125799
    Author: Eric Christopher <echristo at apple.com>
    Date:   Fri Nov 6 00:11:57 2009 +0000
    
        Fix PR5315, original patch by Nicolas Capens!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86203 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bbc43963b76f8e54e6cee47defde4868586665fd
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 6 00:04:54 2009 +0000
    
        Factor out the printing of the leading tab into printInlineAsm.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86199 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 72dec44309783b3651a549f18637d055d54334ae
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Nov 6 00:04:05 2009 +0000
    
        Make printImplicitDef and printKill non-virtual, since they don't
        need to be overridden.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86198 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3965ee217f82f6faae5b9bff7a756d2573edbe2c
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 23:53:08 2009 +0000
    
        Use SUBREG_TO_REG instead of INSERT_SUBREG to model x86-64's
        implicit zero-extend.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86196 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2b4eb23c3ae503dbfe2f2d14ac14a513d033755c
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 23:34:59 2009 +0000
    
        Teach LSR to avoid calling SplitCriticalEdge on edges with indirectbr.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86193 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cabfc80a201e5153d12f5ba18a6376b5641170b6
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 23:31:40 2009 +0000
    
        Update these tests for the new label names.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86192 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ff592eced3d70159de10c5a3837e0d0fee7edb8f
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 23:14:35 2009 +0000
    
        Fix the label name generation for address-taken labels to avoid potential
        problems with name collisions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86189 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fb99c50b72f2e9eb91d2b8ecf269210e984c98f7
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Thu Nov 5 23:01:30 2009 +0000
    
        Make a few more LLVM headers parsable as standalone headers.
    
        Fix some problems with the hidden copy constructors for
        ImmutableMap/ImmutableSet found by Clang++.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86186 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2061a07c64a7bdc784d114c796369a14b557c202
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Thu Nov 5 22:58:04 2009 +0000
    
        Teach lit's SyntaxCheckTest two new tricks:
          - skip .svn directories
          - add a set of excluded filenames so we can easily skip tests
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86185 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0d1cb01d2bfeba381f2df9e1295403c8f44527d6
    Author: Lang Hames <lhames at gmail.com>
    Date:   Thu Nov 5 22:20:57 2009 +0000
    
        Added support for renumbering existing index list elements. Removed some junk from the initial numbering code in runOnMachineFunction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86184 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 934feaaf81ccedee352186013166f25def61f914
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 21:48:32 2009 +0000
    
        Avoid calling getUniqueExitBlocks from within LoopSimplify, as it depends
        on loops having dedicated exits, which LoopSimplify can no longer always
        guarantee.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86181 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit daf8ec35aa03cc88dc86febffce6daf1f2ba4d53
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 21:47:04 2009 +0000
    
        LoopDeletion depends on loops having dedicated exits.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86180 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 80439603bfcf5cc282adf1d4aa2e2fb0ece52bd2
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 21:14:46 2009 +0000
    
        The introduction of indirectbr meant the introduction of
        unsplittable critical edges, which means the introduction of
        loops which cannot be transformed to LoopSimplify form. Fix
        LoopSimplify to avoid transforming such loops into invalid
        code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86176 91177308-0d34-0410-b5e6-96231b3b80d8
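
    Where such unsplittable edges come from, sketched with the GNU
    labels-as-values extension (a hypothetical dispatch loop; compiles
    with gcc and clang): computed gotos lower to 'indirectbr', and a
    critical edge leaving an indirectbr cannot be split.

        // Code stream: opcode 1 = add the next word, opcode 0 = stop.
        int run(const int *Code) {
          static void *Dispatch[] = { &&op_end, &&op_add };
          int Acc = 0;
          goto *Dispatch[*Code];    // indirect branch
        op_add:
          Acc += *++Code;           // consume the operand...
          goto *Dispatch[*++Code];  // ...then branch indirectly again
        op_end:
          return Acc;
        }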
    
    commit aef6c7a849c4e8bdd07c017aef2dcfb8c019e287
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 21:11:53 2009 +0000
    
        Update various Loop optimization passes to cope with the possibility that
        LoopSimplify form may not be available.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86175 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6c41ba177e160a4c26c6c9c7c49d27cbecf1a94b
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Thu Nov 5 21:06:09 2009 +0000
    
        Fix bug in aggressive antidep breaking; liveness was not updated correctly for regions that do not have antidep candidates.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86172 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d2691dbecbd79d0e6f4523622a2078ba10c860ef
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 19:44:06 2009 +0000
    
        Teach LoopUnroll how to bail if LoopSimplify can't give it what it needs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86164 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 11bf599cd4fe54f457b4f23940dbd62bd8a70c17
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 19:43:25 2009 +0000
    
        Call getAnalysis<LoopInfo> the normal way, instead of asking the passed-in
        LoopPassManager for it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86163 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 406a0d14c0a031fe251e4d63f6713387cdebeb13
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 19:42:20 2009 +0000
    
        InstrTypes.h includes Instruction.h, so it's not necessary to include both.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86162 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2c26c5bac5337d59784e0d7bfa8050b1d571a3da
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 19:41:37 2009 +0000
    
        Fix IVUsers to avoid assuming that the loop has a unique backedge.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86161 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 967f370d753ef2b39eb89ffea50b1f4601958beb
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 19:33:15 2009 +0000
    
        Delete an unused member variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86160 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 47927a381cd2049120ca518a5b8a879488129bf6
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 19:21:41 2009 +0000
    
        Factor out the predicate code for loopsimplify form exit blocks into
        a separate helper function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86159 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 355b2a31a057ac8164f0ab4ff0c3b11a38cf0205
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Thu Nov 5 19:03:26 2009 +0000
    
        CMake: Detect dotty.
    
        Patch by Arnaud Allard de Grandmaison!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86153 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c389f9986beace94d187e24fae9c47c4668d3eed
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Thu Nov 5 18:57:56 2009 +0000
    
        CMake: do not test for pthread and dl libraries on Windows (except
        Cygwin). Fixes PR 5368.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86152 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 63e2b7e7ea21976b0922a76e4cd10ebf3a2591ea
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 18:49:11 2009 +0000
    
        Avoid printing a redundant space in SDNode->dump().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86151 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7abf0939c0cf058c89bfdf010ebfa0377fb17034
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 18:47:09 2009 +0000
    
        Remove uninteresting and confusing debug output.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86149 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5c2b47ac5ace2e3c08d4e37a6f9813c37ece0407
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Thu Nov 5 18:30:50 2009 +0000
    
        Move llvm::cl::opt's conversion function into the base classes that
        actually need that conversion function. Silences a Clang++ warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86148 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5013522f6bb58c5591e2681e6e293ed9f3bd76e1
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Nov 5 18:25:44 2009 +0000
    
        Add an assertion to catch indirectbr in SplitBlockPredecessors. This
        makes several optimization passes abort in cases where they're currently
        silently miscompiling code.
    
        Remove the indirectbr assertion from SplitEdge. Indirectbr is only
        a problem for critical edges, and SplitEdge defers to SplitCriticalEdge
        to handle those, and SplitCriticalEdge has its own assertion for
        indirectbr.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86147 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 586101f58224a041c3fafb356155e42e16ba793e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 5 18:19:19 2009 +0000
    
        add a note from PR5313
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86146 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 68e77880026c35f6dfd89cc28ed98b349751ea8c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 5 17:51:44 2009 +0000
    
        Declare classes with matched tags, pointed out by a clang++ warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86144 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit df1353e8583730b5b576097b9c4d0a98d4b4af9b
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Nov 5 17:44:22 2009 +0000
    
        Teach SimplifyLibCalls to fold memcmp calls with constant arguments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86141 91177308-0d34-0410-b5e6-96231b3b80d8
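
    The flavor of fold being added, in source terms (illustrative): with
    one buffer and the length constant, the call reduces to direct byte
    comparisons, and with both buffers constant it folds to a constant.

        #include <cstring>

        // Before simplification: a real call to memcmp.
        bool isAB(const char *P) {
          return std::memcmp(P, "ab", 2) == 0;
        }

        // What the fold produces, conceptually.
        bool isAB_folded(const char *P) {
          return P[0] == 'a' && P[1] == 'b';
        }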
    
    commit 1520cef0c0cc888a54599902add7541a19133b4b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Nov 5 16:27:33 2009 +0000
    
        lit: Add --param NAME=VALUE option, for test suite specific use (to communicate
        arbitrary command line arguments to the test suite).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86137 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 329f7bafcebcdc487ef8bca808dacd19c0c5594e
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Nov 5 14:33:27 2009 +0000
    
        Do map insert+find in one step. TODO -= 2.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86133 91177308-0d34-0410-b5e6-96231b3b80d8
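
    The idiom in question, with a hypothetical map: std::map::insert
    already reports whether the key was present, so a separate find()
    beforehand is redundant.

        #include <map>

        // Two lookups: find, then insert on a miss.
        int &getSlow(std::map<int, int> &M, int K) {
          std::map<int, int>::iterator I = M.find(K);
          if (I == M.end())
            I = M.insert(std::make_pair(K, 0)).first;
          return I->second;
        }

        // One lookup: insert() hands back the existing entry (or the
        // freshly inserted one) plus a was-it-inserted flag.
        int &getFast(std::map<int, int> &M, int K) {
          return M.insert(std::make_pair(K, 0)).first->second;
        }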
    
    commit bf0e40c0133361f1e4f2a96a7bd6cdcc5080a695
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Nov 5 14:32:40 2009 +0000
    
        Path::createDirectoryOnDisk should ignore existing directories on win32 too.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86132 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4de7965e718bb1a3fc3b36e233e97246c32c84f8
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Thu Nov 5 13:39:23 2009 +0000
    
        Make two more LLVM headers standalone
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86131 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4e4fe9d4baccaff88c839d9662b0b503f9d2edd0
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Thu Nov 5 13:30:28 2009 +0000
    
        Make a few headers standalone. Plus, add a missing "template" keyword
        that Clang diagnoses but GCC does not.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86130 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 22a64b22aaddff70a4563b541467b323ff0b9dd8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Nov 5 05:57:34 2009 +0000
    
        merge a few crash tests into crash.ll
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86119 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3b0f27313e3d82ccaee3764a0513fde8e34a4c38
    Author: Mon P Wang <wangmp at apple.com>
    Date:   Thu Nov 5 03:19:08 2009 +0000
    
        Reintroduce support for overloading target intrinsics
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86114 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 38e9090bf172f4eb48f6ec53f5895160515b84a9
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Thu Nov 5 01:45:50 2009 +0000
    
        Replace std::map.at() with std::map[].
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86102 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 36005a2ab57baffb80e841d6d3334c41aa8ddc86
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Thu Nov 5 01:19:35 2009 +0000
    
        Break anti-dependencies using free registers in a round-robin manner to avoid introducing new anti-dependencies.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86098 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9d112f2efc514158edea9dd883ca2885d91ef999
    Author: Lang Hames <lhames at gmail.com>
    Date:   Thu Nov 5 01:18:31 2009 +0000
    
        Tidied some ugliness in the SlotIndex default constructor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86097 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a192bc05ae6122d9b2921cbbb0337db8972961ea
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Nov 5 01:16:59 2009 +0000
    
        Now that the code placement optimization pass is run for the JIT, make sure it runs before the pre-emit passes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86092 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3970a4b91a1ffac6bbcb448d4d1f5097014520e0
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Nov 5 01:13:02 2009 +0000
    
        Use WeakVH while storing metadata in containers.
        This fixes PR5393.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86091 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 45a329b627400a8481e9ade2288357391d2f3a25
    Author: Lang Hames <lhames at gmail.com>
    Date:   Thu Nov 5 00:52:28 2009 +0000
    
        Removed an assert which was causing significant slowdowns in debug builds.
        This assert was very conservative to begin with (the error condition is well
        covered by tests elsewhere in the code) so we won't miss much by removing it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86088 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8f43c297b8a4825f8ceace47c00c3a2491667839
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Nov 5 00:51:31 2009 +0000
    
        Add -mtriple to llc commands, attempting to fix buildbot failures.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86086 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 10aae1b683ac3f55b0f49787927e385cfe2cd706
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Nov 5 00:51:13 2009 +0000
    
        Code refactoring.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86085 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9a97a7f3749e9be0bf9b91571704a5b671f9dbe6
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Nov 5 00:30:35 2009 +0000
    
        Attempt again to fix buildbot failures: make expected output less specific
        and compile with -mtriple to specify *-apple-darwin targets.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86081 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c466871dfda63d07da52ca2d87279c33a1db2eca
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Thu Nov 5 00:16:44 2009 +0000
    
        Correctly add chain dependencies around calls and unknown-side-effect instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86080 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e67ebab906b981bb9534cd7c68fda75cb5ff57c9
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Thu Nov 5 00:03:03 2009 +0000
    
        Update CreateMalloc so that its callers specify the size to allocate:
        MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
        Optimization users use TargetData to compute the allocation size.
    
        Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
        Extend getMallocType() to support malloc calls that have non-bitcast uses.
    
        Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses.  The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
    
        Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses.  The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
    
        Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
    
        Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86077 91177308-0d34-0410-b5e6-96231b3b80d8
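
        A hedged sketch of the caller-side change (the helper below is
        hypothetical; TargetData::getTypeAllocSize is the real API): the size
        argument handed to CreateMalloc is now computed by the caller, e.g.
        from TargetData when it is available.

            #include "llvm/Target/TargetData.h"
            #include "llvm/Type.h"
            using namespace llvm;

            // Hypothetical helper computing the byte count a caller passes in.
            uint64_t ComputeAllocSize(const TargetData &TD, const Type *AllocTy,
                                      uint64_t ArrayElts) {
              // getTypeAllocSize includes padding, which malloc must cover too.
              return TD.getTypeAllocSize(AllocTy) * ArrayElts;
            }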
    
    commit 71842a96a0af9476bb92a29534509e04f02a835d
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Nov 4 23:48:00 2009 +0000
    
        While calculating original type size for a derived type, handle type variants encoded as DIDerivedType appropriately.
        This improves bitfield support.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86073 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f1d76577adf6049e5e7b389bc928d3367e8acbc3
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Nov 4 23:20:40 2009 +0000
    
        Grammar.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86068 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5d0336c05ff61c5c2c8592e07c977cf3f9e72e46
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 4 23:20:12 2009 +0000
    
        improve DSE when TargetData is not around, based on work by
        Hans Wennborg!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86067 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8db35425f1cc0dfc5275e413a302a03d99a3c038
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Nov 4 23:11:07 2009 +0000
    
        Now that the memory leak from McCat/08-main has been fixed (86056), re-enable
        aggressive testing of dynamic stack alignment.
        Note that this is off by default, and enabled for LLCBETA nightly results.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86064 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 688ef4595d8d624bb968d508432a64a2652176ae
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Nov 4 22:41:51 2009 +0000
    
        If a function has no stack frame at all, dynamic realignment isn't necessary.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86057 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6023c439e5e7ed8cfcb63488e411e3f04f2515da
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Nov 4 22:41:00 2009 +0000
    
        dynamic stack realignment necessitates scanning the floating point callee-
        saved instructions even if no stack adjustment for those saves is needed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86056 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7d9fe58ac9c54287f9894c61e361af13e6690300
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Nov 4 22:06:12 2009 +0000
    
        Fix DW_AT_data_member_location for bit-fields. It points to the location of the anonymous field that covers the respective field.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86054 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e8cbca94a15e7064ee043b77799d9906d8d4b324
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Nov 4 21:31:18 2009 +0000
    
        Add PowerPC codegen for indirect branches.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86050 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 13d4167715408b029b52ca88437321f306b057df
    Author: Lang Hames <lhames at gmail.com>
    Date:   Wed Nov 4 21:24:15 2009 +0000
    
        Handle empty/tombstone keys for LiveIndex more cleanly. Check for index sanity when constructing index list entries.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86049 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5791bad4eae891bd0596cdf5b5984e42b15e693f
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Wed Nov 4 20:50:23 2009 +0000
    
        A value is only assigned to errno if NumRead equals -1, so do
        not reason based on errno if NumRead has a different value.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86046 91177308-0d34-0410-b5e6-96231b3b80d8
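
        In other words (a minimal sketch of the POSIX contract, with
        illustrative names; this is the convention the fix relies on):

            #include <cerrno>
            #include <unistd.h>

            ssize_t ReadSome(int FD, char *Buf, size_t Size) {
              ssize_t NumRead = ::read(FD, Buf, Size);
              // read(2) only sets errno when it returns -1; after 0 (EOF) or
              // a positive byte count, errno is stale and must be ignored.
              if (NumRead == -1 && errno == EINTR)
                return ReadSome(FD, Buf, Size); // retry only on a real error
              return NumRead;
            }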
    
    commit 3e5933ce0a357d34527487908bde7c1d29a0795f
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Nov 4 20:04:11 2009 +0000
    
        Fix broken test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86045 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4fc72f071d1541bcc6f6a5d276b728167d53359c
    Author: Eric Christopher <echristo at apple.com>
    Date:   Wed Nov 4 19:57:50 2009 +0000
    
        Add some options to disable various code gen optimizations.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86044 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ff19cf6b084b54e4f04b72f080926a4a37d953c
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Nov 4 19:37:40 2009 +0000
    
        Array element size does not match array size but array is not a bitfield.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86043 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a589b3890a92196ca96b3a26e65ddafc52cbe3d5
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Nov 4 19:25:34 2009 +0000
    
        Add test for ARM indirectbr codegen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86042 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7f251671d1a905be12347721af468b8e31ad9d6e
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Wed Nov 4 19:24:37 2009 +0000
    
        Print out an informative comment for KILL instructions.
    
        The KILL pseudo-instruction may survive to the asm printer pass, just like the IMPLICIT_DEF. Print the KILL as a comment instead of just leaving a blank line in the output.
    
        With -asm-verbose=0, a blank line is printed, like IMPLICIT_DEF.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86041 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bf2c373dfc705dc218ea5dc83e8ddfe3c330702b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 4 18:57:42 2009 +0000
    
        Fix an iterator invalidation bug that happens when a hashtable
        resizes in IPSCCP.  This fixes PR5394.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86036 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fcdde22dbb4894a8b9711ef1f921426a3eb6727a
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Nov 4 08:36:50 2009 +0000
    
        Look for llvm-gcc under /Developer/usr/bin first.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86023 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a39f127c957d42f0ce408b00fb5c597fb9f766d1
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Nov 4 08:33:14 2009 +0000
    
        RangeIsDefinedByCopyFromReg() should check for subreg_to_reg, insert_subreg,
        and extract_subreg as a "copy" that defines a valno.
        Also fixes a typo. These two issues prevented a simple subreg coalescing from
        happening before.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86022 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 93e6ff9686763f528032b5c1e15153672cdde84c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 4 08:05:20 2009 +0000
    
        move two functions up higher in the file.  Delete a useless argument
        to EmitGEPOffset.
    
        Implement some new transforms for optimizing
        subtracts of two pointer-to-ints into the same vector.  This happens
        with C++ iterator idioms; for example, StringMap takes const char*s
        that point to the start and end of a string.  Once inlined, we want
        the pointer difference to turn back into a length.
    
        This is rdar://7362831.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86021 91177308-0d34-0410-b5e6-96231b3b80d8
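
        A hedged illustration of the idiom (hypothetical function; StringMap's
        real interface differs in detail): after inlining, the two pointers are
        cast to integers and subtracted, and the new transforms turn that back
        into a plain pointer difference, i.e. a length.

            #include <cstddef>

            // Hypothetical StringMap-style entry point: the key arrives as a
            // [Start, End) pointer pair rather than as pointer plus length.
            size_t KeyLength(const char *Start, const char *End) {
              // Lowers to a sub of two ptrtoints; the commit lets instcombine
              // recognize this as End - Start, the length of the key.
              return static_cast<size_t>(End - Start);
            }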
    
    commit 0ed81a9a308da7f0b05af7887e53b913cdf8128c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 4 07:57:05 2009 +0000
    
        filecheckize this test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86020 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2527652d0530076c4fcfc114b4a950bbfeeaa77a
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Nov 4 07:38:48 2009 +0000
    
        The .n suffix must go after the predicate.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86019 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b3673807c7cef986331086848ace56e2a7a8a421
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Wed Nov 4 06:15:28 2009 +0000
    
        The magic for our current brand of .bc files is BC. For older ones it was llvc.
        When was it ever "llvm"?
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86009 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9f225801184a85aa03ddc3b45da5ad0eda8986ed
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Nov 4 05:00:12 2009 +0000
    
        make IRBuilder zap "X|0" and "X&-1" when building IR, this happens
        during bitfield codegen and slows down -O0 compile times by making
        useless IR.  rdar://7362516
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86006 91177308-0d34-0410-b5e6-96231b3b80d8
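
        The effect in sketch form (assuming a stock llvm::IRBuilder; with this
        change neither call emits an instruction):

            #include "llvm/Support/IRBuilder.h" // 2009-era header location
            using namespace llvm;

            Value *EmitMaskedBitfield(IRBuilder<> &B, Value *X) {
              // X | 0 == X and X & -1 == X, so both calls now simply return X
              // instead of creating dead or/and instructions at -O0.
              Value *Or = B.CreateOr(X, B.getInt32(0));
              return B.CreateAnd(Or, B.getInt32(-1));
            }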
    
    commit ca58fd8d29aa31703adf240118e6031be84c3808
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Nov 4 04:32:50 2009 +0000
    
        configure: Add --with-optimize-option, for setting the default value of
        OPTIMIZE_OPTION.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86005 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3072a0705d8acd8aad61b8591362367c428a3c19
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Nov 4 03:08:57 2009 +0000
    
        Silence implicit conversion warnings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86000 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e6fbcac2b54259df3d1f22fbbb0a0609c51cc5c9
    Author: Lang Hames <lhames at gmail.com>
    Date:   Wed Nov 4 01:52:40 2009 +0000
    
        Another spurious friend declaration removed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85997 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 36b4226099ea59d8a6711a84c19cace000cc37ea
    Author: Lang Hames <lhames at gmail.com>
    Date:   Wed Nov 4 01:34:22 2009 +0000
    
        Removed an unnecessary friend declaration and some crufty comments from IndexListEntry.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85995 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c527ba8dbb651bc60d2ea16990283d8e96e249ed
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Wed Nov 4 01:32:06 2009 +0000
    
        Fix CMake makefiles
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85994 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 94b940d57a3e1cb133733ffa70dd7b3dec8e0d05
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Nov 4 00:42:33 2009 +0000
    
        Fix test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85986 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 98fd6060b428475e227274836140601d27016a4f
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Nov 4 00:00:39 2009 +0000
    
        Use ldr.n to workaround a darwin assembler bug.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85980 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d6a717cba117dae9a90a373698f691d9452c9c49
    Author: Lang Hames <lhames at gmail.com>
    Date:   Tue Nov 3 23:52:08 2009 +0000
    
        The Indexes Patch.
    
        This introduces a new pass, SlotIndexes, which is responsible for numbering
        instructions for register allocation (and other clients). SlotIndexes numbering
        is designed to match the existing scheme, so this patch should not cause any
        changes in the generated code.
    
        For consistency, and to avoid naming confusion, LiveIndex has been renamed
        SlotIndex.
    
        The processImplicitDefs method of the LiveIntervals analysis has been moved
        into its own pass so that it can be run prior to SlotIndexes. This was
        necessary to match the existing numbering scheme.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85979 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 24a1825a08acd458d350140f0706f3cbc5bf6a2e
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Nov 3 23:44:31 2009 +0000
    
        Fix branch folding bug for indirect branches: for a block containing only
        an unconditional branch (possibly from tail merging), this code is
        trying to redirect all of its predecessors to go directly to the branch
        target, but that isn't feasible for indirect branches.  The other
        predecessors (that don't end with indirect branches) could theoretically
        still be handled, but that is not easily done right now.
    
        The AnalyzeBranch interface doesn't currently let us distinguish jump table
        branches from indirect branches, and this code is currently handling
        jump tables.  To avoid punting on address-taken blocks, we would have to give
        up handling jump tables.  That seems like a bad tradeoff.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85975 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dca9e5cbeb8369bd4d8a6ea6a3232717884b1e8a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 23:40:48 2009 +0000
    
        reimplement multiple return value handling in IPSCCP, making it
        more aggressive and correct.  This survives building llvm in 64-bit
        mode with optimizations, and the built llvm passes make check.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85973 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 276f816673b1257798355cedad6f1529c75af3dd
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 3 23:13:34 2009 +0000
    
        Fix t2Int_eh_sjlj_setjmp. Immediate form of orr is a 32-bit instruction. So it should be 22 bytes instead of 20 bytes long.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85965 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eafe7166f828b3444172ff8d3ab2fc1f060d1004
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Nov 3 22:50:10 2009 +0000
    
        Use llvm-gcc on newer Darwins.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85963 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 28cf87a1ef35e19850b37d1c86e1e525041bb41a
    Author: Nuno Lopes <nunoplopes at sapo.pt>
    Date:   Tue Nov 3 22:07:07 2009 +0000
    
        set svn:ignore
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85953 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3e5ab0bd7db07a568cb543f7c78c1bc5ab70efa5
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 3 21:59:33 2009 +0000
    
        fconsts / fconstd immediates should be preceded with #.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85952 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c09ffd223afb21cb7619530e58969b34fdcf71c0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 21:50:09 2009 +0000
    
        fix broken link
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85951 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d6cb4521300b5d215e1363b82cdc2fd443c67088
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 3 21:40:02 2009 +0000
    
        Re-apply 85799. It turns out my code isn't buggy.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85947 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4cc5096427898be26afda67e6c1de40a98603000
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 21:26:26 2009 +0000
    
        fix test
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85946 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 339daf45dbfe2111a16d08e0068b9e3669c6684f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 21:25:50 2009 +0000
    
        merge a test into ipsccp-basic.  running llvm-ld to get one pass is... bad.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85945 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 996892b9ed838e797c1a870e9839dfcd236a5014
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Tue Nov 3 20:57:50 2009 +0000
    
        Do a scheduling pass ignoring anti-dependencies to identify candidate registers that should be renamed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85939 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a5b4c33c8bee18c293e4dbb9ce53d60bb653ad4d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 20:52:57 2009 +0000
    
        finish half thunk thought
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85937 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fb4ea016719ee4d21cd3c0d01f82cd12c346d870
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Tue Nov 3 20:39:35 2009 +0000
    
        Changes requested (avoid getFunction(), avoid Type creation via isVoidTy(), and avoid redundant isFreeCall cases) in feedback to r85176
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85936 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d078dfc911400195d7d2fc6961f5e684dbb1413c
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Tue Nov 3 20:15:00 2009 +0000
    
        <rdar://problem/7352605>. When building schedule graph use mayAlias information to avoid chaining loads/stores of spill slots with non-aliased memory ops.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85934 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d3406e998920033dc225fa89d75e161c2a47c625
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Tue Nov 3 20:02:35 2009 +0000
    
        Changes (* location in pointer variables, avoiding include, and using APInt::getLimitedValue) based on feedback to r85814
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85933 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e9b50cdd8d518c6321aba8b741024fa90d1bef4a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 19:35:13 2009 +0000
    
        turn IPSCCP back on by default, try #3 or 4? Woo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85929 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 59dc8e6f2cea716a6f677d50bd82d96cd45ebc0e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 19:24:51 2009 +0000
    
        fix an IPSCCP bug I introduced when I changed IPSCCP to start working on
        functions that don't have local linkage.  Basically, we need to be more
        careful about propagating argument information to functions whose results
        we aren't tracking.  This fixes a miscompilation of
        LLVMCConfigurationEmitter.cpp when built with an llvm-gcc that has ipsccp
        enabled.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85923 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3133d382edb595aed9df285f64f32a4ace8044bc
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Nov 3 19:10:22 2009 +0000
    
        Make this code more robust by not thinking we are making progress
        if zero bytes were read.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85922 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ad58bb3530c4ee4c2363217f515e99f700d7e52c
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Nov 3 19:06:07 2009 +0000
    
        Parse debug info attached with insertvalue and extractvalue instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85921 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0c7403016afc834b7c310a5676de5ba5dfd1ea2c
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Tue Nov 3 18:46:11 2009 +0000
    
        Move the subtarget check earlier for the NEON reg-reg fixup pass.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85914 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f982ca6b74654680fb22828fc1f9573b8ecd634d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 18:30:31 2009 +0000
    
        mark some constant globals const.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85910 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 11ac7e7b5d9a4c1ede1e3485ad38eb4ca889918c
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Nov 3 18:30:27 2009 +0000
    
        Ignore unnamed variables.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85909 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 85bc7b12e5f602c250d35c681c5639d7a56e1099
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 17:54:12 2009 +0000
    
        xfail this test since daniel turned off ipsccp
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85907 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fdca138a92bf031f1bd4fb26f0564c4ca97b5658
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 17:03:02 2009 +0000
    
        testcase for r85903
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85906 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8d2a9ca1cfb732a273a4707480f7a9b11c22d0f5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 16:50:11 2009 +0000
    
        fix a subtle bug I introduced when refactoring SCCP.  Testcase
        to follow.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85903 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a092c1232e712b8d40d71910a0bde367875bd31f
    Author: Kenneth Uildriks <kennethuil at gmail.com>
    Date:   Tue Nov 3 15:29:06 2009 +0000
    
        Make opt default to not adding a target data string and update tests that depend on target data to supply it within the test
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85900 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6d8f7ca741e223de15b4bc83ddc00033e5da5be1
    Author: Kenneth Uildriks <kennethuil at gmail.com>
    Date:   Tue Nov 3 15:25:20 2009 +0000
    
        Added a comment to a function that had none
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85899 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1e96e727b2b74ae92c8d3ed6c1c786aca6f513f5
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Tue Nov 3 12:52:50 2009 +0000
    
        Eliminate some temporaries.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85896 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 594522ab60e8ccebd9339149714422e94e1fdf72
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Nov 3 09:40:08 2009 +0000
    
        Run the functionattrs pass after the inliner, and not before.
        This makes both logical sense (see below) and increases the
        number of functions marked readnone/readonly by about 1-2%
        in practice.  The number of functions marked nocapture goes
        up by about 5-10%.  The reason it makes sense is shown by
        the following example: if you run -functionattrs -inline on
        it, then no attributes are assigned.  But if you instead run
        -inline -functionattrs then @f is marked readnone because the
        simplifications produced by the inliner eliminate the store.
    
        @x = external global i32
    
        define void @w(i1 %b) {
                br i1 %b, label %write, label %return
        write:
                store i32 1, i32 *@x
                br label %return
        return:
                ret void
        }
    
        define void @f() {
                call void @w(i1 0)
                ret void
        }
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85893 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fab6da0286f9888c0c3a10117bcdbcb4504fe99a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Nov 3 07:49:22 2009 +0000
    
        Speculatively re-disable IPSCCP; I think it's still breaking things.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85884 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 39c2f8441685672f1b501c4fc24c22dba6e19274
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Nov 3 07:26:38 2009 +0000
    
        lit: Update Clang's test style to use XFAIL: and XTARGET: lines that match
        LLVM's tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85882 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5d63848cf8c3a3b885f75dd0be11e5e217c254e9
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 3 07:08:08 2009 +0000
    
        Trim unnecessary include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85878 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 74f8efd28d43839bb2513ad39c93a5ae317c3764
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Nov 3 06:29:56 2009 +0000
    
        For Thumb indirect branches, use "mov pc, reg" which does not switch
        between ARM/Thumb modes and does not require the low bit of the target
        address to be set for Thumb.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85874 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8ea4690298176786e7a20f69755e5dadaa4f5e98
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Nov 3 06:29:36 2009 +0000
    
        Fix a funky "declared with greater visibility than the type of its field"
        warning from gcc by removing VISIBILITY_HIDDEN attributes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85873 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f865754812e17df0cee7e4a1d387bac6603f015b
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 3 05:52:54 2009 +0000
    
        Fix PR5367. QPR_8 is the super regclass of DPR_8 and SPR_8.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85871 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8ef1ec1295a24dc9ed54952005a3f2cab20165c6
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 3 05:51:39 2009 +0000
    
        Clean up copyRegToReg.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85870 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 89947e524969d671c7cb8763fdc333ee02727338
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Nov 3 05:50:57 2009 +0000
    
        Add QPR_8 as a superreg class of SPR_8 and DPR_8.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85869 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7cae17293078de6851d16451c78680666157e05d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 05:35:19 2009 +0000
    
        remove unneeded checks of isFreeCall
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85866 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6679f518c4388f1b5a580f9cf6d82391a863dfd4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 05:34:51 2009 +0000
    
        remove a check of isFreeCall: the argument to free is already nocapture so the generic call code works fine.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85865 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a856e3625f4d2486c2968afbbbc1192ae191b88f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 05:33:46 2009 +0000
    
        remove an isFreeCall check: it is a callinst that can write to memory already.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85863 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 53716298997391af4aa6547150f9162a942312ff
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Tue Nov 3 04:14:12 2009 +0000
    
        Update CMake file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85861 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6008c51749687d8525357ae6f75f3deebfbc032c
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Tue Nov 3 04:06:58 2009 +0000
    
        Support updating 'llvm_add_target' lists as well.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85860 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cfdce75f068a2af4fc55051c09b80119fc720a81
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Tue Nov 3 04:01:53 2009 +0000
    
        Alphabetize.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85859 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3a2499a52b50685ed16c51d28550a09d0235dd29
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Nov 3 03:42:51 2009 +0000
    
        turn IPSCCP back on now that the iterator invalidation bug is fixed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85858 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit adcdd9b902b4e48f1231f8a504dd7a6871eb1515
    Author: Nate Begeman <natebegeman at mac.com>
    Date:   Tue Nov 3 03:30:51 2009 +0000
    
        Add a couple more target nodes
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85857 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b00cf59da47c3f365ab2834850876dfccdaf2346
    Author: Nate Begeman <natebegeman at mac.com>
    Date:   Tue Nov 3 02:19:31 2009 +0000
    
        Declare sin & cos as readonly so they match the code in SelectionDAGBuild
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85853 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3ed3a35a6549891d3a9ef08bcc9caaf21703a537
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Tue Nov 3 01:04:26 2009 +0000
    
        Turn the NEON reg-reg move fixup code into a separate pass. This should reduce the compile time.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85850 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3daf4012398a2511b36cf7bd31192f098f9820b9
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Tue Nov 3 00:37:36 2009 +0000
    
        Temporary xfail until PR5367 is resolved
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85848 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b6057985f14be94252b2741f5e8ae338879395a6
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Tue Nov 3 00:24:48 2009 +0000
    
        Revert r85049, it is causing PR5367
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85847 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c9eda2cdfdd2fa0f49365d669d74968e1a925e2
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Nov 3 00:02:05 2009 +0000
    
        Revert previous change to a comment.  The BlockAddresses go in the
        constant pool so they don't get wrapped separately.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85844 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dd355c46afc286a3d561ac671dba1f057cde8e6f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 23:25:39 2009 +0000
    
        fix a nasty iterator invalidation bug from my conversion from
        std::map to DenseMap, exposed on release llvm-gcc bootstrap.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85840 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit da8a1abb3dc7f1fde5759ccb0fc0820c4198d94d
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Nov 2 21:49:14 2009 +0000
    
        Revert 85799 for now. It might be breaking llvm-gcc driver.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85827 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d1da913d170e20cf91fd12f9da5d733f171bcb7d
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Nov 2 20:59:23 2009 +0000
    
        Put BlockAddresses into ARM constant pools.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85824 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b7209bfc4cf90c88ce0d1795184e99fe54091c53
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Mon Nov 2 20:14:39 2009 +0000
    
        Fix ARMAsmParser::ParseMemoryOffsetReg() where the parameter OffsetRegNum should
        have been passed as a reference.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85823 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d78dfdfe75fcfb54c65bd312ab31fec89d6741be
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 19:31:10 2009 +0000
    
        revert r8579[56], which are causing unhappiness in buildbot land.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85818 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5bcc6637fb7f1d7a618c4fc9f981c7ae1d6cead0
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Mon Nov 2 19:11:03 2009 +0000
    
        CMake: Report an error if there is an unknown .cpp file in a source
        directory.
    
        This is useful in case someone who works with the config&make build
        system forgot to add a file to its CMakeLists.txt. Instead of
        obtaining undefined references at link time, cmake will complain at
        configure time on the first build after an svn update.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85817 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 529fbeb0c9987e3211c6d44d123c13e6dad5673c
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Mon Nov 2 18:51:28 2009 +0000
    
        Set bit instead of calling pow() to compute 2 << n
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85814 91177308-0d34-0410-b5e6-96231b3b80d8
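
        The arithmetic in sketch form (illustrative functions): pow() detours
        through floating point, which is slower and can round, while a shift
        sets the bit exactly.

            #include <cmath>
            #include <stdint.h>

            uint64_t PowerViaPow(unsigned N) {
              // 2 << N == 2^(N+1), computed here with an FP round trip.
              return static_cast<uint64_t>(std::pow(2.0, double(N + 1)));
            }

            uint64_t PowerViaShift(unsigned N) {
              return uint64_t(2) << N; // exact: directly sets bit N+1
            }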
    
    commit 76356bcc3bdfbdb0470281af5b3505a990cf1e94
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 18:28:45 2009 +0000
    
        typo
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85812 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c264c9eca13f6b368663f98b1cf3c3d23dc7f061
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 18:27:22 2009 +0000
    
        merge 2008-03-10-sret.ll into ipsccp-basic.ll, and upgrade its syntax.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85811 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3e5b03000665b0ef55d866a33f88bb8375f96695
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 18:22:51 2009 +0000
    
        disable IPSCCP support for multiple return values; it is buggy, so just
        disable it until I can fix it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85810 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9a8ec820614ab5b5a0edef1297cbc4cec4b1c3d9
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Mon Nov 2 17:28:36 2009 +0000
    
        Fix schedule model for BFC.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85809 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a07d68f6f017c73cbca1222cddd6baf40974974b
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Nov 2 17:10:37 2009 +0000
    
        Hyphenate some comments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85808 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9374f78f5230bcd68f2e8b24d0b975cb86527e76
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Mon Nov 2 17:06:28 2009 +0000
    
        Chain dependencies used to enforce memory order should have a latency of 0 (except for the true dependency of a Store followed by an aliased Load... we estimate that case with a single cycle of latency, assuming the hardware will bypass).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85807 91177308-0d34-0410-b5e6-96231b3b80d8
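
        The rule as a hedged sketch (hypothetical names, not the scheduler's
        real interface):

            // Latency for a chain (memory-ordering) edge in the schedule
            // graph: pure ordering edges cost nothing, but a store followed
            // by an aliased load is modeled as one cycle to account for
            // store-to-load forwarding in the hardware.
            unsigned ChainEdgeLatency(bool SrcIsStore, bool DstIsAliasedLoad) {
              return (SrcIsStore && DstIsAliasedLoad) ? 1 : 0;
            }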
    
    commit 62acbf19235b3b20a24958b973538e35ec6f2d67
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Nov 2 16:59:06 2009 +0000
    
        Add support for BlockAddress values in ARM constant pools.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85806 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d737897784e400b7c2642030b362faf42493a398
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Nov 2 16:58:31 2009 +0000
    
        Prune unnecessary include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85805 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9a615773ca6289058893e18267c9ce4095cb3656
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Nov 2 08:09:49 2009 +0000
    
        Initialize the machine LICM CSE map upon the first time an instruction is hoisted to
        the loop preheader. Add instructions which are already in the preheader block that
        may be common expressions of those that are hoisted out. This does get a few more
        instructions CSE'ed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85799 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8ff61c68cde1cf4d785362c21f4a316436d14a22
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Nov 2 07:58:25 2009 +0000
    
        These are done / no longer care.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85798 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d9ca3d1e7116200fe9e62cb9397c783d1d653159
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Nov 2 07:51:19 2009 +0000
    
        Add an entry.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85797 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f386afd11f1c8e56d1afc8ff534bc58a2a3c29ed
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 07:34:29 2009 +0000
    
        now that ip sccp *really* subsumes ipcp, remove ipcp again.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85796 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 29cc27a3e256f969418198dcff1c9b28317890bb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 07:33:59 2009 +0000
    
        improve IPSCCP to be able to propagate the result of a "!mayBeOverridden"
        function to calls of that function, regardless of whether it has local
        linkage or has its address taken.  Not escaping should only affect
        whether we make an aggressive assumption about the arguments to a
        function, not whether we can track the result of it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85795 91177308-0d34-0410-b5e6-96231b3b80d8
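
        An illustrative (hypothetical) example of what this permits: answer()
        below has external linkage and its address escapes, but since its
        definition cannot be overridden, IPSCCP may still fold its result at
        direct call sites.

            int (*FnPtr)(); // the function's address escapes through this

            int answer() { return 42; } // strong definition: !mayBeOverridden

            int user() {
              FnPtr = answer;      // escaped: indirect callers are unknown,
              return answer() + 1; // yet the tracked result still folds to 43
            }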
    
    commit 6aed9d422dfd1b2e46a5ae63d95e2c83f5279247
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Nov 2 07:11:54 2009 +0000
    
        Remove an irrelevant and poorly reduced test case.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85794 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 74f9ed2fef57ffe21074042e681c7c000cb4e0a1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 06:34:04 2009 +0000
    
        don't mark the arguments of a prototype overdefined; they will never be queried.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85793 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e84f1231eff713701e9d60c4b2b16f3997daa0eb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 06:28:16 2009 +0000
    
        restore some code I removed in r85788, refactor it into
        a shared place instead of duplicating it 4 times.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85792 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c2a4e20894cd1fa7afc1c7586c4b650ff5cd8a56
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 06:17:06 2009 +0000
    
        remove some confused code that dates from when we had
        "multiple return values" but not "first class aggregates"
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85791 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a5ffa7c6137334a259426aa4f656a6d28193a459
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 06:11:23 2009 +0000
    
        avoid redundant lookups in BBExecutable, and make it a SmallPtrSet.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85790 91177308-0d34-0410-b5e6-96231b3b80d8
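
        The redundant-lookup pattern, sketched with std::set semantics
        (SmallPtrSet's insert likewise reports whether the element was new, so
        a single probe replaces a count()-then-insert pair):

            #include <set>

            typedef std::set<const void *> BlockSet; // SmallPtrSet stand-in

            bool MarkBlockExecutable(BlockSet &BBExecutable, const void *BB) {
              // Before: BBExecutable.count(BB), then insert() -- two lookups.
              // After: insert() alone, using its "was it new" result.
              return BBExecutable.insert(BB).second;
            }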
    
    commit 0148bb29a5f14dfd9c5c94ec7144c0554cd2691f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 06:06:14 2009 +0000
    
        Use the libanalysis 'ConstantFoldLoadFromConstPtr' function
        instead of reinventing SCCP-specific logic.  This gives us
        new powers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85789 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6367c3fa7cb3820bc31a40e9195388db37e452a0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 05:55:40 2009 +0000
    
        switch the main 'ValueState' map from being an std::map to being
        a DenseMap.  Doing this required being aware of subtle iterator
        invalidation issues, but it provides a big speedup.  In a
        release-asserts build, this sped up optimizing 403.gcc from
        1.34s -> 0.79s (IPSCCP) and 1.11s -> 0.44s (SCCP).
    
        This commit also conflates in a bunch of general cleanups, sorry.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85788 91177308-0d34-0410-b5e6-96231b3b80d8
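
        The invalidation hazard being navigated, as a minimal sketch (DenseMap
        is open-addressed, so any insertion can rehash and move every entry;
        std::map's node-based iterators never had this problem):

            #include "llvm/ADT/DenseMap.h"
            using namespace llvm;

            void Hazard(DenseMap<int, int> &ValueState) {
              int &Slot = ValueState[1]; // reference into the table
              ValueState[2] = 0;         // may grow the table, moving entries
              // Slot = 7;               // would now be a use-after-free
              ValueState[1] = 7;         // safe: re-look-up after insertions
            }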
    
    commit d9a5f772aa33a15e8ae9db61010ea0e347b771ca
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Nov 2 04:44:55 2009 +0000
    
        Unbreak ARMBaseRegisterInfo::copyRegToReg.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85787 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c9661b6764715959f2b35c795af1dfbe5a92bbb6
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 04:37:17 2009 +0000
    
        fix a bug exposed by moving SRoA earlier which caused a crash building kc++
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85786 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aae7a3d378ed2f64bb369ad672439f08fb8e685c
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Nov 2 03:46:35 2009 +0000
    
        Missing bit of universal build + hosted
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85785 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 84388f11cf36e91102b33b61b5209318e78445c8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 03:25:55 2009 +0000
    
        only IPSCCP incoming arguments if the function is executable, this fixes
        an assertion on the buildbot.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85784 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 220571cfe77c2beb85bdf42c2e1feded607eea9f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 03:21:36 2009 +0000
    
        add a new ValueState::getConstantInt() helper, use it to
        simplify some code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85783 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4ba3276aeb2caea7729d5d792e01a9800b9208b4
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Nov 2 03:20:57 2009 +0000
    
        Fix the 'malloc.h is deprecated' warning on DragonFly BSD.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85782 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 97489014b43242375a4e5258472ee082a1c786ee
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Nov 2 03:14:31 2009 +0000
    
        Fix for a warning seen on DF-BSD. Victor, please fix this to use a shift instead of pow().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85781 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b52f700e8ae038dcd38a17112f942f1be298ade2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 03:03:42 2009 +0000
    
        tidy up some more: remove some extraneous inline specifiers, return harder.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85780 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9e974a781487f7bee54b92a541f3f682ec95e479
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Nov 2 02:55:39 2009 +0000
    
        Apply fix for PR5135, Credit to Andreas Neustifter.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85779 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c9edab8daff6c1e9d474961b0e7c97cd57dbf2a7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 02:54:24 2009 +0000
    
        eliminate the SCCPSolver::getValueMapping method.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85778 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e9474d5d76734a90e0d2e92a420e848fb435cc32
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 02:48:17 2009 +0000
    
        fix failures introduced in r85774
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85777 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 14513dcc2b5d3dd742deaf0080e0995c865db87f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 02:47:51 2009 +0000
    
        factor duplicated code into a new DeleteInstructionInBlock
        function, eliminate temporary (and pointless) smallvector.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85776 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c879800404e3705403f04009016ced1d92a2c114
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 02:33:50 2009 +0000
    
        Chris used to use '...' instead of proper grammar.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85775 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit adaf73315966fa8ad1c5053e7ac5aa15eed7aabd
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 02:30:06 2009 +0000
    
        remove some extraneous llvmcontext stuff.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85774 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1eb405b598213a9ffe077d4b42c23f055568294e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 02:20:32 2009 +0000
    
        change LatticeVal to use PointerIntPair to save some space.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85773 91177308-0d34-0410-b5e6-96231b3b80d8
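
        A hedged reconstruction of the layout (names are illustrative):
        PointerIntPair packs a small integer into the low, alignment-guaranteed
        bits of a pointer, so the lattice tag no longer costs a separate word.

            #include "llvm/ADT/PointerIntPair.h"
            using namespace llvm;

            enum LatticeStateTy { undefined, constant, forcedconstant,
                                  overdefined };

            class LatticeValSketch {
              // Two tag bits live in the pointer's low bits.
              PointerIntPair<void *, 2, LatticeStateTy> Val;
            public:
              LatticeStateTy getState() const { return Val.getInt(); }
              void *getRawConstant() const { return Val.getPointer(); }
            };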
    
    commit 69fa3f564d114d4f2b4d953e736373c917c6d640
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 02:06:37 2009 +0000
    
        fix instcombine to only do store sinking when the alignments
        of the two loads agree.  Propagate that onto the new store.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85772 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d0996a2f36e9fc7a4573e296c84deb012c196ff0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 02:00:18 2009 +0000
    
        merge a test into store.ll
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85771 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 31cf6ca5aa61e751ceaa6c4555acb592e551d67e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 2 01:58:03 2009 +0000
    
        convert to filecheck
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85770 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b4ae2294dc77422063a9f739b27410c9f0c151e4
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Nov 2 00:25:26 2009 +0000
    
        Add missing end-tag.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85769 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 614b32b9c179b1e897cd953eb90fd00e374c1728
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Nov 2 00:24:16 2009 +0000
    
        Some formatting changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85768 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f817d7bfb1568b67a790dbae0ea05e162b6bc560
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Mon Nov 2 00:12:06 2009 +0000
    
        Handle splats of undefs properly. This includes the testcase for PR5364 as well.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85767 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d9331217b489a204b9264bb424ebf83612e1b29e
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Mon Nov 2 00:11:39 2009 +0000
    
        Do not infer the target type for COPY_TO_REGCLASS from the dest regclass; this won't work if it can contain several types. Require an explicit result type for the node for now. This fixes PR5364.
    
        PS: It seems that blackfin usage of copy_to_regclass is completely bogus!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85766 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1853b6e277b38bc8d796951f371cbc5824b21aff
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Mon Nov 2 00:11:06 2009 +0000
    
        64-bit FP loads & stores operate on both NEON and VFP pipelines.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85765 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eed9c14b562d5c43c787b0a74c99f3adf6238f6b
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Mon Nov 2 00:10:38 2009 +0000
    
        Use NEON reg-reg moves where profitable. This reduces "domain-cross" stalls that occurred when we used to mix VFP and NEON code (the former was used for reg-reg moves).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85764 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87e4592785576c12930d0ffd65b33c7b7d889fc5
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Nov 1 23:50:04 2009 +0000
    
        Add PseudoSourceValue::mayAlias. It returns true if the object can ever alias any LLVM IR value.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85762 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9b1f541793e2143762a8f08e65d46bec80fd758e
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Nov 1 22:08:51 2009 +0000
    
        Line this up as well.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85748 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0f80251773274b4df99b6b5cbea024feebd9aae3
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Nov 1 22:07:54 2009 +0000
    
        Fix whitespace.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85747 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ac1b43d80bf7bac148dd5d4ad3dd98054cf9408
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Nov 1 22:04:35 2009 +0000
    
        Fix a couple more places where we are creating ld / st instructions without memoperands.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85746 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3e9fef42389147453ebcbe12543ae9f1c6530f7c
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Nov 1 21:12:51 2009 +0000
    
        Make use of imm12 version of Thumb2 ldr / str instructions more aggressively.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85743 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 66207cc7cda2cfcda69b9ba092279e2357d94a6d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 20:41:59 2009 +0000
    
        fix two strange things in the default passmgr:
    
        1. We'd run simplifycfg at the very start, even though
           the per-function passes have already cleaned this up.

        2. In the main per-function pipeline that is interlaced with inlining
           etc., we would do instcombine, jump threading, simplifycfg *before*
           doing SRoA.  SRoA is much more likely to expose opportunities for
           these passes than they are for SRoA, so move SRoA up earlier.
    
        also add some comments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85742 91177308-0d34-0410-b5e6-96231b3b80d8
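
        A hedged sketch of the resulting ordering, written against the
        pass-creation helpers of the time (assumed names; the real setup
        surrounds these with many more passes):

            #include "llvm/PassManager.h"
            #include "llvm/Transforms/Scalar.h"
            using namespace llvm;

            void AddInterlacedCleanupPasses(PassManager &PM) {
              PM.add(createScalarReplAggregatesPass()); // SRoA first: exposes
              PM.add(createInstructionCombiningPass()); // the scalars and
              PM.add(createJumpThreadingPass());        // branches the cleanup
              PM.add(createCFGSimplificationPass());    // passes chew through.
            }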
    
    commit 5409d9dbbfaa11af8119b76ea7263c3a5a2271d7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 20:10:11 2009 +0000
    
        merge phi-merge.ll into phi.ll
    
        I don't know what Dan wants to do with phi-merge-gep.ll, I'll let
        him deal with it because instcombine may end up sinking these.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85739 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 52fe1bc4548599673b9040a44fe8e2aad2e65827
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 20:07:07 2009 +0000
    
        when merging two loads, make sure to take the min of their alignment,
        not the max.  This didn't matter until the previous patch because
        instcombine would refuse to sink loads with differing alignments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85738 91177308-0d34-0410-b5e6-96231b3b80d8
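
        The rule in one line (hypothetical helper): the merged load may execute
        along either incoming path, so it can only promise the weaker of the
        two guarantees.

            #include <algorithm>

            unsigned MergedLoadAlignment(unsigned AlignA, unsigned AlignB) {
              // max() would claim an alignment one path never provided.
              return std::min(AlignA, AlignB);
            }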
    
    commit 38751f83c7e653378da439fc511928e3a0d3fb72
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 20:04:24 2009 +0000
    
        split load sinking out to its own function, like gep sinking.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85737 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 310a00f632a2db7665ed388d9268af8e1f3d02f5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 19:50:13 2009 +0000
    
        fix a bug noticed by inspection: when instcombine sinks loads through
        phis, it didn't preserve the alignment of the load.  This is a missed
        optimization when the alignment is high and a miscompilation when the
        alignment is low.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85736 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1f50617de31bdfc1138cbd1681bf67e59b5f5252
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 19:29:12 2009 +0000
    
        IPSCCP apparently is not a superset of IPCP; this is bad,
        but I'll investigate it separately.  This unbreaks
        test/FrontendC/weak_constant.c
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85735 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d05b6010946f8f7554210e983f742eb226500c40
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 19:22:20 2009 +0000
    
        convert to filecheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85734 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0ea1608ce43d2a636745a58bd7ef0a8819609b53
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Sun Nov 1 19:16:21 2009 +0000
    
        Improve the other instance of the comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85733 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit af5f1d6c07a358b8378b0458c81a473cefcf15cc
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Sun Nov 1 19:12:43 2009 +0000
    
        Add a missing closing parenthesis, and tweak to fit in 80
        columns.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85732 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b5e2138e1894d150c0f3f830c236028ff5ad0f87
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 19:09:12 2009 +0000
    
        only run GlobalDCE at -O3 and run it late instead of early.
        GlobalOpt already deletes trivially dead functions/globals,
        so GlobalDCE only adds value for cycles of dead things.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85731 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3f2e8ec4afbba963c09dd4856258dd5cabf59112
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 19:03:42 2009 +0000
    
        cleanups, switch GlobalDCE to SmallPtrSet instead of std::set
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85730 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 45ad5c40eb868d053ddbd2296a26c02882c6bd37
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 18:57:49 2009 +0000
    
        We currently only run ipsccp at LTO time, which is silly.  It subsumes
        ipconstprop and doesn't take much time.  Just run it in its place.
    
        This adds a testcase for it, which I plan to expand to cover other
        "integration" cases, where we expect the optimizer to be able to
        eliminate various things.  Due to phase order issues we've regressed
        in a number of areas and integration tests are the only way I see to
        prevent this.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85729 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c3ce1a63e4891b7c53dbdde5dffb49b45f5672de
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 18:42:03 2009 +0000
    
        remove a bunch of locking from LLVMContextImpl.  Since only one thread
        can be banging on a context at a time, this isn't needed.  Owen, please
        review.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85728 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dda0326aa939d3dde36927286257036cd4fcb6d4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 18:17:37 2009 +0000
    
        improve comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85725 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 789eb4ae48f61b77b0ee688f12758f8e401358c1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 18:16:30 2009 +0000
    
        add a comment about why we don't allow inlining indbr.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85724 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit adc8e889dc20cad3cfbb3fb8d51e53c5c2b9625c
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Nov 1 18:13:29 2009 +0000
    
        Fix tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85723 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b8b101895463b01ac2c4044f2d49c2d624ea2e1a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 18:11:50 2009 +0000
    
        the verifier shouldn't modify the IR.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85722 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 67127294f701ddec34ccb7a5dc324817be4645e7
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Sun Nov 1 16:42:53 2009 +0000
    
        Reverting 85714, 85715, 85716, which are breaking the build
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85717 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 37a513fcc2c75f3080c93e6fcd9a97be368337fc
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sun Nov 1 15:28:36 2009 +0000
    
        Add a function to Passes.h to allow clients to create instances
        of the ScalarEvolution pass without needing to #include ScalarEvolution.h.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85716 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 47070a3479c96876132aa2ac6d17adde7a860402
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sun Nov 1 15:23:35 2009 +0000
    
        Don't #include Pass.h from CallGraph.h.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85715 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e7f59f3af13b5fdee43de653d00598ca5a53bff6
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sun Nov 1 15:20:19 2009 +0000
    
        Remove the #include of Pass.h from PassManager.h. This breaks a significant
        #include dependency, as frontends commonly pull in PassManager.h.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85714 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2f4875062faa6cad24c7ada735dba0877b5da634
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 06:11:53 2009 +0000
    
        teach ipsccp and ipconstprop that a blockaddress doesn't 'take the address' of a function
        in a way that should prevent ip constprop.  This allows clang/test/CodeGen/indirect-goto.c
        to pass with the new indirect goto lowering.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85709 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cab3885bac1d1f297b73a1ea491eeeb870758b27
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 04:57:33 2009 +0000
    
        change llvm::MergeBlockIntoPredecessor to not merge two blocks BB1->BB2
        when BB2 has its address taken.  Since it ends up doing BB2->rauw(BB1),
        this can cause the address of the entry block to be taken.  Since it is
        generally undesirable to nuke blocks whose address is taken, even when
        we can, so just unconditionally stop this xform.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85708 91177308-0d34-0410-b5e6-96231b3b80d8
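
        A toy model of the guard (hypothetical types, not the committed code):

            // Merging BB2 into BB1 ends with BB2->rauw(BB1), so a blockaddress
            // referring to BB2 would silently be retargeted at BB1, possibly
            // the entry block.  Refuse the transform up front.
            struct Block { bool AddressTaken; };

            static bool canMergeIntoPredecessor(const Block &BB2) {
              return !BB2.AddressTaken;
            }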
    
    commit b2b693f125e133847bd713d09e6d3894dbabe0cb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 04:23:20 2009 +0000
    
        strengthen an assumption: RevectorBlockTo knows that PredBB
        ended in an uncond branch because the pass requires BreakCriticalEdges.
    
        However, BCE doesn't eliminate critical edges from indbrs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85707 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c29feace6d1ac93295c58c29e7c7fb9ae9afc77
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 04:08:01 2009 +0000
    
        fix an issue where the verifier would reject a function whose entry
        block had its address taken even if the blockaddress was dead.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85706 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f395da4484452032c224d0d98420bbeb09b56e3b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 03:42:55 2009 +0000
    
        if CostMetrics says to never duplicate some code, don't unswitch a loop.
        This prevents unswitching from duplicating indbr's.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85705 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 16e7f57318711bda14e78f61c275bd4d9e3e1f6d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 03:40:38 2009 +0000
    
        constant fold indirectbr(blockaddress(%bb)) -> br label %bb.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85704 91177308-0d34-0410-b5e6-96231b3b80d8
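
        In outline, the fold looks something like this (a sketch against
        2009-era LLVM headers, not the committed diff; a real implementation
        must also deal with the other, now-unreachable destinations):

            #include "llvm/Constants.h"
            #include "llvm/Instructions.h"
            using namespace llvm;

            static bool foldIndirectBr(IndirectBrInst *IBI) {
              BlockAddress *BA = dyn_cast<BlockAddress>(IBI->getAddress());
              if (!BA)
                return false;                               // address not known
              BranchInst::Create(BA->getBasicBlock(), IBI); // br label %bb
              IBI->eraseFromParent();
              return true;
            }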
    
    commit 03796eeeb9c2c303c1fa1513ce009882e355bddc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 03:25:03 2009 +0000
    
        improve x86 codegen support for blockaddress.  We now compile
        the testcase into:
    
        _test1:                                                     ## @test1
        ## BB#0:                                                    ## %entry
        	leaq	L_test1_bb6(%rip), %rax
        	jmpq	*%rax
        L_test1_bb:                                                 ## Address Taken
        LBB1_1:                                                     ## %bb
        	movb	$1, %al
        	ret
        L_test1_bb6:                                                ## Address Taken
        LBB1_2:                                                     ## %bb6
        	movb	$2, %al
        	ret
    
        Note, it is very very strange that BlockAddressSDNode doesn't carry
        around TargetFlags.  Dan, please fix this.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85703 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c1a9f7074f6a2b4e681cd895ab32f9e572bcef2e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 03:07:53 2009 +0000
    
        pull the check for a return inst out of the loop; never inline a callee
        that contains an indirectbr.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85702 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0a1444a7e6a2e0e68c72f4aac1f4e638645d9dac
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 03:03:03 2009 +0000
    
        Fix BlockAddress::replaceUsesOfWithOnConstant to correctly
        maintain the block use count in SubclassData.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85701 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 26cc8cd188e185295429c72ccda6fb3b5003d4cf
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 02:46:39 2009 +0000
    
        implement linker support for BlockAddress.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85700 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 620cead9d0a40e874503c5605952c40186d585f0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 1 01:27:45 2009 +0000
    
        Revert 85678/85680.  The decision is to stay with the current form of
        indirectbr, thus we don't need "blockaddr(@func, null)".  Eliminate it
        for simplicity.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85699 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 538da74ec2d9bb494a7704e195515c8be8b5b373
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Oct 31 23:46:45 2009 +0000
    
        Use cbz and cbnz instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85698 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 67420bf16a60b1ae36b4d38187b3dd43e73fee51
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Oct 31 22:57:36 2009 +0000
    
        vml[as].f32 cause stalls in following advanced SIMD instructions. Avoid using
        them for scalar floating point operations for now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85697 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5859872920b063601b0acf9b30a8c7ed2872be1b
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Oct 31 22:20:56 2009 +0000
    
        Consolidate test files
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85696 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c08430d9dfcc194964945f33245042562d740760
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Oct 31 22:16:14 2009 +0000
    
        Change to use FileCheck
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85695 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a7a7c531a022abcd54f4a15cbf69ad7000fe4b72
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Oct 31 22:14:17 2009 +0000
    
        Make tests more explicit about which instructions are expected.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85694 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7c3c4f57135e652ad535cc4bb32cedd98ceb2834
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Oct 31 22:12:44 2009 +0000
    
        Grammar tweak to comments
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85693 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4790cb4896fbb204ec53b8e170d492e3e2842789
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 31 22:11:15 2009 +0000
    
        Make sure PRE doesn't split crit edges from indirectbr.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85692 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7306aa658f13e92124c7f3fd0bd6c3dbe7084fa6
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Oct 31 22:10:38 2009 +0000
    
        Update test to be more explicit about what instruction sequences are expected for each operation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85691 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 083a3ef68afd72bf97a4330382bc3e653f47559d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 31 22:04:43 2009 +0000
    
        llvm::SplitEdge should refuse to split an edge from an indirectbr.
        Fix CodeGenPrepare to not try to split edges from indirectbr.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85690 91177308-0d34-0410-b5e6-96231b3b80d8
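
        The reason, in sketch form (hypothetical helper against 2009-era
        headers, not the committed code):

            #include "llvm/Instructions.h"
            using namespace llvm;

            // An edge leaving an indirectbr can't be split: the successor is
            // reached through a blockaddress value, and inserting a new block
            // on the edge wouldn't change where the computed jump lands.
            static bool isSplittableEdge(const TerminatorInst *TI) {
              return !isa<IndirectBrInst>(TI);
            }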
    
    commit 1a4ddea811297cd5ab6fef2d4ca9d5b1279be7f0
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Oct 31 21:52:58 2009 +0000
    
        Update test to be more explicit about what instruction sequences are expected for each operation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85689 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f281b470896161c227bf2d1a1be7c1f178b6e46a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 31 21:51:10 2009 +0000
    
        update the comment above llvm::SplitCriticalEdge, and make
        it abort on IndirectBrInst as described in the comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85688 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a3bddfb18197f4b00b161db0c643e433f79e67e6
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Oct 31 21:42:19 2009 +0000
    
        Expand 64-bit logical shift right inline
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85687 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5480bad7b9ba0a765d937631674f2b65f8eeea1e
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Oct 31 21:00:56 2009 +0000
    
        Expand 64-bit arithmetic shift right inline
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85685 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 70337f0e084c54af2348a7a8aee72a2823c0e5b8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 20:59:09 2009 +0000
    
        Fix a missing newline in the dwarf output code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85684 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 57b31659358cd7ae9cd045ec2ce3019b5adc234a
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 20:19:03 2009 +0000
    
        Make -print-machineinstrs more readable.
         - Be consistent when referring to MachineBasicBlocks: BB#0.
         - Be consistent when referring to virtual registers: %reg1024.
         - Be consistent when referring to unknown physical registers: %physreg10.
         - Be consistent when referring to known physical registers: %RAX
         - Be consistent when referring to register 0: %reg0
         - Be consistent when printing alignments: align=16
         - Print jump table contents.
         - Don't print host addresses, in general.
         - and various other cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85682 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de4f1504bec844ec60a29d3946ff7257da0c3c50
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 20:17:39 2009 +0000
    
        Factor out more code into addCommonCodeGenPasses. The JIT wasn't
        previously running CodePlacementOpt. Also print headers before
        each dump in -print-machineinstrs mode, so that it's clear which
        dump is which.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85681 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4200f2c815159e939351d5d447f7f68fb557e600
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 31 20:13:24 2009 +0000
    
        adjust a couple xforms to work with null bb's in BlockAddress.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85680 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7bc5f36e65090bc44026333e2966b720d0747a0d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 31 20:08:37 2009 +0000
    
        Make blockaddress(@func, null) be valid, and make 'deleting a basic
        block with a blockaddress still referring to it' replace the invalid
        blockaddress with a new blockaddress(@func, null) instead of an
        inttoptr(1).
    
        This changes the bitcode encoding format, and still needs codegen
        support (this should produce a non-zero value; referring to the entry
        block of the function would also be quite reasonable).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85678 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 12f2dae2e58e01e256f08d511bb1a8b1d46b6a95
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Sat Oct 31 19:54:06 2009 +0000
    
        Force triple; darwin's ASM syntax differs from linux's.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85676 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 998eacce4edb2421d0b40349057d8c30c1585ade
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Oct 31 19:38:01 2009 +0000
    
        Expand 64-bit left shift inline rather than using the libcall. For now, this
        is unconditional. Making it still use the libcall when optimizing for size
        would be a good adjustment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85675 91177308-0d34-0410-b5e6-96231b3b80d8
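
        A self-contained sketch of the expansion (plain C++, not the ARM
        lowering): a 64-bit left shift on a 32-bit target becomes operations
        on the two halves.

            #include <cstdint>
            #include <cstdio>

            // Shift a 64-bit value held as two 32-bit halves left by Amt.
            static void shl64(uint32_t &Hi, uint32_t &Lo, unsigned Amt) {
              Amt &= 63;
              if (Amt == 0)
                return;
              if (Amt >= 32) {              // Lo shifts entirely into Hi
                Hi = Lo << (Amt - 32);
                Lo = 0;
              } else {                      // bits carry from Lo into Hi
                Hi = (Hi << Amt) | (Lo >> (32 - Amt));
                Lo <<= Amt;
              }
            }

            int main() {
              uint32_t Hi = 0x00000001, Lo = 0x80000000;
              shl64(Hi, Lo, 4);             // 0x0000000180000000 << 4
              std::printf("%08x%08x\n", Hi, Lo);  // prints 0000001800000000
            }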
    
    commit 55d502f2448942052449602527322a8361508307
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Sat Oct 31 19:22:24 2009 +0000
    
        Add missing colons for FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85674 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dde39bd81cc82e60c13053190247af4b6486845d
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Oct 31 19:06:53 2009 +0000
    
        Convert to FileCheck
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85673 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 154b234c6da80515738cb79aee89794f05bfa0a5
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Oct 31 18:00:10 2009 +0000
    
        The universal SDKROOT should only be assigned when hosted. Otherwise the
        SDKROOT can refer to the target when we're building for the host.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85672 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d56c0cb634aa6812916307ad6f6a72e8589864b1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 31 17:48:31 2009 +0000
    
        add a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85671 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 80a0f8f856c40fcf82dafc313f76f2e7a64913c5
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 17:33:01 2009 +0000
    
        Revert r85667. LoopUnroll currently can't call utility functions which
        auto-update the DominatorTree because it doesn't keep the DominatorTree
        current while it works.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85670 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0afbe771f831d12dccd7813ea2a5e3bdac391b0b
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 16:16:41 2009 +0000
    
        Remove redundant code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85668 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 508ac4b7e797df75dfbfb2f9cf06a0d94995ac46
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 16:08:00 2009 +0000
    
        Merge the enhancements from LoopUnroll's FoldBlockIntoPredecessor into
        MergeBlockIntoPredecessor. This makes SimplifyCFG slightly more aggressive,
        and makes it unnecessary for LoopUnroll to have its own copy of this code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85667 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f893ba349f3ac2b2ea4ec3418567d48301650f42
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 15:04:55 2009 +0000
    
        Rename forgetLoopBackedgeTakenCount to forgetLoop, because it
        clears out more information than just the stored backedge taken count.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85664 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e5f0d710707ce7f9247fa70de5552cb11d1dada7
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 14:54:17 2009 +0000
    
        Replace LoopUnrollPass.cpp's custom code-size estimation code with
        the new common CodeMetrics code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85663 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 590112e91929037815065feb50868b05e272bbcf
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 14:46:50 2009 +0000
    
        Simplify this code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85662 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 73ec8db1329e8c2aa1df8f131030d84ce0c07e71
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 14:39:43 2009 +0000
    
        Remove an unnecessary #include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85661 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ffd9e3fc97c59c5f477eccb876fc52cb3c492778
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 14:38:25 2009 +0000
    
        Update CMakeLists for recent renames.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85660 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fe18719fbded699e8901b003f31e25378f63928b
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 14:37:31 2009 +0000
    
        Rename UnrollLoop.cpp to LoopUnroll.cpp, and LoopUnroll.cpp to
        LoopUnrollPass.cpp, for consistency with other passes which are
        similarly split.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85659 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6d54b9847f98e0dfb58e9c7d13af4cb86a5b0812
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 14:35:41 2009 +0000
    
        Remove CodeGenLICM. It's largely obsoleted by MachineLICM's new ability
        to unfold loop-invariant loads.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85657 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fab05d2f984c70bf79e79dc059301bfdafb66378
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 14:32:25 2009 +0000
    
        Make ScalarEvolutionAliasAnalysis slightly more aggressive, by making an
        underlying alias call even for non-identified-object values.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85656 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2cc8e84a06c9a71a03a322deaf30691ccd0d96c0
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 14:22:52 2009 +0000
    
        Reapply r85634, with the bug fixed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85655 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9c98ce3d0b7bca679abebf6c6e2e9f6de6bbb02e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 14:14:04 2009 +0000
    
        When discarding SrcValue information, discard all of it so that code
        that uses this information knows to behave conservatively.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85654 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 509fccf21830e93d117e482fdc5a6b18e183bd3e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 14:12:53 2009 +0000
    
        Fix 80-column violation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85653 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6de661ad137c60e288bd4626f5817e39fc695e51
    Author: Eric Christopher <echristo at apple.com>
    Date:   Sat Oct 31 09:24:35 2009 +0000
    
        Fix warning with gcc-4.0 and signed/unsigned.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85648 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 326d7242e283922daf63e607bbc84e1498f76da7
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Oct 31 03:39:36 2009 +0000
    
        It's safe to remat t2LDRpci; add PseudoSourceValue to loads / stores to enable more machine licm. More changes coming.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85643 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 35f8bdcd86c0281f7b5a040e765a665774ef5b22
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Oct 31 01:28:06 2009 +0000
    
        Revert 85634. It's breaking consumer-typeset (and others).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85641 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 17d892adad853cc7fea4e64d72e3717903a953cd
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 31 00:15:28 2009 +0000
    
        Add a target triple so that this test behaves consistently across hosts.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85640 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 61b16fbdaeb7617b779fd0df62b32b40cf68a12c
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 23:59:06 2009 +0000
    
        Add assertion checks here to turn silent miscompiles into aborts.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85639 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c00047af5603cf63b34c76c740d4a29fd22ff60a
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 23:57:47 2009 +0000
    
        Don't mark registers dead here when processing nodes with MVT::Flag
        results. This works around a problem affecting targets which rely on
        MVT::Flag to handle physical register defs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85638 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5aaaa52b972e9ef90316ecf9cb5b406dd31650f1
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 23:18:27 2009 +0000
    
        Fix the -mattr line for this test so that it passes on hosts that lack SSSE3.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85637 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ac97988e5d42a2a96b08c49dffa61bbe64119db8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 23:16:10 2009 +0000
    
        Add a testcase for the recent duplicate PHI elimination changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85636 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 08cb4f2aff78d2bb1f44af7616a9117437e1eeb3
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 23:15:43 2009 +0000
    
        Add a comment about a missed opportunity.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85635 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 14af0e5b2284045a70c71231a4258862e911219d
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 23:15:21 2009 +0000
    
        Optimize around the fact that pred_iterator is slow: instead of sorting
        PHI operands by the predecessor order, sort them by the order used by the
        first PHI in the block. This is still sufficient to expose duplicates.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85634 91177308-0d34-0410-b5e6-96231b3b80d8
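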
    
    commit 13f60e83d6add4d7159c322a7ccc1c36c26a115b
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Fri Oct 30 22:55:57 2009 +0000
    
        Updates to the ARM target assembler for llvm-mc per review comments from
        Daniel Dunbar.
        - Reordered the fields in the ARMOperand Mem struct to make the struct
        smaller, making the bools into 1-bit fields and putting the MCExpr*
        fields adjacent to each other.
        - Fixed a number of places in ARMAsmParser.cpp so they have doxygen comments.
        - Changed the name of ARMAsmParser::ParseRegister() to MaybeParseRegister and
        added the bool ParseWriteBack parameter.
        - Changed ARMAsmParser::ParseMemory() to call MaybeParseRegister().
        - Added ARMAsmParser::ParseMemoryOffsetReg to factor out parsing the offset
        of a memory operand, and used it for parsing both preindexed and
        post-indexed addressing forms in ARMAsmParser::ParseMemory.
        - Changed the first argument to ParseShift() to a reference.
        - Changed ParseShift() to check for Rrx first and return to reduce nesting.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85632 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 17c45db8ddc97c08cb46ad669677bb0c6459f363
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Oct 30 22:52:47 2009 +0000
    
        If the string field is empty then return NULL.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85630 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ec080e7b221889a3b1b2616a19e4019295509627
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Oct 30 22:39:36 2009 +0000
    
        if basic blocks are destroyed while there are *just* BlockAddress' hanging
        around, then zap them.  This is analogous to dangling constantexprs hanging
        off functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85627 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 885be9130b4d9bfe1274b018e8b5c6aa667eb12c
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 22:39:04 2009 +0000
    
        Teach SimplifyCFG how to eliminate duplicate PHI nodes within a block.
        This reduces codesize on a variety of codes by 1-2% on x86-64. It also
        helps clean up after SSAUpdater.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85626 91177308-0d34-0410-b5e6-96231b3b80d8
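
        A toy model of the duplicate detection (plain C++; the real pass works
        on llvm::PHINode, this only shows the idea):

            #include <cstdio>
            #include <map>
            #include <utility>
            #include <vector>

            // Two PHIs in one block are duplicates when they carry identical
            // (predecessor, value) lists; the later one can then be replaced
            // by the earlier one.
            using Incoming = std::vector<std::pair<int, int>>;

            int main() {
              std::vector<std::pair<int, Incoming>> Phis = {
                {1, {{10, 7}, {11, 8}}},
                {2, {{10, 7}, {11, 8}}},    // same incoming list as phi 1
                {3, {{10, 9}, {11, 8}}},
              };
              std::map<Incoming, int> Seen;
              for (const auto &P : Phis) {
                auto It = Seen.find(P.second);
                if (It != Seen.end())
                  std::printf("phi %d duplicates phi %d\n", P.first, It->second);
                else
                  Seen[P.second] = P.first;
              }
            }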
    
    commit 0fb64e96c8cb775c57cea53171ce0525da9249c5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Oct 30 22:33:29 2009 +0000
    
        make hasAddressTaken() constant time by storing a refcount in BB's subclass data.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85625 91177308-0d34-0410-b5e6-96231b3b80d8
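
        The trick in miniature (toy type; the commit keeps the count in
        BasicBlock's subclass data):

            // Each BlockAddress bumps the count on creation and drops it on
            // destruction, so the query is O(1) instead of a scan of the users.
            struct Block {
              unsigned BlockAddressRefCount = 0;
              bool hasAddressTaken() const { return BlockAddressRefCount != 0; }
            };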
    
    commit 4381e3bc06cb247d7d57658b2c6134c24021d316
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Oct 30 22:22:46 2009 +0000
    
        Add a note about Robert Muth's alternate jump table implementation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85624 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 012d03d39d3cbb8e195b7a9ce50fc9ef6e474ec0
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 22:22:22 2009 +0000
    
        Sort the incoming values in PHI nodes to match the predecessor order.
        This helps expose duplicate PHIs, which will make it easier for them
        to be eliminated.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85623 91177308-0d34-0410-b5e6-96231b3b80d8
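
        In miniature (plain C++): once every PHI lists its incoming values in
        the same predecessor order, duplicates become directly comparable.

            #include <algorithm>
            #include <cstdio>
            #include <utility>
            #include <vector>

            int main() {
              // The same (predecessor, value) pairs in two textual orders.
              std::vector<std::pair<int, int>> A = {{12, 5}, {10, 7}, {11, 8}};
              std::vector<std::pair<int, int>> B = {{10, 7}, {11, 8}, {12, 5}};
              std::sort(A.begin(), A.end());
              std::sort(B.begin(), B.end());
              std::printf("duplicates: %s\n", A == B ? "yes" : "no");  // yes
            }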
    
    commit f011658a243d918c31627fddaf7ae351f2f19eb6
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 22:18:41 2009 +0000
    
        Fix MachineLICM to use the correct virtual register class when
        unfolding loads for hoisting.  getOpcodeAfterMemoryUnfold returns the
        opcode of the original operation without the load, not the load
        itself; MachineLICM needs to know the operand index in order to get
        the correct register class. Extend getOpcodeAfterMemoryUnfold to
        return this information.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85622 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 68e38afb4b538f372b34f6321b020870215f3c26
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Oct 30 22:15:48 2009 +0000
    
        it isn't valid to take the address of the entry block.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85621 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8bda3570a26c8c3e46a95629adbf495b1144a633
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Oct 30 22:09:30 2009 +0000
    
        If a type is derived from a derived type, then calculate the size appropriately.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85619 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ce7da4da5b7e55de4dc8dd381391859a2bfac309
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Oct 30 21:33:05 2009 +0000
    
        Build in ARM mode explicitly when on ARM Darwin
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85615 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 54f775ff61c8a018900f30560e5de9c419d2cb03
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Oct 30 21:13:59 2009 +0000
    
        Add missing substitution for %llvmgcc_only.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85614 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bfb983d31495c0c21178f0ea493b149a579b62d2
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Oct 30 20:54:59 2009 +0000
    
        Allow cross-target build
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85611 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0d20d17a4707634b1973053699fa027c313c8220
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Oct 30 20:13:25 2009 +0000
    
        Fix a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85610 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f036e55b68d46d66e8ee5251861c7331517fba2d
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 30 20:12:24 2009 +0000
    
        Add option to createGVNPass to disable PRE.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85609 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7a178321e28af8328c1627de479df1064df856b6
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 30 20:03:40 2009 +0000
    
        I forgot to commit this test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85608 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d6d77da9dfee79811ba7d82f26adda9b5ae20922
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Oct 30 19:53:38 2009 +0000
    
        When cross-building, the CFLAGS and CXXFLAGS are for the target, and don't
        apply to the build tools. If we want to allow build tool flags input, we
        should have separate inputs (BUILD_CFLAGS and BUILD_CXXFLAGS, perhaps).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85607 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 45795a3646741fea0da9973cad9e4d80e0cfc9a6
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Oct 30 19:52:05 2009 +0000
    
        Remove extraneous comment line
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85606 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eebeb09c84a0685a186a5539ee01ac7e8c4315d7
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Oct 30 19:51:32 2009 +0000
    
        update name check for Apple style builds to be more permissive
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85605 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 56c480af03aad91c507eed011fd3977e81225e70
    Author: Lang Hames <lhames at gmail.com>
    Date:   Fri Oct 30 18:12:09 2009 +0000
    
        Stop the iterator in ValueLiveAt from potentially running off the end of the interval.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85599 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eac0e8f344245ea3d5f7d78eac4033e8c64f18ab
    Author: Rafael Espindola <rafael.espindola at gmail.com>
    Date:   Fri Oct 30 14:33:14 2009 +0000
    
        This fixes functions like
    
        void f (int a1, int a2, int a3, int a4, int a5,...)
    
        In ARMTargetLowering::LowerFormalArguments if the function has 4 or
        more regular arguments we used to set VarArgsFrameIndex using an
        offset of 0, which is only correct if the function has exactly 4
        regular arguments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85590 91177308-0d34-0410-b5e6-96231b3b80d8
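
        As a concrete illustration (assuming AAPCS-style passing, where r0-r3
        carry the first four integer arguments): in the f above, a5 does not
        fit in a register and is passed at [sp, #0], so the variadic arguments
        begin at offset 4, not 0; a VarArgsFrameIndex offset of 0 would
        overlap a5.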
    
    commit bf1bacb9000591a608241f87f1e231ae5165156e
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Fri Oct 30 11:42:08 2009 +0000
    
        CMake: install .def files from source `include/llvm' directory.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85587 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f9eeeee14b7945c133fdaed19ce62be2a18b0457
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 30 07:23:49 2009 +0000
    
        Rather than having llvm-gcc change the meaning of OptimizeSize, just make sure loop unswitch is conservative when the optimization level is < 3.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85581 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8f743387c77dd5cdd20a6441e5da8271b52ca233
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Oct 30 05:45:42 2009 +0000
    
        Add ARM codegen for indirect branches.
        clang/test/CodeGen/indirect-goto.c runs! (unoptimized)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85577 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a7a4db28ffff0b028558ab1dc92619f2804b9366
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 02:45:10 2009 +0000
    
        Most stack traces don't need 3 digits' worth of levels.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85575 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 21d49aa0bac191a9c3dc3b7a6fba8805209d1dff
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 02:13:27 2009 +0000
    
        Don't delete blocks which have their address taken.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85572 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8f38bef9785b0d126ffac7a6dbe9050daff1a786
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 02:08:26 2009 +0000
    
        Mention if a block has its address taken in debug output.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85571 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b4b367821511e04319a7ac0c59bb8d8f11be3345
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 02:01:10 2009 +0000
    
        Simplify this code and avoid an extra space character in the output.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85568 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1995be14a415ae124e42dec505a9c5fa6f83a8d1
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 01:45:18 2009 +0000
    
        Add support for BlockAddress static initializers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85562 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d8e273feafff0c64aac1587298390f0a91f147af
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 01:38:20 2009 +0000
    
        Add a FIXME comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85559 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 65e3b3e189325686c5cb6b38e9cf6446d5fb9952
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 01:34:35 2009 +0000
    
        Add some comments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85558 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 064403ea743b1457fbc71477feeb1b2f8c5cb3b1
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 01:28:02 2009 +0000
    
        Initial x86 support for BlockAddresses.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85557 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9105751c7ce64082e5769142e14b0c65cd06bb0a
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 01:27:03 2009 +0000
    
        Initial target-independent CodeGen support for BlockAddresses.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85556 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 99ffec19f61ce7ff2a52a524bfa92207eb680e7c
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Oct 30 00:39:25 2009 +0000
    
        Remove dead code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85551 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 35264272bed55d2ec0b75079b25a11d9af8f74b5
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 00:20:08 2009 +0000
    
        Add a BlockAddress MachineOperand kind.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85549 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ed56bc7692cc05f851ce4f9331e1af5299b7c97c
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 00:17:14 2009 +0000
    
        Add svn:ignore properties.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85548 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 07188d85626733b44f77fbee4b1d23e8faa24624
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 30 00:14:33 2009 +0000
    
        Remove a redundant copy constructor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85547 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ebb14ff86fe1abf66d63985844fed583fb2044a5
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Oct 30 00:08:40 2009 +0000
    
        Dial back the realignment a bit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85546 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fffff4408557f721ab7ff680bcdcca2c2534aaae
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Thu Oct 29 23:30:59 2009 +0000
    
        Between scheduling regions, correctly maintain anti-dep breaking state so that we don't incorrectly rename registers that span these regions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85537 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 438d4105eeb47c5bc1c5e5464b47a3df46ff2bbd
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Oct 29 23:30:06 2009 +0000
    
        Remove some unnecessary spaces in debug output.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85536 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b65e3acb4a07cbc809bc22fbc67675c7233d0f79
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Oct 29 22:30:23 2009 +0000
    
        Move some code from being emitted as boilerplate duplicated in every
        *ISelDAGToDAG.cpp to being regular code in SelectionDAGISel.cpp.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85530 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e2471f2abaa8edb55f5d8252cab8fe08ef09618f
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Thu Oct 29 19:17:04 2009 +0000
    
        Fix a couple of bugs in aggressive anti-dep breaking.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85522 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4b7a76d7d57c0140e80d5e3f74d929fc909aa8b5
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 29 18:40:06 2009 +0000
    
        Refactor complicated predicate into a separate function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85519 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit befd0170cc6b611f1671310557aaba84feb18a2d
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Oct 29 18:20:34 2009 +0000
    
        The first bitcast use may not lead to a dbg.declare intrinsic.  Iterate
        over the uses until one finds the dbg.declare intrinsic.
        Patch by Sunae Seo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85518 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 30afe01e7ef78e1d2c01c924161ebe53bf02a3de
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Oct 29 18:10:34 2009 +0000
    
        Rename usesCustomDAGSchedInserter to usesCustomInserter, and update a
        bunch of associated comments, because it doesn't have anything to do
        with DAGs or scheduling. This is another step in decoupling MachineInstr
        emitting from scheduling.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85517 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5e0e60ac697c23819629d0fb2cc8a04bcfe1cf75
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Oct 29 17:47:20 2009 +0000
    
        Refactor the code for unfolding a load into a separate function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85515 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3c91dd9cb7e0acb0b92309cbf8b3c75a469ff18b
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Oct 29 17:39:46 2009 +0000
    
        Reapply r85338.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85514 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dbd28a3a3b8657faad8b2fdbb2d505d63e229121
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Oct 29 12:55:32 2009 +0000
    
        Fix MSVC build.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85505 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b7f1f1053add0239195844f748833c58565c569a
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Thu Oct 29 07:35:15 2009 +0000
    
        Apply some cleanups. No functionality changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85498 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6c633c5a850ed50e123bd975990e59d0e60d4614
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 29 05:53:32 2009 +0000
    
        add sanity check for indbr.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85496 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4f2bbd66a9e049f60c8187a201f18d0df80c2e12
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 29 05:51:50 2009 +0000
    
        just for the hell of it, allow globalopt to statically evaluate
        static constructors with indirect gotos :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85495 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bfa6b8573272eb5396dce856f5259d63f7d5fc48
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 29 05:26:09 2009 +0000
    
        add interpreter support for indirect goto / blockaddress.  The interpreter
        now correctly runs clang's test/CodeGen/indirect-goto.c.  The JIT will abort
        on it until someone feels compelled to implement this.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85488 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 936e59eb5b9d2d5648249ab6c7b14619ae0662e2
    Author: Zhongxing Xu <xuzhongxing at gmail.com>
    Date:   Thu Oct 29 05:07:14 2009 +0000
    
        add newline to make cl.exe happy.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85483 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cfb73b384ae5f97a4796fc91602d50a474c1e1c7
    Author: Zhongxing Xu <xuzhongxing at gmail.com>
    Date:   Thu Oct 29 04:41:24 2009 +0000
    
        fix 80-col.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85480 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c63a779b48206fdcef692203002b990609f5734f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 29 04:25:46 2009 +0000
    
        greatly improve the LLVM IR bitcode encoding documentation,
        patch by Peter Housel!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85479 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 29c4d45951c300e27374dabc7470b841a2fcd37a
    Author: Zhongxing Xu <xuzhongxing at gmail.com>
    Date:   Thu Oct 29 03:43:06 2009 +0000
    
        Explicitly convert to double to suppress the Visual C++ 2008 build error C2668 ('pow': ambiguous call to overloaded function).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85478 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9823cc9df2fbd6e8bd106e223718996f0a68a1d2
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Oct 29 02:41:21 2009 +0000
    
        To get more thorough testing from llc-beta nightly runs, do dynamic stack
        realignment regardless of whether it's strictly necessary.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85476 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ba150df4bd9907b5820fb1d805aa8e4e6a0a0b0
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Oct 29 02:33:47 2009 +0000
    
        When the function is doing dynamic stack realignment, the spill slot will be
        indexed via the stack pointer, even if a frame pointer is present. Update the
        heuristic to place it nearest the stack pointer in that case, rather than
        nearest the frame pointer.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85474 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 212f9a9d323087354f9fcba897c66db5b517eba9
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Thu Oct 29 02:04:53 2009 +0000
    
        Sorry to break the build.
        I was trying to check the WIP file in to some local repository, but
        ended up checking it in to the llvm repository.  Oops!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85470 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d8805c02dcf8c349967529d34add7641dc2f261f
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Thu Oct 29 01:45:07 2009 +0000
    
        Minor tweak to forgo the curly braces for most case blocks, except when
        declaring local variables.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85467 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ff1a8e5764e4f6a8c471e432a400858e849923b2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 29 01:21:20 2009 +0000
    
        teach various passes about blockaddress.  We no longer
        crash on any clang tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85465 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 12d5a3225f1a26113673727170f9552622525325
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Thu Oct 29 01:15:40 2009 +0000
    
        When there is a 2-instruction spill sequence, record
        the second (store) instruction in SpillSlotToUsesMap
        consistently.  I don't think this matters functionally,
        but it's cleaner and Evan wants it this way.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85463 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ef486b10571f71e53fe3281aec3abdb3446c5245
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Oct 29 00:37:35 2009 +0000
    
        Don't put in these EH changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85460 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dff406469313119ba79883ea6c3b44e1c0bc5740
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Thu Oct 29 00:34:30 2009 +0000
    
        A switch-on-string-literal construct that is a nice alternative to
        cascading "ifs" of strcmps/memcmps.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85459 91177308-0d34-0410-b5e6-96231b3b80d8
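
        Usage looks roughly like this (hypothetical mapping, but the
        StringSwitch API is the one this commit adds):

            #include "llvm/ADT/StringSwitch.h"

            static int colorToId(llvm::StringRef Name) {
              return llvm::StringSwitch<int>(Name)
                  .Case("red", 0)
                  .Case("green", 1)
                  .Case("blue", 2)
                  .Default(-1);        // replaces a strcmp cascade
            }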
    
    commit 52efc3f5167aeb90927b75fc363b999a0fcd8f80
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 29 00:31:02 2009 +0000
    
        teach ValueMapper about BlockAddress', making bugpoint a lot more useful.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85458 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 15513ab5a548a8e64be4c2f7968c9ae6bd8a1418
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 29 00:28:30 2009 +0000
    
        unindent massive blocks, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85457 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6577d404c19a0a80b13e358dd72b0c1b022b06c1
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Oct 29 00:22:16 2009 +0000
    
        Reverting r85338 for now. It's causing a bootstrap failure on PPC darwin9.
    
        --- Reverse-merging r85338 into '.':
        U    lib/CodeGen/SimpleRegisterCoalescing.cpp
        U    lib/CodeGen/SimpleRegisterCoalescing.h
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85454 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 41335ea37656dcfa81b37aff1c407df501884e6e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Oct 29 00:14:44 2009 +0000
    
        Add indirectbr and blockaddress to the vim syntax highlighting file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85451 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d50a560f876bc526b57305c5ec1b3478f8cb97b3
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Oct 29 00:09:08 2009 +0000
    
        Add a hasAddressTaken for BasicBlock.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85449 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 30f4c269dd65e2abaaf718261410fd7c768dfc86
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 28 23:25:00 2009 +0000
    
        add IRBuilder support for IndirectBr
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85445 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e84c212b3cf2616caf6284d522b5fccdd95cbc8e
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 28 22:10:20 2009 +0000
    
        Reimplement BranchFolding change to avoid tail merging for a 1 instruction
        common tail, except when the OptimizeForSize function attribute is present.
        Radar 7338114.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85441 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eaa6891fe4fe26bc3a8842ed01be52746a752393
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Oct 28 21:56:18 2009 +0000
    
        When we generate spill code, then decide we don't need
        to spill after all, we weren't handling 2-instruction
        spill sequences correctly (PPC Altivec).  We need to
        remove the store in this case.  Removing the other
        instruction(s) would be goodness but is not needed for
        correctness, and isn't done here.  7331562.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85437 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 220daf6b5c39b425ef06192ce461538f29efabd6
    Author: Eric Christopher <echristo at apple.com>
    Date:   Wed Oct 28 21:32:16 2009 +0000
    
        Make sure we return the right sized type here.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85436 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 93ab5617a1e187a411418c1ea38b312b6aacf850
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 28 20:46:46 2009 +0000
    
        Revert r85346 change to control tail merging by CodeGenOpt::Level.
        I'm going to redo this using the OptimizeForSize function attribute.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85426 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9d632b6c0cc62539b5316a2861f247515d9c43a1
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Wed Oct 28 20:18:55 2009 +0000
    
        Extend getMallocArraySize() to determine the array size if the malloc argument is:
        ArraySize * ElementSize
        ElementSize * ArraySize
        ArraySize << log2(ElementSize)
        ElementSize << log2(ArraySize)
    
        Refactor isArrayMallocHelper and delete isSafeToGetMallocArraySize, so that there is only one copy of the malloc array-size-determining logic.
        Update users of getMallocArraySize() to not bother calling isArrayMalloc() as well.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85421 91177308-0d34-0410-b5e6-96231b3b80d8
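
        The four shapes, written out at the C level (hypothetical illustration
        with a 16-byte element type, so log2(ElementSize) == 4):

          #include <stdlib.h>

          struct Elt { char bytes[16]; };

          void demo(size_t n, unsigned k) {
            struct Elt *a = (struct Elt *)malloc(n * sizeof(struct Elt));  // ArraySize * ElementSize
            struct Elt *b = (struct Elt *)malloc(sizeof(struct Elt) * n);  // ElementSize * ArraySize
            struct Elt *c = (struct Elt *)malloc(n << 4);                  // ArraySize << log2(ElementSize)
            struct Elt *d = (struct Elt *)malloc(sizeof(struct Elt) << k); // ElementSize << log2(ArraySize)
            free(a); free(b); free(c); free(d);
          }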
    
    commit 071d2cfe3a5a2c1723a41b00b4cb9b831e46bf81
    Author: Viktor Kutuzov <vkutuzov at accesssoftek.com>
    Date:   Wed Oct 28 18:55:55 2009 +0000
    
        Fix to pass options from Gold plugin to LTO codegen
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85419 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5220b2df6b1a39fbfae239f56acc10e6d8c2915c
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Wed Oct 28 18:37:31 2009 +0000
    
        Teach cmake that mk[sd]temp is defined in stdlib.h on some systems.
        This fixes parallel build with clang on glibc platforms.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85414 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4a8676e226cc62766c4638cb3dcd128f02e9d128
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Wed Oct 28 18:29:54 2009 +0000
    
        Make AntiDepReg.h internal.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85412 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ea6986525fadfd6488bc28e4ef99004d2b2aa519
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 28 18:26:41 2009 +0000
    
        Add a Thumb BRIND pattern.  Change the ARM BRIND assembly to separate the
        opcode and operand with a tab.  Check for these instructions in the usual
        places.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85411 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 116b72c4dac37340eff42e02af6e6a0b60252578
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 28 18:19:56 2009 +0000
    
        fconsts and fconstd are obviously re-materializable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85410 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b666f8884f79e76357655ae65a0395cc72d06400
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 28 17:33:28 2009 +0000
    
        Cleanup now that frame index scavenging via post-pass is working for ARM and Thumb2.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85406 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d8c44710c05973817296df4b1800105768d7c2d1
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Oct 28 16:51:52 2009 +0000
    
        llvm.dbg.global_variables do not exist anymore.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85402 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 337ba4f1060c186398363a037327b85fb8b4e1d1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 28 15:32:19 2009 +0000
    
        add a new 'SetCurrentDebugType' API (requested by Andrew Haley for JIT
        stuff) to programmatically control the current debug flavor.  While
        I'm at it, doxygenate Debug.h and clean it up.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85395 91177308-0d34-0410-b5e6-96231b3b80d8
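
        Assumed usage, following the name given above (DebugFlag and the DEBUG
        macro are the pieces of llvm/Support/Debug.h this hooks into):

          #include "llvm/Support/Debug.h"
          #include "llvm/Support/raw_ostream.h"

          void enableJITDebugOutput() {
            llvm::SetCurrentDebugType("jit");  // as if -debug-only=jit were passed
            llvm::DebugFlag = true;            // as if -debug were passed
            DEBUG(llvm::errs() << "printed only while the type is 'jit'\n");
          }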
    
    commit 927dcecba16c04d82b567cbe00479256465bd5c2
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 28 15:28:02 2009 +0000
    
        Don't call SDNode::isPredecessorOf when it isn't necessary. If the load's
        chains have no users, they can't be predecessors of the condition.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85394 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 92ebb1fa248a671b76301809137a445f7e9dbb9c
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 28 15:23:36 2009 +0000
    
        Simplify this code: if the unfolded load can't be hoisted, just delete
        the new instructions and leave the old one in place.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85393 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f9f7a94e94b86030d10671cd6fe07b8c175785b5
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Wed Oct 28 15:04:53 2009 +0000
    
        No newline at end of file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85390 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1595d6896a6d0e096c281b58e691b2c520f8bd92
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Wed Oct 28 13:29:18 2009 +0000
    
        Update CMake file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85389 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b2c7314bc788f2af8bc0016d44129a63164401ac
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Wed Oct 28 13:14:50 2009 +0000
    
        use metavariable <result> instead of SSA name %result for consistency
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85388 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c0ea767d44bfb0c5a7f2f76552f00509885890cd
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Wed Oct 28 13:05:07 2009 +0000
    
        ooops, SSA name should not be part of the link
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85387 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 463c934daf5a4d57e798a65c145d8952f9115de3
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Wed Oct 28 09:21:30 2009 +0000
    
        advertise new syntax for unnamed instructions
        and eliminate confusing double-use of SSA names
        (work in progress)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85385 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c07861ad19e211b9375051ea2e728b7db6ec5029
    Author: Owen Anderson <resistor at mac.com>
    Date:   Wed Oct 28 07:05:35 2009 +0000
    
        Treat lifetime begin/end markers as allocations/frees respectively for the
        purposes of GVN/DSE.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85383 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 43d273db094c46eefceb6a27748c515aafdee7aa
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Wed Oct 28 07:03:15 2009 +0000
    
        Add ABCD, a generalized implementation of the Elimination of Array Bounds
        Checks on Demand algorithm which looks at arbitrary branches instead of loop
        iterations. This is GSoC work by Andre Tavares with only editorial changes
        applied!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85382 91177308-0d34-0410-b5e6-96231b3b80d8
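
        The class of redundancy targeted, in source terms (illustrative only;
        the pass itself reasons about branches over SSA values):

          void zero_elt(int *a, int n, int i) {
            if (0 <= i && i < n) {
              if (i < n)    // bounds check a frontend might emit; provably
                a[i] = 0;   // redundant given the dominating branch above
            }
          }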
    
    commit fa281b8ec7167ba67ddcc8d441f6ff1685c83998
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 28 06:55:03 2009 +0000
    
        Give ARMISD::EH_SJLJ_LONGJMP and EH_SJLJ_SETJMP names.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85381 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e21ca59adea038a61199913e6dbce8621516040c
    Author: Owen Anderson <resistor at mac.com>
    Date:   Wed Oct 28 06:30:52 2009 +0000
    
        Be more careful about invariance reasoning on "store" queries.  Stores still need
        to depend on Ref and ModRef calls within the invariant region.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85380 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 06cd2076d346cfa4aaf012be32eb380b2f4205e9
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 28 06:30:34 2009 +0000
    
        X86 palignr intrinsics immediate field is in bits. ISel must transform it into bytes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85379 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a44701c008b2c54554c42aa0b46b53ceb0a8719f
    Author: Owen Anderson <resistor at mac.com>
    Date:   Wed Oct 28 06:18:42 2009 +0000
    
        Add trivial support for the invariance intrinsics to memdep.  This logic is
        purely local for now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85378 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1bd1d9e2f90724ddeb7f095ada51cc276d14125f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 28 05:53:48 2009 +0000
    
        add bitcode reader support for blockaddress.  We can now fully
        round trip blockaddress through .ll and .bc files, so add a testcase.
    
        There are still a bunch of places in the optimizer and other places
        that need to be updated to work with these constructs, but at least
        the basics are in now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85377 91177308-0d34-0410-b5e6-96231b3b80d8
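
        The round trip described, spelled out as commands (usual llvm-as and
        llvm-dis invocations; the file names are made up):

        $ llvm-as blockaddr.ll -o blockaddr.bc    # parse .ll, write bitcode
        $ llvm-dis blockaddr.bc -o roundtrip.ll   # read bitcode, print .ll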
    
    commit d5693a28a5d34f8adeb65c0e4b67c4c811962466
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 28 05:24:40 2009 +0000
    
        bitcode writer support for blockaddress.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85376 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 67abb53946800619c549c67053d6f344d664ff27
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 28 05:14:34 2009 +0000
    
        Previously, all operands to Constant were themselves constant.
        In the new world order, BlockAddress can have a BasicBlock operand.
        This doesn't perturb much, because if you have a ConstantExpr (or
        anything more specific than Constant) we still know the operand has
        to be a Constant.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85375 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d22ec1cce22c5be46a150666dd5fc5da06c5ce64
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 28 04:47:06 2009 +0000
    
        doc bug spotted by apinski
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85372 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4fa8f3294b576c0406061df2778cdc1983693f2c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 28 04:12:16 2009 +0000
    
        'static const  void *X = &&y' can only be put in the
        readonly section if a reference to the containing function
        is valid in the readonly section.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85370 91177308-0d34-0410-b5e6-96231b3b80d8
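
        The construct is GNU C/C++'s labels-as-values extension; a minimal
        sketch of the case being handled:

          void *f(void) {
            static const void *X = &&y;  // X holds the address of label y,
          y:                             // i.e. a code address inside f
            return (void *)X;
          }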
    
    commit ed7cc32565a72f3a40d03cfc7e313515bca7b9e2
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 28 03:44:30 2009 +0000
    
        Rewrite SelectionDAG::isPredecessorOf to be iterative instead of
        recursive to avoid consuming extraordinary amounts of stack space
        when processing tall graphs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85369 91177308-0d34-0410-b5e6-96231b3b80d8
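
        The shape of the rewrite, sketched generically (self-contained; not
        the actual SelectionDAG code):

          #include <set>
          #include <vector>

          struct Node { std::vector<Node *> Preds; };

          // An explicit worklist replaces recursion, so stack usage no longer
          // grows with the height of the graph.
          bool isPredecessorOf(Node *N, Node *Of) {
            std::set<Node *> Visited;
            std::vector<Node *> Worklist(1, Of);
            while (!Worklist.empty()) {
              Node *Cur = Worklist.back();
              Worklist.pop_back();
              if (Cur == N)
                return true;
              if (!Visited.insert(Cur).second)
                continue;
              Worklist.insert(Worklist.end(), Cur->Preds.begin(), Cur->Preds.end());
            }
            return false;
          }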
    
    commit 5bb63f1658aadcc4c7a414e0794d7e75a13fbd6b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 28 03:39:23 2009 +0000
    
        full asmparser support for blockaddress.  We can now do:
        $ llvm-as foo.ll -d -disable-output
    
        which reads and prints the .ll file.  BC encoding is the
        next project.  Testcase will go in once that works.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85368 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fcbe37f591793786a110d516ed076a7a256561d9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 28 03:38:12 2009 +0000
    
        asmprinter support for BlockAddress.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85367 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 61c4ffe7c0ff88d7f81b2ac7b96382d984ccb5fb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 28 03:37:35 2009 +0000
    
        when we tear down a module, we need to be careful to
        zap BlockAddress values.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85366 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a1e56d94ca83f00c5370cfb0b660e800356fbb37
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 28 03:36:44 2009 +0000
    
        basic blocks can now have non-instruction users.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85365 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6efe35213b3fca4fd9df6a35848aadf2e31923c5
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 28 03:21:57 2009 +0000
    
        Teach MachineLICM to unfold loads from constant memory from
        otherwise unhoistable instructions in order to allow the loads
        to be hoisted.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85364 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7c7a3ff2b1b7be0f3a3f5a3ca0b01a182b0d2fba
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 28 01:44:26 2009 +0000
    
        Use fconsts and fconstd to materialize small fp constants.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85362 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a0e6778c5f1e124bc15fceeca0c875614a082c79
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 28 01:43:28 2009 +0000
    
        Add a second ValueType argument to isFPImmLegal.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85361 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 48c10d1e1fdd6f7d2f1ce5a9e2380b281dbe3006
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 28 01:13:53 2009 +0000
    
        Mark dead physregdefs dead immediately. This helps MachineSink and
        MachineLICM and other things which run before LiveVariables is run.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85360 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 114924db349324f455170178d8b1a8d34da8f4c6
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 28 01:12:16 2009 +0000
    
        Allow constants of different types to share constant pool entries
        if they have compatible encodings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85359 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ae9824d9490ac7373fa2cddfe0f65bc8426249eb
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 28 01:08:17 2009 +0000
    
        Remove getIEEEFloatParts and getIEEEDoubleParts. They are not needed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85358 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fd3ac5568dcb981fe9539706bd97f427bb36ff15
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 28 00:55:57 2009 +0000
    
        Update SystemZ to use PSW following the way x86 uses EFLAGS. Besides
        eliminating a use of MVT::Flag, this is needed for an upcoming CodeGen
        change.
    
        This unfortunately requires SystemZ to switch to the list-burr
        scheduler in order to handle the physreg defs properly; however,
        that's what LLVM has available at this time.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85357 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f061fc84a89aa961b3f5e25dce93fa5ac033c090
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 28 00:37:03 2009 +0000
    
        Add an indirect branch pattern for ARM.  Testcase will be coming soon.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85355 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 523521538fe159fd1bc104a6dddf66432d8c4ef1
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Wed Oct 28 00:28:31 2009 +0000
    
        Fix the ModuleDeletion test on PPC and ARM.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85352 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4c3800f0172e2fdc0b3b2e78dbaf89b150f4e04f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 28 00:19:10 2009 +0000
    
        rename indbr -> indirectbr to appease the residents of #llvm.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85351 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 746f7fd8f0fa1d3d83dc28ddfc3be561dcf09deb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 28 00:01:44 2009 +0000
    
        IR support for the new BlockAddress constant kind.  This is
        untested and there is no way to use it, next up: doing battle
        with asmparser.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85349 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit afe63293c930f39f6f6d4ac6e76da0d3546dfca4
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 27 23:49:38 2009 +0000
    
        Record CodeGen optimization level in the BranchFolding pass so that we can
        use it to control tail merging when there is a tradeoff between performance
        and code size.  When there is only 1 instruction in the common tail, we have
        been merging.  That can be good for code size but is a definite loss for
        performance.  Now we will avoid tail merging in that case when the
        optimization level is "Aggressive", i.e., "-O3".  Radar 7338114.
    
        Since the IfConversion pass invokes BranchFolding, it too needs to know
        the optimization level.  Note that I removed the RegisterPass instantiation
        for IfConversion because it required a default constructor.  If someone
        wants to keep that for some reason, we can add a default constructor with
        a hard-wired optimization level.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85346 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 043667bb0ce28faf90b72737e91d23b0e4817711
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Oct 27 23:45:55 2009 +0000
    
        Rename lib/VMCore/ConstantsContext.h:ValueMap<> to ConstantUniqueMap<> to avoid
        colliding with llvm/ADT/ValueMap.h:ValueMap<>.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85344 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f9e2226cbaa0b634728b3a7a47acd1c11440fccc
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Oct 27 23:30:07 2009 +0000
    
        Add new note.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85341 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 472ebdd60175b884d8ea2f1e77a73bcbe5a9136c
    Author: Lang Hames <lhames at gmail.com>
    Date:   Tue Oct 27 23:16:58 2009 +0000
    
        Fixed a bug in the coalescer where intervals were occasionally merged despite a real interference. This fixes rdar://problem/7157961.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85338 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fe8db6e76137272e242fc806826b15aaa934845b
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Oct 27 22:52:29 2009 +0000
    
        Enable virtual register based frame index scavenging by default for ARM & T2.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85335 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 53353c479dd5642be973d6d0148ed0723ab27530
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Oct 27 22:48:31 2009 +0000
    
        Move and clarify note.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85334 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b004ed5cee7a2d006860c017249c16d6331c0758
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Oct 27 22:45:39 2009 +0000
    
        Infrastructure for dynamic stack realignment on ARM. For now, this is off by
        default behind a command line option. This will enable better performance for
        vectors on NEON enabled processors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85333 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 030a21f58d0164737d9aadb314dc15dd279f7b1f
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Oct 27 22:43:24 2009 +0000
    
        Note corrected.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85332 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 69b37c7dd9ca765240fcb7f5f51c4ada13b8524c
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Oct 27 22:40:45 2009 +0000
    
        Modify note.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85331 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8c4e2231017c5ff51cee8c7de384e7d105883287
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Oct 27 22:39:42 2009 +0000
    
        Revert the API changes from r85295 to make it easier for people to build
        against both 2.6 and HEAD.  The default is still changed to eager jitting.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85330 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 85196b544a2832e3da5024dd2b9559f0791a2f86
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Oct 27 22:34:43 2009 +0000
    
        Add a note.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85329 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 12236e15007f962c3f9ac9496e10adf86f5bd3e4
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 27 22:16:29 2009 +0000
    
        Factor out redundancy from clone() implementations.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85327 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b34bcfc17c7c84a3b9be34a99b40f7596c2174cd
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 27 22:10:34 2009 +0000
    
        Update the MachineBasicBlock CFG for an indirect branch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85325 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2b87f2ae510bb5298482aba49c63c49840b0db7d
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 27 21:56:26 2009 +0000
    
        Add CodeGen support for indirect branches.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85323 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fd9ff122e342300ebbd082b41439c28097d3f632
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 21:52:54 2009 +0000
    
        typo
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85322 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 24f2e859707fc58040b9d50759a864a6e35eac39
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 21:49:40 2009 +0000
    
        you can't take the address of the entry block of a function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85321 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0bae7b3ef34de55a4a7ae3f88688526c05891509
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 21:44:20 2009 +0000
    
        improvements from gabor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85320 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 117d16b8a7a9c2b631ac36affb6a9dcb33ffcbf7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 21:43:39 2009 +0000
    
        make the build build.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85319 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 64dd7b4a31b939ecea7f037a997d54a92beeadcd
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Oct 27 21:35:42 2009 +0000
    
        Add new APFloat methods that return sign, exp, and mantissa of ieee float and double values.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85318 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 41fc6ad3dffe415decc911b39388fa20cff73a23
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 21:27:42 2009 +0000
    
        Random updates to passes for indbr, I need blockaddress before I can do much more.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85316 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 405a71abced5cc697bff587436dea1bb2a6a18b1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 21:24:48 2009 +0000
    
        cppbackend support for indbr
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85312 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 20e88f5303d60d9105a80e24fb70ecc0ecd80856
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 21:21:06 2009 +0000
    
        CBE support for indbr.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85311 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 29246b5e51060717e19c0b092ddad5d07237c9be
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 21:19:13 2009 +0000
    
        fix things pointed out by Dan!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85310 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d07c83760d225f8bfe864939ce7d8d3fbdde1ea4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 21:01:34 2009 +0000
    
        document the forthcoming blockaddress constant.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85306 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d55c2acab4373802918c5c83eb9fcc7fd0f1349a
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Tue Oct 27 20:51:49 2009 +0000
    
        Similar to r85280, do not clear the "S" bit for RSBri and RSBrs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85299 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f5278f228d564351a3f0df98a74fad48a892debc
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 27 20:47:17 2009 +0000
    
        Do not hold on to a DenseMap slot across map insertion. The insertion may cause the map to grow, rendering the slot invalid.
        Use this opportunity to use ValueMap instead of DenseMap.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85298 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2b4d1db83d22c184d42fed3e4863fe6253476a01
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Tue Oct 27 20:45:15 2009 +0000
    
        Set condition code bits of BL and BLr9 to 0b1110 (ALways) to distinguish between
        BL_pred and BLr9_pred.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85297 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dab00f86c61d0de362877671ffb2c835e59b84a1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 20:42:54 2009 +0000
    
        don't use stdio
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85296 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4f6ac2f5671ce45a9690e24bd753770111611166
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Oct 27 20:30:28 2009 +0000
    
        Change the JIT to compile eagerly by default as agreed in
        http://llvm.org/PR5184, and beef up the comments to describe what both options
        do and the risks of lazy compilation in the presence of threads.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85295 91177308-0d34-0410-b5e6-96231b3b80d8
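
        With eager compilation the default, lazy jitting becomes opt-in;
        roughly (ExecutionEngine API of this era, stated from memory):

          llvm::ExecutionEngine *EE = llvm::EngineBuilder(M).create();
          // Opt back in to lazy jitting -- only safe if no two threads can
          // call into the same not-yet-compiled function at once.
          EE->DisableLazyCompilation(false);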
    
    commit 1e6c6ebac9fde4ee5c0435affc89ae6f183c011c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 20:27:24 2009 +0000
    
        fix pasto pointed out by Rafael
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85294 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 932c62083ecc46064b3f9209004b359bc451f215
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Tue Oct 27 20:12:38 2009 +0000
    
        Add radar number.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85290 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 972ec94278d281d85525a4a597274d2fdca614a3
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Tue Oct 27 20:06:05 2009 +0000
    
        Testcase for llvm-gcc patch 85284.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85287 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 28f4d2fb94b9057d8daba2128867fde276a37d1b
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Tue Oct 27 20:05:49 2009 +0000
    
        Rename MallocFreeHelper as MemoryBuiltins
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85286 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 408a5e530b3dd17cc789c538a2f3020501a13a59
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Tue Oct 27 20:04:22 2009 +0000
    
        CMake: Install .inc files too.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85285 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d9693e60eef5eaf6c45a498d2a7a5d05ca85b02e
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Tue Oct 27 19:57:29 2009 +0000
    
        Rather than excluding quite a few things, and still installing
        CMakeLists.txt, Makefiles, ... it's better to whitelist what we really
        want to install.
    
        Patch by Ingmar Vanhassel!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85282 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6337b551d01bc8239f0bd6440ac82bce2b313f4c
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Oct 27 19:56:55 2009 +0000
    
        Do away with addLegalFPImmediate. Add a target hook isFPImmLegal which returns true if the fp immediate can be natively codegened by target.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85281 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 66eeb81868889994314fc49be900868351d6f1a8
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 27 19:52:03 2009 +0000
    
        Do not clear the "S" bit for RSCri and RSCrs.  They inherit from the "sI"
        instruction format that already takes care of setting this.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85280 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e0787286429c486909bd3b01b5dfedacbc2d7394
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 19:13:16 2009 +0000
    
        add enough support for indirect branch for the feature test to pass
        (assembler,asmprinter, bc reader+writer) and document it.  Codegen
        currently aborts on it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85274 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1bf7944abd6a4205d09769377fbcd9240ab416d8
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Tue Oct 27 18:44:24 2009 +0000
    
        Explicitly specify 0b00, i.e., zero rotation, as the rotate field (Inst{11-10})
        for the r/rr fragment of the multiclass AI_unary_rrot/AI_bin_rrot.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85271 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 29cbb4c46ac9a5267d9428959c737c381a5e85e6
    Author: Rafael Espindola <rafael.espindola at gmail.com>
    Date:   Tue Oct 27 17:59:03 2009 +0000
    
        Add missing testcase.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85266 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 68c363b54790d793a5186df06de8853a2ea5e549
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 17:40:49 2009 +0000
    
        change of mind :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85258 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a7f42de3ef7ce28ad6c2fa21125c6a9fa39f6024
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Tue Oct 27 17:40:24 2009 +0000
    
        Remove unnecessary gotos to fall-thru successors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85257 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b320a29fd07bb55461d20676e31cc9314f14001e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 17:40:19 2009 +0000
    
        rename test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85256 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5663974f0438575c0e45d0bcb5ce0391eb5a5f96
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Tue Oct 27 17:25:15 2009 +0000
    
        Test commit.  Added '.' to the comment line.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85255 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 85c4ec20b82b5a2702e618922d549222366907dc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 17:08:31 2009 +0000
    
        Type.h doesn't need to #include LLVMContext.h
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85254 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6d15db7cb3b93b99d16ff5d7c0b87ceb0bf02f21
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 17:02:08 2009 +0000
    
        pseudosourcevalue is also still using getGlobalContext(), so it isn't
        thread safe either.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85253 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7fd81130b8b6daf7f47f808d2134732b2f9ce116
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 17:01:03 2009 +0000
    
        apparently the X86 JIT isn't fully contextized, it is still using getGlobalContext() :(
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85252 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f8a5f3002aa72cd783cb7f4d976ddfb9da964018
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Tue Oct 27 16:56:58 2009 +0000
    
        Fix reversed logic spotted by Owen Anderson.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85251 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 96b4e7d1e7d37ecc040b42a836a3d0b7884e839f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 16:53:54 2009 +0000
    
        trim another #include
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85250 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 622f3fc17dbcc235ef8f07f9da803eb0fe9c35bc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 16:49:53 2009 +0000
    
        remove an unneeded #include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85248 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8af43fbca65148054b8b7d218c54ad85c8c4d6aa
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Tue Oct 27 14:54:46 2009 +0000
    
        Convert Analysis tests to FileCheck in regards to PR5307.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85241 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8f419e24e13ef1e9090c6e96b18da9bb45338ed1
    Author: Rafael Espindola <rafael.espindola at gmail.com>
    Date:   Tue Oct 27 14:09:44 2009 +0000
    
        Correctly align double arguments in the stack.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85235 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7d91e27702ced60731f53a71cd0052e30c9cca77
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Oct 27 09:02:49 2009 +0000
    
        80-col violation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85215 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 56724a6b62e4bd778a91d1a44eb9f521fd6c8a78
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 27 06:31:02 2009 +0000
    
        Fix Thumb2 failures by converting them to FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85210 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e811eeb605587038550284216cec691a85754793
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 27 06:16:45 2009 +0000
    
        Fix the rest of the ARM failures by converting them to FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85208 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f59fc08fa71a5648f712d7d178a650c817008408
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 27 05:50:28 2009 +0000
    
        Fix some more failures by converting to FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85207 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 38dfe1cae81f30a9efe49e16700632ffd2987dec
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 05:39:41 2009 +0000
    
        Fix a pretty serious misfeature of the inliner: if it inlines a function
        with multiple return values it inserts a PHI to merge them all together.
        However, if the return values are all the same, it ends up with a pointless
        PHI and this pointless PHI happens to really block SRoA from happening in
        at least a silly C++ example written by Doug, but probably others.  This
        fixes rdar://7339069.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85206 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 468f729300f9ff4fc208650e29470e108b7e6ba0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 05:35:35 2009 +0000
    
        convert to filecheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85205 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8257c409ae7df2cb7641f4194b3df694e8ce552f
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 27 05:30:47 2009 +0000
    
        Convert to FileCheck, fixing failure due to tab change in the process.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85204 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9b95748848a137a23419ae1e6b4f92736af70e59
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 27 04:58:10 2009 +0000
    
        lang points out that the comment is out of date with the code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85203 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f6803121c07be2467ed2dd49192e2f5432ef2ab6
    Author: Mike Stump <mrs at apple.com>
    Date:   Tue Oct 27 02:17:51 2009 +0000
    
        Fix VS build, patch by Marius Wachtler.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85198 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 406f493c3b956f24d4567e59443daafb7d122df9
    Author: Mike Stump <mrs at apple.com>
    Date:   Tue Oct 27 02:14:13 2009 +0000
    
        VS build fix, patch by Marius Wachtler.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85197 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 98d00f20a97fa2774f337d1a56fee66865eb650c
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Oct 27 01:06:51 2009 +0000
    
        Fix OProfileJITEventListener after r85182.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85192 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 37115cc04ea3dc2516d34a1b0f8dc262652fbb63
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Oct 27 00:52:25 2009 +0000
    
        Add objectsize intrinsic and hook it up through codegen. Doesn't
        do anything other than return "I don't know" at the moment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85189 91177308-0d34-0410-b5e6-96231b3b80d8
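
        The C-level surface this is meant to serve is GCC-style
        __builtin_object_size; with the intrinsic answering "I don't know",
        type 0 queries fold to (size_t)-1:

          #include <stddef.h>

          size_t remaining(char *p) {
            return __builtin_object_size(p, 0);  // (size_t)-1 == unknown
          }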
    
    commit c1db4e53b11822ff3262d6b741974ef78ce04ea4
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Oct 27 00:20:49 2009 +0000
    
        Now VFP instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85186 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1674ea5205cbc47e59849163c5992f3befbd5fb0
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 27 00:11:02 2009 +0000
    
        Add braces to avoid ambiguous else.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85185 91177308-0d34-0410-b5e6-96231b3b80d8
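
        The classic ambiguity the braces avoid (generic example, not the code
        touched here):

          if (a)
            if (b)
              foo();
          else          // despite the indentation, pairs with 'if (b)'
            bar();

          if (a) {
            if (b)
              foo();
          } else {      // braces make the intended pairing explicit
            bar();
          }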
    
    commit 08540a929721efe42893675068f8a5a1767afad5
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Oct 27 00:08:59 2009 +0000
    
        Change Thumb1 and Thumb2 instructions to separate opcode from operands with a tab instead of a space.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85184 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1291fce6c0a61fb74a124ffd5b2da02c0de6ab4f
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Oct 27 00:03:05 2009 +0000
    
        Automatically do the equivalent of freeMachineCodeForFunction(F) when F is
        being destroyed. This allows users to run global optimizations like globaldce
        even after some functions have been jitted.
    
        This patch also removes the Function* parameter to
        JITEventListener::NotifyFreeingMachineCode() since it can cause that to be
        called when the Function is partially destroyed. This change will be even more
        helpful later when I think we'll want to allow machine code to actually outlive
        its Function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85182 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 43a4c67a86782f5f6ad5d80dbbf29ff23f3faa08
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Mon Oct 26 23:58:56 2009 +0000
    
        Rename MallocHelper as MallocFreeHelper, since it now also identifies calls to free()
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85181 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f9a7549471a1194f31d2627c8f0665eb88bb96fc
    Author: Owen Anderson <resistor at mac.com>
    Date:   Mon Oct 26 23:56:52 2009 +0000
    
        Forgot to commit these.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85180 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 14d033341deef1eacc39efc4f51b132356120405
    Author: Owen Anderson <resistor at mac.com>
    Date:   Mon Oct 26 23:55:47 2009 +0000
    
        Add a straightforward implementation of SCCVN for aggressively eliminating scalar redundancies.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85179 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d3f9bc403327201a4cbb416f6627d1c54c0fcac8
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Oct 26 23:45:59 2009 +0000
    
        Change ARM asm strings to separate opcode from operands with a tab instead of a space.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85178 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6b05409548b99af03ebd6f92fee5337dc7ed0dc4
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Mon Oct 26 23:44:29 2009 +0000
    
        Remove all references to MallocInst and FreeInst
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85177 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f9a7a33126ed4def13851a4d77a442db55ffe307
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Mon Oct 26 23:43:48 2009 +0000
    
        Remove FreeInst.
        Remove LowerAllocations pass.
        Update some more passes to treat free calls just like they were treating FreeInst.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85176 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2aec46ede15c195de15bdafc25e9760b6d9b5c52
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Oct 26 22:59:12 2009 +0000
    
        Try to get ahead of Johnny Chen and pro-actively add some more ARM encoding
        bits.  Johnny, please review -- I do not have a good track record of getting
        these right.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85173 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 67d02c9e078bd68cad7bc12fbc86ceed345aae15
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Oct 26 22:52:03 2009 +0000
    
        Convert a few tests to FileCheck for PR5307.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85171 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ccd00e369b68760c043986705ca45bb06e233f1e
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Oct 26 22:42:13 2009 +0000
    
        Fix ARM encoding typo: Opcod3 is not passed to ASuI parent class.
        Patch by Johnny Chen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85169 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3a308521257a4c8fc6e816bdd4218ade39e1edfe
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Oct 26 22:34:44 2009 +0000
    
        Add more ARM instruction encodings for 's' bit set and "rs" register encoding
        bits.  Patch by Johnny Chen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85167 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aa644ea6cd8b6c4fa5d0442c91267cdfdab43eab
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Mon Oct 26 22:31:16 2009 +0000
    
        Allow the aggressive anti-dep breaker to process the same region multiple times. This is necessary because new anti-dependencies are exposed when "current" ones are broken.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85166 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c5f5512bc4ed863d9f616d7da1ff6826bcf3b3b8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 26 22:18:58 2009 +0000
    
        Simplify this code. LoopDeletion doesn't need to explicitly check that
        the loop exiting block dominates the latch block; if ScalarEvolution
        can prove that the trip-count is finite, that's sufficient.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85165 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4dcf7c0292231fa38ff5c0ab4a2413c7b0bbb66e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 26 22:14:22 2009 +0000
    
        Code that checks WillNotOverflowSignedAdd before creating an Add
        can safely use the NSW bit on the Add.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85164 91177308-0d34-0410-b5e6-96231b3b80d8
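
        In IRBuilder terms this is the difference between a plain add and an
        add nsw; a sketch, where only WillNotOverflowSignedAdd is named by the
        commit and the rest is assumed context:

          llvm::Value *Sum = WillNotOverflowSignedAdd(LHS, RHS)
                                 ? Builder.CreateNSWAdd(LHS, RHS)  // 'add nsw'
                                 : Builder.CreateAdd(LHS, RHS);    // plain 'add'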
    
    commit 20404788e09d71c9c6171a5f4e24f5a9be2d84bd
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Mon Oct 26 22:06:01 2009 +0000
    
        Update CMake files.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85161 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6c576d14840a48ccebdb449dfdb0b7b3b95fbbc0
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 26 21:55:43 2009 +0000
    
        Teach BasicAA how to analyze Select instructions, and make it more
        aggressive on PHI instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85158 91177308-0d34-0410-b5e6-96231b3b80d8
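
        The select rule, sketched (illustrative helper, not the actual BasicAA
        code): a select aliases a location only insofar as both of its arms do.

          AliasResult aliasSelect(const llvm::SelectInst *SI, const llvm::Value *V) {
            AliasResult TA = alias(SI->getTrueValue(), V);   // assumed helper
            AliasResult FA = alias(SI->getFalseValue(), V);
            return TA == FA ? TA : MayAlias;  // arms disagree -> stay conservative
          }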
    
    commit f974aca5c3b657b93fd45aead3fba00100652b23
    Author: Julien Lerouge <jlerouge at apple.com>
    Date:   Mon Oct 26 20:01:35 2009 +0000
    
        Remove / use flags that are now set in the Makefile.config.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85149 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c3f1ce99a96866828eaeb5f97bf4f423db2a553e
    Author: Julien Lerouge <jlerouge at apple.com>
    Date:   Mon Oct 26 20:00:35 2009 +0000
    
        Regenerate.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85148 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c438e2b035300052b147b83dfe48dd8becd92697
    Author: Julien Lerouge <jlerouge at apple.com>
    Date:   Mon Oct 26 19:58:44 2009 +0000
    
        Add an autoconf test to check for optional compiler flags like
        -Wno-missing-field-initializers or -Wno-variadic-macros.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85147 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 64948f805fbd7e5d69066d85f8dee30c9d84bf1f
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Mon Oct 26 19:41:00 2009 +0000
    
        Define virtual destructor in *.cpp file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85146 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a8b859600a76b46db61dd5fe6609f66897fe9c07
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Mon Oct 26 19:32:42 2009 +0000
    
        Add aggressive anti-dependence breaker. Currently it is not the default for any target. Enable with -break-anti-dependencies=all.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85145 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 415659a34f7e7c3d0a52f2712fb449a10e0abf0a
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 26 19:12:14 2009 +0000
    
        Check in the experimental GEP splitter pass. This pass splits complex
        GEPs (more than one non-zero index) into simple GEPs (at most one
        non-zero index).  In some simple experiments using this it's not
        uncommon to see 3% overall code size wins, because it exposes
        redundancies that can be eliminated; however, it's tricky to use
        because instcombine aggressively undoes the work that this pass does.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85144 91177308-0d34-0410-b5e6-96231b3b80d8
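
        What splitting means, in source terms (a hypothetical C++ analogue of
        the IR-level transformation):

          struct S { int f[8]; };

          int *complexAddr(S *s, long i, long j) {
            return &s[i].f[j];   // one GEP, two non-zero indices
          }

          int *splitAddr(S *s, long i, long j) {
            S *t = &s[i];        // simple GEP #1
            return &t->f[j];     // simple GEP #2
          }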
    
    commit 7fcfff2099a6341e2ccb0de4eb0e75431fd40e05
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Mon Oct 26 19:00:47 2009 +0000
    
        Add virtual destructor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85141 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b9839c626d35a7ad18d27f53fc5c882fe3bf2bd6
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Mon Oct 26 18:40:24 2009 +0000
    
        Revert r85134, it breaks mingw build
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85138 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ddabac83a56fa9fda47c8c4ededa6560731b7304
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 26 18:36:40 2009 +0000
    
        Add CreateZExtOrBitCast and CreateSExtOrBitCast to TargetFolder
        for consistency with ConstantFolder.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85137 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7080dc09b5ff61334f9a13c1eec1e0eb9c195556
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 26 18:26:18 2009 +0000
    
        When checking whether a def of an aliased register is dead, ask the
        machineinstr whether the aliased register is dead, rather than whether
        the original register is dead. This allows it to get the correct answer when examining
        an instruction like this:
          CALLpcrel32 <ga:foo>, %AL<imp-def>, %EAX<imp-def,dead>
        where EAX is dead but a subregister of it is still live. This fixes PR5294.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85135 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fe8fe92809dd5b4c24a841947213994ffdd21899
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Mon Oct 26 18:22:59 2009 +0000
    
        Make PIC16 overlay a loadable pass.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85134 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2eb7f43741299f784b492a57d52e86702195b458
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Oct 26 17:09:00 2009 +0000
    
        Do not use expensive sort().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85130 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 59f8286f657aa97d979604e1ed4840dd9956342f
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Mon Oct 26 17:01:20 2009 +0000
    
        Some svn:ignore tweaks.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85128 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ad5789c4fd62e77a8d27c43db98384ed9762dfe7
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Mon Oct 26 16:59:04 2009 +0000
    
        Break anti-dependence breaking out into its own class.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85127 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 152a24965397e1b3ed7bc62c8a3b3a036b9bf733
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Oct 26 16:54:35 2009 +0000
    
        Add support to encode type info using llvm::Constant.
        Patch by Talin!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85126 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a6122ae09b2869474e1875ea4334c15137edb264
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 26 15:55:24 2009 +0000
    
        Fix a typo in a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85120 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6e060dbe83b295208c3a4fce8ea278851d9720f0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 26 15:40:07 2009 +0000
    
        reapply r85085 with a bugfix to avoid infinite looping.
        All of the 'demorgan' related xforms need to use
        dyn_castNotVal, not m_Not.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85119 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a25e81a52a619075c948a7fc1e8ec3f8ef9b09cb
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 26 15:32:57 2009 +0000
    
        Make LSR's OptimizeShadowIV ignore induction variables with negative
        strides for now, because it doesn't handle them correctly. This fixes a
        miscompile of SingleSource/Benchmarks/Misc-C++/ray.
    
        This problem was usually hidden because indvars transforms such induction
        variables into negations of canonical induction variables.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85118 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9919ed02cbeac5498b21825ae4924b140784be08
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Oct 26 04:56:07 2009 +0000
    
        - Revert some changes from 85044, 85045, and 85047 that broke x86_64 tests and
          bootstrapping. It's not safe to leave identity subreg_to_reg and insert_subreg
          around.
        - Relax register scavenging to allow use of partially "not-live" registers. It's
          common for targets to operate on registers where the top bits are undef. e.g.
          s0 =
          d0 = insert_subreg d0<undef>, s0, 1
          ...
             = d0
          When the insert_subreg is eliminated by the coalescer, the scavenger used to
          complain. The previous fix was to keep the insert_subreg around. But that's
          brittle and it's overly conservative when we want to use the scavenger to
          allocate registers. It's actually legal and desirable for other instructions
          to use the "undef" part of d0. e.g.
          s0 =
          d0 = insert_subreg d0<undef>, s0, 1
          ...
          s1 =
             = s1
             = d0
          We probably need to add a "partial-undef" marker on the machine operand so the
          machine verifier would not complain.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85091 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5d4a07e1f97b2822c334fe0a46fabe7135037889
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Oct 26 03:51:32 2009 +0000
    
        Revert 85085. It causes infinite looping during llvm-gcc build.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85090 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2fad78bbd23df52f93797656dd8eaee63cc27896
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Oct 26 02:37:56 2009 +0000
    
        Fix gmake check for AuroraUX triple.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85088 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ec7df14990f3bdf92f8e55dcdeeca03d6802602
    Author: Chandler Carruth <chandlerc at gmail.com>
    Date:   Mon Oct 26 01:35:46 2009 +0000
    
        Move DataTypes.h to include/llvm/System, update all users. This breaks the last
        direct inclusion edge from System to Support.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85086 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f05d95c356337c9a85cc12acde2cdc209ba02a1b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 26 01:06:31 2009 +0000
    
        Implement PR3266 & PR5276, folding:
           not (or (icmp, icmp)) -> and(icmp, icmp)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85085 91177308-0d34-0410-b5e6-96231b3b80d8
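
        At the source level the fold is just De Morgan's law applied to comparison
        results; a sketch (hypothetical C++, not from the commit):

        // not (or (icmp, icmp)) -> and (icmp, icmp), i.e. De Morgan:
        bool bothNonNegative(int a, int b) {
          return !(a < 0 || b < 0);  // folds to: a >= 0 && b >= 0
        }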
    
    commit d3a2943bdc41c9f2f973e74d27b965b48e538ea9
    Author: Chandler Carruth <chandlerc at gmail.com>
    Date:   Sun Oct 25 23:54:41 2009 +0000
    
        Update the 'svn:ignore' property to remove stale file references.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85084 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dd2472b097db91e662dfc71bd69a2dc05d289704
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 25 23:47:55 2009 +0000
    
        convert or.ll to filecheck and merge or2 into it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85083 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 89a222bfd4ba450a0ec169f6ac72a42dbd87cd4c
    Author: Chandler Carruth <chandlerc at gmail.com>
    Date:   Sun Oct 25 23:41:56 2009 +0000
    
        Remove stale reference to ThreadSupport.h.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85082 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 85ee709efb4af23b7205eaf5b8378dc96ee6a021
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 25 23:22:50 2009 +0000
    
        fix PR5295 where the .ll parser didn't reject a function after a global
        or a global after a function with conflicting names.  Update some testcases
        that were accidentally depending on this behavior.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85081 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f756c166ec47173633f631b1be8d2f90e11769d6
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Oct 25 23:11:06 2009 +0000
    
        Suppress -Asserts warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85078 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 79cdc2ecda3b93508190ffef80f49565093260d3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 25 23:06:42 2009 +0000
    
        fix PR5186: the JIT shouldn't try to codegen available_externally
        functions; it should just look them up like declarations.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85077 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4f13bfcd9890009dcab9454992134c486fc8a126
    Author: Chandler Carruth <chandlerc at gmail.com>
    Date:   Sun Oct 25 22:38:41 2009 +0000
    
        Remove unused includes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85074 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 20f9924817aba90dc88c43b247671c6542fb523c
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sun Oct 25 19:14:48 2009 +0000
    
        of -> or
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85065 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4f9656021235a937c7c874487dbbb5780f3b0427
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sun Oct 25 18:55:46 2009 +0000
    
        80-column cleanup
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85064 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 27e801cd7ee5ce1b11ed9c2e6fc478747f70cc3c
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Sun Oct 25 08:14:11 2009 +0000
    
        Reapply 85006 with a minor fix.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85052 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a45dd92bcde845b4edd7a3495eb8f3fba708a853
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 25 08:01:41 2009 +0000
    
        Add a couple of ARM cross-rc coalescing tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85051 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d6a6656d22cb77cd940b25d3aaa7382e73912683
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 25 07:53:48 2009 +0000
    
        Update tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85050 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7da9a9ca5eb152a5d570c29b6b873dda02148925
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 25 07:53:28 2009 +0000
    
        Add ARM getMatchingSuperRegClass to handle S / D / Q cross regclass coalescing.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85049 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 146c8488a42d45d57f0e25abccb008d7e9b8d135
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 25 07:52:27 2009 +0000
    
        Don't forget subreg indices when folding load / store.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85048 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 899830995d35fa6878fd6204fce742c90b9d554b
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 25 07:51:47 2009 +0000
    
        Use isIdentityCopy. Fix a bozo bug (flipped condition) in InvalidateRegDef.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85047 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 381f09782a49e19f142866145947405531b8d2cd
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 25 07:49:57 2009 +0000
    
        Code clean up.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85046 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c319538252bef8bab51638c1cb6ac9beab6e0bf5
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 25 07:48:51 2009 +0000
    
        Do not delete identity insert_subreg even if dest is virtual. Let later passes delete them. This avoids register scavenger complaints.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85045 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a331ec82f78470eeb4c6c1e666d082a64fea5069
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 25 07:47:07 2009 +0000
    
        Add isIdentityCopy to check for identity copy (or extract_subreg, etc.)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85044 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5a44ef9fd5f7c3964ad79b94778261175dea5c33
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Oct 25 06:57:41 2009 +0000
    
        Remove includes of Support/Compiler.h that are no longer needed after the
        VISIBILITY_HIDDEN removal.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85043 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 492d06efde44a4e38a6ed321ada4af5a75494df6
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Oct 25 06:33:48 2009 +0000
    
        Remove VISIBILITY_HIDDEN from class/struct found inside anonymous namespaces.
        Chris claims we should never have visibility_hidden inside any .cpp file but
        that's still not true even after this commit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85042 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 62d027dd4b9b60d87f7c617aac1473bf8794a5d2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 25 06:17:51 2009 +0000
    
        this is done.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85041 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8e39cf73d44aea2c233909dff6504362a6ccce8e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 25 06:15:37 2009 +0000
    
        Teach FoldBitCast to be able to handle bitcasts from (e.g.) i128 -> <4 x float>.
    
        This allows us to simplify this:
        union vec2d {
            double e[2];
            double v __attribute__((vector_size(16)));
        };
        typedef union vec2d vec2d;
    
        static vec2d a={{1,2}}, b={{3,4}};
    
        vec2d foo () {
            return (vec2d){ .v = a.v + b.v * (vec2d){{5,5}}.v };
        }
    
        down to:
    
        define %0 @foo() nounwind ssp {
        entry:
          %mrv5 = insertvalue %0 undef, double 1.600000e+01, 0 ; <%0> [#uses=1]
          %mrv6 = insertvalue %0 %mrv5, double 2.200000e+01, 1 ; <%0> [#uses=1]
          ret %0 %mrv6
        }
    
        instead of:
    
        define %0 @foo() nounwind ssp {
        entry:
          %mrv5 = insertvalue %0 undef, double extractelement (<2 x double> fadd (<2 x double> fmul (<2 x double> bitcast (<1 x i128> <i128 85174437667405312423031577302488055808> to <2 x double>), <2 x double> <double 3.000000e+00, double 4.000000e+00>), <2 x double> <double 1.000000e+00, double 2.000000e+00>), i32 0), 0 ; <%0> [#uses=1]
          %mrv6 = insertvalue %0 %mrv5, double extractelement (<2 x double> fadd (<2 x double> fmul (<2 x double> bitcast (<1 x i128> <i128 85174437667405312423031577302488055808> to <2 x double>), <2 x double> <double 3.000000e+00, double 4.000000e+00>), <2 x double> <double 1.000000e+00, double 2.000000e+00>), i32 1), 1 ; <%0> [#uses=1]
          ret %0 %mrv6
        }
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85040 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5aa9a3e9ac31b1c800daf8fd9f273ded02d4b75d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 25 06:08:26 2009 +0000
    
        move FoldBitCast earlier in the file, and use it instead of
        ConstantExpr::getBitCast in various places.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85039 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ea55ea084bc578f7e161343561826ae47265fdce
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 25 06:02:57 2009 +0000
    
        refactor FoldBitCast to reduce nesting and to always return a constantexpr
        instead of returning null on failure.  No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85038 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b0796c6d1dbaef83b400f53bee71f2dff0bff22e
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Oct 25 05:20:17 2009 +0000
    
        Remove ICmpInst::isSignedPredicate, which was a reimplementation of
        CmpInst::isSigned.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85037 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 36d0d2030275df7b40226a7979c6d7424955a2f6
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Oct 25 03:50:03 2009 +0000
    
        Sink isTrueWhenEqual from ICmpInst to CmpInst. Add a matching isFalseWhenEqual
        which is equal to !isTrueWhenEqual for ints but not for floats.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85036 91177308-0d34-0410-b5e6-96231b3b80d8
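
        Why the two differ for floats, reasoning from IEEE-754 rather than from the
        patch itself (hypothetical C++ sketch): for x = NaN, "x == x" (FCMP_OEQ on
        identical operands) is false, while for any other x it is true. So OEQ is
        neither true-when-equal nor false-when-equal, and isFalseWhenEqual is not
        simply !isTrueWhenEqual. Integer predicates are total, so there the two
        always agree.

        bool oeqSelf(double x) { return x == x; }  // false iff x is NaN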
    
    commit 83410aa8c59837bafd2fcab76c33abd14e72b5c1
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Oct 25 03:30:55 2009 +0000
    
        lit: Add --config-prefix option, to override default config file names.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85035 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 062ddcb23b7e81787c7ae597d851e654049a6c2b
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Oct 25 03:22:00 2009 +0000
    
        Indent.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85034 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 27f73ebba47a60ebdce7104e2db1991b615c8979
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sun Oct 25 01:44:24 2009 +0000
    
        Regenerate.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85031 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 01b49d0c8e25ca4e719179e056be1f12370bf312
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sun Oct 25 01:44:11 2009 +0000
    
        Document OptionPreprocessor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85030 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 19e491c408463252d0a5c008a3ec02c9a15c7400
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sun Oct 25 01:43:50 2009 +0000
    
        Add a test for OptionPreprocessor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85029 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f20c0915df1dcb44682aa5380b9c88e800985bf4
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Oct 25 01:37:26 2009 +0000
    
        lit: Allow use of /dev/null in redirects on Windows (replaced by a temporary
        file).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85028 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9ef97dcce4891c64bef5ce217341053224a57dc5
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sun Oct 25 00:45:07 2009 +0000
    
        When the scavenger is looking for a good candidate location to restore from a
        spill, it should avoid doing so inside the live range of a virtual register.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85026 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ed2a16a36af4902ada0b3f3866bbe9ec2ac5dcf3
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 24 23:52:07 2009 +0000
    
        Update these tests to match what Loop::print now prints.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85021 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a5b5be1642c9ff53eb6c3596519fafa75715acd8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 24 23:37:16 2009 +0000
    
        MapValue doesn't need its LLVMContext argument.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85020 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3a175215245c62bf37aaa117703882fcc271e72d
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 24 23:34:26 2009 +0000
    
        Rename isLoopExit to isLoopExiting, for consistency with the wording
        used elsewhere - an exit block is a block outside the loop branched to
        from within the loop. An exiting block is a block inside the loop that
        branches out.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85019 91177308-0d34-0410-b5e6-96231b3b80d8
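
        In C++ terms (hypothetical example): both conditional tests below live in
        *exiting* blocks (inside the loop, branching out), and the code reached
        after leaving the loop lives in *exit* blocks (outside the loop).

        int find(const int *a, int n, int key) {
          for (int i = 0; i < n; ++i)  // loop test: an exiting block
            if (a[i] == key)           // early-out test: another exiting block
              return i;                // "return i" lands in an exit block
          return -1;                   // so does the fall-through return
        }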
    
    commit 16be04bcdbbd17ba27ad9f3d3b404b78dc10d4c3
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 24 23:24:45 2009 +0000
    
        Delete a spurious semicolon.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85018 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6cb04ccc59e4b359a5ede29ba0a8a453b7201527
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 24 23:23:04 2009 +0000
    
        Make these tests more interesting by using
        -verify-dom-info and -verify-loop-info, which enable additional
        (expensive) consistency checks.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85017 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f3c12364e4fe208d7c08f2b855e56681f66528b2
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 24 23:19:52 2009 +0000
    
        Rewrite LoopRotation's SSA updating code using SSAUpdater.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85016 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 30bb4a4eefbff71da7454822c921cb26b4faccfc
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 24 20:32:49 2009 +0000
    
        lit: Support '>>' redirections when executing scripts internally.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85014 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 19ca34a85e80831b11e053bcf39ef0975ccb3807
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 24 20:32:43 2009 +0000
    
        Update CMake dependencies.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85013 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 55e0f3183257cfe642d5d6d7c26f6cc993e99b1a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 24 20:32:36 2009 +0000
    
        Teach macho-dump to dump UUIDs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85012 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87767b48a2f1945330c282d08851ac1ab0e80d3d
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 24 20:01:11 2009 +0000
    
        Make DominanceFrontierBase::print's output prettier.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85011 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d0f9ddd94abcd8400de8a1fa707b874a7a2a04e3
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 24 19:57:58 2009 +0000
    
        Make DominanceFrontier::addBasicBlock return the iterator for the newly
        inserted block.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85010 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b9ec1b66fb1f1560227e44cff952b0ea321022e5
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 24 19:56:23 2009 +0000
    
        Add an explicit keyword.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85009 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 937a17362032cec745e3036b3b6cc3ed39b70168
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Sat Oct 24 18:19:41 2009 +0000
    
        Revert back 85006 for now as it breaks PIC16 tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85008 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6c4a7ca6ed4f686dcd23e4f2798d9ee8a3119a60
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Sat Oct 24 18:02:44 2009 +0000
    
        Adding support for placing global objects in shared data memory.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85006 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 90c418ae50b0f979ca2326777bdabdc423cae564
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 24 05:27:19 2009 +0000
    
        various cleanups suggested by Duncan
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84993 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e040a936ccd5387a28b4228244f1e99fc6810d3c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 24 05:22:15 2009 +0000
    
        fix PR5287, a serious regression from my previous patches.  Thanks to
        Duncan for the nice tiny testcase.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84992 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 939460818ececec9e225ac70ab473c109300bbad
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Sat Oct 24 04:23:03 2009 +0000
    
        Auto-upgrade free instructions to calls to the builtin free function.
        Update all analysis passes and transforms to treat free calls just like FreeInst.
        Remove RaiseAllocations and all its tests since FreeInst no longer needs to be raised.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84987 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 09549369a6f7976934e21a49befb6d9f46abb56b
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Oct 24 02:07:42 2009 +0000
    
        80 col violation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84986 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8e4c4276b78da64a7b505a83d6d0b107ade85f0e
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Oct 24 00:27:00 2009 +0000
    
        Add some asserts to catch copyRegToReg() failures early
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84983 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 852f708684bb34dea4f22b2c46829c49e671e2b7
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Oct 24 00:19:24 2009 +0000
    
        Restrict Thumb1 register allocation to low registers, even for instructions that
        can access the hi regs. Our prologue and epilogue code doesn't know how to
        properly handle save/restore of the hi regs, so things go badly when we alloc
        them.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84982 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8940d798e7ceec957f1dc2a2862105d4180f3bda
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 23 23:09:19 2009 +0000
    
        Identity copies should not contribute to spill weight.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84978 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit db980fa1680f4a52fbd9c940284df543fcac97b1
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Oct 23 23:07:42 2009 +0000
    
        FIXME no longer applies. R12 and R3 are available for allocation
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84977 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6ddfe5ce0cc40627e9b10c69b71071b4aef46f52
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Fri Oct 23 22:37:43 2009 +0000
    
        Fix http://llvm.org/PR4822: allow module deletion after a function has been
        compiled.
    
        When functions are compiled, they accumulate references in the JITResolver's
        stub maps. This patch removes those references when the functions are
        destroyed.  It's illegal to destroy a Function when any thread may still try to
        call its machine code.
    
        This patch also updates r83987 to use ValueMap instead of explicit CallbackVHs
        and fixes a couple "do stuff inside assert()" bugs from r84522.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84975 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b168730cf56bd8e8bf0c4d7f494b79ca135e973d
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Fri Oct 23 21:09:37 2009 +0000
    
        Remove AllocationInst.  Since MallocInst went away, AllocaInst is the only subclass of AllocationInst, so it is no longer necessary.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84969 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b2a663a0a3b3711679bcbac053b788a3e2ec346a
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Fri Oct 23 20:54:00 2009 +0000
    
        Fix stylistic and documentation problems in ValueMap found by Nick Lewycky and
        Evan Cheng.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84967 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6530ae50de6d78e90f499b67ee9db01da894ff4a
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 23 17:57:43 2009 +0000
    
        APInt-ify the gep scaling code, so that it correctly handles the case where
        the scale overflows pointer-sized arithmetic. This fixes PR5281.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84954 91177308-0d34-0410-b5e6-96231b3b80d8
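
        A minimal sketch of the idea, assuming the usual llvm::APInt interface;
        not the actual analysis code:

        #include "llvm/ADT/APInt.h"

        // Scale a GEP index by the element size in pointer-width APInt
        // arithmetic; wrap-around is well defined (modulo 2^PtrBits),
        // unlike overflowing host-integer math (the PR5281 failure mode).
        llvm::APInt scaledOffset(uint64_t Index, uint64_t EltSize,
                                 unsigned PtrBits) {
          llvm::APInt Off(PtrBits, Index);
          Off *= llvm::APInt(PtrBits, EltSize);
          return Off;
        }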
    
    commit af1c05ad386e69449b53582da8f2b94c044e57db
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 23 17:10:01 2009 +0000
    
        Make LoopDeletion check the maximum backedge taken count, rather than the
        exact backedge taken count, when checking for infinite loops. This allows
        it to delete loops with multiple exit conditions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84952 91177308-0d34-0410-b5e6-96231b3b80d8
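
        For example (hypothetical C++), a dead loop with two exit conditions: the
        exact trip count is unknowable at compile time, but the maximum
        backedge-taken count is bounded, which is enough to prove termination:

        // ScalarEvolution cannot compute an exact backedge-taken count here
        // (it depends on flag[i]), but the maximum is bounded by n, so the
        // loop provably terminates and LoopDeletion may remove it.
        void deadLoop(int n, const bool *flag) {
          for (int i = 0; i < n; ++i)
            if (flag[i])
              break;
        }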
    
    commit a737af39c9cb53e5d3c82eef853b48212e0e5e6f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Oct 23 07:00:55 2009 +0000
    
        some stuff is done, we still have constantexpr simplification to do.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84943 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b81d5390eef32d58aac471f2d5d3d3f308c8c46b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Oct 23 06:57:37 2009 +0000
    
        teach libanalysis to simplify vector loads with bitcast sources.  This
        implements something out of Target/README.txt producing:
    
        _foo:                                                       ## @foo
        	movl	4(%esp), %eax
        	movapd	LCPI1_0, %xmm0
        	movapd	%xmm0, (%eax)
        	ret	$4
    
        instead of:
    
        _foo:                                                       ## @foo
        	movl	4(%esp), %eax
        	movapd	_b, %xmm0
        	mulpd	LCPI1_0, %xmm0
        	addpd	_a, %xmm0
        	movapd	%xmm0, (%eax)
        	ret	$4
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84942 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bec079507e277613cbd9cd3e1dd4cde4a5e14f89
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Oct 23 06:50:36 2009 +0000
    
        enhance FoldReinterpretLoadFromConstPtr to handle loads of up to 32
        bytes (i256).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84941 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fa97251b2ccff460d1c50ef1e954d45c8c1e43d7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Oct 23 06:23:49 2009 +0000
    
        teach libanalysis to fold int and fp loads from almost arbitrary
        non-type-safe constant initializers.  This sort of thing happens
        quite a bit for 4-byte loads out of string constants, unions,
        bitfields, and an interesting endianness check from sqlite, which
        is something like this:
    
        const int sqlite3one = 1;
        # define SQLITE_BIGENDIAN    (*(char *)(&sqlite3one)==0)
        # define SQLITE_LITTLEENDIAN (*(char *)(&sqlite3one)==1)
        # define SQLITE_UTF16NATIVE (SQLITE_BIGENDIAN?SQLITE_UTF16BE:SQLITE_UTF16LE)
    
        all of these macros now constant fold away.
    
        This implements PR3152 and is based on a patch started by Eli, but heavily
        modified and extended.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84936 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 221f9d4df58002c36af0a128282eb687b30e232c
    Author: Tanya Lattner <tonic at nondot.org>
    Date:   Fri Oct 23 06:20:06 2009 +0000
    
        Add 2.6 release note.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84934 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bb5d7afc9fba2575cc4ee672d30e120f7c5d4d20
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 23 05:58:34 2009 +0000
    
        Update tests for 84931.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84932 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 868e7d89bcd9c39376174a0ec12e4f6ea5ec25ff
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 23 05:57:35 2009 +0000
    
        X86 needs critical path anti-dependency breaking.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84931 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 29a9919606a4790d496a3db568883b080ec57ff1
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Fri Oct 23 04:02:51 2009 +0000
    
        Commit fixes for half precision I noted in review, so
        they don't get lost; I don't think the originator has
        write access.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84928 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1caa0c6a612427636a06109b999b254bda758b11
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Oct 23 01:37:01 2009 +0000
    
        This is passing on Darwin PPC.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84921 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9167ebd0342cbb558b21d3e96a979087105af3b6
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Fri Oct 23 00:59:10 2009 +0000
    
        Minor code cleanup.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84919 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 68e0213cf4b9f944a356cf578050cc17f19d1257
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Oct 23 00:01:05 2009 +0000
    
        Neuter stack protectors by only checking character arrays. This is what GCC
        does.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84916 91177308-0d34-0410-b5e6-96231b3b80d8
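
        Illustration (hypothetical C++, assuming -fstack-protector-style
        heuristics; the helper functions are made up): under this policy only
        the first function gets a guard.

        void readInto(char *p);   // hypothetical helpers
        void fillInts(int *p);

        void f() {        // gets a guard: contains a character array
          char buf[64];
          readInto(buf);
        }
        void g() {        // no guard under this policy: no char array
          int data[64];
          fillInts(data);
        }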
    
    commit e56e4a63f19ea3b7e40d84704d9b066698722db6
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Thu Oct 22 23:19:17 2009 +0000
    
        Allow the target to select the level of anti-dependence breaking that should be performed by the post-RA scheduler. The default is none.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84911 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b2bdf61b5f71e0971bdaa0e4cff9647031c6d8ee
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Thu Oct 22 22:16:17 2009 +0000
    
        Use 'waitpid' instead of 'wait'.  Basing Program::Wait() on 'wait()' prevents it from being correct within a multithreaded context.
    
        This addresses PR 5277 (Program::Wait is unsafe to call from multiple threads).
    
        Note: If waitpid() turns out to be non-portable, we can add more autoconf magic, or look into
        another solution.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84903 91177308-0d34-0410-b5e6-96231b3b80d8
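
        The distinction, as a POSIX sketch (illustrative, not the Program::Wait
        code): wait() reaps *any* child, so two threads waiting concurrently can
        steal each other's children, while waitpid() targets one specific pid.

        #include <cerrno>
        #include <sys/types.h>
        #include <sys/wait.h>

        // Reap exactly one specific child; children spawned by other
        // threads are left for their own waiters.
        int waitForChild(pid_t child, int &status) {
          pid_t r;
          do {
            r = waitpid(child, &status, 0);   // targets one pid only
          } while (r < 0 && errno == EINTR);  // retry if interrupted
          return r < 0 ? -1 : 0;
        }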
    
    commit 50e846dbda3390610f7b414dd14000ebb170c737
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Oct 22 22:11:22 2009 +0000
    
        Try r84890 again (adding ValueMap<>), now that I've tested the compile on
        gcc-4.4.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84902 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6c954193474a76852f5cec1f899b8e4184a15bd0
    Author: Eric Christopher <echristo at apple.com>
    Date:   Thu Oct 22 22:06:50 2009 +0000
    
        size_t, not unsigned here to silence a warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84900 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 65ca6b735393dc7d03990ecbda2a1cb4344183ed
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Oct 22 21:49:41 2009 +0000
    
        Random include cleanup.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84898 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1d1f0f3101559a0f42c74cc854b2fd8d10e795fa
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Oct 22 20:57:35 2009 +0000
    
        Fix OProfileJITEventListener after r84054 renamed CompileUnit to Scope.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84895 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9f6aeb72926b953e115d6fac3cbe138f569fb9a5
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Oct 22 20:48:59 2009 +0000
    
        Tidying up some code and comments. No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84894 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 510cbea6c3ebc7702616d655d4dd1367e17b462a
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Oct 22 20:23:43 2009 +0000
    
        Revert r84890, which broke the linux build.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84892 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87ed85965251c3c6dda530e083d5c1aedd3eef54
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Oct 22 20:10:20 2009 +0000
    
        Add a ValueMap<ValueOrSubclass*, T> type.  ValueMap<Value*, T> is safe to use
        even when keys get RAUWed and deleted during its lifetime. By default the keys
        act like WeakVHs, but users can pass a third template parameter to configure
        how updates work and whether to do anything beyond updating the map on each
        action.
    
        It's also possible to automatically acquire a lock around ValueMap updates
        triggered by RAUWs and deletes, to support the ExecutionEngine.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84890 91177308-0d34-0410-b5e6-96231b3b80d8
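
        A usage sketch, assuming the default configuration behaves as described
        above and the header path of this era; illustrative only:

        #include "llvm/ADT/ValueMap.h"
        #include "llvm/Value.h"

        // Default configuration: entries follow RAUW like a WeakVH would,
        // and are dropped when the key Value is deleted.
        void remember(llvm::Value *Old, llvm::Value *New) {
          llvm::ValueMap<llvm::Value*, unsigned> Counts;
          Counts[Old] = 1;
          Old->replaceAllUsesWith(New);  // entry is re-keyed to New
          // Now Counts.count(New) == 1 and Counts.count(Old) == 0.
        }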
    
    commit b10e68d04e54b6ad9303d84ee1fb6dfbe6c7ddc2
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Oct 22 19:36:54 2009 +0000
    
        Hide MetadataContext implementation details.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84886 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0f6f894016256484e5ccdfa0d94cfecb51aaeca9
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Oct 22 18:55:16 2009 +0000
    
        Fix getMDs() interface such that it does not expose implementation details.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84885 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0aa3004c5f4b6d56619a755bab708a5b9c65cc94
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Oct 22 18:25:28 2009 +0000
    
        Using TrackingVH instead of WeakVH or WeakMetadataVH.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84884 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c14af970afe7672fe152780c81c6b0b2c7eb75d
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Oct 22 17:40:37 2009 +0000
    
        Sort handler names to ensure deterministic behavior.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84878 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8ded08a1c38a0b55a33655592d279706977ccc35
    Author: Stuart Hastings <stuart at apple.com>
    Date:   Thu Oct 22 17:22:37 2009 +0000
    
        Trying again to tweak the top-level Makefile to facilitate an Apple-style build.
        Now with Clang-compatibility.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84872 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit df6a663b5b5a592c4dffdea732db0fe0d1453e14
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 22 16:52:21 2009 +0000
    
        Revert 84843.  Evan, this was breaking some of the if-conversion tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84868 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 310fdba5949e9e67c177aa1335daaf63db34ca7b
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Thu Oct 22 16:03:32 2009 +0000
    
        Include config.h in order to have HAVE_STDINT_H be defined.
        In the latest binutils the plugin-api.h needs this - without
        it the LLVM gold plugin fails to compile.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84861 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e369d54068dc4c4aaed61abf7ffeabd237719267
    Author: Nicolas Geoffray <nicolas.geoffray at lip6.fr>
    Date:   Thu Oct 22 14:35:57 2009 +0000
    
        Verify that the function and exception table have been allocated
        before freeing them.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84859 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ed248f3628f79be217ec9403c66f7bed2e31bde2
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Thu Oct 22 12:53:25 2009 +0000
    
        Check that accessing a struct field that occurs before the start
        of the struct (!) works correctly.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84853 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dc1fe3096ec2d5d64ae29cd465594aee276c3f7e
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Thu Oct 22 10:02:10 2009 +0000
    
        Test handling of record fields with negative offsets.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84851 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 812c4346ae2badfb759ba1a0c1187eda28092821
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Oct 22 09:28:49 2009 +0000
    
        Shift art to the right to keep GCC from complaining about multi-line comments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84849 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2147b5787db9e84e22c5d008db703ddce3ae9d86
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Oct 22 06:48:32 2009 +0000
    
        Move if-conversion before post-regalloc scheduling so the predicated instructions get scheduled properly.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84843 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de6ba0a0863d45638958b1c0e47aab50c9fbaaa0
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Oct 22 06:47:35 2009 +0000
    
        Load / store multiple was missing opportunities when the load / store bundles are at the end of the bb. The test case is already in; the bug is exposed by a subsequent commit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84842 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7bdc6d5cc22c8e45cbd1c2afec51a7d4c2597a12
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 22 06:44:07 2009 +0000
    
        move another load optimization from instcombine -> libanalysis.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84841 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c352ed0063694702fc056d18bacec36f73761b16
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 22 06:38:35 2009 +0000
    
        move 'loading i32 from string' optimization from instcombine
        to libanalysis.  Instcombine shrinking... does this even
        make sense???
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84840 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0527483e95ddd8a4a8019337eba4b1ecf1b90196
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 22 06:25:11 2009 +0000
    
        Move some constant folding logic for loads out of instcombine into
        Analysis/ConstantFolding.cpp.  This doesn't change the behavior of
        instcombine but makes other clients of ConstantFoldInstruction
        able to handle loads.  This was partially extracted from Eli's patch
        in PR3152.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84836 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a14a20abbaa4fa993b66ab746dfedad633e3f974
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Oct 22 05:11:00 2009 +0000
    
        Trim more includes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84832 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6e38eb0b9332d2af9d3d0db0bc94a1fdaa5c709d
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Oct 22 05:08:49 2009 +0000
    
        Trim include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84831 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9fdaa881e8c3af817312d7bd262a07731c6b0fc7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 22 04:47:09 2009 +0000
    
        testcase for PR4678 & rdar://7309675
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84830 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8e73058aea1ddca356b119286f82c6b4b25d7098
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Thu Oct 22 04:15:24 2009 +0000
    
        Forgot a declaration.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84828 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1bb5da04d19ddca55a536ebdc0e3b30f606b5b98
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Thu Oct 22 04:15:07 2009 +0000
    
        Make 'unset_option' work on list options.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84827 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 62b547a9ebaa6ec2a311748d9f38521ee1175230
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 22 03:42:27 2009 +0000
    
        fix warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84826 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 580b86f11cbb963155f8a7706ede7835c20691ba
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Oct 22 01:01:24 2009 +0000
    
        Fix the getHandlerNames() interface. Now it populates a client-supplied small vector with handler names.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84820 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c1a226dc19b3cfe39c1c42af2b53463c3b3d160a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 22 00:52:28 2009 +0000
    
        llvm-ld doesn't throw.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84819 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c343f9666768b6550d56792a76d0a79cb6ed508b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 22 00:50:24 2009 +0000
    
        this doesn't use EH either.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84818 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 96f3f0ea1784565c7f175c709d99fb44737a6077
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 22 00:46:41 2009 +0000
    
        nothing opt uses can throw; remove the try block and -fexceptions when
        building opt.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84816 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cd0702fb9d6f58c3d334b8a8649892b5bc1c9648
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 22 00:44:10 2009 +0000
    
        Add some command line options for twiddling the default data layout
        used by opt when a module doesn't specify one.  Patch from Kenneth Uildriks!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84814 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8776641c8937656b395e6bf9256f90c60c616103
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Oct 22 00:40:00 2009 +0000
    
        Don't generate sbfx / ubfx with negative lsb field. Patch by David Conrad.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84813 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b3254b4944c646570b94027cf701136f8a95352e
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Oct 22 00:22:05 2009 +0000
    
        Use StringRef to construct MDString.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84811 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b5ed7f070696277cfa62750e4907ba65208ed66f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 22 00:17:26 2009 +0000
    
        fix PR5262.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84810 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fadf5e5953fdd1925c1d71dec4d771ab06b2e186
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Thu Oct 22 00:16:00 2009 +0000
    
        Use special DAG-to-DAG preprocessing to allow mem-mem instructions to be selected.
        Yay for ASCII graphics!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84808 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fcfa3114e0922e6a127cd9e63dce10d98fb77757
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Thu Oct 22 00:15:17 2009 +0000
    
        Fix null pointer dereference.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84806 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9ab984e0f73ee6857fdf9222fb9b836126070b60
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Oct 22 00:10:15 2009 +0000
    
        Remove meaningless const.
        Pass StringRef by value.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84804 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 52a7362292fcc0cf59c3d38ce8955b709b8e00e6
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Oct 22 00:03:58 2009 +0000
    
        Revert the main portion of r31856. It was causing BranchFolding
        to break up CFG diamonds by banishing one of the blocks to the end of
        the function, which is bad for code density and branch size.
    
        This does pessimize MultiSource/Benchmarks/Ptrdist/yacr2, the
        benchmark cited as the reason for the change; however, I've examined
        the code and it looks more like a case of gaming a particular
        branch than of being generally applicable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84803 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 625fc8a70dea12493a862ff030cf63e1a3eeec8c
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Oct 21 23:57:35 2009 +0000
    
        Derive metadata hierarchy from Value instead of User.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84801 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3980f9bdc1bef099f011b193282624140d918f4d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 21 23:41:58 2009 +0000
    
        revert r84754, it isn't the right approach.  Edwin, please propose
        patches for fixes like this instead of committing them directly.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84799 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1190aa49e2e41435881d3a76052a29a2aff9f632
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 21 23:40:56 2009 +0000
    
        Missing piece of the ARM frame index post-scavenging conditionalization
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84798 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 505f7cffe7582df375c43540771e4c4970971848
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Wed Oct 21 23:29:32 2009 +0000
    
        Fix thinko noticed by Chris.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84797 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d6e39eca79fc5c778baf517f159b7d47e6964818
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Oct 21 23:29:12 2009 +0000
    
        Adjust testcases for msasm -> alignstack.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84796 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5ee3e4bf7e162b8daf43c2fd660f67095d00af3a
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Oct 21 23:28:00 2009 +0000
    
        Rename msasm to alignstack per review.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84795 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87c2b24e57deea38e8cf572a0c461bfc21296996
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Wed Oct 21 23:27:54 2009 +0000
    
        Remove pointless return; at end of function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84794 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3def926d532e753cab49cc1bacf660f49d626864
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 21 22:59:56 2009 +0000
    
        The spill restore needs to be resolved to the SP/FP just like the spill.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84792 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 86320274fce17431f8ad60ebd9e89c2e7bcacc04
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 21 22:59:24 2009 +0000
    
        Conditionalize ARM/T2 frame index post-scavenging while working out fixes
        for a few bugs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84791 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8e4759105fdc5ae0f14c0dce471817b59d7db268
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Wed Oct 21 22:55:51 2009 +0000
    
        Simplify code. No intended functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84790 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7f92d1399fe9f3df00ed1a0b0e864b2d63dc3cef
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Oct 21 21:57:13 2009 +0000
    
        Use StringRef.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84786 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2b1ec3887477e72d994e996b7e0422777172b7d0
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 21 21:36:27 2009 +0000
    
        Most of the NEON shuffle instructions do not support 64-bit element types.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84785 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 26c7b903a8f7ab6bf9bb75110a0ff5da74bf41fd
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Oct 21 21:25:09 2009 +0000
    
        Do not use SmallVector to store MDNode elements.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84784 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 11face1c79fa330d0f77be6ddb9357e2846a4dc2
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 21:15:18 2009 +0000
    
        Revert r84764, it breaks mingw build
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84783 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 002646297951aab37dd7e9a1a205158d20b46196
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Oct 21 21:09:48 2009 +0000
    
        XFAIL this test for PPC.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84782 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1afc8e26bf2580d78e824666534049f5693c09df
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 21 20:44:34 2009 +0000
    
        Improve handling of immediates by splitting 32-bit immediates into two 16-bit
        immediate operands when they will fit into the using instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84778 91177308-0d34-0410-b5e6-96231b3b80d8
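
        The arithmetic of the split itself is simple (a sketch, assuming the ARM
        movw/movt encoding; the commit's real work is in the instruction-selection
        plumbing):

        // A 32-bit immediate as two 16-bit halves, e.g. for an ARM
        // movw/movt pair (movw writes bits [15:0], movt bits [31:16]).
        void split32(unsigned Imm, unsigned &Lo16, unsigned &Hi16) {
          Lo16 = Imm & 0xFFFFu;          // movw rd, #Lo16
          Hi16 = (Imm >> 16) & 0xFFFFu;  // movt rd, #Hi16
        }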
    
    commit 85a066070801c2ab1d8a39dcb0c5033fb188e9d8
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 19:18:28 2009 +0000
    
        Add DAG printing for debugging the RMW stuff.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84776 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a3d275414331b53b0a3ed6490719417c016b3515
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 19:17:55 2009 +0000
    
        The RMW preprocessing stuff was incorrect. Grab the stuff from the x86 backend and disable some tests until it is clever enough to handle them.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84775 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7c040a63f12f348acaa0508f75a58df839bead52
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 19:17:18 2009 +0000
    
        Implement branch folding
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84774 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a791a042aabfbed7db07afd1d9475bac2b0f3cf9
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 19:16:49 2009 +0000
    
        Cosmetic changes, no functionality changes
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84773 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 67439f05c61cd2445416d7f4f32c4062827376da
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Wed Oct 21 19:11:40 2009 +0000
    
        Make changes to rev 84292 as requested by Chris Lattner.
    
        Most changes are cleanup, but there is 1 correctness fix:
        I fixed InstCombine so that the icmp is removed only if the malloc call is removed (which requires explicit removal because the Worklist won't DCE any calls since they can have side-effects).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84772 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e33f252643208755c07a6b43082f20d05b15af42
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 21 17:54:01 2009 +0000
    
        Fix NEON VST2LN instruction encoding.
        Patch by Johnny Chen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84767 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e0053d22f825bf4749beaa282a4263e945aebcbc
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 21 17:52:34 2009 +0000
    
        Revert 84732.  It was the wrong fix.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84766 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0d4250d2a10ec19f50c06baf6cb12f3fa97b0159
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Oct 21 17:33:41 2009 +0000
    
        Incorporate various suggestions Chris gave during metadata review.
    
        - i < getNumElements()  instead of getNumElements() > i
        - Make setParent() private
        - Fix use of resizeOperands
        - Reset HasMetadata bit after removing all metadata attached to an instruction
        - Efficient use of iterators
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84765 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0fab1d621d58906aee98e3ab703a0aa3ffd1ffde
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Wed Oct 21 17:27:23 2009 +0000
    
        Build shared lib instead of an archive.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84764 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 154f22e8e4bafc4013c1f6a516b7a8102669012d
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 21 15:26:21 2009 +0000
    
        Cleanup of frame index scavenging. Better code flow, and it more accurately
        handles T2 and ARM use cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84761 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 37f42df1740fcc9fbba819648361fb2494ae6bd8
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Wed Oct 21 13:22:20 2009 +0000
    
        Two corrections for docs/CMake.html.
    
        Patch by Victor Zverovich!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84759 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 81548e7d913da85e324988057f52e83cf16e3714
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Wed Oct 21 10:49:00 2009 +0000
    
        Fix PR5262: when folding select into PHI, make sure all operands are available
        in the PHI's Basic Block. This uses a conservative approach, because we don't
        have dominator info in instcombine.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84754 91177308-0d34-0410-b5e6-96231b3b80d8
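
        One trivially safe approximation of "available" without dominator info
        (hypothetical sketch, not the actual InstCombine check; header paths per
        this era): accept only values that are available in every block.

        #include "llvm/Constants.h"
        #include "llvm/Argument.h"

        // Hypothetical conservative test: constants and function arguments
        // dominate everything, so they are usable in the PHI's block.
        static bool triviallyAvailableEverywhere(llvm::Value *V) {
          return llvm::isa<llvm::Constant>(V) || llvm::isa<llvm::Argument>(V);
        }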
    
    commit 00e8f5a671512a8647a3ac418b9d269e3a77ea63
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Wed Oct 21 10:42:44 2009 +0000
    
        Add a pass to overlay pic16 data sections for function frames and automatic
        variables. This pass can be invoked by llvm-ld or opt to traverse the call graph
        to detect which function frames and their automatic variables can be overlaid.
        Currently this builds an archive, but it needs to be changed to a loadable module.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84753 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9da99aa14b010c5fe3f9e52f53636d7c73181b2d
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Wed Oct 21 10:38:59 2009 +0000
    
        Added more options to mcc16 driver.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84752 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 89ef28530fcc03f47245644eb67085b94efd0bac
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 21 08:15:52 2009 +0000
    
        Match more patterns to movt.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84751 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8c56cfc79c02c68bef1c8bc37f8ee7a47c5e2e32
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 21 07:56:02 2009 +0000
    
        Need a comma after imp-use.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84749 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 745edbe1bd3546db5109e59ea48f8a2a9ba569d8
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Wed Oct 21 06:01:54 2009 +0000
    
        De-bork CMake build
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84744 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4334a08e041230c998dcf2f0749f05f3a27e35a1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 21 05:07:57 2009 +0000
    
        Set comment string, patch by Johnny Chen!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84743 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 10460aa4c6677cee572aee1692649f713e83cfc3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 21 04:11:19 2009 +0000
    
        make GVN work better when TD is not around:
    
        "In the existing code, if the load and the value to replace it with are
        of different types *and* target data is available, it tries to use the
        target data to coerce the replacement value to the type of the load.
        Otherwise, it skips all effort to handle the type mismatch and just
        feeds the wrongly-typed replacement value to replaceAllUsesWith, which
        triggers an assertion.
    
        The patch replaces it with an outer if checking for type mismatch, and
        an inner if-else that checks whether target data is available and, if
        not, returns false rather than trying to replace the load."
    
        Patch by Kenneth Uildriks!
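        Roughly, the restructured logic has this shape (a sketch with
        approximate names; coerceToLoadType stands in for the real helper):

            if (ReplVal->getType() != L->getType()) {
              if (!TD)
                return false;  // no TargetData: bail out instead of asserting
              ReplVal = coerceToLoadType(ReplVal, L->getType(), *TD);
            }
            L->replaceAllUsesWith(ReplVal);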
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84739 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 34fdb7750482a92c156acf8a4be9d1550386717c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 21 04:10:24 2009 +0000
    
        tidy
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84738 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6a77292423d0cc8955e56147b98b7d39c253eefa
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 21 02:27:20 2009 +0000
    
        Fix some more NEON instruction encoding problems.
        Thanks to Johnny Chen for discovering the problem.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84732 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit df16f21228cf99bf3e8bd192afa1a8246e70341c
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Oct 21 02:21:34 2009 +0000
    
        Do not remove dead metadata for now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84731 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 52e0d9d2677019df9209470c4cfcdc0a69365842
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 21 02:15:46 2009 +0000
    
        Leave some NEON instruction encoding bits unspecified instead of setting
        a default value of zero.  This is important for decoding the instructions.
        Patch by Johnny Chen, with some changes from me, too.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84730 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 83a8e220504f71e2e88e73d866a4ca4542fc2caa
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Wed Oct 21 02:13:52 2009 +0000
    
        Clarify documentation on multi_val options.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84729 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2155c0a5a49c4bf016e0e335ebf893d132eb7530
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Wed Oct 21 02:13:13 2009 +0000
    
        Implement any_[not_]empty and list versions of switch_on and [not_]empty.
    
        Useful for OptionPreprocessor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84728 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fb035b45cbad41ccbc6190fdcc6b888528075bc5
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 21 01:44:44 2009 +0000
    
        Revert r84658 and r84691. They were causing llvm-gcc bootstrap to fail.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84727 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 30a61c148accb447031985fc98a1213c3bbbc027
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 21 01:10:37 2009 +0000
    
        IPSCCP is missing stuff.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84725 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ca24463d2c9df0ab7775a59ada835267a7043953
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Oct 21 00:51:40 2009 +0000
    
        This is passing on Darwin PPC.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84723 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ad525150667630fe4c47c88a2da230e283118ae0
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Wed Oct 21 00:43:48 2009 +0000
    
        Delete the MacOSJITEventListener per echristo's request. It was disabled by
        default and didn't work anyway.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84720 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2810c7f3181a3a11b0c69743a4b15cde9e69c875
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 00:14:15 2009 +0000
    
        Add note
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84713 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dd5e1eb3d54f5c045a413152fe63846176aa9bea
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 00:13:58 2009 +0000
    
        Be crazy and assert in case an unsupported modifier is passed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84712 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 616b9bb30f255c7531c70f3ece80402e2b0ed882
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 00:13:42 2009 +0000
    
        Handle external symbols
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84711 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c9a90ae4863e130386e8e4c23c54468afc54f2f7
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 00:13:25 2009 +0000
    
        Distinguish between pcrel imm operands and 'normal' ones. This fixes gross weirdness of asmprinting.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84710 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7199ce5ab3517ffe7d78878bfc78e5ad3fb42054
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 00:13:05 2009 +0000
    
        Add basic block operands & jump kinds
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84709 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3fcbbcd7945d1216a8572ad0c543ba9c98de3723
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 00:12:44 2009 +0000
    
        Ignore all implicit reg operands
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84708 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eb85a3f2fa93c04bbe6c5fae910c6cfaf1180984
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 00:12:27 2009 +0000
    
        Add a workaround for different memops prefixes
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84707 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit baffc35d25adf06f1e50d722523376eee7b31152
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 00:12:08 2009 +0000
    
        Checkpoint MCInst printer. We are (almost) able to print global / JT / constpool entries
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84706 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8a50ba5671ff1b454db4f22d8b5d22b4354d272b
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 00:11:44 2009 +0000
    
        Add reg-imm tests
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84705 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0f66de4cb1e6f103f3d7a5bdb4524f30a0de9ee3
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 00:11:27 2009 +0000
    
        Add simple operand printing stuff
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84704 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c7ceed86f92915c5244032d7ed36fe4f31c3904
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 00:11:08 2009 +0000
    
        Add experimental MSP430 MCInstLowering stuff
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84703 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 38640dd53e549765a07a47d02f628aeb39593df7
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 00:10:47 2009 +0000
    
        Wire up MSP430 printMCInst() method
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84702 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 425a93824d9237f0130fffad5d433789cbbec63f
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 00:10:30 2009 +0000
    
        Add MSP430 InstPrinter stub
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84701 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aa315908b7d6ca9eef91b45171ca7af5e5b8f6d0
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 21 00:10:00 2009 +0000
    
        Use proper target data
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84700 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 38e5be5da7a72d9318370359b20992765ce73152
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Tue Oct 20 22:50:43 2009 +0000
    
        Respect src register allocation requirements when breaking anti-dependencies. Remove some dead code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84691 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 04480acd34c5bf3312e6ddafb36d595788a26f0f
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 20 22:50:27 2009 +0000
    
        Cosmetic changes.
    
        s/validName/isValidName/g
        s/with an Instruction/to an Instruction/g
        s/RegisterMDKind/registerMDKind/g
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84689 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 38f422c8143b1a7c99796e40e295c677e11fbff9
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Oct 20 22:10:05 2009 +0000
    
        Fix -Asserts warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84687 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 672de0aa03c5943cfdd1e49206326d1955c8b3a1
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Tue Oct 20 21:37:45 2009 +0000
    
        Fix invalid for vector types fneg(bitconvert(x)) => bitconvert(x ^ sign)
        transform.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84683 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d4a322f88c5a91a151fc42f6d829e11c90ee38ce
    Author: Lang Hames <lhames at gmail.com>
    Date:   Tue Oct 20 21:28:22 2009 +0000
    
        Oops. Backing out 84681 - needs to wait for the indexing patch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84682 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 38f4d15c5884940ea775494bb1e64d5b4b3e9e87
    Author: Lang Hames <lhames at gmail.com>
    Date:   Tue Oct 20 21:25:13 2009 +0000
    
        Added some debugging output to pre-alloc splitting.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84681 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7d7f7fb7fbcf64bc232fe800da0c1f4533485936
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 21:04:26 2009 +0000
    
        add a real testcase for PR4313
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84676 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f3eec137dfa3cb5dd54f6d15f9606a91d8f43f99
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 21:00:47 2009 +0000
    
        add a test similar to that needed for PR4313, but that doesn't
        fail without the patch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84675 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c10dc49d8abaad87a2b82cd9e56e989fd7e1d020
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 20:57:58 2009 +0000
    
        the date on this testcase is wrong, it is unreduced, and it passes without the fix for PR4313.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84674 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8ec5024633e3330028b469bd35301da789b77dcc
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 20 20:41:13 2009 +0000
    
        Fix another place that calls Loop::contains a lot to construct a sorted
        container of the blocks and do efficient lookups. This makes
        isLoopSimplifyForm much faster on large loops, fixing a significant
        compile-time issue in builds with assertions enabled.
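        The idiom, sketched (assuming the usual LLVM headers of the time):

            // Build the set once, then answer membership queries with a
            // cheap hash lookup instead of a per-query Loop::contains()
            // walk over the loop's block list.
            SmallPtrSet<BasicBlock *, 32> LoopBBs(L->block_begin(),
                                                  L->block_end());
            bool HasOutsidePred = false;
            for (pred_iterator PI = pred_begin(BB), E = pred_end(BB);
                 PI != E; ++PI)
              if (!LoopBBs.count(*PI))
                HasOutsidePred = true;  // predecessor outside the loop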
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84673 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 82ad352a6a4e1074d64e41fe3e03f70efba30858
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 20:39:43 2009 +0000
    
        merge and filecheckize
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84672 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 55190138d93c8d975d7586f56e7b2bbf4be7cd73
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 20:33:46 2009 +0000
    
        merge two tests and convert to filecheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84671 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0ea1568b4728e4de08c89594b232fa9fba80b5d4
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Oct 20 20:31:31 2009 +0000
    
        Disable by default while debugging
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84669 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b95f0ead66316dc45e05b35edd21ab8069146255
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 20:27:49 2009 +0000
    
        alternate fix for PR5258 which avoids worklist problems, with reduced testcase.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84667 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 860507d1bbced055f324e91a18de4f5ddd650a4f
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Oct 20 20:19:50 2009 +0000
    
        add cmd line opt to disable frame index reuse for ARM and T2. debug aid.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84664 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5c6665c8637d9483f51c3448941298218748f7eb
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 20 20:06:09 2009 +0000
    
        Restore LoopUnswitch's block-oriented threshold. LoopUnswitch now checks both
        the estimated code size and the number of blocks when deciding whether to
        do a non-trivial unswitch. This protects it from some very undesirable
        worst-case behavior on large numbers of loop-unswitchable conditions, such
        as in the testcase in PR5259.
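        The guard is essentially of this shape (threshold names are
        illustrative, not the actual option names):

            // Refuse a non-trivial unswitch if either the estimated code
            // size or the number of blocks would grow past its threshold
            // once the loop is duplicated.
            if (Metrics.NumInsts > SizeThreshold ||
                Metrics.NumBlocks > BlockThreshold)
              return false;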
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84661 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 11eb3a6c0dc019c8e69c7853e2d3678940714d30
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Tue Oct 20 19:54:44 2009 +0000
    
        Checkpoint more aggressive anti-dependency breaking for post-ra scheduler.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84658 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a10c4a8a085bff9a71ee666c2ac214c00dfee9a8
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Oct 20 19:52:35 2009 +0000
    
        Better handle instructions that re-def a scratch register
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84657 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c28699c6e054f2033d93c04c87bf7ad88646bc1
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 20 18:14:49 2009 +0000
    
        Following r84485, add Defs = [EFLAGS] to the 32-bit lock instructions too.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84652 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2512e357a856660a1b30766997ae046eaffe21e5
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Oct 20 18:13:21 2009 +0000
    
        Move the Function*->allocated blocks map from the JITMemoryManager to the
        JITEmitter.
    
        I'm gradually making Functions auto-remove themselves from the JIT when they're
        destroyed. In this case, the Function needs to be removed from the JITEmitter,
        but the map recording which Functions need to be removed lived behind the
        JITMemoryManager interface, which made things difficult.
    
        This patch replaces the deallocateMemForFunction(Function*) method with a pair
        of methods deallocateFunctionBody(void *) and deallocateExceptionTable(void *)
        corresponding to the two startFoo/endFoo pairs.
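        Sketched, the interface goes from one combined entry point to a
        pair that mirrors the allocation pairs (paraphrasing the above):

            // Before:
            //   virtual void deallocateMemForFunction(const Function *F);
            // After, matching startFunctionBody/endFunctionBody and
            // startExceptionTable/endExceptionTable:
            virtual void deallocateFunctionBody(void *Body) = 0;
            virtual void deallocateExceptionTable(void *ET) = 0;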
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84651 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6caa53a1305c77e42d5099722ea05ab79ea0ab97
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Oct 20 16:33:57 2009 +0000
    
        Register re-use for scavenged frame indices must check for re-definition
        of the register in the instruction which kills the scavenged value.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84641 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8ab7dd082e1d3c2e8b7ee74c56502d5d0287f624
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 20 16:22:37 2009 +0000
    
        Make TranslateX86CC return COND_INVALID instead of aborting when it
        encounters an OEQ or UNE comparison, and update its callers to check
        for this return status and recover. This fixes a problem resulting from
        the LowerOperation hooks being called from LegalizeVectorOps, because
        LegalizeVectorOps only lowers vectors, so OEQ and UNE comparisons may
        still be at large. This fixes PR5092.
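        Caller-side, the recovery looks roughly like this (a sketch; the
        exact signature may differ):

            // Recover instead of aborting: an invalid code means the
            // caller must handle OEQ/UNE itself (e.g. as two compares).
            unsigned CC = TranslateX86CC(Cond, /*isFP=*/true, LHS, RHS, DAG);
            if (CC == X86::COND_INVALID)
              return SDValue();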
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84640 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 73b93c38b8150e9360f1089a0746e277b5ed910a
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Tue Oct 20 15:42:00 2009 +0000
    
        Fix PR5258, jump-threading creating invalid PHIs.
        When an incoming value for a PHI is updated, we must also update all other
        incoming values for the same BB to match, otherwise we create invalid PHIs.
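        The invariant, in code form (sketch):

            // A PHI may list the same predecessor more than once; when one
            // entry for that block changes, every entry must agree.
            for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i)
              if (PN->getIncomingBlock(i) == Pred)
                PN->setIncomingValue(i, NewVal);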
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84638 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a617464b940cbd549e0fc6600dc3fcdd447f6abe
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Tue Oct 20 15:15:09 2009 +0000
    
        Fix PR4313: IPSCCP was not setting the lattice value for the invoke instruction
        when the invoke had multiple return values: it set the lattice value only on the
        extractvalue.
        This caused the invoke's lattice value to remain the default (undefined), which
        was later propagated to the extractvalue's operand, incorrectly introducing
        undefined behavior.
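        In sketch form (approximate names), the fix amounts to giving the
        invoke itself a lattice value:

            // Mark the invoke too, so "undefined" never leaks from it into
            // its extractvalue users.
            if (InvokeInst *II = dyn_cast<InvokeInst>(V))
              markOverdefined(II);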
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84637 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fdaec7c0cbc5d6abdf7db2561af08f90a9856742
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Tue Oct 20 11:44:38 2009 +0000
    
        Random #include pruning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84632 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 02294d8ab2ac257005c9c7dc5c5039ee16d376cc
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Tue Oct 20 09:16:32 2009 +0000
    
        This file is replaced by PIC16Section.h.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84628 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4736b507e87b7690eeb2dab726367593aea226ef
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Oct 20 07:30:54 2009 +0000
    
        NNT: Implement "config mode", use -config path/to/llvm-config
    
         - This runs the nightly test and does all the submission logic, but using the
           LLVM build specified by the llvm-config.
    
         - Useful for, among other things, testing NNT itself.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84620 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 955b7394b5be916e47f93865b099bd57738b4e16
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Oct 20 07:30:46 2009 +0000
    
        NNT: Remove unused BUILDTYPE argument.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84619 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6d01a09e9033c335655d9652b6a609e41d4db8c4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 06:22:33 2009 +0000
    
        implement some more easy hooks.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84614 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9ec22035cfa3affa2e5e5a026fe228716831a781
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 06:15:28 2009 +0000
    
        Implement some hooks, make printOperand abort if unknown modifiers are
        present.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84613 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6cc1d3464849d68ba734f6ff8f54b934f53e218d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 05:58:02 2009 +0000
    
        t2MOVi32imm is currently always lowered by the Thumb2ITBlockPass.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84611 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 56f4502e733d0373166542f85ee641e9df72413e
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Oct 20 05:33:23 2009 +0000
    
        PowerPC ifdef'ing considered more complicated than one might like.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84603 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 75b4315d0399ae642397f7deec028adb1802a8bb
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Oct 20 05:15:36 2009 +0000
    
        Wire up the ARM MCInst printer, for llvm-mc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84600 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b457c89c5998fb359323bf27c43fa326c1016124
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 20 04:50:37 2009 +0000
    
        Re-apply r84295, with fixes to how the loop "top" and "bottom" blocks are
        tracked. Instead of trying to manually keep track of these locations
        while doing complex modifications, just recompute them when they're needed.
        This fixes a bug in which the TopMBB and BotMBB were not correctly updated,
        leading to invalid transformations.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84598 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2894c9052cf5b91ff9d422c31c5aa68eebb869e2
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Oct 20 04:23:20 2009 +0000
    
        Trim unnecessary includes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84597 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7e3f3cec672df37935f6c651d1135657fdfe5cdd
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 20 04:16:37 2009 +0000
    
        Add getTopBlock and getBottomBlock member functions to MachineLoopInfo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84596 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b2455d2907541391f396ae4eecd5814aecb6ce4d
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Tue Oct 20 04:09:50 2009 +0000
    
        Correct test for PowerPC.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84595 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a3a4de41a79969ce6e6ac96facfa9d4a01c5e7f1
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Oct 20 02:23:13 2009 +0000
    
        Revert "Tweak top-level Makefile to facilitate Apple-style build.", this is
        breaking Clang's Apple-style build.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84592 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3a284bff97818a9da5313655544ee6d22ff77a67
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Oct 20 02:23:05 2009 +0000
    
        NNT: Remove duplicate verbose print.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84591 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 86fc0730e6c0f4c2f6fdb0a9296fa4eb2802ef53
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Oct 20 01:32:47 2009 +0000
    
        Now that all ARM subtargets use frame index scavenging, the Thumb1 requires*
        functions are not needed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84587 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2f576c1745fb305f8db1f583022e7e4e93706841
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Oct 20 01:31:09 2009 +0000
    
        If the physical register being spilled does not have an interval, spill its sub-registers instead.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84586 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 81d33cdd24ad12950350365eb634134292e357e1
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Oct 20 01:26:58 2009 +0000
    
        Enable post-pass frame index register scavenging for ARM and Thumb2
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84585 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7960fbe0b2679bda3c6db9ebf0c13d0e0f1f8a3b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 01:11:37 2009 +0000
    
        lower ARM::MOVi32imm properly.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84583 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f5044e84ffc2dfe993b050d96403ec1170645903
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 00:56:16 2009 +0000
    
        add support for external symbols.  The mc instprinter can now handle
        reasonable code like Codegen/ARM/2009-02-27-SpillerBug.ll, producing
        identical output except for superior formatting of constant pool entries.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84582 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a2483d54b25eb8f19ca58419b01918eb5a4a0a1c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 00:52:47 2009 +0000
    
        get fancy: support basic block operands.  Yay for jumps.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84579 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 37cd7db2c8fa2817637d8b5ecafcc33a167865ed
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 00:46:11 2009 +0000
    
        add support for the 'sbit' operand; MOVi apparently has one.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84577 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 31c1d7b06d5f7ec47ec96d99275faee9624cffd0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 00:42:49 2009 +0000
    
        add support for instruction predicates.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84575 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e4eb7345116dbb08a81742416317b707b6c2cb76
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 20 00:40:56 2009 +0000
    
        implement printSORegOperand, add lowering for the nasty and despicable MOVi2pieces :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84573 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c8d6989f39c90afddab819efd17372519278e71c
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Oct 20 00:38:19 2009 +0000
    
        Refs: A8-598.
        Leave Inst{11-8}, which represents the starting byte index of the extracted
        result in the concatenation of the operands, unspecified.
    
        Patch by Johnny Chen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84572 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 77ef7774b592306694272b141712dec57a94caab
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Oct 20 00:19:08 2009 +0000
    
        Add missing encoding bits to NLdSt class of instructions.
    
        Patch by Johnny Chen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84570 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6ac9153989bc41f2b4f6b20379870e7caf22b86d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 23:35:57 2009 +0000
    
        X86 should ignore implicit regs when lowering to MCInst also,
        no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84567 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aabf5bee0bd2fc12e02937511e4e28d400c8211e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 23:31:43 2009 +0000
    
        handle addmode4 modifiers, fix a fixme in printRegisterList
        by ignoring all implicit regs when lowering.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84566 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d9e0e2a295cd5d167dfe579d2f889f091c5e7f95
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 23:05:23 2009 +0000
    
        simplify by using the twine form of GetOrCreateSymbol
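        For example (illustrative; uses the Twine overload from r84561,
        below):

            // Compose the label name without materializing an intermediate
            // std::string; the Twine tree is rendered once, inside
            // GetOrCreateSymbol.
            MCSymbol *Sym = Ctx.GetOrCreateSymbol(Twine("LCPI") +
                                                  Twine(FnNum) + "_" +
                                                  Twine(LabelId));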
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84565 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 07e97529cb7a8e1608ceedefb5ba5fde32cdb74f
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Mon Oct 19 23:00:00 2009 +0000
    
        Updated cmake library dependencies.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84564 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 46999f20018606012039e3fe0861ea8d0a632593
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Oct 19 22:57:03 2009 +0000
    
        Enable allocation of R3 in Thumb1
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84563 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e1ca870b6187d888ca1b83095fc0a37b38a33613
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 22:51:16 2009 +0000
    
        use EmitLabel instead of text emission
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84562 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2743f3605a9fcd16f1045a8d9bcb8950cab0860e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 22:49:00 2009 +0000
    
        add a twine version of MCContext::GetOrCreateSymbol.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84561 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4c565d8576a1ceb933301e7437723978ba985553
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 22:33:05 2009 +0000
    
        lower the ARM::CONSTPOOL_ENTRY pseudo op, giving us constant pool entries
        like:
    
        @ BB#1:
        	.align	2
        LCPI1_0:
        	.long	L_.str-(LPC0+8)
    
        Note the proper indentation of the label :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84558 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 976a864c9995dced48ffaca18ff68921f0a4d5df
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Oct 19 22:27:30 2009 +0000
    
        Adjust the scavenge register spilling to allow the target to choose an
        appropriate restore location for the spill as well as perform the actual
        save and restore.
    
        The Thumb1 target uses this to make sure R12 is not clobbered while a spilled
        scavenger register is live there.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84554 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 817d551945e7ce46b12f8b3d022c7a7a0e90275e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 22:23:04 2009 +0000
    
        add MCInstLower support for lowering ARM::PICADD, a pseudo op for pic stuffola.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84553 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a7d4c7c42ec13195de5096d6d82d68e175a1fde4
    Author: Owen Anderson <resistor at mac.com>
    Date:   Mon Oct 19 22:14:22 2009 +0000
    
        Refactor lookup_or_add to contain _MUCH_ less duplicated code.  Add support for
        numbering first class aggregate instructions while we're at it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84547 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c8bdad042bc8e9f7c0aad7194ab4c120f38aba01
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 22:09:23 2009 +0000
    
        add register list and hacked-up addrmode #4 support; we now get this:
    
        _main:
        	stmsp! sp!, {r7, lr}
        	mov r7, sp
        	sub sp, sp, #4
        	mov r0, #0
        	str r0, [sp]
        	ldr r0, LCPI1_0
        	bl _printf
        	ldr r0, [sp]
        	mov sp, r7
        	ldmsp! sp!, {r7, pc}
    
        Note the unhappy ldm/stm because of modifiers being ignored.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84546 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 965998b0cd1ca7f0e3a389cd2086034fc16a5d2b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 21:59:25 2009 +0000
    
        revert r84540, fixing build breakage I didn't see because of
        broken makefile deps :(
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84544 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f1977ef5f64e12407799dbb7b6c113b4b0567db1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 21:57:05 2009 +0000
    
        add addrmode2 support, getting us up to:
    
        _main:
        	stm ,
        	mov r7, sp
        	sub sp, sp, #4
        	mov r0, #0
        	str r0, [sp]
        	ldr r0, LCPI1_0
        	bl _printf
        	ldr r0, [sp]
        	mov sp, r7
        	ldm ,
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84543 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9521525ee4beef3f057a4b1921399322c5060f97
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 21:53:00 2009 +0000
    
        add jump tables, constant pools and some trivial global
        lowering stuff.  We can now compile hello world to:
    
        _main:
        	stm ,
        	mov r7, sp
        	sub sp, sp, #4
        	mov r0, #0
        	str r0,
        	ldr r0,
        	bl _printf
        	ldr r0,
        	mov sp, r7
        	ldm ,
    
        Almost looks like arm code :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84542 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 96a4cb23a20b1c7c67bd04759f2a67bf4f289bbd
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Mon Oct 19 21:47:22 2009 +0000
    
        Malloc calls are marked NoAlias, so the code below makes the isMalloc() check
        redundant.  Removing the isMalloc() check.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84541 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ed37b58917e817139e25466c91e36813435fb1f9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 21:45:31 2009 +0000
    
        pass mangler in as a reference instead of a pointer.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84540 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 33576323678a64f89b0f2598b54a8a9b6d95b93a
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Mon Oct 19 21:24:28 2009 +0000
    
        More refactoring...
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84537 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ba9c060b0e2a2bc7cadcbc2bff3a8e025f00213d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 21:23:15 2009 +0000
    
        reduce #includes
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84536 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cc309683cabf05c1ec7175ecdaf5ba97a36fb1cc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 21:21:39 2009 +0000
    
        add printing support for SOImm operands, getting us to:
    
        _main:
        	stm ,
        	mov r7, sp
        	sub sp, sp, #4
        	mov r0, #0
        	str r0,
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84535 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b472cd8f456dc9cb9e8071c2ea9e068a07566308
    Author: Owen Anderson <resistor at mac.com>
    Date:   Mon Oct 19 21:14:57 2009 +0000
    
        Simplify some code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84533 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 799e7c15430c30c25ad374e2022c23b88a27a7fe
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 20:59:55 2009 +0000
    
        wire up some basic printOperand goodness, giving us stuff like this before
        we abort:
    
        _main:
        	stm ,
        	mov r7, sp
        	sub sp, sp,
        	mov r0,
        	str r0,
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84532 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1526e99433566fb172e9f82340455fbf783963a0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 20:21:05 2009 +0000
    
        add the files that go with the previous rev
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84531 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit da6d01a3c6e8fcf1685b7903f966ac6c27cd6c62
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 20:20:46 2009 +0000
    
        wire up skeletal support for having llc print instructions
        through mcinst lowering -> mcinstprinter, when llc is passed
        the -enable-arm-mcinst-printer flag.  Currently this
        is very "aborty".
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84530 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ee99626c6bd31b4c0eea7de52945f514d7b2b0c6
    Author: Owen Anderson <resistor at mac.com>
    Date:   Mon Oct 19 20:11:52 2009 +0000
    
        Banish ConstantsLock.  It's serving no purpose other than slowing things down
        at the moment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84529 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fd73ff4241505512fbaba4781c019338774893a7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 19:59:05 2009 +0000
    
        wire up ARM's printMCInst method.  Now llvm-mc should be able to produce
        "something" when printing MCInsts, it will just be missing all the
        operand info.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84528 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8d01409390c7392f92540fa207556c2acc2ed89b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 19:56:26 2009 +0000
    
        stub out a minimal ARMInstPrinter.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84527 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit be7efccdb9578e39f224bf36a66c191dd6943e9e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 19:51:42 2009 +0000
    
        remove strings from instructions which are never asmprinted.
        All of these "subreg32" modifier instructions are handled
        explicitly by the MCInst lowering phase.  If they got to
        the asmprinter, they would explode.  They should eventually
        be replaced with correct use of subregs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84526 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e45e6f933f7296f36a21918ef8d59431550c8eda
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Mon Oct 19 18:49:59 2009 +0000
    
        Clean up the JITResolver stub/callsite<->function maps.
    
        The JITResolver maps Functions to their canonical stubs and all callsites for
        lazily-compiled functions to their target Functions. To make Function
        destruction work, I'm going to need to remove all callsites on destruction, so
        this patch also adds the reverse mapping for that.
    
        There was an incorrect assumption in here that the only stub for a function
        would be the one caused by needing to lazily compile it, while x86-64 far calls
        and dlsym-stubs could also cause such stubs, but I didn't look for a test case
        that the assumption broke.
    
        This also adds DenseMapInfo<AssertingVH> so I can use DenseMaps instead of
        std::maps.
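        The map structure, sketched from the description above (member
        names are paraphrased):

            DenseMap<AssertingVH<Function>, void *> FunctionToStubMap;
            DenseMap<void *, AssertingVH<Function> > CallSiteToFunctionMap;
            DenseMap<AssertingVH<Function>, SmallPtrSet<void *, 1> >
                FunctionToCallSitesMap;  // the new reverse mapping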
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84522 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c7f8d32d6edd39000fbb2643828d86d2235b139
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 18:49:14 2009 +0000
    
        simplify code, reducing string thrashing.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84521 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d9a6f16cfc733a242378b2bd57f744e46f88fed5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 18:44:38 2009 +0000
    
        switch hidden gv stubs to use MachineModuleInfoMachO instead of a custom map.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84520 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5212138dbd0f9b3f1b0ce4db245fc43f7e29b242
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 18:38:33 2009 +0000
    
        use MachineModuleInfoMachO for non-lazy gv stubs instead of a private map.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84519 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d3a3110595d5ac406334584886588e5c94ea9d78
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 18:27:56 2009 +0000
    
        convert to filecheck syntax and make a lot more aggressive.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84517 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2e1e52272a4d8168ae18689877ae1fe7737ced4f
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Mon Oct 19 18:21:09 2009 +0000
    
        Revert r84295, this unbreaks llvm-gcc bootstrap on x86-64/linux
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84516 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 43bc0c42b35039d7ad103d3dfd5e075c9fb52a61
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 18:18:07 2009 +0000
    
        rename test
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84515 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 14f3c57043c80ba21e32c04f4b911872a154fc86
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 18:11:25 2009 +0000
    
        remove dead map
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84513 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 882baf592d34dab3679b411d457b5a36f6b53a22
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 18:08:02 2009 +0000
    
        don't bother trying to avoid emitting redundant constant pool alignment directives.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84512 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 55112546ec17566bda9c1baece0837b0f8645d43
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 18:03:41 2009 +0000
    
        remove accidental comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84510 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fe284f7efed56c46bae93eb3980924544ec9f513
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 18:03:08 2009 +0000
    
        emit .subsections_via_symbols through MCStreamer instead of textually.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84509 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b8db78262d2760d34ef451cdde7b20b9177a2f0e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 17:59:19 2009 +0000
    
        cleanup doFinalization -> EmitEndOfAsmFile.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84508 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 79caa11f7f8f871914ae6915d84c8548cf194b04
    Author: Stuart Hastings <stuart at apple.com>
    Date:   Mon Oct 19 17:53:54 2009 +0000
    
        Tweak top-level Makefile to facilitate Apple-style build.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84507 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 186a6a5b4f2db03622c216773e841de4255357eb
    Author: Nate Begeman <natebegeman at mac.com>
    Date:   Mon Oct 19 17:31:16 2009 +0000
    
        PR 5245 - The immediate size target flag was not set on 3A-prefixed SSSE3 instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84506 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cd13d085ce8ca450d379abf127ced1cb097cce10
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 19 16:04:50 2009 +0000
    
        Fix SplitBlockPredecessors' LoopInfo updating code to handle the case
        where a loop's header is being split and it has predecessors which are not
        contained by the most-nested loop which contains the loop.
        This fixes PR5235.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84505 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d20dd8f42feee188300b2f59ea4a9c0ab4809690
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 19 14:56:05 2009 +0000
    
        Fix a typo in a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84504 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 988308e1aa1aa43502e9dd353cf50bc409ca5f96
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 19 14:52:05 2009 +0000
    
        Change a few instance variables to be local variables.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84503 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3ceed37700dafc4ba39a17bfc47f9e4c468e6e7a
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 19 14:47:32 2009 +0000
    
        Change instnamer to name arguments "arg" instead of "tmp" for clarity, and
        to name basic blocks "bb" instead of "BB", for consistency.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84502 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c5382fb5c471313e455811e52766ff319d5ff389
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 13:20:56 2009 +0000
    
        NNT: Add -parallel-test option, which runs llvm-test with
        ENABLE_PARALLEL_REPORT.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84497 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 96aab414701203ede877aeac1d4a1875f63022ed
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 13:20:50 2009 +0000
    
        NNT: Don't hard code -l3.0 argument to make, this is very server dependent. Users who care can use -compileflags for this.
    
        Also, fix make clean call and a few other tweaks.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84496 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f41928152aad569778c5fd8066da19eaae636fee
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 13:20:44 2009 +0000
    
        NNT: Fix refactoro; I dropped the list of all (llvm-test) tests. I'm sure it was named dejagnu_test_list for a good reason.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84495 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a2b34e390409e9a8c33f603782d0b6ec4ec26c01
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 13:20:38 2009 +0000
    
        NNT: Lift conditional logic out of test steps.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84494 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2b9446e3b98f9d9f3f51dc108691768138f21f81
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 13:20:31 2009 +0000
    
        NNT: Now that build & test steps are factored out, coalesce all the logic together.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84493 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2942b0204853452b8236646b01e50cd1d29af9a6
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 13:20:25 2009 +0000
    
        NNT: Sink code for running nightly test into subroutine.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84492 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ce25dafc1634a23335e133d77c2d0bff89e7b3a6
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 13:20:19 2009 +0000
    
        NNT: Tweaks and simplifications.
         - Split out configure log.
         - Kill off GetRegexNum.
         - Fix GetRegex to not return previous match on failure.
         - Remove dead code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84491 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f2ec0094a0679b9265e1722c6422f2020c9f2ba6
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 13:20:13 2009 +0000
    
        NNT: Move build code into subroutine.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84490 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4493e94e93c1c26fd70a9aeff21044b01a3f84f8
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 13:20:06 2009 +0000
    
        NNT: Move source checkout code into subroutine.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84489 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3c52ad157a2afc9cf0cdc24d840ec95c4f97b86b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 13:20:00 2009 +0000
    
        NNT: Remove .{o,a} size info, this is better tracked elsewhere.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84488 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6981e023d9d22b1e3e8d46b84da8c32050b25521
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 13:19:53 2009 +0000
    
        NNT: Remove code to track build warnings, the buildbots cover this.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84487 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ce819f1cfc4fd012a8f7515f57cbe7fd6747c67b
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Mon Oct 19 11:00:58 2009 +0000
    
        Fix PR5247, "lock addq" pattern (and other atomics), it DOES modify EFLAGS.
        LLC was scheduling compares before the adds, causing wrong branches to be taken
        in programs, resulting in misoptimized code wherever atomic adds were used.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84485 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0f5a54282ccce79a8d4b2a6b28515bfe86620234
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 09:19:32 2009 +0000
    
        Also check for __POWERPC__ when skipping these tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84482 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dce5acc21533ac83a5ba75642c0cfc4c02155937
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 09:19:19 2009 +0000
    
        NNT: Remove hard-coded BuildDir and WebDir; users should have to specify these.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84481 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9ec7b9e26cc036a52125efaa186321d2bbf59c1c
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 09:19:09 2009 +0000
    
        NNT: Remove "CVS Stats", this isn't particularly useful and can be better done by the server or user.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84480 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 414cffec0a45ba543ddca53e8a24e8a833f1588a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 09:18:54 2009 +0000
    
        NNT: Remove now-unused -cvstag argument and CVSROOT code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84479 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit affc0a59ae87e928f31dc939fe065e3903bd11e4
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 09:18:46 2009 +0000
    
        NNT: Remove -usecvs option, this is very old.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84478 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6d2151fa92c53725a76d4679c22e07f54d7dfeac
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 09:18:37 2009 +0000
    
        NNT: Remove -debug argument, it is unused.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84477 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ae3faf16d09e5247aca536bd9b905e37a38a55f2
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 09:18:24 2009 +0000
    
        Regroup NewNightlyTest.pl options
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84476 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0ea0b60777d86112abcbee9b15dd5073f1a366d1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 07:10:59 2009 +0000
    
        various cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84471 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dd0729d4def76cc0a73fad25eeeba48fd045f776
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 05:51:03 2009 +0000
    
        simplify.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84465 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 56121b8af3863dafe75a4c6caf57be85d1ff0e78
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 05:34:14 2009 +0000
    
        eliminate md_on_instruction.ll, md_on_instruction2.ll is a superset of it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84464 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c6a1c72ff0baf32402308e691695dc40c52d91c4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 19 05:31:10 2009 +0000
    
        clean up after metadata changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84463 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c62ebee717f224491aa4239b567e11c7ac78ea0f
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 03:54:21 2009 +0000
    
        lit: When running Tcl scripts via shell, try harder to find 'bash', but fall
        back to running them internally if that fails. PR5240.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84462 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a4e496517b45bd4ac4181a9c4748552409792a96
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 03:54:13 2009 +0000
    
        Add link to 'lit' from CommandGuide.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84461 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 618232c08a37f39031de8a94fa91bc6a6186373b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Oct 19 03:53:55 2009 +0000
    
        Teach lit that the .c files in 'test/CodeGen/CellSPU/useful-harnesses' aren't tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84460 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 080f8e27b25d17bba69e18e1a8eddaa32288604c
    Author: Nate Begeman <natebegeman at mac.com>
    Date:   Mon Oct 19 02:17:23 2009 +0000
    
        Add support for matching shuffle patterns with palignr.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84459 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit af4fb5a5cc937dd6aa982e393c6574707913d0ed
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sun Oct 18 22:51:30 2009 +0000
    
        Refactoring, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84450 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c91317828c58e9780e7d6b6c127dc87cf61fa1f5
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 18 19:58:47 2009 +0000
    
        Spill slots cannot alias.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84432 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6e2f5a4d9034c358ba5b077e24b4279d1194e328
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 18 19:57:27 2009 +0000
    
        Turn on post-alloc scheduling for x86.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84431 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cbbfd5928726deeefc1ae03df8a70aca8a225197
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 18 18:31:31 2009 +0000
    
        Oops. I forgot to change the tests first. Disable post-alloc scheduling.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84425 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 174e2cf99df2011a0d56e96dcdb32c1ccaf4f464
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 18 18:16:27 2009 +0000
    
        - Revert parts of 84326 and 84411. Distinguishing between fixed and
          non-fixed stack slots and giving them different PseudoSourceValues did
          not fix the problem of post-alloc scheduling miscompiling llvm itself.
        - Apply Dan's conservative workaround by assuming any non-fixed stack slot
          can alias other memory locations. This means a load from spill slot #1
          cannot move above a store of spill slot #2 (sketched below).
        - Enable post-alloc scheduling for x86 at optimization level Default and
          above.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84424 91177308-0d34-0410-b5e6-96231b3b80d8
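
    A minimal C++ sketch of the conservative rule in the second bullet above;
    StackSlot and mayAlias() are illustrative stand-ins, not LLVM's actual
    PseudoSourceValue interface:

        #include <iostream>

        struct StackSlot {
          int FrameIndex;
          bool IsFixed;  // fixed objects have negative frame indices
        };

        // Hypothetical alias query between two stack slots.
        bool mayAlias(const StackSlot &A, const StackSlot &B) {
          // Two fixed objects overlap only if they are the same object.
          if (A.IsFixed && B.IsFixed)
            return A.FrameIndex == B.FrameIndex;
          // Conservative workaround: any non-fixed slot may alias anything,
          // so the scheduler keeps a load of spill slot #1 below an earlier
          // store to spill slot #2.
          return true;
        }

        int main() {
          StackSlot Spill1{1, false}, Spill2{2, false};
          std::cout << mayAlias(Spill1, Spill2) << "\n";  // prints 1
        }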
    
    commit 3820341c59acd9a1d0a664a2ce86745015d8fccb
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 18 06:27:36 2009 +0000
    
        Only fixed stack objects and spill slots should get FixedStack PseudoSourceValues.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84411 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 66f2cb90e57e03f3f141ae225969c90a00ac2784
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 18 05:27:44 2009 +0000
    
        remove some nonascii weird stuff
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84410 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 234d587487a9c825cef6c5f96a430f69250cbdba
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 18 05:20:17 2009 +0000
    
        remove a now-pointless regtest
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84409 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 708770bbcab0eeaad849b10542fcba56c72d23fc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 18 05:09:15 2009 +0000
    
        add some fixme's
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84408 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5451359c1e3d61ff28c2dab42cd6fc4b3f802344
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 18 05:08:07 2009 +0000
    
        punctuate properly
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84407 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 24a2abdaedcac06cea91153c375373e044dfe1ff
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 18 05:03:41 2009 +0000
    
        remove testcase for dead pass
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84406 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2e4d92851e2a8bc442034b7b8fe1d3bf1d951d2d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 18 05:03:00 2009 +0000
    
        fix test
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84405 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b68e27a04f32573543c23bbcd33518cd9257905a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 18 05:02:09 2009 +0000
    
        remove the IndMemRemPass, which only made sense for when malloc/free were intrinsic
        instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84404 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 22a687f687f626a83c0608c45de864d76f503604
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 18 04:58:34 2009 +0000
    
        fix the other issue with ID's, hopefully really fixing the linux build.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84403 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a07bed1e7ea771a4cf7e0aba70fd22c812ee3b5a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 18 04:55:26 2009 +0000
    
        tighten up test3, add test3a for the converse
        transform, which isn't happening yet.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84402 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5b554f7452046589a922493f4a062c133da401f8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 18 04:50:18 2009 +0000
    
        tighten test2, add a test that it doesn't get transformed in the invalid edge case.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84401 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 285e6506294aba9e2738a3d645e1ea93101d1b04
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Oct 18 04:41:36 2009 +0000
    
        Merge tests into modref.ll. Also add a test for r84174 at Chris' behest!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84400 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bc8aea94fd079629681425b0d35d296ae687db88
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 18 04:27:14 2009 +0000
    
        fix some problems with ID definitions, which will hopefully fix the build bots.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84399 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 82e741607967f2d0f16af4be7d47e91ba62c43a8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 18 04:10:40 2009 +0000
    
        add function passes for printing various dominator datastructures
        accessible through opt.  Patch by Tobias Grosser!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84397 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2f3311465b03efa4c14cc46431768a269610077f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 18 04:09:11 2009 +0000
    
        make DOTGraphTraits public, patch by Tobias Grosser!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84396 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 288dde63a65a687f73227a676c5d6d7a0656a29a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 18 04:05:53 2009 +0000
    
        add nodes_begin/end/iterator for dominfo, patch by Tobias Grosser!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84395 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c3ad8096d8df43b24644d6fdb2485d57d12734ce
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Sun Oct 18 02:05:42 2009 +0000
    
        Support GoogleTest's "typed tests"
        (http://code.google.com/p/googletest/wiki/GoogleTestAdvancedGuide#Typed_Tests)
        in lit.py.  These tests have names like "ValueMapTest/0.Iteration", which broke
        when lit.py os.path.join()ed them onto the path and then assumed it could
        os.path.split() them back off.  This patch shifts path components from the
        testPath to the testName until the testPath exists.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84387 91177308-0d34-0410-b5e6-96231b3b80d8
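
    The actual patch is in lit's Python, but the component-shifting idea reads
    the same in a standalone C++17 sketch; splitTestPath() is a made-up name:

        #include <filesystem>
        #include <iostream>
        #include <string>
        #include <utility>

        namespace fs = std::filesystem;

        // Peel trailing components off the on-disk path and move them into
        // the test name until what remains actually exists.
        static std::pair<fs::path, std::string> splitTestPath(fs::path P) {
          std::string Name;
          while (!P.empty() && !fs::exists(P)) {
            std::string Comp = P.filename().string();
            Name = Name.empty() ? Comp : Comp + "/" + Name;
            P = P.parent_path();
          }
          return {P, Name};
        }

        int main() {
          auto [Dir, Name] = splitTestPath("unittests/ValueMapTest/0.Iteration");
          std::cout << Dir << " :: " << Name << "\n";
        }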
    
    commit 56939cebe60e2fd6ae196570764b8149f8f3b88b
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Oct 18 00:42:07 2009 +0000
    
        Add a couple new testcases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84385 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bb5dd54e60ded55b75c1bfbc873210bc46c4470d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 17 23:59:51 2009 +0000
    
        replace a useless test with a useful one
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84383 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 918fdb719a332177af23aaa94bdb667979205412
    Author: Eric Christopher <echristo at apple.com>
    Date:   Sat Oct 17 23:56:18 2009 +0000
    
        More warnings patrol: Another unused argument and more implicit
        conversions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84382 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b07b4b26cdda848985cf1d8e62b9d98fdc35b0f0
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sat Oct 17 23:52:26 2009 +0000
    
        Fix test/Bindings/Ocaml/vmcore.ml. When IRBuilder::CreateMalloc was removed,
        LLVMBuildMalloc was reimplemented but with the bug that it didn't insert the
        resulting instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84374 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4793c939b4b654a0902323f8ae32c46b5169a649
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 17 23:48:54 2009 +0000
    
        inline isGEP away.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84373 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 844797e20c5a78cdefd967a979a19415398ddd2c
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 17 23:15:04 2009 +0000
    
        Fix my -Asserts warning fix.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84372 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fec939c71ac163fd0ad235cd7da545cf4472fc3b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 17 21:53:27 2009 +0000
    
        Teach vm core to more aggressively fold 'trunc' constantexprs,
        allowing it to simplify the crazy constantexprs in the testcases
        down to something sensible.  This allows -std-compile-opts to
        completely "devirtualize" the pointers to member functions in
        the testcase from PR5176.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84368 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a9a0841baa2ec61270e133ffbfbe112158ba8a42
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 17 21:51:19 2009 +0000
    
        remove # uses from FileCheck lines.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84367 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 568b8beb18a5e4752307f149e45f71569af60e74
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 17 21:31:19 2009 +0000
    
        rename test
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84364 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 826b38064530cd5a666d7ccddc96465476c4bdeb
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 17 20:43:42 2009 +0000
    
        Move UnescapeString to a static function for its sole client; it's inefficient and broken.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84358 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8c5f301e26efb526c76870107210598d5008c4dd
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 17 20:43:29 2009 +0000
    
        Remove llvm::EscapeString, raw_ostream::write_escaped is much faster.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84357 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f89f4be5637df6ac149e83c3aa7b52bc4e6d7db0
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 17 20:43:19 2009 +0000
    
        Use raw_ostream::write_escaped instead of EscapeString.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84356 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6af700555f6c97d33872c2bbf316214924247065
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 17 20:43:08 2009 +0000
    
        Add raw_ostream::write_escaped, for writing escaped strings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84355 91177308-0d34-0410-b5e6-96231b3b80d8
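
    Roughly what "write escaped" means here, as a plain C++ sketch rather than
    the raw_ostream member itself (the exact escape choices below are
    illustrative assumptions):

        #include <cctype>
        #include <cstdio>
        #include <iostream>
        #include <string>

        static void writeEscaped(std::ostream &OS, const std::string &S) {
          for (unsigned char C : S) {
            switch (C) {
            case '\\': OS << "\\\\"; break;
            case '\t': OS << "\\t";  break;
            case '\n': OS << "\\n";  break;
            case '"':  OS << "\\\""; break;
            default:
              if (std::isprint(C)) {
                OS << C;  // printable characters pass through untouched
              } else {
                char Buf[8];
                std::snprintf(Buf, sizeof(Buf), "\\x%02X", (unsigned)C);
                OS << Buf;  // everything else becomes \xNN
              }
            }
          }
        }

        int main() { writeEscaped(std::cout, "a\tb\"c"); std::cout << "\n"; }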
    
    commit 6d43539e07cc1a865b4f54bedbd0b374e04963a4
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sat Oct 17 20:09:29 2009 +0000
    
        First draft of the OptionPreprocessor.
    
        More to follow...
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84352 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c53d225e9d93e78dd5ae9f8ee7b6e5fc0bec3e56
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sat Oct 17 20:08:47 2009 +0000
    
        This variable is never used.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84351 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 95b99cae40c20a3fa2da3ee0e58e29d4e21701ed
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sat Oct 17 20:08:30 2009 +0000
    
        Disallow multiple instances of PluginPriority.
    
        Several instances of PluginPriority in a single file most probably signifies a
        programming error.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84350 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cbe1e4185d46cf2c8bfc2413595490285665e82a
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sat Oct 17 20:07:49 2009 +0000
    
        -O[0-3] options should be also forwarded to opt and llc.
    
        This will require implementing OptionPreprocessor to forbid invalid invocations
        such as 'llvmc -O1 -O2'.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84349 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ecae47867f4d466a71600ca7ba1cbca69befc276
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sat Oct 17 19:43:45 2009 +0000
    
        Emit newlines at the end of instructions too.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84348 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit baa7e86d8243c7fa92e72f97035339bb59412aaf
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 17 18:21:06 2009 +0000
    
        Move StringMap's string hash function into StringExtras.h.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84344 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bd35458c807abf6a887ee7983639d2ebe5a9736b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 17 18:11:57 2009 +0000
    
        Remove unnecessary include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84336 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aa127ee919bdff5f6363818a10fe7db8d70e0780
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 17 09:33:00 2009 +0000
    
        Suppress -Asserts warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84327 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5505bc1c26d335e9a0f22f2c55fcf7544037b2d9
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Oct 17 09:20:14 2009 +0000
    
        Distinguish stack slots from other stack objects. They (and fixed objects) get FixedStack PseudoSourceValues.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84326 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 96156ef84788d3fdfe445226b4904e3bc37321df
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Oct 17 08:57:09 2009 +0000
    
        Re-arrange some fields.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84324 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 766b3658894b467195bc3ecde407f4c31711fe17
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 17 08:12:36 2009 +0000
    
        Add another required #include for freestanding .h files.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84322 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1f99657b2abd791d220018261e21d9f4008708b4
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Oct 17 07:53:04 2009 +0000
    
        Revert 84315 for now. Re-thinking the patch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84321 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 77e61b46fce5e165ecfedb5666c43b3d2c1dc971
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Oct 17 06:22:26 2009 +0000
    
        Rename getFixedStack to getStackObject. The stack objects represented are not
        necessarily fixed. Only those with negative frame indices are "fixed."
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84315 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1adc252eb5e951fc26ff19d06854d7828f9c5f48
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Oct 17 06:05:11 2009 +0000
    
        80 col violation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84311 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8a5b9f815bc8bcbac451d4d6e9c4f23fc2376683
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 17 05:39:39 2009 +0000
    
        Simplify some code (first hunk) and fix PR5208 (second hunk) by
        updating the callgraph when introducing a call.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84310 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 14bff688ddb01a7b6c8dee87934d4535b27e71c0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 17 04:47:42 2009 +0000
    
        check in a bunch of content from TestingGuide.  Part of PR5216
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84309 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 479d5af531b5f84661180a3dc3435cd3ffeb7942
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 17 03:28:28 2009 +0000
    
        llvm-as: Simplify, and don't create empty output files with -disable-output.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84304 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bdd9c1a09a9dd512d440f432ff6ea5152198e545
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 17 03:28:20 2009 +0000
    
        Reclaim a lost month.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84303 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 852942c7c83c65c39e3758423e7251e45067c596
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Oct 17 03:28:07 2009 +0000
    
        Add required #includes for freestanding .h files.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84302 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bb39ea144fef229366341ab9636934e73de09b43
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 17 01:37:38 2009 +0000
    
        Delete an obsolete comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84300 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 37f513df87ceb54e873adc113a1fcf4af9556833
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Sat Oct 17 01:18:07 2009 +0000
    
        Remove MallocInst from LLVM Instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84299 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e2996ee5c9a9e0c05141675726ba67d5484c541e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 17 00:32:43 2009 +0000
    
        Enhance CodePlacementOpt's unconditional intra-loop branch elimination logic
        to be more general and understand more varieties of loops.
    
        Teach CodePlacementOpt to reorganize the basic blocks of a loop so that
        they are contiguous. This also includes a fair amount of logic for preserving
        fall-through edges while doing so. This fixes a BranchFolding-ism where blocks
        which can't be made to use a fall-through edge and don't conveniently fit
        anywhere nearby get tossed out to the end of the function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84295 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 20c851251fd60708146d2ece04972eb6ce4e3deb
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 17 00:28:24 2009 +0000
    
        Add a splice member function which accepts a range instead of a
        single iterator.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84294 91177308-0d34-0410-b5e6-96231b3b80d8
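
    std::list has the same range form of splice(), which makes for a quick
    illustration of the shape being added: move a whole [first, last) range
    between lists in one call, without copying elements.

        #include <iostream>
        #include <iterator>
        #include <list>

        int main() {
          std::list<int> A = {1, 2, 3, 4, 5};
          std::list<int> B = {10, 20};

          auto First = std::next(A.begin());  // points at 2
          auto Last  = std::prev(A.end());    // points at 5 (excluded)
          B.splice(B.end(), A, First, Last);  // moves 2, 3, 4 into B

          for (int V : B) std::cout << V << ' ';  // 10 20 2 3 4
          std::cout << '\n';
        }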
    
    commit 6203d9d219ba79ef8427a773cc0b2df69e3165f7
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Sat Oct 17 00:00:19 2009 +0000
    
        Autoupgrade malloc insts to malloc calls.
        Update testcases that rely on malloc insts being present.
    
        Also prematurely remove MallocInst handling from IndMemRemoval and RaiseAllocations to help pass tests in this incremental step.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84292 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2d1e0840efbf5ad8d6f9578ff9b7029b0246c3af
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Fri Oct 16 23:12:25 2009 +0000
    
        HeapAllocSRoA also needs to check if malloc array size can be computed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84288 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 304ec288964451a72c3f632bc5c6c927337f3b8f
    Author: Mon P Wang <wangmp at apple.com>
    Date:   Fri Oct 16 22:09:05 2009 +0000
    
        Update tests to use FileCheck
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84282 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fd49010577b9dcf3e3dea49f8412662a4b73f5cc
    Author: Mon P Wang <wangmp at apple.com>
    Date:   Fri Oct 16 22:07:19 2009 +0000
    
        Add test case for r84279
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84280 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e50ac8f2f5743922862162859c658dfb3ca4ab14
    Author: Mon P Wang <wangmp at apple.com>
    Date:   Fri Oct 16 22:05:48 2009 +0000
    
        Allow widening of extract subvector
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84279 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 18a7d1c9020268e07643101f0944fe4c4febef25
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Oct 16 21:27:43 2009 +0000
    
        Do not emit name entry for a pointer type.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84276 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 86e24b01e3cfdd1d2f533df99be1824515b677c5
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 16 21:06:15 2009 +0000
    
        Change createPostRAScheduler so it can be turned off at llc -O1.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84273 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5e1dcfc3d1327614ac50f7f6ad8e298a3b90f927
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 16 21:02:20 2009 +0000
    
        Add a CodeGenOpt::Less level to match -O1. It'll be used by clients which do not want post-regalloc scheduling.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84272 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2ddf5988322bcade2fc323adf82b95a63db1d2b4
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 16 20:59:35 2009 +0000
    
        Move zext and sext casts fed by loads into the same block as the
        load, to help SelectionDAG fold them into the loads, unless
        conditions are unfavorable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84271 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1a2ffff5b616781b32f48ef657bdcf79bef4e2d3
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Oct 16 18:45:49 2009 +0000
    
        Parse PHI instruction with attached metadata.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84264 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a6871afa41a2f40b6c1c3d8ef087c474d999ffc7
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Oct 16 18:18:03 2009 +0000
    
        If there is no llvm instruction associated with a lexical scope encoded in the debug info, then create such a scope on demand for the variable info.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84262 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 17c3d2a4a530bc8e63919de919b3b94c425dc499
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Fri Oct 16 18:08:17 2009 +0000
    
        Invert isSafeToGetMallocArraySize check because we return NULL when we don't know the size.
    
        Thanks to Duncan Sands for noticing this bug.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84261 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6201175b20cfea5318be7e5fe1740352310078e3
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Fri Oct 16 18:07:17 2009 +0000
    
        Invert isSafeToGetMallocArraySize check because we return NULL when we don't know the size.
    
        Thanks to Duncan Sands for noticing this bug.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84260 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9235ce598591f5a1b3909bcfaf7f165c7a401d51
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Oct 16 16:30:58 2009 +0000
    
        Update from Cristina: llvm-gcc doesn't build on the SPARC version of Solaris
        at the moment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84258 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aa1562a685ba347b2dae203d2988fc031cce9c2d
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Oct 16 16:30:02 2009 +0000
    
        Force triple in tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84257 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d483c393bc30eceb75bf59dc0ea8778ac7f265b5
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Fri Oct 16 15:20:13 2009 +0000
    
        Strip trailing white space.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84256 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 31c1be0731372506379290d0329d6ba53d00105d
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Fri Oct 16 12:18:23 2009 +0000
    
        Check that GVN performs this transform even if the calls
        themselves are not marked readonly, but only the called
        functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84253 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8581c1174108faf2b7c8c4f02b7190f35438f762
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Fri Oct 16 10:29:08 2009 +0000
    
        Update CMake file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84252 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 018db6635bdae3d0d7a12a1cab571b7c8c4a4cce
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Fri Oct 16 08:58:34 2009 +0000
    
        Cleaned up some code. No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84251 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1302f57663145190ad603a572aeeadb066def866
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 16 06:18:09 2009 +0000
    
        I am no spelling bee.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84250 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cae7b0e6150f2fe795277324bb7107dba39a78e7
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 16 06:11:08 2009 +0000
    
        Enable post-alloc scheduling for all ARM variants except for Thumb1.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84249 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 57796906e96fa68af19262323496e4b9c038a516
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 16 06:10:34 2009 +0000
    
        If post-alloc scheduler is not enabled, it should return false, not true.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84248 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 722d96604ba2eed55fc01e166566935979f4739b
    Author: Zhongxing Xu <xuzhongxing at gmail.com>
    Date:   Fri Oct 16 05:42:28 2009 +0000
    
        Indent code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84247 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 11891b19a322b2bf91180b7be3d322a280af5f8c
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 16 05:33:58 2009 +0000
    
        Add comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84246 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e7a6705ab48692b1c34be80322bc4846bd3c2a74
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 16 05:18:39 2009 +0000
    
        80 column violation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84244 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 05538f12cafe7defa2e94364b8f0efba3ace7483
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Oct 16 03:58:44 2009 +0000
    
        Fix more NEON instruction encodings.
        Patch by Johnny Chen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84243 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 60a6b65b8c1d4ce983ff3e8813fb5b6d841a947d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Oct 16 02:13:51 2009 +0000
    
        Add half precision floating point support (float16) to APFloat,
        patch by Peter Johnson! (PR5195)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84239 91177308-0d34-0410-b5e6-96231b3b80d8
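
    For reference, binary16 packs sign/exponent/mantissa into 1/5/10 bits with
    an exponent bias of 15. A small standalone decoder shows the layout (this
    is an illustration, not APFloat's code):

        #include <cmath>
        #include <cstdint>
        #include <cstdio>

        static float halfToFloat(uint16_t H) {
          unsigned Sign = (H >> 15) & 1;
          unsigned Exp  = (H >> 10) & 0x1F;  // 5-bit exponent, bias 15
          unsigned Mant = H & 0x3FF;         // 10-bit mantissa

          float V;
          if (Exp == 0)                      // subnormal or zero
            V = std::ldexp((float)Mant, -24);
          else if (Exp == 31)                // infinity or NaN
            V = Mant ? NAN : INFINITY;
          else                               // normal: implicit leading 1
            V = std::ldexp((float)(Mant | 0x400), (int)Exp - 25);
          return Sign ? -V : V;
        }

        int main() {
          std::printf("%g %g %g\n", halfToFloat(0x3C00),   // 1
                      halfToFloat(0xC000),                 // -2
                      halfToFloat(0x7BFF));                // 65504, max finite
        }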
    
    commit 814d6c1a619c09ce2f14ce63a056caad815d515d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Oct 16 02:06:30 2009 +0000
    
        add haiku support, patch by Paul Davey!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84238 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8ea11530adf5179347449057cd5dabfde6e15bcc
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Oct 16 01:58:23 2009 +0000
    
        MC: Set symbol values in MachO MCStreamer.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84236 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f9a37ec3deab603bd7691b56104887cb6de8bb22
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Oct 16 01:58:15 2009 +0000
    
        Minor formatting tweaks.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84235 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4e5c1b69df607d4894e005a357d0c90a86aeabc5
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Oct 16 01:58:03 2009 +0000
    
        MC: Switch assembler API to using MCExpr instead of MCValue.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84234 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 123febf23cf84ced76a714912c4ad4c16712ee9f
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Oct 16 01:57:52 2009 +0000
    
        MC: Remove unneeded context argument to MCExpr::Evaluate*.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84233 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9ce249c9b86d74dec723948d70f533194e750807
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Oct 16 01:57:39 2009 +0000
    
        MC: Tweak variable assignment diagnostics, and make reassignment of non-absolute
        variables and symbols invalid.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84232 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 86cd1e91b877072006f660b7a8baaa96c4e5597a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Oct 16 01:34:54 2009 +0000
    
        MC: When parsing a variable reference, substitute absolute variables immediately
        since they are allowed to be redefined.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84230 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e824b0813023c66cc8b4cd7c6cf4d7623750f779
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Oct 16 01:33:57 2009 +0000
    
        MC: Move assembler variable values from MCContext to MCSymbol.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84229 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5ad36277631403be640bf9fb0fad79ce7e1f262a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Oct 16 01:33:11 2009 +0000
    
        MC: Switch MCContext value table to storing MCExprs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84228 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5ae95c36f52cdc38b9e68c0db0115fa286657f4c
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 16 00:33:09 2009 +0000
    
        When checking aliases between phi sources and V2, we know the sources are not themselves phi nodes. However, V2 may be. Call aliasCheck with V2 first to potentially eliminate a std::swap call.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84226 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8ab1f273514d6d87fa421c9bc8fc819615707dbf
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 15 23:12:05 2009 +0000
    
        Revert svn r80498 and replace it with a different solution.  The only problem
        I can see with the original code was that I forgot that this runs after
        type legalization and hence the result type will always be i32. (Custom
        legalization of EXTRACT_VECTOR_ELT is only enabled for vector types with
        8- and 16-bit elements.)
    
        Regarding the FIXME comment: any information about sign and zero-extension
        should be captured by separate extension operations.  The DAG combiner should
        handle those to produce either VGETLANEu or VGETLANEs, and that seems to be
        working now.  If there are cases that we're missing, let me know.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84218 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d5fcb1d9b141f0d99023b2f804aa82830ba3b30c
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Thu Oct 15 22:36:18 2009 +0000
    
        Dllexport stuff cleanup:
        1. Emit external function type information for all COFF targets, since it's
           a feature of the object format.
        2. Emit linker directives only for cygming (since this is ld-specific stuff).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84214 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 28d7ecb80c7a6a39039e73f1f3aa8ad91bf05bd3
    Author: Sandeep Patel <deeppatel1987 at gmail.com>
    Date:   Thu Oct 15 22:25:32 2009 +0000
    
        Branches must be the last instruction in a Thumb2 IT block. Approved by Evan Cheng.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84212 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 85f30d7362b1e8543f1988d02548e6f00949c3fe
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 15 21:57:47 2009 +0000
    
        Fix encoding bits for N3VLInt3_QHS multiclass with 8-bit elements.
        Patch by Johnny Chen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84206 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e6e22392e070a3159fe844aefedbdaad6973e423
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Thu Oct 15 21:42:45 2009 +0000
    
        Fix ARM memory operand parsing of post-indexing with just a base register,
        that is, just "[Rn]" with no trailing comma and offset, etc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84205 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7bfd59f06e8d1e8242d8775fe4e15a792394c698
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 15 20:49:47 2009 +0000
    
        Fix a potential performance problem in placing ARM constant pools.
        In the case where there are no good places to put constants and we fall back
        upon inserting unconditional branches to make new blocks, allow all constant
        pool references in range of those blocks to put constants there, even if that
        means resetting the "high water marks" for those references.  This will still
        terminate because you can't keep splitting blocks forever, and in the bad
        cases where we have to split blocks, it is important to avoid splitting more
        than necessary.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84202 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 804b26d218ceff164c578c1fdbf8c2a14d52aaf8
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Thu Oct 15 20:48:48 2009 +0000
    
        More bits of the ARM target assembler for llvm-mc: code added to parse labels
        as expressions, and code for parsing a few ARM-specific directives (still
        needs the MCStreamer calls for these).  Some cleanup of the operand parsing
        code, and some comments added.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84201 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c70e2907ad8de87b0c1b3756632ac685a927d61e
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Oct 15 20:23:21 2009 +0000
    
        Remove X86Subtarget::IsLinux. It's no longer being used.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84200 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8905891e04393b45d9d1857f379027b7ce0d0324
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Thu Oct 15 20:14:52 2009 +0000
    
        Fix bug where array malloc with unexpected computation of the size argument resulted in MallocHelper
        identifying the malloc as a non-array malloc.  This broke GlobalOpt's optimization of stores of mallocs
        to global variables.
    
        The fix is to classify malloc's into 3 categories:
        1. non-array mallocs
        2. array mallocs whose array size can be determined
        3. mallocs that cannot be determined to be of type 1 or 2 and cannot be optimized
    
        getMallocArraySize() returns NULL for category 3, and all users of this function must avoid their
        malloc optimization if this function returns NULL.
    
        Eventually, currently unexpected codegen for computing the malloc's size argument will be supported in
        isArrayMalloc() and getMallocArraySize(), extending malloc optimizations to those examples.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84199 91177308-0d34-0410-b5e6-96231b3b80d8
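
    A minimal sketch of the category-3 contract described above. The types are
    stand-ins rather than LLVM's classes, but the shape every caller needs is
    the same null check:

        struct Value {};
        struct CallInst { Value *ArraySize; };

        // Hypothetical: a null result means the size computation was not
        // understood (category 3 above).
        Value *getMallocArraySize(CallInst *Malloc) { return Malloc->ArraySize; }

        bool tryOptimizeMalloc(CallInst *Malloc) {
          Value *NumElems = getMallocArraySize(Malloc);
          if (!NumElems)
            return false;  // category 3: bail out, leave the malloc alone
          // Categories 1 and 2: safe to reason about the allocation here.
          return true;
        }

        int main() {
          CallInst Unknown{nullptr};
          return tryOptimizeMalloc(&Unknown) ? 1 : 0;  // 0: skipped
        }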
    
    commit e62d85d3b7616d5d48dce3683e755eadd5ccb78f
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Oct 15 19:46:34 2009 +0000
    
        Add files Sanjiv forgot.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84196 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ac2620eaebfccc759fbd341cfa5cfda563b7a71d
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Thu Oct 15 19:26:25 2009 +0000
    
        Re-apply 84180 with the fixed test case.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84195 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1b2cfa4be33b01c9fe32b1dcb7a9ba2298e8e25c
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Oct 15 18:50:52 2009 +0000
    
        Move Blackfin intrinsics into the Target/Blackfin directory.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84194 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 75dbdd10525ae7cb7c2f16ecc7092a667b69556f
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Oct 15 18:50:03 2009 +0000
    
        Report errors correctly for unselected target intrinsics.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84193 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e33a1faef145f2b87ecb308b08245fb6e5fb119d
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Oct 15 18:49:26 2009 +0000
    
        Clean up TargetIntrinsicInfo API. Add pure virtual methods.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84192 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e8ecbbda5f6305de7d163d4bbb453f7a0c13814f
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Thu Oct 15 18:48:58 2009 +0000
    
        Add missing break statements!  Thanks to Duncan Sands for pointing this out!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84191 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f0b820977c50aba1a967ee0748b687a598166e3b
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Oct 15 18:48:47 2009 +0000
    
        Tablegen target intrinsics from the target main .td file.
    
        Fix pasto.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84190 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a0d1f50857f0c1fe106159d0333db80c8e038c20
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Oct 15 16:49:16 2009 +0000
    
        Disable another unittest that doesn't work on arm and ppc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84186 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0522c9e121aafb968248ab057fc722d00d727e45
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Oct 15 15:02:14 2009 +0000
    
        Revert "Complete Rewrite of AsmPrinter, TargetObjectFile based on new
        PIC16Section class", it breaks globals.ll.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84184 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c138172ed03d189bd006f80aedc51875958d662
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Thu Oct 15 10:10:43 2009 +0000
    
        Complete Rewrite of AsmPrinter, TargetObjectFile based on new PIC16Section class
        derived from MCSection.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84180 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 89e8a22aa6899efce0b56f864ce6cde83e12a36a
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Thu Oct 15 09:48:25 2009 +0000
    
        A few changes to comply with the new DebugInfo Metadata representation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84179 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f6fb71472108d47156409bd5c87f91203688e2d4
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Thu Oct 15 08:17:44 2009 +0000
    
        The gcc plugin is now called dragonegg.so and no longer llvm.so.
        Pointed out by Gabor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84177 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b42a8b3a5647931071b48055a1278610eb9b7682
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Thu Oct 15 07:11:24 2009 +0000
    
        Teach basicaa about memcpy/memmove/memset. The length argument, if constant,
        can be used to improve alias results, and the source pointer can't be modified.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84175 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 00649ad5ae87a3cf23e087e2fd9c8c8ba17ec431
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Thu Oct 15 06:12:11 2009 +0000
    
        Teach BasicAA to use the size parameter of the memory use marker intrinsics.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84174 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 57d88c38edb3667eb54c26571bcee5d57ae8ef12
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 15 05:52:29 2009 +0000
    
        Be smarter about reusing constant pool entries.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84173 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a9cdce8ba7928ac8a5176243fbf847415130f644
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 15 05:10:36 2009 +0000
    
        Fix another problem with ARM constant pools.  Radar 7303551.
        When ARMConstantIslandPass cannot find any good locations (i.e., "water") to
        place constants, it falls back to inserting unconditional branches to make a
        place to put them.  My recent change exposed a problem in this area.  We may
        sometimes append to the same block more than one unconditional branch.  The
        symptoms of this are that the generated assembly has a branch to an undefined
        label and running llc with -debug will cause a seg fault.
    
        This happens more easily since my change to prevent CPEs from moving from
        lower to higher addresses as the algorithm iterates, but it could have
        happened before.  The end of the block may be in range for various constant
        pool references, but the insertion point for new CPEs is not right at the end
        of the block -- it is at the end of the CPEs that have already been placed
        at the end of the block.  The insertion point could be out of range.  When
        that happens, the fallback code will always append another unconditional
        branch if the end of the block is in range.
    
        The fix is to only append an unconditional branch if the block does not
        already end with one.  I also removed a check to see if the constant pool load
        instruction is at the end of the block, since that is redundant with
        checking if the end of the block is in-range.
    
        There is more to be done here, but I think this fixes the immediate problem.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84172 91177308-0d34-0410-b5e6-96231b3b80d8
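
    The fix reduces to a guard before the fallback branch is appended. A
    stand-in sketch of that guard (not ARMConstantIslandPass itself):

        #include <vector>

        enum Opcode { Add, Load, UncondBranch };
        using Block = std::vector<Opcode>;  // hypothetical stand-in for an MBB

        void appendFallbackBranch(Block &MBB) {
          // Only append if the block does not already end with one, so the
          // same block never accumulates two unconditional branches.
          if (!MBB.empty() && MBB.back() == UncondBranch)
            return;
          MBB.push_back(UncondBranch);  // create the new "water"
        }

        int main() {
          Block MBB{Add, Load};
          appendFallbackBranch(MBB);
          appendFallbackBranch(MBB);    // second call is now a no-op
          return (int)MBB.size();       // 3, not 4
        }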
    
    commit c4269e5fe007052e6216dd91637003c197b91fb6
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 15 04:59:28 2009 +0000
    
        only try to fold constantexpr operands when the worklist is first populated;
        don't bother every time going around the main worklist.  This speeds up a
        release-asserts opt -std-compile-opts on 403.gcc by about 4% (1.5s).  It
        seems to speed up the most expensive instances of instcombine by ~10%.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84171 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ee5839b4da5f9aa3e5a2ea25b18733cd4b630e52
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 15 04:13:44 2009 +0000
    
        don't bother calling ConstantFoldInstruction unless there is a use of the
        instruction (which disqualifies stores, unreachable, etc) and at least the
        first operand is a constant.  This filters out a lot of obvious cases that
        can't be folded.  Also, switch the IRBuilder to a TargetFolder, which tries
        harder.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84170 91177308-0d34-0410-b5e6-96231b3b80d8
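
    The filter is a cheap pre-test before the expensive fold. A stand-in
    sketch with placeholder types (the real code checks llvm::Instruction
    uses and isa<Constant> on operand 0):

        #include <vector>

        struct Inst {
          bool HasUses;                        // stores, unreachable: false
          std::vector<bool> OperandIsConstant; // stand-in for isa<Constant>
        };

        bool worthTryingToFold(const Inst &I) {
          // No used result (stores, unreachable, etc.): folding is pointless.
          if (!I.HasUses)
            return false;
          // If the first operand isn't constant, folding is very unlikely
          // to succeed, so skip the attempt.
          return !I.OperandIsConstant.empty() && I.OperandIsConstant.front();
        }

        int main() {
          Inst Store{false, {true}};
          Inst Add{true, {true, true}};
          return worthTryingToFold(Add) && !worthTryingToFold(Store) ? 0 : 1;
        }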
    
    commit 8f67407416b0c020a4955c9ba5d67b92564dbfc8
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Thu Oct 15 00:36:35 2009 +0000
    
        Take advantage of TargetData when available; we know that the atomic intrinsics
        only dereference the element they point to directly with no pointer arithmetic.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84159 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cebc72ccbc6215b71808c2be76779ecfee77f0ee
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Oct 15 00:36:22 2009 +0000
    
        Make CodePlacementOpt align loops, rather than loop headers. The
        header is just the entry block to the loop, and it needn't be at
        the top of the loop in the code layout.
    
        Remove the code that suppressed loop alignment for outer loops,
        so that outer loops are aligned.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84158 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d94b8ee12edef4928c7fc762e3c8a4bf7075bdd8
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 14 23:39:27 2009 +0000
    
        When LiveVariables is adding an implicit-def to model a "partially dead" def, add the earlyclobber marker if the superreg def has it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84153 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b71c46ced3b6452d2d96e06b5fde59910541ccb5
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 14 23:37:31 2009 +0000
    
        Print earlyclobber for implicit-defs as well.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84152 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7ad23fdef67f0088e7b610a6a19149a2a1b16639
    Author: Eric Christopher <echristo at apple.com>
    Date:   Wed Oct 14 22:14:18 2009 +0000
    
        One more iteration here and a yet better way to solve it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84150 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 67d5877e544e547779e1b3349d9c8f3b56d7474a
    Author: Eric Christopher <echristo at apple.com>
    Date:   Wed Oct 14 21:45:49 2009 +0000
    
        Fix the unused argument problem here a different way - cast to void.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84147 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d4897545e4e21d0dbdc788c5576f4b80071a95d8
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 14 21:43:17 2009 +0000
    
        Fix instruction encoding bits for NEON VPADAL.
        Patch by Johnny Chen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84146 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4a567357c7179b5c3c39f21d774578f362a4c033
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 14 21:40:45 2009 +0000
    
        Remove unused variables to fix build warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84144 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c2febcd7aeb104f4afd876aa1526195459159f31
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 14 21:22:39 2009 +0000
    
        Make the loop not recalculate getNumOperands() each time around.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84138 91177308-0d34-0410-b5e6-96231b3b80d8
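
    This is the usual loop-invariant hoist; a tiny stand-in example of the
    before/after shape:

        struct Inst {
          unsigned NumOps;
          unsigned getNumOperands() const { return NumOps; }
        };

        unsigned countOps(const Inst &I) {
          unsigned N = 0;
          // Before: for (unsigned i = 0; i < I.getNumOperands(); ++i)
          // After: the bound is computed once, outside the loop body.
          for (unsigned i = 0, e = I.getNumOperands(); i != e; ++i)
            ++N;
          return N;
        }

        int main() { return countOps(Inst{4}) == 4 ? 0 : 1; }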
    
    commit dd7bb43d6a6e61569368046b33fbe2d700d5bc68
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Oct 14 21:08:09 2009 +0000
    
        Add support to record DbgScope as inlined scope.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84134 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ce93edf8e13e59909f6b87e8532c3961fd2c80d
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 14 21:07:11 2009 +0000
    
        quiet compiler warning
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84133 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4d53c631654d3a04a85738a29ecee83d200cc3c4
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 14 20:39:01 2009 +0000
    
        Delete bogus semicolons.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84132 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d9921a052e234642eb7db4f77ae777123d73b62f
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 14 20:31:01 2009 +0000
    
        Inst{11-8} for vshl should be 0b0101, not 0b1111.
        Refs: A7-17 & A8-750.
    
        Patch by Johnny Chen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84131 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 932740916d76b3253cf710e424abf73e525ee5cd
    Author: Eric Christopher <echristo at apple.com>
    Date:   Wed Oct 14 20:28:33 2009 +0000
    
        Remove a bunch of unused arguments from functions, silencing a
        warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84130 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ede4ff04cde85ad8ed196f0bb6386734c8232f4a
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Wed Oct 14 20:04:41 2009 +0000
    
        The ARM and PowerPC JITs are broken in this regard.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84128 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e3d726a12e19ecf7985a485fc72f3097091b57fa
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Wed Oct 14 20:01:39 2009 +0000
    
        There seems to be no reason for opt's -S option to be hidden.
        Make it visible.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84127 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b289010eda885d07a3d224243e0e7478fa072f5e
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Wed Oct 14 19:02:13 2009 +0000
    
        Make use of the result of the loads even though that means adding -instcombine.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84125 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cfb46c5a762f118fbe73fb50eec5be56a1fcd247
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 14 19:00:24 2009 +0000
    
        Set instruction encoding bits 4 and 7 for ARM register-register and
        register-shifted-register instructions.  Patch by Johnny Chen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84124 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e4ff1a619f2a68222def8d30406ceb9f768ad410
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 14 18:32:29 2009 +0000
    
        Refactor code to select NEON VST intrinsics.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84122 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e9d08b8dc372ef97faa5f5e47863f06a3ba8f74d
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Oct 14 17:29:00 2009 +0000
    
        Use isVoidTy()
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84118 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5ccc85a153111ff9f03ec36b0f78ffdd27e3ff09
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 14 17:28:52 2009 +0000
    
        Refactor code to select NEON VLD intrinsics.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84117 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ad89036441492939570bf8972f93b00b6a358264
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Oct 14 17:02:49 2009 +0000
    
        Add copyMD to copy metadata from one instruction to another instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84113 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 10c6f85c9ba78275c9f6de2ac9ea136200a09d10
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 14 16:46:45 2009 +0000
    
        More refactoring.  NEON vst lane intrinsics can share almost all the code for
        vld lane intrinsics.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84110 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 98e68ba7ff1b12e185eb7fa677683cc00e1e97e1
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 14 16:19:03 2009 +0000
    
        Refactor code for selecting NEON load lane intrinsics.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84109 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4205cfe3fe4c86814b91248d25f0cf01ecf642b4
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Wed Oct 14 16:11:37 2009 +0000
    
        I don't see any point in having both eh.selector.i32 and eh.selector.i64,
        so get rid of eh.selector.i64 and rename eh.selector.i32 to eh.selector.
        Likewise for eh.typeid.for.  This aligns us with gcc, which always uses a
        32 bit value for the selector on all platforms.  My understanding is that
        the register allocator used to assert if the selector intrinsic size didn't
        match the pointer size, and this was the reason for introducing the two
        variants.  However my testing shows that this is no longer the case (I
        fixed some bugs in selector lowering yesterday, and some more today in the
        fastisel path; these might have caused the original problems).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84106 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f27a0438aa24fb0ab2e5095fc22525d7e2933033
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 14 15:21:58 2009 +0000
    
        make instcombine's instruction sinking more aggressive in the
        presence of PHI nodes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84103 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f693477fe44829276ad0b270fb8856228c6313d8
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Wed Oct 14 11:12:33 2009 +0000
    
        Undo pthread patch from rev. 83930 & 83823. Credit to Paul Davey.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84083 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8407f6b1defd05a0adf914c15c33a273e22819c6
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 14 06:46:26 2009 +0000
    
        Clear VisitedPHIs after use.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84080 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9d17125f7a6ac8eb8e679b1fd7c14a0b539d15f9
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 14 06:41:49 2009 +0000
    
        Another BasicAA fix. If a value does not alias a GEP's base pointer, then it
        cannot alias the GEP. GEP pointer alias rule states this clearly:
        A pointer value formed from a getelementptr instruction is associated with the
        addresses associated with the first operand of the getelementptr.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84079 91177308-0d34-0410-b5e6-96231b3b80d8
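
        The rule can be pictured in source terms; a minimal C++ sketch
        (all names invented, not the BasicAA code), where q plays the
        role of the GEP result:

            // q is derived from base, so it can point into whatever base
            // points to (here, a) but never into b: anything that cannot
            // alias base cannot alias q.
            void example() {
              int a[4], b[4];
              int *base = a;
              int *q = &base[2];
              (void)b; (void)q;   // silence unused warnings
            }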
    
    commit 8e004f69769d7fc77faabbb9f67a3e6600c6d1f1
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Wed Oct 14 05:55:03 2009 +0000
    
        AuroraUX needs special Solaris system header.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84076 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7292a4ed9741e2456a41808e0dffb57f2f6addad
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 14 05:22:03 2009 +0000
    
        More code clean up based on patch feedback.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84074 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f365008a16f295572515fdab85e50a5515313cab
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 14 05:05:02 2009 +0000
    
        Change VisitedPHIs into an instance variable that's freed by each alias() call.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84072 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 48baab84c72b491fc64ab18b482a135e847ffde8
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Oct 14 01:45:10 2009 +0000
    
        Replace test with a simpler hand crafted one.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84069 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a635c3bfa9c12c34afdcbf178c4cc7c9fa82dd0c
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Wed Oct 14 00:44:50 2009 +0000
    
        Provide AuroraUX triple support in configure. Credit to Paul Davey.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84067 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit be13ae5c35e4956241f95efabf864e19e53dbdbd
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Oct 14 00:34:56 2009 +0000
    
        Use llvmgxx for C++ test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84066 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8d6ff0f9d29c7043545bb44c7783f57db6849dbe
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 14 00:28:48 2009 +0000
    
        Fix this test to account for a movl $0 being emitted as an xor now,
        and convert it to FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84065 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7b1329794aba9952e3bd68eceb4f501c11cda72d
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Oct 14 00:10:54 2009 +0000
    
        Testcases for msasm bit (llvm-gcc 84062).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84063 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f20cb16e938f29db7c1ab8579efe470db1d724c2
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 14 00:08:59 2009 +0000
    
        Make isSafeToClobberEFLAGS more aggressive. Teach it to scan backwards
        (for uses marked kill and defs marked dead) a few instructions in
        addition to forwards. Also, increase the maximum number of instructions
        to scan, as it appears to help in a fair number of cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84061 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d32578ed8fc15abf79d37b9b1f62f0c8fa7ce01d
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 14 00:02:01 2009 +0000
    
        This remat entry is basically done. There are hooks to allow targets
        to remat non-load instructions as loads, and the remat code now uses
        the UnmodeledSideEffects flags, MachineMemOperands, and similar things
        to decide which instructions are valid for rematerialization.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84060 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 60547e8e863ea6b31029e1d2875cd947a320f9d0
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 13 23:58:05 2009 +0000
    
        Add a few README.txt items.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84059 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 815008ee195e15946d9754a8f71fc2a8d194c9c5
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 13 23:36:36 2009 +0000
    
        Fix resetCachedCostInfo to reset all of the cost information, instead of
        just the NumBlocks field.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84056 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 404500c0966b24922b62c6718713afdee5b88b7e
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Tue Oct 13 23:33:38 2009 +0000
    
        Correct comment about ARM immediates using '#' not '$' and TODO for modifiers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84055 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fc1df34dea6dae98d823f9cf95afbb364201ec97
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 13 23:28:53 2009 +0000
    
        s/DebugLoc.CompileUnit/DebugLoc.Scope/g
        s/DebugLoc.InlinedLoc/DebugLoc.InlinedAtLoc/g
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84054 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e3829c8912580d00aaf3c07661de34e4894983db
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 13 22:56:32 2009 +0000
    
        Check for void type before using RAUW.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84049 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 31a5af59b207ac90e75ddd830ff6c2e916b4a7ae
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 13 22:29:24 2009 +0000
    
        More Neon clean-up: avoid the need for custom-lowering vld/st-lane intrinsics
        by creating TargetConstants during instruction selection instead of during
        legalization.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84042 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4d4c8b383ab4e089c9662cd3f6a0afdb144ecf6f
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Tue Oct 13 22:19:02 2009 +0000
    
        More bits of the ARM target assembler for llvm-mc to parse immediates.
        Also fixed a couple of coding style things that crept in.  And added more
        to the temporary hacked up ARMAsmParser::MatchInstruction() method for testing.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84040 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8c11ed7fff8d07010e27d0374a2b9caa8fe95a54
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Oct 13 22:02:20 2009 +0000
    
        Teach basic AA about PHI nodes. If all operands of a phi NoAlias another value, then it's safe to declare the PHI NoAlias the value. Ditto for MustAlias.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84038 91177308-0d34-0410-b5e6-96231b3b80d8
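
        As an illustration of the rule above, a toy sketch in C++ (values
        are just ints and the alias oracle is passed in; this is not the
        BasicAA code, and all names are invented):

            #include <cstddef>
            #include <functional>
            #include <vector>

            enum AliasResult { NoAlias, MayAlias, MustAlias };

            // If every incoming value of the phi gets the same definite
            // answer against V, that answer holds for the phi itself; any
            // disagreement (or a MayAlias anywhere) falls back to MayAlias.
            AliasResult aliasPhi(const std::vector<int> &Incoming, int V,
                                 const std::function<AliasResult(int, int)> &alias) {
              AliasResult First = alias(Incoming[0], V);
              if (First == MayAlias)
                return MayAlias;
              for (std::size_t i = 1; i < Incoming.size(); ++i)
                if (alias(Incoming[i], V) != First)
                  return MayAlias;
              return First; // all NoAlias, or all MustAlias
            }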
    
    commit 648950ff0e202e9ceb550c7c45406378093526f4
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Tue Oct 13 21:56:55 2009 +0000
    
        Documentation for the new msasm flag, which is no
        worse than the rest of the asm documentation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84037 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5f742e55de300e938ae76643a8e640913fc7a40e
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 13 21:55:24 2009 +0000
    
        NEON VLD/VST are now fully implemented.  For operations that expand to
        multiple instructions, the expansion is done during selection so there is
        no need to do anything special during legalization.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84036 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit edad36f13386447e75436d250c8a06fecef7535d
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 13 21:41:20 2009 +0000
    
        Do not check use_empty() before replaceAllUsesWith(). This gives ValueHandles a chance to get properly updated.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84033 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f905746744a1a3802e530321aa028fe25fd76d9e
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Oct 13 21:32:57 2009 +0000
    
        Keep track of stubs that are created. This fixes PR5162 and probably PR4822 and
        4406. Patch by Nick Lewycky!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84032 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 07087113c591107ec2b39346a39c160a8e5f1170
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Tue Oct 13 21:17:00 2009 +0000
    
        Add is_same type trait
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84029 91177308-0d34-0410-b5e6-96231b3b80d8
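
        The trait is small enough to sketch in full; this is the standard
        partial-specialization idiom rather than a copy of the LLVM header:

            template <typename T, typename U>
            struct is_same { static const bool value = false; };

            template <typename T>
            struct is_same<T, T> { static const bool value = true; };

            // is_same<int, int>::value == true
            // is_same<int, long>::value == false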
    
    commit dd7bd93b11d18bc0bcb64e8ee5261c5724b25614
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Oct 13 21:04:12 2009 +0000
    
        Introduce new convenience methods for sign extending or
        truncating an SDValue (depending on whether the target
        type is bigger or smaller than the value's type); or zero
        extending or truncating it.  Use it in a few places (this
        seems to be a popular operation, but I only modified cases
        of it in SelectionDAGBuild).  In particular, the eh_selector
        lowering was doing this wrong due to a repeated rather than
        inverted test, fixed with this change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84027 91177308-0d34-0410-b5e6-96231b3b80d8
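
        The convenience being described, modeled on plain integers (the
        real methods work on SDValues; the helper and its name here are
        invented for illustration):

            #include <cstdint>

            // Sign extend when widening, truncate when narrowing, going
            // from a fromBits-wide value to a toBits-wide one (both <= 64).
            // Assumes v already fits in fromBits.
            int64_t sextOrTrunc(uint64_t v, unsigned fromBits, unsigned toBits) {
              if (toBits >= fromBits) {
                uint64_t sign = uint64_t(1) << (fromBits - 1);
                return int64_t((v ^ sign) - sign);  // sign-extend trick
              }
              uint64_t mask = (uint64_t(1) << toBits) - 1;
              return int64_t(v & mask);             // keep the low toBits bits
            }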
    
    commit 5ad986e1fb4598bf9a3a4298152da7b2b63ae2a6
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 13 20:56:38 2009 +0000
    
        Optimizer may remove debug info.  This test checks debug info for include headers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84025 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6855b2222d78ecf39a0855bec858c8c395e04538
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 13 20:50:28 2009 +0000
    
        Revise ARM inline assembly memory operands to require the memory address to
        be in a register.  The previous use of ARM address mode 2 was completely
        arbitrary and inappropriate for Thumb.  Radar 7137468.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84022 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2477bfe9a3bdbc1474034214d1cd7ffb014024c8
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Tue Oct 13 20:46:56 2009 +0000
    
        Add an "msasm" flag to inline asm as suggested in PR 5125.
        A little ugliness is accepted to keep the binary file format
        compatible.  No functional change yet.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84020 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 68ca7fcda8b8b00e70e08b9e34fd7df1bd82ffd3
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 13 20:45:18 2009 +0000
    
        These tests now pass.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84019 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f7a7fc9585a5dc3afa894b6e5bdfd26fd17eca38
    Author: Sandeep Patel <deeppatel1987 at gmail.com>
    Date:   Tue Oct 13 20:25:58 2009 +0000
    
        Fix method name in comment, per Bob Wilson.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84017 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f9b9295d0601de7ee7673efc241a17c42873d08e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 13 20:12:23 2009 +0000
    
        Use the new CodeMetrics class to compute code size instead of
        manually counting instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84016 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9cb17acf4c2d135199764894ec788fff8f0864b5
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 13 20:10:10 2009 +0000
    
        Compute a full cost value even when a setjmp call is found.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84015 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 41c6d59aec5f3cebb414f951bb2c948aab984778
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 13 19:58:07 2009 +0000
    
        Split code not specific to Function inlining out into a separate class,
        named CodeMetrics. Move it to be a non-nested class. Rename RegionInfo
        back to FunctionInfo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84013 91177308-0d34-0410-b5e6-96231b3b80d8
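
        The shape of that split as a toy: a reusable size summary that any
        client (the inliner, loop unswitching, and so on) can fill in and
        consult. Field and method names echo the description above but are
        not lifted from the tree:

            struct CodeMetrics {
              unsigned NumInsts;
              unsigned NumBlocks;
              CodeMetrics() : NumInsts(0), NumBlocks(0) {}
              // Called once per basic block with that block's size.
              void analyzeBlock(unsigned InstsInBlock) {
                ++NumBlocks;
                NumInsts += InstsInBlock;
              }
            };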
    
    commit 958921a663e7b11ad8dcb246924864548d4810e7
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Tue Oct 13 19:16:03 2009 +0000
    
        Add debugging output.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84011 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 28aff231cfcbf047bcf618c7fb371ac4544ee9f8
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Tue Oct 13 19:08:10 2009 +0000
    
        Provide a mode for ImmutableMap/ImmutableSet to not automatically canonicalize the internal functional AVL trees.  This should speed up clients that use ImmutableMap/ImmutableSet but don't require fast comparisons of maps.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84010 91177308-0d34-0410-b5e6-96231b3b80d8
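
        The trade-off in miniature, with toy nodes rather than the real
        ImmutableMap/ImmutableSet API: canonicalization interns
        structurally-equal trees so equality is a pointer compare; without
        it, construction is cheaper but equality has to walk both trees.

            struct Node {
              int Key;
              const Node *Left, *Right;
            };

            // With canonicalized trees the A == B fast path is the whole
            // comparison; without it, we fall through to the full walk.
            bool treesEqual(const Node *A, const Node *B) {
              if (A == B)
                return true;
              if (!A || !B)
                return false;
              return A->Key == B->Key &&
                     treesEqual(A->Left, B->Left) &&
                     treesEqual(A->Right, B->Right);
            }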
    
    commit bb4648a936134616f3bef1097520afd80b496970
    Author: Sandeep Patel <deeppatel1987 at gmail.com>
    Date:   Tue Oct 13 18:59:48 2009 +0000
    
        Add ARMv6T2 SBFX/UBFX instructions. Approved by Anton Korobeynikov.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84009 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d4d7aa76bb002aa044467d2c346f3e7435d7b43b
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Tue Oct 13 18:57:27 2009 +0000
    
        Update CMake file (lexically order files).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84008 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 872bc634b0da92ae0db414daa52a6c221dc71d8a
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 13 18:51:28 2009 +0000
    
        Do not write empty METADATA_ATTACHMENT record.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84006 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3314e600f24362268b08833d60977509ef4e5a5b
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 13 18:51:03 2009 +0000
    
        Remove unnecessary assert.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84004 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 91b676c7280dd85581610d1b0cf57676d977f7e2
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Tue Oct 13 18:50:54 2009 +0000
    
        Update the other CMake file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84003 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 88900a0acbfb3cd8cb39475001117ff88a5ea8f5
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 13 18:49:55 2009 +0000
    
        Parse GEP with attached custom metadata. This happens during libprofile_rt.bca build.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84002 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fdcd492c1d47bff46fbf2b60c29fdcec939441aa
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Tue Oct 13 18:48:07 2009 +0000
    
        Update CMake file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84001 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dd217c030bea3858468768e46c5846115c3186dd
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Oct 13 18:42:04 2009 +0000
    
        Refactor some code. No functionality changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84000 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0cb01e1e36b59295a907a07b4b40539e87c28069
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 13 18:37:20 2009 +0000
    
        Commit the removal of this file, which is now moved to lib/Analysis.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83999 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fad07189d3c7d121a1dc1473211096a76197177d
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 13 18:30:07 2009 +0000
    
        Move the InlineCost code from Transforms/Utils to Analysis.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83998 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4c8728645b94500b5197da9f2413528becc11a02
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 13 18:24:11 2009 +0000
    
        Start refactoring the inline cost estimation code so that it can be used
        for purposes other than inlining.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83997 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b8a7d3e1a68fce4b087e39e67176fe3c36039bd9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 13 18:13:05 2009 +0000
    
        change simplifycfg to not duplicate 'unwind' instructions.  Hopefully
        this will increase the likelihood of common code getting sunk towards
        the unwind.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83996 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5afeae814f97bea55783376496dc92afaf65f3e0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 13 18:10:05 2009 +0000
    
        convert to filecheck
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83995 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c5e16eec73040a25df9f8d48ef46eb2884aff798
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 13 18:08:21 2009 +0000
    
        rename test
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83994 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4f542bfbc4da1bfffed2fcd42c87dedd62780450
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 13 17:50:43 2009 +0000
    
        Make LoopUnswitch's cost estimation count Instructions, rather than
        BasicBlocks, so that it doesn't blindly proceed in the presence of
        large individual BasicBlocks. This addresses a class of code-size
        expansion problems.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83992 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0b6ea7aacd105e7fbdf5edc18446487cfd03039c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 13 17:48:04 2009 +0000
    
        rename ReleaseNotes-2.6.html -> ReleaseNotes.html
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83990 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 385cf88d65178e344e1e37468516d4bfeede5131
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 13 17:47:06 2009 +0000
    
        add Zero
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83988 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2be240354e170f362ec70ba0aa68e1f070d71f9c
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Oct 13 17:42:08 2009 +0000
    
        Make the ExecutionEngine automatically remove global mappings when their
        GlobalValue is destroyed.  Function destruction still leaks machine code and
        can crash on leaked stubs, but this is some progress.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83987 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0bcdc3f719e79dcd849918aa1ee1712801cd8078
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 13 17:39:29 2009 +0000
    
        don't use dead loads as tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83985 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 107176143cbbb5f5546ddc2a16c6a88801130bf0
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 13 17:35:35 2009 +0000
    
        "there is not any instruction with attached debug info in this module" does not mean "there is no debug info in this module". :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83984 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ce7c9ebddf4e1742e71087fae5abf239061475f2
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 13 17:35:30 2009 +0000
    
        Add some ARM instruction encoding bits.
        Patch by Johnny Chen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83983 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f0976f41d9100957bf1eca01436dc8546d95ee39
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 13 17:29:13 2009 +0000
    
        Fix regression introduced by r83894.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83982 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 897be2a525c03c3c2331ac5ef74a2e7483434e4d
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 13 17:00:54 2009 +0000
    
        Copy metadata when value is RAUW'd. It is debatable whether this is the right approach for custom metadata in general. However, right now the only custom data user, "dbg", expects this behavior while the FE is constructing llvm IR with debug info.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83977 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b89ac779620ca5fe55c2854dd28fa3b295fb6b69
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 13 16:32:09 2009 +0000
    
        Disable this test for now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83975 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b072c755593e95a598b683c12be63c359e274c02
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 13 15:27:23 2009 +0000
    
        Fix a tab.  Thanks to Johnny Chen for pointing it out.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83973 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aa02f165570eee052008e8bd8d706f454ee277b6
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Oct 13 09:24:02 2009 +0000
    
        The eh.exception intrinsic only reads from memory, it doesn't
        write to it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83963 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 12e4ccf0ce8e66b23104171e35a4a94c09c94cef
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Oct 13 09:23:11 2009 +0000
    
        Pacify the compiler (signed with unsigned comparison) by making
        these constants unsigned.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83962 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 23f49980e325ac4e698ebc0ed3a0a7e3e298941e
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Tue Oct 13 07:57:33 2009 +0000
    
        Force memory use markers to have a ConstantInt for the size argument.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83960 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fc461aee7449bb8010b6f4946d88afd8e2b05e3e
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Tue Oct 13 07:48:38 2009 +0000
    
        Teach BasicAA a little something about the atomic intrinsics: they can only
        modify through the pointer they're given.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83959 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c888d3594875d75fc81e4cd3b6630b4b88f8054e
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Tue Oct 13 07:03:23 2009 +0000
    
        Add new "memory use marker" intrinsics. These indicate lifetimes and invariant
        sections of memory objects.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83953 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c74255d3943b40a2dc36226750cfedf6f4482c72
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Oct 13 06:47:08 2009 +0000
    
        Fix a -Asserts warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83950 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b1b6f5af69b22ab3a9a40458edc4a4dc9dc33c18
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 13 05:33:26 2009 +0000
    
        remove dead header.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83943 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d68d06d66398db5439efd49c3a8a44daaebe1d91
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 13 04:27:02 2009 +0000
    
        remove notcast, it is now dead!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83938 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fe2b3c30a3709b51950225038e289b9eacc35a29
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 13 04:25:24 2009 +0000
    
        remove two old and nearly useless tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83937 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c04f26c3e682a28878f7eb58e324892aaee875a
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 13 01:51:29 2009 +0000
    
        XFAIL these tests for now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83933 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bbf0a780e9c94afc562bafbb40ea0afd6377000f
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 13 01:49:02 2009 +0000
    
        Add a ceilLogBase2 function to APInt.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83932 91177308-0d34-0410-b5e6-96231b3b80d8
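
        What the helper computes, sketched for a plain 64-bit integer
        (APInt's version works at arbitrary bit widths): the smallest n
        such that 2^n >= x.

            #include <cstdint>

            unsigned ceilLogBase2(uint64_t x) {
              unsigned n = 0;
              while (n < 64 && (uint64_t(1) << n) < x)
                ++n;
              return n;  // e.g. ceilLogBase2(8) == 3, ceilLogBase2(9) == 4
            }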
    
    commit 1fc5fb659c7acd0415e6f7b02393179303071982
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Tue Oct 13 01:42:53 2009 +0000
    
        Memory dependence analysis was incorrectly stopping its scan for stores to a pointer at bitcast uses of a malloc call.
        It should continue scanning until the malloc call, and this patch fixes that.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83931 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8cd10be5c1c09b7aa2272e00571540c082834c4a
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Tue Oct 13 01:01:38 2009 +0000
    
        Regenerate configure for rev. 83823 putback.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83930 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7e02bc626b03039d40d461df1bb5c3c322a74fef
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Oct 12 23:22:09 2009 +0000
    
        Enable "debug info attached to an instruction" mode.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83925 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6096157c93259c3484c250b3541e9851dfc55da4
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Oct 12 23:11:24 2009 +0000
    
        Find enclosing subprogram info.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83922 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9cd02c5e750dea8637fe300b27a0bf14a53e22bd
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Oct 12 23:10:55 2009 +0000
    
        Set default location for a function if it is not set.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83921 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7bbbc7d65719ff18592a21cb2246e7fa68e213d4
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Mon Oct 12 22:51:49 2009 +0000
    
        Fix two warnings about unused variables that are only used in assert() calls.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83917 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1449b8594c0a82f80d198ddadda5d23795cad922
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Oct 12 22:49:05 2009 +0000
    
        Delete a comment that makes no sense to me.  The statement that moving a CPE
        before its reference is only supported on ARM has not been true for a while.
        In fact, until recently, that was only supported for Thumb.  Besides that,
        CPEs are always a multiple of 4 bytes in size, so inserting a CPE should have
        no effect on Thumb alignment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83916 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f5286ef78bba77aed8e21685baa8567e63316937
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Mon Oct 12 22:39:54 2009 +0000
    
        Fix a problem in the code where the second argument of ARMAsmParser::ParseShift()
        should have been a pointer to a reference.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83915 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0dab5a226a623e256b6df9bef3634fa371b63bf6
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Oct 12 22:25:23 2009 +0000
    
        Make licm debug message readable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83908 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 96b37b64f271bec34a6a650fa875acff17cb53d5
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Oct 12 21:39:43 2009 +0000
    
        Change CreateNewWater method to return NewMBB by reference.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83905 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e5001730bd4aa9b23daa7861ea3ef08c35e77777
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Oct 12 21:23:15 2009 +0000
    
        Last week, ARMConstantIslandPass was failing to converge for the
        MultiSource/Benchmarks/MiBench/automotive-susan test.  The failure has
        since been masked by an unrelated change (just randomly), so I don't have
        a testcase for this now.  Radar 7291928.
    
        The situation where this happened is that a constant pool entry (CPE) was
        placed at a lower address than the load that referenced it.  There were in
        fact 2 CPEs placed at adjacent addresses and referenced by 2 loads that were
        close together in the code.  The distance from the loads to the CPEs was
        right at the limit of what they could handle, so that only one of the CPEs
        could be placed within range.  On every iteration, the first CPE was found
        to be out of range, causing a new CPE to be inserted.  The second CPE had
        been in range but the newly inserted entry pushed it too far away.  Thus the
        second CPE was also replaced by a new entry, which in turn pushed the first
        CPE out of range.  Etc.
    
        Judging from some comments in the code, the initial implementation of this
        pass did not support CPEs placed _before_ their references.  In the case
        where the CPE is placed at a higher address, the key to making the algorithm
        terminate is that new CPEs are only inserted at the end of a group of adjacent
        CPEs.  This is implemented by removing a basic block from the "WaterList"
        once it has been used, and then adding the newly inserted CPE block to the
        list so that the next insertion will come after it.  This avoids the ping-pong
        effect where CPEs are repeatedly moved to the beginning of a group of
        adjacent CPEs.  This does not work when going backwards, however, because the
        entries at the end of an adjacent group of CPEs are closer than the CPEs
        earlier in the group.
    
        To make this pass terminate, we need to maintain a property that changes can
        only happen in some sort of monotonic fashion.  The fix used here is to require
        that the CPE for a particular constant pool load can only move to lower
        addresses.  This is a very simple change to the code and should not cause
        any significant degradation in the results.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83902 91177308-0d34-0410-b5e6-96231b3b80d8
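
        The termination argument boils down to a monotonicity check;
        sketched here with invented names:

            // Accept a new placement for a constant pool entry only if it
            // moves the entry to a strictly lower address. Addresses are
            // bounded below, so the pass cannot ping-pong forever.
            bool acceptNewPlacement(unsigned CurAddr, unsigned NewAddr) {
              return NewAddr < CurAddr;
            }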
    
    commit 31d9eb561b52a70ac181ad610115e35ab86298dc
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Oct 12 20:45:53 2009 +0000
    
        Another minor clean-up.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83897 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit df5d0b3525506c916e4bb14fc6288ab43415f940
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 12 20:42:35 2009 +0000
    
        allow this testcase to pass with recent changes.  The test hasn't been
        producing any stores at all for a long time, but ".store." was in some
        IR instruction names until recently.  This removal caused the test to
        start failing.  Just make it reject any stores.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83895 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8454df9db700b63793f4f73f3df6e69d4a54572a
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Oct 12 20:37:23 2009 +0000
    
        Remove redundant parameter.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83894 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 93312097c5c22dac9e6fa9b2e63ab29708f51ba0
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Oct 12 19:04:03 2009 +0000
    
        Use early exit to reduce indentation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83874 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 688468aa23fe332955436b2acd062fd9bf289f8d
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Oct 12 19:01:12 2009 +0000
    
        Change to return a value by reference.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83873 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b15413cce529d24d6ae4e52116eac7c56f042023
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Oct 12 18:52:13 2009 +0000
    
        Add a typedef for an iterator.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83872 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f215bb209351a7fa2244cc691ef214b6beaa757e
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Mon Oct 12 18:49:00 2009 +0000
    
        Revert the kludge in 76703.  I got a clean
        bootstrap of FSF-style PPC, so there is some
        reason to believe the original bug (which was
        never analyzed) has been fixed, probably by
        82266.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83871 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b58870629223e655f015b90de48154b7b837259b
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Mon Oct 12 18:45:32 2009 +0000
    
        Fix warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83870 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6f216b73b69538d39ab7f84c6a0816270bc7f4dd
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 12 18:33:33 2009 +0000
    
        fix validation error pointed out by gabor (and the w3c :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83868 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6608a010f11207afbe01650d9dffa30783b920ec
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 12 18:12:47 2009 +0000
    
        Improve bugpoint doc, patch by Timo Lindfors!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83865 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9b3c779a1dfd96ccced1d0e8c6e3f95519ab23cf
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Mon Oct 12 17:43:32 2009 +0000
    
        Fix http://llvm.org/PR5160, to let CallbackVHs modify other ValueHandles on the
        same Value without breaking things.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83861 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aee4b899c056aa56539c4ed530e8d7b05da874c3
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Mon Oct 12 16:50:25 2009 +0000
    
        another bunch of <tt>s
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83860 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 69a9b168a959772b096ef8f7ad53992ec2c663d9
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 12 16:44:10 2009 +0000
    
        Remove a redundant member variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83857 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 81be561deccf40f5f3ea1b4bda8c35c6c53cb429
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 12 16:43:44 2009 +0000
    
        Delete some obsolete declarations.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83856 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7c765f3562d346774d9988ce5a26e984c7e4da75
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Mon Oct 12 16:40:25 2009 +0000
    
        even more <tt>s
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83854 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c738f5ee6c48df39332389e307de1305903c8e7
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 12 16:36:12 2009 +0000
    
        Don't forget to mark RAX as live-out of the function when arranging for
        it to hold the address of an sret return value, for x86-64 ABI purposes.
    
        Also, fix the test that was originally intended to test this to actually
        test it, using FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83853 91177308-0d34-0410-b5e6-96231b3b80d8
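
        A source-level view of the rule (standard x86-64 SysV behavior; the
        C++ here is only an illustration): a function returning a large
        aggregate takes a hidden pointer argument and must also return that
        pointer in RAX, which is why RAX has to be marked live-out.

            struct Big { long a, b, c, d; };  // too big for register return

            // Lowered roughly as: void makeBig(Big *sret), with the sret
            // pointer also handed back in RAX on return.
            Big makeBig() {
              Big B = {1, 2, 3, 4};
              return B;
            }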
    
    commit e6acc86c5d19e865476fc4926de6dbb6797ee880
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Mon Oct 12 16:27:44 2009 +0000
    
        more typewriter face
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83852 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 55646ef432b266fb169dfec13d0d7ae0d47b5224
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Mon Oct 12 16:13:36 2009 +0000
    
        fix three validation errors, I leave the fourth to sabre :-)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83851 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 51a2dab0f36873ec09e8b8949dff285af7a0eb16
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Mon Oct 12 16:08:52 2009 +0000
    
        set some options in typewriter font
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83850 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5fb9d7e8d4fe8ec08d086e07c83c8c5952cf071a
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Mon Oct 12 14:46:08 2009 +0000
    
        Documentation: Perform automated correction of common typos.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83849 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b26e279e163669d34e45697b2ebe0e1003b7d8f8
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Mon Oct 12 13:37:29 2009 +0000
    
        Fix typo, patch from Timo Juhani Lindfors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83848 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c301a8de25cc35ae384b63fc4560874dd6a27fd
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Mon Oct 12 09:31:55 2009 +0000
    
        Eliminate some redundant llvm-as calls.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83837 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f402dc1c9976e809aec4d4e486e62496c9d92180
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Oct 12 09:01:26 2009 +0000
    
        Missing CHECK: lines make the test exit abnormally.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83835 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 56b1d0597f0f97824098a34fddfcd936fbda194b
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Oct 12 08:51:28 2009 +0000
    
        FileCheck not CheckFile, oops.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83834 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b5b320e759bf05872182af0e182a7369c1b7932c
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Oct 12 08:46:47 2009 +0000
    
        Convert InstCombine/call.ll to CheckFile.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83833 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 53966cf901cc222b47ffdcb8a6c5ffb293e1680e
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Oct 12 07:18:14 2009 +0000
    
        Convert the rest of the InstCombine tests from notcast to FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83828 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fef913ffdb15b2fbbb2b7151e135eb3d7fba1ca2
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Mon Oct 12 06:32:42 2009 +0000
    
        Remove this part of the test; it never actually tested anything anyway. This
        unbreaks make check after evocallaghan's changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83827 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d0e2835fda304f3250cc4433b4b53d50a072a94f
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Oct 12 06:23:56 2009 +0000
    
        Fix syntax error missed in converting zext.ll test. Convert 2003-11-13-ConstExprCastCall.ll to FileCheck from notcast.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83826 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 79711e110cdba700bee125d1356c804edad847e4
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Oct 12 06:14:06 2009 +0000
    
        Convert InstCombine tests from notcast to FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83825 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e4e67c4d7b48ad44a7647ae33fac429025963387
    Author: Nate Begeman <natebegeman at mac.com>
    Date:   Mon Oct 12 05:53:58 2009 +0000
    
        More heuristics for Combiner-AA.  Still catches all important cases, but
        the compile-time penalty on gnugo, the worst case in MultiSource, is down to
        about 2.5% from 30%.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83824 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4c24c914c9726197266242a89c899a43c852bce2
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Oct 12 04:57:20 2009 +0000
    
        Haiku porting patches. Credit to Paul Davey.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83823 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8935700747e6774fc1732c2608c80e533690a0f8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 12 04:22:44 2009 +0000
    
        Fix PR5087, patch by Jakub Staszak!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83822 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5dd66f99bc3d857188292aebead57d7251f63268
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 12 04:01:02 2009 +0000
    
        add some more hooks to the C bindings, patch by Kenneth Uildriks!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83821 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cddb07767225ac456e52fd730bc1de8ca0e2b9b2
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Mon Oct 12 04:00:13 2009 +0000
    
        Make ParallelJIT pthreads linking with CMake slightly less broken
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83820 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 84354c70bd3d1b20ec40883c6caf06fe26c27807
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Oct 12 04:00:11 2009 +0000
    
        Fix LLVM CMake build system so that it may now work on Solaris and AuroraUX.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83819 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b5663c7c1eaf8cf8fb1d857780811ea3bf2c7b58
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 12 03:58:40 2009 +0000
    
        populate instcombine's initial worklist more carefully, causing
        it to visit instructions from the start of the function to the
        end of the function in the first pass.  This greatly speeds up
        some pathological cases (e.g. PR5150).
    
        Try #3, this time with some unneeded debug info stuff removed
        which was causing dead pointers to be added to the worklist.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83818 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 92f8d26a987f8fa57ceacd18d0802e7010ad49df
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 23:56:08 2009 +0000
    
        revert r83814 for now, it is making the llvm-gcc bootstrap unhappy.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83817 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 204c64a7117240e986fc265224e3209455247698
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 23:19:44 2009 +0000
    
        pic16 uses 16 bit pointers, but is 8 bit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83815 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fab98956ee8bb06aa5357ac0364ed15e949f4d7a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 23:17:43 2009 +0000
    
        populate instcombine's initial worklist more carefully, causing
        it to visit instructions from the start of the function to the
        end of the function in the first pass.  This greatly speeds up
        some pathological cases (e.g. PR5150).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83814 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6018f7a47052775c969148546ffde541b69e80ad
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Oct 11 23:10:09 2009 +0000
    
        Fix Makefile to build correctly on Darwin. Patch by Sandeep Patel!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83813 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2012d00dec16d998c3c3c05fb8b59614772ee490
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Oct 11 23:03:53 2009 +0000
    
        Add missed mem-mem move patterns
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83812 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1bb755cbeeabe6610220f1867fc08ea9503775a4
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Oct 11 23:03:28 2009 +0000
    
        Add MSP430 mem-mem insts support. Patch by Brian Lucas with some of my refinements.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83811 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 89f19d1b3a3da269ee5267eab8dad7c07def855b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 23:02:46 2009 +0000
    
        remove some harmful code that would turn an insertelement on an undef
        into a shuffle even if it was used by another insertelement.  If the
        visitation order of instcombine was wrong, this would turn a chain of
        insertelements into a chain of shufflevectors, which was quite painful.
    
        Since CollectShuffleElements handles these cases, the code can just
        be nuked.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83810 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0505cb155a108d6b9ad4c7866be5d012d2f46181
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Oct 11 23:02:38 2009 +0000
    
        Add a bunch of MSP430 'feature' tests. Patch by Brian Lucas with some of my refinements.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83809 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b88a9f6ec39bcd6c3d40072ec2ae2854c9653683
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 22:54:48 2009 +0000
    
        reduce vec_shuffle2 and merge into vec_shuffle.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83807 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a2680eb6a3be4623b53e8ab3423973a30bc5ac57
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 22:52:15 2009 +0000
    
        filecheckize vec_shuffle.ll and merge shuffle.ll into it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83806 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2d94ef475390a3b506202af22be9482ba21f99b4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 22:45:17 2009 +0000
    
        filecheckize
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83805 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bce34da6aa0fafd04739231b1e2c68a2e7d5cb6f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 22:44:16 2009 +0000
    
        rename test
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83804 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit df025c2d02814a3263c9f4a5d535c93c44301999
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 22:42:06 2009 +0000
    
        remove old testcase
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83803 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9cb155c0a2e3302ddcb4ecb2c0021f6aafabdaa0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 22:39:58 2009 +0000
    
        merge test into shift.ll; this also eliminates awful grepping on -stats output
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83802 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 31eecb906b881523b8997b5b2f77734b81be8369
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 22:36:59 2009 +0000
    
        convert to filecheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83801 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eefa89c5b8ccf1a50ffeca3493d9608f3c3ae491
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 22:22:13 2009 +0000
    
        teach instcombine to simplify xor's harder, catching the
        new testcase.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83799 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4580d45e185e24ae6460f55c8bd825b222881e25
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 22:00:32 2009 +0000
    
        cleanups
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83797 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cf28e6a58b5cde2d8178206ca9754b4fd6d15d6b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 21:42:08 2009 +0000
    
        convert xor2 to filecheck, merge in a random regtest
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83796 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3508c5ca5968647a4b5f303bd9cc6d139ffde10f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 21:36:10 2009 +0000
    
        cleanup, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83795 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4ca76f7f980d23a1d7a82559592838a91eb1fb4f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 21:29:45 2009 +0000
    
        generalize a transformation even more: we don't care whether the
        input to the mul is a zext from bool, just that it is all zeros
        other than the low bit.  This fixes some phase ordering issues
        that would cause us to miss some xforms in mul.ll when the worklist
        is visited differently.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83794 91177308-0d34-0410-b5e6-96231b3b80d8
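
        The algebra being generalized, in scalar terms: whenever z is known
        to be all zeros apart from (possibly) the low bit, a multiply by z
        is just a select.

            // Assumes z is 0 or 1 (a zext of a bool, or upper bits proven
            // zero by known-bits): then x * z == (z ? x : 0).
            unsigned mulByLowBit(unsigned x, unsigned z) {
              return z ? x : 0;  // equivalent to x * z under that assumption
            }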
    
    commit 291872e65ebb3cf049e50f76fe0cc1b126b2207e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 21:22:21 2009 +0000
    
        simplify a transformation by making it more general.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83792 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 57842c564561b7e696d1fc016ec0c55e6b08bcd4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 21:05:34 2009 +0000
    
        temporarily revert previous patch
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83791 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 54874018d447572ff91ee9a5e1fb22ddc12b73a7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 21:04:37 2009 +0000
    
        populate instcombine's initial worklist more carefully, causing
        it to visit instructions from the start of the function to the
        end of the function in the first pass.  This greatly speeds up
        some pathological cases (e.g. PR5150).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83790 91177308-0d34-0410-b5e6-96231b3b80d8
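
    A rough sketch of the idea with a hypothetical helper (not the
    committed code): seed the worklist in top-down function order so the
    first sweep visits definitions before their uses.

        static void populateWorklist(Function &F,
                                     SmallVectorImpl<Instruction *> &WL) {
          // Collect instructions in layout order, entry block first.
          SmallVector<Instruction *, 128> Order;
          for (BasicBlock &BB : F)
            for (Instruction &I : BB)
              Order.push_back(&I);
          // The combiner pops from the back like a stack, so push in
          // reverse to make the first sweep run start-to-end.
          for (Instruction *I : llvm::reverse(Order))
            WL.push_back(I);
        }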
    
    commit cb2683a5dda25516ad0c224f0bc5046b8f10f687
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Sun Oct 11 19:58:35 2009 +0000
    
        Remove CleanupDbgInfo; instcombine does this and it's not worth duplicating it
        here.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83789 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2734bdb105226b7f4a5cbabee2b730fcbc4ac928
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Sun Oct 11 19:40:38 2009 +0000
    
        More DragonEgg verbiage.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83788 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2c9e52d4836192ce0463bd735667ac9861ba8fe9
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Sun Oct 11 19:30:56 2009 +0000
    
        Remove spurious brackets.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83787 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aeb2b5ba57596a8b077fb2e7878cf63ab2652ca4
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Sun Oct 11 19:15:54 2009 +0000
    
        LICM shouldn't sink/delete debug information. Fix this and add a testcase.
        For now the metadata of sunk/hoisted instructions is still wrong, but that
        will be fixed when instructions have debug metadata directly attached.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83786 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9cbe2bc74aff10a95b2ee6ed6fb4a47dffee4751
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Oct 11 19:14:21 2009 +0000
    
        Implement 'm' memory operand properly
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83785 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0c425acb0ed7cbc27602e7b7faa1e6ff755e37b5
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Oct 11 19:14:02 2009 +0000
    
        Implement proper asmprinting for the globals. This eliminates the bogus "call" modifier and also adds support for offsets wrt globals.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83784 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 92edf7f8851ef0b58a0545195d7d749d2c1785f1
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Oct 11 19:13:34 2009 +0000
    
        Implement asm printing for inline asm memory operands
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83783 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c8ab34d04b0472a623f3aa7a2da0fce1fdace2ca
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 19:07:23 2009 +0000
    
        add PR5004 as a known problem.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83782 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 711542c40f2d29a3d693a6bf59387c1bae4efd3e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 19:02:54 2009 +0000
    
        duncan points out that llvm-gcc doesn't do the right thing with -fverbose-asm yet.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83781 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6178bed87c4874e4760711a68fd4aa50f9a1d588
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Oct 11 18:53:09 2009 +0000
    
        Fix typo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83780 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 42170ac977990187aa1fe8f11b96b5966234c9da
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Oct 11 18:47:33 2009 +0000
    
        Fix typo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83779 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 59be447db438daefb7cdc64feeb2595f474f0be8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 18:39:58 2009 +0000
    
        when folding duplicate conditions, delete the
        now-probably-dead instruction tree feeding it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83778 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2c50edbc5a7493d03d1561eeb139a316b582b65d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 18:21:32 2009 +0000
    
        some notes from Anton
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83777 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c1654ea87b71d0c7a965b5fffa05f83ffcc90c3d
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Sun Oct 11 11:44:34 2009 +0000
    
        catch some other serial commas that my earlier grep did not spot
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83772 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 00ade922bbb18d8e8fca7a4e69bef5b9c5dbe1fe
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Sun Oct 11 11:23:40 2009 +0000
    
        eliminate some instances of the serial comma. sabre, if you feel strongly about this, feel free to revert this rev
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83771 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6486965df5826add65108c0ebdf0a8a530b40b00
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Sun Oct 11 11:20:26 2009 +0000
    
        Fix typo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83770 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 25d7536a03070dc647e4cae021e9fae10111c60c
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Sun Oct 11 10:44:44 2009 +0000
    
        apply some tweaks
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83769 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 669766de5f5883fd1e378347017986980a35b527
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Sun Oct 11 10:27:57 2009 +0000
    
        fix some obvious typos
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83768 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8f506a9c91975bcff3af5a4100f093002a0db636
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Sun Oct 11 09:07:15 2009 +0000
    
        Add an outline of the DragonEgg gcc plugin.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83765 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6438c58898936351337a02c6f3d89dbefcf8d6c1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 07:53:15 2009 +0000
    
        implement rdar://7293527, a trivial instcombine that llvm-gcc
        gets but clang doesn't, because it is implemented in GCC's
        fold routine.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83761 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0c7a6448809f6dc46ba73d35edfe8710c9758f64
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 07:51:25 2009 +0000
    
        add a helper for matching "1".
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83760 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ed90ae261ab94579121d50605807e8b9ad5e9b00
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 07:24:57 2009 +0000
    
        implement a transformation in jump threading that is currently
        done by condprop, but do it in a much more general form.  The
        basic idea is that we can do a limited form of tail duplication
        in the case when we have a branch on a phi.  Moving the branch
        up into the predecessor block makes instruction selection
        much easier and encourages chained jump threadings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83759 91177308-0d34-0410-b5e6-96231b3b80d8
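
    Simplified recognition of the pattern, with current API spellings;
    the committed code also handles casts and non-constant incoming
    values, and threadEdge here is a hypothetical helper:

        // A conditional branch on a phi defined in the same block: each
        // predecessor feeding the phi a constant can branch directly to
        // the successor that constant selects.
        BranchInst *BI = dyn_cast<BranchInst>(BB->getTerminator());
        if (!BI || !BI->isConditional()) return;
        PHINode *PN = dyn_cast<PHINode>(BI->getCondition());
        if (!PN || PN->getParent() != BB) return;
        for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i)
          if (auto *C = dyn_cast<ConstantInt>(PN->getIncomingValue(i)))
            threadEdge(PN->getIncomingBlock(i),            // hypothetical
                       BI->getSuccessor(C->isZero() ? 1 : 0));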
    
    commit a84c14e849681bc0fcf86529593d9475c0250617
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 07:11:11 2009 +0000
    
        another testcase jump threading shouldn't crash on.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83758 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 41e44ea0443e3fda16de10d4ac56574ad4cdaae4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 07:10:28 2009 +0000
    
        rename a file, remove a poorly reduced testcase.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83757 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5bf9d1b142a7646cc02c22faa44ca229cded4999
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 04:40:21 2009 +0000
    
        restructure some code, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83756 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 353c7f9d63875f26c94cd63f9824d33a5a29bd63
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 04:33:43 2009 +0000
    
        factor some code better and move a function, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83755 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a5c0a111a8bc2c320b19e042b941a04cc51dbc48
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 04:18:15 2009 +0000
    
        make jump threading on a phi with undef inputs happen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83754 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b3fb398811b1fa62ff2262e0f96bca634aa96224
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 04:17:33 2009 +0000
    
        there is no need to run mem2reg after jump threading at LTO time now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83753 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 990739315fb919324c69c035ab3ac6e2b00d22d4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 04:03:22 2009 +0000
    
        fix a bunch of bad formatting, delete the dead
        ConstantInt::TheTrueVal/TheFalseVal members.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83752 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3733adba857257f94b0945022794fb1c1bbefbbb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 03:55:30 2009 +0000
    
        merge two tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83751 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4f68e0c66398707bce7c1304f09ed42c863337e7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 03:54:21 2009 +0000
    
        simplify some run lines, convert a test to filecheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83750 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0c5465e27e51577bdee6797f31a9bd5c7545d5c7
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Sun Oct 11 03:10:25 2009 +0000
    
        Update release notes blurb on the static analyzer.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83749 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e98866c4d6898e2502af1f30d1d5ae7cfbd14bc3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 02:53:37 2009 +0000
    
        rewrite LCSSA to use SSAUpdate, to only return true if it modifies
        the IR, and to implement the FIXME'd optimization.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83748 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7dd77a530bc7f7e74dc137cf93a7dd04f3829831
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Oct 11 01:07:15 2009 +0000
    
        clean up and simplify some code.  Don't use SetVector when things will be
        inserted only once; just use a vector.  Don't compute ExitBlocks unless we
        need it, and change std::sort to array_pod_sort.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83747 91177308-0d34-0410-b5e6-96231b3b80d8
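
    For reference, llvm::array_pod_sort (llvm/ADT/STLExtras.h) sorts a
    contiguous array of POD-like elements through one shared qsort rather
    than instantiating std::sort at every call site; a typical use:

        SmallVector<BasicBlock *, 8> ExitBlocks;
        L->getExitBlocks(ExitBlocks);           // Loop *L assumed in scope
        llvm::array_pod_sort(ExitBlocks.begin(), ExitBlocks.end());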
    
    commit 6e5ea272663faaaf886a17f77963b5c12bdf8802
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 23:50:30 2009 +0000
    
        switch GVN to use SSAUpdater.  Besides removing a lot of complexity
        from GVN, this also speeds it up, inserts fewer PHI nodes (see the
        testcase) and allows it to remove more loads (due to fewer PHI nodes
        standing in the way).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83746 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2daa0fe12d450b31d9b79e66edd08e65623adbbc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 23:41:48 2009 +0000
    
        add a simple helper method.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83745 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e4f2c548e29dacb887d3249f35cc881fb1b29bbe
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 23:15:24 2009 +0000
    
        add ability for clients of SSAUpdater to find out about the
        PHI nodes inserted.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83744 91177308-0d34-0410-b5e6-96231b3b80d8
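
    In today's tree this surfaces as an optional vector handed to the
    SSAUpdater constructor (the interface at the time may have differed
    slightly):

        SmallVector<PHINode *, 8> InsertedPHIs;
        SSAUpdater SSA(&InsertedPHIs);
        // ... drive the update as usual ...
        for (PHINode *PN : InsertedPHIs)
          Worklist.push_back(PN);  // e.g. feed new PHIs back to the client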
    
    commit 8b04a44b984696613a6469db5927268de2f762f6
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 23:05:42 2009 +0000
    
        clarify
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83743 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7b06de142ad444492acf76879609cc3a99e91d18
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 23:04:12 2009 +0000
    
        remove dead code
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83742 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit af354a777f57a5585a3b210d785db6349539953f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 23:00:11 2009 +0000
    
        add the ability to get a rewritten value from the middle of a block,
        not just at the end.  Add a big comment explaining when this could
        be useful (which never happens for jump threading).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83741 91177308-0d34-0410-b5e6-96231b3b80d8
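
    The two queries side by side (the GetValueAtEndOfBlock spelling comes
    from the rename in the next entry below):

        Value *LiveOut = SSA.GetValueAtEndOfBlock(BB);    // after BB's defs
        Value *LiveIn  = SSA.GetValueInMiddleOfBlock(BB); // on entry to BB,
            // ignoring any redefinition inside BB itself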
    
    commit 20c6ad636a273a51464d58d8f8bc1c56084e74af
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 22:41:58 2009 +0000
    
        rename GetValueInBlock -> GetValueAtEndOfBlock to better reflect
        what it does.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83740 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3a9bcf02f7e5e0ad89a228f87e19e9e6cf2527d7
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Oct 10 22:17:47 2009 +0000
    
        It seems that the OR operation does not affect the status reg at all.
        Remove the impdef of SRW. This fixes PR4779
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83739 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b9a5155e6a8ce63a720f7a3451716b18c2d28fa4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 22:15:25 2009 +0000
    
        fix broken link
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83738 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8767bdca785849ff566f409d4d3801f274686e3a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 22:13:38 2009 +0000
    
        final changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83737 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 835f25302bc50f969d425fd9c2b911113bf61d96
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 22:02:58 2009 +0000
    
        down to 'major changes'
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83736 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fdf24e7c757b6ab6fef50461bdc3467230d576b9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 21:40:13 2009 +0000
    
        more updates
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83735 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 69c5440a5bd6d4e6016cf892d6950753f175e4dc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 21:37:16 2009 +0000
    
        add a link to the GSG for info on how to check out svn trunk
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83734 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7305d116823dc875501092035f458734c0663161
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 21:30:55 2009 +0000
    
        x86 uses 5 operands for most memory refs now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83733 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d4f53350fb0367b3b87ce3230083a7709294ba24
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Sat Oct 10 20:06:04 2009 +0000
    
        More spelling and grammar tweaks.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83728 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 08f0add2e589d69edcb270125e86533fcb751e48
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Sat Oct 10 19:45:13 2009 +0000
    
        More spelling fixes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83724 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dc5bd2bcab36d8df81d308baefbf36441b676b37
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Sat Oct 10 19:30:16 2009 +0000
    
        Spelling fixes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83722 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 10abb752d27135245e6565c347fa7c61a69d3159
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 19:26:21 2009 +0000
    
        more tweaks
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83721 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ccedda4f3460575743ed25a090ee3904d971ab5a
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Sat Oct 10 19:16:25 2009 +0000
    
        Remove an inappropriate line in the description of the
        clang static analyser.  Decrease duplication in the text.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83720 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 51ead6da49d101c8507fa07b7d1b35dc563abeed
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 19:00:55 2009 +0000
    
        continue decoding chris scribble.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83719 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2bb49c3f61f371f06fd85f34fbd886bbf513667a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 18:40:48 2009 +0000
    
        remove some dead passes
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83717 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4d91d4d3e9e2383157ccb0404ae3d6eb432fd54d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 18:33:13 2009 +0000
    
        checkpoint.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83716 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0fee5c2ba2015f5d738d413b3980ab71a0fdfb74
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 18:26:06 2009 +0000
    
        fix broken anchors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83715 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2caa3dbab217e4ec2efa8ee04fa9de812aac99c8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 09:09:20 2009 +0000
    
        use a typedef instead of spelling out an insane type.  Yay for auto someday.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83707 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cad6e47254b042d7ee8168c0b19240436f397baa
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 09:05:58 2009 +0000
    
        Change jump threading to use the new SSAUpdater class instead of
        DemoteRegToStack.  This makes it more efficient (because it isn't
        creating a ton of load/stores that are eventually removed by a later
        mem2reg), and slightly more effective (because those load/stores
        don't get in the way of threading).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83706 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4baac8e63aae277b30891647cb8cc91500035a28
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 09:04:27 2009 +0000
    
        Implement an efficient and fully general SSA update mechanism that
        works on unstructured CFGs.  This implements PR217, our oldest open PR.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83705 91177308-0d34-0410-b5e6-96231b3b80d8
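
    The client-side protocol, sketched with later-LLVM signatures (the
    original Initialize took slightly different arguments):

        SSAUpdater SSA;
        SSA.Initialize(OrigVal->getType(), OrigVal->getName());
        SSA.AddAvailableValue(DefBB, NewDef);    // register each definition
        SSA.AddAvailableValue(OtherBB, OtherDef);
        // Collect uses first: RewriteUse edits the use list being walked.
        SmallVector<Use *, 16> Uses;
        for (Use &U : OrigVal->uses())
          Uses.push_back(&U);
        for (Use *U : Uses)
          SSA.RewriteUse(*U);   // PHIs are materialized on demand, even on
                                // irreducible (unstructured) CFGs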
    
    commit e8bb5a4841c7f35bdad091ac5e9ffd8515ffcd5a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 08:27:29 2009 +0000
    
        add some WeakVH::operator='s.  Without these, assigning
        a Value* to a WeakVH was constructing a temporary WeakVH
        (due to the implicit assignment operator).  This avoids
        that cost.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83704 91177308-0d34-0410-b5e6-96231b3b80d8
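
    A simplified illustration of the cost being removed (not the real
    class; the real WeakVH also maintains the value-handle list on the
    Value):

        struct DemoVH {
          Value *V = nullptr;
          // Without this overload, `vh = SomeValue` builds a temporary
          // DemoVH from the pointer, copy-assigns it, and destroys it.
          DemoVH &operator=(Value *RHS) {
            V = RHS;
            return *this;
          }
        };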
    
    commit 0470c8bf65383fcdf8e19fab6350e9dc8992c343
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 08:01:27 2009 +0000
    
        change some static_cast into cast, pointed out by Gabor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83703 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4c097ffa6542b236a22d784f35c69c28a54189a0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 07:42:42 2009 +0000
    
        add a version of PHINode::getIncomingBlock that takes a raw
        Use, to complement the version that takes a use_iterator.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83702 91177308-0d34-0410-b5e6-96231b3b80d8
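
    Typical call site: while walking a value's use list a raw Use is
    already in hand, so the new overload avoids reconstructing an
    iterator (iteration style below is the modern spelling):

        for (Use &U : V->uses())
          if (auto *PN = dyn_cast<PHINode>(U.getUser()))
            BasicBlock *Pred = PN->getIncomingBlock(U);  // the new overload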
    
    commit 34635fd2a1fa61305f64291b89ad230ad0c1abfb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Oct 10 06:22:45 2009 +0000
    
        random tidying
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83701 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f5ee5165329daef7db0c6a8a0abc23d344e9e6f9
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 10 01:32:21 2009 +0000
    
        Create a new InstrEmitter class for translating SelectionDAG nodes
        into MachineInstrs. This is mostly just moving the code from
        ScheduleDAGSDNodesEmit.cpp into a new class. This decouples MachineInstr
        emitting from scheduling.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83699 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e1c0b3bed2a6e94779604ba226ee8721496f20e7
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 10 01:29:16 2009 +0000
    
        Make getMachineNode return a MachineSDNode* instead of a generic SDNode*
        since it won't do any folding. This will help avoid some inconvenient
        casting.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83698 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fe5ef8c515912e0217c2730731f5f0385a14c250
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 10 00:36:09 2009 +0000
    
        Remove a no-longer-necessary #include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83697 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ef188518b791f048bd10dc1e8d0f62e63086198
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 10 00:34:18 2009 +0000
    
        Replace X86's CanRematLoadWithDispOperand by calling the target-independent
        MachineInstr::isInvariantLoad instead, which has the benefit of being
        more complete.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83696 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 101611d6083ca81b9bcf86d51be4717fbe1edc5a
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Oct 10 00:15:38 2009 +0000
    
        Fix a missing initialization of PostRAScheduler's AA member.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83695 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 04e1b1d338e24537ab7c42bd0a1f0bf544732080
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 9 23:33:48 2009 +0000
    
        The ScheduleDAG framework now requires an AliasAnalysis argument, though
        it isn't needed in the ScheduleDAGSDNodes schedulers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83691 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d85c2fe674dcb1b5e8c3bfa740a8e396e1a43290
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 9 23:31:04 2009 +0000
    
        Update this test; the code is the same but it gets counted as one
        fewer remat.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83690 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 78e1a77cde46b7ba5de7d39b3a6db14e058f897c
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 9 23:28:27 2009 +0000
    
        Mark the LDR instruction with isReMaterializable, as it is rematerializable
        when loading from an invariant memory location.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83688 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0a4c09e724bd3ab7c9a1d3a1615894e7bf209179
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 9 23:27:56 2009 +0000
    
        Factor out LiveIntervalAnalysis' code to determine whether an instruction
        is trivially rematerializable and integrate it into
        TargetInstrInfo::isTriviallyReMaterializable. This way, all places that
        need to know whether an instruction is rematerializable will get the
        same answer.
    
        This enables the useful part of the aggressive-remat option by
        default -- using AliasAnalysis to determine whether a memory location
        is invariant -- and removes the questionable part: rematting operations
        with virtual register inputs that may not be live everywhere.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83687 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 40c80212c1c0be155e7a561f0a18a94856d7eb2f
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Oct 9 22:42:28 2009 +0000
    
        Extract scope information from the variable itself, instead of relying on the alloca or llvm.dbg.declare location.
    
        While recording the beginning of a function, use the scope info from the first location entry instead of just relying on the location entry itself.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83684 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 43304d30b79b35253613e21f34375a5493f62550
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Fri Oct 9 22:10:27 2009 +0000
    
        ExecutionEngine::clearGlobalMappingsFromModule failed to remove reverse
        mappings, which could cause errors and assert-failures.  This patch fixes that,
        adds a test, and refactors the global-mapping-removal code into a single place.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83678 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 04e3e0aec0ee96b3c0c59404d7715aaa97d95eb0
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 9 22:09:05 2009 +0000
    
        Add a const qualifier.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83677 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f5c20d7bc7641c093c9630366823584817b1f784
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Fri Oct 9 21:42:02 2009 +0000
    
        Use names instead of numbers for some of the magic
        constants used in inlining heuristics (especially
        those used in more than one file).  No functional change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83675 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 60bc0274f8e3cc55dd9332c4f404afee1c675b95
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Fri Oct 9 21:12:28 2009 +0000
    
        Added another bit of the ARM target assembler to llvm-mc to parse register
        lists.  Changed ARMAsmParser::MatchRegisterName to return -1 instead of 0 on
        errors so values 0-15 could be returned as register numbers.  Also added the
        rest of the ARM register names to the currently hacked-up version to allow more
        testing.  Some changes to ARMAsmParser::ParseOperand to give different errors
        for things not yet supported and some additions to the hacked
        ARMAsmParser::MatchInstruction to allow more testing for now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83673 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e5496f6ba52d5603eb26d72b421c2d8731f4ac24
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 9 21:02:10 2009 +0000
    
        isTriviallyReMaterializable checks the
        TargetInstrDesc::isRematerializable flag, so it isn't necessary to do
        this check in its callers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83671 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f9a38e927ec463018c38d44e71c096032d2bd855
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 9 20:35:19 2009 +0000
    
        Fix the x86 test-shrink optimization so that it doesn't shrink comparisons
        when one of the bits being tested would end up being the sign bit in the
        narrower type, and a signed comparison is being performed, since this would
        change the result of the signed comparison. This fixes PR5132.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83670 91177308-0d34-0410-b5e6-96231b3b80d8
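
    A worked example of the hazard (two's complement assumed): bit 7 is an
    ordinary value bit in 16 bits but the sign bit in 8 bits.

        #include <cstdio>
        int main() {
          short x = 0x0080;                            // bit 7 set
          bool wide   = (short)(x & 0x00FF) > 0;       // 128 > 0   -> true
          bool narrow = (signed char)(x & 0xFF) > 0;   // -128 > 0  -> false
          std::printf("%d %d\n", wide, narrow);        // prints: 1 0
        }

    So the shrink stays legal only for equality tests and unsigned
    compares, where the sign interpretation of the bit cannot matter.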
    
    commit ff57ca95835abff5340b955ef0277117c35439db
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Oct 9 20:20:54 2009 +0000
    
        Merge a bunch of NEON tests into larger files so they run faster.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83667 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c7973eb023472b13237076954240d915c30c2c69
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 9 18:10:05 2009 +0000
    
        Add basic infrastructure and x86 support for preserving MachineMemOperand
        information when unfolding memory references.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83656 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fb8127571e3f12f3c71b90f92b561ba34b4c0582
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Oct 9 17:51:49 2009 +0000
    
        Check for invalid debug info for enums. This may happen when the underlying enum is optimized away. Eventually DwarfChecker will clean this up during the llvm verification stage.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83655 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 66037312be7232178d30cfcab772f4e8e330ff5c
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Oct 9 17:33:33 2009 +0000
    
        when the previous scratch register is killed, flag the value as no longer being tracked
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83653 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 20399e118dce04d516184b0d75e64ce701101f3d
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Oct 9 17:20:46 2009 +0000
    
        Convert some ARM tests with lots of greps to use FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83651 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f5ceadfdacd1076a9449ec317dc8c6f7f621ca62
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 9 16:35:06 2009 +0000
    
        Revert r83606 and add comments explaining why it isn't safe.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83649 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 13eff6afac77654d75c584382caf1ae89295168d
    Author: Nicolas Geoffray <nicolas.geoffray at lip6.fr>
    Date:   Fri Oct 9 13:17:57 2009 +0000
    
        As it turns out, the bug fixes in GC codegen did not make it
        to llvm-2.6. Remove the precise garbage collection feature.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83638 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0dcee36aa05b0bec97f63996fcb9f34626e86118
    Author: Nicolas Geoffray <nicolas.geoffray at lip6.fr>
    Date:   Fri Oct 9 10:17:14 2009 +0000
    
        80-columns!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83628 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 12c0e568a1c2bc53ada189e7e846c8fc6f0fea4a
    Author: Nicolas Geoffray <nicolas.geoffray at lip6.fr>
    Date:   Fri Oct 9 10:13:08 2009 +0000
    
        Add initial information on VMKit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83627 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 896d258c97e62cda9273af7c16420116652a4520
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Oct 9 06:36:25 2009 +0000
    
        more random updates.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83625 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d1aa3e5f0fe657b868ff72e8bab8db7359e1bc98
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 9 06:31:25 2009 +0000
    
        Given Dan's and my recent changes, machine LICM is now code size neutral.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83624 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0d60110630c3869e7a8f0dbeb8a1228856425760
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Oct 9 06:24:25 2009 +0000
    
        checkpoint.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83623 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3dd21c11a02af6dede438855f9b433be3dfb1abc
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 9 06:21:52 2009 +0000
    
        Fix a logic error that caused non-rematerializable loop-invariant loads to be licm'ed out of the loop.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83622 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6e6d33cba1b2c637e02dbb91639ce9832b77c0a0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Oct 9 05:55:04 2009 +0000
    
        checkpoint.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83621 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3a22d57b59002c06bcc397a2d0896880748da5d6
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Fri Oct 9 05:45:38 2009 +0000
    
        Slight rewording.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83620 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 731bacfd51b94138ba08e635eff3ca6fdcbac118
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Fri Oct 9 05:45:21 2009 +0000
    
        Omit the 'out_file_index != -1' check when possible.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83619 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 220acf8eeb24372c18be5fcb09801f990dffe4c7
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Fri Oct 9 05:45:01 2009 +0000
    
        Use llvm-as only for compiling .ll -> .bc.
    
        llc can compile .ll files directly these days.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83618 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit abdc3b88031c863d2f74d104127686a5a69af598
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Oct 9 05:31:56 2009 +0000
    
        Convert one last NEON test to use FileCheck.  That's all of them now!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83617 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cb2151d7341f051c8a01aa098bf2eda3379e9234
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Oct 9 05:14:48 2009 +0000
    
        Convert more NEON tests to use FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83616 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 401ec6d06b636887ac3d25016aea353151394e54
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Oct 9 05:01:15 2009 +0000
    
        update clang section.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83615 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cbaa50309d57421f5bdd4d90a94a36b88444fb73
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Fri Oct 9 04:15:52 2009 +0000
    
        Raise the limit on built-in plugins in llvmc to 10.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83614 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5bdb7471aa50c84fdbe958fe5b2a2e0ca20fe930
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Fri Oct 9 02:40:01 2009 +0000
    
        Reconfigure automatically when Base.td.in is changed.
    
        Thanks to Chris for the heads-up!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83613 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9b4ac5427138629eb3cf8419fcf059b1b8848f91
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 9 01:17:11 2009 +0000
    
        Reset kill markers after live interval is reconstructed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83608 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit afa08b51a65fd3e945cb752281f7ad5f02c4fa29
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Oct 9 00:57:38 2009 +0000
    
        Indentation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83607 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 837b52307f0cc56172a43dab64d2fbcfe1bd5d5f
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 9 00:41:22 2009 +0000
    
        Preserve HasNSW and HasNUW when constructing SCEVs for Add and Mul
        instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83606 91177308-0d34-0410-b5e6-96231b3b80d8
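
    The shape of the fix, with the flag plumbing spelled as in later LLVM
    releases (the 2009 interface differed) -- a sketch, not the committed
    code:

        auto *OBO = cast<OverflowingBinaryOperator>(I);   // the add or mul
        SCEV::NoWrapFlags Flags = SCEV::FlagAnyWrap;
        if (OBO->hasNoSignedWrap())
          Flags = ScalarEvolution::setFlags(Flags, SCEV::FlagNSW);
        if (OBO->hasNoUnsignedWrap())
          Flags = ScalarEvolution::setFlags(Flags, SCEV::FlagNUW);
        const SCEV *S = SE.getAddExpr(SE.getSCEV(I->getOperand(0)),
                                      SE.getSCEV(I->getOperand(1)), Flags);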
    
    commit fa77a95f69eb7ebf8ad5eec689ab92f2f8961f64
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Fri Oct 9 00:11:32 2009 +0000
    
        When considering whether to inline Callee into Caller,
        and doing so would make Caller too big to inline, see if it
        might be better to inline Caller into its callers instead.
        This situation is described in PR 2973, although I haven't
        tried the specific case in SPASS.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83602 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c9afeb35476d80a581127cd6e46b62a75727de30
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Oct 9 00:10:36 2009 +0000
    
        Add the ability to track HasNSW and HasNUW on more kinds of SCEV expressions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83601 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c7692e0185faaab4a68386afc362d95be591b455
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Oct 9 00:01:36 2009 +0000
    
        Add codegen support for NEON vst4lane intrinsics with 128-bit vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83600 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dbffb21337be67595a47585d60e8dda083038601
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 8 23:51:31 2009 +0000
    
        Add codegen support for NEON vst3lane intrinsics with 128-bit vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83598 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 18e94a7a1d02201407370ce9f73be9798bd4b7cc
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 8 23:38:24 2009 +0000
    
        Add codegen support for NEON vst2lane intrinsics with 128-bit vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83596 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e76c59427191dff2450a644ad14db0a8d973b18e
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 8 23:33:03 2009 +0000
    
        Convert more NEON tests to use FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83595 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7a8c6dfd8b82ead296bebcb567580cf08c5d449d
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 8 22:53:57 2009 +0000
    
        Add codegen support for NEON vld4lane intrinsics with 128-bit vectors.
        Also fix some copy-and-paste errors in previous changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83590 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9e95fbdc67767a5b924b05be00174db40a0cce57
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Oct 8 22:42:35 2009 +0000
    
        Remove code that makes no sense.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83589 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7f38db881ebc7e8d6e3723f74306fc179bc6a26d
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 8 22:33:53 2009 +0000
    
        Convert more NEON tests to use FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83587 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 47a1ff630bcc4c6ad5fc198b0b1fe9f7cd07a85c
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 8 22:27:33 2009 +0000
    
        Add codegen support for NEON vld3lane intrinsics with 128-bit vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83585 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 872cba53706d4b15c7d44ee586495daaf9acb850
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Thu Oct 8 21:24:34 2009 +0000
    
        Update CMake build yet again after a source file was removed
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83575 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ee60911ca1e826c45202025244f0d1d273d1c2c0
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Oct 8 20:52:51 2009 +0000
    
        It's possible for a global variable to be optimized out of a metadata object,
        so we should allow a null result from this dyn_cast.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83573 91177308-0d34-0410-b5e6-96231b3b80d8
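
    The usual idiom for "the operand may already be gone" is
    dyn_cast_or_null<>, which returns null for a null input instead of
    asserting (the accessor below is illustrative, not the exact call
    site):

        auto *GV = dyn_cast_or_null<GlobalVariable>(N->getOperand(i));
        if (!GV)
          return;   // the global was optimized out of the metadata node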
    
    commit 9c196ef25d4afaf139cf282e439df7afa2c4e6ad
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Thu Oct 8 20:43:22 2009 +0000
    
        Use lower16 / upper16 imm modifiers to asmprint 32-bit imms split via a movt/movw pair.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83572 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f57e106e4ef58a165435aef6683db98b559f594b
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Oct 8 20:41:17 2009 +0000
    
        Clear variable debug info map at the end of the function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83571 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5687d8a26c08d089c60df2c92d048ab85bcdb2b9
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 8 18:56:10 2009 +0000
    
        Add codegen support for NEON vld2lane intrinsics with 128-bit vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83568 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5642cc9b885987140f2f385af99fd6ad3ff47845
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 8 18:52:56 2009 +0000
    
        Clean up some unnecessary initializations.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83566 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 67902736c48e80706297e3141221c4951a15dabc
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 8 18:51:31 2009 +0000
    
        Clean up a comment (indentation was wrong).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83565 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fcaa4cff1b4f3a5b7a731c6e591f345ce5fcdbc1
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 8 18:49:46 2009 +0000
    
        Add a SelectionDAG getTargetInsertSubreg convenience function,
        similar to getTargetExtractSubreg.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83564 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6882dff7faafcef6f4099c22fcb8f1720b021720
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Oct 8 18:48:03 2009 +0000
    
        Do not record a line number to implicitly mark the start of a function if the function has arguments. Extra line number entries trip gdb in some cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83563 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 37f93448e6320e093f256f4884200656a3441f03
    Author: Richard Osborne <richard at xmos.com>
    Date:   Thu Oct 8 17:14:57 2009 +0000
    
        Add missing names for the XCore specific LADD and LSUB nodes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83556 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 130a245ddcec93d0105d56a5490bf15261fc5775
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Oct 8 17:00:02 2009 +0000
    
        Add a form of addPreserved which takes a string argument, to allow passes
        to declare that they preserve other passes without needing to pull in
        additional header file or library dependencies. Convert MachineFunctionPass
        and CodeGenLICM to make use of this.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83555 91177308-0d34-0410-b5e6-96231b3b80d8
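
    With the string form a pass can be preserved by its registered
    command-line name without pulling in its header (names below are
    illustrative):

        void MyCodeGenPass::getAnalysisUsage(AnalysisUsage &AU) const {
          AU.addPreserved("domtree");  // vs. addPreserved<DominatorTreeWrapperPass>()
          MachineFunctionPass::getAnalysisUsage(AU);
        }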
    
    commit 59a302773ef641214008b3fca3d18a6227e06f5d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 8 16:01:33 2009 +0000
    
        some updates from users of llvm
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83551 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c3a9f20fbd8ce4f486a775c526f6915da438072e
    Author: Richard Osborne <richard at xmos.com>
    Date:   Thu Oct 8 15:38:17 2009 +0000
    
        Add some peepholes for signed comparisons using ashr X, X, 32.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83549 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 169c7013404386716ee4fa02d8c88286a15b598c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 8 07:01:46 2009 +0000
    
        all content split into sections, still much work to be done.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83532 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 019ef5c94bf984de5c7ed18e3cd8174199cae615
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 8 06:42:44 2009 +0000
    
        remove LoopVR pass.  According to Nick:
        "LoopVR's logic was copied into ScalarEvolution::getUnsignedRange and
        ::getSignedRange. Please delete LoopVR."
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83531 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6aaf6907619d567b988c88dc787974141d093fa0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Oct 8 06:27:53 2009 +0000
    
        checkpoint, this is still not comprehensible.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83530 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 96c75278165e3401271e659bb9375c0b0cb76241
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Thu Oct 8 06:03:38 2009 +0000
    
        Unbreak the build.
    
        Forgot about the need to reconfigure after modifying Base.td.in....
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83529 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c70c0aaa6bcd91504049046a85ef1320be6e442
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 8 06:02:10 2009 +0000
    
        Convert more NEON tests to use FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83528 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 94b5d43b15023b4757c1b08838104c6a14438e45
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 8 05:18:18 2009 +0000
    
        Add codegen support for NEON vst4 intrinsics with <1 x i64> vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83526 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 18c78e698719eea38d06de611575087e084e1dfc
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Thu Oct 8 04:40:28 2009 +0000
    
        Make the Base plugin understand -MF and -MT.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83525 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c7d766abfd5bdebe8e27baf0e92fe4c11e2dcc9
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Thu Oct 8 04:40:08 2009 +0000
    
        Input files should go before all other options.
    
        Important, for example, when calling 'gcc a.o b.o c.o -lD -lE -lF'.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83524 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 508ceee8273f7a4f579c60fd9e1ad2fee9f40241
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Oct 8 01:50:26 2009 +0000
    
        Clean up unused R3LiveIn tracking.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83522 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4820d98aaad1442bfb4efb104a2ac91b64bc3a36
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Oct 8 01:46:59 2009 +0000
    
        Re-enable register scavenging in Thumb1 by default.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83521 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 51d82f19af9992ec249e8494031365864defd035
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Oct 8 01:09:45 2009 +0000
    
        Bugfix: the target may use virtual registers that aren't tracked for re-use but are allocated by the scavenger. The re-use algorithm needs to watch for that.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83519 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7200e5d850492bc316cf1cef72cfff01fb88fdb5
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 8 00:28:28 2009 +0000
    
        Add codegen support for NEON vst3 intrinsics with <1 x i64> vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83518 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dd43d1eec2d82eb61329b5940fc2c7f4165cd9ed
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Oct 8 00:21:01 2009 +0000
    
        Add codegen support for NEON vst2 intrinsics with <1 x i64> vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83513 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 17091f035fcfa8a12b319146d199c043fbc59fce
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Oct 8 00:12:24 2009 +0000
    
        In instcombine's debug output, avoid printing ADD for instructions that are
        already on the worklist, and print Visited when an instruction is about to be
        visited.  Net effect: on one input, this reduced the output size by at least 9x.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83510 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7ce4750b6014e38275dcf2878b6ca15a7111ac6d
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 7 23:54:04 2009 +0000
    
        Add codegen support for NEON vld4 intrinsics with <1 x i64> vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83508 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d48ca59eb7bd0af0e294c309b403f7bd72fdab0a
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 7 23:47:21 2009 +0000
    
        Convert more NEON tests to use FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83507 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit da8caccf975ceec8914586e3f0ae7e46a228e640
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 7 23:39:57 2009 +0000
    
        Add codegen support for NEON vld3 intrinsics with <1 x i64> vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83506 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ed8f6834c213d06f4b31cf1747d8900d346b461f
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Wed Oct 7 23:22:42 2009 +0000
    
        Fix the OProfile part of PR5018. This fixes --without-oprofile, makes
        it the default, and works around a broken libopagent on some Debian
        systems.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83503 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8c3be587276cc116d834bf213f80149e40add2b9
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 7 22:57:01 2009 +0000
    
        Add codegen support for NEON vld2 intrinsics with <1 x i64> vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83502 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c7d2c5396f2876ed2ac8e5c3b65291a769a90767
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 7 22:49:41 2009 +0000
    
        reverting thumb1 scavenging default due to test failure while I figure out what's up.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83501 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bcd68c3cde6340d81b205a97d9855c55c131150b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Oct 7 22:49:30 2009 +0000
    
        second half of lazy liveness removal.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83500 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1bf03f7398acced77fdf5dcc70543c47f550e956
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Oct 7 22:47:20 2009 +0000
    
        Fix handling of x86 'R' constraint.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83499 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c85628e8a290be582bf9c9845aedc29ac8dac99e
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 7 22:30:19 2009 +0000
    
        Convert more NEON tests to use FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83497 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ce7bfc50779b980e76e470733c72c25c82a5c0bf
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 7 22:26:31 2009 +0000
    
        Enable thumb1 register scavenging by default.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83496 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1d57f4b9260b0e0133e79e4fad874dfaf1739975
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 7 22:26:14 2009 +0000
    
        Enable thumb1 register scavenging by default.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83494 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 47b9a3a81764a9db2834279dd716127112d0ca72
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Oct 7 22:04:08 2009 +0000
    
        Extract subprogram and compile unit information from the debug info attached to an instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83491 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b172116673cb33453971b6308941bc7cbd2f624c
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 7 21:53:04 2009 +0000
    
        Add some instruction encoding bits for NEON load/store instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83490 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c8d0f170e091f38e57a5889655d2e14497c20563
    Author: Eric Christopher <echristo at apple.com>
    Date:   Wed Oct 7 21:14:25 2009 +0000
    
        80-column and whitespace fixes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83489 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0479a8535055d21c086964c753787a62f8688cb0
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Wed Oct 7 20:57:20 2009 +0000
    
        Fixed MCSectionMachO::ParseSectionSpecifier to allow an attribute of "none" so
        that a symbol stub section with no attributes can be parsed as in:
        .section __TEXT,__picsymbolstub4,symbol_stubs,none,16
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83488 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fa506f44c225410cb1592fb48bbb9e1a85bba76b
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 7 20:51:42 2009 +0000
    
        Convert test to FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83487 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 931c76b4c08ea058d90f9fd5f5a59aea52010c0d
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 7 20:49:18 2009 +0000
    
        Add codegen support for NEON vst4 intrinsics with 128-bit vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83486 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2a85bd13a0155cef6081a7f78e4b59c70ba8f5b4
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 7 20:30:08 2009 +0000
    
        Add codegen support for NEON vst3 intrinsics with 128-bit vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83484 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a3330a871867e32033b8e0ea4134517d0ff62b9d
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 7 19:08:36 2009 +0000
    
        grammar
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83483 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5fa67d35b87341e9990fb7b3a26e3de7e76c0e64
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 7 18:47:39 2009 +0000
    
        Add codegen support for NEON vst2 intrinsics with 128-bit vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83482 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b1355fe89fe28726ead6203e1b1f6e43f3292a81
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 7 18:44:24 2009 +0000
    
        add initializers for clarity. Add missing assignment of PrevLastUseOp.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83481 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fd4a856fb7ad8ccf886eebd9a1575be404ada2f9
    Author: Owen Anderson <resistor at mac.com>
    Date:   Wed Oct 7 18:40:17 2009 +0000
    
        Remove LazyLiveness from the tree.  It doesn't work right now, and I'm not going to have the time
        to finish it any time soon.  If someone's interested in it, they can resurrect it from SVN history.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83480 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 004a2e16d3ac693fad8bbd5a6798406653f672ba
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 7 18:09:32 2009 +0000
    
        Add codegen support for NEON vld4 intrinsics with 128-bit vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83479 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 308327301d99c310723d2ac204dc1fc513f89243
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Wed Oct 7 18:01:35 2009 +0000
    
        Add another bit of the ARM target assembler to llvm-mc to parse registers
        with writeback, things like "sp!", etc.  Also added some more stuff to the
        temporarily hacked methods ARMAsmParser::MatchRegisterName and
        ARMAsmParser::MatchInstruction to allow more parser testing.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83477 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9c1b793895e57d2696e6c84215f63bfd699e222e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 7 17:47:20 2009 +0000
    
        Replace some code for aggressive-remat with MachineInstr::isInvariantLoad, and
        teach it how to recognize invariant physical registers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83476 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eef78170dfb5ee4137742e94dbb0253cdda451f8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 7 17:38:06 2009 +0000
    
        Replace TargetInstrInfo::isInvariantLoad and its target-specific
        implementations with a new MachineInstr::isInvariantLoad, which uses
        MachineMemOperands and is target-independent. This brings MachineLICM
        and other functionality to targets which previously lacked an
        isInvariantLoad implementation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83475 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d5ce7033ba2445d8fd13504e422afeeffb61c3d3
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 7 17:36:00 2009 +0000
    
        Add a few simple MachineVerifier checks for MachineMemOperands.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83474 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a8b4362ee4e3a8442c18c5c951594a018cb7a682
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 7 17:24:55 2009 +0000
    
        Add codegen support for NEON vld3 intrinsics with 128-bit vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83471 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 520f5611addbb5e14f6c0140155c94690e421414
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 7 17:23:09 2009 +0000
    
        Rearrange code for selecting vld2 intrinsics.  No functionality change.
        This is just to be more consistent with the forthcoming code for vld3/4.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83470 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 23e40bb47e44e9c84d3597d5ab6728e7656acbdf
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Oct 7 17:19:13 2009 +0000
    
        Add tests for vld2 of 128-bit vectors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83468 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b67fb2b7be5c577e88578d240998a7ed3bf210db
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Oct 7 17:12:56 2009 +0000
    
        Add register-reuse to frame-index register scavenging. When a target uses
        a virtual register to eliminate a frame index, it can return that register
        and the constant stored there to PEI to track. When scavenging to allocate
        for those registers, PEI then tracks the last-used register and value, and
        if it is still available and matches the value for the next index, reuses
        the existing value rather than rematerializing, and removes the
        re-materialization instructions.
        Fancier tracking and adjustment of scavenger allocations to keep more
        values live for longer is possible, but not yet implemented and would likely
        be better done via a different, less special-purpose, approach to the
        problem.
    
        eliminateFrameIndex() is modified so the target implementations can return
        the registers they wish to be tracked for reuse.
    
        ARM Thumb1 implements and utilizes the new mechanism. All other targets are
        simply modified to adjust for the changed eliminateFrameIndex() prototype.
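
        Conceptually, the reuse policy looks like the following (editor's
        self-contained C++ sketch; every name in it is hypothetical and
        merely stands in for the actual PEI data structures):

          #include <vector>

          struct FrameRef { int Value; unsigned Reg; };            // hypothetical
          bool isStillAvailable(unsigned Reg, const FrameRef &FR); // stub
          unsigned scavengeAndMaterialize(const FrameRef &FR);     // stub

          void assignScratchRegs(std::vector<FrameRef> &Refs) {
            unsigned LastReg = 0; // register last returned by eliminateFrameIndex
            int LastValue = 0;    // constant that was materialized into it
            for (size_t i = 0, e = Refs.size(); i != e; ++i) {
              FrameRef &FR = Refs[i];
              if (LastReg && LastValue == FR.Value &&
                  isStillAvailable(LastReg, FR)) {
                FR.Reg = LastReg;   // reuse; drop the re-materialization
              } else {
                LastReg = FR.Reg = scavengeAndMaterialize(FR); // normal path
                LastValue = FR.Value;
              }
            }
          }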
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83467 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit faad9ce705c2531b896c96c880137fdba93f96e6
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Oct 7 16:37:55 2009 +0000
    
        Do not assume that the module is set.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83462 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f57115c4bc5ab3acb1c2fafae53e340c9591a924
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Wed Oct 7 09:23:56 2009 +0000
    
        Add a comment explaining how DenseMap::insert works, because it is not
        intuitive.
        It does NOT update the value if the key is already in the map;
        it also returns false when the key is already present, regardless
        of whether the value matched.
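
        For example (editor's sketch; DenseMap::insert returns a
        std::pair<iterator, bool>, like std::map::insert):

          #include "llvm/ADT/DenseMap.h"
          #include <utility>
          using namespace llvm;

          void example() {
            DenseMap<unsigned, unsigned> M;
            M.insert(std::make_pair(1u, 10u)); // new key: second == true
            bool Inserted = M.insert(std::make_pair(1u, 20u)).second;
            // Inserted is false and M[1] is still 10: the existing value
            // was NOT updated, even though 20 != 10.
          }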
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83458 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1d7579c137eb7b85a3e40f4b89c1d73c4d29dea3
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Wed Oct 7 09:22:55 2009 +0000
    
        Add PR to this FIXME, looks like I didn't commit this change after all.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83457 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9657ce5edd48f028aa8f8e07ce043aad5dd24e6b
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Wed Oct 7 07:35:19 2009 +0000
    
        Make getPointerTo return a const PointerType* rather than
        an unqualified PointerType* because it seems more correct.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83454 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d1f241ac09c5f93dd0f967eeb75172b7ff27d874
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 7 03:00:18 2009 +0000
    
        INTRINSIC_W_CHAIN and INTRINSIC_VOID do not use MemSDNode. They
        may access memory, but they don't carry a MachineMemOperand.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83449 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b656c2722d3d782e893fccb11f0be2c9fbf4b31a
    Author: Eric Christopher <echristo at apple.com>
    Date:   Wed Oct 7 00:54:08 2009 +0000
    
        Add FreeInst to the "is a call" check for Insts that are calls, but
        not intrinsics.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83441 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e725a19617ce125a02ad9c96279a68fbb0459240
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Oct 7 00:33:10 2009 +0000
    
        Fix this comment. The loop header is the loop entry point.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83437 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f68604a29b3058546a6ec8d7f244523d59893a3b
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Oct 7 00:06:35 2009 +0000
    
        Add PseudoSourceValues for constpool stuff on ELF (Darwin should use something similar)
        and register spills.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83435 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9081aa6d5f1ab9e2ec8d14ec71c5b2bc3bbbf9cb
    Author: Eric Christopher <echristo at apple.com>
    Date:   Wed Oct 7 00:02:18 2009 +0000
    
        While we still have a MallocInst, treat it as a call like any other
        for inlining.
    
        When MallocInst goes away this code will be subsumed as part of
        calls and work just fine...
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83434 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 91e27daccce0af4bff6f9299fa52c8cbbeab5ce1
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Tue Oct 6 22:26:42 2009 +0000
    
        Added bits of the ARM target assembler to llvm-mc to parse some load instruction
        operands.  Some parsing of ARM memory operands for preindexing and postindexing
        forms, including those with register-controlled shifts.  This is a work in progress.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83424 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e9829cafca9811fa616f8046ce384b797814fa67
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 6 22:01:59 2009 +0000
    
        Add codegen support for NEON vld2 operations on quad registers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83422 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3360945de1c4c74c224806f22405ad8081b29701
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 6 22:01:15 2009 +0000
    
        Use copyRegToReg hook to copy registers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83421 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 233b35a9018db719a4538efa59b00ba8c7582688
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Oct 6 21:45:26 2009 +0000
    
        r83391 was completely broken since Twines keep references to their inputs, and
        some of the inputs were temporaries.  Here's a real fix for the miscompilation.
        Thanks to sabre for pointing out the problem.
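
        The failure mode, in miniature (editor's illustration; makeName is
        hypothetical):

          #include "llvm/ADT/Twine.h"
          #include <string>
          using namespace llvm;

          std::string makeName(unsigned N);  // returns a temporary

          // BROKEN: a Twine only *references* its inputs, so the two
          // temporaries die at the ';' and T dangles before it is used:
          //   Twine T = makeName(1) + makeName(2);

          // SAFE: keep the inputs alive and render the Twine right away.
          std::string join() {
            std::string A = makeName(1), B = makeName(2);
            return (Twine(A) + B).str();
          }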
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83417 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fc6baa536fe873f258c8aef5cb170d1d4b67f0c4
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 6 21:16:19 2009 +0000
    
        Update NEON struct names to match llvm-gcc changes.
        (This is not required for correctness but might help with sanity.)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83415 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3443c216b30567ecf229ce43fe31ecb60e5e7ed3
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Oct 6 20:18:46 2009 +0000
    
        Fix a comment typo.
        Patch by Johnny Chen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83407 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 870fd046ae63c0ff664ba52054d8c6dd7debaf33
    Author: Nicolas Geoffray <nicolas.geoffray at lip6.fr>
    Date:   Tue Oct 6 19:55:53 2009 +0000
    
        Bugfix for the CommaSeparated option. The original code was adding the whole
        string at the end of the list, instead of the last comma-separated string.
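
        With the fix, a cl::list marked cl::CommaSeparated splits every
        element (editor's sketch of the option style in question):

          #include "llvm/Support/CommandLine.h"
          #include <string>
          using namespace llvm;

          static cl::list<std::string>
          Langs("langs", cl::CommaSeparated,
                cl::desc("Comma-separated list of languages"));
          // "-langs=c,cpp,objc" now yields {"c", "cpp", "objc"}; before
          // the fix the final entry was the whole unsplit "c,cpp,objc".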
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83405 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b814315fa8e7762d1d8ce33513e001f3df900245
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Tue Oct 6 19:45:38 2009 +0000
    
        Update CMake file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83404 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit abf70f8384290d8a2f26d494c27f20cefd1fed60
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Oct 6 19:06:16 2009 +0000
    
        Fix illegal cross-type aliasing. Found by baldrick on a newer gcc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83401 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0feae424692b07197998bd7a451d8da44fd0ac9a
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 6 18:37:31 2009 +0000
    
        Add support to handle debug info attached to an instruction.
        This is not yet enabled.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83400 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4c45a0c48614b2387855cee4f1e3f99013344ecf
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 6 17:43:57 2009 +0000
    
        Make LLVMContext's pImpl member const.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83393 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5fda434b0d4932d63b21ce3b5dd72c18e855da26
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Oct 6 17:38:38 2009 +0000
    
        Instead of printing unnecessary basic block labels as labels in
        verbose-asm mode, print comments.  This eliminates a non-comment
        difference between verbose-asm mode and non-verbose-asm mode.
    
        Also, factor the relevant code out of all the targets and into
        target-independent code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83392 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c28187ff2fb70821affaa8b0a58b2a350af7b795
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Oct 6 17:25:50 2009 +0000
    
        Fix PR5112, a miscompilation on gcc-4.0.3.  Patch by Collin Winter!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83391 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6f9e3d8c96438bfdc3953f3f85a44753e3bcf652
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Oct 6 16:59:46 2009 +0000
    
        remove predicate simplifier, it never got the last bugs beaten
        out of it, and jump threading, condprop and gvn are now getting
        most of the benefit.  This was approved by Nicholas and Nicolas.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83390 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f4a940b7700547f0f96469c52d155d15c97306ac
    Author: Richard Osborne <richard at xmos.com>
    Date:   Tue Oct 6 16:17:57 2009 +0000
    
        Remove xs1b predicate since it is no longer needed to differentiate between
        xs1a and xs1b.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83383 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1163f506d4d4cf143812d4b729ab43524f043a66
    Author: Richard Osborne <richard at xmos.com>
    Date:   Tue Oct 6 16:01:09 2009 +0000
    
        Remove xs1a subtarget. xs1a is a preproduction device used in
        early development boards which is no longer supported in the
        XMOS toolchain.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83381 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d90b176bd923fef57b155202ce529076f3b4c587
    Author: Richard Osborne <richard at xmos.com>
    Date:   Tue Oct 6 15:41:52 2009 +0000
    
        Default to the xs1b subtarget
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83380 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f2519d6193d736ee7f2b9cc24e0143cfa933f79e
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Oct 6 15:40:36 2009 +0000
    
        Introduce and use convenience methods for getting pointer types
        where the element is of a basic builtin type.  For example, to get
        an i8* use getInt8PtrTy.
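
        A minimal sketch (editor's example, assuming the 2.6-era headers):

          #include "llvm/DerivedTypes.h"
          #include "llvm/LLVMContext.h"
          using namespace llvm;

          const PointerType *getCharPtr(LLVMContext &Ctx) {
            // Before: PointerType::getUnqual(Type::getInt8Ty(Ctx))
            return Type::getInt8PtrTy(Ctx);  // i8* in one call
          }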
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83379 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 33b6450f08ca9ef6306adbd8cdb24145bf9a2202
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Oct 6 15:03:44 2009 +0000
    
        grammar
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83378 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f434889fb46c722f22c99a827ccc026f6271e3cd
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 6 03:15:38 2009 +0000
    
        Fix cut-n-pasto.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83367 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de8d48befd3a916c3389898c67a4f4adfa61fe78
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 6 03:04:58 2009 +0000
    
        Update processDebugLoc() to handle requests to process debug info, before and after emitting instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83364 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5450fc15cb5de674d4e5203ab9ace59d3d6c38e5
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 6 02:19:11 2009 +0000
    
        Update processDebugLoc() so that it can be used to process debug info before and after printing an instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83363 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ac35dcf9887f9e7883745876525c2a7dc39bf8d4
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 6 02:01:32 2009 +0000
    
        Remove dead code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83362 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 393a46deb66be6885fd064a38e650bed859af5a1
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 6 01:50:42 2009 +0000
    
        Add utility routine to set begin and end labels for DbgScopes.
        This will be used by processDebugLoc().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83361 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 77354f12a0fd1882f4ec787e0a4309c2aaeae819
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 6 01:31:35 2009 +0000
    
        Remove unintentional function decl.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83356 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8413999c15ade93fff73f9e43738f1dfd885ba70
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 6 01:26:37 2009 +0000
    
        Add utility routine to collect variable debug info. This is not yet used.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83355 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5d3b7ca3aa24f7b0f1edc18c8f72d9bf1a84169c
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Oct 6 00:35:55 2009 +0000
    
        Fix http://llvm.org/PR5116 by rolling back r60822.  This passes `make unittests
        check-lit` on both x86-64 Linux and x86-32 Darwin.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83353 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 93d2e89b3e82de8ab53b252b5f86de4e3b6980b6
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 6 00:09:08 2009 +0000
    
        Set default location for the function if it is not already set.
        This code is not yet enabled.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83349 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 95d477eff9aa5b68b022546565f0a41eee38cb89
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Oct 6 00:03:14 2009 +0000
    
        Existence of a compile unit for the input source file is a good indicator of debug info's presence in a module.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83348 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 186ee5cfbdf8c08394250e41d4bd6493999d56a1
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Oct 5 23:59:00 2009 +0000
    
        If a subprogram DIE is not available, then construct a new one.
        This can happen if debug info is processed lazily.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83347 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8c62b1e8b968a5a02d52913d3f4957e7a307bd5e
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Mon Oct 5 23:51:08 2009 +0000
    
        Add a test for http://llvm.org/PR3043.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83346 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 695c8b08cab0e2844ddc14e3e7f87916fd90fdc6
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Oct 5 23:40:42 2009 +0000
    
        Adjust context for the global variables that are not at file scope, e.g.
        void foo() { static int bar = 42; }
        Here, foo's DIE is the parent of bar's DIE.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83344 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6bd5cc8a280aee4124ba96c146ea9672ae8a9712
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Oct 5 23:22:08 2009 +0000
    
        Set address while constructing DIE.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83343 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4b1adaa251c9e496d2e1be180dd83da49a001548
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Oct 5 23:05:32 2009 +0000
    
        CMake misses a check for sbrk on NetBSD.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83341 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2bffa8f29fe36f76343f7bf82beefa5ef505b3fe
    Author: Evan Phoenix <evan at fallingsnow.net>
    Date:   Mon Oct 5 22:53:52 2009 +0000
    
        Extend ConstantFolding to understand signed overflow variants
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83338 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit adf4cf604eeca26d6a854945893fa6c264e02900
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Oct 5 22:30:23 2009 +0000
    
        In Thumb1, the register scavenger is not always able to use an emergency
        spill slot. When frame references are via the frame pointer, they will be
        negative, but Thumb1 load/store instructions only allow positive immediate
        offsets. Instead, Thumb1 will spill to R12.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83336 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1567166b9fe417a0656f9fcb2a18b35d915c34c3
    Author: Evan Phoenix <evan at fallingsnow.net>
    Date:   Mon Oct 5 22:29:11 2009 +0000
    
        First test commit
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83334 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 72c9ef41f51bf3a71ce5d44001d944e4223679f3
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Mon Oct 5 21:15:43 2009 +0000
    
        Don't treat malloc calls with non-matching prototype as malloc.
        Fixes second part of PR5130, miscompilation in FreeBSD kernel, where malloc takes 3 params,
        and *does* initialize memory.
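
        The FreeBSD kernel allocator's signature differs from ISO C's
        (editor's paraphrase of the kernel prototype):

          #include <stddef.h>

          struct malloc_type;
          /* Three parameters, and with M_ZERO in `flags' the memory *is*
             zero-initialized, so libc-malloc assumptions do not hold. */
          void *malloc(size_t size, struct malloc_type *type, int flags);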
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83324 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ddaf8eb6dc30ea98c9f28eefa779a70d14114fd7
    Author: Edward O'Callaghan <eocallaghan at auroraux.org>
    Date:   Mon Oct 5 18:43:19 2009 +0000
    
        No newline at end of files.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83318 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 946d0aef89afb5752d65e1900532c2afdc0cd700
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Oct 5 18:03:19 2009 +0000
    
        Gracefully handle various scopes while recording source line info.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83317 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f49f7b0e07de54727532f40a75be162878b882c7
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 5 16:36:26 2009 +0000
    
        Remove an unnecessary LLVMContext argument in
        ConstantFoldLoadThroughGEPConstantExpr.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83311 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3af2d41b3ea4d448dffb3761ea0339e7025ec447
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 5 16:31:55 2009 +0000
    
        Use Use::operator= instead of Use::set, for consistency.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83310 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6421f8134ceb2bb306e98dda770afc317fcbff42
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 5 15:52:08 2009 +0000
    
        Remove explicit enum integer values. They don't appear to be needed, and
        they make it less convenient to add new entries.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83308 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a93849c7f7d379b51fa23747f75afa5252fe3eae
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 5 15:42:08 2009 +0000
    
        Add RIP to GR64_NOREX. This fixes a MachineVerifier error when RIP
        is used in an operand which requires GR64_NOREX.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83307 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 290c17ca7b6f528a6780647273e9b9169a58d9eb
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Oct 5 15:23:17 2009 +0000
    
        Fix a name in a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83306 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 807767e1e711d943b51e579413858adbbc6e9807
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 5 07:07:29 2009 +0000
    
        callgraph changes came after the 2.6 branch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83299 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 82cdc06a6626e9a9fae300fafaeae9702ffb3808
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 5 05:54:46 2009 +0000
    
        strength reduce a ton of type equality tests to check the typeid (through
        the new predicates I added) instead of going through a context and doing a
        pointer comparison.  Besides being cheaper, this allows a smart compiler
        to turn the if sequence into a switch.
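
        The pattern, roughly (editor's sketch; the exact predicate names,
        e.g. isFloatTy()/isDoubleTy(), are an assumption):

          #include "llvm/Type.h"
          using namespace llvm;

          bool isFloatOrDouble(const Type *Ty) {
            // Before (pointer comparison by way of the context):
            //   Ty == Type::getFloatTy(Ctx) || Ty == Type::getDoubleTy(Ctx)
            // After: test the type ID directly; cheaper, and a chain of
            // such tests can become a switch.
            return Ty->isFloatTy() || Ty->isDoubleTy();
          }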
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83297 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4509b017dfea39a23da470417d3abbeab298b79f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 5 05:48:40 2009 +0000
    
        add more type predicates.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83296 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2825098ca09e0a7a620fafe8fe422071c1ff3291
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 5 05:26:04 2009 +0000
    
        teach the optimizer how to constant fold uadd/usub intrinsics.
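
        E.g. llvm.uadd.with.overflow.i8(255, 1) folds to {0, true}; in
        plain C++ the computed pair is (editor's illustration):

          #include <utility>

          typedef unsigned char u8;

          // The value the folder produces: the wrapped sum plus a flag
          // saying whether the unsigned addition overflowed.
          std::pair<u8, bool> uaddWithOverflow(u8 A, u8 B) {
            u8 Sum = (u8)(A + B);
            return std::make_pair(Sum, Sum < A); // (0, true) for 255 + 1
          }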
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83295 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cc5e4678c9bc6558c66c0e42cc61b14ce68cc984
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 5 05:06:24 2009 +0000
    
        simplify this code a bunch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83294 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit db63a6816974cca1962bce6a84d5baa7b51cdd1e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 5 05:05:57 2009 +0000
    
        add some helper functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83293 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5fd5ce6e5f28b20073b61cfcaa684418bda852ae
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 5 05:00:35 2009 +0000
    
        code simplifications.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83292 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f68c8b637b794fc2a98b75b41077ae21901243cc
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Oct 5 02:51:06 2009 +0000
    
        Move implicit and parallel to a separate codegen-specific section.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83291 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 95ac4ebe1d2951233c5e2dad1e9873258b890f3f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 5 02:47:47 2009 +0000
    
        instcombine shouldn't delete all null checks for mallocs.
        This fixes PR5130.
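
        I.e. a guard of this shape must survive optimization (editor's
        illustration):

          #include <cstddef>
          #include <cstdio>
          #include <cstdlib>

          void demo(size_t N) {
            char *P = (char *)malloc(N);
            if (!P) {                   // must NOT be deleted: malloc
              fprintf(stderr, "OOM\n"); // can fail, and this branch is
              return;                   // observable behavior
            }
            P[0] = 0;
            free(P);
          }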
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83290 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d511acce19f3f29ab933a90e0f1972d31685fd40
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 5 02:35:05 2009 +0000
    
        stop MachineFunctionPass from claiming that it preserves LoopDependence info,
        which causes dependence info to be linked into lli.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83289 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit afb1c45acace0b0f6fe0aaae1cf08ac6ad93091a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 5 02:29:51 2009 +0000
    
        remove llvm-db: it is completely broken and if anyone wants to do a debugger,
        they should not base it on llvm-db (which does not follow almost any "best practices").
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83288 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b2d434792c8ef1d97da45acc4266b873c71b837e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Oct 5 02:12:39 2009 +0000
    
        add some completely unformatted and probably incoherent notes about things
        I saw while reading all the commits between the 2.5 and 2.6 release branches.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83287 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fa289966b6b7ae54a3c17822f5155d0588783342
    Author: Owen Anderson <resistor at mac.com>
    Date:   Sun Oct 4 18:49:55 2009 +0000
    
        Do away with the strange use of BitVectors in SSI, and just use normal sets.  This makes the code much more C++/LLVM-ish.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83286 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cc0aeb29f77c1d84341f2843edf6c0f78642f125
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Sun Oct 4 18:18:39 2009 +0000
    
        Whitespace and formatting.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83285 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 20bf333554b755a1f1633e0ee0517ac6ec7ca5fa
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Sun Oct 4 17:54:36 2009 +0000
    
        Remove trailing whitespace from build output.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83284 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 11a021db1a24a060fbd14a383a0da3cb7ec439eb
    Author: Owen Anderson <resistor at mac.com>
    Date:   Sun Oct 4 17:52:13 2009 +0000
    
        Fix a typo in the comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83283 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de650865a291a022b69a1b2b9e6e554073915e5b
    Author: Owen Anderson <resistor at mac.com>
    Date:   Sun Oct 4 17:47:39 2009 +0000
    
        SSI needs to require DT and DF transitively, since it uses them outside of its runOnFunction.
        Similarly, it can be marked setPreservesAll, since it does no work in its runOnFunction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83282 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 85cf4cfee00f555cc782773fe9c1241b038a24cd
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Sun Oct 4 06:24:57 2009 +0000
    
        CMake: remove .so file extension from library names when building
        dependency info.
    
        Patch by Peter Collingbourne!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83275 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9c46ca5cfeec9bc6fe2d8ddc14dfefa66151189c
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 4 06:13:54 2009 +0000
    
        Allow -inline-threshold to override the default threshold even if compiling to optimize for size.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83274 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 763375a457d612e0300d71f78850f9bcf6e14650
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Oct 4 05:25:42 2009 +0000
    
        Rename enum NumOccurrences to NumOccurrencesFlag since there is a member named NumOccurrences.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83273 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ec15365c18c2cd9a5f73ec27e356dcbcfb7212f
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sat Oct 3 19:30:43 2009 +0000
    
        Require element types in a constant initializer to match the element types
        of the constant.  This reverts r6544 and r7428.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83270 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8af7b535e3d97a9edb29b0ce3e6c4a4529b081a8
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Sat Oct 3 04:44:16 2009 +0000
    
        Add a comment to describe letters used in multiclass name suffixes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83257 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 64c60918f15571fbb4dabdfa074c2bcdc0b9296d
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Sat Oct 3 04:41:21 2009 +0000
    
        Fix encoding problem for VMLS instruction.
        Thanks to Johnny Chen for pointing this out!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83256 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ead9d35fa9885422f4d1168f9261c63e194b21ff
    Author: Lang Hames <lhames at gmail.com>
    Date:   Sat Oct 3 04:31:31 2009 +0000
    
        Oops. Renamed remaining MachineInstrIndex references.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83255 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0a6f4525040fd90996d1391cef8b24284b817ed3
    Author: Lang Hames <lhames at gmail.com>
    Date:   Sat Oct 3 04:21:37 2009 +0000
    
        Renamed MachineInstrIndex to LiveIndex.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83254 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c4ab6051f82185a6cbb90a9269924510c4ee1064
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Fri Oct 2 19:52:33 2009 +0000
    
        Try to fix unit test linking on linux ...
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83252 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 56656b73ef40540b43ebb537846f1ab09b7bcb3b
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Fri Oct 2 19:36:31 2009 +0000
    
        MingW build fixes
    
        - MingW needs -lpsapi (in ${LIBS}) linked after -lLLVMSystem.
          Noticed by Ronald Pijnacker!
    
        - Some parts of the System library must be built with exceptions on Windows.
          Based on a patch by Jay Foad!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83251 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e3901ce4809ad26b8173855549cc5adbe7b29b56
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Fri Oct 2 15:59:52 2009 +0000
    
        Fix a use-after-free in post-ra-scheduling.
    
        MI->addOperand invalidates references to its operands; avoid touching
        an operand after a new one has been added.
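
        The same hazard in miniature, with std::vector standing in for the
        operand list (editor's illustration):

          #include <vector>

          void hazard() {
            std::vector<int> Ops(3, 7);
            int &First = Ops[0]; // reference into the container
            Ops.push_back(4);    // may reallocate: First now dangles
            // Re-read Ops[0] instead of using First after the append.
            int Safe = Ops[0];
            (void)Safe;
          }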
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83249 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 30aa6a6b4bc96797d71e8c3cd542973a198124b5
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Fri Oct 2 09:30:03 2009 +0000
    
        Fix make rule when objdir is inside srcdir.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83243 91177308-0d34-0410-b5e6-96231b3b80d8

diff --git a/libclamav/c++/llvm/CMakeLists.txt b/libclamav/c++/llvm/CMakeLists.txt
index c9c5272..39ccfbb 100644
--- a/libclamav/c++/llvm/CMakeLists.txt
+++ b/libclamav/c++/llvm/CMakeLists.txt
@@ -70,6 +70,9 @@ else( MSVC )
     CACHE STRING "Semicolon-separated list of targets to build, or \"all\".")
 endif( MSVC )
 
+set(C_INCLUDE_DIRS "" CACHE STRING
+  "Colon separated list of directories clang will search for headers.")
+
 set(LLVM_TARGET_ARCH "host"
   CACHE STRING "Set target to use for LLVM JIT or use \"host\" for automatic detection.")
 
@@ -164,13 +167,19 @@ option(LLVM_ENABLE_PIC "Build Position-Independent Code" ON)
 
 set(ENABLE_PIC 0)
 if( LLVM_ENABLE_PIC )
-  if( SUPPORTS_FPIC_FLAG )
-    message(STATUS "Building with -fPIC")
-    add_llvm_definitions(-fPIC)
-    set(ENABLE_PIC 1)
- else( SUPPORTS_FPIC_FLAG )
-    message(STATUS "Warning: -fPIC not supported.")
-  endif()
+ if( XCODE )
+   # Xcode has -mdynamic-no-pic on by default, which overrides -fPIC. I don't
+   # know how to disable this, so just force ENABLE_PIC off for now.
+   message(STATUS "Warning: -fPIC not supported with Xcode.")
+ else( XCODE )
+   if( SUPPORTS_FPIC_FLAG )
+      message(STATUS "Building with -fPIC")
+      add_llvm_definitions(-fPIC)
+      set(ENABLE_PIC 1)
+   else( SUPPORTS_FPIC_FLAG )
+      message(STATUS "Warning: -fPIC not supported.")
+   endif()
+ endif()
 endif()
 
 set( CMAKE_RUNTIME_OUTPUT_DIRECTORY ${LLVM_TOOLS_BINARY_DIR} )
@@ -221,6 +230,10 @@ endif( MSVC )
 
 include_directories( ${LLVM_BINARY_DIR}/include ${LLVM_MAIN_INCLUDE_DIR})
 
+if( ${CMAKE_SYSTEM_NAME} MATCHES SunOS )
+   SET(CMAKE_CXX_FLAGS ${CMAKE_CXX_FLAGS} "-include llvm/System/Solaris.h")
+endif( ${CMAKE_SYSTEM_NAME} MATCHES SunOS )
+
 include(AddLLVM)
 include(TableGen)
 
@@ -267,6 +280,7 @@ add_subdirectory(utils/not)
 
 set(LLVM_ENUM_ASM_PRINTERS "")
 set(LLVM_ENUM_ASM_PARSERS "")
+set(LLVM_ENUM_DISASSEMBLERS "")
 foreach(t ${LLVM_TARGETS_TO_BUILD})
   message(STATUS "Targeting ${t}")
   add_subdirectory(lib/Target/${t})
@@ -281,6 +295,11 @@ foreach(t ${LLVM_TARGETS_TO_BUILD})
     set(LLVM_ENUM_ASM_PARSERS 
       "${LLVM_ENUM_ASM_PARSERS}LLVM_ASM_PARSER(${t})\n")
   endif( EXISTS ${LLVM_MAIN_SRC_DIR}/lib/Target/${t}/AsmParser/CMakeLists.txt )
+  if( EXISTS ${LLVM_MAIN_SRC_DIR}/lib/Target/${t}/Disassembler/CMakeLists.txt )
+    add_subdirectory(lib/Target/${t}/Disassembler)
+    set(LLVM_ENUM_DISASSEMBLERS
+      "${LLVM_ENUM_DISASSEMBLERS}LLVM_DISASSEMBLER(${t})\n")
+  endif( EXISTS ${LLVM_MAIN_SRC_DIR}/lib/Target/${t}/Disassembler/CMakeLists.txt )
   set(CURRENT_LLVM_TARGET)
 endforeach(t)
 
@@ -296,36 +315,47 @@ configure_file(
   ${LLVM_BINARY_DIR}/include/llvm/Config/AsmParsers.def
   )
 
+# Produce llvm/Config/Disassemblers.def
+configure_file(
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Config/Disassemblers.def.in
+  ${LLVM_BINARY_DIR}/include/llvm/Config/Disassemblers.def
+  )
+
 add_subdirectory(lib/ExecutionEngine)
 add_subdirectory(lib/ExecutionEngine/Interpreter)
 add_subdirectory(lib/ExecutionEngine/JIT)
 add_subdirectory(lib/Target)
 add_subdirectory(lib/AsmParser)
-add_subdirectory(lib/Debugger)
 add_subdirectory(lib/Archive)
 
 add_subdirectory(projects)
 
 option(LLVM_BUILD_TOOLS "Build LLVM tool programs." ON)
-if(LLVM_BUILD_TOOLS)
-  add_subdirectory(tools)
-endif()
-
-option(LLVM_BUILD_EXAMPLES "Build LLVM example programs." ON)
-if(LLVM_BUILD_EXAMPLES)
-  add_subdirectory(examples)
-endif ()
-
-install(DIRECTORY include
-  DESTINATION .
+add_subdirectory(tools)
+
+option(LLVM_BUILD_EXAMPLES "Build LLVM example programs." OFF)
+add_subdirectory(examples)
+
+install(DIRECTORY include/
+  DESTINATION include
+  FILES_MATCHING
+  PATTERN "*.def"
+  PATTERN "*.h"
+  PATTERN "*.td"
+  PATTERN "*.inc"
   PATTERN ".svn" EXCLUDE
-  PATTERN "*.cmake" EXCLUDE
-  PATTERN "*.in" EXCLUDE
-  PATTERN "*.tmp" EXCLUDE
   )
 
-install(DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/include
-  DESTINATION .
+install(DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/include/
+  DESTINATION include
+  FILES_MATCHING
+  PATTERN "*.def"
+  PATTERN "*.h"
+  PATTERN "*.gen"
+  PATTERN "*.inc"
+  # Exclude include/llvm/CMakeFiles/intrinsics_gen.dir, matched by "*.def"
+  PATTERN "CMakeFiles" EXCLUDE
+  PATTERN ".svn" EXCLUDE
   )
 
 # TODO: make and install documentation.
diff --git a/libclamav/c++/llvm/Makefile b/libclamav/c++/llvm/Makefile
index bd8552b..1ef89e4 100644
--- a/libclamav/c++/llvm/Makefile
+++ b/libclamav/c++/llvm/Makefile
@@ -19,13 +19,24 @@ LEVEL := .
 #
 # When cross-compiling, there are some things (tablegen) that need to
 # be build for the build system first.
+
+# If "RC_ProjectName" exists in the environment, and its value is
+# "llvmCore", then this is an "Apple-style" build; search for
+# "Apple-style" in the comments for more info.  Anything else is a
+# normal build.
+ifneq ($(findstring llvmCore, $(RC_ProjectName)),llvmCore)  # Normal build (not "Apple-style").
+
 ifeq ($(BUILD_DIRS_ONLY),1)
   DIRS := lib/System lib/Support utils
   OPTIONAL_DIRS :=
 else
   DIRS := lib/System lib/Support utils lib/VMCore lib tools/llvm-config \
           tools runtime docs unittests
-  OPTIONAL_DIRS := examples projects bindings
+  OPTIONAL_DIRS := projects bindings
+endif
+
+ifeq ($(BUILD_EXAMPLES),1)
+  OPTIONAL_DIRS += examples
 endif
 
 EXTRA_DIST := test unittests llvm.spec include win32 Xcode
@@ -88,6 +99,8 @@ cross-compile-build-tools:
 	$(Verb) if [ ! -f BuildTools/Makefile ]; then \
           $(MKDIR) BuildTools; \
 	  cd BuildTools ; \
+	  unset CFLAGS ; \
+	  unset CXXFLAGS ; \
 	  $(PROJ_SRC_DIR)/configure --build=$(BUILD_TRIPLE) \
 		--host=$(BUILD_TRIPLE) --target=$(BUILD_TRIPLE); \
 	  cd .. ; \
@@ -127,8 +140,7 @@ dist-hook::
 	$(Echo) Eliminating files constructed by configure
 	$(Verb) $(RM) -f \
 	  $(TopDistDir)/include/llvm/Config/config.h  \
-	  $(TopDistDir)/include/llvm/Support/DataTypes.h  \
-	  $(TopDistDir)/include/llvm/Support/ThreadSupport.h
+	  $(TopDistDir)/include/llvm/System/DataTypes.h
 
 clang-only: all
 tools-only: all
@@ -143,8 +155,11 @@ install-libs: install
 FilesToConfig := \
   include/llvm/Config/config.h \
   include/llvm/Config/Targets.def \
-	include/llvm/Config/AsmPrinters.def \
-  include/llvm/Support/DataTypes.h
+  include/llvm/Config/AsmPrinters.def \
+  include/llvm/Config/AsmParsers.def \
+  include/llvm/Config/Disassemblers.def \
+  include/llvm/System/DataTypes.h \
+  tools/llvmc/plugins/Base/Base.td
 FilesToConfigPATH  := $(addprefix $(LLVM_OBJ_ROOT)/,$(FilesToConfig))
 
 all-local:: $(FilesToConfigPATH)
@@ -210,3 +225,9 @@ happiness: update all check unittests
 
 .NOTPARALLEL:
 
+else # Building "Apple-style."
+# In an Apple-style build, once configuration is done, lines marked
+# "Apple-style" are removed with sed!  Please don't remove these!
+# Look for the string "Apple-style" in utils/buildit/build_llvm.
+include $(shell find . -name GNUmakefile) # Building "Apple-style."
+endif # Building "Apple-style."
diff --git a/libclamav/c++/llvm/Makefile.config.in b/libclamav/c++/llvm/Makefile.config.in
index fc84c0b..44296a4 100644
--- a/libclamav/c++/llvm/Makefile.config.in
+++ b/libclamav/c++/llvm/Makefile.config.in
@@ -250,6 +250,9 @@ RDYNAMIC := @RDYNAMIC@
 #DEBUG_SYMBOLS = 1
 @DEBUG_SYMBOLS@
 
+# The compiler flags to use for optimized builds.
+OPTIMIZE_OPTION := @OPTIMIZE_OPTION@
+
 # When ENABLE_PROFILING is enabled, the llvm source base is built with profile
 # information to allow gprof to be used to get execution frequencies.
 #ENABLE_PROFILING = 1
@@ -310,6 +313,12 @@ endif
 # Location of the plugin header file for gold.
 BINUTILS_INCDIR := @BINUTILS_INCDIR@
 
+C_INCLUDE_DIRS := @C_INCLUDE_DIRS@
+CXX_INCLUDE_ROOT := @CXX_INCLUDE_ROOT@
+CXX_INCLUDE_ARCH := @CXX_INCLUDE_ARCH@
+CXX_INCLUDE_32BIT_DIR = @CXX_INCLUDE_32BIT_DIR@
+CXX_INCLUDE_64BIT_DIR = @CXX_INCLUDE_64BIT_DIR@
+
 # When ENABLE_LLVMC_DYNAMIC is enabled, LLVMC will link libCompilerDriver
 # dynamically. This is needed to make dynamic plugins work on some targets
 # (Windows).
@@ -320,3 +329,9 @@ ENABLE_LLVMC_DYNAMIC = 0
 # support (via the -load option).
 ENABLE_LLVMC_DYNAMIC_PLUGINS = 1
 #@ENABLE_LLVMC_DYNAMIC_PLUGINS@
+
+# Optional flags supported by the compiler
+# -Wno-missing-field-initializers
+NO_MISSING_FIELD_INITIALIZERS = @NO_MISSING_FIELD_INITIALIZERS@
+# -Wno-variadic-macros
+NO_VARIADIC_MACROS = @NO_VARIADIC_MACROS@
diff --git a/libclamav/c++/llvm/Makefile.rules b/libclamav/c++/llvm/Makefile.rules
index 264bab8..49ecb1e 100644
--- a/libclamav/c++/llvm/Makefile.rules
+++ b/libclamav/c++/llvm/Makefile.rules
@@ -246,6 +246,12 @@ LLVMC_BUILTIN_PLUGIN_2 = $(word 2, $(LLVMC_BUILTIN_PLUGINS))
 LLVMC_BUILTIN_PLUGIN_3 = $(word 3, $(LLVMC_BUILTIN_PLUGINS))
 LLVMC_BUILTIN_PLUGIN_4 = $(word 4, $(LLVMC_BUILTIN_PLUGINS))
 LLVMC_BUILTIN_PLUGIN_5 = $(word 5, $(LLVMC_BUILTIN_PLUGINS))
+LLVMC_BUILTIN_PLUGIN_6 = $(word 6, $(LLVMC_BUILTIN_PLUGINS))
+LLVMC_BUILTIN_PLUGIN_7 = $(word 7, $(LLVMC_BUILTIN_PLUGINS))
+LLVMC_BUILTIN_PLUGIN_8 = $(word 8, $(LLVMC_BUILTIN_PLUGINS))
+LLVMC_BUILTIN_PLUGIN_9 = $(word 9, $(LLVMC_BUILTIN_PLUGINS))
+LLVMC_BUILTIN_PLUGIN_10 = $(word 10, $(LLVMC_BUILTIN_PLUGINS))
+
 
 ifneq ($(LLVMC_BUILTIN_PLUGIN_1),)
 CPP.Flags += -DLLVMC_BUILTIN_PLUGIN_1=$(LLVMC_BUILTIN_PLUGIN_1)
@@ -267,6 +273,27 @@ ifneq ($(LLVMC_BUILTIN_PLUGIN_5),)
 CPP.Flags += -DLLVMC_BUILTIN_PLUGIN_5=$(LLVMC_BUILTIN_PLUGIN_5)
 endif
 
+ifneq ($(LLVMC_BUILTIN_PLUGIN_6),)
+CPP.Flags += -DLLVMC_BUILTIN_PLUGIN_6=$(LLVMC_BUILTIN_PLUGIN_6)
+endif
+
+ifneq ($(LLVMC_BUILTIN_PLUGIN_7),)
+CPP.Flags += -DLLVMC_BUILTIN_PLUGIN_7=$(LLVMC_BUILTIN_PLUGIN_7)
+endif
+
+ifneq ($(LLVMC_BUILTIN_PLUGIN_8),)
+CPP.Flags += -DLLVMC_BUILTIN_PLUGIN_8=$(LLVMC_BUILTIN_PLUGIN_8)
+endif
+
+ifneq ($(LLVMC_BUILTIN_PLUGIN_9),)
+CPP.Flags += -DLLVMC_BUILTIN_PLUGIN_9=$(LLVMC_BUILTIN_PLUGIN_9)
+endif
+
+ifneq ($(LLVMC_BUILTIN_PLUGIN_10),)
+CPP.Flags += -DLLVMC_BUILTIN_PLUGIN_10=$(LLVMC_BUILTIN_PLUGIN_10)
+endif
+
+
 endif
 
 endif # LLVMC_BASED_DRIVER
@@ -285,16 +312,6 @@ endif
 #--------------------------------------------------------------------
 
 CPP.Defines :=
-# OPTIMIZE_OPTION - The optimization level option we want to build LLVM with
-# this can be overridden on the make command line.
-ifndef OPTIMIZE_OPTION
-  ifneq ($(HOST_OS),MingW)
-    OPTIMIZE_OPTION := -O3
-  else
-    OPTIMIZE_OPTION := -O2
-  endif
-endif
-
 ifeq ($(ENABLE_OPTIMIZED),1)
   BuildMode := Release
   # Don't use -fomit-frame-pointer on Darwin or FreeBSD.
@@ -321,11 +338,19 @@ ifeq ($(ENABLE_OPTIMIZED),1)
     KEEP_SYMBOLS := 1
   endif
 else
-  BuildMode := Debug
-  CXX.Flags += -g
-  C.Flags   += -g
-  LD.Flags  += -g
-  KEEP_SYMBOLS := 1
+  ifdef NO_DEBUG_SYMBOLS
+    BuildMode := Unoptimized
+    CXX.Flags +=
+    C.Flags   +=
+    LD.Flags  +=
+    KEEP_SYMBOLS := 1
+  else
+    BuildMode := Debug
+    CXX.Flags += -g
+    C.Flags   += -g
+    LD.Flags  += -g
+    KEEP_SYMBOLS := 1
+  endif
 endif
 
 ifeq ($(ENABLE_PROFILING),1)
@@ -539,6 +564,8 @@ endif
 ifeq ($(TARGET_OS),Darwin)
   ifneq ($(ARCH),ARM)
     TargetCommonOpts += -mmacosx-version-min=$(DARWIN_VERSION)
+  else
+    TargetCommonOpts += -marm
   endif
 endif
 
@@ -644,6 +671,10 @@ ifeq ($(HOST_OS),SunOS)
 CPP.BaseFlags += -include llvm/System/Solaris.h
 endif
 
+ifeq ($(HOST_OS),AuroraUX)
+CPP.BaseFlags += -include llvm/System/Solaris.h
+endif # !HOST_OS - AuroraUX.
+
 LD.Flags      += -L$(LibDir) -L$(LLVMLibDir)
 CPP.BaseFlags += -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS
 # All -I flags should go here, so that they don't confuse llvm-config.
@@ -705,6 +736,8 @@ else
 Ranlib        = ranlib
 endif
 
+AliasTool     = ln -s
+
 #----------------------------------------------------------
 # Get the list of source files and compute object file
 # names from them.
@@ -1184,10 +1217,20 @@ ifdef TOOLNAME
 #---------------------------------------------------------
 # Set up variables for building a tool.
 #---------------------------------------------------------
+TOOLEXENAME := $(strip $(TOOLNAME))$(EXEEXT)
+ifdef EXAMPLE_TOOL
+ToolBuildPath   := $(ExmplDir)/$(TOOLEXENAME)
+else
+ToolBuildPath   := $(ToolDir)/$(TOOLEXENAME)
+endif
+
+# TOOLALIAS is a name to symlink (or copy) the tool to.
+ifdef TOOLALIAS
 ifdef EXAMPLE_TOOL
-ToolBuildPath   := $(ExmplDir)/$(strip $(TOOLNAME))$(EXEEXT)
+ToolAliasBuildPath   := $(ExmplDir)/$(strip $(TOOLALIAS))$(EXEEXT)
 else
-ToolBuildPath   := $(ToolDir)/$(strip $(TOOLNAME))$(EXEEXT)
+ToolAliasBuildPath   := $(ToolDir)/$(strip $(TOOLALIAS))$(EXEEXT)
+endif
 endif
 
 #---------------------------------------------------------
@@ -1207,7 +1250,7 @@ endif
 endif
 
 ifeq ($(HOST_OS), $(filter $(HOST_OS), Linux NetBSD FreeBSD))
-LD.Flags += -Wl,--version-script=$(LLVM_SRC_ROOT)/autoconf/ExportMap.map
+  LD.Flags += -Wl,--version-script=$(LLVM_SRC_ROOT)/autoconf/ExportMap.map
 endif
 endif
 
@@ -1215,12 +1258,15 @@ endif
 #---------------------------------------------------------
 # Provide targets for building the tools
 #---------------------------------------------------------
-all-local:: $(ToolBuildPath)
+all-local:: $(ToolBuildPath) $(ToolAliasBuildPath)
 
 clean-local::
 ifneq ($(strip $(ToolBuildPath)),)
 	-$(Verb) $(RM) -f $(ToolBuildPath)
 endif
+ifneq ($(strip $(ToolAliasBuildPath)),)
+	-$(Verb) $(RM) -f $(ToolAliasBuildPath)
+endif
 
 ifdef EXAMPLE_TOOL
 $(ToolBuildPath): $(ExmplDir)/.dir
@@ -1235,13 +1281,22 @@ $(ToolBuildPath): $(ObjectsO) $(ProjLibsPaths) $(LLVMLibsPaths)
 	$(Echo) ======= Finished Linking $(BuildMode) Executable $(TOOLNAME) \
           $(StripWarnMsg)
 
+ifneq ($(strip $(ToolAliasBuildPath)),)
+$(ToolAliasBuildPath): $(ToolBuildPath)
+	$(Echo) Creating $(BuildMode) Alias $(TOOLALIAS) $(StripWarnMsg)
+	$(Verb) $(RM) -f $(ToolAliasBuildPath)
+	$(Verb) $(AliasTool) $(TOOLEXENAME) $(ToolAliasBuildPath)
+	$(Echo) ======= Finished Creating $(BuildMode) Alias $(TOOLNAME) \
+          $(StripWarnMsg)
+endif
+
 ifdef NO_INSTALL
 install-local::
 	$(Echo) Install circumvented with NO_INSTALL
 uninstall-local::
 	$(Echo) Uninstall circumvented with NO_INSTALL
 else
-DestTool = $(PROJ_bindir)/$(TOOLNAME)$(EXEEXT)
+DestTool = $(PROJ_bindir)/$(TOOLEXENAME)
 
 install-local:: $(DestTool)
 
@@ -1252,6 +1307,23 @@ $(DestTool): $(ToolBuildPath) $(PROJ_bindir)
 uninstall-local::
 	$(Echo) Uninstalling $(BuildMode) $(DestTool)
 	-$(Verb) $(RM) -f $(DestTool)
+
+# TOOLALIAS install.
+ifdef TOOLALIAS
+DestToolAlias = $(PROJ_bindir)/$(TOOLALIAS)$(EXEEXT)
+
+install-local:: $(DestToolAlias)
+
+$(DestToolAlias): $(DestTool) $(PROJ_bindir)
+	$(Echo) Installing $(BuildMode) $(DestToolAlias)
+	$(Verb) $(RM) -f $(DestToolAlias)
+	$(Verb) $(AliasTool) $(TOOLEXENAME) $(DestToolAlias)
+
+uninstall-local::
+	$(Echo) Uninstalling $(BuildMode) $(DestToolAlias)
+	-$(Verb) $(RM) -f $(DestToolAlias)
+endif
+
 endif
 endif
 
@@ -1280,7 +1352,7 @@ DEPEND_MOVEFILE = then $(MV) -f "$(ObjDir)/$*.d.tmp" "$(ObjDir)/$*.d"; \
                   else $(RM) "$(ObjDir)/$*.d.tmp"; exit 1; fi
 
 $(ObjDir)/%.o: %.cpp $(ObjDir)/.dir $(BUILT_SOURCES)
-	$(Echo) "Compiling $*.cpp for $(BuildMode) build " $(PIC_FLAG)
+	$(Echo) "Compiling $*.cpp for $(BuildMode) build" $(PIC_FLAG)
 	$(Verb) if $(Compile.CXX) $(DEPEND_OPTIONS) $< -o $(ObjDir)/$*.o ; \
 	        $(DEPEND_MOVEFILE)
 
@@ -1493,6 +1565,11 @@ $(ObjDir)/%GenDAGISel.inc.tmp : %.td $(ObjDir)/.dir
 	$(Echo) "Building $(<F) DAG instruction selector implementation with tblgen"
 	$(Verb) $(TableGen) -gen-dag-isel -o $(call SYSPATH, $@) $<
 
+$(TARGET:%=$(ObjDir)/%GenDisassemblerTables.inc.tmp): \
+$(ObjDir)/%GenDisassemblerTables.inc.tmp : %.td $(ObjDir)/.dir
+	$(Echo) "Building $(<F) disassembly tables with tblgen"
+	$(Verb) $(TableGen) -gen-disassembler -o $(call SYSPATH, $@) $<
+
 $(TARGET:%=$(ObjDir)/%GenFastISel.inc.tmp): \
 $(ObjDir)/%GenFastISel.inc.tmp : %.td $(ObjDir)/.dir
 	$(Echo) "Building $(<F) \"fast\" instruction selector implementation with tblgen"
@@ -1509,8 +1586,8 @@ $(ObjDir)/%GenCallingConv.inc.tmp : %.td $(ObjDir)/.dir
 	$(Verb) $(TableGen) -gen-callingconv -o $(call SYSPATH, $@) $<
 
 $(TARGET:%=$(ObjDir)/%GenIntrinsics.inc.tmp): \
-$(ObjDir)/%GenIntrinsics.inc.tmp : Intrinsics%.td $(ObjDir)/.dir
-	$(Echo) "Building $(<F) calling convention information with tblgen"
+$(ObjDir)/%GenIntrinsics.inc.tmp : %.td $(ObjDir)/.dir
+	$(Echo) "Building $(<F) intrinsics information with tblgen"
 	$(Verb) $(TableGen) -gen-tgt-intrinsic -o $(call SYSPATH, $@) $<
 
 clean-local::
diff --git a/libclamav/c++/llvm/README.txt b/libclamav/c++/llvm/README.txt
index 34d3766..c78a9ee 100644
--- a/libclamav/c++/llvm/README.txt
+++ b/libclamav/c++/llvm/README.txt
@@ -1,9 +1,9 @@
 Low Level Virtual Machine (LLVM)
 ================================
 
-This directory and its subdirectories contain source code for the Low Level 
+This directory and its subdirectories contain source code for the Low Level
 Virtual Machine, a toolkit for the construction of highly optimized compilers,
-optimizers, and runtime environments. 
+optimizers, and runtime environments.
 
 LLVM is open source software. You may freely distribute it under the terms of
 the license agreement found in LICENSE.txt.
diff --git a/libclamav/c++/llvm/autoconf/config.guess b/libclamav/c++/llvm/autoconf/config.guess
index e792aac..865fe53 100755
--- a/libclamav/c++/llvm/autoconf/config.guess
+++ b/libclamav/c++/llvm/autoconf/config.guess
@@ -333,6 +333,10 @@ case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
     sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*)
 	echo sparc-sun-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
 	exit ;;
+    i86pc:AuroraUX:5.*:* | i86xen:AuroraUX:5.*:*)
+	AUX_ARCH="i386"
+	echo ${AUX_ARCH}-pc-auroraux`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
+	exit ;;
     i86pc:SunOS:5.*:* | i86xen:SunOS:5.*:*)
 	eval $set_cc_for_build
 	SUN_ARCH="i386"
diff --git a/libclamav/c++/llvm/autoconf/config.sub b/libclamav/c++/llvm/autoconf/config.sub
index 8ca084b..183976a 100755
--- a/libclamav/c++/llvm/autoconf/config.sub
+++ b/libclamav/c++/llvm/autoconf/config.sub
@@ -1256,6 +1256,9 @@ case $os in
 	-solaris1 | -solaris1.*)
 		os=`echo $os | sed -e 's|solaris1|sunos4|'`
 		;;
+	-auroraux)
+		os=-auroraux
+		;;
 	-solaris)
 		os=-solaris2
 		;;
@@ -1274,7 +1277,7 @@ case $os in
 	# -sysv* is not here because it comes later, after sysvr4.
 	-gnu* | -bsd* | -mach* | -minix* | -genix* | -ultrix* | -irix* \
 	      | -*vms* | -sco* | -esix* | -isc* | -aix* | -cnk* | -sunos | -sunos[34]*\
-	      | -hpux* | -unos* | -osf* | -luna* | -dgux* | -solaris* | -sym* \
+	      | -hpux* | -unos* | -osf* | -luna* | -dgux* | -auroraux* | -solaris* | -sym* \
 	      | -kopensolaris* \
 	      | -amigaos* | -amigados* | -msdos* | -newsos* | -unicos* | -aof* \
 	      | -aos* | -aros* \
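
With the two hunks above, both GNU triplet scripts recognize AuroraUX.
For example (output is illustrative; the release suffix comes from
`uname -r` on the host):

    $ sh config.guess                 # on an AuroraUX 5.11 host
    i386-pc-auroraux.11
    $ sh config.sub i386-pc-auroraux
    i386-pc-auroraux
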
diff --git a/libclamav/c++/llvm/autoconf/configure.ac b/libclamav/c++/llvm/autoconf/configure.ac
index dee9037..9519698 100644
--- a/libclamav/c++/llvm/autoconf/configure.ac
+++ b/libclamav/c++/llvm/autoconf/configure.ac
@@ -165,6 +165,11 @@ AC_CACHE_CHECK([type of operating system we're going to host on],
     llvm_cv_no_link_all_option="-Wl,-z,defaultextract"
     llvm_cv_os_type="SunOS"
     llvm_cv_platform_type="Unix" ;;
+  *-*-auroraux*)
+    llvm_cv_link_all_option="-Wl,-z,allextract"
+    llvm_cv_link_all_option="-Wl,-z,defaultextract"
+    llvm_cv_os_type="AuroraUX"
+    llvm_cv_platform_type="Unix" ;;
   *-*-win32*)
     llvm_cv_link_all_option="-Wl,--whole-archive"
     llvm_cv_no_link_all_option="-Wl,--no-whole-archive"
@@ -175,6 +180,11 @@ AC_CACHE_CHECK([type of operating system we're going to host on],
     llvm_cv_no_link_all_option="-Wl,--no-whole-archive"
     llvm_cv_os_type="MingW"
     llvm_cv_platform_type="Win32" ;;
+  *-*-haiku*)
+    llvm_cv_link_all_option="-Wl,--whole-archive"
+    llvm_cv_no_link_all_option="-Wl,--no-whole-archive"
+    llvm_cv_os_type="Haiku"
+    llvm_cv_platform_type="Unix" ;;  
   *-unknown-eabi*)
     llvm_cv_link_all_option="-Wl,--whole-archive"
     llvm_cv_no_link_all_option="-Wl,--no-whole-archive"
@@ -219,10 +229,14 @@ AC_CACHE_CHECK([type of operating system we're going to target],
     llvm_cv_target_os_type="Linux" ;;
   *-*-solaris*)
     llvm_cv_target_os_type="SunOS" ;;
+  *-*-auroraux*)
+    llvm_cv_target_os_type="AuroraUX" ;;
   *-*-win32*)
     llvm_cv_target_os_type="Win32" ;;
   *-*-mingw*)
     llvm_cv_target_os_type="MingW" ;;
+  *-*-haiku*)
+    llvm_cv_target_os_type="Haiku" ;;  
   *-unknown-eabi*)
     llvm_cv_target_os_type="Freestanding" ;;
   *)
@@ -519,11 +533,12 @@ for a_target in $TARGETS_TO_BUILD; do
   fi
 done
 
-# Build the LLVM_TARGET and LLVM_ASM_PRINTER macro uses for
-# Targets.def, AsmPrinters.def, and AsmParsers.def.
+# Build the LLVM_TARGET and LLVM_... macros for Targets.def and the individual
+# target feature def files.
 LLVM_ENUM_TARGETS=""
 LLVM_ENUM_ASM_PRINTERS=""
 LLVM_ENUM_ASM_PARSERS=""
+LLVM_ENUM_DISASSEMBLERS=""
 for target_to_build in $TARGETS_TO_BUILD; do
   LLVM_ENUM_TARGETS="LLVM_TARGET($target_to_build) $LLVM_ENUM_TARGETS"
   if test -f ${srcdir}/lib/Target/${target_to_build}/AsmPrinter/Makefile ; then
@@ -532,10 +547,14 @@ for target_to_build in $TARGETS_TO_BUILD; do
   if test -f ${srcdir}/lib/Target/${target_to_build}/AsmParser/Makefile ; then
     LLVM_ENUM_ASM_PARSERS="LLVM_ASM_PARSER($target_to_build) $LLVM_ENUM_ASM_PARSERS";
   fi
+  if test -f ${srcdir}/lib/Target/${target_to_build}/Disassembler/Makefile ; then
+    LLVM_ENUM_DISASSEMBLERS="LLVM_DISASSEMBLER($target_to_build) $LLVM_ENUM_DISASSEMBLERS";
+  fi
 done
 AC_SUBST(LLVM_ENUM_TARGETS)
 AC_SUBST(LLVM_ENUM_ASM_PRINTERS)
 AC_SUBST(LLVM_ENUM_ASM_PARSERS)
+AC_SUBST(LLVM_ENUM_DISASSEMBLERS)
 
 dnl Prevent the CBackend from using printf("%a") for floating point so older
 dnl C compilers that cannot deal with the 0x0p+0 hex floating point format
@@ -593,6 +612,23 @@ if test -n "$LLVMGXX" && test -z "$LLVMGCC"; then
    AC_MSG_ERROR([Invalid llvm-gcc. Use --with-llvmgcc when --with-llvmgxx is used]);
 fi
 
+dnl Override the option to use for optimized builds.
+AC_ARG_WITH(optimize-option,
+  AS_HELP_STRING([--with-optimize-option],
+                 [Select the compiler options to use for optimized builds]),,
+                 withval=default)
+AC_MSG_CHECKING([optimization flags])
+case "$withval" in
+  default)
+    case "$llvm_cv_os_type" in
+    MingW) optimize_option=-O3 ;;
+    *)     optimize_option=-O2 ;;
+    esac ;;
+  *) optimize_option="$withval" ;;
+esac
+AC_SUBST(OPTIMIZE_OPTION,$optimize_option)
+AC_MSG_RESULT([$optimize_option])
+
 dnl Specify extra build options
 AC_ARG_WITH(extra-options,
   AS_HELP_STRING([--with-extra-options],
@@ -636,6 +672,41 @@ case "$withval" in
   *) AC_MSG_ERROR([Invalid path for --with-ocaml-libdir. Provide full path]) ;;
 esac
 
+AC_ARG_WITH(c-include-dir,
+  AS_HELP_STRING([--with-c-include-dirs],
+    [Colon separated list of directories clang will search for headers]),,
+    withval="")
+AC_DEFINE_UNQUOTED(C_INCLUDE_DIRS,"$withval",
+                   [Directories clang will search for headers])
+
+AC_ARG_WITH(cxx-include-root,
+  AS_HELP_STRING([--with-cxx-include-root],
+    [Directory with the libstdc++ headers.]),,
+    withval="")
+AC_DEFINE_UNQUOTED(CXX_INCLUDE_ROOT,"$withval",
+                   [Directory with the libstdc++ headers.])
+
+AC_ARG_WITH(cxx-include-arch,
+  AS_HELP_STRING([--with-cxx-include-arch],
+    [Architecture of the libstdc++ headers.]),,
+    withval="")
+AC_DEFINE_UNQUOTED(CXX_INCLUDE_ARCH,"$withval",
+                   [Architecture of the libstdc++ headers.])
+
+AC_ARG_WITH(cxx-include-32bit-dir,
+  AS_HELP_STRING([--with-cxx-include-32bit-dir],
+    [32 bit multilib directory.]),,
+    withval="")
+AC_DEFINE_UNQUOTED(CXX_INCLUDE_32BIT_DIR,"$withval",
+                   [32 bit multilib directory.])
+
+AC_ARG_WITH(cxx-include-64bit-dir,
+  AS_HELP_STRING([--with-cxx-include-64bit-dir],
+    [64 bit multilib directory.]),,
+    withval="")
+AC_DEFINE_UNQUOTED(CXX_INCLUDE_64BIT_DIR,"$withval",
+                   [64 bit multilib directory.])
+
 dnl Allow linking of LLVM with GPLv3 binutils code.
 AC_ARG_WITH(binutils-include,
   AS_HELP_STRING([--with-binutils-include],
@@ -929,6 +1000,12 @@ fi
 dnl Tool compatibility is okay if we make it here.
 AC_MSG_RESULT([ok])
 
+dnl Check optional compiler flags. 
+AC_MSG_CHECKING([optional compiler flags])
+CXX_FLAG_CHECK(NO_VARIADIC_MACROS, [-Wno-variadic-macros])
+CXX_FLAG_CHECK(NO_MISSING_FIELD_INITIALIZERS, [-Wno-missing-field-initializers])
+AC_MSG_RESULT([$NO_VARIADIC_MACROS $NO_MISSING_FIELD_INITIALIZERS])
+
 dnl===-----------------------------------------------------------------------===
 dnl===
 dnl=== SECTION 5: Check for libraries
@@ -960,7 +1037,7 @@ AC_SEARCH_LIBS(mallinfo,malloc,AC_DEFINE([HAVE_MALLINFO],[1],
 dnl pthread locking functions are optional - but llvm will not be thread-safe
 dnl without locks.
 if test "$ENABLE_THREADS" -eq 1 ; then
-  AC_CHECK_LIB(pthread,pthread_mutex_init)
+  AC_CHECK_LIB(pthread, pthread_mutex_init)
   AC_SEARCH_LIBS(pthread_mutex_lock,pthread,
                  AC_DEFINE([HAVE_PTHREAD_MUTEX_LOCK],[1],
                            [Have pthread_mutex_lock]))
@@ -999,31 +1076,30 @@ AC_ARG_WITH(oprofile,
       AC_SUBST(USE_OPROFILE, [1])
       case "$withval" in
         /usr|yes) llvm_cv_oppath=/usr/lib/oprofile ;;
+        no) llvm_cv_oppath=
+            AC_SUBST(USE_OPROFILE, [0]) ;;
         *) llvm_cv_oppath="${withval}/lib/oprofile"
            CPPFLAGS="-I${withval}/include";;
       esac
-      LIBS="$LIBS -L${llvm_cv_oppath} -Wl,-rpath,${llvm_cv_oppath}"
-      AC_SEARCH_LIBS(op_open_agent, opagent, [], [
-        echo "Error! You need to have libopagent around."
-        exit -1
-      ])
-      AC_CHECK_HEADER([opagent.h], [], [
-        echo "Error! You need to have opagent.h around."
-        exit -1
-      ])
+      if test -n "$llvm_cv_oppath" ; then
+        LIBS="$LIBS -L${llvm_cv_oppath} -Wl,-rpath,${llvm_cv_oppath}"
+        dnl Work around http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=537744:
+        dnl libbfd is not included properly in libopagent in some Debian
+        dnl versions.  If libbfd isn't found at all, we assume opagent works
+        dnl anyway.
+        AC_SEARCH_LIBS(bfd_init, bfd, [], [])
+        AC_SEARCH_LIBS(op_open_agent, opagent, [], [
+          echo "Error! You need to have libopagent around."
+          exit -1
+        ])
+        AC_CHECK_HEADER([opagent.h], [], [
+          echo "Error! You need to have opagent.h around."
+          exit -1
+          ])
+      fi
     ],
     [
-      llvm_cv_old_LIBS="$LIBS"
-      LIBS="$LIBS -L/usr/lib/oprofile -Wl,-rpath,/usr/lib/oprofile"
-      dnl If either the library or header aren't present, omit oprofile support.
-      AC_SEARCH_LIBS(op_open_agent, opagent,
-                     [AC_SUBST(USE_OPROFILE, [1])],
-                     [LIBS="$llvm_cv_old_LIBS"
-                      AC_SUBST(USE_OPROFILE, [0])])
-      AC_CHECK_HEADER([opagent.h], [], [
-        LIBS="$llvm_cv_old_LIBS"
-        AC_SUBST(USE_OPROFILE, [0])
-      ])
+      AC_SUBST(USE_OPROFILE, [0])
     ])
 AC_DEFINE_UNQUOTED([USE_OPROFILE],$USE_OPROFILE,
                    [Define if we have the oprofile JIT-support library])
@@ -1336,7 +1412,8 @@ AC_CONFIG_HEADERS([include/llvm/Config/config.h])
 AC_CONFIG_FILES([include/llvm/Config/Targets.def])
 AC_CONFIG_FILES([include/llvm/Config/AsmPrinters.def])
 AC_CONFIG_FILES([include/llvm/Config/AsmParsers.def])
-AC_CONFIG_HEADERS([include/llvm/Support/DataTypes.h])
+AC_CONFIG_FILES([include/llvm/Config/Disassemblers.def])
+AC_CONFIG_HEADERS([include/llvm/System/DataTypes.h])
 
 dnl Configure the makefile's configuration data
 AC_CONFIG_FILES([Makefile.config])
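
For users, the configure.ac changes amount to a new set of ./configure
switches.  Note one wrinkle: the help text advertises
--with-c-include-dirs, but the AC_ARG_WITH name is c-include-dir, so the
switch configure actually parses is the singular form.  A representative
invocation (all paths purely illustrative) might be:

    ./configure --with-optimize-option=-O3 \
                --with-c-include-dir=/usr/local/include \
                --with-cxx-include-root=/usr/include/c++/4.4 \
                --with-cxx-include-arch=i486-linux-gnu
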
diff --git a/libclamav/c++/llvm/autoconf/m4/cxx_flag_check.m4 b/libclamav/c++/llvm/autoconf/m4/cxx_flag_check.m4
new file mode 100644
index 0000000..ab09f2a
--- /dev/null
+++ b/libclamav/c++/llvm/autoconf/m4/cxx_flag_check.m4
@@ -0,0 +1,2 @@
+AC_DEFUN([CXX_FLAG_CHECK],
+  [AC_SUBST($1, `$CXX $2 -fsyntax-only -xc /dev/null 2>/dev/null && echo $2`)])
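
The macro boils down to a one-line probe: the flag is substituted into
the named output variable only if the compiler accepts it on an empty
translation unit.  Expanded by hand for one of the calls it serves:

    # NO_VARIADIC_MACROS becomes "-Wno-variadic-macros" or stays empty:
    NO_VARIADIC_MACROS=`$CXX -Wno-variadic-macros -fsyntax-only -xc /dev/null 2>/dev/null && echo -Wno-variadic-macros`
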
diff --git a/libclamav/c++/llvm/cmake/config-ix.cmake b/libclamav/c++/llvm/cmake/config-ix.cmake
index 17feffe..3a2b91c 100755
--- a/libclamav/c++/llvm/cmake/config-ix.cmake
+++ b/libclamav/c++/llvm/cmake/config-ix.cmake
@@ -4,6 +4,11 @@ include(CheckSymbolExists)
 include(CheckFunctionExists)
 include(CheckCXXSourceCompiles)
 
+if( UNIX )
+  # Used by check_symbol_exists:
+  set(CMAKE_REQUIRED_LIBRARIES m)
+endif()
+
 # Helper macros and functions
 macro(add_cxx_include result files)
   set(${result} "")
@@ -39,7 +44,9 @@ check_include_file(malloc.h HAVE_MALLOC_H)
 check_include_file(malloc/malloc.h HAVE_MALLOC_MALLOC_H)
 check_include_file(memory.h HAVE_MEMORY_H)
 check_include_file(ndir.h HAVE_NDIR_H)
-check_include_file(pthread.h HAVE_PTHREAD_H)
+if( NOT LLVM_ON_WIN32 )
+  check_include_file(pthread.h HAVE_PTHREAD_H)
+endif()
 check_include_file(setjmp.h HAVE_SETJMP_H)
 check_include_file(signal.h HAVE_SIGNAL_H)
 check_include_file(stdint.h HAVE_STDINT_H)
@@ -63,10 +70,12 @@ check_include_file(utime.h HAVE_UTIME_H)
 check_include_file(windows.h HAVE_WINDOWS_H)
 
 # library checks
-check_library_exists(pthread pthread_create "" HAVE_LIBPTHREAD)
-check_library_exists(pthread pthread_getspecific "" HAVE_PTHREAD_GETSPECIFIC)
-check_library_exists(pthread pthread_rwlock_init "" HAVE_PTHREAD_RWLOCK_INIT)
-check_library_exists(dl dlopen "" HAVE_LIBDL)
+if( NOT LLVM_ON_WIN32 )
+  check_library_exists(pthread pthread_create "" HAVE_LIBPTHREAD)
+  check_library_exists(pthread pthread_getspecific "" HAVE_PTHREAD_GETSPECIFIC)
+  check_library_exists(pthread pthread_rwlock_init "" HAVE_PTHREAD_RWLOCK_INIT)
+  check_library_exists(dl dlopen "" HAVE_LIBDL)
+endif()
 
 # function checks
 check_symbol_exists(getpagesize unistd.h HAVE_GETPAGESIZE)
@@ -75,17 +84,22 @@ check_symbol_exists(setrlimit sys/resource.h HAVE_SETRLIMIT)
 check_function_exists(isatty HAVE_ISATTY)
 check_symbol_exists(isinf cmath HAVE_ISINF_IN_CMATH)
 check_symbol_exists(isinf math.h HAVE_ISINF_IN_MATH_H)
+check_symbol_exists(finite ieeefp.h HAVE_FINITE_IN_IEEEFP_H)
 check_symbol_exists(isnan cmath HAVE_ISNAN_IN_CMATH)
 check_symbol_exists(isnan math.h HAVE_ISNAN_IN_MATH_H)
 check_symbol_exists(ceilf math.h HAVE_CEILF)
 check_symbol_exists(floorf math.h HAVE_FLOORF)
+check_symbol_exists(nearbyintf math.h HAVE_NEARBYINTF)
 check_symbol_exists(mallinfo malloc.h HAVE_MALLINFO)
 check_symbol_exists(malloc_zone_statistics malloc/malloc.h
                     HAVE_MALLOC_ZONE_STATISTICS)
-check_symbol_exists(mkdtemp unistd.h HAVE_MKDTEMP)
-check_symbol_exists(mkstemp unistd.h HAVE_MKSTEMP)
-check_symbol_exists(mktemp unistd.h HAVE_MKTEMP)
-check_symbol_exists(pthread_mutex_lock pthread.h HAVE_PTHREAD_MUTEX_LOCK)
+check_symbol_exists(mkdtemp "stdlib.h;unistd.h" HAVE_MKDTEMP)
+check_symbol_exists(mkstemp "stdlib.h;unistd.h" HAVE_MKSTEMP)
+check_symbol_exists(mktemp "stdlib.h;unistd.h" HAVE_MKTEMP)
+if( NOT LLVM_ON_WIN32 )
+  check_symbol_exists(pthread_mutex_lock pthread.h HAVE_PTHREAD_MUTEX_LOCK)
+endif()
+check_symbol_exists(sbrk unistd.h HAVE_SBRK)
 check_symbol_exists(strtoll stdlib.h HAVE_STRTOLL)
 check_symbol_exists(strerror string.h HAVE_STRERROR)
 check_symbol_exists(strerror_r string.h HAVE_STRERROR_R)
@@ -118,6 +132,27 @@ endif()
 check_type_exists(uint64_t "${headers}" HAVE_UINT64_T)
 check_type_exists(u_int64_t "${headers}" HAVE_U_INT64_T)
 
+# available programs checks
+function(llvm_find_program name)
+  string(TOUPPER ${name} NAME)
+  find_program(LLVM_PATH_${NAME} ${name})
+  mark_as_advanced(LLVM_PATH_${NAME})
+  if(LLVM_PATH_${NAME})
+    set(HAVE_${NAME} 1 CACHE INTERNAL "Is ${name} available?")
+    mark_as_advanced(HAVE_${NAME})
+  else(LLVM_PATH_${NAME})
+    set(HAVE_${NAME} "" CACHE INTERNAL "Is ${name} available?")
+  endif(LLVM_PATH_${NAME})
+endfunction()
+
+llvm_find_program(gv)
+llvm_find_program(circo)
+llvm_find_program(twopi)
+llvm_find_program(neato)
+llvm_find_program(fdp)
+llvm_find_program(dot)
+llvm_find_program(dotty)
+
 # Define LLVM_MULTITHREADED if gcc atomic builtins exists.
 include(CheckAtomic)
 
@@ -130,7 +165,9 @@ endif()
 
 include(GetTargetTriple)
 get_target_triple(LLVM_HOSTTRIPLE)
-message(STATUS "LLVM_HOSTTRIPLE: ${LLVM_HOSTTRIPLE}")
+
+# FIXME: We don't distinguish the target and the host. :(
+set(TARGET_TRIPLE "${LLVM_HOSTTRIPLE}")
 
 # Determine the native architecture.
 string(TOLOWER "${LLVM_TARGET_ARCH}" LLVM_NATIVE_ARCH)
@@ -227,7 +264,7 @@ configure_file(
   )
 
 configure_file(
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Support/DataTypes.h.cmake
-  ${LLVM_BINARY_DIR}/include/llvm/Support/DataTypes.h
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/System/DataTypes.h.cmake
+  ${LLVM_BINARY_DIR}/include/llvm/System/DataTypes.h
   )
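
Two of the CMake check fixes above deserve a note.  Adding m to
CMAKE_REQUIRED_LIBRARIES lets probes such as ceilf and nearbyintf link
on platforms where those functions live only in libm, and the
mkdtemp/mkstemp/mktemp probes now include stdlib.h, where POSIX.1-2008
(and glibc) declare them; unistd.h alone is not enough everywhere.  A
hand-rolled equivalent of the fixed mkdtemp check looks roughly like
this sketch:

    cat > conftest.c <<'EOF'
    #include <stdlib.h>
    #include <unistd.h>
    int main(void) { char t[] = "/tmp/XXXXXX"; return mkdtemp(t) == 0; }
    EOF
    cc conftest.c -o conftest && echo "HAVE_MKDTEMP=1"
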
 
diff --git a/libclamav/c++/llvm/cmake/modules/AddLLVM.cmake b/libclamav/c++/llvm/cmake/modules/AddLLVM.cmake
index 205ddb7..0ecd153 100755
--- a/libclamav/c++/llvm/cmake/modules/AddLLVM.cmake
+++ b/libclamav/c++/llvm/cmake/modules/AddLLVM.cmake
@@ -22,9 +22,36 @@ macro(add_llvm_library name)
 endmacro(add_llvm_library name)
 
 
+macro(add_llvm_loadable_module name)
+  if( NOT LLVM_ON_UNIX )
+    message(STATUS "Loadable modules not supported on this platform.
+${name} ignored.")
+  else()
+    llvm_process_sources( ALL_FILES ${ARGN} )
+    add_library( ${name} MODULE ${ALL_FILES} )
+    set_target_properties( ${name} PROPERTIES PREFIX "" )
+
+    if (APPLE)
+      # Darwin-specific linker flags for loadable modules.
+      set_target_properties(${name} PROPERTIES
+        LINK_FLAGS "-Wl,-flat_namespace -Wl,-undefined -Wl,suppress")
+    endif()
+
+    install(TARGETS ${name}
+      LIBRARY DESTINATION lib${LLVM_LIBDIR_SUFFIX}
+      ARCHIVE DESTINATION lib${LLVM_LIBDIR_SUFFIX})
+  endif()
+endmacro(add_llvm_loadable_module name)
+
+
 macro(add_llvm_executable name)
   llvm_process_sources( ALL_FILES ${ARGN} )
-  add_executable(${name} ${ALL_FILES})
+  if( EXCLUDE_FROM_ALL )
+    add_executable(${name} EXCLUDE_FROM_ALL ${ALL_FILES})
+  else()
+    add_executable(${name} ${ALL_FILES})
+  endif()
+  set(EXCLUDE_FROM_ALL OFF)
   if( LLVM_USED_LIBS )
     foreach(lib ${LLVM_USED_LIBS})
       target_link_libraries( ${name} ${lib} )
@@ -45,17 +72,25 @@ endmacro(add_llvm_executable name)
 
 macro(add_llvm_tool name)
   set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${LLVM_TOOLS_BINARY_DIR})
+  if( NOT LLVM_BUILD_TOOLS )
+    set(EXCLUDE_FROM_ALL ON)
+  endif()
   add_llvm_executable(${name} ${ARGN})
-  install(TARGETS ${name}
-    RUNTIME DESTINATION bin)
+  if( LLVM_BUILD_TOOLS )
+    install(TARGETS ${name} RUNTIME DESTINATION bin)
+  endif()
 endmacro(add_llvm_tool name)
 
 
 macro(add_llvm_example name)
 #  set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${LLVM_EXAMPLES_BINARY_DIR})
+  if( NOT LLVM_BUILD_EXAMPLES )
+    set(EXCLUDE_FROM_ALL ON)
+  endif()
   add_llvm_executable(${name} ${ARGN})
-  install(TARGETS ${name}
-    RUNTIME DESTINATION examples)
+  if( LLVM_BUILD_EXAMPLES )
+    install(TARGETS ${name} RUNTIME DESTINATION examples)
+  endif()
 endmacro(add_llvm_example name)
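
add_llvm_tool and add_llvm_example now honour build-gating flags.
Assuming LLVM_BUILD_TOOLS and LLVM_BUILD_EXAMPLES are boolean cache
options defined elsewhere (only their uses appear in these hunks), a
tree configured with them off still allows individual targets to be
built on demand, since EXCLUDE_FROM_ALL targets are skipped by "all"
but remain buildable by name:

    cmake -DLLVM_BUILD_TOOLS=OFF -DLLVM_BUILD_EXAMPLES=OFF ../llvm
    make opt        # an excluded tool can still be built explicitly
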
 
 
diff --git a/libclamav/c++/llvm/cmake/modules/GetTargetTriple.cmake b/libclamav/c++/llvm/cmake/modules/GetTargetTriple.cmake
index 87262ad..ac0c009 100644
--- a/libclamav/c++/llvm/cmake/modules/GetTargetTriple.cmake
+++ b/libclamav/c++/llvm/cmake/modules/GetTargetTriple.cmake
@@ -4,12 +4,12 @@
 function( get_target_triple var )
   if( MSVC )
     if( CMAKE_CL_64 )
-      set( ${var} "x86_64-pc-win32" PARENT_SCOPE )
+      set( value "x86_64-pc-win32" )
     else()
-      set( ${var} "i686-pc-win32" PARENT_SCOPE )
+      set( value "i686-pc-win32" )
     endif()
   elseif( MINGW AND NOT MSYS )
-    set( ${var} "i686-pc-mingw32" PARENT_SCOPE )
+    set( value "i686-pc-mingw32" )
   else( MSVC )
     set(config_guess ${LLVM_MAIN_SRC_DIR}/autoconf/config.guess)
     execute_process(COMMAND sh ${config_guess}
@@ -19,7 +19,8 @@ function( get_target_triple var )
     if( NOT TT_RV EQUAL 0 )
       message(FATAL_ERROR "Failed to execute ${config_guess}")
     endif( NOT TT_RV EQUAL 0 )
-    set( ${var} ${TT_OUT} PARENT_SCOPE )
-    message(STATUS "Target triple: ${${var}}")
+    set( value ${TT_OUT} )
   endif( MSVC )
+  set( ${var} ${value} PARENT_SCOPE )
+  message(STATUS "Target triple: ${value}")
 endfunction( get_target_triple var )
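
Routing every branch through the local `value` variable means the
status message is printed exactly once, whichever case fires.  On
non-MSVC, non-MinGW hosts the function still shells out to
config.guess, so the reported triple is whatever that script prints,
for example:

    $ sh autoconf/config.guess
    x86_64-unknown-linux-gnu
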
diff --git a/libclamav/c++/llvm/cmake/modules/LLVMConfig.cmake b/libclamav/c++/llvm/cmake/modules/LLVMConfig.cmake
index d1c297c..0744b50 100755
--- a/libclamav/c++/llvm/cmake/modules/LLVMConfig.cmake
+++ b/libclamav/c++/llvm/cmake/modules/LLVMConfig.cmake
@@ -5,7 +5,7 @@ function(get_system_libs return_var)
       set(system_libs ${system_libs} imagehlp psapi)
     elseif( CMAKE_HOST_UNIX )
       if( HAVE_LIBDL )
-	set(system_libs ${system_libs} dl)
+	set(system_libs ${system_libs} ${CMAKE_DL_LIBS})
       endif()
       if( LLVM_ENABLE_THREADS AND HAVE_LIBPTHREAD )
 	set(system_libs ${system_libs} pthread)
@@ -32,7 +32,7 @@ endfunction(explicit_llvm_config)
 function(explicit_map_components_to_libraries out_libs)
   set( link_components ${ARGN} )
   foreach(c ${link_components})
-    # add codegen, asmprinter, asmparser
+    # add codegen, asmprinter, asmparser, disassembler
     list(FIND LLVM_TARGETS_TO_BUILD ${c} idx)
     if( NOT idx LESS 0 )
       list(FIND llvm_libs "LLVM${c}CodeGen" idx)
@@ -58,6 +58,10 @@ function(explicit_map_components_to_libraries out_libs)
       if( NOT asmidx LESS 0 )
         list(APPEND expanded_components "LLVM${c}Info")
       endif()
+      list(FIND llvm_libs "LLVM${c}Disassembler" asmidx)
+      if( NOT asmidx LESS 0 )
+        list(APPEND expanded_components "LLVM${c}Disassembler")
+      endif()
     elseif( c STREQUAL "native" )
       list(APPEND expanded_components "LLVM${LLVM_NATIVE_ARCH}CodeGen")
     elseif( c STREQUAL "nativecodegen" )
diff --git a/libclamav/c++/llvm/cmake/modules/LLVMLibDeps.cmake b/libclamav/c++/llvm/cmake/modules/LLVMLibDeps.cmake
index fba999e..f389250 100644
--- a/libclamav/c++/llvm/cmake/modules/LLVMLibDeps.cmake
+++ b/libclamav/c++/llvm/cmake/modules/LLVMLibDeps.cmake
@@ -2,7 +2,7 @@ set(MSVC_LIB_DEPS_LLVMARMAsmParser LLVMARMInfo LLVMMC)
 set(MSVC_LIB_DEPS_LLVMARMAsmPrinter LLVMARMCodeGen LLVMARMInfo LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMARMCodeGen LLVMARMInfo LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMARMInfo LLVMSupport)
-set(MSVC_LIB_DEPS_LLVMAlphaAsmPrinter LLVMAlphaInfo LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMAlphaAsmPrinter LLVMAlphaCodeGen LLVMAlphaInfo LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMAlphaCodeGen LLVMAlphaInfo LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMAlphaInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMAnalysis LLVMCore LLVMSupport LLVMSystem LLVMTarget)
@@ -11,21 +11,19 @@ set(MSVC_LIB_DEPS_LLVMAsmParser LLVMCore LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMAsmPrinter LLVMAnalysis LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMBitReader LLVMCore LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMBitWriter LLVMCore LLVMSupport LLVMSystem)
-set(MSVC_LIB_DEPS_LLVMBlackfinAsmPrinter LLVMAsmPrinter LLVMBlackfinInfo LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMBlackfinAsmPrinter LLVMAsmPrinter LLVMBlackfinCodeGen LLVMBlackfinInfo LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMBlackfinCodeGen LLVMBlackfinInfo LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMBlackfinInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMCBackend LLVMAnalysis LLVMCBackendInfo LLVMCodeGen LLVMCore LLVMScalarOpts LLVMSupport LLVMSystem LLVMTarget LLVMTransformUtils LLVMipa)
 set(MSVC_LIB_DEPS_LLVMCBackendInfo LLVMSupport)
-set(MSVC_LIB_DEPS_LLVMCellSPUAsmPrinter LLVMAsmPrinter LLVMCellSPUInfo LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMCellSPUAsmPrinter LLVMAsmPrinter LLVMCellSPUCodeGen LLVMCellSPUInfo LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMCellSPUCodeGen LLVMCellSPUInfo LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMCellSPUInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMCodeGen LLVMAnalysis LLVMCore LLVMMC LLVMScalarOpts LLVMSupport LLVMSystem LLVMTarget LLVMTransformUtils)
 set(MSVC_LIB_DEPS_LLVMCore LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMCppBackend LLVMCore LLVMCppBackendInfo LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMCppBackendInfo LLVMSupport)
-set(MSVC_LIB_DEPS_LLVMDebugger LLVMAnalysis LLVMBitReader LLVMCore LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMExecutionEngine LLVMCore LLVMSupport LLVMSystem LLVMTarget)
-set(MSVC_LIB_DEPS_LLVMHello LLVMCore LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMInstrumentation LLVMAnalysis LLVMCore LLVMScalarOpts LLVMSupport LLVMSystem LLVMTransformUtils)
 set(MSVC_LIB_DEPS_LLVMInterpreter LLVMCodeGen LLVMCore LLVMExecutionEngine LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMJIT LLVMCodeGen LLVMCore LLVMExecutionEngine LLVMMC LLVMSupport LLVMSystem LLVMTarget)
@@ -33,8 +31,8 @@ set(MSVC_LIB_DEPS_LLVMLinker LLVMArchive LLVMBitReader LLVMCore LLVMSupport LLVM
 set(MSVC_LIB_DEPS_LLVMMC LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMMSIL LLVMAnalysis LLVMCodeGen LLVMCore LLVMMSILInfo LLVMScalarOpts LLVMSupport LLVMSystem LLVMTarget LLVMTransformUtils LLVMipa)
 set(MSVC_LIB_DEPS_LLVMMSILInfo LLVMSupport)
-set(MSVC_LIB_DEPS_LLVMMSP430AsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMMSP430Info LLVMSupport LLVMSystem LLVMTarget)
-set(MSVC_LIB_DEPS_LLVMMSP430CodeGen LLVMCodeGen LLVMCore LLVMMC LLVMMSP430Info LLVMSelectionDAG LLVMSupport LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMMSP430AsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMMSP430CodeGen LLVMMSP430Info LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMMSP430CodeGen LLVMCodeGen LLVMCore LLVMMC LLVMMSP430Info LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMMSP430Info LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMMipsAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMMipsCodeGen LLVMMipsInfo LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMMipsCodeGen LLVMCodeGen LLVMCore LLVMMC LLVMMipsInfo LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget)
@@ -42,17 +40,17 @@ set(MSVC_LIB_DEPS_LLVMMipsInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMPIC16 LLVMAnalysis LLVMCodeGen LLVMCore LLVMMC LLVMPIC16Info LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMPIC16AsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMPIC16 LLVMPIC16Info LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMPIC16Info LLVMSupport)
-set(MSVC_LIB_DEPS_LLVMPowerPCAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMPowerPCInfo LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMPowerPCAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMPowerPCCodeGen LLVMPowerPCInfo LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMPowerPCCodeGen LLVMCodeGen LLVMCore LLVMMC LLVMPowerPCInfo LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMPowerPCInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMScalarOpts LLVMAnalysis LLVMCore LLVMSupport LLVMSystem LLVMTarget LLVMTransformUtils)
 set(MSVC_LIB_DEPS_LLVMSelectionDAG LLVMAnalysis LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMSupport LLVMSystem LLVMTarget)
-set(MSVC_LIB_DEPS_LLVMSparcAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSparcInfo LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMSparcAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSparcCodeGen LLVMSparcInfo LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMSparcCodeGen LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSparcInfo LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMSparcInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMSystem )
-set(MSVC_LIB_DEPS_LLVMSystemZAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMSystemZInfo LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMSystemZAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMSystemZCodeGen LLVMSystemZInfo LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMSystemZCodeGen LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMSystemZInfo LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMSystemZInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMTarget LLVMCore LLVMMC LLVMSupport LLVMSystem)
@@ -62,7 +60,7 @@ set(MSVC_LIB_DEPS_LLVMX86AsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC L
 set(MSVC_LIB_DEPS_LLVMX86CodeGen LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget LLVMX86Info)
 set(MSVC_LIB_DEPS_LLVMX86Info LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMXCore LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget LLVMXCoreInfo)
-set(MSVC_LIB_DEPS_LLVMXCoreAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget LLVMXCoreInfo)
+set(MSVC_LIB_DEPS_LLVMXCoreAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget LLVMXCore LLVMXCoreInfo)
 set(MSVC_LIB_DEPS_LLVMXCoreInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMipa LLVMAnalysis LLVMCore LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMipo LLVMAnalysis LLVMCore LLVMSupport LLVMSystem LLVMTarget LLVMTransformUtils LLVMipa)
diff --git a/libclamav/c++/llvm/cmake/modules/LLVMProcessSources.cmake b/libclamav/c++/llvm/cmake/modules/LLVMProcessSources.cmake
index 12a8968..b753735 100644
--- a/libclamav/c++/llvm/cmake/modules/LLVMProcessSources.cmake
+++ b/libclamav/c++/llvm/cmake/modules/LLVMProcessSources.cmake
@@ -22,6 +22,7 @@ endmacro(add_header_files)
 
 function(llvm_process_sources OUT_VAR)
   set( sources ${ARGN} )
+  llvm_check_source_file_list( ${sources} )
   # Create file dependencies on the tablegenned files, if any.  Seems
   # that this is not strictly needed, as dependencies of the .cpp
   # sources on the tablegenned .inc files are detected and handled,
@@ -37,3 +38,17 @@ function(llvm_process_sources OUT_VAR)
   endif()
   set( ${OUT_VAR} ${sources} PARENT_SCOPE )
 endfunction(llvm_process_sources)
+
+
+function(llvm_check_source_file_list)
+  set(listed ${ARGN})
+  file(GLOB globbed *.cpp)
+  foreach(g ${globbed})
+    get_filename_component(fn ${g} NAME)
+    list(FIND listed ${fn} idx)
+    if( idx LESS 0 )
+      message(SEND_ERROR "Found unknown source file ${g}
+Please update ${CMAKE_CURRENT_SOURCE_DIR}/CMakeLists.txt\n")
+    endif()
+  endforeach()
+endfunction(llvm_check_source_file_list)
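
llvm_check_source_file_list turns a stale CMakeLists.txt into a hard
configure-time failure: any *.cpp present in the directory but missing
from the listed sources triggers SEND_ERROR.  A rough shell rendering
of the same policy (a sketch of the idea, not what CMake executes):

    # LISTED_SOURCES is assumed to hold the space-separated file list.
    for f in *.cpp; do
      case " $LISTED_SOURCES " in
        *" $f "*) ;;                          # listed, fine
        *) echo "unknown source file $f" >&2; exit 1 ;;
      esac
    done
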
diff --git a/libclamav/c++/llvm/configure b/libclamav/c++/llvm/configure
index a90d4c1..4ef693f 100755
--- a/libclamav/c++/llvm/configure
+++ b/libclamav/c++/llvm/configure
@@ -847,7 +847,9 @@ TARGETS_TO_BUILD
 LLVM_ENUM_TARGETS
 LLVM_ENUM_ASM_PRINTERS
 LLVM_ENUM_ASM_PARSERS
+LLVM_ENUM_DISASSEMBLERS
 ENABLE_CBE_PRINTF_A
+OPTIMIZE_OPTION
 EXTRA_OPTIONS
 BINUTILS_INCDIR
 ENABLE_LLVMC_DYNAMIC
@@ -913,6 +915,8 @@ LLVMGCCCOMMAND
 LLVMGXXCOMMAND
 LLVMGCC
 LLVMGXX
+NO_VARIADIC_MACROS
+NO_MISSING_FIELD_INITIALIZERS
 USE_UDIS86
 USE_OPROFILE
 HAVE_PTHREAD
@@ -1595,9 +1599,19 @@ Optional Packages:
                           searches PATH)
   --with-llvmgxx          Specify location of llvm-g++ driver (default
                           searches PATH)
+  --with-optimize-option  Select the compiler options to use for optimized
+                          builds
   --with-extra-options    Specify additional options to compile LLVM with
   --with-ocaml-libdir     Specify install location for ocaml bindings (default
                           is stdlib)
+  --with-c-include-dirs   Colon separated list of directories clang will
+                          search for headers
+  --with-cxx-include-root Directory with the libstdc++ headers.
+  --with-cxx-include-arch Architecture of the libstdc++ headers.
+  --with-cxx-include-32bit-dir
+                          32 bit multilib directory.
+  --with-cxx-include-64bit-dir
+                          64 bit multilib directory.
   --with-binutils-include Specify path to binutils/include/ containing
                           plugin-api.h file for gold plugin.
   --with-tclinclude       directory where tcl headers are
@@ -2338,6 +2352,11 @@ else
     llvm_cv_no_link_all_option="-Wl,-z,defaultextract"
     llvm_cv_os_type="SunOS"
     llvm_cv_platform_type="Unix" ;;
+  *-*-auroraux*)
+    llvm_cv_link_all_option="-Wl,-z,allextract"
+    llvm_cv_link_all_option="-Wl,-z,defaultextract"
+    llvm_cv_os_type="AuroraUX"
+    llvm_cv_platform_type="Unix" ;;
   *-*-win32*)
     llvm_cv_link_all_option="-Wl,--whole-archive"
     llvm_cv_no_link_all_option="-Wl,--no-whole-archive"
@@ -2348,6 +2367,11 @@ else
     llvm_cv_no_link_all_option="-Wl,--no-whole-archive"
     llvm_cv_os_type="MingW"
     llvm_cv_platform_type="Win32" ;;
+  *-*-haiku*)
+    llvm_cv_link_all_option="-Wl,--whole-archive"
+    llvm_cv_no_link_all_option="-Wl,--no-whole-archive"
+    llvm_cv_os_type="Haiku"
+    llvm_cv_platform_type="Unix" ;;
   *-unknown-eabi*)
     llvm_cv_link_all_option="-Wl,--whole-archive"
     llvm_cv_no_link_all_option="-Wl,--no-whole-archive"
@@ -2398,10 +2422,14 @@ else
     llvm_cv_target_os_type="Linux" ;;
   *-*-solaris*)
     llvm_cv_target_os_type="SunOS" ;;
+  *-*-auroraux*)
+    llvm_cv_target_os_type="AuroraUX" ;;
   *-*-win32*)
     llvm_cv_target_os_type="Win32" ;;
   *-*-mingw*)
     llvm_cv_target_os_type="MingW" ;;
+  *-*-haiku*)
+    llvm_cv_target_os_type="Haiku" ;;
   *-unknown-eabi*)
     llvm_cv_target_os_type="Freestanding" ;;
   *)
@@ -5081,11 +5109,12 @@ _ACEOF
   fi
 done
 
-# Build the LLVM_TARGET and LLVM_ASM_PRINTER macro uses for
-# Targets.def, AsmPrinters.def, and AsmParsers.def.
+# Build the LLVM_TARGET and LLVM_... macros for Targets.def and the individual
+# target feature def files.
 LLVM_ENUM_TARGETS=""
 LLVM_ENUM_ASM_PRINTERS=""
 LLVM_ENUM_ASM_PARSERS=""
+LLVM_ENUM_DISASSEMBLERS=""
 for target_to_build in $TARGETS_TO_BUILD; do
   LLVM_ENUM_TARGETS="LLVM_TARGET($target_to_build) $LLVM_ENUM_TARGETS"
   if test -f ${srcdir}/lib/Target/${target_to_build}/AsmPrinter/Makefile ; then
@@ -5094,11 +5123,15 @@ for target_to_build in $TARGETS_TO_BUILD; do
   if test -f ${srcdir}/lib/Target/${target_to_build}/AsmParser/Makefile ; then
     LLVM_ENUM_ASM_PARSERS="LLVM_ASM_PARSER($target_to_build) $LLVM_ENUM_ASM_PARSERS";
   fi
+  if test -f ${srcdir}/lib/Target/${target_to_build}/Disassembler/Makefile ; then
+    LLVM_ENUM_DISASSEMBLERS="LLVM_DISASSEMBLER($target_to_build) $LLVM_ENUM_DISASSEMBLERS";
+  fi
 done
 
 
 
 
+
 # Check whether --enable-cbe-printf-a was given.
 if test "${enable_cbe_printf_a+set}" = set; then
   enableval=$enable_cbe_printf_a;
@@ -5176,6 +5209,29 @@ echo "$as_me: error: Invalid llvm-gcc. Use --with-llvmgcc when --with-llvmgxx is
 fi
 
 
+# Check whether --with-optimize-option was given.
+if test "${with_optimize_option+set}" = set; then
+  withval=$with_optimize_option;
+else
+  withval=default
+fi
+
+{ echo "$as_me:$LINENO: checking optimization flags" >&5
+echo $ECHO_N "checking optimization flags... $ECHO_C" >&6; }
+case "$withval" in
+  default)
+    case "$llvm_cv_os_type" in
+    MingW) optimize_option=-O3 ;;
+    *)     optimize_option=-O2 ;;
+    esac ;;
+  *) optimize_option="$withval" ;;
+esac
+OPTIMIZE_OPTION=$optimize_option
+
+{ echo "$as_me:$LINENO: result: $optimize_option" >&5
+echo "${ECHO_T}$optimize_option" >&6; }
+
+
 # Check whether --with-extra-options was given.
 if test "${with_extra_options+set}" = set; then
   withval=$with_extra_options;
@@ -5230,6 +5286,76 @@ echo "$as_me: error: Invalid path for --with-ocaml-libdir. Provide full path" >&
 esac
 
 
+# Check whether --with-c-include-dir was given.
+if test "${with_c_include_dir+set}" = set; then
+  withval=$with_c_include_dir;
+else
+  withval=""
+fi
+
+
+cat >>confdefs.h <<_ACEOF
+#define C_INCLUDE_DIRS "$withval"
+_ACEOF
+
+
+
+# Check whether --with-cxx-include-root was given.
+if test "${with_cxx_include_root+set}" = set; then
+  withval=$with_cxx_include_root;
+else
+  withval=""
+fi
+
+
+cat >>confdefs.h <<_ACEOF
+#define CXX_INCLUDE_ROOT "$withval"
+_ACEOF
+
+
+
+# Check whether --with-cxx-include-arch was given.
+if test "${with_cxx_include_arch+set}" = set; then
+  withval=$with_cxx_include_arch;
+else
+  withval=""
+fi
+
+
+cat >>confdefs.h <<_ACEOF
+#define CXX_INCLUDE_ARCH "$withval"
+_ACEOF
+
+
+
+# Check whether --with-cxx-include-32bit-dir was given.
+if test "${with_cxx_include_32bit_dir+set}" = set; then
+  withval=$with_cxx_include_32bit_dir;
+else
+  withval=""
+fi
+
+
+cat >>confdefs.h <<_ACEOF
+#define CXX_INCLUDE_32BIT_DIR "$withval"
+_ACEOF
+
+
+
+# Check whether --with-cxx-include-64bit-dir was given.
+if test "${with_cxx_include_64bit_dir+set}" = set; then
+  withval=$with_cxx_include_64bit_dir;
+else
+  withval=""
+fi
+
+
+cat >>confdefs.h <<_ACEOF
+#define CXX_INCLUDE_64BIT_DIR "$withval"
+_ACEOF
+
+
+
 # Check whether --with-binutils-include was given.
 if test "${with_binutils_include+set}" = set; then
   withval=$with_binutils_include;
@@ -10994,7 +11120,7 @@ else
   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
   lt_status=$lt_dlunknown
   cat > conftest.$ac_ext <<EOF
-#line 10997 "configure"
+#line 11123 "configure"
 #include "confdefs.h"
 
 #if HAVE_DLFCN_H
@@ -13138,7 +13264,7 @@ ia64-*-hpux*)
   ;;
 *-*-irix6*)
   # Find out which ABI we are using.
-  echo '#line 13141 "configure"' > conftest.$ac_ext
+  echo '#line 13267 "configure"' > conftest.$ac_ext
   if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
   (eval $ac_compile) 2>&5
   ac_status=$?
@@ -14856,11 +14982,11 @@ else
    -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
    -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
    -e 's:$: $lt_compiler_flag:'`
-   (eval echo "\"\$as_me:14859: $lt_compile\"" >&5)
+   (eval echo "\"\$as_me:14985: $lt_compile\"" >&5)
    (eval "$lt_compile" 2>conftest.err)
    ac_status=$?
    cat conftest.err >&5
-   echo "$as_me:14863: \$? = $ac_status" >&5
+   echo "$as_me:14989: \$? = $ac_status" >&5
    if (exit $ac_status) && test -s "$ac_outfile"; then
      # The compiler can only warn and ignore the option if not recognized
      # So say no if there are warnings other than the usual output.
@@ -15124,11 +15250,11 @@ else
    -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
    -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
    -e 's:$: $lt_compiler_flag:'`
-   (eval echo "\"\$as_me:15127: $lt_compile\"" >&5)
+   (eval echo "\"\$as_me:15253: $lt_compile\"" >&5)
    (eval "$lt_compile" 2>conftest.err)
    ac_status=$?
    cat conftest.err >&5
-   echo "$as_me:15131: \$? = $ac_status" >&5
+   echo "$as_me:15257: \$? = $ac_status" >&5
    if (exit $ac_status) && test -s "$ac_outfile"; then
      # The compiler can only warn and ignore the option if not recognized
      # So say no if there are warnings other than the usual output.
@@ -15228,11 +15354,11 @@ else
    -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
    -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
    -e 's:$: $lt_compiler_flag:'`
-   (eval echo "\"\$as_me:15231: $lt_compile\"" >&5)
+   (eval echo "\"\$as_me:15357: $lt_compile\"" >&5)
    (eval "$lt_compile" 2>out/conftest.err)
    ac_status=$?
    cat out/conftest.err >&5
-   echo "$as_me:15235: \$? = $ac_status" >&5
+   echo "$as_me:15361: \$? = $ac_status" >&5
    if (exit $ac_status) && test -s out/conftest2.$ac_objext
    then
      # The compiler can only warn and ignore the option if not recognized
@@ -17680,7 +17806,7 @@ else
   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
   lt_status=$lt_dlunknown
   cat > conftest.$ac_ext <<EOF
-#line 17683 "configure"
+#line 17809 "configure"
 #include "confdefs.h"
 
 #if HAVE_DLFCN_H
@@ -17780,7 +17906,7 @@ else
   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
   lt_status=$lt_dlunknown
   cat > conftest.$ac_ext <<EOF
-#line 17783 "configure"
+#line 17909 "configure"
 #include "confdefs.h"
 
 #if HAVE_DLFCN_H
@@ -20148,11 +20274,11 @@ else
    -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
    -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
    -e 's:$: $lt_compiler_flag:'`
-   (eval echo "\"\$as_me:20151: $lt_compile\"" >&5)
+   (eval echo "\"\$as_me:20277: $lt_compile\"" >&5)
    (eval "$lt_compile" 2>conftest.err)
    ac_status=$?
    cat conftest.err >&5
-   echo "$as_me:20155: \$? = $ac_status" >&5
+   echo "$as_me:20281: \$? = $ac_status" >&5
    if (exit $ac_status) && test -s "$ac_outfile"; then
      # The compiler can only warn and ignore the option if not recognized
      # So say no if there are warnings other than the usual output.
@@ -20252,11 +20378,11 @@ else
    -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
    -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
    -e 's:$: $lt_compiler_flag:'`
-   (eval echo "\"\$as_me:20255: $lt_compile\"" >&5)
+   (eval echo "\"\$as_me:20381: $lt_compile\"" >&5)
    (eval "$lt_compile" 2>out/conftest.err)
    ac_status=$?
    cat out/conftest.err >&5
-   echo "$as_me:20259: \$? = $ac_status" >&5
+   echo "$as_me:20385: \$? = $ac_status" >&5
    if (exit $ac_status) && test -s out/conftest2.$ac_objext
    then
      # The compiler can only warn and ignore the option if not recognized
@@ -21822,11 +21948,11 @@ else
    -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
    -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
    -e 's:$: $lt_compiler_flag:'`
-   (eval echo "\"\$as_me:21825: $lt_compile\"" >&5)
+   (eval echo "\"\$as_me:21951: $lt_compile\"" >&5)
    (eval "$lt_compile" 2>conftest.err)
    ac_status=$?
    cat conftest.err >&5
-   echo "$as_me:21829: \$? = $ac_status" >&5
+   echo "$as_me:21955: \$? = $ac_status" >&5
    if (exit $ac_status) && test -s "$ac_outfile"; then
      # The compiler can only warn and ignore the option if not recognized
      # So say no if there are warnings other than the usual output.
@@ -21926,11 +22052,11 @@ else
    -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
    -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
    -e 's:$: $lt_compiler_flag:'`
-   (eval echo "\"\$as_me:21929: $lt_compile\"" >&5)
+   (eval echo "\"\$as_me:22055: $lt_compile\"" >&5)
    (eval "$lt_compile" 2>out/conftest.err)
    ac_status=$?
    cat out/conftest.err >&5
-   echo "$as_me:21933: \$? = $ac_status" >&5
+   echo "$as_me:22059: \$? = $ac_status" >&5
    if (exit $ac_status) && test -s out/conftest2.$ac_objext
    then
      # The compiler can only warn and ignore the option if not recognized
@@ -24161,11 +24287,11 @@ else
    -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
    -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
    -e 's:$: $lt_compiler_flag:'`
-   (eval echo "\"\$as_me:24164: $lt_compile\"" >&5)
+   (eval echo "\"\$as_me:24290: $lt_compile\"" >&5)
    (eval "$lt_compile" 2>conftest.err)
    ac_status=$?
    cat conftest.err >&5
-   echo "$as_me:24168: \$? = $ac_status" >&5
+   echo "$as_me:24294: \$? = $ac_status" >&5
    if (exit $ac_status) && test -s "$ac_outfile"; then
      # The compiler can only warn and ignore the option if not recognized
      # So say no if there are warnings other than the usual output.
@@ -24429,11 +24555,11 @@ else
    -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
    -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
    -e 's:$: $lt_compiler_flag:'`
-   (eval echo "\"\$as_me:24432: $lt_compile\"" >&5)
+   (eval echo "\"\$as_me:24558: $lt_compile\"" >&5)
    (eval "$lt_compile" 2>conftest.err)
    ac_status=$?
    cat conftest.err >&5
-   echo "$as_me:24436: \$? = $ac_status" >&5
+   echo "$as_me:24562: \$? = $ac_status" >&5
    if (exit $ac_status) && test -s "$ac_outfile"; then
      # The compiler can only warn and ignore the option if not recognized
      # So say no if there are warnings other than the usual output.
@@ -24533,11 +24659,11 @@ else
    -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
    -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
    -e 's:$: $lt_compiler_flag:'`
-   (eval echo "\"\$as_me:24536: $lt_compile\"" >&5)
+   (eval echo "\"\$as_me:24662: $lt_compile\"" >&5)
    (eval "$lt_compile" 2>out/conftest.err)
    ac_status=$?
    cat out/conftest.err >&5
-   echo "$as_me:24540: \$? = $ac_status" >&5
+   echo "$as_me:24666: \$? = $ac_status" >&5
    if (exit $ac_status) && test -s out/conftest2.$ac_objext
    then
      # The compiler can only warn and ignore the option if not recognized
@@ -27445,6 +27571,15 @@ fi
 { echo "$as_me:$LINENO: result: ok" >&5
 echo "${ECHO_T}ok" >&6; }
 
+{ echo "$as_me:$LINENO: checking optional compiler flags" >&5
+echo $ECHO_N "checking optional compiler flags... $ECHO_C" >&6; }
+NO_VARIADIC_MACROS=`$CXX -Wno-variadic-macros -fsyntax-only -xc /dev/null 2>/dev/null && echo -Wno-variadic-macros`
+
+NO_MISSING_FIELD_INITIALIZERS=`$CXX -Wno-missing-field-initializers -fsyntax-only -xc /dev/null 2>/dev/null && echo -Wno-missing-field-initializers`
+
+{ echo "$as_me:$LINENO: result: $NO_VARIADIC_MACROS $NO_MISSING_FIELD_INITIALIZERS" >&5
+echo "${ECHO_T}$NO_VARIADIC_MACROS $NO_MISSING_FIELD_INITIALIZERS" >&6; }
+
 
 
 { echo "$as_me:$LINENO: checking for sin in -lm" >&5
@@ -28539,13 +28674,17 @@ if test "${with_oprofile+set}" = set; then
 
       case "$withval" in
         /usr|yes) llvm_cv_oppath=/usr/lib/oprofile ;;
+        no) llvm_cv_oppath=
+            USE_OPROFILE=0
+ ;;
         *) llvm_cv_oppath="${withval}/lib/oprofile"
            CPPFLAGS="-I${withval}/include";;
       esac
-      LIBS="$LIBS -L${llvm_cv_oppath} -Wl,-rpath,${llvm_cv_oppath}"
-      { echo "$as_me:$LINENO: checking for library containing op_open_agent" >&5
-echo $ECHO_N "checking for library containing op_open_agent... $ECHO_C" >&6; }
-if test "${ac_cv_search_op_open_agent+set}" = set; then
+      if test -n "$llvm_cv_oppath" ; then
+        LIBS="$LIBS -L${llvm_cv_oppath} -Wl,-rpath,${llvm_cv_oppath}"
+                                        { echo "$as_me:$LINENO: checking for library containing bfd_init" >&5
+echo $ECHO_N "checking for library containing bfd_init... $ECHO_C" >&6; }
+if test "${ac_cv_search_bfd_init+set}" = set; then
   echo $ECHO_N "(cached) $ECHO_C" >&6
 else
   ac_func_search_save_LIBS=$LIBS
@@ -28562,16 +28701,16 @@ cat >>conftest.$ac_ext <<_ACEOF
 #ifdef __cplusplus
 extern "C"
 #endif
-char op_open_agent ();
+char bfd_init ();
 int
 main ()
 {
-return op_open_agent ();
+return bfd_init ();
   ;
   return 0;
 }
 _ACEOF
-for ac_lib in '' opagent; do
+for ac_lib in '' bfd; do
   if test -z "$ac_lib"; then
     ac_res="none required"
   else
@@ -28612,7 +28751,7 @@ eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5
   ac_status=$?
   echo "$as_me:$LINENO: \$? = $ac_status" >&5
   (exit $ac_status); }; }; then
-  ac_cv_search_op_open_agent=$ac_res
+  ac_cv_search_bfd_init=$ac_res
 else
   echo "$as_me: failed program was:" >&5
 sed 's/^/| /' conftest.$ac_ext >&5
@@ -28622,201 +28761,27 @@ fi
 
 rm -f core conftest.err conftest.$ac_objext \
       conftest$ac_exeext
-  if test "${ac_cv_search_op_open_agent+set}" = set; then
+  if test "${ac_cv_search_bfd_init+set}" = set; then
   break
 fi
 done
-if test "${ac_cv_search_op_open_agent+set}" = set; then
+if test "${ac_cv_search_bfd_init+set}" = set; then
   :
 else
-  ac_cv_search_op_open_agent=no
+  ac_cv_search_bfd_init=no
 fi
 rm conftest.$ac_ext
 LIBS=$ac_func_search_save_LIBS
 fi
-{ echo "$as_me:$LINENO: result: $ac_cv_search_op_open_agent" >&5
-echo "${ECHO_T}$ac_cv_search_op_open_agent" >&6; }
-ac_res=$ac_cv_search_op_open_agent
+{ echo "$as_me:$LINENO: result: $ac_cv_search_bfd_init" >&5
+echo "${ECHO_T}$ac_cv_search_bfd_init" >&6; }
+ac_res=$ac_cv_search_bfd_init
 if test "$ac_res" != no; then
   test "$ac_res" = "none required" || LIBS="$ac_res $LIBS"
 
-else
-
-        echo "Error! You need to have libopagent around."
-        exit -1
-
 fi
 
-      if test "${ac_cv_header_opagent_h+set}" = set; then
-  { echo "$as_me:$LINENO: checking for opagent.h" >&5
-echo $ECHO_N "checking for opagent.h... $ECHO_C" >&6; }
-if test "${ac_cv_header_opagent_h+set}" = set; then
-  echo $ECHO_N "(cached) $ECHO_C" >&6
-fi
-{ echo "$as_me:$LINENO: result: $ac_cv_header_opagent_h" >&5
-echo "${ECHO_T}$ac_cv_header_opagent_h" >&6; }
-else
-  # Is the header compilable?
-{ echo "$as_me:$LINENO: checking opagent.h usability" >&5
-echo $ECHO_N "checking opagent.h usability... $ECHO_C" >&6; }
-cat >conftest.$ac_ext <<_ACEOF
-/* confdefs.h.  */
-_ACEOF
-cat confdefs.h >>conftest.$ac_ext
-cat >>conftest.$ac_ext <<_ACEOF
-/* end confdefs.h.  */
-$ac_includes_default
-#include <opagent.h>
-_ACEOF
-rm -f conftest.$ac_objext
-if { (ac_try="$ac_compile"
-case "(($ac_try" in
-  *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
-  *) ac_try_echo=$ac_try;;
-esac
-eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5
-  (eval "$ac_compile") 2>conftest.er1
-  ac_status=$?
-  grep -v '^ *+' conftest.er1 >conftest.err
-  rm -f conftest.er1
-  cat conftest.err >&5
-  echo "$as_me:$LINENO: \$? = $ac_status" >&5
-  (exit $ac_status); } &&
-	 { ac_try='test -z "$ac_c_werror_flag" || test ! -s conftest.err'
-  { (case "(($ac_try" in
-  *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
-  *) ac_try_echo=$ac_try;;
-esac
-eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5
-  (eval "$ac_try") 2>&5
-  ac_status=$?
-  echo "$as_me:$LINENO: \$? = $ac_status" >&5
-  (exit $ac_status); }; } &&
-	 { ac_try='test -s conftest.$ac_objext'
-  { (case "(($ac_try" in
-  *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
-  *) ac_try_echo=$ac_try;;
-esac
-eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5
-  (eval "$ac_try") 2>&5
-  ac_status=$?
-  echo "$as_me:$LINENO: \$? = $ac_status" >&5
-  (exit $ac_status); }; }; then
-  ac_header_compiler=yes
-else
-  echo "$as_me: failed program was:" >&5
-sed 's/^/| /' conftest.$ac_ext >&5
-
-	ac_header_compiler=no
-fi
-
-rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
-{ echo "$as_me:$LINENO: result: $ac_header_compiler" >&5
-echo "${ECHO_T}$ac_header_compiler" >&6; }
-
-# Is the header present?
-{ echo "$as_me:$LINENO: checking opagent.h presence" >&5
-echo $ECHO_N "checking opagent.h presence... $ECHO_C" >&6; }
-cat >conftest.$ac_ext <<_ACEOF
-/* confdefs.h.  */
-_ACEOF
-cat confdefs.h >>conftest.$ac_ext
-cat >>conftest.$ac_ext <<_ACEOF
-/* end confdefs.h.  */
-#include <opagent.h>
-_ACEOF
-if { (ac_try="$ac_cpp conftest.$ac_ext"
-case "(($ac_try" in
-  *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
-  *) ac_try_echo=$ac_try;;
-esac
-eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5
-  (eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1
-  ac_status=$?
-  grep -v '^ *+' conftest.er1 >conftest.err
-  rm -f conftest.er1
-  cat conftest.err >&5
-  echo "$as_me:$LINENO: \$? = $ac_status" >&5
-  (exit $ac_status); } >/dev/null; then
-  if test -s conftest.err; then
-    ac_cpp_err=$ac_c_preproc_warn_flag
-    ac_cpp_err=$ac_cpp_err$ac_c_werror_flag
-  else
-    ac_cpp_err=
-  fi
-else
-  ac_cpp_err=yes
-fi
-if test -z "$ac_cpp_err"; then
-  ac_header_preproc=yes
-else
-  echo "$as_me: failed program was:" >&5
-sed 's/^/| /' conftest.$ac_ext >&5
-
-  ac_header_preproc=no
-fi
-
-rm -f conftest.err conftest.$ac_ext
-{ echo "$as_me:$LINENO: result: $ac_header_preproc" >&5
-echo "${ECHO_T}$ac_header_preproc" >&6; }
-
-# So?  What about this header?
-case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in
-  yes:no: )
-    { echo "$as_me:$LINENO: WARNING: opagent.h: accepted by the compiler, rejected by the preprocessor!" >&5
-echo "$as_me: WARNING: opagent.h: accepted by the compiler, rejected by the preprocessor!" >&2;}
-    { echo "$as_me:$LINENO: WARNING: opagent.h: proceeding with the compiler's result" >&5
-echo "$as_me: WARNING: opagent.h: proceeding with the compiler's result" >&2;}
-    ac_header_preproc=yes
-    ;;
-  no:yes:* )
-    { echo "$as_me:$LINENO: WARNING: opagent.h: present but cannot be compiled" >&5
-echo "$as_me: WARNING: opagent.h: present but cannot be compiled" >&2;}
-    { echo "$as_me:$LINENO: WARNING: opagent.h:     check for missing prerequisite headers?" >&5
-echo "$as_me: WARNING: opagent.h:     check for missing prerequisite headers?" >&2;}
-    { echo "$as_me:$LINENO: WARNING: opagent.h: see the Autoconf documentation" >&5
-echo "$as_me: WARNING: opagent.h: see the Autoconf documentation" >&2;}
-    { echo "$as_me:$LINENO: WARNING: opagent.h:     section \"Present But Cannot Be Compiled\"" >&5
-echo "$as_me: WARNING: opagent.h:     section \"Present But Cannot Be Compiled\"" >&2;}
-    { echo "$as_me:$LINENO: WARNING: opagent.h: proceeding with the preprocessor's result" >&5
-echo "$as_me: WARNING: opagent.h: proceeding with the preprocessor's result" >&2;}
-    { echo "$as_me:$LINENO: WARNING: opagent.h: in the future, the compiler will take precedence" >&5
-echo "$as_me: WARNING: opagent.h: in the future, the compiler will take precedence" >&2;}
-    ( cat <<\_ASBOX
-## ----------------------------------- ##
-## Report this to llvmbugs at cs.uiuc.edu ##
-## ----------------------------------- ##
-_ASBOX
-     ) | sed "s/^/$as_me: WARNING:     /" >&2
-    ;;
-esac
-{ echo "$as_me:$LINENO: checking for opagent.h" >&5
-echo $ECHO_N "checking for opagent.h... $ECHO_C" >&6; }
-if test "${ac_cv_header_opagent_h+set}" = set; then
-  echo $ECHO_N "(cached) $ECHO_C" >&6
-else
-  ac_cv_header_opagent_h=$ac_header_preproc
-fi
-{ echo "$as_me:$LINENO: result: $ac_cv_header_opagent_h" >&5
-echo "${ECHO_T}$ac_cv_header_opagent_h" >&6; }
-
-fi
-if test $ac_cv_header_opagent_h = yes; then
-  :
-else
-
-        echo "Error! You need to have opagent.h around."
-        exit -1
-
-fi
-
-
-
-else
-
-      llvm_cv_old_LIBS="$LIBS"
-      LIBS="$LIBS -L/usr/lib/oprofile -Wl,-rpath,/usr/lib/oprofile"
-            { echo "$as_me:$LINENO: checking for library containing op_open_agent" >&5
+        { echo "$as_me:$LINENO: checking for library containing op_open_agent" >&5
 echo $ECHO_N "checking for library containing op_open_agent... $ECHO_C" >&6; }
 if test "${ac_cv_search_op_open_agent+set}" = set; then
   echo $ECHO_N "(cached) $ECHO_C" >&6
@@ -28912,15 +28877,15 @@ echo "${ECHO_T}$ac_cv_search_op_open_agent" >&6; }
 ac_res=$ac_cv_search_op_open_agent
 if test "$ac_res" != no; then
   test "$ac_res" = "none required" || LIBS="$ac_res $LIBS"
-  USE_OPROFILE=1
 
 else
-  LIBS="$llvm_cv_old_LIBS"
-                      USE_OPROFILE=0
+
+          echo "Error! You need to have libopagent around."
+          exit -1
 
 fi
 
-      if test "${ac_cv_header_opagent_h+set}" = set; then
+        if test "${ac_cv_header_opagent_h+set}" = set; then
   { echo "$as_me:$LINENO: checking for opagent.h" >&5
 echo $ECHO_N "checking for opagent.h... $ECHO_C" >&6; }
 if test "${ac_cv_header_opagent_h+set}" = set; then
@@ -29078,13 +29043,18 @@ if test $ac_cv_header_opagent_h = yes; then
   :
 else
 
-        LIBS="$llvm_cv_old_LIBS"
-        USE_OPROFILE=0
-
+          echo "Error! You need to have opagent.h around."
+          exit -1
 
 fi
 
 
+      fi
+
+else
+
+      USE_OPROFILE=0
+
 
 fi
 
@@ -35411,7 +35381,9 @@ ac_config_files="$ac_config_files include/llvm/Config/AsmPrinters.def"
 
 ac_config_files="$ac_config_files include/llvm/Config/AsmParsers.def"
 
-ac_config_headers="$ac_config_headers include/llvm/Support/DataTypes.h"
+ac_config_files="$ac_config_files include/llvm/Config/Disassemblers.def"
+
+ac_config_headers="$ac_config_headers include/llvm/System/DataTypes.h"
 
 
 ac_config_files="$ac_config_files Makefile.config"
@@ -36038,7 +36010,8 @@ do
     "include/llvm/Config/Targets.def") CONFIG_FILES="$CONFIG_FILES include/llvm/Config/Targets.def" ;;
     "include/llvm/Config/AsmPrinters.def") CONFIG_FILES="$CONFIG_FILES include/llvm/Config/AsmPrinters.def" ;;
     "include/llvm/Config/AsmParsers.def") CONFIG_FILES="$CONFIG_FILES include/llvm/Config/AsmParsers.def" ;;
-    "include/llvm/Support/DataTypes.h") CONFIG_HEADERS="$CONFIG_HEADERS include/llvm/Support/DataTypes.h" ;;
+    "include/llvm/Config/Disassemblers.def") CONFIG_FILES="$CONFIG_FILES include/llvm/Config/Disassemblers.def" ;;
+    "include/llvm/System/DataTypes.h") CONFIG_HEADERS="$CONFIG_HEADERS include/llvm/System/DataTypes.h" ;;
     "Makefile.config") CONFIG_FILES="$CONFIG_FILES Makefile.config" ;;
     "llvm.spec") CONFIG_FILES="$CONFIG_FILES llvm.spec" ;;
     "docs/doxygen.cfg") CONFIG_FILES="$CONFIG_FILES docs/doxygen.cfg" ;;
@@ -36211,12 +36184,12 @@ TARGETS_TO_BUILD!$TARGETS_TO_BUILD$ac_delim
 LLVM_ENUM_TARGETS!$LLVM_ENUM_TARGETS$ac_delim
 LLVM_ENUM_ASM_PRINTERS!$LLVM_ENUM_ASM_PRINTERS$ac_delim
 LLVM_ENUM_ASM_PARSERS!$LLVM_ENUM_ASM_PARSERS$ac_delim
+LLVM_ENUM_DISASSEMBLERS!$LLVM_ENUM_DISASSEMBLERS$ac_delim
 ENABLE_CBE_PRINTF_A!$ENABLE_CBE_PRINTF_A$ac_delim
+OPTIMIZE_OPTION!$OPTIMIZE_OPTION$ac_delim
 EXTRA_OPTIONS!$EXTRA_OPTIONS$ac_delim
 BINUTILS_INCDIR!$BINUTILS_INCDIR$ac_delim
 ENABLE_LLVMC_DYNAMIC!$ENABLE_LLVMC_DYNAMIC$ac_delim
-ENABLE_LLVMC_DYNAMIC_PLUGINS!$ENABLE_LLVMC_DYNAMIC_PLUGINS$ac_delim
-CXX!$CXX$ac_delim
 _ACEOF
 
   if test `sed -n "s/.*$ac_delim\$/X/p" conf$$subs.sed | grep -c X` = 97; then
@@ -36258,6 +36231,8 @@ _ACEOF
 ac_delim='%!_!# '
 for ac_last_try in false false false false false :; do
   cat >conf$$subs.sed <<_ACEOF
+ENABLE_LLVMC_DYNAMIC_PLUGINS!$ENABLE_LLVMC_DYNAMIC_PLUGINS$ac_delim
+CXX!$CXX$ac_delim
 CXXFLAGS!$CXXFLAGS$ac_delim
 ac_ct_CXX!$ac_ct_CXX$ac_delim
 NM!$NM$ac_delim
@@ -36318,6 +36293,8 @@ LLVMGCCCOMMAND!$LLVMGCCCOMMAND$ac_delim
 LLVMGXXCOMMAND!$LLVMGXXCOMMAND$ac_delim
 LLVMGCC!$LLVMGCC$ac_delim
 LLVMGXX!$LLVMGXX$ac_delim
+NO_VARIADIC_MACROS!$NO_VARIADIC_MACROS$ac_delim
+NO_MISSING_FIELD_INITIALIZERS!$NO_MISSING_FIELD_INITIALIZERS$ac_delim
 USE_UDIS86!$USE_UDIS86$ac_delim
 USE_OPROFILE!$USE_OPROFILE$ac_delim
 HAVE_PTHREAD!$HAVE_PTHREAD$ac_delim
@@ -36352,7 +36329,7 @@ LIBOBJS!$LIBOBJS$ac_delim
 LTLIBOBJS!$LTLIBOBJS$ac_delim
 _ACEOF
 
-  if test `sed -n "s/.*$ac_delim\$/X/p" conf$$subs.sed | grep -c X` = 92; then
+  if test `sed -n "s/.*$ac_delim\$/X/p" conf$$subs.sed | grep -c X` = 96; then
     break
   elif $ac_last_try; then
     { { echo "$as_me:$LINENO: error: could not make $CONFIG_STATUS" >&5
@@ -36371,7 +36348,7 @@ fi
 
 cat >>$CONFIG_STATUS <<_ACEOF
 cat >"\$tmp/subs-2.sed" <<\CEOF$ac_eof
-/@[a-zA-Z_][a-zA-Z_0-9]*@/!b end
+/@[a-zA-Z_][a-zA-Z_0-9]*@/!b
 _ACEOF
 sed '
 s/[,\\&]/\\&/g; s/@/@|#_!!_#|/g
@@ -36384,8 +36361,6 @@ N; s/^.*\n//; s/[,\\&]/\\&/g; s/@/@|#_!!_#|/g; b n
 ' >>$CONFIG_STATUS <conf$$subs.sed
 rm -f conf$$subs.sed
 cat >>$CONFIG_STATUS <<_ACEOF
-:end
-s/|#_!!_#|//g
 CEOF$ac_eof
 _ACEOF
 
@@ -36633,7 +36608,7 @@ s&@abs_builddir@&$ac_abs_builddir&;t t
 s&@abs_top_builddir@&$ac_abs_top_builddir&;t t
 s&@INSTALL@&$ac_INSTALL&;t t
 $ac_datarootdir_hack
-" $ac_file_inputs | sed -f "$tmp/subs-1.sed" | sed -f "$tmp/subs-2.sed" >$tmp/out
+" $ac_file_inputs | sed -f "$tmp/subs-1.sed" | sed -f "$tmp/subs-2.sed" | sed 's/|#_!!_#|//g' >$tmp/out
 
 test -z "$ac_datarootdir_hack$ac_datarootdir_seen" &&
   { ac_out=`sed -n '/\${datarootdir}/p' "$tmp/out"`; test -n "$ac_out"; } &&
diff --git a/libclamav/c++/llvm/docs/AliasAnalysis.html b/libclamav/c++/llvm/docs/AliasAnalysis.html
index a89903d..ebf6386 100644
--- a/libclamav/c++/llvm/docs/AliasAnalysis.html
+++ b/libclamav/c++/llvm/docs/AliasAnalysis.html
@@ -225,12 +225,7 @@ method for testing dependencies between function calls.  This method takes two
 call sites (CS1 &amp; CS2), returns NoModRef if the two calls refer to disjoint
 memory locations, Ref if CS1 reads memory written by CS2, Mod if CS1 writes to
 memory read or written by CS2, or ModRef if CS1 might read or write memory
-accessed by CS2.  Note that this relation is not commutative.  Clients that use
-this method should be predicated on the <tt>hasNoModRefInfoForCalls()</tt>
-method, which indicates whether or not an analysis can provide mod/ref
-information for function call pairs (most can not).  If this predicate is false,
-the client shouldn't waste analysis time querying the <tt>getModRefInfo</tt>
-method many times.</p>
+accessed by CS2.  Note that this relation is not commutative.</p>
 
 </div>
 
@@ -251,21 +246,6 @@ analysis implementations and can be put to good use by various clients.
 
 <!-- _______________________________________________________________________ -->
 <div class="doc_subsubsection">
-  The <tt>getMustAliases</tt> method
-</div>
-
-<div class="doc_text">
-
-<p>The <tt>getMustAliases</tt> method returns all values that are known to
-always must alias a pointer.  This information can be provided in some cases for
-important objects like the null pointer and global values.  Knowing that a
-pointer always points to a particular function allows indirect calls to be
-turned into direct calls, for example.</p>
-
-</div>
-
-<!-- _______________________________________________________________________ -->
-<div class="doc_subsubsection">
   The <tt>pointsToConstantMemory</tt> method
 </div>
 
diff --git a/libclamav/c++/llvm/docs/BitCodeFormat.html b/libclamav/c++/llvm/docs/BitCodeFormat.html
index 15b0523..655d7f6 100644
--- a/libclamav/c++/llvm/docs/BitCodeFormat.html
+++ b/libclamav/c++/llvm/docs/BitCodeFormat.html
@@ -27,6 +27,15 @@
   <li><a href="#llvmir">LLVM IR Encoding</a>
     <ol>
     <li><a href="#basics">Basics</a></li>
+    <li><a href="#MODULE_BLOCK">MODULE_BLOCK Contents</a></li>
+    <li><a href="#PARAMATTR_BLOCK">PARAMATTR_BLOCK Contents</a></li>
+    <li><a href="#TYPE_BLOCK">TYPE_BLOCK Contents</a></li>
+    <li><a href="#CONSTANTS_BLOCK">CONSTANTS_BLOCK Contents</a></li>
+    <li><a href="#FUNCTION_BLOCK">FUNCTION_BLOCK Contents</a></li>
+    <li><a href="#TYPE_SYMTAB_BLOCK">TYPE_SYMTAB_BLOCK Contents</a></li>
+    <li><a href="#VALUE_SYMTAB_BLOCK">VALUE_SYMTAB_BLOCK Contents</a></li>
+    <li><a href="#METADATA_BLOCK">METADATA_BLOCK Contents</a></li>
+    <li><a href="#METADATA_ATTACHMENT">METADATA_ATTACHMENT Contents</a></li>
     </ol>
   </li>
 </ol>
@@ -220,7 +229,7 @@ A bitstream is a sequential series of <a href="#blocks">Blocks</a> and
 abbreviation ID encoded as a fixed-bitwidth field.  The width is specified by
 the current block, as described below.  The value of the abbreviation ID
 specifies either a builtin ID (which have special meanings, defined below) or
-one of the abbreviation IDs defined by the stream itself.
+one of the abbreviation IDs defined for the current block by the stream itself.
 </p>
 
 <p>
@@ -254,11 +263,11 @@ Blocks in a bitstream denote nested regions of the stream, and are identified by
 a content-specific id number (for example, LLVM IR uses an ID of 12 to represent
 function bodies).  Block IDs 0-7 are reserved for <a href="#stdblocks">standard blocks</a>
 whose meaning is defined by Bitcode; block IDs 8 and greater are
-application specific. Nested blocks capture the hierachical structure of the data
+application specific. Nested blocks capture the hierarchical structure of the data
 encoded in it, and various properties are associated with blocks as the file is
 parsed.  Block definitions allow the reader to efficiently skip blocks
 in constant time if the reader wants a summary of blocks, or if it wants to
-efficiently skip data they do not understand.  The LLVM IR reader uses this
+efficiently skip data it does not understand.  The LLVM IR reader uses this
 mechanism to skip function bodies, lazily reading them on demand.
 </p>
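
The constant-time skip falls directly out of the block encoding: an ENTER_SUBBLOCK entry records the length of the block body as a count of 32-bit words, so a reader can jump over the body without decoding it. Below is a minimal Python sketch, assuming the ENTER_SUBBLOCK layout described in this document (block ID as a vbr8, new abbrev ID width as a vbr4, alignment to 32 bits, then the word count as a 32-bit field); the BitCursor class and its method names are illustrative, not LLVM's actual reader API.

    class BitCursor:
        """Minimal bit cursor over a bytes buffer; bits are read LSB-first
        from little-endian words, matching the bitstream format."""
        def __init__(self, data):
            self.data = data
            self.pos = 0                            # position in bits
        def read_fixed(self, width):
            v = 0
            for i in range(width):
                bit = self.pos + i
                v |= ((self.data[bit // 8] >> (bit % 8)) & 1) << i
            self.pos += width
            return v
        def read_vbr(self, width):
            v, shift = 0, 0
            while True:
                chunk = self.read_fixed(width)
                v |= (chunk & ((1 << (width - 1)) - 1)) << shift
                if not (chunk >> (width - 1)):      # continuation bit clear
                    return v
                shift += width - 1
        def align32(self):
            self.pos = (self.pos + 31) & ~31

    def skip_block(cur):
        """Skip one sub-block; cur must be positioned just after the
        ENTER_SUBBLOCK abbreviation ID."""
        blockid = cur.read_vbr(8)        # content-specific block ID
        cur.read_vbr(4)                  # abbrev ID width inside the block
        cur.align32()                    # body starts on a 32-bit boundary
        numwords = cur.read_fixed(32)    # body length, in 32-bit words
        cur.pos += 32 * numwords         # the constant-time skip
        return blockid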
 
@@ -268,7 +277,8 @@ block.  In particular, each block maintains:
 </p>
 
 <ol>
-<li>A current abbrev id width.  This value starts at 2, and is set every time a
+<li>A current abbrev id width.  This value starts at 2 at the beginning of
+    the stream, and is set every time a
     block record is entered.  The block entry specifies the abbrev id width for
     the body of the block.</li>
 
@@ -335,13 +345,14 @@ an even multiple of 32-bits.
 
 <div class="doc_text">
 <p>
-Data records consist of a record code and a number of (up to) 64-bit integer
-values.  The interpretation of the code and values is application specific and
-there are multiple different ways to encode a record (with an unabbrev record or
-with an abbreviation).  In the LLVM IR format, for example, there is a record
+Data records consist of a record code and a number of (up to) 64-bit
+integer values.  The interpretation of the code and values is
+application specific and may vary between different block types.
+Records can be encoded either using an unabbrev record, or with an
+abbreviation.  In the LLVM IR format, for example, there is a record
 which encodes the target triple of a module.  The code is
-<tt>MODULE_CODE_TRIPLE</tt>, and the values of the record are the ASCII codes
-for the characters in the string.
+<tt>MODULE_CODE_TRIPLE</tt>, and the values of the record are the
+ASCII codes for the characters in the string.
 </p>
 
 </div>
@@ -358,7 +369,7 @@ Encoding</a></div>
 <p>
 An <tt>UNABBREV_RECORD</tt> provides a default fallback encoding, which is both
 completely general and extremely inefficient.  It can describe an arbitrary
-record by emitting the code and operands as vbrs.
+record by emitting the code and operands as VBRs.
 </p>
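
Concretely, a VBR-N value is written low bits first in chunks of N bits, with the top bit of each chunk set whenever more chunks follow. A short Python sketch of the chunking (the helper name is made up for illustration):

    def encode_vbr(value, width=6):
        """Split an unsigned value into VBR chunks of the given width;
        each chunk carries width-1 payload bits, and the top bit marks
        continuation."""
        assert value >= 0
        payload = width - 1
        chunks = []
        while True:
            chunk = value & ((1 << payload) - 1)
            value >>= payload
            if value:
                chunk |= 1 << payload    # more chunks follow
            chunks.append(chunk)
            if not value:
                return chunks

    # encode_vbr(27) == [27]; encode_vbr(100) == [36, 3], i.e. 4 plus the
    # continuation bit, then 3 (3 * 32 + 4 == 100).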
 
 <p>
@@ -391,6 +402,11 @@ allows the files to be completely self describing.  The actual encoding of
 abbreviations is defined below.
 </p>
 
+<p>The record code, which is the first field of an abbreviated record,
+may be encoded in the abbreviation definition (as a literal
+operand) or supplied in the abbreviated record (as a Fixed or VBR
+operand value).</p>
+
 </div>
 
 <!-- ======================================================================= -->
@@ -409,8 +425,9 @@ emitted.
 <p>
 Abbreviations can be determined dynamically per client, per file. Because the
 abbreviations are stored in the bitstream itself, different streams of the same
-format can contain different sets of abbreviations if the specific stream does
-not need it.  As a concrete example, LLVM IR files usually emit an abbreviation
+format can contain different sets of abbreviations according to the needs
+of the specific stream.
+As a concrete example, LLVM IR files usually emit an abbreviation
 for binary operators.  If a specific LLVM module contained no or few binary
 operators, the abbreviation does not need to be emitted.
 </p>
@@ -431,7 +448,8 @@ defined abbreviations in the scope of this block.  This definition only exists
 inside this immediate block &mdash; it is not visible in subblocks or enclosing
 blocks.  Abbreviations are implicitly assigned IDs sequentially starting from 4
 (the first application-defined abbreviation ID).  Any abbreviations defined in a
-<tt>BLOCKINFO</tt> record receive IDs first, in order, followed by any
+<tt>BLOCKINFO</tt> record for the particular block type
+receive IDs first, in order, followed by any
 abbreviations defined within the block itself.  Abbreviated data records
 reference this ID to indicate what abbreviation they are invoking.
 </p>
@@ -461,31 +479,32 @@ emitted as their code, followed by the extra data.
 
 <p>The possible operand encodings are:</p>
 
-<ol>
-<li>Fixed: The field should be emitted as
+<ul>
+<li>Fixed (code 1): The field should be emitted as
     a <a href="#fixedwidth">fixed-width value</a>, whose width is specified by
     the operand's extra data.</li>
-<li>VBR: The field should be emitted as
+<li>VBR (code 2): The field should be emitted as
     a <a href="#variablewidth">variable-width value</a>, whose width is
     specified by the operand's extra data.</li>
-<li>Array: This field is an array of values.  The array operand
-    has no extra data, but expects another operand to follow it which indicates
+<li>Array (code 3): This field is an array of values.  The array operand
+    has no extra data, but expects another operand to follow it, indicating
     the element type of the array.  When reading an array in an abbreviated
     record, the first integer is a vbr6 that indicates the array length,
     followed by the encoded elements of the array.  An array may only occur as
     the last operand of an abbreviation (except for the one final operand that
     gives the array's type).</li>
-<li>Char6: This field should be emitted as
+<li>Char6 (code 4): This field should be emitted as
     a <a href="#char6">char6-encoded value</a>.  This operand type takes no
-    extra data.</li>
-<li>Blob: This field is emitted as a vbr6, followed by padding to a
+    extra data. Char6 encoding is normally used as an array element type.
+    </li>
+<li>Blob (code 5): This field is emitted as a vbr6, followed by padding to a
     32-bit boundary (for alignment) and an array of 8-bit objects.  The array of
     bytes is further followed by tail padding to ensure that its total length is
     a multiple of 4 bytes.  This makes it very efficient for the reader to
     decode the data without having to make a copy of it: it can use a pointer to
     the data in the mapped in file and poke directly at it.  A blob may only
     occur as the last operand of an abbreviation.</li>
-</ol>
+</ul>
 
 <p>
 For example, target triples in LLVM modules are encoded as a record of the
@@ -517,7 +536,7 @@ as:
 
 <ol>
 <li>The first value, 4, is the abbreviation ID for this abbreviation.</li>
-<li>The second value, 2, is the code for <tt>TRIPLE</tt> in LLVM IR files.</li>
+<li>The second value, 2, is the record code for <tt>TRIPLE</tt> records within LLVM IR file <tt>MODULE_BLOCK</tt> blocks.</li>
 <li>The third value, 4, is the length of the array.</li>
 <li>The rest of the values are the char6 encoded values
     for <tt>"abcd"</tt>.</li>
@@ -541,7 +560,7 @@ used for any other string value.
 
 <p>
 In addition to the basic block structure and record encodings, the bitstream
-also defines specific builtin block types.  These block types specify how the
+also defines specific built-in block types.  These block types specify how the
 stream is to be decoded or other metadata.  In the future, new standard blocks
 may be added.  Block IDs 0-7 are reserved for standard blocks.
 </p>
@@ -569,7 +588,7 @@ blocks.  The currently specified records are:
 </div>
 
 <p>
-The <tt>SETBID</tt> record indicates which block ID is being
+The <tt>SETBID</tt> record (code 1) indicates which block ID is being
 described.  <tt>SETBID</tt> records can occur multiple times throughout the
 block to change which block ID is being described.  There must be
 a <tt>SETBID</tt> record prior to any other records.
@@ -584,13 +603,13 @@ in <tt>BLOCKINFO</tt> blocks receive abbreviation IDs as described
 in <tt><a href="#DEFINE_ABBREV">DEFINE_ABBREV</a></tt>.
 </p>
 
-<p>The <tt>BLOCKNAME</tt> can optionally occur in this block.  The elements of
-the record are the bytes for the string name of the block.  llvm-bcanalyzer uses
+<p>The <tt>BLOCKNAME</tt> record (code 2) can optionally occur in this block.  The elements of
+the record are the bytes of the string name of the block.  llvm-bcanalyzer can use
 this to dump out bitcode files symbolically.</p>
 
-<p>The <tt>SETRECORDNAME</tt> record can optionally occur in this block.  The
-first entry is a record ID number and the rest of the elements of the record are
-the bytes for the string name of the record.  llvm-bcanalyzer uses
+<p>The <tt>SETRECORDNAME</tt> record (code 3) can also optionally occur in this block.  The
+first operand value is a record ID number, and the rest of the elements of the record are
+the bytes for the string name of the record.  llvm-bcanalyzer can use
 this to dump out bitcode files symbolically.</p>
 
 <p>
@@ -626,7 +645,7 @@ Each of the fields are 32-bit fields stored in little endian form (as with
 the rest of the bitcode file fields).  The Magic number is always
 <tt>0x0B17C0DE</tt> and the version is currently always <tt>0</tt>.  The Offset
 field is the offset in bytes to the start of the bitcode stream in the file, and
-the Size field is a size in bytes of the stream. CPUType is a target-specific
+the Size field is the size in bytes of the stream. CPUType is a target-specific
 value that can be used to encode the CPU of the target.
 </p>
 
@@ -681,26 +700,28 @@ When combined with the bitcode magic number and viewed as bytes, this is
 <div class="doc_text">
 
 <p>
-<a href="#variablewidth">Variable Width Integers</a> are an efficient way to
-encode arbitrary sized unsigned values, but is an extremely inefficient way to
-encode signed values (as signed values are otherwise treated as maximally large
-unsigned values).
+<a href="#variablewidth">Variable Width Integer</a> encoding is an efficient way to
+encode arbitrarily sized unsigned values, but is extremely inefficient for
+encoding signed values, as signed values are otherwise treated as maximally large
+unsigned values.
 </p>
 
 <p>
-As such, signed vbr values of a specific width are emitted as follows:
+As such, signed VBR values of a specific width are emitted as follows:
 </p>
 
 <ul>
-<li>Positive values are emitted as vbrs of the specified width, but with their
+<li>Positive values are emitted as VBRs of the specified width, but with their
     value shifted left by one.</li>
-<li>Negative values are emitted as vbrs of the specified width, but the negated
+<li>Negative values are emitted as VBRs of the specified width, but the negated
     value is shifted left by one, and the low bit is set.</li>
 </ul>
 
 <p>
-With this encoding, small positive and small negative values can both be emitted
-efficiently.
+With this encoding, small positive and small negative values can both
+be emitted efficiently. Signed VBR encoding is used in
+<tt>CST_CODE_INTEGER</tt> and <tt>CST_CODE_WIDE_INTEGER</tt> records
+within <tt>CONSTANTS_BLOCK</tt> blocks.
 </p>
 
 </div>
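
A Python sketch of the signed mapping just described (the VBR chunking itself is unchanged from above; the function names are made up):

    def to_signed_vbr_value(v):
        """Non-negative values are shifted left by one; negative values
        are negated, shifted left by one, and get the low bit set."""
        return (v << 1) if v >= 0 else ((-v) << 1) | 1

    def from_signed_vbr_value(u):
        return -(u >> 1) if (u & 1) else (u >> 1)

    # Small magnitudes stay small: to_signed_vbr_value(2) == 4 and
    # to_signed_vbr_value(-3) == 7, so both still fit in one vbr6 chunk.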
@@ -716,21 +737,23 @@ LLVM IR is defined with the following blocks:
 </p>
 
 <ul>
-<li>8  &mdash; <tt>MODULE_BLOCK</tt> &mdash; This is the top-level block that
+<li>8  &mdash; <a href="#MODULE_BLOCK"><tt>MODULE_BLOCK</tt></a> &mdash; This is the top-level block that
     contains the entire module, and describes a variety of per-module
     information.</li>
-<li>9  &mdash; <tt>PARAMATTR_BLOCK</tt> &mdash; This enumerates the parameter
+<li>9  &mdash; <a href="#PARAMATTR_BLOCK"><tt>PARAMATTR_BLOCK</tt></a> &mdash; This enumerates the parameter
     attributes.</li>
-<li>10 &mdash; <tt>TYPE_BLOCK</tt> &mdash; This describes all of the types in
+<li>10 &mdash; <a href="#TYPE_BLOCK"><tt>TYPE_BLOCK</tt></a> &mdash; This describes all of the types in
     the module.</li>
-<li>11 &mdash; <tt>CONSTANTS_BLOCK</tt> &mdash; This describes constants for a
+<li>11 &mdash; <a href="#CONSTANTS_BLOCK"><tt>CONSTANTS_BLOCK</tt></a> &mdash; This describes constants for a
     module or function.</li>
-<li>12 &mdash; <tt>FUNCTION_BLOCK</tt> &mdash; This describes a function
+<li>12 &mdash; <a href="#FUNCTION_BLOCK"><tt>FUNCTION_BLOCK</tt></a> &mdash; This describes a function
     body.</li>
-<li>13 &mdash; <tt>TYPE_SYMTAB_BLOCK</tt> &mdash; This describes the type symbol
+<li>13 &mdash; <a href="#TYPE_SYMTAB_BLOCK"><tt>TYPE_SYMTAB_BLOCK</tt></a> &mdash; This describes the type symbol
     table.</li>
-<li>14 &mdash; <tt>VALUE_SYMTAB_BLOCK</tt> &mdash; This describes a value symbol
+<li>14 &mdash; <a href="#VALUE_SYMTAB_BLOCK"><tt>VALUE_SYMTAB_BLOCK</tt></a> &mdash; This describes a value symbol
     table.</li>
+<li>15 &mdash; <a href="#METADATA_BLOCK"><tt>METADATA_BLOCK</tt></a> &mdash; This describes metadata items.</li>
+<li>16 &mdash; <a href="#METADATA_ATTACHMENT"><tt>METADATA_ATTACHMENT</tt></a> &mdash; This contains records associating metadata with function instruction values.</li>
 </ul>
 
 </div>
@@ -741,7 +764,387 @@ LLVM IR is defined with the following blocks:
 
 <div class="doc_text">
 
-<p>
+<p>The <tt>MODULE_BLOCK</tt> block (id 8) is the top-level block for LLVM
+bitcode files, and each bitcode file must contain exactly one. In
+addition to records (described below) containing information
+about the module, a <tt>MODULE_BLOCK</tt> block may contain the
+following sub-blocks:
+</p>
+
+<ul>
+<li><a href="#BLOCKINFO"><tt>BLOCKINFO</tt></a></li>
+<li><a href="#PARAMATTR_BLOCK"><tt>PARAMATTR_BLOCK</tt></a></li>
+<li><a href="#TYPE_BLOCK"><tt>TYPE_BLOCK</tt></a></li>
+<li><a href="#TYPE_SYMTAB_BLOCK"><tt>TYPE_SYMTAB_BLOCK</tt></a></li>
+<li><a href="#VALUE_SYMTAB_BLOCK"><tt>VALUE_SYMTAB_BLOCK</tt></a></li>
+<li><a href="#CONSTANTS_BLOCK"><tt>CONSTANTS_BLOCK</tt></a></li>
+<li><a href="#FUNCTION_BLOCK"><tt>FUNCTION_BLOCK</tt></a></li>
+<li><a href="#METADATA_BLOCK"><tt>METADATA_BLOCK</tt></a></li>
+</ul>
+
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection"><a name="MODULE_CODE_VERSION">MODULE_CODE_VERSION Record</a>
+</div>
+
+<div class="doc_text">
+
+<p><tt>[VERSION, version#]</tt></p>
+
+<p>The <tt>VERSION</tt> record (code 1) contains a single value
+indicating the format version. Only version 0 is supported at this
+time.</p>
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection"><a name="MODULE_CODE_TRIPLE">MODULE_CODE_TRIPLE Record</a>
+</div>
+
+<div class="doc_text">
+<p><tt>[TRIPLE, ...string...]</tt></p>
+
+<p>The <tt>TRIPLE</tt> record (code 2) contains a variable number of
+values representing the bytes of the <tt>target triple</tt>
+specification string.</p>
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection"><a name="MODULE_CODE_DATALAYOUT">MODULE_CODE_DATALAYOUT Record</a>
+</div>
+
+<div class="doc_text">
+<p><tt>[DATALAYOUT, ...string...]</tt></p>
+
+<p>The <tt>DATALAYOUT</tt> record (code 3) contains a variable number of
+values representing the bytes of the <tt>target datalayout</tt>
+specification string.</p>
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection"><a name="MODULE_CODE_ASM">MODULE_CODE_ASM Record</a>
+</div>
+
+<div class="doc_text">
+<p><tt>[ASM, ...string...]</tt></p>
+
+<p>The <tt>ASM</tt> record (code 4) contains a variable number of
+values representing the bytes of <tt>module asm</tt> strings, with
+individual assembly blocks separated by newline (ASCII 10) characters.</p>
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection"><a name="MODULE_CODE_SECTIONNAME">MODULE_CODE_SECTIONNAME Record</a>
+</div>
+
+<div class="doc_text">
+<p><tt>[SECTIONNAME, ...string...]</tt></p>
+
+<p>The <tt>SECTIONNAME</tt> record (code 5) contains a variable number
+of values representing the bytes of a single section name
+string. There should be one <tt>SECTIONNAME</tt> record for each
+section name referenced (e.g., in global variable or function
+<tt>section</tt> attributes) within the module. These records can be
+referenced by the 1-based index in the <i>section</i> fields of
+<tt>GLOBALVAR</tt> or <tt>FUNCTION</tt> records.</p>
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection"><a name="MODULE_CODE_DEPLIB">MODULE_CODE_DEPLIB Record</a>
+</div>
+
+<div class="doc_text">
+<p><tt>[DEPLIB, ...string...]</tt></p>
+
+<p>The <tt>DEPLIB</tt> record (code 6) contains a variable number of
+values representing the bytes of a single dependent library name
+string, one of the libraries mentioned in a <tt>deplibs</tt>
+declaration.  There should be one <tt>DEPLIB</tt> record for each
+library name referenced.</p>
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection"><a name="MODULE_CODE_GLOBALVAR">MODULE_CODE_GLOBALVAR Record</a>
+</div>
+
+<div class="doc_text">
+<p><tt>[GLOBALVAR, pointer type, isconst, initid, linkage, alignment, section, visibility, threadlocal]</tt></p>
+
+<p>The <tt>GLOBALVAR</tt> record (code 7) marks the declaration or
+definition of a global variable. The operand fields are:</p>
+
+<ul>
+<li><i>pointer type</i>: The type index of the pointer type used to point to
+this global variable</li>
+
+<li><i>isconst</i>: Non-zero if the variable is treated as constant within
+the module, or zero if it is not</li>
+
+<li><i>initid</i>: If non-zero, the value index of the initializer for this
+variable, plus 1.</li>
+
+<li><a name="linkage"><i>linkage</i></a>: An encoding of the linkage
+type for this variable:
+  <ul>
+    <li><tt>external</tt>: code 0</li>
+    <li><tt>weak</tt>: code 1</li>
+    <li><tt>appending</tt>: code 2</li>
+    <li><tt>internal</tt>: code 3</li>
+    <li><tt>linkonce</tt>: code 4</li>
+    <li><tt>dllimport</tt>: code 5</li>
+    <li><tt>dllexport</tt>: code 6</li>
+    <li><tt>extern_weak</tt>: code 7</li>
+    <li><tt>common</tt>: code 8</li>
+    <li><tt>private</tt>: code 9</li>
+    <li><tt>weak_odr</tt>: code 10</li>
+    <li><tt>linkonce_odr</tt>: code 11</li>
+    <li><tt>available_externally</tt>: code 12</li>
+    <li><tt>linker_private</tt>: code 13</li>
+  </ul>
+</li>
+
+<li><i>alignment</i>: The logarithm base 2 of the variable's requested
+alignment, plus 1</li>
+
+<li><i>section</i>: If non-zero, the 1-based section index in the
+table of <a href="#MODULE_CODE_SECTIONNAME">MODULE_CODE_SECTIONNAME</a>
+entries.</li>
+
+<li><a name="visibility"><i>visibility</i></a>: If present, an
+encoding of the visibility of this variable:
+  <ul>
+    <li><tt>default</tt>: code 0</li>
+    <li><tt>hidden</tt>: code 1</li>
+    <li><tt>protected</tt>: code 2</li>
+  </ul>
+</li>
+
+<li><i>threadlocal</i>: If present and non-zero, indicates that the variable
+is <tt>thread_local</tt></li>
+
+</ul>
+</div>
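
A worked example of the alignment encoding used by the GLOBALVAR (and FUNCTION) records: the field stores the base-2 logarithm of the alignment plus one, which leaves zero free to mean that no alignment was specified (an assumption consistent with the "plus 1" wording above). An illustrative Python helper, not LLVM's code:

    def encode_alignment(align):
        """Encode a power-of-two alignment as log2(align) + 1; 0 is
        taken to mean that no alignment was specified."""
        if align == 0:
            return 0
        assert align & (align - 1) == 0, "alignment must be a power of two"
        return align.bit_length()    # log2(align) + 1 for powers of two

    # encode_alignment(0) == 0, encode_alignment(1) == 1,
    # encode_alignment(8) == 4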
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection"><a name="MODULE_CODE_FUNCTION">MODULE_CODE_FUNCTION Record</a>
+</div>
+
+<div class="doc_text">
+
+<p><tt>[FUNCTION, type, callingconv, isproto, linkage, paramattr, alignment, section, visibility, gc]</tt></p>
+
+<p>The <tt>FUNCTION</tt> record (code 8) marks the declaration or
+definition of a function. The operand fields are:</p>
+
+<ul>
+<li><i>type</i>: The type index of the function type describing this function</li>
+
+<li><i>callingconv</i>: The calling convention number:
+  <ul>
+    <li><tt>ccc</tt>: code 0</li>
+    <li><tt>fastcc</tt>: code 8</li>
+    <li><tt>coldcc</tt>: code 9</li>
+    <li><tt>x86_stdcallcc</tt>: code 64</li>
+    <li><tt>x86_fastcallcc</tt>: code 65</li>
+    <li><tt>arm_apcscc</tt>: code 66</li>
+    <li><tt>arm_aapcscc</tt>: code 67</li>
+    <li><tt>arm_aapcs_vfpcc</tt>: code 68</li>
+  </ul>
+</li>
+
+<li><i>isproto</i>: Non-zero if this entry represents a declaration
+rather than a definition</li>
+
+<li><i>linkage</i>: An encoding of the <a href="#linkage">linkage type</a>
+for this function</li>
+
+<li><i>paramattr</i>: If nonzero, the 1-based parameter attribute index
+into the table of <a href="#PARAMATTR_CODE_ENTRY">PARAMATTR_CODE_ENTRY</a>
+entries.</li>
+
+<li><i>alignment</i>: The logarithm base 2 of the function's requested
+alignment, plus 1</li>
+
+<li><i>section</i>: If non-zero, the 1-based section index in the
+table of <a href="#MODULE_CODE_SECTIONNAME">MODULE_CODE_SECTIONNAME</a>
+entries.</li>
+
+<li><i>visibility</i>: An encoding of the <a href="#visibility">visibility</a>
+    of this function</li>
+
+<li><i>gc</i>: If present and nonzero, the 1-based garbage collector
+index in the table of
+<a href="#MODULE_CODE_GCNAME">MODULE_CODE_GCNAME</a> entries.</li>
+</ul>
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection"><a name="MODULE_CODE_ALIAS">MODULE_CODE_ALIAS Record</a>
+</div>
+
+<div class="doc_text">
+
+<p><tt>[ALIAS, alias type, aliasee val#, linkage, visibility]</tt></p>
+
+<p>The <tt>ALIAS</tt> record (code 9) marks the definition of an
+alias. The operand fields are:</p>
+
+<ul>
+<li><i>alias type</i>: The type index of the alias</li>
+
+<li><i>aliasee val#</i>: The value index of the aliased value</li>
+
+<li><i>linkage</i>: An encoding of the <a href="#linkage">linkage type</a>
+for this alias</li>
+
+<li><i>visibility</i>: If present, an encoding of the
+<a href="#visibility">visibility</a> of the alias</li>
+
+</ul>
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection"><a name="MODULE_CODE_PURGEVALS">MODULE_CODE_PURGEVALS Record</a>
+</div>
+
+<div class="doc_text">
+<p><tt>[PURGEVALS, numvals]</tt></p>
+
+<p>The <tt>PURGEVALS</tt> record (code 10) resets the module-level
+value list to the size given by the single operand value. Module-level
+value list items are added by <tt>GLOBALVAR</tt>, <tt>FUNCTION</tt>,
+and <tt>ALIAS</tt> records.  After a <tt>PURGEVALS</tt> record is seen,
+new value indices will start from the given <i>numvals</i> value.</p>
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection"><a name="MODULE_CODE_GCNAME">MODULE_CODE_GCNAME Record</a>
+</div>
+
+<div class="doc_text">
+<p><tt>[GCNAME, ...string...]</tt></p>
+
+<p>The <tt>GCNAME</tt> record (code 11) contains a variable number of
+values representing the bytes of a single garbage collector name
+string. There should be one <tt>GCNAME</tt> record for each garbage
+collector name referenced in function <tt>gc</tt> attributes within
+the module. These records can be referenced by 1-based index in the <i>gc</i>
+fields of <tt>FUNCTION</tt> records.</p>
+</div>
+
+<!-- ======================================================================= -->
+<div class="doc_subsection"><a name="PARAMATTR_BLOCK">PARAMATTR_BLOCK Contents</a>
+</div>
+
+<div class="doc_text">
+
+<p>The <tt>PARAMATTR_BLOCK</tt> block (id 9) ...
+</p>
+
+</div>
+
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection"><a name="PARAMATTR_CODE_ENTRY">PARAMATTR_CODE_ENTRY Record</a>
+</div>
+
+<div class="doc_text">
+
+<p><tt>[ENTRY, paramidx0, attr0, paramidx1, attr1...]</tt></p>
+
+<p>The <tt>ENTRY</tt> record (code 1) ...
+</p>
+</div>
+
+<!-- ======================================================================= -->
+<div class="doc_subsection"><a name="TYPE_BLOCK">TYPE_BLOCK Contents</a>
+</div>
+
+<div class="doc_text">
+
+<p>The <tt>TYPE_BLOCK</tt> block (id 10) ...
+</p>
+
+</div>
+
+
+<!-- ======================================================================= -->
+<div class="doc_subsection"><a name="CONSTANTS_BLOCK">CONSTANTS_BLOCK Contents</a>
+</div>
+
+<div class="doc_text">
+
+<p>The <tt>CONSTANTS_BLOCK</tt> block (id 11) ...
+</p>
+
+</div>
+
+
+<!-- ======================================================================= -->
+<div class="doc_subsection"><a name="FUNCTION_BLOCK">FUNCTION_BLOCK Contents</a>
+</div>
+
+<div class="doc_text">
+
+<p>The <tt>FUNCTION_BLOCK</tt> block (id 12) ...
+</p>
+
+<p>In addition to the record types described below, a
+<tt>FUNCTION_BLOCK</tt> block may contain the following sub-blocks:
+</p>
+
+<ul>
+<li><a href="#CONSTANTS_BLOCK"><tt>CONSTANTS_BLOCK</tt></a></li>
+<li><a href="#VALUE_SYMTAB_BLOCK"><tt>VALUE_SYMTAB_BLOCK</tt></a></li>
+<li><a href="#METADATA_ATTACHMENT"><tt>METADATA_ATTACHMENT</tt></a></li>
+</ul>
+
+</div>
+
+
+<!-- ======================================================================= -->
+<div class="doc_subsection"><a name="TYPE_SYMTAB_BLOCK">TYPE_SYMTAB_BLOCK Contents</a>
+</div>
+
+<div class="doc_text">
+
+<p>The <tt>TYPE_SYMTAB_BLOCK</tt> block (id 13) ...
+</p>
+
+</div>
+
+
+<!-- ======================================================================= -->
+<div class="doc_subsection"><a name="VALUE_SYMTAB_BLOCK">VALUE_SYMTAB_BLOCK Contents</a>
+</div>
+
+<div class="doc_text">
+
+<p>The <tt>VALUE_SYMTAB_BLOCK</tt> block (id 14) ... 
+</p>
+
+</div>
+
+
+<!-- ======================================================================= -->
+<div class="doc_subsection"><a name="METADATA_BLOCK">METADATA_BLOCK Contents</a>
+</div>
+
+<div class="doc_text">
+
+<p>The <tt>METADATA_BLOCK</tt> block (id 15) ...
+</p>
+
+</div>
+
+
+<!-- ======================================================================= -->
+<div class="doc_subsection"><a name="METADATA_ATTACHMENT">METADATA_ATTACHMENT Contents</a>
+</div>
+
+<div class="doc_text">
+
+<p>The <tt>METADATA_ATTACHMENT</tt> block (id 16) ...
 </p>
 
 </div>
diff --git a/libclamav/c++/llvm/docs/Bugpoint.html b/libclamav/c++/llvm/docs/Bugpoint.html
index 0f5a511..bf75b5b 100644
--- a/libclamav/c++/llvm/docs/Bugpoint.html
+++ b/libclamav/c++/llvm/docs/Bugpoint.html
@@ -216,6 +216,17 @@ non-obvious ways.  Here are some hints and tips:<p>
     the list of specified optimizations to be randomized and applied to the 
     program. This process will repeat until a bug is found or the user
     kills <tt>bugpoint</tt>.
+
+<li><p><tt>bugpoint</tt> does not understand the <tt>-O</tt> option
+    that is used to specify the optimization level to <tt>opt</tt>. You
+    can use, e.g.,</p>
+
+<div class="doc_code">
+<p><tt>opt -O2 -debug-pass=Arguments foo.bc -disable-output</tt></p>
+</div>
+
+    <p>to get a list of passes that are used with <tt>-O2</tt> and
+    then pass this list to <tt>bugpoint</tt>.</p>
     
 </ol>
 
diff --git a/libclamav/c++/llvm/docs/CMake.html b/libclamav/c++/llvm/docs/CMake.html
index 25f4710..2b7fda3 100644
--- a/libclamav/c++/llvm/docs/CMake.html
+++ b/libclamav/c++/llvm/docs/CMake.html
@@ -209,7 +209,7 @@
   <dt><b>CMAKE_BUILD_TYPE</b>:STRING</dt>
 
   <dd>Sets the build type for <i>make</i> based generators. Possible
-    values are Release, Debug, RelWithDebInfo and MiniSizeRel. On
+    values are Release, Debug, RelWithDebInfo and MinSizeRel. On
     systems like Visual Studio the user sets the build type with the IDE
     settings.</dd>
 
@@ -251,16 +251,22 @@
     <i>-DLLVM_TARGETS_TO_BUILD="X86;PowerPC;Alpha"</i>.</dd>
 
   <dt><b>LLVM_BUILD_TOOLS</b>:BOOL</dt>
-  <dd>Build LLVM tools. Defaults to ON.</dd>
+  <dd>Build LLVM tools. Defaults to ON. Targets for building each tool
+    are generated in any case. You can build a tool separately by
+    invoking its target. For example, you can build <i>llvm-as</i>
+    with a makefile-based system by executing <i>make llvm-as</i> at the
+    root of your build directory.</dd>
 
   <dt><b>LLVM_BUILD_EXAMPLES</b>:BOOL</dt>
-  <dd>Build LLVM examples. Defaults to ON.</dd>
+  <dd>Build LLVM examples. Defaults to OFF. Targets for building each
+    example are generated in any case. See documentation
+    for <i>LLVM_BUILD_TOOLS</i> above for more details.</dd>
 
   <dt><b>LLVM_ENABLE_THREADS</b>:BOOL</dt>
   <dd>Build with threads support, if available. Defaults to ON.</dd>
 
   <dt><b>LLVM_ENABLE_ASSERTIONS</b>:BOOL</dt>
-  <dd>Enables code assertions. Defaults to ON if and only if
+  <dd>Enables code assertions. Defaults to OFF if and only if
     CMAKE_BUILD_TYPE is <i>Release</i>.</dd>
 
   <dt><b>LLVM_ENABLE_PIC</b>:BOOL</dt>
diff --git a/libclamav/c++/llvm/docs/CodeGenerator.html b/libclamav/c++/llvm/docs/CodeGenerator.html
index 4f8472c..cc3a541 100644
--- a/libclamav/c++/llvm/docs/CodeGenerator.html
+++ b/libclamav/c++/llvm/docs/CodeGenerator.html
@@ -1812,24 +1812,27 @@ define fastcc i32 @tailcaller(i32 %in1, i32 %in2) {
 
 <div class="doc_code">
 <pre>
-Base + [1,2,4,8] * IndexReg + Disp32
+SegmentReg: Base + [1,2,4,8] * IndexReg + Disp32
 </pre>
 </div>
 
-<p>In order to represent this, LLVM tracks no less than 4 operands for each
+<p>In order to represent this, LLVM tracks no less than 5 operands for each
    memory operand of this form.  This means that the "load" form of
    '<tt>mov</tt>' has the following <tt>MachineOperand</tt>s in this order:</p>
 
 <div class="doc_code">
 <pre>
-Index:        0     |    1        2       3           4
-Meaning:   DestReg, | BaseReg,  Scale, IndexReg, Displacement
-OperandTy: VirtReg, | VirtReg, UnsImm, VirtReg,   SignExtImm
+Index:        0     |    1        2       3           4          5
+Meaning:   DestReg, | BaseReg,  Scale, IndexReg, Displacement, Segment
+OperandTy: VirtReg, | VirtReg, UnsImm, VirtReg,   SignExtImm, PhysReg
 </pre>
 </div>
 
 <p>Stores, and all other instructions, treat the five memory operands in the
-   same way and in the same order.</p>
+   same way and in the same order.  If the segment register is unspecified
+   (regno = 0), then no segment override is generated.  "Lea" operations do not
+   have a segment register specified, so they only have 4 operands for their
+   memory reference.</p>
 
 </div>
 
diff --git a/libclamav/c++/llvm/docs/CodingStandards.html b/libclamav/c++/llvm/docs/CodingStandards.html
index 8945216..ee9443d 100644
--- a/libclamav/c++/llvm/docs/CodingStandards.html
+++ b/libclamav/c++/llvm/docs/CodingStandards.html
@@ -303,7 +303,7 @@ for debate.</p>
 <div class="doc_text">
 
 <p>In all cases, prefer spaces to tabs in source files.  People have different
-prefered indentation levels, and different styles of indentation that they
+preferred indentation levels, and different styles of indentation that they
 like... this is fine.  What isn't is that different editors/viewers expand tabs
 out to different tab stops.  This can cause your code to look completely
 unreadable, and it is not worth dealing with.</p>
@@ -491,7 +491,7 @@ most cases, you simply don't need the definition of a class... and not
 <b>must</b> include all of the header files that you are using -- you can 
 include them either directly
 or indirectly (through another header file).  To make sure that you don't
-accidently forget to include a header file in your module header, make sure to
+accidentally forget to include a header file in your module header, make sure to
 include your module header <b>first</b> in the implementation file (as mentioned
 above).  This way there won't be any hidden dependencies that you'll find out
 about later...</p>
@@ -790,7 +790,7 @@ locality.</p>
 <div class="doc_text">
 
 <p>Use the "<tt>assert</tt>" function to its fullest.  Check all of your
-preconditions and assumptions, you never know when a bug (not neccesarily even
+preconditions and assumptions, you never know when a bug (not necessarily even
 yours) might be caught early by an assertion, which reduces debugging time
 dramatically.  The "<tt>&lt;cassert&gt;</tt>" header file is probably already
 included by the header files you are using, so it doesn't cost anything to use
diff --git a/libclamav/c++/llvm/docs/CommandGuide/FileCheck.pod b/libclamav/c++/llvm/docs/CommandGuide/FileCheck.pod
index 539f66f..32516ad 100644
--- a/libclamav/c++/llvm/docs/CommandGuide/FileCheck.pod
+++ b/libclamav/c++/llvm/docs/CommandGuide/FileCheck.pod
@@ -21,9 +21,6 @@ for matching multiple different inputs in one file in a specific order.
 The I<match-filename> file specifies the file that contains the patterns to
 match.  The file to verify is always read from standard input.
 
-The input and output of B<FileCheck> is beyond the scope of this short
-introduction. Please see the I<TestingGuide> page in the LLVM documentation.
-
 =head1 OPTIONS
 
 =over
@@ -58,6 +55,189 @@ If B<FileCheck> verifies that the file matches the expected contents, it exits
 with 0.  Otherwise, if not, or if an error occurs, it will exit with a non-zero
 value.
 
+=head1 TUTORIAL
+
+FileCheck is typically used from LLVM regression tests, being invoked on the RUN
+line of the test.  A simple example of using FileCheck from a RUN line looks
+like this:
+
+  ; RUN: llvm-as < %s | llc -march=x86-64 | FileCheck %s
+
+This syntax says to pipe the current file ("%s") into llvm-as, pipe that into
+llc, then pipe the output of llc into FileCheck.  This means that FileCheck will
+be verifying its standard input (the llc output) against the filename argument
+specified (the original .ll file specified by "%s").  To see how this works,
+lets look at the rest of the .ll file (after the RUN line):
+
+  define void @sub1(i32* %p, i32 %v) {
+  entry:
+  ; <b>CHECK: sub1:</b>
+  ; <b>CHECK: subl</b>
+          %0 = tail call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %p, i32 %v)
+          ret void
+  }
+  
+  define void @inc4(i64* %p) {
+  entry:
+  ; <b>CHECK: inc4:</b>
+  ; <b>CHECK: incq</b>
+          %0 = tail call i64 @llvm.atomic.load.add.i64.p0i64(i64* %p, i64 1)
+          ret void
+  }
+
+Here you can see some "CHECK:" lines specified in comments.  Now you can see
+how the file is piped into llvm-as, then llc, and the machine code output is
+what we are verifying.  FileCheck checks the machine code output to verify that
+it matches what the "CHECK:" lines specify.
+
+The syntax of the CHECK: lines is very simple: they are fixed strings that
+must occur in order.  FileCheck defaults to ignoring horizontal whitespace
+differences (e.g. a space is allowed to match a tab) but otherwise, the contents
+of the CHECK: line are required to match something in the test file exactly.
+
+One nice thing about FileCheck (compared to grep) is that it allows merging
+test cases together into logical groups.  For example, because the test above
+is checking for the "sub1:" and "inc4:" labels, it will not match unless there
+is a "subl" in between those labels.  If it existed somewhere else in the file,
+that would not count: "grep subl" matches if subl exists anywhere in the
+file.
+
+
+
+=head2 The FileCheck -check-prefix option
+
+The FileCheck -check-prefix option allows multiple test configurations to be
+driven from one .ll file.  This is useful in many circumstances, for example,
+testing different architectural variants with llc.  Here's a simple example:
+
+  ; RUN: llvm-as < %s | llc -mtriple=i686-apple-darwin9 -mattr=sse41 \
+  ; RUN:              | <b>FileCheck %s -check-prefix=X32</b>
+  ; RUN: llvm-as < %s | llc -mtriple=x86_64-apple-darwin9 -mattr=sse41 \
+  ; RUN:              | <b>FileCheck %s -check-prefix=X64</b>
+
+  define <4 x i32> @pinsrd_1(i32 %s, <4 x i32> %tmp) nounwind {
+          %tmp1 = insertelement <4 x i32> %tmp, i32 %s, i32 1
+          ret <4 x i32> %tmp1
+  ; <b>X32:</b> pinsrd_1:
+  ; <b>X32:</b>    pinsrd $1, 4(%esp), %xmm0
+  
+  ; <b>X64:</b> pinsrd_1:
+  ; <b>X64:</b>    pinsrd $1, %edi, %xmm0
+  }
+
+In this case, we're testing that we get the expected code generation with
+both 32-bit and 64-bit code generation.
+
+
+
+=head2 The "CHECK-NEXT:" directive
+
+Sometimes you want to match lines and would like to verify that matches
+happen on exactly consecutive lines with no other lines in between them.  In
+this case, you can use CHECK: and CHECK-NEXT: directives to specify this.  If
+you specified a custom check prefix, just use "<PREFIX>-NEXT:".  For
+example, something like this works as you'd expect:
+
+  define void @t2(<2 x double>* %r, <2 x double>* %A, double %B) {
+	%tmp3 = load <2 x double>* %A, align 16
+	%tmp7 = insertelement <2 x double> undef, double %B, i32 0
+	%tmp9 = shufflevector <2 x double> %tmp3,
+                              <2 x double> %tmp7,
+                              <2 x i32> < i32 0, i32 2 >
+	store <2 x double> %tmp9, <2 x double>* %r, align 16
+	ret void
+        
+  ; <b>CHECK:</b> t2:
+  ; <b>CHECK:</b> 	movl	8(%esp), %eax
+  ; <b>CHECK-NEXT:</b> 	movapd	(%eax), %xmm0
+  ; <b>CHECK-NEXT:</b> 	movhpd	12(%esp), %xmm0
+  ; <b>CHECK-NEXT:</b> 	movl	4(%esp), %eax
+  ; <b>CHECK-NEXT:</b> 	movapd	%xmm0, (%eax)
+  ; <b>CHECK-NEXT:</b> 	ret
+  }
+
+A CHECK-NEXT: directive rejects the input unless there is exactly one newline
+between it and the previous directive.  A CHECK-NEXT cannot be the first
+directive in a file.
+
+
+
+=head2 The "CHECK-NOT:" directive
+
+The CHECK-NOT: directive is used to verify that a string doesn't occur
+between two matches (or the first match and the beginning of the file).  For
+example, to verify that a load is removed by a transformation, a test like this
+can be used:
+
+  define i8 @coerce_offset0(i32 %V, i32* %P) {
+    store i32 %V, i32* %P
+   
+    %P2 = bitcast i32* %P to i8*
+    %P3 = getelementptr i8* %P2, i32 2
+
+    %A = load i8* %P3
+    ret i8 %A
+  ; <b>CHECK:</b> @coerce_offset0
+  ; <b>CHECK-NOT:</b> load
+  ; <b>CHECK:</b> ret i8
+  }
+
+
+
+=head2 FileCheck Pattern Matching Syntax
+
+The CHECK: and CHECK-NOT: directives both take a pattern to match.  For most
+uses of FileCheck, fixed string matching is perfectly sufficient.  For some
+things, a more flexible form of matching is desired.  To support this, FileCheck
+allows you to specify regular expressions in matching strings, surrounded by
+double braces: B<{{yourregex}}>.  Because we want to use fixed string
+matching for a majority of what we do, FileCheck has been designed to support
+mixing and matching fixed string matching with regular expressions.  This allows
+you to write things like this:
+
+  ; CHECK: movhpd	<b>{{[0-9]+}}</b>(%esp), <b>{{%xmm[0-7]}}</b>
+
+In this case, any offset from the ESP register will be allowed, and any xmm
+register will be allowed.
+
+Because regular expressions are enclosed with double braces, they are
+visually distinct, and you don't need to use escape characters within the double
+braces like you would in C.  In the rare case that you want to match double
+braces explicitly from the input, you can use something ugly like
+B<{{[{][{]}}> as your pattern.
+
+
+
+=head2 FileCheck Variables
+
+It is often useful to match a pattern and then verify that it occurs again
+later in the file.  For codegen tests, this can be useful to allow any register,
+but verify that that register is used consistently later.  To do this, FileCheck
+allows named variables to be defined and substituted into patterns.  Here is a
+simple example:
+
+  ; CHECK: test5:
+  ; CHECK:    notw	<b>[[REGISTER:%[a-z]+]]</b>
+  ; CHECK:    andw	{{.*}}<b>[[REGISTER]]</b>
+
+The first check line matches a regex (C<%[a-z]+>) and captures it into
+the variable "REGISTER".  The second line verifies that whatever is in REGISTER
+occurs later in the file after an "andw".  FileCheck variable references are
+always contained in C<[[ ]]> pairs, are named, and their names can be
+formed with the regex C<[a-zA-Z_][a-zA-Z0-9_]*>.  If a colon follows the
+name, then it is a definition of the variable; if not, it is a use.
+
+FileCheck variables can be defined multiple times, and uses always get the
+latest value.  Note that variables are all read at the start of a "CHECK" line
+and are all defined at the end.  This means that if you have something like
+C<CHECK: [[XYZ:.*]]x[[XYZ]]>, the check line will read the previous
+value of the XYZ variable and define a new one after the match is performed.  If
+you need to do something like this, you can probably take advantage of the fact
+that FileCheck is not actually line-oriented when it matches; this allows you to
+define two separate CHECK lines that match on the same line.
+
+
+
 =head1 AUTHORS
 
 Maintained by The LLVM Team (L<http://llvm.org>).
diff --git a/libclamav/c++/llvm/docs/CommandGuide/index.html b/libclamav/c++/llvm/docs/CommandGuide/index.html
index ca233a1..9516d07 100644
--- a/libclamav/c++/llvm/docs/CommandGuide/index.html
+++ b/libclamav/c++/llvm/docs/CommandGuide/index.html
@@ -132,6 +132,8 @@ options) arguments to the tool you are interested in.</p>
     Flexible file verifier used extensively by the testing harness</li>
 <li><a href="/cmds/tblgen.html"><b>tblgen</b></a> -
     target description reader and generator</li>
+<li><a href="/cmds/lit.html"><b>lit</b></a> -
+    LLVM Integrated Tester, for running tests</li>
 
 </ul>
 </div>
diff --git a/libclamav/c++/llvm/docs/CommandGuide/lit.pod b/libclamav/c++/llvm/docs/CommandGuide/lit.pod
index a818302..246fc66 100644
--- a/libclamav/c++/llvm/docs/CommandGuide/lit.pod
+++ b/libclamav/c++/llvm/docs/CommandGuide/lit.pod
@@ -36,6 +36,9 @@ Finally, B<lit> also supports additional options for only running a subset of
 the options specified on the command line, see L<"SELECTION OPTIONS"> for
 more information.
 
+Users interested in the B<lit> architecture or designing a B<lit> testing
+implementation should see L<"LIT INFRASTRUCTURE">.
+
 =head1 GENERAL OPTIONS
 
 =over
@@ -49,6 +52,17 @@ Show the B<lit> help message.
 Run I<N> tests in parallel. By default, this is automatically chosen to match the
 number of detected available CPUs.
 
+=item B<--config-prefix>=I<NAME>
+
+Search for I<NAME.cfg> and I<NAME.site.cfg> when searching for test suites,
+instead of I<lit.cfg> and I<lit.site.cfg>.
+
+=item B<--param> I<NAME>, B<--param> I<NAME>=I<VALUE>
+
+Add a user-defined parameter I<NAME> with the given I<VALUE> (or the empty
+string if not given). The meaning and use of these parameters is test suite
+dependent.
+
 =back 
 
 =head1 OUTPUT OPTIONS
@@ -135,6 +149,11 @@ List the discovered test suites as part of the standard output.
 
 Run Tcl scripts internally (instead of converting to shell scripts).
 
+=item B<--repeat>=I<N>
+
+Run each test I<N> times. Currently this is primarily useful for timing tests;
+other results are not collated in any reasonable fashion.
+
 =back
 
 =head1 EXIT STATUS
@@ -211,6 +230,119 @@ Depending on the test format tests may produce additional information about
 their status (generally only for failures). See the L<Output|"LIT OUTPUT">
 section for more information.
 
+=head1 LIT INFRASTRUCTURE
+
+This section describes the B<lit> testing architecture for users interested in
+creating a new B<lit> testing implementation, or extending an existing one.
+
+B<lit> proper is primarily an infrastructure for discovering and running
+arbitrary tests, and for exposing a single convenient interface to these
+tests. B<lit> itself doesn't know how to run tests; rather, this logic is
+defined by I<test suites>.
+
+=head2 TEST SUITES
+
+As described in L<"TEST DISCOVERY">, tests are always located inside a I<test
+suite>. Test suites serve to define the format of the tests they contain, the
+logic for finding those tests, and any additional information to run the tests.
+
+B<lit> identifies test suites as directories containing I<lit.cfg> or
+I<lit.site.cfg> files (see also B<--config-prefix>). Test suites are initially
+discovered by recursively searching up the directory hierarchy for all the input
+files passed on the command line. You can use B<--show-suites> to display the
+discovered test suites at startup.
+
+Once a test suite is discovered, its config file is loaded. Config files
+themselves are just Python modules which will be executed. When the config file
+is executed, two important global variables are predefined:
+
+=over
+
+=item B<lit>
+
+The global B<lit> configuration object (a I<LitConfig> instance), which defines
+the builtin test formats, global configuration parameters, and other helper
+routines for implementing test configurations.
+
+=item B<config>
+
+This is the config object (a I<TestingConfig> instance) for the test suite,
+which the config file is expected to populate. The following variables are also
+available on the I<config> object; some must be set by the config, while
+others are optional or predefined:
+
+B<name> I<[required]> The name of the test suite, for use in reports and
+diagnostics.
+
+B<test_format> I<[required]> The test format object which will be used to
+discover and run tests in the test suite. Generally this will be a builtin test
+format available from the I<lit.formats> module.
+
+B<test_src_root> The filesystem path to the test suite root. For out-of-dir
+builds this is the directory that will be scanned for tests.
+
+B<test_exec_root> For out-of-dir builds, the path to the test suite root inside
+the object directory. This is where tests will be run and temporary output files
+placed.
+
+B<environment> A dictionary representing the environment to use when executing
+tests in the suite.
+
+B<suffixes> For B<lit> test formats which scan directories for tests, this
+variable is a list of suffixes used to identify test files. Used by: I<ShTest>,
+I<TclTest>.
+
+B<substitutions> For B<lit> test formats which substitute variables into a test
+script, the list of substitutions to perform. Used by: I<ShTest>, I<TclTest>.
+
+B<unsupported> Marks an unsupported directory; all tests within it will be
+reported as unsupported. Used by: I<ShTest>, I<TclTest>.
+
+B<parent> The parent configuration; this is the config object for the directory
+containing the test suite, or None.
+
+B<on_clone> The config is actually cloned for every subdirectory inside a test
+suite, to allow local configuration on a per-directory basis. The I<on_clone>
+variable can be set to a Python function which will be called whenever a
+configuration is cloned (for a subdirectory). The function should take three
+arguments: (1) the parent configuration, (2) the new configuration (which the
+I<on_clone> function will generally modify), and (3) the test path to the new
+directory being scanned.
+
+=back
+
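To make the variables above concrete, here is a small hypothetical I<lit.cfg>;
the names and values are illustrative only, and real test suites will differ:

    # lit.cfg -- executed as Python with 'lit' and 'config' predefined.
    import lit.formats

    config.name = 'Example'                    # used in reports
    config.test_format = lit.formats.ShTest()  # a builtin test format
    config.suffixes = ['.ll', '.c']            # files treated as tests
    config.environment['PATH'] = '/usr/bin'    # environment for the tests
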
+=head2 TEST DISCOVERY
+
+Once test suites are located, B<lit> recursively traverses the source directory
+(following I<test_src_root>) looking for tests. When B<lit> enters a
+sub-directory, it first checks to see if a nested test suite is defined in that
+directory. If so, it loads that test suite recursively, otherwise it
+instantiates a local test config for the directory (see L<"LOCAL CONFIGURATION
+FILES">).
+
+Tests are identified by the test suite they are contained within, and the
+relative path inside that suite. Note that the relative path may not refer to an
+actual file on disk; some test formats (such as I<GoogleTest>) define "virtual
+tests" which have a path that contains both the path to the actual test file and
+a subpath to identify the virtual test.
+
+=head2 LOCAL CONFIGURATION FILES
+
+When B<lit> loads a subdirectory in a test suite, it instantiates a local test
+configuration by cloning the configuration for the parent directory -- the root
+of this configuration chain will always be a test suite. Once the test
+configuration is cloned, B<lit> checks for a I<lit.local.cfg> file in the
+subdirectory. If present, this file will be loaded and can be used to specialize
+the configuration for each individual directory. This facility can be used to
+define subdirectories of optional tests, or to change other configuration
+parameters -- for example, to change the test format, or the suffixes which
+identify test files.
+
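For example, a subdirectory of optional tests might carry a two-line
I<lit.local.cfg> (again, purely illustrative):

    # lit.local.cfg -- specializes the cloned parent config here.
    config.suffixes = ['.exp']   # only .exp files are tests in this directory
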
+=head2 LIT EXAMPLE TESTS
+
+The B<lit> distribution contains several example implementations of test suites
+in the I<ExampleTests> directory.
+
 =head1 SEE ALSO
 
 L<valgrind(1)>
diff --git a/libclamav/c++/llvm/docs/CommandLine.html b/libclamav/c++/llvm/docs/CommandLine.html
index f14defc..cefb6f8 100644
--- a/libclamav/c++/llvm/docs/CommandLine.html
+++ b/libclamav/c++/llvm/docs/CommandLine.html
@@ -1022,7 +1022,7 @@ files that use them.  This is called the internal storage model.</p>
 code from the storage of the value parsed.  For example, lets say that we have a
 '<tt>-debug</tt>' option that we would like to use to enable debug information
 across the entire body of our program.  In this case, the boolean value
-controlling the debug code should be globally accessable (in a header file, for
+controlling the debug code should be globally accessible (in a header file, for
 example) yet the command line option processing code should not be exposed to
 all of these clients (requiring lots of .cpp files to #include
 <tt>CommandLine.h</tt>).</p>
@@ -1107,7 +1107,7 @@ a command line option.  Look <a href="#value_desc_example">here</a> for an
 example.</li>
 
 <li><a name="cl::init">The <b><tt>cl::init</tt></b></a> attribute specifies an
-inital value for a <a href="#cl::opt">scalar</a> option.  If this attribute is
+initial value for a <a href="#cl::opt">scalar</a> option.  If this attribute is
 not specified then the command line option value defaults to the value created
 by the default constructor for the type. <b>Warning</b>: If you specify both
 <b><tt>cl::init</tt></b> and <b><tt>cl::location</tt></b> for an option,
@@ -1178,7 +1178,7 @@ href="#cl::list">cl::list</a></tt>.  These modifiers give you the ability to
 tweak how options are parsed and how <tt>--help</tt> output is generated to fit
 your application well.</p>
 
-<p>These options fall into five main catagories:</p>
+<p>These options fall into five main categories:</p>
 
 <ol>
 <li><a href="#hiding">Hiding an option from <tt>--help</tt> output</a></li>
@@ -1190,9 +1190,9 @@ your application well.</p>
 <li><a href="#misc">Miscellaneous option modifiers</a></li>
 </ol>
 
-<p>It is not possible to specify two options from the same catagory (you'll get
+<p>It is not possible to specify two options from the same category (you'll get
 a runtime error) to a single option, except for options in the miscellaneous
-catagory.  The CommandLine library specifies defaults for all of these settings
+category.  The CommandLine library specifies defaults for all of these settings
 that are the most useful in practice and the most common, which mean that you
 usually shouldn't have to worry about these.</p>
 
@@ -1536,7 +1536,7 @@ not be available, it can't just look in <tt>argv[0]</tt>), the name of the
 environment variable to examine, the optional
 <a href="#description">additional extra text</a> to emit when the
 <tt>--help</tt> option is invoked, and the boolean
-switch that controls whether <a href="#response">reponse files</a>
+switch that controls whether <a href="#response">response files</a>
 should be read.</p>
 
 <p><tt>cl::ParseEnvironmentOptions</tt> will break the environment
diff --git a/libclamav/c++/llvm/docs/CompilerDriver.html b/libclamav/c++/llvm/docs/CompilerDriver.html
index 9bc08ac..761d6ee 100644
--- a/libclamav/c++/llvm/docs/CompilerDriver.html
+++ b/libclamav/c++/llvm/docs/CompilerDriver.html
@@ -33,11 +33,12 @@ The ReST source lives in the directory 'tools/llvmc/doc'. -->
 </ul>
 </li>
 <li><a class="reference internal" href="#language-map" id="id15">Language map</a></li>
-<li><a class="reference internal" href="#more-advanced-topics" id="id16">More advanced topics</a><ul>
-<li><a class="reference internal" href="#hooks-and-environment-variables" id="id17">Hooks and environment variables</a></li>
-<li><a class="reference internal" href="#how-plugins-are-loaded" id="id18">How plugins are loaded</a></li>
-<li><a class="reference internal" href="#debugging" id="id19">Debugging</a></li>
-<li><a class="reference internal" href="#conditioning-on-the-executable-name" id="id20">Conditioning on the executable name</a></li>
+<li><a class="reference internal" href="#option-preprocessor" id="id16">Option preprocessor</a></li>
+<li><a class="reference internal" href="#more-advanced-topics" id="id17">More advanced topics</a><ul>
+<li><a class="reference internal" href="#hooks-and-environment-variables" id="id18">Hooks and environment variables</a></li>
+<li><a class="reference internal" href="#how-plugins-are-loaded" id="id19">How plugins are loaded</a></li>
+<li><a class="reference internal" href="#debugging" id="id20">Debugging</a></li>
+<li><a class="reference internal" href="#conditioning-on-the-executable-name" id="id21">Conditioning on the executable name</a></li>
 </ul>
 </li>
 </ul>
@@ -343,8 +344,9 @@ output).</li>
 output.</li>
 <li><tt class="docutils literal"><span class="pre">multi_val</span> <span class="pre">n</span></tt> - this option takes <em>n</em> arguments (can be useful in some
 special cases). Usage example: <tt class="docutils literal"><span class="pre">(parameter_list_option</span> <span class="pre">&quot;foo&quot;,</span> <span class="pre">(multi_val</span>
-<span class="pre">3))</span></tt>. Only list options can have this attribute; you can, however, use
-the <tt class="docutils literal"><span class="pre">one_or_more</span></tt> and <tt class="docutils literal"><span class="pre">zero_or_one</span></tt> properties.</li>
+<span class="pre">3))</span></tt>; the command-line syntax is '-foo a b c'. Only list options can have
+this attribute; you can, however, use the <tt class="docutils literal"><span class="pre">one_or_more</span></tt>, <tt class="docutils literal"><span class="pre">zero_or_one</span></tt>
+and <tt class="docutils literal"><span class="pre">required</span></tt> properties.</li>
 <li><tt class="docutils literal"><span class="pre">init</span></tt> - this option has a default value, either a string (if it is a
 parameter), or a boolean (if it is a switch; boolean constants are called
 <tt class="docutils literal"><span class="pre">true</span></tt> and <tt class="docutils literal"><span class="pre">false</span></tt>). List options can't have this attribute. Usage
@@ -417,8 +419,15 @@ readability. It is usually better to split tool descriptions and/or
 use TableGen inheritance instead.</p>
 <ul class="simple">
 <li>Possible tests are:<ul>
-<li><tt class="docutils literal"><span class="pre">switch_on</span></tt> - Returns true if a given command-line switch is
-provided by the user. Example: <tt class="docutils literal"><span class="pre">(switch_on</span> <span class="pre">&quot;opt&quot;)</span></tt>.</li>
+<li><tt class="docutils literal"><span class="pre">switch_on</span></tt> - Returns true if a given command-line switch is provided by
+the user. Can be given a list as argument; in that case <tt class="docutils literal"><span class="pre">(switch_on</span> <span class="pre">[&quot;foo&quot;,</span>
+<span class="pre">&quot;bar&quot;,</span> <span class="pre">&quot;baz&quot;])</span></tt> is equivalent to <tt class="docutils literal"><span class="pre">(and</span> <span class="pre">(switch_on</span> <span class="pre">&quot;foo&quot;),</span> <span class="pre">(switch_on</span>
+<span class="pre">&quot;bar&quot;),</span> <span class="pre">(switch_on</span> <span class="pre">&quot;baz&quot;))</span></tt>.
+Example: <tt class="docutils literal"><span class="pre">(switch_on</span> <span class="pre">&quot;opt&quot;)</span></tt>.</li>
+<li><tt class="docutils literal"><span class="pre">any_switch_on</span></tt> - Given a list of switch options, returns true if any of
+the switches is turned on.
+Example: <tt class="docutils literal"><span class="pre">(any_switch_on</span> <span class="pre">[&quot;foo&quot;,</span> <span class="pre">&quot;bar&quot;,</span> <span class="pre">&quot;baz&quot;])</span></tt> is equivalent to <tt class="docutils literal"><span class="pre">(or</span>
+<span class="pre">(switch_on</span> <span class="pre">&quot;foo&quot;),</span> <span class="pre">(switch_on</span> <span class="pre">&quot;bar&quot;),</span> <span class="pre">(switch_on</span> <span class="pre">&quot;baz&quot;))</span></tt>.</li>
 <li><tt class="docutils literal"><span class="pre">parameter_equals</span></tt> - Returns true if a command-line parameter equals
 a given value.
 Example: <tt class="docutils literal"><span class="pre">(parameter_equals</span> <span class="pre">&quot;W&quot;,</span> <span class="pre">&quot;all&quot;)</span></tt>.</li>
@@ -428,16 +437,24 @@ Example: <tt class="docutils literal"><span class="pre">(parameter_in_list</span
 <li><tt class="docutils literal"><span class="pre">input_languages_contain</span></tt> - Returns true if a given language
 belongs to the current input language set.
 Example: <tt class="docutils literal"><span class="pre">(input_languages_contain</span> <span class="pre">&quot;c++&quot;)</span></tt>.</li>
-<li><tt class="docutils literal"><span class="pre">in_language</span></tt> - Evaluates to true if the input file language
-equals to the argument. At the moment works only with <tt class="docutils literal"><span class="pre">cmd_line</span></tt>
-and <tt class="docutils literal"><span class="pre">actions</span></tt> (on non-join nodes).
+<li><tt class="docutils literal"><span class="pre">in_language</span></tt> - Evaluates to true if the input file language is equal to
+the argument. At the moment it works only with <tt class="docutils literal"><span class="pre">cmd_line</span></tt> and <tt class="docutils literal"><span class="pre">actions</span></tt> (on
+non-join nodes).
 Example: <tt class="docutils literal"><span class="pre">(in_language</span> <span class="pre">&quot;c++&quot;)</span></tt>.</li>
-<li><tt class="docutils literal"><span class="pre">not_empty</span></tt> - Returns true if a given option (which should be
-either a parameter or a parameter list) is set by the
-user.
+<li><tt class="docutils literal"><span class="pre">not_empty</span></tt> - Returns true if a given option (which should be either a
+parameter or a parameter list) is set by the user. Like <tt class="docutils literal"><span class="pre">switch_on</span></tt>, it
+can also be given a list as argument.
 Example: <tt class="docutils literal"><span class="pre">(not_empty</span> <span class="pre">&quot;o&quot;)</span></tt>.</li>
+<li><tt class="docutils literal"><span class="pre">any_not_empty</span></tt> - Returns true if <tt class="docutils literal"><span class="pre">not_empty</span></tt> returns true for any of
+the options in the list.
+Example: <tt class="docutils literal"><span class="pre">(any_not_empty</span> <span class="pre">[&quot;foo&quot;,</span> <span class="pre">&quot;bar&quot;,</span> <span class="pre">&quot;baz&quot;])</span></tt> is equivalent to <tt class="docutils literal"><span class="pre">(or</span>
+<span class="pre">(not_empty</span> <span class="pre">&quot;foo&quot;),</span> <span class="pre">(not_empty</span> <span class="pre">&quot;bar&quot;),</span> <span class="pre">(not_empty</span> <span class="pre">&quot;baz&quot;))</span></tt>.</li>
 <li><tt class="docutils literal"><span class="pre">empty</span></tt> - The opposite of <tt class="docutils literal"><span class="pre">not_empty</span></tt>. Equivalent to <tt class="docutils literal"><span class="pre">(not</span> <span class="pre">(not_empty</span>
-<span class="pre">X))</span></tt>. Provided for convenience.</li>
+<span class="pre">X))</span></tt>. Provided for convenience. Can be given a list as argument.</li>
+<li><tt class="docutils literal"><span class="pre">any_not_empty</span></tt> - Returns true if <tt class="docutils literal"><span class="pre">not_empty</span></tt> returns true for any of
+the options in the list.
+Example: <tt class="docutils literal"><span class="pre">(any_empty</span> <span class="pre">[&quot;foo&quot;,</span> <span class="pre">&quot;bar&quot;,</span> <span class="pre">&quot;baz&quot;])</span></tt> is equivalent to <tt class="docutils literal"><span class="pre">(not</span> <span class="pre">(and</span>
+<span class="pre">(not_empty</span> <span class="pre">&quot;foo&quot;),</span> <span class="pre">(not_empty</span> <span class="pre">&quot;bar&quot;),</span> <span class="pre">(not_empty</span> <span class="pre">&quot;baz&quot;)))</span></tt>.</li>
 <li><tt class="docutils literal"><span class="pre">single_input_file</span></tt> - Returns true if there was only one input file
 provided on the command-line. Used without arguments:
 <tt class="docutils literal"><span class="pre">(single_input_file)</span></tt>.</li>
@@ -481,8 +498,8 @@ options that aren't mentioned in the option list.</p>
 <li>Possible tool properties:<ul>
 <li><tt class="docutils literal"><span class="pre">in_language</span></tt> - input language name. Can be either a string or a
 list, in case the tool supports multiple input languages.</li>
-<li><tt class="docutils literal"><span class="pre">out_language</span></tt> - output language name. Tools are not allowed to
-have multiple output languages.</li>
+<li><tt class="docutils literal"><span class="pre">out_language</span></tt> - output language name. Multiple output languages are not
+allowed.</li>
 <li><tt class="docutils literal"><span class="pre">output_suffix</span></tt> - output file suffix. Can also be changed
 dynamically, see documentation on actions.</li>
 <li><tt class="docutils literal"><span class="pre">cmd_line</span></tt> - the actual command used to run the tool. You can
@@ -537,10 +554,11 @@ like a linker.</p>
 command.
 Example: <tt class="docutils literal"><span class="pre">(case</span> <span class="pre">(switch_on</span> <span class="pre">&quot;pthread&quot;),</span> <span class="pre">(append_cmd</span>
 <span class="pre">&quot;-lpthread&quot;))</span></tt></li>
-<li><tt class="docutils literal"><span class="pre">error`</span> <span class="pre">-</span> <span class="pre">exit</span> <span class="pre">with</span> <span class="pre">error.</span>
-<span class="pre">Example:</span> <span class="pre">``(error</span> <span class="pre">&quot;Mixing</span> <span class="pre">-c</span> <span class="pre">and</span> <span class="pre">-S</span> <span class="pre">is</span> <span class="pre">not</span> <span class="pre">allowed!&quot;)</span></tt>.</li>
-<li><tt class="docutils literal"><span class="pre">forward</span></tt> - forward an option unchanged.
-Example: <tt class="docutils literal"><span class="pre">(forward</span> <span class="pre">&quot;Wall&quot;)</span></tt>.</li>
+<li><tt class="docutils literal"><span class="pre">error</span></tt> - exit with error.
+Example: <tt class="docutils literal"><span class="pre">(error</span> <span class="pre">&quot;Mixing</span> <span class="pre">-c</span> <span class="pre">and</span> <span class="pre">-S</span> <span class="pre">is</span> <span class="pre">not</span> <span class="pre">allowed!&quot;)</span></tt>.</li>
+<li><tt class="docutils literal"><span class="pre">warning</span></tt> - print a warning.
+Example: <tt class="docutils literal"><span class="pre">(warning</span> <span class="pre">&quot;Specifying</span> <span class="pre">both</span> <span class="pre">-O1</span> <span class="pre">and</span> <span class="pre">-O2</span> <span class="pre">is</span> <span class="pre">meaningless!&quot;)</span></tt>.</li>
+<li><tt class="docutils literal"><span class="pre">forward</span></tt> - forward an option unchanged.  Example: <tt class="docutils literal"><span class="pre">(forward</span> <span class="pre">&quot;Wall&quot;)</span></tt>.</li>
 <li><tt class="docutils literal"><span class="pre">forward_as</span></tt> - Change the name of an option, but forward the
 argument unchanged.
 Example: <tt class="docutils literal"><span class="pre">(forward_as</span> <span class="pre">&quot;O0&quot;,</span> <span class="pre">&quot;--disable-optimization&quot;)</span></tt>.</li>
@@ -583,10 +601,37 @@ linked with the root node. Since tools are not allowed to have
 multiple output languages, for nodes &quot;inside&quot; the graph the input and
 output languages should match. This is enforced at compile-time.</p>
 </div>
+<div class="section" id="option-preprocessor">
+<h1><a class="toc-backref" href="#id16">Option preprocessor</a></h1>
+<p>It is sometimes useful to run error-checking code before processing the
+compilation graph. For example, if optimization options &quot;-O1&quot; and &quot;-O2&quot; are
+implemented as switches, we might want to output a warning if the user invokes
+the driver with both of these options enabled.</p>
+<p>The <tt class="docutils literal"><span class="pre">OptionPreprocessor</span></tt> feature is provided specifically for these
+occasions. Example (adapted from the built-in Base plugin):</p>
+<pre class="literal-block">
+def Preprocess : OptionPreprocessor&lt;
+(case (and (switch_on &quot;O3&quot;), (any_switch_on [&quot;O0&quot;, &quot;O1&quot;, &quot;O2&quot;])),
+           [(unset_option [&quot;O0&quot;, &quot;O1&quot;, &quot;O2&quot;]),
+            (warning &quot;Multiple -O options specified, defaulted to -O3.&quot;)],
+      (and (switch_on &quot;O2&quot;), (any_switch_on [&quot;O0&quot;, &quot;O1&quot;])),
+           (unset_option [&quot;O0&quot;, &quot;O1&quot;]),
+      (and (switch_on &quot;O1&quot;), (switch_on &quot;O0&quot;)),
+           (unset_option &quot;O0&quot;))
+&gt;;
+</pre>
+<p>Here, <tt class="docutils literal"><span class="pre">OptionPreprocessor</span></tt> is used to unset all spurious optimization options
+(so that they are not forwarded to the compiler).</p>
+<p><tt class="docutils literal"><span class="pre">OptionPreprocessor</span></tt> is basically a single big <tt class="docutils literal"><span class="pre">case</span></tt> expression, which is
+evaluated only once right after the plugin is loaded. The only allowed actions
+in <tt class="docutils literal"><span class="pre">OptionPreprocessor</span></tt> are <tt class="docutils literal"><span class="pre">error</span></tt>, <tt class="docutils literal"><span class="pre">warning</span></tt> and a special action
+<tt class="docutils literal"><span class="pre">unset_option</span></tt>, which, as the name suggests, unsets a given option. For
+convenience, <tt class="docutils literal"><span class="pre">unset_option</span></tt> also works on lists.</p>
+</div>
 <div class="section" id="more-advanced-topics">
-<h1><a class="toc-backref" href="#id16">More advanced topics</a></h1>
+<h1><a class="toc-backref" href="#id17">More advanced topics</a></h1>
 <div class="section" id="hooks-and-environment-variables">
-<span id="hooks"></span><h2><a class="toc-backref" href="#id17">Hooks and environment variables</a></h2>
+<span id="hooks"></span><h2><a class="toc-backref" href="#id18">Hooks and environment variables</a></h2>
 <p>Normally, LLVMC executes programs from the system <tt class="docutils literal"><span class="pre">PATH</span></tt>. Sometimes,
 this is not sufficient: for example, we may want to specify tool paths
 or names in the configuration file. This can be easily achieved via
@@ -619,7 +664,7 @@ the <tt class="docutils literal"><span class="pre">case</span></tt> expression (
 </pre>
 </div>
 <div class="section" id="how-plugins-are-loaded">
-<span id="priorities"></span><h2><a class="toc-backref" href="#id18">How plugins are loaded</a></h2>
+<span id="priorities"></span><h2><a class="toc-backref" href="#id19">How plugins are loaded</a></h2>
 <p>It is possible for LLVMC plugins to depend on each other. For example,
 one can create edges between nodes defined in some other plugin. To
 make this work, however, that plugin should be loaded first. To
@@ -635,7 +680,7 @@ with 0. Therefore, the plugin with the highest priority value will be
 loaded last.</p>
 </div>
 <div class="section" id="debugging">
-<h2><a class="toc-backref" href="#id19">Debugging</a></h2>
+<h2><a class="toc-backref" href="#id20">Debugging</a></h2>
 <p>When writing LLVMC plugins, it can be useful to get a visual view of
 the resulting compilation graph. This can be achieved via the command
 line option <tt class="docutils literal"><span class="pre">--view-graph</span></tt>. This command assumes that <a class="reference external" href="http://www.graphviz.org/">Graphviz</a> and
@@ -651,7 +696,7 @@ perform any compilation tasks and returns the number of encountered
 errors as its status code.</p>
 </div>
 <div class="section" id="conditioning-on-the-executable-name">
-<h2><a class="toc-backref" href="#id20">Conditioning on the executable name</a></h2>
+<h2><a class="toc-backref" href="#id21">Conditioning on the executable name</a></h2>
 <p>For now, the executable name (the value passed to the driver in <tt class="docutils literal"><span class="pre">argv[0]</span></tt>) is
 accessible only in the C++ code (i.e. hooks). Use the following code:</p>
 <pre class="literal-block">
@@ -682,7 +727,7 @@ the <tt class="docutils literal"><span class="pre">Base</span></tt> plugin behav
 <a href="mailto:foldr at codedgers.com">Mikhail Glushenkov</a><br />
 <a href="http://llvm.org">LLVM Compiler Infrastructure</a><br />
 
-Last modified: $Date: 2008-12-11 11:34:48 -0600 (Thu, 11 Dec 2008) $
+Last modified: $Date$
 </address></div>
 </div>
 </div>
diff --git a/libclamav/c++/llvm/docs/DeveloperPolicy.html b/libclamav/c++/llvm/docs/DeveloperPolicy.html
index 0c87a69..b11e48b 100644
--- a/libclamav/c++/llvm/docs/DeveloperPolicy.html
+++ b/libclamav/c++/llvm/docs/DeveloperPolicy.html
@@ -99,7 +99,9 @@
 
 <ol>
   <li>Make your patch against the Subversion trunk, not a branch, and not an old
-      version of LLVM.  This makes it easy to apply the patch.</li>
+      version of LLVM.  This makes it easy to apply the patch.  For information
+      on how to check out SVN trunk, please see the <a
+      href="GettingStarted.html#checkout">Getting Started Guide</a>.</li>
         
   <li>Similarly, patches should be submitted soon after they are generated.  Old
       patches may not apply correctly if the underlying code changes between the
diff --git a/libclamav/c++/llvm/docs/ExceptionHandling.html b/libclamav/c++/llvm/docs/ExceptionHandling.html
index c57776c..438edda 100644
--- a/libclamav/c++/llvm/docs/ExceptionHandling.html
+++ b/libclamav/c++/llvm/docs/ExceptionHandling.html
@@ -418,8 +418,7 @@
 <div class="doc_text">
 
 <pre>
-  i32 %<a href="#llvm_eh_selector">llvm.eh.selector.i32</a>(i8*, i8*, i8*, ...)
-  i64 %<a href="#llvm_eh_selector">llvm.eh.selector.i64</a>(i8*, i8*, i8*, ...)
+  i32 %<a href="#llvm_eh_selector">llvm.eh.selector</a>(i8*, i8*, i8*, ...)
 </pre>
 
 <p>This intrinsic is used to compare the exception with the given type infos,
@@ -451,8 +450,7 @@
 <div class="doc_text">
 
 <pre>
-  i32 %<a href="#llvm_eh_typeid_for">llvm.eh.typeid.for.i32</a>(i8*)
-  i64 %<a href="#llvm_eh_typeid_for">llvm.eh.typeid.for.i64</a>(i8*)
+  i32 %<a href="#llvm_eh_typeid_for">llvm.eh.typeid.for</a>(i8*)
 </pre>
 
 <p>This intrinsic returns the type info index in the exception table of the
diff --git a/libclamav/c++/llvm/docs/FAQ.html b/libclamav/c++/llvm/docs/FAQ.html
index 1ba7123..d62ffd7 100644
--- a/libclamav/c++/llvm/docs/FAQ.html
+++ b/libclamav/c++/llvm/docs/FAQ.html
@@ -685,7 +685,7 @@ Stop.
 <p>Also, there are a number of other limitations of the C backend that cause it
    to produce code that does not fully conform to the C++ ABI on most
    platforms. Some of the C++ programs in LLVM's test suite are known to fail
-   when compiled with the C back end because of ABI incompatiblities with
+   when compiled with the C back end because of ABI incompatibilities with
    standard C++ libraries.</p>
 </div>
 
@@ -700,7 +700,7 @@ Stop.
    portable is by using the preprocessor to include platform-specific code. In
    practice, information about other platforms is lost after preprocessing, so
    the result is inherently dependent on the platform that the preprocessing was
-   targetting.</p>
+   targeting.</p>
 
 <p>Another example is <tt>sizeof</tt>. It's common for <tt>sizeof(long)</tt> to
    vary between platforms. In most C front-ends, <tt>sizeof</tt> is expanded to
diff --git a/libclamav/c++/llvm/docs/GetElementPtr.html b/libclamav/c++/llvm/docs/GetElementPtr.html
index 752568f..dd49ef7 100644
--- a/libclamav/c++/llvm/docs/GetElementPtr.html
+++ b/libclamav/c++/llvm/docs/GetElementPtr.html
@@ -40,7 +40,7 @@
 <div class="doc_text"> 
   <p>This document seeks to dispel the mystery and confusion surrounding LLVM's
   GetElementPtr (GEP) instruction. Questions about the wiley GEP instruction are
-  probably the most frequently occuring questions once a developer gets down to
+  probably the most frequently occurring questions once a developer gets down to
   coding with LLVM. Here we lay out the sources of confusion and show that the
   GEP instruction is really quite simple.
   </p>
diff --git a/libclamav/c++/llvm/docs/GettingStarted.html b/libclamav/c++/llvm/docs/GettingStarted.html
index 890d8f8..851bfb6 100644
--- a/libclamav/c++/llvm/docs/GettingStarted.html
+++ b/libclamav/c++/llvm/docs/GettingStarted.html
@@ -724,6 +724,7 @@ revision), you can checkout it from the '<tt>tags</tt>' directory (instead of
 subdirectories of the '<tt>tags</tt>' directory:</p>
 
 <ul>
+<li>Release 2.6: <b>RELEASE_26</b></li>
 <li>Release 2.5: <b>RELEASE_25</b></li>
 <li>Release 2.4: <b>RELEASE_24</b></li>
 <li>Release 2.3: <b>RELEASE_23</b></li>
@@ -1153,7 +1154,7 @@ first command may not be required if you are already using the module):</p>
 <div class="doc_code">
 <pre>
 $ mount -t binfmt_misc none /proc/sys/fs/binfmt_misc
-$ echo ':llvm:M::llvm::/path/to/lli:' &gt; /proc/sys/fs/binfmt_misc/register
+$ echo ':llvm:M::BC::/path/to/lli:' &gt; /proc/sys/fs/binfmt_misc/register
 $ chmod u+x hello.bc   (if needed)
 $ ./hello.bc
 </pre>
diff --git a/libclamav/c++/llvm/docs/HowToReleaseLLVM.html b/libclamav/c++/llvm/docs/HowToReleaseLLVM.html
index 7040609..7f18440 100644
--- a/libclamav/c++/llvm/docs/HowToReleaseLLVM.html
+++ b/libclamav/c++/llvm/docs/HowToReleaseLLVM.html
@@ -40,7 +40,7 @@
 <div class="doc_text">
   <p>LLVM is released on a time-based schedule (currently every 6 months). We
   do not have dot releases because of the nature of LLVM's incremental
-  developement philosophy. The release schedule is roughly as follows:
+  development philosophy. The release schedule is roughly as follows:
   </p>
 <ol>
 <li>Set code freeze and branch creation date for 6 months after last code freeze 
@@ -499,7 +499,7 @@ svn copy https://llvm.org/svn/llvm-project/test-suite/branches/release_XX \
   release documentation.</li>
   <li> Finally, update the main page (<tt>index.html</tt> and sidebar) to
   point to the new release and release announcement. Make sure this all gets
-  commited back into Subversion.</li>
+  committed back into Subversion.</li>
   </ol>
 </div>
 
diff --git a/libclamav/c++/llvm/docs/HowToSubmitABug.html b/libclamav/c++/llvm/docs/HowToSubmitABug.html
index 91d4e2b..0815b88 100644
--- a/libclamav/c++/llvm/docs/HowToSubmitABug.html
+++ b/libclamav/c++/llvm/docs/HowToSubmitABug.html
@@ -60,7 +60,7 @@ more easily.</p>
 <p>Once you have a reduced test-case, go to <a
 href="http://llvm.org/bugs/enter_bug.cgi">the LLVM Bug Tracking
 System</a> and fill out the form with the necessary details (note that you don't
-need to pick a catagory, just use the "new-bugs" catagory if you're not sure).
+need to pick a category, just use the "new-bugs" category if you're not sure).
 The bug description should contain the following
 information:</p>
 
diff --git a/libclamav/c++/llvm/docs/LangRef.html b/libclamav/c++/llvm/docs/LangRef.html
index 3853692..ab656d8 100644
--- a/libclamav/c++/llvm/docs/LangRef.html
+++ b/libclamav/c++/llvm/docs/LangRef.html
@@ -31,7 +31,7 @@
           <li><a href="#linkage_weak">'<tt>weak</tt>' Linkage</a></li>
           <li><a href="#linkage_appending">'<tt>appending</tt>' Linkage</a></li>
           <li><a href="#linkage_externweak">'<tt>extern_weak</tt>' Linkage</a></li>
-          <li><a href="#linkage_linkonce">'<tt>linkonce_odr</tt>' Linkage</a></li>
+          <li><a href="#linkage_linkonce_odr">'<tt>linkonce_odr</tt>' Linkage</a></li>
           <li><a href="#linkage_weak">'<tt>weak_odr</tt>' Linkage</a></li>
           <li><a href="#linkage_external">'<tt>externally visible</tt>' Linkage</a></li>
           <li><a href="#linkage_dllimport">'<tt>dllimport</tt>' Linkage</a></li>
@@ -83,6 +83,7 @@
       <li><a href="#complexconstants">Complex Constants</a></li>
       <li><a href="#globalconstants">Global Variable and Function Addresses</a></li>
       <li><a href="#undefvalues">Undefined Values</a></li>
+      <li><a href="#blockaddress">Addresses of Basic Blocks</a></li>
       <li><a href="#constantexprs">Constant Expressions</a></li>
       <li><a href="#metadata">Embedded Metadata</a></li>
     </ol>
@@ -110,6 +111,7 @@
           <li><a href="#i_ret">'<tt>ret</tt>' Instruction</a></li>
           <li><a href="#i_br">'<tt>br</tt>' Instruction</a></li>
           <li><a href="#i_switch">'<tt>switch</tt>' Instruction</a></li>
+          <li><a href="#i_indirectbr">'<tt>indirectbr</tt>' Instruction</a></li>
           <li><a href="#i_invoke">'<tt>invoke</tt>' Instruction</a></li>
           <li><a href="#i_unwind">'<tt>unwind</tt>'  Instruction</a></li>
           <li><a href="#i_unreachable">'<tt>unreachable</tt>' Instruction</a></li>
@@ -156,8 +158,6 @@
       </li>
       <li><a href="#memoryops">Memory Access and Addressing Operations</a>
         <ol>
-          <li><a href="#i_malloc">'<tt>malloc</tt>'   Instruction</a></li>
-          <li><a href="#i_free">'<tt>free</tt>'     Instruction</a></li>
           <li><a href="#i_alloca">'<tt>alloca</tt>'   Instruction</a></li>
          <li><a href="#i_load">'<tt>load</tt>'     Instruction</a></li>
          <li><a href="#i_store">'<tt>store</tt>'    Instruction</a></li>
@@ -273,6 +273,14 @@
           <li><a href="#int_atomic_load_umin"><tt>llvm.atomic.load.umin</tt></a></li>
         </ol>
       </li>
+      <li><a href="#int_memorymarkers">Memory Use Markers</a>
+        <ol>
+          <li><a href="#int_lifetime_start"><tt>llvm.lifetime.start</tt></a></li>
+          <li><a href="#int_lifetime_end"><tt>llvm.lifetime.end</tt></a></li>
+          <li><a href="#int_invariant_start"><tt>llvm.invariant.start</tt></a></li>
+          <li><a href="#int_invariant_end"><tt>llvm.invariant.end</tt></a></li>
+        </ol>
+      </li>
       <li><a href="#int_general">General intrinsics</a>
         <ol>
           <li><a href="#int_var_annotation">
@@ -330,7 +338,7 @@
    IR's", allowing many source languages to be mapped to them).  By providing
    type information, LLVM can be used as the target of optimizations: for
    example, through pointer analysis, it can be proven that a C automatic
-   variable is never accessed outside of the current function... allowing it to
+   variable is never accessed outside of the current function, allowing it to
    be promoted to a simple SSA value instead of a memory location.</p>
 
 </div>
@@ -351,12 +359,12 @@
 </pre>
 </div>
 
-<p>...because the definition of <tt>%x</tt> does not dominate all of its
-   uses. The LLVM infrastructure provides a verification pass that may be used
-   to verify that an LLVM module is well formed.  This pass is automatically run
-   by the parser after parsing input assembly and by the optimizer before it
-   outputs bitcode.  The violations pointed out by the verifier pass indicate
-   bugs in transformation passes or input to the parser.</p>
+<p>because the definition of <tt>%x</tt> does not dominate all of its uses. The
+   LLVM infrastructure provides a verification pass that may be used to verify
+   that an LLVM module is well formed.  This pass is automatically run by the
+   parser after parsing input assembly and by the optimizer before it outputs
+   bitcode.  The violations pointed out by the verifier pass indicate bugs in
+   transformation passes or input to the parser.</p>
 
 </div>
 
@@ -430,8 +438,8 @@
 
 <div class="doc_code">
 <pre>
-<a href="#i_add">add</a> i32 %X, %X           <i>; yields {i32}:%0</i>
-<a href="#i_add">add</a> i32 %0, %0           <i>; yields {i32}:%1</i>
+%0 = <a href="#i_add">add</a> i32 %X, %X           <i>; yields {i32}:%0</i>
+%1 = <a href="#i_add">add</a> i32 %0, %0           <i>; yields {i32}:%1</i>
 %result = <a href="#i_add">add</a> i32 %1, %1
 </pre>
 </div>
@@ -449,7 +457,7 @@
   <li>Unnamed temporaries are numbered sequentially</li>
 </ol>
 
-<p>...and it also shows a convention that we follow in this document.  When
+<p>It also shows a convention that we follow in this document.  When
    demonstrating instructions, we will follow an instruction with a comment that
    defines the type and name of value produced.  Comments are shown in italic
    text.</p>
@@ -474,24 +482,21 @@
    the "hello world" module:</p>
 
 <div class="doc_code">
-<pre><i>; Declare the string constant as a global constant...</i>
-<a href="#identifiers">@.LC0</a> = <a href="#linkage_internal">internal</a> <a
- href="#globalvars">constant</a> <a href="#t_array">[13 x i8]</a> c"hello world\0A\00"          <i>; [13 x i8]*</i>
+<pre>
+<i>; Declare the string constant as a global constant.</i>
+<a href="#identifiers">@.LC0</a> = <a href="#linkage_internal">internal</a> <a href="#globalvars">constant</a> <a href="#t_array">[13 x i8]</a> c"hello world\0A\00"    <i>; [13 x i8]*</i>
 
 <i>; External declaration of the puts function</i>
-<a href="#functionstructure">declare</a> i32 @puts(i8 *)                                           <i>; i32(i8 *)* </i>
+<a href="#functionstructure">declare</a> i32 @puts(i8 *)                                     <i>; i32(i8 *)* </i>
 
 <i>; Definition of main function</i>
-define i32 @main() {                                              <i>; i32()* </i>
-        <i>; Convert [13 x i8]* to i8  *...</i>
-        %cast210 = <a
- href="#i_getelementptr">getelementptr</a> [13 x i8]* @.LC0, i64 0, i64 0   <i>; i8 *</i>
+define i32 @main() {                                        <i>; i32()* </i>
+  <i>; Convert [13 x i8]* to i8  *...</i>
+  %cast210 = <a href="#i_getelementptr">getelementptr</a> [13 x i8]* @.LC0, i64 0, i64 0   <i>; i8 *</i>
 
-        <i>; Call puts function to write out the string to stdout...</i>
-        <a
- href="#i_call">call</a> i32 @puts(i8 * %cast210)                             <i>; i32</i>
-        <a
- href="#i_ret">ret</a> i32 0<br>}<br>
+  <i>; Call puts function to write out the string to stdout.</i>
+  <a href="#i_call">call</a> i32 @puts(i8 * %cast210)                             <i>; i32</i>
+  <a href="#i_ret">ret</a> i32 0<br>}<br>
 </pre>
 </div>
 
@@ -519,7 +524,7 @@ define i32 @main() {                                              <i>; i32()* </
    linkage:</p>
 
 <dl>
-  <dt><tt><b><a name="linkage_private">private</a></b></tt>: </dt>
+  <dt><tt><b><a name="linkage_private">private</a></b></tt></dt>
   <dd>Global values with private linkage are only directly accessible by objects
      in the current module.  In particular, linking code into a module with a
       private global value may cause the private to be renamed as necessary to
@@ -527,7 +532,7 @@ define i32 @main() {                                              <i>; i32()* </
       references can be updated. This doesn't show up in any symbol table in the
       object file.</dd>
 
-  <dt><tt><b><a name="linkage_linker_private">linker_private</a></b></tt>: </dt>
+  <dt><tt><b><a name="linkage_linker_private">linker_private</a></b></tt></dt>
   <dd>Similar to private, but the symbol is passed through the assembler and
       removed by the linker after evaluation.  Note that (unlike private
       symbols) linker_private symbols are subject to coalescing by the linker:
@@ -535,12 +540,12 @@ define i32 @main() {                                              <i>; i32()* </
       normal strong symbols, they are removed by the linker from the final
       linked image (executable or dynamic library).</dd>
 
-  <dt><tt><b><a name="linkage_internal">internal</a></b></tt>: </dt>
+  <dt><tt><b><a name="linkage_internal">internal</a></b></tt></dt>
   <dd>Similar to private, but the value shows as a local symbol
       (<tt>STB_LOCAL</tt> in the case of ELF) in the object file. This
       corresponds to the notion of the '<tt>static</tt>' keyword in C.</dd>
 
-  <dt><tt><b><a name="linkage_available_externally">available_externally</a></b></tt>: </dt>
+  <dt><tt><b><a name="linkage_available_externally">available_externally</a></b></tt></dt>
   <dd>Globals with "<tt>available_externally</tt>" linkage are never emitted
       into the object file corresponding to the LLVM module.  They exist to
      allow inlining and other optimizations to take place given knowledge of
      the definition of the global, which is known to be somewhere outside the
      module.  Globals with <tt>available_externally</tt> linkage are allowed to
      be discarded at will, and are otherwise the same as <tt>linkonce_odr</tt>.
       be discarded at will, and are otherwise the same as <tt>linkonce_odr</tt>.
       This linkage type is only allowed on definitions, not declarations.</dd>
 
-  <dt><tt><b><a name="linkage_linkonce">linkonce</a></b></tt>: </dt>
+  <dt><tt><b><a name="linkage_linkonce">linkonce</a></b></tt></dt>
   <dd>Globals with "<tt>linkonce</tt>" linkage are merged with other globals of
       the same name when linkage occurs.  This is typically used to implement
       inline functions, templates, or other code which must be generated in each
       translation unit that uses it.  Unreferenced <tt>linkonce</tt> globals are
       allowed to be discarded.</dd>
 
-  <dt><tt><b><a name="linkage_weak">weak</a></b></tt>: </dt>
+  <dt><tt><b><a name="linkage_weak">weak</a></b></tt></dt>
   <dd>"<tt>weak</tt>" linkage has the same merging semantics as
       <tt>linkonce</tt> linkage, except that unreferenced globals with
       <tt>weak</tt> linkage may not be discarded.  This is used for globals that
       are declared "weak" in C source code.</dd>
 
-  <dt><tt><b><a name="linkage_common">common</a></b></tt>: </dt>
+  <dt><tt><b><a name="linkage_common">common</a></b></tt></dt>
   <dd>"<tt>common</tt>" linkage is most similar to "<tt>weak</tt>" linkage, but
      it is used for tentative definitions in C, such as "<tt>int X;</tt>" at
       global scope.
@@ -574,20 +579,20 @@ define i32 @main() {                                              <i>; i32()* </
       have common linkage.</dd>
 
 
-  <dt><tt><b><a name="linkage_appending">appending</a></b></tt>: </dt>
+  <dt><tt><b><a name="linkage_appending">appending</a></b></tt></dt>
   <dd>"<tt>appending</tt>" linkage may only be applied to global variables of
       pointer to array type.  When two global variables with appending linkage
       are linked together, the two global arrays are appended together.  This is
       the LLVM, typesafe, equivalent of having the system linker append together
       "sections" with identical names when .o files are linked.</dd>
 
-  <dt><tt><b><a name="linkage_externweak">extern_weak</a></b></tt>: </dt>
+  <dt><tt><b><a name="linkage_externweak">extern_weak</a></b></tt></dt>
   <dd>The semantics of this linkage follow the ELF object file model: the symbol
      is weak until linked; if not linked, the symbol becomes null instead of
       being an undefined reference.</dd>
 
-  <dt><tt><b><a name="linkage_linkonce">linkonce_odr</a></b></tt>: </dt>
-  <dt><tt><b><a name="linkage_weak">weak_odr</a></b></tt>: </dt>
+  <dt><tt><b><a name="linkage_linkonce_odr">linkonce_odr</a></b></tt></dt>
+  <dt><tt><b><a name="linkage_weak_odr">weak_odr</a></b></tt></dt>
   <dd>Some languages allow differing globals to be merged, such as two functions
       with different semantics.  Other languages, such as <tt>C++</tt>, ensure
       that only equivalent globals are ever merged (the "one definition rule" -
@@ -607,14 +612,14 @@ define i32 @main() {                                              <i>; i32()* </
    DLLs (Dynamic Link Libraries).</p>
 
 <dl>
-  <dt><tt><b><a name="linkage_dllimport">dllimport</a></b></tt>: </dt>
+  <dt><tt><b><a name="linkage_dllimport">dllimport</a></b></tt></dt>
   <dd>"<tt>dllimport</tt>" linkage causes the compiler to reference a function
       or variable via a global pointer to a pointer that is set up by the DLL
       exporting the symbol. On Microsoft Windows targets, the pointer name is
       formed by combining <code>__imp_</code> and the function or variable
       name.</dd>
 
-  <dt><tt><b><a name="linkage_dllexport">dllexport</a></b></tt>: </dt>
+  <dt><tt><b><a name="linkage_dllexport">dllexport</a></b></tt></dt>
   <dd>"<tt>dllexport</tt>" linkage causes the compiler to provide a global
       pointer to a pointer in a DLL, so that it can be referenced with the
       <tt>dllimport</tt> attribute. On Microsoft Windows targets, the pointer
@@ -927,24 +932,24 @@ declare signext i8 @returns_signed_char()
 <p>Currently, only the following parameter attributes are defined:</p>
 
 <dl>
-  <dt><tt>zeroext</tt></dt>
+  <dt><tt><b>zeroext</b></tt></dt>
   <dd>This indicates to the code generator that the parameter or return value
       should be zero-extended to a 32-bit value by the caller (for a parameter)
       or the callee (for a return value).</dd>
 
-  <dt><tt>signext</tt></dt>
+  <dt><tt><b>signext</b></tt></dt>
   <dd>This indicates to the code generator that the parameter or return value
       should be sign-extended to a 32-bit value by the caller (for a parameter)
       or the callee (for a return value).</dd>
 
-  <dt><tt>inreg</tt></dt>
+  <dt><tt><b>inreg</b></tt></dt>
   <dd>This indicates that this parameter or return value should be treated in a
      special target-dependent fashion while emitting code for a function
       call or return (usually, by putting it in a register as opposed to memory,
       though some targets use it to distinguish between two different kinds of
       registers).  Use of this attribute is target-specific.</dd>
 
-  <dt><tt><a name="byval">byval</a></tt></dt>
+  <dt><tt><b><a name="byval">byval</a></b></tt></dt>
   <dd>This indicates that the pointer parameter should really be passed by value
       to the function.  The attribute implies that a hidden copy of the pointee
       is made between the caller and the callee, so the callee is unable to
@@ -959,7 +964,7 @@ declare signext i8 @returns_signed_char()
       generator that usually indicates a desired alignment for the synthesized
       stack slot.</dd>
 
-  <dt><tt>sret</tt></dt>
+  <dt><tt><b>sret</b></tt></dt>
   <dd>This indicates that the pointer parameter specifies the address of a
       structure that is the return value of the function in the source program.
       This pointer must be guaranteed by the caller to be valid: loads and
@@ -967,7 +972,7 @@ declare signext i8 @returns_signed_char()
       may only be applied to the first parameter. This is not a valid attribute
       for return values. </dd>
 
-  <dt><tt>noalias</tt></dt>
+  <dt><tt><b>noalias</b></tt></dt>
   <dd>This indicates that the pointer does not alias any global or any other
       parameter.  The caller is responsible for ensuring that this is the
       case. On a function return value, <tt>noalias</tt> additionally indicates
@@ -977,12 +982,12 @@ declare signext i8 @returns_signed_char()
       <a href="http://llvm.org/docs/AliasAnalysis.html#MustMayNo">alias
       analysis</a>.</dd>
 
-  <dt><tt>nocapture</tt></dt>
+  <dt><tt><b>nocapture</b></tt></dt>
   <dd>This indicates that the callee does not make any copies of the pointer
       that outlive the callee itself. This is not a valid attribute for return
       values.</dd>
 
-  <dt><tt>nest</tt></dt>
+  <dt><tt><b>nest</b></tt></dt>
   <dd>This indicates that the pointer parameter can be excised using the
       <a href="#int_trampoline">trampoline intrinsics</a>. This is not a valid
       attribute for return values.</dd>
@@ -1002,7 +1007,7 @@ declare signext i8 @returns_signed_char()
 
 <div class="doc_code">
 <pre>
-define void @f() gc "name" { ...
+define void @f() gc "name" { ... }
 </pre>
 </div>
 
@@ -1032,42 +1037,42 @@ define void @f() gc "name" { ...
 define void @f() noinline { ... }
 define void @f() alwaysinline { ... }
 define void @f() alwaysinline optsize { ... }
-define void @f() optsize
+define void @f() optsize { ... }
 </pre>
 </div>
 
 <dl>
-  <dt><tt>alwaysinline</tt></dt>
+  <dt><tt><b>alwaysinline</b></tt></dt>
   <dd>This attribute indicates that the inliner should attempt to inline this
       function into callers whenever possible, ignoring any active inlining size
       threshold for this caller.</dd>
 
-  <dt><tt>inlinehint</tt></dt>
+  <dt><tt><b>inlinehint</b></tt></dt>
   <dd>This attribute indicates that the source code contained a hint that inlining
       this function is desirable (such as the "inline" keyword in C/C++).  It
       is just a hint; it imposes no requirements on the inliner.</dd>
 
-  <dt><tt>noinline</tt></dt>
+  <dt><tt><b>noinline</b></tt></dt>
   <dd>This attribute indicates that the inliner should never inline this
       function in any situation. This attribute may not be used together with
       the <tt>alwaysinline</tt> attribute.</dd>
 
-  <dt><tt>optsize</tt></dt>
+  <dt><tt><b>optsize</b></tt></dt>
   <dd>This attribute suggests that optimization passes and code generator passes
       make choices that keep the code size of this function low, and otherwise
       do optimizations specifically to reduce code size.</dd>
 
-  <dt><tt>noreturn</tt></dt>
+  <dt><tt><b>noreturn</b></tt></dt>
   <dd>This function attribute indicates that the function never returns
       normally.  This produces undefined behavior at runtime if the function
       ever does dynamically return.</dd>
 
-  <dt><tt>nounwind</tt></dt>
+  <dt><tt><b>nounwind</b></tt></dt>
   <dd>This function attribute indicates that the function never returns with an
       unwind or exceptional control flow.  If the function does unwind, its
       runtime behavior is undefined.</dd>
 
-  <dt><tt>readnone</tt></dt>
+  <dt><tt><b>readnone</b></tt></dt>
   <dd>This attribute indicates that the function computes its result (or decides
       to unwind an exception) based strictly on its arguments, without
       dereferencing any pointer arguments or otherwise accessing any mutable
@@ -1078,7 +1083,7 @@ define void @f() optsize
       exceptions by calling the <tt>C++</tt> exception throwing methods, but
       could use the <tt>unwind</tt> instruction.</dd>
 
-  <dt><tt><a name="readonly">readonly</a></tt></dt>
+  <dt><tt><b><a name="readonly">readonly</a></b></tt></dt>
   <dd>This attribute indicates that the function does not write through any
       pointer arguments (including <tt><a href="#byval">byval</a></tt>
       arguments) or otherwise modify any state (e.g. memory, control registers,
@@ -1089,7 +1094,7 @@ define void @f() optsize
       exception by calling the <tt>C++</tt> exception throwing methods, but may
       use the <tt>unwind</tt> instruction.</dd>
 
-  <dt><tt><a name="ssp">ssp</a></tt></dt>
+  <dt><tt><b><a name="ssp">ssp</a></b></tt></dt>
   <dd>This attribute indicates that the function should emit a stack smashing
       protector. It is in the form of a "canary"&mdash;a random value placed on
       the stack before the local variables that's checked upon return from the
@@ -1100,7 +1105,7 @@ define void @f() optsize
       function that doesn't have an <tt>ssp</tt> attribute, then the resulting
       function will have an <tt>ssp</tt> attribute.</dd>
 
-  <dt><tt>sspreq</tt></dt>
+  <dt><tt><b>sspreq</b></tt></dt>
   <dd>This attribute indicates that the function should <em>always</em> emit a
       stack smashing protector. This overrides
       the <tt><a href="#ssp">ssp</a></tt> function attribute.<br>
@@ -1110,14 +1115,14 @@ define void @f() optsize
       an <tt>ssp</tt> attribute, then the resulting function will have
       an <tt>sspreq</tt> attribute.</dd>
 
-  <dt><tt>noredzone</tt></dt>
+  <dt><tt><b>noredzone</b></tt></dt>
   <dd>This attribute indicates that the code generator should not use a red
       zone, even if the target-specific ABI normally permits it.</dd>
 
-  <dt><tt>noimplicitfloat</tt></dt>
+  <dt><tt><b>noimplicitfloat</b></tt></dt>
   <dd>This attribute disables implicit floating-point instructions.</dd>
 
-  <dt><tt>naked</tt></dt>
+  <dt><tt><b>naked</b></tt></dt>
   <dd>This attribute disables prologue / epilogue emission for the function.
       This can have very system-specific consequences.</dd>
 </dl>
@@ -1210,6 +1215,13 @@ target datalayout = "<i>layout specification</i>"
   <dt><tt>s<i>size</i>:<i>abi</i>:<i>pref</i></tt></dt>
   <dd>This specifies the alignment for a stack object of a given bit
       <i>size</i>.</dd>
+
+  <dt><tt>n<i>size1</i>:<i>size2</i>:<i>size3</i>...</tt></dt>
+  <dd>This specifies a set of native integer widths for the target CPU
+      in bits.  For example, it might contain "n32" for 32-bit PowerPC,
+      "n32:64" for PowerPC 64, or "n8:16:32:64" for X86-64.  Elements of
+      this set are considered to support most general arithmetic
+      operations efficiently (see the example after this list).</dd>
 </dl>
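+
+<p>As a purely illustrative sketch (this exact string is hypothetical and not
+   taken from any particular target), a 64-bit little-endian target whose CPU
+   handles 8-, 16-, 32- and 64-bit integers natively might combine the
+   specifications above as:</p>
+
+<div class="doc_code">
+<pre>
+target datalayout = "e-p:64:64:64-i64:64:64-n8:16:32:64"
+</pre>
+</div>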
 
 <p>When constructing the data layout for a given target, LLVM starts with a
@@ -1564,12 +1576,12 @@ Classifications</a> </div>
   </tr>
 </table>
 
-<p>Note that 'variable sized arrays' can be implemented in LLVM with a zero
-   length array.  Normally, accesses past the end of an array are undefined in
-   LLVM (e.g. it is illegal to access the 5th element of a 3 element array).  As
-   a special case, however, zero length arrays are recognized to be variable
-   length.  This allows implementation of 'pascal style arrays' with the LLVM
-   type "<tt>{ i32, [0 x float]}</tt>", for example.</p>
+<p>There is no restriction on indexing beyond the end of the array implied by
+   a static type (though there are restrictions on indexing beyond the bounds
+   of an allocated object in some cases). This means that single-dimension
+   'variable sized array' addressing can be implemented in LLVM with a zero
+   length array type. An implementation of 'pascal style arrays' in LLVM could
+   use the type "<tt>{ i32, [0 x float]}</tt>", for example.</p>
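+
+<p>For illustration only, a hypothetical sketch of addressing into such a
+   structure (the type and function names are invented for this example):</p>
+
+<div class="doc_code">
+<pre>
+%PascalArray = type { i32, [0 x float] }    <i>; length, then the data</i>
+
+define float @get_element(%PascalArray* %a, i32 %i) {
+  <i>; Index through the zero length array member to element %i.</i>
+  %p = getelementptr %PascalArray* %a, i32 0, i32 1, i32 %i
+  %v = load float* %p
+  ret float %v
+}
+</pre>
+</div>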
 
 <p>Note that the code generator does not yet support using large aggregate
    types as function return types. The specific limit on how large an aggregate
@@ -2019,7 +2031,7 @@ Classifications</a> </div>
 <div class="doc_text">
 
 <p>The string '<tt>undef</tt>' can be used anywhere a constant is expected, and
-   indicates that the user of the value may recieve an unspecified bit-pattern.
+   indicates that the user of the value may receive an unspecified bit-pattern.
    Undefined values may be of any type (other than label or void) and be used
    anywhere a constant is permitted.</p>
 
@@ -2118,7 +2130,7 @@ number of reasons, but the short answer is that an undef "variable" can
 arbitrarily change its value over its "live range".  This is true because the
 "variable" doesn't actually <em>have a live range</em>.  Instead, the value is
 logically read from arbitrary registers that happen to be around when needed,
-so the value is not neccesarily consistent over time.  In fact, %A and %C need
+so the value is not necessarily consistent over time.  In fact, %A and %C need
 to have the same semantics or the core LLVM "replace all uses with" concept
 would not hold.</p>
 
@@ -2164,6 +2176,34 @@ has undefined behavior.</p>
 </div>
 
 <!-- ======================================================================= -->
+<div class="doc_subsection"><a name="blockaddress">Addresses of Basic
+    Blocks</a></div>
+<div class="doc_text">
+
+<p><b><tt>blockaddress(@function, %block)</tt></b></p>
+
+<p>The '<tt>blockaddress</tt>' constant computes the address of the specified
+   basic block in the specified function, and always has an i8* type.  Taking
+   the address of the entry block is illegal.</p>
+     
+<p>This value only has defined behavior when used as an operand to the
+   '<a href="#i_indirectbr"><tt>indirectbr</tt></a>' instruction or for comparisons
+   against null.  Pointer equality tests between label addresses are undefined
+   behavior - though, again, comparison against null is ok, and no label is
+   equal to the null pointer.  The value may also be passed around as an opaque
+   pointer-sized value as long as the bits are not inspected.  This allows
+   <tt>ptrtoint</tt> and arithmetic to be performed on these values so long as
+   the original value is reconstituted before the <tt>indirectbr</tt>.</p>
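+
+<p>For illustration only, a minimal sketch of the defined use (the function
+   and block names here are invented for the example):</p>
+
+<div class="doc_code">
+<pre>
+define i32 @f() {
+entry:
+  <i>; Take the address of a non-entry block and jump to it.</i>
+  indirectbr i8* blockaddress(@f, %bb1), [ label %bb1, label %bb2 ]
+bb1:
+  ret i32 1
+bb2:
+  ret i32 2
+}
+</pre>
+</div>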
+   
+<p>Finally, some targets may provide defined semantics when using the value as
+   an operand to inline assembly, but that is target-specific.</p>
+
+</div>
+
+
+<!-- ======================================================================= -->
 <div class="doc_subsection"><a name="constantexprs">Constant Expressions</a>
 </div>
 
@@ -2300,7 +2340,7 @@ has undefined behavior.</p>
    the two digit hex code.  For example: "<tt>!"test\00"</tt>".</p>
 
 <p>Metadata nodes are represented with notation similar to structure constants
-   (a comma separated list of elements, surrounded by braces and preceeded by an
+   (a comma separated list of elements, surrounded by braces and preceded by an
    exclamation point).  For example: "<tt>!{ metadata !"test\00", i32
    10}</tt>".</p>
 
@@ -2330,8 +2370,10 @@ has undefined behavior.</p>
    to <a href="#moduleasm"> Module-Level Inline Assembly</a>) through the use of
    a special value.  This value represents the inline assembler as a string
    (containing the instructions to emit), a list of operand constraints (stored
-   as a string), and a flag that indicates whether or not the inline asm
-   expression has side effects.  An example inline assembler expression is:</p>
+   as a string), a flag that indicates whether or not the inline asm
+   expression has side effects, and a flag indicating whether the function
+   containing the asm needs to align its stack conservatively.  An example
+   inline assembler expression is:</p>
 
 <div class="doc_code">
 <pre>
@@ -2359,6 +2401,22 @@ call void asm sideeffect "eieio", ""()
 </pre>
 </div>
 
+<p>In some cases inline asms will contain code that will not work unless the
+   stack is aligned in some way, such as calls or SSE instructions on x86, yet
+   will not themselves contain the code that performs that alignment.
+   The compiler should make conservative assumptions about what the asm might
+   contain and should generate its usual stack alignment code in the prologue
+   if the '<tt>alignstack</tt>' keyword is present:</p>
+
+<div class="doc_code">
+<pre>
+call void asm alignstack "eieio", ""()
+</pre>
+</div>
+
+<p>If both keywords appear, the '<tt>sideeffect</tt>' keyword must come
+   first.</p>
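+
+<p>For instance, to mark an inline assembler expression with both keywords
+   (mirroring the examples above):</p>
+
+<div class="doc_code">
+<pre>
+call void asm sideeffect alignstack "eieio", ""()
+</pre>
+</div>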
+
 <p>TODO: The format of the asm and constraints string still needs to be
    documented here.  Constraints on what can be done (e.g. duplication, moving,
    etc.) need to be documented.  This is probably best done by reference to
@@ -2487,6 +2545,7 @@ Instructions</a> </div>
    '<a href="#i_ret"><tt>ret</tt></a>' instruction, the
    '<a href="#i_br"><tt>br</tt></a>' instruction, the
    '<a href="#i_switch"><tt>switch</tt></a>' instruction, the
+   '<a href="#i_indirectbr"><tt>indirectbr</tt></a>' instruction, the
    '<a href="#i_invoke"><tt>invoke</tt></a>' instruction, the
    '<a href="#i_unwind"><tt>unwind</tt></a>' instruction, and the
    '<a href="#i_unreachable"><tt>unreachable</tt></a>' instruction.</p>
@@ -2619,8 +2678,8 @@ IfUnequal:
 <p>The <tt>switch</tt> instruction specifies a table of values and
    destinations. When the '<tt>switch</tt>' instruction is executed, this table
    is searched for the given value.  If the value is found, control flow is
-   transfered to the corresponding destination; otherwise, control flow is
-   transfered to the default destination.</p>
+   transferred to the corresponding destination; otherwise, control flow is
+   transferred to the default destination.</p>
 
 <h5>Implementation:</h5>
 <p>Depending on properties of the target machine and the particular
@@ -2645,6 +2704,55 @@ IfUnequal:
 
 </div>
 
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection">
+   <a name="i_indirectbr">'<tt>indirectbr</tt>' Instruction</a>
+</div>
+
+<div class="doc_text">
+
+<h5>Syntax:</h5>
+<pre>
+  indirectbr &lt;somety&gt;* &lt;address&gt;, [ label &lt;dest1&gt;, label &lt;dest2&gt;, ... ]
+</pre>
+
+<h5>Overview:</h5>
+
+<p>The '<tt>indirectbr</tt>' instruction implements an indirect branch to a label
+   within the current function, whose address is specified by
+   "<tt>address</tt>".  Address must be derived from a <a
+   href="#blockaddress">blockaddress</a> constant.</p>
+
+<h5>Arguments:</h5>
+
+<p>The '<tt>address</tt>' argument is the address of the label to jump to.  The
+   rest of the arguments indicate the full set of possible destinations that the
+   address may point to.  Blocks are allowed to occur multiple times in the
+   destination list, though this isn't particularly useful.</p>
+   
+<p>This destination list is required so that dataflow analysis has an accurate
+   understanding of the CFG.</p>
+
+<h5>Semantics:</h5>
+
+<p>Control transfers to the block specified in the address argument.  All
+   possible destination blocks must be listed in the label list, otherwise this
+   instruction has undefined behavior.  This implies that jumps to labels
+   defined in other functions have undefined behavior as well.</p>
+
+<h5>Implementation:</h5>
+
+<p>This is typically implemented with a jump through a register.</p>
+
+<h5>Example:</h5>
+<pre>
+ indirectbr i8* %Addr, [ label %bb1, label %bb2, label %bb3 ]
+</pre>
+
+</div>
+
+
 <!-- _______________________________________________________________________ -->
 <div class="doc_subsubsection">
   <a name="i_invoke">'<tt>invoke</tt>' Instruction</a>
@@ -3624,7 +3732,7 @@ Instruction</a> </div>
 
 <h5>Example:</h5>
 <pre>
-  %result = extractelement &lt;4 x i32&gt; %vec, i32 0    <i>; yields i32</i>
+  &lt;result&gt; = extractelement &lt;4 x i32&gt; %vec, i32 0    <i>; yields i32</i>
 </pre>
 
 </div>
@@ -3660,7 +3768,7 @@ Instruction</a> </div>
 
 <h5>Example:</h5>
 <pre>
-  %result = insertelement &lt;4 x i32&gt; %vec, i32 1, i32 0    <i>; yields &lt;4 x i32&gt;</i>
+  &lt;result&gt; = insertelement &lt;4 x i32&gt; %vec, i32 1, i32 0    <i>; yields &lt;4 x i32&gt;</i>
 </pre>
 
 </div>
@@ -3701,13 +3809,13 @@ Instruction</a> </div>
 
 <h5>Example:</h5>
 <pre>
-  %result = shufflevector &lt;4 x i32&gt; %v1, &lt;4 x i32&gt; %v2, 
+  &lt;result&gt; = shufflevector &lt;4 x i32&gt; %v1, &lt;4 x i32&gt; %v2, 
                           &lt;4 x i32&gt; &lt;i32 0, i32 4, i32 1, i32 5&gt;  <i>; yields &lt;4 x i32&gt;</i>
-  %result = shufflevector &lt;4 x i32&gt; %v1, &lt;4 x i32&gt; undef, 
+  &lt;result&gt; = shufflevector &lt;4 x i32&gt; %v1, &lt;4 x i32&gt; undef, 
                           &lt;4 x i32&gt; &lt;i32 0, i32 1, i32 2, i32 3&gt;  <i>; yields &lt;4 x i32&gt;</i> - Identity shuffle.
-  %result = shufflevector &lt;8 x i32&gt; %v1, &lt;8 x i32&gt; undef, 
+  &lt;result&gt; = shufflevector &lt;8 x i32&gt; %v1, &lt;8 x i32&gt; undef, 
                           &lt;4 x i32&gt; &lt;i32 0, i32 1, i32 2, i32 3&gt;  <i>; yields &lt;4 x i32&gt;</i>
-  %result = shufflevector &lt;4 x i32&gt; %v1, &lt;4 x i32&gt; %v2, 
+  &lt;result&gt; = shufflevector &lt;4 x i32&gt; %v1, &lt;4 x i32&gt; %v2, 
                           &lt;8 x i32&gt; &lt;i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7 &gt;  <i>; yields &lt;8 x i32&gt;</i>
 </pre>
 
@@ -3753,7 +3861,7 @@ Instruction</a> </div>
 
 <h5>Example:</h5>
 <pre>
-  %result = extractvalue {i32, float} %agg, 0    <i>; yields i32</i>
+  &lt;result&gt; = extractvalue {i32, float} %agg, 0    <i>; yields i32</i>
 </pre>
 
 </div>
@@ -3792,7 +3900,7 @@ Instruction</a> </div>
 
 <h5>Example:</h5>
 <pre>
-  %result = insertvalue {i32, float} %agg, i32 1, 0    <i>; yields {i32, float}</i>
+  &lt;result&gt; = insertvalue {i32, float} %agg, i32 1, 0    <i>; yields {i32, float}</i>
 </pre>
 
 </div>
@@ -3807,95 +3915,13 @@ Instruction</a> </div>
 
 <p>A key design point of an SSA-based representation is how it represents
    memory.  In LLVM, no memory locations are in SSA form, which makes things
-   very simple.  This section describes how to read, write, allocate, and free
+   very simple.  This section describes how to read, write, and allocate
    memory in LLVM.</p>
 
 </div>
 
 <!-- _______________________________________________________________________ -->
 <div class="doc_subsubsection">
-  <a name="i_malloc">'<tt>malloc</tt>' Instruction</a>
-</div>
-
-<div class="doc_text">
-
-<h5>Syntax:</h5>
-<pre>
-  &lt;result&gt; = malloc &lt;type&gt;[, i32 &lt;NumElements&gt;][, align &lt;alignment&gt;]     <i>; yields {type*}:result</i>
-</pre>
-
-<h5>Overview:</h5>
-<p>The '<tt>malloc</tt>' instruction allocates memory from the system heap and
-   returns a pointer to it. The object is always allocated in the generic
-   address space (address space zero).</p>
-
-<h5>Arguments:</h5>
-<p>The '<tt>malloc</tt>' instruction allocates
-   <tt>sizeof(&lt;type&gt;)*NumElements</tt> bytes of memory from the operating
-   system and returns a pointer of the appropriate type to the program.  If
-   "NumElements" is specified, it is the number of elements allocated, otherwise
-   "NumElements" is defaulted to be one.  If a constant alignment is specified,
-   the value result of the allocation is guaranteed to be aligned to at least
-   that boundary.  If not specified, or if zero, the target can choose to align
-   the allocation on any convenient boundary compatible with the type.</p>
-
-<p>'<tt>type</tt>' must be a sized type.</p>
-
-<h5>Semantics:</h5>
-<p>Memory is allocated using the system "<tt>malloc</tt>" function, and a
-   pointer is returned.  The result of a zero byte allocation is undefined.  The
-   result is null if there is insufficient memory available.</p>
-
-<h5>Example:</h5>
-<pre>
-  %array  = malloc [4 x i8]                     <i>; yields {[%4 x i8]*}:array</i>
-
-  %size   = <a href="#i_add">add</a> i32 2, 2                        <i>; yields {i32}:size = i32 4</i>
-  %array1 = malloc i8, i32 4                    <i>; yields {i8*}:array1</i>
-  %array2 = malloc [12 x i8], i32 %size         <i>; yields {[12 x i8]*}:array2</i>
-  %array3 = malloc i32, i32 4, align 1024       <i>; yields {i32*}:array3</i>
-  %array4 = malloc i32, align 1024              <i>; yields {i32*}:array4</i>
-</pre>
-
-<p>Note that the code generator does not yet respect the alignment value.</p>
-
-</div>
-
-<!-- _______________________________________________________________________ -->
-<div class="doc_subsubsection">
-  <a name="i_free">'<tt>free</tt>' Instruction</a>
-</div>
-
-<div class="doc_text">
-
-<h5>Syntax:</h5>
-<pre>
-  free &lt;type&gt; &lt;value&gt;                           <i>; yields {void}</i>
-</pre>
-
-<h5>Overview:</h5>
-<p>The '<tt>free</tt>' instruction returns memory back to the unused memory heap
-   to be reallocated in the future.</p>
-
-<h5>Arguments:</h5>
-<p>'<tt>value</tt>' shall be a pointer value that points to a value that was
-   allocated with the '<tt><a href="#i_malloc">malloc</a></tt>' instruction.</p>
-
-<h5>Semantics:</h5>
-<p>Access to the memory pointed to by the pointer is no longer defined after
-   this instruction executes.  If the pointer is null, the operation is a
-   noop.</p>
-
-<h5>Example:</h5>
-<pre>
-  %array  = <a href="#i_malloc">malloc</a> [4 x i8]                     <i>; yields {[4 x i8]*}:array</i>
-            free   [4 x i8]* %array
-</pre>
-
-</div>
-
-<!-- _______________________________________________________________________ -->
-<div class="doc_subsubsection">
   <a name="i_alloca">'<tt>alloca</tt>' Instruction</a>
 </div>
 
@@ -4227,7 +4253,7 @@ entry:
 <pre>
   %X = trunc i32 257 to i8              <i>; yields i8:1</i>
   %Y = trunc i32 123 to i1              <i>; yields i1:true</i>
-  %Y = trunc i32 122 to i1              <i>; yields i1:false</i>
+  %Z = trunc i32 122 to i1              <i>; yields i1:false</i>
 </pre>
 
 </div>
@@ -4411,7 +4437,7 @@ entry:
 <pre>
   %X = fptoui double 123.0 to i32      <i>; yields i32:123</i>
   %Y = fptoui float 1.0E+300 to i1     <i>; yields undefined:1</i>
-  %X = fptoui float 1.04E+17 to i8     <i>; yields undefined:1</i>
+  %Z = fptoui float 1.04E+17 to i8     <i>; yields undefined:1</i>
 </pre>
 
 </div>
@@ -4449,7 +4475,7 @@ entry:
 <pre>
   %X = fptosi double -123.0 to i32      <i>; yields i32:-123</i>
   %Y = fptosi float 1.0E-247 to i1      <i>; yields undefined:1</i>
-  %X = fptosi float 1.04E+17 to i8      <i>; yields undefined:1</i>
+  %Z = fptosi float 1.04E+17 to i8      <i>; yields undefined:1</i>
 </pre>
 
 </div>
@@ -4593,8 +4619,8 @@ entry:
 <h5>Example:</h5>
 <pre>
   %X = inttoptr i32 255 to i32*          <i>; yields zero extension on 64-bit architecture</i>
-  %X = inttoptr i32 255 to i32*          <i>; yields no-op on 32-bit architecture</i>
-  %Y = inttoptr i64 0 to i32*            <i>; yields truncation on 32-bit architecture</i>
+  %Y = inttoptr i32 255 to i32*          <i>; yields no-op on 32-bit architecture</i>
+  %Z = inttoptr i64 0 to i32*            <i>; yields truncation on 32-bit architecture</i>
 </pre>
 
 </div>
@@ -6598,7 +6624,8 @@ LLVM</a>.</p>
 
 <h5>Example:</h5>
 <pre>
-%ptr      = malloc i32
+%mallocP  = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
+%ptr      = bitcast i8* %mallocP to i32*
             store i32 4, i32* %ptr
 
 %result1  = load i32* %ptr      <i>; yields {i32}:result1 = 4</i>
@@ -6649,7 +6676,8 @@ LLVM</a>.</p>
 
 <h5>Examples:</h5>
 <pre>
-%ptr      = malloc i32
+%mallocP  = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
+%ptr      = bitcast i8* %mallocP to i32*
             store i32 4, i32* %ptr
 
 %val1     = add i32 4, 4
@@ -6704,7 +6732,8 @@ LLVM</a>.</p>
 
 <h5>Examples:</h5>
 <pre>
-%ptr      = malloc i32
+%mallocP  = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
+%ptr      = bitcast i8* %mallocP to i32*
             store i32 4, i32* %ptr
 
 %val1     = add i32 4, 4
@@ -6759,8 +6788,9 @@ LLVM</a>.</p>
 
 <h5>Examples:</h5>
 <pre>
-%ptr      = malloc i32
-        store i32 4, %ptr
+%mallocP  = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
+%ptr      = bitcast i8* %mallocP to i32*
+            store i32 4, i32* %ptr
 %result1  = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 4 )
                                 <i>; yields {i32}:result1 = 4</i>
 %result2  = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 2 )
@@ -6810,8 +6840,9 @@ LLVM</a>.</p>
 
 <h5>Examples:</h5>
 <pre>
-%ptr      = malloc i32
-        store i32 8, %ptr
+%mallocP  = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
+%ptr      = bitcast i8* %mallocP to i32*
+            store i32 8, i32* %ptr
 %result1  = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 4 )
                                 <i>; yields {i32}:result1 = 8</i>
 %result2  = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 2 )
@@ -6887,8 +6918,9 @@ LLVM</a>.</p>
 
 <h5>Examples:</h5>
 <pre>
-%ptr      = malloc i32
-        store i32 0x0F0F, %ptr
+%mallocP  = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
+%ptr      = bitcast i8* %mallocP to i32*
+            store i32 0x0F0F, i32* %ptr
 %result0  = call i32 @llvm.atomic.load.nand.i32.p0i32( i32* %ptr, i32 0xFF )
                                 <i>; yields {i32}:result0 = 0x0F0F</i>
 %result1  = call i32 @llvm.atomic.load.and.i32.p0i32( i32* %ptr, i32 0xFF )
@@ -6965,8 +6997,9 @@ LLVM</a>.</p>
 
 <h5>Examples:</h5>
 <pre>
-%ptr      = malloc i32
-        store i32 7, %ptr
+%mallocP  = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
+%ptr      = bitcast i8* %mallocP to i32*
+            store i32 7, i32* %ptr
 %result0  = call i32 @llvm.atomic.load.min.i32.p0i32( i32* %ptr, i32 -2 )
                                 <i>; yields {i32}:result0 = 7</i>
 %result1  = call i32 @llvm.atomic.load.max.i32.p0i32( i32* %ptr, i32 8 )
@@ -6980,6 +7013,133 @@ LLVM</a>.</p>
 
 </div>
 
+
+<!-- ======================================================================= -->
+<div class="doc_subsection">
+  <a name="int_memorymarkers">Memory Use Markers</a>
+</div>
+
+<div class="doc_text">
+
+<p>This class of intrinsics exists to provide information about the lifetime of
+   memory objects and ranges where variables are immutable.</p>
+
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection">
+  <a name="int_lifetime_start">'<tt>llvm.lifetime.start</tt>' Intrinsic</a>
+</div>
+
+<div class="doc_text">
+
+<h5>Syntax:</h5>
+<pre>
+  declare void @llvm.lifetime.start(i64 &lt;size&gt;, i8* nocapture &lt;ptr&gt;)
+</pre>
+
+<h5>Overview:</h5>
+<p>The '<tt>llvm.lifetime.start</tt>' intrinsic specifies the start of a memory
+   object's lifetime.</p>
+
+<h5>Arguments:</h5>
+<p>The first argument is a constant integer representing the size of the
+   object, or -1 if it is variable sized.  The second argument is a pointer to
+   the object.</p>
+
+<h5>Semantics:</h5>
+<p>This intrinsic indicates that before this point in the code, the value of the
+   memory pointed to by <tt>ptr</tt> is dead.  This means that it is known to
+   never be used and has an undefined value.  A load from the pointer that
+   precedes this intrinsic can be replaced with
+   <tt>'<a href="#undefvalues">undef</a>'</tt>.</p>
+
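+<h5>Example:</h5>
+<p>As an illustrative sketch (the names <tt>%buf</tt> and <tt>%p</tt> are
+   hypothetical, not part of the intrinsic):</p>
+<pre>
+  %buf = alloca [16 x i8]                            <i>; a stack object</i>
+  %p = bitcast [16 x i8]* %buf to i8*
+  call void @llvm.lifetime.start(i64 16, i8* %p)     <i>; %buf is live from here on</i>
+</pre>
+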
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection">
+  <a name="int_lifetime_end">'<tt>llvm.lifetime.end</tt>' Intrinsic</a>
+</div>
+
+<div class="doc_text">
+
+<h5>Syntax:</h5>
+<pre>
+  declare void @llvm.lifetime.end(i64 &lt;size&gt;, i8* nocapture &lt;ptr&gt;)
+</pre>
+
+<h5>Overview:</h5>
+<p>The '<tt>llvm.lifetime.end</tt>' intrinsic specifies the end of a memory
+   object's lifetime.</p>
+
+<h5>Arguments:</h5>
+<p>The first argument is a constant integer representing the size of the
+   object, or -1 if it is variable sized.  The second argument is a pointer to
+   the object.</p>
+
+<h5>Semantics:</h5>
+<p>This intrinsic indicates that after this point in the code, the value of the
+   memory pointed to by <tt>ptr</tt> is dead.  This means that it is known to
+   never be used and has an undefined value.  Any stores into the memory object
+   following this intrinsic may be removed as dead.</p>
+
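+<h5>Example:</h5>
+<p>Continuing the illustrative sketch from
+   '<tt><a href="#int_lifetime_start">llvm.lifetime.start</a></tt>':</p>
+<pre>
+  call void @llvm.lifetime.end(i64 16, i8* %p)       <i>; %buf is dead from here on</i>
+  store i8 0, i8* %p                                 <i>; may be deleted as a dead store</i>
+</pre>
+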
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection">
+  <a name="int_invariant_start">'<tt>llvm.invariant.start</tt>' Intrinsic</a>
+</div>
+
+<div class="doc_text">
+
+<h5>Syntax:</h5>
+<pre>
+  declare {}* @llvm.invariant.start(i64 &lt;size&gt;, i8* nocapture &lt;ptr&gt;) readonly
+</pre>
+
+<h5>Overview:</h5>
+<p>The '<tt>llvm.invariant.start</tt>' intrinsic specifies that the contents of
+   a memory object will not change.</p>
+
+<h5>Arguments:</h5>
+<p>The first argument is a constant integer representing the size of the
+   object, or -1 if it is variable sized.  The second argument is a pointer to
+   the object.</p>
+
+<h5>Semantics:</h5>
+<p>This intrinsic indicates that until an <tt>llvm.invariant.end</tt> that uses
+   the return value, the referenced memory location is constant and
+   unchanging.</p>
+
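+<h5>Example:</h5>
+<p>As an illustrative sketch (<tt>%a</tt>, <tt>%p</tt>, and <tt>%inv</tt> are
+   hypothetical names):</p>
+<pre>
+  %a = alloca i32
+  store i32 7, i32* %a
+  %p = bitcast i32* %a to i8*
+  %inv = call {}* @llvm.invariant.start(i64 4, i8* %p)   <i>; loads of %a may now be assumed to yield 7</i>
+</pre>
+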
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection">
+  <a name="int_invariant_end">'<tt>llvm.invariant.end</tt>' Intrinsic</a>
+</div>
+
+<div class="doc_text">
+
+<h5>Syntax:</h5>
+<pre>
+  declare void @llvm.invariant.end({}* &lt;start&gt;, i64 &lt;size&gt;, i8* nocapture &lt;ptr&gt;)
+</pre>
+
+<h5>Overview:</h5>
+<p>The '<tt>llvm.invariant.end</tt>' intrinsic specifies that the contents of
+   a memory object are mutable.</p>
+
+<h5>Arguments:</h5>
+<p>The first argument is the matching <tt>llvm.invariant.start</tt> intrinsic.
+   The second argument is a constant integer representing the size of the
+   object, or -1 if it is variable sized.  The third argument is a pointer to
+   the object.</p>
+
+<h5>Semantics:</h5>
+<p>This intrinsic indicates that the memory is mutable again.</p>
+
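+<h5>Example:</h5>
+<p>Continuing the sketch from
+   '<tt><a href="#int_invariant_start">llvm.invariant.start</a></tt>':</p>
+<pre>
+  call void @llvm.invariant.end({}* %inv, i64 4, i8* %p)  <i>; %a is mutable again</i>
+  store i32 9, i32* %a
+</pre>
+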
+</div>
+
 <!-- ======================================================================= -->
 <div class="doc_subsection">
   <a name="int_general">General Intrinsics</a>
diff --git a/libclamav/c++/llvm/docs/LinkTimeOptimization.html b/libclamav/c++/llvm/docs/LinkTimeOptimization.html
index 4bc92b2..524a4e8 100644
--- a/libclamav/c++/llvm/docs/LinkTimeOptimization.html
+++ b/libclamav/c++/llvm/docs/LinkTimeOptimization.html
@@ -166,7 +166,7 @@ $ llvm-gcc a.o main.o -o main # &lt;-- standard link command without any modific
     provided by the linker on various platforms are not unique. This means 
     this new tool needs to support all such features and platforms in one 
     super tool, or a separate tool per platform is required. This increases 
-    maintance cost for link time optimizer significantly, which is not 
+    maintenance cost for the link time optimizer significantly, which is not 
     necessary. This approach also requires staying synchronized with linker 
     developments on various platforms, which is not the main focus of the link 
     time optimizer. Finally, this approach increases end user's build time due 
@@ -189,7 +189,7 @@ $ llvm-gcc a.o main.o -o main # &lt;-- standard link command without any modific
   user-supplied information, such as a list of exported symbols. LLVM 
   optimizer collects control flow information, data flow information and knows 
   much more about program structure from the optimizer's point of view. 
-  Our goal is to take advantage of tight intergration between the linker and 
+  Our goal is to take advantage of tight integration between the linker and 
   the optimizer by sharing this information during various linking phases.
 </p>
 </div>
diff --git a/libclamav/c++/llvm/docs/MakefileGuide.html b/libclamav/c++/llvm/docs/MakefileGuide.html
index ad44be8..a9c0725 100644
--- a/libclamav/c++/llvm/docs/MakefileGuide.html
+++ b/libclamav/c++/llvm/docs/MakefileGuide.html
@@ -261,7 +261,7 @@
 <!-- ======================================================================= -->
 <div class="doc_subsubsection"><a name="BCModules">Bitcode Modules</a></div>
 <div class="doc_text">
-  <p>In some situations, it is desireable to build a single bitcode module from
+  <p>In some situations, it is desirable to build a single bitcode module from
   a variety of sources, instead of an archive, shared library, or bitcode 
   library. Bitcode modules can be specified in addition to any of the other
   types of libraries by defining the <a href="#MODULE_NAME">MODULE_NAME</a>
diff --git a/libclamav/c++/llvm/docs/Passes.html b/libclamav/c++/llvm/docs/Passes.html
index 362be32..bbf6b3d 100644
--- a/libclamav/c++/llvm/docs/Passes.html
+++ b/libclamav/c++/llvm/docs/Passes.html
@@ -78,7 +78,6 @@ perl -e '$/ = undef; for (split(/\n/, <>)) { s:^ *///? ?::; print "  <p>\n" if !
 <tr><td><a href="#anders-aa">-anders-aa</a></td><td>Andersen's Interprocedural Alias Analysis</td></tr>
 <tr><td><a href="#basicaa">-basicaa</a></td><td>Basic Alias Analysis (default AA impl)</td></tr>
 <tr><td><a href="#basiccg">-basiccg</a></td><td>Basic CallGraph Construction</td></tr>
-<tr><td><a href="#basicvn">-basicvn</a></td><td>Basic Value Numbering (default GVN impl)</td></tr>
 <tr><td><a href="#codegenprepare">-codegenprepare</a></td><td>Optimize for code generation</td></tr>
 <tr><td><a href="#count-aa">-count-aa</a></td><td>Count Alias Analysis Query Responses</td></tr>
 <tr><td><a href="#debug-aa">-debug-aa</a></td><td>AA use debugger</td></tr>
@@ -90,7 +89,6 @@ perl -e '$/ = undef; for (split(/\n/, <>)) { s:^ *///? ?::; print "  <p>\n" if !
 <tr><td><a href="#globalsmodref-aa">-globalsmodref-aa</a></td><td>Simple mod/ref analysis for globals</td></tr>
 <tr><td><a href="#instcount">-instcount</a></td><td>Counts the various types of Instructions</td></tr>
 <tr><td><a href="#intervals">-intervals</a></td><td>Interval Partition Construction</td></tr>
-<tr><td><a href="#load-vn">-load-vn</a></td><td>Load Value Numbering</td></tr>
 <tr><td><a href="#loops">-loops</a></td><td>Natural Loop Construction</td></tr>
 <tr><td><a href="#memdep">-memdep</a></td><td>Memory Dependence Analysis</td></tr>
 <tr><td><a href="#no-aa">-no-aa</a></td><td>No Alias Analysis (always returns 'may' alias)</td></tr>
@@ -125,11 +123,9 @@ perl -e '$/ = undef; for (split(/\n/, <>)) { s:^ *///? ?::; print "  <p>\n" if !
 <tr><td><a href="#deadtypeelim">-deadtypeelim</a></td><td>Dead Type Elimination</td></tr>
 <tr><td><a href="#die">-die</a></td><td>Dead Instruction Elimination</td></tr>
 <tr><td><a href="#dse">-dse</a></td><td>Dead Store Elimination</td></tr>
-<tr><td><a href="#gcse">-gcse</a></td><td>Global Common Subexpression Elimination</td></tr>
 <tr><td><a href="#globaldce">-globaldce</a></td><td>Dead Global Elimination</td></tr>
 <tr><td><a href="#globalopt">-globalopt</a></td><td>Global Variable Optimizer</td></tr>
 <tr><td><a href="#gvn">-gvn</a></td><td>Global Value Numbering</td></tr>
-<tr><td><a href="#gvnpre">-gvnpre</a></td><td>Global Value Numbering/Partial Redundancy Elimination</td></tr>
 <tr><td><a href="#indmemrem">-indmemrem</a></td><td>Indirect Malloc and Free Removal</td></tr>
 <tr><td><a href="#indvars">-indvars</a></td><td>Canonicalize Induction Variables</td></tr>
 <tr><td><a href="#inline">-inline</a></td><td>Function Integration/Inlining</td></tr>
@@ -161,9 +157,7 @@ perl -e '$/ = undef; for (split(/\n/, <>)) { s:^ *///? ?::; print "  <p>\n" if !
 <tr><td><a href="#mem2reg">-mem2reg</a></td><td>Promote Memory to Register</td></tr>
 <tr><td><a href="#memcpyopt">-memcpyopt</a></td><td>Optimize use of memcpy and friends</td></tr>
 <tr><td><a href="#mergereturn">-mergereturn</a></td><td>Unify function exit nodes</td></tr>
-<tr><td><a href="#predsimplify">-predsimplify</a></td><td>Predicate Simplifier</td></tr>
 <tr><td><a href="#prune-eh">-prune-eh</a></td><td>Remove unused exception handling info</td></tr>
-<tr><td><a href="#raiseallocs">-raiseallocs</a></td><td>Raise allocations from calls to instructions</td></tr>
 <tr><td><a href="#reassociate">-reassociate</a></td><td>Reassociate expressions</td></tr>
 <tr><td><a href="#reg2mem">-reg2mem</a></td><td>Demote all values to stack slots</td></tr>
 <tr><td><a href="#scalarrepl">-scalarrepl</a></td><td>Scalar Replacement of Aggregates</td></tr>
@@ -304,25 +298,6 @@ perl -e '$/ = undef; for (split(/\n/, <>)) { s:^ *///? ?::; print "  <p>\n" if !
 
 <!-------------------------------------------------------------------------- -->
 <div class="doc_subsection">
-  <a name="basicvn">Basic Value Numbering (default Value Numbering impl)</a>
-</div>
-<div class="doc_text">
-  <p>
-  This is the default implementation of the <code>ValueNumbering</code>
-  interface.  It walks the SSA def-use chains to trivially identify
-  lexically identical expressions.  This does not require any ahead of time
-  analysis, so it is a very fast default implementation.
-  </p>
-  <p>
-  The ValueNumbering analysis passes are mostly deprecated. They are only used
-  by the <a href="#gcse">Global Common Subexpression Elimination pass</a>, which
-  is deprecated by the <a href="#gvn">Global Value Numbering pass</a> (which
-  does its value numbering on its own).
-  </p>
-</div>
-
-<!-------------------------------------------------------------------------- -->
-<div class="doc_subsection">
   <a name="codegenprepare">Optimize for code generation</a>
 </div>
 <div class="doc_text">
@@ -461,28 +436,6 @@ perl -e '$/ = undef; for (split(/\n/, <>)) { s:^ *///? ?::; print "  <p>\n" if !
 
 <!-------------------------------------------------------------------------- -->
 <div class="doc_subsection">
-  <a name="load-vn">Load Value Numbering</a>
-</div>
-<div class="doc_text">
-  <p>
-  This pass value numbers load and call instructions.  To do this, it finds
-  lexically identical load instructions, and uses alias analysis to determine
-  which loads are guaranteed to produce the same value.  To value number call
-  instructions, it looks for calls to functions that do not write to memory
-  which do not have intervening instructions that clobber the memory that is
-  read from.
-  </p>
-  
-  <p>
-  This pass builds off of another value numbering pass to implement value
-  numbering for non-load and non-call instructions.  It uses Alias Analysis so
-  that it can disambiguate the load instructions.  The more powerful these base
-  analyses are, the more powerful the resultant value numbering will be.
-  </p>
-</div>
-
-<!-------------------------------------------------------------------------- -->
-<div class="doc_subsection">
   <a name="loops">Natural Loop Construction</a>
 </div>
 <div class="doc_text">
@@ -865,23 +818,6 @@ perl -e '$/ = undef; for (split(/\n/, <>)) { s:^ *///? ?::; print "  <p>\n" if !
 
 <!-------------------------------------------------------------------------- -->
 <div class="doc_subsection">
-  <a name="gcse">Global Common Subexpression Elimination</a>
-</div>
-<div class="doc_text">
-  <p>
-  This pass is designed to be a very quick global transformation that
-  eliminates global common subexpressions from a function.  It does this by
-  using an existing value numbering analysis pass to identify the common
-  subexpressions, eliminating them when possible.
-  </p>
-  <p>
-  This pass is deprecated by the <a href="#gvn">Global Value Numbering pass</a>
-  (which does a better job with its own value numbering).
-  </p>
-</div>
-
-<!-------------------------------------------------------------------------- -->
-<div class="doc_subsection">
   <a name="globaldce">Dead Global Elimination</a>
 </div>
 <div class="doc_text">
@@ -912,35 +848,11 @@ perl -e '$/ = undef; for (split(/\n/, <>)) { s:^ *///? ?::; print "  <p>\n" if !
 </div>
 <div class="doc_text">
   <p>
-  This pass performs global value numbering to eliminate fully redundant
-  instructions.  It also performs simple dead load elimination.
-  </p>
-  <p>
-  Note that this pass does the value numbering itself, it does not use the
-  ValueNumbering analysis passes.
+  This pass performs global value numbering to eliminate fully and partially
+  redundant instructions.  It also performs redundant load elimination.
   </p>
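+  <p>
+  For example, in the following hypothetical fragment the second load is fully
+  redundant, and GVN rewrites uses of <tt>%y</tt> to reuse <tt>%x</tt>:
+  </p>
+<blockquote><pre
+>%x = load i32* %p
+%y = load i32* %p    ; fully redundant with %x
+%z = add i32 %x, %y</pre></blockquote>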
 </div>
 
-<!-------------------------------------------------------------------------- -->
-<div class="doc_subsection">
-  <a name="gvnpre">Global Value Numbering/Partial Redundancy Elimination</a>
-</div>
-<div class="doc_text">
-  <p>
-  This pass performs a hybrid of global value numbering and partial redundancy
-  elimination, known as GVN-PRE.  It performs partial redundancy elimination on
-  values, rather than lexical expressions, allowing a more comprehensive view 
-  the optimization.  It replaces redundant values with uses of earlier 
-  occurences of the same value.  While this is beneficial in that it eliminates
-  unneeded computation, it also increases register pressure by creating large
-  live ranges, and should be used with caution on platforms that are very 
-  sensitive to register pressure.
-  </p>
-  <p>
-  Note that this pass does the value numbering itself, it does not use the
-  ValueNumbering analysis passes.
-  </p>
-</div>
 
 <!-------------------------------------------------------------------------- -->
 <div class="doc_subsection">
@@ -1578,28 +1490,6 @@ if (X &lt; 3) {</pre>
 
 <!-------------------------------------------------------------------------- -->
 <div class="doc_subsection">
-  <a name="predsimplify">Predicate Simplifier</a>
-</div>
-<div class="doc_text">
-  <p>
-  Path-sensitive optimizer. In a branch where <tt>x == y</tt>, replace uses of
-  <tt>x</tt> with <tt>y</tt>. Permits further optimization, such as the 
-  elimination of the unreachable call:
-  </p>
-  
-<blockquote><pre
->void test(int *p, int *q)
-{
-  if (p != q)
-    return;
-
-  if (*p != *q)
-    foo(); // unreachable
-}</pre></blockquote>
-</div>
-
-<!-------------------------------------------------------------------------- -->
-<div class="doc_subsection">
   <a name="prune-eh">Remove unused exception handling info</a>
 </div>
 <div class="doc_text">
@@ -1613,17 +1503,6 @@ if (X &lt; 3) {</pre>
 
 <!-------------------------------------------------------------------------- -->
 <div class="doc_subsection">
-  <a name="raiseallocs">Raise allocations from calls to instructions</a>
-</div>
-<div class="doc_text">
-  <p>
-  Converts <tt>@malloc</tt> and <tt>@free</tt> calls to <tt>malloc</tt> and
-  <tt>free</tt> instructions.
-  </p>
-</div>
-
-<!-------------------------------------------------------------------------- -->
-<div class="doc_subsection">
   <a name="reassociate">Reassociate expressions</a>
 </div>
 <div class="doc_text">
@@ -1653,7 +1532,7 @@ if (X &lt; 3) {</pre>
   <p>
  This pass demotes all registers to memory references.  It is intended to be
   the inverse of <a href="#mem2reg"><tt>-mem2reg</tt></a>.  By converting to
-  <tt>load</tt> instructions, the only values live accross basic blocks are
+  <tt>load</tt> instructions, the only values live across basic blocks are
   <tt>alloca</tt> instructions and <tt>load</tt> instructions before
   <tt>phi</tt> nodes. It is intended that this should make CFG hacking much 
   easier. To make later hacking easier, the entry block is split into two, such
@@ -1908,8 +1787,8 @@ if (X &lt; 3) {</pre>
         integrals f.e.</li>
     <li>All of the constants in a switch statement are of the correct type.</li>
     <li>The code is in valid SSA form.</li>
-    <li>It should be illegal to put a label into any other type (like a
-        structure) or to return one. [except constant arrays!]</li>
+    <li>It is illegal to put a label into any other type (like a structure) or 
+        to return one.</li>
     <li>Only phi nodes can be self referential: <tt>%x = add i32 %x, %x</tt> is
         invalid.</li>
     <li>PHI nodes must have an entry for each predecessor, with no extras.</li>
diff --git a/libclamav/c++/llvm/docs/ProgrammersManual.html b/libclamav/c++/llvm/docs/ProgrammersManual.html
index 4e97bc0..e4e3dc2 100644
--- a/libclamav/c++/llvm/docs/ProgrammersManual.html
+++ b/libclamav/c++/llvm/docs/ProgrammersManual.html
@@ -83,6 +83,7 @@ option</a></li>
       <li><a href="#dss_stringmap">"llvm/ADT/StringMap.h"</a></li>
       <li><a href="#dss_indexedmap">"llvm/ADT/IndexedMap.h"</a></li>
       <li><a href="#dss_densemap">"llvm/ADT/DenseMap.h"</a></li>
+      <li><a href="#dss_valuemap">"llvm/ADT/ValueMap.h"</a></li>
       <li><a href="#dss_map">&lt;map&gt;</a></li>
       <li><a href="#dss_othermap">Other Map-Like Container Options</a></li>
     </ul></li>
@@ -650,7 +651,7 @@ even if the source lives in multiple files.</p>
 <p>The <tt>DEBUG_WITH_TYPE</tt> macro is also available for situations where you
 would like to set <tt>DEBUG_TYPE</tt>, but only for one specific <tt>DEBUG</tt>
 statement. It takes an additional first parameter, which is the type to use. For
-example, the preceeding example could be written as:</p>
+example, the preceding example could be written as:</p>
 
 
 <div class="doc_code">
@@ -1492,6 +1493,23 @@ inserted into the map) that it needs internally.</p>
 
 <!-- _______________________________________________________________________ -->
 <div class="doc_subsubsection">
+  <a name="dss_valuemap">"llvm/ADT/ValueMap.h"</a>
+</div>
+
+<div class="doc_text">
+
+<p>
+ValueMap is a wrapper around a <a href="#dss_densemap">DenseMap</a> mapping
+Value*s (or subclasses) to another type.  When a Value is deleted or RAUW'ed
+(replaced via <tt>replaceAllUsesWith</tt>), ValueMap will update itself so the
+new version of the key is mapped to the same
+value, just as if the key were a WeakVH.  You can configure exactly how this
+happens, and what else happens on these two events, by passing
+a <code>Config</code> parameter to the ValueMap template.</p>
+
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection">
   <a name="dss_map">&lt;map&gt;</a>
 </div>
 
@@ -2983,7 +3001,7 @@ the <tt>lib/VMCore</tt> directory.</p>
   <dt><tt>VectorType</tt></dt>
   <dd>Subclass of SequentialType for vector types. A 
   vector type is similar to an ArrayType but is distinguished because it is 
-  a first class type wherease ArrayType is not. Vector types are used for 
+  a first class type whereas ArrayType is not. Vector types are used for 
  vector operations and are usually small vectors of an integer or floating 
   point type.</dd>
   <dt><tt>StructType</tt></dt>
@@ -3543,7 +3561,7 @@ Superclasses: <a href="#GlobalValue"><tt>GlobalValue</tt></a>,
 <a href="#Value"><tt>Value</tt></a></p>
 
 <p>The <tt>Function</tt> class represents a single procedure in LLVM.  It is
-actually one of the more complex classes in the LLVM heirarchy because it must
+actually one of the more complex classes in the LLVM hierarchy because it must
 keep track of a large amount of data.  The <tt>Function</tt> class keeps track
 of a list of <a href="#BasicBlock"><tt>BasicBlock</tt></a>s, a list of formal 
 <a href="#Argument"><tt>Argument</tt></a>s, and a 
@@ -3552,7 +3570,7 @@ of a list of <a href="#BasicBlock"><tt>BasicBlock</tt></a>s, a list of formal
 <p>The list of <a href="#BasicBlock"><tt>BasicBlock</tt></a>s is the most
 commonly used part of <tt>Function</tt> objects.  The list imposes an implicit
 ordering of the blocks in the function, which indicates how the code will be
-layed out by the backend.  Additionally, the first <a
+laid out by the backend.  Additionally, the first <a
 href="#BasicBlock"><tt>BasicBlock</tt></a> is the implicit entry node for the
 <tt>Function</tt>.  It is not legal in LLVM to explicitly branch to this initial
 block.  There are no implicit exit nodes, and in fact there may be multiple exit
@@ -3682,7 +3700,7 @@ Superclasses: <a href="#GlobalValue"><tt>GlobalValue</tt></a>,
 <a href="#User"><tt>User</tt></a>,
 <a href="#Value"><tt>Value</tt></a></p>
 
-<p>Global variables are represented with the (suprise suprise)
+<p>Global variables are represented with the (surprise surprise)
 <tt>GlobalVariable</tt> class. Like functions, <tt>GlobalVariable</tt>s are also
 subclasses of <a href="#GlobalValue"><tt>GlobalValue</tt></a>, and as such are
 always referenced by their address (global values must live in memory, so their
@@ -3732,7 +3750,7 @@ never change at runtime).</p>
 
   <li><tt><a href="#Constant">Constant</a> *getInitializer()</tt>
 
-    <p>Returns the intial value for a <tt>GlobalVariable</tt>.  It is not legal
+    <p>Returns the initial value for a <tt>GlobalVariable</tt>.  It is not legal
     to call this method if there is no initializer.</p></li>
 </ul>
 
diff --git a/libclamav/c++/llvm/docs/ReleaseNotes-2.6.html b/libclamav/c++/llvm/docs/ReleaseNotes-2.6.html
deleted file mode 100644
index 98e3565..0000000
--- a/libclamav/c++/llvm/docs/ReleaseNotes-2.6.html
+++ /dev/null
@@ -1,908 +0,0 @@
-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
-                      "http://www.w3.org/TR/html4/strict.dtd">
-<html>
-<head>
-  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
-  <link rel="stylesheet" href="llvm.css" type="text/css">
-  <title>LLVM 2.6 Release Notes</title>
-</head>
-<body>
-
-<div class="doc_title">LLVM 2.6 Release Notes</div>
-
-<ol>
-  <li><a href="#intro">Introduction</a></li>
-  <li><a href="#subproj">Sub-project Status Update</a></li>
-  <li><a href="#externalproj">External Projects Using LLVM 2.6</a></li>
-  <li><a href="#whatsnew">What's New in LLVM 2.6?</a></li>
-  <li><a href="GettingStarted.html">Installation Instructions</a></li>
-  <li><a href="#portability">Portability and Supported Platforms</a></li>
-  <li><a href="#knownproblems">Known Problems</a></li>
-  <li><a href="#additionalinfo">Additional Information</a></li>
-</ol>
-
-<div class="doc_author">
-  <p>Written by the <a href="http://llvm.org">LLVM Team</a></p>
-</div>
-
-<!-- *********************************************************************** -->
-<div class="doc_section">
-  <a name="intro">Introduction</a>
-</div>
-<!-- *********************************************************************** -->
-
-<div class="doc_text">
-
-<p>This document contains the release notes for the LLVM Compiler
-Infrastructure, release 2.6.  Here we describe the status of LLVM, including
-major improvements from the previous release and significant known problems.
-All LLVM releases may be downloaded from the <a
-href="http://llvm.org/releases/">LLVM releases web site</a>.</p>
-
-<p>For more information about LLVM, including information about the latest
-release, please check out the <a href="http://llvm.org/">main LLVM
-web site</a>.  If you have questions or comments, the <a
-href="http://mail.cs.uiuc.edu/mailman/listinfo/llvmdev">LLVM Developer's Mailing
-List</a> is a good place to send them.</p>
-
-<p>Note that if you are reading this file from a Subversion checkout or the
-main LLVM web page, this document applies to the <i>next</i> release, not the
-current one.  To see the release notes for a specific release, please see the
-<a href="http://llvm.org/releases/">releases page</a>.</p>
-
-</div>
-
-<!-- Unfinished features in 2.5:
-  Machine LICM
-  Machine Sinking
-  target-specific intrinsics
-  gold lto plugin
-  pre-alloc splitter, strong phi elim
-  <tt>llc -enable-value-prop</tt>, propagation of value info
-       (sign/zero ext info) from one MBB to another
-  debug info for optimized code
-  interpreter + libffi
-  postalloc scheduler: anti dependence breaking, hazard recognizer?
-
-initial support for debug line numbers when optimization enabled, not useful in
-  2.5 but will be for 2.6.
-
- -->
-
- <!-- for announcement email:
-   -->
-
-<!-- *********************************************************************** -->
-<div class="doc_section">
-  <a name="subproj">Sub-project Status Update</a>
-</div>
-<!-- *********************************************************************** -->
-
-<div class="doc_text">
-<p>
-The LLVM 2.6 distribution currently consists of code from the core LLVM
-repository &mdash;which roughly includes the LLVM optimizers, code generators
-and supporting tools &mdash; and the llvm-gcc repository.  In addition to this
-code, the LLVM Project includes other sub-projects that are in development.  The
-two which are the most actively developed are the <a href="#clang">Clang
-Project</a> and the <a href="#vmkit">VMKit Project</a>.
-</p>
-
-</div>
-
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="clang">Clang: C/C++/Objective-C Frontend Toolkit</a>
-</div>
-
-<div class="doc_text">
-
-<p>The <a href="http://clang.llvm.org/">Clang project</a> is an effort to build
-a set of new 'LLVM native' front-end technologies for the LLVM optimizer and
-code generator.  While Clang is not included in the LLVM 2.6 release, it is
-continuing to make major strides forward in all areas.  Its C and Objective-C
-parsing and code generation support is now very solid.  For example, it is
-capable of successfully building many real-world applications for X86-32
-and X86-64,
-including the <a href="http://wiki.freebsd.org/BuildingFreeBSDWithClang">FreeBSD
-kernel</a> and <a href="http://gcc.gnu.org/gcc-4.2/">gcc 4.2</a>.  C++ is also
-making <a href="http://clang.llvm.org/cxx_status.html">incredible progress</a>,
-and work on templates has recently started.  If you are
-interested in fast compiles and good diagnostics, we encourage you to try it out
-by <a href="http://clang.llvm.org/get_started.html">building from mainline</a>
-and reporting any issues you hit to the <a
-href="http://lists.cs.uiuc.edu/mailman/listinfo/cfe-dev">Clang front-end mailing
-list</a>.</p>
-
-<p>In the LLVM 2.6 time-frame, the Clang team has made many improvements:</p>
-
-<ul>
-<li>Something wonderful!</li>
-<li>AuroraUX / FreeBSD &amp; OpenBSD Toolchain support.</li>
-<li>Many many bugs are fixed and many features have been added.</li>
-</ul>
-</div>
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="clangsa">Clang Static Analyzer</a>
-</div>
-
-<div class="doc_text">
-
-<p>Previously announced in the 2.4 LLVM release, the Clang project also
-includes an early stage static source code analysis tool for <a
-href="http://clang.llvm.org/StaticAnalysis.html">automatically finding bugs</a>
-in C and Objective-C programs. The tool performs a growing set of checks to find
-bugs that occur on a specific path within a program.</p>
-
-<p>In the LLVM 2.6 time-frame there have been many significant improvements to
-XYZ.</p>
-
-<p>The set of checks performed by the static analyzer continues to expand, and
-future plans for the tool include full source-level inter-procedural analysis
-and deeper checks such as buffer overrun detection. There are many opportunities
-to extend and enhance the static analyzer, and anyone interested in working on
-this project is encouraged to get involved!</p>
-
-</div>
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="vmkit">VMKit: JVM/CLI Virtual Machine Implementation</a>
-</div>
-
-<div class="doc_text">
-<p>
-The <a href="http://vmkit.llvm.org/">VMKit project</a> is an implementation of
-a JVM and a CLI Virtual Machines (Microsoft .NET is an
-implementation of the CLI) using the Just-In-Time compiler of LLVM.</p>
-
-<p>Following LLVM 2.6, VMKit has its XYZ release that you can find on its
-<a href="http://vmkit.llvm.org/releases/">webpage</a>. The release includes
-bug fixes, cleanup and new features. The major changes are:</p>
-
-<ul>
-
-<li>Something wonderful!</li>
-
-</ul>
-</div>
-
-<!-- *********************************************************************** -->
-<div class="doc_section">
-  <a name="externalproj">External Projects Using LLVM 2.6</a>
-</div>
-<!-- *********************************************************************** -->
-
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="macruby">MacRuby</a>
-</div>
-
-<div class="doc_text">
-
-<p>
-<a href="http://macruby.org">MacRuby</a> is an implementation of Ruby on top of
-core Mac OS X technologies, such as the Objective-C common runtime and garbage
-collector, and the CoreFoundation framework. It is principally developed by
-Apple and aims at enabling the creation of full-fledged Mac OS X applications.
-</p>
-
-<p>
-MacRuby uses LLVM for optimization passes, JIT and AOT compilation of Ruby
-expressions. It also uses zero-cost DWARF exceptions to implement Ruby exception
-handling.</p>
-
-</div>
-
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="pure">Pure</a>
-</div>
-
-<div class="doc_text">
-<p>
-<a href="http://pure-lang.googlecode.com/">Pure</a>
-is an algebraic/functional programming language based on term rewriting.
-Programs are collections of equations which are used to evaluate expressions in
-a symbolic fashion. Pure offers dynamic typing, eager and lazy evaluation,
-lexical closures, a hygienic macro system (also based on term rewriting),
-built-in list and matrix support (including list and matrix comprehensions) and
-an easy-to-use C interface. The interpreter uses LLVM as a backend to
- JIT-compile Pure programs to fast native code.</p>
-
-<p>In addition to the usual algebraic data structures, Pure also has
-MATLAB-style matrices in order to support numeric computations and signal
-processing in an efficient way. Pure is mainly aimed at mathematical
-applications right now, but it has been designed as a general purpose language.
-The dynamic interpreter environment and the C interface make it possible to use
-it as a kind of functional scripting language for many application areas.
-</p>
-</div>
-
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="ldc">LLVM D Compiler</a>
-</div>
-
-<div class="doc_text">
-<p>
-<a href="http://www.dsource.org/projects/ldc">LDC</a> is an implementation of
-the D Programming Language using the LLVM optimizer and code generator.
-The LDC project works great with the LLVM 2.6 release.  General improvements in
-this
-cycle have included new inline asm constraint handling, better debug info
-support, general bugfixes, and better x86-64 support.  This has allowed
-some major improvements in LDC, getting us much closer to being as
-fully featured as the original DMD compiler from DigitalMars.
-</p>
-</div>
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="RoadsendPHP">Roadsend PHP</a>
-</div>
-
-<div class="doc_text">
-<p><a href="http://code.roadsend.com/rphp">Roadsend PHP</a> (rphp) is an open
-source implementation of the PHP programming 
-language that uses LLVM for its optimizer, JIT, and static compiler. This is a 
-reimplementation of an earlier project that is now based on LLVM.</p>
-</div>
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="Unladen Swallow">Unladen Swallow</a>
-</div>
-
-<div class="doc_text">
-<p><a href="http://code.google.com/p/unladen-swallow/">Unladen Swallow</a> is a
-branch of <a href="http://python.org/">Python</a> intended to be fully
-compatible and significantly faster.  It uses LLVM's optimization passes and JIT
-compiler.</p>
-</div>
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="Rubinius">Rubinius</a>
-</div>
-
-<div class="doc_text">
-<p><a href="http://github.com/evanphx/rubinius">Rubinius</a> is a new virtual
-machine for Ruby. It leverages LLVM to dynamically compile Ruby code down to
-machine code using LLVM's JIT.</p>
-</div>
-
-
-<!-- *********************************************************************** -->
-<div class="doc_section">
-  <a name="whatsnew">What's New in LLVM 2.6?</a>
-</div>
-<!-- *********************************************************************** -->
-
-<div class="doc_text">
-
-<p>This release includes a huge number of bug fixes, performance tweaks, and
-minor improvements.  Some of the major improvements and new features are listed
-in this section.
-</p>
-</div>
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="majorfeatures">Major New Features</a>
-</div>
-
-<div class="doc_text">
-
-<p>LLVM 2.6 includes several major new capabilities:</p>
-
-<ul>
-<li>Something wonderful!</li>
-<li>LLVM 2.6 includes a brand new experimental LLVM bindings to the Ada2005 programming language.</li>
-</ul>
-
-</div>
-
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="llvm-gcc">llvm-gcc 4.2 Improvements</a>
-</div>
-
-<div class="doc_text">
-
-<p>LLVM fully supports the llvm-gcc 4.2 front-end, which marries the GCC
-front-ends and driver with the LLVM optimizer and code generator.  It currently
-includes support for the C, C++, Objective-C, Ada, and Fortran front-ends.</p>
-
-<ul>
-<li>Something wonderful!</li>
-</ul>
-
-</div>
-
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="coreimprovements">LLVM IR and Core Improvements</a>
-</div>
-
-<div class="doc_text">
-<p>LLVM IR has several new features that are used by our existing front-ends and
-can be useful if you are writing a front-end for LLVM:</p>
-
-<ul>
-<li>Something wonderful!</li>
-</ul>
-
-</div>
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="optimizer">Optimizer Improvements</a>
-</div>
-
-<div class="doc_text">
-
-<p>In addition to a large array of bug fixes and minor performance tweaks, this
-release includes a few major enhancements and additions to the optimizers:</p>
-
-<ul>
-
-<li>Something wonderful!</li>
-
-</ul>
-
-</div>
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="codegen">Target Independent Code Generator Improvements</a>
-</div>
-
-<div class="doc_text">
-
-<p>We have put a significant amount of work into the code generator
-infrastructure, which allows us to implement more aggressive algorithms and make
-it run faster:</p>
-
-<ul>
-
-<li>Something wonderful!</li>
-</ul>
-</div>
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="x86">X86-32 and X86-64 Target Improvements</a>
-</div>
-
-<div class="doc_text">
-<p>New features of the X86 target include:
-</p>
-
-<ul>
-
-<li>Something wonderful!</li>
-</ul>
-
-</div>
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="pic16">PIC16 Target Improvements</a>
-</div>
-
-<div class="doc_text">
-<p>New features of the PIC16 target include:
-</p>
-
-<ul>
-<li>Something wonderful!</li>
-</ul>
-
-<p>Things not yet supported:</p>
-
-<ul>
-<li>Floating point.</li>
-<li>Passing/returning aggregate types to and from functions.</li>
-<li>Variable arguments.</li>
-<li>Indirect function calls.</li>
-<li>Interrupts/programs.</li>
-<li>Debug info.</li>
-</ul>
-
-</div>
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="ARM">ARM Target Improvements</a>
-</div>
-
-<div class="doc_text">
-<p>New features of the ARM target include:
-</p>
-
-<ul>
-
-<li>Preliminary support for processors, such as the Cortex-A8 and Cortex-A9,
-that implement version v7-A of the ARM architecture.  The ARM backend now
-supports both the Thumb2 and Advanced SIMD (Neon) instruction sets. The
-AAPCS-VFP "hard float" calling conventions are also supported with the
-<tt>-float-abi=hard</tt> flag. These features are still somewhat experimental
-and subject to change. The Neon intrinsics, in particular, may change in future
-releases of LLVM.
-</li>
-</ul>
-
-</div>
-
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="llvmc">Improvements in LLVMC</a>
-</div>
-
-<div class="doc_text">
-<p>New features include:</p>
-
-<ul>
-<li>Something wonderful!</li>
-</ul>
-
-</div>
-
-
-<!--=========================================================================-->
-<div class="doc_subsection">
-<a name="changes">Major Changes and Removed Features</a>
-</div>
-
-<div class="doc_text">
-
-<p>If you're already an LLVM user or developer with out-of-tree changes based
-on LLVM 2.5, this section lists some "gotchas" that you may run into upgrading
-from the previous release.</p>
-
-<ul>
-
-<li>Something horrible!</li>
-
-</ul>
-
-
-<p>In addition, many APIs have changed in this release.  Some of the major LLVM
-API changes are:</p>
-
-<ul>
-<li>LLVM's global uniquing tables for <tt>Type</tt>s and <tt>Constant</tt>s have
-    been privatized into members of an <tt>LLVMContext</tt>.  A number of APIs
-    now take an <tt>LLVMContext</tt> as a parameter.  To smooth the transition
-    for clients that will only ever use a single context, the new 
-    <tt>getGlobalContext()</tt> API can be used to access a default global 
-    context which can be passed in any and all cases where a context is 
-    required.
-<li>The <tt>getABITypeSize</tt> methods are now called <tt>getAllocSize</tt>.</li>
-<li>The <tt>Add</tt>, <tt>Sub</tt>, and <tt>Mul</tt> operators are no longer
-    overloaded for floating-point types. Floating-point addition, subtraction,
-    and multiplication are now represented with new operators <tt>FAdd</tt>,
-    <tt>FSub</tt>, and <tt>FMul</tt>. In the <tt>IRBuilder</tt> API,
-    <tt>CreateAdd</tt>, <tt>CreateSub</tt>, <tt>CreateMul</tt>, and
-    <tt>CreateNeg</tt> should only be used for integer arithmetic now;
-    <tt>CreateFAdd</tt>, <tt>CreateFSub</tt>, <tt>CreateFMul</tt>, and
-    <tt>CreateFNeg</tt> should now be used for floating-point arithmetic.</li>
-<li>The DynamicLibrary class can no longer be constructed, its functionality has
-    moved to static member functions.</li>
-<li><tt>raw_fd_ostream</tt>'s constructor for opening a given filename now
-    takes an extra <tt>Force</tt> argument. If <tt>Force</tt> is set to
-    <tt>false</tt>, an error will be reported if a file with the given name
-    already exists. If <tt>Force</tt> is set to <tt>true</tt>, the file will
-    be silently truncated (which is the behavior before this flag was
-    added).</li>
-<li><tt>SCEVHandle</tt> no longer exists, because reference counting is no
-longer done for <tt>SCEV*</tt> objects, instead <tt>const SCEV*</tt> should be
-used.</li>
-
-<li>Many APIs, notably <tt>llvm::Value</tt>, now use the <tt>StringRef</tt>
-and <tt>Twine</tt> classes instead of passing <tt>const char*</tt>
-or <tt>std::string</tt>, as described in
-the <a href="ProgrammersManual.html#string_apis">Programmer's Manual</a>. Most
-clients should be unaffected by this transition, unless they are used to <tt>Value::getName()</tt> returning a string. Here are some tips on updating to 2.6:
-  <ul>
-    <li><tt>getNameStr()</tt> is still available, and matches the old
-      behavior. Replacing <tt>getName()</tt> calls with this is an safe option,
-      although more efficient alternatives are now possible.</li>
-
-    <li>If you were just relying on <tt>getName()</tt> being able to be sent to
-      a <tt>std::ostream</tt>, consider migrating
-      to <tt>llvm::raw_ostream</tt>.</li>
-      
-    <li>If you were using <tt>getName().c_str()</tt> to get a <tt>const
-        char*</tt> pointer to the name, you can use <tt>getName().data()</tt>.
-        Note that this string (as before), may not be the entire name if the
-        name containts embedded null characters.</li>
-
-    <li>If you were using operator plus on the result of <tt>getName()</tt> and
-      treating the result as an <tt>std::string</tt>, you can either
-      uses <tt>Twine::str</tt> to get the result as an <tt>std::string</tt>, or
-      could move to a <tt>Twine</tt> based design.</li>
-
-    <li><tt>isName()</tt> should be replaced with comparison
-      against <tt>getName()</tt> (this is now efficient).
-  </ul>
-</li>
-
-<li>The registration interfaces for backend Targets has changed (what was
-previously TargetMachineRegistry). For backend authors, see the <a href="WritingAnLLVMBackend.html#TargetRegistration">Writing An LLVM Backend</a> guide. For clients, the notable API changes are:
-  <ul>
-    <li><tt>TargetMachineRegistry</tt> has been renamed
-      to <tt>TargetRegistry</tt>.</li>
-
-    <li>Clients should move to using the <tt>TargetRegistry::lookupTarget()</tt>
-      function to find targets.</li>
-  </ul>
-</li>
-
-<li>llvm-dis now fails if output file exists, instead of dumping to stdout.
-FIXME: describe any other tool changes due to the raw_fd_ostream change.  FIXME:
-This is not an API change, maybe there should be a tool changes section?</li>
-<li>temporarely due to Context API change passes should call doInitialization()
-method of the pass they inherit from, otherwise Context is NULL.
-FIXME: remove this entry when this is no longer needed.<li>
-</ul>
-
-</div>
-
-
-
-<!-- *********************************************************************** -->
-<div class="doc_section">
-  <a name="portability">Portability and Supported Platforms</a>
-</div>
-<!-- *********************************************************************** -->
-
-<div class="doc_text">
-
-<p>LLVM is known to work on the following platforms:</p>
-
-<ul>
-<li>Intel and AMD machines (IA32, X86-64, AMD64, EMT-64) running Red Hat
-Linux, Fedora Core, FreeBSD and AuroraUX (and probably other unix-like systems).</li>
-<li>PowerPC and X86-based Mac OS X systems, running 10.3 and above in 32-bit
-and 64-bit modes.</li>
-<li>Intel and AMD machines running on Win32 using MinGW libraries (native).</li>
-<li>Intel and AMD machines running on Win32 with the Cygwin libraries (limited
-    support is available for native builds with Visual C++).</li>
-<li>Sun UltraSPARC workstations running Solaris 10.</li>
-<li>Alpha-based machines running Debian GNU/Linux.</li>
-</ul>
-
-<p>The core LLVM infrastructure uses GNU autoconf to adapt itself
-to the machine and operating system on which it is built.  However, minor
-porting may be required to get LLVM to work on new platforms.  We welcome your
-portability patches and reports of successful builds or error messages.</p>
-
-</div>
-
-<!-- *********************************************************************** -->
-<div class="doc_section">
-  <a name="knownproblems">Known Problems</a>
-</div>
-<!-- *********************************************************************** -->
-
-<div class="doc_text">
-
-<p>This section contains significant known problems with the LLVM system,
-listed by component.  If you run into a problem, please check the <a
-href="http://llvm.org/bugs/">LLVM bug database</a> and submit a bug if
-there isn't already one.</p>
-
-<ul>
-<li>LLVM will not correctly compile on Solaris and/or OpenSolaris
-using the stock GCC 3.x.x series 'out the box',
-See: <a href="#brokengcc">Broken versions of GCC and other tools</a>.
-However, A <a href="http://pkg.auroraux.org/GCC">Modern GCC Build</a>
-for x86/x64 has been made available from the third party AuroraUX Project
-that has been meticulously tested for bootstrapping LLVM & Clang.</li>
-</ul>
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
-  <a name="experimental">Experimental features included with this release</a>
-</div>
-
-<div class="doc_text">
-
-<p>The following components of this LLVM release are either untested, known to
-be broken or unreliable, or are in early development.  These components should
-not be relied on, and bugs should not be filed against them, but they may be
-useful to some people.  In particular, if you would like to work on one of these
-components, please contact us on the <a
-href="http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev">LLVMdev list</a>.</p>
-
-<ul>
-<li>The MSIL, Alpha, SPU, MIPS, and PIC16 backends are experimental.</li>
-<li>The <tt>llc</tt> "<tt>-filetype=asm</tt>" (the default) is the only
-    supported value for this option.</li>
-</ul>
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
-  <a name="x86-be">Known problems with the X86 back-end</a>
-</div>
-
-<div class="doc_text">
-
-<ul>
-  <li>The X86 backend does not yet support
-    all <a href="http://llvm.org/PR879">inline assembly that uses the X86
-    floating point stack</a>.  It supports the 'f' and 't' constraints, but not
-    'u'.</li>
-  <li>The X86 backend generates inefficient floating point code when configured
-    to generate code for systems that don't have SSE2.</li>
-  <li>Win64 code generation wasn't widely tested. Everything should work, but we
-    expect small issues to happen. Also, llvm-gcc cannot build the mingw64
-    runtime currently due
-    to <a href="http://llvm.org/PR2255">several</a>
-    <a href="http://llvm.org/PR2257">bugs</a> and due to lack of support for
-    the
-    'u' inline assembly constraint and for X87 floating point inline assembly.</li>
-  <li>The X86-64 backend does not yet support the LLVM IR instruction
-      <tt>va_arg</tt>. Currently, the llvm-gcc and front-ends support variadic
-      argument constructs on X86-64 by lowering them manually.</li>
-</ul>
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
-  <a name="ppc-be">Known problems with the PowerPC back-end</a>
-</div>
-
-<div class="doc_text">
-
-<ul>
-<li>The Linux PPC32/ABI support needs testing for the interpreter and static
-compilation, and lacks support for debug information.</li>
-</ul>
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
-  <a name="arm-be">Known problems with the ARM back-end</a>
-</div>
-
-<div class="doc_text">
-
-<ul>
-<li>Support for the Advanced SIMD (Neon) instruction set is still incomplete
-and not well tested.  Some features may not work at all, and the code quality
-may be poor in some cases.</li>
-<li>Thumb mode works only on ARMv6 or higher processors. On sub-ARMv6
-processors, thumb programs can crash or produce wrong
-results (<a href="http://llvm.org/PR1388">PR1388</a>).</li>
-<li>Compilation for ARM Linux OABI (old ABI) is supported but not fully tested.
-</li>
-<li>There is a bug in QEMU-ARM (&lt;= 0.9.0) which causes it to incorrectly
- execute
-programs compiled with LLVM.  Please use more recent versions of QEMU.</li>
-</ul>
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
-  <a name="sparc-be">Known problems with the SPARC back-end</a>
-</div>
-
-<div class="doc_text">
-
-<ul>
-<li>The SPARC backend only supports the 32-bit SPARC ABI (-m32); it does not
-    support the 64-bit SPARC ABI (-m64).</li>
-</ul>
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
-  <a name="mips-be">Known problems with the MIPS back-end</a>
-</div>
-
-<div class="doc_text">
-
-<ul>
-<li>The O32 ABI is not fully supported.</li>
-<li>64-bit MIPS targets are not supported yet.</li>
-</ul>
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
-  <a name="alpha-be">Known problems with the Alpha back-end</a>
-</div>
-
-<div class="doc_text">
-
-<ul>
-
-<li>On 21164s, some rare FP arithmetic sequences which may trap do not have the
-appropriate nops inserted to ensure restartability.</li>
-
-</ul>
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
-  <a name="c-be">Known problems with the C back-end</a>
-</div>
-
-<div class="doc_text">
-
-<ul>
-<li><a href="http://llvm.org/PR802">The C backend has only basic support for
-    inline assembly code</a>.</li>
-<li><a href="http://llvm.org/PR1658">The C backend violates the ABI of common
-    C++ programs</a>, preventing intermixing between C++ compiled by the CBE and
-    C++ code compiled with <tt>llc</tt> or native compilers.</li>
-<li>The C backend does not support all exception handling constructs.</li>
-<li>The C backend does not support arbitrary precision integers.</li>
-</ul>
-
-</div>
-
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
-  <a name="c-fe">Known problems with the llvm-gcc C front-end</a>
-</div>
-
-<div class="doc_text">
-
-<p>llvm-gcc does not currently support <a href="http://llvm.org/PR869">Link-Time
-Optimization</a> on most platforms "out-of-the-box".  Please inquire on the
-LLVMdev mailing list if you are interested.</p>
-
-<p>The only major language feature of GCC not supported by llvm-gcc is
-    the <tt>__builtin_apply</tt> family of builtins.  However, some GCC
-    extensions are only supported on some targets.  For example, trampolines
-    (used when you take the address of a nested function) are one such
-    target-dependent feature.</p>
-
-<p>If you run into GCC extensions which are not supported, please let us know.
-</p>
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
-  <a name="c++-fe">Known problems with the llvm-gcc C++ front-end</a>
-</div>
-
-<div class="doc_text">
-
-<p>The C++ front-end is considered to be fully
-tested and works for a number of non-trivial programs, including LLVM
-itself, Qt, Mozilla, etc.</p>
-
-<ul>
-<li>Exception handling works well on the X86 and PowerPC targets. Currently
-  only Linux and Darwin targets are supported (both 32 and 64 bit).</li>
-</ul>
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
-  <a name="fortran-fe">Known problems with the llvm-gcc Fortran front-end</a>
-</div>
-
-<div class="doc_text">
-<ul>
-<li>Fortran support generally works, but there are still several unresolved bugs
-    in Bugzilla.  Please see the tools/gfortran component for details.</li>
-</ul>
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
-  <a name="ada-fe">Known problems with the llvm-gcc Ada front-end</a>
-</div>
-
-<div class="doc_text">
-The llvm-gcc 4.2 Ada compiler works fairly well; however, this is not a mature
-technology, and problems should be expected.
-<ul>
-<li>The Ada front-end currently only builds on X86-32.  This is mainly due
-to lack of trampoline support (pointers to nested functions) on other platforms.
-However, it <a href="http://llvm.org/PR2006">also fails to build on X86-64</a>
-which does support trampolines.</li>
-<li>The Ada front-end <a href="http://llvm.org/PR2007">fails to bootstrap</a>.
-This is due to lack of LLVM support for <tt>setjmp</tt>/<tt>longjmp</tt> style
-exception handling, which is used internally by the compiler.
-Workaround: configure with --disable-bootstrap.</li>
-<li>The c380004, <a href="http://llvm.org/PR2010">c393010</a>
-and <a href="http://llvm.org/PR2421">cxg2021</a> ACATS tests fail
-(c380004 also fails with gcc-4.2 mainline).
-If the compiler is built with checks disabled then <a href="http://llvm.org/PR2010">c393010</a>
-causes the compiler to go into an infinite loop, using up all system memory.</li>
-<li>Some GCC specific Ada tests continue to crash the compiler.</li>
-<li>The -E binder option (exception backtraces)
-<a href="http://llvm.org/PR1982">does not work</a> and will result in programs
-crashing if an exception is raised.  Workaround: do not use -E.</li>
-<li>Only discrete types <a href="http://llvm.org/PR1981">are allowed to start
-or finish at a non-byte offset</a> in a record.  Workaround: do not pack records
-or use representation clauses that result in a field of a non-discrete type
-starting or finishing in the middle of a byte.</li>
-<li>The <tt>lli</tt> interpreter <a href="http://llvm.org/PR2009">considers
-'main' as generated by the Ada binder to be invalid</a>.
-Workaround: hand edit the file to use pointers for <tt>argv</tt> and
-<tt>envp</tt> rather than integers.</li>
-<li>The <tt>-fstack-check</tt> option <a href="http://llvm.org/PR2008">is
-ignored</a>.</li>
-</ul>
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
-  <a name="ocaml-bindings">Known problems with the O'Caml bindings</a>
-</div>
-
-<div class="doc_text">
-
-<p>The Llvm.Linkage module is broken and has incorrect values: only
-Llvm.Linkage.External, Llvm.Linkage.Available_externally, and
-Llvm.Linkage.Link_once are correct. If you need any of the other linkage
-modes, you'll have to write an external C library to expose the
-functionality. This has been fixed in trunk.</p>
-</div>
-
-<!-- *********************************************************************** -->
-<div class="doc_section">
-  <a name="additionalinfo">Additional Information</a>
-</div>
-<!-- *********************************************************************** -->
-
-<div class="doc_text">
-
-<p>A wide variety of additional information is available on the <a
-href="http://llvm.org">LLVM web page</a>, in particular in the <a
-href="http://llvm.org/docs/">documentation</a> section.  The web page also
-contains versions of the API documentation that are kept up to date with the
-Subversion version of the source code.
-You can access versions of these documents specific to this release by going
-into the "<tt>llvm/docs/</tt>" directory in the LLVM tree.</p>
-
-<p>If you have any questions or comments about LLVM, please feel free to contact
-us via the <a href="http://llvm.org/docs/#maillist"> mailing
-lists</a>.</p>
-
-</div>
-
-<!-- *********************************************************************** -->
-
-<hr>
-<address>
-  <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
-  src="http://jigsaw.w3.org/css-validator/images/vcss-blue" alt="Valid CSS"></a>
-  <a href="http://validator.w3.org/check/referer"><img
-  src="http://www.w3.org/Icons/valid-html401-blue" alt="Valid HTML 4.01"></a>
-
-  <a href="http://llvm.org/">LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date$
-</address>
-
-</body>
-</html>
diff --git a/libclamav/c++/llvm/docs/ReleaseNotes.html b/libclamav/c++/llvm/docs/ReleaseNotes.html
index bdaba71..5a5a01a 100644
--- a/libclamav/c++/llvm/docs/ReleaseNotes.html
+++ b/libclamav/c++/llvm/docs/ReleaseNotes.html
@@ -4,17 +4,17 @@
 <head>
   <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
   <link rel="stylesheet" href="llvm.css" type="text/css">
-  <title>LLVM 2.5 Release Notes</title>
+  <title>LLVM 2.6 Release Notes</title>
 </head>
 <body>
 
-<div class="doc_title">LLVM 2.5 Release Notes</div>
+<div class="doc_title">LLVM 2.6 Release Notes</div>
 
 <ol>
   <li><a href="#intro">Introduction</a></li>
   <li><a href="#subproj">Sub-project Status Update</a></li>
-  <li><a href="#externalproj">External Projects Using LLVM 2.5</a></li>
-  <li><a href="#whatsnew">What's New in LLVM 2.5?</a></li>
+  <li><a href="#externalproj">External Projects Using LLVM 2.6</a></li>
+  <li><a href="#whatsnew">What's New in LLVM 2.6?</a></li>
   <li><a href="GettingStarted.html">Installation Instructions</a></li>
   <li><a href="#portability">Portability and Supported Platforms</a></li>
   <li><a href="#knownproblems">Known Problems</a></li>
@@ -34,7 +34,7 @@
 <div class="doc_text">
 
 <p>This document contains the release notes for the LLVM Compiler
-Infrastructure, release 2.5.  Here we describe the status of LLVM, including
+Infrastructure, release 2.6.  Here we describe the status of LLVM, including
 major improvements from the previous release and significant known problems.
 All LLVM releases may be downloaded from the <a
 href="http://llvm.org/releases/">LLVM releases web site</a>.</p>
@@ -51,25 +51,37 @@ current one.  To see the release notes for a specific release, please see the
 <a href="http://llvm.org/releases/">releases page</a>.</p>
 
 </div>
-
-<!-- Unfinished features in 2.5:
-  Machine LICM
-  Machine Sinking
-  target-specific intrinsics
-  gold lto plugin
-  pre-alloc splitter, strong phi elim
-  <tt>llc -enable-value-prop</tt>, propagation of value info
-       (sign/zero ext info) from one MBB to another
-  debug info for optimized code
-  interpreter + libffi
+ 
+
+<!--
+Almost dead code.
+  include/llvm/Analysis/LiveValues.h => Dan
+  lib/Transforms/IPO/MergeFunctions.cpp => consider for 2.8.
+  llvm/Analysis/PointerTracking.h => Edwin wants this, consider for 2.8.
+-->
+ 
+   
+<!-- Unfinished features in 2.6:
+  gcc plugin.
+  strong phi elim
+  variable debug info for optimized code
   postalloc scheduler: anti dependence breaking, hazard recognizer?
-
-initial support for debug line numbers when optimization enabled, not useful in
-  2.5 but will be for 2.6.
-
+  metadata
+  loop dependence analysis
+  ELF Writer?  How stable?
+  <li>PostRA scheduler improvements, ARM adoption (David Goodwin).</li>
+  2.7 supports the GDB 7.0 jit interfaces for debug info.
+  2.7 eliminates ADT/iterator.h
  -->
 
  <!-- for announcement email:
+ Logo web page.
+ llvm devmtg
+ compiler_rt
+ KLEE web page at klee.llvm.org
+ Many new papers added to /pubs/
+   Mention gcc plugin.
+
    -->
 
 <!-- *********************************************************************** -->
@@ -80,12 +92,11 @@ initial support for debug line numbers when optimization enabled, not useful in
 
 <div class="doc_text">
 <p>
-The LLVM 2.5 distribution currently consists of code from the core LLVM
-repository &mdash;which roughly includes the LLVM optimizers, code generators
-and supporting tools &mdash; and the llvm-gcc repository.  In addition to this
-code, the LLVM Project includes other sub-projects that are in development.  The
-two which are the most actively developed are the <a href="#clang">Clang
-Project</a> and the <a href="#vmkit">VMKit Project</a>.
+The LLVM 2.6 distribution currently consists of code from the core LLVM
+repository (which roughly includes the LLVM optimizers, code generators
+and supporting tools), the Clang repository and the llvm-gcc repository.  In
+addition to this code, the LLVM Project includes other sub-projects that are in
+development.  Here we include updates on these subprojects.
 </p>
 
 </div>
@@ -99,37 +110,30 @@ Project</a> and the <a href="#vmkit">VMKit Project</a>.
 <div class="doc_text">
 
 <p>The <a href="http://clang.llvm.org/">Clang project</a> is an effort to build
-a set of new 'LLVM native' front-end technologies for the LLVM optimizer and
-code generator.  While Clang is not included in the LLVM 2.5 release, it is
-continuing to make major strides forward in all areas.  Its C and Objective-C
-parsing and code generation support is now very solid.  For example, it is
-capable of successfully building many real-world applications for X86-32
-and X86-64,
-including the <a href="http://wiki.freebsd.org/BuildingFreeBSDWithClang">FreeBSD
-kernel</a> and <a href="http://gcc.gnu.org/gcc-4.2/">gcc 4.2</a>.  C++ is also
-making <a href="http://clang.llvm.org/cxx_status.html">incredible progress</a>,
-and work on templates has recently started.  If you are
-interested in fast compiles and good diagnostics, we encourage you to try it out
-by <a href="http://clang.llvm.org/get_started.html">building from mainline</a>
-and reporting any issues you hit to the <a
+a set of new 'LLVM native' front-end technologies for the C family of languages.
+LLVM 2.6 is the first release to officially include Clang, and it provides a
+production quality C and Objective-C compiler.  If you are interested in <a 
+href="http://clang.llvm.org/performance.html">fast compiles</a> and
+<a href="http://clang.llvm.org/diagnostics.html">good diagnostics</a>, we
+encourage you to try it out.  Clang currently compiles typical Objective-C code
+3x faster than GCC and compiles C code about 30% faster than GCC at -O0 -g
+(which is when the most pressure is on the frontend).</p>
+
+<p>In addition to supporting these languages, C++ support is also <a
+href="http://clang.llvm.org/cxx_status.html">well under way</a>, and mainline
+Clang is able to parse the libstdc++ 4.2 headers and even codegen simple apps.
+If you are interested in Clang C++ support or any other Clang feature, we
+strongly encourage you to get involved on the <a 
 href="http://lists.cs.uiuc.edu/mailman/listinfo/cfe-dev">Clang front-end mailing
 list</a>.</p>
 
-<p>In the LLVM 2.5 time-frame, the Clang team has made many improvements:</p>
+<p>In the LLVM 2.6 time-frame, the Clang team has made many improvements:</p>
 
 <ul>
-<li>Clang now has a new driver, which is focused on providing a GCC-compatible
-    interface.</li>
-<li>The X86-64 ABI is now supported, including support for the Apple
-    64-bit Objective-C runtime and zero cost exception handling.</li>
-<li>Precompiled header support is now implemented.</li>
-<li>Objective-C support is significantly improved beyond LLVM 2.4, supporting
-    many features, such as Objective-C Garbage Collection.</li>
-<li>Variable length arrays are now fully supported.</li>
-<li>C99 designated initializers are now fully supported.</li>
-<li>Clang now includes all major compiler headers, including a
-    redesigned <i>tgmath.h</i> and several more intrinsic headers.</li>
-<li>Many many bugs are fixed and many features have been added.</li>
+<li>C and Objective-C support are now considered production quality.</li>
+<li>AuroraUX, FreeBSD and OpenBSD are now supported.</li>
+<li>Most of Objective-C 2.0 is now supported with the GNU runtime.</li>
+<li>Many, many bugs have been fixed and lots of features have been added.</li>
 </ul>
 </div>
 
@@ -140,19 +144,18 @@ list</a>.</p>
 
 <div class="doc_text">
 
-<p>Previously announced in the last LLVM release, the Clang project also
+<p>Previously announced in the 2.4 and 2.5 LLVM releases, the Clang project also
 includes an early stage static source code analysis tool for <a
 href="http://clang.llvm.org/StaticAnalysis.html">automatically finding bugs</a>
-in C and Objective-C programs. The tool performs a growing set of checks to find
+in C and Objective-C programs. The tool performs checks to find
 bugs that occur on a specific path within a program.</p>
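+
+<p>For illustration, here is a contrived example (our own, not from the
+analyzer's test suite) of the kind of path-specific bug the tool reports:</p>
+
+<div class="doc_code">
+<pre>
+// The dereference is a bug only on the path where 'flag' is zero, so a
+// purely flow-insensitive checker would have trouble pinpointing it.
+int path_bug(int flag) {
+  int *p = 0;
+  if (flag)
+    p = &amp;flag;
+  return *p;   // NULL dereference when flag == 0
+}
+</pre>
+</div>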
 
-<p>In the LLVM 2.5 time-frame there have been many significant improvements to
-the analyzer's core path simulation engine and machinery for generating
-path-based bug reports to end-users. Particularly noteworthy improvements
-include experimental support for full field-sensitivity and reasoning about heap
-objects as well as an improved value-constraints subengine that does a much
-better job of reasoning about inequality relationships (e.g., <tt>x &gt; 2</tt>)
-between variables and constants.
+<p>In the LLVM 2.6 time-frame, the analyzer core has undergone several important
+improvements and cleanups and now includes a new <em>Checker</em> interface that
+is intended to eventually serve as a basis for domain-specific checks. Further,
+in addition to generating HTML files for reporting analysis results, the
+analyzer can now also emit bug reports in a structured XML format that is
+intended to be easily readable by other programs.</p>
 
 <p>The set of checks performed by the static analyzer continues to expand, and
 future plans for the tool include full source-level inter-procedural analysis
@@ -170,44 +173,191 @@ this project is encouraged to get involved!</p>
 <div class="doc_text">
 <p>
 The <a href="http://vmkit.llvm.org/">VMKit project</a> is an implementation of
-a JVM and a CLI Virtual Machines (Microsoft .NET is an
-implementation of the CLI) using the Just-In-Time compiler of LLVM.</p>
+a JVM and a CLI Virtual Machine (Microsoft .NET is an
+implementation of the CLI) using LLVM for static and just-in-time
+compilation.</p>
 
-<p>Following LLVM 2.5, VMKit has its second release that you can find on its
-<a href="http://vmkit.llvm.org/releases/">webpage</a>. The release includes
+<p>
+VMKit version 0.26 builds with LLVM 2.6 and you can find it on its
+<a href="http://vmkit.llvm.org/releases/">web page</a>. The release includes
 bug fixes, cleanup and new features. The major changes are:</p>
 
 <ul>
 
-<li>Ahead of Time compiler: compiles .class files to llvm .bc. VMKit uses this
-functionality to native compile the standard classes (e.g. java.lang.String).
-Users can compile AoT .class files into dynamic libraries and run them with the
-help of VMKit.</li>
+<li>A new llcj tool to generate shared libraries or executables of Java
+    files.</li>
+<li>Cooperative garbage collection. </li>
+<li>Fast subtype checking (paper from Click et al [JGI'02]). </li>
+<li>Implementation of a two-word header for Java objects instead of the original
+    three-word header. </li>
+<li>Better compliance with the Java specification: division-by-zero checks,
+    stack overflow checks, and support for finalization and references.</li>
 
-<li>New exception model: the dwarf exception model is very slow for
-exception-intensive applications, so the JVM has had a new implementation of
-exceptions which check at each function call if an exception happened. There is
-a low performance penalty on applications without exceptions, but it is a big
-gain for exception-intensive applications. For example the jack benchmark in
-Spec JVM98 is 6x faster (performance gain of 83%).</li>
+</ul>
+</div>
 
-<li>User-level management of thread stacks, so that thread local data access
-at runtime is fast and portable. </li>
 
-<li>Implementation of biased locking for faster object synchronizations at
-runtime.</li>
+<!--=========================================================================-->
+<div class="doc_subsection">
+<a name="compiler-rt">compiler-rt: Compiler Runtime Library</a>
+</div>
 
-<li>New support for OSX/X64, Linux/X64 (with the Boehm GC) and Linux/ppc32.</li>
+<div class="doc_text">
+<p>
+The new LLVM <a href="http://compiler-rt.llvm.org/">compiler-rt project</a>
+is a simple library that provides an implementation of the low-level
+target-specific hooks required by code generation and other runtime components.
+For example, when compiling for a 32-bit target, converting a double to a 64-bit
+unsigned integer is compiled into a runtime call to the "__fixunsdfdi"
+function. The compiler-rt library provides highly optimized implementations of
+this and other low-level routines (some are 3x faster than the equivalent
+libgcc routines).</p>
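+
+<p>As a sketch of where such a call comes from (<tt>__fixunsdfdi</tt> is the
+real compiler-rt symbol; the surrounding function is our own illustration),
+the cast below is lowered on a typical 32-bit x86 target into a library call
+rather than inline instructions:</p>
+
+<div class="doc_code">
+<pre>
+// With no native double -&gt; u64 conversion on the target, LLVM lowers
+// this cast to a libcall:  call @__fixunsdfdi(double %d)
+unsigned long long to_u64(double d) {
+  return (unsigned long long)d;
+}
+</pre>
+</div>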
 
-</ul>
+<p>
+All of the code in the compiler-rt project is available under the standard LLVM
+License, a "BSD-style" license.</p>
+
+</div>
+
+<!--=========================================================================-->
+<div class="doc_subsection">
+<a name="klee">KLEE: Symbolic Execution and Automatic Test Case Generator</a>
 </div>
 
+<div class="doc_text">
+<p>
+The new LLVM <a href="http://klee.llvm.org/">KLEE project</a> is a symbolic
+execution framework for programs in LLVM bitcode form.  KLEE tries to
+symbolically evaluate "all" paths through the application and records state
+transitions that lead to fault states.  This allows it to construct testcases
+that lead to faults and can even be used to verify algorithms.  For more
+details, please see the <a
+href="http://llvm.org/pubs/2008-12-OSDI-KLEE.html">OSDI 2008 paper</a> about
+KLEE.</p>
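+
+<p>A minimal harness in the style of the KLEE tutorials
+(<tt>klee_make_symbolic</tt> is KLEE's entry point for marking data symbolic;
+the toy function is our own) looks like this:</p>
+
+<div class="doc_code">
+<pre>
+#include &lt;klee/klee.h&gt;
+
+int get_sign(int x) {            // toy function under test
+  if (x == 0) return 0;
+  return (x &lt; 0) ? -1 : 1;
+}
+
+int main() {
+  int a;
+  klee_make_symbolic(&amp;a, sizeof(a), "a");  // explore every value of 'a'
+  return get_sign(a);            // KLEE emits one test case per path
+}
+</pre>
+</div>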
+
+</div>
+
+<!--=========================================================================-->
+<div class="doc_subsection">
+<a name="dragonegg">DragonEgg: GCC-4.5 as an LLVM frontend</a>
+</div>
+
+<div class="doc_text">
+<p>
+The goal of <a href="http://dragonegg.llvm.org/">DragonEgg</a> is to make
+gcc-4.5 act like llvm-gcc without requiring any gcc modifications whatsoever.
+DragonEgg is a shared library (dragonegg.so) that is loaded by gcc at runtime.
+It uses the new gcc plugin architecture to disable the GCC optimizers and code
+generators, and to schedule the LLVM optimizers and code generators (or direct
+output of LLVM IR) instead.  Currently only Linux and Darwin are supported,
+and only on x86-32 and x86-64.  It should be easy to add support for other
+Unix-like operating systems and other processor families.  In theory it should
+be possible to use DragonEgg with any language supported by gcc; however, only
+C and Fortran work well for the moment.  Ada and C++ work to some extent,
+while Java, Obj-C and Obj-C++ are so far entirely untested.  Since gcc-4.5 has
+not yet been released, neither has DragonEgg.  To build DragonEgg you will
+need to check out the development versions of
+<a href="http://gcc.gnu.org/svn.html">gcc</a>,
+<a href="http://llvm.org/docs/GettingStarted.html#checkout">llvm</a> and
+DragonEgg from their respective subversion repositories, and follow the
+instructions in the DragonEgg README.
+</p>
+
+</div>
+
+
+<!--=========================================================================-->
+<div class="doc_subsection">
+<a name="mc">llvm-mc: Machine Code Toolkit</a>
+</div>
+
+<div class="doc_text">
+<p>
+The LLVM Machine Code (MC) Toolkit project is a (very early) effort to build
+better tools for dealing with machine code, object file formats, etc.  The idea
+is to be able to generate most of the target specific details of assemblers and
+disassemblers from existing LLVM target .td files (with suitable enhancements),
+and to build infrastructure for reading and writing common object file formats.
+One of the first deliverables is to build a full assembler and integrate it into
+the compiler, which is predicted to substantially reduce compile time in some
+scenarios.
+</p>
+
+<p>In the LLVM 2.6 timeframe, the MC framework has grown to the point where it
+can reliably parse and pretty-print (with some encoding information) a
+darwin/x86 .s file, and the very early phases of a Mach-O
+assembler are in progress.  Beyond the MC framework itself, major refactoring of the
+LLVM code generator has started.  The idea is to make the code generator reason
+about the code it is producing in a much more semantic way, rather than a
+textual way.  For example, the code generator now uses MCSection objects to
+represent section assignments, instead of text strings that print to .section
+directives.</p>
+
+<p>MC is an early and ongoing project that will hopefully continue to lead to
+many improvements in the code generator, and to infrastructure useful in many
+other situations.
+</p>
+
+</div>
+
+
 <!-- *********************************************************************** -->
 <div class="doc_section">
-  <a name="externalproj">External Projects Using LLVM 2.5</a>
+  <a name="externalproj">External Open Source Projects Using LLVM 2.6</a>
 </div>
 <!-- *********************************************************************** -->
 
+<div class="doc_text">
+
+<p>An exciting aspect of LLVM is that it is used as an enabling technology for
+   many other language and tool projects.  This section lists some of the
+   projects that have already been updated to work with LLVM 2.6.</p>
+</div>
+
+
+<!--=========================================================================-->
+<div class="doc_subsection">
+<a name="Rubinius">Rubinius</a>
+</div>
+
+<div class="doc_text">
+<p><a href="http://github.com/evanphx/rubinius">Rubinius</a> is an environment
+for running Ruby code that strives to implement as much of the core class
+library in Ruby as possible. Combined with a bytecode-interpreting VM, it
+uses LLVM to optimize and compile Ruby code down to machine code. Techniques
+such as type feedback, method inlining, and uncommon traps are all used to
+remove dynamism from Ruby execution and increase performance.</p>
+
+<p>Since LLVM 2.5, Rubinius has made several major leaps forward, implementing
+a counter-based JIT, type feedback and speculative method inlining.
+</p>
+
+</div>
+
+<!--=========================================================================-->
+<div class="doc_subsection">
+<a name="macruby">MacRuby</a>
+</div>
+
+<div class="doc_text">
+
+<p>
+<a href="http://macruby.org">MacRuby</a> is an implementation of Ruby on top of
+core Mac OS X technologies, such as the Objective-C common runtime and garbage
+collector and the CoreFoundation framework. It is principally developed by
+Apple and aims at enabling the creation of full-fledged Mac OS X applications.
+</p>
+
+<p>
+MacRuby uses LLVM for optimization passes, JIT and AOT compilation of Ruby
+expressions. It also uses zero-cost DWARF exceptions to implement Ruby exception
+handling.</p>
+
+</div>
+
+
 <!--=========================================================================-->
 <div class="doc_subsection">
 <a name="pure">Pure</a>
@@ -224,12 +374,8 @@ built-in list and matrix support (including list and matrix comprehensions) and
 an easy-to-use C interface. The interpreter uses LLVM as a backend to
  JIT-compile Pure programs to fast native code.</p>
 
-<p>In addition to the usual algebraic data structures, Pure also has
-MATLAB-style matrices in order to support numeric computations and signal
-processing in an efficient way. Pure is mainly aimed at mathematical
-applications right now, but it has been designed as a general purpose language.
-The dynamic interpreter environment and the C interface make it possible to use
-it as a kind of functional scripting language for many application areas.
+<p>Pure versions 0.31 and later have been tested and are known to work with
+LLVM 2.6 (and continue to work with older LLVM releases >= 2.3 as well).
 </p>
 </div>
 
@@ -243,11 +389,11 @@ it as a kind of functional scripting language for many application areas.
 <p>
 <a href="http://www.dsource.org/projects/ldc">LDC</a> is an implementation of
 the D Programming Language using the LLVM optimizer and code generator.
-The LDC project works great with the LLVM 2.5 release.  General improvements in
+The LDC project works great with the LLVM 2.6 release.  General improvements in
 this
 cycle have included new inline asm constraint handling, better debug info
-support, general bugfixes, and better x86-64 support.  This has allowed
-some major improvements in LDC, getting us much closer to being as
+support, general bug fixes and better x86-64 support.  This has allowed
+some major improvements in LDC, getting it much closer to being as
 fully featured as the original DMD compiler from DigitalMars.
 </p>
 </div>
@@ -258,142 +404,160 @@ fully featured as the original DMD compiler from DigitalMars.
 </div>
 
 <div class="doc_text">
-<p><a href="http://code.roadsend.com/rphp">Roadsend PHP</a> (rphp) is an open
+<p>
+<a href="http://code.roadsend.com/rphp">Roadsend PHP</a> (rphp) is an open
 source implementation of the PHP programming 
-language that uses LLVM for its optimizer, JIT, and static compiler. This is a 
+language that uses LLVM for its optimizer, JIT and static compiler. This is a 
 reimplementation of an earlier project that is now based on LLVM.</p>
 </div>
 
-
-<!-- *********************************************************************** -->
-<div class="doc_section">
-  <a name="whatsnew">What's New in LLVM 2.5?</a>
+<!--=========================================================================-->
+<div class="doc_subsection">
+<a name="UnladenSwallow">Unladen Swallow</a>
 </div>
-<!-- *********************************************************************** -->
 
 <div class="doc_text">
-
-<p>This release includes a huge number of bug fixes, performance tweaks, and
-minor improvements.  Some of the major improvements and new features are listed
-in this section.
-</p>
+<p>
+<a href="http://code.google.com/p/unladen-swallow/">Unladen Swallow</a> is a
+branch of <a href="http://python.org/">Python</a> intended to be fully
+compatible and significantly faster.  It uses LLVM's optimization passes and JIT
+compiler.</p>
 </div>
 
 <!--=========================================================================-->
 <div class="doc_subsection">
-<a name="majorfeatures">Major New Features</a>
+<a name="llvm-lua">llvm-lua</a>
 </div>
 
 <div class="doc_text">
+<p>
+<a href="http://code.google.com/p/llvm-lua/">LLVM-Lua</a> uses LLVM to add JIT
+and static compilation support to the Lua VM.  Lua bytecode is analyzed to
+remove type checks; LLVM is then used to compile the bytecode down to machine
+code.</p>
+</div>
 
-<p>LLVM 2.5 includes several major new capabilities:</p>
+<!--=========================================================================-->
+<div class="doc_subsection">
+<a name="icedtea">IcedTea Java Virtual Machine Implementation</a>
+</div>
 
-<ul>
-<li>LLVM 2.5 includes a brand new <a
-href="http://en.wikipedia.org/wiki/XCore">XCore</a> backend.</li>
+<div class="doc_text">
+<p>
+<a href="http://icedtea.classpath.org/wiki/Main_Page">IcedTea</a> provides a
+harness to build OpenJDK using only free-software build tools, along with
+replacements for the not-yet-free parts of OpenJDK.  One of the extensions that
+IcedTea provides is a new JIT compiler named <a
+href="http://icedtea.classpath.org/wiki/ZeroSharkFaq">Shark</a> which uses LLVM
+to provide native code generation without introducing processor-dependent
+code.
+</p>
+</div>
 
-<li>llvm-gcc now generally supports the GFortran front-end, and the precompiled
-release binaries now support Fortran, even on Mac OS/X.</li>
 
-<li>CMake is now used by the <a href="GettingStartedVS.html">LLVM build process
-on Windows</a>.  It automatically generates Visual Studio project files (and
-more) from a set of simple text files.  This makes it much easier to
-maintain.  In time, we'd like to standardize on CMake for everything.</li>
 
-<li>LLVM 2.5 now uses (and includes) Google Test for unit testing.</li>
+<!-- *********************************************************************** -->
+<div class="doc_section">
+  <a name="whatsnew">What's New in LLVM 2.6?</a>
+</div>
+<!-- *********************************************************************** -->
 
-<li>The LLVM native code generator now supports arbitrary precision integers.
-Types like <tt>i33</tt> have long been valid in the LLVM IR, but were previously
-only supported by the interpreter.  Note that the C backend still does not
-support these.</li>
+<div class="doc_text">
 
-<li>LLVM 2.5 no longer uses 'bison,' so it is easier to build on Windows.</li>
-</ul>
+<p>This release includes a huge number of bug fixes, performance tweaks and
+minor improvements.  Some of the major improvements and new features are listed
+in this section.
+</p>
 
 </div>
 
-
 <!--=========================================================================-->
 <div class="doc_subsection">
-<a name="llvm-gcc">llvm-gcc 4.2 Improvements</a>
+<a name="majorfeatures">Major New Features</a>
 </div>
 
 <div class="doc_text">
 
-<p>LLVM fully supports the llvm-gcc 4.2 front-end, which marries the GCC
-front-ends and driver with the LLVM optimizer and code generator.  It currently
-includes support for the C, C++, Objective-C, Ada, and Fortran front-ends.</p>
+<p>LLVM 2.6 includes several major new capabilities:</p>
 
 <ul>
-<li>In this release, the GCC inliner is completely disabled.  Previously the GCC
-inliner was used to handle always-inline functions and other cases.  This caused
-problems with code size growth, and it is completely disabled in this
-release.</li>
-
-<li>llvm-gcc (and LLVM in general) now support code generation for stack
-canaries, which is an effective form of <a
-href="http://en.wikipedia.org/wiki/Stack-smashing_protection">buffer overflow
-protection</a>.  llvm-gcc supports this with the <tt>-fstack-protector</tt>
-command line option (just like GCC).  In LLVM IR, you can request code
-generation for stack canaries with function attributes.
-</li>
+<li>New <a href="#compiler-rt">compiler-rt</a>, <A href="#klee">KLEE</a>
+    and <a href="#mc">machine code toolkit</a> sub-projects.</li>
+<li>Debug information now includes line numbers when optimizations are enabled.
+    This allows statistical sampling tools like OProfile and Shark to map
+    samples back to source lines.</li>
+<li>LLVM now includes new experimental backends to support the MSP430, SystemZ
+    and BlackFin architectures.</li>
+<li>LLVM supports a new <a href="GoldPlugin.html">Gold Linker Plugin</a> which
+    enables support for <a href="LinkTimeOptimization.html">transparent
+    link-time optimization</a> on ELF targets when used with the Gold binutils
+    linker.</li>
+<li>LLVM now supports doing optimization and code generation on multiple 
+    threads.  Please see the <a href="ProgrammersManual.html#threading">LLVM
+    Programmer's Manual</a> for more information; a minimal sketch follows
+    this list.</li>
+<li>LLVM now has experimental support for <a
+    href="http://nondot.org/~sabre/LLVMNotes/EmbeddedMetadata.txt">embedded
+    metadata</a> in LLVM IR, though the implementation is not guaranteed to be
+    final and the .bc file format may change in future releases.  Debug info 
+    does not yet use this format in LLVM 2.6.</li>
 </ul>
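+
+<p>For the threading item above, a minimal sketch against the 2.6-era API in
+<tt>llvm/System/Threading.h</tt> (entry points may differ in other
+releases):</p>
+
+<div class="doc_code">
+<pre>
+#include "llvm/System/Threading.h"
+
+int main() {
+  // Turn on locking of LLVM's internal data structures before any other
+  // threads touch LLVM APIs; returns false if threads are unsupported.
+  bool ok = llvm::llvm_start_multithreaded();
+  // ... spawn threads, giving each its own LLVMContext ...
+  llvm::llvm_stop_multithreaded();
+  return ok ? 0 : 1;
+}
+</pre>
+</div>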
 
 </div>
 
-
 <!--=========================================================================-->
 <div class="doc_subsection">
 <a name="coreimprovements">LLVM IR and Core Improvements</a>
 </div>
 
 <div class="doc_text">
-<p>LLVM IR has several new features that are used by our existing front-ends and
-can be useful if you are writing a front-end for LLVM:</p>
+<p>LLVM IR has several new features that improve support for new targets and
+expose new optimization opportunities:</p>
 
 <ul>
-<li>The <a href="LangRef.html#i_shufflevector">shufflevector</a> instruction 
-has been generalized to allow different shuffle mask width than its input
-vectors.  This allows you to use shufflevector to combine two
-"&lt;4 x float&gt;" vectors into a "&lt;8 x float&gt;" for example.</li>
-
-<li>LLVM IR now supports new intrinsics for computing and acting on <a 
-href="LangRef.html#int_overflow">overflow of integer operations</a>. This allows
-efficient code generation for languages that must trap or throw an exception on
-overflow.  While these intrinsics work on all targets, they only generate
-efficient code on X86 so far.</li>
-
-<li>LLVM IR now supports a new <a href="LangRef.html#linkage">private
-linkage</a> type to produce labels that are stripped by the assembler before it
-produces a .o file (thus they are invisible to the linker).</li>
-
-<li>LLVM IR supports two new attributes for better alias analysis.  The <a
-href="LangRef.html#paramattrs">noalias</a> attribute can now be used on the
-return value of a function to indicate that it returns new memory (e.g.
-'malloc', 'calloc', etc).
-The new <a href="LangRef.html#paramattrs">nocapture</a> attribute can be used
-on pointer arguments to indicate that the function does not return the pointer,
-store it in an object that outlives the call, or let the value of the pointer
-escape from the function in any other way.
-Note that it is the pointer itself that must not escape, not the value it
-points to: loading a value out of the pointer is perfectly fine.
-Many standard library functions (e.g. 'strlen', 'memcpy') have this property.
-<!-- The simplifylibcalls pass applies these attributes to standard libc functions. -->
-</li>
-
-<li>The parser for ".ll" files in lib/AsmParser is now completely rewritten as a
-recursive descent parser.  This parser produces better error messages (including
-caret diagnostics), is less fragile (less likely to crash on strange things),
-does not leak memory, is more efficient, and eliminates LLVM's last use of the
-'bison' tool.</li>
-
-<li>Debug information representation and manipulation internals have been
-    consolidated to use a new set of classes in
-    <tt>llvm/Analysis/DebugInfo.h</tt>.  These routines are more
-    efficient, robust, and extensible and replace the older mechanisms.
-    llvm-gcc, clang, and the code generator now use them to create and process
-    debug information.</li>
-
+<li>The <a href="LangRef.html#i_add">add</a>, <a 
+    href="LangRef.html#i_sub">sub</a> and <a href="LangRef.html#i_mul">mul</a>
+    instructions have been split into integer and floating point versions (like
+    divide and remainder), introducing new <a
+    href="LangRef.html#i_fadd">fadd</a>, <a href="LangRef.html#i_fsub">fsub</a>,
+    and <a href="LangRef.html#i_fmul">fmul</a> instructions.</li>
+<li>The <a href="LangRef.html#i_add">add</a>, <a 
+    href="LangRef.html#i_sub">sub</a> and <a href="LangRef.html#i_mul">mul</a>
+    instructions now support optional "nsw" and "nuw" bits which indicate that
+    the operation is guaranteed not to overflow (in the signed or
+    unsigned case, respectively).  This gives the optimizer more information and
+    models things like C signed integer arithmetic, which is undefined on
+    overflow.  A small builder sketch follows this list.</li>
+<li>The <a href="LangRef.html#i_sdiv">sdiv</a> instruction now supports an
+    optional "exact" flag which indicates that the result of the division is
+    guaranteed to have a remainder of zero.  This is useful for optimizing pointer
+    subtraction in C.</li>
+<li>The <a href="LangRef.html#i_getelementptr">getelementptr</a> instruction now
+    supports arbitrary integer index values for array/pointer indices.  This
+    allows for better code generation on 16-bit pointer targets like PIC16.</li>
+<li>The <a href="LangRef.html#i_getelementptr">getelementptr</a> instruction now
+    supports an "inbounds" optimization hint that tells the optimizer that the
+    pointer is guaranteed to be within its allocated object.</li>
+<li>LLVM now supports a series of new linkage types for global values which allow
+    for better optimization and new capabilities:
+    <ul>
+    <li><a href="LangRef.html#linkage_linkonce">linkonce_odr</a> and
+        <a href="LangRef.html#linkage_weak">weak_odr</a> have the same linkage
+        semantics as the non-"odr" linkage types.  The difference is that these
+        linkage types indicate that all definitions of the specified function
+        are guaranteed to have the same semantics.  This allows inlining
+        template functions in C++ but not inlining weak functions in C,
+        which previously both got the same linkage type.</li>
+    <li><a href="LangRef.html#linkage_available_externally">available_externally
+        </a> is a new linkage type that gives the optimizer visibility into the
+        definition of a function (allowing inlining and side effect analysis)
+        but that does not cause code to be generated.  This allows better
+        optimization of "GNU inline" functions, extern templates, etc.</li>
+    <li><a href="LangRef.html#linkage_linker_private">linker_private</a> is a
+        new linkage type (which is only useful on Mac OS X) that is used for
+        some metadata generation and other obscure things.</li>
+    </ul></li>
+<li>Finally, target-specific intrinsics can now return multiple values, which
+    is useful for modeling target operations with multiple results.</li>
 </ul>
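+
+<p>As a small illustration of the new arithmetic flags, here is a hedged
+sketch that emits an "<tt>add nsw</tt>" and an "<tt>sdiv exact</tt>" through
+the C++ <tt>IRBuilder</tt> API (spellings taken from a recent LLVM; the
+2.6-era builder methods and header paths differed slightly):</p>
+
+<div class="doc_code">
+<pre>
+#include "llvm/IR/BasicBlock.h"
+#include "llvm/IR/Function.h"
+#include "llvm/IR/IRBuilder.h"
+#include "llvm/IR/LLVMContext.h"
+#include "llvm/IR/Module.h"
+#include "llvm/Support/raw_ostream.h"
+using namespace llvm;
+
+int main() {
+  LLVMContext Ctx;
+  Module M("flags_demo", Ctx);
+  Type *I32 = Type::getInt32Ty(Ctx);
+  Function *F = Function::Create(FunctionType::get(I32, {I32, I32}, false),
+                                 Function::ExternalLinkage, "demo", M);
+  IRBuilder&lt;&gt; B(BasicBlock::Create(Ctx, "entry", F));
+  // 'add nsw': signed overflow is undefined, giving the optimizer latitude.
+  Value *Sum = B.CreateNSWAdd(F-&gt;getArg(0), F-&gt;getArg(1), "sum");
+  // 'sdiv exact': the remainder is known to be zero.
+  B.CreateRet(B.CreateExactSDiv(Sum, B.getInt32(4), "q"));
+  M.print(outs(), nullptr);   // prints the textual IR
+}
+</pre>
+</div>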
 
 </div>
@@ -405,27 +569,53 @@ does not leak memory, is more efficient, and eliminates LLVM's last use of the
 
 <div class="doc_text">
 
-<p>In addition to a large array of bug fixes and minor performance tweaks, this
+<p>In addition to a large array of minor performance tweaks and bug fixes, this
 release includes a few major enhancements and additions to the optimizers:</p>
 
 <ul>
 
-<li>The loop optimizer now improves floating point induction variables in
-several ways, including adding shadow induction variables to avoid
-"integer &lt;-&gt; floating point" conversions in loops when safe.</li>
+<li>The <a href="Passes.html#scalarrepl">Scalar Replacement of Aggregates</a>
+    pass has many improvements that allow it to better promote vector unions,
+    variables that are memset, and stranger code patterns, such as bitfield
+    accesses implemented with register-width operations.  An interesting change
+    is that it now produces "unusual" integer sizes (like i1704) in some cases
+    and lets other optimizers clean things up.</li>
+<li>The <a href="Passes.html#loop-reduce">Loop Strength Reduction</a> pass now
+    promotes small integer induction variables to 64-bit on 64-bit targets,
+    which provides a major performance boost for much numerical code.  It also
+    promotes shorts to int on 32-bit hosts, etc.  LSR now also analyzes pointer
+    expressions (e.g. getelementptrs), as well as integers.</li>
+<li>The <a href="Passes.html#gvn">GVN</a> pass now eliminates partial
+    redundancies of loads in simple cases; a short example follows this list.</li>
+<li>The <a href="Passes.html#inline">Inliner</a> now reuses stack space when
+    inlining similar arrays from multiple callees into one caller.</li>
+<li>LLVM includes a new experimental Static Single Information (SSI)
+    construction pass.</li>
 
-<li>The "-mem2reg" pass is now much faster on code with large basic blocks.</li>
+</ul>
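+
+<p>For the GVN item above, a contrived example (our own, not from the LLVM
+test suite) of a partially redundant load that load PRE can now clean up:</p>
+
+<div class="doc_code">
+<pre>
+int f(int *p, int c) {
+  int x = 0;
+  if (c)
+    x = *p;        // *p is loaded only on this path...
+  return x + *p;   // ...making this load partially redundant; load PRE
+}                  // inserts a load in the other predecessor and merges.
+</pre>
+</div>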
+
+</div>
 
-<li>The "-jump-threading" pass is more powerful: it is iterative
-  and handles threading based on values with fully and partially redundant
-  loads.</li>
 
-<li>The "-memdep" memory dependence analysis pass (used by GVN and memcpyopt) is
-    both faster and more aggressive.</li>
+<!--=========================================================================-->
+<div class="doc_subsection">
+<a name="executionengine">Interpreter and JIT Improvements</a>
+</div>
 
-<li>The "-scalarrepl" scalar replacement of aggregates pass is more aggressive
-    about promoting unions to registers.</li>
+<div class="doc_text">
 
+<ul>
+<li>LLVM has a new "EngineBuilder" class which makes it more obvious how to
+    set up and configure an ExecutionEngine (a JIT or interpreter); a short
+    sketch follows this list.</li>
+<li>The JIT now supports generating more than 16 MB of code.</li>
+<li>When configured with <tt>--with-oprofile</tt>, the JIT can now inform
+     OProfile about JIT'd code, allowing OProfile to get line number and function
+     name information for JIT'd functions.</li>
+<li>When "libffi" is available, the LLVM interpreter now uses it, which supports
+    calling almost arbitrary external (natively compiled) functions.</li>
+<li>Clients of the JIT can now register a 'JITEventListener' object to receive
+    callbacks when the JIT emits or frees machine code. The OProfile support
+    uses this mechanism.</li>
 </ul>
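+
+<p>For the <tt>EngineBuilder</tt> item above, a hedged sketch (modern
+spellings; the 2.6-era builder took a raw <tt>Module*</tt> rather than
+<tt>std::unique_ptr</tt>):</p>
+
+<div class="doc_code">
+<pre>
+#include "llvm/ExecutionEngine/ExecutionEngine.h"
+#include "llvm/ExecutionEngine/Interpreter.h"  // links in the interpreter
+#include "llvm/IR/LLVMContext.h"
+#include "llvm/IR/Module.h"
+#include &lt;memory&gt;
+#include &lt;string&gt;
+
+int main() {
+  llvm::LLVMContext Ctx;
+  auto M = std::make_unique&lt;llvm::Module&gt;("jit_demo", Ctx);
+  std::string Err;                         // filled in on failure
+  llvm::ExecutionEngine *EE =
+      llvm::EngineBuilder(std::move(M))
+          .setErrorStr(&amp;Err)
+          .setEngineKind(llvm::EngineKind::Interpreter)
+          .create();
+  if (!EE) return 1;                       // Err explains what went wrong
+  delete EE;
+  return 0;
+}
+</pre>
+</div>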
 
 </div>
@@ -442,33 +632,55 @@ infrastructure, which allows us to implement more aggressive algorithms and make
 it run faster:</p>
 
 <ul>
-<li>The <a href="WritingAnLLVMBackend.html">Writing an LLVM Compiler
-Backend</a> document has been greatly expanded and is substantially more
-complete.</li>
-
-<li>The SelectionDAG type legalization logic has been completely rewritten, is
-now more powerful (it supports arbitrary precision integer types for example),
-and is more correct in several corner cases.  The type legalizer converts
-operations on types that are not natively supported by the target machine into
-equivalent code sequences that only use natively supported types.  The old type
-legalizer is still available (for now) and will be used if
-<tt>-disable-legalize-types</tt> is passed to the code generator.
-</li>
 
-<li>The code generator now supports widening illegal vectors to larger legal
-ones (for example, converting operations on &lt;3 x float&gt; to work on
-&lt;4 x float&gt;) which is very important for common graphics
-applications.</li>
-
-<li>The assembly printers for each target are now split out into their own
-libraries that are separate from the main code generation logic.  This reduces
-the code size of JIT compilers by not requiring them to be linked in.</li>
-
-<li>The 'fast' instruction selection path (used at -O0 and for fast JIT
-    compilers) now supports accelerating codegen for code that uses exception
-    handling constructs.</li>
-    
-<li>The optional PBQP register allocator now supports register coalescing.</li>
+<li>The <tt>llc -asm-verbose</tt> option (exposed from llvm-gcc as <tt>-dA</tt>
+    and clang as <tt>-fverbose-asm</tt> or <tt>-dA</tt>) now adds a lot of 
+    useful information in comments to
+    the generated .s file.  This information includes location information (if
+    built with <tt>-g</tt>) and loop nest information.</li>
+<li>The code generator now supports a new MachineVerifier pass which is useful
+    for finding bugs in targets and codegen passes.</li>
+<li>The Machine LICM pass is now enabled by default.  It hoists instructions out of
+    loops (such as constant pool loads, loads from read-only stubs, vector
+    constant synthesization code, etc.) and is currently configured to only do
+    so when the hoisted operation can be rematerialized.</li>
+<li>The Machine Sinking pass is now enabled by default.  This pass moves
+    side-effect free operations down the CFG so that they are executed on fewer
+    paths through a function.</li>
+<li>The code generator now performs "stack slot coloring" of register spills,
+    which allows spill slots to be reused.  This leads to smaller stack frames
+    in cases where there are lots of register spills.</li>
+<li>The register allocator has many improvements to take better advantage of
+    commutable operations, various spiller peephole optimizations, and can now
+    coalesce cross-register-class copies.</li>
+<li>Tblgen now supports multiclass inheritance and a number of new string and
+    list operations like <tt>!(subst)</tt>, <tt>!(foreach)</tt>, <tt>!car</tt>,
+    <tt>!cdr</tt>, <tt>!null</tt>, <tt>!if</tt>, <tt>!cast</tt>.
+    These make the .td files more expressive and allow more aggressive factoring
+    of duplication across instruction patterns.</li>
+<li>Target-specific intrinsics can now be added without having to hack VMCore to
+    add them.  This makes it easier to maintain out-of-tree targets.</li>
+<li>The instruction selector is better at propagating information about values
+    (such as whether they are sign/zero extended etc.) across basic block
+    boundaries.</li>
+<li>The SelectionDAG data structure has new nodes for representing buildvector
+    and <a href="http://llvm.org/PR2957">vector shuffle</a> operations.  This
+    makes operations and pattern matching more efficient and easier to get
+    right.</li>
+<li>The Prolog/Epilog Insertion Pass now has experimental support for performing
+    the "shrink wrapping" optimization, which moves spills and reloads around in
+    the CFG to avoid doing saves on paths that don't need them.</li>
+<li>LLVM includes new experimental support for writing ELF .o files directly
+    from the compiler.  It works well for many simple C testcases, but doesn't
+    support exception handling, debug info, inline assembly, etc.</li>
+<li>Targets can now specify register allocation hints through
+    <tt>MachineRegisterInfo::setRegAllocationHint</tt>. A regalloc hint consists
+    of a hint type and a physical register number. A hint type of zero specifies
+    a register allocation preference; other hint type values are target-specific
+    and are resolved by <tt>TargetRegisterInfo::ResolveRegAllocHint</tt>. One
+    example is the ARM target, which uses register hints to request that the
+    register allocator provide an even/odd register pair to two virtual
+    registers.  A small fragment follows this list.</li>
 </ul>
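+
+<p>For the register-allocation-hint item above, a hedged fragment (the method
+is the one named above; the helper and its use are our own illustration, with
+the surrounding pass boilerplate omitted):</p>
+
+<div class="doc_code">
+<pre>
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+
+// Ask the allocator to prefer PhysReg when assigning VReg.  Hint type 0
+// is the generic "preference" kind; non-zero kinds are target-specific.
+static void preferRegister(llvm::MachineRegisterInfo &amp;MRI,
+                           unsigned VReg, unsigned PhysReg) {
+  MRI.setRegAllocationHint(VReg, /*Type=*/0, PhysReg);
+}
+</pre>
+</div>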
 </div>
 
@@ -482,37 +694,33 @@ the code size of JIT compilers by not requiring them to be linked in.</li>
 </p>
 
 <ul>
-<li>The <tt><a href="LangRef.html#int_returnaddress">llvm.returnaddress</a></tt>
-intrinsic (which is used to implement <tt>__builtin_return_address</tt>) now
-supports non-zero stack depths on X86.</li>
-
-<li>The X86 backend now supports code generation of vector shift operations
-using SSE instructions.</li>
-
-<li>X86-64 code generation now takes advantage of red zone, unless the
-<tt>-mno-red-zone</tt> option is specified.</li>
-
-<li>The X86 backend now supports using address space #256 in LLVM IR as a way of
-performing memory references off the GS segment register.  This allows a
-front-end to take advantage of very low-level programming techniques when
-targeting X86 CPUs. See <tt>test/CodeGen/X86/movgs.ll</tt> for a simple
-example.</li>
-
-<li>The X86 backend now supports a <tt>-disable-mmx</tt> command line option to
-  prevent use of MMX even on chips that support it.  This is important for cases
-  where code does not contain the proper <tt>llvm.x86.mmx.emms</tt>
-  intrinsics.</li>
-
-<li>The X86 JIT now detects the new Intel <a 
-   href="http://en.wikipedia.org/wiki/Intel_Core_i7">Core i7</a> and <a
-   href="http://en.wikipedia.org/wiki/Intel_Atom">Atom</a> chips and
-    auto-configures itself appropriately for the features of these chips.</li>
-    
-<li>The JIT now supports exception handling constructs on Linux/X86-64 and
-    Darwin/x86-64.</li>
 
-<li>The JIT supports Thread Local Storage (TLS) on Linux/X86-32 but not yet on
-    X86-64.</li>
+<li>SSE 4.2 builtins are now supported.</li>
+<li>GCC-compatible soft float modes are now supported, which are typically used
+    by OS kernels.</li>
+<li>X86-64 now models implicit zero extensions better, which allows the code
+    generator to remove a lot of redundant zexts.  It also models the 8-bit "H"
+    registers as subregs, which allows them to be used in some tricky
+    situations.</li>
+<li>X86-64 now supports the "local exec" and "initial exec" thread-local
+    storage models.</li>
+<li>The vector forms of the <a href="LangRef.html#i_icmp">icmp</a> and <a
+    href="LangRef.html#i_fcmp">fcmp</a> instructions now select to efficient
+    SSE operations.</li>
+<li>Support for the win64 calling conventions has improved.  The primary
+    missing feature is support for varargs function definitions.  It seems to
+    work well for many win64 JIT purposes.</li>
+<li>The X86 backend has preliminary support for <a 
+    href="CodeGenerator.html#x86_memory">mapping address spaces to segment
+    register references</a>.  This allows you to write GS- or FS-relative memory
+    accesses directly in LLVM IR for cases where you know exactly what you're
+    doing (such as in an OS kernel).  There are some known problems with this
+    support, but it works in simple cases; see the sketch after this list.</li>
+<li>The X86 code generator has been refactored to move all global variable
+    reference logic to one place
+    (<tt>X86Subtarget::ClassifyGlobalReference</tt>) which
+    makes it easier to reason about.</li>
+
 </ul>
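+
+<p>For the segment-register item above, a hedged sketch using the
+corresponding clang extension (our own example; address space 256 maps to GS
+on X86):</p>
+
+<div class="doc_code">
+<pre>
+// The address_space attribute places the pointee in LLVM IR address
+// space 256, which the X86 backend folds into a GS-relative reference.
+typedef __attribute__((address_space(256))) unsigned long gs_ulong;
+
+unsigned long read_gs_slot(const gs_ulong *p) {
+  return *p;   // emits a %gs-prefixed load, e.g. "movq %gs:(%rdi), %rax"
+}
+</pre>
+</div>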
 
 </div>
@@ -527,70 +735,156 @@ example.</li>
 </p>
 
 <ul>
-<li>Both direct and indirect load/stores work now.</li>
-<li>Logical, bitwise and conditional operations now work for integer data
-types.</li>
-<li>Function calls involving basic types work now.</li>
-<li>Support for integer arrays.</li>
-<li>The compiler can now emit libcalls for operations not supported by m/c
-instructions.</li>
-<li>Support for both data and ROM address spaces.</li>
+<li>Support for floating point, for indirect function calls, and for
+    passing/returning aggregate types to and from functions.</li>
+<li>The code generator is able to emit debug info into output COFF files.</li>
+<li>Support for placing an object into a specific section or at a specific
+    address in memory.</li>
 </ul>
 
 <p>Things not yet supported:</p>
 
 <ul>
-<li>Floating point.</li>
-<li>Passing/returning aggregate types to and from functions.</li>
 <li>Variable arguments.</li>
-<li>Indirect function calls.</li>
 <li>Interrupts/programs.</li>
-<li>Debug info.</li>
 </ul>
 
 </div>
 
+<!--=========================================================================-->
+<div class="doc_subsection">
+<a name="ARM">ARM Target Improvements</a>
+</div>
+
+<div class="doc_text">
+<p>New features of the ARM target include:
+</p>
+
+<ul>
+
+<li>Preliminary support for processors, such as the Cortex-A8 and Cortex-A9,
+that implement version v7-A of the ARM architecture.  The ARM backend now
+supports both the Thumb2 and Advanced SIMD (Neon) instruction sets.</li>
+
+<li>The AAPCS-VFP "hard float" calling conventions are also supported with the
+<tt>-float-abi=hard</tt> flag.</li>
+
+<li>The ARM calling convention code is now generated by tblgen instead of
+    hand-written C++ code.</li>
+</ul>
+
+<p>These features are still somewhat experimental
+and subject to change. The Neon intrinsics, in particular, may change in future
+releases of LLVM.  ARMv7 support has progressed considerably in trunk since 2.6
+branched.</p>
+
+
+</div>
 
 <!--=========================================================================-->
 <div class="doc_subsection">
-<a name="llvmc">Improvements in LLVMC</a>
+<a name="OtherTarget">Other Target Specific Improvements</a>
 </div>
 
 <div class="doc_text">
-<p>New features include:</p>
+<p>New features of other targets include:
+</p>
 
 <ul>
-<li>Beginning with LLVM 2.5, <tt>llvmc2</tt> is known as
- just <tt>llvmc</tt>. The old <tt>llvmc</tt> driver was removed.</li>
+<li>Mips now supports the O32 calling convention.</li>
+<li>Many improvements and bug fixes for the 32-bit PowerPC SVR4 ABI (used on
+    powerpc-linux).</li>
+<li>Added support for the 64-bit PowerPC SVR4 ABI (used on powerpc64-linux).
+    It needs more testing.</li>
+</ul>
+
+</div>
+
+<!--=========================================================================-->
+<div class="doc_subsection">
+<a name="newapis">New Useful APIs</a>
+</div>
 
-<li>The Clang plugin was substantially improved and is now enabled
- by default. The command <tt>llvmc --clang</tt> can be now used as a
- synonym to <tt>ccc</tt>.</li>
+<div class="doc_text">
 
-<li>There is now a <tt>--check-graph</tt> option, which is supposed to catch
- common errors like multiple default edges, mismatched output/input language
- names and cycles. In general, these checks can't be done at compile-time
- because of the need to support plugins.</li>
+<p>This release includes a number of new APIs that are used internally, which
+   may also be useful for external clients.
+</p>
 
-<li>Plugins are now more flexible and can refer to compilation graph nodes and
- options defined in other plugins. To manage dependencies, a priority-sorting
- mechanism was introduced. This change affects the TableGen file syntax. See the
- documentation for details.</li>
+<ul>
+<li>The new <a href="http://llvm.org/doxygen/PrettyStackTrace_8h-source.html">
+    <tt>PrettyStackTrace</tt> class</a> allows crashes of llvm tools (and
+    applications that integrate them) to provide a more detailed indication of
+    what the compiler was doing at the time of the crash (e.g. running a pass).
+    At the top level of each LLVM tool, it includes the command line arguments.
+    </li>
+<li>New <a href="http://llvm.org/doxygen/StringRef_8h-source.html">StringRef</a>
+    and <a href="http://llvm.org/doxygen/Twine_8h-source.html">Twine</a> classes
+    make operations on character ranges and string concatenation more
+    efficient.  <tt>StringRef</tt> is just a <tt>const char*</tt> with a length;
+    <tt>Twine</tt> is a light-weight rope.  A usage sketch follows this list.</li>
+<li>LLVM has new <tt>WeakVH</tt>, <tt>AssertingVH</tt> and <tt>CallbackVH</tt>
+    classes, which make it easier to write LLVM IR transformations.  <tt>WeakVH</tt>
+    automatically drops to null when the referenced <tt>Value</tt> is deleted,
+    and is updated across a <tt>replaceAllUsesWith</tt> operation.
+    <tt>AssertingVH</tt> aborts the program if the
+    referenced value is destroyed while it is being referenced.  <tt>CallbackVH</tt>
+    is a customizable class for handling value references.  See <a
+    href="http://llvm.org/doxygen/ValueHandle_8h-source.html">ValueHandle.h</a> 
+    for more information.</li>
+<li>The new '<a href="http://llvm.org/doxygen/Triple_8h-source.html">Triple
+    </a>' class centralizes a lot of logic that reasons about target
+    triples.</li>
+<li>The new '<a href="http://llvm.org/doxygen/ErrorHandling_8h-source.html">
+    llvm_report_error()</a>' set of APIs allows tools to embed the LLVM
+    optimizer and backend and recover from previously unrecoverable errors.</li>
+<li>LLVM has new abstractions for <a 
+    href="http://llvm.org/doxygen/Atomic_8h-source.html">atomic operations</a>
+    and <a href="http://llvm.org/doxygen/RWMutex_8h-source.html">reader/writer
+    locks</a>.</li>
+<li>LLVM has new <a href="http://llvm.org/doxygen/SourceMgr_8h-source.html">
+    <tt>SourceMgr</tt> and <tt>SMLoc</tt> classes</a> which implement caret
+    diagnostics and basic include stack processing for simple parsers.  They
+    are used by tablegen, llvm-mc, the .ll parser and FileCheck.</li>
+</ul>
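+
+<p>As a small illustration of the new string classes, here is a minimal sketch
+   written for these notes (the helper function and its names are invented,
+   not taken from the LLVM sources):</p>
+
+<div class="doc_code">
+<pre>
+#include "llvm/ADT/StringRef.h"
+#include "llvm/ADT/Twine.h"
+#include &lt;string&gt;
+
+using namespace llvm;
+
+// Hypothetical helper: build a label like "tmp_42" without extra copies.
+std::string makeLabel(StringRef Base, unsigned N) {
+  // StringRef refers to Base's characters; no allocation happens here.
+  // Twine concatenates lazily; str() materializes the result only once.
+  return (Base + "_" + Twine(N)).str();
+}
+</pre>
+</div>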
 
-<li>Hooks can now be provided with arguments. The syntax is "<tt>$CALL(MyHook,
- 'Arg1', 'Arg2', 'Arg3')</tt>".</li>
 
-<li>A new option type: multi-valued option, for options that take more than one
- argument (for example, "<tt>-foo a b c</tt>").</li>
+</div>
 
-<li>New option properties: '<tt>one_or_more</tt>', '<tt>zero_or_more</tt>',
-'<tt>hidden</tt>' and '<tt>really_hidden</tt>'.</li>
+<!--=========================================================================-->
+<div class="doc_subsection">
+<a name="otherimprovements">Other Improvements and New Features</a>
+</div>
 
-<li>The '<tt>case</tt>' expression gained an '<tt>error</tt>' action and
- an '<tt>empty</tt>' test (equivalent to "<tt>(not (not_empty ...))</tt>").</li>
+<div class="doc_text">
+<p>Other miscellaneous features include:</p>
 
-<li>Documentation now looks more consistent to the rest of the LLVM
- docs. There is also a man page now.</li>
+<ul>
+<li>LLVM now includes a new internal '<a 
+    href="http://llvm.org/cmds/FileCheck.html">FileCheck</a>' tool which allows
+    writing much more accurate regression tests that run faster.  Please see the
+    <a href="TestingGuide.html#FileCheck">FileCheck section of the Testing
+    Guide</a> for more information.  A brief example follows this list.</li>
+<li>LLVM profile information support has been significantly improved to produce
+correct use counts, and has support for edge profiling with reduced runtime
+overhead.  Combined, the generated profile information is both more correct and
+imposes about half as much overhead (e.g. from 12% to 6% overhead on SPEC
+CPU2000).</li>
+<li>The C bindings (in the llvm/include/llvm-c directory) include many newly
+    supported APIs.</li>
+<li>LLVM 2.6 includes brand new experimental LLVM bindings for the Ada2005
+    programming language.</li>
+
+<li>The LLVMC driver has several new features:
+  <ul>
+  <li>Dynamic plugins now work on Windows.</li>
+  <li>New option property: <tt>init</tt>.  Makes it possible to provide default
+      values for options defined in plugins (an interface to <tt>cl::init</tt>).</li>
+  <li>New example: Skeleton, shows how to create a standalone LLVMC-based
+      driver.</li>
+  <li>New example: mcc16, a driver for the PIC16 toolchain.</li>
+  </ul>
+</li>
 
 </ul>
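+
+<p>As a brief illustration of FileCheck (a hypothetical test written for this
+   document, not taken from the LLVM test suite), the checks are embedded as
+   comments in the test input itself:</p>
+
+<div class="doc_code">
+<pre>
+; RUN: opt &lt; %s -instcombine -S | FileCheck %s
+
+; FileCheck scans opt's output for the CHECK patterns, in order.
+define i32 @test(i32 %x) {
+; CHECK: @test
+; CHECK: ret i32 %x
+  %y = add i32 %x, 0      ; instcombine folds away the add of zero
+  ret i32 %y
+}
+</pre>
+</div>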
 
@@ -605,13 +899,24 @@ instructions.</li>
 <div class="doc_text">
 
 <p>If you're already an LLVM user or developer with out-of-tree changes based
-on LLVM 2.4, this section lists some "gotchas" that you may run into upgrading
+on LLVM 2.5, this section lists some "gotchas" that you may run into upgrading
 from the previous release.</p>
 
 <ul>
-
-<li>llvm-gcc defaults to <tt>-fno-math-errno</tt> on all X86 targets.</li>
-
+<li>The Itanium (IA64) backend has been removed.  It was not actively supported
+    and had bitrotted.</li>
+<li>The BigBlock register allocator has been removed; it had also bitrotted.</li>
+<li>The C Backend (<tt>-march=c</tt>) is no longer considered part of the LLVM release
+criteria.  We still want it to work, but no one is maintaining it and it lacks
+support for arbitrary precision integers and other important IR features.</li>
+
+<li>All LLVM tools now default to overwriting their output file, behaving more
+    like standard unix tools.  Previously, this only happened with the '<tt>-f</tt>'
+    option.</li>
+<li>The LLVM build now builds all libraries as .a files instead of some
+  libraries as relinked .o files.  This means clients may need to use APIs
+  like <tt>InitializeAllTargets.h</tt> to force the components they need to
+  be linked in.
+  </li>
 </ul>
 
 
@@ -619,8 +924,82 @@ from the previous release.</p>
 API changes are:</p>
 
 <ul>
-<li>Some deprecated interfaces to create <tt>Instruction</tt> subclasses, that
-    were spelled with lower case "create," have been removed.</li>
+<li>All uses of <tt>hash_set</tt> and <tt>hash_map</tt> have been removed from
+    the LLVM tree and the wrapper headers have been removed.</li>
+<li>The llvm/Streams.h and <tt>DOUT</tt> member of Debug.h have been removed.  The
+    <tt>llvm::Ostream</tt> class has been completely removed and replaced with
+    uses of <tt>raw_ostream</tt>.</li>
+<li>LLVM's global uniquing tables for <tt>Type</tt>s and <tt>Constant</tt>s have
+    been privatized into members of an <tt>LLVMContext</tt>.  A number of APIs
+    now take an <tt>LLVMContext</tt> as a parameter.  To smooth the transition
+    for clients that will only ever use a single context, the new
+    <tt>getGlobalContext()</tt> API can be used to access a default global
+    context which can be passed in any and all cases where a context is
+    required.</li>
+<li>The <tt>getABITypeSize</tt> methods are now called <tt>getAllocSize</tt>.</li>
+<li>The <tt>Add</tt>, <tt>Sub</tt> and <tt>Mul</tt> operators are no longer
+    overloaded for floating-point types. Floating-point addition, subtraction
+    and multiplication are now represented with new operators <tt>FAdd</tt>,
+    <tt>FSub</tt> and <tt>FMul</tt>. In the <tt>IRBuilder</tt> API,
+    <tt>CreateAdd</tt>, <tt>CreateSub</tt>, <tt>CreateMul</tt> and
+    <tt>CreateNeg</tt> should only be used for integer arithmetic now;
+    <tt>CreateFAdd</tt>, <tt>CreateFSub</tt>, <tt>CreateFMul</tt> and
+    <tt>CreateFNeg</tt> should now be used for floating-point arithmetic, as
+    sketched after this list.</li>
+<li>The <tt>DynamicLibrary</tt> class can no longer be constructed; its
+    functionality has moved to static member functions.</li>
+<li><tt>raw_fd_ostream</tt>'s constructor for opening a given filename now
+    takes an extra <tt>Force</tt> argument. If <tt>Force</tt> is set to
+    <tt>false</tt>, an error will be reported if a file with the given name
+    already exists. If <tt>Force</tt> is set to <tt>true</tt>, the file will
+    be silently truncated (which is the behavior before this flag was
+    added).</li>
+<li><tt>SCEVHandle</tt> no longer exists, because reference counting is no
+    longer done for <tt>SCEV*</tt> objects; instead, <tt>const SCEV*</tt>
+    should be used.</li>
+
+<li>Many APIs, notably <tt>llvm::Value</tt>, now use the <tt>StringRef</tt>
+and <tt>Twine</tt> classes instead of passing <tt>const char*</tt>
+or <tt>std::string</tt>, as described in
+the <a href="ProgrammersManual.html#string_apis">Programmer's Manual</a>. Most
+clients should be unaffected by this transition, unless they are used to
+<tt>Value::getName()</tt> returning a string. Here are some tips on updating to
+2.6:
+  <ul>
+    <li><tt>getNameStr()</tt> is still available, and matches the old
+      behavior. Replacing <tt>getName()</tt> calls with this is a safe option,
+      although more efficient alternatives are now possible.</li>
+
+    <li>If you were just relying on being able to stream <tt>getName()</tt> to
+      a <tt>std::ostream</tt>, consider migrating
+      to <tt>llvm::raw_ostream</tt>.</li>
+      
+    <li>If you were using <tt>getName().c_str()</tt> to get a <tt>const
+        char*</tt> pointer to the name, you can use <tt>getName().data()</tt>.
+        Note that this string (as before) may not be the entire name if the
+        name contains embedded null characters.</li>
+
+    <li>If you were using <tt>operator +</tt> on the result of <tt>getName()</tt> and
+      treating the result as an <tt>std::string</tt>, you can either
+      use <tt>Twine::str</tt> to get the result as an <tt>std::string</tt>, or
+      move to a <tt>Twine</tt>-based design.</li>
+
+    <li><tt>isName()</tt> should be replaced with comparison
+      against <tt>getName()</tt> (this is now efficient); the second sketch
+      after this list shows this migration.</li>
+  </ul>
+</li>
+
+<li>The registration interfaces for backend Targets have changed (what was
+previously <tt>TargetMachineRegistry</tt>). For backend authors, see the <a
+href="WritingAnLLVMBackend.html#TargetRegistration">Writing An LLVM Backend</a>
+guide. For clients, the notable API changes are:
+  <ul>
+    <li><tt>TargetMachineRegistry</tt> has been renamed
+      to <tt>TargetRegistry</tt>.</li>
+
+    <li>Clients should move to using the <tt>TargetRegistry::lookupTarget()</tt>
+      function to find targets.</li>
+  </ul>
+</li>
 </ul>
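+
+<p>For example, code that previously used <tt>CreateAdd</tt> for both integer
+   and floating-point values must now pick the operator by type.  A minimal
+   sketch (the helper function is invented for this example):</p>
+
+<div class="doc_code">
+<pre>
+#include "llvm/Support/IRBuilder.h"
+
+using namespace llvm;
+
+// Hypothetical helper: emit X + Y, choosing the opcode by operand type.
+Value *emitAdd(IRBuilder&lt;&gt; &amp;Builder, Value *X, Value *Y) {
+  if (X-&gt;getType()-&gt;isFloatingPoint())
+    return Builder.CreateFAdd(X, Y, "sum");  // fadd for float/double
+  return Builder.CreateAdd(X, Y, "sum");     // add for integer types
+}
+</pre>
+</div>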
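+
+<p>And as a second sketch, hypothetical code migrating from the old
+   <tt>std::string</tt>-based name API to <tt>StringRef</tt> and
+   <tt>Twine</tt>:</p>
+
+<div class="doc_code">
+<pre>
+#include "llvm/Value.h"
+#include "llvm/ADT/Twine.h"
+#include "llvm/Support/raw_ostream.h"
+
+using namespace llvm;
+
+void printTaggedName(const Value *V) {
+  StringRef Name = V-&gt;getName();      // no copy is made here
+  if (Name == "retval")                  // replaces the old isName("retval")
+    errs() &lt;&lt; (Name + ".tagged").str() &lt;&lt; "\n";
+}
+</pre>
+</div>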
 
 </div>
@@ -639,15 +1018,15 @@ API changes are:</p>
 
 <ul>
 <li>Intel and AMD machines (IA32, X86-64, AMD64, EMT-64) running Red Hat
-Linux, Fedora Core and FreeBSD (and probably other unix-like systems).</li>
+    Linux, Fedora Core, FreeBSD and AuroraUX (and probably other unix-like
+    systems).</li>
 <li>PowerPC and X86-based Mac OS X systems, running 10.3 and above in 32-bit
-and 64-bit modes.</li>
+    and 64-bit modes.</li>
 <li>Intel and AMD machines running on Win32 using MinGW libraries (native).</li>
 <li>Intel and AMD machines running on Win32 with the Cygwin libraries (limited
     support is available for native builds with Visual C++).</li>
-<li>Sun UltraSPARC workstations running Solaris 10.</li>
+<li>Sun x86 and AMD64 machines running Solaris 10, OpenSolaris 0906.</li>
 <li>Alpha-based machines running Debian GNU/Linux.</li>
-<li>Itanium-based (IA64) machines running Linux and HP-UX.</li>
 </ul>
 
 <p>The core LLVM infrastructure uses GNU autoconf to adapt itself
@@ -670,6 +1049,21 @@ listed by component.  If you run into a problem, please check the <a
 href="http://llvm.org/bugs/">LLVM bug database</a> and submit a bug if
 there isn't already one.</p>
 
+<ul>
+<li>The llvm-gcc bootstrap will fail with some versions of binutils (e.g. 2.15)
+    with a message of "<tt><a href="http://llvm.org/PR5004">Error: can not do 8
+    byte pc-relative relocation</a></tt>" when building C++ code.  We intend to
+    fix this on mainline, but a workaround for 2.6 is to upgrade to binutils
+    2.17 or later.</li>
+    
+<li>LLVM will not compile correctly on Solaris and/or OpenSolaris
+using the stock GCC 3.x.x series 'out of the box'.
+See: <a href="GettingStarted.html#brokengcc">Broken versions of GCC and other tools</a>.
+However, a <a href="http://pkg.auroraux.org/GCC">modern GCC build</a>
+for x86/x86-64 has been made available by the third-party AuroraUX Project
+and has been meticulously tested for bootstrapping LLVM &amp; Clang.</li>
+</ul>
+
 </div>
 
 <!-- ======================================================================= -->
@@ -687,9 +1081,11 @@ components, please contact us on the <a
 href="http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev">LLVMdev list</a>.</p>
 
 <ul>
-<li>The MSIL, IA64, Alpha, SPU, MIPS, and PIC16 backends are experimental.</li>
+<li>The MSIL, Alpha, SPU, MIPS, PIC16, Blackfin, MSP430 and SystemZ backends are
+    experimental.</li>
 <li>The <tt>llc</tt> "<tt>-filetype=asm</tt>" (the default) is the only
-    supported value for this option.</li>
+    supported value for this option.  The ELF writer is experimental.</li>
+<li>The implementation of Andersen's Alias Analysis has many known bugs.</li>
 </ul>
 
 </div>
@@ -744,14 +1140,14 @@ compilation, and lacks support for debug information.</li>
 <div class="doc_text">
 
 <ul>
+<li>Support for the Advanced SIMD (Neon) instruction set is still incomplete
+and not well tested.  Some features may not work at all, and the code quality
+may be poor in some cases.</li>
 <li>Thumb mode works only on ARMv6 or higher processors. On sub-ARMv6
 processors, thumb programs can crash or produce wrong
 results (<a href="http://llvm.org/PR1388">PR1388</a>).</li>
 <li>Compilation for ARM Linux OABI (old ABI) is supported but not fully tested.
 </li>
-<li>There is a bug in QEMU-ARM (&lt;= 0.9.0) which causes it to incorrectly
- execute
-programs compiled with LLVM.  Please use more recent versions of QEMU.</li>
 </ul>
 
 </div>
@@ -778,7 +1174,6 @@ programs compiled with LLVM.  Please use more recent versions of QEMU.</li>
 <div class="doc_text">
 
 <ul>
-<li>The O32 ABI is not fully supported.</li>
 <li>64-bit MIPS targets are not supported yet.</li>
 </ul>
 
@@ -801,21 +1196,6 @@ appropriate nops inserted to ensure restartability.</li>
 
 <!-- ======================================================================= -->
 <div class="doc_subsection">
-  <a name="ia64-be">Known problems with the IA64 back-end</a>
-</div>
-
-<div class="doc_text">
-
-<ul>
-<li>The Itanium backend is highly experimental and has a number of known
-    issues.  We are looking for a maintainer for the Itanium backend.  If you
-    are interested, please contact the LLVMdev mailing list.</li>
-</ul>
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
   <a name="c-be">Known problems with the C back-end</a>
 </div>
 
@@ -841,10 +1221,6 @@ appropriate nops inserted to ensure restartability.</li>
 
 <div class="doc_text">
 
-<p>llvm-gcc does not currently support <a href="http://llvm.org/PR869">Link-Time
-Optimization</a> on most platforms "out-of-the-box".  Please inquire on the
-LLVMdev mailing list if you are interested.</p>
-
 <p>The only major language feature of GCC not supported by llvm-gcc is
     the <tt>__builtin_apply</tt> family of builtins.   However, some extensions
     are only supported on some targets.  For example, trampolines are only
@@ -882,7 +1258,8 @@ itself, Qt, Mozilla, etc.</p>
 <div class="doc_text">
 <ul>
 <li>Fortran support generally works, but there are still several unresolved bugs
-    in Bugzilla.  Please see the tools/gfortran component for details.</li>
+    in <a href="http://llvm.org/bugs/">Bugzilla</a>.  Please see the
+    tools/gfortran component for details.</li>
 </ul>
 </div>
 
@@ -902,16 +1279,16 @@ which does support trampolines.</li>
 <li>The Ada front-end <a href="http://llvm.org/PR2007">fails to bootstrap</a>.
 This is due to lack of LLVM support for <tt>setjmp</tt>/<tt>longjmp</tt> style
 exception handling, which is used internally by the compiler.
-Workaround: configure with --disable-bootstrap.</li>
+Workaround: configure with <tt>--disable-bootstrap</tt>.</li>
 <li>The c380004, <a href="http://llvm.org/PR2010">c393010</a>
 and <a href="http://llvm.org/PR2421">cxg2021</a> ACATS tests fail
 (c380004 also fails with gcc-4.2 mainline).
 If the compiler is built with checks disabled then <a href="http://llvm.org/PR2010">c393010</a>
 causes the compiler to go into an infinite loop, using up all system memory.</li>
 <li>Some GCC specific Ada tests continue to crash the compiler.</li>
-<li>The -E binder option (exception backtraces)
+<li>The <tt>-E</tt> binder option (exception backtraces)
 <a href="http://llvm.org/PR1982">does not work</a> and will result in programs
-crashing if an exception is raised.  Workaround: do not use -E.</li>
+crashing if an exception is raised.  Workaround: do not use <tt>-E</tt>.</li>
 <li>Only discrete types <a href="http://llvm.org/PR1981">are allowed to start
 or finish at a non-byte offset</a> in a record.  Workaround: do not pack records
 or use representation clauses that result in a field of a non-discrete type
@@ -925,6 +1302,20 @@ ignored</a>.</li>
 </ul>
 </div>
 
+<!-- ======================================================================= -->
+<div class="doc_subsection">
+	<a name="ocaml-bindings">Known problems with the O'Caml bindings</a>
+</div>
+
+<div class="doc_text">
+
+<p>The <tt>Llvm.Linkage</tt> module is broken, and has incorrect values. Only
+<tt>Llvm.Linkage.External</tt>, <tt>Llvm.Linkage.Available_externally</tt>, and
+<tt>Llvm.Linkage.Link_once</tt> will be correct. If you need any of the other linkage
+modes, you'll have to write an external C library in order to expose the
+functionality. This has been fixed in the trunk.</p>
+</div>
+
 <!-- *********************************************************************** -->
 <div class="doc_section">
   <a name="additionalinfo">Additional Information</a>
diff --git a/libclamav/c++/llvm/docs/SourceLevelDebugging.html b/libclamav/c++/llvm/docs/SourceLevelDebugging.html
index 49ce278..c405575 100644
--- a/libclamav/c++/llvm/docs/SourceLevelDebugging.html
+++ b/libclamav/c++/llvm/docs/SourceLevelDebugging.html
@@ -37,15 +37,10 @@
     </ul></li>
     <li><a href="#format_common_intrinsics">Debugger intrinsic functions</a>
       <ul>
-      <li><a href="#format_common_stoppoint">llvm.dbg.stoppoint</a></li>
-      <li><a href="#format_common_func_start">llvm.dbg.func.start</a></li>
-      <li><a href="#format_common_region_start">llvm.dbg.region.start</a></li>
-      <li><a href="#format_common_region_end">llvm.dbg.region.end</a></li>
       <li><a href="#format_common_declare">llvm.dbg.declare</a></li>
     </ul></li>
-    <li><a href="#format_common_stoppoints">Representing stopping points in the
-                                           source program</a></li>
   </ol></li>
+  <li><a href="#format_common_lifetime">Object lifetimes and scoping</a></li>
   <li><a href="#ccxx_frontend">C/C++ front-end specific debug information</a>
   <ol>
     <li><a href="#ccxx_compile_units">C/C++ source file information</a></li>
@@ -288,7 +283,7 @@ height="369">
    way.  Also, all debugging information objects start with a tag to indicate
    what type of object it is.  The source-language is allowed to define its own
   objects, by using unreserved tag numbers.  We recommend using tags in
-   the range 0x1000 thru 0x2000 (there is a defined enum DW_TAG_user_base =
+   the range 0x1000 through 0x2000 (there is a defined enum DW_TAG_user_base =
    0x1000.)</p>
 
 <p>The fields of debug descriptors used internally by LLVM 
@@ -763,92 +758,6 @@ DW_TAG_return_variable = 258
 
 <!-- ======================================================================= -->
 <div class="doc_subsubsection">
-  <a name="format_common_stoppoint">llvm.dbg.stoppoint</a>
-</div>
-
-<div class="doc_text">
-<pre>
-  void %<a href="#format_common_stoppoint">llvm.dbg.stoppoint</a>( uint, uint, metadata)
-</pre>
-
-<p>This intrinsic is used to provide correspondence between the source file and
-   the generated code.  The first argument is the line number (base 1), second
-   argument is the column number (0 if unknown) and the third argument the
-   source <tt>%<a href="#format_compile_units">llvm.dbg.compile_unit</a>.
-   Code following a call to this intrinsic will
-   have been defined in close proximity of the line, column and file. This
-   information holds until the next call
-   to <tt>%<a href="#format_common_stoppoint">lvm.dbg.stoppoint</a></tt>.</p>
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsubsection">
-  <a name="format_common_func_start">llvm.dbg.func.start</a>
-</div>
-
-<div class="doc_text">
-<pre>
-  void %<a href="#format_common_func_start">llvm.dbg.func.start</a>( metadata )
-</pre>
-
-<p>This intrinsic is used to link the debug information
-   in <tt>%<a href="#format_subprograms">llvm.dbg.subprogram</a></tt> to the
-   function. It defines the beginning of the function's declarative region
-   (scope). It also implies a call to
-   %<tt><a href="#format_common_stoppoint">llvm.dbg.stoppoint</a></tt> which
-   defines a source line "stop point". The intrinsic should be called early in
-   the function after the all the alloca instructions.  It should be paired off
-   with a closing
-   <tt>%<a href="#format_common_region_end">llvm.dbg.region.end</a></tt>.
-   The function's single argument is
-   the <tt>%<a href="#format_subprograms">llvm.dbg.subprogram.type</a></tt>.</p>
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsubsection">
-  <a name="format_common_region_start">llvm.dbg.region.start</a>
-</div>
-
-<div class="doc_text">
-<pre>
-  void %<a href="#format_common_region_start">llvm.dbg.region.start</a>( metadata )
-</pre>
-
-<p>This intrinsic is used to define the beginning of a declarative scope (ex.
-   block) for local language elements.  It should be paired off with a closing
-   <tt>%<a href="#format_common_region_end">llvm.dbg.region.end</a></tt>.  The
-   function's single argument is
-   the <tt>%<a href="#format_blocks">llvm.dbg.block</a></tt> which is
-   starting.</p>
-
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsubsection">
-  <a name="format_common_region_end">llvm.dbg.region.end</a>
-</div>
-
-<div class="doc_text">
-<pre>
-  void %<a href="#format_common_region_end">llvm.dbg.region.end</a>( metadata )
-</pre>
-
-<p>This intrinsic is used to define the end of a declarative scope (ex. block)
-   for local language elements.  It should be paired off with an
-   opening <tt>%<a href="#format_common_region_start">llvm.dbg.region.start</a></tt>
-   or <tt>%<a href="#format_common_func_start">llvm.dbg.func.start</a></tt>.
-   The function's single argument is either
-   the <tt>%<a href="#format_blocks">llvm.dbg.block</a></tt> or
-   the <tt>%<a href="#format_subprograms">llvm.dbg.subprogram.type</a></tt>
-   which is ending.</p>
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsubsection">
   <a name="format_common_declare">llvm.dbg.declare</a>
 </div>
 
@@ -867,40 +776,6 @@ DW_TAG_return_variable = 258
 
 <!-- ======================================================================= -->
 <div class="doc_subsection">
-  <a name="format_common_stoppoints">
-     Representing stopping points in the source program
-  </a>
-</div>
-
-<div class="doc_text">
-
-<p>LLVM debugger "stop points" are a key part of the debugging representation
-   that allows the LLVM to maintain simple semantics
-   for <a href="#debugopt">debugging optimized code</a>.  The basic idea is that
-   the front-end inserts calls to
-   the <a href="#format_common_stoppoint">%<tt>llvm.dbg.stoppoint</tt></a>
-   intrinsic function at every point in the program where a debugger should be
-   able to inspect the program (these correspond to places a debugger stops when
-   you "<tt>step</tt>" through it).  The front-end can choose to place these as
-   fine-grained as it would like (for example, before every subexpression
-   evaluated), but it is recommended to only put them after every source
-   statement that includes executable code.</p>
-
-<p>Using calls to this intrinsic function to demark legal points for the
-   debugger to inspect the program automatically disables any optimizations that
-   could potentially confuse debugging information.  To
-   non-debug-information-aware transformations, these calls simply look like
-   calls to an external function, which they must assume to do anything
-   (including reading or writing to any part of reachable memory).  On the other
-   hand, it does not impact many optimizations, such as code motion of
-   non-trapping instructions, nor does it impact optimization of subexpressions,
-   code duplication transformations, or basic-block reordering
-   transformations.</p>
-
-</div>
-
-<!-- ======================================================================= -->
-<div class="doc_subsection">
   <a name="format_common_lifetime">Object lifetimes and scoping</a>
 </div>
 
@@ -914,21 +789,20 @@ DW_TAG_return_variable = 258
    scoping in this sense, and does not want to be tied to a language's scoping
    rules.</p>
 
-<p>In order to handle this, the LLVM debug format uses the notion of "regions"
-   of a function, delineated by calls to intrinsic functions.  These intrinsic
-   functions define new regions of the program and indicate when the region
-   lifetime expires.  Consider the following C fragment, for example:</p>
+<p>In order to handle this, the LLVM debug format uses metadata attached
+   to LLVM instructions to encode line number and scoping information.
+   Consider the following C fragment, for example:</p>
 
 <div class="doc_code">
 <pre>
 1.  void foo() {
-2.    int X = ...;
-3.    int Y = ...;
+2.    int X = 21;
+3.    int Y = 22;
 4.    {
-5.      int Z = ...;
-6.      ...
+5.      int Z = 23;
+6.      Z = X;
 7.    }
-8.    ...
+8.    X = Y;
 9.  }
 </pre>
 </div>
@@ -937,99 +811,124 @@ DW_TAG_return_variable = 258
 
 <div class="doc_code">
 <pre>
-void %foo() {
+define void @foo() nounwind ssp {
 entry:
-    %X = alloca int
-    %Y = alloca int
-    %Z = alloca int
-    
-    ...
-    
-    call void @<a href="#format_common_func_start">llvm.dbg.func.start</a>( metadata !0)
-    
-    call void @<a href="#format_common_stoppoint">llvm.dbg.stoppoint</a>( uint 2, uint 2, metadata !1)
-    
-    call void @<a href="#format_common_declare">llvm.dbg.declare</a>({}* %X, ...)
-    call void @<a href="#format_common_declare">llvm.dbg.declare</a>({}* %Y, ...)
-    
-    <i>;; Evaluate expression on line 2, assigning to X.</i>
-    
-    call void @<a href="#format_common_stoppoint">llvm.dbg.stoppoint</a>( uint 3, uint 2, metadata !1)
-    
-    <i>;; Evaluate expression on line 3, assigning to Y.</i>
-    
-    call void @<a href="#format_common_stoppoint">llvm.region.start</a>()
-    call void @<a href="#format_common_stoppoint">llvm.dbg.stoppoint</a>( uint 5, uint 4, metadata !1)
-    call void @<a href="#format_common_declare">llvm.dbg.declare</a>({}* %X, ...)
-    
-    <i>;; Evaluate expression on line 5, assigning to Z.</i>
-    
-    call void @<a href="#format_common_stoppoint">llvm.dbg.stoppoint</a>( uint 7, uint 2, metadata !1)
-    call void @<a href="#format_common_region_end">llvm.region.end</a>()
-    
-    call void @<a href="#format_common_stoppoint">llvm.dbg.stoppoint</a>( uint 9, uint 2, metadata !1)
-    
-    call void @<a href="#format_common_region_end">llvm.region.end</a>()
-    
-    ret void
+  %X = alloca i32, align 4                        ; <i32*> [#uses=4]
+  %Y = alloca i32, align 4                        ; <i32*> [#uses=4]
+  %Z = alloca i32, align 4                        ; <i32*> [#uses=3]
+  %0 = bitcast i32* %X to { }*                    ; <{ }*> [#uses=1]
+  call void @llvm.dbg.declare({ }* %0, metadata !0), !dbg !7
+  store i32 21, i32* %X, !dbg !8
+  %1 = bitcast i32* %Y to { }*                    ; <{ }*> [#uses=1]
+  call void @llvm.dbg.declare({ }* %1, metadata !9), !dbg !10
+  store i32 22, i32* %Y, !dbg !11
+  %2 = bitcast i32* %Z to { }*                    ; <{ }*> [#uses=1]
+  call void @llvm.dbg.declare({ }* %2, metadata !12), !dbg !14
+  store i32 23, i32* %Z, !dbg !15
+  %tmp = load i32* %X, !dbg !16                   ; <i32> [#uses=1]
+  %tmp1 = load i32* %Y, !dbg !16                  ; <i32> [#uses=1]
+  %add = add nsw i32 %tmp, %tmp1, !dbg !16        ; <i32> [#uses=1]
+  store i32 %add, i32* %Z, !dbg !16
+  %tmp2 = load i32* %Y, !dbg !17                  ; <i32> [#uses=1]
+  store i32 %tmp2, i32* %X, !dbg !17
+  ret void, !dbg !18
 }
+
+declare void @llvm.dbg.declare({ }*, metadata) nounwind readnone
+
+!0 = metadata !{i32 459008, metadata !1, metadata !"X", 
+                metadata !3, i32 2, metadata !6}; [ DW_TAG_auto_variable ]
+!1 = metadata !{i32 458763, metadata !2}; [DW_TAG_lexical_block ]
+!2 = metadata !{i32 458798, i32 0, metadata !3, metadata !"foo", metadata !"foo", 
+               metadata !"foo", metadata !3, i32 1, metadata !4, 
+               i1 false, i1 true}; [DW_TAG_subprogram ]
+!3 = metadata !{i32 458769, i32 0, i32 12, metadata !"foo.c", 
+                metadata !"/private/tmp", metadata !"clang 1.1", i1 true, 
+                i1 false, metadata !"", i32 0}; [DW_TAG_compile_unit ]
+!4 = metadata !{i32 458773, metadata !3, metadata !"", null, i32 0, i64 0, i64 0, 
+                i64 0, i32 0, null, metadata !5, i32 0}; [DW_TAG_subroutine_type ]
+!5 = metadata !{null}
+!6 = metadata !{i32 458788, metadata !3, metadata !"int", metadata !3, i32 0, 
+                i64 32, i64 32, i64 0, i32 0, i32 5}; [DW_TAG_base_type ]
+!7 = metadata !{i32 2, i32 7, metadata !1, null}
+!8 = metadata !{i32 2, i32 3, metadata !1, null}
+!9 = metadata !{i32 459008, metadata !1, metadata !"Y", metadata !3, i32 3, 
+                metadata !6}; [ DW_TAG_auto_variable ]
+!10 = metadata !{i32 3, i32 7, metadata !1, null}
+!11 = metadata !{i32 3, i32 3, metadata !1, null}
+!12 = metadata !{i32 459008, metadata !13, metadata !"Z", metadata !3, i32 5, 
+                 metadata !6}; [ DW_TAG_auto_variable ]
+!13 = metadata !{i32 458763, metadata !1}; [DW_TAG_lexical_block ]
+!14 = metadata !{i32 5, i32 9, metadata !13, null}
+!15 = metadata !{i32 5, i32 5, metadata !13, null}
+!16 = metadata !{i32 6, i32 5, metadata !13, null}
+!17 = metadata !{i32 8, i32 3, metadata !1, null}
+!18 = metadata !{i32 9, i32 1, metadata !2, null}
 </pre>
 </div>
 
 <p>This example illustrates a few important details about the LLVM debugging
-   information.  In particular, it shows how the various intrinsics are applied
+   information.  In particular, it shows how the llvm.dbg.declare intrinsic
+   and the location information attached to an instruction are applied
    together to allow a debugger to analyze the relationship between statements,
    variable definitions, and the code used to implement the function.</p>
 
-<p>The first
-   intrinsic <tt>%<a href="#format_common_func_start">llvm.dbg.func.start</a></tt>
-   provides a link with the <a href="#format_subprograms">subprogram
-   descriptor</a> containing the details of this function.  This call also
-   defines the beginning of the function region, bounded by
-   the <tt>%<a href="#format_common_region_end">llvm.region.end</a></tt> at the
-   end of the function.  This region is used to bracket the lifetime of
-   variables declared within.  For a function, this outer region defines a new
-   stack frame whose lifetime ends when the region is ended.</p>
-
-<p>It is possible to define inner regions for short term variables by using the
-   %<a href="#format_common_stoppoint"><tt>llvm.region.start</tt></a>
-   and <a href="#format_common_region_end"><tt>%llvm.region.end</tt></a> to
-   bound a region.  The inner region in this example would be for the block
-   containing the declaration of Z.</p>
-
-<p>Using regions to represent the boundaries of source-level functions allow
-   LLVM interprocedural optimizations to arbitrarily modify LLVM functions
-   without having to worry about breaking mapping information between the LLVM
-   code and the and source-level program.  In particular, the inliner requires
-   no modification to support inlining with debugging information: there is no
-   explicit correlation drawn between LLVM functions and their source-level
-   counterparts (note however, that if the inliner inlines all instances of a
-   non-strong-linkage function into its caller that it will not be possible for
-   the user to manually invoke the inlined function from a debugger).</p>
-
-<p>Once the function has been defined,
-   the <a href="#format_common_stoppoint"><tt>stopping point</tt></a>
-   corresponding to line #2 (column #2) of the function is encountered.  At this
-   point in the function, <b>no</b> local variables are live.  As lines 2 and 3
-   of the example are executed, their variable definitions are introduced into
-   the program using
-   %<a href="#format_common_declare"><tt>llvm.dbg.declare</tt></a>, without the
-   need to specify a new region.  These variables do not require new regions to
-   be introduced because they go out of scope at the same point in the program:
-   line 9.</p>
-
-<p>In contrast, the <tt>Z</tt> variable goes out of scope at a different time,
-   on line 7.  For this reason, it is defined within the inner region, which
-   kills the availability of <tt>Z</tt> before the code for line 8 is executed.
-   In this way, regions can support arbitrary source-language scoping rules, as
-   long as they can only be nested (ie, one scope cannot partially overlap with
-   a part of another scope).</p>
-
-<p>It is worth noting that this scoping mechanism is used to control scoping of
-   all declarations, not just variable declarations.  For example, the scope of
-   a C++ using declaration is controlled with this and could change how name
-   lookup is performed.</p>
-
+   <div class="doc_code">
+   <pre> 
+     call void @llvm.dbg.declare({ }* %0, metadata !0), !dbg !7   
+   </pre>
+   </div>
+<p>This first intrinsic
+   <tt>%<a href="#format_common_declare">llvm.dbg.declare</a></tt>
+   encodes debugging information for the variable <tt>X</tt>.  The
+   <tt>!dbg !7</tt> metadata attached to the intrinsic provides scope
+   information for the variable <tt>X</tt>.</p>
+   <div class="doc_code">
+   <pre>
+     !7 = metadata !{i32 2, i32 7, metadata !1, null}
+     !1 = metadata !{i32 458763, metadata !2}; [DW_TAG_lexical_block ]
+     !2 = metadata !{i32 458798, i32 0, metadata !3, metadata !"foo", 
+                     metadata !"foo", metadata !"foo", metadata !3, i32 1, 
+                     metadata !4, i1 false, i1 true}; [DW_TAG_subprogram ]   
+   </pre>
+   </div>
+
+<p>Here <tt>!7</tt> is a metadata node providing location information.  It has
+   four fields: line number, column number, scope and original scope.  The
+   original scope represents the inlined-at location if this instruction has
+   been inlined into a caller, and is null otherwise.  In this example the
+   scope is encoded by <tt>!1</tt>.  <tt>!1</tt> represents a lexical block
+   inside the scope <tt>!2</tt>, where <tt>!2</tt> is a
+   <a href="#format_subprograms">subprogram descriptor</a>.
+   This way the location information attached to the intrinsic indicates
+   that the variable <tt>X</tt> is declared at line number 2, at function-level
+   scope, in function <tt>foo</tt>.</p>
+
+<p>Now let's take another example.</p>
+
+   <div class="doc_code">
+   <pre> 
+     call void @llvm.dbg.declare({ }* %2, metadata !12), !dbg !14
+   </pre>
+   </div>
+<p>This intrinsic
+   <tt>%<a href="#format_common_declare">llvm.dbg.declare</a></tt>
+   encodes debugging information for the variable <tt>Z</tt>.  The
+   <tt>!dbg !14</tt> metadata attached to the intrinsic provides scope
+   information for the variable <tt>Z</tt>.</p>
+   <div class="doc_code">
+   <pre>
+     !13 = metadata !{i32 458763, metadata !1}; [DW_TAG_lexical_block ]
+     !14 = metadata !{i32 5, i32 9, metadata !13, null}
+   </pre>
+   </div>
+
+<p>Here <tt>!14</tt> indicates that <tt>Z</tt> is declared at line number 5,
+   column number 9, inside the lexical scope <tt>!13</tt>.  This lexical scope
+   itself resides inside the lexical scope <tt>!1</tt> described above.</p>
+
+<p>The scope information attached to each instruction provides a
+   straightforward way to find the instructions covered by a scope.</p>
 </div>
 
 <!-- *********************************************************************** -->
diff --git a/libclamav/c++/llvm/docs/TableGenFundamentals.html b/libclamav/c++/llvm/docs/TableGenFundamentals.html
index bf38dda..ade4bf6 100644
--- a/libclamav/c++/llvm/docs/TableGenFundamentals.html
+++ b/libclamav/c++/llvm/docs/TableGenFundamentals.html
@@ -151,7 +151,7 @@ file prints this (at the time of this writing):</p>
   <b>bit</b> isReMaterializable = 0;
   <b>bit</b> isPredicable = 0;
   <b>bit</b> hasDelaySlot = 0;
-  <b>bit</b> usesCustomDAGSchedInserter = 0;
+  <b>bit</b> usesCustomInserter = 0;
   <b>bit</b> hasCtrlDep = 0;
   <b>bit</b> isNotDuplicable = 0;
   <b>bit</b> hasSideEffects = 0;
@@ -398,13 +398,6 @@ which case the user must specify it explicitly.</dd>
   <dd>a dag value.  The first element is required to be a record definition, the
   remaining elements in the list may be arbitrary other values, including nested
   `<tt>dag</tt>' values.</dd>
-<dt><tt>(implicit a)</tt></dt>
-  <dd>an implicitly defined physical register.  This tells the dag instruction
-  selection emitter the input pattern's extra definitions matches implicit
-  physical register definitions.</dd>
-<dt><tt>(parallel (a), (b))</tt></dt>
-  <dd>a list of dags specifying parallel operations which map to the same
-  instruction.</dd>
 <dt><tt>!strconcat(a, b)</tt></dt>
   <dd>A string value that is the result of concatenating the 'a' and 'b'
   strings.</dd>
@@ -760,6 +753,25 @@ opened, as in the case with the <tt>CALL*</tt> instructions above.</p>
 </div>
 
 <!-- *********************************************************************** -->
+<div class="doc_section"><a name="codegen">Code Generator backend info</a></div>
+<!-- *********************************************************************** -->
+
+<div class="doc_text">
+
+<p>Expressions used by the code generator to describe instructions and isel
+patterns:</p>
+
+<dl>
+<dt><tt>(implicit a)</tt></dt>
+  <dd>an implicitly defined physical register.  This tells the dag instruction
+  selection emitter that the input pattern's extra definitions match implicit
+  physical register definitions.</dd>
+<dt><tt>(parallel (a), (b))</tt></dt>
+  <dd>a list of dags specifying parallel operations which map to the same
+  instruction.</dd>
+</dl>
+
+</div>
+
+<!-- *********************************************************************** -->
 <div class="doc_section"><a name="backends">TableGen backends</a></div>
 <!-- *********************************************************************** -->
 
diff --git a/libclamav/c++/llvm/docs/WritingAnLLVMPass.html b/libclamav/c++/llvm/docs/WritingAnLLVMPass.html
index f715a96..f531a74 100644
--- a/libclamav/c++/llvm/docs/WritingAnLLVMPass.html
+++ b/libclamav/c++/llvm/docs/WritingAnLLVMPass.html
@@ -179,7 +179,7 @@ source tree in the <tt>lib/Transforms/Hello</tt> directory.</p>
 <div class="doc_code"><pre>
 # Makefile for hello pass
 
-# Path to top level of LLVM heirarchy
+# Path to top level of LLVM hierarchy
 LEVEL = ../../..
 
 # Name of the library to build
@@ -453,7 +453,7 @@ available, from the most general to the most specific.</p>
 <p>When choosing a superclass for your Pass, you should choose the <b>most
 specific</b> class possible, while still being able to meet the requirements
 listed.  This gives the LLVM Pass Infrastructure information necessary to
-optimize how passes are run, so that the resultant compiler isn't unneccesarily
+optimize how passes are run, so that the resultant compiler isn't unnecessarily
 slow.</p>
 
 </div>
@@ -492,7 +492,7 @@ invalidated, and are never "run".</p>
 href="http://llvm.org/doxygen/classllvm_1_1ModulePass.html">ModulePass</a></tt>"
 class is the most general of all superclasses that you can use.  Deriving from
 <tt>ModulePass</tt> indicates that your pass uses the entire program as a unit,
-refering to function bodies in no predictable order, or adding and removing
+referring to function bodies in no predictable order, or adding and removing
 functions.  Because nothing is known about the behavior of <tt>ModulePass</tt>
 subclasses, no optimization can be done for their execution.</p>
 
diff --git a/libclamav/c++/llvm/docs/tutorial/LangImpl3.html b/libclamav/c++/llvm/docs/tutorial/LangImpl3.html
index bc5db46..e3d2117 100644
--- a/libclamav/c++/llvm/docs/tutorial/LangImpl3.html
+++ b/libclamav/c++/llvm/docs/tutorial/LangImpl3.html
@@ -183,7 +183,7 @@ Value *VariableExprAST::Codegen() {
 </div>
 
 <p>References to variables are also quite simple using LLVM.  In the simple version
-of Kaleidoscope, we assume that the variable has already been emited somewhere
+of Kaleidoscope, we assume that the variable has already been emitted somewhere
 and its value is available.  In practice, the only values that can be in the
 <tt>NamedValues</tt> map are function arguments.  This
 code simply checks to see that the specified name is in the map (if not, an 
@@ -362,7 +362,7 @@ definition of this function.</p>
 first, we want to allow 'extern'ing a function more than once, as long as the
 prototypes for the externs match (since all arguments have the same type, we
 just have to check that the number of arguments match).  Second, we want to
-allow 'extern'ing a function and then definining a body for it.  This is useful
+allow 'extern'ing a function and then defining a body for it.  This is useful
 when defining mutually recursive functions.</p>
 
 <p>In order to implement this, the code above first checks to see if there is
diff --git a/libclamav/c++/llvm/docs/tutorial/LangImpl4.html b/libclamav/c++/llvm/docs/tutorial/LangImpl4.html
index 8310b61..728d518 100644
--- a/libclamav/c++/llvm/docs/tutorial/LangImpl4.html
+++ b/libclamav/c++/llvm/docs/tutorial/LangImpl4.html
@@ -209,7 +209,7 @@ requires a pointer to the <tt>Module</tt> (through the <tt>ModuleProvider</tt>)
 to construct itself.  Once it is set up, we use a series of "add" calls to add
 a bunch of LLVM passes.  The first pass is basically boilerplate, it adds a pass
 so that later optimizations know how the data structures in the program are
-layed out.  The "<tt>TheExecutionEngine</tt>" variable is related to the JIT,
+laid out.  The "<tt>TheExecutionEngine</tt>" variable is related to the JIT,
 which we will get to in the next section.</p>
 
 <p>In this case, we choose to add 4 optimization passes.  The passes we chose
@@ -388,24 +388,19 @@ entry:
 </pre>
 </div>
 
-<p>This illustrates that we can now call user code, but there is something a bit subtle
-going on here.  Note that we only invoke the JIT on the anonymous functions
-that <em>call testfunc</em>, but we never invoked it on <em>testfunc
-</em>itself.</p>
-
-<p>What actually happened here is that the anonymous function was
-JIT'd when requested.  When the Kaleidoscope app calls through the function
-pointer that is returned, the anonymous function starts executing.  It ends up
-making the call to the "testfunc" function, and ends up in a stub that invokes
-the JIT, lazily, on testfunc.  Once the JIT finishes lazily compiling testfunc,
-it returns and the code re-executes the call.</p>
-
-<p>In summary, the JIT will lazily JIT code, on the fly, as it is needed.  The
-JIT provides a number of other more advanced interfaces for things like freeing
-allocated machine code, rejit'ing functions to update them, etc.  However, even
-with this simple code, we get some surprisingly powerful capabilities - check
-this out (I removed the dump of the anonymous functions, you should get the idea
-by now :) :</p>
+<p>This illustrates that we can now call user code, but there is something a bit
+subtle going on here.  Note that we only invoke the JIT on the anonymous
+functions that <em>call testfunc</em>, but we never invoked it
+on <em>testfunc</em> itself.  What actually happened here is that the JIT
+scanned for all non-JIT'd functions transitively called from the anonymous
+function and compiled all of them before returning
+from <tt>getPointerToFunction()</tt>.</p>
+
+<p>The JIT provides a number of other more advanced interfaces for things like
+freeing allocated machine code, rejit'ing functions to update them, etc.
+However, even with this simple code, we get some surprisingly powerful
+capabilities - check this out (I removed the dump of the anonymous functions,
+you should get the idea by now :) :</p>
 
 <div class="doc_code">
 <pre>
@@ -453,8 +448,8 @@ directly.</p>
 resolved.  It allows you to establish explicit mappings between IR objects and
 addresses (useful for LLVM global variables that you want to map to static
 tables, for example), allows you to dynamically decide on the fly based on the
-function name, and even allows you to have the JIT abort itself if any lazy
-compilation is attempted.</p>
+function name, and even allows you to have the JIT compile functions lazily the
+first time they're called.</p>
 
 <p>One interesting application of this is that we can now extend the language
 by writing arbitrary C++ code to implement operations.  For example, if we add:
diff --git a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl3.html b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl3.html
index 0ba04ab..a598875 100644
--- a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl3.html
+++ b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl3.html
@@ -159,7 +159,7 @@ uses "the foo::get(..)" idiom instead of "new foo(..)" or "foo::Create(..)".</p>
 </div>
 
 <p>References to variables are also quite simple using LLVM.  In the simple
-version of Kaleidoscope, we assume that the variable has already been emited
+version of Kaleidoscope, we assume that the variable has already been emitted
 somewhere and its value is available.  In practice, the only values that can be
 in the <tt>Codegen.named_values</tt> map are function arguments.  This code
 simply checks to see that the specified name is in the map (if not, an unknown
@@ -323,7 +323,7 @@ code above.</p>
 first, we want to allow 'extern'ing a function more than once, as long as the
 prototypes for the externs match (since all arguments have the same type, we
 just have to check that the number of arguments match).  Second, we want to
-allow 'extern'ing a function and then definining a body for it.  This is useful
+allow 'extern'ing a function and then defining a body for it.  This is useful
 when defining mutually recursive functions.</p>
 
 <div class="doc_code">
diff --git a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl4.html b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl4.html
index 238fc53..543e12f 100644
--- a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl4.html
+++ b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl4.html
@@ -224,7 +224,7 @@ requires a pointer to the <tt>the_module</tt> (through the
 <tt>the_module_provider</tt>) to construct itself.  Once it is set up, we use a
 series of "add" calls to add a bunch of LLVM passes.  The first pass is
 basically boilerplate, it adds a pass so that later optimizations know how the
-data structures in the program are layed out.  The
+data structures in the program are laid out.  The
 "<tt>the_execution_engine</tt>" variable is related to the JIT, which we will
 get to in the next section.</p>
 
@@ -406,22 +406,17 @@ entry:
 
 <p>This illustrates that we can now call user code, but there is something a bit
 subtle going on here.  Note that we only invoke the JIT on the anonymous
-functions that <em>call testfunc</em>, but we never invoked it on <em>testfunc
-</em>itself.</p>
-
-<p>What actually happened here is that the anonymous function was JIT'd when
-requested.  When the Kaleidoscope app calls through the function pointer that is
-returned, the anonymous function starts executing.  It ends up making the call
-to the "testfunc" function, and ends up in a stub that invokes the JIT, lazily,
-on testfunc.  Once the JIT finishes lazily compiling testfunc,
-it returns and the code re-executes the call.</p>
-
-<p>In summary, the JIT will lazily JIT code, on the fly, as it is needed.  The
-JIT provides a number of other more advanced interfaces for things like freeing
-allocated machine code, rejit'ing functions to update them, etc.  However, even
-with this simple code, we get some surprisingly powerful capabilities - check
-this out (I removed the dump of the anonymous functions, you should get the idea
-by now :) :</p>
+functions that <em>call testfunc</em>, but we never invoked it
+on <em>testfunc</em> itself.  What actually happened here is that the JIT
+scanned for all non-JIT'd functions transitively called from the anonymous
+function and compiled all of them before returning
+from <tt>run_function</tt>.</p>
+
+<p>The JIT provides a number of other more advanced interfaces for things like
+freeing allocated machine code, rejit'ing functions to update them, etc.
+However, even with this simple code, we get some surprisingly powerful
+capabilities - check this out (I removed the dump of the anonymous functions,
+you should get the idea by now :) :</p>
 
 <div class="doc_code">
 <pre>
@@ -467,8 +462,8 @@ calls in the module to call the libm version of <tt>sin</tt> directly.</p>
 get resolved.  It allows you to establish explicit mappings between IR objects
 and addresses (useful for LLVM global variables that you want to map to static
 tables, for example), allows you to dynamically decide on the fly based on the
-function name, and even allows you to have the JIT abort itself if any lazy
-compilation is attempted.</p>
+function name, and even allows you to have the JIT compile functions lazily the
+first time they're called.</p>
 
 <p>One interesting application of this is that we can now extend the language
 by writing arbitrary C code to implement operations.  For example, if we add:
diff --git a/libclamav/c++/llvm/include/llvm-c/Core.h b/libclamav/c++/llvm/include/llvm-c/Core.h
index a44afcd..c741d1c 100644
--- a/libclamav/c++/llvm/include/llvm-c/Core.h
+++ b/libclamav/c++/llvm/include/llvm-c/Core.h
@@ -33,7 +33,7 @@
 #ifndef LLVM_C_CORE_H
 #define LLVM_C_CORE_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 #ifdef __cplusplus
 
@@ -89,6 +89,12 @@ typedef struct LLVMOpaqueMemoryBuffer *LLVMMemoryBufferRef;
 /** See the llvm::PassManagerBase class. */
 typedef struct LLVMOpaquePassManager *LLVMPassManagerRef;
 
+/**
+ * Used to iterate through the uses of a Value, allowing access to all Values
+ * that use this Value.  See the llvm::Use and llvm::value_use_iterator classes.
+ */
+typedef struct LLVMOpaqueUseIterator *LLVMUseIteratorRef;
+
 typedef enum {
     LLVMZExtAttribute       = 1<<0,
     LLVMSExtAttribute       = 1<<1,
@@ -114,6 +120,62 @@ typedef enum {
 } LLVMAttribute;
 
 typedef enum {
+  LLVMRet            = 1,
+  LLVMBr             = 2,
+  LLVMSwitch         = 3,
+  LLVMInvoke         = 4,
+  LLVMUnwind         = 5,
+  LLVMUnreachable    = 6,
+  LLVMAdd            = 7,
+  LLVMFAdd           = 8,
+  LLVMSub            = 9,
+  LLVMFSub           = 10,
+  LLVMMul            = 11,
+  LLVMFMul           = 12,
+  LLVMUDiv           = 13,
+  LLVMSDiv           = 14,
+  LLVMFDiv           = 15,
+  LLVMURem           = 16,
+  LLVMSRem           = 17,
+  LLVMFRem           = 18,
+  LLVMShl            = 19,
+  LLVMLShr           = 20,
+  LLVMAShr           = 21,
+  LLVMAnd            = 22,
+  LLVMOr             = 23,
+  LLVMXor            = 24,
+  LLVMMalloc         = 25,
+  LLVMFree           = 26,
+  LLVMAlloca         = 27,
+  LLVMLoad           = 28,
+  LLVMStore          = 29,
+  LLVMGetElementPtr  = 30,
+  LLVMTrunk          = 31,
+  LLVMZExt           = 32,
+  LLVMSExt           = 33,
+  LLVMFPToUI         = 34,
+  LLVMFPToSI         = 35,
+  LLVMUIToFP         = 36,
+  LLVMSIToFP         = 37,
+  LLVMFPTrunc        = 38,
+  LLVMFPExt          = 39,
+  LLVMPtrToInt       = 40,
+  LLVMIntToPtr       = 41,
+  LLVMBitCast        = 42,
+  LLVMICmp           = 43,
+  LLVMFCmp           = 44,
+  LLVMPHI            = 45,
+  LLVMCall           = 46,
+  LLVMSelect         = 47,
+  LLVMVAArg          = 50,
+  LLVMExtractElement = 51,
+  LLVMInsertElement  = 52,
+  LLVMShuffleVector  = 53,
+  LLVMExtractValue   = 54,
+  LLVMInsertValue    = 55
+} LLVMOpcode;
+
+typedef enum {
   LLVMVoidTypeKind,        /**< type with no size */
   LLVMFloatTypeKind,       /**< 32 bit floating point type */
   LLVMDoubleTypeKind,      /**< 64 bit floating point type */
@@ -393,9 +455,7 @@ void LLVMDisposeTypeHandle(LLVMTypeHandleRef TypeHandle);
         macro(UnreachableInst)              \
         macro(UnwindInst)                   \
     macro(UnaryInstruction)                 \
-      macro(AllocationInst)                 \
-        macro(AllocaInst)                   \
-        macro(MallocInst)                   \
+      macro(AllocaInst)                     \
       macro(CastInst)                       \
         macro(BitCastInst)                  \
         macro(FPExtInst)                    \
@@ -410,7 +470,6 @@ void LLVMDisposeTypeHandle(LLVMTypeHandleRef TypeHandle);
         macro(UIToFPInst)                   \
         macro(ZExtInst)                     \
       macro(ExtractValueInst)               \
-      macro(FreeInst)                       \
       macro(LoadInst)                       \
       macro(VAArgInst)
 
@@ -419,6 +478,7 @@ LLVMTypeRef LLVMTypeOf(LLVMValueRef Val);
 const char *LLVMGetValueName(LLVMValueRef Val);
 void LLVMSetValueName(LLVMValueRef Val, const char *Name);
 void LLVMDumpValue(LLVMValueRef Val);
+void LLVMReplaceAllUsesWith(LLVMValueRef OldVal, LLVMValueRef NewVal);
 
 /* Conversion functions. Return the input value if it is an instance of the
    specified class, otherwise NULL. See llvm::dyn_cast_or_null<>. */
@@ -426,6 +486,15 @@ void LLVMDumpValue(LLVMValueRef Val);
   LLVMValueRef LLVMIsA##name(LLVMValueRef Val);
 LLVM_FOR_EACH_VALUE_SUBCLASS(LLVM_DECLARE_VALUE_CAST)
 
+/* Operations on Uses */
+LLVMUseIteratorRef LLVMGetFirstUse(LLVMValueRef Val);
+LLVMUseIteratorRef LLVMGetNextUse(LLVMUseIteratorRef U);
+LLVMValueRef LLVMGetUser(LLVMUseIteratorRef U);
+LLVMValueRef LLVMGetUsedValue(LLVMUseIteratorRef U);
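+
+/* A usage sketch (an illustrative comment added for these notes, not part of
+   the original header): visiting each user of a value Val.
+
+     LLVMUseIteratorRef U;
+     for (U = LLVMGetFirstUse(Val); U; U = LLVMGetNextUse(U))
+       LLVMDumpValue(LLVMGetUser(U));
+
+   LLVMGetFirstUse returns NULL for a value with no uses, and LLVMGetNextUse
+   returns NULL at the end of the use list. */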
+
+/* Operations on Users */
+LLVMValueRef LLVMGetOperand(LLVMValueRef Val, unsigned Index);
+
 /* Operations on constants of any type */
 LLVMValueRef LLVMConstNull(LLVMTypeRef Ty); /* all zeroes */
 LLVMValueRef LLVMConstAllOnes(LLVMTypeRef Ty); /* only for int/vector */
@@ -446,6 +515,8 @@ LLVMValueRef LLVMConstReal(LLVMTypeRef RealTy, double N);
 LLVMValueRef LLVMConstRealOfString(LLVMTypeRef RealTy, const char *Text);
 LLVMValueRef LLVMConstRealOfStringAndSize(LLVMTypeRef RealTy, const char *Text,
                                           unsigned SLen);
+unsigned long long LLVMConstIntGetZExtValue(LLVMValueRef ConstantVal);
+long long LLVMConstIntGetSExtValue(LLVMValueRef ConstantVal);
 
 
 /* Operations on composite constants */
@@ -464,6 +535,7 @@ LLVMValueRef LLVMConstStruct(LLVMValueRef *ConstantVals, unsigned Count,
 LLVMValueRef LLVMConstVector(LLVMValueRef *ScalarConstantVals, unsigned Size);
 
 /* Constant expressions */
+LLVMOpcode LLVMGetConstOpcode(LLVMValueRef ConstantVal);
 LLVMValueRef LLVMAlignOf(LLVMTypeRef Ty);
 LLVMValueRef LLVMSizeOf(LLVMTypeRef Ty);
 LLVMValueRef LLVMConstNeg(LLVMValueRef ConstantVal);
@@ -587,6 +659,7 @@ void LLVMSetFunctionCallConv(LLVMValueRef Fn, unsigned CC);
 const char *LLVMGetGC(LLVMValueRef Fn);
 void LLVMSetGC(LLVMValueRef Fn, const char *Name);
 void LLVMAddFunctionAttr(LLVMValueRef Fn, LLVMAttribute PA);
+LLVMAttribute LLVMGetFunctionAttr(LLVMValueRef Fn);
 void LLVMRemoveFunctionAttr(LLVMValueRef Fn, LLVMAttribute PA);
 
 /* Operations on parameters */
@@ -600,6 +673,7 @@ LLVMValueRef LLVMGetNextParam(LLVMValueRef Arg);
 LLVMValueRef LLVMGetPreviousParam(LLVMValueRef Arg);
 void LLVMAddAttribute(LLVMValueRef Arg, LLVMAttribute PA);
 void LLVMRemoveAttribute(LLVMValueRef Arg, LLVMAttribute PA);
+LLVMAttribute LLVMGetAttribute(LLVMValueRef Arg);
 void LLVMSetParamAlignment(LLVMValueRef Arg, unsigned align);
 
 /* Operations on basic blocks */
@@ -796,7 +870,7 @@ LLVMValueRef LLVMBuildTruncOrBitCast(LLVMBuilderRef, LLVMValueRef Val,
                                      LLVMTypeRef DestTy, const char *Name);
 LLVMValueRef LLVMBuildPointerCast(LLVMBuilderRef, LLVMValueRef Val,
                                   LLVMTypeRef DestTy, const char *Name);
-LLVMValueRef LLVMBuildIntCast(LLVMBuilderRef, LLVMValueRef Val,
+LLVMValueRef LLVMBuildIntCast(LLVMBuilderRef, LLVMValueRef Val, /*Signed cast!*/
                               LLVMTypeRef DestTy, const char *Name);
 LLVMValueRef LLVMBuildFPCast(LLVMBuilderRef, LLVMValueRef Val,
                              LLVMTypeRef DestTy, const char *Name);
@@ -950,6 +1024,7 @@ namespace llvm {
   DEFINE_SIMPLE_CONVERSION_FUNCTIONS(ModuleProvider,     LLVMModuleProviderRef)
   DEFINE_SIMPLE_CONVERSION_FUNCTIONS(MemoryBuffer,       LLVMMemoryBufferRef  )
   DEFINE_SIMPLE_CONVERSION_FUNCTIONS(LLVMContext,        LLVMContextRef       )
+  DEFINE_SIMPLE_CONVERSION_FUNCTIONS(Use,                LLVMUseIteratorRef           )
   DEFINE_STDCXX_CONVERSION_FUNCTIONS(PassManagerBase,    LLVMPassManagerRef   )
   
   #undef DEFINE_STDCXX_CONVERSION_FUNCTIONS
diff --git a/libclamav/c++/llvm/include/llvm-c/Transforms/IPO.h b/libclamav/c++/llvm/include/llvm-c/Transforms/IPO.h
index 9bc947f..0a94315 100644
--- a/libclamav/c++/llvm/include/llvm-c/Transforms/IPO.h
+++ b/libclamav/c++/llvm/include/llvm-c/Transforms/IPO.h
@@ -54,7 +54,7 @@ void LLVMAddLowerSetJmpPass(LLVMPassManagerRef PM);
 /** See llvm::createPruneEHPass function. */
 void LLVMAddPruneEHPass(LLVMPassManagerRef PM);
 
-/** See llvm::createRaiseAllocationsPass function. */
+// FIXME: Remove in LLVM 3.0.
 void LLVMAddRaiseAllocationsPass(LLVMPassManagerRef PM);
 
 /** See llvm::createStripDeadPrototypesPass function. */
diff --git a/libclamav/c++/llvm/include/llvm-c/Transforms/Scalar.h b/libclamav/c++/llvm/include/llvm-c/Transforms/Scalar.h
index e52a1d1..2c5a371 100644
--- a/libclamav/c++/llvm/include/llvm-c/Transforms/Scalar.h
+++ b/libclamav/c++/llvm/include/llvm-c/Transforms/Scalar.h
@@ -31,9 +31,6 @@ void LLVMAddAggressiveDCEPass(LLVMPassManagerRef PM);
 /** See llvm::createCFGSimplificationPass function. */
 void LLVMAddCFGSimplificationPass(LLVMPassManagerRef PM);
 
-/** See llvm::createCondPropagationPass function. */
-void LLVMAddCondPropagationPass(LLVMPassManagerRef PM);
-
 /** See llvm::createDeadStoreEliminationPass function. */
 void LLVMAddDeadStoreEliminationPass(LLVMPassManagerRef PM);
 
diff --git a/libclamav/c++/llvm/include/llvm/ADT/APFloat.h b/libclamav/c++/llvm/include/llvm/ADT/APFloat.h
index 4d7e7ae..30d998f 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/APFloat.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/APFloat.h
@@ -125,6 +125,7 @@ namespace llvm {
   public:
 
     /* We support the following floating point semantics.  */
+    static const fltSemantics IEEEhalf;
     static const fltSemantics IEEEsingle;
     static const fltSemantics IEEEdouble;
     static const fltSemantics IEEEquad;
@@ -321,12 +322,14 @@ namespace llvm {
     opStatus roundSignificandWithExponent(const integerPart *, unsigned int,
                                           int, roundingMode);
 
+    APInt convertHalfAPFloatToAPInt() const;
     APInt convertFloatAPFloatToAPInt() const;
     APInt convertDoubleAPFloatToAPInt() const;
     APInt convertQuadrupleAPFloatToAPInt() const;
     APInt convertF80LongDoubleAPFloatToAPInt() const;
     APInt convertPPCDoubleDoubleAPFloatToAPInt() const;
     void initFromAPInt(const APInt& api, bool isIEEE = false);
+    void initFromHalfAPInt(const APInt& api);
     void initFromFloatAPInt(const APInt& api);
     void initFromDoubleAPInt(const APInt& api);
     void initFromQuadrupleAPInt(const APInt &api);
diff --git a/libclamav/c++/llvm/include/llvm/ADT/APInt.h b/libclamav/c++/llvm/include/llvm/ADT/APInt.h
index 6c418bd..88aa995 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/APInt.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/APInt.h
@@ -1234,6 +1234,11 @@ public:
     return BitWidth - 1 - countLeadingZeros();
   }
 
+  /// @returns the ceil log base 2 of this APInt.
+  unsigned ceilLogBase2() const {
+    return BitWidth - (*this - 1).countLeadingZeros();
+  }
+
   /// @returns the log base 2 of this APInt if its an exact power of two, -1
   /// otherwise
   int32_t exactLogBase2() const {
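
For reference, a quick sketch of the new ceilLogBase2() next to the
existing logBase2(); the check function and test values are
illustrative:

    #include "llvm/ADT/APInt.h"
    using llvm::APInt;

    void check() {
      APInt Eight(32, 8), Nine(32, 9);
      unsigned A = Eight.logBase2();     // 3
      unsigned B = Eight.ceilLogBase2(); // 3: 8 is an exact power of two
      unsigned C = Nine.ceilLogBase2();  // 4: 32 - countLeadingZeros(9 - 1)
      (void)A; (void)B; (void)C;
    }
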
diff --git a/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h b/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h
index daeda28..8329947 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h
@@ -14,8 +14,9 @@
 #ifndef LLVM_ADT_DENSEMAP_H
 #define LLVM_ADT_DENSEMAP_H
 
-#include "llvm/Support/PointerLikeTypeTraits.h"
 #include "llvm/Support/MathExtras.h"
+#include "llvm/Support/PointerLikeTypeTraits.h"
+#include "llvm/Support/type_traits.h"
 #include "llvm/ADT/DenseMapInfo.h"
 #include <iterator>
 #include <new>
@@ -27,12 +28,8 @@ namespace llvm {
 
 template<typename KeyT, typename ValueT,
          typename KeyInfoT = DenseMapInfo<KeyT>,
-         typename ValueInfoT = DenseMapInfo<ValueT> >
+         typename ValueInfoT = DenseMapInfo<ValueT>, bool IsConst = false>
 class DenseMapIterator;
-template<typename KeyT, typename ValueT,
-         typename KeyInfoT = DenseMapInfo<KeyT>,
-         typename ValueInfoT = DenseMapInfo<ValueT> >
-class DenseMapConstIterator;
 
 template<typename KeyT, typename ValueT,
          typename KeyInfoT = DenseMapInfo<KeyT>,
@@ -73,7 +70,8 @@ public:
   }
 
   typedef DenseMapIterator<KeyT, ValueT, KeyInfoT> iterator;
-  typedef DenseMapConstIterator<KeyT, ValueT, KeyInfoT> const_iterator;
+  typedef DenseMapIterator<KeyT, ValueT,
+                           KeyInfoT, ValueInfoT, true> const_iterator;
   inline iterator begin() {
      return iterator(Buckets, Buckets+NumBuckets);
   }
@@ -145,6 +143,9 @@ public:
     return ValueT();
   }
 
+  // Inserts key,value pair into the map if the key isn't already in the map.
+  // If the key is already in the map, it returns false and doesn't update the
+  // value.
   std::pair<iterator, bool> insert(const std::pair<KeyT, ValueT> &KV) {
     BucketT *TheBucket;
     if (LookupBucketFor(KV.first, TheBucket))
@@ -423,40 +424,55 @@ private:
   }
 };
 
-template<typename KeyT, typename ValueT, typename KeyInfoT, typename ValueInfoT>
-class DenseMapIterator : 
-      public std::iterator<std::forward_iterator_tag, std::pair<KeyT, ValueT>,
-                          ptrdiff_t> {
-  typedef std::pair<KeyT, ValueT> BucketT;
-protected:
-  const BucketT *Ptr, *End;
+template<typename KeyT, typename ValueT,
+         typename KeyInfoT, typename ValueInfoT, bool IsConst>
+class DenseMapIterator {
+  typedef std::pair<KeyT, ValueT> Bucket;
+  typedef DenseMapIterator<KeyT, ValueT,
+                           KeyInfoT, ValueInfoT, true> ConstIterator;
+  friend class DenseMapIterator<KeyT, ValueT, KeyInfoT, ValueInfoT, true>;
+public:
+  typedef ptrdiff_t difference_type;
+  typedef typename conditional<IsConst, const Bucket, Bucket>::type value_type;
+  typedef value_type *pointer;
+  typedef value_type &reference;
+  typedef std::forward_iterator_tag iterator_category;
+private:
+  pointer Ptr, End;
 public:
   DenseMapIterator() : Ptr(0), End(0) {}
 
-  DenseMapIterator(const BucketT *Pos, const BucketT *E) : Ptr(Pos), End(E) {
+  DenseMapIterator(pointer Pos, pointer E) : Ptr(Pos), End(E) {
     AdvancePastEmptyBuckets();
   }
 
-  std::pair<KeyT, ValueT> &operator*() const {
-    return *const_cast<BucketT*>(Ptr);
+  // If IsConst is true this is a converting constructor from iterator to
+  // const_iterator and the default copy constructor is used.
+  // Otherwise this is a copy constructor for iterator.
+  DenseMapIterator(const DenseMapIterator<KeyT, ValueT,
+                                          KeyInfoT, ValueInfoT, false>& I)
+    : Ptr(I.Ptr), End(I.End) {}
+
+  reference operator*() const {
+    return *Ptr;
   }
-  std::pair<KeyT, ValueT> *operator->() const {
-    return const_cast<BucketT*>(Ptr);
+  pointer operator->() const {
+    return Ptr;
   }
 
-  bool operator==(const DenseMapIterator &RHS) const {
-    return Ptr == RHS.Ptr;
+  bool operator==(const ConstIterator &RHS) const {
+    return Ptr == RHS.operator->();
   }
-  bool operator!=(const DenseMapIterator &RHS) const {
-    return Ptr != RHS.Ptr;
+  bool operator!=(const ConstIterator &RHS) const {
+    return Ptr != RHS.operator->();
   }
 
-  inline DenseMapIterator& operator++() {          // Preincrement
+  inline DenseMapIterator& operator++() {  // Preincrement
     ++Ptr;
     AdvancePastEmptyBuckets();
     return *this;
   }
-  DenseMapIterator operator++(int) {        // Postincrement
+  DenseMapIterator operator++(int) {  // Postincrement
     DenseMapIterator tmp = *this; ++*this; return tmp;
   }
 
@@ -472,22 +488,6 @@ private:
   }
 };
 
-template<typename KeyT, typename ValueT, typename KeyInfoT, typename ValueInfoT>
-class DenseMapConstIterator : public DenseMapIterator<KeyT, ValueT, KeyInfoT> {
-public:
-  DenseMapConstIterator() : DenseMapIterator<KeyT, ValueT, KeyInfoT>() {}
-  DenseMapConstIterator(const std::pair<KeyT, ValueT> *Pos,
-                        const std::pair<KeyT, ValueT> *E)
-    : DenseMapIterator<KeyT, ValueT, KeyInfoT>(Pos, E) {
-  }
-  const std::pair<KeyT, ValueT> &operator*() const {
-    return *this->Ptr;
-  }
-  const std::pair<KeyT, ValueT> *operator->() const {
-    return this->Ptr;
-  }
-};
-
 } // end namespace llvm
 
 #endif
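
The practical effect of merging DenseMapConstIterator into
DenseMapIterator is that iterator now converts implicitly to
const_iterator and the two kinds compare against each other; a minimal
sketch (demo is illustrative):

    #include "llvm/ADT/DenseMap.h"
    #include <utility>

    void demo() {
      llvm::DenseMap<int, int> M;
      M.insert(std::make_pair(1, 2));
      llvm::DenseMap<int, int>::iterator I = M.begin();
      llvm::DenseMap<int, int>::const_iterator CI = I; // converting ctor
      bool AtEnd = (CI == M.end());                    // mixed comparison
      (void)AtEnd;
    }
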
diff --git a/libclamav/c++/llvm/include/llvm/ADT/DenseMapInfo.h b/libclamav/c++/llvm/include/llvm/ADT/DenseMapInfo.h
index 632728b..2f241c5 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/DenseMapInfo.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/DenseMapInfo.h
@@ -76,7 +76,7 @@ template<> struct DenseMapInfo<unsigned long> {
   static inline unsigned long getEmptyKey() { return ~0UL; }
   static inline unsigned long getTombstoneKey() { return ~0UL - 1L; }
   static unsigned getHashValue(const unsigned long& Val) {
-    return Val * 37UL;
+    return (unsigned)(Val * 37UL);
   }
   static bool isPod() { return true; }
   static bool isEqual(const unsigned long& LHS, const unsigned long& RHS) {
@@ -89,7 +89,7 @@ template<> struct DenseMapInfo<unsigned long long> {
   static inline unsigned long long getEmptyKey() { return ~0ULL; }
   static inline unsigned long long getTombstoneKey() { return ~0ULL - 1ULL; }
   static unsigned getHashValue(const unsigned long long& Val) {
-    return Val * 37ULL;
+    return (unsigned)(Val * 37ULL);
   }
   static bool isPod() { return true; }
   static bool isEqual(const unsigned long long& LHS,
diff --git a/libclamav/c++/llvm/include/llvm/ADT/EquivalenceClasses.h b/libclamav/c++/llvm/include/llvm/ADT/EquivalenceClasses.h
index ac9dd4d..f5f3d49 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/EquivalenceClasses.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/EquivalenceClasses.h
@@ -15,7 +15,7 @@
 #ifndef LLVM_ADT_EQUIVALENCECLASSES_H
 #define LLVM_ADT_EQUIVALENCECLASSES_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <set>
 
 namespace llvm {
diff --git a/libclamav/c++/llvm/include/llvm/ADT/FoldingSet.h b/libclamav/c++/llvm/include/llvm/ADT/FoldingSet.h
index c62c47d..81dc469 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/FoldingSet.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/FoldingSet.h
@@ -16,10 +16,9 @@
 #ifndef LLVM_ADT_FOLDINGSET_H
 #define LLVM_ADT_FOLDINGSET_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/StringRef.h"
-#include <iterator>
 
 namespace llvm {
   class APFloat;
diff --git a/libclamav/c++/llvm/include/llvm/ADT/GraphTraits.h b/libclamav/c++/llvm/include/llvm/ADT/GraphTraits.h
index 2d103cf..0fd1f50 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/GraphTraits.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/GraphTraits.h
@@ -30,7 +30,7 @@ struct GraphTraits {
   // typedef NodeType          - Type of Node in the graph
   // typedef ChildIteratorType - Type used to iterate over children in graph
 
-  // static NodeType *getEntryNode(GraphType *)
+  // static NodeType *getEntryNode(const GraphType &)
   //    Return the entry node of the graph
 
   // static ChildIteratorType child_begin(NodeType *)
diff --git a/libclamav/c++/llvm/include/llvm/ADT/HashExtras.h b/libclamav/c++/llvm/include/llvm/ADT/HashExtras.h
deleted file mode 100644
index 20c4fd3..0000000
--- a/libclamav/c++/llvm/include/llvm/ADT/HashExtras.h
+++ /dev/null
@@ -1,40 +0,0 @@
-//===-- llvm/ADT/HashExtras.h - Useful functions for STL hash ---*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file contains some templates that are useful if you are working with the
-// STL Hashed containers.
-//
-// No library is required when using these functions.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_ADT_HASHEXTRAS_H
-#define LLVM_ADT_HASHEXTRAS_H
-
-#include <string>
-
-// Cannot specialize hash template from outside of the std namespace.
-namespace HASH_NAMESPACE {
-
-// Provide a hash function for arbitrary pointers...
-template <class T> struct hash<T *> {
-  inline size_t operator()(const T *Val) const {
-    return reinterpret_cast<size_t>(Val);
-  }
-};
-
-template <> struct hash<std::string> {
-  size_t operator()(std::string const &str) const {
-    return hash<char const *>()(str.c_str());
-  }
-};
-
-}  // End namespace std
-
-#endif
diff --git a/libclamav/c++/llvm/include/llvm/ADT/ImmutableList.h b/libclamav/c++/llvm/include/llvm/ADT/ImmutableList.h
index a7f5819..5f8cb57 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/ImmutableList.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/ImmutableList.h
@@ -16,7 +16,7 @@
 
 #include "llvm/Support/Allocator.h"
 #include "llvm/ADT/FoldingSet.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <cassert>
 
 namespace llvm {
diff --git a/libclamav/c++/llvm/include/llvm/ADT/ImmutableMap.h b/libclamav/c++/llvm/include/llvm/ADT/ImmutableMap.h
index 96bf012..fc9fe8b 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/ImmutableMap.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/ImmutableMap.h
@@ -80,28 +80,30 @@ public:
 
   class Factory {
     typename TreeTy::Factory F;
+    const bool Canonicalize;
 
   public:
-    Factory() {}
-
-    Factory(BumpPtrAllocator& Alloc)
-      : F(Alloc) {}
+    Factory(bool canonicalize = true)
+      : Canonicalize(canonicalize) {}
+    
+    Factory(BumpPtrAllocator& Alloc, bool canonicalize = true)
+      : F(Alloc), Canonicalize(canonicalize) {}
 
     ImmutableMap GetEmptyMap() { return ImmutableMap(F.GetEmptyTree()); }
 
     ImmutableMap Add(ImmutableMap Old, key_type_ref K, data_type_ref D) {
       TreeTy *T = F.Add(Old.Root, std::make_pair<key_type,data_type>(K,D));
-      return ImmutableMap(F.GetCanonicalTree(T));
+      return ImmutableMap(Canonicalize ? F.GetCanonicalTree(T): T);
     }
 
     ImmutableMap Remove(ImmutableMap Old, key_type_ref K) {
       TreeTy *T = F.Remove(Old.Root,K);
-      return ImmutableMap(F.GetCanonicalTree(T));
+      return ImmutableMap(Canonicalize ? F.GetCanonicalTree(T): T);
     }
 
   private:
-    Factory(const Factory& RHS) {};
-    void operator=(const Factory& RHS) {};
+    Factory(const Factory& RHS); // DO NOT IMPLEMENT
+    void operator=(const Factory& RHS); // DO NOT IMPLEMENT
   };
 
   friend class Factory;
diff --git a/libclamav/c++/llvm/include/llvm/ADT/ImmutableSet.h b/libclamav/c++/llvm/include/llvm/ADT/ImmutableSet.h
index 5aa1943..676a198 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/ImmutableSet.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/ImmutableSet.h
@@ -16,7 +16,7 @@
 
 #include "llvm/Support/Allocator.h"
 #include "llvm/ADT/FoldingSet.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <cassert>
 #include <functional>
 
@@ -947,15 +947,19 @@ public:
 
   class Factory {
     typename TreeTy::Factory F;
+    const bool Canonicalize;
 
   public:
-    Factory() {}
+    Factory(bool canonicalize = true)
+      : Canonicalize(canonicalize) {}
 
-    Factory(BumpPtrAllocator& Alloc)
-      : F(Alloc) {}
+    Factory(BumpPtrAllocator& Alloc, bool canonicalize = true)
+      : F(Alloc), Canonicalize(canonicalize) {}
 
     /// GetEmptySet - Returns an immutable set that contains no elements.
-    ImmutableSet GetEmptySet() { return ImmutableSet(F.GetEmptyTree()); }
+    ImmutableSet GetEmptySet() {
+      return ImmutableSet(F.GetEmptyTree());
+    }
 
     /// Add - Creates a new immutable set that contains all of the values
     ///  of the original set with the addition of the specified value.  If
@@ -965,7 +969,8 @@ public:
     ///  The memory allocated to represent the set is released when the
     ///  factory object that created the set is destroyed.
     ImmutableSet Add(ImmutableSet Old, value_type_ref V) {
-      return ImmutableSet(F.GetCanonicalTree(F.Add(Old.Root,V)));
+      TreeTy *NewT = F.Add(Old.Root, V);
+      return ImmutableSet(Canonicalize ? F.GetCanonicalTree(NewT) : NewT);
     }
 
     /// Remove - Creates a new immutable set that contains all of the values
@@ -976,14 +981,15 @@ public:
     ///  The memory allocated to represent the set is released when the
     ///  factory object that created the set is destroyed.
     ImmutableSet Remove(ImmutableSet Old, value_type_ref V) {
-      return ImmutableSet(F.GetCanonicalTree(F.Remove(Old.Root,V)));
+      TreeTy *NewT = F.Remove(Old.Root, V);
+      return ImmutableSet(Canonicalize ? F.GetCanonicalTree(NewT) : NewT);
     }
 
     BumpPtrAllocator& getAllocator() { return F.getAllocator(); }
 
   private:
-    Factory(const Factory& RHS) {};
-    void operator=(const Factory& RHS) {};
+    Factory(const Factory& RHS); // DO NOT IMPLEMENT
+    void operator=(const Factory& RHS); // DO NOT IMPLEMENT
   };
 
   friend class Factory;
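
A sketch of the new canonicalization switch (demo is illustrative): a
Factory constructed with false skips GetCanonicalTree() in Add/Remove,
trading structural uniqueness of the result trees for speed:

    #include "llvm/ADT/ImmutableSet.h"

    void demo() {
      llvm::ImmutableSet<int>::Factory F(/*canonicalize=*/false);
      llvm::ImmutableSet<int> S = F.GetEmptySet();
      S = F.Add(S, 42);     // returns the raw, non-canonicalized tree
      S = F.Remove(S, 42);
    }
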
diff --git a/libclamav/c++/llvm/include/llvm/ADT/PointerUnion.h b/libclamav/c++/llvm/include/llvm/ADT/PointerUnion.h
index 33f2fcb..49c8940 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/PointerUnion.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/PointerUnion.h
@@ -186,8 +186,9 @@ namespace llvm {
     int is() const {
       // Is it PT1/PT2?
       if (::llvm::getPointerUnionTypeNum<PT1, PT2>((T*)0) != -1)
-        return Val.is<InnerUnion>() && Val.get<InnerUnion>().is<T>();
-      return Val.is<T>();
+        return Val.template is<InnerUnion>() && 
+               Val.template get<InnerUnion>().template is<T>();
+      return Val.template is<T>();
     }
     
     /// get<T>() - Return the value of the specified pointer type. If the
@@ -197,9 +198,9 @@ namespace llvm {
       assert(is<T>() && "Invalid accessor called");
       // Is it PT1/PT2?
       if (::llvm::getPointerUnionTypeNum<PT1, PT2>((T*)0) != -1)
-        return Val.get<InnerUnion>().get<T>();
+        return Val.template get<InnerUnion>().template get<T>();
       
-      return Val.get<T>();
+      return Val.template get<T>();
     }
     
     /// dyn_cast<T>() - If the current value is of the specified pointer type,
@@ -291,8 +292,10 @@ namespace llvm {
     int is() const {
       // Is it PT1/PT2?
       if (::llvm::getPointerUnionTypeNum<PT1, PT2>((T*)0) != -1)
-        return Val.is<InnerUnion1>() && Val.get<InnerUnion1>().is<T>();
-      return Val.is<InnerUnion2>() && Val.get<InnerUnion2>().is<T>();
+        return Val.template is<InnerUnion1>() && 
+               Val.template get<InnerUnion1>().template is<T>();
+      return Val.template is<InnerUnion2>() && 
+             Val.template get<InnerUnion2>().template is<T>();
     }
     
     /// get<T>() - Return the value of the specified pointer type. If the
@@ -302,9 +305,9 @@ namespace llvm {
       assert(is<T>() && "Invalid accessor called");
       // Is it PT1/PT2?
       if (::llvm::getPointerUnionTypeNum<PT1, PT2>((T*)0) != -1)
-        return Val.get<InnerUnion1>().get<T>();
+        return Val.template get<InnerUnion1>().template get<T>();
       
-      return Val.get<InnerUnion2>().get<T>();
+      return Val.template get<InnerUnion2>().template get<T>();
     }
     
     /// dyn_cast<T>() - If the current value is of the specified pointer type,
diff --git a/libclamav/c++/llvm/include/llvm/ADT/PriorityQueue.h b/libclamav/c++/llvm/include/llvm/ADT/PriorityQueue.h
index a8809dc..bf8a687 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/PriorityQueue.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/PriorityQueue.h
@@ -14,6 +14,7 @@
 #ifndef LLVM_ADT_PRIORITY_QUEUE_H
 #define LLVM_ADT_PRIORITY_QUEUE_H
 
+#include <algorithm>
 #include <queue>
 
 namespace llvm {
diff --git a/libclamav/c++/llvm/include/llvm/ADT/SCCIterator.h b/libclamav/c++/llvm/include/llvm/ADT/SCCIterator.h
index db985b5..3afcabd 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/SCCIterator.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/SCCIterator.h
@@ -136,8 +136,8 @@ public:
   typedef scc_iterator<GraphT, GT> _Self;
 
   // Provide static "constructors"...
-  static inline _Self begin(GraphT& G) { return _Self(GT::getEntryNode(G)); }
-  static inline _Self end  (GraphT& G) { return _Self(); }
+  static inline _Self begin(const GraphT& G) { return _Self(GT::getEntryNode(G)); }
+  static inline _Self end  (const GraphT& G) { return _Self(); }
 
   // Direct loop termination test (I.fini() is more efficient than I == end())
   inline bool fini() const {
@@ -186,15 +186,25 @@ public:
 
 // Global constructor for the SCC iterator.
 template <class T>
-scc_iterator<T> scc_begin(T G) {
+scc_iterator<T> scc_begin(const T& G) {
   return scc_iterator<T>::begin(G);
 }
 
 template <class T>
-scc_iterator<T> scc_end(T G) {
+scc_iterator<T> scc_end(const T& G) {
   return scc_iterator<T>::end(G);
 }
 
+template <class T>
+scc_iterator<Inverse<T> > scc_begin(const Inverse<T>& G) {
+       return scc_iterator<Inverse<T> >::begin(G);
+}
+
+template <class T>
+scc_iterator<Inverse<T> > scc_end(const Inverse<T>& G) {
+       return scc_iterator<Inverse<T> >::end(G);
+}
+
 } // End llvm namespace
 
 #endif
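
The overloads added above let scc_begin/scc_end deduce Inverse<T>
directly; a sketch walking the SCCs of a reversed function CFG
(countReverseSCCs is illustrative and assumes the
GraphTraits<Inverse<Function*> > specialization from
llvm/Support/CFG.h):

    #include "llvm/ADT/SCCIterator.h"
    #include "llvm/Function.h"
    #include "llvm/Support/CFG.h"
    using namespace llvm;

    static unsigned countReverseSCCs(Function *F) {
      unsigned N = 0;
      for (scc_iterator<Inverse<Function*> > I =
             scc_begin(Inverse<Function*>(F)); !I.fini(); ++I)
        ++N; // *I is the vector of blocks forming one SCC
      return N;
    }
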
diff --git a/libclamav/c++/llvm/include/llvm/ADT/STLExtras.h b/libclamav/c++/llvm/include/llvm/ADT/STLExtras.h
index 6f47692..32cf459 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/STLExtras.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/STLExtras.h
@@ -18,6 +18,7 @@
 #define LLVM_ADT_STLEXTRAS_H
 
 #include <cstddef> // for std::size_t
+#include <cstdlib> // for qsort
 #include <functional>
 #include <iterator>
 #include <utility> // for std::pair
@@ -270,6 +271,14 @@ static inline void array_pod_sort(IteratorTy Start, IteratorTy End) {
         get_array_pod_sort_comparator(*Start));
 }
 
+template<class IteratorTy>
+static inline void array_pod_sort(IteratorTy Start, IteratorTy End,
+                                  int (*Compare)(const void*, const void*)) {
+  // Don't dereference start iterator of empty sequence.
+  if (Start == End) return;
+  qsort(&*Start, End-Start, sizeof(*Start), Compare);
+}
+  
 } // End llvm namespace
 
 #endif
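
A sketch of the new three-argument overload, which forwards to qsort
with a caller-supplied C-style comparator (CompareInts and demo are
illustrative):

    #include "llvm/ADT/STLExtras.h"

    static int CompareInts(const void *A, const void *B) {
      int L = *(const int*)A, R = *(const int*)B;
      return L < R ? -1 : L > R ? 1 : 0;
    }

    void demo() {
      int Vals[] = { 3, 1, 2 };
      llvm::array_pod_sort(Vals, Vals + 3, CompareInts); // Vals == {1,2,3}
    }
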
diff --git a/libclamav/c++/llvm/include/llvm/ADT/SmallPtrSet.h b/libclamav/c++/llvm/include/llvm/ADT/SmallPtrSet.h
index 7d00e9a..c29fc9f 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/SmallPtrSet.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/SmallPtrSet.h
@@ -18,7 +18,7 @@
 #include <cassert>
 #include <cstring>
 #include <iterator>
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/Support/PointerLikeTypeTraits.h"
 
 namespace llvm {
diff --git a/libclamav/c++/llvm/include/llvm/ADT/SmallString.h b/libclamav/c++/llvm/include/llvm/ADT/SmallString.h
index 0354625..05bd8a4 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/SmallString.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/SmallString.h
@@ -38,12 +38,15 @@ public:
   // Extra methods.
   StringRef str() const { return StringRef(this->begin(), this->size()); }
 
+  // Implicit conversion to StringRef.
+  operator StringRef() const { return str(); }
+
   const char *c_str() {
     this->push_back(0);
     this->pop_back();
     return this->data();
   }
-  
+
   // Extra operators.
   const SmallString &operator=(StringRef RHS) {
     this->clear();
diff --git a/libclamav/c++/llvm/include/llvm/ADT/SparseBitVector.h b/libclamav/c++/llvm/include/llvm/ADT/SparseBitVector.h
index b7a6873..6c813ec 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/SparseBitVector.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/SparseBitVector.h
@@ -17,7 +17,7 @@
 
 #include "llvm/ADT/ilist.h"
 #include "llvm/ADT/ilist_node.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/Support/MathExtras.h"
 #include "llvm/Support/raw_ostream.h"
 #include <cassert>
diff --git a/libclamav/c++/llvm/include/llvm/ADT/StringExtras.h b/libclamav/c++/llvm/include/llvm/ADT/StringExtras.h
index 3d1993c..85936c0 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/StringExtras.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/StringExtras.h
@@ -14,8 +14,9 @@
 #ifndef LLVM_ADT_STRINGEXTRAS_H
 #define LLVM_ADT_STRINGEXTRAS_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/ADT/APFloat.h"
+#include "llvm/ADT/StringRef.h"
 #include <cctype>
 #include <cstdio>
 #include <string>
@@ -216,14 +217,18 @@ void SplitString(const std::string &Source,
                  std::vector<std::string> &OutFragments,
                  const char *Delimiters = " \t\n\v\f\r");
 
-/// UnescapeString - Modify the argument string, turning two character sequences
-/// like '\\' 'n' into '\n'.  This handles: \e \a \b \f \n \r \t \v \' \\ and
-/// \num (where num is a 1-3 byte octal value).
-void UnescapeString(std::string &Str);
-
-/// EscapeString - Modify the argument string, turning '\\' and anything that
-/// doesn't satisfy std::isprint into an escape sequence.
-void EscapeString(std::string &Str);
+/// HashString - Hash function for strings.
+///
+/// This is the Bernstein hash function.
+//
+// FIXME: Investigate whether a modified Bernstein hash function performs
+// better: http://eternallyconfuzzled.com/tuts/algorithms/jsw_tut_hashing.aspx
+//   X*33+c -> X*33^c
+static inline unsigned HashString(StringRef Str, unsigned Result = 0) {
+  for (unsigned i = 0, e = Str.size(); i != e; ++i)
+    Result = Result * 33 + Str[i];
+  return Result;
+}
 
 } // End llvm namespace
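
A sketch of the replacement hash: it is the classic h = h*33 + c loop,
and the Result parameter lets callers seed or chain it (demo is
illustrative):

    #include "llvm/ADT/StringExtras.h"

    void demo() {
      unsigned H1 = llvm::HashString("foo");
      unsigned H2 = llvm::HashString("bar", H1); // == HashString("foobar")
      (void)H2;
    }
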
 
diff --git a/libclamav/c++/llvm/include/llvm/ADT/StringMap.h b/libclamav/c++/llvm/include/llvm/ADT/StringMap.h
index 73fd635..86e8546 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/StringMap.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/StringMap.h
@@ -96,12 +96,12 @@ protected:
   /// specified bucket will be non-null.  Otherwise, it will be null.  In either
   /// case, the FullHashValue field of the bucket will be set to the hash value
   /// of the string.
-  unsigned LookupBucketFor(const StringRef &Key);
+  unsigned LookupBucketFor(StringRef Key);
 
   /// FindKey - Look up the bucket that contains the specified key. If it exists
   /// in the map, return the bucket number of the key.  Otherwise return -1.
   /// This does not modify the map.
-  int FindKey(const StringRef &Key) const;
+  int FindKey(StringRef Key) const;
 
   /// RemoveKey - Remove the specified StringMapEntry from the table, but do not
   /// delete it.  This aborts if the value isn't in the table.
@@ -109,7 +109,7 @@ protected:
 
   /// RemoveKey - Remove the StringMapEntry for the specified key from the
   /// table, returning it.  If the key is not in the table, this returns null.
-  StringMapEntryBase *RemoveKey(const StringRef &Key);
+  StringMapEntryBase *RemoveKey(StringRef Key);
 private:
   void init(unsigned Size);
 public:
@@ -282,13 +282,13 @@ public:
     return const_iterator(TheTable+NumBuckets, true);
   }
 
-  iterator find(const StringRef &Key) {
+  iterator find(StringRef Key) {
     int Bucket = FindKey(Key);
     if (Bucket == -1) return end();
     return iterator(TheTable+Bucket);
   }
 
-  const_iterator find(const StringRef &Key) const {
+  const_iterator find(StringRef Key) const {
     int Bucket = FindKey(Key);
     if (Bucket == -1) return end();
     return const_iterator(TheTable+Bucket);
@@ -296,18 +296,18 @@ public:
 
    /// lookup - Return the entry for the specified key, or a default
   /// constructed value if no such entry exists.
-  ValueTy lookup(const StringRef &Key) const {
+  ValueTy lookup(StringRef Key) const {
     const_iterator it = find(Key);
     if (it != end())
       return it->second;
     return ValueTy();
   }
 
-  ValueTy& operator[](const StringRef &Key) {
+  ValueTy& operator[](StringRef Key) {
     return GetOrCreateValue(Key).getValue();
   }
 
-  size_type count(const StringRef &Key) const {
+  size_type count(StringRef Key) const {
     return find(Key) == end() ? 0 : 1;
   }
 
@@ -350,7 +350,7 @@ public:
   /// exists, return it.  Otherwise, default construct a value, insert it, and
   /// return.
   template <typename InitTy>
-  StringMapEntry<ValueTy> &GetOrCreateValue(const StringRef &Key,
+  StringMapEntry<ValueTy> &GetOrCreateValue(StringRef Key,
                                             InitTy Val) {
     unsigned BucketNo = LookupBucketFor(Key);
     ItemBucket &Bucket = TheTable[BucketNo];
@@ -373,7 +373,7 @@ public:
     return *NewItem;
   }
 
-  StringMapEntry<ValueTy> &GetOrCreateValue(const StringRef &Key) {
+  StringMapEntry<ValueTy> &GetOrCreateValue(StringRef Key) {
     return GetOrCreateValue(Key, ValueTy());
   }
 
@@ -401,7 +401,7 @@ public:
     V.Destroy(Allocator);
   }
 
-  bool erase(const StringRef &Key) {
+  bool erase(StringRef Key) {
     iterator I = find(Key);
     if (I == end()) return false;
     erase(I);
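
With the signatures above taking StringRef by value, a literal, a
std::string, or a StringRef all work directly as keys; a minimal
sketch (demo is illustrative):

    #include "llvm/ADT/StringMap.h"
    #include <string>

    void demo() {
      llvm::StringMap<int> M;
      M["red"] = 1;       // operator[](StringRef)
      std::string K("red");
      if (M.count(K))     // std::string converts to StringRef
        M.erase(K);       // erase(StringRef)
    }
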
diff --git a/libclamav/c++/llvm/include/llvm/ADT/StringRef.h b/libclamav/c++/llvm/include/llvm/ADT/StringRef.h
index aa7d577..f299f5f 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/StringRef.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/StringRef.h
@@ -10,12 +10,14 @@
 #ifndef LLVM_ADT_STRINGREF_H
 #define LLVM_ADT_STRINGREF_H
 
-#include <algorithm>
 #include <cassert>
 #include <cstring>
+#include <utility>
 #include <string>
 
 namespace llvm {
+  template<typename T>
+  class SmallVectorImpl;
 
   /// StringRef - Represent a constant reference to a string, i.e. a character
   /// array and a length, which need not be null terminated.
@@ -29,7 +31,7 @@ namespace llvm {
     typedef const char *iterator;
     static const size_t npos = ~size_t(0);
     typedef size_t size_type;
-    
+
   private:
     /// The start of the string, in an external buffer.
     const char *Data;
@@ -37,6 +39,19 @@ namespace llvm {
     /// The length of the string.
     size_t Length;
 
+    // Workaround for PR5482: nearly all gcc 4.x releases miscompile StringRef
+    // and std::min(). Changing the arg of min to be an integer, instead of a
+    // reference to an integer, works around this bug.
+    size_t min(size_t a, size_t b) const
+    {
+      return a < b ? a : b;
+    }
+
+    size_t max(size_t a, size_t b) const
+    {
+      return a > b ? a : b;
+    }
+
   public:
     /// @name Constructors
     /// @{
@@ -45,16 +60,16 @@ namespace llvm {
     /*implicit*/ StringRef() : Data(0), Length(0) {}
 
     /// Construct a string ref from a cstring.
-    /*implicit*/ StringRef(const char *Str) 
-      : Data(Str) { if (Str) Length = ::strlen(Str); else Length = 0; }
- 
+    /*implicit*/ StringRef(const char *Str)
+      : Data(Str), Length(::strlen(Str)) {}
+
     /// Construct a string ref from a pointer and length.
-    /*implicit*/ StringRef(const char *data, unsigned length)
+    /*implicit*/ StringRef(const char *data, size_t length)
       : Data(data), Length(length) {}
 
     /// Construct a string ref from an std::string.
-    /*implicit*/ StringRef(const std::string &Str) 
-      : Data(Str.c_str()), Length(Str.length()) {}
+    /*implicit*/ StringRef(const std::string &Str)
+      : Data(Str.data()), Length(Str.length()) {}
 
     /// @}
     /// @name Iterators
@@ -83,7 +98,7 @@ namespace llvm {
       assert(!empty());
       return Data[0];
     }
-    
+
     /// back - Get the last character in the string.
     char back() const {
       assert(!empty());
@@ -92,16 +107,21 @@ namespace llvm {
 
     /// equals - Check for string equality, this is more efficient than
     /// compare() when the relative ordering of inequal strings isn't needed.
-    bool equals(const StringRef &RHS) const {
-      return (Length == RHS.Length && 
+    bool equals(StringRef RHS) const {
+      return (Length == RHS.Length &&
               memcmp(Data, RHS.Data, RHS.Length) == 0);
     }
 
+    /// equals_lower - Check for string equality, ignoring case.
+    bool equals_lower(StringRef RHS) const {
+      return Length == RHS.Length && compare_lower(RHS) == 0;
+    }
+
     /// compare - Compare two strings; the result is -1, 0, or 1 if this string
     /// is lexicographically less than, equal to, or greater than the \arg RHS.
-    int compare(const StringRef &RHS) const {
+    int compare(StringRef RHS) const {
       // Check the prefix for a mismatch.
-      if (int Res = memcmp(Data, RHS.Data, std::min(Length, RHS.Length)))
+      if (int Res = memcmp(Data, RHS.Data, min(Length, RHS.Length)))
         return Res < 0 ? -1 : 1;
 
       // Otherwise the prefixes match, so we only need to check the lengths.
@@ -110,6 +130,9 @@ namespace llvm {
       return Length < RHS.Length ? -1 : 1;
     }
 
+    /// compare_lower - Compare two strings, ignoring case.
+    int compare_lower(StringRef RHS) const;
+
     /// str - Get the contents as an std::string.
     std::string str() const { return std::string(Data, Length); }
 
@@ -117,9 +140,9 @@ namespace llvm {
     /// @name Operator Overloads
     /// @{
 
-    char operator[](size_t Index) const { 
+    char operator[](size_t Index) const {
       assert(Index < Length && "Invalid index!");
-      return Data[Index]; 
+      return Data[Index];
     }
 
     /// @}
@@ -135,12 +158,12 @@ namespace llvm {
     /// @{
 
     /// startswith - Check if this string starts with the given \arg Prefix.
-    bool startswith(const StringRef &Prefix) const { 
+    bool startswith(StringRef Prefix) const {
       return substr(0, Prefix.Length).equals(Prefix);
     }
 
     /// endswith - Check if this string ends with the given \arg Suffix.
-    bool endswith(const StringRef &Suffix) const {
+    bool endswith(StringRef Suffix) const {
       return slice(size() - Suffix.Length, size()).equals(Suffix);
     }
 
@@ -152,8 +175,8 @@ namespace llvm {
     ///
     /// \return - The index of the first occurence of \arg C, or npos if not
     /// found.
-    size_t find(char C) const {
-      for (size_t i = 0, e = Length; i != e; ++i)
+    size_t find(char C, size_t From = 0) const {
+      for (size_t i = min(From, Length), e = Length; i != e; ++i)
         if (Data[i] == C)
           return i;
       return npos;
@@ -163,14 +186,14 @@ namespace llvm {
     ///
     /// \return - The index of the first occurence of \arg Str, or npos if not
     /// found.
-    size_t find(const StringRef &Str) const;
-    
+    size_t find(StringRef Str, size_t From = 0) const;
+
     /// rfind - Search for the last character \arg C in the string.
     ///
     /// \return - The index of the last occurence of \arg C, or npos if not
     /// found.
     size_t rfind(char C, size_t From = npos) const {
-      From = std::min(From, Length);
+      From = min(From, Length);
       size_t i = From;
       while (i != 0) {
         --i;
@@ -179,29 +202,37 @@ namespace llvm {
       }
       return npos;
     }
-    
+
     /// rfind - Search for the last string \arg Str in the string.
     ///
     /// \return - The index of the last occurence of \arg Str, or npos if not
     /// found.
-    size_t rfind(const StringRef &Str) const;
-    
-    /// find_first_of - Find the first instance of the specified character or
-    /// return npos if not in string.  Same as find.
-    size_type find_first_of(char C) const { return find(C); }
-    
-    /// find_first_of - Find the first character from the string 'Chars' in the
-    /// current string or return npos if not in string.
-    size_type find_first_of(StringRef Chars) const;
-    
+    size_t rfind(StringRef Str) const;
+
+    /// find_first_of - Find the first character in the string that is \arg C,
+    /// or npos if not found. Same as find.
+    size_type find_first_of(char C, size_t = 0) const { return find(C); }
+
+    /// find_first_of - Find the first character in the string that is in \arg
+    /// Chars, or npos if not found.
+    ///
+    /// Note: O(size() * Chars.size())
+    size_type find_first_of(StringRef Chars, size_t From = 0) const;
+
+    /// find_first_not_of - Find the first character in the string that is not
+    /// \arg C or npos if not found.
+    size_type find_first_not_of(char C, size_t From = 0) const;
+
     /// find_first_not_of - Find the first character in the string that is not
-    /// in the string 'Chars' or return npos if all are in string. Same as find.
-    size_type find_first_not_of(StringRef Chars) const;
-    
+    /// in the string \arg Chars, or npos if not found.
+    ///
+    /// Note: O(size() * Chars.size())
+    size_type find_first_not_of(StringRef Chars, size_t From = 0) const;
+
     /// @}
     /// @name Helpful Algorithms
     /// @{
-    
+
     /// count - Return the number of occurrences of \arg C in the string.
     size_t count(char C) const {
       size_t Count = 0;
@@ -210,11 +241,11 @@ namespace llvm {
           ++Count;
       return Count;
     }
-    
+
     /// count - Return the number of non-overlapped occurrences of \arg Str in
     /// the string.
-    size_t count(const StringRef &Str) const;
-    
+    size_t count(StringRef Str) const;
+
     /// getAsInteger - Parse the current string as an integer of the specified
     /// radix.  If Radix is specified as zero, this does radix autosensing using
     /// extended C rules: 0 is octal, 0x is hex, 0b is binary.
@@ -229,7 +260,7 @@ namespace llvm {
     bool getAsInteger(unsigned Radix, unsigned &Result) const;
 
     // TODO: Provide overloads for int/unsigned that check for overflow.
-    
+
     /// @}
     /// @name Substring Operations
     /// @{
@@ -244,8 +275,8 @@ namespace llvm {
     /// exceeds the number of characters remaining in the string, the string
     /// suffix (starting with \arg Start) will be returned.
     StringRef substr(size_t Start, size_t N = npos) const {
-      Start = std::min(Start, Length);
-      return StringRef(Data + Start, std::min(N, Length - Start));
+      Start = min(Start, Length);
+      return StringRef(Data + Start, min(N, Length - Start));
     }
 
     /// slice - Return a reference to the substring from [Start, End).
@@ -259,8 +290,8 @@ namespace llvm {
     /// number of characters remaining in the string, the string suffix
     /// (starting with \arg Start) will be returned.
     StringRef slice(size_t Start, size_t End) const {
-      Start = std::min(Start, Length);
-      End = std::min(std::max(Start, End), Length);
+      Start = min(Start, Length);
+      End = min(max(Start, End), Length);
       return StringRef(Data + Start, End - Start);
     }
 
@@ -281,6 +312,42 @@ namespace llvm {
       return std::make_pair(slice(0, Idx), slice(Idx+1, npos));
     }
 
+    /// split - Split into two substrings around the first occurrence of a
+    /// separator string.
+    ///
+    /// If \arg Separator is in the string, then the result is a pair (LHS, RHS)
+    /// such that (*this == LHS + Separator + RHS) is true and RHS is
+    /// maximal. If \arg Separator is not in the string, then the result is a
+    /// pair (LHS, RHS) where (*this == LHS) and (RHS == "").
+    ///
+    /// \param Separator - The string to split on.
+    /// \return - The split substrings.
+    std::pair<StringRef, StringRef> split(StringRef Separator) const {
+      size_t Idx = find(Separator);
+      if (Idx == npos)
+        return std::make_pair(*this, StringRef());
+      return std::make_pair(slice(0, Idx), slice(Idx + Separator.size(), npos));
+    }
+
+    /// split - Split into substrings around the occurrences of a separator
+    /// string.
+    ///
+    /// Each substring is stored in \arg A. If \arg MaxSplit is >= 0, at most
+    /// \arg MaxSplit splits are done and consequently <= \arg MaxSplit
+    /// elements are added to A.
+    /// If \arg KeepEmpty is false, empty strings are not added to \arg A. They
+    /// still count when considering \arg MaxSplit.
+    /// A useful invariant is that
+    /// Separator.join(A) == *this if MaxSplit == -1 and KeepEmpty == true
+    ///
+    /// \param A - Where to put the substrings.
+    /// \param Separator - The string to split on.
+    /// \param MaxSplit - The maximum number of times the string is split.
+    /// \param KeepEmpty - True if empty substrings should be added.
+    void split(SmallVectorImpl<StringRef> &A,
+               StringRef Separator, int MaxSplit = -1,
+               bool KeepEmpty = true) const;
+
     /// rsplit - Split into two substrings around the last occurence of a
     /// separator character.
     ///
@@ -304,28 +371,28 @@ namespace llvm {
   /// @name StringRef Comparison Operators
   /// @{
 
-  inline bool operator==(const StringRef &LHS, const StringRef &RHS) {
+  inline bool operator==(StringRef LHS, StringRef RHS) {
     return LHS.equals(RHS);
   }
 
-  inline bool operator!=(const StringRef &LHS, const StringRef &RHS) { 
+  inline bool operator!=(StringRef LHS, StringRef RHS) {
     return !(LHS == RHS);
   }
-  
-  inline bool operator<(const StringRef &LHS, const StringRef &RHS) {
-    return LHS.compare(RHS) == -1; 
+
+  inline bool operator<(StringRef LHS, StringRef RHS) {
+    return LHS.compare(RHS) == -1;
   }
 
-  inline bool operator<=(const StringRef &LHS, const StringRef &RHS) {
-    return LHS.compare(RHS) != 1; 
+  inline bool operator<=(StringRef LHS, StringRef RHS) {
+    return LHS.compare(RHS) != 1;
   }
 
-  inline bool operator>(const StringRef &LHS, const StringRef &RHS) {
-    return LHS.compare(RHS) == 1; 
+  inline bool operator>(StringRef LHS, StringRef RHS) {
+    return LHS.compare(RHS) == 1;
   }
 
-  inline bool operator>=(const StringRef &LHS, const StringRef &RHS) {
-    return LHS.compare(RHS) != -1; 
+  inline bool operator>=(StringRef LHS, StringRef RHS) {
+    return LHS.compare(RHS) != -1;
   }
 
   /// @}
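
A sketch of the new vector-producing split(); with the defaults
MaxSplit == -1 and KeepEmpty == true, joining the pieces with the
separator reconstructs the original string (demo is illustrative):

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/ADT/StringRef.h"

    void demo() {
      llvm::SmallVector<llvm::StringRef, 8> Parts;
      llvm::StringRef("a,b,,c").split(Parts, ",");
      // Parts now holds "a", "b", "" and "c".
    }
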
diff --git a/libclamav/c++/llvm/include/llvm/ADT/StringSwitch.h b/libclamav/c++/llvm/include/llvm/ADT/StringSwitch.h
new file mode 100644
index 0000000..6562d57
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/ADT/StringSwitch.h
@@ -0,0 +1,110 @@
+//===--- StringSwitch.h - Switch-on-literal-string Construct --------------===/
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//===----------------------------------------------------------------------===/
+//
+//  This file implements the StringSwitch template, which mimics a switch()
+//  statement whose cases are string literals.
+//
+//===----------------------------------------------------------------------===/
+#ifndef LLVM_ADT_STRINGSWITCH_H
+#define LLVM_ADT_STRINGSWITCH_H
+
+#include "llvm/ADT/StringRef.h"
+#include <cassert>
+#include <cstring>
+
+namespace llvm {
+  
+/// \brief A switch()-like statement whose cases are string literals.
+///
+/// The StringSwitch class is a simple form of a switch() statement that
+/// determines whether the given string matches one of the given string
+/// literals. The template type parameter \p T is the type of the value that
+/// will be returned from the string-switch expression. For example,
+/// the following code switches on the name of a color in \c argv[i]:
+///
+/// \code
+/// Color color = StringSwitch<Color>(argv[i])
+///   .Case("red", Red)
+///   .Case("orange", Orange)
+///   .Case("yellow", Yellow)
+///   .Case("green", Green)
+///   .Case("blue", Blue)
+///   .Case("indigo", Indigo)
+///   .Case("violet", Violet)
+///   .Default(UnknownColor);
+/// \endcode
+template<typename T>
+class StringSwitch {
+  /// \brief The string we are matching.
+  StringRef Str;
+  
+  /// \brief The result of this switch statement, once known.
+  T Result;
+  
+  /// \brief Set true when the result of this switch is already known; in this
+  /// case, Result is valid.
+  bool ResultKnown;
+  
+public:
+  explicit StringSwitch(StringRef Str) 
+  : Str(Str), ResultKnown(false) { }
+  
+  template<unsigned N>
+  StringSwitch& Case(const char (&S)[N], const T& Value) {
+    if (!ResultKnown && N-1 == Str.size() && 
+        (std::memcmp(S, Str.data(), N-1) == 0)) {
+      Result = Value;
+      ResultKnown = true;
+    }
+    
+    return *this;
+  }
+  
+  template<unsigned N0, unsigned N1>
+  StringSwitch& Cases(const char (&S0)[N0], const char (&S1)[N1],
+                      const T& Value) {
+    return Case(S0, Value).Case(S1, Value);
+  }
+  
+  template<unsigned N0, unsigned N1, unsigned N2>
+  StringSwitch& Cases(const char (&S0)[N0], const char (&S1)[N1],
+                      const char (&S2)[N2], const T& Value) {
+    return Case(S0, Value).Case(S1, Value).Case(S2, Value);
+  }
+  
+  template<unsigned N0, unsigned N1, unsigned N2, unsigned N3>
+  StringSwitch& Cases(const char (&S0)[N0], const char (&S1)[N1],
+                      const char (&S2)[N2], const char (&S3)[N3],
+                      const T& Value) {
+    return Case(S0, Value).Case(S1, Value).Case(S2, Value).Case(S3, Value);
+  }
+
+  template<unsigned N0, unsigned N1, unsigned N2, unsigned N3, unsigned N4>
+  StringSwitch& Cases(const char (&S0)[N0], const char (&S1)[N1],
+                      const char (&S2)[N2], const char (&S3)[N3],
+                       const char (&S4)[N4], const T& Value) {
+    return Case(S0, Value).Case(S1, Value).Case(S2, Value).Case(S3, Value)
+      .Case(S4, Value);
+  }
+  
+  T Default(const T& Value) {
+    if (ResultKnown)
+      return Result;
+    
+    return Value;
+  }
+  
+  operator T() {
+    assert(ResultKnown && "Fell off the end of a string-switch");
+    return Result;
+  }
+};
+
+} // end namespace llvm
+
+#endif // LLVM_ADT_STRINGSWITCH_H
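
Beyond the color example in the header comment, the Cases() overloads
fold several literals onto one value; a hypothetical sketch (optLevel
and the option strings are illustrative):

    #include "llvm/ADT/StringSwitch.h"

    static unsigned optLevel(llvm::StringRef Opt) {
      return llvm::StringSwitch<unsigned>(Opt)
        .Cases("-O0", "-O1", 1u)
        .Cases("-O2", "-O3", 2u)
        .Default(0u);
    }
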
diff --git a/libclamav/c++/llvm/include/llvm/ADT/Trie.h b/libclamav/c++/llvm/include/llvm/ADT/Trie.h
index cf92862..b415990 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/Trie.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/Trie.h
@@ -18,6 +18,7 @@
 #include "llvm/ADT/GraphTraits.h"
 #include "llvm/Support/DOTGraphTraits.h"
 
+#include <cassert>
 #include <vector>
 
 namespace llvm {
diff --git a/libclamav/c++/llvm/include/llvm/ADT/Triple.h b/libclamav/c++/llvm/include/llvm/ADT/Triple.h
index 89736bc..fe39324 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/Triple.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/Triple.h
@@ -64,7 +64,7 @@ public:
     msp430,  // MSP430: msp430
     pic16,   // PIC16: pic16
     ppc,     // PPC: powerpc
-    ppc64,   // PPC64: powerpc64
+    ppc64,   // PPC64: powerpc64, ppu
     sparc,   // Sparc: sparc
     systemz, // SystemZ: s390x
     tce,     // TCE (http://tce.cs.tut.fi/): tce
@@ -90,12 +90,15 @@ public:
     DragonFly,
     FreeBSD,
     Linux,
+    Lv2,        // PS3
     MinGW32,
     MinGW64,
     NetBSD,
     OpenBSD,
+    Psp,
     Solaris,
-    Win32
+    Win32,
+    Haiku
   };
   
 private:
@@ -159,6 +162,8 @@ public:
   /// @name Direct Component Access
   /// @{
 
+  const std::string &str() const { return Data; }
+
   const std::string &getTriple() const { return Data; }
 
   /// getArchName - Get the architecture (first) component of the
@@ -217,23 +222,27 @@ public:
 
   /// setArchName - Set the architecture (first) component of the
   /// triple by name.
-  void setArchName(const StringRef &Str);
+  void setArchName(StringRef Str);
 
   /// setVendorName - Set the vendor (second) component of the triple
   /// by name.
-  void setVendorName(const StringRef &Str);
+  void setVendorName(StringRef Str);
 
   /// setOSName - Set the operating system (third) component of the
   /// triple by name.
-  void setOSName(const StringRef &Str);
+  void setOSName(StringRef Str);
 
   /// setEnvironmentName - Set the optional environment (fourth)
   /// component of the triple by name.
-  void setEnvironmentName(const StringRef &Str);
+  void setEnvironmentName(StringRef Str);
 
   /// setOSAndEnvironmentName - Set the operating system and optional
   /// environment components with a single string.
-  void setOSAndEnvironmentName(const StringRef &Str);
+  void setOSAndEnvironmentName(StringRef Str);
+
+  /// getArchNameForAssembler - Get an architecture name that is understood by the
+  /// target assembler.
+  const char *getArchNameForAssembler();
 
   /// @}
   /// @name Static helpers for IDs.
@@ -264,12 +273,12 @@ public:
 
   /// getArchTypeForLLVMName - The canonical type for the given LLVM
   /// architecture name (e.g., "x86").
-  static ArchType getArchTypeForLLVMName(const StringRef &Str);
+  static ArchType getArchTypeForLLVMName(StringRef Str);
 
   /// getArchTypeForDarwinArchName - Get the architecture type for a "Darwin"
   /// architecture name, for example as accepted by "gcc -arch" (see also
   /// arch(3)).
-  static ArchType getArchTypeForDarwinArchName(const StringRef &Str);
+  static ArchType getArchTypeForDarwinArchName(StringRef Str);
 
   /// @}
 };
diff --git a/libclamav/c++/llvm/include/llvm/ADT/Twine.h b/libclamav/c++/llvm/include/llvm/ADT/Twine.h
index 88fde0a..ca0be53 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/Twine.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/Twine.h
@@ -11,7 +11,7 @@
 #define LLVM_ADT_TWINE_H
 
 #include "llvm/ADT/StringRef.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <cassert>
 #include <string>
 
@@ -133,9 +133,9 @@ namespace llvm {
     /// Null or Empty kinds.
     const void *RHS;
     /// LHSKind - The NodeKind of the left hand side, \see getLHSKind().
-    NodeKind LHSKind : 8;
+    unsigned char LHSKind;
     /// RHSKind - The NodeKind of the left hand side, \see getLHSKind().
-    NodeKind RHSKind : 8;
+    unsigned char RHSKind;
 
   private:
     /// Construct a nullary twine; the kind must be NullKind or EmptyKind.
@@ -209,10 +209,10 @@ namespace llvm {
     }
 
     /// getLHSKind - Get the NodeKind of the left-hand side.
-    NodeKind getLHSKind() const { return LHSKind; }
+    NodeKind getLHSKind() const { return (NodeKind) LHSKind; }
 
     /// getRHSKind - Get the NodeKind of the left-hand side.
-    NodeKind getRHSKind() const { return RHSKind; }
+    NodeKind getRHSKind() const { return (NodeKind) RHSKind; }
 
     /// printOneChild - Print one child from a twine.
     void printOneChild(raw_ostream &OS, const void *Ptr, NodeKind Kind) const;
diff --git a/libclamav/c++/llvm/include/llvm/ADT/ValueMap.h b/libclamav/c++/llvm/include/llvm/ADT/ValueMap.h
new file mode 100644
index 0000000..b043c38
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/ADT/ValueMap.h
@@ -0,0 +1,375 @@
+//===- llvm/ADT/ValueMap.h - Safe map from Values to data -------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file defines the ValueMap class.  ValueMap maps Value* or any subclass
+// to an arbitrary other type.  It provides the DenseMap interface but updates
+// itself to remain safe when keys are RAUWed or deleted.  By default, when a
+// key is RAUWed from V1 to V2, the old mapping V1->target is removed, and a new
+// mapping V2->target is added.  If V2 already existed, its old target is
+// overwritten.  When a key is deleted, its mapping is removed.
+//
+// You can override a ValueMap's Config parameter to control exactly what
+// happens on RAUW and destruction and to get called back on each event.  It's
+// legal to call back into the ValueMap from a Config's callbacks.  Config
+// parameters should inherit from ValueMapConfig<KeyT> to get default
+// implementations of all the methods ValueMap uses.  See ValueMapConfig for
+// documentation of the functions you can override.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_ADT_VALUEMAP_H
+#define LLVM_ADT_VALUEMAP_H
+
+#include "llvm/ADT/DenseMap.h"
+#include "llvm/Support/ValueHandle.h"
+#include "llvm/Support/type_traits.h"
+#include "llvm/System/Mutex.h"
+
+#include <iterator>
+
+namespace llvm {
+
+template<typename KeyT, typename ValueT, typename Config, typename ValueInfoT>
+class ValueMapCallbackVH;
+
+template<typename DenseMapT, typename KeyT>
+class ValueMapIterator;
+template<typename DenseMapT, typename KeyT>
+class ValueMapConstIterator;
+
+/// This class defines the default behavior for configurable aspects of
+/// ValueMap<>.  User Configs should inherit from this class to be as compatible
+/// as possible with future versions of ValueMap.
+template<typename KeyT>
+struct ValueMapConfig {
+  /// If FollowRAUW is true, the ValueMap will update mappings on RAUW. If it's
+  /// false, the ValueMap will leave the original mapping in place.
+  enum { FollowRAUW = true };
+
+  // All methods will be called with a first argument of type ExtraData.  The
+  // default implementations in this class take a templated first argument so
+  // that users' subclasses can use any type they want without having to
+  // override all the defaults.
+  struct ExtraData {};
+
+  template<typename ExtraDataT>
+  static void onRAUW(const ExtraDataT &Data, KeyT Old, KeyT New) {}
+  template<typename ExtraDataT>
+  static void onDelete(const ExtraDataT &Data, KeyT Old) {}
+
+  /// Returns a mutex that should be acquired around any changes to the map.
+  /// This is only acquired from the CallbackVH (and held around calls to onRAUW
+  /// and onDelete) and not inside other ValueMap methods.  NULL means that no
+  /// mutex is necessary.
+  template<typename ExtraDataT>
+  static sys::Mutex *getMutex(const ExtraDataT &Data) { return NULL; }
+};
+
+/// See the file comment.
+template<typename KeyT, typename ValueT, typename Config = ValueMapConfig<KeyT>,
+         typename ValueInfoT = DenseMapInfo<ValueT> >
+class ValueMap {
+  friend class ValueMapCallbackVH<KeyT, ValueT, Config, ValueInfoT>;
+  typedef ValueMapCallbackVH<KeyT, ValueT, Config, ValueInfoT> ValueMapCVH;
+  typedef DenseMap<ValueMapCVH, ValueT, DenseMapInfo<ValueMapCVH>,
+                   ValueInfoT> MapT;
+  typedef typename Config::ExtraData ExtraData;
+  MapT Map;
+  ExtraData Data;
+public:
+  typedef KeyT key_type;
+  typedef ValueT mapped_type;
+  typedef std::pair<KeyT, ValueT> value_type;
+
+  ValueMap(const ValueMap& Other) : Map(Other.Map), Data(Other.Data) {}
+
+  explicit ValueMap(unsigned NumInitBuckets = 64)
+    : Map(NumInitBuckets), Data() {}
+  explicit ValueMap(const ExtraData &Data, unsigned NumInitBuckets = 64)
+    : Map(NumInitBuckets), Data(Data) {}
+
+  ~ValueMap() {}
+
+  typedef ValueMapIterator<MapT, KeyT> iterator;
+  typedef ValueMapConstIterator<MapT, KeyT> const_iterator;
+  inline iterator begin() { return iterator(Map.begin()); }
+  inline iterator end() { return iterator(Map.end()); }
+  inline const_iterator begin() const { return const_iterator(Map.begin()); }
+  inline const_iterator end() const { return const_iterator(Map.end()); }
+
+  bool empty() const { return Map.empty(); }
+  unsigned size() const { return Map.size(); }
+
+  /// Grow the map so that it has at least Size buckets. Does not shrink
+  void resize(size_t Size) { Map.resize(Size); }
+
+  void clear() { Map.clear(); }
+
+  /// count - Return true if the specified key is in the map.
+  bool count(const KeyT &Val) const {
+    return Map.count(Wrap(Val));
+  }
+
+  iterator find(const KeyT &Val) {
+    return iterator(Map.find(Wrap(Val)));
+  }
+  const_iterator find(const KeyT &Val) const {
+    return const_iterator(Map.find(Wrap(Val)));
+  }
+
+  /// lookup - Return the entry for the specified key, or a default
+  /// constructed value if no such entry exists.
+  ValueT lookup(const KeyT &Val) const {
+    return Map.lookup(Wrap(Val));
+  }
+
+  // Inserts key,value pair into the map if the key isn't already in the map.
+  // If the key is already in the map, it returns false and doesn't update the
+  // value.
+  std::pair<iterator, bool> insert(const std::pair<KeyT, ValueT> &KV) {
+    std::pair<typename MapT::iterator, bool> map_result=
+      Map.insert(std::make_pair(Wrap(KV.first), KV.second));
+    return std::make_pair(iterator(map_result.first), map_result.second);
+  }
+
+  /// insert - Range insertion of pairs.
+  template<typename InputIt>
+  void insert(InputIt I, InputIt E) {
+    for (; I != E; ++I)
+      insert(*I);
+  }
+
+
+  bool erase(const KeyT &Val) {
+    return Map.erase(Wrap(Val));
+  }
+  bool erase(iterator I) {
+    return Map.erase(I.base());
+  }
+
+  value_type& FindAndConstruct(const KeyT &Key) {
+    return Map.FindAndConstruct(Wrap(Key));
+  }
+
+  ValueT &operator[](const KeyT &Key) {
+    return Map[Wrap(Key)];
+  }
+
+  ValueMap& operator=(const ValueMap& Other) {
+    Map = Other.Map;
+    Data = Other.Data;
+    return *this;
+  }
+
+  /// isPointerIntoBucketsArray - Return true if the specified pointer points
+  /// somewhere into the ValueMap's array of buckets (i.e. either to a key or
+  /// value in the ValueMap).
+  bool isPointerIntoBucketsArray(const void *Ptr) const {
+    return Map.isPointerIntoBucketsArray(Ptr);
+  }
+
+  /// getPointerIntoBucketsArray() - Return an opaque pointer into the buckets
+  /// array.  In conjunction with the previous method, this can be used to
+  /// determine whether an insertion caused the ValueMap to reallocate.
+  const void *getPointerIntoBucketsArray() const {
+    return Map.getPointerIntoBucketsArray();
+  }
+
+private:
+  // Takes a key being looked up in the map and wraps it into a
+  // ValueMapCallbackVH, the actual key type of the map.  We use a helper
+  // function because ValueMapCVH is constructed with a second parameter.
+  ValueMapCVH Wrap(KeyT key) const {
+    // The only way the resulting CallbackVH could try to modify *this (making
+    // the const_cast incorrect) is if it gets inserted into the map.  But then
+    // this function must have been called from a non-const method, making the
+    // const_cast ok.
+    return ValueMapCVH(key, const_cast<ValueMap*>(this));
+  }
+};
+
+// This CallbackVH updates its ValueMap when the contained Value changes,
+// according to the user's preferences expressed through the Config object.
+template<typename KeyT, typename ValueT, typename Config, typename ValueInfoT>
+class ValueMapCallbackVH : public CallbackVH {
+  friend class ValueMap<KeyT, ValueT, Config, ValueInfoT>;
+  friend struct DenseMapInfo<ValueMapCallbackVH>;
+  typedef ValueMap<KeyT, ValueT, Config, ValueInfoT> ValueMapT;
+  typedef typename llvm::remove_pointer<KeyT>::type KeySansPointerT;
+
+  ValueMapT *Map;
+
+  ValueMapCallbackVH(KeyT Key, ValueMapT *Map)
+      : CallbackVH(const_cast<Value*>(static_cast<const Value*>(Key))),
+        Map(Map) {}
+
+public:
+  KeyT Unwrap() const { return cast_or_null<KeySansPointerT>(getValPtr()); }
+
+  virtual void deleted() {
+    // Make a copy that won't get changed even when *this is destroyed.
+    ValueMapCallbackVH Copy(*this);
+    sys::Mutex *M = Config::getMutex(Copy.Map->Data);
+    if (M)
+      M->acquire();
+    Config::onDelete(Copy.Map->Data, Copy.Unwrap());  // May destroy *this.
+    Copy.Map->Map.erase(Copy);  // Definitely destroys *this.
+    if (M)
+      M->release();
+  }
+  virtual void allUsesReplacedWith(Value *new_key) {
+    assert(isa<KeySansPointerT>(new_key) &&
+           "Invalid RAUW on key of ValueMap<>");
+    // Make a copy that won't get changed even when *this is destroyed.
+    ValueMapCallbackVH Copy(*this);
+    sys::Mutex *M = Config::getMutex(Copy.Map->Data);
+    if (M)
+      M->acquire();
+
+    KeyT typed_new_key = cast<KeySansPointerT>(new_key);
+    // Can destroy *this:
+    Config::onRAUW(Copy.Map->Data, Copy.Unwrap(), typed_new_key);
+    if (Config::FollowRAUW) {
+      typename ValueMapT::MapT::iterator I = Copy.Map->Map.find(Copy);
+      // I could == Copy.Map->Map.end() if the onRAUW callback already
+      // removed the old mapping.
+      if (I != Copy.Map->Map.end()) {
+        ValueT Target(I->second);
+        Copy.Map->Map.erase(I);  // Definitely destroys *this.
+        Copy.Map->insert(std::make_pair(typed_new_key, Target));
+      }
+    }
+    if (M)
+      M->release();
+  }
+};
+
+template<typename KeyT, typename ValueT, typename Config, typename ValueInfoT>
+struct DenseMapInfo<ValueMapCallbackVH<KeyT, ValueT, Config, ValueInfoT> > {
+  typedef ValueMapCallbackVH<KeyT, ValueT, Config, ValueInfoT> VH;
+  typedef DenseMapInfo<KeyT> PointerInfo;
+
+  static inline VH getEmptyKey() {
+    return VH(PointerInfo::getEmptyKey(), NULL);
+  }
+  static inline VH getTombstoneKey() {
+    return VH(PointerInfo::getTombstoneKey(), NULL);
+  }
+  static unsigned getHashValue(const VH &Val) {
+    return PointerInfo::getHashValue(Val.Unwrap());
+  }
+  static bool isEqual(const VH &LHS, const VH &RHS) {
+    return LHS == RHS;
+  }
+  static bool isPod() { return false; }
+};
+
+
+template<typename DenseMapT, typename KeyT>
+class ValueMapIterator :
+    public std::iterator<std::forward_iterator_tag,
+                         std::pair<KeyT, typename DenseMapT::mapped_type>,
+                         ptrdiff_t> {
+  typedef typename DenseMapT::iterator BaseT;
+  typedef typename DenseMapT::mapped_type ValueT;
+  BaseT I;
+public:
+  ValueMapIterator() : I() {}
+
+  ValueMapIterator(BaseT I) : I(I) {}
+
+  BaseT base() const { return I; }
+
+  struct ValueTypeProxy {
+    const KeyT first;
+    ValueT& second;
+    ValueTypeProxy *operator->() { return this; }
+    operator std::pair<KeyT, ValueT>() const {
+      return std::make_pair(first, second);
+    }
+  };
+
+  ValueTypeProxy operator*() const {
+    ValueTypeProxy Result = {I->first.Unwrap(), I->second};
+    return Result;
+  }
+
+  ValueTypeProxy operator->() const {
+    return operator*();
+  }
+
+  bool operator==(const ValueMapIterator &RHS) const {
+    return I == RHS.I;
+  }
+  bool operator!=(const ValueMapIterator &RHS) const {
+    return I != RHS.I;
+  }
+
+  inline ValueMapIterator& operator++() {  // Preincrement
+    ++I;
+    return *this;
+  }
+  ValueMapIterator operator++(int) {  // Postincrement
+    ValueMapIterator tmp = *this; ++*this; return tmp;
+  }
+};
+
+template<typename DenseMapT, typename KeyT>
+class ValueMapConstIterator :
+    public std::iterator<std::forward_iterator_tag,
+                         std::pair<KeyT, typename DenseMapT::mapped_type>,
+                         ptrdiff_t> {
+  typedef typename DenseMapT::const_iterator BaseT;
+  typedef typename DenseMapT::mapped_type ValueT;
+  BaseT I;
+public:
+  ValueMapConstIterator() : I() {}
+  ValueMapConstIterator(BaseT I) : I(I) {}
+  ValueMapConstIterator(ValueMapIterator<DenseMapT, KeyT> Other)
+    : I(Other.base()) {}
+
+  BaseT base() const { return I; }
+
+  struct ValueTypeProxy {
+    const KeyT first;
+    const ValueT& second;
+    ValueTypeProxy *operator->() { return this; }
+    operator std::pair<KeyT, ValueT>() const {
+      return std::make_pair(first, second);
+    }
+  };
+
+  ValueTypeProxy operator*() const {
+    ValueTypeProxy Result = {I->first.Unwrap(), I->second};
+    return Result;
+  }
+
+  ValueTypeProxy operator->() const {
+    return operator*();
+  }
+
+  bool operator==(const ValueMapConstIterator &RHS) const {
+    return I == RHS.I;
+  }
+  bool operator!=(const ValueMapConstIterator &RHS) const {
+    return I != RHS.I;
+  }
+
+  inline ValueMapConstIterator& operator++() {  // Preincrement
+    ++I;
+    return *this;
+  }
+  ValueMapConstIterator operator++(int) {  // Postincrement
+    ValueMapConstIterator tmp = *this; ++*this; return tmp;
+  }
+};
+
+} // end namespace llvm
+
+#endif
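
A minimal sketch of the behavior the callback machinery above provides, assuming this vintage of the LLVM headers (include path assumed) and the default ValueMapConfig, whose FollowRAUW is true; the function and variable names are illustrative only:

    #include "llvm/ADT/ValueMap.h"
    #include "llvm/Instructions.h"
    #include <cassert>
    using namespace llvm;

    void track(Instruction *Old, Instruction *New) {
      ValueMap<Value*, unsigned> VM;
      VM[Old] = 1;                    // key is wrapped in a ValueMapCallbackVH
      Old->replaceAllUsesWith(New);   // fires allUsesReplacedWith() above
      // With FollowRAUW, the entry migrates from Old to New:
      assert(VM.count(New) && "mapping follows the replaced value");
      // If Old were deleted instead, deleted() would erase its entry.
    }
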
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/AliasAnalysis.h b/libclamav/c++/llvm/include/llvm/Analysis/AliasAnalysis.h
index be7d5ee..2d43bdd 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/AliasAnalysis.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/AliasAnalysis.h
@@ -94,13 +94,12 @@ public:
   virtual AliasResult alias(const Value *V1, unsigned V1Size,
                             const Value *V2, unsigned V2Size);
 
-  /// getMustAliases - If there are any pointers known that must alias this
-  /// pointer, return them now.  This allows alias-set based alias analyses to
-  /// perform a form a value numbering (which is exposed by load-vn).  If an
-  /// alias analysis supports this, it should ADD any must aliased pointers to
-  /// the specified vector.
-  ///
-  virtual void getMustAliases(Value *P, std::vector<Value*> &RetVals);
+  /// isNoAlias - A trivial helper function to check to see if the specified
+  /// pointers are no-alias.
+  bool isNoAlias(const Value *V1, unsigned V1Size,
+                 const Value *V2, unsigned V2Size) {
+    return alias(V1, V1Size, V2, V2Size) == NoAlias;
+  }
 
   /// pointsToConstantMemory - If the specified pointer is known to point into
   /// constant global memory, return true.  This allows disambiguation of store
@@ -262,14 +261,6 @@ public:
   ///
   virtual ModRefResult getModRefInfo(CallSite CS1, CallSite CS2);
 
-  /// hasNoModRefInfoForCalls - Return true if the analysis has no mod/ref
-  /// information for pairs of function calls (other than "pure" and "const"
-  /// functions).  This can be used by clients to avoid many pointless queries.
-  /// Remember that if you override this and chain to another analysis, you must
-  /// make sure that it doesn't have mod/ref info either.
-  ///
-  virtual bool hasNoModRefInfoForCalls() const;
-
 public:
   /// Convenience functions...
   ModRefResult getModRefInfo(LoadInst *L, Value *P, unsigned Size);
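
The new isNoAlias helper just folds the common alias() == NoAlias test into one call; a hedged sketch of a client (P1, Size1, P2, Size2 are illustrative names for two pointer/size pairs):

    // Inside a pass that declared AliasAnalysis as required:
    AliasAnalysis &AA = getAnalysis<AliasAnalysis>();
    if (AA.isNoAlias(P1, Size1, P2, Size2)) {
      // The two locations provably do not overlap, so the
      // corresponding accesses may be freely reordered.
    }
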
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/AliasSetTracker.h b/libclamav/c++/llvm/include/llvm/Analysis/AliasSetTracker.h
index 239f30f..42a377e 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/AliasSetTracker.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/AliasSetTracker.h
@@ -29,7 +29,6 @@ namespace llvm {
 class AliasAnalysis;
 class LoadInst;
 class StoreInst;
-class FreeInst;
 class VAArgInst;
 class AliasSetTracker;
 class AliasSet;
@@ -298,7 +297,6 @@ public:
   bool add(Value *Ptr, unsigned Size);  // Add a location
   bool add(LoadInst *LI);
   bool add(StoreInst *SI);
-  bool add(FreeInst *FI);
   bool add(VAArgInst *VAAI);
   bool add(CallSite CS);          // Call/Invoke instructions
   bool add(CallInst *CI)   { return add(CallSite(CI)); }
@@ -313,7 +311,6 @@ public:
   bool remove(Value *Ptr, unsigned Size);  // Remove a location
   bool remove(LoadInst *LI);
   bool remove(StoreInst *SI);
-  bool remove(FreeInst *FI);
   bool remove(VAArgInst *VAAI);
   bool remove(CallSite CS);
   bool remove(CallInst *CI)   { return remove(CallSite(CI)); }
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/CFGPrinter.h b/libclamav/c++/llvm/include/llvm/Analysis/CFGPrinter.h
index 6a479d1..440d182 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/CFGPrinter.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/CFGPrinter.h
@@ -15,6 +15,79 @@
 #ifndef LLVM_ANALYSIS_CFGPRINTER_H
 #define LLVM_ANALYSIS_CFGPRINTER_H
 
+#include "llvm/Function.h"
+#include "llvm/Instructions.h"
+#include "llvm/Assembly/Writer.h"
+#include "llvm/Support/CFG.h"
+#include "llvm/Support/GraphWriter.h"
+
+namespace llvm {
+template<>
+struct DOTGraphTraits<const Function*> : public DefaultDOTGraphTraits {
+  static std::string getGraphName(const Function *F) {
+    return "CFG for '" + F->getNameStr() + "' function";
+  }
+
+  static std::string getNodeLabel(const BasicBlock *Node,
+                                  const Function *Graph,
+                                  bool ShortNames) {
+    if (ShortNames && !Node->getName().empty())
+      return Node->getNameStr() + ":";
+
+    std::string Str;
+    raw_string_ostream OS(Str);
+
+    if (ShortNames) {
+      WriteAsOperand(OS, Node, false);
+      return OS.str();
+    }
+
+    if (Node->getName().empty()) {
+      WriteAsOperand(OS, Node, false);
+      OS << ":";
+    }
+
+    OS << *Node;
+    std::string OutStr = OS.str();
+    if (OutStr[0] == '\n') OutStr.erase(OutStr.begin());
+
+    // Process string output to make it nicer...
+    for (unsigned i = 0; i != OutStr.length(); ++i)
+      if (OutStr[i] == '\n') {                            // Left justify
+        OutStr[i] = '\\';
+        OutStr.insert(OutStr.begin()+i+1, 'l');
+      } else if (OutStr[i] == ';') {                      // Delete comments!
+        unsigned Idx = OutStr.find('\n', i+1);            // Find end of line
+        OutStr.erase(OutStr.begin()+i, OutStr.begin()+Idx);
+        --i;
+      }
+
+    return OutStr;
+  }
+
+  static std::string getEdgeSourceLabel(const BasicBlock *Node,
+                                        succ_const_iterator I) {
+    // Label source of conditional branches with "T" or "F"
+    if (const BranchInst *BI = dyn_cast<BranchInst>(Node->getTerminator()))
+      if (BI->isConditional())
+        return (I == succ_begin(Node)) ? "T" : "F";
+    
+    // Label source of switch edges with the associated value.
+    if (const SwitchInst *SI = dyn_cast<SwitchInst>(Node->getTerminator())) {
+      unsigned SuccNo = I.getSuccessorIndex();
+
+      if (SuccNo == 0) return "def";
+      
+      std::string Str;
+      raw_string_ostream OS(Str);
+      OS << SI->getCaseValue(SuccNo)->getValue();
+      return OS.str();
+    }    
+    return "";
+  }
+};
+} // End llvm namespace
+
 namespace llvm {
   class FunctionPass;
   FunctionPass *createCFGPrinterPass ();
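
With the DOTGraphTraits specialization above in scope, the generic utilities from GraphWriter.h can render a function's CFG directly; a hedged sketch (ViewGraph is assumed to invoke the configured Graphviz viewer, as it does elsewhere in LLVM):

    #include "llvm/Analysis/CFGPrinter.h"
    using namespace llvm;

    void viewCFG(const Function *F) {
      // Blocks are labeled by getNodeLabel(); conditional-branch edges
      // come out annotated "T"/"F", switch edges with their case values.
      ViewGraph(F, "cfg." + F->getNameStr());
    }
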
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/CallGraph.h b/libclamav/c++/llvm/include/llvm/Analysis/CallGraph.h
index bcb6dee..287fe4f 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/CallGraph.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/CallGraph.h
@@ -51,9 +51,10 @@
 #ifndef LLVM_ANALYSIS_CALLGRAPH_H
 #define LLVM_ANALYSIS_CALLGRAPH_H
 
+#include "llvm/Function.h"
+#include "llvm/Pass.h"
 #include "llvm/ADT/GraphTraits.h"
 #include "llvm/ADT/STLExtras.h"
-#include "llvm/Pass.h"
 #include "llvm/Support/CallSite.h"
 #include "llvm/Support/ValueHandle.h"
 #include "llvm/System/IncludeFile.h"
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/CaptureTracking.h b/libclamav/c++/llvm/include/llvm/Analysis/CaptureTracking.h
index a0ff503..493ecf5 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/CaptureTracking.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/CaptureTracking.h
@@ -21,8 +21,12 @@ namespace llvm {
   /// by the enclosing function (which is required to exist).  This routine can
   /// be expensive, so consider caching the results.  The boolean ReturnCaptures
   /// specifies whether returning the value (or part of it) from the function
+  /// counts as capturing it or not.  The boolean StoreCaptures specifies whether
+  /// storing the value (or part of it) into memory anywhere automatically
   /// counts as capturing it or not.
-  bool PointerMayBeCaptured(const Value *V, bool ReturnCaptures);
+  bool PointerMayBeCaptured(const Value *V,
+                            bool ReturnCaptures,
+                            bool StoreCaptures);
 
 } // end namespace llvm
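
Callers now state both escape policies explicitly; a short sketch of the conservative query, using the two booleans declared above:

    #include "llvm/Analysis/CaptureTracking.h"
    using namespace llvm;

    bool neverEscapes(const Value *V) {
      // Count escaping via the return value and via stores to memory.
      return !PointerMayBeCaptured(V, /*ReturnCaptures=*/true,
                                   /*StoreCaptures=*/true);
    }
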
 
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/ConstantFolding.h b/libclamav/c++/llvm/include/llvm/Analysis/ConstantFolding.h
index 263d42a..06951c7 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/ConstantFolding.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/ConstantFolding.h
@@ -26,20 +26,18 @@ namespace llvm {
   class TargetData;
   class Function;
   class Type;
-  class LLVMContext;
 
 /// ConstantFoldInstruction - Attempt to constant fold the specified
 /// instruction.  If successful, the constant result is returned, if not, null
 /// is returned.  Note that this function can only fail when attempting to fold
 /// instructions like loads and stores, which have no constant expression form.
 ///
-Constant *ConstantFoldInstruction(Instruction *I, LLVMContext &Context,
-                                  const TargetData *TD = 0);
+Constant *ConstantFoldInstruction(Instruction *I, const TargetData *TD = 0);
 
 /// ConstantFoldConstantExpression - Attempt to fold the constant expression
 /// using the specified TargetData.  If successful, the constant
 /// result is returned, if not, null is returned.
-Constant *ConstantFoldConstantExpression(ConstantExpr *CE, LLVMContext &Context,
+Constant *ConstantFoldConstantExpression(ConstantExpr *CE,
                                          const TargetData *TD = 0);
 
 /// ConstantFoldInstOperands - Attempt to constant fold an instruction with the
@@ -49,8 +47,7 @@ Constant *ConstantFoldConstantExpression(ConstantExpr *CE, LLVMContext &Context,
 /// form.
 ///
 Constant *ConstantFoldInstOperands(unsigned Opcode, const Type *DestTy,
-                                   Constant*const * Ops, unsigned NumOps,
-                                   LLVMContext &Context,
+                                   Constant *const *Ops, unsigned NumOps,
                                    const TargetData *TD = 0);
 
 /// ConstantFoldCompareInstOperands - Attempt to constant fold a compare
@@ -58,16 +55,18 @@ Constant *ConstantFoldInstOperands(unsigned Opcode, const Type *DestTy,
 /// returns a constant expression of the specified operands.
 ///
 Constant *ConstantFoldCompareInstOperands(unsigned Predicate,
-                                          Constant*const * Ops, unsigned NumOps,
-                                          LLVMContext &Context,
+                                          Constant *LHS, Constant *RHS,
                                           const TargetData *TD = 0);
 
+/// ConstantFoldLoadFromConstPtr - Return the value that a load from C would
+/// produce if it is constant and determinable.  If this is not determinable,
+/// return null.
+Constant *ConstantFoldLoadFromConstPtr(Constant *C, const TargetData *TD = 0);
 
 /// ConstantFoldLoadThroughGEPConstantExpr - Given a constant and a
 /// getelementptr constantexpr, return the constant value being addressed by the
 /// constant expression, or null if something is funny and we can't decide.
-Constant *ConstantFoldLoadThroughGEPConstantExpr(Constant *C, ConstantExpr *CE,
-                                                 LLVMContext &Context);
+Constant *ConstantFoldLoadThroughGEPConstantExpr(Constant *C, ConstantExpr *CE);
   
 /// canConstantFoldCallTo - Return true if it's even possible to fold a call to
 /// the specified function.
@@ -76,7 +75,7 @@ bool canConstantFoldCallTo(const Function *F);
 /// ConstantFoldCall - Attempt to constant fold a call to the specified function
 /// with the specified arguments, returning null if unsuccessful.
 Constant *
-ConstantFoldCall(Function *F, Constant* const* Operands, unsigned NumOperands);
+ConstantFoldCall(Function *F, Constant *const *Operands, unsigned NumOperands);
 }
 
 #endif
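
After this interface change the folding entry points no longer take an LLVMContext; a hedged sketch of an updated call site (TD may be null when the target layout is unknown):

    #include "llvm/Analysis/ConstantFolding.h"
    using namespace llvm;

    Constant *tryFold(Instruction *I, const TargetData *TD) {
      // Returns null for instructions with no constant expression form,
      // e.g. most loads and stores.
      return ConstantFoldInstruction(I, TD);
    }
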
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h b/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h
index 53c5120..866ed8a 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h
@@ -44,16 +44,18 @@ namespace llvm {
   class Instruction;
   class LLVMContext;
 
+  /// DIDescriptor - A thin wrapper around MDNode to access encoded debug
+  /// info.  This should not be stored in a container, because the underlying
+  /// MDNode may change in certain situations.
   class DIDescriptor {
   protected:
-    TrackingVH<MDNode> DbgNode;
+    MDNode  *DbgNode;
 
     /// DIDescriptor constructor.  If the specified node is non-null, check
     /// to make sure that the tag in the descriptor matches 'RequiredTag'.  If
     /// not, the debug info is corrupt and we ignore it.
     DIDescriptor(MDNode *N, unsigned RequiredTag);
 
-    const char *getStringField(unsigned Elt) const;
+    StringRef getStringField(unsigned Elt) const;
     unsigned getUnsignedField(unsigned Elt) const {
       return (unsigned)getUInt64Field(Elt);
     }
@@ -135,8 +137,8 @@ namespace llvm {
     }
     virtual ~DIScope() {}
 
-    const char *getFilename() const;
-    const char *getDirectory() const;
+    StringRef getFilename() const;
+    StringRef getDirectory() const;
   };
 
   /// DICompileUnit - A wrapper for a compile unit.
@@ -148,9 +150,9 @@ namespace llvm {
     }
 
     unsigned getLanguage() const     { return getUnsignedField(2); }
-    const char *getFilename() const  { return getStringField(3);   }
-    const char *getDirectory() const { return getStringField(4);   }
-    const char *getProducer() const  { return getStringField(5);   }
+    StringRef getFilename() const  { return getStringField(3);   }
+    StringRef getDirectory() const { return getStringField(4);   }
+    StringRef getProducer() const  { return getStringField(5);   }
 
     /// isMain - Each input file is encoded as a separate compile unit in LLVM
     /// debugging information output. However, many target specific tool chains
@@ -163,7 +165,7 @@ namespace llvm {
 
     bool isMain() const                { return getUnsignedField(6); }
     bool isOptimized() const           { return getUnsignedField(7); }
-    const char *getFlags() const       { return getStringField(8);   }
+    StringRef getFlags() const       { return getStringField(8);   }
     unsigned getRunTimeVersion() const { return getUnsignedField(9); }
 
     /// Verify - Verify that a compile unit is well formed.
@@ -181,7 +183,7 @@ namespace llvm {
     explicit DIEnumerator(MDNode *N = 0)
       : DIDescriptor(N, dwarf::DW_TAG_enumerator) {}
 
-    const char *getName() const        { return getStringField(1); }
+    StringRef getName() const        { return getStringField(1); }
     uint64_t getEnumValue() const      { return getUInt64Field(2); }
   };
 
@@ -215,7 +217,7 @@ namespace llvm {
     virtual ~DIType() {}
 
     DIDescriptor getContext() const     { return getDescriptorField(1); }
-    const char *getName() const         { return getStringField(2);     }
+    StringRef getName() const         { return getStringField(2);     }
     DICompileUnit getCompileUnit() const{ return getFieldAs<DICompileUnit>(3); }
     unsigned getLineNumber() const      { return getUnsignedField(4); }
     uint64_t getSizeInBits() const      { return getUInt64Field(5); }
@@ -315,9 +317,9 @@ namespace llvm {
     virtual ~DIGlobal() {}
 
     DIDescriptor getContext() const     { return getDescriptorField(2); }
-    const char *getName() const         { return getStringField(3); }
-    const char *getDisplayName() const  { return getStringField(4); }
-    const char *getLinkageName() const  { return getStringField(5); }
+    StringRef getName() const         { return getStringField(3); }
+    StringRef getDisplayName() const  { return getStringField(4); }
+    StringRef getLinkageName() const  { return getStringField(5); }
     DICompileUnit getCompileUnit() const{ return getFieldAs<DICompileUnit>(6); }
     unsigned getLineNumber() const      { return getUnsignedField(7); }
     DIType getType() const              { return getFieldAs<DIType>(8); }
@@ -340,16 +342,16 @@ namespace llvm {
     }
 
     DIDescriptor getContext() const     { return getDescriptorField(2); }
-    const char *getName() const         { return getStringField(3); }
-    const char *getDisplayName() const  { return getStringField(4); }
-    const char *getLinkageName() const  { return getStringField(5); }
+    StringRef getName() const         { return getStringField(3); }
+    StringRef getDisplayName() const  { return getStringField(4); }
+    StringRef getLinkageName() const  { return getStringField(5); }
     DICompileUnit getCompileUnit() const{ return getFieldAs<DICompileUnit>(6); }
     unsigned getLineNumber() const      { return getUnsignedField(7); }
     DICompositeType getType() const { return getFieldAs<DICompositeType>(8); }
 
     /// getReturnTypeName - Subprogram return types are encoded either as
     /// DIType or as DICompositeType.
-    const char *getReturnTypeName() const {
+    StringRef getReturnTypeName() const {
       DICompositeType DCT(getFieldAs<DICompositeType>(8));
       if (!DCT.isNull()) {
         DIArray A = DCT.getTypeArray();
@@ -364,8 +366,8 @@ namespace llvm {
     /// compile unit, like 'static' in C.
     unsigned isLocalToUnit() const     { return getUnsignedField(9); }
     unsigned isDefinition() const      { return getUnsignedField(10); }
-    const char *getFilename() const    { return getCompileUnit().getFilename();}
-    const char *getDirectory() const   { return getCompileUnit().getDirectory();}
+    StringRef getFilename() const    { return getCompileUnit().getFilename();}
+    StringRef getDirectory() const   { return getCompileUnit().getDirectory();}
 
     /// Verify - Verify that a subprogram descriptor is well formed.
     bool Verify() const;
@@ -404,7 +406,7 @@ namespace llvm {
     }
 
     DIDescriptor getContext() const { return getDescriptorField(1); }
-    const char *getName() const     { return getStringField(2);     }
+    StringRef getName() const     { return getStringField(2);     }
     DICompileUnit getCompileUnit() const{ return getFieldAs<DICompileUnit>(3); }
     unsigned getLineNumber() const      { return getUnsignedField(4); }
     DIType getType() const              { return getFieldAs<DIType>(5); }
@@ -442,8 +444,8 @@ namespace llvm {
         DbgNode = 0;
     }
     DIScope getContext() const       { return getFieldAs<DIScope>(1); }
-    const char *getDirectory() const { return getContext().getDirectory(); }
-    const char *getFilename() const  { return getContext().getFilename(); }
+    StringRef getDirectory() const { return getContext().getDirectory(); }
+    StringRef getFilename() const  { return getContext().getFilename(); }
   };
 
   /// DILocation - This object holds location information. This object
@@ -456,8 +458,8 @@ namespace llvm {
     unsigned getColumnNumber() const   { return getUnsignedField(1); }
     DIScope  getScope() const          { return getFieldAs<DIScope>(2); }
     DILocation getOrigLocation() const { return getFieldAs<DILocation>(3); }
-    const char *getFilename() const    { return getScope().getFilename(); }
-    const char *getDirectory() const   { return getScope().getDirectory(); }
+    StringRef getFilename() const    { return getScope().getFilename(); }
+    StringRef getDirectory() const   { return getScope().getDirectory(); }
   };
 
   /// DIFactory - This object assists with the construction of the various
@@ -466,15 +468,8 @@ namespace llvm {
     Module &M;
     LLVMContext& VMContext;
 
-    // Cached values for uniquing and faster lookups.
     const Type *EmptyStructPtr; // "{}*".
-    Function *StopPointFn;   // llvm.dbg.stoppoint
-    Function *FuncStartFn;   // llvm.dbg.func.start
-    Function *RegionStartFn; // llvm.dbg.region.start
-    Function *RegionEndFn;   // llvm.dbg.region.end
     Function *DeclareFn;     // llvm.dbg.declare
-    StringMap<Constant*> StringCache;
-    DenseMap<Constant*, DIDescriptor> SimpleConstantCache;
 
     DIFactory(const DIFactory &);     // DO NOT IMPLEMENT
     void operator=(const DIFactory&); // DO NOT IMPLEMENT
@@ -494,12 +489,12 @@ namespace llvm {
     /// CreateCompileUnit - Create a new descriptor for the specified compile
     /// unit.
     DICompileUnit CreateCompileUnit(unsigned LangID,
-                                    StringRef Filenae,
+                                    StringRef Filename,
                                     StringRef Directory,
                                     StringRef Producer,
                                     bool isMain = false,
                                     bool isOptimized = false,
-                                    const char *Flags = "",
+                                    StringRef Flags = "",
                                     unsigned RunTimeVer = 0);
 
     /// CreateEnumerator - Create a single enumerator value.
@@ -512,6 +507,13 @@ namespace llvm {
                                 uint64_t OffsetInBits, unsigned Flags,
                                 unsigned Encoding);
 
+    /// CreateBasicTypeEx - Create a basic type like int, float, etc.
+    DIBasicType CreateBasicTypeEx(DIDescriptor Context, StringRef Name,
+                                DICompileUnit CompileUnit, unsigned LineNumber,
+                                Constant *SizeInBits, Constant *AlignInBits,
+                                Constant *OffsetInBits, unsigned Flags,
+                                unsigned Encoding);
+
     /// CreateDerivedType - Create a derived type like const qualified type,
     /// pointer, typedef, etc.
     DIDerivedType CreateDerivedType(unsigned Tag, DIDescriptor Context,
@@ -522,6 +524,16 @@ namespace llvm {
                                     uint64_t OffsetInBits, unsigned Flags,
                                     DIType DerivedFrom);
 
+    /// CreateDerivedTypeEx - Create a derived type like const qualified type,
+    /// pointer, typedef, etc.
+    DIDerivedType CreateDerivedTypeEx(unsigned Tag, DIDescriptor Context,
+                                      StringRef Name,
+                                    DICompileUnit CompileUnit,
+                                    unsigned LineNumber,
+                                    Constant *SizeInBits, Constant *AlignInBits,
+                                    Constant *OffsetInBits, unsigned Flags,
+                                    DIType DerivedFrom);
+
     /// CreateCompositeType - Create a composite type like array, struct, etc.
     DICompositeType CreateCompositeType(unsigned Tag, DIDescriptor Context,
                                         StringRef Name,
@@ -534,6 +546,18 @@ namespace llvm {
                                         DIArray Elements,
                                         unsigned RunTimeLang = 0);
 
+    /// CreateCompositeTypeEx - Create a composite type like array, struct, etc.
+    DICompositeType CreateCompositeTypeEx(unsigned Tag, DIDescriptor Context,
+                                        StringRef Name,
+                                        DICompileUnit CompileUnit,
+                                        unsigned LineNumber,
+                                        Constant *SizeInBits,
+                                        Constant *AlignInBits,
+                                        Constant *OffsetInBits, unsigned Flags,
+                                        DIType DerivedFrom,
+                                        DIArray Elements,
+                                        unsigned RunTimeLang = 0);
+
     /// CreateSubprogram - Create a new descriptor for the specified subprogram.
     /// See comments in DISubprogram for descriptions of these fields.
     DISubprogram CreateSubprogram(DIDescriptor Context, StringRef Name,
@@ -574,30 +598,17 @@ namespace llvm {
     DILocation CreateLocation(unsigned LineNo, unsigned ColumnNo,
                               DIScope S, DILocation OrigLoc);
 
-    /// InsertStopPoint - Create a new llvm.dbg.stoppoint intrinsic invocation,
-    /// inserting it at the end of the specified basic block.
-    void InsertStopPoint(DICompileUnit CU, unsigned LineNo, unsigned ColNo,
-                         BasicBlock *BB);
-
-    /// InsertSubprogramStart - Create a new llvm.dbg.func.start intrinsic to
-    /// mark the start of the specified subprogram.
-    void InsertSubprogramStart(DISubprogram SP, BasicBlock *BB);
-
-    /// InsertRegionStart - Insert a new llvm.dbg.region.start intrinsic call to
-    /// mark the start of a region for the specified scoping descriptor.
-    void InsertRegionStart(DIDescriptor D, BasicBlock *BB);
-
-    /// InsertRegionEnd - Insert a new llvm.dbg.region.end intrinsic call to
-    /// mark the end of a region for the specified scoping descriptor.
-    void InsertRegionEnd(DIDescriptor D, BasicBlock *BB);
+    /// CreateLocation - Creates a debug info location.
+    DILocation CreateLocation(unsigned LineNo, unsigned ColumnNo,
+                              DIScope S, MDNode *OrigLoc = 0);
 
     /// InsertDeclare - Insert a new llvm.dbg.declare intrinsic call.
-    void InsertDeclare(llvm::Value *Storage, DIVariable D,
-                       BasicBlock *InsertAtEnd);
+    Instruction *InsertDeclare(llvm::Value *Storage, DIVariable D,
+                               BasicBlock *InsertAtEnd);
 
     /// InsertDeclare - Insert a new llvm.dbg.declare intrinsic call.
-    void InsertDeclare(llvm::Value *Storage, DIVariable D,
-                       Instruction *InsertBefore);
+    Instruction *InsertDeclare(llvm::Value *Storage, DIVariable D,
+                               Instruction *InsertBefore);
 
   private:
     Constant *GetTagConstant(unsigned TAG);
@@ -662,12 +673,12 @@ bool getLocationInfo(const Value *V, std::string &DisplayName,
   DebugLoc ExtractDebugLocation(DbgFuncStartInst &FSI,
                                 DebugLocTracker &DebugLocInfo);
 
-  /// isInlinedFnStart - Return true if FSI is starting an inlined function.
-  bool isInlinedFnStart(DbgFuncStartInst &FSI, const Function *CurrentFn);
+  /// getDISubprogram - Find the subprogram that encloses this scope.
+  DISubprogram getDISubprogram(MDNode *Scope);
+
+  /// getDICompositeType - Find underlying composite type.
+  DICompositeType getDICompositeType(DIType T);
 
-  /// isInlinedFnEnd - Return true if REI is ending an inlined function.
-  bool isInlinedFnEnd(DbgRegionEndInst &REI, const Function *CurrentFn);
-  /// DebugInfoFinder - This object collects DebugInfo from a module.
   class DebugInfoFinder {
 
   public:
@@ -679,24 +690,18 @@ bool getLocationInfo(const Value *V, std::string &DisplayName,
     /// processType - Process DIType.
     void processType(DIType DT);
 
-    /// processSubprogram - Enumberate DISubprogram.
-    void processSubprogram(DISubprogram SP);
-
-    /// processStopPoint - Process DbgStopPointInst.
-    void processStopPoint(DbgStopPointInst *SPI);
-
-    /// processFuncStart - Process DbgFuncStartInst.
-    void processFuncStart(DbgFuncStartInst *FSI);
+    /// processLexicalBlock - Process DILexicalBlock.
+    void processLexicalBlock(DILexicalBlock LB);
 
-    /// processRegionStart - Process DbgRegionStart.
-    void processRegionStart(DbgRegionStartInst *DRS);
-
-    /// processRegionEnd - Process DbgRegionEnd.
-    void processRegionEnd(DbgRegionEndInst *DRE);
+    /// processSubprogram - Process DISubprogram.
+    void processSubprogram(DISubprogram SP);
 
     /// processDeclare - Process DbgDeclareInst.
     void processDeclare(DbgDeclareInst *DDI);
 
+    /// processLocation - Process DILocation.
+    void processLocation(DILocation Loc);
+
     /// addCompileUnit - Add compile unit into CUs.
     bool addCompileUnit(DICompileUnit CU);
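
Since the accessors now return StringRef rather than const char*, callers get a length-carrying view that is safe to test even when a field is absent; a brief sketch:

    #include "llvm/Analysis/DebugInfo.h"
    #include "llvm/Support/raw_ostream.h"
    using namespace llvm;

    void printUnit(DICompileUnit CU) {
      StringRef File = CU.getFilename();
      // empty() is safe even when the underlying MDNode field is missing.
      if (!File.empty())
        errs() << CU.getDirectory() << "/" << File << "\n";
    }
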
 
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/DomPrinter.h b/libclamav/c++/llvm/include/llvm/Analysis/DomPrinter.h
new file mode 100644
index 0000000..0ed2899
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/Analysis/DomPrinter.h
@@ -0,0 +1,30 @@
+//===-- DomPrinter.h - Dom printer external interface ------------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file defines external functions that can be called to explicitly
+// instantiate the dominance tree printer.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_ANALYSIS_DOMPRINTER_H
+#define LLVM_ANALYSIS_DOMPRINTER_H
+
+namespace llvm {
+  class FunctionPass;
+  FunctionPass *createDomPrinterPass();
+  FunctionPass *createDomOnlyPrinterPass();
+  FunctionPass *createDomViewerPass();
+  FunctionPass *createDomOnlyViewerPass();
+  FunctionPass *createPostDomPrinterPass();
+  FunctionPass *createPostDomOnlyPrinterPass();
+  FunctionPass *createPostDomViewerPass();
+  FunctionPass *createPostDomOnlyViewerPass();
+} // End llvm namespace
+
+#endif
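
These factories follow the usual create*Pass convention; a hedged sketch of scheduling one (the PassManager header name is assumed):

    #include "llvm/PassManager.h"
    #include "llvm/Analysis/DomPrinter.h"
    using namespace llvm;

    void addDomTreeDump(PassManager &PM) {
      // Emits a Graphviz .dot file of each function's dominator tree.
      PM.add(createDomPrinterPass());
    }
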
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/Dominators.h b/libclamav/c++/llvm/include/llvm/Analysis/Dominators.h
index f63e31c..2e149d5 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/Dominators.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/Dominators.h
@@ -25,6 +25,7 @@
 #include "llvm/Function.h"
 #include "llvm/Instructions.h"
 #include "llvm/ADT/DenseMap.h"
+#include "llvm/ADT/DepthFirstIterator.h"
 #include "llvm/ADT/GraphTraits.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/SmallVector.h"
@@ -309,7 +310,6 @@ public:
     if (DomTreeNodes.size() != OtherDomTreeNodes.size())
       return true;
 
-    SmallPtrSet<const NodeT *,4> MyBBs;
     for (typename DomTreeNodeMapType::const_iterator 
            I = this->DomTreeNodes.begin(),
            E = this->DomTreeNodes.end(); I != E; ++I) {
@@ -824,26 +824,44 @@ public:
 /// DominatorTree GraphTraits specialization so the DominatorTree can be
 /// iterable by generic graph iterators.
 ///
-template <> struct GraphTraits<DomTreeNode *> {
+template <> struct GraphTraits<DomTreeNode*> {
   typedef DomTreeNode NodeType;
   typedef NodeType::iterator  ChildIteratorType;
 
   static NodeType *getEntryNode(NodeType *N) {
     return N;
   }
-  static inline ChildIteratorType child_begin(NodeType* N) {
+  static inline ChildIteratorType child_begin(NodeType *N) {
     return N->begin();
   }
-  static inline ChildIteratorType child_end(NodeType* N) {
+  static inline ChildIteratorType child_end(NodeType *N) {
     return N->end();
   }
+
+  typedef df_iterator<DomTreeNode*> nodes_iterator;
+
+  static nodes_iterator nodes_begin(DomTreeNode *N) {
+    return df_begin(getEntryNode(N));
+  }
+
+  static nodes_iterator nodes_end(DomTreeNode *N) {
+    return df_end(getEntryNode(N));
+  }
 };
 
 template <> struct GraphTraits<DominatorTree*>
-  : public GraphTraits<DomTreeNode *> {
+  : public GraphTraits<DomTreeNode*> {
   static NodeType *getEntryNode(DominatorTree *DT) {
     return DT->getRootNode();
   }
+
+  static nodes_iterator nodes_begin(DominatorTree *N) {
+    return df_begin(getEntryNode(N));
+  }
+
+  static nodes_iterator nodes_end(DominatorTree *N) {
+    return df_end(getEntryNode(N));
+  }
 };
 
 
@@ -886,9 +904,9 @@ public:
   iterator       find(BasicBlock *B)       { return Frontiers.find(B); }
   const_iterator find(BasicBlock *B) const { return Frontiers.find(B); }
 
-  void addBasicBlock(BasicBlock *BB, const DomSetType &frontier) {
+  iterator addBasicBlock(BasicBlock *BB, const DomSetType &frontier) {
     assert(find(BB) == end() && "Block already in DominanceFrontier!");
-    Frontiers.insert(std::make_pair(BB, frontier));
+    return Frontiers.insert(std::make_pair(BB, frontier)).first;
   }
 
   /// removeBlock - Remove basic block BB's frontier.
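
The added nodes_begin/nodes_end make the whole tree visible to generic graph algorithms; equivalently, a client can walk it depth-first by hand, as in this sketch built from the traits above:

    #include "llvm/Analysis/Dominators.h"
    #include "llvm/ADT/DepthFirstIterator.h"
    using namespace llvm;

    void visit(DominatorTree *DT) {
      for (df_iterator<DomTreeNode*> I = df_begin(DT->getRootNode()),
                                     E = df_end(DT->getRootNode());
           I != E; ++I) {
        BasicBlock *BB = (*I)->getBlock();  // block at this tree node
        (void)BB;  // ... inspect BB here ...
      }
    }
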
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/IVUsers.h b/libclamav/c++/llvm/include/llvm/Analysis/IVUsers.h
index 948c675..22fbb35 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/IVUsers.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/IVUsers.h
@@ -161,6 +161,10 @@ public:
   void addUser(const SCEV *Offset, Instruction *User, Value *Operand) {
     Users.push_back(new IVStrideUse(this, Offset, User, Operand));
   }
+
+  void removeUser(IVStrideUse *User) {
+    Users.erase(User);
+  }
 };
 
 class IVUsers : public LoopPass {
@@ -201,6 +205,9 @@ public:
   /// return true.  Otherwise, return false.
   bool AddUsersIfInteresting(Instruction *I);
 
+  void AddUser(const SCEV *Stride, const SCEV *Offset,
+               Instruction *User, Value *Operand);
+
   /// getReplacementExpr - Return a SCEV expression which computes the
   /// value of the OperandValToReplace of the given IVStrideUse.
   const SCEV *getReplacementExpr(const IVStrideUse &U) const;
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/InlineCost.h b/libclamav/c++/llvm/include/llvm/Analysis/InlineCost.h
new file mode 100644
index 0000000..7ce49d7
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/Analysis/InlineCost.h
@@ -0,0 +1,180 @@
+//===- InlineCost.h - Cost analysis for inliner -----------------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements heuristics for inlining decisions.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_ANALYSIS_INLINECOST_H
+#define LLVM_ANALYSIS_INLINECOST_H
+
+#include <cassert>
+#include <climits>
+#include <map>
+#include <vector>
+
+namespace llvm {
+
+  class Value;
+  class Function;
+  class BasicBlock;
+  class CallSite;
+  template<class PtrType, unsigned SmallSize>
+  class SmallPtrSet;
+
+  // CodeMetrics - Calculate size and a few similar metrics for a set of
+  // basic blocks.
+  struct CodeMetrics {
+    /// NeverInline - True if this callee should never be inlined into a
+    /// caller.
+    bool NeverInline;
+    
+    /// usesDynamicAlloca - True if this function calls alloca (in the C sense).
+    bool usesDynamicAlloca;
+
+    /// NumInsts, NumBlocks - Keep track of how large each function is, which
+    /// is used to estimate the code size cost of inlining it.
+    unsigned NumInsts, NumBlocks;
+
+    /// NumVectorInsts - Keep track of how many instructions produce vector
+    /// values.  The inliner is being more aggressive with inlining vector
+    /// kernels.
+    unsigned NumVectorInsts;
+    
+    /// NumRets - Keep track of how many Ret instructions the block contains.
+    unsigned NumRets;
+
+    CodeMetrics() : NeverInline(false), usesDynamicAlloca(false), NumInsts(0),
+                    NumBlocks(0), NumVectorInsts(0), NumRets(0) {}
+    
+    /// analyzeBasicBlock - Add information about the specified basic block
+    /// to the current structure.
+    void analyzeBasicBlock(const BasicBlock *BB);
+
+    /// analyzeFunction - Add information about the specified function
+    /// to the current structure.
+    void analyzeFunction(Function *F);
+  };
+
+  namespace InlineConstants {
+    // Various magic constants used to adjust heuristics.
+    const int CallPenalty = 5;
+    const int LastCallToStaticBonus = -15000;
+    const int ColdccPenalty = 2000;
+    const int NoreturnPenalty = 10000;
+  }
+
+  /// InlineCost - Represent the cost of inlining a function. This
+  /// supports special values for functions which should "always" or
+  /// "never" be inlined. Otherwise, the cost represents a unitless
+  /// amount; smaller values increase the likelihood of the function
+  /// being inlined.
+  class InlineCost {
+    enum Kind {
+      Value,
+      Always,
+      Never
+    };
+
+    // This is a do-it-yourself implementation of
+    //   int Cost : 30;
+    //   unsigned Type : 2;
+    // We used to use bitfields, but they were sometimes miscompiled (PR3822).
+    enum { TYPE_BITS = 2 };
+    enum { COST_BITS = unsigned(sizeof(unsigned)) * CHAR_BIT - TYPE_BITS };
+    unsigned TypedCost; // int Cost : COST_BITS; unsigned Type : TYPE_BITS;
+
+    Kind getType() const {
+      return Kind(TypedCost >> COST_BITS);
+    }
+
+    int getCost() const {
+      // Sign-extend the bottom COST_BITS bits.
+      return (int(TypedCost << TYPE_BITS)) >> TYPE_BITS;
+    }
+
+    InlineCost(int C, int T) {
+      TypedCost = (unsigned(C << TYPE_BITS) >> TYPE_BITS) | (T << COST_BITS);
+      assert(getCost() == C && "Cost exceeds InlineCost precision");
+    }
+  public:
+    static InlineCost get(int Cost) { return InlineCost(Cost, Value); }
+    static InlineCost getAlways() { return InlineCost(0, Always); }
+    static InlineCost getNever() { return InlineCost(0, Never); }
+
+    bool isVariable() const { return getType() == Value; }
+    bool isAlways() const { return getType() == Always; }
+    bool isNever() const { return getType() == Never; }
+
+    /// getValue() - Return a "variable" inline cost's amount. It is
+    /// an error to call this on an "always" or "never" InlineCost.
+    int getValue() const {
+      assert(getType() == Value && "Invalid access of InlineCost");
+      return getCost();
+    }
+  };
+  
+  /// InlineCostAnalyzer - Cost analyzer used by inliner.
+  class InlineCostAnalyzer {
+    struct ArgInfo {
+    public:
+      unsigned ConstantWeight;
+      unsigned AllocaWeight;
+      
+      ArgInfo(unsigned CWeight, unsigned AWeight)
+        : ConstantWeight(CWeight), AllocaWeight(AWeight) {}
+    };
+    
+    struct FunctionInfo {
+      CodeMetrics Metrics;
+
+      /// ArgumentWeights - Each formal argument of the function is inspected to
+      /// see if it is used in any contexts where making it a constant or alloca
+      /// would reduce the code size.  If so, we add some value to the argument
+      /// entry here.
+      std::vector<ArgInfo> ArgumentWeights;
+    
+      /// CountCodeReductionForConstant - Figure out an approximation for how
+      /// many instructions will be constant folded if the specified value is
+      /// constant.
+      unsigned CountCodeReductionForConstant(Value *V);
+    
+      /// CountCodeReductionForAlloca - Figure out an approximation of how much
+      /// smaller the function will be if it is inlined into a context where an
+      /// argument becomes an alloca.
+      ///
+      unsigned CountCodeReductionForAlloca(Value *V);
+
+      /// analyzeFunction - Add information about the specified function
+      /// to the current structure.
+      void analyzeFunction(Function *F);
+    };
+
+    std::map<const Function *, FunctionInfo> CachedFunctionInfo;
+
+  public:
+
+    /// getInlineCost - The heuristic used to determine if we should inline the
+    /// function call or not.
+    ///
+    InlineCost getInlineCost(CallSite CS,
+                             SmallPtrSet<const Function *, 16> &NeverInline);
+
+    /// getInlineFudgeFactor - Return a > 1.0 factor if the inliner should use a
+    /// higher threshold to determine if the function call should be inlined.
+    float getInlineFudgeFactor(CallSite CS);
+
+    /// resetCachedCostInfo - Erase any cached cost info for this function.
+    void resetCachedCostInfo(Function* Caller) {
+      CachedFunctionInfo[Caller] = FunctionInfo();
+    }
+  };
+}
+
+#endif
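
The TypedCost packing above keeps a 30-bit signed cost and a 2-bit kind in one unsigned, and the shift pair sign-extends the cost on the way back out; a self-contained sketch of the same trick (assuming the usual arithmetic right shift on signed int, as the original code does):

    #include <cassert>
    #include <climits>

    enum { TYPE_BITS = 2 };
    enum { COST_BITS = unsigned(sizeof(unsigned)) * CHAR_BIT - TYPE_BITS };

    unsigned pack(int Cost, unsigned Type) {
      // Drop the top TYPE_BITS of the cost, then splice the kind on top.
      return (unsigned(Cost << TYPE_BITS) >> TYPE_BITS) | (Type << COST_BITS);
    }

    int unpackCost(unsigned TypedCost) {
      // Left shift then arithmetic right shift sign-extends bit 29.
      return (int(TypedCost << TYPE_BITS)) >> TYPE_BITS;
    }

    int main() {
      assert(unpackCost(pack(-15000, 0)) == -15000);  // round-trips
      return 0;
    }
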
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/InstructionSimplify.h b/libclamav/c++/llvm/include/llvm/Analysis/InstructionSimplify.h
new file mode 100644
index 0000000..1cd7e56
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/Analysis/InstructionSimplify.h
@@ -0,0 +1,79 @@
+//===-- InstructionSimplify.h - Fold instructions into simpler forms ------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file declares routines for folding instructions into simpler forms that
+// do not require creating new instructions.  For example, this does constant
+// folding, and can handle identities like (X&0)->0.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_ANALYSIS_INSTRUCTIONSIMPLIFY_H
+#define LLVM_ANALYSIS_INSTRUCTIONSIMPLIFY_H
+
+namespace llvm {
+  class Instruction;
+  class Value;
+  class TargetData;
+  
+  /// SimplifyAndInst - Given operands for an And, see if we can
+  /// fold the result.  If not, this returns null.
+  Value *SimplifyAndInst(Value *LHS, Value *RHS,
+                         const TargetData *TD = 0);
+
+  /// SimplifyOrInst - Given operands for an Or, see if we can
+  /// fold the result.  If not, this returns null.
+  Value *SimplifyOrInst(Value *LHS, Value *RHS,
+                        const TargetData *TD = 0);
+  
+  /// SimplifyICmpInst - Given operands for an ICmpInst, see if we can
+  /// fold the result.  If not, this returns null.
+  Value *SimplifyICmpInst(unsigned Predicate, Value *LHS, Value *RHS,
+                          const TargetData *TD = 0);
+  
+  /// SimplifyFCmpInst - Given operands for an FCmpInst, see if we can
+  /// fold the result.  If not, this returns null.
+  Value *SimplifyFCmpInst(unsigned Predicate, Value *LHS, Value *RHS,
+                          const TargetData *TD = 0);
+  
+
+  /// SimplifyGEPInst - Given operands for a GetElementPtrInst, see if we can
+  /// fold the result.  If not, this returns null.
+  Value *SimplifyGEPInst(Value * const *Ops, unsigned NumOps,
+                         const TargetData *TD = 0);
+  
+  //=== Helper functions for higher up the class hierarchy.
+  
+  
+  /// SimplifyCmpInst - Given operands for a CmpInst, see if we can
+  /// fold the result.  If not, this returns null.
+  Value *SimplifyCmpInst(unsigned Predicate, Value *LHS, Value *RHS,
+                         const TargetData *TD = 0);
+  
+  /// SimplifyBinOp - Given operands for a BinaryOperator, see if we can
+  /// fold the result.  If not, this returns null.
+  Value *SimplifyBinOp(unsigned Opcode, Value *LHS, Value *RHS, 
+                       const TargetData *TD = 0);
+  
+  /// SimplifyInstruction - See if we can compute a simplified version of this
+  /// instruction.  If not, this returns null.
+  Value *SimplifyInstruction(Instruction *I, const TargetData *TD = 0);
+  
+  
+  /// ReplaceAndSimplifyAllUses - Perform From->replaceAllUsesWith(To) and then
+  /// delete the From instruction.  In addition to a basic RAUW, this does a
+  /// recursive simplification of the updated instructions.  This catches
+  /// things where one simplification exposes other opportunities.  This only
+  /// simplifies and deletes scalar operations, it does not change the CFG.
+  ///
+  void ReplaceAndSimplifyAllUses(Instruction *From, Value *To,
+                                 const TargetData *TD = 0);
+} // end namespace llvm
+
+#endif
+
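A hedged sketch of a typical client: SimplifyInstruction returns an existing simpler value (or null) without creating instructions, and ReplaceAndSimplifyAllUses then performs the RAUW, recursive cleanup, and deletion described above:

    #include "llvm/Analysis/InstructionSimplify.h"
    using namespace llvm;

    bool trySimplify(Instruction *I, const TargetData *TD) {
      if (Value *V = SimplifyInstruction(I, TD)) {
        ReplaceAndSimplifyAllUses(I, V, TD);  // also deletes I
        return true;
      }
      return false;
    }
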
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/LazyValueInfo.h b/libclamav/c++/llvm/include/llvm/Analysis/LazyValueInfo.h
new file mode 100644
index 0000000..566788d
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/Analysis/LazyValueInfo.h
@@ -0,0 +1,73 @@
+//===- LazyValueInfo.h - Value constraint analysis --------------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file defines the interface for lazy computation of value constraint
+// information.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_ANALYSIS_LAZYVALUEINFO_H
+#define LLVM_ANALYSIS_LAZYVALUEINFO_H
+
+#include "llvm/Pass.h"
+
+namespace llvm {
+  class Constant;
+  class TargetData;
+  class Value;
+  
+/// LazyValueInfo - This pass computes, caches, and vends lazy value constraint
+/// information.
+class LazyValueInfo : public FunctionPass {
+  class TargetData *TD;
+  void *PImpl;
+  LazyValueInfo(const LazyValueInfo&); // DO NOT IMPLEMENT.
+  void operator=(const LazyValueInfo&); // DO NOT IMPLEMENT.
+public:
+  static char ID;
+  LazyValueInfo() : FunctionPass(&ID), PImpl(0) {}
+  ~LazyValueInfo() { assert(PImpl == 0 && "releaseMemory not called"); }
+
+  /// Tristate - This is used to return true/false/dunno results.
+  enum Tristate {
+    Unknown = -1, False = 0, True = 1
+  };
+  
+  
+  // Public query interface.
+  
+  /// getPredicateOnEdge - Determine whether the specified value comparison
+  /// with a constant is known to be true or false on the specified CFG edge.
+  /// Pred is a CmpInst predicate.
+  Tristate getPredicateOnEdge(unsigned Pred, Value *V, Constant *C,
+                              BasicBlock *FromBB, BasicBlock *ToBB);
+  
+  
+  /// getConstant - Determine whether the specified value is known to be a
+  /// constant at the end of the specified block.  Return null if not.
+  Constant *getConstant(Value *V, BasicBlock *BB);
+
+  /// getConstantOnEdge - Determine whether the specified value is known to be a
+  /// constant on the specified edge.  Return null if not.
+  Constant *getConstantOnEdge(Value *V, BasicBlock *FromBB, BasicBlock *ToBB);
+  
+  
+  // Implementation boilerplate.
+  
+  virtual void getAnalysisUsage(AnalysisUsage &AU) const {
+    AU.setPreservesAll();
+  }
+  virtual void releaseMemory();
+  virtual bool runOnFunction(Function &F);
+};
+
+}  // end namespace llvm
+
+#endif
+
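A sketch of the query interface from inside a pass that requires LazyValueInfo; ICMP_EQ is the ordinary CmpInst predicate, so this asks whether V == C is known to hold along the given edge:

    #include "llvm/Analysis/LazyValueInfo.h"
    #include "llvm/Instructions.h"
    using namespace llvm;

    bool knownEqualOnEdge(LazyValueInfo &LVI, Value *V, Constant *C,
                          BasicBlock *From, BasicBlock *To) {
      return LVI.getPredicateOnEdge(CmpInst::ICMP_EQ, V, C, From, To)
             == LazyValueInfo::True;
    }
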
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/LibCallAliasAnalysis.h b/libclamav/c++/llvm/include/llvm/Analysis/LibCallAliasAnalysis.h
index 7944af3..01f108d 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/LibCallAliasAnalysis.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/LibCallAliasAnalysis.h
@@ -49,9 +49,6 @@ namespace llvm {
       return false;
     }
     
-    /// hasNoModRefInfoForCalls - We can provide mod/ref information against
-    /// non-escaping allocations.
-    virtual bool hasNoModRefInfoForCalls() const { return false; }
   private:
     ModRefResult AnalyzeLibCallDetails(const LibCallFunctionInfo *FI,
                                        CallSite CS, Value *P, unsigned Size);
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/LiveValues.h b/libclamav/c++/llvm/include/llvm/Analysis/LiveValues.h
index 31b00d7..b92cb78 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/LiveValues.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/LiveValues.h
@@ -94,10 +94,6 @@ public:
   bool isKilledInBlock(const Value *V, const BasicBlock *BB);
 };
 
-/// createLiveValuesPass - This creates an instance of the LiveValues pass.
-///
-FunctionPass *createLiveValuesPass();
-
-}
+}  // end namespace llvm
 
 #endif
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h b/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
index 7631110..9969d99 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
@@ -114,10 +114,10 @@ public:
   block_iterator block_begin() const { return Blocks.begin(); }
   block_iterator block_end() const { return Blocks.end(); }
 
-  /// isLoopExit - True if terminator in the block can branch to another block
-  /// that is outside of the current loop.
+  /// isLoopExiting - True if terminator in the block can branch to another
+  /// block that is outside of the current loop.
   ///
-  bool isLoopExit(const BlockT *BB) const {
+  bool isLoopExiting(const BlockT *BB) const {
     typedef GraphTraits<BlockT*> BlockTraits;
     for (typename BlockTraits::ChildIteratorType SI =
          BlockTraits::child_begin(const_cast<BlockT*>(BB)),
@@ -269,8 +269,6 @@ public:
 
   /// getLoopLatch - If there is a single latch block for this loop, return it.
   /// A latch block is a block that contains a branch back to the header.
-  /// A loop header in normal form has two edges into it: one from a preheader
-  /// and one from a latch block.
   BlockT *getLoopLatch() const {
     BlockT *Header = getHeader();
     typedef GraphTraits<Inverse<BlockT*> > InvBlockTraits;
@@ -278,20 +276,12 @@ public:
                                             InvBlockTraits::child_begin(Header);
     typename InvBlockTraits::ChildIteratorType PE =
                                               InvBlockTraits::child_end(Header);
-    if (PI == PE) return 0;  // no preds?
-
     BlockT *Latch = 0;
-    if (contains(*PI))
-      Latch = *PI;
-    ++PI;
-    if (PI == PE) return 0;  // only one pred?
-
-    if (contains(*PI)) {
-      if (Latch) return 0;  // multiple backedges
-      Latch = *PI;
-    }
-    ++PI;
-    if (PI != PE) return 0;  // more than two preds
+    for (; PI != PE; ++PI)
+      if (contains(*PI)) {
+        if (Latch) return 0;
+        Latch = *PI;
+      }
 
     return Latch;
   }
@@ -465,7 +455,7 @@ public:
       WriteAsOperand(OS, BB, false);
       if (BB == getHeader())    OS << "<header>";
       if (BB == getLoopLatch()) OS << "<latch>";
-      if (isLoopExit(BB))       OS << "<exit>";
+      if (isLoopExiting(BB))    OS << "<exiting>";
     }
     OS << "\n";
 
@@ -572,6 +562,10 @@ public:
   /// normal form.
   bool isLoopSimplifyForm() const;
 
+  /// hasDedicatedExits - Return true if no exit block for the loop
+  /// has a predecessor that is outside the loop.
+  bool hasDedicatedExits() const;
+
   /// getUniqueExitBlocks - Return all unique successor blocks of this loop. 
   /// These are the blocks _outside of the current loop_ which are branched to.
   /// This assumes that loop is in canonical form.
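
A short sketch combining the renamed query with the generalized latch lookup (getLoopLatch still returns null unless the header has exactly one in-loop predecessor, but no longer insists on exactly two predecessors overall):

    #include "llvm/Analysis/LoopInfo.h"
    using namespace llvm;

    // True for the common rotated-loop shape where the unique latch
    // block also exits the loop.
    bool latchExits(Loop *L) {
      BasicBlock *Latch = L->getLoopLatch();
      return Latch && L->isLoopExiting(Latch);
    }
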
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/LoopVR.h b/libclamav/c++/llvm/include/llvm/Analysis/LoopVR.h
deleted file mode 100644
index 3b098e6..0000000
--- a/libclamav/c++/llvm/include/llvm/Analysis/LoopVR.h
+++ /dev/null
@@ -1,85 +0,0 @@
-//===- LoopVR.cpp - Value Range analysis driven by loop information -------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file defines the interface for the loop-driven value range pass.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_ANALYSIS_LOOPVR_H
-#define LLVM_ANALYSIS_LOOPVR_H
-
-#include "llvm/Pass.h"
-#include "llvm/Analysis/ScalarEvolution.h"
-#include "llvm/Support/ConstantRange.h"
-#include <map>
-
-namespace llvm {
-
-/// LoopVR - This class maintains a mapping of Values to ConstantRanges.
-/// There are interfaces to look up and update ranges by value, and for
-/// accessing all values with range information.
-///
-class LoopVR : public FunctionPass {
-public:
-  static char ID; // Class identification, replacement for typeinfo
-
-  LoopVR() : FunctionPass(&ID) {}
-
-  bool runOnFunction(Function &F);
-  virtual void print(raw_ostream &os, const Module *) const;
-  void releaseMemory();
-
-  void getAnalysisUsage(AnalysisUsage &AU) const;
-
-  //===---------------------------------------------------------------------
-  // Methods that are used to look up and update particular values.
-
-  /// get - return the ConstantRange for a given Value of IntegerType.
-  ConstantRange get(Value *V);
-
-  /// remove - remove a value from this analysis.
-  void remove(Value *V);
-
-  /// narrow - improve our unterstanding of a Value by pointing out that it
-  /// must fall within ConstantRange. To replace a range, remove it first.
-  void narrow(Value *V, const ConstantRange &CR);
-
-  //===---------------------------------------------------------------------
-  // Methods that are used to iterate across all values with information.
-
-  /// size - returns the number of Values with information
-  unsigned size() const { return Map.size(); }
-
-  typedef std::map<Value *, ConstantRange *>::iterator iterator;
-
-  /// begin - return an iterator to the first Value, ConstantRange pair
-  iterator begin() { return Map.begin(); }
-
-  /// end - return an iterator one past the last Value, ConstantRange pair
-  iterator end() { return Map.end(); }
-
-  /// getValue - return the Value referenced by an iterator
-  Value *getValue(iterator I) { return I->first; }
-
-  /// getConstantRange - return the ConstantRange referenced by an iterator
-  ConstantRange getConstantRange(iterator I) { return *I->second; }
-
-private:
-  ConstantRange compute(Value *V);
-
-  ConstantRange getRange(const SCEV *S, Loop *L, ScalarEvolution &SE);
-
-  ConstantRange getRange(const SCEV *S, const SCEV *T, ScalarEvolution &SE);
-
-  std::map<Value *, ConstantRange *> Map;
-};
-
-} // end llvm namespace
-
-#endif
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/MallocHelper.h b/libclamav/c++/llvm/include/llvm/Analysis/MallocHelper.h
deleted file mode 100644
index 0588dff..0000000
--- a/libclamav/c++/llvm/include/llvm/Analysis/MallocHelper.h
+++ /dev/null
@@ -1,86 +0,0 @@
-//===- llvm/Analysis/MallocHelper.h ---- Identify malloc calls --*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This family of functions identifies calls to malloc, bitcasts of malloc
-// calls, and the types and array sizes associated with them.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_ANALYSIS_MALLOCHELPER_H
-#define LLVM_ANALYSIS_MALLOCHELPER_H
-
-namespace llvm {
-class CallInst;
-class LLVMContext;
-class PointerType;
-class TargetData;
-class Type;
-class Value;
-
-//===----------------------------------------------------------------------===//
-//  malloc Call Utility Functions.
-//
-
-/// isMalloc - Returns true if the value is either a malloc call or a
-/// bitcast of the result of a malloc call
-bool isMalloc(const Value* I);
-
-/// extractMallocCall - Returns the corresponding CallInst if the instruction
-/// is a malloc call.  Since CallInst::CreateMalloc() only creates calls, we
-/// ignore InvokeInst here.
-const CallInst* extractMallocCall(const Value* I);
-CallInst* extractMallocCall(Value* I);
-
-/// extractMallocCallFromBitCast - Returns the corresponding CallInst if the
-/// instruction is a bitcast of the result of a malloc call.
-const CallInst* extractMallocCallFromBitCast(const Value* I);
-CallInst* extractMallocCallFromBitCast(Value* I);
-
-/// isArrayMalloc - Returns the corresponding CallInst if the instruction 
-/// matches the malloc call IR generated by CallInst::CreateMalloc().  This 
-/// means that it is a malloc call with one bitcast use AND the malloc call's 
-/// size argument is:
-///  1. a constant not equal to the malloc's allocated type
-/// or
-///  2. the result of a multiplication by the malloc's allocated type
-/// Otherwise it returns NULL.
-/// The unique bitcast is needed to determine the type/size of the array
-/// allocation.
-CallInst* isArrayMalloc(Value* I, LLVMContext &Context, const TargetData* TD);
-const CallInst* isArrayMalloc(const Value* I, LLVMContext &Context,
-                              const TargetData* TD);
-
-/// getMallocType - Returns the PointerType resulting from the malloc call.
-/// This PointerType is the result type of the call's only bitcast use.
-/// If there is no unique bitcast use, then return NULL.
-const PointerType* getMallocType(const CallInst* CI);
-
-/// getMallocAllocatedType - Returns the Type allocated by malloc call. This
-/// Type is the result type of the call's only bitcast use. If there is no
-/// unique bitcast use, then return NULL.
-const Type* getMallocAllocatedType(const CallInst* CI);
-
-/// getMallocArraySize - Returns the array size of a malloc call.  The array
-/// size is computed in 1 of 3 ways:
-///  1. If the element type is of size 1, then the array size is the argument to 
-///     malloc.
-///  2. Else if the malloc's argument is a constant, the array size is that
-///     argument divided by the element type's size.
-///  3. Else the malloc argument must be a multiplication and the array size is
-///     the first operand of the multiplication.
-/// This function returns constant 1 if:
-///  1. The malloc call's allocated type cannot be determined.
-///  2. IR wasn't created by a call to CallInst::CreateMalloc() with a non-NULL
-///     ArraySize.
-Value* getMallocArraySize(CallInst* CI, LLVMContext &Context,
-                          const TargetData* TD);
-
-} // End llvm namespace
-
-#endif
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/MemoryBuiltins.h b/libclamav/c++/llvm/include/llvm/Analysis/MemoryBuiltins.h
new file mode 100644
index 0000000..f6fa0c8
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/Analysis/MemoryBuiltins.h
@@ -0,0 +1,80 @@
+//===- llvm/Analysis/MemoryBuiltins.h- Calls to memory builtins -*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This family of functions identifies calls to builtin functions that allocate
+// or free memory.  
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_ANALYSIS_MEMORYBUILTINS_H
+#define LLVM_ANALYSIS_MEMORYBUILTINS_H
+
+namespace llvm {
+class CallInst;
+class PointerType;
+class TargetData;
+class Type;
+class Value;
+
+//===----------------------------------------------------------------------===//
+//  malloc Call Utility Functions.
+//
+
+/// isMalloc - Returns true if the value is either a malloc call or a bitcast of 
+/// the result of a malloc call
+bool isMalloc(const Value *I);
+
+/// extractMallocCall - Returns the corresponding CallInst if the instruction
+/// is a malloc call.  Since CallInst::CreateMalloc() only creates calls, we
+/// ignore InvokeInst here.
+const CallInst *extractMallocCall(const Value *I);
+CallInst *extractMallocCall(Value *I);
+
+/// extractMallocCallFromBitCast - Returns the corresponding CallInst if the
+/// instruction is a bitcast of the result of a malloc call.
+const CallInst *extractMallocCallFromBitCast(const Value *I);
+CallInst *extractMallocCallFromBitCast(Value *I);
+
+/// isArrayMalloc - Returns the corresponding CallInst if the instruction 
+/// is a call to malloc whose array size can be determined and the array size
+/// is not constant 1.  Otherwise, return NULL.
+const CallInst *isArrayMalloc(const Value *I, const TargetData *TD);
+
+/// getMallocType - Returns the PointerType resulting from the malloc call.
+/// The PointerType depends on the number of bitcast uses of the malloc call:
+///   0: PointerType is the malloc call's return type.
+///   1: PointerType is the bitcast's result type.
+///  >1: Unique PointerType cannot be determined, return NULL.
+const PointerType *getMallocType(const CallInst *CI);
+
+/// getMallocAllocatedType - Returns the Type allocated by a malloc call.
+/// The Type depends on the number of bitcast uses of the malloc call:
+///   0: the Type is the element type of the malloc call's return type.
+///   1: the Type is the element type of the bitcast's result type.
+///  >1: a unique Type cannot be determined, return NULL.
+const Type *getMallocAllocatedType(const CallInst *CI);
+
+/// getMallocArraySize - Returns the array size of a malloc call.  If the 
+/// argument passed to malloc is a multiple of the size of the malloced type,
+/// then return that multiple.  For non-array mallocs, the multiple is
+/// constant 1.  Otherwise, return NULL for mallocs whose array size cannot be
+/// determined.
+Value *getMallocArraySize(CallInst *CI, const TargetData *TD,
+                          bool LookThroughSExt = false);
+                          
+//===----------------------------------------------------------------------===//
+//  free Call Utility Functions.
+//
+
+/// isFreeCall - Returns true if the value is a call to the builtin free()
+bool isFreeCall(const Value *I);
+
+} // End llvm namespace
+
+#endif
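
A minimal sketch of the intended query pattern against this new header; the helper name and the const_cast are illustrative assumptions, not part of the patch:

#include "llvm/Analysis/MemoryBuiltins.h"
#include "llvm/Instructions.h"
using namespace llvm;

// Returns the array size Value for an array malloc, or null otherwise.
static Value *arraySizeOf(Value *V, const TargetData *TD) {
  if (const CallInst *CI = isArrayMalloc(V, TD))
    return getMallocArraySize(const_cast<CallInst*>(CI), TD);
  return 0;
}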
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/MemoryDependenceAnalysis.h b/libclamav/c++/llvm/include/llvm/Analysis/MemoryDependenceAnalysis.h
index 205c34a..042c7fc 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/MemoryDependenceAnalysis.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/MemoryDependenceAnalysis.h
@@ -244,6 +244,20 @@ namespace llvm {
                                       BasicBlock *BB,
                                      SmallVectorImpl<NonLocalDepEntry> &Result);
     
+    /// PHITranslatePointer - Find an available version of the specified value
+    /// PHI translated across the specified edge.  If MemDep isn't able to
+    /// satisfy this request, it returns null.
+    Value *PHITranslatePointer(Value *V,
+                               BasicBlock *CurBB, BasicBlock *PredBB,
+                               const TargetData *TD) const;
+
+    /// InsertPHITranslatedPointer - Insert a computation of the PHI translated
+    /// version of 'V' for the edge PredBB->CurBB into the end of the PredBB
+    /// block.
+    Value *InsertPHITranslatedPointer(Value *V,
+                                      BasicBlock *CurBB, BasicBlock *PredBB,
+                                      const TargetData *TD) const;
+    
     /// removeInstruction - Remove an instruction from the dependence analysis,
     /// updating the dependence of instructions that previously depended on it.
     void removeInstruction(Instruction *InstToRemove);
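
A hedged sketch of how a load-PRE style client might combine the two new hooks; MD, Ptr, CurBB, PredBB and TD are assumed to be in scope:

// First look for an already-available translated value, then fall back
// to materializing the address computation at the end of the predecessor.
Value *Addr = MD.PHITranslatePointer(Ptr, CurBB, PredBB, TD);
if (Addr == 0)
  Addr = MD.InsertPHITranslatedPointer(Ptr, CurBB, PredBB, TD);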
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/Passes.h b/libclamav/c++/llvm/include/llvm/Analysis/Passes.h
index 66ab3ea..b222321 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/Passes.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/Passes.h
@@ -139,6 +139,12 @@ namespace llvm {
   // createLiveValuesPass - This creates an instance of the LiveValues pass.
   //
   FunctionPass *createLiveValuesPass();
+  
+  //===--------------------------------------------------------------------===//
+  //
+  /// createLazyValueInfoPass - This creates an instance of the LazyValueInfo
+  /// pass.
+  FunctionPass *createLazyValueInfoPass();
 
   //===--------------------------------------------------------------------===//
   //
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/PostDominators.h b/libclamav/c++/llvm/include/llvm/Analysis/PostDominators.h
index 171cfdb..42a16e7 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/PostDominators.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/PostDominators.h
@@ -74,6 +74,21 @@ struct PostDominatorTree : public FunctionPass {
 
 FunctionPass* createPostDomTree();
 
+template <> struct GraphTraits<PostDominatorTree*>
+  : public GraphTraits<DomTreeNode*> {
+  static NodeType *getEntryNode(PostDominatorTree *DT) {
+    return DT->getRootNode();
+  }
+
+  static nodes_iterator nodes_begin(PostDominatorTree *N) {
+    return df_begin(getEntryNode(N));
+  }
+
+  static nodes_iterator nodes_end(PostDominatorTree *N) {
+    return df_end(getEntryNode(N));
+  }
+};
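
A minimal sketch of what the specialization enables (function name hypothetical): the generic depth-first iterator can now walk a post-dominator tree directly.

#include "llvm/Analysis/PostDominators.h"
#include "llvm/ADT/DepthFirstIterator.h"
using namespace llvm;

static unsigned countPostDomTreeNodes(PostDominatorTree *PDT) {
  unsigned N = 0;
  for (df_iterator<PostDominatorTree*> I = df_begin(PDT), E = df_end(PDT);
       I != E; ++I)
    ++N;  // *I is a DomTreeNode*
  return N;
}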
+
 /// PostDominanceFrontier Class - Concrete subclass of DominanceFrontier that is
 /// used to compute the post-dominance frontier.
 ///
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolution.h b/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolution.h
index cb4d7c1..4aa3dfa 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolution.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolution.h
@@ -24,7 +24,7 @@
 #include "llvm/Pass.h"
 #include "llvm/Instructions.h"
 #include "llvm/Function.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/Support/ValueHandle.h"
 #include "llvm/Support/Allocator.h"
 #include "llvm/Support/ConstantRange.h"
@@ -402,37 +402,45 @@ namespace llvm {
     const SCEV *getZeroExtendExpr(const SCEV *Op, const Type *Ty);
     const SCEV *getSignExtendExpr(const SCEV *Op, const Type *Ty);
     const SCEV *getAnyExtendExpr(const SCEV *Op, const Type *Ty);
-    const SCEV *getAddExpr(SmallVectorImpl<const SCEV *> &Ops);
-    const SCEV *getAddExpr(const SCEV *LHS, const SCEV *RHS) {
+    const SCEV *getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
+                           bool HasNUW = false, bool HasNSW = false);
+    const SCEV *getAddExpr(const SCEV *LHS, const SCEV *RHS,
+                           bool HasNUW = false, bool HasNSW = false) {
       SmallVector<const SCEV *, 2> Ops;
       Ops.push_back(LHS);
       Ops.push_back(RHS);
-      return getAddExpr(Ops);
+      return getAddExpr(Ops, HasNUW, HasNSW);
     }
     const SCEV *getAddExpr(const SCEV *Op0, const SCEV *Op1,
-                           const SCEV *Op2) {
+                           const SCEV *Op2,
+                           bool HasNUW = false, bool HasNSW = false) {
       SmallVector<const SCEV *, 3> Ops;
       Ops.push_back(Op0);
       Ops.push_back(Op1);
       Ops.push_back(Op2);
-      return getAddExpr(Ops);
+      return getAddExpr(Ops, HasNUW, HasNSW);
     }
-    const SCEV *getMulExpr(SmallVectorImpl<const SCEV *> &Ops);
-    const SCEV *getMulExpr(const SCEV *LHS, const SCEV *RHS) {
+    const SCEV *getMulExpr(SmallVectorImpl<const SCEV *> &Ops,
+                           bool HasNUW = false, bool HasNSW = false);
+    const SCEV *getMulExpr(const SCEV *LHS, const SCEV *RHS,
+                           bool HasNUW = false, bool HasNSW = false) {
       SmallVector<const SCEV *, 2> Ops;
       Ops.push_back(LHS);
       Ops.push_back(RHS);
-      return getMulExpr(Ops);
+      return getMulExpr(Ops, HasNUW, HasNSW);
     }
     const SCEV *getUDivExpr(const SCEV *LHS, const SCEV *RHS);
     const SCEV *getAddRecExpr(const SCEV *Start, const SCEV *Step,
-                              const Loop *L);
+                              const Loop *L,
+                              bool HasNUW = false, bool HasNSW = false);
     const SCEV *getAddRecExpr(SmallVectorImpl<const SCEV *> &Operands,
-                              const Loop *L);
+                              const Loop *L,
+                              bool HasNUW = false, bool HasNSW = false);
     const SCEV *getAddRecExpr(const SmallVectorImpl<const SCEV *> &Operands,
-                              const Loop *L) {
+                              const Loop *L,
+                              bool HasNUW = false, bool HasNSW = false) {
       SmallVector<const SCEV *, 4> NewOp(Operands.begin(), Operands.end());
-      return getAddRecExpr(NewOp, L);
+      return getAddRecExpr(NewOp, L, HasNUW, HasNSW);
     }
     const SCEV *getSMaxExpr(const SCEV *LHS, const SCEV *RHS);
     const SCEV *getSMaxExpr(SmallVectorImpl<const SCEV *> &Operands);
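
A hedged sketch of the new flag parameters; SE, A and B (a ScalarEvolution and two SCEVs) are assumed context:

// A client that has proved "A + B cannot wrap unsigned" records that
// fact on the resulting expression.
const SCEV *Sum = SE.getAddExpr(A, B, /*HasNUW=*/true, /*HasNSW=*/false);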
@@ -555,11 +563,10 @@ namespace llvm {
     /// has an analyzable loop-invariant backedge-taken count.
     bool hasLoopInvariantBackedgeTakenCount(const Loop *L);
 
-    /// forgetLoopBackedgeTakenCount - This method should be called by the
-    /// client when it has changed a loop in a way that may effect
-    /// ScalarEvolution's ability to compute a trip count, or if the loop
-    /// is deleted.
-    void forgetLoopBackedgeTakenCount(const Loop *L);
+    /// forgetLoop - This method should be called by the client when it has
+    /// changed a loop in a way that may effect ScalarEvolution's ability to
+    /// compute a trip count, or if the loop is deleted.
+    void forgetLoop(const Loop *L);
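
A hedged sketch of the widened invalidation entry point; SE and L are assumed in scope:

// After rewriting an exit branch or deleting the loop, drop everything
// ScalarEvolution cached about it, not only the backedge-taken count.
SE.forgetLoop(L);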
 
     /// GetMinTrailingZeros - Determine the minimum number of zero bits that S
     /// is guaranteed to end in (at every loop iteration).  It is, at the same
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpander.h b/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpander.h
index 915227d..bbdd043 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpander.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpander.h
@@ -38,8 +38,7 @@ namespace llvm {
     friend struct SCEVVisitor<SCEVExpander, Value*>;
   public:
     explicit SCEVExpander(ScalarEvolution &se)
-      : SE(se), Builder(se.getContext(),
-                        TargetFolder(se.TD, se.getContext())) {}
+      : SE(se), Builder(se.getContext(), TargetFolder(se.TD)) {}
 
     /// clear - Erase the contents of the InsertedExpressions map so that users
     /// trying to expand the same expression into multiple BasicBlocks or
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpressions.h b/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpressions.h
index 67f5e06..2c50350 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpressions.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpressions.h
@@ -234,6 +234,15 @@ namespace llvm {
 
     virtual const Type *getType() const { return getOperand(0)->getType(); }
 
+    bool hasNoUnsignedWrap() const { return SubclassData & (1 << 0); }
+    void setHasNoUnsignedWrap(bool B) {
+      SubclassData = (SubclassData & ~(1 << 0)) | (B << 0);
+    }
+    bool hasNoSignedWrap() const { return SubclassData & (1 << 1); }
+    void setHasNoSignedWrap(bool B) {
+      SubclassData = (SubclassData & ~(1 << 1)) | (B << 1);
+    }
+
     /// Methods for support type inquiry through isa, cast, and dyn_cast:
     static inline bool classof(const SCEVNAryExpr *S) { return true; }
     static inline bool classof(const SCEV *S) {
@@ -436,15 +445,6 @@ namespace llvm {
       return cast<SCEVAddRecExpr>(SE.getAddExpr(this, getStepRecurrence(SE)));
     }
 
-    bool hasNoUnsignedWrap() const { return SubclassData & (1 << 0); }
-    void setHasNoUnsignedWrap(bool B) {
-      SubclassData = (SubclassData & ~(1 << 0)) | (B << 0);
-    }
-    bool hasNoSignedWrap() const { return SubclassData & (1 << 1); }
-    void setHasNoSignedWrap(bool B) {
-      SubclassData = (SubclassData & ~(1 << 1)) | (B << 1);
-    }
-
     virtual void print(raw_ostream &OS) const;
 
     /// Methods for support type inquiry through isa, cast, and dyn_cast:
@@ -464,6 +464,9 @@ namespace llvm {
     SCEVSMaxExpr(const FoldingSetNodeID &ID,
                  const SmallVectorImpl<const SCEV *> &ops)
       : SCEVCommutativeExpr(ID, scSMaxExpr, ops) {
+      // Max never overflows.
+      setHasNoUnsignedWrap(true);
+      setHasNoSignedWrap(true);
     }
 
   public:
@@ -486,6 +489,9 @@ namespace llvm {
     SCEVUMaxExpr(const FoldingSetNodeID &ID,
                  const SmallVectorImpl<const SCEV *> &ops)
       : SCEVCommutativeExpr(ID, scUMaxExpr, ops) {
+      // Max never overflows.
+      setHasNoUnsignedWrap(true);
+      setHasNoSignedWrap(true);
     }
 
   public:
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/SparsePropagation.h b/libclamav/c++/llvm/include/llvm/Analysis/SparsePropagation.h
index 820e1bd..677d41d 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/SparsePropagation.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/SparsePropagation.h
@@ -153,7 +153,7 @@ public:
   /// value.  If a value is not in the map, it is returned as untracked,
   /// unlike the getOrInitValueState method.
   LatticeVal getLatticeState(Value *V) const {
-    DenseMap<Value*, LatticeVal>::iterator I = ValueState.find(V);
+    DenseMap<Value*, LatticeVal>::const_iterator I = ValueState.find(V);
     return I != ValueState.end() ? I->second : LatticeFunc->getUntrackedVal();
   }
   
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/ValueTracking.h b/libclamav/c++/llvm/include/llvm/Analysis/ValueTracking.h
index 212b5d1..5f3c671 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/ValueTracking.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/ValueTracking.h
@@ -15,10 +15,11 @@
 #ifndef LLVM_ANALYSIS_VALUETRACKING_H
 #define LLVM_ANALYSIS_VALUETRACKING_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <string>
 
 namespace llvm {
+  template <typename T> class SmallVectorImpl;
   class Value;
   class Instruction;
   class APInt;
@@ -63,11 +64,40 @@ namespace llvm {
   unsigned ComputeNumSignBits(Value *Op, const TargetData *TD = 0,
                               unsigned Depth = 0);
 
+  /// ComputeMultiple - This function computes the integer multiple of Base that
+  /// equals V.  If successful, it returns true and stores the multiple in
+  /// Multiple.  If unsuccessful, it returns false.  Also, if V can be
+  /// simplified to an integer, the simplified V is returned in Multiple.  Look
+  /// through sext only if LookThroughSExt=true.
+  bool ComputeMultiple(Value *V, unsigned Base, Value *&Multiple,
+                       bool LookThroughSExt = false,
+                       unsigned Depth = 0);
+
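
A hedged sketch of the new query; V is an assumed in-scope Value*:

// Test whether V is provably Base * Multiple for Base == 8; on success,
// Multiple (possibly a non-constant Value) satisfies V == 8 * Multiple.
Value *Multiple = 0;
bool IsMultipleOf8 = ComputeMultiple(V, 8, Multiple, /*LookThroughSExt=*/true);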
   /// CannotBeNegativeZero - Return true if we can prove that the specified FP 
   /// value is never equal to -0.0.
   ///
   bool CannotBeNegativeZero(const Value *V, unsigned Depth = 0);
 
+  /// DecomposeGEPExpression - If V is a symbolic pointer expression, decompose
+  /// it into a base pointer with a constant offset and a number of scaled
+  /// symbolic offsets.
+  ///
+  /// The scaled symbolic offsets (represented by pairs of a Value* and a scale
+  /// in the VarIndices vector) are Value*'s that are known to be scaled by the
+  /// specified amount, but which may have other unrepresented high bits. As
+  /// such, the gep cannot necessarily be reconstructed from its decomposed
+  /// form.
+  ///
+  /// When TargetData is around, this function is capable of analyzing
+  /// everything that Value::getUnderlyingObject() can look through.  When not,
+  /// it just looks through pointer casts.
+  ///
+  const Value *DecomposeGEPExpression(const Value *V, int64_t &BaseOffs,
+                 SmallVectorImpl<std::pair<const Value*, int64_t> > &VarIndices,
+                                      const TargetData *TD);
+    
+  
+  
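
A hedged sketch of the decomposition call; V and TD are assumed in scope:

// Base pointer plus a constant offset plus scaled symbolic indices:
// V ~ Base + BaseOffs + sum(VarIndices[i].first * VarIndices[i].second).
int64_t BaseOffs = 0;
SmallVector<std::pair<const Value*, int64_t>, 4> VarIndices;
const Value *Base = DecomposeGEPExpression(V, BaseOffs, VarIndices, TD);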
   /// FindInsertedValue - Given an aggregate and a sequence of indices, see if
   /// the scalar value indexed is already around as a register, for example if
   /// it were inserted directly into the aggregate.
@@ -77,16 +107,14 @@ namespace llvm {
   Value *FindInsertedValue(Value *V,
                            const unsigned *idx_begin,
                            const unsigned *idx_end,
-                           LLVMContext &Context,
                            Instruction *InsertBefore = 0);
 
   /// This is a convenience wrapper for finding values indexed by a single index
   /// only.
   inline Value *FindInsertedValue(Value *V, const unsigned Idx,
-                                  LLVMContext &Context,
                                   Instruction *InsertBefore = 0) {
     const unsigned Idxs[1] = { Idx };
-    return FindInsertedValue(V, &Idxs[0], &Idxs[1], Context, InsertBefore);
+    return FindInsertedValue(V, &Idxs[0], &Idxs[1], InsertBefore);
   }
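
A hedged sketch of the slimmed-down wrapper; Agg is an assumed in-scope aggregate Value*:

// No LLVMContext argument is threaded through any more.
Value *Field0 = FindInsertedValue(Agg, 0u);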
   
   /// GetConstantStringInfo - This function computes the length of a
diff --git a/libclamav/c++/llvm/include/llvm/BasicBlock.h b/libclamav/c++/llvm/include/llvm/BasicBlock.h
index b497827..80d8702 100644
--- a/libclamav/c++/llvm/include/llvm/BasicBlock.h
+++ b/libclamav/c++/llvm/include/llvm/BasicBlock.h
@@ -17,12 +17,13 @@
 #include "llvm/Instruction.h"
 #include "llvm/SymbolTableListTraits.h"
 #include "llvm/ADT/ilist.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
 
 class TerminatorInst;
 class LLVMContext;
+class BlockAddress;
 
 template<> struct ilist_traits<Instruction>
   : public SymbolTableListTraits<Instruction, BasicBlock> {
@@ -66,7 +67,7 @@ private:
 /// @brief LLVM Basic Block Representation
 class BasicBlock : public Value, // Basic blocks are data objects also
                    public ilist_node<BasicBlock> {
-
+  friend class BlockAddress;
 public:
   typedef iplist<Instruction> InstListType;
 private:
@@ -108,10 +109,10 @@ public:
         Function *getParent()       { return Parent; }
 
   /// use_back - Specialize the methods defined in Value, as we know that an
-  /// BasicBlock can only be used by Instructions (specifically PHI nodes and
-  /// terminators).
-  Instruction       *use_back()       { return cast<Instruction>(*use_begin());}
-  const Instruction *use_back() const { return cast<Instruction>(*use_begin());}
+  /// BasicBlock can only be used by Users (specifically PHI nodes, terminators,
+  /// and BlockAddresses).
+  User       *use_back()       { return cast<User>(*use_begin());}
+  const User *use_back() const { return cast<User>(*use_begin());}
   
   /// getTerminator() - If this is a well formed basic block, then this returns
   /// a pointer to the terminator instruction.  If it is not, then you get a
@@ -235,6 +236,19 @@ public:
   /// keeping loop information consistent, use the SplitBlock utility function.
   ///
   BasicBlock *splitBasicBlock(iterator I, const Twine &BBName = "");
+
+  /// hasAddressTaken - returns true if there are any uses of this basic block
+  /// other than direct branches, switches, etc. to it.
+  bool hasAddressTaken() const { return SubclassData != 0; }
+                     
+private:
+  /// AdjustBlockAddressRefCount - BasicBlock stores the number of BlockAddress
+  /// objects using it.  This is almost always 0, sometimes one, possibly but
+  /// almost never 2, and inconceivably 3 or more.
+  void AdjustBlockAddressRefCount(int Amt) {
+    SubclassData += Amt;
+    assert((int)(signed char)SubclassData >= 0 && "Refcount wrap-around");
+  }
 };
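
A hedged sketch of the intended O(1) query; BB is assumed in scope:

// The BlockAddress refcount makes this a constant-time test, so CFG
// cleanups can bail out cheaply before folding the block away.
if (BB->hasAddressTaken())
  return false;  // deleting BB would strand blockaddress() users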
 
 } // End llvm namespace
diff --git a/libclamav/c++/llvm/include/llvm/Bitcode/BitCodes.h b/libclamav/c++/llvm/include/llvm/Bitcode/BitCodes.h
index 449dc35..ada2e65 100644
--- a/libclamav/c++/llvm/include/llvm/Bitcode/BitCodes.h
+++ b/libclamav/c++/llvm/include/llvm/Bitcode/BitCodes.h
@@ -19,7 +19,7 @@
 #define LLVM_BITCODE_BITCODES_H
 
 #include "llvm/ADT/SmallVector.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <cassert>
 
 namespace llvm {
diff --git a/libclamav/c++/llvm/include/llvm/Bitcode/BitstreamWriter.h b/libclamav/c++/llvm/include/llvm/Bitcode/BitstreamWriter.h
index e48a190..2b1b85e 100644
--- a/libclamav/c++/llvm/include/llvm/Bitcode/BitstreamWriter.h
+++ b/libclamav/c++/llvm/include/llvm/Bitcode/BitstreamWriter.h
@@ -294,7 +294,7 @@ private:
   /// known to exist at the end of the record.
   template<typename uintty>
   void EmitRecordWithAbbrevImpl(unsigned Abbrev, SmallVectorImpl<uintty> &Vals,
-                                const StringRef &Blob) {
+                                StringRef Blob) {
     const char *BlobData = Blob.data();
     unsigned BlobLen = (unsigned) Blob.size();
     unsigned AbbrevNo = Abbrev-bitc::FIRST_APPLICATION_ABBREV;
@@ -422,7 +422,7 @@ public:
   /// of the record.
   template<typename uintty>
   void EmitRecordWithBlob(unsigned Abbrev, SmallVectorImpl<uintty> &Vals,
-                          const StringRef &Blob) {
+                          StringRef Blob) {
     EmitRecordWithAbbrevImpl(Abbrev, Vals, Blob);
   }
   template<typename uintty>
@@ -435,7 +435,7 @@ public:
   /// that end with an array.
   template<typename uintty>
   void EmitRecordWithArray(unsigned Abbrev, SmallVectorImpl<uintty> &Vals,
-                          const StringRef &Array) {
+                          StringRef Array) {
     EmitRecordWithAbbrevImpl(Abbrev, Vals, Array);
   }
   template<typename uintty>
diff --git a/libclamav/c++/llvm/include/llvm/Bitcode/Deserialize.h b/libclamav/c++/llvm/include/llvm/Bitcode/Deserialize.h
index 3e90227..90a5141 100644
--- a/libclamav/c++/llvm/include/llvm/Bitcode/Deserialize.h
+++ b/libclamav/c++/llvm/include/llvm/Bitcode/Deserialize.h
@@ -20,7 +20,7 @@
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/Support/Allocator.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <vector>
 
 namespace llvm {
diff --git a/libclamav/c++/llvm/include/llvm/Bitcode/LLVMBitCodes.h b/libclamav/c++/llvm/include/llvm/Bitcode/LLVMBitCodes.h
index dccd8e0..c037399 100644
--- a/libclamav/c++/llvm/include/llvm/Bitcode/LLVMBitCodes.h
+++ b/libclamav/c++/llvm/include/llvm/Bitcode/LLVMBitCodes.h
@@ -138,7 +138,8 @@ namespace bitc {
     CST_CODE_CE_CMP        = 17,  // CE_CMP:        [opty, opval, opval, pred]
     CST_CODE_INLINEASM     = 18,  // INLINEASM:     [sideeffect,asmstr,conststr]
     CST_CODE_CE_SHUFVEC_EX = 19,  // SHUFVEC_EX:    [opty, opval, opval, opval]
-    CST_CODE_CE_INBOUNDS_GEP = 20 // INBOUNDS_GEP:  [n x operands]
+    CST_CODE_CE_INBOUNDS_GEP = 20,// INBOUNDS_GEP:  [n x operands]
+    CST_CODE_BLOCKADDRESS  = 21   // CST_CODE_BLOCKADDRESS [fnty, fnval, bb#]
   };
 
   /// CastOpcodes - These are values used in the bitcode files to encode which
@@ -209,7 +210,7 @@ namespace bitc {
 
     FUNC_CODE_INST_RET         = 10, // RET:        [opty,opval<both optional>]
     FUNC_CODE_INST_BR          = 11, // BR:         [bb#, bb#, cond] or [bb#]
-    FUNC_CODE_INST_SWITCH      = 12, // SWITCH:     [opty, opval, n, n x ops]
+    FUNC_CODE_INST_SWITCH      = 12, // SWITCH:     [opty, op0, op1, ...]
     FUNC_CODE_INST_INVOKE      = 13, // INVOKE:     [attr, fnty, op0,op1, ...]
     FUNC_CODE_INST_UNWIND      = 14, // UNWIND
     FUNC_CODE_INST_UNREACHABLE = 15, // UNREACHABLE
@@ -236,7 +237,8 @@ namespace bitc {
     FUNC_CODE_INST_CMP2        = 28, // CMP2:       [opty, opval, opval, pred]
     // new select on i1 or [N x i1]
     FUNC_CODE_INST_VSELECT     = 29, // VSELECT:    [ty,opval,opval,predty,pred]
-    FUNC_CODE_INST_INBOUNDS_GEP = 30 // INBOUNDS_GEP: [n x operands]
+    FUNC_CODE_INST_INBOUNDS_GEP= 30, // INBOUNDS_GEP: [n x operands]
+    FUNC_CODE_INST_INDIRECTBR  = 31  // INDIRECTBR: [opty, op0, op1, ...]
   };
 } // End bitc namespace
 } // End llvm namespace
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/AsmPrinter.h b/libclamav/c++/llvm/include/llvm/CodeGen/AsmPrinter.h
index 2f0e8d0..9a07e31 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/AsmPrinter.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/AsmPrinter.h
@@ -22,6 +22,7 @@
 #include "llvm/ADT/DenseMap.h"
 
 namespace llvm {
+  class BlockAddress;
   class GCStrategy;
   class Constant;
   class ConstantArray;
@@ -85,6 +86,14 @@ namespace llvm {
     DwarfWriter *DW;
 
   public:
+    /// Flags to specify different kinds of comments to output in
+    /// assembly code.  These flags carry semantic information not
+    /// otherwise easily derivable from the IR text.
+    ///
+    enum CommentFlag {
+      ReloadReuse = 0x1
+    };
+
     /// Output stream on which we're printing assembly code.
     ///
     formatted_raw_ostream &O;
@@ -141,7 +150,7 @@ namespace llvm {
     mutable const Function *LastFn;
     mutable unsigned Counter;
     
-    // Private state for processDebugLock()
+    // Private state for processDebugLoc()
     mutable DebugLocTuple PrevDLT;
 
   protected:
@@ -171,11 +180,11 @@ namespace llvm {
 
     /// EmitStartOfAsmFile - This virtual method can be overridden by targets
     /// that want to emit something at the start of their file.
-    virtual void EmitStartOfAsmFile(Module &M) {}
+    virtual void EmitStartOfAsmFile(Module &) {}
     
     /// EmitEndOfAsmFile - This virtual method can be overridden by targets that
     /// want to emit something at the end of their file.
-    virtual void EmitEndOfAsmFile(Module &M) {}
+    virtual void EmitEndOfAsmFile(Module &) {}
     
     /// doFinalization - Shut down the asmprinter.  If you override this in your
     /// pass, you must make sure to call it explicitly.
@@ -288,7 +297,7 @@ namespace llvm {
     /// EmitString - Emit a string with quotes and a null terminator.
     /// Special characters are emitted properly.
     /// @verbatim (Eg. '\t') @endverbatim
-    void EmitString(const std::string &String) const;
+    void EmitString(const StringRef String) const;
     void EmitString(const char *String, unsigned Size) const;
 
     /// EmitFile - Emit a .file directive.
@@ -334,6 +343,14 @@ namespace llvm {
     /// block label.
     MCSymbol *GetMBBSymbol(unsigned MBBID) const;
     
+    /// GetBlockAddressSymbol - Return the MCSymbol used to satisfy BlockAddress
+    /// uses of the specified basic block.
+    MCSymbol *GetBlockAddressSymbol(const BlockAddress *BA,
+                                    const char *Suffix = "") const;
+    MCSymbol *GetBlockAddressSymbol(const Function *F,
+                                    const BasicBlock *BB,
+                                    const char *Suffix = "") const;
+
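
A hedged usage sketch; AP (an AsmPrinter) and BA (a BlockAddress*) are assumed context:

// A target printing an indirectbr jump table asks for the per-block
// symbol and emits it as data.
MCSymbol *Sym = AP.GetBlockAddressSymbol(BA);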
     /// EmitBasicBlockStart - This method prints the label for the specified
     /// MachineBasicBlock, an alignment (if present) and a comment describing
     /// it if appropriate.
@@ -357,8 +374,8 @@ namespace llvm {
     virtual void EmitMachineConstantPoolValue(MachineConstantPoolValue *MCPV);
 
     /// processDebugLoc - Processes the debug information of each machine
-    /// instruction's DebugLoc.
-    void processDebugLoc(const MachineInstr *MI);
+    /// instruction's DebugLoc. 
+    void processDebugLoc(const MachineInstr *MI, bool BeforePrintingInsn);
     
     /// printInlineAsm - This method formats and prints the specified machine
     /// instruction that is an inline asm.
@@ -366,9 +383,11 @@ namespace llvm {
 
     /// printImplicitDef - This method prints the specified machine instruction
     /// that is an implicit def.
-    virtual void printImplicitDef(const MachineInstr *MI) const;
-    
-    
+    void printImplicitDef(const MachineInstr *MI) const;
+
+    /// printKill - This method prints the specified kill machine instruction.
+    void printKill(const MachineInstr *MI) const;
+
     /// printPICJumpTableSetLabel - This method prints a set label for the
     /// specified MachineBasicBlock for a jumptable entry.
     virtual void printPICJumpTableSetLabel(unsigned uid,
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/BinaryObject.h b/libclamav/c++/llvm/include/llvm/CodeGen/BinaryObject.h
index 2d4bd73..3ade7c9 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/BinaryObject.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/BinaryObject.h
@@ -15,14 +15,14 @@
 #ifndef LLVM_CODEGEN_BINARYOBJECT_H
 #define LLVM_CODEGEN_BINARYOBJECT_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/CodeGen/MachineRelocation.h"
+#include "llvm/System/DataTypes.h"
 
 #include <string>
 #include <vector>
 
 namespace llvm {
 
-class MachineRelocation;
 typedef std::vector<uint8_t> BinaryData;
 
 class BinaryObject {
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/CallingConvLower.h b/libclamav/c++/llvm/include/llvm/CodeGen/CallingConvLower.h
index 5e730fc..45a2757 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/CallingConvLower.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/CallingConvLower.h
@@ -183,6 +183,13 @@ public:
   void AnalyzeReturn(const SmallVectorImpl<ISD::OutputArg> &Outs,
                      CCAssignFn Fn);
 
+  /// CheckReturn - Analyze the return values of a function, returning
+  /// true if the return can be performed without sret-demotion, and
+  /// false otherwise.
+  bool CheckReturn(const SmallVectorImpl<EVT> &OutTys,
+                   const SmallVectorImpl<ISD::ArgFlagsTy> &ArgsFlags,
+                   CCAssignFn Fn);
+
   /// AnalyzeCallOperands - Analyze the outgoing arguments to a call,
   /// incorporating info about the passed values into this state.
   void AnalyzeCallOperands(const SmallVectorImpl<ISD::OutputArg> &Outs,
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h b/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h
index b2acbc1..624f18a 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h
@@ -64,22 +64,22 @@ public:
 
 /// ReplaceUses - replace all uses of the old node F with the use
 /// of the new node T.
-void ReplaceUses(SDValue F, SDValue T) DISABLE_INLINE {
+DISABLE_INLINE void ReplaceUses(SDValue F, SDValue T) {
   ISelUpdater ISU(ISelPosition);
   CurDAG->ReplaceAllUsesOfValueWith(F, T, &ISU);
 }
 
 /// ReplaceUses - replace all uses of the old nodes F with the use
 /// of the new nodes T.
-void ReplaceUses(const SDValue *F, const SDValue *T,
-                 unsigned Num) DISABLE_INLINE {
+DISABLE_INLINE void ReplaceUses(const SDValue *F, const SDValue *T,
+                                unsigned Num) {
   ISelUpdater ISU(ISelPosition);
   CurDAG->ReplaceAllUsesOfValuesWith(F, T, Num, &ISU);
 }
 
 /// ReplaceUses - replace all uses of the old node F with the use
 /// of the new node T.
-void ReplaceUses(SDNode *F, SDNode *T) DISABLE_INLINE {
+DISABLE_INLINE void ReplaceUses(SDNode *F, SDNode *T) {
   ISelUpdater ISU(ISelPosition);
   CurDAG->ReplaceAllUsesWith(F, T, &ISU);
 }
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/DwarfWriter.h b/libclamav/c++/llvm/include/llvm/CodeGen/DwarfWriter.h
index 46be8d2..460c3c7 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/DwarfWriter.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/DwarfWriter.h
@@ -87,32 +87,17 @@ public:
   /// the source line list.
   unsigned RecordSourceLine(unsigned Line, unsigned Col, MDNode *Scope);
 
-  /// RecordRegionStart - Indicate the start of a region.
-  unsigned RecordRegionStart(MDNode *N);
-
-  /// RecordRegionEnd - Indicate the end of a region.
-  unsigned RecordRegionEnd(MDNode *N);
-
   /// getRecordSourceLineCount - Count source lines.
   unsigned getRecordSourceLineCount();
 
-  /// RecordVariable - Indicate the declaration of  a local variable.
-  ///
-  void RecordVariable(MDNode *N, unsigned FrameIndex);
-
   /// ShouldEmitDwarfDebug - Returns true if Dwarf debugging declarations should
   /// be emitted.
   bool ShouldEmitDwarfDebug() const;
 
-  //// RecordInlinedFnStart - Indicate the start of a inlined function.
-  unsigned RecordInlinedFnStart(DISubprogram SP, DICompileUnit CU,
-                                unsigned Line, unsigned Col);
-
-  /// RecordInlinedFnEnd - Indicate the end of inlined subroutine.
-  unsigned RecordInlinedFnEnd(DISubprogram SP);
+  void BeginScope(const MachineInstr *MI, unsigned Label);
+  void EndScope(const MachineInstr *MI);
 };
 
-
 } // end llvm namespace
 
 #endif
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/ELFRelocation.h b/libclamav/c++/llvm/include/llvm/CodeGen/ELFRelocation.h
index c3f88f1..e58b8df 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/ELFRelocation.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/ELFRelocation.h
@@ -14,7 +14,7 @@
 #ifndef LLVM_CODEGEN_ELF_RELOCATION_H
 #define LLVM_CODEGEN_ELF_RELOCATION_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
 
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/FastISel.h b/libclamav/c++/llvm/include/llvm/CodeGen/FastISel.h
index 6cd5519..1efd1e0 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/FastISel.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/FastISel.h
@@ -91,7 +91,7 @@ public:
   ///
   bool SelectInstruction(Instruction *I);
 
-  /// SelectInstruction - Do "fast" instruction selection for the given
+  /// SelectOperator - Do "fast" instruction selection for the given
   /// LLVM IR operator (Instruction or ConstantExpr), and append
   /// generated machine instructions to the current block. Return true
   /// if selection was successful.
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/JITCodeEmitter.h b/libclamav/c++/llvm/include/llvm/CodeGen/JITCodeEmitter.h
index 180783a..ea3e59b 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/JITCodeEmitter.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/JITCodeEmitter.h
@@ -18,7 +18,7 @@
 #define LLVM_CODEGEN_JITCODEEMITTER_H
 
 #include <string>
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/Support/MathExtras.h"
 #include "llvm/CodeGen/MachineCodeEmitter.h"
 
@@ -68,23 +68,29 @@ public:
   ///
   virtual bool finishFunction(MachineFunction &F) = 0;
   
-  /// startGVStub - This callback is invoked when the JIT needs the
-  /// address of a GV (e.g. function) that has not been code generated yet.
-  /// The StubSize specifies the total size required by the stub.
+  /// startGVStub - This callback is invoked when the JIT needs the address of a
+  /// GV (e.g. function) that has not been code generated yet.  The StubSize
+  /// specifies the total size required by the stub.  The BufferState must be
+  /// passed to finishGVStub, and start/finish pairs with the same BufferState
+  /// must be properly nested.
   ///
-  virtual void startGVStub(const GlobalValue* GV, unsigned StubSize,
-                           unsigned Alignment = 1) = 0;
+  virtual void startGVStub(BufferState &BS, const GlobalValue* GV,
+                           unsigned StubSize, unsigned Alignment = 1) = 0;
 
-  /// startGVStub - This callback is invoked when the JIT needs the address of a 
+  /// startGVStub - This callback is invoked when the JIT needs the address of a
   /// GV (e.g. function) that has not been code generated yet.  Buffer points to
-  /// memory already allocated for this stub.
+  /// memory already allocated for this stub.  The BufferState must be passed to
+  /// finishGVStub, and start/finish pairs with the same BufferState must be
+  /// properly nested.
   ///
-  virtual void startGVStub(const GlobalValue* GV, void *Buffer,
+  virtual void startGVStub(BufferState &BS, void *Buffer,
                            unsigned StubSize) = 0;
-  
-  /// finishGVStub - This callback is invoked to terminate a GV stub.
+
+  /// finishGVStub - This callback is invoked to terminate a GV stub and returns
+  /// the start address of the stub.  The BufferState must first have been
+  /// passed to startGVStub.
   ///
-  virtual void *finishGVStub(const GlobalValue* F) = 0;
+  virtual void *finishGVStub(BufferState &BS) = 0;
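
A hedged sketch of the new pairing contract; JCE and GV are assumed in scope, and BufferState is assumed to be inherited from the MachineCodeEmitter base:

JITCodeEmitter::BufferState BS;
JCE.startGVStub(BS, GV, /*StubSize=*/16, /*Alignment=*/4);
// ... emit the stub's bytes through JCE ...
void *StubAddr = JCE.finishGVStub(BS);  // pairs with the startGVStub above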
 
   /// emitByte - This callback is invoked when a byte needs to be written to the
   /// output stream.
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/LatencyPriorityQueue.h b/libclamav/c++/llvm/include/llvm/CodeGen/LatencyPriorityQueue.h
index 71fae2a..7ac0418 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/LatencyPriorityQueue.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/LatencyPriorityQueue.h
@@ -39,12 +39,14 @@ namespace llvm {
     /// predecessor for.  This is used as a tie-breaker heuristic for better
     /// mobility.
     std::vector<unsigned> NumNodesSolelyBlocking;
-
+    
+    /// Queue - The priority queue of available SUnits, ordered by latency_sort.
     PriorityQueue<SUnit*, std::vector<SUnit*>, latency_sort> Queue;
+
 public:
-    LatencyPriorityQueue() : Queue(latency_sort(this)) {
+  LatencyPriorityQueue() : Queue(latency_sort(this)) {
     }
-    
+
     void initNodes(std::vector<SUnit> &sunits) {
       SUnits = &sunits;
       NumNodesSolelyBlocking.resize(SUnits->size(), 0);
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/LazyLiveness.h b/libclamav/c++/llvm/include/llvm/CodeGen/LazyLiveness.h
deleted file mode 100644
index 388b638..0000000
--- a/libclamav/c++/llvm/include/llvm/CodeGen/LazyLiveness.h
+++ /dev/null
@@ -1,64 +0,0 @@
-//===- LazyLiveness.h - Lazy, CFG-invariant liveness information ----------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This pass implements a lazy liveness analysis as per "Fast Liveness Checking
-// for SSA-form Programs," by Boissinot, et al.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_CODEGEN_LAZYLIVENESS_H
-#define LLVM_CODEGEN_LAZYLIVENESS_H
-
-#include "llvm/CodeGen/MachineFunctionPass.h"
-#include "llvm/CodeGen/MachineDominators.h"
-#include "llvm/ADT/DenseMap.h"
-#include "llvm/ADT/DenseSet.h"
-#include "llvm/ADT/SparseBitVector.h"
-#include <vector>
-
-namespace llvm {
-
-class MachineRegisterInfo;
-
-class LazyLiveness : public MachineFunctionPass {
-public:
-  static char ID; // Pass identification, replacement for typeid
-  LazyLiveness() : MachineFunctionPass(&ID) { }
-  
-  void getAnalysisUsage(AnalysisUsage &AU) const {
-    AU.setPreservesAll();
-    AU.addRequired<MachineDominatorTree>();
-    MachineFunctionPass::getAnalysisUsage(AU);
-  }
-  
-  bool runOnMachineFunction(MachineFunction &mf);
-
-  bool vregLiveIntoMBB(unsigned vreg, MachineBasicBlock* MBB);
-  
-private:
-  void computeBackedgeChain(MachineFunction& mf, MachineBasicBlock* MBB);
-  
-  typedef std::pair<MachineBasicBlock*, MachineBasicBlock*> edge_t;
-  
-  MachineRegisterInfo* MRI;
-  
-  DenseMap<MachineBasicBlock*, unsigned> preorder;
-  std::vector<MachineBasicBlock*> rev_preorder;
-  DenseMap<MachineBasicBlock*, SparseBitVector<128> > rv;
-  DenseMap<MachineBasicBlock*, SparseBitVector<128> > tv;
-  DenseSet<edge_t> backedges;
-  SparseBitVector<128> backedge_source;
-  SparseBitVector<128> backedge_target;
-  SparseBitVector<128> calculated;
-};
-
-}
-
-#endif
-
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/LinkAllAsmWriterComponents.h b/libclamav/c++/llvm/include/llvm/CodeGen/LinkAllAsmWriterComponents.h
index 1673c89..7d1b1fe 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/LinkAllAsmWriterComponents.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/LinkAllAsmWriterComponents.h
@@ -16,6 +16,7 @@
 #define LLVM_CODEGEN_LINKALLASMWRITERCOMPONENTS_H
 
 #include "llvm/CodeGen/GCs.h"
+#include <cstdlib>
 
 namespace {
   struct ForceAsmWriterLinking {
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/LiveInterval.h b/libclamav/c++/llvm/include/llvm/CodeGen/LiveInterval.h
index 1878b2e..e31a7f0 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/LiveInterval.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/LiveInterval.h
@@ -21,221 +21,19 @@
 #ifndef LLVM_CODEGEN_LIVEINTERVAL_H
 #define LLVM_CODEGEN_LIVEINTERVAL_H
 
-#include "llvm/ADT/DenseMapInfo.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/Support/Allocator.h"
 #include "llvm/Support/AlignOf.h"
+#include "llvm/CodeGen/SlotIndexes.h"
 #include <cassert>
 #include <climits>
 
 namespace llvm {
+  class LiveIntervals;
   class MachineInstr;
   class MachineRegisterInfo;
   class TargetRegisterInfo;
   class raw_ostream;
-  
-  /// MachineInstrIndex - An opaque wrapper around machine indexes.
-  class MachineInstrIndex {
-    friend class VNInfo;
-    friend class LiveInterval;
-    friend class LiveIntervals;
-    friend struct DenseMapInfo<MachineInstrIndex>;
-
-  public:
-
-    enum Slot { LOAD, USE, DEF, STORE, NUM };
-
-  private:
-
-    unsigned index;
-
-    static const unsigned PHI_BIT = 1 << 31;
-
-  public:
-
-    /// Construct a default MachineInstrIndex pointing to a reserved index.
-    MachineInstrIndex() : index(0) {}
-
-    /// Construct an index from the given index, pointing to the given slot.
-    MachineInstrIndex(MachineInstrIndex m, Slot s)
-      : index((m.index / NUM) * NUM + s) {} 
-    
-    /// Print this index to the given raw_ostream.
-    void print(raw_ostream &os) const;
-
-    /// Compare two MachineInstrIndex objects for equality.
-    bool operator==(MachineInstrIndex other) const {
-      return ((index & ~PHI_BIT) == (other.index & ~PHI_BIT));
-    }
-    /// Compare two MachineInstrIndex objects for inequality.
-    bool operator!=(MachineInstrIndex other) const {
-      return ((index & ~PHI_BIT) != (other.index & ~PHI_BIT));
-    }
-   
-    /// Compare two MachineInstrIndex objects. Return true if the first index
-    /// is strictly lower than the second.
-    bool operator<(MachineInstrIndex other) const {
-      return ((index & ~PHI_BIT) < (other.index & ~PHI_BIT));
-    }
-    /// Compare two MachineInstrIndex objects. Return true if the first index
-    /// is lower than, or equal to, the second.
-    bool operator<=(MachineInstrIndex other) const {
-      return ((index & ~PHI_BIT) <= (other.index & ~PHI_BIT));
-    }
-
-    /// Compare two MachineInstrIndex objects. Return true if the first index
-    /// is greater than the second.
-    bool operator>(MachineInstrIndex other) const {
-      return ((index & ~PHI_BIT) > (other.index & ~PHI_BIT));
-    }
-
-    /// Compare two MachineInstrIndex objects. Return true if the first index
-    /// is greater than, or equal to, the second.
-    bool operator>=(MachineInstrIndex other) const {
-      return ((index & ~PHI_BIT) >= (other.index & ~PHI_BIT));
-    }
-
-    /// Returns true if this index represents a load.
-    bool isLoad() const {
-      return ((index % NUM) == LOAD);
-    }
-
-    /// Returns true if this index represents a use.
-    bool isUse() const {
-      return ((index % NUM) == USE);
-    }
-
-    /// Returns true if this index represents a def.
-    bool isDef() const {
-      return ((index % NUM) == DEF);
-    }
-
-    /// Returns true if this index represents a store.
-    bool isStore() const {
-      return ((index % NUM) == STORE);
-    }
-
-    /// Returns the slot for this MachineInstrIndex.
-    Slot getSlot() const {
-      return static_cast<Slot>(index % NUM);
-    }
-
-    /// Returns true if this index represents a non-PHI use/def.
-    bool isNonPHIIndex() const {
-      return ((index & PHI_BIT) == 0);
-    }
-
-    /// Returns true if this index represents a PHI use/def.
-    bool isPHIIndex() const {
-      return ((index & PHI_BIT) == PHI_BIT);
-    }
-
-  private:
-
-    /// Construct an index from the given index, with its PHI kill marker set.
-    MachineInstrIndex(bool phi, MachineInstrIndex o) : index(o.index) {
-      if (phi)
-        index |= PHI_BIT;
-      else
-        index &= ~PHI_BIT;
-    }
-
-    explicit MachineInstrIndex(unsigned idx)
-      : index(idx & ~PHI_BIT) {}
-
-    MachineInstrIndex(bool phi, unsigned idx)
-      : index(idx & ~PHI_BIT) {
-      if (phi)
-        index |= PHI_BIT;
-    }
-
-    MachineInstrIndex(bool phi, unsigned idx, Slot slot)
-      : index(((idx / NUM) * NUM + slot) & ~PHI_BIT) {
-      if (phi)
-        index |= PHI_BIT;
-    }
-    
-    MachineInstrIndex nextSlot_() const {
-      assert((index & PHI_BIT) == ((index + 1) & PHI_BIT) &&
-             "Index out of bounds.");
-      return MachineInstrIndex(index + 1);
-    }
-
-    MachineInstrIndex nextIndex_() const {
-      assert((index & PHI_BIT) == ((index + NUM) & PHI_BIT) &&
-             "Index out of bounds.");
-      return MachineInstrIndex(index + NUM);
-    }
-
-    MachineInstrIndex prevSlot_() const {
-      assert((index & PHI_BIT) == ((index - 1) & PHI_BIT) &&
-             "Index out of bounds.");
-      return MachineInstrIndex(index - 1);
-    }
-
-    MachineInstrIndex prevIndex_() const {
-      assert((index & PHI_BIT) == ((index - NUM) & PHI_BIT) &&
-             "Index out of bounds.");
-      return MachineInstrIndex(index - NUM);
-    }
-
-    int distance(MachineInstrIndex other) const {
-      return (other.index & ~PHI_BIT) - (index & ~PHI_BIT);
-    }
-
-    /// Returns an unsigned number suitable as an index into a
-    /// vector over all instructions.
-    unsigned getVecIndex() const {
-      return (index & ~PHI_BIT) / NUM;
-    }
-
-    /// Scale this index by the given factor.
-    MachineInstrIndex scale(unsigned factor) const {
-      unsigned i = (index & ~PHI_BIT) / NUM,
-               o = (index % ~PHI_BIT) % NUM;
-      assert(index <= (~0U & ~PHI_BIT) / (factor * NUM) &&
-             "Rescaled interval would overflow");
-      return MachineInstrIndex(i * NUM * factor, o);
-    }
-
-    static MachineInstrIndex emptyKey() {
-      return MachineInstrIndex(true, 0x7fffffff);
-    }
-
-    static MachineInstrIndex tombstoneKey() {
-      return MachineInstrIndex(true, 0x7ffffffe);
-    }
-
-    static unsigned getHashValue(const MachineInstrIndex &v) {
-      return v.index * 37;
-    }
-
-  };
-
-  inline raw_ostream& operator<<(raw_ostream &os, MachineInstrIndex mi) {
-    mi.print(os);
-    return os;
-  }
-
-  /// Densemap specialization for MachineInstrIndex.
-  template <>
-  struct DenseMapInfo<MachineInstrIndex> {
-    static inline MachineInstrIndex getEmptyKey() {
-      return MachineInstrIndex::emptyKey();
-    }
-    static inline MachineInstrIndex getTombstoneKey() {
-      return MachineInstrIndex::tombstoneKey();
-    }
-    static inline unsigned getHashValue(const MachineInstrIndex &v) {
-      return MachineInstrIndex::getHashValue(v);
-    }
-    static inline bool isEqual(const MachineInstrIndex &LHS,
-                               const MachineInstrIndex &RHS) {
-      return (LHS == RHS);
-    }
-    static inline bool isPod() { return true; }
-  };
-
 
   /// VNInfo - Value Number Information.
   /// This class holds information about a machine level value, including
@@ -270,23 +68,25 @@ namespace llvm {
 
   public:
 
-    typedef SmallVector<MachineInstrIndex, 4> KillSet;
+    typedef SmallVector<SlotIndex, 4> KillSet;
 
     /// The ID number of this value.
     unsigned id;
     
     /// The index of the defining instruction (if isDefAccurate() returns true).
-    MachineInstrIndex def;
+    SlotIndex def;
 
     KillSet kills;
 
-    VNInfo()
-      : flags(IS_UNUSED), id(~1U) { cr.copy = 0; }
+    /*
+    VNInfo(LiveIntervals &li_)
+      : defflags(IS_UNUSED), id(~1U) { cr.copy = 0; }
+    */
 
     /// VNInfo constructor.
     /// d is presumed to point to the actual defining instr. If it doesn't
     /// setIsDefAccurate(false) should be called after construction.
-    VNInfo(unsigned i, MachineInstrIndex d, MachineInstr *c)
+    VNInfo(unsigned i, SlotIndex d, MachineInstr *c)
       : flags(IS_DEF_ACCURATE), id(i), def(d) { cr.copy = c; }
 
     /// VNInfo constructor, copies values from orig, except for the value number.
@@ -377,7 +177,7 @@ namespace llvm {
     }
 
     /// Returns true if the given index is a kill of this value.
-    bool isKill(MachineInstrIndex k) const {
+    bool isKill(SlotIndex k) const {
       KillSet::const_iterator
         i = std::lower_bound(kills.begin(), kills.end(), k);
       return (i != kills.end() && *i == k);
@@ -385,7 +185,7 @@ namespace llvm {
 
     /// addKill - Add a kill instruction index to the specified value
     /// number.
-    void addKill(MachineInstrIndex k) {
+    void addKill(SlotIndex k) {
       if (kills.empty()) {
         kills.push_back(k);
       } else {
@@ -397,7 +197,7 @@ namespace llvm {
 
     /// Remove the specified kill index from this value's kills list.
     /// Returns true if the value was present, otherwise returns false.
-    bool removeKill(MachineInstrIndex k) {
+    bool removeKill(SlotIndex k) {
       KillSet::iterator i = std::lower_bound(kills.begin(), kills.end(), k);
       if (i != kills.end() && *i == k) {
         kills.erase(i);
@@ -407,7 +207,7 @@ namespace llvm {
     }
 
     /// Remove all kills in the range [s, e).
-    void removeKills(MachineInstrIndex s, MachineInstrIndex e) {
+    void removeKills(SlotIndex s, SlotIndex e) {
       KillSet::iterator
         si = std::lower_bound(kills.begin(), kills.end(), s),
         se = std::upper_bound(kills.begin(), kills.end(), e);
@@ -421,11 +221,11 @@ namespace llvm {
   /// program, with an inclusive start point and an exclusive end point.
   /// These ranges are rendered as [start,end).
   struct LiveRange {
-    MachineInstrIndex start;  // Start point of the interval (inclusive)
-    MachineInstrIndex end;    // End point of the interval (exclusive)
+    SlotIndex start;  // Start point of the interval (inclusive)
+    SlotIndex end;    // End point of the interval (exclusive)
     VNInfo *valno;   // identifier for the value contained in this interval.
 
-    LiveRange(MachineInstrIndex S, MachineInstrIndex E, VNInfo *V)
+    LiveRange(SlotIndex S, SlotIndex E, VNInfo *V)
       : start(S), end(E), valno(V) {
 
       assert(S < E && "Cannot create empty or backwards range");
@@ -433,13 +233,13 @@ namespace llvm {
 
     /// contains - Return true if the index is covered by this range.
     ///
-    bool contains(MachineInstrIndex I) const {
+    bool contains(SlotIndex I) const {
       return start <= I && I < end;
     }
 
     /// containsRange - Return true if the given range, [S, E), is covered by
     /// this range. 
-    bool containsRange(MachineInstrIndex S, MachineInstrIndex E) const {
+    bool containsRange(SlotIndex S, SlotIndex E) const {
       assert((S < E) && "Backwards interval?");
       return (start <= S && S < end) && (start < E && E <= end);
     }
@@ -461,11 +261,11 @@ namespace llvm {
   raw_ostream& operator<<(raw_ostream& os, const LiveRange &LR);
 
 
-  inline bool operator<(MachineInstrIndex V, const LiveRange &LR) {
+  inline bool operator<(SlotIndex V, const LiveRange &LR) {
     return V < LR.start;
   }
 
-  inline bool operator<(const LiveRange &LR, MachineInstrIndex V) {
+  inline bool operator<(const LiveRange &LR, SlotIndex V) {
     return LR.start < V;
   }
 
@@ -522,7 +322,7 @@ namespace llvm {
     /// end of the interval.  If no LiveRange contains this position, but the
     /// position is in a hole, this method returns an iterator pointing to the
     /// LiveRange immediately after the hole.
-    iterator advanceTo(iterator I, MachineInstrIndex Pos) {
+    iterator advanceTo(iterator I, SlotIndex Pos) {
       if (Pos >= endIndex())
         return end();
       while (I->end <= Pos) ++I;
@@ -569,7 +369,7 @@ namespace llvm {
 
     /// getNextValue - Create a new value number and return it.  MIIdx specifies
     /// the instruction that defines the value number.
-    VNInfo *getNextValue(MachineInstrIndex def, MachineInstr *CopyMI,
+    VNInfo *getNextValue(SlotIndex def, MachineInstr *CopyMI,
                          bool isDefAccurate, BumpPtrAllocator &VNInfoAllocator){
       VNInfo *VNI =
         static_cast<VNInfo*>(VNInfoAllocator.Allocate((unsigned)sizeof(VNInfo),
@@ -625,13 +425,15 @@ namespace llvm {
     /// current interval, but are defined in the Clobbers interval, mark them
     /// used with an unknown definition value. Caller must pass in reference to
     /// VNInfoAllocator since it will create a new val#.
-    void MergeInClobberRanges(const LiveInterval &Clobbers,
+    void MergeInClobberRanges(LiveIntervals &li_,
+                              const LiveInterval &Clobbers,
                               BumpPtrAllocator &VNInfoAllocator);
 
     /// MergeInClobberRange - Same as MergeInClobberRanges except it merge in a
     /// single LiveRange only.
-    void MergeInClobberRange(MachineInstrIndex Start,
-                             MachineInstrIndex End,
+    void MergeInClobberRange(LiveIntervals &li_,
+                             SlotIndex Start,
+                             SlotIndex End,
                              BumpPtrAllocator &VNInfoAllocator);
 
     /// MergeValueInAsValue - Merge all of the live ranges of a specific val#
@@ -657,56 +459,54 @@ namespace llvm {
     bool empty() const { return ranges.empty(); }
 
     /// beginIndex - Return the lowest numbered slot covered by interval.
-    MachineInstrIndex beginIndex() const {
-      if (empty())
-        return MachineInstrIndex();
+    SlotIndex beginIndex() const {
+      assert(!empty() && "Call to beginIndex() on empty interval.");
       return ranges.front().start;
     }
 
     /// endIndex - return the maximum point of the interval of the whole,
     /// exclusive.
-    MachineInstrIndex endIndex() const {
-      if (empty())
-        return MachineInstrIndex();
+    SlotIndex endIndex() const {
+      assert(!empty() && "Call to endIndex() on empty interval.");
       return ranges.back().end;
     }
 
-    bool expiredAt(MachineInstrIndex index) const {
+    bool expiredAt(SlotIndex index) const {
       return index >= endIndex();
     }
 
-    bool liveAt(MachineInstrIndex index) const;
+    bool liveAt(SlotIndex index) const;
 
     // liveBeforeAndAt - Check if the interval is live at the index and the
     // index just before it. If index is liveAt, check if it starts a new live
     // range. If it does, then check if the previous live range ends at index-1.
-    bool liveBeforeAndAt(MachineInstrIndex index) const;
+    bool liveBeforeAndAt(SlotIndex index) const;
 
     /// getLiveRangeContaining - Return the live range that contains the
     /// specified index, or null if there is none.
-    const LiveRange *getLiveRangeContaining(MachineInstrIndex Idx) const {
+    const LiveRange *getLiveRangeContaining(SlotIndex Idx) const {
       const_iterator I = FindLiveRangeContaining(Idx);
       return I == end() ? 0 : &*I;
     }
 
     /// getLiveRangeContaining - Return the live range that contains the
     /// specified index, or null if there is none.
-    LiveRange *getLiveRangeContaining(MachineInstrIndex Idx) {
+    LiveRange *getLiveRangeContaining(SlotIndex Idx) {
       iterator I = FindLiveRangeContaining(Idx);
       return I == end() ? 0 : &*I;
     }
 
     /// FindLiveRangeContaining - Return an iterator to the live range that
     /// contains the specified index, or end() if there is none.
-    const_iterator FindLiveRangeContaining(MachineInstrIndex Idx) const;
+    const_iterator FindLiveRangeContaining(SlotIndex Idx) const;
 
     /// FindLiveRangeContaining - Return an iterator to the live range that
     /// contains the specified index, or end() if there is none.
-    iterator FindLiveRangeContaining(MachineInstrIndex Idx);
+    iterator FindLiveRangeContaining(SlotIndex Idx);
 
     /// findDefinedVNInfo - Find the VNInfo defined by the specified
     /// index (register interval only).
-    VNInfo *findDefinedVNInfoForRegInt(MachineInstrIndex Idx) const;
+    VNInfo *findDefinedVNInfoForRegInt(SlotIndex Idx) const;
 
     /// findDefinedVNInfo - Find the VNInfo that's defined by the specified
     /// register (stack interval only).
@@ -721,7 +521,7 @@ namespace llvm {
 
     /// overlaps - Return true if the live interval overlaps a range specified
     /// by [Start, End).
-    bool overlaps(MachineInstrIndex Start, MachineInstrIndex End) const;
+    bool overlaps(SlotIndex Start, SlotIndex End) const;
 
     /// overlapsFrom - Return true if the intersection of the two live intervals
     /// is not empty.  The specified iterator is a hint that we can begin
@@ -738,18 +538,19 @@ namespace llvm {
     /// join - Join two live intervals (this, and other) together.  This applies
     /// mappings to the value numbers in the LHS/RHS intervals as specified.  If
     /// the intervals are not joinable, this aborts.
-    void join(LiveInterval &Other, const int *ValNoAssignments,
+    void join(LiveInterval &Other,
+              const int *ValNoAssignments,
               const int *RHSValNoAssignments,
               SmallVector<VNInfo*, 16> &NewVNInfo,
               MachineRegisterInfo *MRI);
 
     /// isInOneLiveRange - Return true if the range specified is entirely in
     /// a single LiveRange of the live interval.
-    bool isInOneLiveRange(MachineInstrIndex Start, MachineInstrIndex End);
+    bool isInOneLiveRange(SlotIndex Start, SlotIndex End);
 
     /// removeRange - Remove the specified range from this interval.  Note that
     /// the range must be a single LiveRange in its entirety.
-    void removeRange(MachineInstrIndex Start, MachineInstrIndex End,
+    void removeRange(SlotIndex Start, SlotIndex End,
                      bool RemoveDeadValNo = false);
 
     void removeRange(LiveRange LR, bool RemoveDeadValNo = false) {
@@ -773,8 +574,8 @@ namespace llvm {
     void ComputeJoinedWeight(const LiveInterval &Other);
 
     bool operator<(const LiveInterval& other) const {
-      const MachineInstrIndex &thisIndex = beginIndex();
-      const MachineInstrIndex &otherIndex = other.beginIndex();
+      const SlotIndex &thisIndex = beginIndex();
+      const SlotIndex &otherIndex = other.beginIndex();
       return (thisIndex < otherIndex ||
               (thisIndex == otherIndex && reg < other.reg));
     }
@@ -785,8 +586,9 @@ namespace llvm {
   private:
 
     Ranges::iterator addRangeFrom(LiveRange LR, Ranges::iterator From);
-    void extendIntervalEndTo(Ranges::iterator I, MachineInstrIndex NewEnd);
-    Ranges::iterator extendIntervalStartTo(Ranges::iterator I, MachineInstrIndex NewStr);
+    void extendIntervalEndTo(Ranges::iterator I, SlotIndex NewEnd);
+    Ranges::iterator extendIntervalStartTo(Ranges::iterator I, SlotIndex NewStr);
+
     LiveInterval& operator=(const LiveInterval& rhs); // DO NOT IMPLEMENT
 
   };
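
The LiveInterval.h hunks above change only the index type; the half-open
[start,end) semantics of LiveRange and the sorted kill lists of VNInfo are
unchanged. A minimal standalone sketch of those two invariants, with plain
ints standing in for SlotIndex (all names here are illustrative, not LLVM's):

    #include <algorithm>
    #include <cassert>
    #include <vector>

    // Half-open range [start, end), as in LiveRange.
    struct Range {
      int start, end;
      Range(int s, int e) : start(s), end(e) {
        assert(s < e && "Cannot create empty or backwards range");
      }
      bool contains(int i) const { return start <= i && i < end; }
      bool containsRange(int s, int e) const {
        assert(s < e && "Backwards interval?");
        return (start <= s && s < end) && (start < e && e <= end);
      }
    };

    // Kill list kept sorted so membership tests can use std::lower_bound,
    // as in VNInfo::addKill / VNInfo::isKill.
    struct Kills {
      std::vector<int> kills;
      void add(int k) {
        kills.insert(std::lower_bound(kills.begin(), kills.end(), k), k);
      }
      bool isKill(int k) const {
        std::vector<int>::const_iterator i =
          std::lower_bound(kills.begin(), kills.end(), k);
        return i != kills.end() && *i == k;
      }
    };

    int main() {
      Range r(4, 12);
      assert(r.contains(4) && !r.contains(12));        // end is exclusive
      assert(r.containsRange(4, 12) && !r.containsRange(8, 13));
      Kills ks;
      ks.add(10); ks.add(6);                           // stays sorted
      assert(ks.isKill(6) && ks.isKill(10) && !ks.isKill(7));
      return 0;
    }
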
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/LiveIntervalAnalysis.h b/libclamav/c++/llvm/include/llvm/CodeGen/LiveIntervalAnalysis.h
index 3311788..7a02d0f 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/LiveIntervalAnalysis.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/LiveIntervalAnalysis.h
@@ -23,12 +23,14 @@
 #include "llvm/CodeGen/MachineBasicBlock.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
 #include "llvm/CodeGen/LiveInterval.h"
+#include "llvm/CodeGen/SlotIndexes.h"
 #include "llvm/ADT/BitVector.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/Support/Allocator.h"
 #include <cmath>
+#include <iterator>
 
 namespace llvm {
 
@@ -40,21 +42,6 @@ namespace llvm {
   class TargetInstrInfo;
   class TargetRegisterClass;
   class VirtRegMap;
-  typedef std::pair<MachineInstrIndex, MachineBasicBlock*> IdxMBBPair;
-
-  inline bool operator<(MachineInstrIndex V, const IdxMBBPair &IM) {
-    return V < IM.first;
-  }
-
-  inline bool operator<(const IdxMBBPair &IM, MachineInstrIndex V) {
-    return IM.first < V;
-  }
-
-  struct Idx2MBBCompare {
-    bool operator()(const IdxMBBPair &LHS, const IdxMBBPair &RHS) const {
-      return LHS.first < RHS.first;
-    }
-  };
   
   class LiveIntervals : public MachineFunctionPass {
     MachineFunction* mf_;
@@ -64,84 +51,25 @@ namespace llvm {
     const TargetInstrInfo* tii_;
     AliasAnalysis *aa_;
     LiveVariables* lv_;
+    SlotIndexes* indexes_;
 
     /// Special pool allocator for VNInfo's (LiveInterval val#).
     ///
     BumpPtrAllocator VNInfoAllocator;
 
-    /// MBB2IdxMap - The indexes of the first and last instructions in the
-    /// specified basic block.
-    std::vector<std::pair<MachineInstrIndex, MachineInstrIndex> > MBB2IdxMap;
-
-    /// Idx2MBBMap - Sorted list of pairs of index of first instruction
-    /// and MBB id.
-    std::vector<IdxMBBPair> Idx2MBBMap;
-
-    /// FunctionSize - The number of instructions present in the function
-    uint64_t FunctionSize;
-
-    typedef DenseMap<const MachineInstr*, MachineInstrIndex> Mi2IndexMap;
-    Mi2IndexMap mi2iMap_;
-
-    typedef std::vector<MachineInstr*> Index2MiMap;
-    Index2MiMap i2miMap_;
-
     typedef DenseMap<unsigned, LiveInterval*> Reg2IntervalMap;
     Reg2IntervalMap r2iMap_;
 
-    DenseMap<MachineBasicBlock*, MachineInstrIndex> terminatorGaps;
-
-    /// phiJoinCopies - Copy instructions which are PHI joins.
-    SmallVector<MachineInstr*, 16> phiJoinCopies;
-
     /// allocatableRegs_ - A bit vector of allocatable registers.
     BitVector allocatableRegs_;
 
     /// CloneMIs - A list of clones as result of re-materialization.
     std::vector<MachineInstr*> CloneMIs;
 
-    typedef LiveInterval::InstrSlots InstrSlots;
-
   public:
     static char ID; // Pass identification, replacement for typeid
     LiveIntervals() : MachineFunctionPass(&ID) {}
 
-    MachineInstrIndex getBaseIndex(MachineInstrIndex index) {
-      return MachineInstrIndex(index, MachineInstrIndex::LOAD);
-    }
-    MachineInstrIndex getBoundaryIndex(MachineInstrIndex index) {
-      return MachineInstrIndex(index,
-        (MachineInstrIndex::Slot)(MachineInstrIndex::NUM - 1));
-    }
-    MachineInstrIndex getLoadIndex(MachineInstrIndex index) {
-      return MachineInstrIndex(index, MachineInstrIndex::LOAD);
-    }
-    MachineInstrIndex getUseIndex(MachineInstrIndex index) {
-      return MachineInstrIndex(index, MachineInstrIndex::USE);
-    }
-    MachineInstrIndex getDefIndex(MachineInstrIndex index) {
-      return MachineInstrIndex(index, MachineInstrIndex::DEF);
-    }
-    MachineInstrIndex getStoreIndex(MachineInstrIndex index) {
-      return MachineInstrIndex(index, MachineInstrIndex::STORE);
-    }    
-
-    MachineInstrIndex getNextSlot(MachineInstrIndex m) const {
-      return m.nextSlot_();
-    }
-
-    MachineInstrIndex getNextIndex(MachineInstrIndex m) const {
-      return m.nextIndex_();
-    }
-
-    MachineInstrIndex getPrevSlot(MachineInstrIndex m) const {
-      return m.prevSlot_();
-    }
-
-    MachineInstrIndex getPrevIndex(MachineInstrIndex m) const {
-      return m.prevIndex_();
-    }
-
     static float getSpillWeight(bool isDef, bool isUse, unsigned loopDepth) {
       return (isDef + isUse) * powf(10.0F, (float)loopDepth);
     }
@@ -170,112 +98,18 @@ namespace llvm {
       return r2iMap_.count(reg);
     }
 
-    /// getMBBStartIdx - Return the base index of the first instruction in the
-    /// specified MachineBasicBlock.
-    MachineInstrIndex getMBBStartIdx(MachineBasicBlock *MBB) const {
-      return getMBBStartIdx(MBB->getNumber());
-    }
-    MachineInstrIndex getMBBStartIdx(unsigned MBBNo) const {
-      assert(MBBNo < MBB2IdxMap.size() && "Invalid MBB number!");
-      return MBB2IdxMap[MBBNo].first;
-    }
-
-    /// getMBBEndIdx - Return the store index of the last instruction in the
-    /// specified MachineBasicBlock.
-    MachineInstrIndex getMBBEndIdx(MachineBasicBlock *MBB) const {
-      return getMBBEndIdx(MBB->getNumber());
-    }
-    MachineInstrIndex getMBBEndIdx(unsigned MBBNo) const {
-      assert(MBBNo < MBB2IdxMap.size() && "Invalid MBB number!");
-      return MBB2IdxMap[MBBNo].second;
-    }
-
     /// getScaledIntervalSize - get the size of an interval in "units,"
     /// where every function is composed of one thousand units.  This
     /// measure scales properly with empty index slots in the function.
     double getScaledIntervalSize(LiveInterval& I) {
-      return (1000.0 / InstrSlots::NUM * I.getSize()) / i2miMap_.size();
+      return (1000.0 * I.getSize()) / indexes_->getIndexesLength();
     }
     
     /// getApproximateInstructionCount - computes an estimate of the number
     /// of instructions in a given LiveInterval.
     unsigned getApproximateInstructionCount(LiveInterval& I) {
       double IntervalPercentage = getScaledIntervalSize(I) / 1000.0;
-      return (unsigned)(IntervalPercentage * FunctionSize);
-    }
-
-    /// getMBBFromIndex - given an index in any instruction of an
-    /// MBB return a pointer the MBB
-    MachineBasicBlock* getMBBFromIndex(MachineInstrIndex index) const {
-      std::vector<IdxMBBPair>::const_iterator I =
-        std::lower_bound(Idx2MBBMap.begin(), Idx2MBBMap.end(), index);
-      // Take the pair containing the index
-      std::vector<IdxMBBPair>::const_iterator J =
-        ((I != Idx2MBBMap.end() && I->first > index) ||
-         (I == Idx2MBBMap.end() && Idx2MBBMap.size()>0)) ? (I-1): I;
-
-      assert(J != Idx2MBBMap.end() && J->first <= index &&
-             index <= getMBBEndIdx(J->second) &&
-             "index does not correspond to an MBB");
-      return J->second;
-    }
-
-    /// getInstructionIndex - returns the base index of instr
-    MachineInstrIndex getInstructionIndex(const MachineInstr* instr) const {
-      Mi2IndexMap::const_iterator it = mi2iMap_.find(instr);
-      assert(it != mi2iMap_.end() && "Invalid instruction!");
-      return it->second;
-    }
-
-    /// getInstructionFromIndex - given an index in any slot of an
-    /// instruction return a pointer the instruction
-    MachineInstr* getInstructionFromIndex(MachineInstrIndex index) const {
-      // convert index to vector index
-      unsigned i = index.getVecIndex();
-      assert(i < i2miMap_.size() &&
-             "index does not correspond to an instruction");
-      return i2miMap_[i];
-    }
-
-    /// hasGapBeforeInstr - Return true if the previous instruction slot,
-    /// i.e. Index - InstrSlots::NUM, is not occupied.
-    bool hasGapBeforeInstr(MachineInstrIndex Index) {
-      Index = getBaseIndex(getPrevIndex(Index));
-      return getInstructionFromIndex(Index) == 0;
-    }
-
-    /// hasGapAfterInstr - Return true if the successive instruction slot,
-    /// i.e. Index + InstrSlots::Num, is not occupied.
-    bool hasGapAfterInstr(MachineInstrIndex Index) {
-      Index = getBaseIndex(getNextIndex(Index));
-      return getInstructionFromIndex(Index) == 0;
-    }
-
-    /// findGapBeforeInstr - Find an empty instruction slot before the
-    /// specified index. If "Furthest" is true, find one that's furthest
-    /// away from the index (but before any index that's occupied).
-    MachineInstrIndex findGapBeforeInstr(MachineInstrIndex Index,
-                                         bool Furthest = false) {
-      Index = getBaseIndex(getPrevIndex(Index));
-      if (getInstructionFromIndex(Index))
-        return MachineInstrIndex();  // No gap!
-      if (!Furthest)
-        return Index;
-      MachineInstrIndex PrevIndex = getBaseIndex(getPrevIndex(Index));
-      while (getInstructionFromIndex(Index)) {
-        Index = PrevIndex;
-        PrevIndex = getBaseIndex(getPrevIndex(Index));
-      }
-      return Index;
-    }
-
-    /// InsertMachineInstrInMaps - Insert the specified machine instruction
-    /// into the instruction index map at the given index.
-    void InsertMachineInstrInMaps(MachineInstr *MI, MachineInstrIndex Index) {
-      i2miMap_[Index.getVecIndex()] = MI;
-      Mi2IndexMap::iterator it = mi2iMap_.find(MI);
-      assert(it == mi2iMap_.end() && "Already in map!");
-      mi2iMap_[MI] = Index;
+      return (unsigned)(IntervalPercentage * indexes_->getFunctionSize());
     }
 
     /// conflictsWithPhysRegDef - Returns true if the specified register
@@ -289,19 +123,7 @@ namespace llvm {
                                  bool CheckUse,
                                  SmallPtrSet<MachineInstr*,32> &JoinedCopies);
 
-    /// findLiveInMBBs - Given a live range, if the value of the range
-    /// is live in any MBB returns true as well as the list of basic blocks
-    /// in which the value is live.
-    bool findLiveInMBBs(MachineInstrIndex Start, MachineInstrIndex End,
-                        SmallVectorImpl<MachineBasicBlock*> &MBBs) const;
-
-    /// findReachableMBBs - Return a list MBB that can be reached via any
-    /// branch or fallthroughs. Return true if the list is not empty.
-    bool findReachableMBBs(MachineInstrIndex Start, MachineInstrIndex End,
-                        SmallVectorImpl<MachineBasicBlock*> &MBBs) const;
-
     // Interval creation
-
     LiveInterval &getOrCreateInterval(unsigned reg) {
       Reg2IntervalMap::iterator I = r2iMap_.find(reg);
       if (I == r2iMap_.end())
@@ -326,36 +148,63 @@ namespace llvm {
       r2iMap_.erase(I);
     }
 
+    SlotIndex getZeroIndex() const {
+      return indexes_->getZeroIndex();
+    }
+
+    SlotIndex getInvalidIndex() const {
+      return indexes_->getInvalidIndex();
+    }
+
     /// isNotInMIMap - returns true if the specified machine instr has been
     /// removed or was never entered in the map.
-    bool isNotInMIMap(MachineInstr* instr) const {
-      return !mi2iMap_.count(instr);
+    bool isNotInMIMap(const MachineInstr* Instr) const {
+      return !indexes_->hasIndex(Instr);
+    }
+
+    /// Returns the base index of the given instruction.
+    SlotIndex getInstructionIndex(const MachineInstr *instr) const {
+      return indexes_->getInstructionIndex(instr);
+    }
+    
+    /// Returns the instruction associated with the given index.
+    MachineInstr* getInstructionFromIndex(SlotIndex index) const {
+      return indexes_->getInstructionFromIndex(index);
+    }
+
+    /// Return the first index in the given basic block.
+    SlotIndex getMBBStartIdx(const MachineBasicBlock *mbb) const {
+      return indexes_->getMBBStartIdx(mbb);
+    } 
+
+    /// Return the last index in the given basic block.
+    SlotIndex getMBBEndIdx(const MachineBasicBlock *mbb) const {
+      return indexes_->getMBBEndIdx(mbb);
+    } 
+
+    MachineBasicBlock* getMBBFromIndex(SlotIndex index) const {
+      return indexes_->getMBBFromIndex(index);
+    }
+
+    SlotIndex InsertMachineInstrInMaps(MachineInstr *MI) {
+      return indexes_->insertMachineInstrInMaps(MI);
     }
 
-    /// RemoveMachineInstrFromMaps - This marks the specified machine instr as
-    /// deleted.
     void RemoveMachineInstrFromMaps(MachineInstr *MI) {
-      // remove index -> MachineInstr and
-      // MachineInstr -> index mappings
-      Mi2IndexMap::iterator mi2i = mi2iMap_.find(MI);
-      if (mi2i != mi2iMap_.end()) {
-        i2miMap_[mi2i->second.index/InstrSlots::NUM] = 0;
-        mi2iMap_.erase(mi2i);
-      }
+      indexes_->removeMachineInstrFromMaps(MI);
     }
 
-    /// ReplaceMachineInstrInMaps - Replacing a machine instr with a new one in
-    /// maps used by register allocator.
     void ReplaceMachineInstrInMaps(MachineInstr *MI, MachineInstr *NewMI) {
-      Mi2IndexMap::iterator mi2i = mi2iMap_.find(MI);
-      if (mi2i == mi2iMap_.end())
-        return;
-      i2miMap_[mi2i->second.index/InstrSlots::NUM] = NewMI;
-      Mi2IndexMap::iterator it = mi2iMap_.find(MI);
-      assert(it != mi2iMap_.end() && "Invalid instruction!");
-      MachineInstrIndex Index = it->second;
-      mi2iMap_.erase(it);
-      mi2iMap_[NewMI] = Index;
+      indexes_->replaceMachineInstrInMaps(MI, NewMI);
+    }
+
+    bool findLiveInMBBs(SlotIndex Start, SlotIndex End,
+                        SmallVectorImpl<MachineBasicBlock*> &MBBs) const {
+      return indexes_->findLiveInMBBs(Start, End, MBBs);
+    }
+
+    void renumber() {
+      indexes_->renumberIndexes();
     }
 
     BumpPtrAllocator& getVNInfoAllocator() { return VNInfoAllocator; }
@@ -418,13 +267,6 @@ namespace llvm {
     /// marker to implicit_def defs and their uses.
     void processImplicitDefs();
 
-    /// computeNumbering - Compute the index numbering.
-    void computeNumbering();
-
-    /// scaleNumbering - Rescale interval numbers to introduce gaps for new
-    /// instructions
-    void scaleNumbering(int factor);
-
     /// intervalIsInOneMBB - Returns true if the specified interval is entirely
     /// within a single basic block.
     bool intervalIsInOneMBB(const LiveInterval &li) const;
@@ -433,25 +275,19 @@ namespace llvm {
     /// computeIntervals - Compute live intervals.
     void computeIntervals();
 
-    bool isProfitableToCoalesce(LiveInterval &DstInt, LiveInterval &SrcInt,
-                                SmallVector<MachineInstr*,16> &IdentCopies,
-                                SmallVector<MachineInstr*,16> &OtherCopies);
-
-    void performEarlyCoalescing();
-
     /// handleRegisterDef - update intervals for a register def
     /// (calls handlePhysicalRegisterDef and
     /// handleVirtualRegisterDef)
     void handleRegisterDef(MachineBasicBlock *MBB,
                            MachineBasicBlock::iterator MI,
-                           MachineInstrIndex MIIdx,
+                           SlotIndex MIIdx,
                            MachineOperand& MO, unsigned MOIdx);
 
     /// handleVirtualRegisterDef - update intervals for a virtual
     /// register def
     void handleVirtualRegisterDef(MachineBasicBlock *MBB,
                                   MachineBasicBlock::iterator MI,
-                                  MachineInstrIndex MIIdx, MachineOperand& MO,
+                                  SlotIndex MIIdx, MachineOperand& MO,
                                   unsigned MOIdx,
                                   LiveInterval& interval);
 
@@ -459,13 +295,13 @@ namespace llvm {
     /// def.
     void handlePhysicalRegisterDef(MachineBasicBlock* mbb,
                                    MachineBasicBlock::iterator mi,
-                                   MachineInstrIndex MIIdx, MachineOperand& MO,
+                                   SlotIndex MIIdx, MachineOperand& MO,
                                    LiveInterval &interval,
                                    MachineInstr *CopyMI);
 
     /// handleLiveInRegister - Create interval for a livein register.
     void handleLiveInRegister(MachineBasicBlock* mbb,
-                              MachineInstrIndex MIIdx,
+                              SlotIndex MIIdx,
                               LiveInterval &interval, bool isAlias = false);
 
     /// getReMatImplicitUse - If the remat definition MI has one (for now, we
@@ -478,7 +314,7 @@ namespace llvm {
     /// which reaches the given instruction also reaches the specified use
     /// index.
     bool isValNoAvailableAt(const LiveInterval &li, MachineInstr *MI,
-                            MachineInstrIndex UseIdx) const;
+                            SlotIndex UseIdx) const;
 
     /// isReMaterializable - Returns true if the definition MI of the specified
     /// val# of the specified interval is re-materializable. Also returns true
@@ -493,7 +329,7 @@ namespace llvm {
     /// MI. If it is successful, MI is updated with the newly created MI and
     /// returns true.
     bool tryFoldMemoryOperand(MachineInstr* &MI, VirtRegMap &vrm,
-                              MachineInstr *DefMI, MachineInstrIndex InstrIdx,
+                              MachineInstr *DefMI, SlotIndex InstrIdx,
                               SmallVector<unsigned, 2> &Ops,
                               bool isSS, int FrameIndex, unsigned Reg);
 
@@ -507,7 +343,7 @@ namespace llvm {
     /// VNInfo that's after the specified index but is within the basic block.
     bool anyKillInMBBAfterIdx(const LiveInterval &li, const VNInfo *VNI,
                               MachineBasicBlock *MBB,
-                              MachineInstrIndex Idx) const;
+                              SlotIndex Idx) const;
 
     /// hasAllocatableSuperReg - Return true if the specified physical register
     /// has any super register that's allocatable.
@@ -515,17 +351,17 @@ namespace llvm {
 
     /// SRInfo - Spill / restore info.
     struct SRInfo {
-      MachineInstrIndex index;
+      SlotIndex index;
       unsigned vreg;
       bool canFold;
-      SRInfo(MachineInstrIndex i, unsigned vr, bool f)
+      SRInfo(SlotIndex i, unsigned vr, bool f)
         : index(i), vreg(vr), canFold(f) {}
     };
 
-    bool alsoFoldARestore(int Id, MachineInstrIndex index, unsigned vr,
+    bool alsoFoldARestore(int Id, SlotIndex index, unsigned vr,
                           BitVector &RestoreMBBs,
                           DenseMap<unsigned,std::vector<SRInfo> >&RestoreIdxes);
-    void eraseRestoreInfo(int Id, MachineInstrIndex index, unsigned vr,
+    void eraseRestoreInfo(int Id, SlotIndex index, unsigned vr,
                           BitVector &RestoreMBBs,
                           DenseMap<unsigned,std::vector<SRInfo> >&RestoreIdxes);
 
@@ -544,7 +380,7 @@ namespace llvm {
     /// functions for addIntervalsForSpills to rewrite uses / defs for the given
     /// live range.
     bool rewriteInstructionForSpills(const LiveInterval &li, const VNInfo *VNI,
-        bool TrySplit, MachineInstrIndex index, MachineInstrIndex end,
+        bool TrySplit, SlotIndex index, SlotIndex end,
         MachineInstr *MI, MachineInstr *OrigDefMI, MachineInstr *DefMI,
         unsigned Slot, int LdSlot,
         bool isLoad, bool isLoadSS, bool DefIsReMat, bool CanDelete,
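
getSpillWeight and getScaledIntervalSize above are pure arithmetic, so their
behavior is easy to check in isolation. A standalone sketch; the 10^loopDepth
weighting and the 1000-unit scaling are copied from the header, while the
sample numbers are made up:

    #include <cmath>
    #include <cstdio>

    // Mirrors LiveIntervals::getSpillWeight: each def or use costs one
    // unit, multiplied by 10 for every level of loop nesting.
    static float getSpillWeight(bool isDef, bool isUse, unsigned loopDepth) {
      return (isDef + isUse) * powf(10.0F, (float)loopDepth);
    }

    // Mirrors getScaledIntervalSize: an interval's share of the index
    // space, scaled so the whole function is one thousand units.
    static double getScaledIntervalSize(double intervalSize,
                                        double indexesLength) {
      return (1000.0 * intervalSize) / indexesLength;
    }

    int main() {
      // A def+use inside a doubly nested loop: 2 * 10^2 = 200.
      printf("%g\n", getSpillWeight(true, true, 2));
      // An interval covering a quarter of the index space: 250 units.
      printf("%g\n", getScaledIntervalSize(25.0, 100.0));
      return 0;
    }
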
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/LiveStackAnalysis.h b/libclamav/c++/llvm/include/llvm/CodeGen/LiveStackAnalysis.h
index d63a222..e01d1ae 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/LiveStackAnalysis.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/LiveStackAnalysis.h
@@ -48,8 +48,6 @@ namespace llvm {
     iterator begin() { return S2IMap.begin(); }
     iterator end() { return S2IMap.end(); }
 
-    void scaleNumbering(int factor);
-
     unsigned getNumIntervals() const { return (unsigned)S2IMap.size(); }
 
     LiveInterval &getOrCreateInterval(int Slot, const TargetRegisterClass *RC) {
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/LiveVariables.h b/libclamav/c++/llvm/include/llvm/CodeGen/LiveVariables.h
index 172fb75..a37abd4 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/LiveVariables.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/LiveVariables.h
@@ -103,7 +103,17 @@ public:
       Kills.erase(I);
       return true;
     }
-    
+
+    /// findKill - Find a kill instruction in MBB. Return NULL if none is found.
+    MachineInstr *findKill(const MachineBasicBlock *MBB) const;
+
+    /// isLiveIn - Is Reg live in to MBB? This means that Reg is live through
+    /// MBB, or it is killed in MBB. If Reg is only used by PHI instructions in
+    /// MBB, it is not considered live in.
+    bool isLiveIn(const MachineBasicBlock &MBB,
+                  unsigned Reg,
+                  MachineRegisterInfo &MRI);
+
     void dump() const;
   };
 
@@ -263,6 +273,18 @@ public:
   void HandleVirtRegDef(unsigned reg, MachineInstr *MI);
   void HandleVirtRegUse(unsigned reg, MachineBasicBlock *MBB,
                         MachineInstr *MI);
+
+  bool isLiveIn(unsigned Reg, const MachineBasicBlock &MBB) {
+    return getVarInfo(Reg).isLiveIn(MBB, Reg, *MRI);
+  }
+
+  /// addNewBlock - Add a new basic block BB between DomBB and SuccBB. All
+  /// variables that are live out of DomBB and live into SuccBB will be marked
+  /// as passing live through BB. This method assumes that the machine code is
+  /// still in SSA form.
+  void addNewBlock(MachineBasicBlock *BB,
+                   MachineBasicBlock *DomBB,
+                   MachineBasicBlock *SuccBB);
 };
 
 } // End llvm namespace
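
The isLiveIn rule documented above (live through the block, or killed in the
block, with PHI-only uses excluded) can be modelled with plain sets. A toy
sketch under exactly those assumptions; none of these types are LLVM's:

    #include <cassert>
    #include <set>

    struct ToyVarInfo {
      std::set<int> aliveThroughBlocks;  // blocks the value lives through
      std::set<int> killedInBlocks;      // blocks containing a kill
      // Live-in iff live through the block or killed inside it.  A use
      // that appears only as a PHI operand populates neither set, so it
      // does not make the value live-in.
      bool isLiveIn(int block) const {
        return aliveThroughBlocks.count(block) || killedInBlocks.count(block);
      }
    };

    int main() {
      ToyVarInfo vi;
      vi.aliveThroughBlocks.insert(1);
      vi.killedInBlocks.insert(2);
      assert(vi.isLiveIn(1) && vi.isLiveIn(2) && !vi.isLiveIn(3));
      return 0;
    }
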
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachORelocation.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachORelocation.h
index d4027cc..27306c6 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachORelocation.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachORelocation.h
@@ -15,6 +15,8 @@
 #ifndef LLVM_CODEGEN_MACHO_RELOCATION_H
 #define LLVM_CODEGEN_MACHO_RELOCATION_H
 
+#include "llvm/System/DataTypes.h"
+
 namespace llvm {
 
   /// MachORelocation - This struct contains information about each relocation
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h
index 2a9e86a..6b4c640 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h
@@ -76,6 +76,10 @@ class MachineBasicBlock : public ilist_node<MachineBasicBlock> {
   /// exception handler.
   bool IsLandingPad;
 
+  /// AddressTaken - Indicate that this basic block is potentially the
+  /// target of an indirect branch.
+  bool AddressTaken;
+
   // Intrusive list support
   MachineBasicBlock() {}
 
@@ -88,10 +92,23 @@ class MachineBasicBlock : public ilist_node<MachineBasicBlock> {
 
 public:
   /// getBasicBlock - Return the LLVM basic block that this instance
-  /// corresponded to originally.
+  /// corresponded to originally. Note that this may be NULL if this instance
+  /// does not correspond directly to an LLVM basic block.
   ///
   const BasicBlock *getBasicBlock() const { return BB; }
 
+  /// getName - Return the name of the corresponding LLVM basic block, or
+  /// "(null)".
+  StringRef getName() const;
+
+  /// hasAddressTaken - Test whether this block is potentially the target
+  /// of an indirect branch.
+  bool hasAddressTaken() const { return AddressTaken; }
+
+  /// setHasAddressTaken - Set this block to reflect that it potentially
+  /// is the target of an indirect branch.
+  void setHasAddressTaken() { AddressTaken = true; }
+
   /// getParent - Return the MachineFunction containing this basic block.
   ///
   const MachineFunction *getParent() const { return xParent; }
@@ -213,7 +230,13 @@ public:
   /// potential fall-throughs at the end of the block.
   void moveBefore(MachineBasicBlock *NewAfter);
   void moveAfter(MachineBasicBlock *NewBefore);
-  
+
+  /// updateTerminator - Update the terminator instructions in block to account
+  /// for changes to the layout. If the block previously used a fallthrough,
+  /// it may now need a branch, and if it previously used branching it may now
+  /// be able to use a fallthrough.
+  void updateTerminator();
+
   // Machine-CFG mutators
   
   /// addSuccessor - Add succ as a successor of this MachineBasicBlock.
@@ -234,7 +257,7 @@ public:
   
   /// transferSuccessors - Transfers all the successors from MBB to this
   /// machine basic block (i.e., copies all the successors from fromMBB and
-  /// remove all the successors fromBB).
+  /// removes all the successors from fromMBB).
   void transferSuccessors(MachineBasicBlock *fromMBB);
   
   /// isSuccessor - Return true if the specified MBB is a successor of this
@@ -248,6 +271,12 @@ public:
   /// ends with an unconditional branch to some other block.
   bool isLayoutSuccessor(const MachineBasicBlock *MBB) const;
 
+  /// canFallThrough - Return true if the block can implicitly transfer
+  /// control to the block after it by falling off the end of it.  This should
+  /// return false if it can reach the block after it, but it uses an explicit
+  /// branch to do so (e.g., a table jump).  True is a conservative answer.
+  bool canFallThrough();
+
   /// getFirstTerminator - returns an iterator to the first terminator
   /// instruction of this basic block. If a terminator does not exist,
   /// it returns end()
@@ -340,6 +369,8 @@ private:   // Methods used to maintain doubly linked list of blocks...
 
 raw_ostream& operator<<(raw_ostream &OS, const MachineBasicBlock &MBB);
 
+void WriteAsOperand(raw_ostream &, const MachineBasicBlock*, bool t);
+
 //===--------------------------------------------------------------------===//
 // GraphTraits specializations for machine basic block graphs (machine-CFGs)
 //===--------------------------------------------------------------------===//
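
The new AddressTaken bit above is a sticky, conservative flag: once a block
may be the target of an indirect branch, passes can no longer assume they see
all of its predecessors. A toy illustration of how such a flag typically
gates a transform (the merge test here is invented for illustration):

    #include <cassert>

    struct ToyBlock {
      bool addressTaken;
      ToyBlock() : addressTaken(false) {}
      void setHasAddressTaken() { addressTaken = true; }  // one-way switch
      bool hasAddressTaken() const { return addressTaken; }
    };

    // A transform that folds a block into its single predecessor must bail
    // out when the block can also be reached through an indirect branch.
    static bool canMergeIntoPredecessor(const ToyBlock &b) {
      return !b.hasAddressTaken();
    }

    int main() {
      ToyBlock b;
      assert(canMergeIntoPredecessor(b));
      b.setHasAddressTaken();
      assert(!canMergeIntoPredecessor(b));
      return 0;
    }
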
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeEmitter.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeEmitter.h
index 707b020..791db00 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeEmitter.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeEmitter.h
@@ -17,7 +17,7 @@
 #ifndef LLVM_CODEGEN_MACHINECODEEMITTER_H
 #define LLVM_CODEGEN_MACHINECODEEMITTER_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/Support/DebugLoc.h"
 
 namespace llvm {
@@ -48,17 +48,41 @@ class Function;
 /// occurred, more memory is allocated, and we reemit the code into it.
 /// 
 class MachineCodeEmitter {
+public:
+  class BufferState {
+    friend class MachineCodeEmitter;
+    /// BufferBegin/BufferEnd - Pointers to the start and end of the memory
+    /// allocated for this code buffer.
+    uint8_t *BufferBegin, *BufferEnd;
+
+    /// CurBufferPtr - Pointer to the next byte of memory to fill when emitting
+    /// code.  This is guaranteed to be in the range [BufferBegin,BufferEnd].  If
+    /// this pointer is at BufferEnd, it will never move due to code emission,
+    /// and all code emission requests will be ignored (this is the buffer
+    /// overflow condition).
+    uint8_t *CurBufferPtr;
+  public:
+    BufferState() : BufferBegin(NULL), BufferEnd(NULL), CurBufferPtr(NULL) {}
+  };
+
 protected:
-  /// BufferBegin/BufferEnd - Pointers to the start and end of the memory
-  /// allocated for this code buffer.
-  uint8_t *BufferBegin, *BufferEnd;
-  
-  /// CurBufferPtr - Pointer to the next byte of memory to fill when emitting 
-  /// code.  This is guranteed to be in the range [BufferBegin,BufferEnd].  If
-  /// this pointer is at BufferEnd, it will never move due to code emission, and
-  /// all code emission requests will be ignored (this is the buffer overflow
-  /// condition).
-  uint8_t *CurBufferPtr;
+  /// These have the same meanings as the fields in BufferState
+  uint8_t *BufferBegin, *BufferEnd, *CurBufferPtr;
+
+  /// Save or restore the current buffer state.  The BufferState objects must be
+  /// used as a stack.
+  void SaveStateTo(BufferState &BS) {
+    assert(BS.BufferBegin == NULL &&
+           "Can't save state into the same BufferState twice.");
+    BS.BufferBegin = BufferBegin;
+    BS.BufferEnd = BufferEnd;
+    BS.CurBufferPtr = CurBufferPtr;
+  }
+  void RestoreStateFrom(BufferState &BS) {
+    BufferBegin = BS.BufferBegin;
+    BufferEnd = BS.BufferEnd;
+    CurBufferPtr = BS.CurBufferPtr;
+  }
 
 public:
   virtual ~MachineCodeEmitter() {}
@@ -237,7 +261,7 @@ public:
   /// MachineInstruction.  This is called before emitting any bytes associated
   /// with the instruction.  Even if successive instructions have the same debug
   /// location, this method will be called for each one.
-  virtual void processDebugLoc(DebugLoc DL) {}
+  virtual void processDebugLoc(DebugLoc DL, bool BeforePrintingInsn) {}
 
   /// emitLabel - Emits a label
   virtual void emitLabel(uint64_t LabelID) = 0;
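
SaveStateTo/RestoreStateFrom above require BufferState objects to be used as
a stack, and the NULL-initialized BufferBegin is what lets SaveStateTo assert
against reuse. A standalone model of that discipline, with the three pointers
collapsed into a single value for brevity:

    #include <cassert>
    #include <cstddef>

    struct ToyEmitter {
      struct BufferState {
        size_t cur;
        bool saved;               // stands in for the BufferBegin == NULL test
        BufferState() : cur(0), saved(false) {}
      };
      size_t cur;                 // stands in for BufferBegin/End/CurBufferPtr

      ToyEmitter() : cur(0) {}
      void SaveStateTo(BufferState &bs) {
        assert(!bs.saved && "Can't save state into the same BufferState twice.");
        bs.cur = cur;
        bs.saved = true;
      }
      void RestoreStateFrom(BufferState &bs) { cur = bs.cur; }
    };

    int main() {
      ToyEmitter e;
      e.cur = 10;
      ToyEmitter::BufferState outer;
      e.SaveStateTo(outer);          // push
      e.cur = 99;                    // emit into a nested buffer
      e.RestoreStateFrom(outer);     // pop, strictly LIFO
      assert(e.cur == 10);
      return 0;
    }
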
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeInfo.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeInfo.h
index 024e602..a75c02a 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeInfo.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeInfo.h
@@ -17,6 +17,8 @@
 #ifndef EE_MACHINE_CODE_INFO_H
 #define EE_MACHINE_CODE_INFO_H
 
+#include "llvm/System/DataTypes.h"
+
 namespace llvm {
 
 class MachineCodeInfo {
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineDominators.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineDominators.h
index e56776b..086528a 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineDominators.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineDominators.h
@@ -23,8 +23,6 @@
 
 namespace llvm {
 
-inline void WriteAsOperand(raw_ostream &, const MachineBasicBlock*, bool t) {  }
-
 template<>
 inline void DominatorTreeBase<MachineBasicBlock>::addRoot(MachineBasicBlock* MBB) {
   this->Roots.push_back(MBB);
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineFrameInfo.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineFrameInfo.h
index b5479ba..bed82af 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineFrameInfo.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineFrameInfo.h
@@ -15,9 +15,11 @@
 #define LLVM_CODEGEN_MACHINEFRAMEINFO_H
 
 #include "llvm/ADT/BitVector.h"
-#include "llvm/ADT/DenseSet.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/ADT/DenseMap.h"
+#include "llvm/ADT/SmallVector.h"
+#include "llvm/System/DataTypes.h"
 #include <cassert>
+#include <limits>
 #include <vector>
 
 namespace llvm {
@@ -86,6 +88,10 @@ class MachineFrameInfo {
 
   // StackObject - Represent a single object allocated on the stack.
   struct StackObject {
+    // SPOffset - The offset of this object from the stack pointer on entry to
+    // the function.  This field has no meaning for a variable sized element.
+    int64_t SPOffset;
+    
     // The size of this object on the stack. 0 means a variable sized object,
     // ~0ULL means a dead object.
     uint64_t Size;
@@ -98,12 +104,14 @@ class MachineFrameInfo {
     // default, fixed objects are immutable unless marked otherwise.
     bool isImmutable;
 
-    // SPOffset - The offset of this object from the stack pointer on entry to
-    // the function.  This field has no meaning for a variable sized element.
-    int64_t SPOffset;
-    
-    StackObject(uint64_t Sz, unsigned Al, int64_t SP = 0, bool IM = false)
-      : Size(Sz), Alignment(Al), isImmutable(IM), SPOffset(SP) {}
+    // isSpillSlot - If true, the stack object is used as a spill slot. It
+    // cannot alias any other memory objects.
+    bool isSpillSlot;
+
+    StackObject(uint64_t Sz, unsigned Al, int64_t SP, bool IM,
+                bool isSS)
+      : SPOffset(SP), Size(Sz), Alignment(Al), isImmutable(IM),
+        isSpillSlot(isSS) {}
   };
 
   /// Objects - The list of stack objects allocated...
@@ -176,6 +184,10 @@ class MachineFrameInfo {
   /// CSIValid - Has CSInfo been set yet?
   bool CSIValid;
 
+  /// SpillObjects - A vector indicating which frame indices refer to
+  /// spill slots.
+  SmallVector<bool, 8> SpillObjects;
+
   /// MMI - This field is set (via setMachineModuleInfo) by a module info
   /// consumer (ex. DwarfWriter) to indicate that frame layout information
   /// should be acquired.  Typically, it's the responsibility of the target's
@@ -186,6 +198,7 @@ class MachineFrameInfo {
   /// TargetFrameInfo - Target information about frame layout.
   ///
   const TargetFrameInfo &TFI;
+
 public:
   explicit MachineFrameInfo(const TargetFrameInfo &tfi) : TFI(tfi) {
     StackSize = NumFixedObjects = OffsetAdjustment = MaxAlignment = 0;
@@ -335,7 +348,7 @@ public:
   /// index with a negative value.
   ///
   int CreateFixedObject(uint64_t Size, int64_t SPOffset,
-                        bool Immutable = true);
+                        bool Immutable, bool isSS);
   
   
   /// isFixedObjectIndex - Returns true if the specified index corresponds to a
@@ -352,6 +365,14 @@ public:
     return Objects[ObjectIdx+NumFixedObjects].isImmutable;
   }
 
+  /// isSpillSlotObjectIndex - Returns true if the specified index corresponds
+  /// to a spill slot.
+  bool isSpillSlotObjectIndex(int ObjectIdx) const {
+    assert(unsigned(ObjectIdx+NumFixedObjects) < Objects.size() &&
+           "Invalid Object Idx!");
+    return Objects[ObjectIdx+NumFixedObjects].isSpillSlot;
+  }
+
   /// isDeadObjectIndex - Returns true if the specified index corresponds to
   /// a dead object.
   bool isDeadObjectIndex(int ObjectIdx) const {
@@ -360,13 +381,25 @@ public:
     return Objects[ObjectIdx+NumFixedObjects].Size == ~0ULL;
   }
 
-  /// CreateStackObject - Create a new statically sized stack object, returning
-  /// a nonnegative identifier to represent it.
+  /// CreateStackObject - Create a new statically sized stack object,
+  /// returning a nonnegative identifier to represent it.
   ///
-  int CreateStackObject(uint64_t Size, unsigned Alignment) {
+  int CreateStackObject(uint64_t Size, unsigned Alignment, bool isSS) {
     assert(Size != 0 && "Cannot allocate zero size stack objects!");
-    Objects.push_back(StackObject(Size, Alignment));
-    return (int)Objects.size()-NumFixedObjects-1;
+    Objects.push_back(StackObject(Size, Alignment, 0, false, isSS));
+    int Index = (int)Objects.size()-NumFixedObjects-1;
+    assert(Index >= 0 && "Bad frame index!");
+    return Index;
+  }
+
+  /// CreateSpillStackObject - Create a new statically sized stack
+  /// object that represents a spill slot, returning a nonnegative
+  /// identifier to represent it.
+  ///
+  int CreateSpillStackObject(uint64_t Size, unsigned Alignment) {
+    int Index = CreateStackObject(Size, Alignment, true);
+    return Index;
   }
 
   /// RemoveStackObject - Remove or mark dead a statically sized stack object.
@@ -383,10 +416,10 @@ public:
   ///
   int CreateVariableSizedObject() {
     HasVarSizedObjects = true;
-    Objects.push_back(StackObject(0, 1));
+    Objects.push_back(StackObject(0, 1, 0, false, false));
     return (int)Objects.size()-NumFixedObjects-1;
   }
-  
+
   /// getCalleeSavedInfo - Returns a reference to call saved info vector for the
   /// current function.
   const std::vector<CalleeSavedInfo> &getCalleeSavedInfo() const {
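
The frame-index arithmetic above is easy to mis-read in diff form: fixed
objects occupy negative indices, ordinary objects count up from zero, and the
new isSpillSlot bit travels with each object. A standalone sketch of just
that bookkeeping, with invented names:

    #include <cassert>
    #include <stdint.h>
    #include <vector>

    struct ToyFrameInfo {
      struct Obj { uint64_t size; bool isSpillSlot; };
      std::vector<Obj> objects;   // fixed objects first, then ordinary ones
      int numFixed;

      ToyFrameInfo() : numFixed(0) {}

      // Mirrors CreateStackObject's index computation.
      int createStackObject(uint64_t size, bool isSS) {
        assert(size != 0 && "Cannot allocate zero size stack objects!");
        Obj o = { size, isSS };
        objects.push_back(o);
        int index = (int)objects.size() - numFixed - 1;
        assert(index >= 0 && "Bad frame index!");
        return index;
      }
      // Mirrors isSpillSlotObjectIndex: indices are offset by numFixed.
      bool isSpillSlotObjectIndex(int idx) const {
        return objects[idx + numFixed].isSpillSlot;
      }
    };

    int main() {
      ToyFrameInfo fi;
      int a = fi.createStackObject(8, /*isSS=*/false);
      int b = fi.createStackObject(4, /*isSS=*/true);
      assert(a == 0 && b == 1);
      assert(!fi.isSpillSlotObjectIndex(a) && fi.isSpillSlotObjectIndex(b));
      return 0;
    }
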
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineFunction.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineFunction.h
index 8b881f5..f1bfa01 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineFunction.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineFunction.h
@@ -23,7 +23,6 @@
 #include "llvm/Support/DebugLoc.h"
 #include "llvm/Support/Allocator.h"
 #include "llvm/Support/Recycler.h"
-#include <map>
 
 namespace llvm {
 
@@ -115,6 +114,9 @@ class MachineFunction {
   // The alignment of the function.
   unsigned Alignment;
 
+  MachineFunction(const MachineFunction &); // intentionally unimplemented
+  void operator=(const MachineFunction&);   // intentionally unimplemented
+
 public:
   MachineFunction(Function *Fn, const TargetMachine &TM);
   ~MachineFunction();
@@ -229,6 +231,10 @@ public:
   ///
   void dump() const;
 
+  /// verify - Run the current MachineFunction through the machine code
+  /// verifier, useful for debugger use.
+  void verify(Pass *p=NULL, bool allowDoubleDefs=false) const;
+
   // Provide accessors for the MachineBasicBlock list...
   typedef BasicBlockListType::iterator iterator;
   typedef BasicBlockListType::const_iterator const_iterator;
@@ -267,6 +273,9 @@ public:
   void splice(iterator InsertPt, iterator MBBI) {
     BasicBlocks.splice(InsertPt, BasicBlocks, MBBI);
   }
+  void splice(iterator InsertPt, iterator MBBI, iterator MBBE) {
+    BasicBlocks.splice(InsertPt, BasicBlocks, MBBI, MBBE);
+  }
 
   void remove(iterator MBBI) {
     BasicBlocks.remove(MBBI);
@@ -329,7 +338,7 @@ public:
                                           unsigned base_alignment);
 
   /// getMachineMemOperand - Allocate a new MachineMemOperand by copying
-  /// an existing one, adjusting by an offset and using the given EVT.
+  /// an existing one, adjusting by an offset and using the given size.
   /// MachineMemOperands are owned by the MachineFunction and need not be
   /// explicitly deallocated.
   MachineMemOperand *getMachineMemOperand(const MachineMemOperand *MMO,
@@ -339,6 +348,20 @@ public:
   /// pointers.  This array is owned by the MachineFunction.
   MachineInstr::mmo_iterator allocateMemRefsArray(unsigned long Num);
 
+  /// extractLoadMemRefs - Allocate an array and populate it with just the
+  /// load information from the given MachineMemOperand sequence.
+  std::pair<MachineInstr::mmo_iterator,
+            MachineInstr::mmo_iterator>
+    extractLoadMemRefs(MachineInstr::mmo_iterator Begin,
+                       MachineInstr::mmo_iterator End);
+
+  /// extractStoreMemRefs - Allocate an array and populate it with just the
+  /// store information from the given MachineMemOperand sequence.
+  std::pair<MachineInstr::mmo_iterator,
+            MachineInstr::mmo_iterator>
+    extractStoreMemRefs(MachineInstr::mmo_iterator Begin,
+                        MachineInstr::mmo_iterator End);
+
   //===--------------------------------------------------------------------===//
   // Debug location.
   //
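
extractLoadMemRefs/extractStoreMemRefs above split one memory-operand
sequence into separately allocated load-only and store-only arrays. A sketch
of the same partitioning with standard containers; LLVM allocates the result
from the MachineFunction instead, and the MemRef type here is invented:

    #include <cassert>
    #include <vector>

    struct MemRef { bool isLoad; bool isStore; };

    // Copy just the load entries out of refs into a fresh array.  Note an
    // entry can be both a load and a store, and then appears in both results.
    static std::vector<MemRef> extractLoads(const std::vector<MemRef> &refs) {
      std::vector<MemRef> loads;
      for (size_t i = 0; i < refs.size(); ++i)
        if (refs[i].isLoad)
          loads.push_back(refs[i]);
      return loads;
    }

    int main() {
      std::vector<MemRef> refs;
      MemRef l = { true, false }, s = { false, true }, ls = { true, true };
      refs.push_back(l); refs.push_back(s); refs.push_back(ls);
      assert(extractLoads(refs).size() == 2);  // the load+store entry counts
      return 0;
    }
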
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineFunctionAnalysis.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineFunctionAnalysis.h
index d020a7b..aa4cc91 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineFunctionAnalysis.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineFunctionAnalysis.h
@@ -31,7 +31,7 @@ private:
 
 public:
   static char ID;
-  explicit MachineFunctionAnalysis(TargetMachine &tm,
+  explicit MachineFunctionAnalysis(const TargetMachine &tm,
                                    CodeGenOpt::Level OL = CodeGenOpt::Default);
   ~MachineFunctionAnalysis();
 
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstr.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstr.h
index 66af73e..c620449 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstr.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstr.h
@@ -19,6 +19,7 @@
 #include "llvm/ADT/ilist.h"
 #include "llvm/ADT/ilist_node.h"
 #include "llvm/ADT/STLExtras.h"
+#include "llvm/CodeGen/AsmPrinter.h"
 #include "llvm/CodeGen/MachineOperand.h"
 #include "llvm/Target/TargetInstrDesc.h"
 #include "llvm/Support/DebugLoc.h"
@@ -26,6 +27,7 @@
 
 namespace llvm {
 
+class AliasAnalysis;
 class TargetInstrDesc;
 class TargetInstrInfo;
 class TargetRegisterInfo;
@@ -44,6 +46,13 @@ private:
   unsigned short NumImplicitOps;        // Number of implicit operands (which
                                         // are determined at construction time).
 
+  unsigned short AsmPrinterFlags;       // Various bits of information used by
+                                        // the AsmPrinter to emit helpful
+                                        // comments.  This is *not* semantic
+                                        // information.  Do not use this for
+                                        // anything other than to convey comment
+                                        // information to AsmPrinter.
+
   std::vector<MachineOperand> Operands; // the operands
   mmo_iterator MemRefs;                 // information on memory references
   mmo_iterator MemRefsEnd;
@@ -106,6 +115,22 @@ public:
   const MachineBasicBlock* getParent() const { return Parent; }
   MachineBasicBlock* getParent() { return Parent; }
 
+  /// getAsmPrinterFlags - Return the asm printer flags bitvector.
+  ///
+  unsigned short getAsmPrinterFlags() const { return AsmPrinterFlags; }
+
+  /// getAsmPrinterFlag - Return whether an AsmPrinter flag is set.
+  ///
+  bool getAsmPrinterFlag(AsmPrinter::CommentFlag Flag) const {
+    return AsmPrinterFlags & Flag;
+  }
+
+  /// setAsmPrinterFlag - Set a flag for the AsmPrinter.
+  ///
+  void setAsmPrinterFlag(unsigned short Flag) {
+    AsmPrinterFlags |= Flag;
+  }
+
   /// getDebugLoc - Returns the debug location id of this MachineInstr.
   ///
   DebugLoc getDebugLoc() const { return debugLoc; }
@@ -274,11 +299,13 @@ public:
   /// isSafeToMove - Return true if it is safe to move this instruction. If
   /// SawStore is set to true, it means that there is a store (or call) between
   /// the instruction's location and its intended destination.
-  bool isSafeToMove(const TargetInstrInfo *TII, bool &SawStore) const;
+  bool isSafeToMove(const TargetInstrInfo *TII, bool &SawStore,
+                    AliasAnalysis *AA) const;
 
   /// isSafeToReMat - Return true if it's safe to rematerialize the specified
   /// instruction which defined the specified register instead of copying it.
-  bool isSafeToReMat(const TargetInstrInfo *TII, unsigned DstReg) const;
+  bool isSafeToReMat(const TargetInstrInfo *TII, unsigned DstReg,
+                     AliasAnalysis *AA) const;
 
   /// hasVolatileMemoryRef - Return true if this instruction may have a
   /// volatile memory reference, or if the information describing the
@@ -286,6 +313,13 @@ public:
   /// have no volatile memory references.
   bool hasVolatileMemoryRef() const;
 
+  /// isInvariantLoad - Return true if this instruction is loading from a
+  /// location whose value is invariant across the function.  For example,
+  /// loading a value from the constant pool or from the argument area of
+  /// a function if it does not change.  This should only return true if *all*
+  /// loads the instruction does are invariant (if it does multiple loads).
+  bool isInvariantLoad(AliasAnalysis *AA) const;
+
   //
   // Debugging support
   //
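
The AsmPrinterFlags field above is a plain bitvector: setAsmPrinterFlag ORs a
bit in and getAsmPrinterFlag masks it back out, so several comment hints can
coexist in one unsigned short. A standalone illustration; the flag values are
invented stand-ins for AsmPrinter::CommentFlag:

    #include <cassert>

    enum ToyCommentFlag { HintA = 1, HintB = 2 };  // invented values

    struct ToyInstr {
      unsigned short asmPrinterFlags;
      ToyInstr() : asmPrinterFlags(0) {}
      void setAsmPrinterFlag(unsigned short f) { asmPrinterFlags |= f; }
      bool getAsmPrinterFlag(ToyCommentFlag f) const {
        return (asmPrinterFlags & f) != 0;
      }
    };

    int main() {
      ToyInstr mi;
      mi.setAsmPrinterFlag(HintA);
      assert(mi.getAsmPrinterFlag(HintA) && !mi.getAsmPrinterFlag(HintB));
      mi.setAsmPrinterFlag(HintB);           // flags accumulate
      assert(mi.getAsmPrinterFlag(HintA) && mi.getAsmPrinterFlag(HintB));
      return 0;
    }
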
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstrBuilder.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstrBuilder.h
index 7f681d7..6ca63f0 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstrBuilder.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstrBuilder.h
@@ -108,13 +108,6 @@ public:
     return *this;
   }
 
-  const MachineInstrBuilder &addMetadata(MDNode *N,
-                                         int64_t Offset = 0,
-                                         unsigned char TargetFlags = 0) const {
-    MI->addOperand(MachineOperand::CreateMDNode(N, Offset, TargetFlags));
-    return *this;
-  }
-
   const MachineInstrBuilder &addExternalSymbol(const char *FnName,
                                           unsigned char TargetFlags = 0) const {
     MI->addOperand(MachineOperand::CreateES(FnName, TargetFlags));
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineJumpTableInfo.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineJumpTableInfo.h
index 3ff2f2e..57c65c8 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineJumpTableInfo.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineJumpTableInfo.h
@@ -69,6 +69,11 @@ public:
   /// the jump tables to branch to New instead.
   bool ReplaceMBBInJumpTables(MachineBasicBlock *Old, MachineBasicBlock *New);
 
+  /// ReplaceMBBInJumpTable - If Old is a target of the jump tables, update
+  /// the jump table to branch to New instead.
+  bool ReplaceMBBInJumpTable(unsigned Idx, MachineBasicBlock *Old,
+                             MachineBasicBlock *New);
+
   /// getEntrySize - Returns the size of an individual field in a jump table. 
   ///
   unsigned getEntrySize() const { return EntrySize; }
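
ReplaceMBBInJumpTable above is the per-table variant of the existing
ReplaceMBBInJumpTables: scan one table's entries and retarget every Old entry
to New, reporting whether anything changed. A sketch over a vector-of-vectors
model, with block pointers reduced to ints:

    #include <cassert>
    #include <vector>

    typedef std::vector<std::vector<int> > JumpTables;

    // Retarget every entry equal to oldBB in table idx; report changes.
    static bool replaceInJumpTable(JumpTables &jt, unsigned idx,
                                   int oldBB, int newBB) {
      bool madeChange = false;
      for (size_t i = 0; i < jt[idx].size(); ++i)
        if (jt[idx][i] == oldBB) {
          jt[idx][i] = newBB;
          madeChange = true;
        }
      return madeChange;
    }

    int main() {
      JumpTables jt(1);
      jt[0].push_back(7); jt[0].push_back(3); jt[0].push_back(7);
      assert(replaceInJumpTable(jt, 0, 7, 9));
      assert(jt[0][0] == 9 && jt[0][1] == 3 && jt[0][2] == 9);
      assert(!replaceInJumpTable(jt, 0, 7, 9));  // nothing left to replace
      return 0;
    }
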
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineLoopInfo.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineLoopInfo.h
index 65ad4e4..d3df805 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineLoopInfo.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineLoopInfo.h
@@ -38,6 +38,17 @@ namespace llvm {
 class MachineLoop : public LoopBase<MachineBasicBlock, MachineLoop> {
 public:
   MachineLoop();
+
+  /// getTopBlock - Return the "top" block in the loop, which is the first
+  /// block in the linear layout, ignoring any parts of the loop not
+  /// contiguous with the part that contains the header.
+  MachineBasicBlock *getTopBlock();
+
+  /// getBottomBlock - Return the "bottom" block in the loop, which is the last
+  /// block in the linear layout, ignoring any parts of the loop not
+  /// contiguous with the part that contains the header.
+  MachineBasicBlock *getBottomBlock();
+
 private:
   friend class LoopInfoBase<MachineBasicBlock, MachineLoop>;
   explicit MachineLoop(MachineBasicBlock *MBB)
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineMemOperand.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineMemOperand.h
index b7e267d..5dee199 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineMemOperand.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineMemOperand.h
@@ -16,6 +16,8 @@
 #ifndef LLVM_CODEGEN_MACHINEMEMOPERAND_H
 #define LLVM_CODEGEN_MACHINEMEMOPERAND_H
 
+#include "llvm/System/DataTypes.h"
+
 namespace llvm {
 
 class Value;
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfo.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfo.h
index b7b9019..bac9fce 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfo.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfo.h
@@ -32,7 +32,7 @@
 #define LLVM_CODEGEN_MACHINEMODULEINFO_H
 
 #include "llvm/Support/Dwarf.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/UniqueVector.h"
@@ -42,6 +42,7 @@
 #include "llvm/CodeGen/MachineLocation.h"
 #include "llvm/GlobalValue.h"
 #include "llvm/Pass.h"
+#include "llvm/Metadata.h"
 
 namespace llvm {
 
@@ -134,9 +135,6 @@ class MachineModuleInfo : public ImmutablePass {
   /// llvm.compiler.used.
   SmallPtrSet<const Function *, 32> UsedFunctions;
 
-  /// UsedDbgLabels - labels are used by debug info entries.
-  SmallSet<unsigned, 8> UsedDbgLabels;
-
   bool CallsEHReturn;
   bool CallsUnwindInit;
  
@@ -147,7 +145,9 @@ class MachineModuleInfo : public ImmutablePass {
 public:
   static char ID; // Pass identification, replacement for typeid
 
-  typedef DenseMap<MDNode *, std::pair<MDNode *, unsigned> > VariableDbgInfoMapTy;
+  typedef std::pair<unsigned, TrackingVH<MDNode> > UnsignedAndMDNodePair;
+  typedef SmallVector< std::pair<TrackingVH<MDNode>, UnsignedAndMDNodePair>, 4>
+    VariableDbgInfoMapTy;
   VariableDbgInfoMapTy VariableDbgInfo;
 
   MachineModuleInfo();
@@ -229,19 +229,6 @@ public:
     return LabelID ? LabelIDList[LabelID - 1] : 0;
   }
 
-  /// isDbgLabelUsed - Return true if label with LabelID is used by
-  /// DwarfWriter.
-  bool isDbgLabelUsed(unsigned LabelID) {
-    return UsedDbgLabels.count(LabelID);
-  }
-  
-  /// RecordUsedDbgLabel - Mark label with LabelID as used. This is used
-  /// by DwarfWriter to inform DebugLabelFolder that certain labels are
-  /// not to be deleted.
-  void RecordUsedDbgLabel(unsigned LabelID) {
-    UsedDbgLabels.insert(LabelID);
-  }
-
   /// getFrameMoves - Returns a reference to a list of moves done in the current
   /// function's prologue.  Used to construct frame maps for debug and exception
   /// handling consumers.
@@ -332,9 +319,8 @@ public:
 
   /// setVariableDbgInfo - Collect information used to emit debugging information
   /// of a variable.
-  void setVariableDbgInfo(MDNode *N, MDNode *L, unsigned S) {
-    if (N && L)
-      VariableDbgInfo[N] = std::make_pair(L, S);
+  void setVariableDbgInfo(MDNode *N, unsigned Slot, MDNode *Scope) {
+    VariableDbgInfo.push_back(std::make_pair(N, std::make_pair(Slot, Scope)));
   }
 
   VariableDbgInfoMapTy &getVariableDbgInfo() {  return VariableDbgInfo;  }
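
The VariableDbgInfo change above swaps a DenseMap keyed on the variable's
MDNode for an append-only vector of (variable, (slot, scope)) pairs, so
entries keep their insertion order and a second entry for the same variable
no longer silently overwrites the first. A sketch of that behavioral
difference with standard containers, using ints in place of MDNode pointers:

    #include <cassert>
    #include <map>
    #include <utility>
    #include <vector>

    typedef std::pair<unsigned, int> SlotAndScope;            // (slot, scope)
    typedef std::vector<std::pair<int, SlotAndScope> > DbgVec;

    int main() {
      // Old shape: a map, so re-recording variable 42 clobbers the entry.
      std::map<int, SlotAndScope> oldInfo;
      oldInfo[42] = SlotAndScope(1, 100);
      oldInfo[42] = SlotAndScope(2, 100);
      assert(oldInfo.size() == 1);

      // New shape: a vector, so both records survive, in order.
      DbgVec newInfo;
      newInfo.push_back(std::make_pair(42, SlotAndScope(1, 100)));
      newInfo.push_back(std::make_pair(42, SlotAndScope(2, 100)));
      assert(newInfo.size() == 2 && newInfo[0].second.first == 1u);
      return 0;
    }
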
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineOperand.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineOperand.h
index f715c44..8748afc 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineOperand.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineOperand.h
@@ -14,15 +14,15 @@
 #ifndef LLVM_CODEGEN_MACHINEOPERAND_H
 #define LLVM_CODEGEN_MACHINEOPERAND_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <cassert>
 
 namespace llvm {
   
 class ConstantFP;
+class BlockAddress;
 class MachineBasicBlock;
 class GlobalValue;
-class MDNode;
 class MachineInstr;
 class TargetMachine;
 class MachineRegisterInfo;
@@ -42,7 +42,7 @@ public:
     MO_JumpTableIndex,         ///< Address of indexed Jump Table for switch
     MO_ExternalSymbol,         ///< Name of external global symbol
     MO_GlobalAddress,          ///< Address of a global value
-    MO_Metadata                ///< Metadata info
+    MO_BlockAddress            ///< Address of a basic block
   };
 
 private:
@@ -108,7 +108,7 @@ private:
         int Index;                // For MO_*Index - The index itself.
         const char *SymbolName;   // For MO_ExternalSymbol.
         GlobalValue *GV;          // For MO_GlobalAddress.
-        MDNode *Node;             // For MO_Metadata.
+        BlockAddress *BA;         // For MO_BlockAddress.
       } Val;
       int64_t Offset;             // An offset from the object.
     } OffsetedInfo;
@@ -156,8 +156,8 @@ public:
   bool isGlobal() const { return OpKind == MO_GlobalAddress; }
   /// isSymbol - Tests if this is a MO_ExternalSymbol operand.
   bool isSymbol() const { return OpKind == MO_ExternalSymbol; }
-  /// isMetadata - Tests if this is a MO_Metadata operand.
-  bool isMetadata() const { return OpKind == MO_Metadata; }
+  /// isBlockAddress - Tests if this is a MO_BlockAddress operand.
+  bool isBlockAddress() const { return OpKind == MO_BlockAddress; }
 
   //===--------------------------------------------------------------------===//
   // Accessors for Register Operands
@@ -293,15 +293,16 @@ public:
     assert(isGlobal() && "Wrong MachineOperand accessor");
     return Contents.OffsetedInfo.Val.GV;
   }
-  
-  MDNode *getMDNode() const {
-    return Contents.OffsetedInfo.Val.Node;
+
+  BlockAddress *getBlockAddress() const {
+    assert(isBlockAddress() && "Wrong MachineOperand accessor");
+    return Contents.OffsetedInfo.Val.BA;
   }
   
   /// getOffset - Return the offset from the symbol in this operand. This always
   /// returns 0 for ExternalSymbol operands.
   int64_t getOffset() const {
-    assert((isGlobal() || isSymbol() || isCPI()) &&
+    assert((isGlobal() || isSymbol() || isCPI() || isBlockAddress()) &&
            "Wrong MachineOperand accessor");
     return Contents.OffsetedInfo.Offset;
   }
@@ -321,7 +322,7 @@ public:
   }
 
   void setOffset(int64_t Offset) {
-    assert((isGlobal() || isSymbol() || isCPI() || isMetadata()) &&
+    assert((isGlobal() || isSymbol() || isCPI() || isBlockAddress()) &&
         "Wrong MachineOperand accessor");
     Contents.OffsetedInfo.Offset = Offset;
   }
@@ -426,14 +427,6 @@ public:
     Op.setTargetFlags(TargetFlags);
     return Op;
   }
-  static MachineOperand CreateMDNode(MDNode *N, int64_t Offset,
-                                     unsigned char TargetFlags = 0) {
-    MachineOperand Op(MachineOperand::MO_Metadata);
-    Op.Contents.OffsetedInfo.Val.Node = N;
-    Op.setOffset(Offset);
-    Op.setTargetFlags(TargetFlags);
-    return Op;
-  }
   static MachineOperand CreateES(const char *SymName,
                                  unsigned char TargetFlags = 0) {
     MachineOperand Op(MachineOperand::MO_ExternalSymbol);
@@ -442,6 +435,14 @@ public:
     Op.setTargetFlags(TargetFlags);
     return Op;
   }
+  static MachineOperand CreateBA(BlockAddress *BA,
+                                 unsigned char TargetFlags = 0) {
+    MachineOperand Op(MachineOperand::MO_BlockAddress);
+    Op.Contents.OffsetedInfo.Val.BA = BA;
+    Op.setOffset(0); // Offset is always 0.
+    Op.setTargetFlags(TargetFlags);
+    return Op;
+  }
 
   friend class MachineInstr;
   friend class MachineRegisterInfo;
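
(Sketch, for illustration only: creating and querying the new
block-address operand kind; BA is an assumed BlockAddress pointer
obtained elsewhere.)

    MachineOperand Op = MachineOperand::CreateBA(BA);
    assert(Op.isBlockAddress() && "expected a block-address operand");
    BlockAddress *Target = Op.getBlockAddress(); // the wrapped BlockAddress
    int64_t Off = Op.getOffset();                // fixed at 0 by CreateBA
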
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineRegisterInfo.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineRegisterInfo.h
index 18e6020..c55cb32 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineRegisterInfo.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineRegisterInfo.h
@@ -243,6 +243,12 @@ public:
         return true;
     return false;
   }
+  bool isLiveOut(unsigned Reg) const {
+    for (liveout_iterator I = liveout_begin(), E = liveout_end(); I != E; ++I)
+      if (*I == Reg)
+        return true;
+    return false;
+  }
 
 private:
   void HandleVRegListReallocation();
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineRelocation.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineRelocation.h
index c539781..1c15fab 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineRelocation.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineRelocation.h
@@ -14,7 +14,7 @@
 #ifndef LLVM_CODEGEN_MACHINERELOCATION_H
 #define LLVM_CODEGEN_MACHINERELOCATION_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <cassert>
 
 namespace llvm {
@@ -65,7 +65,7 @@ class MachineRelocation {
 
   unsigned TargetReloType : 6; // The target relocation ID
   AddressType AddrType    : 4; // The field of Target to use
-  bool NeedStub           : 1; // True if this relocation requires a stub
+  bool MayNeedFarStub     : 1; // True if this relocation may require a far-stub
   bool GOTRelative        : 1; // Should this relocation be relative to the GOT?
   bool TargetResolve      : 1; // True if target should resolve the address
 
@@ -81,7 +81,7 @@ public:
   ///
   static MachineRelocation getGV(uintptr_t offset, unsigned RelocationType, 
                                  GlobalValue *GV, intptr_t cst = 0,
-                                 bool NeedStub = 0,
+                                 bool MayNeedFarStub = 0,
                                  bool GOTrelative = 0) {
     assert((RelocationType & ~63) == 0 && "Relocation type too large!");
     MachineRelocation Result;
@@ -89,7 +89,7 @@ public:
     Result.ConstantVal = cst;
     Result.TargetReloType = RelocationType;
     Result.AddrType = isGV;
-    Result.NeedStub = NeedStub;
+    Result.MayNeedFarStub = MayNeedFarStub;
     Result.GOTRelative = GOTrelative;
     Result.TargetResolve = false;
     Result.Target.GV = GV;
@@ -101,7 +101,7 @@ public:
   static MachineRelocation getIndirectSymbol(uintptr_t offset,
                                              unsigned RelocationType, 
                                              GlobalValue *GV, intptr_t cst = 0,
-                                             bool NeedStub = 0,
+                                             bool MayNeedFarStub = 0,
                                              bool GOTrelative = 0) {
     assert((RelocationType & ~63) == 0 && "Relocation type too large!");
     MachineRelocation Result;
@@ -109,7 +109,7 @@ public:
     Result.ConstantVal = cst;
     Result.TargetReloType = RelocationType;
     Result.AddrType = isIndirectSym;
-    Result.NeedStub = NeedStub;
+    Result.MayNeedFarStub = MayNeedFarStub;
     Result.GOTRelative = GOTrelative;
     Result.TargetResolve = false;
     Result.Target.GV = GV;
@@ -126,7 +126,7 @@ public:
     Result.ConstantVal = cst;
     Result.TargetReloType = RelocationType;
     Result.AddrType = isBB;
-    Result.NeedStub = false;
+    Result.MayNeedFarStub = false;
     Result.GOTRelative = false;
     Result.TargetResolve = false;
     Result.Target.MBB = MBB;
@@ -145,7 +145,7 @@ public:
     Result.ConstantVal = cst;
     Result.TargetReloType = RelocationType;
     Result.AddrType = isExtSym;
-    Result.NeedStub = true;
+    Result.MayNeedFarStub = true;
     Result.GOTRelative = GOTrelative;
     Result.TargetResolve = false;
     Result.Target.ExtSym = ES;
@@ -164,7 +164,7 @@ public:
     Result.ConstantVal = cst;
     Result.TargetReloType = RelocationType;
     Result.AddrType = isConstPool;
-    Result.NeedStub = false;
+    Result.MayNeedFarStub = false;
     Result.GOTRelative = false;
     Result.TargetResolve = letTargetResolve;
     Result.Target.Index = CPI;
@@ -183,7 +183,7 @@ public:
     Result.ConstantVal = cst;
     Result.TargetReloType = RelocationType;
     Result.AddrType = isJumpTable;
-    Result.NeedStub = false;
+    Result.MayNeedFarStub = false;
     Result.GOTRelative = false;
     Result.TargetResolve = letTargetResolve;
     Result.Target.Index = JTI;
@@ -258,12 +258,14 @@ public:
     return GOTRelative;
   }
 
-  /// doesntNeedStub - This function returns true if the JIT for this target
-  /// target is capable of directly handling the relocated GlobalValue reference
-  /// without using either a stub function or issuing an extra load to get the
-  /// GV address.
-  bool doesntNeedStub() const {
-    return !NeedStub;
+  /// mayNeedFarStub - This function returns true if the JIT for this target may
+  /// need either a stub function or an indirect global-variable load to handle
+  /// the relocated GlobalValue reference.  For example, the x86-64 call
+  /// instruction can only call functions within +/-2GB of the call site.
+  /// Anything farther away needs a longer mov+call sequence, which can't just
+  /// be written on top of the existing call.
+  bool mayNeedFarStub() const {
+    return MayNeedFarStub;
   }
 
   /// letTargetResolve - Return true if the target JITInfo is usually
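
(Sketch, for illustration only: how a JIT emitter might act on the
renamed flag. getOrEmitFarStub, resolveRelocation and getAddressOfGV are
hypothetical helpers, not LLVM API.)

    if (MR.mayNeedFarStub()) {
      // Target may sit outside the +/-2GB reach of a direct call, so route
      // the reference through a stub large enough for a mov+call sequence.
      void *Stub = getOrEmitFarStub(MR.getGlobalValue()); // hypothetical
      resolveRelocation(MR, Stub);                        // hypothetical
    } else {
      resolveRelocation(MR, getAddressOfGV(MR.getGlobalValue()));
    }
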
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/Passes.h b/libclamav/c++/llvm/include/llvm/CodeGen/Passes.h
index 1e7115e..8e89702 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/Passes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/Passes.h
@@ -15,13 +15,13 @@
 #ifndef LLVM_CODEGEN_PASSES_H
 #define LLVM_CODEGEN_PASSES_H
 
+#include "llvm/Target/TargetMachine.h"
 #include <string>
 
 namespace llvm {
 
   class FunctionPass;
   class PassInfo;
-  class TargetMachine;
   class TargetLowering;
   class RegisterCoalescer;
   class raw_ostream;
@@ -119,8 +119,9 @@ namespace llvm {
   ///
   FunctionPass *createLowerSubregsPass();
 
-  /// createPostRAScheduler - under development.
-  FunctionPass *createPostRAScheduler();
+  /// createPostRAScheduler - This pass performs post register allocation
+  /// scheduling.
+  FunctionPass *createPostRAScheduler(CodeGenOpt::Level OptLevel);
 
   /// BranchFolding Pass - This pass performs machine code CFG based
   /// optimizations to delete branches to branches, eliminate branches to
@@ -128,6 +129,10 @@ namespace llvm {
   /// branches.
   FunctionPass *createBranchFoldingPass(bool DefaultEnableTailMerge);
 
+  /// TailDuplicate Pass - Duplicate blocks with unconditional branches
+  /// into tails of their predecessors.
+  FunctionPass *createTailDuplicatePass();
+
   /// IfConverter Pass - This pass performs machine code if conversion.
   FunctionPass *createIfConverterPass();
 
@@ -135,11 +140,6 @@ namespace llvm {
   /// headers to target specific alignment boundary.
   FunctionPass *createCodePlacementOptPass();
 
-  /// DebugLabelFoldingPass - This pass prunes out redundant debug labels.  This
-  /// allows a debug emitter to determine if the range of two labels is empty,
-  /// by seeing if the labels map to the same reduced label.
-  FunctionPass *createDebugLabelFoldingPass();
-
   /// getRegisterAllocator - This creates an instance of the register allocator
   /// for the Sparc.
   FunctionPass *getRegisterAllocator(TargetMachine &T);
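
(Sketch, for illustration only: the call-site change implied by the new
factory signatures; PM is an assumed PassManagerBase.)

    PM.add(createPostRAScheduler(CodeGenOpt::Default)); // was createPostRAScheduler()
    PM.add(createBranchFoldingPass(/*DefaultEnableTailMerge=*/true));
    PM.add(createTailDuplicatePass());                  // new pass above
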
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/ProcessImplicitDefs.h b/libclamav/c++/llvm/include/llvm/CodeGen/ProcessImplicitDefs.h
new file mode 100644
index 0000000..cec867f
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/ProcessImplicitDefs.h
@@ -0,0 +1,41 @@
+//===-------------- llvm/CodeGen/ProcessImplicitDefs.h ----------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+
+#ifndef LLVM_CODEGEN_PROCESSIMPLICITDEFS_H
+#define LLVM_CODEGEN_PROCESSIMPLICITDEFS_H
+
+#include "llvm/CodeGen/MachineFunctionPass.h"
+
+namespace llvm {
+
+  class MachineInstr;
+  class TargetInstrInfo;
+
+  /// Process IMPLICIT_DEF instructions and make sure there is one implicit_def
+  /// for each use. Add an isUndef marker to implicit_def defs and their uses.
+  class ProcessImplicitDefs : public MachineFunctionPass {
+  private:
+
+    bool CanTurnIntoImplicitDef(MachineInstr *MI, unsigned Reg,
+                                unsigned OpIdx, const TargetInstrInfo *tii_);
+
+  public:
+    static char ID;
+
+    ProcessImplicitDefs() : MachineFunctionPass(&ID) {}
+
+    virtual void getAnalysisUsage(AnalysisUsage &au) const;
+
+    virtual bool runOnMachineFunction(MachineFunction &fn);
+  };
+
+}
+
+#endif // LLVM_CODEGEN_PROCESSIMPLICITDEFS_H
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/PseudoSourceValue.h b/libclamav/c++/llvm/include/llvm/CodeGen/PseudoSourceValue.h
index c6be645..bace631 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/PseudoSourceValue.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/PseudoSourceValue.h
@@ -32,19 +32,28 @@ namespace llvm {
     virtual void printCustom(raw_ostream &O) const;
 
   public:
-    PseudoSourceValue();
+    explicit PseudoSourceValue(enum ValueTy Subclass = PseudoSourceValueVal);
 
     /// isConstant - Test whether the memory pointed to by this
     /// PseudoSourceValue has a constant value.
     ///
     virtual bool isConstant(const MachineFrameInfo *) const;
 
+    /// isAliased - Test whether the memory pointed to by this
+    /// PseudoSourceValue may also be pointed to by an LLVM IR Value.
+    virtual bool isAliased(const MachineFrameInfo *) const;
+
+    /// mayAlias - Return true if the memory pointed to by this
+    /// PseudoSourceValue can ever alias a LLVM IR Value.
+    virtual bool mayAlias(const MachineFrameInfo *) const;
+
     /// classof - Methods for support type inquiry through isa, cast, and
     /// dyn_cast:
     ///
     static inline bool classof(const PseudoSourceValue *) { return true; }
     static inline bool classof(const Value *V) {
-      return V->getValueID() == PseudoSourceValueVal;
+      return V->getValueID() == PseudoSourceValueVal ||
+             V->getValueID() == FixedStackPseudoSourceValueVal;
     }
 
     /// A pseudo source value referencing a fixed stack frame entry,
@@ -68,6 +77,36 @@ namespace llvm {
     /// constant, this doesn't need to identify a specific jump table.
     static const PseudoSourceValue *getJumpTable();
   };
+
+  /// FixedStackPseudoSourceValue - A specialized PseudoSourceValue
+  /// for holding FixedStack values, which must include a frame
+  /// index.
+  class FixedStackPseudoSourceValue : public PseudoSourceValue {
+    const int FI;
+  public:
+    explicit FixedStackPseudoSourceValue(int fi) :
+        PseudoSourceValue(FixedStackPseudoSourceValueVal), FI(fi) {}
+
+    /// classof - Methods for support type inquiry through isa, cast, and
+    /// dyn_cast:
+    ///
+    static inline bool classof(const FixedStackPseudoSourceValue *) {
+      return true;
+    }
+    static inline bool classof(const Value *V) {
+      return V->getValueID() == FixedStackPseudoSourceValueVal;
+    }
+
+    virtual bool isConstant(const MachineFrameInfo *MFI) const;
+
+    virtual bool isAliased(const MachineFrameInfo *MFI) const;
+
+    virtual bool mayAlias(const MachineFrameInfo *) const;
+
+    virtual void printCustom(raw_ostream &OS) const;
+
+    int getFrameIndex() const { return FI; }
+  };
 } // End llvm namespace
 
 #endif
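
(Sketch, for illustration only: the widened classof lets isa/dyn_cast see
both the base class and the new fixed-stack subclass. V and MFI are an
assumed const Value pointer and MachineFrameInfo pointer.)

    if (const FixedStackPseudoSourceValue *FS =
          dyn_cast<FixedStackPseudoSourceValue>(V)) {
      int FI = FS->getFrameIndex();        // backing frame slot
      (void)FI;
    } else if (const PseudoSourceValue *PSV =
                 dyn_cast<PseudoSourceValue>(V)) {
      bool Aliased = PSV->isAliased(MFI);  // may an IR pointer alias it?
      (void)Aliased;
    }
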
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/RegisterScavenging.h b/libclamav/c++/llvm/include/llvm/CodeGen/RegisterScavenging.h
index 7aa1086..84b726d 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/RegisterScavenging.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/RegisterScavenging.h
@@ -117,6 +117,9 @@ public:
     return scavengeRegister(RegClass, MBBI, SPAdj);
   }
 
+  /// setUsed - Tell the scavenger a register is used.
+  ///
+  void setUsed(unsigned Reg);
 private:
   /// isReserved - Returns true if a register is reserved. It is never "unused".
   bool isReserved(unsigned Reg) const { return ReservedRegs.test(Reg); }
@@ -131,7 +134,6 @@ private:
 
   /// setUsed / setUnused - Mark the state of one or a number of registers.
   ///
-  void setUsed(unsigned Reg);
   void setUsed(BitVector &Regs) {
     RegsAvailable &= ~Regs;
   }
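
(Sketch, for illustration only: with setUsed now public, frame-lowering
code can mark a register busy after scavenging it. RS, RC, II and SPAdj
are assumed to come from the surrounding code; FindUnusedReg is assumed
to be the existing lookup in this header.)

    unsigned Reg = RS->FindUnusedReg(RC);
    if (Reg == 0)
      Reg = RS->scavengeRegister(RC, II, SPAdj); // may spill to free one
    RS->setUsed(Reg);                            // legal from outside now
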
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/RuntimeLibcalls.h b/libclamav/c++/llvm/include/llvm/CodeGen/RuntimeLibcalls.h
index 7a40f02..c404ab6 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/RuntimeLibcalls.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/RuntimeLibcalls.h
@@ -41,22 +41,27 @@ namespace RTLIB {
     SRA_I32,
     SRA_I64,
     SRA_I128,
+    MUL_I8,
     MUL_I16,
     MUL_I32,
     MUL_I64,
     MUL_I128,
+    SDIV_I8,
     SDIV_I16,
     SDIV_I32,
     SDIV_I64,
     SDIV_I128,
+    UDIV_I8,
     UDIV_I16,
     UDIV_I32,
     UDIV_I64,
     UDIV_I128,
+    SREM_I8,
     SREM_I16,
     SREM_I32,
     SREM_I64,
     SREM_I128,
+    UREM_I8,
     UREM_I16,
     UREM_I32,
     UREM_I64,
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/ScheduleDAG.h b/libclamav/c++/llvm/include/llvm/CodeGen/ScheduleDAG.h
index 2de095b..955965b 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/ScheduleDAG.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/ScheduleDAG.h
@@ -23,6 +23,7 @@
 #include "llvm/ADT/PointerIntPair.h"
 
 namespace llvm {
+  class AliasAnalysis;
   class SUnit;
   class MachineConstantPool;
   class MachineFunction;
@@ -341,25 +342,27 @@ namespace llvm {
     /// getDepth - Return the depth of this node, which is the length of the
     /// maximum path up to any node which has no predecessors.
     unsigned getDepth() const {
-      if (!isDepthCurrent) const_cast<SUnit *>(this)->ComputeDepth();
+      if (!isDepthCurrent) 
+        const_cast<SUnit *>(this)->ComputeDepth();
       return Depth;
     }
 
     /// getHeight - Return the height of this node, which is the length of the
     /// maximum path down to any node which has no successors.
     unsigned getHeight() const {
-      if (!isHeightCurrent) const_cast<SUnit *>(this)->ComputeHeight();
+      if (!isHeightCurrent) 
+        const_cast<SUnit *>(this)->ComputeHeight();
       return Height;
     }
 
-    /// setDepthToAtLeast - If NewDepth is greater than this node's depth
-    /// value, set it to be the new depth value. This also recursively
-    /// marks successor nodes dirty.
+    /// setDepthToAtLeast - If NewDepth is greater than this node's
+    /// depth value, set it to be the new depth value. This also
+    /// recursively marks successor nodes dirty.
     void setDepthToAtLeast(unsigned NewDepth);
 
-    /// setDepthToAtLeast - If NewDepth is greater than this node's depth
-    /// value, set it to be the new height value. This also recursively
-    /// marks predecessor nodes dirty.
+    /// setHeightToAtLeast - If NewHeight is greater than this node's
+    /// height value, set it to be the new height value. This also
+    /// recursively marks predecessor nodes dirty.
     void setHeightToAtLeast(unsigned NewHeight);
 
     /// setDepthDirty - Set a flag in this node to indicate that its
@@ -490,7 +493,7 @@ namespace llvm {
     /// BuildSchedGraph - Build SUnits and set up their Preds and Succs
     /// to form the scheduling dependency graph.
     ///
-    virtual void BuildSchedGraph() = 0;
+    virtual void BuildSchedGraph(AliasAnalysis *AA) = 0;
 
     /// ComputeLatency - Compute node latency.
     ///
@@ -499,8 +502,8 @@ namespace llvm {
     /// ComputeOperandLatency - Override dependence edge latency using
     /// operand use/def information
     ///
-    virtual void ComputeOperandLatency(SUnit *Def, SUnit *Use,
-                                       SDep& dep) const { };
+    virtual void ComputeOperandLatency(SUnit *, SUnit *,
+                                       SDep&) const { }
 
     /// Schedule - Order nodes according to selected style, filling
     /// in the Sequence member.
@@ -517,21 +520,6 @@ namespace llvm {
     void EmitNoop();
 
     void EmitPhysRegCopy(SUnit *SU, DenseMap<SUnit*, unsigned> &VRBaseMap);
-
-  private:
-    /// EmitLiveInCopy - Emit a copy for a live in physical register. If the
-    /// physical register has only a single copy use, then coalesced the copy
-    /// if possible.
-    void EmitLiveInCopy(MachineBasicBlock *MBB,
-                        MachineBasicBlock::iterator &InsertPos,
-                        unsigned VirtReg, unsigned PhysReg,
-                        const TargetRegisterClass *RC,
-                        DenseMap<MachineInstr*, unsigned> &CopyRegMap);
-
-    /// EmitLiveInCopies - If this is the first basic block in the function,
-    /// and if it has live ins that need to be copied into vregs, emit the
-    /// copies into the top of the block.
-    void EmitLiveInCopies(MachineBasicBlock *MBB);
   };
 
   class SUnitIterator : public std::iterator<std::forward_iterator_tag,
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
index e1b9998..f194e4e 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
@@ -322,10 +322,10 @@ public:
                                   unsigned char TargetFlags = 0);
   SDValue getValueType(EVT);
   SDValue getRegister(unsigned Reg, EVT VT);
-  SDValue getDbgStopPoint(DebugLoc DL, SDValue Root, 
-                          unsigned Line, unsigned Col, MDNode *CU);
   SDValue getLabel(unsigned Opcode, DebugLoc dl, SDValue Root,
                    unsigned LabelID);
+  SDValue getBlockAddress(BlockAddress *BA, EVT VT,
+                          bool isTarget = false, unsigned char TargetFlags = 0);
 
   SDValue getCopyToReg(SDValue Chain, DebugLoc dl, unsigned Reg, SDValue N) {
     return getNode(ISD::CopyToReg, dl, MVT::Other, Chain,
@@ -381,6 +381,14 @@ public:
   SDValue getVectorShuffle(EVT VT, DebugLoc dl, SDValue N1, SDValue N2, 
                            const int *MaskElts);
 
+  /// getSExtOrTrunc - Convert Op, which must be of integer type, to the
+  /// integer type VT, by either sign-extending or truncating it.
+  SDValue getSExtOrTrunc(SDValue Op, DebugLoc DL, EVT VT);
+
+  /// getZExtOrTrunc - Convert Op, which must be of integer type, to the
+  /// integer type VT, by either zero-extending or truncating it.
+  SDValue getZExtOrTrunc(SDValue Op, DebugLoc DL, EVT VT);
+
   /// getZeroExtendInReg - Return the expression required to zero extend the Op
   /// value assuming it was the smaller SrcTy value.
   SDValue getZeroExtendInReg(SDValue Op, DebugLoc DL, EVT SrcTy);
@@ -672,35 +680,36 @@ public:
   /// Note that getMachineNode returns the resultant node.  If there is already
   /// a node of the specified opcode and operands, it returns that node instead
   /// of the current one.
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT, SDValue Op1);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT, SDValue Op1,
-                         SDValue Op2);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT,
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT);
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT,
+                                SDValue Op1);
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT,
+                                SDValue Op1, SDValue Op2);
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT,
                          SDValue Op1, SDValue Op2, SDValue Op3);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT,
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT,
                          const SDValue *Ops, unsigned NumOps);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2,
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2);
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2,
                          SDValue Op1);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1,
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1,
                          EVT VT2, SDValue Op1, SDValue Op2);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1,
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1,
                          EVT VT2, SDValue Op1, SDValue Op2, SDValue Op3);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2,
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2,
                          const SDValue *Ops, unsigned NumOps);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2,
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2,
                          EVT VT3, SDValue Op1, SDValue Op2);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2,
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2,
                          EVT VT3, SDValue Op1, SDValue Op2, SDValue Op3);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2,
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2,
                          EVT VT3, const SDValue *Ops, unsigned NumOps);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2,
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2,
                          EVT VT3, EVT VT4, const SDValue *Ops, unsigned NumOps);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl,
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl,
                          const std::vector<EVT> &ResultTys, const SDValue *Ops,
                          unsigned NumOps);
-  SDNode *getMachineNode(unsigned Opcode, DebugLoc dl, SDVTList VTs,
+  MachineSDNode *getMachineNode(unsigned Opcode, DebugLoc dl, SDVTList VTs,
                          const SDValue *Ops, unsigned NumOps);
 
   /// getTargetExtractSubreg - A convenience function for creating
@@ -708,6 +717,11 @@ public:
   SDValue getTargetExtractSubreg(int SRIdx, DebugLoc DL, EVT VT,
                                  SDValue Operand);
 
+  /// getTargetInsertSubreg - A convenience function for creating
+  /// TargetInstrInfo::INSERT_SUBREG nodes.
+  SDValue getTargetInsertSubreg(int SRIdx, DebugLoc DL, EVT VT,
+                                SDValue Operand, SDValue Subreg);
+
   /// getNodeIfExists - Get the specified node if it's already available, or
   /// else return NULL.
   SDNode *getNodeIfExists(unsigned Opcode, SDVTList VTs,
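
(Sketch, for illustration only: a plausible implementation of the two new
helpers matching their documented semantics; the real bodies live in
SelectionDAG.cpp. getZExtOrTrunc is the same shape with ISD::ZERO_EXTEND.)

    SDValue SelectionDAG::getSExtOrTrunc(SDValue Op, DebugLoc DL, EVT VT) {
      return VT.bitsGT(Op.getValueType())
        ? getNode(ISD::SIGN_EXTEND, DL, VT, Op) // widen: sign-extend
        : getNode(ISD::TRUNCATE, DL, VT, Op);   // narrow (or no-op) truncate
    }
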
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGISel.h b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGISel.h
index 2b713f1..4130d2c 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGISel.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGISel.h
@@ -23,7 +23,7 @@
 
 namespace llvm {
   class FastISel;
-  class SelectionDAGLowering;
+  class SelectionDAGBuilder;
   class SDValue;
   class MachineRegisterInfo;
   class MachineBasicBlock;
@@ -48,7 +48,7 @@ public:
   MachineFunction *MF;
   MachineRegisterInfo *RegInfo;
   SelectionDAG *CurDAG;
-  SelectionDAGLowering *SDL;
+  SelectionDAGBuilder *SDB;
   MachineBasicBlock *BB;
   AliasAnalysis *AA;
   GCFunctionInfo *GFI;
@@ -110,6 +110,14 @@ protected:
   bool CheckOrMask(SDValue LHS, ConstantSDNode *RHS,
                     int64_t DesiredMaskS) const;
   
+  // Calls to these functions are generated by tblgen.
+  SDNode *Select_INLINEASM(SDValue N);
+  SDNode *Select_UNDEF(const SDValue &N);
+  SDNode *Select_DBG_LABEL(const SDValue &N);
+  SDNode *Select_EH_LABEL(const SDValue &N);
+  void CannotYetSelect(SDValue N);
+  void CannotYetSelectIntrinsic(SDValue N);
+
 private:
   void SelectAllBasicBlocks(Function &Fn, MachineFunction &MF,
                             MachineModuleInfo *MMI,
@@ -119,7 +127,8 @@ private:
 
   void SelectBasicBlock(BasicBlock *LLVMBB,
                         BasicBlock::iterator Begin,
-                        BasicBlock::iterator End);
+                        BasicBlock::iterator End,
+                        bool &HadTailCall);
   void CodeGenAndEmitDAG();
   void LowerArguments(BasicBlock *BB);
   
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
index 604f065..950fd32 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
@@ -28,7 +28,7 @@
 #include "llvm/CodeGen/ValueTypes.h"
 #include "llvm/CodeGen/MachineMemOperand.h"
 #include "llvm/Support/MathExtras.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/Support/DebugLoc.h"
 #include <cassert>
 
@@ -97,7 +97,7 @@ namespace ISD {
     BasicBlock, VALUETYPE, CONDCODE, Register,
     Constant, ConstantFP,
     GlobalAddress, GlobalTLSAddress, FrameIndex,
-    JumpTable, ConstantPool, ExternalSymbol,
+    JumpTable, ConstantPool, ExternalSymbol, BlockAddress,
 
     // The address of the GOT
     GLOBAL_OFFSET_TABLE,
@@ -146,6 +146,7 @@ namespace ISD {
     TargetJumpTable,
     TargetConstantPool,
     TargetExternalSymbol,
+    TargetBlockAddress,
 
     /// RESULT = INTRINSIC_WO_CHAIN(INTRINSICID, arg1, arg2, ...)
     /// This node represents a target intrinsic function with no side effects.
@@ -493,10 +494,9 @@ namespace ISD {
     //   Operand #last: Optional, an incoming flag.
     INLINEASM,
 
-    // DBG_LABEL, EH_LABEL - Represents a label in mid basic block used to track
+    // EH_LABEL - Represents a mid-basic-block label used to track
     // locations needed for debug and exception handling tables.  These nodes
     // take a chain as input and return a chain.
-    DBG_LABEL,
     EH_LABEL,
 
     // STACKSAVE - STACKSAVE has one operand, an input chain.  It produces a
@@ -545,18 +545,6 @@ namespace ISD {
     // HANDLENODE node - Used as a handle for various purposes.
     HANDLENODE,
 
-    // DBG_STOPPOINT - This node is used to represent a source location for
-    // debug info.  It takes token chain as input, and carries a line number,
-    // column number, and a pointer to a CompileUnit object identifying
-    // the containing compilation unit.  It produces a token chain as output.
-    DBG_STOPPOINT,
-
-    // DEBUG_LOC - This node is used to represent source line information
-    // embedded in the code.  It takes a token chain as input, then a line
-    // number, then a column then a file id (provided by MachineModuleInfo.) It
-    // produces a token chain as output.
-    DEBUG_LOC,
-
     // TRAMPOLINE - This corresponds to the init_trampoline intrinsic.
     // It takes as input a token chain, the pointer to the trampoline,
     // the pointer to the nested function, the pointer to pass for the
@@ -635,10 +623,6 @@ namespace ISD {
   /// element is not an undef.
   bool isScalarToVector(const SDNode *N);
 
-  /// isDebugLabel - Return true if the specified node represents a debug
-  /// label (i.e. ISD::DBG_LABEL or TargetInstrInfo::DBG_LABEL node).
-  bool isDebugLabel(const SDNode *N);
-
   //===--------------------------------------------------------------------===//
   /// MemIndexedMode enum - This enum defines the load / store indexed
   /// addressing modes.
@@ -1599,8 +1583,6 @@ public:
            N->getOpcode() == ISD::ATOMIC_LOAD_MAX     ||
            N->getOpcode() == ISD::ATOMIC_LOAD_UMIN    ||
            N->getOpcode() == ISD::ATOMIC_LOAD_UMAX    ||
-           N->getOpcode() == ISD::INTRINSIC_W_CHAIN   ||
-           N->getOpcode() == ISD::INTRINSIC_VOID      ||
            N->isTargetMemoryOpcode();
   }
 };
@@ -1954,10 +1936,10 @@ public:
   /// that value are zero, and the corresponding bits in the SplatUndef mask
   /// are set.  The SplatBitSize value is set to the splat element size in
   /// bits.  HasAnyUndefs is set to true if any bits in the vector are
-  /// undefined.
+  /// undefined.  isBigEndian describes the endianness of the target.
   bool isConstantSplat(APInt &SplatValue, APInt &SplatUndef,
                        unsigned &SplatBitSize, bool &HasAnyUndefs,
-                       unsigned MinSplatBits = 0);
+                       unsigned MinSplatBits = 0, bool isBigEndian = false);
 
   static inline bool classof(const BuildVectorSDNode *) { return true; }
   static inline bool classof(const SDNode *N) {
@@ -2005,26 +1987,23 @@ public:
   }
 };
 
-class DbgStopPointSDNode : public SDNode {
-  SDUse Chain;
-  unsigned Line;
-  unsigned Column;
-  MDNode *CU;
+class BlockAddressSDNode : public SDNode {
+  BlockAddress *BA;
+  unsigned char TargetFlags;
   friend class SelectionDAG;
-  DbgStopPointSDNode(SDValue ch, unsigned l, unsigned c,
-                     MDNode *cu)
-    : SDNode(ISD::DBG_STOPPOINT, DebugLoc::getUnknownLoc(),
-      getSDVTList(MVT::Other)), Line(l), Column(c), CU(cu) {
-    InitOperands(&Chain, ch);
+  BlockAddressSDNode(unsigned NodeTy, EVT VT, BlockAddress *ba,
+                     unsigned char Flags)
+    : SDNode(NodeTy, DebugLoc::getUnknownLoc(), getSDVTList(VT)),
+             BA(ba), TargetFlags(Flags) {
   }
 public:
-  unsigned getLine() const { return Line; }
-  unsigned getColumn() const { return Column; }
-  MDNode *getCompileUnit() const { return CU; }
+  BlockAddress *getBlockAddress() const { return BA; }
+  unsigned char getTargetFlags() const { return TargetFlags; }
 
-  static bool classof(const DbgStopPointSDNode *) { return true; }
+  static bool classof(const BlockAddressSDNode *) { return true; }
   static bool classof(const SDNode *N) {
-    return N->getOpcode() == ISD::DBG_STOPPOINT;
+    return N->getOpcode() == ISD::BlockAddress ||
+           N->getOpcode() == ISD::TargetBlockAddress;
   }
 };
 
@@ -2032,7 +2011,7 @@ class LabelSDNode : public SDNode {
   SDUse Chain;
   unsigned LabelID;
   friend class SelectionDAG;
-LabelSDNode(unsigned NodeTy, DebugLoc dl, SDValue ch, unsigned id)
+  LabelSDNode(unsigned NodeTy, DebugLoc dl, SDValue ch, unsigned id)
     : SDNode(NodeTy, dl, getSDVTList(MVT::Other)), LabelID(id) {
     InitOperands(&Chain, ch);
   }
@@ -2041,8 +2020,7 @@ public:
 
   static bool classof(const LabelSDNode *) { return true; }
   static bool classof(const SDNode *N) {
-    return N->getOpcode() == ISD::DBG_LABEL ||
-           N->getOpcode() == ISD::EH_LABEL;
+    return N->getOpcode() == ISD::EH_LABEL;
   }
 };
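
(Sketch, for illustration only: matching the new block-address nodes
during custom lowering; DAG and Op are an assumed SelectionDAG reference
and SDValue.)

    if (BlockAddressSDNode *BN = dyn_cast<BlockAddressSDNode>(Op.getNode())) {
      BlockAddress *BA = BN->getBlockAddress();
      SDValue T = DAG.getBlockAddress(BA, Op.getValueType(), /*isTarget=*/true,
                                      BN->getTargetFlags());
      (void)T;
    }
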
 
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h b/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h
new file mode 100644
index 0000000..65d85fc
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h
@@ -0,0 +1,767 @@
+//===- llvm/CodeGen/SlotIndexes.h - Slot indexes representation -*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements SlotIndex and related classes. The purpose of SlotIndex
+// is to describe a position at which a register can become live, or cease to
+// be live.
+//
+// SlotIndex is mostly a proxy for entries of the SlotIndexList, a class which
+// is held in LiveIntervals and provides the real numbering. This allows
+// LiveIntervals to perform largely transparent renumbering. The SlotIndex
+// class does hold a PHI bit, which determines whether the index relates to a
+// PHI use or def point, or an actual instruction. See the SlotIndex class
+// description for further information.
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_CODEGEN_SLOTINDEXES_H
+#define LLVM_CODEGEN_SLOTINDEXES_H
+
+#include "llvm/ADT/PointerIntPair.h"
+#include "llvm/ADT/SmallVector.h"
+#include "llvm/CodeGen/MachineBasicBlock.h"
+#include "llvm/CodeGen/MachineFunctionPass.h"
+#include "llvm/CodeGen/MachineInstr.h"
+#include "llvm/Support/Allocator.h"
+#include "llvm/Support/ErrorHandling.h"
+
+namespace llvm {
+
+  /// This class represents an entry in the slot index list held in the
+  /// SlotIndexes pass. It should not be used directly. See the
+  /// SlotIndex & SlotIndexes classes for the public interface to this
+  /// information.
+  class IndexListEntry {
+  private:
+
+    static const unsigned EMPTY_KEY_INDEX = ~0U & ~3U,
+                          TOMBSTONE_KEY_INDEX = ~0U & ~7U;
+
+    IndexListEntry *next, *prev;
+    MachineInstr *mi;
+    unsigned index;
+
+  protected:
+
+    typedef enum { EMPTY_KEY, TOMBSTONE_KEY } ReservedEntryType;
+
+    // This constructor is only to be used by getEmptyKeyEntry
+    // & getTombstoneKeyEntry. It sets index to the given
+    // value and mi to zero.
+    IndexListEntry(ReservedEntryType r) : mi(0) {
+      switch(r) {
+        case EMPTY_KEY: index = EMPTY_KEY_INDEX; break;
+        case TOMBSTONE_KEY: index = TOMBSTONE_KEY_INDEX; break;
+        default: assert(false && "Invalid value for constructor."); 
+      }
+      next = this;
+      prev = this;
+    }
+
+  public:
+
+    IndexListEntry(MachineInstr *mi, unsigned index) : mi(mi), index(index) {
+      if (index == EMPTY_KEY_INDEX || index == TOMBSTONE_KEY_INDEX) {
+        llvm_report_error("Attempt to create invalid index. "
+                          "Available indexes may have been exhausted.");
+      }
+    }
+
+    MachineInstr* getInstr() const { return mi; }
+    void setInstr(MachineInstr *mi) {
+      assert(index != EMPTY_KEY_INDEX && index != TOMBSTONE_KEY_INDEX &&
+             "Attempt to modify reserved index.");
+      this->mi = mi;
+    }
+
+    unsigned getIndex() const { return index; }
+    void setIndex(unsigned index) {
+      assert(index != EMPTY_KEY_INDEX && index != TOMBSTONE_KEY_INDEX &&
+             "Attempt to set index to invalid value.");
+      assert(this->index != EMPTY_KEY_INDEX &&
+             this->index != TOMBSTONE_KEY_INDEX &&
+             "Attempt to reset reserved index value.");
+      this->index = index;
+    }
+    
+    IndexListEntry* getNext() { return next; }
+    const IndexListEntry* getNext() const { return next; }
+    void setNext(IndexListEntry *next) {
+      assert(index != EMPTY_KEY_INDEX && index != TOMBSTONE_KEY_INDEX &&
+             "Attempt to modify reserved index.");
+      this->next = next;
+    }
+
+    IndexListEntry* getPrev() { return prev; }
+    const IndexListEntry* getPrev() const { return prev; }
+    void setPrev(IndexListEntry *prev) {
+      assert(index != EMPTY_KEY_INDEX && index != TOMBSTONE_KEY_INDEX &&
+             "Attempt to modify reserved index.");
+      this->prev = prev;
+    }
+
+    // This function returns the index list entry that is to be used for empty
+    // SlotIndex keys.
+    static IndexListEntry* getEmptyKeyEntry();
+
+    // This function returns the index list entry that is to be used for
+    // tombstone SlotIndex keys.
+    static IndexListEntry* getTombstoneKeyEntry();
+  };
+
+  // Specialize PointerLikeTypeTraits for IndexListEntry.
+  template <>
+  class PointerLikeTypeTraits<IndexListEntry*> { 
+  public:
+    static inline void* getAsVoidPointer(IndexListEntry *p) {
+      return p;
+    }
+    static inline IndexListEntry* getFromVoidPointer(void *p) {
+      return static_cast<IndexListEntry*>(p);
+    }
+    enum { NumLowBitsAvailable = 3 };
+  };
+
+  /// SlotIndex - An opaque wrapper around machine indexes.
+  class SlotIndex {
+    friend class SlotIndexes;
+    friend struct DenseMapInfo<SlotIndex>;
+
+  private:
+    static const unsigned PHI_BIT = 1 << 2;
+
+    PointerIntPair<IndexListEntry*, 3, unsigned> lie;
+
+    SlotIndex(IndexListEntry *entry, unsigned phiAndSlot)
+      : lie(entry, phiAndSlot) {
+      assert(entry != 0 && "Attempt to construct index with 0 pointer.");
+    }
+
+    IndexListEntry& entry() const {
+      return *lie.getPointer();
+    }
+
+    int getIndex() const {
+      return entry().getIndex() | getSlot();
+    }
+
+    static inline unsigned getHashValue(const SlotIndex &v) {
+      IndexListEntry *ptrVal = &v.entry();
+      return (unsigned((intptr_t)ptrVal) >> 4) ^
+             (unsigned((intptr_t)ptrVal) >> 9);
+    }
+
+  public:
+
+    // FIXME: Ugh. This is public because LiveIntervalAnalysis is still using it
+    // for some spill weight stuff. Fix that, then make this private.
+    enum Slot { LOAD, USE, DEF, STORE, NUM };
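+    // Each instruction owns four consecutive numbers, one per slot: LOAD
+    // (reloads before the instruction), USE (operand reads), DEF (register
+    // writes) and STORE (spills after it). NUM is the slot count; it is
+    // used when checking whether a numbering gap can fit a new instruction.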
+
+    static inline SlotIndex getEmptyKey() {
+      return SlotIndex(IndexListEntry::getEmptyKeyEntry(), 0);
+    }
+
+    static inline SlotIndex getTombstoneKey() {
+      return SlotIndex(IndexListEntry::getTombstoneKeyEntry(), 0);
+    }
+    
+    /// Construct an invalid index.
+    SlotIndex() : lie(IndexListEntry::getEmptyKeyEntry(), 0) {}
+
+    // Construct a new slot index from the given one, set the phi flag on the
+    // new index to the value of the phi parameter.
+    SlotIndex(const SlotIndex &li, bool phi)
+      : lie(&li.entry(), phi ? PHI_BIT | li.getSlot() : (unsigned)li.getSlot()) {
+      assert(lie.getPointer() != 0 &&
+             "Attempt to construct index with 0 pointer.");
+    }
+
+    // Construct a new slot index from the given one, set the phi flag on the
+    // new index to the value of the phi parameter, and the slot to the new slot.
+    SlotIndex(const SlotIndex &li, bool phi, Slot s)
+      : lie(&li.entry(), phi ? PHI_BIT | s : (unsigned)s) {
+      assert(lie.getPointer() != 0 &&
+             "Attempt to construct index with 0 pointer.");
+    }
+
+    /// Returns true if this is a valid index. Invalid indices do
+    /// not point into an index table, and cannot be compared.
+    bool isValid() const {
+      return (lie.getPointer() != 0) && (lie.getPointer()->getIndex() != 0);
+    }
+
+    /// Print this index to the given raw_ostream.
+    void print(raw_ostream &os) const;
+
+    /// Dump this index to stderr.
+    void dump() const;
+
+    /// Compare two SlotIndex objects for equality.
+    bool operator==(SlotIndex other) const {
+      return getIndex() == other.getIndex();
+    }
+    /// Compare two SlotIndex objects for inequality.
+    bool operator!=(SlotIndex other) const {
+      return getIndex() != other.getIndex(); 
+    }
+   
+    /// Compare two SlotIndex objects. Return true if the first index
+    /// is strictly lower than the second.
+    bool operator<(SlotIndex other) const {
+      return getIndex() < other.getIndex();
+    }
+    /// Compare two SlotIndex objects. Return true if the first index
+    /// is lower than, or equal to, the second.
+    bool operator<=(SlotIndex other) const {
+      return getIndex() <= other.getIndex();
+    }
+
+    /// Compare two SlotIndex objects. Return true if the first index
+    /// is greater than the second.
+    bool operator>(SlotIndex other) const {
+      return getIndex() > other.getIndex();
+    }
+
+    /// Compare two SlotIndex objects. Return true if the first index
+    /// is greater than, or equal to, the second.
+    bool operator>=(SlotIndex other) const {
+      return getIndex() >= other.getIndex();
+    }
+
+    /// Return the distance from this index to the given one.
+    int distance(SlotIndex other) const {
+      return other.getIndex() - getIndex();
+    }
+
+    /// Returns the slot for this SlotIndex.
+    Slot getSlot() const {
+      return static_cast<Slot>(lie.getInt() & ~PHI_BIT);
+    }
+
+    /// Returns the state of the PHI bit.
+    bool isPHI() const {
+      return lie.getInt() & PHI_BIT;
+    }
+
+    /// Returns the base index associated with this index. The base index
+    /// is the one associated with the LOAD slot for the instruction pointed to
+    /// by this index.
+    SlotIndex getBaseIndex() const {
+      return getLoadIndex();
+    }
+
+    /// Returns the boundary index associated with this index. The boundary
+    /// index is the one associated with the STORE slot for the instruction
+    /// pointed to by this index.
+    SlotIndex getBoundaryIndex() const {
+      return getStoreIndex();
+    }
+
+    /// Returns the index of the LOAD slot for the instruction pointed to by
+    /// this index.
+    SlotIndex getLoadIndex() const {
+      return SlotIndex(&entry(), SlotIndex::LOAD);
+    }    
+
+    /// Returns the index of the USE slot for the instruction pointed to by
+    /// this index.
+    SlotIndex getUseIndex() const {
+      return SlotIndex(&entry(), SlotIndex::USE);
+    }
+
+    /// Returns the index of the DEF slot for the instruction pointed to by
+    /// this index.
+    SlotIndex getDefIndex() const {
+      return SlotIndex(&entry(), SlotIndex::DEF);
+    }
+
+    /// Returns the index of the STORE slot for the instruction pointed to by
+    /// this index.
+    SlotIndex getStoreIndex() const {
+      return SlotIndex(&entry(), SlotIndex::STORE);
+    }    
+
+    /// Returns the next slot in the index list. This could be either the
+    /// next slot for the instruction pointed to by this index or, if this
+    /// index is a STORE, the first slot for the next instruction.
+    /// WARNING: This method is considerably more expensive than the methods
+    /// that return specific slots (getUseIndex(), etc). If you can - please
+    /// use one of those methods.
+    SlotIndex getNextSlot() const {
+      Slot s = getSlot();
+      if (s == SlotIndex::STORE) {
+        return SlotIndex(entry().getNext(), SlotIndex::LOAD);
+      }
+      return SlotIndex(&entry(), s + 1);
+    }
+
+    /// Returns the next index. This is the index corresponding to this
+    /// index's slot, but for the next instruction.
+    SlotIndex getNextIndex() const {
+      return SlotIndex(entry().getNext(), getSlot());
+    }
+
+    /// Returns the previous slot in the index list. This could be either the
+    /// previous slot for the instruction pointed to by this index or, if this
+    /// index is a LOAD, the last slot for the previous instruction.
+    /// WARNING: This method is considerably more expensive than the methods
+    /// that return specific slots (getUseIndex(), etc). If you can - please
+    /// use one of those methods.
+    SlotIndex getPrevSlot() const {
+      Slot s = getSlot();
+      if (s == SlotIndex::LOAD) {
+        return SlotIndex(entry().getPrev(), SlotIndex::STORE);
+      }
+      return SlotIndex(&entry(), s - 1);
+    }
+
+    /// Returns the previous index. This is the index corresponding to this
+    /// index's slot, but for the previous instruction.
+    SlotIndex getPrevIndex() const {
+      return SlotIndex(entry().getPrev(), getSlot());
+    }
+
+  };
+
+  /// DenseMapInfo specialization for SlotIndex.
+  template <>
+  struct DenseMapInfo<SlotIndex> {
+    static inline SlotIndex getEmptyKey() {
+      return SlotIndex::getEmptyKey();
+    }
+    static inline SlotIndex getTombstoneKey() {
+      return SlotIndex::getTombstoneKey();
+    }
+    static inline unsigned getHashValue(const SlotIndex &v) {
+      return SlotIndex::getHashValue(v);
+    }
+    static inline bool isEqual(const SlotIndex &LHS, const SlotIndex &RHS) {
+      return (LHS == RHS);
+    }
+    static inline bool isPod() { return false; }
+  };
+
+  inline raw_ostream& operator<<(raw_ostream &os, SlotIndex li) {
+    li.print(os);
+    return os;
+  }
+
+  typedef std::pair<SlotIndex, MachineBasicBlock*> IdxMBBPair;
+
+  inline bool operator<(SlotIndex V, const IdxMBBPair &IM) {
+    return V < IM.first;
+  }
+
+  inline bool operator<(const IdxMBBPair &IM, SlotIndex V) {
+    return IM.first < V;
+  }
+
+  struct Idx2MBBCompare {
+    bool operator()(const IdxMBBPair &LHS, const IdxMBBPair &RHS) const {
+      return LHS.first < RHS.first;
+    }
+  };
+
+  /// SlotIndexes pass.
+  ///
+  /// This pass assigns indexes to each instruction.
+  class SlotIndexes : public MachineFunctionPass {
+  private:
+
+    MachineFunction *mf;
+    IndexListEntry *indexListHead;
+    unsigned functionSize;
+
+    typedef DenseMap<const MachineInstr*, SlotIndex> Mi2IndexMap;
+    Mi2IndexMap mi2iMap;
+
+    /// MBB2IdxMap - The indexes of the first and last instructions in the
+    /// specified basic block.
+    typedef DenseMap<const MachineBasicBlock*,
+                     std::pair<SlotIndex, SlotIndex> > MBB2IdxMap;
+    MBB2IdxMap mbb2IdxMap;
+
+    /// Idx2MBBMap - Sorted list of pairs of index of first instruction
+    /// and MBB id.
+    std::vector<IdxMBBPair> idx2MBBMap;
+
+    typedef DenseMap<const MachineBasicBlock*, SlotIndex> TerminatorGapsMap;
+    TerminatorGapsMap terminatorGaps;
+
+    // IndexListEntry allocator.
+    BumpPtrAllocator ileAllocator;
+
+    IndexListEntry* createEntry(MachineInstr *mi, unsigned index) {
+      IndexListEntry *entry =
+        static_cast<IndexListEntry*>(
+          ileAllocator.Allocate(sizeof(IndexListEntry),
+          alignof<IndexListEntry>()));
+
+      new (entry) IndexListEntry(mi, index);
+
+      return entry;
+    }
+
+    void initList() {
+      assert(indexListHead == 0 && "Zero entry non-null at initialisation.");
+      indexListHead = createEntry(0, ~0U);
+      indexListHead->setNext(0);
+      indexListHead->setPrev(indexListHead);
+    }
+
+    void clearList() {
+      indexListHead = 0;
+      ileAllocator.Reset();
+    }
+
+    IndexListEntry* getTail() {
+      assert(indexListHead != 0 && "Call to getTail on uninitialized list.");
+      return indexListHead->getPrev();
+    }
+
+    const IndexListEntry* getTail() const {
+      assert(indexListHead != 0 && "Call to getTail on uninitialized list.");
+      return indexListHead->getPrev();
+    }
+
+    // Returns true if the index list is empty.
+    bool empty() const { return (indexListHead == getTail()); }
+
+    IndexListEntry* front() {
+      assert(!empty() && "front() called on empty index list.");
+      return indexListHead;
+    }
+
+    const IndexListEntry* front() const {
+      assert(!empty() && "front() called on empty index list.");
+      return indexListHead;
+    }
+
+    IndexListEntry* back() {
+      assert(!empty() && "back() called on empty index list.");
+      return getTail()->getPrev();
+    }
+
+    const IndexListEntry* back() const {
+      assert(!empty() && "back() called on empty index list.");
+      return getTail()->getPrev();
+    }
+
+    /// Insert a new entry before itr.
+    void insert(IndexListEntry *itr, IndexListEntry *val) {
+      assert(itr != 0 && "itr should not be null.");
+      IndexListEntry *prev = itr->getPrev();
+      val->setNext(itr);
+      val->setPrev(prev);
+      
+      if (itr != indexListHead) {
+        prev->setNext(val);
+      }
+      else {
+        indexListHead = val;
+      }
+      itr->setPrev(val);
+    }
+
+    /// Push a new entry on to the end of the list.
+    void push_back(IndexListEntry *val) {
+      insert(getTail(), val);
+    }
+
+  public:
+    static char ID;
+
+    SlotIndexes() : MachineFunctionPass(&ID), indexListHead(0) {}
+
+    virtual void getAnalysisUsage(AnalysisUsage &au) const;
+    virtual void releaseMemory(); 
+
+    virtual bool runOnMachineFunction(MachineFunction &fn);
+
+    /// Dump the indexes.
+    void dump() const;
+
+    /// Renumber the index list, providing space for new instructions.
+    void renumberIndexes();
+
+    /// Returns the zero index for this analysis.
+    SlotIndex getZeroIndex() {
+      assert(front()->getIndex() == 0 && "First index is not 0?");
+      return SlotIndex(front(), 0);
+    }
+
+    /// Returns the invalid index marker for this analysis.
+    SlotIndex getInvalidIndex() {
+      return getZeroIndex();
+    }
+
+    /// Returns the distance between the highest and lowest indexes allocated
+    /// so far.
+    unsigned getIndexesLength() const {
+      assert(front()->getIndex() == 0 &&
+             "Initial index isn't zero?");
+
+      return back()->getIndex();
+    }
+
+    /// Returns the number of instructions in the function.
+    unsigned getFunctionSize() const {
+      return functionSize;
+    }
+
+    /// Returns true if the given machine instr is mapped to an index,
+    /// otherwise returns false.
+    bool hasIndex(const MachineInstr *instr) const {
+      return (mi2iMap.find(instr) != mi2iMap.end());
+    }
+
+    /// Returns the base index for the given instruction.
+    SlotIndex getInstructionIndex(const MachineInstr *instr) const {
+      Mi2IndexMap::const_iterator itr = mi2iMap.find(instr);
+      assert(itr != mi2iMap.end() && "Instruction not found in maps.");
+      return itr->second;
+    }
+
+    /// Returns the instruction for the given index, or null if the given
+    /// index has no instruction associated with it.
+    MachineInstr* getInstructionFromIndex(SlotIndex index) const {
+      return index.entry().getInstr();
+    }
+
+    /// Returns the next non-null index.
+    SlotIndex getNextNonNullIndex(SlotIndex index) {
+      SlotIndex nextNonNull = index.getNextIndex();
+
+      while (&nextNonNull.entry() != getTail() &&
+             getInstructionFromIndex(nextNonNull) == 0) {
+        nextNonNull = nextNonNull.getNextIndex();
+      }
+
+      return nextNonNull;
+    }
+
+    /// Returns the first index in the given basic block.
+    SlotIndex getMBBStartIdx(const MachineBasicBlock *mbb) const {
+      MBB2IdxMap::const_iterator itr = mbb2IdxMap.find(mbb);
+      assert(itr != mbb2IdxMap.end() && "MBB not found in maps.");
+      return itr->second.first;
+    }
+
+    /// Returns the last index in the given basic block.
+    SlotIndex getMBBEndIdx(const MachineBasicBlock *mbb) const {
+      MBB2IdxMap::const_iterator itr = mbb2IdxMap.find(mbb);
+      assert(itr != mbb2IdxMap.end() && "MBB not found in maps.");
+      return itr->second.second;
+    }
+
+    /// Returns the terminator gap for the given index.
+    SlotIndex getTerminatorGap(const MachineBasicBlock *mbb) {
+      TerminatorGapsMap::iterator itr = terminatorGaps.find(mbb);
+      assert(itr != terminatorGaps.end() &&
+             "All MBBs should have terminator gaps in their indexes.");
+      return itr->second;
+    }
+
+    /// Returns the basic block which the given index falls in.
+    MachineBasicBlock* getMBBFromIndex(SlotIndex index) const {
+      std::vector<IdxMBBPair>::const_iterator I =
+        std::lower_bound(idx2MBBMap.begin(), idx2MBBMap.end(), index);
+      // Take the pair containing the index
+      std::vector<IdxMBBPair>::const_iterator J =
+        ((I != idx2MBBMap.end() && I->first > index) ||
+         (I == idx2MBBMap.end() && idx2MBBMap.size()>0)) ? (I-1): I;
+
+      assert(J != idx2MBBMap.end() && J->first <= index &&
+             index <= getMBBEndIdx(J->second) &&
+             "index does not correspond to an MBB");
+      return J->second;
+    }
+
+    bool findLiveInMBBs(SlotIndex start, SlotIndex end,
+                        SmallVectorImpl<MachineBasicBlock*> &mbbs) const {
+      std::vector<IdxMBBPair>::const_iterator itr =
+        std::lower_bound(idx2MBBMap.begin(), idx2MBBMap.end(), start);
+      bool resVal = false;
+
+      while (itr != idx2MBBMap.end()) {
+        if (itr->first >= end)
+          break;
+        mbbs.push_back(itr->second);
+        resVal = true;
+        ++itr;
+      }
+      return resVal;
+    }
+
+    /// Return a list of MBBs that can be reached via any branches or
+    /// fall-throughs.
+    bool findReachableMBBs(SlotIndex start, SlotIndex end,
+                           SmallVectorImpl<MachineBasicBlock*> &mbbs) const {
+      std::vector<IdxMBBPair>::const_iterator itr =
+        std::lower_bound(idx2MBBMap.begin(), idx2MBBMap.end(), start);
+
+      bool resVal = false;
+      while (itr != idx2MBBMap.end()) {
+        if (itr->first > end)
+          break;
+        MachineBasicBlock *mbb = itr->second;
+        if (getMBBEndIdx(mbb) > end)
+          break;
+        for (MachineBasicBlock::succ_iterator si = mbb->succ_begin(),
+             se = mbb->succ_end(); si != se; ++si)
+          mbbs.push_back(*si);
+        resVal = true;
+        ++itr;
+      }
+      return resVal;
+    }
+
+    /// Returns the MBB covering the given range, or null if the range covers
+    /// more than one basic block.
+    MachineBasicBlock* getMBBCoveringRange(SlotIndex start, SlotIndex end) const {
+
+      assert(start < end && "Backwards ranges not allowed.");
+
+      std::vector<IdxMBBPair>::const_iterator itr =
+        std::lower_bound(idx2MBBMap.begin(), idx2MBBMap.end(), start);
+
+      if (itr == idx2MBBMap.end()) {
+        itr = prior(itr);
+        return itr->second;
+      }
+
+      // Check that we don't cross the boundary into this block.
+      if (itr->first < end)
+        return 0;
+
+      itr = prior(itr);
+
+      if (itr->first <= start)
+        return itr->second;
+
+      return 0;
+    }
+
+    /// Insert the given machine instruction into the mapping. Returns the
+    /// assigned index.
+    SlotIndex insertMachineInstrInMaps(MachineInstr *mi,
+                                        bool *deferredRenumber = 0) {
+      assert(mi2iMap.find(mi) == mi2iMap.end() && "Instr already indexed.");
+
+      MachineBasicBlock *mbb = mi->getParent();
+
+      assert(mbb != 0 && "Instr must be added to function.");
+
+      MBB2IdxMap::iterator mbbRangeItr = mbb2IdxMap.find(mbb);
+
+      assert(mbbRangeItr != mbb2IdxMap.end() &&
+             "Instruction's parent MBB has not been added to SlotIndexes.");
+
+      MachineBasicBlock::iterator miItr(mi);
+      bool needRenumber = false;
+      IndexListEntry *newEntry;
+
+      IndexListEntry *prevEntry;
+      if (miItr == mbb->begin()) {
+        // If mi is at the mbb beginning, get the prev index from the mbb.
+        prevEntry = &mbbRangeItr->second.first.entry();
+      } else {
+        // Otherwise get it from the previous instr.
+        MachineBasicBlock::iterator pItr(prior(miItr));
+        prevEntry = &getInstructionIndex(pItr).entry();
+      }
+
+      // Get next entry from previous entry.
+      IndexListEntry *nextEntry = prevEntry->getNext();
+
+      // Get a number for the new instr, or 0 if there's no room currently.
+      // In the latter case we'll force a renumber later.
+      unsigned dist = nextEntry->getIndex() - prevEntry->getIndex();
+      unsigned newNumber = dist > SlotIndex::NUM ?
+        prevEntry->getIndex() + ((dist >> 1) & ~3U) : 0;
+
+      if (newNumber == 0) {
+        needRenumber = true;
+      }
+
+      // Insert a new list entry for mi.
+      newEntry = createEntry(mi, newNumber);
+      insert(nextEntry, newEntry);
+  
+      SlotIndex newIndex(newEntry, SlotIndex::LOAD);
+      mi2iMap.insert(std::make_pair(mi, newIndex));
+
+      if (miItr == mbb->end()) {
+        // If this is the last instr in the MBB then we need to fix up the bb
+        // range:
+        mbbRangeItr->second.second = SlotIndex(newEntry, SlotIndex::STORE);
+      }
+
+      // Renumber if we need to.
+      if (needRenumber) {
+        if (deferredRenumber == 0)
+          renumberIndexes();
+        else
+          *deferredRenumber = true;
+      }
+
+      return newIndex;
+    }
+
+    /// Add all instructions in the vector to the index list. This method will
+    /// defer renumbering until all instrs have been added, and should be 
+    /// preferred when adding multiple instrs.
+    void insertMachineInstrsInMaps(SmallVectorImpl<MachineInstr*> &mis) {
+      bool renumber = false;
+
+      for (SmallVectorImpl<MachineInstr*>::iterator
+           miItr = mis.begin(), miEnd = mis.end();
+           miItr != miEnd; ++miItr) {
+        insertMachineInstrInMaps(*miItr, &renumber);
+      }
+
+      if (renumber)
+        renumberIndexes();
+    }
+
+
+    /// Remove the given machine instruction from the mapping.
+    void removeMachineInstrFromMaps(MachineInstr *mi) {
+      // remove index -> MachineInstr and
+      // MachineInstr -> index mappings
+      Mi2IndexMap::iterator mi2iItr = mi2iMap.find(mi);
+      if (mi2iItr != mi2iMap.end()) {
+        IndexListEntry *miEntry(&mi2iItr->second.entry());        
+        assert(miEntry->getInstr() == mi && "Instruction indexes broken.");
+        // FIXME: Eventually we want to actually delete these indexes.
+        miEntry->setInstr(0);
+        mi2iMap.erase(mi2iItr);
+      }
+    }
+
+    /// ReplaceMachineInstrInMaps - Replace a machine instruction with a new
+    /// one in the maps used by the register allocator.
+    void replaceMachineInstrInMaps(MachineInstr *mi, MachineInstr *newMI) {
+      Mi2IndexMap::iterator mi2iItr = mi2iMap.find(mi);
+      if (mi2iItr == mi2iMap.end())
+        return;
+      SlotIndex replaceBaseIndex = mi2iItr->second;
+      IndexListEntry *miEntry(&replaceBaseIndex.entry());
+      assert(miEntry->getInstr() == mi &&
+             "Mismatched instruction in index tables.");
+      miEntry->setInstr(newMI);
+      mi2iMap.erase(mi2iItr);
+      mi2iMap.insert(std::make_pair(newMI, replaceBaseIndex));
+    }
+
+  };
+
+
+}
+
+#endif // LLVM_CODEGEN_LIVEINDEX_H 
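[The slot-index API above is easiest to read with a small usage sketch. The following is not part of the patch; it assumes the enclosing analysis class is named SlotIndexes and that SlotIndex supports ordered comparisons, and shows how a client might visit every live instruction in a block:

    // Hypothetical client of the interface declared above (not in the patch).
    void visitBlock(SlotIndexes *Indexes, MachineBasicBlock *MBB) {
      SlotIndex Idx = Indexes->getMBBStartIdx(MBB);   // first index in MBB
      SlotIndex End = Indexes->getMBBEndIdx(MBB);     // last index in MBB

      while (Idx < End) {
        // Indexes whose instruction was erased map to null; skip them.
        if (MachineInstr *MI = Indexes->getInstructionFromIndex(Idx)) {
          // ... inspect MI here ...
        }
        Idx = Indexes->getNextNonNullIndex(Idx);      // hop over empty slots
      }
    }
]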
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h b/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
index 1f0dd21..45ef9b9 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
@@ -18,7 +18,7 @@
 
 #include <cassert>
 #include <string>
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/Support/MathExtras.h"
 
 namespace llvm {
diff --git a/libclamav/c++/llvm/include/llvm/CompilerDriver/BuiltinOptions.h b/libclamav/c++/llvm/include/llvm/CompilerDriver/BuiltinOptions.h
index fe44c30..0c1bbe2 100644
--- a/libclamav/c++/llvm/include/llvm/CompilerDriver/BuiltinOptions.h
+++ b/libclamav/c++/llvm/include/llvm/CompilerDriver/BuiltinOptions.h
@@ -25,10 +25,11 @@ extern llvm::cl::opt<std::string> OutputFilename;
 extern llvm::cl::opt<std::string> TempDirname;
 extern llvm::cl::list<std::string> Languages;
 extern llvm::cl::opt<bool> DryRun;
+extern llvm::cl::opt<bool> Time;
 extern llvm::cl::opt<bool> VerboseMode;
 extern llvm::cl::opt<bool> CheckGraph;
-extern llvm::cl::opt<bool> WriteGraph;
 extern llvm::cl::opt<bool> ViewGraph;
+extern llvm::cl::opt<bool> WriteGraph;
 extern llvm::cl::opt<SaveTempsEnum::Values> SaveTemps;
 
 #endif // LLVM_INCLUDE_COMPILER_DRIVER_BUILTIN_OPTIONS_H
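[For context, each of these externs is defined exactly once in the driver with the llvm::cl machinery. A plausible definition of the new Time flag follows; the option name and help string are assumptions, since the defining translation unit is not part of this hunk:

    #include "llvm/Support/CommandLine.h"

    // Hypothetical definition; the real one lives in the llvmc sources.
    llvm::cl::opt<bool> Time("time",
                             llvm::cl::desc("Time individual commands"));

Client code then simply tests the global, e.g. 'if (Time) { /* wrap the tool invocation in a timer */ }'.]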
diff --git a/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td b/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td
index 5b7c543..79edb02 100644
--- a/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td
+++ b/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td
@@ -68,6 +68,9 @@ def not_empty;
 def default;
 def single_input_file;
 def multiple_input_files;
+def any_switch_on;
+def any_not_empty;
+def any_empty;
 
 // Possible actions.
 
@@ -76,7 +79,9 @@ def forward;
 def forward_as;
 def stop_compilation;
 def unpack_values;
+def warning;
 def error;
+def unset_option;
 
 // Increase/decrease the edge weight.
 def inc_weight;
@@ -90,11 +95,16 @@ class PluginPriority<int p> {
       int priority = p;
 }
 
-// Option list - used to specify aliases and sometimes help strings.
+// Option list - a single place to specify options.
 class OptionList<list<dag> l> {
       list<dag> options = l;
 }
 
+// Option preprocessor - actions taken during plugin loading.
+class OptionPreprocessor<dag d> {
+      dag preprocessor = d;
+}
+
 // Map from suffixes to language names
 
 class LangToSuffixes<string str, list<string> lst> {
diff --git a/libclamav/c++/llvm/include/llvm/CompilerDriver/CompilationGraph.h b/libclamav/c++/llvm/include/llvm/CompilerDriver/CompilationGraph.h
index 3daafd5..ba6ff47 100644
--- a/libclamav/c++/llvm/include/llvm/CompilerDriver/CompilationGraph.h
+++ b/libclamav/c++/llvm/include/llvm/CompilerDriver/CompilationGraph.h
@@ -43,7 +43,7 @@ namespace llvmc {
   class Edge : public llvm::RefCountedBaseVPTR<Edge> {
   public:
     Edge(const std::string& T) : ToolName_(T) {}
-    virtual ~Edge() {};
+    virtual ~Edge() {}
 
     const std::string& ToolName() const { return ToolName_; }
     virtual unsigned Weight(const InputLanguagesSet& InLangs) const = 0;
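[The Edge interface above is abstract, so a hedged sketch of a trivial concrete edge shows how it is meant to be subclassed; SimpleEdge is hypothetical and not part of llvmc:

    // A minimal Edge subclass: always traversable, fixed weight.
    class SimpleEdge : public llvmc::Edge {
    public:
      SimpleEdge(const std::string &ToolName) : llvmc::Edge(ToolName) {}
      virtual unsigned Weight(const llvmc::InputLanguagesSet &) const {
        return 1;
      }
    };
]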
diff --git a/libclamav/c++/llvm/include/llvm/CompilerDriver/ForceLinkage.h b/libclamav/c++/llvm/include/llvm/CompilerDriver/ForceLinkage.h
index 58ea167..830c04e 100644
--- a/libclamav/c++/llvm/include/llvm/CompilerDriver/ForceLinkage.h
+++ b/libclamav/c++/llvm/include/llvm/CompilerDriver/ForceLinkage.h
@@ -41,6 +41,26 @@ namespace llvmc {
       LLVMC_FORCE_LINKAGE_DECL(LLVMC_BUILTIN_PLUGIN_5);
 #endif
 
+#ifdef LLVMC_BUILTIN_PLUGIN_6
+      LLVMC_FORCE_LINKAGE_DECL(LLVMC_BUILTIN_PLUGIN_6);
+#endif
+
+#ifdef LLVMC_BUILTIN_PLUGIN_7
+      LLVMC_FORCE_LINKAGE_DECL(LLVMC_BUILTIN_PLUGIN_7);
+#endif
+
+#ifdef LLVMC_BUILTIN_PLUGIN_8
+      LLVMC_FORCE_LINKAGE_DECL(LLVMC_BUILTIN_PLUGIN_8);
+#endif
+
+#ifdef LLVMC_BUILTIN_PLUGIN_9
+      LLVMC_FORCE_LINKAGE_DECL(LLVMC_BUILTIN_PLUGIN_9);
+#endif
+
+#ifdef LLVMC_BUILTIN_PLUGIN_10
+      LLVMC_FORCE_LINKAGE_DECL(LLVMC_BUILTIN_PLUGIN_10);
+#endif
+
 namespace force_linkage {
 
   struct LinkageForcer {
@@ -68,6 +88,26 @@ namespace force_linkage {
       LLVMC_FORCE_LINKAGE_CALL(LLVMC_BUILTIN_PLUGIN_5);
 #endif
 
+#ifdef LLVMC_BUILTIN_PLUGIN_6
+      LLVMC_FORCE_LINKAGE_CALL(LLVMC_BUILTIN_PLUGIN_6);
+#endif
+
+#ifdef LLVMC_BUILTIN_PLUGIN_7
+      LLVMC_FORCE_LINKAGE_CALL(LLVMC_BUILTIN_PLUGIN_7);
+#endif
+
+#ifdef LLVMC_BUILTIN_PLUGIN_8
+      LLVMC_FORCE_LINKAGE_CALL(LLVMC_BUILTIN_PLUGIN_8);
+#endif
+
+#ifdef LLVMC_BUILTIN_PLUGIN_9
+      LLVMC_FORCE_LINKAGE_CALL(LLVMC_BUILTIN_PLUGIN_9);
+#endif
+
+#ifdef LLVMC_BUILTIN_PLUGIN_10
+      LLVMC_FORCE_LINKAGE_CALL(LLVMC_BUILTIN_PLUGIN_10);
+#endif
+
     }
   };
 } // End namespace force_linkage.
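[The DECL/CALL macro bodies are outside this hunk, so the following only sketches the idiom they implement, under that assumption: each plugin exports a dummy function, and calling it from the driver forces the linker to pull in the plugin's object file along with its static initializers.

    // Roughly what the macro pair might expand to for one plugin name
    // (ForceSomePluginLinkage is illustrative, not a real symbol).
    void ForceSomePluginLinkage();      // DECL: declaration only
    inline void forceAllPlugins() {
      ForceSomePluginLinkage();         // CALL: keeps the symbol live
    }
]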
diff --git a/libclamav/c++/llvm/include/llvm/CompilerDriver/Plugin.h b/libclamav/c++/llvm/include/llvm/CompilerDriver/Plugin.h
index 9f9eee3..e9a2048 100644
--- a/libclamav/c++/llvm/include/llvm/CompilerDriver/Plugin.h
+++ b/libclamav/c++/llvm/include/llvm/CompilerDriver/Plugin.h
@@ -29,6 +29,11 @@ namespace llvmc {
     /// first.
     virtual int Priority() const { return 0; }
 
+    /// PreprocessOptions - The auto-generated function that performs various
+    /// consistency checks on options (like ensuring that -O2 and -O3 are not
+    /// used together).
+    virtual void PreprocessOptions() const = 0;
+
     /// PopulateLanguageMap - The auto-generated function that fills in
     /// the language map (map from file extensions to language names).
     virtual void PopulateLanguageMap(LanguageMap&) const = 0;
@@ -60,13 +65,10 @@ namespace llvmc {
     PluginLoader();
     ~PluginLoader();
 
-    /// PopulateLanguageMap - Fills in the language map by calling
-    /// PopulateLanguageMap methods of all plugins.
-    void PopulateLanguageMap(LanguageMap& langMap);
-
-    /// PopulateCompilationGraph - Populates the compilation graph by
-    /// calling PopulateCompilationGraph methods of all plugins.
-    void PopulateCompilationGraph(CompilationGraph& tools);
+    /// RunInitialization - Calls PreprocessOptions, PopulateLanguageMap and
+    /// PopulateCompilationGraph methods of all plugins. This populates the
+    /// global language map and the compilation graph.
+    void RunInitialization(LanguageMap& langMap, CompilationGraph& graph) const;
 
   private:
     // noncopyable
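[A minimal sketch of what the consolidated RunInitialization plausibly does, assuming the loader keeps its plugins in a priority-sorted container named Plugins of BasePlugin pointers (neither name is shown in this hunk): it chains the three per-plugin hooks that callers previously had to invoke separately.

    void PluginLoader::RunInitialization(LanguageMap &langMap,
                                         CompilationGraph &graph) const {
      for (PluginList::const_iterator I = Plugins.begin(), E = Plugins.end();
           I != E; ++I) {
        const BasePlugin *P = *I;
        P->PreprocessOptions();             // option consistency checks
        P->PopulateLanguageMap(langMap);    // file extension -> language
        P->PopulateCompilationGraph(graph); // tools and edges
      }
    }
]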
diff --git a/libclamav/c++/llvm/include/llvm/Config/Disassemblers.def.in b/libclamav/c++/llvm/include/llvm/Config/Disassemblers.def.in
new file mode 100644
index 0000000..1b13657
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/Config/Disassemblers.def.in
@@ -0,0 +1,29 @@
+//===- llvm/Config/Disassemblers.def - LLVM Disassemblers ------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file enumerates all of the disassemblers supported by this
+// build of LLVM. Clients of this file should define the
+// LLVM_DISASSEMBLER macro to be a function-like macro with a
+// single parameter (the name of the target whose machine code can be
+// disassembled); including this file will then enumerate all of the
+// targets with disassemblers.
+//
+// The set of targets supported by LLVM is generated at configuration
+// time, at which point this header is generated. Do not modify this
+// header directly.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_DISASSEMBLER
+#  error Please define the macro LLVM_DISASSEMBLER(TargetName)
+#endif
+
+@LLVM_ENUM_DISASSEMBLERS@
+
+#undef LLVM_DISASSEMBLER
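[Clients consume this .def file with the usual X-macro pattern: define LLVM_DISASSEMBLER, include the header, and let the configure-substituted @LLVM_ENUM_DISASSEMBLERS@ line expand once per enabled target. The initializer name below is an assumption chosen for illustration:

    // Declare one init function per configured disassembler target.
    #define LLVM_DISASSEMBLER(TargetName) \
      void LLVMInitialize##TargetName##Disassembler();
    #include "llvm/Config/Disassemblers.def"
]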
diff --git a/libclamav/c++/llvm/include/llvm/Config/config.h.cmake b/libclamav/c++/llvm/include/llvm/Config/config.h.cmake
index d8de146..1f48ae9 100644
--- a/libclamav/c++/llvm/include/llvm/Config/config.h.cmake
+++ b/libclamav/c++/llvm/include/llvm/Config/config.h.cmake
@@ -9,6 +9,21 @@
 /* Define if CBE is enabled for printf %a output */
 #undef ENABLE_CBE_PRINTF_A
 
+/* Directories clang will search for headers */
+#define C_INCLUDE_DIRS "${C_INCLUDE_DIRS}"
+
+/* Directory clang will search for libstdc++ headers */
+#define CXX_INCLUDE_ROOT "${CXX_INCLUDE_ROOT}"
+
+/* Architecture of libstdc++ headers */
+#define CXX_INCLUDE_ARCH "${CXX_INCLUDE_ARCH}"
+
+/* 32 bit multilib directory */
+#define CXX_INCLUDE_32BIT_DIR "${CXX_INCLUDE_32BIT_DIR}"
+
+/* 64 bit multilib directory */
+#define CXX_INCLUDE_64BIT_DIR "${CXX_INCLUDE_64BIT_DIR}"
+
 /* Define if position independent code is enabled */
 #cmakedefine ENABLE_PIC ${ENABLE_PIC}
 
@@ -48,6 +63,9 @@
 /* Define to 1 if you have the `ceilf' function. */
 #cmakedefine HAVE_CEILF ${HAVE_CEILF}
 
+/* Define if the circo program is available */
+#cmakedefine HAVE_CIRCO ${HAVE_CIRCO}
+
 /* Define to 1 if you have the `closedir' function. */
 #undef HAVE_CLOSEDIR
 
@@ -77,10 +95,10 @@
 #cmakedefine HAVE_DL_H ${HAVE_DL_H}
 
 /* Define if the dot program is available */
-#undef HAVE_DOT
+#cmakedefine HAVE_DOT ${HAVE_DOT}
 
 /* Define if the dotty program is available */
-#undef HAVE_DOTTY
+#cmakedefine HAVE_DOTTY ${HAVE_DOTTY}
 
 /* Define if you have the _dyld_func_lookup function. */
 #undef HAVE_DYLD
@@ -97,8 +115,11 @@
 /* Define to 1 if you have the <fcntl.h> header file. */
 #cmakedefine HAVE_FCNTL_H ${HAVE_FCNTL_H}
 
+/* Define if the fdp program is available */
+#cmakedefine HAVE_FDP ${HAVE_FDP}
+
 /* Set to 1 if the finite function is found in <ieeefp.h> */
-#undef HAVE_FINITE_IN_IEEEFP_H
+#cmakedefine HAVE_FINITE_IN_IEEEFP_H ${HAVE_FINITE_IN_IEEEFP_H}
 
 /* Define to 1 if you have the `floorf' function. */
 #cmakedefine HAVE_FLOORF ${HAVE_FLOORF}
@@ -137,7 +158,7 @@
 #undef HAVE_GRAPHVIZ
 
 /* Define if the gv program is available */
-#undef HAVE_GV
+#cmakedefine HAVE_GV ${HAVE_GV}
 
 /* Define to 1 if you have the `index' function. */
 #undef HAVE_INDEX
@@ -247,7 +268,10 @@
 #cmakedefine HAVE_NDIR_H ${HAVE_NDIR_H}
 
 /* Define to 1 if you have the `nearbyintf' function. */
-#undef HAVE_NEARBYINTF
+#cmakedefine HAVE_NEARBYINTF ${HAVE_NEARBYINTF}
+
+/* Define if the neato program is available */
+#cmakedefine HAVE_NEATO ${HAVE_NEATO}
 
 /* Define to 1 if you have the `opendir' function. */
 #undef HAVE_OPENDIR
@@ -289,7 +313,7 @@
 #undef HAVE_ROUNDF
 
 /* Define to 1 if you have the `sbrk' function. */
-#undef HAVE_SBRK
+#cmakedefine HAVE_SBRK ${HAVE_SBRK}
 
 /* Define to 1 if you have the `setenv' function. */
 #cmakedefine HAVE_SETENV ${HAVE_SETENV}
@@ -410,6 +434,9 @@
 /* Define to 1 if you have <sys/wait.h> that is POSIX.1 compatible. */
 #cmakedefine HAVE_SYS_WAIT_H ${HAVE_SYS_WAIT_H}
 
+/* Define if the twopi program is available */
+#cmakedefine HAVE_TWOPI ${HAVE_TWOPI}
+
 /* Define to 1 if the system has the type `uint64_t'. */
 #undef HAVE_UINT64_T
 
@@ -467,17 +494,29 @@
 /* Added by Kevin -- Maximum path length */
 #cmakedefine MAXPATHLEN ${MAXPATHLEN}
 
+/* Define to path to circo program if found or 'echo circo' otherwise */
+#cmakedefine LLVM_PATH_CIRCO "${LLVM_PATH_CIRCO}"
+
 /* Define to path to dot program if found or 'echo dot' otherwise */
-#undef LLVM_PATH_DOT
+#cmakedefine LLVM_PATH_DOT "${LLVM_PATH_DOT}"
 
 /* Define to path to dotty program if found or 'echo dotty' otherwise */
-#undef LLVM_PATH_DOTTY
+#cmakedefine LLVM_PATH_DOTTY "${LLVM_PATH_DOTTY}"
+
+/* Define to path to fdp program if found or 'echo fdp' otherwise */
+#cmakedefine LLVM_PATH_FDP "${LLVM_PATH_FDP}"
 
 /* Define to path to Graphviz program if found or 'echo Graphviz' otherwise */
 #undef LLVM_PATH_GRAPHVIZ
 
 /* Define to path to gv program if found or 'echo gv' otherwise */
-#undef LLVM_PATH_GV
+#cmakedefine LLVM_PATH_GV "${LLVM_PATH_GV}"
+
+/* Define to path to neato program if found or 'echo neato' otherwise */
+#cmakedefine LLVM_PATH_NEATO "${LLVM_PATH_NEATO}"
+
+/* Define to path to twopi program if found or 'echo twopi' otherwise */
+#cmakedefine LLVM_PATH_TWOPI "${LLVM_PATH_TWOPI}"
 
 /* Installation prefix directory */
 #undef LLVM_PREFIX
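[A hedged sketch of how the new Graphviz defines are typically consumed: feature code tests the HAVE_* symbol and, when it is set, runs the program found at configure time through the matching LLVM_PATH_* constant. viewDotFile is hypothetical:

    #include "llvm/Config/config.h"
    #include <cstdlib>
    #include <string>

    bool viewDotFile(const std::string &Filename) {
    #ifdef HAVE_DOT
      // LLVM_PATH_DOT expands to the absolute path found by configure.
      std::string Cmd = std::string(LLVM_PATH_DOT) + " -Tps " + Filename;
      return std::system(Cmd.c_str()) == 0;
    #else
      return false; // dot was not available when LLVM was configured
    #endif
    }
]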
diff --git a/libclamav/c++/llvm/include/llvm/Config/config.h.in b/libclamav/c++/llvm/include/llvm/Config/config.h.in
index 5257df9..8051f55 100644
--- a/libclamav/c++/llvm/include/llvm/Config/config.h.in
+++ b/libclamav/c++/llvm/include/llvm/Config/config.h.in
@@ -8,9 +8,24 @@
    */
 #undef CRAY_STACKSEG_END
 
+/* 32 bit multilib directory. */
+#undef CXX_INCLUDE_32BIT_DIR
+
+/* 64 bit multilib directory. */
+#undef CXX_INCLUDE_64BIT_DIR
+
+/* Architecture of the libstdc++ headers. */
+#undef CXX_INCLUDE_ARCH
+
+/* Directory with the libstdc++ headers. */
+#undef CXX_INCLUDE_ROOT
+
 /* Define to 1 if using `alloca.c'. */
 #undef C_ALLOCA
 
+/* Directories clang will search for headers */
+#undef C_INCLUDE_DIRS
+
 /* Define if CBE is enabled for printf %a output */
 #undef ENABLE_CBE_PRINTF_A
 
diff --git a/libclamav/c++/llvm/include/llvm/Constant.h b/libclamav/c++/llvm/include/llvm/Constant.h
index a42c7d4..8072fd9 100644
--- a/libclamav/c++/llvm/include/llvm/Constant.h
+++ b/libclamav/c++/llvm/include/llvm/Constant.h
@@ -48,6 +48,10 @@ protected:
     : User(ty, vty, Ops, NumOps) {}
 
   void destroyConstantImpl();
+  
+  void setOperand(unsigned i, Value *V) {
+    User::setOperand(i, V);
+  }
 public:
   /// isNullValue - Return true if this is the value that would be returned by
   /// getNullValue.
@@ -61,6 +65,10 @@ public:
   /// true for things like constant expressions that could divide by zero.
   bool canTrap() const;
 
+  /// isConstantUsed - Return true if the constant has users other than constant
+  /// exprs and other dangling things.
+  bool isConstantUsed() const;
+  
   enum PossibleRelocationsTy {
     NoRelocation = 0,
     LocalRelocation = 1,
@@ -83,16 +91,13 @@ public:
   /// FIXME: This really should not be in VMCore.
   PossibleRelocationsTy getRelocationInfo() const;
   
-  // Specialize get/setOperand for Constants as their operands are always
-  // constants as well.
-  Constant *getOperand(unsigned i) {
-    return static_cast<Constant*>(User::getOperand(i));
-  }
-  const Constant *getOperand(unsigned i) const {
-    return static_cast<const Constant*>(User::getOperand(i));
+  // Specialize get/setOperand for Users as their operands are always
+  // constants or BasicBlocks as well.
+  User *getOperand(unsigned i) {
+    return static_cast<User*>(User::getOperand(i));
   }
-  void setOperand(unsigned i, Constant *C) {
-    User::setOperand(i, C);
+  const User *getOperand(unsigned i) const {
+    return static_cast<const User*>(User::getOperand(i));
   }
   
   /// getVectorElements - This method, which is only valid on constant of vector
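[The accessor change above means a Constant's operands are no longer statically typed as Constant*: the BlockAddress constant introduced later in this patch carries a basic-block operand, so callers that need a Constant now recover it explicitly. A hedged sketch, where C is assumed to be some valid llvm::Constant*:

    llvm::User *Op = C->getOperand(0);
    if (llvm::Constant *COp = llvm::dyn_cast<llvm::Constant>(Op)) {
      // Common case: the operand really is another Constant.
    }
]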
diff --git a/libclamav/c++/llvm/include/llvm/Constants.h b/libclamav/c++/llvm/include/llvm/Constants.h
index 260a89a..caa13f6 100644
--- a/libclamav/c++/llvm/include/llvm/Constants.h
+++ b/libclamav/c++/llvm/include/llvm/Constants.h
@@ -22,15 +22,16 @@
 #define LLVM_CONSTANTS_H
 
 #include "llvm/Constant.h"
-#include "llvm/Type.h"
 #include "llvm/OperandTraits.h"
 #include "llvm/ADT/APInt.h"
 #include "llvm/ADT/APFloat.h"
 #include "llvm/ADT/SmallVector.h"
+#include <vector>
 
 namespace llvm {
 
 class ArrayType;
+class IntegerType;
 class StructType;
 class PointerType;
 class VectorType;
@@ -45,7 +46,6 @@ struct ConvertConstantType;
 /// represents both boolean and integral constants.
 /// @brief Class for constant integers.
 class ConstantInt : public Constant {
-  static ConstantInt *TheTrueVal, *TheFalseVal;
   void *operator new(size_t, unsigned);  // DO NOT IMPLEMENT
   ConstantInt(const ConstantInt &);      // DO NOT IMPLEMENT
   ConstantInt(const IntegerType *Ty, const APInt& V);
@@ -56,12 +56,12 @@ protected:
     return User::operator new(s, 0);
   }
 public:
-  static ConstantInt* getTrue(LLVMContext &Context);
-  static ConstantInt* getFalse(LLVMContext &Context);
+  static ConstantInt *getTrue(LLVMContext &Context);
+  static ConstantInt *getFalse(LLVMContext &Context);
   
   /// If Ty is a vector type, return a Constant with a splat of the given
   /// value. Otherwise return a ConstantInt for the given value.
-  static Constant* get(const Type* Ty, uint64_t V, bool isSigned = false);
+  static Constant *get(const Type *Ty, uint64_t V, bool isSigned = false);
                               
   /// Return a ConstantInt with the specified integer value for the specified
   /// type. If the type is wider than 64 bits, the value will be zero-extended
@@ -69,7 +69,7 @@ public:
   /// be interpreted as a 64-bit signed integer and sign-extended to fit
   /// the type.
   /// @brief Get a ConstantInt for a specific value.
-  static ConstantInt* get(const IntegerType* Ty, uint64_t V,
+  static ConstantInt *get(const IntegerType *Ty, uint64_t V,
                           bool isSigned = false);
 
   /// Return a ConstantInt with the specified value for the specified type. The
@@ -77,26 +77,26 @@ public:
   /// either getSExtValue() or getZExtValue() will yield a correctly sized and
   /// signed value for the type Ty.
   /// @brief Get a ConstantInt for a specific signed value.
-  static ConstantInt* getSigned(const IntegerType* Ty, int64_t V);
+  static ConstantInt *getSigned(const IntegerType *Ty, int64_t V);
   static Constant *getSigned(const Type *Ty, int64_t V);
   
   /// Return a ConstantInt with the specified value and an implied Type. The
   /// type is the integer type that corresponds to the bit width of the value.
-  static ConstantInt* get(LLVMContext &Context, const APInt& V);
+  static ConstantInt *get(LLVMContext &Context, const APInt &V);
 
   /// Return a ConstantInt constructed from the string Str with the given
   /// radix. 
-  static ConstantInt* get(const IntegerType* Ty, const StringRef& Str,
+  static ConstantInt *get(const IntegerType *Ty, StringRef Str,
                           uint8_t radix);
   
   /// If Ty is a vector type, return a Constant with a splat of the given
   /// value. Otherwise return a ConstantInt for the given value.
-  static Constant* get(const Type* Ty, const APInt& V);
+  static Constant *get(const Type* Ty, const APInt& V);
   
   /// Return the constant as an APInt value reference. This allows clients to
   /// obtain a copy of the value, with all its precision intact.
   /// @brief Return the constant's value.
-  inline const APInt& getValue() const {
+  inline const APInt &getValue() const {
     return Val;
   }
   
@@ -248,20 +248,20 @@ public:
   /// Floating point negation must be implemented with f(x) = -0.0 - x. This
   /// method returns the negative zero constant for floating point or vector
   /// floating point types; for all other types, it returns the null value.
-  static Constant* getZeroValueForNegation(const Type* Ty);
+  static Constant *getZeroValueForNegation(const Type *Ty);
   
   /// get() - This returns a ConstantFP, or a vector containing a splat of a
   /// ConstantFP, for the specified value in the specified type.  This should
   /// only be used for simple constant values like 2.0/1.0 etc, that are
   /// known-valid both as host double and as the target format.
-  static Constant* get(const Type* Ty, double V);
-  static Constant* get(const Type* Ty, const StringRef& Str);
-  static ConstantFP* get(LLVMContext &Context, const APFloat& V);
-  static ConstantFP* getNegativeZero(const Type* Ty);
-  static ConstantFP* getInfinity(const Type* Ty, bool negative = false);
+  static Constant *get(const Type* Ty, double V);
+  static Constant *get(const Type* Ty, StringRef Str);
+  static ConstantFP *get(LLVMContext &Context, const APFloat &V);
+  static ConstantFP *getNegativeZero(const Type* Ty);
+  static ConstantFP *getInfinity(const Type *Ty, bool Negative = false);
   
   /// isValueValidForType - return true if Ty is big enough to represent V.
-  static bool isValueValidForType(const Type *Ty, const APFloat& V);
+  static bool isValueValidForType(const Type *Ty, const APFloat &V);
   inline const APFloat& getValueAPF() const { return Val; }
 
   /// isNullValue - Return true if this is the value that would be returned by
@@ -281,7 +281,7 @@ public:
   /// two floating point values.  The version with a double operand is retained
   /// because it's so convenient to write isExactlyValue(2.0), but please use
   /// it only for simple constants.
-  bool isExactlyValue(const APFloat& V) const;
+  bool isExactlyValue(const APFloat &V) const;
 
   bool isExactlyValue(double V) const {
     bool ignored;
@@ -315,7 +315,7 @@ protected:
     return User::operator new(s, 0);
   }
 public:
-  static ConstantAggregateZero* get(const Type* Ty);
+  static ConstantAggregateZero* get(const Type *Ty);
   
   /// isNullValue - Return true if this is the value that would be returned by
   /// getNullValue.
@@ -343,8 +343,8 @@ protected:
   ConstantArray(const ArrayType *T, const std::vector<Constant*> &Val);
 public:
   // ConstantArray accessors
-  static Constant* get(const ArrayType* T, const std::vector<Constant*>& V);
-  static Constant* get(const ArrayType* T, Constant* const* Vals, 
+  static Constant *get(const ArrayType *T, const std::vector<Constant*> &V);
+  static Constant *get(const ArrayType *T, Constant *const *Vals, 
                        unsigned NumVals);
                              
   /// This method constructs a ConstantArray and initializes it with a text
@@ -353,7 +353,7 @@ public:
   /// of the array by one (you've been warned).  However, in some situations 
   /// this is not desired so if AddNull==false then the string is copied without
   /// null termination.
-  static Constant* get(LLVMContext &Context, const StringRef &Initializer,
+  static Constant *get(LLVMContext &Context, StringRef Initializer,
                        bool AddNull = true);
   
   /// Transparently provide more efficient getOperand methods.
@@ -414,12 +414,11 @@ protected:
   ConstantStruct(const StructType *T, const std::vector<Constant*> &Val);
 public:
   // ConstantStruct accessors
-  static Constant* get(const StructType* T, const std::vector<Constant*>& V);
-  static Constant* get(LLVMContext &Context, 
-                       const std::vector<Constant*>& V, bool Packed);
-  static Constant* get(LLVMContext &Context,
-                       Constant* const *Vals, unsigned NumVals,
-                       bool Packed);
+  static Constant *get(const StructType *T, const std::vector<Constant*> &V);
+  static Constant *get(LLVMContext &Context, 
+                       const std::vector<Constant*> &V, bool Packed);
+  static Constant *get(LLVMContext &Context,
+                       Constant *const *Vals, unsigned NumVals, bool Packed);
 
   /// Transparently provide more efficient getOperand methods.
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Constant);
@@ -464,9 +463,9 @@ protected:
   ConstantVector(const VectorType *T, const std::vector<Constant*> &Val);
 public:
   // ConstantVector accessors
-  static Constant* get(const VectorType* T, const std::vector<Constant*>& V);
-  static Constant* get(const std::vector<Constant*>& V);
-  static Constant* get(Constant* const* Vals, unsigned NumVals);
+  static Constant *get(const VectorType *T, const std::vector<Constant*> &V);
+  static Constant *get(const std::vector<Constant*> &V);
+  static Constant *get(Constant *const *Vals, unsigned NumVals);
   
   /// Transparently provide more efficient getOperand methods.
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Constant);
@@ -550,7 +549,47 @@ public:
   }
 };
 
+/// BlockAddress - The address of a basic block.
+///
+class BlockAddress : public Constant {
+  void *operator new(size_t, unsigned);                  // DO NOT IMPLEMENT
+  void *operator new(size_t s) { return User::operator new(s, 2); }
+  BlockAddress(Function *F, BasicBlock *BB);
+public:
+  /// get - Return a BlockAddress for the specified function and basic block.
+  static BlockAddress *get(Function *F, BasicBlock *BB);
+  
+  /// get - Return a BlockAddress for the specified basic block.  The basic
+  /// block must be embedded into a function.
+  static BlockAddress *get(BasicBlock *BB);
+  
+  /// Transparently provide more efficient getOperand methods.
+  DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
+  
+  Function *getFunction() const { return (Function*)Op<0>().get(); }
+  BasicBlock *getBasicBlock() const { return (BasicBlock*)Op<1>().get(); }
+  
+  /// isNullValue - Return true if this is the value that would be returned by
+  /// getNullValue.
+  virtual bool isNullValue() const { return false; }
+  
+  virtual void destroyConstant();
+  virtual void replaceUsesOfWithOnConstant(Value *From, Value *To, Use *U);
+  
+  /// Methods for support type inquiry through isa, cast, and dyn_cast:
+  static inline bool classof(const BlockAddress *) { return true; }
+  static inline bool classof(const Value *V) {
+    return V->getValueID() == BlockAddressVal;
+  }
+};
+
+template <>
+struct OperandTraits<BlockAddress> : public FixedNumOperandTraits<2> {
+};
 
+DEFINE_TRANSPARENT_CASTED_OPERAND_ACCESSORS(BlockAddress, Value)
+  
+//===----------------------------------------------------------------------===//
 /// ConstantExpr - a constant value that is initialized with an expression using
 /// other constant values.
 ///
@@ -607,39 +646,39 @@ public:
   /// getAlignOf constant expr - computes the alignment of a type in a target
   /// independent way (Note: the return type is an i32; Note: assumes that i8
   /// is byte aligned).
-  static Constant* getAlignOf(const Type* Ty);
+  static Constant *getAlignOf(const Type* Ty);
   
   /// getSizeOf constant expr - computes the size of a type in a target
   /// independent way (Note: the return type is an i64).
   ///
-  static Constant* getSizeOf(const Type* Ty);
+  static Constant *getSizeOf(const Type* Ty);
 
   /// getOffsetOf constant expr - computes the offset of a field in a target
   /// independent way (Note: the return type is an i64).
   ///
-  static Constant* getOffsetOf(const StructType* Ty, unsigned FieldNo);
+  static Constant *getOffsetOf(const StructType* Ty, unsigned FieldNo);
   
-  static Constant* getNeg(Constant* C);
-  static Constant* getFNeg(Constant* C);
-  static Constant* getNot(Constant* C);
-  static Constant* getAdd(Constant* C1, Constant* C2);
-  static Constant* getFAdd(Constant* C1, Constant* C2);
-  static Constant* getSub(Constant* C1, Constant* C2);
-  static Constant* getFSub(Constant* C1, Constant* C2);
-  static Constant* getMul(Constant* C1, Constant* C2);
-  static Constant* getFMul(Constant* C1, Constant* C2);
-  static Constant* getUDiv(Constant* C1, Constant* C2);
-  static Constant* getSDiv(Constant* C1, Constant* C2);
-  static Constant* getFDiv(Constant* C1, Constant* C2);
-  static Constant* getURem(Constant* C1, Constant* C2);
-  static Constant* getSRem(Constant* C1, Constant* C2);
-  static Constant* getFRem(Constant* C1, Constant* C2);
-  static Constant* getAnd(Constant* C1, Constant* C2);
-  static Constant* getOr(Constant* C1, Constant* C2);
-  static Constant* getXor(Constant* C1, Constant* C2);
-  static Constant* getShl(Constant* C1, Constant* C2);
-  static Constant* getLShr(Constant* C1, Constant* C2);
-  static Constant* getAShr(Constant* C1, Constant* C2);
+  static Constant *getNeg(Constant *C);
+  static Constant *getFNeg(Constant *C);
+  static Constant *getNot(Constant *C);
+  static Constant *getAdd(Constant *C1, Constant *C2);
+  static Constant *getFAdd(Constant *C1, Constant *C2);
+  static Constant *getSub(Constant *C1, Constant *C2);
+  static Constant *getFSub(Constant *C1, Constant *C2);
+  static Constant *getMul(Constant *C1, Constant *C2);
+  static Constant *getFMul(Constant *C1, Constant *C2);
+  static Constant *getUDiv(Constant *C1, Constant *C2);
+  static Constant *getSDiv(Constant *C1, Constant *C2);
+  static Constant *getFDiv(Constant *C1, Constant *C2);
+  static Constant *getURem(Constant *C1, Constant *C2);
+  static Constant *getSRem(Constant *C1, Constant *C2);
+  static Constant *getFRem(Constant *C1, Constant *C2);
+  static Constant *getAnd(Constant *C1, Constant *C2);
+  static Constant *getOr(Constant *C1, Constant *C2);
+  static Constant *getXor(Constant *C1, Constant *C2);
+  static Constant *getShl(Constant *C1, Constant *C2);
+  static Constant *getLShr(Constant *C1, Constant *C2);
+  static Constant *getAShr(Constant *C1, Constant *C2);
   static Constant *getTrunc   (Constant *C, const Type *Ty);
   static Constant *getSExt    (Constant *C, const Type *Ty);
   static Constant *getZExt    (Constant *C, const Type *Ty);
@@ -653,9 +692,9 @@ public:
   static Constant *getIntToPtr(Constant *C, const Type *Ty);
   static Constant *getBitCast (Constant *C, const Type *Ty);
 
-  static Constant* getNSWAdd(Constant* C1, Constant* C2);
-  static Constant* getNSWSub(Constant* C1, Constant* C2);
-  static Constant* getExactSDiv(Constant* C1, Constant* C2);
+  static Constant *getNSWAdd(Constant *C1, Constant *C2);
+  static Constant *getNSWSub(Constant *C1, Constant *C2);
+  static Constant *getExactSDiv(Constant *C1, Constant *C2);
 
   /// Transparently provide more efficient getOperand methods.
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Constant);
@@ -747,14 +786,14 @@ public:
   /// all elements must be Constant's.
   ///
   static Constant *getGetElementPtr(Constant *C,
-                                    Constant* const *IdxList, unsigned NumIdx);
+                                    Constant *const *IdxList, unsigned NumIdx);
   static Constant *getGetElementPtr(Constant *C,
                                     Value* const *IdxList, unsigned NumIdx);
 
   /// Create an "inbounds" getelementptr. See the documentation for the
   /// "inbounds" flag in LangRef.html for details.
   static Constant *getInBoundsGetElementPtr(Constant *C,
-                                            Constant* const *IdxList,
+                                            Constant *const *IdxList,
                                             unsigned NumIdx);
   static Constant *getInBoundsGetElementPtr(Constant *C,
                                             Value* const *IdxList,
@@ -796,7 +835,7 @@ public:
   Constant *getWithOperands(const std::vector<Constant*> &Ops) const {
     return getWithOperands(&Ops[0], (unsigned)Ops.size());
   }
-  Constant *getWithOperands(Constant* const *Ops, unsigned NumOps) const;
+  Constant *getWithOperands(Constant *const *Ops, unsigned NumOps) const;
   
   virtual void destroyConstant();
   virtual void replaceUsesOfWithOnConstant(Value *From, Value *To, Use *U);
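[Hedged usage sketch for the new BlockAddress constant: take the address of a label, then branch to it indirectly. BB and InsertAtEnd are assumed to be a BasicBlock* embedded in a function and the block receiving the new instruction, respectively:

    llvm::BlockAddress *Addr = llvm::BlockAddress::get(BB);
    llvm::IndirectBrInst *IBr =
        llvm::IndirectBrInst::Create(Addr, /*NumDests=*/1, InsertAtEnd);
    IBr->addDestination(BB); // every possible target must be listed
]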
diff --git a/libclamav/c++/llvm/include/llvm/Debugger/Debugger.h b/libclamav/c++/llvm/include/llvm/Debugger/Debugger.h
deleted file mode 100644
index 42de356..0000000
--- a/libclamav/c++/llvm/include/llvm/Debugger/Debugger.h
+++ /dev/null
@@ -1,176 +0,0 @@
-//===- Debugger.h - LLVM debugger library interface -------------*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file defines the LLVM source-level debugger library interface.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_DEBUGGER_DEBUGGER_H
-#define LLVM_DEBUGGER_DEBUGGER_H
-
-#include <string>
-#include <vector>
-
-namespace llvm {
-  class Module;
-  class InferiorProcess;
-  class LLVMContext;
-
-  /// Debugger class - This class implements the LLVM source-level debugger.
-  /// This allows clients to handle the user IO processing without having to
-  /// worry about how the debugger itself works.
-  ///
-  class Debugger {
-    // State the debugger needs when starting and stopping the program.
-    std::vector<std::string> ProgramArguments;
-
-    // The environment to run the program with.  This should eventually be
-    // changed to vector of strings when we allow the user to edit the
-    // environment.
-    const char * const *Environment;
-
-    // Program - The currently loaded program, or null if none is loaded.
-    Module *Program;
-
-    // Process - The currently executing inferior process.
-    InferiorProcess *Process;
-
-    Debugger(const Debugger &);         // DO NOT IMPLEMENT
-    void operator=(const Debugger &);   // DO NOT IMPLEMENT
-  public:
-    Debugger();
-    ~Debugger();
-
-    //===------------------------------------------------------------------===//
-    // Methods for manipulating and inspecting the execution environment.
-    //
-
-    /// initializeEnvironment - Specify the environment the program should run
-    /// with.  This is used to initialize the environment of the program to the
-    /// environment of the debugger.
-    void initializeEnvironment(const char *const *envp) {
-      Environment = envp;
-    }
-
-    /// setWorkingDirectory - Specify the working directory for the program to
-    /// be started from.
-    void setWorkingDirectory(const std::string &Dir) {
-      // FIXME: implement
-    }
-
-    template<typename It>
-    void setProgramArguments(It I, It E) {
-      ProgramArguments.assign(I, E);
-    }
-    unsigned getNumProgramArguments() const {
-      return static_cast<unsigned>(ProgramArguments.size());
-    }
-    const std::string &getProgramArgument(unsigned i) const {
-      return ProgramArguments[i];
-    }
-
-
-    //===------------------------------------------------------------------===//
-    // Methods for manipulating and inspecting the program currently loaded.
-    //
-
-    /// isProgramLoaded - Return true if there is a program currently loaded.
-    ///
-    bool isProgramLoaded() const { return Program != 0; }
-
-    /// getProgram - Return the LLVM module corresponding to the program.
-    ///
-    Module *getProgram() const { return Program; }
-
-    /// getProgramPath - Get the path of the currently loaded program, or an
-    /// empty string if none is loaded.
-    std::string getProgramPath() const;
-
-    /// loadProgram - If a program is currently loaded, unload it.  Then search
-    /// the PATH for the specified program, loading it when found.  If the
-    /// specified program cannot be found, an exception is thrown to indicate
-    /// the error.
-    void loadProgram(const std::string &Path, LLVMContext& Context);
-
-    /// unloadProgram - If a program is running, kill it, then unload all traces
-    /// of the current program.  If no program is loaded, this method silently
-    /// succeeds.
-    void unloadProgram();
-
-    //===------------------------------------------------------------------===//
-    // Methods for manipulating and inspecting the program currently running.
-    //
-    // If the program is running, and the debugger is active, then we know that
-    // the program has stopped.  This being the case, we can inspect the
-    // program, ask it for its source location, set breakpoints, etc.
-    //
-
-    /// isProgramRunning - Return true if a program is loaded and has a
-    /// currently active instance.
-    bool isProgramRunning() const { return Process != 0; }
-
-    /// getRunningProcess - If there is no program running, throw an exception.
-    /// Otherwise return the running process so that it can be inspected by the
-    /// debugger.
-    const InferiorProcess &getRunningProcess() const {
-      if (Process == 0) throw "No process running.";
-      return *Process;
-    }
-
-    /// createProgram - Create an instance of the currently loaded program,
-    /// killing off any existing one.  This creates the program and stops it at
-    /// the first possible moment.  If there is no program loaded or if there is
-    /// a problem starting the program, this method throws an exception.
-    void createProgram();
-
-    /// killProgram - If the program is currently executing, kill off the
-    /// process and free up any state related to the currently running program.
-    /// If there is no program currently running, this just silently succeeds.
-    /// If something horrible happens when killing the program, an exception
-    /// gets thrown.
-    void killProgram();
-
-
-    //===------------------------------------------------------------------===//
-    // Methods for continuing execution.  These methods continue the execution
-    // of the program by some amount.  If the program is successfully stopped,
-    // execution returns, otherwise an exception is thrown.
-    //
-    // NOTE: These methods should always be used in preference to directly
-    // accessing the Dbg object, because these will delete the Process object if
-    // the process unexpectedly dies.
-    //
-
-    /// stepProgram - Implement the 'step' command, continuing execution until
-    /// the next possible stop point.
-    void stepProgram();
-
-    /// nextProgram - Implement the 'next' command, continuing execution until
-    /// the next possible stop point that is in the current function.
-    void nextProgram();
-
-    /// finishProgram - Implement the 'finish' command, continuing execution
-    /// until the specified frame ID returns.
-    void finishProgram(void *Frame);
-
-    /// contProgram - Implement the 'cont' command, continuing execution until
-    /// the next breakpoint is encountered.
-    void contProgram();
-  };
-
-  class NonErrorException {
-    std::string Message;
-  public:
-    NonErrorException(const std::string &M) : Message(M) {}
-    const std::string &getMessage() const { return Message; }
-  };
-
-} // end namespace llvm
-
-#endif
diff --git a/libclamav/c++/llvm/include/llvm/Debugger/InferiorProcess.h b/libclamav/c++/llvm/include/llvm/Debugger/InferiorProcess.h
deleted file mode 100644
index 71d138b..0000000
--- a/libclamav/c++/llvm/include/llvm/Debugger/InferiorProcess.h
+++ /dev/null
@@ -1,137 +0,0 @@
-//===- InferiorProcess.h - Represent the program being debugged -*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file defines the InferiorProcess class, which is used to represent,
-// inspect, and manipulate a process under the control of the LLVM debugger.
-//
-// This is an abstract class which should allow various different types of
-// implementations.  Initially we implement a unix specific debugger backend
-// that does not require code generator support, but we could eventually use
-// code generator support with ptrace, support windows based targets, supported
-// remote targets, etc.
-//
-// If the inferior process unexpectedly dies, an attempt to communicate with it
-// will cause an InferiorProcessDead exception to be thrown, indicating the exit
-// code of the process.  When this occurs, no methods on the InferiorProcess
-// class should be called except for the destructor.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_DEBUGGER_INFERIORPROCESS_H
-#define LLVM_DEBUGGER_INFERIORPROCESS_H
-
-#include <string>
-#include <vector>
-
-namespace llvm {
-  class Module;
-  class GlobalVariable;
-
-  /// InferiorProcessDead exception - This class is thrown by methods that
-  /// communicate with the inferior process if the process unexpectedly exits or
-  /// dies.  The instance variable indicates what the exit code of the process
-  /// was, or -1 if unknown.
-  class InferiorProcessDead {
-    int ExitCode;
-  public:
-    InferiorProcessDead(int EC) : ExitCode(EC) {}
-    int getExitCode() const { return ExitCode; }
-  };
-
-  /// InferiorProcess class - This class represents the process being debugged
-  /// by the debugger.  Objects of this class should not be stack allocated,
-  /// because the destructor can throw exceptions.
-  ///
-  class InferiorProcess {
-    Module *M;
-  protected:
-    InferiorProcess(Module *m) : M(m) {}
-  public:
-    /// create - Create an inferior process of the specified module, and
-    /// stop it at the first opportunity.  If there is a problem starting the
-    /// program (for example, it has no main), throw an exception.
-    static InferiorProcess *create(Module *M,
-                                   const std::vector<std::string> &Arguments,
-                                   const char * const *envp);
-
-    // InferiorProcess destructor - Kill the current process.  If something
-    // terrible happens, we throw an exception from the destructor.
-    virtual ~InferiorProcess() {}
-
-    //===------------------------------------------------------------------===//
-    // Status methods - These methods return information about the currently
-    // stopped process.
-    //
-
-    /// getStatus - Return a status message that is specific to the current type
-    /// of inferior process that is created.  This can return things like the
-    /// PID of the inferior or other potentially interesting things.
-    virtual std::string getStatus() const {
-      return "";
-    }
-
-    //===------------------------------------------------------------------===//
-    // Methods for inspecting the call stack.
-    //
-
-    /// getPreviousFrame - Given the descriptor for the current stack frame,
-    /// return the descriptor for the caller frame.  This returns null when it
-    /// runs out of frames.  If Frame is null, the initial frame should be
-    /// returned.
-    virtual void *getPreviousFrame(void *Frame) const = 0;
-
-    /// getSubprogramDesc - Return the subprogram descriptor for the current
-    /// stack frame.
-    virtual const GlobalVariable *getSubprogramDesc(void *Frame) const = 0;
-
-    /// getFrameLocation - This method returns the source location where each
-    /// stack frame is stopped.
-    virtual void getFrameLocation(void *Frame, unsigned &LineNo,
-                                  unsigned &ColNo,
-                                  const GlobalVariable *&SourceDesc) const = 0;
-
-    //===------------------------------------------------------------------===//
-    // Methods for manipulating breakpoints.
-    //
-
-    /// addBreakpoint - This method adds a breakpoint at the specified line,
-    /// column, and source file, and returns a unique identifier for it.
-    ///
-    /// It is up to the debugger to determine whether or not there is actually a
-    /// stop-point that corresponds with the specified location.
-    virtual unsigned addBreakpoint(unsigned LineNo, unsigned ColNo,
-                                   const GlobalVariable *SourceDesc) = 0;
-
-    /// removeBreakpoint - This deletes the breakpoint with the specified ID
-    /// number.
-    virtual void removeBreakpoint(unsigned ID) = 0;
-
-
-    //===------------------------------------------------------------------===//
-    // Execution methods - These methods cause the program to continue execution
-    // by some amount.  If the program successfully stops, this returns.
-    // Otherwise, if the program unexpectedly terminates, an InferiorProcessDead
-    // exception is thrown.
-    //
-
-    /// stepProgram - Implement the 'step' command, continuing execution until
-    /// the next possible stop point.
-    virtual void stepProgram() = 0;
-
-    /// finishProgram - Implement the 'finish' command, continuing execution
-    /// until the current function returns.
-    virtual void finishProgram(void *Frame) = 0;
-
-    /// contProgram - Implement the 'cont' command, continuing execution until
-    /// a breakpoint is encountered.
-    virtual void contProgram() = 0;
-  };
-}  // end namespace llvm
-
-#endif
diff --git a/libclamav/c++/llvm/include/llvm/Debugger/ProgramInfo.h b/libclamav/c++/llvm/include/llvm/Debugger/ProgramInfo.h
deleted file mode 100644
index 8f31d7f..0000000
--- a/libclamav/c++/llvm/include/llvm/Debugger/ProgramInfo.h
+++ /dev/null
@@ -1,246 +0,0 @@
-//===- ProgramInfo.h - Information about the loaded program -----*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file defines various pieces of information about the currently loaded
-// program.  One instance of this object is created every time a program is
-// loaded, and destroyed every time it is unloaded.
-//
-// The various pieces of information gathered about the source program are all
-// designed to be extended by various SourceLanguage implementations.  This
-// allows source languages to keep any extended information that they support in
-// the derived class portions of the class.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_DEBUGGER_PROGRAMINFO_H
-#define LLVM_DEBUGGER_PROGRAMINFO_H
-
-#include "llvm/System/TimeValue.h"
-#include <string>
-#include <map>
-#include <vector>
-
-namespace llvm {
-  class GlobalVariable;
-  class Module;
-  class SourceFile;
-  struct SourceLanguage;
-  class ProgramInfo;
-
-  /// SourceLanguageCache - SourceLanguage implementations are allowed to cache
-  /// stuff in the ProgramInfo object.  The only requirement we have on these
-  /// instances is that they are destroyable.
-  struct SourceLanguageCache {
-    virtual ~SourceLanguageCache() {}
-  };
-
-  /// SourceFileInfo - One instance of this structure is created for each
-  /// source file in the program.
-  ///
-  class SourceFileInfo {
-    /// BaseName - The filename of the source file.
-    std::string BaseName;
-
-    /// Directory - The working directory of this source file when it was
-    /// compiled.
-    std::string Directory;
-
-    /// Version - The version of the LLVM debug information that this file was
-    /// compiled with.
-    unsigned Version;
-
-    /// Language - The source language that the file was compiled with.  This
-    /// pointer is never null.
-    ///
-    const SourceLanguage *Language;
-
-    /// Descriptor - The LLVM Global Variable which describes the source file.
-    ///
-    const GlobalVariable *Descriptor;
-
-    /// SourceText - The body of this source file, or null if it has not yet
-    /// been loaded.
-    mutable SourceFile *SourceText;
-  public:
-    SourceFileInfo(const GlobalVariable *Desc, const SourceLanguage &Lang);
-    ~SourceFileInfo();
-
-    const std::string &getBaseName() const { return BaseName; }
-    const std::string &getDirectory() const { return Directory; }
-    unsigned getDebugVersion() const { return Version; }
-    const GlobalVariable *getDescriptor() const { return Descriptor; }
-    SourceFile &getSourceText() const;
-
-    const SourceLanguage &getLanguage() const { return *Language; }
-  };
-
-
-  /// SourceFunctionInfo - An instance of this class is used to represent each
-  /// source function in the program.
-  ///
-  class SourceFunctionInfo {
-    /// Name - This contains an abstract name that is potentially useful to the
-    /// end-user.  If there is no explicit support for the current language,
-    /// then this string is used to identify the function.
-    std::string Name;
-
-    /// Descriptor - The descriptor for this function.
-    ///
-    const GlobalVariable *Descriptor;
-
-    /// SourceFile - The file that this function is defined in.
-    ///
-    const SourceFileInfo *SourceFile;
-
-    /// LineNo, ColNo - The location of the first stop-point in the function.
-    /// These are computed on demand.
-    mutable unsigned LineNo, ColNo;
-
-  public:
-    SourceFunctionInfo(ProgramInfo &PI, const GlobalVariable *Desc);
-    virtual ~SourceFunctionInfo() {}
-
-    /// getSymbolicName - Return a human-readable symbolic name to identify the
-    /// function (for example, in stack traces).
-    virtual std::string getSymbolicName() const { return Name; }
-
-    /// getDescriptor - This returns the descriptor for the function.
-    ///
-    const GlobalVariable *getDescriptor() const { return Descriptor; }
-
-    /// getSourceFile - This returns the source file that defines the function.
-    ///
-    const SourceFileInfo &getSourceFile() const { return *SourceFile; }
-
-    /// getSourceLocation - This method returns the location of the first
-    /// stopping point in the function.  If the body of the function cannot be
-    /// found, this returns zeros for both values.
-    void getSourceLocation(unsigned &LineNo, unsigned &ColNo) const;
-  };
-
-
-  /// ProgramInfo - This object contains information about the loaded program.
-  /// When a new program is loaded, an instance of this class is created.  When
-  /// the program is unloaded, the instance is destroyed.  This object basically
-  /// manages the lazy computation of information useful for the debugger.
-  class ProgramInfo {
-    Module *M;
-
-    /// ProgramTimeStamp - This is the timestamp of the executable file that we
-    /// currently have loaded into the debugger.
-    sys::TimeValue ProgramTimeStamp;
-
-    /// SourceFiles - This map is used to transform source file descriptors into
-    /// their corresponding SourceFileInfo objects.  This mapping owns the
-    /// memory for the SourceFileInfo objects.
-    ///
-    bool SourceFilesIsComplete;
-    std::map<const GlobalVariable*, SourceFileInfo*> SourceFiles;
-
-    /// SourceFileIndex - Mapping from source file basenames to the information
-    /// about the file.  Note that there can be filename collisions, so this is
-    /// a multimap.  This map is populated incrementally as the user interacts
-    /// with the program, through the getSourceFileFromDesc method.  If ALL of
-    /// the source files are needed, the getSourceFiles() method scans the
-    /// entire program looking for them.
-    ///
-    std::multimap<std::string, SourceFileInfo*> SourceFileIndex;
-
-    /// SourceFunctions - This map contains entries for the functions in the
-    /// source program.  If SourceFunctionsIsComplete is true, then ALL of the
-    /// functions in the program are in this map.
-    bool SourceFunctionsIsComplete;
-    std::map<const GlobalVariable*, SourceFunctionInfo*> SourceFunctions;
-
-    /// LanguageCaches - Each source language is permitted to keep a per-program
-    /// cache of information specific to whatever it needs.  This vector is
-    /// effectively a small map from the languages that are active in the
-    /// program to their caches.  A language can access its cache via the
-    /// "getLanguageCache" method.
-    std::vector<std::pair<const SourceLanguage*,
-                          SourceLanguageCache*> > LanguageCaches;
-  public:
-    ProgramInfo(Module *m);
-    ~ProgramInfo();
-
-    /// getProgramTimeStamp - Return the time-stamp of the program when it was
-    /// loaded.
-    sys::TimeValue getProgramTimeStamp() const { return ProgramTimeStamp; }
-
-    //===------------------------------------------------------------------===//
-    // Interfaces to the source code files that make up the program.
-    //
-
-    /// getSourceFile - Return source file information for the specified source
-    /// file descriptor object, adding it to the collection as needed.  This
-    /// method always succeeds (is unambiguous), and is always efficient.
-    ///
-    const SourceFileInfo &getSourceFile(const GlobalVariable *Desc);
-
-    /// getSourceFile - Look up the file with the specified name.  If there is
-    /// more than one match for the specified filename, prompt the user to pick
-    /// one.  If there is no source file that matches the specified name, throw
-    /// an exception indicating that we can't find the file.  Otherwise, return
-    /// the file information for that file.
-    ///
-    /// If the source file hasn't been discovered yet in the program, this
-    /// method might have to index the whole program by calling the
-    /// getSourceFiles() method.
-    ///
-    const SourceFileInfo &getSourceFile(const std::string &Filename);
-
-    /// getSourceFiles - Index all of the source files in the program and return
-    /// them.  This information is lazily computed the first time that it is
-    /// requested.  Since this information can take a long time to compute, the
-    /// user is given a chance to cancel it.  If this occurs, an exception is
-    /// thrown.
-    const std::map<const GlobalVariable*, SourceFileInfo*> &
-    getSourceFiles(bool RequiresCompleteMap = true);
-
-    //===------------------------------------------------------------------===//
-    // Interfaces to the functions that make up the program.
-    //
-
-    /// getFunction - Return source function information for the specified
-    /// function descriptor object, adding it to the collection as needed.  This
-    /// method always succeeds (is unambiguous), and is always efficient.
-    ///
-    const SourceFunctionInfo &getFunction(const GlobalVariable *Desc);
-
-    /// getSourceFunctions - Index all of the functions in the program and
-    /// return them.  This information is lazily computed the first time that it
-    /// is requested.  Since this information can take a long time to compute,
-    /// the user is given a chance to cancel it.  If this occurs, an exception
-    /// is thrown.
-    const std::map<const GlobalVariable*, SourceFunctionInfo*> &
-    getSourceFunctions(bool RequiresCompleteMap = true);
-
-    /// allSourceFunctionsRead - Return true if the source functions map is
-    /// complete: that is, all functions in the program have been read in.
-    bool allSourceFunctionsRead() const { return SourceFunctionsIsComplete; }
-
-    /// getLanguageCache - This method is used to build per-program caches of
-    /// information, such as the functions or types visible to the program.
-    /// This can only be used by SourceLanguage implementations that provide an
-    /// accessible [sl]::CacheType typedef, where [sl] is the C++ type of the
-    /// source-language subclass.
-    template<typename SL>
-    typename SL::CacheType &getLanguageCache(const SL *L) {
-      for (unsigned i = 0, e = LanguageCaches.size(); i != e; ++i)
-        if (LanguageCaches[i].first == L)
-          return *(typename SL::CacheType*)LanguageCaches[i].second;
-      typename SL::CacheType *NewCache = L->createSourceLanguageCache(*this);
-      LanguageCaches.push_back(std::make_pair(L, NewCache));
-      return *NewCache;
-    }
-  };
-
-} // end namespace llvm
-
-#endif
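
For context, the CacheType contract that getLanguageCache() documents above
would be satisfied by a language plugin shaped roughly like this (a hedged
sketch; MyLang and MyLangCache are invented names):

    // Sketch only: MyLang/MyLangCache are hypothetical.
    struct MyLangCache : public SourceLanguageCache {
      // ... per-program data this language wants to memoize ...
    };

    struct MyLang : public SourceLanguage {
      typedef MyLangCache CacheType;       // required by getLanguageCache<SL>
      MyLangCache *createSourceLanguageCache(ProgramInfo &PI) const {
        return new MyLangCache();          // ownership presumably passes to PI
      }
      // ... plus the pure-virtual getSourceLanguageName(), etc. ...
    };

    // In language code, the per-program cache is then fetched (and lazily
    // created on first use) with:
    //   MyLangCache &C = PI.getLanguageCache(this);
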
diff --git a/libclamav/c++/llvm/include/llvm/Debugger/RuntimeInfo.h b/libclamav/c++/llvm/include/llvm/Debugger/RuntimeInfo.h
deleted file mode 100644
index c537651..0000000
--- a/libclamav/c++/llvm/include/llvm/Debugger/RuntimeInfo.h
+++ /dev/null
@@ -1,142 +0,0 @@
-//===- RuntimeInfo.h - Information about running program --------*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file defines classes that capture various pieces of information about
-// the currently executing, but stopped, program.  One instance of this object
-// is created every time a program is stopped, and destroyed every time it
-// starts running again.  This object's main goal is to make access to runtime
-// information easy and efficient, by caching information as requested.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_DEBUGGER_RUNTIMEINFO_H
-#define LLVM_DEBUGGER_RUNTIMEINFO_H
-
-#include <vector>
-#include <cassert>
-
-namespace llvm {
-  class ProgramInfo;
-  class RuntimeInfo;
-  class InferiorProcess;
-  class GlobalVariable;
-  class SourceFileInfo;
-
-  /// StackFrame - One instance of this structure is created for each stack
-  /// frame that is active in the program.
-  ///
-  class StackFrame {
-    RuntimeInfo &RI;
-    void *FrameID;
-    const GlobalVariable *FunctionDesc;
-
-    /// LineNo, ColNo, FileInfo - This information indicates WHERE in the source
-    /// code for the program the stack frame is located.
-    unsigned LineNo, ColNo;
-    const SourceFileInfo *SourceInfo;
-  public:
-    StackFrame(RuntimeInfo &RI, void *ParentFrameID);
-
-    StackFrame &operator=(const StackFrame &RHS) {
-      FrameID = RHS.FrameID;
-      FunctionDesc = RHS.FunctionDesc;
-      return *this;
-    }
-
-    /// getFrameID - return the low-level opaque frame ID of this stack frame.
-    ///
-    void *getFrameID() const { return FrameID; }
-
-    /// getFunctionDesc - Return the descriptor for the function that contains
-    /// this stack frame, or null if it is unknown.
-    ///
-    const GlobalVariable *getFunctionDesc();
-
-    /// getSourceLocation - Return the source location that this stack frame is
-    /// sitting at.
-    void getSourceLocation(unsigned &LineNo, unsigned &ColNo,
-                           const SourceFileInfo *&SourceInfo);
-  };
-
-
-  /// RuntimeInfo - This class collects information about the currently running
-  /// process.  It is created whenever the program stops execution for the
-  /// debugger, and destroyed whenever execution continues.
-  class RuntimeInfo {
-    /// ProgInfo - This object contains static information about the program.
-    ///
-    ProgramInfo *ProgInfo;
-
-    /// IP - This object contains information about the actual inferior process
-    /// that we are communicating with and aggregating information from.
-    const InferiorProcess &IP;
-
-    /// CallStack - This caches information about the current stack trace of the
-    /// program.  This is lazily computed as needed.
-    std::vector<StackFrame> CallStack;
-
-    /// CurrentFrame - The user can traverse the stack frame with the
-    /// up/down/frame family of commands.  This index indicates the current
-    /// stack frame.
-    unsigned CurrentFrame;
-
-  public:
-    RuntimeInfo(ProgramInfo *PI, const InferiorProcess &ip)
-      : ProgInfo(PI), IP(ip), CurrentFrame(0) {
-      // Make sure that the top of stack has been materialized.  If this throws
-      // an exception, something is seriously wrong and the RuntimeInfo object
-      // would be unusable anyway.
-      getStackFrame(0);
-    }
-
-    ProgramInfo &getProgramInfo() { return *ProgInfo; }
-    const InferiorProcess &getInferiorProcess() const { return IP; }
-
-    //===------------------------------------------------------------------===//
-    // Methods for inspecting the call stack of the program.
-    //
-
-    /// getStackFrame - Materialize the specified stack frame and return it.  If
-    /// the specified ID is off of the bottom of the stack, throw an exception
-    /// indicating the problem.
-    StackFrame &getStackFrame(unsigned ID) {
-      if (ID >= CallStack.size())
-        materializeFrame(ID);
-      return CallStack[ID];
-    }
-
-    /// getCurrentFrame - Return the current stack frame object that the user is
-    /// inspecting.
-    StackFrame &getCurrentFrame() {
-      assert(CallStack.size() > CurrentFrame &&
-             "Must have materialized frame before making it current!");
-      return CallStack[CurrentFrame];
-    }
-
-    /// getCurrentFrameIdx - Return the current frame the user is inspecting.
-    ///
-    unsigned getCurrentFrameIdx() const { return CurrentFrame; }
-
-    /// setCurrentFrameIdx - Set the current frame index to the specified value.
-    /// Note that the specified frame must have been materialized with
-    /// getStackFrame before it can be made current.
-    void setCurrentFrameIdx(unsigned Idx) {
-      assert(Idx < CallStack.size() &&
-             "Must materialize frame before making it current!");
-      CurrentFrame = Idx;
-    }
-  private:
-    /// materializeFrame - Create and process all frames up to and including the
-    /// specified frame number.  This throws an exception if the specified frame
-    /// ID is nonexistent.
-    void materializeFrame(unsigned ID);
-  };
-}
-
-#endif
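
A sketch of how a debugger client would walk this API (purely illustrative;
printFrame is a hypothetical helper):

    // Frames materialize lazily via getStackFrame(); walking past the
    // bottom of the stack throws, per the comment above.
    void printBacktrace(RuntimeInfo &RI) {
      try {
        for (unsigned i = 0; ; ++i) {
          StackFrame &SF = RI.getStackFrame(i);   // materializes 0..i
          unsigned Line, Col;
          const SourceFileInfo *SFI;
          SF.getSourceLocation(Line, Col, SFI);
          printFrame(i, SF.getFunctionDesc(), Line, Col);  // hypothetical
        }
      } catch (...) {
        // Ran off the bottom of the call stack: backtrace complete.
      }
    }
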
diff --git a/libclamav/c++/llvm/include/llvm/Debugger/SourceFile.h b/libclamav/c++/llvm/include/llvm/Debugger/SourceFile.h
deleted file mode 100644
index 155b45f..0000000
--- a/libclamav/c++/llvm/include/llvm/Debugger/SourceFile.h
+++ /dev/null
@@ -1,87 +0,0 @@
-//===- SourceFile.h - Class to represent a source code file -----*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file defines the SourceFile class which is used to represent a single
-// file of source code in the program, caching data from the file to make access
-// efficient.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_DEBUGGER_SOURCEFILE_H
-#define LLVM_DEBUGGER_SOURCEFILE_H
-
-#include "llvm/System/Path.h"
-#include "llvm/ADT/OwningPtr.h"
-#include <vector>
-
-namespace llvm {
-  class GlobalVariable;
-  class MemoryBuffer;
-
-  class SourceFile {
-    /// Filename - This is the full path of the file that is loaded.
-    ///
-    sys::Path Filename;
-
-    /// Descriptor - The debugging descriptor for this source file.  If there
-    /// are multiple descriptors for the same file, this is just the first one
-    /// encountered.
-    ///
-    const GlobalVariable *Descriptor;
-
-    /// This is the memory mapping for the file so we can gain access to it.
-    OwningPtr<MemoryBuffer> File;
-
-    /// LineOffset - This vector contains a mapping from source line numbers to
-    /// their offsets in the file.  This data is computed lazily, the first time
-    /// it is asked for.  If there are zero elements allocated in this vector,
-    /// then it has not yet been computed.
-    mutable std::vector<unsigned> LineOffset;
-
-  public:
-    /// SourceFile constructor - Read in the specified source file if it exists,
-    /// but do not build the LineOffsets table until it is requested.  This will
-    /// NOT throw an exception if the file is not found, if there is an error
-    /// reading it, or if the user cancels the operation.  Instead, it will just
-    /// be an empty source file.
-    SourceFile(const std::string &fn, const GlobalVariable *Desc);
-    
-    ~SourceFile();
-
-    /// getDescriptor - Return the debugging descriptor for this source file.
-    ///
-    const GlobalVariable *getDescriptor() const { return Descriptor; }
-
-    /// getFilename - Return the fully resolved path that this file was loaded
-    /// from.
-    const std::string &getFilename() const { return Filename.str(); }
-
-    /// getSourceLine - Given a line number, return the start and end of the
-    /// line in the file.  If the line number is invalid, or if the file could
-    /// not be loaded, null pointers are returned for the start and end of the
-    /// file.  Note that line numbers start with 0, not 1.  This also strips off
-    /// any newlines from the end of the line, to ease formatting of the text.
-    void getSourceLine(unsigned LineNo, const char *&LineStart,
-                       const char *&LineEnd) const;
-
-    /// getNumLines - Return the number of lines the source file contains.
-    ///
-    unsigned getNumLines() const {
-      if (LineOffset.empty()) calculateLineOffsets();
-      return static_cast<unsigned>(LineOffset.size());
-    }
-
-  private:
-    /// calculateLineOffsets - Compute the LineOffset vector for the current
-    /// file.
-    void calculateLineOffsets() const;
-  };
-} // end namespace llvm
-
-#endif
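
The lazily computed LineOffset table described above admits a straightforward
implementation; the real body lives in SourceFile.cpp (not part of this
patch), but it plausibly looks like:

    // Hedged sketch of the lazy computation: record the buffer offset of
    // the first character of every line.
    void SourceFile::calculateLineOffsets() const {
      assert(LineOffset.empty() && "Line offsets already computed!");
      if (!File.get()) return;              // file could not be loaded
      const char *Buf = File->getBufferStart();
      const char *End = File->getBufferEnd();
      LineOffset.push_back(0);              // line 0 starts at offset 0
      for (const char *P = Buf; P != End; ++P)
        if (*P == '\n' || *P == '\r') {
          // Fold "\r\n" / "\n\r" pairs into a single line terminator.
          if (P + 1 != End && (P[1] == '\n' || P[1] == '\r') && P[1] != *P)
            ++P;
          LineOffset.push_back(static_cast<unsigned>(P + 1 - Buf));
        }
    }
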
diff --git a/libclamav/c++/llvm/include/llvm/Debugger/SourceLanguage.h b/libclamav/c++/llvm/include/llvm/Debugger/SourceLanguage.h
deleted file mode 100644
index a07dd97..0000000
--- a/libclamav/c++/llvm/include/llvm/Debugger/SourceLanguage.h
+++ /dev/null
@@ -1,99 +0,0 @@
-//===- SourceLanguage.h - Interact with source languages --------*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file defines the abstract SourceLanguage interface, which is used by the
-// LLVM debugger to parse source-language expressions and render program objects
-// into a human readable string.  In general, these classes perform all of the
-// analysis and interpretation of the language-specific debugger information.
-//
-// This interface is designed to be completely stateless, so all methods are
-// const.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_DEBUGGER_SOURCELANGUAGE_H
-#define LLVM_DEBUGGER_SOURCELANGUAGE_H
-
-#include <string>
-
-namespace llvm {
-  class GlobalVariable;
-  class SourceFileInfo;
-  class SourceFunctionInfo;
-  class ProgramInfo;
-  class RuntimeInfo;
-
-  struct SourceLanguage {
-    virtual ~SourceLanguage() {}
-
-    /// getSourceLanguageName - This method is used to implement the 'show
-    /// language' command in the debugger.
-    virtual const char *getSourceLanguageName() const = 0;
-
-    //===------------------------------------------------------------------===//
-    // Methods used to implement debugger hooks.
-    //
-
-    /// printInfo - Implementing this method allows the debugger to use
-    /// language-specific 'info' extensions, e.g., 'info selectors' for objc.
-    /// This method should return true if the specified string is recognized.
-    ///
-    virtual bool printInfo(const std::string &What) const {
-      return false;
-    }
-
-    /// lookupFunction - Given a textual function name, return the
-    /// SourceFunctionInfo descriptor for that function, or null if it cannot be
-    /// found.  If the program is currently running, the RuntimeInfo object
-    /// provides information about the current evaluation context, otherwise it
-    /// will be null.
-    ///
-    virtual SourceFunctionInfo *lookupFunction(const std::string &FunctionName,
-                                               ProgramInfo &PI,
-                                               RuntimeInfo *RI = 0) const {
-      return 0;
-    }
-
-
-    //===------------------------------------------------------------------===//
-    // Methods used to parse various pieces of program information.
-    //
-
-    /// createSourceFileInfo - This method can be implemented by the front-end
-    /// if it needs to keep track of information beyond what the debugger
-    /// requires.
-    virtual SourceFileInfo *
-    createSourceFileInfo(const GlobalVariable *Desc, ProgramInfo &PI) const;
-
-    /// createSourceFunctionInfo - This method can be implemented by the derived
-    /// SourceLanguage if it needs to keep track of more information than the
-    /// SourceFunctionInfo has.
-    virtual SourceFunctionInfo *
-    createSourceFunctionInfo(const GlobalVariable *Desc, ProgramInfo &PI) const;
-
-
-    //===------------------------------------------------------------------===//
-    // Static methods used to get instances of various source languages.
-    //
-
-    /// get - This method returns a source-language instance for the specified
-    /// Dwarf 3 language identifier.  If the language is unknown, an object is
-    /// returned that can support some minimal operations, but is not terribly
-    /// bright.
-    static const SourceLanguage &get(unsigned ID);
-
-    /// get*Instance() - These methods return specific instances of languages.
-    ///
-    static const SourceLanguage &getCFamilyInstance();
-    static const SourceLanguage &getCPlusPlusInstance();
-    static const SourceLanguage &getUnknownLanguageInstance();
-  };
-}
-
-#endif
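
The get(unsigned ID) dispatch is keyed on DWARF language codes; a sketch of
what it amounts to (codes per the DWARF 3 spec; the actual body is not part
of this patch):

    const SourceLanguage &SourceLanguage::get(unsigned ID) {
      switch (ID) {
      case 1: case 2: case 12:     // DW_LANG_C89, DW_LANG_C, DW_LANG_C99
        return getCFamilyInstance();
      case 4:                      // DW_LANG_C_plus_plus
        return getCPlusPlusInstance();
      default:                     // minimal, "not terribly bright" fallback
        return getUnknownLanguageInstance();
      }
    }
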
diff --git a/libclamav/c++/llvm/include/llvm/ExecutionEngine/ExecutionEngine.h b/libclamav/c++/llvm/include/llvm/ExecutionEngine/ExecutionEngine.h
index 6a3a914..d2c547d 100644
--- a/libclamav/c++/llvm/include/llvm/ExecutionEngine/ExecutionEngine.h
+++ b/libclamav/c++/llvm/include/llvm/ExecutionEngine/ExecutionEngine.h
@@ -19,6 +19,8 @@
 #include <map>
 #include <string>
 #include "llvm/ADT/SmallVector.h"
+#include "llvm/ADT/ValueMap.h"
+#include "llvm/Support/ValueHandle.h"
 #include "llvm/System/Mutex.h"
 #include "llvm/Target/TargetMachine.h"
 
@@ -26,6 +28,7 @@ namespace llvm {
 
 struct GenericValue;
 class Constant;
+class ExecutionEngine;
 class Function;
 class GlobalVariable;
 class GlobalValue;
@@ -37,13 +40,26 @@ class ModuleProvider;
 class MutexGuard;
 class TargetData;
 class Type;
-template<typename> class AssertingVH;
 
 class ExecutionEngineState {
+public:
+  struct AddressMapConfig : public ValueMapConfig<const GlobalValue*> {
+    typedef ExecutionEngineState *ExtraData;
+    static sys::Mutex *getMutex(ExecutionEngineState *EES);
+    static void onDelete(ExecutionEngineState *EES, const GlobalValue *Old);
+    static void onRAUW(ExecutionEngineState *, const GlobalValue *,
+                       const GlobalValue *);
+  };
+
+  typedef ValueMap<const GlobalValue *, void *, AddressMapConfig>
+      GlobalAddressMapTy;
+
 private:
+  ExecutionEngine &EE;
+
   /// GlobalAddressMap - A mapping between LLVM global values and their
   /// actualized version...
-  std::map<AssertingVH<const GlobalValue>, void *> GlobalAddressMap;
+  GlobalAddressMapTy GlobalAddressMap;
 
   /// GlobalAddressReverseMap - This is the reverse mapping of GlobalAddressMap,
   /// used to convert raw addresses into the LLVM global value that is emitted
@@ -52,7 +68,9 @@ private:
   std::map<void *, AssertingVH<const GlobalValue> > GlobalAddressReverseMap;
 
 public:
-  std::map<AssertingVH<const GlobalValue>, void *> &
+  ExecutionEngineState(ExecutionEngine &EE);
+
+  GlobalAddressMapTy &
   getGlobalAddressMap(const MutexGuard &) {
     return GlobalAddressMap;
   }
@@ -61,16 +79,18 @@ public:
   getGlobalAddressReverseMap(const MutexGuard &) {
     return GlobalAddressReverseMap;
   }
+
+  // Returns the address ToUnmap was mapped to.
+  void *RemoveMapping(const MutexGuard &, const GlobalValue *ToUnmap);
 };
 
 
 class ExecutionEngine {
   const TargetData *TD;
-  ExecutionEngineState state;
-  bool LazyCompilationDisabled;
+  ExecutionEngineState EEState;
+  bool CompilingLazily;
   bool GVCompilationDisabled;
   bool SymbolSearchingDisabled;
-  bool DlsymStubsEnabled;
 
   friend class EngineBuilder;  // To allow access to JITCtor and InterpCtor.
 
@@ -93,7 +113,8 @@ protected:
                                      std::string *ErrorStr,
                                      JITMemoryManager *JMM,
                                      CodeGenOpt::Level OptLevel,
-                                     bool GVsWithCode);
+                                     bool GVsWithCode,
+                                     CodeModel::Model CMM);
   static ExecutionEngine *(*InterpCtor)(ModuleProvider *MP,
                                         std::string *ErrorStr);
 
@@ -153,7 +174,9 @@ public:
                                     JITMemoryManager *JMM = 0,
                                     CodeGenOpt::Level OptLevel =
                                       CodeGenOpt::Default,
-                                    bool GVsWithCode = true);
+                                    bool GVsWithCode = true,
+                                    CodeModel::Model CMM =
+                                      CodeModel::Default);
 
   /// addModuleProvider - Add a ModuleProvider to the list of modules that we
   /// can JIT from.  Note that this takes ownership of the ModuleProvider: when
@@ -210,8 +233,8 @@ public:
   /// at the specified location.  This is used internally as functions are JIT'd
   /// and as global variables are laid out in memory.  It can and should also be
   /// used by clients of the EE that want to have an LLVM global overlay
-  /// existing data in memory.  After adding a mapping for GV, you must not
-  /// destroy it until you've removed the mapping.
+  /// existing data in memory.  Mappings are automatically removed when their
+  /// GlobalValue is destroyed.
   void addGlobalMapping(const GlobalValue *GV, void *Addr);
   
   /// clearAllGlobalMappings - Clear all global mappings and start over again
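
A quick sketch of the relaxed lifetime rule above (EE is an ExecutionEngine*
and GV a GlobalVariable*, both assumed in scope):

    static char Storage[4];
    EE->addGlobalMapping(GV, Storage);  // overlay GV onto existing memory
    GV->eraseFromParent();              // the mapping is now dropped
                                        // automatically; no manual
                                        // updateGlobalMapping(GV, 0) needed
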
@@ -235,29 +258,28 @@ public:
   void *getPointerToGlobalIfAvailable(const GlobalValue *GV);
 
   /// getPointerToGlobal - This returns the address of the specified global
-  /// value.  This may involve code generation if it's a function.  After
-  /// getting a pointer to GV, it and all globals it transitively refers to have
-  /// been passed to addGlobalMapping.  You must clear the mapping for each
-  /// referred-to global before destroying it.  If a referred-to global RTG is a
-  /// function and this ExecutionEngine is a JIT compiler, calling
-  /// updateGlobalMapping(RTG, 0) will leak the function's machine code, so you
-  /// should call freeMachineCodeForFunction(RTG) instead.  Note that
-  /// optimizations can move and delete non-external GlobalValues without
-  /// notifying the ExecutionEngine.
+  /// value.  This may involve code generation if it's a function.
   ///
   void *getPointerToGlobal(const GlobalValue *GV);
 
   /// getPointerToFunction - The different EE's represent function bodies in
   /// different ways.  They should each implement this to say what a function
-  /// pointer should look like.  See getPointerToGlobal for the requirements on
-  /// destroying F and any GlobalValues it refers to.
+  /// pointer should look like.  When F is destroyed, the ExecutionEngine will
+  /// remove its global mapping and free any machine code.  Be sure no threads
+  /// are running inside F when that happens.
   ///
   virtual void *getPointerToFunction(Function *F) = 0;
 
+  /// getPointerToBasicBlock - The different EE's represent basic blocks in
+  /// different ways.  Return the representation for a blockaddress of the
+  /// specified block.
+  ///
+  virtual void *getPointerToBasicBlock(BasicBlock *BB) = 0;
+  
   /// getPointerToFunctionOrStub - If the specified function has been
   /// code-gen'd, return a pointer to the function.  If not, compile it, or use
-  /// a stub to implement lazy compilation if available.  See getPointerToGlobal
-  /// for the requirements on destroying F and any GlobalValues it refers to.
+  /// a stub to implement lazy compilation if available.  See
+  /// getPointerToFunction for the requirements on destroying F.
   ///
   virtual void *getPointerToFunctionOrStub(Function *F) {
     // Default implementation, just codegen the function.
@@ -293,8 +315,7 @@ public:
 
   /// getOrEmitGlobalVariable - Return the address of the specified global
   /// variable, possibly emitting it to memory if needed.  This is used by the
-  /// Emitter.  See getPointerToGlobal for the requirements on destroying GV and
-  /// any GlobalValues it refers to.
+  /// Emitter.
   virtual void *getOrEmitGlobalVariable(const GlobalVariable *GV) {
     return getPointerToGlobal((GlobalValue*)GV);
   }
@@ -306,13 +327,29 @@ public:
   virtual void RegisterJITEventListener(JITEventListener *) {}
   virtual void UnregisterJITEventListener(JITEventListener *) {}
 
-  /// DisableLazyCompilation - If called, the JIT will abort if lazy compilation
-  /// is ever attempted.
+  /// DisableLazyCompilation - When lazy compilation is off (the default), the
+  /// JIT will eagerly compile every function reachable from the argument to
+  /// getPointerToFunction.  If lazy compilation is turned on, the JIT will only
+  /// compile the one function and emit stubs to compile the rest when they're
+  /// first called.  If lazy compilation is turned off again while some lazy
+  /// stubs are still around, and one of those stubs is called, the program will
+  /// abort.
+  ///
+  /// In order to safely compile lazily in a threaded program, the user must
+  /// ensure that 1) only one thread at a time can call any particular lazy
+  /// stub, and 2) any thread modifying LLVM IR must hold the JIT's lock
+  /// (ExecutionEngine::lock) or otherwise ensure that no other thread calls a
+  /// lazy stub.  See http://llvm.org/PR5184 for details.
   void DisableLazyCompilation(bool Disabled = true) {
-    LazyCompilationDisabled = Disabled;
+    CompilingLazily = !Disabled;
   }
+  bool isCompilingLazily() const {
+    return CompilingLazily;
+  }
+  // Deprecated in favor of isCompilingLazily (to reduce double-negatives).
+  // Remove this in LLVM 2.8.
   bool isLazyCompilationDisabled() const {
-    return LazyCompilationDisabled;
+    return !CompilingLazily;
   }
 
   /// DisableGVCompilation - If called, the JIT will abort if it's asked to
@@ -334,15 +371,7 @@ public:
   bool isSymbolSearchingDisabled() const {
     return SymbolSearchingDisabled;
   }
-  
-  /// EnableDlsymStubs - 
-  void EnableDlsymStubs(bool Enabled = true) {
-    DlsymStubsEnabled = Enabled;
-  }
-  bool areDlsymStubsEnabled() const {
-    return DlsymStubsEnabled;
-  }
-  
+
   /// InstallLazyFunctionCreator - If an unknown function is needed, the
   /// specified function pointer is invoked to create it.  If it returns null,
   /// the JIT will abort.
@@ -399,6 +428,7 @@ class EngineBuilder {
   CodeGenOpt::Level OptLevel;
   JITMemoryManager *JMM;
   bool AllocateGVsWithCode;
+  CodeModel::Model CMModel;
 
   /// InitEngine - Does the common initialization of default options.
   ///
@@ -408,6 +438,7 @@ class EngineBuilder {
     OptLevel = CodeGenOpt::Default;
     JMM = NULL;
     AllocateGVsWithCode = false;
+    CMModel = CodeModel::Default;
   }
 
  public:
@@ -452,6 +483,13 @@ class EngineBuilder {
     return *this;
   }
 
+  /// setCodeModel - Set the CodeModel that the ExecutionEngine target
+  /// data is using. Defaults to target specific default "CodeModel::Default".
+  EngineBuilder &setCodeModel(CodeModel::Model M) {
+    CMModel = M;
+    return *this;
+  }
+
   /// setAllocateGVsWithCode - Sets whether global values should be allocated
   /// into the same buffer as code.  For most applications this should be set
   /// to false.  Allocating globals with code breaks freeMachineCodeForFunction
@@ -465,7 +503,6 @@ class EngineBuilder {
   }
 
   ExecutionEngine *create();
-
 };
 
 } // End llvm namespace
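
Typical client code against the extended builder (sketch; MP is an existing
ModuleProvider*, error handling omitted):

    ExecutionEngine *EE = EngineBuilder(MP)
        .setOptLevel(CodeGenOpt::Default)
        .setCodeModel(CodeModel::Small)   // the new hook from this patch
        .create();
    EE->DisableLazyCompilation(false);    // i.e. compile lazily
    assert(EE->isCompilingLazily());      // preferred over the deprecated
                                          // isLazyCompilationDisabled()
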
diff --git a/libclamav/c++/llvm/include/llvm/ExecutionEngine/GenericValue.h b/libclamav/c++/llvm/include/llvm/ExecutionEngine/GenericValue.h
index a2fed98..1301320 100644
--- a/libclamav/c++/llvm/include/llvm/ExecutionEngine/GenericValue.h
+++ b/libclamav/c++/llvm/include/llvm/ExecutionEngine/GenericValue.h
@@ -16,7 +16,7 @@
 #define GENERIC_VALUE_H
 
 #include "llvm/ADT/APInt.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
 
diff --git a/libclamav/c++/llvm/include/llvm/ExecutionEngine/JITEventListener.h b/libclamav/c++/llvm/include/llvm/ExecutionEngine/JITEventListener.h
index 8d3a1d7..dcc66b2 100644
--- a/libclamav/c++/llvm/include/llvm/ExecutionEngine/JITEventListener.h
+++ b/libclamav/c++/llvm/include/llvm/ExecutionEngine/JITEventListener.h
@@ -15,7 +15,7 @@
 #ifndef LLVM_EXECUTION_ENGINE_JIT_EVENTLISTENER_H
 #define LLVM_EXECUTION_ENGINE_JIT_EVENTLISTENER_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/Support/DebugLoc.h"
 
 #include <vector>
@@ -63,12 +63,14 @@ public:
   /// NotifyFreeingMachineCode - This is called inside of
   /// freeMachineCodeForFunction(), after the global mapping is removed, but
   /// before the machine code is returned to the allocator.  OldPtr is the
-  /// address of the machine code.
-  virtual void NotifyFreeingMachineCode(const Function &F, void *OldPtr) {}
+  /// address of the machine code and will be the same as the Code parameter to
+  /// a previous NotifyFunctionEmitted call.  The Function passed to
+  /// NotifyFunctionEmitted may have been destroyed by the time of the matching
+  /// NotifyFreeingMachineCode call.
+  virtual void NotifyFreeingMachineCode(void *OldPtr) {}
 };
 
-// These return NULL if support isn't available.
-JITEventListener *createMacOSJITEventListener();
+// This returns NULL if support isn't available.
 JITEventListener *createOProfileJITEventListener();
 
 } // end namespace llvm.
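
A listener adapted to the new callback shape might track emitted code by its
start address, since the Function may already be gone at free time (sketch;
<map> include and the NotifyFunctionEmitted signature from this header are
assumed):

    class SizeTrackingListener : public JITEventListener {
      std::map<void*, size_t> Sizes;
    public:
      virtual void NotifyFunctionEmitted(const Function &F, void *Code,
                                         size_t Size,
                                         const EmittedFunctionDetails &) {
        Sizes[Code] = Size;
      }
      virtual void NotifyFreeingMachineCode(void *OldPtr) {
        Sizes.erase(OldPtr);  // OldPtr == the Code pointer recorded above
      }
    };
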
diff --git a/libclamav/c++/llvm/include/llvm/ExecutionEngine/JITMemoryManager.h b/libclamav/c++/llvm/include/llvm/ExecutionEngine/JITMemoryManager.h
index 21dee55..fd51920 100644
--- a/libclamav/c++/llvm/include/llvm/ExecutionEngine/JITMemoryManager.h
+++ b/libclamav/c++/llvm/include/llvm/ExecutionEngine/JITMemoryManager.h
@@ -14,7 +14,7 @@
 #ifndef LLVM_EXECUTION_ENGINE_JIT_MEMMANAGER_H
 #define LLVM_EXECUTION_ENGINE_JIT_MEMMANAGER_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <string>
 
 namespace llvm {
@@ -71,17 +71,6 @@ public:
   /// return a pointer to its base.
   virtual uint8_t *getGOTBase() const = 0;
   
-  /// SetDlsymTable - If the JIT must be able to relocate stubs after they have
-  /// been emitted, potentially because they are being copied to a process
-  /// where external symbols live at different addresses than in the JITing
-  ///  process, allocate a table with sufficient information to do so.
-  virtual void SetDlsymTable(void *ptr) = 0;
-  
-  /// getDlsymTable - If this is managing a table of entries so that stubs to
-  /// external symbols can be later relocated, this method should return a
-  /// pointer to it.
-  virtual void *getDlsymTable() const = 0;
-  
   /// NeedsExactSize - Return true if the memory manager needs to know the
   /// size of the objects to be emitted.
   bool NeedsExactSize() const {
@@ -132,9 +121,11 @@ public:
   ///
   virtual uint8_t *allocateGlobal(uintptr_t Size, unsigned Alignment) = 0;
 
-  /// deallocateMemForFunction - Free JIT memory for the specified function.
-  /// This is never called when the JIT is currently emitting a function.
-  virtual void deallocateMemForFunction(const Function *F) = 0;
+  /// deallocateFunctionBody - Free the specified function body.  The argument
+  /// must be the return value from a call to startFunctionBody() that hasn't
+  /// been deallocated yet.  This is never called when the JIT is currently
+  /// emitting a function.
+  virtual void deallocateFunctionBody(void *Body) = 0;
   
   /// startExceptionTable - When we finished JITing the function, if exception
   /// handling is set, we emit the exception table.
@@ -146,10 +137,16 @@ public:
   virtual void endExceptionTable(const Function *F, uint8_t *TableStart,
                                  uint8_t *TableEnd, uint8_t* FrameRegister) = 0;
 
+  /// deallocateExceptionTable - Free the specified exception table's memory.
+  /// The argument must be the return value from a call to startExceptionTable()
+  /// that hasn't been deallocated yet.  This is never called when the JIT is
+  /// currently emitting an exception table.
+  virtual void deallocateExceptionTable(void *ET) = 0;
+
   /// CheckInvariants - For testing only.  Return true if all internal
   /// invariants are preserved, or return false and set ErrorStr to a helpful
   /// error message.
-  virtual bool CheckInvariants(std::string &ErrorStr) {
+  virtual bool CheckInvariants(std::string &) {
     return true;
   }
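
The pairing contract for the two new deallocators, in sketch form (MM is a
JITMemoryManager*; startFunctionBody/endFunctionBody are the existing
emission entry points on this interface, and CodeSize is illustrative):

    uintptr_t ActualSize = 0;
    uint8_t *Body = MM->startFunctionBody(F, ActualSize);
    // ... emit at most ActualSize bytes of machine code into Body ...
    MM->endFunctionBody(F, Body, Body + CodeSize);
    // ... later, when the function is discarded:
    MM->deallocateFunctionBody(Body);   // must be exactly the pointer
                                        // startFunctionBody() returned
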
 
diff --git a/libclamav/c++/llvm/include/llvm/Function.h b/libclamav/c++/llvm/include/llvm/Function.h
index 088c999..64be545 100644
--- a/libclamav/c++/llvm/include/llvm/Function.h
+++ b/libclamav/c++/llvm/include/llvm/Function.h
@@ -23,6 +23,7 @@
 #include "llvm/BasicBlock.h"
 #include "llvm/Argument.h"
 #include "llvm/Attributes.h"
+#include "llvm/Support/Compiler.h"
 
 namespace llvm {
 
@@ -148,7 +149,7 @@ public:
   /// The particular intrinsic functions which correspond to this value are
   /// defined in llvm/Intrinsics.h.
   ///
-  unsigned getIntrinsicID() const;
+  unsigned getIntrinsicID() const ATTRIBUTE_READONLY;
   bool isIntrinsic() const { return getIntrinsicID() != 0; }
 
   /// getCallingConv()/setCallingConv(CC) - These methods get and set the
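
ATTRIBUTE_READONLY comes from the newly included llvm/Support/Compiler.h; on
GCC it is believed to expand to the 'pure' attribute, which is what lets the
optimizer merge repeated getIntrinsicID() calls (hedged sketch of the macro;
see Compiler.h for the authoritative definition):

    #if defined(__GNUC__)
    #define ATTRIBUTE_READONLY __attribute__((__pure__))
    #else
    #define ATTRIBUTE_READONLY
    #endif
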
diff --git a/libclamav/c++/llvm/include/llvm/GlobalValue.h b/libclamav/c++/llvm/include/llvm/GlobalValue.h
index 7b0de34..b8d219c 100644
--- a/libclamav/c++/llvm/include/llvm/GlobalValue.h
+++ b/libclamav/c++/llvm/include/llvm/GlobalValue.h
@@ -90,7 +90,7 @@ public:
   
   bool hasSection() const { return !Section.empty(); }
   const std::string &getSection() const { return Section; }
-  void setSection(const StringRef &S) { Section = S; }
+  void setSection(StringRef S) { Section = S; }
   
   /// If the usage is empty (except transitively dead constants), then this
   /// global value can be safely deleted since the destructor will
diff --git a/libclamav/c++/llvm/include/llvm/GlobalVariable.h b/libclamav/c++/llvm/include/llvm/GlobalVariable.h
index 56b2b9d..68bd1b3 100644
--- a/libclamav/c++/llvm/include/llvm/GlobalVariable.h
+++ b/libclamav/c++/llvm/include/llvm/GlobalVariable.h
@@ -28,7 +28,6 @@ namespace llvm {
 
 class Module;
 class Constant;
-class LLVMContext;
 template<typename ValueSubClass, typename ItemParentClass>
   class SymbolTableListTraits;
 
@@ -50,8 +49,7 @@ public:
   }
   /// GlobalVariable ctor - If a parent module is specified, the global is
   /// automatically inserted into the end of the specified module's global list.
-  GlobalVariable(LLVMContext &Context, const Type *Ty, bool isConstant,
-                 LinkageTypes Linkage,
+  GlobalVariable(const Type *Ty, bool isConstant, LinkageTypes Linkage,
                  Constant *Initializer = 0, const Twine &Name = "",
                  bool ThreadLocal = false, unsigned AddressSpace = 0);
   /// GlobalVariable ctor - This creates a global and inserts it before the
@@ -101,18 +99,10 @@ public:
     assert(hasInitializer() && "GV doesn't have initializer!");
     return static_cast<Constant*>(Op<0>().get());
   }
-  inline void setInitializer(Constant *CPV) {
-    if (CPV == 0) {
-      if (hasInitializer()) {
-        Op<0>().set(0);
-        NumOperands = 0;
-      }
-    } else {
-      if (!hasInitializer())
-        NumOperands = 1;
-      Op<0>().set(CPV);
-    }
-  }
+  /// setInitializer - Sets the initializer for this global variable, removing
+  /// any existing initializer if InitVal==NULL.  If this GV has type T*, the
+  /// initializer must have type T.
+  void setInitializer(Constant *InitVal);
 
   /// If the value is a global constant, its value is immutable throughout the
   /// runtime execution of the program.  Assigning a value into the constant
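
Usage sketch for the revised constructor (no more LLVMContext parameter) and
the out-of-lined setter; Ctx is an LLVMContext assumed in scope:

    GlobalVariable *GV =
        new GlobalVariable(Type::getInt32Ty(Ctx), /*isConstant=*/false,
                           GlobalValue::InternalLinkage,
                           ConstantInt::get(Type::getInt32Ty(Ctx), 42), "g");
    GV->setInitializer(0);   // removes the initializer entirely; a non-null
                             // value must have type i32 here (T for a T* GV)
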
diff --git a/libclamav/c++/llvm/include/llvm/InlineAsm.h b/libclamav/c++/llvm/include/llvm/InlineAsm.h
index e0d992b..482e53e 100644
--- a/libclamav/c++/llvm/include/llvm/InlineAsm.h
+++ b/libclamav/c++/llvm/include/llvm/InlineAsm.h
@@ -31,18 +31,22 @@ class InlineAsm : public Value {
 
   std::string AsmString, Constraints;
   bool HasSideEffects;
+  bool IsAlignStack;
   
-  InlineAsm(const FunctionType *Ty, const StringRef &AsmString,
-            const StringRef &Constraints, bool hasSideEffects);
+  InlineAsm(const FunctionType *Ty, StringRef AsmString,
+            StringRef Constraints, bool hasSideEffects,
+            bool isAlignStack = false);
   virtual ~InlineAsm();
 public:
 
   /// InlineAsm::get - Return the specified uniqued inline asm string.
   ///
-  static InlineAsm *get(const FunctionType *Ty, const StringRef &AsmString,
-                        const StringRef &Constraints, bool hasSideEffects);
+  static InlineAsm *get(const FunctionType *Ty, StringRef AsmString,
+                        StringRef Constraints, bool hasSideEffects,
+                        bool isAlignStack = false);
   
   bool hasSideEffects() const { return HasSideEffects; }
+  bool isAlignStack() const { return IsAlignStack; }
   
   /// getType - InlineAsm's are always pointers.
   ///
@@ -61,7 +65,7 @@ public:
   /// the specified constraint string is legal for the type.  This returns true
   /// if legal, false if not.
   ///
-  static bool Verify(const FunctionType *Ty, const StringRef &Constraints);
+  static bool Verify(const FunctionType *Ty, StringRef Constraints);
 
   // Constraint String Parsing 
   enum ConstraintPrefix {
@@ -106,7 +110,7 @@ public:
     /// Parse - Analyze the specified string (e.g. "=*&{eax}") and fill in the
     /// fields in this structure.  If the constraint string is not understood,
     /// return true, otherwise return false.
-    bool Parse(const StringRef &Str, 
+    bool Parse(StringRef Str, 
                std::vector<InlineAsm::ConstraintInfo> &ConstraintsSoFar);
   };
   
@@ -114,7 +118,7 @@ public:
   /// constraints and their prefixes.  If this returns an empty vector, and if
   /// the constraint string itself isn't empty, there was an error parsing.
   static std::vector<ConstraintInfo> 
-    ParseConstraints(const StringRef &ConstraintString);
+    ParseConstraints(StringRef ConstraintString);
   
   /// ParseConstraints - Parse the constraints of this inlineasm object, 
   /// returning them the same way that ParseConstraints(str) does.
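
Sketch of the extended factory call (FTy is a FunctionType* for a void()
signature, assumed in scope):

    InlineAsm *IA =
        InlineAsm::get(FTy, "cpuid", "~{eax},~{ebx},~{ecx},~{edx}",
                       /*hasSideEffects=*/true, /*isAlignStack=*/false);
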
diff --git a/libclamav/c++/llvm/include/llvm/InstrTypes.h b/libclamav/c++/llvm/include/llvm/InstrTypes.h
index cc923de..bc89969 100644
--- a/libclamav/c++/llvm/include/llvm/InstrTypes.h
+++ b/libclamav/c++/llvm/include/llvm/InstrTypes.h
@@ -51,10 +51,9 @@ protected:
   virtual BasicBlock *getSuccessorV(unsigned idx) const = 0;
   virtual unsigned getNumSuccessorsV() const = 0;
   virtual void setSuccessorV(unsigned idx, BasicBlock *B) = 0;
+  virtual TerminatorInst *clone_impl() const = 0;
 public:
 
-  virtual TerminatorInst *clone() const = 0;
-
   /// getNumSuccessors - Return the number of successors that this terminator
   /// has.
   unsigned getNumSuccessors() const {
@@ -116,9 +115,7 @@ public:
   // Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const UnaryInstruction *) { return true; }
   static inline bool classof(const Instruction *I) {
-    return I->getOpcode() == Instruction::Malloc ||
-           I->getOpcode() == Instruction::Alloca ||
-           I->getOpcode() == Instruction::Free ||
+    return I->getOpcode() == Instruction::Alloca ||
            I->getOpcode() == Instruction::Load ||
            I->getOpcode() == Instruction::VAArg ||
            I->getOpcode() == Instruction::ExtractValue ||
@@ -147,6 +144,7 @@ protected:
                  const Twine &Name, Instruction *InsertBefore);
   BinaryOperator(BinaryOps iType, Value *S1, Value *S2, const Type *Ty,
                  const Twine &Name, BasicBlock *InsertAtEnd);
+  virtual BinaryOperator *clone_impl() const;
 public:
   // allocate space for exactly two operands
   void *operator new(size_t s) {
@@ -216,6 +214,27 @@ public:
     return BO;
   }
 
+  /// CreateNUWAdd - Create an Add operator with the NUW flag set.
+  ///
+  static BinaryOperator *CreateNUWAdd(Value *V1, Value *V2,
+                                      const Twine &Name = "") {
+    BinaryOperator *BO = CreateAdd(V1, V2, Name);
+    BO->setHasNoUnsignedWrap(true);
+    return BO;
+  }
+  static BinaryOperator *CreateNUWAdd(Value *V1, Value *V2,
+                                      const Twine &Name, BasicBlock *BB) {
+    BinaryOperator *BO = CreateAdd(V1, V2, Name, BB);
+    BO->setHasNoUnsignedWrap(true);
+    return BO;
+  }
+  static BinaryOperator *CreateNUWAdd(Value *V1, Value *V2,
+                                      const Twine &Name, Instruction *I) {
+    BinaryOperator *BO = CreateAdd(V1, V2, Name, I);
+    BO->setHasNoUnsignedWrap(true);
+    return BO;
+  }
+
   /// CreateNSWSub - Create a Sub operator with the NSW flag set.
   ///
   static BinaryOperator *CreateNSWSub(Value *V1, Value *V2,
@@ -237,6 +256,27 @@ public:
     return BO;
   }
 
+  /// CreateNUWSub - Create a Sub operator with the NUW flag set.
+  ///
+  static BinaryOperator *CreateNUWSub(Value *V1, Value *V2,
+                                      const Twine &Name = "") {
+    BinaryOperator *BO = CreateSub(V1, V2, Name);
+    BO->setHasNoUnsignedWrap(true);
+    return BO;
+  }
+  static BinaryOperator *CreateNUWSub(Value *V1, Value *V2,
+                                      const Twine &Name, BasicBlock *BB) {
+    BinaryOperator *BO = CreateSub(V1, V2, Name, BB);
+    BO->setHasNoUnsignedWrap(true);
+    return BO;
+  }
+  static BinaryOperator *CreateNUWSub(Value *V1, Value *V2,
+                                      const Twine &Name, Instruction *I) {
+    BinaryOperator *BO = CreateSub(V1, V2, Name, I);
+    BO->setHasNoUnsignedWrap(true);
+    return BO;
+  }
+
   /// CreateExactSDiv - Create an SDiv operator with the exact flag set.
   ///
   static BinaryOperator *CreateExactSDiv(Value *V1, Value *V2,
@@ -299,8 +339,6 @@ public:
     return static_cast<BinaryOps>(Instruction::getOpcode());
   }
 
-  virtual BinaryOperator *clone() const;
-
   /// swapOperands - Exchange the two operands to this instruction.
   /// This instruction is safe to use on any binary instruction and
   /// does not modify the semantics of the instruction.  If the instruction
@@ -662,7 +700,7 @@ public:
   /// @brief Create a CmpInst
   static CmpInst *Create(OtherOps Op, unsigned short predicate, Value *S1,
                          Value *S2, const Twine &Name, BasicBlock *InsertAtEnd);
-
+  
   /// @brief Get the opcode casted to the right type
   OtherOps getOpcode() const {
     return static_cast<OtherOps>(Instruction::getOpcode());
@@ -674,6 +712,18 @@ public:
   /// @brief Set the predicate for this instruction to the specified value.
   void setPredicate(Predicate P) { SubclassData = P; }
 
+  static bool isFPPredicate(Predicate P) {
+    return P >= FIRST_FCMP_PREDICATE && P <= LAST_FCMP_PREDICATE;
+  }
+  
+  static bool isIntPredicate(Predicate P) {
+    return P >= FIRST_ICMP_PREDICATE && P <= LAST_ICMP_PREDICATE;
+  }
+  
+  bool isFPPredicate() const { return isFPPredicate(getPredicate()); }
+  bool isIntPredicate() const { return isIntPredicate(getPredicate()); }
+  
+  
   /// For example, EQ -> NE, UGT -> ULE, SLT -> SGE,
   ///              OEQ -> UNE, UGT -> OLE, OLT -> UGE, etc.
   /// @returns the inverse predicate for the instruction's current predicate.
@@ -719,6 +769,30 @@ public:
   /// @brief Determine if this is an equals/not equals predicate.
   bool isEquality();
 
+  /// @returns true if the comparison is signed, false otherwise.
+  /// @brief Determine if this instruction is using a signed comparison.
+  bool isSigned() const {
+    return isSigned(getPredicate());
+  }
+
+  /// @returns true if the comparison is unsigned, false otherwise.
+  /// @brief Determine if this instruction is using an unsigned comparison.
+  bool isUnsigned() const {
+    return isUnsigned(getPredicate());
+  }
+
+  /// This is just a convenience.
+  /// @brief Determine if this is true when both operands are the same.
+  bool isTrueWhenEqual() const {
+    return isTrueWhenEqual(getPredicate());
+  }
+
+  /// This is just a convenience.
+  /// @brief Determine if this is false when both operands are the same.
+  bool isFalseWhenEqual() const {
+    return isFalseWhenEqual(getPredicate());
+  }
+
   /// @returns true if the predicate is unsigned, false otherwise.
   /// @brief Determine if the predicate is an unsigned operation.
   static bool isUnsigned(unsigned short predicate);
@@ -733,6 +807,12 @@ public:
   /// @brief Determine if the predicate is an unordered operation.
   static bool isUnordered(unsigned short predicate);
 
+  /// Determine if the predicate is true when comparing a value with itself.
+  static bool isTrueWhenEqual(unsigned short predicate);
+
+  /// Determine if the predicate is false when comparing a value with itself.
+  static bool isFalseWhenEqual(unsigned short predicate);
+
   /// @brief Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const CmpInst *) { return true; }
   static inline bool classof(const Instruction *I) {
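
Usage sketch for the new NUW factories and predicate queries (A and B are
i32 Values assumed in scope; the plain ICmpInst constructor is assumed from
Instructions.h):

    BinaryOperator *Sum = BinaryOperator::CreateNUWAdd(A, B, "sum");
    // Sum carries the no-unsigned-wrap flag, as CreateNSWAdd does for NSW.

    ICmpInst *LT = new ICmpInst(ICmpInst::ICMP_ULT, A, B, "lt");
    assert(LT->isUnsigned() && !LT->isTrueWhenEqual());
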
diff --git a/libclamav/c++/llvm/include/llvm/Instruction.def b/libclamav/c++/llvm/include/llvm/Instruction.def
index e603c12..205f303 100644
--- a/libclamav/c++/llvm/include/llvm/Instruction.def
+++ b/libclamav/c++/llvm/include/llvm/Instruction.def
@@ -97,80 +97,79 @@
 HANDLE_TERM_INST  ( 1, Ret        , ReturnInst)
 HANDLE_TERM_INST  ( 2, Br         , BranchInst)
 HANDLE_TERM_INST  ( 3, Switch     , SwitchInst)
-HANDLE_TERM_INST  ( 4, Invoke     , InvokeInst)
-HANDLE_TERM_INST  ( 5, Unwind     , UnwindInst)
-HANDLE_TERM_INST  ( 6, Unreachable, UnreachableInst)
-  LAST_TERM_INST  ( 6)
+HANDLE_TERM_INST  ( 4, IndirectBr , IndirectBrInst)
+HANDLE_TERM_INST  ( 5, Invoke     , InvokeInst)
+HANDLE_TERM_INST  ( 6, Unwind     , UnwindInst)
+HANDLE_TERM_INST  ( 7, Unreachable, UnreachableInst)
+  LAST_TERM_INST  ( 7)
 
 // Standard binary operators...
- FIRST_BINARY_INST( 7)
-HANDLE_BINARY_INST( 7, Add  , BinaryOperator)
-HANDLE_BINARY_INST( 8, FAdd  , BinaryOperator)
-HANDLE_BINARY_INST( 9, Sub  , BinaryOperator)
-HANDLE_BINARY_INST(10, FSub  , BinaryOperator)
-HANDLE_BINARY_INST(11, Mul  , BinaryOperator)
-HANDLE_BINARY_INST(12, FMul  , BinaryOperator)
-HANDLE_BINARY_INST(13, UDiv , BinaryOperator)
-HANDLE_BINARY_INST(14, SDiv , BinaryOperator)
-HANDLE_BINARY_INST(15, FDiv , BinaryOperator)
-HANDLE_BINARY_INST(16, URem , BinaryOperator)
-HANDLE_BINARY_INST(17, SRem , BinaryOperator)
-HANDLE_BINARY_INST(18, FRem , BinaryOperator)
+ FIRST_BINARY_INST( 8)
+HANDLE_BINARY_INST( 8, Add  , BinaryOperator)
+HANDLE_BINARY_INST( 9, FAdd  , BinaryOperator)
+HANDLE_BINARY_INST(10, Sub  , BinaryOperator)
+HANDLE_BINARY_INST(11, FSub  , BinaryOperator)
+HANDLE_BINARY_INST(12, Mul  , BinaryOperator)
+HANDLE_BINARY_INST(13, FMul  , BinaryOperator)
+HANDLE_BINARY_INST(14, UDiv , BinaryOperator)
+HANDLE_BINARY_INST(15, SDiv , BinaryOperator)
+HANDLE_BINARY_INST(16, FDiv , BinaryOperator)
+HANDLE_BINARY_INST(17, URem , BinaryOperator)
+HANDLE_BINARY_INST(18, SRem , BinaryOperator)
+HANDLE_BINARY_INST(19, FRem , BinaryOperator)
 
 // Logical operators (integer operands)
-HANDLE_BINARY_INST(19, Shl  , BinaryOperator) // Shift left  (logical)
-HANDLE_BINARY_INST(20, LShr , BinaryOperator) // Shift right (logical)
-HANDLE_BINARY_INST(21, AShr , BinaryOperator) // Shift right (arithmetic)
-HANDLE_BINARY_INST(22, And  , BinaryOperator)
-HANDLE_BINARY_INST(23, Or   , BinaryOperator)
-HANDLE_BINARY_INST(24, Xor  , BinaryOperator)
-  LAST_BINARY_INST(24)
+HANDLE_BINARY_INST(20, Shl  , BinaryOperator) // Shift left  (logical)
+HANDLE_BINARY_INST(21, LShr , BinaryOperator) // Shift right (logical)
+HANDLE_BINARY_INST(22, AShr , BinaryOperator) // Shift right (arithmetic)
+HANDLE_BINARY_INST(23, And  , BinaryOperator)
+HANDLE_BINARY_INST(24, Or   , BinaryOperator)
+HANDLE_BINARY_INST(25, Xor  , BinaryOperator)
+  LAST_BINARY_INST(25)
 
 // Memory operators...
- FIRST_MEMORY_INST(25)
-HANDLE_MEMORY_INST(25, Malloc, MallocInst)  // Heap management instructions
-HANDLE_MEMORY_INST(26, Free  , FreeInst  )
-HANDLE_MEMORY_INST(27, Alloca, AllocaInst)  // Stack management
-HANDLE_MEMORY_INST(28, Load  , LoadInst  )  // Memory manipulation instrs
-HANDLE_MEMORY_INST(29, Store , StoreInst )
-HANDLE_MEMORY_INST(30, GetElementPtr, GetElementPtrInst)
-  LAST_MEMORY_INST(30)
+ FIRST_MEMORY_INST(26)
+HANDLE_MEMORY_INST(26, Alloca, AllocaInst)  // Stack management
+HANDLE_MEMORY_INST(27, Load  , LoadInst  )  // Memory manipulation instrs
+HANDLE_MEMORY_INST(28, Store , StoreInst )
+HANDLE_MEMORY_INST(29, GetElementPtr, GetElementPtrInst)
+  LAST_MEMORY_INST(29)
 
 // Cast operators ...
 // NOTE: The order matters here because CastInst::isEliminableCastPair 
 // NOTE: (see Instructions.cpp) encodes a table based on this ordering.
- FIRST_CAST_INST(31)
-HANDLE_CAST_INST(31, Trunc   , TruncInst   )  // Truncate integers
-HANDLE_CAST_INST(32, ZExt    , ZExtInst    )  // Zero extend integers
-HANDLE_CAST_INST(33, SExt    , SExtInst    )  // Sign extend integers
-HANDLE_CAST_INST(34, FPToUI  , FPToUIInst  )  // floating point -> UInt
-HANDLE_CAST_INST(35, FPToSI  , FPToSIInst  )  // floating point -> SInt
-HANDLE_CAST_INST(36, UIToFP  , UIToFPInst  )  // UInt -> floating point
-HANDLE_CAST_INST(37, SIToFP  , SIToFPInst  )  // SInt -> floating point
-HANDLE_CAST_INST(38, FPTrunc , FPTruncInst )  // Truncate floating point
-HANDLE_CAST_INST(39, FPExt   , FPExtInst   )  // Extend floating point
-HANDLE_CAST_INST(40, PtrToInt, PtrToIntInst)  // Pointer -> Integer
-HANDLE_CAST_INST(41, IntToPtr, IntToPtrInst)  // Integer -> Pointer
-HANDLE_CAST_INST(42, BitCast , BitCastInst )  // Type cast
-  LAST_CAST_INST(42)
+ FIRST_CAST_INST(30)
+HANDLE_CAST_INST(30, Trunc   , TruncInst   )  // Truncate integers
+HANDLE_CAST_INST(31, ZExt    , ZExtInst    )  // Zero extend integers
+HANDLE_CAST_INST(32, SExt    , SExtInst    )  // Sign extend integers
+HANDLE_CAST_INST(33, FPToUI  , FPToUIInst  )  // floating point -> UInt
+HANDLE_CAST_INST(34, FPToSI  , FPToSIInst  )  // floating point -> SInt
+HANDLE_CAST_INST(35, UIToFP  , UIToFPInst  )  // UInt -> floating point
+HANDLE_CAST_INST(36, SIToFP  , SIToFPInst  )  // SInt -> floating point
+HANDLE_CAST_INST(37, FPTrunc , FPTruncInst )  // Truncate floating point
+HANDLE_CAST_INST(38, FPExt   , FPExtInst   )  // Extend floating point
+HANDLE_CAST_INST(39, PtrToInt, PtrToIntInst)  // Pointer -> Integer
+HANDLE_CAST_INST(40, IntToPtr, IntToPtrInst)  // Integer -> Pointer
+HANDLE_CAST_INST(41, BitCast , BitCastInst )  // Type cast
+  LAST_CAST_INST(41)
 
 // Other operators...
- FIRST_OTHER_INST(43)
-HANDLE_OTHER_INST(43, ICmp   , ICmpInst   )  // Integer comparison instruction
-HANDLE_OTHER_INST(44, FCmp   , FCmpInst   )  // Floating point comparison instr.
-HANDLE_OTHER_INST(45, PHI    , PHINode    )  // PHI node instruction
-HANDLE_OTHER_INST(46, Call   , CallInst   )  // Call a function
-HANDLE_OTHER_INST(47, Select , SelectInst )  // select instruction
-HANDLE_OTHER_INST(48, UserOp1, Instruction)  // May be used internally in a pass
-HANDLE_OTHER_INST(49, UserOp2, Instruction)  // Internal to passes only
-HANDLE_OTHER_INST(50, VAArg  , VAArgInst  )  // vaarg instruction
-HANDLE_OTHER_INST(51, ExtractElement, ExtractElementInst)// extract from vector
-HANDLE_OTHER_INST(52, InsertElement, InsertElementInst)  // insert into vector
-HANDLE_OTHER_INST(53, ShuffleVector, ShuffleVectorInst)  // shuffle two vectors.
-HANDLE_OTHER_INST(54, ExtractValue, ExtractValueInst)// extract from aggregate
-HANDLE_OTHER_INST(55, InsertValue, InsertValueInst)  // insert into aggregate
-
-  LAST_OTHER_INST(55)
+ FIRST_OTHER_INST(42)
+HANDLE_OTHER_INST(42, ICmp   , ICmpInst   )  // Integer comparison instruction
+HANDLE_OTHER_INST(43, FCmp   , FCmpInst   )  // Floating point comparison instr.
+HANDLE_OTHER_INST(44, PHI    , PHINode    )  // PHI node instruction
+HANDLE_OTHER_INST(45, Call   , CallInst   )  // Call a function
+HANDLE_OTHER_INST(46, Select , SelectInst )  // select instruction
+HANDLE_OTHER_INST(47, UserOp1, Instruction)  // May be used internally in a pass
+HANDLE_OTHER_INST(48, UserOp2, Instruction)  // Internal to passes only
+HANDLE_OTHER_INST(49, VAArg  , VAArgInst  )  // vaarg instruction
+HANDLE_OTHER_INST(50, ExtractElement, ExtractElementInst)// extract from vector
+HANDLE_OTHER_INST(51, InsertElement, InsertElementInst)  // insert into vector
+HANDLE_OTHER_INST(52, ShuffleVector, ShuffleVectorInst)  // shuffle two vectors.
+HANDLE_OTHER_INST(53, ExtractValue, ExtractValueInst)// extract from aggregate
+HANDLE_OTHER_INST(54, InsertValue, InsertValueInst)  // insert into aggregate
+
+  LAST_OTHER_INST(54)
 
 #undef  FIRST_TERM_INST
 #undef HANDLE_TERM_INST
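
For reference, a typical consumer of this .def file defines HANDLE_INST once
and lets the per-category macros default to it; the dense renumbering above
is what keeps index-based tables like this sketch correct:

    static const char *const OpcodeNames[] = {
      "<invalid>",                           // opcode numbering starts at 1
    #define HANDLE_INST(NUM, OPCODE, CLASS) #OPCODE,
    #include "llvm/Instruction.def"
    };
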
diff --git a/libclamav/c++/llvm/include/llvm/Instruction.h b/libclamav/c++/llvm/include/llvm/Instruction.h
index fdae3d7..07b3231 100644
--- a/libclamav/c++/llvm/include/llvm/Instruction.h
+++ b/libclamav/c++/llvm/include/llvm/Instruction.h
@@ -38,6 +38,7 @@ protected:
               Instruction *InsertBefore = 0);
   Instruction(const Type *Ty, unsigned iType, Use *Ops, unsigned NumOps,
               BasicBlock *InsertAtEnd);
+  virtual Instruction *clone_impl() const = 0;
 public:
   // Out of line virtual method, so the vtable, etc has a home.
   ~Instruction();
@@ -47,7 +48,7 @@ public:
   ///   * The instruction has no parent
   ///   * The instruction has no name
   ///
-  virtual Instruction *clone() const = 0;
+  Instruction *clone() const;
 
   /// isIdenticalTo - Return true if the specified instruction is exactly
   /// identical to the current one.  This means that all operands match and any
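
The devirtualized clone() now has roughly this shape (sketch; the real body
lives in lib/VMCore and may copy additional per-instruction state):

    Instruction *Instruction::clone() const {
      Instruction *New = clone_impl();       // subclass allocates the copy
      // ... copy state common to all instructions (e.g. optional flags) ...
      return New;
    }
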
diff --git a/libclamav/c++/llvm/include/llvm/Instructions.h b/libclamav/c++/llvm/include/llvm/Instructions.h
index c71d64a..5b48e1a 100644
--- a/libclamav/c++/llvm/include/llvm/Instructions.h
+++ b/libclamav/c++/llvm/include/llvm/Instructions.h
@@ -19,9 +19,7 @@
 #include "llvm/InstrTypes.h"
 #include "llvm/DerivedTypes.h"
 #include "llvm/Attributes.h"
-#include "llvm/BasicBlock.h"
 #include "llvm/CallingConv.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/ADT/SmallVector.h"
 #include <iterator>
 
@@ -34,23 +32,30 @@ class LLVMContext;
 class DominatorTree;
 
 //===----------------------------------------------------------------------===//
-//                             AllocationInst Class
+//                                AllocaInst Class
 //===----------------------------------------------------------------------===//
 
-/// AllocationInst - This class is the common base class of MallocInst and
-/// AllocaInst.
+/// AllocaInst - an instruction to allocate memory on the stack
 ///
-class AllocationInst : public UnaryInstruction {
+class AllocaInst : public UnaryInstruction {
 protected:
-  AllocationInst(const Type *Ty, Value *ArraySize, 
-                 unsigned iTy, unsigned Align, const Twine &Name = "", 
-                 Instruction *InsertBefore = 0);
-  AllocationInst(const Type *Ty, Value *ArraySize,
-                 unsigned iTy, unsigned Align, const Twine &Name,
-                 BasicBlock *InsertAtEnd);
+  virtual AllocaInst *clone_impl() const;
 public:
+  explicit AllocaInst(const Type *Ty, Value *ArraySize = 0,
+                      const Twine &Name = "", Instruction *InsertBefore = 0);
+  AllocaInst(const Type *Ty, Value *ArraySize, 
+             const Twine &Name, BasicBlock *InsertAtEnd);
+
+  AllocaInst(const Type *Ty, const Twine &Name, Instruction *InsertBefore = 0);
+  AllocaInst(const Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd);
+
+  AllocaInst(const Type *Ty, Value *ArraySize, unsigned Align,
+             const Twine &Name = "", Instruction *InsertBefore = 0);
+  AllocaInst(const Type *Ty, Value *ArraySize, unsigned Align,
+             const Twine &Name, BasicBlock *InsertAtEnd);
+
   // Out of line virtual method, so the vtable, etc. has a home.
-  virtual ~AllocationInst();
+  virtual ~AllocaInst();
 
   /// isArrayAllocation - Return true if there is an allocation size parameter
   /// to the allocation instruction that is not 1.
@@ -80,107 +85,6 @@ public:
   unsigned getAlignment() const { return (1u << SubclassData) >> 1; }
   void setAlignment(unsigned Align);
 
-  virtual AllocationInst *clone() const = 0;
-
-  // Methods for support type inquiry through isa, cast, and dyn_cast:
-  static inline bool classof(const AllocationInst *) { return true; }
-  static inline bool classof(const Instruction *I) {
-    return I->getOpcode() == Instruction::Alloca ||
-           I->getOpcode() == Instruction::Malloc;
-  }
-  static inline bool classof(const Value *V) {
-    return isa<Instruction>(V) && classof(cast<Instruction>(V));
-  }
-};
-
-
-//===----------------------------------------------------------------------===//
-//                                MallocInst Class
-//===----------------------------------------------------------------------===//
-
-/// MallocInst - an instruction to allocate memory on the heap
-///
-class MallocInst : public AllocationInst {
-public:
-  explicit MallocInst(const Type *Ty, Value *ArraySize = 0,
-                      const Twine &NameStr = "",
-                      Instruction *InsertBefore = 0)
-    : AllocationInst(Ty, ArraySize, Malloc,
-                     0, NameStr, InsertBefore) {}
-  MallocInst(const Type *Ty, Value *ArraySize,
-             const Twine &NameStr, BasicBlock *InsertAtEnd)
-    : AllocationInst(Ty, ArraySize, Malloc, 0, NameStr, InsertAtEnd) {}
-
-  MallocInst(const Type *Ty, const Twine &NameStr,
-             Instruction *InsertBefore = 0)
-    : AllocationInst(Ty, 0, Malloc, 0, NameStr, InsertBefore) {}
-  MallocInst(const Type *Ty, const Twine &NameStr,
-             BasicBlock *InsertAtEnd)
-    : AllocationInst(Ty, 0, Malloc, 0, NameStr, InsertAtEnd) {}
-
-  MallocInst(const Type *Ty, Value *ArraySize,
-             unsigned Align, const Twine &NameStr,
-             BasicBlock *InsertAtEnd)
-    : AllocationInst(Ty, ArraySize, Malloc,
-                     Align, NameStr, InsertAtEnd) {}
-  MallocInst(const Type *Ty, Value *ArraySize,
-             unsigned Align, const Twine &NameStr = "", 
-             Instruction *InsertBefore = 0)
-    : AllocationInst(Ty, ArraySize,
-                     Malloc, Align, NameStr, InsertBefore) {}
-
-  virtual MallocInst *clone() const;
-
-  // Methods for support type inquiry through isa, cast, and dyn_cast:
-  static inline bool classof(const MallocInst *) { return true; }
-  static inline bool classof(const Instruction *I) {
-    return (I->getOpcode() == Instruction::Malloc);
-  }
-  static inline bool classof(const Value *V) {
-    return isa<Instruction>(V) && classof(cast<Instruction>(V));
-  }
-};
-
-
-//===----------------------------------------------------------------------===//
-//                                AllocaInst Class
-//===----------------------------------------------------------------------===//
-
-/// AllocaInst - an instruction to allocate memory on the stack
-///
-class AllocaInst : public AllocationInst {
-public:
-  explicit AllocaInst(const Type *Ty,
-                      Value *ArraySize = 0,
-                      const Twine &NameStr = "",
-                      Instruction *InsertBefore = 0)
-    : AllocationInst(Ty, ArraySize, Alloca,
-                     0, NameStr, InsertBefore) {}
-  AllocaInst(const Type *Ty,
-             Value *ArraySize, const Twine &NameStr,
-             BasicBlock *InsertAtEnd)
-    : AllocationInst(Ty, ArraySize, Alloca, 0, NameStr, InsertAtEnd) {}
-
-  AllocaInst(const Type *Ty, const Twine &NameStr,
-             Instruction *InsertBefore = 0)
-    : AllocationInst(Ty, 0, Alloca, 0, NameStr, InsertBefore) {}
-  AllocaInst(const Type *Ty, const Twine &NameStr,
-             BasicBlock *InsertAtEnd)
-    : AllocationInst(Ty, 0, Alloca, 0, NameStr, InsertAtEnd) {}
-
-  AllocaInst(const Type *Ty, Value *ArraySize,
-             unsigned Align, const Twine &NameStr = "",
-             Instruction *InsertBefore = 0)
-    : AllocationInst(Ty, ArraySize, Alloca,
-                     Align, NameStr, InsertBefore) {}
-  AllocaInst(const Type *Ty, Value *ArraySize,
-             unsigned Align, const Twine &NameStr,
-             BasicBlock *InsertAtEnd)
-    : AllocationInst(Ty, ArraySize, Alloca,
-                     Align, NameStr, InsertAtEnd) {}
-
-  virtual AllocaInst *clone() const;
-
   /// isStaticAlloca - Return true if this alloca is in the entry block of the
   /// function and is a constant size.  If so, the code generator will fold it
   /// into the prolog/epilog code, so it is basically free.
@@ -198,35 +102,6 @@ public:
 
 
 //===----------------------------------------------------------------------===//
-//                                 FreeInst Class
-//===----------------------------------------------------------------------===//
-
-/// FreeInst - an instruction to deallocate memory
-///
-class FreeInst : public UnaryInstruction {
-  void AssertOK();
-public:
-  explicit FreeInst(Value *Ptr, Instruction *InsertBefore = 0);
-  FreeInst(Value *Ptr, BasicBlock *InsertAfter);
-
-  virtual FreeInst *clone() const;
-
-  // Accessor methods for consistency with other memory operations
-  Value *getPointerOperand() { return getOperand(0); }
-  const Value *getPointerOperand() const { return getOperand(0); }
-
-  // Methods for support type inquiry through isa, cast, and dyn_cast:
-  static inline bool classof(const FreeInst *) { return true; }
-  static inline bool classof(const Instruction *I) {
-    return (I->getOpcode() == Instruction::Free);
-  }
-  static inline bool classof(const Value *V) {
-    return isa<Instruction>(V) && classof(cast<Instruction>(V));
-  }
-};
-
-
-//===----------------------------------------------------------------------===//
 //                                LoadInst Class
 //===----------------------------------------------------------------------===//
 
@@ -235,6 +110,8 @@ public:
 ///
 class LoadInst : public UnaryInstruction {
   void AssertOK();
+protected:
+  virtual LoadInst *clone_impl() const;
 public:
   LoadInst(Value *Ptr, const Twine &NameStr, Instruction *InsertBefore);
   LoadInst(Value *Ptr, const Twine &NameStr, BasicBlock *InsertAtEnd);
@@ -265,8 +142,6 @@ public:
     SubclassData = (SubclassData & ~1) | (V ? 1 : 0);
   }
 
-  virtual LoadInst *clone() const;
-
   /// getAlignment - Return the alignment of the access that is being performed
   ///
   unsigned getAlignment() const {
@@ -304,6 +179,8 @@ public:
 class StoreInst : public Instruction {
   void *operator new(size_t, unsigned);  // DO NOT IMPLEMENT
   void AssertOK();
+protected:
+  virtual StoreInst *clone_impl() const;
 public:
   // allocate space for exactly two operands
   void *operator new(size_t s) {
@@ -342,8 +219,6 @@ public:
 
   void setAlignment(unsigned Align);
 
-  virtual StoreInst *clone() const;
-
   Value *getPointerOperand() { return getOperand(1); }
   const Value *getPointerOperand() const { return getOperand(1); }
   static unsigned getPointerOperandIndex() { return 1U; }
@@ -452,6 +327,8 @@ class GetElementPtrInst : public Instruction {
                     Instruction *InsertBefore = 0);
   GetElementPtrInst(Value *Ptr, Value *Idx,
                     const Twine &NameStr, BasicBlock *InsertAtEnd);
+protected:
+  virtual GetElementPtrInst *clone_impl() const;
 public:
   template<typename InputIterator>
   static GetElementPtrInst *Create(Value *Ptr, InputIterator IdxBegin,
@@ -525,8 +402,6 @@ public:
     return GEP;
   }
 
-  virtual GetElementPtrInst *clone() const;
-
   /// Transparently provide more efficient getOperand methods.
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
@@ -673,6 +548,9 @@ DEFINE_TRANSPARENT_OPERAND_ACCESSORS(GetElementPtrInst, Value)
 /// must be identical types.
 /// @brief Represent an integer comparison operator.
 class ICmpInst: public CmpInst {
+protected:
+  /// @brief Clone an identical ICmpInst
+  virtual ICmpInst *clone_impl() const;
 public:
   /// @brief Constructor with insert-before-instruction semantics.
   ICmpInst(
@@ -787,30 +665,6 @@ public:
     return !isEquality(P);
   }
 
-  /// @returns true if the predicate of this ICmpInst is signed, false otherwise
-  /// @brief Determine if this instruction's predicate is signed.
-  bool isSignedPredicate() const { return isSignedPredicate(getPredicate()); }
-
-  /// @returns true if the predicate provided is signed, false otherwise
-  /// @brief Determine if the predicate is signed.
-  static bool isSignedPredicate(Predicate pred);
-
-  /// @returns true if the specified compare predicate is
-  /// true when both operands are equal...
-  /// @brief Determine if the icmp is true when both operands are equal
-  static bool isTrueWhenEqual(ICmpInst::Predicate pred) {
-    return pred == ICmpInst::ICMP_EQ  || pred == ICmpInst::ICMP_UGE ||
-           pred == ICmpInst::ICMP_SGE || pred == ICmpInst::ICMP_ULE ||
-           pred == ICmpInst::ICMP_SLE;
-  }
-
-  /// @returns true if the specified compare instruction is
-  /// true when both operands are equal...
-  /// @brief Determine if the ICmpInst returns true when both operands are equal
-  bool isTrueWhenEqual() {
-    return isTrueWhenEqual(getPredicate());
-  }
-
   /// Initialize a set of values that all satisfy the predicate with C.
   /// @brief Make a ConstantRange for a relation with a constant value.
   static ConstantRange makeConstantRange(Predicate pred, const APInt &C);
@@ -825,8 +679,6 @@ public:
     Op<0>().swap(Op<1>());
   }
 
-  virtual ICmpInst *clone() const;
-
   // Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const ICmpInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -847,6 +699,9 @@ public:
 /// vectors of floating point values. The operands must be identical types.
 /// @brief Represents a floating point comparison operator.
 class FCmpInst: public CmpInst {
+protected:
+  /// @brief Clone an identical FCmpInst
+  virtual FCmpInst *clone_impl() const;
 public:
   /// @brief Constructor with insert-before-instruction semantics.
   FCmpInst(
@@ -934,8 +789,6 @@ public:
     Op<0>().swap(Op<1>());
   }
 
-  virtual FCmpInst *clone() const;
-
   /// @brief Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const FCmpInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -1003,6 +856,8 @@ class CallInst : public Instruction {
   explicit CallInst(Value *F, const Twine &NameStr,
                     Instruction *InsertBefore);
   CallInst(Value *F, const Twine &NameStr, BasicBlock *InsertAtEnd);
+protected:
+  virtual CallInst *clone_impl() const;
 public:
   template<typename InputIterator>
   static CallInst *Create(Value *Func,
@@ -1042,12 +897,18 @@ public:
   ///    constant 1.
   /// 2. Call malloc with that argument.
   /// 3. Bitcast the result of the malloc call to the specified type.
-  static Value *CreateMalloc(Instruction *InsertBefore, const Type *IntPtrTy,
-                             const Type *AllocTy, Value *ArraySize = 0,
-                             const Twine &Name = "");
-  static Value *CreateMalloc(BasicBlock *InsertAtEnd, const Type *IntPtrTy,
-                             const Type *AllocTy, Value *ArraySize = 0,
-                             const Twine &Name = "");
+  static Instruction *CreateMalloc(Instruction *InsertBefore,
+                                   const Type *IntPtrTy, const Type *AllocTy,
+                                   Value *AllocSize, Value *ArraySize = 0,
+                                   const Twine &Name = "");
+  static Instruction *CreateMalloc(BasicBlock *InsertAtEnd,
+                                   const Type *IntPtrTy, const Type *AllocTy,
+                                   Value *AllocSize, Value *ArraySize = 0,
+                                   Function* MallocF = 0,
+                                   const Twine &Name = "");
+  /// CreateFree - Generate the IR for a call to the builtin free function.
+  static void CreateFree(Value* Source, Instruction *InsertBefore);
+  static Instruction* CreateFree(Value* Source, BasicBlock *InsertAtEnd);
 
   ~CallInst();
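
The CreateMalloc/CreateFree factories above replace the deleted MallocInst
and FreeInst classes: heap management is now lowered to plain calls to the
libc functions at creation time. A minimal sketch of the new interface,
assuming Ctx and InsertPt (an LLVMContext and an Instruction*) are already
in scope; using i64 as IntPtrTy and ConstantExpr::getSizeOf for AllocSize
are illustrative assumptions, not requirements:

    // Sketch only: allocate one i64 on the heap and free it again.
    // IntPtrTy must match the target's pointer-width integer; i64 is
    // assumed here.  AllocSize is the byte size of the allocated type.
    const Type *I64 = Type::getInt64Ty(Ctx);
    Value *AllocSize = ConstantExpr::getSizeOf(I64);
    Instruction *Mem =
      CallInst::CreateMalloc(InsertPt, /*IntPtrTy=*/I64, /*AllocTy=*/I64,
                             AllocSize);
    CallInst::CreateFree(Mem, InsertPt);
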
 
@@ -1056,8 +917,6 @@ public:
     SubclassData = (SubclassData & ~1) | unsigned(isTC);
   }
 
-  virtual CallInst *clone() const;
-
   /// Provide fast operand accessors
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
@@ -1148,10 +1007,15 @@ public:
   }
 
   /// getCalledValue - Get a pointer to the function that is invoked by this
-  /// instruction
+  /// instruction.
   const Value *getCalledValue() const { return Op<0>(); }
         Value *getCalledValue()       { return Op<0>(); }
 
+  /// setCalledFunction - Set the function called.
+  void setCalledFunction(Value* Fn) {
+    Op<0>() = Fn;
+  }
+
   // Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const CallInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -1220,6 +1084,8 @@ class SelectInst : public Instruction {
     init(C, S1, S2);
     setName(NameStr);
   }
+protected:
+  virtual SelectInst *clone_impl() const;
 public:
   static SelectInst *Create(Value *C, Value *S1, Value *S2,
                             const Twine &NameStr = "",
@@ -1250,8 +1116,6 @@ public:
     return static_cast<OtherOps>(Instruction::getOpcode());
   }
 
-  virtual SelectInst *clone() const;
-
   // Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const SelectInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -1276,6 +1140,9 @@ DEFINE_TRANSPARENT_OPERAND_ACCESSORS(SelectInst, Value)
 /// an argument of the specified type given a va_list and increments that list
 ///
 class VAArgInst : public UnaryInstruction {
+protected:
+  virtual VAArgInst *clone_impl() const;
+
 public:
   VAArgInst(Value *List, const Type *Ty, const Twine &NameStr = "",
              Instruction *InsertBefore = 0)
@@ -1288,8 +1155,6 @@ public:
     setName(NameStr);
   }
 
-  virtual VAArgInst *clone() const;
-
   // Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const VAArgInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -1312,6 +1177,9 @@ class ExtractElementInst : public Instruction {
                      Instruction *InsertBefore = 0);
   ExtractElementInst(Value *Vec, Value *Idx, const Twine &NameStr,
                      BasicBlock *InsertAtEnd);
+protected:
+  virtual ExtractElementInst *clone_impl() const;
+
 public:
   static ExtractElementInst *Create(Value *Vec, Value *Idx,
                                    const Twine &NameStr = "",
@@ -1328,8 +1196,6 @@ public:
   /// formed with the specified operands.
   static bool isValidOperands(const Value *Vec, const Value *Idx);
 
-  virtual ExtractElementInst *clone() const;
-
   Value *getVectorOperand() { return Op<0>(); }
   Value *getIndexOperand() { return Op<1>(); }
   const Value *getVectorOperand() const { return Op<0>(); }
@@ -1372,6 +1238,9 @@ class InsertElementInst : public Instruction {
                     Instruction *InsertBefore = 0);
   InsertElementInst(Value *Vec, Value *NewElt, Value *Idx,
                     const Twine &NameStr, BasicBlock *InsertAtEnd);
+protected:
+  virtual InsertElementInst *clone_impl() const;
+
 public:
   static InsertElementInst *Create(Value *Vec, Value *NewElt, Value *Idx,
                                    const Twine &NameStr = "",
@@ -1389,8 +1258,6 @@ public:
   static bool isValidOperands(const Value *Vec, const Value *NewElt,
                               const Value *Idx);
 
-  virtual InsertElementInst *clone() const;
-
   /// getType - Overload to return most specific vector type.
   ///
   const VectorType *getType() const {
@@ -1424,6 +1291,9 @@ DEFINE_TRANSPARENT_OPERAND_ACCESSORS(InsertElementInst, Value)
 /// input vectors.
 ///
 class ShuffleVectorInst : public Instruction {
+protected:
+  virtual ShuffleVectorInst *clone_impl() const;
+
 public:
   // allocate space for exactly three operands
   void *operator new(size_t s) {
@@ -1440,8 +1310,6 @@ public:
   static bool isValidOperands(const Value *V1, const Value *V2,
                               const Value *Mask);
 
-  virtual ShuffleVectorInst *clone() const;
-
   /// getType - Overload to return most specific vector type.
   ///
   const VectorType *getType() const {
@@ -1550,6 +1418,8 @@ class ExtractValueInst : public UnaryInstruction {
   void *operator new(size_t s) {
     return User::operator new(s, 1);
   }
+protected:
+  virtual ExtractValueInst *clone_impl() const;
 
 public:
   template<typename InputIterator>
@@ -1584,8 +1454,6 @@ public:
     return new ExtractValueInst(Agg, Idxs, Idxs + 1, NameStr, InsertAtEnd);
   }
 
-  virtual ExtractValueInst *clone() const;
-
   /// getIndexedType - Returns the type of the element that would be extracted
   /// with an extractvalue instruction with the specified parameters.
   ///
@@ -1717,6 +1585,8 @@ class InsertValueInst : public Instruction {
                   Instruction *InsertBefore = 0);
   InsertValueInst(Value *Agg, Value *Val, unsigned Idx,
                   const Twine &NameStr, BasicBlock *InsertAtEnd);
+protected:
+  virtual InsertValueInst *clone_impl() const;
 public:
   // allocate space for exactly two operands
   void *operator new(size_t s) {
@@ -1754,8 +1624,6 @@ public:
     return new InsertValueInst(Agg, Val, Idx, NameStr, InsertAtEnd);
   }
 
-  virtual InsertValueInst *clone() const;
-
   /// Transparently provide more efficient getOperand methods.
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
@@ -1864,6 +1732,8 @@ class PHINode : public Instruction {
       ReservedSpace(0) {
     setName(NameStr);
   }
+protected:
+  virtual PHINode *clone_impl() const;
 public:
   static PHINode *Create(const Type *Ty, const Twine &NameStr = "",
                          Instruction *InsertBefore = 0) {
@@ -1883,8 +1753,6 @@ public:
     resizeOperands(NumValues*2);
   }
 
-  virtual PHINode *clone() const;
-
   /// Provide fast operand accessors
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
@@ -1910,21 +1778,31 @@ public:
     return i/2;
   }
 
+  /// getIncomingBlock - Return incoming basic block #i.
+  ///
+  BasicBlock *getIncomingBlock(unsigned i) const {
+    return cast<BasicBlock>(getOperand(i*2+1));
+  }
+  
   /// getIncomingBlock - Return incoming basic block corresponding
-  /// to value use iterator
+  /// to an operand of the PHI.
   ///
-  template <typename U>
-  BasicBlock *getIncomingBlock(value_use_iterator<U> I) const {
-    assert(this == *I && "Iterator doesn't point to PHI's Uses?");
-    return static_cast<BasicBlock*>((&I.getUse() + 1)->get());
+  BasicBlock *getIncomingBlock(const Use &U) const {
+    assert(this == U.getUser() && "Iterator doesn't point to PHI's Uses?");
+    return cast<BasicBlock>((&U + 1)->get());
   }
-  /// getIncomingBlock - Return incoming basic block number x
+  
+  /// getIncomingBlock - Return incoming basic block corresponding
+  /// to value use iterator.
   ///
-  BasicBlock *getIncomingBlock(unsigned i) const {
-    return static_cast<BasicBlock*>(getOperand(i*2+1));
+  template <typename U>
+  BasicBlock *getIncomingBlock(value_use_iterator<U> I) const {
+    return getIncomingBlock(I.getUse());
   }
+  
+  
   void setIncomingBlock(unsigned i, BasicBlock *BB) {
-    setOperand(i*2+1, BB);
+    setOperand(i*2+1, (Value*)BB);
   }
   static unsigned getOperandNumForIncomingBlock(unsigned i) {
     return i*2+1;
@@ -1947,7 +1825,7 @@ public:
     // Initialize some new operands.
     NumOperands = OpNo+2;
     OperandList[OpNo] = V;
-    OperandList[OpNo+1] = BB;
+    OperandList[OpNo+1] = (Value*)BB;
   }
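
addIncoming appends the (value, block) pair as two adjacent operands; the
same 2*i / 2*i+1 layout drives all the accessors above. A short sketch of
walking those pairs, where PN is an assumed existing PHINode*:

    // Sketch: visit each (incoming value, predecessor block) pair.
    for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
      Value      *V  = PN->getIncomingValue(i);   // operand 2*i
      BasicBlock *BB = PN->getIncomingBlock(i);   // operand 2*i+1
      (void)V; (void)BB;  // a real pass would inspect the pair here
    }
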
 
   /// removeIncomingValue - Remove an incoming value.  This is useful if a
@@ -1972,7 +1850,7 @@ public:
   int getBasicBlockIndex(const BasicBlock *BB) const {
     Use *OL = OperandList;
     for (unsigned i = 0, e = getNumOperands(); i != e; i += 2)
-      if (OL[i+1].get() == BB) return i/2;
+      if (OL[i+1].get() == (const Value*)BB) return i/2;
     return -1;
   }
 
@@ -2036,6 +1914,8 @@ private:
                       Instruction *InsertBefore = 0);
   ReturnInst(LLVMContext &C, Value *retVal, BasicBlock *InsertAtEnd);
   explicit ReturnInst(LLVMContext &C, BasicBlock *InsertAtEnd);
+protected:
+  virtual ReturnInst *clone_impl() const;
 public:
   static ReturnInst* Create(LLVMContext &C, Value *retVal = 0,
                             Instruction *InsertBefore = 0) {
@@ -2050,8 +1930,6 @@ public:
   }
   virtual ~ReturnInst();
 
-  virtual ReturnInst *clone() const;
-
   /// Provide fast operand accessors
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
@@ -2111,6 +1989,8 @@ class BranchInst : public TerminatorInst {
   BranchInst(BasicBlock *IfTrue, BasicBlock *InsertAtEnd);
   BranchInst(BasicBlock *IfTrue, BasicBlock *IfFalse, Value *Cond,
              BasicBlock *InsertAtEnd);
+protected:
+  virtual BranchInst *clone_impl() const;
 public:
   static BranchInst *Create(BasicBlock *IfTrue, Instruction *InsertBefore = 0) {
     return new(1, true) BranchInst(IfTrue, InsertBefore);
@@ -2132,8 +2012,6 @@ public:
   /// Transparently provide more efficient getOperand methods.
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
-  virtual BranchInst *clone() const;
-
   bool isUnconditional() const { return getNumOperands() == 1; }
   bool isConditional()   const { return getNumOperands() == 3; }
 
@@ -2151,7 +2029,7 @@ public:
   // targeting the specified block.
   // FIXME: Eliminate this ugly method.
   void setUnconditionalDest(BasicBlock *Dest) {
-    Op<-1>() = Dest;
+    Op<-1>() = (Value*)Dest;
     if (isConditional()) {  // Convert this to an uncond branch.
       Op<-2>() = 0;
       Op<-3>() = 0;
@@ -2169,7 +2047,7 @@ public:
 
   void setSuccessor(unsigned idx, BasicBlock *NewSucc) {
     assert(idx < getNumSuccessors() && "Successor # out of range for Branch!");
-    *(&Op<-1>() - idx) = NewSucc;
+    *(&Op<-1>() - idx) = (Value*)NewSucc;
   }
 
   // Methods for support type inquiry through isa, cast, and dyn_cast:
@@ -2205,7 +2083,7 @@ class SwitchInst : public TerminatorInst {
   // Operand[1]    = Default basic block destination
   // Operand[2n  ] = Value to match
   // Operand[2n+1] = BasicBlock to go to on match
-  SwitchInst(const SwitchInst &RI);
+  SwitchInst(const SwitchInst &SI);
   void init(Value *Value, BasicBlock *Default, unsigned NumCases);
   void resizeOperands(unsigned No);
   // allocate space for exactly zero operands
@@ -2217,7 +2095,7 @@ class SwitchInst : public TerminatorInst {
   /// be specified here to make memory allocation more efficient.  This
   /// constructor can also autoinsert before another instruction.
   SwitchInst(Value *Value, BasicBlock *Default, unsigned NumCases,
-             Instruction *InsertBefore = 0);
+             Instruction *InsertBefore);
 
   /// SwitchInst ctor - Create a new switch instruction, specifying a value to
   /// switch on and a default destination.  The number of additional cases can
@@ -2225,6 +2103,8 @@ class SwitchInst : public TerminatorInst {
   /// constructor also autoinserts at the end of the specified BasicBlock.
   SwitchInst(Value *Value, BasicBlock *Default, unsigned NumCases,
              BasicBlock *InsertAtEnd);
+protected:
+  virtual SwitchInst *clone_impl() const;
 public:
   static SwitchInst *Create(Value *Value, BasicBlock *Default,
                             unsigned NumCases, Instruction *InsertBefore = 0) {
@@ -2302,8 +2182,6 @@ public:
   ///
   void removeCase(unsigned idx);
 
-  virtual SwitchInst *clone() const;
-
   unsigned getNumSuccessors() const { return getNumOperands()/2; }
   BasicBlock *getSuccessor(unsigned idx) const {
     assert(idx < getNumSuccessors() &&"Successor idx out of range for switch!");
@@ -2311,7 +2189,7 @@ public:
   }
   void setSuccessor(unsigned idx, BasicBlock *NewSucc) {
     assert(idx < getNumSuccessors() && "Successor # out of range for switch!");
-    setOperand(idx*2+1, NewSucc);
+    setOperand(idx*2+1, (Value*)NewSucc);
   }
 
   // getSuccessorValue - Return the value associated with the specified
@@ -2343,6 +2221,105 @@ DEFINE_TRANSPARENT_OPERAND_ACCESSORS(SwitchInst, Value)
 
 
 //===----------------------------------------------------------------------===//
+//                             IndirectBrInst Class
+//===----------------------------------------------------------------------===//
+
+/// IndirectBrInst - Indirect Branch Instruction.
+///
+class IndirectBrInst : public TerminatorInst {
+  void *operator new(size_t, unsigned);  // DO NOT IMPLEMENT
+  unsigned ReservedSpace;
+  // Operand[0]   = Address to jump to
+  // Operand[n+1] = n-th destination BasicBlock
+  IndirectBrInst(const IndirectBrInst &IBI);
+  void init(Value *Address, unsigned NumDests);
+  void resizeOperands(unsigned No);
+  // allocate space for exactly zero operands
+  void *operator new(size_t s) {
+    return User::operator new(s, 0);
+  }
+  /// IndirectBrInst ctor - Create a new indirectbr instruction, specifying an
+  /// Address to jump to.  The number of expected destinations can be specified
+  /// here to make memory allocation more efficient.  This constructor can also
+  /// autoinsert before another instruction.
+  IndirectBrInst(Value *Address, unsigned NumDests, Instruction *InsertBefore);
+  
+  /// IndirectBrInst ctor - Create a new indirectbr instruction, specifying an
+  /// Address to jump to.  The number of expected destinations can be specified
+  /// here to make memory allocation more efficient.  This constructor also
+  /// autoinserts at the end of the specified BasicBlock.
+  IndirectBrInst(Value *Address, unsigned NumDests, BasicBlock *InsertAtEnd);
+protected:
+  virtual IndirectBrInst *clone_impl() const;
+public:
+  static IndirectBrInst *Create(Value *Address, unsigned NumDests,
+                                Instruction *InsertBefore = 0) {
+    return new IndirectBrInst(Address, NumDests, InsertBefore);
+  }
+  static IndirectBrInst *Create(Value *Address, unsigned NumDests,
+                                BasicBlock *InsertAtEnd) {
+    return new IndirectBrInst(Address, NumDests, InsertAtEnd);
+  }
+  ~IndirectBrInst();
+  
+  /// Provide fast operand accessors.
+  DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
+  
+  // Accessor Methods for IndirectBrInst instruction.
+  Value *getAddress() { return getOperand(0); }
+  const Value *getAddress() const { return getOperand(0); }
+  void setAddress(Value *V) { setOperand(0, V); }
+  
+  
+  /// getNumDestinations - return the number of possible destinations in this
+  /// indirectbr instruction.
+  unsigned getNumDestinations() const { return getNumOperands()-1; }
+  
+  /// getDestination - Return the specified destination.
+  BasicBlock *getDestination(unsigned i) { return getSuccessor(i); }
+  const BasicBlock *getDestination(unsigned i) const { return getSuccessor(i); }
+  
+  /// addDestination - Add a destination.
+  ///
+  void addDestination(BasicBlock *Dest);
+  
+  /// removeDestination - This method removes the specified successor from the
+  /// indirectbr instruction.
+  void removeDestination(unsigned i);
+  
+  unsigned getNumSuccessors() const { return getNumOperands()-1; }
+  BasicBlock *getSuccessor(unsigned i) const {
+    return cast<BasicBlock>(getOperand(i+1));
+  }
+  void setSuccessor(unsigned i, BasicBlock *NewSucc) {
+    setOperand(i+1, (Value*)NewSucc);
+  }
+  
+  // Methods for support type inquiry through isa, cast, and dyn_cast:
+  static inline bool classof(const IndirectBrInst *) { return true; }
+  static inline bool classof(const Instruction *I) {
+    return I->getOpcode() == Instruction::IndirectBr;
+  }
+  static inline bool classof(const Value *V) {
+    return isa<Instruction>(V) && classof(cast<Instruction>(V));
+  }
+private:
+  virtual BasicBlock *getSuccessorV(unsigned idx) const;
+  virtual unsigned getNumSuccessorsV() const;
+  virtual void setSuccessorV(unsigned idx, BasicBlock *B);
+};
+
+template <>
+struct OperandTraits<IndirectBrInst> : public HungoffOperandTraits<1> {
+};
+
+DEFINE_TRANSPARENT_OPERAND_ACCESSORS(IndirectBrInst, Value)
+  
+  
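
For reference, a hedged sketch of building the new instruction; BB1, BB2
and Pred are assumed pre-existing BasicBlock*s, and the address comes from
the blockaddress constant introduced alongside indirectbr:

    // Sketch: an indirect branch through a blockaddress constant.  Every
    // block the address might name must be listed as a destination.
    Value *Addr = BlockAddress::get(BB1);
    IndirectBrInst *IBI = IndirectBrInst::Create(Addr, /*NumDests=*/2, Pred);
    IBI->addDestination(BB1);
    IBI->addDestination(BB2);
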
+//===----------------------------------------------------------------------===//
 //                               InvokeInst Class
 //===----------------------------------------------------------------------===//
 
@@ -2394,6 +2371,8 @@ class InvokeInst : public TerminatorInst {
                     InputIterator ArgBegin, InputIterator ArgEnd,
                     unsigned Values,
                     const Twine &NameStr, BasicBlock *InsertAtEnd);
+protected:
+  virtual InvokeInst *clone_impl() const;
 public:
   template<typename InputIterator>
   static InvokeInst *Create(Value *Func,
@@ -2416,8 +2395,6 @@ public:
                                   Values, NameStr, InsertAtEnd);
   }
 
-  virtual InvokeInst *clone() const;
-
   /// Provide fast operand accessors
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
@@ -2520,11 +2497,11 @@ public:
     return cast<BasicBlock>(getOperand(2));
   }
   void setNormalDest(BasicBlock *B) {
-    setOperand(1, B);
+    setOperand(1, (Value*)B);
   }
 
   void setUnwindDest(BasicBlock *B) {
-    setOperand(2, B);
+    setOperand(2, (Value*)B);
   }
 
   BasicBlock *getSuccessor(unsigned i) const {
@@ -2534,7 +2511,7 @@ public:
 
   void setSuccessor(unsigned idx, BasicBlock *NewSucc) {
     assert(idx < 2 && "Successor # out of range for invoke!");
-    setOperand(idx+1, NewSucc);
+    setOperand(idx+1, (Value*)NewSucc);
   }
 
   unsigned getNumSuccessors() const { return 2; }
@@ -2598,6 +2575,8 @@ DEFINE_TRANSPARENT_OPERAND_ACCESSORS(InvokeInst, Value)
 ///
 class UnwindInst : public TerminatorInst {
   void *operator new(size_t, unsigned);  // DO NOT IMPLEMENT
+protected:
+  virtual UnwindInst *clone_impl() const;
 public:
   // allocate space for exactly zero operands
   void *operator new(size_t s) {
@@ -2606,8 +2585,6 @@ public:
   explicit UnwindInst(LLVMContext &C, Instruction *InsertBefore = 0);
   explicit UnwindInst(LLVMContext &C, BasicBlock *InsertAtEnd);
 
-  virtual UnwindInst *clone() const;
-
   unsigned getNumSuccessors() const { return 0; }
 
   // Methods for support type inquiry through isa, cast, and dyn_cast:
@@ -2635,6 +2612,9 @@ private:
 ///
 class UnreachableInst : public TerminatorInst {
   void *operator new(size_t, unsigned);  // DO NOT IMPLEMENT
+protected:
+  virtual UnreachableInst *clone_impl() const;
+
 public:
   // allocate space for exactly zero operands
   void *operator new(size_t s) {
@@ -2643,8 +2623,6 @@ public:
   explicit UnreachableInst(LLVMContext &C, Instruction *InsertBefore = 0);
   explicit UnreachableInst(LLVMContext &C, BasicBlock *InsertAtEnd);
 
-  virtual UnreachableInst *clone() const;
-
   unsigned getNumSuccessors() const { return 0; }
 
   // Methods for support type inquiry through isa, cast, and dyn_cast:
@@ -2667,6 +2645,10 @@ private:
 
 /// @brief This class represents a truncation of integer types.
 class TruncInst : public CastInst {
+protected:
+  /// @brief Clone an identical TruncInst
+  virtual TruncInst *clone_impl() const;
+
 public:
   /// @brief Constructor with insert-before-instruction semantics
   TruncInst(
@@ -2684,9 +2666,6 @@ public:
     BasicBlock *InsertAtEnd       ///< The block to insert the instruction into
   );
 
-  /// @brief Clone an identical TruncInst
-  virtual TruncInst *clone() const;
-
   /// @brief Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const TruncInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -2703,6 +2682,10 @@ public:
 
 /// @brief This class represents zero extension of integer types.
 class ZExtInst : public CastInst {
+protected:
+  /// @brief Clone an identical ZExtInst
+  virtual ZExtInst *clone_impl() const;
+
 public:
   /// @brief Constructor with insert-before-instruction semantics
   ZExtInst(
@@ -2720,9 +2703,6 @@ public:
     BasicBlock *InsertAtEnd       ///< The block to insert the instruction into
   );
 
-  /// @brief Clone an identical ZExtInst
-  virtual ZExtInst *clone() const;
-
   /// @brief Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const ZExtInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -2739,6 +2719,10 @@ public:
 
 /// @brief This class represents a sign extension of integer types.
 class SExtInst : public CastInst {
+protected:
+  /// @brief Clone an identical SExtInst
+  virtual SExtInst *clone_impl() const;
+
 public:
   /// @brief Constructor with insert-before-instruction semantics
   SExtInst(
@@ -2756,9 +2740,6 @@ public:
     BasicBlock *InsertAtEnd       ///< The block to insert the instruction into
   );
 
-  /// @brief Clone an identical SExtInst
-  virtual SExtInst *clone() const;
-
   /// @brief Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const SExtInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -2775,6 +2756,10 @@ public:
 
 /// @brief This class represents a truncation of floating point types.
 class FPTruncInst : public CastInst {
+protected:
+  /// @brief Clone an identical FPTruncInst
+  virtual FPTruncInst *clone_impl() const;
+
 public:
   /// @brief Constructor with insert-before-instruction semantics
   FPTruncInst(
@@ -2792,9 +2777,6 @@ public:
     BasicBlock *InsertAtEnd       ///< The block to insert the instruction into
   );
 
-  /// @brief Clone an identical FPTruncInst
-  virtual FPTruncInst *clone() const;
-
   /// @brief Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const FPTruncInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -2811,6 +2793,10 @@ public:
 
 /// @brief This class represents an extension of floating point types.
 class FPExtInst : public CastInst {
+protected:
+  /// @brief Clone an identical FPExtInst
+  virtual FPExtInst *clone_impl() const;
+
 public:
   /// @brief Constructor with insert-before-instruction semantics
   FPExtInst(
@@ -2828,9 +2814,6 @@ public:
     BasicBlock *InsertAtEnd       ///< The block to insert the instruction into
   );
 
-  /// @brief Clone an identical FPExtInst
-  virtual FPExtInst *clone() const;
-
   /// @brief Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const FPExtInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -2847,6 +2830,10 @@ public:
 
 /// @brief This class represents a cast unsigned integer to floating point.
 class UIToFPInst : public CastInst {
+protected:
+  /// @brief Clone an identical UIToFPInst
+  virtual UIToFPInst *clone_impl() const;
+
 public:
   /// @brief Constructor with insert-before-instruction semantics
   UIToFPInst(
@@ -2864,9 +2851,6 @@ public:
     BasicBlock *InsertAtEnd       ///< The block to insert the instruction into
   );
 
-  /// @brief Clone an identical UIToFPInst
-  virtual UIToFPInst *clone() const;
-
   /// @brief Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const UIToFPInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -2883,6 +2867,10 @@ public:
 
 /// @brief This class represents a cast from signed integer to floating point.
 class SIToFPInst : public CastInst {
+protected:
+  /// @brief Clone an identical SIToFPInst
+  virtual SIToFPInst *clone_impl() const;
+
 public:
   /// @brief Constructor with insert-before-instruction semantics
   SIToFPInst(
@@ -2900,9 +2888,6 @@ public:
     BasicBlock *InsertAtEnd       ///< The block to insert the instruction into
   );
 
-  /// @brief Clone an identical SIToFPInst
-  virtual SIToFPInst *clone() const;
-
   /// @brief Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const SIToFPInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -2919,6 +2904,10 @@ public:
 
 /// @brief This class represents a cast from floating point to unsigned integer
 class FPToUIInst  : public CastInst {
+protected:
+  /// @brief Clone an identical FPToUIInst
+  virtual FPToUIInst *clone_impl() const;
+
 public:
   /// @brief Constructor with insert-before-instruction semantics
   FPToUIInst(
@@ -2936,9 +2925,6 @@ public:
     BasicBlock *InsertAtEnd       ///< Where to insert the new instruction
   );
 
-  /// @brief Clone an identical FPToUIInst
-  virtual FPToUIInst *clone() const;
-
   /// @brief Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const FPToUIInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -2955,6 +2941,10 @@ public:
 
 /// @brief This class represents a cast from floating point to signed integer.
 class FPToSIInst  : public CastInst {
+protected:
+  /// @brief Clone an identical FPToSIInst
+  virtual FPToSIInst *clone_impl() const;
+
 public:
   /// @brief Constructor with insert-before-instruction semantics
   FPToSIInst(
@@ -2972,9 +2962,6 @@ public:
     BasicBlock *InsertAtEnd       ///< The block to insert the instruction into
   );
 
-  /// @brief Clone an identical FPToSIInst
-  virtual FPToSIInst *clone() const;
-
   /// @brief Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const FPToSIInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -3009,7 +2996,7 @@ public:
   );
 
   /// @brief Clone an identical IntToPtrInst
-  virtual IntToPtrInst *clone() const;
+  virtual IntToPtrInst *clone_impl() const;
 
   // Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const IntToPtrInst *) { return true; }
@@ -3027,6 +3014,10 @@ public:
 
 /// @brief This class represents a cast from a pointer to an integer
 class PtrToIntInst : public CastInst {
+protected:
+  /// @brief Clone an identical PtrToIntInst
+  virtual PtrToIntInst *clone_impl() const;
+
 public:
   /// @brief Constructor with insert-before-instruction semantics
   PtrToIntInst(
@@ -3044,9 +3035,6 @@ public:
     BasicBlock *InsertAtEnd       ///< The block to insert the instruction into
   );
 
-  /// @brief Clone an identical PtrToIntInst
-  virtual PtrToIntInst *clone() const;
-
   // Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const PtrToIntInst *) { return true; }
   static inline bool classof(const Instruction *I) {
@@ -3063,6 +3051,10 @@ public:
 
 /// @brief This class represents a no-op cast from one type to another.
 class BitCastInst : public CastInst {
+protected:
+  /// @brief Clone an identical BitCastInst
+  virtual BitCastInst *clone_impl() const;
+
 public:
   /// @brief Constructor with insert-before-instruction semantics
   BitCastInst(
@@ -3080,9 +3072,6 @@ public:
     BasicBlock *InsertAtEnd       ///< The block to insert the instruction into
   );
 
-  /// @brief Clone an identical BitCastInst
-  virtual BitCastInst *clone() const;
-
   // Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const BitCastInst *) { return true; }
   static inline bool classof(const Instruction *I) {
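
The pattern repeated throughout this header is mechanical: each class's
public virtual clone() is replaced by a protected clone_impl(), so a single
non-virtual Instruction::clone() entry point can do the shared bookkeeping
(flags, metadata) before delegating. A standalone sketch of the idiom in
plain C++, independent of the LLVM classes:

    // Non-virtual-interface cloning: the base owns the public entry
    // point; subclasses override only the protected hook, keeping the
    // covariant return type.
    struct Node {
      Node *clone() const {
        Node *Copy = clone_impl();
        // ...shared post-processing would go here...
        return Copy;
      }
      virtual ~Node() {}
    protected:
      virtual Node *clone_impl() const = 0;
    };

    struct Leaf : Node {
    protected:
      virtual Leaf *clone_impl() const { return new Leaf(*this); }
    };
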
diff --git a/libclamav/c++/llvm/include/llvm/IntrinsicInst.h b/libclamav/c++/llvm/include/llvm/IntrinsicInst.h
index a502cc2..1e1dca2 100644
--- a/libclamav/c++/llvm/include/llvm/IntrinsicInst.h
+++ b/libclamav/c++/llvm/include/llvm/IntrinsicInst.h
@@ -313,14 +313,35 @@ namespace llvm {
     // Methods for support type inquiry through isa, cast, and dyn_cast:
     static inline bool classof(const EHSelectorInst *) { return true; }
     static inline bool classof(const IntrinsicInst *I) {
-      return I->getIntrinsicID() == Intrinsic::eh_selector_i32 ||
-             I->getIntrinsicID() == Intrinsic::eh_selector_i64;
+      return I->getIntrinsicID() == Intrinsic::eh_selector;
     }
     static inline bool classof(const Value *V) {
       return isa<IntrinsicInst>(V) && classof(cast<IntrinsicInst>(V));
     }
   };
   
+  /// MemoryUseIntrinsic - This is the common base class for the memory use
+  /// marker intrinsics.
+  ///
+  struct MemoryUseIntrinsic : public IntrinsicInst {
+
+    // Methods for support type inquiry through isa, cast, and dyn_cast:
+    static inline bool classof(const MemoryUseIntrinsic *) { return true; }
+    static inline bool classof(const IntrinsicInst *I) {
+      switch (I->getIntrinsicID()) {
+      case Intrinsic::lifetime_start:
+      case Intrinsic::lifetime_end:
+      case Intrinsic::invariant_start:
+      case Intrinsic::invariant_end:
+        return true;
+      default: return false;
+      }
+    }
+    static inline bool classof(const Value *V) {
+      return isa<IntrinsicInst>(V) && classof(cast<IntrinsicInst>(V));
+    }
+  };
+
 }
 
 #endif
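
The MemoryUseIntrinsic base class above carries no data; it exists so the
four marker intrinsics can be matched with a single type check. A sketch
of the intended use, where I is an assumed existing Value*:

    // Sketch: one dyn_cast now recognizes lifetime.start/end and
    // invariant.start/end alike.
    if (const MemoryUseIntrinsic *MU = dyn_cast<MemoryUseIntrinsic>(I)) {
      (void)MU;  // e.g. treat as a no-op rather than a real memory access
    }
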
diff --git a/libclamav/c++/llvm/include/llvm/Intrinsics.td b/libclamav/c++/llvm/include/llvm/Intrinsics.td
index 9b0c876..c0cf00e 100644
--- a/libclamav/c++/llvm/include/llvm/Intrinsics.td
+++ b/libclamav/c++/llvm/include/llvm/Intrinsics.td
@@ -259,6 +259,11 @@ def int_longjmp    : Intrinsic<[llvm_void_ty], [llvm_ptr_ty, llvm_i32_ty]>;
 def int_sigsetjmp  : Intrinsic<[llvm_i32_ty] , [llvm_ptr_ty, llvm_i32_ty]>;
 def int_siglongjmp : Intrinsic<[llvm_void_ty], [llvm_ptr_ty, llvm_i32_ty]>;
 
+// Internal interface for object size checking
+def int_objectsize : Intrinsic<[llvm_anyint_ty], [llvm_ptr_ty, llvm_i32_ty],
+                               [IntrReadArgMem]>,
+                               GCCBuiltin<"__builtin_object_size">;
+
 //===-------------------- Bit Manipulation Intrinsics ---------------------===//
 //
 
@@ -289,14 +294,11 @@ let Properties = [IntrNoMem] in {
 
 //===------------------ Exception Handling Intrinsics----------------------===//
 //
-def int_eh_exception    : Intrinsic<[llvm_ptr_ty]>;
-def int_eh_selector_i32 : Intrinsic<[llvm_i32_ty],
-                                    [llvm_ptr_ty, llvm_ptr_ty, llvm_vararg_ty]>;
-def int_eh_selector_i64 : Intrinsic<[llvm_i64_ty],
-                                    [llvm_ptr_ty, llvm_ptr_ty, llvm_vararg_ty]>;
+def int_eh_exception : Intrinsic<[llvm_ptr_ty], [], [IntrReadMem]>;
+def int_eh_selector  : Intrinsic<[llvm_i32_ty],
+                                 [llvm_ptr_ty, llvm_ptr_ty, llvm_vararg_ty]>;
 
-def int_eh_typeid_for_i32 : Intrinsic<[llvm_i32_ty], [llvm_ptr_ty]>;
-def int_eh_typeid_for_i64 : Intrinsic<[llvm_i64_ty], [llvm_ptr_ty]>;
+def int_eh_typeid_for : Intrinsic<[llvm_i32_ty], [llvm_ptr_ty]>;
 
 def int_eh_return_i32 : Intrinsic<[llvm_void_ty], [llvm_i32_ty, llvm_ptr_ty]>;
 def int_eh_return_i64 : Intrinsic<[llvm_void_ty], [llvm_i64_ty, llvm_ptr_ty]>;
@@ -421,6 +423,22 @@ def int_atomic_load_umax : Intrinsic<[llvm_anyint_ty],
                                      [IntrWriteArgMem, NoCapture<0>]>,
                            GCCBuiltin<"__sync_fetch_and_umax">;
 
+//===------------------------- Memory Use Markers -------------------------===//
+//
+def int_lifetime_start  : Intrinsic<[llvm_void_ty],
+                                    [llvm_i64_ty, llvm_ptr_ty],
+                                    [IntrWriteArgMem, NoCapture<1>]>;
+def int_lifetime_end    : Intrinsic<[llvm_void_ty],
+                                    [llvm_i64_ty, llvm_ptr_ty],
+                                    [IntrWriteArgMem, NoCapture<1>]>;
+def int_invariant_start : Intrinsic<[llvm_descriptor_ty],
+                                    [llvm_i64_ty, llvm_ptr_ty],
+                                    [IntrReadArgMem, NoCapture<1>]>;
+def int_invariant_end   : Intrinsic<[llvm_void_ty],
+                                    [llvm_descriptor_ty, llvm_i64_ty,
+                                     llvm_ptr_ty],
+                                    [IntrWriteArgMem, NoCapture<2>]>;
+
 //===-------------------------- Other Intrinsics --------------------------===//
 //
 def int_flt_rounds : Intrinsic<[llvm_i32_ty]>,
@@ -461,4 +479,3 @@ include "llvm/IntrinsicsARM.td"
 include "llvm/IntrinsicsCellSPU.td"
 include "llvm/IntrinsicsAlpha.td"
 include "llvm/IntrinsicsXCore.td"
-include "llvm/IntrinsicsBlackfin.td"
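
The new memory-use markers each take a byte count plus the pointer being
described. A hedged C++ sketch of emitting a matched lifetime pair,
assuming Ctx, M (a Module*), Ptr (an i8* value) and InsertPt are in scope:

    // Sketch: mark an 8-byte object's live range.  These intrinsics are
    // not overloaded, so getDeclaration needs no extra type arguments.
    Function *Start = Intrinsic::getDeclaration(M, Intrinsic::lifetime_start);
    Function *End   = Intrinsic::getDeclaration(M, Intrinsic::lifetime_end);
    Value *Args[] = { ConstantInt::get(Type::getInt64Ty(Ctx), 8), Ptr };
    CallInst::Create(Start, Args, Args + 2, "", InsertPt);
    // ... the object's uses; the end marker belongs after the last one ...
    CallInst::Create(End, Args, Args + 2, "", InsertPt);
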
diff --git a/libclamav/c++/llvm/include/llvm/IntrinsicsBlackfin.td b/libclamav/c++/llvm/include/llvm/IntrinsicsBlackfin.td
deleted file mode 100644
index 188e18c..0000000
--- a/libclamav/c++/llvm/include/llvm/IntrinsicsBlackfin.td
+++ /dev/null
@@ -1,34 +0,0 @@
-//===- IntrinsicsBlackfin.td - Defines Blackfin intrinsics -*- tablegen -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file defines all of the blackfin-specific intrinsics.
-//
-//===----------------------------------------------------------------------===//
-
-//===----------------------------------------------------------------------===//
-// Core synchronisation etc.
-//
-// These intrinsics have sideeffects. Each represent a single instruction, but
-// workarounds are sometimes required depending on the cpu.
-
-let TargetPrefix = "bfin" in {
-
-  // Execute csync instruction with workarounds
-  def int_bfin_csync : GCCBuiltin<"__builtin_bfin_csync">,
-          Intrinsic<[llvm_void_ty]>;
-
-  // Execute ssync instruction with workarounds
-  def int_bfin_ssync : GCCBuiltin<"__builtin_bfin_ssync">,
-          Intrinsic<[llvm_void_ty]>;
-
-  // Execute idle instruction with workarounds
-  def int_bfin_idle : GCCBuiltin<"__builtin_bfin_idle">,
-          Intrinsic<[llvm_void_ty]>;
-
-}
diff --git a/libclamav/c++/llvm/include/llvm/IntrinsicsX86.td b/libclamav/c++/llvm/include/llvm/IntrinsicsX86.td
index 5be032b..2f75ed5 100644
--- a/libclamav/c++/llvm/include/llvm/IntrinsicsX86.td
+++ b/libclamav/c++/llvm/include/llvm/IntrinsicsX86.td
@@ -484,13 +484,13 @@ let TargetPrefix = "x86" in {  // All intrinsics start with "llvm.x86.".
 // Misc.
 let TargetPrefix = "x86" in {  // All intrinsics start with "llvm.x86.".
   def int_x86_sse2_packsswb_128 : GCCBuiltin<"__builtin_ia32_packsswb128">,
-              Intrinsic<[llvm_v8i16_ty], [llvm_v8i16_ty,
+              Intrinsic<[llvm_v16i8_ty], [llvm_v8i16_ty,
                          llvm_v8i16_ty], [IntrNoMem]>;
   def int_x86_sse2_packssdw_128 : GCCBuiltin<"__builtin_ia32_packssdw128">,
-              Intrinsic<[llvm_v4i32_ty], [llvm_v4i32_ty,
+              Intrinsic<[llvm_v8i16_ty], [llvm_v4i32_ty,
                          llvm_v4i32_ty], [IntrNoMem]>;
   def int_x86_sse2_packuswb_128 : GCCBuiltin<"__builtin_ia32_packuswb128">,
-              Intrinsic<[llvm_v8i16_ty], [llvm_v8i16_ty,
+              Intrinsic<[llvm_v16i8_ty], [llvm_v8i16_ty,
                          llvm_v8i16_ty], [IntrNoMem]>;
   def int_x86_sse2_movmsk_pd : GCCBuiltin<"__builtin_ia32_movmskpd">,
               Intrinsic<[llvm_i32_ty], [llvm_v2f64_ty], [IntrNoMem]>;
@@ -673,10 +673,10 @@ let TargetPrefix = "x86" in {  // All intrinsics start with "llvm.x86.".
 let TargetPrefix = "x86" in {  // All intrinsics start with "llvm.x86.".
   def int_x86_ssse3_palign_r        : GCCBuiltin<"__builtin_ia32_palignr">,
               Intrinsic<[llvm_v1i64_ty], [llvm_v1i64_ty,
-                         llvm_v1i64_ty, llvm_i16_ty], [IntrNoMem]>;
+                         llvm_v1i64_ty, llvm_i8_ty], [IntrNoMem]>;
   def int_x86_ssse3_palign_r_128    : GCCBuiltin<"__builtin_ia32_palignr128">,
               Intrinsic<[llvm_v2i64_ty], [llvm_v2i64_ty,
-                         llvm_v2i64_ty, llvm_i32_ty], [IntrNoMem]>;
+                         llvm_v2i64_ty, llvm_i8_ty], [IntrNoMem]>;
 }
 
 //===----------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/include/llvm/LLVMContext.h b/libclamav/c++/llvm/include/llvm/LLVMContext.h
index 42b4ea6..b9ffeb0 100644
--- a/libclamav/c++/llvm/include/llvm/LLVMContext.h
+++ b/libclamav/c++/llvm/include/llvm/LLVMContext.h
@@ -19,6 +19,7 @@ namespace llvm {
 
 class LLVMContextImpl;
 class MetadataContext;
+
 /// This is an important class for using LLVM in a threaded context.  It
 /// (opaquely) owns and manages the core "global" data of LLVM's core 
 /// infrastructure, including the type and constant uniquing tables.
@@ -28,10 +29,10 @@ class LLVMContext {
   // DO NOT IMPLEMENT
   LLVMContext(LLVMContext&);
   void operator=(LLVMContext&);
+
 public:
-  LLVMContextImpl* pImpl;
+  LLVMContextImpl* const pImpl;
   MetadataContext &getMetadata();
-  bool RemoveDeadMetadata();
   LLVMContext();
   ~LLVMContext();
 };
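
Making pImpl a const pointer just enforces what was already true: a context
keeps the same implementation object for its whole lifetime. Typical usage,
as a minimal sketch:

    // Sketch: one context owns all uniqued types and constants; every
    // module is created against exactly one context.
    LLVMContext Ctx;
    Module M("example", Ctx);
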
diff --git a/libclamav/c++/llvm/include/llvm/LinkAllPasses.h b/libclamav/c++/llvm/include/llvm/LinkAllPasses.h
index 5854fc0..4aba210 100644
--- a/libclamav/c++/llvm/include/llvm/LinkAllPasses.h
+++ b/libclamav/c++/llvm/include/llvm/LinkAllPasses.h
@@ -16,9 +16,9 @@
 #define LLVM_LINKALLPASSES_H
 
 #include "llvm/Analysis/AliasSetTracker.h"
+#include "llvm/Analysis/DomPrinter.h"
 #include "llvm/Analysis/FindUsedTypes.h"
 #include "llvm/Analysis/IntervalPartition.h"
-#include "llvm/Analysis/LoopVR.h"
 #include "llvm/Analysis/Passes.h"
 #include "llvm/Analysis/PointerTracking.h"
 #include "llvm/Analysis/PostDominators.h"
@@ -63,6 +63,10 @@ namespace {
       (void) llvm::createDeadInstEliminationPass();
       (void) llvm::createDeadStoreEliminationPass();
       (void) llvm::createDeadTypeEliminationPass();
+      (void) llvm::createDomOnlyPrinterPass();
+      (void) llvm::createDomPrinterPass();
+      (void) llvm::createDomOnlyViewerPass();
+      (void) llvm::createDomViewerPass();
       (void) llvm::createEdgeProfilerPass();
       (void) llvm::createOptimalEdgeProfilerPass();
       (void) llvm::createFunctionInliningPass();
@@ -78,6 +82,7 @@ namespace {
       (void) llvm::createInternalizePass(false);
       (void) llvm::createLCSSAPass();
       (void) llvm::createLICMPass();
+      (void) llvm::createLazyValueInfoPass();
       (void) llvm::createLiveValuesPass();
       (void) llvm::createLoopDependenceAnalysisPass();
       (void) llvm::createLoopExtractorPass();
@@ -87,7 +92,6 @@ namespace {
       (void) llvm::createLoopUnswitchPass();
       (void) llvm::createLoopRotatePass();
       (void) llvm::createLoopIndexSplitPass();
-      (void) llvm::createLowerAllocationsPass();
       (void) llvm::createLowerInvokePass();
       (void) llvm::createLowerSetJmpPass();
       (void) llvm::createLowerSwitchPass();
@@ -99,7 +103,10 @@ namespace {
       (void) llvm::createPromoteMemoryToRegisterPass();
       (void) llvm::createDemoteRegisterToMemoryPass();
       (void) llvm::createPruneEHPass();
-      (void) llvm::createRaiseAllocationsPass();
+      (void) llvm::createPostDomOnlyPrinterPass();
+      (void) llvm::createPostDomPrinterPass();
+      (void) llvm::createPostDomOnlyViewerPass();
+      (void) llvm::createPostDomViewerPass();
       (void) llvm::createReassociatePass();
       (void) llvm::createSCCPPass();
       (void) llvm::createScalarReplAggregatesPass();
@@ -113,13 +120,9 @@ namespace {
       (void) llvm::createTailDuplicationPass();
       (void) llvm::createJumpThreadingPass();
       (void) llvm::createUnifyFunctionExitNodesPass();
-      (void) llvm::createCondPropagationPass();
       (void) llvm::createNullProfilerRSPass();
       (void) llvm::createRSProfilingPass();
-      (void) llvm::createIndMemRemPass();
       (void) llvm::createInstCountPass();
-      (void) llvm::createPredicateSimplifierPass();
-      (void) llvm::createCodeGenLICMPass();
       (void) llvm::createCodeGenPreparePass();
       (void) llvm::createGVNPass();
       (void) llvm::createMemCpyOptPass();
@@ -136,11 +139,13 @@ namespace {
       (void) llvm::createPartialInliningPass();
       (void) llvm::createSSIPass();
       (void) llvm::createSSIEverythingPass();
+      (void) llvm::createGEPSplitterPass();
+      (void) llvm::createSCCVNPass();
+      (void) llvm::createABCDPass();
 
       (void)new llvm::IntervalPartition();
       (void)new llvm::FindUsedTypes();
       (void)new llvm::ScalarEvolution();
-      (void)new llvm::LoopVR();
       (void)new llvm::PointerTracking();
       ((llvm::Function*)0)->viewCFGOnly();
       llvm::AliasSetTracker X(*(llvm::AliasAnalysis*)0);
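
Everything in this namespace exists only to defeat dead-stripping: naming
each create*Pass() forces the linker to keep the object files that define
them, while the impossible getenv guard ensures none of it ever runs. The
idiom in isolation, with a hypothetical createFooPass standing in for the
real creator functions:

    #include <cstdlib>

    class Pass {};
    Pass *createFooPass() { return 0; }  // really lives in a pass library

    namespace {
      struct ForceLinking {
        ForceLinking() {
          if (std::getenv("bar") != (char*) -1)
            return;                 // always returns in practice
          (void) createFooPass();   // unreachable, but keeps the symbol
        }
      } ForceLinkingObj;
    }
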
diff --git a/libclamav/c++/llvm/include/llvm/Linker.h b/libclamav/c++/llvm/include/llvm/Linker.h
index 1e1da86..a68a2e0 100644
--- a/libclamav/c++/llvm/include/llvm/Linker.h
+++ b/libclamav/c++/llvm/include/llvm/Linker.h
@@ -65,8 +65,8 @@ class Linker {
     /// Construct the Linker with an empty module which will be given the
     /// name \p progname. \p progname will also be used for error messages.
     /// @brief Construct with empty module
-    Linker(const StringRef &progname, ///< name of tool running linker
-           const StringRef &modulename, ///< name of linker's end-result module
+    Linker(StringRef progname, ///< name of tool running linker
+           StringRef modulename, ///< name of linker's end-result module
            LLVMContext &C, ///< Context for global info
            unsigned Flags = 0  ///< ControlFlags (one or more |'d together)
     );
@@ -74,7 +74,7 @@ class Linker {
     /// Construct the Linker with a previously defined module, \p aModule. Use
     /// \p progname for the name of the program in error messages.
     /// @brief Construct with existing module
-    Linker(const StringRef& progname, Module* aModule, unsigned Flags = 0);
+    Linker(StringRef progname, Module* aModule, unsigned Flags = 0);
 
     /// Destruct the Linker.
     /// @brief Destructor
@@ -214,8 +214,8 @@ class Linker {
     /// @returns true if an error occurs, false otherwise
     /// @brief Link one library into the module
     bool LinkInLibrary (
-      const StringRef &Library, ///< The library to link in
-      bool& is_native             ///< Indicates if lib a native library
+      StringRef Library, ///< The library to link in
+      bool& is_native    ///< Indicates if lib a native library
     );
 
     /// This function links one bitcode archive, \p Filename, into the module.
@@ -267,7 +267,7 @@ class Linker {
     /// will be empty (i.e. sys::Path::isEmpty() will return true).
     /// @returns A sys::Path to the found library
     /// @brief Find a library from its short name.
-    sys::Path FindLib(const StringRef &Filename);
+    sys::Path FindLib(StringRef Filename);
 
   /// @}
   /// @name Implementation
@@ -277,9 +277,9 @@ class Linker {
     /// Module it contains (wrapped in an auto_ptr), or 0 if an error occurs.
     std::auto_ptr<Module> LoadObject(const sys::Path& FN);
 
-    bool warning(const StringRef &message);
-    bool error(const StringRef &message);
-    void verbose(const StringRef &message);
+    bool warning(StringRef message);
+    bool error(StringRef message);
+    void verbose(StringRef message);
 
   /// @}
   /// @name Data
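
The sweep from const StringRef& to plain StringRef follows from the type's
shape: it is only a data pointer plus a length, so passing it by value
costs the same as passing a reference and removes an indirection.
Illustrative sketch:

    #include "llvm/ADT/StringRef.h"

    // StringRef is two words, so by-value lets it travel in registers.
    static bool startsWithDot(llvm::StringRef Name) {
      return !Name.empty() && Name[0] == '.';
    }
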
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCAsmLexer.h b/libclamav/c++/llvm/include/llvm/MC/MCAsmLexer.h
index e66425a..da471d2 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCAsmLexer.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCAsmLexer.h
@@ -11,7 +11,7 @@
 #define LLVM_MC_MCASMLEXER_H
 
 #include "llvm/ADT/StringRef.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
 class MCAsmLexer;
@@ -56,7 +56,7 @@ struct AsmToken {
 
 public:
   AsmToken() {}
-  AsmToken(TokenKind _Kind, const StringRef &_Str, int64_t _IntVal = 0)
+  AsmToken(TokenKind _Kind, StringRef _Str, int64_t _IntVal = 0)
     : Kind(_Kind), Str(_Str), IntVal(_IntVal) {}
 
   TokenKind getKind() const { return Kind; }
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCAsmParser.h b/libclamav/c++/llvm/include/llvm/MC/MCAsmParser.h
index c1b5d13..d530093 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCAsmParser.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCAsmParser.h
@@ -10,7 +10,7 @@
 #ifndef LLVM_MC_MCASMPARSER_H
 #define LLVM_MC_MCASMPARSER_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
 class MCAsmLexer;
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCAssembler.h b/libclamav/c++/llvm/include/llvm/MC/MCAssembler.h
index 892f548..8656927 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCAssembler.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCAssembler.h
@@ -13,17 +13,18 @@
 #include "llvm/ADT/SmallString.h"
 #include "llvm/ADT/ilist.h"
 #include "llvm/ADT/ilist_node.h"
-#include "llvm/MC/MCValue.h"
 #include "llvm/Support/Casting.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <vector> // FIXME: Shouldn't be needed.
 
 namespace llvm {
 class raw_ostream;
 class MCAssembler;
 class MCContext;
+class MCExpr;
 class MCSection;
 class MCSectionData;
+class MCSymbol;
 
 class MCFragment : public ilist_node<MCFragment> {
   MCFragment(const MCFragment&);     // DO NOT IMPLEMENT
@@ -174,7 +175,7 @@ public:
 
 class MCFillFragment : public MCFragment {
   /// Value - Value to use for filling bytes.
-  MCValue Value;
+  const MCExpr *Value;
 
   /// ValueSize - The size (in bytes) of \arg Value to use when filling.
   unsigned ValueSize;
@@ -183,10 +184,10 @@ class MCFillFragment : public MCFragment {
   uint64_t Count;
 
 public:
-  MCFillFragment(MCValue _Value, unsigned _ValueSize, uint64_t _Count,
+  MCFillFragment(const MCExpr &_Value, unsigned _ValueSize, uint64_t _Count,
                  MCSectionData *SD = 0) 
     : MCFragment(FT_Fill, SD),
-      Value(_Value), ValueSize(_ValueSize), Count(_Count) {}
+      Value(&_Value), ValueSize(_ValueSize), Count(_Count) {}
 
   /// @name Accessors
   /// @{
@@ -195,7 +196,7 @@ public:
     return ValueSize * Count;
   }
 
-  MCValue getValue() const { return Value; }
+  const MCExpr &getValue() const { return *Value; }
   
   unsigned getValueSize() const { return ValueSize; }
 
@@ -211,15 +212,15 @@ public:
 
 class MCOrgFragment : public MCFragment {
   /// Offset - The offset this fragment should start at.
-  MCValue Offset;
+  const MCExpr *Offset;
 
   /// Value - Value to use for filling bytes.  
   int8_t Value;
 
 public:
-  MCOrgFragment(MCValue _Offset, int8_t _Value, MCSectionData *SD = 0)
+  MCOrgFragment(const MCExpr &_Offset, int8_t _Value, MCSectionData *SD = 0)
     : MCFragment(FT_Org, SD),
-      Offset(_Offset), Value(_Value) {}
+      Offset(&_Offset), Value(_Value) {}
 
   /// @name Accessors
   /// @{
@@ -229,7 +230,7 @@ public:
     return ~UINT64_C(0);
   }
 
-  MCValue getOffset() const { return Offset; }
+  const MCExpr &getOffset() const { return *Offset; }
   
   uint8_t getValue() const { return Value; }
 
@@ -294,10 +295,7 @@ public:
     uint64_t Offset;
 
     /// Value - The expression to eventually write into the fragment.
-    //
-    // FIXME: We could probably get away with requiring the client to pass in an
-    // owned reference whose lifetime extends past that of the fixup.
-    MCValue Value;
+    const MCExpr *Value;
 
     /// Size - The fixup size.
     unsigned Size;
@@ -308,9 +306,9 @@ public:
     uint64_t FixedValue;
 
   public:
-    Fixup(MCFragment &_Fragment, uint64_t _Offset, const MCValue &_Value, 
+    Fixup(MCFragment &_Fragment, uint64_t _Offset, const MCExpr &_Value,
           unsigned _Size) 
-      : Fragment(&_Fragment), Offset(_Offset), Value(_Value), Size(_Size),
+      : Fragment(&_Fragment), Offset(_Offset), Value(&_Value), Size(_Size),
         FixedValue(0) {}
   };
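
MCFillFragment, MCOrgFragment, and Fixup now hold a const MCExpr* instead of
an eagerly evaluated MCValue; the expression is owned by the MCContext and
evaluated at layout time. A sketch of building a fill fragment under the new
constructor (a hypothetical helper, assuming the r90002-era MC headers shown
here; MCConstantExpr::Create is this era's factory for constant expressions):

    #include "llvm/MC/MCAssembler.h"
    #include "llvm/MC/MCContext.h"
    #include "llvm/MC/MCExpr.h"
    using namespace llvm;

    // Append Count zero bytes to the section data SD.
    MCFillFragment *makeZeroFill(MCContext &Ctx, MCSectionData *SD,
                                 uint64_t Count) {
      // Allocated in (and owned by) the context; the fragment keeps
      // only the pointer and evaluates it when layout runs.
      const MCExpr *Zero = MCConstantExpr::Create(0, Ctx);
      return new MCFillFragment(*Zero, /*ValueSize=*/1, Count, SD);
    }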
 
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCContext.h b/libclamav/c++/llvm/include/llvm/MC/MCContext.h
index 955aa8b..95c6bd4 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCContext.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCContext.h
@@ -15,10 +15,11 @@
 #include "llvm/Support/Allocator.h"
 
 namespace llvm {
-  class MCValue;
+  class MCExpr;
   class MCSection;
   class MCSymbol;
   class StringRef;
+  class Twine;
 
   /// MCContext - Context object for machine code objects.  This class owns all
   /// of the sections that it creates.
@@ -33,11 +34,6 @@ namespace llvm {
     /// Symbols - Bindings of names to symbols.
     StringMap<MCSymbol*> Symbols;
 
-    /// SymbolValues - Bindings of symbols to values.
-    //
-    // FIXME: Is there a good reason to not just put this in the MCSymbol?
-    DenseMap<const MCSymbol*, MCValue> SymbolValues;
-
     /// Allocator - Allocator object used for creating machine code objects.
     ///
     /// We use a bump pointer allocator to avoid the need to track all allocated
@@ -53,7 +49,7 @@ namespace llvm {
     /// CreateSymbol - Create a new symbol with the specified @param Name.
     ///
     /// @param Name - The symbol name, which must be unique across all symbols.
-    MCSymbol *CreateSymbol(const StringRef &Name);
+    MCSymbol *CreateSymbol(StringRef Name);
 
     /// GetOrCreateSymbol - Lookup the symbol inside with the specified
     /// @param Name.  If it exists, return it.  If not, create a forward
@@ -62,39 +58,26 @@ namespace llvm {
     /// @param Name - The symbol name, which must be unique across all symbols.
     /// @param IsTemporary - Whether this symbol is an assembler temporary,
     /// which should not survive into the symbol table for the translation unit.
-    MCSymbol *GetOrCreateSymbol(const StringRef &Name);
-    
+    MCSymbol *GetOrCreateSymbol(StringRef Name);
+    MCSymbol *GetOrCreateSymbol(const Twine &Name);
+
     /// CreateTemporarySymbol - Create a new temporary symbol with the specified
     /// @param Name.
     ///
     /// @param Name - The symbol name, for debugging purposes only, temporary
     /// symbols do not survive assembly. If non-empty, the name must be unique
     /// across all symbols.
-    MCSymbol *CreateTemporarySymbol(const StringRef &Name = "");
+    MCSymbol *CreateTemporarySymbol(StringRef Name = "");
 
     /// LookupSymbol - Get the symbol for @param Name, or null.
-    MCSymbol *LookupSymbol(const StringRef &Name) const;
-
-    /// @}
-    /// @name Symbol Value Table
-    /// @{
-
-    /// ClearSymbolValue - Erase a value binding for @arg Symbol, if one exists.
-    void ClearSymbolValue(const MCSymbol *Symbol);
-
-    /// SetSymbolValue - Set the value binding for @arg Symbol to @arg Value.
-    void SetSymbolValue(const MCSymbol *Symbol, const MCValue &Value);
-
-    /// GetSymbolValue - Return the current value for @arg Symbol, or null if
-    /// none exists.
-    const MCValue *GetSymbolValue(const MCSymbol *Symbol) const;
+    MCSymbol *LookupSymbol(StringRef Name) const;
 
     /// @}
 
     void *Allocate(unsigned Size, unsigned Align = 8) {
       return Allocator.Allocate(Size, Align);
     }
-    void Deallocate(void *Ptr) { 
+    void Deallocate(void *Ptr) {
     }
   };
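
With the symbol value table gone (symbol bindings move onto MCSymbol, below)
and a Twine overload added, composite symbol names can be built without
materializing a std::string first. A usage sketch against the declarations
above (hypothetical helper):

    #include "llvm/ADT/Twine.h"
    #include "llvm/MC/MCContext.h"
    #include "llvm/MC/MCSymbol.h"
    using namespace llvm;

    MCSymbol *makeBlockLabel(MCContext &Ctx, unsigned FnNum, unsigned BBNum) {
      // The Twine tree is flattened once, inside GetOrCreateSymbol.
      return Ctx.GetOrCreateSymbol("BB" + Twine(FnNum) + "_" + Twine(BBNum));
    }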
 
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCDisassembler.h b/libclamav/c++/llvm/include/llvm/MC/MCDisassembler.h
index ef10b80..ffa0e41 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCDisassembler.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCDisassembler.h
@@ -9,7 +9,7 @@
 #ifndef MCDISASSEMBLER_H
 #define MCDISASSEMBLER_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
   
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCExpr.h b/libclamav/c++/llvm/include/llvm/MC/MCExpr.h
index 19a32e7..13d40ec 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCExpr.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCExpr.h
@@ -11,7 +11,7 @@
 #define LLVM_MC_MCEXPR_H
 
 #include "llvm/Support/Casting.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
 class MCAsmInfo;
@@ -62,14 +62,14 @@ public:
   ///
   /// @param Res - The absolute value, if evaluation succeeds.
   /// @result - True on success.
-  bool EvaluateAsAbsolute(MCContext &Ctx, int64_t &Res) const;
+  bool EvaluateAsAbsolute(int64_t &Res) const;
 
   /// EvaluateAsRelocatable - Try to evaluate the expression to a relocatable
   /// value, i.e. an expression of the fixed form (a - b + constant).
   ///
   /// @param Res - The relocatable value, if evaluation succeeds.
   /// @result - True on success.
-  bool EvaluateAsRelocatable(MCContext &Ctx, MCValue &Res) const;
+  bool EvaluateAsRelocatable(MCValue &Res) const;
 
   /// @}
 
@@ -120,10 +120,8 @@ public:
   /// @{
 
   static const MCSymbolRefExpr *Create(const MCSymbol *Symbol, MCContext &Ctx);
-  static const MCSymbolRefExpr *Create(const StringRef &Name, MCContext &Ctx);
-  
-  
-  
+  static const MCSymbolRefExpr *Create(StringRef Name, MCContext &Ctx);
+
   /// @}
   /// @name Accessors
   /// @{
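
EvaluateAsAbsolute and EvaluateAsRelocatable drop their MCContext parameter:
variable-symbol bindings now live on MCSymbol itself (see the MCSymbol diff
below), so evaluation no longer needs the context's side table. A sketch of
folding an expression to a constant under the new signature:

    #include "llvm/MC/MCExpr.h"
    using namespace llvm;

    // True (and Out set) iff E folds to a compile-time constant.
    bool foldToConstant(const MCExpr *E, int64_t &Out) {
      return E->EvaluateAsAbsolute(Out);   // no MCContext argument anymore
    }
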
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCInst.h b/libclamav/c++/llvm/include/llvm/MC/MCInst.h
index 0fc4d18..29b38dd 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCInst.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCInst.h
@@ -17,7 +17,7 @@
 #define LLVM_MC_MCINST_H
 
 #include "llvm/ADT/SmallVector.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
 class raw_ostream;
@@ -43,7 +43,6 @@ class MCOperand {
 public:
   
   MCOperand() : Kind(kInvalid) {}
-  MCOperand(const MCOperand &RHS) { *this = RHS; }
 
   bool isValid() const { return Kind != kInvalid; }
   bool isReg() const { return Kind == kRegister; }
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCSection.h b/libclamav/c++/llvm/include/llvm/MC/MCSection.h
index 9e07186..ceb6d27 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCSection.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCSection.h
@@ -51,13 +51,13 @@ namespace llvm {
     /// of a syntactic one.
     bool IsDirective;
     
-    MCSectionCOFF(const StringRef &name, bool isDirective, SectionKind K)
+    MCSectionCOFF(StringRef name, bool isDirective, SectionKind K)
       : MCSection(K), Name(name), IsDirective(isDirective) {
     }
   public:
     
-    static MCSectionCOFF *Create(const StringRef &Name, bool IsDirective, 
-                                   SectionKind K, MCContext &Ctx);
+    static MCSectionCOFF *Create(StringRef Name, bool IsDirective, 
+                                 SectionKind K, MCContext &Ctx);
 
     const std::string &getName() const { return Name; }
     bool isDirective() const { return IsDirective; }
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCSectionELF.h b/libclamav/c++/llvm/include/llvm/MC/MCSectionELF.h
index 57fa903..4ec745f 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCSectionELF.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCSectionELF.h
@@ -35,13 +35,13 @@ class MCSectionELF : public MCSection {
   bool IsExplicit;
   
 protected:
-  MCSectionELF(const StringRef &Section, unsigned type, unsigned flags,
+  MCSectionELF(StringRef Section, unsigned type, unsigned flags,
                SectionKind K, bool isExplicit)
     : MCSection(K), SectionName(Section.str()), Type(type), Flags(flags), 
       IsExplicit(isExplicit) {}
 public:
   
-  static MCSectionELF *Create(const StringRef &Section, unsigned Type, 
+  static MCSectionELF *Create(StringRef Section, unsigned Type, 
                               unsigned Flags, SectionKind K, bool isExplicit,
                               MCContext &Ctx);
 
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCSectionMachO.h b/libclamav/c++/llvm/include/llvm/MC/MCSectionMachO.h
index 251c88f..6156819 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCSectionMachO.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCSectionMachO.h
@@ -33,7 +33,7 @@ class MCSectionMachO : public MCSection {
   /// size of stubs, for example.
   unsigned Reserved2;
   
-  MCSectionMachO(const StringRef &Segment, const StringRef &Section,
+  MCSectionMachO(StringRef Segment, StringRef Section,
                  unsigned TAA, unsigned reserved2, SectionKind K)
     : MCSection(K), TypeAndAttributes(TAA), Reserved2(reserved2) {
     assert(Segment.size() <= 16 && Section.size() <= 16 &&
@@ -52,8 +52,8 @@ class MCSectionMachO : public MCSection {
   }
 public:
   
-  static MCSectionMachO *Create(const StringRef &Segment,
-                                const StringRef &Section,
+  static MCSectionMachO *Create(StringRef Segment,
+                                StringRef Section,
                                 unsigned TypeAndAttributes,
                                 unsigned Reserved2,
                                 SectionKind K, MCContext &Ctx);
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCStreamer.h b/libclamav/c++/llvm/include/llvm/MC/MCStreamer.h
index 248e6b0..5febed7 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCStreamer.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCStreamer.h
@@ -14,7 +14,7 @@
 #ifndef LLVM_MC_MCSTREAMER_H
 #define LLVM_MC_MCSTREAMER_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
   class MCAsmInfo;
@@ -155,7 +155,7 @@ namespace llvm {
     ///
     /// This is used to implement assembler directives such as .byte, .ascii,
     /// etc.
-    virtual void EmitBytes(const StringRef &Data) = 0;
+    virtual void EmitBytes(StringRef Data) = 0;
 
     /// EmitValue - Emit the expression @param Value into the output as a native
     /// integer of the given @param Size bytes.
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCSymbol.h b/libclamav/c++/llvm/include/llvm/MC/MCSymbol.h
index 5dd7d68..cfe04d8 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCSymbol.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCSymbol.h
@@ -16,10 +16,11 @@
 
 #include <string>
 #include "llvm/ADT/StringRef.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
   class MCAsmInfo;
+  class MCExpr;
   class MCSection;
   class MCContext;
   class raw_ostream;
@@ -45,6 +46,9 @@ namespace llvm {
     /// absolute symbols.
     const MCSection *Section;
 
+    /// Value - If non-null, the value for a variable symbol.
+    const MCExpr *Value;
+
     /// IsTemporary - True if this is an assembler temporary label, which
     /// typically does not survive in the .o file's symbol table.  Usually
     /// "Lfoo" or ".foo".
@@ -52,9 +56,9 @@ namespace llvm {
 
   private:  // MCContext creates and uniques these.
     friend class MCContext;
-    MCSymbol(const StringRef &_Name, bool _IsTemporary) 
-      : Name(_Name), Section(0), IsTemporary(_IsTemporary) {}
-    
+    MCSymbol(StringRef _Name, bool _IsTemporary)
+      : Name(_Name), Section(0), Value(0), IsTemporary(_IsTemporary) {}
+
     MCSymbol(const MCSymbol&);       // DO NOT IMPLEMENT
     void operator=(const MCSymbol&); // DO NOT IMPLEMENT
   public:
@@ -69,6 +73,10 @@ namespace llvm {
       return IsTemporary;
     }
 
+    /// @}
+    /// @name Associated Sections
+    /// @{
+
     /// isDefined - Check if this symbol is defined (i.e., it has an address).
     ///
     /// Defined symbols are either absolute or in some section.
@@ -105,6 +113,23 @@ namespace llvm {
     void setAbsolute() { Section = AbsolutePseudoSection; }
 
     /// @}
+    /// @name Variable Symbols
+    /// @{
+
+    /// isVariable - Check if this is a variable symbol.
+    bool isVariable() const {
+      return Value != 0;
+    }
+
+    /// getValue - Get the value for variable symbols, or null if the symbol
+    /// is not a variable.
+    const MCExpr *getValue() const { return Value; }
+
+    void setValue(const MCExpr *Value) {
+      this->Value = Value;
+    }
+
+    /// @}
 
     /// print - Print the value to the stream \arg OS.
     void print(raw_ostream &OS, const MCAsmInfo *MAI) const;
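
The counterpart to the MCContext cleanup above: a variable symbol's value is
stored on the MCSymbol itself. A sketch of modeling an assembler
".set alias, expr" directive with the new accessors (hypothetical helper,
assuming the era APIs shown in this patch):

    #include "llvm/MC/MCContext.h"
    #include "llvm/MC/MCExpr.h"
    #include "llvm/MC/MCSymbol.h"
    #include <cassert>
    using namespace llvm;

    // Bind Name to an expression, as ".set Name, Expr" would.
    MCSymbol *defineAlias(MCContext &Ctx, StringRef Name, const MCExpr *Expr) {
      MCSymbol *Sym = Ctx.GetOrCreateSymbol(Name);
      Sym->setValue(Expr);                 // stored on the symbol itself
      assert(Sym->isVariable() && Sym->getValue() == Expr);
      return Sym;
    }
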
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCValue.h b/libclamav/c++/llvm/include/llvm/MC/MCValue.h
index 62aca6e..4f5ab31 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCValue.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCValue.h
@@ -14,7 +14,7 @@
 #ifndef LLVM_MC_MCVALUE_H
 #define LLVM_MC_MCVALUE_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/MC/MCSymbol.h"
 #include <cassert>
 
diff --git a/libclamav/c++/llvm/include/llvm/Metadata.h b/libclamav/c++/llvm/include/llvm/Metadata.h
index e441481..1d18eba 100644
--- a/libclamav/c++/llvm/include/llvm/Metadata.h
+++ b/libclamav/c++/llvm/include/llvm/Metadata.h
@@ -13,46 +13,30 @@
 //
 //===----------------------------------------------------------------------===//
 
-#ifndef LLVM_MDNODE_H
-#define LLVM_MDNODE_H
+#ifndef LLVM_METADATA_H
+#define LLVM_METADATA_H
 
-#include "llvm/User.h"
+#include "llvm/Value.h"
 #include "llvm/Type.h"
-#include "llvm/OperandTraits.h"
 #include "llvm/ADT/FoldingSet.h"
 #include "llvm/ADT/SmallVector.h"
-#include "llvm/ADT/SmallPtrSet.h"
-#include "llvm/ADT/DenseMap.h"
-#include "llvm/ADT/StringMap.h"
 #include "llvm/ADT/ilist_node.h"
-#include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/ValueHandle.h"
 
 namespace llvm {
 class Constant;
 class Instruction;
 class LLVMContext;
+class MetadataContextImpl;
 
 //===----------------------------------------------------------------------===//
 // MetadataBase  - A base class for MDNode, MDString and NamedMDNode.
-class MetadataBase : public User {
-private:
-  /// ReservedSpace - The number of operands actually allocated.  NumOperands is
-  /// the number actually in use.
-  unsigned ReservedSpace;
-
+class MetadataBase : public Value {
 protected:
   MetadataBase(const Type *Ty, unsigned scid)
-    : User(Ty, scid, NULL, 0), ReservedSpace(0) {}
+    : Value(Ty, scid) {}
 
-  void resizeOperands(unsigned NumOps);
 public:
-  /// isNullValue - Return true if this is the value that would be returned by
-  /// getNullValue.  This always returns false because getNullValue will never
-  /// produce metadata.
-  virtual bool isNullValue() const {
-    return false;
-  }
 
   /// Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const MetadataBase *) { return true; }
@@ -68,32 +52,29 @@ public:
 /// MDString is always unnamed.
 class MDString : public MetadataBase {
   MDString(const MDString &);            // DO NOT IMPLEMENT
-  void *operator new(size_t, unsigned);  // DO NOT IMPLEMENT
-  unsigned getNumOperands();             // DO NOT IMPLEMENT
 
   StringRef Str;
 protected:
-  explicit MDString(LLVMContext &C, const char *begin, unsigned l)
-    : MetadataBase(Type::getMetadataTy(C), Value::MDStringVal), Str(begin, l) {}
+  explicit MDString(LLVMContext &C, StringRef S)
+    : MetadataBase(Type::getMetadataTy(C), Value::MDStringVal), Str(S) {}
 
 public:
-  // Do not allocate any space for operands.
-  void *operator new(size_t s) {
-    return User::operator new(s, 0);
-  }
-  static MDString *get(LLVMContext &Context, const StringRef &Str);
+  static MDString *get(LLVMContext &Context, StringRef Str);
+  static MDString *get(LLVMContext &Context, const char *Str);
   
   StringRef getString() const { return Str; }
 
-  unsigned length() const { return Str.size(); }
+  unsigned getLength() const { return (unsigned)Str.size(); }
 
+  typedef StringRef::iterator iterator;
+  
   /// begin() - Pointer to the first byte of the string.
   ///
-  const char *begin() const { return Str.begin(); }
+  iterator begin() const { return Str.begin(); }
 
   /// end() - Pointer to one byte past the end of the string.
   ///
-  const char *end() const { return Str.end(); }
+  iterator end() const { return Str.end(); }
 
   /// Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const MDString *) { return true; }
@@ -108,14 +89,12 @@ public:
 /// MDNode is always unnamed.
 class MDNode : public MetadataBase, public FoldingSetNode {
   MDNode(const MDNode &);                // DO NOT IMPLEMENT
-  void *operator new(size_t, unsigned);  // DO NOT IMPLEMENT
-  // getNumOperands - Make this only available for private uses.
-  unsigned getNumOperands() { return User::getNumOperands();  }
 
   friend class ElementVH;
   // Use CallbackVH to hold MDNode elements.
   struct ElementVH : public CallbackVH {
     MDNode *Parent;
+    ElementVH() {}
     ElementVH(Value *V, MDNode *P) : CallbackVH(V), Parent(P) {}
     ~ElementVH() {}
 
@@ -130,61 +109,32 @@ class MDNode : public MetadataBase, public FoldingSetNode {
   // Replace each instance of F from the element list of this node with T.
   void replaceElement(Value *F, Value *T);
 
-  SmallVector<ElementVH, 4> Node;
+  ElementVH *Node;
+  unsigned NodeSize;
 
 protected:
-  explicit MDNode(LLVMContext &C, Value*const* Vals, unsigned NumVals);
+  explicit MDNode(LLVMContext &C, Value *const *Vals, unsigned NumVals);
 public:
-  // Do not allocate any space for operands.
-  void *operator new(size_t s) {
-    return User::operator new(s, 0);
-  }
   // Constructors and destructors.
   static MDNode *get(LLVMContext &Context, 
-                     Value* const* Vals, unsigned NumVals);
-
-  /// dropAllReferences - Remove all uses and clear node vector.
-  void dropAllReferences();
+                     Value *const *Vals, unsigned NumVals);
 
   /// ~MDNode - Destroy MDNode.
   ~MDNode();
   
   /// getElement - Return specified element.
   Value *getElement(unsigned i) const {
-    assert (getNumElements() > i && "Invalid element number!");
+    assert(i < getNumElements() && "Invalid element number!");
     return Node[i];
   }
 
   /// getNumElements - Return number of MDNode elements.
-  unsigned getNumElements() const {
-    return Node.size();
-  }
-
-  // Element access
-  typedef SmallVectorImpl<ElementVH>::const_iterator const_elem_iterator;
-  typedef SmallVectorImpl<ElementVH>::iterator elem_iterator;
-  /// elem_empty - Return true if MDNode is empty.
-  bool elem_empty() const                { return Node.empty(); }
-  const_elem_iterator elem_begin() const { return Node.begin(); }
-  const_elem_iterator elem_end() const   { return Node.end();   }
-  elem_iterator elem_begin()             { return Node.begin(); }
-  elem_iterator elem_end()               { return Node.end();   }
-
-  /// isNullValue - Return true if this is the value that would be returned by
-  /// getNullValue.  This always returns false because getNullValue will never
-  /// produce metadata.
-  virtual bool isNullValue() const {
-    return false;
-  }
+  unsigned getNumElements() const { return NodeSize; }
 
   /// Profile - calculate a unique identifier for this MDNode to collapse
   /// duplicates
   void Profile(FoldingSetNodeID &ID) const;
 
-  virtual void replaceUsesOfWithOnConstant(Value *From, Value *To, Use *U) {
-    llvm_unreachable("This should never be called because MDNodes have no ops");
-  }
-
   /// Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const MDNode *) { return true; }
   static bool classof(const Value *V) {
@@ -193,23 +143,6 @@ public:
 };
 
 //===----------------------------------------------------------------------===//
-/// WeakMetadataVH - a weak value handle for metadata.
-class WeakMetadataVH : public WeakVH {
-public:
-  WeakMetadataVH() : WeakVH() {}
-  WeakMetadataVH(MetadataBase *M) : WeakVH(M) {}
-  WeakMetadataVH(const WeakMetadataVH &RHS) : WeakVH(RHS) {}
-  
-  operator Value*() const {
-    llvm_unreachable("WeakMetadataVH only handles Metadata");
-  }
-
-  operator MetadataBase*() const {
-   return dyn_cast_or_null<MetadataBase>(getValPtr());
-  }
-};
-
-//===----------------------------------------------------------------------===//
 /// NamedMDNode - a tuple of other metadata.
 /// NamedMDNode is always named. All NamedMDNode elements are of metadata type.
 template<typename ValueSubClass, typename ItemParentClass>
@@ -220,24 +153,17 @@ class NamedMDNode : public MetadataBase, public ilist_node<NamedMDNode> {
   friend class LLVMContextImpl;
 
   NamedMDNode(const NamedMDNode &);      // DO NOT IMPLEMENT
-  void *operator new(size_t, unsigned);  // DO NOT IMPLEMENT
-  // getNumOperands - Make this only available for private uses.
-  unsigned getNumOperands() { return User::getNumOperands();  }
 
   Module *Parent;
-  SmallVector<WeakMetadataVH, 4> Node;
-  typedef SmallVectorImpl<WeakMetadataVH>::iterator elem_iterator;
+  SmallVector<TrackingVH<MetadataBase>, 4> Node;
 
+  void setParent(Module *M) { Parent = M; }
 protected:
-  explicit NamedMDNode(LLVMContext &C, const Twine &N, MetadataBase*const* Vals, 
+  explicit NamedMDNode(LLVMContext &C, const Twine &N, MetadataBase*const *Vals, 
                        unsigned NumVals, Module *M = 0);
 public:
-  // Do not allocate any space for operands.
-  void *operator new(size_t s) {
-    return User::operator new(s, 0);
-  }
   static NamedMDNode *Create(LLVMContext &C, const Twine &N, 
-                             MetadataBase*const*MDs, 
+                             MetadataBase *const *MDs, 
                              unsigned NumMDs, Module *M = 0) {
     return new NamedMDNode(C, N, MDs, NumMDs, M);
   }
@@ -257,45 +183,32 @@ public:
   /// getParent - Get the module that holds this named metadata collection.
   inline Module *getParent() { return Parent; }
   inline const Module *getParent() const { return Parent; }
-  void setParent(Module *M) { Parent = M; }
 
   /// getElement - Return specified element.
   MetadataBase *getElement(unsigned i) const {
-    assert (getNumElements() > i && "Invalid element number!");
+    assert(i < getNumElements() && "Invalid element number!");
     return Node[i];
   }
 
   /// getNumElements - Return number of NamedMDNode elements.
   unsigned getNumElements() const {
-    return Node.size();
+    return (unsigned)Node.size();
   }
 
   /// addElement - Add metadata element.
   void addElement(MetadataBase *M) {
-    resizeOperands(0);
-    OperandList[NumOperands++] = M;
-    Node.push_back(WeakMetadataVH(M));
+    Node.push_back(TrackingVH<MetadataBase>(M));
   }
 
-  typedef SmallVectorImpl<WeakMetadataVH>::const_iterator const_elem_iterator;
+  typedef SmallVectorImpl<TrackingVH<MetadataBase> >::iterator elem_iterator;
+  typedef SmallVectorImpl<TrackingVH<MetadataBase> >::const_iterator 
+    const_elem_iterator;
   bool elem_empty() const                { return Node.empty(); }
   const_elem_iterator elem_begin() const { return Node.begin(); }
   const_elem_iterator elem_end() const   { return Node.end();   }
   elem_iterator elem_begin()             { return Node.begin(); }
   elem_iterator elem_end()               { return Node.end();   }
 
-  /// isNullValue - Return true if this is the value that would be returned by
-  /// getNullValue.  This always returns false because getNullValue will never
-  /// produce metadata.
-  virtual bool isNullValue() const {
-    return false;
-  }
-
-  virtual void replaceUsesOfWithOnConstant(Value *From, Value *To, Use *U) {
-    llvm_unreachable(
-                "This should never be called because NamedMDNodes have no ops");
-  }
-
   /// Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const NamedMDNode *) { return true; }
   static bool classof(const Value *V) {
@@ -310,57 +223,56 @@ public:
 /// must start with a letter or one of '$._'. The regular expression used to
 /// check names is [a-zA-Z$._][a-zA-Z$._0-9]*
 class MetadataContext {
-public:
-  typedef std::pair<unsigned, WeakVH> MDPairTy;
-  typedef SmallVector<MDPairTy, 2> MDMapTy;
-  typedef DenseMap<const Instruction *, MDMapTy> MDStoreTy;
-  friend class BitcodeReader;
-private:
-
-  /// MetadataStore - Collection of metadata used in this context.
-  MDStoreTy MetadataStore;
-
-  /// MDHandlerNames - Map to hold metadata handler names.
-  StringMap<unsigned> MDHandlerNames;
+  // DO NOT IMPLEMENT
+  MetadataContext(MetadataContext&);
+  void operator=(MetadataContext&);
 
+  MetadataContextImpl *const pImpl;
 public:
-  /// RegisterMDKind - Register a new metadata kind and return its ID.
+  MetadataContext();
+  ~MetadataContext();
+
+  /// registerMDKind - Register a new metadata kind and return its ID.
   /// A metadata kind can be registered only once. 
-  unsigned RegisterMDKind(const char *Name);
+  unsigned registerMDKind(StringRef Name);
 
   /// getMDKind - Return metadata kind. If the requested metadata kind
   /// is not registered then return 0.
-  unsigned getMDKind(const char *Name);
+  unsigned getMDKind(StringRef Name) const;
 
-  /// validName - Return true if Name is a valid custom metadata handler name.
-  bool validName(const char *Name);
+  /// isValidName - Return true if Name is a valid custom metadata handler name.
+  static bool isValidName(StringRef Name);
 
-  /// getMD - Get the metadata of given kind attached with an Instruction.
+  /// getMD - Get the metadata of given kind attached to an Instruction.
   /// If the metadata is not found then return 0.
   MDNode *getMD(unsigned Kind, const Instruction *Inst);
 
-  /// getMDs - Get the metadata attached with an Instruction.
-  const MDMapTy *getMDs(const Instruction *Inst);
+  /// getMDs - Get the metadata attached to an Instruction.
+  void getMDs(const Instruction *Inst, 
+        SmallVectorImpl<std::pair<unsigned, TrackingVH<MDNode> > > &MDs) const;
 
-  /// addMD - Attach the metadata of given kind with an Instruction.
+  /// addMD - Attach the metadata of given kind to an Instruction.
   void addMD(unsigned Kind, MDNode *Node, Instruction *Inst);
   
   /// removeMD - Remove metadata of given kind attached to an instruction.
   void removeMD(unsigned Kind, Instruction *Inst);
   
-  /// removeMDs - Remove all metadata attached with an instruction.
-  void removeMDs(const Instruction *Inst);
+  /// removeAllMetadata - Remove all metadata attached to an instruction.
+  void removeAllMetadata(Instruction *Inst);
 
-  /// getHandlerNames - Get handler names. This is used by bitcode
-  /// writer.
-  const StringMap<unsigned> *getHandlerNames();
+  /// copyMD - If metadata is attached to Instruction In1 then attach
+  /// the same metadata to In2.
+  void copyMD(Instruction *In1, Instruction *In2);
+
+  /// getHandlerNames - Populate the client-supplied SmallVector with custom
+  /// metadata names and IDs.
+  void getHandlerNames(SmallVectorImpl<std::pair<unsigned, StringRef> >&) const;
 
   /// ValueIsDeleted - This handler is used to update metadata store
   /// when a value is deleted.
-  void ValueIsDeleted(const Value *V) {}
-  void ValueIsDeleted(const Instruction *Inst) {
-    removeMDs(Inst);
-  }
+  void ValueIsDeleted(const Value *) {}
+  void ValueIsDeleted(Instruction *Inst);
+  void ValueIsRAUWd(Value *V1, Value *V2);
 
   /// ValueIsCloned - This handler is used to update metadata store
   /// when In1 is cloned to create In2.
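
The reworked Metadata.h makes MDString and MDNode plain Values (no operand
lists, no User base) and hides MetadataContext's tables behind a pImpl. A
sketch of building a node and attaching it through the renamed
registerMDKind/addMD interface (hypothetical helper; the MetadataContext is
assumed to be supplied by the caller, as contemporaneous code obtains it from
the LLVMContext):

    #include "llvm/Instruction.h"
    #include "llvm/LLVMContext.h"
    #include "llvm/Metadata.h"
    using namespace llvm;

    // Tag Inst with a one-element MDNode under a custom "note" kind.
    void tagInstruction(LLVMContext &C, MetadataContext &TheMetadata,
                        Instruction *Inst) {
      MDString *Note = MDString::get(C, "hot-path");   // uniqued string
      Value *Elts[] = { Note };
      MDNode *N = MDNode::get(C, Elts, 1);             // uniqued node
      unsigned Kind = TheMetadata.getMDKind("note");
      if (!Kind)
        Kind = TheMetadata.registerMDKind("note");     // register once
      TheMetadata.addMD(Kind, N, Inst);
    }
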
diff --git a/libclamav/c++/llvm/include/llvm/Module.h b/libclamav/c++/llvm/include/llvm/Module.h
index 501625d..04dfb35 100644
--- a/libclamav/c++/llvm/include/llvm/Module.h
+++ b/libclamav/c++/llvm/include/llvm/Module.h
@@ -19,7 +19,7 @@
 #include "llvm/GlobalVariable.h"
 #include "llvm/GlobalAlias.h"
 #include "llvm/Metadata.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <vector>
 
 namespace llvm {
@@ -154,7 +154,7 @@ private:
 public:
   /// The Module constructor. Note that there is no default constructor. You
   /// must provide a name for the module upon construction.
-  explicit Module(const StringRef &ModuleID, LLVMContext& C);
+  explicit Module(StringRef ModuleID, LLVMContext& C);
   /// The module destructor. This will dropAllReferences.
   ~Module();
 
@@ -196,20 +196,20 @@ public:
 public:
 
   /// Set the module identifier.
-  void setModuleIdentifier(const StringRef &ID) { ModuleID = ID; }
+  void setModuleIdentifier(StringRef ID) { ModuleID = ID; }
 
   /// Set the data layout
-  void setDataLayout(const StringRef &DL) { DataLayout = DL; }
+  void setDataLayout(StringRef DL) { DataLayout = DL; }
 
   /// Set the target triple.
-  void setTargetTriple(const StringRef &T) { TargetTriple = T; }
+  void setTargetTriple(StringRef T) { TargetTriple = T; }
 
   /// Set the module-scope inline assembly blocks.
-  void setModuleInlineAsm(const StringRef &Asm) { GlobalScopeAsm = Asm; }
+  void setModuleInlineAsm(StringRef Asm) { GlobalScopeAsm = Asm; }
 
   /// Append to the module-scope inline assembly blocks, automatically
   /// appending a newline to the end.
-  void appendModuleInlineAsm(const StringRef &Asm) {
+  void appendModuleInlineAsm(StringRef Asm) {
     GlobalScopeAsm += Asm;
     GlobalScopeAsm += '\n';
   }
@@ -221,7 +221,7 @@ public:
   /// getNamedValue - Return the first global value in the module with
   /// the specified name, of arbitrary type.  This method returns null
   /// if a global with the specified name is not found.
-  GlobalValue *getNamedValue(const StringRef &Name) const;
+  GlobalValue *getNamedValue(StringRef Name) const;
 
 /// @}
 /// @name Function Accessors
@@ -236,10 +236,10 @@ public:
   ///      the existing function.
   ///   4. Finally, the function exists but has the wrong prototype: return the
   ///      function with a constantexpr cast to the right prototype.
-  Constant *getOrInsertFunction(const StringRef &Name, const FunctionType *T,
+  Constant *getOrInsertFunction(StringRef Name, const FunctionType *T,
                                 AttrListPtr AttributeList);
 
-  Constant *getOrInsertFunction(const StringRef &Name, const FunctionType *T);
+  Constant *getOrInsertFunction(StringRef Name, const FunctionType *T);
 
   /// getOrInsertFunction - Look up the specified function in the module symbol
   /// table.  If it does not exist, add a prototype for the function and return
@@ -248,20 +248,21 @@ public:
   /// named function has a different type.  This version of the method takes a
   /// null terminated list of function arguments, which makes it easier for
   /// clients to use.
-  Constant *getOrInsertFunction(const StringRef &Name,
+  Constant *getOrInsertFunction(StringRef Name,
                                 AttrListPtr AttributeList,
                                 const Type *RetTy, ...)  END_WITH_NULL;
 
-  Constant *getOrInsertFunction(const StringRef &Name, const Type *RetTy, ...)
+  /// getOrInsertFunction - Same as above, but without the attributes.
+  Constant *getOrInsertFunction(StringRef Name, const Type *RetTy, ...)
     END_WITH_NULL;
 
-  Constant *getOrInsertTargetIntrinsic(const StringRef &Name,
+  Constant *getOrInsertTargetIntrinsic(StringRef Name,
                                        const FunctionType *Ty,
                                        AttrListPtr AttributeList);
   
   /// getFunction - Look up the specified function in the module symbol table.
   /// If it does not exist, return null.
-  Function *getFunction(const StringRef &Name) const;
+  Function *getFunction(StringRef Name) const;
 
 /// @}
 /// @name Global Variable Accessors
@@ -271,13 +272,13 @@ public:
   /// symbol table.  If it does not exist, return null. If AllowInternal is set
   /// to true, this function will return globals that have InternalLinkage. By
   /// default, these globals are not returned.
-  GlobalVariable *getGlobalVariable(const StringRef &Name,
+  GlobalVariable *getGlobalVariable(StringRef Name,
                                     bool AllowInternal = false) const;
 
   /// getNamedGlobal - Return the first global variable in the module with the
   /// specified name, of arbitrary type.  This method returns null if a global
   /// with the specified name is not found.
-  GlobalVariable *getNamedGlobal(const StringRef &Name) const {
+  GlobalVariable *getNamedGlobal(StringRef Name) const {
     return getGlobalVariable(Name, true);
   }
 
@@ -288,7 +289,7 @@ public:
   ///      with a constantexpr cast to the right type.
   ///   3. Finally, if the existing global is the correct declaration, return
   ///      the existing global.
-  Constant *getOrInsertGlobal(const StringRef &Name, const Type *Ty);
+  Constant *getOrInsertGlobal(StringRef Name, const Type *Ty);
 
 /// @}
 /// @name Global Alias Accessors
@@ -297,7 +298,7 @@ public:
   /// getNamedAlias - Return the first global alias in the module with the
   /// specified name, of arbitrary type.  This method returns null if a global
   /// with the specified name is not found.
-  GlobalAlias *getNamedAlias(const StringRef &Name) const;
+  GlobalAlias *getNamedAlias(StringRef Name) const;
 
 /// @}
 /// @name Named Metadata Accessors
@@ -306,12 +307,12 @@ public:
   /// getNamedMetadata - Return the first NamedMDNode in the module with the
   /// specified name. This method returns null if a NamedMDNode with the 
   /// specified name is not found.
-  NamedMDNode *getNamedMetadata(const StringRef &Name) const;
+  NamedMDNode *getNamedMetadata(StringRef Name) const;
 
   /// getOrInsertNamedMetadata - Return the first named MDNode in the module 
   /// with the specified name. This method returns a new NamedMDNode if a 
   /// NamedMDNode with the specified name is not found.
-  NamedMDNode *getOrInsertNamedMetadata(const StringRef &Name);
+  NamedMDNode *getOrInsertNamedMetadata(StringRef Name);
 
 /// @}
 /// @name Type Accessors
@@ -320,7 +321,7 @@ public:
   /// addTypeName - Insert an entry in the symbol table mapping Str to Type.  If
   /// there is already an entry for this name, true is returned and the symbol
   /// table is not modified.
-  bool addTypeName(const StringRef &Name, const Type *Ty);
+  bool addTypeName(StringRef Name, const Type *Ty);
 
   /// getTypeName - If there is at least one entry in the symbol table for the
   /// specified type, return it.
@@ -328,7 +329,7 @@ public:
 
   /// getTypeByName - Return the type with the specified name in this module, or
   /// null if there is none by that name.
-  const Type *getTypeByName(const StringRef &Name) const;
+  const Type *getTypeByName(StringRef Name) const;
 
 /// @}
 /// @name Direct access to the globals list, functions list, and symbol table
@@ -414,9 +415,9 @@ public:
   /// @brief Returns the number of items in the list of libraries.
   inline size_t       lib_size()  const { return LibraryList.size();  }
   /// @brief Add a library to the list of dependent libraries
-  void addLibrary(const StringRef &Lib);
+  void addLibrary(StringRef Lib);
   /// @brief Remove a library from the list of dependent libraries
-  void removeLibrary(const StringRef &Lib);
+  void removeLibrary(StringRef Lib);
   /// @brief Get all the libraries
   inline const LibraryListType& getLibraries() const { return LibraryList; }
 
diff --git a/libclamav/c++/llvm/include/llvm/Operator.h b/libclamav/c++/llvm/include/llvm/Operator.h
index 2b5cc57..60865aa 100644
--- a/libclamav/c++/llvm/include/llvm/Operator.h
+++ b/libclamav/c++/llvm/include/llvm/Operator.h
@@ -57,8 +57,8 @@ public:
   }
 
   static inline bool classof(const Operator *) { return true; }
-  static inline bool classof(const Instruction *I) { return true; }
-  static inline bool classof(const ConstantExpr *I) { return true; }
+  static inline bool classof(const Instruction *) { return true; }
+  static inline bool classof(const ConstantExpr *) { return true; }
   static inline bool classof(const Value *V) {
     return isa<Instruction>(V) || isa<ConstantExpr>(V);
   }
diff --git a/libclamav/c++/llvm/include/llvm/Pass.h b/libclamav/c++/llvm/include/llvm/Pass.h
index f3f71c8..909ccde 100644
--- a/libclamav/c++/llvm/include/llvm/Pass.h
+++ b/libclamav/c++/llvm/include/llvm/Pass.h
@@ -29,7 +29,7 @@
 #ifndef LLVM_PASS_H
 #define LLVM_PASS_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <cassert>
 #include <utility>
 #include <vector>
@@ -46,6 +46,7 @@ class PMStack;
 class AnalysisResolver;
 class PMDataManager;
 class raw_ostream;
+class StringRef;
 
 // AnalysisID - Use the PassInfo to identify a pass...
 typedef const PassInfo* AnalysisID;
@@ -164,6 +165,10 @@ public:
   // or null if it is not known.
   static const PassInfo *lookupPassInfo(intptr_t TI);
 
+  // lookupPassInfo - Return the pass info object for the pass with the given
+  // argument string, or null if it is not known.
+  static const PassInfo *lookupPassInfo(StringRef Arg);
+
   /// getAnalysisIfAvailable<AnalysisType>() - Subclasses use this function to
   /// get analysis information that might be around, for example to update it.
   /// This is different than getAnalysis in that it can fail (if the analysis
@@ -271,7 +276,7 @@ public:
   /// doInitialization - Virtual method overridden by subclasses to do
   /// any necessary per-module initialization.
   ///
-  virtual bool doInitialization(Module &M) { return false; }
+  virtual bool doInitialization(Module &) { return false; }
   
   /// runOnFunction - Virtual method overridden by subclasses to do the
   /// per-function processing of the pass.
@@ -323,7 +328,7 @@ public:
   /// doInitialization - Virtual method overridden by subclasses to do
   /// any necessary per-module initialization.
   ///
-  virtual bool doInitialization(Module &M) { return false; }
+  virtual bool doInitialization(Module &) { return false; }
 
   /// doInitialization - Virtual method overridden by BasicBlockPass subclasses
   /// to do any necessary per-function initialization.
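
The string-keyed lookupPassInfo is what AnalysisUsage::addPreserved(StringRef)
in PassAnalysisSupport.h (below) builds on. A sketch of querying the registry
directly (hypothetical helper):

    #include "llvm/ADT/StringRef.h"
    #include "llvm/Pass.h"
    using namespace llvm;

    bool isPassRegistered(StringRef Arg) {
      // Null when no registered pass uses this -arg string.
      return Pass::lookupPassInfo(Arg) != 0;
    }
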
diff --git a/libclamav/c++/llvm/include/llvm/PassAnalysisSupport.h b/libclamav/c++/llvm/include/llvm/PassAnalysisSupport.h
index b09ba45..5864fad 100644
--- a/libclamav/c++/llvm/include/llvm/PassAnalysisSupport.h
+++ b/libclamav/c++/llvm/include/llvm/PassAnalysisSupport.h
@@ -20,11 +20,13 @@
 #define LLVM_PASS_ANALYSIS_SUPPORT_H
 
 #include <vector>
+#include "llvm/Pass.h"
 #include "llvm/ADT/SmallVector.h"
+#include "llvm/ADT/StringRef.h"
 
 namespace llvm {
 
-// No need to include Pass.h, we are being included by it!
+class StringRef;
 
 //===----------------------------------------------------------------------===//
 // AnalysisUsage - Represent the analysis usage information of a pass.  This
@@ -79,6 +81,9 @@ public:
     return *this;
   }
 
+  // addPreserved - Add the specified Pass class to the set of analyses
+  // preserved by this pass.
+  //
   template<class PassClass>
   AnalysisUsage &addPreserved() {
     assert(Pass::getClassPassInfo<PassClass>() && "Pass class not registered!");
@@ -86,6 +91,18 @@ public:
     return *this;
   }
 
+  // addPreserved - Add the Pass with the specified argument string to the set
+  // of analyses preserved by this pass. If no such Pass exists, do nothing.
+  // This can be useful when a pass is trivially preserved, but may not be
+  // linked in. Be careful about spelling!
+  //
+  AnalysisUsage &addPreserved(StringRef Arg) {
+    const PassInfo *PI = Pass::lookupPassInfo(Arg);
+    // If the pass exists, preserve it. Otherwise silently do nothing.
+    if (PI) Preserved.push_back(PI);
+    return *this;
+  }
+
   // setPreservesAll - Set by analyses that do not transform their input at all
   void setPreservesAll() { PreservesAll = true; }
   bool getPreservesAll() const { return PreservesAll; }
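
The new overload lets a pass preserve an analysis that may not even be linked
into the tool, named by its command-line argument string. A sketch of a
getAnalysisUsage override using it ("break-crit-edges" is a real pass
argument, chosen only for illustration; the whole pass is hypothetical):

    #include "llvm/Pass.h"
    using namespace llvm;

    namespace {
      struct MyPass : public FunctionPass {
        static char ID;
        MyPass() : FunctionPass(&ID) {}
        virtual bool runOnFunction(Function &) { return false; }
        virtual void getAnalysisUsage(AnalysisUsage &AU) const {
          AU.setPreservesCFG();
          // Preserved if registered; silently ignored otherwise.
          AU.addPreserved("break-crit-edges");
        }
      };
    }
    char MyPass::ID = 0;
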
diff --git a/libclamav/c++/llvm/include/llvm/PassManagers.h b/libclamav/c++/llvm/include/llvm/PassManagers.h
index 5a8f555..dffc24a 100644
--- a/libclamav/c++/llvm/include/llvm/PassManagers.h
+++ b/libclamav/c++/llvm/include/llvm/PassManagers.h
@@ -284,11 +284,11 @@ public:
   void removeNotPreservedAnalysis(Pass *P);
   
   /// Remove dead passes used by P.
-  void removeDeadPasses(Pass *P, const StringRef &Msg, 
+  void removeDeadPasses(Pass *P, StringRef Msg, 
                         enum PassDebuggingString);
 
   /// Remove P.
-  void freePass(Pass *P, const StringRef &Msg, 
+  void freePass(Pass *P, StringRef Msg, 
                 enum PassDebuggingString);
 
   /// Add pass P into the PassVector. Update 
@@ -344,7 +344,7 @@ public:
   void dumpLastUses(Pass *P, unsigned Offset) const;
   void dumpPassArguments() const;
   void dumpPassInfo(Pass *P, enum PassDebuggingString S1,
-                    enum PassDebuggingString S2, const StringRef &Msg);
+                    enum PassDebuggingString S2, StringRef Msg);
   void dumpRequiredSet(const Pass *P) const;
   void dumpPreservedSet(const Pass *P) const;
 
@@ -388,8 +388,8 @@ protected:
   bool isPassDebuggingExecutionsOrMore() const;
   
 private:
-  void dumpAnalysisUsage(const StringRef &Msg, const Pass *P,
-                           const AnalysisUsage::VectorType &Set) const;
+  void dumpAnalysisUsage(StringRef Msg, const Pass *P,
+                         const AnalysisUsage::VectorType &Set) const;
 
   // Set of available Analysis. This information is used while scheduling
   // passes. If a pass requires an analysis which is not available then
diff --git a/libclamav/c++/llvm/include/llvm/PassSupport.h b/libclamav/c++/llvm/include/llvm/PassSupport.h
index b5e581a..d7f3097 100644
--- a/libclamav/c++/llvm/include/llvm/PassSupport.h
+++ b/libclamav/c++/llvm/include/llvm/PassSupport.h
@@ -21,7 +21,7 @@
 #ifndef LLVM_PASS_SUPPORT_H
 #define LLVM_PASS_SUPPORT_H
 
-// No need to include Pass.h, we are being included by it!
+#include "Pass.h"
 
 namespace llvm {
 
diff --git a/libclamav/c++/llvm/include/llvm/Support/AIXDataTypesFix.h b/libclamav/c++/llvm/include/llvm/Support/AIXDataTypesFix.h
deleted file mode 100644
index a9a9147..0000000
--- a/libclamav/c++/llvm/include/llvm/Support/AIXDataTypesFix.h
+++ /dev/null
@@ -1,25 +0,0 @@
-//===-- llvm/Support/AIXDataTypesFix.h - Fix datatype defs ------*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file overrides default system-defined types and limits which cannot be
-// done in DataTypes.h.in because it is processed by autoheader first, which
-// comments out any #undef statement
-//
-//===----------------------------------------------------------------------===//
-
-// No include guards desired!
-
-#ifndef SUPPORT_DATATYPES_H
-#error "AIXDataTypesFix.h must only be included via DataTypes.h!"
-#endif
-
-// GCC is strict about defining large constants: they must have LL modifier.
-// These will be defined properly at the end of DataTypes.h
-#undef INT64_MAX
-#undef INT64_MIN
diff --git a/libclamav/c++/llvm/include/llvm/Support/Allocator.h b/libclamav/c++/llvm/include/llvm/Support/Allocator.h
index 4c84878..b0ed33d 100644
--- a/libclamav/c++/llvm/include/llvm/Support/Allocator.h
+++ b/libclamav/c++/llvm/include/llvm/Support/Allocator.h
@@ -15,7 +15,7 @@
 #define LLVM_SUPPORT_ALLOCATOR_H
 
 #include "llvm/Support/AlignOf.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <cassert>
 #include <cstdlib>
 
diff --git a/libclamav/c++/llvm/include/llvm/Support/CommandLine.h b/libclamav/c++/llvm/include/llvm/Support/CommandLine.h
index 4fcca1d..2e65fdd 100644
--- a/libclamav/c++/llvm/include/llvm/Support/CommandLine.h
+++ b/libclamav/c++/llvm/include/llvm/Support/CommandLine.h
@@ -67,7 +67,7 @@ void MarkOptionsChanged();
 // Flags permitted to be passed to command line arguments
 //
 
-enum NumOccurrences {          // Flags for the number of occurrences allowed
+enum NumOccurrencesFlag {      // Flags for the number of occurrences allowed
   Optional        = 0x01,      // Zero or One occurrence
   ZeroOrMore      = 0x02,      // Zero or more occurrences allowed
   Required        = 0x03,      // One occurrence required
@@ -162,8 +162,8 @@ public:
   const char *HelpStr;    // The descriptive text message for --help
   const char *ValueStr;   // String describing what the value of this option is
 
-  inline enum NumOccurrences getNumOccurrencesFlag() const {
-    return static_cast<enum NumOccurrences>(Flags & OccurrencesMask);
+  inline enum NumOccurrencesFlag getNumOccurrencesFlag() const {
+    return static_cast<enum NumOccurrencesFlag>(Flags & OccurrencesMask);
   }
   inline enum ValueExpected getValueExpectedFlag() const {
     int VE = Flags & ValueMask;
@@ -197,7 +197,7 @@ public:
     Flags |= Flag;
   }
 
-  void setNumOccurrencesFlag(enum NumOccurrences Val) {
+  void setNumOccurrencesFlag(enum NumOccurrencesFlag Val) {
     setFlag(Val, OccurrencesMask);
   }
   void setValueExpectedFlag(enum ValueExpected Val) { setFlag(Val, ValueMask); }
@@ -495,7 +495,8 @@ public:
 //--------------------------------------------------
 // basic_parser - Super class of parsers to provide boilerplate code
 //
-struct basic_parser_impl {  // non-template implementation of basic_parser<t>
+class basic_parser_impl {  // non-template implementation of basic_parser<t>
+public:
   virtual ~basic_parser_impl() {}
 
   enum ValueExpected getValueExpectedFlagDefault() const {
@@ -525,7 +526,8 @@ struct basic_parser_impl {  // non-template implementation of basic_parser<t>
 // a typedef for the provided data type.
 //
 template<class DataType>
-struct basic_parser : public basic_parser_impl {
+class basic_parser : public basic_parser_impl {
+public:
   typedef DataType parser_data_type;
 };
 
@@ -660,7 +662,7 @@ template<>
 class parser<std::string> : public basic_parser<std::string> {
 public:
   // parse - Return true on error.
-  bool parse(Option &, StringRef ArgName, StringRef Arg, std::string &Value) {
+  bool parse(Option &, StringRef, StringRef Arg, std::string &Value) {
     Value = Arg.str();
     return false;
   }
@@ -681,7 +683,7 @@ template<>
 class parser<char> : public basic_parser<char> {
 public:
   // parse - Return true on error.
-  bool parse(Option &, StringRef ArgName, StringRef Arg, char &Value) {
+  bool parse(Option &, StringRef, StringRef Arg, char &Value) {
     Value = Arg[0];
     return false;
   }
@@ -720,8 +722,10 @@ template<> struct applicator<const char*> {
   static void opt(const char *Str, Opt &O) { O.setArgStr(Str); }
 };
 
-template<> struct applicator<NumOccurrences> {
-  static void opt(NumOccurrences NO, Option &O) { O.setNumOccurrencesFlag(NO); }
+template<> struct applicator<NumOccurrencesFlag> {
+  static void opt(NumOccurrencesFlag NO, Option &O) {
+    O.setNumOccurrencesFlag(NO);
+  }
 };
 template<> struct applicator<ValueExpected> {
   static void opt(ValueExpected VE, Option &O) { O.setValueExpectedFlag(VE); }
@@ -777,6 +781,8 @@ public:
 
   DataType &getValue() { check(); return *Location; }
   const DataType &getValue() const { check(); return *Location; }
+  
+  operator DataType() const { return this->getValue(); }
 };
 
 
@@ -812,6 +818,8 @@ public:
   DataType &getValue() { return Value; }
   DataType getValue() const { return Value; }
 
+  operator DataType() const { return getValue(); }
+
   // If the datatype is a pointer, support -> on it.
   DataType operator->() const { return Value; }
 };
@@ -861,8 +869,6 @@ public:
 
   ParserClass &getParser() { return Parser; }
 
-  operator DataType() const { return this->getValue(); }
-
   template<class T>
   DataType &operator=(const T &Val) {
     this->setValue(Val);
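
With operator DataType() moved onto the storage classes, both cl::opt flavors
convert implicitly to their underlying value. A sketch with hypothetical
option names (NumOccurrencesFlag is the enum renamed in this patch;
cl::Required is one of its values):

    #include "llvm/Support/CommandLine.h"
    using namespace llvm;

    static cl::opt<std::string> InputFile(cl::Positional, cl::Required,
                                          cl::desc("<input file>"));
    static cl::opt<unsigned> Threshold("my-threshold", cl::init(8U),
                                       cl::desc("hypothetical knob"));

    unsigned scaledThreshold() {
      return 2 * Threshold;   // implicit operator DataType()
    }
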
diff --git a/libclamav/c++/llvm/include/llvm/Support/Compiler.h b/libclamav/c++/llvm/include/llvm/Support/Compiler.h
index 342a97d..da31f98 100644
--- a/libclamav/c++/llvm/include/llvm/Support/Compiler.h
+++ b/libclamav/c++/llvm/include/llvm/Support/Compiler.h
@@ -29,6 +29,18 @@
 #define ATTRIBUTE_USED
 #endif
 
+#ifdef __GNUC__ // aka 'ATTRIBUTE_CONST' but following LLVM Conventions.
+#define ATTRIBUTE_READNONE __attribute__((__const__))
+#else
+#define ATTRIBUTE_READNONE
+#endif
+
+#ifdef __GNUC__  // aka 'ATTRIBUTE_PURE' but following LLVM Conventions.
+#define ATTRIBUTE_READONLY __attribute__((__pure__))
+#else
+#define ATTRIBUTE_READONLY
+#endif
+
 #if (__GNUC__ >= 4)
 #define BUILTIN_EXPECT(EXPR, VALUE) __builtin_expect((EXPR), (VALUE))
 #else
@@ -52,12 +64,16 @@
 // method "not for inlining".
 #if (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))
 #define DISABLE_INLINE __attribute__((noinline))
+#elif defined(_MSC_VER)
+#define DISABLE_INLINE __declspec(noinline)
 #else
 #define DISABLE_INLINE
 #endif
 
 #ifdef __GNUC__
 #define NORETURN __attribute__((noreturn))
+#elif defined(_MSC_VER)
+#define NORETURN __declspec(noreturn)
 #else
 #define NORETURN
 #endif
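
ATTRIBUTE_READNONE and ATTRIBUTE_READONLY expose GCC's __const__ and __pure__
under names matching LLVM's IR attribute vocabulary, and expand to nothing on
other compilers. A usage sketch (hypothetical declarations):

    #include "llvm/Support/Compiler.h"

    // Depends only on its argument, touches no memory: 'readnone'.
    unsigned PopCount(unsigned V) ATTRIBUTE_READNONE;

    // May read memory but has no side effects: 'readonly'.
    unsigned StringLength(const char *S) ATTRIBUTE_READONLY;
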
diff --git a/libclamav/c++/llvm/include/llvm/Support/ConstantFolder.h b/libclamav/c++/llvm/include/llvm/Support/ConstantFolder.h
index 99cb920..b73cea0 100644
--- a/libclamav/c++/llvm/include/llvm/Support/ConstantFolder.h
+++ b/libclamav/c++/llvm/include/llvm/Support/ConstantFolder.h
@@ -18,6 +18,7 @@
 #define LLVM_SUPPORT_CONSTANTFOLDER_H
 
 #include "llvm/Constants.h"
+#include "llvm/InstrTypes.h"
 
 namespace llvm {
 
diff --git a/libclamav/c++/llvm/include/llvm/Support/ConstantRange.h b/libclamav/c++/llvm/include/llvm/Support/ConstantRange.h
index e9c8c7c..6342c6f 100644
--- a/libclamav/c++/llvm/include/llvm/Support/ConstantRange.h
+++ b/libclamav/c++/llvm/include/llvm/Support/ConstantRange.h
@@ -33,7 +33,7 @@
 #define LLVM_SUPPORT_CONSTANT_RANGE_H
 
 #include "llvm/ADT/APInt.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
 
@@ -187,6 +187,14 @@ public:
   /// truncated to the specified type.
   ConstantRange truncate(uint32_t BitWidth) const;
 
+  /// zextOrTrunc - make this range have the bit width given by \p BitWidth. The
+  /// value is zero extended, truncated, or left alone to make it that width.
+  ConstantRange zextOrTrunc(uint32_t BitWidth) const;
+  
+  /// sextOrTrunc - make this range have the bit width given by \p BitWidth. The
+  /// value is sign extended, truncated, or left alone to make it that width.
+  ConstantRange sextOrTrunc(uint32_t BitWidth) const;
+
   /// add - Return a new range representing the possible values resulting
   /// from an addition of a value in this range and a value in Other.
   ConstantRange add(const ConstantRange &Other) const;
@@ -209,6 +217,18 @@ public:
   /// TODO: This isn't fully implemented yet.
   ConstantRange udiv(const ConstantRange &Other) const;
 
+  /// shl - Return a new range representing the possible values resulting
+  /// from a left shift of a value in this range by the Amount value.
+  ConstantRange shl(const ConstantRange &Amount) const;
+
+  /// ashr - Return a new range representing the possible values resulting from
+  /// an arithmetic right shift of a value in this range by the Amount value.
+  ConstantRange ashr(const ConstantRange &Amount) const;
+
+  /// lshr - Return a new range representing the possible values resulting
+  /// from a logical right shift of a value in this range by the Amount value.
+  ConstantRange lshr(const ConstantRange &Amount) const;
+
   /// print - Print out the bounds to a stream...
   ///
   void print(raw_ostream &OS) const;
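
ConstantRange gains width-adjusting helpers (zextOrTrunc, sextOrTrunc) and
shift transfer functions (shl, ashr, lshr). A sketch composing them, assuming
the era's two-APInt and single-value ConstantRange constructors:

    #include "llvm/ADT/APInt.h"
    #include "llvm/Support/ConstantRange.h"
    using namespace llvm;

    ConstantRange widenAndShift() {
      ConstantRange R(APInt(8, 10), APInt(8, 20));  // i8 values in [10,20)
      ConstantRange W = R.zextOrTrunc(16);          // now an i16 range
      ConstantRange One(APInt(16, 1));              // the single value 1
      return W.shl(One);                            // possible results of w<<1
    }
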
diff --git a/libclamav/c++/llvm/include/llvm/Support/DataTypes.h.cmake b/libclamav/c++/llvm/include/llvm/Support/DataTypes.h.cmake
deleted file mode 100644
index ad210ed..0000000
--- a/libclamav/c++/llvm/include/llvm/Support/DataTypes.h.cmake
+++ /dev/null
@@ -1,152 +0,0 @@
-/*===-- include/Support/DataTypes.h - Define fixed size types -----*- C -*-===*\
-|*                                                                            *|
-|*                     The LLVM Compiler Infrastructure                       *|
-|*                                                                            *|
-|* This file is distributed under the University of Illinois Open Source      *|
-|* License. See LICENSE.TXT for details.                                      *|
-|*                                                                            *|
-|*===----------------------------------------------------------------------===*|
-|*                                                                            *|
-|* This file contains definitions to figure out the size of _HOST_ data types.*|
-|* This file is important because different host OS's define different macros,*|
-|* which makes portability tough.  This file exports the following            *|
-|* definitions:                                                               *|
-|*                                                                            *|
-|*   [u]int(32|64)_t : typedefs for signed and unsigned 32/64 bit system types*|
-|*   [U]INT(8|16|32|64)_(MIN|MAX) : Constants for the min and max values.     *|
-|*                                                                            *|
-|* No library is required when using these functinons.                        *|
-|*                                                                            *|
-|*===----------------------------------------------------------------------===*/
-
-/* Please leave this file C-compatible. */
-
-#ifndef SUPPORT_DATATYPES_H
-#define SUPPORT_DATATYPES_H
-
-#cmakedefine HAVE_SYS_TYPES_H ${HAVE_SYS_TYPES_H}
-#cmakedefine HAVE_INTTYPES_H ${HAVE_INTTYPES_H}
-#cmakedefine HAVE_STDINT_H ${HAVE_STDINT_H}
-#cmakedefine HAVE_UINT64_T ${HAVE_UINT64_T}
-#cmakedefine HAVE_U_INT64_T ${HAVE_U_INT64_T}
-
-#ifdef __cplusplus
-#include <cmath>
-#else
-#include <math.h>
-#endif
-
-#ifndef _MSC_VER
-
-/* Note that this header's correct operation depends on __STDC_LIMIT_MACROS
-   being defined.  We would define it here, but in order to prevent Bad Things
-   happening when system headers or C++ STL headers include stdint.h before we
-   define it here, we define it on the g++ command line (in Makefile.rules). */
-#if !defined(__STDC_LIMIT_MACROS)
-# error "Must #define __STDC_LIMIT_MACROS before #including Support/DataTypes.h"
-#endif
-
-#if !defined(__STDC_CONSTANT_MACROS)
-# error "Must #define __STDC_CONSTANT_MACROS before " \
-        "#including Support/DataTypes.h"
-#endif
-
-/* Note that <inttypes.h> includes <stdint.h>, if this is a C99 system. */
-#ifdef HAVE_SYS_TYPES_H
-#include <sys/types.h>
-#endif
-
-#ifdef HAVE_INTTYPES_H
-#include <inttypes.h>
-#endif
-
-#ifdef HAVE_STDINT_H
-#include <stdint.h>
-#endif
-
-#ifdef _AIX
-#include "llvm/Support/AIXDataTypesFix.h"
-#endif
-
-/* Handle incorrect definition of uint64_t as u_int64_t */
-#ifndef HAVE_UINT64_T
-#ifdef HAVE_U_INT64_T
-typedef u_int64_t uint64_t;
-#else
-# error "Don't have a definition for uint64_t on this platform"
-#endif
-#endif
-
-#ifdef _OpenBSD_
-#define INT8_MAX 127
-#define INT8_MIN -128
-#define UINT8_MAX 255
-#define INT16_MAX 32767
-#define INT16_MIN -32768
-#define UINT16_MAX 65535
-#define INT32_MAX 2147483647
-#define INT32_MIN -2147483648
-#define UINT32_MAX 4294967295U
-#endif
-
-#else /* _MSC_VER */
-/* Visual C++ doesn't provide standard integer headers, but it does provide
-   built-in data types. */
-#include <stdlib.h>
-#include <stddef.h>
-#include <sys/types.h>
-#ifdef __cplusplus
-#include <cmath>
-#else
-#include <math.h>
-#endif
-typedef __int64 int64_t;
-typedef unsigned __int64 uint64_t;
-typedef signed int int32_t;
-typedef unsigned int uint32_t;
-typedef short int16_t;
-typedef unsigned short uint16_t;
-typedef signed char int8_t;
-typedef unsigned char uint8_t;
-typedef signed int ssize_t;
-#define INT8_MAX 127
-#define INT8_MIN -128
-#define UINT8_MAX 255
-#define INT16_MAX 32767
-#define INT16_MIN -32768
-#define UINT16_MAX 65535
-#define INT32_MAX 2147483647
-#define INT32_MIN -2147483648
-#define UINT32_MAX 4294967295U
-#define INT8_C(C)   C
-#define UINT8_C(C)  C
-#define INT16_C(C)  C
-#define UINT16_C(C) C
-#define INT32_C(C)  C
-#define UINT32_C(C) C ## U
-#define INT64_C(C)  ((int64_t) C ## LL)
-#define UINT64_C(C) ((uint64_t) C ## ULL)
-#endif /* _MSC_VER */
-
-/* Set defaults for constants which we cannot find. */
-#if !defined(INT64_MAX)
-# define INT64_MAX 9223372036854775807LL
-#endif
-#if !defined(INT64_MIN)
-# define INT64_MIN ((-INT64_MAX)-1)
-#endif
-#if !defined(UINT64_MAX)
-# define UINT64_MAX 0xffffffffffffffffULL
-#endif
-
-#if __GNUC__ > 3
-#define END_WITH_NULL __attribute__((sentinel))
-#else
-#define END_WITH_NULL
-#endif
-
-#ifndef HUGE_VALF
-#define HUGE_VALF (float)HUGE_VAL
-#endif
-
-#endif  /* SUPPORT_DATATYPES_H */
diff --git a/libclamav/c++/llvm/include/llvm/Support/DataTypes.h.in b/libclamav/c++/llvm/include/llvm/Support/DataTypes.h.in
deleted file mode 100644
index 405f476..0000000
--- a/libclamav/c++/llvm/include/llvm/Support/DataTypes.h.in
+++ /dev/null
@@ -1,147 +0,0 @@
-/*===-- include/Support/DataTypes.h - Define fixed size types -----*- C -*-===*\
-|*                                                                            *|
-|*                     The LLVM Compiler Infrastructure                       *|
-|*                                                                            *|
-|* This file is distributed under the University of Illinois Open Source      *|
-|* License. See LICENSE.TXT for details.                                      *|
-|*                                                                            *|
-|*===----------------------------------------------------------------------===*|
-|*                                                                            *|
-|* This file contains definitions to figure out the size of _HOST_ data types.*|
-|* This file is important because different host OS's define different macros,*|
-|* which makes portability tough.  This file exports the following            *|
-|* definitions:                                                               *|
-|*                                                                            *|
-|*   [u]int(32|64)_t : typedefs for signed and unsigned 32/64 bit system types*|
-|*   [U]INT(8|16|32|64)_(MIN|MAX) : Constants for the min and max values.     *|
-|*                                                                            *|
-|* No library is required when using these functions.                         *|
-|*                                                                            *|
-|*===----------------------------------------------------------------------===*/
-
-/* Please leave this file C-compatible. */
-
-#ifndef SUPPORT_DATATYPES_H
-#define SUPPORT_DATATYPES_H
-
-#undef HAVE_SYS_TYPES_H
-#undef HAVE_INTTYPES_H
-#undef HAVE_STDINT_H
-#undef HAVE_UINT64_T
-#undef HAVE_U_INT64_T
-
-#ifdef __cplusplus
-#include <cmath>
-#else
-#include <math.h>
-#endif
-
-#ifndef _MSC_VER
-
-/* Note that this header's correct operation depends on __STDC_LIMIT_MACROS
-   being defined.  We would define it here, but in order to prevent Bad Things
-   happening when system headers or C++ STL headers include stdint.h before we
-   define it here, we define it on the g++ command line (in Makefile.rules). */
-#if !defined(__STDC_LIMIT_MACROS)
-# error "Must #define __STDC_LIMIT_MACROS before #including Support/DataTypes.h"
-#endif
-
-#if !defined(__STDC_CONSTANT_MACROS)
-# error "Must #define __STDC_CONSTANT_MACROS before " \
-        "#including Support/DataTypes.h"
-#endif
-
-/* Note that <inttypes.h> includes <stdint.h>, if this is a C99 system. */
-#ifdef HAVE_SYS_TYPES_H
-#include <sys/types.h>
-#endif
-
-#ifdef HAVE_INTTYPES_H
-#include <inttypes.h>
-#endif
-
-#ifdef HAVE_STDINT_H
-#include <stdint.h>
-#endif
-
-#ifdef _AIX
-#include "llvm/Support/AIXDataTypesFix.h"
-#endif
-
-/* Handle incorrect definition of uint64_t as u_int64_t */
-#ifndef HAVE_UINT64_T
-#ifdef HAVE_U_INT64_T
-typedef u_int64_t uint64_t;
-#else
-# error "Don't have a definition for uint64_t on this platform"
-#endif
-#endif
-
-#ifdef _OpenBSD_
-#define INT8_MAX 127
-#define INT8_MIN -128
-#define UINT8_MAX 255
-#define INT16_MAX 32767
-#define INT16_MIN -32768
-#define UINT16_MAX 65535
-#define INT32_MAX 2147483647
-#define INT32_MIN -2147483648
-#define UINT32_MAX 4294967295U
-#endif
-
-#else /* _MSC_VER */
-/* Visual C++ doesn't provide standard integer headers, but it does provide
-   built-in data types. */
-#include <stdlib.h>
-#include <stddef.h>
-#include <sys/types.h>
-typedef __int64 int64_t;
-typedef unsigned __int64 uint64_t;
-typedef signed int int32_t;
-typedef unsigned int uint32_t;
-typedef short int16_t;
-typedef unsigned short uint16_t;
-typedef signed char int8_t;
-typedef unsigned char uint8_t;
-typedef signed int ssize_t;
-#define INT8_MAX 127
-#define INT8_MIN -128
-#define UINT8_MAX 255
-#define INT16_MAX 32767
-#define INT16_MIN -32768
-#define UINT16_MAX 65535
-#define INT32_MAX 2147483647
-#define INT32_MIN -2147483648
-#define UINT32_MAX 4294967295U
-#define INT8_C(C)   C
-#define UINT8_C(C)  C
-#define INT16_C(C)  C
-#define UINT16_C(C) C
-#define INT32_C(C)  C
-#define UINT32_C(C) C ## U
-#define INT64_C(C)  ((int64_t) C ## LL)
-#define UINT64_C(C) ((uint64_t) C ## ULL)
-#endif /* _MSC_VER */
-
-/* Set defaults for constants which we cannot find. */
-#if !defined(INT64_MAX)
-# define INT64_MAX 9223372036854775807LL
-#endif
-#if !defined(INT64_MIN)
-# define INT64_MIN ((-INT64_MAX)-1)
-#endif
-#if !defined(UINT64_MAX)
-# define UINT64_MAX 0xffffffffffffffffULL
-#endif
-
-#if __GNUC__ > 3
-#define END_WITH_NULL __attribute__((sentinel))
-#else
-#define END_WITH_NULL
-#endif
-
-#ifndef HUGE_VALF
-#define HUGE_VALF (float)HUGE_VAL
-#endif
-
-#endif  /* SUPPORT_DATATYPES_H */
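
The two headers deleted above reappear under llvm/System/ later in this
commit, and every #include of "llvm/Support/DataTypes.h" is rewritten to the
new path. For illustration, a minimal sketch (not part of the patch) of the
contract the #error checks enforce on any standalone consumer of the
relocated header, unless the build system already defines the macros on the
compiler command line as Makefile.rules does:

    /* Illustrative only: both macros must precede the first include. */
    #define __STDC_LIMIT_MACROS
    #define __STDC_CONSTANT_MACROS
    #include "llvm/System/DataTypes.h"

    uint64_t AllOnes = UINT64_C(0xffffffffffffffff);
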
diff --git a/libclamav/c++/llvm/include/llvm/Support/Debug.h b/libclamav/c++/llvm/include/llvm/Support/Debug.h
index 6f82ea7..afa828c 100644
--- a/libclamav/c++/llvm/include/llvm/Support/Debug.h
+++ b/libclamav/c++/llvm/include/llvm/Support/Debug.h
@@ -28,39 +28,47 @@
 
 namespace llvm {
 
-// DebugFlag - This boolean is set to true if the '-debug' command line option
-// is specified.  This should probably not be referenced directly, instead, use
-// the DEBUG macro below.
-//
-#ifndef NDEBUG
-extern bool DebugFlag;
+/// DEBUG_TYPE macro - Files can specify a DEBUG_TYPE as a string, which causes
+/// all of their DEBUG statements to be activatable with -debug-only=thatstring.
+#ifndef DEBUG_TYPE
+#define DEBUG_TYPE ""
 #endif
-
-// isCurrentDebugType - Return true if the specified string is the debug type
-// specified on the command line, or if none was specified on the command line
-// with the -debug-only=X option.
-//
+  
 #ifndef NDEBUG
+/// DebugFlag - This boolean is set to true if the '-debug' command line option
+/// is specified.  This should probably not be referenced directly; instead, use
+/// the DEBUG macro below.
+///
+extern bool DebugFlag;
+  
+/// isCurrentDebugType - Return true if the specified string is the debug type
+/// specified on the command line, or if none was specified on the command line
+/// with the -debug-only=X option.
+///
 bool isCurrentDebugType(const char *Type);
-#else
-#define isCurrentDebugType(X) (false)
-#endif
-
-// DEBUG_WITH_TYPE macro - This macro should be used by passes to emit debug
-// information.  If the '-debug' option is specified on the commandline, and if
-// this is a debug build, then the code specified as the option to the macro
-// will be executed.  Otherwise it will not be.  Example:
-//
-// DEBUG_WITH_TYPE("bitset", errs() << "Bitset contains: " << Bitset << "\n");
-//
-// This will emit the debug information if -debug is present, and -debug-only is
-// not specified, or is specified as "bitset".
 
-#ifdef NDEBUG
-#define DEBUG_WITH_TYPE(TYPE, X) do { } while (0)
-#else
+/// SetCurrentDebugType - Set the current debug type, as if the -debug-only=X
+/// option were specified.  Note that DebugFlag also needs to be set to true for
+/// debug output to be produced.
+///
+void SetCurrentDebugType(const char *Type);
+  
+/// DEBUG_WITH_TYPE macro - This macro should be used by passes to emit debug
+/// information.  If the '-debug' option is specified on the commandline, and if
+/// this is a debug build, then the code specified as the option to the macro
+/// will be executed.  Otherwise it will not be.  Example:
+///
+/// DEBUG_WITH_TYPE("bitset", errs() << "Bitset contains: " << Bitset << "\n");
+///
+/// This will emit the debug information if -debug is present, and -debug-only
+/// is not specified, or is specified as "bitset".
 #define DEBUG_WITH_TYPE(TYPE, X)                                        \
   do { if (DebugFlag && isCurrentDebugType(TYPE)) { X; } } while (0)
+
+#else
+#define isCurrentDebugType(X) (false)
+#define SetCurrentDebugType(X)
+#define DEBUG_WITH_TYPE(TYPE, X) do { } while (0)
 #endif
 
 // DEBUG macro - This macro should be used by passes to emit debug information.
@@ -70,11 +78,6 @@ bool isCurrentDebugType(const char *Type);
 //
 // DEBUG(errs() << "Bitset contains: " << Bitset << "\n");
 //
-
-#ifndef DEBUG_TYPE
-#define DEBUG_TYPE ""
-#endif
-
 #define DEBUG(X) DEBUG_WITH_TYPE(DEBUG_TYPE, X)
   
 } // End llvm namespace
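
For illustration, a minimal sketch (pass name "mypass" invented) of a client
of the reorganized macros; per the comments above, DEBUG_TYPE is defined
before Debug.h is included so the default empty definition does not kick in:

    #define DEBUG_TYPE "mypass"
    #include "llvm/Support/Debug.h"
    #include "llvm/Support/raw_ostream.h"
    using namespace llvm;

    static void emitTrace() {
      // Emitted when -debug is given and -debug-only is unset or "mypass".
      DEBUG(errs() << "running mypass\n");
      // Emitted when -debug is given and -debug-only is unset or "bitset".
      DEBUG_WITH_TYPE("bitset", errs() << "bitset details\n");
    }
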
diff --git a/libclamav/c++/llvm/include/llvm/Support/DebugLoc.h b/libclamav/c++/llvm/include/llvm/Support/DebugLoc.h
index 30590c7..362390f 100644
--- a/libclamav/c++/llvm/include/llvm/Support/DebugLoc.h
+++ b/libclamav/c++/llvm/include/llvm/Support/DebugLoc.h
@@ -24,19 +24,19 @@ namespace llvm {
   /// DebugLocTuple - Debug location tuple of filename id, line and column.
   ///
   struct DebugLocTuple {
-    MDNode *CompileUnit;
-    MDNode *InlinedLoc;
+    MDNode *Scope;
+    MDNode *InlinedAtLoc;
     unsigned Line, Col;
 
     DebugLocTuple()
-      : CompileUnit(0), InlinedLoc(0), Line(~0U), Col(~0U) {};
+      : Scope(0), InlinedAtLoc(0), Line(~0U), Col(~0U) {}
 
     DebugLocTuple(MDNode *n, MDNode *i, unsigned l, unsigned c)
-      : CompileUnit(n), InlinedLoc(i), Line(l), Col(c) {};
+      : Scope(n), InlinedAtLoc(i), Line(l), Col(c) {}
 
     bool operator==(const DebugLocTuple &DLT) const {
-      return CompileUnit == DLT.CompileUnit &&
-        InlinedLoc == DLT.InlinedLoc &&
+      return Scope == DLT.Scope &&
+        InlinedAtLoc == DLT.InlinedAtLoc &&
         Line == DLT.Line && Col == DLT.Col;
     }
     bool operator!=(const DebugLocTuple &DLT) const {
@@ -74,16 +74,16 @@ namespace llvm {
       return DebugLocTuple((MDNode*)~1U, (MDNode*)~1U, ~1U, ~1U);
     }
     static unsigned getHashValue(const DebugLocTuple &Val) {
-      return DenseMapInfo<MDNode*>::getHashValue(Val.CompileUnit) ^
-             DenseMapInfo<MDNode*>::getHashValue(Val.InlinedLoc) ^
+      return DenseMapInfo<MDNode*>::getHashValue(Val.Scope) ^
+             DenseMapInfo<MDNode*>::getHashValue(Val.InlinedAtLoc) ^
              DenseMapInfo<unsigned>::getHashValue(Val.Line) ^
              DenseMapInfo<unsigned>::getHashValue(Val.Col);
     }
     static bool isEqual(const DebugLocTuple &LHS, const DebugLocTuple &RHS) {
-      return LHS.CompileUnit == RHS.CompileUnit &&
-             LHS.InlinedLoc  == RHS.InlinedLoc &&
-             LHS.Line        == RHS.Line &&
-             LHS.Col         == RHS.Col;
+      return LHS.Scope        == RHS.Scope &&
+             LHS.InlinedAtLoc == RHS.InlinedAtLoc &&
+             LHS.Line         == RHS.Line &&
+             LHS.Col          == RHS.Col;
     }
 
     static bool isPod() { return true; }
diff --git a/libclamav/c++/llvm/include/llvm/Support/ELF.h b/libclamav/c++/llvm/include/llvm/Support/ELF.h
index aa27946..e747c7a 100644
--- a/libclamav/c++/llvm/include/llvm/Support/ELF.h
+++ b/libclamav/c++/llvm/include/llvm/Support/ELF.h
@@ -21,7 +21,7 @@
 #ifndef LLVM_SUPPORT_ELF_H
 #define LLVM_SUPPORT_ELF_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <cstring>
 
 namespace llvm {
diff --git a/libclamav/c++/llvm/include/llvm/Support/ErrorHandling.h b/libclamav/c++/llvm/include/llvm/Support/ErrorHandling.h
index 67bccf0..6067795 100644
--- a/libclamav/c++/llvm/include/llvm/Support/ErrorHandling.h
+++ b/libclamav/c++/llvm/include/llvm/Support/ErrorHandling.h
@@ -60,15 +60,15 @@ namespace llvm {
   /// standard error, followed by a newline.
   /// After the error handler is called this function will call exit(1), it 
   /// does not return.
-  void llvm_report_error(const char *reason) NORETURN;
-  void llvm_report_error(const std::string &reason) NORETURN;
-  void llvm_report_error(const Twine &reason) NORETURN;
+  NORETURN void llvm_report_error(const char *reason);
+  NORETURN void llvm_report_error(const std::string &reason);
+  NORETURN void llvm_report_error(const Twine &reason);
 
   /// This function calls abort(), and prints the optional message to stderr.
   /// Use the llvm_unreachable macro (that adds location info), instead of
   /// calling this function directly.
-  void llvm_unreachable_internal(const char *msg=0, const char *file=0,
-                                 unsigned line=0) NORETURN;
+  NORETURN void llvm_unreachable_internal(const char *msg=0,
+                                          const char *file=0, unsigned line=0);
 }
 
 /// Prints the message and location info to stderr in !NDEBUG builds.
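
This hunk only moves the NORETURN attribute from suffix to prefix position,
presumably for compilers that accept the annotation only before the
declarator. A hedged usage sketch (condition and message invented):

    #include "llvm/Support/ErrorHandling.h"

    void checkFeature(bool Supported) {
      if (!Supported)
        llvm::llvm_report_error("feature not supported");  // calls exit(1)
    }
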
diff --git a/libclamav/c++/llvm/include/llvm/Support/Format.h b/libclamav/c++/llvm/include/llvm/Support/Format.h
index df03f66..340f517 100644
--- a/libclamav/c++/llvm/include/llvm/Support/Format.h
+++ b/libclamav/c++/llvm/include/llvm/Support/Format.h
@@ -23,6 +23,7 @@
 #ifndef LLVM_SUPPORT_FORMAT_H
 #define LLVM_SUPPORT_FORMAT_H
 
+#include <cassert>
 #include <cstdio>
 #ifdef WIN32
 #define snprintf _snprintf
diff --git a/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h b/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h
index 1f65978..2db2477 100644
--- a/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h
+++ b/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h
@@ -139,7 +139,7 @@ public:
     if (MDKind == 0) 
       MDKind = Context.getMetadata().getMDKind("dbg");
     if (MDKind == 0)
-      MDKind = Context.getMetadata().RegisterMDKind("dbg");
+      MDKind = Context.getMetadata().registerMDKind("dbg");
     CurDbgLocation = L;
   }
 
@@ -151,6 +151,15 @@ public:
       Context.getMetadata().addMD(MDKind, CurDbgLocation, I);
   }
 
+  /// SetDebugLocation -  Set location information for the given instruction.
+  void SetDebugLocation(Instruction *I, MDNode *Loc) {
+    if (MDKind == 0) 
+      MDKind = Context.getMetadata().getMDKind("dbg");
+    if (MDKind == 0)
+      MDKind = Context.getMetadata().registerMDKind("dbg");
+    Context.getMetadata().addMD(MDKind, Loc, I);
+  }
+
   /// Insert - Insert and return the specified instruction.
   template<typename InstTy>
   InstTy *Insert(InstTy *I, const Twine &Name = "") const {
@@ -253,6 +262,13 @@ public:
     return Insert(SwitchInst::Create(V, Dest, NumCases));
   }
 
+  /// CreateIndirectBr - Create an indirect branch instruction with the
+  /// specified address operand, with an optional hint for the number of
+  /// destinations that will be added (for efficient allocation).
+  IndirectBrInst *CreateIndirectBr(Value *Addr, unsigned NumDests = 10) {
+    return Insert(IndirectBrInst::Create(Addr, NumDests));
+  }
+
   /// CreateInvoke - Create an invoke instruction.
   template<typename InputIterator>
   InvokeInst *CreateInvoke(Value *Callee, BasicBlock *NormalDest,
@@ -383,15 +399,21 @@ public:
     return Insert(BinaryOperator::CreateAShr(LHS, RHS), Name);
   }
   Value *CreateAnd(Value *LHS, Value *RHS, const Twine &Name = "") {
-    if (Constant *LC = dyn_cast<Constant>(LHS))
-      if (Constant *RC = dyn_cast<Constant>(RHS))
+    if (Constant *RC = dyn_cast<Constant>(RHS)) {
+      if (isa<ConstantInt>(RC) && cast<ConstantInt>(RC)->isAllOnesValue())
+        return LHS;  // LHS & -1 -> LHS
+      if (Constant *LC = dyn_cast<Constant>(LHS))
         return Folder.CreateAnd(LC, RC);
+    }
     return Insert(BinaryOperator::CreateAnd(LHS, RHS), Name);
   }
   Value *CreateOr(Value *LHS, Value *RHS, const Twine &Name = "") {
-    if (Constant *LC = dyn_cast<Constant>(LHS))
-      if (Constant *RC = dyn_cast<Constant>(RHS))
+    if (Constant *RC = dyn_cast<Constant>(RHS)) {
+      if (RC->isNullValue())
+        return LHS;  // LHS | 0 -> LHS
+      if (Constant *LC = dyn_cast<Constant>(LHS))
         return Folder.CreateOr(LC, RC);
+    }
     return Insert(BinaryOperator::CreateOr(LHS, RHS), Name);
   }
   Value *CreateXor(Value *LHS, Value *RHS, const Twine &Name = "") {
@@ -429,17 +451,10 @@ public:
   // Instruction creation methods: Memory Instructions
   //===--------------------------------------------------------------------===//
 
-  MallocInst *CreateMalloc(const Type *Ty, Value *ArraySize = 0,
-                           const Twine &Name = "") {
-    return Insert(new MallocInst(Ty, ArraySize), Name);
-  }
   AllocaInst *CreateAlloca(const Type *Ty, Value *ArraySize = 0,
                            const Twine &Name = "") {
     return Insert(new AllocaInst(Ty, ArraySize), Name);
   }
-  FreeInst *CreateFree(Value *Ptr) {
-    return Insert(new FreeInst(Ptr));
-  }
   // Provided to resolve 'CreateLoad(Ptr, "...")' correctly, instead of
   // converting the string to 'bool' for the isVolatile parameter.
   LoadInst *CreateLoad(Value *Ptr, const char *Name) {
@@ -694,6 +709,11 @@ public:
       return Folder.CreateIntCast(VC, DestTy, isSigned);
     return Insert(CastInst::CreateIntegerCast(V, DestTy, isSigned), Name);
   }
+private:
+  // Provided to resolve 'CreateIntCast(Ptr, Ptr, "...")', giving a compile time
+  // error, instead of converting the string to bool for the isSigned parameter.
+  Value *CreateIntCast(Value *, const Type *, const char *); // DO NOT IMPLEMENT
+public:
   Value *CreateFPCast(Value *V, const Type *DestTy, const Twine &Name = "") {
     if (V->getType() == DestTy)
       return V;
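
For the new CreateAnd/CreateOr fast paths, a hedged sketch (helper name and
builder setup invented) of the identities that now fold away without
emitting an instruction:

    #include "llvm/Constants.h"
    #include "llvm/Support/IRBuilder.h"
    using namespace llvm;

    Value *demoFolds(IRBuilder<> &B, Value *X) {
      Value *AllOnes = Constant::getAllOnesValue(X->getType());
      Value *Zero    = Constant::getNullValue(X->getType());
      Value *A = B.CreateAnd(X, AllOnes);  // X & -1 --> returns X itself
      Value *O = B.CreateOr(X, Zero);      // X |  0 --> returns X itself
      return B.CreateXor(A, O);
    }
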
diff --git a/libclamav/c++/llvm/include/llvm/Support/InstVisitor.h b/libclamav/c++/llvm/include/llvm/Support/InstVisitor.h
index 5d7c2f7..b2e5d58 100644
--- a/libclamav/c++/llvm/include/llvm/Support/InstVisitor.h
+++ b/libclamav/c++/llvm/include/llvm/Support/InstVisitor.h
@@ -46,17 +46,17 @@ namespace llvm {
 ///  /// Declare the class.  Note that we derive from InstVisitor instantiated
 ///  /// with _our new subclasses_ type.
 ///  ///
-///  struct CountMallocVisitor : public InstVisitor<CountMallocVisitor> {
+///  struct CountAllocaVisitor : public InstVisitor<CountAllocaVisitor> {
 ///    unsigned Count;
-///    CountMallocVisitor() : Count(0) {}
+///    CountAllocaVisitor() : Count(0) {}
 ///
-///    void visitMallocInst(MallocInst &MI) { ++Count; }
+///    void visitAllocaInst(AllocaInst &AI) { ++Count; }
 ///  };
 ///
 ///  And this class would be used like this:
-///    CountMallocVistor CMV;
-///    CMV.visit(function);
-///    NumMallocs = CMV.Count;
+///    CountAllocaVisitor CAV;
+///    CAV.visit(function);
+///    NumAllocas = CAV.Count;
 ///
 /// The defined class has 'visit' methods for Instruction, and also for BasicBlock,
 /// Function, and Module, which recursively process all contained instructions.
@@ -160,14 +160,13 @@ public:
   RetTy visitReturnInst(ReturnInst &I)              { DELEGATE(TerminatorInst);}
   RetTy visitBranchInst(BranchInst &I)              { DELEGATE(TerminatorInst);}
   RetTy visitSwitchInst(SwitchInst &I)              { DELEGATE(TerminatorInst);}
+  RetTy visitIndirectBrInst(IndirectBrInst &I)      { DELEGATE(TerminatorInst);}
   RetTy visitInvokeInst(InvokeInst &I)              { DELEGATE(TerminatorInst);}
   RetTy visitUnwindInst(UnwindInst &I)              { DELEGATE(TerminatorInst);}
   RetTy visitUnreachableInst(UnreachableInst &I)    { DELEGATE(TerminatorInst);}
   RetTy visitICmpInst(ICmpInst &I)                  { DELEGATE(CmpInst);}
   RetTy visitFCmpInst(FCmpInst &I)                  { DELEGATE(CmpInst);}
-  RetTy visitMallocInst(MallocInst &I)              { DELEGATE(AllocationInst);}
-  RetTy visitAllocaInst(AllocaInst &I)              { DELEGATE(AllocationInst);}
-  RetTy visitFreeInst(FreeInst     &I)              { DELEGATE(Instruction); }
+  RetTy visitAllocaInst(AllocaInst &I)              { DELEGATE(Instruction); }
   RetTy visitLoadInst(LoadInst     &I)              { DELEGATE(Instruction); }
   RetTy visitStoreInst(StoreInst   &I)              { DELEGATE(Instruction); }
   RetTy visitGetElementPtrInst(GetElementPtrInst &I){ DELEGATE(Instruction); }
@@ -199,7 +198,6 @@ public:
   //
   RetTy visitTerminatorInst(TerminatorInst &I) { DELEGATE(Instruction); }
   RetTy visitBinaryOperator(BinaryOperator &I) { DELEGATE(Instruction); }
-  RetTy visitAllocationInst(AllocationInst &I) { DELEGATE(Instruction); }
   RetTy visitCmpInst(CmpInst &I)               { DELEGATE(Instruction); }
   RetTy visitCastInst(CastInst &I)             { DELEGATE(Instruction); }
 
diff --git a/libclamav/c++/llvm/include/llvm/Support/LeakDetector.h b/libclamav/c++/llvm/include/llvm/Support/LeakDetector.h
index 7dbfdbf..501a9db 100644
--- a/libclamav/c++/llvm/include/llvm/Support/LeakDetector.h
+++ b/libclamav/c++/llvm/include/llvm/Support/LeakDetector.h
@@ -26,6 +26,7 @@
 
 namespace llvm {
 
+class LLVMContext;
 class Value;
 
 struct LeakDetector {
diff --git a/libclamav/c++/llvm/include/llvm/Support/MathExtras.h b/libclamav/c++/llvm/include/llvm/Support/MathExtras.h
index 6fa618e..438b021 100644
--- a/libclamav/c++/llvm/include/llvm/Support/MathExtras.h
+++ b/libclamav/c++/llvm/include/llvm/Support/MathExtras.h
@@ -14,7 +14,7 @@
 #ifndef LLVM_SUPPORT_MATHEXTRAS_H
 #define LLVM_SUPPORT_MATHEXTRAS_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
 
diff --git a/libclamav/c++/llvm/include/llvm/Support/MemoryBuffer.h b/libclamav/c++/llvm/include/llvm/Support/MemoryBuffer.h
index eb4784c..65c7167 100644
--- a/libclamav/c++/llvm/include/llvm/Support/MemoryBuffer.h
+++ b/libclamav/c++/llvm/include/llvm/Support/MemoryBuffer.h
@@ -15,7 +15,7 @@
 #define LLVM_SUPPORT_MEMORYBUFFER_H
 
 #include "llvm/ADT/StringRef.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <string>
 
 namespace llvm {
@@ -24,7 +24,7 @@ namespace llvm {
 /// of memory, and provides simple methods for reading files and standard input
 /// into a memory buffer.  In addition to basic access to the characters in the
 /// file, this interface guarantees you can read one character past the end of
-/// @verbatim the file, and that this character will read as '\0'. @endverbatim
+/// the file, and that this character will read as '\0'.
 class MemoryBuffer {
   const char *BufferStart; // Start of the buffer.
   const char *BufferEnd;   // End of the buffer.
@@ -57,7 +57,7 @@ public:
   /// MemoryBuffer if successful, otherwise returning null.  If FileSize is
   /// specified, this means that the client knows that the file exists and that
   /// it has the specified size.
-  static MemoryBuffer *getFile(const char *Filename,
+  static MemoryBuffer *getFile(StringRef Filename,
                                std::string *ErrStr = 0,
                                int64_t FileSize = -1);
 
@@ -84,29 +84,18 @@ public:
   /// memory allocated by this method.  The memory is owned by the MemoryBuffer
   /// object.
   static MemoryBuffer *getNewUninitMemBuffer(size_t Size,
-                                             const char *BufferName = "");
+                                             StringRef BufferName = "");
 
-  /// getSTDIN - Read all of stdin into a file buffer, and return it.  This
-  /// returns null if stdin is empty.
+  /// getSTDIN - Read all of stdin into a file buffer, and return it.
   static MemoryBuffer *getSTDIN();
 
 
   /// getFileOrSTDIN - Open the specified file as a MemoryBuffer, or open stdin
   /// if the Filename is "-".  If an error occurs, this returns null and fills
-  /// in *ErrStr with a reason.  If stdin is empty, this API (unlike getSTDIN)
-  /// returns an empty buffer.
-  static MemoryBuffer *getFileOrSTDIN(const char *Filename,
-                                      std::string *ErrStr = 0,
-                                      int64_t FileSize = -1);
-
-  /// getFileOrSTDIN - Open the specified file as a MemoryBuffer, or open stdin
-  /// if the Filename is "-".  If an error occurs, this returns null and fills
   /// in *ErrStr with a reason.
-  static MemoryBuffer *getFileOrSTDIN(const std::string &FN,
+  static MemoryBuffer *getFileOrSTDIN(StringRef Filename,
                                       std::string *ErrStr = 0,
-                                      int64_t FileSize = -1) {
-    return getFileOrSTDIN(FN.c_str(), ErrStr, FileSize);
-  }
+                                      int64_t FileSize = -1);
 };
 
 } // end namespace llvm
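
With the StringRef migration the two previous getFileOrSTDIN overloads
collapse into one that accepts a C string or a std::string equally well. A
sketch (wrapper and error handling invented):

    #include "llvm/Support/MemoryBuffer.h"
    #include "llvm/Support/raw_ostream.h"
    using namespace llvm;

    MemoryBuffer *openInput(const std::string &Path) {
      std::string Err;
      MemoryBuffer *MB = MemoryBuffer::getFileOrSTDIN(Path, &Err);
      if (!MB)
        errs() << "error: " << Err << "\n";
      return MB;
    }
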
diff --git a/libclamav/c++/llvm/include/llvm/Support/MemoryObject.h b/libclamav/c++/llvm/include/llvm/Support/MemoryObject.h
index dec0f13..e193ca2 100644
--- a/libclamav/c++/llvm/include/llvm/Support/MemoryObject.h
+++ b/libclamav/c++/llvm/include/llvm/Support/MemoryObject.h
@@ -10,7 +10,7 @@
 #ifndef MEMORYOBJECT_H
 #define MEMORYOBJECT_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
 
diff --git a/libclamav/c++/llvm/include/llvm/Support/NoFolder.h b/libclamav/c++/llvm/include/llvm/Support/NoFolder.h
index 1f671c1..7f2f149 100644
--- a/libclamav/c++/llvm/include/llvm/Support/NoFolder.h
+++ b/libclamav/c++/llvm/include/llvm/Support/NoFolder.h
@@ -174,7 +174,7 @@ public:
   }
 
   Value *CreateExtractElement(Constant *Vec, Constant *Idx) const {
-    return new ExtractElementInst(Vec, Idx);
+    return ExtractElementInst::Create(Vec, Idx);
   }
 
   Value *CreateInsertElement(Constant *Vec, Constant *NewElt,
diff --git a/libclamav/c++/llvm/include/llvm/Support/OutputBuffer.h b/libclamav/c++/llvm/include/llvm/Support/OutputBuffer.h
index 1adff2d..6b98e99 100644
--- a/libclamav/c++/llvm/include/llvm/Support/OutputBuffer.h
+++ b/libclamav/c++/llvm/include/llvm/Support/OutputBuffer.h
@@ -14,6 +14,7 @@
 #ifndef LLVM_SUPPORT_OUTPUTBUFFER_H
 #define LLVM_SUPPORT_OUTPUTBUFFER_H
 
+#include <cassert>
 #include <string>
 #include <vector>
 
diff --git a/libclamav/c++/llvm/include/llvm/Support/PassNameParser.h b/libclamav/c++/llvm/include/llvm/Support/PassNameParser.h
index 66ce3f2..ea4fe01 100644
--- a/libclamav/c++/llvm/include/llvm/Support/PassNameParser.h
+++ b/libclamav/c++/llvm/include/llvm/Support/PassNameParser.h
@@ -25,6 +25,7 @@
 
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/raw_ostream.h"
 #include "llvm/Pass.h"
 #include <algorithm>
 #include <cstring>
diff --git a/libclamav/c++/llvm/include/llvm/Support/PatternMatch.h b/libclamav/c++/llvm/include/llvm/Support/PatternMatch.h
index eb393ac..c0b6a6b 100644
--- a/libclamav/c++/llvm/include/llvm/Support/PatternMatch.h
+++ b/libclamav/c++/llvm/include/llvm/Support/PatternMatch.h
@@ -87,6 +87,18 @@ struct zero_ty {
 /// m_Zero() - Match an arbitrary zero/null constant.
 inline zero_ty m_Zero() { return zero_ty(); }
 
+struct one_ty {
+  template<typename ITy>
+  bool match(ITy *V) {
+    if (const ConstantInt *C = dyn_cast<ConstantInt>(V))
+      return C->isOne();
+    return false;
+  }
+};
+
+/// m_One() - Match an integer 1.
+inline one_ty m_One() { return one_ty(); }
+  
 
 template<typename Class>
 struct bind_ty {
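
A hedged sketch of the new matcher in use (function and variable names
invented); m_Add and m_Value are pre-existing combinators in PatternMatch.h:

    #include "llvm/Support/PatternMatch.h"
    using namespace llvm;
    using namespace llvm::PatternMatch;

    // Returns X when V is "add X, 1", otherwise null.
    static Value *matchIncrement(Value *V) {
      Value *X;
      if (match(V, m_Add(m_Value(X), m_One())))
        return X;
      return 0;
    }
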
diff --git a/libclamav/c++/llvm/include/llvm/Support/PointerLikeTypeTraits.h b/libclamav/c++/llvm/include/llvm/Support/PointerLikeTypeTraits.h
index d64993f..b851404 100644
--- a/libclamav/c++/llvm/include/llvm/Support/PointerLikeTypeTraits.h
+++ b/libclamav/c++/llvm/include/llvm/Support/PointerLikeTypeTraits.h
@@ -15,7 +15,7 @@
 #ifndef LLVM_SUPPORT_POINTERLIKETYPETRAITS_H
 #define LLVM_SUPPORT_POINTERLIKETYPETRAITS_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
   
@@ -38,7 +38,7 @@ public:
     return static_cast<T*>(P);
   }
   
-  /// Note, we assume here that malloc returns objects at least 8-byte aligned.
+  /// Note, we assume here that malloc returns objects at least 4-byte aligned.
   /// However, this may be wrong, or pointers may be from something other than
   /// malloc.  In this case, you should specialize this template to reduce this.
   ///
diff --git a/libclamav/c++/llvm/include/llvm/Support/RecyclingAllocator.h b/libclamav/c++/llvm/include/llvm/Support/RecyclingAllocator.h
index 8e957f1..609193f 100644
--- a/libclamav/c++/llvm/include/llvm/Support/RecyclingAllocator.h
+++ b/libclamav/c++/llvm/include/llvm/Support/RecyclingAllocator.h
@@ -41,7 +41,7 @@ public:
   /// SubClass. The storage may be either newly allocated or recycled.
   ///
   template<class SubClass>
-  SubClass *Allocate() { return Base.Allocate<SubClass>(Allocator); }
+  SubClass *Allocate() { return Base.template Allocate<SubClass>(Allocator); }
 
   T *Allocate() { return Base.Allocate(Allocator); }
 
diff --git a/libclamav/c++/llvm/include/llvm/Support/SlowOperationInformer.h b/libclamav/c++/llvm/include/llvm/Support/SlowOperationInformer.h
index b30aa98..524049c 100644
--- a/libclamav/c++/llvm/include/llvm/Support/SlowOperationInformer.h
+++ b/libclamav/c++/llvm/include/llvm/Support/SlowOperationInformer.h
@@ -31,7 +31,7 @@
 
 #include <string>
 #include <cassert>
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
   class SlowOperationInformer {
diff --git a/libclamav/c++/llvm/include/llvm/Support/SourceMgr.h b/libclamav/c++/llvm/include/llvm/Support/SourceMgr.h
index 5b6f56b..b695ff1 100644
--- a/libclamav/c++/llvm/include/llvm/Support/SourceMgr.h
+++ b/libclamav/c++/llvm/include/llvm/Support/SourceMgr.h
@@ -120,7 +120,9 @@ public:
   ///
   /// @param Type - If non-null, the kind of message (e.g., "error") which is
   /// prefixed to the message.
-  void PrintMessage(SMLoc Loc, const std::string &Msg, const char *Type) const;
+  /// @param ShowLine - Should the diagnostic show the source line.
+  void PrintMessage(SMLoc Loc, const std::string &Msg, const char *Type,
+                    bool ShowLine = true) const;
   
   
   /// GetMessage - Return an SMDiagnostic at the specified location with the
@@ -128,8 +130,10 @@ public:
   ///
   /// @param Type - If non-null, the kind of message (e.g., "error") which is
   /// prefixed to the message.
+  /// @param ShowLine - Should the diagnostic show the source line.
   SMDiagnostic GetMessage(SMLoc Loc,
-                          const std::string &Msg, const char *Type) const;
+                          const std::string &Msg, const char *Type,
+                          bool ShowLine = true) const;
   
   
 private:
@@ -143,12 +147,15 @@ class SMDiagnostic {
   std::string Filename;
   int LineNo, ColumnNo;
   std::string Message, LineContents;
+  unsigned ShowLine : 1;
+
 public:
   SMDiagnostic() : LineNo(0), ColumnNo(0) {}
   SMDiagnostic(const std::string &FN, int Line, int Col,
-               const std::string &Msg, const std::string &LineStr)
+               const std::string &Msg, const std::string &LineStr,
+               bool showline = true)
     : Filename(FN), LineNo(Line), ColumnNo(Col), Message(Msg),
-      LineContents(LineStr) {}
+      LineContents(LineStr), ShowLine(showline) {}
 
   void Print(const char *ProgName, raw_ostream &S);
 };
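
A brief caller-side sketch of the new ShowLine flag (source manager,
location, and message invented):

    // Default behavior, still echoes the offending source line:
    SrcMgr.PrintMessage(Loc, "unexpected token", "error");
    // Terse variant enabled by this patch:
    SrcMgr.PrintMessage(Loc, "unexpected token", "error", /*ShowLine=*/false);
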
diff --git a/libclamav/c++/llvm/include/llvm/Support/StandardPasses.h b/libclamav/c++/llvm/include/llvm/Support/StandardPasses.h
index 8c4f90b..18be1ad 100644
--- a/libclamav/c++/llvm/include/llvm/Support/StandardPasses.h
+++ b/libclamav/c++/llvm/include/llvm/Support/StandardPasses.h
@@ -96,41 +96,41 @@ namespace llvm {
       return;
     }
     
-    if (UnitAtATime)
-      PM->add(createRaiseAllocationsPass());    // call %malloc -> malloc inst
-    PM->add(createCFGSimplificationPass());     // Clean up disgusting code
     if (UnitAtATime) {
       PM->add(createGlobalOptimizerPass());     // Optimize out global vars
-      PM->add(createGlobalDCEPass());           // Remove unused fns and globs
-      // IP Constant Propagation
-      PM->add(createIPConstantPropagationPass());
+      
+      PM->add(createIPSCCPPass());              // IP SCCP
       PM->add(createDeadArgEliminationPass());  // Dead argument elimination
     }
     PM->add(createInstructionCombiningPass());  // Clean up after IPCP & DAE
     PM->add(createCFGSimplificationPass());     // Clean up after IPCP & DAE
-    if (UnitAtATime) {
-      if (HaveExceptions)
-        PM->add(createPruneEHPass());           // Remove dead EH info
-      PM->add(createFunctionAttrsPass());       // Set readonly/readnone attrs
-    }
+    
+    // Start of CallGraph SCC passes.
+    if (UnitAtATime && HaveExceptions)
+      PM->add(createPruneEHPass());           // Remove dead EH info
     if (InliningPass)
       PM->add(InliningPass);
+    if (UnitAtATime)
+      PM->add(createFunctionAttrsPass());       // Set readonly/readnone attrs
     if (OptimizationLevel > 2)
       PM->add(createArgumentPromotionPass());   // Scalarize uninlined fn args
+    
+    // Start of function pass.
+    
+    PM->add(createScalarReplAggregatesPass());  // Break up aggregate allocas
     if (SimplifyLibCalls)
       PM->add(createSimplifyLibCallsPass());    // Library Call Optimizations
     PM->add(createInstructionCombiningPass());  // Cleanup for scalarrepl.
     PM->add(createJumpThreadingPass());         // Thread jumps.
     PM->add(createCFGSimplificationPass());     // Merge & remove BBs
-    PM->add(createScalarReplAggregatesPass());  // Break up aggregate allocas
     PM->add(createInstructionCombiningPass());  // Combine silly seq's
-    PM->add(createCondPropagationPass());       // Propagate conditionals
+    
     PM->add(createTailCallEliminationPass());   // Eliminate tail calls
     PM->add(createCFGSimplificationPass());     // Merge & remove BBs
     PM->add(createReassociatePass());           // Reassociate expressions
     PM->add(createLoopRotatePass());            // Rotate Loop
     PM->add(createLICMPass());                  // Hoist loop invariants
-    PM->add(createLoopUnswitchPass(OptimizeSize));
+    PM->add(createLoopUnswitchPass(OptimizeSize || OptimizationLevel < 3));
     PM->add(createInstructionCombiningPass());  
     PM->add(createIndVarSimplifyPass());        // Canonicalize indvars
     PM->add(createLoopDeletionPass());          // Delete dead loops
@@ -144,7 +144,7 @@ namespace llvm {
     // Run instcombine after redundancy elimination to exploit opportunities
     // opened up by them.
     PM->add(createInstructionCombiningPass());
-    PM->add(createCondPropagationPass());       // Propagate conditionals
+    PM->add(createJumpThreadingPass());         // Thread jumps
     PM->add(createDeadStoreEliminationPass());  // Delete dead stores
     PM->add(createAggressiveDCEPass());         // Delete dead instructions
     PM->add(createCFGSimplificationPass());     // Merge & remove BBs
@@ -152,10 +152,15 @@ namespace llvm {
     if (UnitAtATime) {
       PM->add(createStripDeadPrototypesPass()); // Get rid of dead prototypes
       PM->add(createDeadTypeEliminationPass()); // Eliminate dead types
-    }
 
-    if (OptimizationLevel > 1 && UnitAtATime)
-      PM->add(createConstantMergePass());       // Merge dup global constants
+      // GlobalOpt already deletes dead functions and globals, at -O3 try a
+      // late pass of GlobalDCE.  It is capable of deleting dead cycles.
+      if (OptimizationLevel > 2)
+        PM->add(createGlobalDCEPass());         // Remove dead fns and globals.
+    
+      if (OptimizationLevel > 1)
+        PM->add(createConstantMergePass());       // Merge dup global constants
+    }
   }
 
   static inline void addOnePass(PassManager *PM, Pass *P, bool AndVerify) {
@@ -230,10 +235,8 @@ namespace llvm {
     addOnePass(PM, createInstructionCombiningPass(), VerifyEach);
 
     addOnePass(PM, createJumpThreadingPass(), VerifyEach);
-    // Cleanup jump threading.
-    addOnePass(PM, createPromoteMemoryToRegisterPass(), VerifyEach);
     
-    // Delete basic blocks, which optimization passes may have killed...
+    // Delete basic blocks, which optimization passes may have killed.
     addOnePass(PM, createCFGSimplificationPass(), VerifyEach);
 
     // Now that we have optimized the program, discard unreachable functions.
diff --git a/libclamav/c++/llvm/include/llvm/Support/TargetFolder.h b/libclamav/c++/llvm/include/llvm/Support/TargetFolder.h
index 8e28632..afed853 100644
--- a/libclamav/c++/llvm/include/llvm/Support/TargetFolder.h
+++ b/libclamav/c++/llvm/include/llvm/Support/TargetFolder.h
@@ -20,29 +20,27 @@
 #define LLVM_SUPPORT_TARGETFOLDER_H
 
 #include "llvm/Constants.h"
+#include "llvm/InstrTypes.h"
 #include "llvm/Analysis/ConstantFolding.h"
 
 namespace llvm {
 
 class TargetData;
-class LLVMContext;
 
 /// TargetFolder - Create constants with target dependent folding.
 class TargetFolder {
   const TargetData *TD;
-  LLVMContext &Context;
 
   /// Fold - Fold the constant using target specific information.
   Constant *Fold(Constant *C) const {
     if (ConstantExpr *CE = dyn_cast<ConstantExpr>(C))
-      if (Constant *CF = ConstantFoldConstantExpression(CE, Context, TD))
+      if (Constant *CF = ConstantFoldConstantExpression(CE, TD))
         return CF;
     return C;
   }
 
 public:
-  explicit TargetFolder(const TargetData *TheTD, LLVMContext &C) :
-    TD(TheTD), Context(C) {}
+  explicit TargetFolder(const TargetData *TheTD) : TD(TheTD) {}
 
   //===--------------------------------------------------------------------===//
   // Binary Operators
@@ -179,6 +177,16 @@ public:
   Constant *CreatePtrToInt(Constant *C, const Type *DestTy) const {
     return CreateCast(Instruction::PtrToInt, C, DestTy);
   }
+  Constant *CreateZExtOrBitCast(Constant *C, const Type *DestTy) const {
+    if (C->getType() == DestTy)
+      return C; // avoid calling Fold
+    return Fold(ConstantExpr::getZExtOrBitCast(C, DestTy));
+  }
+  Constant *CreateSExtOrBitCast(Constant *C, const Type *DestTy) const {
+    if (C->getType() == DestTy)
+      return C; // avoid calling Fold
+    return Fold(ConstantExpr::getSExtOrBitCast(C, DestTy));
+  }
   Constant *CreateTruncOrBitCast(Constant *C, const Type *DestTy) const {
     if (C->getType() == DestTy)
       return C; // avoid calling Fold
diff --git a/libclamav/c++/llvm/include/llvm/Support/Timer.h b/libclamav/c++/llvm/include/llvm/Support/Timer.h
index 54f1da9..8a0f55d 100644
--- a/libclamav/c++/llvm/include/llvm/Support/Timer.h
+++ b/libclamav/c++/llvm/include/llvm/Support/Timer.h
@@ -15,7 +15,7 @@
 #ifndef LLVM_SUPPORT_TIMER_H
 #define LLVM_SUPPORT_TIMER_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/System/Mutex.h"
 #include <string>
 #include <vector>
diff --git a/libclamav/c++/llvm/include/llvm/Support/ValueHandle.h b/libclamav/c++/llvm/include/llvm/Support/ValueHandle.h
index d22b30a..a9872a7 100644
--- a/libclamav/c++/llvm/include/llvm/Support/ValueHandle.h
+++ b/libclamav/c++/llvm/include/llvm/Support/ValueHandle.h
@@ -54,6 +54,8 @@ private:
   PointerIntPair<ValueHandleBase**, 2, HandleBaseKind> PrevPair;
   ValueHandleBase *Next;
   Value *VP;
+  
+  explicit ValueHandleBase(const ValueHandleBase&); // DO NOT IMPLEMENT.
 public:
   explicit ValueHandleBase(HandleBaseKind Kind)
     : PrevPair(0, Kind), Next(0), VP(0) {}
@@ -109,10 +111,15 @@ private:
   HandleBaseKind getKind() const { return PrevPair.getInt(); }
   void setPrevPtr(ValueHandleBase **Ptr) { PrevPair.setPointer(Ptr); }
 
-  /// AddToExistingUseList - Add this ValueHandle to the use list for VP,
-  /// where List is known to point into the existing use list.
+  /// AddToExistingUseList - Add this ValueHandle to the use list for VP, where
+  /// List is the address of either the head of the list or a Next node within
+  /// the existing use list.
   void AddToExistingUseList(ValueHandleBase **List);
 
+  /// AddToExistingUseListAfter - Add this ValueHandle to the use list after
+  /// Node.
+  void AddToExistingUseListAfter(ValueHandleBase *Node);
+
   /// AddToUseList - Add this ValueHandle to the use list for VP.
   void AddToUseList();
   /// RemoveFromUseList - Remove this ValueHandle from its current use list.
@@ -131,6 +138,13 @@ public:
   WeakVH(const WeakVH &RHS)
     : ValueHandleBase(Weak, RHS) {}
 
+  Value *operator=(Value *RHS) {
+    return ValueHandleBase::operator=(RHS);
+  }
+  Value *operator=(const ValueHandleBase &RHS) {
+    return ValueHandleBase::operator=(RHS);
+  }
+
   operator Value*() const {
     return getValPtr();
   }
@@ -224,6 +238,31 @@ template<> struct simplify_type<const AssertingVH<Value> > {
 template<> struct simplify_type<AssertingVH<Value> >
   : public simplify_type<const AssertingVH<Value> > {};
 
+// Specialize DenseMapInfo to allow AssertingVH to participate in DenseMap.
+template<typename T>
+struct DenseMapInfo<AssertingVH<T> > {
+  typedef DenseMapInfo<T*> PointerInfo;
+  static inline AssertingVH<T> getEmptyKey() {
+    return AssertingVH<T>(PointerInfo::getEmptyKey());
+  }
+  static inline T* getTombstoneKey() {
+    return AssertingVH<T>(PointerInfo::getTombstoneKey());
+  }
+  static unsigned getHashValue(const AssertingVH<T> &Val) {
+    return PointerInfo::getHashValue(Val);
+  }
+  static bool isEqual(const AssertingVH<T> &LHS, const AssertingVH<T> &RHS) {
+    return LHS == RHS;
+  }
+  static bool isPod() {
+#ifdef NDEBUG
+    return true;
+#else
+    return false;
+#endif
+  }
+};
+
 /// TrackingVH - This is a value handle that tracks a Value (or Value subclass),
 /// even across RAUW operations.
 ///
@@ -347,7 +386,7 @@ public:
   /// _before_ any of the uses have actually been replaced.  If WeakVH were
   /// implemented as a CallbackVH, it would use this method to call
   /// setValPtr(new_value).  AssertingVH would do nothing in this method.
-  virtual void allUsesReplacedWith(Value *new_value) {}
+  virtual void allUsesReplacedWith(Value *) {}
 };
 
 // Specialize simplify_type to allow CallbackVH to participate in
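
The DenseMapInfo specialization added above makes asserting handles directly
usable as DenseMap keys. A sketch (map purpose invented):

    #include "llvm/ADT/DenseMap.h"
    #include "llvm/Instruction.h"
    #include "llvm/Support/ValueHandle.h"
    using namespace llvm;

    // Keys assert if an instruction is deleted while still in the map.
    DenseMap<AssertingVH<Instruction>, unsigned> UseCounts;

    void record(Instruction *I) {
      ++UseCounts[I];  // implicit conversion to AssertingVH<Instruction>
    }
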
diff --git a/libclamav/c++/llvm/include/llvm/Support/raw_ostream.h b/libclamav/c++/llvm/include/llvm/Support/raw_ostream.h
index 7827dd8..a78e81f 100644
--- a/libclamav/c++/llvm/include/llvm/Support/raw_ostream.h
+++ b/libclamav/c++/llvm/include/llvm/Support/raw_ostream.h
@@ -15,7 +15,7 @@
 #define LLVM_SUPPORT_RAW_OSTREAM_H
 
 #include "llvm/ADT/StringRef.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
   class format_object_base;
@@ -51,7 +51,7 @@ private:
   /// for a \see write_impl() call to handle the data which has been put into
   /// this buffer.
   char *OutBufStart, *OutBufEnd, *OutBufCur;
-  
+
   enum BufferKind {
     Unbuffered = 0,
     InternalBuffer,
@@ -169,7 +169,7 @@ public:
     return *this;
   }
 
-  raw_ostream &operator<<(const StringRef &Str) {
+  raw_ostream &operator<<(StringRef Str) {
     // Inline fast path, particularly for strings with a known length.
     size_t Size = Str.size();
 
@@ -211,11 +211,15 @@ public:
     return *this;
   }
 
-  raw_ostream &operator<<(double N);  
+  raw_ostream &operator<<(double N);
 
   /// write_hex - Output \arg N in hexadecimal, without any prefix or padding.
   raw_ostream &write_hex(unsigned long long N);
 
+  /// write_escaped - Output \arg Str, turning '\\', '\t', '\n', '"', and
+  /// anything that doesn't satisfy std::isprint into an escape sequence.
+  raw_ostream &write_escaped(StringRef Str);
+
   raw_ostream &write(unsigned char C);
   raw_ostream &write(const char *Ptr, size_t Size);
 
@@ -224,8 +228,8 @@ public:
 
   /// indent - Insert 'NumSpaces' spaces.
   raw_ostream &indent(unsigned NumSpaces);
-  
-  
+
+
   /// Changes the foreground color of text that will be output from this point
   /// forward.
   /// @param colors ANSI color to use, the special SAVEDCOLOR can be used to
@@ -233,8 +237,8 @@ public:
   /// @param bold bold/brighter text, default false
   /// @param bg if true change the background, default: change foreground
   /// @returns itself so it can be used within << invocations
-  virtual raw_ostream &changeColor(enum Colors colors, bool bold=false,
-                                   bool  bg=false) { return *this; }
+  virtual raw_ostream &changeColor(enum Colors, bool = false,
+                                   bool = false) { return *this; }
 
   /// Resets the colors to terminal defaults. Call this when you are done
   /// outputting colored text, or before program exit.
@@ -253,7 +257,7 @@ private:
   /// write_impl - The is the piece of the class that is implemented
   /// by subclasses.  This writes the \args Size bytes starting at
   /// \arg Ptr to the underlying stream.
-  /// 
+  ///
   /// This function is guaranteed to only be called at a point at which it is
   /// safe for the subclass to install a new buffer via SetBuffer.
   ///
@@ -331,7 +335,7 @@ class raw_fd_ostream : public raw_ostream {
   virtual size_t preferred_buffer_size();
 
 public:
-  
+
   enum {
     /// F_Excl - When opening a file, this flag makes raw_fd_ostream
     /// report an error if the file already exists.
@@ -346,7 +350,7 @@ public:
     /// make this distinction.
     F_Binary = 4
   };
-  
+
   /// raw_fd_ostream - Open the specified file for writing. If an error occurs,
   /// information about the error is put into ErrorInfo, and the stream should
   /// be immediately destroyed; the string will be empty if no error occurred.
@@ -359,10 +363,10 @@ public:
 
   /// raw_fd_ostream ctor - FD is the file descriptor that this writes to.  If
   /// ShouldClose is true, this closes the file when the stream is destroyed.
-  raw_fd_ostream(int fd, bool shouldClose, 
-                 bool unbuffered=false) : raw_ostream(unbuffered), FD(fd), 
+  raw_fd_ostream(int fd, bool shouldClose,
+                 bool unbuffered=false) : raw_ostream(unbuffered), FD(fd),
                                           ShouldClose(shouldClose) {}
-  
+
   ~raw_fd_ostream();
 
   /// close - Manually flush the stream and close the file.
@@ -465,7 +469,7 @@ public:
 class raw_null_ostream : public raw_ostream {
   /// write_impl - See raw_ostream::write_impl.
   virtual void write_impl(const char *Ptr, size_t size);
-  
+
   /// current_pos - Return the current position within the stream, not
   /// counting the bytes currently in the buffer.
   virtual uint64_t current_pos();
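
A short sketch of the new write_escaped entry point (string contents
invented); per its doc comment it escapes backslashes, tabs, newlines,
quotes, and anything std::isprint rejects:

    #include "llvm/Support/raw_ostream.h"
    using namespace llvm;

    void dumpQuoted(StringRef S) {
      outs() << '"';
      outs().write_escaped(S);  // a tab is emitted as the two characters \t
      outs() << "\"\n";
    }
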
diff --git a/libclamav/c++/llvm/include/llvm/Support/type_traits.h b/libclamav/c++/llvm/include/llvm/Support/type_traits.h
index 71f7655..ce916b5 100644
--- a/libclamav/c++/llvm/include/llvm/Support/type_traits.h
+++ b/libclamav/c++/llvm/include/llvm/Support/type_traits.h
@@ -49,6 +49,17 @@ struct is_class
     enum { value = sizeof(char) == sizeof(dont_use::is_class_helper<T>(0)) };
 };
 
+/// \brief Metafunction that determines whether the two given types are 
+/// equivalent.
+template<typename T, typename U>
+struct is_same {
+  static const bool value = false;
+};
+
+template<typename T>
+struct is_same<T, T> {
+  static const bool value = true;
+};
   
 // enable_if_c - Enable/disable a template based on a metafunction
 template<bool Cond, typename T = void>
@@ -76,6 +87,21 @@ struct is_base_of {
       sizeof(char) == sizeof(dont_use::base_of_helper<Base>((Derived*)0));
 };
 
+// remove_pointer - Metafunction to turn Foo* into Foo.  Defined in
+// C++0x [meta.trans.ptr].
+template <typename T> struct remove_pointer { typedef T type; };
+template <typename T> struct remove_pointer<T*> { typedef T type; };
+template <typename T> struct remove_pointer<T*const> { typedef T type; };
+template <typename T> struct remove_pointer<T*volatile> { typedef T type; };
+template <typename T> struct remove_pointer<T*const volatile> {
+    typedef T type; };
+
+template <bool, typename T, typename F>
+struct conditional { typedef T type; };
+
+template <typename T, typename F>
+struct conditional<false, T, F> { typedef F type; };
+
 }
 
 #endif
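
A compile-time sanity sketch for the new metafunctions (the sized-array
trick stands in for static_assert, which is not available in this pre-C++11
codebase):

    #include "llvm/Support/type_traits.h"
    using namespace llvm;

    // Each array has size 1 only if the condition holds; -1 fails to compile.
    char CheckSame[is_same<remove_pointer<int*>::type, int>::value ? 1 : -1];
    char CheckCond[is_same<conditional<false, char, long>::type, long>::value
                   ? 1 : -1];
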
diff --git a/libclamav/c++/llvm/include/llvm/System/AIXDataTypesFix.h b/libclamav/c++/llvm/include/llvm/System/AIXDataTypesFix.h
new file mode 100644
index 0000000..8dbf02f
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/System/AIXDataTypesFix.h
@@ -0,0 +1,25 @@
+//===-- llvm/System/AIXDataTypesFix.h - Fix datatype defs ------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file overrides default system-defined types and limits which cannot be
+// done in DataTypes.h.in because it is processed by autoheader first, which
+// comments out any #undef statement.
+//
+//===----------------------------------------------------------------------===//
+
+// No include guards desired!
+
+#ifndef SUPPORT_DATATYPES_H
+#error "AIXDataTypesFix.h must only be included via DataTypes.h!"
+#endif
+
+// GCC is strict about defining large constants: they must have LL modifier.
+// These will be defined properly at the end of DataTypes.h
+#undef INT64_MAX
+#undef INT64_MIN
diff --git a/libclamav/c++/llvm/include/llvm/System/Atomic.h b/libclamav/c++/llvm/include/llvm/System/Atomic.h
index 4ec117b..0c05d69 100644
--- a/libclamav/c++/llvm/include/llvm/System/Atomic.h
+++ b/libclamav/c++/llvm/include/llvm/System/Atomic.h
@@ -14,7 +14,7 @@
 #ifndef LLVM_SYSTEM_ATOMIC_H
 #define LLVM_SYSTEM_ATOMIC_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
   namespace sys {
diff --git a/libclamav/c++/llvm/include/llvm/System/DataTypes.h.cmake b/libclamav/c++/llvm/include/llvm/System/DataTypes.h.cmake
new file mode 100644
index 0000000..180c86c
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/System/DataTypes.h.cmake
@@ -0,0 +1,152 @@
+/*===-- include/System/DataTypes.h - Define fixed size types -----*- C -*-===*\
+|*                                                                            *|
+|*                     The LLVM Compiler Infrastructure                       *|
+|*                                                                            *|
+|* This file is distributed under the University of Illinois Open Source      *|
+|* License. See LICENSE.TXT for details.                                      *|
+|*                                                                            *|
+|*===----------------------------------------------------------------------===*|
+|*                                                                            *|
+|* This file contains definitions to figure out the size of _HOST_ data types.*|
+|* This file is important because different host OS's define different macros,*|
+|* which makes portability tough.  This file exports the following            *|
+|* definitions:                                                               *|
+|*                                                                            *|
+|*   [u]int(32|64)_t : typedefs for signed and unsigned 32/64 bit system types*|
+|*   [U]INT(8|16|32|64)_(MIN|MAX) : Constants for the min and max values.     *|
+|*                                                                            *|
+|* No library is required when using these functions.                         *|
+|*                                                                            *|
+|*===----------------------------------------------------------------------===*/
+
+/* Please leave this file C-compatible. */
+
+#ifndef SUPPORT_DATATYPES_H
+#define SUPPORT_DATATYPES_H
+
+#cmakedefine HAVE_SYS_TYPES_H ${HAVE_SYS_TYPES_H}
+#cmakedefine HAVE_INTTYPES_H ${HAVE_INTTYPES_H}
+#cmakedefine HAVE_STDINT_H ${HAVE_STDINT_H}
+#cmakedefine HAVE_UINT64_T ${HAVE_UINT64_T}
+#cmakedefine HAVE_U_INT64_T ${HAVE_U_INT64_T}
+
+#ifdef __cplusplus
+#include <cmath>
+#else
+#include <math.h>
+#endif
+
+#ifndef _MSC_VER
+
+/* Note that this header's correct operation depends on __STDC_LIMIT_MACROS
+   being defined.  We would define it here, but in order to prevent Bad Things
+   happening when system headers or C++ STL headers include stdint.h before we
+   define it here, we define it on the g++ command line (in Makefile.rules). */
+#if !defined(__STDC_LIMIT_MACROS)
+# error "Must #define __STDC_LIMIT_MACROS before #including System/DataTypes.h"
+#endif
+
+#if !defined(__STDC_CONSTANT_MACROS)
+# error "Must #define __STDC_CONSTANT_MACROS before " \
+        "#including System/DataTypes.h"
+#endif
+
+/* Note that <inttypes.h> includes <stdint.h>, if this is a C99 system. */
+#ifdef HAVE_SYS_TYPES_H
+#include <sys/types.h>
+#endif
+
+#ifdef HAVE_INTTYPES_H
+#include <inttypes.h>
+#endif
+
+#ifdef HAVE_STDINT_H
+#include <stdint.h>
+#endif
+
+#ifdef _AIX
+#include "llvm/System/AIXDataTypesFix.h"
+#endif
+
+/* Handle incorrect definition of uint64_t as u_int64_t */
+#ifndef HAVE_UINT64_T
+#ifdef HAVE_U_INT64_T
+typedef u_int64_t uint64_t;
+#else
+# error "Don't have a definition for uint64_t on this platform"
+#endif
+#endif
+
+#ifdef _OpenBSD_
+#define INT8_MAX 127
+#define INT8_MIN -128
+#define UINT8_MAX 255
+#define INT16_MAX 32767
+#define INT16_MIN -32768
+#define UINT16_MAX 65535
+#define INT32_MAX 2147483647
+#define INT32_MIN -2147483648
+#define UINT32_MAX 4294967295U
+#endif
+
+#else /* _MSC_VER */
+/* Visual C++ doesn't provide standard integer headers, but it does provide
+   built-in data types. */
+#include <stdlib.h>
+#include <stddef.h>
+#include <sys/types.h>
+#ifdef __cplusplus
+#include <cmath>
+#else
+#include <math.h>
+#endif
+typedef __int64 int64_t;
+typedef unsigned __int64 uint64_t;
+typedef signed int int32_t;
+typedef unsigned int uint32_t;
+typedef short int16_t;
+typedef unsigned short uint16_t;
+typedef signed char int8_t;
+typedef unsigned char uint8_t;
+typedef signed int ssize_t;
+#define INT8_MAX 127
+#define INT8_MIN -128
+#define UINT8_MAX 255
+#define INT16_MAX 32767
+#define INT16_MIN -32768
+#define UINT16_MAX 65535
+#define INT32_MAX 2147483647
+#define INT32_MIN -2147483648
+#define UINT32_MAX 4294967295U
+#define INT8_C(C)   C
+#define UINT8_C(C)  C
+#define INT16_C(C)  C
+#define UINT16_C(C) C
+#define INT32_C(C)  C
+#define UINT32_C(C) C ## U
+#define INT64_C(C)  ((int64_t) C ## LL)
+#define UINT64_C(C) ((uint64_t) C ## ULL)
+#endif /* _MSC_VER */
+
+/* Set defaults for constants which we cannot find. */
+#if !defined(INT64_MAX)
+# define INT64_MAX 9223372036854775807LL
+#endif
+#if !defined(INT64_MIN)
+# define INT64_MIN ((-INT64_MAX)-1)
+#endif
+#if !defined(UINT64_MAX)
+# define UINT64_MAX 0xffffffffffffffffULL
+#endif
+
+#if __GNUC__ > 3
+#define END_WITH_NULL __attribute__((sentinel))
+#else
+#define END_WITH_NULL
+#endif
+
+#ifndef HUGE_VALF
+#define HUGE_VALF (float)HUGE_VAL
+#endif
+
+#endif  /* SUPPORT_DATATYPES_H */
diff --git a/libclamav/c++/llvm/include/llvm/System/DataTypes.h.in b/libclamav/c++/llvm/include/llvm/System/DataTypes.h.in
new file mode 100644
index 0000000..d574910
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/System/DataTypes.h.in
@@ -0,0 +1,147 @@
+/*===-- include/System/DataTypes.h - Define fixed size types -----*- C -*-===*\
+|*                                                                            *|
+|*                     The LLVM Compiler Infrastructure                       *|
+|*                                                                            *|
+|* This file is distributed under the University of Illinois Open Source      *|
+|* License. See LICENSE.TXT for details.                                      *|
+|*                                                                            *|
+|*===----------------------------------------------------------------------===*|
+|*                                                                            *|
+|* This file contains definitions to figure out the size of _HOST_ data types.*|
+|* This file is important because different host OS's define different macros,*|
+|* which makes portability tough.  This file exports the following            *|
+|* definitions:                                                               *|
+|*                                                                            *|
+|*   [u]int(32|64)_t : typedefs for signed and unsigned 32/64 bit system types*|
+|*   [U]INT(8|16|32|64)_(MIN|MAX) : Constants for the min and max values.     *|
+|*                                                                            *|
+|* No library is required when using these functions.                         *|
+|*                                                                            *|
+|*===----------------------------------------------------------------------===*/
+
+/* Please leave this file C-compatible. */
+
+#ifndef SUPPORT_DATATYPES_H
+#define SUPPORT_DATATYPES_H
+
+#undef HAVE_SYS_TYPES_H
+#undef HAVE_INTTYPES_H
+#undef HAVE_STDINT_H
+#undef HAVE_UINT64_T
+#undef HAVE_U_INT64_T
+
+#ifdef __cplusplus
+#include <cmath>
+#else
+#include <math.h>
+#endif
+
+#ifndef _MSC_VER
+
+/* Note that this header's correct operation depends on __STDC_LIMIT_MACROS
+   being defined.  We would define it here, but in order to prevent Bad Things
+   happening when system headers or C++ STL headers include stdint.h before we
+   define it here, we define it on the g++ command line (in Makefile.rules). */
+#if !defined(__STDC_LIMIT_MACROS)
+# error "Must #define __STDC_LIMIT_MACROS before #including System/DataTypes.h"
+#endif
+
+#if !defined(__STDC_CONSTANT_MACROS)
+# error "Must #define __STDC_CONSTANT_MACROS before " \
+        "#including System/DataTypes.h"
+#endif
+
+/* Note that <inttypes.h> includes <stdint.h>, if this is a C99 system. */
+#ifdef HAVE_SYS_TYPES_H
+#include <sys/types.h>
+#endif
+
+#ifdef HAVE_INTTYPES_H
+#include <inttypes.h>
+#endif
+
+#ifdef HAVE_STDINT_H
+#include <stdint.h>
+#endif
+
+#ifdef _AIX
+#include "llvm/System/AIXDataTypesFix.h"
+#endif
+
+/* Handle incorrect definition of uint64_t as u_int64_t */
+#ifndef HAVE_UINT64_T
+#ifdef HAVE_U_INT64_T
+typedef u_int64_t uint64_t;
+#else
+# error "Don't have a definition for uint64_t on this platform"
+#endif
+#endif
+
+#ifdef __OpenBSD__
+#define INT8_MAX 127
+#define INT8_MIN -128
+#define UINT8_MAX 255
+#define INT16_MAX 32767
+#define INT16_MIN -32768
+#define UINT16_MAX 65535
+#define INT32_MAX 2147483647
+#define INT32_MIN -2147483648
+#define UINT32_MAX 4294967295U
+#endif
+
+#else /* _MSC_VER */
+/* Visual C++ doesn't provide standard integer headers, but it does provide
+   built-in data types. */
+#include <stdlib.h>
+#include <stddef.h>
+#include <sys/types.h>
+typedef __int64 int64_t;
+typedef unsigned __int64 uint64_t;
+typedef signed int int32_t;
+typedef unsigned int uint32_t;
+typedef short int16_t;
+typedef unsigned short uint16_t;
+typedef signed char int8_t;
+typedef unsigned char uint8_t;
+typedef signed int ssize_t;
+#define INT8_MAX 127
+#define INT8_MIN -128
+#define UINT8_MAX 255
+#define INT16_MAX 32767
+#define INT16_MIN -32768
+#define UINT16_MAX 65535
+#define INT32_MAX 2147483647
+#define INT32_MIN -2147483648
+#define UINT32_MAX 4294967295U
+#define INT8_C(C)   C
+#define UINT8_C(C)  C
+#define INT16_C(C)  C
+#define UINT16_C(C) C
+#define INT32_C(C)  C
+#define UINT32_C(C) C ## U
+#define INT64_C(C)  ((int64_t) C ## LL)
+#define UINT64_C(C) ((uint64_t) C ## ULL)
+#endif /* _MSC_VER */
+
+/* Set defaults for constants which we cannot find. */
+#if !defined(INT64_MAX)
+# define INT64_MAX 9223372036854775807LL
+#endif
+#if !defined(INT64_MIN)
+# define INT64_MIN ((-INT64_MAX)-1)
+#endif
+#if !defined(UINT64_MAX)
+# define UINT64_MAX 0xffffffffffffffffULL
+#endif
+
+#if __GNUC__ > 3
+#define END_WITH_NULL __attribute__((sentinel))
+#else
+#define END_WITH_NULL
+#endif
+
+#ifndef HUGE_VALF
+#define HUGE_VALF (float)HUGE_VAL
+#endif
+
+#endif  /* SUPPORT_DATATYPES_H */
diff --git a/libclamav/c++/llvm/include/llvm/System/Disassembler.h b/libclamav/c++/llvm/include/llvm/System/Disassembler.h
index 6d1cc0f..e11e792 100644
--- a/libclamav/c++/llvm/include/llvm/System/Disassembler.h
+++ b/libclamav/c++/llvm/include/llvm/System/Disassembler.h
@@ -15,7 +15,7 @@
 #ifndef LLVM_SYSTEM_DISASSEMBLER_H
 #define LLVM_SYSTEM_DISASSEMBLER_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <string>
 
 namespace llvm {
diff --git a/libclamav/c++/llvm/include/llvm/System/Host.h b/libclamav/c++/llvm/include/llvm/System/Host.h
index 3c6aa9d..6de1a4a 100644
--- a/libclamav/c++/llvm/include/llvm/System/Host.h
+++ b/libclamav/c++/llvm/include/llvm/System/Host.h
@@ -41,6 +41,12 @@ namespace sys {
   ///   CPU_TYPE-VENDOR-KERNEL-OPERATING_SYSTEM
   std::string getHostTriple();
 
+  /// getHostCPUName - Get the LLVM name for the host CPU. The particular format
+  /// of the name is target dependent, and suitable for passing as -mcpu to the
+  /// target which matches the host.
+  ///
+  /// \return - The host CPU name, or empty if the CPU could not be determined.
+  std::string getHostCPUName();
 }
 }
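A short usage sketch of the new hook (not part of the patch):

    #include "llvm/System/Host.h"
    #include <iostream>

    int main() {
      // Empty result means the CPU could not be determined, per the comment above.
      std::string CPU = llvm::sys::getHostCPUName();
      std::cout << "host cpu: " << (CPU.empty() ? "<unknown>" : CPU) << "\n";
      return 0;
    }

Since the name is in -mcpu form, a JIT client can pass it straight through when configuring a target for the host.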
 
diff --git a/libclamav/c++/llvm/include/llvm/System/Memory.h b/libclamav/c++/llvm/include/llvm/System/Memory.h
index d6300db..69251dd 100644
--- a/libclamav/c++/llvm/include/llvm/System/Memory.h
+++ b/libclamav/c++/llvm/include/llvm/System/Memory.h
@@ -14,7 +14,7 @@
 #ifndef LLVM_SYSTEM_MEMORY_H
 #define LLVM_SYSTEM_MEMORY_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <string>
 
 namespace llvm {
diff --git a/libclamav/c++/llvm/include/llvm/System/Path.h b/libclamav/c++/llvm/include/llvm/System/Path.h
index 3b73a12..b8554c8 100644
--- a/libclamav/c++/llvm/include/llvm/System/Path.h
+++ b/libclamav/c++/llvm/include/llvm/System/Path.h
@@ -380,6 +380,13 @@ namespace sys {
       /// in the file system.
       bool canWrite() const;
 
+      /// This function checks that the path references a regular file,
+      /// as opposed to /dev/null, block or character special files, and
+      /// other things that aren't "regular" regular files.
+      /// @returns true if the file is S_ISREG.
+      /// @brief Determines if the file is a regular file
+      bool isRegularFile() const;
+
       /// This function determines if the path name references an executable
       /// file in the file system. This function checks for the existence and
       /// executability (by the current program) of the file.
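A hedged sketch of guarding input handling with the new predicate; safeToScan is an illustrative helper, not LLVM API:

    #include "llvm/System/Path.h"
    #include <iostream>

    // Reject /dev/null, block/character specials, FIFOs, etc. before reading.
    static bool safeToScan(const llvm::sys::Path &P) {
      return P.canRead() && P.isRegularFile();
    }

    int main(int argc, char **argv) {
      if (argc > 1) {
        llvm::sys::Path P(argv[1]);
        std::cout << P.str() << (safeToScan(P) ? ": ok" : ": skipped") << "\n";
      }
      return 0;
    }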
diff --git a/libclamav/c++/llvm/include/llvm/System/TimeValue.h b/libclamav/c++/llvm/include/llvm/System/TimeValue.h
index 4e419f1..b82647f 100644
--- a/libclamav/c++/llvm/include/llvm/System/TimeValue.h
+++ b/libclamav/c++/llvm/include/llvm/System/TimeValue.h
@@ -11,7 +11,7 @@
 //
 //===----------------------------------------------------------------------===//
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <string>
 
 #ifndef LLVM_SYSTEM_TIMEVALUE_H
@@ -65,8 +65,8 @@ namespace sys {
   /// @name Types
   /// @{
   public:
-    typedef int64_t SecondsType;        ///< Type used for representing seconds.
-    typedef int32_t NanoSecondsType;    ///< Type used for representing nanoseconds.
+    typedef int64_t SecondsType;    ///< Type used for representing seconds.
+    typedef int32_t NanoSecondsType;///< Type used for representing nanoseconds.
 
     enum TimeConversions {
       NANOSECONDS_PER_SECOND = 1000000000,  ///< One Billion
@@ -251,7 +251,7 @@ namespace sys {
       return seconds_ - PosixZeroTime.seconds_;
     }
 
-    /// Converts the TiemValue into the correspodning number of "ticks" for
+    /// Converts the TimeValue into the corresponding number of "ticks" for
     /// Win32 platforms, correcting for the difference in Win32 zero time.
     /// @brief Convert to windows time (seconds since 12:00:00a Jan 1, 1601)
     uint64_t toWin32Time() const {
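The two epochs can be compared side by side; a small sketch, assuming this header:

    #include "llvm/System/TimeValue.h"
    #include <iostream>

    int main() {
      llvm::sys::TimeValue Now = llvm::sys::TimeValue::now();
      std::cout << "posix time: " << Now.toPosixTime() << "\n";  // epoch 1970
      std::cout << "win32 time: " << Now.toWin32Time() << "\n";  // epoch 1601
      return 0;
    }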
diff --git a/libclamav/c++/llvm/include/llvm/Target/SubtargetFeature.h b/libclamav/c++/llvm/include/llvm/Target/SubtargetFeature.h
index 58333e2..38a3cc2 100644
--- a/libclamav/c++/llvm/include/llvm/Target/SubtargetFeature.h
+++ b/libclamav/c++/llvm/include/llvm/Target/SubtargetFeature.h
@@ -21,7 +21,8 @@
 #include <string>
 #include <vector>
 #include <cstring>
-#include "llvm/Support/DataTypes.h"
+#include "llvm/ADT/Triple.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
   class raw_ostream;
@@ -106,6 +107,10 @@ public:
   
   // Dump feature info.
   void dump() const;
+
+  /// Retrieve a formatted string of the default features for
+  /// the specified target triple.
+  static std::string getDefaultSubtargetFeatures(const Triple &Triple);
 };
 
 } // End namespace llvm
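A sketch of calling the new static helper; the enclosing class name SubtargetFeatures is assumed from this header, and the returned format is target dependent:

    #include "llvm/ADT/Triple.h"
    #include "llvm/Target/SubtargetFeature.h"
    #include <iostream>

    int main() {
      llvm::Triple T("x86_64-unknown-linux-gnu");
      std::cout << llvm::SubtargetFeatures::getDefaultSubtargetFeatures(T)
                << "\n";
      return 0;
    }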
diff --git a/libclamav/c++/llvm/include/llvm/Target/Target.td b/libclamav/c++/llvm/include/llvm/Target/Target.td
index 4d65b19..6f1e066 100644
--- a/libclamav/c++/llvm/include/llvm/Target/Target.td
+++ b/libclamav/c++/llvm/include/llvm/Target/Target.td
@@ -199,7 +199,7 @@ class Instruction {
   bit isReMaterializable = 0; // Is this instruction re-materializable?
   bit isPredicable = 0;     // Is this instruction predicable?
   bit hasDelaySlot = 0;     // Does this instruction have an delay slot?
-  bit usesCustomDAGSchedInserter = 0; // Pseudo instr needing special help.
+  bit usesCustomInserter = 0; // Pseudo instr needing special help.
   bit hasCtrlDep   = 0;     // Does this instruction r/w ctrl-flow chains?
   bit isNotDuplicable = 0;  // Is it unsafe to duplicate this instruction?
   bit isAsCheapAsAMove = 0; // As cheap (or cheaper) than a move instruction.
@@ -413,6 +413,7 @@ def DBG_LABEL : Instruction {
   let AsmString = "";
   let Namespace = "TargetInstrInfo";
   let hasCtrlDep = 1;
+  let isNotDuplicable = 1;
 }
 def EH_LABEL : Instruction {
   let OutOperandList = (ops);
@@ -420,6 +421,7 @@ def EH_LABEL : Instruction {
   let AsmString = "";
   let Namespace = "TargetInstrInfo";
   let hasCtrlDep = 1;
+  let isNotDuplicable = 1;
 }
 def GC_LABEL : Instruction {
   let OutOperandList = (ops);
@@ -427,6 +429,7 @@ def GC_LABEL : Instruction {
   let AsmString = "";
   let Namespace = "TargetInstrInfo";
   let hasCtrlDep = 1;
+  let isNotDuplicable = 1;
 }
 def KILL : Instruction {
   let OutOperandList = (ops);
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetData.h b/libclamav/c++/llvm/include/llvm/Target/TargetData.h
index f8ea64b..e1d052e 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetData.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetData.h
@@ -21,10 +21,7 @@
 #define LLVM_TARGET_TARGETDATA_H
 
 #include "llvm/Pass.h"
-#include "llvm/Support/DataTypes.h"
-#include "llvm/Support/ErrorHandling.h"
 #include "llvm/ADT/SmallVector.h"
-#include <string>
 
 namespace llvm {
 
@@ -33,6 +30,7 @@ class Type;
 class IntegerType;
 class StructType;
 class StructLayout;
+class StructLayoutMap;
 class GlobalVariable;
 class LLVMContext;
 
@@ -73,26 +71,22 @@ private:
   unsigned char PointerABIAlign;       ///< Pointer ABI alignment
   unsigned char PointerPrefAlign;      ///< Pointer preferred alignment
 
-  //! Where the primitive type alignment data is stored.
-  /*!
-   @sa init().
-   @note Could support multiple size pointer alignments, e.g., 32-bit pointers
-   vs. 64-bit pointers by extending TargetAlignment, but for now, we don't.
-   */
+  SmallVector<unsigned char, 8> LegalIntWidths; ///< Legal Integers.
+  
+  /// Alignments- Where the primitive type alignment data is stored.
+  ///
+  /// @sa init().
+  /// @note Could support multiple size pointer alignments, e.g., 32-bit
+  /// pointers vs. 64-bit pointers by extending TargetAlignment, but for now,
+  /// we don't.
   SmallVector<TargetAlignElem, 16> Alignments;
-  //! Alignment iterator shorthand
-  typedef SmallVector<TargetAlignElem, 16>::iterator align_iterator;
-  //! Constant alignment iterator shorthand
-  typedef SmallVector<TargetAlignElem, 16>::const_iterator align_const_iterator;
-  //! Invalid alignment.
-  /*!
-    This member is a signal that a requested alignment type and bit width were
-    not found in the SmallVector.
-   */
+  
+  /// InvalidAlignmentElem - This member is a signal that a requested alignment
+  /// type and bit width were not found in the SmallVector.
   static const TargetAlignElem InvalidAlignmentElem;
 
-  // Opaque pointer for the StructType -> StructLayout map.
-  mutable void* LayoutMap;
+  // The StructType -> StructLayout map.
+  mutable StructLayoutMap *LayoutMap;
 
   //! Set/initialize target alignments
   void setAlignment(AlignTypeEnum align_type, unsigned char abi_align,
@@ -106,8 +100,8 @@ private:
   ///
   /// Predicate that tests a TargetAlignElem reference returned by get() against
   /// InvalidAlignmentElem.
-  inline bool validAlignment(const TargetAlignElem &align) const {
-    return (&align != &InvalidAlignmentElem);
+  bool validAlignment(const TargetAlignElem &align) const {
+    return &align != &InvalidAlignmentElem;
   }
 
 public:
@@ -115,13 +109,10 @@ public:
   ///
   /// @note This has to exist, because this is a pass, but it should never be
   /// used.
-  TargetData() : ImmutablePass(&ID) {
-    llvm_report_error("Bad TargetData ctor used.  "
-                      "Tool did not specify a TargetData to use?");
-  }
-
+  TargetData();
+  
   /// Constructs a TargetData from a specification string. See init().
-  explicit TargetData(const std::string &TargetDescription)
+  explicit TargetData(StringRef TargetDescription)
     : ImmutablePass(&ID) {
     init(TargetDescription);
   }
@@ -135,6 +126,7 @@ public:
     PointerMemSize(TD.PointerMemSize),
     PointerABIAlign(TD.PointerABIAlign),
     PointerPrefAlign(TD.PointerPrefAlign),
+    LegalIntWidths(TD.LegalIntWidths),
     Alignments(TD.Alignments),
     LayoutMap(0)
   { }
@@ -142,16 +134,35 @@ public:
   ~TargetData();  // Not virtual, do not subclass this class
 
   //! Parse a target data layout string and initialize TargetData alignments.
-  void init(const std::string &TargetDescription);
+  void init(StringRef TargetDescription);
 
   /// Target endianness...
-  bool          isLittleEndian()       const { return     LittleEndian; }
-  bool          isBigEndian()          const { return    !LittleEndian; }
+  bool isLittleEndian() const { return LittleEndian; }
+  bool isBigEndian() const { return !LittleEndian; }
 
   /// getStringRepresentation - Return the string representation of the
   /// TargetData.  This representation is in the same format accepted by the
   /// string constructor above.
   std::string getStringRepresentation() const;
+  
+  /// isLegalInteger - This function returns true if the specified type is
+  /// known to be a native integer type supported by the CPU.  For example,
+  /// i64 is not native on most 32-bit CPUs and i37 is not native on any known
+  /// one.  This returns false if the integer width is not legal.
+  ///
+  /// The width is specified in bits.
+  ///
+  bool isLegalInteger(unsigned Width) const {
+    for (unsigned i = 0, e = LegalIntWidths.size(); i != e; ++i)
+      if (LegalIntWidths[i] == Width)
+        return true;
+    return false;
+  }
+  
+  bool isIllegalInteger(unsigned Width) const {
+    return !isLegalInteger(Width);
+  }
+  
   /// Target pointer alignment
   unsigned char getPointerABIAlignment() const { return PointerABIAlign; }
   /// Return target's alignment for stack-based pointers
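A sketch of the new legality queries, assuming init() accepts the "n" (native integer widths) spec that fills LegalIntWidths:

    #include "llvm/Target/TargetData.h"
    #include <cassert>

    void demo() {
      // A typical 64-bit layout string; "n8:16:32:64" lists the legal widths.
      llvm::TargetData TD("e-p:64:64:64-i64:64:64-n8:16:32:64");
      assert(TD.isLegalInteger(32));
      assert(TD.isIllegalInteger(37));  // i37 is native on no known CPU
    }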
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetInstrDesc.h b/libclamav/c++/llvm/include/llvm/Target/TargetInstrDesc.h
index d828a23..b0ed0bf 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetInstrDesc.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetInstrDesc.h
@@ -109,7 +109,7 @@ namespace TID {
     UnmodeledSideEffects,
     Commutable,
     ConvertibleTo3Addr,
-    UsesCustomDAGSchedInserter,
+    UsesCustomInserter,
     Rematerializable,
     CheapAsAMove,
     ExtraSrcRegAllocReq,
@@ -416,7 +416,7 @@ public:
     return Flags & (1 << TID::ConvertibleTo3Addr);
   }
   
-  /// usesCustomDAGSchedInsertionHook - Return true if this instruction requires
+  /// usesCustomInsertionHook - Return true if this instruction requires
   /// custom insertion support when the DAG scheduler is inserting it into a
   /// machine basic block.  If this is true for the instruction, it basically
   /// means that it is a pseudo instruction used at SelectionDAG time that is 
@@ -424,8 +424,8 @@ public:
   ///
   /// If this is true, the TargetLoweringInfo::InsertAtEndOfBasicBlock method
   /// is used to insert this into the MachineBasicBlock.
-  bool usesCustomDAGSchedInsertionHook() const {
-    return Flags & (1 << TID::UsesCustomDAGSchedInserter);
+  bool usesCustomInsertionHook() const {
+    return Flags & (1 << TID::UsesCustomInserter);
   }
   
   /// isRematerializable - Returns true if this instruction is a candidate for
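How a codegen pass consults the renamed flag, in brief:

    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/Target/TargetInstrDesc.h"

    // Pseudo instructions flagged UsesCustomInserter cannot be emitted
    // directly; they are expanded by EmitInstrWithCustomInserter (see the
    // TargetLowering.h change below).
    bool needsCustomInsertion(const llvm::MachineInstr *MI) {
      return MI->getDesc().usesCustomInsertionHook();
    }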
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h b/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h
index 2d21a9b..8070d45 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h
@@ -103,32 +103,64 @@ public:
   /// isTriviallyReMaterializable - Return true if the instruction is trivially
   /// rematerializable, meaning it has no side effects and requires no operands
   /// that aren't always available.
-  bool isTriviallyReMaterializable(const MachineInstr *MI) const {
-    return MI->getDesc().isRematerializable() &&
-           isReallyTriviallyReMaterializable(MI);
+  bool isTriviallyReMaterializable(const MachineInstr *MI,
+                                   AliasAnalysis *AA = 0) const {
+    return MI->getOpcode() == IMPLICIT_DEF ||
+           (MI->getDesc().isRematerializable() &&
+            (isReallyTriviallyReMaterializable(MI, AA) ||
+             isReallyTriviallyReMaterializableGeneric(MI, AA)));
   }
 
 protected:
   /// isReallyTriviallyReMaterializable - For instructions with opcodes for
-  /// which the M_REMATERIALIZABLE flag is set, this function tests whether the
-  /// instruction itself is actually trivially rematerializable, considering
-  /// its operands.  This is used for targets that have instructions that are
-  /// only trivially rematerializable for specific uses.  This predicate must
-  /// return false if the instruction has any side effects other than
-  /// producing a value, or if it requres any address registers that are not
-  /// always available.
-  virtual bool isReallyTriviallyReMaterializable(const MachineInstr *MI) const {
-    return true;
+  /// which the M_REMATERIALIZABLE flag is set, this hook lets the target
+  /// specify whether the instruction is actually trivially rematerializable,
+  /// taking into consideration its operands. This predicate must return false
+  /// if the instruction has any side effects other than producing a value, or
+  /// if it requires any address registers that are not always available.
+  virtual bool isReallyTriviallyReMaterializable(const MachineInstr *MI,
+                                                 AliasAnalysis *AA) const {
+    return false;
   }
 
+private:
+  /// isReallyTriviallyReMaterializableGeneric - For instructions with opcodes
+  /// for which the M_REMATERIALIZABLE flag is set and the target hook
+  /// isReallyTriviallyReMaterializable returns false, this function does
+  /// target-independent tests to determine if the instruction is really
+  /// trivially rematerializable.
+  bool isReallyTriviallyReMaterializableGeneric(const MachineInstr *MI,
+                                                AliasAnalysis *AA) const;
+
 public:
-  /// Return true if the instruction is a register to register move and return
-  /// the source and dest operands and their sub-register indices by reference.
+  /// isMoveInstr - Return true if the instruction is a register to register
+  /// move and return the source and dest operands and their sub-register
+  /// indices by reference.
   virtual bool isMoveInstr(const MachineInstr& MI,
                            unsigned& SrcReg, unsigned& DstReg,
                            unsigned& SrcSubIdx, unsigned& DstSubIdx) const {
     return false;
   }
+
+  /// isIdentityCopy - Return true if the instruction is a copy (or
+  /// extract_subreg, insert_subreg, subreg_to_reg) where the source and
+  /// destination registers are the same.
+  bool isIdentityCopy(const MachineInstr &MI) const {
+    unsigned SrcReg, DstReg, SrcSubIdx, DstSubIdx;
+    if (isMoveInstr(MI, SrcReg, DstReg, SrcSubIdx, DstSubIdx) &&
+        SrcReg == DstReg)
+      return true;
+
+    if (MI.getOpcode() == TargetInstrInfo::EXTRACT_SUBREG &&
+        MI.getOperand(0).getReg() == MI.getOperand(1).getReg())
+      return true;
+
+    if ((MI.getOpcode() == TargetInstrInfo::INSERT_SUBREG ||
+         MI.getOpcode() == TargetInstrInfo::SUBREG_TO_REG) &&
+        MI.getOperand(0).getReg() == MI.getOperand(2).getReg())
+      return true;
+    return false;
+  }
   
   /// isLoadFromStackSlot - If the specified machine instruction is a direct
   /// load from a stack slot, return the virtual or physical register number of
@@ -139,6 +171,25 @@ public:
                                        int &FrameIndex) const {
     return 0;
   }
+
+  /// isLoadFromStackSlotPostFE - Check for post-frame ptr elimination
+  /// stack locations as well.  This uses a heuristic so it isn't
+  /// reliable for correctness.
+  virtual unsigned isLoadFromStackSlotPostFE(const MachineInstr *MI,
+                                             int &FrameIndex) const {
+    return 0;
+  }
+
+  /// hasLoadFromStackSlot - If the specified machine instruction has
+  /// a load from a stack slot, return true along with the FrameIndex
+  /// of the loaded stack slot.  If not, return false.  Unlike
+  /// isLoadFromStackSlot, this returns true for any instructions that
+  /// loads from the stack.  This is just a hint, as some cases may be
+  /// missed.
+  virtual bool hasLoadFromStackSlot(const MachineInstr *MI,
+                                    int &FrameIndex) const {
+    return false;
+  }
   
   /// isStoreToStackSlot - If the specified machine instruction is a direct
   /// store to a stack slot, return the virtual or physical register number of
@@ -150,23 +201,33 @@ public:
     return 0;
   }
 
+  /// isStoreToStackSlotPostFE - Check for post-frame ptr elimination
+  /// stack locations as well.  This uses a heuristic so it isn't
+  /// reliable for correctness.
+  virtual unsigned isStoreToStackSlotPostFE(const MachineInstr *MI,
+                                      int &FrameIndex) const {
+    return 0;
+  }
+
+  /// hasStoreToStackSlot - If the specified machine instruction has a
+  /// store to a stack slot, return true along with the FrameIndex of
+  /// the stored stack slot.  If not, return false.  Unlike
+  /// isStoreToStackSlot, this returns true for any instruction that
+  /// stores to the stack.  This is just a hint, as some cases may be
+  /// missed.
+  virtual bool hasStoreToStackSlot(const MachineInstr *MI,
+                                   int &FrameIndex) const {
+    return false;
+  }
+
   /// reMaterialize - Re-issue the specified 'original' instruction at the
   /// specific location targeting a new destination register.
   virtual void reMaterialize(MachineBasicBlock &MBB,
                              MachineBasicBlock::iterator MI,
                              unsigned DestReg, unsigned SubIdx,
-                             const MachineInstr *Orig) const = 0;
-
-  /// isInvariantLoad - Return true if the specified instruction (which is
-  /// marked mayLoad) is loading from a location whose value is invariant across
-  /// the function.  For example, loading a value from the constant pool or from
-  /// from the argument area of a function if it does not change.  This should
-  /// only return true of *all* loads the instruction does are invariant (if it
-  /// does multiple loads).
-  virtual bool isInvariantLoad(const MachineInstr *MI) const {
-    return false;
-  }
-  
+                             const MachineInstr *Orig,
+                             const TargetRegisterInfo *TRI) const = 0;
+
   /// convertToThreeAddress - This method must be implemented by targets that
   /// set the M_CONVERTIBLE_TO_3_ADDR flag.  When this flag is set, the target
   /// may be able to convert a two-address instruction into one or more true
@@ -204,6 +265,14 @@ public:
   virtual bool findCommutedOpIndices(MachineInstr *MI, unsigned &SrcOpIdx1,
                                      unsigned &SrcOpIdx2) const = 0;
 
+  /// isIdentical - Return true if two instructions are identical. This differs
+  /// from MachineInstr::isIdenticalTo() in that it does not require the
+  /// virtual destination registers to be the same. This is used by MachineLICM
+  /// and other MI passes to perform CSE.
+  virtual bool isIdentical(const MachineInstr *MI,
+                           const MachineInstr *Other,
+                           const MachineRegisterInfo *MRI) const = 0;
+
   /// AnalyzeBranch - Analyze the branching code at the end of MBB, returning
   /// true if it cannot be understood (e.g. it's a switch dispatch or isn't
   /// implemented for a target).  Upon success, this returns false and returns
@@ -383,9 +452,12 @@ public:
   /// getOpcodeAfterMemoryUnfold - Returns the opcode of the would be new
   /// instruction after load / store are unfolded from an instruction of the
   /// specified opcode. It returns zero if the specified unfolding is not
-  /// possible.
+  /// possible. If LoadRegIndex is non-null, it is filled in with the index
+  /// of the operand which will hold the register holding the loaded
+  /// value.
   virtual unsigned getOpcodeAfterMemoryUnfold(unsigned Opc,
-                                      bool UnfoldLoad, bool UnfoldStore) const {
+                                      bool UnfoldLoad, bool UnfoldStore,
+                                      unsigned *LoadRegIndex = 0) const {
     return 0;
   }
   
@@ -442,16 +514,19 @@ public:
     return false;
   }
 
+  /// isPredicable - Return true if the specified instruction can be predicated.
+  /// By default, this returns true for every instruction with a
+  /// PredicateOperand.
+  virtual bool isPredicable(MachineInstr *MI) const {
+    return MI->getDesc().isPredicable();
+  }
+
   /// isSafeToMoveRegClassDefs - Return true if it's safe to move a machine
   /// instruction that defines the specified register class.
   virtual bool isSafeToMoveRegClassDefs(const TargetRegisterClass *RC) const {
     return true;
   }
 
-  /// isDeadInstruction - Return true if the instruction is considered dead.
-  /// This allows some late codegen passes to delete them.
-  virtual bool isDeadInstruction(const MachineInstr *MI) const = 0;
-
   /// GetInstSize - Returns the size of the specified Instruction.
   /// 
   virtual unsigned GetInstSizeInBytes(const MachineInstr *MI) const {
@@ -468,6 +543,10 @@ public:
   /// length.
   virtual unsigned getInlineAsmLength(const char *Str,
                                       const MCAsmInfo &MAI) const;
+
+  /// isProfitableToDuplicateIndirectBranch - Returns true if tail duplication
+  /// is especially profitable for indirect branches.
+  virtual bool isProfitableToDuplicateIndirectBranch() const { return false; }
 };
 
 /// TargetInstrInfoImpl - This is the default implementation of
@@ -488,8 +567,11 @@ public:
   virtual void reMaterialize(MachineBasicBlock &MBB,
                              MachineBasicBlock::iterator MI,
                              unsigned DestReg, unsigned SubReg,
-                             const MachineInstr *Orig) const;
-  virtual bool isDeadInstruction(const MachineInstr *MI) const;
+                             const MachineInstr *Orig,
+                             const TargetRegisterInfo *TRI) const;
+  virtual bool isIdentical(const MachineInstr *MI,
+                           const MachineInstr *Other,
+                           const MachineRegisterInfo *MRI) const;
 
   virtual unsigned GetFunctionSizeInBytes(const MachineFunction &MF) const;
 };
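A sketch of a pass-side use of the widened queries; canRemat is illustrative, not LLVM API:

    #include "llvm/Analysis/AliasAnalysis.h"
    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/Target/TargetInstrInfo.h"

    bool canRemat(const llvm::TargetInstrInfo *TII,
                  const llvm::MachineInstr *MI, llvm::AliasAnalysis *AA) {
      // Identity copies are better deleted than rematerialized; anything
      // else defers to the target hook plus the new generic heuristic,
      // which can use AA to prove loads invariant.
      if (TII->isIdentityCopy(*MI))
        return false;
      return TII->isTriviallyReMaterializable(MI, AA);
    }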
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetIntrinsicInfo.h b/libclamav/c++/llvm/include/llvm/Target/TargetIntrinsicInfo.h
index c14275f..ad8ac92 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetIntrinsicInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetIntrinsicInfo.h
@@ -14,6 +14,8 @@
 #ifndef LLVM_TARGET_TARGETINTRINSICINFO_H
 #define LLVM_TARGET_TARGETINTRINSICINFO_H
 
+#include <string>
+
 namespace llvm {
 
 class Function;
@@ -25,35 +27,36 @@ class Type;
 /// TargetIntrinsicInfo - Interface to description of machine instruction set
 ///
 class TargetIntrinsicInfo {
-  
-  const char **Intrinsics;               // Raw array to allow static init'n
-  unsigned NumIntrinsics;                // Number of entries in the desc array
-
-  TargetIntrinsicInfo(const TargetIntrinsicInfo &);  // DO NOT IMPLEMENT
-  void operator=(const TargetIntrinsicInfo &);   // DO NOT IMPLEMENT
+  TargetIntrinsicInfo(const TargetIntrinsicInfo &); // DO NOT IMPLEMENT
+  void operator=(const TargetIntrinsicInfo &);      // DO NOT IMPLEMENT
 public:
-  TargetIntrinsicInfo(const char **desc, unsigned num);
+  TargetIntrinsicInfo();
   virtual ~TargetIntrinsicInfo();
 
-  unsigned getNumIntrinsics() const { return NumIntrinsics; }
+  /// Return the name of a target intrinsic, e.g. "llvm.bfin.ssync".
+  /// The Tys and numTys parameters are for intrinsics with overloaded types
+  /// (e.g., those using iAny or fAny). For a declaration for an overloaded
+  /// intrinsic, Tys should point to an array of numTys pointers to Type,
+  /// and must provide exactly one type for each overloaded type in the
+  /// intrinsic.
+  virtual std::string getName(unsigned IID, const Type **Tys = 0,
+                              unsigned numTys = 0) const = 0;
 
-  virtual Function *getDeclaration(Module *M, const char *BuiltinName) const {
-    return 0;
-  }
+  /// Look up target intrinsic by name. Return intrinsic ID or 0 for unknown
+  /// names.
+  virtual unsigned lookupName(const char *Name, unsigned Len) const = 0;
 
-  // Returns the Function declaration for intrinsic BuiltinName.  If the
-  // intrinsic can be overloaded, uses Tys to return the correct function.
-  virtual Function *getDeclaration(Module *M, const char *BuiltinName,
-                                   const Type **Tys, unsigned numTys) const {
-    return 0;
-  }
+  /// Return the target intrinsic ID of a function, or 0.
+  virtual unsigned getIntrinsicID(Function *F) const;
 
-  // Returns true if the Builtin can be overloaded.
-  virtual bool isOverloaded(Module *M, const char *BuiltinName) const {
-    return false;
-  }
-
-  virtual unsigned getIntrinsicID(Function *F) const { return 0; }
+  /// Returns true if the intrinsic can be overloaded.
+  virtual bool isOverloaded(unsigned IID) const = 0;
+  
+  /// Create or insert an LLVM Function declaration for an intrinsic,
+  /// and return it. The Tys and numTys are for intrinsics with overloaded
+  /// types. See above for more information.
+  virtual Function *getDeclaration(Module *M, unsigned ID, const Type **Tys = 0,
+                                   unsigned numTys = 0) const = 0;
 };
 
 } // End llvm namespace
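Round-tripping the new name interface, sketched; the TargetIntrinsicInfo pointer would come from the owning target:

    #include "llvm/Target/TargetIntrinsicInfo.h"
    #include <cstring>
    #include <string>

    std::string canonicalName(const llvm::TargetIntrinsicInfo *TII,
                              const char *Name) {
      unsigned IID = TII->lookupName(Name, std::strlen(Name));
      return IID ? TII->getName(IID) : std::string();  // 0 means unknown
    }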
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetJITInfo.h b/libclamav/c++/llvm/include/llvm/Target/TargetJITInfo.h
index 9545689..7208a8d 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetJITInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetJITInfo.h
@@ -18,7 +18,8 @@
 #define LLVM_TARGET_TARGETJITINFO_H
 
 #include <cassert>
-#include "llvm/Support/DataTypes.h"
+#include "llvm/Support/ErrorHandling.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
   class Function;
@@ -48,22 +49,28 @@ namespace llvm {
       return 0;
     }
 
+    /// Records the required size and alignment for a call stub in bytes.
+    struct StubLayout {
+      size_t Size;
+      size_t Alignment;
+    };
+    /// Returns the maximum size and alignment for a call stub on this target.
+    virtual StubLayout getStubLayout() {
+      llvm_unreachable("This target doesn't implement getStubLayout!");
+      StubLayout Result = {0, 0};
+      return Result;
+    }
+
     /// emitFunctionStub - Use the specified JITCodeEmitter object to emit a
     /// small native function that simply calls the function at the specified
-    /// address.  Return the address of the resultant function.
-    virtual void *emitFunctionStub(const Function* F, void *Fn,
+    /// address.  The JITCodeEmitter must already have storage allocated for the
+    /// stub.  Return the address of the resultant function, which may have been
+    /// aligned from the address the JCE was set up to emit at.
+    virtual void *emitFunctionStub(const Function* F, void *Target,
                                    JITCodeEmitter &JCE) {
       assert(0 && "This target doesn't implement emitFunctionStub!");
       return 0;
     }
-    
-    /// emitFunctionStubAtAddr - Use the specified JITCodeEmitter object to
-    /// emit a small native function that simply calls Fn. Emit the stub into
-    /// the supplied buffer.
-    virtual void emitFunctionStubAtAddr(const Function* F, void *Fn,
-                                        void *Buffer, JITCodeEmitter &JCE) {
-      assert(0 && "This target doesn't implement emitFunctionStubAtAddr!");
-    }
 
     /// getPICJumpTableEntry - Returns the value of the jumptable entry for the
     /// specific basic block.
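The intended calling sequence for the new stub protocol, sketched; how the storage actually gets reserved in the emitter is left out, since that detail lives in the JIT:

    #include "llvm/Target/TargetJITInfo.h"

    void *emitStub(llvm::TargetJITInfo &TJI, const llvm::Function *F,
                   void *Target, llvm::JITCodeEmitter &JCE) {
      // Ask the target how much room a stub needs...
      llvm::TargetJITInfo::StubLayout SL = TJI.getStubLayout();
      (void)SL;
      // ...the caller must position JCE at SL.Size bytes of storage aligned
      // to SL.Alignment before emitting:
      return TJI.emitFunctionStub(F, Target, JCE);
    }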
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h b/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
index 4f567b0..ca51102 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
@@ -325,12 +325,11 @@ public:
   /// scalarizing vs using the wider vector type.
   virtual EVT getWidenVectorType(EVT VT) const;
 
-  typedef std::vector<APFloat>::const_iterator legal_fpimm_iterator;
-  legal_fpimm_iterator legal_fpimm_begin() const {
-    return LegalFPImmediates.begin();
-  }
-  legal_fpimm_iterator legal_fpimm_end() const {
-    return LegalFPImmediates.end();
+  /// isFPImmLegal - Returns true if the target can instruction select the
+  /// specified FP immediate natively. If false, the legalizer will materialize
+  /// the FP immediate as a load from a constant pool.
+  virtual bool isFPImmLegal(const APFloat &Imm, EVT VT) const {
+    return false;
   }
   
   /// isShuffleMaskLegal - Targets can use this to indicate that they only
@@ -979,7 +978,7 @@ protected:
   /// not work with the with specified type and indicate what to do about it.
   void setLoadExtAction(unsigned ExtType, MVT VT,
                       LegalizeAction Action) {
-    assert((unsigned)VT.SimpleTy < sizeof(LoadExtActions[0])*4 &&
+    assert((unsigned)VT.SimpleTy < MVT::LAST_VALUETYPE &&
            ExtType < array_lengthof(LoadExtActions) &&
            "Table isn't big enough!");
     LoadExtActions[ExtType] &= ~(uint64_t(3UL) << VT.SimpleTy*2);
@@ -991,7 +990,7 @@ protected:
   void setTruncStoreAction(MVT ValVT, MVT MemVT,
                            LegalizeAction Action) {
     assert((unsigned)ValVT.SimpleTy < array_lengthof(TruncStoreActions) &&
-           (unsigned)MemVT.SimpleTy < sizeof(TruncStoreActions[0])*4 &&
+           (unsigned)MemVT.SimpleTy < MVT::LAST_VALUETYPE &&
            "Table isn't big enough!");
     TruncStoreActions[ValVT.SimpleTy] &= ~(uint64_t(3UL)  << MemVT.SimpleTy*2);
     TruncStoreActions[ValVT.SimpleTy] |= (uint64_t)Action << MemVT.SimpleTy*2;
@@ -1026,7 +1025,7 @@ protected:
   void setConvertAction(MVT FromVT, MVT ToVT,
                         LegalizeAction Action) {
     assert((unsigned)FromVT.SimpleTy < array_lengthof(ConvertActions) &&
-           (unsigned)ToVT.SimpleTy < sizeof(ConvertActions[0])*4 &&
+           (unsigned)ToVT.SimpleTy < MVT::LAST_VALUETYPE &&
            "Table isn't big enough!");
     ConvertActions[FromVT.SimpleTy] &= ~(uint64_t(3UL)  << ToVT.SimpleTy*2);
     ConvertActions[FromVT.SimpleTy] |= (uint64_t)Action << ToVT.SimpleTy*2;
@@ -1036,7 +1035,7 @@ protected:
   /// supported on the target and indicate what to do about it.
   void setCondCodeAction(ISD::CondCode CC, MVT VT,
                          LegalizeAction Action) {
-    assert((unsigned)VT.SimpleTy < sizeof(CondCodeActions[0])*4 &&
+    assert((unsigned)VT.SimpleTy < MVT::LAST_VALUETYPE &&
            (unsigned)CC < array_lengthof(CondCodeActions) &&
            "Table isn't big enough!");
     CondCodeActions[(unsigned)CC] &= ~(uint64_t(3UL)  << VT.SimpleTy*2);
@@ -1051,12 +1050,6 @@ protected:
     PromoteToType[std::make_pair(Opc, OrigVT.SimpleTy)] = DestVT.SimpleTy;
   }
 
-  /// addLegalFPImmediate - Indicate that this target can instruction select
-  /// the specified FP immediate natively.
-  void addLegalFPImmediate(const APFloat& Imm) {
-    LegalFPImmediates.push_back(Imm);
-  }
-
   /// setTargetDAGCombine - Targets should invoke this method for each target
   /// independent node that they want to provide a custom DAG combiner for by
   /// implementing the PerformDAGCombine virtual method.
@@ -1174,6 +1167,18 @@ public:
     return SDValue();    // this is here to silence compiler errors
   }
 
+  /// CanLowerReturn - This hook should be implemented to check whether the
+  /// return values described by the Outs array can fit into the return
+  /// registers.  If false is returned, an sret-demotion is performed.
+  ///
+  virtual bool CanLowerReturn(CallingConv::ID CallConv, bool isVarArg,
+               const SmallVectorImpl<EVT> &OutTys,
+               const SmallVectorImpl<ISD::ArgFlagsTy> &ArgsFlags,
+               SelectionDAG &DAG)
+  {
+    // Return true by default to get preexisting behavior.
+    return true;
+  }
   /// LowerReturn - This hook must be implemented to lower outgoing
   /// return values, described by the Outs array, into the specified
   /// DAG. The implementation should return the resulting token chain
@@ -1432,14 +1437,15 @@ public:
                                             SelectionDAG &DAG) const;
   
   //===--------------------------------------------------------------------===//
-  // Scheduler hooks
+  // Instruction Emitting Hooks
   //
   
   // EmitInstrWithCustomInserter - This method should be implemented by targets
-  // that mark instructions with the 'usesCustomDAGSchedInserter' flag.  These
+  // that mark instructions with the 'usesCustomInserter' flag.  These
   // instructions are special in various ways, which require special support to
   // insert.  The specified MachineInstr is created but not inserted into any
-  // basic blocks, and the scheduler passes ownership of it to this method.
+  // basic blocks, and this method is called to expand it into a sequence of
+  // instructions, potentially also creating new basic blocks and control flow.
   // When new basic blocks are inserted and the edges from MBB to its successors
   // are modified, the method should insert pairs of <OldSucc, NewSucc> into the
   // DenseMap.
@@ -1508,6 +1514,14 @@ public:
     return false;
   }
 
+  /// isLegalICmpImmediate - Return true if the specified immediate is a legal
+  /// icmp immediate, that is, the target has icmp instructions which can compare
+  /// a register against the immediate without having to materialize the
+  /// immediate into a register.
+  virtual bool isLegalICmpImmediate(int64_t Imm) const {
+    return true;
+  }
+
   //===--------------------------------------------------------------------===//
   // Div utility functions
   //
@@ -1696,8 +1710,6 @@ private:
 
   ValueTypeActionImpl ValueTypeActions;
 
-  std::vector<APFloat> LegalFPImmediates;
-
   std::vector<std::pair<EVT, TargetRegisterClass*> > AvailableRegClasses;
 
   /// TargetDAGCombineArray - Targets can specify ISD nodes that they would
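A sketch of a target-side override of the hook that replaces the removed addLegalFPImmediate() list; the predicate is purely illustrative:

    #include "llvm/ADT/APFloat.h"
    #include "llvm/CodeGen/ValueTypes.h"

    // In a backend this would be MyTargetLowering::isFPImmLegal, a virtual
    // override; a free function stands in here.
    bool isFPImmLegalSketch(const llvm::APFloat &Imm, llvm::EVT VT) {
      (void)VT;               // a real target would also check the type
      return Imm.isPosZero(); // pretend only +0.0 selects natively
    }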
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetLoweringObjectFile.h b/libclamav/c++/llvm/include/llvm/Target/TargetLoweringObjectFile.h
index 821e537..9a64191 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetLoweringObjectFile.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetLoweringObjectFile.h
@@ -15,6 +15,7 @@
 #ifndef LLVM_TARGET_TARGETLOWERINGOBJECTFILE_H
 #define LLVM_TARGET_TARGETLOWERINGOBJECTFILE_H
 
+#include "llvm/ADT/StringRef.h"
 #include "llvm/MC/SectionKind.h"
 
 namespace llvm {
@@ -26,7 +27,6 @@ namespace llvm {
   class MCSectionMachO;
   class MCContext;
   class GlobalValue;
-  class StringRef;
   class TargetMachine;
   
 class TargetLoweringObjectFile {
@@ -288,14 +288,14 @@ public:
 
   /// getMachOSection - Return the MCSection for the specified mach-o section.
   /// This requires the operands to be valid.
-  const MCSectionMachO *getMachOSection(const StringRef &Segment,
-                                        const StringRef &Section,
+  const MCSectionMachO *getMachOSection(StringRef Segment,
+                                        StringRef Section,
                                         unsigned TypeAndAttributes,
                                         SectionKind K) const {
     return getMachOSection(Segment, Section, TypeAndAttributes, 0, K);
   }
-  const MCSectionMachO *getMachOSection(const StringRef &Segment,
-                                        const StringRef &Section,
+  const MCSectionMachO *getMachOSection(StringRef Segment,
+                                        StringRef Section,
                                         unsigned TypeAndAttributes,
                                         unsigned Reserved2,
                                         SectionKind K) const;
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetMachine.h b/libclamav/c++/llvm/include/llvm/Target/TargetMachine.h
index 92b648c..1104635 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetMachine.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetMachine.h
@@ -74,9 +74,10 @@ namespace FileModel {
 // Code generation optimization level.
 namespace CodeGenOpt {
   enum Level {
-    Default,
-    None,
-    Aggressive
+    None,        // -O0
+    Less,        // -O1
+    Default,     // -O2, -Os
+    Aggressive   // -O3
   };
 }
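The reordered enum maps onto the usual -O levels, so a driver translation looks like this sketch:

    #include "llvm/Target/TargetMachine.h"

    llvm::CodeGenOpt::Level levelFor(char OptChar) {
      switch (OptChar) {
      case '0': return llvm::CodeGenOpt::None;        // -O0
      case '1': return llvm::CodeGenOpt::Less;        // -O1
      case '3': return llvm::CodeGenOpt::Aggressive;  // -O3
      default:  return llvm::CodeGenOpt::Default;     // -O2 / -Os
      }
    }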
 
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetRegisterInfo.h b/libclamav/c++/llvm/include/llvm/Target/TargetRegisterInfo.h
index 6043ec8..cb29c73 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetRegisterInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetRegisterInfo.h
@@ -275,6 +275,7 @@ private:
   regclass_iterator RegClassBegin, RegClassEnd;   // List of regclasses
 
   int CallFrameSetupOpcode, CallFrameDestroyOpcode;
+
 protected:
   TargetRegisterInfo(const TargetRegisterDesc *D, unsigned NR,
                      regclass_iterator RegClassBegin,
@@ -325,7 +326,7 @@ public:
   /// getAllocatableSet - Returns a bitset indexed by register number
   /// indicating if a register is allocatable or not. If a register class is
   /// specified, returns the subset for the class.
-  BitVector getAllocatableSet(MachineFunction &MF,
+  BitVector getAllocatableSet(const MachineFunction &MF,
                               const TargetRegisterClass *RC = NULL) const;
 
   const TargetRegisterDesc &operator[](unsigned RegNo) const {
@@ -463,6 +464,11 @@ public:
   /// exist.
   virtual unsigned getSubReg(unsigned RegNo, unsigned Index) const = 0;
 
+  /// getSubRegIndex - For a given register pair, return the sub-register index
+  /// if the second register is a sub-register of the first. Return zero
+  /// otherwise.
+  virtual unsigned getSubRegIndex(unsigned RegNo, unsigned SubRegNo) const = 0;
+
   /// getMatchingSuperReg - Return a super-register of the specified register
   /// Reg so its sub-register of index SubIdx is Reg.
   unsigned getMatchingSuperReg(unsigned Reg, unsigned SubIdx,
@@ -561,6 +567,12 @@ public:
     return false;
   }
 
+  /// requiresFrameIndexScavenging - Returns true if the target requires
+  /// post-PEI scavenging of registers for materializing frame index constants.
+  virtual bool requiresFrameIndexScavenging(const MachineFunction &MF) const {
+    return false;
+  }
+
   /// hasFP - Return true if the specified function should have a dedicated
   /// frame pointer register. For most targets this is true only if the function
   /// has variable sized allocas or if frame pointer elimination is disabled.
@@ -635,6 +647,19 @@ public:
   virtual void processFunctionBeforeFrameFinalized(MachineFunction &MF) const {
   }
 
+  /// saveScavengerRegister - Spill the register so it can be used by the
+  /// register scavenger. Return true if the register was spilled, false
+  /// otherwise. If this function does not spill the register, the scavenger
+  /// will instead spill it to the emergency spill slot.
+  ///
+  virtual bool saveScavengerRegister(MachineBasicBlock &MBB,
+                                     MachineBasicBlock::iterator I,
+                                     MachineBasicBlock::iterator &UseMI,
+                                     const TargetRegisterClass *RC,
+                                     unsigned Reg) const {
+    return false;
+  }
+
   /// eliminateFrameIndex - This method must be overriden to eliminate abstract
   /// frame indices from instructions which may use them.  The instruction
   /// referenced by the iterator contains an MO_FrameIndex operand which must be
@@ -642,8 +667,13 @@ public:
   /// specified instruction, as long as it keeps the iterator pointing the the
   /// finished product. SPAdj is the SP adjustment due to call frame setup
   /// instruction.
-  virtual void eliminateFrameIndex(MachineBasicBlock::iterator MI,
-                                   int SPAdj, RegScavenger *RS=NULL) const = 0;
+  ///
+  /// When -enable-frame-index-scavenging is enabled, the virtual register
+  /// allocated for this frame index is returned and its value is stored in
+  /// *Value.
+  virtual unsigned eliminateFrameIndex(MachineBasicBlock::iterator MI,
+                                       int SPAdj, int *Value = NULL,
+                                       RegScavenger *RS = NULL) const = 0;
 
   /// emitProlog/emitEpilog - These methods insert prolog and epilog code into
   /// the function.
@@ -662,12 +692,24 @@ public:
 
   /// getFrameRegister - This method should return the register used as a base
   /// for values allocated in the current stack frame.
-  virtual unsigned getFrameRegister(MachineFunction &MF) const = 0;
+  virtual unsigned getFrameRegister(const MachineFunction &MF) const = 0;
 
   /// getFrameIndexOffset - Returns the displacement from the frame register to
   /// the stack frame of the specified index.
   virtual int getFrameIndexOffset(MachineFunction &MF, int FI) const;
 
+  /// getFrameIndexReference - This method should return the base register
+  /// and offset used to reference a frame index location. The offset is
+  /// returned directly, and the base register is returned via FrameReg.
+  virtual int getFrameIndexReference(MachineFunction &MF, int FI,
+                                     unsigned &FrameReg) const {
+    // By default, assume all frame indices are referenced via whatever
+    // getFrameRegister() says. The target can override this if it's doing
+    // something different.
+    FrameReg = getFrameRegister(MF);
+    return getFrameIndexOffset(MF, FI);
+  }
+
   /// getRARegister - This method should return the register where the return
   /// address can be found.
   virtual unsigned getRARegister() const = 0;
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetRegistry.h b/libclamav/c++/llvm/include/llvm/Target/TargetRegistry.h
index 8042d23..167e1d1 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetRegistry.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetRegistry.h
@@ -51,7 +51,7 @@ namespace llvm {
     typedef unsigned (*TripleMatchQualityFnTy)(const std::string &TT);
 
     typedef const MCAsmInfo *(*AsmInfoCtorFnTy)(const Target &T,
-                                                const StringRef &TT);
+                                                StringRef TT);
     typedef TargetMachine *(*TargetMachineCtorTy)(const Target &T,
                                                   const std::string &TT,
                                                   const std::string &Features);
@@ -163,7 +163,7 @@ namespace llvm {
     /// feature set; it should always be provided. Generally this should be
     /// either the target triple from the module, or the target triple of the
     /// host if that does not exist.
-    const MCAsmInfo *createAsmInfo(const StringRef &Triple) const {
+    const MCAsmInfo *createAsmInfo(StringRef Triple) const {
       if (!AsmInfoCtorFn)
         return 0;
       return AsmInfoCtorFn(*this, Triple);
@@ -387,6 +387,15 @@ namespace llvm {
         T.MCDisassemblerCtorFn = Fn;
     }
 
+    /// RegisterMCInstPrinter - Register a MCInstPrinter implementation for the
+    /// given target.
+    /// 
+    /// Clients are responsible for ensuring that registration doesn't occur
+    /// while another thread is attempting to access the registry. Typically
+    /// this is done by initializing all targets at program startup.
+    ///
+    /// @param T - The target being registered.
+    /// @param Fn - A function to construct an MCInstPrinter for the target.
     static void RegisterMCInstPrinter(Target &T,
                                       Target::MCInstPrinterCtorTy Fn) {
       if (!T.MCInstPrinterCtorFn)
@@ -395,7 +404,7 @@ namespace llvm {
     
     /// RegisterCodeEmitter - Register a MCCodeEmitter implementation for the
     /// given target.
-    /// 
+    ///
     /// Clients are responsible for ensuring that registration doesn't occur
     /// while another thread is attempting to access the registry. Typically
     /// this is done by initializing all targets at program startup.
@@ -452,7 +461,7 @@ namespace llvm {
       TargetRegistry::RegisterAsmInfo(T, &Allocator);
     }
   private:
-    static const MCAsmInfo *Allocator(const Target &T, const StringRef &TT) {
+    static const MCAsmInfo *Allocator(const Target &T, StringRef TT) {
       return new MCAsmInfoImpl(T, TT);
     }
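With the signature change, call sites just pass StringRef by value; a sketch:

    #include "llvm/ADT/StringRef.h"
    #include "llvm/MC/MCAsmInfo.h"
    #include "llvm/Target/TargetRegistry.h"

    const llvm::MCAsmInfo *makeAsmInfo(const llvm::Target &T,
                                       llvm::StringRef TripleStr) {
      // Returns 0 if the target registered no MCAsmInfo constructor.
      return T.createAsmInfo(TripleStr);
    }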
     
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetSelect.h b/libclamav/c++/llvm/include/llvm/Target/TargetSelect.h
index e79f651..951e7fa 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetSelect.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetSelect.h
@@ -33,6 +33,10 @@ extern "C" {
   // Declare all of the available assembly parser initialization functions.
 #define LLVM_ASM_PARSER(TargetName) void LLVMInitialize##TargetName##AsmParser();
 #include "llvm/Config/AsmParsers.def"
+
+  // Declare all of the available disassembler initialization functions.
+#define LLVM_DISASSEMBLER(TargetName) void LLVMInitialize##TargetName##Disassembler();
+#include "llvm/Config/Disassemblers.def"
 }
 
 namespace llvm {
@@ -79,6 +83,16 @@ namespace llvm {
 #include "llvm/Config/AsmParsers.def"
   }
   
+  /// InitializeAllDisassemblers - The main program should call this function if
+  /// it wants all disassemblers that LLVM is configured to support, to make
+  /// them available via the TargetRegistry.
+  ///
+  /// It is legal for a client to make multiple calls to this function.
+  inline void InitializeAllDisassemblers() {
+#define LLVM_DISASSEMBLER(TargetName) LLVMInitialize##TargetName##Disassembler();
+#include "llvm/Config/Disassemblers.def"
+  }
+  
   /// InitializeNativeTarget - The main program should call this function to
   /// initialize the native target corresponding to the host.  This is useful 
   /// for JIT applications to ensure that the target gets linked in correctly.
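A typical tool entry point picking up the new initializer next to the existing ones (calling it repeatedly is explicitly legal, per the comment above):

    #include "llvm/Target/TargetSelect.h"

    int main() {
      llvm::InitializeAllTargetInfos();
      llvm::InitializeAllTargets();
      llvm::InitializeAllDisassemblers();
      return 0;
    }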
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetSelectionDAG.td b/libclamav/c++/llvm/include/llvm/Target/TargetSelectionDAG.td
index 700c64c..7f54f81 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetSelectionDAG.td
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetSelectionDAG.td
@@ -269,6 +269,10 @@ def externalsym : SDNode<"ISD::ExternalSymbol",       SDTPtrLeaf, [],
                          "ExternalSymbolSDNode">;
 def texternalsym: SDNode<"ISD::TargetExternalSymbol", SDTPtrLeaf, [],
                          "ExternalSymbolSDNode">;
+def blockaddress : SDNode<"ISD::BlockAddress",        SDTPtrLeaf, [],
+                         "BlockAddressSDNode">;
+def tblockaddress: SDNode<"ISD::TargetBlockAddress",  SDTPtrLeaf, [],
+                         "BlockAddressSDNode">;
 
 def add        : SDNode<"ISD::ADD"       , SDTIntBinOp   ,
                         [SDNPCommutative, SDNPAssociative]>;
@@ -325,6 +329,8 @@ def fneg       : SDNode<"ISD::FNEG"       , SDTFPUnaryOp>;
 def fsqrt      : SDNode<"ISD::FSQRT"      , SDTFPUnaryOp>;
 def fsin       : SDNode<"ISD::FSIN"       , SDTFPUnaryOp>;
 def fcos       : SDNode<"ISD::FCOS"       , SDTFPUnaryOp>;
+def fexp2      : SDNode<"ISD::FEXP2"      , SDTFPUnaryOp>;
+def flog2      : SDNode<"ISD::FLOG2"      , SDTFPUnaryOp>;
 def frint      : SDNode<"ISD::FRINT"      , SDTFPUnaryOp>;
 def ftrunc     : SDNode<"ISD::FTRUNC"     , SDTFPUnaryOp>;
 def fceil      : SDNode<"ISD::FCEIL"      , SDTFPUnaryOp>;
@@ -858,10 +864,3 @@ class ComplexPattern<ValueType ty, int numops, string fn,
   list<SDNodeProperty> Properties = props;
   list<CPAttribute> Attributes = attrs;
 }
-
-//===----------------------------------------------------------------------===//
-// Dwarf support.
-//
-def SDT_dwarf_loc : SDTypeProfile<0, 3,
-                      [SDTCisInt<0>, SDTCisInt<1>, SDTCisInt<2>]>;
-def dwarf_loc : SDNode<"ISD::DEBUG_LOC", SDT_dwarf_loc,[SDNPHasChain]>;
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetSubtarget.h b/libclamav/c++/llvm/include/llvm/Target/TargetSubtarget.h
index ac094f6..22b09ba 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetSubtarget.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetSubtarget.h
@@ -14,10 +14,14 @@
 #ifndef LLVM_TARGET_TARGETSUBTARGET_H
 #define LLVM_TARGET_TARGETSUBTARGET_H
 
+#include "llvm/Target/TargetMachine.h"
+
 namespace llvm {
 
 class SDep;
 class SUnit;
+class TargetRegisterClass;
+template <typename T> class SmallVectorImpl;
 
 //===----------------------------------------------------------------------===//
 ///
@@ -31,6 +35,11 @@ class TargetSubtarget {
 protected: // Can only create subclasses...
   TargetSubtarget();
 public:
+  // AntiDepBreakMode - Type of anti-dependence breaking that should
+  // be performed before post-RA scheduling.
+  typedef enum { ANTIDEP_NONE, ANTIDEP_CRITICAL, ANTIDEP_ALL } AntiDepBreakMode;
+  typedef SmallVectorImpl<TargetRegisterClass*> RegClassVector;
+
   virtual ~TargetSubtarget();
 
   /// getSpecialAddressLatency - For targets where it is beneficial to
@@ -39,10 +48,14 @@ public:
   /// should be attempted.
   virtual unsigned getSpecialAddressLatency() const { return 0; }
 
-  // enablePostRAScheduler - Return true to enable
-  // post-register-allocation scheduling.
-  virtual bool enablePostRAScheduler() const { return false; }
-
+  // enablePostRAScheduler - If the target can benefit from post-regalloc
+  // scheduling and the specified optimization level meets the requirement,
+  // return true to enable post-register-allocation scheduling. Fill
+  // CriticalPathRCs with any register classes whose anti-dependencies
+  // should only be broken when on the critical path.
+  virtual bool enablePostRAScheduler(CodeGenOpt::Level OptLevel,
+                                     AntiDepBreakMode& Mode,
+                                     RegClassVector& CriticalPathRCs) const;
   // adjustSchedDependency - Perform target specific adjustments to
   // the latency of a schedule dependency.
   virtual void adjustSchedDependency(SUnit *def, SUnit *use, 
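A sketch of a subtarget opting in under the new signature; MySubtarget is hypothetical:

    #include "llvm/Target/TargetSubtarget.h"

    struct MySubtarget : public llvm::TargetSubtarget {
      virtual bool enablePostRAScheduler(llvm::CodeGenOpt::Level OptLevel,
                                         AntiDepBreakMode &Mode,
                                         RegClassVector &CriticalPathRCs) const {
        CriticalPathRCs.clear();  // no classes restricted to the critical path
        Mode = ANTIDEP_CRITICAL;  // break anti-deps on the critical path only
        return OptLevel >= llvm::CodeGenOpt::Default;  // enable at -O2 and up
      }
    };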
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/IPO.h b/libclamav/c++/llvm/include/llvm/Transforms/IPO.h
index d66ed89..5e17904 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/IPO.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/IPO.h
@@ -69,13 +69,6 @@ ModulePass *createGlobalOptimizerPass();
 
 
 //===----------------------------------------------------------------------===//
-/// createRaiseAllocationsPass - Return a new pass that transforms malloc and
-/// free function calls into malloc and free instructions.
-///
-ModulePass *createRaiseAllocationsPass();
-
-
-//===----------------------------------------------------------------------===//
 /// createDeadTypeEliminationPass - Return a new pass that eliminates symbol
 /// table entries for types that are never used.
 ///
@@ -185,10 +178,6 @@ Pass *createSingleLoopExtractorPass();
 ///
 ModulePass *createBlockExtractorPass(const std::vector<BasicBlock*> &BTNE);
 
-/// createIndMemRemPass - This pass removes potential indirect calls of
-/// malloc and free
-ModulePass *createIndMemRemPass();
-
 /// createStripDeadPrototypesPass - This pass removes any function declarations
 /// (prototypes) that are not used.
 ModulePass *createStripDeadPrototypesPass();
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/RSProfiling.h b/libclamav/c++/llvm/include/llvm/Transforms/RSProfiling.h
index 98ec396..02439e8 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/RSProfiling.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/RSProfiling.h
@@ -15,7 +15,11 @@
 #ifndef LLVM_TRANSFORMS_RSPROFILING_H
 #define LLVM_TRANSFORMS_RSPROFILING_H
 
+#include "llvm/Pass.h"
+
 namespace llvm {
+  class Value;
+  
   //===--------------------------------------------------------------------===//
   /// RSProfilers - The basic Random Sampling Profiler Interface.  Any profiler
   /// that implements this interface can be transformed by the random sampling
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Scalar.h b/libclamav/c++/llvm/include/llvm/Transforms/Scalar.h
index 012f69b..7159f86 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/Scalar.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/Scalar.h
@@ -171,14 +171,6 @@ FunctionPass *createReassociatePass();
 
 //===----------------------------------------------------------------------===//
 //
-// CondPropagationPass - This pass propagates information about conditional
-// expressions through the program, allowing it to eliminate conditional
-// branches in some cases.
-//
-FunctionPass *createCondPropagationPass();
-
-//===----------------------------------------------------------------------===//
-//
 // TailDuplication - Eliminate unconditional branches through controlled code
 // duplication, creating simpler CFG structures.
 //
@@ -225,16 +217,6 @@ extern const PassInfo *const LoopSimplifyID;
 
 //===----------------------------------------------------------------------===//
 //
-// LowerAllocations - Turn malloc and free instructions into @malloc and @free
-// calls.
-//
-//   AU.addRequiredID(LowerAllocationsID);
-//
-Pass *createLowerAllocationsPass(bool LowerMallocArgToInteger = false);
-extern const PassInfo *const LowerAllocationsID;
-
-//===----------------------------------------------------------------------===//
-//
 // TailCallElimination - This pass eliminates call instructions to the current
 // function which occur immediately before return instructions.
 //
@@ -278,17 +260,10 @@ extern const PassInfo *const LCSSAID;
 
 //===----------------------------------------------------------------------===//
 //
-// PredicateSimplifier - This pass collapses duplicate variables into one
-// canonical form, and tries to simplify expressions along the way.
-//
-FunctionPass *createPredicateSimplifierPass();
-
-//===----------------------------------------------------------------------===//
-//
 // GVN - This pass performs global value numbering and redundant load
 // elimination simultaneously.
 //
-FunctionPass *createGVNPass();
+FunctionPass *createGVNPass(bool NoPRE = false, bool NoLoads = false);
 
 //===----------------------------------------------------------------------===//
 //
@@ -324,12 +299,6 @@ FunctionPass *createCodeGenPreparePass(const TargetLowering *TLI = 0);
 
 //===----------------------------------------------------------------------===//
 //
-// CodeGenLICM - This pass performs late LICM; hoisting constants out of loops.
-//
-Pass *createCodeGenLICMPass();
-  
-//===----------------------------------------------------------------------===//
-//
 // InstructionNamer - Give any unnamed non-void instructions "tmp" names.
 //
 FunctionPass *createInstructionNamerPass();
@@ -349,6 +318,24 @@ FunctionPass *createSSIPass();
 //
 FunctionPass *createSSIEverythingPass();
 
+//===----------------------------------------------------------------------===//
+//
+// GEPSplitter - Split complex GEPs into simple ones
+//
+FunctionPass *createGEPSplitterPass();
+
+//===----------------------------------------------------------------------===//
+//
+// SCCVN - Aggressively eliminate redundant scalar values
+//
+FunctionPass *createSCCVNPass();
+
+//===----------------------------------------------------------------------===//
+//
+// ABCD - Elimination of Array Bounds Checks on Demand
+//
+FunctionPass *createABCDPass();
+
 } // End llvm namespace
 
 #endif
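For orientation, a sketch of how a client could schedule the reworked GVN together with the three passes added above; the pass-manager plumbing around this function is assumed, not shown by this patch:

    #include "llvm/PassManager.h"
    #include "llvm/Transforms/Scalar.h"
    using namespace llvm;

    // Illustration only: PRE and load elimination stay enabled by default.
    static void addScalarOpts(FunctionPassManager &FPM) {
      FPM.add(createGVNPass(/*NoPRE=*/false, /*NoLoads=*/false));
      FPM.add(createSCCVNPass());        // aggressive scalar value numbering
      FPM.add(createGEPSplitterPass());  // split complex GEPs into simple ones
      FPM.add(createABCDPass());         // on-demand bounds-check elimination
    }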
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Utils/BasicBlockUtils.h b/libclamav/c++/llvm/include/llvm/Transforms/Utils/BasicBlockUtils.h
index e766d72..8172114 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/Utils/BasicBlockUtils.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/Utils/BasicBlockUtils.h
@@ -116,8 +116,8 @@ bool isCriticalEdge(const TerminatorInst *TI, unsigned SuccNum,
 /// SplitCriticalEdge - If this edge is a critical edge, insert a new node to
 /// split the critical edge.  This will update DominatorTree and
 /// DominatorFrontier information if it is available, thus calling this pass
-/// will not invalidate either of them. This returns true if the edge was split,
-/// false otherwise.  
+/// will not invalidate either of them. This returns the new block if the edge
+/// was split, null otherwise.
 ///
 /// If MergeIdenticalEdges is true (not the default), *all* edges from TI to the
 /// specified successor will be merged into the same critical edge block.  
@@ -126,10 +126,16 @@ bool isCriticalEdge(const TerminatorInst *TI, unsigned SuccNum,
 /// dest go to one block instead of each going to a different block, but isn't 
 /// the standard definition of a "critical edge".
 ///
+/// It is invalid to call this function on a critical edge that starts at an
+/// IndirectBrInst.  Splitting these edges will almost always create an invalid
+/// program because the address of the new block won't be the one that is jumped
+/// to.
+///
 BasicBlock *SplitCriticalEdge(TerminatorInst *TI, unsigned SuccNum,
                               Pass *P = 0, bool MergeIdenticalEdges = false);
 
-inline BasicBlock *SplitCriticalEdge(BasicBlock *BB, succ_iterator SI, Pass *P = 0) {
+inline BasicBlock *SplitCriticalEdge(BasicBlock *BB, succ_iterator SI,
+                                     Pass *P = 0) {
   return SplitCriticalEdge(BB->getTerminator(), SI.getSuccessorIndex(), P);
 }
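Callers migrating to the new return type test the returned block instead of a bool; a minimal sketch, where TI, SuccNum and P are assumed to come from the caller:

    // Illustration only.
    if (BasicBlock *NewBB = SplitCriticalEdge(TI, SuccNum, P)) {
      // The edge was critical and has been split; NewBB is the new block.
      (void)NewBB;
    } else {
      // The edge was not critical; nothing changed.
    }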
 
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Utils/BasicInliner.h b/libclamav/c++/llvm/include/llvm/Transforms/Utils/BasicInliner.h
index 6a57055..4bca6b8 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/Utils/BasicInliner.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/Utils/BasicInliner.h
@@ -15,7 +15,7 @@
 #ifndef BASICINLINER_H
 #define BASICINLINER_H
 
-#include "llvm/Transforms/Utils/InlineCost.h"
+#include "llvm/Analysis/InlineCost.h"
 
 namespace llvm {
 
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Utils/Cloning.h b/libclamav/c++/llvm/include/llvm/Transforms/Utils/Cloning.h
index 5b15b5b..e9099f8 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/Utils/Cloning.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/Utils/Cloning.h
@@ -24,6 +24,7 @@ namespace llvm {
 
 class Module;
 class Function;
+class Instruction;
 class Pass;
 class LPPassManager;
 class BasicBlock;
@@ -154,7 +155,8 @@ void CloneAndPruneFunctionInto(Function *NewFunc, const Function *OldFunc,
                                SmallVectorImpl<ReturnInst*> &Returns,
                                const char *NameSuffix = "", 
                                ClonedCodeInfo *CodeInfo = 0,
-                               const TargetData *TD = 0);
+                               const TargetData *TD = 0,
+                               Instruction *TheCall = 0);
 
 /// InlineFunction - This function inlines the called function into the basic
 /// block of the caller.  This returns false if it is not possible to inline
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Utils/InlineCost.h b/libclamav/c++/llvm/include/llvm/Transforms/Utils/InlineCost.h
deleted file mode 100644
index 2d0c397..0000000
--- a/libclamav/c++/llvm/include/llvm/Transforms/Utils/InlineCost.h
+++ /dev/null
@@ -1,156 +0,0 @@
-//===- InlineCost.cpp - Cost analysis for inliner ---------------*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file implements heuristics for inlining decisions.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_TRANSFORMS_UTILS_INLINECOST_H
-#define LLVM_TRANSFORMS_UTILS_INLINECOST_H
-
-#include <cassert>
-#include <climits>
-#include <map>
-#include <vector>
-
-namespace llvm {
-
-  class Value;
-  class Function;
-  class CallSite;
-  template<class PtrType, unsigned SmallSize>
-  class SmallPtrSet;
-
-  /// InlineCost - Represent the cost of inlining a function. This
-  /// supports special values for functions which should "always" or
-  /// "never" be inlined. Otherwise, the cost represents a unitless
-  /// amount; smaller values increase the likelihood of the function
-  /// being inlined.
-  class InlineCost {
-    enum Kind {
-      Value,
-      Always,
-      Never
-    };
-
-    // This is a do-it-yourself implementation of
-    //   int Cost : 30;
-    //   unsigned Type : 2;
-    // We used to use bitfields, but they were sometimes miscompiled (PR3822).
-    enum { TYPE_BITS = 2 };
-    enum { COST_BITS = unsigned(sizeof(unsigned)) * CHAR_BIT - TYPE_BITS };
-    unsigned TypedCost; // int Cost : COST_BITS; unsigned Type : TYPE_BITS;
-
-    Kind getType() const {
-      return Kind(TypedCost >> COST_BITS);
-    }
-
-    int getCost() const {
-      // Sign-extend the bottom COST_BITS bits.
-      return (int(TypedCost << TYPE_BITS)) >> TYPE_BITS;
-    }
-
-    InlineCost(int C, int T) {
-      TypedCost = (unsigned(C << TYPE_BITS) >> TYPE_BITS) | (T << COST_BITS);
-      assert(getCost() == C && "Cost exceeds InlineCost precision");
-    }
-  public:
-    static InlineCost get(int Cost) { return InlineCost(Cost, Value); }
-    static InlineCost getAlways() { return InlineCost(0, Always); }
-    static InlineCost getNever() { return InlineCost(0, Never); }
-
-    bool isVariable() const { return getType() == Value; }
-    bool isAlways() const { return getType() == Always; }
-    bool isNever() const { return getType() == Never; }
-
-    /// getValue() - Return a "variable" inline cost's amount. It is
-    /// an error to call this on an "always" or "never" InlineCost.
-    int getValue() const {
-      assert(getType() == Value && "Invalid access of InlineCost");
-      return getCost();
-    }
-  };
-  
-  /// InlineCostAnalyzer - Cost analyzer used by inliner.
-  class InlineCostAnalyzer {
-    struct ArgInfo {
-    public:
-      unsigned ConstantWeight;
-      unsigned AllocaWeight;
-      
-      ArgInfo(unsigned CWeight, unsigned AWeight)
-        : ConstantWeight(CWeight), AllocaWeight(AWeight) {}
-    };
-    
-    // FunctionInfo - For each function, calculate the size of it in blocks and
-    // instructions.
-    struct FunctionInfo {
-      /// NeverInline - True if this callee should never be inlined into a
-      /// caller.
-      bool NeverInline;
-      
-      /// usesDynamicAlloca - True if this function calls alloca (in the C sense).
-      bool usesDynamicAlloca;
-
-      /// NumInsts, NumBlocks - Keep track of how large each function is, which
-      /// is used to estimate the code size cost of inlining it.
-      unsigned NumInsts, NumBlocks;
-
-      /// NumVectorInsts - Keep track of how many instructions produce vector
-      /// values.  The inliner is being more aggressive with inlining vector
-      /// kernels.
-      unsigned NumVectorInsts;
-      
-      /// ArgumentWeights - Each formal argument of the function is inspected to
-      /// see if it is used in any contexts where making it a constant or alloca
-      /// would reduce the code size.  If so, we add some value to the argument
-      /// entry here.
-      std::vector<ArgInfo> ArgumentWeights;
-      
-      FunctionInfo() : NeverInline(false), usesDynamicAlloca(false), NumInsts(0),
-                       NumBlocks(0), NumVectorInsts(0) {}
-      
-      /// analyzeFunction - Fill in the current structure with information
-      /// gleaned from the specified function.
-      void analyzeFunction(Function *F);
-
-      /// CountCodeReductionForConstant - Figure out an approximation for how
-      /// many instructions will be constant folded if the specified value is
-      /// constant.
-      unsigned CountCodeReductionForConstant(Value *V);
-      
-      /// CountCodeReductionForAlloca - Figure out an approximation of how much
-      /// smaller the function will be if it is inlined into a context where an
-      /// argument becomes an alloca.
-      ///
-      unsigned CountCodeReductionForAlloca(Value *V);
-    };
-
-    std::map<const Function *, FunctionInfo> CachedFunctionInfo;
-
-  public:
-
-    /// getInlineCost - The heuristic used to determine if we should inline the
-    /// function call or not.
-    ///
-    InlineCost getInlineCost(CallSite CS,
-                             SmallPtrSet<const Function *, 16> &NeverInline);
-
-    /// getInlineFudgeFactor - Return a > 1.0 factor if the inliner should use a
-    /// higher threshold to determine if the function call should be inlined.
-    float getInlineFudgeFactor(CallSite CS);
-
-    /// resetCachedFunctionInfo - erase any cached cost info for this function.
-    void resetCachedCostInfo(Function* Caller) {
-      CachedFunctionInfo[Caller].NumBlocks = 0;
-    }
-  };
-}
-
-#endif
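This header is not simply gone: as the BasicInliner.h hunk above shows, it now lives at llvm/Analysis/InlineCost.h. Since the hand-rolled bitfield it documents is easy to get wrong, here is a standalone sketch of the same pack/unpack trick; the free-function names are illustrative, not the class's API:

    #include <climits>

    enum { TYPE_BITS = 2 };
    enum { COST_BITS = sizeof(unsigned) * CHAR_BIT - TYPE_BITS };

    // Pack a signed cost and a 2-bit kind tag into one word, as the
    // InlineCost constructor above does.
    unsigned pack(int Cost, unsigned Kind) {
      return (unsigned(Cost << TYPE_BITS) >> TYPE_BITS) | (Kind << COST_BITS);
    }

    // Recover the cost: shift left, then arithmetic-shift right to
    // sign-extend the low COST_BITS bits (this mirrors getCost above).
    int unpackCost(unsigned TypedCost) {
      return int(TypedCost << TYPE_BITS) >> TYPE_BITS;
    }

    unsigned unpackKind(unsigned TypedCost) { return TypedCost >> COST_BITS; }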
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Utils/Local.h b/libclamav/c++/llvm/include/llvm/Transforms/Utils/Local.h
index 419029f..e6687bb 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/Utils/Local.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/Utils/Local.h
@@ -78,6 +78,21 @@ void RecursivelyDeleteDeadPHINode(PHINode *PN);
 //  Control Flow Graph Restructuring.
 //
 
+/// RemovePredecessorAndSimplify - Like BasicBlock::removePredecessor, this
+/// method is called when we're about to delete Pred as a predecessor of BB.  If
+/// BB contains any PHI nodes, this drops the entries in the PHI nodes for Pred.
+///
+/// Unlike the removePredecessor method, this attempts to simplify uses of PHI
+/// nodes that collapse into identity values.  For example, if we have:
+///   x = phi(1, 0, 0, 0)
+///   y = and x, z
+///
+/// ... and delete the predecessor corresponding to the '1'; this will attempt to
+/// recursively fold the 'and' to 0.
+void RemovePredecessorAndSimplify(BasicBlock *BB, BasicBlock *Pred,
+                                  TargetData *TD = 0);
+    
+  
 /// MergeBasicBlockIntoOnlyPred - BB is a block with one predecessor and its
 /// predecessor is known to have one successor (BB!).  Eliminate the edge
 /// between them, moving the instructions in the predecessor into BB.  This
@@ -85,7 +100,21 @@ void RecursivelyDeleteDeadPHINode(PHINode *PN);
 ///
 void MergeBasicBlockIntoOnlyPred(BasicBlock *BB, Pass *P = 0);
     
-  
+
+/// TryToSimplifyUncondBranchFromEmptyBlock - BB is known to contain an
+/// unconditional branch, and contains no instructions other than PHI nodes,
+/// potential debug intrinsics and the branch.  If possible, eliminate BB by
+/// rewriting all the predecessors to branch to the successor block and return
+/// true.  If we can't transform, return false.
+bool TryToSimplifyUncondBranchFromEmptyBlock(BasicBlock *BB);
+
+/// EliminateDuplicatePHINodes - Check for and eliminate duplicate PHI
+/// nodes in this block. This doesn't try to be clever about PHI nodes
+/// which differ only in the order of the incoming values, but instcombine
+/// orders them so it usually won't matter.
+///
+bool EliminateDuplicatePHINodes(BasicBlock *BB);
+
 /// SimplifyCFG - This function is used to do simplification of a CFG.  For
 /// example, it adjusts branches to branches to eliminate the extra hop, it
 /// eliminates unreachable basic blocks, and does other "peephole" optimization
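A sketch of how a transform might use the three new helpers together; BB, Pred and the surrounding pass are assumed:

    // Illustration only.
    RemovePredecessorAndSimplify(BB, Pred);  // drop Pred's PHI entries, folding
                                             // identity PHIs where possible
    EliminateDuplicatePHINodes(BB);          // merge structurally identical PHIs
    if (TryToSimplifyUncondBranchFromEmptyBlock(BB)) {
      // BB held only PHIs, debug intrinsics and an unconditional branch and
      // has been folded into its successor; BB must not be touched after this.
    }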
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Utils/PromoteMemToReg.h b/libclamav/c++/llvm/include/llvm/Transforms/Utils/PromoteMemToReg.h
index 71a077e..35cfadd 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/Utils/PromoteMemToReg.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/Utils/PromoteMemToReg.h
@@ -23,7 +23,6 @@ class AllocaInst;
 class DominatorTree;
 class DominanceFrontier;
 class AliasSetTracker;
-class LLVMContext;
 
 /// isAllocaPromotable - Return true if this alloca is legal for promotion.
 /// This is true if there are only loads and stores to the alloca...
@@ -40,7 +39,6 @@ bool isAllocaPromotable(const AllocaInst *AI);
 ///
 void PromoteMemToReg(const std::vector<AllocaInst*> &Allocas,
                      DominatorTree &DT, DominanceFrontier &DF,
-                     LLVMContext &Context,
                      AliasSetTracker *AST = 0);
 
 } // End llvm namespace
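With the LLVMContext parameter gone, a mem2reg-style caller reduces to the following sketch; F, DT and DF are assumed to come from the enclosing pass:

    // Illustration only: promote every promotable alloca in the entry block.
    std::vector<AllocaInst*> Allocas;
    BasicBlock &Entry = F.getEntryBlock();
    for (BasicBlock::iterator I = Entry.begin(), E = Entry.end(); I != E; ++I)
      if (AllocaInst *AI = dyn_cast<AllocaInst>(I))
        if (isAllocaPromotable(AI))
          Allocas.push_back(AI);
    if (!Allocas.empty())
      PromoteMemToReg(Allocas, DT, DF);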
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Utils/SSAUpdater.h b/libclamav/c++/llvm/include/llvm/Transforms/Utils/SSAUpdater.h
new file mode 100644
index 0000000..2364330
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/Transforms/Utils/SSAUpdater.h
@@ -0,0 +1,108 @@
+//===-- SSAUpdater.h - Unstructured SSA Update Tool -------------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file declares the SSAUpdater class.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_TRANSFORMS_UTILS_SSAUPDATER_H
+#define LLVM_TRANSFORMS_UTILS_SSAUPDATER_H
+
+namespace llvm {
+  class Value;
+  class BasicBlock;
+  class Use;
+  class PHINode;
+  template<typename T>
+  class SmallVectorImpl;
+
+/// SSAUpdater - This class updates SSA form for a set of values defined in
+/// multiple blocks.  This is used when code duplication or another unstructured
+/// transformation wants to rewrite a set of uses of one value with uses of a
+/// set of values.
+class SSAUpdater {
+  /// AvailableVals - This keeps track of which value to use on a per-block
+  /// basis.  When we insert PHI nodes, we keep track of them here.  We use
+  /// TrackingVH's for the value of the map because we RAUW PHI nodes when we
+  /// eliminate them, and want the handle to track this.
+  //typedef DenseMap<BasicBlock*, TrackingVH<Value> > AvailableValsTy;
+  void *AV;
+
+  /// PrototypeValue is an arbitrary representative value from which we derive
+  /// names and a type for PHI nodes.
+  Value *PrototypeValue;
+
+  /// IncomingPredInfo - We use this as scratch space when doing our recursive
+  /// walk.  This should only be used by GetValueAtEndOfBlockInternal; at all
+  /// other times it should be empty.
+  //std::vector<std::pair<BasicBlock*, TrackingVH<Value> > > IncomingPredInfo;
+  void *IPI;
+
+  /// InsertedPHIs - If this is non-null, the SSAUpdater adds all PHI nodes that
+  /// it creates to the vector.
+  SmallVectorImpl<PHINode*> *InsertedPHIs;
+public:
+  /// SSAUpdater constructor.  If InsertedPHIs is specified, it will be filled
+  /// in with all PHI Nodes created by rewriting.
+  explicit SSAUpdater(SmallVectorImpl<PHINode*> *InsertedPHIs = 0);
+  ~SSAUpdater();
+
+  /// Initialize - Reset this object to get ready for a new set of SSA
+  /// updates.  ProtoValue is the value used to name PHI nodes.
+  void Initialize(Value *ProtoValue);
+
+  /// AddAvailableValue - Indicate that a rewritten value is available at the
+  /// end of the specified block with the specified value.
+  void AddAvailableValue(BasicBlock *BB, Value *V);
+
+  /// HasValueForBlock - Return true if the SSAUpdater already has a value for
+  /// the specified block.
+  bool HasValueForBlock(BasicBlock *BB) const;
+
+  /// GetValueAtEndOfBlock - Construct SSA form, materializing a value that is
+  /// live at the end of the specified block.
+  Value *GetValueAtEndOfBlock(BasicBlock *BB);
+
+  /// GetValueInMiddleOfBlock - Construct SSA form, materializing a value that
+  /// is live in the middle of the specified block.
+  ///
+  /// GetValueInMiddleOfBlock is the same as GetValueAtEndOfBlock except in one
+  /// important case: if there is a definition of the rewritten value after the
+  /// 'use' in BB.  Consider code like this:
+  ///
+  ///      X1 = ...
+  ///   SomeBB:
+  ///      use(X)
+  ///      X2 = ...
+  ///      br Cond, SomeBB, OutBB
+  ///
+  /// In this case, there are two values (X1 and X2) added to the AvailableVals
+  /// set by the client of the rewriter, and those values are both live out of
+  /// their respective blocks.  However, the use of X happens in the *middle* of
+  /// a block.  Because of this, we need to insert a new PHI node in SomeBB to
+  /// merge the appropriate values, and this value isn't live out of the block.
+  ///
+  Value *GetValueInMiddleOfBlock(BasicBlock *BB);
+
+  /// RewriteUse - Rewrite a use of the symbolic value.  This handles PHI nodes,
+  /// which use their value in the corresponding predecessor.  Note that this
+  /// will not work if the use is supposed to be rewritten to a value defined in
+  /// the same block as the use, but above it.  Any 'AddAvailableValue's added
+  /// for the use's block will be considered to be below it.
+  void RewriteUse(Use &U);
+
+private:
+  Value *GetValueAtEndOfBlockInternal(BasicBlock *BB);
+  void operator=(const SSAUpdater&); // DO NOT IMPLEMENT
+  SSAUpdater(const SSAUpdater&);     // DO NOT IMPLEMENT
+};
+
+} // End llvm namespace
+
+#endif
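A usage sketch for the new class; OrigV, BB1/V1 and BB2/V2 are assumed values from a hypothetical transform that duplicated a definition into two blocks:

    // Illustration only.
    SSAUpdater SSA;
    SSA.Initialize(OrigV);           // inserted PHIs get OrigV's type and name
    SSA.AddAvailableValue(BB1, V1);  // V1 is live out of BB1
    SSA.AddAvailableValue(BB2, V2);  // V2 is live out of BB2

    // Rewrite each use of OrigV; RewriteUse may insert PHI nodes on demand.
    for (Value::use_iterator UI = OrigV->use_begin(), E = OrigV->use_end();
         UI != E; ) {
      Use &U = UI.getUse();
      ++UI;  // advance first: RewriteUse mutates the use list
      SSA.RewriteUse(U);
    }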
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Utils/SSI.h b/libclamav/c++/llvm/include/llvm/Transforms/Utils/SSI.h
index 8b49aac..198fc82 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/Utils/SSI.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/Utils/SSI.h
@@ -22,8 +22,8 @@
 #ifndef LLVM_TRANSFORMS_UTILS_SSI_H
 #define LLVM_TRANSFORMS_UTILS_SSI_H
 
+#include "llvm/InstrTypes.h"
 #include "llvm/Pass.h"
-#include "llvm/ADT/BitVector.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/SmallVector.h"
@@ -55,43 +55,36 @@ namespace llvm {
       // Stores variables created by SSI
       SmallPtrSet<Instruction *, 16> created;
 
-      // These variables are only live for each creation
-      unsigned num_values;
-
-      // Has a bit for each variable, true if it needs to be created
-      // and false otherwise
-      BitVector needConstruction;
-
       // Phis created by SSI
-      DenseMap<PHINode *, unsigned> phis;
+      DenseMap<PHINode *, Instruction*> phis;
 
       // Sigmas created by SSI
-      DenseMap<PHINode *, unsigned> sigmas;
+      DenseMap<PHINode *, Instruction*> sigmas;
 
       // Phi nodes that have a phi as operand and has to be fixed
       SmallPtrSet<PHINode *, 1> phisToFix;
 
       // List of definition points for every variable
-      SmallVector<SmallVector<BasicBlock *, 1>, 0> defsites;
+      DenseMap<Instruction*, SmallVector<BasicBlock*, 4> > defsites;
 
       // Basic Block of the original definition of each variable
-      SmallVector<BasicBlock *, 0> value_original;
+      DenseMap<Instruction*, BasicBlock*> value_original;
 
       // Stack of last seen definition of a variable
-      SmallVector<SmallVector<Instruction *, 1>, 0> value_stack;
+      DenseMap<Instruction*, SmallVector<Instruction *, 1> > value_stack;
 
-      void insertSigmaFunctions(SmallVectorImpl<Instruction *> &value);
-      void insertSigma(TerminatorInst *TI, Instruction *I, unsigned pos);
-      void insertPhiFunctions(SmallVectorImpl<Instruction *> &value);
-      void renameInit(SmallVectorImpl<Instruction *> &value);
+      void insertSigmaFunctions(SmallPtrSet<Instruction*, 4> &value);
+      void insertSigma(TerminatorInst *TI, Instruction *I);
+      void insertPhiFunctions(SmallPtrSet<Instruction*, 4> &value);
+      void renameInit(SmallPtrSet<Instruction*, 4> &value);
       void rename(BasicBlock *BB);
 
       void substituteUse(Instruction *I);
       bool dominateAny(BasicBlock *BB, Instruction *value);
       void fixPhis();
 
-      unsigned getPositionPhi(PHINode *PN);
-      unsigned getPositionSigma(PHINode *PN);
+      Instruction* getPositionPhi(PHINode *PN);
+      Instruction* getPositionSigma(PHINode *PN);
 
       void init(SmallVectorImpl<Instruction *> &value);
       void clean();
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Utils/ValueMapper.h b/libclamav/c++/llvm/include/llvm/Transforms/Utils/ValueMapper.h
index d31edab..ed33413 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/Utils/ValueMapper.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/Utils/ValueMapper.h
@@ -20,10 +20,9 @@
 namespace llvm {
   class Value;
   class Instruction;
-  class LLVMContext;
   typedef DenseMap<const Value *, Value *> ValueMapTy;
 
-  Value *MapValue(const Value *V, ValueMapTy &VM, LLVMContext &Context);
+  Value *MapValue(const Value *V, ValueMapTy &VM);
   void RemapInstruction(Instruction *I, ValueMapTy &VM);
 } // End llvm namespace
 
diff --git a/libclamav/c++/llvm/include/llvm/Type.h b/libclamav/c++/llvm/include/llvm/Type.h
index 9c2fae0..752635c 100644
--- a/libclamav/c++/llvm/include/llvm/Type.h
+++ b/libclamav/c++/llvm/include/llvm/Type.h
@@ -12,9 +12,8 @@
 #define LLVM_TYPE_H
 
 #include "llvm/AbstractTypeUser.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/Support/Casting.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/System/Atomic.h"
 #include "llvm/ADT/GraphTraits.h"
 #include <string>
@@ -28,6 +27,7 @@ class IntegerType;
 class TypeMapBase;
 class raw_ostream;
 class Module;
+class LLVMContext;
 
 /// This file contains the declaration of the Type class.  For more "Type" type
 /// stuff, look in DerivedTypes.h.
@@ -188,6 +188,30 @@ public:
   ///
   inline TypeID getTypeID() const { return ID; }
 
+  /// isVoidTy - Return true if this is 'void'.
+  bool isVoidTy() const { return ID == VoidTyID; }
+
+  /// isFloatTy - Return true if this is 'float', a 32-bit IEEE fp type.
+  bool isFloatTy() const { return ID == FloatTyID; }
+  
+  /// isDoubleTy - Return true if this is 'double', a 64-bit IEEE fp type.
+  bool isDoubleTy() const { return ID == DoubleTyID; }
+
+  /// isX86_FP80Ty - Return true if this is x86 long double.
+  bool isX86_FP80Ty() const { return ID == X86_FP80TyID; }
+
+  /// isFP128Ty - Return true if this is 'fp128'.
+  bool isFP128Ty() const { return ID == FP128TyID; }
+
+  /// isPPC_FP128Ty - Return true if this is powerpc long double.
+  bool isPPC_FP128Ty() const { return ID == PPC_FP128TyID; }
+
+  /// isLabelTy - Return true if this is 'label'.
+  bool isLabelTy() const { return ID == LabelTyID; }
+
+  /// isMetadataTy - Return true if this is 'metadata'.
+  bool isMetadataTy() const { return ID == MetadataTyID; }
+
   /// getDescription - Return the string representation of the type.
   std::string getDescription() const;
 
@@ -357,6 +381,21 @@ public:
   static const IntegerType *getInt32Ty(LLVMContext &C);
   static const IntegerType *getInt64Ty(LLVMContext &C);
 
+  //===--------------------------------------------------------------------===//
+  // Convenience methods for getting pointer types with one of the above builtin
+  // types as pointee.
+  //
+  static const PointerType *getFloatPtrTy(LLVMContext &C, unsigned AS = 0);
+  static const PointerType *getDoublePtrTy(LLVMContext &C, unsigned AS = 0);
+  static const PointerType *getX86_FP80PtrTy(LLVMContext &C, unsigned AS = 0);
+  static const PointerType *getFP128PtrTy(LLVMContext &C, unsigned AS = 0);
+  static const PointerType *getPPC_FP128PtrTy(LLVMContext &C, unsigned AS = 0);
+  static const PointerType *getInt1PtrTy(LLVMContext &C, unsigned AS = 0);
+  static const PointerType *getInt8PtrTy(LLVMContext &C, unsigned AS = 0);
+  static const PointerType *getInt16PtrTy(LLVMContext &C, unsigned AS = 0);
+  static const PointerType *getInt32PtrTy(LLVMContext &C, unsigned AS = 0);
+  static const PointerType *getInt64PtrTy(LLVMContext &C, unsigned AS = 0);
+
   /// Methods for support type inquiry through isa, cast, and dyn_cast:
   static inline bool classof(const Type *) { return true; }
 
@@ -391,7 +430,7 @@ public:
 
   /// getPointerTo - Return a pointer to the current type.  This is equivalent
   /// to PointerType::get(Foo, AddrSpace).
-  PointerType *getPointerTo(unsigned AddrSpace = 0) const;
+  const PointerType *getPointerTo(unsigned AddrSpace = 0) const;
 
 private:
   /// isSizedDerivedType - Derived types like structures and arrays are sized
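The new predicates and pointer getters replace open-coded comparisons against the static type objects; a small sketch, with Ty and Context assumed:

    // Illustration only.
    if (Ty->isFloatTy() || Ty->isDoubleTy()) {
      // Shorthand for PointerType::getUnqual(Type::getInt8Ty(Context)).
      const PointerType *I8Ptr = Type::getInt8PtrTy(Context);
      (void)I8Ptr;
    }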
diff --git a/libclamav/c++/llvm/include/llvm/TypeSymbolTable.h b/libclamav/c++/llvm/include/llvm/TypeSymbolTable.h
index 4dd3a4a..26b1dbf 100644
--- a/libclamav/c++/llvm/include/llvm/TypeSymbolTable.h
+++ b/libclamav/c++/llvm/include/llvm/TypeSymbolTable.h
@@ -15,6 +15,7 @@
 #define LLVM_TYPE_SYMBOL_TABLE_H
 
 #include "llvm/Type.h"
+#include "llvm/ADT/StringRef.h"
 #include <map>
 
 namespace llvm {
@@ -57,24 +58,28 @@ public:
   /// incrementing an integer and appending it to the name, if necessary
   /// @returns the unique name
   /// @brief Get a unique name for a type
-  std::string getUniqueName(const StringRef &BaseName) const;
+  std::string getUniqueName(StringRef BaseName) const;
 
   /// This method finds the type with the given \p name in the type map
   /// and returns it.
   /// @returns null if the name is not found, otherwise the Type
   /// associated with the \p name.
   /// @brief Lookup a type by name.
-  Type *lookup(const StringRef &name) const;
+  Type *lookup(StringRef name) const;
 
   /// Lookup the type associated with name.
   /// @returns end() if the name is not found, or an iterator at the entry for
   /// Type.
-  iterator find(const StringRef &name);
+  iterator find(StringRef Name) {
+    return tmap.find(Name);
+  }
 
   /// Lookup the type associated with name.
   /// @returns end() if the name is not found, or an iterator at the entry for
   /// Type.
-  const_iterator find(const StringRef &name) const;
+  const_iterator find(StringRef Name) const {
+    return tmap.find(Name);
+  }
 
   /// @returns true iff the symbol table is empty.
   /// @brief Determine if the symbol table is empty
@@ -114,7 +119,7 @@ public:
   /// a many-to-one mapping between names and types. This method allows a type
   /// with an existing entry in the symbol table to get a new name.
   /// @brief Insert a type under a new name.
-  void insert(const StringRef &Name, const Type *Typ);
+  void insert(StringRef Name, const Type *Typ);
 
   /// Remove a type at the specified position in the symbol table.
   /// @returns the removed Type.
diff --git a/libclamav/c++/llvm/include/llvm/Value.h b/libclamav/c++/llvm/include/llvm/Value.h
index 6b393f6..0960346 100644
--- a/libclamav/c++/llvm/include/llvm/Value.h
+++ b/libclamav/c++/llvm/include/llvm/Value.h
@@ -42,7 +42,7 @@ class raw_ostream;
 class AssemblyAnnotationWriter;
 class ValueHandleBase;
 class LLVMContext;
-class MetadataContext;
+class MetadataContextImpl;
 
 //===----------------------------------------------------------------------===//
 //                                 Value Class
@@ -83,7 +83,7 @@ private:
   friend class ValueSymbolTable; // Allow ValueSymbolTable to directly mod Name.
   friend class SymbolTable;      // Allow SymbolTable to directly poke Name.
   friend class ValueHandleBase;
-  friend class MetadataContext;
+  friend class MetadataContextImpl;
   friend class AbstractTypeUser;
   ValueName *Name;
 
@@ -210,6 +210,7 @@ public:
     GlobalAliasVal,           // This is an instance of GlobalAlias
     GlobalVariableVal,        // This is an instance of GlobalVariable
     UndefValueVal,            // This is an instance of UndefValue
+    BlockAddressVal,          // This is an instance of BlockAddress
     ConstantExprVal,          // This is an instance of ConstantExpr
     ConstantAggregateZeroVal, // This is an instance of ConstantAggregateNull
     ConstantIntVal,           // This is an instance of ConstantInt
@@ -223,8 +224,12 @@ public:
     NamedMDNodeVal,           // This is an instance of NamedMDNode
     InlineAsmVal,             // This is an instance of InlineAsm
     PseudoSourceValueVal,     // This is an instance of PseudoSourceValue
+    FixedStackPseudoSourceValueVal, // This is an instance of 
+                                    // FixedStackPseudoSourceValue
     InstructionVal,           // This is an instance of Instruction
-    
+    // Enum values starting at InstructionVal are used for Instructions;
+    // don't add new values here!
+
     // Markers:
     ConstantFirstVal = FunctionVal,
     ConstantLastVal  = ConstantPointerNullVal
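The marker values allow cheap kind tests without a chain of dyn_casts; a sketch assuming a Value *V:

    // Illustration only.
    bool IsInstruction = V->getValueID() >= Value::InstructionVal;
    bool IsConstant    = V->getValueID() >= Value::ConstantFirstVal &&
                         V->getValueID() <= Value::ConstantLastVal;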
diff --git a/libclamav/c++/llvm/include/llvm/ValueSymbolTable.h b/libclamav/c++/llvm/include/llvm/ValueSymbolTable.h
index 4f8ebe8..e05fdbd 100644
--- a/libclamav/c++/llvm/include/llvm/ValueSymbolTable.h
+++ b/libclamav/c++/llvm/include/llvm/ValueSymbolTable.h
@@ -16,7 +16,7 @@
 
 #include "llvm/Value.h"
 #include "llvm/ADT/StringMap.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
   template<typename ValueSubClass, typename ItemParentClass>
@@ -69,7 +69,7 @@ public:
   /// the symbol table. 
   /// @returns the value associated with the \p Name
   /// @brief Lookup a named Value.
-  Value *lookup(const StringRef &Name) const { return vmap.lookup(Name); }
+  Value *lookup(StringRef Name) const { return vmap.lookup(Name); }
 
   /// @returns true iff the symbol table is empty
   /// @brief Determine if the symbol table is empty
@@ -112,7 +112,7 @@ private:
   /// createValueName - This method attempts to create a value name and insert
   /// it into the symbol table with the specified name.  If it conflicts, it
   /// auto-renames the name and returns that instead.
-  ValueName *createValueName(const StringRef &Name, Value *V);
+  ValueName *createValueName(StringRef Name, Value *V);
   
   /// This method removes a value from the symbol table.  It leaves the
   /// ValueName attached to the value, but it is no longer inserted in the
diff --git a/libclamav/c++/llvm/lib/Analysis/AliasAnalysis.cpp b/libclamav/c++/llvm/lib/Analysis/AliasAnalysis.cpp
index c456990..dee9b53 100644
--- a/libclamav/c++/llvm/lib/Analysis/AliasAnalysis.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/AliasAnalysis.cpp
@@ -49,21 +49,11 @@ AliasAnalysis::alias(const Value *V1, unsigned V1Size,
   return AA->alias(V1, V1Size, V2, V2Size);
 }
 
-void AliasAnalysis::getMustAliases(Value *P, std::vector<Value*> &RetVals) {
-  assert(AA && "AA didn't call InitializeAliasAnalysis in its run method!");
-  return AA->getMustAliases(P, RetVals);
-}
-
 bool AliasAnalysis::pointsToConstantMemory(const Value *P) {
   assert(AA && "AA didn't call InitializeAliasAnalysis in its run method!");
   return AA->pointsToConstantMemory(P);
 }
 
-bool AliasAnalysis::hasNoModRefInfoForCalls() const {
-  assert(AA && "AA didn't call InitializeAliasAnalysis in its run method!");
-  return AA->hasNoModRefInfoForCalls();
-}
-
 void AliasAnalysis::deleteValue(Value *V) {
   assert(AA && "AA didn't call InitializeAliasAnalysis in its run method!");
   AA->deleteValue(V);
@@ -137,17 +127,18 @@ AliasAnalysis::getModRefBehavior(Function *F,
 
 AliasAnalysis::ModRefResult
 AliasAnalysis::getModRefInfo(CallSite CS, Value *P, unsigned Size) {
-  ModRefResult Mask = ModRef;
   ModRefBehavior MRB = getModRefBehavior(CS);
   if (MRB == DoesNotAccessMemory)
     return NoModRef;
-  else if (MRB == OnlyReadsMemory)
+  
+  ModRefResult Mask = ModRef;
+  if (MRB == OnlyReadsMemory)
     Mask = Ref;
   else if (MRB == AliasAnalysis::AccessesArguments) {
     bool doesAlias = false;
     for (CallSite::arg_iterator AI = CS.arg_begin(), AE = CS.arg_end();
          AI != AE; ++AI)
-      if (alias(*AI, ~0U, P, Size) != NoAlias) {
+      if (!isNoAlias(*AI, ~0U, P, Size)) {
         doesAlias = true;
         break;
       }
@@ -239,7 +230,7 @@ bool llvm::isNoAliasCall(const Value *V) {
 ///    NoAlias returns
 ///
 bool llvm::isIdentifiedObject(const Value *V) {
-  if (isa<AllocationInst>(V) || isNoAliasCall(V))
+  if (isa<AllocaInst>(V) || isNoAliasCall(V))
     return true;
   if (isa<GlobalValue>(V) && !isa<GlobalAlias>(V))
     return true;
diff --git a/libclamav/c++/llvm/lib/Analysis/AliasAnalysisCounter.cpp b/libclamav/c++/llvm/lib/Analysis/AliasAnalysisCounter.cpp
index 272c871..030bcd2 100644
--- a/libclamav/c++/llvm/lib/Analysis/AliasAnalysisCounter.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/AliasAnalysisCounter.cpp
@@ -17,7 +17,6 @@
 #include "llvm/Analysis/AliasAnalysis.h"
 #include "llvm/Assembly/Writer.h"
 #include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
@@ -28,8 +27,7 @@ static cl::opt<bool>
 PrintAllFailures("count-aa-print-all-failed-queries", cl::ReallyHidden);
 
 namespace {
-  class VISIBILITY_HIDDEN AliasAnalysisCounter 
-      : public ModulePass, public AliasAnalysis {
+  class AliasAnalysisCounter : public ModulePass, public AliasAnalysis {
     unsigned No, May, Must;
     unsigned NoMR, JustRef, JustMod, MR;
     const char *Name;
diff --git a/libclamav/c++/llvm/lib/Analysis/AliasAnalysisEvaluator.cpp b/libclamav/c++/llvm/lib/Analysis/AliasAnalysisEvaluator.cpp
index bb95c01..6a2564c 100644
--- a/libclamav/c++/llvm/lib/Analysis/AliasAnalysisEvaluator.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/AliasAnalysisEvaluator.cpp
@@ -28,7 +28,6 @@
 #include "llvm/Target/TargetData.h"
 #include "llvm/Support/InstIterator.h"
 #include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/ADT/SetVector.h"
 using namespace llvm;
@@ -45,7 +44,7 @@ static cl::opt<bool> PrintRef("print-ref", cl::ReallyHidden);
 static cl::opt<bool> PrintModRef("print-modref", cl::ReallyHidden);
 
 namespace {
-  class VISIBILITY_HIDDEN AAEval : public FunctionPass {
+  class AAEval : public FunctionPass {
     unsigned NoAlias, MayAlias, MustAlias;
     unsigned NoModRef, Mod, Ref, ModRef;
 
diff --git a/libclamav/c++/llvm/lib/Analysis/AliasDebugger.cpp b/libclamav/c++/llvm/lib/Analysis/AliasDebugger.cpp
index 1e82621..6868e3f 100644
--- a/libclamav/c++/llvm/lib/Analysis/AliasDebugger.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/AliasDebugger.cpp
@@ -23,14 +23,12 @@
 #include "llvm/Constants.h"
 #include "llvm/DerivedTypes.h"
 #include "llvm/Analysis/AliasAnalysis.h"
-#include "llvm/Support/Compiler.h"
 #include <set>
 using namespace llvm;
 
 namespace {
   
-  class VISIBILITY_HIDDEN AliasDebugger 
-      : public ModulePass, public AliasAnalysis {
+  class AliasDebugger : public ModulePass, public AliasAnalysis {
 
     //What we do is simple.  Keep track of every value the AA could
     //know about, and verify that queries are one of those.
@@ -92,11 +90,6 @@ namespace {
       return AliasAnalysis::getModRefInfo(CS1,CS2);
     }
     
-    void getMustAliases(Value *P, std::vector<Value*> &RetVals) {
-      assert(Vals.find(P) != Vals.end() && "Never seen value in AA before");
-      return AliasAnalysis::getMustAliases(P, RetVals);
-    }
-
     bool pointsToConstantMemory(const Value *P) {
       assert(Vals.find(P) != Vals.end() && "Never seen value in AA before");
       return AliasAnalysis::pointsToConstantMemory(P);
diff --git a/libclamav/c++/llvm/lib/Analysis/AliasSetTracker.cpp b/libclamav/c++/llvm/lib/Analysis/AliasSetTracker.cpp
index b056d00..6634600 100644
--- a/libclamav/c++/llvm/lib/Analysis/AliasSetTracker.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/AliasSetTracker.cpp
@@ -19,7 +19,6 @@
 #include "llvm/Type.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Assembly/Writer.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/InstIterator.h"
 #include "llvm/Support/Format.h"
@@ -154,9 +153,6 @@ bool AliasSet::aliasesPointer(const Value *Ptr, unsigned Size,
 
   // Check the call sites list and invoke list...
   if (!CallSites.empty()) {
-    if (AA.hasNoModRefInfoForCalls())
-      return true;
-
     for (unsigned i = 0, e = CallSites.size(); i != e; ++i)
       if (AA.getModRefInfo(CallSites[i], const_cast<Value*>(Ptr), Size)
                    != AliasAnalysis::NoModRef)
@@ -170,9 +166,6 @@ bool AliasSet::aliasesCallSite(CallSite CS, AliasAnalysis &AA) const {
   if (AA.doesNotAccessMemory(CS))
     return false;
 
-  if (AA.hasNoModRefInfoForCalls())
-    return true;
-
   for (unsigned i = 0, e = CallSites.size(); i != e; ++i)
     if (AA.getModRefInfo(CallSites[i], CS) != AliasAnalysis::NoModRef ||
         AA.getModRefInfo(CS, CallSites[i]) != AliasAnalysis::NoModRef)
@@ -297,12 +290,6 @@ bool AliasSetTracker::add(StoreInst *SI) {
   return NewPtr;
 }
 
-bool AliasSetTracker::add(FreeInst *FI) {
-  bool NewPtr;
-  addPointer(FI->getOperand(0), ~0, AliasSet::Mods, NewPtr);
-  return NewPtr;
-}
-
 bool AliasSetTracker::add(VAArgInst *VAAI) {
   bool NewPtr;
   addPointer(VAAI->getOperand(0), ~0, AliasSet::ModRef, NewPtr);
@@ -338,8 +325,6 @@ bool AliasSetTracker::add(Instruction *I) {
     return add(CI);
   else if (InvokeInst *II = dyn_cast<InvokeInst>(I))
     return add(II);
-  else if (FreeInst *FI = dyn_cast<FreeInst>(I))
-    return add(FI);
   else if (VAArgInst *VAAI = dyn_cast<VAArgInst>(I))
     return add(VAAI);
   return true;
@@ -428,13 +413,6 @@ bool AliasSetTracker::remove(StoreInst *SI) {
   return true;
 }
 
-bool AliasSetTracker::remove(FreeInst *FI) {
-  AliasSet *AS = findAliasSetForPointer(FI->getOperand(0), ~0);
-  if (!AS) return false;
-  remove(*AS);
-  return true;
-}
-
 bool AliasSetTracker::remove(VAArgInst *VAAI) {
   AliasSet *AS = findAliasSetForPointer(VAAI->getOperand(0), ~0);
   if (!AS) return false;
@@ -460,8 +438,6 @@ bool AliasSetTracker::remove(Instruction *I) {
     return remove(SI);
   else if (CallInst *CI = dyn_cast<CallInst>(I))
     return remove(CI);
-  else if (FreeInst *FI = dyn_cast<FreeInst>(I))
-    return remove(FI);
   else if (VAArgInst *VAAI = dyn_cast<VAArgInst>(I))
     return remove(VAAI);
   return true;
@@ -599,7 +575,7 @@ AliasSetTracker::ASTCallbackVH::operator=(Value *V) {
 //===----------------------------------------------------------------------===//
 
 namespace {
-  class VISIBILITY_HIDDEN AliasSetPrinter : public FunctionPass {
+  class AliasSetPrinter : public FunctionPass {
     AliasSetTracker *Tracker;
   public:
     static char ID; // Pass identification, replacement for typeid
diff --git a/libclamav/c++/llvm/lib/Analysis/BasicAliasAnalysis.cpp b/libclamav/c++/llvm/lib/Analysis/BasicAliasAnalysis.cpp
index 5fa87ff..b2983c7 100644
--- a/libclamav/c++/llvm/lib/Analysis/BasicAliasAnalysis.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/BasicAliasAnalysis.cpp
@@ -14,8 +14,6 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/Analysis/AliasAnalysis.h"
-#include "llvm/Analysis/CaptureTracking.h"
-#include "llvm/Analysis/MallocHelper.h"
 #include "llvm/Analysis/Passes.h"
 #include "llvm/Constants.h"
 #include "llvm/DerivedTypes.h"
@@ -23,15 +21,15 @@
 #include "llvm/GlobalVariable.h"
 #include "llvm/Instructions.h"
 #include "llvm/IntrinsicInst.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/Operator.h"
 #include "llvm/Pass.h"
+#include "llvm/Analysis/CaptureTracking.h"
+#include "llvm/Analysis/MemoryBuiltins.h"
+#include "llvm/Analysis/ValueTracking.h"
 #include "llvm/Target/TargetData.h"
+#include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/SmallVector.h"
-#include "llvm/ADT/STLExtras.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
-#include "llvm/Support/GetElementPtrTypeIterator.h"
 #include <algorithm>
 using namespace llvm;
 
@@ -39,30 +37,6 @@ using namespace llvm;
 // Useful predicates
 //===----------------------------------------------------------------------===//
 
-static const GEPOperator *isGEP(const Value *V) {
-  return dyn_cast<GEPOperator>(V);
-}
-
-static const Value *GetGEPOperands(const Value *V, 
-                                   SmallVector<Value*, 16> &GEPOps) {
-  assert(GEPOps.empty() && "Expect empty list to populate!");
-  GEPOps.insert(GEPOps.end(), cast<User>(V)->op_begin()+1,
-                cast<User>(V)->op_end());
-
-  // Accumulate all of the chained indexes into the operand array
-  V = cast<User>(V)->getOperand(0);
-
-  while (const User *G = isGEP(V)) {
-    if (!isa<Constant>(GEPOps[0]) || isa<GlobalValue>(GEPOps[0]) ||
-        !cast<Constant>(GEPOps[0])->isNullValue())
-      break;  // Don't handle folding arbitrary pointer offsets yet...
-    GEPOps.erase(GEPOps.begin());   // Drop the zero index
-    GEPOps.insert(GEPOps.begin(), G->op_begin()+1, G->op_end());
-    V = G->getOperand(0);
-  }
-  return V;
-}
-
 /// isKnownNonNull - Return true if we know that the specified value is never
 /// null.
 static bool isKnownNonNull(const Value *V) {
@@ -83,8 +57,13 @@ static bool isKnownNonNull(const Value *V) {
 /// object that never escapes from the function.
 static bool isNonEscapingLocalObject(const Value *V) {
   // If this is a local allocation, check to see if it escapes.
-  if (isa<AllocationInst>(V) || isNoAliasCall(V))
-    return !PointerMayBeCaptured(V, false);
+  if (isa<AllocaInst>(V) || isNoAliasCall(V))
+    // Set StoreCaptures to True so that we can assume in our callers that the
+    // pointer is not the result of a load instruction. Currently
+    // PointerMayBeCaptured doesn't have any special analysis for the
+    // StoreCaptures=false case; if it did, our callers could be refined to be
+    // more precise.
+    return !PointerMayBeCaptured(V, false, /*StoreCaptures=*/true);
 
   // If this is an argument that corresponds to a byval or noalias argument,
   // then it has not escaped before entering the function.  Check if it escapes
@@ -94,7 +73,7 @@ static bool isNonEscapingLocalObject(const Value *V) {
       // Don't bother analyzing arguments already known not to escape.
       if (A->hasNoCaptureAttr())
         return true;
-      return !PointerMayBeCaptured(V, false);
+      return !PointerMayBeCaptured(V, false, /*StoreCaptures=*/true);
     }
   return false;
 }
@@ -103,17 +82,17 @@ static bool isNonEscapingLocalObject(const Value *V) {
 /// isObjectSmallerThan - Return true if we can prove that the object specified
 /// by V is smaller than Size.
 static bool isObjectSmallerThan(const Value *V, unsigned Size,
-                                LLVMContext &Context, const TargetData &TD) {
+                                const TargetData &TD) {
   const Type *AccessTy;
   if (const GlobalVariable *GV = dyn_cast<GlobalVariable>(V)) {
     AccessTy = GV->getType()->getElementType();
-  } else if (const AllocationInst *AI = dyn_cast<AllocationInst>(V)) {
+  } else if (const AllocaInst *AI = dyn_cast<AllocaInst>(V)) {
     if (!AI->isArrayAllocation())
       AccessTy = AI->getType()->getElementType();
     else
       return false;
   } else if (const CallInst* CI = extractMallocCall(V)) {
-    if (!isArrayMalloc(V, Context, &TD))
+    if (!isArrayMalloc(V, &TD))
       // The size is the argument to the malloc call.
       if (const ConstantInt* C = dyn_cast<ConstantInt>(CI->getOperand(1)))
         return (C->getZExtValue() < Size);
@@ -142,7 +121,7 @@ namespace {
   /// implementations, in that it does not chain to a previous analysis.  As
   /// such it doesn't follow many of the rules that other alias analyses must.
   ///
-  struct VISIBILITY_HIDDEN NoAA : public ImmutablePass, public AliasAnalysis {
+  struct NoAA : public ImmutablePass, public AliasAnalysis {
     static char ID; // Class identification, replacement for typeinfo
     NoAA() : ImmutablePass(&ID) {}
     explicit NoAA(void *PID) : ImmutablePass(PID) { }
@@ -164,7 +143,6 @@ namespace {
       llvm_unreachable("This method may not be called on this function!");
     }
 
-    virtual void getMustAliases(Value *P, std::vector<Value*> &RetVals) { }
     virtual bool pointsToConstantMemory(const Value *P) { return false; }
     virtual ModRefResult getModRefInfo(CallSite CS, Value *P, unsigned Size) {
       return ModRef;
@@ -172,7 +150,6 @@ namespace {
     virtual ModRefResult getModRefInfo(CallSite CS1, CallSite CS2) {
       return ModRef;
     }
-    virtual bool hasNoModRefInfoForCalls() const { return true; }
 
     virtual void deleteValue(Value *V) {}
     virtual void copyValue(Value *From, Value *To) {}
@@ -197,32 +174,45 @@ namespace {
   /// BasicAliasAnalysis - This is the default alias analysis implementation.
   /// Because it doesn't chain to a previous alias analysis (like -no-aa), it
   /// derives from the NoAA class.
-  struct VISIBILITY_HIDDEN BasicAliasAnalysis : public NoAA {
+  struct BasicAliasAnalysis : public NoAA {
     static char ID; // Class identification, replacement for typeinfo
     BasicAliasAnalysis() : NoAA(&ID) {}
     AliasResult alias(const Value *V1, unsigned V1Size,
-                      const Value *V2, unsigned V2Size);
+                      const Value *V2, unsigned V2Size) {
+      assert(VisitedPHIs.empty() && "VisitedPHIs must be cleared after use!");
+      AliasResult Alias = aliasCheck(V1, V1Size, V2, V2Size);
+      VisitedPHIs.clear();
+      return Alias;
+    }
 
     ModRefResult getModRefInfo(CallSite CS, Value *P, unsigned Size);
     ModRefResult getModRefInfo(CallSite CS1, CallSite CS2);
 
-    /// hasNoModRefInfoForCalls - We can provide mod/ref information against
-    /// non-escaping allocations.
-    virtual bool hasNoModRefInfoForCalls() const { return false; }
-
     /// pointsToConstantMemory - Chase pointers until we find a (constant
     /// global) or not.
     bool pointsToConstantMemory(const Value *P);
 
   private:
-    // CheckGEPInstructions - Check two GEP instructions with known
-    // must-aliasing base pointers.  This checks to see if the index expressions
-    // preclude the pointers from aliasing...
-    AliasResult
-    CheckGEPInstructions(const Type* BasePtr1Ty,
-                         Value **GEP1Ops, unsigned NumGEP1Ops, unsigned G1Size,
-                         const Type *BasePtr2Ty,
-                         Value **GEP2Ops, unsigned NumGEP2Ops, unsigned G2Size);
+    // VisitedPHIs - Track PHI nodes visited by an aliasCheck() call.
+    SmallPtrSet<const Value*, 16> VisitedPHIs;
+
+    // aliasGEP - Provide a bunch of ad-hoc rules to disambiguate a GEP
+    // instruction against another.
+    AliasResult aliasGEP(const GEPOperator *V1, unsigned V1Size,
+                         const Value *V2, unsigned V2Size,
+                         const Value *UnderlyingV1, const Value *UnderlyingV2);
+
+    // aliasPHI - Provide a bunch of ad-hoc rules to disambiguate a PHI
+    // instruction against another.
+    AliasResult aliasPHI(const PHINode *PN, unsigned PNSize,
+                         const Value *V2, unsigned V2Size);
+
+    /// aliasSelect - Disambiguate a Select instruction against another value.
+    AliasResult aliasSelect(const SelectInst *SI, unsigned SISize,
+                            const Value *V2, unsigned V2Size);
+
+    AliasResult aliasCheck(const Value *V1, unsigned V1Size,
+                           const Value *V2, unsigned V2Size);
   };
 }  // End of anonymous namespace
 
@@ -244,46 +234,124 @@ ImmutablePass *llvm::createBasicAliasAnalysisPass() {
 bool BasicAliasAnalysis::pointsToConstantMemory(const Value *P) {
   if (const GlobalVariable *GV = 
         dyn_cast<GlobalVariable>(P->getUnderlyingObject()))
+    // Note: this doesn't require GV to be "ODR" because it isn't legal for a
+    // global to be marked constant in some modules and non-constant in others.
+    // GV may even be a declaration, not a definition.
     return GV->isConstant();
   return false;
 }
 
 
-// getModRefInfo - Check to see if the specified callsite can clobber the
-// specified memory object.  Since we only look at local properties of this
-// function, we really can't say much about this query.  We do, however, use
-// simple "address taken" analysis on local objects.
-//
+/// getModRefInfo - Check to see if the specified callsite can clobber the
+/// specified memory object.  Since we only look at local properties of this
+/// function, we really can't say much about this query.  We do, however, use
+/// simple "address taken" analysis on local objects.
 AliasAnalysis::ModRefResult
 BasicAliasAnalysis::getModRefInfo(CallSite CS, Value *P, unsigned Size) {
-  if (!isa<Constant>(P)) {
-    const Value *Object = P->getUnderlyingObject();
-    
-    // If this is a tail call and P points to a stack location, we know that
-    // the tail call cannot access or modify the local stack.
-    // We cannot exclude byval arguments here; these belong to the caller of
-    // the current function not to the current function, and a tail callee
-    // may reference them.
-    if (isa<AllocaInst>(Object))
-      if (CallInst *CI = dyn_cast<CallInst>(CS.getInstruction()))
-        if (CI->isTailCall())
-          return NoModRef;
-    
-    // If the pointer is to a locally allocated object that does not escape,
-    // then the call can not mod/ref the pointer unless the call takes the
-    // argument without capturing it.
-    if (isNonEscapingLocalObject(Object) && CS.getInstruction() != Object) {
-      bool passedAsArg = false;
-      // TODO: Eventually only check 'nocapture' arguments.
-      for (CallSite::arg_iterator CI = CS.arg_begin(), CE = CS.arg_end();
-           CI != CE; ++CI)
-        if (isa<PointerType>((*CI)->getType()) &&
-            alias(cast<Value>(CI), ~0U, P, ~0U) != NoAlias)
-          passedAsArg = true;
+  const Value *Object = P->getUnderlyingObject();
+  
+  // If this is a tail call and P points to a stack location, we know that
+  // the tail call cannot access or modify the local stack.
+  // We cannot exclude byval arguments here; these belong to the caller of
+  // the current function not to the current function, and a tail callee
+  // may reference them.
+  if (isa<AllocaInst>(Object))
+    if (CallInst *CI = dyn_cast<CallInst>(CS.getInstruction()))
+      if (CI->isTailCall())
+        return NoModRef;
+  
+  // If the pointer is to a locally allocated object that does not escape,
+  // then the call can not mod/ref the pointer unless the call takes the pointer
+  // as an argument, and itself doesn't capture it.
+  if (!isa<Constant>(Object) && CS.getInstruction() != Object &&
+      isNonEscapingLocalObject(Object)) {
+    bool PassedAsArg = false;
+    unsigned ArgNo = 0;
+    for (CallSite::arg_iterator CI = CS.arg_begin(), CE = CS.arg_end();
+         CI != CE; ++CI, ++ArgNo) {
+      // Only look at the no-capture pointer arguments.
+      if (!isa<PointerType>((*CI)->getType()) ||
+          !CS.paramHasAttr(ArgNo+1, Attribute::NoCapture))
+        continue;
       
-      if (!passedAsArg)
+      // If this is a no-capture pointer argument, see if we can tell that it
+      // is impossible to alias the pointer we're checking.  If not, we have to
+      // assume that the call could touch the pointer, even though it doesn't
+      // escape.
+      if (!isNoAlias(cast<Value>(CI), ~0U, P, ~0U)) {
+        PassedAsArg = true;
+        break;
+      }
+    }
+    
+    if (!PassedAsArg)
+      return NoModRef;
+  }
+
+  // Finally, handle specific knowledge of intrinsics.
+  IntrinsicInst *II = dyn_cast<IntrinsicInst>(CS.getInstruction());
+  if (II == 0)
+    return AliasAnalysis::getModRefInfo(CS, P, Size);
+
+  switch (II->getIntrinsicID()) {
+  default: break;
+  case Intrinsic::memcpy:
+  case Intrinsic::memmove: {
+    unsigned Len = ~0U;
+    if (ConstantInt *LenCI = dyn_cast<ConstantInt>(II->getOperand(3)))
+      Len = LenCI->getZExtValue();
+    Value *Dest = II->getOperand(1);
+    Value *Src = II->getOperand(2);
+    if (isNoAlias(Dest, Len, P, Size)) {
+      if (isNoAlias(Src, Len, P, Size))
         return NoModRef;
+      return Ref;
     }
+    break;
+  }
+  case Intrinsic::memset:
+    // Since memset is 'accesses arguments' only, the AliasAnalysis base class
+    // will handle it for the variable length case.
+    if (ConstantInt *LenCI = dyn_cast<ConstantInt>(II->getOperand(3))) {
+      unsigned Len = LenCI->getZExtValue();
+      Value *Dest = II->getOperand(1);
+      if (isNoAlias(Dest, Len, P, Size))
+        return NoModRef;
+    }
+    break;
+  case Intrinsic::atomic_cmp_swap:
+  case Intrinsic::atomic_swap:
+  case Intrinsic::atomic_load_add:
+  case Intrinsic::atomic_load_sub:
+  case Intrinsic::atomic_load_and:
+  case Intrinsic::atomic_load_nand:
+  case Intrinsic::atomic_load_or:
+  case Intrinsic::atomic_load_xor:
+  case Intrinsic::atomic_load_max:
+  case Intrinsic::atomic_load_min:
+  case Intrinsic::atomic_load_umax:
+  case Intrinsic::atomic_load_umin:
+    if (TD) {
+      Value *Op1 = II->getOperand(1);
+      unsigned Op1Size = TD->getTypeStoreSize(Op1->getType());
+      if (isNoAlias(Op1, Op1Size, P, Size))
+        return NoModRef;
+    }
+    break;
+  case Intrinsic::lifetime_start:
+  case Intrinsic::lifetime_end:
+  case Intrinsic::invariant_start: {
+    unsigned PtrSize = cast<ConstantInt>(II->getOperand(1))->getZExtValue();
+    if (isNoAlias(II->getOperand(2), PtrSize, P, Size))
+      return NoModRef;
+    break;
+  }
+  case Intrinsic::invariant_end: {
+    unsigned PtrSize = cast<ConstantInt>(II->getOperand(2))->getZExtValue();
+    if (isNoAlias(II->getOperand(3), PtrSize, P, Size))
+      return NoModRef;
+    break;
+  }
   }
 
   // The AliasAnalysis base class has some smarts; let's use them.
@@ -308,13 +376,265 @@ BasicAliasAnalysis::getModRefInfo(CallSite CS1, CallSite CS2) {
   return NoAA::getModRefInfo(CS1, CS2);
 }
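+
+// For orientation, a minimal standalone sketch of the memcpy/memmove case
+// above (hedged: SketchModRef and memTransferModRef are hypothetical
+// illustrations, not LLVM API).  When the destination provably cannot alias
+// the queried pointer, the transfer can at most read through it.
+enum SketchModRef { SketchNoModRef, SketchRef, SketchModRefBoth };
+
+static SketchModRef memTransferModRef(bool DestNoAliasP, bool SrcNoAliasP) {
+  if (DestNoAliasP)                      // P is never written through Dest...
+    return SrcNoAliasP ? SketchNoModRef  // ...and is not read through Src,
+                       : SketchRef;      // ...or is at most read through Src.
+  return SketchModRefBoth;               // Conservative: both mod and ref.
+}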
 
+/// GetIndiceDifference - Dest and Src are the variable indices from two
+/// decomposed GetElementPtr instructions GEP1 and GEP2 which have common base
+/// pointers.  Subtract the GEP2 indices from GEP1 to find the symbolic
+/// difference between the two pointers. 
+static void GetIndiceDifference(
+                      SmallVectorImpl<std::pair<const Value*, int64_t> > &Dest,
+                const SmallVectorImpl<std::pair<const Value*, int64_t> > &Src) {
+  if (Src.empty()) return;
+
+  for (unsigned i = 0, e = Src.size(); i != e; ++i) {
+    const Value *V = Src[i].first;
+    int64_t Scale = Src[i].second;
+    
+    // Find V in Dest.  This is N^2, but pointer indices almost never have more
+    // than a few variable indexes.
+    for (unsigned j = 0, e = Dest.size(); j != e; ++j) {
+      if (Dest[j].first != V) continue;
+      
+      // If we found it, subtract off Scale V's from the entry in Dest.  If it
+      // goes to zero, remove the entry.
+      if (Dest[j].second != Scale)
+        Dest[j].second -= Scale;
+      else
+        Dest.erase(Dest.begin()+j);
+      Scale = 0;
+      break;
+    }
+    
+    // If we didn't consume this entry, add it to the end of the Dest list.
+    if (Scale)
+      Dest.push_back(std::make_pair(V, -Scale));
+  }
+}
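+
+// A self-contained sketch of the subtraction above (hedged: std::vector and
+// const void* stand in for SmallVectorImpl and const Value*, and the helper
+// name is hypothetical).  Each (value, scale) pair from Src is cancelled
+// against a matching entry in Dest, or appended with a negated scale.
+#include <cstdint>
+#include <utility>
+#include <vector>
+
+typedef std::vector<std::pair<const void*, int64_t> > SketchIndexVec;
+
+static void sketchSubtractIndices(SketchIndexVec &Dest,
+                                  const SketchIndexVec &Src) {
+  for (unsigned i = 0, e = Src.size(); i != e; ++i) {
+    const void *V = Src[i].first;
+    int64_t Scale = Src[i].second;
+    for (unsigned j = 0, je = Dest.size(); j != je; ++j) {
+      if (Dest[j].first != V) continue;
+      if (Dest[j].second != Scale)
+        Dest[j].second -= Scale;       // Partial cancellation.
+      else
+        Dest.erase(Dest.begin() + j);  // Exact cancellation: drop the entry.
+      Scale = 0;
+      break;
+    }
+    if (Scale)                         // V was not in Dest; record -Scale.
+      Dest.push_back(std::make_pair(V, -Scale));
+  }
+}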
+
+/// aliasGEP - Provide a bunch of ad-hoc rules to disambiguate a GEP instruction
+/// against another pointer.  We know that V1 is a GEP, but we don't know
+/// anything about V2.  UnderlyingV1 is GEP1->getUnderlyingObject(),
+/// UnderlyingV2 is the same for V2.
+///
+AliasAnalysis::AliasResult
+BasicAliasAnalysis::aliasGEP(const GEPOperator *GEP1, unsigned V1Size,
+                             const Value *V2, unsigned V2Size,
+                             const Value *UnderlyingV1,
+                             const Value *UnderlyingV2) {
+  int64_t GEP1BaseOffset;
+  SmallVector<std::pair<const Value*, int64_t>, 4> GEP1VariableIndices;
+
+  // If we have two gep instructions with must-alias'ing base pointers, figure
+  // out if the indexes to the GEP tell us anything about the derived pointer.
+  if (const GEPOperator *GEP2 = dyn_cast<GEPOperator>(V2)) {
+    // Do the base pointers alias?
+    AliasResult BaseAlias = aliasCheck(UnderlyingV1, ~0U, UnderlyingV2, ~0U);
+    
+    // If we get a No or May, then return it immediately: no amount of analysis
+    // will improve this situation.
+    if (BaseAlias != MustAlias) return BaseAlias;
+    
+    // Otherwise, we have a MustAlias.  Since the base pointers alias each other
+    // exactly, see if the computed offset from the common pointer tells us
+    // about the relation of the resulting pointer.
+    const Value *GEP1BasePtr =
+      DecomposeGEPExpression(GEP1, GEP1BaseOffset, GEP1VariableIndices, TD);
+    
+    int64_t GEP2BaseOffset;
+    SmallVector<std::pair<const Value*, int64_t>, 4> GEP2VariableIndices;
+    const Value *GEP2BasePtr =
+      DecomposeGEPExpression(GEP2, GEP2BaseOffset, GEP2VariableIndices, TD);
+    
+    // If DecomposeGEPExpression isn't able to look all the way through the
+    // addressing operation, we must not have TD and this is too complex for us
+    // to handle without it.
+    if (GEP1BasePtr != UnderlyingV1 || GEP2BasePtr != UnderlyingV2) {
+      assert(TD == 0 &&
+             "DecomposeGEPExpression and getUnderlyingObject disagree!");
+      return MayAlias;
+    }
+    
+    // Subtract the GEP2 pointer from the GEP1 pointer to find out their
+    // symbolic difference.
+    GEP1BaseOffset -= GEP2BaseOffset;
+    GetIndiceDifference(GEP1VariableIndices, GEP2VariableIndices);
+    
+  } else {
+    // Check to see if these two pointers are related by the getelementptr
+    // instruction.  If one pointer is a GEP with a non-zero index of the other
+    // pointer, we know they cannot alias.
+
+    // If both accesses are unknown size, we can't do anything useful here.
+    if (V1Size == ~0U && V2Size == ~0U)
+      return MayAlias;
+
+    AliasResult R = aliasCheck(UnderlyingV1, ~0U, V2, V2Size);
+    if (R != MustAlias)
+      // If V2 may alias the GEP base pointer, conservatively return MayAlias.
+      // If V2 is known not to alias the GEP base pointer, then the two values
+      // cannot alias per GEP semantics: "A pointer value formed from a
+      // getelementptr instruction is associated with the addresses associated
+      // with the first operand of the getelementptr".
+      return R;
+
+    const Value *GEP1BasePtr =
+      DecomposeGEPExpression(GEP1, GEP1BaseOffset, GEP1VariableIndices, TD);
+    
+    // If DecomposeGEPExpression isn't able to look all the way through the
+    // addressing operation, we must not have TD and this is too complex for us
+    // to handle without it.
+    if (GEP1BasePtr != UnderlyingV1) {
+      assert(TD == 0 &&
+             "DecomposeGEPExpression and getUnderlyingObject disagree!");
+      return MayAlias;
+    }
+  }
+  
+  // In the two-GEP case, if there is no difference in the offsets of the
+  // computed pointers, the resultant pointers are a must alias.  This
+  // happens when we have two lexically identical GEPs (for example).
+  //
+  // In the other case, if we have getelementptr <ptr>, 0, 0, 0, 0, ... and V2
+  // must-aliases the GEP, the end result is a must alias also.
+  if (GEP1BaseOffset == 0 && GEP1VariableIndices.empty())
+    return MustAlias;
+
+  // If we have a known constant offset, see if this offset is larger than the
+  // access size being queried.  If so, and if no variable indices can remove
+  // pieces of this constant, then we know we have a no-alias.  For example,
+  //   &A[100] != &A.
+  
+  // In order to handle cases like &A[100][i] where i is an out of range
+  // subscript, we have to ignore all constant offset pieces that are a multiple
+  // of a scaled index.  Do this by removing constant offsets that are a
+  // multiple of any of our variable indices.  This allows us to transform
+  // things like &A[i][1] because i has a stride of (e.g.) 8 bytes but the 1
+  // provides an offset of 4 bytes (assuming a <= 4 byte access).
+  for (unsigned i = 0, e = GEP1VariableIndices.size();
+       i != e && GEP1BaseOffset; ++i)
+    if (int64_t RemovedOffset = GEP1BaseOffset/GEP1VariableIndices[i].second)
+      GEP1BaseOffset -= RemovedOffset*GEP1VariableIndices[i].second;
+  
+  // If our known offset is bigger than the access size, we know we don't have
+  // an alias.
+  if (GEP1BaseOffset) {
+    if (GEP1BaseOffset >= (int64_t)V2Size ||
+        GEP1BaseOffset <= -(int64_t)V1Size)
+      return NoAlias;
+  }
+  
+  return MayAlias;
+}
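+
+// Worked instance of the final interval test above (hedged illustration,
+// hypothetical helper name): for &A[100] vs. &A with 4-byte elements,
+// GEP1BaseOffset is 400, so with V1Size == V2Size == 4 the accessed ranges
+// [400, 404) and [0, 4) are disjoint and the pair is NoAlias.
+static bool sketchOffsetsDisjoint(int64_t Off, unsigned V1Size,
+                                  unsigned V2Size) {
+  // The ranges [Off, Off + V1Size) and [0, V2Size) cannot overlap.
+  return Off >= (int64_t)V2Size || Off <= -(int64_t)V1Size;
+}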
+
+/// aliasSelect - Provide a bunch of ad-hoc rules to disambiguate a Select
+/// instruction against another.
+AliasAnalysis::AliasResult
+BasicAliasAnalysis::aliasSelect(const SelectInst *SI, unsigned SISize,
+                                const Value *V2, unsigned V2Size) {
+  // If the values are Selects with the same condition, we can do a more precise
+  // check: just check for aliases between the values on corresponding arms.
+  if (const SelectInst *SI2 = dyn_cast<SelectInst>(V2))
+    if (SI->getCondition() == SI2->getCondition()) {
+      AliasResult Alias =
+        aliasCheck(SI->getTrueValue(), SISize,
+                   SI2->getTrueValue(), V2Size);
+      if (Alias == MayAlias)
+        return MayAlias;
+      AliasResult ThisAlias =
+        aliasCheck(SI->getFalseValue(), SISize,
+                   SI2->getFalseValue(), V2Size);
+      if (ThisAlias != Alias)
+        return MayAlias;
+      return Alias;
+    }
+
+  // If both arms of the Select node NoAlias or MustAlias V2, then return
+  // NoAlias / MustAlias. Otherwise, return MayAlias.
+  AliasResult Alias =
+    aliasCheck(SI->getTrueValue(), SISize, V2, V2Size);
+  if (Alias == MayAlias)
+    return MayAlias;
+  AliasResult ThisAlias =
+    aliasCheck(SI->getFalseValue(), SISize, V2, V2Size);
+  if (ThisAlias != Alias)
+    return MayAlias;
+  return Alias;
+}
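+
+// The merge rule above, reduced to a standalone sketch (hedged: the enum and
+// helper are hypothetical illustrations).  Two sub-answers survive only when
+// they agree and neither is MayAlias.
+enum SketchAlias { SketchNoAlias, SketchMayAlias, SketchMustAlias };
+
+static SketchAlias sketchMergeArms(SketchAlias TrueArm, SketchAlias FalseArm) {
+  if (TrueArm == SketchMayAlias || TrueArm != FalseArm)
+    return SketchMayAlias;
+  return TrueArm;                  // Both arms agree: NoAlias or MustAlias.
+}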
+
+// aliasPHI - Provide a bunch of ad-hoc rules to disambiguate a PHI instruction
+// against another.
+AliasAnalysis::AliasResult
+BasicAliasAnalysis::aliasPHI(const PHINode *PN, unsigned PNSize,
+                             const Value *V2, unsigned V2Size) {
+  // If the PHI node has already been visited, avoid recursing any further.
+  if (!VisitedPHIs.insert(PN))
+    return MayAlias;
+
+  // If the values are PHIs in the same block, we can do a more precise
+  // as well as efficient check: just check for aliases between the values
+  // on corresponding edges.
+  if (const PHINode *PN2 = dyn_cast<PHINode>(V2))
+    if (PN2->getParent() == PN->getParent()) {
+      AliasResult Alias =
+        aliasCheck(PN->getIncomingValue(0), PNSize,
+                   PN2->getIncomingValueForBlock(PN->getIncomingBlock(0)),
+                   V2Size);
+      if (Alias == MayAlias)
+        return MayAlias;
+      for (unsigned i = 1, e = PN->getNumIncomingValues(); i != e; ++i) {
+        AliasResult ThisAlias =
+          aliasCheck(PN->getIncomingValue(i), PNSize,
+                     PN2->getIncomingValueForBlock(PN->getIncomingBlock(i)),
+                     V2Size);
+        if (ThisAlias != Alias)
+          return MayAlias;
+      }
+      return Alias;
+    }
+
+  SmallPtrSet<Value*, 4> UniqueSrc;
+  SmallVector<Value*, 4> V1Srcs;
+  for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
+    Value *PV1 = PN->getIncomingValue(i);
+    if (isa<PHINode>(PV1))
+      // If any of the sources is itself a PHI, return MayAlias conservatively
+      // to avoid compile-time explosion. The worst possible case is if both
+      // sides are PHI nodes, in which case this is O(m x n) time, where 'm'
+      // and 'n' are the numbers of PHI sources.
+      return MayAlias;
+    if (UniqueSrc.insert(PV1))
+      V1Srcs.push_back(PV1);
+  }
+
+  AliasResult Alias = aliasCheck(V2, V2Size, V1Srcs[0], PNSize);
+  // Early exit if the check of the first PHI source against V2 is MayAlias.
+  // Other results are not possible.
+  if (Alias == MayAlias)
+    return MayAlias;
 
-// alias - Provide a bunch of ad-hoc rules to disambiguate in common cases, such
-// as array references.
+  // If all sources of the PHI node NoAlias or MustAlias V2, then return
+  // NoAlias / MustAlias. Otherwise, return MayAlias.
+  for (unsigned i = 1, e = V1Srcs.size(); i != e; ++i) {
+    Value *V = V1Srcs[i];
+
+    // If V2 is a PHI, the recursive case will have been caught in the
+    // above aliasCheck call, so these subsequent calls to aliasCheck
+    // don't need to assume that V2 is being visited recursively.
+    VisitedPHIs.erase(V2);
+
+    AliasResult ThisAlias = aliasCheck(V2, V2Size, V, PNSize);
+    if (ThisAlias != Alias || ThisAlias == MayAlias)
+      return MayAlias;
+  }
+
+  return Alias;
+}
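+
+// The VisitedPHIs guard above, sketched with std::set (hedged illustration;
+// SmallPtrSet::insert in this era returns true only on first insertion,
+// which std::set expresses through the .second of insert's result).
+#include <set>
+
+static bool sketchEnterPHI(std::set<const void*> &Visited, const void *PN) {
+  // False means PN is already on the active path, so the caller should bail
+  // out with MayAlias rather than recurse forever.
+  return Visited.insert(PN).second;
+}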
+
+// aliasCheck - Provide a bunch of ad-hoc rules to disambiguate in common cases,
+// such as array references.
 //
 AliasAnalysis::AliasResult
-BasicAliasAnalysis::alias(const Value *V1, unsigned V1Size,
-                          const Value *V2, unsigned V2Size) {
+BasicAliasAnalysis::aliasCheck(const Value *V1, unsigned V1Size,
+                               const Value *V2, unsigned V2Size) {
   // Strip off any casts if they exist.
   V1 = V1->stripPointerCasts();
   V2 = V2->stripPointerCasts();
@@ -329,14 +649,28 @@ BasicAliasAnalysis::alias(const Value *V1, unsigned V1Size,
   const Value *O1 = V1->getUnderlyingObject();
   const Value *O2 = V2->getUnderlyingObject();
 
+  // Null values in the default address space don't point to any object, so they
+  // don't alias any other pointer.
+  if (const ConstantPointerNull *CPN = dyn_cast<ConstantPointerNull>(O1))
+    if (CPN->getType()->getAddressSpace() == 0)
+      return NoAlias;
+  if (const ConstantPointerNull *CPN = dyn_cast<ConstantPointerNull>(O2))
+    if (CPN->getType()->getAddressSpace() == 0)
+      return NoAlias;
+
   if (O1 != O2) {
     // If V1/V2 point to two different objects we know that we have no alias.
     if (isIdentifiedObject(O1) && isIdentifiedObject(O2))
       return NoAlias;
-  
+
+    // Constant pointers can't alias with non-const isIdentifiedObject objects.
+    if ((isa<Constant>(O1) && isIdentifiedObject(O2) && !isa<Constant>(O2)) ||
+        (isa<Constant>(O2) && isIdentifiedObject(O1) && !isa<Constant>(O1)))
+      return NoAlias;
+
     // Arguments can't alias with local allocations or noalias calls.
-    if ((isa<Argument>(O1) && (isa<AllocationInst>(O2) || isNoAliasCall(O2))) ||
-        (isa<Argument>(O2) && (isa<AllocationInst>(O1) || isNoAliasCall(O1))))
+    if ((isa<Argument>(O1) && (isa<AllocaInst>(O2) || isNoAliasCall(O2))) ||
+        (isa<Argument>(O2) && (isa<AllocaInst>(O1) || isNoAliasCall(O1))))
       return NoAlias;
 
     // Most objects can't alias null.
@@ -347,497 +681,53 @@ BasicAliasAnalysis::alias(const Value *V1, unsigned V1Size,
   
   // If the size of one access is larger than the entire object on the other
   // side, then we know such behavior is undefined and can assume no alias.
-  LLVMContext &Context = V1->getContext();
   if (TD)
-    if ((V1Size != ~0U && isObjectSmallerThan(O2, V1Size, Context, *TD)) ||
-        (V2Size != ~0U && isObjectSmallerThan(O1, V2Size, Context, *TD)))
+    if ((V1Size != ~0U && isObjectSmallerThan(O2, V1Size, *TD)) ||
+        (V2Size != ~0U && isObjectSmallerThan(O1, V2Size, *TD)))
       return NoAlias;
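+  // Hedged worked example: if O2 is a 4-byte alloca and the V1 access is
+  // known to touch 8 bytes, any overlap would run past the end of O2, which
+  // is undefined behavior, so NoAlias is a safe answer for the pair.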
   
-  // If one pointer is the result of a call/invoke and the other is a
+  // If one pointer is the result of a call/invoke or load and the other is a
   // non-escaping local object, then we know the object couldn't escape to a
-  // point where the call could return it.
-  if ((isa<CallInst>(O1) || isa<InvokeInst>(O1)) &&
-      isNonEscapingLocalObject(O2) && O1 != O2)
-    return NoAlias;
-  if ((isa<CallInst>(O2) || isa<InvokeInst>(O2)) &&
-      isNonEscapingLocalObject(O1) && O1 != O2)
-    return NoAlias;
-  
-  // If we have two gep instructions with must-alias'ing base pointers, figure
-  // out if the indexes to the GEP tell us anything about the derived pointer.
-  // Note that we also handle chains of getelementptr instructions as well as
-  // constant expression getelementptrs here.
-  //
-  if (isGEP(V1) && isGEP(V2)) {
-    const User *GEP1 = cast<User>(V1);
-    const User *GEP2 = cast<User>(V2);
-    
-    // If V1 and V2 are identical GEPs, just recurse down on both of them.
-    // This allows us to analyze things like:
-    //   P = gep A, 0, i, 1
-    //   Q = gep B, 0, i, 1
-    // by just analyzing A and B.  This is even safe for variable indices.
-    if (GEP1->getType() == GEP2->getType() &&
-        GEP1->getNumOperands() == GEP2->getNumOperands() &&
-        GEP1->getOperand(0)->getType() == GEP2->getOperand(0)->getType() &&
-        // All operands are the same, ignoring the base.
-        std::equal(GEP1->op_begin()+1, GEP1->op_end(), GEP2->op_begin()+1))
-      return alias(GEP1->getOperand(0), V1Size, GEP2->getOperand(0), V2Size);
-    
-    
-    // Drill down into the first non-gep value, to test for must-aliasing of
-    // the base pointers.
-    while (isGEP(GEP1->getOperand(0)) &&
-           GEP1->getOperand(1) ==
-           Constant::getNullValue(GEP1->getOperand(1)->getType()))
-      GEP1 = cast<User>(GEP1->getOperand(0));
-    const Value *BasePtr1 = GEP1->getOperand(0);
-
-    while (isGEP(GEP2->getOperand(0)) &&
-           GEP2->getOperand(1) ==
-           Constant::getNullValue(GEP2->getOperand(1)->getType()))
-      GEP2 = cast<User>(GEP2->getOperand(0));
-    const Value *BasePtr2 = GEP2->getOperand(0);
-
-    // Do the base pointers alias?
-    AliasResult BaseAlias = alias(BasePtr1, ~0U, BasePtr2, ~0U);
-    if (BaseAlias == NoAlias) return NoAlias;
-    if (BaseAlias == MustAlias) {
-      // If the base pointers alias each other exactly, check to see if we can
-      // figure out anything about the resultant pointers, to try to prove
-      // non-aliasing.
-
-      // Collect all of the chained GEP operands together into one simple place
-      SmallVector<Value*, 16> GEP1Ops, GEP2Ops;
-      BasePtr1 = GetGEPOperands(V1, GEP1Ops);
-      BasePtr2 = GetGEPOperands(V2, GEP2Ops);
-
-      // If GetGEPOperands were able to fold to the same must-aliased pointer,
-      // do the comparison.
-      if (BasePtr1 == BasePtr2) {
-        AliasResult GAlias =
-          CheckGEPInstructions(BasePtr1->getType(),
-                               &GEP1Ops[0], GEP1Ops.size(), V1Size,
-                               BasePtr2->getType(),
-                               &GEP2Ops[0], GEP2Ops.size(), V2Size);
-        if (GAlias != MayAlias)
-          return GAlias;
-      }
-    }
+  // point where the call could return it. The load case works because
+  // isNonEscapingLocalObject considers all stores to be escapes (it
+  // passes true for the StoreCaptures argument to PointerMayBeCaptured).
+  if (O1 != O2) {
+    if ((isa<CallInst>(O1) || isa<InvokeInst>(O1) || isa<LoadInst>(O1) ||
+         isa<Argument>(O1)) &&
+        isNonEscapingLocalObject(O2))
+      return NoAlias;
+    if ((isa<CallInst>(O2) || isa<InvokeInst>(O2) || isa<LoadInst>(O2) ||
+         isa<Argument>(O2)) &&
+        isNonEscapingLocalObject(O1))
+      return NoAlias;
   }
 
-  // Check to see if these two pointers are related by a getelementptr
-  // instruction.  If one pointer is a GEP with a non-zero index of the other
-  // pointer, we know they cannot alias.
-  //
-  if (isGEP(V2)) {
+  // FIXME: This isn't aggressively handling alias(GEP, PHI) for example: if the
+  // GEP can't simplify, we don't even look at the PHI cases.
+  if (!isa<GEPOperator>(V1) && isa<GEPOperator>(V2)) {
     std::swap(V1, V2);
     std::swap(V1Size, V2Size);
+    std::swap(O1, O2);
   }
+  if (const GEPOperator *GV1 = dyn_cast<GEPOperator>(V1))
+    return aliasGEP(GV1, V1Size, V2, V2Size, O1, O2);
 
-  if (V1Size != ~0U && V2Size != ~0U)
-    if (isGEP(V1)) {
-      SmallVector<Value*, 16> GEPOperands;
-      const Value *BasePtr = GetGEPOperands(V1, GEPOperands);
-
-      AliasResult R = alias(BasePtr, V1Size, V2, V2Size);
-      if (R == MustAlias) {
-        // If there is at least one non-zero constant index, we know they cannot
-        // alias.
-        bool ConstantFound = false;
-        bool AllZerosFound = true;
-        for (unsigned i = 0, e = GEPOperands.size(); i != e; ++i)
-          if (const Constant *C = dyn_cast<Constant>(GEPOperands[i])) {
-            if (!C->isNullValue()) {
-              ConstantFound = true;
-              AllZerosFound = false;
-              break;
-            }
-          } else {
-            AllZerosFound = false;
-          }
-
-        // If we have getelementptr <ptr>, 0, 0, 0, 0, ... and V2 must aliases
-        // the ptr, the end result is a must alias also.
-        if (AllZerosFound)
-          return MustAlias;
-
-        if (ConstantFound) {
-          if (V2Size <= 1 && V1Size <= 1)  // Just pointer check?
-            return NoAlias;
-
-          // Otherwise we have to check to see that the distance is more than
-          // the size of the argument... build an index vector that is equal to
-          // the arguments provided, except substitute 0's for any variable
-          // indexes we find...
-          if (TD && cast<PointerType>(
-                BasePtr->getType())->getElementType()->isSized()) {
-            for (unsigned i = 0; i != GEPOperands.size(); ++i)
-              if (!isa<ConstantInt>(GEPOperands[i]))
-                GEPOperands[i] =
-                  Constant::getNullValue(GEPOperands[i]->getType());
-            int64_t Offset =
-              TD->getIndexedOffset(BasePtr->getType(),
-                                   &GEPOperands[0],
-                                   GEPOperands.size());
-
-            if (Offset >= (int64_t)V2Size || Offset <= -(int64_t)V1Size)
-              return NoAlias;
-          }
-        }
-      }
-    }
-
-  return MayAlias;
-}
-
-// This function is used to determine if the indices of two GEP instructions are
-// equal. V1 and V2 are the indices.
-static bool IndexOperandsEqual(Value *V1, Value *V2, LLVMContext &Context) {
-  if (V1->getType() == V2->getType())
-    return V1 == V2;
-  if (Constant *C1 = dyn_cast<Constant>(V1))
-    if (Constant *C2 = dyn_cast<Constant>(V2)) {
-      // Sign extend the constants to long types, if necessary
-      if (C1->getType() != Type::getInt64Ty(Context))
-        C1 = ConstantExpr::getSExt(C1, Type::getInt64Ty(Context));
-      if (C2->getType() != Type::getInt64Ty(Context)) 
-        C2 = ConstantExpr::getSExt(C2, Type::getInt64Ty(Context));
-      return C1 == C2;
-    }
-  return false;
-}
-
-/// CheckGEPInstructions - Check two GEP instructions with known must-aliasing
-/// base pointers.  This checks to see if the index expressions preclude the
-/// pointers from aliasing...
-AliasAnalysis::AliasResult 
-BasicAliasAnalysis::CheckGEPInstructions(
-  const Type* BasePtr1Ty, Value **GEP1Ops, unsigned NumGEP1Ops, unsigned G1S,
-  const Type *BasePtr2Ty, Value **GEP2Ops, unsigned NumGEP2Ops, unsigned G2S) {
-  // We currently can't handle the case when the base pointers have different
-  // primitive types.  Since this is uncommon anyway, we are happy being
-  // extremely conservative.
-  if (BasePtr1Ty != BasePtr2Ty)
-    return MayAlias;
-
-  const PointerType *GEPPointerTy = cast<PointerType>(BasePtr1Ty);
-
-  LLVMContext &Context = GEPPointerTy->getContext();
-
-  // Find the (possibly empty) initial sequence of equal values... which are not
-  // necessarily constants.
-  unsigned NumGEP1Operands = NumGEP1Ops, NumGEP2Operands = NumGEP2Ops;
-  unsigned MinOperands = std::min(NumGEP1Operands, NumGEP2Operands);
-  unsigned MaxOperands = std::max(NumGEP1Operands, NumGEP2Operands);
-  unsigned UnequalOper = 0;
-  while (UnequalOper != MinOperands &&
-         IndexOperandsEqual(GEP1Ops[UnequalOper], GEP2Ops[UnequalOper],
-         Context)) {
-    // Advance through the type as we go...
-    ++UnequalOper;
-    if (const CompositeType *CT = dyn_cast<CompositeType>(BasePtr1Ty))
-      BasePtr1Ty = CT->getTypeAtIndex(GEP1Ops[UnequalOper-1]);
-    else {
-      // If all operands equal each other, then the derived pointers must
-      // alias each other...
-      BasePtr1Ty = 0;
-      assert(UnequalOper == NumGEP1Operands && UnequalOper == NumGEP2Operands &&
-             "Ran out of type nesting, but not out of operands?");
-      return MustAlias;
-    }
-  }
-
-  // If we have seen all constant operands, and run out of indexes on one of the
-  // getelementptrs, check to see if the tail of the leftover one is all zeros.
-  // If so, return mustalias.
-  if (UnequalOper == MinOperands) {
-    if (NumGEP1Ops < NumGEP2Ops) {
-      std::swap(GEP1Ops, GEP2Ops);
-      std::swap(NumGEP1Ops, NumGEP2Ops);
-    }
-
-    bool AllAreZeros = true;
-    for (unsigned i = UnequalOper; i != MaxOperands; ++i)
-      if (!isa<Constant>(GEP1Ops[i]) ||
-          !cast<Constant>(GEP1Ops[i])->isNullValue()) {
-        AllAreZeros = false;
-        break;
-      }
-    if (AllAreZeros) return MustAlias;
-  }
-
-
-  // So now we know that the indexes derived from the base pointers,
-  // which are known to alias, are different.  We can still determine a
-  // no-alias result if there are differing constant pairs in the index
-  // chain.  For example:
-  //        A[i][0] != A[j][1] iff (&A[0][1]-&A[0][0] >= std::max(G1S, G2S))
-  //
-  // We have to be careful here about array accesses.  In particular, consider:
-  //        A[1][0] vs A[0][i]
-  // In this case, we don't *know* that the array will be accessed in bounds:
-  // the index could even be negative.  Because of this, we have to
-  // conservatively *give up* and return may alias.  We disregard differing
-  // array subscripts that are followed by a variable index without going
-  // through a struct.
-  //
-  unsigned SizeMax = std::max(G1S, G2S);
-  if (SizeMax == ~0U) return MayAlias; // Avoid frivolous work.
-
-  // Scan for the first operand that is constant and unequal in the
-  // two getelementptrs...
-  unsigned FirstConstantOper = UnequalOper;
-  for (; FirstConstantOper != MinOperands; ++FirstConstantOper) {
-    const Value *G1Oper = GEP1Ops[FirstConstantOper];
-    const Value *G2Oper = GEP2Ops[FirstConstantOper];
-
-    if (G1Oper != G2Oper)   // Found non-equal constant indexes...
-      if (Constant *G1OC = dyn_cast<ConstantInt>(const_cast<Value*>(G1Oper)))
-        if (Constant *G2OC = dyn_cast<ConstantInt>(const_cast<Value*>(G2Oper))){
-          if (G1OC->getType() != G2OC->getType()) {
-            // Sign extend both operands to long.
-            if (G1OC->getType() != Type::getInt64Ty(Context))
-              G1OC = ConstantExpr::getSExt(G1OC, Type::getInt64Ty(Context));
-            if (G2OC->getType() != Type::getInt64Ty(Context)) 
-              G2OC = ConstantExpr::getSExt(G2OC, Type::getInt64Ty(Context));
-            GEP1Ops[FirstConstantOper] = G1OC;
-            GEP2Ops[FirstConstantOper] = G2OC;
-          }
-          
-          if (G1OC != G2OC) {
-            // Handle the "be careful" case above: if this is an array/vector
-            // subscript, scan for a subsequent variable array index.
-            if (const SequentialType *STy =
-                  dyn_cast<SequentialType>(BasePtr1Ty)) {
-              const Type *NextTy = STy;
-              bool isBadCase = false;
-              
-              for (unsigned Idx = FirstConstantOper;
-                   Idx != MinOperands && isa<SequentialType>(NextTy); ++Idx) {
-                const Value *V1 = GEP1Ops[Idx], *V2 = GEP2Ops[Idx];
-                if (!isa<Constant>(V1) || !isa<Constant>(V2)) {
-                  isBadCase = true;
-                  break;
-                }
-                // If the array is indexed beyond the bounds of the static type
-                // at this level, it will also fall into the "be careful" case.
-                // It would theoretically be possible to analyze these cases,
-                // but for now just be conservatively correct.
-                if (const ArrayType *ATy = dyn_cast<ArrayType>(STy))
-                  if (cast<ConstantInt>(G1OC)->getZExtValue() >=
-                        ATy->getNumElements() ||
-                      cast<ConstantInt>(G2OC)->getZExtValue() >=
-                        ATy->getNumElements()) {
-                    isBadCase = true;
-                    break;
-                  }
-                if (const VectorType *VTy = dyn_cast<VectorType>(STy))
-                  if (cast<ConstantInt>(G1OC)->getZExtValue() >=
-                        VTy->getNumElements() ||
-                      cast<ConstantInt>(G2OC)->getZExtValue() >=
-                        VTy->getNumElements()) {
-                    isBadCase = true;
-                    break;
-                  }
-                STy = cast<SequentialType>(NextTy);
-                NextTy = cast<SequentialType>(NextTy)->getElementType();
-              }
-              
-              if (isBadCase) G1OC = 0;
-            }
-
-            // Make sure they are comparable (ie, not constant expressions), and
-            // make sure the GEP with the smaller leading constant is GEP1.
-            if (G1OC) {
-              Constant *Compare = ConstantExpr::getICmp(ICmpInst::ICMP_SGT, 
-                                                        G1OC, G2OC);
-              if (ConstantInt *CV = dyn_cast<ConstantInt>(Compare)) {
-                if (CV->getZExtValue()) {  // If they are comparable and G2 > G1
-                  std::swap(GEP1Ops, GEP2Ops);  // Make GEP1 < GEP2
-                  std::swap(NumGEP1Ops, NumGEP2Ops);
-                }
-                break;
-              }
-            }
-          }
-        }
-    BasePtr1Ty = cast<CompositeType>(BasePtr1Ty)->getTypeAtIndex(G1Oper);
-  }
-
-  // No shared constant operands, and we ran out of common operands.  At this
-  // point, the GEP instructions have run through all of their operands, and we
-  // haven't found evidence that there are any deltas between the GEP's.
-  // However, one GEP may have more operands than the other.  If this is the
-  // case, there may still be hope.  Check this now.
-  if (FirstConstantOper == MinOperands) {
-    // Without TargetData, we won't know what the offsets are.
-    if (!TD)
-      return MayAlias;
-
-    // Make GEP1Ops be the longer one if there is a longer one.
-    if (NumGEP1Ops < NumGEP2Ops) {
-      std::swap(GEP1Ops, GEP2Ops);
-      std::swap(NumGEP1Ops, NumGEP2Ops);
-    }
-
-    // Is there anything to check?
-    if (NumGEP1Ops > MinOperands) {
-      for (unsigned i = FirstConstantOper; i != MaxOperands; ++i)
-        if (isa<ConstantInt>(GEP1Ops[i]) && 
-            !cast<ConstantInt>(GEP1Ops[i])->isZero()) {
-          // Yup, there's a constant in the tail.  Set all variables to
-          // constants in the GEP instruction to make it suitable for
-          // TargetData::getIndexedOffset.
-          for (i = 0; i != MaxOperands; ++i)
-            if (!isa<ConstantInt>(GEP1Ops[i]))
-              GEP1Ops[i] = Constant::getNullValue(GEP1Ops[i]->getType());
-          // Okay, now get the offset.  This is the relative offset for the full
-          // instruction.
-          int64_t Offset1 = TD->getIndexedOffset(GEPPointerTy, GEP1Ops,
-                                                 NumGEP1Ops);
-
-          // Now check without any constants at the end.
-          int64_t Offset2 = TD->getIndexedOffset(GEPPointerTy, GEP1Ops,
-                                                 MinOperands);
-
-          // Make sure we compare the absolute difference.
-          if (Offset1 > Offset2)
-            std::swap(Offset1, Offset2);
-
-          // If the tail provided a bit enough offset, return noalias!
-          if ((uint64_t)(Offset2-Offset1) >= SizeMax)
-            return NoAlias;
-          // Otherwise break - we don't look for another constant in the tail.
-          break;
-        }
-    }
-
-    // Couldn't find anything useful.
-    return MayAlias;
-  }
-
-  // If there are non-equal constants arguments, then we can figure
-  // out a minimum known delta between the two index expressions... at
-  // this point we know that the first constant index of GEP1 is less
-  // than the first constant index of GEP2.
-
-  // Advance BasePtr[12]Ty over this first differing constant operand.
-  BasePtr2Ty = cast<CompositeType>(BasePtr1Ty)->
-      getTypeAtIndex(GEP2Ops[FirstConstantOper]);
-  BasePtr1Ty = cast<CompositeType>(BasePtr1Ty)->
-      getTypeAtIndex(GEP1Ops[FirstConstantOper]);
-
-  // We are going to be using TargetData::getIndexedOffset to determine the
-  // offset that each of the GEP's is reaching.  To do this, we have to convert
-  // all variable references to constant references.  To do this, we convert the
-  // initial sequence of array subscripts into constant zeros to start with.
-  const Type *ZeroIdxTy = GEPPointerTy;
-  for (unsigned i = 0; i != FirstConstantOper; ++i) {
-    if (!isa<StructType>(ZeroIdxTy))
-      GEP1Ops[i] = GEP2Ops[i] = 
-                              Constant::getNullValue(Type::getInt32Ty(Context));
-
-    if (const CompositeType *CT = dyn_cast<CompositeType>(ZeroIdxTy))
-      ZeroIdxTy = CT->getTypeAtIndex(GEP1Ops[i]);
+  if (isa<PHINode>(V2) && !isa<PHINode>(V1)) {
+    std::swap(V1, V2);
+    std::swap(V1Size, V2Size);
   }
+  if (const PHINode *PN = dyn_cast<PHINode>(V1))
+    return aliasPHI(PN, V1Size, V2, V2Size);
 
-  // We know that GEP1Ops[FirstConstantOper] & GEP2Ops[FirstConstantOper] are ok
-
-  // Loop over the rest of the operands...
-  for (unsigned i = FirstConstantOper+1; i != MaxOperands; ++i) {
-    const Value *Op1 = i < NumGEP1Ops ? GEP1Ops[i] : 0;
-    const Value *Op2 = i < NumGEP2Ops ? GEP2Ops[i] : 0;
-    // If they are equal, use a zero index...
-    if (Op1 == Op2 && BasePtr1Ty == BasePtr2Ty) {
-      if (!isa<ConstantInt>(Op1))
-        GEP1Ops[i] = GEP2Ops[i] = Constant::getNullValue(Op1->getType());
-      // Otherwise, just keep the constants we have.
-    } else {
-      if (Op1) {
-        if (const ConstantInt *Op1C = dyn_cast<ConstantInt>(Op1)) {
-          // If this is an array index, make sure the array element is in range.
-          if (const ArrayType *AT = dyn_cast<ArrayType>(BasePtr1Ty)) {
-            if (Op1C->getZExtValue() >= AT->getNumElements())
-              return MayAlias;  // Be conservative with out-of-range accesses
-          } else if (const VectorType *VT = dyn_cast<VectorType>(BasePtr1Ty)) {
-            if (Op1C->getZExtValue() >= VT->getNumElements())
-              return MayAlias;  // Be conservative with out-of-range accesses
-          }
-          
-        } else {
-          // GEP1 is known to produce a value less than GEP2.  To be
-          // conservatively correct, we must assume the largest possible
-          // constant is used in this position.  This cannot be the initial
-          // index to the GEP instructions (because we know we have at least one
-          // element before this one with the different constant arguments), so
-          // we know that the current index must be into either a struct or
-          // array.  Because we know it's not constant, this cannot be a
-          // structure index.  Because of this, we can calculate the maximum
-          // value possible.
-          //
-          if (const ArrayType *AT = dyn_cast<ArrayType>(BasePtr1Ty))
-            GEP1Ops[i] =
-                  ConstantInt::get(Type::getInt64Ty(Context), 
-                                   AT->getNumElements()-1);
-          else if (const VectorType *VT = dyn_cast<VectorType>(BasePtr1Ty))
-            GEP1Ops[i] = 
-                  ConstantInt::get(Type::getInt64Ty(Context),
-                                   VT->getNumElements()-1);
-        }
-      }
-
-      if (Op2) {
-        if (const ConstantInt *Op2C = dyn_cast<ConstantInt>(Op2)) {
-          // If this is an array index, make sure the array element is in range.
-          if (const ArrayType *AT = dyn_cast<ArrayType>(BasePtr2Ty)) {
-            if (Op2C->getZExtValue() >= AT->getNumElements())
-              return MayAlias;  // Be conservative with out-of-range accesses
-          } else if (const VectorType *VT = dyn_cast<VectorType>(BasePtr2Ty)) {
-            if (Op2C->getZExtValue() >= VT->getNumElements())
-              return MayAlias;  // Be conservative with out-of-range accesses
-          }
-        } else {  // Conservatively assume the minimum value for this index
-          GEP2Ops[i] = Constant::getNullValue(Op2->getType());
-        }
-      }
-    }
-
-    if (BasePtr1Ty && Op1) {
-      if (const CompositeType *CT = dyn_cast<CompositeType>(BasePtr1Ty))
-        BasePtr1Ty = CT->getTypeAtIndex(GEP1Ops[i]);
-      else
-        BasePtr1Ty = 0;
-    }
-
-    if (BasePtr2Ty && Op2) {
-      if (const CompositeType *CT = dyn_cast<CompositeType>(BasePtr2Ty))
-        BasePtr2Ty = CT->getTypeAtIndex(GEP2Ops[i]);
-      else
-        BasePtr2Ty = 0;
-    }
+  if (isa<SelectInst>(V2) && !isa<SelectInst>(V1)) {
+    std::swap(V1, V2);
+    std::swap(V1Size, V2Size);
   }
+  if (const SelectInst *S1 = dyn_cast<SelectInst>(V1))
+    return aliasSelect(S1, V1Size, V2, V2Size);
 
-  if (TD && GEPPointerTy->getElementType()->isSized()) {
-    int64_t Offset1 =
-      TD->getIndexedOffset(GEPPointerTy, GEP1Ops, NumGEP1Ops);
-    int64_t Offset2 = 
-      TD->getIndexedOffset(GEPPointerTy, GEP2Ops, NumGEP2Ops);
-    assert(Offset1 != Offset2 &&
-           "There is at least one different constant here!");
-    
-    // Make sure we compare the absolute difference.
-    if (Offset1 > Offset2)
-      std::swap(Offset1, Offset2);
-    
-    if ((uint64_t)(Offset2-Offset1) >= SizeMax) {
-      //cerr << "Determined that these two GEP's don't alias ["
-      //     << SizeMax << " bytes]: \n" << *GEP1 << *GEP2;
-      return NoAlias;
-    }
-  }
   return MayAlias;
 }
 
-// Make sure that anything that uses AliasAnalysis pulls in this file...
+// Make sure that anything that uses AliasAnalysis pulls in this file.
 DEFINING_FILE_FOR(BasicAliasAnalysis)
diff --git a/libclamav/c++/llvm/lib/Analysis/CFGPrinter.cpp b/libclamav/c++/llvm/lib/Analysis/CFGPrinter.cpp
index 6fed400..e06704b 100644
--- a/libclamav/c++/llvm/lib/Analysis/CFGPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/CFGPrinter.cpp
@@ -17,73 +17,13 @@
 //
 //===----------------------------------------------------------------------===//
 
-#include "llvm/Function.h"
-#include "llvm/Instructions.h"
-#include "llvm/Pass.h"
 #include "llvm/Analysis/CFGPrinter.h"
-#include "llvm/Assembly/Writer.h"
-#include "llvm/Support/CFG.h"
-#include "llvm/Support/Compiler.h"
-#include "llvm/Support/GraphWriter.h"
-using namespace llvm;
-
-namespace llvm {
-template<>
-struct DOTGraphTraits<const Function*> : public DefaultDOTGraphTraits {
-  static std::string getGraphName(const Function *F) {
-    return "CFG for '" + F->getNameStr() + "' function";
-  }
-
-  static std::string getNodeLabel(const BasicBlock *Node,
-                                  const Function *Graph,
-                                  bool ShortNames) {
-    if (ShortNames && !Node->getName().empty())
-      return Node->getNameStr() + ":";
-
-    std::string Str;
-    raw_string_ostream OS(Str);
-
-    if (ShortNames) {
-      WriteAsOperand(OS, Node, false);
-      return OS.str();
-    }
 
-    if (Node->getName().empty()) {
-      WriteAsOperand(OS, Node, false);
-      OS << ":";
-    }
-    
-    OS << *Node;
-    std::string OutStr = OS.str();
-    if (OutStr[0] == '\n') OutStr.erase(OutStr.begin());
-
-    // Process string output to make it nicer...
-    for (unsigned i = 0; i != OutStr.length(); ++i)
-      if (OutStr[i] == '\n') {                            // Left justify
-        OutStr[i] = '\\';
-        OutStr.insert(OutStr.begin()+i+1, 'l');
-      } else if (OutStr[i] == ';') {                      // Delete comments!
-        unsigned Idx = OutStr.find('\n', i+1);            // Find end of line
-        OutStr.erase(OutStr.begin()+i, OutStr.begin()+Idx);
-        --i;
-      }
-
-    return OutStr;
-  }
-
-  static std::string getEdgeSourceLabel(const BasicBlock *Node,
-                                        succ_const_iterator I) {
-    // Label source of conditional branches with "T" or "F"
-    if (const BranchInst *BI = dyn_cast<BranchInst>(Node->getTerminator()))
-      if (BI->isConditional())
-        return (I == succ_begin(Node)) ? "T" : "F";
-    return "";
-  }
-};
-}
+#include "llvm/Pass.h"
+using namespace llvm;
 
 namespace {
-  struct VISIBILITY_HIDDEN CFGViewer : public FunctionPass {
+  struct CFGViewer : public FunctionPass {
     static char ID; // Pass identification, replacement for typeid
     CFGViewer() : FunctionPass(&ID) {}
 
@@ -105,7 +45,7 @@ static RegisterPass<CFGViewer>
 V0("view-cfg", "View CFG of function", false, true);
 
 namespace {
-  struct VISIBILITY_HIDDEN CFGOnlyViewer : public FunctionPass {
+  struct CFGOnlyViewer : public FunctionPass {
     static char ID; // Pass identification, replacement for typeid
     CFGOnlyViewer() : FunctionPass(&ID) {}
 
@@ -128,7 +68,7 @@ V1("view-cfg-only",
    "View CFG of function (with no function bodies)", false, true);
 
 namespace {
-  struct VISIBILITY_HIDDEN CFGPrinter : public FunctionPass {
+  struct CFGPrinter : public FunctionPass {
     static char ID; // Pass identification, replacement for typeid
     CFGPrinter() : FunctionPass(&ID) {}
     explicit CFGPrinter(void *pid) : FunctionPass(pid) {}
@@ -161,7 +101,7 @@ static RegisterPass<CFGPrinter>
 P1("dot-cfg", "Print CFG of function to 'dot' file", false, true);
 
 namespace {
-  struct VISIBILITY_HIDDEN CFGOnlyPrinter : public FunctionPass {
+  struct CFGOnlyPrinter : public FunctionPass {
     static char ID; // Pass identification, replacement for typeid
     CFGOnlyPrinter() : FunctionPass(&ID) {}
     explicit CFGOnlyPrinter(void *pid) : FunctionPass(pid) {}
diff --git a/libclamav/c++/llvm/lib/Analysis/CMakeLists.txt b/libclamav/c++/llvm/lib/Analysis/CMakeLists.txt
index d233322..0a83c3d 100644
--- a/libclamav/c++/llvm/lib/Analysis/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Analysis/CMakeLists.txt
@@ -11,18 +11,21 @@ add_llvm_library(LLVMAnalysis
   ConstantFolding.cpp
   DbgInfoPrinter.cpp
   DebugInfo.cpp
+  DomPrinter.cpp
   IVUsers.cpp
+  InlineCost.cpp
   InstCount.cpp
+  InstructionSimplify.cpp
   Interval.cpp
   IntervalPartition.cpp
+  LazyValueInfo.cpp
   LibCallAliasAnalysis.cpp
   LibCallSemantics.cpp
   LiveValues.cpp
   LoopDependenceAnalysis.cpp
   LoopInfo.cpp
   LoopPass.cpp
-  LoopVR.cpp
-  MallocHelper.cpp
+  MemoryBuiltins.cpp
   MemoryDependenceAnalysis.cpp
   PointerTracking.cpp
   PostDominators.cpp
diff --git a/libclamav/c++/llvm/lib/Analysis/CaptureTracking.cpp b/libclamav/c++/llvm/lib/Analysis/CaptureTracking.cpp
index b30ac71..a276c64 100644
--- a/libclamav/c++/llvm/lib/Analysis/CaptureTracking.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/CaptureTracking.cpp
@@ -19,6 +19,7 @@
 #include "llvm/Analysis/CaptureTracking.h"
 #include "llvm/Instructions.h"
 #include "llvm/Value.h"
+#include "llvm/Analysis/AliasAnalysis.h"
 #include "llvm/ADT/SmallSet.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/Support/CallSite.h"
@@ -28,8 +29,11 @@ using namespace llvm;
 /// by the enclosing function (which is required to exist).  This routine can
 /// be expensive, so consider caching the results.  The boolean ReturnCaptures
 /// specifies whether returning the value (or part of it) from the function
+/// counts as capturing it or not.  The boolean StoreCaptures specifies whether
+/// storing the value (or part of it) into memory anywhere automatically
 /// counts as capturing it or not.
-bool llvm::PointerMayBeCaptured(const Value *V, bool ReturnCaptures) {
+bool llvm::PointerMayBeCaptured(const Value *V,
+                                bool ReturnCaptures, bool StoreCaptures) {
   assert(isa<PointerType>(V->getType()) && "Capture is for pointers only!");
   SmallVector<Use*, 16> Worklist;
   SmallSet<Use*, 16> Visited;
@@ -53,8 +57,7 @@ bool llvm::PointerMayBeCaptured(const Value *V, bool ReturnCaptures) {
       // Not captured if the callee is readonly, doesn't return a copy through
       // its return value and doesn't unwind (a readonly function can leak bits
       // by throwing an exception or not depending on the input value).
-      if (CS.onlyReadsMemory() && CS.doesNotThrow() &&
-          I->getType() == Type::getVoidTy(V->getContext()))
+      if (CS.onlyReadsMemory() && CS.doesNotThrow() && I->getType()->isVoidTy())
         break;
 
       // Not captured if only passed via 'nocapture' arguments.  Note that
@@ -73,9 +76,6 @@ bool llvm::PointerMayBeCaptured(const Value *V, bool ReturnCaptures) {
       // captured.
       break;
     }
-    case Instruction::Free:
-      // Freeing a pointer does not cause it to be captured.
-      break;
     case Instruction::Load:
       // Loading from a pointer does not cause it to be captured.
       break;
@@ -85,7 +85,11 @@ bool llvm::PointerMayBeCaptured(const Value *V, bool ReturnCaptures) {
       break;
     case Instruction::Store:
       if (V == I->getOperand(0))
-        // Stored the pointer - it may be captured.
+        // Stored the pointer - conservatively assume it may be captured.
+        // TODO: If StoreCaptures is not true, we could do fancy analysis
+        // to determine whether this store is not actually an escape point.
+        // In that case, BasicAliasAnalysis should be updated as well to
+        // take advantage of this.
         return true;
       // Storing to the pointee does not cause the pointer to be captured.
       break;
@@ -101,6 +105,18 @@ bool llvm::PointerMayBeCaptured(const Value *V, bool ReturnCaptures) {
           Worklist.push_back(U);
       }
       break;
+    case Instruction::ICmp:
+      // Don't count comparisons of a no-alias return value against null as
+      // captures. This allows us to ignore comparisons of malloc results
+      // with null, for example.
+      if (isNoAliasCall(V->stripPointerCasts()))
+        if (ConstantPointerNull *CPN =
+              dyn_cast<ConstantPointerNull>(I->getOperand(1)))
+          if (CPN->getType()->getAddressSpace() == 0)
+            break;
+      // Otherwise, be conservative. There are crazy ways to capture pointers
+      // using comparisons.
+      return true;
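+      // (Illustration of the case above, hedged and not from the patch:
+      // IR such as
+      //   %m = call noalias i8* @malloc(i32 %n)
+      //   %isnull = icmp eq i8* %m, null
+      // is not treated as capturing %m, while any other comparison of the
+      // pointer conservatively counts as a capture.)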
     default:
       // Something else - be conservative and say it is captured.
       return true;
diff --git a/libclamav/c++/llvm/lib/Analysis/ConstantFolding.cpp b/libclamav/c++/llvm/lib/Analysis/ConstantFolding.cpp
index f911b66..8d60907 100644
--- a/libclamav/c++/llvm/lib/Analysis/ConstantFolding.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ConstantFolding.cpp
@@ -23,10 +23,10 @@
 #include "llvm/GlobalVariable.h"
 #include "llvm/Instructions.h"
 #include "llvm/Intrinsics.h"
-#include "llvm/LLVMContext.h"
+#include "llvm/Analysis/ValueTracking.h"
+#include "llvm/Target/TargetData.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/StringMap.h"
-#include "llvm/Target/TargetData.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/GetElementPtrTypeIterator.h"
 #include "llvm/Support/MathExtras.h"
@@ -38,6 +38,138 @@ using namespace llvm;
 // Constant Folding internal helper functions
 //===----------------------------------------------------------------------===//
 
+/// FoldBitCast - Constant fold bitcast, symbolically evaluating it with 
+/// TargetData.  This always returns a non-null constant, but it may be a
+/// ConstantExpr if unfoldable.
+static Constant *FoldBitCast(Constant *C, const Type *DestTy,
+                             const TargetData &TD) {
+  
+  // This only handles casts to vectors currently.
+  const VectorType *DestVTy = dyn_cast<VectorType>(DestTy);
+  if (DestVTy == 0)
+    return ConstantExpr::getBitCast(C, DestTy);
+  
+  // If this is a scalar -> vector cast, convert the input into a <1 x scalar>
+  // vector so the code below can handle it uniformly.
+  if (isa<ConstantFP>(C) || isa<ConstantInt>(C)) {
+    Constant *Ops = C; // don't take the address of C!
+    return FoldBitCast(ConstantVector::get(&Ops, 1), DestTy, TD);
+  }
+  
+  // If this is a bitcast from constant vector -> vector, fold it.
+  ConstantVector *CV = dyn_cast<ConstantVector>(C);
+  if (CV == 0)
+    return ConstantExpr::getBitCast(C, DestTy);
+  
+  // If the element types match, VMCore can fold it.
+  unsigned NumDstElt = DestVTy->getNumElements();
+  unsigned NumSrcElt = CV->getNumOperands();
+  if (NumDstElt == NumSrcElt)
+    return ConstantExpr::getBitCast(C, DestTy);
+  
+  const Type *SrcEltTy = CV->getType()->getElementType();
+  const Type *DstEltTy = DestVTy->getElementType();
+  
+  // Otherwise, we're changing the number of elements in a vector, which 
+  // requires endianness information to do the right thing.  For example,
+  //    bitcast (<2 x i64> <i64 0, i64 1> to <4 x i32>)
+  // folds to (little endian):
+  //    <4 x i32> <i32 0, i32 0, i32 1, i32 0>
+  // and to (big endian):
+  //    <4 x i32> <i32 0, i32 0, i32 0, i32 1>
+  
+  // First things first.  We only want to think about integers here, so if
+  // we have something in FP form, recast it as integer.
+  if (DstEltTy->isFloatingPoint()) {
+    // Fold to a vector of integers with the same size as our FP type.
+    unsigned FPWidth = DstEltTy->getPrimitiveSizeInBits();
+    const Type *DestIVTy =
+      VectorType::get(IntegerType::get(C->getContext(), FPWidth), NumDstElt);
+    // Recursively handle this integer conversion, if possible.
+    C = FoldBitCast(C, DestIVTy, TD);
+    if (!C) return ConstantExpr::getBitCast(C, DestTy);
+    
+    // Finally, VMCore can handle this now that #elts line up.
+    return ConstantExpr::getBitCast(C, DestTy);
+  }
+  
+  // Okay, we know the destination is integer; if the input is FP, convert
+  // it to integer first.
+  if (SrcEltTy->isFloatingPoint()) {
+    unsigned FPWidth = SrcEltTy->getPrimitiveSizeInBits();
+    const Type *SrcIVTy =
+      VectorType::get(IntegerType::get(C->getContext(), FPWidth), NumSrcElt);
+    // Ask VMCore to do the conversion now that #elts line up.
+    C = ConstantExpr::getBitCast(C, SrcIVTy);
+    CV = dyn_cast<ConstantVector>(C);
+    if (!CV)  // If VMCore wasn't able to fold it, bail out.
+      return C;
+  }
+  
+  // Now we know that the input and output vectors are both integer vectors
+  // of the same size, and that their #elements is not the same.  Do the
+  // conversion here, which depends on whether the input or output has
+  // more elements.
+  bool isLittleEndian = TD.isLittleEndian();
+  
+  SmallVector<Constant*, 32> Result;
+  if (NumDstElt < NumSrcElt) {
+    // Handle: bitcast (<4 x i32> <i32 0, i32 1, i32 2, i32 3> to <2 x i64>)
+    Constant *Zero = Constant::getNullValue(DstEltTy);
+    unsigned Ratio = NumSrcElt/NumDstElt;
+    unsigned SrcBitSize = SrcEltTy->getPrimitiveSizeInBits();
+    unsigned SrcElt = 0;
+    for (unsigned i = 0; i != NumDstElt; ++i) {
+      // Build each element of the result.
+      Constant *Elt = Zero;
+      unsigned ShiftAmt = isLittleEndian ? 0 : SrcBitSize*(Ratio-1);
+      for (unsigned j = 0; j != Ratio; ++j) {
+        Constant *Src = dyn_cast<ConstantInt>(CV->getOperand(SrcElt++));
+        if (!Src)  // Reject constantexpr elements.
+          return ConstantExpr::getBitCast(C, DestTy);
+        
+        // Zero extend the element to the right size.
+        Src = ConstantExpr::getZExt(Src, Elt->getType());
+        
+        // Shift it to the right place, depending on endianness.
+        Src = ConstantExpr::getShl(Src, 
+                                   ConstantInt::get(Src->getType(), ShiftAmt));
+        ShiftAmt += isLittleEndian ? SrcBitSize : -SrcBitSize;
+        
+        // Mix it in.
+        Elt = ConstantExpr::getOr(Elt, Src);
+      }
+      Result.push_back(Elt);
+    }
+  } else {
+    // Handle: bitcast (<2 x i64> <i64 0, i64 1> to <4 x i32>)
+    unsigned Ratio = NumDstElt/NumSrcElt;
+    unsigned DstBitSize = DstEltTy->getPrimitiveSizeInBits();
+    
+    // Loop over each source value, expanding into multiple results.
+    for (unsigned i = 0; i != NumSrcElt; ++i) {
+      Constant *Src = dyn_cast<ConstantInt>(CV->getOperand(i));
+      if (!Src)  // Reject constantexpr elements.
+        return ConstantExpr::getBitCast(C, DestTy);
+      
+      unsigned ShiftAmt = isLittleEndian ? 0 : DstBitSize*(Ratio-1);
+      for (unsigned j = 0; j != Ratio; ++j) {
+        // Shift the piece of the value into the right place, depending on
+        // endianness.
+        Constant *Elt = ConstantExpr::getLShr(Src, 
+                                    ConstantInt::get(Src->getType(), ShiftAmt));
+        ShiftAmt += isLittleEndian ? DstBitSize : -DstBitSize;
+        
+        // Truncate and remember this piece.
+        Result.push_back(ConstantExpr::getTrunc(Elt, DstEltTy));
+      }
+    }
+  }
+  
+  return ConstantVector::get(Result.data(), Result.size());
+}
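+
+// A standalone sketch of the widening direction above (hedged: plain
+// fixed-width arithmetic with a hypothetical helper name).  Packing two
+// 32-bit lanes into one 64-bit lane mirrors the NumDstElt < NumSrcElt loop:
+// on little endian the first source lane lands in the low bits, on big
+// endian in the high bits.
+#include <cstdint>
+
+static uint64_t sketchPackTwoLanes(uint32_t Lane0, uint32_t Lane1,
+                                   bool IsLittleEndian) {
+  uint64_t Result = 0;
+  unsigned ShiftAmt = IsLittleEndian ? 0 : 32;  // First lane's bit position.
+  Result |= (uint64_t)Lane0 << ShiftAmt;
+  ShiftAmt = IsLittleEndian ? 32 : 0;           // Second lane's position.
+  Result |= (uint64_t)Lane1 << ShiftAmt;
+  return Result;
+}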
+
+
 /// IsConstantOffsetFromGlobal - If this constant is actually a constant offset
 /// from a global, return the global and the constant.  Because of
 /// constantexprs, this function is recursive.
@@ -92,14 +224,275 @@ static bool IsConstantOffsetFromGlobal(Constant *C, GlobalValue *&GV,
   return false;
 }
 
+/// ReadDataFromGlobal - Recursive helper to read bits out of a global.  C is
+/// the constant being copied out of.  ByteOffset is an offset into C.  CurPtr
+/// is the pointer to copy results into, and BytesLeft is the number of bytes
+/// left in the CurPtr buffer.  TD is the target data.
+static bool ReadDataFromGlobal(Constant *C, uint64_t ByteOffset,
+                               unsigned char *CurPtr, unsigned BytesLeft,
+                               const TargetData &TD) {
+  assert(ByteOffset <= TD.getTypeAllocSize(C->getType()) &&
+         "Out of range access");
+  
+  // If this element is zero or undefined, we can just return since *CurPtr is
+  // zero initialized.
+  if (isa<ConstantAggregateZero>(C) || isa<UndefValue>(C))
+    return true;
+  
+  if (ConstantInt *CI = dyn_cast<ConstantInt>(C)) {
+    if (CI->getBitWidth() > 64 ||
+        (CI->getBitWidth() & 7) != 0)
+      return false;
+    
+    uint64_t Val = CI->getZExtValue();
+    unsigned IntBytes = unsigned(CI->getBitWidth()/8);
+    
+    for (unsigned i = 0; i != BytesLeft && ByteOffset != IntBytes; ++i) {
+      CurPtr[i] = (unsigned char)(Val >> (ByteOffset * 8));
+      ++ByteOffset;
+    }
+    return true;
+  }
+  
+  if (ConstantFP *CFP = dyn_cast<ConstantFP>(C)) {
+    if (CFP->getType()->isDoubleTy()) {
+      C = FoldBitCast(C, Type::getInt64Ty(C->getContext()), TD);
+      return ReadDataFromGlobal(C, ByteOffset, CurPtr, BytesLeft, TD);
+    }
+    if (CFP->getType()->isFloatTy()){
+      C = FoldBitCast(C, Type::getInt32Ty(C->getContext()), TD);
+      return ReadDataFromGlobal(C, ByteOffset, CurPtr, BytesLeft, TD);
+    }
+    return false;
+  }
+
+  if (ConstantStruct *CS = dyn_cast<ConstantStruct>(C)) {
+    const StructLayout *SL = TD.getStructLayout(CS->getType());
+    unsigned Index = SL->getElementContainingOffset(ByteOffset);
+    uint64_t CurEltOffset = SL->getElementOffset(Index);
+    ByteOffset -= CurEltOffset;
+    
+    while (1) {
+      // If the element access is to the element itself and not to tail padding,
+      // read the bytes from the element.
+      uint64_t EltSize = TD.getTypeAllocSize(CS->getOperand(Index)->getType());
+
+      if (ByteOffset < EltSize &&
+          !ReadDataFromGlobal(CS->getOperand(Index), ByteOffset, CurPtr,
+                              BytesLeft, TD))
+        return false;
+      
+      ++Index;
+      
+      // Check to see if we read from the last struct element; if so, we're
+      // done.
+      if (Index == CS->getType()->getNumElements())
+        return true;
+
+      // If we read all of the bytes we needed from this element we're done.
+      uint64_t NextEltOffset = SL->getElementOffset(Index);
+
+      if (BytesLeft <= NextEltOffset-CurEltOffset-ByteOffset)
+        return true;
+
+      // Move to the next element of the struct.
+      CurPtr += NextEltOffset-CurEltOffset-ByteOffset;
+      BytesLeft -= NextEltOffset-CurEltOffset-ByteOffset;
+      ByteOffset = 0;
+      CurEltOffset = NextEltOffset;
+    }
+    // not reached.
+  }
+
+  if (ConstantArray *CA = dyn_cast<ConstantArray>(C)) {
+    uint64_t EltSize = TD.getTypeAllocSize(CA->getType()->getElementType());
+    uint64_t Index = ByteOffset / EltSize;
+    uint64_t Offset = ByteOffset - Index * EltSize;
+    for (; Index != CA->getType()->getNumElements(); ++Index) {
+      if (!ReadDataFromGlobal(CA->getOperand(Index), Offset, CurPtr,
+                              BytesLeft, TD))
+        return false;
+      if (EltSize >= BytesLeft)
+        return true;
+      
+      Offset = 0;
+      BytesLeft -= EltSize;
+      CurPtr += EltSize;
+    }
+    return true;
+  }
+  
+  if (ConstantVector *CV = dyn_cast<ConstantVector>(C)) {
+    uint64_t EltSize = TD.getTypeAllocSize(CV->getType()->getElementType());
+    uint64_t Index = ByteOffset / EltSize;
+    uint64_t Offset = ByteOffset - Index * EltSize;
+    for (; Index != CV->getType()->getNumElements(); ++Index) {
+      if (!ReadDataFromGlobal(CV->getOperand(Index), Offset, CurPtr,
+                              BytesLeft, TD))
+        return false;
+      if (EltSize >= BytesLeft)
+        return true;
+      
+      Offset = 0;
+      BytesLeft -= EltSize;
+      CurPtr += EltSize;
+    }
+    return true;
+  }
+  
+  // Otherwise, unknown initializer type.
+  return false;
+}
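+
+// The ConstantInt arm above, reduced to a standalone sketch (hedged:
+// hypothetical helper over a plain byte buffer).  Bytes are peeled off the
+// value least-significant first, starting at ByteOffset (i.e. a
+// little-endian image), copying only as many as fit.
+static void sketchReadIntBytes(uint64_t Val, unsigned IntBytes,
+                               uint64_t ByteOffset, unsigned char *CurPtr,
+                               unsigned BytesLeft) {
+  for (unsigned i = 0; i != BytesLeft && ByteOffset != IntBytes; ++i) {
+    CurPtr[i] = (unsigned char)(Val >> (ByteOffset * 8));
+    ++ByteOffset;
+  }
+}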
+
+static Constant *FoldReinterpretLoadFromConstPtr(Constant *C,
+                                                 const TargetData &TD) {
+  const Type *LoadTy = cast<PointerType>(C->getType())->getElementType();
+  const IntegerType *IntType = dyn_cast<IntegerType>(LoadTy);
+  
+  // If this isn't an integer load, we can't fold it directly.
+  if (!IntType) {
+    // If this is a float/double load, we can try folding it as an int32/64 load
+    // and then bitcast the result.  This can be useful for union cases.  Note
+    // that address spaces don't matter here since we're not going to produce
+    // an actual new load.
+    const Type *MapTy;
+    if (LoadTy->isFloatTy())
+      MapTy = Type::getInt32PtrTy(C->getContext());
+    else if (LoadTy->isDoubleTy())
+      MapTy = Type::getInt64PtrTy(C->getContext());
+    else if (isa<VectorType>(LoadTy)) {
+      MapTy = IntegerType::get(C->getContext(),
+                               TD.getTypeAllocSizeInBits(LoadTy));
+      MapTy = PointerType::getUnqual(MapTy);
+    } else
+      return 0;
+
+    C = FoldBitCast(C, MapTy, TD);
+    if (Constant *Res = FoldReinterpretLoadFromConstPtr(C, TD))
+      return FoldBitCast(Res, LoadTy, TD);
+    return 0;
+  }
+  
+  unsigned BytesLoaded = (IntType->getBitWidth() + 7) / 8;
+  if (BytesLoaded > 32 || BytesLoaded == 0) return 0;
+  
+  GlobalValue *GVal;
+  int64_t Offset;
+  if (!IsConstantOffsetFromGlobal(C, GVal, Offset, TD))
+    return 0;
+  
+  GlobalVariable *GV = dyn_cast<GlobalVariable>(GVal);
+  if (!GV || !GV->isConstant() || !GV->hasDefinitiveInitializer() ||
+      !GV->getInitializer()->getType()->isSized())
+    return 0;
+
+  // If we're loading from before the start of the global, some bytes may be
+  // valid, but we don't try to handle this case.
+  if (Offset < 0) return 0;
+  
+  // If we're not accessing anything in this constant, the result is undefined.
+  if (uint64_t(Offset) >= TD.getTypeAllocSize(GV->getInitializer()->getType()))
+    return UndefValue::get(IntType);
+  
+  unsigned char RawBytes[32] = {0};
+  if (!ReadDataFromGlobal(GV->getInitializer(), Offset, RawBytes,
+                          BytesLoaded, TD))
+    return 0;
+
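+  // Assemble the result in little-endian byte order: RawBytes[0] becomes the
+  // least significant byte.  E.g. RawBytes = {0x78,0x56,0x34,0x12} for an i32
+  // produces 0x12345678.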
+  APInt ResultVal(IntType->getBitWidth(), 0);
+  for (unsigned i = 0; i != BytesLoaded; ++i) {
+    ResultVal <<= 8;
+    ResultVal |= APInt(IntType->getBitWidth(), RawBytes[BytesLoaded-1-i]);
+  }
+
+  return ConstantInt::get(IntType->getContext(), ResultVal);
+}
+
+/// ConstantFoldLoadFromConstPtr - Return the value that a load from C would
+/// produce if it is constant and determinable.  If this is not determinable,
+/// return null.
+Constant *llvm::ConstantFoldLoadFromConstPtr(Constant *C,
+                                             const TargetData *TD) {
+  // First, try the easy cases:
+  if (GlobalVariable *GV = dyn_cast<GlobalVariable>(C))
+    if (GV->isConstant() && GV->hasDefinitiveInitializer())
+      return GV->getInitializer();
+
+  // If the loaded value isn't a constant expr, we can't handle it.
+  ConstantExpr *CE = dyn_cast<ConstantExpr>(C);
+  if (!CE) return 0;
+  
+  if (CE->getOpcode() == Instruction::GetElementPtr) {
+    if (GlobalVariable *GV = dyn_cast<GlobalVariable>(CE->getOperand(0)))
+      if (GV->isConstant() && GV->hasDefinitiveInitializer())
+        if (Constant *V = 
+             ConstantFoldLoadThroughGEPConstantExpr(GV->getInitializer(), CE))
+          return V;
+  }
+  
+  // Instead of loading a constant C string, use the corresponding integer
+  // value directly if the string is short enough.
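+  // E.g. on a little-endian target, an i32 load through a pointer to the
+  // constant string "abc" folds to 0x00636261: 'a' in the low byte, the
+  // implicit NUL terminator in the high byte.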
+  std::string Str;
+  if (TD && GetConstantStringInfo(CE->getOperand(0), Str) && !Str.empty()) {
+    unsigned StrLen = Str.length();
+    const Type *Ty = cast<PointerType>(CE->getType())->getElementType();
+    unsigned NumBits = Ty->getPrimitiveSizeInBits();
+    // Replace the load with an immediate integer constant.
+    if ((NumBits >> 3) == StrLen + 1) {
+      APInt StrVal(NumBits, 0);
+      APInt SingleChar(NumBits, 0);
+      if (TD->isLittleEndian()) {
+        for (signed i = StrLen-1; i >= 0; i--) {
+          SingleChar = (uint64_t) Str[i] & UCHAR_MAX;
+          StrVal = (StrVal << 8) | SingleChar;
+        }
+      } else {
+        for (unsigned i = 0; i < StrLen; i++) {
+          SingleChar = (uint64_t) Str[i] & UCHAR_MAX;
+          StrVal = (StrVal << 8) | SingleChar;
+        }
+        // Append the NUL terminator at the end.
+        SingleChar = 0;
+        StrVal = (StrVal << 8) | SingleChar;
+      }
+      return ConstantInt::get(CE->getContext(), StrVal);
+    }
+  }
+  
+  // If this load comes from anywhere in a constant global, and if the global
+  // is all undef or zero, we know what it loads.
+  if (GlobalVariable *GV = dyn_cast<GlobalVariable>(CE->getUnderlyingObject())){
+    if (GV->isConstant() && GV->hasDefinitiveInitializer()) {
+      const Type *ResTy = cast<PointerType>(C->getType())->getElementType();
+      if (GV->getInitializer()->isNullValue())
+        return Constant::getNullValue(ResTy);
+      if (isa<UndefValue>(GV->getInitializer()))
+        return UndefValue::get(ResTy);
+    }
+  }
+  
+  // Try hard to fold loads from bitcasted strange and non-type-safe things.  We
+  // currently don't do any of this for big endian systems.  It can be
+  // generalized in the future if someone is interested.
+  if (TD && TD->isLittleEndian())
+    return FoldReinterpretLoadFromConstPtr(CE, *TD);
+  return 0;
+}
+
+static Constant *ConstantFoldLoadInst(const LoadInst *LI, const TargetData *TD){
+  if (LI->isVolatile()) return 0;
+  
+  if (Constant *C = dyn_cast<Constant>(LI->getOperand(0)))
+    return ConstantFoldLoadFromConstPtr(C, TD);
+
+  return 0;
+}
 
 /// SymbolicallyEvaluateBinop - One of Op0/Op1 is a constant expression.
 /// Attempt to symbolically evaluate the result of a binary operator merging
 /// these together.  If target data info is available, it is provided as TD, 
 /// otherwise TD is null.
 static Constant *SymbolicallyEvaluateBinop(unsigned Opc, Constant *Op0,
-                                           Constant *Op1, const TargetData *TD,
-                                           LLVMContext &Context){
+                                           Constant *Op1, const TargetData *TD){
   // SROA
   
   // Fold (and 0xffffffff00000000, (shl x, 32)) -> shl.
@@ -126,15 +519,15 @@ static Constant *SymbolicallyEvaluateBinop(unsigned Opc, Constant *Op0,
 
 /// SymbolicallyEvaluateGEP - If we can symbolically evaluate the specified GEP
 /// constant expression, do so.
-static Constant *SymbolicallyEvaluateGEP(Constant* const* Ops, unsigned NumOps,
+static Constant *SymbolicallyEvaluateGEP(Constant *const *Ops, unsigned NumOps,
                                          const Type *ResultTy,
-                                         LLVMContext &Context,
                                          const TargetData *TD) {
   Constant *Ptr = Ops[0];
   if (!TD || !cast<PointerType>(Ptr->getType())->getElementType()->isSized())
     return 0;
 
-  unsigned BitWidth = TD->getTypeSizeInBits(TD->getIntPtrType(Context));
+  unsigned BitWidth =
+    TD->getTypeSizeInBits(TD->getIntPtrType(Ptr->getContext()));
   APInt BasePtr(BitWidth, 0);
   bool BaseIsInt = true;
   if (!Ptr->isNullValue()) {
@@ -163,7 +556,7 @@ static Constant *SymbolicallyEvaluateGEP(Constant* const* Ops, unsigned NumOps,
   // If the base value for this address is a literal integer value, fold the
   // getelementptr to the resulting integer value cast to the pointer type.
   if (BaseIsInt) {
-    Constant *C = ConstantInt::get(Context, Offset+BasePtr);
+    Constant *C = ConstantInt::get(Ptr->getContext(), Offset+BasePtr);
     return ConstantExpr::getIntToPtr(C, ResultTy);
   }
 
@@ -184,7 +577,8 @@ static Constant *SymbolicallyEvaluateGEP(Constant* const* Ops, unsigned NumOps,
         return 0;
       APInt NewIdx = Offset.udiv(ElemSize);
       Offset -= NewIdx * ElemSize;
-      NewIdxs.push_back(ConstantInt::get(TD->getIntPtrType(Context), NewIdx));
+      NewIdxs.push_back(ConstantInt::get(TD->getIntPtrType(Ty->getContext()),
+                                         NewIdx));
       Ty = ATy->getElementType();
     } else if (const StructType *STy = dyn_cast<StructType>(Ty)) {
       // Determine which field of the struct the offset points into. The
@@ -192,7 +586,8 @@ static Constant *SymbolicallyEvaluateGEP(Constant* const* Ops, unsigned NumOps,
       // know the offset is within the struct at this point.
       const StructLayout &SL = *TD->getStructLayout(STy);
       unsigned ElIdx = SL.getElementContainingOffset(Offset.getZExtValue());
-      NewIdxs.push_back(ConstantInt::get(Type::getInt32Ty(Context), ElIdx));
+      NewIdxs.push_back(ConstantInt::get(Type::getInt32Ty(Ty->getContext()),
+                                         ElIdx));
       Offset -= APInt(BitWidth, SL.getElementOffset(ElIdx));
       Ty = STy->getTypeAtIndex(ElIdx);
     } else {
@@ -216,126 +611,11 @@ static Constant *SymbolicallyEvaluateGEP(Constant* const* Ops, unsigned NumOps,
   // If we ended up indexing a member with a type that doesn't match
   // the type of what the original indices indexed, add a cast.
   if (Ty != cast<PointerType>(ResultTy)->getElementType())
-    C = ConstantExpr::getBitCast(C, ResultTy);
+    C = FoldBitCast(C, ResultTy, *TD);
 
   return C;
 }
 
-/// FoldBitCast - Constant fold bitcast, symbolically evaluating it with 
-/// targetdata.  Return 0 if unfoldable.
-static Constant *FoldBitCast(Constant *C, const Type *DestTy,
-                             const TargetData &TD, LLVMContext &Context) {
-  // If this is a bitcast from constant vector -> vector, fold it.
-  if (ConstantVector *CV = dyn_cast<ConstantVector>(C)) {
-    if (const VectorType *DestVTy = dyn_cast<VectorType>(DestTy)) {
-      // If the element types match, VMCore can fold it.
-      unsigned NumDstElt = DestVTy->getNumElements();
-      unsigned NumSrcElt = CV->getNumOperands();
-      if (NumDstElt == NumSrcElt)
-        return 0;
-      
-      const Type *SrcEltTy = CV->getType()->getElementType();
-      const Type *DstEltTy = DestVTy->getElementType();
-      
-      // Otherwise, we're changing the number of elements in a vector, which 
-      // requires endianness information to do the right thing.  For example,
-      //    bitcast (<2 x i64> <i64 0, i64 1> to <4 x i32>)
-      // folds to (little endian):
-      //    <4 x i32> <i32 0, i32 0, i32 1, i32 0>
-      // and to (big endian):
-      //    <4 x i32> <i32 0, i32 0, i32 0, i32 1>
-      
-      // First thing is first.  We only want to think about integer here, so if
-      // we have something in FP form, recast it as integer.
-      if (DstEltTy->isFloatingPoint()) {
-        // Fold to an vector of integers with same size as our FP type.
-        unsigned FPWidth = DstEltTy->getPrimitiveSizeInBits();
-        const Type *DestIVTy = VectorType::get(
-                                 IntegerType::get(Context, FPWidth), NumDstElt);
-        // Recursively handle this integer conversion, if possible.
-        C = FoldBitCast(C, DestIVTy, TD, Context);
-        if (!C) return 0;
-        
-        // Finally, VMCore can handle this now that #elts line up.
-        return ConstantExpr::getBitCast(C, DestTy);
-      }
-      
-      // Okay, we know the destination is integer, if the input is FP, convert
-      // it to integer first.
-      if (SrcEltTy->isFloatingPoint()) {
-        unsigned FPWidth = SrcEltTy->getPrimitiveSizeInBits();
-        const Type *SrcIVTy = VectorType::get(
-                                 IntegerType::get(Context, FPWidth), NumSrcElt);
-        // Ask VMCore to do the conversion now that #elts line up.
-        C = ConstantExpr::getBitCast(C, SrcIVTy);
-        CV = dyn_cast<ConstantVector>(C);
-        if (!CV) return 0;  // If VMCore wasn't able to fold it, bail out.
-      }
-      
-      // Now we know that the input and output vectors are both integer vectors
-      // of the same size, and that their #elements is not the same.  Do the
-      // conversion here, which depends on whether the input or output has
-      // more elements.
-      bool isLittleEndian = TD.isLittleEndian();
-      
-      SmallVector<Constant*, 32> Result;
-      if (NumDstElt < NumSrcElt) {
-        // Handle: bitcast (<4 x i32> <i32 0, i32 1, i32 2, i32 3> to <2 x i64>)
-        Constant *Zero = Constant::getNullValue(DstEltTy);
-        unsigned Ratio = NumSrcElt/NumDstElt;
-        unsigned SrcBitSize = SrcEltTy->getPrimitiveSizeInBits();
-        unsigned SrcElt = 0;
-        for (unsigned i = 0; i != NumDstElt; ++i) {
-          // Build each element of the result.
-          Constant *Elt = Zero;
-          unsigned ShiftAmt = isLittleEndian ? 0 : SrcBitSize*(Ratio-1);
-          for (unsigned j = 0; j != Ratio; ++j) {
-            Constant *Src = dyn_cast<ConstantInt>(CV->getOperand(SrcElt++));
-            if (!Src) return 0;  // Reject constantexpr elements.
-            
-            // Zero extend the element to the right size.
-            Src = ConstantExpr::getZExt(Src, Elt->getType());
-            
-            // Shift it to the right place, depending on endianness.
-            Src = ConstantExpr::getShl(Src, 
-                             ConstantInt::get(Src->getType(), ShiftAmt));
-            ShiftAmt += isLittleEndian ? SrcBitSize : -SrcBitSize;
-            
-            // Mix it in.
-            Elt = ConstantExpr::getOr(Elt, Src);
-          }
-          Result.push_back(Elt);
-        }
-      } else {
-        // Handle: bitcast (<2 x i64> <i64 0, i64 1> to <4 x i32>)
-        unsigned Ratio = NumDstElt/NumSrcElt;
-        unsigned DstBitSize = DstEltTy->getPrimitiveSizeInBits();
-        
-        // Loop over each source value, expanding into multiple results.
-        for (unsigned i = 0; i != NumSrcElt; ++i) {
-          Constant *Src = dyn_cast<ConstantInt>(CV->getOperand(i));
-          if (!Src) return 0;  // Reject constantexpr elements.
-
-          unsigned ShiftAmt = isLittleEndian ? 0 : DstBitSize*(Ratio-1);
-          for (unsigned j = 0; j != Ratio; ++j) {
-            // Shift the piece of the value into the right place, depending on
-            // endianness.
-            Constant *Elt = ConstantExpr::getLShr(Src, 
-                            ConstantInt::get(Src->getType(), ShiftAmt));
-            ShiftAmt += isLittleEndian ? DstBitSize : -DstBitSize;
-
-            // Truncate and remember this piece.
-            Result.push_back(ConstantExpr::getTrunc(Elt, DstEltTy));
-          }
-        }
-      }
-      
-      return ConstantVector::get(Result.data(), Result.size());
-    }
-  }
-  
-  return 0;
-}
 
 
 //===----------------------------------------------------------------------===//
@@ -348,8 +628,7 @@ static Constant *FoldBitCast(Constant *C, const Type *DestTy,
 /// is returned.  Note that this function can only fail when attempting to fold
 /// instructions like loads and stores, which have no constant expression form.
 ///
-Constant *llvm::ConstantFoldInstruction(Instruction *I, LLVMContext &Context,
-                                        const TargetData *TD) {
+Constant *llvm::ConstantFoldInstruction(Instruction *I, const TargetData *TD) {
   if (PHINode *PN = dyn_cast<PHINode>(I)) {
     if (PN->getNumIncomingValues() == 0)
       return UndefValue::get(PN->getType());
@@ -376,30 +655,35 @@ Constant *llvm::ConstantFoldInstruction(Instruction *I, LLVMContext &Context,
       return 0;  // All operands not constant!
 
   if (const CmpInst *CI = dyn_cast<CmpInst>(I))
-    return ConstantFoldCompareInstOperands(CI->getPredicate(),
-                                           Ops.data(), Ops.size(), 
-                                           Context, TD);
+    return ConstantFoldCompareInstOperands(CI->getPredicate(), Ops[0], Ops[1],
+                                           TD);
+  
+  if (const LoadInst *LI = dyn_cast<LoadInst>(I))
+    return ConstantFoldLoadInst(LI, TD);
   
   return ConstantFoldInstOperands(I->getOpcode(), I->getType(),
-                                  Ops.data(), Ops.size(), Context, TD);
+                                  Ops.data(), Ops.size(), TD);
 }
 
 /// ConstantFoldConstantExpression - Attempt to fold the constant expression
 /// using the specified TargetData.  If successful, the constant result is
 /// returned; if not, null is returned.
 Constant *llvm::ConstantFoldConstantExpression(ConstantExpr *CE,
-                                               LLVMContext &Context,
                                                const TargetData *TD) {
   SmallVector<Constant*, 8> Ops;
-  for (User::op_iterator i = CE->op_begin(), e = CE->op_end(); i != e; ++i)
-    Ops.push_back(cast<Constant>(*i));
+  for (User::op_iterator i = CE->op_begin(), e = CE->op_end(); i != e; ++i) {
+    Constant *NewC = cast<Constant>(*i);
+    // Recursively fold the ConstantExpr's operands.
+    if (ConstantExpr *NewCE = dyn_cast<ConstantExpr>(NewC))
+      NewC = ConstantFoldConstantExpression(NewCE, TD);
+    Ops.push_back(NewC);
+  }
 
   if (CE->isCompare())
-    return ConstantFoldCompareInstOperands(CE->getPredicate(),
-                                           Ops.data(), Ops.size(), 
-                                           Context, TD);
+    return ConstantFoldCompareInstOperands(CE->getPredicate(), Ops[0], Ops[1],
+                                           TD);
   return ConstantFoldInstOperands(CE->getOpcode(), CE->getType(),
-                                  Ops.data(), Ops.size(), Context, TD);
+                                  Ops.data(), Ops.size(), TD);
 }
 
 /// ConstantFoldInstOperands - Attempt to constant fold an instruction with the
@@ -408,15 +692,17 @@ Constant *llvm::ConstantFoldConstantExpression(ConstantExpr *CE,
 /// attempting to fold instructions like loads and stores, which have no
 /// constant expression form.
 ///
+/// TODO: This function neither utilizes nor preserves nsw/nuw/inbounds/etc
+/// information, due to only being passed an opcode and operands. Constant
+/// folding using this function strips this information.
+///
 Constant *llvm::ConstantFoldInstOperands(unsigned Opcode, const Type *DestTy, 
                                          Constant* const* Ops, unsigned NumOps,
-                                         LLVMContext &Context,
                                          const TargetData *TD) {
   // Handle easy binops first.
   if (Instruction::isBinaryOp(Opcode)) {
     if (isa<ConstantExpr>(Ops[0]) || isa<ConstantExpr>(Ops[1]))
-      if (Constant *C = SymbolicallyEvaluateBinop(Opcode, Ops[0], Ops[1], TD,
-                                                  Context))
+      if (Constant *C = SymbolicallyEvaluateBinop(Opcode, Ops[0], Ops[1], TD))
         return C;
     
     return ConstantExpr::get(Opcode, Ops[0], Ops[1]);
@@ -441,7 +727,7 @@ Constant *llvm::ConstantFoldInstOperands(unsigned Opcode, const Type *DestTy,
         unsigned InWidth = Input->getType()->getScalarSizeInBits();
         if (TD->getPointerSizeInBits() < InWidth) {
           Constant *Mask = 
-            ConstantInt::get(Context, APInt::getLowBitsSet(InWidth,
+            ConstantInt::get(CE->getContext(), APInt::getLowBitsSet(InWidth,
                                                   TD->getPointerSizeInBits()));
           Input = ConstantExpr::getAnd(Input, Mask);
         }
@@ -458,11 +744,9 @@ Constant *llvm::ConstantFoldInstOperands(unsigned Opcode, const Type *DestTy,
       if (TD &&
           TD->getPointerSizeInBits() <=
           CE->getType()->getScalarSizeInBits()) {
-        if (CE->getOpcode() == Instruction::PtrToInt) {
-          Constant *Input = CE->getOperand(0);
-          Constant *C = FoldBitCast(Input, DestTy, *TD, Context);
-          return C ? C : ConstantExpr::getBitCast(Input, DestTy);
-        }
+        if (CE->getOpcode() == Instruction::PtrToInt)
+          return FoldBitCast(CE->getOperand(0), DestTy, *TD);
+        
         // If there's a constant offset added to the integer value before
         // it is cast back to a pointer, see if the expression can be
         // converted into a GEP.
@@ -485,7 +769,7 @@ Constant *llvm::ConstantFoldInstOperands(unsigned Opcode, const Type *DestTy,
                                             AT->getNumElements()))) {
                         Constant *Index[] = {
                           Constant::getNullValue(CE->getType()),
-                          ConstantInt::get(Context, ElemIdx)
+                          ConstantInt::get(ElTy->getContext(), ElemIdx)
                         };
                         return
                         ConstantExpr::getGetElementPtr(GV, &Index[0], 2);
@@ -508,8 +792,7 @@ Constant *llvm::ConstantFoldInstOperands(unsigned Opcode, const Type *DestTy,
       return ConstantExpr::getCast(Opcode, Ops[0], DestTy);
   case Instruction::BitCast:
     if (TD)
-      if (Constant *C = FoldBitCast(Ops[0], DestTy, *TD, Context))
-        return C;
+      return FoldBitCast(Ops[0], DestTy, *TD);
     return ConstantExpr::getBitCast(Ops[0], DestTy);
   case Instruction::Select:
     return ConstantExpr::getSelect(Ops[0], Ops[1], Ops[2]);
@@ -520,7 +803,7 @@ Constant *llvm::ConstantFoldInstOperands(unsigned Opcode, const Type *DestTy,
   case Instruction::ShuffleVector:
     return ConstantExpr::getShuffleVector(Ops[0], Ops[1], Ops[2]);
   case Instruction::GetElementPtr:
-    if (Constant *C = SymbolicallyEvaluateGEP(Ops, NumOps, DestTy, Context, TD))
+    if (Constant *C = SymbolicallyEvaluateGEP(Ops, NumOps, DestTy, TD))
       return C;
     
     return ConstantExpr::getGetElementPtr(Ops[0], Ops+1, NumOps-1);
@@ -532,9 +815,7 @@ Constant *llvm::ConstantFoldInstOperands(unsigned Opcode, const Type *DestTy,
 /// returns a constant expression of the specified operands.
 ///
 Constant *llvm::ConstantFoldCompareInstOperands(unsigned Predicate,
-                                                Constant*const * Ops, 
-                                                unsigned NumOps,
-                                                LLVMContext &Context,
+                                                Constant *Ops0, Constant *Ops1, 
                                                 const TargetData *TD) {
   // fold: icmp (inttoptr x), null         -> icmp x, 0
   // fold: icmp (ptrtoint x), 0            -> icmp x, null
@@ -543,17 +824,16 @@ Constant *llvm::ConstantFoldCompareInstOperands(unsigned Predicate,
   //
   // ConstantExpr::getCompare cannot do this, because it doesn't have TD
   // around to know if bit truncation is happening.
-  if (ConstantExpr *CE0 = dyn_cast<ConstantExpr>(Ops[0])) {
-    if (TD && Ops[1]->isNullValue()) {
-      const Type *IntPtrTy = TD->getIntPtrType(Context);
+  if (ConstantExpr *CE0 = dyn_cast<ConstantExpr>(Ops0)) {
+    if (TD && Ops1->isNullValue()) {
+      const Type *IntPtrTy = TD->getIntPtrType(CE0->getContext());
       if (CE0->getOpcode() == Instruction::IntToPtr) {
         // Convert the integer value to the right size to ensure we get the
         // proper extension or truncation.
         Constant *C = ConstantExpr::getIntegerCast(CE0->getOperand(0),
                                                    IntPtrTy, false);
-        Constant *NewOps[] = { C, Constant::getNullValue(C->getType()) };
-        return ConstantFoldCompareInstOperands(Predicate, NewOps, 2,
-                                               Context, TD);
+        Constant *Null = Constant::getNullValue(C->getType());
+        return ConstantFoldCompareInstOperands(Predicate, C, Null, TD);
       }
       
       // Only do this transformation if the int is the size of IntPtrTy, otherwise
@@ -561,16 +841,14 @@ Constant *llvm::ConstantFoldCompareInstOperands(unsigned Predicate,
       if (CE0->getOpcode() == Instruction::PtrToInt && 
           CE0->getType() == IntPtrTy) {
         Constant *C = CE0->getOperand(0);
-        Constant *NewOps[] = { C, Constant::getNullValue(C->getType()) };
-        // FIXME!
-        return ConstantFoldCompareInstOperands(Predicate, NewOps, 2,
-                                               Context, TD);
+        Constant *Null = Constant::getNullValue(C->getType());
+        return ConstantFoldCompareInstOperands(Predicate, C, Null, TD);
       }
     }
     
-    if (ConstantExpr *CE1 = dyn_cast<ConstantExpr>(Ops[1])) {
+    if (ConstantExpr *CE1 = dyn_cast<ConstantExpr>(Ops1)) {
       if (TD && CE0->getOpcode() == CE1->getOpcode()) {
-        const Type *IntPtrTy = TD->getIntPtrType(Context);
+        const Type *IntPtrTy = TD->getIntPtrType(CE0->getContext());
 
         if (CE0->getOpcode() == Instruction::IntToPtr) {
           // Convert the integer value to the right size to ensure we get the
@@ -579,26 +857,21 @@ Constant *llvm::ConstantFoldCompareInstOperands(unsigned Predicate,
                                                       IntPtrTy, false);
           Constant *C1 = ConstantExpr::getIntegerCast(CE1->getOperand(0),
                                                       IntPtrTy, false);
-          Constant *NewOps[] = { C0, C1 };
-          return ConstantFoldCompareInstOperands(Predicate, NewOps, 2, 
-                                                 Context, TD);
+          return ConstantFoldCompareInstOperands(Predicate, C0, C1, TD);
         }
 
         // Only do this transformation if the int is the size of IntPtrTy;
         // otherwise there is a truncation or extension that we aren't modeling.
         if ((CE0->getOpcode() == Instruction::PtrToInt &&
              CE0->getType() == IntPtrTy &&
-             CE0->getOperand(0)->getType() == CE1->getOperand(0)->getType())) {
-          Constant *NewOps[] = { 
-            CE0->getOperand(0), CE1->getOperand(0) 
-          };
-          return ConstantFoldCompareInstOperands(Predicate, NewOps, 2, 
-                                                 Context, TD);
-        }
+             CE0->getOperand(0)->getType() == CE1->getOperand(0)->getType()))
+          return ConstantFoldCompareInstOperands(Predicate, CE0->getOperand(0),
+                                                 CE1->getOperand(0), TD);
       }
     }
   }
-  return ConstantExpr::getCompare(Predicate, Ops[0], Ops[1]);
+  
+  return ConstantExpr::getCompare(Predicate, Ops0, Ops1);
 }
 
 
@@ -606,8 +879,7 @@ Constant *llvm::ConstantFoldCompareInstOperands(unsigned Predicate,
 /// getelementptr constantexpr, return the constant value being addressed by the
 /// constant expression, or null if something is funny and we can't decide.
 Constant *llvm::ConstantFoldLoadThroughGEPConstantExpr(Constant *C, 
-                                                       ConstantExpr *CE,
-                                                       LLVMContext &Context) {
+                                                       ConstantExpr *CE) {
   if (CE->getOperand(1) != Constant::getNullValue(CE->getOperand(1)->getType()))
     return 0;  // Do not allow stepping over the value!
   
@@ -641,15 +913,15 @@ Constant *llvm::ConstantFoldLoadThroughGEPConstantExpr(Constant *C,
           C = UndefValue::get(ATy->getElementType());
         else
           return 0;
-      } else if (const VectorType *PTy = dyn_cast<VectorType>(*I)) {
-        if (CI->getZExtValue() >= PTy->getNumElements())
+      } else if (const VectorType *VTy = dyn_cast<VectorType>(*I)) {
+        if (CI->getZExtValue() >= VTy->getNumElements())
           return 0;
         if (ConstantVector *CP = dyn_cast<ConstantVector>(C))
           C = CP->getOperand(CI->getZExtValue());
         else if (isa<ConstantAggregateZero>(C))
-          C = Constant::getNullValue(PTy->getElementType());
+          C = Constant::getNullValue(VTy->getElementType());
         else if (isa<UndefValue>(C))
-          C = UndefValue::get(PTy->getElementType());
+          C = UndefValue::get(VTy->getElementType());
         else
           return 0;
       } else {
@@ -677,8 +949,14 @@ llvm::canConstantFoldCallTo(const Function *F) {
   case Intrinsic::ctpop:
   case Intrinsic::ctlz:
   case Intrinsic::cttz:
+  case Intrinsic::uadd_with_overflow:
+  case Intrinsic::usub_with_overflow:
+  case Intrinsic::sadd_with_overflow:
+  case Intrinsic::ssub_with_overflow:
     return true;
-  default: break;
+  default:
+    return false;
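+  // An intrinsic ID of 0 means this is not an intrinsic at all; fall through
+  // to the name-based checks below.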
+  case 0: break;
   }
 
   if (!F->hasName()) return false;
@@ -711,7 +989,7 @@ llvm::canConstantFoldCallTo(const Function *F) {
 }
 
 static Constant *ConstantFoldFP(double (*NativeFP)(double), double V, 
-                                const Type *Ty, LLVMContext &Context) {
+                                const Type *Ty) {
   errno = 0;
   V = NativeFP(V);
   if (errno != 0) {
@@ -719,18 +997,16 @@ static Constant *ConstantFoldFP(double (*NativeFP)(double), double V,
     return 0;
   }
   
-  if (Ty == Type::getFloatTy(Context))
-    return ConstantFP::get(Context, APFloat((float)V));
-  if (Ty == Type::getDoubleTy(Context))
-    return ConstantFP::get(Context, APFloat(V));
+  if (Ty->isFloatTy())
+    return ConstantFP::get(Ty->getContext(), APFloat((float)V));
+  if (Ty->isDoubleTy())
+    return ConstantFP::get(Ty->getContext(), APFloat(V));
   llvm_unreachable("Can only constant fold float/double");
   return 0; // dummy return to suppress warning
 }
 
 static Constant *ConstantFoldBinaryFP(double (*NativeFP)(double, double),
-                                      double V, double W,
-                                      const Type *Ty,
-                                      LLVMContext &Context) {
+                                      double V, double W, const Type *Ty) {
   errno = 0;
   V = NativeFP(V, W);
   if (errno != 0) {
@@ -738,140 +1014,196 @@ static Constant *ConstantFoldBinaryFP(double (*NativeFP)(double, double),
     return 0;
   }
   
-  if (Ty == Type::getFloatTy(Context))
-    return ConstantFP::get(Context, APFloat((float)V));
-  if (Ty == Type::getDoubleTy(Context))
-    return ConstantFP::get(Context, APFloat(V));
+  if (Ty->isFloatTy())
+    return ConstantFP::get(Ty->getContext(), APFloat((float)V));
+  if (Ty->isDoubleTy())
+    return ConstantFP::get(Ty->getContext(), APFloat(V));
   llvm_unreachable("Can only constant fold float/double");
   return 0; // dummy return to suppress warning
 }
 
 /// ConstantFoldCall - Attempt to constant fold a call to the specified function
 /// with the specified arguments, returning null if unsuccessful.
-
 Constant *
 llvm::ConstantFoldCall(Function *F, 
-                       Constant* const* Operands, unsigned NumOperands) {
+                       Constant *const *Operands, unsigned NumOperands) {
   if (!F->hasName()) return 0;
-  LLVMContext &Context = F->getContext();
   StringRef Name = F->getName();
-  
+
   const Type *Ty = F->getReturnType();
   if (NumOperands == 1) {
     if (ConstantFP *Op = dyn_cast<ConstantFP>(Operands[0])) {
-      if (Ty!=Type::getFloatTy(F->getContext()) &&
-          Ty!=Type::getDoubleTy(Context))
+      if (!Ty->isFloatTy() && !Ty->isDoubleTy())
         return 0;
       /// Currently APFloat versions of these functions do not exist, so we use
       /// the host native double versions.  Float versions are not called
       /// directly, but for all of these it is true that
       /// (float)(f((double)arg)) == f(arg).  Long double is not supported yet.
-      double V = Ty==Type::getFloatTy(F->getContext()) ?
-                                     (double)Op->getValueAPF().convertToFloat():
+      double V = Ty->isFloatTy() ? (double)Op->getValueAPF().convertToFloat() :
                                      Op->getValueAPF().convertToDouble();
       switch (Name[0]) {
       case 'a':
         if (Name == "acos")
-          return ConstantFoldFP(acos, V, Ty, Context);
+          return ConstantFoldFP(acos, V, Ty);
         else if (Name == "asin")
-          return ConstantFoldFP(asin, V, Ty, Context);
+          return ConstantFoldFP(asin, V, Ty);
         else if (Name == "atan")
-          return ConstantFoldFP(atan, V, Ty, Context);
+          return ConstantFoldFP(atan, V, Ty);
         break;
       case 'c':
         if (Name == "ceil")
-          return ConstantFoldFP(ceil, V, Ty, Context);
+          return ConstantFoldFP(ceil, V, Ty);
         else if (Name == "cos")
-          return ConstantFoldFP(cos, V, Ty, Context);
+          return ConstantFoldFP(cos, V, Ty);
         else if (Name == "cosh")
-          return ConstantFoldFP(cosh, V, Ty, Context);
+          return ConstantFoldFP(cosh, V, Ty);
         else if (Name == "cosf")
-          return ConstantFoldFP(cos, V, Ty, Context);
+          return ConstantFoldFP(cos, V, Ty);
         break;
       case 'e':
         if (Name == "exp")
-          return ConstantFoldFP(exp, V, Ty, Context);
+          return ConstantFoldFP(exp, V, Ty);
         break;
       case 'f':
         if (Name == "fabs")
-          return ConstantFoldFP(fabs, V, Ty, Context);
+          return ConstantFoldFP(fabs, V, Ty);
         else if (Name == "floor")
-          return ConstantFoldFP(floor, V, Ty, Context);
+          return ConstantFoldFP(floor, V, Ty);
         break;
       case 'l':
         if (Name == "log" && V > 0)
-          return ConstantFoldFP(log, V, Ty, Context);
+          return ConstantFoldFP(log, V, Ty);
         else if (Name == "log10" && V > 0)
-          return ConstantFoldFP(log10, V, Ty, Context);
+          return ConstantFoldFP(log10, V, Ty);
         else if (Name == "llvm.sqrt.f32" ||
                  Name == "llvm.sqrt.f64") {
           if (V >= -0.0)
-            return ConstantFoldFP(sqrt, V, Ty, Context);
+            return ConstantFoldFP(sqrt, V, Ty);
           else // Undefined
             return Constant::getNullValue(Ty);
         }
         break;
       case 's':
         if (Name == "sin")
-          return ConstantFoldFP(sin, V, Ty, Context);
+          return ConstantFoldFP(sin, V, Ty);
         else if (Name == "sinh")
-          return ConstantFoldFP(sinh, V, Ty, Context);
+          return ConstantFoldFP(sinh, V, Ty);
         else if (Name == "sqrt" && V >= 0)
-          return ConstantFoldFP(sqrt, V, Ty, Context);
+          return ConstantFoldFP(sqrt, V, Ty);
         else if (Name == "sqrtf" && V >= 0)
-          return ConstantFoldFP(sqrt, V, Ty, Context);
+          return ConstantFoldFP(sqrt, V, Ty);
         else if (Name == "sinf")
-          return ConstantFoldFP(sin, V, Ty, Context);
+          return ConstantFoldFP(sin, V, Ty);
         break;
       case 't':
         if (Name == "tan")
-          return ConstantFoldFP(tan, V, Ty, Context);
+          return ConstantFoldFP(tan, V, Ty);
         else if (Name == "tanh")
-          return ConstantFoldFP(tanh, V, Ty, Context);
+          return ConstantFoldFP(tanh, V, Ty);
         break;
       default:
         break;
       }
-    } else if (ConstantInt *Op = dyn_cast<ConstantInt>(Operands[0])) {
+      return 0;
+    }
+    
+    
+    if (ConstantInt *Op = dyn_cast<ConstantInt>(Operands[0])) {
       if (Name.startswith("llvm.bswap"))
-        return ConstantInt::get(Context, Op->getValue().byteSwap());
+        return ConstantInt::get(F->getContext(), Op->getValue().byteSwap());
       else if (Name.startswith("llvm.ctpop"))
         return ConstantInt::get(Ty, Op->getValue().countPopulation());
       else if (Name.startswith("llvm.cttz"))
         return ConstantInt::get(Ty, Op->getValue().countTrailingZeros());
       else if (Name.startswith("llvm.ctlz"))
         return ConstantInt::get(Ty, Op->getValue().countLeadingZeros());
+      return 0;
     }
-  } else if (NumOperands == 2) {
+    
+    return 0;
+  }
+  
+  if (NumOperands == 2) {
     if (ConstantFP *Op1 = dyn_cast<ConstantFP>(Operands[0])) {
-      if (Ty!=Type::getFloatTy(F->getContext()) && 
-          Ty!=Type::getDoubleTy(Context))
+      if (!Ty->isFloatTy() && !Ty->isDoubleTy())
         return 0;
-      double Op1V = Ty==Type::getFloatTy(F->getContext()) ? 
-                      (double)Op1->getValueAPF().convertToFloat():
+      double Op1V = Ty->isFloatTy() ? 
+                      (double)Op1->getValueAPF().convertToFloat() :
                       Op1->getValueAPF().convertToDouble();
       if (ConstantFP *Op2 = dyn_cast<ConstantFP>(Operands[1])) {
-        double Op2V = Ty==Type::getFloatTy(F->getContext()) ? 
+        if (Op2->getType() != Op1->getType())
+          return 0;
+        
+        double Op2V = Ty->isFloatTy() ? 
                       (double)Op2->getValueAPF().convertToFloat():
                       Op2->getValueAPF().convertToDouble();
 
-        if (Name == "pow") {
-          return ConstantFoldBinaryFP(pow, Op1V, Op2V, Ty, Context);
-        } else if (Name == "fmod") {
-          return ConstantFoldBinaryFP(fmod, Op1V, Op2V, Ty, Context);
-        } else if (Name == "atan2") {
-          return ConstantFoldBinaryFP(atan2, Op1V, Op2V, Ty, Context);
-        }
+        if (Name == "pow")
+          return ConstantFoldBinaryFP(pow, Op1V, Op2V, Ty);
+        if (Name == "fmod")
+          return ConstantFoldBinaryFP(fmod, Op1V, Op2V, Ty);
+        if (Name == "atan2")
+          return ConstantFoldBinaryFP(atan2, Op1V, Op2V, Ty);
       } else if (ConstantInt *Op2C = dyn_cast<ConstantInt>(Operands[1])) {
-        if (Name == "llvm.powi.f32") {
-          return ConstantFP::get(Context, APFloat((float)std::pow((float)Op1V,
-                                                 (int)Op2C->getZExtValue())));
-        } else if (Name == "llvm.powi.f64") {
-          return ConstantFP::get(Context, APFloat((double)std::pow((double)Op1V,
+        if (Name == "llvm.powi.f32")
+          return ConstantFP::get(F->getContext(),
+                                 APFloat((float)std::pow((float)Op1V,
                                                  (int)Op2C->getZExtValue())));
+        if (Name == "llvm.powi.f64")
+          return ConstantFP::get(F->getContext(),
+                                 APFloat((double)std::pow((double)Op1V,
+                                                   (int)Op2C->getZExtValue())));
+      }
+      return 0;
+    }
+    
+    
+    if (ConstantInt *Op1 = dyn_cast<ConstantInt>(Operands[0])) {
+      if (ConstantInt *Op2 = dyn_cast<ConstantInt>(Operands[1])) {
+        switch (F->getIntrinsicID()) {
+        default: break;
+        case Intrinsic::uadd_with_overflow: {
+          Constant *Res = ConstantExpr::getAdd(Op1, Op2);           // result.
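+          // Unsigned addition wrapped iff the result is u< Op1,
+          // e.g. for i8, 200 + 100 wraps to 44 and 44 u< 200.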
+          Constant *Ops[] = {
+            Res, ConstantExpr::getICmp(CmpInst::ICMP_ULT, Res, Op1) // overflow.
+          };
+          return ConstantStruct::get(F->getContext(), Ops, 2, false);
+        }
+        case Intrinsic::usub_with_overflow: {
+          Constant *Res = ConstantExpr::getSub(Op1, Op2);           // result.
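+          // Unsigned subtraction borrowed iff the result is u> Op1,
+          // e.g. for i8, 5 - 10 wraps to 251 and 251 u> 5.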
+          Constant *Ops[] = {
+            Res, ConstantExpr::getICmp(CmpInst::ICMP_UGT, Res, Op1) // overflow.
+          };
+          return ConstantStruct::get(F->getContext(), Ops, 2, false);
+        }
+        case Intrinsic::sadd_with_overflow: {
+          Constant *Res = ConstantExpr::getAdd(Op1, Op2);           // result.
+          Constant *Overflow = ConstantExpr::getSelect(
+              ConstantExpr::getICmp(CmpInst::ICMP_SGT,
+                ConstantInt::get(Op1->getType(), 0), Op1),
+              ConstantExpr::getICmp(CmpInst::ICMP_SGT, Res, Op2), 
+              ConstantExpr::getICmp(CmpInst::ICMP_SLT, Res, Op2)); // overflow.
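+          // Signed addition overflowed iff the result moved the wrong way:
+          // when Op1 < 0 overflow means Res > Op2, otherwise Res < Op2.
+          // E.g. for i8, 100 + 50 wraps to -106 and -106 s< 50.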
+
+          Constant *Ops[] = { Res, Overflow };
+          return ConstantStruct::get(F->getContext(), Ops, 2, false);
+        }
+        case Intrinsic::ssub_with_overflow: {
+          Constant *Res = ConstantExpr::getSub(Op1, Op2);           // result.
+          Constant *Overflow = ConstantExpr::getSelect(
+              ConstantExpr::getICmp(CmpInst::ICMP_SGT,
+                ConstantInt::get(Op2->getType(), 0), Op2),
+              ConstantExpr::getICmp(CmpInst::ICMP_SLT, Res, Op1), 
+              ConstantExpr::getICmp(CmpInst::ICMP_SGT, Res, Op1)); // overflow.
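+          // Signed subtraction overflowed iff the result moved the wrong way
+          // relative to Op1: when Op2 < 0 overflow means Res < Op1, otherwise
+          // Res > Op1.  E.g. for i8, 100 - (-50) wraps to -106 and
+          // -106 s< 100.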
+
+          Constant *Ops[] = { Res, Overflow };
+          return ConstantStruct::get(F->getContext(), Ops, 2, false);
+        }
         }
       }
+      
+      return 0;
     }
+    return 0;
   }
   return 0;
 }
diff --git a/libclamav/c++/llvm/lib/Analysis/DbgInfoPrinter.cpp b/libclamav/c++/llvm/lib/Analysis/DbgInfoPrinter.cpp
index 2bbe2e0..ab92e3f 100644
--- a/libclamav/c++/llvm/lib/Analysis/DbgInfoPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/DbgInfoPrinter.cpp
@@ -35,7 +35,7 @@ PrintDirectory("print-fullpath",
                cl::Hidden);
 
 namespace {
-  class VISIBILITY_HIDDEN PrintDbgInfo : public FunctionPass {
+  class PrintDbgInfo : public FunctionPass {
     raw_ostream &Out;
     void printStopPoint(const DbgStopPointInst *DSI);
     void printFuncStart(const DbgFuncStartInst *FS);
diff --git a/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp b/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp
index 1d6e3a6..41d803c 100644
--- a/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp
@@ -78,16 +78,16 @@ DIDescriptor::DIDescriptor(MDNode *N, unsigned RequiredTag) {
   }
 }
 
-const char *
+StringRef 
 DIDescriptor::getStringField(unsigned Elt) const {
   if (DbgNode == 0)
-    return NULL;
+    return StringRef();
 
   if (Elt < DbgNode->getNumElements())
     if (MDString *MDS = dyn_cast_or_null<MDString>(DbgNode->getElement(Elt)))
-      return MDS->getString().data();
+      return MDS->getString();
 
-  return NULL;
+  return StringRef();
 }
 
 uint64_t DIDescriptor::getUInt64Field(unsigned Elt) const {
@@ -116,7 +116,7 @@ GlobalVariable *DIDescriptor::getGlobalVariableField(unsigned Elt) const {
     return 0;
 
   if (Elt < DbgNode->getNumElements())
-      return dyn_cast<GlobalVariable>(DbgNode->getElement(Elt));
+      return dyn_cast_or_null<GlobalVariable>(DbgNode->getElement(Elt));
   return 0;
 }
 
@@ -307,8 +307,8 @@ void DIDerivedType::replaceAllUsesWith(DIDescriptor &D) {
 bool DICompileUnit::Verify() const {
   if (isNull())
     return false;
-  const char *N = getFilename();
-  if (!N)
+  StringRef N = getFilename();
+  if (N.empty())
     return false;
   // It is possible that the directory and producer strings are empty.
   return true;
@@ -363,6 +363,9 @@ bool DIGlobalVariable::Verify() const {
   if (isNull())
     return false;
 
+  if (getDisplayName().empty())
+    return false;
+
   if (getContext().isNull())
     return false;
 
@@ -398,27 +401,37 @@ bool DIVariable::Verify() const {
 /// getOriginalTypeSize - If this type is derived from a base type then
 /// return base type size.
 uint64_t DIDerivedType::getOriginalTypeSize() const {
-  if (getTag() != dwarf::DW_TAG_member)
-    return getSizeInBits();
-  DIType BT = getTypeDerivedFrom();
-  if (BT.getTag() != dwarf::DW_TAG_base_type)
-    return getSizeInBits();
-  return BT.getSizeInBits();
+  unsigned Tag = getTag();
+  if (Tag == dwarf::DW_TAG_member || Tag == dwarf::DW_TAG_typedef ||
+      Tag == dwarf::DW_TAG_const_type || Tag == dwarf::DW_TAG_volatile_type ||
+      Tag == dwarf::DW_TAG_restrict_type) {
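+    // E.g. for a chain like "typedef const int T" this walks through the
+    // typedef and const entries until it reaches the underlying base type.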
+    DIType BaseType = getTypeDerivedFrom();
+    // If this type is not derived from any type, then take a conservative
+    // approach.
+    if (BaseType.isNull())
+      return getSizeInBits();
+    if (BaseType.isDerivedType())
+      return DIDerivedType(BaseType.getNode()).getOriginalTypeSize();
+    else
+      return BaseType.getSizeInBits();
+  }
+    
+  return getSizeInBits();
 }
 
 /// describes - Return true if this subprogram provides debugging
 /// information for the function F.
 bool DISubprogram::describes(const Function *F) {
   assert (F && "Invalid function");
-  const char *Name = getLinkageName();
-  if (!Name)
+  StringRef Name = getLinkageName();
+  if (Name.empty())
     Name = getName();
-  if (strcmp(F->getName().data(), Name) == 0)
+  if (F->getName() == Name)
     return true;
   return false;
 }
 
-const char *DIScope::getFilename() const {
+StringRef DIScope::getFilename() const {
   if (isLexicalBlock()) 
     return DILexicalBlock(DbgNode).getFilename();
   else if (isSubprogram())
@@ -427,10 +440,10 @@ const char *DIScope::getFilename() const {
     return DICompileUnit(DbgNode).getFilename();
   else 
     assert (0 && "Invalid DIScope!");
-  return NULL;
+  return StringRef();
 }
 
-const char *DIScope::getDirectory() const {
+StringRef DIScope::getDirectory() const {
   if (isLexicalBlock()) 
     return DILexicalBlock(DbgNode).getDirectory();
   else if (isSubprogram())
@@ -439,7 +452,7 @@ const char *DIScope::getDirectory() const {
     return DICompileUnit(DbgNode).getDirectory();
   else 
     assert (0 && "Invalid DIScope!");
-  return NULL;
+  return StringRef();
 }
 
 //===----------------------------------------------------------------------===//
@@ -465,7 +478,8 @@ void DICompileUnit::dump() const {
 void DIType::dump() const {
   if (isNull()) return;
 
-  if (const char *Res = getName())
+  StringRef Res = getName();
+  if (!Res.empty())
     errs() << " [" << Res << "] ";
 
   unsigned Tag = getTag();
@@ -522,7 +536,8 @@ void DICompositeType::dump() const {
 
 /// dump - Print global.
 void DIGlobal::dump() const {
-  if (const char *Res = getName())
+  StringRef Res = getName();
+  if (!Res.empty())
     errs() << " [" << Res << "] ";
 
   unsigned Tag = getTag();
@@ -546,7 +561,8 @@ void DIGlobal::dump() const {
 
 /// dump - Print subprogram.
 void DISubprogram::dump() const {
-  if (const char *Res = getName())
+  StringRef Res = getName();
+  if (!Res.empty())
     errs() << " [" << Res << "] ";
 
   unsigned Tag = getTag();
@@ -574,7 +590,8 @@ void DIGlobalVariable::dump() const {
 
 /// dump - Print variable.
 void DIVariable::dump() const {
-  if (const char *Res = getName())
+  StringRef Res = getName();
+  if (!Res.empty())
     errs() << " [" << Res << "] ";
 
   getCompileUnit().dump();
@@ -590,9 +607,7 @@ void DIVariable::dump() const {
 //===----------------------------------------------------------------------===//
 
 DIFactory::DIFactory(Module &m)
-  : M(m), VMContext(M.getContext()), StopPointFn(0), FuncStartFn(0),
-    RegionStartFn(0), RegionEndFn(0),
-    DeclareFn(0) {
+  : M(m), VMContext(M.getContext()), DeclareFn(0) {
   EmptyStructPtr = PointerType::getUnqual(StructType::get(VMContext));
 }
 
@@ -642,7 +657,7 @@ DICompileUnit DIFactory::CreateCompileUnit(unsigned LangID,
                                            StringRef Producer,
                                            bool isMain,
                                            bool isOptimized,
-                                           const char *Flags,
+                                           StringRef Flags,
                                            unsigned RunTimeVer) {
   Value *Elts[] = {
     GetTagConstant(dwarf::DW_TAG_compile_unit),
@@ -695,6 +710,32 @@ DIBasicType DIFactory::CreateBasicType(DIDescriptor Context,
   return DIBasicType(MDNode::get(VMContext, &Elts[0], 10));
 }
 
+
+/// CreateBasicTypeEx - Create a basic type like int, float, etc.
+DIBasicType DIFactory::CreateBasicTypeEx(DIDescriptor Context,
+                                         StringRef Name,
+                                         DICompileUnit CompileUnit,
+                                         unsigned LineNumber,
+                                         Constant *SizeInBits,
+                                         Constant *AlignInBits,
+                                         Constant *OffsetInBits, unsigned Flags,
+                                         unsigned Encoding) {
+  Value *Elts[] = {
+    GetTagConstant(dwarf::DW_TAG_base_type),
+    Context.getNode(),
+    MDString::get(VMContext, Name),
+    CompileUnit.getNode(),
+    ConstantInt::get(Type::getInt32Ty(VMContext), LineNumber),
+    SizeInBits,
+    AlignInBits,
+    OffsetInBits,
+    ConstantInt::get(Type::getInt32Ty(VMContext), Flags),
+    ConstantInt::get(Type::getInt32Ty(VMContext), Encoding)
+  };
+  return DIBasicType(MDNode::get(VMContext, &Elts[0], 10));
+}
+
+
 /// CreateDerivedType - Create a derived type like const qualified type,
 /// pointer, typedef, etc.
 DIDerivedType DIFactory::CreateDerivedType(unsigned Tag,
@@ -722,6 +763,35 @@ DIDerivedType DIFactory::CreateDerivedType(unsigned Tag,
   return DIDerivedType(MDNode::get(VMContext, &Elts[0], 10));
 }
 
+
+/// CreateDerivedTypeEx - Create a derived type like const qualified type,
+/// pointer, typedef, etc.
+DIDerivedType DIFactory::CreateDerivedTypeEx(unsigned Tag,
+                                             DIDescriptor Context,
+                                             StringRef Name,
+                                             DICompileUnit CompileUnit,
+                                             unsigned LineNumber,
+                                             Constant *SizeInBits,
+                                             Constant *AlignInBits,
+                                             Constant *OffsetInBits,
+                                             unsigned Flags,
+                                             DIType DerivedFrom) {
+  Value *Elts[] = {
+    GetTagConstant(Tag),
+    Context.getNode(),
+    MDString::get(VMContext, Name),
+    CompileUnit.getNode(),
+    ConstantInt::get(Type::getInt32Ty(VMContext), LineNumber),
+    SizeInBits,
+    AlignInBits,
+    OffsetInBits,
+    ConstantInt::get(Type::getInt32Ty(VMContext), Flags),
+    DerivedFrom.getNode(),
+  };
+  return DIDerivedType(MDNode::get(VMContext, &Elts[0], 10));
+}
+
+
 /// CreateCompositeType - Create a composite type like array, struct, etc.
 DICompositeType DIFactory::CreateCompositeType(unsigned Tag,
                                                DIDescriptor Context,
@@ -754,6 +824,38 @@ DICompositeType DIFactory::CreateCompositeType(unsigned Tag,
 }
 
 
+/// CreateCompositeTypeEx - Create a composite type like array, struct, etc.
+DICompositeType DIFactory::CreateCompositeTypeEx(unsigned Tag,
+                                                 DIDescriptor Context,
+                                                 StringRef Name,
+                                                 DICompileUnit CompileUnit,
+                                                 unsigned LineNumber,
+                                                 Constant *SizeInBits,
+                                                 Constant *AlignInBits,
+                                                 Constant *OffsetInBits,
+                                                 unsigned Flags,
+                                                 DIType DerivedFrom,
+                                                 DIArray Elements,
+                                                 unsigned RuntimeLang) {
+
+  Value *Elts[] = {
+    GetTagConstant(Tag),
+    Context.getNode(),
+    MDString::get(VMContext, Name),
+    CompileUnit.getNode(),
+    ConstantInt::get(Type::getInt32Ty(VMContext), LineNumber),
+    SizeInBits,
+    AlignInBits,
+    OffsetInBits,
+    ConstantInt::get(Type::getInt32Ty(VMContext), Flags),
+    DerivedFrom.getNode(),
+    Elements.getNode(),
+    ConstantInt::get(Type::getInt32Ty(VMContext), RuntimeLang)
+  };
+  return DICompositeType(MDNode::get(VMContext, &Elts[0], 12));
+}
+
+
 /// CreateSubprogram - Create a new descriptor for the specified subprogram.
 /// See comments in DISubprogram for descriptions of these fields.  This
 /// method does not unique the generated descriptors.
@@ -875,65 +977,24 @@ DILocation DIFactory::CreateLocation(unsigned LineNo, unsigned ColumnNo,
   return DILocation(MDNode::get(VMContext, &Elts[0], 4));
 }
 
+/// CreateLocation - Creates a debug info location.
+DILocation DIFactory::CreateLocation(unsigned LineNo, unsigned ColumnNo,
+                                     DIScope S, MDNode *OrigLoc) {
+  Value *Elts[] = {
+    ConstantInt::get(Type::getInt32Ty(VMContext), LineNo),
+    ConstantInt::get(Type::getInt32Ty(VMContext), ColumnNo),
+    S.getNode(),
+    OrigLoc
+  };
+  return DILocation(MDNode::get(VMContext, &Elts[0], 4));
+}
 
 //===----------------------------------------------------------------------===//
 // DIFactory: Routines for inserting code into a function
 //===----------------------------------------------------------------------===//
 
-/// InsertStopPoint - Create a new llvm.dbg.stoppoint intrinsic invocation,
-/// inserting it at the end of the specified basic block.
-void DIFactory::InsertStopPoint(DICompileUnit CU, unsigned LineNo,
-                                unsigned ColNo, BasicBlock *BB) {
-
-  // Lazily construct llvm.dbg.stoppoint function.
-  if (!StopPointFn)
-    StopPointFn = llvm::Intrinsic::getDeclaration(&M,
-                                              llvm::Intrinsic::dbg_stoppoint);
-
-  // Invoke llvm.dbg.stoppoint
-  Value *Args[] = {
-    ConstantInt::get(llvm::Type::getInt32Ty(VMContext), LineNo),
-    ConstantInt::get(llvm::Type::getInt32Ty(VMContext), ColNo),
-    CU.getNode()
-  };
-  CallInst::Create(StopPointFn, Args, Args+3, "", BB);
-}
-
-/// InsertSubprogramStart - Create a new llvm.dbg.func.start intrinsic to
-/// mark the start of the specified subprogram.
-void DIFactory::InsertSubprogramStart(DISubprogram SP, BasicBlock *BB) {
-  // Lazily construct llvm.dbg.func.start.
-  if (!FuncStartFn)
-    FuncStartFn = Intrinsic::getDeclaration(&M, Intrinsic::dbg_func_start);
-
-  // Call llvm.dbg.func.start which also implicitly sets a stoppoint.
-  CallInst::Create(FuncStartFn, SP.getNode(), "", BB);
-}
-
-/// InsertRegionStart - Insert a new llvm.dbg.region.start intrinsic call to
-/// mark the start of a region for the specified scoping descriptor.
-void DIFactory::InsertRegionStart(DIDescriptor D, BasicBlock *BB) {
-  // Lazily construct llvm.dbg.region.start function.
-  if (!RegionStartFn)
-    RegionStartFn = Intrinsic::getDeclaration(&M, Intrinsic::dbg_region_start);
-
-  // Call llvm.dbg.func.start.
-  CallInst::Create(RegionStartFn, D.getNode(), "", BB);
-}
-
-/// InsertRegionEnd - Insert a new llvm.dbg.region.end intrinsic call to
-/// mark the end of a region for the specified scoping descriptor.
-void DIFactory::InsertRegionEnd(DIDescriptor D, BasicBlock *BB) {
-  // Lazily construct llvm.dbg.region.end function.
-  if (!RegionEndFn)
-    RegionEndFn = Intrinsic::getDeclaration(&M, Intrinsic::dbg_region_end);
-
-  // Call llvm.dbg.region.end.
-  CallInst::Create(RegionEndFn, D.getNode(), "", BB);
-}
-
 /// InsertDeclare - Insert a new llvm.dbg.declare intrinsic call.
-void DIFactory::InsertDeclare(Value *Storage, DIVariable D,
+Instruction *DIFactory::InsertDeclare(Value *Storage, DIVariable D,
                               Instruction *InsertBefore) {
   // Cast the storage to a {}* for the call to llvm.dbg.declare.
   Storage = new BitCastInst(Storage, EmptyStructPtr, "", InsertBefore);
@@ -942,11 +1003,11 @@ void DIFactory::InsertDeclare(Value *Storage, DIVariable D,
     DeclareFn = Intrinsic::getDeclaration(&M, Intrinsic::dbg_declare);
 
   Value *Args[] = { Storage, D.getNode() };
-  CallInst::Create(DeclareFn, Args, Args+2, "", InsertBefore);
+  return CallInst::Create(DeclareFn, Args, Args+2, "", InsertBefore);
 }
 
 /// InsertDeclare - Insert a new llvm.dbg.declare intrinsic call.
-void DIFactory::InsertDeclare(Value *Storage, DIVariable D,
+Instruction *DIFactory::InsertDeclare(Value *Storage, DIVariable D,
                               BasicBlock *InsertAtEnd) {
   // Cast the storage to a {}* for the call to llvm.dbg.declare.
   Storage = new BitCastInst(Storage, EmptyStructPtr, "", InsertAtEnd);
@@ -955,7 +1016,7 @@ void DIFactory::InsertDeclare(Value *Storage, DIVariable D,
     DeclareFn = Intrinsic::getDeclaration(&M, Intrinsic::dbg_declare);
 
   Value *Args[] = { Storage, D.getNode() };
-  CallInst::Create(DeclareFn, Args, Args+2, "", InsertAtEnd);
+  return CallInst::Create(DeclareFn, Args, Args+2, "", InsertAtEnd);
 }
 
 
@@ -966,20 +1027,18 @@ void DIFactory::InsertDeclare(Value *Storage, DIVariable D,
 /// processModule - Process entire module and collect debug info.
 void DebugInfoFinder::processModule(Module &M) {
 
+  MetadataContext &TheMetadata = M.getContext().getMetadata();
+  unsigned MDDbgKind = TheMetadata.getMDKind("dbg");
+
   for (Module::iterator I = M.begin(), E = M.end(); I != E; ++I)
     for (Function::iterator FI = (*I).begin(), FE = (*I).end(); FI != FE; ++FI)
       for (BasicBlock::iterator BI = (*FI).begin(), BE = (*FI).end(); BI != BE;
            ++BI) {
-        if (DbgStopPointInst *SPI = dyn_cast<DbgStopPointInst>(BI))
-          processStopPoint(SPI);
-        else if (DbgFuncStartInst *FSI = dyn_cast<DbgFuncStartInst>(BI))
-          processFuncStart(FSI);
-        else if (DbgRegionStartInst *DRS = dyn_cast<DbgRegionStartInst>(BI))
-          processRegionStart(DRS);
-        else if (DbgRegionEndInst *DRE = dyn_cast<DbgRegionEndInst>(BI))
-          processRegionEnd(DRE);
-        else if (DbgDeclareInst *DDI = dyn_cast<DbgDeclareInst>(BI))
+        if (DbgDeclareInst *DDI = dyn_cast<DbgDeclareInst>(BI))
           processDeclare(DDI);
+        else if (MDDbgKind) 
+          if (MDNode *L = TheMetadata.getMD(MDDbgKind, BI)) 
+            processLocation(DILocation(L));
       }
 
   NamedMDNode *NMD = M.getNamedMetadata("llvm.dbg.gv");
@@ -995,6 +1054,20 @@ void DebugInfoFinder::processModule(Module &M) {
   }
 }
 
+/// processLocation - Process DILocation.
+void DebugInfoFinder::processLocation(DILocation Loc) {
+  if (Loc.isNull()) return;
+  DIScope S(Loc.getScope().getNode());
+  if (S.isNull()) return;
+  if (S.isCompileUnit())
+    addCompileUnit(DICompileUnit(S.getNode()));
+  else if (S.isSubprogram())
+    processSubprogram(DISubprogram(S.getNode()));
+  else if (S.isLexicalBlock())
+    processLexicalBlock(DILexicalBlock(S.getNode()));
+  processLocation(Loc.getOrigLocation());
+}
+
 /// processType - Process DIType.
 void DebugInfoFinder::processType(DIType DT) {
   if (!addType(DT))
@@ -1021,6 +1094,17 @@ void DebugInfoFinder::processType(DIType DT) {
   }
 }
 
+/// processLexicalBlock - Process DILexicalBlock.
+void DebugInfoFinder::processLexicalBlock(DILexicalBlock LB) {
+  if (LB.isNull())
+    return;
+  DIScope Context = LB.getContext();
+  if (Context.isLexicalBlock())
+    return processLexicalBlock(DILexicalBlock(Context.getNode()));
+  else
+    return processSubprogram(DISubprogram(Context.getNode()));
+}
+
 /// processSubprogram - Process DISubprogram.
 void DebugInfoFinder::processSubprogram(DISubprogram SP) {
   if (SP.isNull())
@@ -1031,30 +1115,6 @@ void DebugInfoFinder::processSubprogram(DISubprogram SP) {
   processType(SP.getType());
 }
 
-/// processStopPoint - Process DbgStopPointInst.
-void DebugInfoFinder::processStopPoint(DbgStopPointInst *SPI) {
-  MDNode *Context = dyn_cast<MDNode>(SPI->getContext());
-  addCompileUnit(DICompileUnit(Context));
-}
-
-/// processFuncStart - Process DbgFuncStartInst.
-void DebugInfoFinder::processFuncStart(DbgFuncStartInst *FSI) {
-  MDNode *SP = dyn_cast<MDNode>(FSI->getSubprogram());
-  processSubprogram(DISubprogram(SP));
-}
-
-/// processRegionStart - Process DbgRegionStart.
-void DebugInfoFinder::processRegionStart(DbgRegionStartInst *DRS) {
-  MDNode *SP = dyn_cast<MDNode>(DRS->getContext());
-  processSubprogram(DISubprogram(SP));
-}
-
-/// processRegionEnd - Process DbgRegionEnd.
-void DebugInfoFinder::processRegionEnd(DbgRegionEndInst *DRE) {
-  MDNode *SP = dyn_cast<MDNode>(DRE->getContext());
-  processSubprogram(DISubprogram(SP));
-}
-
 /// processDeclare - Process DbgDeclareInst.
 void DebugInfoFinder::processDeclare(DbgDeclareInst *DDI) {
   DIVariable DV(cast<MDNode>(DDI->getVariable()));
@@ -1188,9 +1248,10 @@ namespace llvm {
       // Look for the bitcast.
       for (Value::use_const_iterator I = V->use_begin(), E =V->use_end();
             I != E; ++I)
-        if (isa<BitCastInst>(I))
-          return findDbgDeclare(*I, false);
-
+        if (isa<BitCastInst>(I)) {
+          const DbgDeclareInst *DDI = findDbgDeclare(*I, false);
+          if (DDI) return DDI;
+        }
       return 0;
     }
 
@@ -1214,7 +1275,8 @@ bool getLocationInfo(const Value *V, std::string &DisplayName,
       if (!DIGV) return false;
       DIGlobalVariable Var(cast<MDNode>(DIGV));
 
-      if (const char *D = Var.getDisplayName())
+      StringRef D = Var.getDisplayName();
+      if (!D.empty())
         DisplayName = D;
       LineNo = Var.getLineNumber();
       Unit = Var.getCompileUnit();
@@ -1224,18 +1286,22 @@ bool getLocationInfo(const Value *V, std::string &DisplayName,
       if (!DDI) return false;
       DIVariable Var(cast<MDNode>(DDI->getVariable()));
 
-      if (const char *D = Var.getName())
+      StringRef D = Var.getName();
+      if (!D.empty())
         DisplayName = D;
       LineNo = Var.getLineNumber();
       Unit = Var.getCompileUnit();
       TypeD = Var.getType();
     }
 
-    if (const char *T = TypeD.getName())
+    StringRef T = TypeD.getName();
+    if (!T.empty())
       Type = T;
-    if (const char *F = Unit.getFilename())
+    StringRef F = Unit.getFilename();
+    if (!F.empty())
       File = F;
-    if (const char *D = Unit.getDirectory())
+    StringRef D = Unit.getDirectory();
+    if (!D.empty())
       Dir = D;
     return true;
   }
@@ -1350,21 +1416,35 @@ bool getLocationInfo(const Value *V, std::string &DisplayName,
     return DebugLoc::get(Id);
   }
 
-  /// isInlinedFnStart - Return true if FSI is starting an inlined function.
-  bool isInlinedFnStart(DbgFuncStartInst &FSI, const Function *CurrentFn) {
-    DISubprogram Subprogram(cast<MDNode>(FSI.getSubprogram()));
-    if (Subprogram.describes(CurrentFn))
-      return false;
-
-    return true;
+  /// getDISubprogram - Find the subprogram that encloses this scope.
+  DISubprogram getDISubprogram(MDNode *Scope) {
+    DIDescriptor D(Scope);
+    if (D.isNull())
+      return DISubprogram();
+    
+    if (D.isCompileUnit())
+      return DISubprogram();
+    
+    if (D.isSubprogram())
+      return DISubprogram(Scope);
+    
+    if (D.isLexicalBlock())
+      return getDISubprogram(DILexicalBlock(Scope).getContext().getNode());
+    
+    return DISubprogram();
   }
 
-  /// isInlinedFnEnd - Return true if REI is ending an inlined function.
-  bool isInlinedFnEnd(DbgRegionEndInst &REI, const Function *CurrentFn) {
-    DISubprogram Subprogram(cast<MDNode>(REI.getContext()));
-    if (Subprogram.isNull() || Subprogram.describes(CurrentFn))
-      return false;
-
-    return true;
+  /// getDICompositeType - Find the underlying composite type.
+  DICompositeType getDICompositeType(DIType T) {
+    if (T.isNull())
+      return DICompositeType();
+    
+    if (T.isCompositeType())
+      return DICompositeType(T.getNode());
+    
+    if (T.isDerivedType())
+      return getDICompositeType(DIDerivedType(T.getNode()).getTypeDerivedFrom());
+    
+    return DICompositeType();
   }
 }
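
[Editor's sketch -- not part of the commit. It assumes only the API visible in the hunks above (DILocation::getScope(), getDISubprogram(), getDICompositeType()); the helper name and setup are hypothetical.]

    #include "llvm/Analysis/DebugInfo.h"

    // Given the MDNode attached to an instruction under the "dbg" metadata
    // kind, recover the enclosing subprogram, and peel a derived type (e.g.
    // a typedef) down to the composite type it wraps.
    void inspectDebugInfo(llvm::MDNode *DbgNode, llvm::DIType Ty) {
      llvm::DILocation Loc(DbgNode);
      // getDISubprogram() recurses through lexical blocks until it reaches
      // a DISubprogram; compile units yield a null descriptor.
      llvm::DISubprogram SP = llvm::getDISubprogram(Loc.getScope().getNode());
      if (!SP.isNull()) {
        // ... e.g. SP.describes(F), as used elsewhere in this file ...
      }
      // getDICompositeType() follows getTypeDerivedFrom() links.
      llvm::DICompositeType CT = llvm::getDICompositeType(Ty);
      (void)CT; // illustrative only
    }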
diff --git a/libclamav/c++/llvm/lib/Analysis/DomPrinter.cpp b/libclamav/c++/llvm/lib/Analysis/DomPrinter.cpp
new file mode 100644
index 0000000..f1b44d0
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Analysis/DomPrinter.cpp
@@ -0,0 +1,265 @@
+//===- DomPrinter.cpp - DOT printer for the dominance trees    ------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file defines '-dot-dom' and '-dot-postdom' analysis passes, which emit
+// a dom.<fnname>.dot or postdom.<fnname>.dot file for each function in the
+// program, with a graph of the dominance/postdominance tree of that
+// function.
+//
+// There are also passes available to directly call dotty ('-view-dom' or
+// '-view-postdom'). By appending '-only', as in '-dot-dom-only', only the
+// names of the basic blocks are printed and their contents are hidden.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/Analysis/DomPrinter.h"
+#include "llvm/Pass.h"
+#include "llvm/Function.h"
+#include "llvm/Analysis/CFGPrinter.h"
+#include "llvm/Analysis/Dominators.h"
+#include "llvm/Analysis/PostDominators.h"
+
+using namespace llvm;
+
+namespace llvm {
+template<>
+struct DOTGraphTraits<DomTreeNode*> : public DefaultDOTGraphTraits {
+  static std::string getNodeLabel(DomTreeNode *Node, DomTreeNode *Graph,
+                                  bool ShortNames) {
+
+    BasicBlock *BB = Node->getBlock();
+
+    if (!BB)
+      return "Post dominance root node";
+
+    return DOTGraphTraits<const Function*>::getNodeLabel(BB, BB->getParent(),
+                                                         ShortNames);
+  }
+};
+
+template<>
+struct DOTGraphTraits<DominatorTree*> : public DOTGraphTraits<DomTreeNode*> {
+
+  static std::string getGraphName(DominatorTree *DT) {
+    return "Dominator tree";
+  }
+
+  static std::string getNodeLabel(DomTreeNode *Node,
+                                  DominatorTree *G,
+                                  bool ShortNames) {
+    return DOTGraphTraits<DomTreeNode*>::getNodeLabel(Node, G->getRootNode(),
+                                                      ShortNames);
+  }
+};
+
+template<>
+struct DOTGraphTraits<PostDominatorTree*>
+  : public DOTGraphTraits<DomTreeNode*> {
+  static std::string getGraphName(PostDominatorTree *DT) {
+    return "Post dominator tree";
+  }
+  static std::string getNodeLabel(DomTreeNode *Node,
+                                  PostDominatorTree *G,
+                                  bool ShortNames) {
+    return DOTGraphTraits<DomTreeNode*>::getNodeLabel(Node,
+                                                      G->getRootNode(),
+                                                      ShortNames);
+  }
+};
+}
+
+namespace {
+template <class Analysis, bool OnlyBBS>
+struct GenericGraphViewer : public FunctionPass {
+  std::string Name;
+
+  GenericGraphViewer(std::string GraphName, const void *ID) : FunctionPass(ID) {
+    Name = GraphName;
+  }
+
+  virtual bool runOnFunction(Function &F) {
+    Analysis *Graph;
+
+    Graph = &getAnalysis<Analysis>();
+    ViewGraph(Graph, Name, OnlyBBS);
+
+    return false;
+  }
+
+  virtual void getAnalysisUsage(AnalysisUsage &AU) const {
+    AU.setPreservesAll();
+    AU.addRequired<Analysis>();
+  }
+};
+
+struct DomViewer
+  : public GenericGraphViewer<DominatorTree, false> {
+  static char ID;
+  DomViewer() : GenericGraphViewer<DominatorTree, false>("dom", &ID){}
+};
+
+struct DomOnlyViewer
+  : public GenericGraphViewer<DominatorTree, true> {
+  static char ID;
+  DomOnlyViewer() : GenericGraphViewer<DominatorTree, true>("domonly", &ID){}
+};
+
+struct PostDomViewer
+  : public GenericGraphViewer<PostDominatorTree, false> {
+  static char ID;
+  PostDomViewer() :
+    GenericGraphViewer<PostDominatorTree, false>("postdom", &ID){}
+};
+
+struct PostDomOnlyViewer
+  : public GenericGraphViewer<PostDominatorTree, true> {
+  static char ID;
+  PostDomOnlyViewer() :
+    GenericGraphViewer<PostDominatorTree, true>("postdomonly", &ID){}
+};
+} // end anonymous namespace
+
+char DomViewer::ID = 0;
+RegisterPass<DomViewer> A("view-dom",
+                          "View dominance tree of function");
+
+char DomOnlyViewer::ID = 0;
+RegisterPass<DomOnlyViewer> B("view-dom-only",
+                              "View dominance tree of function "
+                              "(with no function bodies)");
+
+char PostDomViewer::ID = 0;
+RegisterPass<PostDomViewer> C("view-postdom",
+                              "View postdominance tree of function");
+
+char PostDomOnlyViewer::ID = 0;
+RegisterPass<PostDomOnlyViewer> D("view-postdom-only",
+                                  "View postdominance tree of function "
+                                  "(with no function bodies)");
+
+namespace {
+template <class Analysis, bool OnlyBBS>
+struct GenericGraphPrinter : public FunctionPass {
+
+  std::string Name;
+
+  GenericGraphPrinter(std::string GraphName, const void *ID)
+    : FunctionPass(ID) {
+    Name = GraphName;
+  }
+
+  virtual bool runOnFunction(Function &F) {
+    Analysis *Graph;
+    std::string Filename = Name + "." + F.getNameStr() + ".dot";
+    errs() << "Writing '" << Filename << "'...";
+
+    std::string ErrorInfo;
+    raw_fd_ostream File(Filename.c_str(), ErrorInfo);
+    Graph = &getAnalysis<Analysis>();
+
+    if (ErrorInfo.empty())
+      WriteGraph(File, Graph, OnlyBBS);
+    else
+      errs() << "  error opening file for writing!";
+    errs() << "\n";
+    return false;
+  }
+
+  virtual void getAnalysisUsage(AnalysisUsage &AU) const {
+    AU.setPreservesAll();
+    AU.addRequired<Analysis>();
+  }
+};
+
+struct DomPrinter
+  : public GenericGraphPrinter<DominatorTree, false> {
+  static char ID;
+  DomPrinter() : GenericGraphPrinter<DominatorTree, false>("dom", &ID) {}
+};
+
+struct DomOnlyPrinter
+  : public GenericGraphPrinter<DominatorTree, true> {
+  static char ID;
+  DomOnlyPrinter() : GenericGraphPrinter<DominatorTree, true>("domonly", &ID) {}
+};
+
+struct PostDomPrinter
+  : public GenericGraphPrinter<PostDominatorTree, false> {
+  static char ID;
+  PostDomPrinter() :
+    GenericGraphPrinter<PostDominatorTree, false>("postdom", &ID) {}
+};
+
+struct PostDomOnlyPrinter
+  : public GenericGraphPrinter<PostDominatorTree, true> {
+  static char ID;
+  PostDomOnlyPrinter() :
+    GenericGraphPrinter<PostDominatorTree, true>("postdomonly", &ID) {}
+};
+} // end anonymous namespace
+
+
+
+char DomPrinter::ID = 0;
+RegisterPass<DomPrinter> E("dot-dom",
+                           "Print dominance tree of function "
+                           "to 'dot' file");
+
+char DomOnlyPrinter::ID = 0;
+RegisterPass<DomOnlyPrinter> F("dot-dom-only",
+                               "Print dominance tree of function "
+                               "to 'dot' file "
+                               "(with no function bodies)");
+
+char PostDomPrinter::ID = 0;
+RegisterPass<PostDomPrinter> G("dot-postdom",
+                               "Print postdominance tree of function "
+                               "to 'dot' file");
+
+char PostDomOnlyPrinter::ID = 0;
+RegisterPass<PostDomOnlyPrinter> H("dot-postdom-only",
+                                   "Print postdominance tree of function "
+                                   "to 'dot' file "
+                                   "(with no function bodies)");
+
+// Create methods available outside of this file so that they can be
+// referenced from "include/llvm/LinkAllPasses.h". Otherwise the passes
+// would be stripped out by link-time optimization.
+
+FunctionPass *llvm::createDomPrinterPass() {
+  return new DomPrinter();
+}
+
+FunctionPass *llvm::createDomOnlyPrinterPass() {
+  return new DomOnlyPrinter();
+}
+
+FunctionPass *llvm::createDomViewerPass() {
+  return new DomViewer();
+}
+
+FunctionPass *llvm::createDomOnlyViewerPass() {
+  return new DomOnlyViewer();
+}
+
+FunctionPass *llvm::createPostDomPrinterPass() {
+  return new PostDomPrinter();
+}
+
+FunctionPass *llvm::createPostDomOnlyPrinterPass() {
+  return new PostDomOnlyPrinter();
+}
+
+FunctionPass *llvm::createPostDomViewerPass() {
+  return new PostDomViewer();
+}
+
+FunctionPass *llvm::createPostDomOnlyViewerPass() {
+  return new PostDomOnlyViewer();
+}
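
[Editor's sketch -- not part of the commit. It shows one way to schedule the printer passes declared above; the driver function is hypothetical, and the PassManager usage assumes the standard API of this LLVM version.]

    #include "llvm/Module.h"
    #include "llvm/PassManager.h"
    #include "llvm/Analysis/DomPrinter.h"

    // Writes dom.<fnname>.dot and postdomonly.<fnname>.dot for every
    // function in M, using the factory functions defined at the end of
    // DomPrinter.cpp.
    void emitDomTreeDots(llvm::Module &M) {
      llvm::PassManager PM;
      PM.add(llvm::createDomPrinterPass());
      PM.add(llvm::createPostDomOnlyPrinterPass());
      PM.run(M);
    }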
diff --git a/libclamav/c++/llvm/lib/Analysis/IPA/Andersens.cpp b/libclamav/c++/llvm/lib/Analysis/IPA/Andersens.cpp
index 1c9159d..e12db81 100644
--- a/libclamav/c++/llvm/lib/Analysis/IPA/Andersens.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/IPA/Andersens.cpp
@@ -59,12 +59,11 @@
 #include "llvm/Instructions.h"
 #include "llvm/Module.h"
 #include "llvm/Pass.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/InstIterator.h"
 #include "llvm/Support/InstVisitor.h"
 #include "llvm/Analysis/AliasAnalysis.h"
-#include "llvm/Analysis/MallocHelper.h"
+#include "llvm/Analysis/MemoryBuiltins.h"
 #include "llvm/Analysis/Passes.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/System/Atomic.h"
@@ -126,8 +125,8 @@ namespace {
     static bool isPod() { return true; }
   };
 
-  class VISIBILITY_HIDDEN Andersens : public ModulePass, public AliasAnalysis,
-                                      private InstVisitor<Andersens> {
+  class Andersens : public ModulePass, public AliasAnalysis,
+                    private InstVisitor<Andersens> {
     struct Node;
 
     /// Constraint - Objects of this structure are used to represent the various
@@ -485,7 +484,6 @@ namespace {
                       const Value *V2, unsigned V2Size);
     virtual ModRefResult getModRefInfo(CallSite CS, Value *P, unsigned Size);
     virtual ModRefResult getModRefInfo(CallSite CS1, CallSite CS2);
-    void getMustAliases(Value *P, std::vector<Value*> &RetVals);
     bool pointsToConstantMemory(const Value *P);
 
     virtual void deleteValue(Value *V) {
@@ -519,7 +517,7 @@ namespace {
     /// getObject - Return the node corresponding to the memory object for the
     /// specified global or allocation instruction.
     unsigned getObject(Value *V) const {
-      DenseMap<Value*, unsigned>::iterator I = ObjectNodes.find(V);
+      DenseMap<Value*, unsigned>::const_iterator I = ObjectNodes.find(V);
       assert(I != ObjectNodes.end() &&
              "Value does not have an object in the points-to graph!");
       return I->second;
@@ -528,7 +526,7 @@ namespace {
     /// getReturnNode - Return the node representing the return value for the
     /// specified function.
     unsigned getReturnNode(Function *F) const {
-      DenseMap<Function*, unsigned>::iterator I = ReturnNodes.find(F);
+      DenseMap<Function*, unsigned>::const_iterator I = ReturnNodes.find(F);
       assert(I != ReturnNodes.end() && "Function does not return a value!");
       return I->second;
     }
@@ -536,7 +534,7 @@ namespace {
     /// getVarargNode - Return the node representing the variable arguments
     /// formal for the specified function.
     unsigned getVarargNode(Function *F) const {
-      DenseMap<Function*, unsigned>::iterator I = VarargNodes.find(F);
+      DenseMap<Function*, unsigned>::const_iterator I = VarargNodes.find(F);
       assert(I != VarargNodes.end() && "Function does not take var args!");
       return I->second;
     }
@@ -594,11 +592,12 @@ namespace {
     void visitReturnInst(ReturnInst &RI);
     void visitInvokeInst(InvokeInst &II) { visitCallSite(CallSite(&II)); }
     void visitCallInst(CallInst &CI) { 
-      if (isMalloc(&CI)) visitAllocationInst(CI);
+      if (isMalloc(&CI)) visitAlloc(CI);
       else visitCallSite(CallSite(&CI)); 
     }
     void visitCallSite(CallSite CS);
-    void visitAllocationInst(Instruction &I);
+    void visitAllocaInst(AllocaInst &I);
+    void visitAlloc(Instruction &I);
     void visitLoadInst(LoadInst &LI);
     void visitStoreInst(StoreInst &SI);
     void visitGetElementPtrInst(GetElementPtrInst &GEP);
@@ -680,32 +679,6 @@ Andersens::getModRefInfo(CallSite CS1, CallSite CS2) {
   return AliasAnalysis::getModRefInfo(CS1,CS2);
 }
 
-/// getMustAliases - We can provide must alias information if we know that a
-/// pointer can only point to a specific function or the null pointer.
-/// Unfortunately we cannot determine must-alias information for global
-/// variables or any other memory objects because we do not track whether
-/// a pointer points to the beginning of an object or a field of it.
-void Andersens::getMustAliases(Value *P, std::vector<Value*> &RetVals) {
-  Node *N = &GraphNodes[FindNode(getNode(P))];
-  if (N->PointsTo->count() == 1) {
-    Node *Pointee = &GraphNodes[N->PointsTo->find_first()];
-    // If a function is the only object in the points-to set, then it must be
-    // the destination.  Note that we can't handle global variables here,
-    // because we don't know if the pointer is actually pointing to a field of
-    // the global or to the beginning of it.
-    if (Value *V = Pointee->getValue()) {
-      if (Function *F = dyn_cast<Function>(V))
-        RetVals.push_back(F);
-    } else {
-      // If the object in the points-to set is the null object, then the null
-      // pointer is a must alias.
-      if (Pointee == &GraphNodes[NullObject])
-        RetVals.push_back(Constant::getNullValue(P->getType()));
-    }
-  }
-  AliasAnalysis::getMustAliases(P, RetVals);
-}
-
 /// pointsToConstantMemory - If we can determine that this pointer only points
 /// to constant memory, return true.  In practice, this means that if the
 /// pointer can only point to constant globals, functions, or the null pointer,
@@ -792,7 +765,7 @@ void Andersens::IdentifyObjects(Module &M) {
       // object.
       if (isa<PointerType>(II->getType())) {
         ValueNodes[&*II] = NumObjects++;
-        if (AllocationInst *AI = dyn_cast<AllocationInst>(&*II))
+        if (AllocaInst *AI = dyn_cast<AllocaInst>(&*II))
           ObjectNodes[AI] = NumObjects++;
         else if (isMalloc(&*II))
           ObjectNodes[&*II] = NumObjects++;
@@ -1016,6 +989,8 @@ bool Andersens::AnalyzeUsesOfFunction(Value *V) {
       }
     } else if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(*UI)) {
       if (AnalyzeUsesOfFunction(GEP)) return true;
+    } else if (isFreeCall(*UI)) {
+      return false;
     } else if (CallInst *CI = dyn_cast<CallInst>(*UI)) {
       // Make sure that this is just the function being called, not that it is
       // passing into the function.
@@ -1037,8 +1012,6 @@ bool Andersens::AnalyzeUsesOfFunction(Value *V) {
     } else if (ICmpInst *ICI = dyn_cast<ICmpInst>(*UI)) {
       if (!isa<ConstantPointerNull>(ICI->getOperand(1)))
         return true;  // Allow comparison against null.
-    } else if (isa<FreeInst>(*UI)) {
-      return false;
     } else {
       return true;
     }
@@ -1156,7 +1129,6 @@ void Andersens::visitInstruction(Instruction &I) {
   case Instruction::Switch:
   case Instruction::Unwind:
   case Instruction::Unreachable:
-  case Instruction::Free:
   case Instruction::ICmp:
   case Instruction::FCmp:
     return;
@@ -1167,7 +1139,11 @@ void Andersens::visitInstruction(Instruction &I) {
   }
 }
 
-void Andersens::visitAllocationInst(Instruction &I) {
+void Andersens::visitAllocaInst(AllocaInst &I) {
+  visitAlloc(I);
+}
+
+void Andersens::visitAlloc(Instruction &I) {
   unsigned ObjectIndex = getObject(&I);
   GraphNodes[ObjectIndex].setValue(&I);
   Constraints.push_back(Constraint(Constraint::AddressOf, getNodeValue(I),
@@ -2819,7 +2795,7 @@ void Andersens::PrintNode(const Node *N) const {
   else
     errs() << "(unnamed)";
 
-  if (isa<GlobalValue>(V) || isa<AllocationInst>(V) || isMalloc(V))
+  if (isa<GlobalValue>(V) || isa<AllocaInst>(V) || isMalloc(V))
     if (N == &GraphNodes[getObject(V)])
       errs() << "<mem>";
 }
diff --git a/libclamav/c++/llvm/lib/Analysis/IPA/CallGraph.cpp b/libclamav/c++/llvm/lib/Analysis/IPA/CallGraph.cpp
index e2b288d..9cd8bb8 100644
--- a/libclamav/c++/llvm/lib/Analysis/IPA/CallGraph.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/IPA/CallGraph.cpp
@@ -17,7 +17,6 @@
 #include "llvm/Instructions.h"
 #include "llvm/IntrinsicInst.h"
 #include "llvm/Support/CallSite.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
@@ -26,7 +25,7 @@ namespace {
 //===----------------------------------------------------------------------===//
 // BasicCallGraph class definition
 //
-class VISIBILITY_HIDDEN BasicCallGraph : public CallGraph, public ModulePass {
+class BasicCallGraph : public CallGraph, public ModulePass {
   // Root is root of the call graph, or the external node if a 'main' function
   // couldn't be found.
   //
diff --git a/libclamav/c++/llvm/lib/Analysis/IPA/GlobalsModRef.cpp b/libclamav/c++/llvm/lib/Analysis/IPA/GlobalsModRef.cpp
index f5c1108..a979a99 100644
--- a/libclamav/c++/llvm/lib/Analysis/IPA/GlobalsModRef.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/IPA/GlobalsModRef.cpp
@@ -23,8 +23,7 @@
 #include "llvm/DerivedTypes.h"
 #include "llvm/Analysis/AliasAnalysis.h"
 #include "llvm/Analysis/CallGraph.h"
-#include "llvm/Analysis/MallocHelper.h"
-#include "llvm/Support/Compiler.h"
+#include "llvm/Analysis/MemoryBuiltins.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/InstIterator.h"
 #include "llvm/ADT/Statistic.h"
@@ -44,7 +43,7 @@ namespace {
   /// function in the program.  Later, the entries for these functions are
   /// removed if the function is found to call an external function (in which
   /// case we know nothing about it).
-  struct VISIBILITY_HIDDEN FunctionRecord {
+  struct FunctionRecord {
     /// GlobalInfo - Maintain mod/ref info for all of the globals without
     /// addresses taken that are read or written (transitively) by this
     /// function.
@@ -69,8 +68,7 @@ namespace {
   };
 
   /// GlobalsModRef - The actual analysis pass.
-  class VISIBILITY_HIDDEN GlobalsModRef
-      : public ModulePass, public AliasAnalysis {
+  class GlobalsModRef : public ModulePass, public AliasAnalysis {
     /// NonAddressTakenGlobals - The globals that do not have their addresses
     /// taken.
     std::set<GlobalValue*> NonAddressTakenGlobals;
@@ -113,7 +111,6 @@ namespace {
     ModRefResult getModRefInfo(CallSite CS1, CallSite CS2) {
       return AliasAnalysis::getModRefInfo(CS1,CS2);
     }
-    bool hasNoModRefInfoForCalls() const { return false; }
 
     /// getModRefBehavior - Return the behavior of the specified function if
     /// called from the specified call site.  The call site may be null in which
@@ -240,6 +237,8 @@ bool GlobalsModRef::AnalyzeUsesOfPointer(Value *V,
     } else if (BitCastInst *BCI = dyn_cast<BitCastInst>(*UI)) {
       if (AnalyzeUsesOfPointer(BCI, Readers, Writers, OkayStoreDest))
         return true;
+    } else if (isFreeCall(*UI)) {
+      Writers.push_back(cast<Instruction>(*UI)->getParent()->getParent());
     } else if (CallInst *CI = dyn_cast<CallInst>(*UI)) {
       // Make sure that this is just the function being called, not that it is
       // passing into the function.
@@ -261,8 +260,6 @@ bool GlobalsModRef::AnalyzeUsesOfPointer(Value *V,
     } else if (ICmpInst *ICI = dyn_cast<ICmpInst>(*UI)) {
       if (!isa<ConstantPointerNull>(ICI->getOperand(1)))
         return true;  // Allow comparison against null.
-    } else if (FreeInst *F = dyn_cast<FreeInst>(*UI)) {
-      Writers.push_back(F->getParent()->getParent());
     } else {
       return true;
     }
@@ -303,7 +300,7 @@ bool GlobalsModRef::AnalyzeIndirectGlobalMemory(GlobalValue *GV) {
       // Check the value being stored.
       Value *Ptr = SI->getOperand(0)->getUnderlyingObject();
 
-      if (isa<MallocInst>(Ptr) || isMalloc(Ptr)) {
+      if (isMalloc(Ptr)) {
         // Okay, easy case.
       } else if (CallInst *CI = dyn_cast<CallInst>(Ptr)) {
         Function *F = CI->getCalledFunction();
@@ -439,8 +436,8 @@ void GlobalsModRef::AnalyzeCallGraph(CallGraph &CG, Module &M) {
           if (cast<StoreInst>(*II).isVolatile())
             // Treat volatile stores as reading memory somewhere.
             FunctionEffect |= Ref;
-        } else if (isa<MallocInst>(*II) || isa<FreeInst>(*II) ||
-                   isMalloc(&cast<Instruction>(*II))) {
+        } else if (isMalloc(&cast<Instruction>(*II)) ||
+                   isFreeCall(&cast<Instruction>(*II))) {
           FunctionEffect |= ModRef;
         }
 
diff --git a/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp b/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp
index 543e017..37747b6 100644
--- a/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp
@@ -151,6 +151,8 @@ static bool IVUseShouldUsePostIncValue(Instruction *User, Instruction *IV,
   if (L->contains(User->getParent())) return false;
 
   BasicBlock *LatchBlock = L->getLoopLatch();
+  if (!LatchBlock)
+    return false;
 
   // Ok, the user is outside of the loop.  If it is dominated by the latch
   // block, use the post-inc value.
@@ -206,6 +208,10 @@ bool IVUsers::AddUsersIfInteresting(Instruction *I) {
   if (!getSCEVStartAndStride(ISE, L, UseLoop, Start, Stride, SE, DT))
     return false;  // Non-reducible symbolic expression, bail out.
 
+  // Keep things simple. Don't touch loop-variant strides.
+  if (!Stride->isLoopInvariant(L) && L->contains(I->getParent()))
+    return false;
+
   SmallPtrSet<Instruction *, 4> UniqueUsers;
   for (Value::use_iterator UI = I->use_begin(), E = I->use_end();
        UI != E; ++UI) {
@@ -265,6 +271,18 @@ bool IVUsers::AddUsersIfInteresting(Instruction *I) {
   return true;
 }
 
+void IVUsers::AddUser(const SCEV *Stride, const SCEV *Offset,
+                      Instruction *User, Value *Operand) {
+  IVUsersOfOneStride *StrideUses = IVUsesByStride[Stride];
+  if (!StrideUses) {    // First occurrence of this stride?
+    StrideOrder.push_back(Stride);
+    StrideUses = new IVUsersOfOneStride(Stride);
+    IVUses.push_back(StrideUses);
+    IVUsesByStride[Stride] = StrideUses;
+  }
+  IVUsesByStride[Stride]->addUser(Offset, User, Operand);
+}
+
 IVUsers::IVUsers()
  : LoopPass(&ID) {
 }
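
[Editor's sketch -- not part of the commit. 'IU', the SCEVs, and the instruction are hypothetical stand-ins; only the IVUsers::AddUser signature defined above is assumed.]

    #include "llvm/Analysis/IVUsers.h"

    // Register a use of an induction variable under a known stride/offset.
    // The first call for a given stride allocates its IVUsersOfOneStride
    // bucket; later calls append to the same bucket.
    void recordIVUse(llvm::IVUsers &IU, const llvm::SCEV *Stride,
                     const llvm::SCEV *Offset, llvm::Instruction *UserInst,
                     llvm::Value *Operand) {
      IU.AddUser(Stride, Offset, UserInst, Operand);
    }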
diff --git a/libclamav/c++/llvm/lib/Analysis/InlineCost.cpp b/libclamav/c++/llvm/lib/Analysis/InlineCost.cpp
new file mode 100644
index 0000000..bd9377b
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Analysis/InlineCost.cpp
@@ -0,0 +1,344 @@
+//===- InlineCost.cpp - Cost analysis for inliner -------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements inline cost analysis.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/Analysis/InlineCost.h"
+#include "llvm/Support/CallSite.h"
+#include "llvm/CallingConv.h"
+#include "llvm/IntrinsicInst.h"
+#include "llvm/ADT/SmallPtrSet.h"
+using namespace llvm;
+
+// CountCodeReductionForConstant - Figure out an approximation for how many
+// instructions will be constant folded if the specified value is constant.
+//
+unsigned InlineCostAnalyzer::FunctionInfo::
+         CountCodeReductionForConstant(Value *V) {
+  unsigned Reduction = 0;
+  for (Value::use_iterator UI = V->use_begin(), E = V->use_end(); UI != E; ++UI)
+    if (isa<BranchInst>(*UI))
+      Reduction += 40;          // Eliminating a conditional branch is a big win
+    else if (SwitchInst *SI = dyn_cast<SwitchInst>(*UI))
+      // Eliminating a switch is a big win, proportional to the number of edges
+      // deleted.
+      Reduction += (SI->getNumSuccessors()-1) * 40;
+    else if (isa<IndirectBrInst>(*UI))
+      // Eliminating an indirect branch is a big win.
+      Reduction += 200;
+    else if (CallInst *CI = dyn_cast<CallInst>(*UI)) {
+      // Turning an indirect call into a direct call is a BIG win
+      Reduction += CI->getCalledValue() == V ? 500 : 0;
+    } else if (InvokeInst *II = dyn_cast<InvokeInst>(*UI)) {
+      // Turning an indirect call into a direct call is a BIG win
+      Reduction += II->getCalledValue() == V ? 500 : 0;
+    } else {
+      // Figure out if this instruction will be removed due to simple constant
+      // propagation.
+      Instruction &Inst = cast<Instruction>(**UI);
+      
+      // We can't constant propagate instructions which have effects or
+      // read memory.
+      //
+      // FIXME: It would be nice to capture the fact that a load from a
+      // pointer-to-constant-global is actually a *really* good thing to zap.
+      // Unfortunately, we don't know the pointer that may get propagated here,
+      // so we can't make this decision.
+      if (Inst.mayReadFromMemory() || Inst.mayHaveSideEffects() ||
+          isa<AllocaInst>(Inst)) 
+        continue;
+
+      bool AllOperandsConstant = true;
+      for (unsigned i = 0, e = Inst.getNumOperands(); i != e; ++i)
+        if (!isa<Constant>(Inst.getOperand(i)) && Inst.getOperand(i) != V) {
+          AllOperandsConstant = false;
+          break;
+        }
+
+      if (AllOperandsConstant) {
+        // We will get to remove this instruction...
+        Reduction += 7;
+
+        // And any other instructions that use it which become constants
+        // themselves.
+        Reduction += CountCodeReductionForConstant(&Inst);
+      }
+    }
+
+  return Reduction;
+}
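// Editor's note -- not part of the commit: a worked example of the weights
// above, with hypothetical IR.  If V feeds one conditional branch (+40), one
// switch with three successors (+(3-1)*40 = +80), and an 'add V, 1' whose
// other operand is constant (+7, plus whatever the add's own users fold to),
// the estimated reduction is at least 40 + 80 + 7 = 127.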
+
+// CountCodeReductionForAlloca - Figure out an approximation of how much smaller
+// the function will be if it is inlined into a context where an argument
+// becomes an alloca.
+//
+unsigned InlineCostAnalyzer::FunctionInfo::
+         CountCodeReductionForAlloca(Value *V) {
+  if (!isa<PointerType>(V->getType())) return 0;  // Not a pointer
+  unsigned Reduction = 0;
+  for (Value::use_iterator UI = V->use_begin(), E = V->use_end(); UI != E;++UI){
+    Instruction *I = cast<Instruction>(*UI);
+    if (isa<LoadInst>(I) || isa<StoreInst>(I))
+      Reduction += 10;
+    else if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(I)) {
+      // If the GEP has variable indices, we won't be able to do much with it.
+      if (!GEP->hasAllConstantIndices())
+        Reduction += CountCodeReductionForAlloca(GEP)+15;
+    } else {
+      // If there is some other strange instruction, we're not going to be able
+      // to do much if we inline this.
+      return 0;
+    }
+  }
+
+  return Reduction;
+}
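// Editor's note -- not part of the commit: a worked example.  A pointer that
// becomes an alloca and is used by two loads and one store scores
// 2*10 + 10 = 30.  A GEP with a variable index adds 15 plus the score of the
// GEP's own users; a GEP with all-constant indices adds nothing (it is
// expected to fold into its load/store).  Any other kind of user makes the
// whole estimate collapse to 0.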
+
+/// analyzeBasicBlock - Fill in the current structure with information gleaned
+/// from the specified block.
+void CodeMetrics::analyzeBasicBlock(const BasicBlock *BB) {
+  ++NumBlocks;
+
+  for (BasicBlock::const_iterator II = BB->begin(), E = BB->end();
+       II != E; ++II) {
+    if (isa<PHINode>(II)) continue;           // PHI nodes don't count.
+
+    // Special handling for calls.
+    if (isa<CallInst>(II) || isa<InvokeInst>(II)) {
+      if (isa<DbgInfoIntrinsic>(II))
+        continue;  // Debug intrinsics don't count as size.
+      
+      CallSite CS = CallSite::get(const_cast<Instruction*>(&*II));
+      
+      // If this function contains a call to setjmp or _setjmp, never inline
+      // it.  This is a hack because we depend on the user marking their local
+      // variables as volatile if they are live across a setjmp call, and they
+      // probably won't do this in callers.
+      if (Function *F = CS.getCalledFunction())
+        if (F->isDeclaration() && 
+            (F->getName() == "setjmp" || F->getName() == "_setjmp"))
+          NeverInline = true;
+
+      // Calls often compile into many machine instructions.  Bump up their
+      // cost to reflect this.
+      if (!isa<IntrinsicInst>(II))
+        NumInsts += InlineConstants::CallPenalty;
+    }
+    
+    if (const AllocaInst *AI = dyn_cast<AllocaInst>(II)) {
+      if (!AI->isStaticAlloca())
+        this->usesDynamicAlloca = true;
+    }
+
+    if (isa<ExtractElementInst>(II) || isa<VectorType>(II->getType()))
+      ++NumVectorInsts; 
+    
+    // Noop casts, including ptr <-> int,  don't count.
+    if (const CastInst *CI = dyn_cast<CastInst>(II)) {
+      if (CI->isLosslessCast() || isa<IntToPtrInst>(CI) || 
+          isa<PtrToIntInst>(CI))
+        continue;
+    } else if (const GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(II)){
+      // If a GEP has all constant indices, it will probably be folded with
+      // a load/store.
+      if (GEPI->hasAllConstantIndices())
+        continue;
+    }
+
+    ++NumInsts;
+  }
+  
+  if (isa<ReturnInst>(BB->getTerminator()))
+    ++NumRets;
+  
+  // We never want to inline functions that contain an indirectbr.  Inlining
+  // one would be incorrect because all the blockaddresses (in static global
+  // initializers, for example) would still refer to the original function, so
+  // an indirect jump in the inlined copy would jump back into the original
+  // function, which is extremely undefined behavior.
+  if (isa<IndirectBrInst>(BB->getTerminator()))
+    NeverInline = true;
+}
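// Editor's note -- not part of the commit: a hypothetical function that this
// analysis would mark NeverInline, because it calls the external setjmp:
//
//   jmp_buf Env;                              // from <setjmp.h>
//   int Checkpoint(void) { return setjmp(Env); }
//
// Callers are unlikely to have marked locals that are live across the call
// as volatile, so inlining Checkpoint could miscompile them.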
+
+/// analyzeFunction - Fill in the current structure with information gleaned
+/// from the specified function.
+void CodeMetrics::analyzeFunction(Function *F) {
+  // Look at the size of the callee.
+  for (Function::const_iterator BB = F->begin(), E = F->end(); BB != E; ++BB)
+    analyzeBasicBlock(&*BB);
+}
+
+/// analyzeFunction - Fill in the current structure with information gleaned
+/// from the specified function.
+void InlineCostAnalyzer::FunctionInfo::analyzeFunction(Function *F) {
+  Metrics.analyzeFunction(F);
+
+  // A function with exactly one return has it removed during the inlining
+  // process (see InlineFunction), so don't count it.
+  // FIXME: This knowledge should really be encoded outside of FunctionInfo.
+  if (Metrics.NumRets==1)
+    --Metrics.NumInsts;
+
+  // Check out all of the arguments to the function, figuring out how much
+  // code can be eliminated if one of the arguments is a constant.
+  for (Function::arg_iterator I = F->arg_begin(), E = F->arg_end(); I != E; ++I)
+    ArgumentWeights.push_back(ArgInfo(CountCodeReductionForConstant(I),
+                                      CountCodeReductionForAlloca(I)));
+}
+
+// getInlineCost - The heuristic used to determine if we should inline the
+// function call or not.
+//
+InlineCost InlineCostAnalyzer::getInlineCost(CallSite CS,
+                               SmallPtrSet<const Function *, 16> &NeverInline) {
+  Instruction *TheCall = CS.getInstruction();
+  Function *Callee = CS.getCalledFunction();
+  Function *Caller = TheCall->getParent()->getParent();
+
+  // Don't inline functions which can be redefined at link-time to mean
+  // something else.  Don't inline functions marked noinline.
+  if (Callee->mayBeOverridden() ||
+      Callee->hasFnAttr(Attribute::NoInline) || NeverInline.count(Callee))
+    return llvm::InlineCost::getNever();
+
+  // InlineCost - This value measures how good an inline candidate this call
+  // site is.  A lower inline cost makes it more likely for the call to be
+  // inlined.  This value may go negative.
+  //
+  int InlineCost = 0;
+  
+  // If there is only one call of the function, and it has internal linkage,
+  // make it almost guaranteed to be inlined.
+  //
+  if (Callee->hasLocalLinkage() && Callee->hasOneUse())
+    InlineCost += InlineConstants::LastCallToStaticBonus;
+  
+  // If this function uses the coldcc calling convention, prefer not to inline
+  // it.
+  if (Callee->getCallingConv() == CallingConv::Cold)
+    InlineCost += InlineConstants::ColdccPenalty;
+  
+  // If the instruction after the call, or the normal destination of the
+  // invoke, is an unreachable instruction, the function is noreturn.  As
+  // such, there is little point in inlining this.
+  if (InvokeInst *II = dyn_cast<InvokeInst>(TheCall)) {
+    if (isa<UnreachableInst>(II->getNormalDest()->begin()))
+      InlineCost += InlineConstants::NoreturnPenalty;
+  } else if (isa<UnreachableInst>(++BasicBlock::iterator(TheCall)))
+    InlineCost += InlineConstants::NoreturnPenalty;
+  
+  // Get information about the callee...
+  FunctionInfo &CalleeFI = CachedFunctionInfo[Callee];
+  
+  // If we haven't calculated this information yet, do so now.
+  if (CalleeFI.Metrics.NumBlocks == 0)
+    CalleeFI.analyzeFunction(Callee);
+
+  // If we should never inline this, return a huge cost.
+  if (CalleeFI.Metrics.NeverInline)
+    return InlineCost::getNever();
+
+  // FIXME: It would be nice to kill off CalleeFI.NeverInline. Then we
+  // could move this up and avoid computing the FunctionInfo for
+  // things we are going to just return always inline for. This
+  // requires handling setjmp somewhere else, however.
+  if (!Callee->isDeclaration() && Callee->hasFnAttr(Attribute::AlwaysInline))
+    return InlineCost::getAlways();
+    
+  if (CalleeFI.Metrics.usesDynamicAlloca) {
+    // Get information about the caller...
+    FunctionInfo &CallerFI = CachedFunctionInfo[Caller];
+
+    // If we haven't calculated this information yet, do so now.
+    if (CallerFI.Metrics.NumBlocks == 0)
+      CallerFI.analyzeFunction(Caller);
+
+    // Don't inline a callee with dynamic alloca into a caller without them.
+    // Functions containing dynamic alloca's are inefficient in various ways;
+    // don't create more inefficiency.
+    if (!CallerFI.Metrics.usesDynamicAlloca)
+      return InlineCost::getNever();
+  }
+
+  // Add to the inline quality for properties that make the call valuable to
+  // inline.  This includes factors that indicate that the result of inlining
+  // the function will be optimizable.  Currently this just looks at arguments
+  // passed into the function.
+  //
+  unsigned ArgNo = 0;
+  for (CallSite::arg_iterator I = CS.arg_begin(), E = CS.arg_end();
+       I != E; ++I, ++ArgNo) {
+    // Each argument passed in has a cost at both the caller and the callee
+    // sides.  This favors functions that take many arguments over functions
+    // that take few arguments.
+    InlineCost -= 20;
+    
+    // If this is a function being passed in, it is very likely that we will be
+    // able to turn an indirect function call into a direct function call.
+    if (isa<Function>(I))
+      InlineCost -= 100;
+    
+    // If an alloca is passed in, inlining this function is likely to allow
+    // significant future optimization possibilities (like scalar promotion, and
+    // scalarization), so encourage the inlining of the function.
+    //
+    else if (isa<AllocaInst>(I)) {
+      if (ArgNo < CalleeFI.ArgumentWeights.size())
+        InlineCost -= CalleeFI.ArgumentWeights[ArgNo].AllocaWeight;
+      
+      // If this is a constant being passed into the function, use the argument
+      // weights calculated for the callee to determine how much will be folded
+      // away with this information.
+    } else if (isa<Constant>(I)) {
+      if (ArgNo < CalleeFI.ArgumentWeights.size())
+        InlineCost -= CalleeFI.ArgumentWeights[ArgNo].ConstantWeight;
+    }
+  }
+  
+  // Now that we have considered all of the factors that make the call site more
+  // likely to be inlined, look at factors that make us not want to inline it.
+  
+  // Don't inline into something too big, which would make it bigger.
+  // "size" here is the number of basic blocks, not instructions.
+  //
+  InlineCost += Caller->size()/15;
+  
+  // Look at the size of the callee. Each instruction counts as 5.
+  InlineCost += CalleeFI.Metrics.NumInsts*5;
+
+  return llvm::InlineCost::get(InlineCost);
+}
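// Editor's note -- not part of the commit: a worked example using only the
// terms visible above, with hypothetical numbers.  A call passing two
// arguments (-20 each), one a constant with a precomputed ConstantWeight of
// 14, from a caller with 30 basic blocks (+30/15 = +2), to a callee with 8
// counted instructions (+8*5 = +40), scores 0 - 20 - 20 - 14 + 2 + 40 = -12,
// i.e. mildly attractive.  The named bonuses and penalties above
// (LastCallToStaticBonus, ColdccPenalty, NoreturnPenalty) shift the total
// further when they apply.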
+
+// getInlineFudgeFactor - Return a > 1.0 factor if the inliner should use a
+// higher threshold to determine if the function call should be inlined.
+float InlineCostAnalyzer::getInlineFudgeFactor(CallSite CS) {
+  Function *Callee = CS.getCalledFunction();
+  
+  // Get information about the callee...
+  FunctionInfo &CalleeFI = CachedFunctionInfo[Callee];
+  
+  // If we haven't calculated this information yet, do so now.
+  if (CalleeFI.Metrics.NumBlocks == 0)
+    CalleeFI.analyzeFunction(Callee);
+
+  float Factor = 1.0f;
+  // Single BB functions are often written to be inlined.
+  if (CalleeFI.Metrics.NumBlocks == 1)
+    Factor += 0.5f;
+
+  // Be more aggressive if the function contains a good chunk of vector
+  // instructions (if they make up at least 10% of the instructions).
+  if (CalleeFI.Metrics.NumVectorInsts > CalleeFI.Metrics.NumInsts/2)
+    Factor += 2.0f;
+  else if (CalleeFI.Metrics.NumVectorInsts > CalleeFI.Metrics.NumInsts/10)
+    Factor += 1.5f;
+  return Factor;
+}
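// Editor's note -- not part of the commit: e.g. a single-basic-block callee
// whose vector instructions make up a third of its instructions gets
// Factor = 1.0 + 0.5 + 1.5 = 3.0, scaling up the inliner's threshold for
// that call site accordingly.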
diff --git a/libclamav/c++/llvm/lib/Analysis/InstCount.cpp b/libclamav/c++/llvm/lib/Analysis/InstCount.cpp
index 83724ca..a4b041f 100644
--- a/libclamav/c++/llvm/lib/Analysis/InstCount.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/InstCount.cpp
@@ -15,7 +15,6 @@
 #include "llvm/Analysis/Passes.h"
 #include "llvm/Pass.h"
 #include "llvm/Function.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/InstVisitor.h"
 #include "llvm/Support/raw_ostream.h"
@@ -34,8 +33,7 @@ STATISTIC(TotalMemInst, "Number of memory instructions");
 
 
 namespace {
-  class VISIBILITY_HIDDEN InstCount 
-      : public FunctionPass, public InstVisitor<InstCount> {
+  class InstCount : public FunctionPass, public InstVisitor<InstCount> {
     friend class InstVisitor<InstCount>;
 
     void visitFunction  (Function &F) { ++TotalFuncs; }
@@ -76,11 +74,11 @@ FunctionPass *llvm::createInstCountPass() { return new InstCount(); }
 bool InstCount::runOnFunction(Function &F) {
   unsigned StartMemInsts =
     NumGetElementPtrInst + NumLoadInst + NumStoreInst + NumCallInst +
-    NumInvokeInst + NumAllocaInst + NumMallocInst + NumFreeInst;
+    NumInvokeInst + NumAllocaInst;
   visit(F);
   unsigned EndMemInsts =
     NumGetElementPtrInst + NumLoadInst + NumStoreInst + NumCallInst +
-    NumInvokeInst + NumAllocaInst + NumMallocInst + NumFreeInst;
+    NumInvokeInst + NumAllocaInst;
   TotalMemInst += EndMemInsts-StartMemInsts;
   return false;
 }
diff --git a/libclamav/c++/llvm/lib/Analysis/InstructionSimplify.cpp b/libclamav/c++/llvm/lib/Analysis/InstructionSimplify.cpp
new file mode 100644
index 0000000..7a7eb6b
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Analysis/InstructionSimplify.cpp
@@ -0,0 +1,380 @@
+//===- InstructionSimplify.cpp - Fold instruction operands ----------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements routines for folding instructions into simpler forms
+// that do not require creating new instructions.  For example, this does
+// constant folding, and can handle identities like (X&0)->0.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/Analysis/InstructionSimplify.h"
+#include "llvm/Analysis/ConstantFolding.h"
+#include "llvm/Support/ValueHandle.h"
+#include "llvm/Instructions.h"
+#include "llvm/Support/PatternMatch.h"
+using namespace llvm;
+using namespace llvm::PatternMatch;
+
+/// SimplifyAndInst - Given operands for an And, see if we can
+/// fold the result.  If not, this returns null.
+Value *llvm::SimplifyAndInst(Value *Op0, Value *Op1,
+                             const TargetData *TD) {
+  if (Constant *CLHS = dyn_cast<Constant>(Op0)) {
+    if (Constant *CRHS = dyn_cast<Constant>(Op1)) {
+      Constant *Ops[] = { CLHS, CRHS };
+      return ConstantFoldInstOperands(Instruction::And, CLHS->getType(),
+                                      Ops, 2, TD);
+    }
+  
+    // Canonicalize the constant to the RHS.
+    std::swap(Op0, Op1);
+  }
+  
+  // X & undef -> 0
+  if (isa<UndefValue>(Op1))
+    return Constant::getNullValue(Op0->getType());
+  
+  // X & X = X
+  if (Op0 == Op1)
+    return Op0;
+  
+  // X & <0,0> = <0,0>
+  if (isa<ConstantAggregateZero>(Op1))
+    return Op1;
+  
+  // X & <-1,-1> = X
+  if (ConstantVector *CP = dyn_cast<ConstantVector>(Op1))
+    if (CP->isAllOnesValue())
+      return Op0;
+  
+  if (ConstantInt *Op1CI = dyn_cast<ConstantInt>(Op1)) {
+    // X & 0 = 0
+    if (Op1CI->isZero())
+      return Op1CI;
+    // X & -1 = X
+    if (Op1CI->isAllOnesValue())
+      return Op0;
+  }
+  
+  // A & ~A  =  ~A & A  =  0
+  Value *A, *B;
+  if ((match(Op0, m_Not(m_Value(A))) && A == Op1) ||
+      (match(Op1, m_Not(m_Value(A))) && A == Op0))
+    return Constant::getNullValue(Op0->getType());
+  
+  // (A | ?) & A = A
+  if (match(Op0, m_Or(m_Value(A), m_Value(B))) &&
+      (A == Op1 || B == Op1))
+    return Op1;
+  
+  // A & (A | ?) = A
+  if (match(Op1, m_Or(m_Value(A), m_Value(B))) &&
+      (A == Op0 || B == Op0))
+    return Op0;
+  
+  return 0;
+}
+
+/// SimplifyOrInst - Given operands for an Or, see if we can
+/// fold the result.  If not, this returns null.
+Value *llvm::SimplifyOrInst(Value *Op0, Value *Op1,
+                            const TargetData *TD) {
+  if (Constant *CLHS = dyn_cast<Constant>(Op0)) {
+    if (Constant *CRHS = dyn_cast<Constant>(Op1)) {
+      Constant *Ops[] = { CLHS, CRHS };
+      return ConstantFoldInstOperands(Instruction::Or, CLHS->getType(),
+                                      Ops, 2, TD);
+    }
+    
+    // Canonicalize the constant to the RHS.
+    std::swap(Op0, Op1);
+  }
+  
+  // X | undef -> -1
+  if (isa<UndefValue>(Op1))
+    return Constant::getAllOnesValue(Op0->getType());
+  
+  // X | X = X
+  if (Op0 == Op1)
+    return Op0;
+
+  // X | <0,0> = X
+  if (isa<ConstantAggregateZero>(Op1))
+    return Op0;
+  
+  // X | <-1,-1> = <-1,-1>
+  if (ConstantVector *CP = dyn_cast<ConstantVector>(Op1))
+    if (CP->isAllOnesValue())            
+      return Op1;
+  
+  if (ConstantInt *Op1CI = dyn_cast<ConstantInt>(Op1)) {
+    // X | 0 = X
+    if (Op1CI->isZero())
+      return Op0;
+    // X | -1 = -1
+    if (Op1CI->isAllOnesValue())
+      return Op1CI;
+  }
+  
+  // A | ~A  =  ~A | A  =  -1
+  Value *A, *B;
+  if ((match(Op0, m_Not(m_Value(A))) && A == Op1) ||
+      (match(Op1, m_Not(m_Value(A))) && A == Op0))
+    return Constant::getAllOnesValue(Op0->getType());
+  
+  // (A & ?) | A = A
+  if (match(Op0, m_And(m_Value(A), m_Value(B))) &&
+      (A == Op1 || B == Op1))
+    return Op1;
+  
+  // A | (A & ?) = A
+  if (match(Op1, m_And(m_Value(A), m_Value(B))) &&
+      (A == Op0 || B == Op0))
+    return Op0;
+  
+  return 0;
+}
+
+
+
+
+static const Type *GetCompareTy(Value *Op) {
+  return CmpInst::makeCmpResultType(Op->getType());
+}
+
+
+/// SimplifyICmpInst - Given operands for an ICmpInst, see if we can
+/// fold the result.  If not, this returns null.
+Value *llvm::SimplifyICmpInst(unsigned Predicate, Value *LHS, Value *RHS,
+                              const TargetData *TD) {
+  CmpInst::Predicate Pred = (CmpInst::Predicate)Predicate;
+  assert(CmpInst::isIntPredicate(Pred) && "Not an integer compare!");
+  
+  if (Constant *CLHS = dyn_cast<Constant>(LHS)) {
+    if (Constant *CRHS = dyn_cast<Constant>(RHS))
+      return ConstantFoldCompareInstOperands(Pred, CLHS, CRHS, TD);
+
+    // If we have a constant, make sure it is on the RHS.
+    std::swap(LHS, RHS);
+    Pred = CmpInst::getSwappedPredicate(Pred);
+  }
+  
+  // ITy - This is the return type of the compare we're considering.
+  const Type *ITy = GetCompareTy(LHS);
+  
+  // icmp X, X -> true/false
+  if (LHS == RHS)
+    return ConstantInt::get(ITy, CmpInst::isTrueWhenEqual(Pred));
+
+  if (isa<UndefValue>(RHS))                  // X icmp undef -> undef
+    return UndefValue::get(ITy);
+  
+  // icmp <global/alloca*/null>, <global/alloca*/null> - Global/Stack value
+  // addresses never equal each other!  We already know that LHS != RHS.
+  if ((isa<GlobalValue>(LHS) || isa<AllocaInst>(LHS) || 
+       isa<ConstantPointerNull>(LHS)) &&
+      (isa<GlobalValue>(RHS) || isa<AllocaInst>(RHS) || 
+       isa<ConstantPointerNull>(RHS)))
+    return ConstantInt::get(ITy, CmpInst::isFalseWhenEqual(Pred));
+  
+  // See if we are doing a comparison with a constant.
+  if (ConstantInt *CI = dyn_cast<ConstantInt>(RHS)) {
+    // If we have an icmp le or icmp ge instruction, turn it into the
+    // appropriate icmp lt or icmp gt instruction.  This allows us to rely on
+    // them being folded in the code below.
+    switch (Pred) {
+    default: break;
+    case ICmpInst::ICMP_ULE:
+      if (CI->isMaxValue(false))                 // A <=u MAX -> TRUE
+        return ConstantInt::getTrue(CI->getContext());
+      break;
+    case ICmpInst::ICMP_SLE:
+      if (CI->isMaxValue(true))                  // A <=s MAX -> TRUE
+        return ConstantInt::getTrue(CI->getContext());
+      break;
+    case ICmpInst::ICMP_UGE:
+      if (CI->isMinValue(false))                 // A >=u MIN -> TRUE
+        return ConstantInt::getTrue(CI->getContext());
+      break;
+    case ICmpInst::ICMP_SGE:
+      if (CI->isMinValue(true))                  // A >=s MIN -> TRUE
+        return ConstantInt::getTrue(CI->getContext());
+      break;
+    }
+  }
+  
+  
+  return 0;
+}
+
+/// SimplifyFCmpInst - Given operands for an FCmpInst, see if we can
+/// fold the result.  If not, this returns null.
+Value *llvm::SimplifyFCmpInst(unsigned Predicate, Value *LHS, Value *RHS,
+                              const TargetData *TD) {
+  CmpInst::Predicate Pred = (CmpInst::Predicate)Predicate;
+  assert(CmpInst::isFPPredicate(Pred) && "Not an FP compare!");
+
+  if (Constant *CLHS = dyn_cast<Constant>(LHS)) {
+    if (Constant *CRHS = dyn_cast<Constant>(RHS))
+      return ConstantFoldCompareInstOperands(Pred, CLHS, CRHS, TD);
+   
+    // If we have a constant, make sure it is on the RHS.
+    std::swap(LHS, RHS);
+    Pred = CmpInst::getSwappedPredicate(Pred);
+  }
+  
+  // Fold trivial predicates.
+  if (Pred == FCmpInst::FCMP_FALSE)
+    return ConstantInt::get(GetCompareTy(LHS), 0);
+  if (Pred == FCmpInst::FCMP_TRUE)
+    return ConstantInt::get(GetCompareTy(LHS), 1);
+
+  if (isa<UndefValue>(RHS))                  // fcmp pred X, undef -> undef
+    return UndefValue::get(GetCompareTy(LHS));
+
+  // fcmp x,x -> true/false.  Not all compares are foldable.
+  if (LHS == RHS) {
+    if (CmpInst::isTrueWhenEqual(Pred))
+      return ConstantInt::get(GetCompareTy(LHS), 1);
+    if (CmpInst::isFalseWhenEqual(Pred))
+      return ConstantInt::get(GetCompareTy(LHS), 0);
+  }
+  
+  // Handle fcmp with constant RHS
+  if (Constant *RHSC = dyn_cast<Constant>(RHS)) {
+    // If the constant is a nan, see if we can fold the comparison based on it.
+    if (ConstantFP *CFP = dyn_cast<ConstantFP>(RHSC)) {
+      if (CFP->getValueAPF().isNaN()) {
+        if (FCmpInst::isOrdered(Pred))   // True "if ordered and foo"
+          return ConstantInt::getFalse(CFP->getContext());
+        assert(FCmpInst::isUnordered(Pred) &&
+               "Comparison must be either ordered or unordered!");
+        // True if unordered.
+        return ConstantInt::getTrue(CFP->getContext());
+      }
+    }
+  }
+  
+  return 0;
+}
+
+/// SimplifyGEPInst - Given operands for a GetElementPtrInst, see if we can
+/// fold the result.  If not, this returns null.
+Value *llvm::SimplifyGEPInst(Value *const *Ops, unsigned NumOps,
+                             const TargetData *TD) {
+  // getelementptr P -> P.
+  if (NumOps == 1)
+    return Ops[0];
+
+  // TODO.
+  //if (isa<UndefValue>(Ops[0]))
+  //  return UndefValue::get(GEP.getType());
+
+  // getelementptr P, 0 -> P.
+  if (NumOps == 2)
+    if (ConstantInt *C = dyn_cast<ConstantInt>(Ops[1]))
+      if (C->isZero())
+        return Ops[0];
+  
+  // Check to see if this is constant foldable.
+  for (unsigned i = 0; i != NumOps; ++i)
+    if (!isa<Constant>(Ops[i]))
+      return 0;
+  
+  return ConstantExpr::getGetElementPtr(cast<Constant>(Ops[0]),
+                                        (Constant *const*)Ops+1, NumOps-1);
+}
+
+
+//=== Helper functions for higher up the class hierarchy.
+
+/// SimplifyBinOp - Given operands for a BinaryOperator, see if we can
+/// fold the result.  If not, this returns null.
+Value *llvm::SimplifyBinOp(unsigned Opcode, Value *LHS, Value *RHS, 
+                           const TargetData *TD) {
+  switch (Opcode) {
+  case Instruction::And: return SimplifyAndInst(LHS, RHS, TD);
+  case Instruction::Or:  return SimplifyOrInst(LHS, RHS, TD);
+  default:
+    if (Constant *CLHS = dyn_cast<Constant>(LHS))
+      if (Constant *CRHS = dyn_cast<Constant>(RHS)) {
+        Constant *COps[] = {CLHS, CRHS};
+        return ConstantFoldInstOperands(Opcode, LHS->getType(), COps, 2, TD);
+      }
+    return 0;
+  }
+}
+
+/// SimplifyCmpInst - Given operands for a CmpInst, see if we can
+/// fold the result.
+Value *llvm::SimplifyCmpInst(unsigned Predicate, Value *LHS, Value *RHS,
+                             const TargetData *TD) {
+  if (CmpInst::isIntPredicate((CmpInst::Predicate)Predicate))
+    return SimplifyICmpInst(Predicate, LHS, RHS, TD);
+  return SimplifyFCmpInst(Predicate, LHS, RHS, TD);
+}
+
+
+/// SimplifyInstruction - See if we can compute a simplified version of this
+/// instruction.  If not, this returns null.
+Value *llvm::SimplifyInstruction(Instruction *I, const TargetData *TD) {
+  switch (I->getOpcode()) {
+  default:
+    return ConstantFoldInstruction(I, TD);
+  case Instruction::And:
+    return SimplifyAndInst(I->getOperand(0), I->getOperand(1), TD);
+  case Instruction::Or:
+    return SimplifyOrInst(I->getOperand(0), I->getOperand(1), TD);
+  case Instruction::ICmp:
+    return SimplifyICmpInst(cast<ICmpInst>(I)->getPredicate(),
+                            I->getOperand(0), I->getOperand(1), TD);
+  case Instruction::FCmp:
+    return SimplifyFCmpInst(cast<FCmpInst>(I)->getPredicate(),
+                            I->getOperand(0), I->getOperand(1), TD);
+  case Instruction::GetElementPtr: {
+    SmallVector<Value*, 8> Ops(I->op_begin(), I->op_end());
+    return SimplifyGEPInst(&Ops[0], Ops.size(), TD);
+  }
+  }
+}
+
+/// ReplaceAndSimplifyAllUses - Perform From->replaceAllUsesWith(To) and then
+/// delete the From instruction.  In addition to a basic RAUW, this does a
+/// recursive simplification of the newly formed instructions.  This catches
+/// things where one simplification exposes other opportunities.  This only
+/// simplifies and deletes scalar operations, it does not change the CFG.
+///
+void llvm::ReplaceAndSimplifyAllUses(Instruction *From, Value *To,
+                                     const TargetData *TD) {
+  assert(From != To && "ReplaceAndSimplifyAllUses(X,X) is not valid!");
+  
+  // FromHandle - This keeps a WeakVH on the 'From' value so that we can know
+  // if it gets deleted out from under us in a recursive simplification.
+  WeakVH FromHandle(From);
+  
+  while (!From->use_empty()) {
+    // Update the instruction to use the new value.
+    Use &U = From->use_begin().getUse();
+    Instruction *User = cast<Instruction>(U.getUser());
+    U = To;
+    
+    // See if we can simplify it.
+    if (Value *V = SimplifyInstruction(User, TD)) {
+      // Recursively simplify this.
+      ReplaceAndSimplifyAllUses(User, V, TD);
+      
+      // If the recursive simplification ended up revisiting and deleting
+      // 'From', then we're done.
+      if (FromHandle == 0)
+        return;
+    }
+  }
+  From->eraseFromParent();
+}
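+
+// For illustration: if 'From' is replaced by constant 0 and its only user is
+// "%b = and i32 %from, %c", the RAUW produces "and i32 0, %c", which
+// SimplifyInstruction folds to 0, so %b is itself recursively replaced and
+// erased.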
+
diff --git a/libclamav/c++/llvm/lib/Analysis/LazyValueInfo.cpp b/libclamav/c++/llvm/lib/Analysis/LazyValueInfo.cpp
new file mode 100644
index 0000000..5796c6f
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Analysis/LazyValueInfo.cpp
@@ -0,0 +1,582 @@
+//===- LazyValueInfo.cpp - Value constraint analysis ----------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file defines the interface for lazy computation of value constraint
+// information.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "lazy-value-info"
+#include "llvm/Analysis/LazyValueInfo.h"
+#include "llvm/Constants.h"
+#include "llvm/Instructions.h"
+#include "llvm/Analysis/ConstantFolding.h"
+#include "llvm/Target/TargetData.h"
+#include "llvm/Support/CFG.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/raw_ostream.h"
+#include "llvm/ADT/DenseMap.h"
+#include "llvm/ADT/PointerIntPair.h"
+#include "llvm/ADT/STLExtras.h"
+using namespace llvm;
+
+char LazyValueInfo::ID = 0;
+static RegisterPass<LazyValueInfo>
+X("lazy-value-info", "Lazy Value Information Analysis", false, true);
+
+namespace llvm {
+  FunctionPass *createLazyValueInfoPass() { return new LazyValueInfo(); }
+}
+
+
+//===----------------------------------------------------------------------===//
+//                               LVILatticeVal
+//===----------------------------------------------------------------------===//
+
+/// LVILatticeVal - This is the information tracked by LazyValueInfo for each
+/// value.
+///
+/// FIXME: This is basically just for bringup; it can be made much richer in
+/// the future.
+///
+namespace {
+class LVILatticeVal {
+  enum LatticeValueTy {
+    /// undefined - This LLVM Value has no known value yet.
+    undefined,
+    /// constant - This LLVM Value has a specific constant value.
+    constant,
+    
+    /// notconstant - This LLVM value is known to not have the specified value.
+    notconstant,
+    
+    /// overdefined - This value is not known to be constant; all we know is
+    /// that it has some value.
+    overdefined
+  };
+  
+  /// Val: This stores the current lattice value along with the Constant* for
+  /// the constant if this is a 'constant' or 'notconstant' value.
+  PointerIntPair<Constant *, 2, LatticeValueTy> Val;
+  
+public:
+  LVILatticeVal() : Val(0, undefined) {}
+
+  static LVILatticeVal get(Constant *C) {
+    LVILatticeVal Res;
+    Res.markConstant(C);
+    return Res;
+  }
+  static LVILatticeVal getNot(Constant *C) {
+    LVILatticeVal Res;
+    Res.markNotConstant(C);
+    return Res;
+  }
+  
+  bool isUndefined() const   { return Val.getInt() == undefined; }
+  bool isConstant() const    { return Val.getInt() == constant; }
+  bool isNotConstant() const { return Val.getInt() == notconstant; }
+  bool isOverdefined() const { return Val.getInt() == overdefined; }
+  
+  Constant *getConstant() const {
+    assert(isConstant() && "Cannot get the constant of a non-constant!");
+    return Val.getPointer();
+  }
+  
+  Constant *getNotConstant() const {
+    assert(isNotConstant() && "Cannot get the constant of a non-notconstant!");
+    return Val.getPointer();
+  }
+  
+  /// markOverdefined - Return true if this is a change in status.
+  bool markOverdefined() {
+    if (isOverdefined())
+      return false;
+    Val.setInt(overdefined);
+    return true;
+  }
+
+  /// markConstant - Return true if this is a change in status.
+  bool markConstant(Constant *V) {
+    if (isConstant()) {
+      assert(getConstant() == V && "Marking constant with different value");
+      return false;
+    }
+    
+    assert(isUndefined());
+    Val.setInt(constant);
+    assert(V && "Marking constant with NULL");
+    Val.setPointer(V);
+    return true;
+  }
+  
+  /// markNotConstant - Return true if this is a change in status.
+  bool markNotConstant(Constant *V) {
+    if (isNotConstant()) {
+      assert(getNotConstant() == V && "Marking !constant with different value");
+      return false;
+    }
+    
+    if (isConstant())
+      assert(getConstant() != V && "Marking not constant with same value");
+    else
+      assert(isUndefined());
+
+    Val.setInt(notconstant);
+    assert(V && "Marking notconstant with NULL");
+    Val.setPointer(V);
+    return true;
+  }
+  
+  /// mergeIn - Merge the specified lattice value into this one, updating this
+  /// one and returning true if anything changed.
+  bool mergeIn(const LVILatticeVal &RHS) {
+    if (RHS.isUndefined() || isOverdefined()) return false;
+    if (RHS.isOverdefined()) return markOverdefined();
+
+    if (RHS.isNotConstant()) {
+      if (isNotConstant()) {
+        if (getNotConstant() != RHS.getNotConstant() ||
+            isa<ConstantExpr>(getNotConstant()) ||
+            isa<ConstantExpr>(RHS.getNotConstant()))
+          return markOverdefined();
+        return false;
+      }
+      if (isConstant()) {
+        if (getConstant() == RHS.getNotConstant() ||
+            isa<ConstantExpr>(RHS.getNotConstant()) ||
+            isa<ConstantExpr>(getConstant()))
+          return markOverdefined();
+        return markNotConstant(RHS.getNotConstant());
+      }
+      
+      assert(isUndefined() && "Unexpected lattice");
+      return markNotConstant(RHS.getNotConstant());
+    }
+    
+    // RHS must be a constant; we must be undef, constant, or notconstant.
+    if (isUndefined())
+      return markConstant(RHS.getConstant());
+    
+    if (isConstant()) {
+      if (getConstant() != RHS.getConstant())
+        return markOverdefined();
+      return false;
+    }
+
+    // If we are known "!=4" and RHS is "==5", stay at "!=4".
+    if (getNotConstant() == RHS.getConstant() ||
+        isa<ConstantExpr>(getNotConstant()) ||
+        isa<ConstantExpr>(RHS.getConstant()))
+      return markOverdefined();
+    return false;
+  }
+  
+};
+  
+} // end anonymous namespace.
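+
+// For illustration, some lattice transitions that mergeIn performs (i32):
+//   undefined       merged with constant<4>     --> constant<4>
+//   constant<4>     merged with constant<5>     --> overdefined
+//   notconstant<4>  merged with constant<5>     --> notconstant<4>
+//   notconstant<4>  merged with notconstant<5>  --> overdefined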
+
+namespace llvm {
+raw_ostream &operator<<(raw_ostream &OS, const LVILatticeVal &Val) {
+  if (Val.isUndefined())
+    return OS << "undefined";
+  if (Val.isOverdefined())
+    return OS << "overdefined";
+
+  if (Val.isNotConstant())
+    return OS << "notconstant<" << *Val.getNotConstant() << '>';
+  return OS << "constant<" << *Val.getConstant() << '>';
+}
+}
+
+//===----------------------------------------------------------------------===//
+//                          LazyValueInfoCache Decl
+//===----------------------------------------------------------------------===//
+
+namespace {
+  /// LazyValueInfoCache - This is the cache kept by LazyValueInfo which
+  /// maintains information computed for previous queries so that later
+  /// queries can reuse it.
+  class LazyValueInfoCache {
+  public:
+    /// BlockCacheEntryTy - This is a computed lattice value at the end of the
+    /// specified basic block for a Value* that depends on context.
+    typedef std::pair<BasicBlock*, LVILatticeVal> BlockCacheEntryTy;
+    
+    /// ValueCacheEntryTy - This is all of the cached block information for
+    /// exactly one Value*.  The entries are sorted by BasicBlock*, allowing
+    /// us to do a lookup with a binary search.
+    typedef std::vector<BlockCacheEntryTy> ValueCacheEntryTy;
+
+  private:
+    /// ValueCache - This is all of the cached information for all values,
+    /// mapped from Value* to key information.
+    DenseMap<Value*, ValueCacheEntryTy> ValueCache;
+  public:
+    
+    /// getValueInBlock - This is the query interface to determine the lattice
+    /// value for the specified Value* at the end of the specified block.
+    LVILatticeVal getValueInBlock(Value *V, BasicBlock *BB);
+
+    /// getValueOnEdge - This is the query interface to determine the lattice
+    /// value for the specified Value* that is true on the specified edge.
+    LVILatticeVal getValueOnEdge(Value *V, BasicBlock *FromBB,BasicBlock *ToBB);
+  };
+} // end anonymous namespace
+
+namespace {
+  struct BlockCacheEntryComparator {
+    static int Compare(const void *LHSv, const void *RHSv) {
+      const LazyValueInfoCache::BlockCacheEntryTy *LHS =
+        static_cast<const LazyValueInfoCache::BlockCacheEntryTy *>(LHSv);
+      const LazyValueInfoCache::BlockCacheEntryTy *RHS =
+        static_cast<const LazyValueInfoCache::BlockCacheEntryTy *>(RHSv);
+      if (LHS->first < RHS->first)
+        return -1;
+      if (LHS->first > RHS->first)
+        return 1;
+      return 0;
+    }
+    
+    bool operator()(const LazyValueInfoCache::BlockCacheEntryTy &LHS,
+                    const LazyValueInfoCache::BlockCacheEntryTy &RHS) const {
+      return LHS.first < RHS.first;
+    }
+  };
+}
+
+//===----------------------------------------------------------------------===//
+//                              LVIQuery Impl
+//===----------------------------------------------------------------------===//
+
+namespace {
+  /// LVIQuery - This is a transient object that exists while a query is
+  /// being performed.
+  ///
+  /// TODO: Reuse LVIQuery instead of recreating it for every query; this
+  /// avoids reallocating the DenseMap each time.
+  class LVIQuery {
+    typedef LazyValueInfoCache::BlockCacheEntryTy BlockCacheEntryTy;
+    typedef LazyValueInfoCache::ValueCacheEntryTy ValueCacheEntryTy;
+    
+    /// This is the current value being queried for.
+    Value *Val;
+    
+    /// This is all of the cached information about this value.
+    ValueCacheEntryTy &Cache;
+    
+    /// NewBlockInfo - This is a mapping of the new BasicBlocks which have been
+    /// added to the cache but are not yet in sorted order.
+    DenseMap<BasicBlock*, LVILatticeVal> NewBlockInfo;
+  public:
+    
+    LVIQuery(Value *V, ValueCacheEntryTy &VC) : Val(V), Cache(VC) {
+    }
+
+    ~LVIQuery() {
+      // When the query is done, insert the newly discovered facts into the
+      // cache in sorted order.
+      if (NewBlockInfo.empty()) return;
+
+      // Grow the cache to exactly fit the new data.
+      Cache.reserve(Cache.size() + NewBlockInfo.size());
+      
+      // If we only have one new entry, insert it instead of doing a full-on
+      // sort.
+      if (NewBlockInfo.size() == 1) {
+        BlockCacheEntryTy Entry = *NewBlockInfo.begin();
+        ValueCacheEntryTy::iterator I =
+          std::lower_bound(Cache.begin(), Cache.end(), Entry,
+                           BlockCacheEntryComparator());
+        assert((I == Cache.end() || I->first != Entry.first) &&
+               "Entry already in map!");
+        
+        Cache.insert(I, Entry);
+        return;
+      }
+      
+      // TODO: If we only have two new elements, INSERT them both.
+      
+      Cache.insert(Cache.end(), NewBlockInfo.begin(), NewBlockInfo.end());
+      array_pod_sort(Cache.begin(), Cache.end(),
+                     BlockCacheEntryComparator::Compare);
+      
+    }
+
+    LVILatticeVal getBlockValue(BasicBlock *BB);
+    LVILatticeVal getEdgeValue(BasicBlock *FromBB, BasicBlock *ToBB);
+
+  private:
+    LVILatticeVal &getCachedEntryForBlock(BasicBlock *BB);
+  };
+} // end anonymous namespace
+
+/// getCachedEntryForBlock - See if we already have a value for this block.  If
+/// so, return it; otherwise create a new entry in the NewBlockInfo map to use.
+LVILatticeVal &LVIQuery::getCachedEntryForBlock(BasicBlock *BB) {
+  
+  // Do a binary search to see if we already have an entry for this block in
+  // the cache.  If so, return it.
+  if (!Cache.empty()) {
+    ValueCacheEntryTy::iterator Entry =
+      std::lower_bound(Cache.begin(), Cache.end(),
+                       BlockCacheEntryTy(BB, LVILatticeVal()),
+                       BlockCacheEntryComparator());
+    if (Entry != Cache.end() && Entry->first == BB)
+      return Entry->second;
+  }
+  
+  // Otherwise, check to see if it's in NewBlockInfo or create a new entry if
+  // not.
+  return NewBlockInfo[BB];
+}
+
+LVILatticeVal LVIQuery::getBlockValue(BasicBlock *BB) {
+  // See if we already have a value for this block.
+  LVILatticeVal &BBLV = getCachedEntryForBlock(BB);
+  
+  // If we've already computed this block's value, return it.
+  if (!BBLV.isUndefined()) {
+    DEBUG(errs() << "  reuse BB '" << BB->getName() << "' val=" << BBLV <<'\n');
+    return BBLV;
+  }
+
+  // Otherwise, this is the first time we're seeing this block.  Reset the
+  // lattice value to overdefined, so that cycles will terminate and be
+  // conservatively correct.
+  BBLV.markOverdefined();
+  
+  // If V is live into BB, see if our predecessors know anything about it.
+  Instruction *BBI = dyn_cast<Instruction>(Val);
+  if (BBI == 0 || BBI->getParent() != BB) {
+    LVILatticeVal Result;  // Start Undefined.
+    unsigned NumPreds = 0;
+    
+    // Loop over all of our predecessors, merging what we know from them into
+    // result.
+    for (pred_iterator PI = pred_begin(BB), E = pred_end(BB); PI != E; ++PI) {
+      Result.mergeIn(getEdgeValue(*PI, BB));
+      
+      // If we hit overdefined, exit early.  The cached entry for this block
+      // is already set to overdefined.
+      if (Result.isOverdefined()) {
+        DEBUG(errs() << " compute BB '" << BB->getName()
+                     << "' - overdefined because of pred.\n");
+        return Result;
+      }
+      ++NumPreds;
+    }
+    
+    // If this is the entry block, we must be asking about an argument.  The
+    // value is overdefined.
+    if (NumPreds == 0 && BB == &BB->getParent()->front()) {
+      assert(isa<Argument>(Val) && "Unknown live-in to the entry block");
+      Result.markOverdefined();
+      return Result;
+    }
+    
+    // Return the merged value, which is more precise than 'overdefined'.
+    assert(!Result.isOverdefined());
+    return getCachedEntryForBlock(BB) = Result;
+  }
+  
+  // If this value is defined by an instruction in this block, we have to
+  // process it here somehow or return overdefined.
+  if (PHINode *PN = dyn_cast<PHINode>(BBI)) {
+    (void)PN;
+    // TODO: PHI Translation in preds.
+  } else {
+    
+  }
+  
+  DEBUG(errs() << " compute BB '" << BB->getName()
+               << "' - overdefined because inst def found.\n");
+
+  LVILatticeVal Result;
+  Result.markOverdefined();
+  return getCachedEntryForBlock(BB) = Result;
+}
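+
+// Note on cycles: because getBlockValue seeds the cache entry for BB with
+// 'overdefined' before walking predecessors, a query that loops back to BB
+// (e.g. through a loop backedge) finds the already-seeded entry and returns
+// it, so the recursion terminates with a conservatively correct answer.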
+
+
+/// getEdgeValue - This method attempts to infer more precise information
+/// about the value on the edge from BBFrom to BBTo by examining the
+/// terminator of BBFrom.
+LVILatticeVal LVIQuery::getEdgeValue(BasicBlock *BBFrom, BasicBlock *BBTo) {
+  // TODO: Handle more complex conditionals.  If (v == 0 || v2 < 1) is false, we
+  // know that v != 0.
+  if (BranchInst *BI = dyn_cast<BranchInst>(BBFrom->getTerminator())) {
+    // If this is a conditional branch and only one successor goes to BBTo,
+    // then we may be able to infer something from the condition.
+    if (BI->isConditional() &&
+        BI->getSuccessor(0) != BI->getSuccessor(1)) {
+      bool isTrueDest = BI->getSuccessor(0) == BBTo;
+      assert(BI->getSuccessor(!isTrueDest) == BBTo &&
+             "BBTo isn't a successor of BBFrom");
+      
+      // If V is the condition of the branch itself, then we know exactly what
+      // it is.
+      if (BI->getCondition() == Val)
+        return LVILatticeVal::get(ConstantInt::get(
+                               Type::getInt1Ty(Val->getContext()), isTrueDest));
+      
+      // If the condition of the branch is an equality comparison, we may be
+      // able to infer the value.
+      if (ICmpInst *ICI = dyn_cast<ICmpInst>(BI->getCondition()))
+        if (ICI->isEquality() && ICI->getOperand(0) == Val &&
+            isa<Constant>(ICI->getOperand(1))) {
+          // We know that V equals the RHS constant if this is a true ICMP_EQ
+          // edge or a false ICMP_NE edge.
+          if (isTrueDest == (ICI->getPredicate() == ICmpInst::ICMP_EQ))
+            return LVILatticeVal::get(cast<Constant>(ICI->getOperand(1)));
+          return LVILatticeVal::getNot(cast<Constant>(ICI->getOperand(1)));
+        }
+    }
+  }
+
+  // If the edge was formed by a switch on the value, then we may know exactly
+  // what it is.
+  if (SwitchInst *SI = dyn_cast<SwitchInst>(BBFrom->getTerminator())) {
+    // If BBTo is the default destination of the switch, we don't know anything.
+    // A more powerful range analysis could rule out the case values here.
+    if (SI->getCondition() == Val && SI->getDefaultDest() != BBTo) {
+      // We only know something if there is exactly one value that goes from
+      // BBFrom to BBTo.
+      unsigned NumEdges = 0;
+      ConstantInt *EdgeVal = 0;
+      for (unsigned i = 1, e = SI->getNumSuccessors(); i != e; ++i) {
+        if (SI->getSuccessor(i) != BBTo) continue;
+        if (NumEdges++) break;
+        EdgeVal = SI->getCaseValue(i);
+      }
+      assert(EdgeVal && "Missing successor?");
+      if (NumEdges == 1)
+        return LVILatticeVal::get(EdgeVal);
+    }
+  }
+  
+  // Otherwise see if the value is known in the block.
+  return getBlockValue(BBFrom);
+}
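+
+// For illustration (hypothetical IR):
+//
+//   %cmp = icmp eq i32 %x, 0
+//   br i1 %cmp, label %T, label %F
+//
+// On the edge to %T, %x is constant<0> and %cmp is constant<true>; on the
+// edge to %F, %x is notconstant<0> and %cmp is constant<false>.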
+
+
+//===----------------------------------------------------------------------===//
+//                         LazyValueInfoCache Impl
+//===----------------------------------------------------------------------===//
+
+LVILatticeVal LazyValueInfoCache::getValueInBlock(Value *V, BasicBlock *BB) {
+  // If already a constant, there is nothing to compute.
+  if (Constant *VC = dyn_cast<Constant>(V))
+    return LVILatticeVal::get(VC);
+  
+  DEBUG(errs() << "LVI Getting block end value " << *V << " at '"
+        << BB->getName() << "'\n");
+  
+  LVILatticeVal Result = LVIQuery(V, ValueCache[V]).getBlockValue(BB);
+  
+  DEBUG(errs() << "  Result = " << Result << "\n");
+  return Result;
+}
+
+LVILatticeVal LazyValueInfoCache::
+getValueOnEdge(Value *V, BasicBlock *FromBB, BasicBlock *ToBB) {
+  // If already a constant, there is nothing to compute.
+  if (Constant *VC = dyn_cast<Constant>(V))
+    return LVILatticeVal::get(VC);
+  
+  DEBUG(errs() << "LVI Getting edge value " << *V << " from '"
+        << FromBB->getName() << "' to '" << ToBB->getName() << "'\n");
+  LVILatticeVal Result =
+    LVIQuery(V, ValueCache[V]).getEdgeValue(FromBB, ToBB);
+  
+  DEBUG(errs() << "  Result = " << Result << "\n");
+  
+  return Result;
+}
+
+//===----------------------------------------------------------------------===//
+//                            LazyValueInfo Impl
+//===----------------------------------------------------------------------===//
+
+bool LazyValueInfo::runOnFunction(Function &F) {
+  TD = getAnalysisIfAvailable<TargetData>();
+  // Fully lazy.
+  return false;
+}
+
+/// getCache - This lazily constructs the LazyValueInfoCache.
+static LazyValueInfoCache &getCache(void *&PImpl) {
+  if (!PImpl)
+    PImpl = new LazyValueInfoCache();
+  return *static_cast<LazyValueInfoCache*>(PImpl);
+}
+
+void LazyValueInfo::releaseMemory() {
+  // If the cache was allocated, free it.
+  if (PImpl) {
+    delete &getCache(PImpl);
+    PImpl = 0;
+  }
+}
+
+Constant *LazyValueInfo::getConstant(Value *V, BasicBlock *BB) {
+  LVILatticeVal Result = getCache(PImpl).getValueInBlock(V, BB);
+  
+  if (Result.isConstant())
+    return Result.getConstant();
+  return 0;
+}
+
+/// getConstantOnEdge - Determine whether the specified value is known to be a
+/// constant on the specified edge.  Return null if not.
+Constant *LazyValueInfo::getConstantOnEdge(Value *V, BasicBlock *FromBB,
+                                           BasicBlock *ToBB) {
+  LVILatticeVal Result = getCache(PImpl).getValueOnEdge(V, FromBB, ToBB);
+  
+  if (Result.isConstant())
+    return Result.getConstant();
+  return 0;
+}
+
+/// getPredicateOnEdge - Determine whether the specified value comparison
+/// with a constant is known to be true or false on the specified CFG edge.
+/// Pred is a CmpInst predicate.
+LazyValueInfo::Tristate
+LazyValueInfo::getPredicateOnEdge(unsigned Pred, Value *V, Constant *C,
+                                  BasicBlock *FromBB, BasicBlock *ToBB) {
+  LVILatticeVal Result = getCache(PImpl).getValueOnEdge(V, FromBB, ToBB);
+  
+  // If we know the value is a constant, evaluate the conditional.
+  Constant *Res = 0;
+  if (Result.isConstant()) {
+    Res = ConstantFoldCompareInstOperands(Pred, Result.getConstant(), C, TD);
+    if (ConstantInt *ResCI = dyn_cast_or_null<ConstantInt>(Res))
+      return ResCI->isZero() ? False : True;
+    return Unknown;
+  }
+  
+  if (Result.isNotConstant()) {
+    // If this is an equality comparison, we can try to fold it knowing that
+    // "V != C1".
+    if (Pred == ICmpInst::ICMP_EQ) {
+      // !C1 == C -> false iff C1 == C.
+      Res = ConstantFoldCompareInstOperands(ICmpInst::ICMP_NE,
+                                            Result.getNotConstant(), C, TD);
+      if (Res->isNullValue())
+        return False;
+    } else if (Pred == ICmpInst::ICMP_NE) {
+      // !C1 != C -> true iff C1 == C.
+      Res = ConstantFoldCompareInstOperands(ICmpInst::ICMP_NE,
+                                            Result.getNotConstant(), C, TD);
+      if (Res->isNullValue())
+        return True;
+    }
+    return Unknown;
+  }
+  
+  return Unknown;
+}
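+
+// For illustration, a hypothetical client removing a redundant null check
+// might ask (Ptr, Null, Pred and Succ stand in for the client's values):
+//
+//   if (LVI->getPredicateOnEdge(ICmpInst::ICMP_EQ, Ptr, Null, Pred, Succ)
+//         == LazyValueInfo::False)
+//     ;  // "Ptr == null" is known false on the Pred->Succ edge.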
+
+
diff --git a/libclamav/c++/llvm/lib/Analysis/LiveValues.cpp b/libclamav/c++/llvm/lib/Analysis/LiveValues.cpp
index 2bbe98a..02ec7d3 100644
--- a/libclamav/c++/llvm/lib/Analysis/LiveValues.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/LiveValues.cpp
@@ -17,7 +17,9 @@
 #include "llvm/Analysis/LoopInfo.h"
 using namespace llvm;
 
-FunctionPass *llvm::createLiveValuesPass() { return new LiveValues(); }
+namespace llvm {
+  FunctionPass *createLiveValuesPass() { return new LiveValues(); }
+}
 
 char LiveValues::ID = 0;
 static RegisterPass<LiveValues>
diff --git a/libclamav/c++/llvm/lib/Analysis/LoopInfo.cpp b/libclamav/c++/llvm/lib/Analysis/LoopInfo.cpp
index ce2d29f..4de756c 100644
--- a/libclamav/c++/llvm/lib/Analysis/LoopInfo.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/LoopInfo.cpp
@@ -243,6 +243,11 @@ unsigned Loop::getSmallConstantTripMultiple() const {
         case BinaryOperator::Mul:
           Result = dyn_cast<ConstantInt>(BO->getOperand(1));
           break;
+        case BinaryOperator::Shl:
+          if (ConstantInt *CI = dyn_cast<ConstantInt>(BO->getOperand(1)))
+            if (CI->getValue().getActiveBits() <= 5)
+              return 1u << CI->getZExtValue();
+          break;
         default:
           break;
         }
@@ -263,14 +268,13 @@ bool Loop::isLCSSAForm() const {
   SmallPtrSet<BasicBlock *, 16> LoopBBs(block_begin(), block_end());
 
   for (block_iterator BI = block_begin(), E = block_end(); BI != E; ++BI) {
-    BasicBlock  *BB = *BI;
-    for (BasicBlock ::iterator I = BB->begin(), E = BB->end(); I != E;++I)
+    BasicBlock *BB = *BI;
+    for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E;++I)
       for (Value::use_iterator UI = I->use_begin(), E = I->use_end(); UI != E;
            ++UI) {
         BasicBlock *UserBB = cast<Instruction>(*UI)->getParent();
-        if (PHINode *P = dyn_cast<PHINode>(*UI)) {
+        if (PHINode *P = dyn_cast<PHINode>(*UI))
           UserBB = P->getIncomingBlock(UI);
-        }
 
         // Check the current block, as a fast-path.  Most values are used in
         // the same block they are defined in.
@@ -286,12 +290,17 @@ bool Loop::isLCSSAForm() const {
 /// the LoopSimplify form transforms loops to, which is sometimes called
 /// normal form.
 bool Loop::isLoopSimplifyForm() const {
-  // Normal-form loops have a preheader.
-  if (!getLoopPreheader())
-    return false;
-  // Normal-form loops have a single backedge.
-  if (!getLoopLatch())
-    return false;
+  // Normal-form loops have a preheader, a single backedge, and all of their
+  // exits have all their predecessors inside the loop.
+  return getLoopPreheader() && getLoopLatch() && hasDedicatedExits();
+}
+
+/// hasDedicatedExits - Return true if no exit block for the loop
+/// has a predecessor that is outside the loop.
+bool Loop::hasDedicatedExits() const {
+  // Put the loop's blocks in a set so that membership tests below are fast.
+  SmallPtrSet<BasicBlock *, 16> LoopBBs(block_begin(), block_end());
   // Each predecessor of each exit block of a normal loop is contained
   // within the loop.
   SmallVector<BasicBlock *, 4> ExitBlocks;
@@ -299,7 +308,7 @@ bool Loop::isLoopSimplifyForm() const {
   for (unsigned i = 0, e = ExitBlocks.size(); i != e; ++i)
     for (pred_iterator PI = pred_begin(ExitBlocks[i]),
          PE = pred_end(ExitBlocks[i]); PI != PE; ++PI)
-      if (!contains(*PI))
+      if (!LoopBBs.count(*PI))
         return false;
   // All the requirements are met.
   return true;
diff --git a/libclamav/c++/llvm/lib/Analysis/LoopVR.cpp b/libclamav/c++/llvm/lib/Analysis/LoopVR.cpp
deleted file mode 100644
index 573bd3e..0000000
--- a/libclamav/c++/llvm/lib/Analysis/LoopVR.cpp
+++ /dev/null
@@ -1,297 +0,0 @@
-//===- LoopVR.cpp - Value Range analysis driven by loop information -------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// FIXME: What does this do?
-//
-//===----------------------------------------------------------------------===//
-
-#define DEBUG_TYPE "loopvr"
-#include "llvm/Analysis/LoopVR.h"
-#include "llvm/Constants.h"
-#include "llvm/Instructions.h"
-#include "llvm/LLVMContext.h"
-#include "llvm/Analysis/LoopInfo.h"
-#include "llvm/Analysis/ScalarEvolutionExpressions.h"
-#include "llvm/Assembly/Writer.h"
-#include "llvm/Support/CFG.h"
-#include "llvm/Support/Debug.h"
-#include "llvm/Support/raw_ostream.h"
-using namespace llvm;
-
-char LoopVR::ID = 0;
-static RegisterPass<LoopVR> X("loopvr", "Loop Value Ranges", false, true);
-
-/// getRange - determine the range for a particular SCEV within a given Loop
-ConstantRange LoopVR::getRange(const SCEV *S, Loop *L, ScalarEvolution &SE) {
-  const SCEV *T = SE.getBackedgeTakenCount(L);
-  if (isa<SCEVCouldNotCompute>(T))
-    return ConstantRange(cast<IntegerType>(S->getType())->getBitWidth(), true);
-
-  T = SE.getTruncateOrZeroExtend(T, S->getType());
-  return getRange(S, T, SE);
-}
-
-/// getRange - determine the range for a particular SCEV with a given trip count
-ConstantRange LoopVR::getRange(const SCEV *S, const SCEV *T, ScalarEvolution &SE){
-
-  if (const SCEVConstant *C = dyn_cast<SCEVConstant>(S))
-    return ConstantRange(C->getValue()->getValue());
-    
-  ConstantRange FullSet(cast<IntegerType>(S->getType())->getBitWidth(), true);
-
-  // {x,+,y,+,...z}. We detect overflow by checking the size of the set after
-  // summing the upper and lower.
-  if (const SCEVAddExpr *Add = dyn_cast<SCEVAddExpr>(S)) {
-    ConstantRange X = getRange(Add->getOperand(0), T, SE);
-    if (X.isFullSet()) return FullSet;
-    for (unsigned i = 1, e = Add->getNumOperands(); i != e; ++i) {
-      ConstantRange Y = getRange(Add->getOperand(i), T, SE);
-      if (Y.isFullSet()) return FullSet;
-
-      APInt Spread_X = X.getSetSize(), Spread_Y = Y.getSetSize();
-      APInt NewLower = X.getLower() + Y.getLower();
-      APInt NewUpper = X.getUpper() + Y.getUpper() - 1;
-      if (NewLower == NewUpper)
-        return FullSet;
-
-      X = ConstantRange(NewLower, NewUpper);
-      if (X.getSetSize().ult(Spread_X) || X.getSetSize().ult(Spread_Y))
-        return FullSet; // we've wrapped, therefore, full set.
-    }
-    return X;
-  }
-
-  // {x,*,y,*,...,z}. In order to detect overflow, we use k*bitwidth where
-  // k is the number of terms being multiplied.
-  if (const SCEVMulExpr *Mul = dyn_cast<SCEVMulExpr>(S)) {
-    ConstantRange X = getRange(Mul->getOperand(0), T, SE);
-    if (X.isFullSet()) return FullSet;
-
-    const IntegerType *Ty = IntegerType::get(SE.getContext(), X.getBitWidth());
-    const IntegerType *ExTy = IntegerType::get(SE.getContext(),
-                                      X.getBitWidth() * Mul->getNumOperands());
-    ConstantRange XExt = X.zeroExtend(ExTy->getBitWidth());
-
-    for (unsigned i = 1, e = Mul->getNumOperands(); i != e; ++i) {
-      ConstantRange Y = getRange(Mul->getOperand(i), T, SE);
-      if (Y.isFullSet()) return FullSet;
-
-      ConstantRange YExt = Y.zeroExtend(ExTy->getBitWidth());
-      XExt = ConstantRange(XExt.getLower() * YExt.getLower(),
-                           ((XExt.getUpper()-1) * (YExt.getUpper()-1)) + 1);
-    }
-    return XExt.truncate(Ty->getBitWidth());
-  }
-
-  // X smax Y smax ... Z is: range(smax(X_smin, Y_smin, ..., Z_smin),
-  //                               smax(X_smax, Y_smax, ..., Z_smax))
-  // It doesn't matter if one of the SCEVs has FullSet because we're taking
-  // a maximum of the minimums across all of them.
-  if (const SCEVSMaxExpr *SMax = dyn_cast<SCEVSMaxExpr>(S)) {
-    ConstantRange X = getRange(SMax->getOperand(0), T, SE);
-    if (X.isFullSet()) return FullSet;
-
-    APInt smin = X.getSignedMin(), smax = X.getSignedMax();
-    for (unsigned i = 1, e = SMax->getNumOperands(); i != e; ++i) {
-      ConstantRange Y = getRange(SMax->getOperand(i), T, SE);
-      smin = APIntOps::smax(smin, Y.getSignedMin());
-      smax = APIntOps::smax(smax, Y.getSignedMax());
-    }
-    if (smax + 1 == smin) return FullSet;
-    return ConstantRange(smin, smax + 1);
-  }
-
-  // X umax Y umax ... Z is: range(umax(X_umin, Y_umin, ..., Z_umin),
-  //                               umax(X_umax, Y_umax, ..., Z_umax))
-  // It doesn't matter if one of the SCEVs has FullSet because we're taking
-  // a maximum of the minimums across all of them.
-  if (const SCEVUMaxExpr *UMax = dyn_cast<SCEVUMaxExpr>(S)) {
-    ConstantRange X = getRange(UMax->getOperand(0), T, SE);
-    if (X.isFullSet()) return FullSet;
-
-    APInt umin = X.getUnsignedMin(), umax = X.getUnsignedMax();
-    for (unsigned i = 1, e = UMax->getNumOperands(); i != e; ++i) {
-      ConstantRange Y = getRange(UMax->getOperand(i), T, SE);
-      umin = APIntOps::umax(umin, Y.getUnsignedMin());
-      umax = APIntOps::umax(umax, Y.getUnsignedMax());
-    }
-    if (umax + 1 == umin) return FullSet;
-    return ConstantRange(umin, umax + 1);
-  }
-
-  // L udiv R. Luckily, there's only ever 2 sides to a udiv.
-  if (const SCEVUDivExpr *UDiv = dyn_cast<SCEVUDivExpr>(S)) {
-    ConstantRange L = getRange(UDiv->getLHS(), T, SE);
-    ConstantRange R = getRange(UDiv->getRHS(), T, SE);
-    if (L.isFullSet() && R.isFullSet()) return FullSet;
-
-    if (R.getUnsignedMax() == 0) {
-      // RHS must be single-element zero. Return an empty set.
-      return ConstantRange(R.getBitWidth(), false);
-    }
-
-    APInt Lower = L.getUnsignedMin().udiv(R.getUnsignedMax());
-
-    APInt Upper;
-
-    if (R.getUnsignedMin() == 0) {
-      // Just because it contains zero, doesn't mean it will also contain one.
-      ConstantRange NotZero(APInt(L.getBitWidth(), 1),
-                            APInt::getNullValue(L.getBitWidth()));
-      R = R.intersectWith(NotZero);
-    }
- 
-    // But, the intersection might still include zero. If it does, then we know
-    // it also included one.
-    if (R.contains(APInt::getNullValue(L.getBitWidth())))
-      Upper = L.getUnsignedMax();
-    else
-      Upper = L.getUnsignedMax().udiv(R.getUnsignedMin());
-
-    return ConstantRange(Lower, Upper);
-  }
-
-  // ConstantRange already implements the cast operators.
-
-  if (const SCEVZeroExtendExpr *ZExt = dyn_cast<SCEVZeroExtendExpr>(S)) {
-    T = SE.getTruncateOrZeroExtend(T, ZExt->getOperand()->getType());
-    ConstantRange X = getRange(ZExt->getOperand(), T, SE);
-    return X.zeroExtend(cast<IntegerType>(ZExt->getType())->getBitWidth());
-  }
-
-  if (const SCEVSignExtendExpr *SExt = dyn_cast<SCEVSignExtendExpr>(S)) {
-    T = SE.getTruncateOrZeroExtend(T, SExt->getOperand()->getType());
-    ConstantRange X = getRange(SExt->getOperand(), T, SE);
-    return X.signExtend(cast<IntegerType>(SExt->getType())->getBitWidth());
-  }
-
-  if (const SCEVTruncateExpr *Trunc = dyn_cast<SCEVTruncateExpr>(S)) {
-    T = SE.getTruncateOrZeroExtend(T, Trunc->getOperand()->getType());
-    ConstantRange X = getRange(Trunc->getOperand(), T, SE);
-    if (X.isFullSet()) return FullSet;
-    return X.truncate(cast<IntegerType>(Trunc->getType())->getBitWidth());
-  }
-
-  if (const SCEVAddRecExpr *AddRec = dyn_cast<SCEVAddRecExpr>(S)) {
-    const SCEVConstant *Trip = dyn_cast<SCEVConstant>(T);
-    if (!Trip) return FullSet;
-
-    if (AddRec->isAffine()) {
-      const SCEV *StartHandle = AddRec->getStart();
-      const SCEV *StepHandle = AddRec->getOperand(1);
-
-      const SCEVConstant *Step = dyn_cast<SCEVConstant>(StepHandle);
-      if (!Step) return FullSet;
-
-      uint32_t ExWidth = 2 * Trip->getValue()->getBitWidth();
-      APInt TripExt = Trip->getValue()->getValue(); TripExt.zext(ExWidth);
-      APInt StepExt = Step->getValue()->getValue(); StepExt.zext(ExWidth);
-      if ((TripExt * StepExt).ugt(APInt::getLowBitsSet(ExWidth, ExWidth >> 1)))
-        return FullSet;
-
-      const SCEV *EndHandle = SE.getAddExpr(StartHandle,
-                                           SE.getMulExpr(T, StepHandle));
-      const SCEVConstant *Start = dyn_cast<SCEVConstant>(StartHandle);
-      const SCEVConstant *End = dyn_cast<SCEVConstant>(EndHandle);
-      if (!Start || !End) return FullSet;
-
-      const APInt &StartInt = Start->getValue()->getValue();
-      const APInt &EndInt = End->getValue()->getValue();
-      const APInt &StepInt = Step->getValue()->getValue();
-
-      if (StepInt.isNegative()) {
-        if (EndInt == StartInt + 1) return FullSet;
-        return ConstantRange(EndInt, StartInt + 1);
-      } else {
-        if (StartInt == EndInt + 1) return FullSet;
-        return ConstantRange(StartInt, EndInt + 1);
-      }
-    }
-  }
-
-  // TODO: non-affine addrec, udiv, SCEVUnknown (narrowed from elsewhere)?
-
-  return FullSet;
-}
-
-void LoopVR::getAnalysisUsage(AnalysisUsage &AU) const {
-  AU.addRequiredTransitive<LoopInfo>();
-  AU.addRequiredTransitive<ScalarEvolution>();
-  AU.setPreservesAll();
-}
-
-bool LoopVR::runOnFunction(Function &F) { Map.clear(); return false; }
-
-void LoopVR::print(raw_ostream &OS, const Module *) const {
-  for (std::map<Value *, ConstantRange *>::const_iterator I = Map.begin(),
-       E = Map.end(); I != E; ++I) {
-    OS << *I->first << ": " << *I->second << '\n';
-  }
-}
-
-void LoopVR::releaseMemory() {
-  for (std::map<Value *, ConstantRange *>::iterator I = Map.begin(),
-       E = Map.end(); I != E; ++I) {
-    delete I->second;
-  }
-
-  Map.clear();  
-}
-
-ConstantRange LoopVR::compute(Value *V) {
-  if (ConstantInt *CI = dyn_cast<ConstantInt>(V))
-    return ConstantRange(CI->getValue());
-
-  Instruction *I = dyn_cast<Instruction>(V);
-  if (!I)
-    return ConstantRange(cast<IntegerType>(V->getType())->getBitWidth(), false);
-
-  LoopInfo &LI = getAnalysis<LoopInfo>();
-
-  Loop *L = LI.getLoopFor(I->getParent());
-  if (!L || L->isLoopInvariant(I))
-    return ConstantRange(cast<IntegerType>(V->getType())->getBitWidth(), false);
-
-  ScalarEvolution &SE = getAnalysis<ScalarEvolution>();
-
-  const SCEV *S = SE.getSCEV(I);
-  if (isa<SCEVUnknown>(S) || isa<SCEVCouldNotCompute>(S))
-    return ConstantRange(cast<IntegerType>(V->getType())->getBitWidth(), false);
-
-  return ConstantRange(getRange(S, L, SE));
-}
-
-ConstantRange LoopVR::get(Value *V) {
-  std::map<Value *, ConstantRange *>::iterator I = Map.find(V);
-  if (I == Map.end()) {
-    ConstantRange *CR = new ConstantRange(compute(V));
-    Map[V] = CR;
-    return *CR;
-  }
-
-  return *I->second;
-}
-
-void LoopVR::remove(Value *V) {
-  std::map<Value *, ConstantRange *>::iterator I = Map.find(V);
-  if (I != Map.end()) {
-    delete I->second;
-    Map.erase(I);
-  }
-}
-
-void LoopVR::narrow(Value *V, const ConstantRange &CR) {
-  if (CR.isFullSet()) return;
-
-  std::map<Value *, ConstantRange *>::iterator I = Map.find(V);
-  if (I == Map.end())
-    Map[V] = new ConstantRange(CR);
-  else
-    Map[V] = new ConstantRange(Map[V]->intersectWith(CR));
-}
diff --git a/libclamav/c++/llvm/lib/Analysis/MallocHelper.cpp b/libclamav/c++/llvm/lib/Analysis/MallocHelper.cpp
deleted file mode 100644
index ab6239e..0000000
--- a/libclamav/c++/llvm/lib/Analysis/MallocHelper.cpp
+++ /dev/null
@@ -1,218 +0,0 @@
-//===-- MallocHelper.cpp - Functions to identify malloc calls -------------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This family of functions identifies calls to malloc, bitcasts of malloc
-// calls, and the types and array sizes associated with them.
-//
-//===----------------------------------------------------------------------===//
-
-#include "llvm/Analysis/MallocHelper.h"
-#include "llvm/Constants.h"
-#include "llvm/Instructions.h"
-#include "llvm/Module.h"
-#include "llvm/Analysis/ConstantFolding.h"
-using namespace llvm;
-
-//===----------------------------------------------------------------------===//
-//  malloc Call Utility Functions.
-//
-
-/// isMalloc - Returns true if the the value is either a malloc call or a
-/// bitcast of the result of a malloc call.
-bool llvm::isMalloc(const Value* I) {
-  return extractMallocCall(I) || extractMallocCallFromBitCast(I);
-}
-
-static bool isMallocCall(const CallInst *CI) {
-  if (!CI)
-    return false;
-
-  const Module* M = CI->getParent()->getParent()->getParent();
-  Constant *MallocFunc = M->getFunction("malloc");
-
-  if (CI->getOperand(0) != MallocFunc)
-    return false;
-
-  return true;
-}
-
-/// extractMallocCall - Returns the corresponding CallInst if the instruction
-/// is a malloc call.  Since CallInst::CreateMalloc() only creates calls, we
-/// ignore InvokeInst here.
-const CallInst* llvm::extractMallocCall(const Value* I) {
-  const CallInst *CI = dyn_cast<CallInst>(I);
-  return (isMallocCall(CI)) ? CI : NULL;
-}
-
-CallInst* llvm::extractMallocCall(Value* I) {
-  CallInst *CI = dyn_cast<CallInst>(I);
-  return (isMallocCall(CI)) ? CI : NULL;
-}
-
-static bool isBitCastOfMallocCall(const BitCastInst* BCI) {
-  if (!BCI)
-    return false;
-    
-  return isMallocCall(dyn_cast<CallInst>(BCI->getOperand(0)));
-}
-
-/// extractMallocCallFromBitCast - Returns the corresponding CallInst if the
-/// instruction is a bitcast of the result of a malloc call.
-CallInst* llvm::extractMallocCallFromBitCast(Value* I) {
-  BitCastInst *BCI = dyn_cast<BitCastInst>(I);
-  return (isBitCastOfMallocCall(BCI)) ? cast<CallInst>(BCI->getOperand(0))
-                                      : NULL;
-}
-
-const CallInst* llvm::extractMallocCallFromBitCast(const Value* I) {
-  const BitCastInst *BCI = dyn_cast<BitCastInst>(I);
-  return (isBitCastOfMallocCall(BCI)) ? cast<CallInst>(BCI->getOperand(0))
-                                      : NULL;
-}
-
-static bool isArrayMallocHelper(const CallInst *CI, LLVMContext &Context,
-                                const TargetData* TD) {
-  if (!CI)
-    return false;
-
-  const Type* T = getMallocAllocatedType(CI);
-
-  // We can only indentify an array malloc if we know the type of the malloc 
-  // call.
-  if (!T) return false;
-
-  Value* MallocArg = CI->getOperand(1);
-  Constant *ElementSize = ConstantExpr::getSizeOf(T);
-  ElementSize = ConstantExpr::getTruncOrBitCast(ElementSize, 
-                                                MallocArg->getType());
-  Constant *FoldedElementSize = ConstantFoldConstantExpression(
-                                       cast<ConstantExpr>(ElementSize), 
-                                       Context, TD);
-
-
-  if (isa<ConstantExpr>(MallocArg))
-    return (MallocArg != ElementSize);
-
-  BinaryOperator *BI = dyn_cast<BinaryOperator>(MallocArg);
-  if (!BI)
-    return false;
-
-  if (BI->getOpcode() == Instruction::Mul)
-    // ArraySize * ElementSize
-    if (BI->getOperand(1) == ElementSize ||
-        (FoldedElementSize && BI->getOperand(1) == FoldedElementSize))
-      return true;
-
-  // TODO: Detect case where MallocArg mul has been transformed to shl.
-
-  return false;
-}
-
-/// isArrayMalloc - Returns the corresponding CallInst if the instruction 
-/// matches the malloc call IR generated by CallInst::CreateMalloc().  This 
-/// means that it is a malloc call with one bitcast use AND the malloc call's 
-/// size argument is:
-///  1. a constant not equal to the malloc's allocated type
-/// or
-///  2. the result of a multiplication by the malloc's allocated type
-/// Otherwise it returns NULL.
-/// The unique bitcast is needed to determine the type/size of the array
-/// allocation.
-CallInst* llvm::isArrayMalloc(Value* I, LLVMContext &Context,
-                              const TargetData* TD) {
-  CallInst *CI = extractMallocCall(I);
-  return (isArrayMallocHelper(CI, Context, TD)) ? CI : NULL;
-}
-
-const CallInst* llvm::isArrayMalloc(const Value* I, LLVMContext &Context,
-                                    const TargetData* TD) {
-  const CallInst *CI = extractMallocCall(I);
-  return (isArrayMallocHelper(CI, Context, TD)) ? CI : NULL;
-}
-
-/// getMallocType - Returns the PointerType resulting from the malloc call.
-/// This PointerType is the result type of the call's only bitcast use.
-/// If there is no unique bitcast use, then return NULL.
-const PointerType* llvm::getMallocType(const CallInst* CI) {
-  assert(isMalloc(CI) && "GetMallocType and not malloc call");
-  
-  const BitCastInst* BCI = NULL;
-  
-  // Determine if CallInst has a bitcast use.
-  for (Value::use_const_iterator UI = CI->use_begin(), E = CI->use_end();
-       UI != E; )
-    if ((BCI = dyn_cast<BitCastInst>(cast<Instruction>(*UI++))))
-      break;
-
-  // Malloc call has 1 bitcast use and no other uses, so type is the bitcast's
-  // destination type.
-  if (BCI && CI->hasOneUse())
-    return cast<PointerType>(BCI->getDestTy());
-
-  // Malloc call was not bitcast, so type is the malloc function's return type.
-  if (!BCI)
-    return cast<PointerType>(CI->getType());
-
-  // Type could not be determined.
-  return NULL;
-}
-
-/// getMallocAllocatedType - Returns the Type allocated by malloc call. This
-/// Type is the result type of the call's only bitcast use. If there is no
-/// unique bitcast use, then return NULL.
-const Type* llvm::getMallocAllocatedType(const CallInst* CI) {
-  const PointerType* PT = getMallocType(CI);
-  return PT ? PT->getElementType() : NULL;
-}
-
-/// isConstantOne - Return true only if val is constant int 1.
-static bool isConstantOne(Value *val) {
-  return isa<ConstantInt>(val) && cast<ConstantInt>(val)->isOne();
-}
-
-/// getMallocArraySize - Returns the array size of a malloc call.  The array
-/// size is computated in 1 of 3 ways:
-///  1. If the element type if of size 1, then array size is the argument to 
-///     malloc.
-///  2. Else if the malloc's argument is a constant, the array size is that
-///     argument divided by the element type's size.
-///  3. Else the malloc argument must be a multiplication and the array size is
-///     the first operand of the multiplication.
-/// This function returns constant 1 if:
-///  1. The malloc call's allocated type cannot be determined.
-///  2. IR wasn't created by a call to CallInst::CreateMalloc() with a non-NULL
-///     ArraySize.
-Value* llvm::getMallocArraySize(CallInst* CI, LLVMContext &Context,
-                                const TargetData* TD) {
-  // Match CreateMalloc's use of constant 1 array-size for non-array mallocs.
-  if (!isArrayMalloc(CI, Context, TD))
-    return ConstantInt::get(CI->getOperand(1)->getType(), 1);
-
-  Value* MallocArg = CI->getOperand(1);
-  assert(getMallocAllocatedType(CI) && "getMallocArraySize and no type");
-  Constant *ElementSize = ConstantExpr::getSizeOf(getMallocAllocatedType(CI));
-  ElementSize = ConstantExpr::getTruncOrBitCast(ElementSize, 
-                                                MallocArg->getType());
-
-  Constant* CO = dyn_cast<Constant>(MallocArg);
-  BinaryOperator* BO = dyn_cast<BinaryOperator>(MallocArg);
-  assert((isConstantOne(ElementSize) || CO || BO) &&
-         "getMallocArraySize and malformed malloc IR");
-      
-  if (isConstantOne(ElementSize))
-    return MallocArg;
-    
-  if (CO)
-    return CO->getOperand(0);
-    
-  // TODO: Detect case where MallocArg mul has been transformed to shl.
-
-  assert(BO && "getMallocArraySize not constant but not multiplication either");
-  return BO->getOperand(0);
-}
diff --git a/libclamav/c++/llvm/lib/Analysis/MemoryBuiltins.cpp b/libclamav/c++/llvm/lib/Analysis/MemoryBuiltins.cpp
new file mode 100644
index 0000000..b448628
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Analysis/MemoryBuiltins.cpp
@@ -0,0 +1,207 @@
+//===------ MemoryBuiltins.cpp - Identify calls to memory builtins --------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This family of functions identifies calls to builtin functions that allocate
+// or free memory.  
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/Analysis/MemoryBuiltins.h"
+#include "llvm/Constants.h"
+#include "llvm/Instructions.h"
+#include "llvm/Module.h"
+#include "llvm/Analysis/ValueTracking.h"
+#include "llvm/Target/TargetData.h"
+using namespace llvm;
+
+//===----------------------------------------------------------------------===//
+//  malloc Call Utility Functions.
+//
+
+/// isMalloc - Returns true if the value is either a malloc call or a
+/// bitcast of the result of a malloc call.
+bool llvm::isMalloc(const Value *I) {
+  return extractMallocCall(I) || extractMallocCallFromBitCast(I);
+}
+
+static bool isMallocCall(const CallInst *CI) {
+  if (!CI)
+    return false;
+
+  Function *Callee = CI->getCalledFunction();
+  if (Callee == 0 || !Callee->isDeclaration() || Callee->getName() != "malloc")
+    return false;
+
+  // Check malloc prototype.
+  // FIXME: workaround for PR5130; this will be obsolete once a nobuiltin
+  // attribute exists.
+  const FunctionType *FTy = Callee->getFunctionType();
+  if (FTy->getNumParams() != 1)
+    return false;
+  if (IntegerType *ITy = dyn_cast<IntegerType>(FTy->param_begin()->get())) {
+    if (ITy->getBitWidth() != 32 && ITy->getBitWidth() != 64)
+      return false;
+    return true;
+  }
+
+  return false;
+}
+
+/// extractMallocCall - Returns the corresponding CallInst if the instruction
+/// is a malloc call.  Since CallInst::CreateMalloc() only creates calls, we
+/// ignore InvokeInst here.
+const CallInst *llvm::extractMallocCall(const Value *I) {
+  const CallInst *CI = dyn_cast<CallInst>(I);
+  return (isMallocCall(CI)) ? CI : NULL;
+}
+
+CallInst *llvm::extractMallocCall(Value *I) {
+  CallInst *CI = dyn_cast<CallInst>(I);
+  return (isMallocCall(CI)) ? CI : NULL;
+}
+
+static bool isBitCastOfMallocCall(const BitCastInst *BCI) {
+  if (!BCI)
+    return false;
+    
+  return isMallocCall(dyn_cast<CallInst>(BCI->getOperand(0)));
+}
+
+/// extractMallocCallFromBitCast - Returns the corresponding CallInst if the
+/// instruction is a bitcast of the result of a malloc call.
+CallInst *llvm::extractMallocCallFromBitCast(Value *I) {
+  BitCastInst *BCI = dyn_cast<BitCastInst>(I);
+  return (isBitCastOfMallocCall(BCI)) ? cast<CallInst>(BCI->getOperand(0))
+                                      : NULL;
+}
+
+const CallInst *llvm::extractMallocCallFromBitCast(const Value *I) {
+  const BitCastInst *BCI = dyn_cast<BitCastInst>(I);
+  return (isBitCastOfMallocCall(BCI)) ? cast<CallInst>(BCI->getOperand(0))
+                                      : NULL;
+}
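+
+// For illustration (hypothetical IR): in
+//
+//   %raw = call i8* @malloc(i64 40)
+//   %arr = bitcast i8* %raw to i32*
+//
+// extractMallocCall(%raw) and extractMallocCallFromBitCast(%arr) both return
+// the malloc CallInst, so isMalloc is true for both values.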
+
+static Value *computeArraySize(const CallInst *CI, const TargetData *TD,
+                               bool LookThroughSExt = false) {
+  if (!CI)
+    return NULL;
+
+  // The size of the malloc's result type must be known to determine array size.
+  const Type *T = getMallocAllocatedType(CI);
+  if (!T || !T->isSized() || !TD)
+    return NULL;
+
+  unsigned ElementSize = TD->getTypeAllocSize(T);
+  if (const StructType *ST = dyn_cast<StructType>(T))
+    ElementSize = TD->getStructLayout(ST)->getSizeInBytes();
+
+  // If the malloc call's arg can be determined to be a multiple of
+  // ElementSize, return the multiple.  Otherwise, return NULL.
+  Value *MallocArg = CI->getOperand(1);
+  Value *Multiple = NULL;
+  if (ComputeMultiple(MallocArg, ElementSize, Multiple,
+                      LookThroughSExt))
+    return Multiple;
+
+  return NULL;
+}
+
+/// isArrayMalloc - Returns the corresponding CallInst if the instruction 
+/// is a call to malloc whose array size can be determined and the array size
+/// is not constant 1.  Otherwise, return NULL.
+const CallInst *llvm::isArrayMalloc(const Value *I, const TargetData *TD) {
+  const CallInst *CI = extractMallocCall(I);
+  Value *ArraySize = computeArraySize(CI, TD);
+
+  if (ArraySize &&
+      ArraySize != ConstantInt::get(CI->getOperand(1)->getType(), 1))
+    return CI;
+
+  // CI is a non-array malloc or we can't figure out that it is an array malloc.
+  return NULL;
+}
+
+/// getMallocType - Returns the PointerType resulting from the malloc call.
+/// The PointerType depends on the number of bitcast uses of the malloc call:
+///   0: PointerType is the call's return type.
+///   1: PointerType is the bitcast's result type.
+///  >1: Unique PointerType cannot be determined, return NULL.
+const PointerType *llvm::getMallocType(const CallInst *CI) {
+  assert(isMalloc(CI) && "getMallocType and not malloc call");
+  
+  const PointerType *MallocType = NULL;
+  unsigned NumOfBitCastUses = 0;
+
+  // Determine if CallInst has a bitcast use.
+  for (Value::use_const_iterator UI = CI->use_begin(), E = CI->use_end();
+       UI != E; )
+    if (const BitCastInst *BCI = dyn_cast<BitCastInst>(*UI++)) {
+      MallocType = cast<PointerType>(BCI->getDestTy());
+      NumOfBitCastUses++;
+    }
+
+  // Malloc call has 1 bitcast use, so type is the bitcast's destination type.
+  if (NumOfBitCastUses == 1)
+    return MallocType;
+
+  // Malloc call was not bitcast, so type is the malloc function's return type.
+  if (NumOfBitCastUses == 0)
+    return cast<PointerType>(CI->getType());
+
+  // Type could not be determined.
+  return NULL;
+}
+
+/// getMallocAllocatedType - Returns the Type allocated by a malloc call,
+/// i.e. the element type of the PointerType that getMallocType returns,
+/// or NULL if a unique PointerType cannot be determined.
+const Type *llvm::getMallocAllocatedType(const CallInst *CI) {
+  const PointerType *PT = getMallocType(CI);
+  return PT ? PT->getElementType() : NULL;
+}
+
+/// getMallocArraySize - Returns the array size of a malloc call.  If the 
+/// argument passed to malloc is a multiple of the size of the malloced type,
+/// then return that multiple.  For non-array mallocs, the multiple is
+/// constant 1.  Otherwise, return NULL for mallocs whose array size cannot be
+/// determined.
+Value *llvm::getMallocArraySize(CallInst *CI, const TargetData *TD,
+                                bool LookThroughSExt) {
+  assert(isMalloc(CI) && "getMallocArraySize and not malloc call");
+  return computeArraySize(CI, TD, LookThroughSExt);
+}
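+
+// For illustration (hypothetical IR, i32 elements of size 4, with a bitcast
+// to i32* as the call's only use):
+//
+//   %sz  = mul i64 %n, 4
+//   %raw = call i8* @malloc(i64 %sz)
+//
+// ComputeMultiple can show %sz is a multiple of 4, so getMallocArraySize
+// returns %n; for "call i8* @malloc(i64 4)" it would return the constant 1.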
+
+//===----------------------------------------------------------------------===//
+//  free Call Utility Functions.
+//
+
+/// isFreeCall - Returns true if the value is a call to the builtin free().
+bool llvm::isFreeCall(const Value *I) {
+  const CallInst *CI = dyn_cast<CallInst>(I);
+  if (!CI)
+    return false;
+  Function *Callee = CI->getCalledFunction();
+  if (Callee == 0 || !Callee->isDeclaration() || Callee->getName() != "free")
+    return false;
+
+  // Check free prototype.
+  // FIXME: workaround for PR5130; this will be obsolete once a nobuiltin
+  // attribute exists.
+  const FunctionType *FTy = Callee->getFunctionType();
+  if (!FTy->getReturnType()->isVoidTy())
+    return false;
+  if (FTy->getNumParams() != 1)
+    return false;
+  if (FTy->param_begin()->get() != Type::getInt8PtrTy(Callee->getContext()))
+    return false;
+
+  return true;
+}
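+
+// For illustration: "call void @free(i8* %p)" satisfies isFreeCall, while a
+// call to an unrelated function that merely happens to be named "free" but
+// has a different prototype (e.g. one returning i32) does not.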
diff --git a/libclamav/c++/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp b/libclamav/c++/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp
index 97b791c..f958e75 100644
--- a/libclamav/c++/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp
@@ -20,7 +20,8 @@
 #include "llvm/IntrinsicInst.h"
 #include "llvm/Function.h"
 #include "llvm/Analysis/AliasAnalysis.h"
-#include "llvm/Analysis/MallocHelper.h"
+#include "llvm/Analysis/InstructionSimplify.h"
+#include "llvm/Analysis/MemoryBuiltins.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/Support/PredIteratorCache.h"
@@ -113,10 +114,9 @@ getCallSiteDependencyFrom(CallSite CS, bool isReadOnlyCall,
     } else if (VAArgInst *V = dyn_cast<VAArgInst>(Inst)) {
       Pointer = V->getOperand(0);
       PointerSize = AA->getTypeStoreSize(V->getType());
-    } else if (FreeInst *F = dyn_cast<FreeInst>(Inst)) {
-      Pointer = F->getPointerOperand();
-      
-      // FreeInsts erase the entire structure
+    } else if (isFreeCall(Inst)) {
+      Pointer = Inst->getOperand(1);
+      // calls to free() erase the entire structure
       PointerSize = ~0ULL;
     } else if (isa<CallInst>(Inst) || isa<InvokeInst>(Inst)) {
       // Debug intrinsics don't cause dependences.
@@ -168,13 +168,54 @@ getCallSiteDependencyFrom(CallSite CS, bool isReadOnlyCall,
 /// location depends.  If isLoad is true, this routine ignore may-aliases with
 /// read-only operations.
 MemDepResult MemoryDependenceAnalysis::
-getPointerDependencyFrom(Value *MemPtr, uint64_t MemSize, bool isLoad,
+getPointerDependencyFrom(Value *MemPtr, uint64_t MemSize, bool isLoad, 
                          BasicBlock::iterator ScanIt, BasicBlock *BB) {
 
+  Value *invariantTag = 0;
+
   // Walk backwards through the basic block, looking for dependencies.
   while (ScanIt != BB->begin()) {
     Instruction *Inst = --ScanIt;
 
+    // If we're in an invariant region, no dependencies can be found before
+    // we pass an invariant-begin marker.
+    if (invariantTag == Inst) {
+      invariantTag = 0;
+      continue;
+    } else if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(Inst)) {
+      // If we pass an invariant-end marker, then we've just entered an
+      // invariant region and can start ignoring dependencies.
+      if (II->getIntrinsicID() == Intrinsic::invariant_end) {
+        uint64_t invariantSize = ~0ULL;
+        if (ConstantInt *CI = dyn_cast<ConstantInt>(II->getOperand(2)))
+          invariantSize = CI->getZExtValue();
+        
+        AliasAnalysis::AliasResult R =
+          AA->alias(II->getOperand(3), invariantSize, MemPtr, MemSize);
+        if (R == AliasAnalysis::MustAlias) {
+          invariantTag = II->getOperand(1);
+          continue;
+        }
+      
+      // If we reach a lifetime begin or end marker, then the query ends here
+      // because the value is undefined.
+      } else if (II->getIntrinsicID() == Intrinsic::lifetime_start ||
+                   II->getIntrinsicID() == Intrinsic::lifetime_end) {
+        uint64_t invariantSize = ~0ULL;
+        if (ConstantInt *CI = dyn_cast<ConstantInt>(II->getOperand(1)))
+          invariantSize = CI->getZExtValue();
+
+        AliasAnalysis::AliasResult R =
+          AA->alias(II->getOperand(2), invariantSize, MemPtr, MemSize);
+        if (R == AliasAnalysis::MustAlias)
+          return MemDepResult::getDef(II);
+      }
+    }
+
+    // If we're querying on a load and we're in an invariant region, we're done
+    // at this point. Nothing a load depends on can live in an invariant region.
+    if (isLoad && invariantTag) continue;
+
     // Debug intrinsics don't cause dependences.
     if (isa<DbgInfoIntrinsic>(Inst)) continue;
 
@@ -199,6 +240,10 @@ getPointerDependencyFrom(Value *MemPtr, uint64_t MemSize, bool isLoad,
     }
     
     if (StoreInst *SI = dyn_cast<StoreInst>(Inst)) {
+      // There can't be stores to the value we care about inside an 
+      // invariant region.
+      if (invariantTag) continue;
+      
       // If alias analysis can tell that this store is guaranteed to not modify
       // the query pointer, ignore it.  Use getModRefInfo to handle cases where
       // the query pointer points to constant memory etc.
@@ -225,16 +270,11 @@ getPointerDependencyFrom(Value *MemPtr, uint64_t MemSize, bool isLoad,
     // the allocation, return Def.  This means that there is no dependence and
     // the access can be optimized based on that.  For example, a load could
     // turn into undef.
-    if (AllocationInst *AI = dyn_cast<AllocationInst>(Inst)) {
-      Value *AccessPtr = MemPtr->getUnderlyingObject();
-      
-      if (AccessPtr == AI ||
-          AA->alias(AI, 1, AccessPtr, 1) == AliasAnalysis::MustAlias)
-        return MemDepResult::getDef(AI);
-      continue;
-    }
-    
-    if (isMalloc(Inst)) {
+    // Note: Only determine this to be a malloc if Inst is the malloc call, not
+    // a subsequent bitcast of the malloc call result.  There can be stores to
+    // the malloced memory between the malloc call and its bitcast uses, and we
+    // need to continue scanning until the malloc call.
+    if (isa<AllocaInst>(Inst) || extractMallocCall(Inst)) {
       Value *AccessPtr = MemPtr->getUnderlyingObject();
       
       if (AccessPtr == Inst ||
@@ -248,12 +288,16 @@ getPointerDependencyFrom(Value *MemPtr, uint64_t MemSize, bool isLoad,
     case AliasAnalysis::NoModRef:
       // If the call has no effect on the queried pointer, just ignore it.
       continue;
+    case AliasAnalysis::Mod:
+      // If we're in an invariant region, we can ignore calls that ONLY
+      // modify the pointer.
+      if (invariantTag) continue;
+      return MemDepResult::getClobber(Inst);
     case AliasAnalysis::Ref:
       // If the call is known to never store to the pointer, and if this is a
       // load query, we can safely ignore it (scan past it).
       if (isLoad)
         continue;
-      // FALL THROUGH.
     default:
       // Otherwise, there is a potential dependence.  Return a clobber.
       return MemDepResult::getClobber(Inst);
@@ -319,15 +363,15 @@ MemDepResult MemoryDependenceAnalysis::getDependency(Instruction *QueryInst) {
       MemPtr = LI->getPointerOperand();
       MemSize = AA->getTypeStoreSize(LI->getType());
     }
+  } else if (isFreeCall(QueryInst)) {
+    MemPtr = QueryInst->getOperand(1);
+    // calls to free() erase the entire structure, not just a field.
+    MemSize = ~0UL;
   } else if (isa<CallInst>(QueryInst) || isa<InvokeInst>(QueryInst)) {
     CallSite QueryCS = CallSite::get(QueryInst);
     bool isReadOnly = AA->onlyReadsMemory(QueryCS);
     LocalCache = getCallSiteDependencyFrom(QueryCS, isReadOnly, ScanPos,
                                            QueryParent);
-  } else if (FreeInst *FI = dyn_cast<FreeInst>(QueryInst)) {
-    MemPtr = FI->getPointerOperand();
-    // FreeInsts erase the entire structure, not just a field.
-    MemSize = ~0UL;
   } else {
     // Non-memory instruction.
     LocalCache = MemDepResult::getClobber(--BasicBlock::iterator(ScanPos));
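
Note (illustrative): how a client pass consumes these local queries.  The
MemDepResult accessors exist in this interface; QueryInst and the
surrounding wiring are assumptions:

    MemoryDependenceAnalysis &MD = getAnalysis<MemoryDependenceAnalysis>();
    MemDepResult Res = MD.getDependency(QueryInst);
    if (Res.isDef()) {
      // Something that must define the queried location, e.g. a store or,
      // after this patch, a recognized malloc()/free() call.
      Instruction *Def = Res.getInst();
      (void)Def;
    } else if (Res.isClobber()) {
      Instruction *Clobber = Res.getInst(); // may write the location
      (void)Clobber;
    } else if (Res.isNonLocal()) {
      // The dependency lies outside this block; use the non-local API.
    }
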
@@ -641,6 +685,170 @@ SortNonLocalDepInfoCache(MemoryDependenceAnalysis::NonLocalDepInfo &Cache,
   }
 }
 
+/// isPHITranslatable - Return true if the specified computation is derived from
+/// a PHI node in the current block and if it is simple enough for us to handle.
+static bool isPHITranslatable(Instruction *Inst) {
+  if (isa<PHINode>(Inst))
+    return true;
+  
+  // We can handle bitcast of a PHI, but the PHI needs to be in the same block
+  // as the bitcast.
+  if (BitCastInst *BC = dyn_cast<BitCastInst>(Inst))
+    if (PHINode *PN = dyn_cast<PHINode>(BC->getOperand(0)))
+      if (PN->getParent() == BC->getParent())
+        return true;
+  
+  // We can translate a GEP that uses a PHI in the current block for at least
+  // one of its operands.
+  if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(Inst)) {
+    for (unsigned i = 0, e = GEP->getNumOperands(); i != e; ++i)
+      if (PHINode *PN = dyn_cast<PHINode>(GEP->getOperand(i)))
+        if (PN->getParent() == GEP->getParent())
+          return true;
+  }
+
+  //   cerr << "MEMDEP: Could not PHI translate: " << *Pointer;
+  //   if (isa<BitCastInst>(PtrInst) || isa<GetElementPtrInst>(PtrInst))
+  //     cerr << "OP:\t\t\t\t" << *PtrInst->getOperand(0);
+  
+  return false;
+}
+
+/// PHITranslatePointer - Given a computation that satisfies the
+/// isPHITranslatable predicate, see if we can translate the computation into
+/// the specified predecessor block.  If so, return that value.
+Value *MemoryDependenceAnalysis::
+PHITranslatePointer(Value *InVal, BasicBlock *CurBB, BasicBlock *Pred,
+                    const TargetData *TD) const {  
+  // If the input value is not an instruction, or if it is not defined in CurBB,
+  // then we don't need to phi translate it.
+  Instruction *Inst = dyn_cast<Instruction>(InVal);
+  if (Inst == 0 || Inst->getParent() != CurBB)
+    return InVal;
+  
+  if (PHINode *PN = dyn_cast<PHINode>(Inst))
+    return PN->getIncomingValueForBlock(Pred);
+  
+  // Handle bitcast of PHI.
+  if (BitCastInst *BC = dyn_cast<BitCastInst>(Inst)) {
+    PHINode *BCPN = cast<PHINode>(BC->getOperand(0));
+    Value *PHIIn = BCPN->getIncomingValueForBlock(Pred);
+    
+    // Constants are trivial to phi translate.
+    if (Constant *C = dyn_cast<Constant>(PHIIn))
+      return ConstantExpr::getBitCast(C, BC->getType());
+    
+    // Otherwise we have to see if a bitcasted version of the incoming pointer
+    // is available.  If so, we can use it, otherwise we have to fail.
+    for (Value::use_iterator UI = PHIIn->use_begin(), E = PHIIn->use_end();
+         UI != E; ++UI) {
+      if (BitCastInst *BCI = dyn_cast<BitCastInst>(*UI))
+        if (BCI->getType() == BC->getType())
+          return BCI;
+    }
+    return 0;
+  }
+
+  // Handle getelementptr with at least one PHI operand.
+  if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(Inst)) {
+    SmallVector<Value*, 8> GEPOps;
+    BasicBlock *CurBB = GEP->getParent();
+    for (unsigned i = 0, e = GEP->getNumOperands(); i != e; ++i) {
+      Value *GEPOp = GEP->getOperand(i);
+      // No PHI translation is needed for operands whose values are live-in
+      // to the predecessor block.
+      if (!isa<Instruction>(GEPOp) ||
+          cast<Instruction>(GEPOp)->getParent() != CurBB) {
+        GEPOps.push_back(GEPOp);
+        continue;
+      }
+      
+      // If the operand is a phi node, do phi translation.
+      if (PHINode *PN = dyn_cast<PHINode>(GEPOp)) {
+        GEPOps.push_back(PN->getIncomingValueForBlock(Pred));
+        continue;
+      }
+      
+      // Otherwise, we can't PHI translate this random value defined in this
+      // block.
+      return 0;
+    }
+    
+    // Simplify the GEP to handle 'gep x, 0' -> x etc.
+    if (Value *V = SimplifyGEPInst(&GEPOps[0], GEPOps.size(), TD))
+      return V;
+
+    // Scan to see if we have this GEP available.
+    Value *APHIOp = GEPOps[0];
+    for (Value::use_iterator UI = APHIOp->use_begin(), E = APHIOp->use_end();
+         UI != E; ++UI) {
+      if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(*UI))
+        if (GEPI->getType() == GEP->getType() &&
+            GEPI->getNumOperands() == GEPOps.size() &&
+            GEPI->getParent()->getParent() == CurBB->getParent()) {
+          bool Mismatch = false;
+          for (unsigned i = 0, e = GEPOps.size(); i != e; ++i)
+            if (GEPI->getOperand(i) != GEPOps[i]) {
+              Mismatch = true;
+              break;
+            }
+          if (!Mismatch)
+            return GEPI;
+        }
+    }
+    return 0;
+  }
+  
+  return 0;
+}
+
+/// InsertPHITranslatedPointer - Insert a computation of the PHI translated
+/// version of 'V' for the edge PredBB->CurBB into the end of the PredBB
+/// block.
+///
+/// This is only called when PHITranslatePointer returns a value that doesn't
+/// dominate the block, so we don't need to handle the trivial cases here.
+Value *MemoryDependenceAnalysis::
+InsertPHITranslatedPointer(Value *InVal, BasicBlock *CurBB,
+                           BasicBlock *PredBB, const TargetData *TD) const {
+  // If the input value isn't an instruction in CurBB, it doesn't need phi
+  // translation.
+  Instruction *Inst = cast<Instruction>(InVal);
+  assert(Inst->getParent() == CurBB && "Doesn't need phi trans");
+
+  // Handle bitcast of PHI.
+  if (BitCastInst *BC = dyn_cast<BitCastInst>(Inst)) {
+    PHINode *BCPN = cast<PHINode>(BC->getOperand(0));
+    Value *PHIIn = BCPN->getIncomingValueForBlock(PredBB);
+    
+    // Otherwise insert a bitcast at the end of PredBB.
+    return new BitCastInst(PHIIn, InVal->getType(),
+                           InVal->getName()+".phi.trans.insert",
+                           PredBB->getTerminator());
+  }
+  
+  // Handle getelementptr with at least one PHI operand.
+  if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(Inst)) {
+    SmallVector<Value*, 8> GEPOps;
+    Value *APHIOp = 0;
+    BasicBlock *CurBB = GEP->getParent();
+    for (unsigned i = 0, e = GEP->getNumOperands(); i != e; ++i) {
+      GEPOps.push_back(GEP->getOperand(i)->DoPHITranslation(CurBB, PredBB));
+      if (!isa<Constant>(GEPOps.back()))
+        APHIOp = GEPOps.back();
+    }
+    
+    GetElementPtrInst *Result = 
+      GetElementPtrInst::Create(GEPOps[0], GEPOps.begin()+1, GEPOps.end(),
+                                InVal->getName()+".phi.trans.insert",
+                                PredBB->getTerminator());
+    Result->setIsInBounds(GEP->isInBounds());
+    return Result;
+  }
+  
+  return 0;
+}
 
 /// getNonLocalPointerDepFromBB - Perform a dependency query based on
 /// pointer/pointeesize starting at the end of StartBB.  Add any clobber/def
@@ -784,66 +992,70 @@ getNonLocalPointerDepFromBB(Value *Pointer, uint64_t PointeeSize,
       NumSortedEntries = Cache->size();
     }
     
-    // If this is directly a PHI node, just use the incoming values for each
-    // pred as the phi translated version.
-    if (PHINode *PtrPHI = dyn_cast<PHINode>(PtrInst)) {
-      Cache = 0;
-      
-      for (BasicBlock **PI = PredCache->GetPreds(BB); *PI; ++PI) {
-        BasicBlock *Pred = *PI;
-        Value *PredPtr = PtrPHI->getIncomingValueForBlock(Pred);
-        
-        // Check to see if we have already visited this pred block with another
-        // pointer.  If so, we can't do this lookup.  This failure can occur
-        // with PHI translation when a critical edge exists and the PHI node in
-        // the successor translates to a pointer value different than the
-        // pointer the block was first analyzed with.
-        std::pair<DenseMap<BasicBlock*,Value*>::iterator, bool>
-          InsertRes = Visited.insert(std::make_pair(Pred, PredPtr));
+    // If this is a computation derived from a PHI node, use the suitably
+    // translated incoming values for each pred as the phi translated version.
+    if (!isPHITranslatable(PtrInst))
+      goto PredTranslationFailure;
 
-        if (!InsertRes.second) {
-          // If the predecessor was visited with PredPtr, then we already did
-          // the analysis and can ignore it.
-          if (InsertRes.first->second == PredPtr)
-            continue;
-          
-          // Otherwise, the block was previously analyzed with a different
-          // pointer.  We can't represent the result of this case, so we just
-          // treat this as a phi translation failure.
-          goto PredTranslationFailure;
-        }
+    Cache = 0;
+      
+    for (BasicBlock **PI = PredCache->GetPreds(BB); *PI; ++PI) {
+      BasicBlock *Pred = *PI;
+      Value *PredPtr = PHITranslatePointer(PtrInst, BB, Pred, TD);
+      
+      // If PHI translation fails, bail out.
+      if (PredPtr == 0) {
+        // FIXME: Instead of modelling this as a phi trans failure, we should
+        // model this as a clobber in the one predecessor.  This will allow
+        // us to PRE values that are only available in some preds but not all.
+        goto PredTranslationFailure;
+      }
+      
+      // Check to see if we have already visited this pred block with another
+      // pointer.  If so, we can't do this lookup.  This failure can occur
+      // with PHI translation when a critical edge exists and the PHI node in
+      // the successor translates to a pointer value different than the
+      // pointer the block was first analyzed with.
+      std::pair<DenseMap<BasicBlock*,Value*>::iterator, bool>
+        InsertRes = Visited.insert(std::make_pair(Pred, PredPtr));
 
-        // FIXME: it is entirely possible that PHI translating will end up with
-        // the same value.  Consider PHI translating something like:
-        // X = phi [x, bb1], [y, bb2].  PHI translating for bb1 doesn't *need*
-        // to recurse here, pedantically speaking.
+      if (!InsertRes.second) {
+        // If the predecessor was visited with PredPtr, then we already did
+        // the analysis and can ignore it.
+        if (InsertRes.first->second == PredPtr)
+          continue;
         
-        // If we have a problem phi translating, fall through to the code below
-        // to handle the failure condition.
-        if (getNonLocalPointerDepFromBB(PredPtr, PointeeSize, isLoad, Pred,
-                                        Result, Visited))
-          goto PredTranslationFailure;
+        // Otherwise, the block was previously analyzed with a different
+        // pointer.  We can't represent the result of this case, so we just
+        // treat this as a phi translation failure.
+        goto PredTranslationFailure;
       }
+
+      // FIXME: it is entirely possible that PHI translating will end up with
+      // the same value.  Consider PHI translating something like:
+      // X = phi [x, bb1], [y, bb2].  PHI translating for bb1 doesn't *need*
+      // to recurse here, pedantically speaking.
       
-      // Refresh the CacheInfo/Cache pointer so that it isn't invalidated.
-      CacheInfo = &NonLocalPointerDeps[CacheKey];
-      Cache = &CacheInfo->second;
-      NumSortedEntries = Cache->size();
-      
-      // Since we did phi translation, the "Cache" set won't contain all of the
-      // results for the query.  This is ok (we can still use it to accelerate
-      // specific block queries) but we can't do the fastpath "return all
-      // results from the set"  Clear out the indicator for this.
-      CacheInfo->first = BBSkipFirstBlockPair();
-      SkipFirstBlock = false;
-      continue;
+      // If we have a problem phi translating, fall through to the code below
+      // to handle the failure condition.
+      if (getNonLocalPointerDepFromBB(PredPtr, PointeeSize, isLoad, Pred,
+                                      Result, Visited))
+        goto PredTranslationFailure;
     }
     
-    // TODO: BITCAST, GEP.
+    // Refresh the CacheInfo/Cache pointer so that it isn't invalidated.
+    CacheInfo = &NonLocalPointerDeps[CacheKey];
+    Cache = &CacheInfo->second;
+    NumSortedEntries = Cache->size();
     
-    //   cerr << "MEMDEP: Could not PHI translate: " << *Pointer;
-    //   if (isa<BitCastInst>(PtrInst) || isa<GetElementPtrInst>(PtrInst))
-    //     cerr << "OP:\t\t\t\t" << *PtrInst->getOperand(0);
+    // Since we did phi translation, the "Cache" set won't contain all of the
+    // results for the query.  This is ok (we can still use it to accelerate
+    // specific block queries) but we can't do the fastpath "return all
+    // results from the set".  Clear out the indicator for this.
+    CacheInfo->first = BBSkipFirstBlockPair();
+    SkipFirstBlock = false;
+    continue;
+
   PredTranslationFailure:
     
     if (Cache == 0) {
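
Note (illustrative): the shape of the PHI translation implemented above,
before the PointerTracking.cpp changes.  Given

    // CurBB:  %p = phi i8* [ %a, %Pred1 ], [ %b, %Pred2 ]
    //         %g = getelementptr i8* %p, i32 4

translating %g into %Pred1 asks for an available equivalent of
"getelementptr i8* %a, i32 4".  A hedged sketch of the intended call pattern
(MD, Ptr, CurBB, Pred and TD assumed in scope; the GVN-side wiring is an
assumption, not shown by this diff):

    Value *PredPtr = MD.PHITranslatePointer(Ptr, CurBB, Pred, TD);
    if (PredPtr == 0) {
      // Nothing equivalent exists in Pred yet; materialize the computation
      // at the end of the predecessor block.
      PredPtr = MD.InsertPHITranslatedPointer(Ptr, CurBB, Pred, TD);
    }
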
diff --git a/libclamav/c++/llvm/lib/Analysis/PointerTracking.cpp b/libclamav/c++/llvm/lib/Analysis/PointerTracking.cpp
index 2281836..8da07e7 100644
--- a/libclamav/c++/llvm/lib/Analysis/PointerTracking.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/PointerTracking.cpp
@@ -10,10 +10,11 @@
 // This file implements tracking of pointer bounds.
 //
 //===----------------------------------------------------------------------===//
+
 #include "llvm/Analysis/ConstantFolding.h"
 #include "llvm/Analysis/Dominators.h"
 #include "llvm/Analysis/LoopInfo.h"
-#include "llvm/Analysis/MallocHelper.h"
+#include "llvm/Analysis/MemoryBuiltins.h"
 #include "llvm/Analysis/PointerTracking.h"
 #include "llvm/Analysis/ScalarEvolution.h"
 #include "llvm/Analysis/ScalarEvolutionExpressions.h"
@@ -48,7 +49,7 @@ void PointerTracking::getAnalysisUsage(AnalysisUsage &AU) const {
 }
 
 bool PointerTracking::doInitialization(Module &M) {
-  const Type *PTy = PointerType::getUnqual(Type::getInt8Ty(M.getContext()));
+  const Type *PTy = Type::getInt8PtrTy(M.getContext());
 
   // Find calloc(i64, i64) or calloc(i32, i32).
   callocFunc = M.getFunction("calloc");
@@ -93,7 +94,7 @@ bool PointerTracking::doInitialization(Module &M) {
 const SCEV *PointerTracking::computeAllocationCount(Value *P,
                                                     const Type *&Ty) const {
   Value *V = P->stripPointerCasts();
-  if (AllocationInst *AI = dyn_cast<AllocationInst>(V)) {
+  if (AllocaInst *AI = dyn_cast<AllocaInst>(V)) {
     Value *arraySize = AI->getArraySize();
     Ty = AI->getAllocatedType();
     // arraySize elements of type Ty.
@@ -101,9 +102,10 @@ const SCEV *PointerTracking::computeAllocationCount(Value *P,
   }
 
   if (CallInst *CI = extractMallocCall(V)) {
-    Value *arraySize = getMallocArraySize(CI, P->getContext(), TD);
-    Ty = getMallocAllocatedType(CI);
-    if (!Ty || !arraySize) return SE->getCouldNotCompute();
+    Value *arraySize = getMallocArraySize(CI, TD);
+    const Type* AllocTy = getMallocAllocatedType(CI);
+    if (!AllocTy || !arraySize) return SE->getCouldNotCompute();
+    Ty = AllocTy;
     // arraySize elements of type Ty.
     return SE->getSCEV(arraySize);
   }
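
Note (illustrative): the MemoryBuiltins pattern used above, isolated.  Both
helpers can fail, so both results must be checked before use, as the hunk
now does (V and TD assumed in scope):

    if (CallInst *CI = extractMallocCall(V->stripPointerCasts())) {
      Value *ArraySize    = getMallocArraySize(CI, TD);   // may be null
      const Type *AllocTy = getMallocAllocatedType(CI);   // may be null
      if (ArraySize && AllocTy) {
        // CI allocates ArraySize elements of type AllocTy.
      }
    }
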
diff --git a/libclamav/c++/llvm/lib/Analysis/ProfileEstimatorPass.cpp b/libclamav/c++/llvm/lib/Analysis/ProfileEstimatorPass.cpp
index c585c1d..e767891 100644
--- a/libclamav/c++/llvm/lib/Analysis/ProfileEstimatorPass.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ProfileEstimatorPass.cpp
@@ -30,8 +30,7 @@ LoopWeight(
 );
 
 namespace {
-  class VISIBILITY_HIDDEN ProfileEstimatorPass :
-      public FunctionPass, public ProfileInfo {
+  class ProfileEstimatorPass : public FunctionPass, public ProfileInfo {
     double ExecCount;
     LoopInfo *LI;
     std::set<BasicBlock*>  BBToVisit;
diff --git a/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp b/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp
index 9efdd23..7f24f5a 100644
--- a/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp
@@ -16,7 +16,6 @@
 #include "llvm/Analysis/ProfileInfo.h"
 #include "llvm/Pass.h"
 #include "llvm/Support/CFG.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Support/Format.h"
@@ -178,8 +177,7 @@ raw_ostream& llvm::operator<<(raw_ostream &O, ProfileInfo::Edge E) {
 //
 
 namespace {
-  struct VISIBILITY_HIDDEN NoProfileInfo 
-    : public ImmutablePass, public ProfileInfo {
+  struct NoProfileInfo : public ImmutablePass, public ProfileInfo {
     static char ID; // Class identification, replacement for typeinfo
     NoProfileInfo() : ImmutablePass(&ID) {}
   };
diff --git a/libclamav/c++/llvm/lib/Analysis/ProfileInfoLoaderPass.cpp b/libclamav/c++/llvm/lib/Analysis/ProfileInfoLoaderPass.cpp
index 89d90bc..9e1dfb6 100644
--- a/libclamav/c++/llvm/lib/Analysis/ProfileInfoLoaderPass.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ProfileInfoLoaderPass.cpp
@@ -20,7 +20,6 @@
 #include "llvm/Analysis/ProfileInfo.h"
 #include "llvm/Analysis/ProfileInfoLoader.h"
 #include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/CFG.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
@@ -38,7 +37,7 @@ ProfileInfoFilename("profile-info-file", cl::init("llvmprof.out"),
                     cl::desc("Profile file loaded by -profile-loader"));
 
 namespace {
-  class VISIBILITY_HIDDEN LoaderPass : public ModulePass, public ProfileInfo {
+  class LoaderPass : public ModulePass, public ProfileInfo {
     std::string Filename;
     std::set<Edge> SpanningTree;
     std::set<const BasicBlock*> BBisUnvisited;
@@ -61,7 +60,7 @@ namespace {
     // recurseBasicBlock() - Calculates the edge weights for as many basic
     // blocks as possible.
     virtual void recurseBasicBlock(const BasicBlock *BB);
-    virtual void readEdgeOrRemember(Edge, Edge&, unsigned &, unsigned &);
+    virtual void readEdgeOrRemember(Edge, Edge&, unsigned &, double &);
     virtual void readEdge(ProfileInfo::Edge, std::vector<unsigned>&);
 
     /// run - Load the profile information from the specified file.
@@ -85,7 +84,7 @@ Pass *llvm::createProfileLoaderPass(const std::string &Filename) {
 }
 
 void LoaderPass::readEdgeOrRemember(Edge edge, Edge &tocalc, 
-                                    unsigned &uncalc, unsigned &count) {
+                                    unsigned &uncalc, double &count) {
   double w;
   if ((w = getEdgeWeight(edge)) == MissingValue) {
     tocalc = edge;
@@ -118,7 +117,7 @@ void LoaderPass::recurseBasicBlock(const BasicBlock *BB) {
 
   // collect weights of all incoming and outgoing edges, remember edges that
   // have no value
-  unsigned incount = 0;
+  double incount = 0;
   SmallSet<const BasicBlock*,8> pred_visited;
   pred_const_iterator bbi = pred_begin(BB), bbe = pred_end(BB);
   if (bbi==bbe) {
@@ -130,7 +129,7 @@ void LoaderPass::recurseBasicBlock(const BasicBlock *BB) {
     }
   }
 
-  unsigned outcount = 0;
+  double outcount = 0;
   SmallSet<const BasicBlock*,8> succ_visited;
   succ_const_iterator sbbi = succ_begin(BB), sbbe = succ_end(BB);
   if (sbbi==sbbe) {
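
Note (illustrative): the unsigned -> double switch above matters because
ProfileInfo edge weights are doubles and MissingValue is a double sentinel;
accumulating into an unsigned would truncate fractional, estimated weights.
A minimal sketch of the accumulation as now typed (e assumed in scope):

    double incount = 0;
    double w = getEdgeWeight(e);   // double-valued, may be MissingValue
    if (w != MissingValue)
      incount += w;                // fractional weights survive intact
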
diff --git a/libclamav/c++/llvm/lib/Analysis/ProfileVerifierPass.cpp b/libclamav/c++/llvm/lib/Analysis/ProfileVerifierPass.cpp
index 9766da5..5f36294 100644
--- a/libclamav/c++/llvm/lib/Analysis/ProfileVerifierPass.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ProfileVerifierPass.cpp
@@ -30,7 +30,7 @@ ProfileVerifierDisableAssertions("profile-verifier-noassert",
      cl::desc("Disable assertions"));
 
 namespace {
-  class VISIBILITY_HIDDEN ProfileVerifierPass : public FunctionPass {
+  class ProfileVerifierPass : public FunctionPass {
 
     struct DetailedBlockInfo {
       const BasicBlock *BB;
@@ -229,7 +229,8 @@ void ProfileVerifierPass::recurseBasicBlock(const BasicBlock *BB) {
   // to debug printers.
   DetailedBlockInfo DI;
   DI.BB = BB;
-  DI.outCount = DI.inCount = DI.inWeight = DI.outWeight = 0;
+  DI.outCount = DI.inCount = 0;
+  DI.inWeight = DI.outWeight = 0.0;
 
   // Read predecessors.
   std::set<const BasicBlock*> ProcessedPreds;
diff --git a/libclamav/c++/llvm/lib/Analysis/ScalarEvolution.cpp b/libclamav/c++/llvm/lib/Analysis/ScalarEvolution.cpp
index 12ad429..c6835ef 100644
--- a/libclamav/c++/llvm/lib/Analysis/ScalarEvolution.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ScalarEvolution.cpp
@@ -74,7 +74,6 @@
 #include "llvm/Assembly/Writer.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ConstantRange.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/GetElementPtrTypeIterator.h"
@@ -401,7 +400,7 @@ namespace {
   /// SCEVComplexityCompare - Return true if the complexity of the LHS is less
   /// than the complexity of the RHS.  This comparator is used to canonicalize
   /// expressions.
-  class VISIBILITY_HIDDEN SCEVComplexityCompare {
+  class SCEVComplexityCompare {
     LoopInfo *LI;
   public:
     explicit SCEVComplexityCompare(LoopInfo *li) : LI(li) {}
@@ -1191,7 +1190,8 @@ namespace {
 
 /// getAddExpr - Get a canonical add expression, or something simpler if
 /// possible.
-const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops) {
+const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
+                                        bool HasNUW, bool HasNSW) {
   assert(!Ops.empty() && "Cannot get empty add!");
   if (Ops.size() == 1) return Ops[0];
 #ifndef NDEBUG
@@ -1241,7 +1241,7 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops) {
         return Mul;
       Ops.erase(Ops.begin()+i, Ops.begin()+i+2);
       Ops.push_back(Mul);
-      return getAddExpr(Ops);
+      return getAddExpr(Ops, HasNUW, HasNSW);
     }
 
   // Check for truncates. If all the operands are truncated from the same
@@ -1296,7 +1296,7 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops) {
     }
     if (Ok) {
       // Evaluate the expression in the larger type.
-      const SCEV *Fold = getAddExpr(LargeOps);
+      const SCEV *Fold = getAddExpr(LargeOps, HasNUW, HasNSW);
       // If it folds to something simple, use it. Otherwise, don't.
       if (isa<SCEVConstant>(Fold) || isa<SCEVUnknown>(Fold))
         return getTruncateExpr(Fold, DstType);
@@ -1516,16 +1516,19 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops) {
     ID.AddPointer(Ops[i]);
   void *IP = 0;
   if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP)) return S;
-  SCEV *S = SCEVAllocator.Allocate<SCEVAddExpr>();
+  SCEVAddExpr *S = SCEVAllocator.Allocate<SCEVAddExpr>();
   new (S) SCEVAddExpr(ID, Ops);
   UniqueSCEVs.InsertNode(S, IP);
+  if (HasNUW) S->setHasNoUnsignedWrap(true);
+  if (HasNSW) S->setHasNoSignedWrap(true);
   return S;
 }
 
 
 /// getMulExpr - Get a canonical multiply expression, or something simpler if
 /// possible.
-const SCEV *ScalarEvolution::getMulExpr(SmallVectorImpl<const SCEV *> &Ops) {
+const SCEV *ScalarEvolution::getMulExpr(SmallVectorImpl<const SCEV *> &Ops,
+                                        bool HasNUW, bool HasNSW) {
   assert(!Ops.empty() && "Cannot get empty mul!");
 #ifndef NDEBUG
   for (unsigned i = 1, e = Ops.size(); i != e; ++i)
@@ -1688,9 +1691,11 @@ const SCEV *ScalarEvolution::getMulExpr(SmallVectorImpl<const SCEV *> &Ops) {
     ID.AddPointer(Ops[i]);
   void *IP = 0;
   if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP)) return S;
-  SCEV *S = SCEVAllocator.Allocate<SCEVMulExpr>();
+  SCEVMulExpr *S = SCEVAllocator.Allocate<SCEVMulExpr>();
   new (S) SCEVMulExpr(ID, Ops);
   UniqueSCEVs.InsertNode(S, IP);
+  if (HasNUW) S->setHasNoUnsignedWrap(true);
+  if (HasNSW) S->setHasNoSignedWrap(true);
   return S;
 }
 
@@ -1797,7 +1802,8 @@ const SCEV *ScalarEvolution::getUDivExpr(const SCEV *LHS,
 /// getAddRecExpr - Get an add recurrence expression for the specified loop.
 /// Simplify the expression as much as possible.
 const SCEV *ScalarEvolution::getAddRecExpr(const SCEV *Start,
-                                           const SCEV *Step, const Loop *L) {
+                                           const SCEV *Step, const Loop *L,
+                                           bool HasNUW, bool HasNSW) {
   SmallVector<const SCEV *, 4> Operands;
   Operands.push_back(Start);
   if (const SCEVAddRecExpr *StepChrec = dyn_cast<SCEVAddRecExpr>(Step))
@@ -1808,14 +1814,15 @@ const SCEV *ScalarEvolution::getAddRecExpr(const SCEV *Start,
     }
 
   Operands.push_back(Step);
-  return getAddRecExpr(Operands, L);
+  return getAddRecExpr(Operands, L, HasNUW, HasNSW);
 }
 
 /// getAddRecExpr - Get an add recurrence expression for the specified loop.
 /// Simplify the expression as much as possible.
 const SCEV *
 ScalarEvolution::getAddRecExpr(SmallVectorImpl<const SCEV *> &Operands,
-                               const Loop *L) {
+                               const Loop *L,
+                               bool HasNUW, bool HasNSW) {
   if (Operands.size() == 1) return Operands[0];
 #ifndef NDEBUG
   for (unsigned i = 1, e = Operands.size(); i != e; ++i)
@@ -1826,7 +1833,7 @@ ScalarEvolution::getAddRecExpr(SmallVectorImpl<const SCEV *> &Operands,
 
   if (Operands.back()->isZero()) {
     Operands.pop_back();
-    return getAddRecExpr(Operands, L);             // {X,+,0}  -->  X
+    return getAddRecExpr(Operands, L, HasNUW, HasNSW); // {X,+,0}  -->  X
   }
 
   // Canonicalize nested AddRecs by nesting them in order of loop depth.
@@ -1855,7 +1862,7 @@ ScalarEvolution::getAddRecExpr(SmallVectorImpl<const SCEV *> &Operands,
           }
         if (AllInvariant)
           // Ok, both add recurrences are valid after the transformation.
-          return getAddRecExpr(NestedOperands, NestedLoop);
+          return getAddRecExpr(NestedOperands, NestedLoop, HasNUW, HasNSW);
       }
       // Reset Operands to its original state.
       Operands[0] = NestedAR;
@@ -1870,9 +1877,11 @@ ScalarEvolution::getAddRecExpr(SmallVectorImpl<const SCEV *> &Operands,
   ID.AddPointer(L);
   void *IP = 0;
   if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP)) return S;
-  SCEV *S = SCEVAllocator.Allocate<SCEVAddRecExpr>();
+  SCEVAddRecExpr *S = SCEVAllocator.Allocate<SCEVAddRecExpr>();
   new (S) SCEVAddRecExpr(ID, Operands, L);
   UniqueSCEVs.InsertNode(S, IP);
+  if (HasNUW) S->setHasNoUnsignedWrap(true);
+  if (HasNSW) S->setHasNoSignedWrap(true);
   return S;
 }
 
@@ -2942,9 +2951,15 @@ const SCEV *ScalarEvolution::createSCEV(Value *V) {
   Operator *U = cast<Operator>(V);
   switch (Opcode) {
   case Instruction::Add:
+    // Don't transfer the NSW and NUW bits from the Add instruction to the
+    // Add expression, because the Instruction may be guarded by control
+    // flow and the no-overflow bits may not be valid for the expression in
+    // any context.
     return getAddExpr(getSCEV(U->getOperand(0)),
                       getSCEV(U->getOperand(1)));
   case Instruction::Mul:
+    // Don't transfer the NSW and NUW bits from the Mul instruction to the
+    // Mul expression, as with Add.
     return getMulExpr(getSCEV(U->getOperand(0)),
                       getSCEV(U->getOperand(1)));
   case Instruction::UDiv:
@@ -3250,9 +3265,8 @@ ScalarEvolution::getBackedgeTakenInfo(const Loop *L) {
     // Now that we know more about the trip count for this loop, forget any
     // existing SCEV values for PHI nodes in this loop since they are only
     // conservative estimates made without the benefit of trip count
-    // information. This is similar to the code in
-    // forgetLoopBackedgeTakenCount, except that it handles SCEVUnknown PHI
-    // nodes specially.
+    // information. This is similar to the code in forgetLoop, except that
+    // it handles SCEVUnknown PHI nodes specially.
     if (ItCount.hasAnyInfo()) {
       SmallVector<Instruction *, 16> Worklist;
       PushLoopPHIs(L, Worklist);
@@ -3286,13 +3300,14 @@ ScalarEvolution::getBackedgeTakenInfo(const Loop *L) {
   return Pair.first->second;
 }
 
-/// forgetLoopBackedgeTakenCount - This method should be called by the
-/// client when it has changed a loop in a way that may effect
-/// ScalarEvolution's ability to compute a trip count, or if the loop
-/// is deleted.
-void ScalarEvolution::forgetLoopBackedgeTakenCount(const Loop *L) {
+/// forgetLoop - This method should be called by the client when it has
+/// changed a loop in a way that may affect ScalarEvolution's ability to
+/// compute a trip count, or if the loop is deleted.
+void ScalarEvolution::forgetLoop(const Loop *L) {
+  // Drop any stored trip count value.
   BackedgeTakenCounts.erase(L);
 
+  // Drop information about expressions based on loop-header PHIs.
   SmallVector<Instruction *, 16> Worklist;
   PushLoopPHIs(L, Worklist);
 
@@ -3629,7 +3644,7 @@ EvaluateConstantChrecAtConstant(const SCEVAddRecExpr *AddRec, ConstantInt *C,
 /// the addressed element of the initializer or null if the index expression is
 /// invalid.
 static Constant *
-GetAddressedElementFromGlobal(LLVMContext &Context, GlobalVariable *GV,
+GetAddressedElementFromGlobal(GlobalVariable *GV,
                               const std::vector<ConstantInt*> &Indices) {
   Constant *Init = GV->getInitializer();
   for (unsigned i = 0, e = Indices.size(); i != e; ++i) {
@@ -3717,7 +3732,7 @@ ScalarEvolution::ComputeLoadConstantCompareBackedgeTakenCount(
     // Form the GEP offset.
     Indexes[VarIdxNum] = Val;
 
-    Constant *Result = GetAddressedElementFromGlobal(getContext(), GV, Indexes);
+    Constant *Result = GetAddressedElementFromGlobal(GV, Indexes);
     if (Result == 0) break;  // Cannot compute!
 
     // Evaluate the condition for this iteration.
@@ -3796,29 +3811,26 @@ static PHINode *getConstantEvolvingPHI(Value *V, const Loop *L) {
 /// getConstantEvolvingPHI predicate, evaluate its value assuming the PHI node
 /// in the loop has the value PHIVal.  If we can't fold this expression for some
 /// reason, return null.
-static Constant *EvaluateExpression(Value *V, Constant *PHIVal) {
+static Constant *EvaluateExpression(Value *V, Constant *PHIVal,
+                                    const TargetData *TD) {
   if (isa<PHINode>(V)) return PHIVal;
   if (Constant *C = dyn_cast<Constant>(V)) return C;
   if (GlobalValue *GV = dyn_cast<GlobalValue>(V)) return GV;
   Instruction *I = cast<Instruction>(V);
-  LLVMContext &Context = I->getParent()->getContext();
 
   std::vector<Constant*> Operands;
   Operands.resize(I->getNumOperands());
 
   for (unsigned i = 0, e = I->getNumOperands(); i != e; ++i) {
-    Operands[i] = EvaluateExpression(I->getOperand(i), PHIVal);
+    Operands[i] = EvaluateExpression(I->getOperand(i), PHIVal, TD);
     if (Operands[i] == 0) return 0;
   }
 
   if (const CmpInst *CI = dyn_cast<CmpInst>(I))
-    return ConstantFoldCompareInstOperands(CI->getPredicate(),
-                                           &Operands[0], Operands.size(),
-                                           Context);
-  else
-    return ConstantFoldInstOperands(I->getOpcode(), I->getType(),
-                                    &Operands[0], Operands.size(),
-                                    Context);
+    return ConstantFoldCompareInstOperands(CI->getPredicate(), Operands[0],
+                                           Operands[1], TD);
+  return ConstantFoldInstOperands(I->getOpcode(), I->getType(),
+                                  &Operands[0], Operands.size(), TD);
 }
 
 /// getConstantEvolutionLoopExitValue - If we know that the specified Phi is
@@ -3864,7 +3876,7 @@ ScalarEvolution::getConstantEvolutionLoopExitValue(PHINode *PN,
       return RetVal = PHIVal;  // Got exit value!
 
     // Compute the value of the PHI node for the next iteration.
-    Constant *NextPHI = EvaluateExpression(BEValue, PHIVal);
+    Constant *NextPHI = EvaluateExpression(BEValue, PHIVal, TD);
     if (NextPHI == PHIVal)
       return RetVal = NextPHI;  // Stopped evolving!
     if (NextPHI == 0)
@@ -3905,7 +3917,7 @@ ScalarEvolution::ComputeBackedgeTakenCountExhaustively(const Loop *L,
   for (Constant *PHIVal = StartCST;
        IterationNum != MaxIterations; ++IterationNum) {
     ConstantInt *CondVal =
-      dyn_cast_or_null<ConstantInt>(EvaluateExpression(Cond, PHIVal));
+      dyn_cast_or_null<ConstantInt>(EvaluateExpression(Cond, PHIVal, TD));
 
     // Couldn't symbolically evaluate.
     if (!CondVal) return getCouldNotCompute();
@@ -3916,7 +3928,7 @@ ScalarEvolution::ComputeBackedgeTakenCountExhaustively(const Loop *L,
     }
 
     // Compute the value of the PHI node for the next iteration.
-    Constant *NextPHI = EvaluateExpression(BEValue, PHIVal);
+    Constant *NextPHI = EvaluateExpression(BEValue, PHIVal, TD);
     if (NextPHI == 0 || NextPHI == PHIVal)
       return getCouldNotCompute();// Couldn't evaluate or not making progress...
     PHIVal = NextPHI;
@@ -4025,12 +4037,10 @@ const SCEV *ScalarEvolution::computeSCEVAtScope(const SCEV *V, const Loop *L) {
         Constant *C;
         if (const CmpInst *CI = dyn_cast<CmpInst>(I))
           C = ConstantFoldCompareInstOperands(CI->getPredicate(),
-                                              &Operands[0], Operands.size(),
-                                              getContext());
+                                              Operands[0], Operands[1], TD);
         else
           C = ConstantFoldInstOperands(I->getOpcode(), I->getType(),
-                                       &Operands[0], Operands.size(),
-                                       getContext());
+                                       &Operands[0], Operands.size(), TD);
         return getSCEV(C);
       }
     }
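
Note (illustrative): the SCEV constructors now thread no-wrap flags through
to the uniqued node.  A hedged usage sketch (SE, A and B assumed in scope;
that the header gives HasNUW/HasNSW false defaults is an assumption, since
this diff does not show the declarations):

    SmallVector<const SCEV *, 2> Ops;
    Ops.push_back(SE.getSCEV(A));
    Ops.push_back(SE.getSCEV(B));
    // Only assert no-unsigned-wrap if it holds in every context where the
    // expression may be evaluated -- see the Instruction::Add comment above.
    const SCEV *Sum = SE.getAddExpr(Ops, /*HasNUW=*/true, /*HasNSW=*/false);
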
diff --git a/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionAliasAnalysis.cpp b/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionAliasAnalysis.cpp
index cc79e6c..ef0e97b 100644
--- a/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionAliasAnalysis.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionAliasAnalysis.cpp
@@ -19,14 +19,13 @@
 #include "llvm/Analysis/ScalarEvolutionExpressions.h"
 #include "llvm/Analysis/Passes.h"
 #include "llvm/Pass.h"
-#include "llvm/Support/Compiler.h"
 using namespace llvm;
 
 namespace {
   /// ScalarEvolutionAliasAnalysis - This is a simple alias analysis
   /// implementation that uses ScalarEvolution to answer queries.
-  class VISIBILITY_HIDDEN ScalarEvolutionAliasAnalysis : public FunctionPass,
-                                                         public AliasAnalysis {
+  class ScalarEvolutionAliasAnalysis : public FunctionPass,
+                                       public AliasAnalysis {
     ScalarEvolution *SE;
 
   public:
@@ -39,7 +38,7 @@ namespace {
     virtual AliasResult alias(const Value *V1, unsigned V1Size,
                               const Value *V2, unsigned V2Size);
 
-    Value *GetUnderlyingIdentifiedObject(const SCEV *S);
+    Value *GetBaseValue(const SCEV *S);
   };
 }  // End of anonymous namespace
 
@@ -69,25 +68,22 @@ ScalarEvolutionAliasAnalysis::runOnFunction(Function &F) {
   return false;
 }
 
-/// GetUnderlyingIdentifiedObject - Given an expression, try to find an
-/// "identified object" (see AliasAnalysis::isIdentifiedObject) base
-/// value. Return null is none was found.
+/// GetBaseValue - Given an expression, try to find a
+/// base value. Return null if none was found.
 Value *
-ScalarEvolutionAliasAnalysis::GetUnderlyingIdentifiedObject(const SCEV *S) {
+ScalarEvolutionAliasAnalysis::GetBaseValue(const SCEV *S) {
   if (const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(S)) {
     // In an addrec, assume that the base will be in the start, rather
     // than the step.
-    return GetUnderlyingIdentifiedObject(AR->getStart());
+    return GetBaseValue(AR->getStart());
   } else if (const SCEVAddExpr *A = dyn_cast<SCEVAddExpr>(S)) {
     // If there's a pointer operand, it'll be sorted at the end of the list.
     const SCEV *Last = A->getOperand(A->getNumOperands()-1);
     if (isa<PointerType>(Last->getType()))
-      return GetUnderlyingIdentifiedObject(Last);
+      return GetBaseValue(Last);
   } else if (const SCEVUnknown *U = dyn_cast<SCEVUnknown>(S)) {
-    // Determine if we've found an Identified object.
-    Value *V = U->getValue();
-    if (isIdentifiedObject(V))
-      return V;
+    // This is a leaf node.
+    return U->getValue();
   }
   // No base value found.
   return 0;
@@ -121,8 +117,8 @@ ScalarEvolutionAliasAnalysis::alias(const Value *A, unsigned ASize,
   // If ScalarEvolution can find an underlying object, form a new query.
   // The correctness of this depends on ScalarEvolution not recognizing
   // inttoptr and ptrtoint operators.
-  Value *AO = GetUnderlyingIdentifiedObject(AS);
-  Value *BO = GetUnderlyingIdentifiedObject(BS);
+  Value *AO = GetBaseValue(AS);
+  Value *BO = GetBaseValue(BS);
   if ((AO && AO != A) || (BO && BO != B))
     if (alias(AO ? AO : A, AO ? ~0u : ASize,
               BO ? BO : B, BO ? ~0u : BSize) == NoAlias)
diff --git a/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionExpander.cpp b/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionExpander.cpp
index f5df026..d674ee8 100644
--- a/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionExpander.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionExpander.cpp
@@ -464,7 +464,7 @@ Value *SCEVExpander::expandAddToGEP(const SCEV *const *op_begin,
   if (!AnyNonZeroIndices) {
     // Cast the base to i8*.
     V = InsertNoopCastOfTo(V,
-       Type::getInt8Ty(Ty->getContext())->getPointerTo(PTy->getAddressSpace()));
+       Type::getInt8PtrTy(Ty->getContext(), PTy->getAddressSpace()));
 
     // Expand the operands for a plain byte offset.
     Value *Idx = expandCodeFor(SE.getAddExpr(Ops), Ty);
diff --git a/libclamav/c++/llvm/lib/Analysis/SparsePropagation.cpp b/libclamav/c++/llvm/lib/Analysis/SparsePropagation.cpp
index b7844f0..d7bcac2 100644
--- a/libclamav/c++/llvm/lib/Analysis/SparsePropagation.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/SparsePropagation.cpp
@@ -166,6 +166,11 @@ void SparseSolver::getFeasibleSuccessors(TerminatorInst &TI,
     return;
   }
   
+  if (isa<IndirectBrInst>(TI)) {
+    Succs.assign(Succs.size(), true);
+    return;
+  }
+  
   SwitchInst &SI = cast<SwitchInst>(TI);
   LatticeVal SCValue;
   if (AggressiveUndef)
diff --git a/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp b/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp
index baa347a..3e6af58 100644
--- a/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp
@@ -325,7 +325,7 @@ void llvm::ComputeMaskedBits(Value *V, const APInt &Mask,
       APInt Mask2(Mask.shl(ShiftAmt));
       ComputeMaskedBits(I->getOperand(0), Mask2, KnownZero,KnownOne, TD,
                         Depth+1);
-      assert((KnownZero & KnownOne) == 0&&"Bits known to be one AND zero?"); 
+      assert((KnownZero & KnownOne) == 0 && "Bits known to be one AND zero?"); 
       KnownZero = APIntOps::lshr(KnownZero, ShiftAmt);
       KnownOne  = APIntOps::lshr(KnownOne, ShiftAmt);
       // high bits known zero.
@@ -343,7 +343,7 @@ void llvm::ComputeMaskedBits(Value *V, const APInt &Mask,
       APInt Mask2(Mask.shl(ShiftAmt));
       ComputeMaskedBits(I->getOperand(0), Mask2, KnownZero, KnownOne, TD,
                         Depth+1);
-      assert((KnownZero & KnownOne) == 0&&"Bits known to be one AND zero?"); 
+      assert((KnownZero & KnownOne) == 0 && "Bits known to be one AND zero?"); 
       KnownZero = APIntOps::lshr(KnownZero, ShiftAmt);
       KnownOne  = APIntOps::lshr(KnownOne, ShiftAmt);
         
@@ -380,7 +380,7 @@ void llvm::ComputeMaskedBits(Value *V, const APInt &Mask,
   }
   // fall through
   case Instruction::Add: {
-    // If one of the operands has trailing zeros, than the bits that the
+    // If one of the operands has trailing zeros, then the bits that the
     // other operand has in those bit positions will be preserved in the
     // result. For an add, this works with either operand. For a subtract,
     // this only works if the known zeros are in the right operand.
@@ -436,7 +436,7 @@ void llvm::ComputeMaskedBits(Value *V, const APInt &Mask,
 
         KnownZero |= KnownZero2 & Mask;
 
-        assert((KnownZero & KnownOne) == 0&&"Bits known to be one AND zero?"); 
+        assert((KnownZero & KnownOne) == 0 && "Bits known to be one AND zero?"); 
       }
     }
     break;
@@ -449,7 +449,7 @@ void llvm::ComputeMaskedBits(Value *V, const APInt &Mask,
         KnownZero |= ~LowBits & Mask;
         ComputeMaskedBits(I->getOperand(0), Mask2, KnownZero, KnownOne, TD,
                           Depth+1);
-        assert((KnownZero & KnownOne) == 0&&"Bits known to be one AND zero?");
+        assert((KnownZero & KnownOne) == 0 && "Bits known to be one AND zero?");
         break;
       }
     }
@@ -469,26 +469,11 @@ void llvm::ComputeMaskedBits(Value *V, const APInt &Mask,
     break;
   }
 
-  case Instruction::Alloca:
-  case Instruction::Malloc: {
-    AllocationInst *AI = cast<AllocationInst>(V);
+  case Instruction::Alloca: {
+    AllocaInst *AI = cast<AllocaInst>(V);
     unsigned Align = AI->getAlignment();
-    if (Align == 0 && TD) {
-      if (isa<AllocaInst>(AI))
-        Align = TD->getABITypeAlignment(AI->getType()->getElementType());
-      else if (isa<MallocInst>(AI)) {
-        // Malloc returns maximally aligned memory.
-        Align = TD->getABITypeAlignment(AI->getType()->getElementType());
-        Align =
-          std::max(Align,
-                   (unsigned)TD->getABITypeAlignment(
-                     Type::getDoubleTy(V->getContext())));
-        Align =
-          std::max(Align,
-                   (unsigned)TD->getABITypeAlignment(
-                      Type::getInt64Ty(V->getContext())));
-      }
-    }
+    if (Align == 0 && TD)
+      Align = TD->getABITypeAlignment(AI->getType()->getElementType());
     
     if (Align > 0)
       KnownZero = Mask & APInt::getLowBitsSet(BitWidth,
@@ -804,6 +789,116 @@ unsigned llvm::ComputeNumSignBits(Value *V, const TargetData *TD,
   return std::max(FirstAnswer, std::min(TyBits, Mask.countLeadingZeros()));
 }
 
+/// ComputeMultiple - This function computes the integer multiple of Base that
+/// equals V.  If successful, it returns true and stores the multiple in
+/// Multiple.  If unsuccessful, it returns false.  It looks through SExt
+/// instructions only if LookThroughSExt is true.
+bool llvm::ComputeMultiple(Value *V, unsigned Base, Value *&Multiple,
+                           bool LookThroughSExt, unsigned Depth) {
+  const unsigned MaxDepth = 6;
+
+  assert(V && "No Value?");
+  assert(Depth <= MaxDepth && "Limit Search Depth");
+  assert(V->getType()->isInteger() && "Not an integer type!");
+
+  const Type *T = V->getType();
+
+  ConstantInt *CI = dyn_cast<ConstantInt>(V);
+
+  if (Base == 0)
+    return false;
+    
+  if (Base == 1) {
+    Multiple = V;
+    return true;
+  }
+
+  ConstantExpr *CO = dyn_cast<ConstantExpr>(V);
+  Constant *BaseVal = ConstantInt::get(T, Base);
+  if (CO && CO == BaseVal) {
+    // Multiple is 1.
+    Multiple = ConstantInt::get(T, 1);
+    return true;
+  }
+
+  if (CI && CI->getZExtValue() % Base == 0) {
+    Multiple = ConstantInt::get(T, CI->getZExtValue() / Base);
+    return true;  
+  }
+  
+  if (Depth == MaxDepth) return false;  // Limit search depth.
+        
+  Operator *I = dyn_cast<Operator>(V);
+  if (!I) return false;
+
+  switch (I->getOpcode()) {
+  default: break;
+  case Instruction::SExt:
+    if (!LookThroughSExt) return false;
+    // otherwise fall through to ZExt
+  case Instruction::ZExt:
+    return ComputeMultiple(I->getOperand(0), Base, Multiple,
+                           LookThroughSExt, Depth+1);
+  case Instruction::Shl:
+  case Instruction::Mul: {
+    Value *Op0 = I->getOperand(0);
+    Value *Op1 = I->getOperand(1);
+
+    if (I->getOpcode() == Instruction::Shl) {
+      ConstantInt *Op1CI = dyn_cast<ConstantInt>(Op1);
+      if (!Op1CI) return false;
+      // Turn Op0 << Op1 into Op0 * 2^Op1
+      APInt Op1Int = Op1CI->getValue();
+      uint64_t BitToSet = Op1Int.getLimitedValue(Op1Int.getBitWidth() - 1);
+      Op1 = ConstantInt::get(V->getContext(), 
+                             APInt(Op1Int.getBitWidth(), 0).set(BitToSet));
+    }
+
+    Value *Mul0 = NULL;
+    Value *Mul1 = NULL;
+    bool M0 = ComputeMultiple(Op0, Base, Mul0,
+                              LookThroughSExt, Depth+1);
+    bool M1 = ComputeMultiple(Op1, Base, Mul1,
+                              LookThroughSExt, Depth+1);
+
+    if (M0) {
+      if (isa<Constant>(Op1) && isa<Constant>(Mul0)) {
+        // V == Base * (Mul0 * Op1), so return (Mul0 * Op1)
+        Multiple = ConstantExpr::getMul(cast<Constant>(Mul0),
+                                        cast<Constant>(Op1));
+        return true;
+      }
+
+      if (ConstantInt *Mul0CI = dyn_cast<ConstantInt>(Mul0))
+        if (Mul0CI->getValue() == 1) {
+          // V == Base * Op1, so return Op1
+          Multiple = Op1;
+          return true;
+        }
+    }
+
+    if (M1) {
+      if (isa<Constant>(Op0) && isa<Constant>(Mul1)) {
+        // V == Base * (Mul1 * Op0), so return (Mul1 * Op0)
+        Multiple = ConstantExpr::getMul(cast<Constant>(Mul1),
+                                        cast<Constant>(Op0));
+        return true;
+      }
+
+      if (ConstantInt *Mul1CI = dyn_cast<ConstantInt>(Mul1))
+        if (Mul1CI->getValue() == 1) {
+          // V == Base * Op0, so return Op0
+          Multiple = Op0;
+          return true;
+        }
+    }
+  }
+  }
+
+  // We could not determine if V is a multiple of Base.
+  return false;
+}
+
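
Note (illustrative): a usage sketch for the ComputeMultiple helper added
above (V assumed in scope; the declaration lives in ValueTracking.h, which
this diff does not show):

    // Ask whether V == 4 * Multiple for some expressible Multiple.
    Value *Multiple = 0;
    if (ComputeMultiple(V, 4, Multiple, /*LookThroughSExt=*/false, 0))
      errs() << "V is a multiple of 4: " << *Multiple << "\n";
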
 /// CannotBeNegativeZero - Return true if we can prove that the specified FP 
 /// value is never equal to -0.0.
 ///
@@ -853,6 +948,190 @@ bool llvm::CannotBeNegativeZero(const Value *V, unsigned Depth) {
   return false;
 }
 
+
+/// GetLinearExpression - Analyze the specified value as a linear expression:
+/// "A*V + B", where A and B are constant integers.  Return the scale and offset
+/// values as APInts and return V as a Value*.  The incoming Value is known to
+/// have IntegerType.  Note that this looks through extends, so the high bits
+/// may not be represented in the result.
+static Value *GetLinearExpression(Value *V, APInt &Scale, APInt &Offset,
+                                  const TargetData *TD, unsigned Depth) {
+  assert(isa<IntegerType>(V->getType()) && "Not an integer value");
+
+  // Limit our recursion depth.
+  if (Depth == 6) {
+    Scale = 1;
+    Offset = 0;
+    return V;
+  }
+  
+  if (BinaryOperator *BOp = dyn_cast<BinaryOperator>(V)) {
+    if (ConstantInt *RHSC = dyn_cast<ConstantInt>(BOp->getOperand(1))) {
+      switch (BOp->getOpcode()) {
+      default: break;
+      case Instruction::Or:
+        // X|C == X+C if all the bits in C are unset in X.  Otherwise we can't
+        // analyze it.
+        if (!MaskedValueIsZero(BOp->getOperand(0), RHSC->getValue(), TD))
+          break;
+        // FALL THROUGH.
+      case Instruction::Add:
+        V = GetLinearExpression(BOp->getOperand(0), Scale, Offset, TD, Depth+1);
+        Offset += RHSC->getValue();
+        return V;
+      case Instruction::Mul:
+        V = GetLinearExpression(BOp->getOperand(0), Scale, Offset, TD, Depth+1);
+        Offset *= RHSC->getValue();
+        Scale *= RHSC->getValue();
+        return V;
+      case Instruction::Shl:
+        V = GetLinearExpression(BOp->getOperand(0), Scale, Offset, TD, Depth+1);
+        Offset <<= RHSC->getValue().getLimitedValue();
+        Scale <<= RHSC->getValue().getLimitedValue();
+        return V;
+      }
+    }
+  }
+  
+  // Since clients don't care about the high bits of the value, just scales and
+  // offsets, we can look through extensions.
+  if (isa<SExtInst>(V) || isa<ZExtInst>(V)) {
+    Value *CastOp = cast<CastInst>(V)->getOperand(0);
+    unsigned OldWidth = Scale.getBitWidth();
+    unsigned SmallWidth = CastOp->getType()->getPrimitiveSizeInBits();
+    Scale.trunc(SmallWidth);
+    Offset.trunc(SmallWidth);
+    Value *Result = GetLinearExpression(CastOp, Scale, Offset, TD, Depth+1);
+    Scale.zext(OldWidth);
+    Offset.zext(OldWidth);
+    return Result;
+  }
+  
+  Scale = 1;
+  Offset = 0;
+  return V;
+}
+
+/// DecomposeGEPExpression - If V is a symbolic pointer expression, decompose it
+/// into a base pointer with a constant offset and a number of scaled symbolic
+/// offsets.
+///
+/// The scaled symbolic offsets (represented by pairs of a Value* and a scale in
+/// the VarIndices vector) are Value*'s that are known to be scaled by the
+/// specified amount, but which may have other unrepresented high bits. As such,
+/// the gep cannot necessarily be reconstructed from its decomposed form.
+///
+/// When TargetData is around, this function is capable of analyzing everything
+/// that Value::getUnderlyingObject() can look through.  When not, it just looks
+/// through pointer casts.
+///
+const Value *llvm::DecomposeGEPExpression(const Value *V, int64_t &BaseOffs,
+                 SmallVectorImpl<std::pair<const Value*, int64_t> > &VarIndices,
+                                          const TargetData *TD) {
+  // FIXME: Should limit depth like getUnderlyingObject?
+  BaseOffs = 0;
+  while (1) {
+    // See if this is a bitcast or GEP.
+    const Operator *Op = dyn_cast<Operator>(V);
+    if (Op == 0) {
+      // The only non-operator case we can handle are GlobalAliases.
+      if (const GlobalAlias *GA = dyn_cast<GlobalAlias>(V)) {
+        if (!GA->mayBeOverridden()) {
+          V = GA->getAliasee();
+          continue;
+        }
+      }
+      return V;
+    }
+    
+    if (Op->getOpcode() == Instruction::BitCast) {
+      V = Op->getOperand(0);
+      continue;
+    }
+    
+    const GEPOperator *GEPOp = dyn_cast<GEPOperator>(Op);
+    if (GEPOp == 0)
+      return V;
+    
+    // Don't attempt to analyze GEPs over unsized objects.
+    if (!cast<PointerType>(GEPOp->getOperand(0)->getType())
+        ->getElementType()->isSized())
+      return V;
+    
+    // If we are lacking TargetData information, we can't compute the offsets of
+    // elements computed by GEPs.  However, we can handle bitcast equivalent
+    // GEPs.
+    if (!TD) {
+      if (!GEPOp->hasAllZeroIndices())
+        return V;
+      V = GEPOp->getOperand(0);
+      continue;
+    }
+    
+    // Walk the indices of the GEP, accumulating them into BaseOffs/VarIndices.
+    gep_type_iterator GTI = gep_type_begin(GEPOp);
+    for (User::const_op_iterator I = GEPOp->op_begin()+1,
+         E = GEPOp->op_end(); I != E; ++I) {
+      Value *Index = *I;
+      // Compute the (potentially symbolic) offset in bytes for this index.
+      if (const StructType *STy = dyn_cast<StructType>(*GTI++)) {
+        // For a struct, add the member offset.
+        unsigned FieldNo = cast<ConstantInt>(Index)->getZExtValue();
+        if (FieldNo == 0) continue;
+        
+        BaseOffs += TD->getStructLayout(STy)->getElementOffset(FieldNo);
+        continue;
+      }
+      
+      // For an array/pointer, add the element offset, explicitly scaled.
+      if (ConstantInt *CIdx = dyn_cast<ConstantInt>(Index)) {
+        if (CIdx->isZero()) continue;
+        BaseOffs += TD->getTypeAllocSize(*GTI)*CIdx->getSExtValue();
+        continue;
+      }
+      
+      uint64_t Scale = TD->getTypeAllocSize(*GTI);
+      
+      // Use GetLinearExpression to decompose the index into a C1*V+C2 form.
+      unsigned Width = cast<IntegerType>(Index->getType())->getBitWidth();
+      APInt IndexScale(Width, 0), IndexOffset(Width, 0);
+      Index = GetLinearExpression(Index, IndexScale, IndexOffset, TD, 0);
+      
+      // The GEP index scale ("Scale") scales C1*V+C2, yielding (C1*V+C2)*Scale.
+      // This gives us an aggregate computation of (C1*Scale)*V + C2*Scale.
+      BaseOffs += IndexOffset.getZExtValue()*Scale;
+      Scale *= IndexScale.getZExtValue();
+      
+      // If we already had an occurrence of this index variable, merge this
+      // scale into it.  For example, we want to handle:
+      //   A[x][x] -> x*16 + x*4 -> x*20
+      // This also ensures that 'x' only appears in the index list once.
+      for (unsigned i = 0, e = VarIndices.size(); i != e; ++i) {
+        if (VarIndices[i].first == Index) {
+          Scale += VarIndices[i].second;
+          VarIndices.erase(VarIndices.begin()+i);
+          break;
+        }
+      }
+      
+      // Make sure that we have a scale that makes sense for this target's
+      // pointer size.
+      if (unsigned ShiftBits = 64-TD->getPointerSizeInBits()) {
+        Scale <<= ShiftBits;
+        Scale >>= ShiftBits;
+      }
+      
+      if (Scale)
+        VarIndices.push_back(std::make_pair(Index, Scale));
+    }
+    
+    // Analyze the base pointer next.
+    V = GEPOp->getOperand(0);
+  }
+}
+
+
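
Note (illustrative): a usage sketch for DecomposeGEPExpression as defined
above (GEP and TD assumed in scope).  Repeated index variables are merged,
so A[x][x] over rows of four i32s decomposes to a single x*20 term:

    int64_t BaseOffs;
    SmallVector<std::pair<const Value*, int64_t>, 4> VarIndices;
    const Value *Base = DecomposeGEPExpression(GEP, BaseOffs, VarIndices, TD);
    // The address is Base + BaseOffs
    //   + sum over i of VarIndices[i].first * VarIndices[i].second,
    // modulo unrepresented high bits in the variable indices.
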
 // This is the recursive version of BuildSubAggregate. It takes a few different
 // arguments. Idxs is the index within the nested struct From that we are
 // looking at now (which is of type IndexedType). IdxSkip is the number of
@@ -862,7 +1141,6 @@ bool llvm::CannotBeNegativeZero(const Value *V, unsigned Depth) {
 static Value *BuildSubAggregate(Value *From, Value* To, const Type *IndexedType,
                                 SmallVector<unsigned, 10> &Idxs,
                                 unsigned IdxSkip,
-                                LLVMContext &Context,
                                 Instruction *InsertBefore) {
   const llvm::StructType *STy = llvm::dyn_cast<llvm::StructType>(IndexedType);
   if (STy) {
@@ -874,7 +1152,7 @@ static Value *BuildSubAggregate(Value *From, Value* To, const Type *IndexedType,
       Idxs.push_back(i);
       Value *PrevTo = To;
       To = BuildSubAggregate(From, To, STy->getElementType(i), Idxs, IdxSkip,
-                             Context, InsertBefore);
+                             InsertBefore);
       Idxs.pop_back();
       if (!To) {
         // Couldn't find any inserted value for this index? Cleanup
@@ -897,7 +1175,7 @@ static Value *BuildSubAggregate(Value *From, Value* To, const Type *IndexedType,
   // we might be able to find the complete struct somewhere.
   
   // Find the value that is at that particular spot
-  Value *V = FindInsertedValue(From, Idxs.begin(), Idxs.end(), Context);
+  Value *V = FindInsertedValue(From, Idxs.begin(), Idxs.end());
 
   if (!V)
     return NULL;
@@ -920,7 +1198,7 @@ static Value *BuildSubAggregate(Value *From, Value* To, const Type *IndexedType,
 //
 // All inserted insertvalue instructions are inserted before InsertBefore
 static Value *BuildSubAggregate(Value *From, const unsigned *idx_begin,
-                                const unsigned *idx_end, LLVMContext &Context,
+                                const unsigned *idx_end,
                                 Instruction *InsertBefore) {
   assert(InsertBefore && "Must have someplace to insert!");
   const Type *IndexedType = ExtractValueInst::getIndexedType(From->getType(),
@@ -930,8 +1208,7 @@ static Value *BuildSubAggregate(Value *From, const unsigned *idx_begin,
   SmallVector<unsigned, 10> Idxs(idx_begin, idx_end);
   unsigned IdxSkip = Idxs.size();
 
-  return BuildSubAggregate(From, To, IndexedType, Idxs, IdxSkip,
-                           Context, InsertBefore);
+  return BuildSubAggregate(From, To, IndexedType, Idxs, IdxSkip, InsertBefore);
 }
 
 /// FindInsertedValue - Given an aggregate and a sequence of indices, see if
@@ -941,8 +1218,7 @@ static Value *BuildSubAggregate(Value *From, const unsigned *idx_begin,
 /// If InsertBefore is not null, this function will duplicate (modified)
 /// insertvalues when a part of a nested struct is extracted.
 Value *llvm::FindInsertedValue(Value *V, const unsigned *idx_begin,
-                         const unsigned *idx_end, LLVMContext &Context,
-                         Instruction *InsertBefore) {
+                         const unsigned *idx_end, Instruction *InsertBefore) {
   // Nothing to index? Just return V then (this is useful at the end of our
   // recursion)
   if (idx_begin == idx_end)
@@ -966,7 +1242,7 @@ Value *llvm::FindInsertedValue(Value *V, const unsigned *idx_begin,
     if (isa<ConstantArray>(C) || isa<ConstantStruct>(C))
       // Recursively process this constant
       return FindInsertedValue(C->getOperand(*idx_begin), idx_begin + 1,
-                               idx_end, Context, InsertBefore);
+                               idx_end, InsertBefore);
   } else if (InsertValueInst *I = dyn_cast<InsertValueInst>(V)) {
     // Loop the indices for the insertvalue instruction in parallel with the
     // requested indices
@@ -985,8 +1261,7 @@ Value *llvm::FindInsertedValue(Value *V, const unsigned *idx_begin,
           // %C = insertvalue {i32, i32 } %A, i32 11, 1
           // which allows the unused 0,0 element from the nested struct to be
           // removed.
-          return BuildSubAggregate(V, idx_begin, req_idx,
-                                   Context, InsertBefore);
+          return BuildSubAggregate(V, idx_begin, req_idx, InsertBefore);
         else
           // We can't handle this without inserting insertvalues
           return 0;
@@ -997,13 +1272,13 @@ Value *llvm::FindInsertedValue(Value *V, const unsigned *idx_begin,
       // looking for, then.
       if (*req_idx != *i)
         return FindInsertedValue(I->getAggregateOperand(), idx_begin, idx_end,
-                                 Context, InsertBefore);
+                                 InsertBefore);
     }
     // If we end up here, the indices of the insertvalue match with those
     // requested (though possibly only partially). Now we recursively look at
     // the inserted value, passing any remaining indices.
     return FindInsertedValue(I->getInsertedValueOperand(), req_idx, idx_end,
-                             Context, InsertBefore);
+                             InsertBefore);
   } else if (ExtractValueInst *I = dyn_cast<ExtractValueInst>(V)) {
    // If we're extracting a value from an aggregate that was extracted from
     // something else, we can extract from that something else directly instead.
@@ -1027,7 +1302,7 @@ Value *llvm::FindInsertedValue(Value *V, const unsigned *idx_begin,
            && "Number of indices added not correct?");
     
     return FindInsertedValue(I->getAggregateOperand(), Idxs.begin(), Idxs.end(),
-                             Context, InsertBefore);
+                             InsertBefore);
   }
   // Otherwise, we don't know (such as, extracting from a function return value
   // or load instruction)
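
With the LLVMContext parameter gone, call sites shrink accordingly.  A
hypothetical caller, under assumptions not in this patch (the aggregate's
type, and InsertBefore defaulting to null in the header):

    #include "llvm/Analysis/ValueTracking.h"
    using namespace llvm;

    // Pull element {1,0} out of an aggregate such as {i32, {i32, i32}}.
    // Returns null if no inserted value can be found.
    static Value *getElement_1_0(Value *Agg) {
      unsigned Idxs[] = { 1, 0 };
      return FindInsertedValue(Agg, Idxs, Idxs + 2); // no Context argument
    }
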
diff --git a/libclamav/c++/llvm/lib/AsmParser/LLLexer.cpp b/libclamav/c++/llvm/lib/AsmParser/LLLexer.cpp
index 16e0bd7..1b7c9c6 100644
--- a/libclamav/c++/llvm/lib/AsmParser/LLLexer.cpp
+++ b/libclamav/c++/llvm/lib/AsmParser/LLLexer.cpp
@@ -529,6 +529,7 @@ lltok::Kind LLLexer::LexIdentifier() {
   KEYWORD(module);
   KEYWORD(asm);
   KEYWORD(sideeffect);
+  KEYWORD(alignstack);
   KEYWORD(gc);
 
   KEYWORD(ccc);
@@ -575,6 +576,7 @@ lltok::Kind LLLexer::LexIdentifier() {
   KEYWORD(oge); KEYWORD(ord); KEYWORD(uno); KEYWORD(ueq); KEYWORD(une);
 
   KEYWORD(x);
+  KEYWORD(blockaddress);
 #undef KEYWORD
 
   // Keywords for types.
@@ -601,6 +603,14 @@ lltok::Kind LLLexer::LexIdentifier() {
     // Scan CurPtr ahead, seeing if there is just whitespace before the newline.
     if (JustWhitespaceNewLine(CurPtr))
       return lltok::kw_zeroext;
+  } else if (Len == 6 && !memcmp(StartChar, "malloc", 6)) {
+    // FIXME: Remove in LLVM 3.0.
+    // Autoupgrade malloc instruction.
+    return lltok::kw_malloc;
+  } else if (Len == 4 && !memcmp(StartChar, "free", 4)) {
+    // FIXME: Remove in LLVM 3.0.
+    // Autoupgrade free instruction.
+    return lltok::kw_free;
   }
 
   // Keywords for instructions.
@@ -636,13 +646,12 @@ lltok::Kind LLLexer::LexIdentifier() {
   INSTKEYWORD(ret,         Ret);
   INSTKEYWORD(br,          Br);
   INSTKEYWORD(switch,      Switch);
+  INSTKEYWORD(indirectbr,  IndirectBr);
   INSTKEYWORD(invoke,      Invoke);
   INSTKEYWORD(unwind,      Unwind);
   INSTKEYWORD(unreachable, Unreachable);
 
-  INSTKEYWORD(malloc,      Malloc);
   INSTKEYWORD(alloca,      Alloca);
-  INSTKEYWORD(free,        Free);
   INSTKEYWORD(load,        Load);
   INSTKEYWORD(store,       Store);
   INSTKEYWORD(getelementptr, GetElementPtr);
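
The practical effect of keeping kw_malloc and kw_free around is that
old-style IR still parses; the instructions are rewritten into libc calls on
the way in.  A hedged sketch against the C++ API of this era (the
ParseAssemblyString signature is assumed from llvm/Assembly/Parser.h of this
period):

    #include "llvm/Assembly/Parser.h"
    #include "llvm/LLVMContext.h"
    #include "llvm/Module.h"
    #include "llvm/Support/SourceMgr.h"
    using namespace llvm;

    int main() {
      LLVMContext Ctx;
      SMDiagnostic Err;
      const char *OldIR =
        "define void @f() {\n"
        "  %p = malloc i8, i32 16\n"  // becomes: call i8* @malloc(i32 16)
        "  free i8* %p\n"             // becomes: call void @free(i8* %p)
        "  ret void\n"
        "}\n";
      Module *M = ParseAssemblyString(OldIR, 0, Err, Ctx);
      return M == 0;  // exit status 0 on successful parse + autoupgrade
    }
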
diff --git a/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp b/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp
index 42ce953..a92dbf8 100644
--- a/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp
+++ b/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp
@@ -29,34 +29,6 @@
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
-namespace llvm {
-  /// ValID - Represents a reference of a definition of some sort with no type.
-  /// There are several cases where we have to parse the value but where the
-  /// type can depend on later context.  This may either be a numeric reference
-  /// or a symbolic (%var) reference.  This is just a discriminated union.
-  struct ValID {
-    enum {
-      t_LocalID, t_GlobalID,      // ID in UIntVal.
-      t_LocalName, t_GlobalName,  // Name in StrVal.
-      t_APSInt, t_APFloat,        // Value in APSIntVal/APFloatVal.
-      t_Null, t_Undef, t_Zero,    // No value.
-      t_EmptyArray,               // No value:  []
-      t_Constant,                 // Value in ConstantVal.
-      t_InlineAsm,                // Value in StrVal/StrVal2/UIntVal.
-      t_Metadata                  // Value in MetadataVal.
-    } Kind;
-
-    LLParser::LocTy Loc;
-    unsigned UIntVal;
-    std::string StrVal, StrVal2;
-    APSInt APSIntVal;
-    APFloat APFloatVal;
-    Constant *ConstantVal;
-    MetadataBase *MetadataVal;
-    ValID() : APFloatVal(0.0) {}
-  };
-}
-
 /// Run: module ::= toplevelentity*
 bool LLParser::Run() {
   // Prime the lexer.
@@ -69,6 +41,48 @@ bool LLParser::Run() {
 /// ValidateEndOfModule - Do final validity and sanity checks at the end of the
 /// module.
 bool LLParser::ValidateEndOfModule() {
+  // Update auto-upgraded malloc calls to "malloc".
+  // FIXME: Remove in LLVM 3.0.
+  if (MallocF) {
+    MallocF->setName("malloc");
+    // If setName() does not set the name to "malloc", then there is already a 
+    // declaration of "malloc".  In that case, iterate over all calls to MallocF
+    // and get them to call the declared "malloc" instead.
+    if (MallocF->getName() != "malloc") {
+      Constant *RealMallocF = M->getFunction("malloc");
+      if (RealMallocF->getType() != MallocF->getType())
+        RealMallocF = ConstantExpr::getBitCast(RealMallocF, MallocF->getType());
+      MallocF->replaceAllUsesWith(RealMallocF);
+      MallocF->eraseFromParent();
+      MallocF = NULL;
+    }
+  }
+  
+  
+  // If there are entries in ForwardRefBlockAddresses at this point, they are
+  // references after the function was defined.  Resolve those now.
+  while (!ForwardRefBlockAddresses.empty()) {
+    // We're referencing an already-parsed function; resolve the refs now.
+    Function *TheFn = 0;
+    const ValID &Fn = ForwardRefBlockAddresses.begin()->first;
+    if (Fn.Kind == ValID::t_GlobalName)
+      TheFn = M->getFunction(Fn.StrVal);
+    else if (Fn.UIntVal < NumberedVals.size())
+      TheFn = dyn_cast<Function>(NumberedVals[Fn.UIntVal]);
+    
+    if (TheFn == 0)
+      return Error(Fn.Loc, "unknown function referenced by blockaddress");
+    
+    // Resolve all these references.
+    if (ResolveForwardRefBlockAddresses(TheFn, 
+                                      ForwardRefBlockAddresses.begin()->second,
+                                        0))
+      return true;
+    
+    ForwardRefBlockAddresses.erase(ForwardRefBlockAddresses.begin());
+  }
+  
+  
   if (!ForwardRefTypes.empty())
     return Error(ForwardRefTypes.begin()->second.second,
                  "use of undefined type named '" +
@@ -103,6 +117,38 @@ bool LLParser::ValidateEndOfModule() {
   return false;
 }
 
+bool LLParser::ResolveForwardRefBlockAddresses(Function *TheFn, 
+                             std::vector<std::pair<ValID, GlobalValue*> > &Refs,
+                                               PerFunctionState *PFS) {
+  // Loop over all the references, resolving them.
+  for (unsigned i = 0, e = Refs.size(); i != e; ++i) {
+    BasicBlock *Res;
+    if (PFS) {
+      if (Refs[i].first.Kind == ValID::t_LocalName)
+        Res = PFS->GetBB(Refs[i].first.StrVal, Refs[i].first.Loc);
+      else
+        Res = PFS->GetBB(Refs[i].first.UIntVal, Refs[i].first.Loc);
+    } else if (Refs[i].first.Kind == ValID::t_LocalID) {
+      return Error(Refs[i].first.Loc,
+       "cannot take address of numeric label after the function is defined");
+    } else {
+      Res = dyn_cast_or_null<BasicBlock>(
+                     TheFn->getValueSymbolTable().lookup(Refs[i].first.StrVal));
+    }
+    
+    if (Res == 0)
+      return Error(Refs[i].first.Loc,
+                   "referenced value is not a basic block");
+    
+    // Get the BlockAddress for this and update references to use it.
+    BlockAddress *BA = BlockAddress::get(TheFn, Res);
+    Refs[i].second->replaceAllUsesWith(BA);
+    Refs[i].second->eraseFromParent();
+  }
+  return false;
+}
+
+
 //===----------------------------------------------------------------------===//
 // Top-Level Entities
 //===----------------------------------------------------------------------===//
@@ -434,17 +480,17 @@ bool LLParser::ParseMDNode(MetadataBase *&Node) {
   if (ParseUInt32(MID))  return true;
 
   // Check existing MDNode.
-  std::map<unsigned, MetadataBase *>::iterator I = MetadataCache.find(MID);
+  std::map<unsigned, WeakVH>::iterator I = MetadataCache.find(MID);
   if (I != MetadataCache.end()) {
-    Node = I->second;
+    Node = cast<MetadataBase>(I->second);
     return false;
   }
 
   // Check known forward references.
-  std::map<unsigned, std::pair<MetadataBase *, LocTy> >::iterator
+  std::map<unsigned, std::pair<WeakVH, LocTy> >::iterator
     FI = ForwardRefMDNodes.find(MID);
   if (FI != ForwardRefMDNodes.end()) {
-    Node = FI->second.first;
+    Node = cast<MetadataBase>(FI->second.first);
     return false;
   }
 
@@ -524,7 +570,7 @@ bool LLParser::ParseStandaloneMetadata() {
 
   MDNode *Init = MDNode::get(Context, Elts.data(), Elts.size());
   MetadataCache[MetadataID] = Init;
-  std::map<unsigned, std::pair<MetadataBase *, LocTy> >::iterator
+  std::map<unsigned, std::pair<WeakVH, LocTy> >::iterator
     FI = ForwardRefMDNodes.find(MetadataID);
   if (FI != ForwardRefMDNodes.end()) {
     MDNode *FwdNode = cast<MDNode>(FI->second.first);
@@ -586,8 +632,7 @@ bool LLParser::ParseAlias(const std::string &Name, LocTy NameLoc,
 
   // See if this value already exists in the symbol table.  If so, it is either
   // a redefinition or a definition of a forward reference.
-  if (GlobalValue *Val =
-        cast_or_null<GlobalValue>(M->getValueSymbolTable().lookup(Name))) {
+  if (GlobalValue *Val = M->getNamedValue(Name)) {
     // See if this was a redefinition.  If so, there is no entry in
     // ForwardRefVals.
     std::map<std::string, std::pair<GlobalValue*, LocTy> >::iterator
@@ -647,16 +692,18 @@ bool LLParser::ParseGlobal(const std::string &Name, LocTy NameLoc,
       return true;
   }
 
-  if (isa<FunctionType>(Ty) || Ty == Type::getLabelTy(Context))
+  if (isa<FunctionType>(Ty) || Ty->isLabelTy())
     return Error(TyLoc, "invalid type for global variable");
 
   GlobalVariable *GV = 0;
 
   // See if the global was forward referenced, if so, use the global.
   if (!Name.empty()) {
-    if ((GV = M->getGlobalVariable(Name, true)) &&
-        !ForwardRefVals.erase(Name))
-      return Error(NameLoc, "redefinition of global '@" + Name + "'");
+    if (GlobalValue *GVal = M->getNamedValue(Name)) {
+      if (!ForwardRefVals.erase(Name) || !isa<GlobalVariable>(GVal))
+        return Error(NameLoc, "redefinition of global '@" + Name + "'");
+      GV = cast<GlobalVariable>(GVal);
+    }
   } else {
     std::map<unsigned, std::pair<GlobalValue*, LocTy> >::iterator
       I = ForwardRefValIDs.find(NumberedVals.size());
@@ -1029,14 +1076,12 @@ bool LLParser::ParseOptionalCallingConv(CallingConv::ID &CC) {
 ///   ::= /* empty */
 ///   ::= !dbg !42
 bool LLParser::ParseOptionalCustomMetadata() {
-
-  std::string Name;
-  if (Lex.getKind() == lltok::NamedOrCustomMD) {
-    Name = Lex.getStrVal();
-    Lex.Lex();
-  } else
+  if (Lex.getKind() != lltok::NamedOrCustomMD)
     return false;
 
+  std::string Name = Lex.getStrVal();
+  Lex.Lex();
+
   if (Lex.getKind() != lltok::Metadata)
    return TokError("expected '!' here");
   Lex.Lex();
@@ -1047,7 +1092,7 @@ bool LLParser::ParseOptionalCustomMetadata() {
   MetadataContext &TheMetadata = M->getContext().getMetadata();
   unsigned MDK = TheMetadata.getMDKind(Name.c_str());
   if (!MDK)
-    MDK = TheMetadata.RegisterMDKind(Name.c_str());
+    MDK = TheMetadata.registerMDKind(Name.c_str());
   MDsOnInst.push_back(std::make_pair(MDK, cast<MDNode>(Node)));
 
   return false;
@@ -1092,6 +1137,8 @@ bool LLParser::ParseIndexList(SmallVectorImpl<unsigned> &Indices) {
     return TokError("expected ',' as start of index list");
 
   while (EatIfPresent(lltok::comma)) {
+    if (Lex.getKind() == lltok::NamedOrCustomMD)
+      break;
     unsigned Idx;
     if (ParseUInt32(Idx)) return true;
     Indices.push_back(Idx);
@@ -1113,7 +1160,7 @@ bool LLParser::ParseType(PATypeHolder &Result, bool AllowVoid) {
   if (!UpRefs.empty())
     return Error(UpRefs.back().Loc, "invalid unresolved type up reference");
 
-  if (!AllowVoid && Result.get() == Type::getVoidTy(Context))
+  if (!AllowVoid && Result.get()->isVoidTy())
     return Error(TypeLoc, "void type only allowed for function results");
 
   return false;
@@ -1275,9 +1322,9 @@ bool LLParser::ParseTypeRec(PATypeHolder &Result) {
 
     // TypeRec ::= TypeRec '*'
     case lltok::star:
-      if (Result.get() == Type::getLabelTy(Context))
+      if (Result.get()->isLabelTy())
         return TokError("basic block pointers are invalid");
-      if (Result.get() == Type::getVoidTy(Context))
+      if (Result.get()->isVoidTy())
         return TokError("pointers to void are invalid; use i8* instead");
       if (!PointerType::isValidElementType(Result.get()))
         return TokError("pointer to this type is invalid");
@@ -1287,9 +1334,9 @@ bool LLParser::ParseTypeRec(PATypeHolder &Result) {
 
     // TypeRec ::= TypeRec 'addrspace' '(' uint32 ')' '*'
     case lltok::kw_addrspace: {
-      if (Result.get() == Type::getLabelTy(Context))
+      if (Result.get()->isLabelTy())
         return TokError("basic block pointers are invalid");
-      if (Result.get() == Type::getVoidTy(Context))
+      if (Result.get()->isVoidTy())
         return TokError("pointers to void are invalid; use i8* instead");
       if (!PointerType::isValidElementType(Result.get()))
         return TokError("pointer to this type is invalid");
@@ -1380,7 +1427,7 @@ bool LLParser::ParseArgumentList(std::vector<ArgInfo> &ArgList,
     if ((inType ? ParseTypeRec(ArgTy) : ParseType(ArgTy)) ||
         ParseOptionalAttrs(Attrs, 0)) return true;
 
-    if (ArgTy == Type::getVoidTy(Context))
+    if (ArgTy->isVoidTy())
       return Error(TypeLoc, "argument can not have void type");
 
     if (Lex.getKind() == lltok::LocalVar ||
@@ -1406,7 +1453,7 @@ bool LLParser::ParseArgumentList(std::vector<ArgInfo> &ArgList,
       if ((inType ? ParseTypeRec(ArgTy) : ParseType(ArgTy)) ||
           ParseOptionalAttrs(Attrs, 0)) return true;
 
-      if (ArgTy == Type::getVoidTy(Context))
+      if (ArgTy->isVoidTy())
         return Error(TypeLoc, "argument can not have void type");
 
       if (Lex.getKind() == lltok::LocalVar ||
@@ -1484,7 +1531,7 @@ bool LLParser::ParseStructType(PATypeHolder &Result, bool Packed) {
   if (ParseTypeRec(Result)) return true;
   ParamsList.push_back(Result);
 
-  if (Result == Type::getVoidTy(Context))
+  if (Result->isVoidTy())
     return Error(EltTyLoc, "struct element can not have void type");
   if (!StructType::isValidElementType(Result))
     return Error(EltTyLoc, "invalid element type for struct");
@@ -1493,7 +1540,7 @@ bool LLParser::ParseStructType(PATypeHolder &Result, bool Packed) {
     EltTyLoc = Lex.getLoc();
     if (ParseTypeRec(Result)) return true;
 
-    if (Result == Type::getVoidTy(Context))
+    if (Result->isVoidTy())
       return Error(EltTyLoc, "struct element can not have void type");
     if (!StructType::isValidElementType(Result))
       return Error(EltTyLoc, "invalid element type for struct");
@@ -1532,7 +1579,7 @@ bool LLParser::ParseArrayVectorType(PATypeHolder &Result, bool isVector) {
   PATypeHolder EltTy(Type::getVoidTy(Context));
   if (ParseTypeRec(EltTy)) return true;
 
-  if (EltTy == Type::getVoidTy(Context))
+  if (EltTy->isVoidTy())
     return Error(TypeLoc, "array and vector element type cannot be void");
 
   if (ParseToken(isVector ? lltok::greater : lltok::rsquare,
@@ -1559,8 +1606,9 @@ bool LLParser::ParseArrayVectorType(PATypeHolder &Result, bool isVector) {
 // Function Semantic Analysis.
 //===----------------------------------------------------------------------===//
 
-LLParser::PerFunctionState::PerFunctionState(LLParser &p, Function &f)
-  : P(p), F(f) {
+LLParser::PerFunctionState::PerFunctionState(LLParser &p, Function &f,
+                                             int functionNumber)
+  : P(p), F(f), FunctionNumber(functionNumber) {
 
   // Insert unnamed arguments into the NumberedVals list.
   for (Function::arg_iterator AI = F.arg_begin(), E = F.arg_end();
@@ -1590,7 +1638,29 @@ LLParser::PerFunctionState::~PerFunctionState() {
     }
 }
 
-bool LLParser::PerFunctionState::VerifyFunctionComplete() {
+bool LLParser::PerFunctionState::FinishFunction() {
+  // Check to see if someone took the address of labels in this block.
+  if (!P.ForwardRefBlockAddresses.empty()) {
+    ValID FunctionID;
+    if (!F.getName().empty()) {
+      FunctionID.Kind = ValID::t_GlobalName;
+      FunctionID.StrVal = F.getName();
+    } else {
+      FunctionID.Kind = ValID::t_GlobalID;
+      FunctionID.UIntVal = FunctionNumber;
+    }
+  
+    std::map<ValID, std::vector<std::pair<ValID, GlobalValue*> > >::iterator
+      FRBAI = P.ForwardRefBlockAddresses.find(FunctionID);
+    if (FRBAI != P.ForwardRefBlockAddresses.end()) {
+      // Resolve all these references.
+      if (P.ResolveForwardRefBlockAddresses(&F, FRBAI->second, this))
+        return true;
+      
+      P.ForwardRefBlockAddresses.erase(FRBAI);
+    }
+  }
+  
   if (!ForwardRefVals.empty())
     return P.Error(ForwardRefVals.begin()->second.second,
                    "use of undefined value '%" + ForwardRefVals.begin()->first +
@@ -1623,7 +1693,7 @@ Value *LLParser::PerFunctionState::GetVal(const std::string &Name,
   // If we have the value in the symbol table or fwd-ref table, return it.
   if (Val) {
     if (Val->getType() == Ty) return Val;
-    if (Ty == Type::getLabelTy(F.getContext()))
+    if (Ty->isLabelTy())
       P.Error(Loc, "'%" + Name + "' is not a basic block");
     else
       P.Error(Loc, "'%" + Name + "' defined with type '" +
@@ -1640,7 +1710,7 @@ Value *LLParser::PerFunctionState::GetVal(const std::string &Name,
 
   // Otherwise, create a new forward reference for this value and remember it.
   Value *FwdVal;
-  if (Ty == Type::getLabelTy(F.getContext()))
+  if (Ty->isLabelTy())
     FwdVal = BasicBlock::Create(F.getContext(), Name, &F);
   else
     FwdVal = new Argument(Ty, Name);
@@ -1666,7 +1736,7 @@ Value *LLParser::PerFunctionState::GetVal(unsigned ID, const Type *Ty,
   // If we have the value in the symbol table or fwd-ref table, return it.
   if (Val) {
     if (Val->getType() == Ty) return Val;
-    if (Ty == Type::getLabelTy(F.getContext()))
+    if (Ty->isLabelTy())
       P.Error(Loc, "'%" + utostr(ID) + "' is not a basic block");
     else
       P.Error(Loc, "'%" + utostr(ID) + "' defined with type '" +
@@ -1682,7 +1752,7 @@ Value *LLParser::PerFunctionState::GetVal(unsigned ID, const Type *Ty,
 
   // Otherwise, create a new forward reference for this value and remember it.
   Value *FwdVal;
-  if (Ty == Type::getLabelTy(F.getContext()))
+  if (Ty->isLabelTy())
     FwdVal = BasicBlock::Create(F.getContext(), "", &F);
   else
     FwdVal = new Argument(Ty);
@@ -1697,7 +1767,7 @@ bool LLParser::PerFunctionState::SetInstName(int NameID,
                                              const std::string &NameStr,
                                              LocTy NameLoc, Instruction *Inst) {
   // If this instruction has void type, it cannot have a name or ID specified.
-  if (Inst->getType() == Type::getVoidTy(F.getContext())) {
+  if (Inst->getType()->isVoidTy()) {
     if (NameID != -1 || !NameStr.empty())
       return P.Error(NameLoc, "instructions returning void cannot have a name");
     return false;
@@ -1959,20 +2029,50 @@ bool LLParser::ParseValID(ValID &ID) {
     return false;
 
   case lltok::kw_asm: {
-    // ValID ::= 'asm' SideEffect? STRINGCONSTANT ',' STRINGCONSTANT
-    bool HasSideEffect;
+    // ValID ::= 'asm' SideEffect? AlignStack? STRINGCONSTANT ',' STRINGCONSTANT
+    bool HasSideEffect, AlignStack;
     Lex.Lex();
     if (ParseOptionalToken(lltok::kw_sideeffect, HasSideEffect) ||
+        ParseOptionalToken(lltok::kw_alignstack, AlignStack) ||
         ParseStringConstant(ID.StrVal) ||
         ParseToken(lltok::comma, "expected comma in inline asm expression") ||
         ParseToken(lltok::StringConstant, "expected constraint string"))
       return true;
     ID.StrVal2 = Lex.getStrVal();
-    ID.UIntVal = HasSideEffect;
+    ID.UIntVal = unsigned(HasSideEffect) | (unsigned(AlignStack)<<1);
     ID.Kind = ValID::t_InlineAsm;
     return false;
   }
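
Both flags now travel in ID.UIntVal as a two-bit field; a minimal sketch of
the encoding (helper name invented), which ConvertValIDToValue below and the
bitcode reader undo with 'Flags & 1' and 'Flags >> 1':

    // Bit 0: sideeffect, bit 1: alignstack.
    static unsigned packAsmFlags(bool HasSideEffect, bool AlignStack) {
      return unsigned(HasSideEffect) | (unsigned(AlignStack) << 1);
    }
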
 
+  case lltok::kw_blockaddress: {
+    // ValID ::= 'blockaddress' '(' @foo ',' %bar ')'
+    Lex.Lex();
+
+    ValID Fn, Label;
+    LocTy FnLoc, LabelLoc;
+    
+    if (ParseToken(lltok::lparen, "expected '(' in block address expression") ||
+        ParseValID(Fn) ||
+        ParseToken(lltok::comma, "expected comma in block address expression")||
+        ParseValID(Label) ||
+        ParseToken(lltok::rparen, "expected ')' in block address expression"))
+      return true;
+    
+    if (Fn.Kind != ValID::t_GlobalID && Fn.Kind != ValID::t_GlobalName)
+      return Error(Fn.Loc, "expected function name in blockaddress");
+    if (Label.Kind != ValID::t_LocalID && Label.Kind != ValID::t_LocalName)
+      return Error(Label.Loc, "expected basic block name in blockaddress");
+    
+    // Make a global variable as a placeholder for this reference.
+    GlobalVariable *FwdRef = new GlobalVariable(*M, Type::getInt8Ty(Context),
+                                           false, GlobalValue::InternalLinkage,
+                                                0, "");
+    ForwardRefBlockAddresses[Fn].push_back(std::make_pair(Label, FwdRef));
+    ID.ConstantVal = FwdRef;
+    ID.Kind = ValID::t_Constant;
+    return false;
+  }
+      
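
A reference that is only seen after its function has been fully parsed, e.g.
in a global initializer, stays in ForwardRefBlockAddresses until the
ValidateEndOfModule pass shown earlier.  An invented input that exercises
that path:

    const char *LateRef =
      "define void @f() {\n"
      "bb:\n"
      "  ret void\n"
      "}\n"
      "@tbl = global i8* blockaddress(@f, %bb)\n";
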
   case lltok::kw_trunc:
   case lltok::kw_zext:
   case lltok::kw_sext:
@@ -2013,6 +2113,9 @@ bool LLParser::ParseValID(ValID &ID) {
         ParseIndexList(Indices) ||
         ParseToken(lltok::rparen, "expected ')' in extractvalue constantexpr"))
       return true;
+    if (Lex.getKind() == lltok::NamedOrCustomMD)
+      if (ParseOptionalCustomMetadata()) return true;
+
     if (!isa<StructType>(Val->getType()) && !isa<ArrayType>(Val->getType()))
       return Error(ID.Loc, "extractvalue operand must be array or struct");
     if (!ExtractValueInst::getIndexedType(Val->getType(), Indices.begin(),
@@ -2034,6 +2137,8 @@ bool LLParser::ParseValID(ValID &ID) {
         ParseIndexList(Indices) ||
         ParseToken(lltok::rparen, "expected ')' in insertvalue constantexpr"))
       return true;
+    if (Lex.getKind() == lltok::NamedOrCustomMD)
+      if (ParseOptionalCustomMetadata()) return true;
     if (!isa<StructType>(Val0->getType()) && !isa<ArrayType>(Val0->getType()))
      return Error(ID.Loc, "insertvalue operand must be array or struct");
     if (!ExtractValueInst::getIndexedType(Val0->getType(), Indices.begin(),
@@ -2279,7 +2384,7 @@ bool LLParser::ConvertGlobalValIDToValue(const Type *Ty, ValID &ID,
     // The lexer has no type info, so builds all float and double FP constants
     // as double.  Fix this here.  Long double does not need this.
     if (&ID.APFloatVal.getSemantics() == &APFloat::IEEEdouble &&
-        Ty == Type::getFloatTy(Context)) {
+        Ty->isFloatTy()) {
       bool Ignored;
       ID.APFloatVal.convert(APFloat::IEEEsingle, APFloat::rmNearestTiesToEven,
                             &Ignored);
@@ -2298,7 +2403,7 @@ bool LLParser::ConvertGlobalValIDToValue(const Type *Ty, ValID &ID,
     return false;
   case ValID::t_Undef:
     // FIXME: LabelTy should not be a first-class type.
-    if ((!Ty->isFirstClassType() || Ty == Type::getLabelTy(Context)) &&
+    if ((!Ty->isFirstClassType() || Ty->isLabelTy()) &&
         !isa<OpaqueType>(Ty))
       return Error(ID.Loc, "invalid type for undef constant");
     V = UndefValue::get(Ty);
@@ -2310,7 +2415,7 @@ bool LLParser::ConvertGlobalValIDToValue(const Type *Ty, ValID &ID,
     return false;
   case ValID::t_Zero:
     // FIXME: LabelTy should not be a first-class type.
-    if (!Ty->isFirstClassType() || Ty == Type::getLabelTy(Context))
+    if (!Ty->isFirstClassType() || Ty->isLabelTy())
       return Error(ID.Loc, "invalid type for null constant");
     V = Constant::getNullValue(Ty);
     return false;
@@ -2368,7 +2473,7 @@ bool LLParser::ConvertValIDToValue(const Type *Ty, ValID &ID, Value *&V,
       PTy ? dyn_cast<FunctionType>(PTy->getElementType()) : 0;
     if (!FTy || !InlineAsm::Verify(FTy, ID.StrVal2))
       return Error(ID.Loc, "invalid type for inline asm constraint string");
-    V = InlineAsm::get(FTy, ID.StrVal, ID.StrVal2, ID.UIntVal);
+    V = InlineAsm::get(FTy, ID.StrVal, ID.StrVal2, ID.UIntVal&1, ID.UIntVal>>1);
     return false;
   } else if (ID.Kind == ValID::t_Metadata) {
     V = ID.MetadataVal;
@@ -2395,6 +2500,18 @@ bool LLParser::ParseTypeAndValue(Value *&V, PerFunctionState &PFS) {
          ParseValue(T, V, PFS);
 }
 
+bool LLParser::ParseTypeAndBasicBlock(BasicBlock *&BB, LocTy &Loc,
+                                      PerFunctionState &PFS) {
+  Value *V;
+  Loc = Lex.getLoc();
+  if (ParseTypeAndValue(V, PFS)) return true;
+  if (!isa<BasicBlock>(V))
+    return Error(Loc, "expected a basic block");
+  BB = cast<BasicBlock>(V);
+  return false;
+}
+
+
 /// FunctionHeader
 ///   ::= OptionalLinkage OptionalVisibility OptionalCallingConv OptRetAttrs
 ///       Type GlobalName '(' ArgList ')' OptFuncAttrs OptSection
@@ -2547,6 +2664,8 @@ bool LLParser::ParseFunctionHeader(Function *&Fn, bool isDefine) {
              AI != AE; ++AI)
           AI->setName("");
       }
+    } else if (M->getNamedValue(FunctionName)) {
+      return Error(NameLoc, "redefinition of function '@" + FunctionName + "'");
     }
 
   } else {
@@ -2582,6 +2701,10 @@ bool LLParser::ParseFunctionHeader(Function *&Fn, bool isDefine) {
   // Add all of the arguments we parsed to the function.
   Function::arg_iterator ArgIt = Fn->arg_begin();
   for (unsigned i = 0, e = ArgList.size(); i != e; ++i, ++ArgIt) {
+    // If we run out of arguments in the Function prototype, exit early.
+    // FIXME: REMOVE THIS IN LLVM 3.0, this is just for the mismatch case above.
+    if (ArgIt == Fn->arg_end()) break;
+    
     // If the argument has a name, insert it into the argument symbol table.
     if (ArgList[i].Name.empty()) continue;
 
@@ -2606,7 +2729,10 @@ bool LLParser::ParseFunctionBody(Function &Fn) {
     return TokError("expected '{' in function body");
   Lex.Lex();  // eat the {.
 
-  PerFunctionState PFS(*this, Fn);
+  int FunctionNumber = -1;
+  if (!Fn.hasName()) FunctionNumber = NumberedVals.size()-1;
+  
+  PerFunctionState PFS(*this, Fn, FunctionNumber);
 
   while (Lex.getKind() != lltok::rbrace && Lex.getKind() != lltok::kw_end)
     if (ParseBasicBlock(PFS)) return true;
@@ -2615,7 +2741,7 @@ bool LLParser::ParseFunctionBody(Function &Fn) {
   Lex.Lex();
 
   // Verify function is ok.
-  return PFS.VerifyFunctionComplete();
+  return PFS.FinishFunction();
 }
 
 /// ParseBasicBlock
@@ -2700,6 +2826,7 @@ bool LLParser::ParseInstruction(Instruction *&Inst, BasicBlock *BB,
   case lltok::kw_ret:         return ParseRet(Inst, BB, PFS);
   case lltok::kw_br:          return ParseBr(Inst, PFS);
   case lltok::kw_switch:      return ParseSwitch(Inst, PFS);
+  case lltok::kw_indirectbr:  return ParseIndirectBr(Inst, PFS);
   case lltok::kw_invoke:      return ParseInvoke(Inst, PFS);
   // Binary Operators.
   case lltok::kw_add:
@@ -2782,9 +2909,9 @@ bool LLParser::ParseInstruction(Instruction *&Inst, BasicBlock *BB,
   case lltok::kw_call:           return ParseCall(Inst, PFS, false);
   case lltok::kw_tail:           return ParseCall(Inst, PFS, true);
   // Memory.
-  case lltok::kw_alloca:
-  case lltok::kw_malloc:         return ParseAlloc(Inst, PFS, KeywordVal);
-  case lltok::kw_free:           return ParseFree(Inst, PFS);
+  case lltok::kw_alloca:         return ParseAlloc(Inst, PFS);
+  case lltok::kw_malloc:         return ParseAlloc(Inst, PFS, BB, false);
+  case lltok::kw_free:           return ParseFree(Inst, PFS, BB);
   case lltok::kw_load:           return ParseLoad(Inst, PFS, false);
   case lltok::kw_store:          return ParseStore(Inst, PFS, false);
   case lltok::kw_volatile:
@@ -2856,9 +2983,7 @@ bool LLParser::ParseRet(Instruction *&Inst, BasicBlock *BB,
   PATypeHolder Ty(Type::getVoidTy(Context));
   if (ParseType(Ty, true /*void allowed*/)) return true;
 
-  if (Ty == Type::getVoidTy(Context)) {
-    if (EatIfPresent(lltok::comma))
-      if (ParseOptionalCustomMetadata()) return true;
+  if (Ty->isVoidTy()) {
     Inst = ReturnInst::Create(Context);
     return false;
   }
@@ -2894,8 +3019,6 @@ bool LLParser::ParseRet(Instruction *&Inst, BasicBlock *BB,
       }
     }
   }
-  if (EatIfPresent(lltok::comma))
-    if (ParseOptionalCustomMetadata()) return true;
 
   Inst = ReturnInst::Create(Context, RV);
   return false;
@@ -2907,7 +3030,8 @@ bool LLParser::ParseRet(Instruction *&Inst, BasicBlock *BB,
 ///   ::= 'br' TypeAndValue ',' TypeAndValue ',' TypeAndValue
 bool LLParser::ParseBr(Instruction *&Inst, PerFunctionState &PFS) {
   LocTy Loc, Loc2;
-  Value *Op0, *Op1, *Op2;
+  Value *Op0;
+  BasicBlock *Op1, *Op2;
   if (ParseTypeAndValue(Op0, Loc, PFS)) return true;
 
   if (BasicBlock *BB = dyn_cast<BasicBlock>(Op0)) {
@@ -2919,17 +3043,12 @@ bool LLParser::ParseBr(Instruction *&Inst, PerFunctionState &PFS) {
     return Error(Loc, "branch condition must have 'i1' type");
 
   if (ParseToken(lltok::comma, "expected ',' after branch condition") ||
-      ParseTypeAndValue(Op1, Loc, PFS) ||
+      ParseTypeAndBasicBlock(Op1, Loc, PFS) ||
       ParseToken(lltok::comma, "expected ',' after true destination") ||
-      ParseTypeAndValue(Op2, Loc2, PFS))
+      ParseTypeAndBasicBlock(Op2, Loc2, PFS))
     return true;
 
-  if (!isa<BasicBlock>(Op1))
-    return Error(Loc, "true destination of branch must be a basic block");
-  if (!isa<BasicBlock>(Op2))
-    return Error(Loc2, "true destination of branch must be a basic block");
-
-  Inst = BranchInst::Create(cast<BasicBlock>(Op1), cast<BasicBlock>(Op2), Op0);
+  Inst = BranchInst::Create(Op1, Op2, Op0);
   return false;
 }
 
@@ -2940,50 +3059,87 @@ bool LLParser::ParseBr(Instruction *&Inst, PerFunctionState &PFS) {
 ///    ::= (TypeAndValue ',' TypeAndValue)*
 bool LLParser::ParseSwitch(Instruction *&Inst, PerFunctionState &PFS) {
   LocTy CondLoc, BBLoc;
-  Value *Cond, *DefaultBB;
+  Value *Cond;
+  BasicBlock *DefaultBB;
   if (ParseTypeAndValue(Cond, CondLoc, PFS) ||
       ParseToken(lltok::comma, "expected ',' after switch condition") ||
-      ParseTypeAndValue(DefaultBB, BBLoc, PFS) ||
+      ParseTypeAndBasicBlock(DefaultBB, BBLoc, PFS) ||
       ParseToken(lltok::lsquare, "expected '[' with switch table"))
     return true;
 
   if (!isa<IntegerType>(Cond->getType()))
     return Error(CondLoc, "switch condition must have integer type");
-  if (!isa<BasicBlock>(DefaultBB))
-    return Error(BBLoc, "default destination must be a basic block");
 
   // Parse the jump table pairs.
   SmallPtrSet<Value*, 32> SeenCases;
   SmallVector<std::pair<ConstantInt*, BasicBlock*>, 32> Table;
   while (Lex.getKind() != lltok::rsquare) {
-    Value *Constant, *DestBB;
+    Value *Constant;
+    BasicBlock *DestBB;
 
     if (ParseTypeAndValue(Constant, CondLoc, PFS) ||
         ParseToken(lltok::comma, "expected ',' after case value") ||
-        ParseTypeAndValue(DestBB, BBLoc, PFS))
+        ParseTypeAndBasicBlock(DestBB, PFS))
       return true;
-
+    
     if (!SeenCases.insert(Constant))
       return Error(CondLoc, "duplicate case value in switch");
     if (!isa<ConstantInt>(Constant))
       return Error(CondLoc, "case value is not a constant integer");
-    if (!isa<BasicBlock>(DestBB))
-      return Error(BBLoc, "case destination is not a basic block");
 
-    Table.push_back(std::make_pair(cast<ConstantInt>(Constant),
-                                   cast<BasicBlock>(DestBB)));
+    Table.push_back(std::make_pair(cast<ConstantInt>(Constant), DestBB));
   }
 
   Lex.Lex();  // Eat the ']'.
 
-  SwitchInst *SI = SwitchInst::Create(Cond, cast<BasicBlock>(DefaultBB),
-                                      Table.size());
+  SwitchInst *SI = SwitchInst::Create(Cond, DefaultBB, Table.size());
   for (unsigned i = 0, e = Table.size(); i != e; ++i)
     SI->addCase(Table[i].first, Table[i].second);
   Inst = SI;
   return false;
 }
 
+/// ParseIndirectBr
+///  Instruction
+///    ::= 'indirectbr' TypeAndValue ',' '[' LabelList ']'
+bool LLParser::ParseIndirectBr(Instruction *&Inst, PerFunctionState &PFS) {
+  LocTy AddrLoc;
+  Value *Address;
+  if (ParseTypeAndValue(Address, AddrLoc, PFS) ||
+      ParseToken(lltok::comma, "expected ',' after indirectbr address") ||
+      ParseToken(lltok::lsquare, "expected '[' with indirectbr"))
+    return true;
+  
+  if (!isa<PointerType>(Address->getType()))
+    return Error(AddrLoc, "indirectbr address must have pointer type");
+  
+  // Parse the destination list.
+  SmallVector<BasicBlock*, 16> DestList;
+  
+  if (Lex.getKind() != lltok::rsquare) {
+    BasicBlock *DestBB;
+    if (ParseTypeAndBasicBlock(DestBB, PFS))
+      return true;
+    DestList.push_back(DestBB);
+    
+    while (EatIfPresent(lltok::comma)) {
+      if (ParseTypeAndBasicBlock(DestBB, PFS))
+        return true;
+      DestList.push_back(DestBB);
+    }
+  }
+  
+  if (ParseToken(lltok::rsquare, "expected ']' at end of block list"))
+    return true;
+
+  IndirectBrInst *IBI = IndirectBrInst::Create(Address, DestList.size());
+  for (unsigned i = 0, e = DestList.size(); i != e; ++i)
+    IBI->addDestination(DestList[i]);
+  Inst = IBI;
+  return false;
+}
+
+
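
For reference, a minimal function exercising both new constructs (input
invented).  While the body is being parsed, blockaddress(@f, %target) is an
i8 placeholder global; FinishFunction swaps it for the real BlockAddress
once %target exists:

    const char *IndirectBrExample =
      "define void @f() {\n"
      "entry:\n"
      "  indirectbr i8* blockaddress(@f, %target), [label %target]\n"
      "target:\n"
      "  ret void\n"
      "}\n";
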
 /// ParseInvoke
 ///   ::= 'invoke' OptionalCallingConv OptionalAttrs Type Value ParamList
 ///       OptionalAttrs 'to' TypeAndValue 'unwind' TypeAndValue
@@ -2996,7 +3152,7 @@ bool LLParser::ParseInvoke(Instruction *&Inst, PerFunctionState &PFS) {
   ValID CalleeID;
   SmallVector<ParamInfo, 16> ArgList;
 
-  Value *NormalBB, *UnwindBB;
+  BasicBlock *NormalBB, *UnwindBB;
   if (ParseOptionalCallingConv(CC) ||
       ParseOptionalAttrs(RetAttrs, 1) ||
       ParseType(RetType, RetTypeLoc, true /*void allowed*/) ||
@@ -3004,16 +3160,11 @@ bool LLParser::ParseInvoke(Instruction *&Inst, PerFunctionState &PFS) {
       ParseParameterList(ArgList, PFS) ||
       ParseOptionalAttrs(FnAttrs, 2) ||
       ParseToken(lltok::kw_to, "expected 'to' in invoke") ||
-      ParseTypeAndValue(NormalBB, PFS) ||
+      ParseTypeAndBasicBlock(NormalBB, PFS) ||
       ParseToken(lltok::kw_unwind, "expected 'unwind' in invoke") ||
-      ParseTypeAndValue(UnwindBB, PFS))
+      ParseTypeAndBasicBlock(UnwindBB, PFS))
     return true;
 
-  if (!isa<BasicBlock>(NormalBB))
-    return Error(CallLoc, "normal destination is not a basic block");
-  if (!isa<BasicBlock>(UnwindBB))
-    return Error(CallLoc, "unwind destination is not a basic block");
-
   // If RetType is a non-function pointer type, then this is the short syntax
   // for the call, which means that RetType is just the return type.  Infer the
   // rest of the function argument types from the arguments that are present.
@@ -3081,8 +3232,7 @@ bool LLParser::ParseInvoke(Instruction *&Inst, PerFunctionState &PFS) {
   // Finish off the Attributes and check them
   AttrListPtr PAL = AttrListPtr::get(Attrs.begin(), Attrs.end());
 
-  InvokeInst *II = InvokeInst::Create(Callee, cast<BasicBlock>(NormalBB),
-                                      cast<BasicBlock>(UnwindBB),
+  InvokeInst *II = InvokeInst::Create(Callee, NormalBB, UnwindBB,
                                       Args.begin(), Args.end());
   II->setCallingConv(CC);
   II->setAttributes(PAL);
@@ -3293,7 +3443,7 @@ bool LLParser::ParseShuffleVector(Instruction *&Inst, PerFunctionState &PFS) {
 }
 
 /// ParsePHI
-///   ::= 'phi' Type '[' Value ',' Value ']' (',' '[' Value ',' Valueß ']')*
+///   ::= 'phi' Type '[' Value ',' Value ']' (',' '[' Value ',' Value ']')*
 bool LLParser::ParsePHI(Instruction *&Inst, PerFunctionState &PFS) {
   PATypeHolder Ty(Type::getVoidTy(Context));
   Value *Op0, *Op1;
@@ -3314,6 +3464,9 @@ bool LLParser::ParsePHI(Instruction *&Inst, PerFunctionState &PFS) {
     if (!EatIfPresent(lltok::comma))
       break;
 
+    if (Lex.getKind() == lltok::NamedOrCustomMD)
+      break;
+
     if (ParseToken(lltok::lsquare, "expected '[' in phi value list") ||
         ParseValue(Ty, Op0, PFS) ||
        ParseToken(lltok::comma, "expected ',' after phi value") ||
@@ -3322,6 +3475,9 @@ bool LLParser::ParsePHI(Instruction *&Inst, PerFunctionState &PFS) {
       return true;
   }
 
+  if (Lex.getKind() == lltok::NamedOrCustomMD)
+    if (ParseOptionalCustomMetadata()) return true;
+
   if (!Ty->isFirstClassType())
     return Error(TypeLoc, "phi node must have first class type");
 
@@ -3438,7 +3594,7 @@ bool LLParser::ParseCall(Instruction *&Inst, PerFunctionState &PFS,
 ///   ::= 'malloc' Type (',' TypeAndValue)? (',' OptionalInfo)?
 ///   ::= 'alloca' Type (',' TypeAndValue)? (',' OptionalInfo)?
 bool LLParser::ParseAlloc(Instruction *&Inst, PerFunctionState &PFS,
-                          unsigned Opc) {
+                          BasicBlock* BB, bool isAlloca) {
   PATypeHolder Ty(Type::getVoidTy(Context));
   Value *Size = 0;
   LocTy SizeLoc;
@@ -3459,21 +3615,34 @@ bool LLParser::ParseAlloc(Instruction *&Inst, PerFunctionState &PFS,
   if (Size && Size->getType() != Type::getInt32Ty(Context))
     return Error(SizeLoc, "element count must be i32");
 
-  if (Opc == Instruction::Malloc)
-    Inst = new MallocInst(Ty, Size, Alignment);
-  else
+  if (isAlloca) {
     Inst = new AllocaInst(Ty, Size, Alignment);
+    return false;
+  }
+
+  // Autoupgrade old malloc instruction to malloc call.
+  // FIXME: Remove in LLVM 3.0.
+  const Type *IntPtrTy = Type::getInt32Ty(Context);
+  Constant *AllocSize = ConstantExpr::getSizeOf(Ty);
+  AllocSize = ConstantExpr::getTruncOrBitCast(AllocSize, IntPtrTy);
+  if (!MallocF)
+    // Prototype malloc as "void *(int32)".
+    // This function is renamed as "malloc" in ValidateEndOfModule().
+    MallocF = cast<Function>(
+       M->getOrInsertFunction("", Type::getInt8PtrTy(Context), IntPtrTy, NULL));
+  Inst = CallInst::CreateMalloc(BB, IntPtrTy, Ty, AllocSize, Size, MallocF);
   return false;
 }
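
The same expansion as a hedged standalone helper, mirroring the call above
(2.7-era headers assumed; the value names in the comment are invented):

    #include "llvm/BasicBlock.h"
    #include "llvm/Constants.h"
    #include "llvm/Instructions.h"
    using namespace llvm;

    // For '%p = malloc i32, i32 %n' this emits, roughly:
    //   %allocsize  = mul i32 sizeof(i32), %n
    //   %malloccall = call i8* @malloc(i32 %allocsize)
    //   %p          = bitcast i8* %malloccall to i32*
    static Instruction *upgradeMalloc(BasicBlock *BB, const Type *Ty,
                                      Value *ArraySize) {
      const Type *IntPtrTy = Type::getInt32Ty(BB->getContext());
      Constant *AllocSize = ConstantExpr::getSizeOf(Ty);
      AllocSize = ConstantExpr::getTruncOrBitCast(AllocSize, IntPtrTy);
      return CallInst::CreateMalloc(BB, IntPtrTy, Ty, AllocSize,
                                    ArraySize, 0);
    }
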
 
 /// ParseFree
 ///   ::= 'free' TypeAndValue
-bool LLParser::ParseFree(Instruction *&Inst, PerFunctionState &PFS) {
+bool LLParser::ParseFree(Instruction *&Inst, PerFunctionState &PFS,
+                         BasicBlock* BB) {
   Value *Val; LocTy Loc;
   if (ParseTypeAndValue(Val, Loc, PFS)) return true;
   if (!isa<PointerType>(Val->getType()))
     return Error(Loc, "operand to free must be a pointer");
-  Inst = new FreeInst(Val);
+  Inst = CallInst::CreateFree(Val, BB);
   return false;
 }
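
And the mirror image for free: CreateFree bitcasts the operand to i8* when
necessary and appends the libc call to BB (again a sketch of this era's
API):

    #include "llvm/Instructions.h"
    using namespace llvm;

    // 'free i8* %p' becomes 'call void @free(i8* %p)' at the end of BB.
    static Instruction *upgradeFree(Value *Ptr, BasicBlock *BB) {
      return CallInst::CreateFree(Ptr, BB);
    }
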
 
@@ -3554,11 +3723,15 @@ bool LLParser::ParseGetElementPtr(Instruction *&Inst, PerFunctionState &PFS) {
 
   SmallVector<Value*, 16> Indices;
   while (EatIfPresent(lltok::comma)) {
+    if (Lex.getKind() == lltok::NamedOrCustomMD)
+      break;
     if (ParseTypeAndValue(Val, EltLoc, PFS)) return true;
     if (!isa<IntegerType>(Val->getType()))
       return Error(EltLoc, "getelementptr index must be an integer");
     Indices.push_back(Val);
   }
+  if (Lex.getKind() == lltok::NamedOrCustomMD)
+    if (ParseOptionalCustomMetadata()) return true;
 
   if (!GetElementPtrInst::getIndexedType(Ptr->getType(),
                                          Indices.begin(), Indices.end()))
@@ -3577,6 +3750,8 @@ bool LLParser::ParseExtractValue(Instruction *&Inst, PerFunctionState &PFS) {
   if (ParseTypeAndValue(Val, Loc, PFS) ||
       ParseIndexList(Indices))
     return true;
+  if (Lex.getKind() == lltok::NamedOrCustomMD)
+    if (ParseOptionalCustomMetadata()) return true;
 
   if (!isa<StructType>(Val->getType()) && !isa<ArrayType>(Val->getType()))
     return Error(Loc, "extractvalue operand must be array or struct");
@@ -3598,6 +3773,8 @@ bool LLParser::ParseInsertValue(Instruction *&Inst, PerFunctionState &PFS) {
       ParseTypeAndValue(Val1, Loc1, PFS) ||
       ParseIndexList(Indices))
     return true;
+  if (Lex.getKind() == lltok::NamedOrCustomMD)
+    if (ParseOptionalCustomMetadata()) return true;
 
   if (!isa<StructType>(Val0->getType()) && !isa<ArrayType>(Val0->getType()))
    return Error(Loc0, "insertvalue operand must be array or struct");
diff --git a/libclamav/c++/llvm/lib/AsmParser/LLParser.h b/libclamav/c++/llvm/lib/AsmParser/LLParser.h
index 97bf2f3..1112dc4 100644
--- a/libclamav/c++/llvm/lib/AsmParser/LLParser.h
+++ b/libclamav/c++/llvm/lib/AsmParser/LLParser.h
@@ -31,8 +31,41 @@ namespace llvm {
   class MetadataBase;
   class MDString;
   class MDNode;
-  struct ValID;
 
+  /// ValID - Represents a reference of a definition of some sort with no type.
+  /// There are several cases where we have to parse the value but where the
+  /// type can depend on later context.  This may either be a numeric reference
+  /// or a symbolic (%var) reference.  This is just a discriminated union.
+  struct ValID {
+    enum {
+      t_LocalID, t_GlobalID,      // ID in UIntVal.
+      t_LocalName, t_GlobalName,  // Name in StrVal.
+      t_APSInt, t_APFloat,        // Value in APSIntVal/APFloatVal.
+      t_Null, t_Undef, t_Zero,    // No value.
+      t_EmptyArray,               // No value:  []
+      t_Constant,                 // Value in ConstantVal.
+      t_InlineAsm,                // Value in StrVal/StrVal2/UIntVal.
+      t_Metadata                  // Value in MetadataVal.
+    } Kind;
+    
+    LLLexer::LocTy Loc;
+    unsigned UIntVal;
+    std::string StrVal, StrVal2;
+    APSInt APSIntVal;
+    APFloat APFloatVal;
+    Constant *ConstantVal;
+    MetadataBase *MetadataVal;
+    ValID() : APFloatVal(0.0) {}
+    
+    bool operator<(const ValID &RHS) const {
+      if (Kind == t_LocalID || Kind == t_GlobalID)
+        return UIntVal < RHS.UIntVal;
+      assert((Kind == t_LocalName || Kind == t_GlobalName) && 
+             "Ordering not defined for this ValID kind yet");
+      return StrVal < RHS.StrVal;
+    }
+  };
+  
   class LLParser {
   public:
     typedef LLLexer::LocTy LocTy;
@@ -46,8 +79,8 @@ namespace llvm {
     std::map<unsigned, std::pair<PATypeHolder, LocTy> > ForwardRefTypeIDs;
     std::vector<PATypeHolder> NumberedTypes;
     /// MetadataCache - This map keeps track of parsed metadata constants.
-    std::map<unsigned, MetadataBase *> MetadataCache;
-    std::map<unsigned, std::pair<MetadataBase *, LocTy> > ForwardRefMDNodes;
+    std::map<unsigned, WeakVH> MetadataCache;
+    std::map<unsigned, std::pair<WeakVH, LocTy> > ForwardRefMDNodes;
     SmallVector<std::pair<unsigned, MDNode *>, 2> MDsOnInst;
     struct UpRefRecord {
       /// Loc - This is the location of the upref.
@@ -75,9 +108,17 @@ namespace llvm {
     std::map<std::string, std::pair<GlobalValue*, LocTy> > ForwardRefVals;
     std::map<unsigned, std::pair<GlobalValue*, LocTy> > ForwardRefValIDs;
     std::vector<GlobalValue*> NumberedVals;
+    
+    // References to blockaddress.  The key is the function ValID, the value is
+    // a list of references to blocks in that function.
+    std::map<ValID, std::vector<std::pair<ValID, GlobalValue*> > >
+      ForwardRefBlockAddresses;
+    
+    Function *MallocF;
   public:
     LLParser(MemoryBuffer *F, SourceMgr &SM, SMDiagnostic &Err, Module *m) : 
-      Context(m->getContext()), Lex(F, SM, Err, m->getContext()), M(m) {}
+      Context(m->getContext()), Lex(F, SM, Err, m->getContext()),
+      M(m), MallocF(NULL) {}
     bool Run();
 
     LLVMContext& getContext() { return Context; }
@@ -182,13 +223,17 @@ namespace llvm {
       std::map<std::string, std::pair<Value*, LocTy> > ForwardRefVals;
       std::map<unsigned, std::pair<Value*, LocTy> > ForwardRefValIDs;
       std::vector<Value*> NumberedVals;
+      
+      /// FunctionNumber - If this is an unnamed function, this is its slot
+      /// number; otherwise it is -1.
+      int FunctionNumber;
     public:
-      PerFunctionState(LLParser &p, Function &f);
+      PerFunctionState(LLParser &p, Function &f, int FunctionNumber);
       ~PerFunctionState();
 
       Function &getFunction() const { return F; }
 
-      bool VerifyFunctionComplete();
+      bool FinishFunction();
 
       /// GetVal - Get a value with the specified name or ID, creating a
       /// forward reference record if needed.  This can return null if the value
@@ -228,7 +273,13 @@ namespace llvm {
       Loc = Lex.getLoc();
       return ParseTypeAndValue(V, PFS);
     }
-
+    bool ParseTypeAndBasicBlock(BasicBlock *&BB, LocTy &Loc,
+                                PerFunctionState &PFS);
+    bool ParseTypeAndBasicBlock(BasicBlock *&BB, PerFunctionState &PFS) {
+      LocTy Loc;
+      return ParseTypeAndBasicBlock(BB, Loc, PFS);
+    }
+  
     struct ParamInfo {
       LocTy Loc;
       Value *V;
@@ -262,6 +313,7 @@ namespace llvm {
     bool ParseRet(Instruction *&Inst, BasicBlock *BB, PerFunctionState &PFS);
     bool ParseBr(Instruction *&Inst, PerFunctionState &PFS);
     bool ParseSwitch(Instruction *&Inst, PerFunctionState &PFS);
+    bool ParseIndirectBr(Instruction *&Inst, PerFunctionState &PFS);
     bool ParseInvoke(Instruction *&Inst, PerFunctionState &PFS);
 
     bool ParseArithmetic(Instruction *&I, PerFunctionState &PFS, unsigned Opc,
@@ -276,14 +328,19 @@ namespace llvm {
     bool ParseShuffleVector(Instruction *&I, PerFunctionState &PFS);
     bool ParsePHI(Instruction *&I, PerFunctionState &PFS);
     bool ParseCall(Instruction *&I, PerFunctionState &PFS, bool isTail);
-    bool ParseAlloc(Instruction *&I, PerFunctionState &PFS, unsigned Opc);
-    bool ParseFree(Instruction *&I, PerFunctionState &PFS);
+    bool ParseAlloc(Instruction *&I, PerFunctionState &PFS,
+                    BasicBlock *BB = 0, bool isAlloca = true);
+    bool ParseFree(Instruction *&I, PerFunctionState &PFS, BasicBlock *BB);
     bool ParseLoad(Instruction *&I, PerFunctionState &PFS, bool isVolatile);
     bool ParseStore(Instruction *&I, PerFunctionState &PFS, bool isVolatile);
     bool ParseGetResult(Instruction *&I, PerFunctionState &PFS);
     bool ParseGetElementPtr(Instruction *&I, PerFunctionState &PFS);
     bool ParseExtractValue(Instruction *&I, PerFunctionState &PFS);
     bool ParseInsertValue(Instruction *&I, PerFunctionState &PFS);
+    
+    bool ResolveForwardRefBlockAddresses(Function *TheFn, 
+                             std::vector<std::pair<ValID, GlobalValue*> > &Refs,
+                                         PerFunctionState *PFS);
   };
 } // End llvm namespace
 
diff --git a/libclamav/c++/llvm/lib/AsmParser/LLToken.h b/libclamav/c++/llvm/lib/AsmParser/LLToken.h
index bfcb58e..797c32e 100644
--- a/libclamav/c++/llvm/lib/AsmParser/LLToken.h
+++ b/libclamav/c++/llvm/lib/AsmParser/LLToken.h
@@ -62,8 +62,8 @@ namespace lltok {
     kw_module,
     kw_asm,
     kw_sideeffect,
+    kw_alignstack,
     kw_gc,
-    kw_dbg,
     kw_c,
 
     kw_cc, kw_ccc, kw_fastcc, kw_coldcc,
@@ -111,12 +111,13 @@ namespace lltok {
     kw_fptoui, kw_fptosi, kw_inttoptr, kw_ptrtoint, kw_bitcast,
     kw_select, kw_va_arg,
 
-    kw_ret, kw_br, kw_switch, kw_invoke, kw_unwind, kw_unreachable,
+    kw_ret, kw_br, kw_switch, kw_indirectbr, kw_invoke, kw_unwind,
+    kw_unreachable,
 
     kw_malloc, kw_alloca, kw_free, kw_load, kw_store, kw_getelementptr,
 
     kw_extractelement, kw_insertelement, kw_shufflevector, kw_getresult,
-    kw_extractvalue, kw_insertvalue,
+    kw_extractvalue, kw_insertvalue, kw_blockaddress,
 
     // Unsigned Valued tokens (UIntVal).
     GlobalID,          // @42
diff --git a/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.cpp b/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.cpp
index fe0366f..9916388 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.cpp
+++ b/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.cpp
@@ -342,7 +342,7 @@ Value *BitcodeReaderMDValueList::getValueFwdRef(unsigned Idx) {
     resize(Idx + 1);
 
   if (Value *V = MDValuePtrs[Idx]) {
-    assert(V->getType() == Type::getMetadataTy(Context) && "Type mismatch in value table!");
+    assert(V->getType()->isMetadataTy() && "Type mismatch in value table!");
     return V;
   }
 
@@ -808,7 +808,7 @@ bool BitcodeReader::ParseMetadata() {
       SmallVector<Value*, 8> Elts;
       for (unsigned i = 0; i != Size; i += 2) {
         const Type *Ty = getTypeByID(Record[i], false);
-        if (Ty == Type::getMetadataTy(Context))
+        if (Ty->isMetadataTy())
           Elts.push_back(MDValueList.getValueFwdRef(Record[i+1]));
         else if (Ty != Type::getVoidTy(Context))
           Elts.push_back(ValueList.getValueFwdRef(Record[i+1], Ty));
@@ -837,13 +837,20 @@ bool BitcodeReader::ParseMetadata() {
       SmallString<8> Name;
       Name.resize(RecordLength-1);
       unsigned Kind = Record[0];
+      (void) Kind;
       for (unsigned i = 1; i != RecordLength; ++i)
         Name[i-1] = Record[i];
       MetadataContext &TheMetadata = Context.getMetadata();
-      assert(TheMetadata.MDHandlerNames.find(Name.str())
-             == TheMetadata.MDHandlerNames.end() &&
-             "Already registered MDKind!");
-      TheMetadata.MDHandlerNames[Name.str()] = Kind;
+      unsigned ExistingKind = TheMetadata.getMDKind(Name.str());
+      if (ExistingKind == 0) {
+        unsigned NewKind = TheMetadata.registerMDKind(Name.str());
+        (void) NewKind;
+        assert (Kind == NewKind 
+                && "Unable to handle custom metadata mismatch!");
+      } else {
+        assert (ExistingKind == Kind 
+                && "Unable to handle custom metadata mismatch!");
+      }
       break;
     }
     }
@@ -967,19 +974,19 @@ bool BitcodeReader::ParseConstants() {
     case bitc::CST_CODE_FLOAT: {    // FLOAT: [fpval]
       if (Record.empty())
         return Error("Invalid FLOAT record");
-      if (CurTy == Type::getFloatTy(Context))
+      if (CurTy->isFloatTy())
         V = ConstantFP::get(Context, APFloat(APInt(32, (uint32_t)Record[0])));
-      else if (CurTy == Type::getDoubleTy(Context))
+      else if (CurTy->isDoubleTy())
         V = ConstantFP::get(Context, APFloat(APInt(64, Record[0])));
-      else if (CurTy == Type::getX86_FP80Ty(Context)) {
+      else if (CurTy->isX86_FP80Ty()) {
         // Bits are not stored the same way as a normal i80 APInt, compensate.
         uint64_t Rearrange[2];
         Rearrange[0] = (Record[1] & 0xffffLL) | (Record[0] << 16);
         Rearrange[1] = Record[0] >> 48;
         V = ConstantFP::get(Context, APFloat(APInt(80, 2, Rearrange)));
-      } else if (CurTy == Type::getFP128Ty(Context))
+      } else if (CurTy->isFP128Ty())
         V = ConstantFP::get(Context, APFloat(APInt(128, 2, &Record[0]), true));
-      else if (CurTy == Type::getPPC_FP128Ty(Context))
+      else if (CurTy->isPPC_FP128Ty())
         V = ConstantFP::get(Context, APFloat(APInt(128, 2, &Record[0])));
       else
         V = UndefValue::get(CurTy);
@@ -1167,7 +1174,8 @@ bool BitcodeReader::ParseConstants() {
     case bitc::CST_CODE_INLINEASM: {
       if (Record.size() < 2) return Error("Invalid INLINEASM record");
       std::string AsmStr, ConstrStr;
-      bool HasSideEffects = Record[0];
+      bool HasSideEffects = Record[0] & 1;
+      bool IsAlignStack = Record[0] >> 1;
       unsigned AsmStrSize = Record[1];
       if (2+AsmStrSize >= Record.size())
         return Error("Invalid INLINEASM record");
@@ -1181,9 +1189,25 @@ bool BitcodeReader::ParseConstants() {
         ConstrStr += (char)Record[3+AsmStrSize+i];
       const PointerType *PTy = cast<PointerType>(CurTy);
       V = InlineAsm::get(cast<FunctionType>(PTy->getElementType()),
-                         AsmStr, ConstrStr, HasSideEffects);
+                         AsmStr, ConstrStr, HasSideEffects, IsAlignStack);
       break;
     }
+    case bitc::CST_CODE_BLOCKADDRESS:{
+      if (Record.size() < 3) return Error("Invalid CE_BLOCKADDRESS record");
+      const Type *FnTy = getTypeByID(Record[0]);
+      if (FnTy == 0) return Error("Invalid CE_BLOCKADDRESS record");
+      Function *Fn =
+        dyn_cast_or_null<Function>(ValueList.getConstantFwdRef(Record[1],FnTy));
+      if (Fn == 0) return Error("Invalid CE_BLOCKADDRESS record");
+      
+      GlobalVariable *FwdRef = new GlobalVariable(*Fn->getParent(),
+                                                  Type::getInt8Ty(Context),
+                                            false, GlobalValue::InternalLinkage,
+                                                  0, "");
+      BlockAddrFwdRefs[Fn].push_back(std::make_pair(Record[2], FwdRef));
+      V = FwdRef;
+      break;
+    }  
     }
 
     ValueList.AssignValue(V, NextCstNo);
@@ -1943,7 +1967,7 @@ bool BitcodeReader::ParseFunctionBody(Function *F) {
       }
       break;
     }
-    case bitc::FUNC_CODE_INST_SWITCH: { // SWITCH: [opty, opval, n, n x ops]
+    case bitc::FUNC_CODE_INST_SWITCH: { // SWITCH: [opty, op0, op1, ...]
       if (Record.size() < 3 || (Record.size() & 1) == 0)
         return Error("Invalid SWITCH record");
       const Type *OpTy = getTypeByID(Record[0]);
@@ -1967,7 +1991,28 @@ bool BitcodeReader::ParseFunctionBody(Function *F) {
       I = SI;
       break;
     }
-
+    case bitc::FUNC_CODE_INST_INDIRECTBR: { // INDIRECTBR: [opty, op0, op1, ...]
+      if (Record.size() < 2)
+        return Error("Invalid INDIRECTBR record");
+      const Type *OpTy = getTypeByID(Record[0]);
+      Value *Address = getFnValueByID(Record[1], OpTy);
+      if (OpTy == 0 || Address == 0)
+        return Error("Invalid INDIRECTBR record");
+      unsigned NumDests = Record.size()-2;
+      IndirectBrInst *IBI = IndirectBrInst::Create(Address, NumDests);
+      InstructionList.push_back(IBI);
+      for (unsigned i = 0, e = NumDests; i != e; ++i) {
+        if (BasicBlock *DestBB = getBasicBlock(Record[2+i])) {
+          IBI->addDestination(DestBB);
+        } else {
+          delete IBI;
+          return Error("Invalid INDIRECTBR record!");
+        }
+      }
+      I = IBI;
+      break;
+    }
+        
     case bitc::FUNC_CODE_INST_INVOKE: {
       // INVOKE: [attrs, cc, normBB, unwindBB, fnty, op0,op1,op2, ...]
       if (Record.size() < 4) return Error("Invalid INVOKE record");
@@ -2046,14 +2091,20 @@ bool BitcodeReader::ParseFunctionBody(Function *F) {
     }
 
     case bitc::FUNC_CODE_INST_MALLOC: { // MALLOC: [instty, op, align]
+      // Autoupgrade malloc instruction to malloc call.
+      // FIXME: Remove in LLVM 3.0.
       if (Record.size() < 3)
         return Error("Invalid MALLOC record");
       const PointerType *Ty =
         dyn_cast_or_null<PointerType>(getTypeByID(Record[0]));
       Value *Size = getFnValueByID(Record[1], Type::getInt32Ty(Context));
-      unsigned Align = Record[2];
       if (!Ty || !Size) return Error("Invalid MALLOC record");
-      I = new MallocInst(Ty->getElementType(), Size, (1 << Align) >> 1);
+      if (!CurBB) return Error("Invalid malloc instruction with no BB");
+      const Type *Int32Ty = IntegerType::getInt32Ty(CurBB->getContext());
+      Constant *AllocSize = ConstantExpr::getSizeOf(Ty->getElementType());
+      AllocSize = ConstantExpr::getTruncOrBitCast(AllocSize, Int32Ty);
+      I = CallInst::CreateMalloc(CurBB, Int32Ty, Ty->getElementType(),
+                                 AllocSize, Size, NULL);
       InstructionList.push_back(I);
       break;
     }
@@ -2063,7 +2114,8 @@ bool BitcodeReader::ParseFunctionBody(Function *F) {
       if (getValueTypePair(Record, OpNum, NextValueNo, Op) ||
           OpNum != Record.size())
         return Error("Invalid FREE record");
-      I = new FreeInst(Op);
+      if (!CurBB) return Error("Invalid free instruction with no BB");
+      I = CallInst::CreateFree(Op, CurBB);
       InstructionList.push_back(I);
       break;
     }
@@ -2214,6 +2266,27 @@ bool BitcodeReader::ParseFunctionBody(Function *F) {
     }
   }
 
+  // See if anything took the address of blocks in this function.  If so,
+  // resolve them now.
+  DenseMap<Function*, std::vector<BlockAddrRefTy> >::iterator BAFRI =
+    BlockAddrFwdRefs.find(F);
+  if (BAFRI != BlockAddrFwdRefs.end()) {
+    std::vector<BlockAddrRefTy> &RefList = BAFRI->second;
+    for (unsigned i = 0, e = RefList.size(); i != e; ++i) {
+      unsigned BlockIdx = RefList[i].first;
+      if (BlockIdx >= FunctionBBs.size())
+        return Error("Invalid blockaddress block #");
+    
+      GlobalVariable *FwdRef = RefList[i].second;
+      FwdRef->replaceAllUsesWith(BlockAddress::get(F, FunctionBBs[BlockIdx]));
+      FwdRef->eraseFromParent();
+    }
+    
+    BlockAddrFwdRefs.erase(BAFRI);
+  }
+  
   // Trim the value list down to the size it was before we parsed this function.
   ValueList.shrinkTo(ModuleValueListSize);
   std::vector<BasicBlock*>().swap(FunctionBBs);
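
Blockaddress constants may name basic blocks of functions whose bodies are
parsed later (or lazily), so the reader works in two phases: while parsing
the constant it plants a placeholder GlobalVariable and records it in
BlockAddrFwdRefs under the function and block index; once the function body
is parsed and FunctionBBs is known, each placeholder is RAUW'd with the real
BlockAddress and deleted. A self-contained sketch of the same pattern, with
Placeholder standing in for the temporary global and int keys standing in
for Function* (illustrative names, not from the patch):

    #include <map>
    #include <utility>
    #include <vector>

    struct Placeholder {
      int ResolvedBlockID;
      Placeholder() : ResolvedBlockID(-1) {}  // models the temporary global
    };

    typedef std::pair<unsigned, Placeholder*> FwdRefTy;
    std::map<int, std::vector<FwdRefTy> > FwdRefs;  // models BlockAddrFwdRefs

    // Phase 1: a blockaddress(@fn, %bb) is parsed before @fn's body exists.
    Placeholder *referenceBlock(int Fn, unsigned BlockIdx) {
      Placeholder *P = new Placeholder();
      FwdRefs[Fn].push_back(std::make_pair(BlockIdx, P));
      return P;
    }

    // Phase 2: after @fn's body is parsed, resolve every recorded
    // placeholder against the now-known block list and drop the entry.
    void resolveFunction(int Fn, const std::vector<int> &BlockIDs) {
      std::map<int, std::vector<FwdRefTy> >::iterator It = FwdRefs.find(Fn);
      if (It == FwdRefs.end()) return;
      for (unsigned i = 0, e = It->second.size(); i != e; ++i)
        It->second[i].second->ResolvedBlockID = BlockIDs[It->second[i].first];
      FwdRefs.erase(It);  // mirrors BlockAddrFwdRefs.erase(BAFRI)
    }
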
diff --git a/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.h b/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.h
index eefc7bd..7b3a1ae 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.h
+++ b/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.h
@@ -94,7 +94,7 @@ public:
 class BitcodeReaderMDValueList {
   std::vector<WeakVH> MDValuePtrs;
   
-  LLVMContext& Context;
+  LLVMContext &Context;
 public:
   BitcodeReaderMDValueList(LLVMContext& C) : Context(C) {}
 
@@ -122,7 +122,7 @@ public:
 };
 
 class BitcodeReader : public ModuleProvider {
-  LLVMContext& Context;
+  LLVMContext &Context;
   MemoryBuffer *Buffer;
   BitstreamReader StreamFile;
   BitstreamCursor Stream;
@@ -163,6 +163,12 @@ class BitcodeReader : public ModuleProvider {
   /// map contains info about where to find deferred function body (in the
   /// stream) and what linkage the original function had.
   DenseMap<Function*, std::pair<uint64_t, unsigned> > DeferredFunctionInfo;
+  
+  /// BlockAddrFwdRefs - These are blockaddr references to basic blocks.  These
+  /// are resolved lazily when functions are loaded.
+  typedef std::pair<unsigned, GlobalVariable*> BlockAddrRefTy;
+  DenseMap<Function*, std::vector<BlockAddrRefTy> > BlockAddrFwdRefs;
+  
 public:
   explicit BitcodeReader(MemoryBuffer *buffer, LLVMContext& C)
     : Context(C), Buffer(buffer), ErrorString(0), ValueList(C), MDValueList(C) {
diff --git a/libclamav/c++/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp b/libclamav/c++/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
index 5857c59..af0b8ac 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
+++ b/libclamav/c++/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
@@ -19,6 +19,7 @@
 #include "llvm/DerivedTypes.h"
 #include "llvm/InlineAsm.h"
 #include "llvm/Instructions.h"
+#include "llvm/LLVMContext.h"
 #include "llvm/Metadata.h"
 #include "llvm/Module.h"
 #include "llvm/Operator.h"
@@ -517,9 +518,7 @@ static void WriteModuleMetadata(const ValueEnumerator &VE,
       }
 
       // Code: [strchar x N]
-      const char *StrBegin = MDS->begin();
-      for (unsigned i = 0, e = MDS->length(); i != e; ++i)
-        Record.push_back(StrBegin[i]);
+      Record.append(MDS->begin(), MDS->end());
 
       // Emit the finished record.
       Stream.EmitRecord(bitc::METADATA_STRING, Record, MDSAbbrev);
@@ -563,29 +562,31 @@ static void WriteMetadataAttachment(const Function &F,
   // Write metadata attachments
   // METADATA_ATTACHMENT - [m x [value, [n x [id, mdnode]]]
   MetadataContext &TheMetadata = F.getContext().getMetadata();
+  typedef SmallVector<std::pair<unsigned, TrackingVH<MDNode> >, 2> MDMapTy;
+  MDMapTy MDs;
   for (Function::const_iterator BB = F.begin(), E = F.end(); BB != E; ++BB)
     for (BasicBlock::const_iterator I = BB->begin(), E = BB->end();
          I != E; ++I) {
-      const MetadataContext::MDMapTy *P = TheMetadata.getMDs(I);
-      if (!P) continue;
+      MDs.clear();
+      TheMetadata.getMDs(I, MDs);
       bool RecordedInstruction = false;
-      for (MetadataContext::MDMapTy::const_iterator PI = P->begin(), 
-             PE = P->end(); PI != PE; ++PI) {
-        if (MDNode *ND = dyn_cast_or_null<MDNode>(PI->second)) {
-          if (RecordedInstruction == false) {
-            Record.push_back(VE.getInstructionID(I));
-            RecordedInstruction = true;
-          }
-          Record.push_back(PI->first);
-          Record.push_back(VE.getValueID(ND));
+      for (MDMapTy::const_iterator PI = MDs.begin(), PE = MDs.end();
+             PI != PE; ++PI) {
+        if (RecordedInstruction == false) {
+          Record.push_back(VE.getInstructionID(I));
+          RecordedInstruction = true;
         }
+        Record.push_back(PI->first);
+        Record.push_back(VE.getValueID(PI->second));
       }
-      if (!StartedMetadataBlock)  {
-        Stream.EnterSubblock(bitc::METADATA_ATTACHMENT_ID, 3);
-        StartedMetadataBlock = true;
+      if (!Record.empty()) {
+        if (!StartedMetadataBlock)  {
+          Stream.EnterSubblock(bitc::METADATA_ATTACHMENT_ID, 3);
+          StartedMetadataBlock = true;
+        }
+        Stream.EmitRecord(bitc::METADATA_ATTACHMENT, Record, 0);
+        Record.clear();
       }
-      Stream.EmitRecord(bitc::METADATA_ATTACHMENT, Record, 0);
-      Record.clear();
     }
 
   if (StartedMetadataBlock)
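
The rewritten attachment loop above collects the (kind, node) pairs first
and only emits a METADATA_ATTACHMENT record when something was collected,
so instructions (and functions) without attachments no longer produce empty
records. The record layout is the instruction ID followed by kind/node ID
pairs; a hypothetical record for one instruction with two attachments could
be assembled like this (all IDs illustrative):

    #include <stdint.h>
    #include <vector>

    void buildExampleAttachmentRecord() {
      std::vector<uint64_t> Record;
      Record.push_back(42);  // VE.getInstructionID(I)
      Record.push_back(1);   // first metadata kind id (e.g. "dbg")
      Record.push_back(7);   // VE.getValueID(its MDNode)
      Record.push_back(3);   // second kind id
      Record.push_back(9);   // its node's value id
      // Only now, with Record non-empty, would EmitRecord be reached.
    }
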
@@ -602,11 +603,13 @@ static void WriteModuleMetadataStore(const Module *M,
   // Write metadata kinds
   // METADATA_KIND - [n x [id, name]]
   MetadataContext &TheMetadata = M->getContext().getMetadata();
-  const StringMap<unsigned> *Kinds = TheMetadata.getHandlerNames();
-  for (StringMap<unsigned>::const_iterator
-         I = Kinds->begin(), E = Kinds->end(); I != E; ++I) {
-    Record.push_back(I->second);
-    StringRef KName = I->first();
+  SmallVector<std::pair<unsigned, StringRef>, 4> Names;
+  TheMetadata.getHandlerNames(Names);
+  for (SmallVector<std::pair<unsigned, StringRef>, 4>::iterator 
+         I = Names.begin(),
+         E = Names.end(); I != E; ++I) {
+    Record.push_back(I->first);
+    StringRef KName = I->second;
     for (unsigned i = 0, e = KName.size(); i != e; ++i)
       Record.push_back(KName[i]);
     if (!StartedMetadataBlock)  {
@@ -677,7 +680,8 @@ static void WriteConstants(unsigned FirstVal, unsigned LastVal,
     }
 
     if (const InlineAsm *IA = dyn_cast<InlineAsm>(V)) {
-      Record.push_back(unsigned(IA->hasSideEffects()));
+      Record.push_back(unsigned(IA->hasSideEffects()) |
+                       unsigned(IA->isAlignStack()) << 1);
 
       // Add the asm string.
       const std::string &AsmStr = IA->getAsmString();
@@ -729,18 +733,16 @@ static void WriteConstants(unsigned FirstVal, unsigned LastVal,
     } else if (const ConstantFP *CFP = dyn_cast<ConstantFP>(C)) {
       Code = bitc::CST_CODE_FLOAT;
       const Type *Ty = CFP->getType();
-      if (Ty == Type::getFloatTy(Ty->getContext()) ||
-          Ty == Type::getDoubleTy(Ty->getContext())) {
+      if (Ty->isFloatTy() || Ty->isDoubleTy()) {
         Record.push_back(CFP->getValueAPF().bitcastToAPInt().getZExtValue());
-      } else if (Ty == Type::getX86_FP80Ty(Ty->getContext())) {
+      } else if (Ty->isX86_FP80Ty()) {
         // api needed to prevent premature destruction
         // bits are not in the same order as a normal i80 APInt, compensate.
         APInt api = CFP->getValueAPF().bitcastToAPInt();
         const uint64_t *p = api.getRawData();
         Record.push_back((p[1] << 48) | (p[0] >> 16));
         Record.push_back(p[0] & 0xffffLL);
-      } else if (Ty == Type::getFP128Ty(Ty->getContext()) ||
-                 Ty == Type::getPPC_FP128Ty(Ty->getContext())) {
+      } else if (Ty->isFP128Ty() || Ty->isPPC_FP128Ty()) {
         APInt api = CFP->getValueAPF().bitcastToAPInt();
         const uint64_t *p = api.getRawData();
         Record.push_back(p[0]);
@@ -749,10 +751,11 @@ static void WriteConstants(unsigned FirstVal, unsigned LastVal,
         assert (0 && "Unknown FP type!");
       }
     } else if (isa<ConstantArray>(C) && cast<ConstantArray>(C)->isString()) {
+      const ConstantArray *CA = cast<ConstantArray>(C);
       // Emit constant strings specially.
-      unsigned NumOps = C->getNumOperands();
+      unsigned NumOps = CA->getNumOperands();
       // If this is a null-terminated string, use the denser CSTRING encoding.
-      if (C->getOperand(NumOps-1)->isNullValue()) {
+      if (CA->getOperand(NumOps-1)->isNullValue()) {
         Code = bitc::CST_CODE_CSTRING;
         --NumOps;  // Don't encode the null, which isn't allowed by char6.
       } else {
@@ -762,7 +765,7 @@ static void WriteConstants(unsigned FirstVal, unsigned LastVal,
       bool isCStr7 = Code == bitc::CST_CODE_CSTRING;
       bool isCStrChar6 = Code == bitc::CST_CODE_CSTRING;
       for (unsigned i = 0; i != NumOps; ++i) {
-        unsigned char V = cast<ConstantInt>(C->getOperand(i))->getZExtValue();
+        unsigned char V = cast<ConstantInt>(CA->getOperand(i))->getZExtValue();
         Record.push_back(V);
         isCStr7 &= (V & 128) == 0;
         if (isCStrChar6)
@@ -850,6 +853,13 @@ static void WriteConstants(unsigned FirstVal, unsigned LastVal,
         Record.push_back(CE->getPredicate());
         break;
       }
+    } else if (const BlockAddress *BA = dyn_cast<BlockAddress>(C)) {
+      assert(BA->getFunction() == BA->getBasicBlock()->getParent() &&
+             "Malformed blockaddress");
+      Code = bitc::CST_CODE_BLOCKADDRESS;
+      Record.push_back(VE.getTypeID(BA->getFunction()->getType()));
+      Record.push_back(VE.getValueID(BA->getFunction()));
+      Record.push_back(VE.getGlobalBasicBlockID(BA->getBasicBlock()));
     } else {
       llvm_unreachable("Unknown constant!");
     }
@@ -999,7 +1009,7 @@ static void WriteInstruction(const Instruction &I, unsigned InstID,
   case Instruction::Br:
     {
       Code = bitc::FUNC_CODE_INST_BR;
-      BranchInst &II(cast<BranchInst>(I));
+      BranchInst &II = cast<BranchInst>(I);
       Vals.push_back(VE.getValueID(II.getSuccessor(0)));
       if (II.isConditional()) {
         Vals.push_back(VE.getValueID(II.getSuccessor(1)));
@@ -1013,6 +1023,13 @@ static void WriteInstruction(const Instruction &I, unsigned InstID,
     for (unsigned i = 0, e = I.getNumOperands(); i != e; ++i)
       Vals.push_back(VE.getValueID(I.getOperand(i)));
     break;
+  case Instruction::IndirectBr:
+    Code = bitc::FUNC_CODE_INST_INDIRECTBR;
+    Vals.push_back(VE.getTypeID(I.getOperand(0)->getType()));
+    for (unsigned i = 0, e = I.getNumOperands(); i != e; ++i)
+      Vals.push_back(VE.getValueID(I.getOperand(i)));
+    break;
+      
   case Instruction::Invoke: {
     const InvokeInst *II = cast<InvokeInst>(&I);
     const Value *Callee(II->getCalledValue());
@@ -1053,18 +1070,6 @@ static void WriteInstruction(const Instruction &I, unsigned InstID,
       Vals.push_back(VE.getValueID(I.getOperand(i)));
     break;
 
-  case Instruction::Malloc:
-    Code = bitc::FUNC_CODE_INST_MALLOC;
-    Vals.push_back(VE.getTypeID(I.getType()));
-    Vals.push_back(VE.getValueID(I.getOperand(0))); // size.
-    Vals.push_back(Log2_32(cast<MallocInst>(I).getAlignment())+1);
-    break;
-
-  case Instruction::Free:
-    Code = bitc::FUNC_CODE_INST_FREE;
-    PushValueAndType(I.getOperand(0), InstID, Vals, VE);
-    break;
-
   case Instruction::Alloca:
     Code = bitc::FUNC_CODE_INST_ALLOCA;
     Vals.push_back(VE.getTypeID(I.getType()));
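
Both ends of the x86_fp80 handling agree on a record layout where
Record[0] holds bits 16-79 of the 80-bit value and Record[1] holds bits
0-15, instead of APInt's natural word order. A quick self-check that the
reader's Rearrange computation inverts the writer's, assuming p[0]/p[1]
are the APInt raw words (low 64 bits, and high 16 bits with the rest zero):

    #include <assert.h>
    #include <stdint.h>

    void roundTripX86FP80(const uint64_t p[2]) {
      // Writer side (WriteConstants above):
      uint64_t R0 = (p[1] << 48) | (p[0] >> 16);  // Record[0] = bits 16..79
      uint64_t R1 = p[0] & 0xffffULL;             // Record[1] = bits 0..15
      // Reader side (ParseConstants earlier in this patch):
      uint64_t Lo = (R1 & 0xffffULL) | (R0 << 16);
      uint64_t Hi = R0 >> 48;
      assert(Lo == p[0] && Hi == p[1] && "the two layouts must invert");
      (void)Lo; (void)Hi;
    }
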
diff --git a/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp b/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp
index 60253ad..d840d4a 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp
+++ b/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp
@@ -14,6 +14,7 @@
 #include "ValueEnumerator.h"
 #include "llvm/Constants.h"
 #include "llvm/DerivedTypes.h"
+#include "llvm/LLVMContext.h"
 #include "llvm/Metadata.h"
 #include "llvm/Module.h"
 #include "llvm/TypeSymbolTable.h"
@@ -87,6 +88,8 @@ ValueEnumerator::ValueEnumerator(const Module *M) {
       EnumerateType(I->getType());
 
     MetadataContext &TheMetadata = F->getContext().getMetadata();
+    typedef SmallVector<std::pair<unsigned, TrackingVH<MDNode> >, 2> MDMapTy;
+    MDMapTy MDs;
     for (Function::const_iterator BB = F->begin(), E = F->end(); BB != E; ++BB)
       for (BasicBlock::const_iterator I = BB->begin(), E = BB->end(); I!=E;++I){
         for (User::const_op_iterator OI = I->op_begin(), E = I->op_end();
@@ -99,12 +102,11 @@ ValueEnumerator::ValueEnumerator(const Module *M) {
           EnumerateAttributes(II->getAttributes());
 
         // Enumerate metadata attached with this instruction.
-        const MetadataContext::MDMapTy *MDs = TheMetadata.getMDs(I);
-        if (MDs)
-          for (MetadataContext::MDMapTy::const_iterator MI = MDs->begin(),
-                 ME = MDs->end(); MI != ME; ++MI)
-            if (MDNode *MDN = dyn_cast_or_null<MDNode>(MI->second))
-              EnumerateMetadata(MDN);
+        MDs.clear();
+        TheMetadata.getMDs(I, MDs);
+        for (MDMapTy::const_iterator MI = MDs.begin(), ME = MDs.end(); MI != ME;
+             ++MI)
+          EnumerateMetadata(MI->second);
       }
   }
 
@@ -214,15 +216,16 @@ void ValueEnumerator::EnumerateMetadata(const MetadataBase *MD) {
     MDValues.push_back(std::make_pair(MD, 1U));
     MDValueMap[MD] = MDValues.size();
     MDValueID = MDValues.size();
-    for (MDNode::const_elem_iterator I = N->elem_begin(), E = N->elem_end();
-         I != E; ++I) {
-      if (*I)
-        EnumerateValue(*I);
+    for (unsigned i = 0, e = N->getNumElements(); i != e; ++i) {    
+      if (Value *V = N->getElement(i))
+        EnumerateValue(V);
       else
         EnumerateType(Type::getVoidTy(MD->getContext()));
     }
     return;
-  } else if (const NamedMDNode *N = dyn_cast<NamedMDNode>(MD)) {
+  }
+  
+  if (const NamedMDNode *N = dyn_cast<NamedMDNode>(MD)) {
     for(NamedMDNode::const_elem_iterator I = N->elem_begin(),
           E = N->elem_end(); I != E; ++I) {
       MetadataBase *M = *I;
@@ -273,7 +276,8 @@ void ValueEnumerator::EnumerateValue(const Value *V) {
       // graph that don't go through a global variable.
       for (User::const_op_iterator I = C->op_begin(), E = C->op_end();
            I != E; ++I)
-        EnumerateValue(*I);
+        if (!isa<BasicBlock>(*I)) // Don't enumerate BB operand to BlockAddress.
+          EnumerateValue(*I);
 
       // Finally, add the value.  Doing this could make the ValueID reference be
       // dangling, don't reuse it.
@@ -319,15 +323,20 @@ void ValueEnumerator::EnumerateOperandType(const Value *V) {
 
     // This constant may have operands, make sure to enumerate the types in
     // them.
-    for (unsigned i = 0, e = C->getNumOperands(); i != e; ++i)
-      EnumerateOperandType(C->getOperand(i));
+    for (unsigned i = 0, e = C->getNumOperands(); i != e; ++i) {
+      const User *Op = C->getOperand(i);
+      
+      // Don't enumerate basic blocks here, this happens as operands to
+      // blockaddress.
+      if (isa<BasicBlock>(Op)) continue;
+      
+      EnumerateOperandType(cast<Constant>(Op));
+    }
 
     if (const MDNode *N = dyn_cast<MDNode>(V)) {
-      for (unsigned i = 0, e = N->getNumElements(); i != e; ++i) {
-        Value *Elem = N->getElement(i);
-        if (Elem)
+      for (unsigned i = 0, e = N->getNumElements(); i != e; ++i)
+        if (Value *Elem = N->getElement(i))
           EnumerateOperandType(Elem);
-      }
     }
   } else if (isa<MDString>(V) || isa<MDNode>(V))
     EnumerateValue(V);
@@ -396,3 +405,23 @@ void ValueEnumerator::purgeFunction() {
   Values.resize(NumModuleValues);
   BasicBlocks.clear();
 }
+
+static void IncorporateFunctionInfoGlobalBBIDs(const Function *F,
+                                 DenseMap<const BasicBlock*, unsigned> &IDMap) {
+  unsigned Counter = 0;
+  for (Function::const_iterator BB = F->begin(), E = F->end(); BB != E; ++BB)
+    IDMap[BB] = ++Counter;
+}
+
+/// getGlobalBasicBlockID - This returns the function-specific ID for the
+/// specified basic block.  This is relatively expensive information, so it
+/// should only be used by rare constructs such as address-of-label.
+unsigned ValueEnumerator::getGlobalBasicBlockID(const BasicBlock *BB) const {
+  unsigned &Idx = GlobalBasicBlockIDs[BB];
+  if (Idx != 0)
+    return Idx-1;
+
+  IncorporateFunctionInfoGlobalBBIDs(BB->getParent(), GlobalBasicBlockIDs);
+  return getGlobalBasicBlockID(BB);
+}
+
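
getGlobalBasicBlockID relies on DenseMap value-initializing missing entries
to 0: stored IDs are biased by one (note the ++Counter pre-increment), so a
zero entry means "not computed yet" and the whole function is numbered in
one pass on the first miss. The same sentinel idiom with a plain std::map,
where computeID is a hypothetical stand-in for the numbering pass:

    #include <map>

    static unsigned NextID = 0;
    unsigned computeID(const void *) { return NextID++; }  // stand-in

    std::map<const void*, unsigned> IDs;  // models GlobalBasicBlockIDs

    unsigned getID(const void *Key) {
      unsigned &Idx = IDs[Key];      // default-inserts 0 on first lookup
      if (Idx != 0)
        return Idx - 1;              // cached: undo the +1 bias
      Idx = computeID(Key) + 1;      // miss: compute once, store biased
      return Idx - 1;
    }
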
diff --git a/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.h b/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.h
index da63dde..3c83e35 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.h
+++ b/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.h
@@ -53,6 +53,10 @@ private:
   AttributeMapType AttributeMap;
   std::vector<AttrListPtr> Attributes;
   
+  /// GlobalBasicBlockIDs - This map memoizes the basic block ID's referenced by
+  /// the "getGlobalBasicBlockID" method.
+  mutable DenseMap<const BasicBlock*, unsigned> GlobalBasicBlockIDs;
+  
   typedef DenseMap<const Instruction*, unsigned> InstructionMapType;
   InstructionMapType InstructionMap;
   unsigned InstructionCount;
@@ -106,6 +110,11 @@ public:
   const std::vector<AttrListPtr> &getAttributes() const {
     return Attributes;
   }
+  
+  /// getGlobalBasicBlockID - This returns the function-specific ID for the
+  /// specified basic block.  This is relatively expensive information, so it
+  /// should only be used by rare constructs such as address-of-label.
+  unsigned getGlobalBasicBlockID(const BasicBlock *BB) const;
 
   /// incorporateFunction/purgeFunction - If you'd like to deal with a function,
   /// use these two methods to get its data into the ValueEnumerator!
diff --git a/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp b/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp
new file mode 100644
index 0000000..8e3f8e7
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp
@@ -0,0 +1,926 @@
+//===----- AggressiveAntiDepBreaker.cpp - Anti-dep breaker ----------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements the AggressiveAntiDepBreaker class, which
+// implements register anti-dependence breaking during post-RA
+// scheduling. It attempts to break all anti-dependencies within a
+// block.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "post-RA-sched"
+#include "AggressiveAntiDepBreaker.h"
+#include "llvm/CodeGen/MachineBasicBlock.h"
+#include "llvm/CodeGen/MachineFrameInfo.h"
+#include "llvm/CodeGen/MachineInstr.h"
+#include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/Target/TargetMachine.h"
+#include "llvm/Target/TargetRegisterInfo.h"
+#include "llvm/Support/CommandLine.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/raw_ostream.h"
+using namespace llvm;
+
+// If DebugDiv > 0 then only break antidep with (ID % DebugDiv) == DebugMod
+static cl::opt<int>
+DebugDiv("agg-antidep-debugdiv",
+                      cl::desc("Debug control for aggressive anti-dep breaker"),
+                      cl::init(0), cl::Hidden);
+static cl::opt<int>
+DebugMod("agg-antidep-debugmod",
+                      cl::desc("Debug control for aggressive anti-dep breaker"),
+                      cl::init(0), cl::Hidden);
+
+AggressiveAntiDepState::AggressiveAntiDepState(MachineBasicBlock *BB) :
+  GroupNodes(TargetRegisterInfo::FirstVirtualRegister, 0) {
+  // Initialize all registers to be in their own group. Initially we
+  // assign the register to the same-indexed GroupNode.
+  for (unsigned i = 0; i < TargetRegisterInfo::FirstVirtualRegister; ++i)
+    GroupNodeIndices[i] = i;
+
+  // Initialize the indices to indicate that no registers are live.
+  std::fill(KillIndices, array_endof(KillIndices), ~0u);
+  std::fill(DefIndices, array_endof(DefIndices), BB->size());
+}
+
+unsigned AggressiveAntiDepState::GetGroup(unsigned Reg)
+{
+  unsigned Node = GroupNodeIndices[Reg];
+  while (GroupNodes[Node] != Node)
+    Node = GroupNodes[Node];
+
+  return Node;
+}
+
+void AggressiveAntiDepState::GetGroupRegs(
+  unsigned Group,
+  std::vector<unsigned> &Regs,
+  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference> *RegRefs)
+{
+  for (unsigned Reg = 0; Reg != TargetRegisterInfo::FirstVirtualRegister; ++Reg) {
+    if ((GetGroup(Reg) == Group) && (RegRefs->count(Reg) > 0))
+      Regs.push_back(Reg);
+  }
+}
+
+unsigned AggressiveAntiDepState::UnionGroups(unsigned Reg1, unsigned Reg2)
+{
+  assert(GroupNodes[0] == 0 && "GroupNode 0 not parent!");
+  assert(GroupNodeIndices[0] == 0 && "Reg 0 not in Group 0!");
+  
+  // find group for each register
+  unsigned Group1 = GetGroup(Reg1);
+  unsigned Group2 = GetGroup(Reg2);
+  
+  // if either group is 0, then that must become the parent
+  unsigned Parent = (Group1 == 0) ? Group1 : Group2;
+  unsigned Other = (Parent == Group1) ? Group2 : Group1;
+  GroupNodes.at(Other) = Parent;
+  return Parent;
+}
+  
+unsigned AggressiveAntiDepState::LeaveGroup(unsigned Reg)
+{
+  // Create a new GroupNode for Reg. Reg's existing GroupNode must
+  // stay as is because there could be other GroupNodes referring to
+  // it.
+  unsigned idx = GroupNodes.size();
+  GroupNodes.push_back(idx);
+  GroupNodeIndices[Reg] = idx;
+  return idx;
+}
+
+bool AggressiveAntiDepState::IsLive(unsigned Reg)
+{
+  // KillIndex must be defined and DefIndex not defined for a register
+  // to be live.
+  return((KillIndices[Reg] != ~0u) && (DefIndices[Reg] == ~0u));
+}
+
+
+
+AggressiveAntiDepBreaker::
+AggressiveAntiDepBreaker(MachineFunction& MFi,
+                         TargetSubtarget::RegClassVector& CriticalPathRCs) : 
+  AntiDepBreaker(), MF(MFi),
+  MRI(MF.getRegInfo()),
+  TRI(MF.getTarget().getRegisterInfo()),
+  AllocatableSet(TRI->getAllocatableSet(MF)),
+  State(NULL) {
+  /* Collect a bitset of all registers that are only broken if they
+     are on the critical path. */
+  for (unsigned i = 0, e = CriticalPathRCs.size(); i < e; ++i) {
+    BitVector CPSet = TRI->getAllocatableSet(MF, CriticalPathRCs[i]);
+    if (CriticalPathSet.none())
+      CriticalPathSet = CPSet;
+    else
+      CriticalPathSet |= CPSet;
+   }
+ 
+  DEBUG(errs() << "AntiDep Critical-Path Registers:");
+  DEBUG(for (int r = CriticalPathSet.find_first(); r != -1; 
+             r = CriticalPathSet.find_next(r))
+          errs() << " " << TRI->getName(r));
+  DEBUG(errs() << '\n');
+}
+
+AggressiveAntiDepBreaker::~AggressiveAntiDepBreaker() {
+  delete State;
+}
+
+void AggressiveAntiDepBreaker::StartBlock(MachineBasicBlock *BB) {
+  assert(State == NULL);
+  State = new AggressiveAntiDepState(BB);
+
+  bool IsReturnBlock = (!BB->empty() && BB->back().getDesc().isReturn());
+  unsigned *KillIndices = State->GetKillIndices();
+  unsigned *DefIndices = State->GetDefIndices();
+
+  // Determine the live-out physregs for this block.
+  if (IsReturnBlock) {
+    // In a return block, examine the function live-out regs.
+    for (MachineRegisterInfo::liveout_iterator I = MRI.liveout_begin(),
+         E = MRI.liveout_end(); I != E; ++I) {
+      unsigned Reg = *I;
+      State->UnionGroups(Reg, 0);
+      KillIndices[Reg] = BB->size();
+      DefIndices[Reg] = ~0u;
+      // Repeat, for all aliases.
+      for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias) {
+        unsigned AliasReg = *Alias;
+        State->UnionGroups(AliasReg, 0);
+        KillIndices[AliasReg] = BB->size();
+        DefIndices[AliasReg] = ~0u;
+      }
+    }
+  } else {
+    // In a non-return block, examine the live-in regs of all successors.
+    for (MachineBasicBlock::succ_iterator SI = BB->succ_begin(),
+         SE = BB->succ_end(); SI != SE; ++SI)
+      for (MachineBasicBlock::livein_iterator I = (*SI)->livein_begin(),
+           E = (*SI)->livein_end(); I != E; ++I) {
+        unsigned Reg = *I;
+        State->UnionGroups(Reg, 0);
+        KillIndices[Reg] = BB->size();
+        DefIndices[Reg] = ~0u;
+        // Repeat, for all aliases.
+        for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias) {
+          unsigned AliasReg = *Alias;
+          State->UnionGroups(AliasReg, 0);
+          KillIndices[AliasReg] = BB->size();
+          DefIndices[AliasReg] = ~0u;
+        }
+      }
+  }
+
+  // Mark live-out callee-saved registers. In a return block this is
+  // all callee-saved registers. In non-return this is any
+  // callee-saved register that is not saved in the prolog.
+  const MachineFrameInfo *MFI = MF.getFrameInfo();
+  BitVector Pristine = MFI->getPristineRegs(BB);
+  for (const unsigned *I = TRI->getCalleeSavedRegs(); *I; ++I) {
+    unsigned Reg = *I;
+    if (!IsReturnBlock && !Pristine.test(Reg)) continue;
+    State->UnionGroups(Reg, 0);
+    KillIndices[Reg] = BB->size();
+    DefIndices[Reg] = ~0u;
+    // Repeat, for all aliases.
+    for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias) {
+      unsigned AliasReg = *Alias;
+      State->UnionGroups(AliasReg, 0);
+      KillIndices[AliasReg] = BB->size();
+      DefIndices[AliasReg] = ~0u;
+    }
+  }
+}
+
+void AggressiveAntiDepBreaker::FinishBlock() {
+  delete State;
+  State = NULL;
+}
+
+void AggressiveAntiDepBreaker::Observe(MachineInstr *MI, unsigned Count,
+                                     unsigned InsertPosIndex) {
+  assert(Count < InsertPosIndex && "Instruction index out of expected range!");
+
+  std::set<unsigned> PassthruRegs;
+  GetPassthruRegs(MI, PassthruRegs);
+  PrescanInstruction(MI, Count, PassthruRegs);
+  ScanInstruction(MI, Count);
+
+  DEBUG(errs() << "Observe: ");
+  DEBUG(MI->dump());
+  DEBUG(errs() << "\tRegs:");
+
+  unsigned *DefIndices = State->GetDefIndices();
+  for (unsigned Reg = 0; Reg != TargetRegisterInfo::FirstVirtualRegister; ++Reg) {
+    // If Reg is currently live, then mark that it can't be renamed, as
+    // we don't know the extent of its live-range anymore (now that it
+    // has been scheduled). If it is not live but was defined in the
+    // previous schedule region, then set its def index to the most
+    // conservative location (i.e. the beginning of the previous
+    // schedule region).
+    if (State->IsLive(Reg)) {
+      DEBUG(if (State->GetGroup(Reg) != 0)
+              errs() << " " << TRI->getName(Reg) << "=g" << 
+                State->GetGroup(Reg) << "->g0(region live-out)");
+      State->UnionGroups(Reg, 0);
+    } else if ((DefIndices[Reg] < InsertPosIndex) && (DefIndices[Reg] >= Count)) {
+      DefIndices[Reg] = Count;
+    }
+  }
+  DEBUG(errs() << '\n');
+}
+
+bool AggressiveAntiDepBreaker::IsImplicitDefUse(MachineInstr *MI,
+                                            MachineOperand& MO)
+{
+  if (!MO.isReg() || !MO.isImplicit())
+    return false;
+
+  unsigned Reg = MO.getReg();
+  if (Reg == 0)
+    return false;
+
+  MachineOperand *Op = NULL;
+  if (MO.isDef())
+    Op = MI->findRegisterUseOperand(Reg, true);
+  else
+    Op = MI->findRegisterDefOperand(Reg);
+
+  return((Op != NULL) && Op->isImplicit());
+}
+
+void AggressiveAntiDepBreaker::GetPassthruRegs(MachineInstr *MI,
+                                           std::set<unsigned>& PassthruRegs) {
+  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+    MachineOperand &MO = MI->getOperand(i);
+    if (!MO.isReg()) continue;
+    if ((MO.isDef() && MI->isRegTiedToUseOperand(i)) || 
+        IsImplicitDefUse(MI, MO)) {
+      const unsigned Reg = MO.getReg();
+      PassthruRegs.insert(Reg);
+      for (const unsigned *Subreg = TRI->getSubRegisters(Reg);
+           *Subreg; ++Subreg) {
+        PassthruRegs.insert(*Subreg);
+      }
+    }
+  }
+}
+
+/// AntiDepEdges - Return in Edges the anti- and output-dependencies
+/// in SU that we want to consider for breaking.
+static void AntiDepEdges(SUnit *SU, std::vector<SDep*>& Edges) {
+  SmallSet<unsigned, 4> RegSet;
+  for (SUnit::pred_iterator P = SU->Preds.begin(), PE = SU->Preds.end();
+       P != PE; ++P) {
+    if ((P->getKind() == SDep::Anti) || (P->getKind() == SDep::Output)) {
+      unsigned Reg = P->getReg();
+      if (RegSet.count(Reg) == 0) {
+        Edges.push_back(&*P);
+        RegSet.insert(Reg);
+      }
+    }
+  }
+}
+
+/// CriticalPathStep - Return the next SUnit after SU on the bottom-up
+/// critical path.
+static SUnit *CriticalPathStep(SUnit *SU) {
+  SDep *Next = 0;
+  unsigned NextDepth = 0;
+  // Find the predecessor edge with the greatest depth.
+  if (SU != 0) {
+    for (SUnit::pred_iterator P = SU->Preds.begin(), PE = SU->Preds.end();
+         P != PE; ++P) {
+      SUnit *PredSU = P->getSUnit();
+      unsigned PredLatency = P->getLatency();
+      unsigned PredTotalLatency = PredSU->getDepth() + PredLatency;
+      // In the case of a latency tie, prefer an anti-dependency edge over
+      // other types of edges.
+      if (NextDepth < PredTotalLatency ||
+          (NextDepth == PredTotalLatency && P->getKind() == SDep::Anti)) {
+        NextDepth = PredTotalLatency;
+        Next = &*P;
+      }
+    }
+  }
+
+  return (Next) ? Next->getSUnit() : 0;
+}
+
+void AggressiveAntiDepBreaker::HandleLastUse(unsigned Reg, unsigned KillIdx,
+                                             const char *tag, const char *header,
+                                             const char *footer) {
+  unsigned *KillIndices = State->GetKillIndices();
+  unsigned *DefIndices = State->GetDefIndices();
+  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>& 
+    RegRefs = State->GetRegRefs();
+
+  if (!State->IsLive(Reg)) {
+    KillIndices[Reg] = KillIdx;
+    DefIndices[Reg] = ~0u;
+    RegRefs.erase(Reg);
+    State->LeaveGroup(Reg);
+    DEBUG(if (header != NULL) {
+        errs() << header << TRI->getName(Reg); header = NULL; });
+    DEBUG(errs() << "->g" << State->GetGroup(Reg) << tag);
+  }
+  // Repeat for subregisters.
+  for (const unsigned *Subreg = TRI->getSubRegisters(Reg);
+       *Subreg; ++Subreg) {
+    unsigned SubregReg = *Subreg;
+    if (!State->IsLive(SubregReg)) {
+      KillIndices[SubregReg] = KillIdx;
+      DefIndices[SubregReg] = ~0u;
+      RegRefs.erase(SubregReg);
+      State->LeaveGroup(SubregReg);
+      DEBUG(if (header != NULL) {
+          errs() << header << TRI->getName(Reg); header = NULL; });
+      DEBUG(errs() << " " << TRI->getName(SubregReg) << "->g" <<
+            State->GetGroup(SubregReg) << tag);
+    }
+  }
+
+  DEBUG(if ((header == NULL) && (footer != NULL)) errs() << footer);
+}
+
+void AggressiveAntiDepBreaker::PrescanInstruction(MachineInstr *MI, unsigned Count,
+                                              std::set<unsigned>& PassthruRegs) {
+  unsigned *DefIndices = State->GetDefIndices();
+  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>& 
+    RegRefs = State->GetRegRefs();
+
+  // Handle dead defs by simulating a last-use of the register just
+  // after the def. A dead def can occur because the def is truly
+  // dead, or because only a subregister is live at the def. If we
+  // don't do this the dead def will be incorrectly merged into the
+  // previous def.
+  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+    MachineOperand &MO = MI->getOperand(i);
+    if (!MO.isReg() || !MO.isDef()) continue;
+    unsigned Reg = MO.getReg();
+    if (Reg == 0) continue;
+    
+    HandleLastUse(Reg, Count + 1, "", "\tDead Def: ", "\n");
+  }
+
+  DEBUG(errs() << "\tDef Groups:");
+  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+    MachineOperand &MO = MI->getOperand(i);
+    if (!MO.isReg() || !MO.isDef()) continue;
+    unsigned Reg = MO.getReg();
+    if (Reg == 0) continue;
+
+    DEBUG(errs() << " " << TRI->getName(Reg) << "=g" << State->GetGroup(Reg)); 
+
+    // If MI's defs have a special allocation requirement, don't allow
+    // any def registers to be changed. Also assume all registers
+    // defined in a call must not be changed (ABI).
+    if (MI->getDesc().isCall() || MI->getDesc().hasExtraDefRegAllocReq()) {
+      DEBUG(if (State->GetGroup(Reg) != 0) errs() << "->g0(alloc-req)");
+      State->UnionGroups(Reg, 0);
+    }
+
+    // Any aliases that are live at this point are completely or
+    // partially defined here, so group those aliases with Reg.
+    for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias) {
+      unsigned AliasReg = *Alias;
+      if (State->IsLive(AliasReg)) {
+        State->UnionGroups(Reg, AliasReg);
+        DEBUG(errs() << "->g" << State->GetGroup(Reg) << "(via " << 
+              TRI->getName(AliasReg) << ")");
+      }
+    }
+    
+    // Note register reference...
+    const TargetRegisterClass *RC = NULL;
+    if (i < MI->getDesc().getNumOperands())
+      RC = MI->getDesc().OpInfo[i].getRegClass(TRI);
+    AggressiveAntiDepState::RegisterReference RR = { &MO, RC };
+    RegRefs.insert(std::make_pair(Reg, RR));
+  }
+
+  DEBUG(errs() << '\n');
+
+  // Scan the register defs for this instruction and update
+  // live-ranges.
+  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+    MachineOperand &MO = MI->getOperand(i);
+    if (!MO.isReg() || !MO.isDef()) continue;
+    unsigned Reg = MO.getReg();
+    if (Reg == 0) continue;
+    // Ignore KILLs and passthru registers for liveness...
+    if ((MI->getOpcode() == TargetInstrInfo::KILL) ||
+        (PassthruRegs.count(Reg) != 0))
+      continue;
+
+    // Update def for Reg and aliases.
+    DefIndices[Reg] = Count;
+    for (const unsigned *Alias = TRI->getAliasSet(Reg);
+         *Alias; ++Alias) {
+      unsigned AliasReg = *Alias;
+      DefIndices[AliasReg] = Count;
+    }
+  }
+}
+
+void AggressiveAntiDepBreaker::ScanInstruction(MachineInstr *MI,
+                                           unsigned Count) {
+  DEBUG(errs() << "\tUse Groups:");
+  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>& 
+    RegRefs = State->GetRegRefs();
+
+  // Scan the register uses for this instruction and update
+  // live-ranges, groups and RegRefs.
+  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+    MachineOperand &MO = MI->getOperand(i);
+    if (!MO.isReg() || !MO.isUse()) continue;
+    unsigned Reg = MO.getReg();
+    if (Reg == 0) continue;
+    
+    DEBUG(errs() << " " << TRI->getName(Reg) << "=g" << 
+          State->GetGroup(Reg)); 
+
+    // If it wasn't previously live but now is, this is a kill. Forget
+    // the previous live-range information and start a new live-range
+    // for the register.
+    HandleLastUse(Reg, Count, "(last-use)");
+
+    // If MI's uses have special allocation requirement, don't allow
+    // any use registers to be changed. Also assume all registers
+    // used in a call must not be changed (ABI).
+    if (MI->getDesc().isCall() || MI->getDesc().hasExtraSrcRegAllocReq()) {
+      DEBUG(if (State->GetGroup(Reg) != 0) errs() << "->g0(alloc-req)");
+      State->UnionGroups(Reg, 0);
+    }
+
+    // Note register reference...
+    const TargetRegisterClass *RC = NULL;
+    if (i < MI->getDesc().getNumOperands())
+      RC = MI->getDesc().OpInfo[i].getRegClass(TRI);
+    AggressiveAntiDepState::RegisterReference RR = { &MO, RC };
+    RegRefs.insert(std::make_pair(Reg, RR));
+  }
+  
+  DEBUG(errs() << '\n');
+
+  // Form a group of all defs and uses of a KILL instruction to ensure
+  // that all registers are renamed as a group.
+  if (MI->getOpcode() == TargetInstrInfo::KILL) {
+    DEBUG(errs() << "\tKill Group:");
+
+    unsigned FirstReg = 0;
+    for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+      MachineOperand &MO = MI->getOperand(i);
+      if (!MO.isReg()) continue;
+      unsigned Reg = MO.getReg();
+      if (Reg == 0) continue;
+      
+      if (FirstReg != 0) {
+        DEBUG(errs() << "=" << TRI->getName(Reg));
+        State->UnionGroups(FirstReg, Reg);
+      } else {
+        DEBUG(errs() << " " << TRI->getName(Reg));
+        FirstReg = Reg;
+      }
+    }
+  
+    DEBUG(errs() << "->g" << State->GetGroup(FirstReg) << '\n');
+  }
+}
+
+BitVector AggressiveAntiDepBreaker::GetRenameRegisters(unsigned Reg) {
+  BitVector BV(TRI->getNumRegs(), false);
+  bool first = true;
+
+  // Check all references that need rewriting for Reg. For each, use
+  // the corresponding register class to narrow the set of registers
+  // that are appropriate for renaming.
+  std::pair<std::multimap<unsigned, 
+                     AggressiveAntiDepState::RegisterReference>::iterator,
+            std::multimap<unsigned,
+                     AggressiveAntiDepState::RegisterReference>::iterator>
+    Range = State->GetRegRefs().equal_range(Reg);
+  for (std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>::iterator
+         Q = Range.first, QE = Range.second; Q != QE; ++Q) {
+    const TargetRegisterClass *RC = Q->second.RC;
+    if (RC == NULL) continue;
+
+    BitVector RCBV = TRI->getAllocatableSet(MF, RC);
+    if (first) {
+      BV |= RCBV;
+      first = false;
+    } else {
+      BV &= RCBV;
+    }
+
+    DEBUG(errs() << " " << RC->getName());
+  }
+  
+  return BV;
+}  
+
+bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
+                                unsigned AntiDepGroupIndex,
+                                RenameOrderType& RenameOrder,
+                                std::map<unsigned, unsigned> &RenameMap) {
+  unsigned *KillIndices = State->GetKillIndices();
+  unsigned *DefIndices = State->GetDefIndices();
+  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>& 
+    RegRefs = State->GetRegRefs();
+
+  // Collect all referenced registers in the same group as
+  // AntiDepReg. These all need to be renamed together if we are to
+  // break the anti-dependence.
+  std::vector<unsigned> Regs;
+  State->GetGroupRegs(AntiDepGroupIndex, Regs, &RegRefs);
+  assert(Regs.size() > 0 && "Empty register group!");
+  if (Regs.size() == 0)
+    return false;
+
+  // Find the "superest" register in the group. At the same time,
+  // collect the BitVector of registers that can be used to rename
+  // each register.
+  DEBUG(errs() << "\tRename Candidates for Group g" << AntiDepGroupIndex << ":\n");
+  std::map<unsigned, BitVector> RenameRegisterMap;
+  unsigned SuperReg = 0;
+  for (unsigned i = 0, e = Regs.size(); i != e; ++i) {
+    unsigned Reg = Regs[i];
+    if ((SuperReg == 0) || TRI->isSuperRegister(SuperReg, Reg))
+      SuperReg = Reg;
+
+    // If Reg has any references, then collect possible rename regs
+    if (RegRefs.count(Reg) > 0) {
+      DEBUG(errs() << "\t\t" << TRI->getName(Reg) << ":");
+    
+      BitVector BV = GetRenameRegisters(Reg);
+      RenameRegisterMap.insert(std::pair<unsigned, BitVector>(Reg, BV));
+
+      DEBUG(errs() << " ::");
+      DEBUG(for (int r = BV.find_first(); r != -1; r = BV.find_next(r))
+              errs() << " " << TRI->getName(r));
+      DEBUG(errs() << "\n");
+    }
+  }
+
+  // All group registers should be a subreg of SuperReg.
+  for (unsigned i = 0, e = Regs.size(); i != e; ++i) {
+    unsigned Reg = Regs[i];
+    if (Reg == SuperReg) continue;
+    bool IsSub = TRI->isSubRegister(SuperReg, Reg);
+    assert(IsSub && "Expecting group subregister");
+    if (!IsSub)
+      return false;
+  }
+
+#ifndef NDEBUG
+  // If DebugDiv > 0 then only rename (renamecnt % DebugDiv) == DebugMod
+  if (DebugDiv > 0) {
+    static int renamecnt = 0;
+    if (renamecnt++ % DebugDiv != DebugMod)
+      return false;
+    
+    errs() << "*** Performing rename " << TRI->getName(SuperReg) <<
+      " for debug ***\n";
+  }
+#endif
+
+  // Check each possible rename register for SuperReg in round-robin
+  // order. If that register is available, and the corresponding
+  // registers are available for the other group subregisters, then we
+  // can use those registers to rename.
+  const TargetRegisterClass *SuperRC = 
+    TRI->getPhysicalRegisterRegClass(SuperReg, MVT::Other);
+  
+  const TargetRegisterClass::iterator RB = SuperRC->allocation_order_begin(MF);
+  const TargetRegisterClass::iterator RE = SuperRC->allocation_order_end(MF);
+  if (RB == RE) {
+    DEBUG(errs() << "\tEmpty Super Regclass!!\n");
+    return false;
+  }
+
+  DEBUG(errs() << "\tFind Registers:");
+
+  if (RenameOrder.count(SuperRC) == 0)
+    RenameOrder.insert(RenameOrderType::value_type(SuperRC, RE));
+
+  const TargetRegisterClass::iterator OrigR = RenameOrder[SuperRC];
+  const TargetRegisterClass::iterator EndR = ((OrigR == RE) ? RB : OrigR);
+  TargetRegisterClass::iterator R = OrigR;
+  do {
+    if (R == RB) R = RE;
+    --R;
+    const unsigned NewSuperReg = *R;
+    // Don't replace a register with itself.
+    if (NewSuperReg == SuperReg) continue;
+    
+    DEBUG(errs() << " [" << TRI->getName(NewSuperReg) << ':');
+    RenameMap.clear();
+
+    // For each referenced group register (which must be a SuperReg or
+    // a subregister of SuperReg), find the corresponding subregister
+    // of NewSuperReg and make sure it is free to be renamed.
+    for (unsigned i = 0, e = Regs.size(); i != e; ++i) {
+      unsigned Reg = Regs[i];
+      unsigned NewReg = 0;
+      if (Reg == SuperReg) {
+        NewReg = NewSuperReg;
+      } else {
+        unsigned NewSubRegIdx = TRI->getSubRegIndex(SuperReg, Reg);
+        if (NewSubRegIdx != 0)
+          NewReg = TRI->getSubReg(NewSuperReg, NewSubRegIdx);
+      }
+
+      DEBUG(errs() << " " << TRI->getName(NewReg));
+      
+      // Check if Reg can be renamed to NewReg.
+      BitVector BV = RenameRegisterMap[Reg];
+      if (!BV.test(NewReg)) {
+        DEBUG(errs() << "(no rename)");
+        goto next_super_reg;
+      }
+
+      // If NewReg is dead and NewReg's most recent def is not before
+      // Reg's kill, it's safe to replace Reg with NewReg. We
+      // must also check all aliases of NewReg, because we can't define a
+      // register when any sub or super is already live.
+      if (State->IsLive(NewReg) || (KillIndices[Reg] > DefIndices[NewReg])) {
+        DEBUG(errs() << "(live)");
+        goto next_super_reg;
+      } else {
+        bool found = false;
+        for (const unsigned *Alias = TRI->getAliasSet(NewReg);
+             *Alias; ++Alias) {
+          unsigned AliasReg = *Alias;
+          if (State->IsLive(AliasReg) || (KillIndices[Reg] > DefIndices[AliasReg])) {
+            DEBUG(errs() << "(alias " << TRI->getName(AliasReg) << " live)");
+            found = true;
+            break;
+          }
+        }
+        if (found)
+          goto next_super_reg;
+      }
+      
+      // Record that 'Reg' can be renamed to 'NewReg'.
+      RenameMap.insert(std::pair<unsigned, unsigned>(Reg, NewReg));
+    }
+    
+    // If we fall-out here, then every register in the group can be
+    // renamed, as recorded in RenameMap.
+    RenameOrder.erase(SuperRC);
+    RenameOrder.insert(RenameOrderType::value_type(SuperRC, R));
+    DEBUG(errs() << "]\n");
+    return true;
+
+  next_super_reg:
+    DEBUG(errs() << ']');
+  } while (R != EndR);
+
+  DEBUG(errs() << '\n');
+
+  // No registers are free and available!
+  return false;
+}
+
+/// BreakAntiDependencies - Identify anti-dependencies within the
+/// ScheduleDAG and break them by renaming registers.
+///
+unsigned AggressiveAntiDepBreaker::BreakAntiDependencies(
+                              std::vector<SUnit>& SUnits,
+                              MachineBasicBlock::iterator& Begin,
+                              MachineBasicBlock::iterator& End,
+                              unsigned InsertPosIndex) {
+  unsigned *KillIndices = State->GetKillIndices();
+  unsigned *DefIndices = State->GetDefIndices();
+  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>& 
+    RegRefs = State->GetRegRefs();
+
+  // The code below assumes that there is at least one instruction,
+  // so just duck out immediately if the block is empty.
+  if (SUnits.empty()) return 0;
+  
+  // For each regclass the next register to use for renaming.
+  RenameOrderType RenameOrder;
+
+  // We need a map from MI to SUnit.
+  std::map<MachineInstr *, SUnit *> MISUnitMap;
+  for (unsigned i = 0, e = SUnits.size(); i != e; ++i) {
+    SUnit *SU = &SUnits[i];
+    MISUnitMap.insert(std::pair<MachineInstr *, SUnit *>(SU->getInstr(), SU));
+  }
+
+  // Track progress along the critical path through the SUnit graph as
+  // we walk the instructions. This is needed for regclasses that only
+  // break critical-path anti-dependencies.
+  SUnit *CriticalPathSU = 0;
+  MachineInstr *CriticalPathMI = 0;
+  if (CriticalPathSet.any()) {
+    for (unsigned i = 0, e = SUnits.size(); i != e; ++i) {
+      SUnit *SU = &SUnits[i];
+      if (!CriticalPathSU || 
+          ((SU->getDepth() + SU->Latency) > 
+           (CriticalPathSU->getDepth() + CriticalPathSU->Latency))) {
+        CriticalPathSU = SU;
+      }
+    }
+    
+    CriticalPathMI = CriticalPathSU->getInstr();
+  }
+
+#ifndef NDEBUG 
+  DEBUG(errs() << "\n===== Aggressive anti-dependency breaking\n");
+  DEBUG(errs() << "Available regs:");
+  for (unsigned Reg = 0; Reg < TRI->getNumRegs(); ++Reg) {
+    if (!State->IsLive(Reg))
+      DEBUG(errs() << " " << TRI->getName(Reg));
+  }
+  DEBUG(errs() << '\n');
+#endif
+
+  // Attempt to break anti-dependence edges. Walk the instructions
+  // from the bottom up, tracking information about liveness as we go
+  // to help determine which registers are available.
+  unsigned Broken = 0;
+  unsigned Count = InsertPosIndex - 1;
+  for (MachineBasicBlock::iterator I = End, E = Begin;
+       I != E; --Count) {
+    MachineInstr *MI = --I;
+
+    DEBUG(errs() << "Anti: ");
+    DEBUG(MI->dump());
+
+    std::set<unsigned> PassthruRegs;
+    GetPassthruRegs(MI, PassthruRegs);
+
+    // Process the defs in MI...
+    PrescanInstruction(MI, Count, PassthruRegs);
+    
+    // The dependence edges that represent anti- and output-
+    // dependencies that are candidates for breaking.
+    std::vector<SDep*> Edges;
+    SUnit *PathSU = MISUnitMap[MI];
+    AntiDepEdges(PathSU, Edges);
+
+    // If MI is not on the critical path, then we don't rename
+    // registers in the CriticalPathSet.
+    BitVector *ExcludeRegs = NULL;
+    if (MI == CriticalPathMI) {
+      CriticalPathSU = CriticalPathStep(CriticalPathSU);
+      CriticalPathMI = (CriticalPathSU) ? CriticalPathSU->getInstr() : 0;
+    } else { 
+      ExcludeRegs = &CriticalPathSet;
+    }
+
+    // Ignore KILL instructions (they form a group in ScanInstruction
+    // but don't cause any anti-dependence breaking themselves)
+    if (MI->getOpcode() != TargetInstrInfo::KILL) {
+      // Attempt to break each anti-dependency...
+      for (unsigned i = 0, e = Edges.size(); i != e; ++i) {
+        SDep *Edge = Edges[i];
+        SUnit *NextSU = Edge->getSUnit();
+        
+        if ((Edge->getKind() != SDep::Anti) &&
+            (Edge->getKind() != SDep::Output)) continue;
+        
+        unsigned AntiDepReg = Edge->getReg();
+        DEBUG(errs() << "\tAntidep reg: " << TRI->getName(AntiDepReg));
+        assert(AntiDepReg != 0 && "Anti-dependence on reg0?");
+        
+        if (!AllocatableSet.test(AntiDepReg)) {
+          // Don't break anti-dependencies on non-allocatable registers.
+          DEBUG(errs() << " (non-allocatable)\n");
+          continue;
+        } else if ((ExcludeRegs != NULL) && ExcludeRegs->test(AntiDepReg)) {
+          // Don't break anti-dependencies for critical path registers
+          // if not on the critical path
+          DEBUG(errs() << " (not critical-path)\n");
+          continue;
+        } else if (PassthruRegs.count(AntiDepReg) != 0) {
+          // If the anti-dep register liveness "passes-thru", then
+          // don't try to change it. It will be changed along with
+          // the use if required to break an earlier antidep.
+          DEBUG(errs() << " (passthru)\n");
+          continue;
+        } else {
+          // No anti-dep breaking for implicit deps
+          MachineOperand *AntiDepOp = MI->findRegisterDefOperand(AntiDepReg);
+          assert(AntiDepOp != NULL && "Can't find index for defined register operand");
+          if ((AntiDepOp == NULL) || AntiDepOp->isImplicit()) {
+            DEBUG(errs() << " (implicit)\n");
+            continue;
+          }
+          
+          // If the SUnit has other dependencies on the SUnit that
+          // it anti-depends on, don't bother breaking the
+          // anti-dependency since those edges would prevent such
+          // units from being scheduled past each other
+          // regardless.
+          //
+          // Also, if there are dependencies on other SUnits with the
+          // same register as the anti-dependency, don't attempt to
+          // break it.
+          for (SUnit::pred_iterator P = PathSU->Preds.begin(),
+                 PE = PathSU->Preds.end(); P != PE; ++P) {
+            if (P->getSUnit() == NextSU ?
+                (P->getKind() != SDep::Anti || P->getReg() != AntiDepReg) :
+                (P->getKind() == SDep::Data && P->getReg() == AntiDepReg)) {
+              AntiDepReg = 0;
+              break;
+            }
+          }
+          for (SUnit::pred_iterator P = PathSU->Preds.begin(),
+                 PE = PathSU->Preds.end(); P != PE; ++P) {
+            if ((P->getSUnit() == NextSU) && (P->getKind() != SDep::Anti) &&
+                (P->getKind() != SDep::Output)) {
+              DEBUG(errs() << " (real dependency)\n");
+              AntiDepReg = 0;
+              break;
+            } else if ((P->getSUnit() != NextSU) && 
+                       (P->getKind() == SDep::Data) && 
+                       (P->getReg() == AntiDepReg)) {
+              DEBUG(errs() << " (other dependency)\n");
+              AntiDepReg = 0;
+              break;
+            }
+          }
+          
+          if (AntiDepReg == 0) continue;
+        }
+        
+        assert(AntiDepReg != 0);
+        if (AntiDepReg == 0) continue;
+        
+        // Determine AntiDepReg's register group.
+        const unsigned GroupIndex = State->GetGroup(AntiDepReg);
+        if (GroupIndex == 0) {
+          DEBUG(errs() << " (zero group)\n");
+          continue;
+        }
+        
+        DEBUG(errs() << '\n');
+        
+        // Look for a suitable register to use to break the anti-dependence.
+        std::map<unsigned, unsigned> RenameMap;
+        if (FindSuitableFreeRegisters(GroupIndex, RenameOrder, RenameMap)) {
+          DEBUG(errs() << "\tBreaking anti-dependence edge on "
+                << TRI->getName(AntiDepReg) << ":");
+          
+          // Handle each group register...
+          for (std::map<unsigned, unsigned>::iterator
+                 S = RenameMap.begin(), E = RenameMap.end(); S != E; ++S) {
+            unsigned CurrReg = S->first;
+            unsigned NewReg = S->second;
+            
+            DEBUG(errs() << " " << TRI->getName(CurrReg) << "->" << 
+                  TRI->getName(NewReg) << "(" <<  
+                  RegRefs.count(CurrReg) << " refs)");
+            
+            // Update the references to the old register CurrReg to
+            // refer to the new register NewReg.
+            std::pair<std::multimap<unsigned, 
+                              AggressiveAntiDepState::RegisterReference>::iterator,
+                      std::multimap<unsigned,
+                              AggressiveAntiDepState::RegisterReference>::iterator>
+              Range = RegRefs.equal_range(CurrReg);
+            for (std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>::iterator
+                   Q = Range.first, QE = Range.second; Q != QE; ++Q) {
+              Q->second.Operand->setReg(NewReg);
+            }
+            
+            // We just went back in time and modified history; the
+            // liveness information for CurrReg is now inconsistent. Set
+            // the state as if it were dead.
+            State->UnionGroups(NewReg, 0);
+            RegRefs.erase(NewReg);
+            DefIndices[NewReg] = DefIndices[CurrReg];
+            KillIndices[NewReg] = KillIndices[CurrReg];
+            
+            State->UnionGroups(CurrReg, 0);
+            RegRefs.erase(CurrReg);
+            DefIndices[CurrReg] = KillIndices[CurrReg];
+            KillIndices[CurrReg] = ~0u;
+            assert(((KillIndices[CurrReg] == ~0u) !=
+                    (DefIndices[CurrReg] == ~0u)) &&
+                   "Kill and Def maps aren't consistent for AntiDepReg!");
+          }
+          
+          ++Broken;
+          DEBUG(errs() << '\n');
+        }
+      }
+    }
+
+    ScanInstruction(MI, Count);
+  }
+  
+  return Broken;
+}
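
The register-group machinery in this new file is a small disjoint-set
union: GroupNodeIndices maps each register to a node, GetGroup chases
parent links to the representative, UnionGroups links two groups (group 0,
the never-rename group, always ends up as the parent), and LeaveGroup
splits a register out into a fresh singleton. Stripped to its core,
assuming plain std::vector storage (a sketch, not the class itself):

    #include <vector>

    std::vector<unsigned> Nodes;  // Nodes[i]: parent of node i; root if == i

    unsigned find(unsigned N) {                // cf. GetGroup
      while (Nodes[N] != N)
        N = Nodes[N];
      return N;
    }

    unsigned unite(unsigned A, unsigned B) {   // cf. UnionGroups
      unsigned GA = find(A), GB = find(B);
      unsigned Parent = (GA == 0) ? GA : GB;   // group 0 must stay the root
      Nodes[Parent == GA ? GB : GA] = Parent;
      return Parent;
    }

    unsigned leave() {                         // cf. LeaveGroup: new singleton
      unsigned Idx = Nodes.size();
      Nodes.push_back(Idx);
      return Idx;
    }
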
diff --git a/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.h b/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.h
new file mode 100644
index 0000000..8154d2d
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.h
@@ -0,0 +1,177 @@
+//=- llvm/CodeGen/AggressiveAntiDepBreaker.h - Anti-Dep Support -*- C++ -*-=//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements the AggressiveAntiDepBreaker class, which
+// implements register anti-dependence breaking during post-RA
+// scheduling. It attempts to break all anti-dependencies within a
+// block.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_CODEGEN_AGGRESSIVEANTIDEPBREAKER_H
+#define LLVM_CODEGEN_AGGRESSIVEANTIDEPBREAKER_H
+
+#include "AntiDepBreaker.h"
+#include "llvm/CodeGen/MachineBasicBlock.h"
+#include "llvm/CodeGen/MachineFrameInfo.h"
+#include "llvm/CodeGen/MachineFunction.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/ScheduleDAG.h"
+#include "llvm/Target/TargetSubtarget.h"
+#include "llvm/Target/TargetRegisterInfo.h"
+#include "llvm/ADT/BitVector.h"
+#include "llvm/ADT/SmallSet.h"
+#include <map>
+
+namespace llvm {
+  /// Class AggressiveAntiDepState 
+  /// Contains all the state necessary for anti-dep breaking.
+  class AggressiveAntiDepState {
+  public:
+    /// RegisterReference - Information about a register reference
+    /// within a live range
+    typedef struct {
+      /// Operand - The register operand
+      MachineOperand *Operand;
+      /// RC - The register class
+      const TargetRegisterClass *RC;
+    } RegisterReference;
+
+  private:
+    /// GroupNodes - Implements a disjoint-union data structure to
+    /// form register groups. A node is represented by an index into
+    /// the vector. A node can "point to" itself to indicate that it
+    /// is the parent of a group, or point to another node to indicate
+    /// that it is a member of the same group as that node.
+    std::vector<unsigned> GroupNodes;
+  
+    /// GroupNodeIndices - For each register, the index of the GroupNode
+    /// currently representing the group that the register belongs to.
+    /// Register 0 is always represented by the 0 group, a group
+    /// composed of registers that are not eligible for anti-aliasing.
+    unsigned GroupNodeIndices[TargetRegisterInfo::FirstVirtualRegister];
+  
+    /// RegRefs - Map registers to all their references within a live range.
+    std::multimap<unsigned, RegisterReference> RegRefs;
+  
+    /// KillIndices - The index of the most recent kill (proceeding bottom-up),
+    /// or ~0u if the register is not live.
+    unsigned KillIndices[TargetRegisterInfo::FirstVirtualRegister];
+  
+    /// DefIndices - The index of the most recent complete def (proceeding
+    /// bottom-up), or ~0u if the register is live.
+    unsigned DefIndices[TargetRegisterInfo::FirstVirtualRegister];
+
+  public:
+    AggressiveAntiDepState(MachineBasicBlock *BB);
+    
+    /// GetKillIndices - Return the kill indices.
+    unsigned *GetKillIndices() { return KillIndices; }
+
+    /// GetDefIndices - Return the define indices.
+    unsigned *GetDefIndices() { return DefIndices; }
+
+    /// GetRegRefs - Return the RegRefs map.
+    std::multimap<unsigned, RegisterReference>& GetRegRefs() { return RegRefs; }
+
+    // GetGroup - Get the group for a register. The returned value is
+    // the index of the GroupNode representing the group.
+    unsigned GetGroup(unsigned Reg);
+    
+    // GetGroupRegs - Return a vector of the registers belonging to a
+    // group. If RegRefs is non-NULL then only include referenced registers.
+    void GetGroupRegs(
+       unsigned Group,
+       std::vector<unsigned> &Regs,
+       std::multimap<unsigned, AggressiveAntiDepState::RegisterReference> *RegRefs);
+
+    // UnionGroups - Union Reg1's and Reg2's groups to form a new
+    // group. Return the index of the GroupNode representing the
+    // group.
+    unsigned UnionGroups(unsigned Reg1, unsigned Reg2);
+
+    // LeaveGroup - Remove a register from its current group and place
+    // it alone in its own group. Return the index of the GroupNode
+    // representing the register's new group.
+    unsigned LeaveGroup(unsigned Reg);
+
+    /// IsLive - Return true if Reg is live
+    bool IsLive(unsigned Reg);
+  };
+
+
+  /// Class AggressiveAntiDepBreaker 
+  class AggressiveAntiDepBreaker : public AntiDepBreaker {
+    MachineFunction& MF;
+    MachineRegisterInfo &MRI;
+    const TargetRegisterInfo *TRI;
+
+    /// AllocatableSet - The set of allocatable registers.
+    /// We'll be ignoring anti-dependencies on non-allocatable registers,
+    /// because they may not be safe to break.
+    const BitVector AllocatableSet;
+
+    /// CriticalPathSet - The set of registers that should only be
+    /// renamed if they are on the critical path.
+    BitVector CriticalPathSet;
+
+    /// State - The state used to identify and rename anti-dependence
+    /// registers.
+    AggressiveAntiDepState *State;
+
+  public:
+    AggressiveAntiDepBreaker(MachineFunction& MFi, 
+                             TargetSubtarget::RegClassVector& CriticalPathRCs);
+    ~AggressiveAntiDepBreaker();
+    
+    /// StartBlock - Initialize anti-dep breaking for a new basic block.
+    void StartBlock(MachineBasicBlock *BB);
+
+    /// BreakAntiDependencies - Identify anti-dependencies along the critical path
+    /// of the ScheduleDAG and break them by renaming registers.
+    ///
+    unsigned BreakAntiDependencies(std::vector<SUnit>& SUnits,
+                                   MachineBasicBlock::iterator& Begin,
+                                   MachineBasicBlock::iterator& End,
+                                   unsigned InsertPosIndex);
+
+    /// Observe - Update liveness information to account for the current
+    /// instruction, which will not be scheduled.
+    ///
+    void Observe(MachineInstr *MI, unsigned Count, unsigned InsertPosIndex);
+
+    /// FinishBlock - Finish anti-dep breaking for a basic block.
+    void FinishBlock();
+
+  private:
+    typedef std::map<const TargetRegisterClass *,
+                     TargetRegisterClass::const_iterator> RenameOrderType;
+
+    /// IsImplicitDefUse - Return true if MO represents a register
+    /// that is both implicitly used and defined in MI
+    bool IsImplicitDefUse(MachineInstr *MI, MachineOperand& MO);
+    
+    /// GetPassthruRegs - If MI implicitly def/uses a register, then
+    /// return that register and all subregisters.
+    void GetPassthruRegs(MachineInstr *MI, std::set<unsigned>& PassthruRegs);
+
+    void HandleLastUse(unsigned Reg, unsigned KillIdx, const char *tag,
+                       const char *header =NULL, const char *footer =NULL);
+
+    void PrescanInstruction(MachineInstr *MI, unsigned Count,
+                            std::set<unsigned>& PassthruRegs);
+    void ScanInstruction(MachineInstr *MI, unsigned Count);
+    BitVector GetRenameRegisters(unsigned Reg);
+    bool FindSuitableFreeRegisters(unsigned AntiDepGroupIndex,
+                                   RenameOrderType& RenameOrder,
+                                   std::map<unsigned, unsigned> &RenameMap);
+  };
+}
+
+#endif
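
The GroupNodes/GroupNodeIndices comments above describe a disjoint-union
(union-find) structure: a node that points to itself is the parent of a
group, and any other node points at another member of its group. A
simplified standalone sketch of that representation (hypothetical helper
names, no path compression):

    #include <cstdio>
    #include <vector>

    // Find the root of N's group by chasing parent pointers.
    static unsigned FindGroup(const std::vector<unsigned> &Nodes, unsigned N) {
      while (Nodes[N] != N)
        N = Nodes[N];
      return N;
    }

    // Merge the groups of A and B; the surviving root represents both.
    static unsigned UnionGroups(std::vector<unsigned> &Nodes,
                                unsigned A, unsigned B) {
      unsigned RA = FindGroup(Nodes, A), RB = FindGroup(Nodes, B);
      Nodes[RA] = RB;
      return RB;
    }

    int main() {
      std::vector<unsigned> Nodes;
      for (unsigned i = 0; i != 8; ++i)
        Nodes.push_back(i);             // every node starts as its own group
      UnionGroups(Nodes, 2, 5);
      std::printf("2 -> group %u, 5 -> group %u\n",
                  FindGroup(Nodes, 2), FindGroup(Nodes, 5));
      return 0;
    }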
diff --git a/libclamav/c++/llvm/lib/CodeGen/AntiDepBreaker.h b/libclamav/c++/llvm/lib/CodeGen/AntiDepBreaker.h
new file mode 100644
index 0000000..3ee30c6
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/AntiDepBreaker.h
@@ -0,0 +1,59 @@
+//=- llvm/CodeGen/AntiDepBreaker.h - Anti-Dependence Breaking -*- C++ -*-=//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements the AntiDepBreaker class, which implements
+// anti-dependence breaking heuristics for post-register-allocation scheduling.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_CODEGEN_ANTIDEPBREAKER_H
+#define LLVM_CODEGEN_ANTIDEPBREAKER_H
+
+#include "llvm/CodeGen/MachineBasicBlock.h"
+#include "llvm/CodeGen/MachineFrameInfo.h"
+#include "llvm/CodeGen/MachineFunction.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/ScheduleDAG.h"
+#include "llvm/Target/TargetRegisterInfo.h"
+#include <vector>
+
+namespace llvm {
+
+/// AntiDepBreaker - This class works in conjunction with the
+/// post-RA scheduler to rename registers to break register
+/// anti-dependencies.
+class AntiDepBreaker {
+public:
+  virtual ~AntiDepBreaker();
+
+  /// StartBlock - Initialize anti-dep breaking for a new basic block.
+  virtual void StartBlock(MachineBasicBlock *BB) =0;
+
+  /// BreakAntiDependencies - Identify anti-dependencies within a
+  /// basic-block region and break them by renaming registers. Return
+  /// the number of anti-dependencies broken.
+  ///
+  virtual unsigned BreakAntiDependencies(std::vector<SUnit>& SUnits,
+                                MachineBasicBlock::iterator& Begin,
+                                MachineBasicBlock::iterator& End,
+                                unsigned InsertPosIndex) =0;
+  
+  /// Observe - Update liveness information to account for the current
+  /// instruction, which will not be scheduled.
+  ///
+  virtual void Observe(MachineInstr *MI, unsigned Count,
+                       unsigned InsertPosIndex) =0;
+  
+  /// FinishBlock - Finish anti-dep breaking for a basic block.
+  virtual void FinishBlock() =0;
+};
+
+}
+
+#endif
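
The post-RA scheduler drives the virtual methods above in a fixed order:
StartBlock once per basic block, BreakAntiDependencies over each
scheduling region, Observe for instructions that are not scheduled, and
FinishBlock to close the block out. A toy stand-in showing that call
order (LLVM types replaced with ints so the sketch is self-contained;
purely illustrative):

    #include <cstdio>

    class ToyAntiDepBreaker {
    public:
      void StartBlock(int BB) { std::printf("StartBlock(%d)\n", BB); }
      unsigned BreakAntiDependencies() {
        std::printf("BreakAntiDependencies()\n");
        return 0;                        // number of anti-dependencies broken
      }
      void Observe(int MI) { std::printf("Observe(%d)\n", MI); }
      void FinishBlock() { std::printf("FinishBlock()\n"); }
    };

    int main() {
      ToyAntiDepBreaker ADB;
      ADB.StartBlock(0);                              // once per block
      unsigned Broken = ADB.BreakAntiDependencies();  // per region
      ADB.Observe(1);                                 // unscheduled instrs
      ADB.FinishBlock();                              // close out the block
      std::printf("broken: %u\n", Broken);
      return 0;
    }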
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
index 7e83473..993cdbf 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
@@ -18,6 +18,7 @@
 #include "llvm/Module.h"
 #include "llvm/CodeGen/GCMetadataPrinter.h"
 #include "llvm/CodeGen/MachineConstantPool.h"
+#include "llvm/CodeGen/MachineFrameInfo.h"
 #include "llvm/CodeGen/MachineFunction.h"
 #include "llvm/CodeGen/MachineJumpTableInfo.h"
 #include "llvm/CodeGen/MachineLoopInfo.h"
@@ -35,6 +36,7 @@
 #include "llvm/Support/Mangler.h"
 #include "llvm/MC/MCAsmInfo.h"
 #include "llvm/Target/TargetData.h"
+#include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetLowering.h"
 #include "llvm/Target/TargetLoweringObjectFile.h"
 #include "llvm/Target/TargetOptions.h"
@@ -512,7 +514,7 @@ void AsmPrinter::EmitXXStructorList(Constant *List) {
 //===----------------------------------------------------------------------===//
 /// LEB 128 number encoding.
 
-/// PrintULEB128 - Print a series of hexidecimal values (separated by commas)
+/// PrintULEB128 - Print a series of hexadecimal values (separated by commas)
 /// representing an unsigned leb128 value.
 void AsmPrinter::PrintULEB128(unsigned Value) const {
   char Buffer[20];
@@ -525,7 +527,7 @@ void AsmPrinter::PrintULEB128(unsigned Value) const {
   } while (Value);
 }
 
-/// PrintSLEB128 - Print a series of hexidecimal values (separated by commas)
+/// PrintSLEB128 - Print a series of hexadecimal values (separated by commas)
 /// representing a signed leb128 value.
 void AsmPrinter::PrintSLEB128(int Value) const {
   int Sign = Value >> (8 * sizeof(Value) - 1);
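
For reference, PrintULEB128's loop emits the value seven bits at a time,
low bits first, setting bit 7 on every byte except the last. A standalone
sketch of the same encoding (not the patch's code), using the DWARF
specification's standard example value:

    #include <cstdio>

    int main() {
      unsigned Value = 624485;             // encodes as 0xE5 0x8E 0x26
      do {
        unsigned char Byte = Value & 0x7f; // low seven bits
        Value >>= 7;
        if (Value)
          Byte |= 0x80;                    // continuation bit: more to come
        std::printf("0x%02X ", Byte);
      } while (Value);
      std::printf("\n");
      return 0;
    }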
@@ -546,7 +548,7 @@ void AsmPrinter::PrintSLEB128(int Value) const {
 // Emission and print routines
 //
 
-/// PrintHex - Print a value as a hexidecimal value.
+/// PrintHex - Print a value as a hexadecimal value.
 ///
 void AsmPrinter::PrintHex(int Value) const { 
   char Buffer[20];
@@ -726,8 +728,8 @@ static void printStringChar(formatted_raw_ostream &O, unsigned char C) {
 /// EmitString - Emit a string with quotes and a null terminator.
 /// Special characters are emitted properly.
 /// \literal (Eg. '\t') \endliteral
-void AsmPrinter::EmitString(const std::string &String) const {
-  EmitString(String.c_str(), String.size());
+void AsmPrinter::EmitString(const StringRef String) const {
+  EmitString(String.data(), String.size());
 }
 
 void AsmPrinter::EmitString(const char *String, unsigned Size) const {
@@ -919,6 +921,8 @@ void AsmPrinter::EmitConstantValueOnly(const Constant *CV) {
     default:
       llvm_unreachable("Unsupported operator!");
     }
+  } else if (const BlockAddress *BA = dyn_cast<BlockAddress>(CV)) {
+    GetBlockAddressSymbol(BA)->print(O, MAI);
   } else {
     llvm_unreachable("Unknown constant value!");
   }
@@ -1006,7 +1010,7 @@ void AsmPrinter::EmitGlobalConstantFP(const ConstantFP *CFP,
   // precision...
   LLVMContext &Context = CFP->getContext();
   const TargetData *TD = TM.getTargetData();
-  if (CFP->getType() == Type::getDoubleTy(Context)) {
+  if (CFP->getType()->isDoubleTy()) {
     double Val = CFP->getValueAPF().convertToDouble();  // for comment only
     uint64_t i = CFP->getValueAPF().bitcastToAPInt().getZExtValue();
     if (MAI->getData64bitsDirective(AddrSpace)) {
@@ -1048,7 +1052,9 @@ void AsmPrinter::EmitGlobalConstantFP(const ConstantFP *CFP,
       O << '\n';
     }
     return;
-  } else if (CFP->getType() == Type::getFloatTy(Context)) {
+  }
+  
+  if (CFP->getType()->isFloatTy()) {
     float Val = CFP->getValueAPF().convertToFloat();  // for comment only
     O << MAI->getData32bitsDirective(AddrSpace)
       << CFP->getValueAPF().bitcastToAPInt().getZExtValue();
@@ -1058,7 +1064,9 @@ void AsmPrinter::EmitGlobalConstantFP(const ConstantFP *CFP,
     }
     O << '\n';
     return;
-  } else if (CFP->getType() == Type::getX86_FP80Ty(Context)) {
+  }
+  
+  if (CFP->getType()->isX86_FP80Ty()) {
     // all long double variants are printed as hex
     // api needed to prevent premature destruction
     APInt api = CFP->getValueAPF().bitcastToAPInt();
@@ -1143,7 +1151,9 @@ void AsmPrinter::EmitGlobalConstantFP(const ConstantFP *CFP,
     EmitZeros(TD->getTypeAllocSize(Type::getX86_FP80Ty(Context)) -
               TD->getTypeStoreSize(Type::getX86_FP80Ty(Context)), AddrSpace);
     return;
-  } else if (CFP->getType() == Type::getPPC_FP128Ty(Context)) {
+  }
+  
+  if (CFP->getType()->isPPC_FP128Ty()) {
     // all long double variants are printed as hex
     // api needed to prevent premature destruction
     APInt api = CFP->getValueAPF().bitcastToAPInt();
@@ -1347,25 +1357,33 @@ void AsmPrinter::PrintSpecial(const MachineInstr *MI, const char *Code) const {
 
 /// processDebugLoc - Processes the debug information of each machine
 /// instruction's DebugLoc.
-void AsmPrinter::processDebugLoc(const MachineInstr *MI) {
-  if (!MAI || !DW)
+void AsmPrinter::processDebugLoc(const MachineInstr *MI, 
+                                 bool BeforePrintingInsn) {
+  if (!MAI || !DW || !MAI->doesSupportDebugInformation()
+      || !DW->ShouldEmitDwarfDebug())
     return;
   DebugLoc DL = MI->getDebugLoc();
-  if (MAI->doesSupportDebugInformation() && DW->ShouldEmitDwarfDebug()) {
-    if (!DL.isUnknown()) {
-      DebugLocTuple CurDLT = MF->getDebugLocTuple(DL);
-
-      if (CurDLT.CompileUnit != 0 && PrevDLT != CurDLT) {
-        printLabel(DW->RecordSourceLine(CurDLT.Line, CurDLT.Col, 
-                                        CurDLT.CompileUnit));
-        O << '\n';
-      }
+  if (DL.isUnknown())
+    return;
+  DebugLocTuple CurDLT = MF->getDebugLocTuple(DL);
+  if (CurDLT.Scope == 0)
+    return;
 
+  if (BeforePrintingInsn) {
+    if (CurDLT != PrevDLT) {
+      unsigned L = DW->RecordSourceLine(CurDLT.Line, CurDLT.Col,
+                                        CurDLT.Scope);
+      printLabel(L);
+      DW->BeginScope(MI, L);
       PrevDLT = CurDLT;
     }
+  } else {
+    // After printing instruction
+    DW->EndScope(MI);
   }
 }
 
+
 /// printInlineAsm - This method formats and prints the specified machine
 /// instruction that is an inline asm.
 void AsmPrinter::printInlineAsm(const MachineInstr *MI) const {
@@ -1382,6 +1400,8 @@ void AsmPrinter::printInlineAsm(const MachineInstr *MI) const {
   // Disassemble the AsmStr, printing out the literal pieces, the operands, etc.
   const char *AsmStr = MI->getOperand(NumDefs).getSymbolName();
 
+  O << '\t';
+
   // If this asmstr is empty, just print the #APP/#NOAPP markers.
   // These are useful to see where empty asm's wound up.
   if (AsmStr[0] == 0) {
@@ -1573,6 +1593,17 @@ void AsmPrinter::printImplicitDef(const MachineInstr *MI) const {
     << TRI->getName(MI->getOperand(0).getReg());
 }
 
+void AsmPrinter::printKill(const MachineInstr *MI) const {
+  if (!VerboseAsm) return;
+  O.PadToColumn(MAI->getCommentColumn());
+  O << MAI->getCommentString() << " kill:";
+  for (unsigned n = 0, e = MI->getNumOperands(); n != e; ++n) {
+    const MachineOperand &op = MI->getOperand(n);
+    assert(op.isReg() && "KILL instruction must have only register operands");
+    O << ' ' << TRI->getName(op.getReg()) << (op.isDef() ? "<def>" : "<kill>");
+  }
+}
+
 /// printLabel - This method prints a local label used by debug and
 /// exception handling tables.
 void AsmPrinter::printLabel(const MachineInstr *MI) const {
@@ -1599,6 +1630,31 @@ bool AsmPrinter::PrintAsmMemoryOperand(const MachineInstr *MI, unsigned OpNo,
   return true;
 }
 
+MCSymbol *AsmPrinter::GetBlockAddressSymbol(const BlockAddress *BA,
+                                            const char *Suffix) const {
+  return GetBlockAddressSymbol(BA->getFunction(), BA->getBasicBlock(), Suffix);
+}
+
+MCSymbol *AsmPrinter::GetBlockAddressSymbol(const Function *F,
+                                            const BasicBlock *BB,
+                                            const char *Suffix) const {
+  assert(BB->hasName() &&
+         "Address of anonymous basic block not supported yet!");
+
+  // This code must use the function name itself, and not the function number,
+  // since it must be possible to generate the label name from within other
+  // functions.
+  std::string FuncName = Mang->getMangledName(F);
+
+  SmallString<60> Name;
+  raw_svector_ostream(Name) << MAI->getPrivateGlobalPrefix() << "BA"
+    << FuncName.size() << '_' << FuncName << '_'
+    << Mang->makeNameProper(BB->getName())
+    << Suffix;
+
+  return OutContext.GetOrCreateSymbol(Name.str());
+}
+
 MCSymbol *AsmPrinter::GetMBBSymbol(unsigned MBBID) const {
   SmallString<60> Name;
   raw_svector_ostream(Name) << MAI->getPrivateGlobalPrefix() << "BB"
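
GetBlockAddressSymbol above builds label names of the form
<PrivateGlobalPrefix>BA<len(function)>_<function>_<block><suffix>. A
standalone sketch of that format with hypothetical inputs (".L" stands in
for MAI->getPrivateGlobalPrefix(), and the makeNameProper mangling step is
omitted):

    #include <cstdio>
    #include <string>

    int main() {
      std::string Prefix = ".L";    // assumed private-global prefix
      std::string Func = "main";    // hypothetical function name
      std::string Block = "entry";  // hypothetical basic block name
      std::printf("%sBA%u_%s_%s\n", Prefix.c_str(),
                  (unsigned)Func.size(), Func.c_str(), Block.c_str());
      return 0;                     // prints ".LBA4_main_entry"
    }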
@@ -1612,12 +1668,38 @@ MCSymbol *AsmPrinter::GetMBBSymbol(unsigned MBBID) const {
 /// MachineBasicBlock, an alignment (if present) and a comment describing
 /// it if appropriate.
 void AsmPrinter::EmitBasicBlockStart(const MachineBasicBlock *MBB) const {
+  // Emit an alignment directive for this block, if needed.
   if (unsigned Align = MBB->getAlignment())
     EmitAlignment(Log2_32(Align));
 
-  GetMBBSymbol(MBB->getNumber())->print(O, MAI);
-  O << ':';
+  // If the block has its address taken, emit a special label to satisfy
+  // references to the block. This is done so that we don't need to
+  // remember the number of this label, and so that we can make
+  // forward references to labels without knowing what their numbers
+  // will be.
+  if (MBB->hasAddressTaken()) {
+    GetBlockAddressSymbol(MBB->getBasicBlock()->getParent(),
+                          MBB->getBasicBlock())->print(O, MAI);
+    O << ':';
+    if (VerboseAsm) {
+      O.PadToColumn(MAI->getCommentColumn());
+      O << MAI->getCommentString() << " Address Taken";
+    }
+    O << '\n';
+  }
+
+  // Print the main label for the block.
+  if (MBB->pred_empty() || MBB->isOnlyReachableByFallthrough()) {
+    if (VerboseAsm)
+      O << MAI->getCommentString() << " BB#" << MBB->getNumber() << ':';
+  } else {
+    GetMBBSymbol(MBB->getNumber())->print(O, MAI);
+    O << ':';
+    if (!VerboseAsm)
+      O << '\n';
+  }
   
+  // Print some comments to accompany the label.
   if (VerboseAsm) {
     if (const BasicBlock *BB = MBB->getBasicBlock())
       if (BB->hasName()) {
@@ -1627,6 +1709,7 @@ void AsmPrinter::EmitBasicBlockStart(const MachineBasicBlock *MBB) const {
       }
 
     EmitComments(*MBB);
+    O << '\n';
   }
 }
 
@@ -1744,20 +1827,80 @@ GCMetadataPrinter *AsmPrinter::GetOrCreateGCPrinter(GCStrategy *S) {
 
 /// EmitComments - Pretty-print comments for instructions
 void AsmPrinter::EmitComments(const MachineInstr &MI) const {
-  assert(VerboseAsm && !MI.getDebugLoc().isUnknown());
-  
-  DebugLocTuple DLT = MF->getDebugLocTuple(MI.getDebugLoc());
+  if (!VerboseAsm)
+    return;
 
-  // Print source line info.
-  O.PadToColumn(MAI->getCommentColumn());
-  O << MAI->getCommentString() << " SrcLine ";
-  if (DLT.CompileUnit) {
-    DICompileUnit CU(DLT.CompileUnit);
-    O << CU.getFilename() << " ";
+  bool Newline = false;
+
+  if (!MI.getDebugLoc().isUnknown()) {
+    DebugLocTuple DLT = MF->getDebugLocTuple(MI.getDebugLoc());
+
+    // Print source line info.
+    O.PadToColumn(MAI->getCommentColumn());
+    O << MAI->getCommentString() << " SrcLine ";
+    if (DLT.Scope) {
+      DICompileUnit CU(DLT.Scope);
+      if (!CU.isNull())
+        O << CU.getFilename() << " ";
+    }
+    O << DLT.Line;
+    if (DLT.Col != 0)
+      O << ":" << DLT.Col;
+    Newline = true;
+  }
+
+  // Check for spills and reloads
+  int FI;
+
+  const MachineFrameInfo *FrameInfo =
+    MI.getParent()->getParent()->getFrameInfo();
+
+  // We assume a single instruction only has a spill or reload, not
+  // both.
+  if (TM.getInstrInfo()->isLoadFromStackSlotPostFE(&MI, FI)) {
+    if (FrameInfo->isSpillSlotObjectIndex(FI)) {
+      if (Newline) O << '\n';
+      O.PadToColumn(MAI->getCommentColumn());
+      O << MAI->getCommentString() << " Reload";
+      Newline = true;
+    }
+  }
+  else if (TM.getInstrInfo()->hasLoadFromStackSlot(&MI, FI)) {
+    if (FrameInfo->isSpillSlotObjectIndex(FI)) {
+      if (Newline) O << '\n';
+      O.PadToColumn(MAI->getCommentColumn());
+      O << MAI->getCommentString() << " Folded Reload";
+      Newline = true;
+    }
+  }
+  else if (TM.getInstrInfo()->isStoreToStackSlotPostFE(&MI, FI)) {
+    if (FrameInfo->isSpillSlotObjectIndex(FI)) {
+      if (Newline) O << '\n';
+      O.PadToColumn(MAI->getCommentColumn());
+      O << MAI->getCommentString() << " Spill";
+      Newline = true;
+    }
+  }
+  else if (TM.getInstrInfo()->hasStoreToStackSlot(&MI, FI)) {
+    if (FrameInfo->isSpillSlotObjectIndex(FI)) {
+      if (Newline) O << '\n';
+      O.PadToColumn(MAI->getCommentColumn());
+      O << MAI->getCommentString() << " Folded Spill";
+      Newline = true;
+    }
+  }
+
+  // Check for spill-induced copies
+  unsigned SrcReg, DstReg, SrcSubIdx, DstSubIdx;
+  if (TM.getInstrInfo()->isMoveInstr(MI, SrcReg, DstReg,
+                                      SrcSubIdx, DstSubIdx)) {
+    if (MI.getAsmPrinterFlag(ReloadReuse)) {
+      if (Newline) O << '\n';
+      O.PadToColumn(MAI->getCommentColumn());
+      O << MAI->getCommentString() << " Reload Reuse";
+      Newline = true;
+    }
   }
-  O << DLT.Line;
-  if (DLT.Col != 0) 
-    O << ":" << DLT.Col;
 }
 
 /// PrintChildLoopComment - Print comments about child loops within
@@ -1788,8 +1931,7 @@ static void PrintChildLoopComment(formatted_raw_ostream &O,
 }
 
 /// EmitComments - Pretty-print comments for basic blocks
-void AsmPrinter::EmitComments(const MachineBasicBlock &MBB) const
-{
+void AsmPrinter::EmitComments(const MachineBasicBlock &MBB) const {
   if (VerboseAsm) {
     // Add loop depth information
     const MachineLoop *loop = LI->getLoopFor(&MBB);
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.cpp
index ecf0007..0e93b98 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.cpp
@@ -105,26 +105,14 @@ DIE::~DIE() {
     delete Children[i];
 }
 
-/// AddSiblingOffset - Add a sibling offset field to the front of the DIE.
+/// addSiblingOffset - Add a sibling offset field to the front of the DIE.
 ///
-void DIE::AddSiblingOffset() {
+void DIE::addSiblingOffset() {
   DIEInteger *DI = new DIEInteger(0);
   Values.insert(Values.begin(), DI);
   Abbrev.AddFirstAttribute(dwarf::DW_AT_sibling, dwarf::DW_FORM_ref4);
 }
 
-/// Profile - Used to gather unique data for the value folding set.
-///
-void DIE::Profile(FoldingSetNodeID &ID) {
-  Abbrev.Profile(ID);
-
-  for (unsigned i = 0, N = Children.size(); i < N; ++i)
-    ID.AddPointer(Children[i]);
-
-  for (unsigned j = 0, M = Values.size(); j < M; ++j)
-    ID.AddPointer(Values[j]);
-}
-
 #ifndef NDEBUG
 void DIE::print(raw_ostream &O, unsigned IncIndent) {
   IndentCount += IncIndent;
@@ -231,16 +219,6 @@ unsigned DIEInteger::SizeOf(const TargetData *TD, unsigned Form) const {
   return 0;
 }
 
-/// Profile - Used to gather unique data for the value folding set.
-///
-void DIEInteger::Profile(FoldingSetNodeID &ID, unsigned Int) {
-  ID.AddInteger(isInteger);
-  ID.AddInteger(Int);
-}
-void DIEInteger::Profile(FoldingSetNodeID &ID) {
-  Profile(ID, Integer);
-}
-
 #ifndef NDEBUG
 void DIEInteger::print(raw_ostream &O) {
   O << "Int: " << (int64_t)Integer
@@ -258,16 +236,6 @@ void DIEString::EmitValue(Dwarf *D, unsigned Form) const {
   D->getAsm()->EmitString(Str);
 }
 
-/// Profile - Used to gather unique data for the value folding set.
-///
-void DIEString::Profile(FoldingSetNodeID &ID, const std::string &Str) {
-  ID.AddInteger(isString);
-  ID.AddString(Str);
-}
-void DIEString::Profile(FoldingSetNodeID &ID) {
-  Profile(ID, Str);
-}
-
 #ifndef NDEBUG
 void DIEString::print(raw_ostream &O) {
   O << "Str: \"" << Str << "\"";
@@ -292,16 +260,6 @@ unsigned DIEDwarfLabel::SizeOf(const TargetData *TD, unsigned Form) const {
   return TD->getPointerSize();
 }
 
-/// Profile - Used to gather unique data for the value folding set.
-///
-void DIEDwarfLabel::Profile(FoldingSetNodeID &ID, const DWLabel &Label) {
-  ID.AddInteger(isLabel);
-  Label.Profile(ID);
-}
-void DIEDwarfLabel::Profile(FoldingSetNodeID &ID) {
-  Profile(ID, Label);
-}
-
 #ifndef NDEBUG
 void DIEDwarfLabel::print(raw_ostream &O) {
   O << "Lbl: ";
@@ -327,16 +285,6 @@ unsigned DIEObjectLabel::SizeOf(const TargetData *TD, unsigned Form) const {
   return TD->getPointerSize();
 }
 
-/// Profile - Used to gather unique data for the value folding set.
-///
-void DIEObjectLabel::Profile(FoldingSetNodeID &ID, const std::string &Label) {
-  ID.AddInteger(isAsIsLabel);
-  ID.AddString(Label);
-}
-void DIEObjectLabel::Profile(FoldingSetNodeID &ID) {
-  Profile(ID, Label.c_str());
-}
-
 #ifndef NDEBUG
 void DIEObjectLabel::print(raw_ostream &O) {
   O << "Obj: " << Label;
@@ -363,20 +311,6 @@ unsigned DIESectionOffset::SizeOf(const TargetData *TD, unsigned Form) const {
   return TD->getPointerSize();
 }
 
-/// Profile - Used to gather unique data for the value folding set.
-///
-void DIESectionOffset::Profile(FoldingSetNodeID &ID, const DWLabel &Label,
-                               const DWLabel &Section) {
-  ID.AddInteger(isSectionOffset);
-  Label.Profile(ID);
-  Section.Profile(ID);
-  // IsEH and UseSet are specific to the Label/Section that we will emit the
-  // offset for; so Label/Section are enough for uniqueness.
-}
-void DIESectionOffset::Profile(FoldingSetNodeID &ID) {
-  Profile(ID, Label, Section);
-}
-
 #ifndef NDEBUG
 void DIESectionOffset::print(raw_ostream &O) {
   O << "Off: ";
@@ -405,18 +339,6 @@ unsigned DIEDelta::SizeOf(const TargetData *TD, unsigned Form) const {
   return TD->getPointerSize();
 }
 
-/// Profile - Used to gather unique data for the value folding set.
-///
-void DIEDelta::Profile(FoldingSetNodeID &ID, const DWLabel &LabelHi,
-                       const DWLabel &LabelLo) {
-  ID.AddInteger(isDelta);
-  LabelHi.Profile(ID);
-  LabelLo.Profile(ID);
-}
-void DIEDelta::Profile(FoldingSetNodeID &ID) {
-  Profile(ID, LabelHi, LabelLo);
-}
-
 #ifndef NDEBUG
 void DIEDelta::print(raw_ostream &O) {
   O << "Del: ";
@@ -436,21 +358,6 @@ void DIEEntry::EmitValue(Dwarf *D, unsigned Form) const {
   D->getAsm()->EmitInt32(Entry->getOffset());
 }
 
-/// Profile - Used to gather unique data for the value folding set.
-///
-void DIEEntry::Profile(FoldingSetNodeID &ID, DIE *Entry) {
-  ID.AddInteger(isEntry);
-  ID.AddPointer(Entry);
-}
-void DIEEntry::Profile(FoldingSetNodeID &ID) {
-  ID.AddInteger(isEntry);
-
-  if (Entry)
-    ID.AddPointer(Entry);
-  else
-    ID.AddPointer(this);
-}
-
 #ifndef NDEBUG
 void DIEEntry::print(raw_ostream &O) {
   O << format("Die: 0x%lx", (long)(intptr_t)Entry);
@@ -505,11 +412,6 @@ unsigned DIEBlock::SizeOf(const TargetData *TD, unsigned Form) const {
   return 0;
 }
 
-void DIEBlock::Profile(FoldingSetNodeID &ID) {
-  ID.AddInteger(isBlock);
-  DIE::Profile(ID);
-}
-
 #ifndef NDEBUG
 void DIEBlock::print(raw_ostream &O) {
   O << "Blk: ";
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.h b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.h
index 62b51ec..dc6a70a 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.h
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.h
@@ -29,7 +29,7 @@ namespace llvm {
   //===--------------------------------------------------------------------===//
   /// DIEAbbrevData - Dwarf abbreviation data, describes the one attribute of a
   /// Dwarf abbreviation.
-  class VISIBILITY_HIDDEN DIEAbbrevData {
+  class DIEAbbrevData {
     /// Attribute - Dwarf attribute code.
     ///
     unsigned Attribute;
@@ -52,7 +52,7 @@ namespace llvm {
   //===--------------------------------------------------------------------===//
   /// DIEAbbrev - Dwarf abbreviation, describes the organization of a debug
   /// information object.
-  class VISIBILITY_HIDDEN DIEAbbrev : public FoldingSetNode {
+  class DIEAbbrev : public FoldingSetNode {
     /// Tag - Dwarf tag code.
     ///
     unsigned Tag;
@@ -113,7 +113,7 @@ namespace llvm {
   class CompileUnit;
   class DIEValue;
 
-  class VISIBILITY_HIDDEN DIE : public FoldingSetNode {
+  class DIE {
   protected:
     /// Abbrev - Buffer for constructing abbreviation.
     ///
@@ -161,38 +161,28 @@ namespace llvm {
     void setSize(unsigned S) { Size = S; }
     void setAbstractCompileUnit(CompileUnit *CU) { AbstractCU = CU; }
 
-    /// AddValue - Add a value and attributes to a DIE.
+    /// addValue - Add a value and attributes to a DIE.
     ///
-    void AddValue(unsigned Attribute, unsigned Form, DIEValue *Value) {
+    void addValue(unsigned Attribute, unsigned Form, DIEValue *Value) {
       Abbrev.AddAttribute(Attribute, Form);
       Values.push_back(Value);
     }
 
     /// getSiblingOffset - Return the offset of the debug information entry's
     /// sibling.
-    unsigned SiblingOffset() const { return Offset + Size; }
+    unsigned getSiblingOffset() const { return Offset + Size; }
 
-    /// AddSiblingOffset - Add a sibling offset field to the front of the DIE.
+    /// addSiblingOffset - Add a sibling offset field to the front of the DIE.
     ///
-    void AddSiblingOffset();
+    void addSiblingOffset();
 
-    /// AddChild - Add a child to the DIE.
+    /// addChild - Add a child to the DIE.
     ///
-    void AddChild(DIE *Child) {
+    void addChild(DIE *Child) {
       Abbrev.setChildrenFlag(dwarf::DW_CHILDREN_yes);
       Children.push_back(Child);
     }
 
-    /// Detach - Detaches objects connected to it after copying.
-    ///
-    void Detach() {
-      Children.clear();
-    }
-
-    /// Profile - Used to gather unique data for the value folding set.
-    ///
-    void Profile(FoldingSetNodeID &ID) ;
-
 #ifndef NDEBUG
     void print(raw_ostream &O, unsigned IncIndent = 0);
     void dump();
@@ -202,7 +192,7 @@ namespace llvm {
   //===--------------------------------------------------------------------===//
   /// DIEValue - A debug information entry value.
   ///
-  class VISIBILITY_HIDDEN DIEValue : public FoldingSetNode {
+  class DIEValue {
   public:
     enum {
       isInteger,
@@ -233,10 +223,6 @@ namespace llvm {
     ///
     virtual unsigned SizeOf(const TargetData *TD, unsigned Form) const = 0;
 
-    /// Profile - Used to gather unique data for the value folding set.
-    ///
-    virtual void Profile(FoldingSetNodeID &ID) = 0;
-
     // Implement isa/cast/dyncast.
     static bool classof(const DIEValue *) { return true; }
 
@@ -249,7 +235,7 @@ namespace llvm {
   //===--------------------------------------------------------------------===//
   /// DIEInteger - An integer value DIE.
   ///
-  class VISIBILITY_HIDDEN DIEInteger : public DIEValue {
+  class DIEInteger : public DIEValue {
     uint64_t Integer;
   public:
     explicit DIEInteger(uint64_t I) : DIEValue(isInteger), Integer(I) {}
@@ -277,10 +263,6 @@ namespace llvm {
     ///
     virtual unsigned SizeOf(const TargetData *TD, unsigned Form) const;
 
-    /// Profile - Used to gather unique data for the value folding set.
-    ///
-    static void Profile(FoldingSetNodeID &ID, unsigned Int);
-    virtual void Profile(FoldingSetNodeID &ID);
 
     // Implement isa/cast/dyncast.
     static bool classof(const DIEInteger *) { return true; }
@@ -294,10 +276,10 @@ namespace llvm {
   //===--------------------------------------------------------------------===//
   /// DIEString - A string value DIE.
   ///
-  class VISIBILITY_HIDDEN DIEString : public DIEValue {
-    const std::string Str;
+  class DIEString : public DIEValue {
+    const StringRef Str;
   public:
-    explicit DIEString(const std::string &S) : DIEValue(isString), Str(S) {}
+    explicit DIEString(const StringRef S) : DIEValue(isString), Str(S) {}
 
     /// EmitValue - Emit string value.
     ///
@@ -309,11 +291,6 @@ namespace llvm {
       return Str.size() + sizeof(char); // sizeof('\0');
     }
 
-    /// Profile - Used to gather unique data for the value folding set.
-    ///
-    static void Profile(FoldingSetNodeID &ID, const std::string &Str);
-    virtual void Profile(FoldingSetNodeID &ID);
-
     // Implement isa/cast/dyncast.
     static bool classof(const DIEString *) { return true; }
     static bool classof(const DIEValue *S) { return S->getType() == isString; }
@@ -326,7 +303,7 @@ namespace llvm {
   //===--------------------------------------------------------------------===//
   /// DIEDwarfLabel - A Dwarf internal label expression DIE.
   //
-  class VISIBILITY_HIDDEN DIEDwarfLabel : public DIEValue {
+  class DIEDwarfLabel : public DIEValue {
     const DWLabel Label;
   public:
     explicit DIEDwarfLabel(const DWLabel &L) : DIEValue(isLabel), Label(L) {}
@@ -339,11 +316,6 @@ namespace llvm {
     ///
     virtual unsigned SizeOf(const TargetData *TD, unsigned Form) const;
 
-    /// Profile - Used to gather unique data for the value folding set.
-    ///
-    static void Profile(FoldingSetNodeID &ID, const DWLabel &Label);
-    virtual void Profile(FoldingSetNodeID &ID);
-
     // Implement isa/cast/dyncast.
     static bool classof(const DIEDwarfLabel *)  { return true; }
     static bool classof(const DIEValue *L) { return L->getType() == isLabel; }
@@ -356,7 +328,7 @@ namespace llvm {
   //===--------------------------------------------------------------------===//
   /// DIEObjectLabel - A label to an object in code or data.
   //
-  class VISIBILITY_HIDDEN DIEObjectLabel : public DIEValue {
+  class DIEObjectLabel : public DIEValue {
     const std::string Label;
   public:
     explicit DIEObjectLabel(const std::string &L)
@@ -370,11 +342,6 @@ namespace llvm {
     ///
     virtual unsigned SizeOf(const TargetData *TD, unsigned Form) const;
 
-    /// Profile - Used to gather unique data for the value folding set.
-    ///
-    static void Profile(FoldingSetNodeID &ID, const std::string &Label);
-    virtual void Profile(FoldingSetNodeID &ID);
-
     // Implement isa/cast/dyncast.
     static bool classof(const DIEObjectLabel *) { return true; }
     static bool classof(const DIEValue *L) {
@@ -389,7 +356,7 @@ namespace llvm {
   //===--------------------------------------------------------------------===//
   /// DIESectionOffset - A section offset DIE.
   ///
-  class VISIBILITY_HIDDEN DIESectionOffset : public DIEValue {
+  class DIESectionOffset : public DIEValue {
     const DWLabel Label;
     const DWLabel Section;
     bool IsEH : 1;
@@ -408,12 +375,6 @@ namespace llvm {
     ///
     virtual unsigned SizeOf(const TargetData *TD, unsigned Form) const;
 
-    /// Profile - Used to gather unique data for the value folding set.
-    ///
-    static void Profile(FoldingSetNodeID &ID, const DWLabel &Label,
-                        const DWLabel &Section);
-    virtual void Profile(FoldingSetNodeID &ID);
-
     // Implement isa/cast/dyncast.
     static bool classof(const DIESectionOffset *)  { return true; }
     static bool classof(const DIEValue *D) {
@@ -428,7 +389,7 @@ namespace llvm {
   //===--------------------------------------------------------------------===//
   /// DIEDelta - A simple label difference DIE.
   ///
-  class VISIBILITY_HIDDEN DIEDelta : public DIEValue {
+  class DIEDelta : public DIEValue {
     const DWLabel LabelHi;
     const DWLabel LabelLo;
   public:
@@ -443,12 +404,6 @@ namespace llvm {
     ///
     virtual unsigned SizeOf(const TargetData *TD, unsigned Form) const;
 
-    /// Profile - Used to gather unique data for the value folding set.
-    ///
-    static void Profile(FoldingSetNodeID &ID, const DWLabel &LabelHi,
-                        const DWLabel &LabelLo);
-    virtual void Profile(FoldingSetNodeID &ID);
-
     // Implement isa/cast/dyncast.
     static bool classof(const DIEDelta *)  { return true; }
     static bool classof(const DIEValue *D) { return D->getType() == isDelta; }
@@ -462,7 +417,7 @@ namespace llvm {
   /// DIEEntry - A pointer to another debug information entry.  An instance of
   /// this class can also be used as a proxy for a debug information entry not
   /// yet defined (i.e. types).
-  class VISIBILITY_HIDDEN DIEEntry : public DIEValue {
+  class DIEEntry : public DIEValue {
     DIE *Entry;
   public:
     explicit DIEEntry(DIE *E) : DIEValue(isEntry), Entry(E) {}
@@ -480,11 +435,6 @@ namespace llvm {
       return sizeof(int32_t);
     }
 
-    /// Profile - Used to gather unique data for the value folding set.
-    ///
-    static void Profile(FoldingSetNodeID &ID, DIE *Entry);
-    virtual void Profile(FoldingSetNodeID &ID);
-
     // Implement isa/cast/dyncast.
     static bool classof(const DIEEntry *)  { return true; }
     static bool classof(const DIEValue *E) { return E->getType() == isEntry; }
@@ -497,7 +447,7 @@ namespace llvm {
   //===--------------------------------------------------------------------===//
   /// DIEBlock - A block of values.  Primarily used for location expressions.
   //
-  class VISIBILITY_HIDDEN DIEBlock : public DIEValue, public DIE {
+  class DIEBlock : public DIEValue, public DIE {
     unsigned Size;                // Size in bytes excluding size header.
   public:
     DIEBlock()
@@ -525,10 +475,6 @@ namespace llvm {
     ///
     virtual unsigned SizeOf(const TargetData *TD, unsigned Form) const;
 
-    /// Profile - Used to gather unique data for the value folding set.
-    ///
-    virtual void Profile(FoldingSetNodeID &ID);
-
     // Implement isa/cast/dyncast.
     static bool classof(const DIEBlock *)  { return true; }
     static bool classof(const DIEValue *E) { return E->getType() == isBlock; }
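
The removals above drop the FoldingSet-based uniquing of DIE values;
DwarfDebug now simply owns every value it creates and frees them in one
pass (see the DIEValues vector and ~DwarfDebug in the next file). A
minimal sketch of that ownership pattern, with hypothetical types:

    #include <cstdio>
    #include <vector>

    struct ToyValue {
      int Data;
      explicit ToyValue(int D) : Data(D) {}
    };

    int main() {
      std::vector<ToyValue *> Owned;   // every created value lands here
      Owned.push_back(new ToyValue(1));
      Owned.push_back(new ToyValue(2));
      // Values are shared by raw pointer while alive; no uniquing needed.
      for (unsigned j = 0, M = Owned.size(); j < M; ++j)
        delete Owned[j];               // single-pass cleanup, as in ~DwarfDebug
      std::printf("freed %u values\n", 2u);
      return 0;
    }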
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
index bbaf1ad..9dad574 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
@@ -23,9 +23,10 @@
 #include "llvm/Target/TargetLoweringObjectFile.h"
 #include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/ADT/StringExtras.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/Mangler.h"
 #include "llvm/Support/Timer.h"
-#include "llvm/Support/Debug.h"
 #include "llvm/System/Path.h"
 using namespace llvm;
 
@@ -38,167 +39,191 @@ static TimerGroup &getDwarfTimerGroup() {
 
 /// Configuration values for initial hash set sizes (log2).
 ///
-static const unsigned InitDiesSetSize          = 9; // log2(512)
 static const unsigned InitAbbreviationsSetSize = 9; // log2(512)
-static const unsigned InitValuesSetSize        = 9; // log2(512)
 
 namespace llvm {
 
 //===----------------------------------------------------------------------===//
 /// CompileUnit - This dwarf writer support class manages information associated
 /// with a source file.
-class VISIBILITY_HIDDEN CompileUnit {
+class CompileUnit {
   /// ID - File identifier for source.
   ///
   unsigned ID;
 
   /// Die - Compile unit debug information entry.
   ///
-  DIE *Die;
+  DIE *CUDie;
+
+  /// IndexTyDie - An anonymous type for index type.
+  DIE *IndexTyDie;
 
   /// GVToDieMap - Tracks the mapping of unit level debug information
   /// variables to debug information entries.
   /// FIXME : Rename GVToDieMap -> NodeToDieMap
-  std::map<MDNode *, DIE *> GVToDieMap;
+  ValueMap<MDNode *, DIE *> GVToDieMap;
 
   /// GVToDIEEntryMap - Tracks the mapping of unit level debug information
   /// descriptors to debug information entries using a DIEEntry proxy.
   /// FIXME : Rename
-  std::map<MDNode *, DIEEntry *> GVToDIEEntryMap;
+  ValueMap<MDNode *, DIEEntry *> GVToDIEEntryMap;
 
   /// Globals - A map of globally visible named entities for this unit.
   ///
   StringMap<DIE*> Globals;
 
-  /// DiesSet - Used to uniquely define dies within the compile unit.
+  /// GlobalTypes - A map of globally visible types for this unit.
   ///
-  FoldingSet<DIE> DiesSet;
+  StringMap<DIE*> GlobalTypes;
+
 public:
   CompileUnit(unsigned I, DIE *D)
-    : ID(I), Die(D), DiesSet(InitDiesSetSize) {}
-  ~CompileUnit() { delete Die; }
+    : ID(I), CUDie(D), IndexTyDie(0) {}
+  ~CompileUnit() { delete CUDie; delete IndexTyDie; }
 
   // Accessors.
-  unsigned getID() const { return ID; }
-  DIE* getDie() const { return Die; }
-  StringMap<DIE*> &getGlobals() { return Globals; }
+  unsigned getID()                  const { return ID; }
+  DIE* getCUDie()                   const { return CUDie; }
+  const StringMap<DIE*> &getGlobals()     const { return Globals; }
+  const StringMap<DIE*> &getGlobalTypes() const { return GlobalTypes; }
 
   /// hasContent - Return true if this compile unit has something to write out.
   ///
-  bool hasContent() const { return !Die->getChildren().empty(); }
+  bool hasContent() const { return !CUDie->getChildren().empty(); }
 
-  /// AddGlobal - Add a new global entity to the compile unit.
+  /// addGlobal - Add a new global entity to the compile unit.
   ///
-  void AddGlobal(const std::string &Name, DIE *Die) { Globals[Name] = Die; }
+  void addGlobal(const std::string &Name, DIE *Die) { Globals[Name] = Die; }
+
+  /// addGlobalType - Add a new global type to the compile unit.
+  ///
+  void addGlobalType(const std::string &Name, DIE *Die) { 
+    GlobalTypes[Name] = Die; 
+  }
 
-  /// getDieMapSlotFor - Returns the debug information entry map slot for the
+  /// getDIE - Returns the debug information entry for the
   /// specified debug variable.
-  DIE *&getDieMapSlotFor(MDNode *N) { return GVToDieMap[N]; }
+  DIE *getDIE(MDNode *N) { return GVToDieMap.lookup(N); }
 
-  /// getDIEEntrySlotFor - Returns the debug information entry proxy slot for
-  /// the specified debug variable.
-  DIEEntry *&getDIEEntrySlotFor(MDNode *N) {
-    return GVToDIEEntryMap[N];
+  /// insertDIE - Insert DIE into the map.
+  void insertDIE(MDNode *N, DIE *D) {
+    GVToDieMap.insert(std::make_pair(N, D));
   }
 
-  /// AddDie - Adds or interns the DIE to the compile unit.
+  /// getDIEEntry - Returns the debug information entry for the specified
+  /// debug variable.
+  DIEEntry *getDIEEntry(MDNode *N) { return GVToDIEEntryMap.lookup(N); }
+
+  /// insertDIEEntry - Insert debug information entry into the map.
+  void insertDIEEntry(MDNode *N, DIEEntry *E) {
+    GVToDIEEntryMap.insert(std::make_pair(N, E));
+  }
+
+  /// addDie - Add the DIE to the compile unit.
   ///
-  DIE *AddDie(DIE &Buffer) {
-    FoldingSetNodeID ID;
-    Buffer.Profile(ID);
-    void *Where;
-    DIE *Die = DiesSet.FindNodeOrInsertPos(ID, Where);
-
-    if (!Die) {
-      Die = new DIE(Buffer);
-      DiesSet.InsertNode(Die, Where);
-      this->Die->AddChild(Die);
-      Buffer.Detach();
-    }
+  void addDie(DIE *Buffer) {
+    this->CUDie->addChild(Buffer);
+  }
+
+  // getIndexTyDie - Get an anonymous type for index type.
+  DIE *getIndexTyDie() {
+    return IndexTyDie;
+  }
 
-    return Die;
+  // setIndexTyDie - Set D as anonymous type for index which can be reused
+  // later.
+  void setIndexTyDie(DIE *D) {
+    IndexTyDie = D;
   }
+
 };
 
 //===----------------------------------------------------------------------===//
 /// DbgVariable - This class is used to track local variable information.
 ///
-class VISIBILITY_HIDDEN DbgVariable {
+class DbgVariable {
   DIVariable Var;                    // Variable Descriptor.
   unsigned FrameIndex;               // Variable frame index.
-  bool InlinedFnVar;                 // Variable for an inlined function.
+  DbgVariable *AbstractVar;          // Abstract variable for this variable.
+  DIE *TheDIE;
 public:
-  DbgVariable(DIVariable V, unsigned I, bool IFV)
-    : Var(V), FrameIndex(I), InlinedFnVar(IFV)  {}
+  DbgVariable(DIVariable V, unsigned I)
+    : Var(V), FrameIndex(I), AbstractVar(0), TheDIE(0)  {}
 
   // Accessors.
-  DIVariable getVariable() const { return Var; }
-  unsigned getFrameIndex() const { return FrameIndex; }
-  bool isInlinedFnVar() const { return InlinedFnVar; }
+  DIVariable getVariable()           const { return Var; }
+  unsigned getFrameIndex()           const { return FrameIndex; }
+  void setAbstractVariable(DbgVariable *V) { AbstractVar = V; }
+  DbgVariable *getAbstractVariable() const { return AbstractVar; }
+  void setDIE(DIE *D)                      { TheDIE = D; }
+  DIE *getDIE()                      const { return TheDIE; }
 };
 
 //===----------------------------------------------------------------------===//
 /// DbgScope - This class is used to track scope information.
 ///
-class DbgConcreteScope;
-class VISIBILITY_HIDDEN DbgScope {
+class DbgScope {
   DbgScope *Parent;                   // Parent to this scope.
   DIDescriptor Desc;                  // Debug info descriptor for scope.
-                                      // Either subprogram or block.
+  WeakVH InlinedAtLocation;           // Location at which scope is inlined.
+  bool AbstractScope;                 // Abstract Scope
   unsigned StartLabelID;              // Label ID of the beginning of scope.
   unsigned EndLabelID;                // Label ID of the end of scope.
   const MachineInstr *LastInsn;       // Last instruction of this scope.
   const MachineInstr *FirstInsn;      // First instruction of this scope.
   SmallVector<DbgScope *, 4> Scopes;  // Scopes defined in scope.
   SmallVector<DbgVariable *, 8> Variables;// Variables declared in scope.
-  SmallVector<DbgConcreteScope *, 8> ConcreteInsts;// Concrete insts of funcs.
 
   // Private state for dump()
   mutable unsigned IndentLevel;
 public:
-  DbgScope(DbgScope *P, DIDescriptor D)
-    : Parent(P), Desc(D), StartLabelID(0), EndLabelID(0), LastInsn(0),
-      FirstInsn(0), IndentLevel(0) {}
+  DbgScope(DbgScope *P, DIDescriptor D, MDNode *I = 0)
+    : Parent(P), Desc(D), InlinedAtLocation(I), AbstractScope(false),
+      StartLabelID(0), EndLabelID(0),
+      LastInsn(0), FirstInsn(0), IndentLevel(0) {}
   virtual ~DbgScope();
 
   // Accessors.
   DbgScope *getParent()          const { return Parent; }
+  void setParent(DbgScope *P)          { Parent = P; }
   DIDescriptor getDesc()         const { return Desc; }
+  MDNode *getInlinedAt()         const {
+    return dyn_cast_or_null<MDNode>(InlinedAtLocation);
+  }
+  MDNode *getScopeNode()         const { return Desc.getNode(); }
   unsigned getStartLabelID()     const { return StartLabelID; }
   unsigned getEndLabelID()       const { return EndLabelID; }
   SmallVector<DbgScope *, 4> &getScopes() { return Scopes; }
   SmallVector<DbgVariable *, 8> &getVariables() { return Variables; }
-  SmallVector<DbgConcreteScope*,8> &getConcreteInsts() { return ConcreteInsts; }
   void setStartLabelID(unsigned S) { StartLabelID = S; }
   void setEndLabelID(unsigned E)   { EndLabelID = E; }
   void setLastInsn(const MachineInstr *MI) { LastInsn = MI; }
   const MachineInstr *getLastInsn()      { return LastInsn; }
   void setFirstInsn(const MachineInstr *MI) { FirstInsn = MI; }
+  void setAbstractScope() { AbstractScope = true; }
+  bool isAbstractScope() const { return AbstractScope; }
   const MachineInstr *getFirstInsn()      { return FirstInsn; }
-  /// AddScope - Add a scope to the scope.
-  ///
-  void AddScope(DbgScope *S) { Scopes.push_back(S); }
 
-  /// AddVariable - Add a variable to the scope.
+  /// addScope - Add a scope to the scope.
   ///
-  void AddVariable(DbgVariable *V) { Variables.push_back(V); }
+  void addScope(DbgScope *S) { Scopes.push_back(S); }
 
-  /// AddConcreteInst - Add a concrete instance to the scope.
+  /// addVariable - Add a variable to the scope.
   ///
-  void AddConcreteInst(DbgConcreteScope *C) { ConcreteInsts.push_back(C); }
+  void addVariable(DbgVariable *V) { Variables.push_back(V); }
 
-  void FixInstructionMarkers() {
+  void fixInstructionMarkers() {
     assert (getFirstInsn() && "First instruction is missing!");
     if (getLastInsn())
       return;
-    
+
     // If a scope does not have an instruction to mark an end then use
     // the end of the last child scope.
     SmallVector<DbgScope *, 4> &Scopes = getScopes();
     assert (!Scopes.empty() && "Inner most scope does not have last insn!");
     DbgScope *L = Scopes.back();
     if (!L->getLastInsn())
-      L->FixInstructionMarkers();
+      L->fixInstructionMarkers();
     setLastInsn(L->getLastInsn());
   }
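
fixInstructionMarkers above repairs a scope that has no closing
instruction by borrowing the last instruction of its last child,
recursing as needed. A toy standalone sketch of the same repair
(hypothetical types):

    #include <cstdio>
    #include <vector>

    struct ToyScope {
      int LastInsn;                     // 0 means no closing instruction yet
      std::vector<ToyScope *> Children;
      explicit ToyScope(int L) : LastInsn(L) {}
      void Fix() {
        if (LastInsn)
          return;
        ToyScope *L = Children.back();  // look at the last child scope
        if (!L->LastInsn)
          L->Fix();                     // recurse until a marker is found
        LastInsn = L->LastInsn;
      }
    };

    int main() {
      ToyScope Inner(7), Outer(0);
      Outer.Children.push_back(&Inner);
      Outer.Fix();
      std::printf("outer ends at insn %d\n", Outer.LastInsn); // prints 7
      return 0;
    }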
 
@@ -211,11 +236,15 @@ public:
 void DbgScope::dump() const {
   raw_ostream &err = errs();
   err.indent(IndentLevel);
-  Desc.dump();
+  MDNode *N = Desc.getNode();
+  N->dump();
   err << " [" << StartLabelID << ", " << EndLabelID << "]\n";
+  if (AbstractScope)
+    err << "Abstract Scope\n";
 
   IndentLevel += 2;
-
+  if (!Scopes.empty())
+    err << "Children ...\n";
   for (unsigned i = 0, e = Scopes.size(); i != e; ++i)
     if (Scopes[i] != this)
       Scopes[i]->dump();
@@ -224,28 +253,11 @@ void DbgScope::dump() const {
 }
 #endif
 
-//===----------------------------------------------------------------------===//
-/// DbgConcreteScope - This class is used to track a scope that holds concrete
-/// instance information.
-///
-class VISIBILITY_HIDDEN DbgConcreteScope : public DbgScope {
-  CompileUnit *Unit;
-  DIE *Die;                           // Debug info for this concrete scope.
-public:
-  DbgConcreteScope(DIDescriptor D) : DbgScope(NULL, D) {}
-
-  // Accessors.
-  DIE *getDie() const { return Die; }
-  void setDie(DIE *D) { Die = D; }
-};
-
 DbgScope::~DbgScope() {
   for (unsigned i = 0, N = Scopes.size(); i < N; ++i)
     delete Scopes[i];
   for (unsigned j = 0, M = Variables.size(); j < M; ++j)
     delete Variables[j];
-  for (unsigned k = 0, O = ConcreteInsts.size(); k < O; ++k)
-    delete ConcreteInsts[k];
 }
 
 } // end llvm namespace
@@ -253,28 +265,23 @@ DbgScope::~DbgScope() {
 DwarfDebug::DwarfDebug(raw_ostream &OS, AsmPrinter *A, const MCAsmInfo *T)
   : Dwarf(OS, A, T, "dbg"), ModuleCU(0),
     AbbreviationsSet(InitAbbreviationsSetSize), Abbreviations(),
-    ValuesSet(InitValuesSetSize), Values(), StringPool(),
+    DIEValues(), StringPool(),
     SectionSourceLines(), didInitial(false), shouldEmit(false),
-    FunctionDbgScope(0), DebugTimer(0) {
+    CurrentFnDbgScope(0), DebugTimer(0) {
   if (TimePassesIsEnabled)
     DebugTimer = new Timer("Dwarf Debug Writer",
                            getDwarfTimerGroup());
 }
 DwarfDebug::~DwarfDebug() {
-  for (unsigned j = 0, M = Values.size(); j < M; ++j)
-    delete Values[j];
-
-  for (DenseMap<const MDNode *, DbgScope *>::iterator
-         I = AbstractInstanceRootMap.begin(),
-         E = AbstractInstanceRootMap.end(); I != E;++I)
-    delete I->second;
+  for (unsigned j = 0, M = DIEValues.size(); j < M; ++j)
+    delete DIEValues[j];
 
   delete DebugTimer;
 }
 
-/// AssignAbbrevNumber - Define a unique number for the abbreviation.
+/// assignAbbrevNumber - Define a unique number for the abbreviation.
 ///
-void DwarfDebug::AssignAbbrevNumber(DIEAbbrev &Abbrev) {
+void DwarfDebug::assignAbbrevNumber(DIEAbbrev &Abbrev) {
   // Profile the node so that we can make it unique.
   FoldingSetNodeID ID;
   Abbrev.Profile(ID);
@@ -295,224 +302,120 @@ void DwarfDebug::AssignAbbrevNumber(DIEAbbrev &Abbrev) {
   }
 }
 
-/// CreateDIEEntry - Creates a new DIEEntry to be a proxy for a debug
+/// createDIEEntry - Creates a new DIEEntry to be a proxy for a debug
 /// information entry.
-DIEEntry *DwarfDebug::CreateDIEEntry(DIE *Entry) {
-  DIEEntry *Value;
-
-  if (Entry) {
-    FoldingSetNodeID ID;
-    DIEEntry::Profile(ID, Entry);
-    void *Where;
-    Value = static_cast<DIEEntry *>(ValuesSet.FindNodeOrInsertPos(ID, Where));
-
-    if (Value) return Value;
-
-    Value = new DIEEntry(Entry);
-    ValuesSet.InsertNode(Value, Where);
-  } else {
-    Value = new DIEEntry(Entry);
-  }
-
-  Values.push_back(Value);
+DIEEntry *DwarfDebug::createDIEEntry(DIE *Entry) {
+  DIEEntry *Value = new DIEEntry(Entry);
+  DIEValues.push_back(Value);
   return Value;
 }
 
-/// SetDIEEntry - Set a DIEEntry once the debug information entry is defined.
-///
-void DwarfDebug::SetDIEEntry(DIEEntry *Value, DIE *Entry) {
-  Value->setEntry(Entry);
-
-  // Add to values set if not already there.  If it is, we merely have a
-  // duplicate in the values list (no harm.)
-  ValuesSet.GetOrInsertNode(Value);
-}
-
-/// AddUInt - Add an unsigned integer attribute data and value.
+/// addUInt - Add an unsigned integer attribute data and value.
 ///
-void DwarfDebug::AddUInt(DIE *Die, unsigned Attribute,
+void DwarfDebug::addUInt(DIE *Die, unsigned Attribute,
                          unsigned Form, uint64_t Integer) {
   if (!Form) Form = DIEInteger::BestForm(false, Integer);
-
-  FoldingSetNodeID ID;
-  DIEInteger::Profile(ID, Integer);
-  void *Where;
-  DIEValue *Value = ValuesSet.FindNodeOrInsertPos(ID, Where);
-
-  if (!Value) {
-    Value = new DIEInteger(Integer);
-    ValuesSet.InsertNode(Value, Where);
-    Values.push_back(Value);
-  }
-
-  Die->AddValue(Attribute, Form, Value);
+  DIEValue *Value = new DIEInteger(Integer);
+  DIEValues.push_back(Value);
+  Die->addValue(Attribute, Form, Value);
 }
 
-/// AddSInt - Add an signed integer attribute data and value.
+/// addSInt - Add a signed integer attribute data and value.
 ///
-void DwarfDebug::AddSInt(DIE *Die, unsigned Attribute,
+void DwarfDebug::addSInt(DIE *Die, unsigned Attribute,
                          unsigned Form, int64_t Integer) {
   if (!Form) Form = DIEInteger::BestForm(true, Integer);
-
-  FoldingSetNodeID ID;
-  DIEInteger::Profile(ID, (uint64_t)Integer);
-  void *Where;
-  DIEValue *Value = ValuesSet.FindNodeOrInsertPos(ID, Where);
-
-  if (!Value) {
-    Value = new DIEInteger(Integer);
-    ValuesSet.InsertNode(Value, Where);
-    Values.push_back(Value);
-  }
-
-  Die->AddValue(Attribute, Form, Value);
+  DIEValue *Value = new DIEInteger(Integer);
+  DIEValues.push_back(Value);
+  Die->addValue(Attribute, Form, Value);
 }
 
-/// AddString - Add a string attribute data and value.
+/// addString - Add a string attribute data and value.
 ///
-void DwarfDebug::AddString(DIE *Die, unsigned Attribute, unsigned Form,
-                           const std::string &String) {
-  FoldingSetNodeID ID;
-  DIEString::Profile(ID, String);
-  void *Where;
-  DIEValue *Value = ValuesSet.FindNodeOrInsertPos(ID, Where);
-
-  if (!Value) {
-    Value = new DIEString(String);
-    ValuesSet.InsertNode(Value, Where);
-    Values.push_back(Value);
-  }
-
-  Die->AddValue(Attribute, Form, Value);
+void DwarfDebug::addString(DIE *Die, unsigned Attribute, unsigned Form,
+                           const StringRef String) {
+  DIEValue *Value = new DIEString(String);
+  DIEValues.push_back(Value);
+  Die->addValue(Attribute, Form, Value);
 }
 
-/// AddLabel - Add a Dwarf label attribute data and value.
+/// addLabel - Add a Dwarf label attribute data and value.
 ///
-void DwarfDebug::AddLabel(DIE *Die, unsigned Attribute, unsigned Form,
+void DwarfDebug::addLabel(DIE *Die, unsigned Attribute, unsigned Form,
                           const DWLabel &Label) {
-  FoldingSetNodeID ID;
-  DIEDwarfLabel::Profile(ID, Label);
-  void *Where;
-  DIEValue *Value = ValuesSet.FindNodeOrInsertPos(ID, Where);
-
-  if (!Value) {
-    Value = new DIEDwarfLabel(Label);
-    ValuesSet.InsertNode(Value, Where);
-    Values.push_back(Value);
-  }
-
-  Die->AddValue(Attribute, Form, Value);
+  DIEValue *Value = new DIEDwarfLabel(Label);
+  DIEValues.push_back(Value);
+  Die->addValue(Attribute, Form, Value);
 }
 
-/// AddObjectLabel - Add an non-Dwarf label attribute data and value.
+/// addObjectLabel - Add a non-Dwarf label attribute data and value.
 ///
-void DwarfDebug::AddObjectLabel(DIE *Die, unsigned Attribute, unsigned Form,
+void DwarfDebug::addObjectLabel(DIE *Die, unsigned Attribute, unsigned Form,
                                 const std::string &Label) {
-  FoldingSetNodeID ID;
-  DIEObjectLabel::Profile(ID, Label);
-  void *Where;
-  DIEValue *Value = ValuesSet.FindNodeOrInsertPos(ID, Where);
-
-  if (!Value) {
-    Value = new DIEObjectLabel(Label);
-    ValuesSet.InsertNode(Value, Where);
-    Values.push_back(Value);
-  }
-
-  Die->AddValue(Attribute, Form, Value);
+  DIEValue *Value = new DIEObjectLabel(Label);
+  DIEValues.push_back(Value);
+  Die->addValue(Attribute, Form, Value);
 }
 
-/// AddSectionOffset - Add a section offset label attribute data and value.
+/// addSectionOffset - Add a section offset label attribute data and value.
 ///
-void DwarfDebug::AddSectionOffset(DIE *Die, unsigned Attribute, unsigned Form,
+void DwarfDebug::addSectionOffset(DIE *Die, unsigned Attribute, unsigned Form,
                                   const DWLabel &Label, const DWLabel &Section,
                                   bool isEH, bool useSet) {
-  FoldingSetNodeID ID;
-  DIESectionOffset::Profile(ID, Label, Section);
-  void *Where;
-  DIEValue *Value = ValuesSet.FindNodeOrInsertPos(ID, Where);
-
-  if (!Value) {
-    Value = new DIESectionOffset(Label, Section, isEH, useSet);
-    ValuesSet.InsertNode(Value, Where);
-    Values.push_back(Value);
-  }
-
-  Die->AddValue(Attribute, Form, Value);
+  DIEValue *Value = new DIESectionOffset(Label, Section, isEH, useSet);
+  DIEValues.push_back(Value);
+  Die->addValue(Attribute, Form, Value);
 }
 
-/// AddDelta - Add a label delta attribute data and value.
+/// addDelta - Add a label delta attribute data and value.
 ///
-void DwarfDebug::AddDelta(DIE *Die, unsigned Attribute, unsigned Form,
+void DwarfDebug::addDelta(DIE *Die, unsigned Attribute, unsigned Form,
                           const DWLabel &Hi, const DWLabel &Lo) {
-  FoldingSetNodeID ID;
-  DIEDelta::Profile(ID, Hi, Lo);
-  void *Where;
-  DIEValue *Value = ValuesSet.FindNodeOrInsertPos(ID, Where);
-
-  if (!Value) {
-    Value = new DIEDelta(Hi, Lo);
-    ValuesSet.InsertNode(Value, Where);
-    Values.push_back(Value);
-  }
-
-  Die->AddValue(Attribute, Form, Value);
+  DIEValue *Value = new DIEDelta(Hi, Lo);
+  DIEValues.push_back(Value);
+  Die->addValue(Attribute, Form, Value);
 }
 
-/// AddBlock - Add block data.
+/// addBlock - Add block data.
 ///
-void DwarfDebug::AddBlock(DIE *Die, unsigned Attribute, unsigned Form,
+void DwarfDebug::addBlock(DIE *Die, unsigned Attribute, unsigned Form,
                           DIEBlock *Block) {
   Block->ComputeSize(TD);
-  FoldingSetNodeID ID;
-  Block->Profile(ID);
-  void *Where;
-  DIEValue *Value = ValuesSet.FindNodeOrInsertPos(ID, Where);
-
-  if (!Value) {
-    Value = Block;
-    ValuesSet.InsertNode(Value, Where);
-    Values.push_back(Value);
-  } else {
-    // Already exists, reuse the previous one.
-    delete Block;
-    Block = cast<DIEBlock>(Value);
-  }
-
-  Die->AddValue(Attribute, Block->BestForm(), Value);
+  DIEValues.push_back(Block);
+  Die->addValue(Attribute, Block->BestForm(), Block);
 }
 
-/// AddSourceLine - Add location information to specified debug information
+/// addSourceLine - Add location information to specified debug information
 /// entry.
-void DwarfDebug::AddSourceLine(DIE *Die, const DIVariable *V) {
+void DwarfDebug::addSourceLine(DIE *Die, const DIVariable *V) {
   // If there is no compile unit specified, don't add a line #.
   if (V->getCompileUnit().isNull())
     return;
 
   unsigned Line = V->getLineNumber();
-  unsigned FileID = FindCompileUnit(V->getCompileUnit()).getID();
+  unsigned FileID = findCompileUnit(V->getCompileUnit()).getID();
   assert(FileID && "Invalid file id");
-  AddUInt(Die, dwarf::DW_AT_decl_file, 0, FileID);
-  AddUInt(Die, dwarf::DW_AT_decl_line, 0, Line);
+  addUInt(Die, dwarf::DW_AT_decl_file, 0, FileID);
+  addUInt(Die, dwarf::DW_AT_decl_line, 0, Line);
 }
 
-/// AddSourceLine - Add location information to specified debug information
+/// addSourceLine - Add location information to specified debug information
 /// entry.
-void DwarfDebug::AddSourceLine(DIE *Die, const DIGlobal *G) {
+void DwarfDebug::addSourceLine(DIE *Die, const DIGlobal *G) {
   // If there is no compile unit specified, don't add a line #.
   if (G->getCompileUnit().isNull())
     return;
 
   unsigned Line = G->getLineNumber();
-  unsigned FileID = FindCompileUnit(G->getCompileUnit()).getID();
+  unsigned FileID = findCompileUnit(G->getCompileUnit()).getID();
   assert(FileID && "Invalid file id");
-  AddUInt(Die, dwarf::DW_AT_decl_file, 0, FileID);
-  AddUInt(Die, dwarf::DW_AT_decl_line, 0, Line);
+  addUInt(Die, dwarf::DW_AT_decl_file, 0, FileID);
+  addUInt(Die, dwarf::DW_AT_decl_line, 0, Line);
 }
 
-/// AddSourceLine - Add location information to specified debug information
+/// addSourceLine - Add location information to specified debug information
 /// entry.
-void DwarfDebug::AddSourceLine(DIE *Die, const DISubprogram *SP) {
+void DwarfDebug::addSourceLine(DIE *Die, const DISubprogram *SP) {
   // If there is no compile unit specified, don't add a line #.
   if (SP->getCompileUnit().isNull())
     return;
@@ -522,25 +425,25 @@ void DwarfDebug::AddSourceLine(DIE *Die, const DISubprogram *SP) {
 
 
   unsigned Line = SP->getLineNumber();
-  unsigned FileID = FindCompileUnit(SP->getCompileUnit()).getID();
+  unsigned FileID = findCompileUnit(SP->getCompileUnit()).getID();
   assert(FileID && "Invalid file id");
-  AddUInt(Die, dwarf::DW_AT_decl_file, 0, FileID);
-  AddUInt(Die, dwarf::DW_AT_decl_line, 0, Line);
+  addUInt(Die, dwarf::DW_AT_decl_file, 0, FileID);
+  addUInt(Die, dwarf::DW_AT_decl_line, 0, Line);
 }
 
-/// AddSourceLine - Add location information to specified debug information
+/// addSourceLine - Add location information to specified debug information
 /// entry.
-void DwarfDebug::AddSourceLine(DIE *Die, const DIType *Ty) {
+void DwarfDebug::addSourceLine(DIE *Die, const DIType *Ty) {
   // If there is no compile unit specified, don't add a line #.
   DICompileUnit CU = Ty->getCompileUnit();
   if (CU.isNull())
     return;
 
   unsigned Line = Ty->getLineNumber();
-  unsigned FileID = FindCompileUnit(CU).getID();
+  unsigned FileID = findCompileUnit(CU).getID();
   assert(FileID && "Invalid file id");
-  AddUInt(Die, dwarf::DW_AT_decl_file, 0, FileID);
-  AddUInt(Die, dwarf::DW_AT_decl_line, 0, Line);
+  addUInt(Die, dwarf::DW_AT_decl_file, 0, FileID);
+  addUInt(Die, dwarf::DW_AT_decl_line, 0, Line);
 }
 
 /* Byref variables, in Blocks, are declared by the programmer as
@@ -566,12 +469,12 @@ void DwarfDebug::AddSourceLine(DIE *Die, const DIType *Ty) {
    side, the Debug Information Entry for the variable VarName needs to
    have a DW_AT_location that tells the debugger how to unwind through
    the pointers and __Block_byref_x_VarName struct to find the actual
-   value of the variable.  The function AddBlockByrefType does this.  */
+   value of the variable.  The function addBlockByrefAddress does this.  */
 
 /// Find the type the programmer originally declared the variable to be
 /// and return that type.
 ///
-DIType DwarfDebug::GetBlockByrefType(DIType Ty, std::string Name) {
+DIType DwarfDebug::getBlockByrefType(DIType Ty, std::string Name) {
 
   DIType subType = Ty;
   unsigned tag = Ty.getTag();
@@ -591,19 +494,19 @@ DIType DwarfDebug::GetBlockByrefType(DIType Ty, std::string Name) {
   for (unsigned i = 0, N = Elements.getNumElements(); i < N; ++i) {
     DIDescriptor Element = Elements.getElement(i);
     DIDerivedType DT = DIDerivedType(Element.getNode());
-    if (strcmp(Name.c_str(), DT.getName()) == 0)
+    if (Name == DT.getName())
       return (DT.getTypeDerivedFrom());
   }
 
   return Ty;
 }
 
-/// AddComplexAddress - Start with the address based on the location provided,
+/// addComplexAddress - Start with the address based on the location provided,
 /// and generate the DWARF information necessary to find the actual variable
 /// given the extra address information encoded in the DIVariable, starting from
 /// the starting location.  Add the DWARF information to the die.
 ///
-void DwarfDebug::AddComplexAddress(DbgVariable *&DV, DIE *Die,
+void DwarfDebug::addComplexAddress(DbgVariable *&DV, DIE *Die,
                                    unsigned Attribute,
                                    const MachineLocation &Location) {
   const DIVariable &VD = DV->getVariable();
@@ -616,36 +519,36 @@ void DwarfDebug::AddComplexAddress(DbgVariable *&DV, DIE *Die,
 
   if (Location.isReg()) {
     if (Reg < 32) {
-      AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_reg0 + Reg);
+      addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_reg0 + Reg);
     } else {
       Reg = Reg - dwarf::DW_OP_reg0;
-      AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_breg0 + Reg);
-      AddUInt(Block, 0, dwarf::DW_FORM_udata, Reg);
+      addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_breg0 + Reg);
+      addUInt(Block, 0, dwarf::DW_FORM_udata, Reg);
     }
   } else {
     if (Reg < 32)
-      AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_breg0 + Reg);
+      addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_breg0 + Reg);
     else {
-      AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_bregx);
-      AddUInt(Block, 0, dwarf::DW_FORM_udata, Reg);
+      addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_bregx);
+      addUInt(Block, 0, dwarf::DW_FORM_udata, Reg);
     }
 
-    AddUInt(Block, 0, dwarf::DW_FORM_sdata, Location.getOffset());
+    addUInt(Block, 0, dwarf::DW_FORM_sdata, Location.getOffset());
   }
 
   for (unsigned i = 0, N = VD.getNumAddrElements(); i < N; ++i) {
     uint64_t Element = VD.getAddrElement(i);
 
     if (Element == DIFactory::OpPlus) {
-      AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_plus_uconst);
-      AddUInt(Block, 0, dwarf::DW_FORM_udata, VD.getAddrElement(++i));
+      addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_plus_uconst);
+      addUInt(Block, 0, dwarf::DW_FORM_udata, VD.getAddrElement(++i));
     } else if (Element == DIFactory::OpDeref) {
-      AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_deref);
+      addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_deref);
     } else llvm_unreachable("unknown DIFactory Opcode");
   }
 
   // Now attach the location information to the DIE.
-  AddBlock(Die, Attribute, 0, Block);
+  addBlock(Die, Attribute, 0, Block);
 }
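
To make the loop above concrete, here is a hypothetical caller (DV and VariableDie are stand-ins; the address elements are invented): a variable located at frame-register + 0 whose DIVariable carries the elements {OpPlus, 8, OpDeref}.

    // Hypothetical usage, illustration only.
    MachineLocation Loc;
    Loc.set(RI->getFrameRegister(*MF), /*Offset=*/0);  // memory form -> breg
    addComplexAddress(DV, VariableDie, dwarf::DW_AT_location, Loc);
    // Emitted block:  DW_OP_breg<Reg> 0     (base address from the location)
    //                 DW_OP_plus_uconst 8   (the OpPlus element, operand 8)
    //                 DW_OP_deref           (the OpDeref element)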
 
 /* Byref variables, in Blocks, are declared by the programmer as "SomeType
@@ -657,7 +560,7 @@ void DwarfDebug::AddComplexAddress(DbgVariable *&DV, DIE *Die,
    However, as far as the original *programmer* is concerned, the variable
    should still have type 'SomeType', as originally declared.
 
-   The function GetBlockByrefType dives into the __Block_byref_x_VarName
+   The function getBlockByrefType dives into the __Block_byref_x_VarName
    struct to find the original type of the variable, which is then assigned to
    the variable's Debug Information Entry as its real type.  So far, so good.
    However now the debugger will expect the variable VarName to have the type
@@ -702,13 +605,13 @@ void DwarfDebug::AddComplexAddress(DbgVariable *&DV, DIE *Die,
 
    That is what this function does.  */
 
-/// AddBlockByrefAddress - Start with the address based on the location
+/// addBlockByrefAddress - Start with the address based on the location
 /// provided, and generate the DWARF information necessary to find the
 /// actual Block variable (navigating the Block struct) based on the
 /// starting location.  Add the DWARF information to the die.  For
 /// more information, read large comment just above here.
 ///
-void DwarfDebug::AddBlockByrefAddress(DbgVariable *&DV, DIE *Die,
+void DwarfDebug::addBlockByrefAddress(DbgVariable *&DV, DIE *Die,
                                       unsigned Attribute,
                                       const MachineLocation &Location) {
   const DIVariable &VD = DV->getVariable();
@@ -717,7 +620,7 @@ void DwarfDebug::AddBlockByrefAddress(DbgVariable *&DV, DIE *Die,
   unsigned Tag = Ty.getTag();
   bool isPointer = false;
 
-  const char *varName = VD.getName();
+  StringRef varName = VD.getName();
 
   if (Tag == dwarf::DW_TAG_pointer_type) {
     DIDerivedType DTy = DIDerivedType(Ty.getNode());
@@ -737,10 +640,10 @@ void DwarfDebug::AddBlockByrefAddress(DbgVariable *&DV, DIE *Die,
   for (unsigned i = 0, N = Fields.getNumElements(); i < N; ++i) {
     DIDescriptor Element = Fields.getElement(i);
     DIDerivedType DT = DIDerivedType(Element.getNode());
-    const char *fieldName = DT.getName();
-    if (strcmp(fieldName, "__forwarding") == 0)
+    StringRef fieldName = DT.getName();
+    if (fieldName == "__forwarding")
       forwardingField = Element;
-    else if (strcmp(fieldName, varName) == 0)
+    else if (fieldName == varName)
       varField = Element;
   }
 
@@ -761,148 +664,144 @@ void DwarfDebug::AddBlockByrefAddress(DbgVariable *&DV, DIE *Die,
 
   if (Location.isReg()) {
     if (Reg < 32)
-      AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_reg0 + Reg);
+      addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_reg0 + Reg);
     else {
       Reg = Reg - dwarf::DW_OP_reg0;
-      AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_breg0 + Reg);
-      AddUInt(Block, 0, dwarf::DW_FORM_udata, Reg);
+      addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_breg0 + Reg);
+      addUInt(Block, 0, dwarf::DW_FORM_udata, Reg);
     }
   } else {
     if (Reg < 32)
-      AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_breg0 + Reg);
+      addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_breg0 + Reg);
     else {
-      AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_bregx);
-      AddUInt(Block, 0, dwarf::DW_FORM_udata, Reg);
+      addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_bregx);
+      addUInt(Block, 0, dwarf::DW_FORM_udata, Reg);
     }
 
-    AddUInt(Block, 0, dwarf::DW_FORM_sdata, Location.getOffset());
+    addUInt(Block, 0, dwarf::DW_FORM_sdata, Location.getOffset());
   }
 
   // If we started with a pointer to the __Block_byref... struct, then
   // the first thing we need to do is dereference the pointer (DW_OP_deref).
   if (isPointer)
-    AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_deref);
+    addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_deref);
 
   // Next add the offset for the '__forwarding' field:
   // DW_OP_plus_uconst ForwardingFieldOffset.  Note there's no point in
   // adding the offset if it's 0.
   if (forwardingFieldOffset > 0) {
-    AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_plus_uconst);
-    AddUInt(Block, 0, dwarf::DW_FORM_udata, forwardingFieldOffset);
+    addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_plus_uconst);
+    addUInt(Block, 0, dwarf::DW_FORM_udata, forwardingFieldOffset);
   }
 
   // Now dereference the __forwarding field to get to the real __Block_byref
   // struct:  DW_OP_deref.
-  AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_deref);
+  addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_deref);
 
   // Now that we've got the real __Block_byref... struct, add the offset
   // for the variable's field to get to the location of the actual variable:
   // DW_OP_plus_uconst varFieldOffset.  Again, don't add if it's 0.
   if (varFieldOffset > 0) {
-    AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_plus_uconst);
-    AddUInt(Block, 0, dwarf::DW_FORM_udata, varFieldOffset);
+    addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_plus_uconst);
+    addUInt(Block, 0, dwarf::DW_FORM_udata, varFieldOffset);
   }
 
   // Now attach the location information to the DIE.
-  AddBlock(Die, Attribute, 0, Block);
+  addBlock(Die, Attribute, 0, Block);
 }
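
For a non-pointer byref variable, the block built above therefore reads as follows, with hypothetical offsets of 8 for the __forwarding field and 24 for the variable's own field:

    // Illustration only; the offsets are invented:
    //   DW_OP_breg<Reg> <off>     address of __Block_byref_x_VarName
    //   DW_OP_plus_uconst 8       step to the __forwarding field
    //   DW_OP_deref               follow it to the real byref struct
    //   DW_OP_plus_uconst 24      step to the variable's field
    // When isPointer is set, an extra DW_OP_deref is emitted right after
    // the base address to unwrap the pointer to the struct first.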
 
-/// AddAddress - Add an address attribute to a die based on the location
+/// addAddress - Add an address attribute to a die based on the location
 /// provided.
-void DwarfDebug::AddAddress(DIE *Die, unsigned Attribute,
+void DwarfDebug::addAddress(DIE *Die, unsigned Attribute,
                             const MachineLocation &Location) {
   unsigned Reg = RI->getDwarfRegNum(Location.getReg(), false);
   DIEBlock *Block = new DIEBlock();
 
   if (Location.isReg()) {
     if (Reg < 32) {
-      AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_reg0 + Reg);
+      addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_reg0 + Reg);
     } else {
-      AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_regx);
-      AddUInt(Block, 0, dwarf::DW_FORM_udata, Reg);
+      addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_regx);
+      addUInt(Block, 0, dwarf::DW_FORM_udata, Reg);
     }
   } else {
     if (Reg < 32) {
-      AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_breg0 + Reg);
+      addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_breg0 + Reg);
     } else {
-      AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_bregx);
-      AddUInt(Block, 0, dwarf::DW_FORM_udata, Reg);
+      addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_bregx);
+      addUInt(Block, 0, dwarf::DW_FORM_udata, Reg);
     }
 
-    AddUInt(Block, 0, dwarf::DW_FORM_sdata, Location.getOffset());
+    addUInt(Block, 0, dwarf::DW_FORM_sdata, Location.getOffset());
   }
 
-  AddBlock(Die, Attribute, 0, Block);
+  addBlock(Die, Attribute, 0, Block);
 }
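
The Reg < 32 checks in these helpers mirror how DWARF encodes registers: numbers 0-31 have compact one-byte opcodes, while larger numbers need the extended forms with a ULEB128 operand. For example (register numbers invented):

    // Register location (value lives in the register):
    //   Reg = 5   ->  DW_OP_reg5                       (single byte)
    //   Reg = 40  ->  DW_OP_regx,  ULEB128(40)         (extended form)
    // Memory location (register + offset):
    //   Reg = 5   ->  DW_OP_breg5, SLEB128(offset)
    //   Reg = 40  ->  DW_OP_bregx, ULEB128(40), SLEB128(offset)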
 
-/// AddType - Add a new type attribute to the specified entity.
-void DwarfDebug::AddType(CompileUnit *DW_Unit, DIE *Entity, DIType Ty) {
+/// addType - Add a new type attribute to the specified entity.
+void DwarfDebug::addType(CompileUnit *DW_Unit, DIE *Entity, DIType Ty) {
   if (Ty.isNull())
     return;
 
   // Check for pre-existence.
-  DIEEntry *&Slot = DW_Unit->getDIEEntrySlotFor(Ty.getNode());
+  DIEEntry *Entry = DW_Unit->getDIEEntry(Ty.getNode());
 
   // If it exists then use the existing value.
-  if (Slot) {
-    Entity->AddValue(dwarf::DW_AT_type, dwarf::DW_FORM_ref4, Slot);
+  if (Entry) {
+    Entity->addValue(dwarf::DW_AT_type, dwarf::DW_FORM_ref4, Entry);
     return;
   }
 
   // Set up proxy.
-  Slot = CreateDIEEntry();
+  Entry = createDIEEntry();
+  DW_Unit->insertDIEEntry(Ty.getNode(), Entry);
 
   // Construct type.
-  DIE Buffer(dwarf::DW_TAG_base_type);
+  DIE *Buffer = new DIE(dwarf::DW_TAG_base_type);
   if (Ty.isBasicType())
-    ConstructTypeDIE(DW_Unit, Buffer, DIBasicType(Ty.getNode()));
+    constructTypeDIE(DW_Unit, *Buffer, DIBasicType(Ty.getNode()));
   else if (Ty.isCompositeType())
-    ConstructTypeDIE(DW_Unit, Buffer, DICompositeType(Ty.getNode()));
+    constructTypeDIE(DW_Unit, *Buffer, DICompositeType(Ty.getNode()));
   else {
     assert(Ty.isDerivedType() && "Unknown kind of DIType");
-    ConstructTypeDIE(DW_Unit, Buffer, DIDerivedType(Ty.getNode()));
+    constructTypeDIE(DW_Unit, *Buffer, DIDerivedType(Ty.getNode()));
   }
 
   // Add debug information entry to entity and appropriate context.
   DIE *Die = NULL;
   DIDescriptor Context = Ty.getContext();
   if (!Context.isNull())
-    Die = DW_Unit->getDieMapSlotFor(Context.getNode());
-
-  if (Die) {
-    DIE *Child = new DIE(Buffer);
-    Die->AddChild(Child);
-    Buffer.Detach();
-    SetDIEEntry(Slot, Child);
-  } else {
-    Die = DW_Unit->AddDie(Buffer);
-    SetDIEEntry(Slot, Die);
-  }
+    Die = DW_Unit->getDIE(Context.getNode());
 
-  Entity->AddValue(dwarf::DW_AT_type, dwarf::DW_FORM_ref4, Slot);
+  if (Die)
+    Die->addChild(Buffer);
+  else
+    DW_Unit->addDie(Buffer);
+  Entry->setEntry(Buffer);
+  Entity->addValue(dwarf::DW_AT_type, dwarf::DW_FORM_ref4, Entry);
 }
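
Note the ordering in addType: the proxy DIEEntry is registered in the map before constructTypeDIE runs, so a self-referential type (say, a struct containing a pointer to itself) re-enters addType, finds the proxy, and stops instead of recursing forever. A condensed sketch of the shape, with error paths elided:

    // Condensed restatement of the memoization above, not new code:
    DIEEntry *Entry = DW_Unit->getDIEEntry(Ty.getNode());
    if (!Entry) {
      Entry = createDIEEntry();                      // empty proxy first...
      DW_Unit->insertDIEEntry(Ty.getNode(), Entry);  // ...visible to recursion
      // constructTypeDIE(...) may call addType() back into this node;
      // the lookup above then succeeds and the cycle is broken.
    }
    Entity->addValue(dwarf::DW_AT_type, dwarf::DW_FORM_ref4, Entry);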
 
-/// ConstructTypeDIE - Construct basic type die from DIBasicType.
-void DwarfDebug::ConstructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
+/// constructTypeDIE - Construct basic type die from DIBasicType.
+void DwarfDebug::constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
                                   DIBasicType BTy) {
   // Get core information.
-  const char *Name = BTy.getName();
+  StringRef Name = BTy.getName();
   Buffer.setTag(dwarf::DW_TAG_base_type);
-  AddUInt(&Buffer, dwarf::DW_AT_encoding,  dwarf::DW_FORM_data1,
+  addUInt(&Buffer, dwarf::DW_AT_encoding,  dwarf::DW_FORM_data1,
           BTy.getEncoding());
 
   // Add name if not anonymous or intermediate type.
-  if (Name)
-    AddString(&Buffer, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
+  if (!Name.empty())
+    addString(&Buffer, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
   uint64_t Size = BTy.getSizeInBits() >> 3;
-  AddUInt(&Buffer, dwarf::DW_AT_byte_size, 0, Size);
+  addUInt(&Buffer, dwarf::DW_AT_byte_size, 0, Size);
 }
 
-/// ConstructTypeDIE - Construct derived type die from DIDerivedType.
-void DwarfDebug::ConstructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
+/// constructTypeDIE - Construct derived type die from DIDerivedType.
+void DwarfDebug::constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
                                   DIDerivedType DTy) {
   // Get core information.
-  const char *Name = DTy.getName();
+  StringRef Name = DTy.getName();
   uint64_t Size = DTy.getSizeInBits() >> 3;
   unsigned Tag = DTy.getTag();
 
@@ -913,26 +812,26 @@ void DwarfDebug::ConstructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
 
   // Map to main type, void will not have a type.
   DIType FromTy = DTy.getTypeDerivedFrom();
-  AddType(DW_Unit, &Buffer, FromTy);
+  addType(DW_Unit, &Buffer, FromTy);
 
   // Add name if not anonymous or intermediate type.
-  if (Name)
-    AddString(&Buffer, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
+  if (!Name.empty() && Tag != dwarf::DW_TAG_pointer_type)
+    addString(&Buffer, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
 
   // Add size if non-zero (derived types might be zero-sized.)
   if (Size)
-    AddUInt(&Buffer, dwarf::DW_AT_byte_size, 0, Size);
+    addUInt(&Buffer, dwarf::DW_AT_byte_size, 0, Size);
 
   // Add source line info if available and TyDesc is not a forward declaration.
   if (!DTy.isForwardDecl())
-    AddSourceLine(&Buffer, &DTy);
+    addSourceLine(&Buffer, &DTy);
 }
 
-/// ConstructTypeDIE - Construct type DIE from DICompositeType.
-void DwarfDebug::ConstructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
+/// constructTypeDIE - Construct type DIE from DICompositeType.
+void DwarfDebug::constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
                                   DICompositeType CTy) {
   // Get core information.
-  const char *Name = CTy.getName();
+  StringRef Name = CTy.getName();
 
   uint64_t Size = CTy.getSizeInBits() >> 3;
   unsigned Tag = CTy.getTag();
@@ -941,7 +840,7 @@ void DwarfDebug::ConstructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
   switch (Tag) {
   case dwarf::DW_TAG_vector_type:
   case dwarf::DW_TAG_array_type:
-    ConstructArrayTypeDIE(DW_Unit, Buffer, &CTy);
+    constructArrayTypeDIE(DW_Unit, Buffer, &CTy);
     break;
   case dwarf::DW_TAG_enumeration_type: {
     DIArray Elements = CTy.getTypeArray();
@@ -950,8 +849,10 @@ void DwarfDebug::ConstructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
     for (unsigned i = 0, N = Elements.getNumElements(); i < N; ++i) {
       DIE *ElemDie = NULL;
       DIEnumerator Enum(Elements.getElement(i).getNode());
-      ElemDie = ConstructEnumTypeDIE(DW_Unit, &Enum);
-      Buffer.AddChild(ElemDie);
+      if (!Enum.isNull()) {
+        ElemDie = constructEnumTypeDIE(DW_Unit, &Enum);
+        Buffer.addChild(ElemDie);
+      }
     }
   }
     break;
@@ -959,17 +860,17 @@ void DwarfDebug::ConstructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
     // Add return type.
     DIArray Elements = CTy.getTypeArray();
     DIDescriptor RTy = Elements.getElement(0);
-    AddType(DW_Unit, &Buffer, DIType(RTy.getNode()));
+    addType(DW_Unit, &Buffer, DIType(RTy.getNode()));
 
     // Add prototype flag.
-    AddUInt(&Buffer, dwarf::DW_AT_prototyped, dwarf::DW_FORM_flag, 1);
+    addUInt(&Buffer, dwarf::DW_AT_prototyped, dwarf::DW_FORM_flag, 1);
 
     // Add arguments.
     for (unsigned i = 1, N = Elements.getNumElements(); i < N; ++i) {
       DIE *Arg = new DIE(dwarf::DW_TAG_formal_parameter);
       DIDescriptor Ty = Elements.getElement(i);
-      AddType(DW_Unit, Arg, DIType(Ty.getNode()));
-      Buffer.AddChild(Arg);
+      addType(DW_Unit, Arg, DIType(Ty.getNode()));
+      Buffer.addChild(Arg);
     }
   }
     break;
@@ -990,20 +891,20 @@ void DwarfDebug::ConstructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
         continue;
       DIE *ElemDie = NULL;
       if (Element.getTag() == dwarf::DW_TAG_subprogram)
-        ElemDie = CreateSubprogramDIE(DW_Unit,
+        ElemDie = createSubprogramDIE(DW_Unit,
                                       DISubprogram(Element.getNode()));
       else
-        ElemDie = CreateMemberDIE(DW_Unit,
+        ElemDie = createMemberDIE(DW_Unit,
                                   DIDerivedType(Element.getNode()));
-      Buffer.AddChild(ElemDie);
+      Buffer.addChild(ElemDie);
     }
 
     if (CTy.isAppleBlockExtension())
-      AddUInt(&Buffer, dwarf::DW_AT_APPLE_block, dwarf::DW_FORM_flag, 1);
+      addUInt(&Buffer, dwarf::DW_AT_APPLE_block, dwarf::DW_FORM_flag, 1);
 
     unsigned RLang = CTy.getRunTimeLang();
     if (RLang)
-      AddUInt(&Buffer, dwarf::DW_AT_APPLE_runtime_class,
+      addUInt(&Buffer, dwarf::DW_AT_APPLE_runtime_class,
               dwarf::DW_FORM_data1, RLang);
     break;
   }
@@ -1012,121 +913,143 @@ void DwarfDebug::ConstructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
   }
 
   // Add name if not anonymous or intermediate type.
-  if (Name)
-    AddString(&Buffer, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
+  if (!Name.empty())
+    addString(&Buffer, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
 
   if (Tag == dwarf::DW_TAG_enumeration_type ||
       Tag == dwarf::DW_TAG_structure_type || Tag == dwarf::DW_TAG_union_type) {
     // Add size if non-zero (derived types might be zero-sized.)
     if (Size)
-      AddUInt(&Buffer, dwarf::DW_AT_byte_size, 0, Size);
+      addUInt(&Buffer, dwarf::DW_AT_byte_size, 0, Size);
     else {
       // Mark forward declarations as such; otherwise add an explicit zero size.
       if (CTy.isForwardDecl())
-        AddUInt(&Buffer, dwarf::DW_AT_declaration, dwarf::DW_FORM_flag, 1);
+        addUInt(&Buffer, dwarf::DW_AT_declaration, dwarf::DW_FORM_flag, 1);
       else
-        AddUInt(&Buffer, dwarf::DW_AT_byte_size, 0, 0);
+        addUInt(&Buffer, dwarf::DW_AT_byte_size, 0, 0);
     }
 
     // Add source line info if available.
     if (!CTy.isForwardDecl())
-      AddSourceLine(&Buffer, &CTy);
+      addSourceLine(&Buffer, &CTy);
   }
 }
 
-/// ConstructSubrangeDIE - Construct subrange DIE from DISubrange.
-void DwarfDebug::ConstructSubrangeDIE(DIE &Buffer, DISubrange SR, DIE *IndexTy){
+/// constructSubrangeDIE - Construct subrange DIE from DISubrange.
+void DwarfDebug::constructSubrangeDIE(DIE &Buffer, DISubrange SR, DIE *IndexTy){
   int64_t L = SR.getLo();
   int64_t H = SR.getHi();
   DIE *DW_Subrange = new DIE(dwarf::DW_TAG_subrange_type);
 
-  AddDIEEntry(DW_Subrange, dwarf::DW_AT_type, dwarf::DW_FORM_ref4, IndexTy);
+  addDIEEntry(DW_Subrange, dwarf::DW_AT_type, dwarf::DW_FORM_ref4, IndexTy);
   if (L)
-    AddSInt(DW_Subrange, dwarf::DW_AT_lower_bound, 0, L);
+    addSInt(DW_Subrange, dwarf::DW_AT_lower_bound, 0, L);
   if (H)
-    AddSInt(DW_Subrange, dwarf::DW_AT_upper_bound, 0, H);
+    addSInt(DW_Subrange, dwarf::DW_AT_upper_bound, 0, H);
 
-  Buffer.AddChild(DW_Subrange);
+  Buffer.addChild(DW_Subrange);
 }
 
-/// ConstructArrayTypeDIE - Construct array type DIE from DICompositeType.
-void DwarfDebug::ConstructArrayTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
+/// constructArrayTypeDIE - Construct array type DIE from DICompositeType.
+void DwarfDebug::constructArrayTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
                                        DICompositeType *CTy) {
   Buffer.setTag(dwarf::DW_TAG_array_type);
   if (CTy->getTag() == dwarf::DW_TAG_vector_type)
-    AddUInt(&Buffer, dwarf::DW_AT_GNU_vector, dwarf::DW_FORM_flag, 1);
+    addUInt(&Buffer, dwarf::DW_AT_GNU_vector, dwarf::DW_FORM_flag, 1);
 
   // Emit derived type.
-  AddType(DW_Unit, &Buffer, CTy->getTypeDerivedFrom());
+  addType(DW_Unit, &Buffer, CTy->getTypeDerivedFrom());
   DIArray Elements = CTy->getTypeArray();
 
-  // Construct an anonymous type for index type.
-  DIE IdxBuffer(dwarf::DW_TAG_base_type);
-  AddUInt(&IdxBuffer, dwarf::DW_AT_byte_size, 0, sizeof(int32_t));
-  AddUInt(&IdxBuffer, dwarf::DW_AT_encoding, dwarf::DW_FORM_data1,
-          dwarf::DW_ATE_signed);
-  DIE *IndexTy = DW_Unit->AddDie(IdxBuffer);
+  // Get an anonymous type for index type.
+  DIE *IdxTy = DW_Unit->getIndexTyDie();
+  if (!IdxTy) {
+    // Construct an anonymous type for index type.
+    IdxTy = new DIE(dwarf::DW_TAG_base_type);
+    addUInt(IdxTy, dwarf::DW_AT_byte_size, 0, sizeof(int32_t));
+    addUInt(IdxTy, dwarf::DW_AT_encoding, dwarf::DW_FORM_data1,
+            dwarf::DW_ATE_signed);
+    DW_Unit->addDie(IdxTy);
+    DW_Unit->setIndexTyDie(IdxTy);
+  }
 
   // Add subranges to array type.
   for (unsigned i = 0, N = Elements.getNumElements(); i < N; ++i) {
     DIDescriptor Element = Elements.getElement(i);
     if (Element.getTag() == dwarf::DW_TAG_subrange_type)
-      ConstructSubrangeDIE(Buffer, DISubrange(Element.getNode()), IndexTy);
+      constructSubrangeDIE(Buffer, DISubrange(Element.getNode()), IdxTy);
   }
 }
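
The index-type DIE is now created lazily, once per compile unit, rather than a fresh anonymous type per array. The get/setIndexTyDie pair used above presumably just wraps a cached member; a minimal sketch, assuming a DIE *IndexTyDie field on CompileUnit (not shown in this hunk):

    // Hedged sketch of the accessors this hunk relies on:
    DIE *CompileUnit::getIndexTyDie() {
      return IndexTyDie;                 // NULL until the first array type
    }
    void CompileUnit::setIndexTyDie(DIE *D) {
      IndexTyDie = D;                    // cache for subsequent arrays
    }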
 
-/// ConstructEnumTypeDIE - Construct enum type DIE from DIEnumerator.
-DIE *DwarfDebug::ConstructEnumTypeDIE(CompileUnit *DW_Unit, DIEnumerator *ETy) {
+/// constructEnumTypeDIE - Construct enum type DIE from DIEnumerator.
+DIE *DwarfDebug::constructEnumTypeDIE(CompileUnit *DW_Unit, DIEnumerator *ETy) {
   DIE *Enumerator = new DIE(dwarf::DW_TAG_enumerator);
-  const char *Name = ETy->getName();
-  AddString(Enumerator, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
+  StringRef Name = ETy->getName();
+  addString(Enumerator, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
   int64_t Value = ETy->getEnumValue();
-  AddSInt(Enumerator, dwarf::DW_AT_const_value, dwarf::DW_FORM_sdata, Value);
+  addSInt(Enumerator, dwarf::DW_AT_const_value, dwarf::DW_FORM_sdata, Value);
   return Enumerator;
 }
 
-/// CreateGlobalVariableDIE - Create new DIE using GV.
-DIE *DwarfDebug::CreateGlobalVariableDIE(CompileUnit *DW_Unit,
+/// createGlobalVariableDIE - Create new DIE using GV.
+DIE *DwarfDebug::createGlobalVariableDIE(CompileUnit *DW_Unit,
                                          const DIGlobalVariable &GV) {
+  // If the global variable was optimized out then there is no need to create
+  // a debug info entry.
+  if (!GV.getGlobal()) return NULL;
+  if (GV.getDisplayName().empty()) return NULL;
+
   DIE *GVDie = new DIE(dwarf::DW_TAG_variable);
-  AddString(GVDie, dwarf::DW_AT_name, dwarf::DW_FORM_string, 
+  addString(GVDie, dwarf::DW_AT_name, dwarf::DW_FORM_string,
             GV.getDisplayName());
 
-  const char *LinkageName = GV.getLinkageName();
-  if (LinkageName) {
+  StringRef LinkageName = GV.getLinkageName();
+  if (!LinkageName.empty()) {
     // Skip the special LLVM prefix that tells the asm printer not to emit the
     // usual symbol prefix before the symbol name. This happens for Objective-C
     // symbol names and symbols whose names are replaced using GCC's __asm__
     // attribute.
     if (LinkageName[0] == 1)
-      LinkageName = &LinkageName[1];
-    AddString(GVDie, dwarf::DW_AT_MIPS_linkage_name, dwarf::DW_FORM_string,
+      LinkageName = LinkageName.substr(1);
+    addString(GVDie, dwarf::DW_AT_MIPS_linkage_name, dwarf::DW_FORM_string,
               LinkageName);
   }
-  AddType(DW_Unit, GVDie, GV.getType());
+  addType(DW_Unit, GVDie, GV.getType());
   if (!GV.isLocalToUnit())
-    AddUInt(GVDie, dwarf::DW_AT_external, dwarf::DW_FORM_flag, 1);
-  AddSourceLine(GVDie, &GV);
+    addUInt(GVDie, dwarf::DW_AT_external, dwarf::DW_FORM_flag, 1);
+  addSourceLine(GVDie, &GV);
+
+  // Add address.
+  DIEBlock *Block = new DIEBlock();
+  addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_addr);
+  addObjectLabel(Block, 0, dwarf::DW_FORM_udata,
+                 Asm->Mang->getMangledName(GV.getGlobal()));
+  addBlock(GVDie, dwarf::DW_AT_location, 0, Block);
+
   return GVDie;
 }
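
The LinkageName[0] == 1 check strips LLVM's "\1" marker, which tells the asm printer not to prepend the platform symbol prefix. For instance (name invented), an Objective-C method could carry the linkage name "\1-[Foo bar]" and be recorded in DW_AT_MIPS_linkage_name without the marker:

    // Illustration with an invented name:
    StringRef LinkageName("\1-[Foo bar]");
    if (LinkageName[0] == 1)
      LinkageName = LinkageName.substr(1);  // now "-[Foo bar]"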
 
-/// CreateMemberDIE - Create new member DIE.
-DIE *DwarfDebug::CreateMemberDIE(CompileUnit *DW_Unit, const DIDerivedType &DT){
+/// createMemberDIE - Create new member DIE.
+DIE *DwarfDebug::createMemberDIE(CompileUnit *DW_Unit, const DIDerivedType &DT){
   DIE *MemberDie = new DIE(DT.getTag());
-  if (const char *Name = DT.getName())
-    AddString(MemberDie, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
+  StringRef Name = DT.getName();
+  if (!Name.empty())
+    addString(MemberDie, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
+  
+  addType(DW_Unit, MemberDie, DT.getTypeDerivedFrom());
 
-  AddType(DW_Unit, MemberDie, DT.getTypeDerivedFrom());
+  addSourceLine(MemberDie, &DT);
 
-  AddSourceLine(MemberDie, &DT);
+  DIEBlock *MemLocationDie = new DIEBlock();
+  addUInt(MemLocationDie, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_plus_uconst);
 
   uint64_t Size = DT.getSizeInBits();
   uint64_t FieldSize = DT.getOriginalTypeSize();
 
   if (Size != FieldSize) {
     // Handle bitfield.
-    AddUInt(MemberDie, dwarf::DW_AT_byte_size, 0, DT.getOriginalTypeSize()>>3);
-    AddUInt(MemberDie, dwarf::DW_AT_bit_size, 0, DT.getSizeInBits());
+    addUInt(MemberDie, dwarf::DW_AT_byte_size, 0, DT.getOriginalTypeSize()>>3);
+    addUInt(MemberDie, dwarf::DW_AT_bit_size, 0, DT.getSizeInBits());
 
     uint64_t Offset = DT.getOffsetInBits();
     uint64_t FieldOffset = Offset;
@@ -1137,45 +1060,48 @@ DIE *DwarfDebug::CreateMemberDIE(CompileUnit *DW_Unit, const DIDerivedType &DT){
 
     // Maybe we need to work from the other end.
     if (TD->isLittleEndian()) Offset = FieldSize - (Offset + Size);
-    AddUInt(MemberDie, dwarf::DW_AT_bit_offset, 0, Offset);
-  }
+    addUInt(MemberDie, dwarf::DW_AT_bit_offset, 0, Offset);
 
-  DIEBlock *Block = new DIEBlock();
-  AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_plus_uconst);
-  AddUInt(Block, 0, dwarf::DW_FORM_udata, DT.getOffsetInBits() >> 3);
-  AddBlock(MemberDie, dwarf::DW_AT_data_member_location, 0, Block);
+    // Here DW_AT_data_member_location points to the anonymous
+    // field that includes this bit field.
+    addUInt(MemLocationDie, 0, dwarf::DW_FORM_udata, FieldOffset >> 3);
+
+  } else
+    // This is not a bitfield.
+    addUInt(MemLocationDie, 0, dwarf::DW_FORM_udata, DT.getOffsetInBits() >> 3);
+
+  addBlock(MemberDie, dwarf::DW_AT_data_member_location, 0, MemLocationDie);
 
   if (DT.isProtected())
-    AddUInt(MemberDie, dwarf::DW_AT_accessibility, 0,
+    addUInt(MemberDie, dwarf::DW_AT_accessibility, 0,
             dwarf::DW_ACCESS_protected);
   else if (DT.isPrivate())
-    AddUInt(MemberDie, dwarf::DW_AT_accessibility, 0,
+    addUInt(MemberDie, dwarf::DW_AT_accessibility, 0,
             dwarf::DW_ACCESS_private);
 
   return MemberDie;
 }
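
A worked bitfield example with invented numbers: a 3-bit member declared at bit offset 5 inside a 32-bit storage unit, on a little-endian target, produces

    //   DW_AT_byte_size            = 32 >> 3       = 4
    //   DW_AT_bit_size             = 3
    //   DW_AT_bit_offset           = 32 - (5 + 3)  = 24  (little-endian flip)
    //   DW_AT_data_member_location = FieldOffset >> 3
    // where FieldOffset (the start of the containing storage unit) is
    // computed in lines elided from this hunk.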
 
-/// CreateSubprogramDIE - Create new DIE using SP.
-DIE *DwarfDebug::CreateSubprogramDIE(CompileUnit *DW_Unit,
+/// createSubprogramDIE - Create new DIE using SP.
+DIE *DwarfDebug::createSubprogramDIE(CompileUnit *DW_Unit,
                                      const DISubprogram &SP,
                                      bool IsConstructor,
                                      bool IsInlined) {
   DIE *SPDie = new DIE(dwarf::DW_TAG_subprogram);
+  addString(SPDie, dwarf::DW_AT_name, dwarf::DW_FORM_string, SP.getName());
 
-  const char * Name = SP.getName();
-  AddString(SPDie, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
-
-  const char *LinkageName = SP.getLinkageName();
-  if (LinkageName) {
-    // Skip special LLVM prefix that is used to inform the asm printer to not emit
-    // usual symbol prefix before the symbol name. This happens for Objective-C
-    // symbol names and symbol whose name is replaced using GCC's __asm__ attribute.
+  StringRef LinkageName = SP.getLinkageName();
+  if (!LinkageName.empty()) {
+    // Skip the special LLVM prefix that tells the asm printer not to emit the
+    // usual symbol prefix before the symbol name. This happens for Objective-C
+    // symbol names and symbols whose names are replaced using GCC's __asm__
+    // attribute.
     if (LinkageName[0] == 1)
-      LinkageName = &LinkageName[1];
-    AddString(SPDie, dwarf::DW_AT_MIPS_linkage_name, dwarf::DW_FORM_string,
+      LinkageName = LinkageName.substr(1);
+    addString(SPDie, dwarf::DW_AT_MIPS_linkage_name, dwarf::DW_FORM_string,
               LinkageName);
   }
-  AddSourceLine(SPDie, &SP);
+  addSourceLine(SPDie, &SP);
 
   DICompositeType SPTy = SP.getType();
   DIArray Args = SPTy.getTypeArray();
@@ -1184,54 +1110,53 @@ DIE *DwarfDebug::CreateSubprogramDIE(CompileUnit *DW_Unit,
   unsigned Lang = SP.getCompileUnit().getLanguage();
   if (Lang == dwarf::DW_LANG_C99 || Lang == dwarf::DW_LANG_C89 ||
       Lang == dwarf::DW_LANG_ObjC)
-    AddUInt(SPDie, dwarf::DW_AT_prototyped, dwarf::DW_FORM_flag, 1);
+    addUInt(SPDie, dwarf::DW_AT_prototyped, dwarf::DW_FORM_flag, 1);
 
   // Add Return Type.
   unsigned SPTag = SPTy.getTag();
   if (!IsConstructor) {
     if (Args.isNull() || SPTag != dwarf::DW_TAG_subroutine_type)
-      AddType(DW_Unit, SPDie, SPTy);
+      addType(DW_Unit, SPDie, SPTy);
     else
-      AddType(DW_Unit, SPDie, DIType(Args.getElement(0).getNode()));
+      addType(DW_Unit, SPDie, DIType(Args.getElement(0).getNode()));
   }
 
   if (!SP.isDefinition()) {
-    AddUInt(SPDie, dwarf::DW_AT_declaration, dwarf::DW_FORM_flag, 1);
+    addUInt(SPDie, dwarf::DW_AT_declaration, dwarf::DW_FORM_flag, 1);
 
     // Add arguments. Do not add arguments for subprogram definition. They will
     // be handled through RecordVariable.
     if (SPTag == dwarf::DW_TAG_subroutine_type)
       for (unsigned i = 1, N =  Args.getNumElements(); i < N; ++i) {
         DIE *Arg = new DIE(dwarf::DW_TAG_formal_parameter);
-        AddType(DW_Unit, Arg, DIType(Args.getElement(i).getNode()));
-        AddUInt(Arg, dwarf::DW_AT_artificial, dwarf::DW_FORM_flag, 1); // ??
-        SPDie->AddChild(Arg);
+        addType(DW_Unit, Arg, DIType(Args.getElement(i).getNode()));
+        addUInt(Arg, dwarf::DW_AT_artificial, dwarf::DW_FORM_flag, 1); // ??
+        SPDie->addChild(Arg);
       }
   }
 
-  if (!SP.isLocalToUnit() && !IsInlined)
-    AddUInt(SPDie, dwarf::DW_AT_external, dwarf::DW_FORM_flag, 1);
-
   // DW_TAG_inlined_subroutine may refer to this DIE.
-  DIE *&Slot = DW_Unit->getDieMapSlotFor(SP.getNode());
-  Slot = SPDie;
+  DW_Unit->insertDIE(SP.getNode(), SPDie);
   return SPDie;
 }
 
-/// FindCompileUnit - Get the compile unit for the given descriptor.
+/// findCompileUnit - Get the compile unit for the given descriptor.
 ///
-CompileUnit &DwarfDebug::FindCompileUnit(DICompileUnit Unit) const {
+CompileUnit &DwarfDebug::findCompileUnit(DICompileUnit Unit) const {
   DenseMap<Value *, CompileUnit *>::const_iterator I =
     CompileUnitMap.find(Unit.getNode());
   assert(I != CompileUnitMap.end() && "Missing compile unit.");
   return *I->second;
 }
 
-/// CreateDbgScopeVariable - Create a new scope variable.
+/// createDbgScopeVariable - Create a new scope variable.
 ///
-DIE *DwarfDebug::CreateDbgScopeVariable(DbgVariable *DV, CompileUnit *Unit) {
+DIE *DwarfDebug::createDbgScopeVariable(DbgVariable *DV, CompileUnit *Unit) {
   // Get the descriptor.
   const DIVariable &VD = DV->getVariable();
+  StringRef Name = VD.getName();
+  if (Name.empty())
+    return NULL;
 
   // Translate tag to proper Dwarf tag.  The result variable is dropped for
   // now.
@@ -1250,252 +1175,378 @@ DIE *DwarfDebug::CreateDbgScopeVariable(DbgVariable *DV, CompileUnit *Unit) {
 
   // Define variable debug information entry.
   DIE *VariableDie = new DIE(Tag);
-  const char *Name = VD.getName();
-  AddString(VariableDie, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
+  addString(VariableDie, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
 
   // Add source line info if available.
-  AddSourceLine(VariableDie, &VD);
+  addSourceLine(VariableDie, &VD);
 
   // Add variable type.
-  // FIXME: isBlockByrefVariable should be reformulated in terms of complex addresses instead.
+  // FIXME: isBlockByrefVariable should be reformulated in terms of complex
+  // addresses instead.
   if (VD.isBlockByrefVariable())
-    AddType(Unit, VariableDie, GetBlockByrefType(VD.getType(), Name));
+    addType(Unit, VariableDie, getBlockByrefType(VD.getType(), Name));
   else
-    AddType(Unit, VariableDie, VD.getType());
+    addType(Unit, VariableDie, VD.getType());
 
   // Add variable address.
-  if (!DV->isInlinedFnVar()) {
-    // Variables for abstract instances of inlined functions don't get a
-    // location.
-    MachineLocation Location;
-    Location.set(RI->getFrameRegister(*MF),
-                 RI->getFrameIndexOffset(*MF, DV->getFrameIndex()));
+  // Variables for abstract instances of inlined functions don't get a
+  // location.
+  MachineLocation Location;
+  unsigned FrameReg;
+  int Offset = RI->getFrameIndexReference(*MF, DV->getFrameIndex(), FrameReg);
+  Location.set(FrameReg, Offset);
+
+
+  if (VD.hasComplexAddress())
+    addComplexAddress(DV, VariableDie, dwarf::DW_AT_location, Location);
+  else if (VD.isBlockByrefVariable())
+    addBlockByrefAddress(DV, VariableDie, dwarf::DW_AT_location, Location);
+  else
+    addAddress(VariableDie, dwarf::DW_AT_location, Location);
 
+  return VariableDie;
+}
 
-    if (VD.hasComplexAddress())
-      AddComplexAddress(DV, VariableDie, dwarf::DW_AT_location, Location);
-    else if (VD.isBlockByrefVariable())
-      AddBlockByrefAddress(DV, VariableDie, dwarf::DW_AT_location, Location);
-    else
-      AddAddress(VariableDie, dwarf::DW_AT_location, Location);
+/// getUpdatedDbgScope - Find or create the DbgScope associated with the
+/// instruction. Initialize the scope and update the scope hierarchy.
+DbgScope *DwarfDebug::getUpdatedDbgScope(MDNode *N, const MachineInstr *MI,
+                                         MDNode *InlinedAt) {
+  assert (N && "Invalid Scope encoding!");
+  assert (MI && "Missing machine instruction!");
+  bool GetConcreteScope = (MI && InlinedAt);
+
+  DbgScope *NScope = NULL;
+
+  if (InlinedAt)
+    NScope = DbgScopeMap.lookup(InlinedAt);
+  else
+    NScope = DbgScopeMap.lookup(N);
+  assert (NScope && "Unable to find working scope!");
+
+  if (NScope->getFirstInsn())
+    return NScope;
+
+  DbgScope *Parent = NULL;
+  if (GetConcreteScope) {
+    DILocation IL(InlinedAt);
+    Parent = getUpdatedDbgScope(IL.getScope().getNode(), MI,
+                         IL.getOrigLocation().getNode());
+    assert (Parent && "Unable to find Parent scope!");
+    NScope->setParent(Parent);
+    Parent->addScope(NScope);
+  } else if (DIDescriptor(N).isLexicalBlock()) {
+    DILexicalBlock DB(N);
+    if (!DB.getContext().isNull()) {
+      Parent = getUpdatedDbgScope(DB.getContext().getNode(), MI, InlinedAt);
+      NScope->setParent(Parent);
+      Parent->addScope(NScope);
+    }
   }
 
-  return VariableDie;
+  NScope->setFirstInsn(MI);
+
+  if (!Parent && !InlinedAt) {
+    StringRef SPName = DISubprogram(N).getLinkageName();
+    if (SPName == MF->getFunction()->getName())
+      CurrentFnDbgScope = NScope;
+  }
+
+  if (GetConcreteScope) {
+    ConcreteScopes[InlinedAt] = NScope;
+    getOrCreateAbstractScope(N);
+  }
+
+  return NScope;
 }
 
-/// getOrCreateScope - Returns the scope associated with the given descriptor.
-///
-DbgScope *DwarfDebug::getDbgScope(MDNode *N, const MachineInstr *MI) {
-  DbgScope *&Slot = DbgScopeMap[N];
-  if (Slot) return Slot;
+DbgScope *DwarfDebug::getOrCreateAbstractScope(MDNode *N) {
+  assert (N && "Invalid Scope encoding!");
+
+  DbgScope *AScope = AbstractScopes.lookup(N);
+  if (AScope)
+    return AScope;
 
   DbgScope *Parent = NULL;
 
   DIDescriptor Scope(N);
-  if (Scope.isCompileUnit()) {
-    return NULL;
-  } else if (Scope.isSubprogram()) {
-    DISubprogram SP(N);
-    DIDescriptor ParentDesc = SP.getContext();
-    if (!ParentDesc.isNull() && !ParentDesc.isCompileUnit())
-      Parent = getDbgScope(ParentDesc.getNode(), MI);
-  } else if (Scope.isLexicalBlock()) {
+  if (Scope.isLexicalBlock()) {
     DILexicalBlock DB(N);
     DIDescriptor ParentDesc = DB.getContext();
     if (!ParentDesc.isNull())
-      Parent = getDbgScope(ParentDesc.getNode(), MI);
-  } else
-    assert (0 && "Unexpected scope info");
+      Parent = getOrCreateAbstractScope(ParentDesc.getNode());
+  }
 
-  Slot = new DbgScope(Parent, DIDescriptor(N));
-  Slot->setFirstInsn(MI);
+  AScope = new DbgScope(Parent, DIDescriptor(N), NULL);
 
   if (Parent)
-    Parent->AddScope(Slot);
-  else
-    // First function is top level function.
-    // FIXME - Dpatel - What is FunctionDbgScope ?
-    if (!FunctionDbgScope)
-      FunctionDbgScope = Slot;
+    Parent->addScope(AScope);
+  AScope->setAbstractScope();
+  AbstractScopes[N] = AScope;
+  if (DIDescriptor(N).isSubprogram())
+    AbstractScopesList.push_back(AScope);
+  return AScope;
+}
+
+/// updateSubprogramScopeDIE - Find DIE for the given subprogram and
+/// attach appropriate DW_AT_low_pc and DW_AT_high_pc attributes.
+/// If there are global variables in this scope then create and insert
+/// DIEs for these variables.
+DIE *DwarfDebug::updateSubprogramScopeDIE(MDNode *SPNode) {
+  DIE *SPDie = ModuleCU->getDIE(SPNode);
+  assert(SPDie && "Unable to find subprogram DIE!");
+  addLabel(SPDie, dwarf::DW_AT_low_pc, dwarf::DW_FORM_addr,
+           DWLabel("func_begin", SubprogramCount));
+  addLabel(SPDie, dwarf::DW_AT_high_pc, dwarf::DW_FORM_addr,
+           DWLabel("func_end", SubprogramCount));
+  MachineLocation Location(RI->getFrameRegister(*MF));
+  addAddress(SPDie, dwarf::DW_AT_frame_base, Location);
+
+  if (!DISubprogram(SPNode).isLocalToUnit())
+    addUInt(SPDie, dwarf::DW_AT_external, dwarf::DW_FORM_flag, 1);
+
+  // If there are global variables at this scope then add their DIEs.
+  for (SmallVector<WeakVH, 4>::iterator SGI = ScopedGVs.begin(),
+         SGE = ScopedGVs.end(); SGI != SGE; ++SGI) {
+    MDNode *N = dyn_cast_or_null<MDNode>(*SGI);
+    if (!N) continue;
+    DIGlobalVariable GV(N);
+    if (GV.getContext().getNode() == SPNode) {
+      DIE *ScopedGVDie = createGlobalVariableDIE(ModuleCU, GV);
+      if (ScopedGVDie)
+        SPDie->addChild(ScopedGVDie);
+    }
+  }
+
+  return SPDie;
+}
+
+/// constructLexicalScopeDIE - Construct a new DW_TAG_lexical_block
+/// for this scope and attach DW_AT_low_pc/DW_AT_high_pc labels.
+DIE *DwarfDebug::constructLexicalScopeDIE(DbgScope *Scope) {
+  unsigned StartID = MMI->MappedLabel(Scope->getStartLabelID());
+  unsigned EndID = MMI->MappedLabel(Scope->getEndLabelID());
+
+  // Ignore empty scopes.
+  if (StartID == EndID && StartID != 0)
+    return NULL;
 
-  return Slot;
-}
+  DIE *ScopeDIE = new DIE(dwarf::DW_TAG_lexical_block);
+  if (Scope->isAbstractScope())
+    return ScopeDIE;
 
+  addLabel(ScopeDIE, dwarf::DW_AT_low_pc, dwarf::DW_FORM_addr,
+           StartID ?
+             DWLabel("label", StartID)
+           : DWLabel("func_begin", SubprogramCount));
+  addLabel(ScopeDIE, dwarf::DW_AT_high_pc, dwarf::DW_FORM_addr,
+           EndID ?
+             DWLabel("label", EndID)
+           : DWLabel("func_end", SubprogramCount));
 
-/// getOrCreateScope - Returns the scope associated with the given descriptor.
-/// FIXME - Remove this method.
-DbgScope *DwarfDebug::getOrCreateScope(MDNode *N) {
-  DbgScope *&Slot = DbgScopeMap[N];
-  if (Slot) return Slot;
 
-  DbgScope *Parent = NULL;
-  DILexicalBlock Block(N);
 
-  // Don't create a new scope if we already created one for an inlined function.
-  DenseMap<const MDNode *, DbgScope *>::iterator
-    II = AbstractInstanceRootMap.find(N);
-  if (II != AbstractInstanceRootMap.end())
-    return LexicalScopeStack.back();
+  return ScopeDIE;
+}
 
-  if (!Block.isNull()) {
-    DIDescriptor ParentDesc = Block.getContext();
-    Parent =
-      ParentDesc.isNull() ?  NULL : getOrCreateScope(ParentDesc.getNode());
-  }
+/// constructInlinedScopeDIE - This scope represents the inlined body of
+/// a function. Construct a DIE to represent this concrete inlined copy
+/// of the function.
+DIE *DwarfDebug::constructInlinedScopeDIE(DbgScope *Scope) {
+  unsigned StartID = MMI->MappedLabel(Scope->getStartLabelID());
+  unsigned EndID = MMI->MappedLabel(Scope->getEndLabelID());
+  assert (StartID && "Invalid starting label for an inlined scope!");
+  assert (EndID && "Invalid end label for an inlined scope!");
+  // Ignore empty scopes.
+  if (StartID == EndID && StartID != 0)
+    return NULL;
 
-  Slot = new DbgScope(Parent, DIDescriptor(N));
+  DIScope DS(Scope->getScopeNode());
+  if (DS.isNull())
+    return NULL;
+  DIE *ScopeDIE = new DIE(dwarf::DW_TAG_inlined_subroutine);
 
-  if (Parent)
-    Parent->AddScope(Slot);
-  else
-    // First function is top level function.
-    FunctionDbgScope = Slot;
+  DISubprogram InlinedSP = getDISubprogram(DS.getNode());
+  DIE *OriginDIE = ModuleCU->getDIE(InlinedSP.getNode());
+  assert (OriginDIE && "Unable to find Origin DIE!");
+  addDIEEntry(ScopeDIE, dwarf::DW_AT_abstract_origin,
+              dwarf::DW_FORM_ref4, OriginDIE);
 
-  return Slot;
-}
+  addLabel(ScopeDIE, dwarf::DW_AT_low_pc, dwarf::DW_FORM_addr,
+           DWLabel("label", StartID));
+  addLabel(ScopeDIE, dwarf::DW_AT_high_pc, dwarf::DW_FORM_addr,
+           DWLabel("label", EndID));
 
-/// ConstructDbgScope - Construct the components of a scope.
-///
-void DwarfDebug::ConstructDbgScope(DbgScope *ParentScope,
-                                   unsigned ParentStartID,
-                                   unsigned ParentEndID,
-                                   DIE *ParentDie, CompileUnit *Unit) {
-  // Add variables to scope.
-  SmallVector<DbgVariable *, 8> &Variables = ParentScope->getVariables();
-  for (unsigned i = 0, N = Variables.size(); i < N; ++i) {
-    DIE *VariableDie = CreateDbgScopeVariable(Variables[i], Unit);
-    if (VariableDie) ParentDie->AddChild(VariableDie);
-  }
+  InlinedSubprogramDIEs.insert(OriginDIE);
 
-  // Add concrete instances to scope.
-  SmallVector<DbgConcreteScope *, 8> &ConcreteInsts =
-    ParentScope->getConcreteInsts();
-  for (unsigned i = 0, N = ConcreteInsts.size(); i < N; ++i) {
-    DbgConcreteScope *ConcreteInst = ConcreteInsts[i];
-    DIE *Die = ConcreteInst->getDie();
+  // Track the start label for this inlined function.
+  ValueMap<MDNode *, SmallVector<InlineInfoLabels, 4> >::iterator
+    I = InlineInfo.find(InlinedSP.getNode());
 
-    unsigned StartID = ConcreteInst->getStartLabelID();
-    unsigned EndID = ConcreteInst->getEndLabelID();
+  if (I == InlineInfo.end()) {
+    InlineInfo[InlinedSP.getNode()].push_back(std::make_pair(StartID,
+                                                             ScopeDIE));
+    InlinedSPNodes.push_back(InlinedSP.getNode());
+  } else
+    I->second.push_back(std::make_pair(StartID, ScopeDIE));
 
-    // Add the scope bounds.
-    if (StartID)
-      AddLabel(Die, dwarf::DW_AT_low_pc, dwarf::DW_FORM_addr,
-               DWLabel("label", StartID));
-    else
-      AddLabel(Die, dwarf::DW_AT_low_pc, dwarf::DW_FORM_addr,
-               DWLabel("func_begin", SubprogramCount));
+  StringPool.insert(InlinedSP.getName());
+  StringPool.insert(InlinedSP.getLinkageName());
+  DILocation DL(Scope->getInlinedAt());
+  addUInt(ScopeDIE, dwarf::DW_AT_call_file, 0, ModuleCU->getID());
+  addUInt(ScopeDIE, dwarf::DW_AT_call_line, 0, DL.getLineNumber());
 
-    if (EndID)
-      AddLabel(Die, dwarf::DW_AT_high_pc, dwarf::DW_FORM_addr,
-               DWLabel("label", EndID));
-    else
-      AddLabel(Die, dwarf::DW_AT_high_pc, dwarf::DW_FORM_addr,
-               DWLabel("func_end", SubprogramCount));
+  return ScopeDIE;
+}
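
InlineInfo, keyed by the inlined subprogram's MDNode, accumulates one (start label, scope DIE) pair per concrete inlined instance. Judging from the push_back above, InlineInfoLabels is a pair along these lines (inferred from usage, not spelled out in this hunk):

    // Inferred shape of the tracking structures used above:
    typedef std::pair<unsigned, DIE *> InlineInfoLabels;  // (StartID, ScopeDIE)
    ValueMap<MDNode *, SmallVector<InlineInfoLabels, 4> > InlineInfo;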
 
-    ParentDie->AddChild(Die);
-  }
 
-  // Add nested scopes.
-  SmallVector<DbgScope *, 4> &Scopes = ParentScope->getScopes();
-  for (unsigned j = 0, M = Scopes.size(); j < M; ++j) {
-    // Define the Scope debug information entry.
-    DbgScope *Scope = Scopes[j];
+/// constructVariableDIE - Construct a DIE for the given DbgVariable.
+DIE *DwarfDebug::constructVariableDIE(DbgVariable *DV,
+                                      DbgScope *Scope, CompileUnit *Unit) {
+  // Get the descriptor.
+  const DIVariable &VD = DV->getVariable();
+  StringRef Name = VD.getName();
+  if (Name.empty())
+    return NULL;
 
-    unsigned StartID = MMI->MappedLabel(Scope->getStartLabelID());
-    unsigned EndID = MMI->MappedLabel(Scope->getEndLabelID());
+  // Translate tag to proper Dwarf tag.  The result variable is dropped for
+  // now.
+  unsigned Tag;
+  switch (VD.getTag()) {
+  case dwarf::DW_TAG_return_variable:
+    return NULL;
+  case dwarf::DW_TAG_arg_variable:
+    Tag = dwarf::DW_TAG_formal_parameter;
+    break;
+  case dwarf::DW_TAG_auto_variable:    // fall thru
+  default:
+    Tag = dwarf::DW_TAG_variable;
+    break;
+  }
 
-    // Ignore empty scopes.
-    if (StartID == EndID && StartID != 0) continue;
+  // Define variable debug information entry.
+  DIE *VariableDie = new DIE(Tag);
 
-    // Do not ignore inlined scopes even if they don't have any variables or
-    // scopes.
-    if (Scope->getScopes().empty() && Scope->getVariables().empty() &&
-        Scope->getConcreteInsts().empty())
-      continue;
 
-    if (StartID == ParentStartID && EndID == ParentEndID) {
-      // Just add stuff to the parent scope.
-      ConstructDbgScope(Scope, ParentStartID, ParentEndID, ParentDie, Unit);
-    } else {
-      DIE *ScopeDie = new DIE(dwarf::DW_TAG_lexical_block);
+  DIE *AbsDIE = NULL;
+  if (DbgVariable *AV = DV->getAbstractVariable())
+    AbsDIE = AV->getDIE();
 
-      // Add the scope bounds.
-      if (StartID)
-        AddLabel(ScopeDie, dwarf::DW_AT_low_pc, dwarf::DW_FORM_addr,
-                 DWLabel("label", StartID));
-      else
-        AddLabel(ScopeDie, dwarf::DW_AT_low_pc, dwarf::DW_FORM_addr,
-                 DWLabel("func_begin", SubprogramCount));
+  if (AbsDIE) {
+    DIScope DS(Scope->getScopeNode());
+    DISubprogram InlinedSP = getDISubprogram(DS.getNode());
+    DIE *OriginSPDIE = ModuleCU->getDIE(InlinedSP.getNode());
+    (void) OriginSPDIE;
+    assert (OriginSPDIE && "Unable to find Origin DIE for the SP!");
+    addDIEEntry(VariableDie, dwarf::DW_AT_abstract_origin,
+                dwarf::DW_FORM_ref4, AbsDIE);
+  } else {
+    addString(VariableDie, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
+    addSourceLine(VariableDie, &VD);
+
+    // Add variable type.
+    // FIXME: isBlockByrefVariable should be reformulated in terms of complex
+    // addresses instead.
+    if (VD.isBlockByrefVariable())
+      addType(Unit, VariableDie, getBlockByrefType(VD.getType(), Name));
+    else
+      addType(Unit, VariableDie, VD.getType());
+  }
 
-      if (EndID)
-        AddLabel(ScopeDie, dwarf::DW_AT_high_pc, dwarf::DW_FORM_addr,
-                 DWLabel("label", EndID));
-      else
-        AddLabel(ScopeDie, dwarf::DW_AT_high_pc, dwarf::DW_FORM_addr,
-                 DWLabel("func_end", SubprogramCount));
+  // Add variable address.
+  if (!Scope->isAbstractScope()) {
+    MachineLocation Location;
+    unsigned FrameReg;
+    int Offset = RI->getFrameIndexReference(*MF, DV->getFrameIndex(), FrameReg);
+    Location.set(FrameReg, Offset);
 
-      // Add the scope's contents.
-      ConstructDbgScope(Scope, StartID, EndID, ScopeDie, Unit);
-      ParentDie->AddChild(ScopeDie);
-    }
+    if (VD.hasComplexAddress())
+      addComplexAddress(DV, VariableDie, dwarf::DW_AT_location, Location);
+    else if (VD.isBlockByrefVariable())
+      addBlockByrefAddress(DV, VariableDie, dwarf::DW_AT_location, Location);
+    else
+      addAddress(VariableDie, dwarf::DW_AT_location, Location);
   }
+  DV->setDIE(VariableDie);
+  return VariableDie;
+
 }
 
-/// ConstructFunctionDbgScope - Construct the scope for the subprogram.
-///
-void DwarfDebug::ConstructFunctionDbgScope(DbgScope *RootScope,
-                                           bool AbstractScope) {
-  // Exit if there is no root scope.
-  if (!RootScope) return;
-  DIDescriptor Desc = RootScope->getDesc();
-  if (Desc.isNull())
+void DwarfDebug::addPubTypes(DISubprogram SP) {
+  DICompositeType SPTy = SP.getType();
+  unsigned SPTag = SPTy.getTag();
+  if (SPTag != dwarf::DW_TAG_subroutine_type) 
     return;
 
-  // Get the subprogram debug information entry.
-  DISubprogram SPD(Desc.getNode());
-
-  // Get the subprogram die.
-  DIE *SPDie = ModuleCU->getDieMapSlotFor(SPD.getNode());
-  assert(SPDie && "Missing subprogram descriptor");
+  DIArray Args = SPTy.getTypeArray();
+  if (Args.isNull()) 
+    return;
 
-  if (!AbstractScope) {
-    // Add the function bounds.
-    AddLabel(SPDie, dwarf::DW_AT_low_pc, dwarf::DW_FORM_addr,
-             DWLabel("func_begin", SubprogramCount));
-    AddLabel(SPDie, dwarf::DW_AT_high_pc, dwarf::DW_FORM_addr,
-             DWLabel("func_end", SubprogramCount));
-    MachineLocation Location(RI->getFrameRegister(*MF));
-    AddAddress(SPDie, dwarf::DW_AT_frame_base, Location);
+  for (unsigned i = 0, e = Args.getNumElements(); i != e; ++i) {
+    DIType ATy(Args.getElement(i).getNode());
+    if (ATy.isNull())
+      continue;
+    DICompositeType CATy = getDICompositeType(ATy);
+    if (!CATy.isNull() && !CATy.getName().empty()) {
+      if (DIEEntry *Entry = ModuleCU->getDIEEntry(CATy.getNode()))
+        ModuleCU->addGlobalType(CATy.getName(), Entry->getEntry());
+    }
   }
-
-  ConstructDbgScope(RootScope, 0, 0, SPDie, ModuleCU);
 }
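
addPubTypes, new in this merge, walks a subprogram's subroutine type array and registers every named composite type it finds in the compile unit's global-types table, which later feeds .debug_pubtypes. The walk, reduced to standard containers (names here are illustrative):

    #include <map>
    #include <string>
    #include <vector>

    struct Type {
      bool isComposite = false;
      std::string name;                          // empty for anonymous types
    };

    void addPubTypes(const std::vector<Type> &subroutineArgs,
                     const std::map<std::string, unsigned> &dieOffsets,
                     std::map<std::string, unsigned> &pubTypes) {
      for (const Type &t : subroutineArgs) {
        if (!t.isComposite || t.name.empty())
          continue;                              // only named composites qualify
        auto it = dieOffsets.find(t.name);
        if (it != dieOffsets.end())
          pubTypes[t.name] = it->second;         // expose name -> DIE offset
      }
    }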
 
-/// ConstructDefaultDbgScope - Construct a default scope for the subprogram.
-///
-void DwarfDebug::ConstructDefaultDbgScope(MachineFunction *MF) {
-  StringMap<DIE*> &Globals = ModuleCU->getGlobals();
-  StringMap<DIE*>::iterator GI = Globals.find(MF->getFunction()->getName());
-  if (GI != Globals.end()) {
-    DIE *SPDie = GI->second;
+/// constructScopeDIE - Construct a DIE for this scope.
+DIE *DwarfDebug::constructScopeDIE(DbgScope *Scope) {
+  if (!Scope)
+    return NULL;
+  DIScope DS(Scope->getScopeNode());
+  if (DS.isNull())
+    return NULL;
+
+  DIE *ScopeDIE = NULL;
+  if (Scope->getInlinedAt())
+    ScopeDIE = constructInlinedScopeDIE(Scope);
+  else if (DS.isSubprogram()) {
+    if (Scope->isAbstractScope())
+      ScopeDIE = ModuleCU->getDIE(DS.getNode());
+    else
+      ScopeDIE = updateSubprogramScopeDIE(DS.getNode());
+  } else {
+    ScopeDIE = constructLexicalScopeDIE(Scope);
+    if (!ScopeDIE) return NULL;
+  }
 
-    // Add the function bounds.
-    AddLabel(SPDie, dwarf::DW_AT_low_pc, dwarf::DW_FORM_addr,
-             DWLabel("func_begin", SubprogramCount));
-    AddLabel(SPDie, dwarf::DW_AT_high_pc, dwarf::DW_FORM_addr,
-             DWLabel("func_end", SubprogramCount));
+  // Add variables to scope.
+  SmallVector<DbgVariable *, 8> &Variables = Scope->getVariables();
+  for (unsigned i = 0, N = Variables.size(); i < N; ++i) {
+    DIE *VariableDIE = constructVariableDIE(Variables[i], Scope, ModuleCU);
+    if (VariableDIE)
+      ScopeDIE->addChild(VariableDIE);
+  }
 
-    MachineLocation Location(RI->getFrameRegister(*MF));
-    AddAddress(SPDie, dwarf::DW_AT_frame_base, Location);
+  // Add nested scopes.
+  SmallVector<DbgScope *, 4> &Scopes = Scope->getScopes();
+  for (unsigned j = 0, M = Scopes.size(); j < M; ++j) {
+    // Define the Scope debug information entry.
+    DIE *NestedDIE = constructScopeDIE(Scopes[j]);
+    if (NestedDIE)
+      ScopeDIE->addChild(NestedDIE);
   }
+
+  if (DS.isSubprogram())
+    addPubTypes(DISubprogram(DS.getNode()));
+
+  return ScopeDIE;
 }
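
constructScopeDIE is now a single recursive walk: choose the DIE kind from the scope (inlined, subprogram, or lexical block), attach the scope's variables, then recurse into nested scopes. The same shape over a toy scope tree, with assumed types rather than LLVM's:

    #include <vector>

    struct DIE { std::vector<DIE *> children; };

    struct Scope {
      std::vector<int> variables;        // stand-ins for DbgVariable*
      std::vector<Scope *> kids;
    };

    DIE *buildVariableDIE(int) { return new DIE(); }

    DIE *constructScopeDIE(Scope *s) {
      if (!s) return nullptr;
      DIE *die = new DIE();              // kind depends on inlined/subprogram/lexical
      for (int v : s->variables)         // variables first, as in the pass above
        die->children.push_back(buildVariableDIE(v));
      for (Scope *k : s->kids)           // then depth-first over nested scopes
        if (DIE *kd = constructScopeDIE(k))
          die->children.push_back(kd);
      return die;
    }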
 
 /// GetOrCreateSourceID - Look up the source id with the given directory and
 /// source file names. If none currently exists, create a new id and insert it
 /// in the SourceIds map. This can update DirectoryNames and SourceFileNames
 /// maps as well.
-unsigned DwarfDebug::GetOrCreateSourceID(const char *DirName,
-                                         const char *FileName) {
+unsigned DwarfDebug::GetOrCreateSourceID(StringRef DirName, StringRef FileName) {
   unsigned DId;
   StringMap<unsigned>::iterator DI = DirectoryIdMap.find(DirName);
   if (DI != DirectoryIdMap.end()) {
@@ -1528,33 +1579,34 @@ unsigned DwarfDebug::GetOrCreateSourceID(const char *DirName,
   return SrcId;
 }
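
The StringRef-based GetOrCreateSourceID interns the directory and the file name separately and keys the final source id on the (directory id, file id) pair, so repeated queries for the same pair stay cheap. Roughly, under those assumptions:

    #include <map>
    #include <string>
    #include <utility>

    struct SourceIds {
      std::map<std::string, unsigned> dirIds, fileIds;
      std::map<std::pair<unsigned, unsigned>, unsigned> srcIds;

      static unsigned intern(std::map<std::string, unsigned> &m,
                             const std::string &s) {
        auto it = m.find(s);
        if (it != m.end()) return it->second;
        unsigned id = m.size() + 1;      // ids start at 1; 0 means "unknown"
        m.emplace(s, id);
        return id;
      }

      unsigned getOrCreate(const std::string &dir, const std::string &file) {
        auto key = std::make_pair(intern(dirIds, dir), intern(fileIds, file));
        auto it = srcIds.find(key);
        if (it != srcIds.end()) return it->second;
        unsigned id = srcIds.size() + 1;
        srcIds.emplace(key, id);
        return id;
      }
    };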
 
-void DwarfDebug::ConstructCompileUnit(MDNode *N) {
+void DwarfDebug::constructCompileUnit(MDNode *N) {
   DICompileUnit DIUnit(N);
-  const char *FN = DIUnit.getFilename();
-  const char *Dir = DIUnit.getDirectory();
+  StringRef FN = DIUnit.getFilename();
+  StringRef Dir = DIUnit.getDirectory();
   unsigned ID = GetOrCreateSourceID(Dir, FN);
 
   DIE *Die = new DIE(dwarf::DW_TAG_compile_unit);
-  AddSectionOffset(Die, dwarf::DW_AT_stmt_list, dwarf::DW_FORM_data4,
+  addSectionOffset(Die, dwarf::DW_AT_stmt_list, dwarf::DW_FORM_data4,
                    DWLabel("section_line", 0), DWLabel("section_line", 0),
                    false);
-  AddString(Die, dwarf::DW_AT_producer, dwarf::DW_FORM_string,
+  addString(Die, dwarf::DW_AT_producer, dwarf::DW_FORM_string,
             DIUnit.getProducer());
-  AddUInt(Die, dwarf::DW_AT_language, dwarf::DW_FORM_data1,
+  addUInt(Die, dwarf::DW_AT_language, dwarf::DW_FORM_data1,
           DIUnit.getLanguage());
-  AddString(Die, dwarf::DW_AT_name, dwarf::DW_FORM_string, FN);
+  addString(Die, dwarf::DW_AT_name, dwarf::DW_FORM_string, FN);
 
-  if (Dir)
-    AddString(Die, dwarf::DW_AT_comp_dir, dwarf::DW_FORM_string, Dir);
+  if (!Dir.empty())
+    addString(Die, dwarf::DW_AT_comp_dir, dwarf::DW_FORM_string, Dir);
   if (DIUnit.isOptimized())
-    AddUInt(Die, dwarf::DW_AT_APPLE_optimized, dwarf::DW_FORM_flag, 1);
+    addUInt(Die, dwarf::DW_AT_APPLE_optimized, dwarf::DW_FORM_flag, 1);
 
-  if (const char *Flags = DIUnit.getFlags())
-    AddString(Die, dwarf::DW_AT_APPLE_flags, dwarf::DW_FORM_string, Flags);
+  StringRef Flags = DIUnit.getFlags();
+  if (!Flags.empty())
+    addString(Die, dwarf::DW_AT_APPLE_flags, dwarf::DW_FORM_string, Flags);
 
   unsigned RVer = DIUnit.getRunTimeVersion();
   if (RVer)
-    AddUInt(Die, dwarf::DW_AT_APPLE_major_runtime_vers,
+    addUInt(Die, dwarf::DW_AT_APPLE_major_runtime_vers,
             dwarf::DW_FORM_data1, RVer);
 
   CompileUnit *Unit = new CompileUnit(ID, Die);
@@ -1568,7 +1620,7 @@ void DwarfDebug::ConstructCompileUnit(MDNode *N) {
   CompileUnits.push_back(Unit);
 }
 
-void DwarfDebug::ConstructGlobalVariableDIE(MDNode *N) {
+void DwarfDebug::constructGlobalVariableDIE(MDNode *N) {
   DIGlobalVariable DI_GV(N);
 
   // If debug information is malformed then ignore it.
@@ -1576,36 +1628,34 @@ void DwarfDebug::ConstructGlobalVariableDIE(MDNode *N) {
     return;
 
   // Check for pre-existence.
-  DIE *&Slot = ModuleCU->getDieMapSlotFor(DI_GV.getNode());
-  if (Slot)
+  if (ModuleCU->getDIE(DI_GV.getNode()))
     return;
 
-  DIE *VariableDie = CreateGlobalVariableDIE(ModuleCU, DI_GV);
-
-  // Add address.
-  DIEBlock *Block = new DIEBlock();
-  AddUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_addr);
-  AddObjectLabel(Block, 0, dwarf::DW_FORM_udata,
-                 Asm->Mang->getMangledName(DI_GV.getGlobal()));
-  AddBlock(VariableDie, dwarf::DW_AT_location, 0, Block);
+  DIE *VariableDie = createGlobalVariableDIE(ModuleCU, DI_GV);
 
   // Add to map.
-  Slot = VariableDie;
+  ModuleCU->insertDIE(N, VariableDie);
 
   // Add to context owner.
-  ModuleCU->getDie()->AddChild(VariableDie);
+  ModuleCU->getCUDie()->addChild(VariableDie);
 
   // Expose as global. FIXME - need to check external flag.
-  ModuleCU->AddGlobal(DI_GV.getName(), VariableDie);
+  ModuleCU->addGlobal(DI_GV.getName(), VariableDie);
+
+  DIType GTy = DI_GV.getType();
+  if (GTy.isCompositeType() && !GTy.getName().empty()) {
+    DIEEntry *Entry = ModuleCU->getDIEEntry(GTy.getNode());
+    assert (Entry && "Missing global type!");
+    ModuleCU->addGlobalType(GTy.getName(), Entry->getEntry());
+  }
   return;
 }
 
-void DwarfDebug::ConstructSubprogram(MDNode *N) {
+void DwarfDebug::constructSubprogramDIE(MDNode *N) {
   DISubprogram SP(N);
 
   // Check for pre-existence.
-  DIE *&Slot = ModuleCU->getDieMapSlotFor(N);
-  if (Slot)
+  if (ModuleCU->getDIE(N))
     return;
 
   if (!SP.isDefinition())
@@ -1613,35 +1663,39 @@ void DwarfDebug::ConstructSubprogram(MDNode *N) {
     // class type.
     return;
 
-  DIE *SubprogramDie = CreateSubprogramDIE(ModuleCU, SP);
+  DIE *SubprogramDie = createSubprogramDIE(ModuleCU, SP);
 
   // Add to map.
-  Slot = SubprogramDie;
+  ModuleCU->insertDIE(N, SubprogramDie);
 
   // Add to context owner.
-  ModuleCU->getDie()->AddChild(SubprogramDie);
+  ModuleCU->getCUDie()->addChild(SubprogramDie);
 
   // Expose as global.
-  ModuleCU->AddGlobal(SP.getName(), SubprogramDie);
+  ModuleCU->addGlobal(SP.getName(), SubprogramDie);
+
   return;
 }
 
-/// BeginModule - Emit all Dwarf sections that should come prior to the
+/// beginModule - Emit all Dwarf sections that should come prior to the
 /// content. Create global DIEs and emit initial debug info sections.
 /// This is invoked by the target AsmPrinter.
-void DwarfDebug::BeginModule(Module *M, MachineModuleInfo *mmi) {
+void DwarfDebug::beginModule(Module *M, MachineModuleInfo *mmi) {
   this->M = M;
 
   if (TimePassesIsEnabled)
     DebugTimer->startTimer();
 
+  if (!MAI->doesSupportDebugInformation())
+    return;
+
   DebugInfoFinder DbgFinder;
   DbgFinder.processModule(*M);
 
   // Create all the compile unit DIEs.
   for (DebugInfoFinder::iterator I = DbgFinder.compile_unit_begin(),
          E = DbgFinder.compile_unit_end(); I != E; ++I)
-    ConstructCompileUnit(*I);
+    constructCompileUnit(*I);
 
   if (CompileUnits.empty()) {
     if (TimePassesIsEnabled)
@@ -1655,24 +1709,20 @@ void DwarfDebug::BeginModule(Module *M, MachineModuleInfo *mmi) {
   if (!ModuleCU)
     ModuleCU = CompileUnits[0];
 
-  // If there is not any debug info available for any global variables and any
-  // subprograms then there is not any debug info to emit.
-  if (DbgFinder.global_variable_count() == 0
-      && DbgFinder.subprogram_count() == 0) {
-    if (TimePassesIsEnabled)
-      DebugTimer->stopTimer();
-    return;
-  }
-
   // Create DIEs for each of the externally visible global variables.
   for (DebugInfoFinder::iterator I = DbgFinder.global_variable_begin(),
-         E = DbgFinder.global_variable_end(); I != E; ++I)
-    ConstructGlobalVariableDIE(*I);
+         E = DbgFinder.global_variable_end(); I != E; ++I) {
+    DIGlobalVariable GV(*I);
+    if (GV.getContext().getNode() != GV.getCompileUnit().getNode())
+      ScopedGVs.push_back(*I);
+    else
+      constructGlobalVariableDIE(*I);
+  }
 
-  // Create DIEs for each of the externally visible subprograms.
+  // Create DIEs for each subprogram.
   for (DebugInfoFinder::iterator I = DbgFinder.subprogram_begin(),
          E = DbgFinder.subprogram_end(); I != E; ++I)
-    ConstructSubprogram(*I);
+    constructSubprogramDIE(*I);
 
   MMI = mmi;
   shouldEmit = true;
@@ -1698,21 +1748,28 @@ void DwarfDebug::BeginModule(Module *M, MachineModuleInfo *mmi) {
   }
 
   // Emit initial sections
-  EmitInitial();
+  emitInitial();
 
   if (TimePassesIsEnabled)
     DebugTimer->stopTimer();
 }
 
-/// EndModule - Emit all Dwarf sections that should come after the content.
+/// endModule - Emit all Dwarf sections that should come after the content.
 ///
-void DwarfDebug::EndModule() {
-  if (!ShouldEmitDwarfDebug())
+void DwarfDebug::endModule() {
+  if (!ModuleCU)
     return;
 
   if (TimePassesIsEnabled)
     DebugTimer->startTimer();
 
+  // Attach DW_AT_inline attribute with inlined subprogram DIEs.
+  for (SmallPtrSet<DIE *, 4>::iterator AI = InlinedSubprogramDIEs.begin(),
+         AE = InlinedSubprogramDIEs.end(); AI != AE; ++AI) {
+    DIE *ISP = *AI;
+    addUInt(ISP, dwarf::DW_AT_inline, 0, dwarf::DW_INL_inlined);
+  }
+
   // Standard sections final addresses.
   Asm->OutStreamer.SwitchSection(Asm->getObjFileLowering().getTextSection());
   EmitLabel("text_end", 0);
@@ -1726,87 +1783,207 @@ void DwarfDebug::EndModule() {
   }
 
   // Emit common frame information.
-  EmitCommonDebugFrame();
+  emitCommonDebugFrame();
 
   // Emit function debug frame information
   for (std::vector<FunctionDebugFrameInfo>::iterator I = DebugFrames.begin(),
          E = DebugFrames.end(); I != E; ++I)
-    EmitFunctionDebugFrame(*I);
+    emitFunctionDebugFrame(*I);
 
   // Compute DIE offsets and sizes.
-  SizeAndOffsets();
+  computeSizeAndOffsets();
 
   // Emit all the DIEs into a debug info section
-  EmitDebugInfo();
+  emitDebugInfo();
 
   // Emit the corresponding abbreviations into an abbrev section.
-  EmitAbbreviations();
+  emitAbbreviations();
 
   // Emit source line correspondence into a debug line section.
-  EmitDebugLines();
+  emitDebugLines();
 
   // Emit info into a debug pubnames section.
-  EmitDebugPubNames();
+  emitDebugPubNames();
+
+  // Emit info into a debug pubtypes section.
+  emitDebugPubTypes();
 
   // Emit info into a debug str section.
-  EmitDebugStr();
+  emitDebugStr();
 
   // Emit info into a debug loc section.
-  EmitDebugLoc();
+  emitDebugLoc();
 
   // Emit info into a debug aranges section.
   EmitDebugARanges();
 
   // Emit info into a debug ranges section.
-  EmitDebugRanges();
+  emitDebugRanges();
 
   // Emit info into a debug macinfo section.
-  EmitDebugMacInfo();
+  emitDebugMacInfo();
 
   // Emit inline info.
-  EmitDebugInlineInfo();
+  emitDebugInlineInfo();
 
   if (TimePassesIsEnabled)
     DebugTimer->stopTimer();
 }
 
-/// ExtractScopeInformation - Scan machine instructions in this function
+/// findAbstractVariable - Find abstract variable, if any, associated with Var.
+DbgVariable *DwarfDebug::findAbstractVariable(DIVariable &Var,
+                                              unsigned FrameIdx,
+                                              DILocation &ScopeLoc) {
+
+  DbgVariable *AbsDbgVariable = AbstractVariables.lookup(Var.getNode());
+  if (AbsDbgVariable)
+    return AbsDbgVariable;
+
+  DbgScope *Scope = AbstractScopes.lookup(ScopeLoc.getScope().getNode());
+  if (!Scope)
+    return NULL;
+
+  AbsDbgVariable = new DbgVariable(Var, FrameIdx);
+  Scope->addVariable(AbsDbgVariable);
+  AbstractVariables[Var.getNode()] = AbsDbgVariable;
+  return AbsDbgVariable;
+}
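
findAbstractVariable is a lookup-or-create: hand back the cached abstract variable if one exists, otherwise create it inside the matching abstract scope and memoize it. The pattern with standard containers (hypothetical names, not the real signatures):

    #include <map>

    struct Var { int frameIdx; };
    struct Scope { void addVariable(Var *) { /* takes ownership */ } };

    std::map<const void *, Var *> abstractVars;      // keyed by variable node
    std::map<const void *, Scope *> abstractScopes;  // keyed by scope node

    Var *findAbstractVariable(const void *varNode, int frameIdx,
                              const void *scopeNode) {
      auto it = abstractVars.find(varNode);
      if (it != abstractVars.end())
        return it->second;                 // already created earlier
      auto sit = abstractScopes.find(scopeNode);
      if (sit == abstractScopes.end())
        return nullptr;                    // no abstract scope, no abstract var
      Var *v = new Var{frameIdx};
      sit->second->addVariable(v);
      abstractVars[varNode] = v;           // memoize for later queries
      return v;
    }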
+
+/// collectVariableInfo - Populate DbgScope entries with variables' info.
+void DwarfDebug::collectVariableInfo() {
+  if (!MMI) return;
+
+  MachineModuleInfo::VariableDbgInfoMapTy &VMap = MMI->getVariableDbgInfo();
+  for (MachineModuleInfo::VariableDbgInfoMapTy::iterator VI = VMap.begin(),
+         VE = VMap.end(); VI != VE; ++VI) {
+    MetadataBase *MB = VI->first;
+    MDNode *Var = dyn_cast_or_null<MDNode>(MB);
+    if (!Var) continue;
+    DIVariable DV(Var);
+    std::pair<unsigned, MDNode *> VP = VI->second;
+    DILocation ScopeLoc(VP.second);
+
+    DbgScope *Scope =
+      ConcreteScopes.lookup(ScopeLoc.getOrigLocation().getNode());
+    if (!Scope)
+      Scope = DbgScopeMap.lookup(ScopeLoc.getScope().getNode());
+    // If variable scope is not found then skip this variable.
+    if (!Scope)
+      continue;
+
+    DbgVariable *RegVar = new DbgVariable(DV, VP.first);
+    Scope->addVariable(RegVar);
+    if (DbgVariable *AbsDbgVariable = findAbstractVariable(DV, VP.first,
+                                                           ScopeLoc))
+      RegVar->setAbstractVariable(AbsDbgVariable);
+  }
+}
+
+/// beginScope - Process beginning of a scope starting at Label.
+void DwarfDebug::beginScope(const MachineInstr *MI, unsigned Label) {
+  InsnToDbgScopeMapTy::iterator I = DbgScopeBeginMap.find(MI);
+  if (I == DbgScopeBeginMap.end())
+    return;
+  ScopeVector &SD = I->second;
+  for (ScopeVector::iterator SDI = SD.begin(), SDE = SD.end();
+       SDI != SDE; ++SDI)
+    (*SDI)->setStartLabelID(Label);
+}
+
+/// endScope - Process end of a scope.
+void DwarfDebug::endScope(const MachineInstr *MI) {
+  InsnToDbgScopeMapTy::iterator I = DbgScopeEndMap.find(MI);
+  if (I == DbgScopeEndMap.end())
+    return;
+
+  unsigned Label = MMI->NextLabelID();
+  Asm->printLabel(Label);
+
+  SmallVector<DbgScope *, 2> &SD = I->second;
+  for (SmallVector<DbgScope *, 2>::iterator SDI = SD.begin(), SDE = SD.end();
+       SDI != SDE; ++SDI)
+    (*SDI)->setEndLabelID(Label);
+  return;
+}
+
+/// createDbgScope - Create DbgScope for the scope.
+void DwarfDebug::createDbgScope(MDNode *Scope, MDNode *InlinedAt) {
+
+  if (!InlinedAt) {
+    DbgScope *WScope = DbgScopeMap.lookup(Scope);
+    if (WScope)
+      return;
+    WScope = new DbgScope(NULL, DIDescriptor(Scope), NULL);
+    DbgScopeMap.insert(std::make_pair(Scope, WScope));
+    if (DIDescriptor(Scope).isLexicalBlock())
+      createDbgScope(DILexicalBlock(Scope).getContext().getNode(), NULL);
+    return;
+  }
+
+  DbgScope *WScope = DbgScopeMap.lookup(InlinedAt);
+  if (WScope)
+    return;
+
+  WScope = new DbgScope(NULL, DIDescriptor(Scope), InlinedAt);
+  DbgScopeMap.insert(std::make_pair(InlinedAt, WScope));
+  DILocation DL(InlinedAt);
+  createDbgScope(DL.getScope().getNode(), DL.getOrigLocation().getNode());
+}
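
createDbgScope builds the working set recursively and keys entries differently depending on inlining: an ordinary scope is keyed by its own node and chases lexical parents, while an inlined scope is keyed by its InlinedAt location and chases the original-location chain. A compact model of that recursion (stand-in node type):

    #include <set>

    // Stand-in for a metadata node: a lexical parent, plus the next
    // InlinedAt hop when the node describes an inlined location.
    struct Node {
      Node *parentScope = nullptr;
      Node *origLocation = nullptr;
    };

    std::set<Node *> scopeSet;             // the working set being discovered

    void createDbgScope(Node *scope, Node *inlinedAt) {
      if (!scope)
        return;                                   // nothing to describe
      Node *key = inlinedAt ? inlinedAt : scope;  // inlined scopes key on call site
      if (!scopeSet.insert(key).second)
        return;                                   // already discovered
      if (inlinedAt)                              // climb the inlining chain
        createDbgScope(inlinedAt->parentScope, inlinedAt->origLocation);
      else if (scope->parentScope)                // climb lexical parents
        createDbgScope(scope->parentScope, nullptr);
    }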
+
+/// extractScopeInformation - Scan machine instructions in this function
 /// and collect DbgScopes. Return true if at least one scope was found.
-bool DwarfDebug::ExtractScopeInformation(MachineFunction *MF) {
+bool DwarfDebug::extractScopeInformation(MachineFunction *MF) {
   // If scope information was extracted using .dbg intrinsics then there is no
   // need to extract this information by scanning each instruction.
   if (!DbgScopeMap.empty())
     return false;
 
-  // Scan each instruction and create scopes.
+  // Scan each instruction and create scopes. First build working set of scopes.
   for (MachineFunction::const_iterator I = MF->begin(), E = MF->end();
        I != E; ++I) {
     for (MachineBasicBlock::const_iterator II = I->begin(), IE = I->end();
          II != IE; ++II) {
       const MachineInstr *MInsn = II;
       DebugLoc DL = MInsn->getDebugLoc();
-      if (DL.isUnknown())
-        continue;
+      if (DL.isUnknown()) continue;
       DebugLocTuple DLT = MF->getDebugLocTuple(DL);
-      if (!DLT.CompileUnit)
-        continue;
+      if (!DLT.Scope) continue;
       // There is no need to create another DIE for the compile unit. For all
-      // other scopes, create one DbgScope now. This will be translated 
+      // other scopes, create one DbgScope now. This will be translated
       // into a scope DIE at the end.
-      DIDescriptor D(DLT.CompileUnit);
-      if (!D.isCompileUnit()) {
-        DbgScope *Scope = getDbgScope(DLT.CompileUnit, MInsn);
-        Scope->setLastInsn(MInsn);
-      }
+      if (DIDescriptor(DLT.Scope).isCompileUnit()) continue;
+      createDbgScope(DLT.Scope, DLT.InlinedAtLoc);
+    }
+  }
+
+  // Build scope hierarchy using working set of scopes.
+  for (MachineFunction::const_iterator I = MF->begin(), E = MF->end();
+       I != E; ++I) {
+    for (MachineBasicBlock::const_iterator II = I->begin(), IE = I->end();
+         II != IE; ++II) {
+      const MachineInstr *MInsn = II;
+      DebugLoc DL = MInsn->getDebugLoc();
+      if (DL.isUnknown())  continue;
+      DebugLocTuple DLT = MF->getDebugLocTuple(DL);
+      if (!DLT.Scope)  continue;
+      // Compile-unit scopes are skipped here as well. For all other scopes,
+      // update the DbgScope created in the first pass with this instruction.
+      if (DIDescriptor(DLT.Scope).isCompileUnit()) continue;
+      DbgScope *Scope = getUpdatedDbgScope(DLT.Scope, MInsn, DLT.InlinedAtLoc);
+      Scope->setLastInsn(MInsn);
     }
   }
 
   // If a scope's last instruction is not set then use its child scope's
   // last instruction as this scope's last instruction.
-  for (DenseMap<MDNode *, DbgScope *>::iterator DI = DbgScopeMap.begin(),
+  for (ValueMap<MDNode *, DbgScope *>::iterator DI = DbgScopeMap.begin(),
 	 DE = DbgScopeMap.end(); DI != DE; ++DI) {
+    if (DI->second->isAbstractScope())
+      continue;
     assert (DI->second->getFirstInsn() && "Invalid first instruction!");
-    DI->second->FixInstructionMarkers();
+    DI->second->fixInstructionMarkers();
     assert (DI->second->getLastInsn() && "Invalid last instruction!");
   }
 
@@ -1814,10 +1991,11 @@ bool DwarfDebug::ExtractScopeInformation(MachineFunction *MF) {
   // and end of a scope respectively. Create an inverse map that list scopes
   // starts (and ends) with an instruction. One instruction may start (or end)
   // multiple scopes.
-  for (DenseMap<MDNode *, DbgScope *>::iterator DI = DbgScopeMap.begin(),
+  for (ValueMap<MDNode *, DbgScope *>::iterator DI = DbgScopeMap.begin(),
 	 DE = DbgScopeMap.end(); DI != DE; ++DI) {
     DbgScope *S = DI->second;
-    assert (S && "DbgScope is missing!");
+    if (S->isAbstractScope())
+      continue;
     const MachineInstr *MI = S->getFirstInsn();
     assert (MI && "DbgScope does not have first instruction!");
 
@@ -1825,8 +2003,7 @@ bool DwarfDebug::ExtractScopeInformation(MachineFunction *MF) {
     if (IDI != DbgScopeBeginMap.end())
       IDI->second.push_back(S);
     else
-      DbgScopeBeginMap.insert(std::make_pair(MI, 
-                                             SmallVector<DbgScope *, 2>(2, S)));
+      DbgScopeBeginMap[MI].push_back(S);
 
     MI = S->getLastInsn();
     assert (MI && "DbgScope does not have last instruction!");
@@ -1834,16 +2011,15 @@ bool DwarfDebug::ExtractScopeInformation(MachineFunction *MF) {
     if (IDI != DbgScopeEndMap.end())
       IDI->second.push_back(S);
     else
-      DbgScopeEndMap.insert(std::make_pair(MI,
-                                             SmallVector<DbgScope *, 2>(2, S)));
+      DbgScopeEndMap[MI].push_back(S);
   }
 
   return !DbgScopeMap.empty();
 }
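
extractScopeInformation now makes two passes over the machine code: the first discovers the scope working set, the second attaches each scope's first and last instructions, and the result is inverted into per-instruction begin/end lists (one instruction may open or close several scopes). The inversion step, sketched with standard containers:

    #include <map>
    #include <vector>

    struct Insn;                                  // machine instruction stand-in
    struct Scope { const Insn *first = nullptr, *last = nullptr; };

    using InsnScopeMap = std::map<const Insn *, std::vector<Scope *>>;

    void buildBeginEndMaps(const std::vector<Scope *> &scopes,
                           InsnScopeMap &beginMap, InsnScopeMap &endMap) {
      for (Scope *s : scopes) {
        // operator[] default-constructs the vector on first use, which is
        // exactly the DbgScopeBeginMap[MI].push_back(S) simplification above.
        beginMap[s->first].push_back(s);
        endMap[s->last].push_back(s);
      }
    }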
 
-/// BeginFunction - Gather pre-function debug information.  Assumes being
+/// beginFunction - Gather pre-function debug information.  Assumes being
 /// emitted immediately after the function entry point.
-void DwarfDebug::BeginFunction(MachineFunction *MF) {
+void DwarfDebug::beginFunction(MachineFunction *MF) {
   this->MF = MF;
 
   if (!ShouldEmitDwarfDebug()) return;
@@ -1851,6 +2027,11 @@ void DwarfDebug::BeginFunction(MachineFunction *MF) {
   if (TimePassesIsEnabled)
     DebugTimer->startTimer();
 
+  if (!extractScopeInformation(MF))
+    return;
+
+  collectVariableInfo();
+
   // Begin accumulating function debug information.
   MMI->BeginFunction(MF);
 
@@ -1862,23 +2043,30 @@ void DwarfDebug::BeginFunction(MachineFunction *MF) {
   DebugLoc FDL = MF->getDefaultDebugLoc();
   if (!FDL.isUnknown()) {
     DebugLocTuple DLT = MF->getDebugLocTuple(FDL);
-    unsigned LabelID = RecordSourceLine(DLT.Line, DLT.Col, DLT.CompileUnit);
+    unsigned LabelID = 0;
+    DISubprogram SP = getDISubprogram(DLT.Scope);
+    if (!SP.isNull())
+      LabelID = recordSourceLine(SP.getLineNumber(), 0, DLT.Scope);
+    else
+      LabelID = recordSourceLine(DLT.Line, DLT.Col, DLT.Scope);
     Asm->printLabel(LabelID);
     O << '\n';
   }
-
   if (TimePassesIsEnabled)
     DebugTimer->stopTimer();
 }
 
-/// EndFunction - Gather and emit post-function debug information.
+/// endFunction - Gather and emit post-function debug information.
 ///
-void DwarfDebug::EndFunction(MachineFunction *MF) {
+void DwarfDebug::endFunction(MachineFunction *MF) {
   if (!ShouldEmitDwarfDebug()) return;
 
   if (TimePassesIsEnabled)
     DebugTimer->startTimer();
 
+  if (DbgScopeMap.empty())
+    return;
+
   // Define end label for subprogram.
   EmitLabel("func_end", SubprogramCount);
 
@@ -1893,41 +2081,24 @@ void DwarfDebug::EndFunction(MachineFunction *MF) {
                             Lines.begin(), Lines.end());
   }
 
-  // Construct the DbgScope for abstract instances.
-  for (SmallVector<DbgScope *, 32>::iterator
-         I = AbstractInstanceRootList.begin(),
-         E = AbstractInstanceRootList.end(); I != E; ++I)
-    ConstructFunctionDbgScope(*I);
+  // Construct abstract scopes.
+  for (SmallVector<DbgScope *, 4>::iterator AI = AbstractScopesList.begin(),
+         AE = AbstractScopesList.end(); AI != AE; ++AI)
+    constructScopeDIE(*AI);
 
-  // Construct scopes for subprogram.
-  if (FunctionDbgScope)
-    ConstructFunctionDbgScope(FunctionDbgScope);
-  else
-    // FIXME: This is wrong. We are essentially getting past a problem with
-    // debug information not being able to handle unreachable blocks that have
-    // debug information in them. In particular, those unreachable blocks that
-    // have "region end" info in them. That situation results in the "root
-    // scope" not being created. If that's the case, then emit a "default"
-    // scope, i.e., one that encompasses the whole function. This isn't
-    // desirable. And a better way of handling this (and all of the debugging
-    // information) needs to be explored.
-    ConstructDefaultDbgScope(MF);
+  constructScopeDIE(CurrentFnDbgScope);
 
   DebugFrames.push_back(FunctionDebugFrameInfo(SubprogramCount,
                                                MMI->getFrameMoves()));
 
   // Clear debug info
-  if (FunctionDbgScope) {
-    delete FunctionDbgScope;
+  if (CurrentFnDbgScope) {
+    CurrentFnDbgScope = NULL;
     DbgScopeMap.clear();
     DbgScopeBeginMap.clear();
     DbgScopeEndMap.clear();
-    DbgAbstractScopeMap.clear();
-    DbgConcreteScopeMap.clear();
-    FunctionDbgScope = NULL;
-    LexicalScopeStack.clear();
-    AbstractInstanceRootList.clear();
-    AbstractInstanceRootMap.clear();
+    ConcreteScopes.clear();
+    AbstractScopesList.clear();
   }
 
   Lines.clear();
@@ -1936,38 +2107,37 @@ void DwarfDebug::EndFunction(MachineFunction *MF) {
     DebugTimer->stopTimer();
 }
 
-/// RecordSourceLine - Records location information and associates it with a
+/// recordSourceLine - Records location information and associates it with a
 /// label. Returns a unique label ID used to generate a label and provide
 /// correspondence to the source line list.
-unsigned DwarfDebug::RecordSourceLine(Value *V, unsigned Line, unsigned Col) {
-  if (TimePassesIsEnabled)
-    DebugTimer->startTimer();
-
-  CompileUnit *Unit = CompileUnitMap[V];
-  assert(Unit && "Unable to find CompileUnit");
-  unsigned ID = MMI->NextLabelID();
-  Lines.push_back(SrcLineInfo(Line, Col, Unit->getID(), ID));
-
-  if (TimePassesIsEnabled)
-    DebugTimer->stopTimer();
-
-  return ID;
-}
-
-/// RecordSourceLine - Records location information and associates it with a
-/// label. Returns a unique label ID used to generate a label and provide
-/// correspondence to the source line list.
-unsigned DwarfDebug::RecordSourceLine(unsigned Line, unsigned Col, 
-                                      MDNode *Scope) {
+unsigned DwarfDebug::recordSourceLine(unsigned Line, unsigned Col,
+                                      MDNode *S) {
   if (!MMI)
     return 0;
 
   if (TimePassesIsEnabled)
     DebugTimer->startTimer();
 
-  DICompileUnit CU(Scope);
-  unsigned Src = GetOrCreateSourceID(CU.getDirectory(),
-                                     CU.getFilename());
+  StringRef Dir;
+  StringRef Fn;
+
+  DIDescriptor Scope(S);
+  if (Scope.isCompileUnit()) {
+    DICompileUnit CU(S);
+    Dir = CU.getDirectory();
+    Fn = CU.getFilename();
+  } else if (Scope.isSubprogram()) {
+    DISubprogram SP(S);
+    Dir = SP.getDirectory();
+    Fn = SP.getFilename();
+  } else if (Scope.isLexicalBlock()) {
+    DILexicalBlock DB(S);
+    Dir = DB.getDirectory();
+    Fn = DB.getFilename();
+  } else
+    assert (0 && "Unexpected scope info");
+
+  unsigned Src = GetOrCreateSourceID(Dir, Fn);
   unsigned ID = MMI->NextLabelID();
   Lines.push_back(SrcLineInfo(Line, Col, Src, ID));
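
recordSourceLine no longer assumes its scope is a compile unit; it dispatches on the descriptor kind to recover the directory and file name before interning a source id. The dispatch in standalone form (illustrative types):

    #include <cassert>
    #include <string>
    #include <utility>

    enum class Kind { CompileUnit, Subprogram, LexicalBlock };

    struct ScopeDesc {
      Kind kind;
      std::string dir, file;   // each descriptor kind knows its own dir/file
    };

    std::pair<std::string, std::string> dirAndFile(const ScopeDesc &s) {
      switch (s.kind) {
      case Kind::CompileUnit:   // DICompileUnit accessors in the real code
      case Kind::Subprogram:    // DISubprogram accessors
      case Kind::LexicalBlock:  // DILexicalBlock accessors
        return {s.dir, s.file};
      }
      assert(0 && "Unexpected scope info");  // mirrors the fallback above
      return {};
    }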
 
@@ -1995,216 +2165,22 @@ unsigned DwarfDebug::getOrCreateSourceID(const std::string &DirName,
   return SrcId;
 }
 
-/// RecordRegionStart - Indicate the start of a region.
-unsigned DwarfDebug::RecordRegionStart(MDNode *N) {
-  if (TimePassesIsEnabled)
-    DebugTimer->startTimer();
-
-  DbgScope *Scope = getOrCreateScope(N);
-  unsigned ID = MMI->NextLabelID();
-  if (!Scope->getStartLabelID()) Scope->setStartLabelID(ID);
-  LexicalScopeStack.push_back(Scope);
-
-  if (TimePassesIsEnabled)
-    DebugTimer->stopTimer();
-
-  return ID;
-}
-
-/// RecordRegionEnd - Indicate the end of a region.
-unsigned DwarfDebug::RecordRegionEnd(MDNode *N) {
-  if (TimePassesIsEnabled)
-    DebugTimer->startTimer();
-
-  DbgScope *Scope = getOrCreateScope(N);
-  unsigned ID = MMI->NextLabelID();
-  Scope->setEndLabelID(ID);
-  // FIXME : region.end() may not be in the last basic block.
-  // For now, do not pop last lexical scope because next basic
-  // block may start new inlined function's body.
-  unsigned LSSize = LexicalScopeStack.size();
-  if (LSSize != 0 && LSSize != 1)
-    LexicalScopeStack.pop_back();
-
-  if (TimePassesIsEnabled)
-    DebugTimer->stopTimer();
-
-  return ID;
-}
-
-/// RecordVariable - Indicate the declaration of a local variable.
-void DwarfDebug::RecordVariable(MDNode *N, unsigned FrameIndex) {
-  if (TimePassesIsEnabled)
-    DebugTimer->startTimer();
-
-  DIDescriptor Desc(N);
-  DbgScope *Scope = NULL;
-  bool InlinedFnVar = false;
-
-  if (Desc.getTag() == dwarf::DW_TAG_variable)
-    Scope = getOrCreateScope(DIGlobalVariable(N).getContext().getNode());
-  else {
-    bool InlinedVar = false;
-    MDNode *Context = DIVariable(N).getContext().getNode();
-    DISubprogram SP(Context);
-    if (!SP.isNull()) {
-      // SP is inserted into DbgAbstractScopeMap when inlined function
-      // start was recorded by RecordInlineFnStart.
-      DenseMap<MDNode *, DbgScope *>::iterator
-        I = DbgAbstractScopeMap.find(SP.getNode());
-      if (I != DbgAbstractScopeMap.end()) {
-        InlinedVar = true;
-        Scope = I->second;
-      }
-    }
-    if (!InlinedVar)
-      Scope = getOrCreateScope(Context);
-  }
-
-  assert(Scope && "Unable to find the variable's scope");
-  DbgVariable *DV = new DbgVariable(DIVariable(N), FrameIndex, InlinedFnVar);
-  Scope->AddVariable(DV);
-
-  if (TimePassesIsEnabled)
-    DebugTimer->stopTimer();
-}
-
-//// RecordInlinedFnStart - Indicate the start of inlined subroutine.
-unsigned DwarfDebug::RecordInlinedFnStart(DISubprogram &SP, DICompileUnit CU,
-                                          unsigned Line, unsigned Col) {
-  unsigned LabelID = MMI->NextLabelID();
-
-  if (!MAI->doesDwarfUsesInlineInfoSection())
-    return LabelID;
-
-  if (TimePassesIsEnabled)
-    DebugTimer->startTimer();
-
-  MDNode *Node = SP.getNode();
-  DenseMap<const MDNode *, DbgScope *>::iterator
-    II = AbstractInstanceRootMap.find(Node);
-
-  if (II == AbstractInstanceRootMap.end()) {
-    // Create an abstract instance entry for this inlined function if it doesn't
-    // already exist.
-    DbgScope *Scope = new DbgScope(NULL, DIDescriptor(Node));
-
-    // Get the compile unit context.
-    DIE *SPDie = ModuleCU->getDieMapSlotFor(Node);
-    if (!SPDie)
-      SPDie = CreateSubprogramDIE(ModuleCU, SP, false, true);
-
-    // Mark as being inlined. This makes this subprogram entry an abstract
-    // instance root.
-    // FIXME: Our debugger doesn't care about the value of DW_AT_inline, only
-    // that it's defined. That probably won't change in the future. However,
-    // this could be more elegant.
-    AddUInt(SPDie, dwarf::DW_AT_inline, 0, dwarf::DW_INL_declared_not_inlined);
-
-    // Keep track of the abstract scope for this function.
-    DbgAbstractScopeMap[Node] = Scope;
-
-    AbstractInstanceRootMap[Node] = Scope;
-    AbstractInstanceRootList.push_back(Scope);
-  }
-
-  // Create a concrete inlined instance for this inlined function.
-  DbgConcreteScope *ConcreteScope = new DbgConcreteScope(DIDescriptor(Node));
-  DIE *ScopeDie = new DIE(dwarf::DW_TAG_inlined_subroutine);
-  ScopeDie->setAbstractCompileUnit(ModuleCU);
-
-  DIE *Origin = ModuleCU->getDieMapSlotFor(Node);
-  AddDIEEntry(ScopeDie, dwarf::DW_AT_abstract_origin,
-              dwarf::DW_FORM_ref4, Origin);
-  AddUInt(ScopeDie, dwarf::DW_AT_call_file, 0, ModuleCU->getID());
-  AddUInt(ScopeDie, dwarf::DW_AT_call_line, 0, Line);
-  AddUInt(ScopeDie, dwarf::DW_AT_call_column, 0, Col);
-
-  ConcreteScope->setDie(ScopeDie);
-  ConcreteScope->setStartLabelID(LabelID);
-  MMI->RecordUsedDbgLabel(LabelID);
-
-  LexicalScopeStack.back()->AddConcreteInst(ConcreteScope);
-
-  // Keep track of the concrete scope that's inlined into this function.
-  DenseMap<MDNode *, SmallVector<DbgScope *, 8> >::iterator
-    SI = DbgConcreteScopeMap.find(Node);
-
-  if (SI == DbgConcreteScopeMap.end())
-    DbgConcreteScopeMap[Node].push_back(ConcreteScope);
-  else
-    SI->second.push_back(ConcreteScope);
-
-  // Track the start label for this inlined function.
-  DenseMap<MDNode *, SmallVector<unsigned, 4> >::iterator
-    I = InlineInfo.find(Node);
-
-  if (I == InlineInfo.end())
-    InlineInfo[Node].push_back(LabelID);
-  else
-    I->second.push_back(LabelID);
-
-  if (TimePassesIsEnabled)
-    DebugTimer->stopTimer();
-
-  return LabelID;
-}
-
-/// RecordInlinedFnEnd - Indicate the end of inlined subroutine.
-unsigned DwarfDebug::RecordInlinedFnEnd(DISubprogram &SP) {
-  if (!MAI->doesDwarfUsesInlineInfoSection())
-    return 0;
-
-  if (TimePassesIsEnabled)
-    DebugTimer->startTimer();
-
-  MDNode *Node = SP.getNode();
-  DenseMap<MDNode *, SmallVector<DbgScope *, 8> >::iterator
-    I = DbgConcreteScopeMap.find(Node);
-
-  if (I == DbgConcreteScopeMap.end()) {
-    // FIXME: Can this situation actually happen? And if so, should it?
-    if (TimePassesIsEnabled)
-      DebugTimer->stopTimer();
-
-    return 0;
-  }
-
-  SmallVector<DbgScope *, 8> &Scopes = I->second;
-  if (Scopes.empty()) {
-    // Returned ID is 0 if this is unbalanced "end of inlined
-    // scope". This could happen if optimizer eats dbg intrinsics
-    // or "beginning of inlined scope" is not recoginized due to
-    // missing location info. In such cases, ignore this region.end.
-    return 0;
-  }
-
-  DbgScope *Scope = Scopes.back(); Scopes.pop_back();
-  unsigned ID = MMI->NextLabelID();
-  MMI->RecordUsedDbgLabel(ID);
-  Scope->setEndLabelID(ID);
-
-  if (TimePassesIsEnabled)
-    DebugTimer->stopTimer();
-
-  return ID;
-}
-
 //===----------------------------------------------------------------------===//
 // Emit Methods
 //===----------------------------------------------------------------------===//
 
-/// SizeAndOffsetDie - Compute the size and offset of a DIE.
+/// computeSizeAndOffset - Compute the size and offset of a DIE.
 ///
-unsigned DwarfDebug::SizeAndOffsetDie(DIE *Die, unsigned Offset, bool Last) {
+unsigned
+DwarfDebug::computeSizeAndOffset(DIE *Die, unsigned Offset, bool Last) {
   // Get the children.
   const std::vector<DIE *> &Children = Die->getChildren();
 
   // If not last sibling and has children then add sibling offset attribute.
-  if (!Last && !Children.empty()) Die->AddSiblingOffset();
+  if (!Last && !Children.empty()) Die->addSiblingOffset();
 
   // Record the abbreviation.
-  AssignAbbrevNumber(Die->getAbbrev());
+  assignAbbrevNumber(Die->getAbbrev());
 
   // Get the abbreviation for this DIE.
   unsigned AbbrevNumber = Die->getAbbrevNumber();
@@ -2230,7 +2206,7 @@ unsigned DwarfDebug::SizeAndOffsetDie(DIE *Die, unsigned Offset, bool Last) {
            "Children flag not set");
 
     for (unsigned j = 0, M = Children.size(); j < M; ++j)
-      Offset = SizeAndOffsetDie(Children[j], Offset, (j + 1) == M);
+      Offset = computeSizeAndOffset(Children[j], Offset, (j + 1) == M);
 
     // End of children marker.
     Offset += sizeof(int8_t);
@@ -2240,9 +2216,9 @@ unsigned DwarfDebug::SizeAndOffsetDie(DIE *Die, unsigned Offset, bool Last) {
   return Offset;
 }
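
computeSizeAndOffset assigns section offsets in a pre-order walk: a DIE's size is its abbreviation code plus its attribute payloads, children follow immediately, and a one-byte end-of-children marker closes any DIE that has children. A toy version over a generic tree (fixed-width sizes instead of real ULEB128/form sizes):

    #include <vector>

    struct DIE {
      unsigned offset = 0, size = 0;
      unsigned payload = 4;              // pretend attribute payload, in bytes
      std::vector<DIE *> children;
    };

    unsigned computeSizeAndOffset(DIE *die, unsigned offset) {
      die->offset = offset;
      offset += 1 /* abbrev code */ + die->payload;
      if (!die->children.empty()) {
        for (DIE *c : die->children)
          offset = computeSizeAndOffset(c, offset);
        offset += 1;                     // end-of-children marker
      }
      die->size = offset - die->offset;
      return offset;
    }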
 
-/// SizeAndOffsets - Compute the size and offset of all the DIEs.
+/// computeSizeAndOffsets - Compute the size and offset of all the DIEs.
 ///
-void DwarfDebug::SizeAndOffsets() {
+void DwarfDebug::computeSizeAndOffsets() {
   // Compute size of compile unit header.
   static unsigned Offset =
     sizeof(int32_t) + // Length of Compilation Unit Info
@@ -2250,13 +2226,13 @@ void DwarfDebug::SizeAndOffsets() {
     sizeof(int32_t) + // Offset Into Abbrev. Section
     sizeof(int8_t);   // Pointer Size (in bytes)
 
-  SizeAndOffsetDie(ModuleCU->getDie(), Offset, true);
+  computeSizeAndOffset(ModuleCU->getCUDie(), Offset, true);
   CompileUnitOffsets[ModuleCU] = 0;
 }
 
-/// EmitInitial - Emit initial Dwarf declarations.  This is necessary for cc
+/// emitInitial - Emit initial Dwarf declarations.  This is necessary for cc
 /// tools to recognize the object file contains Dwarf information.
-void DwarfDebug::EmitInitial() {
+void DwarfDebug::emitInitial() {
   // Check to see if we already emitted initial headers.
   if (didInitial) return;
   didInitial = true;
@@ -2287,6 +2263,8 @@ void DwarfDebug::EmitInitial() {
   EmitLabel("section_loc", 0);
   Asm->OutStreamer.SwitchSection(TLOF.getDwarfPubNamesSection());
   EmitLabel("section_pubnames", 0);
+  Asm->OutStreamer.SwitchSection(TLOF.getDwarfPubTypesSection());
+  EmitLabel("section_pubtypes", 0);
   Asm->OutStreamer.SwitchSection(TLOF.getDwarfStrSection());
   EmitLabel("section_str", 0);
   Asm->OutStreamer.SwitchSection(TLOF.getDwarfRangesSection());
@@ -2298,9 +2276,9 @@ void DwarfDebug::EmitInitial() {
   EmitLabel("data_begin", 0);
 }
 
-/// EmitDIE - Recusively Emits a debug information entry.
+/// emitDIE - Recursively emits a debug information entry.
 ///
-void DwarfDebug::EmitDIE(DIE *Die) {
+void DwarfDebug::emitDIE(DIE *Die) {
   // Get the abbreviation for this DIE.
   unsigned AbbrevNumber = Die->getAbbrevNumber();
   const DIEAbbrev *Abbrev = Abbreviations[AbbrevNumber - 1];
@@ -2330,15 +2308,12 @@ void DwarfDebug::EmitDIE(DIE *Die) {
 
     switch (Attr) {
     case dwarf::DW_AT_sibling:
-      Asm->EmitInt32(Die->SiblingOffset());
+      Asm->EmitInt32(Die->getSiblingOffset());
       break;
     case dwarf::DW_AT_abstract_origin: {
       DIEEntry *E = cast<DIEEntry>(Values[i]);
       DIE *Origin = E->getEntry();
-      unsigned Addr =
-        CompileUnitOffsets[Die->getAbstractCompileUnit()] +
-        Origin->getOffset();
-
+      unsigned Addr = Origin->getOffset();
       Asm->EmitInt32(Addr);
       break;
     }
@@ -2356,16 +2331,16 @@ void DwarfDebug::EmitDIE(DIE *Die) {
     const std::vector<DIE *> &Children = Die->getChildren();
 
     for (unsigned j = 0, M = Children.size(); j < M; ++j)
-      EmitDIE(Children[j]);
+      emitDIE(Children[j]);
 
     Asm->EmitInt8(0); Asm->EOL("End Of Children Mark");
   }
 }
 
-/// EmitDebugInfo / EmitDebugInfoPerCU - Emit the debug info section.
+/// emitDebugInfo / emitDebugInfoPerCU - Emit the debug info section.
 ///
-void DwarfDebug::EmitDebugInfoPerCU(CompileUnit *Unit) {
-  DIE *Die = Unit->getDie();
+void DwarfDebug::emitDebugInfoPerCU(CompileUnit *Unit) {
+  DIE *Die = Unit->getCUDie();
 
   // Emit the compile units header.
   EmitLabel("info_begin", Unit->getID());
@@ -2383,7 +2358,7 @@ void DwarfDebug::EmitDebugInfoPerCU(CompileUnit *Unit) {
   Asm->EOL("Offset Into Abbrev. Section");
   Asm->EmitInt8(TD->getPointerSize()); Asm->EOL("Address Size (in bytes)");
 
-  EmitDIE(Die);
+  emitDIE(Die);
   // FIXME - extra padding for gdb bug.
   Asm->EmitInt8(0); Asm->EOL("Extra Pad For GDB");
   Asm->EmitInt8(0); Asm->EOL("Extra Pad For GDB");
@@ -2394,17 +2369,17 @@ void DwarfDebug::EmitDebugInfoPerCU(CompileUnit *Unit) {
   Asm->EOL();
 }
 
-void DwarfDebug::EmitDebugInfo() {
+void DwarfDebug::emitDebugInfo() {
   // Start debug info section.
   Asm->OutStreamer.SwitchSection(
                             Asm->getObjFileLowering().getDwarfInfoSection());
 
-  EmitDebugInfoPerCU(ModuleCU);
+  emitDebugInfoPerCU(ModuleCU);
 }
 
-/// EmitAbbreviations - Emit the abbreviation section.
+/// emitAbbreviations - Emit the abbreviation section.
 ///
-void DwarfDebug::EmitAbbreviations() const {
+void DwarfDebug::emitAbbreviations() const {
   // Check to see if it is worth the effort.
   if (!Abbreviations.empty()) {
     // Start the debug abbrev section.
@@ -2436,10 +2411,10 @@ void DwarfDebug::EmitAbbreviations() const {
   }
 }
 
-/// EmitEndOfLineMatrix - Emit the last address of the section and the end of
+/// emitEndOfLineMatrix - Emit the last address of the section and the end of
 /// the line matrix.
 ///
-void DwarfDebug::EmitEndOfLineMatrix(unsigned SectionEnd) {
+void DwarfDebug::emitEndOfLineMatrix(unsigned SectionEnd) {
   // Define last address of section.
   Asm->EmitInt8(0); Asm->EOL("Extended Op");
   Asm->EmitInt8(TD->getPointerSize() + 1); Asm->EOL("Op size");
@@ -2452,9 +2427,9 @@ void DwarfDebug::EmitEndOfLineMatrix(unsigned SectionEnd) {
   Asm->EmitInt8(1); Asm->EOL();
 }
 
-/// EmitDebugLines - Emit source line information.
+/// emitDebugLines - Emit source line information.
 ///
-void DwarfDebug::EmitDebugLines() {
+void DwarfDebug::emitDebugLines() {
   // If the target is using .loc/.file, the assembler will be emitting the
   // .debug_line table automatically.
   if (MAI->hasDotLocAndDotFile())
@@ -2603,22 +2578,22 @@ void DwarfDebug::EmitDebugLines() {
       }
     }
 
-    EmitEndOfLineMatrix(j + 1);
+    emitEndOfLineMatrix(j + 1);
   }
 
   if (SecSrcLinesSize == 0)
     // Because we're emitting a debug_line section, we still need a line
     // table. The linker and friends expect it to exist. If there's nothing to
     // put into it, emit an empty table.
-    EmitEndOfLineMatrix(1);
+    emitEndOfLineMatrix(1);
 
   EmitLabel("line_end", 0);
   Asm->EOL();
 }
 
-/// EmitCommonDebugFrame - Emit common frame info into a debug frame section.
+/// emitCommonDebugFrame - Emit common frame info into a debug frame section.
 ///
-void DwarfDebug::EmitCommonDebugFrame() {
+void DwarfDebug::emitCommonDebugFrame() {
   if (!MAI->doesDwarfRequireFrameSection())
     return;
 
@@ -2661,10 +2636,10 @@ void DwarfDebug::EmitCommonDebugFrame() {
   Asm->EOL();
 }
 
-/// EmitFunctionDebugFrame - Emit per function frame info into a debug frame
+/// emitFunctionDebugFrame - Emit per function frame info into a debug frame
 /// section.
 void
-DwarfDebug::EmitFunctionDebugFrame(const FunctionDebugFrameInfo&DebugFrameInfo){
+DwarfDebug::emitFunctionDebugFrame(const FunctionDebugFrameInfo&DebugFrameInfo){
   if (!MAI->doesDwarfRequireFrameSection())
     return;
 
@@ -2697,7 +2672,7 @@ DwarfDebug::EmitFunctionDebugFrame(const FunctionDebugFrameInfo&DebugFrameInfo){
   Asm->EOL();
 }
 
-void DwarfDebug::EmitDebugPubNamesPerCU(CompileUnit *Unit) {
+void DwarfDebug::emitDebugPubNamesPerCU(CompileUnit *Unit) {
   EmitDifference("pubnames_end", Unit->getID(),
                  "pubnames_begin", Unit->getID(), true);
   Asm->EOL("Length of Public Names Info");
@@ -2714,7 +2689,7 @@ void DwarfDebug::EmitDebugPubNamesPerCU(CompileUnit *Unit) {
                  true);
   Asm->EOL("Compilation Unit Length");
 
-  StringMap<DIE*> &Globals = Unit->getGlobals();
+  const StringMap<DIE*> &Globals = Unit->getGlobals();
   for (StringMap<DIE*>::const_iterator
          GI = Globals.begin(), GE = Globals.end(); GI != GE; ++GI) {
     const char *Name = GI->getKeyData();
@@ -2730,19 +2705,55 @@ void DwarfDebug::EmitDebugPubNamesPerCU(CompileUnit *Unit) {
   Asm->EOL();
 }
 
-/// EmitDebugPubNames - Emit visible names into a debug pubnames section.
+/// emitDebugPubNames - Emit visible names into a debug pubnames section.
 ///
-void DwarfDebug::EmitDebugPubNames() {
+void DwarfDebug::emitDebugPubNames() {
   // Start the dwarf pubnames section.
   Asm->OutStreamer.SwitchSection(
                           Asm->getObjFileLowering().getDwarfPubNamesSection());
 
-  EmitDebugPubNamesPerCU(ModuleCU);
+  emitDebugPubNamesPerCU(ModuleCU);
 }
 
-/// EmitDebugStr - Emit visible names into a debug str section.
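+/// emitDebugPubTypes - Emit visible types into a debug pubtypes section.
+///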
+void DwarfDebug::emitDebugPubTypes() {
+  // Start the dwarf pubtypes section.
+  Asm->OutStreamer.SwitchSection(
+                          Asm->getObjFileLowering().getDwarfPubTypesSection());
+  EmitDifference("pubtypes_end", ModuleCU->getID(),
+                 "pubtypes_begin", ModuleCU->getID(), true);
+  Asm->EOL("Length of Public Types Info");
+
+  EmitLabel("pubtypes_begin", ModuleCU->getID());
+
+  Asm->EmitInt16(dwarf::DWARF_VERSION); Asm->EOL("DWARF Version");
+
+  EmitSectionOffset("info_begin", "section_info",
+                    ModuleCU->getID(), 0, true, false);
+  Asm->EOL("Offset of Compilation ModuleCU Info");
+
+  EmitDifference("info_end", ModuleCU->getID(), "info_begin", ModuleCU->getID(),
+                 true);
+  Asm->EOL("Compilation ModuleCU Length");
+
+  const StringMap<DIE*> &Globals = ModuleCU->getGlobalTypes();
+  for (StringMap<DIE*>::const_iterator
+         GI = Globals.begin(), GE = Globals.end(); GI != GE; ++GI) {
+    const char *Name = GI->getKeyData();
+    DIE *Entity = GI->second;
+
+    Asm->EmitInt32(Entity->getOffset()); Asm->EOL("DIE offset");
+    Asm->EmitString(Name, strlen(Name)); Asm->EOL("External Name");
+  }
+
+  Asm->EmitInt32(0); Asm->EOL("End Mark");
+  EmitLabel("pubtypes_end", ModuleCU->getID());
+
+  Asm->EOL();
+}
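
The pubtypes section emitted above follows the pubnames layout: unit length, DWARF version, the offset and length of the owning compile unit's .debug_info contribution, then (DIE offset, name) pairs terminated by a zero word. A simplified byte-stream writer showing that layout (host-endian, fixed-width fields only, hypothetical helper names):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <string>
    #include <utility>
    #include <vector>

    struct Writer {
      std::vector<uint8_t> bytes;
      void u16(uint16_t v) { raw(&v, 2); }
      void u32(uint32_t v) { raw(&v, 4); }
      void str(const std::string &s) { raw(s.c_str(), s.size() + 1); }
      void raw(const void *p, std::size_t n) {
        const uint8_t *b = static_cast<const uint8_t *>(p);
        bytes.insert(bytes.end(), b, b + n);
      }
    };

    void emitPubTypes(Writer &w, uint32_t cuInfoOffset, uint32_t cuInfoLength,
                      const std::vector<std::pair<uint32_t, std::string>> &types) {
      std::size_t lengthAt = w.bytes.size();
      w.u32(0);                          // unit length, patched below
      w.u16(2);                          // DWARF version
      w.u32(cuInfoOffset);               // offset of CU in .debug_info
      w.u32(cuInfoLength);               // length of that contribution
      for (const auto &t : types) {
        w.u32(t.first);                  // DIE offset
        w.str(t.second);                 // external name, NUL-terminated
      }
      w.u32(0);                          // end mark
      uint32_t len = uint32_t(w.bytes.size() - lengthAt - 4);
      std::memcpy(&w.bytes[lengthAt], &len, 4);
    }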
+
+/// emitDebugStr - Emit visible names into a debug str section.
 ///
-void DwarfDebug::EmitDebugStr() {
+void DwarfDebug::emitDebugStr() {
   // Check to see if it is worth the effort.
   if (!StringPool.empty()) {
     // Start the dwarf str section.
@@ -2764,9 +2775,9 @@ void DwarfDebug::EmitDebugStr() {
   }
 }
 
-/// EmitDebugLoc - Emit visible names into a debug loc section.
+/// emitDebugLoc - Emit visible names into a debug loc section.
 ///
-void DwarfDebug::EmitDebugLoc() {
+void DwarfDebug::emitDebugLoc() {
   // Start the dwarf loc section.
   Asm->OutStreamer.SwitchSection(
                               Asm->getObjFileLowering().getDwarfLocSection());
@@ -2810,18 +2821,18 @@ void DwarfDebug::EmitDebugARanges() {
   Asm->EOL();
 }
 
-/// EmitDebugRanges - Emit visible names into a debug ranges section.
+/// emitDebugRanges - Emit visible names into a debug ranges section.
 ///
-void DwarfDebug::EmitDebugRanges() {
+void DwarfDebug::emitDebugRanges() {
   // Start the dwarf ranges section.
   Asm->OutStreamer.SwitchSection(
                             Asm->getObjFileLowering().getDwarfRangesSection());
   Asm->EOL();
 }
 
-/// EmitDebugMacInfo - Emit visible names into a debug macinfo section.
+/// emitDebugMacInfo - Emit visible names into a debug macinfo section.
 ///
-void DwarfDebug::EmitDebugMacInfo() {
+void DwarfDebug::emitDebugMacInfo() {
   if (const MCSection *LineInfo =
       Asm->getObjFileLowering().getDwarfMacroInfoSection()) {
     // Start the dwarf macinfo section.
@@ -2830,7 +2841,7 @@ void DwarfDebug::EmitDebugMacInfo() {
   }
 }
 
-/// EmitDebugInlineInfo - Emit inline info using following format.
+/// emitDebugInlineInfo - Emit inline info using following format.
 /// Section Header:
 /// 1. length of section
 /// 2. Dwarf version number
@@ -2848,7 +2859,7 @@ void DwarfDebug::EmitDebugMacInfo() {
 /// inlined instance; the die_offset points to the inlined_subroutine die in the
 /// __debug_info section, and the low_pc is the starting address for the
 /// inlining instance.
-void DwarfDebug::EmitDebugInlineInfo() {
+void DwarfDebug::emitDebugInlineInfo() {
   if (!MAI->doesDwarfUsesInlineInfoSection())
     return;
 
@@ -2867,15 +2878,20 @@ void DwarfDebug::EmitDebugInlineInfo() {
   Asm->EmitInt16(dwarf::DWARF_VERSION); Asm->EOL("Dwarf Version");
   Asm->EmitInt8(TD->getPointerSize()); Asm->EOL("Address Size (in bytes)");
 
-  for (DenseMap<MDNode *, SmallVector<unsigned, 4> >::iterator
-         I = InlineInfo.begin(), E = InlineInfo.end(); I != E; ++I) {
-    MDNode *Node = I->first;
-    SmallVector<unsigned, 4> &Labels = I->second;
+  for (SmallVector<MDNode *, 4>::iterator I = InlinedSPNodes.begin(),
+         E = InlinedSPNodes.end(); I != E; ++I) {
+
+    MDNode *Node = *I;
+    ValueMap<MDNode *, SmallVector<InlineInfoLabels, 4> >::iterator II
+      = InlineInfo.find(Node);
+    SmallVector<InlineInfoLabels, 4> &Labels = II->second;
     DISubprogram SP(Node);
-    const char *LName = SP.getLinkageName();
-    const char *Name = SP.getName();
+    StringRef LName = SP.getLinkageName();
+    StringRef Name = SP.getName();
 
-    if (!LName)
+    if (LName.empty())
       Asm->EmitString(Name);
     else {
       // Skip special LLVM prefix that is used to inform the asm printer to not
@@ -2883,18 +2899,22 @@ void DwarfDebug::EmitDebugInlineInfo() {
       // Objective-C symbol names and symbol whose name is replaced using GCC's
       // __asm__ attribute.
       if (LName[0] == 1)
-        LName = &LName[1];
-      Asm->EmitString(LName);
+        LName = LName.substr(1);
+      EmitSectionOffset("string", "section_str",
+                        StringPool.idFor(LName), false, true);
+
     }
     Asm->EOL("MIPS linkage name");
-
-    Asm->EmitString(Name); Asm->EOL("Function name");
-
+    EmitSectionOffset("string", "section_str",
+                      StringPool.idFor(Name), false, true);
+    Asm->EOL("Function name");
     Asm->EmitULEB128Bytes(Labels.size()); Asm->EOL("Inline count");
 
-    for (SmallVector<unsigned, 4>::iterator LI = Labels.begin(),
+    for (SmallVector<InlineInfoLabels, 4>::iterator LI = Labels.begin(),
            LE = Labels.end(); LI != LE; ++LI) {
-      DIE *SP = ModuleCU->getDieMapSlotFor(Node);
+      DIE *SP = LI->second;
       Asm->EmitInt32(SP->getOffset()); Asm->EOL("DIE offset");
 
       if (TD->getPointerSize() == sizeof(int32_t))
@@ -2902,7 +2922,7 @@ void DwarfDebug::EmitDebugInlineInfo() {
       else
         O << MAI->getData64bitsDirective();
 
-      PrintLabelName("label", *LI); Asm->EOL("low_pc");
+      PrintLabelName("label", LI->first); Asm->EOL("low_pc");
     }
   }
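
The inline-info bookkeeping behind this function changed shape in this merge: InlineInfo now maps each subprogram node to (label id, concrete DIE) pairs, and InlinedSPNodes preserves emission order, so each entry can reference the concrete inlined_subroutine DIE rather than the abstract one. The recording side, approximated with standard containers:

    #include <map>
    #include <utility>
    #include <vector>

    struct DIE;
    struct MDNode;

    // label id of the inlined instance's low_pc + the concrete DIE it names
    using InlineInfoLabels = std::pair<unsigned, DIE *>;

    std::map<MDNode *, std::vector<InlineInfoLabels>> inlineInfo;
    std::vector<MDNode *> inlinedSPNodes;   // subprograms in emission order

    void recordInlinedInstance(MDNode *sp, unsigned labelID, DIE *concreteDIE) {
      auto &labels = inlineInfo[sp];
      if (labels.empty())
        inlinedSPNodes.push_back(sp);       // first sighting of this subprogram
      labels.emplace_back(labelID, concreteDIE);
    }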
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
index f671ae3..679d9b9 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
@@ -20,7 +20,7 @@
 #include "llvm/CodeGen/MachineLocation.h"
 #include "llvm/Analysis/DebugInfo.h"
 #include "llvm/Support/raw_ostream.h"
-#include "llvm/ADT/DenseMap.h"
+#include "llvm/ADT/ValueMap.h"
 #include "llvm/ADT/FoldingSet.h"
 #include "llvm/ADT/SmallSet.h"
 #include "llvm/ADT/StringMap.h"
@@ -30,9 +30,9 @@
 namespace llvm {
 
 class CompileUnit;
-class DbgVariable;
-class DbgScope;
 class DbgConcreteScope;
+class DbgScope;
+class DbgVariable;
 class MachineFrameInfo;
 class MachineModuleInfo;
 class MCAsmInfo;
@@ -41,7 +41,7 @@ class Timer;
 //===----------------------------------------------------------------------===//
 /// SrcLineInfo - This class is used to record source line correspondence.
 ///
-class VISIBILITY_HIDDEN SrcLineInfo {
+class SrcLineInfo {
   unsigned Line;                     // Source line number.
   unsigned Column;                   // Source column.
   unsigned SourceID;                 // Source ID number.
@@ -57,7 +57,7 @@ public:
   unsigned getLabelID() const { return LabelID; }
 };
 
-class VISIBILITY_HIDDEN DwarfDebug : public Dwarf {
+class DwarfDebug : public Dwarf {
   //===--------------------------------------------------------------------===//
   // Attributes used to construct specific Dwarf sections.
   //
@@ -106,13 +106,9 @@ class VISIBILITY_HIDDEN DwarfDebug : public Dwarf {
   /// Lines - List of source line correspondences.
   std::vector<SrcLineInfo> Lines;
 
-  /// ValuesSet - Used to uniquely define values.
-  ///
-  FoldingSet<DIEValue> ValuesSet;
-
-  /// Values - A list of all the unique values in use.
+  /// DIEValues - A list of all the unique values in use.
   ///
-  std::vector<DIEValue *> Values;
+  std::vector<DIEValue *> DIEValues;
 
   /// StringPool - A UniqueVector of strings used by indirect references.
   ///
@@ -134,48 +130,52 @@ class VISIBILITY_HIDDEN DwarfDebug : public Dwarf {
   ///
   bool shouldEmit;
 
-  // FunctionDbgScope - Top level scope for the current function.
+  // CurrentFnDbgScope - Top level scope for the current function.
   //
-  DbgScope *FunctionDbgScope;
+  DbgScope *CurrentFnDbgScope;
   
   /// DbgScopeMap - Tracks the scopes in the current function.
-  DenseMap<MDNode *, DbgScope *> DbgScopeMap;
+  ///
+  ValueMap<MDNode *, DbgScope *> DbgScopeMap;
+
+  /// ConcreteScopes - Tracks the concrete scopes in the current function.
+  /// These scopes are also included in DbgScopeMap.
+  ValueMap<MDNode *, DbgScope *> ConcreteScopes;
+
+  /// AbstractScopes - Tracks the abstract scopes in a module. These scopes
+  /// are not included in DbgScopeMap.
+  ValueMap<MDNode *, DbgScope *> AbstractScopes;
+  SmallVector<DbgScope *, 4> AbstractScopesList;
+
+  /// AbstractVariables - Collection of abstract variables.
+  ValueMap<MDNode *, DbgVariable *> AbstractVariables;
+
+  /// InlinedSubprogramDIEs - Collection of subprogram DIEs that are marked
+  /// (at the end of the module) as DW_AT_inline.
+  SmallPtrSet<DIE *, 4> InlinedSubprogramDIEs;
 
-  typedef DenseMap<const MachineInstr *, SmallVector<DbgScope *, 2> > 
+  /// AbstractSubprogramDIEs - Collection of abstract subprogram DIEs.
+  SmallPtrSet<DIE *, 4> AbstractSubprogramDIEs;
+
+  /// ScopedGVs - Tracks global variables that are not at file scope.
+  /// For example, void f() { static int b = 42; }
+  SmallVector<WeakVH, 4> ScopedGVs;
+
+  typedef SmallVector<DbgScope *, 2> ScopeVector;
+  typedef DenseMap<const MachineInstr *, ScopeVector>
     InsnToDbgScopeMapTy;
 
-  /// DbgScopeBeginMap - Maps instruction with a list DbgScopes it starts.
+  /// DbgScopeBeginMap - Maps instruction with a list of DbgScopes it starts.
   InsnToDbgScopeMapTy DbgScopeBeginMap;
 
   /// DbgScopeEndMap - Maps instruction with a list of DbgScopes it ends.
   InsnToDbgScopeMapTy DbgScopeEndMap;
 
-  /// DbgAbstractScopeMap - Tracks abstract instance scopes in the current
-  /// function.
-  DenseMap<MDNode *, DbgScope *> DbgAbstractScopeMap;
-
-  /// DbgConcreteScopeMap - Tracks concrete instance scopes in the current
-  /// function.
-  DenseMap<MDNode *,
-           SmallVector<DbgScope *, 8> > DbgConcreteScopeMap;
-
   /// InlineInfo - Keep track of inlined functions and their location.  This
   /// information is used to populate the debug_inlined section.
-  DenseMap<MDNode *, SmallVector<unsigned, 4> > InlineInfo;
-
-  /// AbstractInstanceRootMap - Map of abstract instance roots of inlined
-  /// functions. These are subroutine entries that contain a DW_AT_inline
-  /// attribute.
-  DenseMap<const MDNode *, DbgScope *> AbstractInstanceRootMap;
-
-  /// AbstractInstanceRootList - List of abstract instance roots of inlined
-  /// functions. These are subroutine entries that contain a DW_AT_inline
-  /// attribute.
-  SmallVector<DbgScope *, 32> AbstractInstanceRootList;
-
-  /// LexicalScopeStack - A stack of lexical scopes. The top one is the current
-  /// scope.
-  SmallVector<DbgScope *, 16> LexicalScopeStack;
+  typedef std::pair<unsigned, DIE *> InlineInfoLabels;
+  ValueMap<MDNode *, SmallVector<InlineInfoLabels, 4> > InlineInfo;
+  SmallVector<MDNode *, 4> InlinedSPNodes;
 
   /// CompileUnitOffsets - A vector of the offsets of the compile units. This is
   /// used when calculating the "origin" of a concrete instance of an inlined
@@ -225,228 +225,244 @@ class VISIBILITY_HIDDEN DwarfDebug : public Dwarf {
     return SourceIds.size();
   }
 
-  /// AssignAbbrevNumber - Define a unique number for the abbreviation.
+  /// assignAbbrevNumber - Define a unique number for the abbreviation.
   ///
-  void AssignAbbrevNumber(DIEAbbrev &Abbrev);
+  void assignAbbrevNumber(DIEAbbrev &Abbrev);
 
-  /// CreateDIEEntry - Creates a new DIEEntry to be a proxy for a debug
+  /// createDIEEntry - Creates a new DIEEntry to be a proxy for a debug
   /// information entry.
-  DIEEntry *CreateDIEEntry(DIE *Entry = NULL);
-
-  /// SetDIEEntry - Set a DIEEntry once the debug information entry is defined.
-  ///
-  void SetDIEEntry(DIEEntry *Value, DIE *Entry);
+  DIEEntry *createDIEEntry(DIE *Entry = NULL);
 
-  /// AddUInt - Add an unsigned integer attribute data and value.
+  /// addUInt - Add an unsigned integer attribute data and value.
   ///
-  void AddUInt(DIE *Die, unsigned Attribute, unsigned Form, uint64_t Integer);
+  void addUInt(DIE *Die, unsigned Attribute, unsigned Form, uint64_t Integer);
 
-  /// AddSInt - Add an signed integer attribute data and value.
+  /// addSInt - Add an signed integer attribute data and value.
   ///
-  void AddSInt(DIE *Die, unsigned Attribute, unsigned Form, int64_t Integer);
+  void addSInt(DIE *Die, unsigned Attribute, unsigned Form, int64_t Integer);
 
-  /// AddString - Add a string attribute data and value.
+  /// addString - Add a string attribute data and value.
   ///
-  void AddString(DIE *Die, unsigned Attribute, unsigned Form,
-                 const std::string &String);
+  void addString(DIE *Die, unsigned Attribute, unsigned Form,
+                 const StringRef Str);
 
-  /// AddLabel - Add a Dwarf label attribute data and value.
+  /// addLabel - Add a Dwarf label attribute data and value.
   ///
-  void AddLabel(DIE *Die, unsigned Attribute, unsigned Form,
+  void addLabel(DIE *Die, unsigned Attribute, unsigned Form,
                 const DWLabel &Label);
 
-  /// AddObjectLabel - Add an non-Dwarf label attribute data and value.
+  /// addObjectLabel - Add an non-Dwarf label attribute data and value.
   ///
-  void AddObjectLabel(DIE *Die, unsigned Attribute, unsigned Form,
+  void addObjectLabel(DIE *Die, unsigned Attribute, unsigned Form,
                       const std::string &Label);
 
-  /// AddSectionOffset - Add a section offset label attribute data and value.
+  /// addSectionOffset - Add a section offset label attribute data and value.
   ///
-  void AddSectionOffset(DIE *Die, unsigned Attribute, unsigned Form,
+  void addSectionOffset(DIE *Die, unsigned Attribute, unsigned Form,
                         const DWLabel &Label, const DWLabel &Section,
                         bool isEH = false, bool useSet = true);
 
-  /// AddDelta - Add a label delta attribute data and value.
+  /// addDelta - Add a label delta attribute data and value.
   ///
-  void AddDelta(DIE *Die, unsigned Attribute, unsigned Form,
+  void addDelta(DIE *Die, unsigned Attribute, unsigned Form,
                 const DWLabel &Hi, const DWLabel &Lo);
 
-  /// AddDIEEntry - Add a DIE attribute data and value.
+  /// addDIEEntry - Add a DIE attribute data and value.
   ///
-  void AddDIEEntry(DIE *Die, unsigned Attribute, unsigned Form, DIE *Entry) {
-    Die->AddValue(Attribute, Form, CreateDIEEntry(Entry));
+  void addDIEEntry(DIE *Die, unsigned Attribute, unsigned Form, DIE *Entry) {
+    Die->addValue(Attribute, Form, createDIEEntry(Entry));
   }
 
-  /// AddBlock - Add block data.
+  /// addBlock - Add block data.
   ///
-  void AddBlock(DIE *Die, unsigned Attribute, unsigned Form, DIEBlock *Block);
+  void addBlock(DIE *Die, unsigned Attribute, unsigned Form, DIEBlock *Block);
 
-  /// AddSourceLine - Add location information to specified debug information
+  /// addSourceLine - Add location information to specified debug information
   /// entry.
-  void AddSourceLine(DIE *Die, const DIVariable *V);
-  void AddSourceLine(DIE *Die, const DIGlobal *G);
-  void AddSourceLine(DIE *Die, const DISubprogram *SP);
-  void AddSourceLine(DIE *Die, const DIType *Ty);
+  void addSourceLine(DIE *Die, const DIVariable *V);
+  void addSourceLine(DIE *Die, const DIGlobal *G);
+  void addSourceLine(DIE *Die, const DISubprogram *SP);
+  void addSourceLine(DIE *Die, const DIType *Ty);
 
-  /// AddAddress - Add an address attribute to a die based on the location
+  /// addAddress - Add an address attribute to a die based on the location
   /// provided.
-  void AddAddress(DIE *Die, unsigned Attribute,
+  void addAddress(DIE *Die, unsigned Attribute,
                   const MachineLocation &Location);
 
-  /// AddComplexAddress - Start with the address based on the location provided,
+  /// addComplexAddress - Start with the address based on the location provided,
   /// and generate the DWARF information necessary to find the actual variable
   /// (navigating the extra location information encoded in the type) based on
   /// the starting location.  Add the DWARF information to the die.
   ///
-  void AddComplexAddress(DbgVariable *&DV, DIE *Die, unsigned Attribute,
+  void addComplexAddress(DbgVariable *&DV, DIE *Die, unsigned Attribute,
                          const MachineLocation &Location);
 
-  // FIXME: Should be reformulated in terms of AddComplexAddress.
-  /// AddBlockByrefAddress - Start with the address based on the location
+  // FIXME: Should be reformulated in terms of addComplexAddress.
+  /// addBlockByrefAddress - Start with the address based on the location
   /// provided, and generate the DWARF information necessary to find the
   /// actual Block variable (navigating the Block struct) based on the
   /// starting location.  Add the DWARF information to the die.  Obsolete,
-  /// please use AddComplexAddress instead.
+  /// please use addComplexAddress instead.
   ///
-  void AddBlockByrefAddress(DbgVariable *&DV, DIE *Die, unsigned Attribute,
+  void addBlockByrefAddress(DbgVariable *&DV, DIE *Die, unsigned Attribute,
                             const MachineLocation &Location);
 
-  /// AddType - Add a new type attribute to the specified entity.
-  void AddType(CompileUnit *DW_Unit, DIE *Entity, DIType Ty);
+  /// addType - Add a new type attribute to the specified entity.
+  void addType(CompileUnit *DW_Unit, DIE *Entity, DIType Ty);
 
-  /// ConstructTypeDIE - Construct basic type die from DIBasicType.
-  void ConstructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
+  void addPubTypes(DISubprogram SP);
+
+  /// constructTypeDIE - Construct basic type die from DIBasicType.
+  void constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
                         DIBasicType BTy);
 
-  /// ConstructTypeDIE - Construct derived type die from DIDerivedType.
-  void ConstructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
+  /// constructTypeDIE - Construct derived type die from DIDerivedType.
+  void constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
                         DIDerivedType DTy);
 
-  /// ConstructTypeDIE - Construct type DIE from DICompositeType.
-  void ConstructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
+  /// constructTypeDIE - Construct type DIE from DICompositeType.
+  void constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
                         DICompositeType CTy);
 
-  /// ConstructSubrangeDIE - Construct subrange DIE from DISubrange.
-  void ConstructSubrangeDIE(DIE &Buffer, DISubrange SR, DIE *IndexTy);
+  /// constructSubrangeDIE - Construct subrange DIE from DISubrange.
+  void constructSubrangeDIE(DIE &Buffer, DISubrange SR, DIE *IndexTy);
 
-  /// ConstructArrayTypeDIE - Construct array type DIE from DICompositeType.
-  void ConstructArrayTypeDIE(CompileUnit *DW_Unit, DIE &Buffer, 
+  /// constructArrayTypeDIE - Construct array type DIE from DICompositeType.
+  void constructArrayTypeDIE(CompileUnit *DW_Unit, DIE &Buffer, 
                              DICompositeType *CTy);
 
-  /// ConstructEnumTypeDIE - Construct enum type DIE from DIEnumerator.
-  DIE *ConstructEnumTypeDIE(CompileUnit *DW_Unit, DIEnumerator *ETy);
+  /// constructEnumTypeDIE - Construct enum type DIE from DIEnumerator.
+  DIE *constructEnumTypeDIE(CompileUnit *DW_Unit, DIEnumerator *ETy);
 
-  /// CreateGlobalVariableDIE - Create new DIE using GV.
-  DIE *CreateGlobalVariableDIE(CompileUnit *DW_Unit,
+  /// createGlobalVariableDIE - Create new DIE using GV.
+  DIE *createGlobalVariableDIE(CompileUnit *DW_Unit,
                                const DIGlobalVariable &GV);
 
-  /// CreateMemberDIE - Create new member DIE.
-  DIE *CreateMemberDIE(CompileUnit *DW_Unit, const DIDerivedType &DT);
+  /// createMemberDIE - Create new member DIE.
+  DIE *createMemberDIE(CompileUnit *DW_Unit, const DIDerivedType &DT);
 
-  /// CreateSubprogramDIE - Create new DIE using SP.
-  DIE *CreateSubprogramDIE(CompileUnit *DW_Unit,
+  /// createSubprogramDIE - Create new DIE using SP.
+  DIE *createSubprogramDIE(CompileUnit *DW_Unit,
                            const DISubprogram &SP,
                            bool IsConstructor = false,
                            bool IsInlined = false);
 
-  /// FindCompileUnit - Get the compile unit for the given descriptor. 
+  /// findCompileUnit - Get the compile unit for the given descriptor. 
   ///
-  CompileUnit &FindCompileUnit(DICompileUnit Unit) const;
+  CompileUnit &findCompileUnit(DICompileUnit Unit) const;
 
-  /// CreateDbgScopeVariable - Create a new scope variable.
+  /// createDbgScopeVariable - Create a new scope variable.
   ///
-  DIE *CreateDbgScopeVariable(DbgVariable *DV, CompileUnit *Unit);
+  DIE *createDbgScopeVariable(DbgVariable *DV, CompileUnit *Unit);
 
-  /// getDbgScope - Returns the scope associated with the given descriptor.
-  ///
-  DbgScope *getOrCreateScope(MDNode *N);
-  DbgScope *getDbgScope(MDNode *N, const MachineInstr *MI);
+  /// getUpdatedDbgScope - Find or create the DbgScope associated with the
+  /// instruction. Initialize the scope and update the scope hierarchy.
+  DbgScope *getUpdatedDbgScope(MDNode *N, const MachineInstr *MI, MDNode *InlinedAt);
 
-  /// ConstructDbgScope - Construct the components of a scope.
-  ///
-  void ConstructDbgScope(DbgScope *ParentScope,
-                         unsigned ParentStartID, unsigned ParentEndID,
-                         DIE *ParentDie, CompileUnit *Unit);
+  /// createDbgScope - Create DbgScope for the scope.
+  void createDbgScope(MDNode *Scope, MDNode *InlinedAt);
 
-  /// ConstructFunctionDbgScope - Construct the scope for the subprogram.
-  ///
-  void ConstructFunctionDbgScope(DbgScope *RootScope,
-                                 bool AbstractScope = false);
+  DbgScope *getOrCreateAbstractScope(MDNode *N);
 
-  /// ConstructDefaultDbgScope - Construct a default scope for the subprogram.
-  ///
-  void ConstructDefaultDbgScope(MachineFunction *MF);
+  /// findAbstractVariable - Find abstract variable associated with Var.
+  DbgVariable *findAbstractVariable(DIVariable &Var, unsigned FrameIdx, 
+                                    DILocation &Loc);
+
+  /// updateSubprogramScopeDIE - Find DIE for the given subprogram and 
+  /// attach appropriate DW_AT_low_pc and DW_AT_high_pc attributes.
+  /// If there are global variables in this scope then create and insert
+  /// DIEs for these variables.
+  DIE *updateSubprogramScopeDIE(MDNode *SPNode);
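
(Illustrative sketch, not part of the patch: one plausible body for the
declaration above, built from the addLabel helper declared earlier. The
ModuleCU/getDieMapSlotFor lookup, the SubprogramCount counter, and the
"func_begin"/"func_end" label names are assumptions drawn from the
surrounding DwarfDebug conventions.)

    DIE *DwarfDebug::updateSubprogramScopeDIE(MDNode *SPNode) {
      DIE *SPDie = ModuleCU->getDieMapSlotFor(SPNode); // assumed lookup helper
      assert(SPDie && "Unable to find subprogram DIE!");
      // Bracket the whole subprogram with the labels emitted at entry/exit.
      addLabel(SPDie, dwarf::DW_AT_low_pc, dwarf::DW_FORM_addr,
               DWLabel("func_begin", SubprogramCount));
      addLabel(SPDie, dwarf::DW_AT_high_pc, dwarf::DW_FORM_addr,
               DWLabel("func_end", SubprogramCount));
      return SPDie;
    }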
 
-  /// EmitInitial - Emit initial Dwarf declarations.  This is necessary for cc
+  /// constructLexicalScopeDIE - Construct a new DW_TAG_lexical_block
+  /// for this scope and attach DW_AT_low_pc/DW_AT_high_pc labels.
+  DIE *constructLexicalScopeDIE(DbgScope *Scope);
+
+  /// constructInlinedScopeDIE - This scope represents the inlined body of
+  /// a function. Construct a DIE to represent this concrete inlined copy
+  /// of the function.
+  DIE *constructInlinedScopeDIE(DbgScope *Scope);
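
(Illustrative sketch, not part of the patch: how constructInlinedScopeDIE
might record each concrete inlined copy in the InlineInfo map introduced
above, so emitDebugInlineInfo can enumerate them later. The MMI label
allocation and the getDISubprogram/getNode accessors are assumptions from
this era's APIs.)

    // One (label ID, DIE) entry per concrete inlined copy of the function.
    DIE *ScopeDIE = new DIE(dwarf::DW_TAG_inlined_subroutine);
    unsigned StartID = MMI->NextLabelID();  // marks the inlined body's start
    DISubprogram InlinedSP = getDISubprogram(Scope->getScopeNode());
    SmallVector<InlineInfoLabels, 4> &Entries =
      InlineInfo[InlinedSP.getNode()];
    if (Entries.empty())
      InlinedSPNodes.push_back(InlinedSP.getNode()); // keep emission order
    Entries.push_back(std::make_pair(StartID, ScopeDIE));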
+
+  /// constructVariableDIE - Construct a DIE for the given DbgVariable.
+  DIE *constructVariableDIE(DbgVariable *DV, DbgScope *S, CompileUnit *Unit);
+
+  /// constructScopeDIE - Construct a DIE for this scope.
+  DIE *constructScopeDIE(DbgScope *Scope);
+
+  /// emitInitial - Emit initial Dwarf declarations.  This is necessary for cc
   /// tools to recognize the object file contains Dwarf information.
-  void EmitInitial();
+  void emitInitial();
 
-  /// EmitDIE - Recusively Emits a debug information entry.
+  /// emitDIE - Recursively emits a debug information entry.
   ///
-  void EmitDIE(DIE *Die);
+  void emitDIE(DIE *Die);
 
-  /// SizeAndOffsetDie - Compute the size and offset of a DIE.
+  /// computeSizeAndOffset - Compute the size and offset of a DIE.
   ///
-  unsigned SizeAndOffsetDie(DIE *Die, unsigned Offset, bool Last);
+  unsigned computeSizeAndOffset(DIE *Die, unsigned Offset, bool Last);
 
-  /// SizeAndOffsets - Compute the size and offset of all the DIEs.
+  /// computeSizeAndOffsets - Compute the size and offset of all the DIEs.
   ///
-  void SizeAndOffsets();
+  void computeSizeAndOffsets();
 
-  /// EmitDebugInfo / EmitDebugInfoPerCU - Emit the debug info section.
+  /// emitDebugInfo / emitDebugInfoPerCU - Emit the debug info section.
   ///
-  void EmitDebugInfoPerCU(CompileUnit *Unit);
+  void emitDebugInfoPerCU(CompileUnit *Unit);
 
-  void EmitDebugInfo();
+  void emitDebugInfo();
 
-  /// EmitAbbreviations - Emit the abbreviation section.
+  /// emitAbbreviations - Emit the abbreviation section.
   ///
-  void EmitAbbreviations() const;
+  void emitAbbreviations() const;
 
-  /// EmitEndOfLineMatrix - Emit the last address of the section and the end of
+  /// emitEndOfLineMatrix - Emit the last address of the section and the end of
   /// the line matrix.
   ///
-  void EmitEndOfLineMatrix(unsigned SectionEnd);
+  void emitEndOfLineMatrix(unsigned SectionEnd);
 
-  /// EmitDebugLines - Emit source line information.
+  /// emitDebugLines - Emit source line information.
   ///
-  void EmitDebugLines();
+  void emitDebugLines();
 
-  /// EmitCommonDebugFrame - Emit common frame info into a debug frame section.
+  /// emitCommonDebugFrame - Emit common frame info into a debug frame section.
   ///
-  void EmitCommonDebugFrame();
+  void emitCommonDebugFrame();
 
-  /// EmitFunctionDebugFrame - Emit per function frame info into a debug frame
+  /// emitFunctionDebugFrame - Emit per function frame info into a debug frame
   /// section.
-  void EmitFunctionDebugFrame(const FunctionDebugFrameInfo &DebugFrameInfo);
+  void emitFunctionDebugFrame(const FunctionDebugFrameInfo &DebugFrameInfo);
 
-  void EmitDebugPubNamesPerCU(CompileUnit *Unit);
+  void emitDebugPubNamesPerCU(CompileUnit *Unit);
 
-  /// EmitDebugPubNames - Emit visible names into a debug pubnames section.
+  /// emitDebugPubNames - Emit visible names into a debug pubnames section.
   ///
-  void EmitDebugPubNames();
+  void emitDebugPubNames();
 
-  /// EmitDebugStr - Emit visible names into a debug str section.
+  /// emitDebugPubTypes - Emit visible types into a debug pubtypes section.
   ///
-  void EmitDebugStr();
+  void emitDebugPubTypes();
 
-  /// EmitDebugLoc - Emit visible names into a debug loc section.
+  /// emitDebugStr - Emit visible names into a debug str section.
   ///
-  void EmitDebugLoc();
+  void emitDebugStr();
+
+  /// emitDebugLoc - Emit visible names into a debug loc section.
+  ///
+  void emitDebugLoc();
 
   /// EmitDebugARanges - Emit visible names into a debug aranges section.
   ///
   void EmitDebugARanges();
 
-  /// EmitDebugRanges - Emit visible names into a debug ranges section.
+  /// emitDebugRanges - Emit visible names into a debug ranges section.
   ///
-  void EmitDebugRanges();
+  void emitDebugRanges();
 
-  /// EmitDebugMacInfo - Emit visible names into a debug macinfo section.
+  /// emitDebugMacInfo - Emit visible names into a debug macinfo section.
   ///
-  void EmitDebugMacInfo();
+  void emitDebugMacInfo();
 
-  /// EmitDebugInlineInfo - Emit inline info using following format.
+  /// emitDebugInlineInfo - Emit inline info using the following format.
   /// Section Header:
   /// 1. length of section
   /// 2. Dwarf version number
@@ -464,26 +480,25 @@ class VISIBILITY_HIDDEN DwarfDebug : public Dwarf {
   /// inlined instance; the die_offset points to the inlined_subroutine die in
   /// the __debug_info section, and the low_pc is the starting address for the
   /// inlining instance.
-  void EmitDebugInlineInfo();
+  void emitDebugInlineInfo();
 
   /// GetOrCreateSourceID - Look up the source id with the given directory and
   /// source file names. If none currently exists, create a new id and insert it
   /// in the SourceIds map. This can update DirectoryNames and SourceFileNames maps
   /// as well.
-  unsigned GetOrCreateSourceID(const char *DirName,
-                               const char *FileName);
+  unsigned GetOrCreateSourceID(StringRef DirName, StringRef FileName);
 
-  void ConstructCompileUnit(MDNode *N);
+  void constructCompileUnit(MDNode *N);
 
-  void ConstructGlobalVariableDIE(MDNode *N);
+  void constructGlobalVariableDIE(MDNode *N);
 
-  void ConstructSubprogram(MDNode *N);
+  void constructSubprogramDIE(MDNode *N);
 
   // FIXME: This should go away in favor of complex addresses.
   /// Find the type the programmer originally declared the variable to be
   /// and return that type.  Obsolete, use GetComplexAddrType instead.
   ///
-  DIType GetBlockByrefType(DIType Ty, std::string Name);
+  DIType getBlockByrefType(DIType Ty, std::string Name);
 
 public:
   //===--------------------------------------------------------------------===//
@@ -496,35 +511,30 @@ public:
   /// be emitted.
   bool ShouldEmitDwarfDebug() const { return shouldEmit; }
 
-  /// BeginModule - Emit all Dwarf sections that should come prior to the
+  /// beginModule - Emit all Dwarf sections that should come prior to the
   /// content.
-  void BeginModule(Module *M, MachineModuleInfo *MMI);
+  void beginModule(Module *M, MachineModuleInfo *MMI);
 
-  /// EndModule - Emit all Dwarf sections that should come after the content.
+  /// endModule - Emit all Dwarf sections that should come after the content.
   ///
-  void EndModule();
+  void endModule();
 
-  /// BeginFunction - Gather pre-function debug information.  Assumes being
+  /// beginFunction - Gather pre-function debug information.  Assumes being
   /// emitted immediately after the function entry point.
-  void BeginFunction(MachineFunction *MF);
+  void beginFunction(MachineFunction *MF);
 
-  /// EndFunction - Gather and emit post-function debug information.
+  /// endFunction - Gather and emit post-function debug information.
   ///
-  void EndFunction(MachineFunction *MF);
+  void endFunction(MachineFunction *MF);
 
-  /// RecordSourceLine - Records location information and associates it with a 
+  /// recordSourceLine - Records location information and associates it with a 
   /// label. Returns a unique label ID used to generate a label and provide
   /// correspondence to the source line list.
-  unsigned RecordSourceLine(Value *V, unsigned Line, unsigned Col);
-  
-  /// RecordSourceLine - Records location information and associates it with a 
-  /// label. Returns a unique label ID used to generate a label and provide
-  /// correspondence to the source line list.
-  unsigned RecordSourceLine(unsigned Line, unsigned Col, MDNode *Scope);
+  unsigned recordSourceLine(unsigned Line, unsigned Col, MDNode *Scope);
 
-  /// getRecordSourceLineCount - Return the number of source lines in the debug
+  /// getSourceLineCount - Return the number of source lines in the debug
   /// info.
-  unsigned getRecordSourceLineCount() const {
+  unsigned getSourceLineCount() const {
     return Lines.size();
   }
                             
@@ -536,29 +546,19 @@ public:
   unsigned getOrCreateSourceID(const std::string &DirName,
                                const std::string &FileName);
 
-  /// RecordRegionStart - Indicate the start of a region.
-  unsigned RecordRegionStart(MDNode *N);
-
-  /// RecordRegionEnd - Indicate the end of a region.
-  unsigned RecordRegionEnd(MDNode *N);
-
-  /// RecordVariable - Indicate the declaration of  a local variable.
-  void RecordVariable(MDNode *N, unsigned FrameIndex);
+  /// extractScopeInformation - Scan machine instructions in this function
+  /// and collect DbgScopes. Return true if at least one scope was found.
+  bool extractScopeInformation(MachineFunction *MF);
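
(Illustrative sketch, not part of the patch: in outline, the scan this
declaration describes is a single pass over every machine instruction,
creating a DbgScope per distinct (scope, inlined-at) pair via the
createDbgScope helper declared earlier. DebugLocTuple and getDebugLocTuple
are assumptions from this era's MachineFunction API.)

    for (MachineFunction::const_iterator I = MF->begin(), E = MF->end();
         I != E; ++I)
      for (MachineBasicBlock::const_iterator MI = I->begin(), ME = I->end();
           MI != ME; ++MI) {
        DebugLoc DL = MI->getDebugLoc();
        if (DL.isUnknown()) continue;       // instruction has no debug info
        DebugLocTuple DLT = MF->getDebugLocTuple(DL);
        createDbgScope(DLT.Scope, DLT.InlinedAtLoc);
      }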
 
-  //// RecordInlinedFnStart - Indicate the start of inlined subroutine.
-  unsigned RecordInlinedFnStart(DISubprogram &SP, DICompileUnit CU,
-                                unsigned Line, unsigned Col);
+  /// collectVariableInfo - Populate DbgScope entries with variables' info.
+  void collectVariableInfo();
 
-  /// RecordInlinedFnEnd - Indicate the end of inlined subroutine.
-  unsigned RecordInlinedFnEnd(DISubprogram &SP);
+  /// beginScope - Process the beginning of a scope starting at Label.
+  void beginScope(const MachineInstr *MI, unsigned Label);
 
-  /// ExtractScopeInformation - Scan machine instructions in this function
-  /// and collect DbgScopes. Return true, if atleast one scope was found.
-  bool ExtractScopeInformation(MachineFunction *MF);
-
-  void SetDbgScopeLabels(const MachineInstr *MI, unsigned Label);
+  /// endScope - Process the end of a scope.
+  void endScope(const MachineInstr *MI);
 };
-
 } // End of namespace llvm
 
 #endif
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp
index 626523b..1c8b8f4 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp
@@ -74,6 +74,25 @@ unsigned DwarfException::SizeOfEncodedValue(unsigned Encoding) {
   return 0;
 }
 
+/// CreateLabelDiff - Emit a label and subtract it from the expression we
+/// already have.  This is equivalent to emitting "foo - .", but we have to emit
+/// the label for "." directly.
+const MCExpr *DwarfException::CreateLabelDiff(const MCExpr *ExprRef,
+                                              const char *LabelName,
+                                              unsigned Index) {
+  SmallString<64> Name;
+  raw_svector_ostream(Name) << MAI->getPrivateGlobalPrefix()
+                            << LabelName << Asm->getFunctionNumber()
+                            << "_" << Index;
+  MCSymbol *DotSym = Asm->OutContext.GetOrCreateSymbol(Name.str());
+  Asm->OutStreamer.EmitLabel(DotSym);
+
+  return MCBinaryExpr::CreateSub(ExprRef,
+                                 MCSymbolRefExpr::Create(DotSym,
+                                                         Asm->OutContext),
+                                 Asm->OutContext);
+}
+
 /// EmitCIE - Emit a Common Information Entry (CIE). This holds information that
 /// is shared among many Frame Description Entries.  There is at least one CIE
 /// in every non-empty .debug_frame section.
@@ -176,24 +195,10 @@ void DwarfException::EmitCIE(const Function *PersonalityFn, unsigned Index) {
 
   // If there is a personality, we need to indicate the function's location.
   if (PersonalityRef) {
-    // If the reference to the personality function symbol is not already
-    // pc-relative, then we need to subtract our current address from it.  Do
-    // this by emitting a label and subtracting it from the expression we
-    // already have.  This is equivalent to emitting "foo - .", but we have to
-    // emit the label for "." directly.
-    if (!IsPersonalityPCRel) {
-      SmallString<64> Name;
-      raw_svector_ostream(Name) << MAI->getPrivateGlobalPrefix()
-         << "personalityref_addr" << Asm->getFunctionNumber() << "_" << Index;
-      MCSymbol *DotSym = Asm->OutContext.GetOrCreateSymbol(Name.str());
-      Asm->OutStreamer.EmitLabel(DotSym);
-      
-      PersonalityRef =  
-        MCBinaryExpr::CreateSub(PersonalityRef,
-                                MCSymbolRefExpr::Create(DotSym,Asm->OutContext),
-                                Asm->OutContext);
-    }
-    
+    if (!IsPersonalityPCRel)
+      PersonalityRef = CreateLabelDiff(PersonalityRef, "personalityref_addr",
+                                       Index);
+
     O << MAI->getData32bitsDirective();
     PersonalityRef->print(O, MAI);
     Asm->EOL("Personality");
@@ -232,11 +237,16 @@ void DwarfException::EmitFDE(const FunctionEHFrameInfo &EHFrameInfo) {
   // corresponding function is static, this should not be externally visible.
   if (!TheFunc->hasLocalLinkage())
     if (const char *GlobalEHDirective = MAI->getGlobalEHDirective())
-      O << GlobalEHDirective << EHFrameInfo.FnName << "\n";
+      O << GlobalEHDirective << EHFrameInfo.FnName << '\n';
 
   // If corresponding function is weak definition, this should be too.
   if (TheFunc->isWeakForLinker() && MAI->getWeakDefDirective())
-    O << MAI->getWeakDefDirective() << EHFrameInfo.FnName << "\n";
+    O << MAI->getWeakDefDirective() << EHFrameInfo.FnName << '\n';
+
+  // If corresponding function is hidden, this should be too.
+  if (TheFunc->hasHiddenVisibility())
+    if (const char *HiddenDirective = MAI->getHiddenDirective())
+      O << HiddenDirective << EHFrameInfo.FnName << '\n';
 
   // If there are no calls then you can't unwind.  This may mean we can omit the
   // EH Frame, but some environments do not handle weak absolute symbols. If
@@ -457,6 +467,39 @@ ComputeActionsTable(const SmallVectorImpl<const LandingPadInfo*> &LandingPads,
   return SizeActions;
 }
 
+/// CallToNoUnwindFunction - Return `true' if this is a call to a function
+/// marked `nounwind'. Return `false' otherwise.
+bool DwarfException::CallToNoUnwindFunction(const MachineInstr *MI) {
+  assert(MI->getDesc().isCall() && "This should be a call instruction!");
+
+  bool MarkedNoUnwind = false;
+  bool SawFunc = false;
+
+  for (unsigned I = 0, E = MI->getNumOperands(); I != E; ++I) {
+    const MachineOperand &MO = MI->getOperand(I);
+
+    if (MO.isGlobal()) {
+      if (Function *F = dyn_cast<Function>(MO.getGlobal())) {
+        if (SawFunc) {
+          // Be conservative. If we have more than one function operand for this
+          // call, then we can't make the assumption that it's the callee and
+          // not a parameter to the call.
+          // 
+          // FIXME: Determine if there's a way to say that `F' is the callee or
+          // parameter.
+          MarkedNoUnwind = false;
+          break;
+        }
+
+        MarkedNoUnwind = F->doesNotThrow();
+        SawFunc = true;
+      }
+    }
+  }
+
+  return MarkedNoUnwind;
+}
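
(Note: the payoff is in ComputeCallSiteTable below; a call whose sole
function operand is provably nounwind no longer sets SawPotentiallyThrowing,
so it does not force a covering entry in the call-site table.)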
+
 /// ComputeCallSiteTable - Compute the call-site table.  The entry for an invoke
 /// has a try-range containing the call, a non-zero landing pad, and an
 /// appropriate action.  The entry for an ordinary call has a try-range
@@ -485,7 +528,9 @@ ComputeCallSiteTable(SmallVectorImpl<CallSiteEntry> &CallSites,
     for (MachineBasicBlock::const_iterator MI = I->begin(), E = I->end();
          MI != E; ++MI) {
       if (!MI->isLabel()) {
-        SawPotentiallyThrowing |= MI->getDesc().isCall();
+        if (MI->getDesc().isCall())
+          SawPotentiallyThrowing |= !CallToNoUnwindFunction(MI);
+
         continue;
       }
 
@@ -497,7 +542,7 @@ ComputeCallSiteTable(SmallVectorImpl<CallSiteEntry> &CallSites,
         SawPotentiallyThrowing = false;
 
       // Beginning of a new try-range?
-      RangeMapType::iterator L = PadMap.find(BeginLabel);
+      RangeMapType::const_iterator L = PadMap.find(BeginLabel);
       if (L == PadMap.end())
         // Nope, it was just some random label.
         continue;
@@ -871,28 +916,7 @@ void DwarfException::EmitExceptionTable() {
     Asm->EOL("Next action");
   }
 
-  // Emit the Catch Clauses. The code for the catch clauses following the same
-  // try is similar to a switch statement. The catch clause action record
-  // informs the runtime about the type of a catch clause and about the
-  // associated switch value.
-  //
-  //  Action Record Fields:
-  //
-  //   * Filter Value
-  //     Positive value, starting at 1. Index in the types table of the
-  //     __typeinfo for the catch-clause type. 1 is the first word preceding
-  //     TTBase, 2 is the second word, and so on. Used by the runtime to check
-  //     if the thrown exception type matches the catch-clause type. Back-end
-  //     generated switch statements check against this value.
-  //
-  //   * Next
-  //     Signed offset, in bytes from the start of this field, to the next
-  //     chained action record, or zero if none.
-  //
-  // The order of the action records determined by the next field is the order
-  // of the catch clauses as they appear in the source code, and must be kept in
-  // the same order. As a result, changing the order of the catch clause would
-  // change the semantics of the program.
+  // Emit the Catch TypeInfos.
   for (std::vector<GlobalVariable *>::const_reverse_iterator
          I = TypeInfos.rbegin(), E = TypeInfos.rend(); I != E; ++I) {
     const GlobalVariable *GV = *I;
@@ -907,12 +931,15 @@ void DwarfException::EmitExceptionTable() {
     Asm->EOL("TypeInfo");
   }
 
-  // Emit the Type Table.
+  // Emit the Exception Specifications.
   for (std::vector<unsigned>::const_iterator
          I = FilterIds.begin(), E = FilterIds.end(); I < E; ++I) {
     unsigned TypeID = *I;
     Asm->EmitULEB128Bytes(TypeID);
-    Asm->EOL("Filter TypeInfo index");
+    if (TypeID != 0)
+      Asm->EOL("Exception specification");
+    else
+      Asm->EOL();
   }
 
   Asm->EmitAlignment(2, 0, 0, false);
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.h b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.h
index f6f5025..aff1665 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.h
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.h
@@ -25,13 +25,14 @@ namespace llvm {
 struct LandingPadInfo;
 class MachineModuleInfo;
 class MCAsmInfo;
+class MCExpr;
 class Timer;
 class raw_ostream;
 
 //===----------------------------------------------------------------------===//
 /// DwarfException - Emits Dwarf exception handling directives.
 ///
-class VISIBILITY_HIDDEN DwarfException : public Dwarf {
+class DwarfException : public Dwarf {
   struct FunctionEHFrameInfo {
     std::string FnName;
     unsigned Number;
@@ -155,6 +156,10 @@ class VISIBILITY_HIDDEN DwarfException : public Dwarf {
                                SmallVectorImpl<ActionEntry> &Actions,
                                SmallVectorImpl<unsigned> &FirstActions);
 
+  /// CallToNoUnwindFunction - Return `true' if this is a call to a function
+  /// marked `nounwind'. Return `false' otherwise.
+  bool CallToNoUnwindFunction(const MachineInstr *MI);
+
   /// ComputeCallSiteTable - Compute the call-site table.  The entry for an
   /// invoke has a try-range containing the call, a non-zero landing pad and an
   /// appropriate action.  The entry for an ordinary call has a try-range
@@ -168,6 +173,11 @@ class VISIBILITY_HIDDEN DwarfException : public Dwarf {
                             const SmallVectorImpl<unsigned> &FirstActions);
   void EmitExceptionTable();
 
+  /// CreateLabelDiff - Emit a label and subtract it from the expression we
+  /// already have.  This is equivalent to emitting "foo - .", but we have to
+  /// emit the label for "." directly.
+  const MCExpr *CreateLabelDiff(const MCExpr *ExprRef, const char *LabelName,
+                                unsigned Index);
 public:
   //===--------------------------------------------------------------------===//
   // Main entry points.
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfPrinter.h b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfPrinter.h
index 33ebb3b..dedd695 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfPrinter.h
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfPrinter.h
@@ -29,7 +29,7 @@ namespace llvm {
   class TargetData;
   class TargetRegisterInfo;
 
-  class VISIBILITY_HIDDEN Dwarf {
+  class Dwarf {
   protected:
     //===-------------------------------------------------------------==---===//
     // Core attributes used by the DWARF printer.
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfWriter.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfWriter.cpp
index bebf8e0..dd8d88a 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfWriter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfWriter.cpp
@@ -43,14 +43,14 @@ void DwarfWriter::BeginModule(Module *M,
   DE = new DwarfException(OS, A, T);
   DD = new DwarfDebug(OS, A, T);
   DE->BeginModule(M, MMI);
-  DD->BeginModule(M, MMI);
+  DD->beginModule(M, MMI);
 }
 
 /// EndModule - Emit all Dwarf sections that should come after the content.
 ///
 void DwarfWriter::EndModule() {
   DE->EndModule();
-  DD->EndModule();
+  DD->endModule();
   delete DD; DD = 0;
   delete DE; DE = 0;
 }
@@ -59,13 +59,13 @@ void DwarfWriter::EndModule() {
 /// emitted immediately after the function entry point.
 void DwarfWriter::BeginFunction(MachineFunction *MF) {
   DE->BeginFunction(MF);
-  DD->BeginFunction(MF);
+  DD->beginFunction(MF);
 }
 
 /// EndFunction - Gather and emit post-function debug information.
 ///
 void DwarfWriter::EndFunction(MachineFunction *MF) {
-  DD->EndFunction(MF);
+  DD->endFunction(MF);
   DE->EndFunction();
 
   if (MachineModuleInfo *MMI = DD->getMMI() ? DD->getMMI() : DE->getMMI())
@@ -78,28 +78,12 @@ void DwarfWriter::EndFunction(MachineFunction *MF) {
 /// correspondence to the source line list.
 unsigned DwarfWriter::RecordSourceLine(unsigned Line, unsigned Col, 
                                        MDNode *Scope) {
-  return DD->RecordSourceLine(Line, Col, Scope);
-}
-
-/// RecordRegionStart - Indicate the start of a region.
-unsigned DwarfWriter::RecordRegionStart(MDNode *N) {
-  return DD->RecordRegionStart(N);
-}
-
-/// RecordRegionEnd - Indicate the end of a region.
-unsigned DwarfWriter::RecordRegionEnd(MDNode *N) {
-  return DD->RecordRegionEnd(N);
+  return DD->recordSourceLine(Line, Col, Scope);
 }
 
 /// getRecordSourceLineCount - Count source lines.
 unsigned DwarfWriter::getRecordSourceLineCount() {
-  return DD->getRecordSourceLineCount();
-}
-
-/// RecordVariable - Indicate the declaration of  a local variable.
-///
-void DwarfWriter::RecordVariable(MDNode *N, unsigned FrameIndex) {
-  DD->RecordVariable(N, FrameIndex);
+  return DD->getSourceLineCount();
 }
 
 /// ShouldEmitDwarfDebug - Returns true if Dwarf debugging declarations should
@@ -108,14 +92,9 @@ bool DwarfWriter::ShouldEmitDwarfDebug() const {
   return DD && DD->ShouldEmitDwarfDebug();
 }
 
-//// RecordInlinedFnStart
-unsigned DwarfWriter::RecordInlinedFnStart(DISubprogram SP, DICompileUnit CU,
-                                           unsigned Line, unsigned Col) {
-  return DD->RecordInlinedFnStart(SP, CU, Line, Col);
+void DwarfWriter::BeginScope(const MachineInstr *MI, unsigned L) {
+  DD->beginScope(MI, L);
 }
-
-/// RecordInlinedFnEnd - Indicate the end of inlined subroutine.
-unsigned DwarfWriter::RecordInlinedFnEnd(DISubprogram SP) {
-  return DD->RecordInlinedFnEnd(SP);
+void DwarfWriter::EndScope(const MachineInstr *MI) {
+  DD->endScope(MI);
 }
-
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/OcamlGCPrinter.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/OcamlGCPrinter.cpp
index 06b92b7..9286ad5 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/OcamlGCPrinter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/OcamlGCPrinter.cpp
@@ -20,14 +20,13 @@
 #include "llvm/Target/TargetData.h"
 #include "llvm/Target/TargetLoweringObjectFile.h"
 #include "llvm/Target/TargetMachine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
 namespace {
 
-  class VISIBILITY_HIDDEN OcamlGCMetadataPrinter : public GCMetadataPrinter {
+  class OcamlGCMetadataPrinter : public GCMetadataPrinter {
   public:
     void beginAssembly(raw_ostream &OS, AsmPrinter &AP,
                        const MCAsmInfo &MAI);
diff --git a/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp b/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
index f9abeac..8a62eb2 100644
--- a/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
@@ -18,6 +18,7 @@
 
 #define DEBUG_TYPE "branchfolding"
 #include "BranchFolding.h"
+#include "llvm/Function.h"
 #include "llvm/CodeGen/Passes.h"
 #include "llvm/CodeGen/MachineModuleInfo.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
@@ -31,6 +32,7 @@
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/ADT/SmallSet.h"
+#include "llvm/ADT/SetVector.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/STLExtras.h"
 #include <algorithm>
@@ -39,18 +41,40 @@ using namespace llvm;
 STATISTIC(NumDeadBlocks, "Number of dead blocks removed");
 STATISTIC(NumBranchOpts, "Number of branches optimized");
 STATISTIC(NumTailMerge , "Number of block tails merged");
-static cl::opt<cl::boolOrDefault> FlagEnableTailMerge("enable-tail-merge", 
+
+static cl::opt<cl::boolOrDefault> FlagEnableTailMerge("enable-tail-merge",
                               cl::init(cl::BOU_UNSET), cl::Hidden);
+
 // Throttle for huge numbers of predecessors (compile speed problems)
 static cl::opt<unsigned>
-TailMergeThreshold("tail-merge-threshold", 
+TailMergeThreshold("tail-merge-threshold",
           cl::desc("Max number of predecessors to consider tail merging"),
           cl::init(150), cl::Hidden);
 
+// Heuristic for tail merging (and, inversely, tail duplication).
+// TODO: This should be replaced with a target query.
+static cl::opt<unsigned>
+TailMergeSize("tail-merge-size",
+          cl::desc("Min number of instructions to consider tail merging"),
+                              cl::init(3), cl::Hidden);
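
(Note: since both thresholds are registered as hidden cl::opts, they can be
tuned at run time for experiments, e.g. passing -tail-merge-size=2 to llc;
as the TODO above says, a target query should eventually replace the
constant.)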
+
+namespace {
+  /// BranchFolderPass - Wrap branch folder in a machine function pass.
+  class BranchFolderPass : public MachineFunctionPass,
+                           public BranchFolder {
+  public:
+    static char ID;
+    explicit BranchFolderPass(bool defaultEnableTailMerge)
+      : MachineFunctionPass(&ID), BranchFolder(defaultEnableTailMerge) {}
+
+    virtual bool runOnMachineFunction(MachineFunction &MF);
+    virtual const char *getPassName() const { return "Control Flow Optimizer"; }
+  };
+}
 
 char BranchFolderPass::ID = 0;
 
-FunctionPass *llvm::createBranchFoldingPass(bool DefaultEnableTailMerge) { 
+FunctionPass *llvm::createBranchFoldingPass(bool DefaultEnableTailMerge) {
   return new BranchFolderPass(DefaultEnableTailMerge);
 }
 
@@ -62,7 +86,6 @@ bool BranchFolderPass::runOnMachineFunction(MachineFunction &MF) {
 }
 
 
-
 BranchFolder::BranchFolder(bool defaultEnableTailMerge) {
   switch (FlagEnableTailMerge) {
   case cl::BOU_UNSET: EnableTailMerge = defaultEnableTailMerge; break;
@@ -76,12 +99,12 @@ BranchFolder::BranchFolder(bool defaultEnableTailMerge) {
 void BranchFolder::RemoveDeadBlock(MachineBasicBlock *MBB) {
   assert(MBB->pred_empty() && "MBB must be dead!");
   DEBUG(errs() << "\nRemoving MBB: " << *MBB);
-  
+
   MachineFunction *MF = MBB->getParent();
   // drop all successors.
   while (!MBB->succ_empty())
     MBB->removeSuccessor(MBB->succ_end()-1);
-  
+
   // If there are any labels in the basic block, unregister them from
   // MachineModuleInfo.
   if (MMI && !MBB->empty()) {
@@ -92,7 +115,7 @@ void BranchFolder::RemoveDeadBlock(MachineBasicBlock *MBB) {
         MMI->InvalidateLabel(I->getOperand(0).getImm());
     }
   }
-  
+
   // Remove the block.
   MF->erase(MBB);
 }
@@ -172,7 +195,6 @@ bool BranchFolder::OptimizeFunction(MachineFunction &MF,
     MadeChange |= OptimizeImpDefsBlock(MBB);
   }
 
-
   bool MadeChangeThisIteration = true;
   while (MadeChangeThisIteration) {
     MadeChangeThisIteration = false;
@@ -189,7 +211,7 @@ bool BranchFolder::OptimizeFunction(MachineFunction &MF,
     // Figure out how these jump tables should be merged.
     std::vector<unsigned> JTMapping;
     JTMapping.reserve(JTs.size());
-    
+
     // We always keep the 0th jump table.
     JTMapping.push_back(0);
 
@@ -201,7 +223,7 @@ bool BranchFolder::OptimizeFunction(MachineFunction &MF,
       else
         JTMapping.push_back(JTI->getJumpTableIndex(JTs[i].MBBs));
     }
-    
+
     // If a jump table was merged with another one, walk the function rewriting
     // references to jump tables to reference the new JT ID's.  Keep track of
     // whether we see a jump table idx, if not, we can delete the JT.
@@ -220,7 +242,7 @@ bool BranchFolder::OptimizeFunction(MachineFunction &MF,
           JTIsLive.set(NewIdx);
         }
     }
-   
+
     // Finally, remove dead jump tables.  This happens either because the
     // indirect jump was unreachable (and thus deleted) or because the jump
     // table was merged with some other one.
@@ -244,7 +266,7 @@ static unsigned HashMachineInstr(const MachineInstr *MI) {
   unsigned Hash = MI->getOpcode();
   for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
     const MachineOperand &Op = MI->getOperand(i);
-    
+
     // Merge in bits from the operand if easy.
     unsigned OperandHash = 0;
     switch (Op.getType()) {
@@ -266,31 +288,30 @@ static unsigned HashMachineInstr(const MachineInstr *MI) {
       break;
     default: break;
     }
-    
+
     Hash += ((OperandHash << 3) | Op.getType()) << (i&31);
   }
   return Hash;
 }
 
 /// HashEndOfMBB - Hash the last few instructions in the MBB.  For blocks
-/// with no successors, we hash two instructions, because cross-jumping 
-/// only saves code when at least two instructions are removed (since a 
+/// with no successors, we hash two instructions, because cross-jumping
+/// only saves code when at least two instructions are removed (since a
 /// branch must be inserted).  For blocks with a successor, one of the
 /// two blocks to be tail-merged will end with a branch already, so
 /// it pays to cross-jump even for a single instruction.
-
 static unsigned HashEndOfMBB(const MachineBasicBlock *MBB,
                              unsigned minCommonTailLength) {
   MachineBasicBlock::const_iterator I = MBB->end();
   if (I == MBB->begin())
     return 0;   // Empty MBB.
-  
+
   --I;
   unsigned Hash = HashMachineInstr(I);
-    
+
   if (I == MBB->begin() || minCommonTailLength == 1)
     return Hash;   // Single instr MBB.
-  
+
   --I;
   // Hash in the second-to-last instruction.
   Hash ^= HashMachineInstr(I) << 2;
@@ -306,11 +327,11 @@ static unsigned ComputeCommonTailLength(MachineBasicBlock *MBB1,
                                         MachineBasicBlock::iterator &I2) {
   I1 = MBB1->end();
   I2 = MBB2->end();
-  
+
   unsigned TailLen = 0;
   while (I1 != MBB1->begin() && I2 != MBB2->begin()) {
     --I1; --I2;
-    if (!I1->isIdenticalTo(I2) || 
+    if (!I1->isIdenticalTo(I2) ||
         // FIXME: This check is dubious. It's used to get around a problem where
         // people incorrectly expect inline asm directives to remain in the same
         // relative order. This is untenable because normal compiler
@@ -331,11 +352,11 @@ static unsigned ComputeCommonTailLength(MachineBasicBlock *MBB1,
 void BranchFolder::ReplaceTailWithBranchTo(MachineBasicBlock::iterator OldInst,
                                            MachineBasicBlock *NewDest) {
   MachineBasicBlock *OldBB = OldInst->getParent();
-  
+
   // Remove all the old successors of OldBB from the CFG.
   while (!OldBB->succ_empty())
     OldBB->removeSuccessor(OldBB->succ_begin());
-  
+
   // Remove all the dead instructions from the end of OldBB.
   OldBB->erase(OldInst, OldBB->end());
 
@@ -360,10 +381,10 @@ MachineBasicBlock *BranchFolder::SplitMBBAt(MachineBasicBlock &CurMBB,
 
   // Move all the successors of this block to the specified block.
   NewMBB->transferSuccessors(&CurMBB);
- 
+
   // Add an edge from CurMBB to NewMBB for the fall-through.
   CurMBB.addSuccessor(NewMBB);
-  
+
   // Splice the code over.
   NewMBB->splice(NewMBB->end(), &CurMBB, BBI1, CurMBB.end());
 
@@ -374,7 +395,7 @@ MachineBasicBlock *BranchFolder::SplitMBBAt(MachineBasicBlock &CurMBB,
       RS->forward(prior(CurMBB.end()));
     BitVector RegsLiveAtExit(TRI->getNumRegs());
     RS->getRegsUsed(RegsLiveAtExit, false);
-    for (unsigned int i=0, e=TRI->getNumRegs(); i!=e; i++)
+    for (unsigned int i = 0, e = TRI->getNumRegs(); i != e; i++)
       if (RegsLiveAtExit[i])
         NewMBB->addLiveIn(i);
   }
@@ -403,8 +424,7 @@ static unsigned EstimateRuntime(MachineBasicBlock::iterator I,
 // branches temporarily for tail merging).  In the case where CurMBB ends
 // with a conditional branch to the next block, optimize by reversing the
 // test and conditionally branching to SuccMBB instead.
-
-static void FixTail(MachineBasicBlock* CurMBB, MachineBasicBlock *SuccBB,
+static void FixTail(MachineBasicBlock *CurMBB, MachineBasicBlock *SuccBB,
                     const TargetInstrInfo *TII) {
   MachineFunction *MF = CurMBB->getParent();
   MachineFunction::iterator I = next(MachineFunction::iterator(CurMBB));
@@ -424,74 +444,145 @@ static void FixTail(MachineBasicBlock* CurMBB, MachineBasicBlock *SuccBB,
   TII->InsertBranch(*CurMBB, SuccBB, NULL, SmallVector<MachineOperand, 0>());
 }
 
-static bool MergeCompare(const std::pair<unsigned,MachineBasicBlock*> &p,
-                         const std::pair<unsigned,MachineBasicBlock*> &q) {
-    if (p.first < q.first)
-      return true;
-     else if (p.first > q.first)
-      return false;
-    else if (p.second->getNumber() < q.second->getNumber())
-      return true;
-    else if (p.second->getNumber() > q.second->getNumber())
-      return false;
-    else {
-      // _GLIBCXX_DEBUG checks strict weak ordering, which involves comparing
-      // an object with itself.
+bool
+BranchFolder::MergePotentialsElt::operator<(const MergePotentialsElt &o) const {
+  if (getHash() < o.getHash())
+    return true;
+  else if (getHash() > o.getHash())
+    return false;
+  else if (getBlock()->getNumber() < o.getBlock()->getNumber())
+    return true;
+  else if (getBlock()->getNumber() > o.getBlock()->getNumber())
+    return false;
+  else {
+    // _GLIBCXX_DEBUG checks strict weak ordering, which involves comparing
+    // an object with itself.
 #ifndef _GLIBCXX_DEBUG
-      llvm_unreachable("Predecessor appears twice");
+    llvm_unreachable("Predecessor appears twice");
 #endif
-      return false;
+    return false;
+  }
+}
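
(Note: this comparison must define a strict weak ordering because
std::stable_sort, used in TryTailMergeBlocks below, requires one; that is
why the self-comparison case returns false rather than only asserting when
_GLIBCXX_DEBUG deliberately compares an element with itself.)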
+
+/// CountTerminators - Count the number of terminators in the given
+/// block and set I to the position of the first non-terminator, if there
+/// is one, or MBB->end() otherwise.
+static unsigned CountTerminators(MachineBasicBlock *MBB,
+                                 MachineBasicBlock::iterator &I) {
+  I = MBB->end();
+  unsigned NumTerms = 0;
+  for (;;) {
+    if (I == MBB->begin()) {
+      I = MBB->end();
+      break;
     }
+    --I;
+    if (!I->getDesc().isTerminator()) break;
+    ++NumTerms;
+  }
+  return NumTerms;
+}
+
+/// ProfitableToMerge - Check if two machine basic blocks have a common tail
+/// and decide if it would be profitable to merge those tails.  Return the
+/// length of the common tail and iterators to the first common instruction
+/// in each block.
+static bool ProfitableToMerge(MachineBasicBlock *MBB1,
+                              MachineBasicBlock *MBB2,
+                              unsigned minCommonTailLength,
+                              unsigned &CommonTailLen,
+                              MachineBasicBlock::iterator &I1,
+                              MachineBasicBlock::iterator &I2,
+                              MachineBasicBlock *SuccBB,
+                              MachineBasicBlock *PredBB) {
+  CommonTailLen = ComputeCommonTailLength(MBB1, MBB2, I1, I2);
+  MachineFunction *MF = MBB1->getParent();
+
+  if (CommonTailLen == 0)
+    return false;
+
+  // It's almost always profitable to merge any number of non-terminator
+  // instructions with the block that falls through into the common successor.
+  if (MBB1 == PredBB || MBB2 == PredBB) {
+    MachineBasicBlock::iterator I;
+    unsigned NumTerms = CountTerminators(MBB1 == PredBB ? MBB2 : MBB1, I);
+    if (CommonTailLen > NumTerms)
+      return true;
+  }
+
+  // If one of the blocks can be completely merged and happens to be in
+  // a position where the other could fall through into it, merge any number
+  // of instructions, because it can be done without a branch.
+  // TODO: If the blocks are not adjacent, move one of them so that they are?
+  if (MBB1->isLayoutSuccessor(MBB2) && I2 == MBB2->begin())
+    return true;
+  if (MBB2->isLayoutSuccessor(MBB1) && I1 == MBB1->begin())
+    return true;
+
+  // If both blocks have an unconditional branch temporarily stripped out,
+  // count that as an additional common instruction for the following
+  // heuristics.
+  unsigned EffectiveTailLen = CommonTailLen;
+  if (SuccBB && MBB1 != PredBB && MBB2 != PredBB &&
+      !MBB1->back().getDesc().isBarrier() &&
+      !MBB2->back().getDesc().isBarrier())
+    ++EffectiveTailLen;
+
+  // Check if the common tail is long enough to be worthwhile.
+  if (EffectiveTailLen >= minCommonTailLength)
+    return true;
+
+  // If we are optimizing for code size, 2 instructions in common is enough if
+  // we don't have to split a block.  At worst we will be introducing 1 new
+  // branch instruction, which is likely to be smaller than the 2
+  // instructions that would be deleted in the merge.
+  if (EffectiveTailLen >= 2 &&
+      MF->getFunction()->hasFnAttr(Attribute::OptimizeForSize) &&
+      (I1 == MBB1->begin() || I2 == MBB2->begin()))
+    return true;
+
+  return false;
 }
 
 /// ComputeSameTails - Look through all the blocks in MergePotentials that have
-/// hash CurHash (guaranteed to match the last element).   Build the vector 
+/// hash CurHash (guaranteed to match the last element).  Build the vector
 /// SameTails of all those that have the (same) largest number of instructions
 /// in common of any pair of these blocks.  SameTails entries contain an
-/// iterator into MergePotentials (from which the MachineBasicBlock can be 
-/// found) and a MachineBasicBlock::iterator into that MBB indicating the 
+/// iterator into MergePotentials (from which the MachineBasicBlock can be
+/// found) and a MachineBasicBlock::iterator into that MBB indicating the
 /// instruction where the matching code sequence begins.
 /// Order of elements in SameTails is the reverse of the order in which
 /// those blocks appear in MergePotentials (where they are not necessarily
 /// consecutive).
-unsigned BranchFolder::ComputeSameTails(unsigned CurHash, 
-                                        unsigned minCommonTailLength) {
+unsigned BranchFolder::ComputeSameTails(unsigned CurHash,
+                                        unsigned minCommonTailLength,
+                                        MachineBasicBlock *SuccBB,
+                                        MachineBasicBlock *PredBB) {
   unsigned maxCommonTailLength = 0U;
   SameTails.clear();
   MachineBasicBlock::iterator TrialBBI1, TrialBBI2;
   MPIterator HighestMPIter = prior(MergePotentials.end());
   for (MPIterator CurMPIter = prior(MergePotentials.end()),
-                  B = MergePotentials.begin(); 
-       CurMPIter!=B && CurMPIter->first==CurHash;
+                  B = MergePotentials.begin();
+       CurMPIter != B && CurMPIter->getHash() == CurHash;
        --CurMPIter) {
-    for (MPIterator I = prior(CurMPIter); I->first==CurHash ; --I) {
-      unsigned CommonTailLen = ComputeCommonTailLength(
-                                        CurMPIter->second,
-                                        I->second,
-                                        TrialBBI1, TrialBBI2);
-      // If we will have to split a block, there should be at least
-      // minCommonTailLength instructions in common; if not, at worst
-      // we will be replacing a fallthrough into the common tail with a
-      // branch, which at worst breaks even with falling through into
-      // the duplicated common tail, so 1 instruction in common is enough.
-      // We will always pick a block we do not have to split as the common
-      // tail if there is one.
-      // (Empty blocks will get forwarded and need not be considered.)
-      if (CommonTailLen >= minCommonTailLength ||
-          (CommonTailLen > 0 &&
-           (TrialBBI1==CurMPIter->second->begin() ||
-            TrialBBI2==I->second->begin()))) {
+    for (MPIterator I = prior(CurMPIter); I->getHash() == CurHash ; --I) {
+      unsigned CommonTailLen;
+      if (ProfitableToMerge(CurMPIter->getBlock(), I->getBlock(),
+                            minCommonTailLength,
+                            CommonTailLen, TrialBBI1, TrialBBI2,
+                            SuccBB, PredBB)) {
         if (CommonTailLen > maxCommonTailLength) {
           SameTails.clear();
           maxCommonTailLength = CommonTailLen;
           HighestMPIter = CurMPIter;
-          SameTails.push_back(std::make_pair(CurMPIter, TrialBBI1));
+          SameTails.push_back(SameTailElt(CurMPIter, TrialBBI1));
         }
         if (HighestMPIter == CurMPIter &&
             CommonTailLen == maxCommonTailLength)
-          SameTails.push_back(std::make_pair(I, TrialBBI2));
+          SameTails.push_back(SameTailElt(I, TrialBBI2));
       }
-      if (I==B)
+      if (I == B)
         break;
     }
   }
@@ -500,21 +591,21 @@ unsigned BranchFolder::ComputeSameTails(unsigned CurHash,
 
 /// RemoveBlocksWithHash - Remove all blocks with hash CurHash from
 /// MergePotentials, restoring branches at ends of blocks as appropriate.
-void BranchFolder::RemoveBlocksWithHash(unsigned CurHash, 
-                                        MachineBasicBlock* SuccBB,
-                                        MachineBasicBlock* PredBB) {
+void BranchFolder::RemoveBlocksWithHash(unsigned CurHash,
+                                        MachineBasicBlock *SuccBB,
+                                        MachineBasicBlock *PredBB) {
   MPIterator CurMPIter, B;
-  for (CurMPIter = prior(MergePotentials.end()), B = MergePotentials.begin(); 
-       CurMPIter->first==CurHash;
+  for (CurMPIter = prior(MergePotentials.end()), B = MergePotentials.begin();
+       CurMPIter->getHash() == CurHash;
        --CurMPIter) {
     // Put the unconditional branch back, if we need one.
-    MachineBasicBlock *CurMBB = CurMPIter->second;
+    MachineBasicBlock *CurMBB = CurMPIter->getBlock();
     if (SuccBB && CurMBB != PredBB)
       FixTail(CurMBB, SuccBB, TII);
-    if (CurMPIter==B)
+    if (CurMPIter == B)
       break;
   }
-  if (CurMPIter->first!=CurHash)
+  if (CurMPIter->getHash() != CurHash)
     CurMPIter++;
   MergePotentials.erase(CurMPIter, MergePotentials.end());
 }
@@ -523,35 +614,37 @@ void BranchFolder::RemoveBlocksWithHash(unsigned CurHash,
 /// only of the common tail.  Create a block that does by splitting one.
 unsigned BranchFolder::CreateCommonTailOnlyBlock(MachineBasicBlock *&PredBB,
                                              unsigned maxCommonTailLength) {
-  unsigned i, commonTailIndex;
+  unsigned commonTailIndex = 0;
   unsigned TimeEstimate = ~0U;
-  for (i=0, commonTailIndex=0; i<SameTails.size(); i++) {
+  for (unsigned i = 0, e = SameTails.size(); i != e; ++i) {
     // Use PredBB if possible; that doesn't require a new branch.
-    if (SameTails[i].first->second==PredBB) {
+    if (SameTails[i].getBlock() == PredBB) {
       commonTailIndex = i;
       break;
     }
     // Otherwise, make a (fairly bogus) choice based on estimate of
     // how long it will take the various blocks to execute.
-    unsigned t = EstimateRuntime(SameTails[i].first->second->begin(), 
-                                 SameTails[i].second);
-    if (t<=TimeEstimate) {
+    unsigned t = EstimateRuntime(SameTails[i].getBlock()->begin(),
+                                 SameTails[i].getTailStartPos());
+    if (t <= TimeEstimate) {
       TimeEstimate = t;
       commonTailIndex = i;
     }
   }
 
-  MachineBasicBlock::iterator BBI = SameTails[commonTailIndex].second;
-  MachineBasicBlock *MBB = SameTails[commonTailIndex].first->second;
+  MachineBasicBlock::iterator BBI =
+    SameTails[commonTailIndex].getTailStartPos();
+  MachineBasicBlock *MBB = SameTails[commonTailIndex].getBlock();
 
-  DEBUG(errs() << "\nSplitting " << MBB->getNumber() << ", size "
+  DEBUG(errs() << "\nSplitting BB#" << MBB->getNumber() << ", size "
                << maxCommonTailLength);
 
   MachineBasicBlock *newMBB = SplitMBBAt(*MBB, BBI);
-  SameTails[commonTailIndex].first->second = newMBB;
-  SameTails[commonTailIndex].second = newMBB->begin();
+  SameTails[commonTailIndex].setBlock(newMBB);
+  SameTails[commonTailIndex].setTailStartPos(newMBB->begin());
+
   // If we split PredBB, newMBB is the new predecessor.
-  if (PredBB==MBB)
+  if (PredBB == MBB)
     PredBB = newMBB;
 
   return commonTailIndex;
@@ -561,35 +654,49 @@ unsigned BranchFolder::CreateCommonTailOnlyBlock(MachineBasicBlock *&PredBB,
 // successor, or all have no successor) can be tail-merged.  If there is a
 // successor, any blocks in MergePotentials that are not tail-merged and
 // are not immediately before Succ must have an unconditional branch to
-// Succ added (but the predecessor/successor lists need no adjustment).  
+// Succ added (but the predecessor/successor lists need no adjustment).
 // The lone predecessor of Succ that falls through into Succ,
 // if any, is given in PredBB.
 
-bool BranchFolder::TryMergeBlocks(MachineBasicBlock *SuccBB,
-                                  MachineBasicBlock* PredBB) {
+bool BranchFolder::TryTailMergeBlocks(MachineBasicBlock *SuccBB,
+                                      MachineBasicBlock *PredBB) {
   bool MadeChange = false;
 
-  // It doesn't make sense to save a single instruction since tail merging
-  // will add a jump.
-  // FIXME: Ask the target to provide the threshold?
-  unsigned minCommonTailLength = (SuccBB ? 1 : 2) + 1;
-  
-  DEBUG(errs() << "\nTryMergeBlocks " << MergePotentials.size() << '\n');
+  // Except for the special cases below, tail-merge if there are at least
+  // this many instructions in common.
+  unsigned minCommonTailLength = TailMergeSize;
+
+  DEBUG(errs() << "\nTryTailMergeBlocks: ";
+        for (unsigned i = 0, e = MergePotentials.size(); i != e; ++i)
+          errs() << "BB#" << MergePotentials[i].getBlock()->getNumber()
+                 << (i == e-1 ? "" : ", ");
+        errs() << "\n";
+        if (SuccBB) {
+          errs() << "  with successor BB#" << SuccBB->getNumber() << '\n';
+          if (PredBB)
+            errs() << "  which has fall-through from BB#"
+                   << PredBB->getNumber() << "\n";
+        }
+        errs() << "Looking for common tails of at least "
+               << minCommonTailLength << " instruction"
+               << (minCommonTailLength == 1 ? "" : "s") << '\n';
+       );
 
   // Sort by hash value so that blocks with identical end sequences sort
   // together.
-  std::stable_sort(MergePotentials.begin(), MergePotentials.end(),MergeCompare);
+  std::stable_sort(MergePotentials.begin(), MergePotentials.end());
 
   // Walk through equivalence sets looking for actual exact matches.
   while (MergePotentials.size() > 1) {
-    unsigned CurHash  = prior(MergePotentials.end())->first;
-    
+    unsigned CurHash = MergePotentials.back().getHash();
+
     // Build SameTails, identifying the set of blocks with this hash code
     // and with the maximum number of instructions in common.
-    unsigned maxCommonTailLength = ComputeSameTails(CurHash, 
-                                                    minCommonTailLength);
+    unsigned maxCommonTailLength = ComputeSameTails(CurHash,
+                                                    minCommonTailLength,
+                                                    SuccBB, PredBB);
 
-    // If we didn't find any pair that has at least minCommonTailLength 
+    // If we didn't find any pair that has at least minCommonTailLength
     // instructions in common, remove all blocks with this hash code and retry.
     if (SameTails.empty()) {
       RemoveBlocksWithHash(CurHash, SuccBB, PredBB);
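
A note on the worklist discipline here: MergePotentials is sorted once by
hash, and equivalence classes are consumed from the high-hash end. The
following is a simplified, self-contained sketch; Elt and its int block id
are stand-ins for MergePotentialsElt, and the per-class common-tail
analysis done by ComputeSameTails is elided:

    #include <algorithm>
    #include <vector>

    struct Elt {
      unsigned Hash;
      int Block;                        // stand-in for MachineBasicBlock*
      bool operator<(const Elt &O) const { return Hash < O.Hash; }
    };

    // Peel the equivalence class with the highest hash off the back of the
    // worklist (assumes Worklist is non-empty), mirroring how
    // TryTailMergeBlocks consumes MergePotentials after one stable_sort.
    static std::vector<Elt> takeTopHashClass(std::vector<Elt> &Worklist) {
      std::stable_sort(Worklist.begin(), Worklist.end());
      std::vector<Elt> Class;
      unsigned CurHash = Worklist.back().Hash;
      while (!Worklist.empty() && Worklist.back().Hash == CurHash) {
        Class.push_back(Worklist.back());
        Worklist.pop_back();
      }
      return Class;
    }
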
@@ -600,36 +707,58 @@ bool BranchFolder::TryMergeBlocks(MachineBasicBlock *SuccBB,
     // block, which we can't jump to), we can treat all blocks with this same
     // tail at once.  Use PredBB if that is one of the possibilities, as that
     // will not introduce any extra branches.
-    MachineBasicBlock *EntryBB = MergePotentials.begin()->second->
-                                getParent()->begin();
-    unsigned int commonTailIndex, i;
-    for (commonTailIndex=SameTails.size(), i=0; i<SameTails.size(); i++) {
-      MachineBasicBlock *MBB = SameTails[i].first->second;
-      if (MBB->begin() == SameTails[i].second && MBB != EntryBB) {
-        commonTailIndex = i;
-        if (MBB==PredBB)
+    MachineBasicBlock *EntryBB = MergePotentials.begin()->getBlock()->
+                                 getParent()->begin();
+    unsigned commonTailIndex = SameTails.size();
+    // If there are two blocks, check to see if one can be made to fall through
+    // into the other.
+    if (SameTails.size() == 2 &&
+        SameTails[0].getBlock()->isLayoutSuccessor(SameTails[1].getBlock()) &&
+        SameTails[1].tailIsWholeBlock())
+      commonTailIndex = 1;
+    else if (SameTails.size() == 2 &&
+             SameTails[1].getBlock()->isLayoutSuccessor(
+                                                     SameTails[0].getBlock()) &&
+             SameTails[0].tailIsWholeBlock())
+      commonTailIndex = 0;
+    else {
+      // Otherwise just pick one, favoring the fall-through predecessor if
+      // there is one.
+      for (unsigned i = 0, e = SameTails.size(); i != e; ++i) {
+        MachineBasicBlock *MBB = SameTails[i].getBlock();
+        if (MBB == EntryBB && SameTails[i].tailIsWholeBlock())
+          continue;
+        if (MBB == PredBB) {
+          commonTailIndex = i;
           break;
+        }
+        if (SameTails[i].tailIsWholeBlock())
+          commonTailIndex = i;
       }
     }
 
-    if (commonTailIndex==SameTails.size()) {
+    if (commonTailIndex == SameTails.size() ||
+        (SameTails[commonTailIndex].getBlock() == PredBB &&
+         !SameTails[commonTailIndex].tailIsWholeBlock())) {
       // None of the blocks consist entirely of the common tail.
       // Split a block so that one does.
-      commonTailIndex = CreateCommonTailOnlyBlock(PredBB,  maxCommonTailLength);
+      commonTailIndex = CreateCommonTailOnlyBlock(PredBB, maxCommonTailLength);
     }
 
-    MachineBasicBlock *MBB = SameTails[commonTailIndex].first->second;
+    MachineBasicBlock *MBB = SameTails[commonTailIndex].getBlock();
     // MBB is common tail.  Adjust all other BB's to jump to this one.
     // Traversal must be forwards so erases work.
-    DEBUG(errs() << "\nUsing common tail " << MBB->getNumber() << " for ");
-    for (unsigned int i=0; i<SameTails.size(); ++i) {
-      if (commonTailIndex==i)
+    DEBUG(errs() << "\nUsing common tail in BB#" << MBB->getNumber()
+                 << " for ");
+    for (unsigned int i=0, e = SameTails.size(); i != e; ++i) {
+      if (commonTailIndex == i)
         continue;
-      DEBUG(errs() << SameTails[i].first->second->getNumber() << ",");
+      DEBUG(errs() << "BB#" << SameTails[i].getBlock()->getNumber()
+                   << (i == e-1 ? "" : ", "));
       // Hack the end off BB i, making it jump to BB commonTailIndex instead.
-      ReplaceTailWithBranchTo(SameTails[i].second, MBB);
+      ReplaceTailWithBranchTo(SameTails[i].getTailStartPos(), MBB);
       // BB i is no longer a predecessor of SuccBB; remove it from the worklist.
-      MergePotentials.erase(SameTails[i].first);
+      MergePotentials.erase(SameTails[i].getMPIter());
     }
     DEBUG(errs() << "\n");
     // We leave commonTailIndex in the worklist in case there are other blocks
@@ -642,26 +771,27 @@ bool BranchFolder::TryMergeBlocks(MachineBasicBlock *SuccBB,
 bool BranchFolder::TailMergeBlocks(MachineFunction &MF) {
 
   if (!EnableTailMerge) return false;
- 
+
   bool MadeChange = false;
 
   // First find blocks with no successors.
   MergePotentials.clear();
   for (MachineFunction::iterator I = MF.begin(), E = MF.end(); I != E; ++I) {
     if (I->succ_empty())
-      MergePotentials.push_back(std::make_pair(HashEndOfMBB(I, 2U), I));
+      MergePotentials.push_back(MergePotentialsElt(HashEndOfMBB(I, 2U), I));
   }
+
   // See if we can do any tail merging on those.
   if (MergePotentials.size() < TailMergeThreshold &&
       MergePotentials.size() >= 2)
-    MadeChange |= TryMergeBlocks(NULL, NULL);
+    MadeChange |= TryTailMergeBlocks(NULL, NULL);
 
   // Look at blocks (IBB) with multiple predecessors (PBB).
   // We change each predecessor to a canonical form, by
   // (1) temporarily removing any unconditional branch from the predecessor
   // to IBB, and
   // (2) alter conditional branches so they branch to the other block
-  // not IBB; this may require adding back an unconditional branch to IBB 
+  // not IBB; this may require adding back an unconditional branch to IBB
   // later, where there wasn't one coming in.  E.g.
   //   Bcc IBB
   //   fallthrough to QBB
@@ -675,18 +805,19 @@ bool BranchFolder::TailMergeBlocks(MachineFunction &MF) {
   // a compile-time infinite loop repeatedly doing and undoing the same
   // transformations.)
 
-  for (MachineFunction::iterator I = MF.begin(), E = MF.end(); I != E; ++I) {
+  for (MachineFunction::iterator I = next(MF.begin()), E = MF.end();
+       I != E; ++I) {
     if (I->pred_size() >= 2 && I->pred_size() < TailMergeThreshold) {
       SmallPtrSet<MachineBasicBlock *, 8> UniquePreds;
       MachineBasicBlock *IBB = I;
       MachineBasicBlock *PredBB = prior(I);
       MergePotentials.clear();
-      for (MachineBasicBlock::pred_iterator P = I->pred_begin(), 
+      for (MachineBasicBlock::pred_iterator P = I->pred_begin(),
                                             E2 = I->pred_end();
            P != E2; ++P) {
-        MachineBasicBlock* PBB = *P;
+        MachineBasicBlock *PBB = *P;
         // Skip blocks that loop to themselves, can't tail merge these.
-        if (PBB==IBB)
+        if (PBB == IBB)
           continue;
         // Visit each predecessor only once.
         if (!UniquePreds.insert(PBB))
@@ -697,7 +828,7 @@ bool BranchFolder::TailMergeBlocks(MachineFunction &MF) {
           // Failing case:  IBB is the target of a cbr, and
           // we cannot reverse the branch.
           SmallVector<MachineOperand, 4> NewCond(Cond);
-          if (!Cond.empty() && TBB==IBB) {
+          if (!Cond.empty() && TBB == IBB) {
             if (TII->ReverseBranchCondition(NewCond))
               continue;
             // This is the QBB case described above
@@ -709,20 +840,20 @@ bool BranchFolder::TailMergeBlocks(MachineFunction &MF) {
           // to have a bit in the edge so we didn't have to do all this.
           if (IBB->isLandingPad()) {
             MachineFunction::iterator IP = PBB;  IP++;
-            MachineBasicBlock* PredNextBB = NULL;
-            if (IP!=MF.end())
+            MachineBasicBlock *PredNextBB = NULL;
+            if (IP != MF.end())
               PredNextBB = IP;
-            if (TBB==NULL) {
-              if (IBB!=PredNextBB)      // fallthrough
+            if (TBB == NULL) {
+              if (IBB != PredNextBB)      // fallthrough
                 continue;
             } else if (FBB) {
-              if (TBB!=IBB && FBB!=IBB)   // cbr then ubr
+              if (TBB != IBB && FBB != IBB)   // cbr then ubr
                 continue;
             } else if (Cond.empty()) {
-              if (TBB!=IBB)               // ubr
+              if (TBB != IBB)               // ubr
                 continue;
             } else {
-              if (TBB!=IBB && IBB!=PredNextBB)  // cbr
+              if (TBB != IBB && IBB != PredNextBB)  // cbr
                 continue;
             }
           }
@@ -731,19 +862,20 @@ bool BranchFolder::TailMergeBlocks(MachineFunction &MF) {
             TII->RemoveBranch(*PBB);
             if (!Cond.empty())
               // reinsert conditional branch only, for now
-              TII->InsertBranch(*PBB, (TBB==IBB) ? FBB : TBB, 0, NewCond);
+              TII->InsertBranch(*PBB, (TBB == IBB) ? FBB : TBB, 0, NewCond);
           }
-          MergePotentials.push_back(std::make_pair(HashEndOfMBB(PBB, 1U), *P));
+          MergePotentials.push_back(MergePotentialsElt(HashEndOfMBB(PBB, 1U),
+                                                       *P));
         }
       }
-    if (MergePotentials.size() >= 2)
-      MadeChange |= TryMergeBlocks(I, PredBB);
-    // Reinsert an unconditional branch if needed.
-    // The 1 below can occur as a result of removing blocks in TryMergeBlocks.
-    PredBB = prior(I);      // this may have been changed in TryMergeBlocks
-    if (MergePotentials.size()==1 && 
-        MergePotentials.begin()->second != PredBB)
-      FixTail(MergePotentials.begin()->second, I, TII);
+      if (MergePotentials.size() >= 2)
+        MadeChange |= TryTailMergeBlocks(IBB, PredBB);
+      // Reinsert an unconditional branch if needed.
+      // The 1 below can occur as a result of removing blocks in TryTailMergeBlocks.
+      PredBB = prior(I);      // this may have been changed in TryTailMergeBlocks
+      if (MergePotentials.size() == 1 &&
+          MergePotentials.begin()->getBlock() != PredBB)
+        FixTail(MergePotentials.begin()->getBlock(), IBB, TII);
     }
   }
   return MadeChange;
@@ -755,14 +887,14 @@ bool BranchFolder::TailMergeBlocks(MachineFunction &MF) {
 
 bool BranchFolder::OptimizeBranches(MachineFunction &MF) {
   bool MadeChange = false;
-  
+
   // Make sure blocks are numbered in order
   MF.RenumberBlocks();
 
   for (MachineFunction::iterator I = ++MF.begin(), E = MF.end(); I != E; ) {
     MachineBasicBlock *MBB = I++;
     MadeChange |= OptimizeBlock(MBB);
-    
+
     // If it is dead, remove it.
     if (MBB->pred_empty()) {
       RemoveDeadBlock(MBB);
@@ -774,75 +906,18 @@ bool BranchFolder::OptimizeBranches(MachineFunction &MF) {
 }
 
 
-/// CanFallThrough - Return true if the specified block (with the specified
-/// branch condition) can implicitly transfer control to the block after it by
-/// falling off the end of it.  This should return false if it can reach the
-/// block after it, but it uses an explicit branch to do so (e.g. a table jump).
-///
-/// True is a conservative answer.
-///
-bool BranchFolder::CanFallThrough(MachineBasicBlock *CurBB,
-                                  bool BranchUnAnalyzable,
-                                  MachineBasicBlock *TBB, 
-                                  MachineBasicBlock *FBB,
-                                  const SmallVectorImpl<MachineOperand> &Cond) {
-  MachineFunction::iterator Fallthrough = CurBB;
-  ++Fallthrough;
-  // If FallthroughBlock is off the end of the function, it can't fall through.
-  if (Fallthrough == CurBB->getParent()->end())
-    return false;
-  
-  // If FallthroughBlock isn't a successor of CurBB, no fallthrough is possible.
-  if (!CurBB->isSuccessor(Fallthrough))
-    return false;
-  
-  // If we couldn't analyze the branch, assume it could fall through.
-  if (BranchUnAnalyzable) return true;
-  
-  // If there is no branch, control always falls through.
-  if (TBB == 0) return true;
-
-  // If there is some explicit branch to the fallthrough block, it can obviously
-  // reach, even though the branch should get folded to fall through implicitly.
-  if (MachineFunction::iterator(TBB) == Fallthrough ||
-      MachineFunction::iterator(FBB) == Fallthrough)
-    return true;
-  
-  // If it's an unconditional branch to some block not the fall through, it 
-  // doesn't fall through.
-  if (Cond.empty()) return false;
-  
-  // Otherwise, if it is conditional and has no explicit false block, it falls
-  // through.
-  return FBB == 0;
-}
-
-/// CanFallThrough - Return true if the specified can implicitly transfer
-/// control to the block after it by falling off the end of it.  This should
-/// return false if it can reach the block after it, but it uses an explicit
-/// branch to do so (e.g. a table jump).
-///
-/// True is a conservative answer.
-///
-bool BranchFolder::CanFallThrough(MachineBasicBlock *CurBB) {
-  MachineBasicBlock *TBB = 0, *FBB = 0;
-  SmallVector<MachineOperand, 4> Cond;
-  bool CurUnAnalyzable = TII->AnalyzeBranch(*CurBB, TBB, FBB, Cond, true);
-  return CanFallThrough(CurBB, CurUnAnalyzable, TBB, FBB, Cond);
-}
-
 /// IsBetterFallthrough - Return true if it would be clearly better to
 /// fall-through to MBB1 than to fall through into MBB2.  This has to return
 /// a strict ordering, returning true for both (MBB1,MBB2) and (MBB2,MBB1) will
 /// result in infinite loops.
-static bool IsBetterFallthrough(MachineBasicBlock *MBB1, 
+static bool IsBetterFallthrough(MachineBasicBlock *MBB1,
                                 MachineBasicBlock *MBB2) {
   // Right now, we use a simple heuristic.  If MBB2 ends with a call, and
   // MBB1 doesn't, we prefer to fall through into MBB1.  This allows us to
   // optimize branches that branch to either a return block or an assert block
   // into a fallthrough to the return.
   if (MBB1->empty() || MBB2->empty()) return false;
- 
+
   // If there is a clear successor ordering we make sure that one block
   // will fall through to the next
   if (MBB1->isSuccessor(MBB2)) return true;
@@ -857,18 +932,21 @@ static bool IsBetterFallthrough(MachineBasicBlock *MBB1,
 /// block.  This is never called on the entry block.
 bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
   bool MadeChange = false;
+  MachineFunction &MF = *MBB->getParent();
+ReoptimizeBlock:
 
   MachineFunction::iterator FallThrough = MBB;
   ++FallThrough;
-  
+
   // If this block is empty, make everyone use its fall-through, not the block
   // explicitly.  Landing pads should not do this since the landing-pad table
-  // points to this block.
-  if (MBB->empty() && !MBB->isLandingPad()) {
+  // points to this block.  Blocks with their addresses taken shouldn't be
+  // optimized away.
+  if (MBB->empty() && !MBB->isLandingPad() && !MBB->hasAddressTaken()) {
     // Dead block?  Leave for cleanup later.
     if (MBB->pred_empty()) return MadeChange;
-    
-    if (FallThrough == MBB->getParent()->end()) {
+
+    if (FallThrough == MF.end()) {
       // TODO: Simplify preds to not branch here if possible!
     } else {
       // Rewrite all predecessors of the old block to go to the fallthrough
@@ -879,8 +957,7 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
       }
       // If MBB was the target of a jump table, update jump tables to go to the
       // fallthrough instead.
-      MBB->getParent()->getJumpTableInfo()->
-        ReplaceMBBInJumpTables(MBB, FallThrough);
+      MF.getJumpTableInfo()->ReplaceMBBInJumpTables(MBB, FallThrough);
       MadeChange = true;
     }
     return MadeChange;
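
The empty-block case above retargets every predecessor at the fall-through
block. The same idea in toy form, with integers for blocks and an explicit
edge list for the CFG; note that ReplaceUsesOfBlockWith also rewrites the
branch instructions themselves, which this sketch omits:

    #include <utility>
    #include <vector>

    typedef std::pair<int, int> Edge;   // (pred, succ) block numbers

    // Retarget every CFG edge into the empty block E to its layout
    // successor S, as the predecessor-rewriting loop above does.
    static void bypassEmptyBlock(std::vector<Edge> &Edges, int E, int S) {
      for (std::vector<Edge>::iterator I = Edges.begin(), End = Edges.end();
           I != End; ++I)
        if (I->second == E)
          I->second = S;
    }
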
@@ -898,29 +975,49 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
     // If the CFG for the prior block has extra edges, remove them.
     MadeChange |= PrevBB.CorrectExtraCFGEdges(PriorTBB, PriorFBB,
                                               !PriorCond.empty());
-    
+
     // If the previous branch is conditional and both conditions go to the same
     // destination, remove the branch, replacing it with an unconditional one or
     // a fall-through.
     if (PriorTBB && PriorTBB == PriorFBB) {
       TII->RemoveBranch(PrevBB);
-      PriorCond.clear(); 
+      PriorCond.clear();
       if (PriorTBB != MBB)
         TII->InsertBranch(PrevBB, PriorTBB, 0, PriorCond);
       MadeChange = true;
       ++NumBranchOpts;
-      return OptimizeBlock(MBB);
+      goto ReoptimizeBlock;
     }
-    
+
+    // If the previous block unconditionally falls through to this block and
+    // this block has no other predecessors, move the contents of this block
+    // into the prior block. This doesn't usually happen when SimplifyCFG
+    // has been used, but it can happen if tail merging splits a fall-through
+    // predecessor of a block.
+    // This has to check PrevBB->succ_size() because EH edges are ignored by
+    // AnalyzeBranch.
+    if (PriorCond.empty() && !PriorTBB && MBB->pred_size() == 1 &&
+        PrevBB.succ_size() == 1 &&
+        !MBB->hasAddressTaken()) {
+      DEBUG(errs() << "\nMerging into block: " << PrevBB
+                   << "From MBB: " << *MBB);
+      PrevBB.splice(PrevBB.end(), MBB, MBB->begin(), MBB->end());
+      PrevBB.removeSuccessor(PrevBB.succ_begin());
+      assert(PrevBB.succ_empty());
+      PrevBB.transferSuccessors(MBB);
+      MadeChange = true;
+      return MadeChange;
+    }
+
     // If the previous branch *only* branches to *this* block (conditional or
     // not) remove the branch.
     if (PriorTBB == MBB && PriorFBB == 0) {
       TII->RemoveBranch(PrevBB);
       MadeChange = true;
       ++NumBranchOpts;
-      return OptimizeBlock(MBB);
+      goto ReoptimizeBlock;
     }
-    
+
     // If the prior block branches somewhere else on the condition and here if
     // the condition is false, remove the uncond second branch.
     if (PriorFBB == MBB) {
@@ -928,9 +1025,9 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
       TII->InsertBranch(PrevBB, PriorTBB, 0, PriorCond);
       MadeChange = true;
       ++NumBranchOpts;
-      return OptimizeBlock(MBB);
+      goto ReoptimizeBlock;
     }
-    
+
     // If the prior block branches here on true and somewhere else on false, and
     // if the branch condition is reversible, reverse the branch to create a
     // fall-through.
@@ -941,29 +1038,29 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
         TII->InsertBranch(PrevBB, PriorFBB, 0, NewPriorCond);
         MadeChange = true;
         ++NumBranchOpts;
-        return OptimizeBlock(MBB);
+        goto ReoptimizeBlock;
       }
     }
-    
-    // If this block doesn't fall through (e.g. it ends with an uncond branch or
-    // has no successors) and if the pred falls through into this block, and if
-    // it would otherwise fall through into the block after this, move this
-    // block to the end of the function.
+
+    // If this block has no successors (e.g. it is a return block or ends with
+    // a call to a no-return function like abort or __cxa_throw) and if the pred
+    // falls through into this block, and if it would otherwise fall through
+    // into the block after this, move this block to the end of the function.
     //
     // We consider it more likely that execution will stay in the function (e.g.
     // due to loops) than it is to exit it.  This asserts in loops etc, moving
     // the assert condition out of the loop body.
-    if (!PriorCond.empty() && PriorFBB == 0 &&
+    if (MBB->succ_empty() && !PriorCond.empty() && PriorFBB == 0 &&
         MachineFunction::iterator(PriorTBB) == FallThrough &&
-        !CanFallThrough(MBB)) {
+        !MBB->canFallThrough()) {
       bool DoTransform = true;
-      
+
       // We have to be careful that the succs of PredBB aren't both no-successor
       // blocks.  If neither have successors and if PredBB is the second from
       // last block in the function, we'd just keep swapping the two blocks for
       // last.  Only do the swap if one is clearly better to fall through than
       // the other.
-      if (FallThrough == --MBB->getParent()->end() &&
+      if (FallThrough == --MF.end() &&
           !IsBetterFallthrough(PriorTBB, MBB))
         DoTransform = false;
 
@@ -979,22 +1076,22 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
       // In this case, we could actually be moving the return block *into* a
       // loop!
       if (DoTransform && !MBB->succ_empty() &&
-          (!CanFallThrough(PriorTBB) || PriorTBB->empty()))
+          (!PriorTBB->canFallThrough() || PriorTBB->empty()))
         DoTransform = false;
-      
-      
+
+
       if (DoTransform) {
         // Reverse the branch so we will fall through on the previous true cond.
         SmallVector<MachineOperand, 4> NewPriorCond(PriorCond);
         if (!TII->ReverseBranchCondition(NewPriorCond)) {
           DEBUG(errs() << "\nMoving MBB: " << *MBB
                        << "To make fallthrough to: " << *PriorTBB << "\n");
-          
+
           TII->RemoveBranch(PrevBB);
           TII->InsertBranch(PrevBB, MBB, 0, NewPriorCond);
 
           // Move this block to the end of the function.
-          MBB->moveAfter(--MBB->getParent()->end());
+          MBB->moveAfter(--MF.end());
           MadeChange = true;
           ++NumBranchOpts;
           return MadeChange;
@@ -1002,7 +1099,7 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
       }
     }
   }
-  
+
   // Analyze the branch in the current block.
   MachineBasicBlock *CurTBB = 0, *CurFBB = 0;
   SmallVector<MachineOperand, 4> CurCond;
@@ -1011,7 +1108,7 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
     // If the CFG for the prior block has extra edges, remove them.
     MadeChange |= MBB->CorrectExtraCFGEdges(CurTBB, CurFBB, !CurCond.empty());
 
-    // If this is a two-way branch, and the FBB branches to this block, reverse 
+    // If this is a two-way branch, and the FBB branches to this block, reverse
     // the condition so the single-basic-block loop is faster.  Instead of:
     //    Loop: xxx; jcc Out; jmp Loop
     // we want:
@@ -1023,15 +1120,15 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
         TII->InsertBranch(*MBB, CurFBB, CurTBB, NewCond);
         MadeChange = true;
         ++NumBranchOpts;
-        return OptimizeBlock(MBB);
+        goto ReoptimizeBlock;
       }
     }
-    
-    
+
     // If this branch is the only thing in its block, see if we can forward
     // other blocks across it.
-    if (CurTBB && CurCond.empty() && CurFBB == 0 && 
-        MBB->begin()->getDesc().isBranch() && CurTBB != MBB) {
+    if (CurTBB && CurCond.empty() && CurFBB == 0 &&
+        MBB->begin()->getDesc().isBranch() && CurTBB != MBB &&
+        !MBB->hasAddressTaken()) {
       // This block may contain just an unconditional branch.  Because there can
       // be 'non-branch terminators' in the block, try removing the branch and
       // then seeing if the block is empty.
@@ -1048,7 +1145,7 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
             !PrevBB.isSuccessor(MBB)) {
           // If the prior block falls through into us, turn it into an
           // explicit branch to us to make updates simpler.
-          if (!PredHasNoFallThrough && PrevBB.isSuccessor(MBB) && 
+          if (!PredHasNoFallThrough && PrevBB.isSuccessor(MBB) &&
               PriorTBB != MBB && PriorFBB != MBB) {
             if (PriorTBB == 0) {
               assert(PriorCond.empty() && PriorFBB == 0 &&
@@ -1084,18 +1181,17 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
                       NewCurFBB, NewCurCond, true);
               if (!NewCurUnAnalyzable && NewCurTBB && NewCurTBB == NewCurFBB) {
                 TII->RemoveBranch(*PMBB);
-                NewCurCond.clear(); 
+                NewCurCond.clear();
                 TII->InsertBranch(*PMBB, NewCurTBB, 0, NewCurCond);
                 MadeChange = true;
                 ++NumBranchOpts;
-                PMBB->CorrectExtraCFGEdges(NewCurTBB, NewCurFBB, false);
+                PMBB->CorrectExtraCFGEdges(NewCurTBB, 0, false);
               }
             }
           }
 
           // Change any jumptables to go to the new MBB.
-          MBB->getParent()->getJumpTableInfo()->
-            ReplaceMBBInJumpTables(MBB, CurTBB);
+          MF.getJumpTableInfo()->ReplaceMBBInJumpTables(MBB, CurTBB);
           if (DidChange) {
             ++NumBranchOpts;
             MadeChange = true;
@@ -1103,7 +1199,7 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
           }
         }
       }
-      
+
       // Add the branch back if the block is more than just an uncond branch.
       TII->InsertBranch(*MBB, CurTBB, 0, CurCond);
     }
@@ -1112,12 +1208,11 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
   // If the prior block doesn't fall through into this block, and if this
   // block doesn't fall through into some other block, see if we can find a
   // place to move this block where a fall-through will happen.
-  if (!CanFallThrough(&PrevBB, PriorUnAnalyzable,
-                      PriorTBB, PriorFBB, PriorCond)) {
+  if (!PrevBB.canFallThrough()) {
+
     // Now we know that there was no fall-through into this block, check to
     // see if it has a fall-through into its successor.
-    bool CurFallsThru = CanFallThrough(MBB, CurUnAnalyzable, CurTBB, CurFBB, 
-                                       CurCond);
+    bool CurFallsThru = MBB->canFallThrough();
 
     if (!MBB->isLandingPad()) {
       // Check all the predecessors of this block.  If one of them has no fall
@@ -1127,12 +1222,15 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
         // Analyze the branch at the end of the pred.
         MachineBasicBlock *PredBB = *PI;
         MachineFunction::iterator PredFallthrough = PredBB; ++PredFallthrough;
-        if (PredBB != MBB && !CanFallThrough(PredBB)
+        MachineBasicBlock *PredTBB, *PredFBB;
+        SmallVector<MachineOperand, 4> PredCond;
+        if (PredBB != MBB && !PredBB->canFallThrough() &&
+            !TII->AnalyzeBranch(*PredBB, PredTBB, PredFBB, PredCond, true)
             && (!CurFallsThru || !CurTBB || !CurFBB)
             && (!CurFallsThru || MBB->getNumber() >= PredBB->getNumber())) {
           // If the current block doesn't fall through, just move it.
           // If the current block can fall through and does not end with a
-          // conditional branch, we need to append an unconditional jump to 
+          // conditional branch, we need to append an unconditional jump to
           // the (current) next block.  To avoid a possible compile-time
           // infinite loop, move blocks only backward in this case.
           // Also, if there are already 2 branches here, we cannot add a third;
@@ -1147,11 +1245,11 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
           }
           MBB->moveAfter(PredBB);
           MadeChange = true;
-          return OptimizeBlock(MBB);
+          goto ReoptimizeBlock;
         }
       }
     }
-        
+
     if (!CurFallsThru) {
       // Check all successors to see if we can move this block before it.
       for (MachineBasicBlock::succ_iterator SI = MBB->succ_begin(),
@@ -1159,26 +1257,29 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
         // Analyze the branch at the end of the block before the succ.
         MachineBasicBlock *SuccBB = *SI;
         MachineFunction::iterator SuccPrev = SuccBB; --SuccPrev;
-        std::vector<MachineOperand> SuccPrevCond;
-        
+
         // If this block doesn't already fall-through to that successor, and if
         // the succ doesn't already have a block that can fall through into it,
         // and if the successor isn't an EH destination, we can arrange for the
         // fallthrough to happen.
-        if (SuccBB != MBB && !CanFallThrough(SuccPrev) &&
+        if (SuccBB != MBB && &*SuccPrev != MBB &&
+            !SuccPrev->canFallThrough() && !CurUnAnalyzable &&
             !SuccBB->isLandingPad()) {
           MBB->moveBefore(SuccBB);
           MadeChange = true;
-          return OptimizeBlock(MBB);
+          goto ReoptimizeBlock;
         }
       }
-      
+
       // Okay, there is no really great place to put this block.  If, however,
       // the block before this one would be a fall-through if this block were
       // removed, move this block to the end of the function.
-      if (FallThrough != MBB->getParent()->end() &&
+      MachineBasicBlock *PrevTBB, *PrevFBB;
+      SmallVector<MachineOperand, 4> PrevCond;
+      if (FallThrough != MF.end() &&
+          !TII->AnalyzeBranch(PrevBB, PrevTBB, PrevFBB, PrevCond, true) &&
           PrevBB.isSuccessor(FallThrough)) {
-        MBB->moveAfter(--MBB->getParent()->end());
+        MBB->moveAfter(--MF.end());
         MadeChange = true;
         return MadeChange;
       }
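
For reference, the two CanFallThrough helpers deleted above are subsumed by
MachineBasicBlock::canFallThrough(). The outline below restates the deleted
logic in self-contained form; BB and the flattened parameter list are
stand-ins, and the in-tree method may differ in detail:

    #include <algorithm>
    #include <vector>

    struct BB { std::vector<BB*> Succs; };

    static bool isSuccessor(const BB *From, const BB *To) {
      return std::find(From->Succs.begin(), From->Succs.end(), To)
             != From->Succs.end();
    }

    // LayoutNext is the next block in layout order (0 at the end of the
    // function); TBB, FBB, CondEmpty come from an AnalyzeBranch-style query.
    static bool canFallThrough(const BB *Cur, bool Unanalyzable,
                               const BB *TBB, const BB *FBB, bool CondEmpty,
                               const BB *LayoutNext) {
      if (!LayoutNext) return false;          // nothing to fall into
      if (!isSuccessor(Cur, LayoutNext)) return false;
      if (Unanalyzable) return true;          // conservative answer
      if (!TBB) return true;                  // no branch: plain fallthrough
      if (TBB == LayoutNext || FBB == LayoutNext)
        return true;                          // explicit branch to layout next
      if (CondEmpty) return false;            // uncond branch somewhere else
      return FBB == 0;                        // cond branch, implicit false edge
    }
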
diff --git a/libclamav/c++/llvm/lib/CodeGen/BranchFolding.h b/libclamav/c++/llvm/lib/CodeGen/BranchFolding.h
index 9763e33..b087395 100644
--- a/libclamav/c++/llvm/lib/CodeGen/BranchFolding.h
+++ b/libclamav/c++/llvm/lib/CodeGen/BranchFolding.h
@@ -11,7 +11,6 @@
 #define LLVM_CODEGEN_BRANCHFOLDING_HPP
 
 #include "llvm/CodeGen/MachineBasicBlock.h"
-#include "llvm/CodeGen/MachineFunctionPass.h"
 #include <vector>
 
 namespace llvm {
@@ -20,6 +19,7 @@ namespace llvm {
   class RegScavenger;
   class TargetInstrInfo;
   class TargetRegisterInfo;
+  template<typename T> class SmallVectorImpl;
 
   class BranchFolder {
   public:
@@ -30,11 +30,58 @@ namespace llvm {
                           const TargetRegisterInfo *tri,
                           MachineModuleInfo *mmi);
   private:
-    typedef std::pair<unsigned,MachineBasicBlock*> MergePotentialsElt;
+    class MergePotentialsElt {
+      unsigned Hash;
+      MachineBasicBlock *Block;
+    public:
+      MergePotentialsElt(unsigned h, MachineBasicBlock *b)
+        : Hash(h), Block(b) {}
+
+      unsigned getHash() const { return Hash; }
+      MachineBasicBlock *getBlock() const { return Block; }
+
+      void setBlock(MachineBasicBlock *MBB) {
+        Block = MBB;
+      }
+
+      bool operator<(const MergePotentialsElt &) const;
+    };
     typedef std::vector<MergePotentialsElt>::iterator MPIterator;
     std::vector<MergePotentialsElt> MergePotentials;
 
-    typedef std::pair<MPIterator, MachineBasicBlock::iterator> SameTailElt;
+    class SameTailElt {
+      MPIterator MPIter;
+      MachineBasicBlock::iterator TailStartPos;
+    public:
+      SameTailElt(MPIterator mp, MachineBasicBlock::iterator tsp)
+        : MPIter(mp), TailStartPos(tsp) {}
+
+      MPIterator getMPIter() const {
+        return MPIter;
+      }
+      MergePotentialsElt &getMergePotentialsElt() const {
+        return *getMPIter();
+      }
+      MachineBasicBlock::iterator getTailStartPos() const {
+        return TailStartPos;
+      }
+      unsigned getHash() const {
+        return getMergePotentialsElt().getHash();
+      }
+      MachineBasicBlock *getBlock() const {
+        return getMergePotentialsElt().getBlock();
+      }
+      bool tailIsWholeBlock() const {
+        return TailStartPos == getBlock()->begin();
+      }
+
+      void setBlock(MachineBasicBlock *MBB) {
+        getMergePotentialsElt().setBlock(MBB);
+      }
+      void setTailStartPos(MachineBasicBlock::iterator Pos) {
+        TailStartPos = Pos;
+      }
+    };
     std::vector<SameTailElt> SameTails;
 
     bool EnableTailMerge;
@@ -44,13 +91,15 @@ namespace llvm {
     RegScavenger *RS;
 
     bool TailMergeBlocks(MachineFunction &MF);
-    bool TryMergeBlocks(MachineBasicBlock* SuccBB,
-                        MachineBasicBlock* PredBB);
+    bool TryTailMergeBlocks(MachineBasicBlock* SuccBB,
+                       MachineBasicBlock* PredBB);
     void ReplaceTailWithBranchTo(MachineBasicBlock::iterator OldInst,
                                  MachineBasicBlock *NewDest);
     MachineBasicBlock *SplitMBBAt(MachineBasicBlock &CurMBB,
                                   MachineBasicBlock::iterator BBI1);
-    unsigned ComputeSameTails(unsigned CurHash, unsigned minCommonTailLength);
+    unsigned ComputeSameTails(unsigned CurHash, unsigned minCommonTailLength,
+                              MachineBasicBlock *SuccBB,
+                              MachineBasicBlock *PredBB);
     void RemoveBlocksWithHash(unsigned CurHash, MachineBasicBlock* SuccBB,
                                                 MachineBasicBlock* PredBB);
     unsigned CreateCommonTailOnlyBlock(MachineBasicBlock *&PredBB,
@@ -60,24 +109,6 @@ namespace llvm {
     bool OptimizeBlock(MachineBasicBlock *MBB);
     void RemoveDeadBlock(MachineBasicBlock *MBB);
     bool OptimizeImpDefsBlock(MachineBasicBlock *MBB);
-    
-    bool CanFallThrough(MachineBasicBlock *CurBB);
-    bool CanFallThrough(MachineBasicBlock *CurBB, bool BranchUnAnalyzable,
-                        MachineBasicBlock *TBB, MachineBasicBlock *FBB,
-                        const SmallVectorImpl<MachineOperand> &Cond);
-  };
-
-
-  /// BranchFolderPass - Wrap branch folder in a machine function pass.
-  class BranchFolderPass : public MachineFunctionPass,
-                           public BranchFolder {
-  public:
-    static char ID;
-    explicit BranchFolderPass(bool defaultEnableTailMerge)
-      :  MachineFunctionPass(&ID), BranchFolder(defaultEnableTailMerge) {}
-
-    virtual bool runOnMachineFunction(MachineFunction &MF);
-    virtual const char *getPassName() const { return "Control Flow Optimizer"; }
   };
 }
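
The comparator declared in MergePotentialsElt above is defined in
BranchFolding.cpp, outside this hunk. A plausible shape, given that the
stable_sort in TryTailMergeBlocks only requires ordering by hash; the
block-number tie-break below is purely an assumption for illustration:

    bool BranchFolder::MergePotentialsElt::operator<(
        const MergePotentialsElt &o) const {
      if (Hash != o.Hash)
        return Hash < o.Hash;               // group identical end sequences
      return Block->getNumber() < o.Block->getNumber();  // assumed tie-break
    }
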
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt b/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt
index 5b116e9..6f86614 100644
--- a/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt
@@ -1,6 +1,8 @@
 add_llvm_library(LLVMCodeGen
+  AggressiveAntiDepBreaker.cpp
   BranchFolding.cpp
   CodePlacementOpt.cpp
+  CriticalAntiDepBreaker.cpp
   DeadMachineInstructionElim.cpp
   DwarfEHPrepare.cpp
   ELFCodeEmitter.cpp
@@ -13,7 +15,6 @@ add_llvm_library(LLVMCodeGen
   IntrinsicLowering.cpp
   LLVMTargetMachine.cpp
   LatencyPriorityQueue.cpp
-  LazyLiveness.cpp
   LiveInterval.cpp
   LiveIntervalAnalysis.cpp
   LiveStackAnalysis.cpp
@@ -41,6 +42,7 @@ add_llvm_library(LLVMCodeGen
   Passes.cpp
   PostRASchedulerList.cpp
   PreAllocSplitting.cpp
+  ProcessImplicitDefs.cpp
   PrologEpilogInserter.cpp
   PseudoSourceValue.cpp
   RegAllocLinearScan.cpp
@@ -56,10 +58,12 @@ add_llvm_library(LLVMCodeGen
   ShrinkWrapping.cpp
   SimpleRegisterCoalescing.cpp
   SjLjEHPrepare.cpp
+  SlotIndexes.cpp
   Spiller.cpp
   StackProtector.cpp
   StackSlotColoring.cpp
   StrongPHIElimination.cpp
+  TailDuplication.cpp
   TargetInstrInfoImpl.cpp
   TwoAddressInstructionPass.cpp
   UnreachableBlockElim.cpp
diff --git a/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp b/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp
index 383098e..e9844d8 100644
--- a/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp
@@ -24,7 +24,7 @@
 #include "llvm/ADT/Statistic.h"
 using namespace llvm;
 
-STATISTIC(NumHeaderAligned, "Number of loop header aligned");
+STATISTIC(NumLoopsAligned,  "Number of loops aligned");
 STATISTIC(NumIntraElim,     "Number of intra loop branches eliminated");
 STATISTIC(NumIntraMoved,    "Number of intra loop branches moved");
 
@@ -34,17 +34,6 @@ namespace {
     const TargetInstrInfo *TII;
     const TargetLowering  *TLI;
 
-    /// ChangedMBBs - BBs which are modified by OptimizeIntraLoopEdges.
-    SmallPtrSet<MachineBasicBlock*, 8> ChangedMBBs;
-
-    /// UncondJmpMBBs - A list of BBs which are in loops and end with
-    /// unconditional branches.
-    SmallVector<std::pair<MachineBasicBlock*,MachineBasicBlock*>, 4>
-    UncondJmpMBBs;
-
-    /// LoopHeaders - A list of BBs which are loop headers.
-    SmallVector<MachineBasicBlock*, 4> LoopHeaders;
-
   public:
     static char ID;
     CodePlacementOpt() : MachineFunctionPass(&ID) {}
@@ -61,10 +50,20 @@ namespace {
     }
 
   private:
-    bool OptimizeIntraLoopEdges();
-    bool HeaderShouldBeAligned(MachineBasicBlock *MBB, MachineLoop *L,
-                               SmallPtrSet<MachineBasicBlock*, 4> &DoNotAlign);
+    bool HasFallthrough(MachineBasicBlock *MBB);
+    bool HasAnalyzableTerminator(MachineBasicBlock *MBB);
+    void Splice(MachineFunction &MF,
+                MachineFunction::iterator InsertPt,
+                MachineFunction::iterator Begin,
+                MachineFunction::iterator End);
+    bool EliminateUnconditionalJumpsToTop(MachineFunction &MF,
+                                          MachineLoop *L);
+    bool MoveDiscontiguousLoopBlocks(MachineFunction &MF,
+                                     MachineLoop *L);
+    bool OptimizeIntraLoopEdgesInLoopNest(MachineFunction &MF, MachineLoop *L);
+    bool OptimizeIntraLoopEdges(MachineFunction &MF);
     bool AlignLoops(MachineFunction &MF);
+    bool AlignLoop(MachineFunction &MF, MachineLoop *L, unsigned Align);
   };
 
   char CodePlacementOpt::ID = 0;
@@ -74,214 +73,298 @@ FunctionPass *llvm::createCodePlacementOptPass() {
   return new CodePlacementOpt();
 }
 
-/// OptimizeBackEdges - Place loop back edges to move unconditional branches
-/// out of the loop.
-///
-///       A:
-///       ...
-///       <fallthrough to B>
-///
-///       B:  --> loop header
-///       ...
-///       jcc <cond> C, [exit]
-///
-///       C:
-///       ...
-///       jmp B
+/// HasFallthrough - Test whether the given branch has a fallthrough, either as
+/// a plain fallthrough or as a fallthrough case of a conditional branch.
 ///
-/// ==>
-///
-///       A:
-///       ...
-///       jmp B
+bool CodePlacementOpt::HasFallthrough(MachineBasicBlock *MBB) {
+  MachineBasicBlock *TBB = 0, *FBB = 0;
+  SmallVector<MachineOperand, 4> Cond;
+  if (TII->AnalyzeBranch(*MBB, TBB, FBB, Cond))
+    return false;
+  // This conditional branch has no fallthrough.
+  if (FBB)
+    return false;
+  // An unconditional branch has no fallthrough.
+  if (Cond.empty() && TBB)
+    return false;
+  // It has a fallthrough.
+  return true;
+}
+
+/// HasAnalyzableTerminator - Test whether AnalyzeBranch will succeed on MBB.
+/// This is called before major changes are begun to test whether it will be
+/// possible to complete the changes.
 ///
-///       C:  --> new loop header
-///       ...
-///       <fallthough to B>
-///       
-///       B:
-///       ...
-///       jcc <cond> C, [exit]
+/// Target-specific code is hereby encouraged to make AnalyzeBranch succeed
+/// whenever possible.
 ///
-bool CodePlacementOpt::OptimizeIntraLoopEdges() {
-  if (!TLI->shouldOptimizeCodePlacement())
+bool CodePlacementOpt::HasAnalyzableTerminator(MachineBasicBlock *MBB) {
+  // Conservatively ignore EH landing pads.
+  if (MBB->isLandingPad()) return false;
+
+  // Ignore blocks which look like they might have EH-related control flow.
+  // At the time of this writing, there are blocks which AnalyzeBranch
+  // thinks end in single unconditional branches, yet which have two CFG
+  // successors. Code in this file is not prepared to reason about such things.
+  if (!MBB->empty() && MBB->back().getOpcode() == TargetInstrInfo::EH_LABEL)
+    return false;
+
+  // Aggressively handle return blocks and similar constructs.
+  if (MBB->succ_empty()) return true;
+
+  // Ask the target's AnalyzeBranch if it can handle this block.
+  MachineBasicBlock *TBB = 0, *FBB = 0;
+  SmallVector<MachineOperand, 4> Cond;
+  // Make sure the terminator is understood.
+  if (TII->AnalyzeBranch(*MBB, TBB, FBB, Cond))
+    return false;
+  // Make sure we have the option of reversing the condition.
+  if (!Cond.empty() && TII->ReverseBranchCondition(Cond))
     return false;
+  return true;
+}
+
+/// Splice - Move the sequence of instructions [Begin,End) to just before
+/// InsertPt. Update branch instructions as needed to account for broken
+/// fallthrough edges and to take advantage of newly exposed fallthrough
+/// opportunities.
+///
+void CodePlacementOpt::Splice(MachineFunction &MF,
+                              MachineFunction::iterator InsertPt,
+                              MachineFunction::iterator Begin,
+                              MachineFunction::iterator End) {
+  assert(Begin != MF.begin() && End != MF.begin() && InsertPt != MF.begin() &&
+         "Splice can't change the entry block!");
+  MachineFunction::iterator OldBeginPrior = prior(Begin);
+  MachineFunction::iterator OldEndPrior = prior(End);
+
+  MF.splice(InsertPt, Begin, End);
+
+  prior(Begin)->updateTerminator();
+  OldBeginPrior->updateTerminator();
+  OldEndPrior->updateTerminator();
+}
 
+/// EliminateUnconditionalJumpsToTop - Move blocks which unconditionally jump
+/// to the loop top to the top of the loop so that they have a fall through.
+/// This can introduce a branch on entry to the loop, but it can eliminate a
+/// branch within the loop. See the @simple case in
+/// test/CodeGen/X86/loop_blocks.ll for an example of this.
+bool CodePlacementOpt::EliminateUnconditionalJumpsToTop(MachineFunction &MF,
+                                                        MachineLoop *L) {
   bool Changed = false;
-  for (unsigned i = 0, e = UncondJmpMBBs.size(); i != e; ++i) {
-    MachineBasicBlock *MBB = UncondJmpMBBs[i].first;
-    MachineBasicBlock *SuccMBB = UncondJmpMBBs[i].second;
-    MachineLoop *L = MLI->getLoopFor(MBB);
-    assert(L && "BB is expected to be in a loop!");
-
-    if (ChangedMBBs.count(MBB)) {
-      // BB has been modified, re-analyze.
-      MachineBasicBlock *TBB = 0, *FBB = 0;
-      SmallVector<MachineOperand, 4> Cond;
-      if (TII->AnalyzeBranch(*MBB, TBB, FBB, Cond) || !Cond.empty())
+  MachineBasicBlock *TopMBB = L->getTopBlock();
+
+  bool BotHasFallthrough = HasFallthrough(L->getBottomBlock());
+
+  if (TopMBB == MF.begin() ||
+      HasAnalyzableTerminator(prior(MachineFunction::iterator(TopMBB)))) {
+  new_top:
+    for (MachineBasicBlock::pred_iterator PI = TopMBB->pred_begin(),
+         PE = TopMBB->pred_end(); PI != PE; ++PI) {
+      MachineBasicBlock *Pred = *PI;
+      if (Pred == TopMBB) continue;
+      if (HasFallthrough(Pred)) continue;
+      if (!L->contains(Pred)) continue;
+
+      // Verify that we can analyze all the loop entry edges before beginning
+      // any changes which will require us to be able to analyze them.
+      if (Pred == MF.begin())
         continue;
-      if (MLI->getLoopFor(TBB) != L || TBB->isLandingPad())
+      if (!HasAnalyzableTerminator(Pred))
+        continue;
+      if (!HasAnalyzableTerminator(prior(MachineFunction::iterator(Pred))))
         continue;
-      SuccMBB = TBB;
-    } else {
-      assert(MLI->getLoopFor(SuccMBB) == L &&
-             "Successor is not in the same loop!");
-    }
 
-    if (MBB->isLayoutSuccessor(SuccMBB)) {
-      // Successor is right after MBB, just eliminate the unconditional jmp.
-      // Can this happen?
-      TII->RemoveBranch(*MBB);
-      ChangedMBBs.insert(MBB);
-      ++NumIntraElim;
+      // Move the block.
       Changed = true;
-      continue;
-    }
 
-    // Now check if the predecessor is fallthrough from any BB. If there is,
-    // that BB should be from outside the loop since edge will become a jmp.
-    bool OkToMove = true;
-    MachineBasicBlock *FtMBB = 0, *FtTBB = 0, *FtFBB = 0;
-    SmallVector<MachineOperand, 4> FtCond;    
-    for (MachineBasicBlock::pred_iterator PI = SuccMBB->pred_begin(),
-           PE = SuccMBB->pred_end(); PI != PE; ++PI) {
-      MachineBasicBlock *PredMBB = *PI;
-      if (PredMBB->isLayoutSuccessor(SuccMBB)) {
-        if (TII->AnalyzeBranch(*PredMBB, FtTBB, FtFBB, FtCond)) {
-          OkToMove = false;
+      // Move it and all the blocks that can reach it via fallthrough edges
+      // exclusively, to keep existing fallthrough edges intact.
+      MachineFunction::iterator Begin = Pred;
+      MachineFunction::iterator End = next(Begin);
+      while (Begin != MF.begin()) {
+        MachineFunction::iterator Prior = prior(Begin);
+        if (Prior == MF.begin())
+          break;
+        // Stop when a non-fallthrough edge is found.
+        if (!HasFallthrough(Prior))
+          break;
+        // Stop if a block which could fall-through out of the loop is found.
+        if (Prior->isSuccessor(End))
+          break;
+        // If we've reached the top, stop scanning.
+        if (Prior == MachineFunction::iterator(TopMBB)) {
+          // We know top currently has a fall through (because we just checked
+          // it) which would be lost if we do the transformation, so it isn't
+          // worthwhile to do the transformation unless it would expose a new
+          // fallthrough edge.
+          if (!Prior->isSuccessor(End))
+            goto next_pred;
+          // Otherwise we can stop scanning and proceed to move the blocks.
           break;
         }
-        if (!FtTBB)
-          FtTBB = SuccMBB;
-        else if (!FtFBB) {
-          assert(FtFBB != SuccMBB && "Unexpected control flow!");
-          FtFBB = SuccMBB;
-        }
-        
-        // A fallthrough.
-        FtMBB = PredMBB;
-        MachineLoop *PL = MLI->getLoopFor(PredMBB);
-        if (PL && (PL == L || PL->getLoopDepth() >= L->getLoopDepth()))
-          OkToMove = false;
-
-        break;
+        // If we hit a switch or something complicated, don't move anything
+        // for this predecessor.
+        if (!HasAnalyzableTerminator(prior(MachineFunction::iterator(Prior))))
+          break;
+        // Ok, the block prior to Begin will be moved along with the rest.
+        // Extend the range to include it.
+        Begin = Prior;
+        ++NumIntraMoved;
       }
+
+      // Move the blocks.
+      Splice(MF, TopMBB, Begin, End);
+
+      // Update TopMBB.
+      TopMBB = L->getTopBlock();
+
+      // We have a new loop top. Iterate on it. We shouldn't have to do this
+      // too many times if BranchFolding has done a reasonable job.
+      goto new_top;
+    next_pred:;
     }
+  }
+
+  // If the loop previously didn't exit with a fall-through and it now does,
+  // we eliminated a branch.
+  if (Changed &&
+      !BotHasFallthrough &&
+      HasFallthrough(L->getBottomBlock())) {
+    ++NumIntraElim;
+    BotHasFallthrough = true;
+  }
+
+  return Changed;
+}
+
+/// MoveDiscontiguousLoopBlocks - Move any loop blocks that are not in the
+/// portion of the loop contiguous with the header. This usually makes the loop
+/// contiguous, provided that AnalyzeBranch can handle all the relevant
+/// branching. See the @cfg_islands case in test/CodeGen/X86/loop_blocks.ll
+/// for an example of this.
+bool CodePlacementOpt::MoveDiscontiguousLoopBlocks(MachineFunction &MF,
+                                                   MachineLoop *L) {
+  bool Changed = false;
+  MachineBasicBlock *TopMBB = L->getTopBlock();
+  MachineBasicBlock *BotMBB = L->getBottomBlock();
+
+  // Determine a position to move orphaned loop blocks to. If TopMBB is not
+  // entered via fallthrough and BotMBB is exited via fallthrough, prepend them
+  // to the top of the loop to avoid losing that fallthrough. Otherwise append
+  // them to the bottom, even if it previously had a fallthrough, on the theory
+  // that it's worth an extra branch to keep the loop contiguous.
+  MachineFunction::iterator InsertPt = next(MachineFunction::iterator(BotMBB));
+  bool InsertAtTop = false;
+  if (TopMBB != MF.begin() &&
+      !HasFallthrough(prior(MachineFunction::iterator(TopMBB))) &&
+      HasFallthrough(BotMBB)) {
+    InsertPt = TopMBB;
+    InsertAtTop = true;
+  }
+
+  // Keep a record of which blocks are in the portion of the loop contiguous
+  // with the loop header.
+  SmallPtrSet<MachineBasicBlock *, 8> ContiguousBlocks;
+  for (MachineFunction::iterator I = TopMBB,
+       E = next(MachineFunction::iterator(BotMBB)); I != E; ++I)
+    ContiguousBlocks.insert(I);
+
+  // Find non-contiguous blocks and fix them.
+  if (InsertPt != MF.begin() && HasAnalyzableTerminator(prior(InsertPt)))
+    for (MachineLoop::block_iterator BI = L->block_begin(), BE = L->block_end();
+         BI != BE; ++BI) {
+      MachineBasicBlock *BB = *BI;
+
+      // Verify that we can analyze all the loop entry edges before beginning
+      // any changes which will require us to be able to analyze them.
+      if (!HasAnalyzableTerminator(BB))
+        continue;
+      if (!HasAnalyzableTerminator(prior(MachineFunction::iterator(BB))))
+        continue;
 
-    if (!OkToMove)
-      continue;
-
-    // Is it profitable? If SuccMBB can fallthrough itself, that can be changed
-    // into a jmp.
-    MachineBasicBlock *TBB = 0, *FBB = 0;
-    SmallVector<MachineOperand, 4> Cond;
-    if (TII->AnalyzeBranch(*SuccMBB, TBB, FBB, Cond))
-      continue;
-    if (!TBB && Cond.empty())
-      TBB = next(MachineFunction::iterator(SuccMBB));
-    else if (!FBB && !Cond.empty())
-      FBB = next(MachineFunction::iterator(SuccMBB));
-
-    // This calculate the cost of the transformation. Also, it finds the *only*
-    // intra-loop edge if there is one.
-    int Cost = 0;
-    bool HasOneIntraSucc = true;
-    MachineBasicBlock *IntraSucc = 0;
-    for (MachineBasicBlock::succ_iterator SI = SuccMBB->succ_begin(),
-           SE = SuccMBB->succ_end(); SI != SE; ++SI) {
-      MachineBasicBlock *SSMBB = *SI;
-      if (MLI->getLoopFor(SSMBB) == L) {
-        if (!IntraSucc)
-          IntraSucc = SSMBB;
-        else
-          HasOneIntraSucc = false;
+      // If the layout predecessor is part of the loop, this block will be
+      // processed along with it. This keeps them in their relative order.
+      if (BB != MF.begin() &&
+          L->contains(prior(MachineFunction::iterator(BB))))
+        continue;
+
+      // Check to see if this block is already contiguous with the main
+      // portion of the loop.
+      if (!ContiguousBlocks.insert(BB))
+        continue;
+
+      // Move the block.
+      Changed = true;
+
+      // Process this block and all loop blocks contiguous with it, to keep
+      // them in their relative order.
+      MachineFunction::iterator Begin = BB;
+      MachineFunction::iterator End = next(MachineFunction::iterator(BB));
+      for (; End != MF.end(); ++End) {
+        if (!L->contains(End)) break;
+        if (!HasAnalyzableTerminator(End)) break;
+        ContiguousBlocks.insert(End);
+        ++NumIntraMoved;
       }
 
-      if (SuccMBB->isLayoutSuccessor(SSMBB))
-        // This will become a jmp.
-        ++Cost;
-      else if (MBB->isLayoutSuccessor(SSMBB)) {
-        // One of the successor will become the new fallthrough.
-        if (SSMBB == FBB) {
-          FBB = 0;
-          --Cost;
-        } else if (!FBB && SSMBB == TBB && Cond.empty()) {
-          TBB = 0;
-          --Cost;
-        } else if (!Cond.empty() && !TII->ReverseBranchCondition(Cond)) {
-          assert(SSMBB == TBB);
-          TBB = FBB;
-          FBB = 0;
-          --Cost;
+      // If we're inserting at the bottom of the loop, and the code we're
+      // moving originally had fall-through successors, bring the successors
+      // up with the loop blocks to preserve the fall-through edges.
+      if (!InsertAtTop)
+        for (; End != MF.end(); ++End) {
+          if (L->contains(End)) break;
+          if (!HasAnalyzableTerminator(End)) break;
+          if (!HasFallthrough(prior(End))) break;
         }
-      }
-    }
-    if (Cost)
-      continue;
-
-    // Now, let's move the successor to below the BB to eliminate the jmp.
-    SuccMBB->moveAfter(MBB);
-    TII->RemoveBranch(*MBB);
-    TII->RemoveBranch(*SuccMBB);
-    if (TBB)
-      TII->InsertBranch(*SuccMBB, TBB, FBB, Cond);
-    ChangedMBBs.insert(MBB);
-    ChangedMBBs.insert(SuccMBB);
-    if (FtMBB) {
-      TII->RemoveBranch(*FtMBB);
-      TII->InsertBranch(*FtMBB, FtTBB, FtFBB, FtCond);
-      ChangedMBBs.insert(FtMBB);
-    }
-    Changed = true;
-
-    // If BB is the loop latch, we may have a new loop headr.
-    if (MBB == L->getLoopLatch()) {
-      assert(MLI->isLoopHeader(SuccMBB) &&
-             "Only succ of loop latch is not the header?");
-      if (HasOneIntraSucc && IntraSucc)
-        std::replace(LoopHeaders.begin(),LoopHeaders.end(), SuccMBB, IntraSucc);
+
+      // Move the blocks. This may invalidate TopMBB and/or BotMBB, but
+      // we don't need them anymore at this point.
+      Splice(MF, InsertPt, Begin, End);
     }
-  }
 
-  ++NumIntraMoved;
   return Changed;
 }
 
-/// HeaderShouldBeAligned - Return true if the specified loop header block
-/// should be aligned. For now, we will not align it if all the predcessors
-/// (i.e. loop back edges) are laid out above the header. FIXME: Do not
-/// align small loops.
-bool
-CodePlacementOpt::HeaderShouldBeAligned(MachineBasicBlock *MBB, MachineLoop *L,
-                               SmallPtrSet<MachineBasicBlock*, 4> &DoNotAlign) {
-  if (DoNotAlign.count(MBB))
-    return false;
+/// OptimizeIntraLoopEdgesInLoopNest - Reposition loop blocks to minimize
+/// intra-loop branching and to form contiguous loops.
+///
+/// This code takes the approach of making minor changes to the existing
+/// layout to fix specific loop-oriented problems. Also, it depends on
+/// AnalyzeBranch, which can't understand complex control instructions.
+///
+bool CodePlacementOpt::OptimizeIntraLoopEdgesInLoopNest(MachineFunction &MF,
+                                                        MachineLoop *L) {
+  bool Changed = false;
 
-  bool BackEdgeBelow = false;
-  for (MachineBasicBlock::pred_iterator PI = MBB->pred_begin(),
-         PE = MBB->pred_end(); PI != PE; ++PI) {
-    MachineBasicBlock *PredMBB = *PI;
-    if (PredMBB == MBB || PredMBB->getNumber() > MBB->getNumber()) {
-      BackEdgeBelow = true;
-      break;
-    }
-  }
+  // Do optimization for nested loops.
+  for (MachineLoop::iterator I = L->begin(), E = L->end(); I != E; ++I)
+    Changed |= OptimizeIntraLoopEdgesInLoopNest(MF, *I);
 
-  if (!BackEdgeBelow)
-    return false;
+  // Do optimization for this loop.
+  Changed |= EliminateUnconditionalJumpsToTop(MF, L);
+  Changed |= MoveDiscontiguousLoopBlocks(MF, L);
 
-  // Ok, we are going to align this loop header. If it's an inner loop,
-  // do not align its outer loop.
-  MachineBasicBlock *PreHeader = L->getLoopPreheader();
-  if (PreHeader) {
-    MachineLoop *L = MLI->getLoopFor(PreHeader);
-    if (L) {
-      MachineBasicBlock *HeaderBlock = L->getHeader();
-      HeaderBlock->setAlignment(0);
-      DoNotAlign.insert(HeaderBlock);
-    }
-  }
-  return true;
+  return Changed;
+}
+
+/// OptimizeIntraLoopEdges - Reposition loop blocks to minimize
+/// intra-loop branching and to form contiguous loops.
+///
+bool CodePlacementOpt::OptimizeIntraLoopEdges(MachineFunction &MF) {
+  bool Changed = false;
+
+  if (!TLI->shouldOptimizeCodePlacement())
+    return Changed;
+
+  // Do optimization for each loop in the function.
+  for (MachineLoopInfo::iterator I = MLI->begin(), E = MLI->end();
+       I != E; ++I)
+    if (!(*I)->getParentLoop())
+      Changed |= OptimizeIntraLoopEdgesInLoopNest(MF, *I);
+
+  return Changed;
 }
 
 /// AlignLoops - Align loop headers to target preferred alignments.
@@ -295,25 +378,28 @@ bool CodePlacementOpt::AlignLoops(MachineFunction &MF) {
   if (!Align)
     return false;  // Don't care about loop alignment.
 
-  // Make sure blocks are numbered in order
-  MF.RenumberBlocks();
+  bool Changed = false;
+
+  for (MachineLoopInfo::iterator I = MLI->begin(), E = MLI->end();
+       I != E; ++I)
+    Changed |= AlignLoop(MF, *I, Align);
+
+  return Changed;
+}
 
+/// AlignLoop - Align loop headers to target preferred alignments.
+///
+bool CodePlacementOpt::AlignLoop(MachineFunction &MF, MachineLoop *L,
+                                 unsigned Align) {
   bool Changed = false;
-  SmallPtrSet<MachineBasicBlock*, 4> DoNotAlign;
-  for (unsigned i = 0, e = LoopHeaders.size(); i != e; ++i) {
-    MachineBasicBlock *HeaderMBB = LoopHeaders[i];
-    MachineBasicBlock *PredMBB = prior(MachineFunction::iterator(HeaderMBB));
-    MachineLoop *L = MLI->getLoopFor(HeaderMBB);
-    if (L == MLI->getLoopFor(PredMBB))
-      // If previously BB is in the same loop, don't align this BB. We want
-      // to prevent adding noop's inside a loop.
-      continue;
-    if (HeaderShouldBeAligned(HeaderMBB, L, DoNotAlign)) {
-      HeaderMBB->setAlignment(Align);
-      Changed = true;
-      ++NumHeaderAligned;
-    }
-  }
+
+  // Do alignment for nested loops.
+  for (MachineLoop::iterator I = L->begin(), E = L->end(); I != E; ++I)
+    Changed |= AlignLoop(MF, *I, Align);
+
+  L->getTopBlock()->setAlignment(Align);
+  Changed = true;
+  ++NumLoopsAligned;
 
   return Changed;
 }
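
Because the recursion descends before aligning, innermost loops receive their alignment first; for a nest L0 { L1 { L2 } } the calls unwind as in this illustrative trace (not part of the patch):

    // AlignLoop(MF, L0, Align)
    //   AlignLoop(MF, L1, Align)
    //     AlignLoop(MF, L2, Align)
    //       L2->getTopBlock()->setAlignment(Align);
    //     L1->getTopBlock()->setAlignment(Align);
    //   L0->getTopBlock()->setAlignment(Align);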
@@ -326,33 +412,9 @@ bool CodePlacementOpt::runOnMachineFunction(MachineFunction &MF) {
   TLI = MF.getTarget().getTargetLowering();
   TII = MF.getTarget().getInstrInfo();
 
-  // Analyze the BBs first and keep track of loop headers and BBs that
-  // end with an unconditional jmp to another block in the same loop.
-  for (MachineFunction::iterator I = MF.begin(), E = MF.end(); I != E; ++I) {
-    MachineBasicBlock *MBB = I;
-    if (MBB->isLandingPad())
-      continue;
-    MachineLoop *L = MLI->getLoopFor(MBB);
-    if (!L)
-      continue;
-    if (MLI->isLoopHeader(MBB))
-      LoopHeaders.push_back(MBB);
-
-    MachineBasicBlock *TBB = 0, *FBB = 0;
-    SmallVector<MachineOperand, 4> Cond;
-    if (TII->AnalyzeBranch(*MBB, TBB, FBB, Cond) || !Cond.empty())
-      continue;
-    if (MLI->getLoopFor(TBB) == L && !TBB->isLandingPad())
-      UncondJmpMBBs.push_back(std::make_pair(MBB, TBB));
-  }
-
-  bool Changed = OptimizeIntraLoopEdges();
+  bool Changed = OptimizeIntraLoopEdges(MF);
 
   Changed |= AlignLoops(MF);
 
-  ChangedMBBs.clear();
-  UncondJmpMBBs.clear();
-  LoopHeaders.clear();
-
   return Changed;
 }
diff --git a/libclamav/c++/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp b/libclamav/c++/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp
new file mode 100644
index 0000000..1b39fec
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp
@@ -0,0 +1,539 @@
+//===----- CriticalAntiDepBreaker.cpp - Anti-dep breaker -----------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements the CriticalAntiDepBreaker class, which
+// implements register anti-dependence breaking along a block's
+// critical path during post-RA scheduling.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "post-RA-sched"
+#include "CriticalAntiDepBreaker.h"
+#include "llvm/CodeGen/MachineBasicBlock.h"
+#include "llvm/CodeGen/MachineFrameInfo.h"
+#include "llvm/Target/TargetMachine.h"
+#include "llvm/Target/TargetRegisterInfo.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/raw_ostream.h"
+
+using namespace llvm;
+
+CriticalAntiDepBreaker::
+CriticalAntiDepBreaker(MachineFunction& MFi) : 
+  AntiDepBreaker(), MF(MFi),
+  MRI(MF.getRegInfo()),
+  TRI(MF.getTarget().getRegisterInfo()),
+  AllocatableSet(TRI->getAllocatableSet(MF))
+{
+}
+
+CriticalAntiDepBreaker::~CriticalAntiDepBreaker() {
+}
+
+void CriticalAntiDepBreaker::StartBlock(MachineBasicBlock *BB) {
+  // Clear out the register class data.
+  std::fill(Classes, array_endof(Classes),
+            static_cast<const TargetRegisterClass *>(0));
+
+  // Initialize the indices to indicate that no registers are live.
+  std::fill(KillIndices, array_endof(KillIndices), ~0u);
+  std::fill(DefIndices, array_endof(DefIndices), BB->size());
+
+  // Clear "do not change" set.
+  KeepRegs.clear();
+
+  bool IsReturnBlock = (!BB->empty() && BB->back().getDesc().isReturn());
+
+  // Determine the live-out physregs for this block.
+  if (IsReturnBlock) {
+    // In a return block, examine the function live-out regs.
+    for (MachineRegisterInfo::liveout_iterator I = MRI.liveout_begin(),
+         E = MRI.liveout_end(); I != E; ++I) {
+      unsigned Reg = *I;
+      Classes[Reg] = reinterpret_cast<TargetRegisterClass *>(-1);
+      KillIndices[Reg] = BB->size();
+      DefIndices[Reg] = ~0u;
+      // Repeat, for all aliases.
+      for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias) {
+        unsigned AliasReg = *Alias;
+        Classes[AliasReg] = reinterpret_cast<TargetRegisterClass *>(-1);
+        KillIndices[AliasReg] = BB->size();
+        DefIndices[AliasReg] = ~0u;
+      }
+    }
+  } else {
+    // In a non-return block, examine the live-in regs of all successors.
+    for (MachineBasicBlock::succ_iterator SI = BB->succ_begin(),
+         SE = BB->succ_end(); SI != SE; ++SI)
+      for (MachineBasicBlock::livein_iterator I = (*SI)->livein_begin(),
+           E = (*SI)->livein_end(); I != E; ++I) {
+        unsigned Reg = *I;
+        Classes[Reg] = reinterpret_cast<TargetRegisterClass *>(-1);
+        KillIndices[Reg] = BB->size();
+        DefIndices[Reg] = ~0u;
+        // Repeat, for all aliases.
+        for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias) {
+          unsigned AliasReg = *Alias;
+          Classes[AliasReg] = reinterpret_cast<TargetRegisterClass *>(-1);
+          KillIndices[AliasReg] = BB->size();
+          DefIndices[AliasReg] = ~0u;
+        }
+      }
+  }
+
+  // Mark live-out callee-saved registers. In a return block this is
+  // all callee-saved registers. In non-return this is any
+  // callee-saved register that is not saved in the prolog.
+  const MachineFrameInfo *MFI = MF.getFrameInfo();
+  BitVector Pristine = MFI->getPristineRegs(BB);
+  for (const unsigned *I = TRI->getCalleeSavedRegs(); *I; ++I) {
+    unsigned Reg = *I;
+    if (!IsReturnBlock && !Pristine.test(Reg)) continue;
+    Classes[Reg] = reinterpret_cast<TargetRegisterClass *>(-1);
+    KillIndices[Reg] = BB->size();
+    DefIndices[Reg] = ~0u;
+    // Repeat, for all aliases.
+    for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias) {
+      unsigned AliasReg = *Alias;
+      Classes[AliasReg] = reinterpret_cast<TargetRegisterClass *>(-1);
+      KillIndices[AliasReg] = BB->size();
+      DefIndices[AliasReg] = ~0u;
+    }
+  }
+}
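
StartBlock seeds KillIndices and DefIndices so that, for every physical register, exactly one of the two entries is ~0u, and liveness during the bottom-up scan can be read off directly. A helper expressing that invariant (a sketch; IsLive is not part of this class's interface):

    bool IsLive(unsigned Reg) const {
      // KillIndices[Reg] is valid iff the register is live at the current
      // scan point; DefIndices[Reg] is valid iff it is not. Never both.
      assert((KillIndices[Reg] == ~0u) != (DefIndices[Reg] == ~0u) &&
             "Kill and Def maps aren't consistent!");
      return KillIndices[Reg] != ~0u;
    }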
+
+void CriticalAntiDepBreaker::FinishBlock() {
+  RegRefs.clear();
+  KeepRegs.clear();
+}
+
+void CriticalAntiDepBreaker::Observe(MachineInstr *MI, unsigned Count,
+                                     unsigned InsertPosIndex) {
+  assert(Count < InsertPosIndex && "Instruction index out of expected range!");
+
+  // Any register which was defined within the previous scheduling region
+  // may have been rescheduled and its lifetime may overlap with registers
+  // in ways not reflected in our current liveness state. For each such
+  // register, adjust the liveness state to be conservatively correct.
+  for (unsigned Reg = 0; Reg != TargetRegisterInfo::FirstVirtualRegister; ++Reg)
+    if (DefIndices[Reg] < InsertPosIndex && DefIndices[Reg] >= Count) {
+      assert(KillIndices[Reg] == ~0u && "Clobbered register is live!");
+      // Mark this register to be non-renamable.
+      Classes[Reg] = reinterpret_cast<TargetRegisterClass *>(-1);
+      // Move the def index to the end of the previous region, to reflect
+      // that the def could theoretically have been scheduled at the end.
+      DefIndices[Reg] = InsertPosIndex;
+    }
+
+  PrescanInstruction(MI);
+  ScanInstruction(MI, Count);
+}
+
+/// CriticalPathStep - Return the next SUnit after SU on the bottom-up
+/// critical path.
+static SDep *CriticalPathStep(SUnit *SU) {
+  SDep *Next = 0;
+  unsigned NextDepth = 0;
+  // Find the predecessor edge with the greatest depth.
+  for (SUnit::pred_iterator P = SU->Preds.begin(), PE = SU->Preds.end();
+       P != PE; ++P) {
+    SUnit *PredSU = P->getSUnit();
+    unsigned PredLatency = P->getLatency();
+    unsigned PredTotalLatency = PredSU->getDepth() + PredLatency;
+    // In the case of a latency tie, prefer an anti-dependency edge over
+    // other types of edges.
+    if (NextDepth < PredTotalLatency ||
+        (NextDepth == PredTotalLatency && P->getKind() == SDep::Anti)) {
+      NextDepth = PredTotalLatency;
+      Next = &*P;
+    }
+  }
+  return Next;
+}
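
BreakAntiDependencies (below) uses this helper to follow the critical path from its bottom-most node upward; stripped to essentials, the walk looks like this (sketch, with Max being the deepest node as computed below):

    for (SUnit *SU = Max; SU != 0; ) {
      SDep *Edge = CriticalPathStep(SU);    // deepest predecessor edge
      SU = Edge ? Edge->getSUnit() : 0;     // 0 once the top is reached
    }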
+
+void CriticalAntiDepBreaker::PrescanInstruction(MachineInstr *MI) {
+  // Scan the register operands for this instruction and update
+  // Classes and RegRefs.
+  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+    MachineOperand &MO = MI->getOperand(i);
+    if (!MO.isReg()) continue;
+    unsigned Reg = MO.getReg();
+    if (Reg == 0) continue;
+    const TargetRegisterClass *NewRC = 0;
+    
+    if (i < MI->getDesc().getNumOperands())
+      NewRC = MI->getDesc().OpInfo[i].getRegClass(TRI);
+
+    // For now, only allow the register to be changed if its register
+    // class is consistent across all uses.
+    if (!Classes[Reg] && NewRC)
+      Classes[Reg] = NewRC;
+    else if (!NewRC || Classes[Reg] != NewRC)
+      Classes[Reg] = reinterpret_cast<TargetRegisterClass *>(-1);
+
+    // Now check for aliases.
+    for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias) {
+      // If an alias of the reg is used during the live range, give up.
+      // Note that this allows us to skip checking if AntiDepReg
+      // overlaps with any of the aliases, among other things.
+      unsigned AliasReg = *Alias;
+      if (Classes[AliasReg]) {
+        Classes[AliasReg] = reinterpret_cast<TargetRegisterClass *>(-1);
+        Classes[Reg] = reinterpret_cast<TargetRegisterClass *>(-1);
+      }
+    }
+
+    // If we're still willing to consider this register, note the reference.
+    if (Classes[Reg] != reinterpret_cast<TargetRegisterClass *>(-1))
+      RegRefs.insert(std::make_pair(Reg, &MO));
+
+    // It's not safe to change register allocation for source operands of
+    // instructions that have special allocation requirements.
+    if (MO.isUse() && MI->getDesc().hasExtraSrcRegAllocReq()) {
+      if (KeepRegs.insert(Reg)) {
+        for (const unsigned *Subreg = TRI->getSubRegisters(Reg);
+             *Subreg; ++Subreg)
+          KeepRegs.insert(*Subreg);
+      }
+    }
+  }
+}
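
The Classes[] update above implements a three-state lattice per register: null ("not yet seen"), a concrete class ("seen in exactly one register class"), and -1 ("seen inconsistently; not renamable"). The merge rule, factored out as an illustrative sketch:

    static const TargetRegisterClass *
    mergeClass(const TargetRegisterClass *Old,
               const TargetRegisterClass *New) {
      if (!Old && New)
        return New;              // first sighting, with a known class
      if (!New || Old != New)    // unknown class, or a conflicting one
        return reinterpret_cast<const TargetRegisterClass *>(-1);
      return Old;                // consistent with previous sightings
    }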
+
+void CriticalAntiDepBreaker::ScanInstruction(MachineInstr *MI,
+                                             unsigned Count) {
+  // Update liveness.
+  // Proceeding upwards, registers that are def'd but not used in this
+  // instruction are now dead.
+  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+    MachineOperand &MO = MI->getOperand(i);
+    if (!MO.isReg()) continue;
+    unsigned Reg = MO.getReg();
+    if (Reg == 0) continue;
+    if (!MO.isDef()) continue;
+    // Ignore two-addr defs.
+    if (MI->isRegTiedToUseOperand(i)) continue;
+
+    DefIndices[Reg] = Count;
+    KillIndices[Reg] = ~0u;
+    assert(((KillIndices[Reg] == ~0u) !=
+            (DefIndices[Reg] == ~0u)) &&
+           "Kill and Def maps aren't consistent for Reg!");
+    KeepRegs.erase(Reg);
+    Classes[Reg] = 0;
+    RegRefs.erase(Reg);
+    // Repeat, for all subregs.
+    for (const unsigned *Subreg = TRI->getSubRegisters(Reg);
+         *Subreg; ++Subreg) {
+      unsigned SubregReg = *Subreg;
+      DefIndices[SubregReg] = Count;
+      KillIndices[SubregReg] = ~0u;
+      KeepRegs.erase(SubregReg);
+      Classes[SubregReg] = 0;
+      RegRefs.erase(SubregReg);
+    }
+    // Conservatively mark super-registers as unusable.
+    for (const unsigned *Super = TRI->getSuperRegisters(Reg);
+         *Super; ++Super) {
+      unsigned SuperReg = *Super;
+      Classes[SuperReg] = reinterpret_cast<TargetRegisterClass *>(-1);
+    }
+  }
+  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+    MachineOperand &MO = MI->getOperand(i);
+    if (!MO.isReg()) continue;
+    unsigned Reg = MO.getReg();
+    if (Reg == 0) continue;
+    if (!MO.isUse()) continue;
+
+    const TargetRegisterClass *NewRC = 0;
+    if (i < MI->getDesc().getNumOperands())
+      NewRC = MI->getDesc().OpInfo[i].getRegClass(TRI);
+
+    // For now, only allow the register to be changed if its register
+    // class is consistent across all uses.
+    if (!Classes[Reg] && NewRC)
+      Classes[Reg] = NewRC;
+    else if (!NewRC || Classes[Reg] != NewRC)
+      Classes[Reg] = reinterpret_cast<TargetRegisterClass *>(-1);
+
+    RegRefs.insert(std::make_pair(Reg, &MO));
+
+    // It wasn't previously live but now it is, this is a kill.
+    if (KillIndices[Reg] == ~0u) {
+      KillIndices[Reg] = Count;
+      DefIndices[Reg] = ~0u;
+      assert(((KillIndices[Reg] == ~0u) !=
+              (DefIndices[Reg] == ~0u)) &&
+             "Kill and Def maps aren't consistent for Reg!");
+    }
+    // Repeat, for all aliases.
+    for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias) {
+      unsigned AliasReg = *Alias;
+      if (KillIndices[AliasReg] == ~0u) {
+        KillIndices[AliasReg] = Count;
+        DefIndices[AliasReg] = ~0u;
+      }
+    }
+  }
+}
+
+unsigned
+CriticalAntiDepBreaker::findSuitableFreeRegister(unsigned AntiDepReg,
+                                                 unsigned LastNewReg,
+                                                 const TargetRegisterClass *RC) {
+  for (TargetRegisterClass::iterator R = RC->allocation_order_begin(MF),
+       RE = RC->allocation_order_end(MF); R != RE; ++R) {
+    unsigned NewReg = *R;
+    // Don't replace a register with itself.
+    if (NewReg == AntiDepReg) continue;
+    // Don't replace a register with one that was recently used to repair
+    // an anti-dependence with this AntiDepReg, because that would
+    // re-introduce that anti-dependence.
+    if (NewReg == LastNewReg) continue;
+    // If NewReg is dead and NewReg's most recent def is not before
+    // AntiDepReg's kill, it's safe to replace AntiDepReg with NewReg.
+    assert(((KillIndices[AntiDepReg] == ~0u) !=
+            (DefIndices[AntiDepReg] == ~0u)) &&
+           "Kill and Def maps aren't consistent for AntiDepReg!");
+    assert(((KillIndices[NewReg] == ~0u) != (DefIndices[NewReg] == ~0u)) &&
+           "Kill and Def maps aren't consistent for NewReg!");
+    if (KillIndices[NewReg] != ~0u ||
+        Classes[NewReg] == reinterpret_cast<TargetRegisterClass *>(-1) ||
+        KillIndices[AntiDepReg] > DefIndices[NewReg])
+      continue;
+    return NewReg;
+  }
+
+  // No registers are free and available!
+  return 0;
+}
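
The availability test above compares the bottom-up indices, which grow toward the bottom of the block. A worked example of the last condition (illustrative numbers only):

    // index 10:  NewReg = ...              DefIndices[NewReg]      = 10
    // index 12:  ...    = AntiDepReg       KillIndices[AntiDepReg] = 12
    //
    // KillIndices[AntiDepReg] (12) > DefIndices[NewReg] (10): NewReg is
    // redefined inside the range that AntiDepReg covers, so renaming
    // AntiDepReg to NewReg would clobber that def; the candidate is skipped.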
+
+unsigned CriticalAntiDepBreaker::
+BreakAntiDependencies(std::vector<SUnit>& SUnits,
+                      MachineBasicBlock::iterator& Begin,
+                      MachineBasicBlock::iterator& End,
+                      unsigned InsertPosIndex) {
+  // The code below assumes that there is at least one instruction,
+  // so just duck out immediately if the block is empty.
+  if (SUnits.empty()) return 0;
+
+  // Find the node at the bottom of the critical path.
+  SUnit *Max = 0;
+  for (unsigned i = 0, e = SUnits.size(); i != e; ++i) {
+    SUnit *SU = &SUnits[i];
+    if (!Max || SU->getDepth() + SU->Latency > Max->getDepth() + Max->Latency)
+      Max = SU;
+  }
+
+#ifndef NDEBUG
+  {
+    DEBUG(errs() << "Critical path has total latency "
+          << (Max->getDepth() + Max->Latency) << "\n");
+    DEBUG(errs() << "Available regs:");
+    for (unsigned Reg = 0; Reg < TRI->getNumRegs(); ++Reg) {
+      if (KillIndices[Reg] == ~0u)
+        DEBUG(errs() << " " << TRI->getName(Reg));
+    }
+    DEBUG(errs() << '\n');
+  }
+#endif
+
+  // Track progress along the critical path through the SUnit graph as we walk
+  // the instructions.
+  SUnit *CriticalPathSU = Max;
+  MachineInstr *CriticalPathMI = CriticalPathSU->getInstr();
+
+  // Consider this pattern:
+  //   A = ...
+  //   ... = A
+  //   A = ...
+  //   ... = A
+  //   A = ...
+  //   ... = A
+  //   A = ...
+  //   ... = A
+  // There are three anti-dependencies here, and without special care,
+  // we'd break all of them using the same register:
+  //   A = ...
+  //   ... = A
+  //   B = ...
+  //   ... = B
+  //   B = ...
+  //   ... = B
+  //   B = ...
+  //   ... = B
+  // because at each anti-dependence, B is the first register that
+  // isn't A which is free.  This re-introduces anti-dependencies
+  // at all but one of the original anti-dependencies that we were
+  // trying to break.  To avoid this, keep track of the most recent
+  // register that each register was replaced with, and avoid
+  // using it to repair an anti-dependence on the same register.
+  // This lets us produce this:
+  //   A = ...
+  //   ... = A
+  //   B = ...
+  //   ... = B
+  //   C = ...
+  //   ... = C
+  //   B = ...
+  //   ... = B
+  // This still has an anti-dependence on B, but at least it isn't on the
+  // original critical path.
+  //
+  // TODO: If we tracked more than one register here, we could potentially
+  // fix that remaining critical edge too. This is a little more involved,
+  // because unlike the most recent register, less recent registers should
+  // still be considered, though only if no other registers are available.
+  unsigned LastNewReg[TargetRegisterInfo::FirstVirtualRegister] = {};
+
+  // Attempt to break anti-dependence edges on the critical path. Walk the
+  // instructions from the bottom up, tracking information about liveness
+  // as we go to help determine which registers are available.
+  unsigned Broken = 0;
+  unsigned Count = InsertPosIndex - 1;
+  for (MachineBasicBlock::iterator I = End, E = Begin;
+       I != E; --Count) {
+    MachineInstr *MI = --I;
+
+    // Check if this instruction has a dependence on the critical path that
+    // is an anti-dependence that we may be able to break. If it is, set
+    // AntiDepReg to the non-zero register associated with the anti-dependence.
+    //
+    // We limit our attention to the critical path as a heuristic to avoid
+    // breaking anti-dependence edges that aren't going to significantly
+    // impact the overall schedule. There are a limited number of registers
+    // and we want to save them for the important edges.
+    // 
+    // TODO: Instructions with multiple defs could have multiple
+    // anti-dependencies. The current code here only knows how to break one
+    // edge per instruction. Note that we'd have to be able to break all of
+    // the anti-dependencies in an instruction in order to be effective.
+    unsigned AntiDepReg = 0;
+    if (MI == CriticalPathMI) {
+      if (SDep *Edge = CriticalPathStep(CriticalPathSU)) {
+        SUnit *NextSU = Edge->getSUnit();
+
+        // Only consider anti-dependence edges.
+        if (Edge->getKind() == SDep::Anti) {
+          AntiDepReg = Edge->getReg();
+          assert(AntiDepReg != 0 && "Anti-dependence on reg0?");
+          if (!AllocatableSet.test(AntiDepReg))
+            // Don't break anti-dependencies on non-allocatable registers.
+            AntiDepReg = 0;
+          else if (KeepRegs.count(AntiDepReg))
+            // Don't break anti-dependencies if a use down below requires
+            // this exact register.
+            AntiDepReg = 0;
+          else {
+            // If the SUnit has other dependencies on the SUnit that it
+            // anti-depends on, don't bother breaking the anti-dependency
+            // since those edges would prevent such units from being
+            // scheduled past each other regardless.
+            //
+            // Also, if there are dependencies on other SUnits with the
+            // same register as the anti-dependency, don't attempt to
+            // break it.
+            for (SUnit::pred_iterator P = CriticalPathSU->Preds.begin(),
+                 PE = CriticalPathSU->Preds.end(); P != PE; ++P)
+              if (P->getSUnit() == NextSU ?
+                    (P->getKind() != SDep::Anti || P->getReg() != AntiDepReg) :
+                    (P->getKind() == SDep::Data && P->getReg() == AntiDepReg)) {
+                AntiDepReg = 0;
+                break;
+              }
+          }
+        }
+        CriticalPathSU = NextSU;
+        CriticalPathMI = CriticalPathSU->getInstr();
+      } else {
+        // We've reached the end of the critical path.
+        CriticalPathSU = 0;
+        CriticalPathMI = 0;
+      }
+    }
+
+    PrescanInstruction(MI);
+
+    if (MI->getDesc().hasExtraDefRegAllocReq())
+      // If this instruction's defs have special allocation requirement, don't
+      // break this anti-dependency.
+      AntiDepReg = 0;
+    else if (AntiDepReg) {
+      // If this instruction has a use of AntiDepReg, breaking it
+      // is invalid.
+      for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+        MachineOperand &MO = MI->getOperand(i);
+        if (!MO.isReg()) continue;
+        unsigned Reg = MO.getReg();
+        if (Reg == 0) continue;
+        if (MO.isUse() && AntiDepReg == Reg) {
+          AntiDepReg = 0;
+          break;
+        }
+      }
+    }
+
+    // Determine AntiDepReg's register class, if it is live and is
+    // consistently used within a single class.
+    const TargetRegisterClass *RC = AntiDepReg != 0 ? Classes[AntiDepReg] : 0;
+    assert((AntiDepReg == 0 || RC != NULL) &&
+           "Register should be live if it's causing an anti-dependence!");
+    if (RC == reinterpret_cast<TargetRegisterClass *>(-1))
+      AntiDepReg = 0;
+
+    // Look for a suitable register to use to break the anti-dependence.
+    //
+    // TODO: Instead of picking the first free register, consider which might
+    // be the best.
+    if (AntiDepReg != 0) {
+      if (unsigned NewReg = findSuitableFreeRegister(AntiDepReg,
+                                                     LastNewReg[AntiDepReg],
+                                                     RC)) {
+        DEBUG(errs() << "Breaking anti-dependence edge on "
+              << TRI->getName(AntiDepReg)
+              << " with " << RegRefs.count(AntiDepReg) << " references"
+              << " using " << TRI->getName(NewReg) << "!\n");
+
+        // Update the references to the old register to refer to the new
+        // register.
+        std::pair<std::multimap<unsigned, MachineOperand *>::iterator,
+                  std::multimap<unsigned, MachineOperand *>::iterator>
+           Range = RegRefs.equal_range(AntiDepReg);
+        for (std::multimap<unsigned, MachineOperand *>::iterator
+             Q = Range.first, QE = Range.second; Q != QE; ++Q)
+          Q->second->setReg(NewReg);
+
+        // We just went back in time and modified history; the
+        // liveness information for the anti-dependence reg is now
+        // inconsistent. Set the state as if it were dead.
+        Classes[NewReg] = Classes[AntiDepReg];
+        DefIndices[NewReg] = DefIndices[AntiDepReg];
+        KillIndices[NewReg] = KillIndices[AntiDepReg];
+        assert(((KillIndices[NewReg] == ~0u) !=
+                (DefIndices[NewReg] == ~0u)) &&
+               "Kill and Def maps aren't consistent for NewReg!");
+
+        Classes[AntiDepReg] = 0;
+        DefIndices[AntiDepReg] = KillIndices[AntiDepReg];
+        KillIndices[AntiDepReg] = ~0u;
+        assert(((KillIndices[AntiDepReg] == ~0u) !=
+                (DefIndices[AntiDepReg] == ~0u)) &&
+               "Kill and Def maps aren't consistent for AntiDepReg!");
+
+        RegRefs.erase(AntiDepReg);
+        LastNewReg[AntiDepReg] = NewReg;
+        ++Broken;
+      }
+    }
+
+    ScanInstruction(MI, Count);
+  }
+
+  return Broken;
+}
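
Taken together, the public entry points form a per-block protocol. A minimal sketch of how a post-RA scheduler might drive this class (illustrative; runOnBlock and the region bookkeeping are hypothetical):

    void runOnBlock(CriticalAntiDepBreaker &ADB, MachineBasicBlock *BB,
                    std::vector<SUnit> &SUnits,
                    MachineBasicBlock::iterator RegionBegin,
                    MachineBasicBlock::iterator RegionEnd,
                    unsigned InsertPosIndex) {
      ADB.StartBlock(BB);        // seed liveness from the block's live-outs
      ADB.BreakAntiDependencies(SUnits, RegionBegin, RegionEnd,
                                InsertPosIndex);
      // Instructions outside any scheduling region would instead be fed to
      // ADB.Observe(MI, Count, InsertPosIndex) to keep liveness current.
      ADB.FinishBlock();         // drop per-block state
    }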
diff --git a/libclamav/c++/llvm/lib/CodeGen/CriticalAntiDepBreaker.h b/libclamav/c++/llvm/lib/CodeGen/CriticalAntiDepBreaker.h
new file mode 100644
index 0000000..496888d
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/CriticalAntiDepBreaker.h
@@ -0,0 +1,96 @@
+//=- llvm/CodeGen/CriticalAntiDepBreaker.h - Anti-Dep Support -*- C++ -*-=//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements the CriticalAntiDepBreaker class, which
+// implements register anti-dependence breaking along a block's
+// critical path during post-RA scheduling.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_CODEGEN_CRITICALANTIDEPBREAKER_H
+#define LLVM_CODEGEN_CRITICALANTIDEPBREAKER_H
+
+#include "AntiDepBreaker.h"
+#include "llvm/CodeGen/MachineBasicBlock.h"
+#include "llvm/CodeGen/MachineFrameInfo.h"
+#include "llvm/CodeGen/MachineFunction.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/ScheduleDAG.h"
+#include "llvm/Target/TargetRegisterInfo.h"
+#include "llvm/ADT/BitVector.h"
+#include "llvm/ADT/SmallSet.h"
+#include <map>
+
+namespace llvm {
+  class CriticalAntiDepBreaker : public AntiDepBreaker {
+    MachineFunction& MF;
+    MachineRegisterInfo &MRI;
+    const TargetRegisterInfo *TRI;
+
+    /// AllocatableSet - The set of allocatable registers.
+    /// We'll be ignoring anti-dependencies on non-allocatable registers,
+    /// because they may not be safe to break.
+    const BitVector AllocatableSet;
+
+    /// Classes - For live regs that are only used in one register class in a
+    /// live range, the register class. If the register is not live, the
+    /// corresponding value is null. If the register is live but used in
+    /// multiple register classes, the corresponding value is -1 casted to a
+    /// pointer.
+    const TargetRegisterClass *
+      Classes[TargetRegisterInfo::FirstVirtualRegister];
+
+    /// RegRefs - Map registers to all their references within a live range.
+    std::multimap<unsigned, MachineOperand *> RegRefs;
+
+    /// KillIndices - The index of the most recent kill (proceeding bottom-up),
+    /// or ~0u if the register is not live.
+    unsigned KillIndices[TargetRegisterInfo::FirstVirtualRegister];
+
+    /// DefIndices - The index of the most recent complete def (proceeding
+    /// bottom-up), or ~0u if the register is live.
+    unsigned DefIndices[TargetRegisterInfo::FirstVirtualRegister];
+
+    /// KeepRegs - A set of registers which are live and cannot be changed to
+    /// break anti-dependencies.
+    SmallSet<unsigned, 4> KeepRegs;
+
+  public:
+    CriticalAntiDepBreaker(MachineFunction& MFi);
+    ~CriticalAntiDepBreaker();
+    
+    /// Start - Initialize anti-dep breaking for a new basic block.
+    void StartBlock(MachineBasicBlock *BB);
+
+    /// BreakAntiDependencies - Identify anti-dependencies along the critical
+    /// path of the ScheduleDAG and break them by renaming registers.
+    ///
+    unsigned BreakAntiDependencies(std::vector<SUnit>& SUnits,
+                                   MachineBasicBlock::iterator& Begin,
+                                   MachineBasicBlock::iterator& End,
+                                   unsigned InsertPosIndex);
+
+    /// Observe - Update liveness information to account for the current
+    /// instruction, which will not be scheduled.
+    ///
+    void Observe(MachineInstr *MI, unsigned Count, unsigned InsertPosIndex);
+
+    /// Finish - Finish anti-dep breaking for a basic block.
+    void FinishBlock();
+
+  private:
+    void PrescanInstruction(MachineInstr *MI);
+    void ScanInstruction(MachineInstr *MI, unsigned Count);
+    unsigned findSuitableFreeRegister(unsigned AntiDepReg,
+                                      unsigned LastNewReg,
+                                      const TargetRegisterClass *);
+  };
+}
+
+#endif
diff --git a/libclamav/c++/llvm/lib/CodeGen/DeadMachineInstructionElim.cpp b/libclamav/c++/llvm/lib/CodeGen/DeadMachineInstructionElim.cpp
index e104d8c..07a5d38 100644
--- a/libclamav/c++/llvm/lib/CodeGen/DeadMachineInstructionElim.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/DeadMachineInstructionElim.cpp
@@ -15,7 +15,6 @@
 #include "llvm/Pass.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Target/TargetInstrInfo.h"
@@ -23,8 +22,7 @@
 using namespace llvm;
 
 namespace {
-  class VISIBILITY_HIDDEN DeadMachineInstructionElim : 
-        public MachineFunctionPass {
+  class DeadMachineInstructionElim : public MachineFunctionPass {
     virtual bool runOnMachineFunction(MachineFunction &MF);
     
     const TargetRegisterInfo *TRI;
@@ -53,7 +51,7 @@ FunctionPass *llvm::createDeadMachineInstructionElimPass() {
 bool DeadMachineInstructionElim::isDead(const MachineInstr *MI) const {
   // Don't delete instructions with side effects.
   bool SawStore = false;
-  if (!MI->isSafeToMove(TII, SawStore))
+  if (!MI->isSafeToMove(TII, SawStore, 0))
     return false;
 
   // Examine each operand.
diff --git a/libclamav/c++/llvm/lib/CodeGen/DwarfEHPrepare.cpp b/libclamav/c++/llvm/lib/CodeGen/DwarfEHPrepare.cpp
index 0ae7b35..9b516ed 100644
--- a/libclamav/c++/llvm/lib/CodeGen/DwarfEHPrepare.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/DwarfEHPrepare.cpp
@@ -21,7 +21,6 @@
 #include "llvm/IntrinsicInst.h"
 #include "llvm/Module.h"
 #include "llvm/Pass.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Target/TargetLowering.h"
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
 #include "llvm/Transforms/Utils/PromoteMemToReg.h"
@@ -33,7 +32,7 @@ STATISTIC(NumExceptionValuesMoved, "Number of eh.exception calls moved");
 STATISTIC(NumStackTempsIntroduced, "Number of stack temporaries introduced");
 
 namespace {
-  class VISIBILITY_HIDDEN DwarfEHPrepare : public FunctionPass {
+  class DwarfEHPrepare : public FunctionPass {
     const TargetLowering *TLI;
     bool CompileFast;
 
@@ -236,7 +235,7 @@ bool DwarfEHPrepare::LowerUnwinds() {
   if (!RewindFunction) {
     LLVMContext &Ctx = UnwindInsts[0]->getContext();
     std::vector<const Type*>
-      Params(1, PointerType::getUnqual(Type::getInt8Ty(Ctx)));
+      Params(1, Type::getInt8PtrTy(Ctx));
     FunctionType *FTy = FunctionType::get(Type::getVoidTy(Ctx),
                                           Params, false);
     const char *RewindName = TLI->getLibcallName(RTLIB::UNWIND_RESUME);
@@ -333,7 +332,7 @@ bool DwarfEHPrepare::PromoteStackTemporaries() {
   if (ExceptionValueVar && DT && DF && isAllocaPromotable(ExceptionValueVar)) {
     // Turn the exception temporary into registers and phi nodes if possible.
     std::vector<AllocaInst*> Allocas(1, ExceptionValueVar);
-    PromoteMemToReg(Allocas, *DT, *DF, ExceptionValueVar->getContext());
+    PromoteMemToReg(Allocas, *DT, *DF);
     return true;
   }
   return false;
diff --git a/libclamav/c++/llvm/lib/CodeGen/ELF.h b/libclamav/c++/llvm/lib/CodeGen/ELF.h
index b466e89..e303ebb 100644
--- a/libclamav/c++/llvm/lib/CodeGen/ELF.h
+++ b/libclamav/c++/llvm/lib/CodeGen/ELF.h
@@ -22,7 +22,7 @@
 
 #include "llvm/CodeGen/BinaryObject.h"
 #include "llvm/CodeGen/MachineRelocation.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 
 namespace llvm {
   class GlobalValue;
diff --git a/libclamav/c++/llvm/lib/CodeGen/ELFWriter.cpp b/libclamav/c++/llvm/lib/CodeGen/ELFWriter.cpp
index 55a2f70..3e1ee11 100644
--- a/libclamav/c++/llvm/lib/CodeGen/ELFWriter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/ELFWriter.cpp
@@ -457,16 +457,15 @@ void ELFWriter::EmitGlobalConstant(const Constant *CV, ELFSection &GblS) {
     return;
   } else if (const ConstantFP *CFP = dyn_cast<ConstantFP>(CV)) {
     APInt Val = CFP->getValueAPF().bitcastToAPInt();
-    if (CFP->getType() == Type::getDoubleTy(CV->getContext()))
+    if (CFP->getType()->isDoubleTy())
       GblS.emitWord64(Val.getZExtValue());
-    else if (CFP->getType() == Type::getFloatTy(CV->getContext()))
+    else if (CFP->getType()->isFloatTy())
       GblS.emitWord32(Val.getZExtValue());
-    else if (CFP->getType() == Type::getX86_FP80Ty(CV->getContext())) {
-      unsigned PadSize = 
-             TD->getTypeAllocSize(Type::getX86_FP80Ty(CV->getContext()))-
-             TD->getTypeStoreSize(Type::getX86_FP80Ty(CV->getContext()));
+    else if (CFP->getType()->isX86_FP80Ty()) {
+      unsigned PadSize = TD->getTypeAllocSize(CFP->getType())-
+                         TD->getTypeStoreSize(CFP->getType());
       GblS.emitWordFP80(Val.getRawData(), PadSize);
-    } else if (CFP->getType() == Type::getPPC_FP128Ty(CV->getContext()))
+    } else if (CFP->getType()->isPPC_FP128Ty())
       llvm_unreachable("PPC_FP128Ty global emission not implemented");
     return;
   } else if (const ConstantInt *CI = dyn_cast<ConstantInt>(CV)) {
diff --git a/libclamav/c++/llvm/lib/CodeGen/ExactHazardRecognizer.cpp b/libclamav/c++/llvm/lib/CodeGen/ExactHazardRecognizer.cpp
index 4f32c2b..36925b1 100644
--- a/libclamav/c++/llvm/lib/CodeGen/ExactHazardRecognizer.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/ExactHazardRecognizer.cpp
@@ -12,7 +12,7 @@
 //
 //===----------------------------------------------------------------------===//
 
-#define DEBUG_TYPE "exact-hazards"
+#define DEBUG_TYPE "post-RA-sched"
 #include "ExactHazardRecognizer.h"
 #include "llvm/CodeGen/ScheduleHazardRecognizer.h"
 #include "llvm/Support/Debug.h"
@@ -22,7 +22,8 @@
 
 using namespace llvm;
 
-ExactHazardRecognizer::ExactHazardRecognizer(const InstrItineraryData &LItinData) :
+ExactHazardRecognizer::
+ExactHazardRecognizer(const InstrItineraryData &LItinData) :
   ScheduleHazardRecognizer(), ItinData(LItinData) 
 {
   // Determine the maximum depth of any itinerary. This determines the
diff --git a/libclamav/c++/llvm/lib/CodeGen/GCMetadata.cpp b/libclamav/c++/llvm/lib/CodeGen/GCMetadata.cpp
index a57296c..4d25dcc 100644
--- a/libclamav/c++/llvm/lib/CodeGen/GCMetadata.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/GCMetadata.cpp
@@ -17,14 +17,13 @@
 #include "llvm/Pass.h"
 #include "llvm/CodeGen/Passes.h"
 #include "llvm/Function.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
 namespace {
   
-  class VISIBILITY_HIDDEN Printer : public FunctionPass {
+  class Printer : public FunctionPass {
     static char ID;
     raw_ostream &OS;
     
@@ -39,7 +38,7 @@ namespace {
     bool runOnFunction(Function &F);
   };
   
-  class VISIBILITY_HIDDEN Deleter : public FunctionPass {
+  class Deleter : public FunctionPass {
     static char ID;
     
   public:
diff --git a/libclamav/c++/llvm/lib/CodeGen/GCStrategy.cpp b/libclamav/c++/llvm/lib/CodeGen/GCStrategy.cpp
index 6d0de41..6e0bde6 100644
--- a/libclamav/c++/llvm/lib/CodeGen/GCStrategy.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/GCStrategy.cpp
@@ -27,7 +27,6 @@
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetRegisterInfo.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 
@@ -39,7 +38,7 @@ namespace {
   /// llvm.gcwrite intrinsics, replacing them with simple loads and stores as 
   /// directed by the GCStrategy. It also performs automatic root initialization
   /// and custom intrinsic lowering.
-  class VISIBILITY_HIDDEN LowerIntrinsics : public FunctionPass {
+  class LowerIntrinsics : public FunctionPass {
     static bool NeedsDefaultLoweringPass(const GCStrategy &C);
     static bool NeedsCustomLoweringPass(const GCStrategy &C);
     static bool CouldBecomeSafePoint(Instruction *I);
@@ -63,7 +62,7 @@ namespace {
   /// function representation to identify safe points for the garbage collector
   /// in the machine code. It inserts labels at safe points and populates a
   /// GCMetadata record for each function.
-  class VISIBILITY_HIDDEN MachineCodeAnalysis : public MachineFunctionPass {
+  class MachineCodeAnalysis : public MachineFunctionPass {
     const TargetMachine *TM;
     GCFunctionInfo *FI;
     MachineModuleInfo *MMI;
diff --git a/libclamav/c++/llvm/lib/CodeGen/IfConversion.cpp b/libclamav/c++/llvm/lib/CodeGen/IfConversion.cpp
index 7b613ff..c23d707 100644
--- a/libclamav/c++/llvm/lib/CodeGen/IfConversion.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/IfConversion.cpp
@@ -59,7 +59,7 @@ STATISTIC(NumIfConvBBs,    "Number of if-converted blocks");
 STATISTIC(NumDupBBs,       "Number of duplicated blocks");
 
 namespace {
-  class VISIBILITY_HIDDEN IfConverter : public MachineFunctionPass {
+  class IfConverter : public MachineFunctionPass {
     enum IfcvtKind {
       ICNotClassfied,  // BB data valid, but not classified.
       ICSimpleFalse,   // Same as ICSimple, but on the false path.
@@ -608,7 +608,7 @@ void IfConverter::ScanInstructions(BBInfo &BBI) {
     if (TII->DefinesPredicate(I, PredDefs))
       BBI.ClobbersPred = true;
 
-    if (!TID.isPredicable()) {
+    if (!TII->isPredicable(I)) {
       BBI.IsUnpredicable = true;
       return;
     }
diff --git a/libclamav/c++/llvm/lib/CodeGen/IntrinsicLowering.cpp b/libclamav/c++/llvm/lib/CodeGen/IntrinsicLowering.cpp
index e3bbdb2..8a3bd0b 100644
--- a/libclamav/c++/llvm/lib/CodeGen/IntrinsicLowering.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/IntrinsicLowering.cpp
@@ -103,22 +103,22 @@ void IntrinsicLowering::AddPrototypes(Module &M) {
         break;
       case Intrinsic::memcpy:
         M.getOrInsertFunction("memcpy",
-          PointerType::getUnqual(Type::getInt8Ty(Context)),
-                              PointerType::getUnqual(Type::getInt8Ty(Context)), 
-                              PointerType::getUnqual(Type::getInt8Ty(Context)), 
+          Type::getInt8PtrTy(Context),
+                              Type::getInt8PtrTy(Context), 
+                              Type::getInt8PtrTy(Context), 
                               TD.getIntPtrType(Context), (Type *)0);
         break;
       case Intrinsic::memmove:
         M.getOrInsertFunction("memmove",
-          PointerType::getUnqual(Type::getInt8Ty(Context)),
-                              PointerType::getUnqual(Type::getInt8Ty(Context)), 
-                              PointerType::getUnqual(Type::getInt8Ty(Context)), 
+          Type::getInt8PtrTy(Context),
+                              Type::getInt8PtrTy(Context), 
+                              Type::getInt8PtrTy(Context), 
                               TD.getIntPtrType(Context), (Type *)0);
         break;
       case Intrinsic::memset:
         M.getOrInsertFunction("memset",
-          PointerType::getUnqual(Type::getInt8Ty(Context)),
-                              PointerType::getUnqual(Type::getInt8Ty(Context)), 
+          Type::getInt8PtrTy(Context),
+                              Type::getInt8PtrTy(Context), 
                               Type::getInt32Ty(M.getContext()), 
                               TD.getIntPtrType(Context), (Type *)0);
         break;
@@ -435,13 +435,11 @@ void IntrinsicLowering::LowerIntrinsicCall(CallInst *CI) {
     break;    // Simply strip out debugging intrinsics
 
   case Intrinsic::eh_exception:
-  case Intrinsic::eh_selector_i32:
-  case Intrinsic::eh_selector_i64:
+  case Intrinsic::eh_selector:
     CI->replaceAllUsesWith(Constant::getNullValue(CI->getType()));
     break;
 
-  case Intrinsic::eh_typeid_for_i32:
-  case Intrinsic::eh_typeid_for_i64:
+  case Intrinsic::eh_typeid_for:
     // Return something different to eh_selector.
     CI->replaceAllUsesWith(ConstantInt::get(CI->getType(), 1));
     break;
@@ -517,6 +515,15 @@ void IntrinsicLowering::LowerIntrinsicCall(CallInst *CI) {
      if (CI->getType() != Type::getVoidTy(Context))
        CI->replaceAllUsesWith(ConstantInt::get(CI->getType(), 1));
      break;
+  case Intrinsic::invariant_start:
+  case Intrinsic::lifetime_start:
+    // Discard region information.
+    CI->replaceAllUsesWith(UndefValue::get(CI->getType()));
+    break;
+  case Intrinsic::invariant_end:
+  case Intrinsic::lifetime_end:
+    // Discard region information.
+    break;
   }
 
   assert(CI->use_empty() &&
diff --git a/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp b/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp
index 4e713a6..242cba5 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp
@@ -31,6 +31,24 @@ namespace llvm {
   bool EnableFastISel;
 }
 
+static cl::opt<bool> DisablePostRA("disable-post-ra", cl::Hidden,
+    cl::desc("Disable Post Regalloc"));
+static cl::opt<bool> DisableBranchFold("disable-branch-fold", cl::Hidden,
+    cl::desc("Disable branch folding"));
+static cl::opt<bool> DisableTailDuplicate("disable-tail-duplicate", cl::Hidden,
+    cl::desc("Disable tail duplication"));
+static cl::opt<bool> DisableCodePlace("disable-code-place", cl::Hidden,
+    cl::desc("Disable code placement"));
+static cl::opt<bool> DisableSSC("disable-ssc", cl::Hidden,
+    cl::desc("Disable Stack Slot Coloring"));
+static cl::opt<bool> DisableMachineLICM("disable-machine-licm", cl::Hidden,
+    cl::desc("Disable Machine LICM"));
+static cl::opt<bool> DisableMachineSink("disable-machine-sink", cl::Hidden,
+    cl::desc("Disable Machine Sinking"));
+static cl::opt<bool> DisableLSR("disable-lsr", cl::Hidden,
+    cl::desc("Disable Loop Strength Reduction Pass"));
+static cl::opt<bool> DisableCGP("disable-cgp", cl::Hidden,
+    cl::desc("Disable Codegen Prepare"));
 static cl::opt<bool> PrintLSR("print-lsr-output", cl::Hidden,
     cl::desc("Print LLVM IR produced by the loop-reduce pass"));
 static cl::opt<bool> PrintISelInput("print-isel-input", cl::Hidden,
@@ -39,8 +57,6 @@ static cl::opt<bool> PrintEmittedAsm("print-emitted-asm", cl::Hidden,
     cl::desc("Dump emitter generated instructions as assembly"));
 static cl::opt<bool> PrintGCInfo("print-gc", cl::Hidden,
     cl::desc("Dump garbage collector data"));
-static cl::opt<bool> HoistConstants("hoist-constants", cl::Hidden,
-    cl::desc("Hoist constants out of loops"));
 static cl::opt<bool> VerifyMachineCode("verify-machineinstrs", cl::Hidden,
     cl::desc("Verify generated machine code"),
     cl::init(getenv("LLVM_VERIFY_MACHINEINSTRS")!=NULL));
@@ -52,6 +68,11 @@ static cl::opt<cl::boolOrDefault>
 EnableFastISelOption("fast-isel", cl::Hidden,
   cl::desc("Enable the \"fast\" instruction selector"));
 
+// Enable or disable an experimental optimization to split GEPs
+// and run a special GVN pass which does not examine loads, in
+// an effort to factor out redundancy implicit in complex GEPs.
+static cl::opt<bool> EnableSplitGEPGVN("split-gep-gvn", cl::Hidden,
+    cl::desc("Split GEPs and run no-load GVN"));
 
 LLVMTargetMachine::LLVMTargetMachine(const Target &T,
                                      const std::string &TargetTriple)
@@ -70,18 +91,6 @@ LLVMTargetMachine::addPassesToEmitFile(PassManagerBase &PM,
   if (addCommonCodeGenPasses(PM, OptLevel))
     return FileModel::Error;
 
-  // Fold redundant debug labels.
-  PM.add(createDebugLabelFoldingPass());
-
-  if (PrintMachineCode)
-    PM.add(createMachineFunctionPrinterPass(errs()));
-
-  if (addPreEmitPass(PM, OptLevel) && PrintMachineCode)
-    PM.add(createMachineFunctionPrinterPass(errs()));
-
-  if (OptLevel != CodeGenOpt::None)
-    PM.add(createCodePlacementOptPass());
-
   switch (FileType) {
   default:
     break;
@@ -173,9 +182,6 @@ bool LLVMTargetMachine::addPassesToEmitMachineCode(PassManagerBase &PM,
   if (addCommonCodeGenPasses(PM, OptLevel))
     return true;
 
-  if (addPreEmitPass(PM, OptLevel) && PrintMachineCode)
-    PM.add(createMachineFunctionPrinterPass(errs()));
-
   addCodeEmitter(PM, OptLevel, MCE);
   if (PrintEmittedAsm)
     addAssemblyEmitter(PM, OptLevel, true, ferrs());
@@ -198,9 +204,6 @@ bool LLVMTargetMachine::addPassesToEmitMachineCode(PassManagerBase &PM,
   if (addCommonCodeGenPasses(PM, OptLevel))
     return true;
 
-  if (addPreEmitPass(PM, OptLevel) && PrintMachineCode)
-    PM.add(createMachineFunctionPrinterPass(errs()));
-
   addCodeEmitter(PM, OptLevel, JCE);
   if (PrintEmittedAsm)
     addAssemblyEmitter(PM, OptLevel, true, ferrs());
@@ -211,9 +214,10 @@ bool LLVMTargetMachine::addPassesToEmitMachineCode(PassManagerBase &PM,
 }
 
 static void printAndVerify(PassManagerBase &PM,
+                           const char *Banner,
                            bool allowDoubleDefs = false) {
   if (PrintMachineCode)
-    PM.add(createMachineFunctionPrinterPass(errs()));
+    PM.add(createMachineFunctionPrinterPass(errs(), Banner));
 
   if (VerifyMachineCode)
     PM.add(createMachineVerifierPass(allowDoubleDefs));
@@ -226,8 +230,14 @@ bool LLVMTargetMachine::addCommonCodeGenPasses(PassManagerBase &PM,
                                                CodeGenOpt::Level OptLevel) {
   // Standard LLVM-Level Passes.
 
+  // Optionally, run split-GEPs and no-load GVN.
+  if (EnableSplitGEPGVN) {
+    PM.add(createGEPSplitterPass());
+    PM.add(createGVNPass(/*NoPRE=*/false, /*NoLoads=*/true));
+  }
+
   // Run loop strength reduction before anything else.
-  if (OptLevel != CodeGenOpt::None) {
+  if (OptLevel != CodeGenOpt::None && !DisableLSR) {
     PM.add(createLoopStrengthReducePass(getTargetLowering()));
     if (PrintLSR)
       PM.add(createPrintFunctionPass("\n\n*** Code after LSR ***\n", &errs()));
@@ -255,11 +265,8 @@ bool LLVMTargetMachine::addCommonCodeGenPasses(PassManagerBase &PM,
   // Make sure that no unreachable blocks are instruction selected.
   PM.add(createUnreachableBlockEliminationPass());
 
-  if (OptLevel != CodeGenOpt::None) {
-    if (HoistConstants)
-      PM.add(createCodeGenLICMPass());
+  if (OptLevel != CodeGenOpt::None && !DisableCGP)
     PM.add(createCodeGenPreparePass(getTargetLowering()));
-  }
 
   PM.add(createStackProtectorPass(getTargetLowering()));
 
@@ -283,61 +290,80 @@ bool LLVMTargetMachine::addCommonCodeGenPasses(PassManagerBase &PM,
     return true;
 
   // Print the instruction selected machine code...
-  printAndVerify(PM, /* allowDoubleDefs= */ true);
+  printAndVerify(PM, "After Instruction Selection",
+                 /* allowDoubleDefs= */ true);
 
   if (OptLevel != CodeGenOpt::None) {
-    PM.add(createMachineLICMPass());
-    PM.add(createMachineSinkingPass());
-    printAndVerify(PM, /* allowDoubleDefs= */ true);
+    if (!DisableMachineLICM)
+      PM.add(createMachineLICMPass());
+    if (!DisableMachineSink)
+      PM.add(createMachineSinkingPass());
+    printAndVerify(PM, "After MachineLICM and MachineSinking",
+                   /* allowDoubleDefs= */ true);
   }
 
   // Run pre-ra passes.
   if (addPreRegAlloc(PM, OptLevel))
-    printAndVerify(PM, /* allowDoubleDefs= */ true);
+    printAndVerify(PM, "After PreRegAlloc passes",
+                   /* allowDoubleDefs= */ true);
 
   // Perform register allocation.
   PM.add(createRegisterAllocator());
+  printAndVerify(PM, "After Register Allocation");
 
   // Perform stack slot coloring.
-  if (OptLevel != CodeGenOpt::None)
+  if (OptLevel != CodeGenOpt::None && !DisableSSC) {
     // FIXME: Re-enable coloring with register when it's capable of adding
     // kill markers.
     PM.add(createStackSlotColoringPass(false));
-
-  printAndVerify(PM);           // Print the register-allocated code
+    printAndVerify(PM, "After StackSlotColoring");
+  }
 
   // Run post-ra passes.
   if (addPostRegAlloc(PM, OptLevel))
-    printAndVerify(PM);
+    printAndVerify(PM, "After PostRegAlloc passes");
 
   PM.add(createLowerSubregsPass());
-  printAndVerify(PM);
+  printAndVerify(PM, "After LowerSubregs");
 
   // Insert prolog/epilog code.  Eliminate abstract frame index references...
   PM.add(createPrologEpilogCodeInserter());
-  printAndVerify(PM);
+  printAndVerify(PM, "After PrologEpilogCodeInserter");
 
   // Run pre-sched2 passes.
   if (addPreSched2(PM, OptLevel))
-    printAndVerify(PM);
+    printAndVerify(PM, "After PreSched2 passes");
 
   // Second pass scheduler.
-  if (OptLevel != CodeGenOpt::None) {
-    PM.add(createPostRAScheduler());
-    printAndVerify(PM);
+  if (OptLevel != CodeGenOpt::None && !DisablePostRA) {
+    PM.add(createPostRAScheduler(OptLevel));
+    printAndVerify(PM, "After PostRAScheduler");
   }
 
   // Branch folding must be run after regalloc and prolog/epilog insertion.
-  if (OptLevel != CodeGenOpt::None) {
+  if (OptLevel != CodeGenOpt::None && !DisableBranchFold) {
     PM.add(createBranchFoldingPass(getEnableTailMergeDefault()));
-    printAndVerify(PM);
+    printAndVerify(PM, "After BranchFolding");
+  }
+
+  // Tail duplication.
+  if (OptLevel != CodeGenOpt::None && !DisableTailDuplicate) {
+    PM.add(createTailDuplicatePass());
+    printAndVerify(PM, "After TailDuplicate");
   }
 
   PM.add(createGCMachineCodeAnalysisPass());
-  printAndVerify(PM);
 
   if (PrintGCInfo)
     PM.add(createGCInfoPrinter(errs()));
 
+  if (OptLevel != CodeGenOpt::None && !DisableCodePlace) {
+    PM.add(createCodePlacementOptPass());
+    printAndVerify(PM, "After CodePlacementOpt");
+  }
+
+  if (addPreEmitPass(PM, OptLevel))
+    printAndVerify(PM, "After PreEmit passes");
+
   return false;
 }
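
Each optional pass above follows the same gating pattern: a hidden cl::opt, an OptLevel check around PM.add, and a printAndVerify banner. As a template (DisableFoo and createFooPass are hypothetical names, shown only to illustrate the convention):

    static cl::opt<bool> DisableFoo("disable-foo", cl::Hidden,
        cl::desc("Disable the Foo pass"));

    // ... inside addCommonCodeGenPasses():
    if (OptLevel != CodeGenOpt::None && !DisableFoo) {
      PM.add(createFooPass());             // hypothetical factory
      printAndVerify(PM, "After Foo");
    }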
diff --git a/libclamav/c++/llvm/lib/CodeGen/LatencyPriorityQueue.cpp b/libclamav/c++/llvm/lib/CodeGen/LatencyPriorityQueue.cpp
index 2e7b89c..f1bd573 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LatencyPriorityQueue.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LatencyPriorityQueue.cpp
@@ -73,9 +73,10 @@ void LatencyPriorityQueue::push_impl(SUnit *SU) {
   // this node is the sole unscheduled node for.
   unsigned NumNodesBlocking = 0;
   for (SUnit::const_succ_iterator I = SU->Succs.begin(), E = SU->Succs.end();
-       I != E; ++I)
+       I != E; ++I) {
     if (getSingleUnscheduledPred(I->getSUnit()) == SU)
       ++NumNodesBlocking;
+  }
   NumNodesSolelyBlocking[SU->NodeNum] = NumNodesBlocking;
   
   Queue.push(SU);
@@ -88,8 +89,9 @@ void LatencyPriorityQueue::push_impl(SUnit *SU) {
 // the node available.
 void LatencyPriorityQueue::ScheduledNode(SUnit *SU) {
   for (SUnit::const_succ_iterator I = SU->Succs.begin(), E = SU->Succs.end();
-       I != E; ++I)
+       I != E; ++I) {
     AdjustPriorityOfUnscheduledPreds(I->getSUnit());
+  }
 }
 
 /// AdjustPriorityOfUnscheduledPreds - One of the predecessors of SU was just
diff --git a/libclamav/c++/llvm/lib/CodeGen/LazyLiveness.cpp b/libclamav/c++/llvm/lib/CodeGen/LazyLiveness.cpp
deleted file mode 100644
index 8352fb2..0000000
--- a/libclamav/c++/llvm/lib/CodeGen/LazyLiveness.cpp
+++ /dev/null
@@ -1,168 +0,0 @@
-//===- LazyLiveness.cpp - Lazy, CFG-invariant liveness information --------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This pass implements a lazy liveness analysis as per "Fast Liveness Checking
-// for SSA-form Programs," by Boissinot, et al.
-//
-//===----------------------------------------------------------------------===//
-
-#define DEBUG_TYPE "lazyliveness"
-#include "llvm/CodeGen/LazyLiveness.h"
-#include "llvm/CodeGen/MachineDominators.h"
-#include "llvm/CodeGen/MachineFunction.h"
-#include "llvm/CodeGen/MachineRegisterInfo.h"
-#include "llvm/CodeGen/Passes.h"
-#include "llvm/ADT/DepthFirstIterator.h"
-#include "llvm/ADT/PostOrderIterator.h"
-using namespace llvm;
-
-char LazyLiveness::ID = 0;
-static RegisterPass<LazyLiveness> X("lazy-liveness", "Lazy Liveness Analysis");
-
-void LazyLiveness::computeBackedgeChain(MachineFunction& mf, 
-                                        MachineBasicBlock* MBB) {
-  SparseBitVector<128> tmp = rv[MBB];
-  tmp.set(preorder[MBB]);
-  tmp &= backedge_source;
-  calculated.set(preorder[MBB]);
-  
-  for (SparseBitVector<128>::iterator I = tmp.begin(); I != tmp.end(); ++I) {
-    assert(rev_preorder.size() > *I && "Unknown block!");
-    
-    MachineBasicBlock* SrcMBB = rev_preorder[*I];
-    
-    for (MachineBasicBlock::succ_iterator SI = SrcMBB->succ_begin(),
-         SE = SrcMBB->succ_end(); SI != SE; ++SI) {
-      MachineBasicBlock* TgtMBB = *SI;
-      
-      if (backedges.count(std::make_pair(SrcMBB, TgtMBB)) &&
-          !rv[MBB].test(preorder[TgtMBB])) {
-        if (!calculated.test(preorder[TgtMBB]))
-          computeBackedgeChain(mf, TgtMBB);
-        
-        tv[MBB].set(preorder[TgtMBB]);
-        SparseBitVector<128> right = tv[TgtMBB];
-        tv[MBB] |= right;
-      }
-    }
-    
-    tv[MBB].reset(preorder[MBB]);
-  }
-}
-
-bool LazyLiveness::runOnMachineFunction(MachineFunction &mf) {
-  rv.clear();
-  tv.clear();
-  backedges.clear();
-  backedge_source.clear();
-  backedge_target.clear();
-  calculated.clear();
-  preorder.clear();
-  rev_preorder.clear();
-  
-  rv.resize(mf.size());
-  tv.resize(mf.size());
-  preorder.resize(mf.size());
-  rev_preorder.reserve(mf.size());
-  
-  MRI = &mf.getRegInfo();
-  MachineDominatorTree& MDT = getAnalysis<MachineDominatorTree>();
-  
-  // Step 0: Compute preorder numbering for all MBBs.
-  unsigned num = 0;
-  for (df_iterator<MachineDomTreeNode*> DI = df_begin(MDT.getRootNode()),
-       DE = df_end(MDT.getRootNode()); DI != DE; ++DI) {
-    preorder[(*DI)->getBlock()] = num++;
-    rev_preorder.push_back((*DI)->getBlock());
-  }
-  
-  // Step 1: Compute the transitive closure of the CFG, ignoring backedges.
-  for (po_iterator<MachineBasicBlock*> POI = po_begin(&*mf.begin()),
-       POE = po_end(&*mf.begin()); POI != POE; ++POI) {
-    MachineBasicBlock* MBB = *POI;
-    SparseBitVector<128>& entry = rv[MBB];
-    entry.set(preorder[MBB]);
-    
-    for (MachineBasicBlock::succ_iterator SI = MBB->succ_begin(),
-         SE = MBB->succ_end(); SI != SE; ++SI) {
-      DenseMap<MachineBasicBlock*, SparseBitVector<128> >::iterator SII = 
-                                                         rv.find(*SI);
-      
-      // Because we're iterating in postorder, any successor that does not yet
-      // have an rv entry must be on a backedge.
-      if (SII != rv.end()) {
-        entry |= SII->second;
-      } else {
-        backedges.insert(std::make_pair(MBB, *SI));
-        backedge_source.set(preorder[MBB]);
-        backedge_target.set(preorder[*SI]);
-      }
-    }
-  }
-  
-  for (SparseBitVector<128>::iterator I = backedge_source.begin();
-       I != backedge_source.end(); ++I)
-    computeBackedgeChain(mf, rev_preorder[*I]);
-  
-  for (po_iterator<MachineBasicBlock*> POI = po_begin(&*mf.begin()),
-       POE = po_end(&*mf.begin()); POI != POE; ++POI)
-    if (!backedge_target.test(preorder[*POI]))
-      for (MachineBasicBlock::succ_iterator SI = (*POI)->succ_begin(),
-           SE = (*POI)->succ_end(); SI != SE; ++SI)
-        if (!backedges.count(std::make_pair(*POI, *SI)) && tv.count(*SI)) {
-          SparseBitVector<128> right = tv[*SI];
-          tv[*POI] |= right;
-        }
-  
-  for (po_iterator<MachineBasicBlock*> POI = po_begin(&*mf.begin()),
-       POE = po_end(&*mf.begin()); POI != POE; ++POI)
-    tv[*POI].set(preorder[*POI]);
-  
-  return false;
-}
-
-bool LazyLiveness::vregLiveIntoMBB(unsigned vreg, MachineBasicBlock* MBB) {
-  MachineDominatorTree& MDT = getAnalysis<MachineDominatorTree>();
-  
-  MachineBasicBlock* DefMBB = MRI->def_begin(vreg)->getParent();
-  unsigned def = preorder[DefMBB];
-  unsigned max_dom = 0;
-  for (df_iterator<MachineDomTreeNode*> DI = df_begin(MDT[DefMBB]),
-       DE = df_end(MDT[DefMBB]); DI != DE; ++DI) {
-    if (preorder[DI->getBlock()] > max_dom) {
-      max_dom = preorder[(*DI)->getBlock()];
-    }
-  }
-  
-  if (preorder[MBB] <= def || max_dom < preorder[MBB])
-    return false;
-  
-  SparseBitVector<128>::iterator I = tv[MBB].begin();
-  while (I != tv[MBB].end() && *I <= def) ++I;
-  while (I != tv[MBB].end() && *I < max_dom) {
-    for (MachineRegisterInfo::use_iterator UI = MRI->use_begin(vreg),
-         UE = MachineRegisterInfo::use_end(); UI != UE; ++UI) {
-      MachineBasicBlock* UseMBB = UI->getParent();
-      if (rv[rev_preorder[*I]].test(preorder[UseMBB]))
-        return true;
-      
-      unsigned t_dom = 0;
-      for (df_iterator<MachineDomTreeNode*> DI =
-           df_begin(MDT[rev_preorder[*I]]), DE = df_end(MDT[rev_preorder[*I]]); 
-           DI != DE; ++DI)
-        if (preorder[DI->getBlock()] > t_dom) {
-          max_dom = preorder[(*DI)->getBlock()];
-        }
-      I = tv[MBB].begin();
-      while (I != tv[MBB].end() && *I < t_dom) ++I;
-    }
-  }
-  
-  return false;
-}
diff --git a/libclamav/c++/llvm/lib/CodeGen/LiveInterval.cpp b/libclamav/c++/llvm/lib/CodeGen/LiveInterval.cpp
index 38b9401..8d632cb 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LiveInterval.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LiveInterval.cpp
@@ -19,6 +19,7 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/CodeGen/LiveInterval.h"
+#include "llvm/CodeGen/LiveIntervalAnalysis.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/SmallSet.h"
@@ -28,11 +29,6 @@
 #include <algorithm>
 using namespace llvm;
 
-// Print a MachineInstrIndex to a raw_ostream.
-void MachineInstrIndex::print(raw_ostream &os) const {
-  os << (index & ~PHI_BIT);
-}
-
 // An example for liveAt():
 //
 // this = [1,4), liveAt(0) will return false. The instruction defining this
@@ -40,7 +36,7 @@ void MachineInstrIndex::print(raw_ostream &os) const {
 // variable it represents. This is because slot 1 is used (def slot) and spans
 // up to slot 3 (store slot).
 //
-bool LiveInterval::liveAt(MachineInstrIndex I) const {
+bool LiveInterval::liveAt(SlotIndex I) const {
   Ranges::const_iterator r = std::upper_bound(ranges.begin(), ranges.end(), I);
 
   if (r == ranges.begin())
@@ -53,7 +49,7 @@ bool LiveInterval::liveAt(MachineInstrIndex I) const {
 // liveBeforeAndAt - Check if the interval is live at the index and the index
 // just before it. If index is liveAt, check if it starts a new live range.
 // If it does, then check if the previous live range ends at index-1.
-bool LiveInterval::liveBeforeAndAt(MachineInstrIndex I) const {
+bool LiveInterval::liveBeforeAndAt(SlotIndex I) const {
   Ranges::const_iterator r = std::upper_bound(ranges.begin(), ranges.end(), I);
 
   if (r == ranges.begin())
@@ -131,7 +127,7 @@ bool LiveInterval::overlapsFrom(const LiveInterval& other,
 
 /// overlaps - Return true if the live interval overlaps a range specified
 /// by [Start, End).
-bool LiveInterval::overlaps(MachineInstrIndex Start, MachineInstrIndex End) const {
+bool LiveInterval::overlaps(SlotIndex Start, SlotIndex End) const {
   assert(Start < End && "Invalid range");
   const_iterator I  = begin();
   const_iterator E  = end();
@@ -149,10 +145,10 @@ bool LiveInterval::overlaps(MachineInstrIndex Start, MachineInstrIndex End) cons
 /// specified by I to end at the specified endpoint.  To do this, we should
 /// merge and eliminate all ranges that this will overlap with.  The iterator is
 /// not invalidated.
-void LiveInterval::extendIntervalEndTo(Ranges::iterator I, MachineInstrIndex NewEnd) {
+void LiveInterval::extendIntervalEndTo(Ranges::iterator I, SlotIndex NewEnd) {
   assert(I != ranges.end() && "Not a valid interval!");
   VNInfo *ValNo = I->valno;
-  MachineInstrIndex OldEnd = I->end;
+  SlotIndex OldEnd = I->end;
 
   // Search for the first interval that we can't merge with.
   Ranges::iterator MergeTo = next(I);
@@ -167,7 +163,7 @@ void LiveInterval::extendIntervalEndTo(Ranges::iterator I, MachineInstrIndex New
   ranges.erase(next(I), MergeTo);
 
   // Update kill info.
-  ValNo->removeKills(OldEnd, I->end.prevSlot_());
+  ValNo->removeKills(OldEnd, I->end.getPrevSlot());
 
   // If the newly formed range now touches the range after it and if they have
   // the same value number, merge the two ranges into one range.
@@ -183,7 +179,7 @@ void LiveInterval::extendIntervalEndTo(Ranges::iterator I, MachineInstrIndex New
 /// specified by I to start at the specified endpoint.  To do this, we should
 /// merge and eliminate all ranges that this will overlap with.
 LiveInterval::Ranges::iterator
-LiveInterval::extendIntervalStartTo(Ranges::iterator I, MachineInstrIndex NewStart) {
+LiveInterval::extendIntervalStartTo(Ranges::iterator I, SlotIndex NewStart) {
   assert(I != ranges.end() && "Not a valid interval!");
   VNInfo *ValNo = I->valno;
 
@@ -216,7 +212,7 @@ LiveInterval::extendIntervalStartTo(Ranges::iterator I, MachineInstrIndex NewSta
 
 LiveInterval::iterator
 LiveInterval::addRangeFrom(LiveRange LR, iterator From) {
-  MachineInstrIndex Start = LR.start, End = LR.end;
+  SlotIndex Start = LR.start, End = LR.end;
   iterator it = std::upper_bound(From, ranges.end(), Start);
 
   // If the inserted interval starts in the middle or right at the end of
@@ -268,7 +264,7 @@ LiveInterval::addRangeFrom(LiveRange LR, iterator From) {
 
 /// isInOneLiveRange - Return true if the range specified is entirely in 
 /// a single LiveRange of the live interval.
-bool LiveInterval::isInOneLiveRange(MachineInstrIndex Start, MachineInstrIndex End) {
+bool LiveInterval::isInOneLiveRange(SlotIndex Start, SlotIndex End) {
   Ranges::iterator I = std::upper_bound(ranges.begin(), ranges.end(), Start);
   if (I == ranges.begin())
     return false;
@@ -279,7 +275,7 @@ bool LiveInterval::isInOneLiveRange(MachineInstrIndex Start, MachineInstrIndex E
 
 /// removeRange - Remove the specified range from this interval.  Note that
 /// the range must be in a single LiveRange in its entirety.
-void LiveInterval::removeRange(MachineInstrIndex Start, MachineInstrIndex End,
+void LiveInterval::removeRange(SlotIndex Start, SlotIndex End,
                                bool RemoveDeadValNo) {
   // Find the LiveRange containing this span.
   Ranges::iterator I = std::upper_bound(ranges.begin(), ranges.end(), Start);
@@ -331,7 +327,7 @@ void LiveInterval::removeRange(MachineInstrIndex Start, MachineInstrIndex End,
   }
 
   // Otherwise, we are splitting the LiveRange into two pieces.
-  MachineInstrIndex OldEnd = I->end;
+  SlotIndex OldEnd = I->end;
   I->end = Start;   // Trim the old interval.
 
   // Insert the new one.
@@ -362,36 +358,11 @@ void LiveInterval::removeValNo(VNInfo *ValNo) {
     ValNo->setIsUnused(true);
   }
 }
- 
-/// scaleNumbering - Renumber VNI and ranges to provide gaps for new
-/// instructions.                                                   
-
-void LiveInterval::scaleNumbering(unsigned factor) {
-  // Scale ranges.                                                            
-  for (iterator RI = begin(), RE = end(); RI != RE; ++RI) {
-    RI->start = RI->start.scale(factor);
-    RI->end = RI->end.scale(factor);
-  }
-
-  // Scale VNI info.                                                          
-  for (vni_iterator VNI = vni_begin(), VNIE = vni_end(); VNI != VNIE; ++VNI) {
-    VNInfo *vni = *VNI;
-
-    if (vni->isDefAccurate())
-      vni->def = vni->def.scale(factor);
-
-    for (unsigned i = 0; i < vni->kills.size(); ++i) {
-      if (!vni->kills[i].isPHIIndex())
-        vni->kills[i] = vni->kills[i].scale(factor);
-    }
-  }
-}
-
 
 /// getLiveRangeContaining - Return the live range that contains the
 /// specified index, or null if there is none.
 LiveInterval::const_iterator 
-LiveInterval::FindLiveRangeContaining(MachineInstrIndex Idx) const {
+LiveInterval::FindLiveRangeContaining(SlotIndex Idx) const {
   const_iterator It = std::upper_bound(begin(), end(), Idx);
   if (It != ranges.begin()) {
     --It;
@@ -403,7 +374,7 @@ LiveInterval::FindLiveRangeContaining(MachineInstrIndex Idx) const {
 }
 
 LiveInterval::iterator 
-LiveInterval::FindLiveRangeContaining(MachineInstrIndex Idx) {
+LiveInterval::FindLiveRangeContaining(SlotIndex Idx) {
   iterator It = std::upper_bound(begin(), end(), Idx);
   if (It != begin()) {
     --It;
@@ -416,7 +387,7 @@ LiveInterval::FindLiveRangeContaining(MachineInstrIndex Idx) {
 
 /// findDefinedVNInfo - Find the VNInfo defined by the specified
 /// index (register interval).
-VNInfo *LiveInterval::findDefinedVNInfoForRegInt(MachineInstrIndex Idx) const {
+VNInfo *LiveInterval::findDefinedVNInfoForRegInt(SlotIndex Idx) const {
   for (LiveInterval::const_vni_iterator i = vni_begin(), e = vni_end();
        i != e; ++i) {
     if ((*i)->def == Idx)
@@ -440,7 +411,8 @@ VNInfo *LiveInterval::findDefinedVNInfoForStackInt(unsigned reg) const {
 /// join - Join two live intervals (this, and other) together.  This applies
 /// mappings to the value numbers in the LHS/RHS intervals as specified.  If
 /// the intervals are not joinable, this aborts.
-void LiveInterval::join(LiveInterval &Other, const int *LHSValNoAssignments,
+void LiveInterval::join(LiveInterval &Other,
+                        const int *LHSValNoAssignments,
                         const int *RHSValNoAssignments, 
                         SmallVector<VNInfo*, 16> &NewVNInfo,
                         MachineRegisterInfo *MRI) {
@@ -554,14 +526,15 @@ void LiveInterval::MergeRangesInAsValue(const LiveInterval &RHS,
 /// The LiveRanges in RHS are allowed to overlap with LiveRanges in the
 /// current interval; it will replace the value numbers of the overlapped
 /// live ranges with the specified value number.
-void LiveInterval::MergeValueInAsValue(const LiveInterval &RHS,
-                                     const VNInfo *RHSValNo, VNInfo *LHSValNo) {
+void LiveInterval::MergeValueInAsValue(
+                                    const LiveInterval &RHS,
+                                    const VNInfo *RHSValNo, VNInfo *LHSValNo) {
   SmallVector<VNInfo*, 4> ReplacedValNos;
   iterator IP = begin();
   for (const_iterator I = RHS.begin(), E = RHS.end(); I != E; ++I) {
     if (I->valno != RHSValNo)
       continue;
-    MachineInstrIndex Start = I->start, End = I->end;
+    SlotIndex Start = I->start, End = I->end;
     IP = std::upper_bound(IP, end(), Start);
     // If the start of this range overlaps with an existing liverange, trim it.
     if (IP != begin() && IP[-1].end > Start) {
@@ -621,7 +594,8 @@ void LiveInterval::MergeValueInAsValue(const LiveInterval &RHS,
 /// MergeInClobberRanges - For any live ranges that are not defined in the
 /// current interval, but are defined in the Clobbers interval, mark them
 /// used with an unknown definition value.
-void LiveInterval::MergeInClobberRanges(const LiveInterval &Clobbers,
+void LiveInterval::MergeInClobberRanges(LiveIntervals &li_,
+                                        const LiveInterval &Clobbers,
                                         BumpPtrAllocator &VNInfoAllocator) {
   if (Clobbers.empty()) return;
   
@@ -638,20 +612,20 @@ void LiveInterval::MergeInClobberRanges(const LiveInterval &Clobbers,
       ClobberValNo = UnusedValNo;
     else {
       UnusedValNo = ClobberValNo =
-        getNextValue(MachineInstrIndex(), 0, false, VNInfoAllocator);
+        getNextValue(li_.getInvalidIndex(), 0, false, VNInfoAllocator);
       ValNoMaps.insert(std::make_pair(I->valno, ClobberValNo));
     }
 
     bool Done = false;
-    MachineInstrIndex Start = I->start, End = I->end;
+    SlotIndex Start = I->start, End = I->end;
     // If a clobber range starts before an existing range and ends after
     // it, the clobber range will need to be split into multiple ranges.
     // Loop until the entire clobber range is handled.
     while (!Done) {
       Done = true;
       IP = std::upper_bound(IP, end(), Start);
-      MachineInstrIndex SubRangeStart = Start;
-      MachineInstrIndex SubRangeEnd = End;
+      SlotIndex SubRangeStart = Start;
+      SlotIndex SubRangeEnd = End;
 
       // If the start of this range overlaps with an existing liverange, trim it.
       if (IP != begin() && IP[-1].end > SubRangeStart) {
@@ -687,13 +661,14 @@ void LiveInterval::MergeInClobberRanges(const LiveInterval &Clobbers,
   }
 }
 
-void LiveInterval::MergeInClobberRange(MachineInstrIndex Start,
-                                       MachineInstrIndex End,
+void LiveInterval::MergeInClobberRange(LiveIntervals &li_,
+                                       SlotIndex Start,
+                                       SlotIndex End,
                                        BumpPtrAllocator &VNInfoAllocator) {
   // Find a value # to use for the clobber ranges.  If there is already a value#
   // for unknown values, use it.
   VNInfo *ClobberValNo =
-    getNextValue(MachineInstrIndex(), 0, false, VNInfoAllocator);
+    getNextValue(li_.getInvalidIndex(), 0, false, VNInfoAllocator);
   
   iterator IP = begin();
   IP = std::upper_bound(IP, end(), Start);
@@ -881,8 +856,6 @@ void LiveInterval::print(raw_ostream &OS, const TargetRegisterInfo *TRI) const {
           OS << "-(";
           for (unsigned j = 0; j != ee; ++j) {
             OS << vni->kills[j];
-            if (vni->kills[j].isPHIIndex())
-              OS << "*";
             if (j != ee-1)
               OS << " ";
           }
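
(Not part of the patch: liveAt(), overlaps() and FindLiveRangeContaining()
above all share one idiom: ranges are kept sorted by start index, upper_bound
finds the first range starting strictly after the query, and the candidate is
the range just before it. A minimal standalone sketch of that query, with a
hypothetical Range type standing in for LiveRange and a plain unsigned
standing in for SlotIndex:)

    #include <algorithm>
    #include <vector>

    struct Range { unsigned start, end; }; // half-open: [start, end)

    // Comparator upper_bound needs for (value < element).
    bool operator<(unsigned idx, const Range &r) { return idx < r.start; }

    bool liveAt(const std::vector<Range> &ranges, unsigned idx) {
      // First range whose start is strictly greater than idx.
      std::vector<Range>::const_iterator r =
          std::upper_bound(ranges.begin(), ranges.end(), idx);
      if (r == ranges.begin())
        return false; // idx precedes every range
      --r;            // last range starting at or before idx
      return idx < r->end;
    }
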
diff --git a/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp b/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp
index 4f4bb9b..4412c1b 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp
@@ -28,7 +28,7 @@
 #include "llvm/CodeGen/MachineMemOperand.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
 #include "llvm/CodeGen/Passes.h"
-#include "llvm/CodeGen/PseudoSourceValue.h"
+#include "llvm/CodeGen/ProcessImplicitDefs.h"
 #include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetMachine.h"
@@ -50,20 +50,12 @@ using namespace llvm;
 static cl::opt<bool> DisableReMat("disable-rematerialization", 
                                   cl::init(false), cl::Hidden);
 
-static cl::opt<bool> EnableAggressiveRemat("aggressive-remat", cl::Hidden);
-
 static cl::opt<bool> EnableFastSpilling("fast-spill",
                                         cl::init(false), cl::Hidden);
 
-static cl::opt<bool> EarlyCoalescing("early-coalescing", cl::init(false));
-
-static cl::opt<int> CoalescingLimit("early-coalescing-limit",
-                                    cl::init(-1), cl::Hidden);
-
 STATISTIC(numIntervals , "Number of original intervals");
 STATISTIC(numFolds     , "Number of loads/stores folded into instructions");
 STATISTIC(numSplits    , "Number of intervals split");
-STATISTIC(numCoalescing, "Number of early coalescing performed");
 
 char LiveIntervals::ID = 0;
 static RegisterPass<LiveIntervals> X("liveintervals", "Live Interval Analysis");
@@ -83,6 +75,10 @@ void LiveIntervals::getAnalysisUsage(AnalysisUsage &AU) const {
   }
   
   AU.addRequiredID(TwoAddressInstructionPassID);
+  AU.addPreserved<ProcessImplicitDefs>();
+  AU.addRequired<ProcessImplicitDefs>();
+  AU.addPreserved<SlotIndexes>();
+  AU.addRequiredTransitive<SlotIndexes>();
   MachineFunctionPass::getAnalysisUsage(AU);
 }
 
@@ -92,13 +88,7 @@ void LiveIntervals::releaseMemory() {
        E = r2iMap_.end(); I != E; ++I)
     delete I->second;
   
-  MBB2IdxMap.clear();
-  Idx2MBBMap.clear();
-  mi2iMap_.clear();
-  i2miMap_.clear();
   r2iMap_.clear();
-  terminatorGaps.clear();
-  phiJoinCopies.clear();
 
   // Release VNInfo memory regions after all VNInfo objects are dtor'd.
   VNInfoAllocator.Reset();
@@ -109,422 +99,6 @@ void LiveIntervals::releaseMemory() {
   }
 }
 
-static bool CanTurnIntoImplicitDef(MachineInstr *MI, unsigned Reg,
-                                   unsigned OpIdx, const TargetInstrInfo *tii_){
-  unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
-  if (tii_->isMoveInstr(*MI, SrcReg, DstReg, SrcSubReg, DstSubReg) &&
-      Reg == SrcReg)
-    return true;
-
-  if (OpIdx == 2 && MI->getOpcode() == TargetInstrInfo::SUBREG_TO_REG)
-    return true;
-  if (OpIdx == 1 && MI->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG)
-    return true;
-  return false;
-}
-
-/// processImplicitDefs - Process IMPLICIT_DEF instructions and make sure
-/// there is one implicit_def for each use. Add isUndef marker to
-/// implicit_def defs and their uses.
-void LiveIntervals::processImplicitDefs() {
-  SmallSet<unsigned, 8> ImpDefRegs;
-  SmallVector<MachineInstr*, 8> ImpDefMIs;
-  MachineBasicBlock *Entry = mf_->begin();
-  SmallPtrSet<MachineBasicBlock*,16> Visited;
-  for (df_ext_iterator<MachineBasicBlock*, SmallPtrSet<MachineBasicBlock*,16> >
-         DFI = df_ext_begin(Entry, Visited), E = df_ext_end(Entry, Visited);
-       DFI != E; ++DFI) {
-    MachineBasicBlock *MBB = *DFI;
-    for (MachineBasicBlock::iterator I = MBB->begin(), E = MBB->end();
-         I != E; ) {
-      MachineInstr *MI = &*I;
-      ++I;
-      if (MI->getOpcode() == TargetInstrInfo::IMPLICIT_DEF) {
-        unsigned Reg = MI->getOperand(0).getReg();
-        ImpDefRegs.insert(Reg);
-        if (TargetRegisterInfo::isPhysicalRegister(Reg)) {
-          for (const unsigned *SS = tri_->getSubRegisters(Reg); *SS; ++SS)
-            ImpDefRegs.insert(*SS);
-        }
-        ImpDefMIs.push_back(MI);
-        continue;
-      }
-
-      if (MI->getOpcode() == TargetInstrInfo::INSERT_SUBREG) {
-        MachineOperand &MO = MI->getOperand(2);
-        if (ImpDefRegs.count(MO.getReg())) {
-          // %reg1032<def> = INSERT_SUBREG %reg1032, undef, 2
-          // This is an identity copy; eliminate it now.
-          if (MO.isKill()) {
-            LiveVariables::VarInfo& vi = lv_->getVarInfo(MO.getReg());
-            vi.removeKill(MI);
-          }
-          MI->eraseFromParent();
-          continue;
-        }
-      }
-
-      bool ChangedToImpDef = false;
-      for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
-        MachineOperand& MO = MI->getOperand(i);
-        if (!MO.isReg() || !MO.isUse() || MO.isUndef())
-          continue;
-        unsigned Reg = MO.getReg();
-        if (!Reg)
-          continue;
-        if (!ImpDefRegs.count(Reg))
-          continue;
-        // Use is a copy, just turn it into an implicit_def.
-        if (CanTurnIntoImplicitDef(MI, Reg, i, tii_)) {
-          bool isKill = MO.isKill();
-          MI->setDesc(tii_->get(TargetInstrInfo::IMPLICIT_DEF));
-          for (int j = MI->getNumOperands() - 1, ee = 0; j > ee; --j)
-            MI->RemoveOperand(j);
-          if (isKill) {
-            ImpDefRegs.erase(Reg);
-            LiveVariables::VarInfo& vi = lv_->getVarInfo(Reg);
-            vi.removeKill(MI);
-          }
-          ChangedToImpDef = true;
-          break;
-        }
-
-        MO.setIsUndef();
-        if (MO.isKill() || MI->isRegTiedToDefOperand(i)) {
-          // Make sure other uses of the same register are also marked undef.
-          for (unsigned j = i+1; j != e; ++j) {
-            MachineOperand &MOJ = MI->getOperand(j);
-            if (MOJ.isReg() && MOJ.isUse() && MOJ.getReg() == Reg)
-              MOJ.setIsUndef();
-          }
-          ImpDefRegs.erase(Reg);
-        }
-      }
-
-      if (ChangedToImpDef) {
-        // Backtrack to process this new implicit_def.
-        --I;
-      } else {
-        for (unsigned i = 0; i != MI->getNumOperands(); ++i) {
-          MachineOperand& MO = MI->getOperand(i);
-          if (!MO.isReg() || !MO.isDef())
-            continue;
-          ImpDefRegs.erase(MO.getReg());
-        }
-      }
-    }
-
-    // Any outstanding liveout implicit_def's?
-    for (unsigned i = 0, e = ImpDefMIs.size(); i != e; ++i) {
-      MachineInstr *MI = ImpDefMIs[i];
-      unsigned Reg = MI->getOperand(0).getReg();
-      if (TargetRegisterInfo::isPhysicalRegister(Reg) ||
-          !ImpDefRegs.count(Reg)) {
-        // Delete all "local" implicit_def's. That include those which define
-        // physical registers since they cannot be liveout.
-        MI->eraseFromParent();
-        continue;
-      }
-
-      // If there are multiple defs of the same register and at least one
-      // is not an implicit_def, do not insert implicit_def's before the
-      // uses.
-      bool Skip = false;
-      for (MachineRegisterInfo::def_iterator DI = mri_->def_begin(Reg),
-             DE = mri_->def_end(); DI != DE; ++DI) {
-        if (DI->getOpcode() != TargetInstrInfo::IMPLICIT_DEF) {
-          Skip = true;
-          break;
-        }
-      }
-      if (Skip)
-        continue;
-
-      // The only implicit_def's we want to keep are those that are live
-      // out of their blocks.
-      MI->eraseFromParent();
-
-      for (MachineRegisterInfo::use_iterator UI = mri_->use_begin(Reg),
-             UE = mri_->use_end(); UI != UE; ) {
-        MachineOperand &RMO = UI.getOperand();
-        MachineInstr *RMI = &*UI;
-        ++UI;
-        MachineBasicBlock *RMBB = RMI->getParent();
-        if (RMBB == MBB)
-          continue;
-
-        // Turn a copy use into an implicit_def.
-        unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
-        if (tii_->isMoveInstr(*RMI, SrcReg, DstReg, SrcSubReg, DstSubReg) &&
-            Reg == SrcReg) {
-          RMI->setDesc(tii_->get(TargetInstrInfo::IMPLICIT_DEF));
-          for (int j = RMI->getNumOperands() - 1, ee = 0; j > ee; --j)
-            RMI->RemoveOperand(j);
-          continue;
-        }
-
-        const TargetRegisterClass* RC = mri_->getRegClass(Reg);
-        unsigned NewVReg = mri_->createVirtualRegister(RC);
-        RMO.setReg(NewVReg);
-        RMO.setIsUndef();
-        RMO.setIsKill();
-      }
-    }
-    ImpDefRegs.clear();
-    ImpDefMIs.clear();
-  }
-}
-
-
-void LiveIntervals::computeNumbering() {
-  Index2MiMap OldI2MI = i2miMap_;
-  std::vector<IdxMBBPair> OldI2MBB = Idx2MBBMap;
-  
-  Idx2MBBMap.clear();
-  MBB2IdxMap.clear();
-  mi2iMap_.clear();
-  i2miMap_.clear();
-  terminatorGaps.clear();
-  phiJoinCopies.clear();
-  
-  FunctionSize = 0;
-  
-  // Number MachineInstrs and MachineBasicBlocks.
-  // Initialize MBB indexes to a sentinel.
-  MBB2IdxMap.resize(mf_->getNumBlockIDs(),
-                    std::make_pair(MachineInstrIndex(),MachineInstrIndex()));
-  
-  MachineInstrIndex MIIndex;
-  for (MachineFunction::iterator MBB = mf_->begin(), E = mf_->end();
-       MBB != E; ++MBB) {
-    MachineInstrIndex StartIdx = MIIndex;
-
-    // Insert an empty slot at the beginning of each block.
-    MIIndex = getNextIndex(MIIndex);
-    i2miMap_.push_back(0);
-
-    for (MachineBasicBlock::iterator I = MBB->begin(), E = MBB->end();
-         I != E; ++I) {
-      
-      if (I == MBB->getFirstTerminator()) {
-        // Leave a gap before the terminators; this is where we will point
-        // PHI kills.
-        MachineInstrIndex tGap(true, MIIndex);
-        bool inserted =
-          terminatorGaps.insert(std::make_pair(&*MBB, tGap)).second;
-        assert(inserted && 
-               "Multiple 'first' terminators encountered during numbering.");
-        inserted = inserted; // Avoid compiler warning if assertions turned off.
-        i2miMap_.push_back(0);
-
-        MIIndex = getNextIndex(MIIndex);
-      }
-
-      bool inserted = mi2iMap_.insert(std::make_pair(I, MIIndex)).second;
-      assert(inserted && "multiple MachineInstr -> index mappings");
-      inserted = true;
-      i2miMap_.push_back(I);
-      MIIndex = getNextIndex(MIIndex);
-      FunctionSize++;
-      
-      // Insert max(1, numdefs) empty slots after every instruction.
-      unsigned Slots = I->getDesc().getNumDefs();
-      if (Slots == 0)
-        Slots = 1;
-      while (Slots--) {
-        MIIndex = getNextIndex(MIIndex);
-        i2miMap_.push_back(0);
-      }
-
-    }
-  
-    if (MBB->getFirstTerminator() == MBB->end()) {
-      // Leave a gap before the terminators; this is where we will point
-      // PHI kills.
-      MachineInstrIndex tGap(true, MIIndex);
-      bool inserted =
-        terminatorGaps.insert(std::make_pair(&*MBB, tGap)).second;
-      assert(inserted && 
-             "Multiple 'first' terminators encountered during numbering.");
-      inserted = inserted; // Avoid compiler warning if assertions turned off.
-      i2miMap_.push_back(0);
- 
-      MIIndex = getNextIndex(MIIndex);
-    }
-    
-    // Set the MBB2IdxMap entry for this MBB.
-    MBB2IdxMap[MBB->getNumber()] = std::make_pair(StartIdx, getPrevSlot(MIIndex));
-    Idx2MBBMap.push_back(std::make_pair(StartIdx, MBB));
-  }
-
-  std::sort(Idx2MBBMap.begin(), Idx2MBBMap.end(), Idx2MBBCompare());
-  
-  if (!OldI2MI.empty())
-    for (iterator OI = begin(), OE = end(); OI != OE; ++OI) {
-      for (LiveInterval::iterator LI = OI->second->begin(),
-           LE = OI->second->end(); LI != LE; ++LI) {
-        
-        // Remap the start index of the live range to the corresponding new
-        // number, or our best guess at what it _should_ correspond to if the
-        // original instruction has been erased.  This is either the following
-        // instruction or its predecessor.
-        unsigned index = LI->start.getVecIndex();
-        MachineInstrIndex::Slot offset = LI->start.getSlot();
-        if (LI->start.isLoad()) {
-          std::vector<IdxMBBPair>::const_iterator I =
-                  std::lower_bound(OldI2MBB.begin(), OldI2MBB.end(), LI->start);
-          // Take the pair containing the index
-          std::vector<IdxMBBPair>::const_iterator J =
-                    (I == OldI2MBB.end() && OldI2MBB.size()>0) ? (I-1): I;
-          
-          LI->start = getMBBStartIdx(J->second);
-        } else {
-          LI->start = MachineInstrIndex(
-            MachineInstrIndex(mi2iMap_[OldI2MI[index]]), 
-                              (MachineInstrIndex::Slot)offset);
-        }
-        
-        // Remap the ending index in the same way that we remapped the start,
-        // except for the final step where we always map to the immediately
-        // following instruction.
-        index = (getPrevSlot(LI->end)).getVecIndex();
-        offset  = LI->end.getSlot();
-        if (LI->end.isLoad()) {
-          // VReg dies at end of block.
-          std::vector<IdxMBBPair>::const_iterator I =
-                  std::lower_bound(OldI2MBB.begin(), OldI2MBB.end(), LI->end);
-          --I;
-          
-          LI->end = getNextSlot(getMBBEndIdx(I->second));
-        } else {
-          unsigned idx = index;
-          while (index < OldI2MI.size() && !OldI2MI[index]) ++index;
-          
-          if (index != OldI2MI.size())
-            LI->end =
-              MachineInstrIndex(mi2iMap_[OldI2MI[index]],
-                (idx == index ? offset : MachineInstrIndex::LOAD));
-          else
-            LI->end =
-              MachineInstrIndex(MachineInstrIndex::NUM * i2miMap_.size());
-        }
-      }
-      
-      for (LiveInterval::vni_iterator VNI = OI->second->vni_begin(),
-           VNE = OI->second->vni_end(); VNI != VNE; ++VNI) { 
-        VNInfo* vni = *VNI;
-        
-        // Remap the VNInfo def index, which works the same as the
-        // start indices above. VN's with special sentinel defs
-        // don't need to be remapped.
-        if (vni->isDefAccurate() && !vni->isUnused()) {
-          unsigned index = vni->def.getVecIndex();
-          MachineInstrIndex::Slot offset = vni->def.getSlot();
-          if (vni->def.isLoad()) {
-            std::vector<IdxMBBPair>::const_iterator I =
-                  std::lower_bound(OldI2MBB.begin(), OldI2MBB.end(), vni->def);
-            // Take the pair containing the index
-            std::vector<IdxMBBPair>::const_iterator J =
-                    (I == OldI2MBB.end() && OldI2MBB.size()>0) ? (I-1): I;
-          
-            vni->def = getMBBStartIdx(J->second);
-          } else {
-            vni->def = MachineInstrIndex(mi2iMap_[OldI2MI[index]], offset);
-          }
-        }
-        
-        // Remap the VNInfo kill indices, which works the same as
-        // the end indices above.
-        for (size_t i = 0; i < vni->kills.size(); ++i) {
-          unsigned index = getPrevSlot(vni->kills[i]).getVecIndex();
-          MachineInstrIndex::Slot offset = vni->kills[i].getSlot();
-
-          if (vni->kills[i].isLoad()) {
-            assert("Value killed at a load slot.");
-            /*std::vector<IdxMBBPair>::const_iterator I =
-             std::lower_bound(OldI2MBB.begin(), OldI2MBB.end(), vni->kills[i]);
-            --I;
-
-            vni->kills[i] = getMBBEndIdx(I->second);*/
-          } else {
-            if (vni->kills[i].isPHIIndex()) {
-              std::vector<IdxMBBPair>::const_iterator I =
-                std::lower_bound(OldI2MBB.begin(), OldI2MBB.end(), vni->kills[i]);
-              --I;
-              vni->kills[i] = terminatorGaps[I->second];  
-            } else {
-              assert(OldI2MI[index] != 0 &&
-                     "Kill refers to instruction not present in index maps.");
-              vni->kills[i] = MachineInstrIndex(mi2iMap_[OldI2MI[index]], offset);
-            }
-           
-            /*
-            unsigned idx = index;
-            while (index < OldI2MI.size() && !OldI2MI[index]) ++index;
-            
-            if (index != OldI2MI.size())
-              vni->kills[i] = mi2iMap_[OldI2MI[index]] + 
-                              (idx == index ? offset : 0);
-            else
-              vni->kills[i] = InstrSlots::NUM * i2miMap_.size();
-            */
-          }
-        }
-      }
-    }
-}
-
-void LiveIntervals::scaleNumbering(int factor) {
-  // Need to:
-  //  * scale MBB begin and end points,
-  //  * scale all ranges,
-  //  * update VNI structures,
-  //  * scale instruction numberings.
-
-  // Scale the MBB indices.
-  Idx2MBBMap.clear();
-  for (MachineFunction::iterator MBB = mf_->begin(), MBBE = mf_->end();
-       MBB != MBBE; ++MBB) {
-    std::pair<MachineInstrIndex, MachineInstrIndex> &mbbIndices = MBB2IdxMap[MBB->getNumber()];
-    mbbIndices.first = mbbIndices.first.scale(factor);
-    mbbIndices.second = mbbIndices.second.scale(factor);
-    Idx2MBBMap.push_back(std::make_pair(mbbIndices.first, MBB)); 
-  }
-  std::sort(Idx2MBBMap.begin(), Idx2MBBMap.end(), Idx2MBBCompare());
-
-  // Scale terminator gaps.
-  for (DenseMap<MachineBasicBlock*, MachineInstrIndex>::iterator
-       TGI = terminatorGaps.begin(), TGE = terminatorGaps.end();
-       TGI != TGE; ++TGI) {
-    terminatorGaps[TGI->first] = TGI->second.scale(factor);
-  }
-
-  // Scale the intervals.
-  for (iterator LI = begin(), LE = end(); LI != LE; ++LI) {
-    LI->second->scaleNumbering(factor);
-  }
-
-  // Scale MachineInstrs.
-  Mi2IndexMap oldmi2iMap = mi2iMap_;
-  MachineInstrIndex highestSlot;
-  for (Mi2IndexMap::iterator MI = oldmi2iMap.begin(), ME = oldmi2iMap.end();
-       MI != ME; ++MI) {
-    MachineInstrIndex newSlot = MI->second.scale(factor);
-    mi2iMap_[MI->first] = newSlot;
-    highestSlot = std::max(highestSlot, newSlot); 
-  }
-
-  unsigned highestVIndex = highestSlot.getVecIndex();
-  i2miMap_.clear();
-  i2miMap_.resize(highestVIndex + 1);
-  for (Mi2IndexMap::iterator MI = mi2iMap_.begin(), ME = mi2iMap_.end();
-       MI != ME; ++MI) {
-    i2miMap_[MI->second.getVecIndex()] = const_cast<MachineInstr *>(MI->first);
-  }
-
-}
-
-
 /// runOnMachineFunction - Register allocate the whole function
 ///
 bool LiveIntervals::runOnMachineFunction(MachineFunction &fn) {
@@ -535,12 +109,10 @@ bool LiveIntervals::runOnMachineFunction(MachineFunction &fn) {
   tii_ = tm_->getInstrInfo();
   aa_ = &getAnalysis<AliasAnalysis>();
   lv_ = &getAnalysis<LiveVariables>();
+  indexes_ = &getAnalysis<SlotIndexes>();
   allocatableRegs_ = tri_->getAllocatableSet(fn);
 
-  processImplicitDefs();
-  computeNumbering();
   computeIntervals();
-  performEarlyCoalescing();
 
   numIntervals += getNumIntervals();
 
@@ -564,7 +136,8 @@ void LiveIntervals::printInstrs(raw_ostream &OS) const {
 
   for (MachineFunction::iterator mbbi = mf_->begin(), mbbe = mf_->end();
        mbbi != mbbe; ++mbbi) {
-    OS << ((Value*)mbbi->getBasicBlock())->getName() << ":\n";
+    OS << "BB#" << mbbi->getNumber()
+       << ":\t\t# derived from " << mbbi->getName() << "\n";
     for (MachineBasicBlock::iterator mii = mbbi->begin(),
            mie = mbbi->end(); mii != mie; ++mii) {
       OS << getInstructionIndex(mii) << '\t' << *mii;
@@ -582,12 +155,13 @@ bool LiveIntervals::conflictsWithPhysRegDef(const LiveInterval &li,
                                             VirtRegMap &vrm, unsigned reg) {
   for (LiveInterval::Ranges::const_iterator
          I = li.ranges.begin(), E = li.ranges.end(); I != E; ++I) {
-    for (MachineInstrIndex index = getBaseIndex(I->start),
-           end = getNextIndex(getBaseIndex(getPrevSlot(I->end))); index != end;
-         index = getNextIndex(index)) {
+    for (SlotIndex index = I->start.getBaseIndex(),
+           end = I->end.getPrevSlot().getBaseIndex().getNextIndex();
+           index != end;
+           index = index.getNextIndex()) {
       // skip deleted instructions
       while (index != end && !getInstructionFromIndex(index))
-        index = getNextIndex(index);
+        index = index.getNextIndex();
       if (index == end) break;
 
       MachineInstr *MI = getInstructionFromIndex(index);
@@ -623,16 +197,17 @@ bool LiveIntervals::conflictsWithPhysRegRef(LiveInterval &li,
                                   SmallPtrSet<MachineInstr*,32> &JoinedCopies) {
   for (LiveInterval::Ranges::const_iterator
          I = li.ranges.begin(), E = li.ranges.end(); I != E; ++I) {
-    for (MachineInstrIndex index = getBaseIndex(I->start),
-           end = getNextIndex(getBaseIndex(getPrevSlot(I->end))); index != end;
-         index = getNextIndex(index)) {
+    for (SlotIndex index = I->start.getBaseIndex(),
+           end = I->end.getPrevSlot().getBaseIndex().getNextIndex();
+           index != end;
+           index = index.getNextIndex()) {
       // Skip deleted instructions.
       MachineInstr *MI = 0;
       while (index != end) {
         MI = getInstructionFromIndex(index);
         if (MI)
           break;
-        index = getNextIndex(index);
+        index = index.getNextIndex();
       }
       if (index == end) break;
 
@@ -667,7 +242,7 @@ static void printRegName(unsigned reg, const TargetRegisterInfo* tri_) {
 
 void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
                                              MachineBasicBlock::iterator mi,
-                                             MachineInstrIndex MIIdx,
+                                             SlotIndex MIIdx,
                                              MachineOperand& MO,
                                              unsigned MOIdx,
                                              LiveInterval &interval) {
@@ -683,11 +258,11 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
   LiveVariables::VarInfo& vi = lv_->getVarInfo(interval.reg);
   if (interval.empty()) {
     // Get the Idx of the defining instruction.
-    MachineInstrIndex defIndex = getDefIndex(MIIdx);
+    SlotIndex defIndex = MIIdx.getDefIndex();
     // Earlyclobbers move back one, so that they overlap the live range
     // of inputs.
     if (MO.isEarlyClobber())
-      defIndex = getUseIndex(MIIdx);
+      defIndex = MIIdx.getUseIndex();
     VNInfo *ValNo;
     MachineInstr *CopyMI = NULL;
     unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
@@ -707,16 +282,11 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
     // will be a single kill, in MBB, which comes after the definition.
     if (vi.Kills.size() == 1 && vi.Kills[0]->getParent() == mbb) {
       // FIXME: what about dead vars?
-      MachineInstrIndex killIdx;
+      SlotIndex killIdx;
       if (vi.Kills[0] != mi)
-        killIdx = getNextSlot(getUseIndex(getInstructionIndex(vi.Kills[0])));
-      else if (MO.isEarlyClobber())
-        // Earlyclobbers that die in this instruction move up one extra, to
-        // compensate for having the starting point moved back one.  This
-        // gets them to overlap the live range of other outputs.
-        killIdx = getNextSlot(getNextSlot(defIndex));
+        killIdx = getInstructionIndex(vi.Kills[0]).getDefIndex();
       else
-        killIdx = getNextSlot(defIndex);
+        killIdx = defIndex.getStoreIndex();
 
       // If the kill happens after the definition, we have an intra-block
       // live range.
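
(Not part of the patch: the killIdx/defIndex rewrites in this and the
surrounding hunks lean on the new SlotIndex scheme, where each instruction
owns four consecutive sub-slots: load, use, def, store. A toy model of that
arithmetic, under the assumption of a flat numbering; the real SlotIndexes
pass is list-based, so this is an illustration only, with no bounds checks.)

    struct ToySlotIndex {
      enum Slot { LOAD = 0, USE = 1, DEF = 2, STORE = 3 };
      unsigned raw; // instruction number * 4 + sub-slot

      // Start of this instruction's group of four sub-slots.
      ToySlotIndex getBaseIndex() const {
        ToySlotIndex r = { raw - raw % 4 }; return r;
      }
      ToySlotIndex getUseIndex() const {
        ToySlotIndex r = { getBaseIndex().raw + USE }; return r;
      }
      ToySlotIndex getDefIndex() const {
        ToySlotIndex r = { getBaseIndex().raw + DEF }; return r;
      }
      ToySlotIndex getStoreIndex() const {
        ToySlotIndex r = { getBaseIndex().raw + STORE }; return r;
      }
      ToySlotIndex getNextIndex() const {
        ToySlotIndex r = { getBaseIndex().raw + 4 }; return r;
      }
      ToySlotIndex getPrevSlot() const {
        ToySlotIndex r = { raw - 1 }; return r;
      }
    };
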
@@ -735,7 +305,8 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
     // of the defining block, potentially live across some blocks, then is
     // live into some number of blocks, but gets killed.  Start by adding a
     // range that goes from this definition to the end of the defining block.
-    LiveRange NewLR(defIndex, getNextSlot(getMBBEndIdx(mbb)), ValNo);
+    LiveRange NewLR(defIndex, getMBBEndIdx(mbb).getNextIndex().getLoadIndex(),
+                    ValNo);
     DEBUG(errs() << " +" << NewLR);
     interval.addRange(NewLR);
 
@@ -744,9 +315,10 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
     // live interval.
     for (SparseBitVector<>::iterator I = vi.AliveBlocks.begin(), 
              E = vi.AliveBlocks.end(); I != E; ++I) {
-      LiveRange LR(getMBBStartIdx(*I),
-                   getNextSlot(getMBBEndIdx(*I)),  // MBB ends at -1.
-                   ValNo);
+      LiveRange LR(
+          getMBBStartIdx(mf_->getBlockNumbered(*I)),
+          getMBBEndIdx(mf_->getBlockNumbered(*I)).getNextIndex().getLoadIndex(),
+          ValNo);
       interval.addRange(LR);
       DEBUG(errs() << " +" << LR);
     }
@@ -755,8 +327,8 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
     // block to the 'use' slot of the killing instruction.
     for (unsigned i = 0, e = vi.Kills.size(); i != e; ++i) {
       MachineInstr *Kill = vi.Kills[i];
-      MachineInstrIndex killIdx =
-        getNextSlot(getUseIndex(getInstructionIndex(Kill)));
+      SlotIndex killIdx =
+        getInstructionIndex(Kill).getDefIndex();
       LiveRange LR(getMBBStartIdx(Kill->getParent()), killIdx, ValNo);
       interval.addRange(LR);
       ValNo->addKill(killIdx);
@@ -775,13 +347,13 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
       // need to take the LiveRegion that defines this register and split it
       // into two values.
       assert(interval.containsOneValue());
-      MachineInstrIndex DefIndex = getDefIndex(interval.getValNumInfo(0)->def);
-      MachineInstrIndex RedefIndex = getDefIndex(MIIdx);
+      SlotIndex DefIndex = interval.getValNumInfo(0)->def.getDefIndex();
+      SlotIndex RedefIndex = MIIdx.getDefIndex();
       if (MO.isEarlyClobber())
-        RedefIndex = getUseIndex(MIIdx);
+        RedefIndex = MIIdx.getUseIndex();
 
       const LiveRange *OldLR =
-        interval.getLiveRangeContaining(getPrevSlot(RedefIndex));
+        interval.getLiveRangeContaining(RedefIndex.getUseIndex());
       VNInfo *OldValNo = OldLR->valno;
 
       // Delete the initial value, which should be short and continuous,
@@ -814,10 +386,8 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
       // If this redefinition is dead, we need to add a dummy unit live
       // range covering the def slot.
       if (MO.isDead())
-        interval.addRange(
-          LiveRange(RedefIndex, MO.isEarlyClobber() ?
-                                getNextSlot(getNextSlot(RedefIndex)) :
-                                getNextSlot(RedefIndex), OldValNo));
+        interval.addRange(LiveRange(RedefIndex, RedefIndex.getStoreIndex(),
+                                    OldValNo));
 
       DEBUG({
           errs() << " RESULT: ";
@@ -831,10 +401,8 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
         // Remove the old range that we now know has an incorrect number.
         VNInfo *VNI = interval.getValNumInfo(0);
         MachineInstr *Killer = vi.Kills[0];
-        phiJoinCopies.push_back(Killer);
-        MachineInstrIndex Start = getMBBStartIdx(Killer->getParent());
-        MachineInstrIndex End =
-          getNextSlot(getUseIndex(getInstructionIndex(Killer)));
+        SlotIndex Start = getMBBStartIdx(Killer->getParent());
+        SlotIndex End = getInstructionIndex(Killer).getDefIndex();
         DEBUG({
             errs() << " Removing [" << Start << "," << End << "] from: ";
             interval.print(errs(), tri_);
@@ -844,7 +412,7 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
         assert(interval.ranges.size() == 1 &&
                "Newly discovered PHI interval has >1 ranges.");
         MachineBasicBlock *killMBB = getMBBFromIndex(interval.endIndex());
-        VNI->addKill(terminatorGaps[killMBB]);
+        VNI->addKill(indexes_->getTerminatorGap(killMBB));
         VNI->setHasPHIKill(true);
         DEBUG({
             errs() << " RESULT: ";
@@ -854,8 +422,8 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
         // Replace the interval with one of a NEW value number.  Note that this
         // value number isn't actually defined by an instruction, weird huh? :)
         LiveRange LR(Start, End,
-          interval.getNextValue(MachineInstrIndex(mbb->getNumber()),
-                                0, false, VNInfoAllocator));
+                     interval.getNextValue(SlotIndex(getMBBStartIdx(mbb), true),
+                       0, false, VNInfoAllocator));
         LR.valno->setIsPHIDef(true);
         DEBUG(errs() << " replace range with " << LR);
         interval.addRange(LR);
@@ -869,9 +437,9 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
       // In the case of PHI elimination, each variable definition is only
       // live until the end of the block.  We've already taken care of the
       // rest of the live range.
-      MachineInstrIndex defIndex = getDefIndex(MIIdx);
+      SlotIndex defIndex = MIIdx.getDefIndex();
       if (MO.isEarlyClobber())
-        defIndex = getUseIndex(MIIdx);
+        defIndex = MIIdx.getUseIndex();
 
       VNInfo *ValNo;
       MachineInstr *CopyMI = NULL;
@@ -883,10 +451,10 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
         CopyMI = mi;
       ValNo = interval.getNextValue(defIndex, CopyMI, true, VNInfoAllocator);
       
-      MachineInstrIndex killIndex = getNextSlot(getMBBEndIdx(mbb));
+      SlotIndex killIndex = getMBBEndIdx(mbb).getNextIndex().getLoadIndex();
       LiveRange LR(defIndex, killIndex, ValNo);
       interval.addRange(LR);
-      ValNo->addKill(terminatorGaps[mbb]);
+      ValNo->addKill(indexes_->getTerminatorGap(mbb));
       ValNo->setHasPHIKill(true);
       DEBUG(errs() << " +" << LR);
     }
@@ -897,7 +465,7 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
 
 void LiveIntervals::handlePhysicalRegisterDef(MachineBasicBlock *MBB,
                                               MachineBasicBlock::iterator mi,
-                                              MachineInstrIndex MIIdx,
+                                              SlotIndex MIIdx,
                                               MachineOperand& MO,
                                               LiveInterval &interval,
                                               MachineInstr *CopyMI) {
@@ -908,12 +476,12 @@ void LiveIntervals::handlePhysicalRegisterDef(MachineBasicBlock *MBB,
       printRegName(interval.reg, tri_);
     });
 
-  MachineInstrIndex baseIndex = MIIdx;
-  MachineInstrIndex start = getDefIndex(baseIndex);
+  SlotIndex baseIndex = MIIdx;
+  SlotIndex start = baseIndex.getDefIndex();
   // Earlyclobbers move back one.
   if (MO.isEarlyClobber())
-    start = getUseIndex(MIIdx);
-  MachineInstrIndex end = start;
+    start = MIIdx.getUseIndex();
+  SlotIndex end = start;
 
   // If it is not used after definition, it is considered dead at
   // the instruction defining it. Hence its interval is:
@@ -922,53 +490,51 @@ void LiveIntervals::handlePhysicalRegisterDef(MachineBasicBlock *MBB,
   // For earlyclobbers, the defSlot was pushed back one; the extra advance below compensates.
   if (MO.isDead()) {
     DEBUG(errs() << " dead");
-    if (MO.isEarlyClobber())
-      end = getNextSlot(getNextSlot(start));
-    else
-      end = getNextSlot(start);
+    end = start.getStoreIndex();
     goto exit;
   }
 
   // If it is not dead on definition, it must be killed by a
   // subsequent instruction. Hence its interval is:
   // [defSlot(def), useSlot(kill)+1)
-  baseIndex = getNextIndex(baseIndex);
+  baseIndex = baseIndex.getNextIndex();
   while (++mi != MBB->end()) {
-    while (baseIndex.getVecIndex() < i2miMap_.size() &&
-           getInstructionFromIndex(baseIndex) == 0)
-      baseIndex = getNextIndex(baseIndex);
+
+    if (getInstructionFromIndex(baseIndex) == 0)
+      baseIndex = indexes_->getNextNonNullIndex(baseIndex);
+
     if (mi->killsRegister(interval.reg, tri_)) {
       DEBUG(errs() << " killed");
-      end = getNextSlot(getUseIndex(baseIndex));
+      end = baseIndex.getDefIndex();
       goto exit;
     } else {
       int DefIdx = mi->findRegisterDefOperandIdx(interval.reg, false, tri_);
       if (DefIdx != -1) {
         if (mi->isRegTiedToUseOperand(DefIdx)) {
           // Two-address instruction.
-          end = getDefIndex(baseIndex);
-          if (mi->getOperand(DefIdx).isEarlyClobber())
-            end = getUseIndex(baseIndex);
+          end = baseIndex.getDefIndex();
+          assert(!mi->getOperand(DefIdx).isEarlyClobber() &&
+                 "Two address instruction is an early clobber?"); 
         } else {
           // Another instruction redefines the register before it is ever read.
           // Then the register is essentially dead at the instruction that defines
           // it. Hence its interval is:
           // [defSlot(def), defSlot(def)+1)
           DEBUG(errs() << " dead");
-          end = getNextSlot(start);
+          end = start.getStoreIndex();
         }
         goto exit;
       }
     }
     
-    baseIndex = getNextIndex(baseIndex);
+    baseIndex = baseIndex.getNextIndex();
   }
   
   // The only case where we should have a dead physreg here without a killing
   // instruction, or one where we know it's dead, is if it is live-in to the
   // function and never used. Another possible case is that the implicit use
   // of the physical register has been deleted by the two-address pass.
-  end = getNextSlot(start);
+  end = start.getStoreIndex();
 
 exit:
   assert(start < end && "did not find end of interval?");
@@ -988,7 +554,7 @@ exit:
 
 void LiveIntervals::handleRegisterDef(MachineBasicBlock *MBB,
                                       MachineBasicBlock::iterator MI,
-                                      MachineInstrIndex MIIdx,
+                                      SlotIndex MIIdx,
                                       MachineOperand& MO,
                                       unsigned MOIdx) {
   if (TargetRegisterInfo::isVirtualRegister(MO.getReg()))
@@ -1015,7 +581,7 @@ void LiveIntervals::handleRegisterDef(MachineBasicBlock *MBB,
 }
 
 void LiveIntervals::handleLiveInRegister(MachineBasicBlock *MBB,
-                                         MachineInstrIndex MIIdx,
+                                         SlotIndex MIIdx,
                                          LiveInterval &interval, bool isAlias) {
   DEBUG({
       errs() << "\t\tlivein register: ";
@@ -1025,18 +591,18 @@ void LiveIntervals::handleLiveInRegister(MachineBasicBlock *MBB,
   // Look for kills; if the register reaches a def before it's killed, then
   // it shouldn't be considered a live-in.
   MachineBasicBlock::iterator mi = MBB->begin();
-  MachineInstrIndex baseIndex = MIIdx;
-  MachineInstrIndex start = baseIndex;
-  while (baseIndex.getVecIndex() < i2miMap_.size() && 
-         getInstructionFromIndex(baseIndex) == 0)
-    baseIndex = getNextIndex(baseIndex);
-  MachineInstrIndex end = baseIndex;
+  SlotIndex baseIndex = MIIdx;
+  SlotIndex start = baseIndex;
+  if (getInstructionFromIndex(baseIndex) == 0)
+    baseIndex = indexes_->getNextNonNullIndex(baseIndex);
+
+  SlotIndex end = baseIndex;
   bool SeenDefUse = false;
   
   while (mi != MBB->end()) {
     if (mi->killsRegister(interval.reg, tri_)) {
       DEBUG(errs() << " killed");
-      end = getNextSlot(getUseIndex(baseIndex));
+      end = baseIndex.getDefIndex();
       SeenDefUse = true;
       break;
     } else if (mi->modifiesRegister(interval.reg, tri_)) {
@@ -1045,17 +611,14 @@ void LiveIntervals::handleLiveInRegister(MachineBasicBlock *MBB,
       // it. Hence its interval is:
       // [defSlot(def), defSlot(def)+1)
       DEBUG(errs() << " dead");
-      end = getNextSlot(getDefIndex(start));
+      end = start.getStoreIndex();
       SeenDefUse = true;
       break;
     }
 
-    baseIndex = getNextIndex(baseIndex);
     ++mi;
     if (mi != MBB->end()) {
-      while (baseIndex.getVecIndex() < i2miMap_.size() && 
-             getInstructionFromIndex(baseIndex) == 0)
-        baseIndex = getNextIndex(baseIndex);
+      baseIndex = indexes_->getNextNonNullIndex(baseIndex);
     }
   }
 
@@ -1063,7 +626,7 @@ void LiveIntervals::handleLiveInRegister(MachineBasicBlock *MBB,
   if (!SeenDefUse) {
     if (isAlias) {
       DEBUG(errs() << " dead");
-      end = getNextSlot(getDefIndex(MIIdx));
+      end = MIIdx.getStoreIndex();
     } else {
       DEBUG(errs() << " live through");
       end = baseIndex;
@@ -1071,142 +634,16 @@ void LiveIntervals::handleLiveInRegister(MachineBasicBlock *MBB,
   }
 
   VNInfo *vni =
-    interval.getNextValue(MachineInstrIndex(MBB->getNumber()),
+    interval.getNextValue(SlotIndex(getMBBStartIdx(MBB), true),
                           0, false, VNInfoAllocator);
   vni->setIsPHIDef(true);
   LiveRange LR(start, end, vni);
-  
+
   interval.addRange(LR);
   LR.valno->addKill(end);
   DEBUG(errs() << " +" << LR << '\n');
 }
 
-bool
-LiveIntervals::isProfitableToCoalesce(LiveInterval &DstInt, LiveInterval &SrcInt,
-                                   SmallVector<MachineInstr*,16> &IdentCopies,
-                                   SmallVector<MachineInstr*,16> &OtherCopies) {
-  bool HaveConflict = false;
-  unsigned NumIdent = 0;
-  for (MachineRegisterInfo::def_iterator ri = mri_->def_begin(SrcInt.reg),
-         re = mri_->def_end(); ri != re; ++ri) {
-    MachineInstr *MI = &*ri;
-    unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
-    if (!tii_->isMoveInstr(*MI, SrcReg, DstReg, SrcSubReg, DstSubReg))
-      return false;
-    if (SrcReg != DstInt.reg) {
-      OtherCopies.push_back(MI);
-      HaveConflict |= DstInt.liveAt(getInstructionIndex(MI));
-    } else {
-      IdentCopies.push_back(MI);
-      ++NumIdent;
-    }
-  }
-
-  if (!HaveConflict)
-    return false; // Let coalescer handle it
-  return IdentCopies.size() > OtherCopies.size();
-}
-
-void LiveIntervals::performEarlyCoalescing() {
-  if (!EarlyCoalescing)
-    return;
-
-  /// Perform early coalescing: eliminate copies which feed into phi joins
-  /// and whose sources are defined by the phi joins.
-  for (unsigned i = 0, e = phiJoinCopies.size(); i != e; ++i) {
-    MachineInstr *Join = phiJoinCopies[i];
-    if (CoalescingLimit != -1 && (int)numCoalescing == CoalescingLimit)
-      break;
-
-    unsigned PHISrc, PHIDst, SrcSubReg, DstSubReg;
-    bool isMove= tii_->isMoveInstr(*Join, PHISrc, PHIDst, SrcSubReg, DstSubReg);
-#ifndef NDEBUG
-    assert(isMove && "PHI join instruction must be a move!");
-#else
-    isMove = isMove;
-#endif
-
-    LiveInterval &DstInt = getInterval(PHIDst);
-    LiveInterval &SrcInt = getInterval(PHISrc);
-    SmallVector<MachineInstr*, 16> IdentCopies;
-    SmallVector<MachineInstr*, 16> OtherCopies;
-    if (!isProfitableToCoalesce(DstInt, SrcInt, IdentCopies, OtherCopies))
-      continue;
-
-    DEBUG(errs() << "PHI Join: " << *Join);
-    assert(DstInt.containsOneValue() && "PHI join should have just one val#!");
-    VNInfo *VNI = DstInt.getValNumInfo(0);
-
-    // Change the non-identity copies to directly target the phi destination.
-    for (unsigned i = 0, e = OtherCopies.size(); i != e; ++i) {
-      MachineInstr *PHICopy = OtherCopies[i];
-      DEBUG(errs() << "Moving: " << *PHICopy);
-
-      MachineInstrIndex MIIndex = getInstructionIndex(PHICopy);
-      MachineInstrIndex DefIndex = getDefIndex(MIIndex);
-      LiveRange *SLR = SrcInt.getLiveRangeContaining(DefIndex);
-      MachineInstrIndex StartIndex = SLR->start;
-      MachineInstrIndex EndIndex = SLR->end;
-
-      // Delete val# defined by the now identity copy and add the range from
-      // beginning of the mbb to the end of the range.
-      SrcInt.removeValNo(SLR->valno);
-      DEBUG(errs() << "  added range [" << StartIndex << ','
-            << EndIndex << "] to reg" << DstInt.reg << '\n');
-      if (DstInt.liveAt(StartIndex))
-        DstInt.removeRange(StartIndex, EndIndex);
-      VNInfo *NewVNI = DstInt.getNextValue(DefIndex, PHICopy, true,
-                                           VNInfoAllocator);
-      NewVNI->setHasPHIKill(true);
-      DstInt.addRange(LiveRange(StartIndex, EndIndex, NewVNI));
-      for (unsigned j = 0, ee = PHICopy->getNumOperands(); j != ee; ++j) {
-        MachineOperand &MO = PHICopy->getOperand(j);
-        if (!MO.isReg() || MO.getReg() != PHISrc)
-          continue;
-        MO.setReg(PHIDst);
-      }
-    }
-
-    // Now let's eliminate all the would-be identity copies.
-    for (unsigned i = 0, e = IdentCopies.size(); i != e; ++i) {
-      MachineInstr *PHICopy = IdentCopies[i];
-      DEBUG(errs() << "Coalescing: " << *PHICopy);
-
-      MachineInstrIndex MIIndex = getInstructionIndex(PHICopy);
-      MachineInstrIndex DefIndex = getDefIndex(MIIndex);
-      LiveRange *SLR = SrcInt.getLiveRangeContaining(DefIndex);
-      MachineInstrIndex StartIndex = SLR->start;
-      MachineInstrIndex EndIndex = SLR->end;
-
-      // Delete val# defined by the now identity copy and add the range from
-      // beginning of the mbb to the end of the range.
-      SrcInt.removeValNo(SLR->valno);
-      RemoveMachineInstrFromMaps(PHICopy);
-      PHICopy->eraseFromParent();
-      DEBUG(errs() << "  added range [" << StartIndex << ','
-            << EndIndex << "] to reg" << DstInt.reg << '\n');
-      DstInt.addRange(LiveRange(StartIndex, EndIndex, VNI));
-    }
-
-    // Remove the phi join and update the phi block liveness.
-    MachineInstrIndex MIIndex = getInstructionIndex(Join);
-    MachineInstrIndex UseIndex = getUseIndex(MIIndex);
-    MachineInstrIndex DefIndex = getDefIndex(MIIndex);
-    LiveRange *SLR = SrcInt.getLiveRangeContaining(UseIndex);
-    LiveRange *DLR = DstInt.getLiveRangeContaining(DefIndex);
-    DLR->valno->setCopy(0);
-    DLR->valno->setIsDefAccurate(false);
-    DstInt.addRange(LiveRange(SLR->start, SLR->end, DLR->valno));
-    SrcInt.removeRange(SLR->start, SLR->end);
-    assert(SrcInt.empty());
-    removeInterval(PHISrc);
-    RemoveMachineInstrFromMaps(Join);
-    Join->eraseFromParent();
-
-    ++numCoalescing;
-  }
-}
-
 /// computeIntervals - computes the live intervals for virtual
 /// registers. For some ordering of the machine instructions [1,N] a
 /// live interval is an interval [i, j) where 1 <= i <= j < N for
@@ -1221,8 +658,8 @@ void LiveIntervals::computeIntervals() {
        MBBI != E; ++MBBI) {
     MachineBasicBlock *MBB = MBBI;
     // Track the index of the current machine instr.
-    MachineInstrIndex MIIndex = getMBBStartIdx(MBB);
-    DEBUG(errs() << ((Value*)MBB->getBasicBlock())->getName() << ":\n");
+    SlotIndex MIIndex = getMBBStartIdx(MBB);
+    DEBUG(errs() << MBB->getName() << ":\n");
 
     MachineBasicBlock::iterator MI = MBB->begin(), miEnd = MBB->end();
 
@@ -1238,9 +675,8 @@ void LiveIntervals::computeIntervals() {
     }
     
     // Skip over empty initial indices.
-    while (MIIndex.getVecIndex() < i2miMap_.size() &&
-           getInstructionFromIndex(MIIndex) == 0)
-      MIIndex = getNextIndex(MIIndex);
+    if (getInstructionFromIndex(MIIndex) == 0)
+      MIIndex = indexes_->getNextNonNullIndex(MIIndex);
     
     for (; MI != miEnd; ++MI) {
       DEBUG(errs() << MIIndex << "\t" << *MI);
@@ -1257,19 +693,9 @@ void LiveIntervals::computeIntervals() {
         else if (MO.isUndef())
           UndefUses.push_back(MO.getReg());
       }
-
-      // Skip over the empty slots after each instruction.
-      unsigned Slots = MI->getDesc().getNumDefs();
-      if (Slots == 0)
-        Slots = 1;
-
-      while (Slots--)
-        MIIndex = getNextIndex(MIIndex);
       
-      // Skip over empty indices.
-      while (MIIndex.getVecIndex() < i2miMap_.size() &&
-             getInstructionFromIndex(MIIndex) == 0)
-        MIIndex = getNextIndex(MIIndex);
+      // Move to the next instr slot.
+      MIIndex = indexes_->getNextNonNullIndex(MIIndex);
     }
   }
 
@@ -1282,45 +708,6 @@ void LiveIntervals::computeIntervals() {
   }
 }
 
-bool LiveIntervals::findLiveInMBBs(
-                              MachineInstrIndex Start, MachineInstrIndex End,
-                              SmallVectorImpl<MachineBasicBlock*> &MBBs) const {
-  std::vector<IdxMBBPair>::const_iterator I =
-    std::lower_bound(Idx2MBBMap.begin(), Idx2MBBMap.end(), Start);
-
-  bool ResVal = false;
-  while (I != Idx2MBBMap.end()) {
-    if (I->first >= End)
-      break;
-    MBBs.push_back(I->second);
-    ResVal = true;
-    ++I;
-  }
-  return ResVal;
-}
-
-bool LiveIntervals::findReachableMBBs(
-                              MachineInstrIndex Start, MachineInstrIndex End,
-                              SmallVectorImpl<MachineBasicBlock*> &MBBs) const {
-  std::vector<IdxMBBPair>::const_iterator I =
-    std::lower_bound(Idx2MBBMap.begin(), Idx2MBBMap.end(), Start);
-
-  bool ResVal = false;
-  while (I != Idx2MBBMap.end()) {
-    if (I->first > End)
-      break;
-    MachineBasicBlock *MBB = I->second;
-    if (getMBBEndIdx(MBB) > End)
-      break;
-    for (MachineBasicBlock::succ_iterator SI = MBB->succ_begin(),
-           SE = MBB->succ_end(); SI != SE; ++SI)
-      MBBs.push_back(*SI);
-    ResVal = true;
-    ++I;
-  }
-  return ResVal;
-}
-
 LiveInterval* LiveIntervals::createInterval(unsigned reg) {
   float Weight = TargetRegisterInfo::isPhysicalRegister(reg) ? HUGE_VALF : 0.0F;
   return new LiveInterval(reg, Weight);
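
(Not part of the patch: the removed findLiveInMBBs() above scans a vector of
(start index, block) pairs sorted by start index, using lower_bound to find
the first block starting at or after Start. A minimal standalone sketch of
that scan, with int block ids standing in for MachineBasicBlock*:)

    #include <algorithm>
    #include <utility>
    #include <vector>

    typedef std::pair<unsigned, int> IdxBlockPair; // (block start index, id)

    // Collect every block whose start index falls in [Start, End).
    bool findLiveInBlocks(const std::vector<IdxBlockPair> &idx2Block,
                          unsigned Start, unsigned End,
                          std::vector<int> &out) {
      std::vector<IdxBlockPair>::const_iterator I = std::lower_bound(
          idx2Block.begin(), idx2Block.end(), std::make_pair(Start, 0));
      bool found = false;
      for (; I != idx2Block.end() && I->first < End; ++I) {
        out.push_back(I->second);
        found = true;
      }
      return found;
    }
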
@@ -1392,8 +779,8 @@ unsigned LiveIntervals::getReMatImplicitUse(const LiveInterval &li,
 /// isValNoAvailableAt - Return true if the val# of the specified interval
 /// which reaches the given instruction also reaches the specified use index.
 bool LiveIntervals::isValNoAvailableAt(const LiveInterval &li, MachineInstr *MI,
-                                       MachineInstrIndex UseIdx) const {
-  MachineInstrIndex Index = getInstructionIndex(MI);  
+                                       SlotIndex UseIdx) const {
+  SlotIndex Index = getInstructionIndex(MI);  
   VNInfo *ValNo = li.FindLiveRangeContaining(Index)->valno;
   LiveInterval::const_iterator UI = li.FindLiveRangeContaining(UseIdx);
   return UI != li.end() && UI->valno == ValNo;
@@ -1408,102 +795,19 @@ bool LiveIntervals::isReMaterializable(const LiveInterval &li,
   if (DisableReMat)
     return false;
 
-  if (MI->getOpcode() == TargetInstrInfo::IMPLICIT_DEF)
-    return true;
-
-  int FrameIdx = 0;
-  if (tii_->isLoadFromStackSlot(MI, FrameIdx) &&
-      mf_->getFrameInfo()->isImmutableObjectIndex(FrameIdx))
-    // FIXME: Let target specific isReallyTriviallyReMaterializable determines
-    // this but remember this is not safe to fold into a two-address
-    // instruction.
-    // This is a load from fixed stack slot. It can be rematerialized.
-    return true;
-
-  // If the target-specific rules don't identify an instruction as
-  // being trivially rematerializable, use some target-independent
-  // rules.
-  if (!MI->getDesc().isRematerializable() ||
-      !tii_->isTriviallyReMaterializable(MI)) {
-    if (!EnableAggressiveRemat)
-      return false;
-
-    // If the instruction accesses memory but the memoperands have been lost,
-    // we can't analyze it.
-    const TargetInstrDesc &TID = MI->getDesc();
-    if ((TID.mayLoad() || TID.mayStore()) && MI->memoperands_empty())
-      return false;
-
-    // Avoid instructions obviously unsafe for remat.
-    if (TID.hasUnmodeledSideEffects() || TID.isNotDuplicable())
-      return false;
-
-    // If the instruction accesses memory and the memory could be non-constant,
-    // assume the instruction is not rematerializable.
-    for (MachineInstr::mmo_iterator I = MI->memoperands_begin(),
-         E = MI->memoperands_end(); I != E; ++I){
-      const MachineMemOperand *MMO = *I;
-      if (MMO->isVolatile() || MMO->isStore())
-        return false;
-      const Value *V = MMO->getValue();
-      if (!V)
-        return false;
-      if (const PseudoSourceValue *PSV = dyn_cast<PseudoSourceValue>(V)) {
-        if (!PSV->isConstant(mf_->getFrameInfo()))
-          return false;
-      } else if (!aa_->pointsToConstantMemory(V))
-        return false;
-    }
-
-    // If any of the registers accessed are non-constant, conservatively assume
-    // the instruction is not rematerializable.
-    unsigned ImpUse = 0;
-    for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
-      const MachineOperand &MO = MI->getOperand(i);
-      if (MO.isReg()) {
-        unsigned Reg = MO.getReg();
-        if (Reg == 0)
-          continue;
-        if (TargetRegisterInfo::isPhysicalRegister(Reg))
-          return false;
-
-        // Only allow one def, and that in the first operand.
-        if (MO.isDef() != (i == 0))
-          return false;
-
-        // Only allow constant-valued registers.
-        bool IsLiveIn = mri_->isLiveIn(Reg);
-        MachineRegisterInfo::def_iterator I = mri_->def_begin(Reg),
-                                          E = mri_->def_end();
-
-        // For the def, it should be the only def of that register.
-        if (MO.isDef() && (next(I) != E || IsLiveIn))
-          return false;
-
-        if (MO.isUse()) {
-          // Only allow one use other register use, as that's all the
-          // remat mechanisms support currently.
-          if (Reg != li.reg) {
-            if (ImpUse == 0)
-              ImpUse = Reg;
-            else if (Reg != ImpUse)
-              return false;
-          }
-          // For the use, there should be only one associated def.
-          if (I != E && (next(I) != E || IsLiveIn))
-            return false;
-        }
-      }
-    }
-  }
+  if (!tii_->isTriviallyReMaterializable(MI, aa_))
+    return false;
 
+  // Target-specific code can mark an instruction as being rematerializable
+  // if it has one virtual reg use, though it had better be something like
+  // a PIC base register which is likely to be live everywhere.
   unsigned ImpUse = getReMatImplicitUse(li, MI);
   if (ImpUse) {
     const LiveInterval &ImpLi = getInterval(ImpUse);
     for (MachineRegisterInfo::use_iterator ri = mri_->use_begin(li.reg),
            re = mri_->use_end(); ri != re; ++ri) {
       MachineInstr *UseMI = &*ri;
-      MachineInstrIndex UseIdx = getInstructionIndex(UseMI);
+      SlotIndex UseIdx = getInstructionIndex(UseMI);
       if (li.FindLiveRangeContaining(UseIdx)->valno != ValNo)
         continue;
       if (!isValNoAvailableAt(ImpLi, MI, UseIdx))
@@ -1588,7 +892,7 @@ static bool FilterFoldedOps(MachineInstr *MI,
 /// returns true.
 bool LiveIntervals::tryFoldMemoryOperand(MachineInstr* &MI,
                                          VirtRegMap &vrm, MachineInstr *DefMI,
-                                         MachineInstrIndex InstrIdx,
+                                         SlotIndex InstrIdx,
                                          SmallVector<unsigned, 2> &Ops,
                                          bool isSS, int Slot, unsigned Reg) {
   // If it is an implicit def instruction, just delete it.
@@ -1626,9 +930,7 @@ bool LiveIntervals::tryFoldMemoryOperand(MachineInstr* &MI,
     vrm.transferSpillPts(MI, fmi);
     vrm.transferRestorePts(MI, fmi);
     vrm.transferEmergencySpills(MI, fmi);
-    mi2iMap_.erase(MI);
-    i2miMap_[InstrIdx.getVecIndex()] = fmi;
-    mi2iMap_[fmi] = InstrIdx;
+    ReplaceMachineInstrInMaps(MI, fmi);
     MI = MBB.insert(MBB.erase(MI), fmi);
     ++numFolds;
     return true;
@@ -1656,19 +958,21 @@ bool LiveIntervals::canFoldMemoryOperand(MachineInstr *MI,
 }
 
 bool LiveIntervals::intervalIsInOneMBB(const LiveInterval &li) const {
-  SmallPtrSet<MachineBasicBlock*, 4> MBBs;
-  for (LiveInterval::Ranges::const_iterator
-         I = li.ranges.begin(), E = li.ranges.end(); I != E; ++I) {
-    std::vector<IdxMBBPair>::const_iterator II =
-      std::lower_bound(Idx2MBBMap.begin(), Idx2MBBMap.end(), I->start);
-    if (II == Idx2MBBMap.end())
-      continue;
-    if (I->end > II->first)  // crossing a MBB.
-      return false;
-    MBBs.insert(II->second);
-    if (MBBs.size() > 1)
+  LiveInterval::Ranges::const_iterator itr = li.ranges.begin();
+
+  MachineBasicBlock *mbb = indexes_->getMBBCoveringRange(itr->start, itr->end);
+
+  if (mbb == 0)
+    return false;
+
+  for (++itr; itr != li.ranges.end(); ++itr) {
+    MachineBasicBlock *mbb2 =
+      indexes_->getMBBCoveringRange(itr->start, itr->end);
+
+    if (mbb2 != mbb)
       return false;
   }
+
   return true;
 }
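// Editor's sketch: getMBBCoveringRange() (used above) yields the single block
// whose index range covers [start, end), or null when the range crosses a
// block boundary, so "is this interval confined to one block?" reduces to a
// pointer comparison per live range:
//
//   static bool liveInOneBlock(const LiveInterval &LI, SlotIndexes *Indexes) {
//     MachineBasicBlock *Home = 0;
//     for (LiveInterval::Ranges::const_iterator I = LI.ranges.begin(),
//            E = LI.ranges.end(); I != E; ++I) {
//       MachineBasicBlock *MBB = Indexes->getMBBCoveringRange(I->start, I->end);
//       if (MBB == 0 || (Home != 0 && MBB != Home))
//         return false;                         // spans more than one block
//       Home = MBB;
//     }
//     return true;
//   }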
 
@@ -1700,7 +1004,7 @@ void LiveIntervals::rewriteImplicitOps(const LiveInterval &li,
 /// for addIntervalsForSpills to rewrite uses / defs for the given live range.
 bool LiveIntervals::
 rewriteInstructionForSpills(const LiveInterval &li, const VNInfo *VNI,
-                 bool TrySplit, MachineInstrIndex index, MachineInstrIndex end, 
+                 bool TrySplit, SlotIndex index, SlotIndex end, 
                  MachineInstr *MI,
                  MachineInstr *ReMatOrigDefMI, MachineInstr *ReMatDefMI,
                  unsigned Slot, int LdSlot,
@@ -1877,14 +1181,13 @@ rewriteInstructionForSpills(const LiveInterval &li, const VNInfo *VNI,
 
     if (HasUse) {
       if (CreatedNewVReg) {
-        LiveRange LR(getLoadIndex(index), getNextSlot(getUseIndex(index)),
-                     nI.getNextValue(MachineInstrIndex(), 0, false,
-                                     VNInfoAllocator));
+        LiveRange LR(index.getLoadIndex(), index.getDefIndex(),
+                     nI.getNextValue(SlotIndex(), 0, false, VNInfoAllocator));
         DEBUG(errs() << " +" << LR);
         nI.addRange(LR);
       } else {
         // Extend the split live interval to this def / use.
-        MachineInstrIndex End = getNextSlot(getUseIndex(index));
+        SlotIndex End = index.getDefIndex();
         LiveRange LR(nI.ranges[nI.ranges.size()-1].end, End,
                      nI.getValNumInfo(nI.getNumValNums()-1));
         DEBUG(errs() << " +" << LR);
@@ -1892,9 +1195,8 @@ rewriteInstructionForSpills(const LiveInterval &li, const VNInfo *VNI,
       }
     }
     if (HasDef) {
-      LiveRange LR(getDefIndex(index), getStoreIndex(index),
-                   nI.getNextValue(MachineInstrIndex(), 0, false,
-                                   VNInfoAllocator));
+      LiveRange LR(index.getDefIndex(), index.getStoreIndex(),
+                   nI.getNextValue(SlotIndex(), 0, false, VNInfoAllocator));
       DEBUG(errs() << " +" << LR);
       nI.addRange(LR);
     }
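// Editor's note: a SlotIndex subdivides one instruction into four ordered
// sub-slots, load < use < def < store (accessors as in the patch above). A
// reloaded use therefore lives over [load, def) and a spilled def over
// [def, store). Hedged sketch of the two range shapes:
//
//   void addSpillRanges(LiveInterval &NI, SlotIndex Index,
//                       VNInfo *UseVN, VNInfo *DefVN) {
//     NI.addRange(LiveRange(Index.getLoadIndex(), Index.getDefIndex(), UseVN));
//     NI.addRange(LiveRange(Index.getDefIndex(), Index.getStoreIndex(), DefVN));
//   }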
@@ -1910,13 +1212,13 @@ rewriteInstructionForSpills(const LiveInterval &li, const VNInfo *VNI,
 bool LiveIntervals::anyKillInMBBAfterIdx(const LiveInterval &li,
                                    const VNInfo *VNI,
                                    MachineBasicBlock *MBB,
-                                   MachineInstrIndex Idx) const {
-  MachineInstrIndex End = getMBBEndIdx(MBB);
+                                   SlotIndex Idx) const {
+  SlotIndex End = getMBBEndIdx(MBB);
   for (unsigned j = 0, ee = VNI->kills.size(); j != ee; ++j) {
-    if (VNI->kills[j].isPHIIndex())
+    if (VNI->kills[j].isPHI())
       continue;
 
-    MachineInstrIndex KillIdx = VNI->kills[j];
+    SlotIndex KillIdx = VNI->kills[j];
     if (KillIdx > Idx && KillIdx < End)
       return true;
   }
@@ -1927,11 +1229,11 @@ bool LiveIntervals::anyKillInMBBAfterIdx(const LiveInterval &li,
 /// during spilling.
 namespace {
   struct RewriteInfo {
-    MachineInstrIndex Index;
+    SlotIndex Index;
     MachineInstr *MI;
     bool HasUse;
     bool HasDef;
-    RewriteInfo(MachineInstrIndex i, MachineInstr *mi, bool u, bool d)
+    RewriteInfo(SlotIndex i, MachineInstr *mi, bool u, bool d)
       : Index(i), MI(mi), HasUse(u), HasDef(d) {}
   };
 
@@ -1960,8 +1262,8 @@ rewriteInstructionsForSpills(const LiveInterval &li, bool TrySplit,
                     std::vector<LiveInterval*> &NewLIs) {
   bool AllCanFold = true;
   unsigned NewVReg = 0;
-  MachineInstrIndex start = getBaseIndex(I->start);
-  MachineInstrIndex end = getNextIndex(getBaseIndex(getPrevSlot(I->end)));
+  SlotIndex start = I->start.getBaseIndex();
+  SlotIndex end = I->end.getPrevSlot().getBaseIndex().getNextIndex();
 
   // First collect all the def / use in this live range that will be rewritten.
   // Make sure they are sorted according to instruction index.
@@ -1972,7 +1274,7 @@ rewriteInstructionsForSpills(const LiveInterval &li, bool TrySplit,
     MachineOperand &O = ri.getOperand();
     ++ri;
     assert(!O.isImplicit() && "Spilling register that's used as implicit use?");
-    MachineInstrIndex index = getInstructionIndex(MI);
+    SlotIndex index = getInstructionIndex(MI);
     if (index < start || index >= end)
       continue;
 
@@ -1996,7 +1298,7 @@ rewriteInstructionsForSpills(const LiveInterval &li, bool TrySplit,
   for (unsigned i = 0, e = RewriteMIs.size(); i != e; ) {
     RewriteInfo &rwi = RewriteMIs[i];
     ++i;
-    MachineInstrIndex index = rwi.Index;
+    SlotIndex index = rwi.Index;
     bool MIHasUse = rwi.HasUse;
     bool MIHasDef = rwi.HasDef;
     MachineInstr *MI = rwi.MI;
@@ -2079,12 +1381,12 @@ rewriteInstructionsForSpills(const LiveInterval &li, bool TrySplit,
       if (MI != ReMatOrigDefMI || !CanDelete) {
         bool HasKill = false;
         if (!HasUse)
-          HasKill = anyKillInMBBAfterIdx(li, I->valno, MBB, getDefIndex(index));
+          HasKill = anyKillInMBBAfterIdx(li, I->valno, MBB, index.getDefIndex());
         else {
          // If this is a two-address instruction, then this index starts a new VNInfo.
-          const VNInfo *VNI = li.findDefinedVNInfoForRegInt(getDefIndex(index));
+          const VNInfo *VNI = li.findDefinedVNInfoForRegInt(index.getDefIndex());
           if (VNI)
-            HasKill = anyKillInMBBAfterIdx(li, VNI, MBB, getDefIndex(index));
+            HasKill = anyKillInMBBAfterIdx(li, VNI, MBB, index.getDefIndex());
         }
         DenseMap<unsigned, std::vector<SRInfo> >::iterator SII =
           SpillIdxes.find(MBBId);
@@ -2157,7 +1459,7 @@ rewriteInstructionsForSpills(const LiveInterval &li, bool TrySplit,
   }
 }
 
-bool LiveIntervals::alsoFoldARestore(int Id, MachineInstrIndex index,
+bool LiveIntervals::alsoFoldARestore(int Id, SlotIndex index,
                         unsigned vr, BitVector &RestoreMBBs,
                         DenseMap<unsigned,std::vector<SRInfo> > &RestoreIdxes) {
   if (!RestoreMBBs[Id])
@@ -2171,7 +1473,7 @@ bool LiveIntervals::alsoFoldARestore(int Id, MachineInstrIndex index,
   return false;
 }
 
-void LiveIntervals::eraseRestoreInfo(int Id, MachineInstrIndex index,
+void LiveIntervals::eraseRestoreInfo(int Id, SlotIndex index,
                         unsigned vr, BitVector &RestoreMBBs,
                         DenseMap<unsigned,std::vector<SRInfo> > &RestoreIdxes) {
   if (!RestoreMBBs[Id])
@@ -2179,7 +1481,7 @@ void LiveIntervals::eraseRestoreInfo(int Id, MachineInstrIndex index,
   std::vector<SRInfo> &Restores = RestoreIdxes[Id];
   for (unsigned i = 0, e = Restores.size(); i != e; ++i)
     if (Restores[i].index == index && Restores[i].vreg)
-      Restores[i].index = MachineInstrIndex();
+      Restores[i].index = SlotIndex();
 }
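// Editor's note: a default-constructed SlotIndex is the invalid index, so it
// doubles as a tombstone -- the restore loop further down skips entries that
// compare equal to SlotIndex(). Minimal sketch of the sentinel test:
//
//   static bool isErasedRestore(const SRInfo &R) {
//     return R.index == SlotIndex(); // cleared above, nothing to restore
//   }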
 
 /// handleSpilledImpDefs - Remove IMPLICIT_DEF instructions which are being
@@ -2278,18 +1580,18 @@ addIntervalsForSpillsFast(const LiveInterval &li,
       }
       
       // Fill in the new live interval.
-      MachineInstrIndex index = getInstructionIndex(MI);
+      SlotIndex index = getInstructionIndex(MI);
       if (HasUse) {
-        LiveRange LR(getLoadIndex(index), getUseIndex(index),
-                     nI.getNextValue(MachineInstrIndex(), 0, false,
+        LiveRange LR(index.getLoadIndex(), index.getUseIndex(),
+                     nI.getNextValue(SlotIndex(), 0, false,
                                      getVNInfoAllocator()));
         DEBUG(errs() << " +" << LR);
         nI.addRange(LR);
         vrm.addRestorePoint(NewVReg, MI);
       }
       if (HasDef) {
-        LiveRange LR(getDefIndex(index), getStoreIndex(index),
-                     nI.getNextValue(MachineInstrIndex(), 0, false,
+        LiveRange LR(index.getDefIndex(), index.getStoreIndex(),
+                     nI.getNextValue(SlotIndex(), 0, false,
                                      getVNInfoAllocator()));
         DEBUG(errs() << " +" << LR);
         nI.addRange(LR);
@@ -2353,8 +1655,8 @@ addIntervalsForSpills(const LiveInterval &li,
   if (vrm.getPreSplitReg(li.reg)) {
     vrm.setIsSplitFromReg(li.reg, 0);
     // Unset the split kill marker on the last use.
-    MachineInstrIndex KillIdx = vrm.getKillPoint(li.reg);
-    if (KillIdx != MachineInstrIndex()) {
+    SlotIndex KillIdx = vrm.getKillPoint(li.reg);
+    if (KillIdx != SlotIndex()) {
       MachineInstr *KillMI = getInstructionFromIndex(KillIdx);
       assert(KillMI && "Last use disappeared?");
       int KillOp = KillMI->findRegisterUseOperandIdx(li.reg, true);
@@ -2480,7 +1782,7 @@ addIntervalsForSpills(const LiveInterval &li,
     while (Id != -1) {
       std::vector<SRInfo> &spills = SpillIdxes[Id];
       for (unsigned i = 0, e = spills.size(); i != e; ++i) {
-        MachineInstrIndex index = spills[i].index;
+        SlotIndex index = spills[i].index;
         unsigned VReg = spills[i].vreg;
         LiveInterval &nI = getOrCreateInterval(VReg);
         bool isReMat = vrm.isReMaterialized(VReg);
@@ -2518,16 +1820,16 @@ addIntervalsForSpills(const LiveInterval &li,
             if (FoundUse) {
               // Also folded uses, do not issue a load.
               eraseRestoreInfo(Id, index, VReg, RestoreMBBs, RestoreIdxes);
-              nI.removeRange(getLoadIndex(index), getNextSlot(getUseIndex(index)));
+              nI.removeRange(index.getLoadIndex(), index.getDefIndex());
             }
-            nI.removeRange(getDefIndex(index), getStoreIndex(index));
+            nI.removeRange(index.getDefIndex(), index.getStoreIndex());
           }
         }
 
         // Otherwise tell the spiller to issue a spill.
         if (!Folded) {
           LiveRange *LR = &nI.ranges[nI.ranges.size()-1];
-          bool isKill = LR->end == getStoreIndex(index);
+          bool isKill = LR->end == index.getStoreIndex();
           if (!MI->registerDefIsDead(nI.reg))
             // No need to spill a dead def.
             vrm.addSpillPoint(VReg, isKill, MI);
@@ -2543,8 +1845,8 @@ addIntervalsForSpills(const LiveInterval &li,
   while (Id != -1) {
     std::vector<SRInfo> &restores = RestoreIdxes[Id];
     for (unsigned i = 0, e = restores.size(); i != e; ++i) {
-      MachineInstrIndex index = restores[i].index;
-      if (index == MachineInstrIndex())
+      SlotIndex index = restores[i].index;
+      if (index == SlotIndex())
         continue;
       unsigned VReg = restores[i].vreg;
       LiveInterval &nI = getOrCreateInterval(VReg);
@@ -2599,7 +1901,7 @@ addIntervalsForSpills(const LiveInterval &li,
       // If folding is not possible / failed, then tell the spiller to issue a
       // load / rematerialization for us.
       if (Folded)
-        nI.removeRange(getLoadIndex(index), getNextSlot(getUseIndex(index)));
+        nI.removeRange(index.getLoadIndex(), index.getDefIndex());
       else
         vrm.addRestorePoint(VReg, MI);
     }
@@ -2612,10 +1914,10 @@ addIntervalsForSpills(const LiveInterval &li,
   for (unsigned i = 0, e = NewLIs.size(); i != e; ++i) {
     LiveInterval *LI = NewLIs[i];
     if (!LI->empty()) {
-      LI->weight /= InstrSlots::NUM * getApproximateInstructionCount(*LI);
+      LI->weight /= SlotIndex::NUM * getApproximateInstructionCount(*LI);
       if (!AddedKill.count(LI)) {
         LiveRange *LR = &LI->ranges[LI->ranges.size()-1];
-        MachineInstrIndex LastUseIdx = getBaseIndex(LR->end);
+        SlotIndex LastUseIdx = LR->end.getBaseIndex();
         MachineInstr *LastUse = getInstructionFromIndex(LastUseIdx);
         int UseIdx = LastUse->findRegisterUseOperandIdx(LI->reg, false);
         assert(UseIdx != -1);
@@ -2666,7 +1968,7 @@ unsigned LiveIntervals::getNumConflictsWithPhysReg(const LiveInterval &li,
          E = mri_->reg_end(); I != E; ++I) {
     MachineOperand &O = I.getOperand();
     MachineInstr *MI = O.getParent();
-    MachineInstrIndex Index = getInstructionIndex(MI);
+    SlotIndex Index = getInstructionIndex(MI);
     if (pli.liveAt(Index))
       ++NumConflicts;
   }
@@ -2688,7 +1990,19 @@ bool LiveIntervals::spillPhysRegAroundRegDefsUses(const LiveInterval &li,
            tri_->isSuperRegister(*AS, SpillReg));
 
   bool Cut = false;
-  LiveInterval &pli = getInterval(SpillReg);
+  SmallVector<unsigned, 4> PRegs;
+  if (hasInterval(SpillReg))
+    PRegs.push_back(SpillReg);
+  else {
+    SmallSet<unsigned, 4> Added;
+    for (const unsigned* AS = tri_->getSubRegisters(SpillReg); *AS; ++AS)
+      if (Added.insert(*AS) && hasInterval(*AS)) {
+        PRegs.push_back(*AS);
+        for (const unsigned* ASS = tri_->getSubRegisters(*AS); *ASS; ++ASS)
+          Added.insert(*ASS);
+      }
+  }
+
   SmallPtrSet<MachineInstr*, 8> SeenMIs;
   for (MachineRegisterInfo::reg_iterator I = mri_->reg_begin(li.reg),
          E = mri_->reg_end(); I != E; ++I) {
@@ -2697,11 +2011,15 @@ bool LiveIntervals::spillPhysRegAroundRegDefsUses(const LiveInterval &li,
     if (SeenMIs.count(MI))
       continue;
     SeenMIs.insert(MI);
-    MachineInstrIndex Index = getInstructionIndex(MI);
-    if (pli.liveAt(Index)) {
-      vrm.addEmergencySpill(SpillReg, MI);
-      MachineInstrIndex StartIdx = getLoadIndex(Index);
-      MachineInstrIndex EndIdx = getNextSlot(getStoreIndex(Index));
+    SlotIndex Index = getInstructionIndex(MI);
+    for (unsigned i = 0, e = PRegs.size(); i != e; ++i) {
+      unsigned PReg = PRegs[i];
+      LiveInterval &pli = getInterval(PReg);
+      if (!pli.liveAt(Index))
+        continue;
+      vrm.addEmergencySpill(PReg, MI);
+      SlotIndex StartIdx = Index.getLoadIndex();
+      SlotIndex EndIdx = Index.getNextIndex().getBaseIndex();
       if (pli.isInOneLiveRange(StartIdx, EndIdx)) {
         pli.removeRange(StartIdx, EndIdx);
         Cut = true;
@@ -2711,17 +2029,18 @@ bool LiveIntervals::spillPhysRegAroundRegDefsUses(const LiveInterval &li,
         Msg << "Ran out of registers during register allocation!";
         if (MI->getOpcode() == TargetInstrInfo::INLINEASM) {
           Msg << "\nPlease check your inline asm statement for invalid "
-               << "constraints:\n";
+              << "constraints:\n";
           MI->print(Msg, tm_);
         }
         llvm_report_error(Msg.str());
       }
-      for (const unsigned* AS = tri_->getSubRegisters(SpillReg); *AS; ++AS) {
+      for (const unsigned* AS = tri_->getSubRegisters(PReg); *AS; ++AS) {
         if (!hasInterval(*AS))
           continue;
         LiveInterval &spli = getInterval(*AS);
         if (spli.liveAt(Index))
-          spli.removeRange(getLoadIndex(Index), getNextSlot(getStoreIndex(Index)));
+          spli.removeRange(Index.getLoadIndex(),
+                           Index.getNextIndex().getBaseIndex());
       }
     }
   }
@@ -2732,13 +2051,13 @@ LiveRange LiveIntervals::addLiveRangeToEndOfBlock(unsigned reg,
                                                   MachineInstr* startInst) {
   LiveInterval& Interval = getOrCreateInterval(reg);
   VNInfo* VN = Interval.getNextValue(
-    MachineInstrIndex(getInstructionIndex(startInst), MachineInstrIndex::DEF),
+    SlotIndex(getInstructionIndex(startInst).getDefIndex()),
     startInst, true, getVNInfoAllocator());
   VN->setHasPHIKill(true);
-  VN->kills.push_back(terminatorGaps[startInst->getParent()]);
+  VN->kills.push_back(indexes_->getTerminatorGap(startInst->getParent()));
   LiveRange LR(
-    MachineInstrIndex(getInstructionIndex(startInst), MachineInstrIndex::DEF),
-    getNextSlot(getMBBEndIdx(startInst->getParent())), VN);
+     SlotIndex(getInstructionIndex(startInst).getDefIndex()),
+     getMBBEndIdx(startInst->getParent()).getNextIndex().getBaseIndex(), VN);
   Interval.addRange(LR);
   
   return LR;
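// Editor's sketch of a caller's view of the helper above (LIS, VReg and
// CopyMI are hypothetical names):
//
//   LiveRange LR = LIS.addLiveRangeToEndOfBlock(VReg, CopyMI);
//   // VReg is now live from CopyMI's def slot through the end of CopyMI's
//   // parent block, with a value number killed at the terminator gap.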
diff --git a/libclamav/c++/llvm/lib/CodeGen/LiveStackAnalysis.cpp b/libclamav/c++/llvm/lib/CodeGen/LiveStackAnalysis.cpp
index a7bea1f..d2f3775 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LiveStackAnalysis.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LiveStackAnalysis.cpp
@@ -27,15 +27,10 @@ using namespace llvm;
 char LiveStacks::ID = 0;
 static RegisterPass<LiveStacks> X("livestacks", "Live Stack Slot Analysis");
 
-void LiveStacks::scaleNumbering(int factor) {
-  // Scale the intervals.
-  for (iterator LI = begin(), LE = end(); LI != LE; ++LI) {
-    LI->second.scaleNumbering(factor);
-  }
-}
-
 void LiveStacks::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.setPreservesAll();
+  AU.addPreserved<SlotIndexes>();
+  AU.addRequiredTransitive<SlotIndexes>();
   MachineFunctionPass::getAnalysisUsage(AU);
 }
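// Editor's sketch: the general pattern used above for wiring a pass to an
// analysis it consumes -- require it transitively (keeping it alive as long
// as this pass lives) and promise not to invalidate it (pass name
// hypothetical):
//
//   void MyStackPass::getAnalysisUsage(AnalysisUsage &AU) const {
//     AU.setPreservesAll();
//     AU.addPreserved<SlotIndexes>();
//     AU.addRequiredTransitive<SlotIndexes>();
//     MachineFunctionPass::getAnalysisUsage(AU);
//   }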
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/LiveVariables.cpp b/libclamav/c++/llvm/lib/CodeGen/LiveVariables.cpp
index 139e029..bfc2d08 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LiveVariables.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LiveVariables.cpp
@@ -50,6 +50,14 @@ void LiveVariables::getAnalysisUsage(AnalysisUsage &AU) const {
   MachineFunctionPass::getAnalysisUsage(AU);
 }
 
+MachineInstr *
+LiveVariables::VarInfo::findKill(const MachineBasicBlock *MBB) const {
+  for (unsigned i = 0, e = Kills.size(); i != e; ++i)
+    if (Kills[i]->getParent() == MBB)
+      return Kills[i];
+  return NULL;
+}
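// Editor's sketch: typical use of the new findKill() helper -- ask whether a
// virtual register dies inside a particular block (VI is a hypothetical
// VarInfo reference):
//
//   if (MachineInstr *KillMI = VI.findKill(MBB))
//     KillMI->dump();  // the register's live range ends at KillMI in MBB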
+
 void LiveVariables::VarInfo::dump() const {
   errs() << "  Alive in blocks: ";
   for (SparseBitVector<>::iterator I = AliveBlocks.begin(),
@@ -222,8 +230,9 @@ MachineInstr *LiveVariables::FindLastPartialDef(unsigned Reg,
 /// implicit defs to a machine instruction if there was an earlier def of its
 /// super-register.
 void LiveVariables::HandlePhysRegUse(unsigned Reg, MachineInstr *MI) {
+  MachineInstr *LastDef = PhysRegDef[Reg];
   // If there was a previous use or a "full" def all is well.
-  if (!PhysRegDef[Reg] && !PhysRegUse[Reg]) {
+  if (!LastDef && !PhysRegUse[Reg]) {
     // Otherwise, the last sub-register def implicitly defines this register.
     // e.g.
     // AH =
@@ -257,6 +266,11 @@ void LiveVariables::HandlePhysRegUse(unsigned Reg, MachineInstr *MI) {
       }
     }
   }
+  else if (LastDef && !PhysRegUse[Reg] &&
+           !LastDef->findRegisterDefOperand(Reg))
+    // Last def defines the super register, add an implicit def of reg.
+    LastDef->addOperand(MachineOperand::CreateReg(Reg,
+                                                 true/*IsDef*/, true/*IsImp*/));
 
   // Remember this use.
   PhysRegUse[Reg]  = MI;
@@ -323,10 +337,21 @@ bool LiveVariables::HandlePhysRegKill(unsigned Reg, MachineInstr *MI) {
       // The last partial def kills the register.
       LastPartDef->addOperand(MachineOperand::CreateReg(Reg, false/*IsDef*/,
                                                 true/*IsImp*/, true/*IsKill*/));
-    else
+    else {
+      MachineOperand *MO =
+        LastRefOrPartRef->findRegisterDefOperand(Reg, false, TRI);
+      bool NeedEC = MO->isEarlyClobber() && MO->getReg() != Reg;
       // If the last reference is the last def, then it's not used at all.
       // That is, unless we are currently processing the last reference itself.
       LastRefOrPartRef->addRegisterDead(Reg, TRI, true);
+      if (NeedEC) {
+        // If we are adding a subreg def and the superreg def is marked early
+        // clobber, add an early clobber marker to the subreg def.
+        MO = LastRefOrPartRef->findRegisterDefOperand(Reg);
+        if (MO)
+          MO->setIsEarlyClobber();
+      }
+    }
   } else if (!PhysRegUse[Reg]) {
     // Partial uses. Mark register def dead and add implicit def of
     // sub-registers which are used.
@@ -630,3 +655,46 @@ void LiveVariables::analyzePHINodes(const MachineFunction& Fn) {
         PHIVarInfo[BBI->getOperand(i + 1).getMBB()->getNumber()]
           .push_back(BBI->getOperand(i).getReg());
 }
+
+bool LiveVariables::VarInfo::isLiveIn(const MachineBasicBlock &MBB,
+                                      unsigned Reg,
+                                      MachineRegisterInfo &MRI) {
+  unsigned Num = MBB.getNumber();
+
+  // Reg is live-through.
+  if (AliveBlocks.test(Num))
+    return true;
+
+  // Registers defined in MBB cannot be live in.
+  const MachineInstr *Def = MRI.getVRegDef(Reg);
+  if (Def && Def->getParent() == &MBB)
+    return false;
+
+  // Reg was not defined in MBB, was it killed here?
+  return findKill(&MBB);
+}
+
+/// addNewBlock - Add a new basic block BB as an empty successor to DomBB. All
+/// variables that are live out of DomBB will be marked as passing live through
+/// BB.
+void LiveVariables::addNewBlock(MachineBasicBlock *BB,
+                                MachineBasicBlock *DomBB,
+                                MachineBasicBlock *SuccBB) {
+  const unsigned NumNew = BB->getNumber();
+
+  // All registers used by PHI nodes in SuccBB must be live through BB.
+  for (MachineBasicBlock::const_iterator BBI = SuccBB->begin(),
+         BBE = SuccBB->end();
+       BBI != BBE && BBI->getOpcode() == TargetInstrInfo::PHI; ++BBI)
+    for (unsigned i = 1, e = BBI->getNumOperands(); i != e; i += 2)
+      if (BBI->getOperand(i+1).getMBB() == BB)
+        getVarInfo(BBI->getOperand(i).getReg()).AliveBlocks.set(NumNew);
+
+  // Update info for all live variables
+  for (unsigned Reg = TargetRegisterInfo::FirstVirtualRegister,
+         E = MRI->getLastVirtReg()+1; Reg != E; ++Reg) {
+    VarInfo &VI = getVarInfo(Reg);
+    if (!VI.AliveBlocks.test(NumNew) && VI.isLiveIn(*SuccBB, Reg, *MRI))
+      VI.AliveBlocks.set(NumNew);
+  }
+}
diff --git a/libclamav/c++/llvm/lib/CodeGen/LowerSubregs.cpp b/libclamav/c++/llvm/lib/CodeGen/LowerSubregs.cpp
index 8486bb0..30636a8 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LowerSubregs.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LowerSubregs.cpp
@@ -25,13 +25,16 @@
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Support/Debug.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
 namespace {
-  struct VISIBILITY_HIDDEN LowerSubregsInstructionPass
-   : public MachineFunctionPass {
+  struct LowerSubregsInstructionPass : public MachineFunctionPass {
+  private:
+    const TargetRegisterInfo *TRI;
+    const TargetInstrInfo *TII;
+
+  public:
     static char ID; // Pass identification, replacement for typeid
     LowerSubregsInstructionPass() : MachineFunctionPass(&ID) {}
     
@@ -48,15 +51,16 @@ namespace {
 
     /// runOnMachineFunction - pass entry point
     bool runOnMachineFunction(MachineFunction&);
-    
+
+  private:
     bool LowerExtract(MachineInstr *MI);
     bool LowerInsert(MachineInstr *MI);
     bool LowerSubregToReg(MachineInstr *MI);
 
     void TransferDeadFlag(MachineInstr *MI, unsigned DstReg,
-                          const TargetRegisterInfo &TRI);
+                          const TargetRegisterInfo *TRI);
     void TransferKillFlag(MachineInstr *MI, unsigned SrcReg,
-                          const TargetRegisterInfo &TRI,
+                          const TargetRegisterInfo *TRI,
                           bool AddIfNotFound = false);
   };
 
@@ -73,10 +77,10 @@ FunctionPass *llvm::createLowerSubregsPass() {
 void
 LowerSubregsInstructionPass::TransferDeadFlag(MachineInstr *MI,
                                               unsigned DstReg,
-                                              const TargetRegisterInfo &TRI) {
+                                              const TargetRegisterInfo *TRI) {
   for (MachineBasicBlock::iterator MII =
         prior(MachineBasicBlock::iterator(MI)); ; --MII) {
-    if (MII->addRegisterDead(DstReg, &TRI))
+    if (MII->addRegisterDead(DstReg, TRI))
       break;
     assert(MII != MI->getParent()->begin() &&
            "copyRegToReg output doesn't reference destination register!");
@@ -89,11 +93,11 @@ LowerSubregsInstructionPass::TransferDeadFlag(MachineInstr *MI,
 void
 LowerSubregsInstructionPass::TransferKillFlag(MachineInstr *MI,
                                               unsigned SrcReg,
-                                              const TargetRegisterInfo &TRI,
+                                              const TargetRegisterInfo *TRI,
                                               bool AddIfNotFound) {
   for (MachineBasicBlock::iterator MII =
         prior(MachineBasicBlock::iterator(MI)); ; --MII) {
-    if (MII->addRegisterKilled(SrcReg, &TRI, AddIfNotFound))
+    if (MII->addRegisterKilled(SrcReg, TRI, AddIfNotFound))
       break;
     assert(MII != MI->getParent()->begin() &&
            "copyRegToReg output doesn't reference source register!");
@@ -102,9 +106,6 @@ LowerSubregsInstructionPass::TransferKillFlag(MachineInstr *MI,
 
 bool LowerSubregsInstructionPass::LowerExtract(MachineInstr *MI) {
   MachineBasicBlock *MBB = MI->getParent();
-  MachineFunction &MF = *MBB->getParent();
-  const TargetRegisterInfo &TRI = *MF.getTarget().getRegisterInfo();
-  const TargetInstrInfo &TII = *MF.getTarget().getInstrInfo();
 
   assert(MI->getOperand(0).isReg() && MI->getOperand(0).isDef() &&
          MI->getOperand(1).isReg() && MI->getOperand(1).isUse() &&
@@ -113,7 +114,7 @@ bool LowerSubregsInstructionPass::LowerExtract(MachineInstr *MI) {
   unsigned DstReg   = MI->getOperand(0).getReg();
   unsigned SuperReg = MI->getOperand(1).getReg();
   unsigned SubIdx   = MI->getOperand(2).getImm();
-  unsigned SrcReg   = TRI.getSubReg(SuperReg, SubIdx);
+  unsigned SrcReg   = TRI->getSubReg(SuperReg, SubIdx);
 
   assert(TargetRegisterInfo::isPhysicalRegister(SuperReg) &&
          "Extract supperg source must be a physical register");
@@ -128,7 +129,7 @@ bool LowerSubregsInstructionPass::LowerExtract(MachineInstr *MI) {
     if (MI->getOperand(1).isKill()) {
       // We must make sure the super-register gets killed. Replace the
       // instruction with KILL.
-      MI->setDesc(TII.get(TargetInstrInfo::KILL));
+      MI->setDesc(TII->get(TargetInstrInfo::KILL));
       MI->RemoveOperand(2);     // SubIdx
       DEBUG(errs() << "subreg: replace by: " << *MI);
       return true;
@@ -137,9 +138,9 @@ bool LowerSubregsInstructionPass::LowerExtract(MachineInstr *MI) {
     DEBUG(errs() << "subreg: eliminated!");
   } else {
     // Insert copy
-    const TargetRegisterClass *TRCS = TRI.getPhysicalRegisterRegClass(DstReg);
-    const TargetRegisterClass *TRCD = TRI.getPhysicalRegisterRegClass(SrcReg);
-    bool Emitted = TII.copyRegToReg(*MBB, MI, DstReg, SrcReg, TRCD, TRCS);
+    const TargetRegisterClass *TRCS = TRI->getPhysicalRegisterRegClass(DstReg);
+    const TargetRegisterClass *TRCD = TRI->getPhysicalRegisterRegClass(SrcReg);
+    bool Emitted = TII->copyRegToReg(*MBB, MI, DstReg, SrcReg, TRCD, TRCS);
     (void)Emitted;
     assert(Emitted && "Subreg and Dst must be of compatible register class");
     // Transfer the kill/dead flags, if needed.
@@ -160,9 +161,6 @@ bool LowerSubregsInstructionPass::LowerExtract(MachineInstr *MI) {
 
 bool LowerSubregsInstructionPass::LowerSubregToReg(MachineInstr *MI) {
   MachineBasicBlock *MBB = MI->getParent();
-  MachineFunction &MF = *MBB->getParent();
-  const TargetRegisterInfo &TRI = *MF.getTarget().getRegisterInfo(); 
-  const TargetInstrInfo &TII = *MF.getTarget().getInstrInfo();
   assert((MI->getOperand(0).isReg() && MI->getOperand(0).isDef()) &&
          MI->getOperand(1).isImm() &&
          (MI->getOperand(2).isReg() && MI->getOperand(2).isUse()) &&
@@ -174,7 +172,7 @@ bool LowerSubregsInstructionPass::LowerSubregToReg(MachineInstr *MI) {
   unsigned SubIdx  = MI->getOperand(3).getImm();
 
   assert(SubIdx != 0 && "Invalid index for insert_subreg");
-  unsigned DstSubReg = TRI.getSubReg(DstReg, SubIdx);
+  unsigned DstSubReg = TRI->getSubReg(DstReg, SubIdx);
 
   assert(TargetRegisterInfo::isPhysicalRegister(DstReg) &&
          "Insert destination must be in a physical register");
@@ -193,9 +191,11 @@ bool LowerSubregsInstructionPass::LowerSubregToReg(MachineInstr *MI) {
     DEBUG(errs() << "subreg: eliminated!");
   } else {
     // Insert sub-register copy
-    const TargetRegisterClass *TRC0= TRI.getPhysicalRegisterRegClass(DstSubReg);
-    const TargetRegisterClass *TRC1= TRI.getPhysicalRegisterRegClass(InsReg);
-    TII.copyRegToReg(*MBB, MI, DstSubReg, InsReg, TRC0, TRC1);
+    const TargetRegisterClass *TRC0= TRI->getPhysicalRegisterRegClass(DstSubReg);
+    const TargetRegisterClass *TRC1= TRI->getPhysicalRegisterRegClass(InsReg);
+    bool Emitted = TII->copyRegToReg(*MBB, MI, DstSubReg, InsReg, TRC0, TRC1);
+    (void)Emitted;
+    assert(Emitted && "Subreg and Dst must be of compatible register class");
     // Transfer the kill/dead flags, if needed.
     if (MI->getOperand(0).isDead())
       TransferDeadFlag(MI, DstSubReg, TRI);
@@ -209,14 +209,11 @@ bool LowerSubregsInstructionPass::LowerSubregToReg(MachineInstr *MI) {
 
   DEBUG(errs() << '\n');
   MBB->erase(MI);
-  return true;                    
+  return true;
 }
 
 bool LowerSubregsInstructionPass::LowerInsert(MachineInstr *MI) {
   MachineBasicBlock *MBB = MI->getParent();
-  MachineFunction &MF = *MBB->getParent();
-  const TargetRegisterInfo &TRI = *MF.getTarget().getRegisterInfo(); 
-  const TargetInstrInfo &TII = *MF.getTarget().getInstrInfo();
   assert((MI->getOperand(0).isReg() && MI->getOperand(0).isDef()) &&
          (MI->getOperand(1).isReg() && MI->getOperand(1).isUse()) &&
          (MI->getOperand(2).isReg() && MI->getOperand(2).isUse()) &&
@@ -231,7 +228,7 @@ bool LowerSubregsInstructionPass::LowerInsert(MachineInstr *MI) {
 
   assert(DstReg == SrcReg && "insert_subreg not a two-address instruction?");
   assert(SubIdx != 0 && "Invalid index for insert_subreg");
-  unsigned DstSubReg = TRI.getSubReg(DstReg, SubIdx);
+  unsigned DstSubReg = TRI->getSubReg(DstReg, SubIdx);
   assert(DstSubReg && "invalid subregister index for register");
   assert(TargetRegisterInfo::isPhysicalRegister(SrcReg) &&
          "Insert superreg source must be in a physical register");
@@ -245,7 +242,7 @@ bool LowerSubregsInstructionPass::LowerInsert(MachineInstr *MI) {
     // <undef>, we need to make sure it is alive by inserting a KILL
     if (MI->getOperand(1).isUndef() && !MI->getOperand(0).isDead()) {
       MachineInstrBuilder MIB = BuildMI(*MBB, MI, MI->getDebugLoc(),
-                                TII.get(TargetInstrInfo::KILL), DstReg);
+                                TII->get(TargetInstrInfo::KILL), DstReg);
       if (MI->getOperand(2).isUndef())
         MIB.addReg(InsReg, RegState::Undef);
       else
@@ -257,15 +254,18 @@ bool LowerSubregsInstructionPass::LowerInsert(MachineInstr *MI) {
     }
   } else {
     // Insert sub-register copy
-    const TargetRegisterClass *TRC0= TRI.getPhysicalRegisterRegClass(DstSubReg);
-    const TargetRegisterClass *TRC1= TRI.getPhysicalRegisterRegClass(InsReg);
+    const TargetRegisterClass *TRC0= TRI->getPhysicalRegisterRegClass(DstSubReg);
+    const TargetRegisterClass *TRC1= TRI->getPhysicalRegisterRegClass(InsReg);
     if (MI->getOperand(2).isUndef())
       // If the source register being inserted is undef, then this becomes a
       // KILL.
       BuildMI(*MBB, MI, MI->getDebugLoc(),
-              TII.get(TargetInstrInfo::KILL), DstSubReg);
-    else
-      TII.copyRegToReg(*MBB, MI, DstSubReg, InsReg, TRC0, TRC1);
+              TII->get(TargetInstrInfo::KILL), DstSubReg);
+    else {
+      bool Emitted = TII->copyRegToReg(*MBB, MI, DstSubReg, InsReg, TRC0, TRC1);
+      (void)Emitted;
+      assert(Emitted && "Subreg and Dst must be of compatible register class");
+    }
     MachineBasicBlock::iterator CopyMI = MI;
     --CopyMI;
 
@@ -303,6 +303,8 @@ bool LowerSubregsInstructionPass::runOnMachineFunction(MachineFunction &MF) {
                << "********** LOWERING SUBREG INSTRS **********\n"
                << "********** Function: " 
                << MF.getFunction()->getName() << '\n');
+  TRI = MF.getTarget().getRegisterInfo();
+  TII = MF.getTarget().getInstrInfo();
 
   bool MadeChange = false;
 
@@ -310,8 +312,8 @@ bool LowerSubregsInstructionPass::runOnMachineFunction(MachineFunction &MF) {
        mbbi != mbbe; ++mbbi) {
     for (MachineBasicBlock::iterator mi = mbbi->begin(), me = mbbi->end();
          mi != me;) {
-      MachineInstr *MI = mi++;
-           
+      MachineBasicBlock::iterator nmi = next(mi);
+      MachineInstr *MI = mi;
       if (MI->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG) {
         MadeChange |= LowerExtract(MI);
       } else if (MI->getOpcode() == TargetInstrInfo::INSERT_SUBREG) {
@@ -319,6 +321,7 @@ bool LowerSubregsInstructionPass::runOnMachineFunction(MachineFunction &MF) {
       } else if (MI->getOpcode() == TargetInstrInfo::SUBREG_TO_REG) {
         MadeChange |= LowerSubregToReg(MI);
       }
+      mi = nmi;
     }
   }
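// Editor's note: each Lower* routine above ends in MBB->erase(MI), which
// invalidates iterators pointing at MI; the rewritten loop therefore saves
// the successor before lowering. Generic sketch of the pattern:
//
//   void process(MachineInstr *MI); // hypothetical: may erase MI from its MBB
//   for (MachineBasicBlock::iterator I = MBB->begin(), E = MBB->end();
//        I != E; ) {
//     MachineBasicBlock::iterator Next = next(I); // capture successor first
//     process(&*I);                               // may erase *I
//     I = Next;                                   // still valid afterwards
//   }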
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
index b3eb2da..e55e369 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
@@ -17,14 +17,17 @@
 #include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Target/TargetInstrDesc.h"
+#include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Support/LeakDetector.h"
 #include "llvm/Support/raw_ostream.h"
+#include "llvm/Assembly/Writer.h"
 #include <algorithm>
 using namespace llvm;
 
 MachineBasicBlock::MachineBasicBlock(MachineFunction &mf, const BasicBlock *bb)
-  : BB(bb), Number(-1), xParent(&mf), Alignment(0), IsLandingPad(false) {
+  : BB(bb), Number(-1), xParent(&mf), Alignment(0), IsLandingPad(false),
+    AddressTaken(false) {
   Insts.Parent = this;
 }
 
@@ -160,15 +163,22 @@ void MachineBasicBlock::dump() const {
 
 static inline void OutputReg(raw_ostream &os, unsigned RegNo,
                              const TargetRegisterInfo *TRI = 0) {
-  if (!RegNo || TargetRegisterInfo::isPhysicalRegister(RegNo)) {
+  if (RegNo != 0 && TargetRegisterInfo::isPhysicalRegister(RegNo)) {
     if (TRI)
       os << " %" << TRI->get(RegNo).Name;
     else
-      os << " %mreg(" << RegNo << ")";
+      os << " %physreg" << RegNo;
   } else
     os << " %reg" << RegNo;
 }
 
+StringRef MachineBasicBlock::getName() const {
+  if (const BasicBlock *LBB = getBasicBlock())
+    return LBB->getName();
+  else
+    return "(null)";
+}
+
 void MachineBasicBlock::print(raw_ostream &OS) const {
   const MachineFunction *MF = getParent();
   if (!MF) {
@@ -177,18 +187,23 @@ void MachineBasicBlock::print(raw_ostream &OS) const {
     return;
   }
 
-  const BasicBlock *LBB = getBasicBlock();
+  if (Alignment) { OS << "Alignment " << Alignment << "\n"; }
+
+  OS << "BB#" << getNumber() << ": ";
+
+  const char *Comma = "";
+  if (const BasicBlock *LBB = getBasicBlock()) {
+    OS << Comma << "derived from LLVM BB ";
+    WriteAsOperand(OS, LBB, /*PrintType=*/false);
+    Comma = ", ";
+  }
+  if (isLandingPad()) { OS << Comma << "EH LANDING PAD"; Comma = ", "; }
+  if (hasAddressTaken()) { OS << Comma << "ADDRESS TAKEN"; Comma = ", "; }
   OS << '\n';
-  if (LBB) OS << LBB->getName() << ": ";
-  OS << (const void*)this
-     << ", LLVM BB @" << (const void*) LBB << ", ID#" << getNumber();
-  if (Alignment) OS << ", Alignment " << Alignment;
-  if (isLandingPad()) OS << ", EH LANDING PAD";
-  OS << ":\n";
 
   const TargetRegisterInfo *TRI = MF->getTarget().getRegisterInfo();  
   if (!livein_empty()) {
-    OS << "Live Ins:";
+    OS << "    Live Ins:";
     for (const_livein_iterator I = livein_begin(),E = livein_end(); I != E; ++I)
       OutputReg(OS, *I, TRI);
     OS << '\n';
@@ -197,7 +212,7 @@ void MachineBasicBlock::print(raw_ostream &OS) const {
   if (!pred_empty()) {
     OS << "    Predecessors according to CFG:";
     for (const_pred_iterator PI = pred_begin(), E = pred_end(); PI != E; ++PI)
-      OS << ' ' << *PI << " (#" << (*PI)->getNumber() << ')';
+      OS << " BB#" << (*PI)->getNumber();
     OS << '\n';
   }
   
@@ -210,7 +225,7 @@ void MachineBasicBlock::print(raw_ostream &OS) const {
   if (!succ_empty()) {
     OS << "    Successors according to CFG:";
     for (const_succ_iterator SI = succ_begin(), E = succ_end(); SI != E; ++SI)
-      OS << ' ' << *SI << " (#" << (*SI)->getNumber() << ')';
+      OS << " BB#" << (*SI)->getNumber();
     OS << '\n';
   }
 }
@@ -235,6 +250,64 @@ void MachineBasicBlock::moveAfter(MachineBasicBlock *NewBefore) {
   getParent()->splice(++BBI, this);
 }
 
+void MachineBasicBlock::updateTerminator() {
+  const TargetInstrInfo *TII = getParent()->getTarget().getInstrInfo();
+  // A block with no successors has no concerns with fall-through edges.
+  if (this->succ_empty()) return;
+
+  MachineBasicBlock *TBB = 0, *FBB = 0;
+  SmallVector<MachineOperand, 4> Cond;
+  bool B = TII->AnalyzeBranch(*this, TBB, FBB, Cond);
+  (void) B;
+  assert(!B && "UpdateTerminators requires analyzable predecessors!");
+  if (Cond.empty()) {
+    if (TBB) {
+      // The block has an unconditional branch. If its successor is now
+      // its layout successor, delete the branch.
+      if (isLayoutSuccessor(TBB))
+        TII->RemoveBranch(*this);
+    } else {
+      // The block has an unconditional fallthrough. If its successor is not
+      // its layout successor, insert a branch.
+      TBB = *succ_begin();
+      if (!isLayoutSuccessor(TBB))
+        TII->InsertBranch(*this, TBB, 0, Cond);
+    }
+  } else {
+    if (FBB) {
+      // The block has a non-fallthrough conditional branch. If one of its
+      // successors is its layout successor, rewrite it to a fallthrough
+      // conditional branch.
+      if (isLayoutSuccessor(TBB)) {
+        if (TII->ReverseBranchCondition(Cond))
+          return;
+        TII->RemoveBranch(*this);
+        TII->InsertBranch(*this, FBB, 0, Cond);
+      } else if (isLayoutSuccessor(FBB)) {
+        TII->RemoveBranch(*this);
+        TII->InsertBranch(*this, TBB, 0, Cond);
+      }
+    } else {
+      // The block has a fallthrough conditional branch.
+      MachineBasicBlock *MBBA = *succ_begin();
+      MachineBasicBlock *MBBB = *next(succ_begin());
+      if (MBBA == TBB) std::swap(MBBB, MBBA);
+      if (isLayoutSuccessor(TBB)) {
+        if (TII->ReverseBranchCondition(Cond)) {
+          // We can't reverse the condition, add an unconditional branch.
+          Cond.clear();
+          TII->InsertBranch(*this, MBBA, 0, Cond);
+          return;
+        }
+        TII->RemoveBranch(*this);
+        TII->InsertBranch(*this, MBBA, 0, Cond);
+      } else if (!isLayoutSuccessor(MBBA)) {
+        TII->RemoveBranch(*this);
+        TII->InsertBranch(*this, TBB, MBBA, Cond);
+      }
+    }
+  }
+}
 
 void MachineBasicBlock::addSuccessor(MachineBasicBlock *succ) {
   Successors.push_back(succ);
@@ -289,6 +362,51 @@ bool MachineBasicBlock::isLayoutSuccessor(const MachineBasicBlock *MBB) const {
   return next(I) == MachineFunction::const_iterator(MBB);
 }
 
+bool MachineBasicBlock::canFallThrough() {
+  MachineBasicBlock *TBB = 0, *FBB = 0;
+  SmallVector<MachineOperand, 4> Cond;
+  const TargetInstrInfo *TII = getParent()->getTarget().getInstrInfo();
+  bool BranchUnAnalyzable = TII->AnalyzeBranch(*this, TBB, FBB, Cond, true);
+
+  MachineFunction::iterator Fallthrough = this;
+  ++Fallthrough;
+  // If Fallthrough is off the end of the function, it can't fall through.
+  if (Fallthrough == getParent()->end())
+    return false;
+
+  // If Fallthrough isn't a successor, no fallthrough is possible.
+  if (!isSuccessor(Fallthrough))
+    return false;
+
+  // If we couldn't analyze the branch, examine the last instruction.
+  // If the block doesn't end in a known control barrier, assume fallthrough
+  // is possible. The isPredicable check is needed because this code can be
+  // called during IfConversion, where an instruction which is normally a
+  // Barrier is predicated and thus no longer an actual control barrier. This
+  // is over-conservative though, because if an instruction isn't actually
+  // predicated we could still treat it like a barrier.
+  if (BranchUnAnalyzable)
+    return empty() || !back().getDesc().isBarrier() ||
+           back().getDesc().isPredicable();
+
+  // If there is no branch, control always falls through.
+  if (TBB == 0) return true;
+
+  // If there is some explicit branch to the fallthrough block, it can obviously
+  // reach, even though the branch should get folded to fall through implicitly.
+  if (MachineFunction::iterator(TBB) == Fallthrough ||
+      MachineFunction::iterator(FBB) == Fallthrough)
+    return true;
+
+  // If it's an unconditional branch to some block not the fall through, it
+  // doesn't fall through.
+  if (Cond.empty()) return false;
+
+  // Otherwise, if it is conditional and has no explicit false block, it falls
+  // through.
+  return FBB == 0;
+}
+
 /// removeFromParent - This method unlinks 'this' from the containing function,
 /// and returns it, but does not delete it.
 MachineBasicBlock *MachineBasicBlock::removeFromParent() {
@@ -364,10 +482,7 @@ bool MachineBasicBlock::CorrectExtraCFGEdges(MachineBasicBlock *DestA,
   MachineBasicBlock::succ_iterator SI = succ_begin();
   MachineBasicBlock *OrigDestA = DestA, *OrigDestB = DestB;
   while (SI != succ_end()) {
-    if (*SI == DestA && DestA == DestB) {
-      DestA = DestB = 0;
-      ++SI;
-    } else if (*SI == DestA) {
+    if (*SI == DestA) {
       DestA = 0;
       ++SI;
     } else if (*SI == DestB) {
@@ -390,3 +505,8 @@ bool MachineBasicBlock::CorrectExtraCFGEdges(MachineBasicBlock *DestA,
   }
   return MadeChange;
 }
+
+void llvm::WriteAsOperand(raw_ostream &OS, const MachineBasicBlock *MBB,
+                          bool t) {
+  OS << "BB#" << MBB->getNumber();
+}
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineFunction.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineFunction.cpp
index 2ff20f2..81d1301 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineFunction.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineFunction.cpp
@@ -30,13 +30,12 @@
 #include "llvm/Target/TargetLowering.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetFrameInfo.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/GraphWriter.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
 namespace {
-  struct VISIBILITY_HIDDEN Printer : public MachineFunctionPass {
+  struct Printer : public MachineFunctionPass {
     static char ID;
 
     raw_ostream &OS;
@@ -53,7 +52,7 @@ namespace {
     }
 
     bool runOnMachineFunction(MachineFunction &MF) {
-      OS << Banner;
+      OS << "# " << Banner << ":\n";
       MF.print(OS);
       return false;
     }
@@ -235,12 +234,76 @@ MachineFunction::allocateMemRefsArray(unsigned long Num) {
   return Allocator.Allocate<MachineMemOperand *>(Num);
 }
 
+std::pair<MachineInstr::mmo_iterator, MachineInstr::mmo_iterator>
+MachineFunction::extractLoadMemRefs(MachineInstr::mmo_iterator Begin,
+                                    MachineInstr::mmo_iterator End) {
+  // Count the number of load mem refs.
+  unsigned Num = 0;
+  for (MachineInstr::mmo_iterator I = Begin; I != End; ++I)
+    if ((*I)->isLoad())
+      ++Num;
+
+  // Allocate a new array and populate it with the load information.
+  MachineInstr::mmo_iterator Result = allocateMemRefsArray(Num);
+  unsigned Index = 0;
+  for (MachineInstr::mmo_iterator I = Begin; I != End; ++I) {
+    if ((*I)->isLoad()) {
+      if (!(*I)->isStore())
+        // Reuse the MMO.
+        Result[Index] = *I;
+      else {
+        // Clone the MMO and unset the store flag.
+        MachineMemOperand *JustLoad =
+          getMachineMemOperand((*I)->getValue(),
+                               (*I)->getFlags() & ~MachineMemOperand::MOStore,
+                               (*I)->getOffset(), (*I)->getSize(),
+                               (*I)->getBaseAlignment());
+        Result[Index] = JustLoad;
+      }
+      ++Index;
+    }
+  }
+  return std::make_pair(Result, Result + Num);
+}
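// Editor's sketch: the clone-and-mask step above, splitting a combined
// load+store MachineMemOperand into its load-only half (MF and MMO stand in
// for the enclosing MachineFunction and the mixed operand):
//
//   MachineMemOperand *LoadHalf =
//     MF.getMachineMemOperand(MMO->getValue(),
//                             MMO->getFlags() & ~MachineMemOperand::MOStore,
//                             MMO->getOffset(), MMO->getSize(),
//                             MMO->getBaseAlignment());
//
// The store-only variant below masks off MachineMemOperand::MOLoad instead.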
+
+std::pair<MachineInstr::mmo_iterator, MachineInstr::mmo_iterator>
+MachineFunction::extractStoreMemRefs(MachineInstr::mmo_iterator Begin,
+                                     MachineInstr::mmo_iterator End) {
+  // Count the number of store mem refs.
+  unsigned Num = 0;
+  for (MachineInstr::mmo_iterator I = Begin; I != End; ++I)
+    if ((*I)->isStore())
+      ++Num;
+
+  // Allocate a new array and populate it with the store information.
+  MachineInstr::mmo_iterator Result = allocateMemRefsArray(Num);
+  unsigned Index = 0;
+  for (MachineInstr::mmo_iterator I = Begin; I != End; ++I) {
+    if ((*I)->isStore()) {
+      if (!(*I)->isLoad())
+        // Reuse the MMO.
+        Result[Index] = *I;
+      else {
+        // Clone the MMO and unset the load flag.
+        MachineMemOperand *JustStore =
+          getMachineMemOperand((*I)->getValue(),
+                               (*I)->getFlags() & ~MachineMemOperand::MOLoad,
+                               (*I)->getOffset(), (*I)->getSize(),
+                               (*I)->getBaseAlignment());
+        Result[Index] = JustStore;
+      }
+      ++Index;
+    }
+  }
+  return std::make_pair(Result, Result + Num);
+}
+
 void MachineFunction::dump() const {
   print(errs());
 }
 
 void MachineFunction::print(raw_ostream &OS) const {
-  OS << "# Machine code for " << Fn->getName() << "():\n";
+  OS << "# Machine code for function " << Fn->getName() << ":\n";
 
   // Print Frame Information
   FrameInfo->print(*this, OS);
@@ -254,34 +317,43 @@ void MachineFunction::print(raw_ostream &OS) const {
   const TargetRegisterInfo *TRI = getTarget().getRegisterInfo();
   
   if (RegInfo && !RegInfo->livein_empty()) {
-    OS << "Live Ins:";
+    OS << "Function Live Ins: ";
     for (MachineRegisterInfo::livein_iterator
          I = RegInfo->livein_begin(), E = RegInfo->livein_end(); I != E; ++I) {
       if (TRI)
-        OS << " " << TRI->getName(I->first);
+        OS << "%" << TRI->getName(I->first);
       else
-        OS << " Reg #" << I->first;
+        OS << " %physreg" << I->first;
       
       if (I->second)
-        OS << " in VR#" << I->second << ' ';
+        OS << " in reg%" << I->second;
+
+      if (next(I) != E)
+        OS << ", ";
     }
     OS << '\n';
   }
   if (RegInfo && !RegInfo->liveout_empty()) {
-    OS << "Live Outs:";
+    OS << "Function Live Outs: ";
     for (MachineRegisterInfo::liveout_iterator
-         I = RegInfo->liveout_begin(), E = RegInfo->liveout_end(); I != E; ++I)
+         I = RegInfo->liveout_begin(), E = RegInfo->liveout_end(); I != E; ++I){
       if (TRI)
-        OS << ' ' << TRI->getName(*I);
+        OS << '%' << TRI->getName(*I);
       else
-        OS << " Reg #" << *I;
+        OS << "%physreg" << *I;
+
+      if (next(I) != E)
+        OS << " ";
+    }
     OS << '\n';
   }
   
-  for (const_iterator BB = begin(), E = end(); BB != E; ++BB)
+  for (const_iterator BB = begin(), E = end(); BB != E; ++BB) {
+    OS << '\n';
     BB->print(OS);
+  }
 
-  OS << "\n# End machine code for " << Fn->getName() << "().\n\n";
+  OS << "\n# End machine code for function " << Fn->getName() << ".\n\n";
 }
 
 namespace llvm {
@@ -369,9 +441,10 @@ DebugLocTuple MachineFunction::getDebugLocTuple(DebugLoc DL) const {
 /// index with a negative value.
 ///
 int MachineFrameInfo::CreateFixedObject(uint64_t Size, int64_t SPOffset,
-                                        bool Immutable) {
+                                        bool Immutable, bool isSS) {
   assert(Size != 0 && "Cannot allocate zero size fixed stack objects!");
-  Objects.insert(Objects.begin(), StackObject(Size, 1, SPOffset, Immutable));
+  Objects.insert(Objects.begin(), StackObject(Size, 1, SPOffset, Immutable,
+                                              isSS));
   return -++NumFixedObjects;
 }
 
@@ -408,12 +481,16 @@ MachineFrameInfo::getPristineRegs(const MachineBasicBlock *MBB) const {
 
 
 void MachineFrameInfo::print(const MachineFunction &MF, raw_ostream &OS) const{
+  if (Objects.empty()) return;
+
   const TargetFrameInfo *FI = MF.getTarget().getFrameInfo();
   int ValOffset = (FI ? FI->getOffsetOfLocalArea() : 0);
 
+  OS << "Frame Objects:\n";
+
   for (unsigned i = 0, e = Objects.size(); i != e; ++i) {
     const StackObject &SO = Objects[i];
-    OS << "  <fi#" << (int)(i-NumFixedObjects) << ">: ";
+    OS << "  fi#" << (int)(i-NumFixedObjects) << ": ";
     if (SO.Size == ~0ULL) {
       OS << "dead\n";
       continue;
@@ -421,15 +498,14 @@ void MachineFrameInfo::print(const MachineFunction &MF, raw_ostream &OS) const{
     if (SO.Size == 0)
       OS << "variable sized";
     else
-      OS << "size is " << SO.Size << " byte" << (SO.Size != 1 ? "s," : ",");
-    OS << " alignment is " << SO.Alignment << " byte"
-       << (SO.Alignment != 1 ? "s," : ",");
+      OS << "size=" << SO.Size;
+    OS << ", align=" << SO.Alignment;
 
     if (i < NumFixedObjects)
-      OS << " fixed";
+      OS << ", fixed";
     if (i < NumFixedObjects || SO.SPOffset != -1) {
       int64_t Off = SO.SPOffset - ValOffset;
-      OS << " at location [SP";
+      OS << ", at location [SP";
       if (Off > 0)
         OS << "+" << Off;
       else if (Off < 0)
@@ -438,9 +514,6 @@ void MachineFrameInfo::print(const MachineFunction &MF, raw_ostream &OS) const{
     }
     OS << "\n";
   }
-
-  if (HasVarSizedObjects)
-    OS << "  Stack frame contains variable sized objects\n";
 }
 
 void MachineFrameInfo::dump(const MachineFunction &MF) const {
@@ -457,10 +530,6 @@ void MachineFrameInfo::dump(const MachineFunction &MF) const {
 unsigned MachineJumpTableInfo::getJumpTableIndex(
                                const std::vector<MachineBasicBlock*> &DestBBs) {
   assert(!DestBBs.empty() && "Cannot create an empty jump table!");
-  for (unsigned i = 0, e = JumpTables.size(); i != e; ++i)
-    if (JumpTables[i].MBBs == DestBBs)
-      return i;
-  
   JumpTables.push_back(MachineJumpTableEntry(DestBBs));
   return JumpTables.size()-1;
 }
@@ -472,24 +541,40 @@ MachineJumpTableInfo::ReplaceMBBInJumpTables(MachineBasicBlock *Old,
                                              MachineBasicBlock *New) {
   assert(Old != New && "Not making a change?");
   bool MadeChange = false;
-  for (size_t i = 0, e = JumpTables.size(); i != e; ++i) {
-    MachineJumpTableEntry &JTE = JumpTables[i];
-    for (size_t j = 0, e = JTE.MBBs.size(); j != e; ++j)
-      if (JTE.MBBs[j] == Old) {
-        JTE.MBBs[j] = New;
-        MadeChange = true;
-      }
-  }
+  for (size_t i = 0, e = JumpTables.size(); i != e; ++i)
+    if (ReplaceMBBInJumpTable(i, Old, New))
+      MadeChange = true;
+  return MadeChange;
+}
+
+/// ReplaceMBBInJumpTable - If Old is a target of the jump tables, update
+/// the jump table to branch to New instead.
+bool
+MachineJumpTableInfo::ReplaceMBBInJumpTable(unsigned Idx,
+                                            MachineBasicBlock *Old,
+                                            MachineBasicBlock *New) {
+  assert(Old != New && "Not making a change?");
+  bool MadeChange = false;
+  MachineJumpTableEntry &JTE = JumpTables[Idx];
+  for (size_t j = 0, e = JTE.MBBs.size(); j != e; ++j)
+    if (JTE.MBBs[j] == Old) {
+      JTE.MBBs[j] = New;
+      MadeChange = true;
+    }
   return MadeChange;
 }
 
 void MachineJumpTableInfo::print(raw_ostream &OS) const {
-  // FIXME: this is lame, maybe we could print out the MBB numbers or something
-  // like {1, 2, 4, 5, 3, 0}
+  if (JumpTables.empty()) return;
+
+  OS << "Jump Tables:\n";
+
   for (unsigned i = 0, e = JumpTables.size(); i != e; ++i) {
-    OS << "  <jt#" << i << "> has " << JumpTables[i].MBBs.size() 
-       << " entries\n";
+    OS << "  jt#" << i << ": ";
+    for (unsigned j = 0, f = JumpTables[i].MBBs.size(); j != f; ++j)
+      OS << " BB#" << JumpTables[i].MBBs[j]->getNumber();
   }
+
+  OS << '\n';
 }
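
For illustration (hypothetical block numbers): a single jump table with three
destination blocks would now print as

  Jump Tables:
    jt#0:  BB#1 BB#2 BB#4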
 
 void MachineJumpTableInfo::dump() const { print(errs()); }
@@ -518,6 +603,48 @@ MachineConstantPool::~MachineConstantPool() {
       delete Constants[i].Val.MachineCPVal;
 }
 
+/// CanShareConstantPoolEntry - Test whether the given two constants
+/// can be allocated the same constant pool entry.
+static bool CanShareConstantPoolEntry(Constant *A, Constant *B,
+                                      const TargetData *TD) {
+  // Handle the trivial case quickly.
+  if (A == B) return true;
+
+  // If they have the same type but weren't the same constant, quickly
+  // reject them.
+  if (A->getType() == B->getType()) return false;
+
+  // For now, only support constants with the same size.
+  if (TD->getTypeStoreSize(A->getType()) != TD->getTypeStoreSize(B->getType()))
+    return false;
+
+  // If a floating-point value and an integer value have the same encoding,
+  // they can share a constant-pool entry.
+  if (ConstantFP *AFP = dyn_cast<ConstantFP>(A))
+    if (ConstantInt *BI = dyn_cast<ConstantInt>(B))
+      return AFP->getValueAPF().bitcastToAPInt() == BI->getValue();
+  if (ConstantFP *BFP = dyn_cast<ConstantFP>(B))
+    if (ConstantInt *AI = dyn_cast<ConstantInt>(A))
+      return BFP->getValueAPF().bitcastToAPInt() == AI->getValue();
+
+  // Two vectors can share an entry if each pair of corresponding
+  // elements could.
+  if (ConstantVector *AV = dyn_cast<ConstantVector>(A))
+    if (ConstantVector *BV = dyn_cast<ConstantVector>(B)) {
+      if (AV->getType()->getNumElements() != BV->getType()->getNumElements())
+        return false;
+      for (unsigned i = 0, e = AV->getType()->getNumElements(); i != e; ++i)
+        if (!CanShareConstantPoolEntry(AV->getOperand(i),
+                                       BV->getOperand(i), TD))
+          return false;
+      return true;
+    }
+
+  // TODO: Handle other cases.
+
+  return false;
+}
+
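The float/int case above boils down to comparing raw encodings. A minimal
standalone sketch of that idea in plain C++ (assuming 32-bit IEEE floats;
this is an illustration, not the APFloat/APInt code the patch uses):

  #include <cstdint>
  #include <cstring>

  // True when a float and a 32-bit integer share one bit pattern, mirroring
  // the AFP->getValueAPF().bitcastToAPInt() == BI->getValue() check above.
  static bool sameBits(float F, uint32_t I) {
    uint32_t Bits;
    std::memcpy(&Bits, &F, sizeof(Bits));  // portable bit cast
    return Bits == I;
  }

  // Example: sameBits(1.0f, 0x3F800000u) is true, so a pool slot holding
  // 1.0f can also satisfy loads of the i32 constant 0x3F800000.
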
 /// getConstantPoolIndex - Create a new entry in the constant pool or return
 /// an existing one.  User must specify the log2 of the minimum required
 /// alignment for the object.
@@ -526,14 +653,17 @@ unsigned MachineConstantPool::getConstantPoolIndex(Constant *C,
                                                    unsigned Alignment) {
   assert(Alignment && "Alignment must be specified!");
   if (Alignment > PoolAlignment) PoolAlignment = Alignment;
-  
+
   // Check to see if we already have this constant.
   //
   // FIXME, this could be made much more efficient for large constant pools.
   for (unsigned i = 0, e = Constants.size(); i != e; ++i)
-    if (Constants[i].Val.ConstVal == C &&
-        (Constants[i].getAlignment() & (Alignment - 1)) == 0)
+    if (!Constants[i].isMachineConstantPoolEntry() &&
+        CanShareConstantPoolEntry(Constants[i].Val.ConstVal, C, TD)) {
+      if ((unsigned)Constants[i].getAlignment() < Alignment)
+        Constants[i].Alignment = Alignment;
       return i;
+    }
   
   Constants.push_back(MachineConstantPoolEntry(C, Alignment));
   return Constants.size()-1;
@@ -556,13 +686,16 @@ unsigned MachineConstantPool::getConstantPoolIndex(MachineConstantPoolValue *V,
 }
 
 void MachineConstantPool::print(raw_ostream &OS) const {
+  if (Constants.empty()) return;
+
+  OS << "Constant Pool:\n";
   for (unsigned i = 0, e = Constants.size(); i != e; ++i) {
-    OS << "  <cp#" << i << "> is";
+    OS << "  cp#" << i << ": ";
     if (Constants[i].isMachineConstantPoolEntry())
       Constants[i].Val.MachineCPVal->print(OS);
     else
       OS << *(Value*)Constants[i].Val.ConstVal;
-    OS << " , alignment=" << Constants[i].getAlignment();
+    OS << ", align=" << Constants[i].getAlignment();
     OS << "\n";
   }
 }
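
For illustration (hypothetical entry), the reworked constant-pool printer
emits lines such as

  Constant Pool:
    cp#0: float 1.000000e+00, align=4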
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineFunctionAnalysis.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineFunctionAnalysis.cpp
index ae9d5a9..f5febc5 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineFunctionAnalysis.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineFunctionAnalysis.cpp
@@ -24,12 +24,13 @@ X("Machine Function Analysis", "machine-function-analysis",
 
 char MachineFunctionAnalysis::ID = 0;
 
-MachineFunctionAnalysis::MachineFunctionAnalysis(TargetMachine &tm,
+MachineFunctionAnalysis::MachineFunctionAnalysis(const TargetMachine &tm,
                                                  CodeGenOpt::Level OL) :
   FunctionPass(&ID), TM(tm), OptLevel(OL), MF(0) {
 }
 
 MachineFunctionAnalysis::~MachineFunctionAnalysis() {
+  releaseMemory();
   assert(!MF && "MachineFunctionAnalysis left initialized!");
 }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineFunctionPass.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineFunctionPass.cpp
index 09f156a..2f8d4c9 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineFunctionPass.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineFunctionPass.cpp
@@ -11,12 +11,8 @@
 //
 //===----------------------------------------------------------------------===//
 
+#include "llvm/Function.h"
 #include "llvm/Analysis/AliasAnalysis.h"
-#include "llvm/Analysis/ScalarEvolution.h"
-#include "llvm/Analysis/IVUsers.h"
-#include "llvm/Analysis/LiveValues.h"
-#include "llvm/Analysis/LoopDependenceAnalysis.h"
-#include "llvm/Analysis/MemoryDependenceAnalysis.h"
 #include "llvm/CodeGen/MachineFunctionAnalysis.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
 using namespace llvm;
@@ -37,16 +33,18 @@ void MachineFunctionPass::getAnalysisUsage(AnalysisUsage &AU) const {
 
   // MachineFunctionPass preserves all LLVM IR passes, but there's no
   // high-level way to express this. Instead, just list a bunch of
-  // passes explicitly.
+  // passes explicitly. This does not include setPreservesCFG,
+  // because CodeGen overloads that to mean preserving the MachineBasicBlock
+  // CFG in addition to the LLVM IR CFG.
   AU.addPreserved<AliasAnalysis>();
-  AU.addPreserved<ScalarEvolution>();
-  AU.addPreserved<IVUsers>();
-  AU.addPreserved<LoopDependenceAnalysis>();
-  AU.addPreserved<MemoryDependenceAnalysis>();
-  AU.addPreserved<LiveValues>();
-  AU.addPreserved<DominatorTree>();
-  AU.addPreserved<DominanceFrontier>();
-  AU.addPreserved<LoopInfo>();
+  AU.addPreserved("scalar-evolution");
+  AU.addPreserved("iv-users");
+  AU.addPreserved("memdep");
+  AU.addPreserved("live-values");
+  AU.addPreserved("domtree");
+  AU.addPreserved("domfrontier");
+  AU.addPreserved("loops");
+  AU.addPreserved("lda");
 
   FunctionPass::getAnalysisUsage(AU);
 }
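
A minimal sketch of the pattern this hunk adopts (MyMachinePass is
hypothetical): naming a preserved analysis by its registered pass argument
avoids #including that analysis's header just to spell the type.

  void MyMachinePass::getAnalysisUsage(AnalysisUsage &AU) const {
    AU.addPreserved<AliasAnalysis>();     // template form: needs the header
    AU.addPreserved("scalar-evolution");  // string form: resolved by name
    MachineFunctionPass::getAnalysisUsage(AU);
  }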
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp
index 3d1c221..f11026f 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp
@@ -13,6 +13,7 @@
 
 #include "llvm/CodeGen/MachineInstr.h"
 #include "llvm/Constants.h"
+#include "llvm/Function.h"
 #include "llvm/InlineAsm.h"
 #include "llvm/Value.h"
 #include "llvm/Assembly/Writer.h"
@@ -24,6 +25,7 @@
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetInstrDesc.h"
 #include "llvm/Target/TargetRegisterInfo.h"
+#include "llvm/Analysis/AliasAnalysis.h"
 #include "llvm/Analysis/DebugInfo.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/LeakDetector.h"
@@ -179,29 +181,31 @@ bool MachineOperand::isIdenticalTo(const MachineOperand &Other) const {
   case MachineOperand::MO_ExternalSymbol:
     return !strcmp(getSymbolName(), Other.getSymbolName()) &&
            getOffset() == Other.getOffset();
+  case MachineOperand::MO_BlockAddress:
+    return getBlockAddress() == Other.getBlockAddress();
   }
 }
 
 /// print - Print the specified machine operand.
 ///
 void MachineOperand::print(raw_ostream &OS, const TargetMachine *TM) const {
+  // If the instruction is embedded into a basic block, we can find the
+  // target info for the instruction.
+  if (!TM)
+    if (const MachineInstr *MI = getParent())
+      if (const MachineBasicBlock *MBB = MI->getParent())
+        if (const MachineFunction *MF = MBB->getParent())
+          TM = &MF->getTarget();
+
   switch (getType()) {
   case MachineOperand::MO_Register:
     if (getReg() == 0 || TargetRegisterInfo::isVirtualRegister(getReg())) {
       OS << "%reg" << getReg();
     } else {
-      // If the instruction is embedded into a basic block, we can find the
-      // target info for the instruction.
-      if (TM == 0)
-        if (const MachineInstr *MI = getParent())
-          if (const MachineBasicBlock *MBB = MI->getParent())
-            if (const MachineFunction *MF = MBB->getParent())
-              TM = &MF->getTarget();
-      
       if (TM)
         OS << "%" << TM->getRegisterInfo()->get(getReg()).Name;
       else
-        OS << "%mreg" << getReg();
+        OS << "%physreg" << getReg();
     }
 
     if (getSubReg() != 0)
@@ -211,17 +215,19 @@ void MachineOperand::print(raw_ostream &OS, const TargetMachine *TM) const {
         isEarlyClobber()) {
       OS << '<';
       bool NeedComma = false;
-      if (isImplicit()) {
-        if (NeedComma) OS << ',';
-        OS << (isDef() ? "imp-def" : "imp-use");
-        NeedComma = true;
-      } else if (isDef()) {
+      if (isDef()) {
         if (NeedComma) OS << ',';
         if (isEarlyClobber())
           OS << "earlyclobber,";
+        if (isImplicit())
+          OS << "imp-";
         OS << "def";
         NeedComma = true;
+      } else if (isImplicit()) {
+          OS << "imp-use";
+          NeedComma = true;
       }
+
       if (isKill() || isDead() || isUndef()) {
         if (NeedComma) OS << ',';
         if (isKill())  OS << "kill";
@@ -239,15 +245,13 @@ void MachineOperand::print(raw_ostream &OS, const TargetMachine *TM) const {
     OS << getImm();
     break;
   case MachineOperand::MO_FPImmediate:
-    if (getFPImm()->getType() == Type::getFloatTy(getFPImm()->getContext()))
+    if (getFPImm()->getType()->isFloatTy())
       OS << getFPImm()->getValueAPF().convertToFloat();
     else
       OS << getFPImm()->getValueAPF().convertToDouble();
     break;
   case MachineOperand::MO_MachineBasicBlock:
-    OS << "mbb<"
-       << ((Value*)getMBB()->getBasicBlock())->getName()
-       << "," << (void*)getMBB() << '>';
+    OS << "<BB#" << getMBB()->getNumber() << ">";
     break;
   case MachineOperand::MO_FrameIndex:
     OS << "<fi#" << getIndex() << '>';
@@ -261,7 +265,8 @@ void MachineOperand::print(raw_ostream &OS, const TargetMachine *TM) const {
     OS << "<jt#" << getIndex() << '>';
     break;
   case MachineOperand::MO_GlobalAddress:
-    OS << "<ga:" << ((Value*)getGlobal())->getName();
+    OS << "<ga:";
+    WriteAsOperand(OS, getGlobal(), /*PrintType=*/false);
     if (getOffset()) OS << "+" << getOffset();
     OS << '>';
     break;
@@ -270,6 +275,11 @@ void MachineOperand::print(raw_ostream &OS, const TargetMachine *TM) const {
     if (getOffset()) OS << "+" << getOffset();
     OS << '>';
     break;
+  case MachineOperand::MO_BlockAddress:
+    OS << "<";
+    WriteAsOperand(OS, getBlockAddress(), /*PrintType=*/false);
+    OS << '>';
+    break;
   default:
     llvm_unreachable("Unrecognized operand type");
   }
@@ -366,7 +376,7 @@ raw_ostream &llvm::operator<<(raw_ostream &OS, const MachineMemOperand &MMO) {
 /// MachineInstr ctor - This constructor creates a dummy MachineInstr with
 /// TID NULL and no operands.
 MachineInstr::MachineInstr()
-  : TID(0), NumImplicitOps(0), MemRefs(0), MemRefsEnd(0),
+  : TID(0), NumImplicitOps(0), AsmPrinterFlags(0), MemRefs(0), MemRefsEnd(0),
     Parent(0), debugLoc(DebugLoc::getUnknownLoc()) {
   // Make sure that we get added to a machine basicblock
   LeakDetector::addGarbageObject(this);
@@ -386,7 +396,8 @@ void MachineInstr::addImplicitDefUseOperands() {
 /// TargetInstrDesc or the numOperands if it is not zero. (for
 /// instructions with variable number of operands).
 MachineInstr::MachineInstr(const TargetInstrDesc &tid, bool NoImp)
-  : TID(&tid), NumImplicitOps(0), MemRefs(0), MemRefsEnd(0), Parent(0),
+  : TID(&tid), NumImplicitOps(0), AsmPrinterFlags(0),
+    MemRefs(0), MemRefsEnd(0), Parent(0),
     debugLoc(DebugLoc::getUnknownLoc()) {
   if (!NoImp && TID->getImplicitDefs())
     for (const unsigned *ImpDefs = TID->getImplicitDefs(); *ImpDefs; ++ImpDefs)
@@ -404,7 +415,7 @@ MachineInstr::MachineInstr(const TargetInstrDesc &tid, bool NoImp)
 /// MachineInstr ctor - As above, but with a DebugLoc.
 MachineInstr::MachineInstr(const TargetInstrDesc &tid, const DebugLoc dl,
                            bool NoImp)
-  : TID(&tid), NumImplicitOps(0), MemRefs(0), MemRefsEnd(0),
+  : TID(&tid), NumImplicitOps(0), AsmPrinterFlags(0), MemRefs(0), MemRefsEnd(0),
     Parent(0), debugLoc(dl) {
   if (!NoImp && TID->getImplicitDefs())
     for (const unsigned *ImpDefs = TID->getImplicitDefs(); *ImpDefs; ++ImpDefs)
@@ -424,7 +435,8 @@ MachineInstr::MachineInstr(const TargetInstrDesc &tid, const DebugLoc dl,
 /// basic block.
 ///
 MachineInstr::MachineInstr(MachineBasicBlock *MBB, const TargetInstrDesc &tid)
-  : TID(&tid), NumImplicitOps(0), MemRefs(0), MemRefsEnd(0), Parent(0), 
+  : TID(&tid), NumImplicitOps(0), AsmPrinterFlags(0),
+    MemRefs(0), MemRefsEnd(0), Parent(0), 
     debugLoc(DebugLoc::getUnknownLoc()) {
   assert(MBB && "Cannot use inserting ctor with null basic block!");
   if (TID->ImplicitDefs)
@@ -444,7 +456,7 @@ MachineInstr::MachineInstr(MachineBasicBlock *MBB, const TargetInstrDesc &tid)
 ///
 MachineInstr::MachineInstr(MachineBasicBlock *MBB, const DebugLoc dl,
                            const TargetInstrDesc &tid)
-  : TID(&tid), NumImplicitOps(0), MemRefs(0), MemRefsEnd(0),
+  : TID(&tid), NumImplicitOps(0), AsmPrinterFlags(0), MemRefs(0), MemRefsEnd(0),
     Parent(0), debugLoc(dl) {
   assert(MBB && "Cannot use inserting ctor with null basic block!");
   if (TID->ImplicitDefs)
@@ -463,7 +475,7 @@ MachineInstr::MachineInstr(MachineBasicBlock *MBB, const DebugLoc dl,
 /// MachineInstr ctor - Copies MachineInstr arg exactly
 ///
 MachineInstr::MachineInstr(MachineFunction &MF, const MachineInstr &MI)
-  : TID(&MI.getDesc()), NumImplicitOps(0),
+  : TID(&MI.getDesc()), NumImplicitOps(0), AsmPrinterFlags(0),
     MemRefs(MI.MemRefs), MemRefsEnd(MI.MemRefsEnd),
     Parent(0), debugLoc(MI.getDebugLoc()) {
   Operands.reserve(MI.getNumOperands());
@@ -932,7 +944,8 @@ void MachineInstr::copyPredicates(const MachineInstr *MI) {
 /// SawStore is set to true, it means that there is a store (or call) between
 /// the instruction's location and its intended destination.
 bool MachineInstr::isSafeToMove(const TargetInstrInfo *TII,
-                                bool &SawStore) const {
+                                bool &SawStore,
+                                AliasAnalysis *AA) const {
   // Ignore stuff that we obviously can't move.
   if (TID->mayStore() || TID->isCall()) {
     SawStore = true;
@@ -946,7 +959,7 @@ bool MachineInstr::isSafeToMove(const TargetInstrInfo *TII,
   // destination. The check for isInvariantLoad gives the target the chance to
   // classify the load as always returning a constant, e.g. a constant pool
   // load.
-  if (TID->mayLoad() && !TII->isInvariantLoad(this))
+  if (TID->mayLoad() && !isInvariantLoad(AA))
     // Otherwise, this is a real load.  If there is a store between the load and
     // end of block, or if the load is volatile, we can't move it.
     return !SawStore && !hasVolatileMemoryRef();
@@ -957,11 +970,11 @@ bool MachineInstr::isSafeToMove(const TargetInstrInfo *TII,
 /// isSafeToReMat - Return true if it's safe to rematerialize the specified
 /// instruction which defined the specified register instead of copying it.
 bool MachineInstr::isSafeToReMat(const TargetInstrInfo *TII,
-                                 unsigned DstReg) const {
+                                 unsigned DstReg,
+                                 AliasAnalysis *AA) const {
   bool SawStore = false;
-  if (!getDesc().isRematerializable() ||
-      !TII->isTriviallyReMaterializable(this) ||
-      !isSafeToMove(TII, SawStore))
+  if (!TII->isTriviallyReMaterializable(this, AA) ||
+      !isSafeToMove(TII, SawStore, AA))
     return false;
   for (unsigned i = 0, e = getNumOperands(); i != e; ++i) {
     const MachineOperand &MO = getOperand(i);
@@ -1005,30 +1018,122 @@ bool MachineInstr::hasVolatileMemoryRef() const {
   return false;
 }
 
+/// isInvariantLoad - Return true if this instruction is loading from a
+/// location whose value is invariant across the function.  For example,
+/// loading a value from the constant pool or from the argument area
+/// of a function if it does not change.  This should only return true if
+/// *all* loads the instruction does are invariant (if it does multiple loads).
+bool MachineInstr::isInvariantLoad(AliasAnalysis *AA) const {
+  // If the instruction doesn't load at all, it isn't an invariant load.
+  if (!TID->mayLoad())
+    return false;
+
+  // If the instruction has lost its memoperands, conservatively assume that
+  // it may not be an invariant load.
+  if (memoperands_empty())
+    return false;
+
+  const MachineFrameInfo *MFI = getParent()->getParent()->getFrameInfo();
+
+  for (mmo_iterator I = memoperands_begin(),
+       E = memoperands_end(); I != E; ++I) {
+    if ((*I)->isVolatile()) return false;
+    if ((*I)->isStore()) return false;
+
+    if (const Value *V = (*I)->getValue()) {
+      // A load from a constant PseudoSourceValue is invariant.
+      if (const PseudoSourceValue *PSV = dyn_cast<PseudoSourceValue>(V))
+        if (PSV->isConstant(MFI))
+          continue;
+      // If we have an AliasAnalysis, ask it whether the memory is constant.
+      if (AA && AA->pointsToConstantMemory(V))
+        continue;
+    }
+
+    // Otherwise assume conservatively.
+    return false;
+  }
+
+  // Everything checks out.
+  return true;
+}
+
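A sketch of the intended call pattern (hypothetical caller; AA may be null,
in which case only the PseudoSourceValue-based facts apply):

  // A load is only safe to move when every location it reads is provably
  // constant for the whole function; the updated isSafeToMove consults
  // isInvariantLoad(AA) internally for exactly this.
  bool SawStore = false;
  if (!MI->isSafeToMove(TII, SawStore, AA))
    return false;
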
 void MachineInstr::dump() const {
   errs() << "  " << *this;
 }
 
 void MachineInstr::print(raw_ostream &OS, const TargetMachine *TM) const {
-  // Specialize printing if op#0 is definition
-  unsigned StartOp = 0;
-  if (getNumOperands() && getOperand(0).isReg() && getOperand(0).isDef()) {
-    getOperand(0).print(OS, TM);
-    OS << " = ";
-    ++StartOp;   // Don't print this operand again!
+  // We can be a bit tidier if we know the TargetMachine and/or MachineFunction.
+  const MachineFunction *MF = 0;
+  if (const MachineBasicBlock *MBB = getParent()) {
+    MF = MBB->getParent();
+    if (!TM && MF)
+      TM = &MF->getTarget();
+  }
+
+  // Print explicitly defined operands on the left of an assignment syntax.
+  unsigned StartOp = 0, e = getNumOperands();
+  for (; StartOp < e && getOperand(StartOp).isReg() &&
+         getOperand(StartOp).isDef() &&
+         !getOperand(StartOp).isImplicit();
+       ++StartOp) {
+    if (StartOp != 0) OS << ", ";
+    getOperand(StartOp).print(OS, TM);
   }
 
+  if (StartOp != 0)
+    OS << " = ";
+
+  // Print the opcode name.
   OS << getDesc().getName();
 
+  // Print the rest of the operands.
+  bool OmittedAnyCallClobbers = false;
+  bool FirstOp = true;
   for (unsigned i = StartOp, e = getNumOperands(); i != e; ++i) {
-    if (i != StartOp)
-      OS << ",";
+    const MachineOperand &MO = getOperand(i);
+
+    // Omit call-clobbered registers which aren't used anywhere. This makes
+    // call instructions much less noisy on targets where calls clobber lots
+    // of registers. Don't rely on MO.isDead() because we may be called before
+    // LiveVariables is run, or we may be looking at a non-allocatable reg.
+    if (MF && getDesc().isCall() &&
+        MO.isReg() && MO.isImplicit() && MO.isDef()) {
+      unsigned Reg = MO.getReg();
+      if (Reg != 0 && TargetRegisterInfo::isPhysicalRegister(Reg)) {
+        const MachineRegisterInfo &MRI = MF->getRegInfo();
+        if (MRI.use_empty(Reg) && !MRI.isLiveOut(Reg)) {
+          bool HasAliasLive = false;
+          for (const unsigned *Alias = TM->getRegisterInfo()->getAliasSet(Reg);
+               unsigned AliasReg = *Alias; ++Alias)
+            if (!MRI.use_empty(AliasReg) || MRI.isLiveOut(AliasReg)) {
+              HasAliasLive = true;
+              break;
+            }
+          if (!HasAliasLive) {
+            OmittedAnyCallClobbers = true;
+            continue;
+          }
+        }
+      }
+    }
+
+    if (FirstOp) FirstOp = false; else OS << ",";
     OS << " ";
-    getOperand(i).print(OS, TM);
+    MO.print(OS, TM);
   }
 
+  // Briefly indicate whether any call clobbers were omitted.
+  if (OmittedAnyCallClobbers) {
+    if (FirstOp) FirstOp = false; else OS << ",";
+    OS << " ...";
+  }
+
+  bool HaveSemi = false;
   if (!memoperands_empty()) {
-    OS << ", Mem:";
+    if (!HaveSemi) OS << ";"; HaveSemi = true;
+
+    OS << " mem:";
     for (mmo_iterator i = memoperands_begin(), e = memoperands_end();
          i != e; ++i) {
       OS << **i;
@@ -1037,14 +1142,17 @@ void MachineInstr::print(raw_ostream &OS, const TargetMachine *TM) const {
     }
   }
 
-  if (!debugLoc.isUnknown()) {
-    const MachineFunction *MF = getParent()->getParent();
+  if (!debugLoc.isUnknown() && MF) {
+    if (!HaveSemi) OS << ";"; HaveSemi = true;
+
+    // TODO: print InlinedAtLoc information
+
     DebugLocTuple DLT = MF->getDebugLocTuple(debugLoc);
-    DICompileUnit CU(DLT.CompileUnit);
-    OS << " [dbg: "
-       << CU.getDirectory() << '/' << CU.getFilename() << ","
-       << DLT.Line << ","
-       << DLT.Col  << "]";
+    DICompileUnit CU(DLT.Scope);
+    OS << " dbg:";
+    if (!CU.isNull())
+      OS << CU.getDirectory() << '/' << CU.getFilename() << ":";
+    OS << DLT.Line << ":" << DLT.Col;
   }
 
   OS << "\n";
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineLICM.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineLICM.cpp
index 61678f1..66de535 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineLICM.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineLICM.cpp
@@ -22,15 +22,18 @@
 
 #define DEBUG_TYPE "machine-licm"
 #include "llvm/CodeGen/Passes.h"
+#include "llvm/CodeGen/MachineConstantPool.h"
 #include "llvm/CodeGen/MachineDominators.h"
 #include "llvm/CodeGen/MachineLoopInfo.h"
+#include "llvm/CodeGen/MachineMemOperand.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/PseudoSourceValue.h"
 #include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetMachine.h"
+#include "llvm/Analysis/AliasAnalysis.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 
@@ -40,25 +43,27 @@ STATISTIC(NumHoisted, "Number of machine instructions hoisted out of loops");
 STATISTIC(NumCSEed,   "Number of hoisted machine instructions CSEed");
 
 namespace {
-  class VISIBILITY_HIDDEN MachineLICM : public MachineFunctionPass {
+  class MachineLICM : public MachineFunctionPass {
+    MachineConstantPool *MCP;
     const TargetMachine   *TM;
     const TargetInstrInfo *TII;
     const TargetRegisterInfo *TRI;
     BitVector AllocatableSet;
 
     // Various analyses that we use...
+    AliasAnalysis        *AA;      // Alias analysis info.
     MachineLoopInfo      *LI;      // Current MachineLoopInfo
     MachineDominatorTree *DT;      // Machine dominator tree for the cur loop
     MachineRegisterInfo  *RegInfo; // Machine register information
 
     // State that is updated as we process loops
     bool         Changed;          // True if a loop is changed.
+    bool         FirstInLoop;      // True if it's the first LICM in the loop.
     MachineLoop *CurLoop;          // The current loop we are working on.
     MachineBasicBlock *CurPreheader; // The preheader for CurLoop.
 
-    // For each BB and opcode pair, keep a list of hoisted instructions.
-    DenseMap<std::pair<unsigned, unsigned>,
-      std::vector<const MachineInstr*> > CSEMap;
+    // For each opcode, keep a list of potential CSE instructions.
+    DenseMap<unsigned, std::vector<const MachineInstr*> > CSEMap;
   public:
     static char ID; // Pass identification, replacement for typeid
     MachineLICM() : MachineFunctionPass(&ID) {}
@@ -72,6 +77,7 @@ namespace {
       AU.setPreservesCFG();
       AU.addRequired<MachineLoopInfo>();
       AU.addRequired<MachineDominatorTree>();
+      AU.addRequired<AliasAnalysis>();
       AU.addPreserved<MachineLoopInfo>();
       AU.addPreserved<MachineDominatorTree>();
       MachineFunctionPass::getAnalysisUsage(AU);
@@ -101,10 +107,37 @@ namespace {
     ///
     void HoistRegion(MachineDomTreeNode *N);
 
+    /// isLoadFromConstantMemory - Return true if the given instruction is a
+    /// load from constant memory.
+    bool isLoadFromConstantMemory(MachineInstr *MI);
+
+    /// ExtractHoistableLoad - Unfold a load from the given machine instruction
+    /// if the load itself could be hoisted. Return the unfolded and hoistable
+    /// load, or null if the load couldn't be unfolded or if it wouldn't
+    /// be hoistable.
+    MachineInstr *ExtractHoistableLoad(MachineInstr *MI);
+
+    /// LookForDuplicate - Find an instruction among PrevMIs that is a
+    /// duplicate of MI. Return this instruction if it's found.
+    const MachineInstr *LookForDuplicate(const MachineInstr *MI,
+                                     std::vector<const MachineInstr*> &PrevMIs);
+
+    /// EliminateCSE - Given a LICM'ed instruction, look for an instruction in
+    /// the preheader that computes the same value. If one is found, replace
+    /// all uses of the instruction's defs with the defs of the existing
+    /// instruction rather than hoisting the instruction to the preheader.
+    bool EliminateCSE(MachineInstr *MI,
+           DenseMap<unsigned, std::vector<const MachineInstr*> >::iterator &CI);
+
     /// Hoist - When an instruction is found to only use loop invariant operands
     /// that is safe to hoist, this instruction is called to do the dirty work.
     ///
-    void Hoist(MachineInstr &MI);
+    void Hoist(MachineInstr *MI);
+
+    /// InitCSEMap - Initialize the CSE map with instructions that are in the
+    /// current loop preheader that may become duplicates of instructions that
+    /// are hoisted out of the loop.
+    void InitCSEMap(MachineBasicBlock *BB);
   };
 } // end anonymous namespace
 
@@ -128,13 +161,10 @@ static bool LoopIsOuterMostWithPreheader(MachineLoop *CurLoop) {
 /// loop.
 ///
 bool MachineLICM::runOnMachineFunction(MachineFunction &MF) {
-  const Function *F = MF.getFunction();
-  if (F->hasFnAttr(Attribute::OptimizeForSize))
-    return false;
-
   DEBUG(errs() << "******** Machine LICM ********\n");
 
-  Changed = false;
+  Changed = FirstInLoop = false;
+  MCP = MF.getConstantPool();
   TM = &MF.getTarget();
   TII = TM->getInstrInfo();
   TRI = TM->getRegisterInfo();
@@ -144,9 +174,9 @@ bool MachineLICM::runOnMachineFunction(MachineFunction &MF) {
   // Get our Loop information...
   LI = &getAnalysis<MachineLoopInfo>();
   DT = &getAnalysis<MachineDominatorTree>();
+  AA = &getAnalysis<AliasAnalysis>();
 
-  for (MachineLoopInfo::iterator
-         I = LI->begin(), E = LI->end(); I != E; ++I) {
+  for (MachineLoopInfo::iterator I = LI->begin(), E = LI->end(); I != E; ++I) {
     CurLoop = *I;
 
     // Only visit outer-most preheader-sporting loops.
@@ -163,7 +193,11 @@ bool MachineLICM::runOnMachineFunction(MachineFunction &MF) {
     if (!CurPreheader)
       continue;
 
+    // CSEMap is initialized for loop header when the first instruction is
+    // being hoisted.
+    FirstInLoop = true;
     HoistRegion(DT->getNode(CurLoop->getHeader()));
+    CSEMap.clear();
   }
 
   return Changed;
@@ -184,10 +218,7 @@ void MachineLICM::HoistRegion(MachineDomTreeNode *N) {
   for (MachineBasicBlock::iterator
          MII = BB->begin(), E = BB->end(); MII != E; ) {
     MachineBasicBlock::iterator NextMII = MII; ++NextMII;
-    MachineInstr &MI = *MII;
-
-    Hoist(MI);
-
+    Hoist(&*MII);
     MII = NextMII;
   }
 
@@ -214,10 +245,10 @@ bool MachineLICM::IsLoopInvariantInst(MachineInstr &I) {
     // Okay, this instruction does a load. As a refinement, we allow the target
     // to decide whether the loaded value is actually a constant. If so, we can
     // actually use it as a load.
-    if (!TII->isInvariantLoad(&I))
-      // FIXME: we should be able to sink loads with no other side effects if
-      // there is nothing that can change memory from here until the end of
-      // block. This is a trivial form of alias analysis.
+    if (!I.isInvariantLoad(AA))
+      // FIXME: we should be able to hoist loads with no other side effects if
+      // there are no other instructions which can change memory in this loop.
+      // This is a trivial form of alias analysis.
       return false;
   }
 
@@ -259,8 +290,6 @@ bool MachineLICM::IsLoopInvariantInst(MachineInstr &I) {
 
     // Don't hoist an instruction that uses or defines a physical register.
     if (TargetRegisterInfo::isPhysicalRegister(Reg)) {
-      // If this is a physical register use, we can't move it.  If it is a def,
-      // we can move it, but only if the def is dead.
       if (MO.isUse()) {
         // If the physreg has no defs anywhere, it's just an ambient register
         // and we can freely move its uses. Alternatively, if it's allocatable,
@@ -313,20 +342,42 @@ static bool HasPHIUses(unsigned Reg, MachineRegisterInfo *RegInfo) {
   return false;
 }
 
+/// isLoadFromConstantMemory - Return true if the given instruction is a
+/// load from constant memory. Machine LICM will hoist these even if they are
+/// not re-materializable.
+bool MachineLICM::isLoadFromConstantMemory(MachineInstr *MI) {
+  if (!MI->getDesc().mayLoad()) return false;
+  if (!MI->hasOneMemOperand()) return false;
+  MachineMemOperand *MMO = *MI->memoperands_begin();
+  if (MMO->isVolatile()) return false;
+  if (!MMO->getValue()) return false;
+  const PseudoSourceValue *PSV = dyn_cast<PseudoSourceValue>(MMO->getValue());
+  if (PSV) {
+    MachineFunction &MF = *MI->getParent()->getParent();
+    return PSV->isConstant(MF.getFrameInfo());
+  } else {
+    return AA->pointsToConstantMemory(MMO->getValue());
+  }
+}
+
 /// IsProfitableToHoist - Return true if it is potentially profitable to hoist
 /// the given loop invariant.
 bool MachineLICM::IsProfitableToHoist(MachineInstr &MI) {
   if (MI.getOpcode() == TargetInstrInfo::IMPLICIT_DEF)
     return false;
 
-  const TargetInstrDesc &TID = MI.getDesc();
-
   // FIXME: For now, only hoist re-materializable instructions. LICM will
   // increase register pressure. We want to make sure it doesn't increase
   // spilling.
-  if (!TID.mayLoad() && (!TID.isRematerializable() ||
-                         !TII->isTriviallyReMaterializable(&MI)))
-    return false;
+  // Also hoist loads from constant memory, e.g. loads from stubs or the GOT.
+  // Hoisting these tends to help performance when register pressure is low.
+  // The trade-off is that it may cause spills when pressure is high, since it
+  // ends up adding a store in the loop preheader. But the reload is no more
+  // expensive. A side benefit is that these loads are frequently CSE'ed.
+  if (!TII->isTriviallyReMaterializable(&MI, AA)) {
+    if (!isLoadFromConstantMemory(&MI))
+      return false;
+  }
 
   // If result(s) of this instruction is used by PHIs, then don't hoist it.
   // The presence of joins makes it difficult for current register allocator
@@ -342,89 +393,148 @@ bool MachineLICM::IsProfitableToHoist(MachineInstr &MI) {
   return true;
 }
 
-static const MachineInstr *LookForDuplicate(const MachineInstr *MI,
-                                      std::vector<const MachineInstr*> &PrevMIs,
-                                      MachineRegisterInfo *RegInfo) {
-  unsigned NumOps = MI->getNumOperands();
-  for (unsigned i = 0, e = PrevMIs.size(); i != e; ++i) {
-    const MachineInstr *PrevMI = PrevMIs[i];
-    unsigned NumOps2 = PrevMI->getNumOperands();
-    if (NumOps != NumOps2)
-      continue;
-    bool IsSame = true;
-    for (unsigned j = 0; j != NumOps; ++j) {
-      const MachineOperand &MO = MI->getOperand(j);
-      if (MO.isReg() && MO.isDef()) {
-        if (RegInfo->getRegClass(MO.getReg()) !=
-            RegInfo->getRegClass(PrevMI->getOperand(j).getReg())) {
-          IsSame = false;
-          break;
-        }
-        continue;
-      }
-      if (!MO.isIdenticalTo(PrevMI->getOperand(j))) {
-        IsSame = false;
-        break;
+MachineInstr *MachineLICM::ExtractHoistableLoad(MachineInstr *MI) {
+  // We may be able to unfold a load from this instruction and hoist that.
+  // First test whether the instruction is loading from an amenable
+  // memory location.
+  if (!isLoadFromConstantMemory(MI))
+    return 0;
+
+  // Next determine the register class for a temporary register.
+  unsigned LoadRegIndex;
+  unsigned NewOpc =
+    TII->getOpcodeAfterMemoryUnfold(MI->getOpcode(),
+                                    /*UnfoldLoad=*/true,
+                                    /*UnfoldStore=*/false,
+                                    &LoadRegIndex);
+  if (NewOpc == 0) return 0;
+  const TargetInstrDesc &TID = TII->get(NewOpc);
+  if (TID.getNumDefs() != 1) return 0;
+  const TargetRegisterClass *RC = TID.OpInfo[LoadRegIndex].getRegClass(TRI);
+  // Ok, we're unfolding. Create a temporary register and do the unfold.
+  unsigned Reg = RegInfo->createVirtualRegister(RC);
+
+  MachineFunction &MF = *MI->getParent()->getParent();
+  SmallVector<MachineInstr *, 2> NewMIs;
+  bool Success =
+    TII->unfoldMemoryOperand(MF, MI, Reg,
+                             /*UnfoldLoad=*/true, /*UnfoldStore=*/false,
+                             NewMIs);
+  (void)Success;
+  assert(Success &&
+         "unfoldMemoryOperand failed when getOpcodeAfterMemoryUnfold "
+         "succeeded!");
+  assert(NewMIs.size() == 2 &&
+         "Unfolded a load into multiple instructions!");
+  MachineBasicBlock *MBB = MI->getParent();
+  MBB->insert(MI, NewMIs[0]);
+  MBB->insert(MI, NewMIs[1]);
+  // If unfolding produced a load that wasn't loop-invariant or profitable to
+  // hoist, discard the new instructions and bail.
+  if (!IsLoopInvariantInst(*NewMIs[0]) || !IsProfitableToHoist(*NewMIs[0])) {
+    NewMIs[0]->eraseFromParent();
+    NewMIs[1]->eraseFromParent();
+    return 0;
+  }
+  // Otherwise we successfully unfolded a load that we can hoist.
+  MI->eraseFromParent();
+  return NewMIs[0];
+}
+
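For intuition (illustrative x86; the label and registers are hypothetical,
not from the patch): a folded-load instruction inside a loop such as
"addl LCPI0_0(%rip), %eax" would be unfolded into a pure load
"movl LCPI0_0(%rip), %ecx", which is loop-invariant and hoistable, while the
register-register "addl %ecx, %eax" stays in the loop.
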
+void MachineLICM::InitCSEMap(MachineBasicBlock *BB) {
+  for (MachineBasicBlock::iterator I = BB->begin(),E = BB->end(); I != E; ++I) {
+    const MachineInstr *MI = &*I;
+    // FIXME: For now, only hoist re-materializable instructions. LICM will
+    // increase register pressure. We want to make sure it doesn't increase
+    // spilling.
+    if (TII->isTriviallyReMaterializable(MI, AA)) {
+      unsigned Opcode = MI->getOpcode();
+      DenseMap<unsigned, std::vector<const MachineInstr*> >::iterator
+        CI = CSEMap.find(Opcode);
+      if (CI != CSEMap.end())
+        CI->second.push_back(MI);
+      else {
+        std::vector<const MachineInstr*> CSEMIs;
+        CSEMIs.push_back(MI);
+        CSEMap.insert(std::make_pair(Opcode, CSEMIs));
       }
     }
-    if (IsSame)
+  }
+}
+
+const MachineInstr*
+MachineLICM::LookForDuplicate(const MachineInstr *MI,
+                              std::vector<const MachineInstr*> &PrevMIs) {
+  for (unsigned i = 0, e = PrevMIs.size(); i != e; ++i) {
+    const MachineInstr *PrevMI = PrevMIs[i];
+    if (TII->isIdentical(MI, PrevMI, RegInfo))
       return PrevMI;
   }
   return 0;
 }
 
+bool MachineLICM::EliminateCSE(MachineInstr *MI,
+          DenseMap<unsigned, std::vector<const MachineInstr*> >::iterator &CI) {
+  if (CI == CSEMap.end())
+    return false;
+
+  if (const MachineInstr *Dup = LookForDuplicate(MI, CI->second)) {
+    DEBUG(errs() << "CSEing " << *MI << " with " << *Dup);
+    for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+      const MachineOperand &MO = MI->getOperand(i);
+      if (MO.isReg() && MO.isDef())
+        RegInfo->replaceRegWith(MO.getReg(), Dup->getOperand(i).getReg());
+    }
+    MI->eraseFromParent();
+    ++NumCSEed;
+    return true;
+  }
+  return false;
+}
+
 /// Hoist - When an instruction is found to use only loop invariant operands
 /// that are safe to hoist, this instruction is called to do the dirty work.
 ///
-void MachineLICM::Hoist(MachineInstr &MI) {
-  if (!IsLoopInvariantInst(MI)) return;
-  if (!IsProfitableToHoist(MI)) return;
+void MachineLICM::Hoist(MachineInstr *MI) {
+  // First check whether we should hoist this instruction.
+  if (!IsLoopInvariantInst(*MI) || !IsProfitableToHoist(*MI)) {
+    // If not, try unfolding a hoistable load.
+    MI = ExtractHoistableLoad(MI);
+    if (!MI) return;
+  }
 
   // Now move the instructions to the predecessor, inserting it before any
   // terminator instructions.
   DEBUG({
-      errs() << "Hoisting " << MI;
+      errs() << "Hoisting " << *MI;
       if (CurPreheader->getBasicBlock())
         errs() << " to MachineBasicBlock "
-               << CurPreheader->getBasicBlock()->getName();
-      if (MI.getParent()->getBasicBlock())
+               << CurPreheader->getName();
+      if (MI->getParent()->getBasicBlock())
         errs() << " from MachineBasicBlock "
-               << MI.getParent()->getBasicBlock()->getName();
+               << MI->getParent()->getName();
       errs() << "\n";
     });
 
+  // If this is the first instruction being hoisted to the preheader,
+  // initialize the CSE map with potential common expressions.
+  if (FirstInLoop) {
+    InitCSEMap(CurPreheader);
+    FirstInLoop = false;
+  }
+
   // Look for opportunity to CSE the hoisted instruction.
-  std::pair<unsigned, unsigned> BBOpcPair =
-    std::make_pair(CurPreheader->getNumber(), MI.getOpcode());
-  DenseMap<std::pair<unsigned, unsigned>,
-    std::vector<const MachineInstr*> >::iterator CI = CSEMap.find(BBOpcPair);
-  bool DoneCSE = false;
-  if (CI != CSEMap.end()) {
-    const MachineInstr *Dup = LookForDuplicate(&MI, CI->second, RegInfo);
-    if (Dup) {
-      DEBUG(errs() << "CSEing " << MI << " with " << *Dup);
-      for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
-        const MachineOperand &MO = MI.getOperand(i);
-        if (MO.isReg() && MO.isDef())
-          RegInfo->replaceRegWith(MO.getReg(), Dup->getOperand(i).getReg());
-      }
-      MI.eraseFromParent();
-      DoneCSE = true;
-      ++NumCSEed;
-    }
-  }
+  unsigned Opcode = MI->getOpcode();
+  DenseMap<unsigned, std::vector<const MachineInstr*> >::iterator
+    CI = CSEMap.find(Opcode);
+  if (!EliminateCSE(MI, CI)) {
+    // Otherwise, splice the instruction to the preheader.
+    CurPreheader->splice(CurPreheader->getFirstTerminator(),MI->getParent(),MI);
 
-  // Otherwise, splice the instruction to the preheader.
-  if (!DoneCSE) {
-    CurPreheader->splice(CurPreheader->getFirstTerminator(),
-                         MI.getParent(), &MI);
     // Add to the CSE map.
     if (CI != CSEMap.end())
-      CI->second.push_back(&MI);
+      CI->second.push_back(MI);
     else {
       std::vector<const MachineInstr*> CSEMIs;
-      CSEMIs.push_back(&MI);
-      CSEMap.insert(std::make_pair(BBOpcPair, CSEMIs));
+      CSEMIs.push_back(MI);
+      CSEMap.insert(std::make_pair(Opcode, CSEMIs));
     }
   }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineLoopInfo.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineLoopInfo.cpp
index 2da8e37..db77d19 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineLoopInfo.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineLoopInfo.cpp
@@ -43,3 +43,31 @@ void MachineLoopInfo::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.addRequired<MachineDominatorTree>();
   MachineFunctionPass::getAnalysisUsage(AU);
 }
+
+MachineBasicBlock *MachineLoop::getTopBlock() {
+  MachineBasicBlock *TopMBB = getHeader();
+  MachineFunction::iterator Begin = TopMBB->getParent()->begin();
+  if (TopMBB != Begin) {
+    MachineBasicBlock *PriorMBB = prior(MachineFunction::iterator(TopMBB));
+    while (contains(PriorMBB)) {
+      TopMBB = PriorMBB;
+      if (TopMBB == Begin) break;
+      PriorMBB = prior(MachineFunction::iterator(TopMBB));
+    }
+  }
+  return TopMBB;
+}
+
+MachineBasicBlock *MachineLoop::getBottomBlock() {
+  MachineBasicBlock *BotMBB = getHeader();
+  MachineFunction::iterator End = BotMBB->getParent()->end();
+  if (BotMBB != prior(End)) {
+    MachineBasicBlock *NextMBB = next(MachineFunction::iterator(BotMBB));
+    while (contains(NextMBB)) {
+      BotMBB = NextMBB;
+      if (BotMBB == prior(End)) break;
+      NextMBB = next(MachineFunction::iterator(BotMBB));
+    }
+  }
+  return BotMBB;
+}
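
Worked example (hypothetical layout): for a function laid out as BB#0 BB#1
BB#2 BB#3 with a loop containing {BB#1, BB#2} and header BB#2, getTopBlock()
walks backward from the header and returns BB#1, while getBottomBlock() walks
forward from the header and returns BB#2.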
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineModuleInfo.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineModuleInfo.cpp
index 8661b9e..ed5bb5e 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineModuleInfo.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineModuleInfo.cpp
@@ -76,6 +76,7 @@ void MachineModuleInfo::EndFunction() {
   FilterEnds.clear();
   CallsEHReturn = 0;
   CallsUnwindInit = 0;
+  VariableDbgInfo.clear();
 }
 
 /// AnalyzeModule - Scan the module for global debug information.
@@ -292,75 +293,3 @@ unsigned MachineModuleInfo::getPersonalityIndex() const {
   return 0;
 }
 
-//===----------------------------------------------------------------------===//
-/// DebugLabelFolding pass - This pass prunes out redundant labels.  This allows
-/// a info consumer to determine if the range of two labels is empty, by seeing
-/// if the labels map to the same reduced label.
-
-namespace llvm {
-
-struct DebugLabelFolder : public MachineFunctionPass {
-  static char ID;
-  DebugLabelFolder() : MachineFunctionPass(&ID) {}
-
-  virtual void getAnalysisUsage(AnalysisUsage &AU) const {
-    AU.setPreservesCFG();
-    AU.addPreservedID(MachineLoopInfoID);
-    AU.addPreservedID(MachineDominatorsID);
-    MachineFunctionPass::getAnalysisUsage(AU);
-  }
-
-  virtual bool runOnMachineFunction(MachineFunction &MF);
-  virtual const char *getPassName() const { return "Label Folder"; }
-};
-
-char DebugLabelFolder::ID = 0;
-
-bool DebugLabelFolder::runOnMachineFunction(MachineFunction &MF) {
-  // Get machine module info.
-  MachineModuleInfo *MMI = getAnalysisIfAvailable<MachineModuleInfo>();
-  if (!MMI) return false;
-
-  // Track if change is made.
-  bool MadeChange = false;
-  // No prior label to begin.
-  unsigned PriorLabel = 0;
-
-  // Iterate through basic blocks.
-  for (MachineFunction::iterator BB = MF.begin(), E = MF.end();
-       BB != E; ++BB) {
-    // Iterate through instructions.
-    for (MachineBasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ) {
-      // Is it a label.
-      if (I->isDebugLabel() && !MMI->isDbgLabelUsed(I->getOperand(0).getImm())){
-        // The label ID # is always operand #0, an immediate.
-        unsigned NextLabel = I->getOperand(0).getImm();
-
-        // If there was an immediate prior label.
-        if (PriorLabel) {
-          // Remap the current label to prior label.
-          MMI->RemapLabel(NextLabel, PriorLabel);
-          // Delete the current label.
-          I = BB->erase(I);
-          // Indicate a change has been made.
-          MadeChange = true;
-          continue;
-        } else {
-          // Start a new round.
-          PriorLabel = NextLabel;
-        }
-       } else {
-        // No consecutive labels.
-        PriorLabel = 0;
-      }
-
-      ++I;
-    }
-  }
-
-  return MadeChange;
-}
-
-FunctionPass *createDebugLabelFoldingPass() { return new DebugLabelFolder(); }
-
-}
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineSink.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineSink.cpp
index 636dad8..e040738 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineSink.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineSink.cpp
@@ -20,11 +20,11 @@
 #include "llvm/CodeGen/Passes.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
 #include "llvm/CodeGen/MachineDominators.h"
+#include "llvm/Analysis/AliasAnalysis.h"
 #include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
@@ -32,13 +32,12 @@ using namespace llvm;
 STATISTIC(NumSunk, "Number of machine instructions sunk");
 
 namespace {
-  class VISIBILITY_HIDDEN MachineSinking : public MachineFunctionPass {
-    const TargetMachine   *TM;
+  class MachineSinking : public MachineFunctionPass {
     const TargetInstrInfo *TII;
     const TargetRegisterInfo *TRI;
-    MachineFunction       *CurMF; // Current MachineFunction
     MachineRegisterInfo  *RegInfo; // Machine register information
     MachineDominatorTree *DT;   // Machine dominator tree
+    AliasAnalysis *AA;
     BitVector AllocatableSet;   // Which physregs are allocatable?
 
   public:
@@ -50,6 +49,7 @@ namespace {
     virtual void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.setPreservesCFG();
       MachineFunctionPass::getAnalysisUsage(AU);
+      AU.addRequired<AliasAnalysis>();
       AU.addRequired<MachineDominatorTree>();
       AU.addPreserved<MachineDominatorTree>();
     }
@@ -89,18 +89,16 @@ bool MachineSinking::AllUsesDominatedByBlock(unsigned Reg,
   return true;
 }
 
-
-
 bool MachineSinking::runOnMachineFunction(MachineFunction &MF) {
   DEBUG(errs() << "******** Machine Sinking ********\n");
   
-  CurMF = &MF;
-  TM = &CurMF->getTarget();
-  TII = TM->getInstrInfo();
-  TRI = TM->getRegisterInfo();
-  RegInfo = &CurMF->getRegInfo();
+  const TargetMachine &TM = MF.getTarget();
+  TII = TM.getInstrInfo();
+  TRI = TM.getRegisterInfo();
+  RegInfo = &MF.getRegInfo();
   DT = &getAnalysis<MachineDominatorTree>();
-  AllocatableSet = TRI->getAllocatableSet(*CurMF);
+  AA = &getAnalysis<AliasAnalysis>();
+  AllocatableSet = TRI->getAllocatableSet(MF);
 
   bool EverMadeChange = false;
   
@@ -108,7 +106,7 @@ bool MachineSinking::runOnMachineFunction(MachineFunction &MF) {
     bool MadeChange = false;
 
     // Process all basic blocks.
-    for (MachineFunction::iterator I = CurMF->begin(), E = CurMF->end(); 
+    for (MachineFunction::iterator I = MF.begin(), E = MF.end(); 
          I != E; ++I)
       MadeChange |= ProcessBlock(*I);
     
@@ -151,7 +149,7 @@ bool MachineSinking::ProcessBlock(MachineBasicBlock &MBB) {
 /// instruction out of its current block into a successor.
 bool MachineSinking::SinkInstruction(MachineInstr *MI, bool &SawStore) {
   // Check if it's safe to move the instruction.
-  if (!MI->isSafeToMove(TII, SawStore))
+  if (!MI->isSafeToMove(TII, SawStore, AA))
     return false;
   
   // FIXME: This should include support for sinking instructions within the
@@ -178,8 +176,6 @@ bool MachineSinking::SinkInstruction(MachineInstr *MI, bool &SawStore) {
     if (Reg == 0) continue;
     
     if (TargetRegisterInfo::isPhysicalRegister(Reg)) {
-      // If this is a physical register use, we can't move it.  If it is a def,
-      // we can move it, but only if the def is dead.
       if (MO.isUse()) {
         // If the physreg has no defs anywhere, it's just an ambient register
         // and we can freely move its uses. Alternatively, if it's allocatable,
@@ -254,7 +250,7 @@ bool MachineSinking::SinkInstruction(MachineInstr *MI, bool &SawStore) {
   if (SuccToSinkTo->isLandingPad())
     return false;
   
-  // If is not possible to sink an instruction into its own block.  This can
+  // It is not possible to sink an instruction into its own block.  This can
   // happen with loops.
   if (MI->getParent() == SuccToSinkTo)
     return false;
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp
index 02e48dd..d9f4c99 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp
@@ -27,6 +27,7 @@
 #include "llvm/CodeGen/LiveVariables.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
 #include "llvm/CodeGen/MachineFrameInfo.h"
+#include "llvm/CodeGen/MachineMemOperand.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
 #include "llvm/CodeGen/Passes.h"
 #include "llvm/Target/TargetMachine.h"
@@ -35,30 +36,24 @@
 #include "llvm/ADT/DenseSet.h"
 #include "llvm/ADT/SetOperations.h"
 #include "llvm/ADT/SmallVector.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
 namespace {
-  struct VISIBILITY_HIDDEN MachineVerifier : public MachineFunctionPass {
-    static char ID; // Pass ID, replacement for typeid
+  struct MachineVerifier {
 
-    MachineVerifier(bool allowDoubleDefs = false) :
-      MachineFunctionPass(&ID),
+    MachineVerifier(Pass *pass, bool allowDoubleDefs) :
+      PASS(pass),
       allowVirtDoubleDefs(allowDoubleDefs),
       allowPhysDoubleDefs(allowDoubleDefs),
       OutFileName(getenv("LLVM_VERIFY_MACHINEINSTRS"))
-        {}
-
-    void getAnalysisUsage(AnalysisUsage &AU) const {
-      AU.setPreservesAll();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
+      {}
 
     bool runOnMachineFunction(MachineFunction &MF);
 
+    Pass *const PASS;
     const bool allowVirtDoubleDefs;
     const bool allowPhysDoubleDefs;
 
@@ -112,6 +107,10 @@ namespace {
       // regsKilled and regsLiveOut.
       RegSet vregsPassed;
 
+      // Vregs that must pass through MBB because they are needed by a successor
+      // block. This set is disjoint from regsLiveOut.
+      RegSet vregsRequired;
+
       BBInfo() : reachable(false) {}
 
       // Add register to vregsPassed if it belongs there. Return true if
@@ -133,6 +132,34 @@ namespace {
         return changed;
       }
 
+      // Add register to vregsRequired if it belongs there. Return true if
+      // anything changed.
+      bool addRequired(unsigned Reg) {
+        if (!TargetRegisterInfo::isVirtualRegister(Reg))
+          return false;
+        if (regsLiveOut.count(Reg))
+          return false;
+        return vregsRequired.insert(Reg).second;
+      }
+
+      // Same for a full set.
+      bool addRequired(const RegSet &RS) {
+        bool changed = false;
+        for (RegSet::const_iterator I = RS.begin(), E = RS.end(); I != E; ++I)
+          if (addRequired(*I))
+            changed = true;
+        return changed;
+      }
+
+      // Same for a full map.
+      bool addRequired(const RegMap &RM) {
+        bool changed = false;
+        for (RegMap::const_iterator I = RM.begin(), E = RM.end(); I != E; ++I)
+          if (addRequired(I->first))
+            changed = true;
+        return changed;
+      }
+
       // Live-out registers are either in regsLiveOut or vregsPassed.
       bool isLiveOut(unsigned Reg) const {
         return regsLiveOut.count(Reg) || vregsPassed.count(Reg);
@@ -146,6 +173,9 @@ namespace {
       return Reg < regsReserved.size() && regsReserved.test(Reg);
     }
 
+    // Analysis information if available
+    LiveVariables *LiveVars;
+
     void visitMachineFunctionBefore();
     void visitMachineBasicBlockBefore(const MachineBasicBlock *MBB);
     void visitMachineInstrBefore(const MachineInstr *MI);
@@ -163,16 +193,44 @@ namespace {
     void calcMaxRegsPassed();
     void calcMinRegsPassed();
     void checkPHIOps(const MachineBasicBlock *MBB);
+
+    void calcRegsRequired();
+    void verifyLiveVariables();
+  };
+
+  struct MachineVerifierPass : public MachineFunctionPass {
+    static char ID; // Pass ID, replacement for typeid
+    bool AllowDoubleDefs;
+
+    explicit MachineVerifierPass(bool allowDoubleDefs = false)
+      : MachineFunctionPass(&ID),
+        AllowDoubleDefs(allowDoubleDefs) {}
+
+    void getAnalysisUsage(AnalysisUsage &AU) const {
+      AU.setPreservesAll();
+      MachineFunctionPass::getAnalysisUsage(AU);
+    }
+
+    bool runOnMachineFunction(MachineFunction &MF) {
+      MF.verify(this, AllowDoubleDefs);
+      return false;
+    }
   };
+
 }
 
-char MachineVerifier::ID = 0;
-static RegisterPass<MachineVerifier>
+char MachineVerifierPass::ID = 0;
+static RegisterPass<MachineVerifierPass>
 MachineVer("machineverifier", "Verify generated machine code");
 static const PassInfo *const MachineVerifyID = &MachineVer;
 
 FunctionPass *llvm::createMachineVerifierPass(bool allowPhysDoubleDefs) {
-  return new MachineVerifier(allowPhysDoubleDefs);
+  return new MachineVerifierPass(allowPhysDoubleDefs);
+}
+
+void MachineFunction::verify(Pass *p, bool allowDoubleDefs) const {
+  MachineVerifier(p, allowDoubleDefs)
+    .runOnMachineFunction(const_cast<MachineFunction&>(*this));
 }
 
 bool MachineVerifier::runOnMachineFunction(MachineFunction &MF) {
@@ -185,7 +243,7 @@ bool MachineVerifier::runOnMachineFunction(MachineFunction &MF) {
       errs() << "Error opening '" << OutFileName << "': " << ErrorInfo << '\n';
       exit(1);
     }
-    
+
     OS = OutFile;
   } else {
     OS = &errs();
@@ -198,6 +256,12 @@ bool MachineVerifier::runOnMachineFunction(MachineFunction &MF) {
   TRI = TM->getRegisterInfo();
   MRI = &MF.getRegInfo();
 
+  if (PASS) {
+    LiveVars = PASS->getAnalysisIfAvailable<LiveVariables>();
+  } else {
+    LiveVars = NULL;
+  }
+
   visitMachineFunctionBefore();
   for (MachineFunction::const_iterator MFI = MF.begin(), MFE = MF.end();
        MFI!=MFE; ++MFI) {
@@ -238,29 +302,23 @@ void MachineVerifier::report(const char *msg, const MachineFunction *MF) {
       << "- function:    " << MF->getFunction()->getNameStr() << "\n";
 }
 
-void
-MachineVerifier::report(const char *msg, const MachineBasicBlock *MBB)
-{
+void MachineVerifier::report(const char *msg, const MachineBasicBlock *MBB) {
   assert(MBB);
   report(msg, MBB->getParent());
-  *OS << "- basic block: " << MBB->getBasicBlock()->getNameStr()
+  *OS << "- basic block: " << MBB->getName()
       << " " << (void*)MBB
-      << " (#" << MBB->getNumber() << ")\n";
+      << " (BB#" << MBB->getNumber() << ")\n";
 }
 
-void
-MachineVerifier::report(const char *msg, const MachineInstr *MI)
-{
+void MachineVerifier::report(const char *msg, const MachineInstr *MI) {
   assert(MI);
   report(msg, MI->getParent());
   *OS << "- instruction: ";
   MI->print(*OS, TM);
 }
 
-void
-MachineVerifier::report(const char *msg,
-                        const MachineOperand *MO, unsigned MONum)
-{
+void MachineVerifier::report(const char *msg,
+                             const MachineOperand *MO, unsigned MONum) {
   assert(MO);
   report(msg, MO->getParent());
   *OS << "- operand " << MONum << ":   ";
@@ -268,9 +326,7 @@ MachineVerifier::report(const char *msg,
   *OS << "\n";
 }
 
-void
-MachineVerifier::markReachable(const MachineBasicBlock *MBB)
-{
+void MachineVerifier::markReachable(const MachineBasicBlock *MBB) {
   BBInfo &MInfo = MBBInfoMap[MBB];
   if (!MInfo.reachable) {
     MInfo.reachable = true;
@@ -280,9 +336,7 @@ MachineVerifier::markReachable(const MachineBasicBlock *MBB)
   }
 }
 
-void
-MachineVerifier::visitMachineFunctionBefore()
-{
+void MachineVerifier::visitMachineFunctionBefore() {
   regsReserved = TRI->getReservedRegs(*MF);
 
   // A sub-register of a reserved register is also reserved
@@ -297,9 +351,18 @@ MachineVerifier::visitMachineFunctionBefore()
   markReachable(&MF->front());
 }
 
+// Does iterator point to a and b as the first two elements?
+bool matchPair(MachineBasicBlock::const_succ_iterator i,
+               const MachineBasicBlock *a, const MachineBasicBlock *b) {
+  if (*i == a)
+    return *++i == b;
+  if (*i == b)
+    return *++i == a;
+  return false;
+}
+
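For example (hypothetical blocks), with the two CFG successors stored as
either {TBB, FBB} or {FBB, TBB}, matchPair(MBB->succ_begin(), TBB, FBB)
returns true in both orders, which the earlier index-based comparison against
succ_end() did not handle.
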
 void
-MachineVerifier::visitMachineBasicBlockBefore(const MachineBasicBlock *MBB)
-{
+MachineVerifier::visitMachineBasicBlockBefore(const MachineBasicBlock *MBB) {
   const TargetInstrInfo *TII = MF->getTarget().getInstrInfo();
 
   // Start with minimal CFG sanity checks.
@@ -391,8 +454,7 @@ MachineVerifier::visitMachineBasicBlockBefore(const MachineBasicBlock *MBB)
       } else if (MBB->succ_size() != 2) {
         report("MBB exits via conditional branch/fall-through but doesn't have "
                "exactly two CFG successors!", MBB);
-      } else if ((MBB->succ_begin()[0] == TBB && MBB->succ_end()[1] == MBBI) ||
-                 (MBB->succ_begin()[1] == TBB && MBB->succ_end()[0] == MBBI)) {
+      } else if (!matchPair(MBB->succ_begin(), TBB, MBBI)) {
         report("MBB exits via conditional branch/fall-through but the CFG "
                "successors don't match the actual successors!", MBB);
       }
@@ -412,8 +474,7 @@ MachineVerifier::visitMachineBasicBlockBefore(const MachineBasicBlock *MBB)
       if (MBB->succ_size() != 2) {
         report("MBB exits via conditional branch/branch but doesn't have "
                "exactly two CFG successors!", MBB);
-      } else if ((MBB->succ_begin()[0] == TBB && MBB->succ_end()[1] == FBB) ||
-                 (MBB->succ_begin()[1] == TBB && MBB->succ_end()[0] == FBB)) {
+      } else if (!matchPair(MBB->succ_begin(), TBB, FBB)) {
         report("MBB exits via conditional branch/branch but the CFG "
                "successors don't match the actual successors!", MBB);
       }
@@ -462,20 +523,26 @@ MachineVerifier::visitMachineBasicBlockBefore(const MachineBasicBlock *MBB)
   regsDefined.clear();
 }
 
-void
-MachineVerifier::visitMachineInstrBefore(const MachineInstr *MI)
-{
+void MachineVerifier::visitMachineInstrBefore(const MachineInstr *MI) {
   const TargetInstrDesc &TI = MI->getDesc();
   if (MI->getNumOperands() < TI.getNumOperands()) {
     report("Too few operands", MI);
     *OS << TI.getNumOperands() << " operands expected, but "
         << MI->getNumExplicitOperands() << " given.\n";
   }
+
+  // Check the MachineMemOperands for basic consistency.
+  for (MachineInstr::mmo_iterator I = MI->memoperands_begin(),
+       E = MI->memoperands_end(); I != E; ++I) {
+    if ((*I)->isLoad() && !TI.mayLoad())
+      report("Missing mayLoad flag", MI);
+    if ((*I)->isStore() && !TI.mayStore())
+      report("Missing mayStore flag", MI);
+  }
 }
 
 void
-MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum)
-{
+MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum) {
   const MachineInstr *MI = MO->getParent();
   const TargetInstrDesc &TI = MI->getDesc();
 
@@ -511,8 +578,9 @@ MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum)
     } else if (MO->isUse()) {
       regsLiveInButUnused.erase(Reg);
 
+      bool isKill = false;
       if (MO->isKill()) {
-        addRegWithSubRegs(regsKilled, Reg);
+        isKill = true;
         // Tied operands on two-address instructions MUST NOT have a <kill> flag.
         if (MI->isRegTiedToDefOperand(MONum))
             report("Illegal kill flag on two-address instruction operand",
@@ -522,8 +590,20 @@ MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum)
         unsigned defIdx;
         if (MI->isRegTiedToDefOperand(MONum, &defIdx) &&
             MI->getOperand(defIdx).getReg() == Reg)
-          addRegWithSubRegs(regsKilled, Reg);
+          isKill = true;
+      }
+      if (isKill) {
+        addRegWithSubRegs(regsKilled, Reg);
+
+        // Check that LiveVars knows this kill
+        if (LiveVars && TargetRegisterInfo::isVirtualRegister(Reg)) {
+          LiveVariables::VarInfo &VI = LiveVars->getVarInfo(Reg);
+          if (std::find(VI.Kills.begin(),
+                        VI.Kills.end(), MI) == VI.Kills.end())
+            report("Kill missing from LiveVariables", MO, MONum);
+        }
       }
+
       // Use of a dead register.
       if (!regsLive.count(Reg)) {
         if (TargetRegisterInfo::isPhysicalRegister(Reg)) {
@@ -608,9 +688,7 @@ MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum)
   }
 }
 
-void
-MachineVerifier::visitMachineInstrAfter(const MachineInstr *MI)
-{
+void MachineVerifier::visitMachineInstrAfter(const MachineInstr *MI) {
   BBInfo &MInfo = MBBInfoMap[MI->getParent()];
   set_union(MInfo.regsKilled, regsKilled);
   set_subtract(regsLive, regsKilled);
@@ -650,8 +728,7 @@ MachineVerifier::visitMachineInstrAfter(const MachineInstr *MI)
 }
 
 void
-MachineVerifier::visitMachineBasicBlockAfter(const MachineBasicBlock *MBB)
-{
+MachineVerifier::visitMachineBasicBlockAfter(const MachineBasicBlock *MBB) {
   MBBInfoMap[MBB].regsLiveOut = regsLive;
   regsLive.clear();
 }
@@ -659,9 +736,7 @@ MachineVerifier::visitMachineBasicBlockAfter(const MachineBasicBlock *MBB)
 // Calculate the largest possible vregsPassed sets. These are the registers that
 // can pass through an MBB live, but may not be live every time. It is assumed
 // that all vregsPassed sets are empty before the call.
-void
-MachineVerifier::calcMaxRegsPassed()
-{
+void MachineVerifier::calcMaxRegsPassed() {
   // First push live-out regs to successors' vregsPassed. Remember the MBBs that
   // have any vregsPassed.
   DenseSet<const MachineBasicBlock*> todo;
@@ -699,9 +774,7 @@ MachineVerifier::calcMaxRegsPassed()
 // Calculate the minimum vregsPassed set. These are the registers that always
 // pass live through an MBB. The calculation assumes that calcMaxRegsPassed has
 // been called earlier.
-void
-MachineVerifier::calcMinRegsPassed()
-{
+void MachineVerifier::calcMinRegsPassed() {
   DenseSet<const MachineBasicBlock*> todo;
   for (MachineFunction::const_iterator MFI = MF->begin(), MFE = MF->end();
        MFI != MFE; ++MFI)
@@ -734,11 +807,44 @@ MachineVerifier::calcMinRegsPassed()
   }
 }
 
+// Calculate the set of virtual registers that must be passed through each basic
+// block in order to satisfy the requirements of successor blocks. This is very
+// similar to calcMaxRegsPassed, only backwards.
+void MachineVerifier::calcRegsRequired() {
+  // First push live-in regs to predecessors' vregsRequired.
+  DenseSet<const MachineBasicBlock*> todo;
+  for (MachineFunction::const_iterator MFI = MF->begin(), MFE = MF->end();
+       MFI != MFE; ++MFI) {
+    const MachineBasicBlock &MBB(*MFI);
+    BBInfo &MInfo = MBBInfoMap[&MBB];
+    for (MachineBasicBlock::const_pred_iterator PrI = MBB.pred_begin(),
+           PrE = MBB.pred_end(); PrI != PrE; ++PrI) {
+      BBInfo &PInfo = MBBInfoMap[*PrI];
+      if (PInfo.addRequired(MInfo.vregsLiveIn))
+        todo.insert(*PrI);
+    }
+  }
+
+  // Iteratively push vregsRequired to predecessors. This will converge to the
+  // same final state regardless of DenseSet iteration order.
+  while (!todo.empty()) {
+    const MachineBasicBlock *MBB = *todo.begin();
+    todo.erase(MBB);
+    BBInfo &MInfo = MBBInfoMap[MBB];
+    for (MachineBasicBlock::const_pred_iterator PrI = MBB->pred_begin(),
+           PrE = MBB->pred_end(); PrI != PrE; ++PrI) {
+      if (*PrI == MBB)
+        continue;
+      BBInfo &SInfo = MBBInfoMap[*PrI];
+      if (SInfo.addRequired(MInfo.vregsRequired))
+        todo.insert(*PrI);
+    }
+  }
+}
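
The worklist loop above is a standard backward dataflow computation. As a toy,
self-contained sketch of the same idea over plain STL containers (every name
below is illustrative; none of it is LLVM API):

    #include <cstddef>
    #include <set>
    #include <vector>

    typedef std::set<unsigned> RegSet;

    // Required[P] converges to the union, over P's successors B, of
    // LiveIn[B] and Required[B] -- the registers P must pass through.
    void calcRequired(const std::vector<std::vector<size_t> > &Preds,
                      const std::vector<RegSet> &LiveIn,
                      std::vector<RegSet> &Required) {
      std::vector<size_t> Todo;
      for (size_t B = 0; B != Preds.size(); ++B)
        Todo.push_back(B);                 // seed with every block
      while (!Todo.empty()) {
        size_t B = Todo.back();
        Todo.pop_back();
        for (size_t i = 0; i != Preds[B].size(); ++i) {
          size_t P = Preds[B][i];
          size_t Before = Required[P].size();
          Required[P].insert(LiveIn[B].begin(), LiveIn[B].end());
          Required[P].insert(Required[B].begin(), Required[B].end());
          if (Required[P].size() != Before)
            Todo.push_back(P);             // grew: revisit P's preds
        }
      }
    }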
+
 // Check PHI instructions at the beginning of MBB. It is assumed that
 // calcMinRegsPassed has been run so BBInfo::isLiveOut is valid.
-void
-MachineVerifier::checkPHIOps(const MachineBasicBlock *MBB)
-{
+void MachineVerifier::checkPHIOps(const MachineBasicBlock *MBB) {
   for (MachineBasicBlock::const_iterator BBI = MBB->begin(), BBE = MBB->end();
        BBI != BBE && BBI->getOpcode() == TargetInstrInfo::PHI; ++BBI) {
     DenseSet<const MachineBasicBlock*> seen;
@@ -760,16 +866,14 @@ MachineVerifier::checkPHIOps(const MachineBasicBlock *MBB)
            PrE = MBB->pred_end(); PrI != PrE; ++PrI) {
       if (!seen.count(*PrI)) {
         report("Missing PHI operand", BBI);
-        *OS << "MBB #" << (*PrI)->getNumber()
+        *OS << "BB#" << (*PrI)->getNumber()
             << " is a predecessor according to the CFG.\n";
       }
     }
   }
 }
 
-void
-MachineVerifier::visitMachineFunctionAfter()
-{
+void MachineVerifier::visitMachineFunctionAfter() {
   calcMaxRegsPassed();
 
   // With the maximal set of vregsPassed we can verify dead-in registers.
@@ -797,7 +901,7 @@ MachineVerifier::visitMachineFunctionAfter()
             report("Live-in physical register is not live-out from predecessor",
                    MFI);
             *OS << "Register " << TRI->getName(*I)
-                << " is not live-out from MBB #" << (*PrI)->getNumber()
+                << " is not live-out from BB#" << (*PrI)->getNumber()
                 << ".\n";
           }
         }
@@ -853,4 +957,39 @@ MachineVerifier::visitMachineFunctionAfter()
       }
     }
   }
+
+  // Now check LiveVariables info if available
+  if (LiveVars) {
+    calcRegsRequired();
+    verifyLiveVariables();
+  }
+}
+
+void MachineVerifier::verifyLiveVariables() {
+  assert(LiveVars && "Don't call verifyLiveVariables without LiveVars");
+  for (unsigned Reg = TargetRegisterInfo::FirstVirtualRegister,
+         RegE = MRI->getLastVirtReg()-1; Reg != RegE; ++Reg) {
+    LiveVariables::VarInfo &VI = LiveVars->getVarInfo(Reg);
+    for (MachineFunction::const_iterator MFI = MF->begin(), MFE = MF->end();
+         MFI != MFE; ++MFI) {
+      BBInfo &MInfo = MBBInfoMap[MFI];
+
+      // Our vregsRequired should be identical to LiveVariables' AliveBlocks
+      if (MInfo.vregsRequired.count(Reg)) {
+        if (!VI.AliveBlocks.test(MFI->getNumber())) {
+          report("LiveVariables: Block missing from AliveBlocks", MFI);
+          *OS << "Virtual register %reg" << Reg
+              << " must be live through the block.\n";
+        }
+      } else {
+        if (VI.AliveBlocks.test(MFI->getNumber())) {
+          report("LiveVariables: Block should not be in AliveBlocks", MFI);
+          *OS << "Virtual register %reg" << Reg
+              << " is not needed live through the block.\n";
+        }
+      }
+    }
+  }
 }
+
+
diff --git a/libclamav/c++/llvm/lib/CodeGen/OcamlGC.cpp b/libclamav/c++/llvm/lib/CodeGen/OcamlGC.cpp
index f7bc9f3..48db200 100644
--- a/libclamav/c++/llvm/lib/CodeGen/OcamlGC.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/OcamlGC.cpp
@@ -16,12 +16,11 @@
 
 #include "llvm/CodeGen/GCs.h"
 #include "llvm/CodeGen/GCStrategy.h"
-#include "llvm/Support/Compiler.h"
 
 using namespace llvm;
 
 namespace {
-  class VISIBILITY_HIDDEN OcamlGC : public GCStrategy {
+  class OcamlGC : public GCStrategy {
   public:
     OcamlGC();
   };
diff --git a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp
index 8071b0a..2e30cc6 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp
@@ -15,24 +15,26 @@
 
 #define DEBUG_TYPE "phielim"
 #include "PHIElimination.h"
-#include "llvm/BasicBlock.h"
-#include "llvm/Instructions.h"
 #include "llvm/CodeGen/LiveVariables.h"
 #include "llvm/CodeGen/Passes.h"
-#include "llvm/CodeGen/MachineFunctionPass.h"
+#include "llvm/CodeGen/MachineDominators.h"
 #include "llvm/CodeGen/MachineInstr.h"
 #include "llvm/CodeGen/MachineInstrBuilder.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/Function.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/ADT/Statistic.h"
+#include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Compiler.h"
+#include "llvm/Support/Debug.h"
 #include <algorithm>
 #include <map>
 using namespace llvm;
 
 STATISTIC(NumAtomic, "Number of atomic phis lowered");
+STATISTIC(NumSplits, "Number of critical edges split on demand");
 
 char PHIElimination::ID = 0;
 static RegisterPass<PHIElimination>
@@ -41,10 +43,10 @@ X("phi-node-elimination", "Eliminate PHI nodes for register allocation");
 const PassInfo *const llvm::PHIEliminationID = &X;
 
 void llvm::PHIElimination::getAnalysisUsage(AnalysisUsage &AU) const {
-  AU.setPreservesCFG();
   AU.addPreserved<LiveVariables>();
-  AU.addPreservedID(MachineLoopInfoID);
-  AU.addPreservedID(MachineDominatorsID);
+  AU.addPreserved<MachineDominatorTree>();
+  // rdar://7401784 This would be nice:
+  // AU.addPreservedID(MachineLoopInfoID);
   MachineFunctionPass::getAnalysisUsage(AU);
 }
 
@@ -53,10 +55,16 @@ bool llvm::PHIElimination::runOnMachineFunction(MachineFunction &Fn) {
 
   PHIDefs.clear();
   PHIKills.clear();
-  analyzePHINodes(Fn);
-
   bool Changed = false;
 
+  // Split critical edges to help the coalescer
+  if (LiveVariables *LV = getAnalysisIfAvailable<LiveVariables>())
+    for (MachineFunction::iterator I = Fn.begin(), E = Fn.end(); I != E; ++I)
+      Changed |= SplitPHIEdges(Fn, *I, *LV);
+
+  // Populate VRegPHIUseCount
+  analyzePHINodes(Fn);
+
   // Eliminate PHI instructions by inserting copies into predecessor blocks.
   for (MachineFunction::iterator I = Fn.begin(), E = Fn.end(); I != E; ++I)
     Changed |= EliminatePHINodes(Fn, *I);
@@ -75,7 +83,6 @@ bool llvm::PHIElimination::runOnMachineFunction(MachineFunction &Fn) {
   return Changed;
 }
 
-
 /// EliminatePHINodes - Eliminate phi nodes by inserting copy instructions in
 /// predecessor basic blocks.
 ///
@@ -107,26 +114,28 @@ static bool isSourceDefinedByImplicitDef(const MachineInstr *MPhi,
   return true;
 }
 
-// FindCopyInsertPoint - Find a safe place in MBB to insert a copy from SrcReg.
-// This needs to be after any def or uses of SrcReg, but before any subsequent
-// point where control flow might jump out of the basic block.
+// FindCopyInsertPoint - Find a safe place in MBB to insert a copy from SrcReg
+// when following the CFG edge to SuccMBB. This needs to be after any def of
+// SrcReg, but before any subsequent point where control flow might jump out of
+// the basic block.
 MachineBasicBlock::iterator
 llvm::PHIElimination::FindCopyInsertPoint(MachineBasicBlock &MBB,
+                                          MachineBasicBlock &SuccMBB,
                                           unsigned SrcReg) {
   // Handle the trivial case trivially.
   if (MBB.empty())
     return MBB.begin();
 
-  // If this basic block does not contain an invoke, then control flow always
-  // reaches the end of it, so place the copy there.  The logic below works in
-  // this case too, but is more expensive.
-  if (!isa<InvokeInst>(MBB.getBasicBlock()->getTerminator()))
+  // Usually, we just want to insert the copy before the first terminator
+  // instruction. However, for the edge going to a landing pad, we must insert
+  // the copy before the call/invoke instruction.
+  if (!SuccMBB.isLandingPad())
     return MBB.getFirstTerminator();
 
-  // Discover any definition/uses in this basic block.
+  // Discover any defs/uses in this basic block.
   SmallPtrSet<MachineInstr*, 8> DefUsesInMBB;
   for (MachineRegisterInfo::reg_iterator RI = MRI->reg_begin(SrcReg),
-       RE = MRI->reg_end(); RI != RE; ++RI) {
+         RE = MRI->reg_end(); RI != RE; ++RI) {
     MachineInstr *DefUseMI = &*RI;
     if (DefUseMI->getParent() == &MBB)
       DefUsesInMBB.insert(DefUseMI);
@@ -134,14 +143,14 @@ llvm::PHIElimination::FindCopyInsertPoint(MachineBasicBlock &MBB,
 
   MachineBasicBlock::iterator InsertPoint;
   if (DefUsesInMBB.empty()) {
-    // No def/uses.  Insert the copy at the start of the basic block.
+    // No defs.  Insert the copy at the start of the basic block.
     InsertPoint = MBB.begin();
   } else if (DefUsesInMBB.size() == 1) {
-    // Insert the copy immediately after the definition/use.
+    // Insert the copy immediately after the def/use.
     InsertPoint = *DefUsesInMBB.begin();
     ++InsertPoint;
   } else {
-    // Insert the copy immediately after the last definition/use.
+    // Insert the copy immediately after the last def/use.
     InsertPoint = MBB.end();
     while (!DefUsesInMBB.count(&*--InsertPoint)) {}
     ++InsertPoint;
@@ -155,7 +164,7 @@ llvm::PHIElimination::FindCopyInsertPoint(MachineBasicBlock &MBB,
 /// under the assumption that it needs to be lowered in a way that supports
 /// atomic execution of PHIs.  This lowering method is always correct.
-///  
+///
 void llvm::PHIElimination::LowerAtomicPHINode(
                                       MachineBasicBlock &MBB,
                                       MachineBasicBlock::iterator AfterPHIsIt) {
@@ -186,7 +195,7 @@ void llvm::PHIElimination::LowerAtomicPHINode(
   }
 
   // Record PHI def.
-  assert(!hasPHIDef(DestReg) && "Vreg has multiple phi-defs?"); 
+  assert(!hasPHIDef(DestReg) && "Vreg has multiple phi-defs?");
   PHIDefs[DestReg] = &MBB;
 
   // Update live variable information if there is any.
@@ -250,92 +259,35 @@ void llvm::PHIElimination::LowerAtomicPHINode(
     // basic block.
     if (!MBBsInsertedInto.insert(&opBlock))
       continue;  // If the copy has already been emitted, we're done.
- 
+
     // Find a safe location to insert the copy, this may be the first terminator
     // in the block (or end()).
-    MachineBasicBlock::iterator InsertPos = FindCopyInsertPoint(opBlock, SrcReg);
+    MachineBasicBlock::iterator InsertPos =
+      FindCopyInsertPoint(opBlock, MBB, SrcReg);
 
     // Insert the copy.
     TII->copyRegToReg(opBlock, InsertPos, IncomingReg, SrcReg, RC, RC);
 
     // Now update live variable information if we have it.  Otherwise we're done
     if (!LV) continue;
-    
+
     // We want to be able to insert a kill of the register if this PHI (aka, the
     // copy we just inserted) is the last use of the source value.  Live
     // variable analysis conservatively handles this by saying that the value is
     // live until the end of the block the PHI entry lives in.  If the value
     // really is dead at the PHI copy, there will be no successor blocks which
     // have the value live-in.
-    //
-    // Check to see if the copy is the last use, and if so, update the live
-    // variables information so that it knows the copy source instruction kills
-    // the incoming value.
-    LiveVariables::VarInfo &InRegVI = LV->getVarInfo(SrcReg);
-
-    // Loop over all of the successors of the basic block, checking to see if
-    // the value is either live in the block, or if it is killed in the block.
+
     // Also check to see if this register is in use by another PHI node which
     // has not yet been eliminated.  If so, it will be killed at an appropriate
     // point later.
 
     // Is it used by any PHI instructions in this block?
-    bool ValueIsLive = VRegPHIUseCount[BBVRegPair(&opBlock, SrcReg)] != 0;
-
-    std::vector<MachineBasicBlock*> OpSuccBlocks;
-    
-    // Otherwise, scan successors, including the BB the PHI node lives in.
-    for (MachineBasicBlock::succ_iterator SI = opBlock.succ_begin(),
-           E = opBlock.succ_end(); SI != E && !ValueIsLive; ++SI) {
-      MachineBasicBlock *SuccMBB = *SI;
-
-      // Is it alive in this successor?
-      unsigned SuccIdx = SuccMBB->getNumber();
-      if (InRegVI.AliveBlocks.test(SuccIdx)) {
-        ValueIsLive = true;
-        break;
-      }
-
-      OpSuccBlocks.push_back(SuccMBB);
-    }
-
-    // Check to see if this value is live because there is a use in a successor
-    // that kills it.
-    if (!ValueIsLive) {
-      switch (OpSuccBlocks.size()) {
-      case 1: {
-        MachineBasicBlock *MBB = OpSuccBlocks[0];
-        for (unsigned i = 0, e = InRegVI.Kills.size(); i != e; ++i)
-          if (InRegVI.Kills[i]->getParent() == MBB) {
-            ValueIsLive = true;
-            break;
-          }
-        break;
-      }
-      case 2: {
-        MachineBasicBlock *MBB1 = OpSuccBlocks[0], *MBB2 = OpSuccBlocks[1];
-        for (unsigned i = 0, e = InRegVI.Kills.size(); i != e; ++i)
-          if (InRegVI.Kills[i]->getParent() == MBB1 || 
-              InRegVI.Kills[i]->getParent() == MBB2) {
-            ValueIsLive = true;
-            break;
-          }
-        break;        
-      }
-      default:
-        std::sort(OpSuccBlocks.begin(), OpSuccBlocks.end());
-        for (unsigned i = 0, e = InRegVI.Kills.size(); i != e; ++i)
-          if (std::binary_search(OpSuccBlocks.begin(), OpSuccBlocks.end(),
-                                 InRegVI.Kills[i]->getParent())) {
-            ValueIsLive = true;
-            break;
-          }
-      }
-    }        
+    bool ValueIsUsed = VRegPHIUseCount[BBVRegPair(&opBlock, SrcReg)] != 0;
 
     // Okay, if we now know that the value is not live out of the block, we can
     // add a kill marker in this block saying that it kills the incoming value!
-    if (!ValueIsLive) {
+    if (!ValueIsUsed && !isLiveOut(SrcReg, opBlock, *LV)) {
       // In our final twist, we have to decide which instruction kills the
       // register.  In most cases this is the copy, however, the first
       // terminator instruction at the end of the block may also use the value.
@@ -346,7 +298,7 @@ void llvm::PHIElimination::LowerAtomicPHINode(
       if (Term != opBlock.end()) {
         if (Term->readsRegister(SrcReg))
           KillInst = Term;
-      
+
         // Check that no other terminators use values.
 #ifndef NDEBUG
         for (MachineBasicBlock::iterator TI = next(Term); TI != opBlock.end();
@@ -357,16 +309,16 @@ void llvm::PHIElimination::LowerAtomicPHINode(
         }
 #endif
       }
-      
+
       // Finally, mark it killed.
       LV->addVirtualRegisterKilled(SrcReg, KillInst);
 
       // This vreg no longer lives all of the way through opBlock.
       unsigned opBlockNum = opBlock.getNumber();
-      InRegVI.AliveBlocks.reset(opBlockNum);
+      LV->getVarInfo(SrcReg).AliveBlocks.reset(opBlockNum);
     }
   }
-    
+
   // Really delete the PHI instruction now!
   MF.DeleteMachineInstr(MPhi);
   ++NumAtomic;
@@ -386,3 +338,119 @@ void llvm::PHIElimination::analyzePHINodes(const MachineFunction& Fn) {
         ++VRegPHIUseCount[BBVRegPair(BBI->getOperand(i + 1).getMBB(),
                                      BBI->getOperand(i).getReg())];
 }
+
+bool llvm::PHIElimination::SplitPHIEdges(MachineFunction &MF,
+                                         MachineBasicBlock &MBB,
+                                         LiveVariables &LV) {
+  if (MBB.empty() || MBB.front().getOpcode() != TargetInstrInfo::PHI)
+    return false;   // Quick exit for basic blocks without PHIs.
+
+  for (MachineBasicBlock::const_iterator BBI = MBB.begin(), BBE = MBB.end();
+       BBI != BBE && BBI->getOpcode() == TargetInstrInfo::PHI; ++BBI) {
+    for (unsigned i = 1, e = BBI->getNumOperands(); i != e; i += 2) {
+      unsigned Reg = BBI->getOperand(i).getReg();
+      MachineBasicBlock *PreMBB = BBI->getOperand(i+1).getMBB();
+      // We break edges when registers are live out from the predecessor block
+      // (not considering PHI nodes). If the register is live in to this block
+      // anyway, we would gain nothing from splitting.
+      if (!LV.isLiveIn(Reg, MBB) && isLiveOut(Reg, *PreMBB, LV))
+        SplitCriticalEdge(PreMBB, &MBB);
+    }
+  }
+  return true;
+}
+
+bool llvm::PHIElimination::isLiveOut(unsigned Reg, const MachineBasicBlock &MBB,
+                                     LiveVariables &LV) {
+  LiveVariables::VarInfo &VI = LV.getVarInfo(Reg);
+
+  // Loop over all of the successors of the basic block, checking to see if
+  // the value is either live in the block, or if it is killed in the block.
+  std::vector<MachineBasicBlock*> OpSuccBlocks;
+  for (MachineBasicBlock::const_succ_iterator SI = MBB.succ_begin(),
+         E = MBB.succ_end(); SI != E; ++SI) {
+    MachineBasicBlock *SuccMBB = *SI;
+
+    // Is it alive in this successor?
+    unsigned SuccIdx = SuccMBB->getNumber();
+    if (VI.AliveBlocks.test(SuccIdx))
+      return true;
+    OpSuccBlocks.push_back(SuccMBB);
+  }
+
+  // Check to see if this value is live because there is a use in a successor
+  // that kills it.
+  switch (OpSuccBlocks.size()) {
+  case 1: {
+    MachineBasicBlock *SuccMBB = OpSuccBlocks[0];
+    for (unsigned i = 0, e = VI.Kills.size(); i != e; ++i)
+      if (VI.Kills[i]->getParent() == SuccMBB)
+        return true;
+    break;
+  }
+  case 2: {
+    MachineBasicBlock *SuccMBB1 = OpSuccBlocks[0], *SuccMBB2 = OpSuccBlocks[1];
+    for (unsigned i = 0, e = VI.Kills.size(); i != e; ++i)
+      if (VI.Kills[i]->getParent() == SuccMBB1 ||
+          VI.Kills[i]->getParent() == SuccMBB2)
+        return true;
+    break;
+  }
+  default:
+    std::sort(OpSuccBlocks.begin(), OpSuccBlocks.end());
+    for (unsigned i = 0, e = VI.Kills.size(); i != e; ++i)
+      if (std::binary_search(OpSuccBlocks.begin(), OpSuccBlocks.end(),
+                             VI.Kills[i]->getParent()))
+        return true;
+  }
+  return false;
+}
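
The switch on OpSuccBlocks.size() is a small-N membership test: plain scans
for one or two successors, one sort plus a binary_search per kill beyond that.
The generic shape, sketched in isolation (names here are illustrative only):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Does any element of Needles occur in Haystack?  Haystack is taken
    // by value because the large-N path sorts it.
    bool anyMember(std::vector<int> Haystack,
                   const std::vector<int> &Needles) {
      if (Haystack.size() <= 2) {
        for (size_t i = 0; i != Needles.size(); ++i)
          for (size_t j = 0; j != Haystack.size(); ++j)
            if (Needles[i] == Haystack[j])
              return true;
        return false;
      }
      std::sort(Haystack.begin(), Haystack.end());
      for (size_t i = 0; i != Needles.size(); ++i)
        if (std::binary_search(Haystack.begin(), Haystack.end(),
                               Needles[i]))
          return true;
      return false;
    }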
+
+MachineBasicBlock *PHIElimination::SplitCriticalEdge(MachineBasicBlock *A,
+                                                     MachineBasicBlock *B) {
+  assert(A && B && "Missing MBB end point");
+
+  MachineFunction *MF = A->getParent();
+
+  // We may need to update A's terminator, but we can't do that if AnalyzeBranch
+  // fails. If A uses a jump table, we won't touch it.
+  const TargetInstrInfo *TII = MF->getTarget().getInstrInfo();
+  MachineBasicBlock *TBB = 0, *FBB = 0;
+  SmallVector<MachineOperand, 4> Cond;
+  if (TII->AnalyzeBranch(*A, TBB, FBB, Cond))
+    return NULL;
+
+  ++NumSplits;
+
+  MachineBasicBlock *NMBB = MF->CreateMachineBasicBlock();
+  MF->insert(next(MachineFunction::iterator(A)), NMBB);
+  DEBUG(errs() << "PHIElimination splitting critical edge:"
+        " BB#" << A->getNumber()
+        << " -- BB#" << NMBB->getNumber()
+        << " -- BB#" << B->getNumber() << '\n');
+
+  A->ReplaceUsesOfBlockWith(B, NMBB);
+  A->updateTerminator();
+
+  // Insert unconditional "jump B" instruction in NMBB if necessary.
+  NMBB->addSuccessor(B);
+  if (!NMBB->isLayoutSuccessor(B)) {
+    Cond.clear();
+    MF->getTarget().getInstrInfo()->InsertBranch(*NMBB, B, NULL, Cond);
+  }
+
+  // Fix PHI nodes in B so they refer to NMBB instead of A
+  for (MachineBasicBlock::iterator i = B->begin(), e = B->end();
+       i != e && i->getOpcode() == TargetInstrInfo::PHI; ++i)
+    for (unsigned ni = 1, ne = i->getNumOperands(); ni != ne; ni += 2)
+      if (i->getOperand(ni+1).getMBB() == A)
+        i->getOperand(ni+1).setMBB(NMBB);
+
+  if (LiveVariables *LV=getAnalysisIfAvailable<LiveVariables>())
+    LV->addNewBlock(NMBB, A, B);
+
+  if (MachineDominatorTree *MDT=getAnalysisIfAvailable<MachineDominatorTree>())
+    MDT->addNewBlock(NMBB, A);
+
+  return NMBB;
+}
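
Pictured on a small CFG (an illustration, not part of the patch), where A ends
in a conditional branch to B with a fall-through to C, and B has a second
predecessor D:

    Before:   A --> B <-- D         After SplitCriticalEdge(A, B):
              A --> C
                                    A --> NMBB --> B <-- D
                                    A --> C

    A's terminator is retargeted from B to NMBB (updateTerminator), NMBB
    gets an unconditional branch to B unless B is its layout successor,
    PHI operands in B that named A are rewritten to name NMBB, and
    LiveVariables and MachineDominatorTree are updated when available.
    If AnalyzeBranch cannot parse A's terminator, the edge is left alone
    and NULL is returned.
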
diff --git a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h
index 3d02dfd..f5872cb 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h
+++ b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h
@@ -89,11 +89,28 @@ namespace llvm {
     ///
     void analyzePHINodes(const MachineFunction& Fn);
 
-    // FindCopyInsertPoint - Find a safe place in MBB to insert a copy from
-    // SrcReg.  This needs to be after any def or uses of SrcReg, but before
-    // any subsequent point where control flow might jump out of the basic
-    // block.
+    /// Split critical edges where necessary for good coalescer performance.
+    bool SplitPHIEdges(MachineFunction &MF, MachineBasicBlock &MBB,
+                       LiveVariables &LV);
+
+    /// isLiveOut - Determine if Reg is live out from MBB, when not
+    /// considering PHI nodes. This means that Reg is either killed by
+    /// a successor block or passed through one.
+    bool isLiveOut(unsigned Reg, const MachineBasicBlock &MBB,
+                   LiveVariables &LV);
+
+    /// SplitCriticalEdge - Split a critical edge from A to B by
+    /// inserting a new MBB. Update branches in A and PHI instructions
+    /// in B. Return the new block.
+    MachineBasicBlock *SplitCriticalEdge(MachineBasicBlock *A,
+                                         MachineBasicBlock *B);
+
+    /// FindCopyInsertPoint - Find a safe place in MBB to insert a copy from
+    /// SrcReg when following the CFG edge to SuccMBB. This needs to be after
+    /// any def of SrcReg, but before any subsequent point where control flow
+    /// might jump out of the basic block.
     MachineBasicBlock::iterator FindCopyInsertPoint(MachineBasicBlock &MBB,
+                                                    MachineBasicBlock &SuccMBB,
                                                     unsigned SrcReg);
 
     // SkipPHIsAndLabels - Copies need to be inserted after phi nodes and
diff --git a/libclamav/c++/llvm/lib/CodeGen/PostRASchedulerList.cpp b/libclamav/c++/llvm/lib/CodeGen/PostRASchedulerList.cpp
index 902f505..9101fce 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PostRASchedulerList.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PostRASchedulerList.cpp
@@ -19,6 +19,9 @@
 //===----------------------------------------------------------------------===//
 
 #define DEBUG_TYPE "post-RA-sched"
+#include "AntiDepBreaker.h"
+#include "AggressiveAntiDepBreaker.h"
+#include "CriticalAntiDepBreaker.h"
 #include "ExactHazardRecognizer.h"
 #include "SimpleHazardRecognizer.h"
 #include "ScheduleDAGInstrs.h"
@@ -31,15 +34,17 @@
 #include "llvm/CodeGen/MachineLoopInfo.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
 #include "llvm/CodeGen/ScheduleHazardRecognizer.h"
+#include "llvm/Analysis/AliasAnalysis.h"
 #include "llvm/Target/TargetLowering.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Target/TargetSubtarget.h"
-#include "llvm/Support/Compiler.h"
+#include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
+#include "llvm/ADT/BitVector.h"
 #include "llvm/ADT/Statistic.h"
 #include <map>
 #include <set>
@@ -47,6 +52,7 @@ using namespace llvm;
 
 STATISTIC(NumNoops, "Number of noops inserted");
 STATISTIC(NumStalls, "Number of pipeline stalls");
+STATISTIC(NumFixedAnti, "Number of fixed anti-dependencies");
 
 // Post-RA scheduling is enabled with
 // TargetSubtarget.enablePostRAScheduler(). This flag can be used to
@@ -55,10 +61,11 @@ static cl::opt<bool>
 EnablePostRAScheduler("post-RA-scheduler",
                        cl::desc("Enable scheduling after register allocation"),
                        cl::init(false), cl::Hidden);
-static cl::opt<bool>
+static cl::opt<std::string>
 EnableAntiDepBreaking("break-anti-dependencies",
-                      cl::desc("Break post-RA scheduling anti-dependencies"),
-                      cl::init(true), cl::Hidden);
+                      cl::desc("Break post-RA scheduling anti-dependencies: "
+                               "\"critical\", \"all\", or \"none\""),
+                      cl::init("none"), cl::Hidden);
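
With a driver that links this pass, such as llc, the override is spelled on
the command line; a sketch, assuming the option (cl::Hidden, so absent from
-help) is compiled in and input.bc is a placeholder bitcode file:

    llc -post-RA-scheduler -break-anti-dependencies=critical input.bc

"all" selects the new aggressive breaker, "critical" keeps the previous
critical-path-only behavior, and "none" disables breaking entirely.
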
 static cl::opt<bool>
 EnablePostRAHazardAvoidance("avoid-hazards",
                       cl::desc("Enable exact hazard avoidance"),
@@ -74,14 +81,21 @@ DebugMod("postra-sched-debugmod",
                       cl::desc("Debug control MBBs that are scheduled"),
                       cl::init(0), cl::Hidden);
 
+AntiDepBreaker::~AntiDepBreaker() { }
+
 namespace {
-  class VISIBILITY_HIDDEN PostRAScheduler : public MachineFunctionPass {
+  class PostRAScheduler : public MachineFunctionPass {
+    AliasAnalysis *AA;
+    CodeGenOpt::Level OptLevel;
+
   public:
     static char ID;
-    PostRAScheduler() : MachineFunctionPass(&ID) {}
+    PostRAScheduler(CodeGenOpt::Level ol) :
+      MachineFunctionPass(&ID), OptLevel(ol) {}
 
     void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.setPreservesCFG();
+      AU.addRequired<AliasAnalysis>();
       AU.addRequired<MachineDominatorTree>();
       AU.addPreserved<MachineDominatorTree>();
       AU.addRequired<MachineLoopInfo>();
@@ -97,7 +111,7 @@ namespace {
   };
   char PostRAScheduler::ID = 0;
 
-  class VISIBILITY_HIDDEN SchedulePostRATDList : public ScheduleDAGInstrs {
+  class SchedulePostRATDList : public ScheduleDAGInstrs {
     /// AvailableQueue - The priority queue to use for the available SUnits.
     ///
     LatencyPriorityQueue AvailableQueue;
@@ -111,48 +125,30 @@ namespace {
     /// Topo - A topological ordering for SUnits.
     ScheduleDAGTopologicalSort Topo;
 
-    /// AllocatableSet - The set of allocatable registers.
-    /// We'll be ignoring anti-dependencies on non-allocatable registers,
-    /// because they may not be safe to break.
-    const BitVector AllocatableSet;
-
     /// HazardRec - The hazard recognizer to use.
     ScheduleHazardRecognizer *HazardRec;
 
-    /// Classes - For live regs that are only used in one register class in a
-    /// live range, the register class. If the register is not live, the
-    /// corresponding value is null. If the register is live but used in
-    /// multiple register classes, the corresponding value is -1 casted to a
-    /// pointer.
-    const TargetRegisterClass *
-      Classes[TargetRegisterInfo::FirstVirtualRegister];
+    /// AntiDepBreak - Anti-dependence breaking object, or NULL if none
+    AntiDepBreaker *AntiDepBreak;
 
-    /// RegRegs - Map registers to all their references within a live range.
-    std::multimap<unsigned, MachineOperand *> RegRefs;
+    /// AA - AliasAnalysis for making memory reference queries.
+    AliasAnalysis *AA;
 
     /// KillIndices - The index of the most recent kill (proceeding bottom-up),
     /// or ~0u if the register is not live.
     unsigned KillIndices[TargetRegisterInfo::FirstVirtualRegister];
 
-    /// DefIndices - The index of the most recent complete def (proceding bottom
-    /// up), or ~0u if the register is live.
-    unsigned DefIndices[TargetRegisterInfo::FirstVirtualRegister];
-
-    /// KeepRegs - A set of registers which are live and cannot be changed to
-    /// break anti-dependencies.
-    SmallSet<unsigned, 4> KeepRegs;
-
   public:
     SchedulePostRATDList(MachineFunction &MF,
                          const MachineLoopInfo &MLI,
                          const MachineDominatorTree &MDT,
-                         ScheduleHazardRecognizer *HR)
+                         ScheduleHazardRecognizer *HR,
+                         AntiDepBreaker *ADB,
+                         AliasAnalysis *aa)
       : ScheduleDAGInstrs(MF, MLI, MDT), Topo(SUnits),
-        AllocatableSet(TRI->getAllocatableSet(MF)),
-        HazardRec(HR) {}
+      HazardRec(HR), AntiDepBreak(ADB), AA(aa) {}
 
     ~SchedulePostRATDList() {
-      delete HazardRec;
     }
 
     /// StartBlock - Initialize register live-range state for scheduling in
@@ -164,11 +160,6 @@ namespace {
     ///
     void Schedule();
     
-    /// FixupKills - Fix register kill flags that have been made
-    /// invalid due to scheduling
-    ///
-    void FixupKills(MachineBasicBlock *MBB);
-
     /// Observe - Update liveness information to account for the current
     /// instruction, which will not be scheduled.
     ///
@@ -178,17 +169,16 @@ namespace {
     ///
     void FinishBlock();
 
+    /// FixupKills - Fix register kill flags that have been made
+    /// invalid due to scheduling
+    ///
+    void FixupKills(MachineBasicBlock *MBB);
+
   private:
-    void PrescanInstruction(MachineInstr *MI);
-    void ScanInstruction(MachineInstr *MI, unsigned Count);
     void ReleaseSucc(SUnit *SU, SDep *SuccEdge);
     void ReleaseSuccessors(SUnit *SU);
     void ScheduleNodeTopDown(SUnit *SU, unsigned CurCycle);
     void ListScheduleTopDown();
-    bool BreakAntiDependencies();
-    unsigned findSuitableFreeRegister(unsigned AntiDepReg,
-                                      unsigned LastNewReg,
-                                      const TargetRegisterClass *);
     void StartBlockForKills(MachineBasicBlock *BB);
     
     // ToggleKillFlag - Toggle a register operand kill flag. Other
@@ -221,15 +211,26 @@ static bool isSchedulingBoundary(const MachineInstr *MI,
 }
 
 bool PostRAScheduler::runOnMachineFunction(MachineFunction &Fn) {
+  AA = &getAnalysis<AliasAnalysis>();
+
   // Check for explicit enable/disable of post-ra scheduling.
+  TargetSubtarget::AntiDepBreakMode AntiDepMode = TargetSubtarget::ANTIDEP_NONE;
+  SmallVector<TargetRegisterClass*, 4> CriticalPathRCs;
   if (EnablePostRAScheduler.getPosition() > 0) {
     if (!EnablePostRAScheduler)
-      return true;
+      return false;
   } else {
-    // Check that post-RA scheduling is enabled for this function
+    // Check that post-RA scheduling is enabled for this target.
     const TargetSubtarget &ST = Fn.getTarget().getSubtarget<TargetSubtarget>();
-    if (!ST.enablePostRAScheduler())
-      return true;
+    if (!ST.enablePostRAScheduler(OptLevel, AntiDepMode, CriticalPathRCs))
+      return false;
+  }
+
+  // Check for antidep breaking override...
+  if (EnableAntiDepBreaking.getPosition() > 0) {
+    AntiDepMode = (EnableAntiDepBreaking == "all")
+      ? TargetSubtarget::ANTIDEP_ALL
+      : (EnableAntiDepBreaking == "critical")
+          ? TargetSubtarget::ANTIDEP_CRITICAL
+          : TargetSubtarget::ANTIDEP_NONE;
   }
 
   DEBUG(errs() << "PostRAScheduler\n");
@@ -240,8 +241,13 @@ bool PostRAScheduler::runOnMachineFunction(MachineFunction &Fn) {
   ScheduleHazardRecognizer *HR = EnablePostRAHazardAvoidance ?
     (ScheduleHazardRecognizer *)new ExactHazardRecognizer(InstrItins) :
     (ScheduleHazardRecognizer *)new SimpleHazardRecognizer();
+  AntiDepBreaker *ADB = 
+    ((AntiDepMode == TargetSubtarget::ANTIDEP_ALL) ?
+     (AntiDepBreaker *)new AggressiveAntiDepBreaker(Fn, CriticalPathRCs) :
+     ((AntiDepMode == TargetSubtarget::ANTIDEP_CRITICAL) ? 
+      (AntiDepBreaker *)new CriticalAntiDepBreaker(Fn) : NULL));
 
-  SchedulePostRATDList Scheduler(Fn, MLI, MDT, HR);
+  SchedulePostRATDList Scheduler(Fn, MLI, MDT, HR, ADB, AA);
 
   // Loop over all of the basic blocks
   for (MachineFunction::iterator MBB = Fn.begin(), MBBe = Fn.end();
@@ -253,7 +259,7 @@ bool PostRAScheduler::runOnMachineFunction(MachineFunction &Fn) {
       if (bbcnt++ % DebugDiv != DebugMod)
         continue;
       errs() << "*** DEBUG scheduling " << Fn.getFunction()->getNameStr() <<
-        ":MBB ID#" << MBB->getNumber() << " ***\n";
+        ":BB#" << MBB->getNumber() << " ***\n";
     }
 #endif
 
@@ -289,6 +295,9 @@ bool PostRAScheduler::runOnMachineFunction(MachineFunction &Fn) {
     Scheduler.FixupKills(MBB);
   }
 
+  delete HR;
+  delete ADB;
+
   return true;
 }
   
@@ -299,90 +308,24 @@ void SchedulePostRATDList::StartBlock(MachineBasicBlock *BB) {
   // Call the superclass.
   ScheduleDAGInstrs::StartBlock(BB);
 
-  // Reset the hazard recognizer.
+  // Reset the hazard recognizer and anti-dep breaker.
   HazardRec->Reset();
-
-  // Clear out the register class data.
-  std::fill(Classes, array_endof(Classes),
-            static_cast<const TargetRegisterClass *>(0));
-
-  // Initialize the indices to indicate that no registers are live.
-  std::fill(KillIndices, array_endof(KillIndices), ~0u);
-  std::fill(DefIndices, array_endof(DefIndices), BB->size());
-
-  // Clear "do not change" set.
-  KeepRegs.clear();
-
-  bool IsReturnBlock = (!BB->empty() && BB->back().getDesc().isReturn());
-
-  // Determine the live-out physregs for this block.
-  if (IsReturnBlock) {
-    // In a return block, examine the function live-out regs.
-    for (MachineRegisterInfo::liveout_iterator I = MRI.liveout_begin(),
-         E = MRI.liveout_end(); I != E; ++I) {
-      unsigned Reg = *I;
-      Classes[Reg] = reinterpret_cast<TargetRegisterClass *>(-1);
-      KillIndices[Reg] = BB->size();
-      DefIndices[Reg] = ~0u;
-      // Repeat, for all aliases.
-      for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias) {
-        unsigned AliasReg = *Alias;
-        Classes[AliasReg] = reinterpret_cast<TargetRegisterClass *>(-1);
-        KillIndices[AliasReg] = BB->size();
-        DefIndices[AliasReg] = ~0u;
-      }
-    }
-  } else {
-    // In a non-return block, examine the live-in regs of all successors.
-    for (MachineBasicBlock::succ_iterator SI = BB->succ_begin(),
-         SE = BB->succ_end(); SI != SE; ++SI)
-      for (MachineBasicBlock::livein_iterator I = (*SI)->livein_begin(),
-           E = (*SI)->livein_end(); I != E; ++I) {
-        unsigned Reg = *I;
-        Classes[Reg] = reinterpret_cast<TargetRegisterClass *>(-1);
-        KillIndices[Reg] = BB->size();
-        DefIndices[Reg] = ~0u;
-        // Repeat, for all aliases.
-        for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias) {
-          unsigned AliasReg = *Alias;
-          Classes[AliasReg] = reinterpret_cast<TargetRegisterClass *>(-1);
-          KillIndices[AliasReg] = BB->size();
-          DefIndices[AliasReg] = ~0u;
-        }
-      }
-  }
-
-  // Mark live-out callee-saved registers. In a return block this is
-  // all callee-saved registers. In non-return this is any
-  // callee-saved register that is not saved in the prolog.
-  const MachineFrameInfo *MFI = MF.getFrameInfo();
-  BitVector Pristine = MFI->getPristineRegs(BB);
-  for (const unsigned *I = TRI->getCalleeSavedRegs(); *I; ++I) {
-    unsigned Reg = *I;
-    if (!IsReturnBlock && !Pristine.test(Reg)) continue;
-    Classes[Reg] = reinterpret_cast<TargetRegisterClass *>(-1);
-    KillIndices[Reg] = BB->size();
-    DefIndices[Reg] = ~0u;
-    // Repeat, for all aliases.
-    for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias) {
-      unsigned AliasReg = *Alias;
-      Classes[AliasReg] = reinterpret_cast<TargetRegisterClass *>(-1);
-      KillIndices[AliasReg] = BB->size();
-      DefIndices[AliasReg] = ~0u;
-    }
-  }
+  if (AntiDepBreak != NULL)
+    AntiDepBreak->StartBlock(BB);
 }
 
 /// Schedule - Schedule the instruction range using list scheduling.
 ///
 void SchedulePostRATDList::Schedule() {
-  DEBUG(errs() << "********** List Scheduling **********\n");
-  
   // Build the scheduling graph.
-  BuildSchedGraph();
+  BuildSchedGraph(AA);
 
-  if (EnableAntiDepBreaking) {
-    if (BreakAntiDependencies()) {
+  if (AntiDepBreak != NULL) {
+    unsigned Broken = 
+      AntiDepBreak->BreakAntiDependencies(SUnits, Begin, InsertPos,
+                                          InsertPosIndex);
+    
+    if (Broken != 0) {
       // We made changes. Update the dependency graph.
       // Theoretically we could update the graph in place:
       // When a live range is changed to use a different register, remove
@@ -390,19 +333,21 @@ void SchedulePostRATDList::Schedule() {
       // that register, and add new anti-dependence and output-dependence
       // edges based on the next live range of the register.
       SUnits.clear();
+      Sequence.clear();
       EntrySU = SUnit();
       ExitSU = SUnit();
-      BuildSchedGraph();
+      BuildSchedGraph(AA);
+      
+      NumFixedAnti += Broken;
     }
   }
 
+  DEBUG(errs() << "********** List Scheduling **********\n");
   DEBUG(for (unsigned su = 0, e = SUnits.size(); su != e; ++su)
           SUnits[su].dumpAll(this));
 
   AvailableQueue.initNodes(SUnits);
-
   ListScheduleTopDown();
-  
   AvailableQueue.releaseState();
 }
 
@@ -410,426 +355,20 @@ void SchedulePostRATDList::Schedule() {
 /// instruction, which will not be scheduled.
 ///
 void SchedulePostRATDList::Observe(MachineInstr *MI, unsigned Count) {
-  assert(Count < InsertPosIndex && "Instruction index out of expected range!");
-
-  // Any register which was defined within the previous scheduling region
-  // may have been rescheduled and its lifetime may overlap with registers
-  // in ways not reflected in our current liveness state. For each such
-  // register, adjust the liveness state to be conservatively correct.
-  for (unsigned Reg = 0; Reg != TargetRegisterInfo::FirstVirtualRegister; ++Reg)
-    if (DefIndices[Reg] < InsertPosIndex && DefIndices[Reg] >= Count) {
-      assert(KillIndices[Reg] == ~0u && "Clobbered register is live!");
-      // Mark this register to be non-renamable.
-      Classes[Reg] = reinterpret_cast<TargetRegisterClass *>(-1);
-      // Move the def index to the end of the previous region, to reflect
-      // that the def could theoretically have been scheduled at the end.
-      DefIndices[Reg] = InsertPosIndex;
-    }
-
-  PrescanInstruction(MI);
-  ScanInstruction(MI, Count);
+  if (AntiDepBreak != NULL)
+    AntiDepBreak->Observe(MI, Count, InsertPosIndex);
 }
 
 /// FinishBlock - Clean up register live-range state.
 ///
 void SchedulePostRATDList::FinishBlock() {
-  RegRefs.clear();
+  if (AntiDepBreak != NULL)
+    AntiDepBreak->FinishBlock();
 
   // Call the superclass.
   ScheduleDAGInstrs::FinishBlock();
 }
 
-/// CriticalPathStep - Return the next SUnit after SU on the bottom-up
-/// critical path.
-static SDep *CriticalPathStep(SUnit *SU) {
-  SDep *Next = 0;
-  unsigned NextDepth = 0;
-  // Find the predecessor edge with the greatest depth.
-  for (SUnit::pred_iterator P = SU->Preds.begin(), PE = SU->Preds.end();
-       P != PE; ++P) {
-    SUnit *PredSU = P->getSUnit();
-    unsigned PredLatency = P->getLatency();
-    unsigned PredTotalLatency = PredSU->getDepth() + PredLatency;
-    // In the case of a latency tie, prefer an anti-dependency edge over
-    // other types of edges.
-    if (NextDepth < PredTotalLatency ||
-        (NextDepth == PredTotalLatency && P->getKind() == SDep::Anti)) {
-      NextDepth = PredTotalLatency;
-      Next = &*P;
-    }
-  }
-  return Next;
-}
-
-void SchedulePostRATDList::PrescanInstruction(MachineInstr *MI) {
-  // Scan the register operands for this instruction and update
-  // Classes and RegRefs.
-  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
-    MachineOperand &MO = MI->getOperand(i);
-    if (!MO.isReg()) continue;
-    unsigned Reg = MO.getReg();
-    if (Reg == 0) continue;
-    const TargetRegisterClass *NewRC = 0;
-    
-    if (i < MI->getDesc().getNumOperands())
-      NewRC = MI->getDesc().OpInfo[i].getRegClass(TRI);
-
-    // For now, only allow the register to be changed if its register
-    // class is consistent across all uses.
-    if (!Classes[Reg] && NewRC)
-      Classes[Reg] = NewRC;
-    else if (!NewRC || Classes[Reg] != NewRC)
-      Classes[Reg] = reinterpret_cast<TargetRegisterClass *>(-1);
-
-    // Now check for aliases.
-    for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias) {
-      // If an alias of the reg is used during the live range, give up.
-      // Note that this allows us to skip checking if AntiDepReg
-      // overlaps with any of the aliases, among other things.
-      unsigned AliasReg = *Alias;
-      if (Classes[AliasReg]) {
-        Classes[AliasReg] = reinterpret_cast<TargetRegisterClass *>(-1);
-        Classes[Reg] = reinterpret_cast<TargetRegisterClass *>(-1);
-      }
-    }
-
-    // If we're still willing to consider this register, note the reference.
-    if (Classes[Reg] != reinterpret_cast<TargetRegisterClass *>(-1))
-      RegRefs.insert(std::make_pair(Reg, &MO));
-
-    // It's not safe to change register allocation for source operands of
-    // that have special allocation requirements.
-    if (MO.isUse() && MI->getDesc().hasExtraSrcRegAllocReq()) {
-      if (KeepRegs.insert(Reg)) {
-        for (const unsigned *Subreg = TRI->getSubRegisters(Reg);
-             *Subreg; ++Subreg)
-          KeepRegs.insert(*Subreg);
-      }
-    }
-  }
-}
-
-void SchedulePostRATDList::ScanInstruction(MachineInstr *MI,
-                                           unsigned Count) {
-  // Update liveness.
-  // Proceding upwards, registers that are defed but not used in this
-  // instruction are now dead.
-  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
-    MachineOperand &MO = MI->getOperand(i);
-    if (!MO.isReg()) continue;
-    unsigned Reg = MO.getReg();
-    if (Reg == 0) continue;
-    if (!MO.isDef()) continue;
-    // Ignore two-addr defs.
-    if (MI->isRegTiedToUseOperand(i)) continue;
-
-    DefIndices[Reg] = Count;
-    KillIndices[Reg] = ~0u;
-    assert(((KillIndices[Reg] == ~0u) !=
-            (DefIndices[Reg] == ~0u)) &&
-           "Kill and Def maps aren't consistent for Reg!");
-    KeepRegs.erase(Reg);
-    Classes[Reg] = 0;
-    RegRefs.erase(Reg);
-    // Repeat, for all subregs.
-    for (const unsigned *Subreg = TRI->getSubRegisters(Reg);
-         *Subreg; ++Subreg) {
-      unsigned SubregReg = *Subreg;
-      DefIndices[SubregReg] = Count;
-      KillIndices[SubregReg] = ~0u;
-      KeepRegs.erase(SubregReg);
-      Classes[SubregReg] = 0;
-      RegRefs.erase(SubregReg);
-    }
-    // Conservatively mark super-registers as unusable.
-    for (const unsigned *Super = TRI->getSuperRegisters(Reg);
-         *Super; ++Super) {
-      unsigned SuperReg = *Super;
-      Classes[SuperReg] = reinterpret_cast<TargetRegisterClass *>(-1);
-    }
-  }
-  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
-    MachineOperand &MO = MI->getOperand(i);
-    if (!MO.isReg()) continue;
-    unsigned Reg = MO.getReg();
-    if (Reg == 0) continue;
-    if (!MO.isUse()) continue;
-
-    const TargetRegisterClass *NewRC = 0;
-    if (i < MI->getDesc().getNumOperands())
-      NewRC = MI->getDesc().OpInfo[i].getRegClass(TRI);
-
-    // For now, only allow the register to be changed if its register
-    // class is consistent across all uses.
-    if (!Classes[Reg] && NewRC)
-      Classes[Reg] = NewRC;
-    else if (!NewRC || Classes[Reg] != NewRC)
-      Classes[Reg] = reinterpret_cast<TargetRegisterClass *>(-1);
-
-    RegRefs.insert(std::make_pair(Reg, &MO));
-
-    // It wasn't previously live but now it is, this is a kill.
-    if (KillIndices[Reg] == ~0u) {
-      KillIndices[Reg] = Count;
-      DefIndices[Reg] = ~0u;
-          assert(((KillIndices[Reg] == ~0u) !=
-                  (DefIndices[Reg] == ~0u)) &&
-               "Kill and Def maps aren't consistent for Reg!");
-    }
-    // Repeat, for all aliases.
-    for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias) {
-      unsigned AliasReg = *Alias;
-      if (KillIndices[AliasReg] == ~0u) {
-        KillIndices[AliasReg] = Count;
-        DefIndices[AliasReg] = ~0u;
-      }
-    }
-  }
-}
-
-unsigned
-SchedulePostRATDList::findSuitableFreeRegister(unsigned AntiDepReg,
-                                               unsigned LastNewReg,
-                                               const TargetRegisterClass *RC) {
-  for (TargetRegisterClass::iterator R = RC->allocation_order_begin(MF),
-       RE = RC->allocation_order_end(MF); R != RE; ++R) {
-    unsigned NewReg = *R;
-    // Don't replace a register with itself.
-    if (NewReg == AntiDepReg) continue;
-    // Don't replace a register with one that was recently used to repair
-    // an anti-dependence with this AntiDepReg, because that would
-    // re-introduce that anti-dependence.
-    if (NewReg == LastNewReg) continue;
-    // If NewReg is dead and NewReg's most recent def is not before
-    // AntiDepReg's kill, it's safe to replace AntiDepReg with NewReg.
-    assert(((KillIndices[AntiDepReg] == ~0u) != (DefIndices[AntiDepReg] == ~0u)) &&
-           "Kill and Def maps aren't consistent for AntiDepReg!");
-    assert(((KillIndices[NewReg] == ~0u) != (DefIndices[NewReg] == ~0u)) &&
-           "Kill and Def maps aren't consistent for NewReg!");
-    if (KillIndices[NewReg] != ~0u ||
-        Classes[NewReg] == reinterpret_cast<TargetRegisterClass *>(-1) ||
-        KillIndices[AntiDepReg] > DefIndices[NewReg])
-      continue;
-    return NewReg;
-  }
-
-  // No registers are free and available!
-  return 0;
-}
-
-/// BreakAntiDependencies - Identifiy anti-dependencies along the critical path
-/// of the ScheduleDAG and break them by renaming registers.
-///
-bool SchedulePostRATDList::BreakAntiDependencies() {
-  // The code below assumes that there is at least one instruction,
-  // so just duck out immediately if the block is empty.
-  if (SUnits.empty()) return false;
-
-  // Find the node at the bottom of the critical path.
-  SUnit *Max = 0;
-  for (unsigned i = 0, e = SUnits.size(); i != e; ++i) {
-    SUnit *SU = &SUnits[i];
-    if (!Max || SU->getDepth() + SU->Latency > Max->getDepth() + Max->Latency)
-      Max = SU;
-  }
-
-  DEBUG(errs() << "Critical path has total latency "
-        << (Max->getDepth() + Max->Latency) << "\n");
-
-  // Track progress along the critical path through the SUnit graph as we walk
-  // the instructions.
-  SUnit *CriticalPathSU = Max;
-  MachineInstr *CriticalPathMI = CriticalPathSU->getInstr();
-
-  // Consider this pattern:
-  //   A = ...
-  //   ... = A
-  //   A = ...
-  //   ... = A
-  //   A = ...
-  //   ... = A
-  //   A = ...
-  //   ... = A
-  // There are three anti-dependencies here, and without special care,
-  // we'd break all of them using the same register:
-  //   A = ...
-  //   ... = A
-  //   B = ...
-  //   ... = B
-  //   B = ...
-  //   ... = B
-  //   B = ...
-  //   ... = B
-  // because at each anti-dependence, B is the first register that
-  // isn't A which is free.  This re-introduces anti-dependencies
-  // at all but one of the original anti-dependencies that we were
-  // trying to break.  To avoid this, keep track of the most recent
-  // register that each register was replaced with, avoid
-  // using it to repair an anti-dependence on the same register.
-  // This lets us produce this:
-  //   A = ...
-  //   ... = A
-  //   B = ...
-  //   ... = B
-  //   C = ...
-  //   ... = C
-  //   B = ...
-  //   ... = B
-  // This still has an anti-dependence on B, but at least it isn't on the
-  // original critical path.
-  //
-  // TODO: If we tracked more than one register here, we could potentially
-  // fix that remaining critical edge too. This is a little more involved,
-  // because unlike the most recent register, less recent registers should
-  // still be considered, though only if no other registers are available.
-  unsigned LastNewReg[TargetRegisterInfo::FirstVirtualRegister] = {};
-
-  // Attempt to break anti-dependence edges on the critical path. Walk the
-  // instructions from the bottom up, tracking information about liveness
-  // as we go to help determine which registers are available.
-  bool Changed = false;
-  unsigned Count = InsertPosIndex - 1;
-  for (MachineBasicBlock::iterator I = InsertPos, E = Begin;
-       I != E; --Count) {
-    MachineInstr *MI = --I;
-
-    // Check if this instruction has a dependence on the critical path that
-    // is an anti-dependence that we may be able to break. If it is, set
-    // AntiDepReg to the non-zero register associated with the anti-dependence.
-    //
-    // We limit our attention to the critical path as a heuristic to avoid
-    // breaking anti-dependence edges that aren't going to significantly
-    // impact the overall schedule. There are a limited number of registers
-    // and we want to save them for the important edges.
-    // 
-    // TODO: Instructions with multiple defs could have multiple
-    // anti-dependencies. The current code here only knows how to break one
-    // edge per instruction. Note that we'd have to be able to break all of
-    // the anti-dependencies in an instruction in order to be effective.
-    unsigned AntiDepReg = 0;
-    if (MI == CriticalPathMI) {
-      if (SDep *Edge = CriticalPathStep(CriticalPathSU)) {
-        SUnit *NextSU = Edge->getSUnit();
-
-        // Only consider anti-dependence edges.
-        if (Edge->getKind() == SDep::Anti) {
-          AntiDepReg = Edge->getReg();
-          assert(AntiDepReg != 0 && "Anti-dependence on reg0?");
-          if (!AllocatableSet.test(AntiDepReg))
-            // Don't break anti-dependencies on non-allocatable registers.
-            AntiDepReg = 0;
-          else if (KeepRegs.count(AntiDepReg))
-            // Don't break anti-dependencies if an use down below requires
-            // this exact register.
-            AntiDepReg = 0;
-          else {
-            // If the SUnit has other dependencies on the SUnit that it
-            // anti-depends on, don't bother breaking the anti-dependency
-            // since those edges would prevent such units from being
-            // scheduled past each other regardless.
-            //
-            // Also, if there are dependencies on other SUnits with the
-            // same register as the anti-dependency, don't attempt to
-            // break it.
-            for (SUnit::pred_iterator P = CriticalPathSU->Preds.begin(),
-                 PE = CriticalPathSU->Preds.end(); P != PE; ++P)
-              if (P->getSUnit() == NextSU ?
-                    (P->getKind() != SDep::Anti || P->getReg() != AntiDepReg) :
-                    (P->getKind() == SDep::Data && P->getReg() == AntiDepReg)) {
-                AntiDepReg = 0;
-                break;
-              }
-          }
-        }
-        CriticalPathSU = NextSU;
-        CriticalPathMI = CriticalPathSU->getInstr();
-      } else {
-        // We've reached the end of the critical path.
-        CriticalPathSU = 0;
-        CriticalPathMI = 0;
-      }
-    }
-
-    PrescanInstruction(MI);
-
-    if (MI->getDesc().hasExtraDefRegAllocReq())
-      // If this instruction's defs have special allocation requirement, don't
-      // break this anti-dependency.
-      AntiDepReg = 0;
-    else if (AntiDepReg) {
-      // If this instruction has a use of AntiDepReg, breaking it
-      // is invalid.
-      for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
-        MachineOperand &MO = MI->getOperand(i);
-        if (!MO.isReg()) continue;
-        unsigned Reg = MO.getReg();
-        if (Reg == 0) continue;
-        if (MO.isUse() && AntiDepReg == Reg) {
-          AntiDepReg = 0;
-          break;
-        }
-      }
-    }
-
-    // Determine AntiDepReg's register class, if it is live and is
-    // consistently used within a single class.
-    const TargetRegisterClass *RC = AntiDepReg != 0 ? Classes[AntiDepReg] : 0;
-    assert((AntiDepReg == 0 || RC != NULL) &&
-           "Register should be live if it's causing an anti-dependence!");
-    if (RC == reinterpret_cast<TargetRegisterClass *>(-1))
-      AntiDepReg = 0;
-
-    // Look for a suitable register to use to break the anti-dependence.
-    //
-    // TODO: Instead of picking the first free register, consider which might
-    // be the best.
-    if (AntiDepReg != 0) {
-      if (unsigned NewReg = findSuitableFreeRegister(AntiDepReg,
-                                                     LastNewReg[AntiDepReg],
-                                                     RC)) {
-        DEBUG(errs() << "Breaking anti-dependence edge on "
-              << TRI->getName(AntiDepReg)
-              << " with " << RegRefs.count(AntiDepReg) << " references"
-              << " using " << TRI->getName(NewReg) << "!\n");
-
-        // Update the references to the old register to refer to the new
-        // register.
-        std::pair<std::multimap<unsigned, MachineOperand *>::iterator,
-                  std::multimap<unsigned, MachineOperand *>::iterator>
-           Range = RegRefs.equal_range(AntiDepReg);
-        for (std::multimap<unsigned, MachineOperand *>::iterator
-             Q = Range.first, QE = Range.second; Q != QE; ++Q)
-          Q->second->setReg(NewReg);
-
-        // We just went back in time and modified history; the
-    // liveness information for the anti-dependence reg is now
-        // inconsistent. Set the state as if it were dead.
-        Classes[NewReg] = Classes[AntiDepReg];
-        DefIndices[NewReg] = DefIndices[AntiDepReg];
-        KillIndices[NewReg] = KillIndices[AntiDepReg];
-        assert(((KillIndices[NewReg] == ~0u) !=
-                (DefIndices[NewReg] == ~0u)) &&
-             "Kill and Def maps aren't consistent for NewReg!");
-
-        Classes[AntiDepReg] = 0;
-        DefIndices[AntiDepReg] = KillIndices[AntiDepReg];
-        KillIndices[AntiDepReg] = ~0u;
-        assert(((KillIndices[AntiDepReg] == ~0u) !=
-                (DefIndices[AntiDepReg] == ~0u)) &&
-             "Kill and Def maps aren't consistent for AntiDepReg!");
-
-        RegRefs.erase(AntiDepReg);
-        Changed = true;
-        LastNewReg[AntiDepReg] = NewReg;
-      }
-    }
-
-    ScanInstruction(MI, Count);
-  }
-
-  return Changed;
-}
-
 /// StartBlockForKills - Initialize register live-range state for updating kills
 ///
 void SchedulePostRATDList::StartBlockForKills(MachineBasicBlock *BB) {
@@ -884,6 +423,7 @@ bool SchedulePostRATDList::ToggleKillFlag(MachineInstr *MI,
 
   // If any subreg of MO is live, then create an imp-def for that
   // subreg and keep MO marked as killed.
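+  // Clear the kill flag up front; it is re-set below only when every
+  // subregister proves to be dead.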
+  MO.setIsKill(false);
   bool AllDead = true;
   const unsigned SuperReg = MO.getReg();
   for (const unsigned *Subreg = TRI->getSubRegisters(SuperReg);
@@ -898,7 +438,8 @@ bool SchedulePostRATDList::ToggleKillFlag(MachineInstr *MI,
     }
   }
 
-  MO.setIsKill(AllDead);
+  if (AllDead)
+    MO.setIsKill(true);
   return false;
 }
 
@@ -906,7 +447,7 @@ bool SchedulePostRATDList::ToggleKillFlag(MachineInstr *MI,
 /// incorrect by instruction reordering.
 ///
 void SchedulePostRATDList::FixupKills(MachineBasicBlock *MBB) {
-  DEBUG(errs() << "Fixup kills for BB ID#" << MBB->getNumber() << '\n');
+  DEBUG(errs() << "Fixup kills for BB#" << MBB->getNumber() << '\n');
 
   std::set<unsigned> killedRegs;
   BitVector ReservedRegs = TRI->getReservedRegs(MF);
@@ -1032,8 +573,9 @@ void SchedulePostRATDList::ReleaseSucc(SUnit *SU, SDep *SuccEdge) {
 /// ReleaseSuccessors - Call ReleaseSucc on each of SU's successors.
 void SchedulePostRATDList::ReleaseSuccessors(SUnit *SU) {
   for (SUnit::succ_iterator I = SU->Succs.begin(), E = SU->Succs.end();
-       I != E; ++I)
+       I != E; ++I) {
     ReleaseSucc(SU, &*I);
+  }
 }
 
 /// ScheduleNodeTopDown - Add the node to the schedule. Decrement the pending
@@ -1044,7 +586,8 @@ void SchedulePostRATDList::ScheduleNodeTopDown(SUnit *SU, unsigned CurCycle) {
   DEBUG(SU->dump(this));
   
   Sequence.push_back(SU);
-  assert(CurCycle >= SU->getDepth() && "Node scheduled above its depth!");
+  assert(CurCycle >= SU->getDepth() && 
+         "Node scheduled above its depth!");
   SU->setDepthToAtLeast(CurCycle);
 
   ReleaseSuccessors(SU);
@@ -1056,14 +599,21 @@ void SchedulePostRATDList::ScheduleNodeTopDown(SUnit *SU, unsigned CurCycle) {
 /// schedulers.
 void SchedulePostRATDList::ListScheduleTopDown() {
   unsigned CurCycle = 0;
+  
+  // We're scheduling top-down but we're visiting the regions in
+  // bottom-up order, so we don't know the hazards at the start of a
+  // region. So assume no hazards (this should usually be ok as most
+  // blocks are a single region).
+  HazardRec->Reset();
 
   // Release any successors of the special Entry node.
   ReleaseSuccessors(&EntrySU);
 
-  // All leaves to Available queue.
+  // Add all leaves to Available queue.
   for (unsigned i = 0, e = SUnits.size(); i != e; ++i) {
     // It is available if it has no predecessors.
-    if (SUnits[i].Preds.empty()) {
+    bool available = SUnits[i].Preds.empty();
+    if (available) {
       AvailableQueue.push(&SUnits[i]);
       SUnits[i].isAvailable = true;
     }
@@ -1101,7 +651,6 @@ void SchedulePostRATDList::ListScheduleTopDown() {
           });
 
     SUnit *FoundSUnit = 0;
-
     bool HasNoopHazards = false;
     while (!AvailableQueue.empty()) {
       SUnit *CurSUnit = AvailableQueue.pop();
@@ -1125,8 +674,9 @@ void SchedulePostRATDList::ListScheduleTopDown() {
       NotReady.clear();
     }
 
-    // If we found a node to schedule, do it now.
+    // If we found a node to schedule...
     if (FoundSUnit) {
+      // ... schedule the node...
       ScheduleNodeTopDown(FoundSUnit, CurCycle);
       HazardRec->EmitInstruction(FoundSUnit);
       CycleHasInsts = true;
@@ -1173,6 +723,6 @@ void SchedulePostRATDList::ListScheduleTopDown() {
 //                         Public Constructor Functions
 //===----------------------------------------------------------------------===//
 
-FunctionPass *llvm::createPostRAScheduler() {
-  return new PostRAScheduler();
+FunctionPass *llvm::createPostRAScheduler(CodeGenOpt::Level OptLevel) {
+  return new PostRAScheduler(OptLevel);
 }
diff --git a/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp b/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
index 2e20dc1..8f62345 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
@@ -39,8 +39,10 @@
 using namespace llvm;
 
 static cl::opt<int> PreSplitLimit("pre-split-limit", cl::init(-1), cl::Hidden);
-static cl::opt<int> DeadSplitLimit("dead-split-limit", cl::init(-1), cl::Hidden);
-static cl::opt<int> RestoreFoldLimit("restore-fold-limit", cl::init(-1), cl::Hidden);
+static cl::opt<int> DeadSplitLimit("dead-split-limit", cl::init(-1),
+                                   cl::Hidden);
+static cl::opt<int> RestoreFoldLimit("restore-fold-limit", cl::init(-1),
+                                     cl::Hidden);
 
 STATISTIC(NumSplits, "Number of intervals split");
 STATISTIC(NumRemats, "Number of intervals split by rematerialization");
@@ -50,13 +52,14 @@ STATISTIC(NumRenumbers, "Number of intervals renumbered into new registers");
 STATISTIC(NumDeadSpills, "Number of dead spills removed");
 
 namespace {
-  class VISIBILITY_HIDDEN PreAllocSplitting : public MachineFunctionPass {
+  class PreAllocSplitting : public MachineFunctionPass {
     MachineFunction       *CurrMF;
     const TargetMachine   *TM;
     const TargetInstrInfo *TII;
     const TargetRegisterInfo* TRI;
     MachineFrameInfo      *MFI;
     MachineRegisterInfo   *MRI;
+    SlotIndexes           *SIs;
     LiveIntervals         *LIs;
     LiveStacks            *LSs;
     VirtRegMap            *VRM;
@@ -68,7 +71,7 @@ namespace {
     MachineBasicBlock     *BarrierMBB;
 
     // Barrier - Current barrier index.
-    MachineInstrIndex     BarrierIdx;
+    SlotIndex     BarrierIdx;
 
     // CurrLI - Current live interval being split.
     LiveInterval          *CurrLI;
@@ -83,16 +86,19 @@ namespace {
     DenseMap<unsigned, int> IntervalSSMap;
 
     // Def2SpillMap - A map from a def instruction index to spill index.
-    DenseMap<MachineInstrIndex, MachineInstrIndex> Def2SpillMap;
+    DenseMap<SlotIndex, SlotIndex> Def2SpillMap;
 
   public:
     static char ID;
-    PreAllocSplitting() : MachineFunctionPass(&ID) {}
+    PreAllocSplitting()
+      : MachineFunctionPass(&ID) {}
 
     virtual bool runOnMachineFunction(MachineFunction &MF);
 
     virtual void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.setPreservesCFG();
+      AU.addRequired<SlotIndexes>();
+      AU.addPreserved<SlotIndexes>();
       AU.addRequired<LiveIntervals>();
       AU.addPreserved<LiveIntervals>();
       AU.addRequired<LiveStacks>();
@@ -127,25 +133,22 @@ namespace {
 
 
   private:
-    MachineBasicBlock::iterator
-      findNextEmptySlot(MachineBasicBlock*, MachineInstr*,
-                        MachineInstrIndex&);
 
     MachineBasicBlock::iterator
       findSpillPoint(MachineBasicBlock*, MachineInstr*, MachineInstr*,
-                     SmallPtrSet<MachineInstr*, 4>&, MachineInstrIndex&);
+                     SmallPtrSet<MachineInstr*, 4>&);
 
     MachineBasicBlock::iterator
-      findRestorePoint(MachineBasicBlock*, MachineInstr*, MachineInstrIndex,
-                     SmallPtrSet<MachineInstr*, 4>&, MachineInstrIndex&);
+      findRestorePoint(MachineBasicBlock*, MachineInstr*, SlotIndex,
+                     SmallPtrSet<MachineInstr*, 4>&);
 
     int CreateSpillStackSlot(unsigned, const TargetRegisterClass *);
 
     bool IsAvailableInStack(MachineBasicBlock*, unsigned,
-                            MachineInstrIndex, MachineInstrIndex,
-                            MachineInstrIndex&, int&) const;
+                            SlotIndex, SlotIndex,
+                            SlotIndex&, int&) const;
 
-    void UpdateSpillSlotInterval(VNInfo*, MachineInstrIndex, MachineInstrIndex);
+    void UpdateSpillSlotInterval(VNInfo*, SlotIndex, SlotIndex);
 
     bool SplitRegLiveInterval(LiveInterval*);
 
@@ -157,7 +160,6 @@ namespace {
     bool Rematerialize(unsigned vreg, VNInfo* ValNo,
                        MachineInstr* DefMI,
                        MachineBasicBlock::iterator RestorePt,
-                       MachineInstrIndex RestoreIdx,
                        SmallPtrSet<MachineInstr*, 4>& RefsInMBB);
     MachineInstr* FoldSpill(unsigned vreg, const TargetRegisterClass* RC,
                             MachineInstr* DefMI,
@@ -204,24 +206,6 @@ X("pre-alloc-splitting", "Pre-Register Allocation Live Interval Splitting");
 
 const PassInfo *const llvm::PreAllocSplittingID = &X;
 
-
-/// findNextEmptySlot - Find a gap after the given machine instruction in the
-/// instruction index map. If there isn't one, return end().
-MachineBasicBlock::iterator
-PreAllocSplitting::findNextEmptySlot(MachineBasicBlock *MBB, MachineInstr *MI,
-                                     MachineInstrIndex &SpotIndex) {
-  MachineBasicBlock::iterator MII = MI;
-  if (++MII != MBB->end()) {
-    MachineInstrIndex Index =
-      LIs->findGapBeforeInstr(LIs->getInstructionIndex(MII));
-    if (Index != MachineInstrIndex()) {
-      SpotIndex = Index;
-      return MII;
-    }
-  }
-  return MBB->end();
-}
-
 /// findSpillPoint - Find a gap as far from the given MI as possible that is
 /// suitable for spilling the current live interval. The index must be before
 /// any defs and uses of the live interval register in the mbb. Return begin() if
@@ -229,8 +213,7 @@ PreAllocSplitting::findNextEmptySlot(MachineBasicBlock *MBB, MachineInstr *MI,
 MachineBasicBlock::iterator
 PreAllocSplitting::findSpillPoint(MachineBasicBlock *MBB, MachineInstr *MI,
                                   MachineInstr *DefMI,
-                                  SmallPtrSet<MachineInstr*, 4> &RefsInMBB,
-                                  MachineInstrIndex &SpillIndex) {
+                                  SmallPtrSet<MachineInstr*, 4> &RefsInMBB) {
   MachineBasicBlock::iterator Pt = MBB->begin();
 
   MachineBasicBlock::iterator MII = MI;
@@ -243,8 +226,6 @@ PreAllocSplitting::findSpillPoint(MachineBasicBlock *MBB, MachineInstr *MI,
   if (MII == EndPt || RefsInMBB.count(MII)) return Pt;
     
   while (MII != EndPt && !RefsInMBB.count(MII)) {
-    MachineInstrIndex Index = LIs->getInstructionIndex(MII);
-    
     // We can't insert the spill between the barrier (a call), and its
     // corresponding call frame setup.
     if (MII->getOpcode() == TRI->getCallFrameDestroyOpcode()) {
@@ -255,9 +236,8 @@ PreAllocSplitting::findSpillPoint(MachineBasicBlock *MBB, MachineInstr *MI,
         }
       }
       continue;
-    } else if (LIs->hasGapBeforeInstr(Index)) {
+    } else {
       Pt = MII;
-      SpillIndex = LIs->findGapBeforeInstr(Index, true);
     }
     
     if (RefsInMBB.count(MII))
@@ -276,9 +256,8 @@ PreAllocSplitting::findSpillPoint(MachineBasicBlock *MBB, MachineInstr *MI,
 /// found.
 MachineBasicBlock::iterator
 PreAllocSplitting::findRestorePoint(MachineBasicBlock *MBB, MachineInstr *MI,
-                                    MachineInstrIndex LastIdx,
-                                    SmallPtrSet<MachineInstr*, 4> &RefsInMBB,
-                                    MachineInstrIndex &RestoreIndex) {
+                                    SlotIndex LastIdx,
+                                    SmallPtrSet<MachineInstr*, 4> &RefsInMBB) {
   // FIXME: Allow spill to be inserted to the beginning of the mbb. Update mbb
   // begin index accordingly.
   MachineBasicBlock::iterator Pt = MBB->end();
@@ -299,10 +278,9 @@ PreAllocSplitting::findRestorePoint(MachineBasicBlock *MBB, MachineInstr *MI,
   // FIXME: Limit the number of instructions to examine to reduce
   // compile time?
   while (MII != EndPt) {
-    MachineInstrIndex Index = LIs->getInstructionIndex(MII);
+    SlotIndex Index = LIs->getInstructionIndex(MII);
     if (Index > LastIdx)
       break;
-    MachineInstrIndex Gap = LIs->findGapBeforeInstr(Index);
       
     // We can't insert a restore between the barrier (a call) and its 
     // corresponding call frame teardown.
@@ -311,9 +289,8 @@ PreAllocSplitting::findRestorePoint(MachineBasicBlock *MBB, MachineInstr *MI,
         if (MII == EndPt || RefsInMBB.count(MII)) return Pt;
         ++MII;
       } while (MII->getOpcode() != TRI->getCallFrameDestroyOpcode());
-    } else if (Gap != MachineInstrIndex()) {
+    } else {
       Pt = MII;
-      RestoreIndex = Gap;
     }
     
     if (RefsInMBB.count(MII))
@@ -335,7 +312,7 @@ int PreAllocSplitting::CreateSpillStackSlot(unsigned Reg,
   if (I != IntervalSSMap.end()) {
     SS = I->second;
   } else {
-    SS = MFI->CreateStackObject(RC->getSize(), RC->getAlignment());
+    SS = MFI->CreateSpillStackObject(RC->getSize(), RC->getAlignment());
     IntervalSSMap[Reg] = SS;
   }
 
@@ -344,7 +321,7 @@ int PreAllocSplitting::CreateSpillStackSlot(unsigned Reg,
   if (CurrSLI->hasAtLeastOneValue())
     CurrSValNo = CurrSLI->getValNumInfo(0);
   else
-    CurrSValNo = CurrSLI->getNextValue(MachineInstrIndex(), 0, false,
+    CurrSValNo = CurrSLI->getNextValue(SlotIndex(), 0, false,
                                        LSs->getVNInfoAllocator());
   return SS;
 }
@@ -353,17 +330,17 @@ int PreAllocSplitting::CreateSpillStackSlot(unsigned Reg,
 /// slot at the specified index.
 bool
 PreAllocSplitting::IsAvailableInStack(MachineBasicBlock *DefMBB,
-                                    unsigned Reg, MachineInstrIndex DefIndex,
-                                    MachineInstrIndex RestoreIndex,
-                                    MachineInstrIndex &SpillIndex,
+                                    unsigned Reg, SlotIndex DefIndex,
+                                    SlotIndex RestoreIndex,
+                                    SlotIndex &SpillIndex,
                                     int& SS) const {
   if (!DefMBB)
     return false;
 
-  DenseMap<unsigned, int>::iterator I = IntervalSSMap.find(Reg);
+  DenseMap<unsigned, int>::const_iterator I = IntervalSSMap.find(Reg);
   if (I == IntervalSSMap.end())
     return false;
-  DenseMap<MachineInstrIndex, MachineInstrIndex>::iterator
+  DenseMap<SlotIndex, SlotIndex>::const_iterator
     II = Def2SpillMap.find(DefIndex);
   if (II == Def2SpillMap.end())
     return false;
@@ -384,8 +361,8 @@ PreAllocSplitting::IsAvailableInStack(MachineBasicBlock *DefMBB,
 /// interval being split, and the spill and restore indices, update the live
 /// interval of the spill stack slot.
 void
-PreAllocSplitting::UpdateSpillSlotInterval(VNInfo *ValNo, MachineInstrIndex SpillIndex,
-                                           MachineInstrIndex RestoreIndex) {
+PreAllocSplitting::UpdateSpillSlotInterval(VNInfo *ValNo, SlotIndex SpillIndex,
+                                           SlotIndex RestoreIndex) {
   assert(LIs->getMBBFromIndex(RestoreIndex) == BarrierMBB &&
          "Expect restore in the barrier mbb");
 
@@ -398,8 +375,8 @@ PreAllocSplitting::UpdateSpillSlotInterval(VNInfo *ValNo, MachineInstrIndex Spil
   }
 
   SmallPtrSet<MachineBasicBlock*, 4> Processed;
-  MachineInstrIndex EndIdx = LIs->getMBBEndIdx(MBB);
-  LiveRange SLR(SpillIndex, LIs->getNextSlot(EndIdx), CurrSValNo);
+  SlotIndex EndIdx = LIs->getMBBEndIdx(MBB);
+  LiveRange SLR(SpillIndex, EndIdx.getNextSlot(), CurrSValNo);
   CurrSLI->addRange(SLR);
   Processed.insert(MBB);
 
@@ -418,7 +395,7 @@ PreAllocSplitting::UpdateSpillSlotInterval(VNInfo *ValNo, MachineInstrIndex Spil
     WorkList.pop_back();
     if (Processed.count(MBB))
       continue;
-    MachineInstrIndex Idx = LIs->getMBBStartIdx(MBB);
+    SlotIndex Idx = LIs->getMBBStartIdx(MBB);
     LR = CurrLI->getLiveRangeContaining(Idx);
     if (LR && LR->valno == ValNo) {
       EndIdx = LIs->getMBBEndIdx(MBB);
@@ -428,7 +405,7 @@ PreAllocSplitting::UpdateSpillSlotInterval(VNInfo *ValNo, MachineInstrIndex Spil
         CurrSLI->addRange(SLR);
       } else if (LR->end > EndIdx) {
         // Live range extends beyond end of mbb, process successors.
-        LiveRange SLR(Idx, LIs->getNextIndex(EndIdx), CurrSValNo);
+        LiveRange SLR(Idx, EndIdx.getNextIndex(), CurrSValNo);
         CurrSLI->addRange(SLR);
         for (MachineBasicBlock::succ_iterator SI = MBB->succ_begin(),
                SE = MBB->succ_end(); SI != SE; ++SI)
@@ -491,12 +468,12 @@ PreAllocSplitting::PerformPHIConstruction(MachineBasicBlock::iterator UseI,
     }
     
     // Once we've found it, extend its VNInfo to our instruction.
-    MachineInstrIndex DefIndex = LIs->getInstructionIndex(Walker);
-    DefIndex = LIs->getDefIndex(DefIndex);
-    MachineInstrIndex EndIndex = LIs->getMBBEndIdx(MBB);
+    SlotIndex DefIndex = LIs->getInstructionIndex(Walker);
+    DefIndex = DefIndex.getDefIndex();
+    SlotIndex EndIndex = LIs->getMBBEndIdx(MBB);
     
     RetVNI = NewVNs[Walker];
-    LI->addRange(LiveRange(DefIndex, LIs->getNextSlot(EndIndex), RetVNI));
+    LI->addRange(LiveRange(DefIndex, EndIndex.getNextSlot(), RetVNI));
   } else if (!ContainsDefs && ContainsUses) {
     SmallPtrSet<MachineInstr*, 2>& BlockUses = Uses[MBB];
     
@@ -528,12 +505,12 @@ PreAllocSplitting::PerformPHIConstruction(MachineBasicBlock::iterator UseI,
                                               IsTopLevel, IsIntraBlock);
     }
 
-    MachineInstrIndex UseIndex = LIs->getInstructionIndex(Walker);
-    UseIndex = LIs->getUseIndex(UseIndex);
-    MachineInstrIndex EndIndex;
+    SlotIndex UseIndex = LIs->getInstructionIndex(Walker);
+    UseIndex = UseIndex.getUseIndex();
+    SlotIndex EndIndex;
     if (IsIntraBlock) {
       EndIndex = LIs->getInstructionIndex(UseI);
-      EndIndex = LIs->getUseIndex(EndIndex);
+      EndIndex = EndIndex.getUseIndex();
     } else
       EndIndex = LIs->getMBBEndIdx(MBB);
 
@@ -542,7 +519,7 @@ PreAllocSplitting::PerformPHIConstruction(MachineBasicBlock::iterator UseI,
     RetVNI = PerformPHIConstruction(Walker, MBB, LI, Visited, Defs, Uses,
                                     NewVNs, LiveOut, Phis, false, true);
     
-    LI->addRange(LiveRange(UseIndex, LIs->getNextSlot(EndIndex), RetVNI));
+    LI->addRange(LiveRange(UseIndex, EndIndex.getNextSlot(), RetVNI));
     
     // FIXME: Need to set kills properly for inter-block stuff.
     if (RetVNI->isKill(UseIndex)) RetVNI->removeKill(UseIndex);
@@ -588,13 +565,12 @@ PreAllocSplitting::PerformPHIConstruction(MachineBasicBlock::iterator UseI,
                                               IsTopLevel, IsIntraBlock);
     }
 
-    MachineInstrIndex StartIndex = LIs->getInstructionIndex(Walker);
-    StartIndex = foundDef ? LIs->getDefIndex(StartIndex) :
-                            LIs->getUseIndex(StartIndex);
-    MachineInstrIndex EndIndex;
+    SlotIndex StartIndex = LIs->getInstructionIndex(Walker);
+    StartIndex = foundDef ? StartIndex.getDefIndex() : StartIndex.getUseIndex();
+    SlotIndex EndIndex;
     if (IsIntraBlock) {
       EndIndex = LIs->getInstructionIndex(UseI);
-      EndIndex = LIs->getUseIndex(EndIndex);
+      EndIndex = EndIndex.getUseIndex();
     } else
       EndIndex = LIs->getMBBEndIdx(MBB);
 
@@ -604,7 +580,7 @@ PreAllocSplitting::PerformPHIConstruction(MachineBasicBlock::iterator UseI,
       RetVNI = PerformPHIConstruction(Walker, MBB, LI, Visited, Defs, Uses,
                                       NewVNs, LiveOut, Phis, false, true);
 
-    LI->addRange(LiveRange(StartIndex, LIs->getNextSlot(EndIndex), RetVNI));
+    LI->addRange(LiveRange(StartIndex, EndIndex.getNextSlot(), RetVNI));
     
     if (foundUse && RetVNI->isKill(StartIndex))
       RetVNI->removeKill(StartIndex);
@@ -640,9 +616,9 @@ PreAllocSplitting::PerformPHIConstructionFallBack(MachineBasicBlock::iterator Us
   // assume that we are not intrablock here.
   if (Phis.count(MBB)) return Phis[MBB]; 
 
-  MachineInstrIndex StartIndex = LIs->getMBBStartIdx(MBB);
+  SlotIndex StartIndex = LIs->getMBBStartIdx(MBB);
   VNInfo *RetVNI = Phis[MBB] =
-    LI->getNextValue(MachineInstrIndex(), /*FIXME*/ 0, false,
+    LI->getNextValue(SlotIndex(), /*FIXME*/ 0, false,
                      LIs->getVNInfoAllocator());
 
   if (!IsIntraBlock) LiveOut[MBB] = RetVNI;
@@ -685,19 +661,19 @@ PreAllocSplitting::PerformPHIConstructionFallBack(MachineBasicBlock::iterator Us
     for (DenseMap<MachineBasicBlock*, VNInfo*>::iterator I =
            IncomingVNs.begin(), E = IncomingVNs.end(); I != E; ++I) {
       I->second->setHasPHIKill(true);
-      MachineInstrIndex KillIndex = LIs->getMBBEndIdx(I->first);
+      SlotIndex KillIndex = LIs->getMBBEndIdx(I->first);
       if (!I->second->isKill(KillIndex))
         I->second->addKill(KillIndex);
     }
   }
       
-  MachineInstrIndex EndIndex;
+  SlotIndex EndIndex;
   if (IsIntraBlock) {
     EndIndex = LIs->getInstructionIndex(UseI);
-    EndIndex = LIs->getUseIndex(EndIndex);
+    EndIndex = EndIndex.getUseIndex();
   } else
     EndIndex = LIs->getMBBEndIdx(MBB);
-  LI->addRange(LiveRange(StartIndex, LIs->getNextSlot(EndIndex), RetVNI));
+  LI->addRange(LiveRange(StartIndex, EndIndex.getNextSlot(), RetVNI));
   if (IsIntraBlock)
     RetVNI->addKill(EndIndex);
 
@@ -733,11 +709,11 @@ void PreAllocSplitting::ReconstructLiveInterval(LiveInterval* LI) {
        DE = MRI->def_end(); DI != DE; ++DI) {
     Defs[(*DI).getParent()].insert(&*DI);
     
-    MachineInstrIndex DefIdx = LIs->getInstructionIndex(&*DI);
-    DefIdx = LIs->getDefIndex(DefIdx);
+    SlotIndex DefIdx = LIs->getInstructionIndex(&*DI);
+    DefIdx = DefIdx.getDefIndex();
     
     assert(DI->getOpcode() != TargetInstrInfo::PHI &&
-           "Following NewVN isPHIDef flag incorrect. Fix me!");
+           "PHI instr in code during pre-alloc splitting.");
     VNInfo* NewVN = LI->getNextValue(DefIdx, 0, true, Alloc);
     
     // If the def is a move, set the copy field.
@@ -769,15 +745,33 @@ void PreAllocSplitting::ReconstructLiveInterval(LiveInterval* LI) {
   // Add ranges for dead defs
   for (MachineRegisterInfo::def_iterator DI = MRI->def_begin(LI->reg),
        DE = MRI->def_end(); DI != DE; ++DI) {
-    MachineInstrIndex DefIdx = LIs->getInstructionIndex(&*DI);
-    DefIdx = LIs->getDefIndex(DefIdx);
+    SlotIndex DefIdx = LIs->getInstructionIndex(&*DI);
+    DefIdx = DefIdx.getDefIndex();
     
     if (LI->liveAt(DefIdx)) continue;
     
     VNInfo* DeadVN = NewVNs[&*DI];
-    LI->addRange(LiveRange(DefIdx, LIs->getNextSlot(DefIdx), DeadVN));
+    LI->addRange(LiveRange(DefIdx, DefIdx.getNextSlot(), DeadVN));
     DeadVN->addKill(DefIdx);
   }
+
+  // Update kill markers.
+  for (LiveInterval::vni_iterator VI = LI->vni_begin(), VE = LI->vni_end();
+       VI != VE; ++VI) {
+    VNInfo* VNI = *VI;
+    for (unsigned i = 0, e = VNI->kills.size(); i != e; ++i) {
+      SlotIndex KillIdx = VNI->kills[i];
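+      // PHI kills are not attached to a real instruction; skip them.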
+      if (KillIdx.isPHI())
+        continue;
+      MachineInstr *KillMI = LIs->getInstructionFromIndex(KillIdx);
+      if (KillMI) {
+        MachineOperand *KillMO = KillMI->findRegisterUseOperand(CurrLI->reg);
+        if (KillMO)
+          // It could be a dead def.
+          KillMO->setIsKill();
+      }
+    }
+  }
 }
 
 /// RenumberValno - Split the given valno out into a new vreg, allowing it to
@@ -808,14 +802,14 @@ void PreAllocSplitting::RenumberValno(VNInfo* VN) {
     // Locate two-address redefinitions
     for (VNInfo::KillSet::iterator KI = OldVN->kills.begin(),
          KE = OldVN->kills.end(); KI != KE; ++KI) {
-      assert(!KI->isPHIIndex() &&
+      assert(!KI->isPHI() &&
              "VN previously reported having no PHI kills.");
       MachineInstr* MI = LIs->getInstructionFromIndex(*KI);
       unsigned DefIdx = MI->findRegisterDefOperandIdx(CurrLI->reg);
       if (DefIdx == ~0U) continue;
       if (MI->isRegTiedToUseOperand(DefIdx)) {
         VNInfo* NextVN =
-          CurrLI->findDefinedVNInfoForRegInt(LIs->getDefIndex(*KI));
+          CurrLI->findDefinedVNInfoForRegInt(KI->getDefIndex());
         if (NextVN == OldVN) continue;
         Stack.push_back(NextVN);
       }
@@ -847,10 +841,10 @@ void PreAllocSplitting::RenumberValno(VNInfo* VN) {
   for (MachineRegisterInfo::reg_iterator I = MRI->reg_begin(CurrLI->reg),
          E = MRI->reg_end(); I != E; ++I) {
     MachineOperand& MO = I.getOperand();
-    MachineInstrIndex InstrIdx = LIs->getInstructionIndex(&*I);
+    SlotIndex InstrIdx = LIs->getInstructionIndex(&*I);
     
-    if ((MO.isUse() && NewLI.liveAt(LIs->getUseIndex(InstrIdx))) ||
-        (MO.isDef() && NewLI.liveAt(LIs->getDefIndex(InstrIdx))))
+    if ((MO.isUse() && NewLI.liveAt(InstrIdx.getUseIndex())) ||
+        (MO.isDef() && NewLI.liveAt(InstrIdx.getDefIndex())))
       OpsToChange.push_back(std::make_pair(&*I, I.getOperandNo()));
   }
   
@@ -875,26 +869,23 @@ void PreAllocSplitting::RenumberValno(VNInfo* VN) {
 bool PreAllocSplitting::Rematerialize(unsigned VReg, VNInfo* ValNo,
                                       MachineInstr* DefMI,
                                       MachineBasicBlock::iterator RestorePt,
-                                      MachineInstrIndex RestoreIdx,
                                     SmallPtrSet<MachineInstr*, 4>& RefsInMBB) {
   MachineBasicBlock& MBB = *RestorePt->getParent();
   
   MachineBasicBlock::iterator KillPt = BarrierMBB->end();
-  MachineInstrIndex KillIdx;
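+  // When the def is accurate and outside the barrier block, the value can
+  // be killed right after DefMI; otherwise search the barrier block for a
+  // suitable point.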
   if (!ValNo->isDefAccurate() || DefMI->getParent() == BarrierMBB)
-    KillPt = findSpillPoint(BarrierMBB, Barrier, NULL, RefsInMBB, KillIdx);
+    KillPt = findSpillPoint(BarrierMBB, Barrier, NULL, RefsInMBB);
   else
-    KillPt = findNextEmptySlot(DefMI->getParent(), DefMI, KillIdx);
+    KillPt = next(MachineBasicBlock::iterator(DefMI));
   
   if (KillPt == DefMI->getParent()->end())
     return false;
   
-  TII->reMaterialize(MBB, RestorePt, VReg, 0, DefMI);
-  LIs->InsertMachineInstrInMaps(prior(RestorePt), RestoreIdx);
+  TII->reMaterialize(MBB, RestorePt, VReg, 0, DefMI, TRI);
+  SlotIndex RematIdx = LIs->InsertMachineInstrInMaps(prior(RestorePt));
   
   ReconstructLiveInterval(CurrLI);
-  MachineInstrIndex RematIdx = LIs->getInstructionIndex(prior(RestorePt));
-  RematIdx = LIs->getDefIndex(RematIdx);
+  RematIdx = RematIdx.getDefIndex();
   RenumberValno(CurrLI->findDefinedVNInfoForRegInt(RematIdx));
   
   ++NumSplits;
@@ -934,7 +925,7 @@ MachineInstr* PreAllocSplitting::FoldSpill(unsigned vreg,
   if (I != IntervalSSMap.end()) {
     SS = I->second;
   } else {
-    SS = MFI->CreateStackObject(RC->getSize(), RC->getAlignment());    
+    SS = MFI->CreateSpillStackObject(RC->getSize(), RC->getAlignment());
   }
   
   MachineInstr* FMI = TII->foldMemoryOperand(*MBB->getParent(),
@@ -950,7 +941,7 @@ MachineInstr* PreAllocSplitting::FoldSpill(unsigned vreg,
     if (CurrSLI->hasAtLeastOneValue())
       CurrSValNo = CurrSLI->getValNumInfo(0);
     else
-      CurrSValNo = CurrSLI->getNextValue(MachineInstrIndex(), 0, false,
+      CurrSValNo = CurrSLI->getNextValue(SlotIndex(), 0, false,
                                          LSs->getVNInfoAllocator());
   }
   
@@ -1034,11 +1025,14 @@ MachineInstr* PreAllocSplitting::FoldRestore(unsigned vreg,
 /// so it would not cross the barrier that's being processed. Shrink wrap
 /// (minimize) the live interval to the last uses.
 bool PreAllocSplitting::SplitRegLiveInterval(LiveInterval *LI) {
+  DEBUG(errs() << "Pre-alloc splitting " << LI->reg << " for " << *Barrier
+               << "  result: ");
+
   CurrLI = LI;
 
   // Find the live range where the current interval crosses the barrier.
   LiveInterval::iterator LR =
-    CurrLI->FindLiveRangeContaining(LIs->getUseIndex(BarrierIdx));
+    CurrLI->FindLiveRangeContaining(BarrierIdx.getUseIndex());
   VNInfo *ValNo = LR->valno;
 
   assert(!ValNo->isUnused() && "Val# is defined by a dead def?");
@@ -1047,8 +1041,10 @@ bool PreAllocSplitting::SplitRegLiveInterval(LiveInterval *LI) {
     ? LIs->getInstructionFromIndex(ValNo->def) : NULL;
 
   // If this would create a new join point, do not split.
-  if (DefMI && createsNewJoin(LR, DefMI->getParent(), Barrier->getParent()))
+  if (DefMI && createsNewJoin(LR, DefMI->getParent(), Barrier->getParent())) {
+    DEBUG(errs() << "FAILED (would create a new join point).\n");
     return false;
+  }
 
   // Find all references in the barrier mbb.
   SmallPtrSet<MachineInstr*, 4> RefsInMBB;
@@ -1060,21 +1056,23 @@ bool PreAllocSplitting::SplitRegLiveInterval(LiveInterval *LI) {
   }
 
   // Find a point to restore the value after the barrier.
-  MachineInstrIndex RestoreIndex;
   MachineBasicBlock::iterator RestorePt =
-    findRestorePoint(BarrierMBB, Barrier, LR->end, RefsInMBB, RestoreIndex);
-  if (RestorePt == BarrierMBB->end())
+    findRestorePoint(BarrierMBB, Barrier, LR->end, RefsInMBB);
+  if (RestorePt == BarrierMBB->end()) {
+    DEBUG(errs() << "FAILED (could not find a suitable restore point).\n");
     return false;
+  }
 
   if (DefMI && LIs->isReMaterializable(*LI, ValNo, DefMI))
-    if (Rematerialize(LI->reg, ValNo, DefMI, RestorePt,
-                      RestoreIndex, RefsInMBB))
-    return true;
+    if (Rematerialize(LI->reg, ValNo, DefMI, RestorePt, RefsInMBB)) {
+      DEBUG(errs() << "success (remat).\n");
+      return true;
+    }
 
   // Add a spill either before the barrier or after the definition.
   MachineBasicBlock *DefMBB = DefMI ? DefMI->getParent() : NULL;
   const TargetRegisterClass *RC = MRI->getRegClass(CurrLI->reg);
-  MachineInstrIndex SpillIndex;
+  SlotIndex SpillIndex;
   MachineInstr *SpillMI = NULL;
   int SS = -1;
   if (!ValNo->isDefAccurate()) {
@@ -1084,25 +1082,29 @@ bool PreAllocSplitting::SplitRegLiveInterval(LiveInterval *LI) {
       SpillIndex = LIs->getInstructionIndex(SpillMI);
     } else {
       MachineBasicBlock::iterator SpillPt = 
-        findSpillPoint(BarrierMBB, Barrier, NULL, RefsInMBB, SpillIndex);
-      if (SpillPt == BarrierMBB->begin())
+        findSpillPoint(BarrierMBB, Barrier, NULL, RefsInMBB);
+      if (SpillPt == BarrierMBB->begin()) {
+        DEBUG(errs() << "FAILED (could not find a suitable spill point).\n");
         return false; // No gap to insert spill.
+      }
       // Add spill.
     
       SS = CreateSpillStackSlot(CurrLI->reg, RC);
       TII->storeRegToStackSlot(*BarrierMBB, SpillPt, CurrLI->reg, true, SS, RC);
       SpillMI = prior(SpillPt);
-      LIs->InsertMachineInstrInMaps(SpillMI, SpillIndex);
+      SpillIndex = LIs->InsertMachineInstrInMaps(SpillMI);
     }
   } else if (!IsAvailableInStack(DefMBB, CurrLI->reg, ValNo->def,
-                                 RestoreIndex, SpillIndex, SS)) {
+                                 LIs->getZeroIndex(), SpillIndex, SS)) {
     // If it's already split, just restore the value. There is no need to spill
     // the def again.
-    if (!DefMI)
+    if (!DefMI) {
+      DEBUG(errs() << "FAILED (def is dead).\n");
       return false; // Def is dead. Do nothing.
+    }
     
     if ((SpillMI = FoldSpill(LI->reg, RC, DefMI, Barrier,
-                            BarrierMBB, SS, RefsInMBB))) {
+                             BarrierMBB, SS, RefsInMBB))) {
       SpillIndex = LIs->getInstructionIndex(SpillMI);
     } else {
       // Check if it's possible to insert a spill after the def MI.
@@ -1110,21 +1112,23 @@ bool PreAllocSplitting::SplitRegLiveInterval(LiveInterval *LI) {
       if (DefMBB == BarrierMBB) {
         // Add spill after the def and the last use before the barrier.
         SpillPt = findSpillPoint(BarrierMBB, Barrier, DefMI,
-                                 RefsInMBB, SpillIndex);
-        if (SpillPt == DefMBB->begin())
+                                 RefsInMBB);
+        if (SpillPt == DefMBB->begin()) {
+          DEBUG(errs() << "FAILED (could not find a suitable spill point).\n");
           return false; // No gap to insert spill.
+        }
       } else {
-        SpillPt = findNextEmptySlot(DefMBB, DefMI, SpillIndex);
-        if (SpillPt == DefMBB->end())
+        SpillPt = next(MachineBasicBlock::iterator(DefMI));
+        if (SpillPt == DefMBB->end()) {
+          DEBUG(errs() << "FAILED (could not find a suitable spill point).\n");
           return false; // No gap to insert spill.
+        }
       }
-      // Add spill. The store instruction kills the register if def is before
-      // the barrier in the barrier block.
+      // Add spill. 
       SS = CreateSpillStackSlot(CurrLI->reg, RC);
-      TII->storeRegToStackSlot(*DefMBB, SpillPt, CurrLI->reg,
-                               DefMBB == BarrierMBB, SS, RC);
+      TII->storeRegToStackSlot(*DefMBB, SpillPt, CurrLI->reg, false, SS, RC);
       SpillMI = prior(SpillPt);
-      LIs->InsertMachineInstrInMaps(SpillMI, SpillIndex);
+      SpillIndex = LIs->InsertMachineInstrInMaps(SpillMI);
     }
   }
 
@@ -1134,6 +1138,7 @@ bool PreAllocSplitting::SplitRegLiveInterval(LiveInterval *LI) {
 
   // Add restore.
   bool FoldedRestore = false;
+  SlotIndex RestoreIndex;
   if (MachineInstr* LMI = FoldRestore(CurrLI->reg, RC, Barrier,
                                       BarrierMBB, SS, RefsInMBB)) {
     RestorePt = LMI;
@@ -1142,22 +1147,23 @@ bool PreAllocSplitting::SplitRegLiveInterval(LiveInterval *LI) {
   } else {
     TII->loadRegFromStackSlot(*BarrierMBB, RestorePt, CurrLI->reg, SS, RC);
     MachineInstr *LoadMI = prior(RestorePt);
-    LIs->InsertMachineInstrInMaps(LoadMI, RestoreIndex);
+    RestoreIndex = LIs->InsertMachineInstrInMaps(LoadMI);
   }
 
   // Update spill stack slot live interval.
-  UpdateSpillSlotInterval(ValNo, LIs->getNextSlot(LIs->getUseIndex(SpillIndex)),
-                          LIs->getDefIndex(RestoreIndex));
+  UpdateSpillSlotInterval(ValNo, SpillIndex.getUseIndex().getNextSlot(),
+                          RestoreIndex.getDefIndex());
 
   ReconstructLiveInterval(CurrLI);
-  
+
   if (!FoldedRestore) {
-    MachineInstrIndex RestoreIdx = LIs->getInstructionIndex(prior(RestorePt));
-    RestoreIdx = LIs->getDefIndex(RestoreIdx);
+    SlotIndex RestoreIdx = LIs->getInstructionIndex(prior(RestorePt));
+    RestoreIdx = RestoreIdx.getDefIndex();
     RenumberValno(CurrLI->findDefinedVNInfoForRegInt(RestoreIdx));
   }
   
   ++NumSplits;
+  DEBUG(errs() << "success.\n");
   return true;
 }
 
@@ -1193,8 +1199,6 @@ PreAllocSplitting::SplitRegLiveIntervals(const TargetRegisterClass **RCs,
   while (!Intervals.empty()) {
     if (PreSplitLimit != -1 && (int)NumSplits == PreSplitLimit)
       break;
-    else if (NumSplits == 4)
-      Change |= Change;
     LiveInterval *LI = Intervals.back();
     Intervals.pop_back();
     bool result = SplitRegLiveInterval(LI);
@@ -1240,8 +1244,8 @@ bool PreAllocSplitting::removeDeadSpills(SmallPtrSet<LiveInterval*, 8>& split) {
     // reaching definition (VNInfo).
     for (MachineRegisterInfo::use_iterator UI = MRI->use_begin((*LI)->reg),
          UE = MRI->use_end(); UI != UE; ++UI) {
-      MachineInstrIndex index = LIs->getInstructionIndex(&*UI);
-      index = LIs->getUseIndex(index);
+      SlotIndex index = LIs->getInstructionIndex(&*UI);
+      index = index.getUseIndex();
       
       const LiveRange* LR = (*LI)->getLiveRangeContaining(index);
       VNUseCount[LR->valno].insert(&*UI);
@@ -1363,7 +1367,7 @@ bool PreAllocSplitting::removeDeadSpills(SmallPtrSet<LiveInterval*, 8>& split) {
       // Otherwise, this is a load-store case, so DCE them.
       for (SmallPtrSet<MachineInstr*, 4>::iterator UI = 
            VNUseCount[CurrVN].begin(), UE = VNUseCount[CurrVN].end();
-           UI != UI; ++UI) {
+           UI != UE; ++UI) {
         LIs->RemoveMachineInstrFromMaps(*UI);
         (*UI)->eraseFromParent();
       }
@@ -1390,7 +1394,7 @@ bool PreAllocSplitting::createsNewJoin(LiveRange* LR,
   if (LR->valno->hasPHIKill())
     return false;
   
-  MachineInstrIndex MBBEnd = LIs->getMBBEndIdx(BarrierMBB);
+  SlotIndex MBBEnd = LIs->getMBBEndIdx(BarrierMBB);
   if (LR->end < MBBEnd)
     return false;
   
@@ -1453,6 +1457,7 @@ bool PreAllocSplitting::runOnMachineFunction(MachineFunction &MF) {
   TII    = TM->getInstrInfo();
   MFI    = MF.getFrameInfo();
   MRI    = &MF.getRegInfo();
+  SIs    = &getAnalysis<SlotIndexes>();
   LIs    = &getAnalysis<LiveIntervals>();
   LSs    = &getAnalysis<LiveStacks>();
   VRM    = &getAnalysis<VirtRegMap>();
diff --git a/libclamav/c++/llvm/lib/CodeGen/ProcessImplicitDefs.cpp b/libclamav/c++/llvm/lib/CodeGen/ProcessImplicitDefs.cpp
new file mode 100644
index 0000000..c9a33d8
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/ProcessImplicitDefs.cpp
@@ -0,0 +1,275 @@
+//===---------------------- ProcessImplicitDefs.cpp -----------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "processimplicitdefs"
+
+#include "llvm/CodeGen/ProcessImplicitDefs.h"
+
+#include "llvm/ADT/DepthFirstIterator.h"
+#include "llvm/ADT/SmallSet.h"
+#include "llvm/Analysis/AliasAnalysis.h"
+#include "llvm/CodeGen/LiveVariables.h"
+#include "llvm/CodeGen/MachineInstr.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/Passes.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/Target/TargetRegisterInfo.h"
+
+
+using namespace llvm;
+
+char ProcessImplicitDefs::ID = 0;
+static RegisterPass<ProcessImplicitDefs> X("processimpdefs",
+                                           "Process Implicit Definitions.");
+
+void ProcessImplicitDefs::getAnalysisUsage(AnalysisUsage &AU) const {
+  AU.setPreservesCFG();
+  AU.addPreserved<AliasAnalysis>();
+  AU.addPreserved<LiveVariables>();
+  AU.addRequired<LiveVariables>();
+  AU.addPreservedID(MachineLoopInfoID);
+  AU.addPreservedID(MachineDominatorsID);
+  AU.addPreservedID(TwoAddressInstructionPassID);
+  AU.addPreservedID(PHIEliminationID);
+  MachineFunctionPass::getAnalysisUsage(AU);
+}
+
+bool ProcessImplicitDefs::CanTurnIntoImplicitDef(MachineInstr *MI,
+                                                 unsigned Reg, unsigned OpIdx,
+                                                 const TargetInstrInfo *tii_) {
+  unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
+  if (tii_->isMoveInstr(*MI, SrcReg, DstReg, SrcSubReg, DstSubReg) &&
+      Reg == SrcReg)
+    return true;
+
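+  // SUBREG_TO_REG reads the value being inserted as operand 2, and
+  // EXTRACT_SUBREG reads its source register as operand 1; a source that is
+  // only defined by an implicit_def leaves the result undefined as well.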
+  if (OpIdx == 2 && MI->getOpcode() == TargetInstrInfo::SUBREG_TO_REG)
+    return true;
+  if (OpIdx == 1 && MI->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG)
+    return true;
+  return false;
+}
+
+/// processImplicitDefs - Process IMPLICIT_DEF instructions and make sure
+/// there is one implicit_def for each use. Add an isUndef marker to
+/// implicit_def defs and their uses.
+bool ProcessImplicitDefs::runOnMachineFunction(MachineFunction &fn) {
+
+  DEBUG(errs() << "********** PROCESS IMPLICIT DEFS **********\n"
+               << "********** Function: "
+               << ((Value*)fn.getFunction())->getName() << '\n');
+
+  bool Changed = false;
+
+  const TargetInstrInfo *tii_ = fn.getTarget().getInstrInfo();
+  const TargetRegisterInfo *tri_ = fn.getTarget().getRegisterInfo();
+  MachineRegisterInfo *mri_ = &fn.getRegInfo();
+
+  LiveVariables *lv_ = &getAnalysis<LiveVariables>();
+
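+  // ImpDefRegs tracks registers whose only reaching definition so far is an
+  // implicit_def; ImpDefMIs remembers the implicit_def instructions
+  // themselves so any that are live out can be handled after the block scan.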
+  SmallSet<unsigned, 8> ImpDefRegs;
+  SmallVector<MachineInstr*, 8> ImpDefMIs;
+  SmallVector<MachineInstr*, 4> RUses;
+  SmallPtrSet<MachineBasicBlock*,16> Visited;
+  SmallPtrSet<MachineInstr*, 8> ModInsts;
+
+  MachineBasicBlock *Entry = fn.begin();
+  for (df_ext_iterator<MachineBasicBlock*, SmallPtrSet<MachineBasicBlock*,16> >
+         DFI = df_ext_begin(Entry, Visited), E = df_ext_end(Entry, Visited);
+       DFI != E; ++DFI) {
+    MachineBasicBlock *MBB = *DFI;
+    for (MachineBasicBlock::iterator I = MBB->begin(), E = MBB->end();
+         I != E; ) {
+      MachineInstr *MI = &*I;
+      ++I;
+      if (MI->getOpcode() == TargetInstrInfo::IMPLICIT_DEF) {
+        unsigned Reg = MI->getOperand(0).getReg();
+        ImpDefRegs.insert(Reg);
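+        // An implicit_def of a physical register leaves its subregisters
+        // undefined as well, so track those too.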
+        if (TargetRegisterInfo::isPhysicalRegister(Reg)) {
+          for (const unsigned *SS = tri_->getSubRegisters(Reg); *SS; ++SS)
+            ImpDefRegs.insert(*SS);
+        }
+        ImpDefMIs.push_back(MI);
+        continue;
+      }
+
+      if (MI->getOpcode() == TargetInstrInfo::INSERT_SUBREG) {
+        MachineOperand &MO = MI->getOperand(2);
+        if (ImpDefRegs.count(MO.getReg())) {
+          // %reg1032<def> = INSERT_SUBREG %reg1032, undef, 2
+          // This is an identity copy, eliminate it now.
+          if (MO.isKill()) {
+            LiveVariables::VarInfo& vi = lv_->getVarInfo(MO.getReg());
+            vi.removeKill(MI);
+          }
+          MI->eraseFromParent();
+          Changed = true;
+          continue;
+        }
+      }
+
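+      // Examine the register uses: a use fed only by an implicit_def either
+      // turns the whole instruction into an implicit_def (when it is a plain
+      // copy) or simply gets marked undef.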
+      bool ChangedToImpDef = false;
+      for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+        MachineOperand& MO = MI->getOperand(i);
+        if (!MO.isReg() || !MO.isUse() || MO.isUndef())
+          continue;
+        unsigned Reg = MO.getReg();
+        if (!Reg)
+          continue;
+        if (!ImpDefRegs.count(Reg))
+          continue;
+        // If the use is a copy, just turn it into an implicit_def.
+        if (CanTurnIntoImplicitDef(MI, Reg, i, tii_)) {
+          bool isKill = MO.isKill();
+          MI->setDesc(tii_->get(TargetInstrInfo::IMPLICIT_DEF));
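+          // Strip every operand except the def, leaving a bare implicit_def.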
+          for (int j = MI->getNumOperands() - 1, ee = 0; j > ee; --j)
+            MI->RemoveOperand(j);
+          if (isKill) {
+            ImpDefRegs.erase(Reg);
+            LiveVariables::VarInfo& vi = lv_->getVarInfo(Reg);
+            vi.removeKill(MI);
+          }
+          ChangedToImpDef = true;
+          Changed = true;
+          break;
+        }
+
+        Changed = true;
+        MO.setIsUndef();
+        if (MO.isKill() || MI->isRegTiedToDefOperand(i)) {
+          // Make sure any other uses of this register in the instruction
+          // are marked undef as well.
+          for (unsigned j = i+1; j != e; ++j) {
+            MachineOperand &MOJ = MI->getOperand(j);
+            if (MOJ.isReg() && MOJ.isUse() && MOJ.getReg() == Reg)
+              MOJ.setIsUndef();
+          }
+          ImpDefRegs.erase(Reg);
+        }
+      }
+
+      if (ChangedToImpDef) {
+        // Backtrack to process this new implicit_def.
+        --I;
+      } else {
+        for (unsigned i = 0; i != MI->getNumOperands(); ++i) {
+          MachineOperand& MO = MI->getOperand(i);
+          if (!MO.isReg() || !MO.isDef())
+            continue;
+          ImpDefRegs.erase(MO.getReg());
+        }
+      }
+    }
+
+    // Any outstanding liveout implicit_def's?
+    for (unsigned i = 0, e = ImpDefMIs.size(); i != e; ++i) {
+      MachineInstr *MI = ImpDefMIs[i];
+      unsigned Reg = MI->getOperand(0).getReg();
+      if (TargetRegisterInfo::isPhysicalRegister(Reg) ||
+          !ImpDefRegs.count(Reg)) {
+        // Delete all "local" implicit_def's. That include those which define
+        // physical registers since they cannot be liveout.
+        MI->eraseFromParent();
+        Changed = true;
+        continue;
+      }
+
+      // If there are multiple defs of the same register and at least one
+      // is not an implicit_def, do not insert implicit_def's before the
+      // uses.
+      bool Skip = false;
+      SmallVector<MachineInstr*, 4> DeadImpDefs;
+      for (MachineRegisterInfo::def_iterator DI = mri_->def_begin(Reg),
+             DE = mri_->def_end(); DI != DE; ++DI) {
+        MachineInstr *DeadImpDef = &*DI;
+        if (DeadImpDef->getOpcode() != TargetInstrInfo::IMPLICIT_DEF) {
+          Skip = true;
+          break;
+        }
+        DeadImpDefs.push_back(DeadImpDef);
+      }
+      if (Skip)
+        continue;
+
+      // The only implicit_def instructions we want to keep are those that
+      // are live out of their block.
+      for (unsigned j = 0, ee = DeadImpDefs.size(); j != ee; ++j)
+        DeadImpDefs[j]->eraseFromParent();
+      Changed = true;
+
+      // Process each use instruction once.
+      for (MachineRegisterInfo::use_iterator UI = mri_->use_begin(Reg),
+             UE = mri_->use_end(); UI != UE; ++UI) {
+        MachineInstr *RMI = &*UI;
+        MachineBasicBlock *RMBB = RMI->getParent();
+        if (RMBB == MBB)
+          continue;
+        if (ModInsts.insert(RMI))
+          RUses.push_back(RMI);
+      }
+
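+      // Rewrite each remembered out-of-block use: a plain copy becomes an
+      // implicit_def itself, and anything else is switched to a fresh
+      // virtual register whose uses are marked undef.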
+      for (unsigned i = 0, e = RUses.size(); i != e; ++i) {
+        MachineInstr *RMI = RUses[i];
+
+        // Turn a copy use into an implicit_def.
+        unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
+        if (tii_->isMoveInstr(*RMI, SrcReg, DstReg, SrcSubReg, DstSubReg) &&
+            Reg == SrcReg) {
+          RMI->setDesc(tii_->get(TargetInstrInfo::IMPLICIT_DEF));
+
+          bool isKill = false;
+          SmallVector<unsigned, 4> Ops;
+          for (unsigned j = 0, ee = RMI->getNumOperands(); j != ee; ++j) {
+            MachineOperand &RRMO = RMI->getOperand(j);
+            if (RRMO.isReg() && RRMO.getReg() == Reg) {
+              Ops.push_back(j);
+              if (RRMO.isKill())
+                isKill = true;
+            }
+          }
+          // Leave the other operands alone.
+          for (unsigned j = 0, ee = Ops.size(); j != ee; ++j) {
+            unsigned OpIdx = Ops[j];
+            RMI->RemoveOperand(OpIdx-j);
+          }
+
+          // Update LiveVariables varinfo if the instruction is a kill.
+          if (isKill) {
+            LiveVariables::VarInfo& vi = lv_->getVarInfo(Reg);
+            vi.removeKill(RMI);
+          }
+          continue;
+        }
+
+        // Replace Reg with a new vreg that's marked implicit.
+        const TargetRegisterClass* RC = mri_->getRegClass(Reg);
+        unsigned NewVReg = mri_->createVirtualRegister(RC);
+        bool isKill = true;
+        for (unsigned j = 0, ee = RMI->getNumOperands(); j != ee; ++j) {
+          MachineOperand &RRMO = RMI->getOperand(j);
+          if (RRMO.isReg() && RRMO.getReg() == Reg) {
+            RRMO.setReg(NewVReg);
+            RRMO.setIsUndef();
+            if (isKill) {
+              // Only the first operand of NewVReg is marked kill.
+              RRMO.setIsKill();
+              isKill = false;
+            }
+          }
+        }
+      }
+      RUses.clear();
+    }
+    ModInsts.clear();
+    ImpDefRegs.clear();
+    ImpDefMIs.clear();
+  }
+
+  return Changed;
+}
+
diff --git a/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp b/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp
index 51c78a1..8905f75 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp
@@ -44,16 +44,6 @@ char PEI::ID = 0;
 static RegisterPass<PEI>
 X("prologepilog", "Prologue/Epilogue Insertion");
 
-// FIXME: For now, the frame index scavenging is off by default and only
-// used by the Thumb1 target. When it's the default and replaces the current
-// on-the-fly PEI scavenging for all targets, requiresRegisterScavenging()
-// will replace this.
-cl::opt<bool>
-FrameIndexVirtualScavenging("enable-frame-index-scavenging",
-                            cl::Hidden,
-                            cl::desc("Enable frame index elimination with"
-                                     "virtual register scavenging"));
-
 /// createPrologEpilogCodeInserter - This function returns a pass that inserts
 /// prolog and epilog code, and eliminates abstract frame references.
 ///
@@ -66,6 +56,7 @@ bool PEI::runOnMachineFunction(MachineFunction &Fn) {
   const Function* F = Fn.getFunction();
   const TargetRegisterInfo *TRI = Fn.getTarget().getRegisterInfo();
   RS = TRI->requiresRegisterScavenging(Fn) ? new RegScavenger() : NULL;
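+  // Whether to do frame index scavenging is now a per-target decision
+  // rather than a command line option.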
+  FrameIndexVirtualScavenging = TRI->requiresFrameIndexScavenging(Fn);
 
   // Get MachineModuleInfo so that we can track the construction of the
   // frame.
@@ -268,12 +259,13 @@ void PEI::calculateCalleeSavedRegisters(MachineFunction &Fn) {
       // the TargetRegisterClass if the stack alignment is smaller. Use the
       // min.
       Align = std::min(Align, StackAlign);
-      FrameIdx = FFI->CreateStackObject(RC->getSize(), Align);
+      FrameIdx = FFI->CreateStackObject(RC->getSize(), Align, true);
       if ((unsigned)FrameIdx < MinCSFrameIndex) MinCSFrameIndex = FrameIdx;
       if ((unsigned)FrameIdx > MaxCSFrameIndex) MaxCSFrameIndex = FrameIdx;
     } else {
       // Spill it to the stack where we must.
-      FrameIdx = FFI->CreateFixedObject(RC->getSize(), FixedSlot->Offset);
+      FrameIdx = FFI->CreateFixedObject(RC->getSize(), FixedSlot->Offset,
+                                        true, false);
     }
 
     I->setFrameIdx(FrameIdx);
@@ -551,7 +543,7 @@ void PEI::calculateFrameObjectOffsets(MachineFunction &Fn) {
   // Make sure the special register scavenging spill slot is closest to the
   // frame pointer if a frame pointer is required.
   const TargetRegisterInfo *RegInfo = Fn.getTarget().getRegisterInfo();
-  if (RS && RegInfo->hasFP(Fn)) {
+  if (RS && RegInfo->hasFP(Fn) && !RegInfo->needsStackRealignment(Fn)) {
     int SFI = RS->getScavengingFrameIndex();
     if (SFI >= 0)
       AdjustStackOffset(FFI, SFI, StackGrowsDown, Offset, MaxAlign);
@@ -580,7 +572,7 @@ void PEI::calculateFrameObjectOffsets(MachineFunction &Fn) {
 
   // Make sure the special register scavenging spill slot is closest to the
   // stack pointer.
-  if (RS && !RegInfo->hasFP(Fn)) {
+  if (RS && (!RegInfo->hasFP(Fn) || RegInfo->needsStackRealignment(Fn))) {
     int SFI = RS->getScavengingFrameIndex();
     if (SFI >= 0)
       AdjustStackOffset(FFI, SFI, StackGrowsDown, Offset, MaxAlign);
@@ -703,9 +695,16 @@ void PEI::replaceFrameIndices(MachineFunction &Fn) {
           // If this instruction has a FrameIndex operand, we need to
           // use that target machine register info object to eliminate
           // it.
-
-          TRI.eliminateFrameIndex(MI, SPAdj, FrameIndexVirtualScavenging ?
-                                  NULL : RS);
+          int Value;
+          unsigned VReg =
+            TRI.eliminateFrameIndex(MI, SPAdj, &Value,
+                                    FrameIndexVirtualScavenging ?  NULL : RS);
+          if (VReg) {
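+            // The target materialized the frame address into a virtual
+            // register; record the value it holds so scavengeFrameVirtualRegs
+            // can later re-use a scratch register that already contains it.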
+            assert (FrameIndexVirtualScavenging &&
+                    "Not scavenging, but virtual returned from "
+                    "eliminateFrameIndex()!");
+            FrameConstantRegMap[VReg] = FrameConstantEntry(Value, SPAdj);
+          }
 
           // Reset the iterator if we were at the beginning of the BB.
           if (AtBeginning) {
@@ -727,6 +726,38 @@ void PEI::replaceFrameIndices(MachineFunction &Fn) {
   }
 }
 
+/// findLastUseReg - find the killing use of the specified register within
+/// the instruction range and return an iterator to it.
+static MachineBasicBlock::iterator
+findLastUseReg(MachineBasicBlock::iterator I, MachineBasicBlock::iterator ME,
+               unsigned Reg) {
+  // Scan forward to find the last use of this virtual register
+  for (++I; I != ME; ++I) {
+    MachineInstr *MI = I;
+    bool isDefInsn = false;
+    bool isKillInsn = false;
+    for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i)
+      if (MI->getOperand(i).isReg()) {
+        unsigned OpReg = MI->getOperand(i).getReg();
+        if (OpReg == 0 || !TargetRegisterInfo::isVirtualRegister(OpReg))
+          continue;
+        assert (OpReg == Reg
+                && "overlapping use of scavenged index register!");
+        // If this is the killing use, we have a candidate.
+        if (MI->getOperand(i).isKill())
+          isKillInsn = true;
+        else if (MI->getOperand(i).isDef())
+          isDefInsn = true;
+      }
+    if (isKillInsn && !isDefInsn)
+      return I;
+  }
+  // If we hit the end of the basic block, there was no kill of
+  // the virtual register, which is wrong.
+  assert (0 && "scavenged index register never killed!");
+  return ME;
+}
+
 /// scavengeFrameVirtualRegs - Replace all frame index virtual registers
 /// with physical registers. Use the register scavenger to find an
 /// appropriate register to use.
@@ -736,25 +767,57 @@ void PEI::scavengeFrameVirtualRegs(MachineFunction &Fn) {
        E = Fn.end(); BB != E; ++BB) {
     RS->enterBasicBlock(BB);
 
+    // FIXME: The logic flow in this function is still too convoluted.
+    // It needs a cleanup refactoring. Do that in preparation for tracking
+    // more than one scratch register value and using ranges to find
+    // available scratch registers.
     unsigned CurrentVirtReg = 0;
     unsigned CurrentScratchReg = 0;
-
-    for (MachineBasicBlock::iterator I = BB->begin(); I != BB->end(); ++I) {
+    bool havePrevValue = false;
+    int PrevValue = 0;
+    MachineInstr *PrevLastUseMI = NULL;
+    unsigned PrevLastUseOp = 0;
+    bool trackingCurrentValue = false;
+    int SPAdj = 0;
+    int Value = 0;
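+    // havePrevValue/PrevValue describe what the previous scratch register
+    // still holds, and PrevLastUseMI/PrevLastUseOp mark its final use so its
+    // live range can be extended when that value is needed again.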
+
+    // The instruction stream may change in the loop, so check BB->end()
+    // directly.
+    for (MachineBasicBlock::iterator I = BB->begin(); I != BB->end(); ) {
       MachineInstr *MI = I;
-      for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i)
+      bool isDefInsn = false;
+      bool isKillInsn = false;
+      bool clobbersScratchReg = false;
+      bool DoIncr = true;
+      for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
         if (MI->getOperand(i).isReg()) {
-          unsigned Reg = MI->getOperand(i).getReg();
+          MachineOperand &MO = MI->getOperand(i);
+          unsigned Reg = MO.getReg();
           if (Reg == 0)
             continue;
           if (!TargetRegisterInfo::isVirtualRegister(Reg)) {
-            // If we have an active scavenged register, we shouldn't be
-            // seeing any references to it.
-            assert (Reg != CurrentScratchReg
-                    && "overlapping use of scavenged frame index register!");
+            // If we have a previous scratch reg, check and see if anything
+            // here kills whatever value is in there.
+            if (Reg == CurrentScratchReg) {
+              if (MO.isUse()) {
+                // Two-address operands implicitly kill
+                if (MO.isKill() || MI->isRegTiedToDefOperand(i))
+                  clobbersScratchReg = true;
+              } else {
+                assert (MO.isDef());
+                clobbersScratchReg = true;
+              }
+            }
             continue;
           }
-
-          // If we already have a scratch for this virtual register, use it
+          // If this is a def, remember that this insn defines the value.
+          // This lets us properly consider insns which re-use the scratch
+          // register, such as r2 = sub r2, #imm, in the middle of the
+          // scratch range.
+          if (MO.isDef())
+            isDefInsn = true;
+
+          // Have we already allocated a scratch register for this virtual?
           if (Reg != CurrentVirtReg) {
             // When we first encounter a new virtual register, it
             // must be a definition.
@@ -764,22 +827,94 @@ void PEI::scavengeFrameVirtualRegs(MachineFunction &Fn) {
             // there's only a guarantee of one scavenged register at a time.
             assert (CurrentVirtReg == 0 &&
                     "overlapping frame index virtual registers!");
+
+            // If the target gave us information about what's in the register,
+            // we can use that to re-use scratch regs.
+            DenseMap<unsigned, FrameConstantEntry>::iterator Entry =
+              FrameConstantRegMap.find(Reg);
+            trackingCurrentValue = Entry != FrameConstantRegMap.end();
+            if (trackingCurrentValue) {
+              SPAdj = (*Entry).second.second;
+              Value = (*Entry).second.first;
+            } else
+              SPAdj = Value = 0;
+
+            // If the scratch register from the last allocation is still
+            // available, see if the value matches. If it does, just re-use it.
+            if (trackingCurrentValue && havePrevValue && PrevValue == Value) {
+              // FIXME: This assumes that the instructions in the live range
+              // for the virtual register are exclusively for the purpose
+              // of populating the value in the register. That's reasonable
+              // for these frame index registers, but it's still a very, very
+              // strong assumption. rdar://7322732. Better would be to
+              // explicitly check each instruction in the range for references
+              // to the virtual register. Only delete those insns that
+              // touch the virtual register.
+
+              // Find the last use of the new virtual register. Remove all
+              // instructions between here and there, and update the current
+              // instruction to reference the last use insn instead.
+              MachineBasicBlock::iterator LastUseMI =
+                findLastUseReg(I, BB->end(), Reg);
+
+              // Remove all instructions up 'til the last use, since they're
+              // just calculating the value we already have.
+              BB->erase(I, LastUseMI);
+              MI = I = LastUseMI;
+
+              // Extend the live range of the scratch register
+              PrevLastUseMI->getOperand(PrevLastUseOp).setIsKill(false);
+              RS->setUsed(CurrentScratchReg);
+              CurrentVirtReg = Reg;
+
+              // We deleted the instruction we were scanning the operands of.
+              // Jump back to the instruction iterator loop. Don't increment
+              // past this instruction since we updated the iterator already.
+              DoIncr = false;
+              break;
+            }
+
+            // Scavenge a new scratch register
             CurrentVirtReg = Reg;
             const TargetRegisterClass *RC = Fn.getRegInfo().getRegClass(Reg);
             CurrentScratchReg = RS->FindUnusedReg(RC);
             if (CurrentScratchReg == 0)
               // No register is "free". Scavenge a register.
-              // FIXME: Track SPAdj. Zero won't always be right
-              CurrentScratchReg = RS->scavengeRegister(RC, I, 0);
+              CurrentScratchReg = RS->scavengeRegister(RC, I, SPAdj);
+
+            PrevValue = Value;
           }
+          // Replace this reference to the virtual register with the
+          // scratch register.
           assert (CurrentScratchReg && "Missing scratch register!");
           MI->getOperand(i).setReg(CurrentScratchReg);
 
-          // If this is the last use of the register, stop tracking it.
-          if (MI->getOperand(i).isKill())
-            CurrentScratchReg = CurrentVirtReg = 0;
+          if (MI->getOperand(i).isKill()) {
+            isKillInsn = true;
+            PrevLastUseOp = i;
+            PrevLastUseMI = MI;
+          }
         }
-      RS->forward(MI);
+      }
+      // If this is the last use of the scratch, stop tracking it. The
+      // last use will be a kill operand in an instruction that does
+      // not also define the scratch register.
+      if (isKillInsn && !isDefInsn) {
+        CurrentVirtReg = 0;
+        havePrevValue = trackingCurrentValue;
+      }
+      // Similarly, notice if the instruction clobbered the value in the
+      // register we're tracking for possible later reuse. This is noted
+      // above, but enforced here since the value is still live while we
+      // process the rest of the operands of the instruction.
+      if (clobbersScratchReg) {
+        havePrevValue = false;
+        CurrentScratchReg = 0;
+      }
+      if (DoIncr) {
+        RS->forward(I);
+        ++I;
+      }
     }
   }
 }
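
A minimal, self-contained sketch of the reuse decision the rewritten
scavengeFrameVirtualRegs() makes, assuming only the (value, SP adjustment)
pairs recorded in FrameConstantRegMap; ScratchState and shouldReuseScratch
are illustrative names, not LLVM API:

  #include <map>
  #include <utility>

  // (constant value materialized, SP adjustment in effect), as in the patch.
  typedef std::pair<int, int> FrameConstantEntry;

  struct ScratchState {
    unsigned Reg;            // physical scratch register currently tracked
    bool HavePrevValue;      // previous live range ended with a clean kill
    FrameConstantEntry Prev; // value the register is still known to hold
  };

  // True when a new frame-index virtual register may simply reuse the old
  // scratch register instead of scavenging and re-materializing a new one.
  bool shouldReuseScratch(const ScratchState &S,
                          const std::map<unsigned, FrameConstantEntry> &Map,
                          unsigned VReg) {
    std::map<unsigned, FrameConstantEntry>::const_iterator I = Map.find(VReg);
    if (I == Map.end())
      return false;          // target supplied no value information
    return S.HavePrevValue && S.Prev == I->second;
  }
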
diff --git a/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.h b/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.h
index d0a68e1..931f1eb 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.h
+++ b/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.h
@@ -27,6 +27,7 @@
 #include "llvm/CodeGen/MachineLoopInfo.h"
 #include "llvm/ADT/SparseBitVector.h"
 #include "llvm/ADT/DenseMap.h"
+#include "llvm/Target/TargetRegisterInfo.h"
 
 namespace llvm {
   class RegScavenger;
@@ -93,6 +94,17 @@ namespace llvm {
     // functions.
     bool ShrinkWrapThisFunction;
 
+    // Flag to control whether to use the register scavenger to resolve
+    // frame index materialization registers. Set according to
+    // TRI->requiresFrameIndexScavenging() for the current function.
+    bool FrameIndexVirtualScavenging;
+
+    // When using the scavenger post-pass to resolve frame reference
+    // materialization registers, maintain a map of the registers to
+    // the constant value and SP adjustment associated with it.
+    typedef std::pair<int, int> FrameConstantEntry;
+    DenseMap<unsigned, FrameConstantEntry> FrameConstantRegMap;
+
 #ifndef NDEBUG
     // Machine function handle.
     MachineFunction* MF;
diff --git a/libclamav/c++/llvm/lib/CodeGen/PseudoSourceValue.cpp b/libclamav/c++/llvm/lib/CodeGen/PseudoSourceValue.cpp
index 3728b7f..7fb3e6e 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PseudoSourceValue.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PseudoSourceValue.cpp
@@ -14,7 +14,7 @@
 #include "llvm/CodeGen/MachineFrameInfo.h"
 #include "llvm/CodeGen/PseudoSourceValue.h"
 #include "llvm/DerivedTypes.h"
-#include "llvm/Support/Compiler.h"
+#include "llvm/LLVMContext.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/ManagedStatic.h"
 #include "llvm/Support/raw_ostream.h"
@@ -43,32 +43,14 @@ static const char *const PSVNames[] = {
 // Eventually these should be uniqued on LLVMContext rather than in a managed
 // static.  For now, we can safely use the global context for the time being to
 // squeak by.
-PseudoSourceValue::PseudoSourceValue() :
-  Value(PointerType::getUnqual(Type::getInt8Ty(getGlobalContext())),
-        PseudoSourceValueVal) {}
+PseudoSourceValue::PseudoSourceValue(enum ValueTy Subclass) :
+  Value(Type::getInt8PtrTy(getGlobalContext()),
+        Subclass) {}
 
 void PseudoSourceValue::printCustom(raw_ostream &O) const {
   O << PSVNames[this - *PSVs];
 }
 
-namespace {
-  /// FixedStackPseudoSourceValue - A specialized PseudoSourceValue
-  /// for holding FixedStack values, which must include a frame
-  /// index.
-  class VISIBILITY_HIDDEN FixedStackPseudoSourceValue
-    : public PseudoSourceValue {
-    const int FI;
-  public:
-    explicit FixedStackPseudoSourceValue(int fi) : FI(fi) {}
-
-    virtual bool isConstant(const MachineFrameInfo *MFI) const;
-
-    virtual void printCustom(raw_ostream &OS) const {
-      OS << "FixedStack" << FI;
-    }
-  };
-}
-
 static ManagedStatic<std::map<int, const PseudoSourceValue *> > FSValues;
 
 const PseudoSourceValue *PseudoSourceValue::getFixedStack(int FI) {
@@ -89,6 +71,45 @@ bool PseudoSourceValue::isConstant(const MachineFrameInfo *) const {
   return false;
 }
 
+bool PseudoSourceValue::isAliased(const MachineFrameInfo *MFI) const {
+  if (this == getStack() ||
+      this == getGOT() ||
+      this == getConstantPool() ||
+      this == getJumpTable())
+    return false;
+  llvm_unreachable("Unknown PseudoSourceValue!");
+  return true;
+}
+
+bool PseudoSourceValue::mayAlias(const MachineFrameInfo *MFI) const {
+  if (this == getGOT() ||
+      this == getConstantPool() ||
+      this == getJumpTable())
+    return false;
+  return true;
+}
+
 bool FixedStackPseudoSourceValue::isConstant(const MachineFrameInfo *MFI) const{
   return MFI && MFI->isImmutableObjectIndex(FI);
 }
+
+bool FixedStackPseudoSourceValue::isAliased(const MachineFrameInfo *MFI) const {
+  // Negative frame indices are used for special things that don't
+  // appear in LLVM IR. Non-negative indices may be used for things
+  // like static allocas.
+  if (!MFI)
+    return FI >= 0;
+  // Spill slots should not alias others.
+  return !MFI->isFixedObjectIndex(FI) && !MFI->isSpillSlotObjectIndex(FI);
+}
+
+bool FixedStackPseudoSourceValue::mayAlias(const MachineFrameInfo *MFI) const {
+  if (!MFI)
+    return true;
+  // Spill slots will not alias any LLVM IR value.
+  return !MFI->isSpillSlotObjectIndex(FI);
+}
+
+void FixedStackPseudoSourceValue::printCustom(raw_ostream &OS) const {
+  OS << "FixedStack" << FI;
+}
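
A hedged, stand-alone restatement of the two new queries; FrameSlot and its
flags are stand-ins for the MachineFrameInfo calls the real code makes:

  // Abstracts MFI->isFixedObjectIndex(FI) / MFI->isSpillSlotObjectIndex(FI).
  struct FrameSlot {
    int  FI;          // frame index; negative FIs never appear in LLVM IR
    bool isFixed;     // fixed object, e.g. an incoming argument slot
    bool isSpillSlot; // register spill slot
  };

  // isAliased: could an LLVM IR value alias this location?
  bool isAliased(const FrameSlot &S) {
    return !S.isFixed && !S.isSpillSlot;
  }

  // mayAlias: could any other machine memory reference alias it? Only
  // spill slots are exempt, since they never correspond to IR-level memory.
  bool mayAlias(const FrameSlot &S) {
    return !S.isSpillSlot;
  }
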
diff --git a/libclamav/c++/llvm/lib/CodeGen/README.txt b/libclamav/c++/llvm/lib/CodeGen/README.txt
index 64374ce..b655dda 100644
--- a/libclamav/c++/llvm/lib/CodeGen/README.txt
+++ b/libclamav/c++/llvm/lib/CodeGen/README.txt
@@ -30,44 +30,6 @@ It also increase the likelyhood the store may become dead.
 
 //===---------------------------------------------------------------------===//
 
-I think we should have a "hasSideEffects" flag (which is automatically set for
-stuff that "isLoad" "isCall" etc), and the remat pass should eventually be able
-to remat any instruction that has no side effects, if it can handle it and if
-profitable.
-
-For now, I'd suggest having the remat stuff work like this:
-
-1. I need to spill/reload this thing.
-2. Check to see if it has side effects.
-3. Check to see if it is simple enough: e.g. it only has one register
-destination and no register input.
-4. If so, clone the instruction, do the xform, etc.
-
-Advantages of this are:
-
-1. the .td file describes the behavior of the instructions, not the way the
-   algorithm should work.
-2. as remat gets smarter in the future, we shouldn't have to be changing the .td
-   files.
-3. it is easier to explain what the flag means in the .td file, because you
-   don't have to pull in the explanation of how the current remat algo works.
-
-Some potential added complexities:
-
-1. Some instructions have to be glued to it's predecessor or successor. All of
-   the PC relative instructions and condition code setting instruction. We could
-   mark them as hasSideEffects, but that's not quite right. PC relative loads
-   from constantpools can be remat'ed, for example. But it requires more than
-   just cloning the instruction. Some instructions can be remat'ed but it
-   expands to more than one instruction. But allocator will have to make a
-   decision.
-
-4. As stated in 3, not as simple as cloning in some cases. The target will have
-   to decide how to remat it. For example, an ARM 2-piece constant generation
-   instruction is remat'ed as a load from constantpool.
-
-//===---------------------------------------------------------------------===//
-
 bb27 ...
         ...
         %reg1037 = ADDri %reg1039, 1
@@ -206,3 +168,32 @@ Stack coloring improvments:
    not spill slots.
 2. Reorder objects to fill in gaps between objects.
    e.g. 4, 1, <gap>, 4, 1, 1, 1, <gap>, 4 => 4, 1, 1, 1, 1, 4, 4
+
+//===---------------------------------------------------------------------===//
+
+The scheduler should be able to sort nearby instructions by their address. For
+example, in an expanded memset sequence it's not uncommon to see code like this:
+
+  movl $0, 4(%rdi)
+  movl $0, 8(%rdi)
+  movl $0, 12(%rdi)
+  movl $0, 0(%rdi)
+
+Each of the stores is independent, and the scheduler is currently making an
+arbitrary decision about the order.
+
+//===---------------------------------------------------------------------===//
+
+Another opportunity in this code is that the $0 could be moved to a register:
+
+  movl $0, 4(%rdi)
+  movl $0, 8(%rdi)
+  movl $0, 12(%rdi)
+  movl $0, 0(%rdi)
+
+This would save substantial code size, especially for longer sequences like
+this. It would be easy to have a rule telling isel to avoid matching MOV32mi
+if the immediate has more than some fixed number of uses. It's more involved
+to teach the register allocator how to do late folding to recover from
+excessive register pressure.
+
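
Taken together, the two README notes suggest code along these lines (the
register choice and store order are illustrative, not what isel emits today):

  xorl %eax, %eax
  movl %eax, 0(%rdi)
  movl %eax, 4(%rdi)
  movl %eax, 8(%rdi)
  movl %eax, 12(%rdi)

The constant is materialized once, and the stores are emitted in ascending
address order.
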
diff --git a/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp b/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp
index 9d22e13..4ff5129 100644
--- a/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp
@@ -33,7 +33,6 @@
 #include "llvm/ADT/SmallSet.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/STLExtras.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
@@ -60,19 +59,36 @@ PreSplitIntervals("pre-alloc-split",
                   cl::desc("Pre-register allocation live interval splitting"),
                   cl::init(false), cl::Hidden);
 
-static cl::opt<bool>
-NewSpillFramework("new-spill-framework",
-                  cl::desc("New spilling framework"),
-                  cl::init(false), cl::Hidden);
-
 static RegisterRegAlloc
 linearscanRegAlloc("linearscan", "linear scan register allocator",
                    createLinearScanRegisterAllocator);
 
 namespace {
-  struct VISIBILITY_HIDDEN RALinScan : public MachineFunctionPass {
+  // When we allocate a register, add it to a fixed-size queue of
+  // registers to skip in subsequent allocations. This trades a small
+  // amount of register pressure and increased spills for flexibility in
+  // the post-pass scheduler.
+  //
+  // Note that the number of registers used for reloading spills
+  // will be one greater than the value of this option.
+  //
+  // One big limitation of this is that it doesn't differentiate between
+  // different register classes. So on x86-64, if there is xmm register
+  // pressure, it can cause fewer GPRs to be held in the queue.
+  static cl::opt<unsigned>
+  NumRecentlyUsedRegs("linearscan-skip-count",
+                      cl::desc("Number of registers for linearscan to remember to skip."),
+                      cl::init(0),
+                      cl::Hidden);
+ 
+  struct RALinScan : public MachineFunctionPass {
     static char ID;
-    RALinScan() : MachineFunctionPass(&ID) {}
+    RALinScan() : MachineFunctionPass(&ID) {
+      // Initialize the queue to record recently-used registers.
+      if (NumRecentlyUsedRegs > 0)
+        RecentRegs.resize(NumRecentlyUsedRegs, 0);
+      RecentNext = RecentRegs.begin();
+    }
 
     typedef std::pair<LiveInterval*, LiveInterval::iterator> IntervalPtr;
     typedef SmallVector<IntervalPtr, 32> IntervalPtrs;
@@ -138,6 +154,20 @@ namespace {
 
     std::auto_ptr<Spiller> spiller_;
 
+    // The queue of recently-used registers.
+    SmallVector<unsigned, 4> RecentRegs;
+    SmallVector<unsigned, 4>::iterator RecentNext;
+
+    // Record that we just picked this register.
+    void recordRecentlyUsed(unsigned reg) {
+      assert(reg != 0 && "Recently used register is NOREG!");
+      if (!RecentRegs.empty()) {
+        *RecentNext++ = reg;
+        if (RecentNext == RecentRegs.end())
+          RecentNext = RecentRegs.begin();
+      }
+    }
+
   public:
     virtual const char* getPassName() const {
       return "Linear Scan Register Allocator";
@@ -146,6 +176,7 @@ namespace {
     virtual void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.setPreservesCFG();
       AU.addRequired<LiveIntervals>();
+      AU.addPreserved<SlotIndexes>();
       if (StrongPHIElim)
         AU.addRequiredID(StrongPHIEliminationID);
       // Make sure PassManager knows which analyses to make available
@@ -166,6 +197,12 @@ namespace {
     /// runOnMachineFunction - register allocate the whole function
     bool runOnMachineFunction(MachineFunction&);
 
+    // Determine if we skip this register due to its being recently used.
+    bool isRecentlyUsed(unsigned reg) const {
+      return std::find(RecentRegs.begin(), RecentRegs.end(), reg) !=
+             RecentRegs.end();
+    }
+
   private:
     /// linearScan - the linear scan algorithm
     void linearScan();
@@ -176,11 +213,11 @@ namespace {
 
     /// processActiveIntervals - expire old intervals and move non-overlapping
     /// ones to the inactive list.
-    void processActiveIntervals(MachineInstrIndex CurPoint);
+    void processActiveIntervals(SlotIndex CurPoint);
 
     /// processInactiveIntervals - expire old intervals and move overlapping
     /// ones to the active list.
-    void processInactiveIntervals(MachineInstrIndex CurPoint);
+    void processInactiveIntervals(SlotIndex CurPoint);
 
     /// hasNextReloadInterval - Return the next liveinterval that's being
     /// defined by a reload from the same SS as the specified one.
@@ -366,7 +403,7 @@ unsigned RALinScan::attemptTrivialCoalescing(LiveInterval &cur, unsigned Reg) {
     return Reg;
 
   VNInfo *vni = cur.begin()->valno;
-  if ((vni->def == MachineInstrIndex()) ||
+  if ((vni->def == SlotIndex()) ||
       vni->isUnused() || !vni->isDefAccurate())
     return Reg;
   MachineInstr *CopyMI = li_->getInstructionFromIndex(vni->def);
@@ -403,7 +440,7 @@ unsigned RALinScan::attemptTrivialCoalescing(LiveInterval &cur, unsigned Reg) {
         if (!O.isKill())
           continue;
         MachineInstr *MI = &*I;
-        if (SrcLI.liveAt(li_->getDefIndex(li_->getInstructionIndex(MI))))
+        if (SrcLI.liveAt(li_->getInstructionIndex(MI).getDefIndex()))
           O.setIsKill(false);
       }
     }
@@ -441,9 +478,7 @@ bool RALinScan::runOnMachineFunction(MachineFunction &fn) {
   vrm_ = &getAnalysis<VirtRegMap>();
   if (!rewriter_.get()) rewriter_.reset(createVirtRegRewriter());
   
-  if (NewSpillFramework) {
-    spiller_.reset(createSpiller(mf_, li_, ls_, vrm_));
-  }
+  spiller_.reset(createSpiller(mf_, li_, loopInfo, vrm_));
   
   initIntervalSets();
 
@@ -480,10 +515,17 @@ void RALinScan::initIntervalSets()
 
   for (LiveIntervals::iterator i = li_->begin(), e = li_->end(); i != e; ++i) {
     if (TargetRegisterInfo::isPhysicalRegister(i->second->reg)) {
-      mri_->setPhysRegUsed(i->second->reg);
-      fixed_.push_back(std::make_pair(i->second, i->second->begin()));
-    } else
-      unhandled_.push(i->second);
+      if (!i->second->empty()) {
+        mri_->setPhysRegUsed(i->second->reg);
+        fixed_.push_back(std::make_pair(i->second, i->second->begin()));
+      }
+    } else {
+      if (i->second->empty()) {
+        assignRegOrStackSlotAtInterval(i->second);
+      }
+      else
+        unhandled_.push(i->second);
+    }
   }
 }
 
@@ -503,13 +545,13 @@ void RALinScan::linearScan() {
     ++NumIters;
     DEBUG(errs() << "\n*** CURRENT ***: " << *cur << '\n');
 
-    if (!cur->empty()) {
-      processActiveIntervals(cur->beginIndex());
-      processInactiveIntervals(cur->beginIndex());
+    assert(!cur->empty() && "Empty interval in unhandled set.");
 
-      assert(TargetRegisterInfo::isVirtualRegister(cur->reg) &&
-             "Can only allocate virtual registers!");
-    }
+    processActiveIntervals(cur->beginIndex());
+    processInactiveIntervals(cur->beginIndex());
+
+    assert(TargetRegisterInfo::isVirtualRegister(cur->reg) &&
+           "Can only allocate virtual registers!");
 
     // Allocating a virtual register. try to find a free
     // physical register or spill an interval (possibly this one) in order to
@@ -586,7 +628,7 @@ void RALinScan::linearScan() {
 
 /// processActiveIntervals - expire old intervals and move non-overlapping ones
 /// to the inactive list.
-void RALinScan::processActiveIntervals(MachineInstrIndex CurPoint)
+void RALinScan::processActiveIntervals(SlotIndex CurPoint)
 {
   DEBUG(errs() << "\tprocessing active intervals:\n");
 
@@ -632,7 +674,7 @@ void RALinScan::processActiveIntervals(MachineInstrIndex CurPoint)
 
 /// processInactiveIntervals - expire old intervals and move overlapping
 /// ones to the active list.
-void RALinScan::processInactiveIntervals(MachineInstrIndex CurPoint)
+void RALinScan::processInactiveIntervals(SlotIndex CurPoint)
 {
   DEBUG(errs() << "\tprocessing inactive intervals:\n");
 
@@ -713,7 +755,7 @@ FindIntervalInVector(RALinScan::IntervalPtrs &IP, LiveInterval *LI) {
   return IP.end();
 }
 
-static void RevertVectorIteratorsTo(RALinScan::IntervalPtrs &V, MachineInstrIndex Point){
+static void RevertVectorIteratorsTo(RALinScan::IntervalPtrs &V, SlotIndex Point){
   for (unsigned i = 0, e = V.size(); i != e; ++i) {
     RALinScan::IntervalPtr &IP = V[i];
     LiveInterval::iterator I = std::upper_bound(IP.first->begin(),
@@ -739,7 +781,7 @@ static void addStackInterval(LiveInterval *cur, LiveStacks *ls_,
   if (SI.hasAtLeastOneValue())
     VNI = SI.getValNumInfo(0);
   else
-    VNI = SI.getNextValue(MachineInstrIndex(), 0, false,
+    VNI = SI.getNextValue(SlotIndex(), 0, false,
                           ls_->getVNInfoAllocator());
 
   LiveInterval &RI = li_->getInterval(cur->reg);
@@ -833,9 +875,15 @@ void RALinScan::findIntervalsToSpill(LiveInterval *cur,
 
 namespace {
   struct WeightCompare {
+  private:
+    const RALinScan &Allocator;
+
+  public:
+    WeightCompare(const RALinScan &Alloc) : Allocator(Alloc) {}
+
     typedef std::pair<unsigned, float> RegWeightPair;
     bool operator()(const RegWeightPair &LHS, const RegWeightPair &RHS) const {
-      return LHS.second < RHS.second;
+      return LHS.second < RHS.second && !Allocator.isRecentlyUsed(LHS.first);
     }
   };
 }
@@ -907,7 +955,7 @@ void RALinScan::assignRegOrStackSlotAtInterval(LiveInterval* cur) {
   backUpRegUses();
 
   std::vector<std::pair<unsigned, float> > SpillWeightsToAdd;
-  MachineInstrIndex StartPosition = cur->beginIndex();
+  SlotIndex StartPosition = cur->beginIndex();
   const TargetRegisterClass *RCLeader = RelatedRegClasses.getLeaderValue(RC);
 
   // If start of this live interval is defined by a move instruction and its
@@ -917,7 +965,7 @@ void RALinScan::assignRegOrStackSlotAtInterval(LiveInterval* cur) {
   // one, e.g. X86::mov32to32_. These move instructions are not coalescable.
   if (!vrm_->getRegAllocPref(cur->reg) && cur->hasAtLeastOneValue()) {
     VNInfo *vni = cur->begin()->valno;
-    if ((vni->def != MachineInstrIndex()) && !vni->isUnused() &&
+    if ((vni->def != SlotIndex()) && !vni->isUnused() &&
          vni->isDefAccurate()) {
       MachineInstr *CopyMI = li_->getInstructionFromIndex(vni->def);
       unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
@@ -1079,7 +1127,8 @@ void RALinScan::assignRegOrStackSlotAtInterval(LiveInterval* cur) {
            e = RC->allocation_order_end(*mf_); i != e; ++i) {
       unsigned reg = *i;
       float regWeight = SpillWeights[reg];
-      if (minWeight > regWeight)
+      // Skip recently allocated registers.
+      if (minWeight > regWeight && !isRecentlyUsed(reg))
         Found = true;
       RegsWeights.push_back(std::make_pair(reg, regWeight));
     }
@@ -1097,7 +1146,7 @@ void RALinScan::assignRegOrStackSlotAtInterval(LiveInterval* cur) {
   }
 
   // Sort all potential spill candidates by weight.
-  std::sort(RegsWeights.begin(), RegsWeights.end(), WeightCompare());
+  std::sort(RegsWeights.begin(), RegsWeights.end(), WeightCompare(*this));
   minReg = RegsWeights[0].first;
   minWeight = RegsWeights[0].second;
   if (minWeight == HUGE_VALF) {
@@ -1119,6 +1168,7 @@ void RALinScan::assignRegOrStackSlotAtInterval(LiveInterval* cur) {
         DowngradedRegs.clear();
         assignRegOrStackSlotAtInterval(cur);
       } else {
+        assert(false && "Ran out of registers during register allocation!");
         llvm_report_error("Ran out of registers during register allocation!");
       }
       return;
@@ -1149,11 +1199,7 @@ void RALinScan::assignRegOrStackSlotAtInterval(LiveInterval* cur) {
     SmallVector<LiveInterval*, 8> spillIs;
     std::vector<LiveInterval*> added;
     
-    if (!NewSpillFramework) {
-      added = li_->addIntervalsForSpills(*cur, spillIs, loopInfo, *vrm_);
-    } else {
-      added = spiller_->spill(cur); 
-    }
+    added = spiller_->spill(cur, spillIs); 
 
     std::sort(added.begin(), added.end(), LISorter());
     addStackInterval(cur, ls_, li_, mri_, *vrm_);
@@ -1173,7 +1219,7 @@ void RALinScan::assignRegOrStackSlotAtInterval(LiveInterval* cur) {
       LiveInterval *ReloadLi = added[i];
       if (ReloadLi->weight == HUGE_VALF &&
           li_->getApproximateInstructionCount(*ReloadLi) == 0) {
-        MachineInstrIndex ReloadIdx = ReloadLi->beginIndex();
+        SlotIndex ReloadIdx = ReloadLi->beginIndex();
         MachineBasicBlock *ReloadMBB = li_->getMBBFromIndex(ReloadIdx);
         int ReloadSS = vrm_->getStackSlot(ReloadLi->reg);
         if (LastReloadMBB == ReloadMBB && LastReloadSS == ReloadSS) {
@@ -1233,17 +1279,13 @@ void RALinScan::assignRegOrStackSlotAtInterval(LiveInterval* cur) {
          earliestStartInterval : sli;
        
     std::vector<LiveInterval*> newIs;
-    if (!NewSpillFramework) {
-      newIs = li_->addIntervalsForSpills(*sli, spillIs, loopInfo, *vrm_);
-    } else {
-      newIs = spiller_->spill(sli);
-    }
+    newIs = spiller_->spill(sli, spillIs);
     addStackInterval(sli, ls_, li_, mri_, *vrm_);
     std::copy(newIs.begin(), newIs.end(), std::back_inserter(added));
     spilled.insert(sli->reg);
   }
 
-  MachineInstrIndex earliestStart = earliestStartInterval->beginIndex();
+  SlotIndex earliestStart = earliestStartInterval->beginIndex();
 
   DEBUG(errs() << "\t\trolling back to: " << earliestStart << '\n');
 
@@ -1324,7 +1366,7 @@ void RALinScan::assignRegOrStackSlotAtInterval(LiveInterval* cur) {
     LiveInterval *ReloadLi = added[i];
     if (ReloadLi->weight == HUGE_VALF &&
         li_->getApproximateInstructionCount(*ReloadLi) == 0) {
-      MachineInstrIndex ReloadIdx = ReloadLi->beginIndex();
+      SlotIndex ReloadIdx = ReloadLi->beginIndex();
       MachineBasicBlock *ReloadMBB = li_->getMBBFromIndex(ReloadIdx);
       int ReloadSS = vrm_->getStackSlot(ReloadLi->reg);
       if (LastReloadMBB == ReloadMBB && LastReloadSS == ReloadSS) {
@@ -1367,7 +1409,8 @@ unsigned RALinScan::getFreePhysReg(LiveInterval* cur,
     // Ignore "downgraded" registers.
     if (SkipDGRegs && DowngradedRegs.count(Reg))
       continue;
-    if (isRegAvail(Reg)) {
+    // Skip recently allocated registers.
+    if (isRegAvail(Reg) && !isRecentlyUsed(Reg)) {
       FreeReg = Reg;
       if (FreeReg < inactiveCounts.size())
         FreeRegInactiveCount = inactiveCounts[FreeReg];
@@ -1379,9 +1422,12 @@ unsigned RALinScan::getFreePhysReg(LiveInterval* cur,
 
   // If there are no free regs, or if this reg has the max inactive count,
   // return this register.
-  if (FreeReg == 0 || FreeRegInactiveCount == MaxInactiveCount)
+  if (FreeReg == 0 || FreeRegInactiveCount == MaxInactiveCount) {
+    // Remember what register we picked so we can skip it next time.
+    if (FreeReg != 0) recordRecentlyUsed(FreeReg);
     return FreeReg;
- 
+  }
+
   // Continue scanning the registers, looking for the one with the highest
   // inactive count.  Alkis found that this reduced register pressure very
   // slightly on X86 (in rev 1.94 of this file), though this should probably be
@@ -1392,7 +1438,7 @@ unsigned RALinScan::getFreePhysReg(LiveInterval* cur,
     if (SkipDGRegs && DowngradedRegs.count(Reg))
       continue;
     if (isRegAvail(Reg) && Reg < inactiveCounts.size() &&
-        FreeRegInactiveCount < inactiveCounts[Reg]) {
+        FreeRegInactiveCount < inactiveCounts[Reg] && !isRecentlyUsed(Reg)) {
       FreeReg = Reg;
       FreeRegInactiveCount = inactiveCounts[Reg];
       if (FreeRegInactiveCount == MaxInactiveCount)
@@ -1400,6 +1446,9 @@ unsigned RALinScan::getFreePhysReg(LiveInterval* cur,
     }
   }
 
+  // Remember what register we picked so we can skip it next time.
+  recordRecentlyUsed(FreeReg);
+
   return FreeReg;
 }
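
The queue added above is a plain ring buffer with a linear-scan membership
test, which stays cheap at the small sizes -linearscan-skip-count is meant
for. A self-contained sketch (the wrapper struct is illustrative;
RecentRegs/RecentNext mirror the patch):

  #include <algorithm>
  #include <cassert>
  #include <vector>

  struct RecentRegQueue {
    std::vector<unsigned> RecentRegs;
    std::vector<unsigned>::iterator RecentNext;

    explicit RecentRegQueue(unsigned N) : RecentRegs(N, 0) {
      RecentNext = RecentRegs.begin();
    }

    // Record an allocation, overwriting the oldest entry once full.
    void recordRecentlyUsed(unsigned reg) {
      assert(reg != 0 && "Recently used register is NOREG!");
      if (RecentRegs.empty())
        return;                       // skip-count of 0: feature disabled
      *RecentNext++ = reg;
      if (RecentNext == RecentRegs.end())
        RecentNext = RecentRegs.begin();
    }

    // Should allocation skip this register for now?
    bool isRecentlyUsed(unsigned reg) const {
      return std::find(RecentRegs.begin(), RecentRegs.end(), reg) !=
             RecentRegs.end();
    }
  };
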
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/RegAllocLocal.cpp b/libclamav/c++/llvm/lib/CodeGen/RegAllocLocal.cpp
index 6caa2d3..7bb020a 100644
--- a/libclamav/c++/llvm/lib/CodeGen/RegAllocLocal.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/RegAllocLocal.cpp
@@ -24,7 +24,6 @@
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/ADT/DenseMap.h"
@@ -44,7 +43,7 @@ static RegisterRegAlloc
                 createLocalRegisterAllocator);
 
 namespace {
-  class VISIBILITY_HIDDEN RALocal : public MachineFunctionPass {
+  class RALocal : public MachineFunctionPass {
   public:
     static char ID;
     RALocal() : MachineFunctionPass(&ID), StackSlotForVirtReg(-1) {}
@@ -262,8 +261,8 @@ int RALocal::getStackSpaceFor(unsigned VirtReg, const TargetRegisterClass *RC) {
     return SS;          // Already has space allocated?
 
   // Allocate a new stack object for this spill location...
-  int FrameIdx = MF->getFrameInfo()->CreateStackObject(RC->getSize(),
-                                                       RC->getAlignment());
+  int FrameIdx = MF->getFrameInfo()->CreateSpillStackObject(RC->getSize(),
+                                                            RC->getAlignment());
 
   // Assign the slot...
   StackSlotForVirtReg[VirtReg] = FrameIdx;
diff --git a/libclamav/c++/llvm/lib/CodeGen/RegAllocPBQP.cpp b/libclamav/c++/llvm/lib/CodeGen/RegAllocPBQP.cpp
index 33d82d0..c677d34 100644
--- a/libclamav/c++/llvm/lib/CodeGen/RegAllocPBQP.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/RegAllocPBQP.cpp
@@ -70,7 +70,7 @@ namespace {
   /// PBQP based allocators solve the register allocation problem by mapping
   /// register allocation problems to Partitioned Boolean Quadratic
   /// Programming problems.
-  class VISIBILITY_HIDDEN PBQPRegAlloc : public MachineFunctionPass {
+  class PBQPRegAlloc : public MachineFunctionPass {
   public:
 
     static char ID;
@@ -85,6 +85,8 @@ namespace {
 
     /// PBQP analysis usage.
     virtual void getAnalysisUsage(AnalysisUsage &au) const {
+      au.addRequired<SlotIndexes>();
+      au.addPreserved<SlotIndexes>();
       au.addRequired<LiveIntervals>();
       //au.addRequiredID(SplitCriticalEdgesID);
       au.addRequired<RegisterCoalescer>();
@@ -684,13 +686,18 @@ void PBQPRegAlloc::addStackInterval(const LiveInterval *spilled,
     vni = stackInterval.getValNumInfo(0);
   else
     vni = stackInterval.getNextValue(
-      MachineInstrIndex(), 0, false, lss->getVNInfoAllocator());
+      SlotIndex(), 0, false, lss->getVNInfoAllocator());
 
   LiveInterval &rhsInterval = lis->getInterval(spilled->reg);
   stackInterval.MergeRangesInAsValue(rhsInterval, vni);
 }
 
 bool PBQPRegAlloc::mapPBQPToRegAlloc(const PBQP::Solution &solution) {
+
+  // Assert that this is a valid solution to the regalloc problem.
+  assert(solution.getCost() != std::numeric_limits<PBQP::PBQPNum>::infinity() &&
+         "Invalid (infinite cost) solution for PBQP problem.");
+
   // Set to true if we have any spills
   bool anotherRoundNeeded = false;
 
@@ -832,7 +839,7 @@ bool PBQPRegAlloc::runOnMachineFunction(MachineFunction &MF) {
   tm = &mf->getTarget();
   tri = tm->getRegisterInfo();
   tii = tm->getInstrInfo();
-  mri = &mf->getRegInfo();
+  mri = &mf->getRegInfo(); 
 
   lis = &getAnalysis<LiveIntervals>();
   lss = &getAnalysis<LiveStacks>();
diff --git a/libclamav/c++/llvm/lib/CodeGen/RegisterScavenging.cpp b/libclamav/c++/llvm/lib/CodeGen/RegisterScavenging.cpp
index 9fc3da3..94680ed 100644
--- a/libclamav/c++/llvm/lib/CodeGen/RegisterScavenging.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/RegisterScavenging.cpp
@@ -100,11 +100,8 @@ void RegScavenger::enterBasicBlock(MachineBasicBlock *mbb) {
         CalleeSavedRegs.set(CSRegs[i]);
   }
 
-  // RS used within emit{Pro,Epi}logue()
-  if (mbb != MBB) {
-    MBB = mbb;
-    initRegState();
-  }
+  MBB = mbb;
+  initRegState();
 
   Tracking = false;
 }
@@ -177,7 +174,24 @@ void RegScavenger::forward() {
     if (!Reg || isReserved(Reg))
       continue;
     if (MO.isUse()) {
-      assert(isUsed(Reg) && "Using an undefined register!");
+      if (!isUsed(Reg)) {
+        // Check if it's partial live: e.g.
+        // D0 = insert_subreg D0<undef>, S0
+        // ... D0
+        // The problem is the insert_subreg could be eliminated. The use of
+        // D0 is using a partially undef value. This is not *incorrect* since
+        // S1 can be freely clobbered.
+        // Ideally we would like a way to model this, but leaving the
+        // insert_subreg around causes both correctness and performance issues.
+        bool SubUsed = false;
+        for (const unsigned *SubRegs = TRI->getSubRegisters(Reg);
+             unsigned SubReg = *SubRegs; ++SubRegs)
+          if (isUsed(SubReg)) {
+            SubUsed = true;
+            break;
+          }
+        assert(SubUsed && "Using an undefined register!");
+      }
       assert((!EarlyClobberRegs.test(Reg) || MI->isRegTiedToDefOperand(i)) &&
              "Using an early clobbered register!");
     } else {
@@ -227,7 +241,7 @@ unsigned RegScavenger::FindUnusedReg(const TargetRegisterClass *RC) const {
 ///
 /// No more than InstrLimit instructions are inspected.
 ///
-unsigned RegScavenger::findSurvivorReg(MachineBasicBlock::iterator MI,
+unsigned RegScavenger::findSurvivorReg(MachineBasicBlock::iterator StartMI,
                                        BitVector &Candidates,
                                        unsigned InstrLimit,
                                        MachineBasicBlock::iterator &UseMI) {
@@ -235,19 +249,37 @@ unsigned RegScavenger::findSurvivorReg(MachineBasicBlock::iterator MI,
   assert(Survivor > 0 && "No candidates for scavenging");
 
   MachineBasicBlock::iterator ME = MBB->getFirstTerminator();
-  assert(MI != ME && "MI already at terminator");
+  assert(StartMI != ME && "MI already at terminator");
+  MachineBasicBlock::iterator RestorePointMI = StartMI;
+  MachineBasicBlock::iterator MI = StartMI;
 
+  bool inVirtLiveRange = false;
   for (++MI; InstrLimit > 0 && MI != ME; ++MI, --InstrLimit) {
+    bool isVirtKillInsn = false;
+    bool isVirtDefInsn = false;
     // Remove any candidates touched by instruction.
     for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
       const MachineOperand &MO = MI->getOperand(i);
-      if (!MO.isReg() || MO.isUndef() || !MO.getReg() ||
-          TargetRegisterInfo::isVirtualRegister(MO.getReg()))
+      if (!MO.isReg() || MO.isUndef() || !MO.getReg())
+        continue;
+      if (TargetRegisterInfo::isVirtualRegister(MO.getReg())) {
+        if (MO.isDef())
+          isVirtDefInsn = true;
+        else if (MO.isKill())
+          isVirtKillInsn = true;
         continue;
+      }
       Candidates.reset(MO.getReg());
       for (const unsigned *R = TRI->getAliasSet(MO.getReg()); *R; R++)
         Candidates.reset(*R);
     }
+    // If we're not in a virtual reg's live range, this is a valid
+    // restore point.
+    if (!inVirtLiveRange) RestorePointMI = MI;
+
+    // Update whether we're in the live range of a virtual register
+    if (isVirtKillInsn) inVirtLiveRange = false;
+    if (isVirtDefInsn) inVirtLiveRange = true;
 
     // Was our survivor untouched by this instruction?
     if (Candidates.test(Survivor))
@@ -259,18 +291,19 @@ unsigned RegScavenger::findSurvivorReg(MachineBasicBlock::iterator MI,
 
     Survivor = Candidates.find_first();
   }
+  // If we ran off the end, that's where we want to restore.
+  if (MI == ME) RestorePointMI = ME;
+  assert (RestorePointMI != StartMI &&
+          "No available scavenger restore location!");
 
   // We ran out of candidates, so stop the search.
-  UseMI = MI;
+  UseMI = RestorePointMI;
   return Survivor;
 }
 
 unsigned RegScavenger::scavengeRegister(const TargetRegisterClass *RC,
                                         MachineBasicBlock::iterator I,
                                         int SPAdj) {
-  assert(ScavengingFrameIndex >= 0 &&
-         "Cannot scavenge a register without an emergency spill slot!");
-
   // Mask off the registers which are not in the TargetRegisterClass.
   BitVector Candidates(NumPhysRegs, false);
   CreateRegClassMask(RC, Candidates);
@@ -301,14 +334,24 @@ unsigned RegScavenger::scavengeRegister(const TargetRegisterClass *RC,
   // Avoid infinite regress
   ScavengedReg = SReg;
 
-  // Spill the scavenged register before I.
-  TII->storeRegToStackSlot(*MBB, I, SReg, true, ScavengingFrameIndex, RC);
-  MachineBasicBlock::iterator II = prior(I);
-  TRI->eliminateFrameIndex(II, SPAdj, this);
+  // If the target knows how to save/restore the register, let it do so;
+  // otherwise, use the emergency stack spill slot.
+  if (!TRI->saveScavengerRegister(*MBB, I, UseMI, RC, SReg)) {
+    // Spill the scavenged register before I.
+    assert(ScavengingFrameIndex >= 0 &&
+           "Cannot scavenge register without an emergency spill slot!");
+    TII->storeRegToStackSlot(*MBB, I, SReg, true, ScavengingFrameIndex, RC);
+    MachineBasicBlock::iterator II = prior(I);
+    TRI->eliminateFrameIndex(II, SPAdj, NULL, this);
+
+    // Restore the scavenged register before its use (or first terminator).
+    TII->loadRegFromStackSlot(*MBB, UseMI, SReg, ScavengingFrameIndex, RC);
+    II = prior(UseMI);
+    TRI->eliminateFrameIndex(II, SPAdj, NULL, this);
+  }
 
-  // Restore the scavenged register before its use (or first terminator).
-  TII->loadRegFromStackSlot(*MBB, UseMI, SReg, ScavengingFrameIndex, RC);
   ScavengeRestore = prior(UseMI);
+
   // Doing this here leads to infinite regress.
   // ScavengedReg = SReg;
   ScavengedRC = RC;
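
A simplified model of the new restore-point bookkeeping in findSurvivorReg():
while scanning forward from the spill, remember the most recent position that
lies outside every frame-index virtual register's def..kill range, so the
reload is never inserted inside one. InstrInfoBits and findRestorePoint are
illustrative, and the run-off-the-end case (restore at the terminator) is
omitted here:

  #include <cstddef>

  // One entry per MachineInstr following the spill point.
  struct InstrInfoBits {
    bool defsVirtual;  // defines a frame-index virtual register
    bool killsVirtual; // kills one
  };

  // Index of the latest safe place to insert the restore load.
  std::size_t findRestorePoint(const InstrInfoBits *I, std::size_t N) {
    std::size_t RestorePoint = 0;
    bool inVirtLiveRange = false;
    for (std::size_t i = 0; i != N; ++i) {
      if (!inVirtLiveRange)
        RestorePoint = i;             // valid restore point so far
      // Update the live-range state exactly as the patch does.
      if (I[i].killsVirtual) inVirtLiveRange = false;
      if (I[i].defsVirtual)  inVirtLiveRange = true;
    }
    return RestorePoint;
  }
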
diff --git a/libclamav/c++/llvm/lib/CodeGen/ScheduleDAG.cpp b/libclamav/c++/llvm/lib/CodeGen/ScheduleDAG.cpp
index 5a59862..71693d2 100644
--- a/libclamav/c++/llvm/lib/CodeGen/ScheduleDAG.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/ScheduleDAG.cpp
@@ -346,7 +346,7 @@ void ScheduleDAG::VerifySchedule(bool isBottomUp) {
       AnyNotSched = true;
     }
     if (SUnits[i].isScheduled &&
-        (isBottomUp ? SUnits[i].getHeight() : SUnits[i].getHeight()) >
+        (isBottomUp ? SUnits[i].getHeight() : SUnits[i].getDepth()) >
           unsigned(INT_MAX)) {
       if (!AnyNotSched)
         errs() << "*** Scheduling failed! ***\n";
diff --git a/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGEmit.cpp b/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGEmit.cpp
index 0d15c02..8e03420 100644
--- a/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGEmit.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGEmit.cpp
@@ -50,8 +50,10 @@ void ScheduleDAG::EmitPhysRegCopy(SUnit *SU,
           break;
         }
       }
-      TII->copyRegToReg(*BB, InsertPos, Reg, VRI->second,
-                        SU->CopyDstRC, SU->CopySrcRC);
+      bool Success = TII->copyRegToReg(*BB, InsertPos, Reg, VRI->second,
+                                       SU->CopyDstRC, SU->CopySrcRC);
+      (void)Success;
+      assert(Success && "copyRegToReg failed!");
     } else {
       // Copy from physical register.
       assert(I->getReg() && "Unknown physical register!");
@@ -59,8 +61,10 @@ void ScheduleDAG::EmitPhysRegCopy(SUnit *SU,
       bool isNew = VRBaseMap.insert(std::make_pair(SU, VRBase)).second;
       isNew = isNew; // Silence compiler warning.
       assert(isNew && "Node emitted out of order - early");
-      TII->copyRegToReg(*BB, InsertPos, VRBase, I->getReg(),
-                        SU->CopyDstRC, SU->CopySrcRC);
+      bool Success = TII->copyRegToReg(*BB, InsertPos, VRBase, I->getReg(),
+                                       SU->CopyDstRC, SU->CopySrcRC);
+      (void)Success;
+      assert(Success && "copyRegToReg failed!");
     }
     break;
   }
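
The Success/(void)Success/assert pattern used in both hunks above is worth
spelling out: the cast keeps release (-DNDEBUG) builds, where assert()
compiles away, from warning about an unused variable. A trivial stand-alone
example (copyValue is hypothetical):

  #include <cassert>

  static bool copyValue(int &Dst, int Src) { Dst = Src; return true; }

  int main() {
    int X = 0;
    bool Success = copyValue(X, 42);
    (void)Success; // no unused-variable warning when assert() compiles away
    assert(Success && "copyValue failed!");
    return X == 42 ? 0 : 1;
  }
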
diff --git a/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGInstrs.cpp b/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGInstrs.cpp
index b55e606..56dd533 100644
--- a/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGInstrs.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGInstrs.cpp
@@ -32,7 +32,9 @@ using namespace llvm;
 ScheduleDAGInstrs::ScheduleDAGInstrs(MachineFunction &mf,
                                      const MachineLoopInfo &mli,
                                      const MachineDominatorTree &mdt)
-  : ScheduleDAG(mf), MLI(mli), MDT(mdt), LoopRegs(MLI, MDT) {}
+  : ScheduleDAG(mf), MLI(mli), MDT(mdt), LoopRegs(MLI, MDT) {
+  MFI = mf.getFrameInfo();
+}
 
 /// Run - perform scheduling.
 ///
@@ -95,7 +97,10 @@ static const Value *getUnderlyingObject(const Value *V) {
 /// getUnderlyingObjectForInstr - If this machine instr has memory reference
 /// information and it can be tracked to a normal reference to a known
 /// object, return the Value for that object. Otherwise return null.
-static const Value *getUnderlyingObjectForInstr(const MachineInstr *MI) {
+static const Value *getUnderlyingObjectForInstr(const MachineInstr *MI,
+                                                const MachineFrameInfo *MFI,
+                                                bool &MayAlias) {
+  MayAlias = true;
   if (!MI->hasOneMemOperand() ||
       !(*MI->memoperands_begin())->getValue() ||
       (*MI->memoperands_begin())->isVolatile())
@@ -106,10 +111,21 @@ static const Value *getUnderlyingObjectForInstr(const MachineInstr *MI) {
     return 0;
 
   V = getUnderlyingObject(V);
-  if (!isa<PseudoSourceValue>(V) && !isIdentifiedObject(V))
-    return 0;
+  if (const PseudoSourceValue *PSV = dyn_cast<PseudoSourceValue>(V)) {
+    // For now, ignore PseudoSourceValues which may alias LLVM IR values
+    // because the code that uses this function has no way to cope with
+    // such aliases.
+    if (PSV->isAliased(MFI))
+      return 0;
+    
+    MayAlias = PSV->mayAlias(MFI);
+    return V;
+  }
 
-  return V;
+  if (isIdentifiedObject(V))
+    return V;
+
+  return 0;
 }
 
 void ScheduleDAGInstrs::StartBlock(MachineBasicBlock *BB) {
@@ -123,7 +139,7 @@ void ScheduleDAGInstrs::StartBlock(MachineBasicBlock *BB) {
     }
 }
 
-void ScheduleDAGInstrs::BuildSchedGraph() {
+void ScheduleDAGInstrs::BuildSchedGraph(AliasAnalysis *AA) {
   // We'll be allocating one SUnit for each instruction, plus one for
   // the region exit node.
   SUnits.reserve(BB->size());
@@ -131,16 +147,15 @@ void ScheduleDAGInstrs::BuildSchedGraph() {
   // We build scheduling units by walking a block's instruction list from bottom
   // to top.
 
-  // Remember where a generic side-effecting instruction is as we procede. If
-  // ChainMMO is null, this is assumed to have arbitrary side-effects. If
-  // ChainMMO is non-null, then Chain makes only a single memory reference.
-  SUnit *Chain = 0;
-  MachineMemOperand *ChainMMO = 0;
+  // Remember where a generic side-effecting instruction is as we proceed.
+  SUnit *BarrierChain = 0, *AliasChain = 0;
 
-  // Memory references to specific known memory locations are tracked so that
-  // they can be given more precise dependencies.
-  std::map<const Value *, SUnit *> MemDefs;
-  std::map<const Value *, std::vector<SUnit *> > MemUses;
+  // Memory references to specific known memory locations are tracked
+  // so that they can be given more precise dependencies. We track
+  // separately the known memory locations that may alias and those
+  // that are known not to alias.
+  std::map<const Value *, SUnit *> AliasMemDefs, NonAliasMemDefs;
+  std::map<const Value *, std::vector<SUnit *> > AliasMemUses, NonAliasMemUses;
 
   // Check to see if the scheduler cares about latencies.
   bool UnitLatencies = ForceUnitLatencies();
@@ -196,7 +211,7 @@ void ScheduleDAGInstrs::BuildSchedGraph() {
           SUnit *DefSU = DefList[i];
           if (DefSU != SU &&
               (Kind != SDep::Output || !MO.isDead() ||
-               !DefSU->getInstr()->registerDefIsDead(Reg)))
+               !DefSU->getInstr()->registerDefIsDead(*Alias)))
             DefSU->addPred(SDep(SU, Kind, AOLatency, /*Reg=*/ *Alias));
         }
       }
@@ -305,106 +320,142 @@ void ScheduleDAGInstrs::BuildSchedGraph() {
     }
 
     // Add chain dependencies.
+    // Chain dependencies used to enforce memory order should have
+    // latency of 0 (except for the true dependency of a Store followed
+    // by an aliased Load... we estimate that with a single cycle of
+    // latency, assuming the hardware will bypass).
     // Note that isStoreToStackSlot and isLoadFromStackSLot are not usable
     // after stack slots are lowered to actual addresses.
     // TODO: Use an AliasAnalysis and do real alias-analysis queries, and
     // produce more precise dependence information.
-    if (TID.isCall() || TID.hasUnmodeledSideEffects()) {
-    new_chain:
-      // This is the conservative case. Add dependencies on all memory
-      // references.
-      if (Chain)
-        Chain->addPred(SDep(SU, SDep::Order, SU->Latency));
-      Chain = SU;
+#define STORE_LOAD_LATENCY 1
+    unsigned TrueMemOrderLatency = 0;
+    if (TID.isCall() || TID.hasUnmodeledSideEffects() ||
+        (MI->hasVolatileMemoryRef() && 
+         (!TID.mayLoad() || !MI->isInvariantLoad(AA)))) {
+      // Be conservative with these and add dependencies on all memory
+      // references, even those that are known to not alias.
+      for (std::map<const Value *, SUnit *>::iterator I = 
+             NonAliasMemDefs.begin(), E = NonAliasMemDefs.end(); I != E; ++I) {
+        I->second->addPred(SDep(SU, SDep::Order, /*Latency=*/0));
+      }
+      for (std::map<const Value *, std::vector<SUnit *> >::iterator I =
+             NonAliasMemUses.begin(), E = NonAliasMemUses.end(); I != E; ++I) {
+        for (unsigned i = 0, e = I->second.size(); i != e; ++i)
+          I->second[i]->addPred(SDep(SU, SDep::Order, TrueMemOrderLatency));
+      }
+      NonAliasMemDefs.clear();
+      NonAliasMemUses.clear();
+      // Add SU to the barrier chain.
+      if (BarrierChain)
+        BarrierChain->addPred(SDep(SU, SDep::Order, /*Latency=*/0));
+      BarrierChain = SU;
+
+      // fall-through
+    new_alias_chain:
+      // Chain all possibly aliasing memory references through SU.
+      if (AliasChain)
+        AliasChain->addPred(SDep(SU, SDep::Order, /*Latency=*/0));
+      AliasChain = SU;
       for (unsigned k = 0, m = PendingLoads.size(); k != m; ++k)
-        PendingLoads[k]->addPred(SDep(SU, SDep::Order, SU->Latency));
-      PendingLoads.clear();
-      for (std::map<const Value *, SUnit *>::iterator I = MemDefs.begin(),
-           E = MemDefs.end(); I != E; ++I) {
-        I->second->addPred(SDep(SU, SDep::Order, SU->Latency));
-        I->second = SU;
+        PendingLoads[k]->addPred(SDep(SU, SDep::Order, TrueMemOrderLatency));
+      for (std::map<const Value *, SUnit *>::iterator I = AliasMemDefs.begin(),
+           E = AliasMemDefs.end(); I != E; ++I) {
+        I->second->addPred(SDep(SU, SDep::Order, /*Latency=*/0));
       }
       for (std::map<const Value *, std::vector<SUnit *> >::iterator I =
-           MemUses.begin(), E = MemUses.end(); I != E; ++I) {
+           AliasMemUses.begin(), E = AliasMemUses.end(); I != E; ++I) {
         for (unsigned i = 0, e = I->second.size(); i != e; ++i)
-          I->second[i]->addPred(SDep(SU, SDep::Order, SU->Latency));
-        I->second.clear();
+          I->second[i]->addPred(SDep(SU, SDep::Order, TrueMemOrderLatency));
       }
-      // See if it is known to just have a single memory reference.
-      MachineInstr *ChainMI = Chain->getInstr();
-      const TargetInstrDesc &ChainTID = ChainMI->getDesc();
-      if (!ChainTID.isCall() &&
-          !ChainTID.hasUnmodeledSideEffects() &&
-          ChainMI->hasOneMemOperand() &&
-          !(*ChainMI->memoperands_begin())->isVolatile() &&
-          (*ChainMI->memoperands_begin())->getValue())
-        // We know that the Chain accesses one specific memory location.
-        ChainMMO = *ChainMI->memoperands_begin();
-      else
-        // Unknown memory accesses. Assume the worst.
-        ChainMMO = 0;
+      PendingLoads.clear();
+      AliasMemDefs.clear();
+      AliasMemUses.clear();
     } else if (TID.mayStore()) {
-      if (const Value *V = getUnderlyingObjectForInstr(MI)) {
+      bool MayAlias = true;
+      TrueMemOrderLatency = STORE_LOAD_LATENCY;
+      if (const Value *V = getUnderlyingObjectForInstr(MI, MFI, MayAlias)) {
         // A store to a specific PseudoSourceValue. Add precise dependencies.
-        // Handle the def in MemDefs, if there is one.
-        std::map<const Value *, SUnit *>::iterator I = MemDefs.find(V);
-        if (I != MemDefs.end()) {
-          I->second->addPred(SDep(SU, SDep::Order, SU->Latency, /*Reg=*/0,
+        // Record the def in MemDefs, first adding a dep if there is
+        // an existing def.
+        std::map<const Value *, SUnit *>::iterator I = 
+          ((MayAlias) ? AliasMemDefs.find(V) : NonAliasMemDefs.find(V));
+        std::map<const Value *, SUnit *>::iterator IE = 
+          ((MayAlias) ? AliasMemDefs.end() : NonAliasMemDefs.end());
+        if (I != IE) {
+          I->second->addPred(SDep(SU, SDep::Order, /*Latency=*/0, /*Reg=*/0,
                                   /*isNormalMemory=*/true));
           I->second = SU;
         } else {
-          MemDefs[V] = SU;
+          if (MayAlias)
+            AliasMemDefs[V] = SU;
+          else
+            NonAliasMemDefs[V] = SU;
         }
         // Handle the uses in MemUses, if there are any.
         std::map<const Value *, std::vector<SUnit *> >::iterator J =
-          MemUses.find(V);
-        if (J != MemUses.end()) {
+          ((MayAlias) ? AliasMemUses.find(V) : NonAliasMemUses.find(V));
+        std::map<const Value *, std::vector<SUnit *> >::iterator JE =
+          ((MayAlias) ? AliasMemUses.end() : NonAliasMemUses.end());
+        if (J != JE) {
           for (unsigned i = 0, e = J->second.size(); i != e; ++i)
-            J->second[i]->addPred(SDep(SU, SDep::Order, SU->Latency, /*Reg=*/0,
-                                       /*isNormalMemory=*/true));
+            J->second[i]->addPred(SDep(SU, SDep::Order, TrueMemOrderLatency,
+                                       /*Reg=*/0, /*isNormalMemory=*/true));
           J->second.clear();
         }
-        // Add dependencies from all the PendingLoads, since without
-        // memoperands we must assume they alias anything.
-        for (unsigned k = 0, m = PendingLoads.size(); k != m; ++k)
-          PendingLoads[k]->addPred(SDep(SU, SDep::Order, SU->Latency));
-        // Add a general dependence too, if needed.
-        if (Chain)
-          Chain->addPred(SDep(SU, SDep::Order, SU->Latency));
-      } else
+        if (MayAlias) {
+          // Add dependencies from all the PendingLoads, i.e. loads
+          // with no underlying object.
+          for (unsigned k = 0, m = PendingLoads.size(); k != m; ++k)
+            PendingLoads[k]->addPred(SDep(SU, SDep::Order, TrueMemOrderLatency));
+          // Add dependence on alias chain, if needed.
+          if (AliasChain)
+            AliasChain->addPred(SDep(SU, SDep::Order, /*Latency=*/0));
+        }
+        // Add dependence on barrier chain, if needed.
+        if (BarrierChain)
+          BarrierChain->addPred(SDep(SU, SDep::Order, /*Latency=*/0));
+      } else {
         // Treat all other stores conservatively.
-        goto new_chain;
+        goto new_alias_chain;
+      }
     } else if (TID.mayLoad()) {
-      if (TII->isInvariantLoad(MI)) {
+      bool MayAlias = true;
+      TrueMemOrderLatency = 0;
+      if (MI->isInvariantLoad(AA)) {
         // Invariant load, no chain dependencies needed!
-      } else if (const Value *V = getUnderlyingObjectForInstr(MI)) {
-        // A load from a specific PseudoSourceValue. Add precise dependencies.
-        std::map<const Value *, SUnit *>::iterator I = MemDefs.find(V);
-        if (I != MemDefs.end())
-          I->second->addPred(SDep(SU, SDep::Order, SU->Latency, /*Reg=*/0,
-                                  /*isNormalMemory=*/true));
-        MemUses[V].push_back(SU);
-
-        // Add a general dependence too, if needed.
-        if (Chain && (!ChainMMO ||
-                      (ChainMMO->isStore() || ChainMMO->isVolatile())))
-          Chain->addPred(SDep(SU, SDep::Order, SU->Latency));
-      } else if (MI->hasVolatileMemoryRef()) {
-        // Treat volatile loads conservatively. Note that this includes
-        // cases where memoperand information is unavailable.
-        goto new_chain;
       } else {
-        // A normal load. Depend on the general chain, as well as on
-        // all stores. In the absense of MachineMemOperand information,
-        // we can't even assume that the load doesn't alias well-behaved
-        // memory locations.
-        if (Chain)
-          Chain->addPred(SDep(SU, SDep::Order, SU->Latency));
-        for (std::map<const Value *, SUnit *>::iterator I = MemDefs.begin(),
-             E = MemDefs.end(); I != E; ++I)
-          I->second->addPred(SDep(SU, SDep::Order, SU->Latency));
-        PendingLoads.push_back(SU);
-      }
+        if (const Value *V = 
+            getUnderlyingObjectForInstr(MI, MFI, MayAlias)) {
+          // A load from a specific PseudoSourceValue. Add precise dependencies.
+          std::map<const Value *, SUnit *>::iterator I = 
+            ((MayAlias) ? AliasMemDefs.find(V) : NonAliasMemDefs.find(V));
+          std::map<const Value *, SUnit *>::iterator IE = 
+            ((MayAlias) ? AliasMemDefs.end() : NonAliasMemDefs.end());
+          if (I != IE)
+            I->second->addPred(SDep(SU, SDep::Order, /*Latency=*/0, /*Reg=*/0,
+                                    /*isNormalMemory=*/true));
+          if (MayAlias)
+            AliasMemUses[V].push_back(SU);
+          else 
+            NonAliasMemUses[V].push_back(SU);
+        } else {
+          // A load with no underlying object. Depend on all
+          // potentially aliasing stores.
+          for (std::map<const Value *, SUnit *>::iterator I = 
+                 AliasMemDefs.begin(), E = AliasMemDefs.end(); I != E; ++I)
+            I->second->addPred(SDep(SU, SDep::Order, /*Latency=*/0));
+          
+          PendingLoads.push_back(SU);
+          MayAlias = true;
+        }
+        
+        // Add dependencies on alias and barrier chains, if needed.
+        if (MayAlias && AliasChain)
+          AliasChain->addPred(SDep(SU, SDep::Order, /*Latency=*/0));
+        if (BarrierChain)
+          BarrierChain->addPred(SDep(SU, SDep::Order, /*Latency=*/0));
+      } 
     }
   }
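
The dependence scheme above boils down to a four-way classification of each
memory reference. A toy restatement (MemRef and classify are illustrative
stand-ins, not the pass's data structures):

  enum ChainKind {
    Barrier,        // orders every other memory reference
    AliasChained,   // unknown underlying object: conservative chain
    AliasTracked,   // known object that may alias IR values
    NonAliasTracked // known object that cannot alias IR values
  };

  struct MemRef {
    bool isBarrier;      // call, unmodeled side effects, volatile ref
    const void *Object;  // underlying object, or 0 if unknown
    bool objectMayAlias; // PseudoSourceValue::mayAlias() answer
  };

  ChainKind classify(const MemRef &R) {
    if (R.isBarrier)
      return Barrier;
    if (!R.Object)
      return AliasChained;
    return R.objectMayAlias ? AliasTracked : NonAliasTracked;
  }
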
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGInstrs.h b/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGInstrs.h
index e928ca1..366c3a8 100644
--- a/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGInstrs.h
+++ b/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGInstrs.h
@@ -98,6 +98,7 @@ namespace llvm {
   class VISIBILITY_HIDDEN ScheduleDAGInstrs : public ScheduleDAG {
     const MachineLoopInfo &MLI;
     const MachineDominatorTree &MDT;
+    const MachineFrameInfo *MFI;
 
     /// Defs, Uses - Remember where defs and uses of each physical register
     /// are as we iterate upward through the instructions. This is allocated
@@ -121,7 +122,6 @@ namespace llvm {
     SmallSet<unsigned, 8> LoopLiveInRegs;
 
   public:
-    MachineBasicBlock *BB;                // Current basic block
     MachineBasicBlock::iterator Begin;    // The beginning of the range to
                                           // be scheduled. The range extends
                                           // to InsertPos.
@@ -155,7 +155,7 @@ namespace llvm {
 
     /// BuildSchedGraph - Build SUnits from the MachineBasicBlock that we are
     /// input.
-    virtual void BuildSchedGraph();
+    virtual void BuildSchedGraph(AliasAnalysis *AA);
 
     /// ComputeLatency - Compute node latency.
     ///
diff --git a/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGPrinter.cpp b/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGPrinter.cpp
index 95ad05e..4851d49 100644
--- a/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGPrinter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGPrinter.cpp
@@ -18,7 +18,6 @@
 #include "llvm/CodeGen/MachineConstantPool.h"
 #include "llvm/CodeGen/MachineFunction.h"
 #include "llvm/CodeGen/MachineModuleInfo.h"
-#include "llvm/CodeGen/PseudoSourceValue.h"
 #include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Support/Debug.h"
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/CMakeLists.txt b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/CMakeLists.txt
index dfa91cf..80c7d7c 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/CMakeLists.txt
@@ -2,6 +2,8 @@ add_llvm_library(LLVMSelectionDAG
   CallingConvLower.cpp
   DAGCombiner.cpp
   FastISel.cpp
+  FunctionLoweringInfo.cpp
+  InstrEmitter.cpp
   LegalizeDAG.cpp
   LegalizeFloatTypes.cpp
   LegalizeIntegerTypes.cpp
@@ -13,9 +15,8 @@ add_llvm_library(LLVMSelectionDAG
   ScheduleDAGList.cpp
   ScheduleDAGRRList.cpp
   ScheduleDAGSDNodes.cpp
-  ScheduleDAGSDNodesEmit.cpp
   SelectionDAG.cpp
-  SelectionDAGBuild.cpp
+  SelectionDAGBuilder.cpp
   SelectionDAGISel.cpp
   SelectionDAGPrinter.cpp
   TargetLowering.cpp
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/CallingConvLower.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/CallingConvLower.cpp
index fbe40b6..38839c4 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/CallingConvLower.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/CallingConvLower.cpp
@@ -77,6 +77,21 @@ CCState::AnalyzeFormalArguments(const SmallVectorImpl<ISD::InputArg> &Ins,
   }
 }
 
+/// CheckReturn - Analyze the return values of a function, returning true if
+/// the return can be performed without sret-demotion, and false otherwise.
+bool CCState::CheckReturn(const SmallVectorImpl<EVT> &OutTys,
+                          const SmallVectorImpl<ISD::ArgFlagsTy> &ArgsFlags,
+                          CCAssignFn Fn) {
+  // Determine which register each value should be copied into.
+  for (unsigned i = 0, e = OutTys.size(); i != e; ++i) {
+    EVT VT = OutTys[i];
+    ISD::ArgFlagsTy ArgFlags = ArgsFlags[i];
+    if (Fn(i, VT, VT, CCValAssign::Full, ArgFlags, *this))
+      return false;
+  }
+  return true;
+}
+
 /// AnalyzeReturn - Analyze the returned values of a return,
 /// incorporating info about the result values into this state.
 void CCState::AnalyzeReturn(const SmallVectorImpl<ISD::OutputArg> &Outs,
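CheckReturn simply re-runs the return-value CCAssignFn over the result types
and reports whether every value received a location; targets can use the
answer to choose between a direct register return and sret demotion. A
hedged sketch of the intended query (canLowerReturnDirectly and RetCC are
illustrative names, not part of this patch):

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/CodeGen/CallingConvLower.h"
    using namespace llvm;

    // Hypothetical helper showing the intended use of the new hook.
    static bool
    canLowerReturnDirectly(CCState &CCInfo,
                           const SmallVectorImpl<EVT> &OutTys,
                           const SmallVectorImpl<ISD::ArgFlagsTy> &Flags,
                           CCAssignFn RetCC) {
      // true:  every return value was assigned a location by RetCC.
      // false: the return must be demoted through a hidden sret pointer.
      return CCInfo.CheckReturn(OutTys, Flags, RetCC);
    }
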
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
index f04ab68..06ffdd6 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
@@ -31,14 +31,12 @@
 #include "llvm/Target/TargetOptions.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/MathExtras.h"
 #include "llvm/Support/raw_ostream.h"
 #include <algorithm>
-#include <set>
 using namespace llvm;
 
 STATISTIC(NodesCombined   , "Number of dag nodes combined");
@@ -57,7 +55,7 @@ namespace {
 
 //------------------------------ DAGCombiner ---------------------------------//
 
-  class VISIBILITY_HIDDEN DAGCombiner {
+  class DAGCombiner {
     SelectionDAG &DAG;
     const TargetLowering &TLI;
     CombineLevel Level;
@@ -280,8 +278,7 @@ public:
 namespace {
 /// WorkListRemover - This class is a DAGUpdateListener that removes any deleted
 /// nodes from the worklist.
-class VISIBILITY_HIDDEN WorkListRemover :
-  public SelectionDAG::DAGUpdateListener {
+class WorkListRemover : public SelectionDAG::DAGUpdateListener {
   DAGCombiner &DC;
 public:
   explicit WorkListRemover(DAGCombiner &dc) : DC(dc) {}
@@ -4381,15 +4378,17 @@ SDValue DAGCombiner::visitFP_EXTEND(SDNode *N) {
 
 SDValue DAGCombiner::visitFNEG(SDNode *N) {
   SDValue N0 = N->getOperand(0);
+  EVT VT = N->getValueType(0);
 
   if (isNegatibleForFree(N0, LegalOperations))
     return GetNegatedExpression(N0, DAG, LegalOperations);
 
   // Transform fneg(bitconvert(x)) -> bitconvert(x^sign) to avoid loading
   // constant pool values.
-  if (N0.getOpcode() == ISD::BIT_CONVERT && N0.getNode()->hasOneUse() &&
-      N0.getOperand(0).getValueType().isInteger() &&
-      !N0.getOperand(0).getValueType().isVector()) {
+  if (N0.getOpcode() == ISD::BIT_CONVERT && 
+      !VT.isVector() &&
+      N0.getNode()->hasOneUse() &&
+      N0.getOperand(0).getValueType().isInteger()) {
     SDValue Int = N0.getOperand(0);
     EVT IntVT = Int.getValueType();
     if (IntVT.isInteger() && !IntVT.isVector()) {
@@ -4397,7 +4396,7 @@ SDValue DAGCombiner::visitFNEG(SDNode *N) {
               DAG.getConstant(APInt::getSignBit(IntVT.getSizeInBits()), IntVT));
       AddToWorkList(Int.getNode());
       return DAG.getNode(ISD::BIT_CONVERT, N->getDebugLoc(),
-                         N->getValueType(0), Int);
+                         VT, Int);
     }
   }
 
@@ -4443,14 +4442,13 @@ SDValue DAGCombiner::visitBRCOND(SDNode *N) {
   SDValue Chain = N->getOperand(0);
   SDValue N1 = N->getOperand(1);
   SDValue N2 = N->getOperand(2);
-  ConstantSDNode *N1C = dyn_cast<ConstantSDNode>(N1);
 
-  // never taken branch, fold to chain
-  if (N1C && N1C->isNullValue())
-    return Chain;
-  // unconditional branch
-  if (N1C && N1C->getAPIntValue() == 1)
-    return DAG.getNode(ISD::BR, N->getDebugLoc(), MVT::Other, Chain, N2);
+  // If the condition N1 is a constant we could fold this into a fallthrough
+  // or unconditional branch. However that doesn't happen very often in normal
+  // code, because Instcombine/SimplifyCFG should have handled the available
+  // opportunities. If we did this folding here, it would be necessary to
+  // update the MachineBasicBlock CFG, which is awkward.
+
   // fold a brcond with a setcc condition into a BR_CC node if BR_CC is legal
   // on the target.
   if (N1.getOpcode() == ISD::SETCC &&
@@ -4517,22 +4515,18 @@ SDValue DAGCombiner::visitBR_CC(SDNode *N) {
   CondCodeSDNode *CC = cast<CondCodeSDNode>(N->getOperand(1));
   SDValue CondLHS = N->getOperand(2), CondRHS = N->getOperand(3);
 
+  // If the condition were a constant we could fold this into a fallthrough
+  // or unconditional branch. However that doesn't happen very often in normal
+  // code, because Instcombine/SimplifyCFG should have handled the available
+  // opportunities. If we did this folding here, it would be necessary to
+  // update the MachineBasicBlock CFG, which is awkward.
+
   // Use SimplifySetCC to simplify SETCC's.
   SDValue Simp = SimplifySetCC(TLI.getSetCCResultType(CondLHS.getValueType()),
                                CondLHS, CondRHS, CC->get(), N->getDebugLoc(),
                                false);
   if (Simp.getNode()) AddToWorkList(Simp.getNode());
 
-  ConstantSDNode *SCCC = dyn_cast_or_null<ConstantSDNode>(Simp.getNode());
-
-  // fold br_cc true, dest -> br dest (unconditional branch)
-  if (SCCC && !SCCC->isNullValue())
-    return DAG.getNode(ISD::BR, N->getDebugLoc(), MVT::Other,
-                       N->getOperand(0), N->getOperand(4));
-  // fold br_cc false, dest -> unconditional fall through
-  if (SCCC && SCCC->isNullValue())
-    return N->getOperand(0);
-
   // fold to a simpler setcc
   if (Simp.getNode() && Simp.getOpcode() == ISD::SETCC)
     return DAG.getNode(ISD::BR_CC, N->getDebugLoc(), MVT::Other,
@@ -5730,15 +5724,17 @@ bool DAGCombiner::SimplifySelectOps(SDNode *TheSelect, SDValue LHS,
 
       // If this is an EXTLOAD, the VT's must match.
       if (LLD->getMemoryVT() == RLD->getMemoryVT()) {
-        // FIXME: this conflates two src values, discarding one.  This is not
-        // the right thing to do, but nothing uses srcvalues now.  When they do,
-        // turn SrcValue into a list of locations.
+        // FIXME: this discards src value information.  This is
+        // over-conservative. It would be beneficial to be able to remember
+        // both potential memory locations.
         SDValue Addr;
         if (TheSelect->getOpcode() == ISD::SELECT) {
           // Check that the condition doesn't reach either load.  If so, folding
           // this will induce a cycle into the DAG.
-          if (!LLD->isPredecessorOf(TheSelect->getOperand(0).getNode()) &&
-              !RLD->isPredecessorOf(TheSelect->getOperand(0).getNode())) {
+          if ((!LLD->hasAnyUseOfValue(1) ||
+               !LLD->isPredecessorOf(TheSelect->getOperand(0).getNode())) &&
+              (!RLD->hasAnyUseOfValue(1) ||
+               !RLD->isPredecessorOf(TheSelect->getOperand(0).getNode()))) {
             Addr = DAG.getNode(ISD::SELECT, TheSelect->getDebugLoc(),
                                LLD->getBasePtr().getValueType(),
                                TheSelect->getOperand(0), LLD->getBasePtr(),
@@ -5747,10 +5743,12 @@ bool DAGCombiner::SimplifySelectOps(SDNode *TheSelect, SDValue LHS,
         } else {
           // Check that the condition doesn't reach either load.  If so, folding
           // this will induce a cycle into the DAG.
-          if (!LLD->isPredecessorOf(TheSelect->getOperand(0).getNode()) &&
-              !RLD->isPredecessorOf(TheSelect->getOperand(0).getNode()) &&
-              !LLD->isPredecessorOf(TheSelect->getOperand(1).getNode()) &&
-              !RLD->isPredecessorOf(TheSelect->getOperand(1).getNode())) {
+          if ((!LLD->hasAnyUseOfValue(1) ||
+               (!LLD->isPredecessorOf(TheSelect->getOperand(0).getNode()) &&
+                !LLD->isPredecessorOf(TheSelect->getOperand(1).getNode()))) &&
+              (!RLD->hasAnyUseOfValue(1) ||
+               (!RLD->isPredecessorOf(TheSelect->getOperand(0).getNode()) &&
+                !RLD->isPredecessorOf(TheSelect->getOperand(1).getNode())))) {
             Addr = DAG.getNode(ISD::SELECT_CC, TheSelect->getDebugLoc(),
                                LLD->getBasePtr().getValueType(),
                                TheSelect->getOperand(0),
@@ -5766,16 +5764,14 @@ bool DAGCombiner::SimplifySelectOps(SDNode *TheSelect, SDValue LHS,
             Load = DAG.getLoad(TheSelect->getValueType(0),
                                TheSelect->getDebugLoc(),
                                LLD->getChain(),
-                               Addr,LLD->getSrcValue(),
-                               LLD->getSrcValueOffset(),
+                               Addr, 0, 0,
                                LLD->isVolatile(),
                                LLD->getAlignment());
           } else {
             Load = DAG.getExtLoad(LLD->getExtensionType(),
                                   TheSelect->getDebugLoc(),
                                   TheSelect->getValueType(0),
-                                  LLD->getChain(), Addr, LLD->getSrcValue(),
-                                  LLD->getSrcValueOffset(),
+                                  LLD->getChain(), Addr, 0, 0,
                                   LLD->getMemoryVT(),
                                   LLD->isVolatile(),
                                   LLD->getAlignment());
@@ -6230,13 +6226,28 @@ void DAGCombiner::GatherAllAliases(SDNode *N, SDValue OriginalChain,
 
   // Starting off.
   Chains.push_back(OriginalChain);
-
+  unsigned Depth = 0;
+  
   // Look at each chain and determine if it is an alias.  If so, add it to the
   // aliases list.  If not, then continue up the chain looking for the next
   // candidate.
   while (!Chains.empty()) {
     SDValue Chain = Chains.back();
     Chains.pop_back();
+    
+    // For TokenFactor nodes, look at each operand and continue up the
+    // chain only until we find two aliases.  Once we've seen two aliases,
+    // assume we'll find more and revert to the original chain, since the
+    // xform is unlikely to be profitable.
+    // 
+    // FIXME: The depth check could be made to return the last non-aliasing 
+    // chain we found before we hit a tokenfactor rather than the original
+    // chain.
+    if (Depth > 6 || Aliases.size() == 2) {
+      Aliases.clear();
+      Aliases.push_back(OriginalChain);
+      break;
+    }
 
     // Don't bother if we've been before.
     if (!Visited.insert(Chain.getNode()))
@@ -6268,8 +6279,7 @@ void DAGCombiner::GatherAllAliases(SDNode *N, SDValue OriginalChain,
       } else {
         // Look further up the chain.
         Chains.push_back(Chain.getOperand(0));
-        // Clean up old chain.
-        AddToWorkList(Chain.getNode());
+        ++Depth;
       }
       break;
     }
@@ -6285,8 +6295,7 @@ void DAGCombiner::GatherAllAliases(SDNode *N, SDValue OriginalChain,
       }
       for (unsigned n = Chain.getNumOperands(); n;)
         Chains.push_back(Chain.getOperand(--n));
-      // Eliminate the token factor if we can.
-      AddToWorkList(Chain.getNode());
+      ++Depth;
       break;
 
     default:
@@ -6312,7 +6321,7 @@ SDValue DAGCombiner::FindBetterChain(SDNode *N, SDValue OldChain) {
     // If a single operand then chain to it.  We don't need to revisit it.
     return Aliases[0];
   }
-
+  
   // Construct a custom tailored token factor.
   return DAG.getNode(ISD::TokenFactor, N->getDebugLoc(), MVT::Other, 
                      &Aliases[0], Aliases.size());
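The GatherAllAliases changes above bound the backwards chain walk: a Depth
counter replaces the old eager worklist cleanup, and the search aborts back
to the original chain once it has climbed more than six nodes or collected
two aliases. A compact sketch of that cutoff with stand-in node types (the
real code distinguishes load/store chains from TokenFactors; both are
collapsed here into a single operand walk):

    #include <vector>

    struct Node { std::vector<Node*> Operands; bool IsAlias; };

    void gatherAliases(Node *OriginalChain, std::vector<Node*> &Aliases) {
      std::vector<Node*> Chains(1, OriginalChain);
      unsigned Depth = 0;
      while (!Chains.empty()) {
        Node *Chain = Chains.back();
        Chains.pop_back();

        // Give up once the walk gets deep or two aliases have been found;
        // past that point the xform is unlikely to be profitable.
        if (Depth > 6 || Aliases.size() == 2) {
          Aliases.clear();
          Aliases.push_back(OriginalChain);
          return;
        }

        if (Chain->IsAlias) {
          Aliases.push_back(Chain);
          continue;
        }

        // Look further up the chain through every operand.
        for (size_t n = Chain->Operands.size(); n; )
          Chains.push_back(Chain->Operands[--n]);
        ++Depth;
      }
    }
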
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
index 0bec2cf..5eb9ca1 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
@@ -43,6 +43,7 @@
 #include "llvm/GlobalVariable.h"
 #include "llvm/Instructions.h"
 #include "llvm/IntrinsicInst.h"
+#include "llvm/LLVMContext.h"
 #include "llvm/CodeGen/FastISel.h"
 #include "llvm/CodeGen/MachineInstrBuilder.h"
 #include "llvm/CodeGen/MachineModuleInfo.h"
@@ -53,7 +54,8 @@
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetLowering.h"
 #include "llvm/Target/TargetMachine.h"
-#include "SelectionDAGBuild.h"
+#include "SelectionDAGBuilder.h"
+#include "FunctionLoweringInfo.h"
 using namespace llvm;
 
 unsigned FastISel::getRegForValue(Value *V) {
@@ -324,83 +326,12 @@ bool FastISel::SelectCall(User *I) {
   unsigned IID = F->getIntrinsicID();
   switch (IID) {
   default: break;
-  case Intrinsic::dbg_stoppoint: {
-    DbgStopPointInst *SPI = cast<DbgStopPointInst>(I);
-    if (isValidDebugInfoIntrinsic(*SPI, CodeGenOpt::None))
-      setCurDebugLoc(ExtractDebugLocation(*SPI, MF.getDebugLocInfo()));
+  case Intrinsic::dbg_stoppoint: 
+  case Intrinsic::dbg_region_start: 
+  case Intrinsic::dbg_region_end: 
+  case Intrinsic::dbg_func_start:
+    // FIXME - Remove these instructions once the dust settles.
     return true;
-  }
-  case Intrinsic::dbg_region_start: {
-    DbgRegionStartInst *RSI = cast<DbgRegionStartInst>(I);
-    if (isValidDebugInfoIntrinsic(*RSI, CodeGenOpt::None) && DW
-        && DW->ShouldEmitDwarfDebug()) {
-      unsigned ID = 
-        DW->RecordRegionStart(RSI->getContext());
-      const TargetInstrDesc &II = TII.get(TargetInstrInfo::DBG_LABEL);
-      BuildMI(MBB, DL, II).addImm(ID);
-    }
-    return true;
-  }
-  case Intrinsic::dbg_region_end: {
-    DbgRegionEndInst *REI = cast<DbgRegionEndInst>(I);
-    if (isValidDebugInfoIntrinsic(*REI, CodeGenOpt::None) && DW
-        && DW->ShouldEmitDwarfDebug()) {
-     unsigned ID = 0;
-     DISubprogram Subprogram(REI->getContext());
-     if (isInlinedFnEnd(*REI, MF.getFunction())) {
-        // This is end of an inlined function.
-        const TargetInstrDesc &II = TII.get(TargetInstrInfo::DBG_LABEL);
-        ID = DW->RecordInlinedFnEnd(Subprogram);
-        if (ID)
-          // Returned ID is 0 if this is unbalanced "end of inlined
-          // scope". This could happen if optimizer eats dbg intrinsics
-          // or "beginning of inlined scope" is not recoginized due to
-          // missing location info. In such cases, ignore this region.end.
-          BuildMI(MBB, DL, II).addImm(ID);
-      } else {
-        const TargetInstrDesc &II = TII.get(TargetInstrInfo::DBG_LABEL);
-        ID =  DW->RecordRegionEnd(REI->getContext());
-        BuildMI(MBB, DL, II).addImm(ID);
-      }
-    }
-    return true;
-  }
-  case Intrinsic::dbg_func_start: {
-    DbgFuncStartInst *FSI = cast<DbgFuncStartInst>(I);
-    if (!isValidDebugInfoIntrinsic(*FSI, CodeGenOpt::None) || !DW
-        || !DW->ShouldEmitDwarfDebug()) 
-      return true;
-
-    if (isInlinedFnStart(*FSI, MF.getFunction())) {
-      // This is a beginning of an inlined function.
-      
-      // If llvm.dbg.func.start is seen in a new block before any
-      // llvm.dbg.stoppoint intrinsic then the location info is unknown.
-      // FIXME : Why DebugLoc is reset at the beginning of each block ?
-      DebugLoc PrevLoc = DL;
-      if (PrevLoc.isUnknown())
-        return true;
-      // Record the source line.
-      setCurDebugLoc(ExtractDebugLocation(*FSI, MF.getDebugLocInfo()));
-      
-      DebugLocTuple PrevLocTpl = MF.getDebugLocTuple(PrevLoc);
-      DISubprogram SP(FSI->getSubprogram());
-      unsigned LabelID = DW->RecordInlinedFnStart(SP,
-                                                  DICompileUnit(PrevLocTpl.CompileUnit),
-                                                  PrevLocTpl.Line,
-                                                  PrevLocTpl.Col);
-      const TargetInstrDesc &II = TII.get(TargetInstrInfo::DBG_LABEL);
-      BuildMI(MBB, DL, II).addImm(LabelID);
-      return true;
-    }
-    
-    // This is a beginning of a new function.
-    MF.setDefaultDebugLoc(ExtractDebugLocation(*FSI, MF.getDebugLocInfo()));
-    
-    // llvm.dbg.func_start also defines beginning of function scope.
-    DW->RecordRegionStart(FSI->getSubprogram());
-    return true;
-  }
   case Intrinsic::dbg_declare: {
     DbgDeclareInst *DI = cast<DbgDeclareInst>(I);
     if (!isValidDebugInfoIntrinsic(*DI, CodeGenOpt::None) || !DW
@@ -418,14 +349,12 @@ bool FastISel::SelectCall(User *I) {
     if (SI == StaticAllocaMap.end()) break; // VLAs.
     int FI = SI->second;
     if (MMI) {
-      MetadataContext &TheMetadata = AI->getContext().getMetadata();
+      MetadataContext &TheMetadata = 
+        DI->getParent()->getContext().getMetadata();
       unsigned MDDbgKind = TheMetadata.getMDKind("dbg");
-      MDNode *AllocaLocation =
-        dyn_cast_or_null<MDNode>(TheMetadata.getMD(MDDbgKind, AI));
-      if (AllocaLocation)
-        MMI->setVariableDbgInfo(DI->getVariable(), AllocaLocation, FI);
+      MDNode *Dbg = TheMetadata.getMD(MDDbgKind, DI);
+      MMI->setVariableDbgInfo(DI->getVariable(), FI, Dbg);
     }
-    DW->RecordVariable(DI->getVariable(), FI);
     return true;
   }
   case Intrinsic::eh_exception: {
@@ -447,15 +376,11 @@ bool FastISel::SelectCall(User *I) {
     }
     break;
   }
-  case Intrinsic::eh_selector_i32:
-  case Intrinsic::eh_selector_i64: {
+  case Intrinsic::eh_selector: {
     EVT VT = TLI.getValueType(I->getType());
     switch (TLI.getOperationAction(ISD::EHSELECTION, VT)) {
     default: break;
     case TargetLowering::Expand: {
-      EVT VT = (IID == Intrinsic::eh_selector_i32 ?
-                           MVT::i32 : MVT::i64);
-
       if (MMI) {
         if (MBB->isLandingPad())
           AddCatchInfo(*cast<CallInst>(I), MMI, MBB);
@@ -469,12 +394,25 @@ bool FastISel::SelectCall(User *I) {
         }
 
         unsigned Reg = TLI.getExceptionSelectorRegister();
-        const TargetRegisterClass *RC = TLI.getRegClassFor(VT);
+        EVT SrcVT = TLI.getPointerTy();
+        const TargetRegisterClass *RC = TLI.getRegClassFor(SrcVT);
         unsigned ResultReg = createResultReg(RC);
-        bool InsertedCopy = TII.copyRegToReg(*MBB, MBB->end(), ResultReg,
-                                             Reg, RC, RC);
+        bool InsertedCopy = TII.copyRegToReg(*MBB, MBB->end(), ResultReg, Reg,
+                                             RC, RC);
         assert(InsertedCopy && "Can't copy address registers!");
         InsertedCopy = InsertedCopy;
+
+        // Cast the register to the type of the selector.
+        if (SrcVT.bitsGT(MVT::i32))
+          ResultReg = FastEmit_r(SrcVT.getSimpleVT(), MVT::i32, ISD::TRUNCATE,
+                                 ResultReg);
+        else if (SrcVT.bitsLT(MVT::i32))
+          ResultReg = FastEmit_r(SrcVT.getSimpleVT(), MVT::i32,
+                                 ISD::SIGN_EXTEND, ResultReg);
+        if (ResultReg == 0)
+          // Unhandled operand. Halt "fast" selection and bail.
+          return false;
+
         UpdateValueMap(I, ResultReg);
       } else {
         unsigned ResultReg =
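The eh.selector rewrite above is needed because the selector value is now
always i32 while the exception selector register has pointer width, so
fast-isel inserts a truncate on 64-bit targets or a sign-extend on narrower
ones, bailing out (ResultReg == 0) when the cast can't be emitted. The
width rule, as a trivial standalone sketch (selectorCast is an illustrative
name only; the patch itself uses FastEmit_r as shown):

    // Compare the pointer-sized source against i32 and pick the cast.
    enum CastKind { NoCast, Truncate, SignExtend };

    CastKind selectorCast(unsigned SrcBits) {
      if (SrcBits > 32) return Truncate;    // e.g. 64-bit pointer targets
      if (SrcBits < 32) return SignExtend;  // e.g. 16-bit pointer targets
      return NoCast;                        // already i32
    }
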
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp
new file mode 100644
index 0000000..e3b25c2
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp
@@ -0,0 +1,355 @@
+//===-- FunctionLoweringInfo.cpp ------------------------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This implements routines for translating functions from LLVM IR into
+// Machine IR.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "function-lowering-info"
+#include "FunctionLoweringInfo.h"
+#include "llvm/CallingConv.h"
+#include "llvm/DerivedTypes.h"
+#include "llvm/Function.h"
+#include "llvm/Instructions.h"
+#include "llvm/IntrinsicInst.h"
+#include "llvm/LLVMContext.h"
+#include "llvm/Module.h"
+#include "llvm/CodeGen/MachineFunction.h"
+#include "llvm/CodeGen/MachineFrameInfo.h"
+#include "llvm/CodeGen/MachineInstrBuilder.h"
+#include "llvm/CodeGen/MachineModuleInfo.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/Analysis/DebugInfo.h"
+#include "llvm/Target/TargetRegisterInfo.h"
+#include "llvm/Target/TargetData.h"
+#include "llvm/Target/TargetFrameInfo.h"
+#include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/Target/TargetIntrinsicInfo.h"
+#include "llvm/Target/TargetLowering.h"
+#include "llvm/Target/TargetOptions.h"
+#include "llvm/Support/Compiler.h"
+#include "llvm/Support/CommandLine.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/MathExtras.h"
+#include "llvm/Support/raw_ostream.h"
+#include <algorithm>
+using namespace llvm;
+
+/// ComputeLinearIndex - Given an LLVM IR aggregate type and a sequence
+/// of insertvalue or extractvalue indices that identify a member, return
+/// the linearized index of the start of the member.
+///
+unsigned llvm::ComputeLinearIndex(const TargetLowering &TLI, const Type *Ty,
+                                  const unsigned *Indices,
+                                  const unsigned *IndicesEnd,
+                                  unsigned CurIndex) {
+  // Base case: We're done.
+  if (Indices && Indices == IndicesEnd)
+    return CurIndex;
+
+  // Given a struct type, recursively traverse the elements.
+  if (const StructType *STy = dyn_cast<StructType>(Ty)) {
+    for (StructType::element_iterator EB = STy->element_begin(),
+                                      EI = EB,
+                                      EE = STy->element_end();
+        EI != EE; ++EI) {
+      if (Indices && *Indices == unsigned(EI - EB))
+        return ComputeLinearIndex(TLI, *EI, Indices+1, IndicesEnd, CurIndex);
+      CurIndex = ComputeLinearIndex(TLI, *EI, 0, 0, CurIndex);
+    }
+    return CurIndex;
+  }
+  // Given an array type, recursively traverse the elements.
+  else if (const ArrayType *ATy = dyn_cast<ArrayType>(Ty)) {
+    const Type *EltTy = ATy->getElementType();
+    for (unsigned i = 0, e = ATy->getNumElements(); i != e; ++i) {
+      if (Indices && *Indices == i)
+        return ComputeLinearIndex(TLI, EltTy, Indices+1, IndicesEnd, CurIndex);
+      CurIndex = ComputeLinearIndex(TLI, EltTy, 0, 0, CurIndex);
+    }
+    return CurIndex;
+  }
+  // We haven't found the type we're looking for, so keep searching.
+  return CurIndex + 1;
+}
+
+/// ComputeValueVTs - Given an LLVM IR type, compute a sequence of
+/// EVTs that represent all the individual underlying
+/// non-aggregate types that comprise it.
+///
+/// If Offsets is non-null, it points to a vector to be filled in
+/// with the in-memory offsets of each of the individual values.
+///
+void llvm::ComputeValueVTs(const TargetLowering &TLI, const Type *Ty,
+                           SmallVectorImpl<EVT> &ValueVTs,
+                           SmallVectorImpl<uint64_t> *Offsets,
+                           uint64_t StartingOffset) {
+  // Given a struct type, recursively traverse the elements.
+  if (const StructType *STy = dyn_cast<StructType>(Ty)) {
+    const StructLayout *SL = TLI.getTargetData()->getStructLayout(STy);
+    for (StructType::element_iterator EB = STy->element_begin(),
+                                      EI = EB,
+                                      EE = STy->element_end();
+         EI != EE; ++EI)
+      ComputeValueVTs(TLI, *EI, ValueVTs, Offsets,
+                      StartingOffset + SL->getElementOffset(EI - EB));
+    return;
+  }
+  // Given an array type, recursively traverse the elements.
+  if (const ArrayType *ATy = dyn_cast<ArrayType>(Ty)) {
+    const Type *EltTy = ATy->getElementType();
+    uint64_t EltSize = TLI.getTargetData()->getTypeAllocSize(EltTy);
+    for (unsigned i = 0, e = ATy->getNumElements(); i != e; ++i)
+      ComputeValueVTs(TLI, EltTy, ValueVTs, Offsets,
+                      StartingOffset + i * EltSize);
+    return;
+  }
+  // Interpret void as zero return values.
+  if (Ty == Type::getVoidTy(Ty->getContext()))
+    return;
+  // Base case: we can get an EVT for this LLVM IR type.
+  ValueVTs.push_back(TLI.getValueType(Ty));
+  if (Offsets)
+    Offsets->push_back(StartingOffset);
+}
+
+/// isUsedOutsideOfDefiningBlock - Return true if this instruction is used by
+/// PHI nodes or outside of the basic block that defines it, or used by a
+/// switch or atomic instruction, which may expand to multiple basic blocks.
+static bool isUsedOutsideOfDefiningBlock(Instruction *I) {
+  if (isa<PHINode>(I)) return true;
+  BasicBlock *BB = I->getParent();
+  for (Value::use_iterator UI = I->use_begin(), E = I->use_end(); UI != E; ++UI)
+    if (cast<Instruction>(*UI)->getParent() != BB || isa<PHINode>(*UI))
+      return true;
+  return false;
+}
+
+/// isOnlyUsedInEntryBlock - If the specified argument is only used in the
+/// entry block, return true.  Uses by a switch count as uses outside the
+/// entry block, since the switch may expand into multiple basic blocks.
+static bool isOnlyUsedInEntryBlock(Argument *A, bool EnableFastISel) {
+  // With FastISel active, we may be splitting blocks, so force creation
+  // of virtual registers for all non-dead arguments.
+  // Don't force virtual registers for byval arguments though, because
+  // fast-isel can't handle those in all cases.
+  if (EnableFastISel && !A->hasByValAttr())
+    return A->use_empty();
+
+  BasicBlock *Entry = A->getParent()->begin();
+  for (Value::use_iterator UI = A->use_begin(), E = A->use_end(); UI != E; ++UI)
+    if (cast<Instruction>(*UI)->getParent() != Entry || isa<SwitchInst>(*UI))
+      return false;  // Use not in entry block.
+  return true;
+}
+
+FunctionLoweringInfo::FunctionLoweringInfo(TargetLowering &tli)
+  : TLI(tli) {
+}
+
+void FunctionLoweringInfo::set(Function &fn, MachineFunction &mf,
+                               bool EnableFastISel) {
+  Fn = &fn;
+  MF = &mf;
+  RegInfo = &MF->getRegInfo();
+
+  // Create a vreg for each argument register that is not dead and is used
+  // outside of the entry block for the function.
+  for (Function::arg_iterator AI = Fn->arg_begin(), E = Fn->arg_end();
+       AI != E; ++AI)
+    if (!isOnlyUsedInEntryBlock(AI, EnableFastISel))
+      InitializeRegForValue(AI);
+
+  // Initialize the mapping of values to registers.  This is only set up for
+  // instruction values that are used outside of the block that defines
+  // them.
+  Function::iterator BB = Fn->begin(), EB = Fn->end();
+  for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ++I)
+    if (AllocaInst *AI = dyn_cast<AllocaInst>(I))
+      if (ConstantInt *CUI = dyn_cast<ConstantInt>(AI->getArraySize())) {
+        const Type *Ty = AI->getAllocatedType();
+        uint64_t TySize = TLI.getTargetData()->getTypeAllocSize(Ty);
+        unsigned Align =
+          std::max((unsigned)TLI.getTargetData()->getPrefTypeAlignment(Ty),
+                   AI->getAlignment());
+
+        TySize *= CUI->getZExtValue();   // Get total allocated size.
+        if (TySize == 0) TySize = 1; // Don't create zero-sized stack objects.
+        StaticAllocaMap[AI] =
+          MF->getFrameInfo()->CreateStackObject(TySize, Align, false);
+      }
+
+  for (; BB != EB; ++BB)
+    for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ++I)
+      if (!I->use_empty() && isUsedOutsideOfDefiningBlock(I))
+        if (!isa<AllocaInst>(I) ||
+            !StaticAllocaMap.count(cast<AllocaInst>(I)))
+          InitializeRegForValue(I);
+
+  // Create an initial MachineBasicBlock for each LLVM BasicBlock in F.  This
+  // also creates the initial PHI MachineInstrs, though none of the input
+  // operands are populated.
+  for (BB = Fn->begin(), EB = Fn->end(); BB != EB; ++BB) {
+    MachineBasicBlock *MBB = mf.CreateMachineBasicBlock(BB);
+    MBBMap[BB] = MBB;
+    MF->push_back(MBB);
+
+    // Transfer the address-taken flag. This is necessary because there could
+    // be multiple MachineBasicBlocks corresponding to one BasicBlock, and only
+    // the first one should be marked.
+    if (BB->hasAddressTaken())
+      MBB->setHasAddressTaken();
+
+    // Create Machine PHI nodes for LLVM PHI nodes, lowering them as
+    // appropriate.
+    PHINode *PN;
+    DebugLoc DL;
+    for (BasicBlock::iterator
+           I = BB->begin(), E = BB->end(); I != E; ++I) {
+
+      PN = dyn_cast<PHINode>(I);
+      if (!PN || PN->use_empty()) continue;
+
+      unsigned PHIReg = ValueMap[PN];
+      assert(PHIReg && "PHI node does not have an assigned virtual register!");
+
+      SmallVector<EVT, 4> ValueVTs;
+      ComputeValueVTs(TLI, PN->getType(), ValueVTs);
+      for (unsigned vti = 0, vte = ValueVTs.size(); vti != vte; ++vti) {
+        EVT VT = ValueVTs[vti];
+        unsigned NumRegisters = TLI.getNumRegisters(Fn->getContext(), VT);
+        const TargetInstrInfo *TII = MF->getTarget().getInstrInfo();
+        for (unsigned i = 0; i != NumRegisters; ++i)
+          BuildMI(MBB, DL, TII->get(TargetInstrInfo::PHI), PHIReg + i);
+        PHIReg += NumRegisters;
+      }
+    }
+  }
+}
+
+/// clear - Clear out all the function-specific state. This returns this
+/// FunctionLoweringInfo to an empty state, ready to be used for a
+/// different function.
+void FunctionLoweringInfo::clear() {
+  MBBMap.clear();
+  ValueMap.clear();
+  StaticAllocaMap.clear();
+#ifndef NDEBUG
+  CatchInfoLost.clear();
+  CatchInfoFound.clear();
+#endif
+  LiveOutRegInfo.clear();
+}
+
+unsigned FunctionLoweringInfo::MakeReg(EVT VT) {
+  return RegInfo->createVirtualRegister(TLI.getRegClassFor(VT));
+}
+
+/// CreateRegForValue - Allocate the appropriate number of virtual registers of
+/// the correctly promoted or expanded types.  Assign these registers
+/// consecutive vreg numbers and return the first assigned number.
+///
+/// In the case that the given value has struct or array type, this function
+/// will assign registers for each member or element.
+///
+unsigned FunctionLoweringInfo::CreateRegForValue(const Value *V) {
+  SmallVector<EVT, 4> ValueVTs;
+  ComputeValueVTs(TLI, V->getType(), ValueVTs);
+
+  unsigned FirstReg = 0;
+  for (unsigned Value = 0, e = ValueVTs.size(); Value != e; ++Value) {
+    EVT ValueVT = ValueVTs[Value];
+    EVT RegisterVT = TLI.getRegisterType(V->getContext(), ValueVT);
+
+    unsigned NumRegs = TLI.getNumRegisters(V->getContext(), ValueVT);
+    for (unsigned i = 0; i != NumRegs; ++i) {
+      unsigned R = MakeReg(RegisterVT);
+      if (!FirstReg) FirstReg = R;
+    }
+  }
+  return FirstReg;
+}
+
+/// ExtractTypeInfo - Returns the type info, possibly bitcast, encoded in V.
+GlobalVariable *llvm::ExtractTypeInfo(Value *V) {
+  V = V->stripPointerCasts();
+  GlobalVariable *GV = dyn_cast<GlobalVariable>(V);
+  assert ((GV || isa<ConstantPointerNull>(V)) &&
+          "TypeInfo must be a global variable or NULL");
+  return GV;
+}
+
+/// AddCatchInfo - Extract the personality and type infos from an eh.selector
+/// call, and add them to the specified machine basic block.
+void llvm::AddCatchInfo(CallInst &I, MachineModuleInfo *MMI,
+                        MachineBasicBlock *MBB) {
+  // Inform the MachineModuleInfo of the personality for this landing pad.
+  ConstantExpr *CE = cast<ConstantExpr>(I.getOperand(2));
+  assert(CE->getOpcode() == Instruction::BitCast &&
+         isa<Function>(CE->getOperand(0)) &&
+         "Personality should be a function");
+  MMI->addPersonality(MBB, cast<Function>(CE->getOperand(0)));
+
+  // Gather all the type infos for this landing pad and pass them along to
+  // MachineModuleInfo.
+  std::vector<GlobalVariable *> TyInfo;
+  unsigned N = I.getNumOperands();
+
+  for (unsigned i = N - 1; i > 2; --i) {
+    if (ConstantInt *CI = dyn_cast<ConstantInt>(I.getOperand(i))) {
+      unsigned FilterLength = CI->getZExtValue();
+      unsigned FirstCatch = i + FilterLength + !FilterLength;
+      assert (FirstCatch <= N && "Invalid filter length");
+
+      if (FirstCatch < N) {
+        TyInfo.reserve(N - FirstCatch);
+        for (unsigned j = FirstCatch; j < N; ++j)
+          TyInfo.push_back(ExtractTypeInfo(I.getOperand(j)));
+        MMI->addCatchTypeInfo(MBB, TyInfo);
+        TyInfo.clear();
+      }
+
+      if (!FilterLength) {
+        // Cleanup.
+        MMI->addCleanup(MBB);
+      } else {
+        // Filter.
+        TyInfo.reserve(FilterLength - 1);
+        for (unsigned j = i + 1; j < FirstCatch; ++j)
+          TyInfo.push_back(ExtractTypeInfo(I.getOperand(j)));
+        MMI->addFilterTypeInfo(MBB, TyInfo);
+        TyInfo.clear();
+      }
+
+      N = i;
+    }
+  }
+
+  if (N > 3) {
+    TyInfo.reserve(N - 3);
+    for (unsigned j = 3; j < N; ++j)
+      TyInfo.push_back(ExtractTypeInfo(I.getOperand(j)));
+    MMI->addCatchTypeInfo(MBB, TyInfo);
+  }
+}
+
+void llvm::CopyCatchInfo(BasicBlock *SrcBB, BasicBlock *DestBB,
+                         MachineModuleInfo *MMI, FunctionLoweringInfo &FLI) {
+  for (BasicBlock::iterator I = SrcBB->begin(), E = --SrcBB->end(); I != E; ++I)
+    if (EHSelectorInst *EHSel = dyn_cast<EHSelectorInst>(I)) {
+      // Apply the catch info to DestBB.
+      AddCatchInfo(*EHSel, MMI, FLI.MBBMap[DestBB]);
+#ifndef NDEBUG
+      if (!FLI.MBBMap[SrcBB]->isLandingPad())
+        FLI.CatchInfoFound.insert(EHSel);
+#endif
+    }
+}
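The new FunctionLoweringInfo file pulls the per-function bookkeeping (vregs
for cross-block values, static alloca frame indices, the BasicBlock to
MachineBasicBlock map, PHI stubs) out of the selector proper. A hedged
sketch of the intended lifecycle; lowerOneFunction is a hypothetical driver
standing in for the real caller in SelectionDAGISel:

    #include "FunctionLoweringInfo.h"
    using namespace llvm;

    // Illustrative only: one FunctionLoweringInfo object is reused across
    // functions via set()/clear().
    static void lowerOneFunction(FunctionLoweringInfo &FLI,
                                 Function &F, MachineFunction &MF,
                                 bool EnableFastISel) {
      FLI.set(F, MF, EnableFastISel); // vregs, allocas, MBBs, PHI stubs
      // ... select instructions, consulting FLI.ValueMap / FLI.MBBMap ...
      FLI.clear();                    // ready for the next function
    }
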
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FunctionLoweringInfo.h b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FunctionLoweringInfo.h
new file mode 100644
index 0000000..d851e64
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FunctionLoweringInfo.h
@@ -0,0 +1,151 @@
+//===-- FunctionLoweringInfo.h - Lower functions from LLVM IR to CodeGen --===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This implements routines for translating functions from LLVM IR into
+// Machine IR.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef FUNCTIONLOWERINGINFO_H
+#define FUNCTIONLOWERINGINFO_H
+
+#include "llvm/ADT/APInt.h"
+#include "llvm/ADT/DenseMap.h"
+#ifndef NDEBUG
+#include "llvm/ADT/SmallSet.h"
+#endif
+#include "llvm/CodeGen/ValueTypes.h"
+#include <vector>
+
+namespace llvm {
+
+class AllocaInst;
+class BasicBlock;
+class CallInst;
+class Function;
+class GlobalVariable;
+class Instruction;
+class MachineBasicBlock;
+class MachineFunction;
+class MachineModuleInfo;
+class MachineRegisterInfo;
+class TargetLowering;
+class Value;
+
+//===--------------------------------------------------------------------===//
+/// FunctionLoweringInfo - This contains information that is global to a
+/// function that is used when lowering a region of the function.
+///
+class FunctionLoweringInfo {
+public:
+  TargetLowering &TLI;
+  Function *Fn;
+  MachineFunction *MF;
+  MachineRegisterInfo *RegInfo;
+
+  /// CanLowerReturn - true iff the function's return value can be lowered to
+  /// registers.
+  bool CanLowerReturn;
+
+  /// DemoteRegister - if CanLowerReturn is false, DemoteRegister is a vreg
+  /// allocated to hold a pointer to the hidden sret parameter.
+  unsigned DemoteRegister;
+
+  explicit FunctionLoweringInfo(TargetLowering &TLI);
+
+  /// set - Initialize this FunctionLoweringInfo with the given Function
+  /// and its associated MachineFunction.
+  ///
+  void set(Function &Fn, MachineFunction &MF, bool EnableFastISel);
+
+  /// MBBMap - A mapping from LLVM basic blocks to their machine code entry.
+  DenseMap<const BasicBlock*, MachineBasicBlock *> MBBMap;
+
+  /// ValueMap - Since we emit code for the function a basic block at a time,
+  /// we must remember which virtual registers hold the values for
+  /// cross-basic-block values.
+  DenseMap<const Value*, unsigned> ValueMap;
+
+  /// StaticAllocaMap - Keep track of frame indices for fixed sized allocas in
+  /// the entry block.  This allows the allocas to be efficiently referenced
+  /// anywhere in the function.
+  DenseMap<const AllocaInst*, int> StaticAllocaMap;
+
+#ifndef NDEBUG
+  SmallSet<Instruction*, 8> CatchInfoLost;
+  SmallSet<Instruction*, 8> CatchInfoFound;
+#endif
+
+  unsigned MakeReg(EVT VT);
+  
+  /// isExportedInst - Return true if the specified value is an instruction
+  /// exported from its block.
+  bool isExportedInst(const Value *V) {
+    return ValueMap.count(V);
+  }
+
+  unsigned CreateRegForValue(const Value *V);
+  
+  unsigned InitializeRegForValue(const Value *V) {
+    unsigned &R = ValueMap[V];
+    assert(R == 0 && "Already initialized this value register!");
+    return R = CreateRegForValue(V);
+  }
+  
+  struct LiveOutInfo {
+    unsigned NumSignBits;
+    APInt KnownOne, KnownZero;
+    LiveOutInfo() : NumSignBits(0), KnownOne(1, 0), KnownZero(1, 0) {}
+  };
+  
+  /// LiveOutRegInfo - Information about live out vregs, indexed by their
+  /// register number offset by 'FirstVirtualRegister'.
+  std::vector<LiveOutInfo> LiveOutRegInfo;
+
+  /// clear - Clear out all the function-specific state. This returns this
+  /// FunctionLoweringInfo to an empty state, ready to be used for a
+  /// different function.
+  void clear();
+};
+
+/// ComputeLinearIndex - Given an LLVM IR aggregate type and a sequence
+/// of insertvalue or extractvalue indices that identify a member, return
+/// the linearized index of the start of the member.
+///
+unsigned ComputeLinearIndex(const TargetLowering &TLI, const Type *Ty,
+                            const unsigned *Indices,
+                            const unsigned *IndicesEnd,
+                            unsigned CurIndex = 0);
+
+/// ComputeValueVTs - Given an LLVM IR type, compute a sequence of
+/// EVTs that represent all the individual underlying
+/// non-aggregate types that comprise it.
+///
+/// If Offsets is non-null, it points to a vector to be filled in
+/// with the in-memory offsets of each of the individual values.
+///
+void ComputeValueVTs(const TargetLowering &TLI, const Type *Ty,
+                     SmallVectorImpl<EVT> &ValueVTs,
+                     SmallVectorImpl<uint64_t> *Offsets = 0,
+                     uint64_t StartingOffset = 0);
+
+/// ExtractTypeInfo - Returns the type info, possibly bitcast, encoded in V.
+GlobalVariable *ExtractTypeInfo(Value *V);
+
+/// AddCatchInfo - Extract the personality and type infos from an eh.selector
+/// call, and add them to the specified machine basic block.
+void AddCatchInfo(CallInst &I, MachineModuleInfo *MMI, MachineBasicBlock *MBB);
+
+/// CopyCatchInfo - Copy catch information from SrcBB to DestBB.
+void CopyCatchInfo(BasicBlock *SrcBB, BasicBlock *DestBB,
+                   MachineModuleInfo *MMI, FunctionLoweringInfo &FLI);
+
+} // end namespace llvm
+
+#endif
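A worked example may help pin down what the two helpers declared above
compute. Assuming a typical data layout where i32 and float are both four
bytes, for the LLVM IR aggregate { i32, [2 x float] } (TLI and STy are
assumed to be in scope):

    #include "FunctionLoweringInfo.h"
    #include "llvm/ADT/SmallVector.h"
    using namespace llvm;

    void example(const TargetLowering &TLI, const Type *STy) {
      SmallVector<EVT, 4> ValueVTs;
      SmallVector<uint64_t, 4> Offsets;
      ComputeValueVTs(TLI, STy, ValueVTs, &Offsets);
      // ValueVTs = { i32, f32, f32 },  Offsets = { 0, 4, 8 }

      unsigned Idx[] = { 1, 1 };  // second element of the array member
      unsigned Linear = ComputeLinearIndex(TLI, STy, Idx, Idx + 2);
      // Linear == 2: the flattened position of that scalar.
      (void)Linear;
    }
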
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp
new file mode 100644
index 0000000..669d414
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp
@@ -0,0 +1,702 @@
+//==--- InstrEmitter.cpp - Emit MachineInstrs for the SelectionDAG class ---==//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This implements the Emit routines for the SelectionDAG class, which creates
+// MachineInstrs based on the decisions of the SelectionDAG instruction
+// selection.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "instr-emitter"
+#include "InstrEmitter.h"
+#include "llvm/CodeGen/MachineConstantPool.h"
+#include "llvm/CodeGen/MachineFunction.h"
+#include "llvm/CodeGen/MachineInstrBuilder.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/Target/TargetData.h"
+#include "llvm/Target/TargetMachine.h"
+#include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/Target/TargetLowering.h"
+#include "llvm/ADT/Statistic.h"
+#include "llvm/Support/CommandLine.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/MathExtras.h"
+using namespace llvm;
+
+/// CountResults - The results of target nodes have register or immediate
+/// operands first, then an optional chain, and optional flag operands (which do
+/// not go into the resulting MachineInstr).
+unsigned InstrEmitter::CountResults(SDNode *Node) {
+  unsigned N = Node->getNumValues();
+  while (N && Node->getValueType(N - 1) == MVT::Flag)
+    --N;
+  if (N && Node->getValueType(N - 1) == MVT::Other)
+    --N;    // Skip over chain result.
+  return N;
+}
+
+/// CountOperands - The inputs to target nodes have any actual inputs first,
+/// followed by an optional chain operand, then an optional flag operand.
+/// Compute the number of actual operands that will go into the resulting
+/// MachineInstr.
+unsigned InstrEmitter::CountOperands(SDNode *Node) {
+  unsigned N = Node->getNumOperands();
+  while (N && Node->getOperand(N - 1).getValueType() == MVT::Flag)
+    --N;
+  if (N && Node->getOperand(N - 1).getValueType() == MVT::Other)
+    --N; // Ignore chain if it exists.
+  return N;
+}
+
+/// EmitCopyFromReg - Generate machine code for a CopyFromReg node or an
+/// implicit physical register output.
+void InstrEmitter::
+EmitCopyFromReg(SDNode *Node, unsigned ResNo, bool IsClone, bool IsCloned,
+                unsigned SrcReg, DenseMap<SDValue, unsigned> &VRBaseMap) {
+  unsigned VRBase = 0;
+  if (TargetRegisterInfo::isVirtualRegister(SrcReg)) {
+    // Just use the input register directly!
+    SDValue Op(Node, ResNo);
+    if (IsClone)
+      VRBaseMap.erase(Op);
+    bool isNew = VRBaseMap.insert(std::make_pair(Op, SrcReg)).second;
+    isNew = isNew; // Silence compiler warning.
+    assert(isNew && "Node emitted out of order - early");
+    return;
+  }
+
+  // If the node is only used by a CopyToReg and the dest reg is a vreg, use
+  // the CopyToReg'd destination register instead of creating a new vreg.
+  bool MatchReg = true;
+  const TargetRegisterClass *UseRC = NULL;
+  if (!IsClone && !IsCloned)
+    for (SDNode::use_iterator UI = Node->use_begin(), E = Node->use_end();
+         UI != E; ++UI) {
+      SDNode *User = *UI;
+      bool Match = true;
+      if (User->getOpcode() == ISD::CopyToReg && 
+          User->getOperand(2).getNode() == Node &&
+          User->getOperand(2).getResNo() == ResNo) {
+        unsigned DestReg = cast<RegisterSDNode>(User->getOperand(1))->getReg();
+        if (TargetRegisterInfo::isVirtualRegister(DestReg)) {
+          VRBase = DestReg;
+          Match = false;
+        } else if (DestReg != SrcReg)
+          Match = false;
+      } else {
+        for (unsigned i = 0, e = User->getNumOperands(); i != e; ++i) {
+          SDValue Op = User->getOperand(i);
+          if (Op.getNode() != Node || Op.getResNo() != ResNo)
+            continue;
+          EVT VT = Node->getValueType(Op.getResNo());
+          if (VT == MVT::Other || VT == MVT::Flag)
+            continue;
+          Match = false;
+          if (User->isMachineOpcode()) {
+            const TargetInstrDesc &II = TII->get(User->getMachineOpcode());
+            const TargetRegisterClass *RC = 0;
+            if (i+II.getNumDefs() < II.getNumOperands())
+              RC = II.OpInfo[i+II.getNumDefs()].getRegClass(TRI);
+            if (!UseRC)
+              UseRC = RC;
+            else if (RC) {
+              const TargetRegisterClass *ComRC = getCommonSubClass(UseRC, RC);
+              // If multiple uses expect disjoint register classes, we emit
+              // copies in AddRegisterOperand.
+              if (ComRC)
+                UseRC = ComRC;
+            }
+          }
+        }
+      }
+      MatchReg &= Match;
+      if (VRBase)
+        break;
+    }
+
+  EVT VT = Node->getValueType(ResNo);
+  const TargetRegisterClass *SrcRC = 0, *DstRC = 0;
+  SrcRC = TRI->getPhysicalRegisterRegClass(SrcReg, VT);
+  
+  // Figure out the register class to create for the destreg.
+  if (VRBase) {
+    DstRC = MRI->getRegClass(VRBase);
+  } else if (UseRC) {
+    assert(UseRC->hasType(VT) && "Incompatible phys register def and uses!");
+    DstRC = UseRC;
+  } else {
+    DstRC = TLI->getRegClassFor(VT);
+  }
+    
+  // If all uses are reading from the src physical register and copying the
+  // register is either impossible or very expensive, then don't create a copy.
+  if (MatchReg && SrcRC->getCopyCost() < 0) {
+    VRBase = SrcReg;
+  } else {
+    // Create the reg, emit the copy.
+    VRBase = MRI->createVirtualRegister(DstRC);
+    bool Emitted = TII->copyRegToReg(*MBB, InsertPos, VRBase, SrcReg,
+                                     DstRC, SrcRC);
+
+    assert(Emitted && "Unable to issue a copy instruction!\n");
+    (void) Emitted;
+  }
+
+  SDValue Op(Node, ResNo);
+  if (IsClone)
+    VRBaseMap.erase(Op);
+  bool isNew = VRBaseMap.insert(std::make_pair(Op, VRBase)).second;
+  isNew = isNew; // Silence compiler warning.
+  assert(isNew && "Node emitted out of order - early");
+}
+
+/// getDstOfCopyToRegUse - If the only use of the specified result number of
+/// node is a CopyToReg, return its destination register. Return 0 otherwise.
+unsigned InstrEmitter::getDstOfOnlyCopyToRegUse(SDNode *Node,
+                                                unsigned ResNo) const {
+  if (!Node->hasOneUse())
+    return 0;
+
+  SDNode *User = *Node->use_begin();
+  if (User->getOpcode() == ISD::CopyToReg && 
+      User->getOperand(2).getNode() == Node &&
+      User->getOperand(2).getResNo() == ResNo) {
+    unsigned Reg = cast<RegisterSDNode>(User->getOperand(1))->getReg();
+    if (TargetRegisterInfo::isVirtualRegister(Reg))
+      return Reg;
+  }
+  return 0;
+}
+
+void InstrEmitter::CreateVirtualRegisters(SDNode *Node, MachineInstr *MI,
+                                       const TargetInstrDesc &II,
+                                       bool IsClone, bool IsCloned,
+                                       DenseMap<SDValue, unsigned> &VRBaseMap) {
+  assert(Node->getMachineOpcode() != TargetInstrInfo::IMPLICIT_DEF &&
+         "IMPLICIT_DEF should have been handled as a special case elsewhere!");
+
+  for (unsigned i = 0; i < II.getNumDefs(); ++i) {
+    // If the specific node value is only used by a CopyToReg and the dest reg
+    // is a vreg in the same register class, use the CopyToReg'd destination
+    // register instead of creating a new vreg.
+    unsigned VRBase = 0;
+    const TargetRegisterClass *RC = II.OpInfo[i].getRegClass(TRI);
+    if (II.OpInfo[i].isOptionalDef()) {
+      // Optional def must be a physical register.
+      unsigned NumResults = CountResults(Node);
+      VRBase = cast<RegisterSDNode>(Node->getOperand(i-NumResults))->getReg();
+      assert(TargetRegisterInfo::isPhysicalRegister(VRBase));
+      MI->addOperand(MachineOperand::CreateReg(VRBase, true));
+    }
+
+    if (!VRBase && !IsClone && !IsCloned)
+      for (SDNode::use_iterator UI = Node->use_begin(), E = Node->use_end();
+           UI != E; ++UI) {
+        SDNode *User = *UI;
+        if (User->getOpcode() == ISD::CopyToReg && 
+            User->getOperand(2).getNode() == Node &&
+            User->getOperand(2).getResNo() == i) {
+          unsigned Reg = cast<RegisterSDNode>(User->getOperand(1))->getReg();
+          if (TargetRegisterInfo::isVirtualRegister(Reg)) {
+            const TargetRegisterClass *RegRC = MRI->getRegClass(Reg);
+            if (RegRC == RC) {
+              VRBase = Reg;
+              MI->addOperand(MachineOperand::CreateReg(Reg, true));
+              break;
+            }
+          }
+        }
+      }
+
+    // Create the result registers for this node and add the result regs to
+    // the machine instruction.
+    if (VRBase == 0) {
+      assert(RC && "Isn't a register operand!");
+      VRBase = MRI->createVirtualRegister(RC);
+      MI->addOperand(MachineOperand::CreateReg(VRBase, true));
+    }
+
+    SDValue Op(Node, i);
+    if (IsClone)
+      VRBaseMap.erase(Op);
+    bool isNew = VRBaseMap.insert(std::make_pair(Op, VRBase)).second;
+    isNew = isNew; // Silence compiler warning.
+    assert(isNew && "Node emitted out of order - early");
+  }
+}
+
+/// getVR - Return the virtual register corresponding to the specified result
+/// of the specified node.
+unsigned InstrEmitter::getVR(SDValue Op,
+                             DenseMap<SDValue, unsigned> &VRBaseMap) {
+  if (Op.isMachineOpcode() &&
+      Op.getMachineOpcode() == TargetInstrInfo::IMPLICIT_DEF) {
+    // Add an IMPLICIT_DEF instruction before every use.
+    unsigned VReg = getDstOfOnlyCopyToRegUse(Op.getNode(), Op.getResNo());
+    // IMPLICIT_DEF can produce any type of result so its TargetInstrDesc
+    // does not include operand register class info.
+    if (!VReg) {
+      const TargetRegisterClass *RC = TLI->getRegClassFor(Op.getValueType());
+      VReg = MRI->createVirtualRegister(RC);
+    }
+    BuildMI(MBB, Op.getDebugLoc(),
+            TII->get(TargetInstrInfo::IMPLICIT_DEF), VReg);
+    return VReg;
+  }
+
+  DenseMap<SDValue, unsigned>::iterator I = VRBaseMap.find(Op);
+  assert(I != VRBaseMap.end() && "Node emitted out of order - late");
+  return I->second;
+}
+
+
+/// AddRegisterOperand - Add the specified register as an operand to the
+/// specified machine instr. Insert register copies if the register is
+/// not in the required register class.
+void
+InstrEmitter::AddRegisterOperand(MachineInstr *MI, SDValue Op,
+                                 unsigned IIOpNum,
+                                 const TargetInstrDesc *II,
+                                 DenseMap<SDValue, unsigned> &VRBaseMap) {
+  assert(Op.getValueType() != MVT::Other &&
+         Op.getValueType() != MVT::Flag &&
+         "Chain and flag operands should occur at end of operand list!");
+  // Get/emit the operand.
+  unsigned VReg = getVR(Op, VRBaseMap);
+  assert(TargetRegisterInfo::isVirtualRegister(VReg) && "Not a vreg?");
+
+  const TargetInstrDesc &TID = MI->getDesc();
+  bool isOptDef = IIOpNum < TID.getNumOperands() &&
+    TID.OpInfo[IIOpNum].isOptionalDef();
+
+  // If the instruction requires a register in a different class, create
+  // a new virtual register and copy the value into it.
+  if (II) {
+    const TargetRegisterClass *SrcRC = MRI->getRegClass(VReg);
+    const TargetRegisterClass *DstRC = 0;
+    if (IIOpNum < II->getNumOperands())
+      DstRC = II->OpInfo[IIOpNum].getRegClass(TRI);
+    assert((DstRC || (TID.isVariadic() && IIOpNum >= TID.getNumOperands())) &&
+           "Don't have operand info for this instruction!");
+    if (DstRC && SrcRC != DstRC && !SrcRC->hasSuperClass(DstRC)) {
+      unsigned NewVReg = MRI->createVirtualRegister(DstRC);
+      bool Emitted = TII->copyRegToReg(*MBB, InsertPos, NewVReg, VReg,
+                                       DstRC, SrcRC);
+      assert(Emitted && "Unable to issue a copy instruction!\n");
+      (void) Emitted;
+      VReg = NewVReg;
+    }
+  }
+
+  MI->addOperand(MachineOperand::CreateReg(VReg, isOptDef));
+}
+
+/// AddOperand - Add the specified operand to the specified machine instr.  II
+/// specifies the instruction information for the node, and IIOpNum is the
+/// operand number (in the II) that we are adding. IIOpNum and II are used for 
+/// assertions only.
+void InstrEmitter::AddOperand(MachineInstr *MI, SDValue Op,
+                              unsigned IIOpNum,
+                              const TargetInstrDesc *II,
+                              DenseMap<SDValue, unsigned> &VRBaseMap) {
+  if (Op.isMachineOpcode()) {
+    AddRegisterOperand(MI, Op, IIOpNum, II, VRBaseMap);
+  } else if (ConstantSDNode *C = dyn_cast<ConstantSDNode>(Op)) {
+    MI->addOperand(MachineOperand::CreateImm(C->getSExtValue()));
+  } else if (ConstantFPSDNode *F = dyn_cast<ConstantFPSDNode>(Op)) {
+    const ConstantFP *CFP = F->getConstantFPValue();
+    MI->addOperand(MachineOperand::CreateFPImm(CFP));
+  } else if (RegisterSDNode *R = dyn_cast<RegisterSDNode>(Op)) {
+    MI->addOperand(MachineOperand::CreateReg(R->getReg(), false));
+  } else if (GlobalAddressSDNode *TGA = dyn_cast<GlobalAddressSDNode>(Op)) {
+    MI->addOperand(MachineOperand::CreateGA(TGA->getGlobal(), TGA->getOffset(),
+                                            TGA->getTargetFlags()));
+  } else if (BasicBlockSDNode *BBNode = dyn_cast<BasicBlockSDNode>(Op)) {
+    MI->addOperand(MachineOperand::CreateMBB(BBNode->getBasicBlock()));
+  } else if (FrameIndexSDNode *FI = dyn_cast<FrameIndexSDNode>(Op)) {
+    MI->addOperand(MachineOperand::CreateFI(FI->getIndex()));
+  } else if (JumpTableSDNode *JT = dyn_cast<JumpTableSDNode>(Op)) {
+    MI->addOperand(MachineOperand::CreateJTI(JT->getIndex(),
+                                             JT->getTargetFlags()));
+  } else if (ConstantPoolSDNode *CP = dyn_cast<ConstantPoolSDNode>(Op)) {
+    int Offset = CP->getOffset();
+    unsigned Align = CP->getAlignment();
+    const Type *Type = CP->getType();
+    // MachineConstantPool wants an explicit alignment.
+    if (Align == 0) {
+      Align = TM->getTargetData()->getPrefTypeAlignment(Type);
+      if (Align == 0) {
+        // Alignment of vector types.  FIXME!
+        Align = TM->getTargetData()->getTypeAllocSize(Type);
+      }
+    }
+    
+    unsigned Idx;
+    MachineConstantPool *MCP = MF->getConstantPool();
+    if (CP->isMachineConstantPoolEntry())
+      Idx = MCP->getConstantPoolIndex(CP->getMachineCPVal(), Align);
+    else
+      Idx = MCP->getConstantPoolIndex(CP->getConstVal(), Align);
+    MI->addOperand(MachineOperand::CreateCPI(Idx, Offset,
+                                             CP->getTargetFlags()));
+  } else if (ExternalSymbolSDNode *ES = dyn_cast<ExternalSymbolSDNode>(Op)) {
+    MI->addOperand(MachineOperand::CreateES(ES->getSymbol(),
+                                            ES->getTargetFlags()));
+  } else if (BlockAddressSDNode *BA = dyn_cast<BlockAddressSDNode>(Op)) {
+    MI->addOperand(MachineOperand::CreateBA(BA->getBlockAddress(),
+                                            BA->getTargetFlags()));
+  } else {
+    assert(Op.getValueType() != MVT::Other &&
+           Op.getValueType() != MVT::Flag &&
+           "Chain and flag operands should occur at end of operand list!");
+    AddRegisterOperand(MI, Op, IIOpNum, II, VRBaseMap);
+  }
+}
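+
+// A condensed view of the mapping this type-switch implements (sketch):
+//
+//   ConstantSDNode       -> MachineOperand::CreateImm(C->getSExtValue())
+//   ConstantFPSDNode     -> CreateFPImm(the node's ConstantFP value)
+//   RegisterSDNode       -> CreateReg(reg, /*isDef=*/false)
+//   GlobalAddressSDNode  -> CreateGA(global, offset, target flags)
+//   BasicBlockSDNode     -> CreateMBB(machine basic block)
+//   FrameIndexSDNode     -> CreateFI(index)
+//   JumpTableSDNode      -> CreateJTI(index, target flags)
+//   ConstantPoolSDNode   -> CreateCPI(pool index, offset, target flags)
+//   ExternalSymbolSDNode -> CreateES(symbol, target flags)
+//   BlockAddressSDNode   -> CreateBA(block address, target flags)
+//   anything else        -> a register operand via AddRegisterOperand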
+
+/// getSuperRegisterRegClass - Returns the register class of a superreg A whose
+/// "SubIdx"'th sub-register class is the specified register class and whose
+/// type matches the specified type.
+static const TargetRegisterClass*
+getSuperRegisterRegClass(const TargetRegisterClass *TRC,
+                         unsigned SubIdx, EVT VT) {
+  // Pick the register class of the super-register for this type.
+  for (TargetRegisterInfo::regclass_iterator I = TRC->superregclasses_begin(),
+         E = TRC->superregclasses_end(); I != E; ++I)
+    if ((*I)->hasType(VT) && (*I)->getSubRegisterRegClass(SubIdx) == TRC)
+      return *I;
+  assert(false && "Couldn't find the register class");
+  return 0;
+}
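+
+// Hypothetical use (register-class names assumed for illustration): for an
+// INSERT_SUBREG of an i32 value into an i64 on an x86-64-like target, this
+// walks the 32-bit class's super-register classes looking for the one whose
+// sub-register class at SubIdx is that class and which can hold an i64:
+//
+//   const TargetRegisterClass *Sub = MRI->getRegClass(SubReg);  // e.g. GR32
+//   const TargetRegisterClass *Super =
+//     getSuperRegisterRegClass(Sub, SubIdx, MVT::i64);          // e.g. GR64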
+
+/// EmitSubregNode - Generate machine code for subreg nodes.
+///
+void InstrEmitter::EmitSubregNode(SDNode *Node, 
+                                  DenseMap<SDValue, unsigned> &VRBaseMap){
+  unsigned VRBase = 0;
+  unsigned Opc = Node->getMachineOpcode();
+  
+  // If the node is only used by a CopyToReg and the dest reg is a vreg, use
+  // the CopyToReg'd destination register instead of creating a new vreg.
+  for (SDNode::use_iterator UI = Node->use_begin(), E = Node->use_end();
+       UI != E; ++UI) {
+    SDNode *User = *UI;
+    if (User->getOpcode() == ISD::CopyToReg && 
+        User->getOperand(2).getNode() == Node) {
+      unsigned DestReg = cast<RegisterSDNode>(User->getOperand(1))->getReg();
+      if (TargetRegisterInfo::isVirtualRegister(DestReg)) {
+        VRBase = DestReg;
+        break;
+      }
+    }
+  }
+  
+  if (Opc == TargetInstrInfo::EXTRACT_SUBREG) {
+    unsigned SubIdx = cast<ConstantSDNode>(Node->getOperand(1))->getZExtValue();
+
+    // Create the extract_subreg machine instruction.
+    MachineInstr *MI = BuildMI(*MF, Node->getDebugLoc(),
+                               TII->get(TargetInstrInfo::EXTRACT_SUBREG));
+
+    // Figure out the register class to create for the destreg.
+    unsigned VReg = getVR(Node->getOperand(0), VRBaseMap);
+    const TargetRegisterClass *TRC = MRI->getRegClass(VReg);
+    const TargetRegisterClass *SRC = TRC->getSubRegisterRegClass(SubIdx);
+    assert(SRC && "Invalid subregister index in EXTRACT_SUBREG");
+
+    // Note that if we're going to directly use an existing register,
+    // it must be precisely the required class, and not a subclass
+    // thereof.
+    if (VRBase == 0 || SRC != MRI->getRegClass(VRBase)) {
+      // Create the reg
+      assert(SRC && "Couldn't find source register class");
+      VRBase = MRI->createVirtualRegister(SRC);
+    }
+
+    // Add def, source, and subreg index
+    MI->addOperand(MachineOperand::CreateReg(VRBase, true));
+    AddOperand(MI, Node->getOperand(0), 0, 0, VRBaseMap);
+    MI->addOperand(MachineOperand::CreateImm(SubIdx));
+    MBB->insert(InsertPos, MI);
+  } else if (Opc == TargetInstrInfo::INSERT_SUBREG ||
+             Opc == TargetInstrInfo::SUBREG_TO_REG) {
+    SDValue N0 = Node->getOperand(0);
+    SDValue N1 = Node->getOperand(1);
+    SDValue N2 = Node->getOperand(2);
+    unsigned SubReg = getVR(N1, VRBaseMap);
+    unsigned SubIdx = cast<ConstantSDNode>(N2)->getZExtValue();
+    const TargetRegisterClass *TRC = MRI->getRegClass(SubReg);
+    const TargetRegisterClass *SRC =
+      getSuperRegisterRegClass(TRC, SubIdx,
+                               Node->getValueType(0));
+
+    // Figure out the register class to create for the destreg.
+    // Note that if we're going to directly use an existing register,
+    // it must be precisely the required class, and not a subclass
+    // thereof.
+    if (VRBase == 0 || SRC != MRI->getRegClass(VRBase)) {
+      // Create the reg
+      assert(SRC && "Couldn't find source register class");
+      VRBase = MRI->createVirtualRegister(SRC);
+    }
+
+    // Create the insert_subreg or subreg_to_reg machine instruction.
+    MachineInstr *MI = BuildMI(*MF, Node->getDebugLoc(), TII->get(Opc));
+    MI->addOperand(MachineOperand::CreateReg(VRBase, true));
+    
+    // If creating a subreg_to_reg, then the first input operand
+    // is an implicit value immediate; otherwise it's a register.
+    if (Opc == TargetInstrInfo::SUBREG_TO_REG) {
+      const ConstantSDNode *SD = cast<ConstantSDNode>(N0);
+      MI->addOperand(MachineOperand::CreateImm(SD->getZExtValue()));
+    } else
+      AddOperand(MI, N0, 0, 0, VRBaseMap);
+    // Add the subregister being inserted.
+    AddOperand(MI, N1, 0, 0, VRBaseMap);
+    MI->addOperand(MachineOperand::CreateImm(SubIdx));
+    MBB->insert(InsertPos, MI);
+  } else
+    llvm_unreachable("Node is not insert_subreg, extract_subreg, or subreg_to_reg");
+     
+  SDValue Op(Node, 0);
+  bool isNew = VRBaseMap.insert(std::make_pair(Op, VRBase)).second;
+  isNew = isNew; // Silence compiler warning.
+  assert(isNew && "Node emitted out of order - early");
+}
+
+/// EmitCopyToRegClassNode - Generate machine code for COPY_TO_REGCLASS nodes.
+/// COPY_TO_REGCLASS is just a normal copy, except that the destination
+/// register is constrained to be in a particular register class.
+///
+void
+InstrEmitter::EmitCopyToRegClassNode(SDNode *Node,
+                                     DenseMap<SDValue, unsigned> &VRBaseMap) {
+  unsigned VReg = getVR(Node->getOperand(0), VRBaseMap);
+  const TargetRegisterClass *SrcRC = MRI->getRegClass(VReg);
+
+  unsigned DstRCIdx = cast<ConstantSDNode>(Node->getOperand(1))->getZExtValue();
+  const TargetRegisterClass *DstRC = TRI->getRegClass(DstRCIdx);
+
+  // Create the new VReg in the destination class and emit a copy.
+  unsigned NewVReg = MRI->createVirtualRegister(DstRC);
+  bool Emitted = TII->copyRegToReg(*MBB, InsertPos, NewVReg, VReg,
+                                   DstRC, SrcRC);
+  assert(Emitted &&
+         "Unable to issue a copy instruction for a COPY_TO_REGCLASS node!\n");
+  (void) Emitted;
+
+  SDValue Op(Node, 0);
+  bool isNew = VRBaseMap.insert(std::make_pair(Op, NewVReg)).second;
+  isNew = isNew; // Silence compiler warning.
+  assert(isNew && "Node emitted out of order - early");
+}
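+
+// The node consumed above has the shape (sketch):
+//
+//   t2: vt = COPY_TO_REGCLASS t1, TargetConstant:i32<RCId>
+//
+// where operand 0 is the value and operand 1 encodes the destination register
+// class as TRI->getRegClass(RCId); the node always becomes a plain copy into
+// a fresh vreg of that class, even when SrcRC == DstRC.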
+
+/// EmitNode - Generate machine code for a node and needed dependencies.
+///
+void InstrEmitter::EmitNode(SDNode *Node, bool IsClone, bool IsCloned,
+                            DenseMap<SDValue, unsigned> &VRBaseMap,
+                         DenseMap<MachineBasicBlock*, MachineBasicBlock*> *EM) {
+  // If this is a machine instruction, emit it directly.
+  if (Node->isMachineOpcode()) {
+    unsigned Opc = Node->getMachineOpcode();
+    
+    // Handle subreg insert/extract specially
+    if (Opc == TargetInstrInfo::EXTRACT_SUBREG || 
+        Opc == TargetInstrInfo::INSERT_SUBREG ||
+        Opc == TargetInstrInfo::SUBREG_TO_REG) {
+      EmitSubregNode(Node, VRBaseMap);
+      return;
+    }
+
+    // Handle COPY_TO_REGCLASS specially.
+    if (Opc == TargetInstrInfo::COPY_TO_REGCLASS) {
+      EmitCopyToRegClassNode(Node, VRBaseMap);
+      return;
+    }
+
+    if (Opc == TargetInstrInfo::IMPLICIT_DEF)
+      // We want a unique VR for each IMPLICIT_DEF use.
+      return;
+    
+    const TargetInstrDesc &II = TII->get(Opc);
+    unsigned NumResults = CountResults(Node);
+    unsigned NodeOperands = CountOperands(Node);
+    bool HasPhysRegOuts = (NumResults > II.getNumDefs()) &&
+                          II.getImplicitDefs() != 0;
+#ifndef NDEBUG
+    unsigned NumMIOperands = NodeOperands + NumResults;
+    assert((II.getNumOperands() == NumMIOperands ||
+            HasPhysRegOuts || II.isVariadic()) &&
+           "#operands for dag node doesn't match .td file!"); 
+#endif
+
+    // Create the new machine instruction.
+    MachineInstr *MI = BuildMI(*MF, Node->getDebugLoc(), II);
+    
+    // Add result register values for things that are defined by this
+    // instruction.
+    if (NumResults)
+      CreateVirtualRegisters(Node, MI, II, IsClone, IsCloned, VRBaseMap);
+    
+    // Emit all of the actual operands of this instruction, adding them to the
+    // instruction as appropriate.
+    bool HasOptPRefs = II.getNumDefs() > NumResults;
+    assert((!HasOptPRefs || !HasPhysRegOuts) &&
+           "Unable to cope with optional defs and phys regs defs!");
+    unsigned NumSkip = HasOptPRefs ? II.getNumDefs() - NumResults : 0;
+    for (unsigned i = NumSkip; i != NodeOperands; ++i)
+      AddOperand(MI, Node->getOperand(i), i-NumSkip+II.getNumDefs(), &II,
+                 VRBaseMap);
+
+    // Transfer all of the memory reference descriptions of this instruction.
+    MI->setMemRefs(cast<MachineSDNode>(Node)->memoperands_begin(),
+                   cast<MachineSDNode>(Node)->memoperands_end());
+
+    if (II.usesCustomInsertionHook()) {
+      // Insert this instruction into the basic block using a target-specific
+      // inserter, which may return a new basic block.
+      MBB = TLI->EmitInstrWithCustomInserter(MI, MBB, EM);
+      InsertPos = MBB->end();
+    } else {
+      MBB->insert(InsertPos, MI);
+    }
+
+    // Additional results must be physical register defs.
+    if (HasPhysRegOuts) {
+      for (unsigned i = II.getNumDefs(); i < NumResults; ++i) {
+        unsigned Reg = II.getImplicitDefs()[i - II.getNumDefs()];
+        if (Node->hasAnyUseOfValue(i))
+          EmitCopyFromReg(Node, i, IsClone, IsCloned, Reg, VRBaseMap);
+        // If there are no uses, mark the register as dead now, so that
+        // MachineLICM/Sink can see that it's dead. Don't do this if the
+        // node has a Flag value, for the benefit of targets still using
+        // Flag for values in physregs.
+        else if (Node->getValueType(Node->getNumValues()-1) != MVT::Flag)
+          MI->addRegisterDead(Reg, TRI);
+      }
+    }
+    return;
+  }
+
+  switch (Node->getOpcode()) {
+  default:
+#ifndef NDEBUG
+    Node->dump();
+#endif
+    llvm_unreachable("This target-independent node should have been selected!");
+    break;
+  case ISD::EntryToken:
+    llvm_unreachable("EntryToken should have been excluded from the schedule!");
+    break;
+  case ISD::MERGE_VALUES:
+  case ISD::TokenFactor: // fall thru
+    break;
+  case ISD::CopyToReg: {
+    unsigned SrcReg;
+    SDValue SrcVal = Node->getOperand(2);
+    if (RegisterSDNode *R = dyn_cast<RegisterSDNode>(SrcVal))
+      SrcReg = R->getReg();
+    else
+      SrcReg = getVR(SrcVal, VRBaseMap);
+      
+    unsigned DestReg = cast<RegisterSDNode>(Node->getOperand(1))->getReg();
+    if (SrcReg == DestReg) // Coalesced away the copy? Ignore.
+      break;
+      
+    const TargetRegisterClass *SrcTRC = 0, *DstTRC = 0;
+    // Get the register classes of the src/dst.
+    if (TargetRegisterInfo::isVirtualRegister(SrcReg))
+      SrcTRC = MRI->getRegClass(SrcReg);
+    else
+      SrcTRC = TRI->getPhysicalRegisterRegClass(SrcReg,SrcVal.getValueType());
+
+    if (TargetRegisterInfo::isVirtualRegister(DestReg))
+      DstTRC = MRI->getRegClass(DestReg);
+    else
+      DstTRC = TRI->getPhysicalRegisterRegClass(DestReg,
+                                            Node->getOperand(1).getValueType());
+
+    bool Emitted = TII->copyRegToReg(*MBB, InsertPos, DestReg, SrcReg,
+                                     DstTRC, SrcTRC);
+    assert(Emitted && "Unable to issue a copy instruction!\n");
+    (void) Emitted;
+    break;
+  }
+  case ISD::CopyFromReg: {
+    unsigned SrcReg = cast<RegisterSDNode>(Node->getOperand(1))->getReg();
+    EmitCopyFromReg(Node, 0, IsClone, IsCloned, SrcReg, VRBaseMap);
+    break;
+  }
+  case ISD::INLINEASM: {
+    unsigned NumOps = Node->getNumOperands();
+    if (Node->getOperand(NumOps-1).getValueType() == MVT::Flag)
+      --NumOps;  // Ignore the flag operand.
+      
+    // Create the inline asm machine instruction.
+    MachineInstr *MI = BuildMI(*MF, Node->getDebugLoc(),
+                               TII->get(TargetInstrInfo::INLINEASM));
+
+    // Add the asm string as an external symbol operand.
+    const char *AsmStr =
+      cast<ExternalSymbolSDNode>(Node->getOperand(1))->getSymbol();
+    MI->addOperand(MachineOperand::CreateES(AsmStr));
+      
+    // Add all of the operand registers to the instruction.
+    for (unsigned i = 2; i != NumOps;) {
+      unsigned Flags =
+        cast<ConstantSDNode>(Node->getOperand(i))->getZExtValue();
+      unsigned NumVals = InlineAsm::getNumOperandRegisters(Flags);
+        
+      MI->addOperand(MachineOperand::CreateImm(Flags));
+      ++i;  // Skip the ID value.
+        
+      switch (Flags & 7) {
+      default: llvm_unreachable("Bad flags!");
+      case 2:   // Def of register.
+        for (; NumVals; --NumVals, ++i) {
+          unsigned Reg = cast<RegisterSDNode>(Node->getOperand(i))->getReg();
+          MI->addOperand(MachineOperand::CreateReg(Reg, true));
+        }
+        break;
+      case 6:   // Def of earlyclobber register.
+        for (; NumVals; --NumVals, ++i) {
+          unsigned Reg = cast<RegisterSDNode>(Node->getOperand(i))->getReg();
+          MI->addOperand(MachineOperand::CreateReg(Reg, true, false, false, 
+                                                   false, false, true));
+        }
+        break;
+      case 1:  // Use of register.
+      case 3:  // Immediate.
+      case 4:  // Addressing mode.
+        // The addressing mode has been selected, just add all of the
+        // operands to the machine instruction.
+        for (; NumVals; --NumVals, ++i)
+          AddOperand(MI, Node->getOperand(i), 0, 0, VRBaseMap);
+        break;
+      }
+    }
+    MBB->insert(InsertPos, MI);
+    break;
+  }
+  }
+}
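+
+// The operand-flags word decoded above packs the operand kind into the low
+// three bits and the register count above them; a sketch of the decoding,
+// with the kind values as handled by the switch:
+//
+//   unsigned Kind    = Flags & 7; // 1=use, 2=def, 3=imm, 4=mem,
+//                                 // 6=earlyclobber def
+//   unsigned NumRegs = InlineAsm::getNumOperandRegisters(Flags);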
+
+/// InstrEmitter - Construct an InstrEmitter and set it to start inserting
+/// at the given position in the given block.
+InstrEmitter::InstrEmitter(MachineBasicBlock *mbb,
+                           MachineBasicBlock::iterator insertpos)
+  : MF(mbb->getParent()),
+    MRI(&MF->getRegInfo()),
+    TM(&MF->getTarget()),
+    TII(TM->getInstrInfo()),
+    TRI(TM->getRegisterInfo()),
+    TLI(TM->getTargetLowering()),
+    MBB(mbb), InsertPos(insertpos) {
+}
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.h b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.h
new file mode 100644
index 0000000..91817e4
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.h
@@ -0,0 +1,119 @@
+//===---- InstrEmitter.h - Emit MachineInstrs for the SelectionDAG class ---==//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This declares the Emit routines for the SelectionDAG class, which creates
+// MachineInstrs based on the decisions of the SelectionDAG instruction
+// selection.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef INSTREMITTER_H
+#define INSTREMITTER_H
+
+#include "llvm/CodeGen/SelectionDAG.h"
+#include "llvm/CodeGen/MachineBasicBlock.h"
+#include "llvm/ADT/DenseMap.h"
+
+namespace llvm {
+
+class TargetInstrDesc;
+
+class InstrEmitter {
+  MachineFunction *MF;
+  MachineRegisterInfo *MRI;
+  const TargetMachine *TM;
+  const TargetInstrInfo *TII;
+  const TargetRegisterInfo *TRI;
+  const TargetLowering *TLI;
+
+  MachineBasicBlock *MBB;
+  MachineBasicBlock::iterator InsertPos;
+
+  /// EmitCopyFromReg - Generate machine code for a CopyFromReg node or an
+  /// implicit physical register output.
+  void EmitCopyFromReg(SDNode *Node, unsigned ResNo,
+                       bool IsClone, bool IsCloned,
+                       unsigned SrcReg,
+                       DenseMap<SDValue, unsigned> &VRBaseMap);
+
+  /// getDstOfOnlyCopyToRegUse - If the only use of the specified result number
+  /// of the node is a CopyToReg, return its destination register. Return 0
+  /// otherwise.
+  unsigned getDstOfOnlyCopyToRegUse(SDNode *Node,
+                                    unsigned ResNo) const;
+
+  void CreateVirtualRegisters(SDNode *Node, MachineInstr *MI,
+                              const TargetInstrDesc &II,
+                              bool IsClone, bool IsCloned,
+                              DenseMap<SDValue, unsigned> &VRBaseMap);
+
+  /// getVR - Return the virtual register corresponding to the specified result
+  /// of the specified node.
+  unsigned getVR(SDValue Op,
+                 DenseMap<SDValue, unsigned> &VRBaseMap);
+
+  /// AddRegisterOperand - Add the specified register as an operand to the
+  /// specified machine instr. Insert register copies if the register is
+  /// not in the required register class.
+  void AddRegisterOperand(MachineInstr *MI, SDValue Op,
+                          unsigned IIOpNum,
+                          const TargetInstrDesc *II,
+                          DenseMap<SDValue, unsigned> &VRBaseMap);
+
+  /// AddOperand - Add the specified operand to the specified machine instr.  II
+  /// specifies the instruction information for the node, and IIOpNum is the
+  /// operand number (in the II) that we are adding. IIOpNum and II are used
+  /// both for assertions and, when the operand is a register, to constrain it
+  /// to the required register class.
+  void AddOperand(MachineInstr *MI, SDValue Op,
+                  unsigned IIOpNum,
+                  const TargetInstrDesc *II,
+                  DenseMap<SDValue, unsigned> &VRBaseMap);
+
+  /// EmitSubregNode - Generate machine code for subreg nodes.
+  ///
+  void EmitSubregNode(SDNode *Node, DenseMap<SDValue, unsigned> &VRBaseMap);
+
+  /// EmitCopyToRegClassNode - Generate machine code for COPY_TO_REGCLASS nodes.
+  /// COPY_TO_REGCLASS is just a normal copy, except that the destination
+  /// register is constrained to be in a particular register class.
+  ///
+  void EmitCopyToRegClassNode(SDNode *Node,
+                              DenseMap<SDValue, unsigned> &VRBaseMap);
+
+public:
+  /// CountResults - The results of target nodes have register or immediate
+  /// operands first, then an optional chain, and optional flag operands
+  /// (which do not go into the machine instrs.)
+  static unsigned CountResults(SDNode *Node);
+
+  /// CountOperands - The inputs to target nodes have any actual inputs first,
+  /// followed by an optional chain operand, then flag operands.  Compute
+  /// the number of actual operands that will go into the resulting
+  /// MachineInstr.
+  static unsigned CountOperands(SDNode *Node);
+
+  /// EmitNode - Generate machine code for a node and needed dependencies.
+  ///
+  void EmitNode(SDNode *Node, bool IsClone, bool IsCloned,
+                DenseMap<SDValue, unsigned> &VRBaseMap,
+                DenseMap<MachineBasicBlock*, MachineBasicBlock*> *EM);
+
+  /// getBlock - Return the current basic block.
+  MachineBasicBlock *getBlock() { return MBB; }
+
+  /// getInsertPos - Return the current insertion position.
+  MachineBasicBlock::iterator getInsertPos() { return InsertPos; }
+
+  /// InstrEmitter - Construct an InstrEmitter and set it to start inserting
+  /// at the given position in the given block.
+  InstrEmitter(MachineBasicBlock *mbb, MachineBasicBlock::iterator insertpos);
+};
+
+}
+
+#endif
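
A typical driver for this class appears in ScheduleDAGSDNodes::EmitSchedule
further down in this patch; a minimal sketch of the pattern, using the names
declared above:

    InstrEmitter Emitter(BB, InsertPos);
    DenseMap<SDValue, unsigned> VRBaseMap;
    Emitter.EmitNode(Node, /*IsClone=*/false, /*IsCloned=*/false,
                     VRBaseMap, EM);
    BB = Emitter.getBlock();           // custom inserters may switch blocks
    InsertPos = Emitter.getInsertPos();
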
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
index fc01b07..273dbf0 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
@@ -32,7 +32,6 @@
 #include "llvm/GlobalVariable.h"
 #include "llvm/LLVMContext.h"
 #include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/MathExtras.h"
 #include "llvm/Support/raw_ostream.h"
@@ -55,7 +54,7 @@ using namespace llvm;
 /// will attempt merge setcc and brc instructions into brcc's.
 ///
 namespace {
-class VISIBILITY_HIDDEN SelectionDAGLegalize {
+class SelectionDAGLegalize {
   TargetLowering &TLI;
   SelectionDAG &DAG;
   CodeGenOpt::Level OptLevel;
@@ -149,14 +148,16 @@ private:
   SDValue ExpandFPLibCall(SDNode *Node, RTLIB::Libcall Call_F32,
                           RTLIB::Libcall Call_F64, RTLIB::Libcall Call_F80,
                           RTLIB::Libcall Call_PPCF128);
-  SDValue ExpandIntLibCall(SDNode *Node, bool isSigned, RTLIB::Libcall Call_I16,
-                           RTLIB::Libcall Call_I32, RTLIB::Libcall Call_I64,
+  SDValue ExpandIntLibCall(SDNode *Node, bool isSigned,
+                           RTLIB::Libcall Call_I8,
+                           RTLIB::Libcall Call_I16,
+                           RTLIB::Libcall Call_I32,
+                           RTLIB::Libcall Call_I64,
                            RTLIB::Libcall Call_I128);
 
   SDValue EmitStackConvert(SDValue SrcOp, EVT SlotVT, EVT DestVT, DebugLoc dl);
   SDValue ExpandBUILD_VECTOR(SDNode *Node);
   SDValue ExpandSCALAR_TO_VECTOR(SDNode *Node);
-  SDValue ExpandDBG_STOPPOINT(SDNode *Node);
   void ExpandDYNAMIC_STACKALLOC(SDNode *Node,
                                 SmallVectorImpl<SDValue> &Results);
   SDValue ExpandFCOPYSIGN(SDNode *Node);
@@ -1515,6 +1516,7 @@ SDValue SelectionDAGLegalize::ExpandVectorBuildThroughStack(SDNode* Node) {
   // Create the stack frame object.
   EVT VT = Node->getValueType(0);
   EVT OpVT = Node->getOperand(0).getValueType();
+  EVT EltVT = VT.getVectorElementType();
   DebugLoc dl = Node->getDebugLoc();
   SDValue FIPtr = DAG.CreateStackTemporary(VT);
   int FI = cast<FrameIndexSDNode>(FIPtr.getNode())->getIndex();
@@ -1522,7 +1524,7 @@ SDValue SelectionDAGLegalize::ExpandVectorBuildThroughStack(SDNode* Node) {
 
   // Emit a store of each element to the stack slot.
   SmallVector<SDValue, 8> Stores;
-  unsigned TypeByteSize = OpVT.getSizeInBits() / 8;
+  unsigned TypeByteSize = EltVT.getSizeInBits() / 8;
   // Store (in the right endianness) the elements to memory.
   for (unsigned i = 0, e = Node->getNumOperands(); i != e; ++i) {
     // Ignore undef elements.
@@ -1533,8 +1535,13 @@ SDValue SelectionDAGLegalize::ExpandVectorBuildThroughStack(SDNode* Node) {
     SDValue Idx = DAG.getConstant(Offset, FIPtr.getValueType());
     Idx = DAG.getNode(ISD::ADD, dl, FIPtr.getValueType(), FIPtr, Idx);
 
-    Stores.push_back(DAG.getStore(DAG.getEntryNode(), dl, Node->getOperand(i),
-                                  Idx, SV, Offset));
+    // If EltVT is smaller than OpVT, store only the bits necessary.
+    if (EltVT.bitsLT(OpVT))
+      Stores.push_back(DAG.getTruncStore(DAG.getEntryNode(), dl,
+                          Node->getOperand(i), Idx, SV, Offset, EltVT));
+    else
+      Stores.push_back(DAG.getStore(DAG.getEntryNode(), dl, 
+                                    Node->getOperand(i), Idx, SV, Offset));
   }
 
   SDValue StoreChain;
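
The two changes above fix the stride and the store width when the vector's
element type was promoted during operand legalization: slot offsets now
advance by the in-memory element size (EltVT) rather than the promoted operand
size (OpVT), and narrow elements are written with truncating stores. For
example (types assumed for illustration), a v16i8 built from i32-promoted
operands stores 16 one-byte elements:

    // OpVT = i32 (promoted operand), EltVT = i8 (element in memory)
    if (EltVT.bitsLT(OpVT))   // i8 < i32, so truncate on store
      Stores.push_back(DAG.getTruncStore(DAG.getEntryNode(), dl,
                                         Node->getOperand(i), Idx, SV,
                                         Offset, EltVT));
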
@@ -1588,37 +1595,6 @@ SDValue SelectionDAGLegalize::ExpandFCOPYSIGN(SDNode* Node) {
                      AbsVal);
 }
 
-SDValue SelectionDAGLegalize::ExpandDBG_STOPPOINT(SDNode* Node) {
-  DebugLoc dl = Node->getDebugLoc();
-  DwarfWriter *DW = DAG.getDwarfWriter();
-  bool useDEBUG_LOC = TLI.isOperationLegalOrCustom(ISD::DEBUG_LOC,
-                                                    MVT::Other);
-  bool useLABEL = TLI.isOperationLegalOrCustom(ISD::DBG_LABEL, MVT::Other);
-
-  const DbgStopPointSDNode *DSP = cast<DbgStopPointSDNode>(Node);
-  MDNode *CU_Node = DSP->getCompileUnit();
-  if (DW && (useDEBUG_LOC || useLABEL)) {
-
-    unsigned Line = DSP->getLine();
-    unsigned Col = DSP->getColumn();
-
-    if (OptLevel == CodeGenOpt::None) {
-      // A bit self-referential to have DebugLoc on Debug_Loc nodes, but it
-      // won't hurt anything.
-      if (useDEBUG_LOC) {
-        return DAG.getNode(ISD::DEBUG_LOC, dl, MVT::Other, Node->getOperand(0),
-                           DAG.getConstant(Line, MVT::i32),
-                           DAG.getConstant(Col, MVT::i32),
-                           DAG.getSrcValue(CU_Node));
-      } else {
-        unsigned ID = DW->RecordSourceLine(Line, Col, CU_Node);
-        return DAG.getLabel(ISD::DBG_LABEL, dl, Node->getOperand(0), ID);
-      }
-    }
-  }
-  return Node->getOperand(0);
-}
-
 void SelectionDAGLegalize::ExpandDYNAMIC_STACKALLOC(SDNode* Node,
                                            SmallVectorImpl<SDValue> &Results) {
   unsigned SPReg = TLI.getStackPointerRegisterToSaveRestore();
@@ -1655,8 +1631,7 @@ void SelectionDAGLegalize::ExpandDYNAMIC_STACKALLOC(SDNode* Node,
 }
 
 /// LegalizeSetCCCondCode - Legalize a SETCC with given LHS and RHS and
-/// condition code CC on the current target. This routine assumes LHS and rHS
-/// have already been legalized by LegalizeSetCCOperands. It expands SETCC with
+/// condition code CC on the current target. This routine expands SETCC with
 /// illegal condition code into AND / OR of multiple SETCC values.
 void SelectionDAGLegalize::LegalizeSetCCCondCode(EVT VT,
                                                  SDValue &LHS, SDValue &RHS,
@@ -1812,10 +1787,19 @@ SDValue SelectionDAGLegalize::ExpandBUILD_VECTOR(SDNode *Node) {
         CV.push_back(const_cast<ConstantFP *>(V->getConstantFPValue()));
       } else if (ConstantSDNode *V =
                  dyn_cast<ConstantSDNode>(Node->getOperand(i))) {
-        CV.push_back(const_cast<ConstantInt *>(V->getConstantIntValue()));
+        if (OpVT==EltVT)
+          CV.push_back(const_cast<ConstantInt *>(V->getConstantIntValue()));
+        else {
+          // If OpVT and EltVT don't match, EltVT is not legal and the
+          // element values have been promoted/truncated earlier.  Undo this;
+          // we don't want a v16i8 to become a v16i32 for example.
+          const ConstantInt *CI = V->getConstantIntValue();
+          CV.push_back(ConstantInt::get(EltVT.getTypeForEVT(*DAG.getContext()),
+                                        CI->getZExtValue()));
+        }
       } else {
         assert(Node->getOperand(i).getOpcode() == ISD::UNDEF);
-        const Type *OpNTy = OpVT.getTypeForEVT(*DAG.getContext());
+        const Type *OpNTy = EltVT.getTypeForEVT(*DAG.getContext());
         CV.push_back(UndefValue::get(OpNTy));
       }
     }
@@ -1911,6 +1895,7 @@ SDValue SelectionDAGLegalize::ExpandFPLibCall(SDNode* Node,
 }
 
 SDValue SelectionDAGLegalize::ExpandIntLibCall(SDNode* Node, bool isSigned,
+                                               RTLIB::Libcall Call_I8,
                                                RTLIB::Libcall Call_I16,
                                                RTLIB::Libcall Call_I32,
                                                RTLIB::Libcall Call_I64,
@@ -1918,9 +1903,10 @@ SDValue SelectionDAGLegalize::ExpandIntLibCall(SDNode* Node, bool isSigned,
   RTLIB::Libcall LC;
   switch (Node->getValueType(0).getSimpleVT().SimpleTy) {
   default: llvm_unreachable("Unexpected request for libcall!");
-  case MVT::i16: LC = Call_I16; break;
-  case MVT::i32: LC = Call_I32; break;
-  case MVT::i64: LC = Call_I64; break;
+  case MVT::i8:   LC = Call_I8; break;
+  case MVT::i16:  LC = Call_I16; break;
+  case MVT::i32:  LC = Call_I32; break;
+  case MVT::i64:  LC = Call_I64; break;
   case MVT::i128: LC = Call_I128; break;
   }
   return ExpandLibCall(LC, Node, isSigned);
@@ -2257,16 +2243,12 @@ void SelectionDAGLegalize::ExpandNode(SDNode *Node,
     Results.push_back(DAG.getConstant(1, Node->getValueType(0)));
     break;
   case ISD::EH_RETURN:
-  case ISD::DBG_LABEL:
   case ISD::EH_LABEL:
   case ISD::PREFETCH:
   case ISD::MEMBARRIER:
   case ISD::VAEND:
     Results.push_back(Node->getOperand(0));
     break;
-  case ISD::DBG_STOPPOINT:
-    Results.push_back(ExpandDBG_STOPPOINT(Node));
-    break;
   case ISD::DYNAMIC_STACKALLOC:
     ExpandDYNAMIC_STACKALLOC(Node, Results);
     break;
@@ -2575,16 +2557,8 @@ void SelectionDAGLegalize::ExpandNode(SDNode *Node,
   case ISD::ConstantFP: {
     ConstantFPSDNode *CFP = cast<ConstantFPSDNode>(Node);
     // Check to see if this FP immediate is already legal.
-    bool isLegal = false;
-    for (TargetLowering::legal_fpimm_iterator I = TLI.legal_fpimm_begin(),
-            E = TLI.legal_fpimm_end(); I != E; ++I) {
-      if (CFP->isExactlyValue(*I)) {
-        isLegal = true;
-        break;
-      }
-    }
     // If this is a legal constant, turn it into a TargetConstantFP node.
-    if (isLegal)
+    if (TLI.isFPImmLegal(CFP->getValueAPF(), Node->getValueType(0)))
       Results.push_back(SDValue(Node, 0));
     else
       Results.push_back(ExpandConstantFP(CFP, true, DAG, TLI));
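
The open-coded scan over TLI.legal_fpimm_begin()/end() is replaced by the
isFPImmLegal target hook, which lets a target answer FP-immediate legality
directly. A minimal sketch of an override (target name hypothetical):

    bool MyTargetLowering::isFPImmLegal(const APFloat &Imm, EVT VT) const {
      // Suppose only +0.0 can be materialized for free on this target.
      return Imm.isPosZero();
    }
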
@@ -2634,10 +2608,14 @@ void SelectionDAGLegalize::ExpandNode(SDNode *Node,
       Tmp1 = DAG.getNode(ISD::MUL, dl, VT, Tmp1, Tmp3);
       Tmp1 = DAG.getNode(ISD::SUB, dl, VT, Tmp2, Tmp1);
     } else if (isSigned) {
-      Tmp1 = ExpandIntLibCall(Node, true, RTLIB::SREM_I16, RTLIB::SREM_I32,
+      Tmp1 = ExpandIntLibCall(Node, true,
+                              RTLIB::SREM_I8,
+                              RTLIB::SREM_I16, RTLIB::SREM_I32,
                               RTLIB::SREM_I64, RTLIB::SREM_I128);
     } else {
-      Tmp1 = ExpandIntLibCall(Node, false, RTLIB::UREM_I16, RTLIB::UREM_I32,
+      Tmp1 = ExpandIntLibCall(Node, false,
+                              RTLIB::UREM_I8,
+                              RTLIB::UREM_I16, RTLIB::UREM_I32,
                               RTLIB::UREM_I64, RTLIB::UREM_I128);
     }
     Results.push_back(Tmp1);
@@ -2653,10 +2631,14 @@ void SelectionDAGLegalize::ExpandNode(SDNode *Node,
       Tmp1 = DAG.getNode(DivRemOpc, dl, VTs, Node->getOperand(0),
                          Node->getOperand(1));
     else if (isSigned)
-      Tmp1 = ExpandIntLibCall(Node, true, RTLIB::SDIV_I16, RTLIB::SDIV_I32,
+      Tmp1 = ExpandIntLibCall(Node, true,
+                              RTLIB::SDIV_I8,
+                              RTLIB::SDIV_I16, RTLIB::SDIV_I32,
                               RTLIB::SDIV_I64, RTLIB::SDIV_I128);
     else
-      Tmp1 = ExpandIntLibCall(Node, false, RTLIB::UDIV_I16, RTLIB::UDIV_I32,
+      Tmp1 = ExpandIntLibCall(Node, false,
+                              RTLIB::UDIV_I8,
+                              RTLIB::UDIV_I16, RTLIB::UDIV_I32,
                               RTLIB::UDIV_I64, RTLIB::UDIV_I128);
     Results.push_back(Tmp1);
     break;
@@ -2701,7 +2683,9 @@ void SelectionDAGLegalize::ExpandNode(SDNode *Node,
                                     Node->getOperand(1)));
       break;
     }
-    Tmp1 = ExpandIntLibCall(Node, false, RTLIB::MUL_I16, RTLIB::MUL_I32,
+    Tmp1 = ExpandIntLibCall(Node, false,
+                            RTLIB::MUL_I8,
+                            RTLIB::MUL_I16, RTLIB::MUL_I32,
                             RTLIB::MUL_I64, RTLIB::MUL_I128);
     Results.push_back(Tmp1);
     break;
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp
index 5992f5d..e298649 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp
@@ -64,8 +64,12 @@ void DAGTypeLegalizer::PerformExpensiveChecks() {
   // The final node obtained by mapping by ReplacedValues is not marked NewNode.
   // Note that ReplacedValues should be applied iteratively.
 
-  // Note that the ReplacedValues map may also map deleted nodes.  By iterating
-  // over the DAG we only consider non-deleted nodes.
+  // Note that the ReplacedValues map may also map deleted nodes (by iterating
+  // over the DAG we never dereference deleted nodes).  This means that it may
+  // also map nodes marked NewNode if the deallocated memory was reallocated as
+  // another node, and that new node was not seen by the LegalizeTypes machinery
+  // (for example because it was created but not used).  In general, we cannot
+  // distinguish between new nodes and deleted nodes.
   SmallVector<SDNode*, 16> NewNodes;
   for (SelectionDAG::allnodes_iterator I = DAG.allnodes_begin(),
        E = DAG.allnodes_end(); I != E; ++I) {
@@ -114,7 +118,11 @@ void DAGTypeLegalizer::PerformExpensiveChecks() {
         Mapped |= 128;
 
       if (I->getNodeId() != Processed) {
-        if (Mapped != 0) {
+        // Since we allow ReplacedValues to map deleted nodes, it may map nodes
+        // marked NewNode too, since a deleted node may have been reallocated as
+        // another node that has not been seen by the LegalizeTypes machinery.
+        if ((I->getNodeId() == NewNode && Mapped > 1) ||
+            (I->getNodeId() != NewNode && Mapped != 0)) {
           errs() << "Unprocessed value in a map!";
           Failed = true;
         }
@@ -320,16 +328,12 @@ ScanOperands:
         continue;
 
       // The node morphed - this is equivalent to legalizing by replacing every
-      // value of N with the corresponding value of M.  So do that now.  However
-      // there is no need to remember the replacement - morphing will make sure
-      // it is never used non-trivially.
+      // value of N with the corresponding value of M.  So do that now.
       assert(N->getNumValues() == M->getNumValues() &&
              "Node morphing changed the number of results!");
       for (unsigned i = 0, e = N->getNumValues(); i != e; ++i)
-        // Replacing the value takes care of remapping the new value.  Do the
-        // replacement without recording it in ReplacedValues.  This does not
-        // expunge From but that is fine - it is not really a new node.
-        ReplaceValueWithHelper(SDValue(N, i), SDValue(M, i));
+        // Replacing the value takes care of remapping the new value.
+        ReplaceValueWith(SDValue(N, i), SDValue(M, i));
       assert(N->getNodeId() == NewNode && "Unexpected node state!");
       // The node continues to live on as part of the NewNode fungus that
       // grows on top of the useful nodes.  Nothing more needs to be done
@@ -623,8 +627,7 @@ void DAGTypeLegalizer::RemapValue(SDValue &N) {
 namespace {
   /// NodeUpdateListener - This class is a DAGUpdateListener that listens for
   /// updates to nodes and recomputes their ready state.
-  class VISIBILITY_HIDDEN NodeUpdateListener :
-    public SelectionDAG::DAGUpdateListener {
+  class NodeUpdateListener : public SelectionDAG::DAGUpdateListener {
     DAGTypeLegalizer &DTL;
     SmallSetVector<SDNode*, 16> &NodesToAnalyze;
   public:
@@ -667,14 +670,14 @@ namespace {
 }
 
 
-/// ReplaceValueWithHelper - Internal helper for ReplaceValueWith.  Updates the
-/// DAG causing any uses of From to use To instead, but without expunging From
-/// or recording the replacement in ReplacedValues.  Do not call directly unless
-/// you really know what you are doing!
-void DAGTypeLegalizer::ReplaceValueWithHelper(SDValue From, SDValue To) {
+/// ReplaceValueWith - The specified value was legalized to the specified other
+/// value.  Update the DAG and NodeIds replacing any uses of From to use To
+/// instead.
+void DAGTypeLegalizer::ReplaceValueWith(SDValue From, SDValue To) {
   assert(From.getNode() != To.getNode() && "Potential legalization loop!");
 
   // If expansion produced new nodes, make sure they are properly marked.
+  ExpungeNode(From.getNode());
   AnalyzeNewValue(To); // Expunges To.
 
   // Anything that used the old node should now use the new one.  Note that this
@@ -683,6 +686,10 @@ void DAGTypeLegalizer::ReplaceValueWithHelper(SDValue From, SDValue To) {
   NodeUpdateListener NUL(*this, NodesToAnalyze);
   DAG.ReplaceAllUsesOfValueWith(From, To, &NUL);
 
+  // The old node may still be present in a map like ExpandedIntegers or
+  // PromotedIntegers.  Inform maps about the replacement.
+  ReplacedValues[From] = To;
+
   // Process the list of nodes that need to be reanalyzed.
   while (!NodesToAnalyze.empty()) {
     SDNode *N = NodesToAnalyze.back();
@@ -713,25 +720,6 @@ void DAGTypeLegalizer::ReplaceValueWithHelper(SDValue From, SDValue To) {
   }
 }
 
-/// ReplaceValueWith - The specified value was legalized to the specified other
-/// value.  Update the DAG and NodeIds replacing any uses of From to use To
-/// instead.
-void DAGTypeLegalizer::ReplaceValueWith(SDValue From, SDValue To) {
-  assert(From.getNode()->getNodeId() == ReadyToProcess &&
-         "Only the node being processed may be remapped!");
-
-  // If expansion produced new nodes, make sure they are properly marked.
-  ExpungeNode(From.getNode());
-  AnalyzeNewValue(To); // Expunges To.
-
-  // The old node may still be present in a map like ExpandedIntegers or
-  // PromotedIntegers.  Inform maps about the replacement.
-  ReplacedValues[From] = To;
-
-  // Do the replacement.
-  ReplaceValueWithHelper(From, To);
-}
-
 void DAGTypeLegalizer::SetPromotedInteger(SDValue Op, SDValue Result) {
   assert(Result.getValueType() == TLI.getTypeToTransformTo(*DAG.getContext(), Op.getValueType()) &&
          "Invalid type for promoted integer");
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index 859c656..7b9b010 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -196,7 +196,6 @@ private:
                       DebugLoc dl);
   SDValue PromoteTargetBoolean(SDValue Bool, EVT VT);
   void ReplaceValueWith(SDValue From, SDValue To);
-  void ReplaceValueWithHelper(SDValue From, SDValue To);
   void SplitInteger(SDValue Op, SDValue &Lo, SDValue &Hi);
   void SplitInteger(SDValue Op, EVT LoVT, EVT HiVT,
                     SDValue &Lo, SDValue &Hi);
@@ -617,6 +616,7 @@ private:
   SDValue WidenVecOp_BIT_CONVERT(SDNode *N);
   SDValue WidenVecOp_CONCAT_VECTORS(SDNode *N);
   SDValue WidenVecOp_EXTRACT_VECTOR_ELT(SDNode *N);
+  SDValue WidenVecOp_EXTRACT_SUBVECTOR(SDNode *N);
   SDValue WidenVecOp_STORE(SDNode* N);
 
   SDValue WidenVecOp_Convert(SDNode *N);
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypesGeneric.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypesGeneric.cpp
index 0eafe62..dbd3e39 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypesGeneric.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypesGeneric.cpp
@@ -115,7 +115,8 @@ void DAGTypeLegalizer::ExpandRes_BIT_CONVERT(SDNode *N, SDValue &Lo,
   // Create the stack frame object.  Make sure it is aligned for both
   // the source and expanded destination types.
   unsigned Alignment =
-    TLI.getTargetData()->getPrefTypeAlignment(NOutVT.getTypeForEVT(*DAG.getContext()));
+    TLI.getTargetData()->getPrefTypeAlignment(NOutVT.
+                                              getTypeForEVT(*DAG.getContext()));
   SDValue StackPtr = DAG.CreateStackTemporary(InVT, Alignment);
   int SPFI = cast<FrameIndexSDNode>(StackPtr.getNode())->getIndex();
   const Value *SV = PseudoSourceValue::getFixedStack(SPFI);
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
index a03f825..75e1239 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -1789,6 +1789,7 @@ bool DAGTypeLegalizer::WidenVectorOperand(SDNode *N, unsigned ResNo) {
 
   case ISD::BIT_CONVERT:        Res = WidenVecOp_BIT_CONVERT(N); break;
   case ISD::CONCAT_VECTORS:     Res = WidenVecOp_CONCAT_VECTORS(N); break;
+  case ISD::EXTRACT_SUBVECTOR:  Res = WidenVecOp_EXTRACT_SUBVECTOR(N); break;
   case ISD::EXTRACT_VECTOR_ELT: Res = WidenVecOp_EXTRACT_VECTOR_ELT(N); break;
   case ISD::STORE:              Res = WidenVecOp_STORE(N); break;
 
@@ -1893,6 +1894,12 @@ SDValue DAGTypeLegalizer::WidenVecOp_CONCAT_VECTORS(SDNode *N) {
   return DAG.getNode(ISD::BUILD_VECTOR, dl, VT, &Ops[0], NumElts);
 }
 
+SDValue DAGTypeLegalizer::WidenVecOp_EXTRACT_SUBVECTOR(SDNode *N) {
+  SDValue InOp = GetWidenedVector(N->getOperand(0));
+  return DAG.getNode(ISD::EXTRACT_SUBVECTOR, N->getDebugLoc(),
+                     N->getValueType(0), InOp, N->getOperand(1));
+}
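+
+// Widening the EXTRACT_SUBVECTOR operand is safe because padding the source
+// vector leaves the extracted lanes unchanged; e.g. (sketch) taking a
+// <2 x float> at index 0 out of a <3 x float> widened to <4 x float> reads
+// the same two lanes.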
+
 SDValue DAGTypeLegalizer::WidenVecOp_EXTRACT_VECTOR_ELT(SDNode *N) {
   SDValue InOp = GetWidenedVector(N->getOperand(0));
   return DAG.getNode(ISD::EXTRACT_VECTOR_ELT, N->getDebugLoc(),
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGFast.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGFast.cpp
index 7eac4d8..4045a34 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGFast.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGFast.cpp
@@ -19,7 +19,6 @@
 #include "llvm/Target/TargetData.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Support/Debug.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/ADT/SmallSet.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/STLExtras.h"
@@ -40,7 +39,7 @@ namespace {
   /// FastPriorityQueue - A degenerate priority queue that considers
   /// all nodes to have the same priority.
   ///
-  struct VISIBILITY_HIDDEN FastPriorityQueue {
+  struct FastPriorityQueue {
     SmallVector<SUnit *, 16> Queue;
 
     bool empty() const { return Queue.empty(); }
@@ -60,7 +59,7 @@ namespace {
 //===----------------------------------------------------------------------===//
 /// ScheduleDAGFast - The actual "fast" list scheduler implementation.
 ///
-class VISIBILITY_HIDDEN ScheduleDAGFast : public ScheduleDAGSDNodes {
+class ScheduleDAGFast : public ScheduleDAGSDNodes {
 private:
   /// AvailableQueue - The priority queue to use for the available SUnits.
   FastPriorityQueue AvailableQueue;
@@ -117,7 +116,7 @@ void ScheduleDAGFast::Schedule() {
   LiveRegCycles.resize(TRI->getNumRegs(), 0);
 
   // Build the scheduling graph.
-  BuildSchedGraph();
+  BuildSchedGraph(NULL);
 
   DEBUG(for (unsigned su = 0, e = SUnits.size(); su != e; ++su)
           SUnits[su].dumpAll(this));
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGList.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGList.cpp
index f17fe23..faf21f7 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGList.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGList.cpp
@@ -28,7 +28,6 @@
 #include "llvm/Target/TargetData.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Support/Debug.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/ADT/PriorityQueue.h"
@@ -48,7 +47,7 @@ namespace {
 /// ScheduleDAGList - The actual list scheduler implementation.  This supports
 /// top-down scheduling.
 ///
-class VISIBILITY_HIDDEN ScheduleDAGList : public ScheduleDAGSDNodes {
+class ScheduleDAGList : public ScheduleDAGSDNodes {
 private:
   /// AvailableQueue - The priority queue to use for the available SUnits.
   ///
@@ -91,7 +90,7 @@ void ScheduleDAGList::Schedule() {
   DEBUG(errs() << "********** List Scheduling **********\n");
   
   // Build the scheduling graph.
-  BuildSchedGraph();
+  BuildSchedGraph(NULL);
 
   AvailableQueue->initNodes(SUnits);
   
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp
index cd91b84..7e1015a 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp
@@ -24,7 +24,6 @@
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Support/Debug.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/ADT/PriorityQueue.h"
 #include "llvm/ADT/SmallSet.h"
@@ -53,7 +52,7 @@ namespace {
 /// ScheduleDAGRRList - The actual register reduction list scheduler
 /// implementation.  This supports both top-down and bottom-up scheduling.
 ///
-class VISIBILITY_HIDDEN ScheduleDAGRRList : public ScheduleDAGSDNodes {
+class ScheduleDAGRRList : public ScheduleDAGSDNodes {
 private:
   /// isBottomUp - This is true if the scheduling problem is bottom-up, false if
   /// it is top-down.
@@ -172,7 +171,7 @@ void ScheduleDAGRRList::Schedule() {
   LiveRegCycles.resize(TRI->getNumRegs(), 0);
 
   // Build the scheduling graph.
-  BuildSchedGraph();
+  BuildSchedGraph(NULL);
 
   DEBUG(for (unsigned su = 0, e = SUnits.size(); su != e; ++su)
           SUnits[su].dumpAll(this));
@@ -965,8 +964,7 @@ CalcNodeSethiUllmanNumber(const SUnit *SU, std::vector<unsigned> &SUNumbers) {
 
 namespace {
   template<class SF>
-  class VISIBILITY_HIDDEN RegReductionPriorityQueue
-   : public SchedulingPriorityQueue {
+  class RegReductionPriorityQueue : public SchedulingPriorityQueue {
     PriorityQueue<SUnit*, std::vector<SUnit*>, SF> Queue;
     unsigned currentQueueId;
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
index 3e2101a..d53de34 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
@@ -14,6 +14,7 @@
 
 #define DEBUG_TYPE "pre-RA-sched"
 #include "ScheduleDAGSDNodes.h"
+#include "InstrEmitter.h"
 #include "llvm/CodeGen/SelectionDAG.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetInstrInfo.h"
@@ -181,7 +182,7 @@ void ScheduleDAGSDNodes::AddSchedEdges() {
       if (N->isMachineOpcode() &&
           TII->get(N->getMachineOpcode()).getImplicitDefs()) {
         SU->hasPhysRegClobbers = true;
-        unsigned NumUsed = CountResults(N);
+        unsigned NumUsed = InstrEmitter::CountResults(N);
         while (NumUsed != 0 && !N->hasAnyUseOfValue(NumUsed - 1))
           --NumUsed;    // Skip over unused values at the end.
         if (NumUsed > TII->get(N->getMachineOpcode()).getNumDefs())
@@ -230,7 +231,7 @@ void ScheduleDAGSDNodes::AddSchedEdges() {
 /// are input.  This SUnit graph is similar to the SelectionDAG, but
 /// excludes nodes that aren't interesting to scheduling, and represents
 /// flagged together nodes with a single SUnit.
-void ScheduleDAGSDNodes::BuildSchedGraph() {
+void ScheduleDAGSDNodes::BuildSchedGraph(AliasAnalysis *AA) {
   // Populate the SUnits array.
   BuildSchedUnits();
   // Compute all the scheduling dependencies between nodes.
@@ -250,31 +251,6 @@ void ScheduleDAGSDNodes::ComputeLatency(SUnit *SU) {
     }
 }
 
-/// CountResults - The results of target nodes have register or immediate
-/// operands first, then an optional chain, and optional flag operands (which do
-/// not go into the resulting MachineInstr).
-unsigned ScheduleDAGSDNodes::CountResults(SDNode *Node) {
-  unsigned N = Node->getNumValues();
-  while (N && Node->getValueType(N - 1) == MVT::Flag)
-    --N;
-  if (N && Node->getValueType(N - 1) == MVT::Other)
-    --N;    // Skip over chain result.
-  return N;
-}
-
-/// CountOperands - The inputs to target nodes have any actual inputs first,
-/// followed by an optional chain operand, then an optional flag operand.
-/// Compute the number of actual operands that will go into the resulting
-/// MachineInstr.
-unsigned ScheduleDAGSDNodes::CountOperands(SDNode *Node) {
-  unsigned N = Node->getNumOperands();
-  while (N && Node->getOperand(N - 1).getValueType() == MVT::Flag)
-    --N;
-  if (N && Node->getOperand(N - 1).getValueType() == MVT::Other)
-    --N; // Ignore chain if it exists.
-  return N;
-}
-
 void ScheduleDAGSDNodes::dumpNode(const SUnit *SU) const {
   if (!SU->getNode()) {
     errs() << "PHYS REG COPY\n";
@@ -293,3 +269,43 @@ void ScheduleDAGSDNodes::dumpNode(const SUnit *SU) const {
     FlaggedNodes.pop_back();
   }
 }
+
+/// EmitSchedule - Emit the machine code in scheduled order.
+MachineBasicBlock *ScheduleDAGSDNodes::
+EmitSchedule(DenseMap<MachineBasicBlock*, MachineBasicBlock*> *EM) {
+  InstrEmitter Emitter(BB, InsertPos);
+  DenseMap<SDValue, unsigned> VRBaseMap;
+  DenseMap<SUnit*, unsigned> CopyVRBaseMap;
+  for (unsigned i = 0, e = Sequence.size(); i != e; i++) {
+    SUnit *SU = Sequence[i];
+    if (!SU) {
+      // Null SUnit* is a noop.
+      EmitNoop();
+      continue;
+    }
+
+    // For pre-regalloc scheduling, create instructions corresponding to the
+    // SDNode and any flagged SDNodes and append them to the block.
+    if (!SU->getNode()) {
+      // Emit a copy.
+      EmitPhysRegCopy(SU, CopyVRBaseMap);
+      continue;
+    }
+
+    SmallVector<SDNode *, 4> FlaggedNodes;
+    for (SDNode *N = SU->getNode()->getFlaggedNode(); N;
+         N = N->getFlaggedNode())
+      FlaggedNodes.push_back(N);
+    while (!FlaggedNodes.empty()) {
+      Emitter.EmitNode(FlaggedNodes.back(), SU->OrigNode != SU, SU->isCloned,
+                       VRBaseMap, EM);
+      FlaggedNodes.pop_back();
+    }
+    Emitter.EmitNode(SU->getNode(), SU->OrigNode != SU, SU->isCloned,
+                     VRBaseMap, EM);
+  }
+
+  BB = Emitter.getBlock();
+  InsertPos = Emitter.getInsertPos();
+  return BB;
+}
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.h b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.h
index 0a6816a..ebb31ac 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.h
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.h
@@ -58,6 +58,7 @@ namespace llvm {
       if (isa<ConstantPoolSDNode>(Node))   return true;
       if (isa<JumpTableSDNode>(Node))      return true;
       if (isa<ExternalSymbolSDNode>(Node)) return true;
+      if (isa<BlockAddressSDNode>(Node))   return true;
       if (Node->getOpcode() == ISD::EntryToken) return true;
       return false;
     }
@@ -86,31 +87,12 @@ namespace llvm {
     /// are input.  This SUnit graph is similar to the SelectionDAG, but
     /// excludes nodes that aren't interesting to scheduling, and represents
     /// flagged together nodes with a single SUnit.
-    virtual void BuildSchedGraph();
+    virtual void BuildSchedGraph(AliasAnalysis *AA);
 
     /// ComputeLatency - Compute node latency.
     ///
     virtual void ComputeLatency(SUnit *SU);
 
-    /// CountResults - The results of target nodes have register or immediate
-    /// operands first, then an optional chain, and optional flag operands
-    /// (which do not go into the machine instrs.)
-    static unsigned CountResults(SDNode *Node);
-
-    /// CountOperands - The inputs to target nodes have any actual inputs first,
-    /// followed by an optional chain operand, then flag operands.  Compute
-    /// the number of actual operands that will go into the resulting
-    /// MachineInstr.
-    static unsigned CountOperands(SDNode *Node);
-
-    /// EmitNode - Generate machine code for an node and needed dependencies.
-    /// VRBaseMap contains, for each already emitted node, the first virtual
-    /// register number for the results of the node.
-    ///
-    void EmitNode(SDNode *Node, bool IsClone, bool HasClone,
-                  DenseMap<SDValue, unsigned> &VRBaseMap,
-                  DenseMap<MachineBasicBlock*, MachineBasicBlock*> *EM);
-    
     virtual MachineBasicBlock *
     EmitSchedule(DenseMap<MachineBasicBlock*, MachineBasicBlock*> *EM);
 
@@ -126,47 +108,6 @@ namespace llvm {
     virtual void getCustomGraphFeatures(GraphWriter<ScheduleDAG*> &GW) const;
 
   private:
-    /// EmitSubregNode - Generate machine code for subreg nodes.
-    ///
-    void EmitSubregNode(SDNode *Node, 
-                        DenseMap<SDValue, unsigned> &VRBaseMap);
-
-    /// EmitCopyToRegClassNode - Generate machine code for COPY_TO_REGCLASS
-    /// nodes.
-    ///
-    void EmitCopyToRegClassNode(SDNode *Node,
-                                DenseMap<SDValue, unsigned> &VRBaseMap);
-
-    /// getVR - Return the virtual register corresponding to the specified result
-    /// of the specified node.
-    unsigned getVR(SDValue Op, DenseMap<SDValue, unsigned> &VRBaseMap);
-  
-    /// getDstOfCopyToRegUse - If the only use of the specified result number of
-    /// node is a CopyToReg, return its destination register. Return 0 otherwise.
-    unsigned getDstOfOnlyCopyToRegUse(SDNode *Node, unsigned ResNo) const;
-
-    void AddOperand(MachineInstr *MI, SDValue Op, unsigned IIOpNum,
-                    const TargetInstrDesc *II,
-                    DenseMap<SDValue, unsigned> &VRBaseMap);
-
-    /// AddRegisterOperand - Add the specified register as an operand to the
-    /// specified machine instr. Insert register copies if the register is
-    /// not in the required register class.
-    void AddRegisterOperand(MachineInstr *MI, SDValue Op,
-                            unsigned IIOpNum, const TargetInstrDesc *II,
-                            DenseMap<SDValue, unsigned> &VRBaseMap);
-
-    /// EmitCopyFromReg - Generate machine code for an CopyFromReg node or an
-    /// implicit physical register output.
-    void EmitCopyFromReg(SDNode *Node, unsigned ResNo, bool IsClone,
-                         bool IsCloned, unsigned SrcReg,
-                         DenseMap<SDValue, unsigned> &VRBaseMap);
-    
-    void CreateVirtualRegisters(SDNode *Node, MachineInstr *MI,
-                                const TargetInstrDesc &II, bool IsClone,
-                                bool IsCloned,
-                                DenseMap<SDValue, unsigned> &VRBaseMap);
-
     /// BuildSchedUnits, AddSchedEdges - Helper functions for BuildSchedGraph.
     void BuildSchedUnits();
     void AddSchedEdges();
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodesEmit.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodesEmit.cpp
deleted file mode 100644
index c9e40ff..0000000
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodesEmit.cpp
+++ /dev/null
@@ -1,688 +0,0 @@
-//===---- ScheduleDAGEmit.cpp - Emit routines for the ScheduleDAG class ---===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This implements the Emit routines for the ScheduleDAG class, which creates
-// MachineInstrs according to the computed schedule.
-//
-//===----------------------------------------------------------------------===//
-
-#define DEBUG_TYPE "pre-RA-sched"
-#include "ScheduleDAGSDNodes.h"
-#include "llvm/CodeGen/MachineConstantPool.h"
-#include "llvm/CodeGen/MachineFunction.h"
-#include "llvm/CodeGen/MachineInstrBuilder.h"
-#include "llvm/CodeGen/MachineRegisterInfo.h"
-#include "llvm/Target/TargetData.h"
-#include "llvm/Target/TargetMachine.h"
-#include "llvm/Target/TargetInstrInfo.h"
-#include "llvm/Target/TargetLowering.h"
-#include "llvm/ADT/Statistic.h"
-#include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Debug.h"
-#include "llvm/Support/ErrorHandling.h"
-#include "llvm/Support/MathExtras.h"
-using namespace llvm;
-
-/// EmitCopyFromReg - Generate machine code for an CopyFromReg node or an
-/// implicit physical register output.
-void ScheduleDAGSDNodes::
-EmitCopyFromReg(SDNode *Node, unsigned ResNo, bool IsClone, bool IsCloned,
-                unsigned SrcReg, DenseMap<SDValue, unsigned> &VRBaseMap) {
-  unsigned VRBase = 0;
-  if (TargetRegisterInfo::isVirtualRegister(SrcReg)) {
-    // Just use the input register directly!
-    SDValue Op(Node, ResNo);
-    if (IsClone)
-      VRBaseMap.erase(Op);
-    bool isNew = VRBaseMap.insert(std::make_pair(Op, SrcReg)).second;
-    isNew = isNew; // Silence compiler warning.
-    assert(isNew && "Node emitted out of order - early");
-    return;
-  }
-
-  // If the node is only used by a CopyToReg and the dest reg is a vreg, use
-  // the CopyToReg'd destination register instead of creating a new vreg.
-  bool MatchReg = true;
-  const TargetRegisterClass *UseRC = NULL;
-  if (!IsClone && !IsCloned)
-    for (SDNode::use_iterator UI = Node->use_begin(), E = Node->use_end();
-         UI != E; ++UI) {
-      SDNode *User = *UI;
-      bool Match = true;
-      if (User->getOpcode() == ISD::CopyToReg && 
-          User->getOperand(2).getNode() == Node &&
-          User->getOperand(2).getResNo() == ResNo) {
-        unsigned DestReg = cast<RegisterSDNode>(User->getOperand(1))->getReg();
-        if (TargetRegisterInfo::isVirtualRegister(DestReg)) {
-          VRBase = DestReg;
-          Match = false;
-        } else if (DestReg != SrcReg)
-          Match = false;
-      } else {
-        for (unsigned i = 0, e = User->getNumOperands(); i != e; ++i) {
-          SDValue Op = User->getOperand(i);
-          if (Op.getNode() != Node || Op.getResNo() != ResNo)
-            continue;
-          EVT VT = Node->getValueType(Op.getResNo());
-          if (VT == MVT::Other || VT == MVT::Flag)
-            continue;
-          Match = false;
-          if (User->isMachineOpcode()) {
-            const TargetInstrDesc &II = TII->get(User->getMachineOpcode());
-            const TargetRegisterClass *RC = 0;
-            if (i+II.getNumDefs() < II.getNumOperands())
-              RC = II.OpInfo[i+II.getNumDefs()].getRegClass(TRI);
-            if (!UseRC)
-              UseRC = RC;
-            else if (RC) {
-              const TargetRegisterClass *ComRC = getCommonSubClass(UseRC, RC);
-              // If multiple uses expect disjoint register classes, we emit
-              // copies in AddRegisterOperand.
-              if (ComRC)
-                UseRC = ComRC;
-            }
-          }
-        }
-      }
-      MatchReg &= Match;
-      if (VRBase)
-        break;
-    }
-
-  EVT VT = Node->getValueType(ResNo);
-  const TargetRegisterClass *SrcRC = 0, *DstRC = 0;
-  SrcRC = TRI->getPhysicalRegisterRegClass(SrcReg, VT);
-  
-  // Figure out the register class to create for the destreg.
-  if (VRBase) {
-    DstRC = MRI.getRegClass(VRBase);
-  } else if (UseRC) {
-    assert(UseRC->hasType(VT) && "Incompatible phys register def and uses!");
-    DstRC = UseRC;
-  } else {
-    DstRC = TLI->getRegClassFor(VT);
-  }
-    
-  // If all uses are reading from the src physical register and copying the
-  // register is either impossible or very expensive, then don't create a copy.
-  if (MatchReg && SrcRC->getCopyCost() < 0) {
-    VRBase = SrcReg;
-  } else {
-    // Create the reg, emit the copy.
-    VRBase = MRI.createVirtualRegister(DstRC);
-    bool Emitted = TII->copyRegToReg(*BB, InsertPos, VRBase, SrcReg,
-                                     DstRC, SrcRC);
-
-    assert(Emitted && "Unable to issue a copy instruction!\n");
-    (void) Emitted;
-  }
-
-  SDValue Op(Node, ResNo);
-  if (IsClone)
-    VRBaseMap.erase(Op);
-  bool isNew = VRBaseMap.insert(std::make_pair(Op, VRBase)).second;
-  isNew = isNew; // Silence compiler warning.
-  assert(isNew && "Node emitted out of order - early");
-}
-
-/// getDstOfOnlyCopyToRegUse - If the only use of the specified result number
-/// of the node is a CopyToReg, return its destination register; 0 otherwise.
-unsigned ScheduleDAGSDNodes::getDstOfOnlyCopyToRegUse(SDNode *Node,
-                                                      unsigned ResNo) const {
-  if (!Node->hasOneUse())
-    return 0;
-
-  SDNode *User = *Node->use_begin();
-  if (User->getOpcode() == ISD::CopyToReg && 
-      User->getOperand(2).getNode() == Node &&
-      User->getOperand(2).getResNo() == ResNo) {
-    unsigned Reg = cast<RegisterSDNode>(User->getOperand(1))->getReg();
-    if (TargetRegisterInfo::isVirtualRegister(Reg))
-      return Reg;
-  }
-  return 0;
-}
-
-void ScheduleDAGSDNodes::CreateVirtualRegisters(SDNode *Node, MachineInstr *MI,
-                                       const TargetInstrDesc &II,
-                                       bool IsClone, bool IsCloned,
-                                       DenseMap<SDValue, unsigned> &VRBaseMap) {
-  assert(Node->getMachineOpcode() != TargetInstrInfo::IMPLICIT_DEF &&
-         "IMPLICIT_DEF should have been handled as a special case elsewhere!");
-
-  for (unsigned i = 0; i < II.getNumDefs(); ++i) {
-    // If the specific node value is only used by a CopyToReg and the dest reg
-    // is a vreg in the same register class, use the CopyToReg'd destination
-    // register instead of creating a new vreg.
-    unsigned VRBase = 0;
-    const TargetRegisterClass *RC = II.OpInfo[i].getRegClass(TRI);
-    if (II.OpInfo[i].isOptionalDef()) {
-      // Optional def must be a physical register.
-      unsigned NumResults = CountResults(Node);
-      VRBase = cast<RegisterSDNode>(Node->getOperand(i-NumResults))->getReg();
-      assert(TargetRegisterInfo::isPhysicalRegister(VRBase));
-      MI->addOperand(MachineOperand::CreateReg(VRBase, true));
-    }
-
-    if (!VRBase && !IsClone && !IsCloned)
-      for (SDNode::use_iterator UI = Node->use_begin(), E = Node->use_end();
-           UI != E; ++UI) {
-        SDNode *User = *UI;
-        if (User->getOpcode() == ISD::CopyToReg && 
-            User->getOperand(2).getNode() == Node &&
-            User->getOperand(2).getResNo() == i) {
-          unsigned Reg = cast<RegisterSDNode>(User->getOperand(1))->getReg();
-          if (TargetRegisterInfo::isVirtualRegister(Reg)) {
-            const TargetRegisterClass *RegRC = MRI.getRegClass(Reg);
-            if (RegRC == RC) {
-              VRBase = Reg;
-              MI->addOperand(MachineOperand::CreateReg(Reg, true));
-              break;
-            }
-          }
-        }
-      }
-
-    // Create the result registers for this node and add the result regs to
-    // the machine instruction.
-    if (VRBase == 0) {
-      assert(RC && "Isn't a register operand!");
-      VRBase = MRI.createVirtualRegister(RC);
-      MI->addOperand(MachineOperand::CreateReg(VRBase, true));
-    }
-
-    SDValue Op(Node, i);
-    if (IsClone)
-      VRBaseMap.erase(Op);
-    bool isNew = VRBaseMap.insert(std::make_pair(Op, VRBase)).second;
-    isNew = isNew; // Silence compiler warning.
-    assert(isNew && "Node emitted out of order - early");
-  }
-}
-
-/// getVR - Return the virtual register corresponding to the specified result
-/// of the specified node.
-unsigned ScheduleDAGSDNodes::getVR(SDValue Op,
-                                   DenseMap<SDValue, unsigned> &VRBaseMap) {
-  if (Op.isMachineOpcode() &&
-      Op.getMachineOpcode() == TargetInstrInfo::IMPLICIT_DEF) {
-    // Add an IMPLICIT_DEF instruction before every use.
-    unsigned VReg = getDstOfOnlyCopyToRegUse(Op.getNode(), Op.getResNo());
-    // IMPLICIT_DEF can produce any type of result so its TargetInstrDesc
-    // does not include operand register class info.
-    if (!VReg) {
-      const TargetRegisterClass *RC = TLI->getRegClassFor(Op.getValueType());
-      VReg = MRI.createVirtualRegister(RC);
-    }
-    BuildMI(BB, Op.getDebugLoc(), TII->get(TargetInstrInfo::IMPLICIT_DEF),VReg);
-    return VReg;
-  }
-
-  DenseMap<SDValue, unsigned>::iterator I = VRBaseMap.find(Op);
-  assert(I != VRBaseMap.end() && "Node emitted out of order - late");
-  return I->second;
-}
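
The VRBaseMap bookkeeping above follows a strict insert-once, look-up-later
discipline: EmitCopyFromReg and CreateVirtualRegisters register each result
exactly once, and getVR asserts the entry already exists. A minimal
standalone sketch of that idiom, with std::map and a (node, result number)
pair standing in for the LLVM DenseMap and SDValue types:

    // Sketch of the insert-once / assert-on-lookup discipline used by the
    // emitter above. ValueKey is a hypothetical stand-in for SDValue.
    #include <cassert>
    #include <map>
    #include <utility>

    typedef std::pair<const void *, unsigned> ValueKey; // (node, result no.)
    typedef std::map<ValueKey, unsigned> VRegMap;

    void recordVReg(VRegMap &M, ValueKey K, unsigned VReg) {
      bool isNew = M.insert(std::make_pair(K, VReg)).second;
      assert(isNew && "Node emitted out of order - early");
      (void)isNew; // keep NDEBUG builds warning-free
    }

    unsigned lookupVReg(const VRegMap &M, ValueKey K) {
      VRegMap::const_iterator I = M.find(K);
      assert(I != M.end() && "Node emitted out of order - late");
      return I->second;
    }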
-
-
-/// AddRegisterOperand - Add the specified register as an operand to the
-/// specified machine instr. Insert register copies if the register is
-/// not in the required register class.
-void
-ScheduleDAGSDNodes::AddRegisterOperand(MachineInstr *MI, SDValue Op,
-                                       unsigned IIOpNum,
-                                       const TargetInstrDesc *II,
-                                       DenseMap<SDValue, unsigned> &VRBaseMap) {
-  assert(Op.getValueType() != MVT::Other &&
-         Op.getValueType() != MVT::Flag &&
-         "Chain and flag operands should occur at end of operand list!");
-  // Get/emit the operand.
-  unsigned VReg = getVR(Op, VRBaseMap);
-  assert(TargetRegisterInfo::isVirtualRegister(VReg) && "Not a vreg?");
-
-  const TargetInstrDesc &TID = MI->getDesc();
-  bool isOptDef = IIOpNum < TID.getNumOperands() &&
-    TID.OpInfo[IIOpNum].isOptionalDef();
-
-  // If the instruction requires a register in a different class, create
-  // a new virtual register and copy the value into it.
-  if (II) {
-    const TargetRegisterClass *SrcRC = MRI.getRegClass(VReg);
-    const TargetRegisterClass *DstRC = 0;
-    if (IIOpNum < II->getNumOperands())
-      DstRC = II->OpInfo[IIOpNum].getRegClass(TRI);
-    assert((DstRC || (TID.isVariadic() && IIOpNum >= TID.getNumOperands())) &&
-           "Don't have operand info for this instruction!");
-    if (DstRC && SrcRC != DstRC && !SrcRC->hasSuperClass(DstRC)) {
-      unsigned NewVReg = MRI.createVirtualRegister(DstRC);
-      bool Emitted = TII->copyRegToReg(*BB, InsertPos, NewVReg, VReg,
-                                       DstRC, SrcRC);
-      assert(Emitted && "Unable to issue a copy instruction!\n");
-      (void) Emitted;
-      VReg = NewVReg;
-    }
-  }
-
-  MI->addOperand(MachineOperand::CreateReg(VReg, isOptDef));
-}
-
-/// AddOperand - Add the specified operand to the specified machine instr.  II
-/// specifies the instruction information for the node, and IIOpNum is the
-/// operand number (in the II) that we are adding. IIOpNum and II are used for 
-/// assertions only.
-void ScheduleDAGSDNodes::AddOperand(MachineInstr *MI, SDValue Op,
-                                    unsigned IIOpNum,
-                                    const TargetInstrDesc *II,
-                                    DenseMap<SDValue, unsigned> &VRBaseMap) {
-  if (Op.isMachineOpcode()) {
-    AddRegisterOperand(MI, Op, IIOpNum, II, VRBaseMap);
-  } else if (ConstantSDNode *C = dyn_cast<ConstantSDNode>(Op)) {
-    MI->addOperand(MachineOperand::CreateImm(C->getSExtValue()));
-  } else if (ConstantFPSDNode *F = dyn_cast<ConstantFPSDNode>(Op)) {
-    const ConstantFP *CFP = F->getConstantFPValue();
-    MI->addOperand(MachineOperand::CreateFPImm(CFP));
-  } else if (RegisterSDNode *R = dyn_cast<RegisterSDNode>(Op)) {
-    MI->addOperand(MachineOperand::CreateReg(R->getReg(), false));
-  } else if (GlobalAddressSDNode *TGA = dyn_cast<GlobalAddressSDNode>(Op)) {
-    MI->addOperand(MachineOperand::CreateGA(TGA->getGlobal(), TGA->getOffset(),
-                                            TGA->getTargetFlags()));
-  } else if (BasicBlockSDNode *BBNode = dyn_cast<BasicBlockSDNode>(Op)) {
-    MI->addOperand(MachineOperand::CreateMBB(BBNode->getBasicBlock()));
-  } else if (FrameIndexSDNode *FI = dyn_cast<FrameIndexSDNode>(Op)) {
-    MI->addOperand(MachineOperand::CreateFI(FI->getIndex()));
-  } else if (JumpTableSDNode *JT = dyn_cast<JumpTableSDNode>(Op)) {
-    MI->addOperand(MachineOperand::CreateJTI(JT->getIndex(),
-                                             JT->getTargetFlags()));
-  } else if (ConstantPoolSDNode *CP = dyn_cast<ConstantPoolSDNode>(Op)) {
-    int Offset = CP->getOffset();
-    unsigned Align = CP->getAlignment();
-    const Type *Type = CP->getType();
-    // MachineConstantPool wants an explicit alignment.
-    if (Align == 0) {
-      Align = TM.getTargetData()->getPrefTypeAlignment(Type);
-      if (Align == 0) {
-        // Alignment of vector types.  FIXME!
-        Align = TM.getTargetData()->getTypeAllocSize(Type);
-      }
-    }
-    
-    unsigned Idx;
-    if (CP->isMachineConstantPoolEntry())
-      Idx = ConstPool->getConstantPoolIndex(CP->getMachineCPVal(), Align);
-    else
-      Idx = ConstPool->getConstantPoolIndex(CP->getConstVal(), Align);
-    MI->addOperand(MachineOperand::CreateCPI(Idx, Offset,
-                                             CP->getTargetFlags()));
-  } else if (ExternalSymbolSDNode *ES = dyn_cast<ExternalSymbolSDNode>(Op)) {
-    MI->addOperand(MachineOperand::CreateES(ES->getSymbol(),
-                                            ES->getTargetFlags()));
-  } else {
-    assert(Op.getValueType() != MVT::Other &&
-           Op.getValueType() != MVT::Flag &&
-           "Chain and flag operands should occur at end of operand list!");
-    AddRegisterOperand(MI, Op, IIOpNum, II, VRBaseMap);
-  }
-}
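
AddOperand is in essence a type switch: each SDNode operand kind selects a
different MachineOperand constructor (immediate, register, frame index, and
so on). A toy model of that dispatch shape, using std::variant in place of
the dyn_cast chain; the three operand kinds here are illustrative stand-ins,
not LLVM types:

    // Toy model of AddOperand's kind dispatch. Each alternative maps to a
    // different "machine operand" constructor in the real code.
    #include <cstdint>
    #include <iostream>
    #include <type_traits>
    #include <variant>

    struct ImmOp   { int64_t Val; };
    struct RegOp   { unsigned Reg; };
    struct FrameOp { int Index; };
    typedef std::variant<ImmOp, RegOp, FrameOp> Operand;

    void addOperand(const Operand &Op) {
      std::visit([](const auto &O) {
        using T = std::decay_t<decltype(O)>;
        if constexpr (std::is_same_v<T, ImmOp>)
          std::cout << "imm " << O.Val << '\n';
        else if constexpr (std::is_same_v<T, RegOp>)
          std::cout << "reg " << O.Reg << '\n';
        else
          std::cout << "frame-index " << O.Index << '\n';
      }, Op);
    }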
-
-/// getSuperRegisterRegClass - Returns the register class of a superreg A whose
-/// "SubIdx"'th sub-register class is the specified register class and whose
-/// type matches the specified type.
-static const TargetRegisterClass*
-getSuperRegisterRegClass(const TargetRegisterClass *TRC,
-                         unsigned SubIdx, EVT VT) {
-  // Pick the register class of the superregister for this type
-  for (TargetRegisterInfo::regclass_iterator I = TRC->superregclasses_begin(),
-         E = TRC->superregclasses_end(); I != E; ++I)
-    if ((*I)->hasType(VT) && (*I)->getSubRegisterRegClass(SubIdx) == TRC)
-      return *I;
-  assert(false && "Couldn't find the register class");
-  return 0;
-}
-
-/// EmitSubregNode - Generate machine code for subreg nodes.
-///
-void ScheduleDAGSDNodes::EmitSubregNode(SDNode *Node, 
-                                        DenseMap<SDValue, unsigned> &VRBaseMap){
-  unsigned VRBase = 0;
-  unsigned Opc = Node->getMachineOpcode();
-  
-  // If the node is only used by a CopyToReg and the dest reg is a vreg, use
-  // the CopyToReg'd destination register instead of creating a new vreg.
-  for (SDNode::use_iterator UI = Node->use_begin(), E = Node->use_end();
-       UI != E; ++UI) {
-    SDNode *User = *UI;
-    if (User->getOpcode() == ISD::CopyToReg && 
-        User->getOperand(2).getNode() == Node) {
-      unsigned DestReg = cast<RegisterSDNode>(User->getOperand(1))->getReg();
-      if (TargetRegisterInfo::isVirtualRegister(DestReg)) {
-        VRBase = DestReg;
-        break;
-      }
-    }
-  }
-  
-  if (Opc == TargetInstrInfo::EXTRACT_SUBREG) {
-    unsigned SubIdx = cast<ConstantSDNode>(Node->getOperand(1))->getZExtValue();
-
-    // Create the extract_subreg machine instruction.
-    MachineInstr *MI = BuildMI(MF, Node->getDebugLoc(),
-                               TII->get(TargetInstrInfo::EXTRACT_SUBREG));
-
-    // Figure out the register class to create for the destreg.
-    unsigned VReg = getVR(Node->getOperand(0), VRBaseMap);
-    const TargetRegisterClass *TRC = MRI.getRegClass(VReg);
-    const TargetRegisterClass *SRC = TRC->getSubRegisterRegClass(SubIdx);
-    assert(SRC && "Invalid subregister index in EXTRACT_SUBREG");
-
-    // Figure out the register class to create for the destreg.
-    // Note that if we're going to directly use an existing register,
-    // it must be precisely the required class, and not a subclass
-    // thereof.
-    if (VRBase == 0 || SRC != MRI.getRegClass(VRBase)) {
-      // Create the reg
-      assert(SRC && "Couldn't find source register class");
-      VRBase = MRI.createVirtualRegister(SRC);
-    }
-
-    // Add def, source, and subreg index
-    MI->addOperand(MachineOperand::CreateReg(VRBase, true));
-    AddOperand(MI, Node->getOperand(0), 0, 0, VRBaseMap);
-    MI->addOperand(MachineOperand::CreateImm(SubIdx));
-    BB->insert(InsertPos, MI);
-  } else if (Opc == TargetInstrInfo::INSERT_SUBREG ||
-             Opc == TargetInstrInfo::SUBREG_TO_REG) {
-    SDValue N0 = Node->getOperand(0);
-    SDValue N1 = Node->getOperand(1);
-    SDValue N2 = Node->getOperand(2);
-    unsigned SubReg = getVR(N1, VRBaseMap);
-    unsigned SubIdx = cast<ConstantSDNode>(N2)->getZExtValue();
-    const TargetRegisterClass *TRC = MRI.getRegClass(SubReg);
-    const TargetRegisterClass *SRC =
-      getSuperRegisterRegClass(TRC, SubIdx,
-                               Node->getValueType(0));
-
-    // Figure out the register class to create for the destreg.
-    // Note that if we're going to directly use an existing register,
-    // it must be precisely the required class, and not a subclass
-    // thereof.
-    if (VRBase == 0 || SRC != MRI.getRegClass(VRBase)) {
-      // Create the reg
-      assert(SRC && "Couldn't find source register class");
-      VRBase = MRI.createVirtualRegister(SRC);
-    }
-
-    // Create the insert_subreg or subreg_to_reg machine instruction.
-    MachineInstr *MI = BuildMI(MF, Node->getDebugLoc(), TII->get(Opc));
-    MI->addOperand(MachineOperand::CreateReg(VRBase, true));
-    
-    // If creating a subreg_to_reg, then the first input operand
-    // is an implicit value immediate; otherwise it's a register.
-    if (Opc == TargetInstrInfo::SUBREG_TO_REG) {
-      const ConstantSDNode *SD = cast<ConstantSDNode>(N0);
-      MI->addOperand(MachineOperand::CreateImm(SD->getZExtValue()));
-    } else
-      AddOperand(MI, N0, 0, 0, VRBaseMap);
-    // Add the subregister being inserted
-    AddOperand(MI, N1, 0, 0, VRBaseMap);
-    MI->addOperand(MachineOperand::CreateImm(SubIdx));
-    BB->insert(InsertPos, MI);
-  } else
-    llvm_unreachable("Node is not insert_subreg, extract_subreg, or subreg_to_reg");
-     
-  SDValue Op(Node, 0);
-  bool isNew = VRBaseMap.insert(std::make_pair(Op, VRBase)).second;
-  isNew = isNew; // Silence compiler warning.
-  assert(isNew && "Node emitted out of order - early");
-}
-
-/// EmitCopyToRegClassNode - Generate machine code for COPY_TO_REGCLASS nodes.
-/// COPY_TO_REGCLASS is just a normal copy, except that the destination
-/// register is constrained to be in a particular register class.
-///
-void
-ScheduleDAGSDNodes::EmitCopyToRegClassNode(SDNode *Node,
-                                       DenseMap<SDValue, unsigned> &VRBaseMap) {
-  unsigned VReg = getVR(Node->getOperand(0), VRBaseMap);
-  const TargetRegisterClass *SrcRC = MRI.getRegClass(VReg);
-
-  unsigned DstRCIdx = cast<ConstantSDNode>(Node->getOperand(1))->getZExtValue();
-  const TargetRegisterClass *DstRC = TRI->getRegClass(DstRCIdx);
-
-  // Create the new VReg in the destination class and emit a copy.
-  unsigned NewVReg = MRI.createVirtualRegister(DstRC);
-  bool Emitted = TII->copyRegToReg(*BB, InsertPos, NewVReg, VReg,
-                                   DstRC, SrcRC);
-  assert(Emitted &&
-         "Unable to issue a copy instruction for a COPY_TO_REGCLASS node!\n");
-  (void) Emitted;
-
-  SDValue Op(Node, 0);
-  bool isNew = VRBaseMap.insert(std::make_pair(Op, NewVReg)).second;
-  isNew = isNew; // Silence compiler warning.
-  assert(isNew && "Node emitted out of order - early");
-}
-
-/// EmitNode - Generate machine code for a node and its needed dependencies.
-///
-void ScheduleDAGSDNodes::EmitNode(SDNode *Node, bool IsClone, bool IsCloned,
-                                  DenseMap<SDValue, unsigned> &VRBaseMap,
-                         DenseMap<MachineBasicBlock*, MachineBasicBlock*> *EM) {
-  // If this is a machine instruction node.
-  if (Node->isMachineOpcode()) {
-    unsigned Opc = Node->getMachineOpcode();
-    
-    // Handle subreg insert/extract specially
-    if (Opc == TargetInstrInfo::EXTRACT_SUBREG || 
-        Opc == TargetInstrInfo::INSERT_SUBREG ||
-        Opc == TargetInstrInfo::SUBREG_TO_REG) {
-      EmitSubregNode(Node, VRBaseMap);
-      return;
-    }
-
-    // Handle COPY_TO_REGCLASS specially.
-    if (Opc == TargetInstrInfo::COPY_TO_REGCLASS) {
-      EmitCopyToRegClassNode(Node, VRBaseMap);
-      return;
-    }
-
-    if (Opc == TargetInstrInfo::IMPLICIT_DEF)
-      // We want a unique VR for each IMPLICIT_DEF use.
-      return;
-    
-    const TargetInstrDesc &II = TII->get(Opc);
-    unsigned NumResults = CountResults(Node);
-    unsigned NodeOperands = CountOperands(Node);
-    bool HasPhysRegOuts = (NumResults > II.getNumDefs()) &&
-                          II.getImplicitDefs() != 0;
-#ifndef NDEBUG
-    unsigned NumMIOperands = NodeOperands + NumResults;
-    assert((II.getNumOperands() == NumMIOperands ||
-            HasPhysRegOuts || II.isVariadic()) &&
-           "#operands for dag node doesn't match .td file!"); 
-#endif
-
-    // Create the new machine instruction.
-    MachineInstr *MI = BuildMI(MF, Node->getDebugLoc(), II);
-    
-    // Add result register values for things that are defined by this
-    // instruction.
-    if (NumResults)
-      CreateVirtualRegisters(Node, MI, II, IsClone, IsCloned, VRBaseMap);
-    
-    // Emit all of the actual operands of this instruction, adding them to the
-    // instruction as appropriate.
-    bool HasOptPRefs = II.getNumDefs() > NumResults;
-    assert((!HasOptPRefs || !HasPhysRegOuts) &&
-           "Unable to cope with optional defs and phys regs defs!");
-    unsigned NumSkip = HasOptPRefs ? II.getNumDefs() - NumResults : 0;
-    for (unsigned i = NumSkip; i != NodeOperands; ++i)
-      AddOperand(MI, Node->getOperand(i), i-NumSkip+II.getNumDefs(), &II,
-                 VRBaseMap);
-
-    // Transfer all of the memory reference descriptions of this instruction.
-    MI->setMemRefs(cast<MachineSDNode>(Node)->memoperands_begin(),
-                   cast<MachineSDNode>(Node)->memoperands_end());
-
-    if (II.usesCustomDAGSchedInsertionHook()) {
-      // Insert this instruction into the basic block using a target-
-      // specific inserter, which may return a new basic block.
-      BB = TLI->EmitInstrWithCustomInserter(MI, BB, EM);
-      InsertPos = BB->end();
-    } else {
-      BB->insert(InsertPos, MI);
-    }
-
-    // Additional results must be physical register defs.
-    if (HasPhysRegOuts) {
-      for (unsigned i = II.getNumDefs(); i < NumResults; ++i) {
-        unsigned Reg = II.getImplicitDefs()[i - II.getNumDefs()];
-        if (Node->hasAnyUseOfValue(i))
-          EmitCopyFromReg(Node, i, IsClone, IsCloned, Reg, VRBaseMap);
-      }
-    }
-    return;
-  }
-
-  switch (Node->getOpcode()) {
-  default:
-#ifndef NDEBUG
-    Node->dump(DAG);
-#endif
-    llvm_unreachable("This target-independent node should have been selected!");
-    break;
-  case ISD::EntryToken:
-    llvm_unreachable("EntryToken should have been excluded from the schedule!");
-    break;
-  case ISD::MERGE_VALUES:
-  case ISD::TokenFactor: // fall thru
-    break;
-  case ISD::CopyToReg: {
-    unsigned SrcReg;
-    SDValue SrcVal = Node->getOperand(2);
-    if (RegisterSDNode *R = dyn_cast<RegisterSDNode>(SrcVal))
-      SrcReg = R->getReg();
-    else
-      SrcReg = getVR(SrcVal, VRBaseMap);
-      
-    unsigned DestReg = cast<RegisterSDNode>(Node->getOperand(1))->getReg();
-    if (SrcReg == DestReg) // Coalesced away the copy? Ignore.
-      break;
-      
-    const TargetRegisterClass *SrcTRC = 0, *DstTRC = 0;
-    // Get the register classes of the src/dst.
-    if (TargetRegisterInfo::isVirtualRegister(SrcReg))
-      SrcTRC = MRI.getRegClass(SrcReg);
-    else
-      SrcTRC = TRI->getPhysicalRegisterRegClass(SrcReg,SrcVal.getValueType());
-
-    if (TargetRegisterInfo::isVirtualRegister(DestReg))
-      DstTRC = MRI.getRegClass(DestReg);
-    else
-      DstTRC = TRI->getPhysicalRegisterRegClass(DestReg,
-                                            Node->getOperand(1).getValueType());
-
-    bool Emitted = TII->copyRegToReg(*BB, InsertPos, DestReg, SrcReg,
-                                     DstTRC, SrcTRC);
-    assert(Emitted && "Unable to issue a copy instruction!\n");
-    (void) Emitted;
-    break;
-  }
-  case ISD::CopyFromReg: {
-    unsigned SrcReg = cast<RegisterSDNode>(Node->getOperand(1))->getReg();
-    EmitCopyFromReg(Node, 0, IsClone, IsCloned, SrcReg, VRBaseMap);
-    break;
-  }
-  case ISD::INLINEASM: {
-    unsigned NumOps = Node->getNumOperands();
-    if (Node->getOperand(NumOps-1).getValueType() == MVT::Flag)
-      --NumOps;  // Ignore the flag operand.
-      
-    // Create the inline asm machine instruction.
-    MachineInstr *MI = BuildMI(MF, Node->getDebugLoc(),
-                               TII->get(TargetInstrInfo::INLINEASM));
-
-    // Add the asm string as an external symbol operand.
-    const char *AsmStr =
-      cast<ExternalSymbolSDNode>(Node->getOperand(1))->getSymbol();
-    MI->addOperand(MachineOperand::CreateES(AsmStr));
-      
-    // Add all of the operand registers to the instruction.
-    for (unsigned i = 2; i != NumOps;) {
-      unsigned Flags =
-        cast<ConstantSDNode>(Node->getOperand(i))->getZExtValue();
-      unsigned NumVals = InlineAsm::getNumOperandRegisters(Flags);
-        
-      MI->addOperand(MachineOperand::CreateImm(Flags));
-      ++i;  // Skip the ID value.
-        
-      switch (Flags & 7) {
-      default: llvm_unreachable("Bad flags!");
-      case 2:   // Def of register.
-        for (; NumVals; --NumVals, ++i) {
-          unsigned Reg = cast<RegisterSDNode>(Node->getOperand(i))->getReg();
-          MI->addOperand(MachineOperand::CreateReg(Reg, true));
-        }
-        break;
-      case 6:   // Def of earlyclobber register.
-        for (; NumVals; --NumVals, ++i) {
-          unsigned Reg = cast<RegisterSDNode>(Node->getOperand(i))->getReg();
-          MI->addOperand(MachineOperand::CreateReg(Reg, true, false, false, 
-                                                   false, false, true));
-        }
-        break;
-      case 1:  // Use of register.
-      case 3:  // Immediate.
-      case 4:  // Addressing mode.
-        // The addressing mode has been selected, just add all of the
-        // operands to the machine instruction.
-        for (; NumVals; --NumVals, ++i)
-          AddOperand(MI, Node->getOperand(i), 0, 0, VRBaseMap);
-        break;
-      }
-    }
-    BB->insert(InsertPos, MI);
-    break;
-  }
-  }
-}
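
The INLINEASM case above walks a flat operand list in which every group is
led by a flag word: the low three bits encode the operand kind (hence the
switch on Flags & 7) and InlineAsm::getNumOperandRegisters recovers how many
values follow. A sketch of that framing; the exact "count in the bits above
the kind field" layout is an assumption for illustration, not the verified
InlineAsm encoding:

    // Walking a flag-framed operand list, as in the INLINEASM case above.
    #include <cstdio>
    #include <vector>

    enum Kind { RegUse = 1, RegDef = 2, Imm = 3, Mem = 4, EarlyClobberDef = 6 };

    void walkAsmOperands(const std::vector<unsigned> &Ops) {
      for (size_t i = 0; i < Ops.size();) {
        unsigned Flags   = Ops[i++];    // a flag word leads each group
        unsigned Kind    = Flags & 7;   // low 3 bits: operand kind
        unsigned NumVals = Flags >> 3;  // assumed: value count in upper bits
        std::printf("kind %u, %u value(s)\n", Kind, NumVals);
        i += NumVals;                   // skip over the group's values
      }
    }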
-
-/// EmitSchedule - Emit the machine code in scheduled order.
-MachineBasicBlock *ScheduleDAGSDNodes::
-EmitSchedule(DenseMap<MachineBasicBlock*, MachineBasicBlock*> *EM) {
-  DenseMap<SDValue, unsigned> VRBaseMap;
-  DenseMap<SUnit*, unsigned> CopyVRBaseMap;
-  for (unsigned i = 0, e = Sequence.size(); i != e; i++) {
-    SUnit *SU = Sequence[i];
-    if (!SU) {
-      // Null SUnit* is a noop.
-      EmitNoop();
-      continue;
-    }
-
-    // For pre-regalloc scheduling, create instructions corresponding to the
-    // SDNode and any flagged SDNodes and append them to the block.
-    if (!SU->getNode()) {
-      // Emit a copy.
-      EmitPhysRegCopy(SU, CopyVRBaseMap);
-      continue;
-    }
-
-    SmallVector<SDNode *, 4> FlaggedNodes;
-    for (SDNode *N = SU->getNode()->getFlaggedNode(); N;
-         N = N->getFlaggedNode())
-      FlaggedNodes.push_back(N);
-    while (!FlaggedNodes.empty()) {
-      EmitNode(FlaggedNodes.back(), SU->OrigNode != SU, SU->isCloned,
-               VRBaseMap, EM);
-      FlaggedNodes.pop_back();
-    }
-    EmitNode(SU->getNode(), SU->OrigNode != SU, SU->isCloned, VRBaseMap, EM);
-  }
-
-  return BB;
-}
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
index f6fed21..8f99957 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
@@ -30,6 +30,7 @@
 #include "llvm/Target/TargetLowering.h"
 #include "llvm/Target/TargetOptions.h"
 #include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/Target/TargetIntrinsicInfo.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/ErrorHandling.h"
@@ -199,19 +200,6 @@ bool ISD::isScalarToVector(const SDNode *N) {
   return true;
 }
 
-
-/// isDebugLabel - Return true if the specified node represents a debug
-/// label (i.e. ISD::DBG_LABEL or TargetInstrInfo::DBG_LABEL node).
-bool ISD::isDebugLabel(const SDNode *N) {
-  SDValue Zero;
-  if (N->getOpcode() == ISD::DBG_LABEL)
-    return true;
-  if (N->isMachineOpcode() &&
-      N->getMachineOpcode() == TargetInstrInfo::DBG_LABEL)
-    return true;
-  return false;
-}
-
 /// getSetCCSwappedOperands - Return the operation corresponding to (Y op X)
 /// when given the operation for (X op Y).
 ISD::CondCode ISD::getSetCCSwappedOperands(ISD::CondCode Operation) {
@@ -392,13 +380,7 @@ static void AddNodeIDCustom(FoldingSetNodeID &ID, const SDNode *N) {
   case ISD::Register:
     ID.AddInteger(cast<RegisterSDNode>(N)->getReg());
     break;
-  case ISD::DBG_STOPPOINT: {
-    const DbgStopPointSDNode *DSP = cast<DbgStopPointSDNode>(N);
-    ID.AddInteger(DSP->getLine());
-    ID.AddInteger(DSP->getColumn());
-    ID.AddPointer(DSP->getCompileUnit());
-    break;
-  }
+
   case ISD::SRCVALUE:
     ID.AddPointer(cast<SrcValueSDNode>(N)->getValue());
     break;
@@ -459,6 +441,12 @@ static void AddNodeIDCustom(FoldingSetNodeID &ID, const SDNode *N) {
       ID.AddInteger(SVN->getMaskElt(i));
     break;
   }
+  case ISD::TargetBlockAddress:
+  case ISD::BlockAddress: {
+    ID.AddPointer(cast<BlockAddressSDNode>(N)->getBlockAddress());
+    ID.AddInteger(cast<BlockAddressSDNode>(N)->getTargetFlags());
+    break;
+  }
   } // end switch (N->getOpcode())
 }
 
@@ -502,8 +490,6 @@ static bool doNotCSE(SDNode *N) {
   switch (N->getOpcode()) {
   default: break;
   case ISD::HANDLENODE:
-  case ISD::DBG_LABEL:
-  case ISD::DBG_STOPPOINT:
   case ISD::EH_LABEL:
     return true;   // Never CSE these nodes.
   }
@@ -832,6 +818,18 @@ void SelectionDAG::clear() {
   Root = getEntryNode();
 }
 
+SDValue SelectionDAG::getSExtOrTrunc(SDValue Op, DebugLoc DL, EVT VT) {
+  return VT.bitsGT(Op.getValueType()) ?
+    getNode(ISD::SIGN_EXTEND, DL, VT, Op) :
+    getNode(ISD::TRUNCATE, DL, VT, Op);
+}
+
+SDValue SelectionDAG::getZExtOrTrunc(SDValue Op, DebugLoc DL, EVT VT) {
+  return VT.bitsGT(Op.getValueType()) ?
+    getNode(ISD::ZERO_EXTEND, DL, VT, Op) :
+    getNode(ISD::TRUNCATE, DL, VT, Op);
+}
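
The two helpers added here choose between extension and truncation purely
from the relative bit widths of the source and destination types. The same
dispatch on plain integers, as a minimal sketch (it assumes the input value
already fits in FromBits):

    // Plain-integer analogue of getZExtOrTrunc: widen by zero extension,
    // otherwise truncate by masking down to the destination width.
    #include <cassert>
    #include <cstdint>

    uint64_t zextOrTrunc(uint64_t V, unsigned FromBits, unsigned ToBits) {
      assert(FromBits >= 1 && FromBits <= 64 && ToBits >= 1 && ToBits <= 64);
      if (ToBits > FromBits)
        return V;                       // zero extension is a no-op here
      uint64_t Mask = (ToBits == 64) ? ~0ULL : ((1ULL << ToBits) - 1);
      return V & Mask;                  // truncation drops the high bits
    }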
+
 SDValue SelectionDAG::getZeroExtendInReg(SDValue Op, DebugLoc DL, EVT VT) {
   if (Op.getValueType() == VT) return Op;
   APInt Imm = APInt::getLowBitsSet(Op.getValueSizeInBits(),
@@ -1252,11 +1250,12 @@ SDValue SelectionDAG::getConvertRndSat(EVT VT, DebugLoc dl,
     return Val;
 
   FoldingSetNodeID ID;
+  SDValue Ops[] = { Val, DTy, STy, Rnd, Sat };
+  AddNodeIDNode(ID, ISD::CONVERT_RNDSAT, getVTList(VT), &Ops[0], 5);
   void* IP = 0;
   if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
     return SDValue(E, 0);
   CvtRndSatSDNode *N = NodeAllocator.Allocate<CvtRndSatSDNode>();
-  SDValue Ops[] = { Val, DTy, STy, Rnd, Sat };
   new (N) CvtRndSatSDNode(VT, dl, Ops, 5, Code);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
@@ -1277,16 +1276,6 @@ SDValue SelectionDAG::getRegister(unsigned RegNo, EVT VT) {
   return SDValue(N, 0);
 }
 
-SDValue SelectionDAG::getDbgStopPoint(DebugLoc DL, SDValue Root,
-                                      unsigned Line, unsigned Col,
-                                      MDNode *CU) {
-  SDNode *N = NodeAllocator.Allocate<DbgStopPointSDNode>();
-  new (N) DbgStopPointSDNode(Root, Line, Col, CU);
-  N->setDebugLoc(DL);
-  AllNodes.push_back(N);
-  return SDValue(N, 0);
-}
-
 SDValue SelectionDAG::getLabel(unsigned Opcode, DebugLoc dl,
                                SDValue Root,
                                unsigned LabelID) {
@@ -1304,6 +1293,25 @@ SDValue SelectionDAG::getLabel(unsigned Opcode, DebugLoc dl,
   return SDValue(N, 0);
 }
 
+SDValue SelectionDAG::getBlockAddress(BlockAddress *BA, EVT VT,
+                                      bool isTarget,
+                                      unsigned char TargetFlags) {
+  unsigned Opc = isTarget ? ISD::TargetBlockAddress : ISD::BlockAddress;
+
+  FoldingSetNodeID ID;
+  AddNodeIDNode(ID, Opc, getVTList(VT), 0, 0);
+  ID.AddPointer(BA);
+  ID.AddInteger(TargetFlags);
+  void *IP = 0;
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+    return SDValue(E, 0);
+  SDNode *N = NodeAllocator.Allocate<BlockAddressSDNode>();
+  new (N) BlockAddressSDNode(Opc, VT, BA, TargetFlags);
+  CSEMap.InsertNode(N, IP);
+  AllNodes.push_back(N);
+  return SDValue(N, 0);
+}
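
getBlockAddress follows the standard SelectionDAG CSE pattern: build a
FoldingSet key from the opcode plus the node's distinguishing fields, return
the existing node on a hit, and only otherwise allocate and insert. The same
lookup-before-allocate shape with an ordinary map standing in for the
FoldingSet (Node and the key layout are stand-ins for illustration):

    // Lookup-before-allocate node uniquing, as in getBlockAddress above.
    #include <map>
    #include <tuple>

    struct Node { unsigned Opc; const void *BA; unsigned char TF; };
    typedef std::tuple<unsigned, const void *, unsigned> Key;
    static std::map<Key, Node *> CSEMap;

    Node *getUniqued(unsigned Opc, const void *BA, unsigned char TF) {
      Key K(Opc, BA, TF);
      std::map<Key, Node *>::iterator I = CSEMap.find(K);
      if (I != CSEMap.end())
        return I->second;          // CSE hit: reuse the existing node
      Node *N = new Node{Opc, BA, TF};
      CSEMap[K] = N;               // miss: allocate and remember the new node
      return N;
    }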
+
 SDValue SelectionDAG::getSrcValue(const Value *V) {
   assert((!V || isa<PointerType>(V->getType())) &&
          "SrcValue is not a pointer?");
@@ -1343,7 +1351,7 @@ SDValue SelectionDAG::CreateStackTemporary(EVT VT, unsigned minAlign) {
   unsigned StackAlign =
   std::max((unsigned)TLI.getTargetData()->getPrefTypeAlignment(Ty), minAlign);
 
-  int FrameIdx = FrameInfo->CreateStackObject(ByteSize, StackAlign);
+  int FrameIdx = FrameInfo->CreateStackObject(ByteSize, StackAlign, false);
   return getFrameIndex(FrameIdx, TLI.getPointerTy());
 }
 
@@ -1359,7 +1367,7 @@ SDValue SelectionDAG::CreateStackTemporary(EVT VT1, EVT VT2) {
                             TD->getPrefTypeAlignment(Ty2));
 
   MachineFrameInfo *FrameInfo = getMachineFunction().getFrameInfo();
-  int FrameIdx = FrameInfo->CreateStackObject(Bytes, Align);
+  int FrameIdx = FrameInfo->CreateStackObject(Bytes, Align, false);
   return getFrameIndex(FrameIdx, TLI.getPointerTy());
 }
 
@@ -4588,7 +4596,7 @@ SDNode *SelectionDAG::MorphNodeTo(SDNode *N, unsigned Opc,
       N->InitOperands(new SDUse[NumOps], Ops, NumOps);
       N->OperandsNeedDelete = true;
     } else
-      MN->InitOperands(MN->OperandList, Ops, NumOps);
+      N->InitOperands(N->OperandList, Ops, NumOps);
   }
 
   // Delete any nodes that are still dead after adding the uses for the
@@ -4612,115 +4620,126 @@ SDNode *SelectionDAG::MorphNodeTo(SDNode *N, unsigned Opc,
 /// Note that getMachineNode returns the resultant node.  If there is already a
 /// node of the specified opcode and operands, it returns that node instead of
 /// the current one.
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT) {
   SDVTList VTs = getVTList(VT);
   return getMachineNode(Opcode, dl, VTs, 0, 0);
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT,
-                                     SDValue Op1) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT, SDValue Op1) {
   SDVTList VTs = getVTList(VT);
   SDValue Ops[] = { Op1 };
   return getMachineNode(Opcode, dl, VTs, Ops, array_lengthof(Ops));
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT,
-                                     SDValue Op1, SDValue Op2) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT,
+                             SDValue Op1, SDValue Op2) {
   SDVTList VTs = getVTList(VT);
   SDValue Ops[] = { Op1, Op2 };
   return getMachineNode(Opcode, dl, VTs, Ops, array_lengthof(Ops));
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT,
-                                     SDValue Op1, SDValue Op2,
-                                     SDValue Op3) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT,
+                             SDValue Op1, SDValue Op2, SDValue Op3) {
   SDVTList VTs = getVTList(VT);
   SDValue Ops[] = { Op1, Op2, Op3 };
   return getMachineNode(Opcode, dl, VTs, Ops, array_lengthof(Ops));
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT,
-                                     const SDValue *Ops, unsigned NumOps) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT,
+                             const SDValue *Ops, unsigned NumOps) {
   SDVTList VTs = getVTList(VT);
   return getMachineNode(Opcode, dl, VTs, Ops, NumOps);
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl,
-                                     EVT VT1, EVT VT2) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1, EVT VT2) {
   SDVTList VTs = getVTList(VT1, VT2);
   return getMachineNode(Opcode, dl, VTs, 0, 0);
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1,
-                                     EVT VT2, SDValue Op1) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl,
+                             EVT VT1, EVT VT2, SDValue Op1) {
   SDVTList VTs = getVTList(VT1, VT2);
   SDValue Ops[] = { Op1 };
   return getMachineNode(Opcode, dl, VTs, Ops, array_lengthof(Ops));
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1,
-                                     EVT VT2, SDValue Op1,
-                                     SDValue Op2) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl,
+                             EVT VT1, EVT VT2, SDValue Op1, SDValue Op2) {
   SDVTList VTs = getVTList(VT1, VT2);
   SDValue Ops[] = { Op1, Op2 };
   return getMachineNode(Opcode, dl, VTs, Ops, array_lengthof(Ops));
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1,
-                                     EVT VT2, SDValue Op1,
-                                     SDValue Op2, SDValue Op3) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl,
+                             EVT VT1, EVT VT2, SDValue Op1,
+                             SDValue Op2, SDValue Op3) {
   SDVTList VTs = getVTList(VT1, VT2);
   SDValue Ops[] = { Op1, Op2, Op3 };
   return getMachineNode(Opcode, dl, VTs, Ops, array_lengthof(Ops));
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl,
-                                     EVT VT1, EVT VT2,
-                                     const SDValue *Ops, unsigned NumOps) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl,
+                             EVT VT1, EVT VT2,
+                             const SDValue *Ops, unsigned NumOps) {
   SDVTList VTs = getVTList(VT1, VT2);
   return getMachineNode(Opcode, dl, VTs, Ops, NumOps);
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl,
-                                     EVT VT1, EVT VT2, EVT VT3,
-                                     SDValue Op1, SDValue Op2) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl,
+                             EVT VT1, EVT VT2, EVT VT3,
+                             SDValue Op1, SDValue Op2) {
   SDVTList VTs = getVTList(VT1, VT2, VT3);
   SDValue Ops[] = { Op1, Op2 };
   return getMachineNode(Opcode, dl, VTs, Ops, array_lengthof(Ops));
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl,
-                                     EVT VT1, EVT VT2, EVT VT3,
-                                     SDValue Op1, SDValue Op2,
-                                     SDValue Op3) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl,
+                             EVT VT1, EVT VT2, EVT VT3,
+                             SDValue Op1, SDValue Op2, SDValue Op3) {
   SDVTList VTs = getVTList(VT1, VT2, VT3);
   SDValue Ops[] = { Op1, Op2, Op3 };
   return getMachineNode(Opcode, dl, VTs, Ops, array_lengthof(Ops));
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl,
-                                     EVT VT1, EVT VT2, EVT VT3,
-                                     const SDValue *Ops, unsigned NumOps) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl,
+                             EVT VT1, EVT VT2, EVT VT3,
+                             const SDValue *Ops, unsigned NumOps) {
   SDVTList VTs = getVTList(VT1, VT2, VT3);
   return getMachineNode(Opcode, dl, VTs, Ops, NumOps);
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1,
-                                     EVT VT2, EVT VT3, EVT VT4,
-                                     const SDValue *Ops, unsigned NumOps) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl, EVT VT1,
+                             EVT VT2, EVT VT3, EVT VT4,
+                             const SDValue *Ops, unsigned NumOps) {
   SDVTList VTs = getVTList(VT1, VT2, VT3, VT4);
   return getMachineNode(Opcode, dl, VTs, Ops, NumOps);
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl,
-                                     const std::vector<EVT> &ResultTys,
-                                     const SDValue *Ops, unsigned NumOps) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc dl,
+                             const std::vector<EVT> &ResultTys,
+                             const SDValue *Ops, unsigned NumOps) {
   SDVTList VTs = getVTList(&ResultTys[0], ResultTys.size());
   return getMachineNode(Opcode, dl, VTs, Ops, NumOps);
 }
 
-SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc DL, SDVTList VTs,
-                                     const SDValue *Ops, unsigned NumOps) {
+MachineSDNode *
+SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc DL, SDVTList VTs,
+                             const SDValue *Ops, unsigned NumOps) {
   bool DoCSE = VTs.VTs[VTs.NumVTs-1] != MVT::Flag;
   MachineSDNode *N;
   void *IP;
@@ -4730,7 +4749,7 @@ SDNode *SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc DL, SDVTList VTs,
     AddNodeIDNode(ID, ~Opcode, VTs, Ops, NumOps);
     IP = 0;
     if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
-      return E;
+      return cast<MachineSDNode>(E);
   }
 
   // Allocate a new MachineSDNode.
@@ -4769,6 +4788,17 @@ SelectionDAG::getTargetExtractSubreg(int SRIdx, DebugLoc DL, EVT VT,
   return SDValue(Subreg, 0);
 }
 
+/// getTargetInsertSubreg - A convenience function for creating
+/// TargetInstrInfo::INSERT_SUBREG nodes.
+SDValue
+SelectionDAG::getTargetInsertSubreg(int SRIdx, DebugLoc DL, EVT VT,
+                                    SDValue Operand, SDValue Subreg) {
+  SDValue SRIdxVal = getTargetConstant(SRIdx, MVT::i32);
+  SDNode *Result = getMachineNode(TargetInstrInfo::INSERT_SUBREG, DL,
+                                  VT, Operand, Subreg, SRIdxVal);
+  return SDValue(Result, 0);
+}
+
 /// getNodeIfExists - Get the specified node if it's already available, or
 /// else return NULL.
 SDNode *SelectionDAG::getNodeIfExists(unsigned Opcode, SDVTList VTList,
@@ -5272,31 +5302,26 @@ bool SDValue::reachesChainWithoutSideEffects(SDValue Dest,
   return false;
 }
 
-
-static void findPredecessor(SDNode *N, const SDNode *P, bool &found,
-                            SmallPtrSet<SDNode *, 32> &Visited) {
-  if (found || !Visited.insert(N))
-    return;
-
-  for (unsigned i = 0, e = N->getNumOperands(); !found && i != e; ++i) {
-    SDNode *Op = N->getOperand(i).getNode();
-    if (Op == P) {
-      found = true;
-      return;
-    }
-    findPredecessor(Op, P, found, Visited);
-  }
-}
-
 /// isPredecessorOf - Return true if this node is a predecessor of N. This node
-/// is either an operand of N or it can be reached by recursively traversing
-/// up the operands.
+/// is either an operand of N or it can be reached by traversing up the operands.
 /// NOTE: this is an expensive method. Use it carefully.
 bool SDNode::isPredecessorOf(SDNode *N) const {
   SmallPtrSet<SDNode *, 32> Visited;
-  bool found = false;
-  findPredecessor(N, this, found, Visited);
-  return found;
+  SmallVector<SDNode *, 16> Worklist;
+  Worklist.push_back(N);
+
+  do {
+    N = Worklist.pop_back_val();
+    for (unsigned i = 0, e = N->getNumOperands(); i != e; ++i) {
+      SDNode *Op = N->getOperand(i).getNode();
+      if (Op == this)
+        return true;
+      if (Visited.insert(Op))
+        Worklist.push_back(Op);
+    }
+  } while (!Worklist.empty());
+
+  return false;
 }
 
 uint64_t SDNode::getConstantOperandVal(unsigned Num) const {
@@ -5370,14 +5395,17 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
   case ISD::EH_RETURN: return "EH_RETURN";
   case ISD::ConstantPool:  return "ConstantPool";
   case ISD::ExternalSymbol: return "ExternalSymbol";
-  case ISD::INTRINSIC_WO_CHAIN: {
-    unsigned IID = cast<ConstantSDNode>(getOperand(0))->getZExtValue();
-    return Intrinsic::getName((Intrinsic::ID)IID);
-  }
+  case ISD::BlockAddress:  return "BlockAddress";
+  case ISD::INTRINSIC_WO_CHAIN:
   case ISD::INTRINSIC_VOID:
   case ISD::INTRINSIC_W_CHAIN: {
-    unsigned IID = cast<ConstantSDNode>(getOperand(1))->getZExtValue();
-    return Intrinsic::getName((Intrinsic::ID)IID);
+    unsigned OpNo = getOpcode() == ISD::INTRINSIC_WO_CHAIN ? 0 : 1;
+    unsigned IID = cast<ConstantSDNode>(getOperand(OpNo))->getZExtValue();
+    if (IID < Intrinsic::num_intrinsics)
+      return Intrinsic::getName((Intrinsic::ID)IID);
+    else if (const TargetIntrinsicInfo *TII = G->getTarget().getIntrinsicInfo())
+      return TII->getName(IID);
+    llvm_unreachable("Invalid intrinsic ID");
   }
 
   case ISD::BUILD_VECTOR:   return "BUILD_VECTOR";
@@ -5389,13 +5417,13 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
   case ISD::TargetJumpTable:  return "TargetJumpTable";
   case ISD::TargetConstantPool:  return "TargetConstantPool";
   case ISD::TargetExternalSymbol: return "TargetExternalSymbol";
+  case ISD::TargetBlockAddress: return "TargetBlockAddress";
 
   case ISD::CopyToReg:     return "CopyToReg";
   case ISD::CopyFromReg:   return "CopyFromReg";
   case ISD::UNDEF:         return "undef";
   case ISD::MERGE_VALUES:  return "merge_values";
   case ISD::INLINEASM:     return "inlineasm";
-  case ISD::DBG_LABEL:     return "dbg_label";
   case ISD::EH_LABEL:      return "eh_label";
   case ISD::HANDLENODE:    return "handlenode";
 
@@ -5529,10 +5557,6 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
   case ISD::CTTZ:    return "cttz";
   case ISD::CTLZ:    return "ctlz";
 
-  // Debug info
-  case ISD::DBG_STOPPOINT: return "dbg_stoppoint";
-  case ISD::DEBUG_LOC: return "debug_loc";
-
   // Trampolines
   case ISD::TRAMPOLINE: return "trampoline";
 
@@ -5698,9 +5722,9 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
   } else if (const RegisterSDNode *R = dyn_cast<RegisterSDNode>(this)) {
     if (G && R->getReg() &&
         TargetRegisterInfo::isPhysicalRegister(R->getReg())) {
-      OS << " " << G->getTarget().getRegisterInfo()->getName(R->getReg());
+      OS << " %" << G->getTarget().getRegisterInfo()->getName(R->getReg());
     } else {
-      OS << " #" << R->getReg();
+      OS << " %reg" << R->getReg();
     }
   } else if (const ExternalSymbolSDNode *ES =
              dyn_cast<ExternalSymbolSDNode>(this)) {
@@ -5716,7 +5740,7 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
     OS << ":" << N->getVT().getEVTString();
   }
   else if (const LoadSDNode *LD = dyn_cast<LoadSDNode>(this)) {
-    OS << " <" << *LD->getMemOperand();
+    OS << "<" << *LD->getMemOperand();
 
     bool doExt = true;
     switch (LD->getExtensionType()) {
@@ -5734,7 +5758,7 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
 
     OS << ">";
   } else if (const StoreSDNode *ST = dyn_cast<StoreSDNode>(this)) {
-    OS << " <" << *ST->getMemOperand();
+    OS << "<" << *ST->getMemOperand();
 
     if (ST->isTruncatingStore())
       OS << ", trunc to " << ST->getMemoryVT().getEVTString();
@@ -5745,15 +5769,23 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
     
     OS << ">";
   } else if (const MemSDNode* M = dyn_cast<MemSDNode>(this)) {
-    OS << " <" << *M->getMemOperand() << ">";
+    OS << "<" << *M->getMemOperand() << ">";
+  } else if (const BlockAddressSDNode *BA =
+               dyn_cast<BlockAddressSDNode>(this)) {
+    OS << "<";
+    WriteAsOperand(OS, BA->getBlockAddress()->getFunction(), false);
+    OS << ", ";
+    WriteAsOperand(OS, BA->getBlockAddress()->getBasicBlock(), false);
+    OS << ">";
+    if (unsigned int TF = BA->getTargetFlags())
+      OS << " [TF=" << TF << ']';
   }
 }
 
 void SDNode::print(raw_ostream &OS, const SelectionDAG *G) const {
   print_types(OS, G);
-  OS << " ";
   for (unsigned i = 0, e = getNumOperands(); i != e; ++i) {
-    if (i) OS << ", ";
+    if (i) OS << ", "; else OS << " ";
     OS << (void*)getOperand(i).getNode();
     if (unsigned RN = getOperand(i).getResNo())
       OS << ":" << RN;
@@ -5853,7 +5885,8 @@ bool BuildVectorSDNode::isConstantSplat(APInt &SplatValue,
                                         APInt &SplatUndef,
                                         unsigned &SplatBitSize,
                                         bool &HasAnyUndefs,
-                                        unsigned MinSplatBits) {
+                                        unsigned MinSplatBits,
+                                        bool isBigEndian) {
   EVT VT = getValueType(0);
   assert(VT.isVector() && "Expected a vector type");
   unsigned sz = VT.getSizeInBits();
@@ -5870,12 +5903,14 @@ bool BuildVectorSDNode::isConstantSplat(APInt &SplatValue,
   unsigned int nOps = getNumOperands();
   assert(nOps > 0 && "isConstantSplat has 0-size build vector");
   unsigned EltBitSize = VT.getVectorElementType().getSizeInBits();
-  for (unsigned i = 0; i < nOps; ++i) {
+
+  for (unsigned j = 0; j < nOps; ++j) {
+    unsigned i = isBigEndian ? nOps-1-j : j;
     SDValue OpVal = getOperand(i);
-    unsigned BitPos = i * EltBitSize;
+    unsigned BitPos = j * EltBitSize;
 
     if (OpVal.getOpcode() == ISD::UNDEF)
-      SplatUndef |= APInt::getBitsSet(sz, BitPos, BitPos +EltBitSize);
+      SplatUndef |= APInt::getBitsSet(sz, BitPos, BitPos + EltBitSize);
     else if (ConstantSDNode *CN = dyn_cast<ConstantSDNode>(OpVal))
       SplatValue |= (APInt(CN->getAPIntValue()).zextOrTrunc(EltBitSize).
                      zextOrTrunc(sz) << BitPos);
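
The isConstantSplat change threads an isBigEndian flag through so that the
element-to-bit-position mapping can be reversed: the loop index j always
determines the bit position, while the element actually read is nOps-1-j on
big-endian targets. A small numeric sketch of that packing, using one-byte
elements (at most eight, so everything fits in a uint64_t):

    // Endianness-aware element packing, as in the modified splat loop above.
    #include <cstdint>
    #include <vector>

    uint64_t packElements(const std::vector<uint8_t> &Elts, bool IsBigEndian) {
      const unsigned EltBits = 8;
      uint64_t Packed = 0;
      for (unsigned j = 0; j < Elts.size(); ++j) {
        // Bit position comes from j; which element lands there depends on
        // the target's byte order.
        unsigned i = IsBigEndian ? Elts.size() - 1 - j : j;
        Packed |= uint64_t(Elts[i]) << (j * EltBits);
      }
      return Packed;
    }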
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuild.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuild.cpp
deleted file mode 100644
index a27fbe6..0000000
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuild.cpp
+++ /dev/null
@@ -1,6085 +0,0 @@
-//===-- SelectionDAGBuild.cpp - Selection-DAG building --------------------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This implements routines for translating from LLVM IR into SelectionDAG IR.
-//
-//===----------------------------------------------------------------------===//
-
-#define DEBUG_TYPE "isel"
-#include "SelectionDAGBuild.h"
-#include "llvm/ADT/BitVector.h"
-#include "llvm/ADT/SmallSet.h"
-#include "llvm/Analysis/AliasAnalysis.h"
-#include "llvm/Constants.h"
-#include "llvm/Constants.h"
-#include "llvm/CallingConv.h"
-#include "llvm/DerivedTypes.h"
-#include "llvm/Function.h"
-#include "llvm/GlobalVariable.h"
-#include "llvm/InlineAsm.h"
-#include "llvm/Instructions.h"
-#include "llvm/Intrinsics.h"
-#include "llvm/IntrinsicInst.h"
-#include "llvm/Module.h"
-#include "llvm/CodeGen/FastISel.h"
-#include "llvm/CodeGen/GCStrategy.h"
-#include "llvm/CodeGen/GCMetadata.h"
-#include "llvm/CodeGen/MachineFunction.h"
-#include "llvm/CodeGen/MachineFrameInfo.h"
-#include "llvm/CodeGen/MachineInstrBuilder.h"
-#include "llvm/CodeGen/MachineJumpTableInfo.h"
-#include "llvm/CodeGen/MachineModuleInfo.h"
-#include "llvm/CodeGen/MachineRegisterInfo.h"
-#include "llvm/CodeGen/PseudoSourceValue.h"
-#include "llvm/CodeGen/SelectionDAG.h"
-#include "llvm/CodeGen/DwarfWriter.h"
-#include "llvm/Analysis/DebugInfo.h"
-#include "llvm/Target/TargetRegisterInfo.h"
-#include "llvm/Target/TargetData.h"
-#include "llvm/Target/TargetFrameInfo.h"
-#include "llvm/Target/TargetInstrInfo.h"
-#include "llvm/Target/TargetIntrinsicInfo.h"
-#include "llvm/Target/TargetLowering.h"
-#include "llvm/Target/TargetOptions.h"
-#include "llvm/Support/Compiler.h"
-#include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Debug.h"
-#include "llvm/Support/ErrorHandling.h"
-#include "llvm/Support/MathExtras.h"
-#include "llvm/Support/raw_ostream.h"
-#include <algorithm>
-using namespace llvm;
-
-/// LimitFloatPrecision - Generate low-precision inline sequences for
-/// some float libcalls (6, 8 or 12 bits).
-static unsigned LimitFloatPrecision;
-
-static cl::opt<unsigned, true>
-LimitFPPrecision("limit-float-precision",
-                 cl::desc("Generate low-precision inline sequences "
-                          "for some float libcalls"),
-                 cl::location(LimitFloatPrecision),
-                 cl::init(0));
-
-/// ComputeLinearIndex - Given an LLVM IR aggregate type and a sequence
-/// of insertvalue or extractvalue indices that identify a member, return
-/// the linearized index of the start of the member.
-///
-static unsigned ComputeLinearIndex(const TargetLowering &TLI, const Type *Ty,
-                                   const unsigned *Indices,
-                                   const unsigned *IndicesEnd,
-                                   unsigned CurIndex = 0) {
-  // Base case: We're done.
-  if (Indices && Indices == IndicesEnd)
-    return CurIndex;
-
-  // Given a struct type, recursively traverse the elements.
-  if (const StructType *STy = dyn_cast<StructType>(Ty)) {
-    for (StructType::element_iterator EB = STy->element_begin(),
-                                      EI = EB,
-                                      EE = STy->element_end();
-        EI != EE; ++EI) {
-      if (Indices && *Indices == unsigned(EI - EB))
-        return ComputeLinearIndex(TLI, *EI, Indices+1, IndicesEnd, CurIndex);
-      CurIndex = ComputeLinearIndex(TLI, *EI, 0, 0, CurIndex);
-    }
-    return CurIndex;
-  }
-  // Given an array type, recursively traverse the elements.
-  else if (const ArrayType *ATy = dyn_cast<ArrayType>(Ty)) {
-    const Type *EltTy = ATy->getElementType();
-    for (unsigned i = 0, e = ATy->getNumElements(); i != e; ++i) {
-      if (Indices && *Indices == i)
-        return ComputeLinearIndex(TLI, EltTy, Indices+1, IndicesEnd, CurIndex);
-      CurIndex = ComputeLinearIndex(TLI, EltTy, 0, 0, CurIndex);
-    }
-    return CurIndex;
-  }
-  // We haven't found the type we're looking for, so keep searching.
-  return CurIndex + 1;
-}
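
ComputeLinearIndex flattens a nested aggregate into a pre-order count of its
scalar leaves and returns the running count at the member named by the index
path. For example, in a struct { i32, [2 x i64], float }, the path {1, 1}
names the second array element and yields linear index 2. A sketch over a
simplified type model; Ty below is a stand-in, not llvm::Type:

    // Simplified ComputeLinearIndex: a type is a scalar leaf (no elements)
    // or an aggregate; the linear index is a pre-order leaf count.
    #include <cstdio>
    #include <vector>

    struct Ty { std::vector<Ty> Elems; }; // empty Elems => scalar leaf

    unsigned linearIndex(const Ty &T, const unsigned *Idx, const unsigned *End,
                         unsigned Cur = 0) {
      if (Idx && Idx == End)
        return Cur;                 // whole path consumed: this is the member
      if (T.Elems.empty())
        return Cur + 1;             // scalar leaf occupies one slot
      for (unsigned i = 0, e = T.Elems.size(); i != e; ++i) {
        if (Idx && *Idx == i)       // path descends into element i
          return linearIndex(T.Elems[i], Idx + 1, End, Cur);
        Cur = linearIndex(T.Elems[i], nullptr, nullptr, Cur); // count skipped
      }
      return Cur;
    }

    int main() {
      Ty Scalar;                                 // a leaf
      Ty Arr;  Arr.Elems = {Scalar, Scalar};     // [2 x i64]
      Ty S;    S.Elems   = {Scalar, Arr, Scalar};
      unsigned Path[] = {1, 1};
      std::printf("%u\n", linearIndex(S, Path, Path + 2)); // prints 2
    }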
-
-/// ComputeValueVTs - Given an LLVM IR type, compute a sequence of
-/// EVTs that represent all the individual underlying
-/// non-aggregate types that comprise it.
-///
-/// If Offsets is non-null, it points to a vector to be filled in
-/// with the in-memory offsets of each of the individual values.
-///
-static void ComputeValueVTs(const TargetLowering &TLI, const Type *Ty,
-                            SmallVectorImpl<EVT> &ValueVTs,
-                            SmallVectorImpl<uint64_t> *Offsets = 0,
-                            uint64_t StartingOffset = 0) {
-  // Given a struct type, recursively traverse the elements.
-  if (const StructType *STy = dyn_cast<StructType>(Ty)) {
-    const StructLayout *SL = TLI.getTargetData()->getStructLayout(STy);
-    for (StructType::element_iterator EB = STy->element_begin(),
-                                      EI = EB,
-                                      EE = STy->element_end();
-         EI != EE; ++EI)
-      ComputeValueVTs(TLI, *EI, ValueVTs, Offsets,
-                      StartingOffset + SL->getElementOffset(EI - EB));
-    return;
-  }
-  // Given an array type, recursively traverse the elements.
-  if (const ArrayType *ATy = dyn_cast<ArrayType>(Ty)) {
-    const Type *EltTy = ATy->getElementType();
-    uint64_t EltSize = TLI.getTargetData()->getTypeAllocSize(EltTy);
-    for (unsigned i = 0, e = ATy->getNumElements(); i != e; ++i)
-      ComputeValueVTs(TLI, EltTy, ValueVTs, Offsets,
-                      StartingOffset + i * EltSize);
-    return;
-  }
-  // Interpret void as zero return values.
-  if (Ty == Type::getVoidTy(Ty->getContext()))
-    return;
-  // Base case: we can get an EVT for this LLVM IR type.
-  ValueVTs.push_back(TLI.getValueType(Ty));
-  if (Offsets)
-    Offsets->push_back(StartingOffset);
-}
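
ComputeValueVTs is the companion flattening: it walks the same aggregate
structure but emits one entry per scalar leaf, optionally recording each
leaf's byte offset. A sketch with uniform 4-byte leaves and no padding; the
real code instead asks TargetData for the struct layout:

    // Simplified ComputeValueVTs: flatten an aggregate into leaves and
    // record one byte offset per leaf. Assumes 4-byte scalars, no padding.
    #include <vector>

    struct Ty { std::vector<Ty> Elems; }; // empty Elems => scalar leaf

    unsigned sizeOf(const Ty &T) {
      if (T.Elems.empty()) return 4;      // assumed scalar size
      unsigned S = 0;
      for (const Ty &E : T.Elems) S += sizeOf(E);
      return S;
    }

    void flatten(const Ty &T, std::vector<unsigned> &Offsets,
                 unsigned Start = 0) {
      if (T.Elems.empty()) {              // leaf: one value, one offset
        Offsets.push_back(Start);
        return;
      }
      for (const Ty &E : T.Elems) {       // aggregates recurse element-wise
        flatten(E, Offsets, Start);
        Start += sizeOf(E);
      }
    }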
-
-namespace llvm {
-  /// RegsForValue - This struct represents the registers (physical or virtual)
-  /// that a particular set of values is assigned, and the type information about
-  /// the value. The most common situation is to represent one value at a time,
-  /// but struct or array values are handled element-wise as multiple values.
-  /// The splitting of aggregates is performed recursively, so that we never
-  /// have aggregate-typed registers. The values at this point do not necessarily
-  /// have legal types, so each value may require one or more registers of some
-  /// legal type.
-  ///
-  struct VISIBILITY_HIDDEN RegsForValue {
-    /// TLI - The TargetLowering object.
-    ///
-    const TargetLowering *TLI;
-
-    /// ValueVTs - The value types of the values, which may not be legal, and
-    /// may need to be promoted or synthesized from one or more registers.
-    ///
-    SmallVector<EVT, 4> ValueVTs;
-
-    /// RegVTs - The value types of the registers. This is the same size as
-    /// ValueVTs and it records, for each value, what the type of the assigned
-    /// register or registers are. (Individual values are never synthesized
-    /// from more than one type of register.)
-    ///
-    /// With virtual registers, the contents of RegVTs are redundant with
-    /// TLI's getRegisterType member function; with physical registers,
-    /// however, it is necessary to have a separate record of the types.
-    ///
-    SmallVector<EVT, 4> RegVTs;
-
-    /// Regs - This list holds the registers assigned to the values.
-    /// Each legal or promoted value requires one register, and each
-    /// expanded value requires multiple registers.
-    ///
-    SmallVector<unsigned, 4> Regs;
-
-    RegsForValue() : TLI(0) {}
-
-    RegsForValue(const TargetLowering &tli,
-                 const SmallVector<unsigned, 4> &regs,
-                 EVT regvt, EVT valuevt)
-      : TLI(&tli),  ValueVTs(1, valuevt), RegVTs(1, regvt), Regs(regs) {}
-    RegsForValue(const TargetLowering &tli,
-                 const SmallVector<unsigned, 4> &regs,
-                 const SmallVector<EVT, 4> &regvts,
-                 const SmallVector<EVT, 4> &valuevts)
-      : TLI(&tli), ValueVTs(valuevts), RegVTs(regvts), Regs(regs) {}
-    RegsForValue(LLVMContext &Context, const TargetLowering &tli,
-                 unsigned Reg, const Type *Ty) : TLI(&tli) {
-      ComputeValueVTs(tli, Ty, ValueVTs);
-
-      for (unsigned Value = 0, e = ValueVTs.size(); Value != e; ++Value) {
-        EVT ValueVT = ValueVTs[Value];
-        unsigned NumRegs = TLI->getNumRegisters(Context, ValueVT);
-        EVT RegisterVT = TLI->getRegisterType(Context, ValueVT);
-        for (unsigned i = 0; i != NumRegs; ++i)
-          Regs.push_back(Reg + i);
-        RegVTs.push_back(RegisterVT);
-        Reg += NumRegs;
-      }
-    }
-
-    /// append - Add the specified values to this one.
-    void append(const RegsForValue &RHS) {
-      TLI = RHS.TLI;
-      ValueVTs.append(RHS.ValueVTs.begin(), RHS.ValueVTs.end());
-      RegVTs.append(RHS.RegVTs.begin(), RHS.RegVTs.end());
-      Regs.append(RHS.Regs.begin(), RHS.Regs.end());
-    }
-
-
-    /// getCopyFromRegs - Emit a series of CopyFromReg nodes that copy from
-    /// this value, and return the result as a ValueVTs value.  This uses
-    /// Chain/Flag as the input and updates them for the output Chain/Flag.
-    /// If the Flag pointer is NULL, no flag is used.
-    SDValue getCopyFromRegs(SelectionDAG &DAG, DebugLoc dl,
-                              SDValue &Chain, SDValue *Flag) const;
-
-    /// getCopyToRegs - Emit a series of CopyToReg nodes that copy the
-    /// specified value into the registers specified by this object.  This uses
-    /// Chain/Flag as the input and updates them for the output Chain/Flag.
-    /// If the Flag pointer is NULL, no flag is used.
-    void getCopyToRegs(SDValue Val, SelectionDAG &DAG, DebugLoc dl,
-                       SDValue &Chain, SDValue *Flag) const;
-
-    /// AddInlineAsmOperands - Add this value to the specified inlineasm node
-    /// operand list.  This adds the code marker, matching input operand index
-    /// (if applicable), and includes the number of values added into it.
-    void AddInlineAsmOperands(unsigned Code,
-                              bool HasMatching, unsigned MatchingIdx,
-                              SelectionDAG &DAG, std::vector<SDValue> &Ops) const;
-  };
-}
-
-/// isUsedOutsideOfDefiningBlock - Return true if this instruction is a PHI
-/// node, or is used by a PHI node or outside of the basic block that
-/// defines it.
-static bool isUsedOutsideOfDefiningBlock(Instruction *I) {
-  if (isa<PHINode>(I)) return true;
-  BasicBlock *BB = I->getParent();
-  for (Value::use_iterator UI = I->use_begin(), E = I->use_end(); UI != E; ++UI)
-    if (cast<Instruction>(*UI)->getParent() != BB || isa<PHINode>(*UI))
-      return true;
-  return false;
-}
-
-/// isOnlyUsedInEntryBlock - If the specified argument is only used in the
-/// entry block, return true.  Uses by switches count as uses outside the
-/// entry block, since the switch may expand into multiple basic blocks.
-static bool isOnlyUsedInEntryBlock(Argument *A, bool EnableFastISel) {
-  // With FastISel active, we may be splitting blocks, so force creation
-  // of virtual registers for all non-dead arguments.
-  // Don't force virtual registers for byval arguments though, because
-  // fast-isel can't handle those in all cases.
-  if (EnableFastISel && !A->hasByValAttr())
-    return A->use_empty();
-
-  BasicBlock *Entry = A->getParent()->begin();
-  for (Value::use_iterator UI = A->use_begin(), E = A->use_end(); UI != E; ++UI)
-    if (cast<Instruction>(*UI)->getParent() != Entry || isa<SwitchInst>(*UI))
-      return false;  // Use not in entry block.
-  return true;
-}
-
-FunctionLoweringInfo::FunctionLoweringInfo(TargetLowering &tli)
-  : TLI(tli) {
-}
-
-void FunctionLoweringInfo::set(Function &fn, MachineFunction &mf,
-                               SelectionDAG &DAG,
-                               bool EnableFastISel) {
-  Fn = &fn;
-  MF = &mf;
-  RegInfo = &MF->getRegInfo();
-
-  // Create a vreg for each argument register that is not dead and is used
-  // outside of the entry block for the function.
-  for (Function::arg_iterator AI = Fn->arg_begin(), E = Fn->arg_end();
-       AI != E; ++AI)
-    if (!isOnlyUsedInEntryBlock(AI, EnableFastISel))
-      InitializeRegForValue(AI);
-
-  // Initialize the mapping of values to registers.  This is only set up for
-  // instruction values that are used outside of the block that defines
-  // them.
-  Function::iterator BB = Fn->begin(), EB = Fn->end();
-  for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ++I)
-    if (AllocaInst *AI = dyn_cast<AllocaInst>(I))
-      if (ConstantInt *CUI = dyn_cast<ConstantInt>(AI->getArraySize())) {
-        const Type *Ty = AI->getAllocatedType();
-        uint64_t TySize = TLI.getTargetData()->getTypeAllocSize(Ty);
-        unsigned Align =
-          std::max((unsigned)TLI.getTargetData()->getPrefTypeAlignment(Ty),
-                   AI->getAlignment());
-
-        TySize *= CUI->getZExtValue();   // Get total allocated size.
-        if (TySize == 0) TySize = 1; // Don't create zero-sized stack objects.
-        StaticAllocaMap[AI] =
-          MF->getFrameInfo()->CreateStackObject(TySize, Align);
-      }
-
-  for (; BB != EB; ++BB)
-    for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ++I)
-      if (!I->use_empty() && isUsedOutsideOfDefiningBlock(I))
-        if (!isa<AllocaInst>(I) ||
-            !StaticAllocaMap.count(cast<AllocaInst>(I)))
-          InitializeRegForValue(I);
-
-  // Create an initial MachineBasicBlock for each LLVM BasicBlock in F.  This
-  // also creates the initial PHI MachineInstrs, though none of the input
-  // operands are populated.
-  for (BB = Fn->begin(), EB = Fn->end(); BB != EB; ++BB) {
-    MachineBasicBlock *MBB = mf.CreateMachineBasicBlock(BB);
-    MBBMap[BB] = MBB;
-    MF->push_back(MBB);
-
-    // Create Machine PHI nodes for LLVM PHI nodes, lowering them as
-    // appropriate.
-    PHINode *PN;
-    DebugLoc DL;
-    for (BasicBlock::iterator
-           I = BB->begin(), E = BB->end(); I != E; ++I) {
-      if (CallInst *CI = dyn_cast<CallInst>(I)) {
-        if (Function *F = CI->getCalledFunction()) {
-          switch (F->getIntrinsicID()) {
-          default: break;
-          case Intrinsic::dbg_stoppoint: {
-            DbgStopPointInst *SPI = cast<DbgStopPointInst>(I);
-            if (isValidDebugInfoIntrinsic(*SPI, CodeGenOpt::Default)) 
-              DL = ExtractDebugLocation(*SPI, MF->getDebugLocInfo());
-            break;
-          }
-          case Intrinsic::dbg_func_start: {
-            DbgFuncStartInst *FSI = cast<DbgFuncStartInst>(I);
-            if (isValidDebugInfoIntrinsic(*FSI, CodeGenOpt::Default)) 
-              DL = ExtractDebugLocation(*FSI, MF->getDebugLocInfo());
-            break;
-          }
-          }
-        }
-      }
-
-      PN = dyn_cast<PHINode>(I);
-      if (!PN || PN->use_empty()) continue;
-
-      unsigned PHIReg = ValueMap[PN];
-      assert(PHIReg && "PHI node does not have an assigned virtual register!");
-
-      SmallVector<EVT, 4> ValueVTs;
-      ComputeValueVTs(TLI, PN->getType(), ValueVTs);
-      for (unsigned vti = 0, vte = ValueVTs.size(); vti != vte; ++vti) {
-        EVT VT = ValueVTs[vti];
-        unsigned NumRegisters = TLI.getNumRegisters(*DAG.getContext(), VT);
-        const TargetInstrInfo *TII = MF->getTarget().getInstrInfo();
-        for (unsigned i = 0; i != NumRegisters; ++i)
-          BuildMI(MBB, DL, TII->get(TargetInstrInfo::PHI), PHIReg + i);
-        PHIReg += NumRegisters;
-      }
-    }
-  }
-}
-
-unsigned FunctionLoweringInfo::MakeReg(EVT VT) {
-  return RegInfo->createVirtualRegister(TLI.getRegClassFor(VT));
-}
-
-/// CreateRegForValue - Allocate the appropriate number of virtual registers of
-/// the correctly promoted or expanded types.  Assign these registers
-/// consecutive vreg numbers and return the first assigned number.
-///
-/// In the case that the given value has struct or array type, this function
-/// will assign registers for each member or element.
-///
-unsigned FunctionLoweringInfo::CreateRegForValue(const Value *V) {
-  SmallVector<EVT, 4> ValueVTs;
-  ComputeValueVTs(TLI, V->getType(), ValueVTs);
-
-  unsigned FirstReg = 0;
-  for (unsigned Value = 0, e = ValueVTs.size(); Value != e; ++Value) {
-    EVT ValueVT = ValueVTs[Value];
-    EVT RegisterVT = TLI.getRegisterType(V->getContext(), ValueVT);
-
-    unsigned NumRegs = TLI.getNumRegisters(V->getContext(), ValueVT);
-    for (unsigned i = 0; i != NumRegs; ++i) {
-      unsigned R = MakeReg(RegisterVT);
-      if (!FirstReg) FirstReg = R;
-    }
-  }
-  return FirstReg;
-}
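
The register numbering here is easy to trace by hand. As a hedged sketch,
assume a 32-bit target where i64 legalizes to two i32 registers; an
{i64, i32} value then claims three consecutive vregs, and CreateRegForValue
returns the first. The starting vreg number below is hypothetical.

    #include <cstdio>

    int main() {
      unsigned NextVReg = 100;          // hypothetical next free vreg number
      unsigned RegsPerValue[] = {2, 1}; // {i64, i32} -> 2 x i32 + 1 x i32
      unsigned FirstReg = 0;
      for (unsigned N : RegsPerValue) {
        for (unsigned i = 0; i != N; ++i) {
          unsigned R = NextVReg++;      // stands in for MakeReg(RegisterVT)
          if (!FirstReg) FirstReg = R;
        }
      }
      std::printf("FirstReg=%u next=%u\n", FirstReg, NextVReg); // 100, 103
    }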
-
-/// getCopyFromParts - Create a value that contains the specified legal parts
-/// combined into the value they represent.  If the parts combine to a type
-/// larger than ValueVT, then AssertOp can be used to specify whether the extra
-/// bits are known to be zero (ISD::AssertZext) or sign extended from ValueVT
-/// (ISD::AssertSext).
-static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl,
-                                const SDValue *Parts,
-                                unsigned NumParts, EVT PartVT, EVT ValueVT,
-                                ISD::NodeType AssertOp = ISD::DELETED_NODE) {
-  assert(NumParts > 0 && "No parts to assemble!");
-  const TargetLowering &TLI = DAG.getTargetLoweringInfo();
-  SDValue Val = Parts[0];
-
-  if (NumParts > 1) {
-    // Assemble the value from multiple parts.
-    if (!ValueVT.isVector() && ValueVT.isInteger()) {
-      unsigned PartBits = PartVT.getSizeInBits();
-      unsigned ValueBits = ValueVT.getSizeInBits();
-
-      // Assemble the power of 2 part.
-      unsigned RoundParts = NumParts & (NumParts - 1) ?
-        1 << Log2_32(NumParts) : NumParts;
-      unsigned RoundBits = PartBits * RoundParts;
-      EVT RoundVT = RoundBits == ValueBits ?
-        ValueVT : EVT::getIntegerVT(*DAG.getContext(), RoundBits);
-      SDValue Lo, Hi;
-
-      EVT HalfVT = EVT::getIntegerVT(*DAG.getContext(), RoundBits/2);
-
-      if (RoundParts > 2) {
-        Lo = getCopyFromParts(DAG, dl, Parts, RoundParts/2, PartVT, HalfVT);
-        Hi = getCopyFromParts(DAG, dl, Parts+RoundParts/2, RoundParts/2,
-                              PartVT, HalfVT);
-      } else {
-        Lo = DAG.getNode(ISD::BIT_CONVERT, dl, HalfVT, Parts[0]);
-        Hi = DAG.getNode(ISD::BIT_CONVERT, dl, HalfVT, Parts[1]);
-      }
-      if (TLI.isBigEndian())
-        std::swap(Lo, Hi);
-      Val = DAG.getNode(ISD::BUILD_PAIR, dl, RoundVT, Lo, Hi);
-
-      if (RoundParts < NumParts) {
-        // Assemble the trailing non-power-of-2 part.
-        unsigned OddParts = NumParts - RoundParts;
-        EVT OddVT = EVT::getIntegerVT(*DAG.getContext(), OddParts * PartBits);
-        Hi = getCopyFromParts(DAG, dl,
-                              Parts+RoundParts, OddParts, PartVT, OddVT);
-
-        // Combine the round and odd parts.
-        Lo = Val;
-        if (TLI.isBigEndian())
-          std::swap(Lo, Hi);
-        EVT TotalVT = EVT::getIntegerVT(*DAG.getContext(), NumParts * PartBits);
-        Hi = DAG.getNode(ISD::ANY_EXTEND, dl, TotalVT, Hi);
-        Hi = DAG.getNode(ISD::SHL, dl, TotalVT, Hi,
-                         DAG.getConstant(Lo.getValueType().getSizeInBits(),
-                                         TLI.getPointerTy()));
-        Lo = DAG.getNode(ISD::ZERO_EXTEND, dl, TotalVT, Lo);
-        Val = DAG.getNode(ISD::OR, dl, TotalVT, Lo, Hi);
-      }
-    } else if (ValueVT.isVector()) {
-      // Handle a multi-element vector.
-      EVT IntermediateVT, RegisterVT;
-      unsigned NumIntermediates;
-      unsigned NumRegs =
-        TLI.getVectorTypeBreakdown(*DAG.getContext(), ValueVT, IntermediateVT, 
-                                   NumIntermediates, RegisterVT);
-      assert(NumRegs == NumParts && "Part count doesn't match vector breakdown!");
-      NumParts = NumRegs; // Silence a compiler warning.
-      assert(RegisterVT == PartVT && "Part type doesn't match vector breakdown!");
-      assert(RegisterVT == Parts[0].getValueType() &&
-             "Part type doesn't match part!");
-
-      // Assemble the parts into intermediate operands.
-      SmallVector<SDValue, 8> Ops(NumIntermediates);
-      if (NumIntermediates == NumParts) {
-        // If the register was not expanded, truncate or copy the value,
-        // as appropriate.
-        for (unsigned i = 0; i != NumParts; ++i)
-          Ops[i] = getCopyFromParts(DAG, dl, &Parts[i], 1,
-                                    PartVT, IntermediateVT);
-      } else if (NumParts > 0) {
-        // If the intermediate type was expanded, build the intermediate operands
-        // from the parts.
-        assert(NumParts % NumIntermediates == 0 &&
-               "Must expand into a divisible number of parts!");
-        unsigned Factor = NumParts / NumIntermediates;
-        for (unsigned i = 0; i != NumIntermediates; ++i)
-          Ops[i] = getCopyFromParts(DAG, dl, &Parts[i * Factor], Factor,
-                                    PartVT, IntermediateVT);
-      }
-
-      // Build a vector with BUILD_VECTOR or CONCAT_VECTORS from the intermediate
-      // operands.
-      Val = DAG.getNode(IntermediateVT.isVector() ?
-                        ISD::CONCAT_VECTORS : ISD::BUILD_VECTOR, dl,
-                        ValueVT, &Ops[0], NumIntermediates);
-    } else if (PartVT.isFloatingPoint()) {
-      // FP split into multiple FP parts (for ppcf128)
-      assert(ValueVT == EVT(MVT::ppcf128) && PartVT == EVT(MVT::f64) &&
-             "Unexpected split");
-      SDValue Lo, Hi;
-      Lo = DAG.getNode(ISD::BIT_CONVERT, dl, EVT(MVT::f64), Parts[0]);
-      Hi = DAG.getNode(ISD::BIT_CONVERT, dl, EVT(MVT::f64), Parts[1]);
-      if (TLI.isBigEndian())
-        std::swap(Lo, Hi);
-      Val = DAG.getNode(ISD::BUILD_PAIR, dl, ValueVT, Lo, Hi);
-    } else {
-      // FP split into integer parts (soft fp)
-      assert(ValueVT.isFloatingPoint() && PartVT.isInteger() &&
-             !PartVT.isVector() && "Unexpected split");
-      EVT IntVT = EVT::getIntegerVT(*DAG.getContext(), ValueVT.getSizeInBits());
-      Val = getCopyFromParts(DAG, dl, Parts, NumParts, PartVT, IntVT);
-    }
-  }
-
-  // There is now one part, held in Val.  Correct it to match ValueVT.
-  PartVT = Val.getValueType();
-
-  if (PartVT == ValueVT)
-    return Val;
-
-  if (PartVT.isVector()) {
-    assert(ValueVT.isVector() && "Unknown vector conversion!");
-    return DAG.getNode(ISD::BIT_CONVERT, dl, ValueVT, Val);
-  }
-
-  if (ValueVT.isVector()) {
-    assert(ValueVT.getVectorElementType() == PartVT &&
-           ValueVT.getVectorNumElements() == 1 &&
-           "Only trivial scalar-to-vector conversions should get here!");
-    return DAG.getNode(ISD::BUILD_VECTOR, dl, ValueVT, Val);
-  }
-
-  if (PartVT.isInteger() &&
-      ValueVT.isInteger()) {
-    if (ValueVT.bitsLT(PartVT)) {
-      // For a truncate, see if we have any information to
-      // indicate whether the truncated bits will always be
-      // zero or sign-extension.
-      if (AssertOp != ISD::DELETED_NODE)
-        Val = DAG.getNode(AssertOp, dl, PartVT, Val,
-                          DAG.getValueType(ValueVT));
-      return DAG.getNode(ISD::TRUNCATE, dl, ValueVT, Val);
-    } else {
-      return DAG.getNode(ISD::ANY_EXTEND, dl, ValueVT, Val);
-    }
-  }
-
-  if (PartVT.isFloatingPoint() && ValueVT.isFloatingPoint()) {
-    if (ValueVT.bitsLT(Val.getValueType()))
-      // FP_ROUND's are always exact here.
-      return DAG.getNode(ISD::FP_ROUND, dl, ValueVT, Val,
-                         DAG.getIntPtrConstant(1));
-    return DAG.getNode(ISD::FP_EXTEND, dl, ValueVT, Val);
-  }
-
-  if (PartVT.getSizeInBits() == ValueVT.getSizeInBits())
-    return DAG.getNode(ISD::BIT_CONVERT, dl, ValueVT, Val);
-
-  llvm_unreachable("Unknown mismatch!");
-  return SDValue();
-}
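
The round/odd split above is the subtle part. A minimal sketch with plain
integers (the local Log2_32 helper mirrors, but does not reuse, the LLVM
one) shows how a non-power-of-2 part count is divided:

    #include <cstdio>

    static unsigned Log2_32(unsigned V) { // floor(log2), like LLVM's helper
      unsigned L = 0;
      while (V >>= 1) ++L;
      return L;
    }

    int main() {
      unsigned NumParts = 3, PartBits = 32; // e.g. a 96-bit value in i32 parts
      unsigned RoundParts = (NumParts & (NumParts - 1))
                              ? 1u << Log2_32(NumParts) : NumParts;
      unsigned OddParts = NumParts - RoundParts;
      std::printf("round: %u parts / %u bits, odd: %u parts / %u bits\n",
                  RoundParts, RoundParts * PartBits,
                  OddParts, OddParts * PartBits); // round: 2/64, odd: 1/32
      // The odd piece is then any-extended, shifted left by the round width,
      // and OR'd on top, exactly as in the RoundParts < NumParts branch above.
    }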
-
-/// getCopyToParts - Create a series of nodes that contain the specified value
-/// split into legal parts.  If the parts contain more bits than Val, then, for
-/// integers, ExtendKind can be used to specify how to generate the extra bits.
-static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, SDValue Val,
-                           SDValue *Parts, unsigned NumParts, EVT PartVT,
-                           ISD::NodeType ExtendKind = ISD::ANY_EXTEND) {
-  const TargetLowering &TLI = DAG.getTargetLoweringInfo();
-  EVT PtrVT = TLI.getPointerTy();
-  EVT ValueVT = Val.getValueType();
-  unsigned PartBits = PartVT.getSizeInBits();
-  unsigned OrigNumParts = NumParts;
-  assert(TLI.isTypeLegal(PartVT) && "Copying to an illegal type!");
-
-  if (!NumParts)
-    return;
-
-  if (!ValueVT.isVector()) {
-    if (PartVT == ValueVT) {
-      assert(NumParts == 1 && "No-op copy with multiple parts!");
-      Parts[0] = Val;
-      return;
-    }
-
-    if (NumParts * PartBits > ValueVT.getSizeInBits()) {
-      // If the parts cover more bits than the value has, promote the value.
-      if (PartVT.isFloatingPoint() && ValueVT.isFloatingPoint()) {
-        assert(NumParts == 1 && "Do not know what to promote to!");
-        Val = DAG.getNode(ISD::FP_EXTEND, dl, PartVT, Val);
-      } else if (PartVT.isInteger() && ValueVT.isInteger()) {
-        ValueVT = EVT::getIntegerVT(*DAG.getContext(), NumParts * PartBits);
-        Val = DAG.getNode(ExtendKind, dl, ValueVT, Val);
-      } else {
-        llvm_unreachable("Unknown mismatch!");
-      }
-    } else if (PartBits == ValueVT.getSizeInBits()) {
-      // Different types of the same size.
-      assert(NumParts == 1 && PartVT != ValueVT);
-      Val = DAG.getNode(ISD::BIT_CONVERT, dl, PartVT, Val);
-    } else if (NumParts * PartBits < ValueVT.getSizeInBits()) {
-      // If the parts cover fewer bits than the value has, truncate the value.
-      if (PartVT.isInteger() && ValueVT.isInteger()) {
-        ValueVT = EVT::getIntegerVT(*DAG.getContext(), NumParts * PartBits);
-        Val = DAG.getNode(ISD::TRUNCATE, dl, ValueVT, Val);
-      } else {
-        llvm_unreachable("Unknown mismatch!");
-      }
-    }
-
-    // The value may have changed - recompute ValueVT.
-    ValueVT = Val.getValueType();
-    assert(NumParts * PartBits == ValueVT.getSizeInBits() &&
-           "Failed to tile the value with PartVT!");
-
-    if (NumParts == 1) {
-      assert(PartVT == ValueVT && "Type conversion failed!");
-      Parts[0] = Val;
-      return;
-    }
-
-    // Expand the value into multiple parts.
-    if (NumParts & (NumParts - 1)) {
-      // The number of parts is not a power of 2.  Split off and copy the tail.
-      assert(PartVT.isInteger() && ValueVT.isInteger() &&
-             "Do not know what to expand to!");
-      unsigned RoundParts = 1 << Log2_32(NumParts);
-      unsigned RoundBits = RoundParts * PartBits;
-      unsigned OddParts = NumParts - RoundParts;
-      SDValue OddVal = DAG.getNode(ISD::SRL, dl, ValueVT, Val,
-                                   DAG.getConstant(RoundBits,
-                                                   TLI.getPointerTy()));
-      getCopyToParts(DAG, dl, OddVal, Parts + RoundParts, OddParts, PartVT);
-      if (TLI.isBigEndian())
-        // The odd parts were reversed by getCopyToParts - unreverse them.
-        std::reverse(Parts + RoundParts, Parts + NumParts);
-      NumParts = RoundParts;
-      ValueVT = EVT::getIntegerVT(*DAG.getContext(), NumParts * PartBits);
-      Val = DAG.getNode(ISD::TRUNCATE, dl, ValueVT, Val);
-    }
-
-    // The number of parts is a power of 2.  Repeatedly bisect the value using
-    // EXTRACT_ELEMENT.
-    Parts[0] = DAG.getNode(ISD::BIT_CONVERT, dl,
-                           EVT::getIntegerVT(*DAG.getContext(), ValueVT.getSizeInBits()),
-                           Val);
-    for (unsigned StepSize = NumParts; StepSize > 1; StepSize /= 2) {
-      for (unsigned i = 0; i < NumParts; i += StepSize) {
-        unsigned ThisBits = StepSize * PartBits / 2;
-        EVT ThisVT = EVT::getIntegerVT(*DAG.getContext(), ThisBits);
-        SDValue &Part0 = Parts[i];
-        SDValue &Part1 = Parts[i+StepSize/2];
-
-        Part1 = DAG.getNode(ISD::EXTRACT_ELEMENT, dl,
-                            ThisVT, Part0,
-                            DAG.getConstant(1, PtrVT));
-        Part0 = DAG.getNode(ISD::EXTRACT_ELEMENT, dl,
-                            ThisVT, Part0,
-                            DAG.getConstant(0, PtrVT));
-
-        if (ThisBits == PartBits && ThisVT != PartVT) {
-          Part0 = DAG.getNode(ISD::BIT_CONVERT, dl, PartVT, Part0);
-          Part1 = DAG.getNode(ISD::BIT_CONVERT, dl, PartVT, Part1);
-        }
-      }
-    }
-
-    if (TLI.isBigEndian())
-      std::reverse(Parts, Parts + OrigNumParts);
-
-    return;
-  }
-
-  // Vector ValueVT.
-  if (NumParts == 1) {
-    if (PartVT != ValueVT) {
-      if (PartVT.isVector()) {
-        Val = DAG.getNode(ISD::BIT_CONVERT, dl, PartVT, Val);
-      } else {
-        assert(ValueVT.getVectorElementType() == PartVT &&
-               ValueVT.getVectorNumElements() == 1 &&
-               "Only trivial vector-to-scalar conversions should get here!");
-        Val = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl,
-                          PartVT, Val,
-                          DAG.getConstant(0, PtrVT));
-      }
-    }
-
-    Parts[0] = Val;
-    return;
-  }
-
-  // Handle a multi-element vector.
-  EVT IntermediateVT, RegisterVT;
-  unsigned NumIntermediates;
-  unsigned NumRegs = TLI.getVectorTypeBreakdown(*DAG.getContext(), ValueVT,
-                              IntermediateVT, NumIntermediates, RegisterVT);
-  unsigned NumElements = ValueVT.getVectorNumElements();
-
-  assert(NumRegs == NumParts && "Part count doesn't match vector breakdown!");
-  NumParts = NumRegs; // Silence a compiler warning.
-  assert(RegisterVT == PartVT && "Part type doesn't match vector breakdown!");
-
-  // Split the vector into intermediate operands.
-  SmallVector<SDValue, 8> Ops(NumIntermediates);
-  for (unsigned i = 0; i != NumIntermediates; ++i)
-    if (IntermediateVT.isVector())
-      Ops[i] = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl,
-                           IntermediateVT, Val,
-                           DAG.getConstant(i * (NumElements / NumIntermediates),
-                                           PtrVT));
-    else
-      Ops[i] = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl,
-                           IntermediateVT, Val,
-                           DAG.getConstant(i, PtrVT));
-
-  // Split the intermediate operands into legal parts.
-  if (NumParts == NumIntermediates) {
-    // If the register was not expanded, promote or copy the value,
-    // as appropriate.
-    for (unsigned i = 0; i != NumParts; ++i)
-      getCopyToParts(DAG, dl, Ops[i], &Parts[i], 1, PartVT);
-  } else if (NumParts > 0) {
-    // If the intermediate type was expanded, split each value into
-    // legal parts.
-    assert(NumParts % NumIntermediates == 0 &&
-           "Must expand into a divisible number of parts!");
-    unsigned Factor = NumParts / NumIntermediates;
-    for (unsigned i = 0; i != NumIntermediates; ++i)
-      getCopyToParts(DAG, dl, Ops[i], &Parts[i * Factor], Factor, PartVT);
-  }
-}
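
The power-of-2 expansion near the end of the scalar path works by repeated
bisection. Modeling it on a plain 64-bit value split into four 16-bit parts
(little-endian order, before the final big-endian reverse) gives:

    #include <cstdint>
    #include <cstdio>

    int main() {
      uint64_t Val = 0x1122334455667788ULL;
      const unsigned NumParts = 4, PartBits = 16;
      uint64_t Parts[NumParts] = {Val, 0, 0, 0};
      for (unsigned StepSize = NumParts; StepSize > 1; StepSize /= 2) {
        for (unsigned i = 0; i < NumParts; i += StepSize) {
          unsigned ThisBits = StepSize * PartBits / 2; // width after splitting
          uint64_t Whole = Parts[i];
          Parts[i + StepSize / 2] = Whole >> ThisBits; // EXTRACT_ELEMENT idx 1
          Parts[i] = Whole & ((1ULL << ThisBits) - 1); // EXTRACT_ELEMENT idx 0
        }
      }
      for (unsigned i = 0; i != NumParts; ++i)
        std::printf("Parts[%u] = 0x%llx\n", i, (unsigned long long)Parts[i]);
      // Parts[0..3] = 0x7788, 0x5566, 0x3344, 0x1122
    }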
-
-
-void SelectionDAGLowering::init(GCFunctionInfo *gfi, AliasAnalysis &aa) {
-  AA = &aa;
-  GFI = gfi;
-  TD = DAG.getTarget().getTargetData();
-}
-
-/// clear - Clear out the current SelectionDAG and the associated
-/// state and prepare this SelectionDAGLowering object to be used
-/// for a new block. This doesn't clear out information about
-/// additional blocks that are needed to complete switch lowering
-/// or PHI node updating; that information is cleared out as it is
-/// consumed.
-void SelectionDAGLowering::clear() {
-  NodeMap.clear();
-  PendingLoads.clear();
-  PendingExports.clear();
-  EdgeMapping.clear();
-  DAG.clear();
-  CurDebugLoc = DebugLoc::getUnknownLoc();
-  HasTailCall = false;
-}
-
-/// getRoot - Return the current virtual root of the Selection DAG,
-/// flushing any PendingLoad items. This must be done before emitting
-/// a store or any other node that may need to be ordered after any
-/// prior load instructions.
-///
-SDValue SelectionDAGLowering::getRoot() {
-  if (PendingLoads.empty())
-    return DAG.getRoot();
-
-  if (PendingLoads.size() == 1) {
-    SDValue Root = PendingLoads[0];
-    DAG.setRoot(Root);
-    PendingLoads.clear();
-    return Root;
-  }
-
-  // Otherwise, we have to make a token factor node.
-  SDValue Root = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(), MVT::Other,
-                               &PendingLoads[0], PendingLoads.size());
-  PendingLoads.clear();
-  DAG.setRoot(Root);
-  return Root;
-}
-
-/// getControlRoot - Similar to getRoot, but instead of flushing all the
-/// PendingLoad items, flush all the PendingExports items. It is necessary
-/// to do this before emitting a terminator instruction.
-///
-SDValue SelectionDAGLowering::getControlRoot() {
-  SDValue Root = DAG.getRoot();
-
-  if (PendingExports.empty())
-    return Root;
-
-  // Turn all of the CopyToReg chains into one factored node.
-  if (Root.getOpcode() != ISD::EntryToken) {
-    unsigned i = 0, e = PendingExports.size();
-    for (; i != e; ++i) {
-      assert(PendingExports[i].getNode()->getNumOperands() > 1);
-      if (PendingExports[i].getNode()->getOperand(0) == Root)
-        break;  // Don't add the root if we already indirectly depend on it.
-    }
-
-    if (i == e)
-      PendingExports.push_back(Root);
-  }
-
-  Root = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(), MVT::Other,
-                     &PendingExports[0],
-                     PendingExports.size());
-  PendingExports.clear();
-  DAG.setRoot(Root);
-  return Root;
-}
-
-void SelectionDAGLowering::visit(Instruction &I) {
-  visit(I.getOpcode(), I);
-}
-
-void SelectionDAGLowering::visit(unsigned Opcode, User &I) {
-  // Note: this doesn't use InstVisitor, because it has to work with
-  // ConstantExpr's in addition to instructions.
-  switch (Opcode) {
-  default: llvm_unreachable("Unknown instruction type encountered!");
-    // Build the switch statement using the Instruction.def file.
-#define HANDLE_INST(NUM, OPCODE, CLASS) \
-  case Instruction::OPCODE:return visit##OPCODE((CLASS&)I);
-#include "llvm/Instruction.def"
-  }
-}
-
-SDValue SelectionDAGLowering::getValue(const Value *V) {
-  SDValue &N = NodeMap[V];
-  if (N.getNode()) return N;
-
-  if (Constant *C = const_cast<Constant*>(dyn_cast<Constant>(V))) {
-    EVT VT = TLI.getValueType(V->getType(), true);
-
-    if (ConstantInt *CI = dyn_cast<ConstantInt>(C))
-      return N = DAG.getConstant(*CI, VT);
-
-    if (GlobalValue *GV = dyn_cast<GlobalValue>(C))
-      return N = DAG.getGlobalAddress(GV, VT);
-
-    if (isa<ConstantPointerNull>(C))
-      return N = DAG.getConstant(0, TLI.getPointerTy());
-
-    if (ConstantFP *CFP = dyn_cast<ConstantFP>(C))
-      return N = DAG.getConstantFP(*CFP, VT);
-
-    if (isa<UndefValue>(C) && !V->getType()->isAggregateType())
-      return N = DAG.getUNDEF(VT);
-
-    if (ConstantExpr *CE = dyn_cast<ConstantExpr>(C)) {
-      visit(CE->getOpcode(), *CE);
-      SDValue N1 = NodeMap[V];
-      assert(N1.getNode() && "visit didn't populate the ValueMap!");
-      return N1;
-    }
-
-    if (isa<ConstantStruct>(C) || isa<ConstantArray>(C)) {
-      SmallVector<SDValue, 4> Constants;
-      for (User::const_op_iterator OI = C->op_begin(), OE = C->op_end();
-           OI != OE; ++OI) {
-        SDNode *Val = getValue(*OI).getNode();
-        // If the operand is an empty aggregate, there are no values.
-        if (!Val) continue;
-        // Add each leaf value from the operand to the Constants list
-        // to form a flattened list of all the values.
-        for (unsigned i = 0, e = Val->getNumValues(); i != e; ++i)
-          Constants.push_back(SDValue(Val, i));
-      }
-      return DAG.getMergeValues(&Constants[0], Constants.size(),
-                                getCurDebugLoc());
-    }
-
-    if (isa<StructType>(C->getType()) || isa<ArrayType>(C->getType())) {
-      assert((isa<ConstantAggregateZero>(C) || isa<UndefValue>(C)) &&
-             "Unknown struct or array constant!");
-
-      SmallVector<EVT, 4> ValueVTs;
-      ComputeValueVTs(TLI, C->getType(), ValueVTs);
-      unsigned NumElts = ValueVTs.size();
-      if (NumElts == 0)
-        return SDValue(); // empty struct
-      SmallVector<SDValue, 4> Constants(NumElts);
-      for (unsigned i = 0; i != NumElts; ++i) {
-        EVT EltVT = ValueVTs[i];
-        if (isa<UndefValue>(C))
-          Constants[i] = DAG.getUNDEF(EltVT);
-        else if (EltVT.isFloatingPoint())
-          Constants[i] = DAG.getConstantFP(0, EltVT);
-        else
-          Constants[i] = DAG.getConstant(0, EltVT);
-      }
-      return DAG.getMergeValues(&Constants[0], NumElts, getCurDebugLoc());
-    }
-
-    const VectorType *VecTy = cast<VectorType>(V->getType());
-    unsigned NumElements = VecTy->getNumElements();
-
-    // Now that we know the number and type of the elements, get that number of
-    // elements into the Ops array based on what kind of constant it is.
-    SmallVector<SDValue, 16> Ops;
-    if (ConstantVector *CP = dyn_cast<ConstantVector>(C)) {
-      for (unsigned i = 0; i != NumElements; ++i)
-        Ops.push_back(getValue(CP->getOperand(i)));
-    } else {
-      assert(isa<ConstantAggregateZero>(C) && "Unknown vector constant!");
-      EVT EltVT = TLI.getValueType(VecTy->getElementType());
-
-      SDValue Op;
-      if (EltVT.isFloatingPoint())
-        Op = DAG.getConstantFP(0, EltVT);
-      else
-        Op = DAG.getConstant(0, EltVT);
-      Ops.assign(NumElements, Op);
-    }
-
-    // Create a BUILD_VECTOR node.
-    return NodeMap[V] = DAG.getNode(ISD::BUILD_VECTOR, getCurDebugLoc(),
-                                    VT, &Ops[0], Ops.size());
-  }
-
-  // If this is a static alloca, generate the frame index directly instead of
-  // computing the address.
-  if (const AllocaInst *AI = dyn_cast<AllocaInst>(V)) {
-    DenseMap<const AllocaInst*, int>::iterator SI =
-      FuncInfo.StaticAllocaMap.find(AI);
-    if (SI != FuncInfo.StaticAllocaMap.end())
-      return DAG.getFrameIndex(SI->second, TLI.getPointerTy());
-  }
-
-  unsigned InReg = FuncInfo.ValueMap[V];
-  assert(InReg && "Value not in map!");
-
-  RegsForValue RFV(*DAG.getContext(), TLI, InReg, V->getType());
-  SDValue Chain = DAG.getEntryNode();
-  return RFV.getCopyFromRegs(DAG, getCurDebugLoc(), Chain, NULL);
-}
-
-
-void SelectionDAGLowering::visitRet(ReturnInst &I) {
-  SDValue Chain = getControlRoot();
-  SmallVector<ISD::OutputArg, 8> Outs;
-  for (unsigned i = 0, e = I.getNumOperands(); i != e; ++i) {
-    SmallVector<EVT, 4> ValueVTs;
-    ComputeValueVTs(TLI, I.getOperand(i)->getType(), ValueVTs);
-    unsigned NumValues = ValueVTs.size();
-    if (NumValues == 0) continue;
-
-    SDValue RetOp = getValue(I.getOperand(i));
-    for (unsigned j = 0, f = NumValues; j != f; ++j) {
-      EVT VT = ValueVTs[j];
-
-      ISD::NodeType ExtendKind = ISD::ANY_EXTEND;
-
-      const Function *F = I.getParent()->getParent();
-      if (F->paramHasAttr(0, Attribute::SExt))
-        ExtendKind = ISD::SIGN_EXTEND;
-      else if (F->paramHasAttr(0, Attribute::ZExt))
-        ExtendKind = ISD::ZERO_EXTEND;
-
-      // FIXME: C calling convention requires the return type to be promoted to
-      // at least 32-bit. But this is not necessary for non-C calling
-      // conventions. The frontend should mark functions whose return values
-      // require promoting with signext or zeroext attributes.
-      if (ExtendKind != ISD::ANY_EXTEND && VT.isInteger()) {
-        EVT MinVT = TLI.getRegisterType(*DAG.getContext(), MVT::i32);
-        if (VT.bitsLT(MinVT))
-          VT = MinVT;
-      }
-
-      unsigned NumParts = TLI.getNumRegisters(*DAG.getContext(), VT);
-      EVT PartVT = TLI.getRegisterType(*DAG.getContext(), VT);
-      SmallVector<SDValue, 4> Parts(NumParts);
-      getCopyToParts(DAG, getCurDebugLoc(),
-                     SDValue(RetOp.getNode(), RetOp.getResNo() + j),
-                     &Parts[0], NumParts, PartVT, ExtendKind);
-
-      // 'inreg' on function refers to return value
-      ISD::ArgFlagsTy Flags = ISD::ArgFlagsTy();
-      if (F->paramHasAttr(0, Attribute::InReg))
-        Flags.setInReg();
-
-      // Propagate extension type if any
-      if (F->paramHasAttr(0, Attribute::SExt))
-        Flags.setSExt();
-      else if (F->paramHasAttr(0, Attribute::ZExt))
-        Flags.setZExt();
-
-      for (unsigned i = 0; i < NumParts; ++i)
-        Outs.push_back(ISD::OutputArg(Flags, Parts[i], /*isfixed=*/true));
-    }
-  }
-
-  bool isVarArg = DAG.getMachineFunction().getFunction()->isVarArg();
-  CallingConv::ID CallConv =
-    DAG.getMachineFunction().getFunction()->getCallingConv();
-  Chain = TLI.LowerReturn(Chain, CallConv, isVarArg,
-                          Outs, getCurDebugLoc(), DAG);
-
-  // Verify that the target's LowerReturn behaved as expected.
-  assert(Chain.getNode() && Chain.getValueType() == MVT::Other &&
-         "LowerReturn didn't return a valid chain!");
-
-  // Update the DAG with the new chain value resulting from return lowering.
-  DAG.setRoot(Chain);
-}
-
-/// CopyToExportRegsIfNeeded - If the given value has virtual registers
-/// created for it, emit nodes to copy the value into the virtual
-/// registers.
-void SelectionDAGLowering::CopyToExportRegsIfNeeded(Value *V) {
-  if (!V->use_empty()) {
-    DenseMap<const Value *, unsigned>::iterator VMI = FuncInfo.ValueMap.find(V);
-    if (VMI != FuncInfo.ValueMap.end())
-      CopyValueToVirtualRegister(V, VMI->second);
-  }
-}
-
-/// ExportFromCurrentBlock - If this condition isn't known to be exported from
-/// the current basic block, add it to ValueMap now so that we'll get a
-/// CopyTo/FromReg.
-void SelectionDAGLowering::ExportFromCurrentBlock(Value *V) {
-  // No need to export constants.
-  if (!isa<Instruction>(V) && !isa<Argument>(V)) return;
-
-  // Already exported?
-  if (FuncInfo.isExportedInst(V)) return;
-
-  unsigned Reg = FuncInfo.InitializeRegForValue(V);
-  CopyValueToVirtualRegister(V, Reg);
-}
-
-bool SelectionDAGLowering::isExportableFromCurrentBlock(Value *V,
-                                                    const BasicBlock *FromBB) {
-  // The operands of the setcc have to be in this block.  We don't know
-  // how to export them from some other block.
-  if (Instruction *VI = dyn_cast<Instruction>(V)) {
-    // Can export from current BB.
-    if (VI->getParent() == FromBB)
-      return true;
-
-    // Is already exported, noop.
-    return FuncInfo.isExportedInst(V);
-  }
-
-  // If this is an argument, we can export it if the BB is the entry block or
-  // if it is already exported.
-  if (isa<Argument>(V)) {
-    if (FromBB == &FromBB->getParent()->getEntryBlock())
-      return true;
-
-    // Otherwise, can only export this if it is already exported.
-    return FuncInfo.isExportedInst(V);
-  }
-
-  // Otherwise, constants can always be exported.
-  return true;
-}
-
-static bool InBlock(const Value *V, const BasicBlock *BB) {
-  if (const Instruction *I = dyn_cast<Instruction>(V))
-    return I->getParent() == BB;
-  return true;
-}
-
-/// getFCmpCondCode - Return the ISD condition code corresponding to
-/// the given LLVM IR floating-point condition code.  This includes
-/// consideration of global floating-point math flags.
-///
-static ISD::CondCode getFCmpCondCode(FCmpInst::Predicate Pred) {
-  ISD::CondCode FPC, FOC;
-  switch (Pred) {
-  case FCmpInst::FCMP_FALSE: FOC = FPC = ISD::SETFALSE; break;
-  case FCmpInst::FCMP_OEQ:   FOC = ISD::SETEQ; FPC = ISD::SETOEQ; break;
-  case FCmpInst::FCMP_OGT:   FOC = ISD::SETGT; FPC = ISD::SETOGT; break;
-  case FCmpInst::FCMP_OGE:   FOC = ISD::SETGE; FPC = ISD::SETOGE; break;
-  case FCmpInst::FCMP_OLT:   FOC = ISD::SETLT; FPC = ISD::SETOLT; break;
-  case FCmpInst::FCMP_OLE:   FOC = ISD::SETLE; FPC = ISD::SETOLE; break;
-  case FCmpInst::FCMP_ONE:   FOC = ISD::SETNE; FPC = ISD::SETONE; break;
-  case FCmpInst::FCMP_ORD:   FOC = FPC = ISD::SETO;   break;
-  case FCmpInst::FCMP_UNO:   FOC = FPC = ISD::SETUO;  break;
-  case FCmpInst::FCMP_UEQ:   FOC = ISD::SETEQ; FPC = ISD::SETUEQ; break;
-  case FCmpInst::FCMP_UGT:   FOC = ISD::SETGT; FPC = ISD::SETUGT; break;
-  case FCmpInst::FCMP_UGE:   FOC = ISD::SETGE; FPC = ISD::SETUGE; break;
-  case FCmpInst::FCMP_ULT:   FOC = ISD::SETLT; FPC = ISD::SETULT; break;
-  case FCmpInst::FCMP_ULE:   FOC = ISD::SETLE; FPC = ISD::SETULE; break;
-  case FCmpInst::FCMP_UNE:   FOC = ISD::SETNE; FPC = ISD::SETUNE; break;
-  case FCmpInst::FCMP_TRUE:  FOC = FPC = ISD::SETTRUE; break;
-  default:
-    llvm_unreachable("Invalid FCmp predicate opcode!");
-    FOC = FPC = ISD::SETFALSE;
-    break;
-  }
-  if (FiniteOnlyFPMath())
-    return FOC;
-  else
-    return FPC;
-}
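
What the FOC/FPC pair encodes is the NaN behavior of each predicate. A small
check (hedged: this only illustrates the IEEE semantics, not the lowering
itself) makes the ordered/unordered distinction concrete:

    #include <cmath>
    #include <cstdio>

    int main() {
      double A = std::nan(""), B = 1.0;
      std::printf("OEQ -> %d\n", A == B);           // ordered eq: 0 with NaN
      std::printf("UNE -> %d\n", !(A == B));        // unordered ne: 1 with NaN
      std::printf("ORD -> %d\n", A == A && B == B); // 0: a NaN is present
      // Under FiniteOnlyFPMath() NaNs are assumed away, so the cheaper FOC
      // codes suffice; otherwise the NaN-aware FPC codes are required.
    }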
-
-/// getICmpCondCode - Return the ISD condition code corresponding to
-/// the given LLVM IR integer condition code.
-///
-static ISD::CondCode getICmpCondCode(ICmpInst::Predicate Pred) {
-  switch (Pred) {
-  case ICmpInst::ICMP_EQ:  return ISD::SETEQ;
-  case ICmpInst::ICMP_NE:  return ISD::SETNE;
-  case ICmpInst::ICMP_SLE: return ISD::SETLE;
-  case ICmpInst::ICMP_ULE: return ISD::SETULE;
-  case ICmpInst::ICMP_SGE: return ISD::SETGE;
-  case ICmpInst::ICMP_UGE: return ISD::SETUGE;
-  case ICmpInst::ICMP_SLT: return ISD::SETLT;
-  case ICmpInst::ICMP_ULT: return ISD::SETULT;
-  case ICmpInst::ICMP_SGT: return ISD::SETGT;
-  case ICmpInst::ICMP_UGT: return ISD::SETUGT;
-  default:
-    llvm_unreachable("Invalid ICmp predicate opcode!");
-    return ISD::SETNE;
-  }
-}
-
-/// EmitBranchForMergedCondition - Helper method for FindMergedConditions.
-/// This function emits a branch and is used at the leaves of an OR or an
-/// AND operator tree.
-///
-void
-SelectionDAGLowering::EmitBranchForMergedCondition(Value *Cond,
-                                                   MachineBasicBlock *TBB,
-                                                   MachineBasicBlock *FBB,
-                                                   MachineBasicBlock *CurBB) {
-  const BasicBlock *BB = CurBB->getBasicBlock();
-
-  // If the leaf of the tree is a comparison, merge the condition into
-  // the caseblock.
-  if (CmpInst *BOp = dyn_cast<CmpInst>(Cond)) {
-    // The operands of the cmp have to be in this block.  We don't know
-    // how to export them from some other block.  If this is the first block
-    // of the sequence, no exporting is needed.
-    if (CurBB == CurMBB ||
-        (isExportableFromCurrentBlock(BOp->getOperand(0), BB) &&
-         isExportableFromCurrentBlock(BOp->getOperand(1), BB))) {
-      ISD::CondCode Condition;
-      if (ICmpInst *IC = dyn_cast<ICmpInst>(Cond)) {
-        Condition = getICmpCondCode(IC->getPredicate());
-      } else if (FCmpInst *FC = dyn_cast<FCmpInst>(Cond)) {
-        Condition = getFCmpCondCode(FC->getPredicate());
-      } else {
-        Condition = ISD::SETEQ; // silence warning.
-        llvm_unreachable("Unknown compare instruction");
-      }
-
-      CaseBlock CB(Condition, BOp->getOperand(0),
-                   BOp->getOperand(1), NULL, TBB, FBB, CurBB);
-      SwitchCases.push_back(CB);
-      return;
-    }
-  }
-
-  // Create a CaseBlock record representing this branch.
-  CaseBlock CB(ISD::SETEQ, Cond, ConstantInt::getTrue(*DAG.getContext()),
-               NULL, TBB, FBB, CurBB);
-  SwitchCases.push_back(CB);
-}
-
-/// FindMergedConditions - If Cond is an expression like (X && Y) or (X || Y),
-/// recurse on the operand tree, emitting a branch for each leaf condition and
-/// creating the intermediate blocks; otherwise emit Cond as a single branch.
-void SelectionDAGLowering::FindMergedConditions(Value *Cond,
-                                                MachineBasicBlock *TBB,
-                                                MachineBasicBlock *FBB,
-                                                MachineBasicBlock *CurBB,
-                                                unsigned Opc) {
-  // If this node is not part of the or/and tree, emit it as a branch.
-  Instruction *BOp = dyn_cast<Instruction>(Cond);
-  if (!BOp || !(isa<BinaryOperator>(BOp) || isa<CmpInst>(BOp)) ||
-      (unsigned)BOp->getOpcode() != Opc || !BOp->hasOneUse() ||
-      BOp->getParent() != CurBB->getBasicBlock() ||
-      !InBlock(BOp->getOperand(0), CurBB->getBasicBlock()) ||
-      !InBlock(BOp->getOperand(1), CurBB->getBasicBlock())) {
-    EmitBranchForMergedCondition(Cond, TBB, FBB, CurBB);
-    return;
-  }
-
-  //  Create TmpBB after CurBB.
-  MachineFunction::iterator BBI = CurBB;
-  MachineFunction &MF = DAG.getMachineFunction();
-  MachineBasicBlock *TmpBB = MF.CreateMachineBasicBlock(CurBB->getBasicBlock());
-  CurBB->getParent()->insert(++BBI, TmpBB);
-
-  if (Opc == Instruction::Or) {
-    // Codegen X | Y as:
-    //   jmp_if_X TBB
-    //   jmp TmpBB
-    // TmpBB:
-    //   jmp_if_Y TBB
-    //   jmp FBB
-    //
-
-    // Emit the LHS condition.
-    FindMergedConditions(BOp->getOperand(0), TBB, TmpBB, CurBB, Opc);
-
-    // Emit the RHS condition into TmpBB.
-    FindMergedConditions(BOp->getOperand(1), TBB, FBB, TmpBB, Opc);
-  } else {
-    assert(Opc == Instruction::And && "Unknown merge op!");
-    // Codegen X & Y as:
-    //   jmp_if_X TmpBB
-    //   jmp FBB
-    // TmpBB:
-    //   jmp_if_Y TBB
-    //   jmp FBB
-    //
-    //  This requires creation of TmpBB after CurBB.
-
-    // Emit the LHS condition.
-    FindMergedConditions(BOp->getOperand(0), TmpBB, FBB, CurBB, Opc);
-
-    // Emit the RHS condition into TmpBB.
-    FindMergedConditions(BOp->getOperand(1), TBB, FBB, TmpBB, Opc);
-  }
-}
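
The control flow the two recursions build matches the comments above.
Rendered as source-level gotos (an illustrative sketch, with hypothetical
label names), the Or case is:

    #include <cstdio>

    static void LowerOr(bool X, bool Y) { // models "if (X || Y) T else F"
      if (X) goto TBB;  // jmp_if_X TBB
      goto TmpBB;       // jmp TmpBB
    TmpBB:
      if (Y) goto TBB;  // jmp_if_Y TBB
      goto FBB;         // jmp FBB
    TBB:
      std::printf("true edge\n");
      return;
    FBB:
      std::printf("false edge\n");
    }

    int main() { LowerOr(false, true); } // reaches TBB through TmpBB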
-
-/// If the set of cases should be emitted as a series of branches, return true.
-/// If we should emit this as a bunch of and/or'd together conditions, return
-/// false.
-bool
-SelectionDAGLowering::ShouldEmitAsBranches(const std::vector<CaseBlock> &Cases){
-  if (Cases.size() != 2) return true;
-
-  // If this is two comparisons of the same values or'd or and'd together, they
-  // will get folded into a single comparison, so don't emit two blocks.
-  if ((Cases[0].CmpLHS == Cases[1].CmpLHS &&
-       Cases[0].CmpRHS == Cases[1].CmpRHS) ||
-      (Cases[0].CmpRHS == Cases[1].CmpLHS &&
-       Cases[0].CmpLHS == Cases[1].CmpRHS)) {
-    return false;
-  }
-
-  return true;
-}
-
-void SelectionDAGLowering::visitBr(BranchInst &I) {
-  // Update machine-CFG edges.
-  MachineBasicBlock *Succ0MBB = FuncInfo.MBBMap[I.getSuccessor(0)];
-
-  // Figure out which block is immediately after the current one.
-  MachineBasicBlock *NextBlock = 0;
-  MachineFunction::iterator BBI = CurMBB;
-  if (++BBI != FuncInfo.MF->end())
-    NextBlock = BBI;
-
-  if (I.isUnconditional()) {
-    // Update machine-CFG edges.
-    CurMBB->addSuccessor(Succ0MBB);
-
-    // If this is not a fall-through branch, emit the branch.
-    if (Succ0MBB != NextBlock)
-      DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(),
-                              MVT::Other, getControlRoot(),
-                              DAG.getBasicBlock(Succ0MBB)));
-    return;
-  }
-
-  // If this condition is one of the special cases we handle, do special stuff
-  // now.
-  Value *CondVal = I.getCondition();
-  MachineBasicBlock *Succ1MBB = FuncInfo.MBBMap[I.getSuccessor(1)];
-
-  // If this is a series of conditions that are or'd or and'd together, emit
-  // this as a sequence of branches instead of setcc's with and/or operations.
-  // For example, instead of something like:
-  //     cmp A, B
-  //     C = seteq
-  //     cmp D, E
-  //     F = setle
-  //     or C, F
-  //     jnz foo
-  // Emit:
-  //     cmp A, B
-  //     je foo
-  //     cmp D, E
-  //     jle foo
-  //
-  if (BinaryOperator *BOp = dyn_cast<BinaryOperator>(CondVal)) {
-    if (BOp->hasOneUse() &&
-        (BOp->getOpcode() == Instruction::And ||
-         BOp->getOpcode() == Instruction::Or)) {
-      FindMergedConditions(BOp, Succ0MBB, Succ1MBB, CurMBB, BOp->getOpcode());
-      // If the compares in later blocks need to use values not currently
-      // exported from this block, export them now.  This block should always
-      // be the first entry.
-      assert(SwitchCases[0].ThisBB == CurMBB && "Unexpected lowering!");
-
-      // Allow some cases to be rejected.
-      if (ShouldEmitAsBranches(SwitchCases)) {
-        for (unsigned i = 1, e = SwitchCases.size(); i != e; ++i) {
-          ExportFromCurrentBlock(SwitchCases[i].CmpLHS);
-          ExportFromCurrentBlock(SwitchCases[i].CmpRHS);
-        }
-
-        // Emit the branch for this block.
-        visitSwitchCase(SwitchCases[0]);
-        SwitchCases.erase(SwitchCases.begin());
-        return;
-      }
-
-      // Okay, we decided not to do this, remove any inserted MBB's and clear
-      // SwitchCases.
-      for (unsigned i = 1, e = SwitchCases.size(); i != e; ++i)
-        FuncInfo.MF->erase(SwitchCases[i].ThisBB);
-
-      SwitchCases.clear();
-    }
-  }
-
-  // Create a CaseBlock record representing this branch.
-  CaseBlock CB(ISD::SETEQ, CondVal, ConstantInt::getTrue(*DAG.getContext()),
-               NULL, Succ0MBB, Succ1MBB, CurMBB);
-  // Use visitSwitchCase to actually insert the fast branch sequence for this
-  // cond branch.
-  visitSwitchCase(CB);
-}
-
-/// visitSwitchCase - Emits the necessary code to represent a single node in
-/// the binary search tree resulting from lowering a switch instruction.
-void SelectionDAGLowering::visitSwitchCase(CaseBlock &CB) {
-  SDValue Cond;
-  SDValue CondLHS = getValue(CB.CmpLHS);
-  DebugLoc dl = getCurDebugLoc();
-
-  // Build the setcc now.
-  if (CB.CmpMHS == NULL) {
-    // Fold "(X == true)" to X and "(X == false)" to !X to
-    // handle common cases produced by branch lowering.
-    if (CB.CmpRHS == ConstantInt::getTrue(*DAG.getContext()) &&
-        CB.CC == ISD::SETEQ)
-      Cond = CondLHS;
-    else if (CB.CmpRHS == ConstantInt::getFalse(*DAG.getContext()) &&
-             CB.CC == ISD::SETEQ) {
-      SDValue True = DAG.getConstant(1, CondLHS.getValueType());
-      Cond = DAG.getNode(ISD::XOR, dl, CondLHS.getValueType(), CondLHS, True);
-    } else
-      Cond = DAG.getSetCC(dl, MVT::i1, CondLHS, getValue(CB.CmpRHS), CB.CC);
-  } else {
-    assert(CB.CC == ISD::SETLE && "Can handle only LE ranges now");
-
-    const APInt& Low = cast<ConstantInt>(CB.CmpLHS)->getValue();
-    const APInt& High  = cast<ConstantInt>(CB.CmpRHS)->getValue();
-
-    SDValue CmpOp = getValue(CB.CmpMHS);
-    EVT VT = CmpOp.getValueType();
-
-    if (cast<ConstantInt>(CB.CmpLHS)->isMinValue(true)) {
-      Cond = DAG.getSetCC(dl, MVT::i1, CmpOp, DAG.getConstant(High, VT),
-                          ISD::SETLE);
-    } else {
-      SDValue SUB = DAG.getNode(ISD::SUB, dl,
-                                VT, CmpOp, DAG.getConstant(Low, VT));
-      Cond = DAG.getSetCC(dl, MVT::i1, SUB,
-                          DAG.getConstant(High-Low, VT), ISD::SETULE);
-    }
-  }
-
-  // Update successor info
-  CurMBB->addSuccessor(CB.TrueBB);
-  CurMBB->addSuccessor(CB.FalseBB);
-
-  // Set NextBlock to be the MBB immediately after the current one, if any.
-  // This is used to avoid emitting unnecessary branches to the next block.
-  MachineBasicBlock *NextBlock = 0;
-  MachineFunction::iterator BBI = CurMBB;
-  if (++BBI != FuncInfo.MF->end())
-    NextBlock = BBI;
-
-  // If the lhs block is the next block, invert the condition so that we can
-  // fall through to the lhs instead of the rhs block.
-  if (CB.TrueBB == NextBlock) {
-    std::swap(CB.TrueBB, CB.FalseBB);
-    SDValue True = DAG.getConstant(1, Cond.getValueType());
-    Cond = DAG.getNode(ISD::XOR, dl, Cond.getValueType(), Cond, True);
-  }
-  SDValue BrCond = DAG.getNode(ISD::BRCOND, dl,
-                               MVT::Other, getControlRoot(), Cond,
-                               DAG.getBasicBlock(CB.TrueBB));
-
-  // If the branch was constant folded, fix up the CFG.
-  if (BrCond.getOpcode() == ISD::BR) {
-    CurMBB->removeSuccessor(CB.FalseBB);
-    DAG.setRoot(BrCond);
-  } else {
-    // Otherwise, go ahead and insert the false branch.
-    if (BrCond == getControlRoot())
-      CurMBB->removeSuccessor(CB.TrueBB);
-
-    if (CB.FalseBB == NextBlock)
-      DAG.setRoot(BrCond);
-    else
-      DAG.setRoot(DAG.getNode(ISD::BR, dl, MVT::Other, BrCond,
-                              DAG.getBasicBlock(CB.FalseBB)));
-  }
-}
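
The SUB/SETULE pair in the range case is the classic unsigned range-check
trick: Low <= X && X <= High collapses into one unsigned comparison. A
minimal sketch:

    #include <cstdint>
    #include <cstdio>

    static bool InRange(int32_t X, int32_t Low, int32_t High) {
      // (X - Low) <=u (High - Low); values below Low wrap to huge unsigneds.
      return (uint32_t)X - (uint32_t)Low <= (uint32_t)High - (uint32_t)Low;
    }

    int main() {
      std::printf("%d %d %d\n", InRange(5, 3, 9),   // 1: inside
                  InRange(2, 3, 9),                 // 0: below
                  InRange(10, 3, 9));               // 0: above
    }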
-
-/// visitJumpTable - Emit JumpTable node in the current MBB
-void SelectionDAGLowering::visitJumpTable(JumpTable &JT) {
-  // Emit the code for the jump table
-  assert(JT.Reg != -1U && "Should lower JT Header first!");
-  EVT PTy = TLI.getPointerTy();
-  SDValue Index = DAG.getCopyFromReg(getControlRoot(), getCurDebugLoc(),
-                                     JT.Reg, PTy);
-  SDValue Table = DAG.getJumpTable(JT.JTI, PTy);
-  DAG.setRoot(DAG.getNode(ISD::BR_JT, getCurDebugLoc(),
-                          MVT::Other, Index.getValue(1),
-                          Table, Index));
-}
-
-/// visitJumpTableHeader - This function emits the code needed to produce the
-/// jump table index from the value being switched on.
-void SelectionDAGLowering::visitJumpTableHeader(JumpTable &JT,
-                                                JumpTableHeader &JTH) {
-  // Subtract the lowest switch case value from the value being switched on and
-  // conditional branch to default mbb if the result is greater than the
-  // difference between smallest and largest cases.
-  SDValue SwitchOp = getValue(JTH.SValue);
-  EVT VT = SwitchOp.getValueType();
-  SDValue SUB = DAG.getNode(ISD::SUB, getCurDebugLoc(), VT, SwitchOp,
-                            DAG.getConstant(JTH.First, VT));
-
-  // The SDNode we just created, which holds the value being switched on minus
-  // the smallest case value, needs to be copied to a virtual register so it
-  // can be used as an index into the jump table in a subsequent basic block.
-  // This value may be smaller or larger than the target's pointer type, and
-  // therefore may require extension or truncation.
-  if (VT.bitsGT(TLI.getPointerTy()))
-    SwitchOp = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(),
-                           TLI.getPointerTy(), SUB);
-  else
-    SwitchOp = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(),
-                           TLI.getPointerTy(), SUB);
-
-  unsigned JumpTableReg = FuncInfo.MakeReg(TLI.getPointerTy());
-  SDValue CopyTo = DAG.getCopyToReg(getControlRoot(), getCurDebugLoc(),
-                                    JumpTableReg, SwitchOp);
-  JT.Reg = JumpTableReg;
-
-  // Emit the range check for the jump table, and branch to the default block
-  // for the switch statement if the value being switched on exceeds the largest
-  // case in the switch.
-  SDValue CMP = DAG.getSetCC(getCurDebugLoc(),
-                             TLI.getSetCCResultType(SUB.getValueType()), SUB,
-                             DAG.getConstant(JTH.Last-JTH.First,VT),
-                             ISD::SETUGT);
-
-  // Set NextBlock to be the MBB immediately after the current one, if any.
-  // This is used to avoid emitting unnecessary branches to the next block.
-  MachineBasicBlock *NextBlock = 0;
-  MachineFunction::iterator BBI = CurMBB;
-  if (++BBI != FuncInfo.MF->end())
-    NextBlock = BBI;
-
-  SDValue BrCond = DAG.getNode(ISD::BRCOND, getCurDebugLoc(),
-                               MVT::Other, CopyTo, CMP,
-                               DAG.getBasicBlock(JT.Default));
-
-  if (JT.MBB == NextBlock)
-    DAG.setRoot(BrCond);
-  else
-    DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, BrCond,
-                            DAG.getBasicBlock(JT.MBB)));
-}
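
Put together, this header plus visitJumpTable amount to the following
dispatch shape (a hedged sketch; the block labels and case values are
invented for the example):

    #include <cstdint>
    #include <cstdio>
    #include <initializer_list>

    int main() {
      // Cases {10, 11, 13}; value 12 falls into a default "hole" in the table.
      const char *Table[] = {"bb10", "bb11", "bb.default", "bb13"};
      int First = 10, Last = 13;
      for (int X : {9, 11, 12, 14}) {
        uint32_t Idx = (uint32_t)(X - First);  // the SUB node
        if (Idx > (uint32_t)(Last - First))    // the SETUGT range check
          std::printf("X=%d -> bb.default\n", X);
        else
          std::printf("X=%d -> %s\n", X, Table[Idx]); // the BR_JT branch
      }
    }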
-
-/// visitBitTestHeader - This function emits the code needed to produce a value
-/// suitable for "bit tests".
-void SelectionDAGLowering::visitBitTestHeader(BitTestBlock &B) {
-  // Subtract the minimum value
-  SDValue SwitchOp = getValue(B.SValue);
-  EVT VT = SwitchOp.getValueType();
-  SDValue SUB = DAG.getNode(ISD::SUB, getCurDebugLoc(), VT, SwitchOp,
-                            DAG.getConstant(B.First, VT));
-
-  // Check range
-  SDValue RangeCmp = DAG.getSetCC(getCurDebugLoc(),
-                                  TLI.getSetCCResultType(SUB.getValueType()),
-                                  SUB, DAG.getConstant(B.Range, VT),
-                                  ISD::SETUGT);
-
-  SDValue ShiftOp;
-  if (VT.bitsGT(TLI.getPointerTy()))
-    ShiftOp = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(),
-                          TLI.getPointerTy(), SUB);
-  else
-    ShiftOp = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(),
-                          TLI.getPointerTy(), SUB);
-
-  B.Reg = FuncInfo.MakeReg(TLI.getPointerTy());
-  SDValue CopyTo = DAG.getCopyToReg(getControlRoot(), getCurDebugLoc(),
-                                    B.Reg, ShiftOp);
-
-  // Set NextBlock to be the MBB immediately after the current one, if any.
-  // This is used to avoid emitting unnecessary branches to the next block.
-  MachineBasicBlock *NextBlock = 0;
-  MachineFunction::iterator BBI = CurMBB;
-  if (++BBI != FuncInfo.MF->end())
-    NextBlock = BBI;
-
-  MachineBasicBlock* MBB = B.Cases[0].ThisBB;
-
-  CurMBB->addSuccessor(B.Default);
-  CurMBB->addSuccessor(MBB);
-
-  SDValue BrRange = DAG.getNode(ISD::BRCOND, getCurDebugLoc(),
-                                MVT::Other, CopyTo, RangeCmp,
-                                DAG.getBasicBlock(B.Default));
-
-  if (MBB == NextBlock)
-    DAG.setRoot(BrRange);
-  else
-    DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, BrRange,
-                            DAG.getBasicBlock(MBB)));
-}
-
-/// visitBitTestCase - This function produces one "bit test".
-void SelectionDAGLowering::visitBitTestCase(MachineBasicBlock* NextMBB,
-                                            unsigned Reg,
-                                            BitTestCase &B) {
-  // Make desired shift
-  SDValue ShiftOp = DAG.getCopyFromReg(getControlRoot(), getCurDebugLoc(), Reg,
-                                       TLI.getPointerTy());
-  SDValue SwitchVal = DAG.getNode(ISD::SHL, getCurDebugLoc(),
-                                  TLI.getPointerTy(),
-                                  DAG.getConstant(1, TLI.getPointerTy()),
-                                  ShiftOp);
-
-  // Emit bit tests and jumps
-  SDValue AndOp = DAG.getNode(ISD::AND, getCurDebugLoc(),
-                              TLI.getPointerTy(), SwitchVal,
-                              DAG.getConstant(B.Mask, TLI.getPointerTy()));
-  SDValue AndCmp = DAG.getSetCC(getCurDebugLoc(),
-                                TLI.getSetCCResultType(AndOp.getValueType()),
-                                AndOp, DAG.getConstant(0, TLI.getPointerTy()),
-                                ISD::SETNE);
-
-  CurMBB->addSuccessor(B.TargetBB);
-  CurMBB->addSuccessor(NextMBB);
-
-  SDValue BrAnd = DAG.getNode(ISD::BRCOND, getCurDebugLoc(),
-                              MVT::Other, getControlRoot(),
-                              AndCmp, DAG.getBasicBlock(B.TargetBB));
-
-  // Set NextBlock to be the MBB immediately after the current one, if any.
-  // This is used to avoid emitting unnecessary branches to the next block.
-  MachineBasicBlock *NextBlock = 0;
-  MachineFunction::iterator BBI = CurMBB;
-  if (++BBI != FuncInfo.MF->end())
-    NextBlock = BBI;
-
-  if (NextMBB == NextBlock)
-    DAG.setRoot(BrAnd);
-  else
-    DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, BrAnd,
-                            DAG.getBasicBlock(NextMBB)));
-}
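
The SHL/AND/SETNE triple implements the one-instruction membership test that
bit-test lowering is after: shift 1 left by the (rebased) switch value and
mask against the cluster's bit mask. In plain C++:

    #include <cstdint>
    #include <cstdio>

    int main() {
      // Membership mask for the case values {1, 3, 6}, i.e. B.Mask above.
      const uint64_t Mask = (1ULL << 1) | (1ULL << 3) | (1ULL << 6);
      for (unsigned X = 0; X != 8; ++X) {
        bool Hit = ((1ULL << X) & Mask) != 0; // SHL of 1 by X, AND, SETNE 0
        std::printf("X=%u -> %s\n", X, Hit ? "TargetBB" : "NextMBB");
      }
    }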
-
-void SelectionDAGLowering::visitInvoke(InvokeInst &I) {
-  // Retrieve successors.
-  MachineBasicBlock *Return = FuncInfo.MBBMap[I.getSuccessor(0)];
-  MachineBasicBlock *LandingPad = FuncInfo.MBBMap[I.getSuccessor(1)];
-
-  const Value *Callee(I.getCalledValue());
-  if (isa<InlineAsm>(Callee))
-    visitInlineAsm(&I);
-  else
-    LowerCallTo(&I, getValue(Callee), false, LandingPad);
-
-  // If the value of the invoke is used outside of its defining block, make it
-  // available as a virtual register.
-  CopyToExportRegsIfNeeded(&I);
-
-  // Update successor info
-  CurMBB->addSuccessor(Return);
-  CurMBB->addSuccessor(LandingPad);
-
-  // Drop into normal successor.
-  DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(),
-                          MVT::Other, getControlRoot(),
-                          DAG.getBasicBlock(Return)));
-}
-
-void SelectionDAGLowering::visitUnwind(UnwindInst &I) {
-}
-
-/// handleSmallSwitchRange - Emit a series of specific tests (suitable for
-/// small case ranges).
-bool SelectionDAGLowering::handleSmallSwitchRange(CaseRec& CR,
-                                                  CaseRecVector& WorkList,
-                                                  Value* SV,
-                                                  MachineBasicBlock* Default) {
-  Case& BackCase  = *(CR.Range.second-1);
-
-  // Size is the number of Cases represented by this range.
-  size_t Size = CR.Range.second - CR.Range.first;
-  if (Size > 3)
-    return false;
-
-  // Get the MachineFunction which holds the current MBB.  This is used when
-  // inserting any additional MBBs necessary to represent the switch.
-  MachineFunction *CurMF = FuncInfo.MF;
-
-  // Figure out which block is immediately after the current one.
-  MachineBasicBlock *NextBlock = 0;
-  MachineFunction::iterator BBI = CR.CaseBB;
-
-  if (++BBI != FuncInfo.MF->end())
-    NextBlock = BBI;
-
-  // TODO: If any two of the cases has the same destination, and if one value
-  // is the same as the other, but has one bit unset that the other has set,
-  // use bit manipulation to do two compares at once.  For example:
-  // "if (X == 6 || X == 4)" -> "if ((X|2) == 6)"
-
-  // Rearrange the case blocks so that the last one falls through if possible.
-  if (NextBlock && Default != NextBlock && BackCase.BB != NextBlock) {
-    // The last case block won't fall through into 'NextBlock' if we emit the
-    // branches in this order.  See if rearranging a case value would help.
-    for (CaseItr I = CR.Range.first, E = CR.Range.second-1; I != E; ++I) {
-      if (I->BB == NextBlock) {
-        std::swap(*I, BackCase);
-        break;
-      }
-    }
-  }
-
-  // Create a CaseBlock record representing a conditional branch to
-  // the Case's target mbb if the value being switched on SV is equal
-  // to C.
-  MachineBasicBlock *CurBlock = CR.CaseBB;
-  for (CaseItr I = CR.Range.first, E = CR.Range.second; I != E; ++I) {
-    MachineBasicBlock *FallThrough;
-    if (I != E-1) {
-      FallThrough = CurMF->CreateMachineBasicBlock(CurBlock->getBasicBlock());
-      CurMF->insert(BBI, FallThrough);
-
-      // Put SV in a virtual register to make it available from the new blocks.
-      ExportFromCurrentBlock(SV);
-    } else {
-      // If the last case doesn't match, go to the default block.
-      FallThrough = Default;
-    }
-
-    Value *RHS, *LHS, *MHS;
-    ISD::CondCode CC;
-    if (I->High == I->Low) {
-      // This is just a small case range containing exactly 1 case.
-      CC = ISD::SETEQ;
-      LHS = SV; RHS = I->High; MHS = NULL;
-    } else {
-      CC = ISD::SETLE;
-      LHS = I->Low; MHS = SV; RHS = I->High;
-    }
-    CaseBlock CB(CC, LHS, RHS, MHS, I->BB, FallThrough, CurBlock);
-
-    // If emitting the first comparison, just call visitSwitchCase to emit the
-    // code into the current block.  Otherwise, push the CaseBlock onto the
-    // vector to be later processed by SDISel, and insert the node's MBB
-    // before the next MBB.
-    if (CurBlock == CurMBB)
-      visitSwitchCase(CB);
-    else
-      SwitchCases.push_back(CB);
-
-    CurBlock = FallThrough;
-  }
-
-  return true;
-}
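
The routine above turns a cluster of at most three cases into a chain of compare-and-branch blocks, each falling through to the next test and finally to the default block. As a rough illustration, here is a minimal standalone sketch (plain C++, not the LLVM API; the case values are invented) of the control-flow shape it emits:

    // small_switch_shape.cpp - toy model of the compare chain emitted by
    // handleSmallSwitchRange for "switch (x) { case 1: ...; case 4...6: ...; }".
    #include <cstdio>

    static const char *lowerSmallSwitch(int x) {
      if (x == 1)            // CaseBlock(SETEQ, x, 1): single-value case
        return "bb1";
      if (4 <= x && x <= 6)  // CaseBlock(SETLE, Low=4, High=6, MHS=x)
        return "bb2";
      return "default";      // the last test falls through to Default
    }

    int main() {
      for (int x = 0; x < 8; ++x)
        std::printf("x=%d -> %s\n", x, lowerSmallSwitch(x));
    }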
-
-static inline bool areJTsAllowed(const TargetLowering &TLI) {
-  return !DisableJumpTables &&
-          (TLI.isOperationLegalOrCustom(ISD::BR_JT, MVT::Other) ||
-           TLI.isOperationLegalOrCustom(ISD::BRIND, MVT::Other));
-}
-
-static APInt ComputeRange(const APInt &First, const APInt &Last) {
-  APInt LastExt(Last), FirstExt(First);
-  uint32_t BitWidth = std::max(Last.getBitWidth(), First.getBitWidth()) + 1;
-  LastExt.sext(BitWidth); FirstExt.sext(BitWidth);
-  return (LastExt - FirstExt + 1ULL);
-}
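
ComputeRange widens both endpoints by one bit before subtracting, so Last - First + 1 cannot wrap. A small host-side demonstration of why the extra bit matters (standalone C++, with 8-bit values standing in for arbitrary-width APInts):

    // compute_range_demo.cpp - for 8-bit case values First = -128 and
    // Last = 127 the true range is 256, which needs 9 bits.
    #include <cstdint>
    #include <cstdio>

    int main() {
      int8_t First = -128, Last = 127;
      int widened = (int)Last - (int)First + 1;     // 9-bit-safe math: 256
      int8_t wrapped = (int8_t)(Last - First + 1);  // 8-bit result wraps to 0
      std::printf("widened: %d, wrapped: %d\n", widened, (int)wrapped);
    }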
-
-/// handleJTSwitchCase - Emit jumptable for current switch case range
-bool SelectionDAGLowering::handleJTSwitchCase(CaseRec& CR,
-                                              CaseRecVector& WorkList,
-                                              Value* SV,
-                                              MachineBasicBlock* Default) {
-  Case& FrontCase = *CR.Range.first;
-  Case& BackCase  = *(CR.Range.second-1);
-
-  const APInt& First = cast<ConstantInt>(FrontCase.Low)->getValue();
-  const APInt& Last  = cast<ConstantInt>(BackCase.High)->getValue();
-
-  size_t TSize = 0;
-  for (CaseItr I = CR.Range.first, E = CR.Range.second;
-       I!=E; ++I)
-    TSize += I->size();
-
-  if (!areJTsAllowed(TLI) || TSize <= 3)
-    return false;
-
-  APInt Range = ComputeRange(First, Last);
-  double Density = (double)TSize / Range.roundToDouble();
-  if (Density < 0.4)
-    return false;
-
-  DEBUG(errs() << "Lowering jump table\n"
-               << "First entry: " << First << ". Last entry: " << Last << '\n'
-               << "Range: " << Range
-               << "Size: " << TSize << ". Density: " << Density << "\n\n");
-
-  // Get the MachineFunction which holds the current MBB.  This is used when
-  // inserting any additional MBBs necessary to represent the switch.
-  MachineFunction *CurMF = FuncInfo.MF;
-
-  // Figure out which block is immediately after the current one.
-  MachineFunction::iterator BBI = CR.CaseBB;
-  ++BBI;
-
-  const BasicBlock *LLVMBB = CR.CaseBB->getBasicBlock();
-
-  // Create a new basic block to hold the code for loading the address
-  // of the jump table, and jumping to it.  Update successor information;
-  // we will either branch to the default case for the switch, or the jump
-  // table.
-  MachineBasicBlock *JumpTableBB = CurMF->CreateMachineBasicBlock(LLVMBB);
-  CurMF->insert(BBI, JumpTableBB);
-  CR.CaseBB->addSuccessor(Default);
-  CR.CaseBB->addSuccessor(JumpTableBB);
-
-  // Build a vector of destination BBs, corresponding to each target
-  // of the jump table. If the value of the jump table slot corresponds to
-  // a case statement, push the case's BB onto the vector, otherwise, push
-  // the default BB.
-  std::vector<MachineBasicBlock*> DestBBs;
-  APInt TEI = First;
-  for (CaseItr I = CR.Range.first, E = CR.Range.second; I != E; ++TEI) {
-    const APInt& Low = cast<ConstantInt>(I->Low)->getValue();
-    const APInt& High = cast<ConstantInt>(I->High)->getValue();
-
-    if (Low.sle(TEI) && TEI.sle(High)) {
-      DestBBs.push_back(I->BB);
-      if (TEI==High)
-        ++I;
-    } else {
-      DestBBs.push_back(Default);
-    }
-  }
-
-  // Update successor info. Add one edge to each unique successor.
-  BitVector SuccsHandled(CR.CaseBB->getParent()->getNumBlockIDs());
-  for (std::vector<MachineBasicBlock*>::iterator I = DestBBs.begin(),
-         E = DestBBs.end(); I != E; ++I) {
-    if (!SuccsHandled[(*I)->getNumber()]) {
-      SuccsHandled[(*I)->getNumber()] = true;
-      JumpTableBB->addSuccessor(*I);
-    }
-  }
-
-  // Create a jump table index for this jump table, or return an existing
-  // one.
-  unsigned JTI = CurMF->getJumpTableInfo()->getJumpTableIndex(DestBBs);
-
-  // Set the jump table information so that we can codegen it as a second
-  // MachineBasicBlock
-  JumpTable JT(-1U, JTI, JumpTableBB, Default);
-  JumpTableHeader JTH(First, Last, SV, CR.CaseBB, (CR.CaseBB == CurMBB));
-  if (CR.CaseBB == CurMBB)
-    visitJumpTableHeader(JT, JTH);
-
-  JTCases.push_back(JumpTableBlock(JTH, JT));
-
-  return true;
-}
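
The jump-table heuristic above is: more than three case values, and at least 40% of the slots in [First, Last] occupied. A worked example of the density test (standalone C++; the case set is invented):

    // jt_density_demo.cpp - cases {0,1,2,3,9}: 5 values over the range [0,9].
    #include <cstdio>

    int main() {
      double TSize = 5.0, Range = 10.0;
      double Density = TSize / Range;  // 0.5
      bool useJumpTable = (TSize > 3) && (Density >= 0.4);
      std::printf("density = %.2f -> %s\n", Density,
                  useJumpTable ? "jump table" : "keep splitting");
    }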
-
-/// handleBTSplitSwitchCase - Emit a comparison and split the binary search
-/// tree into two subtrees.
-bool SelectionDAGLowering::handleBTSplitSwitchCase(CaseRec& CR,
-                                                   CaseRecVector& WorkList,
-                                                   Value* SV,
-                                                   MachineBasicBlock* Default) {
-  // Get the MachineFunction which holds the current MBB.  This is used when
-  // inserting any additional MBBs necessary to represent the switch.
-  MachineFunction *CurMF = FuncInfo.MF;
-
-  // Figure out which block is immediately after the current one.
-  MachineFunction::iterator BBI = CR.CaseBB;
-  ++BBI;
-
-  Case& FrontCase = *CR.Range.first;
-  Case& BackCase  = *(CR.Range.second-1);
-  const BasicBlock *LLVMBB = CR.CaseBB->getBasicBlock();
-
-  // Size is the number of Cases represented by this range.
-  unsigned Size = CR.Range.second - CR.Range.first;
-
-  const APInt& First = cast<ConstantInt>(FrontCase.Low)->getValue();
-  const APInt& Last  = cast<ConstantInt>(BackCase.High)->getValue();
-  double FMetric = 0;
-  CaseItr Pivot = CR.Range.first + Size/2;
-
-  // Select optimal pivot, maximizing sum density of LHS and RHS. This will
-  // (heuristically) allow us to emit JumpTables later.
-  size_t TSize = 0;
-  for (CaseItr I = CR.Range.first, E = CR.Range.second;
-       I!=E; ++I)
-    TSize += I->size();
-
-  size_t LSize = FrontCase.size();
-  size_t RSize = TSize-LSize;
-  DEBUG(errs() << "Selecting best pivot: \n"
-               << "First: " << First << ", Last: " << Last <<'\n'
-               << "LSize: " << LSize << ", RSize: " << RSize << '\n');
-  for (CaseItr I = CR.Range.first, J=I+1, E = CR.Range.second;
-       J!=E; ++I, ++J) {
-    const APInt& LEnd = cast<ConstantInt>(I->High)->getValue();
-    const APInt& RBegin = cast<ConstantInt>(J->Low)->getValue();
-    APInt Range = ComputeRange(LEnd, RBegin);
-    assert((Range - 2ULL).isNonNegative() &&
-           "Invalid case distance");
-    double LDensity = (double)LSize / (LEnd - First + 1ULL).roundToDouble();
-    double RDensity = (double)RSize / (Last - RBegin + 1ULL).roundToDouble();
-    double Metric = Range.logBase2()*(LDensity+RDensity);
-    // Should always split in some non-trivial place
-    DEBUG(errs() <<"=>Step\n"
-                 << "LEnd: " << LEnd << ", RBegin: " << RBegin << '\n'
-                 << "LDensity: " << LDensity
-                 << ", RDensity: " << RDensity << '\n'
-                 << "Metric: " << Metric << '\n');
-    if (FMetric < Metric) {
-      Pivot = J;
-      FMetric = Metric;
-      DEBUG(errs() << "Current metric set to: " << FMetric << '\n');
-    }
-
-    LSize += J->size();
-    RSize -= J->size();
-  }
-  if (areJTsAllowed(TLI)) {
-    // If our case is dense we *really* should handle it earlier!
-    assert((FMetric > 0) && "Should handle dense range earlier!");
-  } else {
-    Pivot = CR.Range.first + Size/2;
-  }
-
-  CaseRange LHSR(CR.Range.first, Pivot);
-  CaseRange RHSR(Pivot, CR.Range.second);
-  Constant *C = Pivot->Low;
-  MachineBasicBlock *FalseBB = 0, *TrueBB = 0;
-
-  // We know that we branch to the LHS if the Value being switched on is
-  // less than the Pivot value, C.  We use this to optimize our binary
-  // tree a bit, by recognizing that if SV is greater than or equal to the
-  // LHS's Case Value, and that Case Value is exactly one less than the
-  // Pivot's Value, then we can branch directly to the LHS's Target,
-  // rather than creating a leaf node for it.
-  if ((LHSR.second - LHSR.first) == 1 &&
-      LHSR.first->High == CR.GE &&
-      cast<ConstantInt>(C)->getValue() ==
-      (cast<ConstantInt>(CR.GE)->getValue() + 1LL)) {
-    TrueBB = LHSR.first->BB;
-  } else {
-    TrueBB = CurMF->CreateMachineBasicBlock(LLVMBB);
-    CurMF->insert(BBI, TrueBB);
-    WorkList.push_back(CaseRec(TrueBB, C, CR.GE, LHSR));
-
-    // Put SV in a virtual register to make it available from the new blocks.
-    ExportFromCurrentBlock(SV);
-  }
-
-  // Similar to the optimization above, if the Value being switched on is
-  // known to be less than the Constant CR.LT, and the current Case Value
-  // is CR.LT - 1, then we can branch directly to the target block for
-  // the current Case Value, rather than emitting a RHS leaf node for it.
-  if ((RHSR.second - RHSR.first) == 1 && CR.LT &&
-      cast<ConstantInt>(RHSR.first->Low)->getValue() ==
-      (cast<ConstantInt>(CR.LT)->getValue() - 1LL)) {
-    FalseBB = RHSR.first->BB;
-  } else {
-    FalseBB = CurMF->CreateMachineBasicBlock(LLVMBB);
-    CurMF->insert(BBI, FalseBB);
-    WorkList.push_back(CaseRec(FalseBB,CR.LT,C,RHSR));
-
-    // Put SV in a virtual register to make it available from the new blocks.
-    ExportFromCurrentBlock(SV);
-  }
-
-  // Create a CaseBlock record representing a conditional branch to
-  // the LHS node if the value being switched on SV is less than C.
-  // Otherwise, branch to the RHS node.
-  CaseBlock CB(ISD::SETLT, SV, C, NULL, TrueBB, FalseBB, CR.CaseBB);
-
-  if (CR.CaseBB == CurMBB)
-    visitSwitchCase(CB);
-  else
-    SwitchCases.push_back(CB);
-
-  return true;
-}
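
The pivot search above scores every gap between neighbouring clusters as Metric = log2(gap range) * (LDensity + RDensity), preferring wide gaps that leave both halves dense enough to become jump tables later. A worked instance (standalone C++; the cluster values are invented):

    // pivot_metric_demo.cpp - cluster {0..3, 100..103}: the obvious split
    // is the gap between 3 and 100.
    #include <cmath>
    #include <cstdio>

    int main() {
      double LSize = 4, RSize = 4;                   // cases on each side
      double First = 0, LEnd = 3, RBegin = 100, Last = 103;
      double Range = RBegin - LEnd + 1;              // 98
      double LDensity = LSize / (LEnd - First + 1);  // 1.0
      double RDensity = RSize / (Last - RBegin + 1); // 1.0
      std::printf("Metric = %.2f\n",
                  std::log2(Range) * (LDensity + RDensity)); // ~13.23
    }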
-
-/// handleBitTestsSwitchCase - If the current case range has few destinations
-/// and spans less than the machine word bitwidth, encode the case range into
-/// a series of masks and emit bit tests with these masks.
-bool SelectionDAGLowering::handleBitTestsSwitchCase(CaseRec& CR,
-                                                    CaseRecVector& WorkList,
-                                                    Value* SV,
-                                                    MachineBasicBlock* Default){
-  EVT PTy = TLI.getPointerTy();
-  unsigned IntPtrBits = PTy.getSizeInBits();
-
-  Case& FrontCase = *CR.Range.first;
-  Case& BackCase  = *(CR.Range.second-1);
-
-  // Get the MachineFunction which holds the current MBB.  This is used when
-  // inserting any additional MBBs necessary to represent the switch.
-  MachineFunction *CurMF = FuncInfo.MF;
-
-  // If the target does not have a legal shift left, do not emit bit tests.
-  if (!TLI.isOperationLegal(ISD::SHL, TLI.getPointerTy()))
-    return false;
-
-  size_t numCmps = 0;
-  for (CaseItr I = CR.Range.first, E = CR.Range.second;
-       I!=E; ++I) {
-    // A single case counts as one comparison, a case range as two.
-    numCmps += (I->Low == I->High ? 1 : 2);
-  }
-
-  // Count unique destinations
-  SmallSet<MachineBasicBlock*, 4> Dests;
-  for (CaseItr I = CR.Range.first, E = CR.Range.second; I!=E; ++I) {
-    Dests.insert(I->BB);
-    if (Dests.size() > 3)
-      // Don't run the code below if there are too many unique destinations.
-      return false;
-  }
-  DEBUG(errs() << "Total number of unique destinations: " << Dests.size() << '\n'
-               << "Total number of comparisons: " << numCmps << '\n');
-
-  // Compute span of values.
-  const APInt& minValue = cast<ConstantInt>(FrontCase.Low)->getValue();
-  const APInt& maxValue = cast<ConstantInt>(BackCase.High)->getValue();
-  APInt cmpRange = maxValue - minValue;
-
-  DEBUG(errs() << "Compare range: " << cmpRange << '\n'
-               << "Low bound: " << minValue << '\n'
-               << "High bound: " << maxValue << '\n');
-
-  if (cmpRange.uge(APInt(cmpRange.getBitWidth(), IntPtrBits)) ||
-      (!(Dests.size() == 1 && numCmps >= 3) &&
-       !(Dests.size() == 2 && numCmps >= 5) &&
-       !(Dests.size() >= 3 && numCmps >= 6)))
-    return false;
-
-  DEBUG(errs() << "Emitting bit tests\n");
-  APInt lowBound = APInt::getNullValue(cmpRange.getBitWidth());
-
-  // If all the case values already fit in a machine word without
-  // subtracting minValue, we can optimize away the subtraction.
-  if (minValue.isNonNegative() &&
-      maxValue.slt(APInt(maxValue.getBitWidth(), IntPtrBits))) {
-    cmpRange = maxValue;
-  } else {
-    lowBound = minValue;
-  }
-
-  CaseBitsVector CasesBits;
-  unsigned i, count = 0;
-
-  for (CaseItr I = CR.Range.first, E = CR.Range.second; I!=E; ++I) {
-    MachineBasicBlock* Dest = I->BB;
-    for (i = 0; i < count; ++i)
-      if (Dest == CasesBits[i].BB)
-        break;
-
-    if (i == count) {
-      assert((count < 3) && "Too many destinations to test!");
-      CasesBits.push_back(CaseBits(0, Dest, 0));
-      count++;
-    }
-
-    const APInt& lowValue = cast<ConstantInt>(I->Low)->getValue();
-    const APInt& highValue = cast<ConstantInt>(I->High)->getValue();
-
-    uint64_t lo = (lowValue - lowBound).getZExtValue();
-    uint64_t hi = (highValue - lowBound).getZExtValue();
-
-    for (uint64_t j = lo; j <= hi; j++) {
-      CasesBits[i].Mask |=  1ULL << j;
-      CasesBits[i].Bits++;
-    }
-
-  }
-  std::sort(CasesBits.begin(), CasesBits.end(), CaseBitsCmp());
-
-  BitTestInfo BTC;
-
-  // Figure out which block is immediately after the current one.
-  MachineFunction::iterator BBI = CR.CaseBB;
-  ++BBI;
-
-  const BasicBlock *LLVMBB = CR.CaseBB->getBasicBlock();
-
-  DEBUG(errs() << "Cases:\n");
-  for (unsigned i = 0, e = CasesBits.size(); i!=e; ++i) {
-    DEBUG(errs() << "Mask: " << CasesBits[i].Mask
-                 << ", Bits: " << CasesBits[i].Bits
-                 << ", BB: " << CasesBits[i].BB << '\n');
-
-    MachineBasicBlock *CaseBB = CurMF->CreateMachineBasicBlock(LLVMBB);
-    CurMF->insert(BBI, CaseBB);
-    BTC.push_back(BitTestCase(CasesBits[i].Mask,
-                              CaseBB,
-                              CasesBits[i].BB));
-
-    // Put SV in a virtual register to make it available from the new blocks.
-    ExportFromCurrentBlock(SV);
-  }
-
-  BitTestBlock BTB(lowBound, cmpRange, SV,
-                   -1U, (CR.CaseBB == CurMBB),
-                   CR.CaseBB, Default, BTC);
-
-  if (CR.CaseBB == CurMBB)
-    visitBitTestHeader(BTB);
-
-  BitTestCases.push_back(BTB);
-
-  return true;
-}
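
The encoding this produces: rebase the switch value to lowBound, give each destination one machine-word mask, and test membership with a single shift and AND. A minimal model of the final test (standalone C++; case values invented):

    // bit_test_demo.cpp - cases 2, 5, 6 and 7 all branch to one block,
    // lowBound = 0, so the block's mask is 0xE4.
    #include <cstdint>
    #include <cstdio>

    int main() {
      uint64_t Mask = (1ULL << 2) | (1ULL << 5) | (1ULL << 6) | (1ULL << 7);
      for (uint64_t x = 0; x < 9; ++x)   // the header has already checked
        std::printf("x=%llu -> %s\n",    // that x - lowBound < IntPtrBits
                    (unsigned long long)x,
                    ((1ULL << x) & Mask) ? "target" : "next test / default");
    }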
-
-
-/// Clusterify - Transform simple list of Cases into list of CaseRange's
-size_t SelectionDAGLowering::Clusterify(CaseVector& Cases,
-                                          const SwitchInst& SI) {
-  size_t numCmps = 0;
-
-  // Start with "simple" cases
-  for (size_t i = 1; i < SI.getNumSuccessors(); ++i) {
-    MachineBasicBlock *SMBB = FuncInfo.MBBMap[SI.getSuccessor(i)];
-    Cases.push_back(Case(SI.getSuccessorValue(i),
-                         SI.getSuccessorValue(i),
-                         SMBB));
-  }
-  std::sort(Cases.begin(), Cases.end(), CaseCmp());
-
-  // Merge cases into clusters.
-  if (Cases.size() >= 2)
-    // Must recompute end() each iteration because it may be
-    // invalidated by erase if we hold on to it
-    for (CaseItr I = Cases.begin(), J = ++(Cases.begin()); J != Cases.end(); ) {
-      const APInt& nextValue = cast<ConstantInt>(J->Low)->getValue();
-      const APInt& currentValue = cast<ConstantInt>(I->High)->getValue();
-      MachineBasicBlock* nextBB = J->BB;
-      MachineBasicBlock* currentBB = I->BB;
-
-      // If the two neighboring cases go to the same destination, merge them
-      // into a single case.
-      if ((nextValue - currentValue == 1) && (currentBB == nextBB)) {
-        I->High = J->High;
-        J = Cases.erase(J);
-      } else {
-        I = J++;
-      }
-    }
-
-  for (CaseItr I=Cases.begin(), E=Cases.end(); I!=E; ++I, ++numCmps) {
-    if (I->Low != I->High)
-      // A range counts double, since it requires two compares.
-      ++numCmps;
-  }
-
-  return numCmps;
-}
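
Clusterify's merge step collapses sorted, numerically adjacent cases with a common destination into one [Low, High] cluster; a cluster with Low != High is later charged two compares. A small standalone re-implementation of just that loop (plain C++; block numbers invented):

    // clusterify_demo.cpp
    #include <cstdio>
    #include <vector>

    struct Case { int Low, High, BB; };

    int main() {
      std::vector<Case> Cases = {{1,1,0}, {2,2,0}, {3,3,0}, {7,7,1}}; // sorted
      for (size_t j = 1; j < Cases.size(); ) {
        if (Cases[j].Low - Cases[j-1].High == 1 && Cases[j].BB == Cases[j-1].BB) {
          Cases[j-1].High = Cases[j].High;   // grow the current cluster
          Cases.erase(Cases.begin() + j);    // and drop the merged case
        } else {
          ++j;
        }
      }
      for (const Case &C : Cases)
        std::printf("[%d, %d] -> bb%d\n", C.Low, C.High, C.BB); // [1,3], [7,7]
    }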
-
-void SelectionDAGLowering::visitSwitch(SwitchInst &SI) {
-  // Figure out which block is immediately after the current one.
-  MachineBasicBlock *NextBlock = 0;
-
-  MachineBasicBlock *Default = FuncInfo.MBBMap[SI.getDefaultDest()];
-
-  // If there is only the default destination, branch to it if it is not the
-  // next basic block.  Otherwise, just fall through.
-  if (SI.getNumOperands() == 2) {
-    // Update machine-CFG edges.
-
-    // If this is not a fall-through branch, emit the branch.
-    CurMBB->addSuccessor(Default);
-    if (Default != NextBlock)
-      DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(),
-                              MVT::Other, getControlRoot(),
-                              DAG.getBasicBlock(Default)));
-    return;
-  }
-
-  // If there are any non-default case statements, create a vector of Cases
-  // representing each one, and sort the vector so that we can efficiently
-  // create a binary search tree from them.
-  CaseVector Cases;
-  size_t numCmps = Clusterify(Cases, SI);
-  DEBUG(errs() << "Clusterify finished. Total clusters: " << Cases.size()
-               << ". Total compares: " << numCmps << '\n');
-  numCmps = 0;
-
-  // Get the Value to be switched on and default basic blocks, which will be
-  // inserted into CaseBlock records, representing basic blocks in the binary
-  // search tree.
-  Value *SV = SI.getOperand(0);
-
-  // Push the initial CaseRec onto the worklist
-  CaseRecVector WorkList;
-  WorkList.push_back(CaseRec(CurMBB,0,0,CaseRange(Cases.begin(),Cases.end())));
-
-  while (!WorkList.empty()) {
-    // Grab a record representing a case range to process off the worklist
-    CaseRec CR = WorkList.back();
-    WorkList.pop_back();
-
-    if (handleBitTestsSwitchCase(CR, WorkList, SV, Default))
-      continue;
-
-    // If the range has few cases (three or fewer) emit a series of specific
-    // tests.
-    if (handleSmallSwitchRange(CR, WorkList, SV, Default))
-      continue;
-
-    // If the switch has more than three cases, is at least 40% dense, and the
-    // target supports indirect branches, then emit a jump table rather than
-    // lowering the switch to a binary tree of conditional branches.
-    if (handleJTSwitchCase(CR, WorkList, SV, Default))
-      continue;
-
-    // Emit binary tree. We need to pick a pivot, and push left and right ranges
-    // onto the worklist. Leaves are handled via handleSmallSwitchRange().
-    handleBTSplitSwitchCase(CR, WorkList, SV, Default);
-  }
-}
-
-
-void SelectionDAGLowering::visitFSub(User &I) {
-  // -0.0 - X --> fneg
-  const Type *Ty = I.getType();
-  if (isa<VectorType>(Ty)) {
-    if (ConstantVector *CV = dyn_cast<ConstantVector>(I.getOperand(0))) {
-      const VectorType *DestTy = cast<VectorType>(I.getType());
-      const Type *ElTy = DestTy->getElementType();
-      unsigned VL = DestTy->getNumElements();
-      std::vector<Constant*> NZ(VL, ConstantFP::getNegativeZero(ElTy));
-      Constant *CNZ = ConstantVector::get(&NZ[0], NZ.size());
-      if (CV == CNZ) {
-        SDValue Op2 = getValue(I.getOperand(1));
-        setValue(&I, DAG.getNode(ISD::FNEG, getCurDebugLoc(),
-                                 Op2.getValueType(), Op2));
-        return;
-      }
-    }
-  }
-  if (ConstantFP *CFP = dyn_cast<ConstantFP>(I.getOperand(0)))
-    if (CFP->isExactlyValue(ConstantFP::getNegativeZero(Ty)->getValueAPF())) {
-      SDValue Op2 = getValue(I.getOperand(1));
-      setValue(&I, DAG.getNode(ISD::FNEG, getCurDebugLoc(),
-                               Op2.getValueType(), Op2));
-      return;
-    }
-
-  visitBinary(I, ISD::FSUB);
-}
-
-void SelectionDAGLowering::visitBinary(User &I, unsigned OpCode) {
-  SDValue Op1 = getValue(I.getOperand(0));
-  SDValue Op2 = getValue(I.getOperand(1));
-
-  setValue(&I, DAG.getNode(OpCode, getCurDebugLoc(),
-                           Op1.getValueType(), Op1, Op2));
-}
-
-void SelectionDAGLowering::visitShift(User &I, unsigned Opcode) {
-  SDValue Op1 = getValue(I.getOperand(0));
-  SDValue Op2 = getValue(I.getOperand(1));
-  if (!isa<VectorType>(I.getType()) &&
-      Op2.getValueType() != TLI.getShiftAmountTy()) {
-    // If the operand is smaller than the shift count type, promote it.
-    EVT PTy = TLI.getPointerTy();
-    EVT STy = TLI.getShiftAmountTy();
-    if (STy.bitsGT(Op2.getValueType()))
-      Op2 = DAG.getNode(ISD::ANY_EXTEND, getCurDebugLoc(),
-                        TLI.getShiftAmountTy(), Op2);
-    // If the operand is larger than the shift count type but the shift
-    // count type has enough bits to represent any shift value, truncate
-    // it now. This is a common case and it exposes the truncate to
-    // optimization early.
-    else if (STy.getSizeInBits() >=
-             Log2_32_Ceil(Op2.getValueType().getSizeInBits()))
-      Op2 = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(),
-                        TLI.getShiftAmountTy(), Op2);
-    // Otherwise we'll need to temporarily settle for some other
-    // convenient type; type legalization will make adjustments as
-    // needed.
-    else if (PTy.bitsLT(Op2.getValueType()))
-      Op2 = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(),
-                        TLI.getPointerTy(), Op2);
-    else if (PTy.bitsGT(Op2.getValueType()))
-      Op2 = DAG.getNode(ISD::ANY_EXTEND, getCurDebugLoc(),
-                        TLI.getPointerTy(), Op2);
-  }
-
-  setValue(&I, DAG.getNode(Opcode, getCurDebugLoc(),
-                           Op1.getValueType(), Op1, Op2));
-}
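
The ladder in visitShift adapts the shift-amount operand to the target's shift-amount type: extend when the operand is narrower, truncate early when the amount type can represent every legal shift anyway, and otherwise settle for the pointer type. A simplified model of that decision (standalone C++; the bit widths are assumptions, not any particular target):

    // shiftamt_demo.cpp
    #include <cstdio>

    static unsigned log2_32_ceil(unsigned v) {  // stand-in for Log2_32_Ceil
      unsigned r = 0;
      while ((1u << r) < v) ++r;
      return r;
    }

    int main() {
      unsigned ShiftAmtBits   = 32;  // assumed TLI.getShiftAmountTy() width
      unsigned AmtOperandBits = 64;  // the amount arrives as an i64
      if (ShiftAmtBits > AmtOperandBits)
        std::puts("ANY_EXTEND the amount to the shift-amount type");
      else if (ShiftAmtBits >= log2_32_ceil(AmtOperandBits))  // 32 >= 6
        std::puts("TRUNCATE the amount early");               // taken here
      else
        std::puts("settle for the pointer type; legalization fixes it up");
    }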
-
-void SelectionDAGLowering::visitICmp(User &I) {
-  ICmpInst::Predicate predicate = ICmpInst::BAD_ICMP_PREDICATE;
-  if (ICmpInst *IC = dyn_cast<ICmpInst>(&I))
-    predicate = IC->getPredicate();
-  else if (ConstantExpr *IC = dyn_cast<ConstantExpr>(&I))
-    predicate = ICmpInst::Predicate(IC->getPredicate());
-  SDValue Op1 = getValue(I.getOperand(0));
-  SDValue Op2 = getValue(I.getOperand(1));
-  ISD::CondCode Opcode = getICmpCondCode(predicate);
-  
-  EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getSetCC(getCurDebugLoc(), DestVT, Op1, Op2, Opcode));
-}
-
-void SelectionDAGLowering::visitFCmp(User &I) {
-  FCmpInst::Predicate predicate = FCmpInst::BAD_FCMP_PREDICATE;
-  if (FCmpInst *FC = dyn_cast<FCmpInst>(&I))
-    predicate = FC->getPredicate();
-  else if (ConstantExpr *FC = dyn_cast<ConstantExpr>(&I))
-    predicate = FCmpInst::Predicate(FC->getPredicate());
-  SDValue Op1 = getValue(I.getOperand(0));
-  SDValue Op2 = getValue(I.getOperand(1));
-  ISD::CondCode Condition = getFCmpCondCode(predicate);
-  EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getSetCC(getCurDebugLoc(), DestVT, Op1, Op2, Condition));
-}
-
-void SelectionDAGLowering::visitSelect(User &I) {
-  SmallVector<EVT, 4> ValueVTs;
-  ComputeValueVTs(TLI, I.getType(), ValueVTs);
-  unsigned NumValues = ValueVTs.size();
-  if (NumValues != 0) {
-    SmallVector<SDValue, 4> Values(NumValues);
-    SDValue Cond     = getValue(I.getOperand(0));
-    SDValue TrueVal  = getValue(I.getOperand(1));
-    SDValue FalseVal = getValue(I.getOperand(2));
-
-    for (unsigned i = 0; i != NumValues; ++i)
-      Values[i] = DAG.getNode(ISD::SELECT, getCurDebugLoc(),
-                              TrueVal.getValueType(), Cond,
-                              SDValue(TrueVal.getNode(), TrueVal.getResNo() + i),
-                              SDValue(FalseVal.getNode(), FalseVal.getResNo() + i));
-
-    setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
-                             DAG.getVTList(&ValueVTs[0], NumValues),
-                             &Values[0], NumValues));
-  }
-}
-
-
-void SelectionDAGLowering::visitTrunc(User &I) {
-  // TruncInst cannot be a no-op cast because sizeof(src) > sizeof(dest).
-  SDValue N = getValue(I.getOperand(0));
-  EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(), DestVT, N));
-}
-
-void SelectionDAGLowering::visitZExt(User &I) {
-  // ZExt cannot be a no-op cast because sizeof(src) < sizeof(dest).
-  // ZExt also can't be a cast to bool for the same reason, so nothing to do.
-  SDValue N = getValue(I.getOperand(0));
-  EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(), DestVT, N));
-}
-
-void SelectionDAGLowering::visitSExt(User &I) {
-  // SExt cannot be a no-op cast because sizeof(src) < sizeof(dest).
-  // SExt also can't be a cast to bool for the same reason, so nothing to do.
-  SDValue N = getValue(I.getOperand(0));
-  EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::SIGN_EXTEND, getCurDebugLoc(), DestVT, N));
-}
-
-void SelectionDAGLowering::visitFPTrunc(User &I) {
-  // FPTrunc is never a no-op cast, no need to check
-  SDValue N = getValue(I.getOperand(0));
-  EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::FP_ROUND, getCurDebugLoc(),
-                           DestVT, N, DAG.getIntPtrConstant(0)));
-}
-
-void SelectionDAGLowering::visitFPExt(User &I) {
-  // FPExt is never a no-op cast, no need to check
-  SDValue N = getValue(I.getOperand(0));
-  EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::FP_EXTEND, getCurDebugLoc(), DestVT, N));
-}
-
-void SelectionDAGLowering::visitFPToUI(User &I) {
-  // FPToUI is never a no-op cast, no need to check
-  SDValue N = getValue(I.getOperand(0));
-  EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::FP_TO_UINT, getCurDebugLoc(), DestVT, N));
-}
-
-void SelectionDAGLowering::visitFPToSI(User &I) {
-  // FPToSI is never a no-op cast, no need to check
-  SDValue N = getValue(I.getOperand(0));
-  EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::FP_TO_SINT, getCurDebugLoc(), DestVT, N));
-}
-
-void SelectionDAGLowering::visitUIToFP(User &I) {
-  // UIToFP is never a no-op cast, no need to check
-  SDValue N = getValue(I.getOperand(0));
-  EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::UINT_TO_FP, getCurDebugLoc(), DestVT, N));
-}
-
-void SelectionDAGLowering::visitSIToFP(User &I) {
-  // SIToFP is never a no-op cast, no need to check
-  SDValue N = getValue(I.getOperand(0));
-  EVT DestVT = TLI.getValueType(I.getType());
-  setValue(&I, DAG.getNode(ISD::SINT_TO_FP, getCurDebugLoc(), DestVT, N));
-}
-
-void SelectionDAGLowering::visitPtrToInt(User &I) {
-  // What to do depends on the size of the integer and the size of the pointer.
-  // We can either truncate, zero extend, or no-op, accordingly.
-  SDValue N = getValue(I.getOperand(0));
-  EVT SrcVT = N.getValueType();
-  EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Result;
-  if (DestVT.bitsLT(SrcVT))
-    Result = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(), DestVT, N);
-  else
-    // Note: ZERO_EXTEND can handle cases where the sizes are equal too
-    Result = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(), DestVT, N);
-  setValue(&I, Result);
-}
-
-void SelectionDAGLowering::visitIntToPtr(User &I) {
-  // What to do depends on the size of the integer and the size of the pointer.
-  // We can either truncate, zero extend, or no-op, accordingly.
-  SDValue N = getValue(I.getOperand(0));
-  EVT SrcVT = N.getValueType();
-  EVT DestVT = TLI.getValueType(I.getType());
-  if (DestVT.bitsLT(SrcVT))
-    setValue(&I, DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(), DestVT, N));
-  else
-    // Note: ZERO_EXTEND can handle cases where the sizes are equal too
-    setValue(&I, DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(),
-                             DestVT, N));
-}
-
-void SelectionDAGLowering::visitBitCast(User &I) {
-  SDValue N = getValue(I.getOperand(0));
-  EVT DestVT = TLI.getValueType(I.getType());
-
-  // BitCast assures us that source and destination are the same size so this
-  // is either a BIT_CONVERT or a no-op.
-  if (DestVT != N.getValueType())
-    setValue(&I, DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(),
-                             DestVT, N)); // convert types
-  else
-    setValue(&I, N); // noop cast.
-}
-
-void SelectionDAGLowering::visitInsertElement(User &I) {
-  SDValue InVec = getValue(I.getOperand(0));
-  SDValue InVal = getValue(I.getOperand(1));
-  SDValue InIdx = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(),
-                                TLI.getPointerTy(),
-                                getValue(I.getOperand(2)));
-
-  setValue(&I, DAG.getNode(ISD::INSERT_VECTOR_ELT, getCurDebugLoc(),
-                           TLI.getValueType(I.getType()),
-                           InVec, InVal, InIdx));
-}
-
-void SelectionDAGLowering::visitExtractElement(User &I) {
-  SDValue InVec = getValue(I.getOperand(0));
-  SDValue InIdx = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(),
-                                TLI.getPointerTy(),
-                                getValue(I.getOperand(1)));
-  setValue(&I, DAG.getNode(ISD::EXTRACT_VECTOR_ELT, getCurDebugLoc(),
-                           TLI.getValueType(I.getType()), InVec, InIdx));
-}
-
-
-// Utility for visitShuffleVector - Returns true if the mask is a sequential
-// mask starting from SIndx and increasing up to the vector length (undefs are
-// allowed).
-static bool SequentialMask(SmallVectorImpl<int> &Mask, unsigned SIndx) {
-  unsigned MaskNumElts = Mask.size();
-  for (unsigned i = 0; i != MaskNumElts; ++i)
-    if ((Mask[i] >= 0) && (Mask[i] != (int)(i + SIndx)))
-      return false;
-  return true;
-}
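
For example, <4, -1, 6, 7> is sequential from SIndx = 4 (the -1 is an undef wildcard), while <4, 6, -1, 7> is not. A standalone copy of the predicate, runnable on the host:

    // seqmask_demo.cpp
    #include <cstdio>
    #include <vector>

    static bool sequentialMask(const std::vector<int> &Mask, unsigned SIndx) {
      for (unsigned i = 0; i != Mask.size(); ++i)
        if (Mask[i] >= 0 && Mask[i] != (int)(i + SIndx))
          return false;
      return true;
    }

    int main() {
      std::printf("%d\n", sequentialMask({4, -1, 6, 7}, 4)); // 1
      std::printf("%d\n", sequentialMask({4, 6, -1, 7}, 4)); // 0: 6 != 5
    }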
-
-void SelectionDAGLowering::visitShuffleVector(User &I) {
-  SmallVector<int, 8> Mask;
-  SDValue Src1 = getValue(I.getOperand(0));
-  SDValue Src2 = getValue(I.getOperand(1));
-
-  // Convert the ConstantVector mask operand into an array of ints, with -1
-  // representing undef values.
-  SmallVector<Constant*, 8> MaskElts;
-  cast<Constant>(I.getOperand(2))->getVectorElements(*DAG.getContext(), 
-                                                     MaskElts);
-  unsigned MaskNumElts = MaskElts.size();
-  for (unsigned i = 0; i != MaskNumElts; ++i) {
-    if (isa<UndefValue>(MaskElts[i]))
-      Mask.push_back(-1);
-    else
-      Mask.push_back(cast<ConstantInt>(MaskElts[i])->getSExtValue());
-  }
-  
-  EVT VT = TLI.getValueType(I.getType());
-  EVT SrcVT = Src1.getValueType();
-  unsigned SrcNumElts = SrcVT.getVectorNumElements();
-
-  if (SrcNumElts == MaskNumElts) {
-    setValue(&I, DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2,
-                                      &Mask[0]));
-    return;
-  }
-
-  // Normalize the shuffle vector since mask and vector length don't match.
-  if (SrcNumElts < MaskNumElts && MaskNumElts % SrcNumElts == 0) {
-    // The mask is longer than the source vectors, and its length is a
-    // multiple of the source vector length.  We can use CONCAT_VECTORS to
-    // make the mask and vector lengths match.
-    if (SrcNumElts*2 == MaskNumElts && SequentialMask(Mask, 0)) {
-      // The shuffle is concatenating two vectors together.
-      setValue(&I, DAG.getNode(ISD::CONCAT_VECTORS, getCurDebugLoc(),
-                               VT, Src1, Src2));
-      return;
-    }
-
-    // Pad both vectors with undefs to make them the same length as the mask.
-    unsigned NumConcat = MaskNumElts / SrcNumElts;
-    bool Src1U = Src1.getOpcode() == ISD::UNDEF;
-    bool Src2U = Src2.getOpcode() == ISD::UNDEF;
-    SDValue UndefVal = DAG.getUNDEF(SrcVT);
-
-    SmallVector<SDValue, 8> MOps1(NumConcat, UndefVal);
-    SmallVector<SDValue, 8> MOps2(NumConcat, UndefVal);
-    MOps1[0] = Src1;
-    MOps2[0] = Src2;
-    
-    Src1 = Src1U ? DAG.getUNDEF(VT) : DAG.getNode(ISD::CONCAT_VECTORS, 
-                                                  getCurDebugLoc(), VT, 
-                                                  &MOps1[0], NumConcat);
-    Src2 = Src2U ? DAG.getUNDEF(VT) : DAG.getNode(ISD::CONCAT_VECTORS,
-                                                  getCurDebugLoc(), VT, 
-                                                  &MOps2[0], NumConcat);
-
-    // Readjust mask for new input vector length.
-    SmallVector<int, 8> MappedOps;
-    for (unsigned i = 0; i != MaskNumElts; ++i) {
-      int Idx = Mask[i];
-      if (Idx < (int)SrcNumElts)
-        MappedOps.push_back(Idx);
-      else
-        MappedOps.push_back(Idx + MaskNumElts - SrcNumElts);
-    }
-    setValue(&I, DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2, 
-                                      &MappedOps[0]));
-    return;
-  }
-
-  if (SrcNumElts > MaskNumElts) {
-    // Analyze the access pattern of the vector to see if we can extract
-    // two subvectors and do the shuffle. The analysis is done by calculating
-    // the range of elements the mask accesses on both vectors.
-    int MinRange[2] = { SrcNumElts+1, SrcNumElts+1};
-    int MaxRange[2] = {-1, -1};
-
-    for (unsigned i = 0; i != MaskNumElts; ++i) {
-      int Idx = Mask[i];
-      int Input = 0;
-      if (Idx < 0)
-        continue;
-      
-      if (Idx >= (int)SrcNumElts) {
-        Input = 1;
-        Idx -= SrcNumElts;
-      }
-      if (Idx > MaxRange[Input])
-        MaxRange[Input] = Idx;
-      if (Idx < MinRange[Input])
-        MinRange[Input] = Idx;
-    }
-
-    // Check whether the access is smaller than the vector size, and whether
-    // we can find a reasonable extract index.
-    int RangeUse[2] = { 2, 2 };  // 0 = Unused, 1 = Extract, 2 = Cannot extract.
-    int StartIdx[2];  // StartIdx to extract from
-    for (int Input=0; Input < 2; ++Input) {
-      if (MinRange[Input] == (int)(SrcNumElts+1) && MaxRange[Input] == -1) {
-        RangeUse[Input] = 0; // Unused
-        StartIdx[Input] = 0;
-      } else if (MaxRange[Input] - MinRange[Input] < (int)MaskNumElts) {
-        // Fits within range but we should see if we can find a good
-        // start index that is a multiple of the mask length.
-        if (MaxRange[Input] < (int)MaskNumElts) {
-          RangeUse[Input] = 1; // Extract from beginning of the vector
-          StartIdx[Input] = 0;
-        } else {
-          StartIdx[Input] = (MinRange[Input]/MaskNumElts)*MaskNumElts;
-          if (MaxRange[Input] - StartIdx[Input] < (int)MaskNumElts &&
-              StartIdx[Input] + MaskNumElts < SrcNumElts)
-            RangeUse[Input] = 1; // Extract from a multiple of the mask length.
-        }
-      }
-    }
-
-    if (RangeUse[0] == 0 && RangeUse[1] == 0) {
-      setValue(&I, DAG.getUNDEF(VT));  // Vectors are not used.
-      return;
-    } else if (RangeUse[0] < 2 && RangeUse[1] < 2) {
-      // Extract appropriate subvector and generate a vector shuffle
-      for (int Input=0; Input < 2; ++Input) {
-        SDValue& Src = Input == 0 ? Src1 : Src2;
-        if (RangeUse[Input] == 0) {
-          Src = DAG.getUNDEF(VT);
-        } else {
-          Src = DAG.getNode(ISD::EXTRACT_SUBVECTOR, getCurDebugLoc(), VT,
-                            Src, DAG.getIntPtrConstant(StartIdx[Input]));
-        }
-      }
-      // Calculate new mask.
-      SmallVector<int, 8> MappedOps;
-      for (unsigned i = 0; i != MaskNumElts; ++i) {
-        int Idx = Mask[i];
-        if (Idx < 0)
-          MappedOps.push_back(Idx);
-        else if (Idx < (int)SrcNumElts)
-          MappedOps.push_back(Idx - StartIdx[0]);
-        else
-          MappedOps.push_back(Idx - SrcNumElts - StartIdx[1] + MaskNumElts);
-      }
-      setValue(&I, DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2,
-                                        &MappedOps[0]));
-      return;
-    }
-  }
-
-  // We can't use either concat vectors or extract subvectors, so fall back
-  // to replacing the shuffle with extracts and a build vector.
-  EVT EltVT = VT.getVectorElementType();
-  EVT PtrVT = TLI.getPointerTy();
-  SmallVector<SDValue,8> Ops;
-  for (unsigned i = 0; i != MaskNumElts; ++i) {
-    if (Mask[i] < 0) {
-      Ops.push_back(DAG.getUNDEF(EltVT));
-    } else {
-      int Idx = Mask[i];
-      if (Idx < (int)SrcNumElts)
-        Ops.push_back(DAG.getNode(ISD::EXTRACT_VECTOR_ELT, getCurDebugLoc(),
-                                  EltVT, Src1, DAG.getConstant(Idx, PtrVT)));
-      else
-        Ops.push_back(DAG.getNode(ISD::EXTRACT_VECTOR_ELT, getCurDebugLoc(),
-                                  EltVT, Src2,
-                                  DAG.getConstant(Idx - SrcNumElts, PtrVT)));
-    }
-  }
-  setValue(&I, DAG.getNode(ISD::BUILD_VECTOR, getCurDebugLoc(),
-                           VT, &Ops[0], Ops.size()));
-}
-
-void SelectionDAGLowering::visitInsertValue(InsertValueInst &I) {
-  const Value *Op0 = I.getOperand(0);
-  const Value *Op1 = I.getOperand(1);
-  const Type *AggTy = I.getType();
-  const Type *ValTy = Op1->getType();
-  bool IntoUndef = isa<UndefValue>(Op0);
-  bool FromUndef = isa<UndefValue>(Op1);
-
-  unsigned LinearIndex = ComputeLinearIndex(TLI, AggTy,
-                                            I.idx_begin(), I.idx_end());
-
-  SmallVector<EVT, 4> AggValueVTs;
-  ComputeValueVTs(TLI, AggTy, AggValueVTs);
-  SmallVector<EVT, 4> ValValueVTs;
-  ComputeValueVTs(TLI, ValTy, ValValueVTs);
-
-  unsigned NumAggValues = AggValueVTs.size();
-  unsigned NumValValues = ValValueVTs.size();
-  SmallVector<SDValue, 4> Values(NumAggValues);
-
-  SDValue Agg = getValue(Op0);
-  SDValue Val = getValue(Op1);
-  unsigned i = 0;
-  // Copy the beginning value(s) from the original aggregate.
-  for (; i != LinearIndex; ++i)
-    Values[i] = IntoUndef ? DAG.getUNDEF(AggValueVTs[i]) :
-                SDValue(Agg.getNode(), Agg.getResNo() + i);
-  // Copy values from the inserted value(s).
-  for (; i != LinearIndex + NumValValues; ++i)
-    Values[i] = FromUndef ? DAG.getUNDEF(AggValueVTs[i]) :
-                SDValue(Val.getNode(), Val.getResNo() + i - LinearIndex);
-  // Copy remaining value(s) from the original aggregate.
-  for (; i != NumAggValues; ++i)
-    Values[i] = IntoUndef ? DAG.getUNDEF(AggValueVTs[i]) :
-                SDValue(Agg.getNode(), Agg.getResNo() + i);
-
-  setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
-                           DAG.getVTList(&AggValueVTs[0], NumAggValues),
-                           &Values[0], NumAggValues));
-}
-
-void SelectionDAGLowering::visitExtractValue(ExtractValueInst &I) {
-  const Value *Op0 = I.getOperand(0);
-  const Type *AggTy = Op0->getType();
-  const Type *ValTy = I.getType();
-  bool OutOfUndef = isa<UndefValue>(Op0);
-
-  unsigned LinearIndex = ComputeLinearIndex(TLI, AggTy,
-                                            I.idx_begin(), I.idx_end());
-
-  SmallVector<EVT, 4> ValValueVTs;
-  ComputeValueVTs(TLI, ValTy, ValValueVTs);
-
-  unsigned NumValValues = ValValueVTs.size();
-  SmallVector<SDValue, 4> Values(NumValValues);
-
-  SDValue Agg = getValue(Op0);
-  // Copy out the selected value(s).
-  for (unsigned i = LinearIndex; i != LinearIndex + NumValValues; ++i)
-    Values[i - LinearIndex] =
-      OutOfUndef ?
-        DAG.getUNDEF(Agg.getNode()->getValueType(Agg.getResNo() + i)) :
-        SDValue(Agg.getNode(), Agg.getResNo() + i);
-
-  setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
-                           DAG.getVTList(&ValValueVTs[0], NumValValues),
-                           &Values[0], NumValValues));
-}
-
-
-void SelectionDAGLowering::visitGetElementPtr(User &I) {
-  SDValue N = getValue(I.getOperand(0));
-  const Type *Ty = I.getOperand(0)->getType();
-
-  for (GetElementPtrInst::op_iterator OI = I.op_begin()+1, E = I.op_end();
-       OI != E; ++OI) {
-    Value *Idx = *OI;
-    if (const StructType *StTy = dyn_cast<StructType>(Ty)) {
-      unsigned Field = cast<ConstantInt>(Idx)->getZExtValue();
-      if (Field) {
-        // N = N + Offset
-        uint64_t Offset = TD->getStructLayout(StTy)->getElementOffset(Field);
-        N = DAG.getNode(ISD::ADD, getCurDebugLoc(), N.getValueType(), N,
-                        DAG.getIntPtrConstant(Offset));
-      }
-      Ty = StTy->getElementType(Field);
-    } else {
-      Ty = cast<SequentialType>(Ty)->getElementType();
-
-      // If this is a constant subscript, handle it quickly.
-      if (ConstantInt *CI = dyn_cast<ConstantInt>(Idx)) {
-        if (CI->getZExtValue() == 0) continue;
-        uint64_t Offs =
-            TD->getTypeAllocSize(Ty)*cast<ConstantInt>(CI)->getSExtValue();
-        SDValue OffsVal;
-        EVT PTy = TLI.getPointerTy();
-        unsigned PtrBits = PTy.getSizeInBits();
-        if (PtrBits < 64) {
-          OffsVal = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(),
-                                TLI.getPointerTy(),
-                                DAG.getConstant(Offs, MVT::i64));
-        } else
-          OffsVal = DAG.getIntPtrConstant(Offs);
-        N = DAG.getNode(ISD::ADD, getCurDebugLoc(), N.getValueType(), N,
-                        OffsVal);
-        continue;
-      }
-
-      // N = N + Idx * ElementSize;
-      uint64_t ElementSize = TD->getTypeAllocSize(Ty);
-      SDValue IdxN = getValue(Idx);
-
-      // If the index is smaller or larger than intptr_t, sign-extend or
-      // truncate it.
-      if (IdxN.getValueType().bitsLT(N.getValueType()))
-        IdxN = DAG.getNode(ISD::SIGN_EXTEND, getCurDebugLoc(),
-                           N.getValueType(), IdxN);
-      else if (IdxN.getValueType().bitsGT(N.getValueType()))
-        IdxN = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(),
-                           N.getValueType(), IdxN);
-
-      // If this is a multiply by a power of two, turn it into a shl
-      // immediately.  This is a very common case.
-      if (ElementSize != 1) {
-        if (isPowerOf2_64(ElementSize)) {
-          unsigned Amt = Log2_64(ElementSize);
-          IdxN = DAG.getNode(ISD::SHL, getCurDebugLoc(),
-                             N.getValueType(), IdxN,
-                             DAG.getConstant(Amt, TLI.getPointerTy()));
-        } else {
-          SDValue Scale = DAG.getIntPtrConstant(ElementSize);
-          IdxN = DAG.getNode(ISD::MUL, getCurDebugLoc(),
-                             N.getValueType(), IdxN, Scale);
-        }
-      }
-
-      N = DAG.getNode(ISD::ADD, getCurDebugLoc(),
-                      N.getValueType(), N, IdxN);
-    }
-  }
-  setValue(&I, N);
-}
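
The core of the GEP lowering is plain address arithmetic: each array index is scaled by the element size, as a shift when that size is a power of two. A host-side rendering of the two forms (standalone C++; addresses invented):

    // gep_demo.cpp - &p[5] for a 4-byte element type
    #include <cstdint>
    #include <cstdio>

    int main() {
      uint64_t Base = 0x1000, Idx = 5, ElementSize = 4;
      uint64_t byMul = Base + Idx * ElementSize;  // general case: ISD::MUL
      uint64_t byShl = Base + (Idx << 2);         // power of two: ISD::SHL,
      std::printf("0x%llx 0x%llx\n",              // Log2_64(4) == 2
                  (unsigned long long)byMul, (unsigned long long)byShl);
    }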
-
-void SelectionDAGLowering::visitAlloca(AllocaInst &I) {
-  // If this is a fixed sized alloca in the entry block of the function,
-  // allocate it statically on the stack.
-  if (FuncInfo.StaticAllocaMap.count(&I))
-    return;   // getValue will auto-populate this.
-
-  const Type *Ty = I.getAllocatedType();
-  uint64_t TySize = TLI.getTargetData()->getTypeAllocSize(Ty);
-  unsigned Align =
-    std::max((unsigned)TLI.getTargetData()->getPrefTypeAlignment(Ty),
-             I.getAlignment());
-
-  SDValue AllocSize = getValue(I.getArraySize());
-  
-  AllocSize = DAG.getNode(ISD::MUL, getCurDebugLoc(), AllocSize.getValueType(),
-                          AllocSize,
-                          DAG.getConstant(TySize, AllocSize.getValueType()));
-
-  EVT IntPtr = TLI.getPointerTy();
-  if (IntPtr.bitsLT(AllocSize.getValueType()))
-    AllocSize = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(),
-                            IntPtr, AllocSize);
-  else if (IntPtr.bitsGT(AllocSize.getValueType()))
-    AllocSize = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(),
-                            IntPtr, AllocSize);
-
-  // Handle alignment.  If the requested alignment is less than or equal to
-  // the stack alignment, ignore it.  If it is greater than the stack
-  // alignment, we note it in the DYNAMIC_STACKALLOC node.
-  unsigned StackAlign =
-    TLI.getTargetMachine().getFrameInfo()->getStackAlignment();
-  if (Align <= StackAlign)
-    Align = 0;
-
-  // Round the size of the allocation up to the stack alignment size
-  // by adding StackAlign-1 to the size.
-  AllocSize = DAG.getNode(ISD::ADD, getCurDebugLoc(),
-                          AllocSize.getValueType(), AllocSize,
-                          DAG.getIntPtrConstant(StackAlign-1));
-  // Mask out the low bits for alignment purposes.
-  AllocSize = DAG.getNode(ISD::AND, getCurDebugLoc(),
-                          AllocSize.getValueType(), AllocSize,
-                          DAG.getIntPtrConstant(~(uint64_t)(StackAlign-1)));
-
-  SDValue Ops[] = { getRoot(), AllocSize, DAG.getIntPtrConstant(Align) };
-  SDVTList VTs = DAG.getVTList(AllocSize.getValueType(), MVT::Other);
-  SDValue DSA = DAG.getNode(ISD::DYNAMIC_STACKALLOC, getCurDebugLoc(),
-                            VTs, Ops, 3);
-  setValue(&I, DSA);
-  DAG.setRoot(DSA.getValue(1));
-
-  // Inform the Frame Information that we have just allocated a variable-sized
-  // object.
-  FuncInfo.MF->getFrameInfo()->CreateVariableSizedObject();
-}
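
The size rounding in visitAlloca is the usual add-then-mask trick: add StackAlign-1, then clear the low bits. Worked numbers (standalone C++; the alignment is an assumption):

    // alloca_align_demo.cpp - 37 bytes at 16-byte stack alignment -> 48
    #include <cstdint>
    #include <cstdio>

    int main() {
      uint64_t AllocSize = 37, StackAlign = 16;
      uint64_t Rounded = (AllocSize + StackAlign - 1) & ~(StackAlign - 1);
      std::printf("%llu -> %llu\n", (unsigned long long)AllocSize,
                  (unsigned long long)Rounded);
    }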
-
-void SelectionDAGLowering::visitLoad(LoadInst &I) {
-  const Value *SV = I.getOperand(0);
-  SDValue Ptr = getValue(SV);
-
-  const Type *Ty = I.getType();
-  bool isVolatile = I.isVolatile();
-  unsigned Alignment = I.getAlignment();
-
-  SmallVector<EVT, 4> ValueVTs;
-  SmallVector<uint64_t, 4> Offsets;
-  ComputeValueVTs(TLI, Ty, ValueVTs, &Offsets);
-  unsigned NumValues = ValueVTs.size();
-  if (NumValues == 0)
-    return;
-
-  SDValue Root;
-  bool ConstantMemory = false;
-  if (I.isVolatile())
-    // Serialize volatile loads with other side effects.
-    Root = getRoot();
-  else if (AA->pointsToConstantMemory(SV)) {
-    // Do not serialize (non-volatile) loads of constant memory with anything.
-    Root = DAG.getEntryNode();
-    ConstantMemory = true;
-  } else {
-    // Do not serialize non-volatile loads against each other.
-    Root = DAG.getRoot();
-  }
-
-  SmallVector<SDValue, 4> Values(NumValues);
-  SmallVector<SDValue, 4> Chains(NumValues);
-  EVT PtrVT = Ptr.getValueType();
-  for (unsigned i = 0; i != NumValues; ++i) {
-    SDValue L = DAG.getLoad(ValueVTs[i], getCurDebugLoc(), Root,
-                            DAG.getNode(ISD::ADD, getCurDebugLoc(),
-                                        PtrVT, Ptr,
-                                        DAG.getConstant(Offsets[i], PtrVT)),
-                            SV, Offsets[i], isVolatile, Alignment);
-    Values[i] = L;
-    Chains[i] = L.getValue(1);
-  }
-
-  if (!ConstantMemory) {
-    SDValue Chain = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(),
-                                  MVT::Other,
-                                  &Chains[0], NumValues);
-    if (isVolatile)
-      DAG.setRoot(Chain);
-    else
-      PendingLoads.push_back(Chain);
-  }
-
-  setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
-                           DAG.getVTList(&ValueVTs[0], NumValues),
-                           &Values[0], NumValues));
-}
-
-
-void SelectionDAGLowering::visitStore(StoreInst &I) {
-  Value *SrcV = I.getOperand(0);
-  Value *PtrV = I.getOperand(1);
-
-  SmallVector<EVT, 4> ValueVTs;
-  SmallVector<uint64_t, 4> Offsets;
-  ComputeValueVTs(TLI, SrcV->getType(), ValueVTs, &Offsets);
-  unsigned NumValues = ValueVTs.size();
-  if (NumValues == 0)
-    return;
-
-  // Get the lowered operands. Note that we do this after checking if
-  // NumValues is zero, because with zero values the operands won't have
-  // entries in the map.
-  SDValue Src = getValue(SrcV);
-  SDValue Ptr = getValue(PtrV);
-
-  SDValue Root = getRoot();
-  SmallVector<SDValue, 4> Chains(NumValues);
-  EVT PtrVT = Ptr.getValueType();
-  bool isVolatile = I.isVolatile();
-  unsigned Alignment = I.getAlignment();
-  for (unsigned i = 0; i != NumValues; ++i)
-    Chains[i] = DAG.getStore(Root, getCurDebugLoc(),
-                             SDValue(Src.getNode(), Src.getResNo() + i),
-                             DAG.getNode(ISD::ADD, getCurDebugLoc(),
-                                         PtrVT, Ptr,
-                                         DAG.getConstant(Offsets[i], PtrVT)),
-                             PtrV, Offsets[i], isVolatile, Alignment);
-
-  DAG.setRoot(DAG.getNode(ISD::TokenFactor, getCurDebugLoc(),
-                          MVT::Other, &Chains[0], NumValues));
-}
-
-/// visitTargetIntrinsic - Lower a call of a target intrinsic to an INTRINSIC
-/// node.
-void SelectionDAGLowering::visitTargetIntrinsic(CallInst &I,
-                                                unsigned Intrinsic) {
-  bool HasChain = !I.doesNotAccessMemory();
-  bool OnlyLoad = HasChain && I.onlyReadsMemory();
-
-  // Build the operand list.
-  SmallVector<SDValue, 8> Ops;
-  if (HasChain) {  // If this intrinsic has side-effects, chainify it.
-    if (OnlyLoad) {
-      // We don't need to serialize loads against other loads.
-      Ops.push_back(DAG.getRoot());
-    } else {
-      Ops.push_back(getRoot());
-    }
-  }
-
-  // Info is set by getTgtMemIntrinsic.
-  TargetLowering::IntrinsicInfo Info;
-  bool IsTgtIntrinsic = TLI.getTgtMemIntrinsic(Info, I, Intrinsic);
-
-  // Add the intrinsic ID as an integer operand if it's not a target intrinsic.
-  if (!IsTgtIntrinsic)
-    Ops.push_back(DAG.getConstant(Intrinsic, TLI.getPointerTy()));
-
-  // Add all operands of the call to the operand list.
-  for (unsigned i = 1, e = I.getNumOperands(); i != e; ++i) {
-    SDValue Op = getValue(I.getOperand(i));
-    assert(TLI.isTypeLegal(Op.getValueType()) &&
-           "Intrinsic uses a non-legal type?");
-    Ops.push_back(Op);
-  }
-
-  SmallVector<EVT, 4> ValueVTs;
-  ComputeValueVTs(TLI, I.getType(), ValueVTs);
-#ifndef NDEBUG
-  for (unsigned Val = 0, E = ValueVTs.size(); Val != E; ++Val) {
-    assert(TLI.isTypeLegal(ValueVTs[Val]) &&
-           "Intrinsic uses a non-legal type?");
-  }
-#endif // NDEBUG
-  if (HasChain)
-    ValueVTs.push_back(MVT::Other);
-
-  SDVTList VTs = DAG.getVTList(ValueVTs.data(), ValueVTs.size());
-
-  // Create the node.
-  SDValue Result;
-  if (IsTgtIntrinsic) {
-    // This is a target intrinsic that touches memory.
-    Result = DAG.getMemIntrinsicNode(Info.opc, getCurDebugLoc(),
-                                     VTs, &Ops[0], Ops.size(),
-                                     Info.memVT, Info.ptrVal, Info.offset,
-                                     Info.align, Info.vol,
-                                     Info.readMem, Info.writeMem);
-  } else if (!HasChain)
-    Result = DAG.getNode(ISD::INTRINSIC_WO_CHAIN, getCurDebugLoc(),
-                         VTs, &Ops[0], Ops.size());
-  else if (I.getType() != Type::getVoidTy(*DAG.getContext()))
-    Result = DAG.getNode(ISD::INTRINSIC_W_CHAIN, getCurDebugLoc(),
-                         VTs, &Ops[0], Ops.size());
-  else
-    Result = DAG.getNode(ISD::INTRINSIC_VOID, getCurDebugLoc(),
-                         VTs, &Ops[0], Ops.size());
-
-  if (HasChain) {
-    SDValue Chain = Result.getValue(Result.getNode()->getNumValues()-1);
-    if (OnlyLoad)
-      PendingLoads.push_back(Chain);
-    else
-      DAG.setRoot(Chain);
-  }
-  if (I.getType() != Type::getVoidTy(*DAG.getContext())) {
-    if (const VectorType *PTy = dyn_cast<VectorType>(I.getType())) {
-      EVT VT = TLI.getValueType(PTy);
-      Result = DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(), VT, Result);
-    }
-    setValue(&I, Result);
-  }
-}
-
-/// ExtractTypeInfo - Returns the type info, possibly bitcast, encoded in V.
-static GlobalVariable *ExtractTypeInfo(Value *V) {
-  V = V->stripPointerCasts();
-  GlobalVariable *GV = dyn_cast<GlobalVariable>(V);
-  assert ((GV || isa<ConstantPointerNull>(V)) &&
-          "TypeInfo must be a global variable or NULL");
-  return GV;
-}
-
-namespace llvm {
-
-/// AddCatchInfo - Extract the personality and type infos from an eh.selector
-/// call, and add them to the specified machine basic block.
-void AddCatchInfo(CallInst &I, MachineModuleInfo *MMI,
-                  MachineBasicBlock *MBB) {
-  // Inform the MachineModuleInfo of the personality for this landing pad.
-  ConstantExpr *CE = cast<ConstantExpr>(I.getOperand(2));
-  assert(CE->getOpcode() == Instruction::BitCast &&
-         isa<Function>(CE->getOperand(0)) &&
-         "Personality should be a function");
-  MMI->addPersonality(MBB, cast<Function>(CE->getOperand(0)));
-
-  // Gather all the type infos for this landing pad and pass them along to
-  // MachineModuleInfo.
-  std::vector<GlobalVariable *> TyInfo;
-  unsigned N = I.getNumOperands();
-
-  for (unsigned i = N - 1; i > 2; --i) {
-    if (ConstantInt *CI = dyn_cast<ConstantInt>(I.getOperand(i))) {
-      unsigned FilterLength = CI->getZExtValue();
-      unsigned FirstCatch = i + FilterLength + !FilterLength;
-      assert (FirstCatch <= N && "Invalid filter length");
-
-      if (FirstCatch < N) {
-        TyInfo.reserve(N - FirstCatch);
-        for (unsigned j = FirstCatch; j < N; ++j)
-          TyInfo.push_back(ExtractTypeInfo(I.getOperand(j)));
-        MMI->addCatchTypeInfo(MBB, TyInfo);
-        TyInfo.clear();
-      }
-
-      if (!FilterLength) {
-        // Cleanup.
-        MMI->addCleanup(MBB);
-      } else {
-        // Filter.
-        TyInfo.reserve(FilterLength - 1);
-        for (unsigned j = i + 1; j < FirstCatch; ++j)
-          TyInfo.push_back(ExtractTypeInfo(I.getOperand(j)));
-        MMI->addFilterTypeInfo(MBB, TyInfo);
-        TyInfo.clear();
-      }
-
-      N = i;
-    }
-  }
-
-  if (N > 3) {
-    TyInfo.reserve(N - 3);
-    for (unsigned j = 3; j < N; ++j)
-      TyInfo.push_back(ExtractTypeInfo(I.getOperand(j)));
-    MMI->addCatchTypeInfo(MBB, TyInfo);
-  }
-}
-
-}
-
-/// GetSignificand - Get the significand and build it into a floating-point
-/// number with exponent of 1:
-///
-///   Op = (Op & 0x007fffff) | 0x3f800000;
-///
-/// where Op is the hexadecimal representation of the floating-point value.
-static SDValue
-GetSignificand(SelectionDAG &DAG, SDValue Op, DebugLoc dl) {
-  SDValue t1 = DAG.getNode(ISD::AND, dl, MVT::i32, Op,
-                           DAG.getConstant(0x007fffff, MVT::i32));
-  SDValue t2 = DAG.getNode(ISD::OR, dl, MVT::i32, t1,
-                           DAG.getConstant(0x3f800000, MVT::i32));
-  return DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t2);
-}
-
-/// GetExponent - Get the exponent:
-///
-///   (float)(int)(((Op & 0x7f800000) >> 23) - 127);
-///
-/// where Op is the hexadecimal representation of the floating-point value.
-static SDValue
-GetExponent(SelectionDAG &DAG, SDValue Op, const TargetLowering &TLI,
-            DebugLoc dl) {
-  SDValue t0 = DAG.getNode(ISD::AND, dl, MVT::i32, Op,
-                           DAG.getConstant(0x7f800000, MVT::i32));
-  SDValue t1 = DAG.getNode(ISD::SRL, dl, MVT::i32, t0,
-                           DAG.getConstant(23, TLI.getPointerTy()));
-  SDValue t2 = DAG.getNode(ISD::SUB, dl, MVT::i32, t1,
-                           DAG.getConstant(127, MVT::i32));
-  return DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, t2);
-}
-
-/// getF32Constant - Get 32-bit floating point constant.
-static SDValue
-getF32Constant(SelectionDAG &DAG, unsigned Flt) {
-  return DAG.getConstantFP(APFloat(APInt(32, Flt)), MVT::f32);
-}
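-
-// The unsigned literals fed to getF32Constant throughout the expansions
-// below are IEEE-754 single-precision bit patterns: 0x3f800000 is 1.0f,
-// 0x3f317218 is ~0.69314718f (ln 2), 0x3e9a209a is ~0.30103f (log10 2),
-// and so on.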
-
-/// Inlined utility function to implement binary input atomic intrinsics for
-/// visitIntrinsicCall: I is the call instruction, and
-///                     Op is the associated NodeType for I.
-const char *
-SelectionDAGLowering::implVisitBinaryAtomic(CallInst& I, ISD::NodeType Op) {
-  SDValue Root = getRoot();
-  SDValue L =
-    DAG.getAtomic(Op, getCurDebugLoc(),
-                  getValue(I.getOperand(2)).getValueType().getSimpleVT(),
-                  Root,
-                  getValue(I.getOperand(1)),
-                  getValue(I.getOperand(2)),
-                  I.getOperand(1));
-  setValue(&I, L);
-  DAG.setRoot(L.getValue(1));
-  return 0;
-}
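-
-// The atomic node built above produces two results: result 0 is the value
-// loaded from memory and result 1 is the output chain, which is why the
-// value is recorded with setValue() while the chain becomes the new root.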
-
-// implVisitAluOverflow - Lower arithmetic overflow intrinsics.
-const char *
-SelectionDAGLowering::implVisitAluOverflow(CallInst &I, ISD::NodeType Op) {
-  SDValue Op1 = getValue(I.getOperand(1));
-  SDValue Op2 = getValue(I.getOperand(2));
-
-  SDVTList VTs = DAG.getVTList(Op1.getValueType(), MVT::i1);
-  SDValue Result = DAG.getNode(Op, getCurDebugLoc(), VTs, Op1, Op2);
-
-  setValue(&I, Result);
-  return 0;
-}
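-
-// The node built above mirrors the *.with_overflow intrinsics, which
-// return a {result, i1 overflow} pair; hence the two-type VTList pairing
-// the operand type with MVT::i1.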
-
-/// visitExp - Lower an exp intrinsic. Handles the special sequences for
-/// limited-precision mode.
-void
-SelectionDAGLowering::visitExp(CallInst &I) {
-  SDValue result;
-  DebugLoc dl = getCurDebugLoc();
-
-  if (getValue(I.getOperand(1)).getValueType() == MVT::f32 &&
-      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
-    SDValue Op = getValue(I.getOperand(1));
-
-    // Put the exponent in the right bit position for later addition to the
-    // final result:
-    //
-    //   #define LOG2OFe 1.4426950f
-    //   IntegerPartOfX = ((int32_t)(X * LOG2OFe));
-    SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, Op,
-                             getF32Constant(DAG, 0x3fb8aa3b));
-    SDValue IntegerPartOfX = DAG.getNode(ISD::FP_TO_SINT, dl, MVT::i32, t0);
-
-    //   FractionalPartOfX = (X * LOG2OFe) - (float)IntegerPartOfX;
-    SDValue t1 = DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, IntegerPartOfX);
-    SDValue X = DAG.getNode(ISD::FSUB, dl, MVT::f32, t0, t1);
-
-    //   IntegerPartOfX <<= 23;
-    IntegerPartOfX = DAG.getNode(ISD::SHL, dl, MVT::i32, IntegerPartOfX,
-                                 DAG.getConstant(23, TLI.getPointerTy()));
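-    // Shifting the integer part into bits 30..23 lines it up with the
-    // exponent field of an IEEE-754 float, so the integer ADD of
-    // IntegerPartOfX below scales the final result by 2^IntegerPartOfX.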
-
-    if (LimitFloatPrecision <= 6) {
-      // For floating-point precision of 6:
-      //
-      //   TwoToFractionalPartOfX =
-      //     0.997535578f +
-      //       (0.735607626f + 0.252464424f * x) * x;
-      //
-      // error 0.0144103317, which is 6 bits
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0x3e814304));
-      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x3f3c50c8));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
-                               getF32Constant(DAG, 0x3f7f5e7e));
-      SDValue TwoToFracPartOfX = DAG.getNode(ISD::BIT_CONVERT, dl,MVT::i32, t5);
-
-      // Add the exponent into the result in integer domain.
-      SDValue t6 = DAG.getNode(ISD::ADD, dl, MVT::i32,
-                               TwoToFracPartOfX, IntegerPartOfX);
-
-      result = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t6);
-    } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
-      // For floating-point precision of 12:
-      //
-      //   TwoToFractionalPartOfX =
-      //     0.999892986f +
-      //       (0.696457318f +
-      //         (0.224338339f + 0.792043434e-1f * x) * x) * x;
-      //
-      // 0.000107046256 error, which is 13 to 14 bits
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0x3da235e3));
-      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x3e65b8f3));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
-                               getF32Constant(DAG, 0x3f324b07));
-      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
-      SDValue t7 = DAG.getNode(ISD::FADD, dl, MVT::f32, t6,
-                               getF32Constant(DAG, 0x3f7ff8fd));
-      SDValue TwoToFracPartOfX = DAG.getNode(ISD::BIT_CONVERT, dl,MVT::i32, t7);
-
-      // Add the exponent into the result in integer domain.
-      SDValue t8 = DAG.getNode(ISD::ADD, dl, MVT::i32,
-                               TwoToFracPartOfX, IntegerPartOfX);
-
-      result = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t8);
-    } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
-      // For floating-point precision of 18:
-      //
-      //   TwoToFractionalPartOfX =
-      //     0.999999982f +
-      //       (0.693148872f +
-      //         (0.240227044f +
-      //           (0.554906021e-1f +
-      //             (0.961591928e-2f +
-      //               (0.136028312e-2f + 0.157059148e-3f *x)*x)*x)*x)*x)*x;
-      //
-      // error 2.47208000*10^(-7), which is better than 18 bits
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0x3924b03e));
-      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x3ab24b87));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
-                               getF32Constant(DAG, 0x3c1d8c17));
-      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
-      SDValue t7 = DAG.getNode(ISD::FADD, dl, MVT::f32, t6,
-                               getF32Constant(DAG, 0x3d634a1d));
-      SDValue t8 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t7, X);
-      SDValue t9 = DAG.getNode(ISD::FADD, dl, MVT::f32, t8,
-                               getF32Constant(DAG, 0x3e75fe14));
-      SDValue t10 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t9, X);
-      SDValue t11 = DAG.getNode(ISD::FADD, dl, MVT::f32, t10,
-                                getF32Constant(DAG, 0x3f317234));
-      SDValue t12 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t11, X);
-      SDValue t13 = DAG.getNode(ISD::FADD, dl, MVT::f32, t12,
-                                getF32Constant(DAG, 0x3f800000));
-      SDValue TwoToFracPartOfX = DAG.getNode(ISD::BIT_CONVERT, dl,
-                                             MVT::i32, t13);
-
-      // Add the exponent into the result in integer domain.
-      SDValue t14 = DAG.getNode(ISD::ADD, dl, MVT::i32,
-                                TwoToFracPartOfX, IntegerPartOfX);
-
-      result = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t14);
-    }
-  } else {
-    // No special expansion.
-    result = DAG.getNode(ISD::FEXP, dl,
-                         getValue(I.getOperand(1)).getValueType(),
-                         getValue(I.getOperand(1)));
-  }
-
-  setValue(&I, result);
-}
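-
-// Written out as a plain C++ sketch for the 6-bit case (ldexpf stands in
-// for the integer-domain exponent addition the DAG code performs on the
-// float's bit pattern):
-//
-//   float expf_limited6(float x) {
-//     float t = x * 1.4426950f;           // x * log2(e)
-//     int   n = (int)t;                   // integer part (truncated)
-//     float f = t - (float)n;             // fractional part, in (-1, 1)
-//     float p = 0.997535578f +
-//               (0.735607626f + 0.252464424f * f) * f;  // ~2^f
-//     return ldexpf(p, n);                // p * 2^n
-//   }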
-
-/// visitLog - Lower a log intrinsic. Handles the special sequences for
-/// limited-precision mode.
-void
-SelectionDAGLowering::visitLog(CallInst &I) {
-  SDValue result;
-  DebugLoc dl = getCurDebugLoc();
-
-  if (getValue(I.getOperand(1)).getValueType() == MVT::f32 &&
-      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
-    SDValue Op = getValue(I.getOperand(1));
-    SDValue Op1 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, Op);
-
-    // Scale the exponent by log(2) [0.69314718f].
-    SDValue Exp = GetExponent(DAG, Op1, TLI, dl);
-    SDValue LogOfExponent = DAG.getNode(ISD::FMUL, dl, MVT::f32, Exp,
-                                        getF32Constant(DAG, 0x3f317218));
-
-    // Get the significand and build it into a floating-point number with
-    // exponent of 1.
-    SDValue X = GetSignificand(DAG, Op1, dl);
-
-    if (LimitFloatPrecision <= 6) {
-      // For floating-point precision of 6:
-      //
-      //   LogofMantissa =
-      //     -1.1609546f +
-      //       (1.4034025f - 0.23903021f * x) * x;
-      //
-      // error 0.0034276066, which is better than 8 bits
-      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0xbe74c456));
-      SDValue t1 = DAG.getNode(ISD::FADD, dl, MVT::f32, t0,
-                               getF32Constant(DAG, 0x3fb3a2b1));
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
-      SDValue LogOfMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t2,
-                                          getF32Constant(DAG, 0x3f949a29));
-
-      result = DAG.getNode(ISD::FADD, dl,
-                           MVT::f32, LogOfExponent, LogOfMantissa);
-    } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
-      // For floating-point precision of 12:
-      //
-      //   LogOfMantissa =
-      //     -1.7417939f +
-      //       (2.8212026f +
-      //         (-1.4699568f +
-      //           (0.44717955f - 0.56570851e-1f * x) * x) * x) * x;
-      //
-      // error 0.000061011436, which is 14 bits
-      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0xbd67b6d6));
-      SDValue t1 = DAG.getNode(ISD::FADD, dl, MVT::f32, t0,
-                               getF32Constant(DAG, 0x3ee4f4b8));
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
-      SDValue t3 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x3fbc278b));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
-                               getF32Constant(DAG, 0x40348e95));
-      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
-      SDValue LogOfMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t6,
-                                          getF32Constant(DAG, 0x3fdef31a));
-
-      result = DAG.getNode(ISD::FADD, dl,
-                           MVT::f32, LogOfExponent, LogOfMantissa);
-    } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
-      // For floating-point precision of 18:
-      //
-      //   LogOfMantissa =
-      //     -2.1072184f +
-      //       (4.2372794f +
-      //         (-3.7029485f +
-      //           (2.2781945f +
-      //             (-0.87823314f +
-      //               (0.19073739f - 0.17809712e-1f * x) * x) * x) * x) * x)*x;
-      //
-      // error 0.0000023660568, which is better than 18 bits
-      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0xbc91e5ac));
-      SDValue t1 = DAG.getNode(ISD::FADD, dl, MVT::f32, t0,
-                               getF32Constant(DAG, 0x3e4350aa));
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
-      SDValue t3 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x3f60d3e3));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
-                               getF32Constant(DAG, 0x4011cdf0));
-      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
-      SDValue t7 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t6,
-                               getF32Constant(DAG, 0x406cfd1c));
-      SDValue t8 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t7, X);
-      SDValue t9 = DAG.getNode(ISD::FADD, dl, MVT::f32, t8,
-                               getF32Constant(DAG, 0x408797cb));
-      SDValue t10 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t9, X);
-      SDValue LogOfMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t10,
-                                          getF32Constant(DAG, 0x4006dcab));
-
-      result = DAG.getNode(ISD::FADD, dl,
-                           MVT::f32, LogOfExponent, LogOfMantissa);
-    }
-  } else {
-    // No special expansion.
-    result = DAG.getNode(ISD::FLOG, dl,
-                         getValue(I.getOperand(1)).getValueType(),
-                         getValue(I.getOperand(1)));
-  }
-
-  setValue(&I, result);
-}
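-
-// Sketched in plain C++, the expansion above splits x into m * 2^e with m
-// in [1,2) and returns e*ln(2) + P(m); frexpf is illustrative of the bit
-// extraction done by GetExponent/GetSignificand (6-bit polynomial shown):
-//
-//   float logf_limited6(float x) {
-//     int e;
-//     float m = 2.0f * frexpf(x, &e);     // m in [1,2), x = m * 2^(e-1)
-//     float le = (float)(e - 1) * 0.69314718f;
-//     float pm = (1.4034025f - 0.23903021f * m) * m - 1.1609546f;
-//     return le + pm;                     // LogOfExponent + LogOfMantissa
-//   }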
-
-/// visitLog2 - Lower a log2 intrinsic. Handles the special sequences for
-/// limited-precision mode.
-void
-SelectionDAGLowering::visitLog2(CallInst &I) {
-  SDValue result;
-  DebugLoc dl = getCurDebugLoc();
-
-  if (getValue(I.getOperand(1)).getValueType() == MVT::f32 &&
-      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
-    SDValue Op = getValue(I.getOperand(1));
-    SDValue Op1 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, Op);
-
-    // Get the exponent.
-    SDValue LogOfExponent = GetExponent(DAG, Op1, TLI, dl);
-
-    // Get the significand and build it into a floating-point number with
-    // exponent of 1.
-    SDValue X = GetSignificand(DAG, Op1, dl);
-
-    // Different possible minimax approximations of the significand in
-    // floating-point for various degrees of accuracy over [1,2).
-    if (LimitFloatPrecision <= 6) {
-      // For floating-point precision of 6:
-      //
-      //   Log2ofMantissa = -1.6749035f + (2.0246817f - .34484768f * x) * x;
-      //
-      // error 0.0049451742, which is more than 7 bits
-      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0xbeb08fe0));
-      SDValue t1 = DAG.getNode(ISD::FADD, dl, MVT::f32, t0,
-                               getF32Constant(DAG, 0x40019463));
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
-      SDValue Log2ofMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t2,
-                                           getF32Constant(DAG, 0x3fd6633d));
-
-      result = DAG.getNode(ISD::FADD, dl,
-                           MVT::f32, LogOfExponent, Log2ofMantissa);
-    } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
-      // For floating-point precision of 12:
-      //
-      //   Log2ofMantissa =
-      //     -2.51285454f +
-      //       (4.07009056f +
-      //         (-2.12067489f +
-      //           (.645142248f - 0.816157886e-1f * x) * x) * x) * x;
-      //
-      // error 0.0000876136000, which is better than 13 bits
-      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0xbda7262e));
-      SDValue t1 = DAG.getNode(ISD::FADD, dl, MVT::f32, t0,
-                               getF32Constant(DAG, 0x3f25280b));
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
-      SDValue t3 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x4007b923));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
-                               getF32Constant(DAG, 0x40823e2f));
-      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
-      SDValue Log2ofMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t6,
-                                           getF32Constant(DAG, 0x4020d29c));
-
-      result = DAG.getNode(ISD::FADD, dl,
-                           MVT::f32, LogOfExponent, Log2ofMantissa);
-    } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
-      // For floating-point precision of 18:
-      //
-      //   Log2ofMantissa =
-      //     -3.0400495f +
-      //       (6.1129976f +
-      //         (-5.3420409f +
-      //           (3.2865683f +
-      //             (-1.2669343f +
-      //               (0.27515199f -
-      //                 0.25691327e-1f * x) * x) * x) * x) * x) * x;
-      //
-      // error 0.0000018516, which is better than 18 bits
-      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0xbcd2769e));
-      SDValue t1 = DAG.getNode(ISD::FADD, dl, MVT::f32, t0,
-                               getF32Constant(DAG, 0x3e8ce0b9));
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
-      SDValue t3 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x3fa22ae7));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
-                               getF32Constant(DAG, 0x40525723));
-      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
-      SDValue t7 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t6,
-                               getF32Constant(DAG, 0x40aaf200));
-      SDValue t8 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t7, X);
-      SDValue t9 = DAG.getNode(ISD::FADD, dl, MVT::f32, t8,
-                               getF32Constant(DAG, 0x40c39dad));
-      SDValue t10 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t9, X);
-      SDValue Log2ofMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t10,
-                                           getF32Constant(DAG, 0x4042902c));
-
-      result = DAG.getNode(ISD::FADD, dl,
-                           MVT::f32, LogOfExponent, Log2ofMantissa);
-    }
-  } else {
-    // No special expansion.
-    result = DAG.getNode(ISD::FLOG2, dl,
-                         getValue(I.getOperand(1)).getValueType(),
-                         getValue(I.getOperand(1)));
-  }
-
-  setValue(&I, result);
-}
-
-/// visitLog10 - Lower a log10 intrinsic. Handles the special sequences for
-/// limited-precision mode.
-void
-SelectionDAGLowering::visitLog10(CallInst &I) {
-  SDValue result;
-  DebugLoc dl = getCurDebugLoc();
-
-  if (getValue(I.getOperand(1)).getValueType() == MVT::f32 &&
-      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
-    SDValue Op = getValue(I.getOperand(1));
-    SDValue Op1 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, Op);
-
-    // Scale the exponent by log10(2) [0.30102999f].
-    SDValue Exp = GetExponent(DAG, Op1, TLI, dl);
-    SDValue LogOfExponent = DAG.getNode(ISD::FMUL, dl, MVT::f32, Exp,
-                                        getF32Constant(DAG, 0x3e9a209a));
-
-    // Get the significand and build it into a floating-point number with
-    // exponent of 1.
-    SDValue X = GetSignificand(DAG, Op1, dl);
-
-    if (LimitFloatPrecision <= 6) {
-      // For floating-point precision of 6:
-      //
-      //   Log10ofMantissa =
-      //     -0.50419619f +
-      //       (0.60948995f - 0.10380950f * x) * x;
-      //
-      // error 0.0014886165, which is 6 bits
-      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0xbdd49a13));
-      SDValue t1 = DAG.getNode(ISD::FADD, dl, MVT::f32, t0,
-                               getF32Constant(DAG, 0x3f1c0789));
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
-      SDValue Log10ofMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t2,
-                                            getF32Constant(DAG, 0x3f011300));
-
-      result = DAG.getNode(ISD::FADD, dl,
-                           MVT::f32, LogOfExponent, Log10ofMantissa);
-    } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
-      // For floating-point precision of 12:
-      //
-      //   Log10ofMantissa =
-      //     -0.64831180f +
-      //       (0.91751397f +
-      //         (-0.31664806f + 0.47637168e-1f * x) * x) * x;
-      //
-      // error 0.00019228036, which is better than 12 bits
-      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0x3d431f31));
-      SDValue t1 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t0,
-                               getF32Constant(DAG, 0x3ea21fb2));
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
-      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x3f6ae232));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue Log10ofMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t4,
-                                            getF32Constant(DAG, 0x3f25f7c3));
-
-      result = DAG.getNode(ISD::FADD, dl,
-                           MVT::f32, LogOfExponent, Log10ofMantissa);
-    } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
-      // For floating-point precision of 18:
-      //
-      //   Log10ofMantissa =
-      //     -0.84299375f +
-      //       (1.5327582f +
-      //         (-1.0688956f +
-      //           (0.49102474f +
-      //             (-0.12539807f + 0.13508273e-1f * x) * x) * x) * x) * x;
-      //
-      // error 0.0000037995730, which is better than 18 bits
-      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0x3c5d51ce));
-      SDValue t1 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t0,
-                               getF32Constant(DAG, 0x3e00685a));
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
-      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x3efb6798));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue t5 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t4,
-                               getF32Constant(DAG, 0x3f88d192));
-      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
-      SDValue t7 = DAG.getNode(ISD::FADD, dl, MVT::f32, t6,
-                               getF32Constant(DAG, 0x3fc4316c));
-      SDValue t8 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t7, X);
-      SDValue Log10ofMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t8,
-                                            getF32Constant(DAG, 0x3f57ce70));
-
-      result = DAG.getNode(ISD::FADD, dl,
-                           MVT::f32, LogOfExponent, Log10ofMantissa);
-    }
-  } else {
-    // No special expansion.
-    result = DAG.getNode(ISD::FLOG10, dl,
-                         getValue(I.getOperand(1)).getValueType(),
-                         getValue(I.getOperand(1)));
-  }
-
-  setValue(&I, result);
-}
-
-/// visitExp2 - Lower an exp2 intrinsic. Handles the special sequences for
-/// limited-precision mode.
-void
-SelectionDAGLowering::visitExp2(CallInst &I) {
-  SDValue result;
-  DebugLoc dl = getCurDebugLoc();
-
-  if (getValue(I.getOperand(1)).getValueType() == MVT::f32 &&
-      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
-    SDValue Op = getValue(I.getOperand(1));
-
-    SDValue IntegerPartOfX = DAG.getNode(ISD::FP_TO_SINT, dl, MVT::i32, Op);
-
-    //   FractionalPartOfX = x - (float)IntegerPartOfX;
-    SDValue t1 = DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, IntegerPartOfX);
-    SDValue X = DAG.getNode(ISD::FSUB, dl, MVT::f32, Op, t1);
-
-    //   IntegerPartOfX <<= 23;
-    IntegerPartOfX = DAG.getNode(ISD::SHL, dl, MVT::i32, IntegerPartOfX,
-                                 DAG.getConstant(23, TLI.getPointerTy()));
-
-    if (LimitFloatPrecision <= 6) {
-      // For floating-point precision of 6:
-      //
-      //   TwoToFractionalPartOfX =
-      //     0.997535578f +
-      //       (0.735607626f + 0.252464424f * x) * x;
-      //
-      // error 0.0144103317, which is 6 bits
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0x3e814304));
-      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x3f3c50c8));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
-                               getF32Constant(DAG, 0x3f7f5e7e));
-      SDValue t6 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, t5);
-      SDValue TwoToFractionalPartOfX =
-        DAG.getNode(ISD::ADD, dl, MVT::i32, t6, IntegerPartOfX);
-
-      result = DAG.getNode(ISD::BIT_CONVERT, dl,
-                           MVT::f32, TwoToFractionalPartOfX);
-    } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
-      // For floating-point precision of 12:
-      //
-      //   TwoToFractionalPartOfX =
-      //     0.999892986f +
-      //       (0.696457318f +
-      //         (0.224338339f + 0.792043434e-1f * x) * x) * x;
-      //
-      // error 0.000107046256, which is 13 to 14 bits
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0x3da235e3));
-      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x3e65b8f3));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
-                               getF32Constant(DAG, 0x3f324b07));
-      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
-      SDValue t7 = DAG.getNode(ISD::FADD, dl, MVT::f32, t6,
-                               getF32Constant(DAG, 0x3f7ff8fd));
-      SDValue t8 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, t7);
-      SDValue TwoToFractionalPartOfX =
-        DAG.getNode(ISD::ADD, dl, MVT::i32, t8, IntegerPartOfX);
-
-      result = DAG.getNode(ISD::BIT_CONVERT, dl,
-                           MVT::f32, TwoToFractionalPartOfX);
-    } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
-      // For floating-point precision of 18:
-      //
-      //   TwoToFractionalPartOfX =
-      //     0.999999982f +
-      //       (0.693148872f +
-      //         (0.240227044f +
-      //           (0.554906021e-1f +
-      //             (0.961591928e-2f +
-      //               (0.136028312e-2f + 0.157059148e-3f *x)*x)*x)*x)*x)*x;
-      // error 2.47208000*10^(-7), which is better than 18 bits
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0x3924b03e));
-      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x3ab24b87));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
-                               getF32Constant(DAG, 0x3c1d8c17));
-      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
-      SDValue t7 = DAG.getNode(ISD::FADD, dl, MVT::f32, t6,
-                               getF32Constant(DAG, 0x3d634a1d));
-      SDValue t8 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t7, X);
-      SDValue t9 = DAG.getNode(ISD::FADD, dl, MVT::f32, t8,
-                               getF32Constant(DAG, 0x3e75fe14));
-      SDValue t10 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t9, X);
-      SDValue t11 = DAG.getNode(ISD::FADD, dl, MVT::f32, t10,
-                                getF32Constant(DAG, 0x3f317234));
-      SDValue t12 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t11, X);
-      SDValue t13 = DAG.getNode(ISD::FADD, dl, MVT::f32, t12,
-                                getF32Constant(DAG, 0x3f800000));
-      SDValue t14 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, t13);
-      SDValue TwoToFractionalPartOfX =
-        DAG.getNode(ISD::ADD, dl, MVT::i32, t14, IntegerPartOfX);
-
-      result = DAG.getNode(ISD::BIT_CONVERT, dl,
-                           MVT::f32, TwoToFractionalPartOfX);
-    }
-  } else {
-    // No special expansion.
-    result = DAG.getNode(ISD::FEXP2, dl,
-                         getValue(I.getOperand(1)).getValueType(),
-                         getValue(I.getOperand(1)));
-  }
-
-  setValue(&I, result);
-}
-
-/// visitPow - Lower a pow intrinsic. Handles the special sequence for
-/// limited-precision mode when the base is a constant 10.0f.
-void
-SelectionDAGLowering::visitPow(CallInst &I) {
-  SDValue result;
-  Value *Val = I.getOperand(1);
-  DebugLoc dl = getCurDebugLoc();
-  bool IsExp10 = false;
-
-  if (getValue(Val).getValueType() == MVT::f32 &&
-      getValue(I.getOperand(2)).getValueType() == MVT::f32 &&
-      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
-    if (Constant *C = const_cast<Constant*>(dyn_cast<Constant>(Val))) {
-      if (ConstantFP *CFP = dyn_cast<ConstantFP>(C)) {
-        APFloat Ten(10.0f);
-        IsExp10 = CFP->getValueAPF().bitwiseIsEqual(Ten);
-      }
-    }
-  }
-
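-  // Only an exact constant base of 10.0f enables the expansion below,
-  // which computes pow(10.0f, x) as 2^(x * log2(10)).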
-  if (IsExp10 && LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
-    SDValue Op = getValue(I.getOperand(2));
-
-    // Put the exponent in the right bit position for later addition to the
-    // final result:
-    //
-    //   #define LOG2OF10 3.3219281f
-    //   IntegerPartOfX = (int32_t)(x * LOG2OF10);
-    SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, Op,
-                             getF32Constant(DAG, 0x40549a78));
-    SDValue IntegerPartOfX = DAG.getNode(ISD::FP_TO_SINT, dl, MVT::i32, t0);
-
-    //   FractionalPartOfX = x - (float)IntegerPartOfX;
-    SDValue t1 = DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, IntegerPartOfX);
-    SDValue X = DAG.getNode(ISD::FSUB, dl, MVT::f32, t0, t1);
-
-    //   IntegerPartOfX <<= 23;
-    IntegerPartOfX = DAG.getNode(ISD::SHL, dl, MVT::i32, IntegerPartOfX,
-                                 DAG.getConstant(23, TLI.getPointerTy()));
-
-    if (LimitFloatPrecision <= 6) {
-      // For floating-point precision of 6:
-      //
-      //   twoToFractionalPartOfX =
-      //     0.997535578f +
-      //       (0.735607626f + 0.252464424f * x) * x;
-      //
-      // error 0.0144103317, which is 6 bits
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0x3e814304));
-      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x3f3c50c8));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
-                               getF32Constant(DAG, 0x3f7f5e7e));
-      SDValue t6 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, t5);
-      SDValue TwoToFractionalPartOfX =
-        DAG.getNode(ISD::ADD, dl, MVT::i32, t6, IntegerPartOfX);
-
-      result = DAG.getNode(ISD::BIT_CONVERT, dl,
-                           MVT::f32, TwoToFractionalPartOfX);
-    } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
-      // For floating-point precision of 12:
-      //
-      //   TwoToFractionalPartOfX =
-      //     0.999892986f +
-      //       (0.696457318f +
-      //         (0.224338339f + 0.792043434e-1f * x) * x) * x;
-      //
-      // error 0.000107046256, which is 13 to 14 bits
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0x3da235e3));
-      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x3e65b8f3));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
-                               getF32Constant(DAG, 0x3f324b07));
-      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
-      SDValue t7 = DAG.getNode(ISD::FADD, dl, MVT::f32, t6,
-                               getF32Constant(DAG, 0x3f7ff8fd));
-      SDValue t8 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, t7);
-      SDValue TwoToFractionalPartOfX =
-        DAG.getNode(ISD::ADD, dl, MVT::i32, t8, IntegerPartOfX);
-
-      result = DAG.getNode(ISD::BIT_CONVERT, dl,
-                           MVT::f32, TwoToFractionalPartOfX);
-    } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
-      // For floating-point precision of 18:
-      //
-      //   TwoToFractionalPartOfX =
-      //     0.999999982f +
-      //       (0.693148872f +
-      //         (0.240227044f +
-      //           (0.554906021e-1f +
-      //             (0.961591928e-2f +
-      //               (0.136028312e-2f + 0.157059148e-3f *x)*x)*x)*x)*x)*x;
-      // error 2.47208000*10^(-7), which is better than 18 bits
-      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
-                               getF32Constant(DAG, 0x3924b03e));
-      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
-                               getF32Constant(DAG, 0x3ab24b87));
-      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
-      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
-                               getF32Constant(DAG, 0x3c1d8c17));
-      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
-      SDValue t7 = DAG.getNode(ISD::FADD, dl, MVT::f32, t6,
-                               getF32Constant(DAG, 0x3d634a1d));
-      SDValue t8 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t7, X);
-      SDValue t9 = DAG.getNode(ISD::FADD, dl, MVT::f32, t8,
-                               getF32Constant(DAG, 0x3e75fe14));
-      SDValue t10 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t9, X);
-      SDValue t11 = DAG.getNode(ISD::FADD, dl, MVT::f32, t10,
-                                getF32Constant(DAG, 0x3f317234));
-      SDValue t12 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t11, X);
-      SDValue t13 = DAG.getNode(ISD::FADD, dl, MVT::f32, t12,
-                                getF32Constant(DAG, 0x3f800000));
-      SDValue t14 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, t13);
-      SDValue TwoToFractionalPartOfX =
-        DAG.getNode(ISD::ADD, dl, MVT::i32, t14, IntegerPartOfX);
-
-      result = DAG.getNode(ISD::BIT_CONVERT, dl,
-                           MVT::f32, TwoToFractionalPartOfX);
-    }
-  } else {
-    // No special expansion.
-    result = DAG.getNode(ISD::FPOW, dl,
-                         getValue(I.getOperand(1)).getValueType(),
-                         getValue(I.getOperand(1)),
-                         getValue(I.getOperand(2)));
-  }
-
-  setValue(&I, result);
-}
-
-/// visitIntrinsicCall - Lower the call to the specified intrinsic function.  If
-/// we want to emit this as a call to a named external function, return the
-/// name; otherwise lower it and return null.
-const char *
-SelectionDAGLowering::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
-  DebugLoc dl = getCurDebugLoc();
-  switch (Intrinsic) {
-  default:
-    // By default, turn this into a target intrinsic node.
-    visitTargetIntrinsic(I, Intrinsic);
-    return 0;
-  case Intrinsic::vastart:  visitVAStart(I); return 0;
-  case Intrinsic::vaend:    visitVAEnd(I); return 0;
-  case Intrinsic::vacopy:   visitVACopy(I); return 0;
-  case Intrinsic::returnaddress:
-    setValue(&I, DAG.getNode(ISD::RETURNADDR, dl, TLI.getPointerTy(),
-                             getValue(I.getOperand(1))));
-    return 0;
-  case Intrinsic::frameaddress:
-    setValue(&I, DAG.getNode(ISD::FRAMEADDR, dl, TLI.getPointerTy(),
-                             getValue(I.getOperand(1))));
-    return 0;
-  case Intrinsic::setjmp:
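-    // "_setjmp"+1 is simply "setjmp", so the pointer arithmetic below
-    // skips the leading underscore when the target does not use the
-    // underscore-prefixed name.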
-    return "_setjmp"+!TLI.usesUnderscoreSetJmp();
-    break;
-  case Intrinsic::longjmp:
-    return "_longjmp"+!TLI.usesUnderscoreLongJmp();
-    break;
-  case Intrinsic::memcpy: {
-    SDValue Op1 = getValue(I.getOperand(1));
-    SDValue Op2 = getValue(I.getOperand(2));
-    SDValue Op3 = getValue(I.getOperand(3));
-    unsigned Align = cast<ConstantInt>(I.getOperand(4))->getZExtValue();
-    DAG.setRoot(DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, false,
-                              I.getOperand(1), 0, I.getOperand(2), 0));
-    return 0;
-  }
-  case Intrinsic::memset: {
-    SDValue Op1 = getValue(I.getOperand(1));
-    SDValue Op2 = getValue(I.getOperand(2));
-    SDValue Op3 = getValue(I.getOperand(3));
-    unsigned Align = cast<ConstantInt>(I.getOperand(4))->getZExtValue();
-    DAG.setRoot(DAG.getMemset(getRoot(), dl, Op1, Op2, Op3, Align,
-                              I.getOperand(1), 0));
-    return 0;
-  }
-  case Intrinsic::memmove: {
-    SDValue Op1 = getValue(I.getOperand(1));
-    SDValue Op2 = getValue(I.getOperand(2));
-    SDValue Op3 = getValue(I.getOperand(3));
-    unsigned Align = cast<ConstantInt>(I.getOperand(4))->getZExtValue();
-
-    // If the source and destination are known to not be aliases, we can
-    // lower memmove as memcpy.
-    uint64_t Size = -1ULL;
-    if (ConstantSDNode *C = dyn_cast<ConstantSDNode>(Op3))
-      Size = C->getZExtValue();
-    if (AA->alias(I.getOperand(1), Size, I.getOperand(2), Size) ==
-        AliasAnalysis::NoAlias) {
-      DAG.setRoot(DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, false,
-                                I.getOperand(1), 0, I.getOperand(2), 0));
-      return 0;
-    }
-
-    DAG.setRoot(DAG.getMemmove(getRoot(), dl, Op1, Op2, Op3, Align,
-                               I.getOperand(1), 0, I.getOperand(2), 0));
-    return 0;
-  }
-  case Intrinsic::dbg_stoppoint: {
-    DbgStopPointInst &SPI = cast<DbgStopPointInst>(I);
-    if (isValidDebugInfoIntrinsic(SPI, CodeGenOpt::Default)) {
-      MachineFunction &MF = DAG.getMachineFunction();
-      DebugLoc Loc = ExtractDebugLocation(SPI, MF.getDebugLocInfo());
-      setCurDebugLoc(Loc);
-
-      if (OptLevel == CodeGenOpt::None)
-        DAG.setRoot(DAG.getDbgStopPoint(Loc, getRoot(),
-                                        SPI.getLine(),
-                                        SPI.getColumn(),
-                                        SPI.getContext()));
-    }
-    return 0;
-  }
-  case Intrinsic::dbg_region_start: {
-    DwarfWriter *DW = DAG.getDwarfWriter();
-    DbgRegionStartInst &RSI = cast<DbgRegionStartInst>(I);
-    if (isValidDebugInfoIntrinsic(RSI, OptLevel) && DW
-        && DW->ShouldEmitDwarfDebug()) {
-      unsigned LabelID =
-        DW->RecordRegionStart(RSI.getContext());
-      DAG.setRoot(DAG.getLabel(ISD::DBG_LABEL, getCurDebugLoc(),
-                               getRoot(), LabelID));
-    }
-    return 0;
-  }
-  case Intrinsic::dbg_region_end: {
-    DwarfWriter *DW = DAG.getDwarfWriter();
-    DbgRegionEndInst &REI = cast<DbgRegionEndInst>(I);
-
-    if (!isValidDebugInfoIntrinsic(REI, OptLevel) || !DW
-        || !DW->ShouldEmitDwarfDebug()) 
-      return 0;
-
-    MachineFunction &MF = DAG.getMachineFunction();
-    DISubprogram Subprogram(REI.getContext());
-    
-    if (isInlinedFnEnd(REI, MF.getFunction())) {
-      // This is the end of an inlined function. Debugging information for
-      // inlined functions is not handled yet (it is only supported by
-      // FastISel).
-      if (OptLevel == CodeGenOpt::None) {
-        unsigned ID = DW->RecordInlinedFnEnd(Subprogram);
-        if (ID != 0)
-          // The returned ID is 0 if this is an unbalanced "end of inlined
-          // scope". This can happen if the optimizer eats dbg intrinsics,
-          // or if the "beginning of inlined scope" was not recognized due
-          // to missing location info. In such cases, ignore this region.end.
-          DAG.setRoot(DAG.getLabel(ISD::DBG_LABEL, getCurDebugLoc(), 
-                                   getRoot(), ID));
-      }
-      return 0;
-    } 
-
-    unsigned LabelID =
-      DW->RecordRegionEnd(REI.getContext());
-    DAG.setRoot(DAG.getLabel(ISD::DBG_LABEL, getCurDebugLoc(),
-                             getRoot(), LabelID));
-    return 0;
-  }
-  case Intrinsic::dbg_func_start: {
-    DwarfWriter *DW = DAG.getDwarfWriter();
-    DbgFuncStartInst &FSI = cast<DbgFuncStartInst>(I);
-    if (!isValidDebugInfoIntrinsic(FSI, CodeGenOpt::None))
-      return 0;
-
-    MachineFunction &MF = DAG.getMachineFunction();
-    // This is the beginning of an inlined function.
-    if (isInlinedFnStart(FSI, MF.getFunction())) {
-      if (OptLevel != CodeGenOpt::None)
-        // FIXME: Debugging information for inlined functions is only
-        // supported at CodeGenOpt::None.
-        return 0;
-      
-      DebugLoc PrevLoc = CurDebugLoc;
-      // If llvm.dbg.func.start is seen in a new block before any
-      // llvm.dbg.stoppoint intrinsic, then the location info is unknown.
-      // FIXME: Why is DebugLoc reset at the beginning of each block?
-      if (PrevLoc.isUnknown())
-        return 0;
-      
-      // Record the source line.
-      setCurDebugLoc(ExtractDebugLocation(FSI, MF.getDebugLocInfo()));
-      
-      if (!DW || !DW->ShouldEmitDwarfDebug())
-        return 0;
-      DebugLocTuple PrevLocTpl = MF.getDebugLocTuple(PrevLoc);
-      DISubprogram SP(FSI.getSubprogram());
-      DICompileUnit CU(PrevLocTpl.CompileUnit);
-      unsigned LabelID = DW->RecordInlinedFnStart(SP, CU,
-                                                  PrevLocTpl.Line,
-                                                  PrevLocTpl.Col);
-      DAG.setRoot(DAG.getLabel(ISD::DBG_LABEL, getCurDebugLoc(),
-                               getRoot(), LabelID));
-      return 0;
-    }
-
-    // This is the beginning of a new function.
-    MF.setDefaultDebugLoc(ExtractDebugLocation(FSI, MF.getDebugLocInfo()));
-
-    if (!DW || !DW->ShouldEmitDwarfDebug())
-      return 0;
-    // llvm.dbg.func_start also defines the beginning of the function scope.
-    DW->RecordRegionStart(FSI.getSubprogram());
-    return 0;
-  }
-  case Intrinsic::dbg_declare: {
-    if (OptLevel != CodeGenOpt::None) 
-      // FIXME: Variable debug info is not supported here.
-      return 0;
-    DwarfWriter *DW = DAG.getDwarfWriter();
-    if (!DW)
-      return 0;
-    DbgDeclareInst &DI = cast<DbgDeclareInst>(I);
-    if (!isValidDebugInfoIntrinsic(DI, CodeGenOpt::None))
-      return 0;
-
-    Value *Variable = DI.getVariable();
-    Value *Address = DI.getAddress();
-    if (BitCastInst *BCI = dyn_cast<BitCastInst>(Address))
-      Address = BCI->getOperand(0);
-    AllocaInst *AI = dyn_cast<AllocaInst>(Address);
-    // Don't handle byval struct arguments or VLAs, for example.
-    if (!AI)
-      return 0;
-    DenseMap<const AllocaInst*, int>::iterator SI =
-      FuncInfo.StaticAllocaMap.find(AI);
-    if (SI == FuncInfo.StaticAllocaMap.end()) 
-      return 0; // VLAs.
-    int FI = SI->second;
-    DW->RecordVariable(cast<MDNode>(Variable), FI);
-    return 0;
-  }
-  case Intrinsic::eh_exception: {
-    // Insert the EXCEPTIONADDR instruction.
-    assert(CurMBB->isLandingPad() &&
-           "Call to eh.exception not in landing pad!");
-    SDVTList VTs = DAG.getVTList(TLI.getPointerTy(), MVT::Other);
-    SDValue Ops[1];
-    Ops[0] = DAG.getRoot();
-    SDValue Op = DAG.getNode(ISD::EXCEPTIONADDR, dl, VTs, Ops, 1);
-    setValue(&I, Op);
-    DAG.setRoot(Op.getValue(1));
-    return 0;
-  }
-
-  case Intrinsic::eh_selector_i32:
-  case Intrinsic::eh_selector_i64: {
-    MachineModuleInfo *MMI = DAG.getMachineModuleInfo();
-
-    if (CurMBB->isLandingPad())
-      AddCatchInfo(I, MMI, CurMBB);
-    else {
-#ifndef NDEBUG
-      FuncInfo.CatchInfoLost.insert(&I);
-#endif
-      // FIXME: Mark exception selector register as live in.  Hack for PR1508.
-      unsigned Reg = TLI.getExceptionSelectorRegister();
-      if (Reg) CurMBB->addLiveIn(Reg);
-    }
-
-    // Insert the EHSELECTION instruction.
-    SDVTList VTs = DAG.getVTList(TLI.getPointerTy(), MVT::Other);
-    SDValue Ops[2];
-    Ops[0] = getValue(I.getOperand(1));
-    Ops[1] = getRoot();
-    SDValue Op = DAG.getNode(ISD::EHSELECTION, dl, VTs, Ops, 2);
-
-    DAG.setRoot(Op.getValue(1));
-
-    MVT::SimpleValueType VT =
-      (Intrinsic == Intrinsic::eh_selector_i32 ? MVT::i32 : MVT::i64);
-    if (Op.getValueType().getSimpleVT() < VT)
-      Op = DAG.getNode(ISD::SIGN_EXTEND, dl, VT, Op);
-    else if (Op.getValueType().getSimpleVT() > VT)
-      Op = DAG.getNode(ISD::TRUNCATE, dl, VT, Op);
-    
-    setValue(&I, Op);
-    return 0;
-  }
-
-  case Intrinsic::eh_typeid_for_i32:
-  case Intrinsic::eh_typeid_for_i64: {
-    MachineModuleInfo *MMI = DAG.getMachineModuleInfo();
-    EVT VT = (Intrinsic == Intrinsic::eh_typeid_for_i32 ?
-                         MVT::i32 : MVT::i64);
-
-    if (MMI) {
-      // Find the type id for the given typeinfo.
-      GlobalVariable *GV = ExtractTypeInfo(I.getOperand(1));
-
-      unsigned TypeID = MMI->getTypeIDFor(GV);
-      setValue(&I, DAG.getConstant(TypeID, VT));
-    } else {
-      // Return something different from eh_selector.
-      setValue(&I, DAG.getConstant(1, VT));
-    }
-
-    return 0;
-  }
-
-  case Intrinsic::eh_return_i32:
-  case Intrinsic::eh_return_i64:
-    if (MachineModuleInfo *MMI = DAG.getMachineModuleInfo()) {
-      MMI->setCallsEHReturn(true);
-      DAG.setRoot(DAG.getNode(ISD::EH_RETURN, dl,
-                              MVT::Other,
-                              getControlRoot(),
-                              getValue(I.getOperand(1)),
-                              getValue(I.getOperand(2))));
-    } else {
-      setValue(&I, DAG.getConstant(0, TLI.getPointerTy()));
-    }
-
-    return 0;
-  case Intrinsic::eh_unwind_init:
-    if (MachineModuleInfo *MMI = DAG.getMachineModuleInfo()) {
-      MMI->setCallsUnwindInit(true);
-    }
-
-    return 0;
-
-  case Intrinsic::eh_dwarf_cfa: {
-    EVT VT = getValue(I.getOperand(1)).getValueType();
-    SDValue CfaArg;
-    if (VT.bitsGT(TLI.getPointerTy()))
-      CfaArg = DAG.getNode(ISD::TRUNCATE, dl,
-                           TLI.getPointerTy(), getValue(I.getOperand(1)));
-    else
-      CfaArg = DAG.getNode(ISD::SIGN_EXTEND, dl,
-                           TLI.getPointerTy(), getValue(I.getOperand(1)));
-
-    SDValue Offset = DAG.getNode(ISD::ADD, dl,
-                                 TLI.getPointerTy(),
-                                 DAG.getNode(ISD::FRAME_TO_ARGS_OFFSET, dl,
-                                             TLI.getPointerTy()),
-                                 CfaArg);
-    setValue(&I, DAG.getNode(ISD::ADD, dl,
-                             TLI.getPointerTy(),
-                             DAG.getNode(ISD::FRAMEADDR, dl,
-                                         TLI.getPointerTy(),
-                                         DAG.getConstant(0,
-                                                         TLI.getPointerTy())),
-                             Offset));
-    return 0;
-  }
-  case Intrinsic::convertff:
-  case Intrinsic::convertfsi:
-  case Intrinsic::convertfui:
-  case Intrinsic::convertsif:
-  case Intrinsic::convertuif:
-  case Intrinsic::convertss:
-  case Intrinsic::convertsu:
-  case Intrinsic::convertus:
-  case Intrinsic::convertuu: {
-    ISD::CvtCode Code = ISD::CVT_INVALID;
-    switch (Intrinsic) {
-    case Intrinsic::convertff:  Code = ISD::CVT_FF; break;
-    case Intrinsic::convertfsi: Code = ISD::CVT_FS; break;
-    case Intrinsic::convertfui: Code = ISD::CVT_FU; break;
-    case Intrinsic::convertsif: Code = ISD::CVT_SF; break;
-    case Intrinsic::convertuif: Code = ISD::CVT_UF; break;
-    case Intrinsic::convertss:  Code = ISD::CVT_SS; break;
-    case Intrinsic::convertsu:  Code = ISD::CVT_SU; break;
-    case Intrinsic::convertus:  Code = ISD::CVT_US; break;
-    case Intrinsic::convertuu:  Code = ISD::CVT_UU; break;
-    }
-    EVT DestVT = TLI.getValueType(I.getType());
-    Value* Op1 = I.getOperand(1);
-    setValue(&I, DAG.getConvertRndSat(DestVT, getCurDebugLoc(), getValue(Op1),
-                                DAG.getValueType(DestVT),
-                                DAG.getValueType(getValue(Op1).getValueType()),
-                                getValue(I.getOperand(2)),
-                                getValue(I.getOperand(3)),
-                                Code));
-    return 0;
-  }
-
-  case Intrinsic::sqrt:
-    setValue(&I, DAG.getNode(ISD::FSQRT, dl,
-                             getValue(I.getOperand(1)).getValueType(),
-                             getValue(I.getOperand(1))));
-    return 0;
-  case Intrinsic::powi:
-    setValue(&I, DAG.getNode(ISD::FPOWI, dl,
-                             getValue(I.getOperand(1)).getValueType(),
-                             getValue(I.getOperand(1)),
-                             getValue(I.getOperand(2))));
-    return 0;
-  case Intrinsic::sin:
-    setValue(&I, DAG.getNode(ISD::FSIN, dl,
-                             getValue(I.getOperand(1)).getValueType(),
-                             getValue(I.getOperand(1))));
-    return 0;
-  case Intrinsic::cos:
-    setValue(&I, DAG.getNode(ISD::FCOS, dl,
-                             getValue(I.getOperand(1)).getValueType(),
-                             getValue(I.getOperand(1))));
-    return 0;
-  case Intrinsic::log:
-    visitLog(I);
-    return 0;
-  case Intrinsic::log2:
-    visitLog2(I);
-    return 0;
-  case Intrinsic::log10:
-    visitLog10(I);
-    return 0;
-  case Intrinsic::exp:
-    visitExp(I);
-    return 0;
-  case Intrinsic::exp2:
-    visitExp2(I);
-    return 0;
-  case Intrinsic::pow:
-    visitPow(I);
-    return 0;
-  case Intrinsic::pcmarker: {
-    SDValue Tmp = getValue(I.getOperand(1));
-    DAG.setRoot(DAG.getNode(ISD::PCMARKER, dl, MVT::Other, getRoot(), Tmp));
-    return 0;
-  }
-  case Intrinsic::readcyclecounter: {
-    SDValue Op = getRoot();
-    SDValue Tmp = DAG.getNode(ISD::READCYCLECOUNTER, dl,
-                              DAG.getVTList(MVT::i64, MVT::Other),
-                              &Op, 1);
-    setValue(&I, Tmp);
-    DAG.setRoot(Tmp.getValue(1));
-    return 0;
-  }
-  case Intrinsic::bswap:
-    setValue(&I, DAG.getNode(ISD::BSWAP, dl,
-                             getValue(I.getOperand(1)).getValueType(),
-                             getValue(I.getOperand(1))));
-    return 0;
-  case Intrinsic::cttz: {
-    SDValue Arg = getValue(I.getOperand(1));
-    EVT Ty = Arg.getValueType();
-    SDValue result = DAG.getNode(ISD::CTTZ, dl, Ty, Arg);
-    setValue(&I, result);
-    return 0;
-  }
-  case Intrinsic::ctlz: {
-    SDValue Arg = getValue(I.getOperand(1));
-    EVT Ty = Arg.getValueType();
-    SDValue result = DAG.getNode(ISD::CTLZ, dl, Ty, Arg);
-    setValue(&I, result);
-    return 0;
-  }
-  case Intrinsic::ctpop: {
-    SDValue Arg = getValue(I.getOperand(1));
-    EVT Ty = Arg.getValueType();
-    SDValue result = DAG.getNode(ISD::CTPOP, dl, Ty, Arg);
-    setValue(&I, result);
-    return 0;
-  }
-  case Intrinsic::stacksave: {
-    SDValue Op = getRoot();
-    SDValue Tmp = DAG.getNode(ISD::STACKSAVE, dl,
-              DAG.getVTList(TLI.getPointerTy(), MVT::Other), &Op, 1);
-    setValue(&I, Tmp);
-    DAG.setRoot(Tmp.getValue(1));
-    return 0;
-  }
-  case Intrinsic::stackrestore: {
-    SDValue Tmp = getValue(I.getOperand(1));
-    DAG.setRoot(DAG.getNode(ISD::STACKRESTORE, dl, MVT::Other, getRoot(), Tmp));
-    return 0;
-  }
-  case Intrinsic::stackprotector: {
-    // Emit code into the DAG to store the stack guard onto the stack.
-    MachineFunction &MF = DAG.getMachineFunction();
-    MachineFrameInfo *MFI = MF.getFrameInfo();
-    EVT PtrTy = TLI.getPointerTy();
-
-    SDValue Src = getValue(I.getOperand(1));   // The guard's value.
-    AllocaInst *Slot = cast<AllocaInst>(I.getOperand(2));
-
-    int FI = FuncInfo.StaticAllocaMap[Slot];
-    MFI->setStackProtectorIndex(FI);
-
-    SDValue FIN = DAG.getFrameIndex(FI, PtrTy);
-
-    // Store the stack protector onto the stack.
-    SDValue Result = DAG.getStore(getRoot(), getCurDebugLoc(), Src, FIN,
-                                  PseudoSourceValue::getFixedStack(FI),
-                                  0, true);
-    setValue(&I, Result);
-    DAG.setRoot(Result);
-    return 0;
-  }
-  case Intrinsic::var_annotation:
-    // Discard annotate attributes
-    return 0;
-
-  case Intrinsic::init_trampoline: {
-    const Function *F = cast<Function>(I.getOperand(2)->stripPointerCasts());
-
-    SDValue Ops[6];
-    Ops[0] = getRoot();
-    Ops[1] = getValue(I.getOperand(1));
-    Ops[2] = getValue(I.getOperand(2));
-    Ops[3] = getValue(I.getOperand(3));
-    Ops[4] = DAG.getSrcValue(I.getOperand(1));
-    Ops[5] = DAG.getSrcValue(F);
-
-    SDValue Tmp = DAG.getNode(ISD::TRAMPOLINE, dl,
-                              DAG.getVTList(TLI.getPointerTy(), MVT::Other),
-                              Ops, 6);
-
-    setValue(&I, Tmp);
-    DAG.setRoot(Tmp.getValue(1));
-    return 0;
-  }
-
-  case Intrinsic::gcroot:
-    if (GFI) {
-      Value *Alloca = I.getOperand(1);
-      Constant *TypeMap = cast<Constant>(I.getOperand(2));
-
-      FrameIndexSDNode *FI = cast<FrameIndexSDNode>(getValue(Alloca).getNode());
-      GFI->addStackRoot(FI->getIndex(), TypeMap);
-    }
-    return 0;
-
-  case Intrinsic::gcread:
-  case Intrinsic::gcwrite:
-    llvm_unreachable("GC failed to lower gcread/gcwrite intrinsics!");
-    return 0;
-
-  case Intrinsic::flt_rounds: {
-    setValue(&I, DAG.getNode(ISD::FLT_ROUNDS_, dl, MVT::i32));
-    return 0;
-  }
-
-  case Intrinsic::trap: {
-    DAG.setRoot(DAG.getNode(ISD::TRAP, dl,MVT::Other, getRoot()));
-    return 0;
-  }
-
-  case Intrinsic::uadd_with_overflow:
-    return implVisitAluOverflow(I, ISD::UADDO);
-  case Intrinsic::sadd_with_overflow:
-    return implVisitAluOverflow(I, ISD::SADDO);
-  case Intrinsic::usub_with_overflow:
-    return implVisitAluOverflow(I, ISD::USUBO);
-  case Intrinsic::ssub_with_overflow:
-    return implVisitAluOverflow(I, ISD::SSUBO);
-  case Intrinsic::umul_with_overflow:
-    return implVisitAluOverflow(I, ISD::UMULO);
-  case Intrinsic::smul_with_overflow:
-    return implVisitAluOverflow(I, ISD::SMULO);
-
-  case Intrinsic::prefetch: {
-    SDValue Ops[4];
-    Ops[0] = getRoot();
-    Ops[1] = getValue(I.getOperand(1));
-    Ops[2] = getValue(I.getOperand(2));
-    Ops[3] = getValue(I.getOperand(3));
-    DAG.setRoot(DAG.getNode(ISD::PREFETCH, dl, MVT::Other, &Ops[0], 4));
-    return 0;
-  }
-
-  case Intrinsic::memory_barrier: {
-    SDValue Ops[6];
-    Ops[0] = getRoot();
-    for (int x = 1; x < 6; ++x)
-      Ops[x] = getValue(I.getOperand(x));
-
-    DAG.setRoot(DAG.getNode(ISD::MEMBARRIER, dl, MVT::Other, &Ops[0], 6));
-    return 0;
-  }
-  case Intrinsic::atomic_cmp_swap: {
-    SDValue Root = getRoot();
-    SDValue L =
-      DAG.getAtomic(ISD::ATOMIC_CMP_SWAP, getCurDebugLoc(),
-                    getValue(I.getOperand(2)).getValueType().getSimpleVT(),
-                    Root,
-                    getValue(I.getOperand(1)),
-                    getValue(I.getOperand(2)),
-                    getValue(I.getOperand(3)),
-                    I.getOperand(1));
-    setValue(&I, L);
-    DAG.setRoot(L.getValue(1));
-    return 0;
-  }
-  case Intrinsic::atomic_load_add:
-    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_ADD);
-  case Intrinsic::atomic_load_sub:
-    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_SUB);
-  case Intrinsic::atomic_load_or:
-    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_OR);
-  case Intrinsic::atomic_load_xor:
-    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_XOR);
-  case Intrinsic::atomic_load_and:
-    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_AND);
-  case Intrinsic::atomic_load_nand:
-    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_NAND);
-  case Intrinsic::atomic_load_max:
-    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_MAX);
-  case Intrinsic::atomic_load_min:
-    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_MIN);
-  case Intrinsic::atomic_load_umin:
-    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_UMIN);
-  case Intrinsic::atomic_load_umax:
-    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_UMAX);
-  case Intrinsic::atomic_swap:
-    return implVisitBinaryAtomic(I, ISD::ATOMIC_SWAP);
-  }
-}
-
-/// Test if the given instruction is in a position to be optimized
-/// with a tail-call. This roughly means that it's in a block with
-/// a return and there's nothing that needs to be scheduled
-/// between it and the return.
-///
-/// This function only tests target-independent requirements.
-/// For target-dependent requirements, a target should override
-/// TargetLowering::IsEligibleForTailCallOptimization.
-///
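-/// For example, in
-///
-///   %r = call i32 @f(i32 %x)
-///   ret i32 %r
-///
-/// the call is in tail position; an intervening instruction that touches
-/// memory, or a return of a modified value, would disqualify it.
-///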
-static bool
-isInTailCallPosition(const Instruction *I, Attributes RetAttr,
-                     const TargetLowering &TLI) {
-  const BasicBlock *ExitBB = I->getParent();
-  const TerminatorInst *Term = ExitBB->getTerminator();
-  const ReturnInst *Ret = dyn_cast<ReturnInst>(Term);
-  const Function *F = ExitBB->getParent();
-
-  // The block must end in a return statement or an unreachable.
-  if (!Ret && !isa<UnreachableInst>(Term)) return false;
-
-  // If I will have a chain, make sure no other instruction that will have a
-  // chain interposes between I and the return.
-  if (I->mayHaveSideEffects() || I->mayReadFromMemory() ||
-      !I->isSafeToSpeculativelyExecute())
-    for (BasicBlock::const_iterator BBI = prior(prior(ExitBB->end())); ;
-         --BBI) {
-      if (&*BBI == I)
-        break;
-      if (BBI->mayHaveSideEffects() || BBI->mayReadFromMemory() ||
-          !BBI->isSafeToSpeculativelyExecute())
-        return false;
-    }
-
-  // If the block ends with a void return or unreachable, it doesn't matter
-  // what the call's return type is.
-  if (!Ret || Ret->getNumOperands() == 0) return true;
-
-  // Conservatively require the attributes of the call to match those of
-  // the return.
-  if (F->getAttributes().getRetAttributes() != RetAttr)
-    return false;
-
-  // Otherwise, make sure the unmodified return value of I is the return value.
-  for (const Instruction *U = dyn_cast<Instruction>(Ret->getOperand(0)); ;
-       U = dyn_cast<Instruction>(U->getOperand(0))) {
-    if (!U)
-      return false;
-    if (!U->hasOneUse())
-      return false;
-    if (U == I)
-      break;
-    // Check for a truly no-op truncate.
-    if (isa<TruncInst>(U) &&
-        TLI.isTruncateFree(U->getOperand(0)->getType(), U->getType()))
-      continue;
-    // Check for a truly no-op bitcast.
-    if (isa<BitCastInst>(U) &&
-        (U->getOperand(0)->getType() == U->getType() ||
-         (isa<PointerType>(U->getOperand(0)->getType()) &&
-          isa<PointerType>(U->getType()))))
-      continue;
-    // Otherwise it's not a true no-op.
-    return false;
-  }
-
-  return true;
-}
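-
-// Illustrative sketch (not part of the original source): under the rules
-// above, a call is in tail position when nothing schedulable sits between it
-// and the return, and the return value (if any) is the call's unmodified
-// result, e.g.:
-//
-//   %r = call i32 @callee(i32 %x)
-//   ret i32 %r
-//
-// Inserting, say, a store between the call and the ret would make
-// isInTailCallPosition return false, since a store may have side effects.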
-
-void SelectionDAGLowering::LowerCallTo(CallSite CS, SDValue Callee,
-                                       bool isTailCall,
-                                       MachineBasicBlock *LandingPad) {
-  const PointerType *PT = cast<PointerType>(CS.getCalledValue()->getType());
-  const FunctionType *FTy = cast<FunctionType>(PT->getElementType());
-  MachineModuleInfo *MMI = DAG.getMachineModuleInfo();
-  unsigned BeginLabel = 0, EndLabel = 0;
-
-  TargetLowering::ArgListTy Args;
-  TargetLowering::ArgListEntry Entry;
-  Args.reserve(CS.arg_size());
-  unsigned j = 1;
-  for (CallSite::arg_iterator i = CS.arg_begin(), e = CS.arg_end();
-       i != e; ++i, ++j) {
-    SDValue ArgNode = getValue(*i);
-    Entry.Node = ArgNode; Entry.Ty = (*i)->getType();
-
-    unsigned attrInd = i - CS.arg_begin() + 1;
-    Entry.isSExt  = CS.paramHasAttr(attrInd, Attribute::SExt);
-    Entry.isZExt  = CS.paramHasAttr(attrInd, Attribute::ZExt);
-    Entry.isInReg = CS.paramHasAttr(attrInd, Attribute::InReg);
-    Entry.isSRet  = CS.paramHasAttr(attrInd, Attribute::StructRet);
-    Entry.isNest  = CS.paramHasAttr(attrInd, Attribute::Nest);
-    Entry.isByVal = CS.paramHasAttr(attrInd, Attribute::ByVal);
-    Entry.Alignment = CS.getParamAlignment(attrInd);
-    Args.push_back(Entry);
-  }
-
-  if (LandingPad && MMI) {
-    // Insert a label before the invoke call to mark the try range.  This can be
-    // used to detect deletion of the invoke via the MachineModuleInfo.
-    BeginLabel = MMI->NextLabelID();
-
-    // Both PendingLoads and PendingExports must be flushed here;
-    // this call might not return.
-    (void)getRoot();
-    DAG.setRoot(DAG.getLabel(ISD::EH_LABEL, getCurDebugLoc(),
-                             getControlRoot(), BeginLabel));
-  }
-
-  // Check if target-independent constraints permit a tail call here.
-  // Target-dependent constraints are checked within TLI.LowerCallTo.
-  if (isTailCall &&
-      !isInTailCallPosition(CS.getInstruction(),
-                            CS.getAttributes().getRetAttributes(),
-                            TLI))
-    isTailCall = false;
-
-  std::pair<SDValue,SDValue> Result =
-    TLI.LowerCallTo(getRoot(), CS.getType(),
-                    CS.paramHasAttr(0, Attribute::SExt),
-                    CS.paramHasAttr(0, Attribute::ZExt), FTy->isVarArg(),
-                    CS.paramHasAttr(0, Attribute::InReg), FTy->getNumParams(),
-                    CS.getCallingConv(),
-                    isTailCall,
-                    !CS.getInstruction()->use_empty(),
-                    Callee, Args, DAG, getCurDebugLoc());
-  assert((isTailCall || Result.second.getNode()) &&
-         "Non-null chain expected with non-tail call!");
-  assert((Result.second.getNode() || !Result.first.getNode()) &&
-         "Null value expected with tail call!");
-  if (Result.first.getNode())
-    setValue(CS.getInstruction(), Result.first);
-  // As a special case, a null chain means that a tail call has
-  // been emitted and the DAG root is already updated.
-  if (Result.second.getNode())
-    DAG.setRoot(Result.second);
-  else
-    HasTailCall = true;
-
-  if (LandingPad && MMI) {
-    // Insert a label at the end of the invoke call to mark the try range.  This
-    // can be used to detect deletion of the invoke via the MachineModuleInfo.
-    EndLabel = MMI->NextLabelID();
-    DAG.setRoot(DAG.getLabel(ISD::EH_LABEL, getCurDebugLoc(),
-                             getRoot(), EndLabel));
-
-    // Inform MachineModuleInfo of range.
-    MMI->addInvoke(LandingPad, BeginLabel, EndLabel);
-  }
-}
-
-
-void SelectionDAGLowering::visitCall(CallInst &I) {
-  const char *RenameFn = 0;
-  if (Function *F = I.getCalledFunction()) {
-    if (F->isDeclaration()) {
-      const TargetIntrinsicInfo *II = TLI.getTargetMachine().getIntrinsicInfo();
-      if (II) {
-        if (unsigned IID = II->getIntrinsicID(F)) {
-          RenameFn = visitIntrinsicCall(I, IID);
-          if (!RenameFn)
-            return;
-        }
-      }
-      if (unsigned IID = F->getIntrinsicID()) {
-        RenameFn = visitIntrinsicCall(I, IID);
-        if (!RenameFn)
-          return;
-      }
-    }
-
-    // Check for well-known libc/libm calls.  If the function is internal, it
-    // can't be a library call.
-    if (!F->hasLocalLinkage() && F->hasName()) {
-      StringRef Name = F->getName();
-      if (Name == "copysign" || Name == "copysignf") {
-        if (I.getNumOperands() == 3 &&   // Basic sanity checks.
-            I.getOperand(1)->getType()->isFloatingPoint() &&
-            I.getType() == I.getOperand(1)->getType() &&
-            I.getType() == I.getOperand(2)->getType()) {
-          SDValue LHS = getValue(I.getOperand(1));
-          SDValue RHS = getValue(I.getOperand(2));
-          setValue(&I, DAG.getNode(ISD::FCOPYSIGN, getCurDebugLoc(),
-                                   LHS.getValueType(), LHS, RHS));
-          return;
-        }
-      } else if (Name == "fabs" || Name == "fabsf" || Name == "fabsl") {
-        if (I.getNumOperands() == 2 &&   // Basic sanity checks.
-            I.getOperand(1)->getType()->isFloatingPoint() &&
-            I.getType() == I.getOperand(1)->getType()) {
-          SDValue Tmp = getValue(I.getOperand(1));
-          setValue(&I, DAG.getNode(ISD::FABS, getCurDebugLoc(),
-                                   Tmp.getValueType(), Tmp));
-          return;
-        }
-      } else if (Name == "sin" || Name == "sinf" || Name == "sinl") {
-        if (I.getNumOperands() == 2 &&   // Basic sanity checks.
-            I.getOperand(1)->getType()->isFloatingPoint() &&
-            I.getType() == I.getOperand(1)->getType() &&
-            I.onlyReadsMemory()) {
-          SDValue Tmp = getValue(I.getOperand(1));
-          setValue(&I, DAG.getNode(ISD::FSIN, getCurDebugLoc(),
-                                   Tmp.getValueType(), Tmp));
-          return;
-        }
-      } else if (Name == "cos" || Name == "cosf" || Name == "cosl") {
-        if (I.getNumOperands() == 2 &&   // Basic sanity checks.
-            I.getOperand(1)->getType()->isFloatingPoint() &&
-            I.getType() == I.getOperand(1)->getType() &&
-            I.onlyReadsMemory()) {
-          SDValue Tmp = getValue(I.getOperand(1));
-          setValue(&I, DAG.getNode(ISD::FCOS, getCurDebugLoc(),
-                                   Tmp.getValueType(), Tmp));
-          return;
-        }
-      } else if (Name == "sqrt" || Name == "sqrtf" || Name == "sqrtl") {
-        if (I.getNumOperands() == 2 &&   // Basic sanity checks.
-            I.getOperand(1)->getType()->isFloatingPoint() &&
-            I.getType() == I.getOperand(1)->getType() &&
-            I.onlyReadsMemory()) {
-          SDValue Tmp = getValue(I.getOperand(1));
-          setValue(&I, DAG.getNode(ISD::FSQRT, getCurDebugLoc(),
-                                   Tmp.getValueType(), Tmp));
-          return;
-        }
-      }
-    }
-  } else if (isa<InlineAsm>(I.getOperand(0))) {
-    visitInlineAsm(&I);
-    return;
-  }
-
-  SDValue Callee;
-  if (!RenameFn)
-    Callee = getValue(I.getOperand(0));
-  else
-    Callee = DAG.getExternalSymbol(RenameFn, TLI.getPointerTy());
-
-  // Check if we can potentially perform a tail call. More detailed
-  // checking is done within LowerCallTo, after more information
-  // about the call is known.
-  bool isTailCall = PerformTailCallOpt && I.isTailCall();
-
-  LowerCallTo(&I, Callee, isTailCall);
-}
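-
-// Illustrative example (not part of the original source): given a call such
-// as
-//
-//   %r = call double @sqrt(double %x)    ; declared readonly
-//
-// the libcall recognition above emits an ISD::FSQRT node directly instead of
-// lowering an actual call.  If the call may write memory (e.g. a sqrt that
-// sets errno), onlyReadsMemory() is false and the normal call path is taken.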
-
-
-/// getCopyFromRegs - Emit a series of CopyFromReg nodes that copies from
-/// this value and returns the result as a ValueVT value.  This uses
-/// Chain/Flag as the input and updates them for the output Chain/Flag.
-/// If the Flag pointer is NULL, no flag is used.
-SDValue RegsForValue::getCopyFromRegs(SelectionDAG &DAG, DebugLoc dl,
-                                      SDValue &Chain,
-                                      SDValue *Flag) const {
-  // Assemble the legal parts into the final values.
-  SmallVector<SDValue, 4> Values(ValueVTs.size());
-  SmallVector<SDValue, 8> Parts;
-  for (unsigned Value = 0, Part = 0, e = ValueVTs.size(); Value != e; ++Value) {
-    // Copy the legal parts from the registers.
-    EVT ValueVT = ValueVTs[Value];
-    unsigned NumRegs = TLI->getNumRegisters(*DAG.getContext(), ValueVT);
-    EVT RegisterVT = RegVTs[Value];
-
-    Parts.resize(NumRegs);
-    for (unsigned i = 0; i != NumRegs; ++i) {
-      SDValue P;
-      if (Flag == 0)
-        P = DAG.getCopyFromReg(Chain, dl, Regs[Part+i], RegisterVT);
-      else {
-        P = DAG.getCopyFromReg(Chain, dl, Regs[Part+i], RegisterVT, *Flag);
-        *Flag = P.getValue(2);
-      }
-      Chain = P.getValue(1);
-
-      // If the source register was virtual and if we know something about it,
-      // add an assert node.
-      if (TargetRegisterInfo::isVirtualRegister(Regs[Part+i]) &&
-          RegisterVT.isInteger() && !RegisterVT.isVector()) {
-        unsigned SlotNo = Regs[Part+i]-TargetRegisterInfo::FirstVirtualRegister;
-        FunctionLoweringInfo &FLI = DAG.getFunctionLoweringInfo();
-        if (FLI.LiveOutRegInfo.size() > SlotNo) {
-          FunctionLoweringInfo::LiveOutInfo &LOI = FLI.LiveOutRegInfo[SlotNo];
-
-          unsigned RegSize = RegisterVT.getSizeInBits();
-          unsigned NumSignBits = LOI.NumSignBits;
-          unsigned NumZeroBits = LOI.KnownZero.countLeadingOnes();
-
-          // FIXME: We capture more information than the dag can represent.  For
-          // now, just use the tightest assertzext/assertsext possible.
-          bool isSExt = true;
-          EVT FromVT(MVT::Other);
-          if (NumSignBits == RegSize)
-            isSExt = true, FromVT = MVT::i1;   // ASSERT SEXT 1
-          else if (NumZeroBits >= RegSize-1)
-            isSExt = false, FromVT = MVT::i1;  // ASSERT ZEXT 1
-          else if (NumSignBits > RegSize-8)
-            isSExt = true, FromVT = MVT::i8;   // ASSERT SEXT 8
-          else if (NumZeroBits >= RegSize-8)
-            isSExt = false, FromVT = MVT::i8;  // ASSERT ZEXT 8
-          else if (NumSignBits > RegSize-16)
-            isSExt = true, FromVT = MVT::i16;  // ASSERT SEXT 16
-          else if (NumZeroBits >= RegSize-16)
-            isSExt = false, FromVT = MVT::i16; // ASSERT ZEXT 16
-          else if (NumSignBits > RegSize-32)
-            isSExt = true, FromVT = MVT::i32;  // ASSERT SEXT 32
-          else if (NumZeroBits >= RegSize-32)
-            isSExt = false, FromVT = MVT::i32; // ASSERT ZEXT 32
-
-          if (FromVT != MVT::Other) {
-            P = DAG.getNode(isSExt ? ISD::AssertSext : ISD::AssertZext, dl,
-                            RegisterVT, P, DAG.getValueType(FromVT));
-
-          }
-        }
-      }
-
-      Parts[i] = P;
-    }
-
-    Values[Value] = getCopyFromParts(DAG, dl, Parts.begin(),
-                                     NumRegs, RegisterVT, ValueVT);
-    Part += NumRegs;
-    Parts.clear();
-  }
-
-  return DAG.getNode(ISD::MERGE_VALUES, dl,
-                     DAG.getVTList(&ValueVTs[0], ValueVTs.size()),
-                     &Values[0], ValueVTs.size());
-}
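-
-// Worked example (illustrative, not part of the original source) of the
-// assert ladder above: with RegisterVT = i32 (RegSize = 32), live-out
-// NumSignBits = 26, and a small NumZeroBits, the first matching case is
-// NumSignBits > RegSize-8 (26 > 24), so the copy is wrapped in an AssertSext
-// from i8 -- the tightest assertion the DAG can express for that value.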
-
-/// getCopyToRegs - Emit a series of CopyToReg nodes that copies the
-/// specified value into the registers specified by this object.  This uses
-/// Chain/Flag as the input and updates them for the output Chain/Flag.
-/// If the Flag pointer is NULL, no flag is used.
-void RegsForValue::getCopyToRegs(SDValue Val, SelectionDAG &DAG, DebugLoc dl,
-                                 SDValue &Chain, SDValue *Flag) const {
-  // Get the list of the value's legal parts.
-  unsigned NumRegs = Regs.size();
-  SmallVector<SDValue, 8> Parts(NumRegs);
-  for (unsigned Value = 0, Part = 0, e = ValueVTs.size(); Value != e; ++Value) {
-    EVT ValueVT = ValueVTs[Value];
-    unsigned NumParts = TLI->getNumRegisters(*DAG.getContext(), ValueVT);
-    EVT RegisterVT = RegVTs[Value];
-
-    getCopyToParts(DAG, dl, Val.getValue(Val.getResNo() + Value),
-                   &Parts[Part], NumParts, RegisterVT);
-    Part += NumParts;
-  }
-
-  // Copy the parts into the registers.
-  SmallVector<SDValue, 8> Chains(NumRegs);
-  for (unsigned i = 0; i != NumRegs; ++i) {
-    SDValue Part;
-    if (Flag == 0)
-      Part = DAG.getCopyToReg(Chain, dl, Regs[i], Parts[i]);
-    else {
-      Part = DAG.getCopyToReg(Chain, dl, Regs[i], Parts[i], *Flag);
-      *Flag = Part.getValue(1);
-    }
-    Chains[i] = Part.getValue(0);
-  }
-
-  if (NumRegs == 1 || Flag)
-    // If NumRegs > 1 and a Flag is used, then the use of the last CopyToReg
-    // is flagged to it; that is, the CopyToReg nodes and the user are
-    // considered a single scheduling unit. If we created a TokenFactor and
-    // returned it as the chain, the TokenFactor would be both a predecessor
-    // (operand) of the user and a successor (the TF operands are flagged to
-    // the user).
-    // c1, f1 = CopyToReg
-    // c2, f2 = CopyToReg
-    // c3     = TokenFactor c1, c2
-    // ...
-    //        = op c3, ..., f2
-    Chain = Chains[NumRegs-1];
-  else
-    Chain = DAG.getNode(ISD::TokenFactor, dl, MVT::Other, &Chains[0], NumRegs);
-}
-
-/// AddInlineAsmOperands - Add this value to the specified inlineasm node
-/// operand list.  This adds the code marker and includes the number of
-/// values added into it.
-void RegsForValue::AddInlineAsmOperands(unsigned Code,
-                                        bool HasMatching,unsigned MatchingIdx,
-                                        SelectionDAG &DAG,
-                                        std::vector<SDValue> &Ops) const {
-  EVT IntPtrTy = DAG.getTargetLoweringInfo().getPointerTy();
-  assert(Regs.size() < (1 << 13) && "Too many inline asm outputs!");
-  unsigned Flag = Code | (Regs.size() << 3);
-  if (HasMatching)
-    Flag |= 0x80000000 | (MatchingIdx << 16);
-  Ops.push_back(DAG.getTargetConstant(Flag, IntPtrTy));
-  for (unsigned Value = 0, Reg = 0, e = ValueVTs.size(); Value != e; ++Value) {
-    unsigned NumRegs = TLI->getNumRegisters(*DAG.getContext(), ValueVTs[Value]);
-    EVT RegisterVT = RegVTs[Value];
-    for (unsigned i = 0; i != NumRegs; ++i) {
-      assert(Reg < Regs.size() && "Mismatch in # registers expected");
-      Ops.push_back(DAG.getRegister(Regs[Reg++], RegisterVT));
-    }
-  }
-}
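-
-// Worked example (illustrative, not part of the original source) of the flag
-// word built above: a REGDEF (code 2) covering three registers encodes as
-// 2 | (3 << 3) = 0x1A; if it is additionally tied to operand 1, the flag
-// becomes 0x80000000 | (1 << 16) | 0x1A = 0x8001001A.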
-
-/// isAllocatableRegister - If the specified register is safe to allocate,
-/// i.e. it isn't a stack pointer or some other special register, return the
-/// register class for the register.  Otherwise, return null.
-static const TargetRegisterClass *
-isAllocatableRegister(unsigned Reg, MachineFunction &MF,
-                      const TargetLowering &TLI,
-                      const TargetRegisterInfo *TRI) {
-  EVT FoundVT = MVT::Other;
-  const TargetRegisterClass *FoundRC = 0;
-  for (TargetRegisterInfo::regclass_iterator RCI = TRI->regclass_begin(),
-       E = TRI->regclass_end(); RCI != E; ++RCI) {
-    EVT ThisVT = MVT::Other;
-
-    const TargetRegisterClass *RC = *RCI;
-    // If none of the value types for this register class are valid, we
-    // can't use it.  For example, 64-bit reg classes on 32-bit targets.
-    for (TargetRegisterClass::vt_iterator I = RC->vt_begin(), E = RC->vt_end();
-         I != E; ++I) {
-      if (TLI.isTypeLegal(*I)) {
-        // If we have already found this register in a different register class,
-        // choose the one with the largest VT specified.  For example, on
-        // PowerPC, we favor f64 register classes over f32.
-        if (FoundVT == MVT::Other || FoundVT.bitsLT(*I)) {
-          ThisVT = *I;
-          break;
-        }
-      }
-    }
-
-    if (ThisVT == MVT::Other) continue;
-
-    // NOTE: This isn't ideal.  In particular, this might allocate the
-    // frame pointer in functions that need it (because it hasn't been taken
-    // out of the allocation order yet, since a variable-sized allocation
-    // hasn't been seen).  This is a slight code pessimization, but should
-    // still work.
-    for (TargetRegisterClass::iterator I = RC->allocation_order_begin(MF),
-         E = RC->allocation_order_end(MF); I != E; ++I)
-      if (*I == Reg) {
-        // We found a matching register class.  Keep looking at others in case
-        // we find one with larger registers that this physreg is also in.
-        FoundRC = RC;
-        FoundVT = ThisVT;
-        break;
-      }
-  }
-  return FoundRC;
-}
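-
-// Illustrative note (not part of the original source): registers the target
-// reserves -- the stack pointer being the typical example -- do not appear in
-// any class's allocation order, so the scan above never matches them and
-// isAllocatableRegister returns null for them.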
-
-
-namespace llvm {
-/// AsmOperandInfo - This contains information for each constraint that we are
-/// lowering.
-class VISIBILITY_HIDDEN SDISelAsmOperandInfo :
-    public TargetLowering::AsmOperandInfo {
-public:
-  /// CallOperand - If this is the result output operand or a clobber,
-  /// this is null; otherwise it is the incoming operand to the CallInst.
-  /// This gets modified as the asm is processed.
-  SDValue CallOperand;
-
-  /// AssignedRegs - If this is a register or register class operand, this
-  /// contains the set of registers corresponding to the operand.
-  RegsForValue AssignedRegs;
-
-  explicit SDISelAsmOperandInfo(const InlineAsm::ConstraintInfo &info)
-    : TargetLowering::AsmOperandInfo(info), CallOperand(0,0) {
-  }
-
-  /// MarkAllocatedRegs - Once AssignedRegs is set, mark the assigned registers
-  /// busy in OutputRegs/InputRegs.
-  void MarkAllocatedRegs(bool isOutReg, bool isInReg,
-                         std::set<unsigned> &OutputRegs,
-                         std::set<unsigned> &InputRegs,
-                         const TargetRegisterInfo &TRI) const {
-    if (isOutReg) {
-      for (unsigned i = 0, e = AssignedRegs.Regs.size(); i != e; ++i)
-        MarkRegAndAliases(AssignedRegs.Regs[i], OutputRegs, TRI);
-    }
-    if (isInReg) {
-      for (unsigned i = 0, e = AssignedRegs.Regs.size(); i != e; ++i)
-        MarkRegAndAliases(AssignedRegs.Regs[i], InputRegs, TRI);
-    }
-  }
-
-  /// getCallOperandValEVT - Return the EVT of the Value* that this operand
-  /// corresponds to.  If there is no Value* for this operand, it returns
-  /// MVT::Other.
-  EVT getCallOperandValEVT(LLVMContext &Context, 
-                           const TargetLowering &TLI,
-                           const TargetData *TD) const {
-    if (CallOperandVal == 0) return MVT::Other;
-
-    if (isa<BasicBlock>(CallOperandVal))
-      return TLI.getPointerTy();
-
-    const llvm::Type *OpTy = CallOperandVal->getType();
-
-    // If this is an indirect operand, the operand is a pointer to the
-    // accessed type.
-    if (isIndirect)
-      OpTy = cast<PointerType>(OpTy)->getElementType();
-
-    // If OpTy is not a single value, it may be a struct/union that we
-    // can tile with integers.
-    if (!OpTy->isSingleValueType() && OpTy->isSized()) {
-      unsigned BitSize = TD->getTypeSizeInBits(OpTy);
-      switch (BitSize) {
-      default: break;
-      case 1:
-      case 8:
-      case 16:
-      case 32:
-      case 64:
-      case 128:
-        OpTy = IntegerType::get(Context, BitSize);
-        break;
-      }
-    }
-
-    return TLI.getValueType(OpTy, true);
-  }
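-
-  // Worked example (illustrative, not part of the original source): for an
-  // indirect operand whose Value* has type {i16, i16}*, OpTy becomes the
-  // pointee {i16, i16}; its 32-bit size hits the tiling switch above, so the
-  // operand is treated as an i32 and this returns the EVT for MVT::i32.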
-
-private:
-  /// MarkRegAndAliases - Mark the specified register and all aliases in the
-  /// specified set.
-  static void MarkRegAndAliases(unsigned Reg, std::set<unsigned> &Regs,
-                                const TargetRegisterInfo &TRI) {
-    assert(TargetRegisterInfo::isPhysicalRegister(Reg) && "Isn't a physreg");
-    Regs.insert(Reg);
-    if (const unsigned *Aliases = TRI.getAliasSet(Reg))
-      for (; *Aliases; ++Aliases)
-        Regs.insert(*Aliases);
-  }
-};
-} // end llvm namespace.
-
-
-/// GetRegistersForValue - Assign registers (virtual or physical) for the
-/// specified operand.  We prefer to assign virtual registers, to allow the
-/// register allocator to handle the assignment process.  However, if the asm
-/// uses features that we can't model on machineinstrs, we have SDISel do the
-/// allocation.  This produces generally horrible, but correct, code.
-///
-///   OpInfo describes the operand.
-///   Input and OutputRegs are the set of already allocated physical registers.
-///
-void SelectionDAGLowering::
-GetRegistersForValue(SDISelAsmOperandInfo &OpInfo,
-                     std::set<unsigned> &OutputRegs,
-                     std::set<unsigned> &InputRegs) {
-  LLVMContext &Context = FuncInfo.Fn->getContext();
-
-  // Compute whether this value requires an input register, an output register,
-  // or both.
-  bool isOutReg = false;
-  bool isInReg = false;
-  switch (OpInfo.Type) {
-  case InlineAsm::isOutput:
-    isOutReg = true;
-
-    // If there is an input constraint that matches this, we need to reserve
-    // the input register so no other inputs allocate to it.
-    isInReg = OpInfo.hasMatchingInput();
-    break;
-  case InlineAsm::isInput:
-    isInReg = true;
-    isOutReg = false;
-    break;
-  case InlineAsm::isClobber:
-    isOutReg = true;
-    isInReg = true;
-    break;
-  }
-
-
-  MachineFunction &MF = DAG.getMachineFunction();
-  SmallVector<unsigned, 4> Regs;
-
-  // If this is a constraint for a single physreg, or a constraint for a
-  // register class, find it.
-  std::pair<unsigned, const TargetRegisterClass*> PhysReg =
-    TLI.getRegForInlineAsmConstraint(OpInfo.ConstraintCode,
-                                     OpInfo.ConstraintVT);
-
-  unsigned NumRegs = 1;
-  if (OpInfo.ConstraintVT != MVT::Other) {
-    // If this is an FP input in an integer register (or vice versa), insert a
-    // bitcast of the input value.  More generally, handle any case where the
-    // input value disagrees with the register class we plan to stick this in.
-    if (OpInfo.Type == InlineAsm::isInput &&
-        PhysReg.second && !PhysReg.second->hasType(OpInfo.ConstraintVT)) {
-      // Try to convert to the first EVT that the reg class contains.  If the
-      // types are identical size, use a bitcast to convert (e.g. two differing
-      // vector types).
-      EVT RegVT = *PhysReg.second->vt_begin();
-      if (RegVT.getSizeInBits() == OpInfo.ConstraintVT.getSizeInBits()) {
-        OpInfo.CallOperand = DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(),
-                                         RegVT, OpInfo.CallOperand);
-        OpInfo.ConstraintVT = RegVT;
-      } else if (RegVT.isInteger() && OpInfo.ConstraintVT.isFloatingPoint()) {
-        // If the input is a FP value and we want it in FP registers, do a
-        // bitcast to the corresponding integer type.  This turns an f64 value
-        // into i64, which can be passed with two i32 values on a 32-bit
-        // machine.
-        RegVT = EVT::getIntegerVT(Context, 
-                                  OpInfo.ConstraintVT.getSizeInBits());
-        OpInfo.CallOperand = DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(),
-                                         RegVT, OpInfo.CallOperand);
-        OpInfo.ConstraintVT = RegVT;
-      }
-    }
-
-    NumRegs = TLI.getNumRegisters(Context, OpInfo.ConstraintVT);
-  }
-
-  EVT RegVT;
-  EVT ValueVT = OpInfo.ConstraintVT;
-
-  // If this is a constraint for a specific physical register, like {r17},
-  // assign it now.
-  if (unsigned AssignedReg = PhysReg.first) {
-    const TargetRegisterClass *RC = PhysReg.second;
-    if (OpInfo.ConstraintVT == MVT::Other)
-      ValueVT = *RC->vt_begin();
-
-    // Get the actual register value type.  This is important, because the user
-    // may have asked for (e.g.) the AX register in i32 type.  We need to
-    // remember that AX is actually i16 to get the right extension.
-    RegVT = *RC->vt_begin();
-
-    // This is an explicit reference to a physical register.
-    Regs.push_back(AssignedReg);
-
-    // If this is an expanded reference, add the rest of the regs to Regs.
-    if (NumRegs != 1) {
-      TargetRegisterClass::iterator I = RC->begin();
-      for (; *I != AssignedReg; ++I)
-        assert(I != RC->end() && "Didn't find reg!");
-
-      // Already added the first reg.
-      --NumRegs; ++I;
-      for (; NumRegs; --NumRegs, ++I) {
-        assert(I != RC->end() && "Ran out of registers to allocate!");
-        Regs.push_back(*I);
-      }
-    }
-    OpInfo.AssignedRegs = RegsForValue(TLI, Regs, RegVT, ValueVT);
-    const TargetRegisterInfo *TRI = DAG.getTarget().getRegisterInfo();
-    OpInfo.MarkAllocatedRegs(isOutReg, isInReg, OutputRegs, InputRegs, *TRI);
-    return;
-  }
-
-  // Otherwise, if this was a reference to an LLVM register class, create vregs
-  // for this reference.
-  if (const TargetRegisterClass *RC = PhysReg.second) {
-    RegVT = *RC->vt_begin();
-    if (OpInfo.ConstraintVT == MVT::Other)
-      ValueVT = RegVT;
-
-    // Create the appropriate number of virtual registers.
-    MachineRegisterInfo &RegInfo = MF.getRegInfo();
-    for (; NumRegs; --NumRegs)
-      Regs.push_back(RegInfo.createVirtualRegister(RC));
-
-    OpInfo.AssignedRegs = RegsForValue(TLI, Regs, RegVT, ValueVT);
-    return;
-  }
-  
-  // This is a reference to a register class that doesn't directly correspond
-  // to an LLVM register class.  Allocate NumRegs consecutive, available,
-  // registers from the class.
-  std::vector<unsigned> RegClassRegs
-    = TLI.getRegClassForInlineAsmConstraint(OpInfo.ConstraintCode,
-                                            OpInfo.ConstraintVT);
-
-  const TargetRegisterInfo *TRI = DAG.getTarget().getRegisterInfo();
-  unsigned NumAllocated = 0;
-  for (unsigned i = 0, e = RegClassRegs.size(); i != e; ++i) {
-    unsigned Reg = RegClassRegs[i];
-    // See if this register is available.
-    if ((isOutReg && OutputRegs.count(Reg)) ||   // Already used.
-        (isInReg  && InputRegs.count(Reg))) {    // Already used.
-      // Make sure we find consecutive registers.
-      NumAllocated = 0;
-      continue;
-    }
-
-    // Check to see if this register is allocatable (i.e. don't give out the
-    // stack pointer).
-    const TargetRegisterClass *RC = isAllocatableRegister(Reg, MF, TLI, TRI);
-    if (!RC) {        // Couldn't allocate this register.
-      // Reset NumAllocated to make sure we return consecutive registers.
-      NumAllocated = 0;
-      continue;
-    }
-
-    // Okay, this register is good, we can use it.
-    ++NumAllocated;
-
-    // If we allocated enough consecutive registers, succeed.
-    if (NumAllocated == NumRegs) {
-      unsigned RegStart = (i-NumAllocated)+1;
-      unsigned RegEnd   = i+1;
-      // Mark all of the allocated registers used.
-      for (unsigned i = RegStart; i != RegEnd; ++i)
-        Regs.push_back(RegClassRegs[i]);
-
-      OpInfo.AssignedRegs = RegsForValue(TLI, Regs, *RC->vt_begin(),
-                                         OpInfo.ConstraintVT);
-      OpInfo.MarkAllocatedRegs(isOutReg, isInReg, OutputRegs, InputRegs, *TRI);
-      return;
-    }
-  }
-
-  // Otherwise, we couldn't allocate enough registers for this.
-}
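-
-// Worked example (illustrative, not part of the original source) of the
-// consecutive-register scan above: with NumRegs == 2, suppose the registers
-// at indices 3 and 4 of RegClassRegs are both free and allocatable.  At
-// i == 4, NumAllocated reaches 2, so RegStart = (4-2)+1 = 3 and RegEnd = 5,
-// and exactly RegClassRegs[3] and RegClassRegs[4] are handed out.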
-
-/// hasInlineAsmMemConstraint - Return true if the inline asm instruction being
-/// processed uses a memory 'm' constraint.
-static bool
-hasInlineAsmMemConstraint(std::vector<InlineAsm::ConstraintInfo> &CInfos,
-                          const TargetLowering &TLI) {
-  for (unsigned i = 0, e = CInfos.size(); i != e; ++i) {
-    InlineAsm::ConstraintInfo &CI = CInfos[i];
-    for (unsigned j = 0, ee = CI.Codes.size(); j != ee; ++j) {
-      TargetLowering::ConstraintType CType = TLI.getConstraintType(CI.Codes[j]);
-      if (CType == TargetLowering::C_Memory)
-        return true;
-    }
-    
-    // Indirect operands access memory.
-    if (CI.isIndirect)
-      return true;
-  }
-
-  return false;
-}
-
-/// visitInlineAsm - Handle a call to an InlineAsm object.
-///
-void SelectionDAGLowering::visitInlineAsm(CallSite CS) {
-  InlineAsm *IA = cast<InlineAsm>(CS.getCalledValue());
-
-  /// ConstraintOperands - Information about all of the constraints.
-  std::vector<SDISelAsmOperandInfo> ConstraintOperands;
-
-  std::set<unsigned> OutputRegs, InputRegs;
-
-  // Do a prepass over the constraints, canonicalizing them, and building up the
-  // ConstraintOperands list.
-  std::vector<InlineAsm::ConstraintInfo>
-    ConstraintInfos = IA->ParseConstraints();
-
-  bool hasMemory = hasInlineAsmMemConstraint(ConstraintInfos, TLI);
-  
-  SDValue Chain, Flag;
-  
-  // We won't need to flush pending loads if this asm doesn't touch
-  // memory and is nonvolatile.
-  if (hasMemory || IA->hasSideEffects())
-    Chain = getRoot();
-  else
-    Chain = DAG.getRoot();
-
-  unsigned ArgNo = 0;   // ArgNo - The argument of the CallInst.
-  unsigned ResNo = 0;   // ResNo - The result number of the next output.
-  for (unsigned i = 0, e = ConstraintInfos.size(); i != e; ++i) {
-    ConstraintOperands.push_back(SDISelAsmOperandInfo(ConstraintInfos[i]));
-    SDISelAsmOperandInfo &OpInfo = ConstraintOperands.back();
-
-    EVT OpVT = MVT::Other;
-
-    // Compute the value type for each operand.
-    switch (OpInfo.Type) {
-    case InlineAsm::isOutput:
-      // Indirect outputs just consume an argument.
-      if (OpInfo.isIndirect) {
-        OpInfo.CallOperandVal = CS.getArgument(ArgNo++);
-        break;
-      }
-
-      // The return value of the call is this value.  As such, there is no
-      // corresponding argument.
-      assert(CS.getType() != Type::getVoidTy(*DAG.getContext()) &&
-             "Bad inline asm!");
-      if (const StructType *STy = dyn_cast<StructType>(CS.getType())) {
-        OpVT = TLI.getValueType(STy->getElementType(ResNo));
-      } else {
-        assert(ResNo == 0 && "Asm only has one result!");
-        OpVT = TLI.getValueType(CS.getType());
-      }
-      ++ResNo;
-      break;
-    case InlineAsm::isInput:
-      OpInfo.CallOperandVal = CS.getArgument(ArgNo++);
-      break;
-    case InlineAsm::isClobber:
-      // Nothing to do.
-      break;
-    }
-
-    // If this is an input or an indirect output, process the call argument.
-    // BasicBlocks are labels, currently appearing only in asm statements.
-    if (OpInfo.CallOperandVal) {
-      // Strip bitcasts, if any.  This mostly comes up for functions.
-      OpInfo.CallOperandVal = OpInfo.CallOperandVal->stripPointerCasts();
-
-      if (BasicBlock *BB = dyn_cast<BasicBlock>(OpInfo.CallOperandVal)) {
-        OpInfo.CallOperand = DAG.getBasicBlock(FuncInfo.MBBMap[BB]);
-      } else {
-        OpInfo.CallOperand = getValue(OpInfo.CallOperandVal);
-      }
-
-      OpVT = OpInfo.getCallOperandValEVT(*DAG.getContext(), TLI, TD);
-    }
-
-    OpInfo.ConstraintVT = OpVT;
-  }
-
-  // Second pass over the constraints: compute which constraint option to use
-  // and assign registers to constraints that want a specific physreg.
-  for (unsigned i = 0, e = ConstraintInfos.size(); i != e; ++i) {
-    SDISelAsmOperandInfo &OpInfo = ConstraintOperands[i];
-
-    // If this is an output operand with a matching input operand, look up the
-    // matching input. If their types mismatch, e.g. one is an integer, the
-    // other is floating point, or their sizes are different, flag it as an
-    // error.
-    if (OpInfo.hasMatchingInput()) {
-      SDISelAsmOperandInfo &Input = ConstraintOperands[OpInfo.MatchingInput];
-      if (OpInfo.ConstraintVT != Input.ConstraintVT) {
-        if ((OpInfo.ConstraintVT.isInteger() !=
-             Input.ConstraintVT.isInteger()) ||
-            (OpInfo.ConstraintVT.getSizeInBits() !=
-             Input.ConstraintVT.getSizeInBits())) {
-          llvm_report_error("Unsupported asm: input constraint"
-                            " with a matching output constraint of incompatible"
-                            " type!");
-        }
-        Input.ConstraintVT = OpInfo.ConstraintVT;
-      }
-    }
-
-    // Compute the constraint code and ConstraintType to use.
-    TLI.ComputeConstraintToUse(OpInfo, OpInfo.CallOperand, hasMemory, &DAG);
-
-    // If this is a memory input, and if the operand is not indirect, do what
-    // we need to provide an address for the memory input.
-    if (OpInfo.ConstraintType == TargetLowering::C_Memory &&
-        !OpInfo.isIndirect) {
-      assert(OpInfo.Type == InlineAsm::isInput &&
-             "Can only indirectify direct input operands!");
-
-      // Memory operands really want the address of the value.  If we don't
-      // have an indirect input, put it in the constant pool if we can;
-      // otherwise spill it to a stack slot.
-
-      // If the operand is a float, integer, or vector constant, spill to a
-      // constant pool entry to get its address.
-      Value *OpVal = OpInfo.CallOperandVal;
-      if (isa<ConstantFP>(OpVal) || isa<ConstantInt>(OpVal) ||
-          isa<ConstantVector>(OpVal)) {
-        OpInfo.CallOperand = DAG.getConstantPool(cast<Constant>(OpVal),
-                                                 TLI.getPointerTy());
-      } else {
-        // Otherwise, create a stack slot and emit a store to it before the
-        // asm.
-        const Type *Ty = OpVal->getType();
-        uint64_t TySize = TLI.getTargetData()->getTypeAllocSize(Ty);
-        unsigned Align  = TLI.getTargetData()->getPrefTypeAlignment(Ty);
-        MachineFunction &MF = DAG.getMachineFunction();
-        int SSFI = MF.getFrameInfo()->CreateStackObject(TySize, Align);
-        SDValue StackSlot = DAG.getFrameIndex(SSFI, TLI.getPointerTy());
-        Chain = DAG.getStore(Chain, getCurDebugLoc(),
-                             OpInfo.CallOperand, StackSlot, NULL, 0);
-        OpInfo.CallOperand = StackSlot;
-      }
-
-      // There is no longer a Value* corresponding to this operand.
-      OpInfo.CallOperandVal = 0;
-      // It is now an indirect operand.
-      OpInfo.isIndirect = true;
-    }
-
-    // If this constraint is for a specific register, allocate it before
-    // anything else.
-    if (OpInfo.ConstraintType == TargetLowering::C_Register)
-      GetRegistersForValue(OpInfo, OutputRegs, InputRegs);
-  }
-  ConstraintInfos.clear();
-
-
-  // Third pass - Loop over all of the operands, assigning virtual or physical
-  // registers to register class operands.
-  for (unsigned i = 0, e = ConstraintOperands.size(); i != e; ++i) {
-    SDISelAsmOperandInfo &OpInfo = ConstraintOperands[i];
-
-    // C_Register operands have already been allocated, Other/Memory don't need
-    // to be.
-    if (OpInfo.ConstraintType == TargetLowering::C_RegisterClass)
-      GetRegistersForValue(OpInfo, OutputRegs, InputRegs);
-  }
-
-  // AsmNodeOperands - The operands for the ISD::INLINEASM node.
-  std::vector<SDValue> AsmNodeOperands;
-  AsmNodeOperands.push_back(SDValue());  // reserve space for input chain
-  AsmNodeOperands.push_back(
-          DAG.getTargetExternalSymbol(IA->getAsmString().c_str(), MVT::Other));
-
-
-  // Loop over all of the inputs, copying the operand values into the
-  // appropriate registers and processing the output regs.
-  RegsForValue RetValRegs;
-
-  // IndirectStoresToEmit - The set of stores to emit after the inline asm node.
-  std::vector<std::pair<RegsForValue, Value*> > IndirectStoresToEmit;
-
-  for (unsigned i = 0, e = ConstraintOperands.size(); i != e; ++i) {
-    SDISelAsmOperandInfo &OpInfo = ConstraintOperands[i];
-
-    switch (OpInfo.Type) {
-    case InlineAsm::isOutput: {
-      if (OpInfo.ConstraintType != TargetLowering::C_RegisterClass &&
-          OpInfo.ConstraintType != TargetLowering::C_Register) {
-        // Memory output, or 'other' output (e.g. 'X' constraint).
-        assert(OpInfo.isIndirect && "Memory output must be indirect operand");
-
-        // Add information to the INLINEASM node to know about this output.
-        unsigned ResOpType = 4/*MEM*/ | (1<<3);
-        AsmNodeOperands.push_back(DAG.getTargetConstant(ResOpType,
-                                                        TLI.getPointerTy()));
-        AsmNodeOperands.push_back(OpInfo.CallOperand);
-        break;
-      }
-
-      // Otherwise, this is a register or register class output.
-
-      // Copy the output from the appropriate register.  Find a register that
-      // we can use.
-      if (OpInfo.AssignedRegs.Regs.empty()) {
-        llvm_report_error("Couldn't allocate output reg for"
-                          " constraint '" + OpInfo.ConstraintCode + "'!");
-      }
-
-      // If this is an indirect operand, store through the pointer after the
-      // asm.
-      if (OpInfo.isIndirect) {
-        IndirectStoresToEmit.push_back(std::make_pair(OpInfo.AssignedRegs,
-                                                      OpInfo.CallOperandVal));
-      } else {
-        // This is the result value of the call.
-        assert(CS.getType() != Type::getVoidTy(*DAG.getContext()) &&
-               "Bad inline asm!");
-        // Concatenate this output onto the outputs list.
-        RetValRegs.append(OpInfo.AssignedRegs);
-      }
-
-      // Add information to the INLINEASM node to know that this register is
-      // set.
-      OpInfo.AssignedRegs.AddInlineAsmOperands(OpInfo.isEarlyClobber ?
-                                               6 /* EARLYCLOBBER REGDEF */ :
-                                               2 /* REGDEF */ ,
-                                               false,
-                                               0,
-                                               DAG, AsmNodeOperands);
-      break;
-    }
-    case InlineAsm::isInput: {
-      SDValue InOperandVal = OpInfo.CallOperand;
-
-      if (OpInfo.isMatchingInputConstraint()) {   // Matching constraint?
-        // If this is required to match an output register we have already set,
-        // just use its register.
-        unsigned OperandNo = OpInfo.getMatchedOperand();
-
-        // Scan until we find the definition we already emitted of this operand.
-        // When we find it, create a RegsForValue operand.
-        unsigned CurOp = 2;  // The first operand.
-        for (; OperandNo; --OperandNo) {
-          // Advance to the next operand.
-          unsigned OpFlag =
-            cast<ConstantSDNode>(AsmNodeOperands[CurOp])->getZExtValue();
-          assert(((OpFlag & 7) == 2 /*REGDEF*/ ||
-                  (OpFlag & 7) == 6 /*EARLYCLOBBER REGDEF*/ ||
-                  (OpFlag & 7) == 4 /*MEM*/) &&
-                 "Skipped past definitions?");
-          CurOp += InlineAsm::getNumOperandRegisters(OpFlag)+1;
-        }
-
-        unsigned OpFlag =
-          cast<ConstantSDNode>(AsmNodeOperands[CurOp])->getZExtValue();
-        if ((OpFlag & 7) == 2 /*REGDEF*/
-            || (OpFlag & 7) == 6 /* EARLYCLOBBER REGDEF */) {
-          // Add (OpFlag&0xffff)>>3 registers to MatchedRegs.
-          if (OpInfo.isIndirect) {
-            llvm_report_error("Don't know how to handle tied indirect "
-                              "register inputs yet!");
-          }
-          RegsForValue MatchedRegs;
-          MatchedRegs.TLI = &TLI;
-          MatchedRegs.ValueVTs.push_back(InOperandVal.getValueType());
-          EVT RegVT = AsmNodeOperands[CurOp+1].getValueType();
-          MatchedRegs.RegVTs.push_back(RegVT);
-          MachineRegisterInfo &RegInfo = DAG.getMachineFunction().getRegInfo();
-          for (unsigned i = 0, e = InlineAsm::getNumOperandRegisters(OpFlag);
-               i != e; ++i)
-            MatchedRegs.Regs.
-              push_back(RegInfo.createVirtualRegister(TLI.getRegClassFor(RegVT)));
-
-          // Use the produced MatchedRegs object to copy the operand value
-          // into the newly-created virtual registers.
-          MatchedRegs.getCopyToRegs(InOperandVal, DAG, getCurDebugLoc(),
-                                    Chain, &Flag);
-          MatchedRegs.AddInlineAsmOperands(1 /*REGUSE*/,
-                                           true, OpInfo.getMatchedOperand(),
-                                           DAG, AsmNodeOperands);
-          break;
-        } else {
-          assert(((OpFlag & 7) == 4) && "Unknown matching constraint!");
-          assert((InlineAsm::getNumOperandRegisters(OpFlag)) == 1 &&
-                 "Unexpected number of operands");
-          // Add information to the INLINEASM node to know about this input.
-          // See InlineAsm.h isUseOperandTiedToDef.
-          OpFlag |= 0x80000000 | (OpInfo.getMatchedOperand() << 16);
-          AsmNodeOperands.push_back(DAG.getTargetConstant(OpFlag,
-                                                          TLI.getPointerTy()));
-          AsmNodeOperands.push_back(AsmNodeOperands[CurOp+1]);
-          break;
-        }
-      }
-
-      if (OpInfo.ConstraintType == TargetLowering::C_Other) {
-        assert(!OpInfo.isIndirect &&
-               "Don't know how to handle indirect other inputs yet!");
-
-        std::vector<SDValue> Ops;
-        TLI.LowerAsmOperandForConstraint(InOperandVal, OpInfo.ConstraintCode[0],
-                                         hasMemory, Ops, DAG);
-        if (Ops.empty()) {
-          llvm_report_error("Invalid operand for inline asm"
-                            " constraint '" + OpInfo.ConstraintCode + "'!");
-        }
-
-        // Add information to the INLINEASM node to know about this input.
-        unsigned ResOpType = 3 /*IMM*/ | (Ops.size() << 3);
-        AsmNodeOperands.push_back(DAG.getTargetConstant(ResOpType,
-                                                        TLI.getPointerTy()));
-        AsmNodeOperands.insert(AsmNodeOperands.end(), Ops.begin(), Ops.end());
-        break;
-      } else if (OpInfo.ConstraintType == TargetLowering::C_Memory) {
-        assert(OpInfo.isIndirect && "Operand must be indirect to be a mem!");
-        assert(InOperandVal.getValueType() == TLI.getPointerTy() &&
-               "Memory operands expect pointer values");
-
-        // Add information to the INLINEASM node to know about this input.
-        unsigned ResOpType = 4/*MEM*/ | (1<<3);
-        AsmNodeOperands.push_back(DAG.getTargetConstant(ResOpType,
-                                                        TLI.getPointerTy()));
-        AsmNodeOperands.push_back(InOperandVal);
-        break;
-      }
-
-      assert((OpInfo.ConstraintType == TargetLowering::C_RegisterClass ||
-              OpInfo.ConstraintType == TargetLowering::C_Register) &&
-             "Unknown constraint type!");
-      assert(!OpInfo.isIndirect &&
-             "Don't know how to handle indirect register inputs yet!");
-
-      // Copy the input into the appropriate registers.
-      if (OpInfo.AssignedRegs.Regs.empty()) {
-        llvm_report_error("Couldn't allocate input reg for"
-                          " constraint '"+ OpInfo.ConstraintCode +"'!");
-      }
-
-      OpInfo.AssignedRegs.getCopyToRegs(InOperandVal, DAG, getCurDebugLoc(),
-                                        Chain, &Flag);
-
-      OpInfo.AssignedRegs.AddInlineAsmOperands(1/*REGUSE*/, false, 0,
-                                               DAG, AsmNodeOperands);
-      break;
-    }
-    case InlineAsm::isClobber: {
-      // Add the clobbered value to the operand list, so that the register
-      // allocator is aware that the physreg got clobbered.
-      if (!OpInfo.AssignedRegs.Regs.empty())
-        OpInfo.AssignedRegs.AddInlineAsmOperands(6 /* EARLYCLOBBER REGDEF */,
-                                                 false, 0, DAG,AsmNodeOperands);
-      break;
-    }
-    }
-  }
-
-  // Finish up input operands.
-  AsmNodeOperands[0] = Chain;
-  if (Flag.getNode()) AsmNodeOperands.push_back(Flag);
-
-  Chain = DAG.getNode(ISD::INLINEASM, getCurDebugLoc(),
-                      DAG.getVTList(MVT::Other, MVT::Flag),
-                      &AsmNodeOperands[0], AsmNodeOperands.size());
-  Flag = Chain.getValue(1);
-
-  // If this asm returns a register value, copy the result from that register
-  // and set it as the value of the call.
-  if (!RetValRegs.Regs.empty()) {
-    SDValue Val = RetValRegs.getCopyFromRegs(DAG, getCurDebugLoc(),
-                                             Chain, &Flag);
-
-    // FIXME: Why don't we do this for inline asms with MRVs?
-    if (CS.getType()->isSingleValueType() && CS.getType()->isSized()) {
-      EVT ResultType = TLI.getValueType(CS.getType());
-
-      // If any of the results of the inline asm is a vector, it may have the
-      // wrong width/num elts.  This can happen for register classes that can
-      // contain multiple different value types.  The preg or vreg allocated may
-      // not have the same VT as was expected.  Convert it to the right type
-      // with bit_convert.
-      if (ResultType != Val.getValueType() && Val.getValueType().isVector()) {
-        Val = DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(),
-                          ResultType, Val);
-
-      } else if (ResultType != Val.getValueType() &&
-                 ResultType.isInteger() && Val.getValueType().isInteger()) {
-        // If a result value was tied to an input value, the computed result may
-        // have a wider width than the expected result.  Extract the relevant
-        // portion.
-        Val = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(), ResultType, Val);
-      }
-
-      assert(ResultType == Val.getValueType() && "Asm result value mismatch!");
-    }
-
-    setValue(CS.getInstruction(), Val);
-    // Don't need to use this as a chain in this case.
-    if (!IA->hasSideEffects() && !hasMemory && IndirectStoresToEmit.empty())
-      return;
-  }
-
-  std::vector<std::pair<SDValue, Value*> > StoresToEmit;
-
-  // Process indirect outputs, first output all of the flagged copies out of
-  // physregs.
-  for (unsigned i = 0, e = IndirectStoresToEmit.size(); i != e; ++i) {
-    RegsForValue &OutRegs = IndirectStoresToEmit[i].first;
-    Value *Ptr = IndirectStoresToEmit[i].second;
-    SDValue OutVal = OutRegs.getCopyFromRegs(DAG, getCurDebugLoc(),
-                                             Chain, &Flag);
-    StoresToEmit.push_back(std::make_pair(OutVal, Ptr));
-
-  }
-
-  // Emit the non-flagged stores from the physregs.
-  SmallVector<SDValue, 8> OutChains;
-  for (unsigned i = 0, e = StoresToEmit.size(); i != e; ++i)
-    OutChains.push_back(DAG.getStore(Chain, getCurDebugLoc(),
-                                    StoresToEmit[i].first,
-                                    getValue(StoresToEmit[i].second),
-                                    StoresToEmit[i].second, 0));
-  if (!OutChains.empty())
-    Chain = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(), MVT::Other,
-                        &OutChains[0], OutChains.size());
-  DAG.setRoot(Chain);
-}
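-
-// Illustrative sketch (not part of the original source) of the INLINEASM
-// operand list built above, for an asm with one register output and one
-// memory input:
-//
-//   [0] input chain
-//   [1] target external symbol holding the asm string
-//   [2] flag word 2 /*REGDEF*/ | (1 << 3)  -- one output register follows
-//   [3] the output register
-//   [4] flag word 4 /*MEM*/ | (1 << 3)     -- one memory operand follows
-//   [5] the address value
-//
-// with the glue Flag appended last when present.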
-
-
-void SelectionDAGLowering::visitMalloc(MallocInst &I) {
-  SDValue Src = getValue(I.getOperand(0));
-
-  // Scale up by the type size in the original i32 type width.  Various
-  // mid-level optimizers may make assumptions about demanded bits etc. based
-  // on the i32-ness of the size operand: we do not want to promote to i64
-  // and then multiply on 64-bit targets.
-  // FIXME: Malloc inst should go away: PR715.
-  uint64_t ElementSize = TD->getTypeAllocSize(I.getType()->getElementType());
-  if (ElementSize != 1) {
-    // Src is always 32-bits, make sure the constant fits.
-    assert(Src.getValueType() == MVT::i32);
-    ElementSize = (uint32_t)ElementSize;
-    Src = DAG.getNode(ISD::MUL, getCurDebugLoc(), Src.getValueType(),
-                      Src, DAG.getConstant(ElementSize, Src.getValueType()));
-  }
-  
-  EVT IntPtr = TLI.getPointerTy();
-
-  if (IntPtr.bitsLT(Src.getValueType()))
-    Src = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(), IntPtr, Src);
-  else if (IntPtr.bitsGT(Src.getValueType()))
-    Src = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(), IntPtr, Src);
-
-  TargetLowering::ArgListTy Args;
-  TargetLowering::ArgListEntry Entry;
-  Entry.Node = Src;
-  Entry.Ty = TLI.getTargetData()->getIntPtrType(*DAG.getContext());
-  Args.push_back(Entry);
-
-  bool isTailCall = PerformTailCallOpt &&
-                    isInTailCallPosition(&I, Attribute::None, TLI);
-  std::pair<SDValue,SDValue> Result =
-    TLI.LowerCallTo(getRoot(), I.getType(), false, false, false, false,
-                    0, CallingConv::C, isTailCall,
-                    /*isReturnValueUsed=*/true,
-                    DAG.getExternalSymbol("malloc", IntPtr),
-                    Args, DAG, getCurDebugLoc());
-  if (Result.first.getNode())
-    setValue(&I, Result.first);  // Pointers always fit in registers
-  if (Result.second.getNode())
-    DAG.setRoot(Result.second);
-}
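-
-// Worked example (illustrative, not part of the original source): for a
-// malloc of {i32, i32} with an i32 count %n, the element's alloc size is 8,
-// so Src becomes the i32 node (mul %n, 8), which is then truncated or
-// zero-extended to the target's pointer width before being passed to the
-// "malloc" libcall.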
-
-void SelectionDAGLowering::visitFree(FreeInst &I) {
-  TargetLowering::ArgListTy Args;
-  TargetLowering::ArgListEntry Entry;
-  Entry.Node = getValue(I.getOperand(0));
-  Entry.Ty = TLI.getTargetData()->getIntPtrType(*DAG.getContext());
-  Args.push_back(Entry);
-  EVT IntPtr = TLI.getPointerTy();
-  bool isTailCall = PerformTailCallOpt &&
-                    isInTailCallPosition(&I, Attribute::None, TLI);
-  std::pair<SDValue,SDValue> Result =
-    TLI.LowerCallTo(getRoot(), Type::getVoidTy(*DAG.getContext()),
-                    false, false, false, false,
-                    0, CallingConv::C, isTailCall,
-                    /*isReturnValueUsed=*/true,
-                    DAG.getExternalSymbol("free", IntPtr), Args, DAG,
-                    getCurDebugLoc());
-  if (Result.second.getNode())
-    DAG.setRoot(Result.second);
-}
-
-void SelectionDAGLowering::visitVAStart(CallInst &I) {
-  DAG.setRoot(DAG.getNode(ISD::VASTART, getCurDebugLoc(),
-                          MVT::Other, getRoot(),
-                          getValue(I.getOperand(1)),
-                          DAG.getSrcValue(I.getOperand(1))));
-}
-
-void SelectionDAGLowering::visitVAArg(VAArgInst &I) {
-  SDValue V = DAG.getVAArg(TLI.getValueType(I.getType()), getCurDebugLoc(),
-                           getRoot(), getValue(I.getOperand(0)),
-                           DAG.getSrcValue(I.getOperand(0)));
-  setValue(&I, V);
-  DAG.setRoot(V.getValue(1));
-}
-
-void SelectionDAGLowering::visitVAEnd(CallInst &I) {
-  DAG.setRoot(DAG.getNode(ISD::VAEND, getCurDebugLoc(),
-                          MVT::Other, getRoot(),
-                          getValue(I.getOperand(1)),
-                          DAG.getSrcValue(I.getOperand(1))));
-}
-
-void SelectionDAGLowering::visitVACopy(CallInst &I) {
-  DAG.setRoot(DAG.getNode(ISD::VACOPY, getCurDebugLoc(),
-                          MVT::Other, getRoot(),
-                          getValue(I.getOperand(1)),
-                          getValue(I.getOperand(2)),
-                          DAG.getSrcValue(I.getOperand(1)),
-                          DAG.getSrcValue(I.getOperand(2))));
-}
-
-/// TargetLowering::LowerCallTo - This is the default LowerCallTo
-/// implementation, which just calls LowerCall.
-/// FIXME: When all targets are
-/// migrated to using LowerCall, this hook should be integrated into SDISel.
-std::pair<SDValue, SDValue>
-TargetLowering::LowerCallTo(SDValue Chain, const Type *RetTy,
-                            bool RetSExt, bool RetZExt, bool isVarArg,
-                            bool isInreg, unsigned NumFixedArgs,
-                            CallingConv::ID CallConv, bool isTailCall,
-                            bool isReturnValueUsed,
-                            SDValue Callee,
-                            ArgListTy &Args, SelectionDAG &DAG, DebugLoc dl) {
-
-  assert((!isTailCall || PerformTailCallOpt) &&
-         "isTailCall set when tail-call optimizations are disabled!");
-
-  // Handle all of the outgoing arguments.
-  SmallVector<ISD::OutputArg, 32> Outs;
-  for (unsigned i = 0, e = Args.size(); i != e; ++i) {
-    SmallVector<EVT, 4> ValueVTs;
-    ComputeValueVTs(*this, Args[i].Ty, ValueVTs);
-    for (unsigned Value = 0, NumValues = ValueVTs.size();
-         Value != NumValues; ++Value) {
-      EVT VT = ValueVTs[Value];
-      const Type *ArgTy = VT.getTypeForEVT(RetTy->getContext());
-      SDValue Op = SDValue(Args[i].Node.getNode(),
-                           Args[i].Node.getResNo() + Value);
-      ISD::ArgFlagsTy Flags;
-      unsigned OriginalAlignment =
-        getTargetData()->getABITypeAlignment(ArgTy);
-
-      if (Args[i].isZExt)
-        Flags.setZExt();
-      if (Args[i].isSExt)
-        Flags.setSExt();
-      if (Args[i].isInReg)
-        Flags.setInReg();
-      if (Args[i].isSRet)
-        Flags.setSRet();
-      if (Args[i].isByVal) {
-        Flags.setByVal();
-        const PointerType *Ty = cast<PointerType>(Args[i].Ty);
-        const Type *ElementTy = Ty->getElementType();
-        unsigned FrameAlign = getByValTypeAlignment(ElementTy);
-        unsigned FrameSize  = getTargetData()->getTypeAllocSize(ElementTy);
-        // For ByVal, alignment should come from FE.  BE will guess if this
-        // info is not there but there are cases it cannot get right.
-        if (Args[i].Alignment)
-          FrameAlign = Args[i].Alignment;
-        Flags.setByValAlign(FrameAlign);
-        Flags.setByValSize(FrameSize);
-      }
-      if (Args[i].isNest)
-        Flags.setNest();
-      Flags.setOrigAlign(OriginalAlignment);
-
-      EVT PartVT = getRegisterType(RetTy->getContext(), VT);
-      unsigned NumParts = getNumRegisters(RetTy->getContext(), VT);
-      SmallVector<SDValue, 4> Parts(NumParts);
-      ISD::NodeType ExtendKind = ISD::ANY_EXTEND;
-
-      if (Args[i].isSExt)
-        ExtendKind = ISD::SIGN_EXTEND;
-      else if (Args[i].isZExt)
-        ExtendKind = ISD::ZERO_EXTEND;
-
-      getCopyToParts(DAG, dl, Op, &Parts[0], NumParts, PartVT, ExtendKind);
-
-      for (unsigned j = 0; j != NumParts; ++j) {
-        // If it isn't the first piece, the alignment must be 1.
-        ISD::OutputArg MyFlags(Flags, Parts[j], i < NumFixedArgs);
-        if (NumParts > 1 && j == 0)
-          MyFlags.Flags.setSplit();
-        else if (j != 0)
-          MyFlags.Flags.setOrigAlign(1);
-
-        Outs.push_back(MyFlags);
-      }
-    }
-  }
-
-  // Handle the incoming return values from the call.
-  SmallVector<ISD::InputArg, 32> Ins;
-  SmallVector<EVT, 4> RetTys;
-  ComputeValueVTs(*this, RetTy, RetTys);
-  for (unsigned I = 0, E = RetTys.size(); I != E; ++I) {
-    EVT VT = RetTys[I];
-    EVT RegisterVT = getRegisterType(RetTy->getContext(), VT);
-    unsigned NumRegs = getNumRegisters(RetTy->getContext(), VT);
-    for (unsigned i = 0; i != NumRegs; ++i) {
-      ISD::InputArg MyFlags;
-      MyFlags.VT = RegisterVT;
-      MyFlags.Used = isReturnValueUsed;
-      if (RetSExt)
-        MyFlags.Flags.setSExt();
-      if (RetZExt)
-        MyFlags.Flags.setZExt();
-      if (isInreg)
-        MyFlags.Flags.setInReg();
-      Ins.push_back(MyFlags);
-    }
-  }
-
-  // Check if target-dependent constraints permit a tail call here.
-  // Target-independent constraints should be checked by the caller.
-  if (isTailCall &&
-      !IsEligibleForTailCallOptimization(Callee, CallConv, isVarArg, Ins, DAG))
-    isTailCall = false;
-
-  SmallVector<SDValue, 4> InVals;
-  Chain = LowerCall(Chain, Callee, CallConv, isVarArg, isTailCall,
-                    Outs, Ins, dl, DAG, InVals);
-
-  // Verify that the target's LowerCall behaved as expected.
-  assert(Chain.getNode() && Chain.getValueType() == MVT::Other &&
-         "LowerCall didn't return a valid chain!");
-  assert((!isTailCall || InVals.empty()) &&
-         "LowerCall emitted a return value for a tail call!");
-  assert((isTailCall || InVals.size() == Ins.size()) &&
-         "LowerCall didn't emit the correct number of values!");
-  DEBUG(for (unsigned i = 0, e = Ins.size(); i != e; ++i) {
-          assert(InVals[i].getNode() &&
-                 "LowerCall emitted a null value!");
-          assert(Ins[i].VT == InVals[i].getValueType() &&
-                 "LowerCall emitted a value with the wrong type!");
-        });
-
-  // For a tail call, the return value is merely live-out and there aren't
-  // any nodes in the DAG representing it. Return a special value to
-  // indicate that a tail call has been emitted and no more Instructions
-  // should be processed in the current block.
-  if (isTailCall) {
-    DAG.setRoot(Chain);
-    return std::make_pair(SDValue(), SDValue());
-  }
-
-  // Collect the legal value parts into potentially illegal values
-  // that correspond to the original function's return values.
-  ISD::NodeType AssertOp = ISD::DELETED_NODE;
-  if (RetSExt)
-    AssertOp = ISD::AssertSext;
-  else if (RetZExt)
-    AssertOp = ISD::AssertZext;
-  SmallVector<SDValue, 4> ReturnValues;
-  unsigned CurReg = 0;
-  for (unsigned I = 0, E = RetTys.size(); I != E; ++I) {
-    EVT VT = RetTys[I];
-    EVT RegisterVT = getRegisterType(RetTy->getContext(), VT);
-    unsigned NumRegs = getNumRegisters(RetTy->getContext(), VT);
-
-    SDValue ReturnValue =
-      getCopyFromParts(DAG, dl, &InVals[CurReg], NumRegs, RegisterVT, VT,
-                       AssertOp);
-    ReturnValues.push_back(ReturnValue);
-    CurReg += NumRegs;
-  }
-
-  // For a function returning void, there is no return value. We can't create
-  // such a node, so we just return a null return value; in that case,
-  // nothing will actually look at the value.
-  if (ReturnValues.empty())
-    return std::make_pair(SDValue(), Chain);
-
-  SDValue Res = DAG.getNode(ISD::MERGE_VALUES, dl,
-                            DAG.getVTList(&RetTys[0], RetTys.size()),
-                            &ReturnValues[0], ReturnValues.size());
-
-  return std::make_pair(Res, Chain);
-}
-
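-// A sketch of the contract the code above relies on (hypothetical target;
-// signature abbreviated to the arguments passed at the call site):
-//
-//   SDValue MyTarget::LowerCall(Chain, Callee, CallConv, isVarArg,
-//                               isTailCall, Outs, Ins, dl, DAG, InVals) {
-//     // Emit the call using Outs; for a non-tail call, push exactly one
-//     // SDValue of type Ins[i].VT into InVals for each Ins entry.
-//     return Chain; // must be a valid MVT::Other chain
-//   }
-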
-void TargetLowering::LowerOperationWrapper(SDNode *N,
-                                           SmallVectorImpl<SDValue> &Results,
-                                           SelectionDAG &DAG) {
-  SDValue Res = LowerOperation(SDValue(N, 0), DAG);
-  if (Res.getNode())
-    Results.push_back(Res);
-}
-
-SDValue TargetLowering::LowerOperation(SDValue Op, SelectionDAG &DAG) {
-  llvm_unreachable("LowerOperation not implemented for this target!");
-  return SDValue();
-}
-
-
-void SelectionDAGLowering::CopyValueToVirtualRegister(Value *V, unsigned Reg) {
-  SDValue Op = getValue(V);
-  assert((Op.getOpcode() != ISD::CopyFromReg ||
-          cast<RegisterSDNode>(Op.getOperand(1))->getReg() != Reg) &&
-         "Copy from a reg to the same reg!");
-  assert(!TargetRegisterInfo::isPhysicalRegister(Reg) && "Is a physreg");
-
-  RegsForValue RFV(V->getContext(), TLI, Reg, V->getType());
-  SDValue Chain = DAG.getEntryNode();
-  RFV.getCopyToRegs(Op, DAG, getCurDebugLoc(), Chain, 0);
-  PendingExports.push_back(Chain);
-}
-
-#include "llvm/CodeGen/SelectionDAGISel.h"
-
-void SelectionDAGISel::
-LowerArguments(BasicBlock *LLVMBB) {
-  // If this is the entry block, emit arguments.
-  Function &F = *LLVMBB->getParent();
-  SelectionDAG &DAG = SDL->DAG;
-  SDValue OldRoot = DAG.getRoot();
-  DebugLoc dl = SDL->getCurDebugLoc();
-  const TargetData *TD = TLI.getTargetData();
-
-  // Set up the incoming argument description vector.
-  SmallVector<ISD::InputArg, 16> Ins;
-  unsigned Idx = 1;
-  for (Function::arg_iterator I = F.arg_begin(), E = F.arg_end();
-       I != E; ++I, ++Idx) {
-    SmallVector<EVT, 4> ValueVTs;
-    ComputeValueVTs(TLI, I->getType(), ValueVTs);
-    bool isArgValueUsed = !I->use_empty();
-    for (unsigned Value = 0, NumValues = ValueVTs.size();
-         Value != NumValues; ++Value) {
-      EVT VT = ValueVTs[Value];
-      const Type *ArgTy = VT.getTypeForEVT(*DAG.getContext());
-      ISD::ArgFlagsTy Flags;
-      unsigned OriginalAlignment =
-        TD->getABITypeAlignment(ArgTy);
-
-      if (F.paramHasAttr(Idx, Attribute::ZExt))
-        Flags.setZExt();
-      if (F.paramHasAttr(Idx, Attribute::SExt))
-        Flags.setSExt();
-      if (F.paramHasAttr(Idx, Attribute::InReg))
-        Flags.setInReg();
-      if (F.paramHasAttr(Idx, Attribute::StructRet))
-        Flags.setSRet();
-      if (F.paramHasAttr(Idx, Attribute::ByVal)) {
-        Flags.setByVal();
-        const PointerType *Ty = cast<PointerType>(I->getType());
-        const Type *ElementTy = Ty->getElementType();
-        unsigned FrameAlign = TLI.getByValTypeAlignment(ElementTy);
-        unsigned FrameSize  = TD->getTypeAllocSize(ElementTy);
-        // For ByVal, the alignment should be passed from the front end; the
-        // back end will guess if this info is not there, but there are cases
-        // it cannot get right.
-        if (F.getParamAlignment(Idx))
-          FrameAlign = F.getParamAlignment(Idx);
-        Flags.setByValAlign(FrameAlign);
-        Flags.setByValSize(FrameSize);
-      }
-      if (F.paramHasAttr(Idx, Attribute::Nest))
-        Flags.setNest();
-      Flags.setOrigAlign(OriginalAlignment);
-
-      EVT RegisterVT = TLI.getRegisterType(*CurDAG->getContext(), VT);
-      unsigned NumRegs = TLI.getNumRegisters(*CurDAG->getContext(), VT);
-      for (unsigned i = 0; i != NumRegs; ++i) {
-        ISD::InputArg MyFlags(Flags, RegisterVT, isArgValueUsed);
-        if (NumRegs > 1 && i == 0)
-          MyFlags.Flags.setSplit();
-        // If it isn't the first piece, the alignment must be 1.
-        else if (i > 0)
-          MyFlags.Flags.setOrigAlign(1);
-        Ins.push_back(MyFlags);
-      }
-    }
-  }
-
-  // Call the target to set up the argument values.
-  SmallVector<SDValue, 8> InVals;
-  SDValue NewRoot = TLI.LowerFormalArguments(DAG.getRoot(), F.getCallingConv(),
-                                             F.isVarArg(), Ins,
-                                             dl, DAG, InVals);
-
-  // Verify that the target's LowerFormalArguments behaved as expected.
-  assert(NewRoot.getNode() && NewRoot.getValueType() == MVT::Other &&
-         "LowerFormalArguments didn't return a valid chain!");
-  assert(InVals.size() == Ins.size() &&
-         "LowerFormalArguments didn't emit the correct number of values!");
-  DEBUG(for (unsigned i = 0, e = Ins.size(); i != e; ++i) {
-          assert(InVals[i].getNode() &&
-                 "LowerFormalArguments emitted a null value!");
-          assert(Ins[i].VT == InVals[i].getValueType() &&
-                 "LowerFormalArguments emitted a value with the wrong type!");
-        });
-
-  // Update the DAG with the new chain value resulting from argument lowering.
-  DAG.setRoot(NewRoot);
-
-  // Set up the argument values.
-  unsigned i = 0;
-  Idx = 1;
-  for (Function::arg_iterator I = F.arg_begin(), E = F.arg_end(); I != E;
-      ++I, ++Idx) {
-    SmallVector<SDValue, 4> ArgValues;
-    SmallVector<EVT, 4> ValueVTs;
-    ComputeValueVTs(TLI, I->getType(), ValueVTs);
-    unsigned NumValues = ValueVTs.size();
-    for (unsigned Value = 0; Value != NumValues; ++Value) {
-      EVT VT = ValueVTs[Value];
-      EVT PartVT = TLI.getRegisterType(*CurDAG->getContext(), VT);
-      unsigned NumParts = TLI.getNumRegisters(*CurDAG->getContext(), VT);
-
-      if (!I->use_empty()) {
-        ISD::NodeType AssertOp = ISD::DELETED_NODE;
-        if (F.paramHasAttr(Idx, Attribute::SExt))
-          AssertOp = ISD::AssertSext;
-        else if (F.paramHasAttr(Idx, Attribute::ZExt))
-          AssertOp = ISD::AssertZext;
-
-        ArgValues.push_back(getCopyFromParts(DAG, dl, &InVals[i], NumParts,
-                                             PartVT, VT, AssertOp));
-      }
-      i += NumParts;
-    }
-    if (!I->use_empty()) {
-      SDL->setValue(I, DAG.getMergeValues(&ArgValues[0], NumValues,
-                                          SDL->getCurDebugLoc()));
-      // If this argument is live outside of the entry block, insert a copy
-      // from wherever we got it to the vreg that other BBs will reference
-      // it by.
-      SDL->CopyToExportRegsIfNeeded(I);
-    }
-  }
-  assert(i == InVals.size() && "Argument register count mismatch!");
-
-  // Finally, if the target has anything special to do, allow it to do so.
-  // FIXME: this should insert code into the DAG!
-  EmitFunctionEntryCode(F, SDL->DAG.getMachineFunction());
-}
-
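-// A worked example of the flattening above (a sketch, assuming a 32-bit
-// target where i64 is illegal): for
-//
-//   define void @f(i64 %x)
-//
-// the argument loop in LowerArguments pushes two i32 ISD::InputArg entries,
-// with Split set on the first and OrigAlign forced to 1 on the second;
-// LowerFormalArguments must then return exactly two SDValues in InVals,
-// which getCopyFromParts reassembles into the single i64 argument value.
-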
-/// Handle PHI nodes in successor blocks.  Emit code into the SelectionDAG to
-/// ensure constants are generated when needed.  Remember the virtual registers
-/// that need to be added to the Machine PHI nodes as input.  We cannot just
-/// directly add them, because expansion might result in multiple MBB's for one
-/// BB.  As such, the start of the BB might correspond to a different MBB than
-/// the end.
-///
-void
-SelectionDAGISel::HandlePHINodesInSuccessorBlocks(BasicBlock *LLVMBB) {
-  TerminatorInst *TI = LLVMBB->getTerminator();
-
-  SmallPtrSet<MachineBasicBlock *, 4> SuccsHandled;
-
-  // Check successor nodes' PHI nodes that expect a constant to be available
-  // from this block.
-  for (unsigned succ = 0, e = TI->getNumSuccessors(); succ != e; ++succ) {
-    BasicBlock *SuccBB = TI->getSuccessor(succ);
-    if (!isa<PHINode>(SuccBB->begin())) continue;
-    MachineBasicBlock *SuccMBB = FuncInfo->MBBMap[SuccBB];
-
-    // If this terminator has multiple identical successors (common for
-    // switches), only handle each succ once.
-    if (!SuccsHandled.insert(SuccMBB)) continue;
-
-    MachineBasicBlock::iterator MBBI = SuccMBB->begin();
-    PHINode *PN;
-
-    // At this point we know that there is a 1-1 correspondence between LLVM PHI
-    // nodes and Machine PHI nodes, but the incoming operands have not been
-    // emitted yet.
-    for (BasicBlock::iterator I = SuccBB->begin();
-         (PN = dyn_cast<PHINode>(I)); ++I) {
-      // Ignore dead PHIs.
-      if (PN->use_empty()) continue;
-
-      unsigned Reg;
-      Value *PHIOp = PN->getIncomingValueForBlock(LLVMBB);
-
-      if (Constant *C = dyn_cast<Constant>(PHIOp)) {
-        unsigned &RegOut = SDL->ConstantsOut[C];
-        if (RegOut == 0) {
-          RegOut = FuncInfo->CreateRegForValue(C);
-          SDL->CopyValueToVirtualRegister(C, RegOut);
-        }
-        Reg = RegOut;
-      } else {
-        Reg = FuncInfo->ValueMap[PHIOp];
-        if (Reg == 0) {
-          assert(isa<AllocaInst>(PHIOp) &&
-                 FuncInfo->StaticAllocaMap.count(cast<AllocaInst>(PHIOp)) &&
-                 "Didn't codegen value into a register!??");
-          Reg = FuncInfo->CreateRegForValue(PHIOp);
-          SDL->CopyValueToVirtualRegister(PHIOp, Reg);
-        }
-      }
-
-      // Remember that this register needs to be added to the machine PHI
-      // node as the input for this MBB.
-      SmallVector<EVT, 4> ValueVTs;
-      ComputeValueVTs(TLI, PN->getType(), ValueVTs);
-      for (unsigned vti = 0, vte = ValueVTs.size(); vti != vte; ++vti) {
-        EVT VT = ValueVTs[vti];
-        unsigned NumRegisters = TLI.getNumRegisters(*CurDAG->getContext(), VT);
-        for (unsigned i = 0, e = NumRegisters; i != e; ++i)
-          SDL->PHINodesToUpdate.push_back(std::make_pair(MBBI++, Reg+i));
-        Reg += NumRegisters;
-      }
-    }
-  }
-  SDL->ConstantsOut.clear();
-}
-
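-// A small example of the constant case handled above (sketch):
-//
-//   entry:
-//     br label %succ
-//   succ:
-//     %p = phi i32 [ 7, %entry ], [ %v, %other ]
-//
-// The constant 7 has no virtual register in %entry, so it is materialized
-// there once via ConstantsOut/CopyValueToVirtualRegister, and the new vreg
-// is recorded in PHINodesToUpdate as the machine PHI operand for this MBB.
-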
-/// This is the Fast-ISel version of HandlePHINodesInSuccessorBlocks. It only
-/// supports legal types, and it emits MachineInstrs directly instead of
-/// creating SelectionDAG nodes.
-///
-bool
-SelectionDAGISel::HandlePHINodesInSuccessorBlocksFast(BasicBlock *LLVMBB,
-                                                      FastISel *F) {
-  TerminatorInst *TI = LLVMBB->getTerminator();
-
-  SmallPtrSet<MachineBasicBlock *, 4> SuccsHandled;
-  unsigned OrigNumPHINodesToUpdate = SDL->PHINodesToUpdate.size();
-
-  // Check successor nodes' PHI nodes that expect a constant to be available
-  // from this block.
-  for (unsigned succ = 0, e = TI->getNumSuccessors(); succ != e; ++succ) {
-    BasicBlock *SuccBB = TI->getSuccessor(succ);
-    if (!isa<PHINode>(SuccBB->begin())) continue;
-    MachineBasicBlock *SuccMBB = FuncInfo->MBBMap[SuccBB];
-
-    // If this terminator has multiple identical successors (common for
-    // switches), only handle each succ once.
-    if (!SuccsHandled.insert(SuccMBB)) continue;
-
-    MachineBasicBlock::iterator MBBI = SuccMBB->begin();
-    PHINode *PN;
-
-    // At this point we know that there is a 1-1 correspondence between LLVM PHI
-    // nodes and Machine PHI nodes, but the incoming operands have not been
-    // emitted yet.
-    for (BasicBlock::iterator I = SuccBB->begin();
-         (PN = dyn_cast<PHINode>(I)); ++I) {
-      // Ignore dead PHIs.
-      if (PN->use_empty()) continue;
-
-      // Only handle legal types. Two interesting things to note here. First,
-      // by bailing out early, we may leave behind some dead instructions,
-      // since SelectionDAG's HandlePHINodesInSuccessorBlocks will insert its
-      // own moves. Second, this check is necessary because FastISel doesn't
-      // use CreateRegForValue to create registers, so it always creates
-      // exactly one register for each non-void instruction.
-      EVT VT = TLI.getValueType(PN->getType(), /*AllowUnknown=*/true);
-      if (VT == MVT::Other || !TLI.isTypeLegal(VT)) {
-        // Promote MVT::i1.
-        if (VT == MVT::i1)
-          VT = TLI.getTypeToTransformTo(*CurDAG->getContext(), VT);
-        else {
-          SDL->PHINodesToUpdate.resize(OrigNumPHINodesToUpdate);
-          return false;
-        }
-      }
-
-      Value *PHIOp = PN->getIncomingValueForBlock(LLVMBB);
-
-      unsigned Reg = F->getRegForValue(PHIOp);
-      if (Reg == 0) {
-        SDL->PHINodesToUpdate.resize(OrigNumPHINodesToUpdate);
-        return false;
-      }
-      SDL->PHINodesToUpdate.push_back(std::make_pair(MBBI++, Reg));
-    }
-  }
-
-  return true;
-}
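-
-// For example (sketch): a PHI of type i1 is promoted via
-// getTypeToTransformTo and handled here, while a PHI of an illegal i64 on
-// a 32-bit target makes the function above restore PHINodesToUpdate and
-// return false, so the SelectionDAG path handles the block instead.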
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuild.h b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuild.h
deleted file mode 100644
index 06acc8a..0000000
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuild.h
+++ /dev/null
@@ -1,573 +0,0 @@
-//===-- SelectionDAGBuild.h - Selection-DAG building ----------------------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This implements routines for translating from LLVM IR into SelectionDAG IR.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef SELECTIONDAGBUILD_H
-#define SELECTIONDAGBUILD_H
-
-#include "llvm/Constants.h"
-#include "llvm/CodeGen/SelectionDAG.h"
-#include "llvm/ADT/APInt.h"
-#include "llvm/ADT/DenseMap.h"
-#ifndef NDEBUG
-#include "llvm/ADT/SmallSet.h"
-#endif
-#include "llvm/CodeGen/SelectionDAGNodes.h"
-#include "llvm/CodeGen/ValueTypes.h"
-#include "llvm/Support/CallSite.h"
-#include "llvm/Support/ErrorHandling.h"
-#include "llvm/Target/TargetMachine.h"
-#include <vector>
-#include <set>
-
-namespace llvm {
-
-class AliasAnalysis;
-class AllocaInst;
-class BasicBlock;
-class BitCastInst;
-class BranchInst;
-class CallInst;
-class ExtractElementInst;
-class ExtractValueInst;
-class FCmpInst;
-class FPExtInst;
-class FPToSIInst;
-class FPToUIInst;
-class FPTruncInst;
-class FreeInst;
-class Function;
-class GetElementPtrInst;
-class GCFunctionInfo;
-class ICmpInst;
-class IntToPtrInst;
-class InvokeInst;
-class InsertElementInst;
-class InsertValueInst;
-class Instruction;
-class LoadInst;
-class MachineBasicBlock;
-class MachineFunction;
-class MachineInstr;
-class MachineModuleInfo;
-class MachineRegisterInfo;
-class MallocInst;
-class PHINode;
-class PtrToIntInst;
-class ReturnInst;
-class SDISelAsmOperandInfo;
-class SExtInst;
-class SelectInst;
-class ShuffleVectorInst;
-class SIToFPInst;
-class StoreInst;
-class SwitchInst;
-class TargetData;
-class TargetLowering;
-class TruncInst;
-class UIToFPInst;
-class UnreachableInst;
-class UnwindInst;
-class VAArgInst;
-class ZExtInst;
-
-//===--------------------------------------------------------------------===//
-/// FunctionLoweringInfo - This contains information that is global to a
-/// function that is used when lowering a region of the function.
-///
-class FunctionLoweringInfo {
-public:
-  TargetLowering &TLI;
-  Function *Fn;
-  MachineFunction *MF;
-  MachineRegisterInfo *RegInfo;
-
-  explicit FunctionLoweringInfo(TargetLowering &TLI);
-
-  /// set - Initialize this FunctionLoweringInfo with the given Function
-  /// and its associated MachineFunction.
-  ///
-  void set(Function &Fn, MachineFunction &MF, SelectionDAG &DAG,
-           bool EnableFastISel);
-
-  /// MBBMap - A mapping from LLVM basic blocks to their machine code entry.
-  DenseMap<const BasicBlock*, MachineBasicBlock *> MBBMap;
-
-  /// ValueMap - Since we emit code for the function a basic block at a time,
-  /// we must remember which virtual registers hold the values for
-  /// cross-basic-block values.
-  DenseMap<const Value*, unsigned> ValueMap;
-
-  /// StaticAllocaMap - Keep track of frame indices for fixed sized allocas in
-  /// the entry block.  This allows the allocas to be efficiently referenced
-  /// anywhere in the function.
-  DenseMap<const AllocaInst*, int> StaticAllocaMap;
-
-#ifndef NDEBUG
-  SmallSet<Instruction*, 8> CatchInfoLost;
-  SmallSet<Instruction*, 8> CatchInfoFound;
-#endif
-
-  unsigned MakeReg(EVT VT);
-  
-  /// isExportedInst - Return true if the specified value is an instruction
-  /// exported from its block.
-  bool isExportedInst(const Value *V) {
-    return ValueMap.count(V);
-  }
-
-  unsigned CreateRegForValue(const Value *V);
-  
-  unsigned InitializeRegForValue(const Value *V) {
-    unsigned &R = ValueMap[V];
-    assert(R == 0 && "Already initialized this value register!");
-    return R = CreateRegForValue(V);
-  }
-  
-  struct LiveOutInfo {
-    unsigned NumSignBits;
-    APInt KnownOne, KnownZero;
-    LiveOutInfo() : NumSignBits(0), KnownOne(1, 0), KnownZero(1, 0) {}
-  };
-  
-  /// LiveOutRegInfo - Information about live out vregs, indexed by their
-  /// register number offset by 'FirstVirtualRegister'.
-  std::vector<LiveOutInfo> LiveOutRegInfo;
-
-  /// clear - Clear out all the function-specific state. This returns this
-  /// FunctionLoweringInfo to an empty state, ready to be used for a
-  /// different function.
-  void clear() {
-    MBBMap.clear();
-    ValueMap.clear();
-    StaticAllocaMap.clear();
-#ifndef NDEBUG
-    CatchInfoLost.clear();
-    CatchInfoFound.clear();
-#endif
-    LiveOutRegInfo.clear();
-  }
-};
-
-//===----------------------------------------------------------------------===//
-/// SelectionDAGLowering - This is the common target-independent lowering
-/// implementation that is parameterized by a TargetLowering object.
-/// Also, targets can overload any lowering method.
-///
-class SelectionDAGLowering {
-  MachineBasicBlock *CurMBB;
-
-  /// CurDebugLoc - current file + line number.  Changes as we build the DAG.
-  DebugLoc CurDebugLoc;
-
-  DenseMap<const Value*, SDValue> NodeMap;
-
-  /// PendingLoads - Loads are not emitted to the program immediately.  We bunch
-  /// them up and then emit token factor nodes when possible.  This allows us to
-  /// get simple disambiguation between loads without worrying about alias
-  /// analysis.
-  SmallVector<SDValue, 8> PendingLoads;
-
-  /// PendingExports - CopyToReg nodes that copy values to virtual registers
-  /// for export to other blocks need to be emitted before any terminator
-  /// instruction, but they have no other ordering requirements. We bunch them
-  /// up and then emit a single TokenFactor for them just before terminator
-  /// instructions.
-  SmallVector<SDValue, 8> PendingExports;
-
-  /// Case - A struct to record the Value for a switch case, and the
-  /// case's target basic block.
-  struct Case {
-    Constant* Low;
-    Constant* High;
-    MachineBasicBlock* BB;
-
-    Case() : Low(0), High(0), BB(0) { }
-    Case(Constant* low, Constant* high, MachineBasicBlock* bb) :
-      Low(low), High(high), BB(bb) { }
-    uint64_t size() const {
-      uint64_t rHigh = cast<ConstantInt>(High)->getSExtValue();
-      uint64_t rLow  = cast<ConstantInt>(Low)->getSExtValue();
-      return (rHigh - rLow + 1ULL);
-    }
-  };
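-
-  // For example (a sketch): a clustered range covering case values 2..5 is
-  // stored as Case(CI_2, CI_5, BB); size() then returns 4, i.e.
-  // rHigh - rLow + 1 with sign-extended bounds.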
-
-  struct CaseBits {
-    uint64_t Mask;
-    MachineBasicBlock* BB;
-    unsigned Bits;
-
-    CaseBits(uint64_t mask, MachineBasicBlock* bb, unsigned bits):
-      Mask(mask), BB(bb), Bits(bits) { }
-  };
-
-  typedef std::vector<Case>           CaseVector;
-  typedef std::vector<CaseBits>       CaseBitsVector;
-  typedef CaseVector::iterator        CaseItr;
-  typedef std::pair<CaseItr, CaseItr> CaseRange;
-
-  /// CaseRec - A struct with ctor used in lowering switches to a binary tree
-  /// of conditional branches.
-  struct CaseRec {
-    CaseRec(MachineBasicBlock *bb, Constant *lt, Constant *ge, CaseRange r) :
-    CaseBB(bb), LT(lt), GE(ge), Range(r) {}
-
-    /// CaseBB - The MBB in which to emit the compare and branch
-    MachineBasicBlock *CaseBB;
-    /// LT, GE - If nonzero, we know the current case value must be less-than or
-    /// greater-than-or-equal-to these Constants.
-    Constant *LT;
-    Constant *GE;
-    /// Range - A pair of iterators representing the range of case values to be
-    /// processed at this point in the binary search tree.
-    CaseRange Range;
-  };
-
-  typedef std::vector<CaseRec> CaseRecVector;
-
-  /// The comparison function for sorting the switch case values in the vector.
-  /// WARNING: Case ranges should be disjoint!
-  struct CaseCmp {
-    bool operator () (const Case& C1, const Case& C2) {
-      assert(isa<ConstantInt>(C1.Low) && isa<ConstantInt>(C2.High));
-      const ConstantInt* CI1 = cast<const ConstantInt>(C1.Low);
-      const ConstantInt* CI2 = cast<const ConstantInt>(C2.High);
-      return CI1->getValue().slt(CI2->getValue());
-    }
-  };
-
-  struct CaseBitsCmp {
-    bool operator () (const CaseBits& C1, const CaseBits& C2) {
-      return C1.Bits > C2.Bits;
-    }
-  };
-
-  size_t Clusterify(CaseVector& Cases, const SwitchInst &SI);
-
-  /// CaseBlock - This structure is used to communicate between SDLowering and
-  /// SDISel for the code generation of additional basic blocks needed by multi-
-  /// case switch statements.
-  struct CaseBlock {
-    CaseBlock(ISD::CondCode cc, Value *cmplhs, Value *cmprhs, Value *cmpmiddle,
-              MachineBasicBlock *truebb, MachineBasicBlock *falsebb,
-              MachineBasicBlock *me)
-      : CC(cc), CmpLHS(cmplhs), CmpMHS(cmpmiddle), CmpRHS(cmprhs),
-        TrueBB(truebb), FalseBB(falsebb), ThisBB(me) {}
-    // CC - the condition code to use for the case block's setcc node
-    ISD::CondCode CC;
-    // CmpLHS/CmpRHS/CmpMHS - The LHS/MHS/RHS of the comparison to emit.
-    // Emit by default LHS op RHS. MHS is used for range comparisons:
-    // If MHS is not null: (LHS <= MHS) and (MHS <= RHS).
-    Value *CmpLHS, *CmpMHS, *CmpRHS;
-    // TrueBB/FalseBB - the block to branch to if the setcc is true/false.
-    MachineBasicBlock *TrueBB, *FalseBB;
-    // ThisBB - the block into which to emit the code for the setcc and branches
-    MachineBasicBlock *ThisBB;
-  };
-  struct JumpTable {
-    JumpTable(unsigned R, unsigned J, MachineBasicBlock *M,
-              MachineBasicBlock *D): Reg(R), JTI(J), MBB(M), Default(D) {}
-  
-    /// Reg - the virtual register containing the index of the jump table
-    /// entry to jump to.
-    unsigned Reg;
-    /// JTI - the JumpTableIndex for this jump table in the function.
-    unsigned JTI;
-    /// MBB - the MBB into which to emit the code for the indirect jump.
-    MachineBasicBlock *MBB;
-    /// Default - the MBB of the default bb, which is a successor of the range
-    /// check MBB.  This is used when updating PHI nodes in successors.
-    MachineBasicBlock *Default;
-  };
-  struct JumpTableHeader {
-    JumpTableHeader(APInt F, APInt L, Value* SV, MachineBasicBlock* H,
-                    bool E = false):
-      First(F), Last(L), SValue(SV), HeaderBB(H), Emitted(E) {}
-    APInt First;
-    APInt Last;
-    Value *SValue;
-    MachineBasicBlock *HeaderBB;
-    bool Emitted;
-  };
-  typedef std::pair<JumpTableHeader, JumpTable> JumpTableBlock;
-
-  struct BitTestCase {
-    BitTestCase(uint64_t M, MachineBasicBlock* T, MachineBasicBlock* Tr):
-      Mask(M), ThisBB(T), TargetBB(Tr) { }
-    uint64_t Mask;
-    MachineBasicBlock* ThisBB;
-    MachineBasicBlock* TargetBB;
-  };
-
-  typedef SmallVector<BitTestCase, 3> BitTestInfo;
-
-  struct BitTestBlock {
-    BitTestBlock(APInt F, APInt R, Value* SV,
-                 unsigned Rg, bool E,
-                 MachineBasicBlock* P, MachineBasicBlock* D,
-                 const BitTestInfo& C):
-      First(F), Range(R), SValue(SV), Reg(Rg), Emitted(E),
-      Parent(P), Default(D), Cases(C) { }
-    APInt First;
-    APInt Range;
-    Value  *SValue;
-    unsigned Reg;
-    bool Emitted;
-    MachineBasicBlock *Parent;
-    MachineBasicBlock *Default;
-    BitTestInfo Cases;
-  };
-
-public:
-  // TLI - This is information that describes the available target features we
-  // need for lowering.  This indicates when operations are unavailable,
-  // implemented with a libcall, etc.
-  TargetLowering &TLI;
-  SelectionDAG &DAG;
-  const TargetData *TD;
-  AliasAnalysis *AA;
-
-  /// SwitchCases - Vector of CaseBlock structures used to communicate
-  /// SwitchInst code generation information.
-  std::vector<CaseBlock> SwitchCases;
-  /// JTCases - Vector of JumpTable structures used to communicate
-  /// SwitchInst code generation information.
-  std::vector<JumpTableBlock> JTCases;
-  /// BitTestCases - Vector of BitTestBlock structures used to communicate
-  /// SwitchInst code generation information.
-  std::vector<BitTestBlock> BitTestCases;
-
-  /// PHINodesToUpdate - A list of phi instructions whose operand list will
-  /// be updated after processing the current basic block.
-  std::vector<std::pair<MachineInstr*, unsigned> > PHINodesToUpdate;
-
-  /// EdgeMapping - If an edge from CurMBB to any MBB is changed (e.g. due to
-  /// scheduler custom lowering), track the change here.
-  DenseMap<MachineBasicBlock*, MachineBasicBlock*> EdgeMapping;
-
-  // Emit PHI-node-operand constants only once even if used by multiple
-  // PHI nodes.
-  DenseMap<Constant*, unsigned> ConstantsOut;
-
-  /// FuncInfo - Information about the function as a whole.
-  ///
-  FunctionLoweringInfo &FuncInfo;
-
-  /// OptLevel - What optimization level we're generating code for.
-  /// 
-  CodeGenOpt::Level OptLevel;
-  
-  /// GFI - Garbage collection metadata for the function.
-  GCFunctionInfo *GFI;
-
-  /// HasTailCall - This is set to true if a call in the current
-  /// block has been translated as a tail call. In this case,
-  /// no subsequent DAG nodes should be created.
-  ///
-  bool HasTailCall;
-
-  LLVMContext *Context;
-
-  SelectionDAGLowering(SelectionDAG &dag, TargetLowering &tli,
-                       FunctionLoweringInfo &funcinfo,
-                       CodeGenOpt::Level ol)
-    : CurDebugLoc(DebugLoc::getUnknownLoc()), 
-      TLI(tli), DAG(dag), FuncInfo(funcinfo), OptLevel(ol),
-      HasTailCall(false),
-      Context(dag.getContext()) {
-  }
-
-  void init(GCFunctionInfo *gfi, AliasAnalysis &aa);
-
-  /// clear - Clear out the current SelectionDAG and the associated
-  /// state and prepare this SelectionDAGLowering object to be used
-  /// for a new block. This doesn't clear out information about
-  /// additional blocks that are needed to complete switch lowering
-  /// or PHI node updating; that information is cleared out as it is
-  /// consumed.
-  void clear();
-
-  /// getRoot - Return the current virtual root of the Selection DAG,
-  /// flushing any PendingLoad items. This must be done before emitting
-  /// a store or any other node that may need to be ordered after any
-  /// prior load instructions.
-  ///
-  SDValue getRoot();
-
-  /// getControlRoot - Similar to getRoot, but instead of flushing all the
-  /// PendingLoad items, flush all the PendingExports items. It is necessary
-  /// to do this before emitting a terminator instruction.
-  ///
-  SDValue getControlRoot();
-
-  DebugLoc getCurDebugLoc() const { return CurDebugLoc; }
-  void setCurDebugLoc(DebugLoc dl) { CurDebugLoc = dl; }
-
-  void CopyValueToVirtualRegister(Value *V, unsigned Reg);
-
-  void visit(Instruction &I);
-
-  void visit(unsigned Opcode, User &I);
-
-  void setCurrentBasicBlock(MachineBasicBlock *MBB) { CurMBB = MBB; }
-
-  SDValue getValue(const Value *V);
-
-  void setValue(const Value *V, SDValue NewN) {
-    SDValue &N = NodeMap[V];
-    assert(N.getNode() == 0 && "Already set a value for this node!");
-    N = NewN;
-  }
-  
-  void GetRegistersForValue(SDISelAsmOperandInfo &OpInfo,
-                            std::set<unsigned> &OutputRegs, 
-                            std::set<unsigned> &InputRegs);
-
-  void FindMergedConditions(Value *Cond, MachineBasicBlock *TBB,
-                            MachineBasicBlock *FBB, MachineBasicBlock *CurBB,
-                            unsigned Opc);
-  void EmitBranchForMergedCondition(Value *Cond, MachineBasicBlock *TBB,
-                                    MachineBasicBlock *FBB,
-                                    MachineBasicBlock *CurBB);
-  bool ShouldEmitAsBranches(const std::vector<CaseBlock> &Cases);
-  bool isExportableFromCurrentBlock(Value *V, const BasicBlock *FromBB);
-  void CopyToExportRegsIfNeeded(Value *V);
-  void ExportFromCurrentBlock(Value *V);
-  void LowerCallTo(CallSite CS, SDValue Callee, bool IsTailCall,
-                   MachineBasicBlock *LandingPad = NULL);
-
-private:
-  // Terminator instructions.
-  void visitRet(ReturnInst &I);
-  void visitBr(BranchInst &I);
-  void visitSwitch(SwitchInst &I);
-  void visitUnreachable(UnreachableInst &I) { /* noop */ }
-
-  // Helpers for visitSwitch
-  bool handleSmallSwitchRange(CaseRec& CR,
-                              CaseRecVector& WorkList,
-                              Value* SV,
-                              MachineBasicBlock* Default);
-  bool handleJTSwitchCase(CaseRec& CR,
-                          CaseRecVector& WorkList,
-                          Value* SV,
-                          MachineBasicBlock* Default);
-  bool handleBTSplitSwitchCase(CaseRec& CR,
-                               CaseRecVector& WorkList,
-                               Value* SV,
-                               MachineBasicBlock* Default);
-  bool handleBitTestsSwitchCase(CaseRec& CR,
-                                CaseRecVector& WorkList,
-                                Value* SV,
-                                MachineBasicBlock* Default);  
-public:
-  void visitSwitchCase(CaseBlock &CB);
-  void visitBitTestHeader(BitTestBlock &B);
-  void visitBitTestCase(MachineBasicBlock* NextMBB,
-                        unsigned Reg,
-                        BitTestCase &B);
-  void visitJumpTable(JumpTable &JT);
-  void visitJumpTableHeader(JumpTable &JT, JumpTableHeader &JTH);
-  
-private:
-  // These all get lowered before this pass.
-  void visitInvoke(InvokeInst &I);
-  void visitUnwind(UnwindInst &I);
-
-  void visitBinary(User &I, unsigned OpCode);
-  void visitShift(User &I, unsigned Opcode);
-  void visitAdd(User &I)  { visitBinary(I, ISD::ADD); }
-  void visitFAdd(User &I) { visitBinary(I, ISD::FADD); }
-  void visitSub(User &I)  { visitBinary(I, ISD::SUB); }
-  void visitFSub(User &I);
-  void visitMul(User &I)  { visitBinary(I, ISD::MUL); }
-  void visitFMul(User &I) { visitBinary(I, ISD::FMUL); }
-  void visitURem(User &I) { visitBinary(I, ISD::UREM); }
-  void visitSRem(User &I) { visitBinary(I, ISD::SREM); }
-  void visitFRem(User &I) { visitBinary(I, ISD::FREM); }
-  void visitUDiv(User &I) { visitBinary(I, ISD::UDIV); }
-  void visitSDiv(User &I) { visitBinary(I, ISD::SDIV); }
-  void visitFDiv(User &I) { visitBinary(I, ISD::FDIV); }
-  void visitAnd (User &I) { visitBinary(I, ISD::AND); }
-  void visitOr  (User &I) { visitBinary(I, ISD::OR); }
-  void visitXor (User &I) { visitBinary(I, ISD::XOR); }
-  void visitShl (User &I) { visitShift(I, ISD::SHL); }
-  void visitLShr(User &I) { visitShift(I, ISD::SRL); }
-  void visitAShr(User &I) { visitShift(I, ISD::SRA); }
-  void visitICmp(User &I);
-  void visitFCmp(User &I);
-  // Visit the conversion instructions
-  void visitTrunc(User &I);
-  void visitZExt(User &I);
-  void visitSExt(User &I);
-  void visitFPTrunc(User &I);
-  void visitFPExt(User &I);
-  void visitFPToUI(User &I);
-  void visitFPToSI(User &I);
-  void visitUIToFP(User &I);
-  void visitSIToFP(User &I);
-  void visitPtrToInt(User &I);
-  void visitIntToPtr(User &I);
-  void visitBitCast(User &I);
-
-  void visitExtractElement(User &I);
-  void visitInsertElement(User &I);
-  void visitShuffleVector(User &I);
-
-  void visitExtractValue(ExtractValueInst &I);
-  void visitInsertValue(InsertValueInst &I);
-
-  void visitGetElementPtr(User &I);
-  void visitSelect(User &I);
-
-  void visitMalloc(MallocInst &I);
-  void visitFree(FreeInst &I);
-  void visitAlloca(AllocaInst &I);
-  void visitLoad(LoadInst &I);
-  void visitStore(StoreInst &I);
-  void visitPHI(PHINode &I) { } // PHI nodes are handled specially.
-  void visitCall(CallInst &I);
-  void visitInlineAsm(CallSite CS);
-  const char *visitIntrinsicCall(CallInst &I, unsigned Intrinsic);
-  void visitTargetIntrinsic(CallInst &I, unsigned Intrinsic);
-
-  void visitPow(CallInst &I);
-  void visitExp2(CallInst &I);
-  void visitExp(CallInst &I);
-  void visitLog(CallInst &I);
-  void visitLog2(CallInst &I);
-  void visitLog10(CallInst &I);
-
-  void visitVAStart(CallInst &I);
-  void visitVAArg(VAArgInst &I);
-  void visitVAEnd(CallInst &I);
-  void visitVACopy(CallInst &I);
-
-  void visitUserOp1(Instruction &I) {
-    llvm_unreachable("UserOp1 should not exist at instruction selection time!");
-  }
-  void visitUserOp2(Instruction &I) {
-    llvm_unreachable("UserOp2 should not exist at instruction selection time!");
-  }
-  
-  const char *implVisitBinaryAtomic(CallInst& I, ISD::NodeType Op);
-  const char *implVisitAluOverflow(CallInst &I, ISD::NodeType Op);
-};
-
-/// AddCatchInfo - Extract the personality and type infos from an eh.selector
-/// call, and add them to the specified machine basic block.
-void AddCatchInfo(CallInst &I, MachineModuleInfo *MMI,
-                  MachineBasicBlock *MBB);
-
-} // end namespace llvm
-
-#endif
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
new file mode 100644
index 0000000..57d8903
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -0,0 +1,5821 @@
+//===-- SelectionDAGBuilder.cpp - Selection-DAG building ------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This implements routines for translating from LLVM IR into SelectionDAG IR.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "isel"
+#include "SelectionDAGBuilder.h"
+#include "FunctionLoweringInfo.h"
+#include "llvm/ADT/BitVector.h"
+#include "llvm/ADT/SmallSet.h"
+#include "llvm/Analysis/AliasAnalysis.h"
+#include "llvm/Constants.h"
+#include "llvm/CallingConv.h"
+#include "llvm/DerivedTypes.h"
+#include "llvm/Function.h"
+#include "llvm/GlobalVariable.h"
+#include "llvm/InlineAsm.h"
+#include "llvm/Instructions.h"
+#include "llvm/Intrinsics.h"
+#include "llvm/IntrinsicInst.h"
+#include "llvm/LLVMContext.h"
+#include "llvm/Module.h"
+#include "llvm/CodeGen/FastISel.h"
+#include "llvm/CodeGen/GCStrategy.h"
+#include "llvm/CodeGen/GCMetadata.h"
+#include "llvm/CodeGen/MachineFunction.h"
+#include "llvm/CodeGen/MachineFrameInfo.h"
+#include "llvm/CodeGen/MachineInstrBuilder.h"
+#include "llvm/CodeGen/MachineJumpTableInfo.h"
+#include "llvm/CodeGen/MachineModuleInfo.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/PseudoSourceValue.h"
+#include "llvm/CodeGen/SelectionDAG.h"
+#include "llvm/CodeGen/DwarfWriter.h"
+#include "llvm/Analysis/DebugInfo.h"
+#include "llvm/Target/TargetRegisterInfo.h"
+#include "llvm/Target/TargetData.h"
+#include "llvm/Target/TargetFrameInfo.h"
+#include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/Target/TargetIntrinsicInfo.h"
+#include "llvm/Target/TargetLowering.h"
+#include "llvm/Target/TargetOptions.h"
+#include "llvm/Support/Compiler.h"
+#include "llvm/Support/CommandLine.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/MathExtras.h"
+#include "llvm/Support/raw_ostream.h"
+#include <algorithm>
+using namespace llvm;
+
+/// LimitFloatPrecision - Generate low-precision inline sequences for
+/// some float libcalls (6, 8 or 12 bits).
+static unsigned LimitFloatPrecision;
+
+static cl::opt<unsigned, true>
+LimitFPPrecision("limit-float-precision",
+                 cl::desc("Generate low-precision inline sequences "
+                          "for some float libcalls"),
+                 cl::location(LimitFloatPrecision),
+                 cl::init(0));
+
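+// For example, a tool linking this file (such as llc) accepts, as a sketch:
+//
+//   llc -limit-float-precision=12 input.bc
+//
+// which permits visitExp, visitLog and friends to emit inline sequences
+// accurate to about 12 bits instead of calling into libm.
+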
+namespace {
+  /// RegsForValue - This struct represents the registers (physical or virtual)
+  /// that a particular set of values is assigned, and the type information about
+  /// the value. The most common situation is to represent one value at a time,
+  /// but struct or array values are handled element-wise as multiple values.
+  /// The splitting of aggregates is performed recursively, so that we never
+  /// have aggregate-typed registers. The values at this point do not necessarily
+  /// have legal types, so each value may require one or more registers of some
+  /// legal type.
+  ///
+  struct RegsForValue {
+    /// TLI - The TargetLowering object.
+    ///
+    const TargetLowering *TLI;
+
+    /// ValueVTs - The value types of the values, which may not be legal, and
+    /// may need to be promoted or synthesized from one or more registers.
+    ///
+    SmallVector<EVT, 4> ValueVTs;
+
+    /// RegVTs - The value types of the registers. This is the same size as
+    /// ValueVTs and it records, for each value, what the type of the assigned
+    /// register or registers are. (Individual values are never synthesized
+    /// from more than one type of register.)
+    ///
+    /// With virtual registers, the contents of RegVTs are redundant with
+    /// TLI's getRegisterType member function; however, with physical
+    /// registers it is necessary to have a separate record of the types.
+    ///
+    SmallVector<EVT, 4> RegVTs;
+
+    /// Regs - This list holds the registers assigned to the values.
+    /// Each legal or promoted value requires one register, and each
+    /// expanded value requires multiple registers.
+    ///
+    SmallVector<unsigned, 4> Regs;
+
+    RegsForValue() : TLI(0) {}
+
+    RegsForValue(const TargetLowering &tli,
+                 const SmallVector<unsigned, 4> &regs,
+                 EVT regvt, EVT valuevt)
+      : TLI(&tli),  ValueVTs(1, valuevt), RegVTs(1, regvt), Regs(regs) {}
+    RegsForValue(const TargetLowering &tli,
+                 const SmallVector<unsigned, 4> &regs,
+                 const SmallVector<EVT, 4> &regvts,
+                 const SmallVector<EVT, 4> &valuevts)
+      : TLI(&tli), ValueVTs(valuevts), RegVTs(regvts), Regs(regs) {}
+    RegsForValue(LLVMContext &Context, const TargetLowering &tli,
+                 unsigned Reg, const Type *Ty) : TLI(&tli) {
+      ComputeValueVTs(tli, Ty, ValueVTs);
+
+      for (unsigned Value = 0, e = ValueVTs.size(); Value != e; ++Value) {
+        EVT ValueVT = ValueVTs[Value];
+        unsigned NumRegs = TLI->getNumRegisters(Context, ValueVT);
+        EVT RegisterVT = TLI->getRegisterType(Context, ValueVT);
+        for (unsigned i = 0; i != NumRegs; ++i)
+          Regs.push_back(Reg + i);
+        RegVTs.push_back(RegisterVT);
+        Reg += NumRegs;
+      }
+    }
+
+    /// append - Add the specified values to this one.
+    void append(const RegsForValue &RHS) {
+      TLI = RHS.TLI;
+      ValueVTs.append(RHS.ValueVTs.begin(), RHS.ValueVTs.end());
+      RegVTs.append(RHS.RegVTs.begin(), RHS.RegVTs.end());
+      Regs.append(RHS.Regs.begin(), RHS.Regs.end());
+    }
+
+
+    /// getCopyFromRegs - Emit a series of CopyFromReg nodes that copies from
+    /// this value and returns the result as a ValueVTs value.  This uses
+    /// Chain/Flag as the input and updates them for the output Chain/Flag.
+    /// If the Flag pointer is NULL, no flag is used.
+    SDValue getCopyFromRegs(SelectionDAG &DAG, DebugLoc dl,
+                              SDValue &Chain, SDValue *Flag) const;
+
+    /// getCopyToRegs - Emit a series of CopyToReg nodes that copies the
+    /// specified value into the registers specified by this object.  This uses
+    /// Chain/Flag as the input and updates them for the output Chain/Flag.
+    /// If the Flag pointer is NULL, no flag is used.
+    void getCopyToRegs(SDValue Val, SelectionDAG &DAG, DebugLoc dl,
+                       SDValue &Chain, SDValue *Flag) const;
+
+    /// AddInlineAsmOperands - Add this value to the specified inlineasm node
+    /// operand list.  This adds the code marker, matching input operand index
+    /// (if applicable), and includes the number of values added into it.
+    void AddInlineAsmOperands(unsigned Code,
+                              bool HasMatching, unsigned MatchingIdx,
+                              SelectionDAG &DAG, std::vector<SDValue> &Ops) const;
+  };
+}
+
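+// For example (a sketch, assuming a 32-bit target where i64 is illegal):
+// for a value of type {i64, i32}, ComputeValueVTs yields
+// ValueVTs = {i64, i32}; i64 takes two i32 registers and i32 takes one, so
+// starting at virtual register R the constructor above produces
+// Regs = {R, R+1, R+2} and RegVTs = {i32, i32}.
+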
+/// getCopyFromParts - Create a value that contains the specified legal parts
+/// combined into the value they represent.  If the parts combine to a type
+/// larger than ValueVT, then AssertOp can be used to specify whether the extra
+/// bits are known to be zero (ISD::AssertZext) or sign extended from ValueVT
+/// (ISD::AssertSext).
+static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl,
+                                const SDValue *Parts,
+                                unsigned NumParts, EVT PartVT, EVT ValueVT,
+                                ISD::NodeType AssertOp = ISD::DELETED_NODE) {
+  assert(NumParts > 0 && "No parts to assemble!");
+  const TargetLowering &TLI = DAG.getTargetLoweringInfo();
+  SDValue Val = Parts[0];
+
+  if (NumParts > 1) {
+    // Assemble the value from multiple parts.
+    if (!ValueVT.isVector() && ValueVT.isInteger()) {
+      unsigned PartBits = PartVT.getSizeInBits();
+      unsigned ValueBits = ValueVT.getSizeInBits();
+
+      // Assemble the power of 2 part.
+      unsigned RoundParts = NumParts & (NumParts - 1) ?
+        1 << Log2_32(NumParts) : NumParts;
+      unsigned RoundBits = PartBits * RoundParts;
+      EVT RoundVT = RoundBits == ValueBits ?
+        ValueVT : EVT::getIntegerVT(*DAG.getContext(), RoundBits);
+      SDValue Lo, Hi;
+
+      EVT HalfVT = EVT::getIntegerVT(*DAG.getContext(), RoundBits/2);
+
+      if (RoundParts > 2) {
+        Lo = getCopyFromParts(DAG, dl, Parts, RoundParts/2, PartVT, HalfVT);
+        Hi = getCopyFromParts(DAG, dl, Parts+RoundParts/2, RoundParts/2,
+                              PartVT, HalfVT);
+      } else {
+        Lo = DAG.getNode(ISD::BIT_CONVERT, dl, HalfVT, Parts[0]);
+        Hi = DAG.getNode(ISD::BIT_CONVERT, dl, HalfVT, Parts[1]);
+      }
+      if (TLI.isBigEndian())
+        std::swap(Lo, Hi);
+      Val = DAG.getNode(ISD::BUILD_PAIR, dl, RoundVT, Lo, Hi);
+
+      if (RoundParts < NumParts) {
+        // Assemble the trailing non-power-of-2 part.
+        unsigned OddParts = NumParts - RoundParts;
+        EVT OddVT = EVT::getIntegerVT(*DAG.getContext(), OddParts * PartBits);
+        Hi = getCopyFromParts(DAG, dl,
+                              Parts+RoundParts, OddParts, PartVT, OddVT);
+
+        // Combine the round and odd parts.
+        Lo = Val;
+        if (TLI.isBigEndian())
+          std::swap(Lo, Hi);
+        EVT TotalVT = EVT::getIntegerVT(*DAG.getContext(), NumParts * PartBits);
+        Hi = DAG.getNode(ISD::ANY_EXTEND, dl, TotalVT, Hi);
+        Hi = DAG.getNode(ISD::SHL, dl, TotalVT, Hi,
+                         DAG.getConstant(Lo.getValueType().getSizeInBits(),
+                                         TLI.getPointerTy()));
+        Lo = DAG.getNode(ISD::ZERO_EXTEND, dl, TotalVT, Lo);
+        Val = DAG.getNode(ISD::OR, dl, TotalVT, Lo, Hi);
+      }
+    } else if (ValueVT.isVector()) {
+      // Handle a multi-element vector.
+      EVT IntermediateVT, RegisterVT;
+      unsigned NumIntermediates;
+      unsigned NumRegs =
+        TLI.getVectorTypeBreakdown(*DAG.getContext(), ValueVT, IntermediateVT, 
+                                   NumIntermediates, RegisterVT);
+      assert(NumRegs == NumParts && "Part count doesn't match vector breakdown!");
+      NumParts = NumRegs; // Silence a compiler warning.
+      assert(RegisterVT == PartVT && "Part type doesn't match vector breakdown!");
+      assert(RegisterVT == Parts[0].getValueType() &&
+             "Part type doesn't match part!");
+
+      // Assemble the parts into intermediate operands.
+      SmallVector<SDValue, 8> Ops(NumIntermediates);
+      if (NumIntermediates == NumParts) {
+        // If the register was not expanded, truncate or copy the value,
+        // as appropriate.
+        for (unsigned i = 0; i != NumParts; ++i)
+          Ops[i] = getCopyFromParts(DAG, dl, &Parts[i], 1,
+                                    PartVT, IntermediateVT);
+      } else if (NumParts > 0) {
+        // If the intermediate type was expanded, build the intermediate operands
+        // from the parts.
+        assert(NumParts % NumIntermediates == 0 &&
+               "Must expand into a divisible number of parts!");
+        unsigned Factor = NumParts / NumIntermediates;
+        for (unsigned i = 0; i != NumIntermediates; ++i)
+          Ops[i] = getCopyFromParts(DAG, dl, &Parts[i * Factor], Factor,
+                                    PartVT, IntermediateVT);
+      }
+
+      // Build a vector with BUILD_VECTOR or CONCAT_VECTORS from the intermediate
+      // operands.
+      Val = DAG.getNode(IntermediateVT.isVector() ?
+                        ISD::CONCAT_VECTORS : ISD::BUILD_VECTOR, dl,
+                        ValueVT, &Ops[0], NumIntermediates);
+    } else if (PartVT.isFloatingPoint()) {
+      // FP split into multiple FP parts (for ppcf128)
+      assert(ValueVT == EVT(MVT::ppcf128) && PartVT == EVT(MVT::f64) &&
+             "Unexpected split");
+      SDValue Lo, Hi;
+      Lo = DAG.getNode(ISD::BIT_CONVERT, dl, EVT(MVT::f64), Parts[0]);
+      Hi = DAG.getNode(ISD::BIT_CONVERT, dl, EVT(MVT::f64), Parts[1]);
+      if (TLI.isBigEndian())
+        std::swap(Lo, Hi);
+      Val = DAG.getNode(ISD::BUILD_PAIR, dl, ValueVT, Lo, Hi);
+    } else {
+      // FP split into integer parts (soft fp)
+      assert(ValueVT.isFloatingPoint() && PartVT.isInteger() &&
+             !PartVT.isVector() && "Unexpected split");
+      EVT IntVT = EVT::getIntegerVT(*DAG.getContext(), ValueVT.getSizeInBits());
+      Val = getCopyFromParts(DAG, dl, Parts, NumParts, PartVT, IntVT);
+    }
+  }
+
+  // There is now one part, held in Val.  Correct it to match ValueVT.
+  PartVT = Val.getValueType();
+
+  if (PartVT == ValueVT)
+    return Val;
+
+  if (PartVT.isVector()) {
+    assert(ValueVT.isVector() && "Unknown vector conversion!");
+    return DAG.getNode(ISD::BIT_CONVERT, dl, ValueVT, Val);
+  }
+
+  if (ValueVT.isVector()) {
+    assert(ValueVT.getVectorElementType() == PartVT &&
+           ValueVT.getVectorNumElements() == 1 &&
+           "Only trivial scalar-to-vector conversions should get here!");
+    return DAG.getNode(ISD::BUILD_VECTOR, dl, ValueVT, Val);
+  }
+
+  if (PartVT.isInteger() &&
+      ValueVT.isInteger()) {
+    if (ValueVT.bitsLT(PartVT)) {
+      // For a truncate, see if we have any information to
+      // indicate whether the truncated bits will always be
+      // zero or sign-extension.
+      if (AssertOp != ISD::DELETED_NODE)
+        Val = DAG.getNode(AssertOp, dl, PartVT, Val,
+                          DAG.getValueType(ValueVT));
+      return DAG.getNode(ISD::TRUNCATE, dl, ValueVT, Val);
+    } else {
+      return DAG.getNode(ISD::ANY_EXTEND, dl, ValueVT, Val);
+    }
+  }
+
+  if (PartVT.isFloatingPoint() && ValueVT.isFloatingPoint()) {
+    if (ValueVT.bitsLT(Val.getValueType()))
+      // FP_ROUND's are always exact here.
+      return DAG.getNode(ISD::FP_ROUND, dl, ValueVT, Val,
+                         DAG.getIntPtrConstant(1));
+    return DAG.getNode(ISD::FP_EXTEND, dl, ValueVT, Val);
+  }
+
+  if (PartVT.getSizeInBits() == ValueVT.getSizeInBits())
+    return DAG.getNode(ISD::BIT_CONVERT, dl, ValueVT, Val);
+
+  llvm_unreachable("Unknown mismatch!");
+  return SDValue();
+}
+
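+// For example (a sketch on the same hypothetical 32-bit target): assembling
+// an i64 from two legal i32 parts
+//
+//   SDValue Parts[] = { Lo, Hi };
+//   SDValue V = getCopyFromParts(DAG, dl, Parts, 2, MVT::i32, MVT::i64);
+//
+// emits BUILD_PAIR(Lo, Hi), with the operands swapped first on big-endian
+// targets.
+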
+/// getCopyToParts - Create a series of nodes that contain the specified value
+/// split into legal parts.  If the parts contain more bits than Val, then, for
+/// integers, ExtendKind can be used to specify how to generate the extra bits.
+static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, SDValue Val,
+                           SDValue *Parts, unsigned NumParts, EVT PartVT,
+                           ISD::NodeType ExtendKind = ISD::ANY_EXTEND) {
+  const TargetLowering &TLI = DAG.getTargetLoweringInfo();
+  EVT PtrVT = TLI.getPointerTy();
+  EVT ValueVT = Val.getValueType();
+  unsigned PartBits = PartVT.getSizeInBits();
+  unsigned OrigNumParts = NumParts;
+  assert(TLI.isTypeLegal(PartVT) && "Copying to an illegal type!");
+
+  if (!NumParts)
+    return;
+
+  if (!ValueVT.isVector()) {
+    if (PartVT == ValueVT) {
+      assert(NumParts == 1 && "No-op copy with multiple parts!");
+      Parts[0] = Val;
+      return;
+    }
+
+    if (NumParts * PartBits > ValueVT.getSizeInBits()) {
+      // If the parts cover more bits than the value has, promote the value.
+      if (PartVT.isFloatingPoint() && ValueVT.isFloatingPoint()) {
+        assert(NumParts == 1 && "Do not know what to promote to!");
+        Val = DAG.getNode(ISD::FP_EXTEND, dl, PartVT, Val);
+      } else if (PartVT.isInteger() && ValueVT.isInteger()) {
+        ValueVT = EVT::getIntegerVT(*DAG.getContext(), NumParts * PartBits);
+        Val = DAG.getNode(ExtendKind, dl, ValueVT, Val);
+      } else {
+        llvm_unreachable("Unknown mismatch!");
+      }
+    } else if (PartBits == ValueVT.getSizeInBits()) {
+      // Different types of the same size.
+      assert(NumParts == 1 && PartVT != ValueVT);
+      Val = DAG.getNode(ISD::BIT_CONVERT, dl, PartVT, Val);
+    } else if (NumParts * PartBits < ValueVT.getSizeInBits()) {
+      // If the parts cover fewer bits than the value has, truncate the value.
+      if (PartVT.isInteger() && ValueVT.isInteger()) {
+        ValueVT = EVT::getIntegerVT(*DAG.getContext(), NumParts * PartBits);
+        Val = DAG.getNode(ISD::TRUNCATE, dl, ValueVT, Val);
+      } else {
+        llvm_unreachable("Unknown mismatch!");
+      }
+    }
+
+    // The value may have changed - recompute ValueVT.
+    ValueVT = Val.getValueType();
+    assert(NumParts * PartBits == ValueVT.getSizeInBits() &&
+           "Failed to tile the value with PartVT!");
+
+    if (NumParts == 1) {
+      assert(PartVT == ValueVT && "Type conversion failed!");
+      Parts[0] = Val;
+      return;
+    }
+
+    // Expand the value into multiple parts.
+    if (NumParts & (NumParts - 1)) {
+      // The number of parts is not a power of 2.  Split off and copy the tail.
+      assert(PartVT.isInteger() && ValueVT.isInteger() &&
+             "Do not know what to expand to!");
+      unsigned RoundParts = 1 << Log2_32(NumParts);
+      unsigned RoundBits = RoundParts * PartBits;
+      unsigned OddParts = NumParts - RoundParts;
+      SDValue OddVal = DAG.getNode(ISD::SRL, dl, ValueVT, Val,
+                                   DAG.getConstant(RoundBits,
+                                                   TLI.getPointerTy()));
+      getCopyToParts(DAG, dl, OddVal, Parts + RoundParts, OddParts, PartVT);
+      if (TLI.isBigEndian())
+        // The odd parts were reversed by getCopyToParts - unreverse them.
+        std::reverse(Parts + RoundParts, Parts + NumParts);
+      NumParts = RoundParts;
+      ValueVT = EVT::getIntegerVT(*DAG.getContext(), NumParts * PartBits);
+      Val = DAG.getNode(ISD::TRUNCATE, dl, ValueVT, Val);
+    }
+
+    // The number of parts is a power of 2.  Repeatedly bisect the value using
+    // EXTRACT_ELEMENT.
+    Parts[0] = DAG.getNode(ISD::BIT_CONVERT, dl,
+                           EVT::getIntegerVT(*DAG.getContext(), ValueVT.getSizeInBits()),
+                           Val);
+    for (unsigned StepSize = NumParts; StepSize > 1; StepSize /= 2) {
+      for (unsigned i = 0; i < NumParts; i += StepSize) {
+        unsigned ThisBits = StepSize * PartBits / 2;
+        EVT ThisVT = EVT::getIntegerVT(*DAG.getContext(), ThisBits);
+        SDValue &Part0 = Parts[i];
+        SDValue &Part1 = Parts[i+StepSize/2];
+
+        Part1 = DAG.getNode(ISD::EXTRACT_ELEMENT, dl,
+                            ThisVT, Part0,
+                            DAG.getConstant(1, PtrVT));
+        Part0 = DAG.getNode(ISD::EXTRACT_ELEMENT, dl,
+                            ThisVT, Part0,
+                            DAG.getConstant(0, PtrVT));
+
+        if (ThisBits == PartBits && ThisVT != PartVT) {
+          Part0 = DAG.getNode(ISD::BIT_CONVERT, dl,
+                                                PartVT, Part0);
+          Part1 = DAG.getNode(ISD::BIT_CONVERT, dl,
+                                                PartVT, Part1);
+        }
+      }
+    }
+
+    if (TLI.isBigEndian())
+      std::reverse(Parts, Parts + OrigNumParts);
+
+    return;
+  }
+
+  // Vector ValueVT.
+  if (NumParts == 1) {
+    if (PartVT != ValueVT) {
+      if (PartVT.isVector()) {
+        Val = DAG.getNode(ISD::BIT_CONVERT, dl, PartVT, Val);
+      } else {
+        assert(ValueVT.getVectorElementType() == PartVT &&
+               ValueVT.getVectorNumElements() == 1 &&
+               "Only trivial vector-to-scalar conversions should get here!");
+        Val = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl,
+                          PartVT, Val,
+                          DAG.getConstant(0, PtrVT));
+      }
+    }
+
+    Parts[0] = Val;
+    return;
+  }
+
+  // Handle a multi-element vector.
+  EVT IntermediateVT, RegisterVT;
+  unsigned NumIntermediates;
+  unsigned NumRegs = TLI.getVectorTypeBreakdown(*DAG.getContext(), ValueVT,
+                              IntermediateVT, NumIntermediates, RegisterVT);
+  unsigned NumElements = ValueVT.getVectorNumElements();
+
+  assert(NumRegs == NumParts && "Part count doesn't match vector breakdown!");
+  NumParts = NumRegs; // Silence a compiler warning.
+  assert(RegisterVT == PartVT && "Part type doesn't match vector breakdown!");
+
+  // Split the vector into intermediate operands.
+  SmallVector<SDValue, 8> Ops(NumIntermediates);
+  for (unsigned i = 0; i != NumIntermediates; ++i)
+    if (IntermediateVT.isVector())
+      Ops[i] = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl,
+                           IntermediateVT, Val,
+                           DAG.getConstant(i * (NumElements / NumIntermediates),
+                                           PtrVT));
+    else
+      Ops[i] = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl,
+                           IntermediateVT, Val,
+                           DAG.getConstant(i, PtrVT));
+
+  // Split the intermediate operands into legal parts.
+  if (NumParts == NumIntermediates) {
+    // If the register was not expanded, promote or copy the value,
+    // as appropriate.
+    for (unsigned i = 0; i != NumParts; ++i)
+      getCopyToParts(DAG, dl, Ops[i], &Parts[i], 1, PartVT);
+  } else if (NumParts > 0) {
+    // If the intermediate type was expanded, split each value into
+    // legal parts.
+    assert(NumParts % NumIntermediates == 0 &&
+           "Must expand into a divisible number of parts!");
+    unsigned Factor = NumParts / NumIntermediates;
+    for (unsigned i = 0; i != NumIntermediates; ++i)
+      getCopyToParts(DAG, dl, Ops[i], &Parts[i * Factor], Factor, PartVT);
+  }
+}
+
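+// And the inverse sketch: splitting an i64 value V into two legal i32
+// parts
+//
+//   SDValue Parts[2];
+//   getCopyToParts(DAG, dl, V, Parts, 2, MVT::i32);
+//
+// repeatedly bisects V with EXTRACT_ELEMENT, leaving the low half in
+// Parts[0] and the high half in Parts[1] (reversed on big-endian targets).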
+
+void SelectionDAGBuilder::init(GCFunctionInfo *gfi, AliasAnalysis &aa) {
+  AA = &aa;
+  GFI = gfi;
+  TD = DAG.getTarget().getTargetData();
+}
+
+/// clear - Clear out the current SelectionDAG and the associated
+/// state and prepare this SelectionDAGBuilder object to be used
+/// for a new block. This doesn't clear out information about
+/// additional blocks that are needed to complete switch lowering
+/// or PHI node updating; that information is cleared out as it is
+/// consumed.
+void SelectionDAGBuilder::clear() {
+  NodeMap.clear();
+  PendingLoads.clear();
+  PendingExports.clear();
+  EdgeMapping.clear();
+  DAG.clear();
+  CurDebugLoc = DebugLoc::getUnknownLoc();
+  HasTailCall = false;
+}
+
+/// getRoot - Return the current virtual root of the Selection DAG,
+/// flushing any PendingLoad items. This must be done before emitting
+/// a store or any other node that may need to be ordered after any
+/// prior load instructions.
+///
+SDValue SelectionDAGBuilder::getRoot() {
+  if (PendingLoads.empty())
+    return DAG.getRoot();
+
+  if (PendingLoads.size() == 1) {
+    SDValue Root = PendingLoads[0];
+    DAG.setRoot(Root);
+    PendingLoads.clear();
+    return Root;
+  }
+
+  // Otherwise, we have to make a token factor node.
+  SDValue Root = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(), MVT::Other,
+                               &PendingLoads[0], PendingLoads.size());
+  PendingLoads.clear();
+  DAG.setRoot(Root);
+  return Root;
+}
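+
+// E.g. with two pending loads L1 and L2, getRoot produces
+// TokenFactor(L1, L2); a store emitted against this root is then ordered
+// after both loads without serializing the two loads against each other.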
+
+/// getControlRoot - Similar to getRoot, but instead of flushing all the
+/// PendingLoad items, flush all the PendingExports items. It is necessary
+/// to do this before emitting a terminator instruction.
+///
+SDValue SelectionDAGBuilder::getControlRoot() {
+  SDValue Root = DAG.getRoot();
+
+  if (PendingExports.empty())
+    return Root;
+
+  // Turn all of the CopyToReg chains into one factored node.
+  if (Root.getOpcode() != ISD::EntryToken) {
+    unsigned i = 0, e = PendingExports.size();
+    for (; i != e; ++i) {
+      assert(PendingExports[i].getNode()->getNumOperands() > 1);
+      if (PendingExports[i].getNode()->getOperand(0) == Root)
+        break;  // Don't add the root if we already indirectly depend on it.
+    }
+
+    if (i == e)
+      PendingExports.push_back(Root);
+  }
+
+  Root = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(), MVT::Other,
+                     &PendingExports[0],
+                     PendingExports.size());
+  PendingExports.clear();
+  DAG.setRoot(Root);
+  return Root;
+}
+
+void SelectionDAGBuilder::visit(Instruction &I) {
+  visit(I.getOpcode(), I);
+}
+
+void SelectionDAGBuilder::visit(unsigned Opcode, User &I) {
+  // Note: this doesn't use InstVisitor, because it has to work with
+  // ConstantExpr's in addition to instructions.
+  switch (Opcode) {
+  default: llvm_unreachable("Unknown instruction type encountered!");
+    // Build the switch statement using the Instruction.def file.
+#define HANDLE_INST(NUM, OPCODE, CLASS) \
+  case Instruction::OPCODE:return visit##OPCODE((CLASS&)I);
+#include "llvm/Instruction.def"
+  }
+}
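+
+// For illustration, the HANDLE_INST expansion above produces cases such as:
+//   case Instruction::Add:  return visitAdd((BinaryOperator&)I);
+//   case Instruction::Load: return visitLoad((LoadInst&)I);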
+
+SDValue SelectionDAGBuilder::getValue(const Value *V) {
+  SDValue &N = NodeMap[V];
+  if (N.getNode()) return N;
+
+  if (Constant *C = const_cast<Constant*>(dyn_cast<Constant>(V))) {
+    EVT VT = TLI.getValueType(V->getType(), true);
+
+    if (ConstantInt *CI = dyn_cast<ConstantInt>(C))
+      return N = DAG.getConstant(*CI, VT);
+
+    if (GlobalValue *GV = dyn_cast<GlobalValue>(C))
+      return N = DAG.getGlobalAddress(GV, VT);
+
+    if (isa<ConstantPointerNull>(C))
+      return N = DAG.getConstant(0, TLI.getPointerTy());
+
+    if (ConstantFP *CFP = dyn_cast<ConstantFP>(C))
+      return N = DAG.getConstantFP(*CFP, VT);
+
+    if (isa<UndefValue>(C) && !V->getType()->isAggregateType())
+      return N = DAG.getUNDEF(VT);
+
+    if (ConstantExpr *CE = dyn_cast<ConstantExpr>(C)) {
+      visit(CE->getOpcode(), *CE);
+      SDValue N1 = NodeMap[V];
+      assert(N1.getNode() && "visit didn't populate the ValueMap!");
+      return N1;
+    }
+
+    if (isa<ConstantStruct>(C) || isa<ConstantArray>(C)) {
+      SmallVector<SDValue, 4> Constants;
+      for (User::const_op_iterator OI = C->op_begin(), OE = C->op_end();
+           OI != OE; ++OI) {
+        SDNode *Val = getValue(*OI).getNode();
+        // If the operand is an empty aggregate, there are no values.
+        if (!Val) continue;
+        // Add each leaf value from the operand to the Constants list
+        // to form a flattened list of all the values.
+        for (unsigned i = 0, e = Val->getNumValues(); i != e; ++i)
+          Constants.push_back(SDValue(Val, i));
+      }
+      return DAG.getMergeValues(&Constants[0], Constants.size(),
+                                getCurDebugLoc());
+    }
+
+    if (isa<StructType>(C->getType()) || isa<ArrayType>(C->getType())) {
+      assert((isa<ConstantAggregateZero>(C) || isa<UndefValue>(C)) &&
+             "Unknown struct or array constant!");
+
+      SmallVector<EVT, 4> ValueVTs;
+      ComputeValueVTs(TLI, C->getType(), ValueVTs);
+      unsigned NumElts = ValueVTs.size();
+      if (NumElts == 0)
+        return SDValue(); // empty struct
+      SmallVector<SDValue, 4> Constants(NumElts);
+      for (unsigned i = 0; i != NumElts; ++i) {
+        EVT EltVT = ValueVTs[i];
+        if (isa<UndefValue>(C))
+          Constants[i] = DAG.getUNDEF(EltVT);
+        else if (EltVT.isFloatingPoint())
+          Constants[i] = DAG.getConstantFP(0, EltVT);
+        else
+          Constants[i] = DAG.getConstant(0, EltVT);
+      }
+      return DAG.getMergeValues(&Constants[0], NumElts, getCurDebugLoc());
+    }
+
+    if (BlockAddress *BA = dyn_cast<BlockAddress>(C))
+      return DAG.getBlockAddress(BA, VT);
+
+    const VectorType *VecTy = cast<VectorType>(V->getType());
+    unsigned NumElements = VecTy->getNumElements();
+
+    // Now that we know the number and type of the elements, get that number of
+    // elements into the Ops array based on what kind of constant it is.
+    SmallVector<SDValue, 16> Ops;
+    if (ConstantVector *CP = dyn_cast<ConstantVector>(C)) {
+      for (unsigned i = 0; i != NumElements; ++i)
+        Ops.push_back(getValue(CP->getOperand(i)));
+    } else {
+      assert(isa<ConstantAggregateZero>(C) && "Unknown vector constant!");
+      EVT EltVT = TLI.getValueType(VecTy->getElementType());
+
+      SDValue Op;
+      if (EltVT.isFloatingPoint())
+        Op = DAG.getConstantFP(0, EltVT);
+      else
+        Op = DAG.getConstant(0, EltVT);
+      Ops.assign(NumElements, Op);
+    }
+
+    // Create a BUILD_VECTOR node.
+    return NodeMap[V] = DAG.getNode(ISD::BUILD_VECTOR, getCurDebugLoc(),
+                                    VT, &Ops[0], Ops.size());
+  }
+
+  // If this is a static alloca, generate it as the frame index instead of
+  // recomputing the address.
+  if (const AllocaInst *AI = dyn_cast<AllocaInst>(V)) {
+    DenseMap<const AllocaInst*, int>::iterator SI =
+      FuncInfo.StaticAllocaMap.find(AI);
+    if (SI != FuncInfo.StaticAllocaMap.end())
+      return DAG.getFrameIndex(SI->second, TLI.getPointerTy());
+  }
+
+  unsigned InReg = FuncInfo.ValueMap[V];
+  assert(InReg && "Value not in map!");
+
+  RegsForValue RFV(*DAG.getContext(), TLI, InReg, V->getType());
+  SDValue Chain = DAG.getEntryNode();
+  return RFV.getCopyFromRegs(DAG, getCurDebugLoc(), Chain, NULL);
+}
+
+/// Get the EVTs and ArgFlags collections that represent the return type
+/// of the given function.  This does not require a DAG or a return value, and
+/// is suitable for use before any DAGs for the function are constructed.
+static void getReturnInfo(const Type* ReturnType,
+                   Attributes attr, SmallVectorImpl<EVT> &OutVTs,
+                   SmallVectorImpl<ISD::ArgFlagsTy> &OutFlags,
+                   TargetLowering &TLI,
+                   SmallVectorImpl<uint64_t> *Offsets = 0) {
+  SmallVector<EVT, 4> ValueVTs;
+  ComputeValueVTs(TLI, ReturnType, ValueVTs, Offsets);
+  unsigned NumValues = ValueVTs.size();
+  if (NumValues == 0) return;
+
+  for (unsigned j = 0, f = NumValues; j != f; ++j) {
+    EVT VT = ValueVTs[j];
+    ISD::NodeType ExtendKind = ISD::ANY_EXTEND;
+
+    if (attr & Attribute::SExt)
+      ExtendKind = ISD::SIGN_EXTEND;
+    else if (attr & Attribute::ZExt)
+      ExtendKind = ISD::ZERO_EXTEND;
+
+    // FIXME: C calling convention requires the return type to be promoted to
+    // at least 32-bit. But this is not necessary for non-C calling
+    // conventions. The frontend should mark functions whose return values
+    // require promoting with signext or zeroext attributes.
+    if (ExtendKind != ISD::ANY_EXTEND && VT.isInteger()) {
+      EVT MinVT = TLI.getRegisterType(ReturnType->getContext(), MVT::i32);
+      if (VT.bitsLT(MinVT))
+        VT = MinVT;
+    }
+
+    unsigned NumParts = TLI.getNumRegisters(ReturnType->getContext(), VT);
+    EVT PartVT = TLI.getRegisterType(ReturnType->getContext(), VT);
+    // 'inreg' on function refers to return value
+    ISD::ArgFlagsTy Flags = ISD::ArgFlagsTy();
+    if (attr & Attribute::InReg)
+      Flags.setInReg();
+
+    // Propagate extension type if any
+    if (attr & Attribute::SExt)
+      Flags.setSExt();
+    else if (attr & Attribute::ZExt)
+      Flags.setZExt();
+
+    for (unsigned i = 0; i < NumParts; ++i) {
+      OutVTs.push_back(PartVT);
+      OutFlags.push_back(Flags);
+    }
+  }
+}
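+
+// E.g. (illustrative) an "i1 zeroext" return on a target with 32-bit
+// registers: the ZExt attribute selects ZERO_EXTEND, the i1 is widened to
+// the i32 MinVT, and a single i32 part is recorded with the ZExt flag set.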
+
+void SelectionDAGBuilder::visitRet(ReturnInst &I) {
+  SDValue Chain = getControlRoot();
+  SmallVector<ISD::OutputArg, 8> Outs;
+  FunctionLoweringInfo &FLI = DAG.getFunctionLoweringInfo();
+  
+  if (!FLI.CanLowerReturn) {
+    unsigned DemoteReg = FLI.DemoteRegister;
+    const Function *F = I.getParent()->getParent();
+
+    // Emit a store of the return value through the virtual register.
+    // Leave Outs empty so that LowerReturn won't try to load return
+    // registers the usual way.
+    SmallVector<EVT, 1> PtrValueVTs;
+    ComputeValueVTs(TLI, PointerType::getUnqual(F->getReturnType()), 
+                    PtrValueVTs);
+
+    SDValue RetPtr = DAG.getRegister(DemoteReg, PtrValueVTs[0]);
+    SDValue RetOp = getValue(I.getOperand(0));
+  
+    SmallVector<EVT, 4> ValueVTs;
+    SmallVector<uint64_t, 4> Offsets;
+    ComputeValueVTs(TLI, I.getOperand(0)->getType(), ValueVTs, &Offsets);
+    unsigned NumValues = ValueVTs.size();
+
+    SmallVector<SDValue, 4> Chains(NumValues);
+    EVT PtrVT = PtrValueVTs[0];
+    for (unsigned i = 0; i != NumValues; ++i)
+      Chains[i] = DAG.getStore(Chain, getCurDebugLoc(),
+                  SDValue(RetOp.getNode(), RetOp.getResNo() + i),
+                  DAG.getNode(ISD::ADD, getCurDebugLoc(), PtrVT, RetPtr,
+                  DAG.getConstant(Offsets[i], PtrVT)),
+                  NULL, Offsets[i], false, 0);
+    Chain = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(),
+                        MVT::Other, &Chains[0], NumValues);
+  } else {
+    for (unsigned i = 0, e = I.getNumOperands(); i != e; ++i) {
+      SmallVector<EVT, 4> ValueVTs;
+      ComputeValueVTs(TLI, I.getOperand(i)->getType(), ValueVTs);
+      unsigned NumValues = ValueVTs.size();
+      if (NumValues == 0) continue;
+  
+      SDValue RetOp = getValue(I.getOperand(i));
+      for (unsigned j = 0, f = NumValues; j != f; ++j) {
+        EVT VT = ValueVTs[j];
+
+        ISD::NodeType ExtendKind = ISD::ANY_EXTEND;
+
+        const Function *F = I.getParent()->getParent();
+        if (F->paramHasAttr(0, Attribute::SExt))
+          ExtendKind = ISD::SIGN_EXTEND;
+        else if (F->paramHasAttr(0, Attribute::ZExt))
+          ExtendKind = ISD::ZERO_EXTEND;
+
+        // FIXME: C calling convention requires the return type to be
+        // promoted to at least 32-bit. But this is not necessary for non-C
+        // calling conventions. The frontend should mark functions whose
+        // return values require promoting with signext or zeroext
+        // attributes.
+        if (ExtendKind != ISD::ANY_EXTEND && VT.isInteger()) {
+          EVT MinVT = TLI.getRegisterType(*DAG.getContext(), MVT::i32);
+          if (VT.bitsLT(MinVT))
+            VT = MinVT;
+        }
+
+        unsigned NumParts = TLI.getNumRegisters(*DAG.getContext(), VT);
+        EVT PartVT = TLI.getRegisterType(*DAG.getContext(), VT);
+        SmallVector<SDValue, 4> Parts(NumParts);
+        getCopyToParts(DAG, getCurDebugLoc(),
+                       SDValue(RetOp.getNode(), RetOp.getResNo() + j),
+                       &Parts[0], NumParts, PartVT, ExtendKind);
+
+        // 'inreg' on function refers to return value
+        ISD::ArgFlagsTy Flags = ISD::ArgFlagsTy();
+        if (F->paramHasAttr(0, Attribute::InReg))
+          Flags.setInReg();
+
+        // Propagate extension type if any
+        if (F->paramHasAttr(0, Attribute::SExt))
+          Flags.setSExt();
+        else if (F->paramHasAttr(0, Attribute::ZExt))
+          Flags.setZExt();
+
+        for (unsigned i = 0; i < NumParts; ++i)
+          Outs.push_back(ISD::OutputArg(Flags, Parts[i], /*isfixed=*/true));
+      }
+    }
+  }
+
+  bool isVarArg = DAG.getMachineFunction().getFunction()->isVarArg();
+  CallingConv::ID CallConv =
+    DAG.getMachineFunction().getFunction()->getCallingConv();
+  Chain = TLI.LowerReturn(Chain, CallConv, isVarArg,
+                          Outs, getCurDebugLoc(), DAG);
+
+  // Verify that the target's LowerReturn behaved as expected.
+  assert(Chain.getNode() && Chain.getValueType() == MVT::Other &&
+         "LowerReturn didn't return a valid chain!");
+
+  // Update the DAG with the new chain value resulting from return lowering.
+  DAG.setRoot(Chain);
+}
+
+/// CopyToExportRegsIfNeeded - If the given value has virtual registers
+/// created for it, emit nodes to copy the value into the virtual
+/// registers.
+void SelectionDAGBuilder::CopyToExportRegsIfNeeded(Value *V) {
+  if (!V->use_empty()) {
+    DenseMap<const Value *, unsigned>::iterator VMI = FuncInfo.ValueMap.find(V);
+    if (VMI != FuncInfo.ValueMap.end())
+      CopyValueToVirtualRegister(V, VMI->second);
+  }
+}
+
+/// ExportFromCurrentBlock - If this condition isn't known to be exported from
+/// the current basic block, add it to ValueMap now so that we'll get a
+/// CopyTo/FromReg.
+void SelectionDAGBuilder::ExportFromCurrentBlock(Value *V) {
+  // No need to export constants.
+  if (!isa<Instruction>(V) && !isa<Argument>(V)) return;
+
+  // Already exported?
+  if (FuncInfo.isExportedInst(V)) return;
+
+  unsigned Reg = FuncInfo.InitializeRegForValue(V);
+  CopyValueToVirtualRegister(V, Reg);
+}
+
+bool SelectionDAGBuilder::isExportableFromCurrentBlock(Value *V,
+                                                     const BasicBlock *FromBB) {
+  // The operands of the setcc have to be in this block.  We don't know
+  // how to export them from some other block.
+  if (Instruction *VI = dyn_cast<Instruction>(V)) {
+    // Can export from current BB.
+    if (VI->getParent() == FromBB)
+      return true;
+
+    // If it is already exported, this is a no-op.
+    return FuncInfo.isExportedInst(V);
+  }
+
+  // If this is an argument, we can export it if the BB is the entry block or
+  // if it is already exported.
+  if (isa<Argument>(V)) {
+    if (FromBB == &FromBB->getParent()->getEntryBlock())
+      return true;
+
+    // Otherwise, can only export this if it is already exported.
+    return FuncInfo.isExportedInst(V);
+  }
+
+  // Otherwise, constants can always be exported.
+  return true;
+}
+
+static bool InBlock(const Value *V, const BasicBlock *BB) {
+  if (const Instruction *I = dyn_cast<Instruction>(V))
+    return I->getParent() == BB;
+  return true;
+}
+
+/// getFCmpCondCode - Return the ISD condition code corresponding to
+/// the given LLVM IR floating-point condition code.  This includes
+/// consideration of global floating-point math flags.
+///
+static ISD::CondCode getFCmpCondCode(FCmpInst::Predicate Pred) {
+  ISD::CondCode FPC, FOC;
+  switch (Pred) {
+  case FCmpInst::FCMP_FALSE: FOC = FPC = ISD::SETFALSE; break;
+  case FCmpInst::FCMP_OEQ:   FOC = ISD::SETEQ; FPC = ISD::SETOEQ; break;
+  case FCmpInst::FCMP_OGT:   FOC = ISD::SETGT; FPC = ISD::SETOGT; break;
+  case FCmpInst::FCMP_OGE:   FOC = ISD::SETGE; FPC = ISD::SETOGE; break;
+  case FCmpInst::FCMP_OLT:   FOC = ISD::SETLT; FPC = ISD::SETOLT; break;
+  case FCmpInst::FCMP_OLE:   FOC = ISD::SETLE; FPC = ISD::SETOLE; break;
+  case FCmpInst::FCMP_ONE:   FOC = ISD::SETNE; FPC = ISD::SETONE; break;
+  case FCmpInst::FCMP_ORD:   FOC = FPC = ISD::SETO;   break;
+  case FCmpInst::FCMP_UNO:   FOC = FPC = ISD::SETUO;  break;
+  case FCmpInst::FCMP_UEQ:   FOC = ISD::SETEQ; FPC = ISD::SETUEQ; break;
+  case FCmpInst::FCMP_UGT:   FOC = ISD::SETGT; FPC = ISD::SETUGT; break;
+  case FCmpInst::FCMP_UGE:   FOC = ISD::SETGE; FPC = ISD::SETUGE; break;
+  case FCmpInst::FCMP_ULT:   FOC = ISD::SETLT; FPC = ISD::SETULT; break;
+  case FCmpInst::FCMP_ULE:   FOC = ISD::SETLE; FPC = ISD::SETULE; break;
+  case FCmpInst::FCMP_UNE:   FOC = ISD::SETNE; FPC = ISD::SETUNE; break;
+  case FCmpInst::FCMP_TRUE:  FOC = FPC = ISD::SETTRUE; break;
+  default:
+    llvm_unreachable("Invalid FCmp predicate opcode!");
+    FOC = FPC = ISD::SETFALSE;
+    break;
+  }
+  if (FiniteOnlyFPMath())
+    return FOC;
+  else
+    return FPC;
+}
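+
+// E.g. FCMP_OLT normally lowers to the ordered SETOLT; under finite-only
+// FP math, where NaNs are assumed not to occur, the plain SETLT (the FOC
+// code) is returned instead.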
+
+/// getICmpCondCode - Return the ISD condition code corresponding to
+/// the given LLVM IR integer condition code.
+///
+static ISD::CondCode getICmpCondCode(ICmpInst::Predicate Pred) {
+  switch (Pred) {
+  case ICmpInst::ICMP_EQ:  return ISD::SETEQ;
+  case ICmpInst::ICMP_NE:  return ISD::SETNE;
+  case ICmpInst::ICMP_SLE: return ISD::SETLE;
+  case ICmpInst::ICMP_ULE: return ISD::SETULE;
+  case ICmpInst::ICMP_SGE: return ISD::SETGE;
+  case ICmpInst::ICMP_UGE: return ISD::SETUGE;
+  case ICmpInst::ICMP_SLT: return ISD::SETLT;
+  case ICmpInst::ICMP_ULT: return ISD::SETULT;
+  case ICmpInst::ICMP_SGT: return ISD::SETGT;
+  case ICmpInst::ICMP_UGT: return ISD::SETUGT;
+  default:
+    llvm_unreachable("Invalid ICmp predicate opcode!");
+    return ISD::SETNE;
+  }
+}
+
+/// EmitBranchForMergedCondition - Helper method for FindMergedConditions.
+/// This function emits a branch and is used at the leaves of an OR or an
+/// AND operator tree.
+///
+void
+SelectionDAGBuilder::EmitBranchForMergedCondition(Value *Cond,
+                                                  MachineBasicBlock *TBB,
+                                                  MachineBasicBlock *FBB,
+                                                  MachineBasicBlock *CurBB) {
+  const BasicBlock *BB = CurBB->getBasicBlock();
+
+  // If the leaf of the tree is a comparison, merge the condition into
+  // the caseblock.
+  if (CmpInst *BOp = dyn_cast<CmpInst>(Cond)) {
+    // The operands of the cmp have to be in this block.  We don't know
+    // how to export them from some other block.  If this is the first block
+    // of the sequence, no exporting is needed.
+    if (CurBB == CurMBB ||
+        (isExportableFromCurrentBlock(BOp->getOperand(0), BB) &&
+         isExportableFromCurrentBlock(BOp->getOperand(1), BB))) {
+      ISD::CondCode Condition;
+      if (ICmpInst *IC = dyn_cast<ICmpInst>(Cond)) {
+        Condition = getICmpCondCode(IC->getPredicate());
+      } else if (FCmpInst *FC = dyn_cast<FCmpInst>(Cond)) {
+        Condition = getFCmpCondCode(FC->getPredicate());
+      } else {
+        Condition = ISD::SETEQ; // silence warning.
+        llvm_unreachable("Unknown compare instruction");
+      }
+
+      CaseBlock CB(Condition, BOp->getOperand(0),
+                   BOp->getOperand(1), NULL, TBB, FBB, CurBB);
+      SwitchCases.push_back(CB);
+      return;
+    }
+  }
+
+  // Create a CaseBlock record representing this branch.
+  CaseBlock CB(ISD::SETEQ, Cond, ConstantInt::getTrue(*DAG.getContext()),
+               NULL, TBB, FBB, CurBB);
+  SwitchCases.push_back(CB);
+}
+
+/// FindMergedConditions - If Cond is an expression like (X && Y) or
+/// (X || Y), recursively lower the left and right operands as a chain of
+/// conditional branches; otherwise emit Cond as a single branch via
+/// EmitBranchForMergedCondition.
+void SelectionDAGBuilder::FindMergedConditions(Value *Cond,
+                                               MachineBasicBlock *TBB,
+                                               MachineBasicBlock *FBB,
+                                               MachineBasicBlock *CurBB,
+                                               unsigned Opc) {
+  // If this node is not part of the or/and tree, emit it as a branch.
+  Instruction *BOp = dyn_cast<Instruction>(Cond);
+  if (!BOp || !(isa<BinaryOperator>(BOp) || isa<CmpInst>(BOp)) ||
+      (unsigned)BOp->getOpcode() != Opc || !BOp->hasOneUse() ||
+      BOp->getParent() != CurBB->getBasicBlock() ||
+      !InBlock(BOp->getOperand(0), CurBB->getBasicBlock()) ||
+      !InBlock(BOp->getOperand(1), CurBB->getBasicBlock())) {
+    EmitBranchForMergedCondition(Cond, TBB, FBB, CurBB);
+    return;
+  }
+
+  //  Create TmpBB after CurBB.
+  MachineFunction::iterator BBI = CurBB;
+  MachineFunction &MF = DAG.getMachineFunction();
+  MachineBasicBlock *TmpBB = MF.CreateMachineBasicBlock(CurBB->getBasicBlock());
+  CurBB->getParent()->insert(++BBI, TmpBB);
+
+  if (Opc == Instruction::Or) {
+    // Codegen X | Y as:
+    //   jmp_if_X TBB
+    //   jmp TmpBB
+    // TmpBB:
+    //   jmp_if_Y TBB
+    //   jmp FBB
+    //
+
+    // Emit the LHS condition.
+    FindMergedConditions(BOp->getOperand(0), TBB, TmpBB, CurBB, Opc);
+
+    // Emit the RHS condition into TmpBB.
+    FindMergedConditions(BOp->getOperand(1), TBB, FBB, TmpBB, Opc);
+  } else {
+    assert(Opc == Instruction::And && "Unknown merge op!");
+    // Codegen X & Y as:
+    //   jmp_if_X TmpBB
+    //   jmp FBB
+    // TmpBB:
+    //   jmp_if_Y TBB
+    //   jmp FBB
+    //
+    //  This requires creation of TmpBB after CurBB.
+
+    // Emit the LHS condition.
+    FindMergedConditions(BOp->getOperand(0), TmpBB, FBB, CurBB, Opc);
+
+    // Emit the RHS condition into TmpBB.
+    FindMergedConditions(BOp->getOperand(1), TBB, FBB, TmpBB, Opc);
+  }
+}
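+
+// Illustrative trace: lowering "br (and (and A, B), C), TBB, FBB" recurses
+// twice, creating two temporary blocks, and emits:
+//   jmp_if_A TmpBB1 ; jmp FBB
+//   TmpBB1: jmp_if_B TmpBB2 ; jmp FBB
+//   TmpBB2: jmp_if_C TBB ; jmp FBB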
+
+/// If the set of cases should be emitted as a series of branches, return true.
+/// If we should emit this as a bunch of and/or'd together conditions, return
+/// false.
+bool
+SelectionDAGBuilder::ShouldEmitAsBranches(const std::vector<CaseBlock> &Cases){
+  if (Cases.size() != 2) return true;
+
+  // If this is two comparisons of the same values or'd or and'd together, they
+  // will get folded into a single comparison, so don't emit two blocks.
+  if ((Cases[0].CmpLHS == Cases[1].CmpLHS &&
+       Cases[0].CmpRHS == Cases[1].CmpRHS) ||
+      (Cases[0].CmpRHS == Cases[1].CmpLHS &&
+       Cases[0].CmpLHS == Cases[1].CmpRHS)) {
+    return false;
+  }
+
+  return true;
+}
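+
+// E.g. "if (A < B || A > B)" yields two CaseBlocks over the same operand
+// pair (A, B), so this returns false and the whole or'd condition is
+// lowered at once, where it can fold into a single A != B compare.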
+
+void SelectionDAGBuilder::visitBr(BranchInst &I) {
+  // Update machine-CFG edges.
+  MachineBasicBlock *Succ0MBB = FuncInfo.MBBMap[I.getSuccessor(0)];
+
+  // Figure out which block is immediately after the current one.
+  MachineBasicBlock *NextBlock = 0;
+  MachineFunction::iterator BBI = CurMBB;
+  if (++BBI != FuncInfo.MF->end())
+    NextBlock = BBI;
+
+  if (I.isUnconditional()) {
+    // Update machine-CFG edges.
+    CurMBB->addSuccessor(Succ0MBB);
+
+    // If this is not a fall-through branch, emit the branch.
+    if (Succ0MBB != NextBlock)
+      DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(),
+                              MVT::Other, getControlRoot(),
+                              DAG.getBasicBlock(Succ0MBB)));
+    return;
+  }
+
+  // If this condition is one of the special cases we handle, do special stuff
+  // now.
+  Value *CondVal = I.getCondition();
+  MachineBasicBlock *Succ1MBB = FuncInfo.MBBMap[I.getSuccessor(1)];
+
+  // If this is a series of conditions that are or'd or and'd together, emit
+  // this as a sequence of branches instead of setcc's with and/or operations.
+  // For example, instead of something like:
+  //     cmp A, B
+  //     C = seteq
+  //     cmp D, E
+  //     F = setle
+  //     or C, F
+  //     jnz foo
+  // Emit:
+  //     cmp A, B
+  //     je foo
+  //     cmp D, E
+  //     jle foo
+  //
+  if (BinaryOperator *BOp = dyn_cast<BinaryOperator>(CondVal)) {
+    if (BOp->hasOneUse() &&
+        (BOp->getOpcode() == Instruction::And ||
+         BOp->getOpcode() == Instruction::Or)) {
+      FindMergedConditions(BOp, Succ0MBB, Succ1MBB, CurMBB, BOp->getOpcode());
+      // If the compares in later blocks need to use values not currently
+      // exported from this block, export them now.  This block should always
+      // be the first entry.
+      assert(SwitchCases[0].ThisBB == CurMBB && "Unexpected lowering!");
+
+      // Allow some cases to be rejected.
+      if (ShouldEmitAsBranches(SwitchCases)) {
+        for (unsigned i = 1, e = SwitchCases.size(); i != e; ++i) {
+          ExportFromCurrentBlock(SwitchCases[i].CmpLHS);
+          ExportFromCurrentBlock(SwitchCases[i].CmpRHS);
+        }
+
+        // Emit the branch for this block.
+        visitSwitchCase(SwitchCases[0]);
+        SwitchCases.erase(SwitchCases.begin());
+        return;
+      }
+
+      // Okay, we decided not to do this. Remove any inserted MBBs and
+      // clear SwitchCases.
+      for (unsigned i = 1, e = SwitchCases.size(); i != e; ++i)
+        FuncInfo.MF->erase(SwitchCases[i].ThisBB);
+
+      SwitchCases.clear();
+    }
+  }
+
+  // Create a CaseBlock record representing this branch.
+  CaseBlock CB(ISD::SETEQ, CondVal, ConstantInt::getTrue(*DAG.getContext()),
+               NULL, Succ0MBB, Succ1MBB, CurMBB);
+  // Use visitSwitchCase to actually insert the fast branch sequence for this
+  // cond branch.
+  visitSwitchCase(CB);
+}
+
+/// visitSwitchCase - Emits the necessary code to represent a single node in
+/// the binary search tree resulting from lowering a switch instruction.
+void SelectionDAGBuilder::visitSwitchCase(CaseBlock &CB) {
+  SDValue Cond;
+  SDValue CondLHS = getValue(CB.CmpLHS);
+  DebugLoc dl = getCurDebugLoc();
+
+  // Build the setcc now.
+  if (CB.CmpMHS == NULL) {
+    // Fold "(X == true)" to X and "(X == false)" to !X to
+    // handle common cases produced by branch lowering.
+    if (CB.CmpRHS == ConstantInt::getTrue(*DAG.getContext()) &&
+        CB.CC == ISD::SETEQ)
+      Cond = CondLHS;
+    else if (CB.CmpRHS == ConstantInt::getFalse(*DAG.getContext()) &&
+             CB.CC == ISD::SETEQ) {
+      SDValue True = DAG.getConstant(1, CondLHS.getValueType());
+      Cond = DAG.getNode(ISD::XOR, dl, CondLHS.getValueType(), CondLHS, True);
+    } else
+      Cond = DAG.getSetCC(dl, MVT::i1, CondLHS, getValue(CB.CmpRHS), CB.CC);
+  } else {
+    assert(CB.CC == ISD::SETLE && "Can handle only LE ranges now");
+
+    const APInt& Low = cast<ConstantInt>(CB.CmpLHS)->getValue();
+    const APInt& High  = cast<ConstantInt>(CB.CmpRHS)->getValue();
+
+    SDValue CmpOp = getValue(CB.CmpMHS);
+    EVT VT = CmpOp.getValueType();
+
+    if (cast<ConstantInt>(CB.CmpLHS)->isMinValue(true)) {
+      Cond = DAG.getSetCC(dl, MVT::i1, CmpOp, DAG.getConstant(High, VT),
+                          ISD::SETLE);
+    } else {
+      SDValue SUB = DAG.getNode(ISD::SUB, dl,
+                                VT, CmpOp, DAG.getConstant(Low, VT));
+      Cond = DAG.getSetCC(dl, MVT::i1, SUB,
+                          DAG.getConstant(High-Low, VT), ISD::SETULE);
+    }
+  }
+
+  // Update successor info
+  CurMBB->addSuccessor(CB.TrueBB);
+  CurMBB->addSuccessor(CB.FalseBB);
+
+  // Set NextBlock to be the MBB immediately after the current one, if any.
+  // This is used to avoid emitting unnecessary branches to the next block.
+  MachineBasicBlock *NextBlock = 0;
+  MachineFunction::iterator BBI = CurMBB;
+  if (++BBI != FuncInfo.MF->end())
+    NextBlock = BBI;
+
+  // If the lhs block is the next block, invert the condition so that we can
+  // fall through to the lhs instead of the rhs block.
+  if (CB.TrueBB == NextBlock) {
+    std::swap(CB.TrueBB, CB.FalseBB);
+    SDValue True = DAG.getConstant(1, Cond.getValueType());
+    Cond = DAG.getNode(ISD::XOR, dl, Cond.getValueType(), Cond, True);
+  }
+  SDValue BrCond = DAG.getNode(ISD::BRCOND, dl,
+                               MVT::Other, getControlRoot(), Cond,
+                               DAG.getBasicBlock(CB.TrueBB));
+
+  // If the branch was constant folded, fix up the CFG.
+  if (BrCond.getOpcode() == ISD::BR) {
+    CurMBB->removeSuccessor(CB.FalseBB);
+    DAG.setRoot(BrCond);
+  } else {
+    // Otherwise, go ahead and insert the false branch.
+    if (BrCond == getControlRoot())
+      CurMBB->removeSuccessor(CB.TrueBB);
+
+    if (CB.FalseBB == NextBlock)
+      DAG.setRoot(BrCond);
+    else
+      DAG.setRoot(DAG.getNode(ISD::BR, dl, MVT::Other, BrCond,
+                              DAG.getBasicBlock(CB.FalseBB)));
+  }
+}
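+
+// Worked example (illustrative): a range case covering 3..7 takes the
+// CmpMHS path above and is emitted as the unsigned check
+//   (X - 3) <=u 4
+// which covers exactly the values 3 through 7 with one subtract and one
+// setcc.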
+
+/// visitJumpTable - Emit the JumpTable node in the current MBB.
+void SelectionDAGBuilder::visitJumpTable(JumpTable &JT) {
+  // Emit the code for the jump table
+  assert(JT.Reg != -1U && "Should lower JT Header first!");
+  EVT PTy = TLI.getPointerTy();
+  SDValue Index = DAG.getCopyFromReg(getControlRoot(), getCurDebugLoc(),
+                                     JT.Reg, PTy);
+  SDValue Table = DAG.getJumpTable(JT.JTI, PTy);
+  DAG.setRoot(DAG.getNode(ISD::BR_JT, getCurDebugLoc(),
+                          MVT::Other, Index.getValue(1),
+                          Table, Index));
+}
+
+/// visitJumpTableHeader - This function emits the code necessary to compute
+/// the index into the jump table from the value being switched on.
+void SelectionDAGBuilder::visitJumpTableHeader(JumpTable &JT,
+                                               JumpTableHeader &JTH) {
+  // Subtract the lowest switch case value from the value being switched on
+  // and conditionally branch to the default mbb if the result is greater
+  // than the difference between the smallest and largest cases.
+  SDValue SwitchOp = getValue(JTH.SValue);
+  EVT VT = SwitchOp.getValueType();
+  SDValue SUB = DAG.getNode(ISD::SUB, getCurDebugLoc(), VT, SwitchOp,
+                            DAG.getConstant(JTH.First, VT));
+
+  // The SDNode we just created, which holds the value being switched on
+  // minus the smallest case value, needs to be copied to a virtual register
+  // so it can be used as an index into the jump table in a subsequent basic
+  // block. This value may be smaller or larger than the target's pointer
+  // type, and therefore require extension or truncation.
+  SwitchOp = DAG.getZExtOrTrunc(SUB, getCurDebugLoc(), TLI.getPointerTy());
+
+  unsigned JumpTableReg = FuncInfo.MakeReg(TLI.getPointerTy());
+  SDValue CopyTo = DAG.getCopyToReg(getControlRoot(), getCurDebugLoc(),
+                                    JumpTableReg, SwitchOp);
+  JT.Reg = JumpTableReg;
+
+  // Emit the range check for the jump table, and branch to the default block
+  // for the switch statement if the value being switched on exceeds the largest
+  // case in the switch.
+  SDValue CMP = DAG.getSetCC(getCurDebugLoc(),
+                             TLI.getSetCCResultType(SUB.getValueType()), SUB,
+                             DAG.getConstant(JTH.Last-JTH.First,VT),
+                             ISD::SETUGT);
+
+  // Set NextBlock to be the MBB immediately after the current one, if any.
+  // This is used to avoid emitting unnecessary branches to the next block.
+  MachineBasicBlock *NextBlock = 0;
+  MachineFunction::iterator BBI = CurMBB;
+  if (++BBI != FuncInfo.MF->end())
+    NextBlock = BBI;
+
+  SDValue BrCond = DAG.getNode(ISD::BRCOND, getCurDebugLoc(),
+                               MVT::Other, CopyTo, CMP,
+                               DAG.getBasicBlock(JT.Default));
+
+  if (JT.MBB == NextBlock)
+    DAG.setRoot(BrCond);
+  else
+    DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, BrCond,
+                            DAG.getBasicBlock(JT.MBB)));
+}
+
+/// visitBitTestHeader - This function emits the code necessary to produce a
+/// value suitable for "bit tests".
+void SelectionDAGBuilder::visitBitTestHeader(BitTestBlock &B) {
+  // Subtract the minimum value
+  SDValue SwitchOp = getValue(B.SValue);
+  EVT VT = SwitchOp.getValueType();
+  SDValue SUB = DAG.getNode(ISD::SUB, getCurDebugLoc(), VT, SwitchOp,
+                            DAG.getConstant(B.First, VT));
+
+  // Check range
+  SDValue RangeCmp = DAG.getSetCC(getCurDebugLoc(),
+                                  TLI.getSetCCResultType(SUB.getValueType()),
+                                  SUB, DAG.getConstant(B.Range, VT),
+                                  ISD::SETUGT);
+
+  SDValue ShiftOp = DAG.getZExtOrTrunc(SUB, getCurDebugLoc(),
+                                       TLI.getPointerTy());
+
+  B.Reg = FuncInfo.MakeReg(TLI.getPointerTy());
+  SDValue CopyTo = DAG.getCopyToReg(getControlRoot(), getCurDebugLoc(),
+                                    B.Reg, ShiftOp);
+
+  // Set NextBlock to be the MBB immediately after the current one, if any.
+  // This is used to avoid emitting unnecessary branches to the next block.
+  MachineBasicBlock *NextBlock = 0;
+  MachineFunction::iterator BBI = CurMBB;
+  if (++BBI != FuncInfo.MF->end())
+    NextBlock = BBI;
+
+  MachineBasicBlock* MBB = B.Cases[0].ThisBB;
+
+  CurMBB->addSuccessor(B.Default);
+  CurMBB->addSuccessor(MBB);
+
+  SDValue BrRange = DAG.getNode(ISD::BRCOND, getCurDebugLoc(),
+                                MVT::Other, CopyTo, RangeCmp,
+                                DAG.getBasicBlock(B.Default));
+
+  if (MBB == NextBlock)
+    DAG.setRoot(BrRange);
+  else
+    // Chain the unconditional branch off BrRange rather than CopyTo, so
+    // that the range check emitted above stays anchored on the root chain.
+    DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, BrRange,
+                            DAG.getBasicBlock(MBB)));
+}
+
+/// visitBitTestCase - This function produces one "bit test".
+void SelectionDAGBuilder::visitBitTestCase(MachineBasicBlock* NextMBB,
+                                           unsigned Reg,
+                                           BitTestCase &B) {
+  // Make desired shift
+  SDValue ShiftOp = DAG.getCopyFromReg(getControlRoot(), getCurDebugLoc(), Reg,
+                                       TLI.getPointerTy());
+  SDValue SwitchVal = DAG.getNode(ISD::SHL, getCurDebugLoc(),
+                                  TLI.getPointerTy(),
+                                  DAG.getConstant(1, TLI.getPointerTy()),
+                                  ShiftOp);
+
+  // Emit bit tests and jumps
+  SDValue AndOp = DAG.getNode(ISD::AND, getCurDebugLoc(),
+                              TLI.getPointerTy(), SwitchVal,
+                              DAG.getConstant(B.Mask, TLI.getPointerTy()));
+  SDValue AndCmp = DAG.getSetCC(getCurDebugLoc(),
+                                TLI.getSetCCResultType(AndOp.getValueType()),
+                                AndOp, DAG.getConstant(0, TLI.getPointerTy()),
+                                ISD::SETNE);
+
+  CurMBB->addSuccessor(B.TargetBB);
+  CurMBB->addSuccessor(NextMBB);
+
+  SDValue BrAnd = DAG.getNode(ISD::BRCOND, getCurDebugLoc(),
+                              MVT::Other, getControlRoot(),
+                              AndCmp, DAG.getBasicBlock(B.TargetBB));
+
+  // Set NextBlock to be the MBB immediately after the current one, if any.
+  // This is used to avoid emitting unnecessary branches to the next block.
+  MachineBasicBlock *NextBlock = 0;
+  MachineFunction::iterator BBI = CurMBB;
+  if (++BBI != FuncInfo.MF->end())
+    NextBlock = BBI;
+
+  if (NextMBB == NextBlock)
+    DAG.setRoot(BrAnd);
+  else
+    DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, BrAnd,
+                            DAG.getBasicBlock(NextMBB)));
+}
+
+void SelectionDAGBuilder::visitInvoke(InvokeInst &I) {
+  // Retrieve successors.
+  MachineBasicBlock *Return = FuncInfo.MBBMap[I.getSuccessor(0)];
+  MachineBasicBlock *LandingPad = FuncInfo.MBBMap[I.getSuccessor(1)];
+
+  const Value *Callee(I.getCalledValue());
+  if (isa<InlineAsm>(Callee))
+    visitInlineAsm(&I);
+  else
+    LowerCallTo(&I, getValue(Callee), false, LandingPad);
+
+  // If the value of the invoke is used outside of its defining block, make it
+  // available as a virtual register.
+  CopyToExportRegsIfNeeded(&I);
+
+  // Update successor info
+  CurMBB->addSuccessor(Return);
+  CurMBB->addSuccessor(LandingPad);
+
+  // Drop into normal successor.
+  DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(),
+                          MVT::Other, getControlRoot(),
+                          DAG.getBasicBlock(Return)));
+}
+
+void SelectionDAGBuilder::visitUnwind(UnwindInst &I) {
+}
+
+/// handleSmallSwitchRange - Emit a series of specific tests (suitable for
+/// small case ranges).
+bool SelectionDAGBuilder::handleSmallSwitchRange(CaseRec& CR,
+                                                 CaseRecVector& WorkList,
+                                                 Value* SV,
+                                                 MachineBasicBlock* Default) {
+  Case& BackCase  = *(CR.Range.second-1);
+
+  // Size is the number of Cases represented by this range.
+  size_t Size = CR.Range.second - CR.Range.first;
+  if (Size > 3)
+    return false;
+
+  // Get the MachineFunction which holds the current MBB.  This is used when
+  // inserting any additional MBBs necessary to represent the switch.
+  MachineFunction *CurMF = FuncInfo.MF;
+
+  // Figure out which block is immediately after the current one.
+  MachineBasicBlock *NextBlock = 0;
+  MachineFunction::iterator BBI = CR.CaseBB;
+
+  if (++BBI != FuncInfo.MF->end())
+    NextBlock = BBI;
+
+  // TODO: If any two of the cases have the same destination, and if one value
+  // is the same as the other, but has one bit unset that the other has set,
+  // use bit manipulation to do two compares at once.  For example:
+  // "if (X == 6 || X == 4)" -> "if ((X|2) == 6)"
+
+  // Rearrange the case blocks so that the last one falls through if possible.
+  if (NextBlock && Default != NextBlock && BackCase.BB != NextBlock) {
+    // The last case block won't fall through into 'NextBlock' if we emit the
+    // branches in this order.  See if rearranging a case value would help.
+    for (CaseItr I = CR.Range.first, E = CR.Range.second-1; I != E; ++I) {
+      if (I->BB == NextBlock) {
+        std::swap(*I, BackCase);
+        break;
+      }
+    }
+  }
+
+  // Create a CaseBlock record representing a conditional branch to
+  // the Case's target mbb if the value being switched on (SV) is equal
+  // to C.
+  MachineBasicBlock *CurBlock = CR.CaseBB;
+  for (CaseItr I = CR.Range.first, E = CR.Range.second; I != E; ++I) {
+    MachineBasicBlock *FallThrough;
+    if (I != E-1) {
+      FallThrough = CurMF->CreateMachineBasicBlock(CurBlock->getBasicBlock());
+      CurMF->insert(BBI, FallThrough);
+
+      // Put SV in a virtual register to make it available from the new blocks.
+      ExportFromCurrentBlock(SV);
+    } else {
+      // If the last case doesn't match, go to the default block.
+      FallThrough = Default;
+    }
+
+    Value *RHS, *LHS, *MHS;
+    ISD::CondCode CC;
+    if (I->High == I->Low) {
+      // This is just a small case range :) containing exactly 1 case.
+      CC = ISD::SETEQ;
+      LHS = SV; RHS = I->High; MHS = NULL;
+    } else {
+      CC = ISD::SETLE;
+      LHS = I->Low; MHS = SV; RHS = I->High;
+    }
+    CaseBlock CB(CC, LHS, RHS, MHS, I->BB, FallThrough, CurBlock);
+
+    // If emitting the first comparison, just call visitSwitchCase to emit the
+    // code into the current block.  Otherwise, push the CaseBlock onto the
+    // vector to be later processed by SDISel, and insert the node's MBB
+    // before the next MBB.
+    if (CurBlock == CurMBB)
+      visitSwitchCase(CB);
+    else
+      SwitchCases.push_back(CB);
+
+    CurBlock = FallThrough;
+  }
+
+  return true;
+}
+
+static inline bool areJTsAllowed(const TargetLowering &TLI) {
+  return !DisableJumpTables &&
+          (TLI.isOperationLegalOrCustom(ISD::BR_JT, MVT::Other) ||
+           TLI.isOperationLegalOrCustom(ISD::BRIND, MVT::Other));
+}
+
+static APInt ComputeRange(const APInt &First, const APInt &Last) {
+  APInt LastExt(Last), FirstExt(First);
+  uint32_t BitWidth = std::max(Last.getBitWidth(), First.getBitWidth()) + 1;
+  LastExt.sext(BitWidth); FirstExt.sext(BitWidth);
+  return (LastExt - FirstExt + 1ULL);
+}
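+
+// E.g. ComputeRange(-2, 5) sign-extends both operands by one bit and
+// returns 5 - (-2) + 1 = 8, the number of values spanned, without risking
+// overflow at the original bit width.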
+
+/// handleJTSwitchCase - Emit a jump table for the current switch case range.
+bool SelectionDAGBuilder::handleJTSwitchCase(CaseRec& CR,
+                                             CaseRecVector& WorkList,
+                                             Value* SV,
+                                             MachineBasicBlock* Default) {
+  Case& FrontCase = *CR.Range.first;
+  Case& BackCase  = *(CR.Range.second-1);
+
+  const APInt &First = cast<ConstantInt>(FrontCase.Low)->getValue();
+  const APInt &Last  = cast<ConstantInt>(BackCase.High)->getValue();
+
+  APInt TSize(First.getBitWidth(), 0);
+  for (CaseItr I = CR.Range.first, E = CR.Range.second;
+       I!=E; ++I)
+    TSize += I->size();
+
+  if (!areJTsAllowed(TLI) || TSize.ult(APInt(First.getBitWidth(), 4)))
+    return false;
+
+  APInt Range = ComputeRange(First, Last);
+  double Density = TSize.roundToDouble() / Range.roundToDouble();
+  if (Density < 0.4)
+    return false;
+
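+  // For example (illustrative): five case values spanning [0, 9] give
+  // Density = 5/10 = 0.5 >= 0.4, so a ten-slot jump table (with unused
+  // slots branching to the default block) is considered worthwhile.
+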
+  DEBUG(errs() << "Lowering jump table\n"
+               << "First entry: " << First << ". Last entry: " << Last << '\n'
+               << "Range: " << Range
+               << "Size: " << TSize << ". Density: " << Density << "\n\n");
+
+  // Get the MachineFunction which holds the current MBB.  This is used when
+  // inserting any additional MBBs necessary to represent the switch.
+  MachineFunction *CurMF = FuncInfo.MF;
+
+  // Figure out which block is immediately after the current one.
+  MachineFunction::iterator BBI = CR.CaseBB;
+  ++BBI;
+
+  const BasicBlock *LLVMBB = CR.CaseBB->getBasicBlock();
+
+  // Create a new basic block to hold the code for loading the address
+  // of the jump table, and jumping to it.  Update successor information;
+  // we will either branch to the default case for the switch, or the jump
+  // table.
+  MachineBasicBlock *JumpTableBB = CurMF->CreateMachineBasicBlock(LLVMBB);
+  CurMF->insert(BBI, JumpTableBB);
+  CR.CaseBB->addSuccessor(Default);
+  CR.CaseBB->addSuccessor(JumpTableBB);
+
+  // Build a vector of destination BBs, corresponding to each target
+  // of the jump table. If the value of the jump table slot corresponds to
+  // a case statement, push the case's BB onto the vector; otherwise, push
+  // the default BB.
+  std::vector<MachineBasicBlock*> DestBBs;
+  APInt TEI = First;
+  for (CaseItr I = CR.Range.first, E = CR.Range.second; I != E; ++TEI) {
+    const APInt& Low = cast<ConstantInt>(I->Low)->getValue();
+    const APInt& High = cast<ConstantInt>(I->High)->getValue();
+
+    if (Low.sle(TEI) && TEI.sle(High)) {
+      DestBBs.push_back(I->BB);
+      if (TEI==High)
+        ++I;
+    } else {
+      DestBBs.push_back(Default);
+    }
+  }
+
+  // Update successor info. Add one edge to each unique successor.
+  BitVector SuccsHandled(CR.CaseBB->getParent()->getNumBlockIDs());
+  for (std::vector<MachineBasicBlock*>::iterator I = DestBBs.begin(),
+         E = DestBBs.end(); I != E; ++I) {
+    if (!SuccsHandled[(*I)->getNumber()]) {
+      SuccsHandled[(*I)->getNumber()] = true;
+      JumpTableBB->addSuccessor(*I);
+    }
+  }
+
+  // Create a jump table index for this jump table, or return an existing
+  // one.
+  unsigned JTI = CurMF->getJumpTableInfo()->getJumpTableIndex(DestBBs);
+
+  // Set the jump table information so that we can codegen it as a second
+  // MachineBasicBlock
+  JumpTable JT(-1U, JTI, JumpTableBB, Default);
+  JumpTableHeader JTH(First, Last, SV, CR.CaseBB, (CR.CaseBB == CurMBB));
+  if (CR.CaseBB == CurMBB)
+    visitJumpTableHeader(JT, JTH);
+
+  JTCases.push_back(JumpTableBlock(JTH, JT));
+
+  return true;
+}
+
+/// handleBTSplitSwitchCase - Emit a comparison and split the binary search
+/// tree into two subtrees.
+bool SelectionDAGBuilder::handleBTSplitSwitchCase(CaseRec& CR,
+                                                  CaseRecVector& WorkList,
+                                                  Value* SV,
+                                                  MachineBasicBlock* Default) {
+  // Get the MachineFunction which holds the current MBB.  This is used when
+  // inserting any additional MBBs necessary to represent the switch.
+  MachineFunction *CurMF = FuncInfo.MF;
+
+  // Figure out which block is immediately after the current one.
+  MachineFunction::iterator BBI = CR.CaseBB;
+  ++BBI;
+
+  Case& FrontCase = *CR.Range.first;
+  Case& BackCase  = *(CR.Range.second-1);
+  const BasicBlock *LLVMBB = CR.CaseBB->getBasicBlock();
+
+  // Size is the number of Cases represented by this range.
+  unsigned Size = CR.Range.second - CR.Range.first;
+
+  const APInt &First = cast<ConstantInt>(FrontCase.Low)->getValue();
+  const APInt &Last  = cast<ConstantInt>(BackCase.High)->getValue();
+  double FMetric = 0;
+  CaseItr Pivot = CR.Range.first + Size/2;
+
+  // Select optimal pivot, maximizing sum density of LHS and RHS. This will
+  // (heuristically) allow us to emit jump tables later.
+  APInt TSize(First.getBitWidth(), 0);
+  for (CaseItr I = CR.Range.first, E = CR.Range.second;
+       I!=E; ++I)
+    TSize += I->size();
+
+  APInt LSize = FrontCase.size();
+  APInt RSize = TSize-LSize;
+  DEBUG(errs() << "Selecting best pivot: \n"
+               << "First: " << First << ", Last: " << Last <<'\n'
+               << "LSize: " << LSize << ", RSize: " << RSize << '\n');
+  for (CaseItr I = CR.Range.first, J=I+1, E = CR.Range.second;
+       J!=E; ++I, ++J) {
+    const APInt &LEnd = cast<ConstantInt>(I->High)->getValue();
+    const APInt &RBegin = cast<ConstantInt>(J->Low)->getValue();
+    APInt Range = ComputeRange(LEnd, RBegin);
+    assert((Range - 2ULL).isNonNegative() &&
+           "Invalid case distance");
+    double LDensity = LSize.roundToDouble() /
+                      (LEnd - First + 1ULL).roundToDouble();
+    double RDensity = RSize.roundToDouble() /
+                      (Last - RBegin + 1ULL).roundToDouble();
+    double Metric = Range.logBase2()*(LDensity+RDensity);
+    // Should always split in some non-trivial place
+    DEBUG(errs() <<"=>Step\n"
+                 << "LEnd: " << LEnd << ", RBegin: " << RBegin << '\n'
+                 << "LDensity: " << LDensity
+                 << ", RDensity: " << RDensity << '\n'
+                 << "Metric: " << Metric << '\n');
+    if (FMetric < Metric) {
+      Pivot = J;
+      FMetric = Metric;
+      DEBUG(errs() << "Current metric set to: " << FMetric << '\n');
+    }
+
+    LSize += J->size();
+    RSize -= J->size();
+  }
+  if (areJTsAllowed(TLI)) {
+    // If our case is dense we *really* should handle it earlier!
+    assert((FMetric > 0) && "Should handle dense range earlier!");
+  } else {
+    Pivot = CR.Range.first + Size/2;
+  }
+
+  CaseRange LHSR(CR.Range.first, Pivot);
+  CaseRange RHSR(Pivot, CR.Range.second);
+  Constant *C = Pivot->Low;
+  MachineBasicBlock *FalseBB = 0, *TrueBB = 0;
+
+  // We know that we branch to the LHS if the Value being switched on is
+  // less than the Pivot value, C.  We use this to optimize our binary
+  // tree a bit, by recognizing that if SV is greater than or equal to the
+  // LHS's Case Value, and that Case Value is exactly one less than the
+  // Pivot's Value, then we can branch directly to the LHS's Target,
+  // rather than creating a leaf node for it.
+  if ((LHSR.second - LHSR.first) == 1 &&
+      LHSR.first->High == CR.GE &&
+      cast<ConstantInt>(C)->getValue() ==
+      (cast<ConstantInt>(CR.GE)->getValue() + 1LL)) {
+    TrueBB = LHSR.first->BB;
+  } else {
+    TrueBB = CurMF->CreateMachineBasicBlock(LLVMBB);
+    CurMF->insert(BBI, TrueBB);
+    WorkList.push_back(CaseRec(TrueBB, C, CR.GE, LHSR));
+
+    // Put SV in a virtual register to make it available from the new blocks.
+    ExportFromCurrentBlock(SV);
+  }
+
+  // Similar to the optimization above, if the Value being switched on is
+  // known to be less than the Constant CR.LT, and the current Case Value
+  // is CR.LT - 1, then we can branch directly to the target block for
+  // the current Case Value, rather than emitting a RHS leaf node for it.
+  if ((RHSR.second - RHSR.first) == 1 && CR.LT &&
+      cast<ConstantInt>(RHSR.first->Low)->getValue() ==
+      (cast<ConstantInt>(CR.LT)->getValue() - 1LL)) {
+    FalseBB = RHSR.first->BB;
+  } else {
+    FalseBB = CurMF->CreateMachineBasicBlock(LLVMBB);
+    CurMF->insert(BBI, FalseBB);
+    WorkList.push_back(CaseRec(FalseBB,CR.LT,C,RHSR));
+
+    // Put SV in a virtual register to make it available from the new blocks.
+    ExportFromCurrentBlock(SV);
+  }
+
+  // Create a CaseBlock record representing a conditional branch to
+  // the LHS node if the value being switched on (SV) is less than C.
+  // Otherwise, branch to RHS.
+  CaseBlock CB(ISD::SETLT, SV, C, NULL, TrueBB, FalseBB, CR.CaseBB);
+
+  if (CR.CaseBB == CurMBB)
+    visitSwitchCase(CB);
+  else
+    SwitchCases.push_back(CB);
+
+  return true;
+}
+
+/// handleBitTestsSwitchCase - If the current case range has few destinations
+/// and spans fewer values than the machine word bitwidth, encode the case
+/// range into a series of masks and emit bit tests with these masks.
+bool SelectionDAGBuilder::handleBitTestsSwitchCase(CaseRec& CR,
+                                                   CaseRecVector& WorkList,
+                                                   Value* SV,
+                                                   MachineBasicBlock* Default){
+  EVT PTy = TLI.getPointerTy();
+  unsigned IntPtrBits = PTy.getSizeInBits();
+
+  Case& FrontCase = *CR.Range.first;
+  Case& BackCase  = *(CR.Range.second-1);
+
+  // Get the MachineFunction which holds the current MBB.  This is used when
+  // inserting any additional MBBs necessary to represent the switch.
+  MachineFunction *CurMF = FuncInfo.MF;
+
+  // If target does not have legal shift left, do not emit bit tests at all.
+  if (!TLI.isOperationLegal(ISD::SHL, TLI.getPointerTy()))
+    return false;
+
+  size_t numCmps = 0;
+  for (CaseItr I = CR.Range.first, E = CR.Range.second;
+       I!=E; ++I) {
+    // A single case counts as one comparison; a case range counts as two.
+    numCmps += (I->Low == I->High ? 1 : 2);
+  }
+
+  // Count unique destinations
+  SmallSet<MachineBasicBlock*, 4> Dests;
+  for (CaseItr I = CR.Range.first, E = CR.Range.second; I!=E; ++I) {
+    Dests.insert(I->BB);
+    if (Dests.size() > 3)
+      // Don't bother with the code below if there are too many unique
+      // destinations.
+      return false;
+  }
+  DEBUG(errs() << "Total number of unique destinations: " << Dests.size() << '\n'
+               << "Total number of comparisons: " << numCmps << '\n');
+
+  // Compute span of values.
+  const APInt& minValue = cast<ConstantInt>(FrontCase.Low)->getValue();
+  const APInt& maxValue = cast<ConstantInt>(BackCase.High)->getValue();
+  APInt cmpRange = maxValue - minValue;
+
+  DEBUG(errs() << "Compare range: " << cmpRange << '\n'
+               << "Low bound: " << minValue << '\n'
+               << "High bound: " << maxValue << '\n');
+
+  if (cmpRange.uge(APInt(cmpRange.getBitWidth(), IntPtrBits)) ||
+      (!(Dests.size() == 1 && numCmps >= 3) &&
+       !(Dests.size() == 2 && numCmps >= 5) &&
+       !(Dests.size() >= 3 && numCmps >= 6)))
+    return false;
+
+  DEBUG(errs() << "Emitting bit tests\n");
+  APInt lowBound = APInt::getNullValue(cmpRange.getBitWidth());
+
+  // Optimize the case where all the case values fit in a word
+  // without having to subtract minValue: keeping lowBound at zero
+  // lets the subtraction be folded away.
+  if (minValue.isNonNegative() &&
+      maxValue.slt(APInt(maxValue.getBitWidth(), IntPtrBits))) {
+    cmpRange = maxValue;
+  } else {
+    lowBound = minValue;
+  }
+
+  CaseBitsVector CasesBits;
+  unsigned i, count = 0;
+
+  for (CaseItr I = CR.Range.first, E = CR.Range.second; I!=E; ++I) {
+    MachineBasicBlock* Dest = I->BB;
+    for (i = 0; i < count; ++i)
+      if (Dest == CasesBits[i].BB)
+        break;
+
+    if (i == count) {
+      assert((count < 3) && "Too many destinations to test!");
+      CasesBits.push_back(CaseBits(0, Dest, 0));
+      count++;
+    }
+
+    const APInt& lowValue = cast<ConstantInt>(I->Low)->getValue();
+    const APInt& highValue = cast<ConstantInt>(I->High)->getValue();
+
+    uint64_t lo = (lowValue - lowBound).getZExtValue();
+    uint64_t hi = (highValue - lowBound).getZExtValue();
+
+    for (uint64_t j = lo; j <= hi; j++) {
+      CasesBits[i].Mask |=  1ULL << j;
+      CasesBits[i].Bits++;
+    }
+
+  }
+  std::sort(CasesBits.begin(), CasesBits.end(), CaseBitsCmp());
+
+  BitTestInfo BTC;
+
+  // Figure out which block is immediately after the current one.
+  MachineFunction::iterator BBI = CR.CaseBB;
+  ++BBI;
+
+  const BasicBlock *LLVMBB = CR.CaseBB->getBasicBlock();
+
+  DEBUG(errs() << "Cases:\n");
+  for (unsigned i = 0, e = CasesBits.size(); i!=e; ++i) {
+    DEBUG(errs() << "Mask: " << CasesBits[i].Mask
+                 << ", Bits: " << CasesBits[i].Bits
+                 << ", BB: " << CasesBits[i].BB << '\n');
+
+    MachineBasicBlock *CaseBB = CurMF->CreateMachineBasicBlock(LLVMBB);
+    CurMF->insert(BBI, CaseBB);
+    BTC.push_back(BitTestCase(CasesBits[i].Mask,
+                              CaseBB,
+                              CasesBits[i].BB));
+
+    // Put SV in a virtual register to make it available from the new blocks.
+    ExportFromCurrentBlock(SV);
+  }
+
+  BitTestBlock BTB(lowBound, cmpRange, SV,
+                   -1U, (CR.CaseBB == CurMBB),
+                   CR.CaseBB, Default, BTC);
+
+  if (CR.CaseBB == CurMBB)
+    visitBitTestHeader(BTB);
+
+  BitTestCases.push_back(BTB);
+
+  return true;
+}
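+
+// Worked example (illustrative): "case 0: case 2: case 4: case 6: goto BB1"
+// yields lowBound == 0, cmpRange == 6 and a single BitTestCase with
+// Mask == 0x55 (bits 0, 2, 4 and 6 set); visitBitTestCase then emits
+//   if ((1 << X) & 0x55) goto BB1;
+// replacing four compares with a shift, an and, and one branch.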
+
+
+/// Clusterify - Transform a simple list of Cases into a list of clustered
+/// case ranges.
+size_t SelectionDAGBuilder::Clusterify(CaseVector& Cases,
+                                       const SwitchInst& SI) {
+  size_t numCmps = 0;
+
+  // Start with "simple" cases
+  for (size_t i = 1; i < SI.getNumSuccessors(); ++i) {
+    MachineBasicBlock *SMBB = FuncInfo.MBBMap[SI.getSuccessor(i)];
+    Cases.push_back(Case(SI.getSuccessorValue(i),
+                         SI.getSuccessorValue(i),
+                         SMBB));
+  }
+  std::sort(Cases.begin(), Cases.end(), CaseCmp());
+
+  // Merge cases into clusters.
+  if (Cases.size() >= 2)
+    // Must recompute end() each iteration because it may be
+    // invalidated by erase if we hold on to it
+    for (CaseItr I = Cases.begin(), J = ++(Cases.begin()); J != Cases.end(); ) {
+      const APInt& nextValue = cast<ConstantInt>(J->Low)->getValue();
+      const APInt& currentValue = cast<ConstantInt>(I->High)->getValue();
+      MachineBasicBlock* nextBB = J->BB;
+      MachineBasicBlock* currentBB = I->BB;
+
+      // If the two neighboring cases go to the same destination, merge them
+      // into a single case.
+      if ((nextValue - currentValue == 1) && (currentBB == nextBB)) {
+        I->High = J->High;
+        J = Cases.erase(J);
+      } else {
+        I = J++;
+      }
+    }
+
+  for (CaseItr I=Cases.begin(), E=Cases.end(); I!=E; ++I, ++numCmps) {
+    if (I->Low != I->High)
+      // A range counts double, since it requires two compares.
+      ++numCmps;
+  }
+
+  return numCmps;
+}
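+
+// E.g. "case 1: case 2: case 3: goto A; case 7: goto B" clusterifies into
+// [1,3] -> A and [7,7] -> B, and returns numCmps == 3: a range costs two
+// compares, a singleton one.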
+
+void SelectionDAGBuilder::visitSwitch(SwitchInst &SI) {
+  // Figure out which block is immediately after the current one, so the
+  // fall-through check below can actually suppress redundant branches.
+  MachineBasicBlock *NextBlock = 0;
+  MachineFunction::iterator BBI = CurMBB;
+  if (++BBI != FuncInfo.MF->end())
+    NextBlock = BBI;
+
+  MachineBasicBlock *Default = FuncInfo.MBBMap[SI.getDefaultDest()];
+
+  // If there is only the default destination, branch to it if it is not the
+  // next basic block.  Otherwise, just fall through.
+  if (SI.getNumOperands() == 2) {
+    // Update machine-CFG edges.
+
+    // If this is not a fall-through branch, emit the branch.
+    CurMBB->addSuccessor(Default);
+    if (Default != NextBlock)
+      DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(),
+                              MVT::Other, getControlRoot(),
+                              DAG.getBasicBlock(Default)));
+    return;
+  }
+
+  // If there are any non-default case statements, create a vector of Cases
+  // representing each one, and sort the vector so that we can efficiently
+  // create a binary search tree from them.
+  CaseVector Cases;
+  size_t numCmps = Clusterify(Cases, SI);
+  DEBUG(errs() << "Clusterify finished. Total clusters: " << Cases.size()
+               << ". Total compares: " << numCmps << '\n');
+  numCmps = 0;
+
+  // Get the Value to be switched on and default basic blocks, which will be
+  // inserted into CaseBlock records, representing basic blocks in the binary
+  // search tree.
+  Value *SV = SI.getOperand(0);
+
+  // Push the initial CaseRec onto the worklist
+  CaseRecVector WorkList;
+  WorkList.push_back(CaseRec(CurMBB,0,0,CaseRange(Cases.begin(),Cases.end())));
+
+  while (!WorkList.empty()) {
+    // Grab a record representing a case range to process off the worklist
+    CaseRec CR = WorkList.back();
+    WorkList.pop_back();
+
+    if (handleBitTestsSwitchCase(CR, WorkList, SV, Default))
+      continue;
+
+    // If the range has few cases (three or fewer), emit a series of
+    // specific tests.
+    if (handleSmallSwitchRange(CR, WorkList, SV, Default))
+      continue;
+
+    // If the switch has at least 4 cases, is at least 40% dense, and the
+    // target supports indirect branches, then emit a jump table rather than
+    // lowering the switch to a binary tree of conditional branches.
+    if (handleJTSwitchCase(CR, WorkList, SV, Default))
+      continue;
+
+    // Emit binary tree. We need to pick a pivot, and push left and right
+    // ranges onto the worklist. Leaves are handled via the
+    // handleSmallSwitchRange() call above.
+    handleBTSplitSwitchCase(CR, WorkList, SV, Default);
+  }
+}
+
+void SelectionDAGBuilder::visitIndirectBr(IndirectBrInst &I) {
+  // Update machine-CFG edges.
+  for (unsigned i = 0, e = I.getNumSuccessors(); i != e; ++i)
+    CurMBB->addSuccessor(FuncInfo.MBBMap[I.getSuccessor(i)]);
+
+  DAG.setRoot(DAG.getNode(ISD::BRIND, getCurDebugLoc(),
+                          MVT::Other, getControlRoot(),
+                          getValue(I.getAddress())));
+}
+
+
+void SelectionDAGBuilder::visitFSub(User &I) {
+  // -0.0 - X --> fneg
+  const Type *Ty = I.getType();
+  if (isa<VectorType>(Ty)) {
+    if (ConstantVector *CV = dyn_cast<ConstantVector>(I.getOperand(0))) {
+      const VectorType *DestTy = cast<VectorType>(I.getType());
+      const Type *ElTy = DestTy->getElementType();
+      unsigned VL = DestTy->getNumElements();
+      std::vector<Constant*> NZ(VL, ConstantFP::getNegativeZero(ElTy));
+      Constant *CNZ = ConstantVector::get(&NZ[0], NZ.size());
+      if (CV == CNZ) {
+        SDValue Op2 = getValue(I.getOperand(1));
+        setValue(&I, DAG.getNode(ISD::FNEG, getCurDebugLoc(),
+                                 Op2.getValueType(), Op2));
+        return;
+      }
+    }
+  }
+  if (ConstantFP *CFP = dyn_cast<ConstantFP>(I.getOperand(0)))
+    if (CFP->isExactlyValue(ConstantFP::getNegativeZero(Ty)->getValueAPF())) {
+      SDValue Op2 = getValue(I.getOperand(1));
+      setValue(&I, DAG.getNode(ISD::FNEG, getCurDebugLoc(),
+                               Op2.getValueType(), Op2));
+      return;
+    }
+
+  visitBinary(I, ISD::FSUB);
+}
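+// The special case above means IR such as
+//
+//   %neg = fsub float -0.000000e+00, %x
+//
+// (or its vector analogue with a splat of -0.0) becomes a single FNEG node
+// instead of a generic FSUB against a constant.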
+
+void SelectionDAGBuilder::visitBinary(User &I, unsigned OpCode) {
+  SDValue Op1 = getValue(I.getOperand(0));
+  SDValue Op2 = getValue(I.getOperand(1));
+
+  setValue(&I, DAG.getNode(OpCode, getCurDebugLoc(),
+                           Op1.getValueType(), Op1, Op2));
+}
+
+void SelectionDAGBuilder::visitShift(User &I, unsigned Opcode) {
+  SDValue Op1 = getValue(I.getOperand(0));
+  SDValue Op2 = getValue(I.getOperand(1));
+  if (!isa<VectorType>(I.getType()) &&
+      Op2.getValueType() != TLI.getShiftAmountTy()) {
+    // If the operand is smaller than the shift count type, promote it.
+    EVT PTy = TLI.getPointerTy();
+    EVT STy = TLI.getShiftAmountTy();
+    if (STy.bitsGT(Op2.getValueType()))
+      Op2 = DAG.getNode(ISD::ANY_EXTEND, getCurDebugLoc(),
+                        TLI.getShiftAmountTy(), Op2);
+    // If the operand is larger than the shift count type but the shift
+    // count type has enough bits to represent any shift value, truncate
+    // it now. This is a common case and it exposes the truncate to
+    // optimization early.
+    else if (STy.getSizeInBits() >=
+             Log2_32_Ceil(Op2.getValueType().getSizeInBits()))
+      Op2 = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(),
+                        TLI.getShiftAmountTy(), Op2);
+    // Otherwise we'll need to temporarily settle for some other
+    // convenient type; type legalization will make adjustments as
+    // needed.
+    else if (PTy.bitsLT(Op2.getValueType()))
+      Op2 = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(),
+                        TLI.getPointerTy(), Op2);
+    else if (PTy.bitsGT(Op2.getValueType()))
+      Op2 = DAG.getNode(ISD::ANY_EXTEND, getCurDebugLoc(),
+                        TLI.getPointerTy(), Op2);
+  }
+
+  setValue(&I, DAG.getNode(Opcode, getCurDebugLoc(),
+                           Op1.getValueType(), Op1, Op2));
+}
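+// For example, shifting an i32 by an i8 amount on a target whose shift
+// amount type is i32 ANY_EXTENDs the i8 to i32; shifting by an i64 amount
+// instead TRUNCATEs it early, which is safe because the shift count type
+// can still represent every valid shift amount.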
+
+void SelectionDAGBuilder::visitICmp(User &I) {
+  ICmpInst::Predicate predicate = ICmpInst::BAD_ICMP_PREDICATE;
+  if (ICmpInst *IC = dyn_cast<ICmpInst>(&I))
+    predicate = IC->getPredicate();
+  else if (ConstantExpr *IC = dyn_cast<ConstantExpr>(&I))
+    predicate = ICmpInst::Predicate(IC->getPredicate());
+  SDValue Op1 = getValue(I.getOperand(0));
+  SDValue Op2 = getValue(I.getOperand(1));
+  ISD::CondCode Opcode = getICmpCondCode(predicate);
+  
+  EVT DestVT = TLI.getValueType(I.getType());
+  setValue(&I, DAG.getSetCC(getCurDebugLoc(), DestVT, Op1, Op2, Opcode));
+}
+
+void SelectionDAGBuilder::visitFCmp(User &I) {
+  FCmpInst::Predicate predicate = FCmpInst::BAD_FCMP_PREDICATE;
+  if (FCmpInst *FC = dyn_cast<FCmpInst>(&I))
+    predicate = FC->getPredicate();
+  else if (ConstantExpr *FC = dyn_cast<ConstantExpr>(&I))
+    predicate = FCmpInst::Predicate(FC->getPredicate());
+  SDValue Op1 = getValue(I.getOperand(0));
+  SDValue Op2 = getValue(I.getOperand(1));
+  ISD::CondCode Condition = getFCmpCondCode(predicate);
+  EVT DestVT = TLI.getValueType(I.getType());
+  setValue(&I, DAG.getSetCC(getCurDebugLoc(), DestVT, Op1, Op2, Condition));
+}
+
+void SelectionDAGBuilder::visitSelect(User &I) {
+  SmallVector<EVT, 4> ValueVTs;
+  ComputeValueVTs(TLI, I.getType(), ValueVTs);
+  unsigned NumValues = ValueVTs.size();
+  if (NumValues != 0) {
+    SmallVector<SDValue, 4> Values(NumValues);
+    SDValue Cond     = getValue(I.getOperand(0));
+    SDValue TrueVal  = getValue(I.getOperand(1));
+    SDValue FalseVal = getValue(I.getOperand(2));
+
+    for (unsigned i = 0; i != NumValues; ++i)
+      Values[i] = DAG.getNode(ISD::SELECT, getCurDebugLoc(),
+                              TrueVal.getValueType(), Cond,
+                              SDValue(TrueVal.getNode(), TrueVal.getResNo() + i),
+                              SDValue(FalseVal.getNode(), FalseVal.getResNo() + i));
+
+    setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
+                             DAG.getVTList(&ValueVTs[0], NumValues),
+                             &Values[0], NumValues));
+  }
+}
+
+
+void SelectionDAGBuilder::visitTrunc(User &I) {
+  // TruncInst cannot be a no-op cast because sizeof(src) > sizeof(dest).
+  SDValue N = getValue(I.getOperand(0));
+  EVT DestVT = TLI.getValueType(I.getType());
+  setValue(&I, DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(), DestVT, N));
+}
+
+void SelectionDAGBuilder::visitZExt(User &I) {
+  // ZExt cannot be a no-op cast because sizeof(src) < sizeof(dest).
+  // ZExt also can't be a cast to bool for the same reason, so there is
+  // nothing much to do.
+  SDValue N = getValue(I.getOperand(0));
+  EVT DestVT = TLI.getValueType(I.getType());
+  setValue(&I, DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(), DestVT, N));
+}
+
+void SelectionDAGBuilder::visitSExt(User &I) {
+  // SExt cannot be a no-op cast because sizeof(src) < sizeof(dest).
+  // SExt also can't be a cast to bool for the same reason, so there is
+  // nothing much to do.
+  SDValue N = getValue(I.getOperand(0));
+  EVT DestVT = TLI.getValueType(I.getType());
+  setValue(&I, DAG.getNode(ISD::SIGN_EXTEND, getCurDebugLoc(), DestVT, N));
+}
+
+void SelectionDAGBuilder::visitFPTrunc(User &I) {
+  // FPTrunc is never a no-op cast, no need to check
+  SDValue N = getValue(I.getOperand(0));
+  EVT DestVT = TLI.getValueType(I.getType());
+  setValue(&I, DAG.getNode(ISD::FP_ROUND, getCurDebugLoc(),
+                           DestVT, N, DAG.getIntPtrConstant(0)));
+}
+
+void SelectionDAGBuilder::visitFPExt(User &I){
+  // FPExt is never a no-op cast, no need to check
+  SDValue N = getValue(I.getOperand(0));
+  EVT DestVT = TLI.getValueType(I.getType());
+  setValue(&I, DAG.getNode(ISD::FP_EXTEND, getCurDebugLoc(), DestVT, N));
+}
+
+void SelectionDAGBuilder::visitFPToUI(User &I) {
+  // FPToUI is never a no-op cast, no need to check
+  SDValue N = getValue(I.getOperand(0));
+  EVT DestVT = TLI.getValueType(I.getType());
+  setValue(&I, DAG.getNode(ISD::FP_TO_UINT, getCurDebugLoc(), DestVT, N));
+}
+
+void SelectionDAGBuilder::visitFPToSI(User &I) {
+  // FPToSI is never a no-op cast, no need to check
+  SDValue N = getValue(I.getOperand(0));
+  EVT DestVT = TLI.getValueType(I.getType());
+  setValue(&I, DAG.getNode(ISD::FP_TO_SINT, getCurDebugLoc(), DestVT, N));
+}
+
+void SelectionDAGBuilder::visitUIToFP(User &I) {
+  // UIToFP is never a no-op cast, no need to check
+  SDValue N = getValue(I.getOperand(0));
+  EVT DestVT = TLI.getValueType(I.getType());
+  setValue(&I, DAG.getNode(ISD::UINT_TO_FP, getCurDebugLoc(), DestVT, N));
+}
+
+void SelectionDAGBuilder::visitSIToFP(User &I){
+  // SIToFP is never a no-op cast, no need to check
+  SDValue N = getValue(I.getOperand(0));
+  EVT DestVT = TLI.getValueType(I.getType());
+  setValue(&I, DAG.getNode(ISD::SINT_TO_FP, getCurDebugLoc(), DestVT, N));
+}
+
+void SelectionDAGBuilder::visitPtrToInt(User &I) {
+  // What to do depends on the size of the integer and the size of the pointer.
+  // We can either truncate, zero extend, or no-op, accordingly.
+  SDValue N = getValue(I.getOperand(0));
+  EVT SrcVT = N.getValueType();
+  EVT DestVT = TLI.getValueType(I.getType());
+  SDValue Result = DAG.getZExtOrTrunc(N, getCurDebugLoc(), DestVT);
+  setValue(&I, Result);
+}
+
+void SelectionDAGBuilder::visitIntToPtr(User &I) {
+  // What to do depends on the size of the integer and the size of the pointer.
+  // We can either truncate, zero extend, or no-op, accordingly.
+  SDValue N = getValue(I.getOperand(0));
+  EVT SrcVT = N.getValueType();
+  EVT DestVT = TLI.getValueType(I.getType());
+  setValue(&I, DAG.getZExtOrTrunc(N, getCurDebugLoc(), DestVT));
+}
+
+void SelectionDAGBuilder::visitBitCast(User &I) {
+  SDValue N = getValue(I.getOperand(0));
+  EVT DestVT = TLI.getValueType(I.getType());
+
+  // BitCast assures us that source and destination are the same size so this
+  // is either a BIT_CONVERT or a no-op.
+  if (DestVT != N.getValueType())
+    setValue(&I, DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(),
+                             DestVT, N)); // convert types
+  else
+    setValue(&I, N); // noop cast.
+}
+
+void SelectionDAGBuilder::visitInsertElement(User &I) {
+  SDValue InVec = getValue(I.getOperand(0));
+  SDValue InVal = getValue(I.getOperand(1));
+  SDValue InIdx = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(),
+                                TLI.getPointerTy(),
+                                getValue(I.getOperand(2)));
+
+  setValue(&I, DAG.getNode(ISD::INSERT_VECTOR_ELT, getCurDebugLoc(),
+                           TLI.getValueType(I.getType()),
+                           InVec, InVal, InIdx));
+}
+
+void SelectionDAGBuilder::visitExtractElement(User &I) {
+  SDValue InVec = getValue(I.getOperand(0));
+  SDValue InIdx = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(),
+                                TLI.getPointerTy(),
+                                getValue(I.getOperand(1)));
+  setValue(&I, DAG.getNode(ISD::EXTRACT_VECTOR_ELT, getCurDebugLoc(),
+                           TLI.getValueType(I.getType()), InVec, InIdx));
+}
+
+
+// Utility for visitShuffleVector - Returns true if the mask is a sequential
+// mask starting from SIndx and increasing by one for each element (undefs
+// are allowed).
+static bool SequentialMask(SmallVectorImpl<int> &Mask, unsigned SIndx) {
+  unsigned MaskNumElts = Mask.size();
+  for (unsigned i = 0; i != MaskNumElts; ++i)
+    if ((Mask[i] >= 0) && (Mask[i] != (int)(i + SIndx)))
+      return false;
+  return true;
+}
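+// E.g. Mask = <0, 1, 2, 3> (or <0, undef, 2, undef>) is sequential with
+// SIndx 0, and <4, 5, 6, 7> is sequential with SIndx 4; the SIndx 0 form
+// over a double-length mask is what identifies a plain two-vector
+// concatenation below.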
+
+void SelectionDAGBuilder::visitShuffleVector(User &I) {
+  SmallVector<int, 8> Mask;
+  SDValue Src1 = getValue(I.getOperand(0));
+  SDValue Src2 = getValue(I.getOperand(1));
+
+  // Convert the ConstantVector mask operand into an array of ints, with -1
+  // representing undef values.
+  SmallVector<Constant*, 8> MaskElts;
+  cast<Constant>(I.getOperand(2))->getVectorElements(*DAG.getContext(), 
+                                                     MaskElts);
+  unsigned MaskNumElts = MaskElts.size();
+  for (unsigned i = 0; i != MaskNumElts; ++i) {
+    if (isa<UndefValue>(MaskElts[i]))
+      Mask.push_back(-1);
+    else
+      Mask.push_back(cast<ConstantInt>(MaskElts[i])->getSExtValue());
+  }
+  
+  EVT VT = TLI.getValueType(I.getType());
+  EVT SrcVT = Src1.getValueType();
+  unsigned SrcNumElts = SrcVT.getVectorNumElements();
+
+  if (SrcNumElts == MaskNumElts) {
+    setValue(&I, DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2,
+                                      &Mask[0]));
+    return;
+  }
+
+  // Normalize the shuffle vector since mask and vector length don't match.
+  if (SrcNumElts < MaskNumElts && MaskNumElts % SrcNumElts == 0) {
+    // The mask is longer than the source vectors, and its length is a
+    // multiple of the source vector length.  We can use concatenation to
+    // make the mask and vector lengths match.
+    if (SrcNumElts*2 == MaskNumElts && SequentialMask(Mask, 0)) {
+      // The shuffle is concatenating two vectors together.
+      setValue(&I, DAG.getNode(ISD::CONCAT_VECTORS, getCurDebugLoc(),
+                               VT, Src1, Src2));
+      return;
+    }
+
+    // Pad both vectors with undefs to make them the same length as the mask.
+    unsigned NumConcat = MaskNumElts / SrcNumElts;
+    bool Src1U = Src1.getOpcode() == ISD::UNDEF;
+    bool Src2U = Src2.getOpcode() == ISD::UNDEF;
+    SDValue UndefVal = DAG.getUNDEF(SrcVT);
+
+    SmallVector<SDValue, 8> MOps1(NumConcat, UndefVal);
+    SmallVector<SDValue, 8> MOps2(NumConcat, UndefVal);
+    MOps1[0] = Src1;
+    MOps2[0] = Src2;
+    
+    Src1 = Src1U ? DAG.getUNDEF(VT) : DAG.getNode(ISD::CONCAT_VECTORS, 
+                                                  getCurDebugLoc(), VT, 
+                                                  &MOps1[0], NumConcat);
+    Src2 = Src2U ? DAG.getUNDEF(VT) : DAG.getNode(ISD::CONCAT_VECTORS,
+                                                  getCurDebugLoc(), VT, 
+                                                  &MOps2[0], NumConcat);
+
+    // Readjust mask for new input vector length.
+    SmallVector<int, 8> MappedOps;
+    for (unsigned i = 0; i != MaskNumElts; ++i) {
+      int Idx = Mask[i];
+      if (Idx < (int)SrcNumElts)
+        MappedOps.push_back(Idx);
+      else
+        MappedOps.push_back(Idx + MaskNumElts - SrcNumElts);
+    }
+    setValue(&I, DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2, 
+                                      &MappedOps[0]));
+    return;
+  }
+
+  if (SrcNumElts > MaskNumElts) {
+    // Analyze the access pattern of the vector to see if we can extract
+    // two subvectors and do the shuffle. The analysis is done by calculating
+    // the range of elements the mask accesses in both vectors.
+    int MinRange[2] = { SrcNumElts+1, SrcNumElts+1};
+    int MaxRange[2] = {-1, -1};
+
+    for (unsigned i = 0; i != MaskNumElts; ++i) {
+      int Idx = Mask[i];
+      int Input = 0;
+      if (Idx < 0)
+        continue;
+      
+      if (Idx >= (int)SrcNumElts) {
+        Input = 1;
+        Idx -= SrcNumElts;
+      }
+      if (Idx > MaxRange[Input])
+        MaxRange[Input] = Idx;
+      if (Idx < MinRange[Input])
+        MinRange[Input] = Idx;
+    }
+
+    // Check if the access is smaller than the vector size, and whether we
+    // can find a reasonable extract index.
+    int RangeUse[2] = { 2, 2 };  // 0 = Unused, 1 = Extract, 2 = Cannot extract.
+    int StartIdx[2];  // StartIdx to extract from
+    for (int Input=0; Input < 2; ++Input) {
+      if (MinRange[Input] == (int)(SrcNumElts+1) && MaxRange[Input] == -1) {
+        RangeUse[Input] = 0; // Unused
+        StartIdx[Input] = 0;
+      } else if (MaxRange[Input] - MinRange[Input] < (int)MaskNumElts) {
+        // Fits within range but we should see if we can find a good
+        // start index that is a multiple of the mask length.
+        if (MaxRange[Input] < (int)MaskNumElts) {
+          RangeUse[Input] = 1; // Extract from beginning of the vector
+          StartIdx[Input] = 0;
+        } else {
+          StartIdx[Input] = (MinRange[Input]/MaskNumElts)*MaskNumElts;
+          if (MaxRange[Input] - StartIdx[Input] < (int)MaskNumElts &&
+              StartIdx[Input] + MaskNumElts < SrcNumElts)
+            RangeUse[Input] = 1; // Extract from a multiple of the mask length.
+        }
+      }
+    }
+
+    if (RangeUse[0] == 0 && RangeUse[1] == 0) {
+      setValue(&I, DAG.getUNDEF(VT));  // Vectors are not used.
+      return;
+    } else if (RangeUse[0] < 2 && RangeUse[1] < 2) {
+      // Extract appropriate subvector and generate a vector shuffle
+      for (int Input=0; Input < 2; ++Input) {
+        SDValue& Src = Input == 0 ? Src1 : Src2;
+        if (RangeUse[Input] == 0) {
+          Src = DAG.getUNDEF(VT);
+        } else {
+          Src = DAG.getNode(ISD::EXTRACT_SUBVECTOR, getCurDebugLoc(), VT,
+                            Src, DAG.getIntPtrConstant(StartIdx[Input]));
+        }
+      }
+      // Calculate new mask.
+      SmallVector<int, 8> MappedOps;
+      for (unsigned i = 0; i != MaskNumElts; ++i) {
+        int Idx = Mask[i];
+        if (Idx < 0)
+          MappedOps.push_back(Idx);
+        else if (Idx < (int)SrcNumElts)
+          MappedOps.push_back(Idx - StartIdx[0]);
+        else
+          MappedOps.push_back(Idx - SrcNumElts - StartIdx[1] + MaskNumElts);
+      }
+      setValue(&I, DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2,
+                                        &MappedOps[0]));
+      return;
+    }
+  }
+
+  // We can't use either concat vectors or extract subvectors, so fall back
+  // to replacing the shuffle with extract and build vector.
+  EVT EltVT = VT.getVectorElementType();
+  EVT PtrVT = TLI.getPointerTy();
+  SmallVector<SDValue,8> Ops;
+  for (unsigned i = 0; i != MaskNumElts; ++i) {
+    if (Mask[i] < 0) {
+      Ops.push_back(DAG.getUNDEF(EltVT));
+    } else {
+      int Idx = Mask[i];
+      if (Idx < (int)SrcNumElts)
+        Ops.push_back(DAG.getNode(ISD::EXTRACT_VECTOR_ELT, getCurDebugLoc(),
+                                  EltVT, Src1, DAG.getConstant(Idx, PtrVT)));
+      else
+        Ops.push_back(DAG.getNode(ISD::EXTRACT_VECTOR_ELT, getCurDebugLoc(),
+                                  EltVT, Src2,
+                                  DAG.getConstant(Idx - SrcNumElts, PtrVT)));
+    }
+  }
+  setValue(&I, DAG.getNode(ISD::BUILD_VECTOR, getCurDebugLoc(),
+                           VT, &Ops[0], Ops.size()));
+}
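+// E.g. shuffling two <8 x i32> sources with the two-element mask <0, 7>
+// reaches this fallback: the used elements span seven positions, too wide
+// for a two-wide EXTRACT_SUBVECTOR, so the result is built from two
+// EXTRACT_VECTOR_ELT nodes and one BUILD_VECTOR.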
+
+void SelectionDAGBuilder::visitInsertValue(InsertValueInst &I) {
+  const Value *Op0 = I.getOperand(0);
+  const Value *Op1 = I.getOperand(1);
+  const Type *AggTy = I.getType();
+  const Type *ValTy = Op1->getType();
+  bool IntoUndef = isa<UndefValue>(Op0);
+  bool FromUndef = isa<UndefValue>(Op1);
+
+  unsigned LinearIndex = ComputeLinearIndex(TLI, AggTy,
+                                            I.idx_begin(), I.idx_end());
+
+  SmallVector<EVT, 4> AggValueVTs;
+  ComputeValueVTs(TLI, AggTy, AggValueVTs);
+  SmallVector<EVT, 4> ValValueVTs;
+  ComputeValueVTs(TLI, ValTy, ValValueVTs);
+
+  unsigned NumAggValues = AggValueVTs.size();
+  unsigned NumValValues = ValValueVTs.size();
+  SmallVector<SDValue, 4> Values(NumAggValues);
+
+  SDValue Agg = getValue(Op0);
+  SDValue Val = getValue(Op1);
+  unsigned i = 0;
+  // Copy the beginning value(s) from the original aggregate.
+  for (; i != LinearIndex; ++i)
+    Values[i] = IntoUndef ? DAG.getUNDEF(AggValueVTs[i]) :
+                SDValue(Agg.getNode(), Agg.getResNo() + i);
+  // Copy values from the inserted value(s).
+  for (; i != LinearIndex + NumValValues; ++i)
+    Values[i] = FromUndef ? DAG.getUNDEF(AggValueVTs[i]) :
+                SDValue(Val.getNode(), Val.getResNo() + i - LinearIndex);
+  // Copy remaining value(s) from the original aggregate.
+  for (; i != NumAggValues; ++i)
+    Values[i] = IntoUndef ? DAG.getUNDEF(AggValueVTs[i]) :
+                SDValue(Agg.getNode(), Agg.getResNo() + i);
+
+  setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
+                           DAG.getVTList(&AggValueVTs[0], NumAggValues),
+                           &Values[0], NumAggValues));
+}
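+// E.g. for %r = insertvalue {i32, float, i32} %agg, float %f, 1 the linear
+// index is 1, so Values[] gets the leading i32 from %agg, then %f, then the
+// trailing i32, all re-packed by one MERGE_VALUES node.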
+
+void SelectionDAGBuilder::visitExtractValue(ExtractValueInst &I) {
+  const Value *Op0 = I.getOperand(0);
+  const Type *AggTy = Op0->getType();
+  const Type *ValTy = I.getType();
+  bool OutOfUndef = isa<UndefValue>(Op0);
+
+  unsigned LinearIndex = ComputeLinearIndex(TLI, AggTy,
+                                            I.idx_begin(), I.idx_end());
+
+  SmallVector<EVT, 4> ValValueVTs;
+  ComputeValueVTs(TLI, ValTy, ValValueVTs);
+
+  unsigned NumValValues = ValValueVTs.size();
+  SmallVector<SDValue, 4> Values(NumValValues);
+
+  SDValue Agg = getValue(Op0);
+  // Copy out the selected value(s).
+  for (unsigned i = LinearIndex; i != LinearIndex + NumValValues; ++i)
+    Values[i - LinearIndex] =
+      OutOfUndef ?
+        DAG.getUNDEF(Agg.getNode()->getValueType(Agg.getResNo() + i)) :
+        SDValue(Agg.getNode(), Agg.getResNo() + i);
+
+  setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
+                           DAG.getVTList(&ValValueVTs[0], NumValValues),
+                           &Values[0], NumValValues));
+}
+
+
+void SelectionDAGBuilder::visitGetElementPtr(User &I) {
+  SDValue N = getValue(I.getOperand(0));
+  const Type *Ty = I.getOperand(0)->getType();
+
+  for (GetElementPtrInst::op_iterator OI = I.op_begin()+1, E = I.op_end();
+       OI != E; ++OI) {
+    Value *Idx = *OI;
+    if (const StructType *StTy = dyn_cast<StructType>(Ty)) {
+      unsigned Field = cast<ConstantInt>(Idx)->getZExtValue();
+      if (Field) {
+        // N = N + Offset
+        uint64_t Offset = TD->getStructLayout(StTy)->getElementOffset(Field);
+        N = DAG.getNode(ISD::ADD, getCurDebugLoc(), N.getValueType(), N,
+                        DAG.getIntPtrConstant(Offset));
+      }
+      Ty = StTy->getElementType(Field);
+    } else {
+      Ty = cast<SequentialType>(Ty)->getElementType();
+
+      // If this is a constant subscript, handle it quickly.
+      if (ConstantInt *CI = dyn_cast<ConstantInt>(Idx)) {
+        if (CI->getZExtValue() == 0) continue;
+        uint64_t Offs =
+            TD->getTypeAllocSize(Ty)*cast<ConstantInt>(CI)->getSExtValue();
+        SDValue OffsVal;
+        EVT PTy = TLI.getPointerTy();
+        unsigned PtrBits = PTy.getSizeInBits();
+        if (PtrBits < 64) {
+          OffsVal = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(),
+                                TLI.getPointerTy(),
+                                DAG.getConstant(Offs, MVT::i64));
+        } else
+          OffsVal = DAG.getIntPtrConstant(Offs);
+        N = DAG.getNode(ISD::ADD, getCurDebugLoc(), N.getValueType(), N,
+                        OffsVal);
+        continue;
+      }
+
+      // N = N + Idx * ElementSize;
+      APInt ElementSize = APInt(TLI.getPointerTy().getSizeInBits(),
+                                TD->getTypeAllocSize(Ty));
+      SDValue IdxN = getValue(Idx);
+
+      // If the index is smaller or larger than intptr_t, truncate or extend
+      // it.
+      IdxN = DAG.getSExtOrTrunc(IdxN, getCurDebugLoc(), N.getValueType());
+
+      // If this is a multiply by a power of two, turn it into a shl
+      // immediately.  This is a very common case.
+      if (ElementSize != 1) {
+        if (ElementSize.isPowerOf2()) {
+          unsigned Amt = ElementSize.logBase2();
+          IdxN = DAG.getNode(ISD::SHL, getCurDebugLoc(),
+                             N.getValueType(), IdxN,
+                             DAG.getConstant(Amt, TLI.getPointerTy()));
+        } else {
+          SDValue Scale = DAG.getConstant(ElementSize, TLI.getPointerTy());
+          IdxN = DAG.getNode(ISD::MUL, getCurDebugLoc(),
+                             N.getValueType(), IdxN, Scale);
+        }
+      }
+
+      N = DAG.getNode(ISD::ADD, getCurDebugLoc(),
+                      N.getValueType(), N, IdxN);
+    }
+  }
+  setValue(&I, N);
+}
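+// E.g. getelementptr {i32, [8 x i16]}* %p, i32 0, i32 1, i32 %i lowers
+// roughly (assuming a 4-byte i32) to
+//
+//   N = ADD %p, 4                 ; offset of field 1 in the struct layout
+//   N = ADD N, SHL(sext %i, 1)    ; index scaled by sizeof(i16) == 2
+//
+// where the multiply by a power-of-two element size has been strength-
+// reduced to the shift.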
+
+void SelectionDAGBuilder::visitAlloca(AllocaInst &I) {
+  // If this is a fixed sized alloca in the entry block of the function,
+  // allocate it statically on the stack.
+  if (FuncInfo.StaticAllocaMap.count(&I))
+    return;   // getValue will auto-populate this.
+
+  const Type *Ty = I.getAllocatedType();
+  uint64_t TySize = TLI.getTargetData()->getTypeAllocSize(Ty);
+  unsigned Align =
+    std::max((unsigned)TLI.getTargetData()->getPrefTypeAlignment(Ty),
+             I.getAlignment());
+
+  SDValue AllocSize = getValue(I.getArraySize());
+
+  AllocSize = DAG.getNode(ISD::MUL, getCurDebugLoc(), AllocSize.getValueType(),
+                          AllocSize,
+                          DAG.getConstant(TySize, AllocSize.getValueType()));
+
+  EVT IntPtr = TLI.getPointerTy();
+  AllocSize = DAG.getZExtOrTrunc(AllocSize, getCurDebugLoc(), IntPtr);
+
+  // Handle alignment.  If the requested alignment is less than or equal to
+  // the stack alignment, ignore it.  If it is greater, we note this in the
+  // DYNAMIC_STACKALLOC node.
+  unsigned StackAlign =
+    TLI.getTargetMachine().getFrameInfo()->getStackAlignment();
+  if (Align <= StackAlign)
+    Align = 0;
+
+  // Round the size of the allocation up to the stack alignment size
+  // by adding SA-1 to the size.
+  AllocSize = DAG.getNode(ISD::ADD, getCurDebugLoc(),
+                          AllocSize.getValueType(), AllocSize,
+                          DAG.getIntPtrConstant(StackAlign-1));
+  // Mask out the low bits for alignment purposes.
+  AllocSize = DAG.getNode(ISD::AND, getCurDebugLoc(),
+                          AllocSize.getValueType(), AllocSize,
+                          DAG.getIntPtrConstant(~(uint64_t)(StackAlign-1)));
+
+  SDValue Ops[] = { getRoot(), AllocSize, DAG.getIntPtrConstant(Align) };
+  SDVTList VTs = DAG.getVTList(AllocSize.getValueType(), MVT::Other);
+  SDValue DSA = DAG.getNode(ISD::DYNAMIC_STACKALLOC, getCurDebugLoc(),
+                            VTs, Ops, 3);
+  setValue(&I, DSA);
+  DAG.setRoot(DSA.getValue(1));
+
+  // Inform the Frame Information that we have just allocated a variable-sized
+  // object.
+  FuncInfo.MF->getFrameInfo()->CreateVariableSizedObject();
+}
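+// E.g. with a 16-byte stack alignment, an alloca of 20 bytes becomes
+// (20 + 15) & ~15 == 32, so the DYNAMIC_STACKALLOC always moves the stack
+// pointer by a multiple of the stack alignment.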
+
+void SelectionDAGBuilder::visitLoad(LoadInst &I) {
+  const Value *SV = I.getOperand(0);
+  SDValue Ptr = getValue(SV);
+
+  const Type *Ty = I.getType();
+  bool isVolatile = I.isVolatile();
+  unsigned Alignment = I.getAlignment();
+
+  SmallVector<EVT, 4> ValueVTs;
+  SmallVector<uint64_t, 4> Offsets;
+  ComputeValueVTs(TLI, Ty, ValueVTs, &Offsets);
+  unsigned NumValues = ValueVTs.size();
+  if (NumValues == 0)
+    return;
+
+  SDValue Root;
+  bool ConstantMemory = false;
+  if (I.isVolatile())
+    // Serialize volatile loads with other side effects.
+    Root = getRoot();
+  else if (AA->pointsToConstantMemory(SV)) {
+    // Do not serialize (non-volatile) loads of constant memory with anything.
+    Root = DAG.getEntryNode();
+    ConstantMemory = true;
+  } else {
+    // Do not serialize non-volatile loads against each other.
+    Root = DAG.getRoot();
+  }
+
+  SmallVector<SDValue, 4> Values(NumValues);
+  SmallVector<SDValue, 4> Chains(NumValues);
+  EVT PtrVT = Ptr.getValueType();
+  for (unsigned i = 0; i != NumValues; ++i) {
+    SDValue L = DAG.getLoad(ValueVTs[i], getCurDebugLoc(), Root,
+                            DAG.getNode(ISD::ADD, getCurDebugLoc(),
+                                        PtrVT, Ptr,
+                                        DAG.getConstant(Offsets[i], PtrVT)),
+                            SV, Offsets[i], isVolatile, Alignment);
+    Values[i] = L;
+    Chains[i] = L.getValue(1);
+  }
+
+  if (!ConstantMemory) {
+    SDValue Chain = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(),
+                                  MVT::Other,
+                                  &Chains[0], NumValues);
+    if (isVolatile)
+      DAG.setRoot(Chain);
+    else
+      PendingLoads.push_back(Chain);
+  }
+
+  setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
+                           DAG.getVTList(&ValueVTs[0], NumValues),
+                           &Values[0], NumValues));
+}
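+// E.g. two adjacent non-volatile loads both chain to DAG.getRoot() and end
+// up in PendingLoads, leaving the scheduler free to reorder them; making
+// either one volatile routes it through the serialized root chain instead.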
+
+
+void SelectionDAGBuilder::visitStore(StoreInst &I) {
+  Value *SrcV = I.getOperand(0);
+  Value *PtrV = I.getOperand(1);
+
+  SmallVector<EVT, 4> ValueVTs;
+  SmallVector<uint64_t, 4> Offsets;
+  ComputeValueVTs(TLI, SrcV->getType(), ValueVTs, &Offsets);
+  unsigned NumValues = ValueVTs.size();
+  if (NumValues == 0)
+    return;
+
+  // Get the lowered operands. Note that we do this after
+  // checking if NumValues is zero, because with zero values
+  // the operands won't have values in the map.
+  SDValue Src = getValue(SrcV);
+  SDValue Ptr = getValue(PtrV);
+
+  SDValue Root = getRoot();
+  SmallVector<SDValue, 4> Chains(NumValues);
+  EVT PtrVT = Ptr.getValueType();
+  bool isVolatile = I.isVolatile();
+  unsigned Alignment = I.getAlignment();
+  for (unsigned i = 0; i != NumValues; ++i)
+    Chains[i] = DAG.getStore(Root, getCurDebugLoc(),
+                             SDValue(Src.getNode(), Src.getResNo() + i),
+                             DAG.getNode(ISD::ADD, getCurDebugLoc(),
+                                         PtrVT, Ptr,
+                                         DAG.getConstant(Offsets[i], PtrVT)),
+                             PtrV, Offsets[i], isVolatile, Alignment);
+
+  DAG.setRoot(DAG.getNode(ISD::TokenFactor, getCurDebugLoc(),
+                          MVT::Other, &Chains[0], NumValues));
+}
+
+/// visitTargetIntrinsic - Lower a call of a target intrinsic to an INTRINSIC
+/// node.
+void SelectionDAGBuilder::visitTargetIntrinsic(CallInst &I,
+                                               unsigned Intrinsic) {
+  bool HasChain = !I.doesNotAccessMemory();
+  bool OnlyLoad = HasChain && I.onlyReadsMemory();
+
+  // Build the operand list.
+  SmallVector<SDValue, 8> Ops;
+  if (HasChain) {  // If this intrinsic has side-effects, chainify it.
+    if (OnlyLoad) {
+      // We don't need to serialize loads against other loads.
+      Ops.push_back(DAG.getRoot());
+    } else {
+      Ops.push_back(getRoot());
+    }
+  }
+
+  // Info is set by getTgtMemIntrinsic
+  TargetLowering::IntrinsicInfo Info;
+  bool IsTgtIntrinsic = TLI.getTgtMemIntrinsic(Info, I, Intrinsic);
+
+  // Add the intrinsic ID as an integer operand if it's not a target intrinsic.
+  if (!IsTgtIntrinsic)
+    Ops.push_back(DAG.getConstant(Intrinsic, TLI.getPointerTy()));
+
+  // Add all operands of the call to the operand list.
+  for (unsigned i = 1, e = I.getNumOperands(); i != e; ++i) {
+    SDValue Op = getValue(I.getOperand(i));
+    assert(TLI.isTypeLegal(Op.getValueType()) &&
+           "Intrinsic uses a non-legal type?");
+    Ops.push_back(Op);
+  }
+
+  SmallVector<EVT, 4> ValueVTs;
+  ComputeValueVTs(TLI, I.getType(), ValueVTs);
+#ifndef NDEBUG
+  for (unsigned Val = 0, E = ValueVTs.size(); Val != E; ++Val) {
+    assert(TLI.isTypeLegal(ValueVTs[Val]) &&
+           "Intrinsic uses a non-legal type?");
+  }
+#endif // NDEBUG
+  if (HasChain)
+    ValueVTs.push_back(MVT::Other);
+
+  SDVTList VTs = DAG.getVTList(ValueVTs.data(), ValueVTs.size());
+
+  // Create the node.
+  SDValue Result;
+  if (IsTgtIntrinsic) {
+    // This is a target intrinsic that touches memory
+    Result = DAG.getMemIntrinsicNode(Info.opc, getCurDebugLoc(),
+                                     VTs, &Ops[0], Ops.size(),
+                                     Info.memVT, Info.ptrVal, Info.offset,
+                                     Info.align, Info.vol,
+                                     Info.readMem, Info.writeMem);
+  }
+  else if (!HasChain)
+    Result = DAG.getNode(ISD::INTRINSIC_WO_CHAIN, getCurDebugLoc(),
+                         VTs, &Ops[0], Ops.size());
+  else if (I.getType() != Type::getVoidTy(*DAG.getContext()))
+    Result = DAG.getNode(ISD::INTRINSIC_W_CHAIN, getCurDebugLoc(),
+                         VTs, &Ops[0], Ops.size());
+  else
+    Result = DAG.getNode(ISD::INTRINSIC_VOID, getCurDebugLoc(),
+                         VTs, &Ops[0], Ops.size());
+
+  if (HasChain) {
+    SDValue Chain = Result.getValue(Result.getNode()->getNumValues()-1);
+    if (OnlyLoad)
+      PendingLoads.push_back(Chain);
+    else
+      DAG.setRoot(Chain);
+  }
+  if (I.getType() != Type::getVoidTy(*DAG.getContext())) {
+    if (const VectorType *PTy = dyn_cast<VectorType>(I.getType())) {
+      EVT VT = TLI.getValueType(PTy);
+      Result = DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(), VT, Result);
+    }
+    setValue(&I, Result);
+  }
+}
+
+/// GetSignificand - Get the significand and build it into a floating-point
+/// number with exponent of 1:
+///
+///   Op = (Op & 0x007fffff) | 0x3f800000;
+///
+/// where Op is the hexadecimal representation of the floating-point value.
+static SDValue
+GetSignificand(SelectionDAG &DAG, SDValue Op, DebugLoc dl) {
+  SDValue t1 = DAG.getNode(ISD::AND, dl, MVT::i32, Op,
+                           DAG.getConstant(0x007fffff, MVT::i32));
+  SDValue t2 = DAG.getNode(ISD::OR, dl, MVT::i32, t1,
+                           DAG.getConstant(0x3f800000, MVT::i32));
+  return DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t2);
+}
+
+/// GetExponent - Get the exponent:
+///
+///   (float)(int)(((Op & 0x7f800000) >> 23) - 127);
+///
+/// where Op is the hexadecimal representation of the floating-point value.
+static SDValue
+GetExponent(SelectionDAG &DAG, SDValue Op, const TargetLowering &TLI,
+            DebugLoc dl) {
+  SDValue t0 = DAG.getNode(ISD::AND, dl, MVT::i32, Op,
+                           DAG.getConstant(0x7f800000, MVT::i32));
+  SDValue t1 = DAG.getNode(ISD::SRL, dl, MVT::i32, t0,
+                           DAG.getConstant(23, TLI.getPointerTy()));
+  SDValue t2 = DAG.getNode(ISD::SUB, dl, MVT::i32, t1,
+                           DAG.getConstant(127, MVT::i32));
+  return DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, t2);
+}
+
+/// getF32Constant - Get a 32-bit floating-point constant.
+static SDValue
+getF32Constant(SelectionDAG &DAG, unsigned Flt) {
+  return DAG.getConstantFP(APFloat(APInt(32, Flt)), MVT::f32);
+}
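+// Together these helpers take an f32 apart through its IEEE-754 bit
+// pattern.  In plain C++ the same decomposition is, roughly:
+//
+//   uint32_t Bits = bit_cast<uint32_t>(X);
+//   float Exponent    = (float)(int)(((Bits & 0x7f800000) >> 23) - 127);
+//   float Significand = bit_cast<float>((Bits & 0x007fffff) | 0x3f800000);
+//
+// so that, for normal values, X == Significand * 2^Exponent with
+// Significand in [1, 2).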
+
+/// Inlined utility function to implement binary input atomic intrinsics for
+/// visitIntrinsicCall: I is the call instruction, and Op is the associated
+/// NodeType for I.
+const char *
+SelectionDAGBuilder::implVisitBinaryAtomic(CallInst& I, ISD::NodeType Op) {
+  SDValue Root = getRoot();
+  SDValue L =
+    DAG.getAtomic(Op, getCurDebugLoc(),
+                  getValue(I.getOperand(2)).getValueType().getSimpleVT(),
+                  Root,
+                  getValue(I.getOperand(1)),
+                  getValue(I.getOperand(2)),
+                  I.getOperand(1));
+  setValue(&I, L);
+  DAG.setRoot(L.getValue(1));
+  return 0;
+}
+
+// implVisitAluOverflow - Lower arithmetic overflow intrinsics.
+const char *
+SelectionDAGBuilder::implVisitAluOverflow(CallInst &I, ISD::NodeType Op) {
+  SDValue Op1 = getValue(I.getOperand(1));
+  SDValue Op2 = getValue(I.getOperand(2));
+
+  SDVTList VTs = DAG.getVTList(Op1.getValueType(), MVT::i1);
+  SDValue Result = DAG.getNode(Op, getCurDebugLoc(), VTs, Op1, Op2);
+
+  setValue(&I, Result);
+  return 0;
+}
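+// E.g. a call to llvm.sadd.with.overflow.i32 is typically lowered through
+// here with Op == ISD::SADDO; the node's two results are the i32 sum and
+// the i1 overflow flag, matching the intrinsic's {i32, i1} return struct.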
+
+/// visitExp - Lower an exp intrinsic. Handles the special sequences for
+/// limited-precision mode.
+void
+SelectionDAGBuilder::visitExp(CallInst &I) {
+  SDValue result;
+  DebugLoc dl = getCurDebugLoc();
+
+  if (getValue(I.getOperand(1)).getValueType() == MVT::f32 &&
+      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
+    SDValue Op = getValue(I.getOperand(1));
+
+    // Put the exponent in the right bit position for later addition to the
+    // final result:
+    //
+    //   #define LOG2OFe 1.4426950f
+    //   IntegerPartOfX = ((int32_t)(X * LOG2OFe));
+    SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, Op,
+                             getF32Constant(DAG, 0x3fb8aa3b));
+    SDValue IntegerPartOfX = DAG.getNode(ISD::FP_TO_SINT, dl, MVT::i32, t0);
+
+    //   FractionalPartOfX = (X * LOG2OFe) - (float)IntegerPartOfX;
+    SDValue t1 = DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, IntegerPartOfX);
+    SDValue X = DAG.getNode(ISD::FSUB, dl, MVT::f32, t0, t1);
+
+    //   IntegerPartOfX <<= 23;
+    IntegerPartOfX = DAG.getNode(ISD::SHL, dl, MVT::i32, IntegerPartOfX,
+                                 DAG.getConstant(23, TLI.getPointerTy()));
+
+    if (LimitFloatPrecision <= 6) {
+      // For floating-point precision of 6:
+      //
+      //   TwoToFractionalPartOfX =
+      //     0.997535578f +
+      //       (0.735607626f + 0.252464424f * x) * x;
+      //
+      // error 0.0144103317, which is 6 bits
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0x3e814304));
+      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x3f3c50c8));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
+                               getF32Constant(DAG, 0x3f7f5e7e));
+      SDValue TwoToFracPartOfX = DAG.getNode(ISD::BIT_CONVERT, dl,MVT::i32, t5);
+
+      // Add the exponent into the result in integer domain.
+      SDValue t6 = DAG.getNode(ISD::ADD, dl, MVT::i32,
+                               TwoToFracPartOfX, IntegerPartOfX);
+
+      result = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t6);
+    } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
+      // For floating-point precision of 12:
+      //
+      //   TwoToFractionalPartOfX =
+      //     0.999892986f +
+      //       (0.696457318f +
+      //         (0.224338339f + 0.792043434e-1f * x) * x) * x;
+      //
+      // 0.000107046256 error, which is 13 to 14 bits
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0x3da235e3));
+      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x3e65b8f3));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
+                               getF32Constant(DAG, 0x3f324b07));
+      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
+      SDValue t7 = DAG.getNode(ISD::FADD, dl, MVT::f32, t6,
+                               getF32Constant(DAG, 0x3f7ff8fd));
+      SDValue TwoToFracPartOfX = DAG.getNode(ISD::BIT_CONVERT, dl,MVT::i32, t7);
+
+      // Add the exponent into the result in integer domain.
+      SDValue t8 = DAG.getNode(ISD::ADD, dl, MVT::i32,
+                               TwoToFracPartOfX, IntegerPartOfX);
+
+      result = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t8);
+    } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
+      // For floating-point precision of 18:
+      //
+      //   TwoToFractionalPartOfX =
+      //     0.999999982f +
+      //       (0.693148872f +
+      //         (0.240227044f +
+      //           (0.554906021e-1f +
+      //             (0.961591928e-2f +
+      //               (0.136028312e-2f + 0.157059148e-3f *x)*x)*x)*x)*x)*x;
+      //
+      // error 2.47208000*10^(-7), which is better than 18 bits
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0x3924b03e));
+      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x3ab24b87));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
+                               getF32Constant(DAG, 0x3c1d8c17));
+      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
+      SDValue t7 = DAG.getNode(ISD::FADD, dl, MVT::f32, t6,
+                               getF32Constant(DAG, 0x3d634a1d));
+      SDValue t8 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t7, X);
+      SDValue t9 = DAG.getNode(ISD::FADD, dl, MVT::f32, t8,
+                               getF32Constant(DAG, 0x3e75fe14));
+      SDValue t10 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t9, X);
+      SDValue t11 = DAG.getNode(ISD::FADD, dl, MVT::f32, t10,
+                                getF32Constant(DAG, 0x3f317234));
+      SDValue t12 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t11, X);
+      SDValue t13 = DAG.getNode(ISD::FADD, dl, MVT::f32, t12,
+                                getF32Constant(DAG, 0x3f800000));
+      SDValue TwoToFracPartOfX = DAG.getNode(ISD::BIT_CONVERT, dl,
+                                             MVT::i32, t13);
+
+      // Add the exponent into the result in integer domain.
+      SDValue t14 = DAG.getNode(ISD::ADD, dl, MVT::i32,
+                                TwoToFracPartOfX, IntegerPartOfX);
+
+      result = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t14);
+    }
+  } else {
+    // No special expansion.
+    result = DAG.getNode(ISD::FEXP, dl,
+                         getValue(I.getOperand(1)).getValueType(),
+                         getValue(I.getOperand(1)));
+  }
+
+  setValue(&I, result);
+}
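+// The expansion above rests on exp(x) == 2^(x * log2(e)): the integer part
+// of x*log2(e) is shifted into the IEEE exponent field (hence the SHL by 23
+// and the integer-domain ADD), while 2^f for the fractional part f in [0,1)
+// comes from the minimax polynomial picked for the requested precision.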
+
+/// visitLog - Lower a log intrinsic. Handles the special sequences for
+/// limited-precision mode.
+void
+SelectionDAGBuilder::visitLog(CallInst &I) {
+  SDValue result;
+  DebugLoc dl = getCurDebugLoc();
+
+  if (getValue(I.getOperand(1)).getValueType() == MVT::f32 &&
+      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
+    SDValue Op = getValue(I.getOperand(1));
+    SDValue Op1 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, Op);
+
+    // Scale the exponent by log(2) [0.69314718f].
+    SDValue Exp = GetExponent(DAG, Op1, TLI, dl);
+    SDValue LogOfExponent = DAG.getNode(ISD::FMUL, dl, MVT::f32, Exp,
+                                        getF32Constant(DAG, 0x3f317218));
+
+    // Get the significand and build it into a floating-point number with
+    // exponent of 1.
+    SDValue X = GetSignificand(DAG, Op1, dl);
+
+    if (LimitFloatPrecision <= 6) {
+      // For floating-point precision of 6:
+      //
+      //   LogofMantissa =
+      //     -1.1609546f +
+      //       (1.4034025f - 0.23903021f * x) * x;
+      //
+      // error 0.0034276066, which is better than 8 bits
+      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0xbe74c456));
+      SDValue t1 = DAG.getNode(ISD::FADD, dl, MVT::f32, t0,
+                               getF32Constant(DAG, 0x3fb3a2b1));
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
+      SDValue LogOfMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t2,
+                                          getF32Constant(DAG, 0x3f949a29));
+
+      result = DAG.getNode(ISD::FADD, dl,
+                           MVT::f32, LogOfExponent, LogOfMantissa);
+    } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
+      // For floating-point precision of 12:
+      //
+      //   LogOfMantissa =
+      //     -1.7417939f +
+      //       (2.8212026f +
+      //         (-1.4699568f +
+      //           (0.44717955f - 0.56570851e-1f * x) * x) * x) * x;
+      //
+      // error 0.000061011436, which is 14 bits
+      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0xbd67b6d6));
+      SDValue t1 = DAG.getNode(ISD::FADD, dl, MVT::f32, t0,
+                               getF32Constant(DAG, 0x3ee4f4b8));
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
+      SDValue t3 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x3fbc278b));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
+                               getF32Constant(DAG, 0x40348e95));
+      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
+      SDValue LogOfMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t6,
+                                          getF32Constant(DAG, 0x3fdef31a));
+
+      result = DAG.getNode(ISD::FADD, dl,
+                           MVT::f32, LogOfExponent, LogOfMantissa);
+    } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
+      // For floating-point precision of 18:
+      //
+      //   LogOfMantissa =
+      //     -2.1072184f +
+      //       (4.2372794f +
+      //         (-3.7029485f +
+      //           (2.2781945f +
+      //             (-0.87823314f +
+      //               (0.19073739f - 0.17809712e-1f * x) * x) * x) * x) * x)*x;
+      //
+      // error 0.0000023660568, which is better than 18 bits
+      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0xbc91e5ac));
+      SDValue t1 = DAG.getNode(ISD::FADD, dl, MVT::f32, t0,
+                               getF32Constant(DAG, 0x3e4350aa));
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
+      SDValue t3 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x3f60d3e3));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
+                               getF32Constant(DAG, 0x4011cdf0));
+      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
+      SDValue t7 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t6,
+                               getF32Constant(DAG, 0x406cfd1c));
+      SDValue t8 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t7, X);
+      SDValue t9 = DAG.getNode(ISD::FADD, dl, MVT::f32, t8,
+                               getF32Constant(DAG, 0x408797cb));
+      SDValue t10 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t9, X);
+      SDValue LogOfMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t10,
+                                          getF32Constant(DAG, 0x4006dcab));
+
+      result = DAG.getNode(ISD::FADD, dl,
+                           MVT::f32, LogOfExponent, LogOfMantissa);
+    }
+  } else {
+    // No special expansion.
+    result = DAG.getNode(ISD::FLOG, dl,
+                         getValue(I.getOperand(1)).getValueType(),
+                         getValue(I.getOperand(1)));
+  }
+
+  setValue(&I, result);
+}
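+// Mirroring visitExp, this uses x == 2^E * M with M in [1,2):
+// log(x) == E*log(2) + log(M), where E is GetExponent's result scaled by
+// log(2) and log(M) is the precision-limited polynomial in X.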
+
+/// visitLog2 - Lower a log2 intrinsic. Handles the special sequences for
+/// limited-precision mode.
+void
+SelectionDAGBuilder::visitLog2(CallInst &I) {
+  SDValue result;
+  DebugLoc dl = getCurDebugLoc();
+
+  if (getValue(I.getOperand(1)).getValueType() == MVT::f32 &&
+      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
+    SDValue Op = getValue(I.getOperand(1));
+    SDValue Op1 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, Op);
+
+    // Get the exponent.
+    SDValue LogOfExponent = GetExponent(DAG, Op1, TLI, dl);
+
+    // Get the significand and build it into a floating-point number with
+    // exponent of 1.
+    SDValue X = GetSignificand(DAG, Op1, dl);
+
+    // Different possible minimax approximations of significand in
+    // floating-point for various degrees of accuracy over [1,2].
+    if (LimitFloatPrecision <= 6) {
+      // For floating-point precision of 6:
+      //
+      //   Log2ofMantissa = -1.6749035f + (2.0246817f - .34484768f * x) * x;
+      //
+      // error 0.0049451742, which is more than 7 bits
+      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0xbeb08fe0));
+      SDValue t1 = DAG.getNode(ISD::FADD, dl, MVT::f32, t0,
+                               getF32Constant(DAG, 0x40019463));
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
+      SDValue Log2ofMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t2,
+                                           getF32Constant(DAG, 0x3fd6633d));
+
+      result = DAG.getNode(ISD::FADD, dl,
+                           MVT::f32, LogOfExponent, Log2ofMantissa);
+    } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
+      // For floating-point precision of 12:
+      //
+      //   Log2ofMantissa =
+      //     -2.51285454f +
+      //       (4.07009056f +
+      //         (-2.12067489f +
+      //           (.645142248f - 0.816157886e-1f * x) * x) * x) * x;
+      //
+      // error 0.0000876136000, which is better than 13 bits
+      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0xbda7262e));
+      SDValue t1 = DAG.getNode(ISD::FADD, dl, MVT::f32, t0,
+                               getF32Constant(DAG, 0x3f25280b));
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
+      SDValue t3 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x4007b923));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
+                               getF32Constant(DAG, 0x40823e2f));
+      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
+      SDValue Log2ofMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t6,
+                                           getF32Constant(DAG, 0x4020d29c));
+
+      result = DAG.getNode(ISD::FADD, dl,
+                           MVT::f32, LogOfExponent, Log2ofMantissa);
+    } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
+      // For floating-point precision of 18:
+      //
+      //   Log2ofMantissa =
+      //     -3.0400495f +
+      //       (6.1129976f +
+      //         (-5.3420409f +
+      //           (3.2865683f +
+      //             (-1.2669343f +
+      //               (0.27515199f -
+      //                 0.25691327e-1f * x) * x) * x) * x) * x) * x;
+      //
+      // error 0.0000018516, which is better than 18 bits
+      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0xbcd2769e));
+      SDValue t1 = DAG.getNode(ISD::FADD, dl, MVT::f32, t0,
+                               getF32Constant(DAG, 0x3e8ce0b9));
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
+      SDValue t3 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x3fa22ae7));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
+                               getF32Constant(DAG, 0x40525723));
+      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
+      SDValue t7 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t6,
+                               getF32Constant(DAG, 0x40aaf200));
+      SDValue t8 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t7, X);
+      SDValue t9 = DAG.getNode(ISD::FADD, dl, MVT::f32, t8,
+                               getF32Constant(DAG, 0x40c39dad));
+      SDValue t10 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t9, X);
+      SDValue Log2ofMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t10,
+                                           getF32Constant(DAG, 0x4042902c));
+
+      result = DAG.getNode(ISD::FADD, dl,
+                           MVT::f32, LogOfExponent, Log2ofMantissa);
+    }
+  } else {
+    // No special expansion.
+    result = DAG.getNode(ISD::FLOG2, dl,
+                         getValue(I.getOperand(1)).getValueType(),
+                         getValue(I.getOperand(1)));
+  }
+
+  setValue(&I, result);
+}
+
+/// visitLog10 - Lower a log10 intrinsic. Handles the special sequences for
+/// limited-precision mode.
+void
+SelectionDAGBuilder::visitLog10(CallInst &I) {
+  SDValue result;
+  DebugLoc dl = getCurDebugLoc();
+
+  if (getValue(I.getOperand(1)).getValueType() == MVT::f32 &&
+      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
+    SDValue Op = getValue(I.getOperand(1));
+    SDValue Op1 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, Op);
+
+    // Scale the exponent by log10(2) [0.30102999f].
+    SDValue Exp = GetExponent(DAG, Op1, TLI, dl);
+    SDValue LogOfExponent = DAG.getNode(ISD::FMUL, dl, MVT::f32, Exp,
+                                        getF32Constant(DAG, 0x3e9a209a));
+
+    // Get the significand and build it into a floating-point number with
+    // exponent of 1.
+    SDValue X = GetSignificand(DAG, Op1, dl);
+
+    if (LimitFloatPrecision <= 6) {
+      // For floating-point precision of 6:
+      //
+      //   Log10ofMantissa =
+      //     -0.50419619f +
+      //       (0.60948995f - 0.10380950f * x) * x;
+      //
+      // error 0.0014886165, which is 6 bits
+      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0xbdd49a13));
+      SDValue t1 = DAG.getNode(ISD::FADD, dl, MVT::f32, t0,
+                               getF32Constant(DAG, 0x3f1c0789));
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
+      SDValue Log10ofMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t2,
+                                            getF32Constant(DAG, 0x3f011300));
+
+      result = DAG.getNode(ISD::FADD, dl,
+                           MVT::f32, LogOfExponent, Log10ofMantissa);
+    } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
+      // For floating-point precision of 12:
+      //
+      //   Log10ofMantissa =
+      //     -0.64831180f +
+      //       (0.91751397f +
+      //         (-0.31664806f + 0.47637168e-1f * x) * x) * x;
+      //
+      // error 0.00019228036, which is better than 12 bits
+      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0x3d431f31));
+      SDValue t1 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t0,
+                               getF32Constant(DAG, 0x3ea21fb2));
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
+      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x3f6ae232));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue Log10ofMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t4,
+                                            getF32Constant(DAG, 0x3f25f7c3));
+
+      result = DAG.getNode(ISD::FADD, dl,
+                           MVT::f32, LogOfExponent, Log10ofMantissa);
+    } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
+      // For floating-point precision of 18:
+      //
+      //   Log10ofMantissa =
+      //     -0.84299375f +
+      //       (1.5327582f +
+      //         (-1.0688956f +
+      //           (0.49102474f +
+      //             (-0.12539807f + 0.13508273e-1f * x) * x) * x) * x) * x;
+      //
+      // error 0.0000037995730, which is better than 18 bits
+      SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0x3c5d51ce));
+      SDValue t1 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t0,
+                               getF32Constant(DAG, 0x3e00685a));
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t1, X);
+      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x3efb6798));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue t5 = DAG.getNode(ISD::FSUB, dl, MVT::f32, t4,
+                               getF32Constant(DAG, 0x3f88d192));
+      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
+      SDValue t7 = DAG.getNode(ISD::FADD, dl, MVT::f32, t6,
+                               getF32Constant(DAG, 0x3fc4316c));
+      SDValue t8 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t7, X);
+      SDValue Log10ofMantissa = DAG.getNode(ISD::FSUB, dl, MVT::f32, t8,
+                                            getF32Constant(DAG, 0x3f57ce70));
+
+      result = DAG.getNode(ISD::FADD, dl,
+                           MVT::f32, LogOfExponent, Log10ofMantissa);
+    }
+  } else {
+    // No special expansion.
+    result = DAG.getNode(ISD::FLOG10, dl,
+                         getValue(I.getOperand(1)).getValueType(),
+                         getValue(I.getOperand(1)));
+  }
+
+  setValue(&I, result);
+}
+
+/// visitExp2 - Lower an exp2 intrinsic. Handles the special sequences for
+/// limited-precision mode.
+void
+SelectionDAGBuilder::visitExp2(CallInst &I) {
+  SDValue result;
+  DebugLoc dl = getCurDebugLoc();
+
+  if (getValue(I.getOperand(1)).getValueType() == MVT::f32 &&
+      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
+    SDValue Op = getValue(I.getOperand(1));
+
+    SDValue IntegerPartOfX = DAG.getNode(ISD::FP_TO_SINT, dl, MVT::i32, Op);
+
+    //   FractionalPartOfX = x - (float)IntegerPartOfX;
+    SDValue t1 = DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, IntegerPartOfX);
+    SDValue X = DAG.getNode(ISD::FSUB, dl, MVT::f32, Op, t1);
+
+    //   IntegerPartOfX <<= 23;
+    IntegerPartOfX = DAG.getNode(ISD::SHL, dl, MVT::i32, IntegerPartOfX,
+                                 DAG.getConstant(23, TLI.getPointerTy()));
+
+    if (LimitFloatPrecision <= 6) {
+      // For floating-point precision of 6:
+      //
+      //   TwoToFractionalPartOfX =
+      //     0.997535578f +
+      //       (0.735607626f + 0.252464424f * x) * x;
+      //
+      // error 0.0144103317, which is 6 bits
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0x3e814304));
+      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x3f3c50c8));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
+                               getF32Constant(DAG, 0x3f7f5e7e));
+      SDValue t6 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, t5);
+      SDValue TwoToFractionalPartOfX =
+        DAG.getNode(ISD::ADD, dl, MVT::i32, t6, IntegerPartOfX);
+
+      result = DAG.getNode(ISD::BIT_CONVERT, dl,
+                           MVT::f32, TwoToFractionalPartOfX);
+    } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
+      // For floating-point precision of 12:
+      //
+      //   TwoToFractionalPartOfX =
+      //     0.999892986f +
+      //       (0.696457318f +
+      //         (0.224338339f + 0.792043434e-1f * x) * x) * x;
+      //
+      // error 0.000107046256, which is 13 to 14 bits
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0x3da235e3));
+      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x3e65b8f3));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
+                               getF32Constant(DAG, 0x3f324b07));
+      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
+      SDValue t7 = DAG.getNode(ISD::FADD, dl, MVT::f32, t6,
+                               getF32Constant(DAG, 0x3f7ff8fd));
+      SDValue t8 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, t7);
+      SDValue TwoToFractionalPartOfX =
+        DAG.getNode(ISD::ADD, dl, MVT::i32, t8, IntegerPartOfX);
+
+      result = DAG.getNode(ISD::BIT_CONVERT, dl,
+                           MVT::f32, TwoToFractionalPartOfX);
+    } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
+      // For floating-point precision of 18:
+      //
+      //   TwoToFractionalPartOfX =
+      //     0.999999982f +
+      //       (0.693148872f +
+      //         (0.240227044f +
+      //           (0.554906021e-1f +
+      //             (0.961591928e-2f +
+      //               (0.136028312e-2f + 0.157059148e-3f * x) * x) * x) * x) * x) * x;
+      // error 2.47208000*10^(-7), which is better than 18 bits
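+      // (The leading coefficients track the Taylor series of 2^x around 0:
+      // 0x3f317234 is approximately ln 2, and 0x3f800000 is exactly 1.0f,
+      // the constant term.)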
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0x3924b03e));
+      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x3ab24b87));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
+                               getF32Constant(DAG, 0x3c1d8c17));
+      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
+      SDValue t7 = DAG.getNode(ISD::FADD, dl, MVT::f32, t6,
+                               getF32Constant(DAG, 0x3d634a1d));
+      SDValue t8 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t7, X);
+      SDValue t9 = DAG.getNode(ISD::FADD, dl, MVT::f32, t8,
+                               getF32Constant(DAG, 0x3e75fe14));
+      SDValue t10 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t9, X);
+      SDValue t11 = DAG.getNode(ISD::FADD, dl, MVT::f32, t10,
+                                getF32Constant(DAG, 0x3f317234));
+      SDValue t12 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t11, X);
+      SDValue t13 = DAG.getNode(ISD::FADD, dl, MVT::f32, t12,
+                                getF32Constant(DAG, 0x3f800000));
+      SDValue t14 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, t13);
+      SDValue TwoToFractionalPartOfX =
+        DAG.getNode(ISD::ADD, dl, MVT::i32, t14, IntegerPartOfX);
+
+      result = DAG.getNode(ISD::BIT_CONVERT, dl,
+                           MVT::f32, TwoToFractionalPartOfX);
+    }
+  } else {
+    // No special expansion.
+    result = DAG.getNode(ISD::FEXP2, dl,
+                         getValue(I.getOperand(1)).getValueType(),
+                         getValue(I.getOperand(1)));
+  }
+
+  setValue(&I, result);
+}
+
+/// visitPow - Lower a pow intrinsic. Handles the special sequences for
+/// limited-precision mode when the base is 10.0f.
+void
+SelectionDAGBuilder::visitPow(CallInst &I) {
+  SDValue result;
+  Value *Val = I.getOperand(1);
+  DebugLoc dl = getCurDebugLoc();
+  bool IsExp10 = false;
+
+  if (getValue(Val).getValueType() == MVT::f32 &&
+      getValue(I.getOperand(2)).getValueType() == MVT::f32 &&
+      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
+    if (Constant *C = const_cast<Constant*>(dyn_cast<Constant>(Val))) {
+      if (ConstantFP *CFP = dyn_cast<ConstantFP>(C)) {
+        APFloat Ten(10.0f);
+        IsExp10 = CFP->getValueAPF().bitwiseIsEqual(Ten);
+      }
+    }
+  }
+
+  if (IsExp10 && LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
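+    // Lower pow(10.0f, x) as 2^(x * log2(10)); after rescaling the exponent,
+    // the expansion below mirrors visitExp2.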
+    SDValue Op = getValue(I.getOperand(2));
+
+    // Put the exponent in the right bit position for later addition to the
+    // final result:
+    //
+    //   #define LOG2OF10 3.3219281f
+    //   IntegerPartOfX = (int32_t)(x * LOG2OF10);
+    SDValue t0 = DAG.getNode(ISD::FMUL, dl, MVT::f32, Op,
+                             getF32Constant(DAG, 0x40549a78));
+    SDValue IntegerPartOfX = DAG.getNode(ISD::FP_TO_SINT, dl, MVT::i32, t0);
+
+    //   FractionalPartOfX = x - (float)IntegerPartOfX;
+    SDValue t1 = DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, IntegerPartOfX);
+    SDValue X = DAG.getNode(ISD::FSUB, dl, MVT::f32, t0, t1);
+
+    //   IntegerPartOfX <<= 23;
+    IntegerPartOfX = DAG.getNode(ISD::SHL, dl, MVT::i32, IntegerPartOfX,
+                                 DAG.getConstant(23, TLI.getPointerTy()));
+
+    if (LimitFloatPrecision <= 6) {
+      // For floating-point precision of 6:
+      //
+      //   twoToFractionalPartOfX =
+      //     0.997535578f +
+      //       (0.735607626f + 0.252464424f * x) * x;
+      //
+      // error 0.0144103317, which is 6 bits
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0x3e814304));
+      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x3f3c50c8));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
+                               getF32Constant(DAG, 0x3f7f5e7e));
+      SDValue t6 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, t5);
+      SDValue TwoToFractionalPartOfX =
+        DAG.getNode(ISD::ADD, dl, MVT::i32, t6, IntegerPartOfX);
+
+      result = DAG.getNode(ISD::BIT_CONVERT, dl,
+                           MVT::f32, TwoToFractionalPartOfX);
+    } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
+      // For floating-point precision of 12:
+      //
+      //   TwoToFractionalPartOfX =
+      //     0.999892986f +
+      //       (0.696457318f +
+      //         (0.224338339f + 0.792043434e-1f * x) * x) * x;
+      //
+      // error 0.000107046256, which is 13 to 14 bits
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0x3da235e3));
+      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x3e65b8f3));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
+                               getF32Constant(DAG, 0x3f324b07));
+      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
+      SDValue t7 = DAG.getNode(ISD::FADD, dl, MVT::f32, t6,
+                               getF32Constant(DAG, 0x3f7ff8fd));
+      SDValue t8 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, t7);
+      SDValue TwoToFractionalPartOfX =
+        DAG.getNode(ISD::ADD, dl, MVT::i32, t8, IntegerPartOfX);
+
+      result = DAG.getNode(ISD::BIT_CONVERT, dl,
+                           MVT::f32, TwoToFractionalPartOfX);
+    } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
+      // For floating-point precision of 18:
+      //
+      //   TwoToFractionalPartOfX =
+      //     0.999999982f +
+      //       (0.693148872f +
+      //         (0.240227044f +
+      //           (0.554906021e-1f +
+      //             (0.961591928e-2f +
+      //               (0.136028312e-2f + 0.157059148e-3f * x) * x) * x) * x) * x) * x;
+      // error 2.47208000*10^(-7), which is better than 18 bits
+      SDValue t2 = DAG.getNode(ISD::FMUL, dl, MVT::f32, X,
+                               getF32Constant(DAG, 0x3924b03e));
+      SDValue t3 = DAG.getNode(ISD::FADD, dl, MVT::f32, t2,
+                               getF32Constant(DAG, 0x3ab24b87));
+      SDValue t4 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t3, X);
+      SDValue t5 = DAG.getNode(ISD::FADD, dl, MVT::f32, t4,
+                               getF32Constant(DAG, 0x3c1d8c17));
+      SDValue t6 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t5, X);
+      SDValue t7 = DAG.getNode(ISD::FADD, dl, MVT::f32, t6,
+                               getF32Constant(DAG, 0x3d634a1d));
+      SDValue t8 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t7, X);
+      SDValue t9 = DAG.getNode(ISD::FADD, dl, MVT::f32, t8,
+                               getF32Constant(DAG, 0x3e75fe14));
+      SDValue t10 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t9, X);
+      SDValue t11 = DAG.getNode(ISD::FADD, dl, MVT::f32, t10,
+                                getF32Constant(DAG, 0x3f317234));
+      SDValue t12 = DAG.getNode(ISD::FMUL, dl, MVT::f32, t11, X);
+      SDValue t13 = DAG.getNode(ISD::FADD, dl, MVT::f32, t12,
+                                getF32Constant(DAG, 0x3f800000));
+      SDValue t14 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, t13);
+      SDValue TwoToFractionalPartOfX =
+        DAG.getNode(ISD::ADD, dl, MVT::i32, t14, IntegerPartOfX);
+
+      result = DAG.getNode(ISD::BIT_CONVERT, dl,
+                           MVT::f32, TwoToFractionalPartOfX);
+    }
+  } else {
+    // No special expansion.
+    result = DAG.getNode(ISD::FPOW, dl,
+                         getValue(I.getOperand(1)).getValueType(),
+                         getValue(I.getOperand(1)),
+                         getValue(I.getOperand(2)));
+  }
+
+  setValue(&I, result);
+}
+
+/// visitIntrinsicCall - Lower the call to the specified intrinsic function.  If
+/// we want to emit this as a call to a named external function, return the name
+/// otherwise lower it and return null.
+const char *
+SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
+  DebugLoc dl = getCurDebugLoc();
+  switch (Intrinsic) {
+  default:
+    // By default, turn this into a target intrinsic node.
+    visitTargetIntrinsic(I, Intrinsic);
+    return 0;
+  case Intrinsic::vastart:  visitVAStart(I); return 0;
+  case Intrinsic::vaend:    visitVAEnd(I); return 0;
+  case Intrinsic::vacopy:   visitVACopy(I); return 0;
+  case Intrinsic::returnaddress:
+    setValue(&I, DAG.getNode(ISD::RETURNADDR, dl, TLI.getPointerTy(),
+                             getValue(I.getOperand(1))));
+    return 0;
+  case Intrinsic::frameaddress:
+    setValue(&I, DAG.getNode(ISD::FRAMEADDR, dl, TLI.getPointerTy(),
+                             getValue(I.getOperand(1))));
+    return 0;
+  case Intrinsic::setjmp:
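+    // The negated query yields 0 or 1, which is added to the literal's
+    // address: targets without an underscore prefix skip the leading '_',
+    // so the call lowers to "setjmp" (likewise for longjmp below).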
+    return "_setjmp"+!TLI.usesUnderscoreSetJmp();
+    break;
+  case Intrinsic::longjmp:
+    return "_longjmp"+!TLI.usesUnderscoreLongJmp();
+    break;
+  case Intrinsic::memcpy: {
+    SDValue Op1 = getValue(I.getOperand(1));
+    SDValue Op2 = getValue(I.getOperand(2));
+    SDValue Op3 = getValue(I.getOperand(3));
+    unsigned Align = cast<ConstantInt>(I.getOperand(4))->getZExtValue();
+    DAG.setRoot(DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, false,
+                              I.getOperand(1), 0, I.getOperand(2), 0));
+    return 0;
+  }
+  case Intrinsic::memset: {
+    SDValue Op1 = getValue(I.getOperand(1));
+    SDValue Op2 = getValue(I.getOperand(2));
+    SDValue Op3 = getValue(I.getOperand(3));
+    unsigned Align = cast<ConstantInt>(I.getOperand(4))->getZExtValue();
+    DAG.setRoot(DAG.getMemset(getRoot(), dl, Op1, Op2, Op3, Align,
+                              I.getOperand(1), 0));
+    return 0;
+  }
+  case Intrinsic::memmove: {
+    SDValue Op1 = getValue(I.getOperand(1));
+    SDValue Op2 = getValue(I.getOperand(2));
+    SDValue Op3 = getValue(I.getOperand(3));
+    unsigned Align = cast<ConstantInt>(I.getOperand(4))->getZExtValue();
+
+    // If the source and destination are known to not be aliases, we can
+    // lower memmove as memcpy.
+    uint64_t Size = -1ULL;
+    if (ConstantSDNode *C = dyn_cast<ConstantSDNode>(Op3))
+      Size = C->getZExtValue();
+    if (AA->alias(I.getOperand(1), Size, I.getOperand(2), Size) ==
+        AliasAnalysis::NoAlias) {
+      DAG.setRoot(DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, false,
+                                I.getOperand(1), 0, I.getOperand(2), 0));
+      return 0;
+    }
+
+    DAG.setRoot(DAG.getMemmove(getRoot(), dl, Op1, Op2, Op3, Align,
+                               I.getOperand(1), 0, I.getOperand(2), 0));
+    return 0;
+  }
+  case Intrinsic::dbg_stoppoint: 
+  case Intrinsic::dbg_region_start:
+  case Intrinsic::dbg_region_end:
+  case Intrinsic::dbg_func_start:
+    // FIXME - Remove these instructions once the dust settles.
+    return 0;
+  case Intrinsic::dbg_declare: {
+    if (OptLevel != CodeGenOpt::None) 
+      // FIXME: Variable debug info is not supported here.
+      return 0;
+    DwarfWriter *DW = DAG.getDwarfWriter();
+    if (!DW)
+      return 0;
+    DbgDeclareInst &DI = cast<DbgDeclareInst>(I);
+    if (!isValidDebugInfoIntrinsic(DI, CodeGenOpt::None))
+      return 0;
+
+    MDNode *Variable = DI.getVariable();
+    Value *Address = DI.getAddress();
+    if (BitCastInst *BCI = dyn_cast<BitCastInst>(Address))
+      Address = BCI->getOperand(0);
+    AllocaInst *AI = dyn_cast<AllocaInst>(Address);
+    // Don't handle byval struct arguments or VLAs, for example.
+    if (!AI)
+      return 0;
+    DenseMap<const AllocaInst*, int>::iterator SI =
+      FuncInfo.StaticAllocaMap.find(AI);
+    if (SI == FuncInfo.StaticAllocaMap.end()) 
+      return 0; // VLAs.
+    int FI = SI->second;
+
+    MachineModuleInfo *MMI = DAG.getMachineModuleInfo();
+    if (MMI) {
+      MetadataContext &TheMetadata = 
+        DI.getParent()->getContext().getMetadata();
+      unsigned MDDbgKind = TheMetadata.getMDKind("dbg");
+      MDNode *Dbg = TheMetadata.getMD(MDDbgKind, &DI);
+      MMI->setVariableDbgInfo(Variable, FI, Dbg);
+    }
+    return 0;
+  }
+  case Intrinsic::eh_exception: {
+    // Insert the EXCEPTIONADDR instruction.
+    assert(CurMBB->isLandingPad() && "Call to eh.exception not in landing pad!");
+    SDVTList VTs = DAG.getVTList(TLI.getPointerTy(), MVT::Other);
+    SDValue Ops[1];
+    Ops[0] = DAG.getRoot();
+    SDValue Op = DAG.getNode(ISD::EXCEPTIONADDR, dl, VTs, Ops, 1);
+    setValue(&I, Op);
+    DAG.setRoot(Op.getValue(1));
+    return 0;
+  }
+
+  case Intrinsic::eh_selector: {
+    MachineModuleInfo *MMI = DAG.getMachineModuleInfo();
+
+    if (CurMBB->isLandingPad())
+      AddCatchInfo(I, MMI, CurMBB);
+    else {
+#ifndef NDEBUG
+      FuncInfo.CatchInfoLost.insert(&I);
+#endif
+      // FIXME: Mark exception selector register as live in.  Hack for PR1508.
+      unsigned Reg = TLI.getExceptionSelectorRegister();
+      if (Reg) CurMBB->addLiveIn(Reg);
+    }
+
+    // Insert the EHSELECTION instruction.
+    SDVTList VTs = DAG.getVTList(TLI.getPointerTy(), MVT::Other);
+    SDValue Ops[2];
+    Ops[0] = getValue(I.getOperand(1));
+    Ops[1] = getRoot();
+    SDValue Op = DAG.getNode(ISD::EHSELECTION, dl, VTs, Ops, 2);
+
+    DAG.setRoot(Op.getValue(1));
+
+    setValue(&I, DAG.getSExtOrTrunc(Op, dl, MVT::i32));
+    return 0;
+  }
+
+  case Intrinsic::eh_typeid_for: {
+    MachineModuleInfo *MMI = DAG.getMachineModuleInfo();
+
+    if (MMI) {
+      // Find the type id for the given typeinfo.
+      GlobalVariable *GV = ExtractTypeInfo(I.getOperand(1));
+
+      unsigned TypeID = MMI->getTypeIDFor(GV);
+      setValue(&I, DAG.getConstant(TypeID, MVT::i32));
+    } else {
+      // Return something different from eh_selector.
+      setValue(&I, DAG.getConstant(1, MVT::i32));
+    }
+
+    return 0;
+  }
+
+  case Intrinsic::eh_return_i32:
+  case Intrinsic::eh_return_i64:
+    if (MachineModuleInfo *MMI = DAG.getMachineModuleInfo()) {
+      MMI->setCallsEHReturn(true);
+      DAG.setRoot(DAG.getNode(ISD::EH_RETURN, dl,
+                              MVT::Other,
+                              getControlRoot(),
+                              getValue(I.getOperand(1)),
+                              getValue(I.getOperand(2))));
+    } else {
+      setValue(&I, DAG.getConstant(0, TLI.getPointerTy()));
+    }
+
+    return 0;
+  case Intrinsic::eh_unwind_init:
+    if (MachineModuleInfo *MMI = DAG.getMachineModuleInfo()) {
+      MMI->setCallsUnwindInit(true);
+    }
+
+    return 0;
+
+  case Intrinsic::eh_dwarf_cfa: {
+    EVT VT = getValue(I.getOperand(1)).getValueType();
+    SDValue CfaArg = DAG.getSExtOrTrunc(getValue(I.getOperand(1)), dl,
+                                        TLI.getPointerTy());
+
+    SDValue Offset = DAG.getNode(ISD::ADD, dl,
+                                 TLI.getPointerTy(),
+                                 DAG.getNode(ISD::FRAME_TO_ARGS_OFFSET, dl,
+                                             TLI.getPointerTy()),
+                                 CfaArg);
+    setValue(&I, DAG.getNode(ISD::ADD, dl,
+                             TLI.getPointerTy(),
+                             DAG.getNode(ISD::FRAMEADDR, dl,
+                                         TLI.getPointerTy(),
+                                         DAG.getConstant(0,
+                                                         TLI.getPointerTy())),
+                             Offset));
+    return 0;
+  }
+  case Intrinsic::convertff:
+  case Intrinsic::convertfsi:
+  case Intrinsic::convertfui:
+  case Intrinsic::convertsif:
+  case Intrinsic::convertuif:
+  case Intrinsic::convertss:
+  case Intrinsic::convertsu:
+  case Intrinsic::convertus:
+  case Intrinsic::convertuu: {
+    ISD::CvtCode Code = ISD::CVT_INVALID;
+    switch (Intrinsic) {
+    case Intrinsic::convertff:  Code = ISD::CVT_FF; break;
+    case Intrinsic::convertfsi: Code = ISD::CVT_FS; break;
+    case Intrinsic::convertfui: Code = ISD::CVT_FU; break;
+    case Intrinsic::convertsif: Code = ISD::CVT_SF; break;
+    case Intrinsic::convertuif: Code = ISD::CVT_UF; break;
+    case Intrinsic::convertss:  Code = ISD::CVT_SS; break;
+    case Intrinsic::convertsu:  Code = ISD::CVT_SU; break;
+    case Intrinsic::convertus:  Code = ISD::CVT_US; break;
+    case Intrinsic::convertuu:  Code = ISD::CVT_UU; break;
+    }
+    EVT DestVT = TLI.getValueType(I.getType());
+    Value* Op1 = I.getOperand(1);
+    setValue(&I, DAG.getConvertRndSat(DestVT, getCurDebugLoc(), getValue(Op1),
+                                DAG.getValueType(DestVT),
+                                DAG.getValueType(getValue(Op1).getValueType()),
+                                getValue(I.getOperand(2)),
+                                getValue(I.getOperand(3)),
+                                Code));
+    return 0;
+  }
+
+  case Intrinsic::sqrt:
+    setValue(&I, DAG.getNode(ISD::FSQRT, dl,
+                             getValue(I.getOperand(1)).getValueType(),
+                             getValue(I.getOperand(1))));
+    return 0;
+  case Intrinsic::powi:
+    setValue(&I, DAG.getNode(ISD::FPOWI, dl,
+                             getValue(I.getOperand(1)).getValueType(),
+                             getValue(I.getOperand(1)),
+                             getValue(I.getOperand(2))));
+    return 0;
+  case Intrinsic::sin:
+    setValue(&I, DAG.getNode(ISD::FSIN, dl,
+                             getValue(I.getOperand(1)).getValueType(),
+                             getValue(I.getOperand(1))));
+    return 0;
+  case Intrinsic::cos:
+    setValue(&I, DAG.getNode(ISD::FCOS, dl,
+                             getValue(I.getOperand(1)).getValueType(),
+                             getValue(I.getOperand(1))));
+    return 0;
+  case Intrinsic::log:
+    visitLog(I);
+    return 0;
+  case Intrinsic::log2:
+    visitLog2(I);
+    return 0;
+  case Intrinsic::log10:
+    visitLog10(I);
+    return 0;
+  case Intrinsic::exp:
+    visitExp(I);
+    return 0;
+  case Intrinsic::exp2:
+    visitExp2(I);
+    return 0;
+  case Intrinsic::pow:
+    visitPow(I);
+    return 0;
+  case Intrinsic::pcmarker: {
+    SDValue Tmp = getValue(I.getOperand(1));
+    DAG.setRoot(DAG.getNode(ISD::PCMARKER, dl, MVT::Other, getRoot(), Tmp));
+    return 0;
+  }
+  case Intrinsic::readcyclecounter: {
+    SDValue Op = getRoot();
+    SDValue Tmp = DAG.getNode(ISD::READCYCLECOUNTER, dl,
+                              DAG.getVTList(MVT::i64, MVT::Other),
+                              &Op, 1);
+    setValue(&I, Tmp);
+    DAG.setRoot(Tmp.getValue(1));
+    return 0;
+  }
+  case Intrinsic::bswap:
+    setValue(&I, DAG.getNode(ISD::BSWAP, dl,
+                             getValue(I.getOperand(1)).getValueType(),
+                             getValue(I.getOperand(1))));
+    return 0;
+  case Intrinsic::cttz: {
+    SDValue Arg = getValue(I.getOperand(1));
+    EVT Ty = Arg.getValueType();
+    SDValue result = DAG.getNode(ISD::CTTZ, dl, Ty, Arg);
+    setValue(&I, result);
+    return 0;
+  }
+  case Intrinsic::ctlz: {
+    SDValue Arg = getValue(I.getOperand(1));
+    EVT Ty = Arg.getValueType();
+    SDValue result = DAG.getNode(ISD::CTLZ, dl, Ty, Arg);
+    setValue(&I, result);
+    return 0;
+  }
+  case Intrinsic::ctpop: {
+    SDValue Arg = getValue(I.getOperand(1));
+    EVT Ty = Arg.getValueType();
+    SDValue result = DAG.getNode(ISD::CTPOP, dl, Ty, Arg);
+    setValue(&I, result);
+    return 0;
+  }
+  case Intrinsic::stacksave: {
+    SDValue Op = getRoot();
+    SDValue Tmp = DAG.getNode(ISD::STACKSAVE, dl,
+              DAG.getVTList(TLI.getPointerTy(), MVT::Other), &Op, 1);
+    setValue(&I, Tmp);
+    DAG.setRoot(Tmp.getValue(1));
+    return 0;
+  }
+  case Intrinsic::stackrestore: {
+    SDValue Tmp = getValue(I.getOperand(1));
+    DAG.setRoot(DAG.getNode(ISD::STACKRESTORE, dl, MVT::Other, getRoot(), Tmp));
+    return 0;
+  }
+  case Intrinsic::stackprotector: {
+    // Emit code into the DAG to store the stack guard onto the stack.
+    MachineFunction &MF = DAG.getMachineFunction();
+    MachineFrameInfo *MFI = MF.getFrameInfo();
+    EVT PtrTy = TLI.getPointerTy();
+
+    SDValue Src = getValue(I.getOperand(1));   // The guard's value.
+    AllocaInst *Slot = cast<AllocaInst>(I.getOperand(2));
+
+    int FI = FuncInfo.StaticAllocaMap[Slot];
+    MFI->setStackProtectorIndex(FI);
+
+    SDValue FIN = DAG.getFrameIndex(FI, PtrTy);
+
+    // Store the stack protector onto the stack.
+    SDValue Result = DAG.getStore(getRoot(), getCurDebugLoc(), Src, FIN,
+                                  PseudoSourceValue::getFixedStack(FI),
+                                  0, true);
+    setValue(&I, Result);
+    DAG.setRoot(Result);
+    return 0;
+  }
+  case Intrinsic::objectsize: {
+    // If we don't know by now, we're never going to know.
+    ConstantInt *CI = dyn_cast<ConstantInt>(I.getOperand(2));
+
+    assert(CI && "Non-constant type in __builtin_object_size?");
+
+    SDValue Arg = getValue(I.getOperand(0));
+    EVT Ty = Arg.getValueType();
+
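+    // Per __builtin_object_size, types 0 and 1 request the maximum size, so
+    // "unknown" is -1; types 2 and 3 request the minimum, so "unknown" is 0.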
+    if (CI->getZExtValue() < 2)
+      setValue(&I, DAG.getConstant(-1ULL, Ty));
+    else
+      setValue(&I, DAG.getConstant(0, Ty));
+    return 0;
+  }
+  case Intrinsic::var_annotation:
+    // Discard annotate attributes
+    return 0;
+
+  case Intrinsic::init_trampoline: {
+    const Function *F = cast<Function>(I.getOperand(2)->stripPointerCasts());
+
+    SDValue Ops[6];
+    Ops[0] = getRoot();
+    Ops[1] = getValue(I.getOperand(1));
+    Ops[2] = getValue(I.getOperand(2));
+    Ops[3] = getValue(I.getOperand(3));
+    Ops[4] = DAG.getSrcValue(I.getOperand(1));
+    Ops[5] = DAG.getSrcValue(F);
+
+    SDValue Tmp = DAG.getNode(ISD::TRAMPOLINE, dl,
+                              DAG.getVTList(TLI.getPointerTy(), MVT::Other),
+                              Ops, 6);
+
+    setValue(&I, Tmp);
+    DAG.setRoot(Tmp.getValue(1));
+    return 0;
+  }
+
+  case Intrinsic::gcroot:
+    if (GFI) {
+      Value *Alloca = I.getOperand(1);
+      Constant *TypeMap = cast<Constant>(I.getOperand(2));
+
+      FrameIndexSDNode *FI = cast<FrameIndexSDNode>(getValue(Alloca).getNode());
+      GFI->addStackRoot(FI->getIndex(), TypeMap);
+    }
+    return 0;
+
+  case Intrinsic::gcread:
+  case Intrinsic::gcwrite:
+    llvm_unreachable("GC failed to lower gcread/gcwrite intrinsics!");
+    return 0;
+
+  case Intrinsic::flt_rounds: {
+    setValue(&I, DAG.getNode(ISD::FLT_ROUNDS_, dl, MVT::i32));
+    return 0;
+  }
+
+  case Intrinsic::trap: {
+    DAG.setRoot(DAG.getNode(ISD::TRAP, dl, MVT::Other, getRoot()));
+    return 0;
+  }
+
+  case Intrinsic::uadd_with_overflow:
+    return implVisitAluOverflow(I, ISD::UADDO);
+  case Intrinsic::sadd_with_overflow:
+    return implVisitAluOverflow(I, ISD::SADDO);
+  case Intrinsic::usub_with_overflow:
+    return implVisitAluOverflow(I, ISD::USUBO);
+  case Intrinsic::ssub_with_overflow:
+    return implVisitAluOverflow(I, ISD::SSUBO);
+  case Intrinsic::umul_with_overflow:
+    return implVisitAluOverflow(I, ISD::UMULO);
+  case Intrinsic::smul_with_overflow:
+    return implVisitAluOverflow(I, ISD::SMULO);
+
+  case Intrinsic::prefetch: {
+    SDValue Ops[4];
+    Ops[0] = getRoot();
+    Ops[1] = getValue(I.getOperand(1));
+    Ops[2] = getValue(I.getOperand(2));
+    Ops[3] = getValue(I.getOperand(3));
+    DAG.setRoot(DAG.getNode(ISD::PREFETCH, dl, MVT::Other, &Ops[0], 4));
+    return 0;
+  }
+
+  case Intrinsic::memory_barrier: {
+    SDValue Ops[6];
+    Ops[0] = getRoot();
+    for (int x = 1; x < 6; ++x)
+      Ops[x] = getValue(I.getOperand(x));
+
+    DAG.setRoot(DAG.getNode(ISD::MEMBARRIER, dl, MVT::Other, &Ops[0], 6));
+    return 0;
+  }
+  case Intrinsic::atomic_cmp_swap: {
+    SDValue Root = getRoot();
+    SDValue L =
+      DAG.getAtomic(ISD::ATOMIC_CMP_SWAP, getCurDebugLoc(),
+                    getValue(I.getOperand(2)).getValueType().getSimpleVT(),
+                    Root,
+                    getValue(I.getOperand(1)),
+                    getValue(I.getOperand(2)),
+                    getValue(I.getOperand(3)),
+                    I.getOperand(1));
+    setValue(&I, L);
+    DAG.setRoot(L.getValue(1));
+    return 0;
+  }
+  case Intrinsic::atomic_load_add:
+    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_ADD);
+  case Intrinsic::atomic_load_sub:
+    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_SUB);
+  case Intrinsic::atomic_load_or:
+    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_OR);
+  case Intrinsic::atomic_load_xor:
+    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_XOR);
+  case Intrinsic::atomic_load_and:
+    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_AND);
+  case Intrinsic::atomic_load_nand:
+    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_NAND);
+  case Intrinsic::atomic_load_max:
+    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_MAX);
+  case Intrinsic::atomic_load_min:
+    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_MIN);
+  case Intrinsic::atomic_load_umin:
+    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_UMIN);
+  case Intrinsic::atomic_load_umax:
+    return implVisitBinaryAtomic(I, ISD::ATOMIC_LOAD_UMAX);
+  case Intrinsic::atomic_swap:
+    return implVisitBinaryAtomic(I, ISD::ATOMIC_SWAP);
+
+  case Intrinsic::invariant_start:
+  case Intrinsic::lifetime_start:
+    // Discard region information.
+    setValue(&I, DAG.getUNDEF(TLI.getPointerTy()));
+    return 0;
+  case Intrinsic::invariant_end:
+  case Intrinsic::lifetime_end:
+    // Discard region information.
+    return 0;
+  }
+}
+
+/// Test if the given instruction is in a position to be optimized
+/// with a tail-call. This roughly means that it's in a block with
+/// a return and there's nothing that needs to be scheduled
+/// between it and the return.
+///
+/// This function only tests target-independent requirements.
+/// For target-dependent requirements, a target should override
+/// TargetLowering::IsEligibleForTailCallOptimization.
+///
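+/// For example, the call is in tail position in:
+///
+///   %r = call i32 @f(i32 %x)
+///   ret i32 %r
+///
+/// but not if a chained instruction is scheduled between the call and the
+/// return, or if the return value is modified non-trivially on the way out.
+///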
+static bool
+isInTailCallPosition(const Instruction *I, Attributes CalleeRetAttr,
+                     const TargetLowering &TLI) {
+  const BasicBlock *ExitBB = I->getParent();
+  const TerminatorInst *Term = ExitBB->getTerminator();
+  const ReturnInst *Ret = dyn_cast<ReturnInst>(Term);
+  const Function *F = ExitBB->getParent();
+
+  // The block must end in a return statement or an unreachable.
+  if (!Ret && !isa<UnreachableInst>(Term)) return false;
+
+  // If I will have a chain, make sure no other instruction that will have a
+  // chain interposes between I and the return.
+  if (I->mayHaveSideEffects() || I->mayReadFromMemory() ||
+      !I->isSafeToSpeculativelyExecute())
+    for (BasicBlock::const_iterator BBI = prior(prior(ExitBB->end())); ;
+         --BBI) {
+      if (&*BBI == I)
+        break;
+      if (BBI->mayHaveSideEffects() || BBI->mayReadFromMemory() ||
+          !BBI->isSafeToSpeculativelyExecute())
+        return false;
+    }
+
+  // If the block ends with a void return or unreachable, it doesn't matter
+  // what the call's return type is.
+  if (!Ret || Ret->getNumOperands() == 0) return true;
+
+  // If the return value is undef, it doesn't matter what the call's
+  // return type is.
+  if (isa<UndefValue>(Ret->getOperand(0))) return true;
+
+  // Conservatively require the attributes of the call to match those of
+  // the return. Ignore noalias because it doesn't affect the call sequence.
+  unsigned CallerRetAttr = F->getAttributes().getRetAttributes();
+  if ((CalleeRetAttr ^ CallerRetAttr) & ~Attribute::NoAlias)
+    return false;
+
+  // Otherwise, make sure the unmodified return value of I is the return value.
+  for (const Instruction *U = dyn_cast<Instruction>(Ret->getOperand(0)); ;
+       U = dyn_cast<Instruction>(U->getOperand(0))) {
+    if (!U)
+      return false;
+    if (!U->hasOneUse())
+      return false;
+    if (U == I)
+      break;
+    // Check for a truly no-op truncate.
+    if (isa<TruncInst>(U) &&
+        TLI.isTruncateFree(U->getOperand(0)->getType(), U->getType()))
+      continue;
+    // Check for a truly no-op bitcast.
+    if (isa<BitCastInst>(U) &&
+        (U->getOperand(0)->getType() == U->getType() ||
+         (isa<PointerType>(U->getOperand(0)->getType()) &&
+          isa<PointerType>(U->getType()))))
+      continue;
+    // Otherwise it's not a true no-op.
+    return false;
+  }
+
+  return true;
+}
+
+void SelectionDAGBuilder::LowerCallTo(CallSite CS, SDValue Callee,
+                                      bool isTailCall,
+                                      MachineBasicBlock *LandingPad) {
+  const PointerType *PT = cast<PointerType>(CS.getCalledValue()->getType());
+  const FunctionType *FTy = cast<FunctionType>(PT->getElementType());
+  const Type *RetTy = FTy->getReturnType();
+  MachineModuleInfo *MMI = DAG.getMachineModuleInfo();
+  unsigned BeginLabel = 0, EndLabel = 0;
+
+  TargetLowering::ArgListTy Args;
+  TargetLowering::ArgListEntry Entry;
+  Args.reserve(CS.arg_size());
+
+  // Check whether the function can return without sret-demotion.
+  SmallVector<EVT, 4> OutVTs;
+  SmallVector<ISD::ArgFlagsTy, 4> OutsFlags;
+  SmallVector<uint64_t, 4> Offsets;
+  getReturnInfo(RetTy, CS.getAttributes().getRetAttributes(), 
+    OutVTs, OutsFlags, TLI, &Offsets);
+
+  bool CanLowerReturn = TLI.CanLowerReturn(CS.getCallingConv(), 
+                        FTy->isVarArg(), OutVTs, OutsFlags, DAG);
+
+  SDValue DemoteStackSlot;
+
+  if (!CanLowerReturn) {
+    uint64_t TySize = TLI.getTargetData()->getTypeAllocSize(
+                      FTy->getReturnType());
+    unsigned Align  = TLI.getTargetData()->getPrefTypeAlignment(
+                      FTy->getReturnType());
+    MachineFunction &MF = DAG.getMachineFunction();
+    int SSFI = MF.getFrameInfo()->CreateStackObject(TySize, Align, false);
+    const Type *StackSlotPtrType = PointerType::getUnqual(FTy->getReturnType());
+
+    DemoteStackSlot = DAG.getFrameIndex(SSFI, TLI.getPointerTy());
+    Entry.Node = DemoteStackSlot;
+    Entry.Ty = StackSlotPtrType;
+    Entry.isSExt = false;
+    Entry.isZExt = false;
+    Entry.isInReg = false;
+    Entry.isSRet = true;
+    Entry.isNest = false;
+    Entry.isByVal = false;
+    Entry.Alignment = Align;
+    Args.push_back(Entry);
+    RetTy = Type::getVoidTy(FTy->getContext());
+  }
+
+  for (CallSite::arg_iterator i = CS.arg_begin(), e = CS.arg_end();
+       i != e; ++i) {
+    SDValue ArgNode = getValue(*i);
+    Entry.Node = ArgNode; Entry.Ty = (*i)->getType();
+
+    unsigned attrInd = i - CS.arg_begin() + 1;
+    Entry.isSExt  = CS.paramHasAttr(attrInd, Attribute::SExt);
+    Entry.isZExt  = CS.paramHasAttr(attrInd, Attribute::ZExt);
+    Entry.isInReg = CS.paramHasAttr(attrInd, Attribute::InReg);
+    Entry.isSRet  = CS.paramHasAttr(attrInd, Attribute::StructRet);
+    Entry.isNest  = CS.paramHasAttr(attrInd, Attribute::Nest);
+    Entry.isByVal = CS.paramHasAttr(attrInd, Attribute::ByVal);
+    Entry.Alignment = CS.getParamAlignment(attrInd);
+    Args.push_back(Entry);
+  }
+
+  if (LandingPad && MMI) {
+    // Insert a label before the invoke call to mark the try range.  This can be
+    // used to detect deletion of the invoke via the MachineModuleInfo.
+    BeginLabel = MMI->NextLabelID();
+
+    // Both PendingLoads and PendingExports must be flushed here;
+    // this call might not return.
+    (void)getRoot();
+    DAG.setRoot(DAG.getLabel(ISD::EH_LABEL, getCurDebugLoc(),
+                             getControlRoot(), BeginLabel));
+  }
+
+  // Check if target-independent constraints permit a tail call here.
+  // Target-dependent constraints are checked within TLI.LowerCallTo.
+  if (isTailCall &&
+      !isInTailCallPosition(CS.getInstruction(),
+                            CS.getAttributes().getRetAttributes(),
+                            TLI))
+    isTailCall = false;
+
+  std::pair<SDValue,SDValue> Result =
+    TLI.LowerCallTo(getRoot(), RetTy,
+                    CS.paramHasAttr(0, Attribute::SExt),
+                    CS.paramHasAttr(0, Attribute::ZExt), FTy->isVarArg(),
+                    CS.paramHasAttr(0, Attribute::InReg), FTy->getNumParams(),
+                    CS.getCallingConv(),
+                    isTailCall,
+                    !CS.getInstruction()->use_empty(),
+                    Callee, Args, DAG, getCurDebugLoc());
+  assert((isTailCall || Result.second.getNode()) &&
+         "Non-null chain expected with non-tail call!");
+  assert((Result.second.getNode() || !Result.first.getNode()) &&
+         "Null value expected with tail call!");
+  if (Result.first.getNode())
+    setValue(CS.getInstruction(), Result.first);
+  else if (!CanLowerReturn && Result.second.getNode()) {
+    // The instruction result is the result of loading from the
+    // hidden sret parameter.
+    SmallVector<EVT, 1> PVTs;
+    const Type *PtrRetTy = PointerType::getUnqual(FTy->getReturnType());
+
+    ComputeValueVTs(TLI, PtrRetTy, PVTs);
+    assert(PVTs.size() == 1 && "Pointers should fit in one register");
+    EVT PtrVT = PVTs[0];
+    unsigned NumValues = OutVTs.size();
+    SmallVector<SDValue, 4> Values(NumValues);
+    SmallVector<SDValue, 4> Chains(NumValues);
+
+    for (unsigned i = 0; i < NumValues; ++i) {
+      SDValue L = DAG.getLoad(OutVTs[i], getCurDebugLoc(), Result.second,
+        DAG.getNode(ISD::ADD, getCurDebugLoc(), PtrVT, DemoteStackSlot,
+        DAG.getConstant(Offsets[i], PtrVT)),
+        NULL, Offsets[i], false, 1);
+      Values[i] = L;
+      Chains[i] = L.getValue(1);
+    }
+    SDValue Chain = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(),
+                                MVT::Other, &Chains[0], NumValues);
+    PendingLoads.push_back(Chain);
+
+    setValue(CS.getInstruction(), DAG.getNode(ISD::MERGE_VALUES,
+             getCurDebugLoc(), DAG.getVTList(&OutVTs[0], NumValues),
+             &Values[0], NumValues));
+  }
+  // As a special case, a null chain means that a tail call has
+  // been emitted and the DAG root is already updated.
+  if (Result.second.getNode())
+    DAG.setRoot(Result.second);
+  else
+    HasTailCall = true;
+
+  if (LandingPad && MMI) {
+    // Insert a label at the end of the invoke call to mark the try range.  This
+    // can be used to detect deletion of the invoke via the MachineModuleInfo.
+    EndLabel = MMI->NextLabelID();
+    DAG.setRoot(DAG.getLabel(ISD::EH_LABEL, getCurDebugLoc(),
+                             getRoot(), EndLabel));
+
+    // Inform MachineModuleInfo of range.
+    MMI->addInvoke(LandingPad, BeginLabel, EndLabel);
+  }
+}
+
+
+void SelectionDAGBuilder::visitCall(CallInst &I) {
+  const char *RenameFn = 0;
+  if (Function *F = I.getCalledFunction()) {
+    if (F->isDeclaration()) {
+      const TargetIntrinsicInfo *II = TLI.getTargetMachine().getIntrinsicInfo();
+      if (II) {
+        if (unsigned IID = II->getIntrinsicID(F)) {
+          RenameFn = visitIntrinsicCall(I, IID);
+          if (!RenameFn)
+            return;
+        }
+      }
+      if (unsigned IID = F->getIntrinsicID()) {
+        RenameFn = visitIntrinsicCall(I, IID);
+        if (!RenameFn)
+          return;
+      }
+    }
+
+    // Check for well-known libc/libm calls.  If the function is internal, it
+    // can't be a library call.
+    if (!F->hasLocalLinkage() && F->hasName()) {
+      StringRef Name = F->getName();
+      if (Name == "copysign" || Name == "copysignf") {
+        if (I.getNumOperands() == 3 &&   // Basic sanity checks.
+            I.getOperand(1)->getType()->isFloatingPoint() &&
+            I.getType() == I.getOperand(1)->getType() &&
+            I.getType() == I.getOperand(2)->getType()) {
+          SDValue LHS = getValue(I.getOperand(1));
+          SDValue RHS = getValue(I.getOperand(2));
+          setValue(&I, DAG.getNode(ISD::FCOPYSIGN, getCurDebugLoc(),
+                                   LHS.getValueType(), LHS, RHS));
+          return;
+        }
+      } else if (Name == "fabs" || Name == "fabsf" || Name == "fabsl") {
+        if (I.getNumOperands() == 2 &&   // Basic sanity checks.
+            I.getOperand(1)->getType()->isFloatingPoint() &&
+            I.getType() == I.getOperand(1)->getType()) {
+          SDValue Tmp = getValue(I.getOperand(1));
+          setValue(&I, DAG.getNode(ISD::FABS, getCurDebugLoc(),
+                                   Tmp.getValueType(), Tmp));
+          return;
+        }
+      } else if (Name == "sin" || Name == "sinf" || Name == "sinl") {
+        if (I.getNumOperands() == 2 &&   // Basic sanity checks.
+            I.getOperand(1)->getType()->isFloatingPoint() &&
+            I.getType() == I.getOperand(1)->getType() &&
+            I.onlyReadsMemory()) {
+          SDValue Tmp = getValue(I.getOperand(1));
+          setValue(&I, DAG.getNode(ISD::FSIN, getCurDebugLoc(),
+                                   Tmp.getValueType(), Tmp));
+          return;
+        }
+      } else if (Name == "cos" || Name == "cosf" || Name == "cosl") {
+        if (I.getNumOperands() == 2 &&   // Basic sanity checks.
+            I.getOperand(1)->getType()->isFloatingPoint() &&
+            I.getType() == I.getOperand(1)->getType() &&
+            I.onlyReadsMemory()) {
+          SDValue Tmp = getValue(I.getOperand(1));
+          setValue(&I, DAG.getNode(ISD::FCOS, getCurDebugLoc(),
+                                   Tmp.getValueType(), Tmp));
+          return;
+        }
+      } else if (Name == "sqrt" || Name == "sqrtf" || Name == "sqrtl") {
+        if (I.getNumOperands() == 2 &&   // Basic sanity checks.
+            I.getOperand(1)->getType()->isFloatingPoint() &&
+            I.getType() == I.getOperand(1)->getType() &&
+            I.onlyReadsMemory()) {
+          SDValue Tmp = getValue(I.getOperand(1));
+          setValue(&I, DAG.getNode(ISD::FSQRT, getCurDebugLoc(),
+                                   Tmp.getValueType(), Tmp));
+          return;
+        }
+      }
+    }
+  } else if (isa<InlineAsm>(I.getOperand(0))) {
+    visitInlineAsm(&I);
+    return;
+  }
+
+  SDValue Callee;
+  if (!RenameFn)
+    Callee = getValue(I.getOperand(0));
+  else
+    Callee = DAG.getExternalSymbol(RenameFn, TLI.getPointerTy());
+
+  // Check if we can potentially perform a tail call. More detailed
+  // checking will be done within LowerCallTo, after more information
+  // about the call is known.
+  bool isTailCall = PerformTailCallOpt && I.isTailCall();
+
+  LowerCallTo(&I, Callee, isTailCall);
+}
+
+
+/// getCopyFromRegs - Emit a series of CopyFromReg nodes that copies from
+/// this value and returns the result as a ValueVT value.  This uses
+/// Chain/Flag as the input and updates them for the output Chain/Flag.
+/// If the Flag pointer is NULL, no flag is used.
+SDValue RegsForValue::getCopyFromRegs(SelectionDAG &DAG, DebugLoc dl,
+                                      SDValue &Chain,
+                                      SDValue *Flag) const {
+  // Assemble the legal parts into the final values.
+  SmallVector<SDValue, 4> Values(ValueVTs.size());
+  SmallVector<SDValue, 8> Parts;
+  for (unsigned Value = 0, Part = 0, e = ValueVTs.size(); Value != e; ++Value) {
+    // Copy the legal parts from the registers.
+    EVT ValueVT = ValueVTs[Value];
+    unsigned NumRegs = TLI->getNumRegisters(*DAG.getContext(), ValueVT);
+    EVT RegisterVT = RegVTs[Value];
+
+    Parts.resize(NumRegs);
+    for (unsigned i = 0; i != NumRegs; ++i) {
+      SDValue P;
+      if (Flag == 0)
+        P = DAG.getCopyFromReg(Chain, dl, Regs[Part+i], RegisterVT);
+      else {
+        P = DAG.getCopyFromReg(Chain, dl, Regs[Part+i], RegisterVT, *Flag);
+        *Flag = P.getValue(2);
+      }
+      Chain = P.getValue(1);
+
+      // If the source register was virtual and if we know something about it,
+      // add an assert node.
+      if (TargetRegisterInfo::isVirtualRegister(Regs[Part+i]) &&
+          RegisterVT.isInteger() && !RegisterVT.isVector()) {
+        unsigned SlotNo = Regs[Part+i]-TargetRegisterInfo::FirstVirtualRegister;
+        FunctionLoweringInfo &FLI = DAG.getFunctionLoweringInfo();
+        if (FLI.LiveOutRegInfo.size() > SlotNo) {
+          FunctionLoweringInfo::LiveOutInfo &LOI = FLI.LiveOutRegInfo[SlotNo];
+
+          unsigned RegSize = RegisterVT.getSizeInBits();
+          unsigned NumSignBits = LOI.NumSignBits;
+          unsigned NumZeroBits = LOI.KnownZero.countLeadingOnes();
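+          // Leading ones in the KnownZero mask are high bits known to be
+          // zero; together with NumSignBits this picks the tightest
+          // AssertSext/AssertZext below.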
+
+          // FIXME: We capture more information than the dag can represent.  For
+          // now, just use the tightest assertzext/assertsext possible.
+          bool isSExt = true;
+          EVT FromVT(MVT::Other);
+          if (NumSignBits == RegSize)
+            isSExt = true, FromVT = MVT::i1;   // ASSERT SEXT 1
+          else if (NumZeroBits >= RegSize-1)
+            isSExt = false, FromVT = MVT::i1;  // ASSERT ZEXT 1
+          else if (NumSignBits > RegSize-8)
+            isSExt = true, FromVT = MVT::i8;   // ASSERT SEXT 8
+          else if (NumZeroBits >= RegSize-8)
+            isSExt = false, FromVT = MVT::i8;  // ASSERT ZEXT 8
+          else if (NumSignBits > RegSize-16)
+            isSExt = true, FromVT = MVT::i16;  // ASSERT SEXT 16
+          else if (NumZeroBits >= RegSize-16)
+            isSExt = false, FromVT = MVT::i16; // ASSERT ZEXT 16
+          else if (NumSignBits > RegSize-32)
+            isSExt = true, FromVT = MVT::i32;  // ASSERT SEXT 32
+          else if (NumZeroBits >= RegSize-32)
+            isSExt = false, FromVT = MVT::i32; // ASSERT ZEXT 32
+
+          if (FromVT != MVT::Other) {
+            P = DAG.getNode(isSExt ? ISD::AssertSext : ISD::AssertZext, dl,
+                            RegisterVT, P, DAG.getValueType(FromVT));
+
+          }
+        }
+      }
+
+      Parts[i] = P;
+    }
+
+    Values[Value] = getCopyFromParts(DAG, dl, Parts.begin(),
+                                     NumRegs, RegisterVT, ValueVT);
+    Part += NumRegs;
+    Parts.clear();
+  }
+
+  return DAG.getNode(ISD::MERGE_VALUES, dl,
+                     DAG.getVTList(&ValueVTs[0], ValueVTs.size()),
+                     &Values[0], ValueVTs.size());
+}
+
+/// getCopyToRegs - Emit a series of CopyToReg nodes that copies the
+/// specified value into the registers specified by this object.  This uses
+/// Chain/Flag as the input and updates them for the output Chain/Flag.
+/// If the Flag pointer is NULL, no flag is used.
+void RegsForValue::getCopyToRegs(SDValue Val, SelectionDAG &DAG, DebugLoc dl,
+                                 SDValue &Chain, SDValue *Flag) const {
+  // Get the list of the value's legal parts.
+  unsigned NumRegs = Regs.size();
+  SmallVector<SDValue, 8> Parts(NumRegs);
+  for (unsigned Value = 0, Part = 0, e = ValueVTs.size(); Value != e; ++Value) {
+    EVT ValueVT = ValueVTs[Value];
+    unsigned NumParts = TLI->getNumRegisters(*DAG.getContext(), ValueVT);
+    EVT RegisterVT = RegVTs[Value];
+
+    getCopyToParts(DAG, dl, Val.getValue(Val.getResNo() + Value),
+                   &Parts[Part], NumParts, RegisterVT);
+    Part += NumParts;
+  }
+
+  // Copy the parts into the registers.
+  SmallVector<SDValue, 8> Chains(NumRegs);
+  for (unsigned i = 0; i != NumRegs; ++i) {
+    SDValue Part;
+    if (Flag == 0)
+      Part = DAG.getCopyToReg(Chain, dl, Regs[i], Parts[i]);
+    else {
+      Part = DAG.getCopyToReg(Chain, dl, Regs[i], Parts[i], *Flag);
+      *Flag = Part.getValue(1);
+    }
+    Chains[i] = Part.getValue(0);
+  }
+
+  if (NumRegs == 1 || Flag)
+    // If NumRegs > 1 and Flag is used, then the use of the last CopyToReg is
+    // flagged to it. That is, the CopyToReg nodes and the user are considered
+    // a single scheduling unit. If we create a TokenFactor and return it as
+    // chain, then the TokenFactor is both a predecessor (operand) of the
+    // user as well as a successor (the TF operands are flagged to the user).
+    // c1, f1 = CopyToReg
+    // c2, f2 = CopyToReg
+    // c3     = TokenFactor c1, c2
+    // ...
+    //        = op c3, ..., f2
+    Chain = Chains[NumRegs-1];
+  else
+    Chain = DAG.getNode(ISD::TokenFactor, dl, MVT::Other, &Chains[0], NumRegs);
+}
+
+/// AddInlineAsmOperands - Add this value to the specified inlineasm node
+/// operand list.  This adds the code marker and includes the number of
+/// values added into it.
+void RegsForValue::AddInlineAsmOperands(unsigned Code,
+                                        bool HasMatching,unsigned MatchingIdx,
+                                        SelectionDAG &DAG,
+                                        std::vector<SDValue> &Ops) const {
+  EVT IntPtrTy = DAG.getTargetLoweringInfo().getPointerTy();
+  assert(Regs.size() < (1 << 13) && "Too many inline asm outputs!");
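+  // Flag word layout, as encoded below: bits [2:0] hold the operand code,
+  // bits [15:3] the register count (hence the (1 << 13) assert), and, when
+  // HasMatching is set, bit 31 plus bits [30:16] carry the matching operand
+  // index.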
+  unsigned Flag = Code | (Regs.size() << 3);
+  if (HasMatching)
+    Flag |= 0x80000000 | (MatchingIdx << 16);
+  Ops.push_back(DAG.getTargetConstant(Flag, IntPtrTy));
+  for (unsigned Value = 0, Reg = 0, e = ValueVTs.size(); Value != e; ++Value) {
+    unsigned NumRegs = TLI->getNumRegisters(*DAG.getContext(), ValueVTs[Value]);
+    EVT RegisterVT = RegVTs[Value];
+    for (unsigned i = 0; i != NumRegs; ++i) {
+      assert(Reg < Regs.size() && "Mismatch in # registers expected");
+      Ops.push_back(DAG.getRegister(Regs[Reg++], RegisterVT));
+    }
+  }
+}
+
+/// isAllocatableRegister - If the specified register is safe to allocate,
+/// i.e. it isn't a stack pointer or some other special register, return the
+/// register class for the register.  Otherwise, return null.
+static const TargetRegisterClass *
+isAllocatableRegister(unsigned Reg, MachineFunction &MF,
+                      const TargetLowering &TLI,
+                      const TargetRegisterInfo *TRI) {
+  EVT FoundVT = MVT::Other;
+  const TargetRegisterClass *FoundRC = 0;
+  for (TargetRegisterInfo::regclass_iterator RCI = TRI->regclass_begin(),
+       E = TRI->regclass_end(); RCI != E; ++RCI) {
+    EVT ThisVT = MVT::Other;
+
+    const TargetRegisterClass *RC = *RCI;
+    // If none of the value types for this register class are valid, we
+    // can't use it.  For example, 64-bit reg classes on 32-bit targets.
+    for (TargetRegisterClass::vt_iterator I = RC->vt_begin(), E = RC->vt_end();
+         I != E; ++I) {
+      if (TLI.isTypeLegal(*I)) {
+        // If we have already found this register in a different register class,
+        // choose the one with the largest VT specified.  For example, on
+        // PowerPC, we favor f64 register classes over f32.
+        if (FoundVT == MVT::Other || FoundVT.bitsLT(*I)) {
+          ThisVT = *I;
+          break;
+        }
+      }
+    }
+
+    if (ThisVT == MVT::Other) continue;
+
+    // NOTE: This isn't ideal.  In particular, this might allocate the
+    // frame pointer in functions that need it (because it hasn't been
+    // removed from the allocation order yet, since a variable-sized
+    // allocation hasn't been seen).  This is a slight code pessimization,
+    // but should still work.
+    for (TargetRegisterClass::iterator I = RC->allocation_order_begin(MF),
+         E = RC->allocation_order_end(MF); I != E; ++I)
+      if (*I == Reg) {
+        // We found a matching register class.  Keep looking at others in case
+        // we find one with larger registers that this physreg is also in.
+        FoundRC = RC;
+        FoundVT = ThisVT;
+        break;
+      }
+  }
+  return FoundRC;
+}
+
+
+namespace llvm {
+/// AsmOperandInfo - This contains information for each constraint that we are
+/// lowering.
+class VISIBILITY_HIDDEN SDISelAsmOperandInfo :
+    public TargetLowering::AsmOperandInfo {
+public:
+  /// CallOperand - If this is the result output operand or a clobber
+  /// this is null, otherwise it is the incoming operand to the CallInst.
+  /// This gets modified as the asm is processed.
+  SDValue CallOperand;
+
+  /// AssignedRegs - If this is a register or register class operand, this
+  /// contains the set of registers corresponding to the operand.
+  RegsForValue AssignedRegs;
+
+  explicit SDISelAsmOperandInfo(const InlineAsm::ConstraintInfo &info)
+    : TargetLowering::AsmOperandInfo(info), CallOperand(0,0) {
+  }
+
+  /// MarkAllocatedRegs - Once AssignedRegs is set, mark the assigned registers
+  /// busy in OutputRegs/InputRegs.
+  void MarkAllocatedRegs(bool isOutReg, bool isInReg,
+                         std::set<unsigned> &OutputRegs,
+                         std::set<unsigned> &InputRegs,
+                         const TargetRegisterInfo &TRI) const {
+    if (isOutReg) {
+      for (unsigned i = 0, e = AssignedRegs.Regs.size(); i != e; ++i)
+        MarkRegAndAliases(AssignedRegs.Regs[i], OutputRegs, TRI);
+    }
+    if (isInReg) {
+      for (unsigned i = 0, e = AssignedRegs.Regs.size(); i != e; ++i)
+        MarkRegAndAliases(AssignedRegs.Regs[i], InputRegs, TRI);
+    }
+  }
+
+  /// getCallOperandValEVT - Return the EVT of the Value* that this operand
+  /// corresponds to.  If there is no Value* for this operand, it returns
+  /// MVT::Other.
+  EVT getCallOperandValEVT(LLVMContext &Context, 
+                           const TargetLowering &TLI,
+                           const TargetData *TD) const {
+    if (CallOperandVal == 0) return MVT::Other;
+
+    if (isa<BasicBlock>(CallOperandVal))
+      return TLI.getPointerTy();
+
+    const llvm::Type *OpTy = CallOperandVal->getType();
+
+    // If this is an indirect operand, the operand is a pointer to the
+    // accessed type.
+    if (isIndirect)
+      OpTy = cast<PointerType>(OpTy)->getElementType();
+
+    // If OpTy is not a single value, it may be a struct/union that we
+    // can tile with integers.
+    if (!OpTy->isSingleValueType() && OpTy->isSized()) {
+      unsigned BitSize = TD->getTypeSizeInBits(OpTy);
+      switch (BitSize) {
+      default: break;
+      case 1:
+      case 8:
+      case 16:
+      case 32:
+      case 64:
+      case 128:
+        OpTy = IntegerType::get(Context, BitSize);
+        break;
+      }
+    }
+
+    return TLI.getValueType(OpTy, true);
+  }
+
+private:
+  /// MarkRegAndAliases - Mark the specified register and all aliases in the
+  /// specified set.
+  static void MarkRegAndAliases(unsigned Reg, std::set<unsigned> &Regs,
+                                const TargetRegisterInfo &TRI) {
+    assert(TargetRegisterInfo::isPhysicalRegister(Reg) && "Isn't a physreg");
+    Regs.insert(Reg);
+    if (const unsigned *Aliases = TRI.getAliasSet(Reg))
+      for (; *Aliases; ++Aliases)
+        Regs.insert(*Aliases);
+  }
+};
+} // end llvm namespace.
+
+
+/// GetRegistersForValue - Assign registers (virtual or physical) for the
+/// specified operand.  We prefer to assign virtual registers, to allow the
+/// register allocator to handle the assignment process.  However, if the asm
+/// uses features that we can't model on machineinstrs, we have SDISel do the
+/// allocation.  This produces generally horrible, but correct, code.
+///
+///   OpInfo describes the operand.
+///   Input and OutputRegs are the set of already allocated physical registers.
+///
+void SelectionDAGBuilder::
+GetRegistersForValue(SDISelAsmOperandInfo &OpInfo,
+                     std::set<unsigned> &OutputRegs,
+                     std::set<unsigned> &InputRegs) {
+  LLVMContext &Context = FuncInfo.Fn->getContext();
+
+  // Compute whether this value requires an input register, an output register,
+  // or both.
+  bool isOutReg = false;
+  bool isInReg = false;
+  switch (OpInfo.Type) {
+  case InlineAsm::isOutput:
+    isOutReg = true;
+
+    // If there is an input constraint that matches this, we need to reserve
+    // the input register so no other inputs allocate to it.
+    isInReg = OpInfo.hasMatchingInput();
+    break;
+  case InlineAsm::isInput:
+    isInReg = true;
+    isOutReg = false;
+    break;
+  case InlineAsm::isClobber:
+    isOutReg = true;
+    isInReg = true;
+    break;
+  }
+
+
+  MachineFunction &MF = DAG.getMachineFunction();
+  SmallVector<unsigned, 4> Regs;
+
+  // If this is a constraint for a single physreg, or a constraint for a
+  // register class, find it.
+  std::pair<unsigned, const TargetRegisterClass*> PhysReg =
+    TLI.getRegForInlineAsmConstraint(OpInfo.ConstraintCode,
+                                     OpInfo.ConstraintVT);
+
+  unsigned NumRegs = 1;
+  if (OpInfo.ConstraintVT != MVT::Other) {
+    // If this is an FP input in an integer register (or vice versa), insert a bit
+    // cast of the input value.  More generally, handle any case where the input
+    // value disagrees with the register class we plan to stick this in.
+    if (OpInfo.Type == InlineAsm::isInput &&
+        PhysReg.second && !PhysReg.second->hasType(OpInfo.ConstraintVT)) {
+      // Try to convert to the first EVT that the reg class contains.  If the
+      // types have identical size, use a bitcast to convert (e.g. two differing
+      // vector types).
+      EVT RegVT = *PhysReg.second->vt_begin();
+      if (RegVT.getSizeInBits() == OpInfo.ConstraintVT.getSizeInBits()) {
+        OpInfo.CallOperand = DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(),
+                                         RegVT, OpInfo.CallOperand);
+        OpInfo.ConstraintVT = RegVT;
+      } else if (RegVT.isInteger() && OpInfo.ConstraintVT.isFloatingPoint()) {
+        // If the input is an FP value and we want it in integer registers, do a
+        // bitcast to the corresponding integer type.  This turns an f64 value
+        // into i64, which can be passed with two i32 values on a 32-bit
+        // machine.
+        RegVT = EVT::getIntegerVT(Context, 
+                                  OpInfo.ConstraintVT.getSizeInBits());
+        OpInfo.CallOperand = DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(),
+                                         RegVT, OpInfo.CallOperand);
+        OpInfo.ConstraintVT = RegVT;
+      }
+    }
+
+    NumRegs = TLI.getNumRegisters(Context, OpInfo.ConstraintVT);
+  }
+
+  EVT RegVT;
+  EVT ValueVT = OpInfo.ConstraintVT;
+
+  // If this is a constraint for a specific physical register, like {r17},
+  // assign it now.
+  if (unsigned AssignedReg = PhysReg.first) {
+    const TargetRegisterClass *RC = PhysReg.second;
+    if (OpInfo.ConstraintVT == MVT::Other)
+      ValueVT = *RC->vt_begin();
+
+    // Get the actual register value type.  This is important, because the user
+    // may have asked for (e.g.) the AX register in i32 type.  We need to
+    // remember that AX is actually i16 to get the right extension.
+    RegVT = *RC->vt_begin();
+
+    // This is an explicit reference to a physical register.
+    Regs.push_back(AssignedReg);
+
+    // If this is an expanded reference, add the rest of the regs to Regs.
+    if (NumRegs != 1) {
+      TargetRegisterClass::iterator I = RC->begin();
+      for (; *I != AssignedReg; ++I)
+        assert(I != RC->end() && "Didn't find reg!");
+
+      // Already added the first reg.
+      --NumRegs; ++I;
+      for (; NumRegs; --NumRegs, ++I) {
+        assert(I != RC->end() && "Ran out of registers to allocate!");
+        Regs.push_back(*I);
+      }
+    }
+    OpInfo.AssignedRegs = RegsForValue(TLI, Regs, RegVT, ValueVT);
+    const TargetRegisterInfo *TRI = DAG.getTarget().getRegisterInfo();
+    OpInfo.MarkAllocatedRegs(isOutReg, isInReg, OutputRegs, InputRegs, *TRI);
+    return;
+  }
+
+  // Otherwise, if this was a reference to an LLVM register class, create vregs
+  // for this reference.
+  if (const TargetRegisterClass *RC = PhysReg.second) {
+    RegVT = *RC->vt_begin();
+    if (OpInfo.ConstraintVT == MVT::Other)
+      ValueVT = RegVT;
+
+    // Create the appropriate number of virtual registers.
+    MachineRegisterInfo &RegInfo = MF.getRegInfo();
+    for (; NumRegs; --NumRegs)
+      Regs.push_back(RegInfo.createVirtualRegister(RC));
+
+    OpInfo.AssignedRegs = RegsForValue(TLI, Regs, RegVT, ValueVT);
+    return;
+  }
+  
+  // This is a reference to a register class that doesn't directly correspond
+  // to an LLVM register class.  Allocate NumRegs consecutive, available
+  // registers from the class.
+  std::vector<unsigned> RegClassRegs
+    = TLI.getRegClassForInlineAsmConstraint(OpInfo.ConstraintCode,
+                                            OpInfo.ConstraintVT);
+
+  const TargetRegisterInfo *TRI = DAG.getTarget().getRegisterInfo();
+  unsigned NumAllocated = 0;
+  for (unsigned i = 0, e = RegClassRegs.size(); i != e; ++i) {
+    unsigned Reg = RegClassRegs[i];
+    // See if this register is available.
+    if ((isOutReg && OutputRegs.count(Reg)) ||   // Already used.
+        (isInReg  && InputRegs.count(Reg))) {    // Already used.
+      // Make sure we find consecutive registers.
+      NumAllocated = 0;
+      continue;
+    }
+
+    // Check to see if this register is allocatable (i.e. don't give out the
+    // stack pointer).
+    const TargetRegisterClass *RC = isAllocatableRegister(Reg, MF, TLI, TRI);
+    if (!RC) {        // Couldn't allocate this register.
+      // Reset NumAllocated to make sure we return consecutive registers.
+      NumAllocated = 0;
+      continue;
+    }
+
+    // Okay, this register is good, we can use it.
+    ++NumAllocated;
+
+    // If we allocated enough consecutive registers, succeed.
+    if (NumAllocated == NumRegs) {
+      unsigned RegStart = (i-NumAllocated)+1;
+      unsigned RegEnd   = i+1;
+      // Record the consecutive registers we found; they are marked used below.
+      for (unsigned i = RegStart; i != RegEnd; ++i)
+        Regs.push_back(RegClassRegs[i]);
+
+      OpInfo.AssignedRegs = RegsForValue(TLI, Regs, *RC->vt_begin(),
+                                         OpInfo.ConstraintVT);
+      OpInfo.MarkAllocatedRegs(isOutReg, isInReg, OutputRegs, InputRegs, *TRI);
+      return;
+    }
+  }
+
+  // Otherwise, we couldn't allocate enough registers for this.
+}
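+
+// For example, a constraint such as "{ax}" takes the explicit-physreg path
+// above, while "r" takes the register-class path and receives fresh virtual
+// registers; only target-specific constraint classes fall through to the
+// consecutive-register search at the end.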
+
+/// hasInlineAsmMemConstraint - Return true if the inline asm instruction being
+/// processed uses a memory 'm' constraint.
+static bool
+hasInlineAsmMemConstraint(std::vector<InlineAsm::ConstraintInfo> &CInfos,
+                          const TargetLowering &TLI) {
+  for (unsigned i = 0, e = CInfos.size(); i != e; ++i) {
+    InlineAsm::ConstraintInfo &CI = CInfos[i];
+    for (unsigned j = 0, ee = CI.Codes.size(); j != ee; ++j) {
+      TargetLowering::ConstraintType CType = TLI.getConstraintType(CI.Codes[j]);
+      if (CType == TargetLowering::C_Memory)
+        return true;
+    }
+    
+    // Indirect operands access memory.
+    if (CI.isIndirect)
+      return true;
+  }
+
+  return false;
+}
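+
+// For example, an operand with the 'm' constraint maps to C_Memory and makes
+// this return true, as does any operand marked isIndirect.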
+
+/// visitInlineAsm - Handle a call to an InlineAsm object.
+///
+void SelectionDAGBuilder::visitInlineAsm(CallSite CS) {
+  InlineAsm *IA = cast<InlineAsm>(CS.getCalledValue());
+
+  /// ConstraintOperands - Information about all of the constraints.
+  std::vector<SDISelAsmOperandInfo> ConstraintOperands;
+
+  std::set<unsigned> OutputRegs, InputRegs;
+
+  // Do a prepass over the constraints, canonicalizing them, and building up the
+  // ConstraintOperands list.
+  std::vector<InlineAsm::ConstraintInfo>
+    ConstraintInfos = IA->ParseConstraints();
+
+  bool hasMemory = hasInlineAsmMemConstraint(ConstraintInfos, TLI);
+  
+  SDValue Chain, Flag;
+  
+  // We won't need to flush pending loads if this asm doesn't touch
+  // memory and is nonvolatile.
+  if (hasMemory || IA->hasSideEffects())
+    Chain = getRoot();
+  else
+    Chain = DAG.getRoot();
+
+  unsigned ArgNo = 0;   // ArgNo - The argument of the CallInst.
+  unsigned ResNo = 0;   // ResNo - The result number of the next output.
+  for (unsigned i = 0, e = ConstraintInfos.size(); i != e; ++i) {
+    ConstraintOperands.push_back(SDISelAsmOperandInfo(ConstraintInfos[i]));
+    SDISelAsmOperandInfo &OpInfo = ConstraintOperands.back();
+
+    EVT OpVT = MVT::Other;
+
+    // Compute the value type for each operand.
+    switch (OpInfo.Type) {
+    case InlineAsm::isOutput:
+      // Indirect outputs just consume an argument.
+      if (OpInfo.isIndirect) {
+        OpInfo.CallOperandVal = CS.getArgument(ArgNo++);
+        break;
+      }
+
+      // The return value of the call is this value.  As such, there is no
+      // corresponding argument.
+      assert(CS.getType() != Type::getVoidTy(*DAG.getContext()) &&
+             "Bad inline asm!");
+      if (const StructType *STy = dyn_cast<StructType>(CS.getType())) {
+        OpVT = TLI.getValueType(STy->getElementType(ResNo));
+      } else {
+        assert(ResNo == 0 && "Asm only has one result!");
+        OpVT = TLI.getValueType(CS.getType());
+      }
+      ++ResNo;
+      break;
+    case InlineAsm::isInput:
+      OpInfo.CallOperandVal = CS.getArgument(ArgNo++);
+      break;
+    case InlineAsm::isClobber:
+      // Nothing to do.
+      break;
+    }
+
+    // If this is an input or an indirect output, process the call argument.
+    // BasicBlocks are labels, currently appearing only in asms.
+    if (OpInfo.CallOperandVal) {
+      // Strip bitcasts, if any.  This mostly comes up for functions.
+      OpInfo.CallOperandVal = OpInfo.CallOperandVal->stripPointerCasts();
+
+      if (BasicBlock *BB = dyn_cast<BasicBlock>(OpInfo.CallOperandVal)) {
+        OpInfo.CallOperand = DAG.getBasicBlock(FuncInfo.MBBMap[BB]);
+      } else {
+        OpInfo.CallOperand = getValue(OpInfo.CallOperandVal);
+      }
+
+      OpVT = OpInfo.getCallOperandValEVT(*DAG.getContext(), TLI, TD);
+    }
+
+    OpInfo.ConstraintVT = OpVT;
+  }
+
+  // Second pass over the constraints: compute which constraint option to use
+  // and assign registers to constraints that want a specific physreg.
+  for (unsigned i = 0, e = ConstraintInfos.size(); i != e; ++i) {
+    SDISelAsmOperandInfo &OpInfo = ConstraintOperands[i];
+
+    // If this is an output operand with a matching input operand, look up the
+    // matching input. If their types mismatch, e.g. one is an integer, the
+    // other is floating point, or their sizes are different, flag it as an
+    // error.
+    if (OpInfo.hasMatchingInput()) {
+      SDISelAsmOperandInfo &Input = ConstraintOperands[OpInfo.MatchingInput];
+      if (OpInfo.ConstraintVT != Input.ConstraintVT) {
+        if ((OpInfo.ConstraintVT.isInteger() !=
+             Input.ConstraintVT.isInteger()) ||
+            (OpInfo.ConstraintVT.getSizeInBits() !=
+             Input.ConstraintVT.getSizeInBits())) {
+          llvm_report_error("Unsupported asm: input constraint"
+                            " with a matching output constraint of incompatible"
+                            " type!");
+        }
+        Input.ConstraintVT = OpInfo.ConstraintVT;
+      }
+    }
+
+    // Compute the constraint code and ConstraintType to use.
+    TLI.ComputeConstraintToUse(OpInfo, OpInfo.CallOperand, hasMemory, &DAG);
+
+    // If this is a memory input, and if the operand is not indirect, do what we
+    // need to do to provide an address for the memory input.
+    if (OpInfo.ConstraintType == TargetLowering::C_Memory &&
+        !OpInfo.isIndirect) {
+      assert(OpInfo.Type == InlineAsm::isInput &&
+             "Can only indirectify direct input operands!");
+
+      // Memory operands really want the address of the value.  If we don't have
+      // an indirect input, put it in the constant pool if we can, otherwise spill
+      // it to a stack slot.
+
+      // If the operand is a float, integer, or vector constant, spill to a
+      // constant pool entry to get its address.
+      Value *OpVal = OpInfo.CallOperandVal;
+      if (isa<ConstantFP>(OpVal) || isa<ConstantInt>(OpVal) ||
+          isa<ConstantVector>(OpVal)) {
+        OpInfo.CallOperand = DAG.getConstantPool(cast<Constant>(OpVal),
+                                                 TLI.getPointerTy());
+      } else {
+        // Otherwise, create a stack slot and emit a store to it before the
+        // asm.
+        const Type *Ty = OpVal->getType();
+        uint64_t TySize = TLI.getTargetData()->getTypeAllocSize(Ty);
+        unsigned Align  = TLI.getTargetData()->getPrefTypeAlignment(Ty);
+        MachineFunction &MF = DAG.getMachineFunction();
+        int SSFI = MF.getFrameInfo()->CreateStackObject(TySize, Align, false);
+        SDValue StackSlot = DAG.getFrameIndex(SSFI, TLI.getPointerTy());
+        Chain = DAG.getStore(Chain, getCurDebugLoc(),
+                             OpInfo.CallOperand, StackSlot, NULL, 0);
+        OpInfo.CallOperand = StackSlot;
+      }
+
+      // There is no longer a Value* corresponding to this operand.
+      OpInfo.CallOperandVal = 0;
+      // It is now an indirect operand.
+      OpInfo.isIndirect = true;
+    }
+
+    // If this constraint is for a specific register, allocate it before
+    // anything else.
+    if (OpInfo.ConstraintType == TargetLowering::C_Register)
+      GetRegistersForValue(OpInfo, OutputRegs, InputRegs);
+  }
+  ConstraintInfos.clear();
+
+  // Third pass - Loop over all of the operands, assigning virtual or physregs
+  // to register class operands.
+  for (unsigned i = 0, e = ConstraintOperands.size(); i != e; ++i) {
+    SDISelAsmOperandInfo &OpInfo = ConstraintOperands[i];
+
+    // C_Register operands have already been allocated, Other/Memory don't need
+    // to be.
+    if (OpInfo.ConstraintType == TargetLowering::C_RegisterClass)
+      GetRegistersForValue(OpInfo, OutputRegs, InputRegs);
+  }
+
+  // AsmNodeOperands - The operands for the ISD::INLINEASM node.
+  std::vector<SDValue> AsmNodeOperands;
+  AsmNodeOperands.push_back(SDValue());  // reserve space for input chain
+  AsmNodeOperands.push_back(
+          DAG.getTargetExternalSymbol(IA->getAsmString().c_str(), MVT::Other));
+
+  // Loop over all of the inputs, copying the operand values into the
+  // appropriate registers and processing the output regs.
+  RegsForValue RetValRegs;
+
+  // IndirectStoresToEmit - The set of stores to emit after the inline asm node.
+  std::vector<std::pair<RegsForValue, Value*> > IndirectStoresToEmit;
+
+  for (unsigned i = 0, e = ConstraintOperands.size(); i != e; ++i) {
+    SDISelAsmOperandInfo &OpInfo = ConstraintOperands[i];
+
+    switch (OpInfo.Type) {
+    case InlineAsm::isOutput: {
+      if (OpInfo.ConstraintType != TargetLowering::C_RegisterClass &&
+          OpInfo.ConstraintType != TargetLowering::C_Register) {
+        // Memory output, or 'other' output (e.g. 'X' constraint).
+        assert(OpInfo.isIndirect && "Memory output must be indirect operand");
+
+        // Add information to the INLINEASM node to know about this output.
+        unsigned ResOpType = 4/*MEM*/ | (1<<3);
+        AsmNodeOperands.push_back(DAG.getTargetConstant(ResOpType,
+                                                        TLI.getPointerTy()));
+        AsmNodeOperands.push_back(OpInfo.CallOperand);
+        break;
+      }
+
+      // Otherwise, this is a register or register class output.
+
+      // Copy the output from the appropriate register.  Find a register that
+      // we can use.
+      if (OpInfo.AssignedRegs.Regs.empty()) {
+        llvm_report_error("Couldn't allocate output reg for"
+                          " constraint '" + OpInfo.ConstraintCode + "'!");
+      }
+
+      // If this is an indirect operand, store through the pointer after the
+      // asm.
+      if (OpInfo.isIndirect) {
+        IndirectStoresToEmit.push_back(std::make_pair(OpInfo.AssignedRegs,
+                                                      OpInfo.CallOperandVal));
+      } else {
+        // This is the result value of the call.
+        assert(CS.getType() != Type::getVoidTy(*DAG.getContext()) &&
+               "Bad inline asm!");
+        // Concatenate this output onto the outputs list.
+        RetValRegs.append(OpInfo.AssignedRegs);
+      }
+
+      // Add information to the INLINEASM node to know that this register is
+      // set.
+      OpInfo.AssignedRegs.AddInlineAsmOperands(OpInfo.isEarlyClobber ?
+                                               6 /* EARLYCLOBBER REGDEF */ :
+                                               2 /* REGDEF */ ,
+                                               false,
+                                               0,
+                                               DAG, AsmNodeOperands);
+      break;
+    }
+    case InlineAsm::isInput: {
+      SDValue InOperandVal = OpInfo.CallOperand;
+
+      if (OpInfo.isMatchingInputConstraint()) {   // Matching constraint?
+        // If this is required to match an output register we have already set,
+        // just use its register.
+        unsigned OperandNo = OpInfo.getMatchedOperand();
+
+        // Scan until we find the definition we already emitted of this operand.
+        // When we find it, create a RegsForValue operand.
+        unsigned CurOp = 2;  // The first operand.
+        for (; OperandNo; --OperandNo) {
+          // Advance to the next operand.
+          unsigned OpFlag =
+            cast<ConstantSDNode>(AsmNodeOperands[CurOp])->getZExtValue();
+          assert(((OpFlag & 7) == 2 /*REGDEF*/ ||
+                  (OpFlag & 7) == 6 /*EARLYCLOBBER REGDEF*/ ||
+                  (OpFlag & 7) == 4 /*MEM*/) &&
+                 "Skipped past definitions?");
+          CurOp += InlineAsm::getNumOperandRegisters(OpFlag)+1;
+        }
+
+        unsigned OpFlag =
+          cast<ConstantSDNode>(AsmNodeOperands[CurOp])->getZExtValue();
+        if ((OpFlag & 7) == 2 /*REGDEF*/
+            || (OpFlag & 7) == 6 /* EARLYCLOBBER REGDEF */) {
+          // Add (OpFlag&0xffff)>>3 registers to MatchedRegs.
+          if (OpInfo.isIndirect) {
+            llvm_report_error("Don't know how to handle tied indirect "
+                              "register inputs yet!");
+          }
+          RegsForValue MatchedRegs;
+          MatchedRegs.TLI = &TLI;
+          MatchedRegs.ValueVTs.push_back(InOperandVal.getValueType());
+          EVT RegVT = AsmNodeOperands[CurOp+1].getValueType();
+          MatchedRegs.RegVTs.push_back(RegVT);
+          MachineRegisterInfo &RegInfo = DAG.getMachineFunction().getRegInfo();
+          for (unsigned i = 0, e = InlineAsm::getNumOperandRegisters(OpFlag);
+               i != e; ++i)
+            MatchedRegs.Regs.
+              push_back(RegInfo.createVirtualRegister(TLI.getRegClassFor(RegVT)));
+
+          // Use the produced MatchedRegs object to copy the input value into
+          // the newly-created virtual registers.
+          MatchedRegs.getCopyToRegs(InOperandVal, DAG, getCurDebugLoc(),
+                                    Chain, &Flag);
+          MatchedRegs.AddInlineAsmOperands(1 /*REGUSE*/,
+                                           true, OpInfo.getMatchedOperand(),
+                                           DAG, AsmNodeOperands);
+          break;
+        } else {
+          assert(((OpFlag & 7) == 4) && "Unknown matching constraint!");
+          assert((InlineAsm::getNumOperandRegisters(OpFlag)) == 1 &&
+                 "Unexpected number of operands");
+          // Add information to the INLINEASM node to know about this input.
+          // See InlineAsm.h isUseOperandTiedToDef.
+          OpFlag |= 0x80000000 | (OpInfo.getMatchedOperand() << 16);
+          AsmNodeOperands.push_back(DAG.getTargetConstant(OpFlag,
+                                                          TLI.getPointerTy()));
+          AsmNodeOperands.push_back(AsmNodeOperands[CurOp+1]);
+          break;
+        }
+      }
+
+      if (OpInfo.ConstraintType == TargetLowering::C_Other) {
+        assert(!OpInfo.isIndirect &&
+               "Don't know how to handle indirect other inputs yet!");
+
+        std::vector<SDValue> Ops;
+        TLI.LowerAsmOperandForConstraint(InOperandVal, OpInfo.ConstraintCode[0],
+                                         hasMemory, Ops, DAG);
+        if (Ops.empty()) {
+          llvm_report_error("Invalid operand for inline asm"
+                            " constraint '" + OpInfo.ConstraintCode + "'!");
+        }
+
+        // Add information to the INLINEASM node to know about this input.
+        unsigned ResOpType = 3 /*IMM*/ | (Ops.size() << 3);
+        AsmNodeOperands.push_back(DAG.getTargetConstant(ResOpType,
+                                                        TLI.getPointerTy()));
+        AsmNodeOperands.insert(AsmNodeOperands.end(), Ops.begin(), Ops.end());
+        break;
+      } else if (OpInfo.ConstraintType == TargetLowering::C_Memory) {
+        assert(OpInfo.isIndirect && "Operand must be indirect to be a mem!");
+        assert(InOperandVal.getValueType() == TLI.getPointerTy() &&
+               "Memory operands expect pointer values");
+
+        // Add information to the INLINEASM node to know about this input.
+        unsigned ResOpType = 4/*MEM*/ | (1<<3);
+        AsmNodeOperands.push_back(DAG.getTargetConstant(ResOpType,
+                                                        TLI.getPointerTy()));
+        AsmNodeOperands.push_back(InOperandVal);
+        break;
+      }
+
+      assert((OpInfo.ConstraintType == TargetLowering::C_RegisterClass ||
+              OpInfo.ConstraintType == TargetLowering::C_Register) &&
+             "Unknown constraint type!");
+      assert(!OpInfo.isIndirect &&
+             "Don't know how to handle indirect register inputs yet!");
+
+      // Copy the input into the appropriate registers.
+      if (OpInfo.AssignedRegs.Regs.empty()) {
+        llvm_report_error("Couldn't allocate input reg for"
+                          " constraint '"+ OpInfo.ConstraintCode +"'!");
+      }
+
+      OpInfo.AssignedRegs.getCopyToRegs(InOperandVal, DAG, getCurDebugLoc(),
+                                        Chain, &Flag);
+
+      OpInfo.AssignedRegs.AddInlineAsmOperands(1/*REGUSE*/, false, 0,
+                                               DAG, AsmNodeOperands);
+      break;
+    }
+    case InlineAsm::isClobber: {
+      // Add the clobbered value to the operand list, so that the register
+      // allocator is aware that the physreg got clobbered.
+      if (!OpInfo.AssignedRegs.Regs.empty())
+        OpInfo.AssignedRegs.AddInlineAsmOperands(6 /* EARLYCLOBBER REGDEF */,
+                                                 false, 0, DAG,AsmNodeOperands);
+      break;
+    }
+    }
+  }
+
+  // Finish up input operands.
+  AsmNodeOperands[0] = Chain;
+  if (Flag.getNode()) AsmNodeOperands.push_back(Flag);
+
+  Chain = DAG.getNode(ISD::INLINEASM, getCurDebugLoc(),
+                      DAG.getVTList(MVT::Other, MVT::Flag),
+                      &AsmNodeOperands[0], AsmNodeOperands.size());
+  Flag = Chain.getValue(1);
+
+  // If this asm returns a register value, copy the result from that register
+  // and set it as the value of the call.
+  if (!RetValRegs.Regs.empty()) {
+    SDValue Val = RetValRegs.getCopyFromRegs(DAG, getCurDebugLoc(),
+                                             Chain, &Flag);
+
+    // FIXME: Why don't we do this for inline asms with MRVs?
+    if (CS.getType()->isSingleValueType() && CS.getType()->isSized()) {
+      EVT ResultType = TLI.getValueType(CS.getType());
+
+      // If any of the results of the inline asm is a vector, it may have the
+      // wrong width/num elts.  This can happen for register classes that can
+      // contain multiple different value types.  The preg or vreg allocated may
+      // not have the same VT as was expected.  Convert it to the right type
+      // with bit_convert.
+      if (ResultType != Val.getValueType() && Val.getValueType().isVector()) {
+        Val = DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(),
+                          ResultType, Val);
+
+      } else if (ResultType != Val.getValueType() &&
+                 ResultType.isInteger() && Val.getValueType().isInteger()) {
+        // If a result value was tied to an input value, the computed result may
+        // have a wider width than the expected result.  Extract the relevant
+        // portion.
+        Val = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(), ResultType, Val);
+      }
+
+      assert(ResultType == Val.getValueType() && "Asm result value mismatch!");
+    }
+
+    setValue(CS.getInstruction(), Val);
+    // Don't need to use this as a chain in this case.
+    if (!IA->hasSideEffects() && !hasMemory && IndirectStoresToEmit.empty())
+      return;
+  }
+
+  std::vector<std::pair<SDValue, Value*> > StoresToEmit;
+
+  // Process indirect outputs, first output all of the flagged copies out of
+  // physregs.
+  for (unsigned i = 0, e = IndirectStoresToEmit.size(); i != e; ++i) {
+    RegsForValue &OutRegs = IndirectStoresToEmit[i].first;
+    Value *Ptr = IndirectStoresToEmit[i].second;
+    SDValue OutVal = OutRegs.getCopyFromRegs(DAG, getCurDebugLoc(),
+                                             Chain, &Flag);
+    StoresToEmit.push_back(std::make_pair(OutVal, Ptr));
+  }
+
+  // Emit the non-flagged stores from the physregs.
+  SmallVector<SDValue, 8> OutChains;
+  for (unsigned i = 0, e = StoresToEmit.size(); i != e; ++i)
+    OutChains.push_back(DAG.getStore(Chain, getCurDebugLoc(),
+                                    StoresToEmit[i].first,
+                                    getValue(StoresToEmit[i].second),
+                                    StoresToEmit[i].second, 0));
+  if (!OutChains.empty())
+    Chain = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(), MVT::Other,
+                        &OutChains[0], OutChains.size());
+  DAG.setRoot(Chain);
+}
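+
+// As an illustration, an asm call such as
+//   %res = call i32 asm "mov $1, $0", "=r,r,~{flags}"(i32 %x)
+// exercises the passes above: "=r" is assigned output registers that become
+// the call's result, "r" is copied into registers with getCopyToRegs, and the
+// "~{flags}" clobber is recorded as a clobbered physical register.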
+
+void SelectionDAGBuilder::visitVAStart(CallInst &I) {
+  DAG.setRoot(DAG.getNode(ISD::VASTART, getCurDebugLoc(),
+                          MVT::Other, getRoot(),
+                          getValue(I.getOperand(1)),
+                          DAG.getSrcValue(I.getOperand(1))));
+}
+
+void SelectionDAGBuilder::visitVAArg(VAArgInst &I) {
+  SDValue V = DAG.getVAArg(TLI.getValueType(I.getType()), getCurDebugLoc(),
+                           getRoot(), getValue(I.getOperand(0)),
+                           DAG.getSrcValue(I.getOperand(0)));
+  setValue(&I, V);
+  DAG.setRoot(V.getValue(1));
+}
+
+void SelectionDAGBuilder::visitVAEnd(CallInst &I) {
+  DAG.setRoot(DAG.getNode(ISD::VAEND, getCurDebugLoc(),
+                          MVT::Other, getRoot(),
+                          getValue(I.getOperand(1)),
+                          DAG.getSrcValue(I.getOperand(1))));
+}
+
+void SelectionDAGBuilder::visitVACopy(CallInst &I) {
+  DAG.setRoot(DAG.getNode(ISD::VACOPY, getCurDebugLoc(),
+                          MVT::Other, getRoot(),
+                          getValue(I.getOperand(1)),
+                          getValue(I.getOperand(2)),
+                          DAG.getSrcValue(I.getOperand(1)),
+                          DAG.getSrcValue(I.getOperand(2))));
+}
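+
+// These four visitors map the va_arg instruction and the llvm.va_start,
+// llvm.va_end and llvm.va_copy intrinsics directly onto VAARG, VASTART,
+// VAEND and VACOPY nodes, chaining each off the current root so their
+// memory effects stay ordered.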
+
+/// TargetLowering::LowerCallTo - This is the default LowerCallTo
+/// implementation, which just calls LowerCall.
+/// FIXME: When all targets are
+/// migrated to using LowerCall, this hook should be integrated into SDISel.
+std::pair<SDValue, SDValue>
+TargetLowering::LowerCallTo(SDValue Chain, const Type *RetTy,
+                            bool RetSExt, bool RetZExt, bool isVarArg,
+                            bool isInreg, unsigned NumFixedArgs,
+                            CallingConv::ID CallConv, bool isTailCall,
+                            bool isReturnValueUsed,
+                            SDValue Callee,
+                            ArgListTy &Args, SelectionDAG &DAG, DebugLoc dl) {
+
+  assert((!isTailCall || PerformTailCallOpt) &&
+         "isTailCall set when tail-call optimizations are disabled!");
+
+  // Handle all of the outgoing arguments.
+  SmallVector<ISD::OutputArg, 32> Outs;
+  for (unsigned i = 0, e = Args.size(); i != e; ++i) {
+    SmallVector<EVT, 4> ValueVTs;
+    ComputeValueVTs(*this, Args[i].Ty, ValueVTs);
+    for (unsigned Value = 0, NumValues = ValueVTs.size();
+         Value != NumValues; ++Value) {
+      EVT VT = ValueVTs[Value];
+      const Type *ArgTy = VT.getTypeForEVT(RetTy->getContext());
+      SDValue Op = SDValue(Args[i].Node.getNode(),
+                           Args[i].Node.getResNo() + Value);
+      ISD::ArgFlagsTy Flags;
+      unsigned OriginalAlignment =
+        getTargetData()->getABITypeAlignment(ArgTy);
+
+      if (Args[i].isZExt)
+        Flags.setZExt();
+      if (Args[i].isSExt)
+        Flags.setSExt();
+      if (Args[i].isInReg)
+        Flags.setInReg();
+      if (Args[i].isSRet)
+        Flags.setSRet();
+      if (Args[i].isByVal) {
+        Flags.setByVal();
+        const PointerType *Ty = cast<PointerType>(Args[i].Ty);
+        const Type *ElementTy = Ty->getElementType();
+        unsigned FrameAlign = getByValTypeAlignment(ElementTy);
+        unsigned FrameSize  = getTargetData()->getTypeAllocSize(ElementTy);
+        // For ByVal, the alignment should come from the front end.  The back
+        // end will guess if this info is not there, but there are cases it
+        // cannot get right.
+        if (Args[i].Alignment)
+          FrameAlign = Args[i].Alignment;
+        Flags.setByValAlign(FrameAlign);
+        Flags.setByValSize(FrameSize);
+      }
+      if (Args[i].isNest)
+        Flags.setNest();
+      Flags.setOrigAlign(OriginalAlignment);
+
+      EVT PartVT = getRegisterType(RetTy->getContext(), VT);
+      unsigned NumParts = getNumRegisters(RetTy->getContext(), VT);
+      SmallVector<SDValue, 4> Parts(NumParts);
+      ISD::NodeType ExtendKind = ISD::ANY_EXTEND;
+
+      if (Args[i].isSExt)
+        ExtendKind = ISD::SIGN_EXTEND;
+      else if (Args[i].isZExt)
+        ExtendKind = ISD::ZERO_EXTEND;
+
+      getCopyToParts(DAG, dl, Op, &Parts[0], NumParts, PartVT, ExtendKind);
+
+      for (unsigned j = 0; j != NumParts; ++j) {
+        // If it isn't the first piece, the alignment must be 1.
+        ISD::OutputArg MyFlags(Flags, Parts[j], i < NumFixedArgs);
+        if (NumParts > 1 && j == 0)
+          MyFlags.Flags.setSplit();
+        else if (j != 0)
+          MyFlags.Flags.setOrigAlign(1);
+
+        Outs.push_back(MyFlags);
+      }
+    }
+  }
+
+  // Handle the incoming return values from the call.
+  SmallVector<ISD::InputArg, 32> Ins;
+  SmallVector<EVT, 4> RetTys;
+  ComputeValueVTs(*this, RetTy, RetTys);
+  for (unsigned I = 0, E = RetTys.size(); I != E; ++I) {
+    EVT VT = RetTys[I];
+    EVT RegisterVT = getRegisterType(RetTy->getContext(), VT);
+    unsigned NumRegs = getNumRegisters(RetTy->getContext(), VT);
+    for (unsigned i = 0; i != NumRegs; ++i) {
+      ISD::InputArg MyFlags;
+      MyFlags.VT = RegisterVT;
+      MyFlags.Used = isReturnValueUsed;
+      if (RetSExt)
+        MyFlags.Flags.setSExt();
+      if (RetZExt)
+        MyFlags.Flags.setZExt();
+      if (isInreg)
+        MyFlags.Flags.setInReg();
+      Ins.push_back(MyFlags);
+    }
+  }
+
+  // Check if target-dependent constraints permit a tail call here.
+  // Target-independent constraints should be checked by the caller.
+  if (isTailCall &&
+      !IsEligibleForTailCallOptimization(Callee, CallConv, isVarArg, Ins, DAG))
+    isTailCall = false;
+
+  SmallVector<SDValue, 4> InVals;
+  Chain = LowerCall(Chain, Callee, CallConv, isVarArg, isTailCall,
+                    Outs, Ins, dl, DAG, InVals);
+
+  // Verify that the target's LowerCall behaved as expected.
+  assert(Chain.getNode() && Chain.getValueType() == MVT::Other &&
+         "LowerCall didn't return a valid chain!");
+  assert((!isTailCall || InVals.empty()) &&
+         "LowerCall emitted a return value for a tail call!");
+  assert((isTailCall || InVals.size() == Ins.size()) &&
+         "LowerCall didn't emit the correct number of values!");
+  DEBUG(for (unsigned i = 0, e = Ins.size(); i != e; ++i) {
+          assert(InVals[i].getNode() &&
+                 "LowerCall emitted a null value!");
+          assert(Ins[i].VT == InVals[i].getValueType() &&
+                 "LowerCall emitted a value with the wrong type!");
+        });
+
+  // For a tail call, the return value is merely live-out and there aren't
+  // any nodes in the DAG representing it. Return a special value to
+  // indicate that a tail call has been emitted and no more Instructions
+  // should be processed in the current block.
+  if (isTailCall) {
+    DAG.setRoot(Chain);
+    return std::make_pair(SDValue(), SDValue());
+  }
+
+  // Collect the legal value parts into potentially illegal values
+  // that correspond to the original function's return values.
+  ISD::NodeType AssertOp = ISD::DELETED_NODE;
+  if (RetSExt)
+    AssertOp = ISD::AssertSext;
+  else if (RetZExt)
+    AssertOp = ISD::AssertZext;
+  SmallVector<SDValue, 4> ReturnValues;
+  unsigned CurReg = 0;
+  for (unsigned I = 0, E = RetTys.size(); I != E; ++I) {
+    EVT VT = RetTys[I];
+    EVT RegisterVT = getRegisterType(RetTy->getContext(), VT);
+    unsigned NumRegs = getNumRegisters(RetTy->getContext(), VT);
+
+    SDValue ReturnValue =
+      getCopyFromParts(DAG, dl, &InVals[CurReg], NumRegs, RegisterVT, VT,
+                       AssertOp);
+    ReturnValues.push_back(ReturnValue);
+    CurReg += NumRegs;
+  }
+
+  // For a function returning void, there is no return value. We can't create
+  // such a node, so we just return a null return value in that case. In
+  // that case, nothing will actually look at the value.
+  if (ReturnValues.empty())
+    return std::make_pair(SDValue(), Chain);
+
+  SDValue Res = DAG.getNode(ISD::MERGE_VALUES, dl,
+                            DAG.getVTList(&RetTys[0], RetTys.size()),
+                            &ReturnValues[0], ReturnValues.size());
+
+  return std::make_pair(Res, Chain);
+}
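+
+// For example, a call returning i64 on a 32-bit target comes back as two i32
+// values in InVals; getCopyFromParts above reassembles them into a single
+// i64, applying AssertSext/AssertZext when the return value is marked
+// sign- or zero-extended.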
+
+void TargetLowering::LowerOperationWrapper(SDNode *N,
+                                           SmallVectorImpl<SDValue> &Results,
+                                           SelectionDAG &DAG) {
+  SDValue Res = LowerOperation(SDValue(N, 0), DAG);
+  if (Res.getNode())
+    Results.push_back(Res);
+}
+
+SDValue TargetLowering::LowerOperation(SDValue Op, SelectionDAG &DAG) {
+  llvm_unreachable("LowerOperation not implemented for this target!");
+  return SDValue();
+}
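+
+// Targets that mark an operation Custom via setOperationAction are expected
+// to override LowerOperation to handle it, so reaching this default
+// implementation indicates a target bug.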
+
+
+void SelectionDAGBuilder::CopyValueToVirtualRegister(Value *V, unsigned Reg) {
+  SDValue Op = getValue(V);
+  assert((Op.getOpcode() != ISD::CopyFromReg ||
+          cast<RegisterSDNode>(Op.getOperand(1))->getReg() != Reg) &&
+         "Copy from a reg to the same reg!");
+  assert(!TargetRegisterInfo::isPhysicalRegister(Reg) && "Is a physreg");
+
+  RegsForValue RFV(V->getContext(), TLI, Reg, V->getType());
+  SDValue Chain = DAG.getEntryNode();
+  RFV.getCopyToRegs(Op, DAG, getCurDebugLoc(), Chain, 0);
+  PendingExports.push_back(Chain);
+}
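+
+// The copies queued on PendingExports here are flushed as a single token
+// factor by getControlRoot() just before the block's terminator is emitted.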
+
+#include "llvm/CodeGen/SelectionDAGISel.h"
+
+void SelectionDAGISel::LowerArguments(BasicBlock *LLVMBB) {
+  // If this is the entry block, emit arguments.
+  Function &F = *LLVMBB->getParent();
+  SelectionDAG &DAG = SDB->DAG;
+  SDValue OldRoot = DAG.getRoot();
+  DebugLoc dl = SDB->getCurDebugLoc();
+  const TargetData *TD = TLI.getTargetData();
+  SmallVector<ISD::InputArg, 16> Ins;
+
+  // Check whether the function can return without sret-demotion.
+  SmallVector<EVT, 4> OutVTs;
+  SmallVector<ISD::ArgFlagsTy, 4> OutsFlags;
+  getReturnInfo(F.getReturnType(), F.getAttributes().getRetAttributes(), 
+                OutVTs, OutsFlags, TLI);
+  FunctionLoweringInfo &FLI = DAG.getFunctionLoweringInfo();
+
+  FLI.CanLowerReturn = TLI.CanLowerReturn(F.getCallingConv(), F.isVarArg(), 
+    OutVTs, OutsFlags, DAG);
+  if (!FLI.CanLowerReturn) {
+    // Put in an sret pointer parameter before all the other parameters.
+    SmallVector<EVT, 1> ValueVTs;
+    ComputeValueVTs(TLI, PointerType::getUnqual(F.getReturnType()), ValueVTs);
+
+    // NOTE: Assuming that a pointer will never break down to more than one VT
+    // or one register.
+    ISD::ArgFlagsTy Flags;
+    Flags.setSRet();
+    EVT RegisterVT = TLI.getRegisterType(*CurDAG->getContext(), ValueVTs[0]);
+    ISD::InputArg RetArg(Flags, RegisterVT, true);
+    Ins.push_back(RetArg);
+  }
+
+  // Set up the incoming argument description vector.
+  unsigned Idx = 1;
+  for (Function::arg_iterator I = F.arg_begin(), E = F.arg_end();
+       I != E; ++I, ++Idx) {
+    SmallVector<EVT, 4> ValueVTs;
+    ComputeValueVTs(TLI, I->getType(), ValueVTs);
+    bool isArgValueUsed = !I->use_empty();
+    for (unsigned Value = 0, NumValues = ValueVTs.size();
+         Value != NumValues; ++Value) {
+      EVT VT = ValueVTs[Value];
+      const Type *ArgTy = VT.getTypeForEVT(*DAG.getContext());
+      ISD::ArgFlagsTy Flags;
+      unsigned OriginalAlignment =
+        TD->getABITypeAlignment(ArgTy);
+
+      if (F.paramHasAttr(Idx, Attribute::ZExt))
+        Flags.setZExt();
+      if (F.paramHasAttr(Idx, Attribute::SExt))
+        Flags.setSExt();
+      if (F.paramHasAttr(Idx, Attribute::InReg))
+        Flags.setInReg();
+      if (F.paramHasAttr(Idx, Attribute::StructRet))
+        Flags.setSRet();
+      if (F.paramHasAttr(Idx, Attribute::ByVal)) {
+        Flags.setByVal();
+        const PointerType *Ty = cast<PointerType>(I->getType());
+        const Type *ElementTy = Ty->getElementType();
+        unsigned FrameAlign = TLI.getByValTypeAlignment(ElementTy);
+        unsigned FrameSize  = TD->getTypeAllocSize(ElementTy);
+        // For ByVal, the alignment should be passed from the front end.  The
+        // back end will guess if this info is not there, but there are cases
+        // it cannot get right.
+        if (F.getParamAlignment(Idx))
+          FrameAlign = F.getParamAlignment(Idx);
+        Flags.setByValAlign(FrameAlign);
+        Flags.setByValSize(FrameSize);
+      }
+      if (F.paramHasAttr(Idx, Attribute::Nest))
+        Flags.setNest();
+      Flags.setOrigAlign(OriginalAlignment);
+
+      EVT RegisterVT = TLI.getRegisterType(*CurDAG->getContext(), VT);
+      unsigned NumRegs = TLI.getNumRegisters(*CurDAG->getContext(), VT);
+      for (unsigned i = 0; i != NumRegs; ++i) {
+        ISD::InputArg MyFlags(Flags, RegisterVT, isArgValueUsed);
+        if (NumRegs > 1 && i == 0)
+          MyFlags.Flags.setSplit();
+        // If it isn't the first piece, the alignment must be 1.
+        else if (i > 0)
+          MyFlags.Flags.setOrigAlign(1);
+        Ins.push_back(MyFlags);
+      }
+    }
+  }
+
+  // Call the target to set up the argument values.
+  SmallVector<SDValue, 8> InVals;
+  SDValue NewRoot = TLI.LowerFormalArguments(DAG.getRoot(), F.getCallingConv(),
+                                             F.isVarArg(), Ins,
+                                             dl, DAG, InVals);
+
+  // Verify that the target's LowerFormalArguments behaved as expected.
+  assert(NewRoot.getNode() && NewRoot.getValueType() == MVT::Other &&
+         "LowerFormalArguments didn't return a valid chain!");
+  assert(InVals.size() == Ins.size() &&
+         "LowerFormalArguments didn't emit the correct number of values!");
+  DEBUG(for (unsigned i = 0, e = Ins.size(); i != e; ++i) {
+          assert(InVals[i].getNode() &&
+                 "LowerFormalArguments emitted a null value!");
+          assert(Ins[i].VT == InVals[i].getValueType() &&
+                 "LowerFormalArguments emitted a value with the wrong type!");
+        });
+
+  // Update the DAG with the new chain value resulting from argument lowering.
+  DAG.setRoot(NewRoot);
+
+  // Set up the argument values.
+  unsigned i = 0;
+  Idx = 1;
+  if (!FLI.CanLowerReturn) {
+    // Create a virtual register for the sret pointer, and put in a copy
+    // from the sret argument into it.
+    SmallVector<EVT, 1> ValueVTs;
+    ComputeValueVTs(TLI, PointerType::getUnqual(F.getReturnType()), ValueVTs);
+    EVT VT = ValueVTs[0];
+    EVT RegVT = TLI.getRegisterType(*CurDAG->getContext(), VT);
+    ISD::NodeType AssertOp = ISD::DELETED_NODE;
+    SDValue ArgValue = getCopyFromParts(DAG, dl, &InVals[0], 1, RegVT,
+                                        VT, AssertOp);
+
+    MachineFunction& MF = SDB->DAG.getMachineFunction();
+    MachineRegisterInfo& RegInfo = MF.getRegInfo();
+    unsigned SRetReg = RegInfo.createVirtualRegister(TLI.getRegClassFor(RegVT));
+    FLI.DemoteRegister = SRetReg;
+    NewRoot = SDB->DAG.getCopyToReg(NewRoot, SDB->getCurDebugLoc(), SRetReg, ArgValue);
+    DAG.setRoot(NewRoot);
+    
+    // i indexes lowered arguments.  Bump it past the hidden sret argument.
+    // Idx indexes LLVM arguments.  Don't touch it.
+    ++i;
+  }
+  for (Function::arg_iterator I = F.arg_begin(), E = F.arg_end(); I != E;
+      ++I, ++Idx) {
+    SmallVector<SDValue, 4> ArgValues;
+    SmallVector<EVT, 4> ValueVTs;
+    ComputeValueVTs(TLI, I->getType(), ValueVTs);
+    unsigned NumValues = ValueVTs.size();
+    for (unsigned Value = 0; Value != NumValues; ++Value) {
+      EVT VT = ValueVTs[Value];
+      EVT PartVT = TLI.getRegisterType(*CurDAG->getContext(), VT);
+      unsigned NumParts = TLI.getNumRegisters(*CurDAG->getContext(), VT);
+
+      if (!I->use_empty()) {
+        ISD::NodeType AssertOp = ISD::DELETED_NODE;
+        if (F.paramHasAttr(Idx, Attribute::SExt))
+          AssertOp = ISD::AssertSext;
+        else if (F.paramHasAttr(Idx, Attribute::ZExt))
+          AssertOp = ISD::AssertZext;
+
+        ArgValues.push_back(getCopyFromParts(DAG, dl, &InVals[i], NumParts,
+                                             PartVT, VT, AssertOp));
+      }
+      i += NumParts;
+    }
+    if (!I->use_empty()) {
+      SDB->setValue(I, DAG.getMergeValues(&ArgValues[0], NumValues,
+                                          SDB->getCurDebugLoc()));
+      // If this argument is live outside of the entry block, insert a copy from
+      // wherever we got it to the vreg that other BBs will reference it as.
+      SDB->CopyToExportRegsIfNeeded(I);
+    }
+  }
+  assert(i == InVals.size() && "Argument register count mismatch!");
+
+  // Finally, if the target has anything special to do, allow it to do so.
+  // FIXME: this should insert code into the DAG!
+  EmitFunctionEntryCode(F, SDB->DAG.getMachineFunction());
+}
+
+/// Handle PHI nodes in successor blocks.  Emit code into the SelectionDAG to
+/// ensure constants are generated when needed.  Remember the virtual registers
+/// that need to be added to the Machine PHI nodes as input.  We cannot just
+/// directly add them, because expansion might result in multiple MBBs for one
+/// BB.  As such, the start of the BB might correspond to a different MBB than
+/// the end.
+///
+void
+SelectionDAGISel::HandlePHINodesInSuccessorBlocks(BasicBlock *LLVMBB) {
+  TerminatorInst *TI = LLVMBB->getTerminator();
+
+  SmallPtrSet<MachineBasicBlock *, 4> SuccsHandled;
+
+  // Check successor nodes' PHI nodes that expect a constant to be available
+  // from this block.
+  for (unsigned succ = 0, e = TI->getNumSuccessors(); succ != e; ++succ) {
+    BasicBlock *SuccBB = TI->getSuccessor(succ);
+    if (!isa<PHINode>(SuccBB->begin())) continue;
+    MachineBasicBlock *SuccMBB = FuncInfo->MBBMap[SuccBB];
+
+    // If this terminator has multiple identical successors (common for
+    // switches), only handle each succ once.
+    if (!SuccsHandled.insert(SuccMBB)) continue;
+
+    MachineBasicBlock::iterator MBBI = SuccMBB->begin();
+    PHINode *PN;
+
+    // At this point we know that there is a 1-1 correspondence between LLVM PHI
+    // nodes and Machine PHI nodes, but the incoming operands have not been
+    // emitted yet.
+    for (BasicBlock::iterator I = SuccBB->begin();
+         (PN = dyn_cast<PHINode>(I)); ++I) {
+      // Ignore dead PHIs.
+      if (PN->use_empty()) continue;
+
+      unsigned Reg;
+      Value *PHIOp = PN->getIncomingValueForBlock(LLVMBB);
+
+      if (Constant *C = dyn_cast<Constant>(PHIOp)) {
+        unsigned &RegOut = SDB->ConstantsOut[C];
+        if (RegOut == 0) {
+          RegOut = FuncInfo->CreateRegForValue(C);
+          SDB->CopyValueToVirtualRegister(C, RegOut);
+        }
+        Reg = RegOut;
+      } else {
+        Reg = FuncInfo->ValueMap[PHIOp];
+        if (Reg == 0) {
+          assert(isa<AllocaInst>(PHIOp) &&
+                 FuncInfo->StaticAllocaMap.count(cast<AllocaInst>(PHIOp)) &&
+                 "Didn't codegen value into a register!??");
+          Reg = FuncInfo->CreateRegForValue(PHIOp);
+          SDB->CopyValueToVirtualRegister(PHIOp, Reg);
+        }
+      }
+
+      // Remember that this register needs to be added to the machine PHI node as
+      // the input for this MBB.
+      SmallVector<EVT, 4> ValueVTs;
+      ComputeValueVTs(TLI, PN->getType(), ValueVTs);
+      for (unsigned vti = 0, vte = ValueVTs.size(); vti != vte; ++vti) {
+        EVT VT = ValueVTs[vti];
+        unsigned NumRegisters = TLI.getNumRegisters(*CurDAG->getContext(), VT);
+        for (unsigned i = 0, e = NumRegisters; i != e; ++i)
+          SDB->PHINodesToUpdate.push_back(std::make_pair(MBBI++, Reg+i));
+        Reg += NumRegisters;
+      }
+    }
+  }
+  SDB->ConstantsOut.clear();
+}
+
+/// This is the Fast-ISel version of HandlePHINodesInSuccessorBlocks. It only
+/// supports legal types, and it emits MachineInstrs directly instead of
+/// creating SelectionDAG nodes.
+///
+bool
+SelectionDAGISel::HandlePHINodesInSuccessorBlocksFast(BasicBlock *LLVMBB,
+                                                      FastISel *F) {
+  TerminatorInst *TI = LLVMBB->getTerminator();
+
+  SmallPtrSet<MachineBasicBlock *, 4> SuccsHandled;
+  unsigned OrigNumPHINodesToUpdate = SDB->PHINodesToUpdate.size();
+
+  // Check successor nodes' PHI nodes that expect a constant to be available
+  // from this block.
+  for (unsigned succ = 0, e = TI->getNumSuccessors(); succ != e; ++succ) {
+    BasicBlock *SuccBB = TI->getSuccessor(succ);
+    if (!isa<PHINode>(SuccBB->begin())) continue;
+    MachineBasicBlock *SuccMBB = FuncInfo->MBBMap[SuccBB];
+
+    // If this terminator has multiple identical successors (common for
+    // switches), only handle each succ once.
+    if (!SuccsHandled.insert(SuccMBB)) continue;
+
+    MachineBasicBlock::iterator MBBI = SuccMBB->begin();
+    PHINode *PN;
+
+    // At this point we know that there is a 1-1 correspondence between LLVM PHI
+    // nodes and Machine PHI nodes, but the incoming operands have not been
+    // emitted yet.
+    for (BasicBlock::iterator I = SuccBB->begin();
+         (PN = dyn_cast<PHINode>(I)); ++I) {
+      // Ignore dead PHIs.
+      if (PN->use_empty()) continue;
+
+      // Only handle legal types. Two interesting things to note here. First,
+      // by bailing out early, we may leave behind some dead instructions,
+      // since SelectionDAG's HandlePHINodesInSuccessorBlocks will insert its
+      // own moves. Second, this check is necessary because FastISel doesn't
+      // use CreateRegForValue to create registers, so it always creates
+      // exactly one register for each non-void instruction.
+      EVT VT = TLI.getValueType(PN->getType(), /*AllowUnknown=*/true);
+      if (VT == MVT::Other || !TLI.isTypeLegal(VT)) {
+        // Promote MVT::i1.
+        if (VT == MVT::i1)
+          VT = TLI.getTypeToTransformTo(*CurDAG->getContext(), VT);
+        else {
+          SDB->PHINodesToUpdate.resize(OrigNumPHINodesToUpdate);
+          return false;
+        }
+      }
+
+      Value *PHIOp = PN->getIncomingValueForBlock(LLVMBB);
+
+      unsigned Reg = F->getRegForValue(PHIOp);
+      if (Reg == 0) {
+        SDB->PHINodesToUpdate.resize(OrigNumPHINodesToUpdate);
+        return false;
+      }
+      SDB->PHINodesToUpdate.push_back(std::make_pair(MBBI++, Reg));
+    }
+  }
+
+  return true;
+}
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h
new file mode 100644
index 0000000..244f9b5
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h
@@ -0,0 +1,487 @@
+//===-- SelectionDAGBuilder.h - Selection-DAG building --------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This implements routines for translating from LLVM IR into SelectionDAG IR.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef SELECTIONDAGBUILDER_H
+#define SELECTIONDAGBUILDER_H
+
+#include "llvm/Constants.h"
+#include "llvm/CodeGen/SelectionDAG.h"
+#include "llvm/ADT/APInt.h"
+#include "llvm/ADT/DenseMap.h"
+#ifndef NDEBUG
+#include "llvm/ADT/SmallSet.h"
+#endif
+#include "llvm/CodeGen/SelectionDAGNodes.h"
+#include "llvm/CodeGen/ValueTypes.h"
+#include "llvm/Support/CallSite.h"
+#include "llvm/Support/ErrorHandling.h"
+#include <vector>
+#include <set>
+
+namespace llvm {
+
+class AliasAnalysis;
+class AllocaInst;
+class BasicBlock;
+class BitCastInst;
+class BranchInst;
+class CallInst;
+class ExtractElementInst;
+class ExtractValueInst;
+class FCmpInst;
+class FPExtInst;
+class FPToSIInst;
+class FPToUIInst;
+class FPTruncInst;
+class Function;
+class FunctionLoweringInfo;
+class GetElementPtrInst;
+class GCFunctionInfo;
+class ICmpInst;
+class IntToPtrInst;
+class IndirectBrInst;
+class InvokeInst;
+class InsertElementInst;
+class InsertValueInst;
+class Instruction;
+class LoadInst;
+class MachineBasicBlock;
+class MachineFunction;
+class MachineInstr;
+class MachineRegisterInfo;
+class PHINode;
+class PtrToIntInst;
+class ReturnInst;
+class SDISelAsmOperandInfo;
+class SExtInst;
+class SelectInst;
+class ShuffleVectorInst;
+class SIToFPInst;
+class StoreInst;
+class SwitchInst;
+class TargetData;
+class TargetLowering;
+class TruncInst;
+class UIToFPInst;
+class UnreachableInst;
+class UnwindInst;
+class VAArgInst;
+class ZExtInst;
+
+//===----------------------------------------------------------------------===//
+/// SelectionDAGBuilder - This is the common target-independent lowering
+/// implementation that is parameterized by a TargetLowering object.
+/// Also, targets can overload any lowering method.
+///
+class SelectionDAGBuilder {
+  MachineBasicBlock *CurMBB;
+
+  /// CurDebugLoc - current file + line number.  Changes as we build the DAG.
+  DebugLoc CurDebugLoc;
+
+  DenseMap<const Value*, SDValue> NodeMap;
+
+  /// PendingLoads - Loads are not emitted to the program immediately.  We bunch
+  /// them up and then emit token factor nodes when possible.  This allows us to
+  /// get simple disambiguation between loads without worrying about alias
+  /// analysis.
+  SmallVector<SDValue, 8> PendingLoads;
+
+  /// PendingExports - CopyToReg nodes that copy values to virtual registers
+  /// for export to other blocks need to be emitted before any terminator
+  /// instruction, but they have no other ordering requirements. We bunch them
+  /// up and then emit a single tokenfactor for them just before terminator
+  /// instructions.
+  SmallVector<SDValue, 8> PendingExports;
+
+  /// Case - A struct to record the Value for a switch case, and the
+  /// case's target basic block.
+  struct Case {
+    Constant* Low;
+    Constant* High;
+    MachineBasicBlock* BB;
+
+    Case() : Low(0), High(0), BB(0) { }
+    Case(Constant* low, Constant* high, MachineBasicBlock* bb) :
+      Low(low), High(high), BB(bb) { }
+    APInt size() const {
+      const APInt &rHigh = cast<ConstantInt>(High)->getValue();
+      const APInt &rLow  = cast<ConstantInt>(Low)->getValue();
+      return (rHigh - rLow + 1ULL);
+    }
+  };
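+
+  // For example, a Case covering the constant range 3..7 has
+  // size() == 7 - 3 + 1 == 5.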
+
+  struct CaseBits {
+    uint64_t Mask;
+    MachineBasicBlock* BB;
+    unsigned Bits;
+
+    CaseBits(uint64_t mask, MachineBasicBlock* bb, unsigned bits):
+      Mask(mask), BB(bb), Bits(bits) { }
+  };
+
+  typedef std::vector<Case>           CaseVector;
+  typedef std::vector<CaseBits>       CaseBitsVector;
+  typedef CaseVector::iterator        CaseItr;
+  typedef std::pair<CaseItr, CaseItr> CaseRange;
+
+  /// CaseRec - A struct with ctor used in lowering switches to a binary tree
+  /// of conditional branches.
+  struct CaseRec {
+    CaseRec(MachineBasicBlock *bb, Constant *lt, Constant *ge, CaseRange r) :
+    CaseBB(bb), LT(lt), GE(ge), Range(r) {}
+
+    /// CaseBB - The MBB in which to emit the compare and branch
+    MachineBasicBlock *CaseBB;
+    /// LT, GE - If nonzero, we know the current case value must be less-than or
+    /// greater-than-or-equal-to these Constants.
+    Constant *LT;
+    Constant *GE;
+    /// Range - A pair of iterators representing the range of case values to be
+    /// processed at this point in the binary search tree.
+    CaseRange Range;
+  };
+
+  typedef std::vector<CaseRec> CaseRecVector;
+
+  /// The comparison function for sorting the switch case values in the vector.
+  /// WARNING: Case ranges should be disjoint!
+  struct CaseCmp {
+    bool operator () (const Case& C1, const Case& C2) {
+      assert(isa<ConstantInt>(C1.Low) && isa<ConstantInt>(C2.High));
+      const ConstantInt* CI1 = cast<const ConstantInt>(C1.Low);
+      const ConstantInt* CI2 = cast<const ConstantInt>(C2.High);
+      return CI1->getValue().slt(CI2->getValue());
+    }
+  };
+
+  struct CaseBitsCmp {
+    bool operator () (const CaseBits& C1, const CaseBits& C2) {
+      return C1.Bits > C2.Bits;
+    }
+  };
+
+  size_t Clusterify(CaseVector& Cases, const SwitchInst &SI);
+
+  /// CaseBlock - This structure is used to communicate between
+  /// SelectionDAGBuilder and SDISel for the code generation of additional basic
+  /// blocks needed by multi-case switch statements.
+  struct CaseBlock {
+    CaseBlock(ISD::CondCode cc, Value *cmplhs, Value *cmprhs, Value *cmpmiddle,
+              MachineBasicBlock *truebb, MachineBasicBlock *falsebb,
+              MachineBasicBlock *me)
+      : CC(cc), CmpLHS(cmplhs), CmpMHS(cmpmiddle), CmpRHS(cmprhs),
+        TrueBB(truebb), FalseBB(falsebb), ThisBB(me) {}
+    // CC - the condition code to use for the case block's setcc node
+    ISD::CondCode CC;
+    // CmpLHS/CmpRHS/CmpMHS - The LHS/MHS/RHS of the comparison to emit.
+    // Emit by default LHS op RHS. MHS is used for range comparisons:
+    // If MHS is not null: (LHS <= MHS) and (MHS <= RHS).
+    Value *CmpLHS, *CmpMHS, *CmpRHS;
+    // TrueBB/FalseBB - the block to branch to if the setcc is true/false.
+    MachineBasicBlock *TrueBB, *FalseBB;
+    // ThisBB - the block into which to emit the code for the setcc and branches
+    MachineBasicBlock *ThisBB;
+  };
+  struct JumpTable {
+    JumpTable(unsigned R, unsigned J, MachineBasicBlock *M,
+              MachineBasicBlock *D): Reg(R), JTI(J), MBB(M), Default(D) {}
+  
+    /// Reg - the virtual register containing the index of the jump table entry
+    /// to jump to.
+    unsigned Reg;
+    /// JTI - the JumpTableIndex for this jump table in the function.
+    unsigned JTI;
+    /// MBB - the MBB into which to emit the code for the indirect jump.
+    MachineBasicBlock *MBB;
+    /// Default - the MBB of the default bb, which is a successor of the range
+    /// check MBB.  This is used when updating PHI nodes in successors.
+    MachineBasicBlock *Default;
+  };
+  struct JumpTableHeader {
+    JumpTableHeader(APInt F, APInt L, Value* SV, MachineBasicBlock* H,
+                    bool E = false):
+      First(F), Last(L), SValue(SV), HeaderBB(H), Emitted(E) {}
+    APInt First;
+    APInt Last;
+    Value *SValue;
+    MachineBasicBlock *HeaderBB;
+    bool Emitted;
+  };
+  typedef std::pair<JumpTableHeader, JumpTable> JumpTableBlock;
+
+  struct BitTestCase {
+    BitTestCase(uint64_t M, MachineBasicBlock* T, MachineBasicBlock* Tr):
+      Mask(M), ThisBB(T), TargetBB(Tr) { }
+    uint64_t Mask;
+    MachineBasicBlock* ThisBB;
+    MachineBasicBlock* TargetBB;
+  };
+
+  typedef SmallVector<BitTestCase, 3> BitTestInfo;
+
+  struct BitTestBlock {
+    BitTestBlock(APInt F, APInt R, Value* SV,
+                 unsigned Rg, bool E,
+                 MachineBasicBlock* P, MachineBasicBlock* D,
+                 const BitTestInfo& C):
+      First(F), Range(R), SValue(SV), Reg(Rg), Emitted(E),
+      Parent(P), Default(D), Cases(C) { }
+    APInt First;
+    APInt Range;
+    Value  *SValue;
+    unsigned Reg;
+    bool Emitted;
+    MachineBasicBlock *Parent;
+    MachineBasicBlock *Default;
+    BitTestInfo Cases;
+  };
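+
+  // For example, a switch whose cases 1, 3 and 5 all branch to the same block
+  // can be lowered as a single bit test whose Mask is (1<<1)|(1<<3)|(1<<5).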
+
+public:
+  // TLI - This is information that describes the available target features we
+  // need for lowering.  This indicates when operations are unavailable,
+  // implemented with a libcall, etc.
+  TargetLowering &TLI;
+  SelectionDAG &DAG;
+  const TargetData *TD;
+  AliasAnalysis *AA;
+
+  /// SwitchCases - Vector of CaseBlock structures used to communicate
+  /// SwitchInst code generation information.
+  std::vector<CaseBlock> SwitchCases;
+  /// JTCases - Vector of JumpTable structures used to communicate
+  /// SwitchInst code generation information.
+  std::vector<JumpTableBlock> JTCases;
+  /// BitTestCases - Vector of BitTestBlock structures used to communicate
+  /// SwitchInst code generation information.
+  std::vector<BitTestBlock> BitTestCases;
+
+  /// PHINodesToUpdate - A list of phi instructions whose operand list will
+  /// be updated after processing the current basic block.
+  std::vector<std::pair<MachineInstr*, unsigned> > PHINodesToUpdate;
+
+  /// EdgeMapping - If an edge from CurMBB to any MBB is changed (e.g. due to
+  /// scheduler custom lowering), track the change here.
+  DenseMap<MachineBasicBlock*, MachineBasicBlock*> EdgeMapping;
+
+  // Emit PHI-node-operand constants only once even if used by multiple
+  // PHI nodes.
+  DenseMap<Constant*, unsigned> ConstantsOut;
+
+  /// FuncInfo - Information about the function as a whole.
+  ///
+  FunctionLoweringInfo &FuncInfo;
+
+  /// OptLevel - What optimization level we're generating code for.
+  /// 
+  CodeGenOpt::Level OptLevel;
+  
+  /// GFI - Garbage collection metadata for the function.
+  GCFunctionInfo *GFI;
+
+  /// HasTailCall - This is set to true if a call in the current
+  /// block has been translated as a tail call. In this case,
+  /// no subsequent DAG nodes should be created.
+  ///
+  bool HasTailCall;
+
+  LLVMContext *Context;
+
+  SelectionDAGBuilder(SelectionDAG &dag, TargetLowering &tli,
+                      FunctionLoweringInfo &funcinfo,
+                      CodeGenOpt::Level ol)
+    : CurDebugLoc(DebugLoc::getUnknownLoc()), 
+      TLI(tli), DAG(dag), FuncInfo(funcinfo), OptLevel(ol),
+      HasTailCall(false),
+      Context(dag.getContext()) {
+  }
+
+  void init(GCFunctionInfo *gfi, AliasAnalysis &aa);
+
+  /// clear - Clear out the current SelectionDAG and the associated
+  /// state and prepare this SelectionDAGBuilder object to be used
+  /// for a new block. This doesn't clear out information about
+  /// additional blocks that are needed to complete switch lowering
+  /// or PHI node updating; that information is cleared out as it is
+  /// consumed.
+  void clear();
+
+  /// getRoot - Return the current virtual root of the Selection DAG,
+  /// flushing any PendingLoad items. This must be done before emitting
+  /// a store or any other node that may need to be ordered after any
+  /// prior load instructions.
+  ///
+  SDValue getRoot();
+
+  /// getControlRoot - Similar to getRoot, but instead of flushing all the
+  /// PendingLoad items, flush all the PendingExports items. It is necessary
+  /// to do this before emitting a terminator instruction.
+  ///
+  SDValue getControlRoot();
+
+  DebugLoc getCurDebugLoc() const { return CurDebugLoc; }
+  void setCurDebugLoc(DebugLoc dl) { CurDebugLoc = dl; }
+
+  void CopyValueToVirtualRegister(Value *V, unsigned Reg);
+
+  void visit(Instruction &I);
+
+  void visit(unsigned Opcode, User &I);
+
+  void setCurrentBasicBlock(MachineBasicBlock *MBB) { CurMBB = MBB; }
+
+  SDValue getValue(const Value *V);
+
+  void setValue(const Value *V, SDValue NewN) {
+    SDValue &N = NodeMap[V];
+    assert(N.getNode() == 0 && "Already set a value for this node!");
+    N = NewN;
+  }
+  
+  void GetRegistersForValue(SDISelAsmOperandInfo &OpInfo,
+                            std::set<unsigned> &OutputRegs, 
+                            std::set<unsigned> &InputRegs);
+
+  void FindMergedConditions(Value *Cond, MachineBasicBlock *TBB,
+                            MachineBasicBlock *FBB, MachineBasicBlock *CurBB,
+                            unsigned Opc);
+  void EmitBranchForMergedCondition(Value *Cond, MachineBasicBlock *TBB,
+                                    MachineBasicBlock *FBB,
+                                    MachineBasicBlock *CurBB);
+  bool ShouldEmitAsBranches(const std::vector<CaseBlock> &Cases);
+  bool isExportableFromCurrentBlock(Value *V, const BasicBlock *FromBB);
+  void CopyToExportRegsIfNeeded(Value *V);
+  void ExportFromCurrentBlock(Value *V);
+  void LowerCallTo(CallSite CS, SDValue Callee, bool IsTailCall,
+                   MachineBasicBlock *LandingPad = NULL);
+
+private:
+  // Terminator instructions.
+  void visitRet(ReturnInst &I);
+  void visitBr(BranchInst &I);
+  void visitSwitch(SwitchInst &I);
+  void visitIndirectBr(IndirectBrInst &I);
+  void visitUnreachable(UnreachableInst &I) { /* noop */ }
+
+  // Helpers for visitSwitch
+  bool handleSmallSwitchRange(CaseRec& CR,
+                              CaseRecVector& WorkList,
+                              Value* SV,
+                              MachineBasicBlock* Default);
+  bool handleJTSwitchCase(CaseRec& CR,
+                          CaseRecVector& WorkList,
+                          Value* SV,
+                          MachineBasicBlock* Default);
+  bool handleBTSplitSwitchCase(CaseRec& CR,
+                               CaseRecVector& WorkList,
+                               Value* SV,
+                               MachineBasicBlock* Default);
+  bool handleBitTestsSwitchCase(CaseRec& CR,
+                                CaseRecVector& WorkList,
+                                Value* SV,
+                                MachineBasicBlock* Default);  
+public:
+  void visitSwitchCase(CaseBlock &CB);
+  void visitBitTestHeader(BitTestBlock &B);
+  void visitBitTestCase(MachineBasicBlock* NextMBB,
+                        unsigned Reg,
+                        BitTestCase &B);
+  void visitJumpTable(JumpTable &JT);
+  void visitJumpTableHeader(JumpTable &JT, JumpTableHeader &JTH);
+  
+private:
+  // These all get lowered before this pass.
+  void visitInvoke(InvokeInst &I);
+  void visitUnwind(UnwindInst &I);
+
+  void visitBinary(User &I, unsigned OpCode);
+  void visitShift(User &I, unsigned Opcode);
+  void visitAdd(User &I)  { visitBinary(I, ISD::ADD); }
+  void visitFAdd(User &I) { visitBinary(I, ISD::FADD); }
+  void visitSub(User &I)  { visitBinary(I, ISD::SUB); }
+  void visitFSub(User &I);
+  void visitMul(User &I)  { visitBinary(I, ISD::MUL); }
+  void visitFMul(User &I) { visitBinary(I, ISD::FMUL); }
+  void visitURem(User &I) { visitBinary(I, ISD::UREM); }
+  void visitSRem(User &I) { visitBinary(I, ISD::SREM); }
+  void visitFRem(User &I) { visitBinary(I, ISD::FREM); }
+  void visitUDiv(User &I) { visitBinary(I, ISD::UDIV); }
+  void visitSDiv(User &I) { visitBinary(I, ISD::SDIV); }
+  void visitFDiv(User &I) { visitBinary(I, ISD::FDIV); }
+  void visitAnd (User &I) { visitBinary(I, ISD::AND); }
+  void visitOr  (User &I) { visitBinary(I, ISD::OR); }
+  void visitXor (User &I) { visitBinary(I, ISD::XOR); }
+  void visitShl (User &I) { visitShift(I, ISD::SHL); }
+  void visitLShr(User &I) { visitShift(I, ISD::SRL); }
+  void visitAShr(User &I) { visitShift(I, ISD::SRA); }
+  void visitICmp(User &I);
+  void visitFCmp(User &I);
+  // Visit the conversion instructions
+  void visitTrunc(User &I);
+  void visitZExt(User &I);
+  void visitSExt(User &I);
+  void visitFPTrunc(User &I);
+  void visitFPExt(User &I);
+  void visitFPToUI(User &I);
+  void visitFPToSI(User &I);
+  void visitUIToFP(User &I);
+  void visitSIToFP(User &I);
+  void visitPtrToInt(User &I);
+  void visitIntToPtr(User &I);
+  void visitBitCast(User &I);
+
+  void visitExtractElement(User &I);
+  void visitInsertElement(User &I);
+  void visitShuffleVector(User &I);
+
+  void visitExtractValue(ExtractValueInst &I);
+  void visitInsertValue(InsertValueInst &I);
+
+  void visitGetElementPtr(User &I);
+  void visitSelect(User &I);
+
+  void visitAlloca(AllocaInst &I);
+  void visitLoad(LoadInst &I);
+  void visitStore(StoreInst &I);
+  void visitPHI(PHINode &I) { } // PHI nodes are handled specially.
+  void visitCall(CallInst &I);
+  void visitInlineAsm(CallSite CS);
+  const char *visitIntrinsicCall(CallInst &I, unsigned Intrinsic);
+  void visitTargetIntrinsic(CallInst &I, unsigned Intrinsic);
+
+  void visitPow(CallInst &I);
+  void visitExp2(CallInst &I);
+  void visitExp(CallInst &I);
+  void visitLog(CallInst &I);
+  void visitLog2(CallInst &I);
+  void visitLog10(CallInst &I);
+
+  void visitVAStart(CallInst &I);
+  void visitVAArg(VAArgInst &I);
+  void visitVAEnd(CallInst &I);
+  void visitVACopy(CallInst &I);
+
+  void visitUserOp1(Instruction &I) {
+    llvm_unreachable("UserOp1 should not exist at instruction selection time!");
+  }
+  void visitUserOp2(Instruction &I) {
+    llvm_unreachable("UserOp2 should not exist at instruction selection time!");
+  }
+  
+  const char *implVisitBinaryAtomic(CallInst& I, ISD::NodeType Op);
+  const char *implVisitAluOverflow(CallInst &I, ISD::NodeType Op);
+};
+
+} // end namespace llvm
+
+#endif
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
index 8591da7..c39437f 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
@@ -13,7 +13,8 @@
 
 #define DEBUG_TYPE "isel"
 #include "ScheduleDAGSDNodes.h"
-#include "SelectionDAGBuild.h"
+#include "SelectionDAGBuilder.h"
+#include "FunctionLoweringInfo.h"
 #include "llvm/CodeGen/SelectionDAGISel.h"
 #include "llvm/Analysis/AliasAnalysis.h"
 #include "llvm/Analysis/DebugInfo.h"
@@ -26,6 +27,7 @@
 #include "llvm/Instructions.h"
 #include "llvm/Intrinsics.h"
 #include "llvm/IntrinsicInst.h"
+#include "llvm/LLVMContext.h"
 #include "llvm/CodeGen/FastISel.h"
 #include "llvm/CodeGen/GCStrategy.h"
 #include "llvm/CodeGen/GCMetadata.h"
@@ -43,6 +45,7 @@
 #include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Target/TargetFrameInfo.h"
+#include "llvm/Target/TargetIntrinsicInfo.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetLowering.h"
 #include "llvm/Target/TargetMachine.h"
@@ -66,7 +69,7 @@ static cl::opt<bool>
 EnableFastISelAbort("fast-isel-abort", cl::Hidden,
           cl::desc("Enable abort calls when \"fast\" instruction fails"));
 static cl::opt<bool>
-SchedLiveInCopies("schedule-livein-copies",
+SchedLiveInCopies("schedule-livein-copies", cl::Hidden,
                   cl::desc("Schedule copies of livein registers"),
                   cl::init(false));
 
@@ -149,16 +152,20 @@ namespace llvm {
 }
 
 // EmitInstrWithCustomInserter - This method should be implemented by targets
-// that mark instructions with the 'usesCustomDAGSchedInserter' flag.  These
+// that mark instructions with the 'usesCustomInserter' flag.  These
 // instructions are special in various ways, which require special support to
 // insert.  The specified MachineInstr is created but not inserted into any
-// basic blocks, and the scheduler passes ownership of it to this method.
+// basic blocks, and this method is called to expand it into a sequence of
+// instructions, potentially also creating new basic blocks and control flow.
+// When new basic blocks are inserted and the edges from MBB to its successors
+// are modified, the method should insert pairs of <OldSucc, NewSucc> into the
+// DenseMap.
 MachineBasicBlock *TargetLowering::EmitInstrWithCustomInserter(MachineInstr *MI,
                                                          MachineBasicBlock *MBB,
                    DenseMap<MachineBasicBlock*, MachineBasicBlock*> *EM) const {
 #ifndef NDEBUG
   errs() << "If a target marks an instruction with "
-          "'usesCustomDAGSchedInserter', it must implement "
+          "'usesCustomInserter', it must implement "
           "TargetLowering::EmitInstrWithCustomInserter!";
 #endif
   llvm_unreachable(0);
@@ -226,7 +233,7 @@ static void EmitLiveInCopy(MachineBasicBlock *MBB,
   assert(Emitted && "Unable to issue a live-in copy instruction!\n");
   (void) Emitted;
 
-CopyRegMap.insert(std::make_pair(prior(Pos), VirtReg));
+  CopyRegMap.insert(std::make_pair(prior(Pos), VirtReg));
   if (Coalesced) {
     if (&*InsertPos == UseMI) ++InsertPos;
     MBB->erase(UseMI);
@@ -273,14 +280,14 @@ SelectionDAGISel::SelectionDAGISel(TargetMachine &tm, CodeGenOpt::Level OL) :
   MachineFunctionPass(&ID), TM(tm), TLI(*tm.getTargetLowering()),
   FuncInfo(new FunctionLoweringInfo(TLI)),
   CurDAG(new SelectionDAG(TLI, *FuncInfo)),
-  SDL(new SelectionDAGLowering(*CurDAG, TLI, *FuncInfo, OL)),
+  SDB(new SelectionDAGBuilder(*CurDAG, TLI, *FuncInfo, OL)),
   GFI(),
   OptLevel(OL),
   DAGSize(0)
 {}
 
 SelectionDAGISel::~SelectionDAGISel() {
-  delete SDL;
+  delete SDB;
   delete CurDAG;
   delete FuncInfo;
 }
@@ -325,8 +332,8 @@ bool SelectionDAGISel::runOnMachineFunction(MachineFunction &mf) {
   MachineModuleInfo *MMI = getAnalysisIfAvailable<MachineModuleInfo>();
   DwarfWriter *DW = getAnalysisIfAvailable<DwarfWriter>();
   CurDAG->init(*MF, MMI, DW);
-  FuncInfo->set(Fn, *MF, *CurDAG, EnableFastISel);
-  SDL->init(GFI, *AA);
+  FuncInfo->set(Fn, *MF, EnableFastISel);
+  SDB->init(GFI, *AA);
 
   for (Function::iterator I = Fn.begin(), E = Fn.end(); I != E; ++I)
     if (InvokeInst *Invoke = dyn_cast<InvokeInst>(I->getTerminator()))
@@ -355,64 +362,56 @@ bool SelectionDAGISel::runOnMachineFunction(MachineFunction &mf) {
   return true;
 }
 
-static void copyCatchInfo(BasicBlock *SrcBB, BasicBlock *DestBB,
-                          MachineModuleInfo *MMI, FunctionLoweringInfo &FLI) {
-  for (BasicBlock::iterator I = SrcBB->begin(), E = --SrcBB->end(); I != E; ++I)
-    if (EHSelectorInst *EHSel = dyn_cast<EHSelectorInst>(I)) {
-      // Apply the catch info to DestBB.
-      AddCatchInfo(*EHSel, MMI, FLI.MBBMap[DestBB]);
-#ifndef NDEBUG
-      if (!FLI.MBBMap[SrcBB]->isLandingPad())
-        FLI.CatchInfoFound.insert(EHSel);
-#endif
-    }
-}
-
 void SelectionDAGISel::SelectBasicBlock(BasicBlock *LLVMBB,
                                         BasicBlock::iterator Begin,
-                                        BasicBlock::iterator End) {
-  SDL->setCurrentBasicBlock(BB);
+                                        BasicBlock::iterator End,
+                                        bool &HadTailCall) {
+  SDB->setCurrentBasicBlock(BB);
   MetadataContext &TheMetadata = LLVMBB->getParent()->getContext().getMetadata();
   unsigned MDDbgKind = TheMetadata.getMDKind("dbg");
 
   // Lower all of the non-terminator instructions. If a call is emitted
   // as a tail call, cease emitting nodes for this block.
-  for (BasicBlock::iterator I = Begin; I != End && !SDL->HasTailCall; ++I) {
+  for (BasicBlock::iterator I = Begin; I != End && !SDB->HasTailCall; ++I) {
     if (MDDbgKind) {
       // Update DebugLoc if debug information is attached with this
       // instruction.
-      if (MDNode *Dbg = TheMetadata.getMD(MDDbgKind, I)) {
-        DILocation DILoc(Dbg);
-        DebugLoc Loc = ExtractDebugLocation(DILoc, MF->getDebugLocInfo());
-        SDL->setCurDebugLoc(Loc);
-      }
+      if (!isa<DbgInfoIntrinsic>(I)) 
+        if (MDNode *Dbg = TheMetadata.getMD(MDDbgKind, I)) {
+          DILocation DILoc(Dbg);
+          DebugLoc Loc = ExtractDebugLocation(DILoc, MF->getDebugLocInfo());
+          SDB->setCurDebugLoc(Loc);
+          if (MF->getDefaultDebugLoc().isUnknown())
+            MF->setDefaultDebugLoc(Loc);
+        }
     }
     if (!isa<TerminatorInst>(I))
-      SDL->visit(*I);
+      SDB->visit(*I);
   }
 
-  if (!SDL->HasTailCall) {
+  if (!SDB->HasTailCall) {
     // Ensure that all instructions which are used outside of their defining
     // blocks are available as virtual registers.  Invoke is handled elsewhere.
     for (BasicBlock::iterator I = Begin; I != End; ++I)
       if (!isa<PHINode>(I) && !isa<InvokeInst>(I))
-        SDL->CopyToExportRegsIfNeeded(I);
+        SDB->CopyToExportRegsIfNeeded(I);
 
     // Handle PHI nodes in successor blocks.
     if (End == LLVMBB->end()) {
       HandlePHINodesInSuccessorBlocks(LLVMBB);
 
       // Lower the terminator after the copies are emitted.
-      SDL->visit(*LLVMBB->getTerminator());
+      SDB->visit(*LLVMBB->getTerminator());
     }
   }
 
   // Make sure the root of the DAG is up-to-date.
-  CurDAG->setRoot(SDL->getControlRoot());
+  CurDAG->setRoot(SDB->getControlRoot());
 
   // Final step, emit the lowered DAG as machine code.
   CodeGenAndEmitDAG();
-  SDL->clear();
+  HadTailCall = SDB->HasTailCall;
+  SDB->clear();
 }
 
 void SelectionDAGISel::ComputeLiveOutVRegInfo() {
@@ -620,9 +619,9 @@ void SelectionDAGISel::CodeGenAndEmitDAG() {
   // inserted into.
   if (TimePassesIsEnabled) {
     NamedRegionTimer T("Instruction Creation", GroupName);
-    BB = Scheduler->EmitSchedule(&SDL->EdgeMapping);
+    BB = Scheduler->EmitSchedule(&SDB->EdgeMapping);
   } else {
-    BB = Scheduler->EmitSchedule(&SDL->EdgeMapping);
+    BB = Scheduler->EmitSchedule(&SDB->EdgeMapping);
   }
 
   // Free the scheduler state.
@@ -692,7 +691,7 @@ void SelectionDAGISel::SelectAllBasicBlocks(Function &Fn,
       unsigned LabelID = MMI->addLandingPad(BB);
 
       const TargetInstrDesc &II = TII.get(TargetInstrInfo::EH_LABEL);
-      BuildMI(BB, SDL->getCurDebugLoc(), II).addImm(LabelID);
+      BuildMI(BB, SDB->getCurDebugLoc(), II).addImm(LabelID);
 
       // Mark exception register as live in.
       unsigned Reg = TLI.getExceptionAddressRegister();
@@ -723,7 +722,7 @@ void SelectionDAGISel::SelectAllBasicBlocks(Function &Fn,
 
         if (I == E)
           // No catch info found - try to extract some from the successor.
-          copyCatchInfo(Br->getSuccessor(0), LLVMBB, MMI, *FuncInfo);
+          CopyCatchInfo(Br->getSuccessor(0), LLVMBB, MMI, *FuncInfo);
       }
     }
 
@@ -732,9 +731,9 @@ void SelectionDAGISel::SelectAllBasicBlocks(Function &Fn,
       // Emit code for any incoming arguments. This must happen before
       // beginning FastISel on the entry block.
       if (LLVMBB == &Fn.getEntryBlock()) {
-        CurDAG->setRoot(SDL->getControlRoot());
+        CurDAG->setRoot(SDB->getControlRoot());
         CodeGenAndEmitDAG();
-        SDL->clear();
+        SDB->clear();
       }
       FastIS->startNewBlock(BB);
       // Do FastISel on as many instructions as possible.
@@ -742,12 +741,15 @@ void SelectionDAGISel::SelectAllBasicBlocks(Function &Fn,
         if (MDDbgKind) {
           // Update DebugLoc if debug information is attached with this
           // instruction.
-          if (MDNode *Dbg = TheMetadata.getMD(MDDbgKind, BI)) {
-            DILocation DILoc(Dbg);
-            DebugLoc Loc = ExtractDebugLocation(DILoc,
-                                                MF.getDebugLocInfo());
-            FastIS->setCurDebugLoc(Loc);
-          }
+          if (!isa<DbgInfoIntrinsic>(BI)) 
+            if (MDNode *Dbg = TheMetadata.getMD(MDDbgKind, BI)) {
+              DILocation DILoc(Dbg);
+              DebugLoc Loc = ExtractDebugLocation(DILoc,
+                                                  MF.getDebugLocInfo());
+              FastIS->setCurDebugLoc(Loc);
+              if (MF.getDefaultDebugLoc().isUnknown())
+                MF.setDefaultDebugLoc(Loc);
+            }
         }
 
         // Just before the terminator instruction, insert instructions to
@@ -784,8 +786,17 @@ void SelectionDAGISel::SelectAllBasicBlocks(Function &Fn,
               R = FuncInfo->CreateRegForValue(BI);
           }
 
-          SDL->setCurDebugLoc(FastIS->getCurDebugLoc());
-          SelectBasicBlock(LLVMBB, BI, next(BI));
+          SDB->setCurDebugLoc(FastIS->getCurDebugLoc());
+
+          bool HadTailCall = false;
+          SelectBasicBlock(LLVMBB, BI, next(BI), HadTailCall);
+
+          // If the call was emitted as a tail call, we're done with the block.
+          if (HadTailCall) {
+            BI = End;
+            break;
+          }
+
           // If the instruction was codegen'd with multiple blocks,
           // inform the FastISel object where to resume inserting.
           FastIS->setCurrentBlock(BB);
@@ -814,8 +825,9 @@ void SelectionDAGISel::SelectAllBasicBlocks(Function &Fn,
     if (BI != End) {
       // If FastISel is run and it has known DebugLoc then use it.
       if (FastIS && !FastIS->getCurDebugLoc().isUnknown())
-        SDL->setCurDebugLoc(FastIS->getCurDebugLoc());
-      SelectBasicBlock(LLVMBB, BI, End);
+        SDB->setCurDebugLoc(FastIS->getCurDebugLoc());
+      bool HadTailCall;
+      SelectBasicBlock(LLVMBB, BI, End, HadTailCall);
     }
 
     FinishBasicBlock();
@@ -831,150 +843,150 @@ SelectionDAGISel::FinishBasicBlock() {
   DEBUG(BB->dump());
 
   DEBUG(errs() << "Total amount of phi nodes to update: "
-               << SDL->PHINodesToUpdate.size() << "\n");
-  DEBUG(for (unsigned i = 0, e = SDL->PHINodesToUpdate.size(); i != e; ++i)
+               << SDB->PHINodesToUpdate.size() << "\n");
+  DEBUG(for (unsigned i = 0, e = SDB->PHINodesToUpdate.size(); i != e; ++i)
           errs() << "Node " << i << " : ("
-                 << SDL->PHINodesToUpdate[i].first
-                 << ", " << SDL->PHINodesToUpdate[i].second << ")\n");
+                 << SDB->PHINodesToUpdate[i].first
+                 << ", " << SDB->PHINodesToUpdate[i].second << ")\n");
 
   // Next, now that we know what the last MBB the LLVM BB expanded is, update
   // PHI nodes in successors.
-  if (SDL->SwitchCases.empty() &&
-      SDL->JTCases.empty() &&
-      SDL->BitTestCases.empty()) {
-    for (unsigned i = 0, e = SDL->PHINodesToUpdate.size(); i != e; ++i) {
-      MachineInstr *PHI = SDL->PHINodesToUpdate[i].first;
+  if (SDB->SwitchCases.empty() &&
+      SDB->JTCases.empty() &&
+      SDB->BitTestCases.empty()) {
+    for (unsigned i = 0, e = SDB->PHINodesToUpdate.size(); i != e; ++i) {
+      MachineInstr *PHI = SDB->PHINodesToUpdate[i].first;
       assert(PHI->getOpcode() == TargetInstrInfo::PHI &&
              "This is not a machine PHI node that we are updating!");
-      PHI->addOperand(MachineOperand::CreateReg(SDL->PHINodesToUpdate[i].second,
+      PHI->addOperand(MachineOperand::CreateReg(SDB->PHINodesToUpdate[i].second,
                                                 false));
       PHI->addOperand(MachineOperand::CreateMBB(BB));
     }
-    SDL->PHINodesToUpdate.clear();
+    SDB->PHINodesToUpdate.clear();
     return;
   }
 
-  for (unsigned i = 0, e = SDL->BitTestCases.size(); i != e; ++i) {
+  for (unsigned i = 0, e = SDB->BitTestCases.size(); i != e; ++i) {
     // Lower header first, if it wasn't already lowered
-    if (!SDL->BitTestCases[i].Emitted) {
+    if (!SDB->BitTestCases[i].Emitted) {
       // Set the current basic block to the mbb we wish to insert the code into
-      BB = SDL->BitTestCases[i].Parent;
-      SDL->setCurrentBasicBlock(BB);
+      BB = SDB->BitTestCases[i].Parent;
+      SDB->setCurrentBasicBlock(BB);
       // Emit the code
-      SDL->visitBitTestHeader(SDL->BitTestCases[i]);
-      CurDAG->setRoot(SDL->getRoot());
+      SDB->visitBitTestHeader(SDB->BitTestCases[i]);
+      CurDAG->setRoot(SDB->getRoot());
       CodeGenAndEmitDAG();
-      SDL->clear();
+      SDB->clear();
     }
 
-    for (unsigned j = 0, ej = SDL->BitTestCases[i].Cases.size(); j != ej; ++j) {
+    for (unsigned j = 0, ej = SDB->BitTestCases[i].Cases.size(); j != ej; ++j) {
       // Set the current basic block to the mbb we wish to insert the code into
-      BB = SDL->BitTestCases[i].Cases[j].ThisBB;
-      SDL->setCurrentBasicBlock(BB);
+      BB = SDB->BitTestCases[i].Cases[j].ThisBB;
+      SDB->setCurrentBasicBlock(BB);
       // Emit the code
       if (j+1 != ej)
-        SDL->visitBitTestCase(SDL->BitTestCases[i].Cases[j+1].ThisBB,
-                              SDL->BitTestCases[i].Reg,
-                              SDL->BitTestCases[i].Cases[j]);
+        SDB->visitBitTestCase(SDB->BitTestCases[i].Cases[j+1].ThisBB,
+                              SDB->BitTestCases[i].Reg,
+                              SDB->BitTestCases[i].Cases[j]);
       else
-        SDL->visitBitTestCase(SDL->BitTestCases[i].Default,
-                              SDL->BitTestCases[i].Reg,
-                              SDL->BitTestCases[i].Cases[j]);
+        SDB->visitBitTestCase(SDB->BitTestCases[i].Default,
+                              SDB->BitTestCases[i].Reg,
+                              SDB->BitTestCases[i].Cases[j]);
 
 
-      CurDAG->setRoot(SDL->getRoot());
+      CurDAG->setRoot(SDB->getRoot());
       CodeGenAndEmitDAG();
-      SDL->clear();
+      SDB->clear();
     }
 
     // Update PHI Nodes
-    for (unsigned pi = 0, pe = SDL->PHINodesToUpdate.size(); pi != pe; ++pi) {
-      MachineInstr *PHI = SDL->PHINodesToUpdate[pi].first;
+    for (unsigned pi = 0, pe = SDB->PHINodesToUpdate.size(); pi != pe; ++pi) {
+      MachineInstr *PHI = SDB->PHINodesToUpdate[pi].first;
       MachineBasicBlock *PHIBB = PHI->getParent();
       assert(PHI->getOpcode() == TargetInstrInfo::PHI &&
              "This is not a machine PHI node that we are updating!");
       // This is the "default" BB. We have two jumps to it: from the "header"
       // BB and from the last "case" BB.
-      if (PHIBB == SDL->BitTestCases[i].Default) {
-        PHI->addOperand(MachineOperand::CreateReg(SDL->PHINodesToUpdate[pi].second,
+      if (PHIBB == SDB->BitTestCases[i].Default) {
+        PHI->addOperand(MachineOperand::CreateReg(SDB->PHINodesToUpdate[pi].second,
                                                   false));
-        PHI->addOperand(MachineOperand::CreateMBB(SDL->BitTestCases[i].Parent));
-        PHI->addOperand(MachineOperand::CreateReg(SDL->PHINodesToUpdate[pi].second,
+        PHI->addOperand(MachineOperand::CreateMBB(SDB->BitTestCases[i].Parent));
+        PHI->addOperand(MachineOperand::CreateReg(SDB->PHINodesToUpdate[pi].second,
                                                   false));
-        PHI->addOperand(MachineOperand::CreateMBB(SDL->BitTestCases[i].Cases.
+        PHI->addOperand(MachineOperand::CreateMBB(SDB->BitTestCases[i].Cases.
                                                   back().ThisBB));
       }
       // One of "cases" BB.
-      for (unsigned j = 0, ej = SDL->BitTestCases[i].Cases.size();
+      for (unsigned j = 0, ej = SDB->BitTestCases[i].Cases.size();
            j != ej; ++j) {
-        MachineBasicBlock* cBB = SDL->BitTestCases[i].Cases[j].ThisBB;
+        MachineBasicBlock* cBB = SDB->BitTestCases[i].Cases[j].ThisBB;
         if (cBB->succ_end() !=
             std::find(cBB->succ_begin(),cBB->succ_end(), PHIBB)) {
-          PHI->addOperand(MachineOperand::CreateReg(SDL->PHINodesToUpdate[pi].second,
+          PHI->addOperand(MachineOperand::CreateReg(SDB->PHINodesToUpdate[pi].second,
                                                     false));
           PHI->addOperand(MachineOperand::CreateMBB(cBB));
         }
       }
     }
   }
-  SDL->BitTestCases.clear();
+  SDB->BitTestCases.clear();
 
   // If the JumpTable record is filled in, then we need to emit a jump table.
   // Updating the PHI nodes is tricky in this case, since we need to determine
   // whether the PHI is a successor of the range check MBB or the jump table MBB
-  for (unsigned i = 0, e = SDL->JTCases.size(); i != e; ++i) {
+  for (unsigned i = 0, e = SDB->JTCases.size(); i != e; ++i) {
     // Lower header first, if it wasn't already lowered
-    if (!SDL->JTCases[i].first.Emitted) {
+    if (!SDB->JTCases[i].first.Emitted) {
       // Set the current basic block to the mbb we wish to insert the code into
-      BB = SDL->JTCases[i].first.HeaderBB;
-      SDL->setCurrentBasicBlock(BB);
+      BB = SDB->JTCases[i].first.HeaderBB;
+      SDB->setCurrentBasicBlock(BB);
       // Emit the code
-      SDL->visitJumpTableHeader(SDL->JTCases[i].second, SDL->JTCases[i].first);
-      CurDAG->setRoot(SDL->getRoot());
+      SDB->visitJumpTableHeader(SDB->JTCases[i].second, SDB->JTCases[i].first);
+      CurDAG->setRoot(SDB->getRoot());
       CodeGenAndEmitDAG();
-      SDL->clear();
+      SDB->clear();
     }
 
     // Set the current basic block to the mbb we wish to insert the code into
-    BB = SDL->JTCases[i].second.MBB;
-    SDL->setCurrentBasicBlock(BB);
+    BB = SDB->JTCases[i].second.MBB;
+    SDB->setCurrentBasicBlock(BB);
     // Emit the code
-    SDL->visitJumpTable(SDL->JTCases[i].second);
-    CurDAG->setRoot(SDL->getRoot());
+    SDB->visitJumpTable(SDB->JTCases[i].second);
+    CurDAG->setRoot(SDB->getRoot());
     CodeGenAndEmitDAG();
-    SDL->clear();
+    SDB->clear();
 
     // Update PHI Nodes
-    for (unsigned pi = 0, pe = SDL->PHINodesToUpdate.size(); pi != pe; ++pi) {
-      MachineInstr *PHI = SDL->PHINodesToUpdate[pi].first;
+    for (unsigned pi = 0, pe = SDB->PHINodesToUpdate.size(); pi != pe; ++pi) {
+      MachineInstr *PHI = SDB->PHINodesToUpdate[pi].first;
       MachineBasicBlock *PHIBB = PHI->getParent();
       assert(PHI->getOpcode() == TargetInstrInfo::PHI &&
              "This is not a machine PHI node that we are updating!");
       // "default" BB. We can go there only from header BB.
-      if (PHIBB == SDL->JTCases[i].second.Default) {
+      if (PHIBB == SDB->JTCases[i].second.Default) {
         PHI->addOperand
-          (MachineOperand::CreateReg(SDL->PHINodesToUpdate[pi].second, false));
+          (MachineOperand::CreateReg(SDB->PHINodesToUpdate[pi].second, false));
         PHI->addOperand
-          (MachineOperand::CreateMBB(SDL->JTCases[i].first.HeaderBB));
+          (MachineOperand::CreateMBB(SDB->JTCases[i].first.HeaderBB));
       }
       // JT BB. Just iterate over successors here
       if (BB->succ_end() != std::find(BB->succ_begin(),BB->succ_end(), PHIBB)) {
         PHI->addOperand
-          (MachineOperand::CreateReg(SDL->PHINodesToUpdate[pi].second, false));
+          (MachineOperand::CreateReg(SDB->PHINodesToUpdate[pi].second, false));
         PHI->addOperand(MachineOperand::CreateMBB(BB));
       }
     }
   }
-  SDL->JTCases.clear();
+  SDB->JTCases.clear();
 
   // If the switch block involved a branch to one of the actual successors, we
   // need to update PHI nodes in that block.
-  for (unsigned i = 0, e = SDL->PHINodesToUpdate.size(); i != e; ++i) {
-    MachineInstr *PHI = SDL->PHINodesToUpdate[i].first;
+  for (unsigned i = 0, e = SDB->PHINodesToUpdate.size(); i != e; ++i) {
+    MachineInstr *PHI = SDB->PHINodesToUpdate[i].first;
     assert(PHI->getOpcode() == TargetInstrInfo::PHI &&
            "This is not a machine PHI node that we are updating!");
     if (BB->isSuccessor(PHI->getParent())) {
-      PHI->addOperand(MachineOperand::CreateReg(SDL->PHINodesToUpdate[i].second,
+      PHI->addOperand(MachineOperand::CreateReg(SDB->PHINodesToUpdate[i].second,
                                                 false));
       PHI->addOperand(MachineOperand::CreateMBB(BB));
     }
@@ -982,36 +994,36 @@ SelectionDAGISel::FinishBasicBlock() {
 
   // If we generated any switch lowering information, build and codegen any
   // additional DAGs necessary.
-  for (unsigned i = 0, e = SDL->SwitchCases.size(); i != e; ++i) {
+  for (unsigned i = 0, e = SDB->SwitchCases.size(); i != e; ++i) {
     // Set the current basic block to the mbb we wish to insert the code into
-    MachineBasicBlock *ThisBB = BB = SDL->SwitchCases[i].ThisBB;
-    SDL->setCurrentBasicBlock(BB);
+    MachineBasicBlock *ThisBB = BB = SDB->SwitchCases[i].ThisBB;
+    SDB->setCurrentBasicBlock(BB);
 
     // Emit the code
-    SDL->visitSwitchCase(SDL->SwitchCases[i]);
-    CurDAG->setRoot(SDL->getRoot());
+    SDB->visitSwitchCase(SDB->SwitchCases[i]);
+    CurDAG->setRoot(SDB->getRoot());
     CodeGenAndEmitDAG();
 
     // Handle any PHI nodes in successors of this chunk, as if we were coming
     // from the original BB before switch expansion.  Note that PHI nodes can
     // occur multiple times in PHINodesToUpdate.  We have to be very careful to
     // handle them the right number of times.
-    while ((BB = SDL->SwitchCases[i].TrueBB)) {  // Handle LHS and RHS.
+    while ((BB = SDB->SwitchCases[i].TrueBB)) {  // Handle LHS and RHS.
       // If new BB's are created during scheduling, the edges may have been
       // updated. That is, the edge from ThisBB to BB may have been split and
       // BB's predecessor is now another block.
       DenseMap<MachineBasicBlock*, MachineBasicBlock*>::iterator EI =
-        SDL->EdgeMapping.find(BB);
-      if (EI != SDL->EdgeMapping.end())
+        SDB->EdgeMapping.find(BB);
+      if (EI != SDB->EdgeMapping.end())
         ThisBB = EI->second;
       for (MachineBasicBlock::iterator Phi = BB->begin();
            Phi != BB->end() && Phi->getOpcode() == TargetInstrInfo::PHI; ++Phi){
         // This value for this PHI node is recorded in PHINodesToUpdate, get it.
         for (unsigned pn = 0; ; ++pn) {
-          assert(pn != SDL->PHINodesToUpdate.size() &&
+          assert(pn != SDB->PHINodesToUpdate.size() &&
                  "Didn't find PHI entry!");
-          if (SDL->PHINodesToUpdate[pn].first == Phi) {
-            Phi->addOperand(MachineOperand::CreateReg(SDL->PHINodesToUpdate[pn].
+          if (SDB->PHINodesToUpdate[pn].first == Phi) {
+            Phi->addOperand(MachineOperand::CreateReg(SDB->PHINodesToUpdate[pn].
                                                       second, false));
             Phi->addOperand(MachineOperand::CreateMBB(ThisBB));
             break;
@@ -1020,19 +1032,19 @@ SelectionDAGISel::FinishBasicBlock() {
       }
 
       // Don't process RHS if same block as LHS.
-      if (BB == SDL->SwitchCases[i].FalseBB)
-        SDL->SwitchCases[i].FalseBB = 0;
+      if (BB == SDB->SwitchCases[i].FalseBB)
+        SDB->SwitchCases[i].FalseBB = 0;
 
       // If we haven't handled the RHS, do so now.  Otherwise, we're done.
-      SDL->SwitchCases[i].TrueBB = SDL->SwitchCases[i].FalseBB;
-      SDL->SwitchCases[i].FalseBB = 0;
+      SDB->SwitchCases[i].TrueBB = SDB->SwitchCases[i].FalseBB;
+      SDB->SwitchCases[i].FalseBB = 0;
     }
-    assert(SDL->SwitchCases[i].TrueBB == 0 && SDL->SwitchCases[i].FalseBB == 0);
-    SDL->clear();
+    assert(SDB->SwitchCases[i].TrueBB == 0 && SDB->SwitchCases[i].FalseBB == 0);
+    SDB->clear();
   }
-  SDL->SwitchCases.clear();
+  SDB->SwitchCases.clear();
 
-  SDL->PHINodesToUpdate.clear();
+  SDB->PHINodesToUpdate.clear();
 }
 
 
@@ -1284,5 +1296,56 @@ bool SelectionDAGISel::IsLegalAndProfitableToFold(SDNode *N, SDNode *U,
   return !isNonImmUse(Root, N, U);
 }
 
+SDNode *SelectionDAGISel::Select_INLINEASM(SDValue N) {
+  std::vector<SDValue> Ops(N.getNode()->op_begin(), N.getNode()->op_end());
+  SelectInlineAsmMemoryOperands(Ops);
+    
+  std::vector<EVT> VTs;
+  VTs.push_back(MVT::Other);
+  VTs.push_back(MVT::Flag);
+  SDValue New = CurDAG->getNode(ISD::INLINEASM, N.getDebugLoc(),
+                                VTs, &Ops[0], Ops.size());
+  return New.getNode();
+}
+
+SDNode *SelectionDAGISel::Select_UNDEF(const SDValue &N) {
+  return CurDAG->SelectNodeTo(N.getNode(), TargetInstrInfo::IMPLICIT_DEF,
+                              N.getValueType());
+}
+
+SDNode *SelectionDAGISel::Select_DBG_LABEL(const SDValue &N) {
+  SDValue Chain = N.getOperand(0);
+  unsigned C = cast<LabelSDNode>(N)->getLabelID();
+  SDValue Tmp = CurDAG->getTargetConstant(C, MVT::i32);
+  return CurDAG->SelectNodeTo(N.getNode(), TargetInstrInfo::DBG_LABEL,
+                              MVT::Other, Tmp, Chain);
+}
+
+SDNode *SelectionDAGISel::Select_EH_LABEL(const SDValue &N) {
+  SDValue Chain = N.getOperand(0);
+  unsigned C = cast<LabelSDNode>(N)->getLabelID();
+  SDValue Tmp = CurDAG->getTargetConstant(C, MVT::i32);
+  return CurDAG->SelectNodeTo(N.getNode(), TargetInstrInfo::EH_LABEL,
+                              MVT::Other, Tmp, Chain);
+}
+
+void SelectionDAGISel::CannotYetSelect(SDValue N) {
+  std::string msg;
+  raw_string_ostream Msg(msg);
+  Msg << "Cannot yet select: ";
+  N.getNode()->print(Msg, CurDAG);
+  llvm_report_error(Msg.str());
+}
+
+void SelectionDAGISel::CannotYetSelectIntrinsic(SDValue N) {
+  errs() << "Cannot yet select: ";
+  unsigned iid =
+    cast<ConstantSDNode>(N.getOperand(N.getOperand(0).getValueType() == MVT::Other))->getZExtValue();
+  if (iid < Intrinsic::num_intrinsics)
+    llvm_report_error("Cannot yet select: intrinsic %" + Intrinsic::getName((Intrinsic::ID)iid));
+  else if (const TargetIntrinsicInfo *tii = TM.getIntrinsicInfo())
+    llvm_report_error(Twine("Cannot yet select: target intrinsic %") +
+                      tii->getName(iid));
+}
 
 char SelectionDAGISel::ID = 0;
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index a2baee4..68bc2d6 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -22,7 +22,6 @@
 #include "llvm/DerivedTypes.h"
 #include "llvm/CodeGen/MachineFrameInfo.h"
 #include "llvm/CodeGen/SelectionDAG.h"
-#include "llvm/ADT/StringExtras.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/MathExtras.h"
@@ -65,22 +64,27 @@ static void InitLibcallNames(const char **Names) {
   Names[RTLIB::SRA_I32] = "__ashrsi3";
   Names[RTLIB::SRA_I64] = "__ashrdi3";
   Names[RTLIB::SRA_I128] = "__ashrti3";
+  Names[RTLIB::MUL_I8] = "__mulqi3";
   Names[RTLIB::MUL_I16] = "__mulhi3";
   Names[RTLIB::MUL_I32] = "__mulsi3";
   Names[RTLIB::MUL_I64] = "__muldi3";
   Names[RTLIB::MUL_I128] = "__multi3";
+  Names[RTLIB::SDIV_I8] = "__divqi3";
   Names[RTLIB::SDIV_I16] = "__divhi3";
   Names[RTLIB::SDIV_I32] = "__divsi3";
   Names[RTLIB::SDIV_I64] = "__divdi3";
   Names[RTLIB::SDIV_I128] = "__divti3";
+  Names[RTLIB::UDIV_I8] = "__udivqi3";
   Names[RTLIB::UDIV_I16] = "__udivhi3";
   Names[RTLIB::UDIV_I32] = "__udivsi3";
   Names[RTLIB::UDIV_I64] = "__udivdi3";
   Names[RTLIB::UDIV_I128] = "__udivti3";
+  Names[RTLIB::SREM_I8] = "__modqi3";
   Names[RTLIB::SREM_I16] = "__modhi3";
   Names[RTLIB::SREM_I32] = "__modsi3";
   Names[RTLIB::SREM_I64] = "__moddi3";
   Names[RTLIB::SREM_I128] = "__modti3";
+  Names[RTLIB::UREM_I8] = "__umodqi3";
   Names[RTLIB::UREM_I16] = "__umodhi3";
   Names[RTLIB::UREM_I32] = "__umodsi3";
   Names[RTLIB::UREM_I64] = "__umoddi3";
@@ -481,7 +485,7 @@ TargetLowering::TargetLowering(TargetMachine &tm,TargetLoweringObjectFile *tlof)
   setOperationAction(ISD::PREFETCH, MVT::Other, Expand);
   
   // ConstantFP nodes default to expand.  Targets can either change this to 
-  // Legal, in which case all fp constants are legal, or use addLegalFPImmediate
+  // Legal, in which case all fp constants are legal, or use isFPImmLegal()
   // to optimize expansions for certain constants.
   setOperationAction(ISD::ConstantFP, MVT::f32, Expand);
   setOperationAction(ISD::ConstantFP, MVT::f64, Expand);
@@ -528,11 +532,6 @@ TargetLowering::TargetLowering(TargetMachine &tm,TargetLoweringObjectFile *tlof)
   InitLibcallNames(LibcallRoutineNames);
   InitCmpLibcallCCs(CmpLibcallCCs);
   InitLibcallCallingConvs(LibcallCallingConvs);
-
-  // Tell Legalize whether the assembler supports DEBUG_LOC.
-  const MCAsmInfo *TASM = TM.getMCAsmInfo();
-  if (!TASM || !TASM->hasDotLocAndDotFile())
-    setOperationAction(ISD::DEBUG_LOC, MVT::Other, Expand);
 }
 
 TargetLowering::~TargetLowering() {
@@ -2360,7 +2359,7 @@ getRegForInlineAsmConstraint(const std::string &Constraint,
   assert(*(Constraint.end()-1) == '}' && "Not a brace enclosed constraint?");
 
   // Remove the braces from around the name.
-  std::string RegName(Constraint.begin()+1, Constraint.end()-1);
+  StringRef RegName(Constraint.data()+1, Constraint.size()-2);
 
   // Figure out which register class contains this reg.
   const TargetRegisterInfo *RI = TM.getRegisterInfo();
@@ -2383,7 +2382,7 @@ getRegForInlineAsmConstraint(const std::string &Constraint,
     
     for (TargetRegisterClass::iterator I = RC->begin(), E = RC->end(); 
          I != E; ++I) {
-      if (StringsEqualNoCase(RegName, RI->getName(*I)))
+      if (RegName.equals_lower(RI->getName(*I)))
         return std::make_pair(*I, RC);
     }
   }
diff --git a/libclamav/c++/llvm/lib/CodeGen/ShadowStackGC.cpp b/libclamav/c++/llvm/lib/CodeGen/ShadowStackGC.cpp
index 541ab9a..0e6d479 100644
--- a/libclamav/c++/llvm/lib/CodeGen/ShadowStackGC.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/ShadowStackGC.cpp
@@ -31,14 +31,13 @@
 #include "llvm/CodeGen/GCStrategy.h"
 #include "llvm/IntrinsicInst.h"
 #include "llvm/Module.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/IRBuilder.h"
 
 using namespace llvm;
 
 namespace {
 
-  class VISIBILITY_HIDDEN ShadowStackGC : public GCStrategy {
+  class ShadowStackGC : public GCStrategy {
     /// RootChain - This is the global linked-list that contains the chain of GC
     /// roots.
     GlobalVariable *Head;
@@ -84,7 +83,7 @@ namespace {
   ///
   /// It's wrapped up in a state machine using the same transform C# uses for
   /// 'yield return' enumerators. This transform allows it to be non-allocating.
-  class VISIBILITY_HIDDEN EscapeEnumerator {
+  class EscapeEnumerator {
     Function &F;
     const char *CleanupBBName;
 
@@ -189,7 +188,7 @@ ShadowStackGC::ShadowStackGC() : Head(0), StackEntryTy(0) {
 
 Constant *ShadowStackGC::GetFrameMap(Function &F) {
   // doInitialization creates the abstract type of this value.
-  Type *VoidPtr = PointerType::getUnqual(Type::getInt8Ty(F.getContext()));
+  const Type *VoidPtr = Type::getInt8PtrTy(F.getContext());
 
   // Truncate the ShadowStackDescriptor if some metadata is null.
   unsigned NumMeta = 0;
diff --git a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
index ac70893..7847f8e 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
@@ -17,6 +17,7 @@
 #include "VirtRegMap.h"
 #include "llvm/CodeGen/LiveIntervalAnalysis.h"
 #include "llvm/Value.h"
+#include "llvm/Analysis/AliasAnalysis.h"
 #include "llvm/CodeGen/MachineFrameInfo.h"
 #include "llvm/CodeGen/MachineInstr.h"
 #include "llvm/CodeGen/MachineLoopInfo.h"
@@ -72,8 +73,10 @@ const PassInfo *const llvm::SimpleRegisterCoalescingID = &X;
 
 void SimpleRegisterCoalescing::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.setPreservesCFG();
+  AU.addRequired<AliasAnalysis>();
   AU.addRequired<LiveIntervals>();
   AU.addPreserved<LiveIntervals>();
+  AU.addPreserved<SlotIndexes>();
   AU.addRequired<MachineLoopInfo>();
   AU.addPreserved<MachineLoopInfo>();
   AU.addPreservedID(MachineDominatorsID);
@@ -103,7 +106,7 @@ void SimpleRegisterCoalescing::getAnalysisUsage(AnalysisUsage &AU) const {
 bool SimpleRegisterCoalescing::AdjustCopiesBackFrom(LiveInterval &IntA,
                                                     LiveInterval &IntB,
                                                     MachineInstr *CopyMI) {
-  MachineInstrIndex CopyIdx = li_->getDefIndex(li_->getInstructionIndex(CopyMI));
+  SlotIndex CopyIdx = li_->getInstructionIndex(CopyMI).getDefIndex();
 
   // BValNo is a value number in B that is defined by a copy from A.  'B3' in
   // the example above.
@@ -118,7 +121,7 @@ bool SimpleRegisterCoalescing::AdjustCopiesBackFrom(LiveInterval &IntA,
   assert(BValNo->def == CopyIdx && "Copy doesn't define the value?");
 
   // AValNo is the value number in A that defines the copy, A3 in the example.
-  MachineInstrIndex CopyUseIdx = li_->getUseIndex(CopyIdx);
+  SlotIndex CopyUseIdx = CopyIdx.getUseIndex();
   LiveInterval::iterator ALR = IntA.FindLiveRangeContaining(CopyUseIdx);
   assert(ALR != IntA.end() && "Live range not found!");
   VNInfo *AValNo = ALR->valno;
@@ -156,13 +159,13 @@ bool SimpleRegisterCoalescing::AdjustCopiesBackFrom(LiveInterval &IntA,
 
   // Get the LiveRange in IntB that this value number starts with.
   LiveInterval::iterator ValLR =
-    IntB.FindLiveRangeContaining(li_->getPrevSlot(AValNo->def));
+    IntB.FindLiveRangeContaining(AValNo->def.getPrevSlot());
   assert(ValLR != IntB.end() && "Live range not found!");
 
   // Make sure that the end of the live range is inside the same block as
   // CopyMI.
   MachineInstr *ValLREndInst =
-    li_->getInstructionFromIndex(li_->getPrevSlot(ValLR->end));
+    li_->getInstructionFromIndex(ValLR->end.getPrevSlot());
   if (!ValLREndInst ||
       ValLREndInst->getParent() != CopyMI->getParent()) return false;
 
@@ -191,7 +194,7 @@ bool SimpleRegisterCoalescing::AdjustCopiesBackFrom(LiveInterval &IntA,
       IntB.print(errs(), tri_);
     });
 
-  MachineInstrIndex FillerStart = ValLR->end, FillerEnd = BLR->start;
+  SlotIndex FillerStart = ValLR->end, FillerEnd = BLR->start;
   // We are about to delete CopyMI, so need to remove it as the 'instruction
   // that defines this value #'. Update the valnum with the new defining
   // instruction #.
@@ -304,8 +307,8 @@ TransferImplicitOps(MachineInstr *MI, MachineInstr *NewMI) {
 bool SimpleRegisterCoalescing::RemoveCopyByCommutingDef(LiveInterval &IntA,
                                                         LiveInterval &IntB,
                                                         MachineInstr *CopyMI) {
-  MachineInstrIndex CopyIdx =
-    li_->getDefIndex(li_->getInstructionIndex(CopyMI));
+  SlotIndex CopyIdx =
+    li_->getInstructionIndex(CopyMI).getDefIndex();
 
   // FIXME: For now, only eliminate the copy by commuting its def when the
   // source register is a virtual register. We want to guard against cases
@@ -328,7 +331,7 @@ bool SimpleRegisterCoalescing::RemoveCopyByCommutingDef(LiveInterval &IntA,
 
   // AValNo is the value number in A that defines the copy, A3 in the example.
   LiveInterval::iterator ALR =
-    IntA.FindLiveRangeContaining(li_->getPrevSlot(CopyIdx));
+    IntA.FindLiveRangeContaining(CopyIdx.getUseIndex());
 
   assert(ALR != IntA.end() && "Live range not found!");
   VNInfo *AValNo = ALR->valno;
@@ -374,7 +377,7 @@ bool SimpleRegisterCoalescing::RemoveCopyByCommutingDef(LiveInterval &IntA,
   for (MachineRegisterInfo::use_iterator UI = mri_->use_begin(IntA.reg),
          UE = mri_->use_end(); UI != UE; ++UI) {
     MachineInstr *UseMI = &*UI;
-    MachineInstrIndex UseIdx = li_->getInstructionIndex(UseMI);
+    SlotIndex UseIdx = li_->getInstructionIndex(UseMI);
     LiveInterval::iterator ULR = IntA.FindLiveRangeContaining(UseIdx);
     if (ULR == IntA.end())
       continue;
@@ -399,7 +402,7 @@ bool SimpleRegisterCoalescing::RemoveCopyByCommutingDef(LiveInterval &IntA,
   bool BHasPHIKill = BValNo->hasPHIKill();
   SmallVector<VNInfo*, 4> BDeadValNos;
   VNInfo::KillSet BKills;
-  std::map<MachineInstrIndex, MachineInstrIndex> BExtend;
+  std::map<SlotIndex, SlotIndex> BExtend;
 
   // If ALR and BLR overlaps and end of BLR extends beyond end of ALR, e.g.
   // A = or A, B
@@ -426,7 +429,7 @@ bool SimpleRegisterCoalescing::RemoveCopyByCommutingDef(LiveInterval &IntA,
     ++UI;
     if (JoinedCopies.count(UseMI))
       continue;
-    MachineInstrIndex UseIdx= li_->getUseIndex(li_->getInstructionIndex(UseMI));
+    SlotIndex UseIdx = li_->getInstructionIndex(UseMI).getUseIndex();
     LiveInterval::iterator ULR = IntA.FindLiveRangeContaining(UseIdx);
     if (ULR == IntA.end() || ULR->valno != AValNo)
       continue;
@@ -437,7 +440,7 @@ bool SimpleRegisterCoalescing::RemoveCopyByCommutingDef(LiveInterval &IntA,
       if (Extended)
         UseMO.setIsKill(false);
       else
-        BKills.push_back(li_->getNextSlot(UseIdx));
+        BKills.push_back(UseIdx.getDefIndex());
     }
     unsigned SrcReg, DstReg, SrcSubIdx, DstSubIdx;
     if (!tii_->isMoveInstr(*UseMI, SrcReg, DstReg, SrcSubIdx, DstSubIdx))
@@ -446,7 +449,7 @@ bool SimpleRegisterCoalescing::RemoveCopyByCommutingDef(LiveInterval &IntA,
       // This copy will become a noop. If it's defining a new val#,
       // remove that val# as well. However this live range is being
       // extended to the end of the existing live range defined by the copy.
-      MachineInstrIndex DefIdx = li_->getDefIndex(UseIdx);
+      SlotIndex DefIdx = UseIdx.getDefIndex();
       const LiveRange *DLR = IntB.getLiveRangeContaining(DefIdx);
       BHasPHIKill |= DLR->valno->hasPHIKill();
       assert(DLR->valno->def == DefIdx);
@@ -493,8 +496,8 @@ bool SimpleRegisterCoalescing::RemoveCopyByCommutingDef(LiveInterval &IntA,
   for (LiveInterval::iterator AI = IntA.begin(), AE = IntA.end();
        AI != AE; ++AI) {
     if (AI->valno != AValNo) continue;
-    MachineInstrIndex End = AI->end;
-    std::map<MachineInstrIndex, MachineInstrIndex>::iterator
+    SlotIndex End = AI->end;
+    std::map<SlotIndex, SlotIndex>::iterator
       EI = BExtend.find(End);
     if (EI != BExtend.end())
       End = EI->second;
@@ -505,7 +508,7 @@ bool SimpleRegisterCoalescing::RemoveCopyByCommutingDef(LiveInterval &IntA,
     if (BHasSubRegs) {
       for (const unsigned *SR = tri_->getSubRegisters(IntB.reg); *SR; ++SR) {
         LiveInterval &SRLI = li_->getInterval(*SR);
-        SRLI.MergeInClobberRange(AI->start, End, li_->getVNInfoAllocator());
+        SRLI.MergeInClobberRange(*li_, AI->start, End, li_->getVNInfoAllocator());
       }
     }
   }
@@ -549,7 +552,7 @@ static bool isSameOrFallThroughBB(MachineBasicBlock *MBB,
 /// from a physical register live interval as well as from the live intervals
 /// of its sub-registers.
 static void removeRange(LiveInterval &li,
-                        MachineInstrIndex Start, MachineInstrIndex End,
+                        SlotIndex Start, SlotIndex End,
                         LiveIntervals *li_, const TargetRegisterInfo *tri_) {
   li.removeRange(Start, End, true);
   if (TargetRegisterInfo::isPhysicalRegister(li.reg)) {
@@ -557,8 +560,9 @@ static void removeRange(LiveInterval &li,
       if (!li_->hasInterval(*SR))
         continue;
       LiveInterval &sli = li_->getInterval(*SR);
-      MachineInstrIndex RemoveStart = Start;
-      MachineInstrIndex RemoveEnd = Start;
+      SlotIndex RemoveStart = Start;
+      SlotIndex RemoveEnd = Start;
+
       while (RemoveEnd != End) {
         LiveInterval::iterator LR = sli.FindLiveRangeContaining(RemoveStart);
         if (LR == sli.end())
@@ -575,14 +579,14 @@ static void removeRange(LiveInterval &li,
 /// as the copy instruction, trim the live interval to the last use and return
 /// true.
 bool
-SimpleRegisterCoalescing::TrimLiveIntervalToLastUse(MachineInstrIndex CopyIdx,
+SimpleRegisterCoalescing::TrimLiveIntervalToLastUse(SlotIndex CopyIdx,
                                                     MachineBasicBlock *CopyMBB,
                                                     LiveInterval &li,
                                                     const LiveRange *LR) {
-  MachineInstrIndex MBBStart = li_->getMBBStartIdx(CopyMBB);
-  MachineInstrIndex LastUseIdx;
+  SlotIndex MBBStart = li_->getMBBStartIdx(CopyMBB);
+  SlotIndex LastUseIdx;
   MachineOperand *LastUse =
-    lastRegisterUse(LR->start, li_->getPrevSlot(CopyIdx), li.reg, LastUseIdx);
+    lastRegisterUse(LR->start, CopyIdx.getPrevSlot(), li.reg, LastUseIdx);
   if (LastUse) {
     MachineInstr *LastUseMI = LastUse->getParent();
     if (!isSameOrFallThroughBB(LastUseMI->getParent(), CopyMBB, tii_)) {
@@ -601,8 +605,8 @@ SimpleRegisterCoalescing::TrimLiveIntervalToLastUse(MachineInstrIndex CopyIdx,
     // There are uses before the copy, just shorten the live range to the end
     // of last use.
     LastUse->setIsKill();
-    removeRange(li, li_->getDefIndex(LastUseIdx), LR->end, li_, tri_);
-    LR->valno->addKill(li_->getNextSlot(LastUseIdx));
+    removeRange(li, LastUseIdx.getDefIndex(), LR->end, li_, tri_);
+    LR->valno->addKill(LastUseIdx.getDefIndex());
     unsigned SrcReg, DstReg, SrcSubIdx, DstSubIdx;
     if (tii_->isMoveInstr(*LastUseMI, SrcReg, DstReg, SrcSubIdx, DstSubIdx) &&
         DstReg == li.reg) {
@@ -615,7 +619,7 @@ SimpleRegisterCoalescing::TrimLiveIntervalToLastUse(MachineInstrIndex CopyIdx,
 
   // Is it livein?
   if (LR->start <= MBBStart && LR->end > MBBStart) {
-    if (LR->start == MachineInstrIndex()) {
+    if (LR->start == li_->getZeroIndex()) {
       assert(TargetRegisterInfo::isPhysicalRegister(li.reg));
       // Live-in to the function but dead. Remove it from entry live-in set.
       mf_->begin()->removeLiveIn(li.reg);
@@ -632,7 +636,7 @@ bool SimpleRegisterCoalescing::ReMaterializeTrivialDef(LiveInterval &SrcInt,
                                                        unsigned DstReg,
                                                        unsigned DstSubIdx,
                                                        MachineInstr *CopyMI) {
-  MachineInstrIndex CopyIdx = li_->getUseIndex(li_->getInstructionIndex(CopyMI));
+  SlotIndex CopyIdx = li_->getInstructionIndex(CopyMI).getUseIndex();
   LiveInterval::iterator SrcLR = SrcInt.FindLiveRangeContaining(CopyIdx);
   assert(SrcLR != SrcInt.end() && "Live range not found!");
   VNInfo *ValNo = SrcLR->valno;
@@ -646,11 +650,10 @@ bool SimpleRegisterCoalescing::ReMaterializeTrivialDef(LiveInterval &SrcInt,
   const TargetInstrDesc &TID = DefMI->getDesc();
   if (!TID.isAsCheapAsAMove())
     return false;
-  if (!DefMI->getDesc().isRematerializable() ||
-      !tii_->isTriviallyReMaterializable(DefMI))
+  if (!tii_->isTriviallyReMaterializable(DefMI, AA))
     return false;
   bool SawStore = false;
-  if (!DefMI->isSafeToMove(tii_, SawStore))
+  if (!DefMI->isSafeToMove(tii_, SawStore, AA))
     return false;
   if (TID.getNumDefs() != 1)
     return false;
@@ -682,7 +685,7 @@ bool SimpleRegisterCoalescing::ReMaterializeTrivialDef(LiveInterval &SrcInt,
       return false;
   }
 
-  MachineInstrIndex DefIdx = li_->getDefIndex(CopyIdx);
+  SlotIndex DefIdx = CopyIdx.getDefIndex();
   const LiveRange *DLR= li_->getInterval(DstReg).getLiveRangeContaining(DefIdx);
   DLR->valno->setCopy(0);
   // Don't forget to update sub-register intervals.
@@ -706,7 +709,7 @@ bool SimpleRegisterCoalescing::ReMaterializeTrivialDef(LiveInterval &SrcInt,
     }
 
   MachineBasicBlock::iterator MII = next(MachineBasicBlock::iterator(CopyMI));
-  tii_->reMaterialize(*MBB, MII, DstReg, DstSubIdx, DefMI);
+  tii_->reMaterialize(*MBB, MII, DstReg, DstSubIdx, DefMI, tri_);
   MachineInstr *NewMI = prior(MII);
 
   if (checkForDeadDef) {
@@ -715,7 +718,7 @@ bool SimpleRegisterCoalescing::ReMaterializeTrivialDef(LiveInterval &SrcInt,
     // should mark it dead:
     if (DefMI->getParent() == MBB) {
       DefMI->addRegisterDead(SrcInt.reg, tri_);
-      SrcLR->end = li_->getNextSlot(SrcLR->start);
+      SrcLR->end = SrcLR->start.getNextSlot();
     }
   }
 
@@ -814,8 +817,8 @@ SimpleRegisterCoalescing::UpdateRegDefsUses(unsigned SrcReg, unsigned DstReg,
         (TargetRegisterInfo::isVirtualRegister(CopyDstReg) ||
          allocatableRegs_[CopyDstReg])) {
       LiveInterval &LI = li_->getInterval(CopyDstReg);
-      MachineInstrIndex DefIdx =
-        li_->getDefIndex(li_->getInstructionIndex(UseMI));
+      SlotIndex DefIdx =
+        li_->getInstructionIndex(UseMI).getDefIndex();
       if (const LiveRange *DLR = LI.getLiveRangeContaining(DefIdx)) {
         if (DLR->valno->def == DefIdx)
           DLR->valno->setCopy(UseMI);
@@ -834,12 +837,12 @@ void SimpleRegisterCoalescing::RemoveUnnecessaryKills(unsigned Reg,
     if (!UseMO.isKill())
       continue;
     MachineInstr *UseMI = UseMO.getParent();
-    MachineInstrIndex UseIdx =
-      li_->getUseIndex(li_->getInstructionIndex(UseMI));
+    SlotIndex UseIdx =
+      li_->getInstructionIndex(UseMI).getUseIndex();
     const LiveRange *LR = LI.getLiveRangeContaining(UseIdx);
     if (!LR ||
-        (!LR->valno->isKill(li_->getNextSlot(UseIdx)) &&
-         LR->valno->def != li_->getNextSlot(UseIdx))) {
+        (!LR->valno->isKill(UseIdx.getDefIndex()) &&
+         LR->valno->def != UseIdx.getDefIndex())) {
       // Interesting problem. After coalescing, reg1027's def and kill are both
       // at the same point:  %reg1027,0.000000e+00 = [56,814:0)  0 at 70-(814)
       //
@@ -880,16 +883,16 @@ static bool removeIntervalIfEmpty(LiveInterval &li, LiveIntervals *li_,
 /// Return true if live interval is removed.
 bool SimpleRegisterCoalescing::ShortenDeadCopyLiveRange(LiveInterval &li,
                                                         MachineInstr *CopyMI) {
-  MachineInstrIndex CopyIdx = li_->getInstructionIndex(CopyMI);
+  SlotIndex CopyIdx = li_->getInstructionIndex(CopyMI);
   LiveInterval::iterator MLR =
-    li.FindLiveRangeContaining(li_->getDefIndex(CopyIdx));
+    li.FindLiveRangeContaining(CopyIdx.getDefIndex());
   if (MLR == li.end())
     return false;  // Already removed by ShortenDeadCopySrcLiveRange.
-  MachineInstrIndex RemoveStart = MLR->start;
-  MachineInstrIndex RemoveEnd = MLR->end;
-  MachineInstrIndex DefIdx = li_->getDefIndex(CopyIdx);
+  SlotIndex RemoveStart = MLR->start;
+  SlotIndex RemoveEnd = MLR->end;
+  SlotIndex DefIdx = CopyIdx.getDefIndex();
   // Remove the liverange that's defined by this.
-  if (RemoveStart == DefIdx && RemoveEnd == li_->getNextSlot(DefIdx)) {
+  if (RemoveStart == DefIdx && RemoveEnd == DefIdx.getStoreIndex()) {
     removeRange(li, RemoveStart, RemoveEnd, li_, tri_);
     return removeIntervalIfEmpty(li, li_, tri_);
   }
@@ -900,7 +903,7 @@ bool SimpleRegisterCoalescing::ShortenDeadCopyLiveRange(LiveInterval &li,
 /// the val# it defines. If the live interval becomes empty, remove it as well.
 bool SimpleRegisterCoalescing::RemoveDeadDef(LiveInterval &li,
                                              MachineInstr *DefMI) {
-  MachineInstrIndex DefIdx = li_->getDefIndex(li_->getInstructionIndex(DefMI));
+  SlotIndex DefIdx = li_->getInstructionIndex(DefMI).getDefIndex();
   LiveInterval::iterator MLR = li.FindLiveRangeContaining(DefIdx);
   if (DefIdx != MLR->valno->def)
     return false;
@@ -911,18 +914,18 @@ bool SimpleRegisterCoalescing::RemoveDeadDef(LiveInterval &li,
 /// PropagateDeadness - Propagate the dead marker to the instruction which
 /// defines the val#.
 static void PropagateDeadness(LiveInterval &li, MachineInstr *CopyMI,
-                              MachineInstrIndex &LRStart, LiveIntervals *li_,
+                              SlotIndex &LRStart, LiveIntervals *li_,
                               const TargetRegisterInfo* tri_) {
   MachineInstr *DefMI =
-    li_->getInstructionFromIndex(li_->getDefIndex(LRStart));
+    li_->getInstructionFromIndex(LRStart.getDefIndex());
   if (DefMI && DefMI != CopyMI) {
     int DeadIdx = DefMI->findRegisterDefOperandIdx(li.reg, false);
     if (DeadIdx != -1)
       DefMI->getOperand(DeadIdx).setIsDead();
     else
       DefMI->addOperand(MachineOperand::CreateReg(li.reg,
-                                                  true, true, false, true));
-    LRStart = li_->getNextSlot(LRStart);
+                   /*def*/true, /*implicit*/true, /*kill*/false, /*dead*/true));
+    LRStart = LRStart.getNextSlot();
   }
 }
 
@@ -933,8 +936,8 @@ static void PropagateDeadness(LiveInterval &li, MachineInstr *CopyMI,
 bool
 SimpleRegisterCoalescing::ShortenDeadCopySrcLiveRange(LiveInterval &li,
                                                       MachineInstr *CopyMI) {
-  MachineInstrIndex CopyIdx = li_->getInstructionIndex(CopyMI);
-  if (CopyIdx == MachineInstrIndex()) {
+  SlotIndex CopyIdx = li_->getInstructionIndex(CopyMI);
+  if (CopyIdx == SlotIndex()) {
     // FIXME: special case: function live in. It can be a general case if the
     // first instruction index starts at > 0 value.
     assert(TargetRegisterInfo::isPhysicalRegister(li.reg));
@@ -947,13 +950,13 @@ SimpleRegisterCoalescing::ShortenDeadCopySrcLiveRange(LiveInterval &li,
   }
 
   LiveInterval::iterator LR =
-    li.FindLiveRangeContaining(li_->getPrevSlot(CopyIdx));
+    li.FindLiveRangeContaining(CopyIdx.getPrevIndex().getStoreIndex());
   if (LR == li.end())
     // Livein but defined by a phi.
     return false;
 
-  MachineInstrIndex RemoveStart = LR->start;
-  MachineInstrIndex RemoveEnd = li_->getNextSlot(li_->getDefIndex(CopyIdx));
+  SlotIndex RemoveStart = LR->start;
+  SlotIndex RemoveEnd = CopyIdx.getStoreIndex();
   if (LR->end > RemoveEnd)
     // More uses past this copy? Nothing to do.
     return false;
@@ -973,7 +976,7 @@ SimpleRegisterCoalescing::ShortenDeadCopySrcLiveRange(LiveInterval &li,
     // If the live range starts in another mbb and the copy mbb is not a fall
     // through mbb, then we can only cut the range from the beginning of the
     // copy mbb.
-    RemoveStart = li_->getNextSlot(li_->getMBBStartIdx(CopyMBB));
+    RemoveStart = li_->getMBBStartIdx(CopyMBB).getNextIndex().getBaseIndex();
 
   if (LR->valno->def == RemoveStart) {
     // If the def MI defines the val# and this copy is the only kill of the
@@ -1029,14 +1032,14 @@ SimpleRegisterCoalescing::isWinToJoinVRWithSrcPhysReg(MachineInstr *CopyMI,
 
   // If the virtual register live interval extends into a loop, turn down
   // aggressiveness.
-  MachineInstrIndex CopyIdx =
-    li_->getDefIndex(li_->getInstructionIndex(CopyMI));
+  SlotIndex CopyIdx =
+    li_->getInstructionIndex(CopyMI).getDefIndex();
   const MachineLoop *L = loopInfo->getLoopFor(CopyMBB);
   if (!L) {
     // Let's see if the virtual register live interval extends into the loop.
     LiveInterval::iterator DLR = DstInt.FindLiveRangeContaining(CopyIdx);
     assert(DLR != DstInt.end() && "Live range not found!");
-    DLR = DstInt.FindLiveRangeContaining(li_->getNextSlot(DLR->end));
+    DLR = DstInt.FindLiveRangeContaining(DLR->end.getNextSlot());
     if (DLR != DstInt.end()) {
       CopyMBB = li_->getMBBFromIndex(DLR->start);
       L = loopInfo->getLoopFor(CopyMBB);
@@ -1046,7 +1049,7 @@ SimpleRegisterCoalescing::isWinToJoinVRWithSrcPhysReg(MachineInstr *CopyMI,
   if (!L || Length <= Threshold)
     return true;
 
-  MachineInstrIndex UseIdx = li_->getUseIndex(CopyIdx);
+  SlotIndex UseIdx = CopyIdx.getUseIndex();
   LiveInterval::iterator SLR = SrcInt.FindLiveRangeContaining(UseIdx);
   MachineBasicBlock *SMBB = li_->getMBBFromIndex(SLR->start);
   if (loopInfo->getLoopFor(SMBB) != L) {
@@ -1059,7 +1062,7 @@ SimpleRegisterCoalescing::isWinToJoinVRWithSrcPhysReg(MachineInstr *CopyMI,
       if (SuccMBB == CopyMBB)
         continue;
       if (DstInt.overlaps(li_->getMBBStartIdx(SuccMBB),
-                          li_->getNextSlot(li_->getMBBEndIdx(SuccMBB))))
+                      li_->getMBBEndIdx(SuccMBB).getNextIndex().getBaseIndex()))
         return false;
     }
   }
@@ -1090,12 +1093,12 @@ SimpleRegisterCoalescing::isWinToJoinVRWithDstPhysReg(MachineInstr *CopyMI,
 
   // If the virtual register live interval is defined or cross a loop, turn
   // down aggressiveness.
-  MachineInstrIndex CopyIdx =
-    li_->getDefIndex(li_->getInstructionIndex(CopyMI));
-  MachineInstrIndex UseIdx = li_->getUseIndex(CopyIdx);
+  SlotIndex CopyIdx =
+    li_->getInstructionIndex(CopyMI).getDefIndex();
+  SlotIndex UseIdx = CopyIdx.getUseIndex();
   LiveInterval::iterator SLR = SrcInt.FindLiveRangeContaining(UseIdx);
   assert(SLR != SrcInt.end() && "Live range not found!");
-  SLR = SrcInt.FindLiveRangeContaining(li_->getPrevSlot(SLR->start));
+  SLR = SrcInt.FindLiveRangeContaining(SLR->start.getPrevSlot());
   if (SLR == SrcInt.end())
     return true;
   MachineBasicBlock *SMBB = li_->getMBBFromIndex(SLR->start);
@@ -1115,7 +1118,7 @@ SimpleRegisterCoalescing::isWinToJoinVRWithDstPhysReg(MachineInstr *CopyMI,
       if (PredMBB == SMBB)
         continue;
       if (SrcInt.overlaps(li_->getMBBStartIdx(PredMBB),
-                          li_->getNextSlot(li_->getMBBEndIdx(PredMBB))))
+                      li_->getMBBEndIdx(PredMBB).getNextIndex().getBaseIndex()))
         return false;
     }
   }
@@ -1366,7 +1369,7 @@ bool SimpleRegisterCoalescing::JoinCopy(CopyRec &TheCopy, bool &Again) {
     if (SrcSubIdx)
       SrcSubRC = SrcRC->getSubRegisterRegClass(SrcSubIdx);
     assert(SrcSubRC && "Illegal subregister index");
-    if (!SrcSubRC->contains(DstReg)) {
+    if (!SrcSubRC->contains(DstSubReg)) {
       DEBUG(errs() << "\tIncompatible source regclass: "
                    << tri_->getName(DstSubReg) << " not in "
                    << SrcSubRC->getName() << ".\n");
@@ -1704,7 +1707,7 @@ bool SimpleRegisterCoalescing::JoinCopy(CopyRec &TheCopy, bool &Again) {
 
     // Update the liveintervals of sub-registers.
     for (const unsigned *AS = tri_->getSubRegisters(DstReg); *AS; ++AS)
-      li_->getOrCreateInterval(*AS).MergeInClobberRanges(*ResSrcInt,
+      li_->getOrCreateInterval(*AS).MergeInClobberRanges(*li_, *ResSrcInt,
                                                  li_->getVNInfoAllocator());
   }
 
@@ -1831,6 +1834,25 @@ static bool InVector(VNInfo *Val, const SmallVector<VNInfo*, 8> &V) {
   return std::find(V.begin(), V.end(), Val) != V.end();
 }
 
+static bool isValNoDefMove(const MachineInstr *MI, unsigned DR, unsigned SR,
+                           const TargetInstrInfo *TII,
+                           const TargetRegisterInfo *TRI) {
+  unsigned SrcReg, DstReg, SrcSubIdx, DstSubIdx;
+  if (TII->isMoveInstr(*MI, SrcReg, DstReg, SrcSubIdx, DstSubIdx))
+    ;
+  else if (MI->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG) {
+    DstReg = MI->getOperand(0).getReg();
+    SrcReg = MI->getOperand(1).getReg();
+  } else if (MI->getOpcode() == TargetInstrInfo::SUBREG_TO_REG ||
+             MI->getOpcode() == TargetInstrInfo::INSERT_SUBREG) {
+    DstReg = MI->getOperand(0).getReg();
+    SrcReg = MI->getOperand(2).getReg();
+  } else
+    return false;
+  return (SrcReg == SR || TRI->isSuperRegister(SR, SrcReg)) &&
+         (DstReg == DR || TRI->isSuperRegister(DR, DstReg));
+}
+
 /// RangeIsDefinedByCopyFromReg - Return true if the specified live range of
 /// the specified live interval is defined by a copy from the specified
 /// register.
@@ -1847,12 +1869,9 @@ bool SimpleRegisterCoalescing::RangeIsDefinedByCopyFromReg(LiveInterval &li,
     // It's a sub-register live interval, we may not have precise information.
     // Re-compute it.
     MachineInstr *DefMI = li_->getInstructionFromIndex(LR->start);
-    unsigned SrcReg, DstReg, SrcSubIdx, DstSubIdx;
-    if (DefMI &&
-        tii_->isMoveInstr(*DefMI, SrcReg, DstReg, SrcSubIdx, DstSubIdx) &&
-        DstReg == li.reg && SrcReg == Reg) {
+    if (DefMI && isValNoDefMove(DefMI, li.reg, Reg, tii_, tri_)) {
       // Cache computed info.
-      LR->valno->def  = LR->start;
+      LR->valno->def = LR->start;
       LR->valno->setCopy(DefMI);
       return true;
     }
@@ -1860,6 +1879,23 @@ bool SimpleRegisterCoalescing::RangeIsDefinedByCopyFromReg(LiveInterval &li,
   return false;
 }
 
+
+/// ValueLiveAt - Return true if the LiveRange pointed to by the given
+/// iterator, or any subsequent range with the same value number,
+/// is live at the given point.
+bool SimpleRegisterCoalescing::ValueLiveAt(LiveInterval::iterator LRItr,
+                                           LiveInterval::iterator LREnd,
+                                           SlotIndex defPoint) const {
+  for (const VNInfo *valno = LRItr->valno;
+       (LRItr != LREnd) && (LRItr->valno == valno); ++LRItr) {
+    if (LRItr->contains(defPoint))
+      return true;
+  }
+
+  return false;
+}
+
+
 /// SimpleJoin - Attempt to joint the specified interval into this one. The
 /// caller of this method must guarantee that the RHS only contains a single
 /// value number and that the RHS is not defined by a copy from this
@@ -1906,7 +1942,7 @@ bool SimpleRegisterCoalescing::SimpleJoin(LiveInterval &LHS, LiveInterval &RHS){
         if (!RangeIsDefinedByCopyFromReg(LHS, LHSIt, RHS.reg))
           return false;    // Nope, bail out.
 
-        if (LHSIt->contains(RHSIt->valno->def))
+        if (ValueLiveAt(LHSIt, LHS.end(), RHSIt->valno->def))
           // Here is an interesting situation:
           // BB1:
           //   vr1025 = copy vr1024
@@ -1944,7 +1980,7 @@ bool SimpleRegisterCoalescing::SimpleJoin(LiveInterval &LHS, LiveInterval &RHS){
           // Otherwise, if this is a copy from the RHS, mark it as being merged
           // in.
           if (RangeIsDefinedByCopyFromReg(LHS, LHSIt, RHS.reg)) {
-            if (LHSIt->contains(RHSIt->valno->def))
+            if (ValueLiveAt(LHSIt, LHS.end(), RHSIt->valno->def))
               // Here is an interesting situation:
               // BB1:
               //   vr1025 = copy vr1024
@@ -2029,7 +2065,7 @@ bool SimpleRegisterCoalescing::SimpleJoin(LiveInterval &LHS, LiveInterval &RHS){
   // Update the liveintervals of sub-registers.
   if (TargetRegisterInfo::isPhysicalRegister(LHS.reg))
     for (const unsigned *AS = tri_->getSubRegisters(LHS.reg); *AS; ++AS)
-      li_->getOrCreateInterval(*AS).MergeInClobberRanges(LHS,
+      li_->getOrCreateInterval(*AS).MergeInClobberRanges(*li_, LHS,
                                                     li_->getVNInfoAllocator());
 
   return true;
@@ -2130,7 +2166,7 @@ SimpleRegisterCoalescing::JoinIntervals(LiveInterval &LHS, LiveInterval &RHS,
     } else {
       // It was defined as a copy from the LHS, find out what value # it is.
       RHSValNoInfo =
-        LHS.getLiveRangeContaining(li_->getPrevSlot(RHSValNoInfo0->def))->valno;
+        LHS.getLiveRangeContaining(RHSValNoInfo0->def.getPrevSlot())->valno;
       RHSValID = RHSValNoInfo->id;
       RHSVal0DefinedFromLHS = RHSValID;
     }
@@ -2194,7 +2230,7 @@ SimpleRegisterCoalescing::JoinIntervals(LiveInterval &LHS, LiveInterval &RHS,
 
       // Figure out the value # from the RHS.
       LHSValsDefinedFromRHS[VNI]=
-        RHS.getLiveRangeContaining(li_->getPrevSlot(VNI->def))->valno;
+        RHS.getLiveRangeContaining(VNI->def.getPrevSlot())->valno;
     }
 
     // Loop over the value numbers of the RHS, seeing if any are defined from
@@ -2212,7 +2248,7 @@ SimpleRegisterCoalescing::JoinIntervals(LiveInterval &LHS, LiveInterval &RHS,
 
       // Figure out the value # from the LHS.
       RHSValsDefinedFromLHS[VNI]=
-        LHS.getLiveRangeContaining(li_->getPrevSlot(VNI->def))->valno;
+        LHS.getLiveRangeContaining(VNI->def.getPrevSlot())->valno;
     }
 
     LHSValNoAssignments.resize(LHS.getNumValNums(), -1);
@@ -2344,7 +2380,7 @@ namespace {
 
 void SimpleRegisterCoalescing::CopyCoalesceInMBB(MachineBasicBlock *MBB,
                                                std::vector<CopyRec> &TryAgain) {
-  DEBUG(errs() << ((Value*)MBB->getBasicBlock())->getName() << ":\n");
+  DEBUG(errs() << MBB->getName() << ":\n");
 
   std::vector<CopyRec> VirtCopies;
   std::vector<CopyRec> PhysCopies;
@@ -2476,11 +2512,11 @@ SimpleRegisterCoalescing::differingRegisterClasses(unsigned RegA,
 /// lastRegisterUse - Returns the last use of the specific register between
 /// cycles Start and End or NULL if there are no uses.
 MachineOperand *
-SimpleRegisterCoalescing::lastRegisterUse(MachineInstrIndex Start,
-                                          MachineInstrIndex End,
+SimpleRegisterCoalescing::lastRegisterUse(SlotIndex Start,
+                                          SlotIndex End,
                                           unsigned Reg,
-                                          MachineInstrIndex &UseIdx) const{
-  UseIdx = MachineInstrIndex();
+                                          SlotIndex &UseIdx) const{
+  UseIdx = SlotIndex();
   if (TargetRegisterInfo::isVirtualRegister(Reg)) {
     MachineOperand *LastUse = NULL;
     for (MachineRegisterInfo::use_iterator I = mri_->use_begin(Reg),
@@ -2492,22 +2528,24 @@ SimpleRegisterCoalescing::lastRegisterUse(MachineInstrIndex Start,
           SrcReg == DstReg)
         // Ignore identity copies.
         continue;
-      MachineInstrIndex Idx = li_->getInstructionIndex(UseMI);
+      SlotIndex Idx = li_->getInstructionIndex(UseMI);
+      // FIXME: Should this be Idx != UseIdx? SlotIndex() will return something
+      // that compares higher than any other interval.
       if (Idx >= Start && Idx < End && Idx >= UseIdx) {
         LastUse = &Use;
-        UseIdx = li_->getUseIndex(Idx);
+        UseIdx = Idx.getUseIndex();
       }
     }
     return LastUse;
   }
 
-  MachineInstrIndex s = Start;
-  MachineInstrIndex e = li_->getBaseIndex(li_->getPrevSlot(End));
+  SlotIndex s = Start;
+  SlotIndex e = End.getPrevSlot().getBaseIndex();
   while (e >= s) {
     // Skip deleted instructions
     MachineInstr *MI = li_->getInstructionFromIndex(e);
-    while (e != MachineInstrIndex() && li_->getPrevIndex(e) >= s && !MI) {
-      e = li_->getPrevIndex(e);
+    while (e != SlotIndex() && e.getPrevIndex() >= s && !MI) {
+      e = e.getPrevIndex();
       MI = li_->getInstructionFromIndex(e);
     }
     if (e < s || MI == NULL)
@@ -2521,12 +2559,12 @@ SimpleRegisterCoalescing::lastRegisterUse(MachineInstrIndex Start,
         MachineOperand &Use = MI->getOperand(i);
         if (Use.isReg() && Use.isUse() && Use.getReg() &&
             tri_->regsOverlap(Use.getReg(), Reg)) {
-          UseIdx = li_->getUseIndex(e);
+          UseIdx = e.getUseIndex();
           return &Use;
         }
       }
 
-    e = li_->getPrevIndex(e);
+    e = e.getPrevIndex();
   }
 
   return NULL;
@@ -2550,24 +2588,30 @@ void SimpleRegisterCoalescing::releaseMemory() {
 static bool isZeroLengthInterval(LiveInterval *li, LiveIntervals *li_) {
   for (LiveInterval::Ranges::const_iterator
          i = li->ranges.begin(), e = li->ranges.end(); i != e; ++i)
-    if (li_->getPrevIndex(i->end) > i->start)
+    if (i->end.getPrevIndex() > i->start)
       return false;
   return true;
 }
 
+
 void SimpleRegisterCoalescing::CalculateSpillWeights() {
   SmallSet<unsigned, 4> Processed;
   for (MachineFunction::iterator mbbi = mf_->begin(), mbbe = mf_->end();
        mbbi != mbbe; ++mbbi) {
     MachineBasicBlock* MBB = mbbi;
-    MachineInstrIndex MBBEnd = li_->getMBBEndIdx(MBB);
+    SlotIndex MBBEnd = li_->getMBBEndIdx(MBB);
     MachineLoop* loop = loopInfo->getLoopFor(MBB);
     unsigned loopDepth = loop ? loop->getLoopDepth() : 0;
-    bool isExit = loop ? loop->isLoopExit(MBB) : false;
+    bool isExiting = loop ? loop->isLoopExiting(MBB) : false;
 
-    for (MachineBasicBlock::iterator mii = MBB->begin(), mie = MBB->end();
+    for (MachineBasicBlock::const_iterator mii = MBB->begin(), mie = MBB->end();
          mii != mie; ++mii) {
-      MachineInstr *MI = mii;
+      const MachineInstr *MI = mii;
+      if (tii_->isIdentityCopy(*MI))
+        continue;
+
+      if (MI->getOpcode() == TargetInstrInfo::IMPLICIT_DEF)
+        continue;
 
       for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
         const MachineOperand &mopi = MI->getOperand(i);
@@ -2595,10 +2639,9 @@ void SimpleRegisterCoalescing::CalculateSpillWeights() {
 
         LiveInterval &RegInt = li_->getInterval(Reg);
         float Weight = li_->getSpillWeight(HasDef, HasUse, loopDepth);
-        if (HasDef && isExit) {
+        if (HasDef && isExiting) {
           // Looks like this is a loop count variable update.
-          MachineInstrIndex DefIdx =
-            li_->getDefIndex(li_->getInstructionIndex(MI));
+          SlotIndex DefIdx = li_->getInstructionIndex(MI).getDefIndex();
           const LiveRange *DLR =
             li_->getInterval(Reg).getLiveRangeContaining(DefIdx);
           if (DLR->end > MBBEnd)
@@ -2656,6 +2699,7 @@ bool SimpleRegisterCoalescing::runOnMachineFunction(MachineFunction &fn) {
   tri_ = tm_->getRegisterInfo();
   tii_ = tm_->getInstrInfo();
   li_ = &getAnalysis<LiveIntervals>();
+  AA = &getAnalysis<AliasAnalysis>();
   loopInfo = &getAnalysis<MachineLoopInfo>();
 
   DEBUG(errs() << "********** SIMPLE REGISTER COALESCING **********\n"
@@ -2704,7 +2748,7 @@ bool SimpleRegisterCoalescing::runOnMachineFunction(MachineFunction &fn) {
             // registers unless the definition is dead. e.g.
             // %DO<def> = INSERT_SUBREG %D0<undef>, %S0<kill>, 1
             // or else the scavenger may complain. LowerSubregs will
-            // change this to an IMPLICIT_DEF later.
+            // delete them later.
             DoDelete = false;
         }
         if (MI->registerDefIsDead(DstReg)) {
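
A note on the pattern driving most of the SimpleRegisterCoalescing.cpp hunks
above: the old MachineInstrIndex API kept all index arithmetic on
LiveIntervals (getDefIndex, getUseIndex, getNextSlot, ...), while the new
SlotIndex carries its sub-slot itself. A minimal self-contained sketch of
that arithmetic -- plain C++ with invented names, not the LLVM API --
assuming four sub-slots per instruction:

    #include <cassert>
    #include <iostream>

    struct SlotIdx {
      enum Slot { LOAD = 0, USE = 1, DEF = 2, STORE = 3, NUM = 4 };
      unsigned raw; // instruction number * NUM + sub-slot

      static SlotIdx make(unsigned instr, Slot s) {
        return SlotIdx{instr * NUM + s};
      }
      unsigned instr() const { return raw / NUM; }
      // Pick a sub-slot within the same instruction.
      SlotIdx getBaseIndex()  const { return SlotIdx{raw - raw % NUM + LOAD}; }
      SlotIdx getUseIndex()   const { return SlotIdx{raw - raw % NUM + USE}; }
      SlotIdx getDefIndex()   const { return SlotIdx{raw - raw % NUM + DEF}; }
      SlotIdx getStoreIndex() const { return SlotIdx{raw - raw % NUM + STORE}; }
      // Step by one slot or one whole instruction.
      SlotIdx getNextSlot()  const { return SlotIdx{raw + 1}; }
      SlotIdx getPrevSlot()  const { return SlotIdx{raw - 1}; }
      SlotIdx getNextIndex() const { return SlotIdx{raw + NUM}; }
      SlotIdx getPrevIndex() const { return SlotIdx{raw - NUM}; }
    };

    int main() {
      SlotIdx copy = SlotIdx::make(10, SlotIdx::LOAD);
      // The shapes used in the hunks above:
      assert(copy.getDefIndex().raw == copy.getUseIndex().raw + 1);
      assert(copy.getDefIndex().getStoreIndex().raw == copy.getStoreIndex().raw);
      assert(copy.getNextIndex().getBaseIndex().instr() == 11);
      std::cout << "def slot of instr 10: " << copy.getDefIndex().raw << "\n";
    }

With the slot encoded in the index, call sites like
li_->getDefIndex(li_->getInstructionIndex(MI)) collapse to
li_->getInstructionIndex(MI).getDefIndex(), which is exactly the rewrite
repeated through the coalescer hunks.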
diff --git a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.h b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.h
index 20b8eb2..78f8a9a 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.h
+++ b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.h
@@ -45,6 +45,7 @@ namespace llvm {
     const TargetInstrInfo* tii_;
     LiveIntervals *li_;
     const MachineLoopInfo* loopInfo;
+    AliasAnalysis *AA;
     
     BitVector allocatableRegs_;
     DenseMap<const TargetRegisterClass*, BitVector> allocatableRCRegs_;
@@ -145,7 +146,7 @@ namespace llvm {
     /// TrimLiveIntervalToLastUse - If there is a last use in the same basic
     /// block as the copy instruction, trim the live interval to the last use
     /// and return true.
-    bool TrimLiveIntervalToLastUse(MachineInstrIndex CopyIdx,
+    bool TrimLiveIntervalToLastUse(SlotIndex CopyIdx,
                                    MachineBasicBlock *CopyMBB,
                                    LiveInterval &li, const LiveRange *LR);
 
@@ -200,6 +201,12 @@ namespace llvm {
     bool CanJoinInsertSubRegToPhysReg(unsigned DstReg, unsigned SrcReg,
                                       unsigned SubIdx, unsigned &RealDstReg);
 
+    /// ValueLiveAt - Return true if the LiveRange pointed to by the given
+    /// iterator, or any subsequent range with the same value number,
+    /// is live at the given point.
+    bool ValueLiveAt(LiveInterval::iterator LRItr, LiveInterval::iterator LREnd, 
+                     SlotIndex defPoint) const;                                  
+
     /// RangeIsDefinedByCopyFromReg - Return true if the specified live range of
     /// the specified live interval is defined by a copy from the specified
     /// register.
@@ -234,9 +241,8 @@ namespace llvm {
 
     /// lastRegisterUse - Returns the last use of the specific register between
     /// cycles Start and End or NULL if there are no uses.
-    MachineOperand *lastRegisterUse(MachineInstrIndex Start,
-                                    MachineInstrIndex End, unsigned Reg,
-                                    MachineInstrIndex &LastUseIdx) const;
+    MachineOperand *lastRegisterUse(SlotIndex Start, SlotIndex End,
+                                    unsigned Reg, SlotIndex &LastUseIdx) const;
 
     /// CalculateSpillWeights - Compute spill weights for all virtual register
     /// live intervals.
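
The ValueLiveAt helper declared here (defined in the .cpp hunk earlier)
scans forward from a starting range across every consecutive range that
carries the same value number. A self-contained sketch with simplified
stand-in types, not LLVM's LiveInterval:

    #include <iostream>
    #include <vector>

    struct Range {
      unsigned start, end, valno; // half-open [start, end)
      bool contains(unsigned p) const { return start <= p && p < end; }
    };

    // True if the range at 'it', or any later range with the same value
    // number, covers 'point'. Ranges are sorted; one value's ranges are
    // contiguous.
    static bool valueLiveAt(std::vector<Range>::const_iterator it,
                            std::vector<Range>::const_iterator end,
                            unsigned point) {
      for (unsigned vn = it->valno; it != end && it->valno == vn; ++it)
        if (it->contains(point))
          return true;
      return false;
    }

    int main() {
      // Value #0 live over [4,8) and [12,20); value #1 over [20,30).
      std::vector<Range> li = {{4, 8, 0}, {12, 20, 0}, {20, 30, 1}};
      std::cout << valueLiveAt(li.begin(), li.end(), 14) << "\n"; // 1
      std::cout << valueLiveAt(li.begin(), li.end(), 25) << "\n"; // 0 (valno 1)
    }

This is what lets SimpleJoin test whether a value is live at a def point
even when its liveness is split across several ranges, instead of checking
only the single range LHSIt points at.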
diff --git a/libclamav/c++/llvm/lib/CodeGen/SjLjEHPrepare.cpp b/libclamav/c++/llvm/lib/CodeGen/SjLjEHPrepare.cpp
index 38996ff..6de03e1 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SjLjEHPrepare.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SjLjEHPrepare.cpp
@@ -27,7 +27,6 @@
 #include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Target/TargetLowering.h"
@@ -38,7 +37,7 @@ STATISTIC(NumUnwinds, "Number of unwinds replaced");
 STATISTIC(NumSpilled, "Number of registers live across unwind edges");
 
 namespace {
-  class VISIBILITY_HIDDEN SjLjEHPass : public FunctionPass {
+  class SjLjEHPass : public FunctionPass {
 
     const TargetLowering *TLI;
 
@@ -50,8 +49,7 @@ namespace {
     Constant *FrameAddrFn;
     Constant *LSDAAddrFn;
     Value *PersonalityFn;
-    Constant *Selector32Fn;
-    Constant *Selector64Fn;
+    Constant *SelectorFn;
     Constant *ExceptionFn;
 
     Value *CallSite;
@@ -88,7 +86,7 @@ bool SjLjEHPass::doInitialization(Module &M) {
   // Build the function context structure.
   // builtin_setjmp uses a five word jbuf
   const Type *VoidPtrTy =
-          PointerType::getUnqual(Type::getInt8Ty(M.getContext()));
+          Type::getInt8PtrTy(M.getContext());
   const Type *Int32Ty = Type::getInt32Ty(M.getContext());
   FunctionContextTy =
     StructType::get(M.getContext(),
@@ -116,8 +114,7 @@ bool SjLjEHPass::doInitialization(Module &M) {
   FrameAddrFn = Intrinsic::getDeclaration(&M, Intrinsic::frameaddress);
   BuiltinSetjmpFn = Intrinsic::getDeclaration(&M, Intrinsic::eh_sjlj_setjmp);
   LSDAAddrFn = Intrinsic::getDeclaration(&M, Intrinsic::eh_sjlj_lsda);
-  Selector32Fn = Intrinsic::getDeclaration(&M, Intrinsic::eh_selector_i32);
-  Selector64Fn = Intrinsic::getDeclaration(&M, Intrinsic::eh_selector_i64);
+  SelectorFn = Intrinsic::getDeclaration(&M, Intrinsic::eh_selector);
   ExceptionFn = Intrinsic::getDeclaration(&M, Intrinsic::eh_exception);
   PersonalityFn = 0;
 
@@ -298,8 +295,7 @@ bool SjLjEHPass::insertSjLjEHSupport(Function &F) {
   for (Function::iterator BB = F.begin(), E = F.end(); BB != E; ++BB) {
     for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ++I) {
       if (CallInst *CI = dyn_cast<CallInst>(I)) {
-        if (CI->getCalledFunction() == Selector32Fn ||
-            CI->getCalledFunction() == Selector64Fn) {
+        if (CI->getCalledFunction() == SelectorFn) {
           if (!PersonalityFn) PersonalityFn = CI->getOperand(2);
           EH_Selectors.push_back(CI);
         } else if (CI->getCalledFunction() == ExceptionFn) {
@@ -378,7 +374,7 @@ bool SjLjEHPass::insertSjLjEHSupport(Function &F) {
       // the instruction hasn't already been removed.
       if (!I->getParent()) continue;
       Value *Val = new LoadInst(ExceptionAddr, "exception", true, I);
-      Type *Ty = PointerType::getUnqual(Type::getInt8Ty(F.getContext()));
+      const Type *Ty = Type::getInt8PtrTy(F.getContext());
       Val = CastInst::Create(Instruction::IntToPtr, Val, Ty, "", I);
 
       I->replaceAllUsesWith(Val);
@@ -455,8 +451,8 @@ bool SjLjEHPass::insertSjLjEHSupport(Function &F) {
     // Call the setjmp intrinsic. It fills in the rest of the jmpbuf.
     Value *SetjmpArg =
       CastInst::Create(Instruction::BitCast, FieldPtr,
-                        Type::getInt8Ty(F.getContext())->getPointerTo(), "",
-                        EntryBB->getTerminator());
+                       Type::getInt8PtrTy(F.getContext()), "",
+                       EntryBB->getTerminator());
     Value *DispatchVal = CallInst::Create(BuiltinSetjmpFn, SetjmpArg,
                                           "dispatch",
                                           EntryBB->getTerminator());
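
Since the SjLjEHPrepare hunks merge the width-specific selector intrinsics
into the single eh.selector declaration fetched in doInitialization, the
call-site scan needs only one callee comparison. A toy model of that scan,
with invented stand-in types rather than real IR:

    #include <iostream>
    #include <string>
    #include <vector>

    struct Call { std::string callee, personality; };

    int main() {
      std::vector<Call> body = {
        {"llvm.eh.exception", ""},
        {"llvm.eh.selector", "__gxx_personality_v0"},
        {"printf", ""},
      };
      std::string personalityFn;
      std::vector<const Call*> selectors, exceptions;
      for (const Call &c : body) {
        if (c.callee == "llvm.eh.selector") {   // one check where there were two
          if (personalityFn.empty())
            personalityFn = c.personality;      // remember the personality
          selectors.push_back(&c);
        } else if (c.callee == "llvm.eh.exception") {
          exceptions.push_back(&c);
        }
      }
      std::cout << selectors.size() << " selector call(s), personality "
                << personalityFn << "\n";
    }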
diff --git a/libclamav/c++/llvm/lib/CodeGen/SlotIndexes.cpp b/libclamav/c++/llvm/lib/CodeGen/SlotIndexes.cpp
new file mode 100644
index 0000000..f85384b
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/SlotIndexes.cpp
@@ -0,0 +1,222 @@
+//===-- SlotIndexes.cpp - Slot Indexes Pass  ------------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "slotindexes"
+
+#include "llvm/CodeGen/SlotIndexes.h"
+#include "llvm/CodeGen/MachineFunction.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/raw_ostream.h"
+#include "llvm/Support/ManagedStatic.h"
+
+using namespace llvm;
+
+
+// Yep - these are thread safe. See the header for details. 
+namespace {
+
+
+  class EmptyIndexListEntry : public IndexListEntry {
+  public:
+    EmptyIndexListEntry() : IndexListEntry(EMPTY_KEY) {}
+  };
+
+  class TombstoneIndexListEntry : public IndexListEntry {
+  public:
+    TombstoneIndexListEntry() : IndexListEntry(TOMBSTONE_KEY) {}
+  };
+
+  // The following statics are thread safe. They're read only, and you
+  // can't step from them to any other list entries.
+  ManagedStatic<EmptyIndexListEntry> IndexListEntryEmptyKey;
+  ManagedStatic<TombstoneIndexListEntry> IndexListEntryTombstoneKey;
+}
+
+char SlotIndexes::ID = 0;
+static RegisterPass<SlotIndexes> X("slotindexes", "Slot index numbering");
+
+IndexListEntry* IndexListEntry::getEmptyKeyEntry() {
+  return &*IndexListEntryEmptyKey;
+}
+
+IndexListEntry* IndexListEntry::getTombstoneKeyEntry() {
+  return &*IndexListEntryTombstoneKey;
+}
+
+
+void SlotIndexes::getAnalysisUsage(AnalysisUsage &au) const {
+  au.setPreservesAll();
+  MachineFunctionPass::getAnalysisUsage(au);
+}
+
+void SlotIndexes::releaseMemory() {
+  mi2iMap.clear();
+  mbb2IdxMap.clear();
+  idx2MBBMap.clear();
+  terminatorGaps.clear();
+  clearList();
+}
+
+bool SlotIndexes::runOnMachineFunction(MachineFunction &fn) {
+
+  // Compute numbering as follows:
+  // Grab an iterator to the start of the index list.
+  // Iterate over all MBBs, and within each MBB all MIs, keeping the MI
+  // iterator in lock-step (though skipping it over indexes which have
+  // null pointers in the instruction field).
+  // At each iteration assert that the instruction pointed to in the index
+  // is the same one pointed to by the MI iterator.
+
+  // FIXME: This can be simplified. The mi2iMap_, Idx2MBBMap, etc. should
+  // only need to be set up once after the first numbering is computed.
+
+  mf = &fn;
+  initList();
+
+  // Check that the list contains only the sentinel.
+  assert(indexListHead->getNext() == 0 &&
+         "Index list non-empty at initial numbering?");
+  assert(idx2MBBMap.empty() &&
+         "Index -> MBB mapping non-empty at initial numbering?");
+  assert(mbb2IdxMap.empty() &&
+         "MBB -> Index mapping non-empty at initial numbering?");
+  assert(mi2iMap.empty() &&
+         "MachineInstr -> Index mapping non-empty at initial numbering?");
+
+  functionSize = 0;
+  unsigned index = 0;
+
+  // Iterate over the function.
+  for (MachineFunction::iterator mbbItr = mf->begin(), mbbEnd = mf->end();
+       mbbItr != mbbEnd; ++mbbItr) {
+    MachineBasicBlock *mbb = &*mbbItr;
+
+    // Insert an index for the MBB start.
+    push_back(createEntry(0, index));
+    SlotIndex blockStartIndex(back(), SlotIndex::LOAD);
+
+    index += SlotIndex::NUM;
+
+    for (MachineBasicBlock::iterator miItr = mbb->begin(), miEnd = mbb->end();
+         miItr != miEnd; ++miItr) {
+      MachineInstr *mi = &*miItr;
+
+      if (miItr == mbb->getFirstTerminator()) {
+        push_back(createEntry(0, index));
+        terminatorGaps.insert(
+          std::make_pair(mbb, SlotIndex(back(), SlotIndex::PHI_BIT)));
+        index += SlotIndex::NUM;
+      }
+
+      // Insert a store index for the instr.
+      push_back(createEntry(mi, index));
+
+      // Save this base index in the maps.
+      mi2iMap.insert(
+        std::make_pair(mi, SlotIndex(back(), SlotIndex::LOAD)));
+ 
+      ++functionSize;
+
+      unsigned Slots = mi->getDesc().getNumDefs();
+      if (Slots == 0)
+        Slots = 1;
+
+      index += (Slots + 1) * SlotIndex::NUM;
+    }
+
+    if (mbb->getFirstTerminator() == mbb->end()) {
+      push_back(createEntry(0, index));
+      terminatorGaps.insert(
+        std::make_pair(mbb, SlotIndex(back(), SlotIndex::PHI_BIT)));
+      index += SlotIndex::NUM;
+    }
+
+    SlotIndex blockEndIndex(back(), SlotIndex::STORE);
+    mbb2IdxMap.insert(
+      std::make_pair(mbb, std::make_pair(blockStartIndex, blockEndIndex)));
+
+    idx2MBBMap.push_back(IdxMBBPair(blockStartIndex, mbb));
+  }
+
+  // One blank instruction at the end.
+  push_back(createEntry(0, index));
+
+  // Sort the Idx2MBBMap
+  std::sort(idx2MBBMap.begin(), idx2MBBMap.end(), Idx2MBBCompare());
+
+  DEBUG(dump());
+
+  // And we're done!
+  return false;
+}
+
+void SlotIndexes::renumberIndexes() {
+
+  // Renumber updates the index of every element of the index list.
+  // If all instrs in the function have been allocated an index (which has been
+  // placed in the index list in the order of instruction iteration) then the
+  // resulting numbering will match what would have been generated by the
+  // pass during the initial numbering of the function if the new instructions
+  // had been present.
+
+  functionSize = 0;
+  unsigned index = 0;
+
+  for (IndexListEntry *curEntry = front(); curEntry != getTail();
+       curEntry = curEntry->getNext()) {
+
+    curEntry->setIndex(index);
+
+    if (curEntry->getInstr() == 0) {
+      // MBB start entry or terminator gap. Just step index by 1.
+      index += SlotIndex::NUM;
+    }
+    else {
+      ++functionSize;
+      unsigned Slots = curEntry->getInstr()->getDesc().getNumDefs();
+      if (Slots == 0)
+        Slots = 1;
+
+      index += (Slots + 1) * SlotIndex::NUM;
+    }
+  }
+}
+
+void SlotIndexes::dump() const {
+  for (const IndexListEntry *itr = front(); itr != getTail();
+       itr = itr->getNext()) {
+    errs() << itr->getIndex() << " ";
+
+    if (itr->getInstr() != 0) {
+      errs() << *itr->getInstr();
+    } else {
+      errs() << "\n";
+    }
+  }
+
+  for (MBB2IdxMap::const_iterator itr = mbb2IdxMap.begin();
+       itr != mbb2IdxMap.end(); ++itr) {
+    errs() << "MBB " << itr->first->getNumber() << " (" << itr->first << ") - ["
+           << itr->second.first << ", " << itr->second.second << "]\n";
+  }
+}
+
+// Print a SlotIndex to a raw_ostream.
+void SlotIndex::print(raw_ostream &os) const {
+  os << getIndex();
+  if (isPHI())
+    os << "*";
+}
+
+// Dump a SlotIndex to stderr.
+void SlotIndex::dump() const {
+  print(errs());
+  errs() << "\n";
+}
+
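
The numbering scheme in SlotIndexes::runOnMachineFunction above reduces to
three rules: one entry per block start, one reserved terminator gap per
block (for PHI handling), and (numDefs + 1) * NUM of index space per
instruction. A toy renumbering under those assumptions, all names invented:

    #include <cstdio>
    #include <vector>

    enum { NUM = 4 }; // LOAD, USE, DEF, STORE

    struct Instr { unsigned numDefs; bool isTerminator; };

    int main() {
      std::vector<std::vector<Instr>> blocks = {
        {{1, false}, {0, false}, {0, true}}, // bb0: two instrs + branch
        {{2, false}},                        // bb1: one instr, no terminator
      };
      unsigned index = 0;
      for (unsigned b = 0; b < blocks.size(); ++b) {
        std::printf("bb%u start @ %u\n", b, index);
        index += NUM;                        // block-start entry
        bool gapPlaced = false;
        for (const Instr &mi : blocks[b]) {
          if (mi.isTerminator && !gapPlaced) {
            std::printf("  terminator gap @ %u\n", index);
            index += NUM;                    // gap before the first terminator
            gapPlaced = true;
          }
          unsigned slots = mi.numDefs ? mi.numDefs : 1;
          std::printf("  instr @ %u (width %u)\n", index, (slots + 1) * NUM);
          index += (slots + 1) * NUM;
        }
        if (!gapPlaced) {                    // block without a terminator
          std::printf("  terminator gap @ %u\n", index);
          index += NUM;
        }
      }
    }

Because renumberIndexes() walks the same list with the same stride rules,
inserting an instruction and renumbering yields the numbering the initial
pass would have produced, as the comment in the code above notes.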
diff --git a/libclamav/c++/llvm/lib/CodeGen/Spiller.cpp b/libclamav/c++/llvm/lib/CodeGen/Spiller.cpp
index 4326a89..237d0b5 100644
--- a/libclamav/c++/llvm/lib/CodeGen/Spiller.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/Spiller.cpp
@@ -12,17 +12,30 @@
 #include "Spiller.h"
 #include "VirtRegMap.h"
 #include "llvm/CodeGen/LiveIntervalAnalysis.h"
-#include "llvm/CodeGen/LiveStackAnalysis.h"
 #include "llvm/CodeGen/MachineFrameInfo.h"
 #include "llvm/CodeGen/MachineFunction.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 
 using namespace llvm;
 
+namespace {
+  enum SpillerName { trivial, standard };
+}
+
+static cl::opt<SpillerName>
+spillerOpt("spiller",
+           cl::desc("Spiller to use: (default: standard)"),
+           cl::Prefix,
+           cl::values(clEnumVal(trivial, "trivial spiller"),
+                      clEnumVal(standard, "default spiller"),
+                      clEnumValEnd),
+           cl::init(standard));
+
 Spiller::~Spiller() {}
 
 namespace {
@@ -33,189 +46,23 @@ protected:
 
   MachineFunction *mf;
   LiveIntervals *lis;
-  LiveStacks *ls;
   MachineFrameInfo *mfi;
   MachineRegisterInfo *mri;
   const TargetInstrInfo *tii;
   VirtRegMap *vrm;
   
   /// Construct a spiller base. 
-  SpillerBase(MachineFunction *mf, LiveIntervals *lis, LiveStacks *ls,
-              VirtRegMap *vrm) :
-    mf(mf), lis(lis), ls(ls), vrm(vrm)
+  SpillerBase(MachineFunction *mf, LiveIntervals *lis, VirtRegMap *vrm)
+    : mf(mf), lis(lis), vrm(vrm)
   {
     mfi = mf->getFrameInfo();
     mri = &mf->getRegInfo();
     tii = mf->getTarget().getInstrInfo();
   }
 
-  /// Ensures there is space before the given machine instruction, returns the
-  /// instruction's new number.
-  MachineInstrIndex makeSpaceBefore(MachineInstr *mi) {
-    if (!lis->hasGapBeforeInstr(lis->getInstructionIndex(mi))) {
-      lis->scaleNumbering(2);
-      ls->scaleNumbering(2);
-    }
-
-    MachineInstrIndex miIdx = lis->getInstructionIndex(mi);
-
-    assert(lis->hasGapBeforeInstr(miIdx));
-    
-    return miIdx;
-  }
-
-  /// Ensure there is space after the given machine instruction, returns the
-  /// instruction's new number.
-  MachineInstrIndex makeSpaceAfter(MachineInstr *mi) {
-    if (!lis->hasGapAfterInstr(lis->getInstructionIndex(mi))) {
-      lis->scaleNumbering(2);
-      ls->scaleNumbering(2);
-    }
-
-    MachineInstrIndex miIdx = lis->getInstructionIndex(mi);
-
-    assert(lis->hasGapAfterInstr(miIdx));
-
-    return miIdx;
-  }  
-
-  /// Insert a store of the given vreg to the given stack slot immediately
-  /// after the given instruction. Returns the base index of the inserted
-  /// instruction. The caller is responsible for adding an appropriate
-  /// LiveInterval to the LiveIntervals analysis.
-  MachineInstrIndex insertStoreAfter(MachineInstr *mi, unsigned ss,
-                                     unsigned vreg,
-                                     const TargetRegisterClass *trc) {
-
-    MachineBasicBlock::iterator nextInstItr(next(mi)); 
-
-    MachineInstrIndex miIdx = makeSpaceAfter(mi);
-
-    tii->storeRegToStackSlot(*mi->getParent(), nextInstItr, vreg,
-                             true, ss, trc);
-    MachineBasicBlock::iterator storeInstItr(next(mi));
-    MachineInstr *storeInst = &*storeInstItr;
-    MachineInstrIndex storeInstIdx = lis->getNextIndex(miIdx);
-
-    assert(lis->getInstructionFromIndex(storeInstIdx) == 0 &&
-           "Store inst index already in use.");
-    
-    lis->InsertMachineInstrInMaps(storeInst, storeInstIdx);
-
-    return storeInstIdx;
-  }
-
-  /// Insert a store of the given vreg to the given stack slot immediately
-  /// before the given instructnion. Returns the base index of the inserted
-  /// Instruction.
-  MachineInstrIndex insertStoreBefore(MachineInstr *mi, unsigned ss,
-                                      unsigned vreg,
-                                      const TargetRegisterClass *trc) {
-    MachineInstrIndex miIdx = makeSpaceBefore(mi);
-  
-    tii->storeRegToStackSlot(*mi->getParent(), mi, vreg, true, ss, trc);
-    MachineBasicBlock::iterator storeInstItr(prior(mi));
-    MachineInstr *storeInst = &*storeInstItr;
-    MachineInstrIndex storeInstIdx = lis->getPrevIndex(miIdx);
-
-    assert(lis->getInstructionFromIndex(storeInstIdx) == 0 &&
-           "Store inst index already in use.");
-
-    lis->InsertMachineInstrInMaps(storeInst, storeInstIdx);
-
-    return storeInstIdx;
-  }
-
-  void insertStoreAfterInstOnInterval(LiveInterval *li,
-                                      MachineInstr *mi, unsigned ss,
-                                      unsigned vreg,
-                                      const TargetRegisterClass *trc) {
-
-    MachineInstrIndex storeInstIdx = insertStoreAfter(mi, ss, vreg, trc);
-    MachineInstrIndex start = lis->getDefIndex(lis->getInstructionIndex(mi)),
-                      end = lis->getUseIndex(storeInstIdx);
-
-    VNInfo *vni =
-      li->getNextValue(storeInstIdx, 0, true, lis->getVNInfoAllocator());
-    vni->addKill(storeInstIdx);
-    DEBUG(errs() << "    Inserting store range: [" << start
-                 << ", " << end << ")\n");
-    LiveRange lr(start, end, vni);
-      
-    li->addRange(lr);
-  }
-
-  /// Insert a load of the given vreg from the given stack slot immediately
-  /// after the given instruction. Returns the base index of the inserted
-  /// instruction. The caller is responsibel for adding/removing an appropriate
-  /// range vreg's LiveInterval.
-  MachineInstrIndex insertLoadAfter(MachineInstr *mi, unsigned ss,
-                                    unsigned vreg,
-                                    const TargetRegisterClass *trc) {
-
-    MachineBasicBlock::iterator nextInstItr(next(mi)); 
-
-    MachineInstrIndex miIdx = makeSpaceAfter(mi);
-
-    tii->loadRegFromStackSlot(*mi->getParent(), nextInstItr, vreg, ss, trc);
-    MachineBasicBlock::iterator loadInstItr(next(mi));
-    MachineInstr *loadInst = &*loadInstItr;
-    MachineInstrIndex loadInstIdx = lis->getNextIndex(miIdx);
-
-    assert(lis->getInstructionFromIndex(loadInstIdx) == 0 &&
-           "Store inst index already in use.");
-    
-    lis->InsertMachineInstrInMaps(loadInst, loadInstIdx);
-
-    return loadInstIdx;
-  }
-
-  /// Insert a load of the given vreg from the given stack slot immediately
-  /// before the given instruction. Returns the base index of the inserted
-  /// instruction. The caller is responsible for adding an appropriate
-  /// LiveInterval to the LiveIntervals analysis.
-  MachineInstrIndex insertLoadBefore(MachineInstr *mi, unsigned ss,
-                                     unsigned vreg,
-                                     const TargetRegisterClass *trc) {  
-    MachineInstrIndex miIdx = makeSpaceBefore(mi);
-  
-    tii->loadRegFromStackSlot(*mi->getParent(), mi, vreg, ss, trc);
-    MachineBasicBlock::iterator loadInstItr(prior(mi));
-    MachineInstr *loadInst = &*loadInstItr;
-    MachineInstrIndex loadInstIdx = lis->getPrevIndex(miIdx);
-
-    assert(lis->getInstructionFromIndex(loadInstIdx) == 0 &&
-           "Load inst index already in use.");
-
-    lis->InsertMachineInstrInMaps(loadInst, loadInstIdx);
-
-    return loadInstIdx;
-  }
-
-  void insertLoadBeforeInstOnInterval(LiveInterval *li,
-                                      MachineInstr *mi, unsigned ss, 
-                                      unsigned vreg,
-                                      const TargetRegisterClass *trc) {
-
-    MachineInstrIndex loadInstIdx = insertLoadBefore(mi, ss, vreg, trc);
-    MachineInstrIndex start = lis->getDefIndex(loadInstIdx),
-                      end = lis->getUseIndex(lis->getInstructionIndex(mi));
-
-    VNInfo *vni =
-      li->getNextValue(loadInstIdx, 0, true, lis->getVNInfoAllocator());
-    vni->addKill(lis->getInstructionIndex(mi));
-    DEBUG(errs() << "    Intserting load range: [" << start
-                 << ", " << end << ")\n");
-    LiveRange lr(start, end, vni);
-
-    li->addRange(lr);
-  }
-
-
-
   /// Add spill ranges for every use/def of the live interval, inserting loads
-  /// immediately before each use, and stores after each def. No folding is
-  /// attempted.
+  /// immediately before each use, and stores after each def. No folding or
+  /// remat is attempted.
   std::vector<LiveInterval*> trivialSpillEverywhere(LiveInterval *li) {
     DEBUG(errs() << "Spilling everywhere " << *li << "\n");
 
@@ -232,56 +79,77 @@ protected:
     const TargetRegisterClass *trc = mri->getRegClass(li->reg);
     unsigned ss = vrm->assignVirt2StackSlot(li->reg);
 
+    // Iterate over reg uses/defs.
     for (MachineRegisterInfo::reg_iterator
          regItr = mri->reg_begin(li->reg); regItr != mri->reg_end();) {
 
+      // Grab the use/def instr.
       MachineInstr *mi = &*regItr;
 
       DEBUG(errs() << "  Processing " << *mi);
 
+      // Step regItr to the next use/def instr.
       do {
         ++regItr;
       } while (regItr != mri->reg_end() && (&*regItr == mi));
       
+      // Collect uses & defs for this instr.
       SmallVector<unsigned, 2> indices;
       bool hasUse = false;
       bool hasDef = false;
-    
       for (unsigned i = 0; i != mi->getNumOperands(); ++i) {
         MachineOperand &op = mi->getOperand(i);
-
         if (!op.isReg() || op.getReg() != li->reg)
           continue;
-      
         hasUse |= mi->getOperand(i).isUse();
         hasDef |= mi->getOperand(i).isDef();
-      
         indices.push_back(i);
       }
 
+      // Create a new vreg & interval for this instr.
       unsigned newVReg = mri->createVirtualRegister(trc);
       vrm->grow();
       vrm->assignVirt2StackSlot(newVReg, ss);
-
       LiveInterval *newLI = &lis->getOrCreateInterval(newVReg);
       newLI->weight = HUGE_VALF;
       
+      // Update the reg operands & kill flags.
       for (unsigned i = 0; i < indices.size(); ++i) {
-        mi->getOperand(indices[i]).setReg(newVReg);
-
-        if (mi->getOperand(indices[i]).isUse()) {
-          mi->getOperand(indices[i]).setIsKill(true);
+        unsigned mopIdx = indices[i];
+        MachineOperand &mop = mi->getOperand(mopIdx);
+        mop.setReg(newVReg);
+        if (mop.isUse() && !mi->isRegTiedToDefOperand(mopIdx)) {
+          mop.setIsKill(true);
         }
       }
-
       assert(hasUse || hasDef);
 
+      // Insert reload if necessary.
+      MachineBasicBlock::iterator miItr(mi);
       if (hasUse) {
-        insertLoadBeforeInstOnInterval(newLI, mi, ss, newVReg, trc);
+        tii->loadRegFromStackSlot(*mi->getParent(), miItr, newVReg, ss, trc);
+        MachineInstr *loadInstr(prior(miItr));
+        SlotIndex loadIndex =
+          lis->InsertMachineInstrInMaps(loadInstr).getDefIndex();
+        SlotIndex endIndex = loadIndex.getNextIndex();
+        VNInfo *loadVNI =
+          newLI->getNextValue(loadIndex, 0, true, lis->getVNInfoAllocator());
+        loadVNI->addKill(endIndex);
+        newLI->addRange(LiveRange(loadIndex, endIndex, loadVNI));
       }
 
+      // Insert store if necessary.
       if (hasDef) {
-        insertStoreAfterInstOnInterval(newLI, mi, ss, newVReg, trc);
+        tii->storeRegToStackSlot(*mi->getParent(), next(miItr), newVReg, true,
+                                 ss, trc);
+        MachineInstr *storeInstr(next(miItr));
+        SlotIndex storeIndex =
+          lis->InsertMachineInstrInMaps(storeInstr).getDefIndex();
+        SlotIndex beginIndex = storeIndex.getPrevIndex();
+        VNInfo *storeVNI =
+          newLI->getNextValue(beginIndex, 0, true, lis->getVNInfoAllocator());
+        storeVNI->addKill(storeIndex);
+        newLI->addRange(LiveRange(beginIndex, storeIndex, storeVNI));
       }
 
       added.push_back(newLI);
@@ -298,61 +166,32 @@ protected:
 class TrivialSpiller : public SpillerBase {
 public:
 
-  TrivialSpiller(MachineFunction *mf, LiveIntervals *lis, LiveStacks *ls,
-                 VirtRegMap *vrm) :
-    SpillerBase(mf, lis, ls, vrm) {}
+  TrivialSpiller(MachineFunction *mf, LiveIntervals *lis, VirtRegMap *vrm)
+    : SpillerBase(mf, lis, vrm) {}
 
-  std::vector<LiveInterval*> spill(LiveInterval *li) {
+  std::vector<LiveInterval*> spill(LiveInterval *li,
+                                   SmallVectorImpl<LiveInterval*> &spillIs) {
+    // Ignore spillIs - we don't use it.
     return trivialSpillEverywhere(li);
   }
 
-  std::vector<LiveInterval*> intraBlockSplit(LiveInterval *li, VNInfo *valno)  {
-    std::vector<LiveInterval*> spillIntervals;
-
-    if (!valno->isDefAccurate() && !valno->isPHIDef()) {
-      // Early out for values which have no well defined def point.
-      return spillIntervals;
-    }
-
-    // Ok.. we should be able to proceed...
-    const TargetRegisterClass *trc = mri->getRegClass(li->reg);
-    unsigned ss = vrm->assignVirt2StackSlot(li->reg);    
-    vrm->grow();
-    vrm->assignVirt2StackSlot(li->reg, ss);
-
-    MachineInstr *mi = 0;
-    MachineInstrIndex storeIdx = MachineInstrIndex();
-
-    if (valno->isDefAccurate()) {
-      // If we have an accurate def we can just grab an iterator to the instr
-      // after the def.
-      mi = lis->getInstructionFromIndex(valno->def);
-      storeIdx = lis->getDefIndex(insertStoreAfter(mi, ss, li->reg, trc));
-    } else {
-      // if we get here we have a PHI def.
-      mi = &lis->getMBBFromIndex(valno->def)->front();
-      storeIdx = lis->getDefIndex(insertStoreBefore(mi, ss, li->reg, trc));
-    }
-
-    MachineBasicBlock *defBlock = mi->getParent();
-    MachineInstrIndex loadIdx = MachineInstrIndex();
-
-    // Now we need to find the load...
-    MachineBasicBlock::iterator useItr(mi);
-    for (; !useItr->readsRegister(li->reg); ++useItr) {}
-
-    if (useItr != defBlock->end()) {
-      MachineInstr *loadInst = useItr;
-      loadIdx = lis->getUseIndex(insertLoadBefore(loadInst, ss, li->reg, trc));
-    }
-    else {
-      MachineInstr *loadInst = &defBlock->back();
-      loadIdx = lis->getUseIndex(insertLoadAfter(loadInst, ss, li->reg, trc));
-    }
-
-    li->removeRange(storeIdx, loadIdx, true);
+};
 
-    return spillIntervals;
+/// Falls back on LiveIntervals::addIntervalsForSpills.
+class StandardSpiller : public Spiller {
+private:
+  LiveIntervals *lis;
+  const MachineLoopInfo *loopInfo;
+  VirtRegMap *vrm;
+public:
+  StandardSpiller(MachineFunction *mf, LiveIntervals *lis,
+                  const MachineLoopInfo *loopInfo, VirtRegMap *vrm)
+    : lis(lis), loopInfo(loopInfo), vrm(vrm) {}
+
+  /// Falls back on LiveIntervals::addIntervalsForSpills.
+  std::vector<LiveInterval*> spill(LiveInterval *li,
+                                   SmallVectorImpl<LiveInterval*> &spillIs) {
+    return lis->addIntervalsForSpills(*li, spillIs, loopInfo, *vrm);
   }
 
 };
@@ -360,6 +199,11 @@ public:
 }
 
 llvm::Spiller* llvm::createSpiller(MachineFunction *mf, LiveIntervals *lis,
-                                   LiveStacks *ls, VirtRegMap *vrm) {
-  return new TrivialSpiller(mf, lis, ls, vrm);
+                                   const MachineLoopInfo *loopInfo,
+                                   VirtRegMap *vrm) {
+  switch (spillerOpt) {
+    case trivial: return new TrivialSpiller(mf, lis, vrm); break;
+    case standard: return new StandardSpiller(mf, lis, loopInfo, vrm); break;
+    default: llvm_unreachable("Unreachable!"); break;
+  }
 }
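
With the insertLoad*/insertStore* helpers deleted, trivialSpillEverywhere
now rewrites each use/def inline: fresh register, reload before a read,
store after a write. Ignoring the live-interval bookkeeping, the rewrite
looks roughly like this standalone model (all types invented):

    #include <iostream>
    #include <string>
    #include <vector>

    struct Op { std::string text; bool use, def; };

    int main() {
      unsigned nextVReg = 100;
      const std::string slot = "ss0";        // the assigned stack slot
      std::vector<Op> code = {
        {"v1 = add v1, 1", true, true},      // reads and writes v1
        {"store v1, [x]", true, false},      // reads v1
      };
      std::vector<std::string> out;
      for (const Op &op : code) {
        std::string nv = "v" + std::to_string(nextVReg++);
        if (op.use)                          // reload just before the use
          out.push_back(nv + " = load " + slot);
        out.push_back(op.text + "   ; v1 -> " + nv);
        if (op.def)                          // spill just after the def
          out.push_back("store " + nv + ", " + slot);
      }
      for (const std::string &l : out)
        std::cout << l << "\n";
    }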
diff --git a/libclamav/c++/llvm/lib/CodeGen/Spiller.h b/libclamav/c++/llvm/lib/CodeGen/Spiller.h
index 9c3900d..c6bd985 100644
--- a/libclamav/c++/llvm/lib/CodeGen/Spiller.h
+++ b/libclamav/c++/llvm/lib/CodeGen/Spiller.h
@@ -10,6 +10,7 @@
 #ifndef LLVM_CODEGEN_SPILLER_H
 #define LLVM_CODEGEN_SPILLER_H
 
+#include "llvm/ADT/SmallVector.h"
 #include <vector>
 
 namespace llvm {
@@ -19,6 +20,7 @@ namespace llvm {
   class LiveStacks;
   class MachineFunction;
   class MachineInstr;
+  class MachineLoopInfo;
   class VirtRegMap;
   class VNInfo;
 
@@ -32,17 +34,14 @@ namespace llvm {
 
     /// Spill the given live range. The method used will depend on the Spiller
     /// implementation selected.
-    virtual std::vector<LiveInterval*> spill(LiveInterval *li) = 0;
-
-    /// Intra-block split.
-    virtual std::vector<LiveInterval*> intraBlockSplit(LiveInterval *li,
-                                                       VNInfo *valno) = 0;
+    virtual std::vector<LiveInterval*> spill(LiveInterval *li,
+                                   SmallVectorImpl<LiveInterval*> &spillIs) = 0;
 
   };
 
   /// Create and return a spiller object, as specified on the command line.
   Spiller* createSpiller(MachineFunction *mf, LiveIntervals *li,
-                         LiveStacks *ls, VirtRegMap *vrm);
+                         const MachineLoopInfo *loopInfo, VirtRegMap *vrm);
 }
 
 #endif
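
Taken together with the Spiller.cpp hunks, the interface change threads the
caller's spill worklist through spill(), and the -spiller option picks the
implementation. A compilable miniature of that shape in present-day C++,
with simplified hypothetical types:

    #include <iostream>
    #include <memory>
    #include <vector>

    struct LiveInterval { int reg; };

    struct Spiller {
      virtual ~Spiller() {}
      virtual std::vector<LiveInterval*>
      spill(LiveInterval *li, std::vector<LiveInterval*> &spillIs) = 0;
    };

    // Spills at every use/def; ignores the worklist.
    struct TrivialSpiller : Spiller {
      std::vector<LiveInterval*>
      spill(LiveInterval *li, std::vector<LiveInterval*> &) override {
        std::cout << "trivially spilling reg " << li->reg << "\n";
        return {};
      }
    };

    // Falls back on the framework's spill-interval machinery.
    struct StandardSpiller : Spiller {
      std::vector<LiveInterval*>
      spill(LiveInterval *li, std::vector<LiveInterval*> &spillIs) override {
        std::cout << "standard spill of reg " << li->reg
                  << ", worklist size " << spillIs.size() << "\n";
        return {};
      }
    };

    enum SpillerName { trivial, standard };

    static std::unique_ptr<Spiller> createSpiller(SpillerName opt) {
      switch (opt) {
      case trivial: return std::make_unique<TrivialSpiller>();
      default:      return std::make_unique<StandardSpiller>();
      }
    }

    int main() {
      LiveInterval li{42};
      std::vector<LiveInterval*> worklist;
      createSpiller(standard)->spill(&li, worklist);
    }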
diff --git a/libclamav/c++/llvm/lib/CodeGen/StackProtector.cpp b/libclamav/c++/llvm/lib/CodeGen/StackProtector.cpp
index 350bc6e..e8ee822 100644
--- a/libclamav/c++/llvm/lib/CodeGen/StackProtector.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/StackProtector.cpp
@@ -37,7 +37,7 @@ SSPBufferSize("stack-protector-buffer-size", cl::init(8),
                        "stack protection"));
 
 namespace {
-  class VISIBILITY_HIDDEN StackProtector : public FunctionPass {
+  class StackProtector : public FunctionPass {
     /// TLI - Keep a pointer of a TargetLowering to consult for determining
     /// target type sizes.
     const TargetLowering *TLI;
@@ -111,11 +111,16 @@ bool StackProtector::RequiresStackProtector() const {
           // protectors.
           return true;
 
-        if (const ArrayType *AT = dyn_cast<ArrayType>(AI->getAllocatedType()))
+        if (const ArrayType *AT = dyn_cast<ArrayType>(AI->getAllocatedType())) {
+          // We apparently only care about character arrays.
+          if (AT->getElementType() != Type::getInt8Ty(AT->getContext()))
+            continue;
+
           // If an array has more than SSPBufferSize bytes of allocated space,
           // then we emit stack protectors.
           if (SSPBufferSize <= TD->getTypeAllocSize(AT))
             return true;
+        }
       }
   }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/StackSlotColoring.cpp b/libclamav/c++/llvm/lib/CodeGen/StackSlotColoring.cpp
index fad0808..c299192 100644
--- a/libclamav/c++/llvm/lib/CodeGen/StackSlotColoring.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/StackSlotColoring.cpp
@@ -22,7 +22,6 @@
 #include "llvm/CodeGen/MachineRegisterInfo.h"
 #include "llvm/CodeGen/PseudoSourceValue.h"
 #include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetMachine.h"
@@ -53,7 +52,7 @@ STATISTIC(NumStoreElim,  "Number of stores eliminated");
 STATISTIC(NumDead,       "Number of trivially dead stack accesses eliminated");
 
 namespace {
-  class VISIBILITY_HIDDEN StackSlotColoring : public MachineFunctionPass {
+  class StackSlotColoring : public MachineFunctionPass {
     bool ColorWithRegs;
     LiveStacks* LS;
     VirtRegMap* VRM;
@@ -99,6 +98,8 @@ namespace {
     
     virtual void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.setPreservesCFG();
+      AU.addRequired<SlotIndexes>();
+      AU.addPreserved<SlotIndexes>();
       AU.addRequired<LiveStacks>();
       AU.addRequired<VirtRegMap>();
       AU.addPreserved<VirtRegMap>();      
diff --git a/libclamav/c++/llvm/lib/CodeGen/StrongPHIElimination.cpp b/libclamav/c++/llvm/lib/CodeGen/StrongPHIElimination.cpp
index d2f93fc..3c13906 100644
--- a/libclamav/c++/llvm/lib/CodeGen/StrongPHIElimination.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/StrongPHIElimination.cpp
@@ -32,12 +32,11 @@
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/ADT/DepthFirstIterator.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 using namespace llvm;
 
 namespace {
-  struct VISIBILITY_HIDDEN StrongPHIElimination : public MachineFunctionPass {
+  struct StrongPHIElimination : public MachineFunctionPass {
     static char ID; // Pass identification, replacement for typeid
     StrongPHIElimination() : MachineFunctionPass(&ID) {}
 
@@ -73,6 +72,8 @@ namespace {
     virtual void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.setPreservesCFG();
       AU.addRequired<MachineDominatorTree>();
+      AU.addRequired<SlotIndexes>();
+      AU.addPreserved<SlotIndexes>();
       AU.addRequired<LiveIntervals>();
       
       // TODO: Actually make this true.
@@ -295,7 +296,7 @@ StrongPHIElimination::computeDomForest(
 static bool isLiveIn(unsigned r, MachineBasicBlock* MBB,
                      LiveIntervals& LI) {
   LiveInterval& I = LI.getOrCreateInterval(r);
-  MachineInstrIndex idx = LI.getMBBStartIdx(MBB);
+  SlotIndex idx = LI.getMBBStartIdx(MBB);
   return I.liveAt(idx);
 }
 
@@ -428,7 +429,7 @@ void StrongPHIElimination::processBlock(MachineBasicBlock* MBB) {
     }
 
     LiveInterval& PI = LI.getOrCreateInterval(DestReg);
-    MachineInstrIndex pIdx = LI.getDefIndex(LI.getInstructionIndex(P));
+    SlotIndex pIdx = LI.getInstructionIndex(P).getDefIndex();
     VNInfo* PVN = PI.getLiveRangeContaining(pIdx)->valno;
     PhiValueNumber.insert(std::make_pair(DestReg, PVN->id));
 
@@ -748,7 +749,7 @@ void StrongPHIElimination::ScheduleCopies(MachineBasicBlock* MBB,
       
       LiveInterval& I = LI.getInterval(curr.second);
       MachineBasicBlock::iterator term = MBB->getFirstTerminator();
-      MachineInstrIndex endIdx = MachineInstrIndex();
+      SlotIndex endIdx = SlotIndex();
       if (term != MBB->end())
         endIdx = LI.getInstructionIndex(term);
       else
@@ -772,7 +773,7 @@ void StrongPHIElimination::ScheduleCopies(MachineBasicBlock* MBB,
   
   // Renumber the instructions so that we can perform the index computations
   // needed to create new live intervals.
-  LI.computeNumbering();
+  LI.renumber();
   
   // For copies that we inserted at the ends of predecessors, we construct
   // live intervals.  This is pretty easy, since we know that the destination
@@ -784,15 +785,15 @@ void StrongPHIElimination::ScheduleCopies(MachineBasicBlock* MBB,
        InsertedPHIDests.begin(), E = InsertedPHIDests.end(); I != E; ++I) {
     if (RegHandled.insert(I->first).second) {
       LiveInterval& Int = LI.getOrCreateInterval(I->first);
-      MachineInstrIndex instrIdx = LI.getInstructionIndex(I->second);
-      if (Int.liveAt(LI.getDefIndex(instrIdx)))
-        Int.removeRange(LI.getDefIndex(instrIdx),
-                        LI.getNextSlot(LI.getMBBEndIdx(I->second->getParent())),
+      SlotIndex instrIdx = LI.getInstructionIndex(I->second);
+      if (Int.liveAt(instrIdx.getDefIndex()))
+        Int.removeRange(instrIdx.getDefIndex(),
+                        LI.getMBBEndIdx(I->second->getParent()).getNextSlot(),
                         true);
       
       LiveRange R = LI.addLiveRangeToEndOfBlock(I->first, I->second);
       R.valno->setCopy(I->second);
-      R.valno->def = LI.getDefIndex(LI.getInstructionIndex(I->second));
+      R.valno->def = LI.getInstructionIndex(I->second).getDefIndex();
     }
   }
 }
@@ -817,8 +818,8 @@ void StrongPHIElimination::InsertCopies(MachineDomTreeNode* MDTN,
           Stacks[I->getOperand(i).getReg()].size()) {
         // Remove the live range for the old vreg.
         LiveInterval& OldInt = LI.getInterval(I->getOperand(i).getReg());
-        LiveInterval::iterator OldLR = OldInt.FindLiveRangeContaining(
-                  LI.getUseIndex(LI.getInstructionIndex(I)));
+        LiveInterval::iterator OldLR =
+          OldInt.FindLiveRangeContaining(LI.getInstructionIndex(I).getUseIndex());
         if (OldLR != OldInt.end())
           OldInt.removeRange(*OldLR, true);
         
@@ -830,11 +831,10 @@ void StrongPHIElimination::InsertCopies(MachineDomTreeNode* MDTN,
         VNInfo* FirstVN = *Int.vni_begin();
         FirstVN->setHasPHIKill(false);
         if (I->getOperand(i).isKill())
-          FirstVN->addKill(
-                 LI.getUseIndex(LI.getInstructionIndex(I)));
+          FirstVN->addKill(LI.getInstructionIndex(I).getUseIndex());
         
         LiveRange LR (LI.getMBBStartIdx(I->getParent()),
-                      LI.getNextSlot(LI.getUseIndex(LI.getInstructionIndex(I))),
+                      LI.getInstructionIndex(I).getUseIndex().getNextSlot(),
                       FirstVN);
         
         Int.addRange(LR);
@@ -863,14 +863,14 @@ bool StrongPHIElimination::mergeLiveIntervals(unsigned primary,
   LiveInterval& LHS = LI.getOrCreateInterval(primary);
   LiveInterval& RHS = LI.getOrCreateInterval(secondary);
   
-  LI.computeNumbering();
+  LI.renumber();
   
   DenseMap<VNInfo*, VNInfo*> VNMap;
   for (LiveInterval::iterator I = RHS.begin(), E = RHS.end(); I != E; ++I) {
     LiveRange R = *I;
  
-    MachineInstrIndex Start = R.start;
-    MachineInstrIndex End = R.end;
+    SlotIndex Start = R.start;
+    SlotIndex End = R.end;
     if (LHS.getLiveRangeContaining(Start))
       return false;
     
@@ -964,19 +964,19 @@ bool StrongPHIElimination::runOnMachineFunction(MachineFunction &Fn) {
           TII->copyRegToReg(*SI->second, SI->second->getFirstTerminator(),
                             I->first, SI->first, RC, RC);
           
-          LI.computeNumbering();
+          LI.renumber();
           
           LiveInterval& Int = LI.getOrCreateInterval(I->first);
-          MachineInstrIndex instrIdx =
+          SlotIndex instrIdx =
                      LI.getInstructionIndex(--SI->second->getFirstTerminator());
-          if (Int.liveAt(LI.getDefIndex(instrIdx)))
-            Int.removeRange(LI.getDefIndex(instrIdx),
-                            LI.getNextSlot(LI.getMBBEndIdx(SI->second)), true);
+          if (Int.liveAt(instrIdx.getDefIndex()))
+            Int.removeRange(instrIdx.getDefIndex(),
+                            LI.getMBBEndIdx(SI->second).getNextSlot(), true);
 
           LiveRange R = LI.addLiveRangeToEndOfBlock(I->first,
                                             --SI->second->getFirstTerminator());
           R.valno->setCopy(--SI->second->getFirstTerminator());
-          R.valno->def = LI.getDefIndex(instrIdx);
+          R.valno->def = instrIdx.getDefIndex();
           
           DEBUG(errs() << "Renaming failed: " << SI->first << " -> "
                        << I->first << "\n");
@@ -1011,7 +1011,7 @@ bool StrongPHIElimination::runOnMachineFunction(MachineFunction &Fn) {
       if (PI.containsOneValue()) {
         LI.removeInterval(DestReg);
       } else {
-        MachineInstrIndex idx = LI.getDefIndex(LI.getInstructionIndex(PInstr));
+        SlotIndex idx = LI.getInstructionIndex(PInstr).getDefIndex();
         PI.removeRange(*PI.getLiveRangeContaining(idx), true);
       }
     } else {
@@ -1025,7 +1025,7 @@ bool StrongPHIElimination::runOnMachineFunction(MachineFunction &Fn) {
         LiveInterval& InputI = LI.getInterval(reg);
         if (MBB != PInstr->getParent() &&
             InputI.liveAt(LI.getMBBStartIdx(PInstr->getParent())) &&
-            InputI.expiredAt(LI.getNextIndex(LI.getInstructionIndex(PInstr))))
+            InputI.expiredAt(LI.getInstructionIndex(PInstr).getNextIndex()))
           InputI.removeRange(LI.getMBBStartIdx(PInstr->getParent()),
                              LI.getInstructionIndex(PInstr),
                              true);
@@ -1033,7 +1033,7 @@ bool StrongPHIElimination::runOnMachineFunction(MachineFunction &Fn) {
       
       // If the PHI is not dead, then the valno defined by the PHI
       // now has an unknown def.
-      MachineInstrIndex idx = LI.getDefIndex(LI.getInstructionIndex(PInstr));
+      SlotIndex idx = LI.getInstructionIndex(PInstr).getDefIndex();
       const LiveRange* PLR = PI.getLiveRangeContaining(idx);
       PLR->valno->setIsPHIDef(true);
       LiveRange R (LI.getMBBStartIdx(PInstr->getParent()),
@@ -1045,7 +1045,7 @@ bool StrongPHIElimination::runOnMachineFunction(MachineFunction &Fn) {
     PInstr->eraseFromParent();
   }
   
-  LI.computeNumbering();
+  LI.renumber();
   
   return true;
 }
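
Aside on the API shift visible in the StrongPHIElimination hunks above: queries such as getDefIndex and getNextSlot move off LiveIntervals and onto the index type itself. A minimal sketch of that design, using a hypothetical ToySlotIndex rather than the real llvm::SlotIndex, might look like:

    // Hypothetical sketch, not the real llvm::SlotIndex: each instruction
    // owns a group of slots (load/use/def/store), so def/use queries become
    // methods on the index value instead of calls through LiveIntervals.
    class ToySlotIndex {
      unsigned Index;                  // low two bits select the slot
      enum Slot { LOAD = 0, USE = 1, DEF = 2, STORE = 3 };
    public:
      explicit ToySlotIndex(unsigned I = 0) : Index(I) {}
      ToySlotIndex getUseIndex() const {
        return ToySlotIndex((Index & ~3u) | USE);
      }
      ToySlotIndex getDefIndex() const {
        return ToySlotIndex((Index & ~3u) | DEF);
      }
      ToySlotIndex getNextSlot() const { return ToySlotIndex(Index + 1); }
      bool operator<(const ToySlotIndex &O) const { return Index < O.Index; }
    };
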
diff --git a/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp b/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp
new file mode 100644
index 0000000..12610b0
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp
@@ -0,0 +1,250 @@
+//===-- TailDuplication.cpp - Duplicate blocks into predecessors' tails ---===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This pass duplicates basic blocks ending in unconditional branches into
+// the tails of their predecessors.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "tailduplication"
+#include "llvm/Function.h"
+#include "llvm/CodeGen/Passes.h"
+#include "llvm/CodeGen/MachineModuleInfo.h"
+#include "llvm/CodeGen/MachineFunctionPass.h"
+#include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/Support/CommandLine.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/raw_ostream.h"
+#include "llvm/ADT/SmallSet.h"
+#include "llvm/ADT/SetVector.h"
+#include "llvm/ADT/Statistic.h"
+using namespace llvm;
+
+STATISTIC(NumTailDups  , "Number of tail duplicated blocks");
+STATISTIC(NumInstrDups , "Additional instructions due to tail duplication");
+STATISTIC(NumDeadBlocks, "Number of dead blocks removed");
+
+// Heuristic for tail duplication.
+static cl::opt<unsigned>
+TailDuplicateSize("tail-dup-size",
+                  cl::desc("Maximum instructions to consider tail duplicating"),
+                  cl::init(2), cl::Hidden);
+
+namespace {
+  /// TailDuplicatePass - Perform tail duplication.
+  class TailDuplicatePass : public MachineFunctionPass {
+    const TargetInstrInfo *TII;
+    MachineModuleInfo *MMI;
+
+  public:
+    static char ID;
+    explicit TailDuplicatePass() : MachineFunctionPass(&ID) {}
+
+    virtual bool runOnMachineFunction(MachineFunction &MF);
+    virtual const char *getPassName() const { return "Tail Duplication"; }
+
+  private:
+    bool TailDuplicateBlocks(MachineFunction &MF);
+    bool TailDuplicate(MachineBasicBlock *TailBB, MachineFunction &MF);
+    void RemoveDeadBlock(MachineBasicBlock *MBB);
+  };
+
+  char TailDuplicatePass::ID = 0;
+}
+
+FunctionPass *llvm::createTailDuplicatePass() {
+  return new TailDuplicatePass();
+}
+
+bool TailDuplicatePass::runOnMachineFunction(MachineFunction &MF) {
+  TII = MF.getTarget().getInstrInfo();
+  MMI = getAnalysisIfAvailable<MachineModuleInfo>();
+
+  bool MadeChange = false;
+  bool MadeChangeThisIteration = true;
+  while (MadeChangeThisIteration) {
+    MadeChangeThisIteration = false;
+    MadeChangeThisIteration |= TailDuplicateBlocks(MF);
+    MadeChange |= MadeChangeThisIteration;
+  }
+
+  return MadeChange;
+}
+
+/// TailDuplicateBlocks - Look for small blocks that are unconditionally
+/// branched to and do not fall through. Tail-duplicate their instructions
+/// into their predecessors to eliminate (dynamic) branches.
+bool TailDuplicatePass::TailDuplicateBlocks(MachineFunction &MF) {
+  bool MadeChange = false;
+
+  for (MachineFunction::iterator I = ++MF.begin(), E = MF.end(); I != E; ) {
+    MachineBasicBlock *MBB = I++;
+
+    // Only duplicate blocks that end with unconditional branches.
+    if (MBB->canFallThrough())
+      continue;
+
+    MadeChange |= TailDuplicate(MBB, MF);
+
+    // If it is dead, remove it.
+    if (MBB->pred_empty()) {
+      NumInstrDups -= MBB->size();
+      RemoveDeadBlock(MBB);
+      MadeChange = true;
+      ++NumDeadBlocks;
+    }
+  }
+  return MadeChange;
+}
+
+/// TailDuplicate - If it is profitable, duplicate TailBB's contents in each
+/// of its predecessors.
+bool TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB,
+                                        MachineFunction &MF) {
+  // Don't try to tail-duplicate single-block loops.
+  if (TailBB->isSuccessor(TailBB))
+    return false;
+
+  // Set the limit on the number of instructions to duplicate, with a default
+  // of one less than the tail-merge threshold. When optimizing for size,
+  // duplicate only one, because one branch instruction can be eliminated to
+  // compensate for the duplication.
+  unsigned MaxDuplicateCount;
+  if (MF.getFunction()->hasFnAttr(Attribute::OptimizeForSize))
+    MaxDuplicateCount = 1;
+  else if (TII->isProfitableToDuplicateIndirectBranch() &&
+           !TailBB->empty() && TailBB->back().getDesc().isIndirectBranch())
+    // If the target has hardware branch prediction that can handle indirect
+    // branches, duplicating them can often make them predictable when there
+    // are common paths through the code.  The limit needs to be high enough
+    // to allow undoing the effects of tail merging.
+    MaxDuplicateCount = 20;
+  else
+    MaxDuplicateCount = TailDuplicateSize;
+
+  // Check the instructions in the block to determine whether tail-duplication
+  // is invalid or unlikely to be profitable.
+  unsigned i = 0;
+  bool HasCall = false;
+  for (MachineBasicBlock::iterator I = TailBB->begin();
+       I != TailBB->end(); ++I, ++i) {
+    // Non-duplicable things shouldn't be tail-duplicated.
+    if (I->getDesc().isNotDuplicable()) return false;
+    // Don't duplicate more than the threshold.
+    if (i == MaxDuplicateCount) return false;
+    // Remember if we saw a call.
+    if (I->getDesc().isCall()) HasCall = true;
+  }
+  // Heuristically, don't tail-duplicate calls if it would expand code size,
+  // as it's less likely to be worth the extra cost.
+  if (i > 1 && HasCall)
+    return false;
+
+  // Iterate through all the unique predecessors and tail-duplicate this
+  // block into them, if possible. Copying the list ahead of time also
+  // avoids trouble with the predecessor list reallocating.
+  bool Changed = false;
+  SmallSetVector<MachineBasicBlock *, 8> Preds(TailBB->pred_begin(),
+                                               TailBB->pred_end());
+  for (SmallSetVector<MachineBasicBlock *, 8>::iterator PI = Preds.begin(),
+       PE = Preds.end(); PI != PE; ++PI) {
+    MachineBasicBlock *PredBB = *PI;
+
+    assert(TailBB != PredBB &&
+           "Single-block loop should have been rejected earlier!");
+    if (PredBB->succ_size() > 1) continue;
+
+    MachineBasicBlock *PredTBB, *PredFBB;
+    SmallVector<MachineOperand, 4> PredCond;
+    if (TII->AnalyzeBranch(*PredBB, PredTBB, PredFBB, PredCond, true))
+      continue;
+    if (!PredCond.empty())
+      continue;
+    // EH edges are ignored by AnalyzeBranch.
+    if (PredBB->succ_size() != 1)
+      continue;
+    // Don't duplicate into a fall-through predecessor (at least for now).
+    if (PredBB->isLayoutSuccessor(TailBB) && PredBB->canFallThrough())
+      continue;
+
+    DEBUG(errs() << "\nTail-duplicating into PredBB: " << *PredBB
+                 << "From Succ: " << *TailBB);
+
+    // Remove PredBB's unconditional branch.
+    TII->RemoveBranch(*PredBB);
+    // Clone the contents of TailBB into PredBB.
+    for (MachineBasicBlock::iterator I = TailBB->begin(), E = TailBB->end();
+         I != E; ++I) {
+      MachineInstr *NewMI = MF.CloneMachineInstr(I);
+      PredBB->insert(PredBB->end(), NewMI);
+    }
+    NumInstrDups += TailBB->size() - 1; // subtract one for removed branch
+
+    // Update the CFG.
+    PredBB->removeSuccessor(PredBB->succ_begin());
+    assert(PredBB->succ_empty() &&
+           "TailDuplicate called on block with multiple successors!");
+    for (MachineBasicBlock::succ_iterator I = TailBB->succ_begin(),
+         E = TailBB->succ_end(); I != E; ++I)
+       PredBB->addSuccessor(*I);
+
+    Changed = true;
+    ++NumTailDups;
+  }
+
+  // If TailBB was duplicated into all its predecessors except for the prior
+  // block, which falls through unconditionally, move the contents of this
+  // block into the prior block.
+  MachineBasicBlock &PrevBB = *prior(MachineFunction::iterator(TailBB));
+  MachineBasicBlock *PriorTBB = 0, *PriorFBB = 0;
+  SmallVector<MachineOperand, 4> PriorCond;
+  bool PriorUnAnalyzable =
+    TII->AnalyzeBranch(PrevBB, PriorTBB, PriorFBB, PriorCond, true);
+  // This has to check PrevBB->succ_size() because EH edges are ignored by
+  // AnalyzeBranch.
+  if (!PriorUnAnalyzable && PriorCond.empty() && !PriorTBB &&
+      TailBB->pred_size() == 1 && PrevBB.succ_size() == 1 &&
+      !TailBB->hasAddressTaken()) {
+    DEBUG(errs() << "\nMerging into block: " << PrevBB
+          << "From MBB: " << *TailBB);
+    PrevBB.splice(PrevBB.end(), TailBB, TailBB->begin(), TailBB->end());
+    PrevBB.removeSuccessor(PrevBB.succ_begin());
+    assert(PrevBB.succ_empty());
+    PrevBB.transferSuccessors(TailBB);
+    Changed = true;
+  }
+
+  return Changed;
+}
+
+/// RemoveDeadBlock - Remove the specified dead machine basic block from the
+/// function, updating the CFG.
+void TailDuplicatePass::RemoveDeadBlock(MachineBasicBlock *MBB) {
+  assert(MBB->pred_empty() && "MBB must be dead!");
+  DEBUG(errs() << "\nRemoving MBB: " << *MBB);
+
+  // Remove all successors.
+  while (!MBB->succ_empty())
+    MBB->removeSuccessor(MBB->succ_end()-1);
+
+  // If there are any labels in the basic block, unregister them from
+  // MachineModuleInfo.
+  if (MMI && !MBB->empty()) {
+    for (MachineBasicBlock::iterator I = MBB->begin(), E = MBB->end();
+         I != E; ++I) {
+      if (I->isLabel())
+        // The label ID # is always operand #0, an immediate.
+        MMI->InvalidateLabel(I->getOperand(0).getImm());
+    }
+  }
+
+  // Remove the block.
+  MBB->eraseFromParent();
+}
+
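
For orientation, a toy model (hypothetical types, not the LLVM API) of the transformation this new pass performs: a predecessor ending in an unconditional branch to the tail block receives a copy of the tail's instructions and inherits the tail's successors.

    #include <algorithm>
    #include <string>
    #include <vector>

    struct ToyBlock {
      std::string Name;
      std::vector<std::string> Insts;   // instruction mnemonics
      std::vector<ToyBlock *> Succs;
    };

    // Assumes Pred's last instruction is the unconditional branch to Tail.
    static void toyTailDuplicate(ToyBlock &Pred, ToyBlock &Tail) {
      // Drop the unconditional branch into Tail...
      Pred.Insts.pop_back();
      // ...clone Tail's body into Pred...
      Pred.Insts.insert(Pred.Insts.end(), Tail.Insts.begin(), Tail.Insts.end());
      // ...and rewire the CFG edges: Pred now jumps where Tail jumped.
      Pred.Succs.erase(std::remove(Pred.Succs.begin(), Pred.Succs.end(), &Tail),
                       Pred.Succs.end());
      Pred.Succs.insert(Pred.Succs.end(), Tail.Succs.begin(), Tail.Succs.end());
    }
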
diff --git a/libclamav/c++/llvm/lib/CodeGen/TargetInstrInfoImpl.cpp b/libclamav/c++/llvm/lib/CodeGen/TargetInstrInfoImpl.cpp
index ab67cd2..102e2a3 100644
--- a/libclamav/c++/llvm/lib/CodeGen/TargetInstrInfoImpl.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/TargetInstrInfoImpl.cpp
@@ -13,11 +13,14 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/Target/TargetMachine.h"
+#include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/CodeGen/MachineFrameInfo.h"
 #include "llvm/CodeGen/MachineInstr.h"
 #include "llvm/CodeGen/MachineInstrBuilder.h"
 #include "llvm/CodeGen/MachineMemOperand.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
 #include "llvm/CodeGen/PseudoSourceValue.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
@@ -132,32 +135,49 @@ void TargetInstrInfoImpl::reMaterialize(MachineBasicBlock &MBB,
                                         MachineBasicBlock::iterator I,
                                         unsigned DestReg,
                                         unsigned SubIdx,
-                                        const MachineInstr *Orig) const {
+                                        const MachineInstr *Orig,
+                                        const TargetRegisterInfo *TRI) const {
   MachineInstr *MI = MBB.getParent()->CloneMachineInstr(Orig);
   MachineOperand &MO = MI->getOperand(0);
-  MO.setReg(DestReg);
-  MO.setSubReg(SubIdx);
+  if (TargetRegisterInfo::isVirtualRegister(DestReg)) {
+    MO.setReg(DestReg);
+    MO.setSubReg(SubIdx);
+  } else if (SubIdx) {
+    MO.setReg(TRI->getSubReg(DestReg, SubIdx));
+  } else {
+    MO.setReg(DestReg);
+  }
   MBB.insert(I, MI);
 }
 
-bool TargetInstrInfoImpl::isDeadInstruction(const MachineInstr *MI) const {
-  const TargetInstrDesc &TID = MI->getDesc();
-  if (TID.mayLoad() || TID.mayStore() || TID.isCall() || TID.isTerminator() ||
-      TID.isCall() || TID.isBarrier() || TID.isReturn() ||
-      TID.hasUnmodeledSideEffects())
+bool
+TargetInstrInfoImpl::isIdentical(const MachineInstr *MI,
+                                 const MachineInstr *Other,
+                                 const MachineRegisterInfo *MRI) const {
+  if (MI->getOpcode() != Other->getOpcode() ||
+      MI->getNumOperands() != Other->getNumOperands())
     return false;
+
   for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
     const MachineOperand &MO = MI->getOperand(i);
-    if (!MO.isReg() || !MO.getReg())
+    const MachineOperand &OMO = Other->getOperand(i);
+    if (MO.isReg() && MO.isDef()) {
+      assert(OMO.isReg() && OMO.isDef());
+      unsigned Reg = MO.getReg();
+      if (TargetRegisterInfo::isPhysicalRegister(Reg)) {
+        if (Reg != OMO.getReg())
+          return false;
+      } else if (MRI->getRegClass(MO.getReg()) !=
+                 MRI->getRegClass(OMO.getReg()))
+        return false;
+
       continue;
-    if (MO.isDef() && !MO.isDead())
-      return false;
-    if (MO.isUse() && MO.isKill())
-      // FIXME: We can't remove kill markers or else the scavenger will assert.
-      // An alternative is to add a ADD pseudo instruction to replace kill
-      // markers.
+    }
+
+    if (!MO.isIdenticalTo(OMO))
       return false;
   }
+
   return true;
 }
 
@@ -238,3 +258,88 @@ TargetInstrInfo::foldMemoryOperand(MachineFunction &MF,
 
   return NewMI;
 }
+
+bool
+TargetInstrInfo::isReallyTriviallyReMaterializableGeneric(const MachineInstr *
+                                                            MI,
+                                                          AliasAnalysis *
+                                                            AA) const {
+  const MachineFunction &MF = *MI->getParent()->getParent();
+  const MachineRegisterInfo &MRI = MF.getRegInfo();
+  const TargetMachine &TM = MF.getTarget();
+  const TargetInstrInfo &TII = *TM.getInstrInfo();
+  const TargetRegisterInfo &TRI = *TM.getRegisterInfo();
+
+  // A load from a fixed stack slot can be rematerialized. This may be
+  // redundant with subsequent checks, but it's target-independent,
+  // simple, and a common case.
+  int FrameIdx = 0;
+  if (TII.isLoadFromStackSlot(MI, FrameIdx) &&
+      MF.getFrameInfo()->isImmutableObjectIndex(FrameIdx))
+    return true;
+
+  const TargetInstrDesc &TID = MI->getDesc();
+
+  // Avoid instructions obviously unsafe for remat.
+  if (TID.hasUnmodeledSideEffects() || TID.isNotDuplicable() ||
+      TID.mayStore())
+    return false;
+
+  // Avoid instructions which load from potentially varying memory.
+  if (TID.mayLoad() && !MI->isInvariantLoad(AA))
+    return false;
+
+  // If any of the registers accessed are non-constant, conservatively assume
+  // the instruction is not rematerializable.
+  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+    const MachineOperand &MO = MI->getOperand(i);
+    if (!MO.isReg()) continue;
+    unsigned Reg = MO.getReg();
+    if (Reg == 0)
+      continue;
+
+    // Check for a well-behaved physical register.
+    if (TargetRegisterInfo::isPhysicalRegister(Reg)) {
+      if (MO.isUse()) {
+        // If the physreg has no defs anywhere, it's just an ambient register
+        // and we can freely move its uses. Alternatively, if it's allocatable,
+        // it could get allocated to something with a def during allocation.
+        if (!MRI.def_empty(Reg))
+          return false;
+        BitVector AllocatableRegs = TRI.getAllocatableSet(MF, 0);
+        if (AllocatableRegs.test(Reg))
+          return false;
+        // Check for a def among the register's aliases too.
+        for (const unsigned *Alias = TRI.getAliasSet(Reg); *Alias; ++Alias) {
+          unsigned AliasReg = *Alias;
+          if (!MRI.def_empty(AliasReg))
+            return false;
+          if (AllocatableRegs.test(AliasReg))
+            return false;
+        }
+      } else {
+        // A physreg def. We can't remat it.
+        return false;
+      }
+      continue;
+    }
+
+    // Only allow one virtual-register def, and that in the first operand.
+    if (MO.isDef() != (i == 0))
+      return false;
+
+    // For the def, it should be the only def of that register.
+    if (MO.isDef() && (next(MRI.def_begin(Reg)) != MRI.def_end() ||
+                       MRI.isLiveIn(Reg)))
+      return false;
+
+    // Don't allow any virtual-register uses. Rematting an instruction with
+    // virtual register uses would lengthen the live ranges of the uses, which
+    // is not necessarily a good idea, certainly not "trivial".
+    if (MO.isUse())
+      return false;
+  }
+
+  // Everything checked out.
+  return true;
+}
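
A condensed sketch of the remat policy encoded above, with hypothetical ToyInstr/ToyOperand stand-ins for the real MachineInstr machinery (the physreg alias/allocatable checks and single-def check are elided):

    #include <cstddef>
    #include <vector>

    struct ToyOperand { bool IsReg, IsDef, IsPhys; };
    struct ToyInstr {
      bool HasSideEffects, MayStore, MayLoad, InvariantLoad;
      std::vector<ToyOperand> Ops;
    };

    static bool toyIsTriviallyRematerializable(const ToyInstr &MI) {
      if (MI.HasSideEffects || MI.MayStore) return false;
      if (MI.MayLoad && !MI.InvariantLoad) return false;
      for (std::size_t i = 0; i != MI.Ops.size(); ++i) {
        const ToyOperand &MO = MI.Ops[i];
        if (!MO.IsReg) continue;
        if (MO.IsPhys) {
          if (MO.IsDef) return false;        // can't remat a physreg def
          continue;                          // (alias checks omitted here)
        }
        if (MO.IsDef != (i == 0)) return false; // one vreg def, operand 0 only
        if (!MO.IsDef) return false;            // no vreg uses: not "trivial"
      }
      return true;
    }
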
diff --git a/libclamav/c++/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp b/libclamav/c++/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp
index a343fa4..5fa690b 100644
--- a/libclamav/c++/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp
@@ -34,11 +34,11 @@
 #include "llvm/CodeGen/MachineFunctionPass.h"
 #include "llvm/CodeGen/MachineInstr.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/Analysis/AliasAnalysis.h"
 #include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetOptions.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/ADT/BitVector.h"
 #include "llvm/ADT/DenseMap.h"
@@ -56,12 +56,12 @@ STATISTIC(NumReMats,           "Number of instructions re-materialized");
 STATISTIC(NumDeletes,          "Number of dead instructions deleted");
 
 namespace {
-  class VISIBILITY_HIDDEN TwoAddressInstructionPass
-    : public MachineFunctionPass {
+  class TwoAddressInstructionPass : public MachineFunctionPass {
     const TargetInstrInfo *TII;
     const TargetRegisterInfo *TRI;
     MachineRegisterInfo *MRI;
     LiveVariables *LV;
+    AliasAnalysis *AA;
 
     // DistanceMap - Keeps track of the distance of an MI from the start of
     // the current basic block.
@@ -112,8 +112,7 @@ namespace {
                                MachineBasicBlock *MBB, unsigned Dist);
     bool DeleteUnusedInstr(MachineBasicBlock::iterator &mi,
                            MachineBasicBlock::iterator &nmi,
-                           MachineFunction::iterator &mbbi,
-                           unsigned regB, unsigned regBIdx, unsigned Dist);
+                           MachineFunction::iterator &mbbi, unsigned Dist);
 
     bool TryInstructionTransform(MachineBasicBlock::iterator &mi,
                                  MachineBasicBlock::iterator &nmi,
@@ -130,6 +129,7 @@ namespace {
 
     virtual void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.setPreservesCFG();
+      AU.addRequired<AliasAnalysis>();
       AU.addPreserved<LiveVariables>();
       AU.addPreservedID(MachineLoopInfoID);
       AU.addPreservedID(MachineDominatorsID);
@@ -160,7 +160,7 @@ bool TwoAddressInstructionPass::Sink3AddrInstruction(MachineBasicBlock *MBB,
                                            MachineBasicBlock::iterator OldPos) {
   // Check if it's safe to move this instruction.
   bool SeenStore = true; // Be conservative.
-  if (!MI->isSafeToMove(TII, SeenStore))
+  if (!MI->isSafeToMove(TII, SeenStore, AA))
     return false;
 
   unsigned DefReg = 0;
@@ -729,7 +729,7 @@ void TwoAddressInstructionPass::ProcessCopy(MachineInstr *MI,
 
 /// isSafeToDelete - If the specified instruction does not produce any side
 /// effects and all of its defs are dead, then it's safe to delete.
-static bool isSafeToDelete(MachineInstr *MI, unsigned Reg,
+static bool isSafeToDelete(MachineInstr *MI,
                            const TargetInstrInfo *TII,
                            SmallVector<unsigned, 4> &Kills) {
   const TargetInstrDesc &TID = MI->getDesc();
@@ -744,10 +744,9 @@ static bool isSafeToDelete(MachineInstr *MI, unsigned Reg,
       continue;
     if (MO.isDef() && !MO.isDead())
       return false;
-    if (MO.isUse() && MO.getReg() != Reg && MO.isKill())
+    if (MO.isUse() && MO.isKill())
       Kills.push_back(MO.getReg());
   }
-
   return true;
 }
 
@@ -782,11 +781,10 @@ bool
 TwoAddressInstructionPass::DeleteUnusedInstr(MachineBasicBlock::iterator &mi,
                                              MachineBasicBlock::iterator &nmi,
                                              MachineFunction::iterator &mbbi,
-                                             unsigned regB, unsigned regBIdx,
                                              unsigned Dist) {
   // Check if the instruction has no side effects and if all its defs are dead.
   SmallVector<unsigned, 4> Kills;
-  if (!isSafeToDelete(mi, regB, TII, Kills))
+  if (!isSafeToDelete(mi, TII, Kills))
     return false;
 
   // If this instruction kills some virtual registers, we need to
@@ -809,10 +807,6 @@ TwoAddressInstructionPass::DeleteUnusedInstr(MachineBasicBlock::iterator &mi,
           LV->addVirtualRegisterKilled(Kill, NewKill);
       }
     }
-
-    // If regB was marked as a kill, update its Kills list.
-    if (mi->getOperand(regBIdx).isKill())
-      LV->removeVirtualRegisterKilled(regB, mi);
   }
 
   mbbi->erase(mi); // Nuke the old inst.
@@ -841,7 +835,7 @@ TryInstructionTransform(MachineBasicBlock::iterator &mi,
   // it so it doesn't clobber regB.
   bool regBKilled = isKilled(*mi, regB, MRI, TII);
   if (!regBKilled && mi->getOperand(DstIdx).isDead() &&
-      DeleteUnusedInstr(mi, nmi, mbbi, regB, SrcIdx, Dist)) {
+      DeleteUnusedInstr(mi, nmi, mbbi, Dist)) {
     ++NumDeletes;
     return true; // Done with this instruction.
   }
@@ -903,6 +897,7 @@ bool TwoAddressInstructionPass::runOnMachineFunction(MachineFunction &MF) {
   TII = TM.getInstrInfo();
   TRI = TM.getRegisterInfo();
   LV = getAnalysisIfAvailable<LiveVariables>();
+  AA = &getAnalysis<AliasAnalysis>();
 
   bool MadeChange = false;
 
@@ -1027,11 +1022,11 @@ bool TwoAddressInstructionPass::runOnMachineFunction(MachineFunction &MF) {
           // copying it.
           if (DefMI &&
               DefMI->getDesc().isAsCheapAsAMove() &&
-              DefMI->isSafeToReMat(TII, regB) &&
+              DefMI->isSafeToReMat(TII, regB, AA) &&
               isProfitableToReMat(regB, rc, mi, DefMI, mbbi, Dist)){
             DEBUG(errs() << "2addr: REMATTING : " << *DefMI << "\n");
             unsigned regASubIdx = mi->getOperand(DstIdx).getSubReg();
-            TII->reMaterialize(*mbbi, mi, regA, regASubIdx, DefMI);
+            TII->reMaterialize(*mbbi, mi, regA, regASubIdx, DefMI, TRI);
             ReMatRegs.set(regB);
             ++NumReMats;
           } else {
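
Schematically, the hunk above chooses between inserting a copy and re-executing the defining instruction during two-address conversion. A hedged sketch of that decision, with hypothetical fields standing in for the real queries:

    struct ToyDef {
      bool AsCheapAsAMove;   // e.g. an immediate move
      bool SafeToReMat;      // no side effects, operands still available
    };

    enum ToyAction { INSERT_COPY, REMATERIALIZE };

    static ToyAction toyChooseCopyOrRemat(const ToyDef &D, bool Profitable) {
      if (D.AsCheapAsAMove && D.SafeToReMat && Profitable)
        return REMATERIALIZE;  // clone the def at the use point
      return INSERT_COPY;      // fall back to a register copy
    }
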
diff --git a/libclamav/c++/llvm/lib/CodeGen/UnreachableBlockElim.cpp b/libclamav/c++/llvm/lib/CodeGen/UnreachableBlockElim.cpp
index e7c3412..6ab5db2 100644
--- a/libclamav/c++/llvm/lib/CodeGen/UnreachableBlockElim.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/UnreachableBlockElim.cpp
@@ -33,14 +33,13 @@
 #include "llvm/CodeGen/MachineLoopInfo.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
 #include "llvm/Support/CFG.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/ADT/DepthFirstIterator.h"
 #include "llvm/ADT/SmallPtrSet.h"
 using namespace llvm;
 
 namespace {
-  class VISIBILITY_HIDDEN UnreachableBlockElim : public FunctionPass {
+  class UnreachableBlockElim : public FunctionPass {
     virtual bool runOnFunction(Function &F);
   public:
     static char ID; // Pass identification, replacement for typeid
@@ -95,8 +94,7 @@ bool UnreachableBlockElim::runOnFunction(Function &F) {
 
 
 namespace {
-  class VISIBILITY_HIDDEN UnreachableMachineBlockElim :
-        public MachineFunctionPass {
+  class UnreachableMachineBlockElim : public MachineFunctionPass {
     virtual bool runOnMachineFunction(MachineFunction &F);
     virtual void getAnalysisUsage(AnalysisUsage &AU) const;
     MachineModuleInfo *MMI;
diff --git a/libclamav/c++/llvm/lib/CodeGen/VirtRegMap.cpp b/libclamav/c++/llvm/lib/CodeGen/VirtRegMap.cpp
index c78f35b..c8c5d86 100644
--- a/libclamav/c++/llvm/lib/CodeGen/VirtRegMap.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/VirtRegMap.cpp
@@ -56,7 +56,7 @@ bool VirtRegMap::runOnMachineFunction(MachineFunction &mf) {
   TII = mf.getTarget().getInstrInfo();
   TRI = mf.getTarget().getRegisterInfo();
   MF = &mf;
-  
+
   ReMatId = MAX_STACK_SLOT+1;
   LowSpillSlot = HighSpillSlot = NO_STACK_SLOT;
   
@@ -117,8 +117,8 @@ int VirtRegMap::assignVirt2StackSlot(unsigned virtReg) {
   assert(Virt2StackSlotMap[virtReg] == NO_STACK_SLOT &&
          "attempt to assign stack slot to already spilled register");
   const TargetRegisterClass* RC = MF->getRegInfo().getRegClass(virtReg);
-  int SS = MF->getFrameInfo()->CreateStackObject(RC->getSize(),
-                                                RC->getAlignment());
+  int SS = MF->getFrameInfo()->CreateSpillStackObject(RC->getSize(),
+                                                 RC->getAlignment());
   if (LowSpillSlot == NO_STACK_SLOT)
     LowSpillSlot = SS;
   if (HighSpillSlot == NO_STACK_SLOT || SS > HighSpillSlot)
@@ -161,8 +161,8 @@ int VirtRegMap::getEmergencySpillSlot(const TargetRegisterClass *RC) {
     EmergencySpillSlots.find(RC);
   if (I != EmergencySpillSlots.end())
     return I->second;
-  int SS = MF->getFrameInfo()->CreateStackObject(RC->getSize(),
-                                                RC->getAlignment());
+  int SS = MF->getFrameInfo()->CreateSpillStackObject(RC->getSize(),
+                                                 RC->getAlignment());
   if (LowSpillSlot == NO_STACK_SLOT)
     LowSpillSlot = SS;
   if (HighSpillSlot == NO_STACK_SLOT || SS > HighSpillSlot)
diff --git a/libclamav/c++/llvm/lib/CodeGen/VirtRegMap.h b/libclamav/c++/llvm/lib/CodeGen/VirtRegMap.h
index ca174d5..a5599f6 100644
--- a/libclamav/c++/llvm/lib/CodeGen/VirtRegMap.h
+++ b/libclamav/c++/llvm/lib/CodeGen/VirtRegMap.h
@@ -80,7 +80,7 @@ namespace llvm {
 
     /// Virt2SplitKillMap - This maps a split virtual register to its last
     /// use (kill) index.
-    IndexedMap<MachineInstrIndex> Virt2SplitKillMap;
+    IndexedMap<SlotIndex> Virt2SplitKillMap;
 
     /// ReMatMap - This is virtual register to re-materialized instruction
     /// mapping. Each virtual register whose definition is going to be
@@ -142,7 +142,7 @@ namespace llvm {
     VirtRegMap() : MachineFunctionPass(&ID), Virt2PhysMap(NO_PHYS_REG),
                    Virt2StackSlotMap(NO_STACK_SLOT), 
                    Virt2ReMatIdMap(NO_STACK_SLOT), Virt2SplitMap(0),
-                   Virt2SplitKillMap(MachineInstrIndex()), ReMatMap(NULL),
+                   Virt2SplitKillMap(SlotIndex()), ReMatMap(NULL),
                    ReMatId(MAX_STACK_SLOT+1),
                    LowSpillSlot(NO_STACK_SLOT), HighSpillSlot(NO_STACK_SLOT) { }
     virtual bool runOnMachineFunction(MachineFunction &MF);
@@ -266,17 +266,17 @@ namespace llvm {
     }
 
     /// @brief record the last use (kill) of a split virtual register.
-    void addKillPoint(unsigned virtReg, MachineInstrIndex index) {
+    void addKillPoint(unsigned virtReg, SlotIndex index) {
       Virt2SplitKillMap[virtReg] = index;
     }
 
-    MachineInstrIndex getKillPoint(unsigned virtReg) const {
+    SlotIndex getKillPoint(unsigned virtReg) const {
       return Virt2SplitKillMap[virtReg];
     }
 
     /// @brief remove the last use (kill) of a split virtual register.
     void removeKillPoint(unsigned virtReg) {
-      Virt2SplitKillMap[virtReg] = MachineInstrIndex();
+      Virt2SplitKillMap[virtReg] = SlotIndex();
     }
 
     /// @brief returns true if the specified MachineInstr is a spill point.
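
The kill-point accessors above rely on a default-constructed SlotIndex acting as the "nothing recorded" sentinel, which is why removeKillPoint() simply stores SlotIndex(). A small sketch of the same idea with hypothetical types:

    #include <map>

    struct ToyIndex {
      unsigned V;
      ToyIndex() : V(0) {}                 // invalid / sentinel value
      explicit ToyIndex(unsigned X) : V(X) {}
      bool isValid() const { return V != 0; }
    };

    class ToyKillMap {
      std::map<unsigned, ToyIndex> Map;    // vreg -> last-use index
    public:
      void addKillPoint(unsigned VReg, ToyIndex I) { Map[VReg] = I; }
      ToyIndex getKillPoint(unsigned VReg) const {
        std::map<unsigned, ToyIndex>::const_iterator It = Map.find(VReg);
        return It == Map.end() ? ToyIndex() : It->second;
      }
      void removeKillPoint(unsigned VReg) { Map[VReg] = ToyIndex(); }
    };
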
diff --git a/libclamav/c++/llvm/lib/CodeGen/VirtRegRewriter.cpp b/libclamav/c++/llvm/lib/CodeGen/VirtRegRewriter.cpp
index 670e1cb..10c8066 100644
--- a/libclamav/c++/llvm/lib/CodeGen/VirtRegRewriter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/VirtRegRewriter.cpp
@@ -13,7 +13,6 @@
 #include "llvm/CodeGen/MachineFrameInfo.h"
 #include "llvm/CodeGen/MachineInstrBuilder.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
@@ -66,7 +65,7 @@ namespace {
 /// This class is intended for use with the new spilling framework only. It
 /// rewrites vreg def/uses to use the assigned preg, but does not insert any
 /// spill code.
-struct VISIBILITY_HIDDEN TrivialRewriter : public VirtRegRewriter {
+struct TrivialRewriter : public VirtRegRewriter {
 
   bool runOnMachineFunction(MachineFunction &MF, VirtRegMap &VRM,
                             LiveIntervals* LIs) {
@@ -78,27 +77,38 @@ struct VISIBILITY_HIDDEN TrivialRewriter : public VirtRegRewriter {
     DEBUG(MF.dump());
 
     MachineRegisterInfo *mri = &MF.getRegInfo();
+    const TargetRegisterInfo *tri = MF.getTarget().getRegisterInfo();
 
     bool changed = false;
 
     for (LiveIntervals::iterator liItr = LIs->begin(), liEnd = LIs->end();
          liItr != liEnd; ++liItr) {
 
-      if (TargetRegisterInfo::isVirtualRegister(liItr->first)) {
-        if (VRM.hasPhys(liItr->first)) {
-          unsigned preg = VRM.getPhys(liItr->first);
-          mri->replaceRegWith(liItr->first, preg);
-          mri->setPhysRegUsed(preg);
-          changed = true;
-        }
+      const LiveInterval *li = liItr->second;
+      unsigned reg = li->reg;
+
+      if (TargetRegisterInfo::isPhysicalRegister(reg)) {
+        if (!li->empty())
+          mri->setPhysRegUsed(reg);
       }
       else {
-        if (!liItr->second->empty()) {
-          mri->setPhysRegUsed(liItr->first);
+        if (!VRM.hasPhys(reg))
+          continue;
+        unsigned pReg = VRM.getPhys(reg);
+        mri->setPhysRegUsed(pReg);
+        for (MachineRegisterInfo::reg_iterator regItr = mri->reg_begin(reg),
+             regEnd = mri->reg_end(); regItr != regEnd;) {
+          MachineOperand &mop = regItr.getOperand();
+          assert(mop.isReg() && mop.getReg() == reg && "reg_iterator broken?");
+          ++regItr;
+          unsigned subRegIdx = mop.getSubReg();
+          unsigned pRegOp = subRegIdx ? tri->getSubReg(pReg, subRegIdx) : pReg;
+          mop.setReg(pRegOp);
+          mop.setSubReg(0);
+          changed = true;
         }
       }
     }
-
     
     DEBUG(errs() << "**** Post Machine Instrs ****\n");
     DEBUG(MF.dump());
@@ -125,7 +135,7 @@ namespace {
 /// on a per-stack-slot / remat id basis as the low bit in the value of the
 /// SpillSlotsAvailable entries.  The predicate 'canClobberPhysReg()' checks
 /// this bit and addAvailable sets it if the value can safely be clobbered.
-class VISIBILITY_HIDDEN AvailableSpills {
+class AvailableSpills {
   const TargetRegisterInfo *TRI;
   const TargetInstrInfo *TII;
 
@@ -340,7 +350,7 @@ struct ReusedOp {
 
 /// ReuseInfo - This maintains a collection of ReuseOp's for each operand that
 /// is reused instead of reloaded.
-class VISIBILITY_HIDDEN ReuseInfo {
+class ReuseInfo {
   MachineInstr &MI;
   std::vector<ReusedOp> Reuses;
   BitVector PhysRegsClobbered;
@@ -484,19 +494,20 @@ static void InvalidateKills(MachineInstr &MI,
 }
 
 /// InvalidateRegDef - If the def operand of the specified def MI is now dead
-/// (since it's spill instruction is removed), mark it isDead. Also checks if
+/// (since its spill instruction is removed), mark it isDead. Also checks if
 /// the def MI has other definition operands that are not dead. Returns it by
 /// reference.
 static bool InvalidateRegDef(MachineBasicBlock::iterator I,
                              MachineInstr &NewDef, unsigned Reg,
-                             bool &HasLiveDef) {
+                             bool &HasLiveDef, 
+                             const TargetRegisterInfo *TRI) {
   // Due to remat, it's possible this reg isn't being reused. That is,
   // the def of this reg (by prev MI) is now dead.
   MachineInstr *DefMI = I;
   MachineOperand *DefOp = NULL;
   for (unsigned i = 0, e = DefMI->getNumOperands(); i != e; ++i) {
     MachineOperand &MO = DefMI->getOperand(i);
-    if (!MO.isReg() || !MO.isUse() || !MO.isKill() || MO.isUndef())
+    if (!MO.isReg() || !MO.isDef() || !MO.isKill() || MO.isUndef())
       continue;
     if (MO.getReg() == Reg)
       DefOp = &MO;
@@ -513,7 +524,8 @@ static bool InvalidateRegDef(MachineBasicBlock::iterator I,
     MachineInstr *NMI = I;
     for (unsigned j = 0, ee = NMI->getNumOperands(); j != ee; ++j) {
       MachineOperand &MO = NMI->getOperand(j);
-      if (!MO.isReg() || MO.getReg() != Reg)
+      if (!MO.isReg() || MO.getReg() == 0 ||
+          (MO.getReg() != Reg && !TRI->isSubRegister(Reg, MO.getReg())))
         continue;
       if (MO.isUse())
         FoundUse = true;
@@ -557,11 +569,30 @@ static void UpdateKills(MachineInstr &MI, const TargetRegisterInfo* TRI,
         KillOps[*SR] = NULL;
         RegKills.reset(*SR);
       }
-
-      if (!MI.isRegTiedToDefOperand(i))
-        // Unless it's a two-address operand, this is the new kill.
-        MO.setIsKill();
+    } else {
+      // Check for subreg kills as well.
+      // d4 = 
+      // store d4, fi#0
+      // ...
+      //    = s8<kill>
+      // ...
+      //    = d4  <avoiding reload>
+      for (const unsigned *SR = TRI->getSubRegisters(Reg); *SR; ++SR) {
+        unsigned SReg = *SR;
+        if (RegKills[SReg] && KillOps[SReg]->getParent() != &MI) {
+          KillOps[SReg]->setIsKill(false);
+          unsigned KReg = KillOps[SReg]->getReg();
+          KillOps[KReg] = NULL;
+          RegKills.reset(KReg);
+
+          for (const unsigned *SSR = TRI->getSubRegisters(KReg); *SSR; ++SSR) {
+            KillOps[*SSR] = NULL;
+            RegKills.reset(*SSR);
+          }
+        }
+      }
     }
+
     if (MO.isKill()) {
       RegKills.set(Reg);
       KillOps[Reg] = &MO;
@@ -574,7 +605,7 @@ static void UpdateKills(MachineInstr &MI, const TargetRegisterInfo* TRI,
 
   for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
     const MachineOperand &MO = MI.getOperand(i);
-    if (!MO.isReg() || !MO.isDef())
+    if (!MO.isReg() || !MO.getReg() || !MO.isDef())
       continue;
     unsigned Reg = MO.getReg();
     RegKills.reset(Reg);
@@ -584,6 +615,10 @@ static void UpdateKills(MachineInstr &MI, const TargetRegisterInfo* TRI,
       RegKills.reset(*SR);
       KillOps[*SR] = NULL;
     }
+    for (const unsigned *SR = TRI->getSuperRegisters(Reg); *SR; ++SR) {
+      RegKills.reset(*SR);
+      KillOps[*SR] = NULL;
+    }
   }
 }
 
@@ -602,7 +637,7 @@ static void ReMaterialize(MachineBasicBlock &MBB,
          "Don't know how to remat instructions that define > 1 values!");
 #endif
   TII->reMaterialize(MBB, MII, DestReg,
-                     ReMatDefMI->getOperand(0).getSubReg(), ReMatDefMI);
+                     ReMatDefMI->getOperand(0).getSubReg(), ReMatDefMI, TRI);
   MachineInstr *NewMI = prior(MII);
   for (unsigned i = 0, e = NewMI->getNumOperands(); i != e; ++i) {
     MachineOperand &MO = NewMI->getOperand(i);
@@ -614,7 +649,7 @@ static void ReMaterialize(MachineBasicBlock &MBB,
     assert(MO.isUse());
     unsigned SubIdx = MO.getSubReg();
     unsigned Phys = VRM.getPhys(VirtReg);
-    assert(Phys);
+    assert(Phys && "Virtual register is not assigned a register?");
     unsigned RReg = SubIdx ? TRI->getSubReg(Phys, SubIdx) : Phys;
     MO.setReg(RReg);
     MO.setSubReg(0);
@@ -817,11 +852,8 @@ unsigned ReuseInfo::GetRegForReload(const TargetRegisterClass *RC,
                "A reuse cannot be a virtual register");
         if (PRRU != RealPhysRegUsed) {
           // What was the sub-register index?
-          unsigned SubReg;
-          for (SubIdx = 1; (SubReg = TRI->getSubReg(PRRU, SubIdx)); SubIdx++)
-            if (SubReg == RealPhysRegUsed)
-              break;
-          assert(SubReg == RealPhysRegUsed &&
+          SubIdx = TRI->getSubRegIndex(PRRU, RealPhysRegUsed);
+          assert(SubIdx &&
                  "Operand physreg is not a sub-register of PhysRegUsed");
         }
 
@@ -857,7 +889,7 @@ unsigned ReuseInfo::GetRegForReload(const TargetRegisterClass *RC,
         Spills.ClobberPhysReg(NewPhysReg);
         Spills.ClobberPhysReg(NewOp.PhysRegReused);
 
-        unsigned RReg = SubIdx ? TRI->getSubReg(NewPhysReg, SubIdx) : NewPhysReg;
+        unsigned RReg = SubIdx ? TRI->getSubReg(NewPhysReg, SubIdx) :NewPhysReg;
         MI->getOperand(NewOp.Operand).setReg(RReg);
         MI->getOperand(NewOp.Operand).setSubReg(0);
 
@@ -995,7 +1027,7 @@ namespace {
 
 namespace {
 
-class VISIBILITY_HIDDEN LocalRewriter : public VirtRegRewriter {
+class LocalRewriter : public VirtRegRewriter {
   MachineRegisterInfo *RegInfo;
   const TargetRegisterInfo *TRI;
   const TargetInstrInfo *TII;
@@ -1431,8 +1463,9 @@ private:
                            std::vector<MachineOperand*> &KillOps,
                            VirtRegMap &VRM) {
 
+    MachineBasicBlock::iterator oldNextMII = next(MII);
     TII->storeRegToStackSlot(MBB, next(MII), PhysReg, true, StackSlot, RC);
-    MachineInstr *StoreMI = next(MII);
+    MachineInstr *StoreMI = prior(oldNextMII);
     VRM.addSpillSlotUse(StackSlot, StoreMI);
     DEBUG(errs() << "Store:\t" << *StoreMI);
 
@@ -1454,7 +1487,7 @@ private:
         // being reused.
         for (unsigned j = 0, ee = KillRegs.size(); j != ee; ++j) {
           bool HasOtherDef = false;
-          if (InvalidateRegDef(PrevMII, *MII, KillRegs[j], HasOtherDef)) {
+          if (InvalidateRegDef(PrevMII, *MII, KillRegs[j], HasOtherDef, TRI)) {
             MachineInstr *DeadDef = PrevMII;
             if (ReMatDefs.count(DeadDef) && !HasOtherDef) {
               // FIXME: This assumes a remat def does not have side effects.
@@ -1467,7 +1500,9 @@ private:
       }
     }
 
-    LastStore = next(MII);
+    // Allow for multi-instruction spill sequences, as on PPC Altivec.  Presume
+    // the last of multiple instructions is the actual store.
+    LastStore = prior(oldNextMII);
 
     // If the stack slot value was previously available in some other
     // register, change it now.  Otherwise, make the register available,
@@ -1478,6 +1513,29 @@ private:
     ++NumStores;
   }
 
+  /// isSafeToDelete - Return true if this instruction doesn't produce any side
+  /// effect and all of its defs are dead.
+  static bool isSafeToDelete(MachineInstr &MI) {
+    const TargetInstrDesc &TID = MI.getDesc();
+    if (TID.mayLoad() || TID.mayStore() || TID.isCall() || TID.isTerminator() ||
+        TID.isBarrier() || TID.isReturn() ||
+        TID.hasUnmodeledSideEffects())
+      return false;
+    for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
+      MachineOperand &MO = MI.getOperand(i);
+      if (!MO.isReg() || !MO.getReg())
+        continue;
+      if (MO.isDef() && !MO.isDead())
+        return false;
+      if (MO.isUse() && MO.isKill())
+        // FIXME: We can't remove kill markers or else the scavenger will assert.
+        // An alternative is to add an ADD pseudo instruction to replace kill
+        // markers.
+        return false;
+    }
+    return true;
+  }
+
   /// TransferDeadness - An identity copy definition is dead and it's being
   /// removed. Find the last def or use and mark it as dead / kill.
   void TransferDeadness(MachineBasicBlock *MBB, unsigned CurDist,
@@ -1519,7 +1577,7 @@ private:
       if (LastUD->isDef()) {
         // If the instruction has no side effect, delete it and propagate
        // backward further. Otherwise, mark it dead and we are done.
-        if (!TII->isDeadInstruction(LastUDMI)) {
+        if (!isSafeToDelete(*LastUDMI)) {
           LastUD->setIsDead();
           break;
         }
@@ -1542,7 +1600,7 @@ private:
                   std::vector<MachineOperand*> &KillOps) {
 
     DEBUG(errs() << "\n**** Local spiller rewriting MBB '"
-          << MBB.getBasicBlock()->getName() << "':\n");
+          << MBB.getName() << "':\n");
 
     MachineFunction &MF = *MBB.getParent();
     
@@ -1679,6 +1737,7 @@ private:
 
             // Mark it killed.
             MachineInstr *CopyMI = prior(InsertLoc);
+            CopyMI->setAsmPrinterFlag(AsmPrinter::ReloadReuse);
             MachineOperand *KillOpnd = CopyMI->findRegisterUseOperand(InReg);
             KillOpnd->setIsKill();
             UpdateKills(*CopyMI, TRI, RegKills, KillOps);
@@ -1726,8 +1785,9 @@ private:
           const TargetRegisterClass *RC = RegInfo->getRegClass(VirtReg);
           unsigned Phys = VRM.getPhys(VirtReg);
           int StackSlot = VRM.getStackSlot(VirtReg);
+          MachineBasicBlock::iterator oldNextMII = next(MII);
           TII->storeRegToStackSlot(MBB, next(MII), Phys, isKill, StackSlot, RC);
-          MachineInstr *StoreMI = next(MII);
+          MachineInstr *StoreMI = prior(oldNextMII);
           VRM.addSpillSlotUse(StackSlot, StoreMI);
           DEBUG(errs() << "Store:\t" << *StoreMI);
           VRM.virtFolded(VirtReg, StoreMI, VirtRegMap::isMod);
@@ -1958,6 +2018,7 @@ private:
           TII->copyRegToReg(MBB, InsertLoc, DesignatedReg, PhysReg, RC, RC);
 
           MachineInstr *CopyMI = prior(InsertLoc);
+          CopyMI->setAsmPrinterFlag(AsmPrinter::ReloadReuse);
           UpdateKills(*CopyMI, TRI, RegKills, KillOps);
 
           // This invalidates DesignatedReg.
@@ -2086,6 +2147,7 @@ private:
                 // virtual or needing to clobber any values if it's physical).
                 NextMII = &MI;
                 --NextMII;  // backtrack to the copy.
+                NextMII->setAsmPrinterFlag(AsmPrinter::ReloadReuse);
                 // Propagate the sub-register index over.
                 if (SubIdx) {
                   DefMO = NextMII->findRegisterDefOperand(DestReg);
@@ -2340,7 +2402,7 @@ private:
       }
     ProcessNextInst:
       // Delete dead instructions without side effects.
-      if (!Erased && !BackTracked && TII->isDeadInstruction(&MI)) {
+      if (!Erased && !BackTracked && isSafeToDelete(MI)) {
         InvalidateKills(MI, TRI, RegKills, KillOps);
         VRM.RemoveMachineInstrFromMaps(&MI);
         MBB.erase(&MI);
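
A heavily simplified sketch of the kill-tracking invariant the UpdateKills changes above maintain: clearing a kill marker for a register must also clear it for every overlapping (sub- and super-) register. Hypothetical types, not the LLVM API:

    #include <cstddef>
    #include <vector>

    struct ToyRegInfo {
      // Aliases[R] lists every register overlapping R (sub and super regs).
      std::vector<std::vector<unsigned> > Aliases;
    };

    static void toyClearKill(unsigned Reg, const ToyRegInfo &TRI,
                             std::vector<bool> &RegKills) {
      RegKills[Reg] = false;
      const std::vector<unsigned> &A = TRI.Aliases[Reg];
      for (std::size_t i = 0; i != A.size(); ++i)
        RegKills[A[i]] = false;  // a redef of Reg invalidates aliased kills too
    }
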
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/ExecutionEngine.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/ExecutionEngine.cpp
index 335d4de..cb30748 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/ExecutionEngine.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/ExecutionEngine.cpp
@@ -40,17 +40,19 @@ ExecutionEngine *(*ExecutionEngine::JITCtor)(ModuleProvider *MP,
                                              std::string *ErrorStr,
                                              JITMemoryManager *JMM,
                                              CodeGenOpt::Level OptLevel,
-                                             bool GVsWithCode) = 0;
+                                             bool GVsWithCode,
+                                             CodeModel::Model CMM) = 0;
 ExecutionEngine *(*ExecutionEngine::InterpCtor)(ModuleProvider *MP,
                                                 std::string *ErrorStr) = 0;
 ExecutionEngine::EERegisterFn ExecutionEngine::ExceptionTableRegister = 0;
 
 
-ExecutionEngine::ExecutionEngine(ModuleProvider *P) : LazyFunctionCreator(0) {
-  LazyCompilationDisabled = false;
+ExecutionEngine::ExecutionEngine(ModuleProvider *P)
+  : EEState(*this),
+    LazyFunctionCreator(0) {
+  CompilingLazily         = false;
   GVCompilationDisabled   = false;
   SymbolSearchingDisabled = false;
-  DlsymStubsEnabled       = false;
   Modules.push_back(P);
   assert(P && "ModuleProvider is null?");
 }
@@ -113,6 +115,21 @@ Function *ExecutionEngine::FindFunctionNamed(const char *FnName) {
 }
 
 
+void *ExecutionEngineState::RemoveMapping(
+  const MutexGuard &, const GlobalValue *ToUnmap) {
+  GlobalAddressMapTy::iterator I = GlobalAddressMap.find(ToUnmap);
+  void *OldVal;
+  if (I == GlobalAddressMap.end())
+    OldVal = 0;
+  else {
+    OldVal = I->second;
+    GlobalAddressMap.erase(I);
+  }
+
+  GlobalAddressReverseMap.erase(OldVal);
+  return OldVal;
+}
+
 /// addGlobalMapping - Tell the execution engine that the specified global is
 /// at the specified location.  This is used internally as functions are JIT'd
 /// and as global variables are laid out in memory.  It can and should also be
@@ -123,14 +140,14 @@ void ExecutionEngine::addGlobalMapping(const GlobalValue *GV, void *Addr) {
 
   DEBUG(errs() << "JIT: Map \'" << GV->getName() 
         << "\' to [" << Addr << "]\n";);
-  void *&CurVal = state.getGlobalAddressMap(locked)[GV];
+  void *&CurVal = EEState.getGlobalAddressMap(locked)[GV];
   assert((CurVal == 0 || Addr == 0) && "GlobalMapping already established!");
   CurVal = Addr;
   
   // If we are using the reverse mapping, add it too
-  if (!state.getGlobalAddressReverseMap(locked).empty()) {
+  if (!EEState.getGlobalAddressReverseMap(locked).empty()) {
     AssertingVH<const GlobalValue> &V =
-      state.getGlobalAddressReverseMap(locked)[Addr];
+      EEState.getGlobalAddressReverseMap(locked)[Addr];
     assert((V == 0 || GV == 0) && "GlobalMapping already established!");
     V = GV;
   }
@@ -141,8 +158,8 @@ void ExecutionEngine::addGlobalMapping(const GlobalValue *GV, void *Addr) {
 void ExecutionEngine::clearAllGlobalMappings() {
   MutexGuard locked(lock);
   
-  state.getGlobalAddressMap(locked).clear();
-  state.getGlobalAddressReverseMap(locked).clear();
+  EEState.getGlobalAddressMap(locked).clear();
+  EEState.getGlobalAddressReverseMap(locked).clear();
 }
 
 /// clearGlobalMappingsFromModule - Clear all global mappings that came from a
@@ -151,13 +168,11 @@ void ExecutionEngine::clearGlobalMappingsFromModule(Module *M) {
   MutexGuard locked(lock);
   
   for (Module::iterator FI = M->begin(), FE = M->end(); FI != FE; ++FI) {
-    state.getGlobalAddressMap(locked).erase(&*FI);
-    state.getGlobalAddressReverseMap(locked).erase(&*FI);
+    EEState.RemoveMapping(locked, FI);
   }
   for (Module::global_iterator GI = M->global_begin(), GE = M->global_end(); 
        GI != GE; ++GI) {
-    state.getGlobalAddressMap(locked).erase(&*GI);
-    state.getGlobalAddressReverseMap(locked).erase(&*GI);
+    EEState.RemoveMapping(locked, GI);
   }
 }
 
@@ -167,36 +182,25 @@ void ExecutionEngine::clearGlobalMappingsFromModule(Module *M) {
 void *ExecutionEngine::updateGlobalMapping(const GlobalValue *GV, void *Addr) {
   MutexGuard locked(lock);
 
-  std::map<AssertingVH<const GlobalValue>, void *> &Map =
-    state.getGlobalAddressMap(locked);
+  ExecutionEngineState::GlobalAddressMapTy &Map =
+    EEState.getGlobalAddressMap(locked);
 
   // Deleting from the mapping?
   if (Addr == 0) {
-    std::map<AssertingVH<const GlobalValue>, void *>::iterator I = Map.find(GV);
-    void *OldVal;
-    if (I == Map.end())
-      OldVal = 0;
-    else {
-      OldVal = I->second;
-      Map.erase(I); 
-    }
-    
-    if (!state.getGlobalAddressReverseMap(locked).empty())
-      state.getGlobalAddressReverseMap(locked).erase(OldVal);
-    return OldVal;
+    return EEState.RemoveMapping(locked, GV);
   }
   
   void *&CurVal = Map[GV];
   void *OldVal = CurVal;
 
-  if (CurVal && !state.getGlobalAddressReverseMap(locked).empty())
-    state.getGlobalAddressReverseMap(locked).erase(CurVal);
+  if (CurVal && !EEState.getGlobalAddressReverseMap(locked).empty())
+    EEState.getGlobalAddressReverseMap(locked).erase(CurVal);
   CurVal = Addr;
   
   // If we are using the reverse mapping, add it too
-  if (!state.getGlobalAddressReverseMap(locked).empty()) {
+  if (!EEState.getGlobalAddressReverseMap(locked).empty()) {
     AssertingVH<const GlobalValue> &V =
-      state.getGlobalAddressReverseMap(locked)[Addr];
+      EEState.getGlobalAddressReverseMap(locked)[Addr];
     assert((V == 0 || GV == 0) && "GlobalMapping already established!");
     V = GV;
   }
@@ -209,9 +213,9 @@ void *ExecutionEngine::updateGlobalMapping(const GlobalValue *GV, void *Addr) {
 void *ExecutionEngine::getPointerToGlobalIfAvailable(const GlobalValue *GV) {
   MutexGuard locked(lock);
   
-  std::map<AssertingVH<const GlobalValue>, void*>::iterator I =
-    state.getGlobalAddressMap(locked).find(GV);
-  return I != state.getGlobalAddressMap(locked).end() ? I->second : 0;
+  ExecutionEngineState::GlobalAddressMapTy::iterator I =
+    EEState.getGlobalAddressMap(locked).find(GV);
+  return I != EEState.getGlobalAddressMap(locked).end() ? I->second : 0;
 }
 
 /// getGlobalValueAtAddress - Return the LLVM global value object that starts
@@ -221,17 +225,17 @@ const GlobalValue *ExecutionEngine::getGlobalValueAtAddress(void *Addr) {
   MutexGuard locked(lock);
 
   // If we haven't computed the reverse mapping yet, do so first.
-  if (state.getGlobalAddressReverseMap(locked).empty()) {
-    for (std::map<AssertingVH<const GlobalValue>, void *>::iterator
-         I = state.getGlobalAddressMap(locked).begin(),
-         E = state.getGlobalAddressMap(locked).end(); I != E; ++I)
-      state.getGlobalAddressReverseMap(locked).insert(std::make_pair(I->second,
+  if (EEState.getGlobalAddressReverseMap(locked).empty()) {
+    for (ExecutionEngineState::GlobalAddressMapTy::iterator
+         I = EEState.getGlobalAddressMap(locked).begin(),
+         E = EEState.getGlobalAddressMap(locked).end(); I != E; ++I)
+      EEState.getGlobalAddressReverseMap(locked).insert(std::make_pair(I->second,
                                                                      I->first));
   }
 
   std::map<void *, AssertingVH<const GlobalValue> >::iterator I =
-    state.getGlobalAddressReverseMap(locked).find(Addr);
-  return I != state.getGlobalAddressReverseMap(locked).end() ? I->second : 0;
+    EEState.getGlobalAddressReverseMap(locked).find(Addr);
+  return I != EEState.getGlobalAddressReverseMap(locked).end() ? I->second : 0;
 }
 
 // CreateArgv - Turn a vector of strings into a nice argv style array of
@@ -243,7 +247,7 @@ static void *CreateArgv(LLVMContext &C, ExecutionEngine *EE,
   char *Result = new char[(InputArgv.size()+1)*PtrSize];
 
   DEBUG(errs() << "JIT: ARGV = " << (void*)Result << "\n");
-  const Type *SBytePtr = PointerType::getUnqual(Type::getInt8Ty(C));
+  const Type *SBytePtr = Type::getInt8PtrTy(C);
 
   for (unsigned i = 0; i != InputArgv.size(); ++i) {
     unsigned Size = InputArgv[i].size()+1;
@@ -441,7 +445,7 @@ ExecutionEngine *EngineBuilder::create() {
     if (ExecutionEngine::JITCtor) {
       ExecutionEngine *EE =
         ExecutionEngine::JITCtor(MP, ErrorStr, JMM, OptLevel,
-                                 AllocateGVsWithCode);
+                                 AllocateGVsWithCode, CMModel);
       if (EE) return EE;
     }
   }
@@ -471,7 +475,7 @@ void *ExecutionEngine::getPointerToGlobal(const GlobalValue *GV) {
     return getPointerToFunction(F);
 
   MutexGuard locked(lock);
-  void *p = state.getGlobalAddressMap(locked)[GV];
+  void *p = EEState.getGlobalAddressMap(locked)[GV];
   if (p)
     return p;
 
@@ -481,7 +485,7 @@ void *ExecutionEngine::getPointerToGlobal(const GlobalValue *GV) {
     EmitGlobalVariable(GVar);
   else
     llvm_unreachable("Global hasn't had an address allocated yet!");
-  return state.getGlobalAddressMap(locked)[GV];
+  return EEState.getGlobalAddressMap(locked)[GV];
 }
 
 /// This function converts a Constant* into a GenericValue. The interesting 
@@ -539,11 +543,11 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
     }
     case Instruction::UIToFP: {
       GenericValue GV = getConstantValue(Op0);
-      if (CE->getType() == Type::getFloatTy(CE->getContext()))
+      if (CE->getType()->isFloatTy())
         GV.FloatVal = float(GV.IntVal.roundToDouble());
-      else if (CE->getType() == Type::getDoubleTy(CE->getContext()))
+      else if (CE->getType()->isDoubleTy())
         GV.DoubleVal = GV.IntVal.roundToDouble();
-      else if (CE->getType() == Type::getX86_FP80Ty(Op0->getContext())) {
+      else if (CE->getType()->isX86_FP80Ty()) {
         const uint64_t zero[] = {0, 0};
         APFloat apf = APFloat(APInt(80, 2, zero));
         (void)apf.convertFromAPInt(GV.IntVal, 
@@ -555,11 +559,11 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
     }
     case Instruction::SIToFP: {
       GenericValue GV = getConstantValue(Op0);
-      if (CE->getType() == Type::getFloatTy(CE->getContext()))
+      if (CE->getType()->isFloatTy())
         GV.FloatVal = float(GV.IntVal.signedRoundToDouble());
-      else if (CE->getType() == Type::getDoubleTy(CE->getContext()))
+      else if (CE->getType()->isDoubleTy())
         GV.DoubleVal = GV.IntVal.signedRoundToDouble();
-      else if (CE->getType() == Type::getX86_FP80Ty(CE->getContext())) {
+      else if (CE->getType()->isX86_FP80Ty()) {
         const uint64_t zero[] = { 0, 0};
         APFloat apf = APFloat(APInt(80, 2, zero));
         (void)apf.convertFromAPInt(GV.IntVal, 
@@ -573,11 +577,11 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
     case Instruction::FPToSI: {
       GenericValue GV = getConstantValue(Op0);
       uint32_t BitWidth = cast<IntegerType>(CE->getType())->getBitWidth();
-      if (Op0->getType() == Type::getFloatTy(Op0->getContext()))
+      if (Op0->getType()->isFloatTy())
         GV.IntVal = APIntOps::RoundFloatToAPInt(GV.FloatVal, BitWidth);
-      else if (Op0->getType() == Type::getDoubleTy(Op0->getContext()))
+      else if (Op0->getType()->isDoubleTy())
         GV.IntVal = APIntOps::RoundDoubleToAPInt(GV.DoubleVal, BitWidth);
-      else if (Op0->getType() == Type::getX86_FP80Ty(Op0->getContext())) {
+      else if (Op0->getType()->isX86_FP80Ty()) {
         APFloat apf = APFloat(GV.IntVal);
         uint64_t v;
         bool ignored;
@@ -610,9 +614,9 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
         default: llvm_unreachable("Invalid bitcast operand");
         case Type::IntegerTyID:
           assert(DestTy->isFloatingPoint() && "invalid bitcast");
-          if (DestTy == Type::getFloatTy(Op0->getContext()))
+          if (DestTy->isFloatTy())
             GV.FloatVal = GV.IntVal.bitsToFloat();
-          else if (DestTy == Type::getDoubleTy(DestTy->getContext()))
+          else if (DestTy->isDoubleTy())
             GV.DoubleVal = GV.IntVal.bitsToDouble();
           break;
         case Type::FloatTyID: 
@@ -756,8 +760,11 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
       Result.PointerVal = 0;
     else if (const Function *F = dyn_cast<Function>(C))
       Result = PTOGV(getPointerToFunctionOrStub(const_cast<Function*>(F)));
-    else if (const GlobalVariable* GV = dyn_cast<GlobalVariable>(C))
+    else if (const GlobalVariable *GV = dyn_cast<GlobalVariable>(C))
       Result = PTOGV(getOrEmitGlobalVariable(const_cast<GlobalVariable*>(GV)));
+    else if (const BlockAddress *BA = dyn_cast<BlockAddress>(C))
+      Result = PTOGV(getPointerToBasicBlock(const_cast<BasicBlock*>(
+                                                        BA->getBasicBlock())));
     else
       llvm_unreachable("Unknown constant pointer type!");
     break;
@@ -1066,3 +1073,23 @@ void ExecutionEngine::EmitGlobalVariable(const GlobalVariable *GV) {
   NumInitBytes += (unsigned)GVSize;
   ++NumGlobals;
 }
+
+ExecutionEngineState::ExecutionEngineState(ExecutionEngine &EE)
+  : EE(EE), GlobalAddressMap(this) {
+}
+
+sys::Mutex *ExecutionEngineState::AddressMapConfig::getMutex(
+  ExecutionEngineState *EES) {
+  return &EES->EE.lock;
+}
+void ExecutionEngineState::AddressMapConfig::onDelete(
+  ExecutionEngineState *EES, const GlobalValue *Old) {
+  void *OldVal = EES->GlobalAddressMap.lookup(Old);
+  EES->GlobalAddressReverseMap.erase(OldVal);
+}
+
+void ExecutionEngineState::AddressMapConfig::onRAUW(
+  ExecutionEngineState *, const GlobalValue *, const GlobalValue *) {
+  assert(false && "The ExecutionEngine doesn't know how to handle a"
+         " RAUW on a value it has a global mapping for.");
+}
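
The RemoveMapping refactoring above centralizes one discipline: the reverse map is built lazily from the forward map, so any erase from the forward map must also drop the matching reverse entry. A sketch with plain std::map stand-ins for the real ValueMap/EEState types:

    #include <map>

    class ToyAddressMaps {
      std::map<const void *, void *> Fwd;  // GlobalValue* -> JITed address
      std::map<void *, const void *> Rev;  // address -> GlobalValue*
    public:
      void add(const void *GV, void *Addr) {
        Fwd[GV] = Addr;
        if (!Rev.empty()) Rev[Addr] = GV;  // keep reverse map current if built
      }
      void *remove(const void *GV) {
        std::map<const void *, void *>::iterator I = Fwd.find(GV);
        if (I == Fwd.end()) return 0;
        void *Old = I->second;
        Fwd.erase(I);
        Rev.erase(Old);                    // never leave a dangling reverse entry
        return Old;
      }
    };
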
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Execution.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Execution.cpp
index bb45b2c..b59cfd1 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Execution.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Execution.cpp
@@ -365,7 +365,7 @@ static GenericValue executeFCMP_OGT(GenericValue Src1, GenericValue Src2,
 }
 
 #define IMPLEMENT_UNORDERED(TY, X,Y)                                     \
-  if (TY == Type::getFloatTy(Ty->getContext())) {                        \
+  if (TY->isFloatTy()) {                                                 \
     if (X.FloatVal != X.FloatVal || Y.FloatVal != Y.FloatVal) {          \
       Dest.IntVal = APInt(1,true);                                       \
       return Dest;                                                       \
@@ -421,7 +421,7 @@ static GenericValue executeFCMP_UGT(GenericValue Src1, GenericValue Src2,
 static GenericValue executeFCMP_ORD(GenericValue Src1, GenericValue Src2,
                                      const Type *Ty) {
   GenericValue Dest;
-  if (Ty == Type::getFloatTy(Ty->getContext()))
+  if (Ty->isFloatTy())
     Dest.IntVal = APInt(1,(Src1.FloatVal == Src1.FloatVal && 
                            Src2.FloatVal == Src2.FloatVal));
   else
@@ -433,7 +433,7 @@ static GenericValue executeFCMP_ORD(GenericValue Src1, GenericValue Src2,
 static GenericValue executeFCMP_UNO(GenericValue Src1, GenericValue Src2,
                                      const Type *Ty) {
   GenericValue Dest;
-  if (Ty == Type::getFloatTy(Ty->getContext()))
+  if (Ty->isFloatTy())
     Dest.IntVal = APInt(1,(Src1.FloatVal != Src1.FloatVal || 
                            Src2.FloatVal != Src2.FloatVal));
   else
@@ -572,9 +572,9 @@ void Interpreter::exitCalled(GenericValue GV) {
   // runAtExitHandlers() assumes there are no stack frames, but
   // if exit() was called, then it had a stack frame. Blow away
   // the stack before interpreting atexit handlers.
-  ECStack.clear ();
-  runAtExitHandlers ();
-  exit (GV.IntVal.zextOrTrunc(32).getZExtValue());
+  ECStack.clear();
+  runAtExitHandlers();
+  exit(GV.IntVal.zextOrTrunc(32).getZExtValue());
 }
 
 /// Pop the last stack frame off of ECStack and then copy the result
@@ -585,8 +585,8 @@ void Interpreter::exitCalled(GenericValue GV) {
 /// care of switching to the normal destination BB, if we are returning
 /// from an invoke.
 ///
-void Interpreter::popStackAndReturnValueToCaller (const Type *RetTy,
-                                                  GenericValue Result) {
+void Interpreter::popStackAndReturnValueToCaller(const Type *RetTy,
+                                                 GenericValue Result) {
   // Pop the current stack frame.
   ECStack.pop_back();
 
@@ -629,15 +629,15 @@ void Interpreter::visitUnwindInst(UnwindInst &I) {
   // Unwind stack
   Instruction *Inst;
   do {
-    ECStack.pop_back ();
-    if (ECStack.empty ())
+    ECStack.pop_back();
+    if (ECStack.empty())
       llvm_report_error("Empty stack during unwind!");
-    Inst = ECStack.back ().Caller.getInstruction ();
-  } while (!(Inst && isa<InvokeInst> (Inst)));
+    Inst = ECStack.back().Caller.getInstruction();
+  } while (!(Inst && isa<InvokeInst>(Inst)));
 
   // Return from invoke
-  ExecutionContext &InvokingSF = ECStack.back ();
-  InvokingSF.Caller = CallSite ();
+  ExecutionContext &InvokingSF = ECStack.back();
+  InvokingSF.Caller = CallSite();
 
   // Go to exceptional destination BB of invoke instruction
   SwitchToNewBasicBlock(cast<InvokeInst>(Inst)->getUnwindDest(), InvokingSF);
@@ -678,6 +678,13 @@ void Interpreter::visitSwitchInst(SwitchInst &I) {
   SwitchToNewBasicBlock(Dest, SF);
 }
 
+void Interpreter::visitIndirectBrInst(IndirectBrInst &I) {
+  ExecutionContext &SF = ECStack.back();
+  void *Dest = GVTOP(getOperandValue(I.getAddress(), SF));
+  SwitchToNewBasicBlock((BasicBlock*)Dest, SF);
+}
+
+
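visitIndirectBrInst is the interpreter half of the new address-of-label support; the BlockAddress case added to getConstantValue above supplies the pointer it branches on. A hedged sketch of constructing such IR with the same-era C++ API (function and block names are made up):

    #include "llvm/BasicBlock.h"
    #include "llvm/Constants.h"
    #include "llvm/Function.h"
    #include "llvm/Instructions.h"
    #include "llvm/LLVMContext.h"
    #include "llvm/Module.h"
    using namespace llvm;

    void buildIndirectBr(LLVMContext &Ctx, Module &M) {
      Function *F = cast<Function>(
          M.getOrInsertFunction("f", Type::getVoidTy(Ctx), (Type *)0));
      BasicBlock *Entry  = BasicBlock::Create(Ctx, "entry", F);
      BasicBlock *Target = BasicBlock::Create(Ctx, "target", F);
      // blockaddress(@f, %target): getConstantValue can now fold this to
      // the interpreter's pointer for the block.
      BlockAddress *BA = BlockAddress::get(Target);
      IndirectBrInst *IBI = IndirectBrInst::Create(BA, /*NumDests=*/1, Entry);
      IBI->addDestination(Target);
      ReturnInst::Create(Ctx, Target);
    }
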
 // SwitchToNewBasicBlock - This method is used to jump to a new basic block.
 // This function handles the actual updating of block and instruction iterators
 // as well as execution of all of the PHI nodes in the destination block.
@@ -720,7 +727,7 @@ void Interpreter::SwitchToNewBasicBlock(BasicBlock *Dest, ExecutionContext &SF){
 //                     Memory Instruction Implementations
 //===----------------------------------------------------------------------===//
 
-void Interpreter::visitAllocationInst(AllocationInst &I) {
+void Interpreter::visitAllocaInst(AllocaInst &I) {
   ExecutionContext &SF = ECStack.back();
 
   const Type *Ty = I.getType()->getElementType();  // Type to be allocated
@@ -749,14 +756,6 @@ void Interpreter::visitAllocationInst(AllocationInst &I) {
     ECStack.back().Allocas.add(Memory);
 }
 
-void Interpreter::visitFreeInst(FreeInst &I) {
-  ExecutionContext &SF = ECStack.back();
-  assert(isa<PointerType>(I.getOperand(0)->getType()) && "Freeing nonptr?");
-  GenericValue Value = getOperandValue(I.getOperand(0), SF);
-  // TODO: Check to make sure memory is allocated
-  free(GVTOP(Value));   // Free memory
-}
-
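visitFreeInst disappears because LLVM dropped free as a first-class instruction; frontends now emit an ordinary call to libc's free(). A hedged sketch of the replacement, assuming the 2.7-era CallInst::CreateFree helper:

    #include "llvm/Instructions.h"
    using namespace llvm;

    // Lower what used to be "free %ptr" into a plain call to free().
    static void emitFree(Value *Ptr, BasicBlock *InsertAtEnd) {
      CallInst::CreateFree(Ptr, InsertAtEnd);
    }
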
 // getElementOffset - The workhorse for getelementptr.
 //
 GenericValue Interpreter::executeGEPOperation(Value *Ptr, gep_type_iterator I,
@@ -835,7 +834,7 @@ void Interpreter::visitCallSite(CallSite CS) {
 
   // Check to see if this is an intrinsic function call...
   Function *F = CS.getCalledFunction();
-  if (F && F->isDeclaration ())
+  if (F && F->isDeclaration())
     switch (F->getIntrinsicID()) {
     case Intrinsic::not_intrinsic:
       break;
@@ -883,16 +882,6 @@ void Interpreter::visitCallSite(CallSite CS) {
          e = SF.Caller.arg_end(); i != e; ++i, ++pNum) {
     Value *V = *i;
     ArgVals.push_back(getOperandValue(V, SF));
-    // Promote all integral types whose size is < sizeof(i32) into i32.
-    // We do this by zero or sign extending the value as appropriate
-    // according to the parameter attributes
-    const Type *Ty = V->getType();
-    if (Ty->isInteger() && (ArgVals.back().IntVal.getBitWidth() < 32)) {
-      if (CS.paramHasAttr(pNum, Attribute::ZExt))
-        ArgVals.back().IntVal = ArgVals.back().IntVal.zext(32);
-      else if (CS.paramHasAttr(pNum, Attribute::SExt))
-        ArgVals.back().IntVal = ArgVals.back().IntVal.sext(32);
-    }
   }
 
   // To handle indirect calls, we must get the pointer value from the argument
@@ -970,8 +959,7 @@ GenericValue Interpreter::executeZExtInst(Value *SrcVal, const Type *DstTy,
 GenericValue Interpreter::executeFPTruncInst(Value *SrcVal, const Type *DstTy,
                                              ExecutionContext &SF) {
   GenericValue Dest, Src = getOperandValue(SrcVal, SF);
-  assert(SrcVal->getType() == Type::getDoubleTy(SrcVal->getContext()) &&
-         DstTy == Type::getFloatTy(SrcVal->getContext()) &&
+  assert(SrcVal->getType()->isDoubleTy() && DstTy->isFloatTy() &&
          "Invalid FPTrunc instruction");
   Dest.FloatVal = (float) Src.DoubleVal;
   return Dest;
@@ -980,8 +968,7 @@ GenericValue Interpreter::executeFPTruncInst(Value *SrcVal, const Type *DstTy,
 GenericValue Interpreter::executeFPExtInst(Value *SrcVal, const Type *DstTy,
                                            ExecutionContext &SF) {
   GenericValue Dest, Src = getOperandValue(SrcVal, SF);
-  assert(SrcVal->getType() == Type::getFloatTy(SrcVal->getContext()) &&
-         DstTy == Type::getDoubleTy(SrcVal->getContext()) &&
+  assert(SrcVal->getType()->isFloatTy() && DstTy->isDoubleTy() &&
          "Invalid FPTrunc instruction");
   Dest.DoubleVal = (double) Src.FloatVal;
   return Dest;
@@ -1072,22 +1059,22 @@ GenericValue Interpreter::executeBitCastInst(Value *SrcVal, const Type *DstTy,
     assert(isa<PointerType>(SrcTy) && "Invalid BitCast");
     Dest.PointerVal = Src.PointerVal;
   } else if (DstTy->isInteger()) {
-    if (SrcTy == Type::getFloatTy(SrcVal->getContext())) {
+    if (SrcTy->isFloatTy()) {
       Dest.IntVal.zext(sizeof(Src.FloatVal) * CHAR_BIT);
       Dest.IntVal.floatToBits(Src.FloatVal);
-    } else if (SrcTy == Type::getDoubleTy(SrcVal->getContext())) {
+    } else if (SrcTy->isDoubleTy()) {
       Dest.IntVal.zext(sizeof(Src.DoubleVal) * CHAR_BIT);
       Dest.IntVal.doubleToBits(Src.DoubleVal);
     } else if (SrcTy->isInteger()) {
       Dest.IntVal = Src.IntVal;
     } else 
       llvm_unreachable("Invalid BitCast");
-  } else if (DstTy == Type::getFloatTy(SrcVal->getContext())) {
+  } else if (DstTy->isFloatTy()) {
     if (SrcTy->isInteger())
       Dest.FloatVal = Src.IntVal.bitsToFloat();
     else
       Dest.FloatVal = Src.FloatVal;
-  } else if (DstTy == Type::getDoubleTy(SrcVal->getContext())) {
+  } else if (DstTy->isDoubleTy()) {
     if (SrcTy->isInteger())
       Dest.DoubleVal = Src.IntVal.bitsToDouble();
     else
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/ExternalFunctions.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/ExternalFunctions.cpp
index 8c45a36..c02d84f 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/ExternalFunctions.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/ExternalFunctions.cpp
@@ -158,7 +158,7 @@ static void *ffiValueFor(const Type *Ty, const GenericValue &AV,
       }
     case Type::FloatTyID: {
       float *FloatPtr = (float *) ArgDataPtr;
-      *FloatPtr = AV.DoubleVal;
+      *FloatPtr = AV.FloatVal;
       return ArgDataPtr;
     }
     case Type::DoubleTyID: {
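The one-character fix in the FloatTyID case matters because GenericValue keeps FloatVal and DoubleVal in an anonymous union: reading the member that was not written, as the old AV.DoubleVal did, reinterprets the stored bits. A toy stand-in showing the failure mode (hypothetical struct; reading the inactive member is formally undefined in C++):

    #include <cstdio>

    struct MiniGenericValue {       // models GenericValue's anonymous union
      union {
        double DoubleVal;
        float  FloatVal;
      };
    };

    int main() {
      MiniGenericValue AV;
      AV.FloatVal = 1.5f;           // what the caller stored for a float arg
      std::printf("FloatVal=%f DoubleVal=%f\n",
                  AV.FloatVal,      // 1.5
                  AV.DoubleVal);    // garbage bits, the pre-fix behavior
      return 0;
    }
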
@@ -284,6 +284,9 @@ GenericValue Interpreter::callExternalFunction(Function *F,
   else
     llvm_report_error("Tried to execute an unknown external function: " +
                       F->getType()->getDescription() + " " +F->getName());
+#ifndef USE_LIBFFI
+  errs() << "Recompiling LLVM with --enable-libffi might help.\n";
+#endif
   return GenericValue();
 }
 
@@ -419,83 +422,6 @@ GenericValue lle_X_printf(const FunctionType *FT,
   return GV;
 }
 
-static void ByteswapSCANFResults(LLVMContext &C,
-                                 const char *Fmt, void *Arg0, void *Arg1,
-                                 void *Arg2, void *Arg3, void *Arg4, void *Arg5,
-                                 void *Arg6, void *Arg7, void *Arg8) {
-  void *Args[] = { Arg0, Arg1, Arg2, Arg3, Arg4, Arg5, Arg6, Arg7, Arg8, 0 };
-
-  // Loop over the format string, munging read values as appropriate (performs
-  // byteswaps as necessary).
-  unsigned ArgNo = 0;
-  while (*Fmt) {
-    if (*Fmt++ == '%') {
-      // Read any flag characters that may be present...
-      bool Suppress = false;
-      bool Half = false;
-      bool Long = false;
-      bool LongLong = false;  // long long or long double
-
-      while (1) {
-        switch (*Fmt++) {
-        case '*': Suppress = true; break;
-        case 'a': /*Allocate = true;*/ break;  // We don't need to track this
-        case 'h': Half = true; break;
-        case 'l': Long = true; break;
-        case 'q':
-        case 'L': LongLong = true; break;
-        default:
-          if (Fmt[-1] > '9' || Fmt[-1] < '0')   // Ignore field width specs
-            goto Out;
-        }
-      }
-    Out:
-
-      // Read the conversion character
-      if (!Suppress && Fmt[-1] != '%') { // Nothing to do?
-        unsigned Size = 0;
-        const Type *Ty = 0;
-
-        switch (Fmt[-1]) {
-        case 'i': case 'o': case 'u': case 'x': case 'X': case 'n': case 'p':
-        case 'd':
-          if (Long || LongLong) {
-            Size = 8; Ty = Type::getInt64Ty(C);
-          } else if (Half) {
-            Size = 4; Ty = Type::getInt16Ty(C);
-          } else {
-            Size = 4; Ty = Type::getInt32Ty(C);
-          }
-          break;
-
-        case 'e': case 'g': case 'E':
-        case 'f':
-          if (Long || LongLong) {
-            Size = 8; Ty = Type::getDoubleTy(C);
-          } else {
-            Size = 4; Ty = Type::getFloatTy(C);
-          }
-          break;
-
-        case 's': case 'c': case '[':  // No byteswap needed
-          Size = 1;
-          Ty = Type::getInt8Ty(C);
-          break;
-
-        default: break;
-        }
-
-        if (Size) {
-          GenericValue GV;
-          void *Arg = Args[ArgNo++];
-          memcpy(&GV, Arg, Size);
-          TheInterpreter->StoreValueToMemory(GV, (GenericValue*)Arg, Ty);
-        }
-      }
-    }
-  }
-}
-
 // int sscanf(const char *format, ...);
 GenericValue lle_X_sscanf(const FunctionType *FT,
                           const std::vector<GenericValue> &args) {
@@ -508,9 +434,6 @@ GenericValue lle_X_sscanf(const FunctionType *FT,
   GenericValue GV;
   GV.IntVal = APInt(32, sscanf(Args[0], Args[1], Args[2], Args[3], Args[4],
                         Args[5], Args[6], Args[7], Args[8], Args[9]));
-  ByteswapSCANFResults(FT->getContext(),
-                       Args[1], Args[2], Args[3], Args[4],
-                       Args[5], Args[6], Args[7], Args[8], Args[9], 0);
   return GV;
 }
 
@@ -526,9 +449,6 @@ GenericValue lle_X_scanf(const FunctionType *FT,
   GenericValue GV;
   GV.IntVal = APInt(32, scanf( Args[0], Args[1], Args[2], Args[3], Args[4],
                         Args[5], Args[6], Args[7], Args[8], Args[9]));
-  ByteswapSCANFResults(FT->getContext(),
-                       Args[0], Args[1], Args[2], Args[3], Args[4],
-                       Args[5], Args[6], Args[7], Args[8], Args[9]);
   return GV;
 }
 
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Interpreter.h b/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Interpreter.h
index e026287..038830c 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Interpreter.h
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Interpreter.h
@@ -19,7 +19,7 @@
 #include "llvm/ExecutionEngine/GenericValue.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Support/CallSite.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/InstVisitor.h"
 #include "llvm/Support/raw_ostream.h"
@@ -135,12 +135,12 @@ public:
   void visitReturnInst(ReturnInst &I);
   void visitBranchInst(BranchInst &I);
   void visitSwitchInst(SwitchInst &I);
+  void visitIndirectBrInst(IndirectBrInst &I);
 
   void visitBinaryOperator(BinaryOperator &I);
   void visitICmpInst(ICmpInst &I);
   void visitFCmpInst(FCmpInst &I);
-  void visitAllocationInst(AllocationInst &I);
-  void visitFreeInst(FreeInst &I);
+  void visitAllocaInst(AllocaInst &I);
   void visitLoadInst(LoadInst &I);
   void visitStoreInst(StoreInst &I);
   void visitGetElementPtrInst(GetElementPtrInst &I);
@@ -203,6 +203,7 @@ private:  // Helper functions
   void SwitchToNewBasicBlock(BasicBlock *Dest, ExecutionContext &SF);
 
   void *getPointerToFunction(Function *F) { return (void*)F; }
+  void *getPointerToBasicBlock(BasicBlock *BB) { return (void*)BB; }
 
   void initializeExecutionEngine() { }
   void initializeExternalFunctions();
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/CMakeLists.txt b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/CMakeLists.txt
index 41b3b4e..42020d6 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/CMakeLists.txt
@@ -8,7 +8,6 @@ add_llvm_library(LLVMJIT
   JITDwarfEmitter.cpp
   JITEmitter.cpp
   JITMemoryManager.cpp
-  MacOSJITEventListener.cpp
   OProfileJITEventListener.cpp
   TargetSelect.cpp
   )
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp
index b2a268b..6d781c7 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp
@@ -198,15 +198,17 @@ ExecutionEngine *ExecutionEngine::createJIT(ModuleProvider *MP,
                                             std::string *ErrorStr,
                                             JITMemoryManager *JMM,
                                             CodeGenOpt::Level OptLevel,
-                                            bool GVsWithCode) {
-    return JIT::createJIT(MP, ErrorStr, JMM, OptLevel, GVsWithCode);
+                                            bool GVsWithCode,
+                                            CodeModel::Model CMM) {
+  return JIT::createJIT(MP, ErrorStr, JMM, OptLevel, GVsWithCode, CMM);
 }
 
 ExecutionEngine *JIT::createJIT(ModuleProvider *MP,
                                 std::string *ErrorStr,
                                 JITMemoryManager *JMM,
                                 CodeGenOpt::Level OptLevel,
-                                bool GVsWithCode) {
+                                bool GVsWithCode,
+                                CodeModel::Model CMM) {
   // Make sure we can resolve symbols in the program as well. The zero arg
   // to the function tells DynamicLibrary to load the program, not a library.
   if (sys::DynamicLibrary::LoadLibraryPermanently(0, ErrorStr))
@@ -215,6 +217,7 @@ ExecutionEngine *JIT::createJIT(ModuleProvider *MP,
   // Pick a target either via -march or by guessing the native arch.
   TargetMachine *TM = JIT::selectTarget(MP, ErrorStr);
   if (!TM || (ErrorStr && ErrorStr->length() > 0)) return 0;
+  TM->setCodeModel(CMM);
 
   // If the target supports JIT code generation, create the JIT.
   if (TargetJITInfo *TJ = TM->getJITInfo()) {
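
With CMM threaded through, embedders can now choose a code model when constructing the JIT. A hedged usage sketch against the JIT::create signature updated in the JIT.h hunk below (MyProvider is a hypothetical ModuleProvider wrapping an existing Module):

    #include "llvm/ExecutionEngine/JIT.h"
    #include "llvm/ModuleProvider.h"
    #include "llvm/Target/TargetMachine.h"   // CodeModel::Model
    #include <string>
    using namespace llvm;

    ExecutionEngine *makeJIT(ModuleProvider *MyProvider, std::string &Err) {
      // Identical to the old call, plus the explicit code model at the end.
      return JIT::create(MyProvider, &Err, /*JMM=*/0, CodeGenOpt::Default,
                         /*GVsWithCode=*/true, CodeModel::Small);
    }
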
@@ -556,10 +559,10 @@ void JIT::NotifyFunctionEmitted(
   }
 }
 
-void JIT::NotifyFreeingMachineCode(const Function &F, void *OldPtr) {
+void JIT::NotifyFreeingMachineCode(void *OldPtr) {
   MutexGuard locked(lock);
   for (unsigned I = 0, S = EventListeners.size(); I < S; ++I) {
-    EventListeners[I]->NotifyFreeingMachineCode(F, OldPtr);
+    EventListeners[I]->NotifyFreeingMachineCode(OldPtr);
   }
 }
 
@@ -599,7 +602,7 @@ void JIT::runJITOnFunctionUnlocked(Function *F, const MutexGuard &locked) {
   isAlreadyCodeGenerating = false;
 
   // If the function referred to another function that had not yet been
-  // read from bitcode, but we are jitting non-lazily, emit it now.
+  // read from bitcode, and we are jitting non-lazily, emit it now.
   while (!jitstate->getPendingFunctions(locked).empty()) {
     Function *PF = jitstate->getPendingFunctions(locked).back();
     jitstate->getPendingFunctions(locked).pop_back();
@@ -613,11 +616,6 @@ void JIT::runJITOnFunctionUnlocked(Function *F, const MutexGuard &locked) {
     // the stub with real address of the function.
     updateFunctionStub(PF);
   }
-  
-  // If the JIT is configured to emit info so that dlsym can be used to
-  // rewrite stubs to external globals, do so now.
-  if (areDlsymStubsEnabled() && isLazyCompilationDisabled())
-    updateDlsymStubTable();
 }
 
 /// getPointerToFunction - This method is used to get the address of the
@@ -659,9 +657,8 @@ void *JIT::getPointerToFunction(Function *F) {
       return Addr;
   }
 
-  if (F->isDeclaration()) {
-    bool AbortOnFailure =
-      !areDlsymStubsEnabled() && !F->hasExternalWeakLinkage();
+  if (F->isDeclaration() || F->hasAvailableExternallyLinkage()) {
+    bool AbortOnFailure = !F->hasExternalWeakLinkage();
     void *Addr = getPointerToNamedFunction(F->getName(), AbortOnFailure);
     addGlobalMapping(F, Addr);
     return Addr;
@@ -690,7 +687,7 @@ void *JIT::getOrEmitGlobalVariable(const GlobalVariable *GV) {
       return (void*)&__dso_handle;
 #endif
     Ptr = sys::DynamicLibrary::SearchForAddressOfSymbol(GV->getName());
-    if (Ptr == 0 && !areDlsymStubsEnabled()) {
+    if (Ptr == 0) {
       llvm_report_error("Could not resolve external global address: "
                         +GV->getName());
     }
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.h b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.h
index dfeffb5..f165bd6 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.h
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.h
@@ -16,6 +16,7 @@
 
 #include "llvm/ExecutionEngine/ExecutionEngine.h"
 #include "llvm/PassManager.h"
+#include "llvm/Support/ValueHandle.h"
 
 namespace llvm {
 
@@ -33,7 +34,7 @@ private:
 
   /// PendingFunctions - Functions which have not been code generated yet, but
   /// were called from a function being code generated.
-  std::vector<Function*> PendingFunctions;
+  std::vector<AssertingVH<Function> > PendingFunctions;
 
 public:
   explicit JITState(ModuleProvider *MP) : PM(MP), MP(MP) {}
@@ -43,7 +44,7 @@ public:
   }
   
   ModuleProvider *getMP() const { return MP; }
-  std::vector<Function*> &getPendingFunctions(const MutexGuard &L) {
+  std::vector<AssertingVH<Function> > &getPendingFunctions(const MutexGuard &L){
     return PendingFunctions;
   }
 };
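
AssertingVH behaves like a plain Function* but, in asserts-enabled builds, aborts if the referenced Function is deleted while the handle still exists, so a stale PendingFunctions entry fails loudly instead of dangling. A hedged sketch:

    #include "llvm/Function.h"
    #include "llvm/Support/ValueHandle.h"
    #include <vector>
    using namespace llvm;

    void queuePending(std::vector<AssertingVH<Function> > &Pending,
                      Function *F) {
      Pending.push_back(F);             // implicit conversion from Function*
      Function *Again = Pending.back(); // converts back when consumed
      (void)Again;
      // If F->eraseFromParent() ran while F was still queued, the handle
      // would assert rather than leave a dangling pointer behind.
    }
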
@@ -84,8 +85,10 @@ public:
                                  JITMemoryManager *JMM,
                                  CodeGenOpt::Level OptLevel =
                                    CodeGenOpt::Default,
-                                 bool GVsWithCode = true) {
-    return ExecutionEngine::createJIT(MP, Err, JMM, OptLevel, GVsWithCode);
+                                 bool GVsWithCode = true,
+                                 CodeModel::Model CMM = CodeModel::Default) {
+    return ExecutionEngine::createJIT(MP, Err, JMM, OptLevel, GVsWithCode,
+                                      CMM);
   }
 
   virtual void addModuleProvider(ModuleProvider *MP);
@@ -127,6 +130,11 @@ public:
   ///
   void *getPointerToFunction(Function *F);
 
+  void *getPointerToBasicBlock(BasicBlock *BB) {
+    assert(0 && "JIT does not support address-of-label yet!");
+    return 0;
+  }
+  
   /// getOrEmitGlobalVariable - Return the address of the specified global
   /// variable, possibly emitting it to memory if needed.  This is used by the
   /// Emitter.
@@ -169,7 +177,8 @@ public:
                                     std::string *ErrorStr,
                                     JITMemoryManager *JMM,
                                     CodeGenOpt::Level OptLevel,
-                                    bool GVsWithCode);
+                                    bool GVsWithCode,
+                                    CodeModel::Model CMM);
 
   // Run the JIT on F and return information about the generated code
   void runJITOnFunction(Function *F, MachineCodeInfo *MCI = 0);
@@ -182,14 +191,13 @@ public:
   void NotifyFunctionEmitted(
       const Function &F, void *Code, size_t Size,
       const JITEvent_EmittedFunctionDetails &Details);
-  void NotifyFreeingMachineCode(const Function &F, void *OldPtr);
+  void NotifyFreeingMachineCode(void *OldPtr);
 
 private:
   static JITCodeEmitter *createEmitter(JIT &J, JITMemoryManager *JMM,
                                        TargetMachine &tm);
   void runJITOnFunctionUnlocked(Function *F, const MutexGuard &locked);
   void updateFunctionStub(Function *F);
-  void updateDlsymStubTable();
 
 protected:
 
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDebugRegisterer.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDebugRegisterer.cpp
index fa64010..565509c 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDebugRegisterer.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDebugRegisterer.cpp
@@ -22,6 +22,7 @@
 #include "llvm/Target/TargetOptions.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/OwningPtr.h"
+#include "llvm/Support/Compiler.h"
 #include "llvm/Support/MutexGuard.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/System/Mutex.h"
@@ -34,7 +35,7 @@ namespace llvm {
 extern "C" {
 
   // Debuggers put a breakpoint in this function.
-  void DISABLE_INLINE __jit_debug_register_code() { }
+  DISABLE_INLINE void __jit_debug_register_code() { }
 
   // We put information about the JITed function in this global, which the
   // debugger reads.  Make sure to specify the version statically, because the
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDebugRegisterer.h b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDebugRegisterer.h
index dce506b..7e53d78 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDebugRegisterer.h
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDebugRegisterer.h
@@ -16,7 +16,7 @@
 #define LLVM_EXECUTION_ENGINE_JIT_DEBUGREGISTERER_H
 
 #include "llvm/ADT/DenseMap.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <string>
 
 // This must be kept in sync with gdb/gdb/jit.h .
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITEmitter.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITEmitter.cpp
index 590846b..bbac762 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITEmitter.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITEmitter.cpp
@@ -42,9 +42,11 @@
 #include "llvm/System/Disassembler.h"
 #include "llvm/System/Memory.h"
 #include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/Statistic.h"
+#include "llvm/ADT/ValueMap.h"
 #include <algorithm>
 #ifndef NDEBUG
 #include <iomanip>
@@ -61,46 +63,142 @@ static JIT *TheJIT = 0;
 // JIT lazy compilation code.
 //
 namespace {
+  class JITEmitter;
+  class JITResolverState;
+
+  template<typename ValueTy>
+  struct NoRAUWValueMapConfig : public ValueMapConfig<ValueTy> {
+    typedef JITResolverState *ExtraData;
+    static void onRAUW(JITResolverState *, Value *Old, Value *New) {
+      assert(false && "The JIT doesn't know how to handle a"
+             " RAUW on a value it has emitted.");
+    }
+  };
+
+  struct CallSiteValueMapConfig : public NoRAUWValueMapConfig<Function*> {
+    typedef JITResolverState *ExtraData;
+    static void onDelete(JITResolverState *JRS, Function *F);
+  };
+
   class JITResolverState {
   public:
-    typedef std::map<AssertingVH<Function>, void*> FunctionToStubMapTy;
-    typedef std::map<void*, Function*> StubToFunctionMapTy;
+    typedef ValueMap<Function*, void*, NoRAUWValueMapConfig<Function*> >
+      FunctionToLazyStubMapTy;
+    typedef std::map<void*, AssertingVH<Function> > CallSiteToFunctionMapTy;
+    typedef ValueMap<Function *, SmallPtrSet<void*, 1>,
+                     CallSiteValueMapConfig> FunctionToCallSitesMapTy;
     typedef std::map<AssertingVH<GlobalValue>, void*> GlobalToIndirectSymMapTy;
   private:
-    /// FunctionToStubMap - Keep track of the stub created for a particular
-    /// function so that we can reuse them if necessary.
-    FunctionToStubMapTy FunctionToStubMap;
+    /// FunctionToLazyStubMap - Keep track of the lazy stub created for a
+    /// particular function so that we can reuse them if necessary.
+    FunctionToLazyStubMapTy FunctionToLazyStubMap;
 
-    /// StubToFunctionMap - Keep track of the function that each stub
-    /// corresponds to.
-    StubToFunctionMapTy StubToFunctionMap;
+    /// CallSiteToFunctionMap - Keep track of the function that each lazy call
+    /// site corresponds to, and vice versa.
+    CallSiteToFunctionMapTy CallSiteToFunctionMap;
+    FunctionToCallSitesMapTy FunctionToCallSitesMap;
 
     /// GlobalToIndirectSymMap - Keep track of the indirect symbol created for a
     /// particular GlobalVariable so that we can reuse them if necessary.
     GlobalToIndirectSymMapTy GlobalToIndirectSymMap;
 
   public:
-    FunctionToStubMapTy& getFunctionToStubMap(const MutexGuard& locked) {
-      assert(locked.holds(TheJIT->lock));
-      return FunctionToStubMap;
-    }
+    JITResolverState() : FunctionToLazyStubMap(this),
+                         FunctionToCallSitesMap(this) {}
 
-    StubToFunctionMapTy& getStubToFunctionMap(const MutexGuard& locked) {
+    FunctionToLazyStubMapTy& getFunctionToLazyStubMap(
+      const MutexGuard& locked) {
       assert(locked.holds(TheJIT->lock));
-      return StubToFunctionMap;
+      return FunctionToLazyStubMap;
     }
 
     GlobalToIndirectSymMapTy& getGlobalToIndirectSymMap(const MutexGuard& locked) {
       assert(locked.holds(TheJIT->lock));
       return GlobalToIndirectSymMap;
     }
+
+    std::pair<void *, Function *> LookupFunctionFromCallSite(
+        const MutexGuard &locked, void *CallSite) const {
+      assert(locked.holds(TheJIT->lock));
+
+      // The address given to us for the stub may not be exactly right; it
+      // might be a little bit after the stub.  As such, use upper_bound to
+      // find it.
+      CallSiteToFunctionMapTy::const_iterator I =
+        CallSiteToFunctionMap.upper_bound(CallSite);
+      assert(I != CallSiteToFunctionMap.begin() &&
+             "This is not a known call site!");
+      --I;
+      return *I;
+    }
+
+    void AddCallSite(const MutexGuard &locked, void *CallSite, Function *F) {
+      assert(locked.holds(TheJIT->lock));
+
+      bool Inserted = CallSiteToFunctionMap.insert(
+          std::make_pair(CallSite, F)).second;
+      (void)Inserted;
+      assert(Inserted && "Pair was already in CallSiteToFunctionMap");
+      FunctionToCallSitesMap[F].insert(CallSite);
+    }
+
+    // Returns the Function of the stub if a stub was erased, or NULL if there
+    // was no stub.  This function uses the call-site->function map to find a
+    // relevant function, but asserts that only stubs and not other call sites
+    // will be passed in.
+    Function *EraseStub(const MutexGuard &locked, void *Stub) {
+      CallSiteToFunctionMapTy::iterator C2F_I =
+        CallSiteToFunctionMap.find(Stub);
+      if (C2F_I == CallSiteToFunctionMap.end()) {
+        // Not a stub.
+        return NULL;
+      }
+
+      Function *const F = C2F_I->second;
+#ifndef NDEBUG
+      void *RealStub = FunctionToLazyStubMap.lookup(F);
+      assert(RealStub == Stub &&
+             "Call-site that wasn't a stub pass in to EraseStub");
+#endif
+      FunctionToLazyStubMap.erase(F);
+      CallSiteToFunctionMap.erase(C2F_I);
+
+      // Remove the stub from the function->call-sites map, and remove the whole
+      // entry from the map if that was the last call site.
+      FunctionToCallSitesMapTy::iterator F2C_I = FunctionToCallSitesMap.find(F);
+      assert(F2C_I != FunctionToCallSitesMap.end() &&
+             "FunctionToCallSitesMap broken");
+      bool Erased = F2C_I->second.erase(Stub);
+      (void)Erased;
+      assert(Erased && "FunctionToCallSitesMap broken");
+      if (F2C_I->second.empty())
+        FunctionToCallSitesMap.erase(F2C_I);
+
+      return F;
+    }
+
+    void EraseAllCallSites(const MutexGuard &locked, Function *F) {
+      assert(locked.holds(TheJIT->lock));
+      EraseAllCallSitesPrelocked(F);
+    }
+    void EraseAllCallSitesPrelocked(Function *F) {
+      FunctionToCallSitesMapTy::iterator F2C = FunctionToCallSitesMap.find(F);
+      if (F2C == FunctionToCallSitesMap.end())
+        return;
+      for (SmallPtrSet<void*, 1>::const_iterator I = F2C->second.begin(),
+             E = F2C->second.end(); I != E; ++I) {
+        bool Erased = CallSiteToFunctionMap.erase(*I);
+        (void)Erased;
+        assert(Erased && "Missing call site->function mapping");
+      }
+      FunctionToCallSitesMap.erase(F2C);
+    }
   };
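
The upper_bound step in LookupFunctionFromCallSite is worth a second look: the resolver may be handed an address a few bytes past the start of the stub, so it wants the greatest recorded call site at or below the query. A toy model of the same trick with std::map:

    #include <cassert>
    #include <map>

    typedef std::map<void*, int> SiteMap;   // int stands in for Function*

    int lookupSite(const SiteMap &M, void *Addr) {
      SiteMap::const_iterator I = M.upper_bound(Addr); // first key > Addr
      assert(I != M.begin() && "address precedes every known call site");
      --I;              // greatest key <= Addr: the stub containing Addr
      return I->second;
    }
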
 
   /// JITResolver - Keep track of, and resolve, call sites for functions that
   /// have not yet been compiled.
   class JITResolver {
-    typedef JITResolverState::FunctionToStubMapTy FunctionToStubMapTy;
-    typedef JITResolverState::StubToFunctionMapTy StubToFunctionMapTy;
+    typedef JITResolverState::FunctionToLazyStubMapTy FunctionToLazyStubMapTy;
+    typedef JITResolverState::CallSiteToFunctionMapTy CallSiteToFunctionMapTy;
     typedef JITResolverState::GlobalToIndirectSymMapTy GlobalToIndirectSymMapTy;
 
     /// LazyResolverFn - The target lazy resolver function that we actually
@@ -109,36 +207,40 @@ namespace {
 
     JITResolverState state;
 
-    /// ExternalFnToStubMap - This is the equivalent of FunctionToStubMap for
-    /// external functions.
+    /// ExternalFnToStubMap - This is the equivalent of FunctionToLazyStubMap
+    /// for external functions.  TODO: Of course, external functions don't need
+    /// a lazy stub.  It's actually here to make it more likely that far calls
+    /// succeed, but no single stub can guarantee that.  I'll remove this in a
+    /// subsequent checkin when I actually fix far calls.
     std::map<void*, void*> ExternalFnToStubMap;
 
     /// revGOTMap - map addresses to indexes in the GOT
     std::map<void*, unsigned> revGOTMap;
     unsigned nextGOTIndex;
 
+    JITEmitter &JE;
+
     static JITResolver *TheJITResolver;
   public:
-    explicit JITResolver(JIT &jit) : nextGOTIndex(0) {
+    explicit JITResolver(JIT &jit, JITEmitter &je) : nextGOTIndex(0), JE(je) {
       TheJIT = &jit;
 
       LazyResolverFn = jit.getJITInfo().getLazyResolverFunction(JITCompilerFn);
       assert(TheJITResolver == 0 && "Multiple JIT resolvers?");
       TheJITResolver = this;
     }
-    
+
     ~JITResolver() {
       TheJITResolver = 0;
     }
 
-    /// getFunctionStubIfAvailable - This returns a pointer to a function stub
-    /// if it has already been created.
-    void *getFunctionStubIfAvailable(Function *F);
+    /// getLazyFunctionStubIfAvailable - This returns a pointer to a function's
+    /// lazy-compilation stub if it has already been created.
+    void *getLazyFunctionStubIfAvailable(Function *F);
 
-    /// getFunctionStub - This returns a pointer to a function stub, creating
-    /// one on demand as needed.  If empty is true, create a function stub
-    /// pointing at address 0, to be filled in later.
-    void *getFunctionStub(Function *F);
+    /// getLazyFunctionStub - This returns a pointer to a function's
+    /// lazy-compilation stub, creating one on demand as needed.
+    void *getLazyFunctionStub(Function *F);
 
     /// getExternalFunctionStub - Return a stub for the function at the
     /// specified address, created lazily on demand.
@@ -148,19 +250,9 @@ namespace {
     /// specified GV address.
     void *getGlobalValueIndirectSym(GlobalValue *V, void *GVAddress);
 
-    /// AddCallbackAtLocation - If the target is capable of rewriting an
-    /// instruction without the use of a stub, record the location of the use so
-    /// we know which function is being used at the location.
-    void *AddCallbackAtLocation(Function *F, void *Location) {
-      MutexGuard locked(TheJIT->lock);
-      /// Get the target-specific JIT resolver function.
-      state.getStubToFunctionMap(locked)[Location] = F;
-      return (void*)(intptr_t)LazyResolverFn;
-    }
-    
     void getRelocatableGVs(SmallVectorImpl<GlobalValue*> &GVs,
                            SmallVectorImpl<void*> &Ptrs);
-    
+
     GlobalValue *invalidateStub(void *Stub);
 
     /// getGOTIndexForAddress - Return a new or existing index in the GOT for
@@ -173,50 +265,269 @@ namespace {
     /// been compiled, this function compiles it first.
     static void *JITCompilerFn(void *Stub);
   };
+
+  /// JITEmitter - The JIT implementation of the MachineCodeEmitter, which is
+  /// used to output functions to memory for execution.
+  class JITEmitter : public JITCodeEmitter {
+    JITMemoryManager *MemMgr;
+
+    // When reattempting to JIT a function after running out of space, we store
+    // the estimated size of the function we're trying to JIT here, so we can
+    // ask the memory manager for at least this much space.  When we
+    // successfully emit the function, we reset this back to zero.
+    uintptr_t SizeEstimate;
+
+    /// Relocations - These are the relocations that the function needs, as
+    /// emitted.
+    std::vector<MachineRelocation> Relocations;
+
+    /// MBBLocations - This vector is a mapping from MBB ID's to their address.
+    /// It is filled in by the StartMachineBasicBlock callback and queried by
+    /// the getMachineBasicBlockAddress callback.
+    std::vector<uintptr_t> MBBLocations;
+
+    /// ConstantPool - The constant pool for the current function.
+    ///
+    MachineConstantPool *ConstantPool;
+
+    /// ConstantPoolBase - A pointer to the first entry in the constant pool.
+    ///
+    void *ConstantPoolBase;
+
+    /// ConstPoolAddresses - Addresses of individual constant pool entries.
+    ///
+    SmallVector<uintptr_t, 8> ConstPoolAddresses;
+
+    /// JumpTable - The jump tables for the current function.
+    ///
+    MachineJumpTableInfo *JumpTable;
+
+    /// JumpTableBase - A pointer to the first entry in the jump table.
+    ///
+    void *JumpTableBase;
+
+    /// Resolver - This contains info about the currently resolved functions.
+    JITResolver Resolver;
+
+    /// DE - The dwarf emitter for the jit.
+    OwningPtr<JITDwarfEmitter> DE;
+
+    /// DR - The debug registerer for the jit.
+    OwningPtr<JITDebugRegisterer> DR;
+
+    /// LabelLocations - This vector is a mapping from Label ID's to their
+    /// address.
+    std::vector<uintptr_t> LabelLocations;
+
+    /// MMI - Machine module info for exception information
+    MachineModuleInfo* MMI;
+
+    // GVSet - a set to keep track of which globals have been seen
+    SmallPtrSet<const GlobalVariable*, 8> GVSet;
+
+    // CurFn - The llvm function being emitted.  Only valid during
+    // finishFunction().
+    const Function *CurFn;
+
+    /// Information about emitted code, which is passed to the
+    /// JITEventListeners.  This is reset in startFunction and used in
+    /// finishFunction.
+    JITEvent_EmittedFunctionDetails EmissionDetails;
+
+    struct EmittedCode {
+      void *FunctionBody;  // Beginning of the function's allocation.
+      void *Code;  // The address the function's code actually starts at.
+      void *ExceptionTable;
+      EmittedCode() : FunctionBody(0), Code(0), ExceptionTable(0) {}
+    };
+    struct EmittedFunctionConfig : public ValueMapConfig<const Function*> {
+      typedef JITEmitter *ExtraData;
+      static void onDelete(JITEmitter *, const Function*);
+      static void onRAUW(JITEmitter *, const Function*, const Function*);
+    };
+    ValueMap<const Function *, EmittedCode,
+             EmittedFunctionConfig> EmittedFunctions;
+
+    // CurFnStubUses - For a given Function, a vector of stubs that it
+    // references.  This facilitates the JIT detecting that a stub is no
+    // longer used, so that it may be deallocated.
+    DenseMap<AssertingVH<const Function>, SmallVector<void*, 1> > CurFnStubUses;
+
+    // StubFnRefs - For a given pointer to a stub, a set of Functions which
+    // reference the stub.  When the count of a stub's references drops to zero,
+    // the stub is unused.
+    DenseMap<void *, SmallPtrSet<const Function*, 1> > StubFnRefs;
+
+    DebugLocTuple PrevDLT;
+
+  public:
+    JITEmitter(JIT &jit, JITMemoryManager *JMM, TargetMachine &TM)
+      : SizeEstimate(0), Resolver(jit, *this), MMI(0), CurFn(0),
+          EmittedFunctions(this) {
+      MemMgr = JMM ? JMM : JITMemoryManager::CreateDefaultMemManager();
+      if (jit.getJITInfo().needsGOT()) {
+        MemMgr->AllocateGOT();
+        DEBUG(errs() << "JIT is managing a GOT\n");
+      }
+
+      if (DwarfExceptionHandling || JITEmitDebugInfo) {
+        DE.reset(new JITDwarfEmitter(jit));
+      }
+      if (JITEmitDebugInfo) {
+        DR.reset(new JITDebugRegisterer(TM));
+      }
+    }
+    ~JITEmitter() {
+      delete MemMgr;
+    }
+
+    /// classof - Methods for support type inquiry through isa, cast, and
+    /// dyn_cast:
+    ///
+    static inline bool classof(const JITEmitter*) { return true; }
+    static inline bool classof(const MachineCodeEmitter*) { return true; }
+
+    JITResolver &getJITResolver() { return Resolver; }
+
+    virtual void startFunction(MachineFunction &F);
+    virtual bool finishFunction(MachineFunction &F);
+
+    void emitConstantPool(MachineConstantPool *MCP);
+    void initJumpTableInfo(MachineJumpTableInfo *MJTI);
+    void emitJumpTableInfo(MachineJumpTableInfo *MJTI);
+
+    virtual void startGVStub(BufferState &BS, const GlobalValue* GV,
+                             unsigned StubSize, unsigned Alignment = 1);
+    virtual void startGVStub(BufferState &BS, void *Buffer,
+                             unsigned StubSize);
+    virtual void* finishGVStub(BufferState &BS);
+
+    /// allocateSpace - Reserves space in the current block if any, or
+    /// allocate a new one of the given size.
+    virtual void *allocateSpace(uintptr_t Size, unsigned Alignment);
+
+    /// allocateGlobal - Allocate memory for a global.  Unlike allocateSpace,
+    /// this method does not allocate memory in the current output buffer,
+    /// because a global may live longer than the current function.
+    virtual void *allocateGlobal(uintptr_t Size, unsigned Alignment);
+
+    virtual void addRelocation(const MachineRelocation &MR) {
+      Relocations.push_back(MR);
+    }
+
+    virtual void StartMachineBasicBlock(MachineBasicBlock *MBB) {
+      if (MBBLocations.size() <= (unsigned)MBB->getNumber())
+        MBBLocations.resize((MBB->getNumber()+1)*2);
+      MBBLocations[MBB->getNumber()] = getCurrentPCValue();
+      DEBUG(errs() << "JIT: Emitting BB" << MBB->getNumber() << " at ["
+                   << (void*) getCurrentPCValue() << "]\n");
+    }
+
+    virtual uintptr_t getConstantPoolEntryAddress(unsigned Entry) const;
+    virtual uintptr_t getJumpTableEntryAddress(unsigned Entry) const;
+
+    virtual uintptr_t getMachineBasicBlockAddress(MachineBasicBlock *MBB) const {
+      assert(MBBLocations.size() > (unsigned)MBB->getNumber() &&
+             MBBLocations[MBB->getNumber()] && "MBB not emitted!");
+      return MBBLocations[MBB->getNumber()];
+    }
+
+    /// retryWithMoreMemory - Log a retry and deallocate all memory for the
+    /// given function.  Increase the minimum allocation size so that we get
+    /// more memory next time.
+    void retryWithMoreMemory(MachineFunction &F);
+
+    /// deallocateMemForFunction - Deallocate all memory for the specified
+    /// function body.
+    void deallocateMemForFunction(const Function *F);
+
+    /// AddStubToCurrentFunction - Mark the current function being JIT'd as
+    /// using the stub at the specified address. Allows
+    /// deallocateMemForFunction to also remove stubs no longer referenced.
+    void AddStubToCurrentFunction(void *Stub);
+
+    virtual void processDebugLoc(DebugLoc DL, bool BeforePrintingInsn);
+
+    virtual void emitLabel(uint64_t LabelID) {
+      if (LabelLocations.size() <= LabelID)
+        LabelLocations.resize((LabelID+1)*2);
+      LabelLocations[LabelID] = getCurrentPCValue();
+    }
+
+    virtual uintptr_t getLabelAddress(uint64_t LabelID) const {
+      assert(LabelLocations.size() > (unsigned)LabelID &&
+             LabelLocations[LabelID] && "Label not emitted!");
+      return LabelLocations[LabelID];
+    }
+
+    virtual void setModuleInfo(MachineModuleInfo* Info) {
+      MMI = Info;
+      if (DE.get()) DE->setModuleInfo(Info);
+    }
+
+    void setMemoryExecutable() {
+      MemMgr->setMemoryExecutable();
+    }
+
+    JITMemoryManager *getMemMgr() const { return MemMgr; }
+
+  private:
+    void *getPointerToGlobal(GlobalValue *GV, void *Reference,
+                             bool MayNeedFarStub);
+    void *getPointerToGVIndirectSym(GlobalValue *V, void *Reference);
+    unsigned addSizeOfGlobal(const GlobalVariable *GV, unsigned Size);
+    unsigned addSizeOfGlobalsInConstantVal(const Constant *C, unsigned Size);
+    unsigned addSizeOfGlobalsInInitializer(const Constant *Init, unsigned Size);
+    unsigned GetSizeOfGlobalsInBytes(MachineFunction &MF);
+  };
 }
 
 JITResolver *JITResolver::TheJITResolver = 0;
 
-/// getFunctionStubIfAvailable - This returns a pointer to a function stub
+void CallSiteValueMapConfig::onDelete(JITResolverState *JRS, Function *F) {
+  JRS->EraseAllCallSitesPrelocked(F);
+}
+
+/// getLazyFunctionStubIfAvailable - This returns a pointer to a function stub
 /// if it has already been created.
-void *JITResolver::getFunctionStubIfAvailable(Function *F) {
+void *JITResolver::getLazyFunctionStubIfAvailable(Function *F) {
   MutexGuard locked(TheJIT->lock);
 
   // If we already have a stub for this function, recycle it.
-  void *&Stub = state.getFunctionToStubMap(locked)[F];
-  return Stub;
+  return state.getFunctionToLazyStubMap(locked).lookup(F);
 }
 
 /// getLazyFunctionStub - This returns a pointer to a function's
 /// lazy-compilation stub, creating one on demand as needed.
-void *JITResolver::getFunctionStub(Function *F) {
+void *JITResolver::getLazyFunctionStub(Function *F) {
   MutexGuard locked(TheJIT->lock);
 
-  // If we already have a stub for this function, recycle it.
-  void *&Stub = state.getFunctionToStubMap(locked)[F];
+  // If we already have a lazy stub for this function, recycle it.
+  void *&Stub = state.getFunctionToLazyStubMap(locked)[F];
   if (Stub) return Stub;
 
-  // Call the lazy resolver function unless we are JIT'ing non-lazily, in which
-  // case we must resolve the symbol now.
-  void *Actual =  TheJIT->isLazyCompilationDisabled() 
-    ? (void *)0 : (void *)(intptr_t)LazyResolverFn;
-   
+  // Call the lazy resolver function if we are JIT'ing lazily.  Otherwise we
+  // must resolve the symbol now.
+  void *Actual = TheJIT->isCompilingLazily()
+    ? (void *)(intptr_t)LazyResolverFn : (void *)0;
+
   // If this is an external declaration, attempt to resolve the address now
   // to place in the stub.
   if (F->isDeclaration() && !F->hasNotBeenReadFromBitcode()) {
     Actual = TheJIT->getPointerToFunction(F);
 
     // If we resolved the symbol to a null address (eg. a weak external)
-    // don't emit a stub. Return a null pointer to the application.  If dlsym
-    // stubs are enabled, not being able to resolve the address is not
-    // meaningful.
-    if (!Actual && !TheJIT->areDlsymStubsEnabled()) return 0;
+    // don't emit a stub. Return a null pointer to the application.
+    if (!Actual) return 0;
   }
 
+  MachineCodeEmitter::BufferState BS;
+  TargetJITInfo::StubLayout SL = TheJIT->getJITInfo().getStubLayout();
+  JE.startGVStub(BS, F, SL.Size, SL.Alignment);
   // Codegen a new stub, calling the lazy resolver or the actual address of the
   // external function, if it was resolved.
-  Stub = TheJIT->getJITInfo().emitFunctionStub(F, Actual,
-                                               *TheJIT->getCodeEmitter());
+  Stub = TheJIT->getJITInfo().emitFunctionStub(F, Actual, JE);
+  JE.finishGVStub(BS);
 
   if (Actual != (void*)(intptr_t)LazyResolverFn) {
     // If we are getting the stub for an external function, we really want the
@@ -225,20 +536,20 @@ void *JITResolver::getFunctionStub(Function *F) {
     TheJIT->updateGlobalMapping(F, Stub);
   }
 
-  DEBUG(errs() << "JIT: Stub emitted at [" << Stub << "] for function '"
+  DEBUG(errs() << "JIT: Lazy stub emitted at [" << Stub << "] for function '"
         << F->getName() << "'\n");
 
   // Finally, keep track of the stub-to-Function mapping so that the
   // JITCompilerFn knows which function to compile!
-  state.getStubToFunctionMap(locked)[Stub] = F;
-  
+  state.AddCallSite(locked, Stub, F);
+
   // If we are JIT'ing non-lazily but need to call a function that does not
   // exist yet, add it to the JIT's work list so that we can fill in the stub
   // address later.
-  if (!Actual && TheJIT->isLazyCompilationDisabled())
+  if (!Actual && !TheJIT->isCompilingLazily())
     if (!F->isDeclaration() || F->hasNotBeenReadFromBitcode())
       TheJIT->addPendingFunction(F);
-  
+
   return Stub;
 }
 
@@ -253,9 +564,9 @@ void *JITResolver::getGlobalValueIndirectSym(GlobalValue *GV, void *GVAddress) {
 
   // Otherwise, codegen a new indirect symbol.
   IndirectSym = TheJIT->getJITInfo().emitGlobalValueIndirectSym(GV, GVAddress,
-                                                     *TheJIT->getCodeEmitter());
+                                                                JE);
 
-  DEBUG(errs() << "JIT: Indirect symbol emitted at [" << IndirectSym 
+  DEBUG(errs() << "JIT: Indirect symbol emitted at [" << IndirectSym
         << "] for GV '" << GV->getName() << "'\n");
 
   return IndirectSym;
@@ -268,8 +579,11 @@ void *JITResolver::getExternalFunctionStub(void *FnAddr) {
   void *&Stub = ExternalFnToStubMap[FnAddr];
   if (Stub) return Stub;
 
-  Stub = TheJIT->getJITInfo().emitFunctionStub(0, FnAddr,
-                                               *TheJIT->getCodeEmitter());
+  MachineCodeEmitter::BufferState BS;
+  TargetJITInfo::StubLayout SL = TheJIT->getJITInfo().getStubLayout();
+  JE.startGVStub(BS, 0, SL.Size, SL.Alignment);
+  Stub = TheJIT->getJITInfo().emitFunctionStub(0, FnAddr, JE);
+  JE.finishGVStub(BS);
 
   DEBUG(errs() << "JIT: Stub emitted at [" << Stub
                << "] for external function at '" << FnAddr << "'\n");
@@ -290,11 +604,12 @@ unsigned JITResolver::getGOTIndexForAddr(void* addr) {
 void JITResolver::getRelocatableGVs(SmallVectorImpl<GlobalValue*> &GVs,
                                     SmallVectorImpl<void*> &Ptrs) {
   MutexGuard locked(TheJIT->lock);
-  
-  FunctionToStubMapTy &FM = state.getFunctionToStubMap(locked);
+
+  const FunctionToLazyStubMapTy &FM = state.getFunctionToLazyStubMap(locked);
   GlobalToIndirectSymMapTy &GM = state.getGlobalToIndirectSymMap(locked);
-  
-  for (FunctionToStubMapTy::iterator i = FM.begin(), e = FM.end(); i != e; ++i){
+
+  for (FunctionToLazyStubMapTy::const_iterator i = FM.begin(), e = FM.end();
+       i != e; ++i){
     Function *F = i->first;
     if (F->isDeclaration() && F->hasExternalLinkage()) {
       GVs.push_back(i->first);
@@ -310,20 +625,15 @@ void JITResolver::getRelocatableGVs(SmallVectorImpl<GlobalValue*> &GVs,
 
 GlobalValue *JITResolver::invalidateStub(void *Stub) {
   MutexGuard locked(TheJIT->lock);
-  
-  FunctionToStubMapTy &FM = state.getFunctionToStubMap(locked);
-  StubToFunctionMapTy &SM = state.getStubToFunctionMap(locked);
+
   GlobalToIndirectSymMapTy &GM = state.getGlobalToIndirectSymMap(locked);
-  
+
   // Look up the cheap way first, to see if it's a function stub we are
   // invalidating.  If so, remove it from both the forward and reverse maps.
-  if (SM.find(Stub) != SM.end()) {
-    Function *F = SM[Stub];
-    SM.erase(Stub);
-    FM.erase(F);
+  if (Function *F = state.EraseStub(locked, Stub)) {
     return F;
   }
-  
+
   // Otherwise, it might be an indirect symbol stub.  Find it and remove it.
   for (GlobalToIndirectSymMapTy::iterator i = GM.begin(), e = GM.end();
        i != e; ++i) {
@@ -333,7 +643,7 @@ GlobalValue *JITResolver::invalidateStub(void *Stub) {
     GM.erase(i);
     return GV;
   }
-  
+
   // Lastly, check to see if it's in the ExternalFnToStubMap.
   for (std::map<void *, void *>::iterator i = ExternalFnToStubMap.begin(),
        e = ExternalFnToStubMap.end(); i != e; ++i) {
@@ -342,7 +652,7 @@ GlobalValue *JITResolver::invalidateStub(void *Stub) {
     ExternalFnToStubMap.erase(i);
     break;
   }
-  
+
   return 0;
 }
 
@@ -351,7 +661,7 @@ GlobalValue *JITResolver::invalidateStub(void *Stub) {
 /// it if necessary, then returns the resultant function pointer.
 void *JITResolver::JITCompilerFn(void *Stub) {
   JITResolver &JR = *TheJITResolver;
-  
+
   Function* F = 0;
   void* ActualPtr = 0;
 
@@ -361,34 +671,25 @@ void *JITResolver::JITCompilerFn(void *Stub) {
     // JIT lock to be unlocked.
     MutexGuard locked(TheJIT->lock);
 
-    // The address given to us for the stub may not be exactly right, it might be
-    // a little bit after the stub.  As such, use upper_bound to find it.
-    StubToFunctionMapTy::iterator I =
-      JR.state.getStubToFunctionMap(locked).upper_bound(Stub);
-    assert(I != JR.state.getStubToFunctionMap(locked).begin() &&
-           "This is not a known stub!");
-    F = (--I)->second;
-    ActualPtr = I->first;
+    // The address given to us for the stub may not be exactly right; it might
+    // be a little bit after the stub.  As such, use upper_bound to find it.
+    std::pair<void*, Function*> I =
+      JR.state.LookupFunctionFromCallSite(locked, Stub);
+    F = I.second;
+    ActualPtr = I.first;
   }
 
   // If we have already code generated the function, just return the address.
   void *Result = TheJIT->getPointerToGlobalIfAvailable(F);
-  
+
   if (!Result) {
     // Otherwise we don't have it, do lazy compilation now.
-    
+
     // If lazy compilation is disabled, emit a useful error message and abort.
-    if (TheJIT->isLazyCompilationDisabled()) {
+    if (!TheJIT->isCompilingLazily()) {
       llvm_report_error("LLVM JIT requested to do lazy compilation of function '"
                         + F->getName() + "' when lazy compiles are disabled!");
     }
-  
-    // We might like to remove the stub from the StubToFunction map.
-    // We can't do that! Multiple threads could be stuck, waiting to acquire the
-    // lock above. As soon as the 1st function finishes compiling the function,
-    // the next one will be released, and needs to be able to find the function
-    // it needs to call.
-    //JR.state.getStubToFunctionMap(locked).erase(I);
 
     DEBUG(errs() << "JIT: Lazily resolving function '" << F->getName()
           << "' In stub ptr = " << Stub << " actual ptr = "
@@ -396,12 +697,15 @@ void *JITResolver::JITCompilerFn(void *Stub) {
 
     Result = TheJIT->getPointerToFunction(F);
   }
-  
-  // Reacquire the lock to erase the stub in the map.
+
+  // Reacquire the lock to update the GOT map.
   MutexGuard locked(TheJIT->lock);
 
-  // We don't need to reuse this stub in the future, as F is now compiled.
-  JR.state.getFunctionToStubMap(locked).erase(F);
+  // We might like to remove the call site from the CallSiteToFunction map, but
+  // we can't do that! Multiple threads could be stuck, waiting to acquire the
+  // lock above. As soon as the 1st function finishes compiling the function,
+  // the next one will be released, and needs to be able to find the function it
+  // needs to call.
 
   // FIXME: We could rewrite all references to this stub if we knew them.
 
@@ -419,222 +723,8 @@ void *JITResolver::JITCompilerFn(void *Stub) {
 //===----------------------------------------------------------------------===//
 // JITEmitter code.
 //
-namespace {
-  /// JITEmitter - The JIT implementation of the MachineCodeEmitter, which is
-  /// used to output functions to memory for execution.
-  class JITEmitter : public JITCodeEmitter {
-    JITMemoryManager *MemMgr;
-
-    // When outputting a function stub in the context of some other function, we
-    // save BufferBegin/BufferEnd/CurBufferPtr here.
-    uint8_t *SavedBufferBegin, *SavedBufferEnd, *SavedCurBufferPtr;
-
-    // When reattempting to JIT a function after running out of space, we store
-    // the estimated size of the function we're trying to JIT here, so we can
-    // ask the memory manager for at least this much space.  When we
-    // successfully emit the function, we reset this back to zero.
-    uintptr_t SizeEstimate;
-
-    /// Relocations - These are the relocations that the function needs, as
-    /// emitted.
-    std::vector<MachineRelocation> Relocations;
-    
-    /// MBBLocations - This vector is a mapping from MBB ID's to their address.
-    /// It is filled in by the StartMachineBasicBlock callback and queried by
-    /// the getMachineBasicBlockAddress callback.
-    std::vector<uintptr_t> MBBLocations;
-
-    /// ConstantPool - The constant pool for the current function.
-    ///
-    MachineConstantPool *ConstantPool;
-
-    /// ConstantPoolBase - A pointer to the first entry in the constant pool.
-    ///
-    void *ConstantPoolBase;
-
-    /// ConstPoolAddresses - Addresses of individual constant pool entries.
-    ///
-    SmallVector<uintptr_t, 8> ConstPoolAddresses;
-
-    /// JumpTable - The jump tables for the current function.
-    ///
-    MachineJumpTableInfo *JumpTable;
-    
-    /// JumpTableBase - A pointer to the first entry in the jump table.
-    ///
-    void *JumpTableBase;
-
-    /// Resolver - This contains info about the currently resolved functions.
-    JITResolver Resolver;
-
-    /// DE - The dwarf emitter for the jit.
-    OwningPtr<JITDwarfEmitter> DE;
-
-    /// DR - The debug registerer for the jit.
-    OwningPtr<JITDebugRegisterer> DR;
-
-    /// LabelLocations - This vector is a mapping from Label ID's to their 
-    /// address.
-    std::vector<uintptr_t> LabelLocations;
-
-    /// MMI - Machine module info for exception informations
-    MachineModuleInfo* MMI;
-
-    // GVSet - a set to keep track of which globals have been seen
-    SmallPtrSet<const GlobalVariable*, 8> GVSet;
-
-    // CurFn - The llvm function being emitted.  Only valid during 
-    // finishFunction().
-    const Function *CurFn;
-
-    /// Information about emitted code, which is passed to the
-    /// JITEventListeners.  This is reset in startFunction and used in
-    /// finishFunction.
-    JITEvent_EmittedFunctionDetails EmissionDetails;
-
-    // CurFnStubUses - For a given Function, a vector of stubs that it
-    // references.  This facilitates the JIT detecting that a stub is no
-    // longer used, so that it may be deallocated.
-    DenseMap<const Function *, SmallVector<void*, 1> > CurFnStubUses;
-    
-    // StubFnRefs - For a given pointer to a stub, a set of Functions which
-    // reference the stub.  When the count of a stub's references drops to zero,
-    // the stub is unused.
-    DenseMap<void *, SmallPtrSet<const Function*, 1> > StubFnRefs;
-    
-    // ExtFnStubs - A map of external function names to stubs which have entries
-    // in the JITResolver's ExternalFnToStubMap.
-    StringMap<void *> ExtFnStubs;
-
-    DebugLocTuple PrevDLT;
-
-  public:
-    JITEmitter(JIT &jit, JITMemoryManager *JMM, TargetMachine &TM)
-        : SizeEstimate(0), Resolver(jit), MMI(0), CurFn(0) {
-      MemMgr = JMM ? JMM : JITMemoryManager::CreateDefaultMemManager();
-      if (jit.getJITInfo().needsGOT()) {
-        MemMgr->AllocateGOT();
-        DEBUG(errs() << "JIT is managing a GOT\n");
-      }
-
-      if (DwarfExceptionHandling || JITEmitDebugInfo) {
-        DE.reset(new JITDwarfEmitter(jit));
-      }
-      if (JITEmitDebugInfo) {
-        DR.reset(new JITDebugRegisterer(TM));
-      }
-    }
-    ~JITEmitter() { 
-      delete MemMgr;
-    }
-
-    /// classof - Methods for support type inquiry through isa, cast, and
-    /// dyn_cast:
-    ///
-    static inline bool classof(const JITEmitter*) { return true; }
-    static inline bool classof(const MachineCodeEmitter*) { return true; }
-    
-    JITResolver &getJITResolver() { return Resolver; }
-
-    virtual void startFunction(MachineFunction &F);
-    virtual bool finishFunction(MachineFunction &F);
-    
-    void emitConstantPool(MachineConstantPool *MCP);
-    void initJumpTableInfo(MachineJumpTableInfo *MJTI);
-    void emitJumpTableInfo(MachineJumpTableInfo *MJTI);
-    
-    virtual void startGVStub(const GlobalValue* GV, unsigned StubSize,
-                                   unsigned Alignment = 1);
-    virtual void startGVStub(const GlobalValue* GV, void *Buffer,
-                             unsigned StubSize);
-    virtual void* finishGVStub(const GlobalValue *GV);
-
-    /// allocateSpace - Reserves space in the current block if any, or
-    /// allocates a new one of the given size.
-    virtual void *allocateSpace(uintptr_t Size, unsigned Alignment);
-
-    /// allocateGlobal - Allocate memory for a global.  Unlike allocateSpace,
-    /// this method does not allocate memory in the current output buffer,
-    /// because a global may live longer than the current function.
-    virtual void *allocateGlobal(uintptr_t Size, unsigned Alignment);
-
-    virtual void addRelocation(const MachineRelocation &MR) {
-      Relocations.push_back(MR);
-    }
-    
-    virtual void StartMachineBasicBlock(MachineBasicBlock *MBB) {
-      if (MBBLocations.size() <= (unsigned)MBB->getNumber())
-        MBBLocations.resize((MBB->getNumber()+1)*2);
-      MBBLocations[MBB->getNumber()] = getCurrentPCValue();
-      DEBUG(errs() << "JIT: Emitting BB" << MBB->getNumber() << " at ["
-                   << (void*) getCurrentPCValue() << "]\n");
-    }
-
-    virtual uintptr_t getConstantPoolEntryAddress(unsigned Entry) const;
-    virtual uintptr_t getJumpTableEntryAddress(unsigned Entry) const;
-
-    virtual uintptr_t getMachineBasicBlockAddress(MachineBasicBlock *MBB) const {
-      assert(MBBLocations.size() > (unsigned)MBB->getNumber() && 
-             MBBLocations[MBB->getNumber()] && "MBB not emitted!");
-      return MBBLocations[MBB->getNumber()];
-    }
-
-    /// retryWithMoreMemory - Log a retry and deallocate all memory for the
-    /// given function.  Increase the minimum allocation size so that we get
-    /// more memory next time.
-    void retryWithMoreMemory(MachineFunction &F);
-
-    /// deallocateMemForFunction - Deallocate all memory for the specified
-    /// function body.
-    void deallocateMemForFunction(const Function *F);
-
-    /// AddStubToCurrentFunction - Mark the current function being JIT'd as
-    /// using the stub at the specified address. Allows
-    /// deallocateMemForFunction to also remove stubs no longer referenced.
-    void AddStubToCurrentFunction(void *Stub);
-    
-    /// getExternalFnStubs - Accessor for the JIT to find stubs emitted for
-    /// MachineRelocations that reference external functions by name.
-    const StringMap<void*> &getExternalFnStubs() const { return ExtFnStubs; }
-    
-    virtual void processDebugLoc(DebugLoc DL);
-
-    virtual void emitLabel(uint64_t LabelID) {
-      if (LabelLocations.size() <= LabelID)
-        LabelLocations.resize((LabelID+1)*2);
-      LabelLocations[LabelID] = getCurrentPCValue();
-    }
-
-    virtual uintptr_t getLabelAddress(uint64_t LabelID) const {
-      assert(LabelLocations.size() > (unsigned)LabelID && 
-             LabelLocations[LabelID] && "Label not emitted!");
-      return LabelLocations[LabelID];
-    }
- 
-    virtual void setModuleInfo(MachineModuleInfo* Info) {
-      MMI = Info;
-      if (DE.get()) DE->setModuleInfo(Info);
-    }
-
-    void setMemoryExecutable() {
-      MemMgr->setMemoryExecutable();
-    }
-    
-    JITMemoryManager *getMemMgr() const { return MemMgr; }
-
-  private:
-    void *getPointerToGlobal(GlobalValue *GV, void *Reference, bool NoNeedStub);
-    void *getPointerToGVIndirectSym(GlobalValue *V, void *Reference,
-                                    bool NoNeedStub);
-    unsigned addSizeOfGlobal(const GlobalVariable *GV, unsigned Size);
-    unsigned addSizeOfGlobalsInConstantVal(const Constant *C, unsigned Size);
-    unsigned addSizeOfGlobalsInInitializer(const Constant *Init, unsigned Size);
-    unsigned GetSizeOfGlobalsInBytes(MachineFunction &MF);
-  };
-}
-
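MBBLocations and LabelLocations above follow one pattern: a dense table indexed by block or label ID, over-grown on first write so repeated emission stays amortized constant time. A minimal standalone sketch of that pattern (names are illustrative, not from the patch):

    #include <cassert>
    #include <cstdint>
    #include <vector>

    // Dense ID -> address table, as used for MBBs and labels above.
    class AddressTable {
      std::vector<uintptr_t> Locations; // 0 means "not emitted yet"
    public:
      void record(unsigned ID, uintptr_t Addr) {
        if (Locations.size() <= ID)
          Locations.resize((ID + 1) * 2); // over-allocate, amortizing growth
        Locations[ID] = Addr;
      }
      uintptr_t lookup(unsigned ID) const {
        assert(Locations.size() > ID && Locations[ID] && "ID not emitted!");
        return Locations[ID];
      }
    };
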
 void *JITEmitter::getPointerToGlobal(GlobalValue *V, void *Reference,
-                                     bool DoesntNeedStub) {
+                                     bool MayNeedFarStub) {
   if (GlobalVariable *GV = dyn_cast<GlobalVariable>(V))
     return TheJIT->getOrEmitGlobalVariable(GV);
 
@@ -643,64 +733,60 @@ void *JITEmitter::getPointerToGlobal(GlobalValue *V, void *Reference,
 
   // If we have already compiled the function, return a pointer to its body.
   Function *F = cast<Function>(V);
-  void *ResultPtr;
-  if (!DoesntNeedStub && !TheJIT->isLazyCompilationDisabled()) {
-    // Return the function stub if it's already created.
-    ResultPtr = Resolver.getFunctionStubIfAvailable(F);
-    if (ResultPtr)
-      AddStubToCurrentFunction(ResultPtr);
-  } else {
-    ResultPtr = TheJIT->getPointerToGlobalIfAvailable(F);
+
+  void *FnStub = Resolver.getLazyFunctionStubIfAvailable(F);
+  if (FnStub) {
+    // Return the function stub if it's already created.  We do this first so
+    // that we're returning the same address for the function as any previous
+    // call.  TODO: Yes, this is wrong. The lazy stub isn't guaranteed to be
+    // close enough to call.
+    AddStubToCurrentFunction(FnStub);
+    return FnStub;
   }
-  if (ResultPtr) return ResultPtr;
 
-  // If this is an external function pointer, we can force the JIT to
-  // 'compile' it, which really just adds it to the map.  In dlsym mode, 
-  // external functions are forced through a stub, regardless of reloc type.
-  if (F->isDeclaration() && !F->hasNotBeenReadFromBitcode() &&
-      DoesntNeedStub && !TheJIT->areDlsymStubsEnabled())
-    return TheJIT->getPointerToFunction(F);
+  // If we know the target can handle arbitrary-distance calls, try to
+  // return a direct pointer.
+  if (!MayNeedFarStub) {
+    // If we have code, go ahead and return that.
+    void *ResultPtr = TheJIT->getPointerToGlobalIfAvailable(F);
+    if (ResultPtr) return ResultPtr;
 
-  // Okay, the function has not been compiled yet, if the target callback
-  // mechanism is capable of rewriting the instruction directly, prefer to do
-  // that instead of emitting a stub.  This uses the lazy resolver, so is not
-  // legal if lazy compilation is disabled.
-  if (DoesntNeedStub && !TheJIT->isLazyCompilationDisabled())
-    return Resolver.AddCallbackAtLocation(F, Reference);
+    // If this is an external function pointer, we can force the JIT to
+    // 'compile' it, which really just adds it to the map.
+    if (F->isDeclaration() && !F->hasNotBeenReadFromBitcode())
+      return TheJIT->getPointerToFunction(F);
+  }
 
-  // Otherwise, we have to emit a stub.
-  void *StubAddr = Resolver.getFunctionStub(F);
+  // Otherwise, we may need to emit a stub, and, conservatively, we
+  // always do so.
+  void *StubAddr = Resolver.getLazyFunctionStub(F);
 
   // Add the stub to the current function's list of referenced stubs, so we can
   // deallocate them if the current function is ever freed.  It's possible to
-  // return null from getFunctionStub in the case of a weak extern that fails
-  // to resolve.
+  // return null from getLazyFunctionStub in the case of a weak extern that
+  // fails to resolve.
   if (StubAddr)
     AddStubToCurrentFunction(StubAddr);
 
   return StubAddr;
 }
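The control flow above reduces to an ordered policy: an existing lazy stub wins (so the function keeps a single address), a direct pointer is handed out only when the target can reach it, and a stub is created as the conservative fallback. A toy model of just that ordering, with fake integer addresses so the sketch stays self-contained:

    #include <cstdint>
    #include <unordered_map>

    // Toy model of the lookup order in getPointerToGlobal above; the
    // "addresses" are plain integers, purely to keep the sketch runnable.
    struct ToyResolver {
      std::unordered_map<int, uintptr_t> LazyStubs;    // fn id -> stub addr
      std::unordered_map<int, uintptr_t> CompiledCode; // fn id -> body addr
      uintptr_t NextStub = 0x1000;

      uintptr_t resolve(int FnId, bool MayNeedFarStub) {
        // 1. An existing lazy stub wins, so every caller sees one address.
        if (auto It = LazyStubs.find(FnId); It != LazyStubs.end())
          return It->second;
        // 2. If the target can reach any address, return the body directly.
        if (!MayNeedFarStub) {
          if (auto It = CompiledCode.find(FnId); It != CompiledCode.end())
            return It->second;
        }
        // 3. Otherwise, conservatively create and remember a stub.
        return LazyStubs[FnId] = NextStub++;
      }
    };
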
 
-void *JITEmitter::getPointerToGVIndirectSym(GlobalValue *V, void *Reference,
-                                            bool NoNeedStub) {
+void *JITEmitter::getPointerToGVIndirectSym(GlobalValue *V, void *Reference) {
   // Make sure GV is emitted first, and create a stub containing the fully
   // resolved address.
-  void *GVAddress = getPointerToGlobal(V, Reference, true);
+  void *GVAddress = getPointerToGlobal(V, Reference, false);
   void *StubAddr = Resolver.getGlobalValueIndirectSym(V, GVAddress);
-  
+
   // Add the stub to the current function's list of referenced stubs, so we can
   // deallocate them if the current function is ever freed.
   AddStubToCurrentFunction(StubAddr);
-  
+
   return StubAddr;
 }
 
 void JITEmitter::AddStubToCurrentFunction(void *StubAddr) {
-  if (!TheJIT->areDlsymStubsEnabled())
-    return;
-  
   assert(CurFn && "Stub added to current function, but current function is 0!");
-  
+
   SmallVectorImpl<void*> &StubsUsed = CurFnStubUses[CurFn];
   StubsUsed.push_back(StubAddr);
 
@@ -708,18 +794,20 @@ void JITEmitter::AddStubToCurrentFunction(void *StubAddr) {
   FnRefs.insert(CurFn);
 }
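CurFnStubUses and StubFnRefs are two sides of the same bookkeeping: each function lists the stubs it uses, each stub tracks its users, and a stub becomes reclaimable once its user set drains, which deallocateMemForFunction below exploits. A simplified sketch of that scheme with plain standard containers:

    #include <map>
    #include <set>
    #include <vector>

    // Two-sided stub bookkeeping; types simplified for the sketch.
    struct StubTracker {
      std::map<int, std::vector<void*>> FnToStubs; // like CurFnStubUses
      std::map<void*, std::set<int>>    StubToFns; // like StubFnRefs

      void addUse(int Fn, void *Stub) {
        FnToStubs[Fn].push_back(Stub);
        StubToFns[Stub].insert(Fn);
      }

      // Returns the stubs that lost their last user when Fn was freed.
      std::vector<void*> releaseFunction(int Fn) {
        std::vector<void*> Dead;
        for (void *Stub : FnToStubs[Fn]) {
          auto It = StubToFns.find(Stub);
          if (It == StubToFns.end()) continue; // already invalidated
          It->second.erase(Fn);
          if (It->second.empty()) {
            Dead.push_back(Stub);
            StubToFns.erase(It);
          }
        }
        FnToStubs.erase(Fn);
        return Dead;
      }
    };
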
 
-void JITEmitter::processDebugLoc(DebugLoc DL) {
+void JITEmitter::processDebugLoc(DebugLoc DL, bool BeforePrintingInsn) {
   if (!DL.isUnknown()) {
     DebugLocTuple CurDLT = EmissionDetails.MF->getDebugLocTuple(DL);
 
-    if (CurDLT.CompileUnit != 0 && PrevDLT != CurDLT) {
-      JITEvent_EmittedFunctionDetails::LineStart NextLine;
-      NextLine.Address = getCurrentPCValue();
-      NextLine.Loc = DL;
-      EmissionDetails.LineStarts.push_back(NextLine);
-    }
+    if (BeforePrintingInsn) {
+      if (CurDLT.Scope != 0 && PrevDLT != CurDLT) {
+        JITEvent_EmittedFunctionDetails::LineStart NextLine;
+        NextLine.Address = getCurrentPCValue();
+        NextLine.Loc = DL;
+        EmissionDetails.LineStarts.push_back(NextLine);
+      }
 
-    PrevDLT = CurDLT;
+      PrevDLT = CurDLT;
+    }
   }
 }
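With the BeforePrintingInsn flag, a LineStart event is now recorded once per change of source location rather than once per instruction. A small sketch of the dedup idea, collapsing the scope/line tuple to a plain line number for brevity:

    #include <cstdint>
    #include <utility>
    #include <vector>

    // Record an (address, line) event only when the line actually changes.
    struct LineEvents {
      std::vector<std::pair<uintptr_t, int>> Starts;
      int PrevLine = -1;

      void note(uintptr_t PC, int Line) {
        if (Line >= 0 && Line != PrevLine) {
          Starts.emplace_back(PC, Line);
          PrevLine = Line;
        }
      }
    };
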
 
@@ -742,7 +830,7 @@ static unsigned GetConstantPoolSizeInBytes(MachineConstantPool *MCP,
 static unsigned GetJumpTableSizeInBytes(MachineJumpTableInfo *MJTI) {
   const std::vector<MachineJumpTableEntry> &JT = MJTI->getJumpTables();
   if (JT.empty()) return 0;
-  
+
   unsigned NumEntries = 0;
   for (unsigned i = 0, e = JT.size(); i != e; ++i)
     NumEntries += JT[i].MBBs.size();
@@ -754,7 +842,7 @@ static unsigned GetJumpTableSizeInBytes(MachineJumpTableInfo *MJTI) {
 
 static uintptr_t RoundUpToAlign(uintptr_t Size, unsigned Alignment) {
   if (Alignment == 0) Alignment = 1;
-  // Since we do not know where the buffer will be allocated, be pessimistic. 
+  // Since we do not know where the buffer will be allocated, be pessimistic.
   return Size + Alignment;
 }
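Because the buffer's base address is unknown when the estimate is made, the helper above reserves Size + Alignment instead of rounding Size itself: wherever the buffer lands, an aligned start still fits. A sketch of the matching fix-up once the base is known (power-of-two alignments assumed):

    #include <cstdint>

    // Conservative reservation: enough for Size bytes at any base address.
    static uintptr_t ReserveForAlign(uintptr_t Size, unsigned Alignment) {
      if (Alignment == 0) Alignment = 1;
      return Size + Alignment;
    }

    // Once the base is known, round it up; the padding reserved above
    // guarantees [Base, Base+Size) stays inside the allocation.
    // Requires a power-of-two Alignment.
    static uintptr_t AlignUp(uintptr_t Base, unsigned Alignment) {
      return (Base + Alignment - 1) & ~uintptr_t(Alignment - 1);
    }
    // e.g. AlignUp(0x1003, 16) == 0x1010, and 0x1010 + Size fits within
    // 0x1003 + ReserveForAlign(Size, 16).
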
 
@@ -764,7 +852,7 @@ static uintptr_t RoundUpToAlign(uintptr_t Size, unsigned Alignment) {
 unsigned JITEmitter::addSizeOfGlobal(const GlobalVariable *GV, unsigned Size) {
   const Type *ElTy = GV->getType()->getElementType();
   size_t GVSize = (size_t)TheJIT->getTargetData()->getTypeAllocSize(ElTy);
-  size_t GVAlign = 
+  size_t GVAlign =
       (size_t)TheJIT->getTargetData()->getPreferredAlignment(GV);
   DEBUG(errs() << "JIT: Adding in size " << GVSize << " alignment " << GVAlign);
   DEBUG(GV->dump());
@@ -781,7 +869,7 @@ unsigned JITEmitter::addSizeOfGlobal(const GlobalVariable *GV, unsigned Size) {
 /// but are referenced from the constant; put them in GVSet and add their
 /// size into the running total Size.
 
-unsigned JITEmitter::addSizeOfGlobalsInConstantVal(const Constant *C, 
+unsigned JITEmitter::addSizeOfGlobalsInConstantVal(const Constant *C,
                                               unsigned Size) {
   // If it's undefined, return the garbage.
   if (isa<UndefValue>(C))
@@ -844,7 +932,7 @@ unsigned JITEmitter::addSizeOfGlobalsInConstantVal(const Constant *C,
 /// addSizeOfGlobalsInInitializer - handle any globals that we haven't seen yet
 /// but are referenced from the given initializer.
 
-unsigned JITEmitter::addSizeOfGlobalsInInitializer(const Constant *Init, 
+unsigned JITEmitter::addSizeOfGlobalsInInitializer(const Constant *Init,
                                               unsigned Size) {
   if (!isa<UndefValue>(Init) &&
       !isa<ConstantVector>(Init) &&
@@ -865,7 +953,7 @@ unsigned JITEmitter::GetSizeOfGlobalsInBytes(MachineFunction &MF) {
   unsigned Size = 0;
   GVSet.clear();
 
-  for (MachineFunction::iterator MBB = MF.begin(), E = MF.end(); 
+  for (MachineFunction::iterator MBB = MF.begin(), E = MF.end();
        MBB != E; ++MBB) {
     for (MachineBasicBlock::const_iterator I = MBB->begin(), E = MBB->end();
          I != E; ++I) {
@@ -897,7 +985,7 @@ unsigned JITEmitter::GetSizeOfGlobalsInBytes(MachineFunction &MF) {
   DEBUG(errs() << "JIT: About to look through initializers\n");
   // Look for more globals that are referenced only from initializers.
   // GVSet.end is computed each time because the set can grow as we go.
-  for (SmallPtrSet<const GlobalVariable *, 8>::iterator I = GVSet.begin(); 
+  for (SmallPtrSet<const GlobalVariable *, 8>::iterator I = GVSet.begin();
        I != GVSet.end(); I++) {
     const GlobalVariable* GV = *I;
     if (GV->hasInitializer())
@@ -919,10 +1007,10 @@ void JITEmitter::startFunction(MachineFunction &F) {
     const TargetInstrInfo* TII = F.getTarget().getInstrInfo();
     MachineJumpTableInfo *MJTI = F.getJumpTableInfo();
     MachineConstantPool *MCP = F.getConstantPool();
-    
+
     // Ensure the constant pool/jump table info is at least 4-byte aligned.
     ActualSize = RoundUpToAlign(ActualSize, 16);
-    
+
     // Add the alignment of the constant pool
     ActualSize = RoundUpToAlign(ActualSize, MCP->getConstantPoolAlignment());
 
@@ -934,7 +1022,7 @@ void JITEmitter::startFunction(MachineFunction &F) {
 
     // Add the jump table size
     ActualSize += GetJumpTableSizeInBytes(MJTI);
-    
+
     // Add the alignment for the function
     ActualSize = RoundUpToAlign(ActualSize,
                                 std::max(F.getFunction()->getAlignment(), 8U));
@@ -956,7 +1044,8 @@ void JITEmitter::startFunction(MachineFunction &F) {
   BufferBegin = CurBufferPtr = MemMgr->startFunctionBody(F.getFunction(),
                                                          ActualSize);
   BufferEnd = BufferBegin+ActualSize;
-  
+  EmittedFunctions[F.getFunction()].FunctionBody = BufferBegin;
+
   // Ensure the constant pool/jump table info is at least 4-byte aligned.
   emitAlignment(16);
 
@@ -966,6 +1055,7 @@ void JITEmitter::startFunction(MachineFunction &F) {
   // About to start emitting the machine code for the function.
   emitAlignment(std::max(F.getFunction()->getAlignment(), 8U));
   TheJIT->updateGlobalMapping(F.getFunction(), CurBufferPtr);
+  EmittedFunctions[F.getFunction()].Code = CurBufferPtr;
 
   MBBLocations.clear();
 
@@ -1005,29 +1095,19 @@ bool JITEmitter::finishFunction(MachineFunction &F) {
           ResultPtr = TheJIT->getPointerToNamedFunction(MR.getExternalSymbol(),
                                                         false);
           DEBUG(errs() << "JIT: Map \'" << MR.getExternalSymbol() << "\' to ["
-                       << ResultPtr << "]\n"); 
+                       << ResultPtr << "]\n");
 
           // If the target may need a far stub for this function, emit it now.
-          if (!MR.doesntNeedStub()) {
-            if (!TheJIT->areDlsymStubsEnabled()) {
-              ResultPtr = Resolver.getExternalFunctionStub(ResultPtr);
-            } else {
-              void *&Stub = ExtFnStubs[MR.getExternalSymbol()];
-              if (!Stub) {
-                Stub = Resolver.getExternalFunctionStub((void *)&Stub);
-                AddStubToCurrentFunction(Stub);
-              }
-              ResultPtr = Stub;
-            }
+          if (MR.mayNeedFarStub()) {
+            ResultPtr = Resolver.getExternalFunctionStub(ResultPtr);
           }
         } else if (MR.isGlobalValue()) {
           ResultPtr = getPointerToGlobal(MR.getGlobalValue(),
                                          BufferBegin+MR.getMachineCodeOffset(),
-                                         MR.doesntNeedStub());
+                                         MR.mayNeedFarStub());
         } else if (MR.isIndirectSymbol()) {
-          ResultPtr = getPointerToGVIndirectSym(MR.getGlobalValue(),
-                                          BufferBegin+MR.getMachineCodeOffset(),
-                                          MR.doesntNeedStub());
+          ResultPtr = getPointerToGVIndirectSym(
+              MR.getGlobalValue(), BufferBegin+MR.getMachineCodeOffset());
         } else if (MR.isBasicBlock()) {
           ResultPtr = (void*)getMachineBasicBlockAddress(MR.getBasicBlock());
         } else if (MR.isConstantPoolIndex()) {
@@ -1135,9 +1215,8 @@ bool JITEmitter::finishFunction(MachineFunction &F) {
 
   if (DwarfExceptionHandling || JITEmitDebugInfo) {
     uintptr_t ActualSize = 0;
-    SavedBufferBegin = BufferBegin;
-    SavedBufferEnd = BufferEnd;
-    SavedCurBufferPtr = CurBufferPtr;
+    BufferState BS;
+    SaveStateTo(BS);
 
     if (MemMgr->NeedsExactSize()) {
       ActualSize = DE->GetDwarfTableSizeInBytes(F, *this, FnStart, FnEnd);
@@ -1146,15 +1225,14 @@ bool JITEmitter::finishFunction(MachineFunction &F) {
     BufferBegin = CurBufferPtr = MemMgr->startExceptionTable(F.getFunction(),
                                                              ActualSize);
     BufferEnd = BufferBegin+ActualSize;
+    EmittedFunctions[F.getFunction()].ExceptionTable = BufferBegin;
     uint8_t *EhStart;
     uint8_t *FrameRegister = DE->EmitDwarfTable(F, *this, FnStart, FnEnd,
                                                 EhStart);
     MemMgr->endExceptionTable(F.getFunction(), BufferBegin, CurBufferPtr,
                               FrameRegister);
     uint8_t *EhEnd = CurBufferPtr;
-    BufferBegin = SavedBufferBegin;
-    BufferEnd = SavedBufferEnd;
-    CurBufferPtr = SavedCurBufferPtr;
+    RestoreStateFrom(BS);
 
     if (DwarfExceptionHandling) {
       TheJIT->RegisterTable(FrameRegister);
@@ -1172,7 +1250,7 @@ bool JITEmitter::finishFunction(MachineFunction &F) {
 
   if (MMI)
     MMI->EndFunction();
- 
+
   return false;
 }
 
@@ -1188,8 +1266,17 @@ void JITEmitter::retryWithMoreMemory(MachineFunction &F) {
 
 /// deallocateMemForFunction - Deallocate all memory for the specified
 /// function body.  Also drop any references the function has to stubs.
+/// May be called while the Function is being destroyed inside ~Value().
 void JITEmitter::deallocateMemForFunction(const Function *F) {
-  MemMgr->deallocateMemForFunction(F);
+  ValueMap<const Function *, EmittedCode, EmittedFunctionConfig>::iterator
+    Emitted = EmittedFunctions.find(F);
+  if (Emitted != EmittedFunctions.end()) {
+    MemMgr->deallocateFunctionBody(Emitted->second.FunctionBody);
+    MemMgr->deallocateExceptionTable(Emitted->second.ExceptionTable);
+    TheJIT->NotifyFreeingMachineCode(Emitted->second.Code);
+
+    EmittedFunctions.erase(Emitted);
+  }
 
   // TODO: Do we need to unregister exception handling information from libgcc
   // here?
@@ -1201,20 +1288,20 @@ void JITEmitter::deallocateMemForFunction(const Function *F) {
   // If the function did not reference any stubs, return.
   if (CurFnStubUses.find(F) == CurFnStubUses.end())
     return;
-  
+
   // For each referenced stub, erase the reference to this function, and then
   // erase the list of referenced stubs.
   SmallVectorImpl<void *> &StubList = CurFnStubUses[F];
   for (unsigned i = 0, e = StubList.size(); i != e; ++i) {
     void *Stub = StubList[i];
-    
+
     // If we already invalidated this stub for this function, continue.
     if (StubFnRefs.count(Stub) == 0)
       continue;
-      
+
     SmallPtrSet<const Function *, 1> &FnRefs = StubFnRefs[Stub];
     FnRefs.erase(F);
-    
+
     // If this function was the last reference to the stub, invalidate the stub
     // in the JITResolver.  Were there a memory manager deallocateStub routine,
     // we could call that at this point too.
@@ -1223,19 +1310,10 @@ void JITEmitter::deallocateMemForFunction(const Function *F) {
       StubFnRefs.erase(Stub);
 
       // Invalidate the stub.  If it is a GV stub, update the JIT's global
-      // mapping for that GV to zero, otherwise, search the string map of
-      // external function names to stubs and remove the entry for this stub.
+      // mapping for that GV to zero.
       GlobalValue *GV = Resolver.invalidateStub(Stub);
       if (GV) {
         TheJIT->updateGlobalMapping(GV, 0);
-      } else {
-        for (StringMapIterator<void*> i = ExtFnStubs.begin(),
-             e = ExtFnStubs.end(); i != e; ++i) {
-          if (i->second == Stub) {
-            ExtFnStubs.erase(i);
-            break;
-          }
-        }
       }
     }
   }
@@ -1306,7 +1384,7 @@ void JITEmitter::initJumpTableInfo(MachineJumpTableInfo *MJTI) {
 
   const std::vector<MachineJumpTableEntry> &JT = MJTI->getJumpTables();
   if (JT.empty()) return;
-  
+
   unsigned NumEntries = 0;
   for (unsigned i = 0, e = JT.size(); i != e; ++i)
     NumEntries += JT[i].MBBs.size();
@@ -1326,7 +1404,7 @@ void JITEmitter::emitJumpTableInfo(MachineJumpTableInfo *MJTI) {
 
   const std::vector<MachineJumpTableEntry> &JT = MJTI->getJumpTables();
   if (JT.empty() || JumpTableBase == 0) return;
-  
+
   if (TargetMachine::getRelocationModel() == Reloc::PIC_) {
     assert(MJTI->getEntrySize() == 4 && "Cross JIT'ing?");
     // For each jump table, place the offset from the beginning of the table
@@ -1345,8 +1423,8 @@ void JITEmitter::emitJumpTableInfo(MachineJumpTableInfo *MJTI) {
     }
   } else {
     assert(MJTI->getEntrySize() == sizeof(void*) && "Cross JIT'ing?");
-    
-    // For each jump table, map each target in the jump table to the address of 
+
+    // For each jump table, map each target in the jump table to the address of
     // an emitted MachineBasicBlock.
     intptr_t *SlotPtr = (intptr_t*)JumpTableBase;
 
@@ -1360,32 +1438,27 @@ void JITEmitter::emitJumpTableInfo(MachineJumpTableInfo *MJTI) {
   }
 }
 
-void JITEmitter::startGVStub(const GlobalValue* GV, unsigned StubSize,
-                             unsigned Alignment) {
-  SavedBufferBegin = BufferBegin;
-  SavedBufferEnd = BufferEnd;
-  SavedCurBufferPtr = CurBufferPtr;
-  
+void JITEmitter::startGVStub(BufferState &BS, const GlobalValue* GV,
+                             unsigned StubSize, unsigned Alignment) {
+  SaveStateTo(BS);
+
   BufferBegin = CurBufferPtr = MemMgr->allocateStub(GV, StubSize, Alignment);
   BufferEnd = BufferBegin+StubSize+1;
 }
 
-void JITEmitter::startGVStub(const GlobalValue* GV, void *Buffer,
-                             unsigned StubSize) {
-  SavedBufferBegin = BufferBegin;
-  SavedBufferEnd = BufferEnd;
-  SavedCurBufferPtr = CurBufferPtr;
-  
+void JITEmitter::startGVStub(BufferState &BS, void *Buffer, unsigned StubSize) {
+  SaveStateTo(BS);
+
   BufferBegin = CurBufferPtr = (uint8_t *)Buffer;
   BufferEnd = BufferBegin+StubSize+1;
 }
 
-void *JITEmitter::finishGVStub(const GlobalValue* GV) {
+void *JITEmitter::finishGVStub(BufferState &BS) {
+  assert(CurBufferPtr != BufferEnd && "Stub overflowed allocated space.");
   NumBytes += getCurrentPCOffset();
-  std::swap(SavedBufferBegin, BufferBegin);
-  BufferEnd = SavedBufferEnd;
-  CurBufferPtr = SavedCurBufferPtr;
-  return SavedBufferBegin;
+  void *Result = BufferBegin;
+  RestoreStateFrom(BS);
+  return Result;
 }
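startGVStub and finishGVStub now thread a caller-owned BufferState instead of a single set of Saved* members, so emissions can nest without clobbering each other's saved pointers. A minimal sketch, assuming BufferState snapshots the same three pointers the removed SavedBuffer* fields held:

    #include <cstdint>

    // Caller-owned snapshot of the emitter's buffer cursor, so that
    // save/restore pairs can nest arbitrarily.
    struct BufferState {
      uint8_t *Begin = nullptr, *End = nullptr, *Cur = nullptr;
    };

    struct Emitter {
      uint8_t *BufferBegin = nullptr, *BufferEnd = nullptr,
              *CurBufferPtr = nullptr;

      void saveStateTo(BufferState &BS) {
        BS = {BufferBegin, BufferEnd, CurBufferPtr};
      }
      void restoreStateFrom(const BufferState &BS) {
        BufferBegin  = BS.Begin;
        BufferEnd    = BS.End;
        CurBufferPtr = BS.Cur;
      }
    };
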
 
 // getConstantPoolEntryAddress - Return the address of the 'ConstantNum' entry
@@ -1404,18 +1477,29 @@ uintptr_t JITEmitter::getConstantPoolEntryAddress(unsigned ConstantNum) const {
 uintptr_t JITEmitter::getJumpTableEntryAddress(unsigned Index) const {
   const std::vector<MachineJumpTableEntry> &JT = JumpTable->getJumpTables();
   assert(Index < JT.size() && "Invalid jump table index!");
-  
+
   unsigned Offset = 0;
   unsigned EntrySize = JumpTable->getEntrySize();
-  
+
   for (unsigned i = 0; i < Index; ++i)
     Offset += JT[i].MBBs.size();
-  
+
    Offset *= EntrySize;
-  
+
   return (uintptr_t)((char *)JumpTableBase + Offset);
 }
 
+void JITEmitter::EmittedFunctionConfig::onDelete(
+  JITEmitter *Emitter, const Function *F) {
+  Emitter->deallocateMemForFunction(F);
+}
+void JITEmitter::EmittedFunctionConfig::onRAUW(
+  JITEmitter *, const Function*, const Function*) {
+  llvm_unreachable("The JIT doesn't know how to handle a"
+                   " RAUW on a value it has emitted.");
+}
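EmittedFunctionConfig hooks the map itself: destroying a Function triggers onDelete, which tears down its machine code, while RAUW on emitted code is declared fatal. A generic sketch of the callback-on-erase shape (not LLVM's ValueMap API, just the idea):

    #include <functional>
    #include <map>

    // A map that tells its owner when a key disappears, so resources
    // cached against the key can be torn down in one place.
    template <typename K, typename V>
    class ObservedMap {
      std::map<K, V> Data;
      std::function<void(const K&, V&)> OnDelete;
    public:
      explicit ObservedMap(std::function<void(const K&, V&)> Cb)
          : OnDelete(std::move(Cb)) {}
      V &operator[](const K &Key) { return Data[Key]; }
      void keyDestroyed(const K &Key) {  // analogous to onDelete above
        auto It = Data.find(Key);
        if (It == Data.end()) return;
        OnDelete(It->first, It->second); // e.g. deallocate machine code
        Data.erase(It);
      }
    };
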
+
+
 //===----------------------------------------------------------------------===//
 //  Public interface to this file
 //===----------------------------------------------------------------------===//
@@ -1446,121 +1530,35 @@ void *JIT::getPointerToFunctionOrStub(Function *F) {
   // If we have already generated code for the function, just return the address.
   if (void *Addr = getPointerToGlobalIfAvailable(F))
     return Addr;
-  
+
   // Get a stub if the target supports it.
   assert(isa<JITEmitter>(JCE) && "Unexpected MCE?");
   JITEmitter *JE = cast<JITEmitter>(getCodeEmitter());
-  return JE->getJITResolver().getFunctionStub(F);
+  return JE->getJITResolver().getLazyFunctionStub(F);
 }
 
 void JIT::updateFunctionStub(Function *F) {
   // Get the empty stub we generated earlier.
   assert(isa<JITEmitter>(JCE) && "Unexpected MCE?");
   JITEmitter *JE = cast<JITEmitter>(getCodeEmitter());
-  void *Stub = JE->getJITResolver().getFunctionStub(F);
+  void *Stub = JE->getJITResolver().getLazyFunctionStub(F);
+  void *Addr = getPointerToGlobalIfAvailable(F);
 
   // Tell the target jit info to rewrite the stub at the specified address,
   // rather than creating a new one.
-  void *Addr = getPointerToGlobalIfAvailable(F);
-  getJITInfo().emitFunctionStubAtAddr(F, Addr, Stub, *getCodeEmitter());
-}
-
-/// updateDlsymStubTable - Emit the data necessary to relocate the stubs
-/// that were emitted during code generation.
-///
-void JIT::updateDlsymStubTable() {
-  assert(isa<JITEmitter>(JCE) && "Unexpected MCE?");
-  JITEmitter *JE = cast<JITEmitter>(getCodeEmitter());
-  
-  SmallVector<GlobalValue*, 8> GVs;
-  SmallVector<void*, 8> Ptrs;
-  const StringMap<void *> &ExtFns = JE->getExternalFnStubs();
-
-  JE->getJITResolver().getRelocatableGVs(GVs, Ptrs);
-
-  unsigned nStubs = GVs.size() + ExtFns.size();
-  
-  // If there are no relocatable stubs, return.
-  if (nStubs == 0)
-    return;
-
-  // If there are no new relocatable stubs, return.
-  void *CurTable = JE->getMemMgr()->getDlsymTable();
-  if (CurTable && (*(unsigned *)CurTable == nStubs))
-    return;
-  
-  // Calculate the size of the stub info
-  unsigned offset = 4 + 4 * nStubs + sizeof(intptr_t) * nStubs;
-  
-  SmallVector<unsigned, 8> Offsets;
-  for (unsigned i = 0; i != GVs.size(); ++i) {
-    Offsets.push_back(offset);
-    offset += GVs[i]->getName().size() + 1;
-  }
-  for (StringMapConstIterator<void*> i = ExtFns.begin(), e = ExtFns.end(); 
-       i != e; ++i) {
-    Offsets.push_back(offset);
-    offset += strlen(i->first()) + 1;
-  }
-  
-  // Allocate space for the new "stub", which contains the dlsym table.
-  JE->startGVStub(0, offset, 4);
-  
-  // Emit the number of records
-  JE->emitInt32(nStubs);
-  
-  // Emit the string offsets
-  for (unsigned i = 0; i != nStubs; ++i)
-    JE->emitInt32(Offsets[i]);
-  
-  // Emit the pointers.  Verify that they are at least 2-byte aligned, and set
-  // the low bit to 0 == GV, 1 == Function, so that the client code doing the
-  // relocation can write the relocated pointer at the appropriate place in
-  // the stub.
-  for (unsigned i = 0; i != GVs.size(); ++i) {
-    intptr_t Ptr = (intptr_t)Ptrs[i];
-    assert((Ptr & 1) == 0 && "Stub pointers must be at least 2-byte aligned!");
-    
-    if (isa<Function>(GVs[i]))
-      Ptr |= (intptr_t)1;
-           
-    if (sizeof(Ptr) == 8)
-      JE->emitInt64(Ptr);
-    else
-      JE->emitInt32(Ptr);
-  }
-  for (StringMapConstIterator<void*> i = ExtFns.begin(), e = ExtFns.end(); 
-       i != e; ++i) {
-    intptr_t Ptr = (intptr_t)i->second | 1;
-
-    if (sizeof(Ptr) == 8)
-      JE->emitInt64(Ptr);
-    else
-      JE->emitInt32(Ptr);
-  }
-  
-  // Emit the strings.
-  for (unsigned i = 0; i != GVs.size(); ++i)
-    JE->emitString(GVs[i]->getName());
-  for (StringMapConstIterator<void*> i = ExtFns.begin(), e = ExtFns.end(); 
-       i != e; ++i)
-    JE->emitString(i->first());
-  
-  // Tell the JIT memory manager where it is.  The JIT Memory Manager will
-  // deallocate space for the old one, if one existed.
-  JE->getMemMgr()->SetDlsymTable(JE->finishGVStub(0));
+  MachineCodeEmitter::BufferState BS;
+  TargetJITInfo::StubLayout layout = getJITInfo().getStubLayout();
+  JE->startGVStub(BS, Stub, layout.Size);
+  getJITInfo().emitFunctionStub(F, Addr, *getCodeEmitter());
+  JE->finishGVStub(BS);
 }
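The deleted updateDlsymStubTable above distinguished globals from functions by stealing the low bit of each emitted pointer, which is sound only because the pointers are at least 2-byte aligned, as its assert checked. The tagging trick in isolation:

    #include <cassert>
    #include <cstdint>

    // Low-bit pointer tagging as used by the removed dlsym table:
    // 0 == global variable, 1 == function.
    static intptr_t tagPointer(void *P, bool IsFunction) {
      intptr_t V = (intptr_t)P;
      assert((V & 1) == 0 && "pointer must be at least 2-byte aligned");
      return V | (IsFunction ? 1 : 0);
    }
    static void *untagPointer(intptr_t V) { return (void*)(V & ~intptr_t(1)); }
    static bool  isFunctionTag(intptr_t V) { return (V & 1) != 0; }
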
 
 /// freeMachineCodeForFunction - release machine code memory for given Function.
 ///
 void JIT::freeMachineCodeForFunction(Function *F) {
-
   // Delete the translation for this function from the ExecutionEngine, so it
   // will get retranslated next time it is used.
-  void *OldPtr = updateGlobalMapping(F, 0);
-
-  if (OldPtr)
-    TheJIT->NotifyFreeingMachineCode(*F, OldPtr);
+  updateGlobalMapping(F, 0);
 
   // Free the actual memory for the function body and related stuff.
   assert(isa<JITEmitter>(JCE) && "Unexpected MCE?");
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITMemoryManager.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITMemoryManager.cpp
index 474843f..80cb999 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITMemoryManager.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITMemoryManager.cpp
@@ -49,23 +49,23 @@ namespace {
     /// ThisAllocated - This is true if this block is currently allocated.  If
     /// not, this can be converted to a FreeRangeHeader.
     unsigned ThisAllocated : 1;
-    
+
     /// PrevAllocated - Keep track of whether the block immediately before us is
     /// allocated.  If not, the word immediately before this header is the size
     /// of the previous block.
     unsigned PrevAllocated : 1;
-    
+
     /// BlockSize - This is the size in bytes of this memory block,
     /// including this header.
     uintptr_t BlockSize : (sizeof(intptr_t)*CHAR_BIT - 2);
-    
+
 
     /// getBlockAfter - Return the memory block immediately after this one.
     ///
     MemoryRangeHeader &getBlockAfter() const {
       return *(MemoryRangeHeader*)((char*)this+BlockSize);
     }
-    
+
     /// getFreeBlockBefore - If the block before this one is free, return it,
     /// otherwise return null.
     FreeRangeHeader *getFreeBlockBefore() const {
@@ -73,15 +73,15 @@ namespace {
       intptr_t PrevSize = ((intptr_t *)this)[-1];
       return (FreeRangeHeader*)((char*)this-PrevSize);
     }
-    
+
     /// FreeBlock - Turn an allocated block into a free block, adjusting
     /// bits in the object headers, and adding an end of region memory block.
     FreeRangeHeader *FreeBlock(FreeRangeHeader *FreeList);
-    
+
     /// TrimAllocationToSize - If this allocated block is significantly larger
     /// than NewSize, split it into two pieces (where the former is NewSize
     /// bytes, including the header), and add the new block to the free list.
-    FreeRangeHeader *TrimAllocationToSize(FreeRangeHeader *FreeList, 
+    FreeRangeHeader *TrimAllocationToSize(FreeRangeHeader *FreeList,
                                           uint64_t NewSize);
   };
 
@@ -91,13 +91,13 @@ namespace {
   struct FreeRangeHeader : public MemoryRangeHeader {
     FreeRangeHeader *Prev;
     FreeRangeHeader *Next;
-    
+
     /// getMinBlockSize - Get the minimum size for a memory block.  Blocks
     /// smaller than this size cannot be created.
     static unsigned getMinBlockSize() {
       return sizeof(FreeRangeHeader)+sizeof(intptr_t);
     }
-    
+
     /// SetEndOfBlockSizeMarker - The word at the end of every free block is
     /// known to be the size of the free block.  Set it for this block.
     void SetEndOfBlockSizeMarker() {
@@ -110,7 +110,7 @@ namespace {
       Next->Prev = Prev;
       return Prev->Next = Next;
     }
-    
+
     void AddToFreeList(FreeRangeHeader *FreeList) {
       Next = FreeList;
       Prev = FreeList->Prev;
@@ -121,7 +121,7 @@ namespace {
     /// GrowBlock - The block after this block just got deallocated.  Merge it
     /// into the current block.
     void GrowBlock(uintptr_t NewSize);
-    
+
     /// AllocateBlock - Mark this entire block allocated, updating freelists
     /// etc.  This returns a pointer to the circular free-list.
     FreeRangeHeader *AllocateBlock();
@@ -137,7 +137,7 @@ FreeRangeHeader *FreeRangeHeader::AllocateBlock() {
   // Mark this block allocated.
   ThisAllocated = 1;
   getBlockAfter().PrevAllocated = 1;
- 
+
   // Remove it from the free list.
   return RemoveFromFreeList();
 }
@@ -150,9 +150,9 @@ FreeRangeHeader *MemoryRangeHeader::FreeBlock(FreeRangeHeader *FreeList) {
   MemoryRangeHeader *FollowingBlock = &getBlockAfter();
   assert(ThisAllocated && "This block is already free!");
   assert(FollowingBlock->PrevAllocated && "Flags out of sync!");
-  
+
   FreeRangeHeader *FreeListToReturn = FreeList;
-  
+
   // If the block after this one is free, merge it into this block.
   if (!FollowingBlock->ThisAllocated) {
     FreeRangeHeader &FollowingFreeBlock = *(FreeRangeHeader *)FollowingBlock;
@@ -164,18 +164,18 @@ FreeRangeHeader *MemoryRangeHeader::FreeBlock(FreeRangeHeader *FreeList) {
       assert(&FollowingFreeBlock != FreeList && "No tombstone block?");
     }
     FollowingFreeBlock.RemoveFromFreeList();
-    
+
     // Include the following block into this one.
     BlockSize += FollowingFreeBlock.BlockSize;
     FollowingBlock = &FollowingFreeBlock.getBlockAfter();
-    
+
     // Tell the block after the block we are coalescing that this block is
     // allocated.
     FollowingBlock->PrevAllocated = 1;
   }
-  
+
   assert(FollowingBlock->ThisAllocated && "Missed coalescing?");
-  
+
   if (FreeRangeHeader *PrevFreeBlock = getFreeBlockBefore()) {
     PrevFreeBlock->GrowBlock(PrevFreeBlock->BlockSize + BlockSize);
     return FreeListToReturn ? FreeListToReturn : PrevFreeBlock;
@@ -218,24 +218,24 @@ TrimAllocationToSize(FreeRangeHeader *FreeList, uint64_t NewSize) {
   // Round up size for alignment of header.
   unsigned HeaderAlign = __alignof(FreeRangeHeader);
   NewSize = (NewSize+ (HeaderAlign-1)) & ~(HeaderAlign-1);
-  
+
   // Size is now the size of the block we will remove from the start of the
   // current block.
   assert(NewSize <= BlockSize &&
          "Allocating more space from this block than exists!");
-  
+
   // If splitting this block will cause the remainder to be too small, do not
   // split the block.
   if (BlockSize <= NewSize+FreeRangeHeader::getMinBlockSize())
     return FreeList;
-  
+
   // Otherwise, we splice the required number of bytes out of this block, form
   // a new block immediately after it, then mark this block allocated.
   MemoryRangeHeader &FormerNextBlock = getBlockAfter();
-  
+
   // Change the size of this block.
   BlockSize = NewSize;
-  
+
   // Get the new block we just sliced out and turn it into a free block.
   FreeRangeHeader &NewNextBlock = (FreeRangeHeader &)getBlockAfter();
   NewNextBlock.BlockSize = (char*)&FormerNextBlock - (char*)&NewNextBlock;
@@ -283,7 +283,7 @@ namespace {
     sys::MemoryBlock LastSlab;
 
     // Memory slabs allocated by the JIT.  We refer to them as slabs so we don't
-    // confuse them with the blocks of memory descibed above.
+    // confuse them with the blocks of memory described above.
     std::vector<sys::MemoryBlock> CodeSlabs;
     JITSlabAllocator BumpSlabAllocator;
     BumpPtrAllocator StubAllocator;
@@ -296,10 +296,6 @@ namespace {
     MemoryRangeHeader *CurBlock;
 
     uint8_t *GOTBase;     // Target Specific reserved memory
-    void *DlsymTable;     // Stub external symbol information
-
-    std::map<const Function*, MemoryRangeHeader*> FunctionBlocks;
-    std::map<const Function*, MemoryRangeHeader*> TableBlocks;
   public:
     DefaultJITMemoryManager();
     ~DefaultJITMemoryManager();
@@ -321,7 +317,6 @@ namespace {
     static const size_t DefaultSizeThreshold;
 
     void AllocateGOT();
-    void SetDlsymTable(void *);
 
     // Testing methods.
     virtual bool CheckInvariants(std::string &ErrorStr);
@@ -352,7 +347,7 @@ namespace {
       }
 
       largest = largest - sizeof(MemoryRangeHeader);
-      
+
       // If this block isn't big enough for the allocation desired, allocate
       // another block of memory and add it to the free list.
       if (largest < ActualSize ||
@@ -414,7 +409,6 @@ namespace {
              "Mismatched function start/end!");
 
       uintptr_t BlockSize = FunctionEnd - (uint8_t *)CurBlock;
-      FunctionBlocks[F] = CurBlock;
 
       // Release the memory at the end of this block that isn't needed.
       FreeMemoryList =CurBlock->TrimAllocationToSize(FreeMemoryList, BlockSize);
@@ -449,44 +443,33 @@ namespace {
       return (uint8_t*)DataAllocator.Allocate(Size, Alignment);
     }
 
-    /// startExceptionTable - Use startFunctionBody to allocate memory for the 
+    /// startExceptionTable - Use startFunctionBody to allocate memory for the
     /// function's exception table.
     uint8_t* startExceptionTable(const Function* F, uintptr_t &ActualSize) {
       return startFunctionBody(F, ActualSize);
     }
 
-    /// endExceptionTable - The exception table of F is now allocated, 
+    /// endExceptionTable - The exception table of F is now allocated,
     /// and takes the memory in the range [TableStart,TableEnd).
     void endExceptionTable(const Function *F, uint8_t *TableStart,
                            uint8_t *TableEnd, uint8_t* FrameRegister) {
       assert(TableEnd > TableStart);
       assert(TableStart == (uint8_t *)(CurBlock+1) &&
              "Mismatched table start/end!");
-      
+
       uintptr_t BlockSize = TableEnd - (uint8_t *)CurBlock;
-      TableBlocks[F] = CurBlock;
 
       // Release the memory at the end of this block that isn't needed.
       FreeMemoryList =CurBlock->TrimAllocationToSize(FreeMemoryList, BlockSize);
     }
-    
+
     uint8_t *getGOTBase() const {
       return GOTBase;
     }
-    
-    void *getDlsymTable() const {
-      return DlsymTable;
-    }
-    
-    /// deallocateMemForFunction - Deallocate all memory for the specified
-    /// function body.
-    void deallocateMemForFunction(const Function *F) {
-      std::map<const Function*, MemoryRangeHeader*>::iterator
-        I = FunctionBlocks.find(F);
-      if (I == FunctionBlocks.end()) return;
-      
+
+    void deallocateBlock(void *Block) {
       // Recover the header of the block backing this allocation.
-      MemoryRangeHeader *MemRange = I->second;
+      MemoryRangeHeader *MemRange = static_cast<MemoryRangeHeader*>(Block) - 1;
       assert(MemRange->ThisAllocated && "Block isn't allocated!");
 
       // Fill the buffer with garbage!
@@ -496,27 +479,18 @@ namespace {
 
       // Free the memory.
       FreeMemoryList = MemRange->FreeBlock(FreeMemoryList);
-      
-      // Finally, remove this entry from FunctionBlocks.
-      FunctionBlocks.erase(I);
-      
-      I = TableBlocks.find(F);
-      if (I == TableBlocks.end()) return;
-      
-      // Find the block that is allocated for this function.
-      MemRange = I->second;
-      assert(MemRange->ThisAllocated && "Block isn't allocated!");
+    }
 
-      // Fill the buffer with garbage!
-      if (PoisonMemory) {
-        memset(MemRange+1, 0xCD, MemRange->BlockSize-sizeof(*MemRange));
-      }
+    /// deallocateFunctionBody - Deallocate all memory for the specified
+    /// function body.
+    void deallocateFunctionBody(void *Body) {
+      if (Body) deallocateBlock(Body);
+    }
 
-      // Free the memory.
-      FreeMemoryList = MemRange->FreeBlock(FreeMemoryList);
-      
-      // Finally, remove this entry from TableBlocks.
-      TableBlocks.erase(I);
+    /// deallocateExceptionTable - Deallocate memory for the specified
+    /// exception table.
+    void deallocateExceptionTable(void *ET) {
+      if (ET) deallocateBlock(ET);
     }
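deallocateBlock recovers the MemoryRangeHeader by stepping one header back from the payload pointer, which works because every allocation returns the address just past its header. The same layout convention in a standalone sketch:

    #include <cstdlib>

    // Header-before-payload convention relied on above.
    struct BlockHeader { size_t Size; /* flags, etc. */ };

    static void *allocBlock(size_t Bytes) {
      BlockHeader *H =
          (BlockHeader*)std::malloc(sizeof(BlockHeader) + Bytes);
      H->Size = sizeof(BlockHeader) + Bytes;
      return H + 1;                        // caller sees only the payload
    }

    static void freeBlock(void *Payload) {
      BlockHeader *H = (BlockHeader*)Payload - 1; // step back to header
      std::free(H);
    }
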
 
     /// setMemoryWritable - When code generation is in progress,
@@ -581,16 +555,16 @@ DefaultJITMemoryManager::DefaultJITMemoryManager()
   //  END ]
   //
   // The last three blocks are never deallocated or touched.
-  
+
   // Add MemoryRangeHeader to the end of the memory region, indicating that
   // the space after the block of memory is allocated.  This is block #3.
   MemoryRangeHeader *Mem3 = (MemoryRangeHeader*)(MemBase+MemBlock.size())-1;
   Mem3->ThisAllocated = 1;
   Mem3->PrevAllocated = 0;
   Mem3->BlockSize     = sizeof(MemoryRangeHeader);
-  
+
   /// Add a tiny free region so that the free list always has one entry.
-  FreeRangeHeader *Mem2 = 
+  FreeRangeHeader *Mem2 =
     (FreeRangeHeader *)(((char*)Mem3)-FreeRangeHeader::getMinBlockSize());
   Mem2->ThisAllocated = 0;
   Mem2->PrevAllocated = 1;
@@ -604,7 +578,7 @@ DefaultJITMemoryManager::DefaultJITMemoryManager()
   Mem1->ThisAllocated = 1;
   Mem1->PrevAllocated = 0;
   Mem1->BlockSize     = sizeof(MemoryRangeHeader);
-  
+
   // Add a FreeRangeHeader to the start of the function body region, indicating
   // that the space is free.  Mark the previous block allocated so we never look
   // at it.
@@ -614,12 +588,11 @@ DefaultJITMemoryManager::DefaultJITMemoryManager()
   Mem0->BlockSize = (char*)Mem1-(char*)Mem0;
   Mem0->SetEndOfBlockSizeMarker();
   Mem0->AddToFreeList(Mem2);
-  
+
   // Start out with the freelist pointing to Mem0.
   FreeMemoryList = Mem0;
 
   GOTBase = NULL;
-  DlsymTable = NULL;
 }
 
 void DefaultJITMemoryManager::AllocateGOT() {
@@ -628,10 +601,6 @@ void DefaultJITMemoryManager::AllocateGOT() {
   HasGOT = true;
 }
 
-void DefaultJITMemoryManager::SetDlsymTable(void *ptr) {
-  DlsymTable = ptr;
-}
-
 DefaultJITMemoryManager::~DefaultJITMemoryManager() {
   for (unsigned i = 0, e = CodeSlabs.size(); i != e; ++i)
     sys::Memory::ReleaseRWX(CodeSlabs[i]);
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/MacOSJITEventListener.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/MacOSJITEventListener.cpp
deleted file mode 100644
index 53585b8..0000000
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/MacOSJITEventListener.cpp
+++ /dev/null
@@ -1,172 +0,0 @@
-//===-- MacOSJITEventListener.cpp - Save symbol table for OSX perf tools --===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file defines a JITEventListener object that records JITted functions to
-// a global __jitSymbolTable linked list.  Apple's performance tools use this to
-// determine a symbol name and accurate code range for a PC value.  Because
-// performance tools are generally asynchronous, the code below is written with
-// the hope that it could be interrupted at any time and have useful answers.
-// However, we don't go crazy with atomic operations; we just make a
-// "reasonable effort".
-//
-//===----------------------------------------------------------------------===//
-
-#define DEBUG_TYPE "macos-jit-event-listener"
-#include "llvm/Function.h"
-#include "llvm/ExecutionEngine/JITEventListener.h"
-#include <stddef.h>
-using namespace llvm;
-
-#ifdef __APPLE__
-#define ENABLE_JIT_SYMBOL_TABLE 0
-#endif
-
-#if ENABLE_JIT_SYMBOL_TABLE
-
-namespace {
-
-/// JITSymbolEntry - Each function that is JIT compiled results in one of these
-/// being added to an array of symbols.  This indicates the name of the function
-/// as well as the address range it occupies.  This allows the client to map
-/// from a PC value to the name of the function.
-struct JITSymbolEntry {
-  const char *FnName;   // FnName - a strdup'd string.
-  void *FnStart;
-  intptr_t FnSize;
-};
-
-
-struct JITSymbolTable {
-  /// NextPtr - This forms a linked list of JitSymbolTable entries.  This
-  /// pointer is not used right now, but might be used in the future.  Consider
-  /// it reserved for future use.
-  JITSymbolTable *NextPtr;
-  
-  /// Symbols - This is an array of JitSymbolEntry entries.  Only the first
-  /// 'NumSymbols' symbols are valid.
-  JITSymbolEntry *Symbols;
-  
-  /// NumSymbols - This indicates the number of entries in the Symbols array
-  /// that are valid.
-  unsigned NumSymbols;
-  
-  /// NumAllocated - This indicates the amount of space we have in the Symbols
-  /// array.  This is a private field that should not be read by external tools.
-  unsigned NumAllocated;
-};
-
-class MacOSJITEventListener : public JITEventListener {
-public:
-  virtual void NotifyFunctionEmitted(const Function &F,
-                                     void *FnStart, size_t FnSize,
-                                     const EmittedFunctionDetails &Details);
-  virtual void NotifyFreeingMachineCode(const Function &F, void *OldPtr);
-};
-
-}  // anonymous namespace.
-
-// This is a public symbol so the performance tools can find it.
-JITSymbolTable *__jitSymbolTable;
-
-namespace llvm {
-JITEventListener *createMacOSJITEventListener() {
-  return new MacOSJITEventListener;
-}
-}
-
-// Adds the just-emitted function to the symbol table.
-void MacOSJITEventListener::NotifyFunctionEmitted(
-    const Function &F, void *FnStart, size_t FnSize,
-    const EmittedFunctionDetails &) {
-  assert(F.hasName() && FnStart != 0 && "Bad symbol to add");
-  JITSymbolTable **SymTabPtrPtr = 0;
-  SymTabPtrPtr = &__jitSymbolTable;
-
-  // If this is the first entry in the symbol table, add the JITSymbolTable
-  // index.
-  if (*SymTabPtrPtr == 0) {
-    JITSymbolTable *New = new JITSymbolTable();
-    New->NextPtr = 0;
-    New->Symbols = 0;
-    New->NumSymbols = 0;
-    New->NumAllocated = 0;
-    *SymTabPtrPtr = New;
-  }
-
-  JITSymbolTable *SymTabPtr = *SymTabPtrPtr;
-
-  // If we don't have space left in the table, reallocate the table.
-  if (SymTabPtr->NumSymbols >= SymTabPtr->NumAllocated) {
-    unsigned NewSize = std::max(64U, SymTabPtr->NumAllocated*2);
-    JITSymbolEntry *NewSymbols = new JITSymbolEntry[NewSize];
-    JITSymbolEntry *OldSymbols = SymTabPtr->Symbols;
-
-    // Copy the old entries over.
-    memcpy(NewSymbols, OldSymbols, SymTabPtr->NumSymbols*sizeof(OldSymbols[0]));
-
-    // Swap the new symbols in, delete the old ones.
-    SymTabPtr->Symbols = NewSymbols;
-    SymTabPtr->NumAllocated = NewSize;
-    delete [] OldSymbols;
-  }
-
-  // There is now enough space; tack the new entry onto the end of the array.
-  JITSymbolEntry &Entry = SymTabPtr->Symbols[SymTabPtr->NumSymbols];
-  Entry.FnName = strdup(F.getName().data());
-  Entry.FnStart = FnStart;
-  Entry.FnSize = FnSize;
-  ++SymTabPtr->NumSymbols;
-}
-
-// Removes the to-be-deleted function from the symbol table.
-void MacOSJITEventListener::NotifyFreeingMachineCode(
-    const Function &, void *FnStart) {
-  assert(FnStart && "Invalid function pointer");
-  JITSymbolTable **SymTabPtrPtr = 0;
-  SymTabPtrPtr = &__jitSymbolTable;
-
-  JITSymbolTable *SymTabPtr = *SymTabPtrPtr;
-  JITSymbolEntry *Symbols = SymTabPtr->Symbols;
-
-  // Scan the table to find its index.  The table is not sorted, so do a linear
-  // scan.
-  unsigned Index;
-  for (Index = 0; Symbols[Index].FnStart != FnStart; ++Index)
-    assert(Index != SymTabPtr->NumSymbols && "Didn't find function!");
-
-  // Once we have the index, remove this entry by overwriting it with the
-  // entry at the end of the array; the last slot then becomes redundant.
-  const char *OldName = Symbols[Index].FnName;
-  Symbols[Index] = Symbols[SymTabPtr->NumSymbols-1];
-  free((void*)OldName);
-
-  // Drop the number of symbols in the table.
-  --SymTabPtr->NumSymbols;
-
-  // Finally, if we deleted the final symbol, deallocate the table itself.
-  if (SymTabPtr->NumSymbols != 0)
-    return;
-
-  *SymTabPtrPtr = 0;
-  delete [] Symbols;
-  delete SymTabPtr;
-}
-
-#else  // !ENABLE_JIT_SYMBOL_TABLE
-
-namespace llvm {
-// By defining this to return NULL, we can let clients call it unconditionally,
-// even if they aren't on an Apple system.
-JITEventListener *createMacOSJITEventListener() {
-  return NULL;
-}
-}  // namespace llvm
-
-#endif  // ENABLE_JIT_SYMBOL_TABLE
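The deleted listener removed a symbol by overwriting its slot with the last entry and shrinking the count, trading order for O(1) removal and a dense array that an asynchronous reader can still scan. That swap-erase idiom, extracted:

    #include <vector>

    // Swap-with-last removal, as in NotifyFreeingMachineCode above:
    // O(1) and keeps the array dense, but does not preserve order.
    // (Self-assignment when Index is last is harmless.)
    template <typename T>
    static void swapErase(std::vector<T> &V, size_t Index) {
      V[Index] = V.back();
      V.pop_back();
    }
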
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/OProfileJITEventListener.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/OProfileJITEventListener.cpp
index 69398be..b45c71f 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/OProfileJITEventListener.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/OProfileJITEventListener.cpp
@@ -43,7 +43,7 @@ public:
   virtual void NotifyFunctionEmitted(const Function &F,
                                      void *FnStart, size_t FnSize,
                                      const EmittedFunctionDetails &Details);
-  virtual void NotifyFreeingMachineCode(const Function &F, void *OldPtr);
+  virtual void NotifyFreeingMachineCode(void *OldPtr);
 };
 
 OProfileJITEventListener::OProfileJITEventListener()
@@ -69,16 +69,16 @@ OProfileJITEventListener::~OProfileJITEventListener() {
 }
 
 class FilenameCache {
-  // Holds the filename of each CompileUnit, so that we can pass the
+  // Holds the filename of each Scope, so that we can pass the
   // pointer into oprofile.  These char*s are freed in the destructor.
   DenseMap<MDNode*, char*> Filenames;
 
  public:
-  const char *getFilename(MDNode *CompileUnit) {
-    char *&Filename = Filenames[CompileUnit];
+  const char *getFilename(MDNode *Scope) {
+    char *&Filename = Filenames[Scope];
     if (Filename == NULL) {
-      DICompileUnit CU(CompileUnit);
-      Filename = strdup(CU.getFilename());
+      DIScope S(Scope);
+      Filename = strdup(S.getFilename());
     }
     return Filename;
   }
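FilenameCache hands oprofile one stable strdup'd string per debug scope and, per the comment above, frees the copies in its destructor. The same memoization shape without the LLVM types:

    #include <cstdlib>
    #include <cstring>
    #include <map>
    #include <string>

    // Intern one C-string copy per key and free all copies at the end,
    // so callers may hold the returned pointer for the cache's lifetime.
    class CStringCache {
      std::map<std::string, char*> Copies;
    public:
      const char *get(const std::string &Key) {
        char *&Slot = Copies[Key];
        if (!Slot) Slot = strdup(Key.c_str());
        return Slot;
      }
      ~CStringCache() {
        for (auto &KV : Copies) free(KV.second);
      }
    };
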
@@ -97,7 +97,7 @@ static debug_line_info LineStartToOProfileFormat(
   Result.vma = Address;
   const DebugLocTuple &tuple = MF.getDebugLocTuple(Loc);
   Result.lineno = tuple.Line;
-  Result.filename = Filenames.getFilename(tuple.CompileUnit);
+  Result.filename = Filenames.getFilename(tuple.Scope);
   DEBUG(errs() << "Mapping " << reinterpret_cast<void*>(Result.vma) << " to "
                << Result.filename << ":" << Result.lineno << "\n");
   return Result;
@@ -147,13 +147,13 @@ void OProfileJITEventListener::NotifyFunctionEmitted(
   }
 }
 
-// Removes the to-be-deleted function from the symbol table.
-void OProfileJITEventListener::NotifyFreeingMachineCode(
-    const Function &F, void *FnStart) {
+// Removes the function being deleted from the symbol table.
+void OProfileJITEventListener::NotifyFreeingMachineCode(void *FnStart) {
   assert(FnStart && "Invalid function pointer");
   if (op_unload_native_code(Agent, reinterpret_cast<uint64_t>(FnStart)) == -1) {
-    DEBUG(errs() << "Failed to tell OProfile about unload of native function "
-                 << F.getName() << " at " << FnStart << "\n");
+    DEBUG(errs()
+          << "Failed to tell OProfile about unload of native function at "
+          << FnStart << "\n");
   }
 }
 
diff --git a/libclamav/c++/llvm/lib/MC/MCAsmInfo.cpp b/libclamav/c++/llvm/lib/MC/MCAsmInfo.cpp
index 74fb930..3e5c97d 100644
--- a/libclamav/c++/llvm/lib/MC/MCAsmInfo.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCAsmInfo.cpp
@@ -13,7 +13,7 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/MC/MCAsmInfo.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <cctype>
 #include <cstring>
 using namespace llvm;
diff --git a/libclamav/c++/llvm/lib/MC/MCAsmStreamer.cpp b/libclamav/c++/llvm/lib/MC/MCAsmStreamer.cpp
index e56e968..b6ebb1a 100644
--- a/libclamav/c++/llvm/lib/MC/MCAsmStreamer.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCAsmStreamer.cpp
@@ -58,7 +58,7 @@ public:
   virtual void EmitZerofill(const MCSection *Section, MCSymbol *Symbol = 0,
                             unsigned Size = 0, unsigned ByteAlignment = 0);
 
-  virtual void EmitBytes(const StringRef &Data);
+  virtual void EmitBytes(StringRef Data);
 
   virtual void EmitValue(const MCExpr *Value, unsigned Size);
 
@@ -123,23 +123,27 @@ void MCAsmStreamer::EmitAssignment(MCSymbol *Symbol, const MCExpr *Value) {
   OS << " = ";
   Value->print(OS, &MAI);
   OS << '\n';
+
+  // FIXME: Lift context changes into super class.
+  // FIXME: Set associated section.
+  Symbol->setValue(Value);
 }
 
-void MCAsmStreamer::EmitSymbolAttribute(MCSymbol *Symbol, 
+void MCAsmStreamer::EmitSymbolAttribute(MCSymbol *Symbol,
                                         SymbolAttr Attribute) {
   switch (Attribute) {
-  case Global: OS << ".globl"; break;
-  case Hidden: OS << ".hidden"; break;
+  case Global:         OS << ".globl";           break;
+  case Hidden:         OS << ".hidden";          break;
   case IndirectSymbol: OS << ".indirect_symbol"; break;
-  case Internal: OS << ".internal"; break;
-  case LazyReference: OS << ".lazy_reference"; break;
-  case NoDeadStrip: OS << ".no_dead_strip"; break;
-  case PrivateExtern: OS << ".private_extern"; break;
-  case Protected: OS << ".protected"; break;
-  case Reference: OS << ".reference"; break;
-  case Weak: OS << ".weak"; break;
+  case Internal:       OS << ".internal";        break;
+  case LazyReference:  OS << ".lazy_reference";  break;
+  case NoDeadStrip:    OS << ".no_dead_strip";   break;
+  case PrivateExtern:  OS << ".private_extern";  break;
+  case Protected:      OS << ".protected";       break;
+  case Reference:      OS << ".reference";       break;
+  case Weak:           OS << ".weak";            break;
   case WeakDefinition: OS << ".weak_definition"; break;
-  case WeakReference: OS << ".weak_reference"; break;
+  case WeakReference:  OS << ".weak_reference";  break;
   }
 
   OS << ' ';
@@ -182,7 +186,7 @@ void MCAsmStreamer::EmitZerofill(const MCSection *Section, MCSymbol *Symbol,
   OS << '\n';
 }
 
-void MCAsmStreamer::EmitBytes(const StringRef &Data) {
+void MCAsmStreamer::EmitBytes(StringRef Data) {
   assert(CurSection && "Cannot emit contents before setting section!");
   for (unsigned i = 0, e = Data.size(); i != e; ++i)
     OS << ".byte " << (unsigned) (unsigned char) Data[i] << '\n';
diff --git a/libclamav/c++/llvm/lib/MC/MCAssembler.cpp b/libclamav/c++/llvm/lib/MC/MCAssembler.cpp
index 0afdf98..1f5b6f1 100644
--- a/libclamav/c++/llvm/lib/MC/MCAssembler.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCAssembler.cpp
@@ -9,7 +9,10 @@
 
 #define DEBUG_TYPE "assembler"
 #include "llvm/MC/MCAssembler.h"
+#include "llvm/MC/MCExpr.h"
 #include "llvm/MC/MCSectionMachO.h"
+#include "llvm/MC/MCSymbol.h"
+#include "llvm/MC/MCValue.h"
 #include "llvm/Target/TargetMachOWriterInfo.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/SmallString.h"
@@ -48,7 +51,7 @@ class MachObjectWriter {
     Header_Magic32 = 0xFEEDFACE,
     Header_Magic64 = 0xFEEDFACF
   };
-  
+
   static const unsigned Header32Size = 28;
   static const unsigned Header64Size = 32;
   static const unsigned SegmentLoadCommand32Size = 56;
@@ -127,7 +130,7 @@ class MachObjectWriter {
   bool IsLSB;
 
 public:
-  MachObjectWriter(raw_ostream &_OS, bool _IsLSB = true) 
+  MachObjectWriter(raw_ostream &_OS, bool _IsLSB = true)
     : OS(_OS), IsLSB(_IsLSB) {
   }
 
@@ -170,21 +173,21 @@ public:
 
   void WriteZeros(unsigned N) {
     const char Zeros[16] = { 0 };
-    
+
     for (unsigned i = 0, e = N / 16; i != e; ++i)
       OS << StringRef(Zeros, 16);
-    
+
     OS << StringRef(Zeros, N % 16);
   }
 
-  void WriteString(const StringRef &Str, unsigned ZeroFillSize = 0) {
+  void WriteString(StringRef Str, unsigned ZeroFillSize = 0) {
     OS << Str;
     if (ZeroFillSize)
       WriteZeros(ZeroFillSize - Str.size());
   }
 
   /// @}
-  
+
   void WriteHeader32(unsigned NumLoadCommands, unsigned LoadCommandsSize,
                      bool SubsectionsViaSymbols) {
     uint32_t Flags = 0;
@@ -384,7 +387,7 @@ public:
     Write32(MSD.StringIndex);
     Write8(Type);
     Write8(MSD.SectionIndex);
-    
+
     // The Mach-O streamer uses the lowest 16-bits of the flags for the 'desc'
     // value.
     Write16(Flags);
@@ -397,6 +400,7 @@ public:
   };
   void ComputeScatteredRelocationInfo(MCAssembler &Asm,
                                       MCSectionData::Fixup &Fixup,
+                                      const MCValue &Target,
                              DenseMap<const MCSymbol*,MCSymbolData*> &SymbolMap,
                                      std::vector<MachRelocationEntry> &Relocs) {
     uint32_t Address = Fixup.Fragment->getOffset() + Fixup.Offset;
@@ -404,13 +408,12 @@ public:
     unsigned Type = RIT_Vanilla;
 
     // See <reloc.h>.
-
-    const MCSymbol *A = Fixup.Value.getSymA();
+    const MCSymbol *A = Target.getSymA();
     MCSymbolData *SD = SymbolMap.lookup(A);
     uint32_t Value = SD->getFragment()->getAddress() + SD->getOffset();
     uint32_t Value2 = 0;
 
-    if (const MCSymbol *B = Fixup.Value.getSymB()) {
+    if (const MCSymbol *B = Target.getSymB()) {
       Type = RIT_LocalDifference;
 
       MCSymbolData *SD = SymbolMap.lookup(B);
@@ -421,7 +424,7 @@ public:
     assert((1U << Log2Size) == Fixup.Size && "Invalid fixup size!");
 
     // The value which goes in the fixup is the current value of the expression.
-    Fixup.FixedValue = Value - Value2 + Fixup.Value.getConstant();
+    Fixup.FixedValue = Value - Value2 + Target.getConstant();
 
     MachRelocationEntry MRE;
     MRE.Word0 = ((Address   <<  0) |
@@ -450,14 +453,18 @@ public:
                              MCSectionData::Fixup &Fixup,
                              DenseMap<const MCSymbol*,MCSymbolData*> &SymbolMap,
                              std::vector<MachRelocationEntry> &Relocs) {
-    // If this is a local symbol plus an offset or a difference, then we need a
+    MCValue Target;
+    if (!Fixup.Value->EvaluateAsRelocatable(Target))
+      llvm_report_error("expected relocatable expression");
+
+    // If this is a difference or a local symbol plus an offset, then we need a
     // scattered relocation entry.
-    if (Fixup.Value.getSymB()) // a - b
-      return ComputeScatteredRelocationInfo(Asm, Fixup, SymbolMap, Relocs);
-    if (Fixup.Value.getSymA() && Fixup.Value.getConstant())
-      if (!Fixup.Value.getSymA()->isUndefined())
-        return ComputeScatteredRelocationInfo(Asm, Fixup, SymbolMap, Relocs);
-        
+    if (Target.getSymB() ||
+        (Target.getSymA() && !Target.getSymA()->isUndefined() &&
+         Target.getConstant()))
+      return ComputeScatteredRelocationInfo(Asm, Fixup, Target,
+                                            SymbolMap, Relocs);
+
     // See <reloc.h>.
     uint32_t Address = Fixup.Fragment->getOffset() + Fixup.Offset;
     uint32_t Value = 0;
@@ -466,15 +473,15 @@ public:
     unsigned IsExtern = 0;
     unsigned Type = 0;
 
-    if (Fixup.Value.isAbsolute()) { // constant
+    if (Target.isAbsolute()) { // constant
       // SymbolNum of 0 indicates the absolute section.
       Type = RIT_Vanilla;
       Value = 0;
       llvm_unreachable("FIXME: Not yet implemented!");
     } else {
-      const MCSymbol *Symbol = Fixup.Value.getSymA();
+      const MCSymbol *Symbol = Target.getSymA();
       MCSymbolData *SD = SymbolMap.lookup(Symbol);
-      
+
       if (Symbol->isUndefined()) {
         IsExtern = 1;
         Index = SD->getIndex();
@@ -495,7 +502,7 @@ public:
     }
 
     // The value which goes in the fixup is the current value of the expression.
-    Fixup.FixedValue = Value + Fixup.Value.getConstant();
+    Fixup.FixedValue = Value + Target.getConstant();
 
     unsigned Log2Size = Log2_32(Fixup.Size);
     assert((1U << Log2Size) == Fixup.Size && "Invalid fixup size!");
@@ -510,7 +517,7 @@ public:
                  (Type      << 28));
     Relocs.push_back(MRE);
   }
-  
+
   void BindIndirectSymbols(MCAssembler &Asm,
                            DenseMap<const MCSymbol*,MCSymbolData*> &SymbolMap) {
     // This is the point where 'as' creates actual symbols for indirect symbols
@@ -703,7 +710,7 @@ public:
     if (NumSymbols)
       ComputeSymbolTable(Asm, StringTable, LocalSymbolData, ExternalSymbolData,
                          UndefinedSymbolData);
-  
+
     // The section data starts after the header, the segment load command (and
     // section headers) and the symbol table.
     unsigned NumLoadCommands = 1;
@@ -733,7 +740,7 @@ public:
 
       SectionDataSize = std::max(SectionDataSize,
                                  SD.getAddress() + SD.getSize());
-      SectionDataFileSize = std::max(SectionDataFileSize, 
+      SectionDataFileSize = std::max(SectionDataFileSize,
                                      SD.getAddress() + SD.getFileSize());
     }
 
@@ -748,9 +755,9 @@ public:
                   Asm.getSubsectionsViaSymbols());
     WriteSegmentLoadCommand32(NumSections, VMSize,
                               SectionDataStart, SectionDataSize);
-  
+
     // ... and then the section headers.
-    // 
+    //
     // We also compute the section relocations while we do this. Note that
     // compute relocation info will also update the fixup to have the correct
     // value; this will overwrite the appropriate data in the fragment when
@@ -774,7 +781,7 @@ public:
       WriteSection32(SD, SectionStart, RelocTableEnd, NumRelocs);
       RelocTableEnd += NumRelocs * RelocationInfoSize;
     }
-    
+
     // Write the symbol table load command, if used.
     if (NumSymbols) {
       unsigned FirstLocalSymbol = 0;
@@ -923,7 +930,7 @@ MCSectionData::LookupFixup(const MCFragment *Fragment, uint64_t Offset) const {
 
   return 0;
 }
-                                                       
+
 /* *** */
 
 MCSymbolData::MCSymbolData() : Symbol(0) {}
@@ -960,7 +967,7 @@ void MCAssembler::LayoutSection(MCSectionData &SD) {
     switch (F.getKind()) {
     case MCFragment::FT_Align: {
       MCAlignFragment &AF = cast<MCAlignFragment>(F);
-      
+
       uint64_t Size = OffsetToAlignment(Address, AF.getAlignment());
       if (Size > AF.getMaxBytesToEmit())
         AF.setFileSize(0);
@@ -978,8 +985,12 @@ void MCAssembler::LayoutSection(MCSectionData &SD) {
 
       F.setFileSize(F.getMaxFileSize());
 
+      MCValue Target;
+      if (!FF.getValue().EvaluateAsRelocatable(Target))
+        llvm_report_error("expected relocatable expression");
+
       // If the fill value is constant, that's it.
-      if (FF.getValue().isAbsolute())
+      if (Target.isAbsolute())
         break;
 
       // Otherwise, add fixups for the values.
@@ -994,19 +1005,23 @@ void MCAssembler::LayoutSection(MCSectionData &SD) {
     case MCFragment::FT_Org: {
       MCOrgFragment &OF = cast<MCOrgFragment>(F);
 
-      if (!OF.getOffset().isAbsolute())
+      MCValue Target;
+      if (!OF.getOffset().EvaluateAsRelocatable(Target))
+        llvm_report_error("expected relocatable expression");
+
+      if (!Target.isAbsolute())
         llvm_unreachable("FIXME: Not yet implemented!");
-      uint64_t OrgOffset = OF.getOffset().getConstant();
+      uint64_t OrgOffset = Target.getConstant();
       uint64_t Offset = Address - SD.getAddress();
 
       // FIXME: We need a way to communicate this error.
       if (OrgOffset < Offset)
-        llvm_report_error("invalid .org offset '" + Twine(OrgOffset) + 
+        llvm_report_error("invalid .org offset '" + Twine(OrgOffset) +
                           "' (at offset '" + Twine(Offset) + "'");
-        
+
       F.setFileSize(OrgOffset - Offset);
       break;
-    }      
+    }
 
     case MCFragment::FT_ZeroFill: {
       MCZeroFillFragment &ZFF = cast<MCZeroFillFragment>(F);
@@ -1038,7 +1053,7 @@ static void WriteFileData(raw_ostream &OS, const MCFragment &F,
                           MachObjectWriter &MOW) {
   uint64_t Start = OS.tell();
   (void) Start;
-    
+
   ++EmittedFragments;
 
   // FIXME: Embed in fragments instead?
@@ -1051,8 +1066,8 @@ static void WriteFileData(raw_ostream &OS, const MCFragment &F,
     // multiple .align directives to enforce the semantics it wants), but is
     // severe enough that we want to report it. How to handle this?
     if (Count * AF.getValueSize() != AF.getFileSize())
-      llvm_report_error("undefined .align directive, value size '" + 
-                        Twine(AF.getValueSize()) + 
+      llvm_report_error("undefined .align directive, value size '" +
+                        Twine(AF.getValueSize()) +
                         "' is not a divisor of padding size '" +
                         Twine(AF.getFileSize()) + "'");
 
@@ -1077,10 +1092,15 @@ static void WriteFileData(raw_ostream &OS, const MCFragment &F,
     MCFillFragment &FF = cast<MCFillFragment>(F);
 
     int64_t Value = 0;
-    if (FF.getValue().isAbsolute())
-      Value = FF.getValue().getConstant();
+
+    MCValue Target;
+    if (!FF.getValue().EvaluateAsRelocatable(Target))
+      llvm_report_error("expected relocatable expression");
+
+    if (Target.isAbsolute())
+      Value = Target.getConstant();
     for (uint64_t i = 0, e = FF.getCount(); i != e; ++i) {
-      if (!FF.getValue().isAbsolute()) {
+      if (!Target.isAbsolute()) {
         // Find the fixup.
         //
         // FIXME: Find a better way to write in the fixes.
@@ -1101,7 +1121,7 @@ static void WriteFileData(raw_ostream &OS, const MCFragment &F,
     }
     break;
   }
-    
+
   case MCFragment::FT_Org: {
     MCOrgFragment &OF = cast<MCOrgFragment>(F);
 
@@ -1131,7 +1151,7 @@ static void WriteFileData(raw_ostream &OS, const MCSectionData &SD,
 
   uint64_t Start = OS.tell();
   (void) Start;
-      
+
   for (MCSectionData::const_iterator it = SD.begin(),
          ie = SD.end(); it != ie; ++it)
     WriteFileData(OS, *it, MOW);
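
A note on the hunks above: fixups and fill fragments now carry the original
MCExpr, and the Mach-O writer only folds it down to a symbolic value when
relocations are computed.  A minimal sketch of the decision the reworked
ComputeRelocationInfo makes, with simplified stand-in types rather than the
real LLVM classes:

    #include <cstdint>

    struct Symbol { const char *Name; bool Undefined; };

    // A relocatable expression always reduces to SymA - SymB + Constant
    // (the shape of llvm::MCValue).
    struct RelocatableValue {
      const Symbol *SymA;   // may be null
      const Symbol *SymB;   // may be null
      int64_t Constant;
    };

    // Mirrors the test above: a difference (a - b), or a defined symbol
    // plus a non-zero offset, needs a scattered relocation entry.
    static bool needsScatteredReloc(const RelocatableValue &Target) {
      if (Target.SymB)
        return true;
      return Target.SymA && !Target.SymA->Undefined && Target.Constant != 0;
    }
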
diff --git a/libclamav/c++/llvm/lib/MC/MCContext.cpp b/libclamav/c++/llvm/lib/MC/MCContext.cpp
index f36564a..45d2c02 100644
--- a/libclamav/c++/llvm/lib/MC/MCContext.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCContext.cpp
@@ -8,10 +8,11 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/MC/MCContext.h"
-
 #include "llvm/MC/MCSection.h"
 #include "llvm/MC/MCSymbol.h"
 #include "llvm/MC/MCValue.h"
+#include "llvm/ADT/SmallString.h"
+#include "llvm/ADT/Twine.h"
 using namespace llvm;
 
 MCContext::MCContext() {
@@ -22,7 +23,7 @@ MCContext::~MCContext() {
   // we don't need to free them here.
 }
 
-MCSymbol *MCContext::CreateSymbol(const StringRef &Name) {
+MCSymbol *MCContext::CreateSymbol(StringRef Name) {
   assert(Name[0] != '\0' && "Normal symbols cannot be unnamed!");
 
   // Create and bind the symbol, and ensure that names are unique.
@@ -31,14 +32,21 @@ MCSymbol *MCContext::CreateSymbol(const StringRef &Name) {
   return Entry = new (*this) MCSymbol(Name, false);
 }
 
-MCSymbol *MCContext::GetOrCreateSymbol(const StringRef &Name) {
+MCSymbol *MCContext::GetOrCreateSymbol(StringRef Name) {
   MCSymbol *&Entry = Symbols[Name];
   if (Entry) return Entry;
 
   return Entry = new (*this) MCSymbol(Name, false);
 }
 
-MCSymbol *MCContext::CreateTemporarySymbol(const StringRef &Name) {
+MCSymbol *MCContext::GetOrCreateSymbol(const Twine &Name) {
+  SmallString<128> NameSV;
+  Name.toVector(NameSV);
+  return GetOrCreateSymbol(NameSV.str());
+}
+
+
+MCSymbol *MCContext::CreateTemporarySymbol(StringRef Name) {
   // If unnamed, just create a symbol.
   if (Name.empty())
     new (*this) MCSymbol("", true);
@@ -49,23 +57,6 @@ MCSymbol *MCContext::CreateTemporarySymbol(const StringRef &Name) {
   return Entry = new (*this) MCSymbol(Name, true);
 }
 
-MCSymbol *MCContext::LookupSymbol(const StringRef &Name) const {
+MCSymbol *MCContext::LookupSymbol(StringRef Name) const {
   return Symbols.lookup(Name);
 }
-
-void MCContext::ClearSymbolValue(const MCSymbol *Sym) {
-  SymbolValues.erase(Sym);
-}
-
-void MCContext::SetSymbolValue(const MCSymbol *Sym, const MCValue &Value) {
-  SymbolValues[Sym] = Value;
-}
-
-const MCValue *MCContext::GetSymbolValue(const MCSymbol *Sym) const {
-  DenseMap<const MCSymbol*, MCValue>::iterator it = SymbolValues.find(Sym);
-
-  if (it == SymbolValues.end())
-    return 0;
-
-  return &it->second;
-}
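
The deleted ClearSymbolValue/SetSymbolValue/GetSymbolValue block is the
visible half of an ownership change: the context no longer keeps a side
table of symbol values.  A rough sketch of where that state lives instead
(stand-in types; the real storage is on MCSymbol, set via
MCSymbol::setValue in the MCMachOStreamer hunk further down):

    struct Expr;                          // stand-in for MCExpr

    struct Symbol {
      const Expr *Value;                  // null unless this is a variable
      Symbol() : Value(0) {}
      bool isVariable() const { return Value != 0; }
      void setValue(const Expr *V) { Value = V; }
      const Expr *getValue() const { return Value; }
    };
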
diff --git a/libclamav/c++/llvm/lib/MC/MCDisassembler.cpp b/libclamav/c++/llvm/lib/MC/MCDisassembler.cpp
index 0985602..0809690 100644
--- a/libclamav/c++/llvm/lib/MC/MCDisassembler.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCDisassembler.cpp
@@ -11,4 +11,4 @@
 using namespace llvm;
 
 MCDisassembler::~MCDisassembler() {
-}
\ No newline at end of file
+}
diff --git a/libclamav/c++/llvm/lib/MC/MCExpr.cpp b/libclamav/c++/llvm/lib/MC/MCExpr.cpp
index 0f3e053..a5a2256 100644
--- a/libclamav/c++/llvm/lib/MC/MCExpr.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCExpr.cpp
@@ -133,18 +133,17 @@ const MCSymbolRefExpr *MCSymbolRefExpr::Create(const MCSymbol *Sym,
   return new (Ctx) MCSymbolRefExpr(Sym);
 }
 
-const MCSymbolRefExpr *MCSymbolRefExpr::Create(const StringRef &Name,
-                                               MCContext &Ctx) {
+const MCSymbolRefExpr *MCSymbolRefExpr::Create(StringRef Name, MCContext &Ctx) {
   return Create(Ctx.GetOrCreateSymbol(Name), Ctx);
 }
 
 
 /* *** */
 
-bool MCExpr::EvaluateAsAbsolute(MCContext &Ctx, int64_t &Res) const {
+bool MCExpr::EvaluateAsAbsolute(int64_t &Res) const {
   MCValue Value;
   
-  if (!EvaluateAsRelocatable(Ctx, Value) || !Value.isAbsolute())
+  if (!EvaluateAsRelocatable(Value) || !Value.isAbsolute())
     return false;
 
   Res = Value.getConstant();
@@ -173,7 +172,7 @@ static bool EvaluateSymbolicAdd(const MCValue &LHS, const MCSymbol *RHS_A,
   return true;
 }
 
-bool MCExpr::EvaluateAsRelocatable(MCContext &Ctx, MCValue &Res) const {
+bool MCExpr::EvaluateAsRelocatable(MCValue &Res) const {
   switch (getKind()) {
   case Constant:
     Res = MCValue::get(cast<MCConstantExpr>(this)->getValue());
@@ -181,10 +180,12 @@ bool MCExpr::EvaluateAsRelocatable(MCContext &Ctx, MCValue &Res) const {
 
   case SymbolRef: {
     const MCSymbol &Sym = cast<MCSymbolRefExpr>(this)->getSymbol();
-    if (const MCValue *Value = Ctx.GetSymbolValue(&Sym))
-      Res = *Value;
-    else
-      Res = MCValue::get(&Sym, 0, 0);
+
+    // Evaluate recursively if this is a variable.
+    if (Sym.isVariable())
+      return Sym.getValue()->EvaluateAsRelocatable(Res);
+
+    Res = MCValue::get(&Sym, 0, 0);
     return true;
   }
 
@@ -192,7 +193,7 @@ bool MCExpr::EvaluateAsRelocatable(MCContext &Ctx, MCValue &Res) const {
     const MCUnaryExpr *AUE = cast<MCUnaryExpr>(this);
     MCValue Value;
 
-    if (!AUE->getSubExpr()->EvaluateAsRelocatable(Ctx, Value))
+    if (!AUE->getSubExpr()->EvaluateAsRelocatable(Value))
       return false;
 
     switch (AUE->getOpcode()) {
@@ -225,8 +226,8 @@ bool MCExpr::EvaluateAsRelocatable(MCContext &Ctx, MCValue &Res) const {
     const MCBinaryExpr *ABE = cast<MCBinaryExpr>(this);
     MCValue LHSValue, RHSValue;
     
-    if (!ABE->getLHS()->EvaluateAsRelocatable(Ctx, LHSValue) ||
-        !ABE->getRHS()->EvaluateAsRelocatable(Ctx, RHSValue))
+    if (!ABE->getLHS()->EvaluateAsRelocatable(LHSValue) ||
+        !ABE->getRHS()->EvaluateAsRelocatable(RHSValue))
       return false;
 
     // We only support a few operations on non-constant expressions, handle
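
With symbol values gone from MCContext, EvaluateAsRelocatable no longer
needs a context argument: a symbol reference either stands for itself or,
if the symbol is a variable, recurses into its defining expression.  A
self-contained sketch of that recursion with simplified types (note the
real code assumes assignments are acyclic; a cycle such as a = b, b = a
would recurse forever):

    struct Sym;

    struct Expr {
      const Sym *Ref;      // non-null: a symbol reference
      long Constant;       // used when Ref is null
      bool evaluate(long &Res, const Sym *&SymOut) const;
    };

    struct Sym { const Expr *Def; };  // Def non-null => variable

    bool Expr::evaluate(long &Res, const Sym *&SymOut) const {
      if (Ref) {
        if (Ref->Def)                 // variable: evaluate its definition
          return Ref->Def->evaluate(Res, SymOut);
        Res = 0; SymOut = Ref;        // plain symbol: symbolic result
        return true;
      }
      Res = Constant; SymOut = 0;     // constant: absolute result
      return true;
    }
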
diff --git a/libclamav/c++/llvm/lib/MC/MCInstPrinter.cpp b/libclamav/c++/llvm/lib/MC/MCInstPrinter.cpp
index 6c33216..e90c03c 100644
--- a/libclamav/c++/llvm/lib/MC/MCInstPrinter.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCInstPrinter.cpp
@@ -11,4 +11,4 @@
 using namespace llvm;
 
 MCInstPrinter::~MCInstPrinter() {
-}
\ No newline at end of file
+}
diff --git a/libclamav/c++/llvm/lib/MC/MCMachOStreamer.cpp b/libclamav/c++/llvm/lib/MC/MCMachOStreamer.cpp
index e04bd1f..828b92a 100644
--- a/libclamav/c++/llvm/lib/MC/MCMachOStreamer.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCMachOStreamer.cpp
@@ -134,7 +134,7 @@ public:
   virtual void EmitZerofill(const MCSection *Section, MCSymbol *Symbol = 0,
                             unsigned Size = 0, unsigned ByteAlignment = 0);
 
-  virtual void EmitBytes(const StringRef &Data);
+  virtual void EmitBytes(StringRef Data);
 
   virtual void EmitValue(const MCExpr *Value, unsigned Size);
 
@@ -198,7 +198,9 @@ void MCMachOStreamer::EmitAssignment(MCSymbol *Symbol, const MCExpr *Value) {
   assert((Symbol->isUndefined() || Symbol->isAbsolute()) &&
          "Cannot define a symbol twice!");
 
-  llvm_unreachable("FIXME: Not yet implemented!");
+  // FIXME: Lift context changes into super class.
+  // FIXME: Set associated section.
+  Symbol->setValue(Value);
 }
 
 void MCMachOStreamer::EmitSymbolAttribute(MCSymbol *Symbol,
@@ -313,7 +315,7 @@ void MCMachOStreamer::EmitZerofill(const MCSection *Section, MCSymbol *Symbol,
     SectData.setAlignment(ByteAlignment);
 }
 
-void MCMachOStreamer::EmitBytes(const StringRef &Data) {
+void MCMachOStreamer::EmitBytes(StringRef Data) {
   MCDataFragment *DF = dyn_cast_or_null<MCDataFragment>(getCurrentFragment());
   if (!DF)
     DF = new MCDataFragment(CurSectionData);
@@ -321,12 +323,7 @@ void MCMachOStreamer::EmitBytes(const StringRef &Data) {
 }
 
 void MCMachOStreamer::EmitValue(const MCExpr *Value, unsigned Size) {
-  MCValue RelocValue;
-
-  if (!AddValueSymbols(Value)->EvaluateAsRelocatable(getContext(), RelocValue))
-    return llvm_report_error("expected relocatable expression");
-
-  new MCFillFragment(RelocValue, Size, 1, CurSectionData);
+  new MCFillFragment(*AddValueSymbols(Value), Size, 1, CurSectionData);
 }
 
 void MCMachOStreamer::EmitValueToAlignment(unsigned ByteAlignment,
@@ -344,13 +341,7 @@ void MCMachOStreamer::EmitValueToAlignment(unsigned ByteAlignment,
 
 void MCMachOStreamer::EmitValueToOffset(const MCExpr *Offset,
                                         unsigned char Value) {
-  MCValue RelocOffset;
-
-  if (!AddValueSymbols(Offset)->EvaluateAsRelocatable(getContext(),
-                                                      RelocOffset))
-    return llvm_report_error("expected relocatable expression");
-
-  new MCOrgFragment(RelocOffset, Value, CurSectionData);
+  new MCOrgFragment(*Offset, Value, CurSectionData);
 }
 
 void MCMachOStreamer::EmitInstruction(const MCInst &Inst) {
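
This is the streamer-side half of the deferral: EmitValue and
EmitValueToOffset used to evaluate the expression immediately and could
fail with "expected relocatable expression" at emit time; now the raw
MCExpr rides along in the fragment and evaluation happens during layout,
once addresses are known.  A hypothetical sketch of the pattern:

    struct Expr;

    struct Fragment {
      const Expr *Value;   // stored unevaluated at emit time
      unsigned Size;
    };

    // No evaluation here any more; MCAssembler::LayoutSection and
    // WriteFileData evaluate Value (and diagnose failures) later.
    static Fragment emitValue(const Expr *Value, unsigned Size) {
      Fragment F = { Value, Size };
      return F;
    }
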
diff --git a/libclamav/c++/llvm/lib/MC/MCNullStreamer.cpp b/libclamav/c++/llvm/lib/MC/MCNullStreamer.cpp
index 3cd22ca..ddc4e69 100644
--- a/libclamav/c++/llvm/lib/MC/MCNullStreamer.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCNullStreamer.cpp
@@ -45,7 +45,7 @@ namespace {
     virtual void EmitZerofill(const MCSection *Section, MCSymbol *Symbol = 0,
                               unsigned Size = 0, unsigned ByteAlignment = 0) {}
 
-    virtual void EmitBytes(const StringRef &Data) {}
+    virtual void EmitBytes(StringRef Data) {}
 
     virtual void EmitValue(const MCExpr *Value, unsigned Size) {}
 
diff --git a/libclamav/c++/llvm/lib/MC/MCSection.cpp b/libclamav/c++/llvm/lib/MC/MCSection.cpp
index 333a471..24c89ef 100644
--- a/libclamav/c++/llvm/lib/MC/MCSection.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCSection.cpp
@@ -25,7 +25,7 @@ MCSection::~MCSection() {
 //===----------------------------------------------------------------------===//
 
 MCSectionCOFF *MCSectionCOFF::
-Create(const StringRef &Name, bool IsDirective, SectionKind K, MCContext &Ctx) {
+Create(StringRef Name, bool IsDirective, SectionKind K, MCContext &Ctx) {
   return new (Ctx) MCSectionCOFF(Name, IsDirective, K);
 }
 
diff --git a/libclamav/c++/llvm/lib/MC/MCSectionELF.cpp b/libclamav/c++/llvm/lib/MC/MCSectionELF.cpp
index 660a8c9..c6812ed 100644
--- a/libclamav/c++/llvm/lib/MC/MCSectionELF.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCSectionELF.cpp
@@ -15,7 +15,7 @@
 using namespace llvm;
 
 MCSectionELF *MCSectionELF::
-Create(const StringRef &Section, unsigned Type, unsigned Flags,
+Create(StringRef Section, unsigned Type, unsigned Flags,
        SectionKind K, bool isExplicit, MCContext &Ctx) {
   return new (Ctx) MCSectionELF(Section, Type, Flags, K, isExplicit);
 }
diff --git a/libclamav/c++/llvm/lib/MC/MCSectionMachO.cpp b/libclamav/c++/llvm/lib/MC/MCSectionMachO.cpp
index 33f5087..6cc67a2 100644
--- a/libclamav/c++/llvm/lib/MC/MCSectionMachO.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCSectionMachO.cpp
@@ -40,8 +40,8 @@ static const struct {
 
 /// SectionAttrDescriptors - This is an array of descriptors for section
 /// attributes.  Unlike the SectionTypeDescriptors, this is not directly indexed
-/// by attribute, instead it is searched.  The last entry has a zero AttrFlag
-/// value.
+/// by attribute; instead it is searched.  The last entry has an AttrFlagEnd
+/// AttrFlag value.
 static const struct {
   unsigned AttrFlag;
   const char *AssemblerName, *EnumName;
@@ -59,12 +59,14 @@ ENTRY(0 /*FIXME*/,           S_ATTR_SOME_INSTRUCTIONS)
 ENTRY(0 /*FIXME*/,           S_ATTR_EXT_RELOC)
 ENTRY(0 /*FIXME*/,           S_ATTR_LOC_RELOC)
 #undef ENTRY
-  { 0, "none", 0 }
+  { 0, "none", 0 }, // used if section has no attributes but has a stub size
+#define AttrFlagEnd 0xffffffff // not a legal value: multiple attribute bits set
+  { AttrFlagEnd, 0, 0 }
 };
 
 
 MCSectionMachO *MCSectionMachO::
-Create(const StringRef &Segment, const StringRef &Section,
+Create(StringRef Segment, StringRef Section,
        unsigned TypeAndAttributes, unsigned Reserved2,
        SectionKind K, MCContext &Ctx) {
   // S_SYMBOL_STUBS must be set for Reserved2 to be non-zero.
@@ -228,7 +230,7 @@ std::string MCSectionMachO::ParseSectionSpecifier(StringRef Spec,        // In.
 
     // Look up the attribute.
     for (unsigned i = 0; ; ++i) {
-      if (SectionAttrDescriptors[i].AttrFlag == 0)
+      if (SectionAttrDescriptors[i].AttrFlag == AttrFlagEnd)
         return "mach-o section specifier has invalid attribute";
       
       if (SectionAttrDescriptors[i].AssemblerName &&
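
Context for AttrFlagEnd: zero used to double as the table terminator, but
the new "none" row makes zero a legitimate AttrFlag, so the scan needs a
sentinel that can never be a real flag (0xffffffff has multiple attribute
bits set, which no single entry can).  A standalone sketch of the idiom,
with made-up entries:

    #include <cstring>

    struct AttrDesc { unsigned Flag; const char *Name; };

    static const unsigned kAttrEnd = 0xffffffff;  // impossible flag value

    static const AttrDesc Descs[] = {
      { 1u << 31, "pure_instructions" },
      { 0,        "none" },      // zero is now a legal entry...
      { kAttrEnd, 0 }            // ...so the terminator must differ
    };

    static const AttrDesc *lookupAttr(const char *Name) {
      for (unsigned i = 0; Descs[i].Flag != kAttrEnd; ++i)
        if (Descs[i].Name && std::strcmp(Descs[i].Name, Name) == 0)
          return &Descs[i];
      return 0;                  // unknown attribute
    }
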
diff --git a/libclamav/c++/llvm/lib/MC/MCSymbol.cpp b/libclamav/c++/llvm/lib/MC/MCSymbol.cpp
index 86ff3f3..b145d07 100644
--- a/libclamav/c++/llvm/lib/MC/MCSymbol.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCSymbol.cpp
@@ -35,7 +35,7 @@ static void MangleLetter(raw_ostream &OS, unsigned char C) {
 
 /// NameNeedsEscaping - Return true if the identifier \arg Str needs quotes
 /// for this assembler.
-static bool NameNeedsEscaping(const StringRef &Str, const MCAsmInfo &MAI) {
+static bool NameNeedsEscaping(StringRef Str, const MCAsmInfo &MAI) {
   assert(!Str.empty() && "Cannot create an empty MCSymbol");
   
   // If the first character is a number and the target does not allow this, we
diff --git a/libclamav/c++/llvm/lib/Makefile b/libclamav/c++/llvm/lib/Makefile
index 1e87d9e..3807f31 100644
--- a/libclamav/c++/llvm/lib/Makefile
+++ b/libclamav/c++/llvm/lib/Makefile
@@ -11,7 +11,7 @@ LEVEL = ..
 include $(LEVEL)/Makefile.config
 
 PARALLEL_DIRS := VMCore AsmParser Bitcode Archive Analysis Transforms CodeGen \
-                Target ExecutionEngine Debugger Linker MC CompilerDriver
+                Target ExecutionEngine Linker MC CompilerDriver
 
 include $(LEVEL)/Makefile.common
 
diff --git a/libclamav/c++/llvm/lib/Support/APFloat.cpp b/libclamav/c++/llvm/lib/Support/APFloat.cpp
index e431d27..b9b323c 100644
--- a/libclamav/c++/llvm/lib/Support/APFloat.cpp
+++ b/libclamav/c++/llvm/lib/Support/APFloat.cpp
@@ -48,6 +48,7 @@ namespace llvm {
     unsigned int arithmeticOK;
   };
 
+  const fltSemantics APFloat::IEEEhalf = { 15, -14, 11, true };
   const fltSemantics APFloat::IEEEsingle = { 127, -126, 24, true };
   const fltSemantics APFloat::IEEEdouble = { 1023, -1022, 53, true };
   const fltSemantics APFloat::IEEEquad = { 16383, -16382, 113, true };
@@ -420,7 +421,7 @@ ulpsFromBoundary(const integerPart *parts, unsigned int bits, bool isNearest)
   unsigned int count, partBits;
   integerPart part, boundary;
 
-  assert (bits != 0);
+  assert(bits != 0);
 
   bits--;
   count = bits / integerPartWidth;
@@ -536,7 +537,7 @@ partAsHex (char *dst, integerPart part, unsigned int count,
 {
   unsigned int result = count;
 
-  assert (count != 0 && count <= integerPartWidth / 4);
+  assert(count != 0 && count <= integerPartWidth / 4);
 
   part >>= (integerPartWidth - 4 * count);
   while (count--) {
@@ -759,7 +760,7 @@ APFloat::significandParts()
 {
   assert(category == fcNormal || category == fcNaN);
 
-  if(partCount() > 1)
+  if (partCount() > 1)
     return significand.parts;
   else
     return &significand.part;
@@ -2288,8 +2289,8 @@ APFloat::roundSignificandWithExponent(const integerPart *decSigParts,
 
     /* Both multiplySignificand and divideSignificand return the
        result with the integer bit set.  */
-    assert (APInt::tcExtractBit
-            (decSig.significandParts(), calcSemantics.precision - 1) == 1);
+    assert(APInt::tcExtractBit
+           (decSig.significandParts(), calcSemantics.precision - 1) == 1);
 
     HUerr = HUerrBound(calcLostFraction != lfExactlyZero, sigStatus != opOK,
                        powHUerr);
@@ -2592,7 +2593,7 @@ APFloat::convertNormalToHexString(char *dst, unsigned int hexDigits,
       q--;
       *q = hexDigitChars[hexDigitValue (*q) + 1];
     } while (*q == '0');
-    assert (q >= p);
+    assert(q >= p);
   } else {
     /* Add trailing zeroes.  */
     memset (dst, '0', outputDigits);
@@ -2644,7 +2645,7 @@ APInt
 APFloat::convertF80LongDoubleAPFloatToAPInt() const
 {
   assert(semantics == (const llvm::fltSemantics*)&x87DoubleExtended);
-  assert (partCount()==2);
+  assert(partCount()==2);
 
   uint64_t myexponent, mysignificand;
 
@@ -2676,7 +2677,7 @@ APInt
 APFloat::convertPPCDoubleDoubleAPFloatToAPInt() const
 {
   assert(semantics == (const llvm::fltSemantics*)&PPCDoubleDouble);
-  assert (partCount()==2);
+  assert(partCount()==2);
 
   uint64_t myexponent, mysignificand, myexponent2, mysignificand2;
 
@@ -2721,7 +2722,7 @@ APInt
 APFloat::convertQuadrupleAPFloatToAPInt() const
 {
   assert(semantics == (const llvm::fltSemantics*)&IEEEquad);
-  assert (partCount()==2);
+  assert(partCount()==2);
 
   uint64_t myexponent, mysignificand, mysignificand2;
 
@@ -2757,7 +2758,7 @@ APInt
 APFloat::convertDoubleAPFloatToAPInt() const
 {
   assert(semantics == (const llvm::fltSemantics*)&IEEEdouble);
-  assert (partCount()==1);
+  assert(partCount()==1);
 
   uint64_t myexponent, mysignificand;
 
@@ -2787,7 +2788,7 @@ APInt
 APFloat::convertFloatAPFloatToAPInt() const
 {
   assert(semantics == (const llvm::fltSemantics*)&IEEEsingle);
-  assert (partCount()==1);
+  assert(partCount()==1);
 
   uint32_t myexponent, mysignificand;
 
@@ -2812,6 +2813,35 @@ APFloat::convertFloatAPFloatToAPInt() const
                     (mysignificand & 0x7fffff)));
 }
 
+APInt
+APFloat::convertHalfAPFloatToAPInt() const
+{
+  assert(semantics == (const llvm::fltSemantics*)&IEEEhalf);
+  assert(partCount()==1);
+
+  uint32_t myexponent, mysignificand;
+
+  if (category==fcNormal) {
+    myexponent = exponent+15; //bias
+    mysignificand = (uint32_t)*significandParts();
+    if (myexponent == 1 && !(mysignificand & 0x400))
+      myexponent = 0;   // denormal
+  } else if (category==fcZero) {
+    myexponent = 0;
+    mysignificand = 0;
+  } else if (category==fcInfinity) {
+    myexponent = 0x1f;
+    mysignificand = 0;
+  } else {
+    assert(category == fcNaN && "Unknown category!");
+    myexponent = 0x1f;
+    mysignificand = (uint32_t)*significandParts();
+  }
+
+  return APInt(16, (((sign&1) << 15) | ((myexponent&0x1f) << 10) |
+                    (mysignificand & 0x3ff)));
+}
+
 // This function creates an APInt that is just a bit map of the floating
 // point constant as it would appear in memory.  It is not a conversion,
 // and treating the result as a normal integer is unlikely to be useful.
@@ -2819,6 +2849,9 @@ APFloat::convertFloatAPFloatToAPInt() const
 APInt
 APFloat::bitcastToAPInt() const
 {
+  if (semantics == (const llvm::fltSemantics*)&IEEEhalf)
+    return convertHalfAPFloatToAPInt();
+
   if (semantics == (const llvm::fltSemantics*)&IEEEsingle)
     return convertFloatAPFloatToAPInt();
 
@@ -3051,6 +3084,39 @@ APFloat::initFromFloatAPInt(const APInt & api)
   }
 }
 
+void
+APFloat::initFromHalfAPInt(const APInt & api)
+{
+  assert(api.getBitWidth()==16);
+  uint32_t i = (uint32_t)*api.getRawData();
+  uint32_t myexponent = (i >> 10) & 0x1f;
+  uint32_t mysignificand = i & 0x3ff;
+
+  initialize(&APFloat::IEEEhalf);
+  assert(partCount()==1);
+
+  sign = i >> 15;
+  if (myexponent==0 && mysignificand==0) {
+    // exponent, significand meaningless
+    category = fcZero;
+  } else if (myexponent==0x1f && mysignificand==0) {
+    // exponent, significand meaningless
+    category = fcInfinity;
+  } else if (myexponent==0x1f && mysignificand!=0) {
+    // sign, exponent, significand meaningless
+    category = fcNaN;
+    *significandParts() = mysignificand;
+  } else {
+    category = fcNormal;
+    exponent = myexponent - 15;  //bias
+    *significandParts() = mysignificand;
+    if (myexponent==0)    // denormal
+      exponent = -14;
+    else
+      *significandParts() |= 0x400; // integer bit
+  }
+}
+
 /// Treat api as containing the bits of a floating point number.  Currently
 /// we infer the floating point type from the size of the APInt.  The
 /// isIEEE argument distinguishes between PPC128 and IEEE128 (not meaningful
@@ -3058,7 +3124,9 @@ APFloat::initFromFloatAPInt(const APInt & api)
 void
 APFloat::initFromAPInt(const APInt& api, bool isIEEE)
 {
-  if (api.getBitWidth() == 32)
+  if (api.getBitWidth() == 16)
+    return initFromHalfAPInt(api);
+  else if (api.getBitWidth() == 32)
     return initFromFloatAPInt(api);
   else if (api.getBitWidth()==64)
     return initFromDoubleAPInt(api);
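
The two new half-precision routines pack and unpack the IEEE 754 binary16
layout: 1 sign bit, 5 exponent bits with bias 15, and 10 stored
significand bits (the integer bit is implicit for normals).  A standalone
worked example of the packing arithmetic in convertHalfAPFloatToAPInt:

    #include <cassert>
    #include <cstdint>

    // Pack a normal half-precision number from sign, unbiased exponent,
    // and 10-bit fraction (implicit integer bit dropped).
    static uint16_t packHalf(unsigned sign, int exponent, uint32_t frac) {
      uint32_t e = (uint32_t)(exponent + 15);          // apply bias
      return (uint16_t)(((sign & 1u) << 15) |
                        ((e & 0x1fu) << 10) |
                        (frac & 0x3ffu));
    }

    int main() {
      assert(packHalf(0, 0, 0) == 0x3c00);    //  1.0: exponent field 15
      assert(packHalf(1, 1, 0) == 0xc000);    // -2.0: sign 1, field 16
      assert(packHalf(0, -14, 0) == 0x0400);  // smallest normal, 2^-14
      return 0;
    }
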
diff --git a/libclamav/c++/llvm/lib/Support/Allocator.cpp b/libclamav/c++/llvm/lib/Support/Allocator.cpp
index 7a3fd87..31b45c8 100644
--- a/libclamav/c++/llvm/lib/Support/Allocator.cpp
+++ b/libclamav/c++/llvm/lib/Support/Allocator.cpp
@@ -12,7 +12,7 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/Support/Allocator.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/Support/Recycler.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/System/Memory.h"
diff --git a/libclamav/c++/llvm/lib/Support/CommandLine.cpp b/libclamav/c++/llvm/lib/Support/CommandLine.cpp
index 187024f..9cf9c89 100644
--- a/libclamav/c++/llvm/lib/Support/CommandLine.cpp
+++ b/libclamav/c++/llvm/lib/Support/CommandLine.cpp
@@ -17,6 +17,7 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/Support/CommandLine.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/MemoryBuffer.h"
 #include "llvm/Support/ManagedStatic.h"
@@ -38,6 +39,7 @@ using namespace cl;
 //===----------------------------------------------------------------------===//
 // Template instantiations and anchors.
 //
+namespace llvm { namespace cl {
 TEMPLATE_INSTANTIATION(class basic_parser<bool>);
 TEMPLATE_INSTANTIATION(class basic_parser<boolOrDefault>);
 TEMPLATE_INSTANTIATION(class basic_parser<int>);
@@ -52,6 +54,7 @@ TEMPLATE_INSTANTIATION(class opt<int>);
 TEMPLATE_INSTANTIATION(class opt<std::string>);
 TEMPLATE_INSTANTIATION(class opt<char>);
 TEMPLATE_INSTANTIATION(class opt<bool>);
+} } // end namespace llvm::cl
 
 void Option::anchor() {}
 void basic_parser_impl::anchor() {}
@@ -155,9 +158,9 @@ static Option *LookupOption(StringRef &Arg, StringRef &Value,
                             const StringMap<Option*> &OptionsMap) {
   // Reject all dashes.
   if (Arg.empty()) return 0;
-  
+
   size_t EqualPos = Arg.find('=');
-  
+
   // If we have an equals sign, remember the value.
   if (EqualPos == StringRef::npos) {
     // Look up the option.
@@ -170,13 +173,43 @@ static Option *LookupOption(StringRef &Arg, StringRef &Value,
   StringMap<Option*>::const_iterator I =
     OptionsMap.find(Arg.substr(0, EqualPos));
   if (I == OptionsMap.end()) return 0;
-  
+
   Value = Arg.substr(EqualPos+1);
   Arg = Arg.substr(0, EqualPos);
   return I->second;
 }
 
+/// CommaSeparateAndAddOccurrence - A wrapper around Handler->addOccurrence()
+/// that does special handling of cl::CommaSeparated options.
+static bool CommaSeparateAndAddOccurrence(Option *Handler, unsigned pos,
+                                          StringRef ArgName,
+                                          StringRef Value, bool MultiArg = false)
+{
+  // Check to see if this option accepts a comma separated list of values.  If
+  // it does, we have to split up the value into multiple values.
+  if (Handler->getMiscFlags() & CommaSeparated) {
+    StringRef Val(Value);
+    StringRef::size_type Pos = Val.find(',');
+
+    while (Pos != StringRef::npos) {
+      // Process the portion before the comma.
+      if (Handler->addOccurrence(pos, ArgName, Val.substr(0, Pos), MultiArg))
+        return true;
+      // Erase the portion before the comma, AND the comma.
+      Val = Val.substr(Pos+1);
+      Value = Value.substr(Pos+1);  // Advance the original value as well.
+      // Check for another comma.
+      Pos = Val.find(',');
+    }
 
+    Value = Val;
+  }
+
+  if (Handler->addOccurrence(pos, ArgName, Value, MultiArg))
+    return true;
+
+  return false;
+}
 
 /// ProvideOption - For Value, this differentiates between an empty value ("")
 /// and a null value (StringRef()).  The latter is accepted for arguments that
@@ -208,7 +241,7 @@ static inline bool ProvideOption(Option *Handler, StringRef ArgName,
     break;
   case ValueOptional:
     break;
-      
+
   default:
     errs() << ProgramName
          << ": Bad ValueMask flag! CommandLine usage error:"
@@ -218,13 +251,13 @@ static inline bool ProvideOption(Option *Handler, StringRef ArgName,
 
   // If this isn't a multi-arg option, just run the handler.
   if (NumAdditionalVals == 0)
-    return Handler->addOccurrence(i, ArgName, Value);
+    return CommaSeparateAndAddOccurrence(Handler, i, ArgName, Value);
 
   // If it is, run the handle several times.
   bool MultiArg = false;
 
   if (Value.data()) {
-    if (Handler->addOccurrence(i, ArgName, Value, MultiArg))
+    if (CommaSeparateAndAddOccurrence(Handler, i, ArgName, Value, MultiArg))
       return true;
     --NumAdditionalVals;
     MultiArg = true;
@@ -234,8 +267,8 @@ static inline bool ProvideOption(Option *Handler, StringRef ArgName,
     if (i+1 >= argc)
       return Handler->error("not enough values!");
     Value = argv[++i];
-    
-    if (Handler->addOccurrence(i, ArgName, Value, MultiArg))
+
+    if (CommaSeparateAndAddOccurrence(Handler, i, ArgName, Value, MultiArg))
       return true;
     MultiArg = true;
     --NumAdditionalVals;
@@ -297,7 +330,7 @@ static Option *HandlePrefixedOrGroupedOption(StringRef &Arg, StringRef &Value,
   size_t Length = 0;
   Option *PGOpt = getOptionPred(Arg, Length, isPrefixedOrGrouping, OptionsMap);
   if (PGOpt == 0) return 0;
-  
+
   // If the option is a prefixed option, then the value is simply the
   // rest of the name...  so fall through to later processing, by
   // setting up the argument name flags and value fields.
@@ -307,16 +340,16 @@ static Option *HandlePrefixedOrGroupedOption(StringRef &Arg, StringRef &Value,
     assert(OptionsMap.count(Arg) && OptionsMap.find(Arg)->second == PGOpt);
     return PGOpt;
   }
-  
+
   // This must be a grouped option... handle them now.  Grouping options can't
   // have values.
   assert(isGrouping(PGOpt) && "Broken getOptionPred!");
-  
+
   do {
     // Move current arg name out of Arg into OneArgName.
     StringRef OneArgName = Arg.substr(0, Length);
     Arg = Arg.substr(Length);
-    
+
     // Because ValueRequired is an invalid flag for grouped arguments,
     // we don't need to pass argc/argv in.
     assert(PGOpt->getValueExpectedFlag() != cl::ValueRequired &&
@@ -324,11 +357,11 @@ static Option *HandlePrefixedOrGroupedOption(StringRef &Arg, StringRef &Value,
     int Dummy;
     ErrorParsing |= ProvideOption(PGOpt, OneArgName,
                                   StringRef(), 0, 0, Dummy);
-    
+
     // Get the next grouping option.
     PGOpt = getOptionPred(Arg, Length, isGrouping, OptionsMap);
   } while (PGOpt && Length != Arg.size());
-  
+
   // Return the last option with Arg cut down to just the last one.
   return PGOpt;
 }
@@ -365,17 +398,17 @@ static void ParseCStringVector(std::vector<char *> &OutputVector,
       WorkStr = WorkStr.substr(Pos);
       continue;
     }
-    
+
     // Find position of first delimiter.
     size_t Pos = WorkStr.find_first_of(Delims);
     if (Pos == StringRef::npos) Pos = WorkStr.size();
-    
+
     // Everything from 0 to Pos is the next word to copy.
     char *NewStr = (char*)malloc(Pos+1);
     memcpy(NewStr, WorkStr.data(), Pos);
     NewStr[Pos] = 0;
     OutputVector.push_back(NewStr);
-    
+
     WorkStr = WorkStr.substr(Pos);
   }
 }
@@ -562,7 +595,7 @@ void cl::ParseCommandLineOptions(int argc, char **argv,
         ProvidePositionalOption(ActivePositionalArg, argv[i], i);
         continue;  // We are done!
       }
-      
+
       if (!PositionalOpts.empty()) {
         PositionalVals.push_back(std::make_pair(argv[i],i));
 
@@ -592,7 +625,7 @@ void cl::ParseCommandLineOptions(int argc, char **argv,
       // Eat leading dashes.
       while (!ArgName.empty() && ArgName[0] == '-')
         ArgName = ArgName.substr(1);
-      
+
       Handler = LookupOption(ArgName, Value, Opts);
       if (!Handler || Handler->getFormattingFlag() != cl::Positional) {
         ProvidePositionalOption(ActivePositionalArg, argv[i], i);
@@ -604,7 +637,7 @@ void cl::ParseCommandLineOptions(int argc, char **argv,
       // Eat leading dashes.
       while (!ArgName.empty() && ArgName[0] == '-')
         ArgName = ArgName.substr(1);
-      
+
       Handler = LookupOption(ArgName, Value, Opts);
 
       // Check to see if this "option" is really a prefixed or grouped argument.
@@ -626,25 +659,6 @@ void cl::ParseCommandLineOptions(int argc, char **argv,
       continue;
     }
 
-    // Check to see if this option accepts a comma separated list of values.  If
-    // it does, we have to split up the value into multiple values.
-    if (Handler->getMiscFlags() & CommaSeparated) {
-      StringRef Val(Value);
-      StringRef::size_type Pos = Val.find(',');
-
-      while (Pos != StringRef::npos) {
-        // Process the portion before the comma.
-        ErrorParsing |= ProvideOption(Handler, ArgName, Val.substr(0, Pos),
-                                      argc, argv, i);
-        // Erase the portion before the comma, AND the comma.
-        Val = Val.substr(Pos+1);
-        Value.substr(Pos+1);  // Increment the original value pointer as well.
-
-        // Check for another comma.
-        Pos = Val.find(',');
-      }
-    }
-
     // If this is a named positional argument, just remember that it is the
     // active one...
     if (Handler->getFormattingFlag() == cl::Positional)
@@ -764,6 +778,11 @@ void cl::ParseCommandLineOptions(int argc, char **argv,
       free(*i);
   }
 
+  DEBUG(errs() << "\nArgs: ";
+        for (int i = 0; i < argc; ++i)
+          errs() << argv[i] << ' ';
+       );
+
   // If we had an error processing our arguments, don't let the program execute
   if (ErrorParsing) exit(1);
 }
@@ -874,7 +893,7 @@ bool parser<bool>::parse(Option &O, StringRef ArgName,
     Value = true;
     return false;
   }
-  
+
   if (Arg == "false" || Arg == "FALSE" || Arg == "False" || Arg == "0") {
     Value = false;
     return false;
@@ -896,7 +915,7 @@ bool parser<boolOrDefault>::parse(Option &O, StringRef ArgName,
     Value = BOU_FALSE;
     return false;
   }
-  
+
   return O.error("'" + Arg +
                  "' is invalid value for boolean argument! Try 0 or 1");
 }
@@ -1013,7 +1032,7 @@ void generic_parser_base::printOptionInfo(const Option &O,
 
 static int OptNameCompare(const void *LHS, const void *RHS) {
   typedef std::pair<const char *, Option*> pair_ty;
-  
+
   return strcmp(((pair_ty*)LHS)->first, ((pair_ty*)RHS)->first);
 }
 
@@ -1047,11 +1066,11 @@ public:
       // Ignore really-hidden options.
       if (I->second->getOptionHiddenFlag() == ReallyHidden)
         continue;
-      
+
       // Unless showhidden is set, ignore hidden flags.
       if (I->second->getOptionHiddenFlag() == Hidden && !ShowHidden)
         continue;
-      
+
       // If we've already seen this option, don't add it to the list again.
       if (!OptionSet.insert(I->second))
         continue;
@@ -1059,7 +1078,7 @@ public:
       Opts.push_back(std::pair<const char *, Option*>(I->getKey().data(),
                                                       I->second));
     }
-    
+
     // Sort the options list alphabetically.
     qsort(Opts.data(), Opts.size(), sizeof(Opts[0]), OptNameCompare);
 
@@ -1146,15 +1165,18 @@ public:
 #ifndef NDEBUG
     OS << " with assertions";
 #endif
+    std::string CPU = sys::getHostCPUName();
+    if (CPU == "generic") CPU = "(unknown)";
     OS << ".\n"
        << "  Built " << __DATE__ << " (" << __TIME__ << ").\n"
        << "  Host: " << sys::getHostTriple() << '\n'
+       << "  Host CPU: " << CPU << '\n'
        << '\n'
        << "  Registered Targets:\n";
 
     std::vector<std::pair<const char *, const Target*> > Targets;
     size_t Width = 0;
-    for (TargetRegistry::iterator it = TargetRegistry::begin(), 
+    for (TargetRegistry::iterator it = TargetRegistry::begin(),
            ie = TargetRegistry::end(); it != ie; ++it) {
       Targets.push_back(std::make_pair(it->getName(), &*it));
       Width = std::max(Width, strlen(Targets.back().first));
@@ -1173,7 +1195,7 @@ public:
   }
   void operator=(bool OptionWasSpecified) {
     if (!OptionWasSpecified) return;
-    
+
     if (OverrideVersionPrinter == 0) {
       print();
       exit(1);
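
The practical effect of hoisting the comma handling into
CommaSeparateAndAddOccurrence is that splitting now happens on every path
through ProvideOption, including the multi-arg one.  A usage sketch
against the cl API (the option name "libs" is invented for illustration):

    #include "llvm/Support/CommandLine.h"
    #include <cstdio>
    #include <string>
    using namespace llvm;

    static cl::list<std::string>
    Libs("libs", cl::CommaSeparated,
         cl::desc("Comma-separated list of libraries"));

    int main(int argc, char **argv) {
      cl::ParseCommandLineOptions(argc, argv);
      // "-libs=a,b,c" arrives as three occurrences: "a", "b", "c".
      for (size_t i = 0, e = Libs.size(); i != e; ++i)
        std::printf("%s\n", Libs[i].c_str());
      return 0;
    }
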
diff --git a/libclamav/c++/llvm/lib/Support/ConstantRange.cpp b/libclamav/c++/llvm/lib/Support/ConstantRange.cpp
index 423e90d..e427f82 100644
--- a/libclamav/c++/llvm/lib/Support/ConstantRange.cpp
+++ b/libclamav/c++/llvm/lib/Support/ConstantRange.cpp
@@ -492,6 +492,30 @@ ConstantRange ConstantRange::truncate(uint32_t DstTySize) const {
   return ConstantRange(L, U);
 }
 
+/// zextOrTrunc - make this range have the bit width given by \p DstTySize. The
+/// value is zero extended, truncated, or left alone to make it that width.
+ConstantRange ConstantRange::zextOrTrunc(uint32_t DstTySize) const {
+  unsigned SrcTySize = getBitWidth();
+  if (SrcTySize > DstTySize)
+    return truncate(DstTySize);
+  else if (SrcTySize < DstTySize)
+    return zeroExtend(DstTySize);
+  else
+    return *this;
+}
+
+/// sextOrTrunc - make this range have the bit width given by \p DstTySize. The
+/// value is sign extended, truncated, or left alone to make it that width.
+ConstantRange ConstantRange::sextOrTrunc(uint32_t DstTySize) const {
+  unsigned SrcTySize = getBitWidth();
+  if (SrcTySize > DstTySize)
+    return truncate(DstTySize);
+  else if (SrcTySize < DstTySize)
+    return signExtend(DstTySize);
+  else
+    return *this;
+}
+
 ConstantRange
 ConstantRange::add(const ConstantRange &Other) const {
   if (isEmptySet() || Other.isEmptySet())
@@ -585,6 +609,43 @@ ConstantRange::udiv(const ConstantRange &RHS) const {
   return ConstantRange(Lower, Upper);
 }
 
+ConstantRange
+ConstantRange::shl(const ConstantRange &Amount) const {
+  if (isEmptySet())
+    return *this;
+
+  APInt min = getUnsignedMin() << Amount.getUnsignedMin();
+  APInt max = getUnsignedMax() << Amount.getUnsignedMax();
+
+  // If the largest value has enough leading zeros to absorb the largest
+  // shift amount, nothing is shifted out and the bounds above are exact.
+  APInt Zeros(getBitWidth(), getUnsignedMax().countLeadingZeros());
+  if (Zeros.uge(Amount.getUnsignedMax()))
+    return ConstantRange(min, max);
+
+  // FIXME: implement the other tricky cases
+  return ConstantRange(getBitWidth());
+}
+
+ConstantRange
+ConstantRange::ashr(const ConstantRange &Amount) const {
+  if (isEmptySet())
+    return *this;
+
+  APInt min = getUnsignedMax().ashr(Amount.getUnsignedMin());
+  APInt max = getUnsignedMin().ashr(Amount.getUnsignedMax());
+  return ConstantRange(min, max);
+}
+
+ConstantRange
+ConstantRange::lshr(const ConstantRange &Amount) const {
+  if (isEmptySet())
+    return *this;
+
+  APInt min = getUnsignedMax().lshr(Amount.getUnsignedMin());
+  APInt max = getUnsignedMin().lshr(Amount.getUnsignedMax());
+  return ConstantRange(min, max);
+}
+
 /// print - Print out the bounds to a stream...
 ///
 void ConstantRange::print(raw_ostream &OS) const {
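
The leading-zeros test in the new shl carries the whole safety argument:
if the largest value in the range has at least as many leading zeros as
the largest shift amount, no bit can be shifted out, so the bounds
[min << amin, max << amax] are exact; otherwise the code conservatively
returns the full range.  A standalone numeric check of that reasoning
(plain integers; the real code works on APInt half-open ranges):

    #include <cassert>
    #include <cstdint>

    int main() {
      // Values in [1, 3] shifted left by amounts in [1, 2], width 32.
      uint32_t min = 1u << 1;   // 2
      uint32_t max = 3u << 2;   // 12
      // 3 has 30 leading zeros and 30 >= 2, so nothing shifts out and
      // every value << amount lands inside [2, 12].
      for (uint32_t v = 1; v <= 3; ++v)
        for (uint32_t a = 1; a <= 2; ++a)
          assert((v << a) >= min && (v << a) <= max);
      return 0;
    }
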
diff --git a/libclamav/c++/llvm/lib/Support/Debug.cpp b/libclamav/c++/llvm/lib/Support/Debug.cpp
index 71ff411..50abe01 100644
--- a/libclamav/c++/llvm/lib/Support/Debug.cpp
+++ b/libclamav/c++/llvm/lib/Support/Debug.cpp
@@ -57,7 +57,18 @@ DebugOnly("debug-only", cl::desc("Enable a specific type of debug output"),
 bool llvm::isCurrentDebugType(const char *DebugType) {
   return CurrentDebugType.empty() || DebugType == CurrentDebugType;
 }
+
+/// SetCurrentDebugType - Set the current debug type, as if the -debug-only=X
+/// option were specified.  Note that DebugFlag also needs to be set to true for
+/// debug output to be produced.
+///
+void llvm::SetCurrentDebugType(const char *Type) {
+  CurrentDebugType = Type;
+}
+
 #else
 // Avoid "has no symbols" warning.
+namespace llvm {
 int Debug_dummy = 0;
+}
 #endif
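
A usage sketch for the new SetCurrentDebugType (this assumes an
assertions-enabled build where DEBUG() is compiled in; the debug type
"mypass" is made up):

    #include "llvm/Support/Debug.h"
    #include "llvm/Support/raw_ostream.h"
    #define DEBUG_TYPE "mypass"
    using namespace llvm;

    void enableMyPassDebugging() {
      // Programmatic equivalent of passing -debug -debug-only=mypass:
      DebugFlag = true;
      SetCurrentDebugType("mypass");
      DEBUG(errs() << "mypass: debug output now visible\n");
    }
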
diff --git a/libclamav/c++/llvm/lib/Support/MemoryBuffer.cpp b/libclamav/c++/llvm/lib/Support/MemoryBuffer.cpp
index e35c626..b04864a 100644
--- a/libclamav/c++/llvm/lib/Support/MemoryBuffer.cpp
+++ b/libclamav/c++/llvm/lib/Support/MemoryBuffer.cpp
@@ -70,7 +70,7 @@ namespace {
 class MemoryBufferMem : public MemoryBuffer {
   std::string FileID;
 public:
-  MemoryBufferMem(const char *Start, const char *End, const char *FID,
+  MemoryBufferMem(const char *Start, const char *End, StringRef FID,
                   bool Copy = false)
   : FileID(FID) {
     if (!Copy)
@@ -107,7 +107,7 @@ MemoryBuffer *MemoryBuffer::getMemBufferCopy(const char *StartPtr,
 /// initialize the memory allocated by this method.  The memory is owned by
 /// the MemoryBuffer object.
 MemoryBuffer *MemoryBuffer::getNewUninitMemBuffer(size_t Size,
-                                                  const char *BufferName) {
+                                                  StringRef BufferName) {
   char *Buf = (char *)malloc((Size+1) * sizeof(char));
   if (!Buf) return 0;
   Buf[Size] = 0;
@@ -134,17 +134,12 @@ MemoryBuffer *MemoryBuffer::getNewMemBuffer(size_t Size,
 /// if the Filename is "-".  If an error occurs, this returns null and fills
 /// in *ErrStr with a reason.  If stdin is empty, this API (unlike getSTDIN)
 /// returns an empty buffer.
-MemoryBuffer *MemoryBuffer::getFileOrSTDIN(const char *Filename,
+MemoryBuffer *MemoryBuffer::getFileOrSTDIN(StringRef Filename,
                                            std::string *ErrStr,
                                            int64_t FileSize) {
-  if (Filename[0] != '-' || Filename[1] != 0)
-    return getFile(Filename, ErrStr, FileSize);
-  MemoryBuffer *M = getSTDIN();
-  if (M) return M;
-
-  // If stdin was empty, M is null.  Cons up an empty memory buffer now.
-  const char *EmptyStr = "";
-  return MemoryBuffer::getMemBuffer(EmptyStr, EmptyStr, "<stdin>");
+  if (Filename == "-")
+    return getSTDIN();
+  return getFile(Filename, ErrStr, FileSize);
 }
 
 //===----------------------------------------------------------------------===//
@@ -158,7 +153,7 @@ namespace {
 class MemoryBufferMMapFile : public MemoryBuffer {
   std::string Filename;
 public:
-  MemoryBufferMMapFile(const char *filename, const char *Pages, uint64_t Size)
+  MemoryBufferMMapFile(StringRef filename, const char *Pages, uint64_t Size)
     : Filename(filename) {
     init(Pages, Pages+Size);
   }
@@ -173,13 +168,13 @@ public:
 };
 }
 
-MemoryBuffer *MemoryBuffer::getFile(const char *Filename, std::string *ErrStr,
+MemoryBuffer *MemoryBuffer::getFile(StringRef Filename, std::string *ErrStr,
                                     int64_t FileSize) {
   int OpenFlags = 0;
 #ifdef O_BINARY
   OpenFlags |= O_BINARY;  // Open input file in binary mode on win32.
 #endif
-  int FD = ::open(Filename, O_RDONLY|OpenFlags);
+  int FD = ::open(Filename.str().c_str(), O_RDONLY|OpenFlags);
   if (FD == -1) {
     if (ErrStr) *ErrStr = "could not open file";
     return 0;
@@ -203,6 +198,8 @@ MemoryBuffer *MemoryBuffer::getFile(const char *Filename, std::string *ErrStr,
   // for small files, because this can severely fragment our address space. Also
   // don't try to map files that are exactly a multiple of the system page size,
   // as the file would not have the required null terminator.
+  //
+  // FIXME: Can we just mmap an extra page in the latter case?
   if (FileSize >= 4096*4 &&
       (FileSize & (sys::Process::GetPageSize()-1)) != 0) {
     if (const char *Pages = sys::Path::MapInFilePages(FD, FileSize)) {
@@ -226,10 +223,10 @@ MemoryBuffer *MemoryBuffer::getFile(const char *Filename, std::string *ErrStr,
   size_t BytesLeft = FileSize;
   while (BytesLeft) {
     ssize_t NumRead = ::read(FD, BufPtr, BytesLeft);
-    if (NumRead != -1) {
+    if (NumRead > 0) {
       BytesLeft -= NumRead;
       BufPtr += NumRead;
-    } else if (errno == EINTR) {
+    } else if (NumRead == -1 && errno == EINTR) {
       // try again
     } else {
       // error reading.
@@ -262,6 +259,9 @@ MemoryBuffer *MemoryBuffer::getSTDIN() {
   std::vector<char> FileData;
 
   // Read in all of the data from stdin, we cannot mmap stdin.
+  //
+  // FIXME: That isn't necessarily true, we should try to mmap stdin and
+  // fallback if it fails.
   sys::Program::ChangeStdinToBinary();
   size_t ReadBytes;
   do {
@@ -271,8 +271,6 @@ MemoryBuffer *MemoryBuffer::getSTDIN() {
 
   FileData.push_back(0); // &FileData[Size] is invalid. So is &*FileData.end().
   size_t Size = FileData.size();
-  if (Size <= 1)
-    return 0;
   MemoryBuffer *B = new STDINBufferFile();
   B->initCopyOf(&FileData[0], &FileData[Size-1]);
   return B;
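
The read-loop change above fixes a subtle hang: the old "NumRead != -1"
test treated a zero-byte read (early EOF) as progress, so a truncated
file could spin the loop forever.  A standalone POSIX sketch of the
corrected pattern:

    #include <cerrno>
    #include <unistd.h>

    static bool readAll(int fd, char *buf, size_t len) {
      while (len) {
        ssize_t n = read(fd, buf, len);
        if (n > 0) {                         // real progress
          buf += n;
          len -= (size_t)n;
        } else if (n == -1 && errno == EINTR) {
          // interrupted by a signal: try again
        } else {
          return false;                      // early EOF (0) or hard error
        }
      }
      return true;
    }
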
diff --git a/libclamav/c++/llvm/lib/Support/SourceMgr.cpp b/libclamav/c++/llvm/lib/Support/SourceMgr.cpp
index 4b93f7f..7dd42f4 100644
--- a/libclamav/c++/llvm/lib/Support/SourceMgr.cpp
+++ b/libclamav/c++/llvm/lib/Support/SourceMgr.cpp
@@ -136,7 +136,7 @@ void SourceMgr::PrintIncludeStack(SMLoc IncludeLoc, raw_ostream &OS) const {
 /// @param Type - If non-null, the kind of message (e.g., "error") which is
 /// prefixed to the message.
 SMDiagnostic SourceMgr::GetMessage(SMLoc Loc, const std::string &Msg,
-                                   const char *Type) const {
+                                   const char *Type, bool ShowLine) const {
   
   // First thing to do: find the current buffer containing the specified
   // location.
@@ -144,18 +144,22 @@ SMDiagnostic SourceMgr::GetMessage(SMLoc Loc, const std::string &Msg,
   assert(CurBuf != -1 && "Invalid or unspecified location!");
   
   MemoryBuffer *CurMB = getBufferInfo(CurBuf).Buffer;
-  
-  
+
   // Scan backward to find the start of the line.
   const char *LineStart = Loc.getPointer();
-  while (LineStart != CurMB->getBufferStart() && 
+  while (LineStart != CurMB->getBufferStart() &&
          LineStart[-1] != '\n' && LineStart[-1] != '\r')
     --LineStart;
-  // Get the end of the line.
-  const char *LineEnd = Loc.getPointer();
-  while (LineEnd != CurMB->getBufferEnd() && 
-         LineEnd[0] != '\n' && LineEnd[0] != '\r')
-    ++LineEnd;
+
+  std::string LineStr;
+  if (ShowLine) {
+    // Get the end of the line.
+    const char *LineEnd = Loc.getPointer();
+    while (LineEnd != CurMB->getBufferEnd() &&
+           LineEnd[0] != '\n' && LineEnd[0] != '\r')
+      ++LineEnd;
+    LineStr = std::string(LineStart, LineEnd);
+  }
   
   std::string PrintedMsg;
   if (Type) {
@@ -163,22 +167,21 @@ SMDiagnostic SourceMgr::GetMessage(SMLoc Loc, const std::string &Msg,
     PrintedMsg += ": ";
   }
   PrintedMsg += Msg;
-  
-  // Print out the line.
+
   return SMDiagnostic(CurMB->getBufferIdentifier(), FindLineNumber(Loc, CurBuf),
                       Loc.getPointer()-LineStart, PrintedMsg,
-                      std::string(LineStart, LineEnd));
+                      LineStr, ShowLine);
 }
 
 void SourceMgr::PrintMessage(SMLoc Loc, const std::string &Msg, 
-                             const char *Type) const {
+                             const char *Type, bool ShowLine) const {
   raw_ostream &OS = errs();
 
   int CurBuf = FindBufferContainingLoc(Loc);
   assert(CurBuf != -1 && "Invalid or unspecified location!");
   PrintIncludeStack(getBufferInfo(CurBuf).IncludeLoc, OS);
 
-  GetMessage(Loc, Msg, Type).Print(0, OS);
+  GetMessage(Loc, Msg, Type, ShowLine).Print(0, OS);
 }
 
 //===----------------------------------------------------------------------===//
@@ -201,8 +204,8 @@ void SMDiagnostic::Print(const char *ProgName, raw_ostream &S) {
   }
   
   S << ": " << Message << '\n';
-  
-  if (LineNo != -1 && ColumnNo != -1) {
+
+  if (LineNo != -1 && ColumnNo != -1 && ShowLine) {
     S << LineContents << '\n';
     
     // Print out spaces/tabs before the caret.
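
A usage sketch for the ShowLine flag now threaded through GetMessage and
PrintMessage (assuming the header declares it defaulted to true, so
existing callers keep the caret diagnostics):

    #include "llvm/Support/SourceMgr.h"
    using namespace llvm;

    void report(SourceMgr &SM, SMLoc Loc) {
      // Default: message plus the offending source line and a caret.
      SM.PrintMessage(Loc, "unexpected token", "error");
      // New: suppress the source-line echo, e.g. for terse batch output.
      SM.PrintMessage(Loc, "unexpected token", "error", /*ShowLine=*/false);
    }
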
diff --git a/libclamav/c++/llvm/lib/Support/StringExtras.cpp b/libclamav/c++/llvm/lib/Support/StringExtras.cpp
index 1618086..1b233ab 100644
--- a/libclamav/c++/llvm/lib/Support/StringExtras.cpp
+++ b/libclamav/c++/llvm/lib/Support/StringExtras.cpp
@@ -12,6 +12,7 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/ADT/StringExtras.h"
+#include "llvm/ADT/SmallVector.h"
 #include <cstring>
 using namespace llvm;
 
@@ -57,58 +58,23 @@ void llvm::SplitString(const std::string &Source,
   }
 }
 
+void llvm::StringRef::split(SmallVectorImpl<StringRef> &A,
+                            StringRef Separators, int MaxSplit,
+                            bool KeepEmpty) const {
+  StringRef rest = *this;
 
+  // rest.data() is used to distinguish cases like "a," that splits into
+  // "a" + "" and "a" that splits into "a" + 0.
+  for (int splits = 0;
+       rest.data() != NULL && (MaxSplit < 0 || splits < MaxSplit);
+       ++splits) {
+    std::pair<llvm::StringRef, llvm::StringRef> p = rest.split(Separators);
 
-/// UnescapeString - Modify the argument string, turning two character sequences
-/// @verbatim
-/// like '\\' 'n' into '\n'.  This handles: \e \a \b \f \n \r \t \v \' \ and
-/// \num (where num is a 1-3 byte octal value).
-/// @endverbatim
-void llvm::UnescapeString(std::string &Str) {
-  for (unsigned i = 0; i != Str.size(); ++i) {
-    if (Str[i] == '\\' && i != Str.size()-1) {
-      switch (Str[i+1]) {
-      default: continue;  // Don't execute the code after the switch.
-      case 'a': Str[i] = '\a'; break;
-      case 'b': Str[i] = '\b'; break;
-      case 'e': Str[i] = 27; break;
-      case 'f': Str[i] = '\f'; break;
-      case 'n': Str[i] = '\n'; break;
-      case 'r': Str[i] = '\r'; break;
-      case 't': Str[i] = '\t'; break;
-      case 'v': Str[i] = '\v'; break;
-      case '"': Str[i] = '\"'; break;
-      case '\'': Str[i] = '\''; break;
-      case '\\': Str[i] = '\\'; break;
-      }
-      // Nuke the second character.
-      Str.erase(Str.begin()+i+1);
-    }
-  }
-}
-
-/// EscapeString - Modify the argument string, turning '\\' and anything that
-/// doesn't satisfy std::isprint into an escape sequence.
-void llvm::EscapeString(std::string &Str) {
-  for (unsigned i = 0; i != Str.size(); ++i) {
-    if (Str[i] == '\\') {
-      ++i;
-      Str.insert(Str.begin()+i, '\\');
-    } else if (Str[i] == '\t') {
-      Str[i++] = '\\';
-      Str.insert(Str.begin()+i, 't');
-    } else if (Str[i] == '"') {
-      Str.insert(Str.begin()+i++, '\\');
-    } else if (Str[i] == '\n') {
-      Str[i++] = '\\';
-      Str.insert(Str.begin()+i, 'n');
-    } else if (!std::isprint(Str[i])) {
-      // Always expand to a 3-digit octal escape.
-      unsigned Char = Str[i];
-      Str[i++] = '\\';
-      Str.insert(Str.begin()+i++, '0'+((Char/64) & 7));
-      Str.insert(Str.begin()+i++, '0'+((Char/8)  & 7));
-      Str.insert(Str.begin()+i  , '0'+( Char     & 7));
-    }
+    if (p.first.size() != 0 || KeepEmpty)
+      A.push_back(p.first);
+    rest = p.second;
   }
+  // If we have a tail left, add it.
+  if (rest.data() != NULL && (rest.size() != 0 || KeepEmpty))
+    A.push_back(rest);
 }
diff --git a/libclamav/c++/llvm/lib/Support/StringMap.cpp b/libclamav/c++/llvm/lib/Support/StringMap.cpp
index 040308b..6f28277 100644
--- a/libclamav/c++/llvm/lib/Support/StringMap.cpp
+++ b/libclamav/c++/llvm/lib/Support/StringMap.cpp
@@ -12,6 +12,7 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/ADT/StringMap.h"
+#include "llvm/ADT/StringExtras.h"
 #include <cassert>
 using namespace llvm;
 
@@ -46,32 +47,18 @@ void StringMapImpl::init(unsigned InitSize) {
 }
 
 
-/// HashString - Compute a hash code for the specified string.
-///
-static unsigned HashString(const char *Start, const char *End) {
-  // Bernstein hash function.
-  unsigned int Result = 0;
-  // TODO: investigate whether a modified bernstein hash function performs
-  // better: http://eternallyconfuzzled.com/tuts/algorithms/jsw_tut_hashing.aspx
-  //   X*33+c -> X*33^c
-  while (Start != End)
-    Result = Result * 33 + *Start++;
-  Result = Result + (Result >> 5);
-  return Result;
-}
-
 /// LookupBucketFor - Look up the bucket that the specified string should end
 /// up in.  If it already exists as a key in the map, the Item pointer for the
 /// specified bucket will be non-null.  Otherwise, it will be null.  In either
 /// case, the FullHashValue field of the bucket will be set to the hash value
 /// of the string.
-unsigned StringMapImpl::LookupBucketFor(const StringRef &Name) {
+unsigned StringMapImpl::LookupBucketFor(StringRef Name) {
   unsigned HTSize = NumBuckets;
   if (HTSize == 0) {  // Hash table unallocated so far?
     init(16);
     HTSize = NumBuckets;
   }
-  unsigned FullHashValue = HashString(Name.begin(), Name.end());
+  unsigned FullHashValue = HashString(Name);
   unsigned BucketNo = FullHashValue & (HTSize-1);
   
   unsigned ProbeAmt = 1;
@@ -123,10 +110,10 @@ unsigned StringMapImpl::LookupBucketFor(const StringRef &Name) {
 /// FindKey - Look up the bucket that contains the specified key. If it exists
 /// in the map, return the bucket number of the key.  Otherwise return -1.
 /// This does not modify the map.
-int StringMapImpl::FindKey(const StringRef &Key) const {
+int StringMapImpl::FindKey(StringRef Key) const {
   unsigned HTSize = NumBuckets;
   if (HTSize == 0) return -1;  // Really empty table?
-  unsigned FullHashValue = HashString(Key.begin(), Key.end());
+  unsigned FullHashValue = HashString(Key);
   unsigned BucketNo = FullHashValue & (HTSize-1);
   
   unsigned ProbeAmt = 1;
@@ -174,7 +161,7 @@ void StringMapImpl::RemoveKey(StringMapEntryBase *V) {
 
 /// RemoveKey - Remove the StringMapEntry for the specified key from the
 /// table, returning it.  If the key is not in the table, this returns null.
-StringMapEntryBase *StringMapImpl::RemoveKey(const StringRef &Key) {
+StringMapEntryBase *StringMapImpl::RemoveKey(StringRef Key) {
   int Bucket = FindKey(Key);
   if (Bucket == -1) return 0;
   
diff --git a/libclamav/c++/llvm/lib/Support/StringRef.cpp b/libclamav/c++/llvm/lib/Support/StringRef.cpp
index deaa19e..2d023e4 100644
--- a/libclamav/c++/llvm/lib/Support/StringRef.cpp
+++ b/libclamav/c++/llvm/lib/Support/StringRef.cpp
@@ -15,6 +15,26 @@ using namespace llvm;
 const size_t StringRef::npos;
 #endif
 
+static char ascii_tolower(char x) {
+  if (x >= 'A' && x <= 'Z')
+    return x - 'A' + 'a';
+  return x;
+}
+
+/// compare_lower - Compare strings, ignoring case.
+int StringRef::compare_lower(StringRef RHS) const {
+  for (size_t I = 0, E = min(Length, RHS.Length); I != E; ++I) {
+    char LHC = ascii_tolower(Data[I]);
+    char RHC = ascii_tolower(RHS.Data[I]);
+    if (LHC != RHC)
+      return LHC < RHC ? -1 : 1;
+  }
+
+  if (Length == RHS.Length)
+    return 0;
+  return Length < RHS.Length ? -1 : 1;
+}
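
    A small usage sketch of the new ASCII-only case-insensitive comparison;
    ordering falls back to length only when one string is a prefix of the
    other:

        #include "llvm/ADT/StringRef.h"
        #include <cassert>
        using namespace llvm;

        void compareLowerDemo() {
          assert(StringRef("Hello").compare_lower("heLLO") == 0); // ASCII case ignored
          assert(StringRef("abc").compare_lower("abd") < 0);      // 'c' < 'd'
          assert(StringRef("ab").compare_lower("abc") < 0);       // shorter prefix sorts first
        }
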
+
 //===----------------------------------------------------------------------===//
 // String Searching
 //===----------------------------------------------------------------------===//
@@ -24,11 +44,11 @@ const size_t StringRef::npos;
 ///
 /// \return - The index of the first occurrence of \arg Str, or npos if not
 /// found.
-size_t StringRef::find(const StringRef &Str) const {
+size_t StringRef::find(StringRef Str, size_t From) const {
   size_t N = Str.size();
   if (N > Length)
     return npos;
-  for (size_t i = 0, e = Length - N + 1; i != e; ++i)
+  for (size_t e = Length - N + 1, i = min(From, e); i != e; ++i)
     if (substr(i, N).equals(Str))
       return i;
   return npos;
@@ -38,7 +58,7 @@ size_t StringRef::find(const StringRef &Str) const {
 ///
 /// \return - The index of the last occurrence of \arg Str, or npos if not
 /// found.
-size_t StringRef::rfind(const StringRef &Str) const {
+size_t StringRef::rfind(StringRef Str) const {
   size_t N = Str.size();
   if (N > Length)
     return npos;
@@ -50,19 +70,34 @@ size_t StringRef::rfind(const StringRef &Str) const {
   return npos;
 }
 
-/// find_first_of - Find the first character from the string 'Chars' in the
-/// current string or return npos if not in string.
-StringRef::size_type StringRef::find_first_of(StringRef Chars) const {
-  for (size_type i = 0, e = Length; i != e; ++i)
+/// find_first_of - Find the first character in the string that is in \arg
+/// Chars, or npos if not found.
+///
+/// Note: O(size() * Chars.size())
+StringRef::size_type StringRef::find_first_of(StringRef Chars,
+                                              size_t From) const {
+  for (size_type i = min(From, Length), e = Length; i != e; ++i)
     if (Chars.find(Data[i]) != npos)
       return i;
   return npos;
 }
 
 /// find_first_not_of - Find the first character in the string that is not
-/// in the string 'Chars' or return npos if all are in string. Same as find.
-StringRef::size_type StringRef::find_first_not_of(StringRef Chars) const {
-  for (size_type i = 0, e = Length; i != e; ++i)
+/// \arg C or npos if not found.
+StringRef::size_type StringRef::find_first_not_of(char C, size_t From) const {
+  for (size_type i = min(From, Length), e = Length; i != e; ++i)
+    if (Data[i] != C)
+      return i;
+  return npos;
+}
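
    The new From parameter lets a caller resume a scan partway through the
    string; a sketch:

        #include "llvm/ADT/StringRef.h"
        #include <cassert>
        using namespace llvm;

        void scanDemo() {
          StringRef S("000123000");
          assert(S.find_first_not_of('0') == 3);                  // first non-'0'
          assert(S.find_first_not_of('0', 6) == StringRef::npos); // only zeros remain
        }
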
+
+/// find_first_not_of - Find the first character in the string that is not
+/// in the string \arg Chars, or npos if not found.
+///
+/// Note: O(size() * Chars.size())
+StringRef::size_type StringRef::find_first_not_of(StringRef Chars,
+                                                  size_t From) const {
+  for (size_type i = min(From, Length), e = Length; i != e; ++i)
     if (Chars.find(Data[i]) == npos)
       return i;
   return npos;
@@ -75,7 +110,7 @@ StringRef::size_type StringRef::find_first_not_of(StringRef Chars) const {
 
 /// count - Return the number of non-overlapped occurrences of \arg Str in
 /// the string.
-size_t StringRef::count(const StringRef &Str) const {
+size_t StringRef::count(StringRef Str) const {
   size_t Count = 0;
   size_t N = Str.size();
   if (N > Length)
diff --git a/libclamav/c++/llvm/lib/Support/Timer.cpp b/libclamav/c++/llvm/lib/Support/Timer.cpp
index dd58d1f..7d32ee6 100644
--- a/libclamav/c++/llvm/lib/Support/Timer.cpp
+++ b/libclamav/c++/llvm/lib/Support/Timer.cpp
@@ -66,7 +66,7 @@ static TimerGroup *getDefaultTimerGroup() {
     }
     llvm_release_global_lock();
   }
-  
+
   return tmp;
 }
 
@@ -145,7 +145,7 @@ static TimeRecord getTimeRecord(bool Start) {
 static ManagedStatic<std::vector<Timer*> > ActiveTimers;
 
 void Timer::startTimer() {
-  sys::SmartScopedLock<true> L(Lock);
+  sys::SmartScopedLock<true> L(*TimerLock);
   Started = true;
   ActiveTimers->push_back(this);
   TimeRecord TR = getTimeRecord(true);
@@ -157,7 +157,7 @@ void Timer::startTimer() {
 }
 
 void Timer::stopTimer() {
-  sys::SmartScopedLock<true> L(Lock);
+  sys::SmartScopedLock<true> L(*TimerLock);
   TimeRecord TR = getTimeRecord(false);
   Elapsed    += TR.Elapsed;
   UserTime   += TR.UserTime;
@@ -175,27 +175,11 @@ void Timer::stopTimer() {
 }
 
 void Timer::sum(const Timer &T) {
-  if (&T < this) {
-    T.Lock.acquire();
-    Lock.acquire();
-  } else {
-    Lock.acquire();
-    T.Lock.acquire();
-  }
-  
   Elapsed    += T.Elapsed;
   UserTime   += T.UserTime;
   SystemTime += T.SystemTime;
   MemUsed    += T.MemUsed;
   PeakMem    += T.PeakMem;
-  
-  if (&T < this) {
-    T.Lock.release();
-    Lock.release();
-  } else {
-    Lock.release();
-    T.Lock.release();
-  }
 }
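
    The ordered acquire/release pairs deleted here guarded against ABBA
    deadlock between two per-timer locks; with one shared mutex that hazard
    disappears, so a scoped RAII lock suffices. TimerLock itself is not
    declared in this hunk; the sketch below assumes a file-scope
    ManagedStatic mutex, matching how the new code dereferences it:

        #include "llvm/Support/ManagedStatic.h"
        #include "llvm/System/Mutex.h"
        using namespace llvm;

        // Assumed to live at file scope in Timer.cpp:
        static ManagedStatic<sys::SmartMutex<true> > TimerLock;

        void lockedUpdate() {
          sys::SmartScopedLock<true> L(*TimerLock); // released on scope exit
          // ... mutate shared timer state ...
        }
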
 
 /// addPeakMemoryMeasurement - This method should be called whenever memory
@@ -203,14 +187,12 @@ void Timer::sum(const Timer &T) {
 /// currently active timers, which will be printed when the timer group prints
 ///
 void Timer::addPeakMemoryMeasurement() {
+  sys::SmartScopedLock<true> L(*TimerLock);
   size_t MemUsed = getMemUsage();
 
   for (std::vector<Timer*>::iterator I = ActiveTimers->begin(),
-         E = ActiveTimers->end(); I != E; ++I) {
-    (*I)->Lock.acquire();
+         E = ActiveTimers->end(); I != E; ++I)
     (*I)->PeakMem = std::max((*I)->PeakMem, MemUsed-(*I)->PeakMemBase);
-    (*I)->Lock.release();
-  }
 }
 
 //===----------------------------------------------------------------------===//
@@ -280,14 +262,7 @@ static void printVal(double Val, double Total, raw_ostream &OS) {
 }
 
 void Timer::print(const Timer &Total, raw_ostream &OS) {
-  if (&Total < this) {
-    Total.Lock.acquire();
-    Lock.acquire();
-  } else {
-    Lock.acquire();
-    Total.Lock.acquire();
-  }
-  
+  sys::SmartScopedLock<true> L(*TimerLock);
   if (Total.UserTime)
     printVal(UserTime, Total.UserTime, OS);
   if (Total.SystemTime)
@@ -310,14 +285,6 @@ void Timer::print(const Timer &Total, raw_ostream &OS) {
   OS << Name << "\n";
 
   Started = false;  // Once printed, don't print again
-  
-  if (&Total < this) {
-    Total.Lock.release();
-    Lock.release();
-  } else {
-    Lock.release();
-    Total.Lock.release();
-  }
 }
 
 // GetLibSupportInfoOutputFile - Return a file stream to print our output on...
@@ -329,13 +296,13 @@ llvm::GetLibSupportInfoOutputFile() {
   if (LibSupportInfoOutputFilename == "-")
     return &outs();
 
-  
+
   std::string Error;
   raw_ostream *Result = new raw_fd_ostream(LibSupportInfoOutputFilename.c_str(),
                                            Error, raw_fd_ostream::F_Append);
   if (Error.empty())
     return Result;
-  
+
   errs() << "Error opening info-output-file '"
          << LibSupportInfoOutputFilename << " for appending!\n";
   delete Result;
diff --git a/libclamav/c++/llvm/lib/Support/Triple.cpp b/libclamav/c++/llvm/lib/Support/Triple.cpp
index fc3b3f7..2fec094 100644
--- a/libclamav/c++/llvm/lib/Support/Triple.cpp
+++ b/libclamav/c++/llvm/lib/Support/Triple.cpp
@@ -9,6 +9,7 @@
 
 #include "llvm/ADT/Triple.h"
 
+#include "llvm/ADT/SmallString.h"
 #include "llvm/ADT/Twine.h"
 #include <cassert>
 #include <cstring>
@@ -89,18 +90,21 @@ const char *Triple::getOSTypeName(OSType Kind) {
   case DragonFly: return "dragonfly";
   case FreeBSD: return "freebsd";
   case Linux: return "linux";
+  case Lv2: return "lv2";
   case MinGW32: return "mingw32";
   case MinGW64: return "mingw64";
   case NetBSD: return "netbsd";
   case OpenBSD: return "openbsd";
+  case Psp: return "psp";
   case Solaris: return "solaris";
   case Win32: return "win32";
+  case Haiku: return "haiku";
   }
 
   return "<invalid>";
 }
 
-Triple::ArchType Triple::getArchTypeForLLVMName(const StringRef &Name) {
+Triple::ArchType Triple::getArchTypeForLLVMName(StringRef Name) {
   if (Name == "alpha")
     return alpha;
   if (Name == "arm")
@@ -139,7 +143,7 @@ Triple::ArchType Triple::getArchTypeForLLVMName(const StringRef &Name) {
   return UnknownArch;
 }
 
-Triple::ArchType Triple::getArchTypeForDarwinArchName(const StringRef &Str) {
+Triple::ArchType Triple::getArchTypeForDarwinArchName(StringRef Str) {
   // See arch(3) and llvm-gcc's driver-driver.c. We don't implement support for
   // archs which Darwin doesn't use.
 
@@ -176,6 +180,33 @@ Triple::ArchType Triple::getArchTypeForDarwinArchName(const StringRef &Str) {
   return Triple::UnknownArch;
 }
 
+// Returns the architecture name that is understood by the target assembler.
+const char *Triple::getArchNameForAssembler() {
+  if (getOS() != Triple::Darwin && getVendor() != Triple::Apple)
+    return NULL;
+
+  StringRef Str = getArchName();
+  if (Str == "i386")
+    return "i386";
+  if (Str == "x86_64")
+    return "x86_64";
+  if (Str == "powerpc")
+    return "ppc";
+  if (Str == "powerpc64")
+    return "ppc64";
+  if (Str == "arm")
+    return "arm";
+  if (Str == "armv4t" || Str == "thumbv4t")
+    return "armv4t";
+  if (Str == "armv5" || Str == "armv5e" || Str == "thumbv5" || Str == "thumbv5e")
+    return "armv5";
+  if (Str == "armv6" || Str == "thumbv6")
+    return "armv6";
+  if (Str == "armv7" || Str == "thumbv7")
+    return "armv7";
+  return NULL;
+}
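
    A usage sketch; note the early return above means a name is only
    produced for Darwin/Apple triples, and NULL signals "no mapping":

        #include "llvm/ADT/Triple.h"
        #include "llvm/Support/raw_ostream.h"
        using namespace llvm;

        void asmArchDemo() {
          Triple T("powerpc-apple-darwin9");
          if (const char *AsmArch = T.getArchNameForAssembler())
            outs() << AsmArch << "\n"; // prints "ppc"
        }
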
+
 //
 
 void Triple::Parse() const {
@@ -197,7 +228,7 @@ void Triple::Parse() const {
     Arch = pic16;
   else if (ArchName == "powerpc")
     Arch = ppc;
-  else if (ArchName == "powerpc64")
+  else if ((ArchName == "powerpc64") || (ArchName == "ppu"))
     Arch = ppc64;
   else if (ArchName == "arm" ||
            ArchName.startswith("armv") ||
@@ -263,6 +294,8 @@ void Triple::Parse() const {
     OS = FreeBSD;
   else if (OSName.startswith("linux"))
     OS = Linux;
+  else if (OSName.startswith("lv2"))
+    OS = Lv2;
   else if (OSName.startswith("mingw32"))
     OS = MinGW32;
   else if (OSName.startswith("mingw64"))
@@ -271,10 +304,14 @@ void Triple::Parse() const {
     OS = NetBSD;
   else if (OSName.startswith("openbsd"))
     OS = OpenBSD;
+  else if (OSName.startswith("psp"))
+    OS = Psp;
   else if (OSName.startswith("solaris"))
     OS = Solaris;
   else if (OSName.startswith("win32"))
     OS = Win32;
+  else if (OSName.startswith("haiku"))
+    OS = Haiku;
   else
     OS = UnknownOS;
 
@@ -389,15 +426,22 @@ void Triple::setOS(OSType Kind) {
   setOSName(getOSTypeName(Kind));
 }
 
-void Triple::setArchName(const StringRef &Str) {
-  setTriple(Str + "-" + getVendorName() + "-" + getOSAndEnvironmentName());
+void Triple::setArchName(StringRef Str) {
+  // Work around a miscompilation bug for Twines in gcc 4.0.3.
+  SmallString<64> Triple;
+  Triple += Str;
+  Triple += "-";
+  Triple += getVendorName();
+  Triple += "-";
+  Triple += getOSAndEnvironmentName();
+  setTriple(Triple.str());
 }
 
-void Triple::setVendorName(const StringRef &Str) {
+void Triple::setVendorName(StringRef Str) {
   setTriple(getArchName() + "-" + Str + "-" + getOSAndEnvironmentName());
 }
 
-void Triple::setOSName(const StringRef &Str) {
+void Triple::setOSName(StringRef Str) {
   if (hasEnvironment())
     setTriple(getArchName() + "-" + getVendorName() + "-" + Str +
               "-" + getEnvironmentName());
@@ -405,11 +449,11 @@ void Triple::setOSName(const StringRef &Str) {
     setTriple(getArchName() + "-" + getVendorName() + "-" + Str);
 }
 
-void Triple::setEnvironmentName(const StringRef &Str) {
-  setTriple(getArchName() + "-" + getVendorName() + "-" + getOSName() + 
+void Triple::setEnvironmentName(StringRef Str) {
+  setTriple(getArchName() + "-" + getVendorName() + "-" + getOSName() +
             "-" + Str);
 }
 
-void Triple::setOSAndEnvironmentName(const StringRef &Str) {
+void Triple::setOSAndEnvironmentName(StringRef Str) {
   setTriple(getArchName() + "-" + getVendorName() + "-" + Str);
 }
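
    All of these setters reassemble the triple as arch-vendor-os[-environment];
    a sketch using the setter changed above:

        #include "llvm/ADT/Triple.h"
        #include <cassert>
        using namespace llvm;

        void tripleDemo() {
          Triple T("i386-pc-linux-gnu");
          T.setOSAndEnvironmentName("mingw32");
          assert(T.getTriple() == "i386-pc-mingw32"); // old environment dropped with the OS
        }
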
diff --git a/libclamav/c++/llvm/lib/Support/raw_ostream.cpp b/libclamav/c++/llvm/lib/Support/raw_ostream.cpp
index 0a82cc1..31451cc 100644
--- a/libclamav/c++/llvm/lib/Support/raw_ostream.cpp
+++ b/libclamav/c++/llvm/lib/Support/raw_ostream.cpp
@@ -168,6 +168,40 @@ raw_ostream &raw_ostream::write_hex(unsigned long long N) {
   return write(CurPtr, EndPtr-CurPtr);
 }
 
+raw_ostream &raw_ostream::write_escaped(StringRef Str) {
+  for (unsigned i = 0, e = Str.size(); i != e; ++i) {
+    unsigned char c = Str[i];
+
+    switch (c) {
+    case '\\':
+      *this << '\\' << '\\';
+      break;
+    case '\t':
+      *this << '\\' << 't';
+      break;
+    case '\n':
+      *this << '\\' << 'n';
+      break;
+    case '"':
+      *this << '\\' << '"';
+      break;
+    default:
+      if (std::isprint(c)) {
+        *this << c;
+        break;
+      }
+
+      // Always expand to a 3-character octal escape.
+      *this << '\\';
+      *this << char('0' + ((c >> 6) & 7));
+      *this << char('0' + ((c >> 3) & 7));
+      *this << char('0' + ((c >> 0) & 7));
+    }
+  }
+
+  return *this;
+}
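
    This is the streaming replacement for the EscapeString helper removed
    from StringExtras.cpp earlier in the patch; a small usage sketch:

        #include "llvm/ADT/StringRef.h"
        #include "llvm/Support/raw_ostream.h"
        using namespace llvm;

        void dumpQuoted(StringRef S) {
          errs() << '"';
          errs().write_escaped(S); // '\t' -> \t, '\n' -> \n, non-printables -> \NNN octal
          errs() << "\"\n";
        }
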
+
 raw_ostream &raw_ostream::operator<<(const void *P) {
   *this << '0' << 'x';
 
diff --git a/libclamav/c++/llvm/lib/System/CMakeLists.txt b/libclamav/c++/llvm/lib/System/CMakeLists.txt
index 2945e33..a56a1f7 100644
--- a/libclamav/c++/llvm/lib/System/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/System/CMakeLists.txt
@@ -42,5 +42,5 @@ add_llvm_library(LLVMSystem
   )
 
 if( BUILD_SHARED_LIBS AND NOT WIN32 )
-  target_link_libraries(LLVMSystem dl)
+  target_link_libraries(LLVMSystem ${CMAKE_DL_LIBS})
 endif()
diff --git a/libclamav/c++/llvm/lib/System/DynamicLibrary.cpp b/libclamav/c++/llvm/lib/System/DynamicLibrary.cpp
index 6efab94..7eb9f5f 100644
--- a/libclamav/c++/llvm/lib/System/DynamicLibrary.cpp
+++ b/libclamav/c++/llvm/lib/System/DynamicLibrary.cpp
@@ -15,7 +15,6 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/System/DynamicLibrary.h"
-#include "llvm/Support/ManagedStatic.h"
 #include "llvm/Config/config.h"
 #include <cstdio>
 #include <cstring>
diff --git a/libclamav/c++/llvm/lib/System/Host.cpp b/libclamav/c++/llvm/lib/System/Host.cpp
index fd2d952..e112698 100644
--- a/libclamav/c++/llvm/lib/System/Host.cpp
+++ b/libclamav/c++/llvm/lib/System/Host.cpp
@@ -13,6 +13,7 @@
 
 #include "llvm/System/Host.h"
 #include "llvm/Config/config.h"
+#include <string.h>
 
 // Include the platform-specific parts of this class.
 #ifdef LLVM_ON_UNIX
@@ -21,4 +22,280 @@
 #ifdef LLVM_ON_WIN32
 #include "Win32/Host.inc"
 #endif
+#ifdef _MSC_VER
+#include <intrin.h>
+#endif
+
+//===----------------------------------------------------------------------===//
+//
+//  Implementations of the CPU detection routines
+//
+//===----------------------------------------------------------------------===//
+
+using namespace llvm;
+
+#if defined(i386) || defined(__i386__) || defined(__x86__) || defined(_M_IX86)\
+ || defined(__x86_64__) || defined(_M_AMD64) || defined (_M_X64)
+
+/// GetX86CpuIDAndInfo - Execute the specified cpuid and return the 4 values in the
+/// specified arguments.  If we can't run cpuid on the host, return true.
+static bool GetX86CpuIDAndInfo(unsigned value, unsigned *rEAX,
+                            unsigned *rEBX, unsigned *rECX, unsigned *rEDX) {
+#if defined(__x86_64__) || defined(_M_AMD64) || defined (_M_X64)
+  #if defined(__GNUC__)
+    // gcc doesn't know cpuid would clobber ebx/rbx. Preserve it manually.
+    asm ("movq\t%%rbx, %%rsi\n\t"
+         "cpuid\n\t"
+         "xchgq\t%%rbx, %%rsi\n\t"
+         : "=a" (*rEAX),
+           "=S" (*rEBX),
+           "=c" (*rECX),
+           "=d" (*rEDX)
+         :  "a" (value));
+    return false;
+  #elif defined(_MSC_VER)
+    int registers[4];
+    __cpuid(registers, value);
+    *rEAX = registers[0];
+    *rEBX = registers[1];
+    *rECX = registers[2];
+    *rEDX = registers[3];
+    return false;
+  #endif
+#elif defined(i386) || defined(__i386__) || defined(__x86__) || defined(_M_IX86)
+  #if defined(__GNUC__)
+    asm ("movl\t%%ebx, %%esi\n\t"
+         "cpuid\n\t"
+         "xchgl\t%%ebx, %%esi\n\t"
+         : "=a" (*rEAX),
+           "=S" (*rEBX),
+           "=c" (*rECX),
+           "=d" (*rEDX)
+         :  "a" (value));
+    return false;
+  #elif defined(_MSC_VER)
+    __asm {
+      mov   eax,value
+      cpuid
+      mov   esi,rEAX
+      mov   dword ptr [esi],eax
+      mov   esi,rEBX
+      mov   dword ptr [esi],ebx
+      mov   esi,rECX
+      mov   dword ptr [esi],ecx
+      mov   esi,rEDX
+      mov   dword ptr [esi],edx
+    }
+    return false;
+  #endif
+#endif
+  return true;
+}
+
+static void DetectX86FamilyModel(unsigned EAX, unsigned &Family, unsigned &Model) {
+  Family = (EAX >> 8) & 0xf; // Bits 8 - 11
+  Model  = (EAX >> 4) & 0xf; // Bits 4 - 7
+  if (Family == 6 || Family == 0xf) {
+    if (Family == 0xf)
+      // Examine extended family ID if family ID is F.
+      Family += (EAX >> 20) & 0xff;    // Bits 20 - 27
+    // Examine extended model ID if family ID is 6 or F.
+    Model += ((EAX >> 16) & 0xf) << 4; // Bits 16 - 19
+  }
+}
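
    A worked example of the extended family/model arithmetic, using a
    plausible Nehalem signature (the EAX value is an assumption, not taken
    from this patch, and the sketch assumes it runs in this same file since
    the helper is static):

        #include <cassert>

        void familyModelDemo() {
          // EAX = 0x000106A5:
          //   Family = (EAX >> 8) & 0xf = 6;  Model = (EAX >> 4) & 0xf = 0xA
          //   Family == 6, so Model += ((EAX >> 16) & 0xf) << 4, giving 26,
          //   which getHostCPUName() below maps to "corei7".
          unsigned EAX = 0x000106A5, Family, Model;
          DetectX86FamilyModel(EAX, Family, Model);
          assert(Family == 6 && Model == 26);
        }
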
+#endif
+
+
+std::string sys::getHostCPUName() {
+#if defined(__x86_64__) || defined(__i386__)
+  unsigned EAX = 0, EBX = 0, ECX = 0, EDX = 0;
+  if (GetX86CpuIDAndInfo(0x1, &EAX, &EBX, &ECX, &EDX))
+    return "generic";
+  unsigned Family = 0;
+  unsigned Model  = 0;
+  DetectX86FamilyModel(EAX, Family, Model);
+
+  GetX86CpuIDAndInfo(0x80000001, &EAX, &EBX, &ECX, &EDX);
+  bool Em64T = (EDX >> 29) & 0x1;
+  bool HasSSE3 = (ECX & 0x1);
+
+  union {
+    unsigned u[3];
+    char     c[12];
+  } text;
+
+  GetX86CpuIDAndInfo(0, &EAX, text.u+0, text.u+2, text.u+1);
+  if (memcmp(text.c, "GenuineIntel", 12) == 0) {
+    switch (Family) {
+    case 3:
+      return "i386";
+    case 4:
+      switch (Model) {
+      case 0: // Intel486TM DX processors
+      case 1: // Intel486TM DX processors
+      case 2: // Intel486 SX processors
+      case 3: // Intel487TM processors, IntelDX2 OverDrive® processors,
+              // IntelDX2TM processors
+      case 4: // Intel486 SL processor
+      case 5: // IntelSX2TM processors
+      case 7: // Write-Back Enhanced IntelDX2 processors
+      case 8: // IntelDX4 OverDrive processors, IntelDX4TM processors
+      default: return "i486";
+      }
+    case 5:
+      switch (Model) {
+      case  1: // Pentium OverDrive processor for Pentium processor (60, 66),
+               // Pentium® processors (60, 66)
+      case  2: // Pentium OverDrive processor for Pentium processor (75, 90,
+               // 100, 120, 133), Pentium processors (75, 90, 100, 120, 133,
+               // 150, 166, 200)
+      case  3: // Pentium OverDrive processors for Intel486 processor-based
+               // systems
+        return "pentium";
+
+      case  4: // Pentium OverDrive processor with MMXTM technology for Pentium
+               // processor (75, 90, 100, 120, 133), Pentium processor with
+               // MMXTM technology (166, 200)
+        return "pentium-mmx";
+
+      default: return "pentium";
+      }
+    case 6:
+      switch (Model) {
+      case  1: // Pentium Pro processor
+        return "pentiumpro";
+
+      case  3: // Intel Pentium II OverDrive processor, Pentium II processor,
+               // model 03
+      case  5: // Pentium II processor, model 05, Pentium II Xeon processor,
+               // model 05, and Intel® Celeron® processor, model 05
+      case  6: // Celeron processor, model 06
+        return "pentium2";
+
+      case  7: // Pentium III processor, model 07, and Pentium III Xeon
+               // processor, model 07
+      case  8: // Pentium III processor, model 08, Pentium III Xeon processor,
+               // model 08, and Celeron processor, model 08
+      case 10: // Pentium III Xeon processor, model 0Ah
+      case 11: // Pentium III processor, model 0Bh
+        return "pentium3";
+
+      case  9: // Intel Pentium M processor, Intel Celeron M processor model 09.
+      case 13: // Intel Pentium M processor, Intel Celeron M processor, model
+               // 0Dh. All processors are manufactured using the 90 nm process.
+        return "pentium-m";
+
+      case 14: // Intel CoreTM Duo processor, Intel CoreTM Solo processor, model
+               // 0Eh. All processors are manufactured using the 65 nm process.
+        return "yonah";
+
+      case 15: // Intel CoreTM2 Duo processor, Intel CoreTM2 Duo mobile
+               // processor, Intel CoreTM2 Quad processor, Intel CoreTM2 Quad
+               // mobile processor, Intel CoreTM2 Extreme processor, Intel
+               // Pentium Dual-Core processor, Intel Xeon processor, model
+               // 0Fh. All processors are manufactured using the 65 nm process.
+      case 22: // Intel Celeron processor model 16h. All processors are
+               // manufactured using the 65 nm process
+        return "core2";
+
+      case 21: // Intel EP80579 Integrated Processor and Intel EP80579
+               // Integrated Processor with Intel QuickAssist Technology
+        return "i686"; // FIXME: ???
+
+      case 23: // Intel CoreTM2 Extreme processor, Intel Xeon processor, model
+               // 17h. All processors are manufactured using the 45 nm process.
+               //
+               // 45nm: Penryn , Wolfdale, Yorkfield (XE)
+        return "penryn";
+
+      case 26: // Intel Core i7 processor and Intel Xeon processor. All
+               // processors are manufactured using the 45 nm process.
+      case 29: // Intel Xeon processor MP. All processors are manufactured using
+               // the 45 nm process.
+        return "corei7";
+
+      case 28: // Intel Atom processor. All processors are manufactured using
+               // the 45 nm process
+        return "atom";
+
+      default: return "i686";
+      }
+    case 15: {
+      switch (Model) {
+      case  0: // Pentium 4 processor, Intel Xeon processor. All processors are
+               // model 00h and manufactured using the 0.18 micron process.
+      case  1: // Pentium 4 processor, Intel Xeon processor, Intel Xeon
+               // processor MP, and Intel Celeron processor. All processors are
+               // model 01h and manufactured using the 0.18 micron process.
+      case  2: // Pentium 4 processor, Mobile Intel Pentium 4 processor – M,
+               // Intel Xeon processor, Intel Xeon processor MP, Intel Celeron
+               // processor, and Mobile Intel Celeron processor. All processors
+               // are model 02h and manufactured using the 0.13 micron process.
+        return (Em64T) ? "x86-64" : "pentium4";
+
+      case  3: // Pentium 4 processor, Intel Xeon processor, Intel Celeron D
+               // processor. All processors are model 03h and manufactured using
+               // the 90 nm process.
+      case  4: // Pentium 4 processor, Pentium 4 processor Extreme Edition,
+               // Pentium D processor, Intel Xeon processor, Intel Xeon
+               // processor MP, Intel Celeron D processor. All processors are
+               // model 04h and manufactured using the 90 nm process.
+      case  6: // Pentium 4 processor, Pentium D processor, Pentium processor
+               // Extreme Edition, Intel Xeon processor, Intel Xeon processor
+               // MP, Intel Celeron D processor. All processors are model 06h
+               // and manufactured using the 65 nm process.
+        return (Em64T) ? "nocona" : "prescott";
+
+      default:
+        return (Em64T) ? "x86-64" : "pentium4";
+      }
+    }
+
+    default:
+      return "generic";
+    }
+  } else if (memcmp(text.c, "AuthenticAMD", 12) == 0) {
+    // FIXME: this poorly matches the generated SubtargetFeatureKV table.  There
+    // appears to be no way to generate the wide variety of AMD-specific targets
+    // from the information returned from CPUID.
+    switch (Family) {
+      case 4:
+        return "i486";
+      case 5:
+        switch (Model) {
+        case 6:
+        case 7:  return "k6";
+        case 8:  return "k6-2";
+        case 9:
+        case 13: return "k6-3";
+        default: return "pentium";
+        }
+      case 6:
+        switch (Model) {
+        case 4:  return "athlon-tbird";
+        case 6:
+        case 7:
+        case 8:  return "athlon-mp";
+        case 10: return "athlon-xp";
+        default: return "athlon";
+        }
+      case 15:
+        if (HasSSE3) {
+          return "k8-sse3";
+        } else {
+          switch (Model) {
+          case 1:  return "opteron";
+          case 5:  return "athlon-fx"; // also opteron
+          default: return "athlon64";
+          }
+        }
+      case 16:
+        return "amdfam10";
+    default:
+      return "generic";
+    }
+  }
+#endif
 
+  return "generic";
+}
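
    A caller-side sketch; the function degrades to "generic" on non-x86
    hosts or whenever cpuid is unavailable:

        #include "llvm/System/Host.h"
        #include "llvm/Support/raw_ostream.h"
        using namespace llvm;

        int main() {
          outs() << "host cpu: " << sys::getHostCPUName() << "\n"; // e.g. "core2" or "generic"
          return 0;
        }
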
diff --git a/libclamav/c++/llvm/lib/System/Makefile b/libclamav/c++/llvm/lib/System/Makefile
index 49704c3..d4fd60e 100644
--- a/libclamav/c++/llvm/lib/System/Makefile
+++ b/libclamav/c++/llvm/lib/System/Makefile
@@ -11,6 +11,12 @@ LEVEL = ../..
 LIBRARYNAME = LLVMSystem
 BUILD_ARCHIVE = 1
 
+include $(LEVEL)/Makefile.config
+
+ifeq ($(HOST_OS),MingW)
+  REQUIRES_EH := 1
+endif
+
 EXTRA_DIST = Unix Win32 README.txt
 
 include $(LEVEL)/Makefile.common
diff --git a/libclamav/c++/llvm/lib/System/Path.cpp b/libclamav/c++/llvm/lib/System/Path.cpp
index df33574..8e1fa53 100644
--- a/libclamav/c++/llvm/lib/System/Path.cpp
+++ b/libclamav/c++/llvm/lib/System/Path.cpp
@@ -13,7 +13,6 @@
 
 #include "llvm/System/Path.h"
 #include "llvm/Config/config.h"
-#include "llvm/Support/ErrorHandling.h"
 #include <cassert>
 #include <cstring>
 #include <ostream>
diff --git a/libclamav/c++/llvm/lib/System/Unix/Memory.inc b/libclamav/c++/llvm/lib/System/Unix/Memory.inc
index a80f56f..1b038f9 100644
--- a/libclamav/c++/llvm/lib/System/Unix/Memory.inc
+++ b/libclamav/c++/llvm/lib/System/Unix/Memory.inc
@@ -12,7 +12,7 @@
 //===----------------------------------------------------------------------===//
 
 #include "Unix.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/System/Process.h"
 
 #ifdef HAVE_SYS_MMAN_H
diff --git a/libclamav/c++/llvm/lib/System/Unix/Path.inc b/libclamav/c++/llvm/lib/System/Unix/Path.inc
index 89285b4..4300d67 100644
--- a/libclamav/c++/llvm/lib/System/Unix/Path.inc
+++ b/libclamav/c++/llvm/lib/System/Unix/Path.inc
@@ -335,7 +335,7 @@ getprogpath(char ret[PATH_MAX], const char *bin)
   free(pv);
   return (NULL);
 }
-#endif
+#endif // __FreeBSD__
 
 /// GetMainExecutable - Return the path to the main executable, given the
 /// value of argv[0] from program startup.
@@ -454,6 +454,20 @@ Path::canWrite() const {
 }
 
 bool
+Path::isRegularFile() const {
+  // Get the status so we can determine if it's a file or directory
+  struct stat buf;
+
+  if (0 != stat(path.c_str(), &buf))
+    return false;
+
+  if (S_ISREG(buf.st_mode))
+    return true;
+
+  return false;
+}
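
    A usage sketch; unlike a bare existence check, this filters out
    directories, FIFOs, sockets, and device nodes:

        #include "llvm/System/Path.h"
        #include <string>
        using namespace llvm;

        bool looksScannable(const std::string &Name) {
          sys::Path P(Name);
          return P.isRegularFile() && P.canRead(); // plain, readable file only
        }
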
+
+bool
 Path::canExecute() const {
   if (0 != access(path.c_str(), R_OK | X_OK ))
     return false;
@@ -723,7 +737,7 @@ Path::createTemporaryFileOnDisk(bool reuse_current, std::string* ErrMsg) {
 
 bool
 Path::eraseFromDisk(bool remove_contents, std::string *ErrStr) const {
-  // Get the status so we can determin if its a file or directory
+  // Get the status so we can determine if it's a file or directory
   struct stat buf;
   if (0 != stat(path.c_str(), &buf)) {
     MakeErrMsg(ErrStr, path + ": can't get status of file");
diff --git a/libclamav/c++/llvm/lib/System/Unix/Process.inc b/libclamav/c++/llvm/lib/System/Unix/Process.inc
index d715585..911b8c3 100644
--- a/libclamav/c++/llvm/lib/System/Unix/Process.inc
+++ b/libclamav/c++/llvm/lib/System/Unix/Process.inc
@@ -18,7 +18,9 @@
 #ifdef HAVE_SYS_RESOURCE_H
 #include <sys/resource.h>
 #endif
-#ifdef HAVE_MALLOC_H
+// DragonFly BSD has deprecated <malloc.h> in favor of <stdlib.h>, which
+// Unix.h already includes for us.
+#if defined(HAVE_MALLOC_H) && !defined(__DragonFly__)
 #include <malloc.h>
 #endif
 #ifdef HAVE_MALLOC_MALLOC_H
@@ -91,7 +93,7 @@ Process::GetTotalMemoryUsage()
   malloc_statistics_t Stats;
   malloc_zone_statistics(malloc_default_zone(), &Stats);
   return Stats.size_allocated;   // darwin
-#elif defined(HAVE_GETRUSAGE)
+#elif defined(HAVE_GETRUSAGE) && !defined(__HAIKU__)
   struct rusage usage;
   ::getrusage(RUSAGE_SELF, &usage);
   return usage.ru_maxrss;
diff --git a/libclamav/c++/llvm/lib/System/Unix/Program.inc b/libclamav/c++/llvm/lib/System/Unix/Program.inc
index 56dea25..43c3606 100644
--- a/libclamav/c++/llvm/lib/System/Unix/Program.inc
+++ b/libclamav/c++/llvm/lib/System/Unix/Program.inc
@@ -121,6 +121,9 @@ static bool RedirectIO(const Path *Path, int FD, std::string* ErrMsg) {
   return false;
 }
 
+static void TimeOutHandler(int Sig) {
+}
+
 static void SetMemoryLimits (unsigned size)
 {
 #if HAVE_SYS_RESOURCE_H
@@ -231,11 +234,14 @@ Program::Wait(unsigned secondsToWait,
     return -1;
   }
 
-  // Install a timeout handler.
+  // Install a timeout handler.  The handler itself does nothing, but the simple
+  // fact of having a handler at all causes the wait below to return with EINTR,
+  // unlike if we used SIG_IGN.
   if (secondsToWait) {
-    memset(&Act, 0, sizeof(Act));
-    Act.sa_handler = SIG_IGN;
+    Act.sa_sigaction = 0;
+    Act.sa_handler = TimeOutHandler;
     sigemptyset(&Act.sa_mask);
+    Act.sa_flags = 0;
     sigaction(SIGALRM, &Act, &Old);
     alarm(secondsToWait);
   }
@@ -244,7 +250,7 @@ Program::Wait(unsigned secondsToWait,
   int status;
   uint64_t pid = reinterpret_cast<uint64_t>(Data_);
   pid_t child = static_cast<pid_t>(pid);
-  while (wait(&status) != child)
+  while (waitpid(pid, &status, 0) != child)
     if (secondsToWait && errno == EINTR) {
       // Kill the child.
       kill(child, SIGKILL);
diff --git a/libclamav/c++/llvm/lib/System/Unix/Signals.inc b/libclamav/c++/llvm/lib/System/Unix/Signals.inc
index d39e1e9..676e1e5 100644
--- a/libclamav/c++/llvm/lib/System/Unix/Signals.inc
+++ b/libclamav/c++/llvm/lib/System/Unix/Signals.inc
@@ -209,7 +209,7 @@ static void PrintStackTrace(void *) {
     Dl_info dlinfo;
     dladdr(StackTrace[i], &dlinfo);
 
-    fprintf(stderr, "%-3d", i);
+    fprintf(stderr, "%-2d", i);
 
     const char* name = strrchr(dlinfo.dli_fname, '/');
     if (name == NULL) fprintf(stderr, " %-*s", width, dlinfo.dli_fname);
diff --git a/libclamav/c++/llvm/lib/System/Win32/Memory.inc b/libclamav/c++/llvm/lib/System/Win32/Memory.inc
index 7611ecd..19fccbd 100644
--- a/libclamav/c++/llvm/lib/System/Win32/Memory.inc
+++ b/libclamav/c++/llvm/lib/System/Win32/Memory.inc
@@ -13,7 +13,7 @@
 //===----------------------------------------------------------------------===//
 
 #include "Win32.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/System/Process.h"
 
 namespace llvm {
diff --git a/libclamav/c++/llvm/lib/System/Win32/Path.inc b/libclamav/c++/llvm/lib/System/Win32/Path.inc
index 46b965f..634fbc7 100644
--- a/libclamav/c++/llvm/lib/System/Win32/Path.inc
+++ b/libclamav/c++/llvm/lib/System/Win32/Path.inc
@@ -357,6 +357,13 @@ Path::canExecute() const {
   return attr != INVALID_FILE_ATTRIBUTES;
 }
 
+bool
+Path::isRegularFile() const {
+  if (isDirectory())
+    return false;
+  return true;
+}
+
 std::string
 Path::getLast() const {
   // Find the last slash
@@ -608,7 +615,8 @@ Path::createDirectoryOnDisk(bool create_parents, std::string* ErrMsg) {
     while (*next) {
       next = strchr(next, '/');
       *next = 0;
-      if (!CreateDirectory(pathname, NULL))
+      if (!CreateDirectory(pathname, NULL) &&
+          GetLastError() != ERROR_ALREADY_EXISTS)
           return MakeErrMsg(ErrMsg, 
             std::string(pathname) + ": Can't create directory: ");
       *next++ = '/';
@@ -616,7 +624,8 @@ Path::createDirectoryOnDisk(bool create_parents, std::string* ErrMsg) {
   } else {
     // Drop trailing slash.
     pathname[len-1] = 0;
-    if (!CreateDirectory(pathname, NULL)) {
+    if (!CreateDirectory(pathname, NULL) &&
+        GetLastError() != ERROR_ALREADY_EXISTS) {
       return MakeErrMsg(ErrMsg, std::string(pathname) + ": Can't create directory: ");
     }
   }
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARM.h b/libclamav/c++/llvm/lib/Target/ARM/ARM.h
index 487ce1d..ff1980d 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARM.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARM.h
@@ -103,10 +103,13 @@ FunctionPass *createARMObjectCodeEmitterPass(ARMBaseTargetMachine &TM,
                                              ObjectCodeEmitter &OCE);
 
 FunctionPass *createARMLoadStoreOptimizationPass(bool PreAlloc = false);
+FunctionPass *createARMExpandPseudoPass();
 FunctionPass *createARMConstantIslandPass();
 FunctionPass *createNEONPreAllocPass();
+FunctionPass *createNEONMoveFixPass();
 FunctionPass *createThumb2ITBlockPass();
 FunctionPass *createThumb2SizeReductionPass();
+FunctionPass *createARMMaxStackAlignmentCalculatorPass();
 
 extern Target TheARMTarget, TheThumbTarget;
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARM.td b/libclamav/c++/llvm/lib/Target/ARM/ARM.td
index 8851fbb..7033861 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARM.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARM.td
@@ -89,16 +89,18 @@ def : ProcNoItin<"xscale",          [ArchV5TE]>;
 def : ProcNoItin<"iwmmxt",          [ArchV5TE]>;
 
 // V6 Processors.
-def : ProcNoItin<"arm1136j-s",      [ArchV6]>;
-def : ProcNoItin<"arm1136jf-s",     [ArchV6, FeatureVFP2]>;
-def : ProcNoItin<"arm1176jz-s",     [ArchV6]>;
-def : ProcNoItin<"arm1176jzf-s",    [ArchV6, FeatureVFP2]>;
-def : ProcNoItin<"mpcorenovfp",     [ArchV6]>;
-def : ProcNoItin<"mpcore",          [ArchV6, FeatureVFP2]>;
+def : Processor<"arm1136j-s",       ARMV6Itineraries, [ArchV6]>;
+def : Processor<"arm1136jf-s",      ARMV6Itineraries, [ArchV6, FeatureVFP2]>;
+def : Processor<"arm1176jz-s",      ARMV6Itineraries, [ArchV6]>;
+def : Processor<"arm1176jzf-s",     ARMV6Itineraries, [ArchV6, FeatureVFP2]>;
+def : Processor<"mpcorenovfp",      ARMV6Itineraries, [ArchV6]>;
+def : Processor<"mpcore",           ARMV6Itineraries, [ArchV6, FeatureVFP2]>;
 
 // V6T2 Processors.
-def : ProcNoItin<"arm1156t2-s",     [ArchV6T2, FeatureThumb2]>;
-def : ProcNoItin<"arm1156t2f-s",    [ArchV6T2, FeatureThumb2, FeatureVFP2]>;
+def : Processor<"arm1156t2-s",     ARMV6Itineraries,
+                 [ArchV6T2, FeatureThumb2]>;
+def : Processor<"arm1156t2f-s",    ARMV6Itineraries,
+                 [ArchV6T2, FeatureThumb2, FeatureVFP2]>;
 
 // V7 Processors.
 def : Processor<"cortex-a8",        CortexA8Itineraries,
@@ -125,12 +127,16 @@ def ARMInstrInfo : InstrInfo {
                        "SizeFlag",
                        "IndexModeBits",
                        "Form",
-                       "isUnaryDataProc"];
+                       "isUnaryDataProc",
+                       "canXformTo16Bit",
+                       "Dom"];
   let TSFlagsShifts = [0,
                        4,
                        7,
                        9,
-                       15];
+                       15,
+                       16,
+                       17];
 }
 
 //===----------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMAddressingModes.h b/libclamav/c++/llvm/lib/Target/ARM/ARMAddressingModes.h
index 1839153..ddeb1b9 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMAddressingModes.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMAddressingModes.h
@@ -15,7 +15,6 @@
 #define LLVM_TARGET_ARM_ARMADDRESSINGMODES_H
 
 #include "llvm/CodeGen/SelectionDAGNodes.h"
-#include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/MathExtras.h"
 #include <cassert>
 
@@ -38,7 +37,7 @@ namespace ARM_AM {
 
   static inline const char *getShiftOpcStr(ShiftOpc Op) {
     switch (Op) {
-    default: llvm_unreachable("Unknown shift opc!");
+    default: assert(0 && "Unknown shift opc!");
     case ARM_AM::asr: return "asr";
     case ARM_AM::lsl: return "lsl";
     case ARM_AM::lsr: return "lsr";
@@ -71,7 +70,7 @@ namespace ARM_AM {
 
   static inline const char *getAMSubModeStr(AMSubMode Mode) {
     switch (Mode) {
-    default: llvm_unreachable("Unknown addressing sub-mode!");
+    default: assert(0 && "Unknown addressing sub-mode!");
     case ARM_AM::ia: return "ia";
     case ARM_AM::ib: return "ib";
     case ARM_AM::da: return "da";
@@ -81,7 +80,7 @@ namespace ARM_AM {
 
   static inline const char *getAMSubModeAltStr(AMSubMode Mode, bool isLD) {
     switch (Mode) {
-    default: llvm_unreachable("Unknown addressing sub-mode!");
+    default: assert(0 && "Unknown addressing sub-mode!");
     case ARM_AM::ia: return isLD ? "fd" : "ea";
     case ARM_AM::ib: return isLD ? "ed" : "fa";
     case ARM_AM::da: return isLD ? "fa" : "ed";
@@ -342,6 +341,66 @@ namespace ARM_AM {
     return -1;
   }
 
+  static inline unsigned getT2SOImmValRotate(unsigned V) {
+    if ((V & ~255U) == 0) return 0;
+    // Use CTZ to compute the rotate amount.
+    unsigned RotAmt = CountTrailingZeros_32(V);
+    return (32 - RotAmt) & 31;
+  }
+
+  static inline bool isT2SOImmTwoPartVal (unsigned Imm) {
+    unsigned V = Imm;
+    // Passing values can be any combination of splat values and shifter
+    // values. If this can be handled with a single shifter or splat, bail
+    // out. Those should be handled directly, not with a two-part val.
+    if (getT2SOImmValSplatVal(V) != -1)
+      return false;
+    V = rotr32 (~255U, getT2SOImmValRotate(V)) & V;
+    if (V == 0)
+      return false;
+
+    // If this can be handled as an immediate, accept.
+    if (getT2SOImmVal(V) != -1) return true;
+
+    // Likewise, try masking out a splat value first.
+    V = Imm;
+    if (getT2SOImmValSplatVal(V & 0xff00ff00U) != -1)
+      V &= ~0xff00ff00U;
+    else if (getT2SOImmValSplatVal(V & 0x00ff00ffU) != -1)
+      V &= ~0x00ff00ffU;
+    // If what's left can be handled as an immediate, accept.
+    if (getT2SOImmVal(V) != -1) return true;
+
+    // Otherwise, do not accept.
+    return false;
+  }
+
+  static inline unsigned getT2SOImmTwoPartFirst(unsigned Imm) {
+    assert (isT2SOImmTwoPartVal(Imm) &&
+            "Immediate cannot be encoded as two part immediate!");
+    // Try a shifter operand as one part
+    unsigned V = rotr32 (~255, getT2SOImmValRotate(Imm)) & Imm;
+    // If the rest is encodable as an immediate, then return it.
+    if (getT2SOImmVal(V) != -1) return V;
+
+    // Try masking out a splat value first.
+    if (getT2SOImmValSplatVal(Imm & 0xff00ff00U) != -1)
+      return Imm & 0xff00ff00U;
+
+    // The other splat is all that's left as an option.
+    assert (getT2SOImmValSplatVal(Imm & 0x00ff00ffU) != -1);
+    return Imm & 0x00ff00ffU;
+  }
+
+  static inline unsigned getT2SOImmTwoPartSecond(unsigned Imm) {
+    // Mask out the first hunk
+    Imm ^= getT2SOImmTwoPartFirst(Imm);
+    // Return what's left
+    assert (getT2SOImmVal(Imm) != -1 &&
+            "Unable to encode second part of T2 two part SO immediate");
+    return Imm;
+  }
+
 
   //===--------------------------------------------------------------------===//
   // Addressing Mode #2
@@ -461,8 +520,8 @@ namespace ARM_AM {
     return ((AM5Opc >> 8) & 1) ? sub : add;
   }
 
-  /// getAM5Opc - This function encodes the addrmode5 opc field for FLDM and
-  /// FSTM instructions.
+  /// getAM5Opc - This function encodes the addrmode5 opc field for VLDM and
+  /// VSTM instructions.
   static inline unsigned getAM5Opc(AMSubMode SubMode, bool WB,
                                    unsigned char Offset) {
     assert((SubMode == ia || SubMode == db) &&
@@ -482,13 +541,15 @@ namespace ARM_AM {
   //
   // This is used for NEON load / store instructions.
   //
-  // addrmode6 := reg with optional writeback
+  // addrmode6 := reg with optional writeback and alignment
   //
-  // This is stored in three operands [regaddr, regupdate, opc].  The first is
-  // the address register.  The second register holds the value of a post-access
-  // increment for writeback or reg0 if no writeback or if the writeback
-  // increment is the size of the memory access.  The third operand encodes
-  // whether there is writeback to the address register.
+  // This is stored in four operands [regaddr, regupdate, opc, align].  The
+  // first is the address register.  The second register holds the value of
+  // a post-access increment for writeback or reg0 if no writeback or if the
+  // writeback increment is the size of the memory access.  The third
+  // operand encodes whether there is writeback to the address register. The
+  // fourth operand is the value of the alignment specifier to use or zero if
+  // no explicit alignment.
 
   static inline unsigned getAM6Opc(bool WB = false) {
     return (int)WB;
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
index 79950fe..705f970 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
@@ -1,4 +1,4 @@
-//===- ARMBaseInstrInfo.cpp - ARM Instruction Information -----------*- C++ -*-===//
+//===- ARMBaseInstrInfo.cpp - ARM Instruction Information -------*- C++ -*-===//
 //
 //                     The LLVM Compiler Infrastructure
 //
@@ -14,15 +14,24 @@
 #include "ARMBaseInstrInfo.h"
 #include "ARM.h"
 #include "ARMAddressingModes.h"
+#include "ARMConstantPoolValue.h"
 #include "ARMGenInstrInfo.inc"
 #include "ARMMachineFunctionInfo.h"
+#include "ARMRegisterInfo.h"
+#include "llvm/Constants.h"
+#include "llvm/Function.h"
+#include "llvm/GlobalValue.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/CodeGen/LiveVariables.h"
+#include "llvm/CodeGen/MachineConstantPool.h"
 #include "llvm/CodeGen/MachineFrameInfo.h"
 #include "llvm/CodeGen/MachineInstrBuilder.h"
 #include "llvm/CodeGen/MachineJumpTableInfo.h"
+#include "llvm/CodeGen/MachineMemOperand.h"
+#include "llvm/CodeGen/PseudoSourceValue.h"
 #include "llvm/MC/MCAsmInfo.h"
 #include "llvm/Support/CommandLine.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 using namespace llvm;
 
@@ -30,8 +39,9 @@ static cl::opt<bool>
 EnableARM3Addr("enable-arm-3-addr-conv", cl::Hidden,
                cl::desc("Enable ARM 2-addr to 3-addr conv"));
 
-ARMBaseInstrInfo::ARMBaseInstrInfo()
-  : TargetInstrInfoImpl(ARMInsts, array_lengthof(ARMInsts)) {
+ARMBaseInstrInfo::ARMBaseInstrInfo(const ARMSubtarget& STI)
+  : TargetInstrInfoImpl(ARMInsts, array_lengthof(ARMInsts)),
+    Subtarget(STI) {
 }
 
 MachineInstr *
@@ -247,7 +257,8 @@ ARMBaseInstrInfo::AnalyzeBranch(MachineBasicBlock &MBB,MachineBasicBlock *&TBB,
   // ...likewise if it ends with a branch table followed by an unconditional
   // branch. The branch folder can create these, and we must get rid of them for
   // correctness of Thumb constant islands.
-  if (isJumpTableBranchOpcode(SecondLastOpc) &&
+  if ((isJumpTableBranchOpcode(SecondLastOpc) ||
+       isIndirectBranchOpcode(SecondLastOpc)) &&
       isUncondBranchOpcode(LastOpc)) {
     I = LastInst;
     if (AllowModify)
@@ -391,6 +402,21 @@ bool ARMBaseInstrInfo::DefinesPredicate(MachineInstr *MI,
   return Found;
 }
 
+/// isPredicable - Return true if the specified instruction can be predicated.
+/// By default, this returns true for every instruction with a
+/// PredicateOperand.
+bool ARMBaseInstrInfo::isPredicable(MachineInstr *MI) const {
+  const TargetInstrDesc &TID = MI->getDesc();
+  if (!TID.isPredicable())
+    return false;
+
+  if ((TID.TSFlags & ARMII::DomainMask) == ARMII::DomainNEON) {
+    ARMFunctionInfo *AFI =
+      MI->getParent()->getParent()->getInfo<ARMFunctionInfo>();
+    return AFI->isThumb2Function();
+  }
+  return true;
+}
 
 /// FIXME: Works around a gcc miscompilation with -fstrict-aliasing
 static unsigned getNumJTEntries(const std::vector<MachineJumpTableEntry> &JT,
@@ -442,7 +468,7 @@ unsigned ARMBaseInstrInfo::GetInstSizeInBytes(const MachineInstr *MI) const {
     case ARM::Int_eh_sjlj_setjmp:
       return 24;
     case ARM::t2Int_eh_sjlj_setjmp:
-      return 20;
+      return 22;
     case ARM::BR_JTr:
     case ARM::BR_JTm:
     case ARM::BR_JTadd:
@@ -498,10 +524,10 @@ ARMBaseInstrInfo::isMoveInstr(const MachineInstr &MI,
 
   switch (MI.getOpcode()) {
   default: break;
-  case ARM::FCPYS:
-  case ARM::FCPYD:
+  case ARM::VMOVS:
   case ARM::VMOVD:
-  case  ARM::VMOVQ: {
+  case ARM::VMOVDneon:
+  case ARM::VMOVQ: {
     SrcReg = MI.getOperand(1).getReg();
     DstReg = MI.getOperand(0).getReg();
     return true;
@@ -550,8 +576,8 @@ ARMBaseInstrInfo::isLoadFromStackSlot(const MachineInstr *MI,
       return MI->getOperand(0).getReg();
     }
     break;
-  case ARM::FLDD:
-  case ARM::FLDS:
+  case ARM::VLDRD:
+  case ARM::VLDRS:
     if (MI->getOperand(1).isFI() &&
         MI->getOperand(2).isImm() &&
         MI->getOperand(2).getImm() == 0) {
@@ -589,8 +615,8 @@ ARMBaseInstrInfo::isStoreToStackSlot(const MachineInstr *MI,
       return MI->getOperand(0).getReg();
     }
     break;
-  case ARM::FSTD:
-  case ARM::FSTS:
+  case ARM::VSTRD:
+  case ARM::VSTRS:
     if (MI->getOperand(1).isFI() &&
         MI->getOperand(2).isImm() &&
         MI->getOperand(2).getImm() == 0) {
@@ -613,28 +639,12 @@ ARMBaseInstrInfo::copyRegToReg(MachineBasicBlock &MBB,
   if (I != MBB.end()) DL = I->getDebugLoc();
 
   if (DestRC != SrcRC) {
-    // Allow DPR / DPR_VFP2 / DPR_8 cross-class copies
-    // Allow QPR / QPR_VFP2 cross-class copies
-    if (DestRC == ARM::DPRRegisterClass) {
-      if (SrcRC == ARM::DPR_VFP2RegisterClass ||
-          SrcRC == ARM::DPR_8RegisterClass) {
-      } else
-        return false;
-    } else if (DestRC == ARM::DPR_VFP2RegisterClass) {
-      if (SrcRC == ARM::DPRRegisterClass ||
-          SrcRC == ARM::DPR_8RegisterClass) {
-      } else
-        return false;
-    } else if (DestRC == ARM::DPR_8RegisterClass) {
-      if (SrcRC == ARM::DPRRegisterClass ||
-          SrcRC == ARM::DPR_VFP2RegisterClass) {
-      } else
-        return false;
-    } else if ((DestRC == ARM::QPRRegisterClass &&
-                SrcRC == ARM::QPR_VFP2RegisterClass) ||
-               (DestRC == ARM::QPR_VFP2RegisterClass &&
-                SrcRC == ARM::QPRRegisterClass)) {
-    } else
+    if (DestRC->getSize() != SrcRC->getSize())
+      return false;
+
+    // Allow DPR / DPR_VFP2 / DPR_8 cross-class copies.
+    // Allow QPR / QPR_VFP2 / QPR_8 cross-class copies.
+    if (DestRC->getSize() != 8 && DestRC->getSize() != 16)
       return false;
   }
 
@@ -642,16 +652,23 @@ ARMBaseInstrInfo::copyRegToReg(MachineBasicBlock &MBB,
     AddDefaultCC(AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::MOVr),
                                         DestReg).addReg(SrcReg)));
   } else if (DestRC == ARM::SPRRegisterClass) {
-    AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::FCPYS), DestReg)
+    AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::VMOVS), DestReg)
                    .addReg(SrcReg));
-  } else if ((DestRC == ARM::DPRRegisterClass) ||
-             (DestRC == ARM::DPR_VFP2RegisterClass) ||
-             (DestRC == ARM::DPR_8RegisterClass)) {
-    AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::FCPYD), DestReg)
+  } else if (DestRC == ARM::DPRRegisterClass) {
+    AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::VMOVD), DestReg)
                    .addReg(SrcReg));
+  } else if (DestRC == ARM::DPR_VFP2RegisterClass ||
+             DestRC == ARM::DPR_8RegisterClass ||
+             SrcRC == ARM::DPR_VFP2RegisterClass ||
+             SrcRC == ARM::DPR_8RegisterClass) {
+    // Always use neon reg-reg move if source or dest is NEON-only regclass.
+    AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::VMOVDneon),
+                           DestReg).addReg(SrcReg));
   } else if (DestRC == ARM::QPRRegisterClass ||
-             DestRC == ARM::QPR_VFP2RegisterClass) {
-    BuildMI(MBB, I, DL, get(ARM::VMOVQ), DestReg).addReg(SrcReg);
+             DestRC == ARM::QPR_VFP2RegisterClass ||
+             DestRC == ARM::QPR_8RegisterClass) {
+    AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::VMOVQ),
+                           DestReg).addReg(SrcReg));
   } else {
     return false;
   }
@@ -665,27 +682,45 @@ storeRegToStackSlot(MachineBasicBlock &MBB, MachineBasicBlock::iterator I,
                     const TargetRegisterClass *RC) const {
   DebugLoc DL = DebugLoc::getUnknownLoc();
   if (I != MBB.end()) DL = I->getDebugLoc();
+  MachineFunction &MF = *MBB.getParent();
+  MachineFrameInfo &MFI = *MF.getFrameInfo();
+  unsigned Align = MFI.getObjectAlignment(FI);
+
+  MachineMemOperand *MMO =
+    MF.getMachineMemOperand(PseudoSourceValue::getFixedStack(FI),
+                            MachineMemOperand::MOStore, 0,
+                            MFI.getObjectSize(FI),
+                            Align);
 
   if (RC == ARM::GPRRegisterClass) {
     AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::STR))
                    .addReg(SrcReg, getKillRegState(isKill))
-                   .addFrameIndex(FI).addReg(0).addImm(0));
+                   .addFrameIndex(FI).addReg(0).addImm(0).addMemOperand(MMO));
   } else if (RC == ARM::DPRRegisterClass ||
              RC == ARM::DPR_VFP2RegisterClass ||
              RC == ARM::DPR_8RegisterClass) {
-    AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::FSTD))
+    AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::VSTRD))
                    .addReg(SrcReg, getKillRegState(isKill))
-                   .addFrameIndex(FI).addImm(0));
+                   .addFrameIndex(FI).addImm(0).addMemOperand(MMO));
   } else if (RC == ARM::SPRRegisterClass) {
-    AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::FSTS))
+    AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::VSTRS))
                    .addReg(SrcReg, getKillRegState(isKill))
-                   .addFrameIndex(FI).addImm(0));
+                   .addFrameIndex(FI).addImm(0).addMemOperand(MMO));
   } else {
     assert((RC == ARM::QPRRegisterClass ||
             RC == ARM::QPR_VFP2RegisterClass) && "Unknown regclass!");
     // FIXME: Neon instructions should support predicates
-    BuildMI(MBB, I, DL, get(ARM::VSTRQ)).addReg(SrcReg, getKillRegState(isKill))
-      .addFrameIndex(FI).addImm(0);
+    if (Align >= 16
+        && (getRegisterInfo().needsStackRealignment(MF))) {
+      AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::VST1q64))
+                     .addFrameIndex(FI).addImm(0).addImm(0).addImm(128)
+                     .addMemOperand(MMO)
+                     .addReg(SrcReg, getKillRegState(isKill)));
+    } else {
+      AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::VSTRQ)).
+                     addReg(SrcReg, getKillRegState(isKill))
+                     .addFrameIndex(FI).addImm(0).addMemOperand(MMO));
+    }
   }
 }
 
@@ -695,23 +730,41 @@ loadRegFromStackSlot(MachineBasicBlock &MBB, MachineBasicBlock::iterator I,
                      const TargetRegisterClass *RC) const {
   DebugLoc DL = DebugLoc::getUnknownLoc();
   if (I != MBB.end()) DL = I->getDebugLoc();
+  MachineFunction &MF = *MBB.getParent();
+  MachineFrameInfo &MFI = *MF.getFrameInfo();
+  unsigned Align = MFI.getObjectAlignment(FI);
+
+  MachineMemOperand *MMO =
+    MF.getMachineMemOperand(PseudoSourceValue::getFixedStack(FI),
+                            MachineMemOperand::MOLoad, 0,
+                            MFI.getObjectSize(FI),
+                            Align);
 
   if (RC == ARM::GPRRegisterClass) {
     AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::LDR), DestReg)
-                   .addFrameIndex(FI).addReg(0).addImm(0));
+                   .addFrameIndex(FI).addReg(0).addImm(0).addMemOperand(MMO));
   } else if (RC == ARM::DPRRegisterClass ||
              RC == ARM::DPR_VFP2RegisterClass ||
              RC == ARM::DPR_8RegisterClass) {
-    AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::FLDD), DestReg)
-                   .addFrameIndex(FI).addImm(0));
+    AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::VLDRD), DestReg)
+                   .addFrameIndex(FI).addImm(0).addMemOperand(MMO));
   } else if (RC == ARM::SPRRegisterClass) {
-    AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::FLDS), DestReg)
-                   .addFrameIndex(FI).addImm(0));
+    AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::VLDRS), DestReg)
+                   .addFrameIndex(FI).addImm(0).addMemOperand(MMO));
   } else {
     assert((RC == ARM::QPRRegisterClass ||
-            RC == ARM::QPR_VFP2RegisterClass) && "Unknown regclass!");
+            RC == ARM::QPR_VFP2RegisterClass ||
+            RC == ARM::QPR_8RegisterClass) && "Unknown regclass!");
     // FIXME: Neon instructions should support predicates
-    BuildMI(MBB, I, DL, get(ARM::VLDRQ), DestReg).addFrameIndex(FI).addImm(0);
+    if (Align >= 16
+        && (getRegisterInfo().needsStackRealignment(MF))) {
+      AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::VLD1q64), DestReg)
+                     .addFrameIndex(FI).addImm(0).addImm(0).addImm(128)
+                     .addMemOperand(MMO));
+    } else {
+      AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::VLDRQ), DestReg)
+                     .addFrameIndex(FI).addImm(0).addMemOperand(MMO));
+    }
   }
 }
 
@@ -731,18 +784,24 @@ foldMemoryOperandImpl(MachineFunction &MF, MachineInstr *MI,
     unsigned PredReg = MI->getOperand(3).getReg();
     if (OpNum == 0) { // move -> store
       unsigned SrcReg = MI->getOperand(1).getReg();
+      unsigned SrcSubReg = MI->getOperand(1).getSubReg();
       bool isKill = MI->getOperand(1).isKill();
       bool isUndef = MI->getOperand(1).isUndef();
       if (Opc == ARM::MOVr)
         NewMI = BuildMI(MF, MI->getDebugLoc(), get(ARM::STR))
-          .addReg(SrcReg, getKillRegState(isKill) | getUndefRegState(isUndef))
+          .addReg(SrcReg,
+                  getKillRegState(isKill) | getUndefRegState(isUndef),
+                  SrcSubReg)
           .addFrameIndex(FI).addReg(0).addImm(0).addImm(Pred).addReg(PredReg);
       else // ARM::t2MOVr
         NewMI = BuildMI(MF, MI->getDebugLoc(), get(ARM::t2STRi12))
-          .addReg(SrcReg, getKillRegState(isKill) | getUndefRegState(isUndef))
+          .addReg(SrcReg,
+                  getKillRegState(isKill) | getUndefRegState(isUndef),
+                  SrcSubReg)
           .addFrameIndex(FI).addImm(0).addImm(Pred).addReg(PredReg);
     } else {          // move -> load
       unsigned DstReg = MI->getOperand(0).getReg();
+      unsigned DstSubReg = MI->getOperand(0).getSubReg();
       bool isDead = MI->getOperand(0).isDead();
       bool isUndef = MI->getOperand(0).isUndef();
       if (Opc == ARM::MOVr)
@@ -750,14 +809,14 @@ foldMemoryOperandImpl(MachineFunction &MF, MachineInstr *MI,
           .addReg(DstReg,
                   RegState::Define |
                   getDeadRegState(isDead) |
-                  getUndefRegState(isUndef))
+                  getUndefRegState(isUndef), DstSubReg)
           .addFrameIndex(FI).addReg(0).addImm(0).addImm(Pred).addReg(PredReg);
       else // ARM::t2MOVr
         NewMI = BuildMI(MF, MI->getDebugLoc(), get(ARM::t2LDRi12))
           .addReg(DstReg,
                   RegState::Define |
                   getDeadRegState(isDead) |
-                  getUndefRegState(isUndef))
+                  getUndefRegState(isUndef), DstSubReg)
           .addFrameIndex(FI).addImm(0).addImm(Pred).addReg(PredReg);
     }
   } else if (Opc == ARM::tMOVgpr2gpr ||
@@ -765,64 +824,78 @@ foldMemoryOperandImpl(MachineFunction &MF, MachineInstr *MI,
              Opc == ARM::tMOVgpr2tgpr) {
     if (OpNum == 0) { // move -> store
       unsigned SrcReg = MI->getOperand(1).getReg();
+      unsigned SrcSubReg = MI->getOperand(1).getSubReg();
       bool isKill = MI->getOperand(1).isKill();
       bool isUndef = MI->getOperand(1).isUndef();
       NewMI = BuildMI(MF, MI->getDebugLoc(), get(ARM::t2STRi12))
-        .addReg(SrcReg, getKillRegState(isKill) | getUndefRegState(isUndef))
+        .addReg(SrcReg,
+                getKillRegState(isKill) | getUndefRegState(isUndef),
+                SrcSubReg)
         .addFrameIndex(FI).addImm(0).addImm(ARMCC::AL).addReg(0);
     } else {          // move -> load
       unsigned DstReg = MI->getOperand(0).getReg();
+      unsigned DstSubReg = MI->getOperand(0).getSubReg();
       bool isDead = MI->getOperand(0).isDead();
       bool isUndef = MI->getOperand(0).isUndef();
       NewMI = BuildMI(MF, MI->getDebugLoc(), get(ARM::t2LDRi12))
         .addReg(DstReg,
                 RegState::Define |
                 getDeadRegState(isDead) |
-                getUndefRegState(isUndef))
+                getUndefRegState(isUndef),
+                DstSubReg)
         .addFrameIndex(FI).addImm(0).addImm(ARMCC::AL).addReg(0);
     }
-  } else if (Opc == ARM::FCPYS) {
+  } else if (Opc == ARM::VMOVS) {
     unsigned Pred = MI->getOperand(2).getImm();
     unsigned PredReg = MI->getOperand(3).getReg();
     if (OpNum == 0) { // move -> store
       unsigned SrcReg = MI->getOperand(1).getReg();
+      unsigned SrcSubReg = MI->getOperand(1).getSubReg();
       bool isKill = MI->getOperand(1).isKill();
       bool isUndef = MI->getOperand(1).isUndef();
-      NewMI = BuildMI(MF, MI->getDebugLoc(), get(ARM::FSTS))
-        .addReg(SrcReg, getKillRegState(isKill) | getUndefRegState(isUndef))
+      NewMI = BuildMI(MF, MI->getDebugLoc(), get(ARM::VSTRS))
+        .addReg(SrcReg, getKillRegState(isKill) | getUndefRegState(isUndef),
+                SrcSubReg)
         .addFrameIndex(FI)
         .addImm(0).addImm(Pred).addReg(PredReg);
     } else {          // move -> load
       unsigned DstReg = MI->getOperand(0).getReg();
+      unsigned DstSubReg = MI->getOperand(0).getSubReg();
       bool isDead = MI->getOperand(0).isDead();
       bool isUndef = MI->getOperand(0).isUndef();
-      NewMI = BuildMI(MF, MI->getDebugLoc(), get(ARM::FLDS))
+      NewMI = BuildMI(MF, MI->getDebugLoc(), get(ARM::VLDRS))
         .addReg(DstReg,
                 RegState::Define |
                 getDeadRegState(isDead) |
-                getUndefRegState(isUndef))
+                getUndefRegState(isUndef),
+                DstSubReg)
         .addFrameIndex(FI).addImm(0).addImm(Pred).addReg(PredReg);
     }
   }
-  else if (Opc == ARM::FCPYD) {
+  else if (Opc == ARM::VMOVD) {
     unsigned Pred = MI->getOperand(2).getImm();
     unsigned PredReg = MI->getOperand(3).getReg();
     if (OpNum == 0) { // move -> store
       unsigned SrcReg = MI->getOperand(1).getReg();
+      unsigned SrcSubReg = MI->getOperand(1).getSubReg();
       bool isKill = MI->getOperand(1).isKill();
       bool isUndef = MI->getOperand(1).isUndef();
-      NewMI = BuildMI(MF, MI->getDebugLoc(), get(ARM::FSTD))
-        .addReg(SrcReg, getKillRegState(isKill) | getUndefRegState(isUndef))
+      NewMI = BuildMI(MF, MI->getDebugLoc(), get(ARM::VSTRD))
+        .addReg(SrcReg,
+                getKillRegState(isKill) | getUndefRegState(isUndef),
+                SrcSubReg)
         .addFrameIndex(FI).addImm(0).addImm(Pred).addReg(PredReg);
     } else {          // move -> load
       unsigned DstReg = MI->getOperand(0).getReg();
+      unsigned DstSubReg = MI->getOperand(0).getSubReg();
       bool isDead = MI->getOperand(0).isDead();
       bool isUndef = MI->getOperand(0).isUndef();
-      NewMI = BuildMI(MF, MI->getDebugLoc(), get(ARM::FLDD))
+      NewMI = BuildMI(MF, MI->getDebugLoc(), get(ARM::VLDRD))
         .addReg(DstReg,
                 RegState::Define |
                 getDeadRegState(isDead) |
-                getUndefRegState(isUndef))
+                getUndefRegState(isUndef),
+                DstSubReg)
         .addFrameIndex(FI).addImm(0).addImm(Pred).addReg(PredReg);
     }
   }
@@ -853,15 +926,113 @@ ARMBaseInstrInfo::canFoldMemoryOperand(const MachineInstr *MI,
              Opc == ARM::tMOVtgpr2gpr ||
              Opc == ARM::tMOVgpr2tgpr) {
     return true;
-  } else if (Opc == ARM::FCPYS || Opc == ARM::FCPYD) {
+  } else if (Opc == ARM::VMOVS || Opc == ARM::VMOVD) {
     return true;
-  } else if (Opc == ARM::VMOVD || Opc == ARM::VMOVQ) {
+  } else if (Opc == ARM::VMOVDneon || Opc == ARM::VMOVQ) {
     return false; // FIXME
   }
 
   return false;
 }
 
+void ARMBaseInstrInfo::
+reMaterialize(MachineBasicBlock &MBB,
+              MachineBasicBlock::iterator I,
+              unsigned DestReg, unsigned SubIdx,
+              const MachineInstr *Orig,
+              const TargetRegisterInfo *TRI) const {
+  DebugLoc dl = Orig->getDebugLoc();
+
+  if (SubIdx && TargetRegisterInfo::isPhysicalRegister(DestReg)) {
+    DestReg = TRI->getSubReg(DestReg, SubIdx);
+    SubIdx = 0;
+  }
+
+  unsigned Opcode = Orig->getOpcode();
+  switch (Opcode) {
+  default: {
+    MachineInstr *MI = MBB.getParent()->CloneMachineInstr(Orig);
+    MI->getOperand(0).setReg(DestReg);
+    MBB.insert(I, MI);
+    break;
+  }
+  case ARM::tLDRpci_pic:
+  case ARM::t2LDRpci_pic: {
+    MachineFunction &MF = *MBB.getParent();
+    ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
+    MachineConstantPool *MCP = MF.getConstantPool();
+    unsigned CPI = Orig->getOperand(1).getIndex();
+    const MachineConstantPoolEntry &MCPE = MCP->getConstants()[CPI];
+    assert(MCPE.isMachineConstantPoolEntry() &&
+           "Expecting a machine constantpool entry!");
+    ARMConstantPoolValue *ACPV =
+      static_cast<ARMConstantPoolValue*>(MCPE.Val.MachineCPVal);
+    unsigned PCLabelId = AFI->createConstPoolEntryUId();
+    ARMConstantPoolValue *NewCPV = 0;
+    if (ACPV->isGlobalValue())
+      NewCPV = new ARMConstantPoolValue(ACPV->getGV(), PCLabelId,
+                                        ARMCP::CPValue, 4);
+    else if (ACPV->isExtSymbol())
+      NewCPV = new ARMConstantPoolValue(MF.getFunction()->getContext(),
+                                        ACPV->getSymbol(), PCLabelId, 4);
+    else if (ACPV->isBlockAddress())
+      NewCPV = new ARMConstantPoolValue(ACPV->getBlockAddress(), PCLabelId,
+                                        ARMCP::CPBlockAddress, 4);
+    else
+      llvm_unreachable("Unexpected ARM constantpool value type!!");
+    CPI = MCP->getConstantPoolIndex(NewCPV, MCPE.getAlignment());
+    MachineInstrBuilder MIB = BuildMI(MBB, I, Orig->getDebugLoc(), get(Opcode),
+                                      DestReg)
+      .addConstantPoolIndex(CPI).addImm(PCLabelId);
+    (*MIB).setMemRefs(Orig->memoperands_begin(), Orig->memoperands_end());
+    break;
+  }
+  }
+
+  MachineInstr *NewMI = prior(I);
+  NewMI->getOperand(0).setSubReg(SubIdx);
+}
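
reMaterialize has to special-case the PIC constant-pool loads because each such load is tied to a unique pc-label: cloning the instruction verbatim would reuse the old label, so a fresh pool entry is created under a new id. A toy model of that pairing, with invented types:

    #include <vector>

    // Illustrative model: a PIC constant-pool load pairs a pool entry with
    // a unique pc-label id, so rematerialization cannot reuse the entry.
    struct PoolEntry { int value; unsigned pcLabelId; };

    struct Pool {
      std::vector<PoolEntry> entries;
      unsigned nextLabel = 0;
      // Clone an existing entry under a fresh pc-label; return its index.
      unsigned cloneForRemat(unsigned cpi) {
        PoolEntry e = entries[cpi];
        e.pcLabelId = nextLabel++;
        entries.push_back(e);
        return (unsigned)entries.size() - 1;
      }
    };
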
+
+bool ARMBaseInstrInfo::isIdentical(const MachineInstr *MI0,
+                                  const MachineInstr *MI1,
+                                  const MachineRegisterInfo *MRI) const {
+  int Opcode = MI0->getOpcode();
+  if (Opcode == ARM::t2LDRpci ||
+      Opcode == ARM::t2LDRpci_pic ||
+      Opcode == ARM::tLDRpci ||
+      Opcode == ARM::tLDRpci_pic) {
+    if (MI1->getOpcode() != Opcode)
+      return false;
+    if (MI0->getNumOperands() != MI1->getNumOperands())
+      return false;
+
+    const MachineOperand &MO0 = MI0->getOperand(1);
+    const MachineOperand &MO1 = MI1->getOperand(1);
+    if (MO0.getOffset() != MO1.getOffset())
+      return false;
+
+    const MachineFunction *MF = MI0->getParent()->getParent();
+    const MachineConstantPool *MCP = MF->getConstantPool();
+    int CPI0 = MO0.getIndex();
+    int CPI1 = MO1.getIndex();
+    const MachineConstantPoolEntry &MCPE0 = MCP->getConstants()[CPI0];
+    const MachineConstantPoolEntry &MCPE1 = MCP->getConstants()[CPI1];
+    ARMConstantPoolValue *ACPV0 =
+      static_cast<ARMConstantPoolValue*>(MCPE0.Val.MachineCPVal);
+    ARMConstantPoolValue *ACPV1 =
+      static_cast<ARMConstantPoolValue*>(MCPE1.Val.MachineCPVal);
+    return ACPV0->hasSameValue(ACPV1);
+  }
+
+  return TargetInstrInfoImpl::isIdentical(MI0, MI1, MRI);
+}
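
isIdentical compares the underlying ARMConstantPoolValues rather than the operand indices: two pc-relative loads of the same symbol are duplicates even when their pool entries (and pc-label ids) differ, so index equality would be too strict. Roughly, the value comparison amounts to the following standalone sketch (fields invented):

    #include <string>

    // Two pc-relative loads are "identical" if the pool *values* match,
    // regardless of which constant-pool index each entry landed at.
    struct CPV {
      std::string symbol;  // global / external symbol / block address
      int modifier;
      bool hasSameValue(const CPV &o) const {
        return symbol == o.symbol && modifier == o.modifier;
      }
    };
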
+
+bool ARMBaseInstrInfo::isProfitableToDuplicateIndirectBranch() const {
+  // If the target processor can predict indirect branches, it is highly
+  // desirable to duplicate them, since it can often make them predictable.
+  return getSubtarget().hasBranchTargetBuffer();
+}
+
 /// getInstrPredicate - If instruction is predicated, returns its predicate
 /// condition, otherwise returns AL. It also returns the condition code
 /// register by reference.
@@ -989,6 +1160,7 @@ bool llvm::rewriteARMFrameIndex(MachineInstr &MI, unsigned FrameRegIdx,
       break;
     }
     case ARMII::AddrMode4:
+    case ARMII::AddrMode6:
       // Can't fold any offset even if it's zero.
       return false;
     case ARMII::AddrMode5: {
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.h b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.h
index a13155b..7944f35 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.h
@@ -1,4 +1,4 @@
-//===- ARMBaseInstrInfo.h - ARM Base Instruction Information -------------*- C++ -*-===//
+//===- ARMBaseInstrInfo.h - ARM Base Instruction Information ----*- C++ -*-===//
 //
 //                     The LLVM Compiler Infrastructure
 //
@@ -131,6 +131,14 @@ namespace ARMII {
     Xform16Bit    = 1 << 16,
 
     //===------------------------------------------------------------------===//
+    // Code domain.
+    DomainShift   = 17,
+    DomainMask    = 3 << DomainShift,
+    DomainGeneral = 0 << DomainShift,
+    DomainVFP     = 1 << DomainShift,
+    DomainNEON    = 2 << DomainShift,
+
+    //===------------------------------------------------------------------===//
     // Field shifts - such shifts are used to set field while generating
     // machine instructions.
     M_BitShift     = 5,
@@ -154,12 +162,29 @@ namespace ARMII {
     I_BitShift     = 25,
     CondShift      = 28
   };
+
+  /// Target Operand Flag enum.
+  enum TOF {
+    //===------------------------------------------------------------------===//
+    // ARM Specific MachineOperand flags.
+
+    MO_NO_FLAG,
+
+    /// MO_LO16 - On a symbol operand, this represents a relocation containing
+    /// lower 16 bit of the address. Used only via movw instruction.
+    MO_LO16,
+
+    /// MO_HI16 - On a symbol operand, this represents a relocation containing
+    /// higher 16 bit of the address. Used only via movt instruction.
+    MO_HI16
+  };
 }
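
The new Domain bits let later passes classify an instruction as general-purpose, VFP, or NEON straight from its TSFlags. A self-contained sketch of how such a two-bit field is decoded, mirroring the constants above:

    #include <cstdint>

    enum : uint64_t {
      DomainShift   = 17,
      DomainMask    = 3u << DomainShift,
      DomainGeneral = 0u << DomainShift,
      DomainVFP     = 1u << DomainShift,
      DomainNEON    = 2u << DomainShift,
    };

    // Mask out the two domain bits and compare against the encoded values.
    bool isNEON(uint64_t TSFlags) {
      return (TSFlags & DomainMask) == DomainNEON;
    }
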
 
 class ARMBaseInstrInfo : public TargetInstrInfoImpl {
+  const ARMSubtarget& Subtarget;
 protected:
   // Can be only subclassed.
-  explicit ARMBaseInstrInfo();
+  explicit ARMBaseInstrInfo(const ARMSubtarget &STI);
 public:
   // Return the non-pre/post incrementing version of 'Opc'. Return 0
  // if there is no such opcode.
@@ -173,6 +198,7 @@ public:
                                               LiveVariables *LV) const;
 
   virtual const ARMBaseRegisterInfo &getRegisterInfo() const =0;
+  const ARMSubtarget &getSubtarget() const { return Subtarget; }
 
   // Branch analysis.
   virtual bool AnalyzeBranch(MachineBasicBlock &MBB, MachineBasicBlock *&TBB,
@@ -210,6 +236,8 @@ public:
   virtual bool DefinesPredicate(MachineInstr *MI,
                                 std::vector<MachineOperand> &Pred) const;
 
+  virtual bool isPredicable(MachineInstr *MI) const;
+
   /// GetInstSize - Returns the size of the specified MachineInstr.
   ///
   virtual unsigned GetInstSizeInBytes(const MachineInstr* MI) const;
@@ -251,9 +279,19 @@ public:
 
   virtual MachineInstr* foldMemoryOperandImpl(MachineFunction &MF,
                                               MachineInstr* MI,
-                                              const SmallVectorImpl<unsigned> &Ops,
+                                           const SmallVectorImpl<unsigned> &Ops,
                                               MachineInstr* LoadMI) const;
 
+  virtual void reMaterialize(MachineBasicBlock &MBB,
+                             MachineBasicBlock::iterator MI,
+                             unsigned DestReg, unsigned SubIdx,
+                             const MachineInstr *Orig,
+                             const TargetRegisterInfo *TRI) const;
+
+  virtual bool isIdentical(const MachineInstr *MI, const MachineInstr *Other,
+                           const MachineRegisterInfo *MRI) const;
+
+  virtual bool isProfitableToDuplicateIndirectBranch() const;
 };
 
 static inline
@@ -293,6 +331,11 @@ bool isJumpTableBranchOpcode(int Opc) {
     Opc == ARM::tBR_JTr || Opc == ARM::t2BR_JT;
 }
 
+static inline
+bool isIndirectBranchOpcode(int Opc) {
+  return Opc == ARM::BRIND || Opc == ARM::tBRIND;
+}
+
 /// getInstrPredicate - If instruction is predicated, returns its predicate
 /// condition, otherwise returns AL. It also returns the condition code
 /// register by reference.
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp
index 4db4636..653328d 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp
@@ -29,6 +29,7 @@
 #include "llvm/CodeGen/MachineLocation.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
 #include "llvm/CodeGen/RegisterScavenging.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Target/TargetFrameInfo.h"
@@ -36,8 +37,13 @@
 #include "llvm/Target/TargetOptions.h"
 #include "llvm/ADT/BitVector.h"
 #include "llvm/ADT/SmallVector.h"
+#include "llvm/Support/CommandLine.h"
 using namespace llvm;
 
+static cl::opt<bool>
+ReuseFrameIndexVals("arm-reuse-frame-index-vals", cl::Hidden, cl::init(true),
+          cl::desc("Reuse repeated frame index values"));
+
 unsigned ARMBaseRegisterInfo::getRegisterNumbering(unsigned RegEnum,
                                                    bool *isSPVFP) {
   if (isSPVFP)
@@ -244,6 +250,42 @@ bool ARMBaseRegisterInfo::isReservedReg(const MachineFunction &MF,
 }
 
 const TargetRegisterClass *
+ARMBaseRegisterInfo::getMatchingSuperRegClass(const TargetRegisterClass *A,
+                                              const TargetRegisterClass *B,
+                                              unsigned SubIdx) const {
+  switch (SubIdx) {
+  default: return 0;
+  case 1:
+  case 2:
+  case 3:
+  case 4:
+    // S sub-registers.
+    if (A->getSize() == 8) {
+      if (B == &ARM::SPR_8RegClass)
+        return &ARM::DPR_8RegClass;
+      assert(B == &ARM::SPRRegClass && "Expecting SPR register class!");
+      if (A == &ARM::DPR_8RegClass)
+        return A;
+      return &ARM::DPR_VFP2RegClass;
+    }
+
+    assert(A->getSize() == 16 && "Expecting a Q register class!");
+    if (B == &ARM::SPR_8RegClass)
+      return &ARM::QPR_8RegClass;
+    return &ARM::QPR_VFP2RegClass;
+  case 5:
+  case 6:
+    // D sub-registers.
+    if (B == &ARM::DPR_VFP2RegClass)
+      return &ARM::QPR_VFP2RegClass;
+    if (B == &ARM::DPR_8RegClass)
+      return &ARM::QPR_8RegClass;
+    return A;
+  }
+  return 0;
+}
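
In this mapping, sub-register indices 1-4 denote the S sub-registers of a D or Q register, and 5-6 the D sub-registers of a Q register; the function returns the largest subclass of A whose registers keep their Idx-th sub-register inside B. A compressed standalone sketch of the same decision tree (enum values invented, sizes in bytes):

    enum Cls { SPR, SPR_8, DPR, DPR_8, DPR_VFP2, QPR, QPR_8, QPR_VFP2, NoCls };

    // sizeOfA is the byte size of a register in class A (8 = D, 16 = Q).
    Cls matchingSuperRegClass(Cls A, Cls B, unsigned Idx, unsigned sizeOfA) {
      if (Idx >= 1 && Idx <= 4) {              // S sub-registers
        if (sizeOfA == 8) {                    // A is a D class
          if (B == SPR_8) return DPR_8;
          return A == DPR_8 ? DPR_8 : DPR_VFP2;  // B is SPR
        }
        return B == SPR_8 ? QPR_8 : QPR_VFP2;  // A is a Q class
      }
      if (Idx == 5 || Idx == 6) {              // D sub-registers
        if (B == DPR_VFP2) return QPR_VFP2;
        if (B == DPR_8)    return QPR_8;
        return A;
      }
      return NoCls;
    }
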
+
+const TargetRegisterClass *
 ARMBaseRegisterInfo::getPointerRegClass(unsigned Kind) const {
   return ARM::GPRRegisterClass;
 }
@@ -429,6 +471,21 @@ ARMBaseRegisterInfo::UpdateRegAllocHint(unsigned Reg, unsigned NewReg,
   }
 }
 
+static unsigned calculateMaxStackAlignment(const MachineFrameInfo *FFI) {
+  unsigned MaxAlign = 0;
+
+  for (int i = FFI->getObjectIndexBegin(),
+         e = FFI->getObjectIndexEnd(); i != e; ++i) {
+    if (FFI->isDeadObjectIndex(i))
+      continue;
+
+    unsigned Align = FFI->getObjectAlignment(i);
+    MaxAlign = std::max(MaxAlign, Align);
+  }
+
+  return MaxAlign;
+}
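
calculateMaxStackAlignment is a plain maximum over the live stack objects; dead slots contribute nothing. A self-contained analogue:

    #include <algorithm>
    #include <vector>

    struct Slot { unsigned align; bool dead; };

    // Take the maximum alignment over all live stack objects.
    unsigned maxStackAlign(const std::vector<Slot> &frame) {
      unsigned maxAlign = 0;
      for (const Slot &s : frame)
        if (!s.dead)
          maxAlign = std::max(maxAlign, s.align);
      return maxAlign;
    }
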
+
 /// hasFP - Return true if the specified function should have a dedicated frame
 /// pointer register.  This is true if the function has variable sized allocas
 /// or if frame pointer elimination is disabled.
@@ -436,15 +493,28 @@ ARMBaseRegisterInfo::UpdateRegAllocHint(unsigned Reg, unsigned NewReg,
 bool ARMBaseRegisterInfo::hasFP(const MachineFunction &MF) const {
   const MachineFrameInfo *MFI = MF.getFrameInfo();
   return (NoFramePointerElim ||
+          needsStackRealignment(MF) ||
           MFI->hasVarSizedObjects() ||
           MFI->isFrameAddressTaken());
 }
 
+bool ARMBaseRegisterInfo::
+needsStackRealignment(const MachineFunction &MF) const {
+  const MachineFrameInfo *MFI = MF.getFrameInfo();
+  const ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
+  unsigned StackAlign = MF.getTarget().getFrameInfo()->getStackAlignment();
+  return (RealignStack &&
+          !AFI->isThumb1OnlyFunction() &&
+          (MFI->getMaxAlignment() > StackAlign) &&
+          !MFI->hasVarSizedObjects());
+}
+
 bool ARMBaseRegisterInfo::cannotEliminateFrame(const MachineFunction &MF) const {
   const MachineFrameInfo *MFI = MF.getFrameInfo();
   if (NoFramePointerElim && MFI->hasCalls())
     return true;
-  return MFI->hasVarSizedObjects() || MFI->isFrameAddressTaken();
+  return MFI->hasVarSizedObjects() || MFI->isFrameAddressTaken()
+    || needsStackRealignment(MF);
 }
 
 /// estimateStackSize - Estimate and return the size of the frame.
@@ -515,6 +585,16 @@ ARMBaseRegisterInfo::processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
   SmallVector<unsigned, 4> UnspilledCS2GPRs;
   ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
 
+  MachineFrameInfo *MFI = MF.getFrameInfo();
+
+  // Calculate and set max stack object alignment early, so we can decide
+  // whether we will need stack realignment (and thus FP).
+  if (RealignStack) {
+    unsigned MaxAlign = std::max(MFI->getMaxAlignment(),
+                                 calculateMaxStackAlignment(MFI));
+    MFI->setMaxAlignment(MaxAlign);
+  }
+
   // Don't spill FP if the frame can be eliminated. This is determined
   // by scanning the callee-save registers to see if any is used.
   const unsigned *CSRegs = getCalleeSavedRegs();
@@ -660,8 +740,7 @@ ARMBaseRegisterInfo::processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
       // off the frame pointer, the effective stack size is 4 bytes larger
       // since the FP points to the stack slot of the previous FP.
       if (estimateStackSize(MF, MFI) + (hasFP(MF) ? 4 : 0)
-          >= estimateRSStackSizeLimit(MF)
-          || AFI->isThumb1OnlyFunction()) {
+          >= estimateRSStackSizeLimit(MF)) {
         // If any non-reserved CS register isn't spilled, just spill one or two
         // extra. That should take care of it!
         unsigned NumExtras = TargetAlign / 4;
@@ -690,11 +769,13 @@ ARMBaseRegisterInfo::processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
             MF.getRegInfo().setPhysRegUsed(Extras[i]);
             AFI->setCSRegisterIsSpilled(Extras[i]);
           }
-        } else {
+        } else if (!AFI->isThumb1OnlyFunction()) {
+          // Note: Thumb1 functions spill to R12, not the stack.
           // Reserve a slot closest to SP or frame pointer.
           const TargetRegisterClass *RC = ARM::GPRRegisterClass;
           RS->setScavengingFrameIndex(MFI->CreateStackObject(RC->getSize(),
-                                                           RC->getAlignment()));
+                                                             RC->getAlignment(),
+                                                             false));
         }
       }
     }
@@ -711,12 +792,61 @@ unsigned ARMBaseRegisterInfo::getRARegister() const {
   return ARM::LR;
 }
 
-unsigned ARMBaseRegisterInfo::getFrameRegister(MachineFunction &MF) const {
+unsigned 
+ARMBaseRegisterInfo::getFrameRegister(const MachineFunction &MF) const {
   if (STI.isTargetDarwin() || hasFP(MF))
     return FramePtr;
   return ARM::SP;
 }
 
+int
+ARMBaseRegisterInfo::getFrameIndexReference(MachineFunction &MF, int FI,
+                                            unsigned &FrameReg) const {
+  const MachineFrameInfo *MFI = MF.getFrameInfo();
+  ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
+  int Offset = MFI->getObjectOffset(FI) + MFI->getStackSize();
+  bool isFixed = MFI->isFixedObjectIndex(FI);
+
+  FrameReg = ARM::SP;
+  if (AFI->isGPRCalleeSavedArea1Frame(FI))
+    Offset -= AFI->getGPRCalleeSavedArea1Offset();
+  else if (AFI->isGPRCalleeSavedArea2Frame(FI))
+    Offset -= AFI->getGPRCalleeSavedArea2Offset();
+  else if (AFI->isDPRCalleeSavedAreaFrame(FI))
+    Offset -= AFI->getDPRCalleeSavedAreaOffset();
+  else if (needsStackRealignment(MF)) {
+    // When dynamically realigning the stack, use the frame pointer for
+    // parameters, and the stack pointer for locals.
+    assert(hasFP(MF) && "dynamic stack realignment without a FP!");
+    if (isFixed) {
+      FrameReg = getFrameRegister(MF);
+      Offset -= AFI->getFramePtrSpillOffset();
+    }
+  } else if (hasFP(MF) && AFI->hasStackFrame()) {
+    if (isFixed || MFI->hasVarSizedObjects()) {
+      // Use frame pointer to reference fixed objects unless this is a
+      // frameless function.
+      FrameReg = getFrameRegister(MF);
+      Offset -= AFI->getFramePtrSpillOffset();
+    } else if (AFI->isThumb2Function()) {
+      // In Thumb2 mode, the negative offset is very limited.
+      int FPOffset = Offset - AFI->getFramePtrSpillOffset();
+      if (FPOffset >= -255 && FPOffset < 0) {
+        FrameReg = getFrameRegister(MF);
+        Offset = FPOffset;
+      }
+    }
+  }
+  return Offset;
+}
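
The decision logic above is subtle enough to restate: under dynamic realignment, SP loses its fixed relationship to the incoming arguments, so fixed (argument) objects must be addressed off FP while locals stay SP-relative; in Thumb2, the FP-relative path is otherwise only taken when the offset fits the narrow [-255, 0) negative range. A simplified standalone sketch (flag and field names invented; the variable-sized-objects case is omitted):

    struct FrameRef { bool useFP; int offset; };

    FrameRef resolve(int objOffset, int stackSize, int fpSpillOffset,
                     bool isFixed, bool realigned, bool hasFP, bool thumb2) {
      int off = objOffset + stackSize;        // SP-relative by default
      if (realigned)                          // FP for args, SP for locals
        return isFixed ? FrameRef{true, off - fpSpillOffset}
                       : FrameRef{false, off};
      if (hasFP && isFixed)
        return {true, off - fpSpillOffset};
      if (hasFP && thumb2) {
        int fpOff = off - fpSpillOffset;
        if (fpOff >= -255 && fpOff < 0)       // fits Thumb2's small range
          return {true, fpOff};
      }
      return {false, off};
    }
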
+
+int
+ARMBaseRegisterInfo::getFrameIndexOffset(MachineFunction &MF, int FI) const {
+  unsigned FrameReg;
+  return getFrameIndexReference(MF, FI, FrameReg);
+}
+
 unsigned ARMBaseRegisterInfo::getEHExceptionRegister() const {
   llvm_unreachable("What is the exception register");
   return 0;
@@ -740,8 +870,7 @@ unsigned ARMBaseRegisterInfo::getRegisterPairEven(unsigned Reg,
   case ARM::R1:
     return ARM::R0;
   case ARM::R3:
-    // FIXME!
-    return STI.isThumb1Only() ? 0 : ARM::R2;
+    return ARM::R2;
   case ARM::R5:
     return ARM::R4;
   case ARM::R7:
@@ -830,8 +959,7 @@ unsigned ARMBaseRegisterInfo::getRegisterPairOdd(unsigned Reg,
   case ARM::R0:
     return ARM::R1;
   case ARM::R2:
-    // FIXME!
-    return STI.isThumb1Only() ? 0 : ARM::R3;
+    return ARM::R3;
   case ARM::R4:
     return ARM::R5;
   case ARM::R6:
@@ -937,6 +1065,11 @@ requiresRegisterScavenging(const MachineFunction &MF) const {
   return true;
 }
 
+bool ARMBaseRegisterInfo::
+requiresFrameIndexScavenging(const MachineFunction &MF) const {
+  return true;
+}
+
 // hasReservedCallFrame - Under normal circumstances, when a frame pointer is
 // not required, we reserve argument space for call sites in the function
 // immediately on entry to the current function. This eliminates the need for
@@ -1012,20 +1145,10 @@ eliminateCallFramePseudoInstr(MachineFunction &MF, MachineBasicBlock &MBB,
   MBB.erase(I);
 }
 
-/// findScratchRegister - Find a 'free' ARM register. If register scavenger
-/// is not being used, R12 is available. Otherwise, try for a call-clobbered
-/// register first and then a spilled callee-saved register if that fails.
-static
-unsigned findScratchRegister(RegScavenger *RS, const TargetRegisterClass *RC,
-                             ARMFunctionInfo *AFI) {
-  unsigned Reg = RS ? RS->FindUnusedReg(RC) : (unsigned) ARM::R12;
-  assert(!AFI->isThumb1OnlyFunction());
-  return Reg;
-}
-
-void
+unsigned
 ARMBaseRegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
-                                         int SPAdj, RegScavenger *RS) const {
+                                         int SPAdj, int *Value,
+                                         RegScavenger *RS) const {
   unsigned i = 0;
   MachineInstr &MI = *II;
   MachineBasicBlock &MBB = *MI.getParent();
@@ -1040,25 +1163,15 @@ ARMBaseRegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
     assert(i < MI.getNumOperands() && "Instr doesn't have FrameIndex operand!");
   }
 
-  unsigned FrameReg = ARM::SP;
   int FrameIndex = MI.getOperand(i).getIndex();
   int Offset = MFI->getObjectOffset(FrameIndex) + MFI->getStackSize() + SPAdj;
+  unsigned FrameReg;
 
-  if (AFI->isGPRCalleeSavedArea1Frame(FrameIndex))
-    Offset -= AFI->getGPRCalleeSavedArea1Offset();
-  else if (AFI->isGPRCalleeSavedArea2Frame(FrameIndex))
-    Offset -= AFI->getGPRCalleeSavedArea2Offset();
-  else if (AFI->isDPRCalleeSavedAreaFrame(FrameIndex))
-    Offset -= AFI->getDPRCalleeSavedAreaOffset();
-  else if (hasFP(MF) && AFI->hasStackFrame()) {
-    assert(SPAdj == 0 && "Unexpected stack offset!");
-    // Use frame pointer to reference fixed objects unless this is a
-    // frameless function,
-    FrameReg = getFrameRegister(MF);
-    Offset -= AFI->getFramePtrSpillOffset();
-  }
+  Offset = getFrameIndexReference(MF, FrameIndex, FrameReg);
+  if (FrameReg != ARM::SP)
+    SPAdj = 0;
 
-  // modify MI as necessary to handle as much of 'Offset' as possible
+  // Modify MI as necessary to handle as much of 'Offset' as possible
   bool Done = false;
   if (!AFI->isThumbFunction())
     Done = rewriteARMFrameIndex(MI, i, FrameReg, Offset, TII);
@@ -1067,31 +1180,27 @@ ARMBaseRegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
     Done = rewriteT2FrameIndex(MI, i, FrameReg, Offset, TII);
   }
   if (Done)
-    return;
+    return 0;
 
   // If we get here, the immediate doesn't fit into the instruction.  We folded
   // as much as possible above, handle the rest, providing a register that is
   // SP+LargeImm.
   assert((Offset ||
-          (MI.getDesc().TSFlags & ARMII::AddrModeMask) == ARMII::AddrMode4) &&
+          (MI.getDesc().TSFlags & ARMII::AddrModeMask) == ARMII::AddrMode4 ||
+          (MI.getDesc().TSFlags & ARMII::AddrModeMask) == ARMII::AddrMode6) &&
          "This code isn't needed if offset already handled!");
 
-  // Insert a set of r12 with the full address: r12 = sp + offset
-  // If the offset we have is too large to fit into the instruction, we need
-  // to form it with a series of ADDri's.  Do this by taking 8-bit chunks
-  // out of 'Offset'.
-  unsigned ScratchReg = findScratchRegister(RS, ARM::GPRRegisterClass, AFI);
-  if (ScratchReg == 0)
-    // No register is "free". Scavenge a register.
-    ScratchReg = RS->scavengeRegister(ARM::GPRRegisterClass, II, SPAdj);
+  unsigned ScratchReg = 0;
   int PIdx = MI.findFirstPredOperandIdx();
   ARMCC::CondCodes Pred = (PIdx == -1)
     ? ARMCC::AL : (ARMCC::CondCodes)MI.getOperand(PIdx).getImm();
   unsigned PredReg = (PIdx == -1) ? 0 : MI.getOperand(PIdx+1).getReg();
   if (Offset == 0)
-    // Must be addrmode4.
+    // Must be addrmode4/6.
     MI.getOperand(i).ChangeToRegister(FrameReg, false, false, false);
   else {
+    ScratchReg = MF.getRegInfo().createVirtualRegister(ARM::GPRRegisterClass);
+    if (Value) *Value = Offset;
     if (!AFI->isThumbFunction())
       emitARMRegPlusImmediate(MBB, II, MI.getDebugLoc(), ScratchReg, FrameReg,
                               Offset, Pred, PredReg, TII);
@@ -1101,10 +1210,13 @@ ARMBaseRegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
                              Offset, Pred, PredReg, TII);
     }
     MI.getOperand(i).ChangeToRegister(ScratchReg, false, false, true);
+    if (!ReuseFrameIndexVals)
+      ScratchReg = 0;
   }
+  return ScratchReg;
 }
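
eliminateFrameIndex no longer scavenges a physical scratch register on the spot: it materializes frame-register-plus-offset into a fresh virtual register and returns it (reporting the folded offset through *Value), so that with -arm-reuse-frame-index-vals the caller can reuse one materialized address for repeated frame indices. A hypothetical caller-side cache illustrating that reuse (names invented):

    #include <map>
    #include <utility>

    // Map a (frame reg, offset) pair to the virtual register that already
    // holds it, so identical frame addresses are materialized only once.
    struct FrameAddrCache {
      std::map<std::pair<unsigned, int>, unsigned> cache;
      unsigned lookup(unsigned frameReg, int offset) const {
        auto it = cache.find({frameReg, offset});
        return it == cache.end() ? 0 : it->second;  // 0 == no register
      }
      void remember(unsigned frameReg, int offset, unsigned vreg) {
        if (vreg) cache[{frameReg, offset}] = vreg;
      }
    };
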
 
-/// Move iterator pass the next bunch of callee save load / store ops for
+/// Move iterator past the next bunch of callee save load / store ops for
 /// the particular spill area (1: integer area 1, 2: integer area 2,
 /// 3: fp area, 0: don't care).
 static void movePastCSLoadStoreOps(MachineBasicBlock &MBB,
@@ -1238,10 +1350,10 @@ emitPrologue(MachineFunction &MF) const {
   AFI->setGPRCalleeSavedArea2Offset(GPRCS2Offset);
   AFI->setDPRCalleeSavedAreaOffset(DPRCSOffset);
 
+  movePastCSLoadStoreOps(MBB, MBBI, ARM::VSTRD, 0, 3, STI);
   NumBytes = DPRCSOffset;
   if (NumBytes) {
-    // Insert it after all the callee-save spills.
-    movePastCSLoadStoreOps(MBB, MBBI, ARM::FSTD, 0, 3, STI);
+    // Adjust SP after all the callee-save spills.
     emitSPUpdate(isARM, MBB, MBBI, dl, TII, -NumBytes);
   }
 
@@ -1253,6 +1365,18 @@ emitPrologue(MachineFunction &MF) const {
   AFI->setGPRCalleeSavedArea1Size(GPRCS1Size);
   AFI->setGPRCalleeSavedArea2Size(GPRCS2Size);
   AFI->setDPRCalleeSavedAreaSize(DPRCSSize);
+
+  // If we need dynamic stack realignment, do it here.
+  if (needsStackRealignment(MF)) {
+    unsigned Opc;
+    unsigned MaxAlign = MFI->getMaxAlignment();
+    assert(!AFI->isThumb1OnlyFunction());
+    Opc = AFI->isThumbFunction() ? ARM::t2BICri : ARM::BICri;
+
+    AddDefaultCC(AddDefaultPred(BuildMI(MBB, MBBI, dl, TII.get(Opc), ARM::SP)
+                                  .addReg(ARM::SP, RegState::Kill)
+                                  .addImm(MaxAlign-1)));
+  }
 }
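
The emitted BIC computes sp &= ~(MaxAlign - 1), i.e. it rounds SP down to the requested alignment; this is only correct when MaxAlign is a power of two, which stack object alignments are in practice. In plain C++:

    #include <cassert>
    #include <cstdint>

    // What the emitted BIC computes: round an address down to a
    // power-of-two alignment by clearing the low bits.
    uint32_t alignDown(uint32_t sp, uint32_t align) {
      assert(align && (align & (align - 1)) == 0 && "alignment must be 2^k");
      return sp & ~(align - 1);     // BIC sp, sp, #(align-1)
    }
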
 
 static bool isCalleeSavedRegister(unsigned Reg, const unsigned *CSRegs) {
@@ -1265,7 +1389,7 @@ static bool isCalleeSavedRegister(unsigned Reg, const unsigned *CSRegs) {
 static bool isCSRestore(MachineInstr *MI,
                         const ARMBaseInstrInfo &TII,
                         const unsigned *CSRegs) {
-  return ((MI->getOpcode() == (int)ARM::FLDD ||
+  return ((MI->getOpcode() == (int)ARM::VLDRD ||
            MI->getOpcode() == (int)ARM::LDR ||
            MI->getOpcode() == (int)ARM::t2LDRi12) &&
           MI->getOperand(1).isFI() &&
@@ -1291,7 +1415,7 @@ emitEpilogue(MachineFunction &MF, MachineBasicBlock &MBB) const {
     if (NumBytes != 0)
       emitSPUpdate(isARM, MBB, MBBI, dl, TII, NumBytes);
   } else {
-    // Unwind MBBI to point to first LDR / FLDD.
+    // Unwind MBBI to point to first LDR / VLDRD.
     const unsigned *CSRegs = getCalleeSavedRegs();
     if (MBBI != MBB.begin()) {
       do
@@ -1339,7 +1463,7 @@ emitEpilogue(MachineFunction &MF, MachineBasicBlock &MBB) const {
       emitSPUpdate(isARM, MBB, MBBI, dl, TII, NumBytes);
 
     // Move SP to start of integer callee save spill area 2.
-    movePastCSLoadStoreOps(MBB, MBBI, ARM::FLDD, 0, 3, STI);
+    movePastCSLoadStoreOps(MBB, MBBI, ARM::VLDRD, 0, 3, STI);
     emitSPUpdate(isARM, MBB, MBBI, dl, TII, AFI->getDPRCalleeSavedAreaSize());
 
     // Move SP to start of integer callee save spill area 1.
@@ -1355,4 +1479,48 @@ emitEpilogue(MachineFunction &MF, MachineBasicBlock &MBB) const {
     emitSPUpdate(isARM, MBB, MBBI, dl, TII, VARegSaveSize);
 }
 
+namespace {
+  struct MaximalStackAlignmentCalculator : public MachineFunctionPass {
+    static char ID;
+    MaximalStackAlignmentCalculator() : MachineFunctionPass(&ID) {}
+
+    virtual bool runOnMachineFunction(MachineFunction &MF) {
+      MachineFrameInfo *FFI = MF.getFrameInfo();
+      MachineRegisterInfo &RI = MF.getRegInfo();
+
+      // Calculate max stack alignment of all already allocated stack objects.
+      unsigned MaxAlign = calculateMaxStackAlignment(FFI);
+
+      // Be over-conservative: scan over all vreg defs and check whether any
+      // vector registers are used. If so, a vector register may be spilled,
+      // so the stack needs to be properly aligned.
+      for (unsigned RegNum = TargetRegisterInfo::FirstVirtualRegister;
+           RegNum < RI.getLastVirtReg(); ++RegNum)
+        MaxAlign = std::max(MaxAlign, RI.getRegClass(RegNum)->getAlignment());
+
+      if (FFI->getMaxAlignment() == MaxAlign)
+        return false;
+
+      FFI->setMaxAlignment(MaxAlign);
+      return true;
+    }
+
+    virtual const char *getPassName() const {
+      return "ARM Stack Required Alignment Auto-Detector";
+    }
+
+    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
+      AU.setPreservesCFG();
+      MachineFunctionPass::getAnalysisUsage(AU);
+    }
+  };
+
+  char MaximalStackAlignmentCalculator::ID = 0;
+}
+
+FunctionPass*
+llvm::createARMMaxStackAlignmentCalculatorPass() {
+  return new MaximalStackAlignmentCalculator();
+}
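
The pass is deliberately over-conservative: besides the maximum over allocated stack objects, it folds in the spill alignment of every register class used by a virtual register, so a later spill of, say, a Q register cannot demand more alignment than the frame provides. A standalone sketch of that estimate (inputs invented):

    #include <algorithm>
    #include <vector>

    // Conservative pre-RA estimate: if any virtual register belongs to a
    // class with 16-byte spill alignment, assume the frame may need it.
    unsigned estimateFrameAlign(unsigned curMax,
                                const std::vector<unsigned> &vregClassAligns) {
      for (unsigned a : vregClassAligns)
        curMax = std::max(curMax, a);
      return curMax;
    }
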
+
 #include "ARMGenRegisterInfo.inc"
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.h b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.h
index 3eccab0..2788d07 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.h
@@ -74,6 +74,13 @@ public:
 
   BitVector getReservedRegs(const MachineFunction &MF) const;
 
+  /// getMatchingSuperRegClass - Return a subclass of the specified register
+  /// class A so that each register in it has a sub-register of the
+  /// specified sub-register index which is in the specified register class B.
+  virtual const TargetRegisterClass *
+  getMatchingSuperRegClass(const TargetRegisterClass *A,
+                           const TargetRegisterClass *B, unsigned Idx) const;
+
   const TargetRegisterClass *getPointerRegClass(unsigned Kind = 0) const;
 
   std::pair<TargetRegisterClass::iterator,TargetRegisterClass::iterator>
@@ -89,6 +96,8 @@ public:
 
   bool hasFP(const MachineFunction &MF) const;
 
+  bool needsStackRealignment(const MachineFunction &MF) const;
+
   bool cannotEliminateFrame(const MachineFunction &MF) const;
 
   void processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
@@ -96,7 +105,10 @@ public:
 
   // Debug information queries.
   unsigned getRARegister() const;
-  unsigned getFrameRegister(MachineFunction &MF) const;
+  unsigned getFrameRegister(const MachineFunction &MF) const;
+  int getFrameIndexReference(MachineFunction &MF, int FI,
+                             unsigned &FrameReg) const;
+  int getFrameIndexOffset(MachineFunction &MF, int FI) const;
 
   // Exception handling queries.
   unsigned getEHExceptionRegister() const;
@@ -122,14 +134,17 @@ public:
 
   virtual bool requiresRegisterScavenging(const MachineFunction &MF) const;
 
+  virtual bool requiresFrameIndexScavenging(const MachineFunction &MF) const;
+
   virtual bool hasReservedCallFrame(MachineFunction &MF) const;
 
   virtual void eliminateCallFramePseudoInstr(MachineFunction &MF,
                                              MachineBasicBlock &MBB,
                                              MachineBasicBlock::iterator I) const;
 
-  virtual void eliminateFrameIndex(MachineBasicBlock::iterator II,
-                                   int SPAdj, RegScavenger *RS = NULL) const;
+  virtual unsigned eliminateFrameIndex(MachineBasicBlock::iterator II,
+                                       int SPAdj, int *Value = NULL,
+                                       RegScavenger *RS = NULL) const;
 
   virtual void emitPrologue(MachineFunction &MF) const;
   virtual void emitEpilogue(MachineFunction &MF, MachineBasicBlock &MBB) const;
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMCallingConv.td b/libclamav/c++/llvm/lib/Target/ARM/ARMCallingConv.td
index 7161639..8fdb07f 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMCallingConv.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMCallingConv.td
@@ -68,6 +68,7 @@ def CC_ARM_AAPCS_Common : CallingConv<[
                        "ArgFlags.getOrigAlign() != 8",
                        CCAssignToReg<[R0, R1, R2, R3]>>>,
 
+  CCIfType<[i32], CCIfAlign<"8", CCAssignToStack<4, 8>>>,
   CCIfType<[i32, f32], CCAssignToStack<4, 4>>,
   CCIfType<[f64], CCAssignToStack<8, 8>>,
   CCIfType<[v2f64], CCAssignToStack<16, 8>>
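
The added CCIfAlign rule gives an i32 whose original argument type requires 8-byte alignment (e.g. one half of a split i64 under AAPCS) a 4-byte stack slot at an 8-byte-aligned offset. The allocator step behind CCAssignToStack<size, align>, reduced to its arithmetic (a sketch, not the TableGen-generated code):

    // Reserve a 'size'-byte slot after rounding the running stack offset up
    // to 'align' (a power of two); return the slot's offset.
    unsigned assignToStack(unsigned &offset, unsigned size, unsigned align) {
      offset = (offset + align - 1) & ~(align - 1);  // round up to 'align'
      unsigned slot = offset;
      offset += size;
      return slot;
    }
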
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMCodeEmitter.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMCodeEmitter.cpp
index 6419b69..17e7d44 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMCodeEmitter.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMCodeEmitter.cpp
@@ -34,7 +34,6 @@
 #include "llvm/CodeGen/MachineModuleInfo.h"
 #include "llvm/CodeGen/Passes.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
@@ -56,8 +55,7 @@ namespace {
   };
 
   template<class CodeEmitter>
-  class VISIBILITY_HIDDEN Emitter : public MachineFunctionPass,
-                                    public ARMCodeEmitter {
+  class Emitter : public MachineFunctionPass, public ARMCodeEmitter {
     ARMJITInfo                *JTI;
     const ARMInstrInfo        *II;
     const TargetData          *TD;
@@ -170,7 +168,8 @@ namespace {
     /// Routines that handle operands which add machine relocations which are
     /// fixed up by the relocation stage.
     void emitGlobalAddress(GlobalValue *GV, unsigned Reloc,
-                           bool NeedStub,  bool Indirect, intptr_t ACPV = 0);
+                           bool MayNeedFarStub,  bool Indirect,
+                           intptr_t ACPV = 0);
     void emitExternalSymbolAddress(const char *ES, unsigned Reloc);
     void emitConstPoolAddress(unsigned CPI, unsigned Reloc);
     void emitJumpTableAddress(unsigned JTIndex, unsigned Reloc);
@@ -279,13 +278,13 @@ unsigned Emitter<CodeEmitter>::getMachineOpValue(const MachineInstr &MI,
 ///
 template<class CodeEmitter>
 void Emitter<CodeEmitter>::emitGlobalAddress(GlobalValue *GV, unsigned Reloc,
-                                             bool NeedStub, bool Indirect,
+                                             bool MayNeedFarStub, bool Indirect,
                                              intptr_t ACPV) {
   MachineRelocation MR = Indirect
     ? MachineRelocation::getIndirectSymbol(MCE.getCurrentPCOffset(), Reloc,
-                                           GV, ACPV, NeedStub)
+                                           GV, ACPV, MayNeedFarStub)
     : MachineRelocation::getGV(MCE.getCurrentPCOffset(), Reloc,
-                               GV, ACPV, NeedStub);
+                               GV, ACPV, MayNeedFarStub);
   MCE.addRelocation(MR);
 }
 
@@ -346,7 +345,7 @@ template<class CodeEmitter>
 void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI) {
   DEBUG(errs() << "JIT: " << (void*)MCE.getCurrentPCValue() << ":\t" << MI);
 
-  MCE.processDebugLoc(MI.getDebugLoc());
+  MCE.processDebugLoc(MI.getDebugLoc(), true);
 
   NumEmitted++;  // Keep track of the # of mi's emitted
   switch (MI.getDesc().TSFlags & ARMII::FormMask) {
@@ -409,6 +408,7 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI) {
     emitMiscInstruction(MI);
     break;
   }
+  MCE.processDebugLoc(MI.getDebugLoc(), false);
 }
 
 template<class CodeEmitter>
@@ -429,6 +429,7 @@ void Emitter<CodeEmitter>::emitConstPoolInstruction(const MachineInstr &MI) {
     DEBUG(errs() << "  ** ARM constant pool #" << CPI << " @ "
           << (void*)MCE.getCurrentPCValue() << " " << *ACPV << '\n');
 
+    assert(ACPV->isGlobalValue() && "unsupported constant pool value");
     GlobalValue *GV = ACPV->getGV();
     if (GV) {
       Reloc::Model RelocM = TM.getRelocationModel();
@@ -460,9 +461,9 @@ void Emitter<CodeEmitter>::emitConstPoolInstruction(const MachineInstr &MI) {
       uint32_t Val = *(uint32_t*)CI->getValue().getRawData();
       emitWordLE(Val);
     } else if (const ConstantFP *CFP = dyn_cast<ConstantFP>(CV)) {
-      if (CFP->getType() == Type::getFloatTy(CFP->getContext()))
+      if (CFP->getType()->isFloatTy())
         emitWordLE(CFP->getValueAPF().bitcastToAPInt().getZExtValue());
-      else if (CFP->getType() == Type::getDoubleTy(CFP->getContext()))
+      else if (CFP->getType()->isDoubleTy())
         emitDWordLE(CFP->getValueAPF().bitcastToAPInt().getZExtValue());
       else {
         llvm_unreachable("Unable to handle this constantpool entry!");
@@ -612,7 +613,6 @@ void Emitter<CodeEmitter>::emitPseudoInstruction(const MachineInstr &MI) {
     break;
   case TargetInstrInfo::IMPLICIT_DEF:
   case TargetInstrInfo::KILL:
-  case ARM::DWARF_LOC:
     // Do nothing.
     break;
   case ARM::CONSTPOOL_ENTRY:
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMConstantIslandPass.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMConstantIslandPass.cpp
index 43a823d..e59a315 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMConstantIslandPass.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMConstantIslandPass.cpp
@@ -24,13 +24,15 @@
 #include "llvm/CodeGen/MachineJumpTableInfo.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Target/TargetMachine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
+#include "llvm/ADT/SmallSet.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/ADT/Statistic.h"
+#include "llvm/Support/CommandLine.h"
+#include <algorithm>
 using namespace llvm;
 
 STATISTIC(NumCPEs,       "Number of constpool entries");
@@ -40,6 +42,14 @@ STATISTIC(NumUBrFixed,   "Number of uncond branches fixed");
 STATISTIC(NumTBs,        "Number of table branches generated");
 STATISTIC(NumT2CPShrunk, "Number of Thumb2 constantpool instructions shrunk");
 STATISTIC(NumT2BrShrunk, "Number of Thumb2 immediate branches shrunk");
+STATISTIC(NumCBZ,        "Number of CBZ / CBNZ formed");
+STATISTIC(NumJTMoved,    "Number of jump table destination blocks moved");
+STATISTIC(NumJTInserted, "Number of jump table intermediate blocks inserted");
+
+
+static cl::opt<bool>
+AdjustJumpTableBlocks("arm-adjust-jump-tables", cl::Hidden, cl::init(true),
+          cl::desc("Adjust basic block layout to better use TB[BH]"));
 
 namespace {
   /// ARMConstantIslands - Due to limited PC-relative displacements, ARM
@@ -53,7 +63,7 @@ namespace {
   ///   Water   - Potential places where an island could be formed.
   ///   CPE     - A constant pool entry that has been placed somewhere, which
   ///             tracks a list of users.
-  class VISIBILITY_HIDDEN ARMConstantIslands : public MachineFunctionPass {
+  class ARMConstantIslands : public MachineFunctionPass {
     /// BBSizes - The size of each MachineBasicBlock in bytes of code, indexed
     /// by MBB Number.  The two-byte pads required for Thumb alignment are
     /// counted as part of the following block (i.e., the offset and size for
@@ -70,18 +80,36 @@ namespace {
     /// to a return, unreachable, or unconditional branch).
     std::vector<MachineBasicBlock*> WaterList;
 
+    /// NewWaterList - The subset of WaterList that was created since the
+    /// previous iteration by inserting unconditional branches.
+    SmallSet<MachineBasicBlock*, 4> NewWaterList;
+
+    typedef std::vector<MachineBasicBlock*>::iterator water_iterator;
+
     /// CPUser - One user of a constant pool, keeping the machine instruction
     /// pointer, the constant pool being referenced, and the max displacement
-    /// allowed from the instruction to the CP.
+    /// allowed from the instruction to the CP.  The HighWaterMark records the
+    /// highest basic block where a new CPEntry can be placed.  To ensure this
+    /// pass terminates, the CP entries are initially placed at the end of the
+    /// function and then move monotonically to lower addresses.  The
+    /// exception to this rule is when the current CP entry for a particular
+    /// CPUser is out of range, but there is another CP entry for the same
+    /// constant value in range.  We want to use the existing in-range CP
+    /// entry, but if it later moves out of range, the search for new water
+    /// should resume where it left off.  The HighWaterMark is used to record
+    /// that point.
     struct CPUser {
       MachineInstr *MI;
       MachineInstr *CPEMI;
+      MachineBasicBlock *HighWaterMark;
       unsigned MaxDisp;
       bool NegOk;
       bool IsSoImm;
       CPUser(MachineInstr *mi, MachineInstr *cpemi, unsigned maxdisp,
              bool neg, bool soimm)
-        : MI(mi), CPEMI(cpemi), MaxDisp(maxdisp), NegOk(neg), IsSoImm(soimm) {}
+        : MI(mi), CPEMI(cpemi), MaxDisp(maxdisp), NegOk(neg), IsSoImm(soimm) {
+        HighWaterMark = CPEMI->getParent();
+      }
     };
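
The HighWaterMark is what guarantees termination: a constant-pool entry may only move to water in a block numbered below its current mark (or into water created this iteration), so each entry's position descends monotonically. A toy version of the backward search performed by LookForWater below (names invented):

    #include <vector>

    struct Water { int blockNum; bool inRange; };

    // Walk the water list from the end and take the first entry that is in
    // range and strictly below the user's high-water mark, so the chosen
    // location can only move toward lower block numbers over time.
    int findWater(const std::vector<Water> &waterList, int highWaterMark) {
      for (int i = (int)waterList.size() - 1; i >= 0; --i)
        if (waterList[i].inRange && waterList[i].blockNum < highWaterMark)
          return i;
      return -1;  // no suitable water; the caller must create some
    }
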
 
     /// CPUsers - Keep track of all of the machine instructions that use various
@@ -134,6 +162,9 @@ namespace {
     /// the branch fix up pass.
     bool HasFarJump;
 
+    /// HasInlineAsm - True if the function contains inline assembly.
+    bool HasInlineAsm;
+
     const TargetInstrInfo *TII;
     const ARMSubtarget *STI;
     ARMFunctionInfo *AFI;
@@ -154,6 +185,7 @@ namespace {
     void DoInitialPlacement(MachineFunction &MF,
                             std::vector<MachineInstr*> &CPEMIs);
     CPEntry *findConstPoolEntry(unsigned CPI, const MachineInstr *CPEMI);
+    void JumpTableFunctionScan(MachineFunction &MF);
     void InitialFunctionScan(MachineFunction &MF,
                              const std::vector<MachineInstr*> &CPEMIs);
     MachineBasicBlock *SplitBlockBeforeInstr(MachineInstr *MI);
@@ -161,12 +193,9 @@ namespace {
     void AdjustBBOffsetsAfter(MachineBasicBlock *BB, int delta);
     bool DecrementOldEntry(unsigned CPI, MachineInstr* CPEMI);
     int LookForExistingCPEntry(CPUser& U, unsigned UserOffset);
-    bool LookForWater(CPUser&U, unsigned UserOffset,
-                      MachineBasicBlock** NewMBB);
-    MachineBasicBlock* AcceptWater(MachineBasicBlock *WaterBB,
-                        std::vector<MachineBasicBlock*>::iterator IP);
+    bool LookForWater(CPUser&U, unsigned UserOffset, water_iterator &WaterIter);
     void CreateNewWater(unsigned CPUserIndex, unsigned UserOffset,
-                      MachineBasicBlock** NewMBB);
+                        MachineBasicBlock *&NewMBB);
     bool HandleConstantPoolUser(MachineFunction &MF, unsigned CPUserIndex);
     void RemoveDeadCPEMI(MachineInstr *CPEMI);
     bool RemoveUnusedCPEntries();
@@ -184,7 +213,10 @@ namespace {
     bool UndoLRSpillRestore();
     bool OptimizeThumb2Instructions(MachineFunction &MF);
     bool OptimizeThumb2Branches(MachineFunction &MF);
+    bool ReorderThumb2JumpTables(MachineFunction &MF);
     bool OptimizeThumb2JumpTables(MachineFunction &MF);
+    MachineBasicBlock *AdjustJTTargetBlockForward(MachineBasicBlock *BB,
+                                                  MachineBasicBlock *JTBB);
 
     unsigned GetOffsetOf(MachineInstr *MI) const;
     void dumpBBs();
@@ -207,10 +239,19 @@ void ARMConstantIslands::verify(MachineFunction &MF) {
     if (!MBB->empty() &&
         MBB->begin()->getOpcode() == ARM::CONSTPOOL_ENTRY) {
       unsigned MBBId = MBB->getNumber();
-      assert((BBOffsets[MBBId]%4 == 0 && BBSizes[MBBId]%4 == 0) ||
+      assert(HasInlineAsm ||
+             (BBOffsets[MBBId]%4 == 0 && BBSizes[MBBId]%4 == 0) ||
              (BBOffsets[MBBId]%4 != 0 && BBSizes[MBBId]%4 != 0));
     }
   }
+  for (unsigned i = 0, e = CPUsers.size(); i != e; ++i) {
+    CPUser &U = CPUsers[i];
+    unsigned UserOffset = GetOffsetOf(U.MI) + (isThumb ? 4 : 8);
+    unsigned CPEOffset  = GetOffsetOf(U.CPEMI);
+    unsigned Disp = UserOffset < CPEOffset ? CPEOffset - UserOffset :
+      UserOffset - CPEOffset;
+    assert(Disp <= U.MaxDisp && "Constant pool entry out of range!");
+  }
 #endif
 }
 
@@ -240,11 +281,24 @@ bool ARMConstantIslands::runOnMachineFunction(MachineFunction &MF) {
   isThumb2 = AFI->isThumb2Function();
 
   HasFarJump = false;
+  HasInlineAsm = false;
 
   // Renumber all of the machine basic blocks in the function, guaranteeing that
   // the numbers agree with the position of the block in the function.
   MF.RenumberBlocks();
 
+  // Try to reorder and otherwise adjust the block layout to make good use
+  // of the TB[BH] instructions.
+  bool MadeChange = false;
+  if (isThumb2 && AdjustJumpTableBlocks) {
+    JumpTableFunctionScan(MF);
+    MadeChange |= ReorderThumb2JumpTables(MF);
+    // Data is out of date, so clear it. It'll be re-computed later.
+    T2JumpTables.clear();
+    // Blocks may have shifted around. Keep the numbering up to date.
+    MF.RenumberBlocks();
+  }
+
   // Thumb1 functions containing constant pools get 4-byte alignment.
   // This is so we can keep exact track of where the alignment padding goes.
 
@@ -275,7 +329,6 @@ bool ARMConstantIslands::runOnMachineFunction(MachineFunction &MF) {
 
   // Iteratively place constant pool entries and fix up branches until there
   // is no change.
-  bool MadeChange = false;
   unsigned NoCPIters = 0, NoBRIters = 0;
   while (true) {
     bool CPChange = false;
@@ -284,6 +337,10 @@ bool ARMConstantIslands::runOnMachineFunction(MachineFunction &MF) {
     if (CPChange && ++NoCPIters > 30)
       llvm_unreachable("Constant Island pass failed to converge!");
     DEBUG(dumpBBs());
+    
+    // Clear NewWaterList now.  If we split a block for branches, it should
+    // appear as "new water" for the next iteration of constant pool placement.
+    NewWaterList.clear();
 
     bool BRChange = false;
     for (unsigned i = 0, e = ImmBranches.size(); i != e; ++i)
@@ -388,11 +445,39 @@ ARMConstantIslands::CPEntry
   return NULL;
 }
 
+/// JumpTableFunctionScan - Do an initial scan of the function, building up
+/// information about the locations of all the jump tables.
+void ARMConstantIslands::JumpTableFunctionScan(MachineFunction &MF) {
+  for (MachineFunction::iterator MBBI = MF.begin(), E = MF.end();
+       MBBI != E; ++MBBI) {
+    MachineBasicBlock &MBB = *MBBI;
+
+    for (MachineBasicBlock::iterator I = MBB.begin(), E = MBB.end();
+         I != E; ++I)
+      if (I->getDesc().isBranch() && I->getOpcode() == ARM::t2BR_JT)
+        T2JumpTables.push_back(I);
+  }
+}
+
 /// InitialFunctionScan - Do the initial scan of the function, building up
 /// information about the sizes of each block, the location of all the water,
 /// and finding all of the constant pool users.
 void ARMConstantIslands::InitialFunctionScan(MachineFunction &MF,
                                  const std::vector<MachineInstr*> &CPEMIs) {
+  // First thing, see if the function has any inline assembly in it. If so,
+  // we have to be conservative about alignment assumptions, as we don't
+  // know for sure the size of any instructions in the inline assembly.
+  for (MachineFunction::iterator MBBI = MF.begin(), E = MF.end();
+       MBBI != E; ++MBBI) {
+    MachineBasicBlock &MBB = *MBBI;
+    for (MachineBasicBlock::iterator I = MBB.begin(), E = MBB.end();
+         I != E; ++I)
+      if (I->getOpcode() == ARM::INLINEASM)
+        HasInlineAsm = true;
+  }
+
+  // Now go back through the instructions and build up our data structures.
   unsigned Offset = 0;
   for (MachineFunction::iterator MBBI = MF.begin(), E = MF.end();
        MBBI != E; ++MBBI) {
@@ -422,7 +507,7 @@ void ARMConstantIslands::InitialFunctionScan(MachineFunction &MF,
           // A Thumb1 table jump may involve padding; for the offsets to
           // be right, functions containing these must be 4-byte aligned.
           AFI->setAlign(2U);
-          if ((Offset+MBBSize)%4 != 0)
+          if ((Offset+MBBSize)%4 != 0 || HasInlineAsm)
             // FIXME: Add a pseudo ALIGN instruction instead.
             MBBSize += 2;           // padding
           continue;   // Does not get an entry in ImmBranches
@@ -491,7 +576,7 @@ void ARMConstantIslands::InitialFunctionScan(MachineFunction &MF,
           case ARM::LEApcrel:
            // This takes a SoImm, which is an 8-bit rotated immediate. We'll
            // pretend the maximum offset is 255 * 4. Since each instruction is
-            // 4 byte wide, this is always correct. We'llc heck for other
+            // 4 byte wide, this is always correct. We'll check for other
            // displacements that fit in a SoImm as well.
             Bits = 8;
             Scale = 4;
@@ -520,8 +605,8 @@ void ARMConstantIslands::InitialFunctionScan(MachineFunction &MF,
             Scale = 4;  // +(offset_8*4)
             break;
 
-          case ARM::FLDD:
-          case ARM::FLDS:
+          case ARM::VLDRD:
+          case ARM::VLDRS:
             Bits = 8;
             Scale = 4;  // +-(offset_8*4)
             NegOk = true;
@@ -550,7 +635,7 @@ void ARMConstantIslands::InitialFunctionScan(MachineFunction &MF,
     if (isThumb &&
         !MBB.empty() &&
         MBB.begin()->getOpcode() == ARM::CONSTPOOL_ENTRY &&
-        (Offset%4) != 0)
+        ((Offset%4) != 0 || HasInlineAsm))
       MBBSize += 2;
 
     BBSizes.push_back(MBBSize);
@@ -574,7 +659,7 @@ unsigned ARMConstantIslands::GetOffsetOf(MachineInstr *MI) const {
   // alignment padding, and compensate if so.
   if (isThumb &&
       MI->getOpcode() == ARM::CONSTPOOL_ENTRY &&
-      Offset%4 != 0)
+      (Offset%4 != 0 || HasInlineAsm))
     Offset += 2;
 
   // Sum instructions before MI in MBB.
@@ -608,7 +693,7 @@ void ARMConstantIslands::UpdateForInsertedWaterBlock(MachineBasicBlock *NewBB) {
 
   // Next, update WaterList.  Specifically, we need to add NewMBB as having
   // available water after it.
-  std::vector<MachineBasicBlock*>::iterator IP =
+  water_iterator IP =
     std::lower_bound(WaterList.begin(), WaterList.end(), NewBB,
                      CompareMBBNumbers);
   WaterList.insert(IP, NewBB);
@@ -616,7 +701,7 @@ void ARMConstantIslands::UpdateForInsertedWaterBlock(MachineBasicBlock *NewBB) {
 
 
 /// Split the basic block containing MI into two blocks, which are joined by
-/// an unconditional branch.  Update datastructures and renumber blocks to
+/// an unconditional branch.  Update data structures and renumber blocks to
 /// account for this change, and return the newly created block.
 MachineBasicBlock *ARMConstantIslands::SplitBlockBeforeInstr(MachineInstr *MI) {
   MachineBasicBlock *OrigBB = MI->getParent();
@@ -670,7 +755,7 @@ MachineBasicBlock *ARMConstantIslands::SplitBlockBeforeInstr(MachineInstr *MI) {
   // available water after it (but not if it's already there, which happens
   // when splitting before a conditional branch that is followed by an
   // unconditional branch - in that case we want to insert NewBB).
-  std::vector<MachineBasicBlock*>::iterator IP =
+  water_iterator IP =
     std::lower_bound(WaterList.begin(), WaterList.end(), OrigBB,
                      CompareMBBNumbers);
   MachineBasicBlock* WaterBB = *IP;
@@ -678,6 +763,7 @@ MachineBasicBlock *ARMConstantIslands::SplitBlockBeforeInstr(MachineInstr *MI) {
     WaterList.insert(next(IP), NewBB);
   else
     WaterList.insert(IP, OrigBB);
+  NewWaterList.insert(OrigBB);
 
   // Figure out how large the first NewMBB is.  (It cannot
   // contain a constpool_entry or tablejump.)
@@ -756,8 +842,7 @@ bool ARMConstantIslands::WaterIsInRange(unsigned UserOffset,
                        BBSizes[Water->getNumber()];
 
   // If the CPE is to be inserted before the instruction, that will raise
-  // the offset of the instruction.  (Currently applies only to ARM, so
-  // no alignment compensation attempted here.)
+  // the offset of the instruction.
   if (CPEOffset < UserOffset)
     UserOffset += U.CPEMI->getOperand(2).getImm();
 
@@ -770,7 +855,7 @@ bool ARMConstantIslands::CPEIsInRange(MachineInstr *MI, unsigned UserOffset,
                                       MachineInstr *CPEMI, unsigned MaxDisp,
                                       bool NegOk, bool DoDump) {
   unsigned CPEOffset  = GetOffsetOf(CPEMI);
-  assert(CPEOffset%4 == 0 && "Misaligned CPE");
+  assert((CPEOffset%4 == 0 || HasInlineAsm) && "Misaligned CPE");
 
   if (DoDump) {
     DEBUG(errs() << "User of CPE#" << CPEMI->getOperand(0).getImm()
@@ -811,7 +896,7 @@ void ARMConstantIslands::AdjustBBOffsetsAfter(MachineBasicBlock *BB,
     if (!isThumb)
       continue;
     MachineBasicBlock *MBB = MBBI;
-    if (!MBB->empty()) {
+    if (!MBB->empty() && !HasInlineAsm) {
       // Constant pool entries require padding.
       if (MBB->begin()->getOpcode() == ARM::CONSTPOOL_ENTRY) {
         unsigned OldOffset = BBOffsets[i] - delta;
@@ -929,56 +1014,54 @@ static inline unsigned getUnconditionalBrDisp(int Opc) {
   return ((1<<23)-1)*4;
 }
 
-/// AcceptWater - Small amount of common code factored out of the following.
-
-MachineBasicBlock* ARMConstantIslands::AcceptWater(MachineBasicBlock *WaterBB,
-                          std::vector<MachineBasicBlock*>::iterator IP) {
-  DEBUG(errs() << "found water in range\n");
-  // Remove the original WaterList entry; we want subsequent
-  // insertions in this vicinity to go after the one we're
-  // about to insert.  This considerably reduces the number
-  // of times we have to move the same CPE more than once.
-  WaterList.erase(IP);
-  // CPE goes before following block (NewMBB).
-  return next(MachineFunction::iterator(WaterBB));
-}
-
-/// LookForWater - look for an existing entry in the WaterList in which
+/// LookForWater - Look for an existing entry in the WaterList in which
 /// we can place the CPE referenced from U so it's within range of U's MI.
-/// Returns true if found, false if not.  If it returns true, *NewMBB
-/// is set to the WaterList entry.
-/// For ARM, we prefer the water that's farthest away. For Thumb, prefer
-/// water that will not introduce padding to water that will; within each
-/// group, prefer the water that's farthest away.
+/// Returns true if found, false if not.  If it returns true, WaterIter
+/// is set to the WaterList entry.  For Thumb, prefer water that will not
+/// introduce padding to water that will.  To ensure that this pass
+/// terminates, the CPE location for a particular CPUser is only allowed to
+/// move to a lower address, so search backward from the end of the list and
+/// prefer the first water that is in range.
 bool ARMConstantIslands::LookForWater(CPUser &U, unsigned UserOffset,
-                                      MachineBasicBlock** NewMBB) {
-  std::vector<MachineBasicBlock*>::iterator IPThatWouldPad;
-  MachineBasicBlock* WaterBBThatWouldPad = NULL;
-  if (!WaterList.empty()) {
-    for (std::vector<MachineBasicBlock*>::iterator IP = prior(WaterList.end()),
-           B = WaterList.begin();; --IP) {
-      MachineBasicBlock* WaterBB = *IP;
-      if (WaterIsInRange(UserOffset, WaterBB, U)) {
-        unsigned WBBId = WaterBB->getNumber();
-        if (isThumb &&
-            (BBOffsets[WBBId] + BBSizes[WBBId])%4 != 0) {
-          // This is valid Water, but would introduce padding.  Remember
-          // it in case we don't find any Water that doesn't do this.
-          if (!WaterBBThatWouldPad) {
-            WaterBBThatWouldPad = WaterBB;
-            IPThatWouldPad = IP;
-          }
-        } else {
-          *NewMBB = AcceptWater(WaterBB, IP);
-          return true;
+                                      water_iterator &WaterIter) {
+  if (WaterList.empty())
+    return false;
+
+  bool FoundWaterThatWouldPad = false;
+  water_iterator IPThatWouldPad;
+  for (water_iterator IP = prior(WaterList.end()),
+         B = WaterList.begin();; --IP) {
+    MachineBasicBlock* WaterBB = *IP;
+    // Check if water is in range and is either at a lower address than the
+    // current "high water mark" or a new water block that was created since
+    // the previous iteration by inserting an unconditional branch.  In the
+    // latter case, we want to allow resetting the high water mark back to
+    // this new water since we haven't seen it before.  Inserting branches
+    // should be relatively uncommon and when it does happen, we want to be
+    // sure to take advantage of it for all the CPEs near that block, so that
+    // we don't insert more branches than necessary.
+    if (WaterIsInRange(UserOffset, WaterBB, U) &&
+        (WaterBB->getNumber() < U.HighWaterMark->getNumber() ||
+         NewWaterList.count(WaterBB))) {
+      unsigned WBBId = WaterBB->getNumber();
+      if (isThumb &&
+          (BBOffsets[WBBId] + BBSizes[WBBId])%4 != 0) {
+        // This is valid Water, but would introduce padding.  Remember
+        // it in case we don't find any Water that doesn't do this.
+        if (!FoundWaterThatWouldPad) {
+          FoundWaterThatWouldPad = true;
+          IPThatWouldPad = IP;
         }
+      } else {
+        WaterIter = IP;
+        return true;
       }
-      if (IP == B)
-        break;
     }
+    if (IP == B)
+      break;
   }
-  if (isThumb && WaterBBThatWouldPad) {
-    *NewMBB = AcceptWater(WaterBBThatWouldPad, IPThatWouldPad);
+  if (FoundWaterThatWouldPad) {
+    WaterIter = IPThatWouldPad;
     return true;
   }
   return false;
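
    A standalone sketch (not part of the patch; plain ints stand in for
    MachineBasicBlock*) of the invariant the loop above encodes: water is
    usable only if it is in range and either below the CPUser's high-water
    mark or newly created, so each CPE can only migrate to lower addresses
    and the pass must terminate.

        #include <cstdio>
        #include <set>
        #include <vector>

        // Scan backward and return the index of the first usable water, or -1.
        static int lookForWater(const std::vector<int> &Water,
                                const std::set<int> &NewWater,
                                int HighWaterMark) {
          for (int i = (int)Water.size() - 1; i >= 0; --i) {
            int W = Water[i];
            // Range check omitted; only the termination condition is modeled.
            if (W < HighWaterMark || NewWater.count(W))
              return i;
          }
          return -1;
        }

        int main() {
          std::vector<int> Water;          // block numbers acting as addresses
          Water.push_back(1); Water.push_back(3);
          Water.push_back(7); Water.push_back(9);
          std::set<int> NewWater;
          NewWater.insert(9);              // created on this iteration
          // Block 9 is above the mark (8) but is new water, so it is allowed.
          std::printf("%d\n", lookForWater(Water, NewWater, 8));  // prints 3
        }
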
@@ -988,12 +1071,12 @@ bool ARMConstantIslands::LookForWater(CPUser &U, unsigned UserOffset,
 /// CPUsers[CPUserIndex], so create a place to put the CPE.  The end of the
 /// block is used if in range, and the conditional branch munged so control
 /// flow is correct.  Otherwise the block is split to create a hole with an
-/// unconditional branch around it.  In either case *NewMBB is set to a
+/// unconditional branch around it.  In either case NewMBB is set to a
 /// block following which the new island can be inserted (the WaterList
 /// is not adjusted).
-
 void ARMConstantIslands::CreateNewWater(unsigned CPUserIndex,
-                        unsigned UserOffset, MachineBasicBlock** NewMBB) {
+                                        unsigned UserOffset,
+                                        MachineBasicBlock *&NewMBB) {
   CPUser &U = CPUsers[CPUserIndex];
   MachineInstr *UserMI = U.MI;
   MachineInstr *CPEMI  = U.CPEMI;
@@ -1002,20 +1085,18 @@ void ARMConstantIslands::CreateNewWater(unsigned CPUserIndex,
                                BBSizes[UserMBB->getNumber()];
   assert(OffsetOfNextBlock== BBOffsets[UserMBB->getNumber()+1]);
 
-  // If the use is at the end of the block, or the end of the block
-  // is within range, make new water there.  (The addition below is
-  // for the unconditional branch we will be adding:  4 bytes on ARM + Thumb2,
-  // 2 on Thumb1.  Possible Thumb1 alignment padding is allowed for
+  // If the block does not end in an unconditional branch already, and if the
+  // end of the block is within range, make new water there.  (The addition
+  // below is for the unconditional branch we will be adding: 4 bytes on ARM +
+  // Thumb2, 2 on Thumb1.  Possible Thumb1 alignment padding is allowed for
   // inside OffsetIsInRange.
-  // If the block ends in an unconditional branch already, it is water,
-  // and is known to be out of range, so we'll always be adding a branch.)
-  if (&UserMBB->back() == UserMI ||
+  if (BBHasFallthrough(UserMBB) &&
       OffsetIsInRange(UserOffset, OffsetOfNextBlock + (isThumb1 ? 2: 4),
                       U.MaxDisp, U.NegOk, U.IsSoImm)) {
     DEBUG(errs() << "Split at end of block\n");
     if (&UserMBB->back() == UserMI)
       assert(BBHasFallthrough(UserMBB) && "Expected a fallthrough BB!");
-    *NewMBB = next(MachineFunction::iterator(UserMBB));
+    NewMBB = next(MachineFunction::iterator(UserMBB));
     // Add an unconditional branch from UserMBB to fallthrough block.
     // Record it for branch lengthening; this new branch will not get out of
     // range, but if the preceding conditional branch is out of range, the
@@ -1023,7 +1104,7 @@ void ARMConstantIslands::CreateNewWater(unsigned CPUserIndex,
     // range, so the machinery has to know about it.
     int UncondBr = isThumb ? ((isThumb2) ? ARM::t2B : ARM::tB) : ARM::B;
     BuildMI(UserMBB, DebugLoc::getUnknownLoc(),
-            TII->get(UncondBr)).addMBB(*NewMBB);
+            TII->get(UncondBr)).addMBB(NewMBB);
     unsigned MaxDisp = getUnconditionalBrDisp(UncondBr);
     ImmBranches.push_back(ImmBranch(&UserMBB->back(),
                           MaxDisp, false, UncondBr));
@@ -1078,7 +1159,7 @@ void ARMConstantIslands::CreateNewWater(unsigned CPUserIndex,
       }
     }
     DEBUG(errs() << "Split in middle of big block\n");
-    *NewMBB = SplitBlockBeforeInstr(prior(MI));
+    NewMBB = SplitBlockBeforeInstr(prior(MI));
   }
 }
 
@@ -1093,7 +1174,6 @@ bool ARMConstantIslands::HandleConstantPoolUser(MachineFunction &MF,
   MachineInstr *CPEMI  = U.CPEMI;
   unsigned CPI = CPEMI->getOperand(1).getIndex();
   unsigned Size = CPEMI->getOperand(2).getImm();
-  MachineBasicBlock *NewMBB;
   // Compute this only once, it's expensive.  The 4 or 8 is the value the
   // hardware keeps in the PC.
   unsigned UserOffset = GetOffsetOf(UserMI) + (isThumb ? 4 : 8);
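
    The "(isThumb ? 4 : 8)" term above models the hardware PC, which reads
    ahead of the executing instruction. A small worked sketch with
    hypothetical offsets (not the real GetOffsetOf/OffsetIsInRange logic):

        #include <cstdio>

        // The PC visible to an instruction is its own offset plus 8 on ARM
        // and plus 4 on Thumb; displacements are measured from that point.
        static unsigned userOffset(unsigned InstrOffset, bool IsThumb) {
          return InstrOffset + (IsThumb ? 4 : 8);
        }

        int main() {
          unsigned Instr = 0x100, CPE = 0x4f8;   // hypothetical byte offsets
          unsigned Disp = CPE - userOffset(Instr, false);
          // An ARM ldr reaches +/-4095 bytes from the PC.
          std::printf("disp = %u, fits: %d\n", Disp, Disp < 4096);
        }
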
@@ -1108,18 +1188,51 @@ bool ARMConstantIslands::HandleConstantPoolUser(MachineFunction &MF,
   // We will be generating a new clone.  Get a UID for it.
   unsigned ID = AFI->createConstPoolEntryUId();
 
-  // Look for water where we can place this CPE.  We look for the farthest one
-  // away that will work.  Forward references only for now (although later
-  // we might find some that are backwards).
+  // Look for water where we can place this CPE.
+  MachineBasicBlock *NewIsland = MF.CreateMachineBasicBlock();
+  MachineBasicBlock *NewMBB;
+  water_iterator IP;
+  if (LookForWater(U, UserOffset, IP)) {
+    DEBUG(errs() << "found water in range\n");
+    MachineBasicBlock *WaterBB = *IP;
+
+    // If the original WaterList entry was "new water" on this iteration,
+    // propagate that to the new island.  This is just keeping NewWaterList
+    // updated to match the WaterList, which will be updated below.
+    if (NewWaterList.count(WaterBB)) {
+      NewWaterList.erase(WaterBB);
+      NewWaterList.insert(NewIsland);
+    }
+    // The new CPE goes before the following block (NewMBB).
+    NewMBB = next(MachineFunction::iterator(WaterBB));
 
-  if (!LookForWater(U, UserOffset, &NewMBB)) {
+  } else {
     // No water found.
     DEBUG(errs() << "No water found\n");
-    CreateNewWater(CPUserIndex, UserOffset, &NewMBB);
+    CreateNewWater(CPUserIndex, UserOffset, NewMBB);
+
+    // SplitBlockBeforeInstr adds to WaterList, which is important when it is
+    // called while handling branches so that the water will be seen on the
+    // next iteration for constant pools, but in this context, we don't want
+    // it.  Check for this so it will be removed from the WaterList.
+    // Also remove any entry from NewWaterList.
+    MachineBasicBlock *WaterBB = prior(MachineFunction::iterator(NewMBB));
+    IP = std::find(WaterList.begin(), WaterList.end(), WaterBB);
+    if (IP != WaterList.end())
+      NewWaterList.erase(WaterBB);
+
+    // We are adding new water.  Update NewWaterList.
+    NewWaterList.insert(NewIsland);
   }
 
+  // Remove the original WaterList entry; we want subsequent insertions in
+  // this vicinity to go after the one we're about to insert.  This
+  // considerably reduces the number of times we have to move the same CPE
+  // more than once and is also important to ensure the algorithm terminates.
+  if (IP != WaterList.end())
+    WaterList.erase(IP);
+
   // Okay, we know we can put an island before NewMBB now, do it!
-  MachineBasicBlock *NewIsland = MF.CreateMachineBasicBlock();
   MF.insert(NewMBB, NewIsland);
 
   // Update internal data structures to account for the newly inserted MBB.
@@ -1130,6 +1243,7 @@ bool ARMConstantIslands::HandleConstantPoolUser(MachineFunction &MF,
 
   // Now that we have an island to add the CPE to, clone the original CPE and
   // add it to the island.
+  U.HighWaterMark = NewIsland;
   U.CPEMI = BuildMI(NewIsland, DebugLoc::getUnknownLoc(),
                     TII->get(ARM::CONSTPOOL_ENTRY))
                 .addImm(ID).addConstantPoolIndex(CPI).addImm(Size);
@@ -1138,7 +1252,7 @@ bool ARMConstantIslands::HandleConstantPoolUser(MachineFunction &MF,
 
   BBOffsets[NewIsland->getNumber()] = BBOffsets[NewMBB->getNumber()];
   // Compensate for .align 2 in thumb mode.
-  if (isThumb && BBOffsets[NewIsland->getNumber()]%4 != 0)
+  if (isThumb && (BBOffsets[NewIsland->getNumber()]%4 != 0 || HasInlineAsm))
     Size += 2;
   // Increase the size of the island block to account for the new entry.
   BBSizes[NewIsland->getNumber()] += Size;
@@ -1437,31 +1551,71 @@ bool ARMConstantIslands::OptimizeThumb2Branches(MachineFunction &MF) {
       Bits = 11;
       Scale = 2;
       break;
-    case ARM::t2Bcc:
+    case ARM::t2Bcc: {
       NewOpc = ARM::tBcc;
       Bits = 8;
-      Scale = 2;      
+      Scale = 2;
       break;
     }
-    if (!NewOpc)
+    }
+    if (NewOpc) {
+      unsigned MaxOffs = ((1 << (Bits-1))-1) * Scale;
+      MachineBasicBlock *DestBB = Br.MI->getOperand(0).getMBB();
+      if (BBIsInRange(Br.MI, DestBB, MaxOffs)) {
+        Br.MI->setDesc(TII->get(NewOpc));
+        MachineBasicBlock *MBB = Br.MI->getParent();
+        BBSizes[MBB->getNumber()] -= 2;
+        AdjustBBOffsetsAfter(MBB, -2);
+        ++NumT2BrShrunk;
+        MadeChange = true;
+      }
+    }
+
+    Opcode = Br.MI->getOpcode();
+    if (Opcode != ARM::tBcc)
       continue;
 
-    unsigned MaxOffs = ((1 << (Bits-1))-1) * Scale;
+    NewOpc = 0;
+    unsigned PredReg = 0;
+    ARMCC::CondCodes Pred = llvm::getInstrPredicate(Br.MI, PredReg);
+    if (Pred == ARMCC::EQ)
+      NewOpc = ARM::tCBZ;
+    else if (Pred == ARMCC::NE)
+      NewOpc = ARM::tCBNZ;
+    if (!NewOpc)
+      continue;
     MachineBasicBlock *DestBB = Br.MI->getOperand(0).getMBB();
-    if (BBIsInRange(Br.MI, DestBB, MaxOffs)) {
-      Br.MI->setDesc(TII->get(NewOpc));
-      MachineBasicBlock *MBB = Br.MI->getParent();
-      BBSizes[MBB->getNumber()] -= 2;
-      AdjustBBOffsetsAfter(MBB, -2);
-      ++NumT2BrShrunk;
-      MadeChange = true;
+    // Check if the distance is within 126. Subtract 2 from the starting
+    // offset because the cmp will be eliminated.
+    unsigned BrOffset = GetOffsetOf(Br.MI) + 4 - 2;
+    unsigned DestOffset = BBOffsets[DestBB->getNumber()];
+    if (BrOffset < DestOffset && (DestOffset - BrOffset) <= 126) {
+      MachineBasicBlock::iterator CmpMI = Br.MI; --CmpMI;
+      if (CmpMI->getOpcode() == ARM::tCMPzi8) {
+        unsigned Reg = CmpMI->getOperand(0).getReg();
+        Pred = llvm::getInstrPredicate(CmpMI, PredReg);
+        if (Pred == ARMCC::AL &&
+            CmpMI->getOperand(1).getImm() == 0 &&
+            isARMLowRegister(Reg)) {
+          MachineBasicBlock *MBB = Br.MI->getParent();
+          MachineInstr *NewBR =
+            BuildMI(*MBB, CmpMI, Br.MI->getDebugLoc(), TII->get(NewOpc))
+            .addReg(Reg).addMBB(DestBB, Br.MI->getOperand(0).getTargetFlags());
+          CmpMI->eraseFromParent();
+          Br.MI->eraseFromParent();
+          Br.MI = NewBR;
+          BBSizes[MBB->getNumber()] -= 2;
+          AdjustBBOffsetsAfter(MBB, -2);
+          ++NumCBZ;
+          MadeChange = true;
+        }
+      }
     }
   }
 
   return MadeChange;
 }
 
-
 /// OptimizeThumb2JumpTables - Use tbb / tbh instructions to generate smaller
 /// jumptables when it's possible.
 bool ARMConstantIslands::OptimizeThumb2JumpTables(MachineFunction &MF) {
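
    The tCBZ/tCBNZ rewrite in the hunk above hinges on two encoding facts:
    CBZ/CBNZ only branch forward, with at most 126 bytes of displacement,
    and deleting the tCMPzi8 pulls the branch 2 bytes closer to its target.
    A sketch of that range test with made-up offsets:

        #include <cstdio>

        static bool cbzInRange(unsigned BrInstrOffset, unsigned DestOffset) {
          // Thumb PC bias (+4), minus 2 for the cmp that will be removed.
          unsigned BrOffset = BrInstrOffset + 4 - 2;
          return BrOffset < DestOffset && (DestOffset - BrOffset) <= 126;
        }

        int main() {
          std::printf("%d\n", cbzInRange(0x40, 0xa0));  // 1: 94 bytes forward
          std::printf("%d\n", cbzInRange(0xa0, 0x40));  // 0: backward branch
        }
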
@@ -1469,7 +1623,7 @@ bool ARMConstantIslands::OptimizeThumb2JumpTables(MachineFunction &MF) {
 
  // FIXME: After the tables are shrunk, can we get rid of some of the
  // constantpool tables?
-  const MachineJumpTableInfo *MJTI = MF.getJumpTableInfo();
+  MachineJumpTableInfo *MJTI = MF.getJumpTableInfo();
   const std::vector<MachineJumpTableEntry> &JT = MJTI->getJumpTables();
   for (unsigned i = 0, e = T2JumpTables.size(); i != e; ++i) {
     MachineInstr *MI = T2JumpTables[i];
@@ -1569,3 +1723,99 @@ bool ARMConstantIslands::OptimizeThumb2JumpTables(MachineFunction &MF) {
 
   return MadeChange;
 }
+
+/// ReorderThumb2JumpTables - Adjust the function's block layout to ensure that
+/// jump tables always branch forwards, since that's what tbb and tbh need.
+bool ARMConstantIslands::ReorderThumb2JumpTables(MachineFunction &MF) {
+  bool MadeChange = false;
+
+  MachineJumpTableInfo *MJTI = MF.getJumpTableInfo();
+  const std::vector<MachineJumpTableEntry> &JT = MJTI->getJumpTables();
+  for (unsigned i = 0, e = T2JumpTables.size(); i != e; ++i) {
+    MachineInstr *MI = T2JumpTables[i];
+    const TargetInstrDesc &TID = MI->getDesc();
+    unsigned NumOps = TID.getNumOperands();
+    unsigned JTOpIdx = NumOps - (TID.isPredicable() ? 3 : 2);
+    MachineOperand JTOP = MI->getOperand(JTOpIdx);
+    unsigned JTI = JTOP.getIndex();
+    assert(JTI < JT.size());
+
+    // We prefer the target blocks for the jump table to come after the
+    // jump instruction so we can use TB[BH]. Loop through the target
+    // blocks and try to adjust them so that is the case.
+    int JTNumber = MI->getParent()->getNumber();
+    const std::vector<MachineBasicBlock*> &JTBBs = JT[JTI].MBBs;
+    for (unsigned j = 0, ee = JTBBs.size(); j != ee; ++j) {
+      MachineBasicBlock *MBB = JTBBs[j];
+      int DTNumber = MBB->getNumber();
+
+      if (DTNumber < JTNumber) {
+        // The destination precedes the switch. Try to move the block forward
+        // so we have a positive offset.
+        MachineBasicBlock *NewBB =
+          AdjustJTTargetBlockForward(MBB, MI->getParent());
+        if (NewBB)
+          MJTI->ReplaceMBBInJumpTable(JTI, JTBBs[j], NewBB);
+        MadeChange = true;
+      }
+    }
+  }
+
+  return MadeChange;
+}
+
+MachineBasicBlock *ARMConstantIslands::
+AdjustJTTargetBlockForward(MachineBasicBlock *BB, MachineBasicBlock *JTBB)
+{
+  MachineFunction &MF = *BB->getParent();
+
+  // If the destination block is terminated by an unconditional branch,
+  // try to move it; otherwise, create a new block following the jump
+  // table that branches back to the actual target. This is a very simple
+  // heuristic. FIXME: We can definitely improve it.
+  MachineBasicBlock *TBB = 0, *FBB = 0;
+  SmallVector<MachineOperand, 4> Cond;
+  SmallVector<MachineOperand, 4> CondPrior;
+  MachineFunction::iterator BBi = BB;
+  MachineFunction::iterator OldPrior = prior(BBi);
+
+  // If the block terminator isn't analyzable, don't try to move the block
+  bool B = TII->AnalyzeBranch(*BB, TBB, FBB, Cond);
+
+  // If the block ends in an unconditional branch, move it. The prior block
+  // has to have an analyzable terminator for us to move this one. Be paranoid
+  // and make sure we're not trying to move the entry block of the function.
+  if (!B && Cond.empty() && BB != MF.begin() &&
+      !TII->AnalyzeBranch(*OldPrior, TBB, FBB, CondPrior)) {
+    BB->moveAfter(JTBB);
+    OldPrior->updateTerminator();
+    BB->updateTerminator();
+    // Update numbering to account for the block being moved.
+    MF.RenumberBlocks();
+    ++NumJTMoved;
+    return NULL;
+  }
+
+  // Create a new MBB for the code after the jump BB.
+  MachineBasicBlock *NewBB =
+    MF.CreateMachineBasicBlock(JTBB->getBasicBlock());
+  MachineFunction::iterator MBBI = JTBB; ++MBBI;
+  MF.insert(MBBI, NewBB);
+
+  // Add an unconditional branch from NewBB to BB.
+  // There doesn't seem to be meaningful DebugInfo available; this doesn't
+  // correspond directly to anything in the source.
+  assert(isThumb2 && "Adjusting for TB[BH] but not in Thumb2?");
+  BuildMI(NewBB, DebugLoc::getUnknownLoc(), TII->get(ARM::t2B)).addMBB(BB);
+
+  // Update internal data structures to account for the newly inserted MBB.
+  MF.RenumberBlocks(NewBB);
+
+  // Update the CFG.
+  NewBB->addSuccessor(BB);
+  JTBB->removeSuccessor(BB);
+  JTBB->addSuccessor(NewBB);
+
+  ++NumJTInserted;
+  return NewBB;
+}
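
    ReorderThumb2JumpTables above exists because TB[BH] table entries are
    unsigned half-word counts added to the PC, so a target placed before the
    table is simply not encodable. A rough model (the exact PC base is
    simplified and the offsets are hypothetical):

        #include <cstdio>

        // Returns the TBB byte entry for a target, or -1 if not encodable.
        static int tbbEntry(unsigned TablePC, unsigned Target) {
          if (Target < TablePC)
            return -1;                       // backward: cannot be encoded
          unsigned HalfWords = (Target - TablePC) / 2;
          return HalfWords <= 255 ? (int)HalfWords : -1;  // one byte per entry
        }

        int main() {
          std::printf("%d\n", tbbEntry(0x1000, 0x10c0));  // 96
          std::printf("%d\n", tbbEntry(0x1000, 0x0ff0));  // -1: move the block
        }
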
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMConstantPoolValue.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMConstantPoolValue.cpp
index 7170089..90dd0c7 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMConstantPoolValue.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMConstantPoolValue.cpp
@@ -13,19 +13,21 @@
 
 #include "ARMConstantPoolValue.h"
 #include "llvm/ADT/FoldingSet.h"
+#include "llvm/Constant.h"
+#include "llvm/Constants.h"
 #include "llvm/GlobalValue.h"
 #include "llvm/Type.h"
 #include "llvm/Support/raw_ostream.h"
 #include <cstdlib>
 using namespace llvm;
 
-ARMConstantPoolValue::ARMConstantPoolValue(GlobalValue *gv, unsigned id,
+ARMConstantPoolValue::ARMConstantPoolValue(Constant *cval, unsigned id,
                                            ARMCP::ARMCPKind K,
                                            unsigned char PCAdj,
                                            const char *Modif,
                                            bool AddCA)
-  : MachineConstantPoolValue((const Type*)gv->getType()),
-    GV(gv), S(NULL), LabelId(id), Kind(K), PCAdjust(PCAdj),
+  : MachineConstantPoolValue((const Type*)cval->getType()),
+    CVal(cval), S(NULL), LabelId(id), Kind(K), PCAdjust(PCAdj),
     Modifier(Modif), AddCurrentAddress(AddCA) {}
 
 ARMConstantPoolValue::ARMConstantPoolValue(LLVMContext &C,
@@ -34,14 +36,22 @@ ARMConstantPoolValue::ARMConstantPoolValue(LLVMContext &C,
                                            const char *Modif,
                                            bool AddCA)
   : MachineConstantPoolValue((const Type*)Type::getInt32Ty(C)),
-    GV(NULL), S(strdup(s)), LabelId(id), Kind(ARMCP::CPValue), PCAdjust(PCAdj),
-    Modifier(Modif), AddCurrentAddress(AddCA) {}
+    CVal(NULL), S(strdup(s)), LabelId(id), Kind(ARMCP::CPExtSymbol),
+    PCAdjust(PCAdj), Modifier(Modif), AddCurrentAddress(AddCA) {}
 
 ARMConstantPoolValue::ARMConstantPoolValue(GlobalValue *gv, const char *Modif)
   : MachineConstantPoolValue((const Type*)Type::getInt32Ty(gv->getContext())),
-    GV(gv), S(NULL), LabelId(0), Kind(ARMCP::CPValue), PCAdjust(0),
+    CVal(gv), S(NULL), LabelId(0), Kind(ARMCP::CPValue), PCAdjust(0),
     Modifier(Modif) {}
 
+GlobalValue *ARMConstantPoolValue::getGV() const {
+  return dyn_cast_or_null<GlobalValue>(CVal);
+}
+
+BlockAddress *ARMConstantPoolValue::getBlockAddress() const {
+  return dyn_cast_or_null<BlockAddress>(CVal);
+}
+
 int ARMConstantPoolValue::getExistingMachineCPValue(MachineConstantPool *CP,
                                                     unsigned Alignment) {
   unsigned AlignMask = Alignment - 1;
@@ -51,10 +61,11 @@ int ARMConstantPoolValue::getExistingMachineCPValue(MachineConstantPool *CP,
         (Constants[i].getAlignment() & AlignMask) == 0) {
       ARMConstantPoolValue *CPV =
         (ARMConstantPoolValue *)Constants[i].Val.MachineCPVal;
-      if (CPV->GV == GV &&
-          CPV->S == S &&
+      if (CPV->CVal == CVal &&
           CPV->LabelId == LabelId &&
-          CPV->PCAdjust == PCAdjust)
+          CPV->PCAdjust == PCAdjust &&
+          (CPV->S == S || strcmp(CPV->S, S) == 0) &&
+          (CPV->Modifier == Modifier || strcmp(CPV->Modifier, Modifier) == 0))
         return i;
     }
   }
@@ -68,20 +79,37 @@ ARMConstantPoolValue::~ARMConstantPoolValue() {
 
 void
 ARMConstantPoolValue::AddSelectionDAGCSEId(FoldingSetNodeID &ID) {
-  ID.AddPointer(GV);
+  ID.AddPointer(CVal);
   ID.AddPointer(S);
   ID.AddInteger(LabelId);
   ID.AddInteger(PCAdjust);
 }
 
+bool
+ARMConstantPoolValue::hasSameValue(ARMConstantPoolValue *ACPV) {
+  if (ACPV->Kind == Kind &&
+      ACPV->CVal == CVal &&
+      ACPV->PCAdjust == PCAdjust &&
+      (ACPV->S == S || strcmp(ACPV->S, S) == 0) &&
+      (ACPV->Modifier == Modifier || strcmp(ACPV->Modifier, Modifier) == 0)) {
+    if (ACPV->LabelId == LabelId)
+      return true;
+    // Two PC relative constpool entries containing the same GV address or
+    // external symbols. FIXME: What about blockaddress?
+    if (Kind == ARMCP::CPValue || Kind == ARMCP::CPExtSymbol)
+      return true;
+  }
+  return false;
+}
+
 void ARMConstantPoolValue::dump() const {
   errs() << "  " << *this;
 }
 
 
 void ARMConstantPoolValue::print(raw_ostream &O) const {
-  if (GV)
-    O << GV->getName();
+  if (CVal)
+    O << CVal->getName();
   else
     O << S;
   if (Modifier) O << "(" << Modifier << ")";
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMConstantPoolValue.h b/libclamav/c++/llvm/lib/Target/ARM/ARMConstantPoolValue.h
index 00c4808..741acde 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMConstantPoolValue.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMConstantPoolValue.h
@@ -18,31 +18,35 @@
 
 namespace llvm {
 
+class Constant;
+class BlockAddress;
 class GlobalValue;
 class LLVMContext;
 
 namespace ARMCP {
   enum ARMCPKind {
     CPValue,
+    CPExtSymbol,
+    CPBlockAddress,
     CPLSDA
   };
 }
 
 /// ARMConstantPoolValue - ARM specific constantpool value. This is used to
-/// represent PC relative displacement between the address of the load
-/// instruction and the global value being loaded, i.e. (&GV-(LPIC+8)).
+/// represent PC-relative displacement between the address of the load
+/// instruction and the constant being loaded, i.e. (&GV-(LPIC+8)).
 class ARMConstantPoolValue : public MachineConstantPoolValue {
-  GlobalValue *GV;         // GlobalValue being loaded.
+  Constant *CVal;          // Constant being loaded.
   const char *S;           // ExtSymbol being loaded.
   unsigned LabelId;        // Label id of the load.
-  ARMCP::ARMCPKind Kind;   // Value or LSDA?
-  unsigned char PCAdjust;  // Extra adjustment if constantpool is pc relative.
+  ARMCP::ARMCPKind Kind;   // Kind of constant.
+  unsigned char PCAdjust;  // Extra adjustment if constantpool is pc-relative.
                            // 8 for ARM, 4 for Thumb.
   const char *Modifier;    // GV modifier i.e. (&GV(modifier)-(LPIC+8))
   bool AddCurrentAddress;
 
 public:
-  ARMConstantPoolValue(GlobalValue *gv, unsigned id,
+  ARMConstantPoolValue(Constant *cval, unsigned id,
                        ARMCP::ARMCPKind Kind = ARMCP::CPValue,
                        unsigned char PCAdj = 0, const char *Modifier = NULL,
                        bool AddCurrentAddress = false);
@@ -53,14 +57,17 @@ public:
   ARMConstantPoolValue();
   ~ARMConstantPoolValue();
 
-
-  GlobalValue *getGV() const { return GV; }
+  GlobalValue *getGV() const;
   const char *getSymbol() const { return S; }
+  BlockAddress *getBlockAddress() const;
   const char *getModifier() const { return Modifier; }
   bool hasModifier() const { return Modifier != NULL; }
   bool mustAddCurrentAddress() const { return AddCurrentAddress; }
   unsigned getLabelId() const { return LabelId; }
   unsigned char getPCAdjustment() const { return PCAdjust; }
+  bool isGlobalValue() const { return Kind == ARMCP::CPValue; }
+  bool isExtSymbol() const { return Kind == ARMCP::CPExtSymbol; }
+  bool isBlockAddress() { return Kind == ARMCP::CPBlockAddress; }
   bool isLSDA() { return Kind == ARMCP::CPLSDA; }
 
   virtual unsigned getRelocationInfo() const {
@@ -69,18 +76,20 @@ public:
     return 2;
   }
 
-
   virtual int getExistingMachineCPValue(MachineConstantPool *CP,
                                         unsigned Alignment);
 
   virtual void AddSelectionDAGCSEId(FoldingSetNodeID &ID);
 
+  /// hasSameValue - Return true if this ARM constpool value
+  /// can share the same constantpool entry as another ARM constpool value.
+  bool hasSameValue(ARMConstantPoolValue *ACPV);
+
   void print(raw_ostream *O) const { if (O) print(*O); }
   void print(raw_ostream &O) const;
   void dump() const;
 };
 
-
 inline raw_ostream &operator<<(raw_ostream &O, const ARMConstantPoolValue &V) {
   V.print(O);
   return O;
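
    A hedged usage sketch for the widened API above (illustrative only;
    printCPV is not part of the sources): CVal can now hold a GlobalValue or
    a BlockAddress, so getGV()/getBlockAddress() act as dyn_cast_or_null
    style accessors that return NULL for the other kinds.

        // Assumes the ARMConstantPoolValue.h declarations shown above.
        void printCPV(ARMConstantPoolValue *ACPV, raw_ostream &OS) {
          if (ACPV->isGlobalValue())
            OS << "global: " << ACPV->getGV()->getName() << "\n";
          else if (ACPV->isExtSymbol())
            OS << "extsym: " << ACPV->getSymbol() << "\n";
          else if (ACPV->isBlockAddress())
            OS << "blockaddress entry\n";  // value via getBlockAddress()
          else
            OS << "LSDA\n";
        }
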
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMExpandPseudoInsts.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMExpandPseudoInsts.cpp
new file mode 100644
index 0000000..c929c54
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMExpandPseudoInsts.cpp
@@ -0,0 +1,128 @@
+//===-- ARMExpandPseudoInsts.cpp - Expand pseudo instructions -----*- C++ -*-=//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file contains a pass that expands pseudo instructions into target
+// instructions to allow proper scheduling, if-conversion, and other late
+// optimizations. This pass should be run after register allocation but before
+// the post-regalloc scheduling pass.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "arm-pseudo"
+#include "ARM.h"
+#include "ARMBaseInstrInfo.h"
+#include "llvm/CodeGen/MachineFunctionPass.h"
+#include "llvm/CodeGen/MachineInstrBuilder.h"
+
+using namespace llvm;
+
+namespace {
+  class ARMExpandPseudo : public MachineFunctionPass {
+  public:
+    static char ID;
+    ARMExpandPseudo() : MachineFunctionPass(&ID) {}
+
+    const TargetInstrInfo *TII;
+
+    virtual bool runOnMachineFunction(MachineFunction &Fn);
+
+    virtual const char *getPassName() const {
+      return "ARM pseudo instruction expansion pass";
+    }
+
+  private:
+    bool ExpandMBB(MachineBasicBlock &MBB);
+  };
+  char ARMExpandPseudo::ID = 0;
+}
+
+bool ARMExpandPseudo::ExpandMBB(MachineBasicBlock &MBB) {
+  bool Modified = false;
+
+  MachineBasicBlock::iterator MBBI = MBB.begin(), E = MBB.end();
+  while (MBBI != E) {
+    MachineInstr &MI = *MBBI;
+    MachineBasicBlock::iterator NMBBI = next(MBBI);
+
+    unsigned Opcode = MI.getOpcode();
+    switch (Opcode) {
+    default: break;
+    case ARM::tLDRpci_pic: 
+    case ARM::t2LDRpci_pic: {
+      unsigned NewLdOpc = (Opcode == ARM::tLDRpci_pic)
+        ? ARM::tLDRpci : ARM::t2LDRpci;
+      unsigned DstReg = MI.getOperand(0).getReg();
+      if (!MI.getOperand(0).isDead()) {
+        MachineInstr *NewMI =
+          AddDefaultPred(BuildMI(MBB, MBBI, MI.getDebugLoc(),
+                                 TII->get(NewLdOpc), DstReg)
+                         .addOperand(MI.getOperand(1)));
+        NewMI->setMemRefs(MI.memoperands_begin(), MI.memoperands_end());
+        BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM::tPICADD))
+          .addReg(DstReg, getDefRegState(true))
+          .addReg(DstReg)
+          .addOperand(MI.getOperand(2));
+      }
+      MI.eraseFromParent();
+      Modified = true;
+      break;
+    }
+    case ARM::t2MOVi32imm: {
+      unsigned DstReg = MI.getOperand(0).getReg();
+      if (!MI.getOperand(0).isDead()) {
+        const MachineOperand &MO = MI.getOperand(1);
+        MachineInstrBuilder LO16, HI16;
+
+        LO16 = BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM::t2MOVi16),
+                       DstReg);
+        HI16 = BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM::t2MOVTi16))
+          .addReg(DstReg, getDefRegState(true)).addReg(DstReg);
+
+        if (MO.isImm()) {
+          unsigned Imm = MO.getImm();
+          unsigned Lo16 = Imm & 0xffff;
+          unsigned Hi16 = (Imm >> 16) & 0xffff;
+          LO16 = LO16.addImm(Lo16);
+          HI16 = HI16.addImm(Hi16);
+        } else {
+          GlobalValue *GV = MO.getGlobal();
+          unsigned TF = MO.getTargetFlags();
+          LO16 = LO16.addGlobalAddress(GV, MO.getOffset(), TF | ARMII::MO_LO16);
+          HI16 = HI16.addGlobalAddress(GV, MO.getOffset(), TF | ARMII::MO_HI16);
+          // FIXME: What about memoperands?
+        }
+        AddDefaultPred(LO16);
+        AddDefaultPred(HI16);
+      }
+      MI.eraseFromParent();
+      Modified = true;
+    }
+    // FIXME: expand t2MOVi32imm
+    }
+    MBBI = NMBBI;
+  }
+
+  return Modified;
+}
+
+bool ARMExpandPseudo::runOnMachineFunction(MachineFunction &MF) {
+  TII = MF.getTarget().getInstrInfo();
+
+  bool Modified = false;
+  for (MachineFunction::iterator MFI = MF.begin(), E = MF.end(); MFI != E;
+       ++MFI)
+    Modified |= ExpandMBB(*MFI);
+  return Modified;
+}
+
+/// createARMExpandPseudoPass - returns an instance of the pseudo instruction
+/// expansion pass.
+FunctionPass *llvm::createARMExpandPseudoPass() {
+  return new ARMExpandPseudo();
+}
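
    The t2MOVi32imm expansion above leans on a simple identity: MOVW writes
    the low half-word and zero-extends, MOVT overwrites the high half-word,
    so the pair reconstructs any 32-bit immediate. A plain C++ model of the
    arithmetic (not the MachineInstr builder code):

        #include <cassert>
        #include <cstdint>

        static uint32_t movw(uint16_t Lo) {
          return Lo;                                      // zero-extended
        }
        static uint32_t movt(uint32_t Reg, uint16_t Hi) {
          return (Reg & 0xffffu) | ((uint32_t)Hi << 16);  // keep low half
        }

        int main() {
          uint32_t Imm = 0xdeadbeef;
          uint32_t Reg = movw(Imm & 0xffff);
          Reg = movt(Reg, (Imm >> 16) & 0xffff);
          assert(Reg == Imm);
          return 0;
        }
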
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMISelDAGToDAG.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMISelDAGToDAG.cpp
index 53f2282..d63f3e6 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMISelDAGToDAG.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMISelDAGToDAG.cpp
@@ -13,7 +13,6 @@
 
 #include "ARM.h"
 #include "ARMAddressingModes.h"
-#include "ARMConstantPoolValue.h"
 #include "ARMISelLowering.h"
 #include "ARMTargetMachine.h"
 #include "llvm/CallingConv.h"
@@ -59,7 +58,8 @@ public:
     return "ARM Instruction Selection";
   }
 
- /// getI32Imm - Return a target constant with the specified value, of type i32.
+  /// getI32Imm - Return a target constant of type i32 with the specified
+  /// value.
   inline SDValue getI32Imm(unsigned Imm) {
     return CurDAG->getTargetConstant(Imm, MVT::i32);
   }
@@ -81,7 +81,7 @@ public:
   bool SelectAddrMode5(SDValue Op, SDValue N, SDValue &Base,
                        SDValue &Offset);
   bool SelectAddrMode6(SDValue Op, SDValue N, SDValue &Addr, SDValue &Update,
-                       SDValue &Opc);
+                       SDValue &Opc, SDValue &Align);
 
   bool SelectAddrModePC(SDValue Op, SDValue N, SDValue &Offset,
                         SDValue &Label);
@@ -125,17 +125,83 @@ private:
   /// SelectDYN_ALLOC - Select dynamic alloc for Thumb.
   SDNode *SelectDYN_ALLOC(SDValue Op);
 
+  /// SelectVLD - Select NEON load intrinsics.  NumVecs should
+  /// be 2, 3 or 4.  The opcode arrays specify the instructions used for
+  /// loads of D registers and even subregs and odd subregs of Q registers.
+  /// For NumVecs == 2, QOpcodes1 is not used.
+  SDNode *SelectVLD(SDValue Op, unsigned NumVecs, unsigned *DOpcodes,
+                    unsigned *QOpcodes0, unsigned *QOpcodes1);
+
+  /// SelectVST - Select NEON store intrinsics.  NumVecs should
+  /// be 2, 3 or 4.  The opcode arrays specify the instructions used for
+  /// stores of D registers and even subregs and odd subregs of Q registers.
+  /// For NumVecs == 2, QOpcodes1 is not used.
+  SDNode *SelectVST(SDValue Op, unsigned NumVecs, unsigned *DOpcodes,
+                    unsigned *QOpcodes0, unsigned *QOpcodes1);
+
+  /// SelectVLDSTLane - Select NEON load/store lane intrinsics.  NumVecs should
+  /// be 2, 3 or 4.  The opcode arrays specify the instructions used for
+  /// load/store of D registers and even subregs and odd subregs of Q registers.
+  SDNode *SelectVLDSTLane(SDValue Op, bool IsLoad, unsigned NumVecs,
+                          unsigned *DOpcodes, unsigned *QOpcodes0,
+                          unsigned *QOpcodes1);
+
+  /// SelectV6T2BitfieldExtractOp - Select SBFX/UBFX instructions for ARM.
+  SDNode *SelectV6T2BitfieldExtractOp(SDValue Op, unsigned Opc);
+
+  /// SelectCMOVOp - Select CMOV instructions for ARM.
+  SDNode *SelectCMOVOp(SDValue Op);
+  SDNode *SelectT2CMOVShiftOp(SDValue Op, SDValue FalseVal, SDValue TrueVal,
+                              ARMCC::CondCodes CCVal, SDValue CCR,
+                              SDValue InFlag);
+  SDNode *SelectARMCMOVShiftOp(SDValue Op, SDValue FalseVal, SDValue TrueVal,
+                               ARMCC::CondCodes CCVal, SDValue CCR,
+                               SDValue InFlag);
+  SDNode *SelectT2CMOVSoImmOp(SDValue Op, SDValue FalseVal, SDValue TrueVal,
+                              ARMCC::CondCodes CCVal, SDValue CCR,
+                              SDValue InFlag);
+  SDNode *SelectARMCMOVSoImmOp(SDValue Op, SDValue FalseVal, SDValue TrueVal,
+                               ARMCC::CondCodes CCVal, SDValue CCR,
+                               SDValue InFlag);
+
   /// SelectInlineAsmMemoryOperand - Implement addressing mode selection for
   /// inline asm expressions.
   virtual bool SelectInlineAsmMemoryOperand(const SDValue &Op,
                                             char ConstraintCode,
                                             std::vector<SDValue> &OutOps);
+
+  /// PairDRegs - Insert a pair of double registers into an implicit def to
+  /// form a quad register.
+  SDNode *PairDRegs(EVT VT, SDValue V0, SDValue V1);
 };
 }
 
-void ARMDAGToDAGISel::InstructionSelect() {
-  DEBUG(BB->dump());
+/// isInt32Immediate - This method tests to see if the node is a 32-bit constant
+/// operand. If so Imm will receive the 32-bit value.
+static bool isInt32Immediate(SDNode *N, unsigned &Imm) {
+  if (N->getOpcode() == ISD::Constant && N->getValueType(0) == MVT::i32) {
+    Imm = cast<ConstantSDNode>(N)->getZExtValue();
+    return true;
+  }
+  return false;
+}
 
+// isInt32Immediate - This method tests to see if the value is a 32-bit
+// constant operand. If so Imm will receive the 32-bit value.
+static bool isInt32Immediate(SDValue N, unsigned &Imm) {
+  return isInt32Immediate(N.getNode(), Imm);
+}
+
+// isOpcWithIntImmediate - This method tests to see if the node has a specific
+// opcode and an immediate integer right operand.
+// If so Imm will receive the 32-bit value.
+static bool isOpcWithIntImmediate(SDNode *N, unsigned Opc, unsigned& Imm) {
+  return N->getOpcode() == Opc &&
+         isInt32Immediate(N->getOperand(1).getNode(), Imm);
+}
+
+
+void ARMDAGToDAGISel::InstructionSelect() {
   SelectRoot(*CurDAG);
   CurDAG->RemoveDeadNodes();
 }
@@ -195,7 +261,9 @@ bool ARMDAGToDAGISel::SelectAddrMode2(SDValue Op, SDValue N,
     if (N.getOpcode() == ISD::FrameIndex) {
       int FI = cast<FrameIndexSDNode>(N)->getIndex();
       Base = CurDAG->getTargetFrameIndex(FI, TLI.getPointerTy());
-    } else if (N.getOpcode() == ARMISD::Wrapper) {
+    } else if (N.getOpcode() == ARMISD::Wrapper &&
+               !(Subtarget->useMovt() &&
+                 N.getOperand(0).getOpcode() == ISD::TargetGlobalAddress)) {
       Base = N.getOperand(0);
     }
     Offset = CurDAG->getRegister(0, MVT::i32);
@@ -230,7 +298,7 @@ bool ARMDAGToDAGISel::SelectAddrMode2(SDValue Op, SDValue N,
       }
     }
 
-  // Otherwise this is R +/- [possibly shifted] R
+  // Otherwise this is R +/- [possibly shifted] R.
   ARM_AM::AddrOpc AddSub = N.getOpcode() == ISD::ADD ? ARM_AM::add:ARM_AM::sub;
   ARM_AM::ShiftOpc ShOpcVal = ARM_AM::getShiftOpcForNode(N.getOperand(1));
   unsigned ShAmt = 0;
@@ -397,7 +465,9 @@ bool ARMDAGToDAGISel::SelectAddrMode5(SDValue Op, SDValue N,
     if (N.getOpcode() == ISD::FrameIndex) {
       int FI = cast<FrameIndexSDNode>(N)->getIndex();
       Base = CurDAG->getTargetFrameIndex(FI, TLI.getPointerTy());
-    } else if (N.getOpcode() == ARMISD::Wrapper) {
+    } else if (N.getOpcode() == ARMISD::Wrapper &&
+               !(Subtarget->useMovt() &&
+                 N.getOperand(0).getOpcode() == ISD::TargetGlobalAddress)) {
       Base = N.getOperand(0);
     }
     Offset = CurDAG->getTargetConstant(ARM_AM::getAM5Opc(ARM_AM::add, 0),
@@ -438,11 +508,13 @@ bool ARMDAGToDAGISel::SelectAddrMode5(SDValue Op, SDValue N,
 
 bool ARMDAGToDAGISel::SelectAddrMode6(SDValue Op, SDValue N,
                                       SDValue &Addr, SDValue &Update,
-                                      SDValue &Opc) {
+                                      SDValue &Opc, SDValue &Align) {
   Addr = N;
-  // The optional writeback is handled in ARMLoadStoreOpt.
+  // Default to no writeback.
   Update = CurDAG->getRegister(0, MVT::i32);
   Opc = CurDAG->getTargetConstant(ARM_AM::getAM6Opc(false), MVT::i32);
+  // Default to no alignment.
+  Align = CurDAG->getTargetConstant(0, MVT::i32);
   return true;
 }
 
@@ -490,7 +562,13 @@ ARMDAGToDAGISel::SelectThumbAddrModeRI5(SDValue Op, SDValue N,
   }
 
   if (N.getOpcode() != ISD::ADD) {
-    Base = (N.getOpcode() == ARMISD::Wrapper) ? N.getOperand(0) : N;
+    if (N.getOpcode() == ARMISD::Wrapper &&
+        !(Subtarget->useMovt() &&
+          N.getOperand(0).getOpcode() == ISD::TargetGlobalAddress)) {
+      Base = N.getOperand(0);
+    } else
+      Base = N;
+
     Offset = CurDAG->getRegister(0, MVT::i32);
     OffImm = CurDAG->getTargetConstant(0, MVT::i32);
     return true;
@@ -613,7 +691,9 @@ bool ARMDAGToDAGISel::SelectT2AddrModeImm12(SDValue Op, SDValue N,
       Base = CurDAG->getTargetFrameIndex(FI, TLI.getPointerTy());
       OffImm  = CurDAG->getTargetConstant(0, MVT::i32);
       return true;
-    } else if (N.getOpcode() == ARMISD::Wrapper) {
+    } else if (N.getOpcode() == ARMISD::Wrapper &&
+               !(Subtarget->useMovt() &&
+                 N.getOperand(0).getOpcode() == ISD::TargetGlobalAddress)) {
       Base = N.getOperand(0);
       if (Base.getOpcode() == ISD::TargetConstantPool)
         return false;  // We want to select t2LDRpci instead.
@@ -923,6 +1003,520 @@ SDNode *ARMDAGToDAGISel::SelectDYN_ALLOC(SDValue Op) {
   return 0;
 }
 
+/// PairDRegs - Insert a pair of double registers into an implicit def to
+/// form a quad register.
+SDNode *ARMDAGToDAGISel::PairDRegs(EVT VT, SDValue V0, SDValue V1) {
+  DebugLoc dl = V0.getNode()->getDebugLoc();
+  SDValue Undef =
+    SDValue(CurDAG->getMachineNode(TargetInstrInfo::IMPLICIT_DEF, dl, VT), 0);
+  SDValue SubReg0 = CurDAG->getTargetConstant(ARM::DSUBREG_0, MVT::i32);
+  SDValue SubReg1 = CurDAG->getTargetConstant(ARM::DSUBREG_1, MVT::i32);
+  SDNode *Pair = CurDAG->getMachineNode(TargetInstrInfo::INSERT_SUBREG, dl,
+                                        VT, Undef, V0, SubReg0);
+  return CurDAG->getMachineNode(TargetInstrInfo::INSERT_SUBREG, dl,
+                                VT, SDValue(Pair, 0), V1, SubReg1);
+}
+
+/// GetNEONSubregVT - Given a type for a 128-bit NEON vector, return the type
+/// for a 64-bit subregister of the vector.
+static EVT GetNEONSubregVT(EVT VT) {
+  switch (VT.getSimpleVT().SimpleTy) {
+  default: llvm_unreachable("unhandled NEON type");
+  case MVT::v16i8: return MVT::v8i8;
+  case MVT::v8i16: return MVT::v4i16;
+  case MVT::v4f32: return MVT::v2f32;
+  case MVT::v4i32: return MVT::v2i32;
+  case MVT::v2i64: return MVT::v1i64;
+  }
+}
+
+SDNode *ARMDAGToDAGISel::SelectVLD(SDValue Op, unsigned NumVecs,
+                                   unsigned *DOpcodes, unsigned *QOpcodes0,
+                                   unsigned *QOpcodes1) {
+  assert(NumVecs >= 2 && NumVecs <= 4 && "VLD NumVecs out-of-range");
+  SDNode *N = Op.getNode();
+  DebugLoc dl = N->getDebugLoc();
+
+  SDValue MemAddr, MemUpdate, MemOpc, Align;
+  if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc, Align))
+    return NULL;
+
+  SDValue Chain = N->getOperand(0);
+  EVT VT = N->getValueType(0);
+  bool is64BitVector = VT.is64BitVector();
+
+  unsigned OpcodeIndex;
+  switch (VT.getSimpleVT().SimpleTy) {
+  default: llvm_unreachable("unhandled vld type");
+    // Double-register operations:
+  case MVT::v8i8:  OpcodeIndex = 0; break;
+  case MVT::v4i16: OpcodeIndex = 1; break;
+  case MVT::v2f32:
+  case MVT::v2i32: OpcodeIndex = 2; break;
+  case MVT::v1i64: OpcodeIndex = 3; break;
+    // Quad-register operations:
+  case MVT::v16i8: OpcodeIndex = 0; break;
+  case MVT::v8i16: OpcodeIndex = 1; break;
+  case MVT::v4f32:
+  case MVT::v4i32: OpcodeIndex = 2; break;
+  }
+
+  SDValue Pred = CurDAG->getTargetConstant(14, MVT::i32);
+  SDValue PredReg = CurDAG->getRegister(0, MVT::i32);
+  if (is64BitVector) {
+    unsigned Opc = DOpcodes[OpcodeIndex];
+    const SDValue Ops[] = { MemAddr, MemUpdate, MemOpc, Align,
+                            Pred, PredReg, Chain };
+    std::vector<EVT> ResTys(NumVecs, VT);
+    ResTys.push_back(MVT::Other);
+    return CurDAG->getMachineNode(Opc, dl, ResTys, Ops, 7);
+  }
+
+  EVT RegVT = GetNEONSubregVT(VT);
+  if (NumVecs == 2) {
+    // Quad registers are directly supported for VLD2,
+    // loading 2 pairs of D regs.
+    unsigned Opc = QOpcodes0[OpcodeIndex];
+    const SDValue Ops[] = { MemAddr, MemUpdate, MemOpc, Align,
+                            Pred, PredReg, Chain };
+    std::vector<EVT> ResTys(4, VT);
+    ResTys.push_back(MVT::Other);
+    SDNode *VLd = CurDAG->getMachineNode(Opc, dl, ResTys, Ops, 7);
+    Chain = SDValue(VLd, 4);
+
+    // Combine the even and odd subregs to produce the result.
+    for (unsigned Vec = 0; Vec < NumVecs; ++Vec) {
+      SDNode *Q = PairDRegs(VT, SDValue(VLd, 2*Vec), SDValue(VLd, 2*Vec+1));
+      ReplaceUses(SDValue(N, Vec), SDValue(Q, 0));
+    }
+  } else {
+    // Otherwise, quad registers are loaded with two separate instructions,
+    // where one loads the even registers and the other loads the odd registers.
+
+    // Enable writeback to the address register.
+    MemOpc = CurDAG->getTargetConstant(ARM_AM::getAM6Opc(true), MVT::i32);
+
+    std::vector<EVT> ResTys(NumVecs, RegVT);
+    ResTys.push_back(MemAddr.getValueType());
+    ResTys.push_back(MVT::Other);
+
+    // Load the even subregs.
+    unsigned Opc = QOpcodes0[OpcodeIndex];
+    const SDValue OpsA[] = { MemAddr, MemUpdate, MemOpc, Align,
+                             Pred, PredReg, Chain };
+    SDNode *VLdA = CurDAG->getMachineNode(Opc, dl, ResTys, OpsA, 7);
+    Chain = SDValue(VLdA, NumVecs+1);
+
+    // Load the odd subregs.
+    Opc = QOpcodes1[OpcodeIndex];
+    const SDValue OpsB[] = { SDValue(VLdA, NumVecs), MemUpdate, MemOpc,
+                             Align, Pred, PredReg, Chain };
+    SDNode *VLdB = CurDAG->getMachineNode(Opc, dl, ResTys, OpsB, 7);
+    Chain = SDValue(VLdB, NumVecs+1);
+
+    // Combine the even and odd subregs to produce the result.
+    for (unsigned Vec = 0; Vec < NumVecs; ++Vec) {
+      SDNode *Q = PairDRegs(VT, SDValue(VLdA, Vec), SDValue(VLdB, Vec));
+      ReplaceUses(SDValue(N, Vec), SDValue(Q, 0));
+    }
+  }
+  ReplaceUses(SDValue(N, NumVecs), Chain);
+  return NULL;
+}
+
+SDNode *ARMDAGToDAGISel::SelectVST(SDValue Op, unsigned NumVecs,
+                                   unsigned *DOpcodes, unsigned *QOpcodes0,
+                                   unsigned *QOpcodes1) {
+  assert(NumVecs >= 2 && NumVecs <= 4 && "VST NumVecs out-of-range");
+  SDNode *N = Op.getNode();
+  DebugLoc dl = N->getDebugLoc();
+
+  SDValue MemAddr, MemUpdate, MemOpc, Align;
+  if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc, Align))
+    return NULL;
+
+  SDValue Chain = N->getOperand(0);
+  EVT VT = N->getOperand(3).getValueType();
+  bool is64BitVector = VT.is64BitVector();
+
+  unsigned OpcodeIndex;
+  switch (VT.getSimpleVT().SimpleTy) {
+  default: llvm_unreachable("unhandled vst type");
+    // Double-register operations:
+  case MVT::v8i8:  OpcodeIndex = 0; break;
+  case MVT::v4i16: OpcodeIndex = 1; break;
+  case MVT::v2f32:
+  case MVT::v2i32: OpcodeIndex = 2; break;
+  case MVT::v1i64: OpcodeIndex = 3; break;
+    // Quad-register operations:
+  case MVT::v16i8: OpcodeIndex = 0; break;
+  case MVT::v8i16: OpcodeIndex = 1; break;
+  case MVT::v4f32:
+  case MVT::v4i32: OpcodeIndex = 2; break;
+  }
+
+  SDValue Pred = CurDAG->getTargetConstant(14, MVT::i32);
+  SDValue PredReg = CurDAG->getRegister(0, MVT::i32);
+
+  SmallVector<SDValue, 8> Ops;
+  Ops.push_back(MemAddr);
+  Ops.push_back(MemUpdate);
+  Ops.push_back(MemOpc);
+  Ops.push_back(Align);
+
+  if (is64BitVector) {
+    unsigned Opc = DOpcodes[OpcodeIndex];
+    for (unsigned Vec = 0; Vec < NumVecs; ++Vec)
+      Ops.push_back(N->getOperand(Vec+3));
+    Ops.push_back(Pred);
+    Ops.push_back(PredReg);
+    Ops.push_back(Chain);
+    return CurDAG->getMachineNode(Opc, dl, MVT::Other, Ops.data(), NumVecs+7);
+  }
+
+  EVT RegVT = GetNEONSubregVT(VT);
+  if (NumVecs == 2) {
+    // Quad registers are directly supported for VST2,
+    // storing 2 pairs of D regs.
+    unsigned Opc = QOpcodes0[OpcodeIndex];
+    for (unsigned Vec = 0; Vec < NumVecs; ++Vec) {
+      Ops.push_back(CurDAG->getTargetExtractSubreg(ARM::DSUBREG_0, dl, RegVT,
+                                                   N->getOperand(Vec+3)));
+      Ops.push_back(CurDAG->getTargetExtractSubreg(ARM::DSUBREG_1, dl, RegVT,
+                                                   N->getOperand(Vec+3)));
+    }
+    Ops.push_back(Pred);
+    Ops.push_back(PredReg);
+    Ops.push_back(Chain);
+    return CurDAG->getMachineNode(Opc, dl, MVT::Other, Ops.data(), 11);
+  }
+
+  // Otherwise, quad registers are stored with two separate instructions,
+  // where one stores the even registers and the other stores the odd registers.
+
+  // Enable writeback to the address register.
+  MemOpc = CurDAG->getTargetConstant(ARM_AM::getAM6Opc(true), MVT::i32);
+
+  // Store the even subregs.
+  for (unsigned Vec = 0; Vec < NumVecs; ++Vec)
+    Ops.push_back(CurDAG->getTargetExtractSubreg(ARM::DSUBREG_0, dl, RegVT,
+                                                 N->getOperand(Vec+3)));
+  Ops.push_back(Pred);
+  Ops.push_back(PredReg);
+  Ops.push_back(Chain);
+  unsigned Opc = QOpcodes0[OpcodeIndex];
+  SDNode *VStA = CurDAG->getMachineNode(Opc, dl, MemAddr.getValueType(),
+                                        MVT::Other, Ops.data(), NumVecs+7);
+  Chain = SDValue(VStA, 1);
+
+  // Store the odd subregs.
+  Ops[0] = SDValue(VStA, 0); // MemAddr
+  for (unsigned Vec = 0; Vec < NumVecs; ++Vec)
+    Ops[Vec+4] = CurDAG->getTargetExtractSubreg(ARM::DSUBREG_1, dl, RegVT,
+                                                N->getOperand(Vec+3));
+  Ops[NumVecs+4] = Pred;
+  Ops[NumVecs+5] = PredReg;
+  Ops[NumVecs+6] = Chain;
+  Opc = QOpcodes1[OpcodeIndex];
+  SDNode *VStB = CurDAG->getMachineNode(Opc, dl, MemAddr.getValueType(),
+                                        MVT::Other, Ops.data(), NumVecs+7);
+  Chain = SDValue(VStB, 1);
+  ReplaceUses(SDValue(N, 0), Chain);
+  return NULL;
+}
+
+SDNode *ARMDAGToDAGISel::SelectVLDSTLane(SDValue Op, bool IsLoad,
+                                         unsigned NumVecs, unsigned *DOpcodes,
+                                         unsigned *QOpcodes0,
+                                         unsigned *QOpcodes1) {
+  assert(NumVecs >= 2 && NumVecs <= 4 && "VLDSTLane NumVecs out-of-range");
+  SDNode *N = Op.getNode();
+  DebugLoc dl = N->getDebugLoc();
+
+  SDValue MemAddr, MemUpdate, MemOpc, Align;
+  if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc, Align))
+    return NULL;
+
+  SDValue Chain = N->getOperand(0);
+  unsigned Lane =
+    cast<ConstantSDNode>(N->getOperand(NumVecs+3))->getZExtValue();
+  EVT VT = IsLoad ? N->getValueType(0) : N->getOperand(3).getValueType();
+  bool is64BitVector = VT.is64BitVector();
+
+  // Quad registers are handled by load/store of subregs. Find the subreg info.
+  unsigned NumElts = 0;
+  int SubregIdx = 0;
+  EVT RegVT = VT;
+  if (!is64BitVector) {
+    RegVT = GetNEONSubregVT(VT);
+    NumElts = RegVT.getVectorNumElements();
+    SubregIdx = (Lane < NumElts) ? ARM::DSUBREG_0 : ARM::DSUBREG_1;
+  }
+
+  unsigned OpcodeIndex;
+  switch (VT.getSimpleVT().SimpleTy) {
+  default: llvm_unreachable("unhandled vld/vst lane type");
+    // Double-register operations:
+  case MVT::v8i8:  OpcodeIndex = 0; break;
+  case MVT::v4i16: OpcodeIndex = 1; break;
+  case MVT::v2f32:
+  case MVT::v2i32: OpcodeIndex = 2; break;
+    // Quad-register operations:
+  case MVT::v8i16: OpcodeIndex = 0; break;
+  case MVT::v4f32:
+  case MVT::v4i32: OpcodeIndex = 1; break;
+  }
+
+  SDValue Pred = CurDAG->getTargetConstant(14, MVT::i32);
+  SDValue PredReg = CurDAG->getRegister(0, MVT::i32);
+
+  SmallVector<SDValue, 9> Ops;
+  Ops.push_back(MemAddr);
+  Ops.push_back(MemUpdate);
+  Ops.push_back(MemOpc);
+  Ops.push_back(Align);
+
+  unsigned Opc = 0;
+  if (is64BitVector) {
+    Opc = DOpcodes[OpcodeIndex];
+    for (unsigned Vec = 0; Vec < NumVecs; ++Vec)
+      Ops.push_back(N->getOperand(Vec+3));
+  } else {
+    // Check if this is loading the even or odd subreg of a Q register.
+    if (Lane < NumElts) {
+      Opc = QOpcodes0[OpcodeIndex];
+    } else {
+      Lane -= NumElts;
+      Opc = QOpcodes1[OpcodeIndex];
+    }
+    // Extract the subregs of the input vector.
+    for (unsigned Vec = 0; Vec < NumVecs; ++Vec)
+      Ops.push_back(CurDAG->getTargetExtractSubreg(SubregIdx, dl, RegVT,
+                                                   N->getOperand(Vec+3)));
+  }
+  Ops.push_back(getI32Imm(Lane));
+  Ops.push_back(Pred);
+  Ops.push_back(PredReg);
+  Ops.push_back(Chain);
+
+  if (!IsLoad)
+    return CurDAG->getMachineNode(Opc, dl, MVT::Other, Ops.data(), NumVecs+7);
+
+  std::vector<EVT> ResTys(NumVecs, RegVT);
+  ResTys.push_back(MVT::Other);
+  SDNode *VLdLn =
+    CurDAG->getMachineNode(Opc, dl, ResTys, Ops.data(), NumVecs+7);
+  // For a 64-bit vector load to D registers, nothing more needs to be done.
+  if (is64BitVector)
+    return VLdLn;
+
+  // For 128-bit vectors, take the 64-bit results of the load and insert them
+  // as subregs into the result.
+  for (unsigned Vec = 0; Vec < NumVecs; ++Vec) {
+    SDValue QuadVec = CurDAG->getTargetInsertSubreg(SubregIdx, dl, VT,
+                                                    N->getOperand(Vec+3),
+                                                    SDValue(VLdLn, Vec));
+    ReplaceUses(SDValue(N, Vec), QuadVec);
+  }
+
+  Chain = SDValue(VLdLn, NumVecs);
+  ReplaceUses(SDValue(N, NumVecs), Chain);
+  return NULL;
+}
+
+SDNode *ARMDAGToDAGISel::SelectV6T2BitfieldExtractOp(SDValue Op,
+                                                     unsigned Opc) {
+  if (!Subtarget->hasV6T2Ops())
+    return NULL;
+
+  unsigned Shl_imm = 0;
+  if (isOpcWithIntImmediate(Op.getOperand(0).getNode(), ISD::SHL, Shl_imm)) {
+    assert(Shl_imm > 0 && Shl_imm < 32 && "bad amount in shift node!");
+    unsigned Srl_imm = 0;
+    if (isInt32Immediate(Op.getOperand(1), Srl_imm)) {
+      assert(Srl_imm > 0 && Srl_imm < 32 && "bad amount in shift node!");
+      unsigned Width = 32 - Srl_imm;
+      int LSB = Srl_imm - Shl_imm;
+      if (LSB < 0)
+        return NULL;
+      SDValue Reg0 = CurDAG->getRegister(0, MVT::i32);
+      SDValue Ops[] = { Op.getOperand(0).getOperand(0),
+                        CurDAG->getTargetConstant(LSB, MVT::i32),
+                        CurDAG->getTargetConstant(Width, MVT::i32),
+                        getAL(CurDAG), Reg0 };
+      return CurDAG->SelectNodeTo(Op.getNode(), Opc, MVT::i32, Ops, 5);
+    }
+  }
+  return NULL;
+}
+
+SDNode *ARMDAGToDAGISel::
+SelectT2CMOVShiftOp(SDValue Op, SDValue FalseVal, SDValue TrueVal,
+                    ARMCC::CondCodes CCVal, SDValue CCR, SDValue InFlag) {
+  SDValue CPTmp0;
+  SDValue CPTmp1;
+  if (SelectT2ShifterOperandReg(Op, TrueVal, CPTmp0, CPTmp1)) {
+    unsigned SOVal = cast<ConstantSDNode>(CPTmp1)->getZExtValue();
+    unsigned SOShOp = ARM_AM::getSORegShOp(SOVal);
+    unsigned Opc = 0;
+    switch (SOShOp) {
+    case ARM_AM::lsl: Opc = ARM::t2MOVCClsl; break;
+    case ARM_AM::lsr: Opc = ARM::t2MOVCClsr; break;
+    case ARM_AM::asr: Opc = ARM::t2MOVCCasr; break;
+    case ARM_AM::ror: Opc = ARM::t2MOVCCror; break;
+    default:
+      llvm_unreachable("Unknown so_reg opcode!");
+      break;
+    }
+    SDValue SOShImm =
+      CurDAG->getTargetConstant(ARM_AM::getSORegOffset(SOVal), MVT::i32);
+    SDValue CC = CurDAG->getTargetConstant(CCVal, MVT::i32);
+    SDValue Ops[] = { FalseVal, CPTmp0, SOShImm, CC, CCR, InFlag };
+    return CurDAG->SelectNodeTo(Op.getNode(), Opc, MVT::i32,Ops, 6);
+  }
+  return 0;
+}
+
+SDNode *ARMDAGToDAGISel::
+SelectARMCMOVShiftOp(SDValue Op, SDValue FalseVal, SDValue TrueVal,
+                     ARMCC::CondCodes CCVal, SDValue CCR, SDValue InFlag) {
+  SDValue CPTmp0;
+  SDValue CPTmp1;
+  SDValue CPTmp2;
+  if (SelectShifterOperandReg(Op, TrueVal, CPTmp0, CPTmp1, CPTmp2)) {
+    SDValue CC = CurDAG->getTargetConstant(CCVal, MVT::i32);
+    SDValue Ops[] = { FalseVal, CPTmp0, CPTmp1, CPTmp2, CC, CCR, InFlag };
+    return CurDAG->SelectNodeTo(Op.getNode(), ARM::MOVCCs, MVT::i32, Ops, 7);
+  }
+  return 0;
+}
+
+SDNode *ARMDAGToDAGISel::
+SelectT2CMOVSoImmOp(SDValue Op, SDValue FalseVal, SDValue TrueVal,
+                    ARMCC::CondCodes CCVal, SDValue CCR, SDValue InFlag) {
+  ConstantSDNode *T = dyn_cast<ConstantSDNode>(TrueVal);
+  if (!T)
+    return 0;
+
+  if (Predicate_t2_so_imm(TrueVal.getNode())) {
+    SDValue True = CurDAG->getTargetConstant(T->getZExtValue(), MVT::i32);
+    SDValue CC = CurDAG->getTargetConstant(CCVal, MVT::i32);
+    SDValue Ops[] = { FalseVal, True, CC, CCR, InFlag };
+    return CurDAG->SelectNodeTo(Op.getNode(),
+                                ARM::t2MOVCCi, MVT::i32, Ops, 5);
+  }
+  return 0;
+}
+
+SDNode *ARMDAGToDAGISel::
+SelectARMCMOVSoImmOp(SDValue Op, SDValue FalseVal, SDValue TrueVal,
+                     ARMCC::CondCodes CCVal, SDValue CCR, SDValue InFlag) {
+  ConstantSDNode *T = dyn_cast<ConstantSDNode>(TrueVal);
+  if (!T)
+    return 0;
+
+  if (Predicate_so_imm(TrueVal.getNode())) {
+    SDValue True = CurDAG->getTargetConstant(T->getZExtValue(), MVT::i32);
+    SDValue CC = CurDAG->getTargetConstant(CCVal, MVT::i32);
+    SDValue Ops[] = { FalseVal, True, CC, CCR, InFlag };
+    return CurDAG->SelectNodeTo(Op.getNode(),
+                                ARM::MOVCCi, MVT::i32, Ops, 5);
+  }
+  return 0;
+}
+
+SDNode *ARMDAGToDAGISel::SelectCMOVOp(SDValue Op) {
+  EVT VT = Op.getValueType();
+  SDValue FalseVal = Op.getOperand(0);
+  SDValue TrueVal  = Op.getOperand(1);
+  SDValue CC = Op.getOperand(2);
+  SDValue CCR = Op.getOperand(3);
+  SDValue InFlag = Op.getOperand(4);
+  assert(CC.getOpcode() == ISD::Constant);
+  assert(CCR.getOpcode() == ISD::Register);
+  ARMCC::CondCodes CCVal =
+    (ARMCC::CondCodes)cast<ConstantSDNode>(CC)->getZExtValue();
+
+  if (!Subtarget->isThumb1Only() && VT == MVT::i32) {
+    // Pattern: (ARMcmov:i32 GPR:i32:$false, so_reg:i32:$true, (imm:i32):$cc)
+    // Emits: (MOVCCs:i32 GPR:i32:$false, so_reg:i32:$true, (imm:i32):$cc)
+    // Pattern complexity = 18  cost = 1  size = 0
+    SDValue CPTmp0;
+    SDValue CPTmp1;
+    SDValue CPTmp2;
+    if (Subtarget->isThumb()) {
+      SDNode *Res = SelectT2CMOVShiftOp(Op, FalseVal, TrueVal,
+                                        CCVal, CCR, InFlag);
+      if (!Res)
+        Res = SelectT2CMOVShiftOp(Op, TrueVal, FalseVal,
+                               ARMCC::getOppositeCondition(CCVal), CCR, InFlag);
+      if (Res)
+        return Res;
+    } else {
+      SDNode *Res = SelectARMCMOVShiftOp(Op, FalseVal, TrueVal,
+                                         CCVal, CCR, InFlag);
+      if (!Res)
+        Res = SelectARMCMOVShiftOp(Op, TrueVal, FalseVal,
+                               ARMCC::getOppositeCondition(CCVal), CCR, InFlag);
+      if (Res)
+        return Res;
+    }
+
+    // Pattern: (ARMcmov:i32 GPR:i32:$false,
+    //             (imm:i32)<<P:Predicate_so_imm>>:$true,
+    //             (imm:i32):$cc)
+    // Emits: (MOVCCi:i32 GPR:i32:$false,
+    //           (so_imm:i32 (imm:i32):$true), (imm:i32):$cc)
+    // Pattern complexity = 10  cost = 1  size = 0
+    if (Subtarget->isThumb()) {
+      SDNode *Res = SelectT2CMOVSoImmOp(Op, FalseVal, TrueVal,
+                                        CCVal, CCR, InFlag);
+      if (!Res)
+        Res = SelectT2CMOVSoImmOp(Op, TrueVal, FalseVal,
+                               ARMCC::getOppositeCondition(CCVal), CCR, InFlag);
+      if (Res)
+        return Res;
+    } else {
+      SDNode *Res = SelectARMCMOVSoImmOp(Op, FalseVal, TrueVal,
+                                         CCVal, CCR, InFlag);
+      if (!Res)
+        Res = SelectARMCMOVSoImmOp(Op, TrueVal, FalseVal,
+                               ARMCC::getOppositeCondition(CCVal), CCR, InFlag);
+      if (Res)
+        return Res;
+    }
+  }
+
+  // Pattern: (ARMcmov:i32 GPR:i32:$false, GPR:i32:$true, (imm:i32):$cc)
+  // Emits: (MOVCCr:i32 GPR:i32:$false, GPR:i32:$true, (imm:i32):$cc)
+  // Pattern complexity = 6  cost = 1  size = 0
+  //
+  // Pattern: (ARMcmov:i32 GPR:i32:$false, GPR:i32:$true, (imm:i32):$cc)
+  // Emits: (tMOVCCr:i32 GPR:i32:$false, GPR:i32:$true, (imm:i32):$cc)
+  // Pattern complexity = 6  cost = 11  size = 0
+  //
+  // Also VMOVScc and VMOVDcc.
+  SDValue Tmp2 = CurDAG->getTargetConstant(CCVal, MVT::i32);
+  SDValue Ops[] = { FalseVal, TrueVal, Tmp2, CCR, InFlag };
+  unsigned Opc = 0;
+  switch (VT.getSimpleVT().SimpleTy) {
+  default: assert(false && "Illegal conditional move type!");
+    break;
+  case MVT::i32:
+    Opc = Subtarget->isThumb()
+      ? (Subtarget->hasThumb2() ? ARM::t2MOVCCr : ARM::tMOVCCr_pseudo)
+      : ARM::MOVCCr;
+    break;
+  case MVT::f32:
+    Opc = ARM::VMOVScc;
+    break;
+  case MVT::f64:
+    Opc = ARM::VMOVDcc;
+    break;
+  }
+  return CurDAG->SelectNodeTo(Op.getNode(), Opc, VT, Ops, 5);
+}
+
 SDNode *ARMDAGToDAGISel::Select(SDValue Op) {
   SDNode *N = Op.getNode();
   DebugLoc dl = N->getDebugLoc();
@@ -958,7 +1552,7 @@ SDNode *ARMDAGToDAGISel::Select(SDValue Op) {
 
       SDNode *ResNode;
       if (Subtarget->isThumb1Only()) {
-        SDValue Pred = CurDAG->getTargetConstant(0xEULL, MVT::i32);
+        SDValue Pred = CurDAG->getTargetConstant(14, MVT::i32);
         SDValue PredReg = CurDAG->getRegister(0, MVT::i32);
         SDValue Ops[] = { CPIdx, Pred, PredReg, CurDAG->getEntryNode() };
         ResNode = CurDAG->getMachineNode(ARM::tLDRcp, dl, MVT::i32, MVT::Other,
@@ -1000,6 +1594,16 @@ SDNode *ARMDAGToDAGISel::Select(SDValue Op) {
   }
   case ARMISD::DYN_ALLOC:
     return SelectDYN_ALLOC(Op);
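+  // A shift whose operand is itself a shift can often be selected as a
+  // single v6T2 bitfield extract; e.g. (srl (shl x, 16), 24) reads the
+  // 8-bit field of x starting at bit 8, which is exactly UBFX x, #8, #8
+  // (an illustrative case; the matching lives in
+  // SelectV6T2BitfieldExtractOp).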
+  case ISD::SRL:
+    if (SDNode *I = SelectV6T2BitfieldExtractOp(Op,
+                      Subtarget->isThumb() ? ARM::t2UBFX : ARM::UBFX))
+      return I;
+    break;
+  case ISD::SRA:
+    if (SDNode *I = SelectV6T2BitfieldExtractOp(Op,
+                      Subtarget->isThumb() ? ARM::t2SBFX : ARM::SBFX))
+      return I;
+    break;
   case ISD::MUL:
     if (Subtarget->isThumb1Only())
       break;
@@ -1040,8 +1644,45 @@ SDNode *ARMDAGToDAGISel::Select(SDValue Op) {
       }
     }
     break;
-  case ARMISD::FMRRD:
-    return CurDAG->getMachineNode(ARM::FMRRD, dl, MVT::i32, MVT::i32,
+  case ISD::AND: {
+    // Match (and (or x, c2), c1) where the top 16 bits of c1 and c2 agree,
+    // the low 16 bits of c1 are 0xffff, and the low 16 bits of c2 are 0.
+    // That is, the top 16 bits come entirely from c2 and the low 16 bits
+    // entirely from x, which equals (or (and x, 0xffff), (and c2, 0xffff0000)).
+    // Select it to: "movt x, ((c2 & 0xffff0000) >> 16)".
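+    // Illustrative example: with c1 = 0x5678ffff and c2 = 0x56780000 the
+    // checks below succeed and this selects "movt x, #0x5678", replacing
+    // only the top halfword of x.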
+    EVT VT = Op.getValueType();
+    if (VT != MVT::i32)
+      break;
+    unsigned Opc = (Subtarget->isThumb() && Subtarget->hasThumb2())
+      ? ARM::t2MOVTi16
+      : (Subtarget->hasV6T2Ops() ? ARM::MOVTi16 : 0);
+    if (!Opc)
+      break;
+    SDValue N0 = Op.getOperand(0), N1 = Op.getOperand(1);
+    ConstantSDNode *N1C = dyn_cast<ConstantSDNode>(N1);
+    if (!N1C)
+      break;
+    if (N0.getOpcode() == ISD::OR && N0.getNode()->hasOneUse()) {
+      SDValue N2 = N0.getOperand(1);
+      ConstantSDNode *N2C = dyn_cast<ConstantSDNode>(N2);
+      if (!N2C)
+        break;
+      unsigned N1CVal = N1C->getZExtValue();
+      unsigned N2CVal = N2C->getZExtValue();
+      if ((N1CVal & 0xffff0000U) == (N2CVal & 0xffff0000U) &&
+          (N1CVal & 0xffffU) == 0xffffU &&
+          (N2CVal & 0xffffU) == 0x0U) {
+        SDValue Imm16 = CurDAG->getTargetConstant((N2CVal & 0xFFFF0000U) >> 16,
+                                                  MVT::i32);
+        SDValue Ops[] = { N0.getOperand(0), Imm16,
+                          getAL(CurDAG), CurDAG->getRegister(0, MVT::i32) };
+        return CurDAG->getMachineNode(Opc, dl, VT, Ops, 4);
+      }
+    }
+    break;
+  }
+  case ARMISD::VMOVRRD:
+    return CurDAG->getMachineNode(ARM::VMOVRRD, dl, MVT::i32, MVT::i32,
                                   Op.getOperand(0), getAL(CurDAG),
                                   CurDAG->getRegister(0, MVT::i32));
   case ISD::UMUL_LOHI: {
@@ -1119,125 +1760,12 @@ SDNode *ARMDAGToDAGISel::Select(SDValue Op) {
       InFlag = SDValue(ResNode, 1);
       ReplaceUses(SDValue(Op.getNode(), 1), InFlag);
     }
-    ReplaceUses(SDValue(Op.getNode(), 0), SDValue(Chain.getNode(), Chain.getResNo()));
+    ReplaceUses(SDValue(Op.getNode(), 0),
+                SDValue(Chain.getNode(), Chain.getResNo()));
     return NULL;
   }
-  case ARMISD::CMOV: {
-    EVT VT = Op.getValueType();
-    SDValue N0 = Op.getOperand(0);
-    SDValue N1 = Op.getOperand(1);
-    SDValue N2 = Op.getOperand(2);
-    SDValue N3 = Op.getOperand(3);
-    SDValue InFlag = Op.getOperand(4);
-    assert(N2.getOpcode() == ISD::Constant);
-    assert(N3.getOpcode() == ISD::Register);
-
-    if (!Subtarget->isThumb1Only() && VT == MVT::i32) {
-      // Pattern: (ARMcmov:i32 GPR:i32:$false, so_reg:i32:$true, (imm:i32):$cc)
-      // Emits: (MOVCCs:i32 GPR:i32:$false, so_reg:i32:$true, (imm:i32):$cc)
-      // Pattern complexity = 18  cost = 1  size = 0
-      SDValue CPTmp0;
-      SDValue CPTmp1;
-      SDValue CPTmp2;
-      if (Subtarget->isThumb()) {
-        if (SelectT2ShifterOperandReg(Op, N1, CPTmp0, CPTmp1)) {
-          unsigned SOVal = cast<ConstantSDNode>(CPTmp1)->getZExtValue();
-          unsigned SOShOp = ARM_AM::getSORegShOp(SOVal);
-          unsigned Opc = 0;
-          switch (SOShOp) {
-          case ARM_AM::lsl: Opc = ARM::t2MOVCClsl; break;
-          case ARM_AM::lsr: Opc = ARM::t2MOVCClsr; break;
-          case ARM_AM::asr: Opc = ARM::t2MOVCCasr; break;
-          case ARM_AM::ror: Opc = ARM::t2MOVCCror; break;
-          default:
-            llvm_unreachable("Unknown so_reg opcode!");
-            break;
-          }
-          SDValue SOShImm =
-            CurDAG->getTargetConstant(ARM_AM::getSORegOffset(SOVal), MVT::i32);
-          SDValue Tmp2 = CurDAG->getTargetConstant(((unsigned)
-                                   cast<ConstantSDNode>(N2)->getZExtValue()),
-                                   MVT::i32);
-          SDValue Ops[] = { N0, CPTmp0, SOShImm, Tmp2, N3, InFlag };
-          return CurDAG->SelectNodeTo(Op.getNode(), Opc, MVT::i32,Ops, 6);
-        }
-      } else {
-        if (SelectShifterOperandReg(Op, N1, CPTmp0, CPTmp1, CPTmp2)) {
-          SDValue Tmp2 = CurDAG->getTargetConstant(((unsigned)
-                                   cast<ConstantSDNode>(N2)->getZExtValue()),
-                                   MVT::i32);
-          SDValue Ops[] = { N0, CPTmp0, CPTmp1, CPTmp2, Tmp2, N3, InFlag };
-          return CurDAG->SelectNodeTo(Op.getNode(),
-                                      ARM::MOVCCs, MVT::i32, Ops, 7);
-        }
-      }
-
-      // Pattern: (ARMcmov:i32 GPR:i32:$false,
-      //             (imm:i32)<<P:Predicate_so_imm>>:$true,
-      //             (imm:i32):$cc)
-      // Emits: (MOVCCi:i32 GPR:i32:$false,
-      //           (so_imm:i32 (imm:i32):$true), (imm:i32):$cc)
-      // Pattern complexity = 10  cost = 1  size = 0
-      if (N3.getOpcode() == ISD::Constant) {
-        if (Subtarget->isThumb()) {
-          if (Predicate_t2_so_imm(N3.getNode())) {
-            SDValue Tmp1 = CurDAG->getTargetConstant(((unsigned)
-                                     cast<ConstantSDNode>(N1)->getZExtValue()),
-                                     MVT::i32);
-            SDValue Tmp2 = CurDAG->getTargetConstant(((unsigned)
-                                     cast<ConstantSDNode>(N2)->getZExtValue()),
-                                     MVT::i32);
-            SDValue Ops[] = { N0, Tmp1, Tmp2, N3, InFlag };
-            return CurDAG->SelectNodeTo(Op.getNode(),
-                                        ARM::t2MOVCCi, MVT::i32, Ops, 5);
-          }
-        } else {
-          if (Predicate_so_imm(N3.getNode())) {
-            SDValue Tmp1 = CurDAG->getTargetConstant(((unsigned)
-                                     cast<ConstantSDNode>(N1)->getZExtValue()),
-                                     MVT::i32);
-            SDValue Tmp2 = CurDAG->getTargetConstant(((unsigned)
-                                     cast<ConstantSDNode>(N2)->getZExtValue()),
-                                     MVT::i32);
-            SDValue Ops[] = { N0, Tmp1, Tmp2, N3, InFlag };
-            return CurDAG->SelectNodeTo(Op.getNode(),
-                                        ARM::MOVCCi, MVT::i32, Ops, 5);
-          }
-        }
-      }
-    }
-
-    // Pattern: (ARMcmov:i32 GPR:i32:$false, GPR:i32:$true, (imm:i32):$cc)
-    // Emits: (MOVCCr:i32 GPR:i32:$false, GPR:i32:$true, (imm:i32):$cc)
-    // Pattern complexity = 6  cost = 1  size = 0
-    //
-    // Pattern: (ARMcmov:i32 GPR:i32:$false, GPR:i32:$true, (imm:i32):$cc)
-    // Emits: (tMOVCCr:i32 GPR:i32:$false, GPR:i32:$true, (imm:i32):$cc)
-    // Pattern complexity = 6  cost = 11  size = 0
-    //
-    // Also FCPYScc and FCPYDcc.
-    SDValue Tmp2 = CurDAG->getTargetConstant(((unsigned)
-                               cast<ConstantSDNode>(N2)->getZExtValue()),
-                               MVT::i32);
-    SDValue Ops[] = { N0, N1, Tmp2, N3, InFlag };
-    unsigned Opc = 0;
-    switch (VT.getSimpleVT().SimpleTy) {
-    default: assert(false && "Illegal conditional move type!");
-      break;
-    case MVT::i32:
-      Opc = Subtarget->isThumb()
-        ? (Subtarget->hasThumb2() ? ARM::t2MOVCCr : ARM::tMOVCCr_pseudo)
-        : ARM::MOVCCr;
-      break;
-    case MVT::f32:
-      Opc = ARM::FCPYScc;
-      break;
-    case MVT::f64:
-      Opc = ARM::FCPYDcc;
-      break;
-    }
-    return CurDAG->SelectNodeTo(Op.getNode(), Opc, VT, Ops, 5);
-  }
+  case ARMISD::CMOV:
+    return SelectCMOVOp(Op);
   case ARMISD::CNEG: {
     EVT VT = Op.getValueType();
     SDValue N0 = Op.getOperand(0);
@@ -1257,10 +1785,10 @@ SDNode *ARMDAGToDAGISel::Select(SDValue Op) {
     default: assert(false && "Illegal conditional move type!");
       break;
     case MVT::f32:
-      Opc = ARM::FNEGScc;
+      Opc = ARM::VNEGScc;
       break;
     case MVT::f64:
-      Opc = ARM::FNEGDcc;
+      Opc = ARM::VNEGDcc;
       break;
     }
     return CurDAG->SelectNodeTo(Op.getNode(), Opc, VT, Ops, 5);
@@ -1280,8 +1808,10 @@ SDNode *ARMDAGToDAGISel::Select(SDValue Op) {
     case MVT::v4f32:
     case MVT::v4i32: Opc = ARM::VZIPq32; break;
     }
-    return CurDAG->getMachineNode(Opc, dl, VT, VT,
-                                  N->getOperand(0), N->getOperand(1));
+    SDValue Pred = CurDAG->getTargetConstant(14, MVT::i32);
+    SDValue PredReg = CurDAG->getRegister(0, MVT::i32);
+    SDValue Ops[] = { N->getOperand(0), N->getOperand(1), Pred, PredReg };
+    return CurDAG->getMachineNode(Opc, dl, VT, VT, Ops, 4);
   }
   case ARMISD::VUZP: {
     unsigned Opc = 0;
@@ -1297,8 +1827,10 @@ SDNode *ARMDAGToDAGISel::Select(SDValue Op) {
     case MVT::v4f32:
     case MVT::v4i32: Opc = ARM::VUZPq32; break;
     }
-    return CurDAG->getMachineNode(Opc, dl, VT, VT,
-                                  N->getOperand(0), N->getOperand(1));
+    SDValue Pred = CurDAG->getTargetConstant(14, MVT::i32);
+    SDValue PredReg = CurDAG->getRegister(0, MVT::i32);
+    SDValue Ops[] = { N->getOperand(0), N->getOperand(1), Pred, PredReg };
+    return CurDAG->getMachineNode(Opc, dl, VT, VT, Ops, 4);
   }
   case ARMISD::VTRN: {
     unsigned Opc = 0;
@@ -1314,233 +1846,105 @@ SDNode *ARMDAGToDAGISel::Select(SDValue Op) {
     case MVT::v4f32:
     case MVT::v4i32: Opc = ARM::VTRNq32; break;
     }
-    return CurDAG->getMachineNode(Opc, dl, VT, VT,
-                                  N->getOperand(0), N->getOperand(1));
+    SDValue Pred = CurDAG->getTargetConstant(14, MVT::i32);
+    SDValue PredReg = CurDAG->getRegister(0, MVT::i32);
+    SDValue Ops[] = { N->getOperand(0), N->getOperand(1), Pred, PredReg };
+    return CurDAG->getMachineNode(Opc, dl, VT, VT, Ops, 4);
   }
 
   case ISD::INTRINSIC_VOID:
   case ISD::INTRINSIC_W_CHAIN: {
     unsigned IntNo = cast<ConstantSDNode>(N->getOperand(1))->getZExtValue();
-    EVT VT = N->getValueType(0);
-    unsigned Opc = 0;
-
     switch (IntNo) {
     default:
       break;
 
     case Intrinsic::arm_neon_vld2: {
-      SDValue MemAddr, MemUpdate, MemOpc;
-      if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc))
-        return NULL;
-      switch (VT.getSimpleVT().SimpleTy) {
-      default: llvm_unreachable("unhandled vld2 type");
-      case MVT::v8i8:  Opc = ARM::VLD2d8; break;
-      case MVT::v4i16: Opc = ARM::VLD2d16; break;
-      case MVT::v2f32:
-      case MVT::v2i32: Opc = ARM::VLD2d32; break;
-      }
-      SDValue Chain = N->getOperand(0);
-      const SDValue Ops[] = { MemAddr, MemUpdate, MemOpc, Chain };
-      return CurDAG->getMachineNode(Opc, dl, VT, VT, MVT::Other, Ops, 4);
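+      // Opcode tables are indexed by element size (8/16/32, plus 64 for the
+      // D-register forms); SelectVLD chooses between the D and Q variants
+      // based on the vector type.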
+      unsigned DOpcodes[] = { ARM::VLD2d8, ARM::VLD2d16,
+                              ARM::VLD2d32, ARM::VLD2d64 };
+      unsigned QOpcodes[] = { ARM::VLD2q8, ARM::VLD2q16, ARM::VLD2q32 };
+      return SelectVLD(Op, 2, DOpcodes, QOpcodes, 0);
     }
 
     case Intrinsic::arm_neon_vld3: {
-      SDValue MemAddr, MemUpdate, MemOpc;
-      if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc))
-        return NULL;
-      switch (VT.getSimpleVT().SimpleTy) {
-      default: llvm_unreachable("unhandled vld3 type");
-      case MVT::v8i8:  Opc = ARM::VLD3d8; break;
-      case MVT::v4i16: Opc = ARM::VLD3d16; break;
-      case MVT::v2f32:
-      case MVT::v2i32: Opc = ARM::VLD3d32; break;
-      }
-      SDValue Chain = N->getOperand(0);
-      const SDValue Ops[] = { MemAddr, MemUpdate, MemOpc, Chain };
-      return CurDAG->getMachineNode(Opc, dl, VT, VT, VT, MVT::Other, Ops, 4);
+      unsigned DOpcodes[] = { ARM::VLD3d8, ARM::VLD3d16,
+                              ARM::VLD3d32, ARM::VLD3d64 };
+      unsigned QOpcodes0[] = { ARM::VLD3q8a, ARM::VLD3q16a, ARM::VLD3q32a };
+      unsigned QOpcodes1[] = { ARM::VLD3q8b, ARM::VLD3q16b, ARM::VLD3q32b };
+      return SelectVLD(Op, 3, DOpcodes, QOpcodes0, QOpcodes1);
     }
 
     case Intrinsic::arm_neon_vld4: {
-      SDValue MemAddr, MemUpdate, MemOpc;
-      if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc))
-        return NULL;
-      switch (VT.getSimpleVT().SimpleTy) {
-      default: llvm_unreachable("unhandled vld4 type");
-      case MVT::v8i8:  Opc = ARM::VLD4d8; break;
-      case MVT::v4i16: Opc = ARM::VLD4d16; break;
-      case MVT::v2f32:
-      case MVT::v2i32: Opc = ARM::VLD4d32; break;
-      }
-      SDValue Chain = N->getOperand(0);
-      const SDValue Ops[] = { MemAddr, MemUpdate, MemOpc, Chain };
-      std::vector<EVT> ResTys(4, VT);
-      ResTys.push_back(MVT::Other);
-      return CurDAG->getMachineNode(Opc, dl, ResTys, Ops, 4);
+      unsigned DOpcodes[] = { ARM::VLD4d8, ARM::VLD4d16,
+                              ARM::VLD4d32, ARM::VLD4d64 };
+      unsigned QOpcodes0[] = { ARM::VLD4q8a, ARM::VLD4q16a, ARM::VLD4q32a };
+      unsigned QOpcodes1[] = { ARM::VLD4q8b, ARM::VLD4q16b, ARM::VLD4q32b };
+      return SelectVLD(Op, 4, DOpcodes, QOpcodes0, QOpcodes1);
     }
 
     case Intrinsic::arm_neon_vld2lane: {
-      SDValue MemAddr, MemUpdate, MemOpc;
-      if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc))
-        return NULL;
-      switch (VT.getSimpleVT().SimpleTy) {
-      default: llvm_unreachable("unhandled vld2lane type");
-      case MVT::v8i8:  Opc = ARM::VLD2LNd8; break;
-      case MVT::v4i16: Opc = ARM::VLD2LNd16; break;
-      case MVT::v2f32:
-      case MVT::v2i32: Opc = ARM::VLD2LNd32; break;
-      }
-      SDValue Chain = N->getOperand(0);
-      const SDValue Ops[] = { MemAddr, MemUpdate, MemOpc,
-                              N->getOperand(3), N->getOperand(4),
-                              N->getOperand(5), Chain };
-      return CurDAG->getMachineNode(Opc, dl, VT, VT, MVT::Other, Ops, 7);
+      unsigned DOpcodes[] = { ARM::VLD2LNd8, ARM::VLD2LNd16, ARM::VLD2LNd32 };
+      unsigned QOpcodes0[] = { ARM::VLD2LNq16a, ARM::VLD2LNq32a };
+      unsigned QOpcodes1[] = { ARM::VLD2LNq16b, ARM::VLD2LNq32b };
+      return SelectVLDSTLane(Op, true, 2, DOpcodes, QOpcodes0, QOpcodes1);
     }
 
     case Intrinsic::arm_neon_vld3lane: {
-      SDValue MemAddr, MemUpdate, MemOpc;
-      if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc))
-        return NULL;
-      switch (VT.getSimpleVT().SimpleTy) {
-      default: llvm_unreachable("unhandled vld3lane type");
-      case MVT::v8i8:  Opc = ARM::VLD3LNd8; break;
-      case MVT::v4i16: Opc = ARM::VLD3LNd16; break;
-      case MVT::v2f32:
-      case MVT::v2i32: Opc = ARM::VLD3LNd32; break;
-      }
-      SDValue Chain = N->getOperand(0);
-      const SDValue Ops[] = { MemAddr, MemUpdate, MemOpc,
-                              N->getOperand(3), N->getOperand(4),
-                              N->getOperand(5), N->getOperand(6), Chain };
-      return CurDAG->getMachineNode(Opc, dl, VT, VT, VT, MVT::Other, Ops, 8);
+      unsigned DOpcodes[] = { ARM::VLD3LNd8, ARM::VLD3LNd16, ARM::VLD3LNd32 };
+      unsigned QOpcodes0[] = { ARM::VLD3LNq16a, ARM::VLD3LNq32a };
+      unsigned QOpcodes1[] = { ARM::VLD3LNq16b, ARM::VLD3LNq32b };
+      return SelectVLDSTLane(Op, true, 3, DOpcodes, QOpcodes0, QOpcodes1);
     }
 
     case Intrinsic::arm_neon_vld4lane: {
-      SDValue MemAddr, MemUpdate, MemOpc;
-      if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc))
-        return NULL;
-      switch (VT.getSimpleVT().SimpleTy) {
-      default: llvm_unreachable("unhandled vld4lane type");
-      case MVT::v8i8:  Opc = ARM::VLD4LNd8; break;
-      case MVT::v4i16: Opc = ARM::VLD4LNd16; break;
-      case MVT::v2f32:
-      case MVT::v2i32: Opc = ARM::VLD4LNd32; break;
-      }
-      SDValue Chain = N->getOperand(0);
-      const SDValue Ops[] = { MemAddr, MemUpdate, MemOpc,
-                              N->getOperand(3), N->getOperand(4),
-                              N->getOperand(5), N->getOperand(6),
-                              N->getOperand(7), Chain };
-      std::vector<EVT> ResTys(4, VT);
-      ResTys.push_back(MVT::Other);
-      return CurDAG->getMachineNode(Opc, dl, ResTys, Ops, 9);
+      unsigned DOpcodes[] = { ARM::VLD4LNd8, ARM::VLD4LNd16, ARM::VLD4LNd32 };
+      unsigned QOpcodes0[] = { ARM::VLD4LNq16a, ARM::VLD4LNq32a };
+      unsigned QOpcodes1[] = { ARM::VLD4LNq16b, ARM::VLD4LNq32b };
+      return SelectVLDSTLane(Op, true, 4, DOpcodes, QOpcodes0, QOpcodes1);
     }
 
     case Intrinsic::arm_neon_vst2: {
-      SDValue MemAddr, MemUpdate, MemOpc;
-      if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc))
-        return NULL;
-      switch (N->getOperand(3).getValueType().getSimpleVT().SimpleTy) {
-      default: llvm_unreachable("unhandled vst2 type");
-      case MVT::v8i8:  Opc = ARM::VST2d8; break;
-      case MVT::v4i16: Opc = ARM::VST2d16; break;
-      case MVT::v2f32:
-      case MVT::v2i32: Opc = ARM::VST2d32; break;
-      }
-      SDValue Chain = N->getOperand(0);
-      const SDValue Ops[] = { MemAddr, MemUpdate, MemOpc,
-                              N->getOperand(3), N->getOperand(4), Chain };
-      return CurDAG->getMachineNode(Opc, dl, MVT::Other, Ops, 6);
+      unsigned DOpcodes[] = { ARM::VST2d8, ARM::VST2d16,
+                              ARM::VST2d32, ARM::VST2d64 };
+      unsigned QOpcodes[] = { ARM::VST2q8, ARM::VST2q16, ARM::VST2q32 };
+      return SelectVST(Op, 2, DOpcodes, QOpcodes, 0);
     }
 
     case Intrinsic::arm_neon_vst3: {
-      SDValue MemAddr, MemUpdate, MemOpc;
-      if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc))
-        return NULL;
-      switch (N->getOperand(3).getValueType().getSimpleVT().SimpleTy) {
-      default: llvm_unreachable("unhandled vst3 type");
-      case MVT::v8i8:  Opc = ARM::VST3d8; break;
-      case MVT::v4i16: Opc = ARM::VST3d16; break;
-      case MVT::v2f32:
-      case MVT::v2i32: Opc = ARM::VST3d32; break;
-      }
-      SDValue Chain = N->getOperand(0);
-      const SDValue Ops[] = { MemAddr, MemUpdate, MemOpc,
-                              N->getOperand(3), N->getOperand(4),
-                              N->getOperand(5), Chain };
-      return CurDAG->getMachineNode(Opc, dl, MVT::Other, Ops, 7);
+      unsigned DOpcodes[] = { ARM::VST3d8, ARM::VST3d16,
+                              ARM::VST3d32, ARM::VST3d64 };
+      unsigned QOpcodes0[] = { ARM::VST3q8a, ARM::VST3q16a, ARM::VST3q32a };
+      unsigned QOpcodes1[] = { ARM::VST3q8b, ARM::VST3q16b, ARM::VST3q32b };
+      return SelectVST(Op, 3, DOpcodes, QOpcodes0, QOpcodes1);
     }
 
     case Intrinsic::arm_neon_vst4: {
-      SDValue MemAddr, MemUpdate, MemOpc;
-      if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc))
-        return NULL;
-      switch (N->getOperand(3).getValueType().getSimpleVT().SimpleTy) {
-      default: llvm_unreachable("unhandled vst4 type");
-      case MVT::v8i8:  Opc = ARM::VST4d8; break;
-      case MVT::v4i16: Opc = ARM::VST4d16; break;
-      case MVT::v2f32:
-      case MVT::v2i32: Opc = ARM::VST4d32; break;
-      }
-      SDValue Chain = N->getOperand(0);
-      const SDValue Ops[] = { MemAddr, MemUpdate, MemOpc,
-                              N->getOperand(3), N->getOperand(4),
-                              N->getOperand(5), N->getOperand(6), Chain };
-      return CurDAG->getMachineNode(Opc, dl, MVT::Other, Ops, 8);
+      unsigned DOpcodes[] = { ARM::VST4d8, ARM::VST4d16,
+                              ARM::VST4d32, ARM::VST4d64 };
+      unsigned QOpcodes0[] = { ARM::VST4q8a, ARM::VST4q16a, ARM::VST4q32a };
+      unsigned QOpcodes1[] = { ARM::VST4q8b, ARM::VST4q16b, ARM::VST4q32b };
+      return SelectVST(Op, 4, DOpcodes, QOpcodes0, QOpcodes1);
     }
 
     case Intrinsic::arm_neon_vst2lane: {
-      SDValue MemAddr, MemUpdate, MemOpc;
-      if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc))
-        return NULL;
-      switch (N->getOperand(3).getValueType().getSimpleVT().SimpleTy) {
-      default: llvm_unreachable("unhandled vst2lane type");
-      case MVT::v8i8:  Opc = ARM::VST2LNd8; break;
-      case MVT::v4i16: Opc = ARM::VST2LNd16; break;
-      case MVT::v2f32:
-      case MVT::v2i32: Opc = ARM::VST2LNd32; break;
-      }
-      SDValue Chain = N->getOperand(0);
-      const SDValue Ops[] = { MemAddr, MemUpdate, MemOpc,
-                              N->getOperand(3), N->getOperand(4),
-                              N->getOperand(5), Chain };
-      return CurDAG->getMachineNode(Opc, dl, MVT::Other, Ops, 7);
+      unsigned DOpcodes[] = { ARM::VST2LNd8, ARM::VST2LNd16, ARM::VST2LNd32 };
+      unsigned QOpcodes0[] = { ARM::VST2LNq16a, ARM::VST2LNq32a };
+      unsigned QOpcodes1[] = { ARM::VST2LNq16b, ARM::VST2LNq32b };
+      return SelectVLDSTLane(Op, false, 2, DOpcodes, QOpcodes0, QOpcodes1);
     }
 
     case Intrinsic::arm_neon_vst3lane: {
-      SDValue MemAddr, MemUpdate, MemOpc;
-      if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc))
-        return NULL;
-      switch (N->getOperand(3).getValueType().getSimpleVT().SimpleTy) {
-      default: llvm_unreachable("unhandled vst3lane type");
-      case MVT::v8i8:  Opc = ARM::VST3LNd8; break;
-      case MVT::v4i16: Opc = ARM::VST3LNd16; break;
-      case MVT::v2f32:
-      case MVT::v2i32: Opc = ARM::VST3LNd32; break;
-      }
-      SDValue Chain = N->getOperand(0);
-      const SDValue Ops[] = { MemAddr, MemUpdate, MemOpc,
-                              N->getOperand(3), N->getOperand(4),
-                              N->getOperand(5), N->getOperand(6), Chain };
-      return CurDAG->getMachineNode(Opc, dl, MVT::Other, Ops, 8);
+      unsigned DOpcodes[] = { ARM::VST3LNd8, ARM::VST3LNd16, ARM::VST3LNd32 };
+      unsigned QOpcodes0[] = { ARM::VST3LNq16a, ARM::VST3LNq32a };
+      unsigned QOpcodes1[] = { ARM::VST3LNq16b, ARM::VST3LNq32b };
+      return SelectVLDSTLane(Op, false, 3, DOpcodes, QOpcodes0, QOpcodes1);
     }
 
     case Intrinsic::arm_neon_vst4lane: {
-      SDValue MemAddr, MemUpdate, MemOpc;
-      if (!SelectAddrMode6(Op, N->getOperand(2), MemAddr, MemUpdate, MemOpc))
-        return NULL;
-      switch (N->getOperand(3).getValueType().getSimpleVT().SimpleTy) {
-      default: llvm_unreachable("unhandled vst4lane type");
-      case MVT::v8i8:  Opc = ARM::VST4LNd8; break;
-      case MVT::v4i16: Opc = ARM::VST4LNd16; break;
-      case MVT::v2f32:
-      case MVT::v2i32: Opc = ARM::VST4LNd32; break;
-      }
-      SDValue Chain = N->getOperand(0);
-      const SDValue Ops[] = { MemAddr, MemUpdate, MemOpc,
-                              N->getOperand(3), N->getOperand(4),
-                              N->getOperand(5), N->getOperand(6),
-                              N->getOperand(7), Chain };
-      return CurDAG->getMachineNode(Opc, dl, MVT::Other, Ops, 9);
+      unsigned DOpcodes[] = { ARM::VST4LNd8, ARM::VST4LNd16, ARM::VST4LNd32 };
+      unsigned QOpcodes0[] = { ARM::VST4LNq16a, ARM::VST4LNq32a };
+      unsigned QOpcodes1[] = { ARM::VST4LNq16b, ARM::VST4LNq32b };
+      return SelectVLDSTLane(Op, false, 4, DOpcodes, QOpcodes0, QOpcodes1);
     }
     }
   }
@@ -1553,14 +1957,10 @@ bool ARMDAGToDAGISel::
 SelectInlineAsmMemoryOperand(const SDValue &Op, char ConstraintCode,
                              std::vector<SDValue> &OutOps) {
   assert(ConstraintCode == 'm' && "unexpected asm memory constraint");
-
-  SDValue Base, Offset, Opc;
-  if (!SelectAddrMode2(Op, Op, Base, Offset, Opc))
-    return true;
-
-  OutOps.push_back(Base);
-  OutOps.push_back(Offset);
-  OutOps.push_back(Opc);
+  // Require the address to be in a register.  That is safe for all ARM
+  // variants and it is hard to do anything much smarter without knowing
+  // how the operand is used.
+  OutOps.push_back(Op);
   return false;
 }
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
index 4fa24f3..c839fc6 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
@@ -25,9 +25,10 @@
 #include "llvm/CallingConv.h"
 #include "llvm/Constants.h"
 #include "llvm/Function.h"
+#include "llvm/GlobalValue.h"
 #include "llvm/Instruction.h"
 #include "llvm/Intrinsics.h"
-#include "llvm/GlobalValue.h"
+#include "llvm/Type.h"
 #include "llvm/CodeGen/CallingConvLower.h"
 #include "llvm/CodeGen/MachineBasicBlock.h"
 #include "llvm/CodeGen/MachineFrameInfo.h"
@@ -38,6 +39,7 @@
 #include "llvm/CodeGen/SelectionDAG.h"
 #include "llvm/Target/TargetOptions.h"
 #include "llvm/ADT/VectorExtras.h"
+#include "llvm/Support/CommandLine.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/MathExtras.h"
 #include <sstream>
@@ -132,7 +134,7 @@ static TargetLoweringObjectFile *createTLOF(TargetMachine &TM) {
 }
 
 ARMTargetLowering::ARMTargetLowering(TargetMachine &TM)
-    : TargetLowering(TM, createTLOF(TM)), ARMPCLabelIndex(0) {
+    : TargetLowering(TM, createTLOF(TM)) {
   Subtarget = &TM.getSubtarget<ARMSubtarget>();
 
   if (Subtarget->isTargetDarwin()) {
@@ -329,9 +331,9 @@ ARMTargetLowering::ARMTargetLowering(TargetMachine &TM)
     if (!Subtarget->hasV6Ops())
       setOperationAction(ISD::MULHS, MVT::i32, Expand);
   }
-  setOperationAction(ISD::SHL_PARTS, MVT::i32, Expand);
-  setOperationAction(ISD::SRA_PARTS, MVT::i32, Expand);
-  setOperationAction(ISD::SRL_PARTS, MVT::i32, Expand);
+  setOperationAction(ISD::SHL_PARTS, MVT::i32, Custom);
+  setOperationAction(ISD::SRA_PARTS, MVT::i32, Custom);
+  setOperationAction(ISD::SRL_PARTS, MVT::i32, Custom);
   setOperationAction(ISD::SRL,       MVT::i64, Custom);
   setOperationAction(ISD::SRA,       MVT::i64, Custom);
 
@@ -354,14 +356,11 @@ ARMTargetLowering::ARMTargetLowering(TargetMachine &TM)
   setOperationAction(ISD::SDIVREM, MVT::i32, Expand);
   setOperationAction(ISD::UDIVREM, MVT::i32, Expand);
 
-  // Support label based line numbers.
-  setOperationAction(ISD::DBG_STOPPOINT, MVT::Other, Expand);
-  setOperationAction(ISD::DEBUG_LOC, MVT::Other, Expand);
-
   setOperationAction(ISD::GlobalAddress, MVT::i32,   Custom);
   setOperationAction(ISD::ConstantPool,  MVT::i32,   Custom);
   setOperationAction(ISD::GLOBAL_OFFSET_TABLE, MVT::i32, Custom);
   setOperationAction(ISD::GlobalTLSAddress, MVT::i32, Custom);
+  setOperationAction(ISD::BlockAddress, MVT::i32, Custom);
 
   // Use the default implementation.
   setOperationAction(ISD::VASTART,            MVT::Other, Custom);
@@ -387,13 +386,11 @@ ARMTargetLowering::ARMTargetLowering(TargetMachine &TM)
   setOperationAction(ISD::SIGN_EXTEND_INREG, MVT::i1, Expand);
 
   if (!UseSoftFloat && Subtarget->hasVFP2() && !Subtarget->isThumb1Only())
-    // Turn f64->i64 into FMRRD, i64 -> f64 to FMDRR iff target supports vfp2.
+    // Turn f64->i64 into VMOVRRD and i64->f64 into VMOVDRR iff the target
+    // supports VFP2.
     setOperationAction(ISD::BIT_CONVERT, MVT::i64, Custom);
 
   // We want to custom lower some of our intrinsics.
   setOperationAction(ISD::INTRINSIC_WO_CHAIN, MVT::Other, Custom);
-  setOperationAction(ISD::INTRINSIC_W_CHAIN, MVT::Other, Custom);
-  setOperationAction(ISD::INTRINSIC_VOID, MVT::Other, Custom);
 
   setOperationAction(ISD::SETCC,     MVT::i32, Expand);
   setOperationAction(ISD::SETCC,     MVT::f32, Expand);
@@ -434,7 +431,7 @@ ARMTargetLowering::ARMTargetLowering(TargetMachine &TM)
   }
 
   // We have target-specific dag combine patterns for the following nodes:
-  // ARMISD::FMRRD  - No need to call setTargetDAGCombine
+  // ARMISD::VMOVRRD  - No need to call setTargetDAGCombine
   setTargetDAGCombine(ISD::ADD);
   setTargetDAGCombine(ISD::SUB);
 
@@ -493,8 +490,11 @@ const char *ARMTargetLowering::getTargetNodeName(unsigned Opcode) const {
   case ARMISD::SRA_FLAG:      return "ARMISD::SRA_FLAG";
   case ARMISD::RRX:           return "ARMISD::RRX";
 
-  case ARMISD::FMRRD:         return "ARMISD::FMRRD";
-  case ARMISD::FMDRR:         return "ARMISD::FMDRR";
+  case ARMISD::VMOVRRD:       return "ARMISD::VMOVRRD";
+  case ARMISD::VMOVDRR:       return "ARMISD::VMOVDRR";
+
+  case ARMISD::EH_SJLJ_SETJMP: return "ARMISD::EH_SJLJ_SETJMP";
+  case ARMISD::EH_SJLJ_LONGJMP: return "ARMISD::EH_SJLJ_LONGJMP";
 
   case ARMISD::THREAD_POINTER:return "ARMISD::THREAD_POINTER";
 
@@ -787,7 +787,7 @@ ARMTargetLowering::LowerCallResult(SDValue Chain, SDValue InFlag,
                                       InFlag);
       Chain = Hi.getValue(1);
       InFlag = Hi.getValue(2);
-      Val = DAG.getNode(ARMISD::FMDRR, dl, MVT::f64, Lo, Hi);
+      Val = DAG.getNode(ARMISD::VMOVDRR, dl, MVT::f64, Lo, Hi);
 
       if (VA.getLocVT() == MVT::v2f64) {
         SDValue Vec = DAG.getNode(ISD::UNDEF, dl, MVT::v2f64);
@@ -802,7 +802,7 @@ ARMTargetLowering::LowerCallResult(SDValue Chain, SDValue InFlag,
         Hi = DAG.getCopyFromReg(Chain, dl, VA.getLocReg(), MVT::i32, InFlag);
         Chain = Hi.getValue(1);
         InFlag = Hi.getValue(2);
-        Val = DAG.getNode(ARMISD::FMDRR, dl, MVT::f64, Lo, Hi);
+        Val = DAG.getNode(ARMISD::VMOVDRR, dl, MVT::f64, Lo, Hi);
         Val = DAG.getNode(ISD::INSERT_VECTOR_ELT, dl, MVT::v2f64, Vec, Val,
                           DAG.getConstant(1, MVT::i32));
       }
@@ -867,7 +867,7 @@ void ARMTargetLowering::PassF64ArgInRegs(DebugLoc dl, SelectionDAG &DAG,
                                          SmallVector<SDValue, 8> &MemOpChains,
                                          ISD::ArgFlagsTy Flags) {
 
-  SDValue fmrrd = DAG.getNode(ARMISD::FMRRD, dl,
+  SDValue fmrrd = DAG.getNode(ARMISD::VMOVRRD, dl,
                               DAG.getVTList(MVT::i32, MVT::i32), Arg);
   RegsToPass.push_back(std::make_pair(VA.getLocReg(), fmrrd));
 
@@ -1001,6 +1001,8 @@ ARMTargetLowering::LowerCall(SDValue Chain, SDValue Callee,
   bool isDirect = false;
   bool isARMFunc = false;
   bool isLocalARMFunc = false;
+  MachineFunction &MF = DAG.getMachineFunction();
+  ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
   if (GlobalAddressSDNode *G = dyn_cast<GlobalAddressSDNode>(Callee)) {
     GlobalValue *GV = G->getGlobal();
     isDirect = true;
@@ -1012,14 +1014,16 @@ ARMTargetLowering::LowerCall(SDValue Chain, SDValue Callee,
     isLocalARMFunc = !Subtarget->isThumb() && !isExt;
     // tBX takes a register source operand.
     if (isARMFunc && Subtarget->isThumb1Only() && !Subtarget->hasV5TOps()) {
+      unsigned ARMPCLabelIndex = AFI->createConstPoolEntryUId();
       ARMConstantPoolValue *CPV = new ARMConstantPoolValue(GV,
                                                            ARMPCLabelIndex,
                                                            ARMCP::CPValue, 4);
       SDValue CPAddr = DAG.getTargetConstantPool(CPV, getPointerTy(), 4);
       CPAddr = DAG.getNode(ARMISD::Wrapper, dl, MVT::i32, CPAddr);
       Callee = DAG.getLoad(getPointerTy(), dl,
-                           DAG.getEntryNode(), CPAddr, NULL, 0);
-      SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex++, MVT::i32);
+                           DAG.getEntryNode(), CPAddr,
+                           PseudoSourceValue::getConstantPool(), 0);
+      SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex, MVT::i32);
       Callee = DAG.getNode(ARMISD::PIC_ADD, dl,
                            getPointerTy(), Callee, PICLabel);
    } else
@@ -1032,13 +1036,15 @@ ARMTargetLowering::LowerCall(SDValue Chain, SDValue Callee,
     // tBX takes a register source operand.
     const char *Sym = S->getSymbol();
     if (isARMFunc && Subtarget->isThumb1Only() && !Subtarget->hasV5TOps()) {
+      unsigned ARMPCLabelIndex = AFI->createConstPoolEntryUId();
       ARMConstantPoolValue *CPV = new ARMConstantPoolValue(*DAG.getContext(),
                                                        Sym, ARMPCLabelIndex, 4);
       SDValue CPAddr = DAG.getTargetConstantPool(CPV, getPointerTy(), 4);
       CPAddr = DAG.getNode(ARMISD::Wrapper, dl, MVT::i32, CPAddr);
       Callee = DAG.getLoad(getPointerTy(), dl,
-                           DAG.getEntryNode(), CPAddr, NULL, 0);
-      SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex++, MVT::i32);
+                           DAG.getEntryNode(), CPAddr,
+                           PseudoSourceValue::getConstantPool(), 0);
+      SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex, MVT::i32);
       Callee = DAG.getNode(ARMISD::PIC_ADD, dl,
                            getPointerTy(), Callee, PICLabel);
     } else
@@ -1140,7 +1146,7 @@ ARMTargetLowering::LowerReturn(SDValue Chain,
         // Extract the first half and return it in two registers.
         SDValue Half = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl, MVT::f64, Arg,
                                    DAG.getConstant(0, MVT::i32));
-        SDValue HalfGPRs = DAG.getNode(ARMISD::FMRRD, dl,
+        SDValue HalfGPRs = DAG.getNode(ARMISD::VMOVRRD, dl,
                                        DAG.getVTList(MVT::i32, MVT::i32), Half);
 
         Chain = DAG.getCopyToReg(Chain, dl, VA.getLocReg(), HalfGPRs, Flag);
@@ -1157,7 +1163,7 @@ ARMTargetLowering::LowerReturn(SDValue Chain,
       }
       // Legalize ret f64 -> ret 2 x i32.  We always have fmrrd if f64 is
       // available.
-      SDValue fmrrd = DAG.getNode(ARMISD::FMRRD, dl,
+      SDValue fmrrd = DAG.getNode(ARMISD::VMOVRRD, dl,
                                   DAG.getVTList(MVT::i32, MVT::i32), &Arg, 1);
       Chain = DAG.getCopyToReg(Chain, dl, VA.getLocReg(), fmrrd, Flag);
       Flag = Chain.getValue(1);
@@ -1202,6 +1208,34 @@ static SDValue LowerConstantPool(SDValue Op, SelectionDAG &DAG) {
   return DAG.getNode(ARMISD::Wrapper, dl, MVT::i32, Res);
 }
 
+SDValue ARMTargetLowering::LowerBlockAddress(SDValue Op, SelectionDAG &DAG) {
+  MachineFunction &MF = DAG.getMachineFunction();
+  ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
+  unsigned ARMPCLabelIndex = 0;
+  DebugLoc DL = Op.getDebugLoc();
+  EVT PtrVT = getPointerTy();
+  BlockAddress *BA = cast<BlockAddressSDNode>(Op)->getBlockAddress();
+  Reloc::Model RelocM = getTargetMachine().getRelocationModel();
+  SDValue CPAddr;
+  if (RelocM == Reloc::Static) {
+    CPAddr = DAG.getTargetConstantPool(BA, PtrVT, 4);
+  } else {
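+    // The PC reads as the address of the current instruction plus 8 in ARM
+    // mode and plus 4 in Thumb mode, hence the pc-adjustment used for the
+    // PIC label arithmetic.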
+    unsigned PCAdj = Subtarget->isThumb() ? 4 : 8;
+    ARMPCLabelIndex = AFI->createConstPoolEntryUId();
+    ARMConstantPoolValue *CPV = new ARMConstantPoolValue(BA, ARMPCLabelIndex,
+                                                         ARMCP::CPBlockAddress,
+                                                         PCAdj);
+    CPAddr = DAG.getTargetConstantPool(CPV, PtrVT, 4);
+  }
+  CPAddr = DAG.getNode(ARMISD::Wrapper, DL, PtrVT, CPAddr);
+  SDValue Result = DAG.getLoad(PtrVT, DL, DAG.getEntryNode(), CPAddr,
+                               PseudoSourceValue::getConstantPool(), 0);
+  if (RelocM == Reloc::Static)
+    return Result;
+  SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex, MVT::i32);
+  return DAG.getNode(ARMISD::PIC_ADD, DL, PtrVT, Result, PICLabel);
+}
+
 // Lower ISD::GlobalTLSAddress using the "general dynamic" model
 SDValue
 ARMTargetLowering::LowerToTLSGeneralDynamicModel(GlobalAddressSDNode *GA,
@@ -1209,15 +1243,19 @@ ARMTargetLowering::LowerToTLSGeneralDynamicModel(GlobalAddressSDNode *GA,
   DebugLoc dl = GA->getDebugLoc();
   EVT PtrVT = getPointerTy();
   unsigned char PCAdj = Subtarget->isThumb() ? 4 : 8;
+  MachineFunction &MF = DAG.getMachineFunction();
+  ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
+  unsigned ARMPCLabelIndex = AFI->createConstPoolEntryUId();
   ARMConstantPoolValue *CPV =
     new ARMConstantPoolValue(GA->getGlobal(), ARMPCLabelIndex,
                              ARMCP::CPValue, PCAdj, "tlsgd", true);
   SDValue Argument = DAG.getTargetConstantPool(CPV, PtrVT, 4);
   Argument = DAG.getNode(ARMISD::Wrapper, dl, MVT::i32, Argument);
-  Argument = DAG.getLoad(PtrVT, dl, DAG.getEntryNode(), Argument, NULL, 0);
+  Argument = DAG.getLoad(PtrVT, dl, DAG.getEntryNode(), Argument,
+                         PseudoSourceValue::getConstantPool(), 0);
   SDValue Chain = Argument.getValue(1);
 
-  SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex++, MVT::i32);
+  SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex, MVT::i32);
   Argument = DAG.getNode(ARMISD::PIC_ADD, dl, PtrVT, Argument, PICLabel);
 
   // call __tls_get_addr.
@@ -1249,26 +1287,32 @@ ARMTargetLowering::LowerToTLSExecModels(GlobalAddressSDNode *GA,
   SDValue ThreadPointer = DAG.getNode(ARMISD::THREAD_POINTER, dl, PtrVT);
 
   if (GV->isDeclaration()) {
-    // initial exec model
+    MachineFunction &MF = DAG.getMachineFunction();
+    ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
+    unsigned ARMPCLabelIndex = AFI->createConstPoolEntryUId();
+    // Initial exec model.
     unsigned char PCAdj = Subtarget->isThumb() ? 4 : 8;
     ARMConstantPoolValue *CPV =
       new ARMConstantPoolValue(GA->getGlobal(), ARMPCLabelIndex,
                                ARMCP::CPValue, PCAdj, "gottpoff", true);
     Offset = DAG.getTargetConstantPool(CPV, PtrVT, 4);
     Offset = DAG.getNode(ARMISD::Wrapper, dl, MVT::i32, Offset);
-    Offset = DAG.getLoad(PtrVT, dl, Chain, Offset, NULL, 0);
+    Offset = DAG.getLoad(PtrVT, dl, Chain, Offset,
+                         PseudoSourceValue::getConstantPool(), 0);
     Chain = Offset.getValue(1);
 
-    SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex++, MVT::i32);
+    SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex, MVT::i32);
     Offset = DAG.getNode(ARMISD::PIC_ADD, dl, PtrVT, Offset, PICLabel);
 
-    Offset = DAG.getLoad(PtrVT, dl, Chain, Offset, NULL, 0);
+    Offset = DAG.getLoad(PtrVT, dl, Chain, Offset,
+                         PseudoSourceValue::getConstantPool(), 0);
   } else {
     // local exec model
     ARMConstantPoolValue *CPV = new ARMConstantPoolValue(GV, "tpoff");
     Offset = DAG.getTargetConstantPool(CPV, PtrVT, 4);
     Offset = DAG.getNode(ARMISD::Wrapper, dl, MVT::i32, Offset);
-    Offset = DAG.getLoad(PtrVT, dl, Chain, Offset, NULL, 0);
+    Offset = DAG.getLoad(PtrVT, dl, Chain, Offset,
+                         PseudoSourceValue::getConstantPool(), 0);
   }
 
   // The address of the thread local variable is the add of the thread
@@ -1303,22 +1347,35 @@ SDValue ARMTargetLowering::LowerGlobalAddressELF(SDValue Op,
     SDValue CPAddr = DAG.getTargetConstantPool(CPV, PtrVT, 4);
     CPAddr = DAG.getNode(ARMISD::Wrapper, dl, MVT::i32, CPAddr);
     SDValue Result = DAG.getLoad(PtrVT, dl, DAG.getEntryNode(),
-                                 CPAddr, NULL, 0);
+                                 CPAddr,
+                                 PseudoSourceValue::getConstantPool(), 0);
     SDValue Chain = Result.getValue(1);
     SDValue GOT = DAG.getGLOBAL_OFFSET_TABLE(PtrVT);
     Result = DAG.getNode(ISD::ADD, dl, PtrVT, Result, GOT);
     if (!UseGOTOFF)
-      Result = DAG.getLoad(PtrVT, dl, Chain, Result, NULL, 0);
+      Result = DAG.getLoad(PtrVT, dl, Chain, Result,
+                           PseudoSourceValue::getGOT(), 0);
     return Result;
   } else {
-    SDValue CPAddr = DAG.getTargetConstantPool(GV, PtrVT, 4);
-    CPAddr = DAG.getNode(ARMISD::Wrapper, dl, MVT::i32, CPAddr);
-    return DAG.getLoad(PtrVT, dl, DAG.getEntryNode(), CPAddr, NULL, 0);
+    // If we have T2 ops, we can materialize the address directly via a
+    // movw/movt pair; this is always cheaper than a constant-pool load.
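+    // (A movw/movt pair materializes a 32-bit address in two instructions,
+    // e.g. "movw r0, :lower16:sym" followed by "movt r0, :upper16:sym".)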
+    if (Subtarget->useMovt()) {
+      return DAG.getNode(ARMISD::Wrapper, dl, PtrVT,
+                         DAG.getTargetGlobalAddress(GV, PtrVT));
+    } else {
+      SDValue CPAddr = DAG.getTargetConstantPool(GV, PtrVT, 4);
+      CPAddr = DAG.getNode(ARMISD::Wrapper, dl, MVT::i32, CPAddr);
+      return DAG.getLoad(PtrVT, dl, DAG.getEntryNode(), CPAddr,
+                         PseudoSourceValue::getConstantPool(), 0);
+    }
   }
 }
 
 SDValue ARMTargetLowering::LowerGlobalAddressDarwin(SDValue Op,
                                                     SelectionDAG &DAG) {
+  MachineFunction &MF = DAG.getMachineFunction();
+  ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
+  unsigned ARMPCLabelIndex = 0;
   EVT PtrVT = getPointerTy();
   DebugLoc dl = Op.getDebugLoc();
   GlobalValue *GV = cast<GlobalAddressSDNode>(Op)->getGlobal();
@@ -1327,6 +1384,7 @@ SDValue ARMTargetLowering::LowerGlobalAddressDarwin(SDValue Op,
   if (RelocM == Reloc::Static)
     CPAddr = DAG.getTargetConstantPool(GV, PtrVT, 4);
   else {
+    ARMPCLabelIndex = AFI->createConstPoolEntryUId();
     unsigned PCAdj = (RelocM != Reloc::PIC_) ? 0 : (Subtarget->isThumb()?4:8);
     ARMConstantPoolValue *CPV =
       new ARMConstantPoolValue(GV, ARMPCLabelIndex, ARMCP::CPValue, PCAdj);
@@ -1334,16 +1392,18 @@ SDValue ARMTargetLowering::LowerGlobalAddressDarwin(SDValue Op,
   }
   CPAddr = DAG.getNode(ARMISD::Wrapper, dl, MVT::i32, CPAddr);
 
-  SDValue Result = DAG.getLoad(PtrVT, dl, DAG.getEntryNode(), CPAddr, NULL, 0);
+  SDValue Result = DAG.getLoad(PtrVT, dl, DAG.getEntryNode(), CPAddr,
+                               PseudoSourceValue::getConstantPool(), 0);
   SDValue Chain = Result.getValue(1);
 
   if (RelocM == Reloc::PIC_) {
-    SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex++, MVT::i32);
+    SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex, MVT::i32);
     Result = DAG.getNode(ARMISD::PIC_ADD, dl, PtrVT, Result, PICLabel);
   }
 
   if (Subtarget->GVIsIndirectSymbol(GV, RelocM))
-    Result = DAG.getLoad(PtrVT, dl, Chain, Result, NULL, 0);
+    Result = DAG.getLoad(PtrVT, dl, Chain, Result,
+                         PseudoSourceValue::getGOT(), 0);
 
   return Result;
 }
@@ -1352,6 +1412,9 @@ SDValue ARMTargetLowering::LowerGLOBAL_OFFSET_TABLE(SDValue Op,
                                                     SelectionDAG &DAG){
   assert(Subtarget->isTargetELF() &&
          "GLOBAL OFFSET TABLE not implemented for non-ELF targets");
+  MachineFunction &MF = DAG.getMachineFunction();
+  ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
+  unsigned ARMPCLabelIndex = AFI->createConstPoolEntryUId();
   EVT PtrVT = getPointerTy();
   DebugLoc dl = Op.getDebugLoc();
   unsigned PCAdj = Subtarget->isThumb() ? 4 : 8;
@@ -1360,107 +1423,12 @@ SDValue ARMTargetLowering::LowerGLOBAL_OFFSET_TABLE(SDValue Op,
                                                        ARMPCLabelIndex, PCAdj);
   SDValue CPAddr = DAG.getTargetConstantPool(CPV, PtrVT, 4);
   CPAddr = DAG.getNode(ARMISD::Wrapper, dl, MVT::i32, CPAddr);
-  SDValue Result = DAG.getLoad(PtrVT, dl, DAG.getEntryNode(), CPAddr, NULL, 0);
-  SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex++, MVT::i32);
+  SDValue Result = DAG.getLoad(PtrVT, dl, DAG.getEntryNode(), CPAddr,
+                               PseudoSourceValue::getConstantPool(), 0);
+  SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex, MVT::i32);
   return DAG.getNode(ARMISD::PIC_ADD, dl, PtrVT, Result, PICLabel);
 }
 
-static SDValue LowerNeonVLDIntrinsic(SDValue Op, SelectionDAG &DAG,
-                                     unsigned NumVecs) {
-  SDNode *Node = Op.getNode();
-  EVT VT = Node->getValueType(0);
-
-  // No expansion needed for 64-bit vectors.
-  if (VT.is64BitVector())
-    return SDValue();
-
-  // FIXME: We need to expand VLD3 and VLD4 of 128-bit vectors into separate
-  // operations to load the even and odd registers.
-  return SDValue();
-}
-
-static SDValue LowerNeonVSTIntrinsic(SDValue Op, SelectionDAG &DAG,
-                                     unsigned NumVecs) {
-  SDNode *Node = Op.getNode();
-  EVT VT = Node->getOperand(3).getValueType();
-
-  // No expansion needed for 64-bit vectors.
-  if (VT.is64BitVector())
-    return SDValue();
-
-  // FIXME: We need to expand VST3 and VST4 of 128-bit vectors into separate
-  // operations to store the even and odd registers.
-  return SDValue();
-}
-
-static SDValue LowerNeonVLDLaneIntrinsic(SDValue Op, SelectionDAG &DAG,
-                                         unsigned NumVecs) {
-  SDNode *Node = Op.getNode();
-  EVT VT = Node->getValueType(0);
-
-  if (!VT.is64BitVector())
-    return SDValue(); // unimplemented
-
-  // Change the lane number operand to be a TargetConstant; otherwise it
-  // will be legalized into a register.
-  ConstantSDNode *Lane = dyn_cast<ConstantSDNode>(Node->getOperand(NumVecs+3));
-  if (!Lane) {
-    assert(false && "vld lane number must be a constant");
-    return SDValue();
-  }
-  SmallVector<SDValue, 8> Ops(Node->op_begin(), Node->op_end());
-  Ops[NumVecs+3] = DAG.getTargetConstant(Lane->getZExtValue(), MVT::i32);
-  return DAG.UpdateNodeOperands(Op, &Ops[0], Ops.size());
-}
-
-static SDValue LowerNeonVSTLaneIntrinsic(SDValue Op, SelectionDAG &DAG,
-                                         unsigned NumVecs) {
-  SDNode *Node = Op.getNode();
-  EVT VT = Node->getOperand(3).getValueType();
-
-  if (!VT.is64BitVector())
-    return SDValue(); // unimplemented
-
-  // Change the lane number operand to be a TargetConstant; otherwise it
-  // will be legalized into a register.
-  ConstantSDNode *Lane = dyn_cast<ConstantSDNode>(Node->getOperand(NumVecs+3));
-  if (!Lane) {
-    assert(false && "vst lane number must be a constant");
-    return SDValue();
-  }
-  SmallVector<SDValue, 8> Ops(Node->op_begin(), Node->op_end());
-  Ops[NumVecs+3] = DAG.getTargetConstant(Lane->getZExtValue(), MVT::i32);
-  return DAG.UpdateNodeOperands(Op, &Ops[0], Ops.size());
-}
-
-SDValue
-ARMTargetLowering::LowerINTRINSIC_W_CHAIN(SDValue Op, SelectionDAG &DAG) {
-  unsigned IntNo = cast<ConstantSDNode>(Op.getOperand(1))->getZExtValue();
-  switch (IntNo) {
-  case Intrinsic::arm_neon_vld3:
-    return LowerNeonVLDIntrinsic(Op, DAG, 3);
-  case Intrinsic::arm_neon_vld4:
-    return LowerNeonVLDIntrinsic(Op, DAG, 4);
-  case Intrinsic::arm_neon_vld2lane:
-    return LowerNeonVLDLaneIntrinsic(Op, DAG, 2);
-  case Intrinsic::arm_neon_vld3lane:
-    return LowerNeonVLDLaneIntrinsic(Op, DAG, 3);
-  case Intrinsic::arm_neon_vld4lane:
-    return LowerNeonVLDLaneIntrinsic(Op, DAG, 4);
-  case Intrinsic::arm_neon_vst3:
-    return LowerNeonVSTIntrinsic(Op, DAG, 3);
-  case Intrinsic::arm_neon_vst4:
-    return LowerNeonVSTIntrinsic(Op, DAG, 4);
-  case Intrinsic::arm_neon_vst2lane:
-    return LowerNeonVSTLaneIntrinsic(Op, DAG, 2);
-  case Intrinsic::arm_neon_vst3lane:
-    return LowerNeonVSTLaneIntrinsic(Op, DAG, 3);
-  case Intrinsic::arm_neon_vst4lane:
-    return LowerNeonVSTLaneIntrinsic(Op, DAG, 4);
-  default: return SDValue();    // Don't custom lower most intrinsics.
-  }
-}
-
 SDValue
 ARMTargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG) {
   unsigned IntNo = cast<ConstantSDNode>(Op.getOperand(0))->getZExtValue();
@@ -1473,6 +1441,8 @@ ARMTargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG) {
   }
   case Intrinsic::eh_sjlj_lsda: {
     MachineFunction &MF = DAG.getMachineFunction();
+    ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
+    unsigned ARMPCLabelIndex = AFI->createConstPoolEntryUId();
     EVT PtrVT = getPointerTy();
     DebugLoc dl = Op.getDebugLoc();
     Reloc::Model RelocM = getTargetMachine().getRelocationModel();
@@ -1485,11 +1455,12 @@ ARMTargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG) {
     CPAddr = DAG.getTargetConstantPool(CPV, PtrVT, 4);
     CPAddr = DAG.getNode(ARMISD::Wrapper, dl, MVT::i32, CPAddr);
     SDValue Result =
-      DAG.getLoad(PtrVT, dl, DAG.getEntryNode(), CPAddr, NULL, 0);
+      DAG.getLoad(PtrVT, dl, DAG.getEntryNode(), CPAddr,
+                  PseudoSourceValue::getConstantPool(), 0);
     SDValue Chain = Result.getValue(1);
 
     if (RelocM == Reloc::PIC_) {
-      SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex++, MVT::i32);
+      SDValue PICLabel = DAG.getConstant(ARMPCLabelIndex, MVT::i32);
       Result = DAG.getNode(ARMISD::PIC_ADD, dl, PtrVT, Result, PICLabel);
     }
     return Result;
@@ -1578,17 +1549,19 @@ ARMTargetLowering::GetF64FormalArgument(CCValAssign &VA, CCValAssign &NextVA,
   if (NextVA.isMemLoc()) {
     unsigned ArgSize = NextVA.getLocVT().getSizeInBits()/8;
     MachineFrameInfo *MFI = MF.getFrameInfo();
-    int FI = MFI->CreateFixedObject(ArgSize, NextVA.getLocMemOffset());
+    int FI = MFI->CreateFixedObject(ArgSize, NextVA.getLocMemOffset(),
+                                    true, false);
 
     // Create load node to retrieve arguments from the stack.
     SDValue FIN = DAG.getFrameIndex(FI, getPointerTy());
-    ArgValue2 = DAG.getLoad(MVT::i32, dl, Root, FIN, NULL, 0);
+    ArgValue2 = DAG.getLoad(MVT::i32, dl, Root, FIN,
+                            PseudoSourceValue::getFixedStack(FI), 0);
   } else {
     Reg = MF.addLiveIn(NextVA.getLocReg(), RC);
     ArgValue2 = DAG.getCopyFromReg(Root, dl, Reg, MVT::i32);
   }
 
-  return DAG.getNode(ARMISD::FMDRR, dl, MVT::f64, ArgValue, ArgValue2);
+  return DAG.getNode(ARMISD::VMOVDRR, dl, MVT::f64, ArgValue, ArgValue2);
 }
 
 SDValue
@@ -1691,11 +1664,13 @@ ARMTargetLowering::LowerFormalArguments(SDValue Chain,
       assert(VA.getValVT() != MVT::i64 && "i64 should already be lowered");
 
       unsigned ArgSize = VA.getLocVT().getSizeInBits()/8;
-      int FI = MFI->CreateFixedObject(ArgSize, VA.getLocMemOffset());
+      int FI = MFI->CreateFixedObject(ArgSize, VA.getLocMemOffset(),
+                                      true, false);
 
       // Create load nodes to retrieve arguments from the stack.
       SDValue FIN = DAG.getFrameIndex(FI, getPointerTy());
-      InVals.push_back(DAG.getLoad(VA.getValVT(), dl, Chain, FIN, NULL, 0));
+      InVals.push_back(DAG.getLoad(VA.getValVT(), dl, Chain, FIN,
+                                   PseudoSourceValue::getFixedStack(FI), 0));
     }
   }
 
@@ -1711,15 +1686,15 @@ ARMTargetLowering::LowerFormalArguments(SDValue Chain,
     unsigned Align = MF.getTarget().getFrameInfo()->getStackAlignment();
     unsigned VARegSize = (4 - NumGPRs) * 4;
     unsigned VARegSaveSize = (VARegSize + Align - 1) & ~(Align - 1);
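    // (This rounds VARegSize up to the next multiple of Align, which is a
    // power of two here.)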
-    unsigned ArgOffset = 0;
+    unsigned ArgOffset = CCInfo.getNextStackOffset();
     if (VARegSaveSize) {
       // If this function is vararg, store any remaining integer argument regs
       // to their spots on the stack so that they may be loaded by dereferencing
       // the result of va_next.
       AFI->setVarArgsRegSaveSize(VARegSaveSize);
-      ArgOffset = CCInfo.getNextStackOffset();
       VarArgsFrameIndex = MFI->CreateFixedObject(VARegSaveSize, ArgOffset +
-                                                 VARegSaveSize - VARegSize);
+                                                 VARegSaveSize - VARegSize,
+                                                 true, false);
       SDValue FIN = DAG.getFrameIndex(VarArgsFrameIndex, getPointerTy());
 
       SmallVector<SDValue, 4> MemOps;
@@ -1732,7 +1707,8 @@ ARMTargetLowering::LowerFormalArguments(SDValue Chain,
 
         unsigned VReg = MF.addLiveIn(GPRArgRegs[NumGPRs], RC);
         SDValue Val = DAG.getCopyFromReg(Chain, dl, VReg, MVT::i32);
-        SDValue Store = DAG.getStore(Val.getValue(1), dl, Val, FIN, NULL, 0);
+        SDValue Store = DAG.getStore(Val.getValue(1), dl, Val, FIN,
+                        PseudoSourceValue::getFixedStack(VarArgsFrameIndex), 0);
         MemOps.push_back(Store);
         FIN = DAG.getNode(ISD::ADD, dl, getPointerTy(), FIN,
                           DAG.getConstant(4, getPointerTy()));
@@ -1742,7 +1718,7 @@ ARMTargetLowering::LowerFormalArguments(SDValue Chain,
                             &MemOps[0], MemOps.size());
     } else
       // This will point to the next argument passed via stack.
-      VarArgsFrameIndex = MFI->CreateFixedObject(4, ArgOffset);
+      VarArgsFrameIndex = MFI->CreateFixedObject(4, ArgOffset, true, false);
   }
 
   return Chain;
@@ -1764,46 +1740,41 @@ static bool isFloatingPointZero(SDValue Op) {
   return false;
 }
 
-static bool isLegalCmpImmediate(unsigned C, bool isThumb1Only) {
-  return ( isThumb1Only && (C & ~255U) == 0) ||
-         (!isThumb1Only && ARM_AM::getSOImmVal(C) != -1);
-}
-
 /// Returns appropriate ARM CMP (cmp) and corresponding condition code for
 /// the given operands.
-static SDValue getARMCmp(SDValue LHS, SDValue RHS, ISD::CondCode CC,
-                         SDValue &ARMCC, SelectionDAG &DAG, bool isThumb1Only,
-                         DebugLoc dl) {
+SDValue
+ARMTargetLowering::getARMCmp(SDValue LHS, SDValue RHS, ISD::CondCode CC,
+                             SDValue &ARMCC, SelectionDAG &DAG, DebugLoc dl) {
   if (ConstantSDNode *RHSC = dyn_cast<ConstantSDNode>(RHS.getNode())) {
     unsigned C = RHSC->getZExtValue();
-    if (!isLegalCmpImmediate(C, isThumb1Only)) {
+    if (!isLegalICmpImmediate(C)) {
       // Constant does not fit, try adjusting it by one?
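      // (For example, on ARM "x < 257" cannot be a single cmp because 257
      // is not a valid so_imm, but the rewrite below turns it into
      // "x <= 256", and 256 is encodable.)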
       switch (CC) {
       default: break;
       case ISD::SETLT:
       case ISD::SETGE:
-        if (isLegalCmpImmediate(C-1, isThumb1Only)) {
+        if (isLegalICmpImmediate(C-1)) {
           CC = (CC == ISD::SETLT) ? ISD::SETLE : ISD::SETGT;
           RHS = DAG.getConstant(C-1, MVT::i32);
         }
         break;
       case ISD::SETULT:
       case ISD::SETUGE:
-        if (C > 0 && isLegalCmpImmediate(C-1, isThumb1Only)) {
+        if (C > 0 && isLegalICmpImmediate(C-1)) {
           CC = (CC == ISD::SETULT) ? ISD::SETULE : ISD::SETUGT;
           RHS = DAG.getConstant(C-1, MVT::i32);
         }
         break;
       case ISD::SETLE:
       case ISD::SETGT:
-        if (isLegalCmpImmediate(C+1, isThumb1Only)) {
+        if (isLegalICmpImmediate(C+1)) {
           CC = (CC == ISD::SETLE) ? ISD::SETLT : ISD::SETGE;
           RHS = DAG.getConstant(C+1, MVT::i32);
         }
         break;
       case ISD::SETULE:
       case ISD::SETUGT:
-        if (C < 0xffffffff && isLegalCmpImmediate(C+1, isThumb1Only)) {
+        if (C < 0xffffffff && isLegalICmpImmediate(C+1)) {
           CC = (CC == ISD::SETULE) ? ISD::SETULT : ISD::SETUGE;
           RHS = DAG.getConstant(C+1, MVT::i32);
         }
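The rewrite above flips a strict comparison to its inclusive twin (or back) so that an out-of-range constant becomes an adjacent, encodable one. A minimal sketch of the same idea on plain integers, with a hypothetical isLegalThumb1CmpImm standing in for isLegalICmpImmediate (Thumb1 CMP only encodes immediates 0..255):

    #include <cstdint>

    // Hypothetical stand-in: Thumb1 CMP immediates are limited to 0..255.
    static bool isLegalThumb1CmpImm(uint32_t C) { return C <= 255; }

    // "x < 256" cannot be emitted directly in Thumb1; "x <= 255" is the
    // same predicate with a legal immediate (SETULT -> SETULE, C -> C-1).
    static bool lessThan256(uint32_t X) {
      uint32_t C = 256;
      if (!isLegalThumb1CmpImm(C) && isLegalThumb1CmpImm(C - 1))
        return X <= C - 1;
      return X < C;
    }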
@@ -1839,8 +1810,7 @@ static SDValue getVFPCmp(SDValue LHS, SDValue RHS, SelectionDAG &DAG,
   return DAG.getNode(ARMISD::FMSTAT, dl, MVT::Flag, Cmp);
 }
 
-static SDValue LowerSELECT_CC(SDValue Op, SelectionDAG &DAG,
-                              const ARMSubtarget *ST) {
+SDValue ARMTargetLowering::LowerSELECT_CC(SDValue Op, SelectionDAG &DAG) {
   EVT VT = Op.getValueType();
   SDValue LHS = Op.getOperand(0);
   SDValue RHS = Op.getOperand(1);
@@ -1852,7 +1822,7 @@ static SDValue LowerSELECT_CC(SDValue Op, SelectionDAG &DAG,
   if (LHS.getValueType() == MVT::i32) {
     SDValue ARMCC;
     SDValue CCR = DAG.getRegister(ARM::CPSR, MVT::i32);
-    SDValue Cmp = getARMCmp(LHS, RHS, CC, ARMCC, DAG, ST->isThumb1Only(), dl);
+    SDValue Cmp = getARMCmp(LHS, RHS, CC, ARMCC, DAG, dl);
     return DAG.getNode(ARMISD::CMOV, dl, VT, FalseVal, TrueVal, ARMCC, CCR,Cmp);
   }
 
@@ -1874,8 +1844,7 @@ static SDValue LowerSELECT_CC(SDValue Op, SelectionDAG &DAG,
   return Result;
 }
 
-static SDValue LowerBR_CC(SDValue Op, SelectionDAG &DAG,
-                          const ARMSubtarget *ST) {
+SDValue ARMTargetLowering::LowerBR_CC(SDValue Op, SelectionDAG &DAG) {
   SDValue  Chain = Op.getOperand(0);
   ISD::CondCode CC = cast<CondCodeSDNode>(Op.getOperand(1))->get();
   SDValue    LHS = Op.getOperand(2);
@@ -1886,7 +1855,7 @@ static SDValue LowerBR_CC(SDValue Op, SelectionDAG &DAG,
   if (LHS.getValueType() == MVT::i32) {
     SDValue ARMCC;
     SDValue CCR = DAG.getRegister(ARM::CPSR, MVT::i32);
-    SDValue Cmp = getARMCmp(LHS, RHS, CC, ARMCC, DAG, ST->isThumb1Only(), dl);
+    SDValue Cmp = getARMCmp(LHS, RHS, CC, ARMCC, DAG, dl);
     return DAG.getNode(ARMISD::BRCOND, dl, MVT::Other,
                        Chain, Dest, ARMCC, CCR,Cmp);
   }
@@ -1932,12 +1901,14 @@ SDValue ARMTargetLowering::LowerBR_JT(SDValue Op, SelectionDAG &DAG) {
                        Addr, Op.getOperand(2), JTI, UId);
   }
   if (getTargetMachine().getRelocationModel() == Reloc::PIC_) {
-    Addr = DAG.getLoad((EVT)MVT::i32, dl, Chain, Addr, NULL, 0);
+    Addr = DAG.getLoad((EVT)MVT::i32, dl, Chain, Addr,
+                       PseudoSourceValue::getJumpTable(), 0);
     Chain = Addr.getValue(1);
     Addr = DAG.getNode(ISD::ADD, dl, PTy, Addr, Table);
     return DAG.getNode(ARMISD::BR_JT, dl, MVT::Other, Chain, Addr, JTI, UId);
   } else {
-    Addr = DAG.getLoad(PTy, dl, Chain, Addr, NULL, 0);
+    Addr = DAG.getLoad(PTy, dl, Chain, Addr,
+                       PseudoSourceValue::getJumpTable(), 0);
     Chain = Addr.getValue(1);
     return DAG.getNode(ARMISD::BR_JT, dl, MVT::Other, Chain, Addr, JTI, UId);
   }
@@ -2101,16 +2072,16 @@ static SDValue ExpandBIT_CONVERT(SDNode *N, SelectionDAG &DAG) {
   SDValue Op = N->getOperand(0);
   DebugLoc dl = N->getDebugLoc();
   if (N->getValueType(0) == MVT::f64) {
-    // Turn i64->f64 into FMDRR.
+    // Turn i64->f64 into VMOVDRR.
     SDValue Lo = DAG.getNode(ISD::EXTRACT_ELEMENT, dl, MVT::i32, Op,
                              DAG.getConstant(0, MVT::i32));
     SDValue Hi = DAG.getNode(ISD::EXTRACT_ELEMENT, dl, MVT::i32, Op,
                              DAG.getConstant(1, MVT::i32));
-    return DAG.getNode(ARMISD::FMDRR, dl, MVT::f64, Lo, Hi);
+    return DAG.getNode(ARMISD::VMOVDRR, dl, MVT::f64, Lo, Hi);
   }
 
-  // Turn f64->i64 into FMRRD.
-  SDValue Cvt = DAG.getNode(ARMISD::FMRRD, dl,
+  // Turn f64->i64 into VMOVRRD.
+  SDValue Cvt = DAG.getNode(ARMISD::VMOVRRD, dl,
                             DAG.getVTList(MVT::i32, MVT::i32), &Op, 1);
 
   // Merge the pieces into a single i64 value.
@@ -2148,7 +2119,7 @@ static SDValue getZeroVector(EVT VT, SelectionDAG &DAG, DebugLoc dl) {
 static SDValue getOnesVector(EVT VT, SelectionDAG &DAG, DebugLoc dl) {
   assert(VT.isVector() && "Expected a vector type");
 
-  // Always build ones vectors as <16 x i32> or <8 x i32> bitcasted to their
+  // Always build ones vectors as <16 x i8> or <8 x i8> bitcasted to their
   // dest type. This ensures they get CSE'd.
   SDValue Vec;
   SDValue Cst = DAG.getTargetConstant(0xFF, MVT::i8);
@@ -2165,6 +2136,74 @@ static SDValue getOnesVector(EVT VT, SelectionDAG &DAG, DebugLoc dl) {
   return DAG.getNode(ISD::BIT_CONVERT, dl, VT, Vec);
 }
 
+/// LowerShiftRightParts - Lower SRA_PARTS and SRL_PARTS, which return two
+/// i32 values and take a 2 x i32 value to shift plus a shift amount.
+SDValue ARMTargetLowering::LowerShiftRightParts(SDValue Op, SelectionDAG &DAG) {
+  assert(Op.getNumOperands() == 3 && "Not a double-shift!");
+  EVT VT = Op.getValueType();
+  unsigned VTBits = VT.getSizeInBits();
+  DebugLoc dl = Op.getDebugLoc();
+  SDValue ShOpLo = Op.getOperand(0);
+  SDValue ShOpHi = Op.getOperand(1);
+  SDValue ShAmt  = Op.getOperand(2);
+  SDValue ARMCC;
+  unsigned Opc = (Op.getOpcode() == ISD::SRA_PARTS) ? ISD::SRA : ISD::SRL;
+
+  assert(Op.getOpcode() == ISD::SRA_PARTS || Op.getOpcode() == ISD::SRL_PARTS);
+
+  SDValue RevShAmt = DAG.getNode(ISD::SUB, dl, MVT::i32,
+                                 DAG.getConstant(VTBits, MVT::i32), ShAmt);
+  SDValue Tmp1 = DAG.getNode(ISD::SRL, dl, VT, ShOpLo, ShAmt);
+  SDValue ExtraShAmt = DAG.getNode(ISD::SUB, dl, MVT::i32, ShAmt,
+                                   DAG.getConstant(VTBits, MVT::i32));
+  SDValue Tmp2 = DAG.getNode(ISD::SHL, dl, VT, ShOpHi, RevShAmt);
+  SDValue FalseVal = DAG.getNode(ISD::OR, dl, VT, Tmp1, Tmp2);
+  SDValue TrueVal = DAG.getNode(Opc, dl, VT, ShOpHi, ExtraShAmt);
+
+  SDValue CCR = DAG.getRegister(ARM::CPSR, MVT::i32);
+  SDValue Cmp = getARMCmp(ExtraShAmt, DAG.getConstant(0, MVT::i32), ISD::SETGE,
+                          ARMCC, DAG, dl);
+  SDValue Hi = DAG.getNode(Opc, dl, VT, ShOpHi, ShAmt);
+  SDValue Lo = DAG.getNode(ARMISD::CMOV, dl, VT, FalseVal, TrueVal, ARMCC,
+                           CCR, Cmp);
+
+  SDValue Ops[2] = { Lo, Hi };
+  return DAG.getMergeValues(Ops, 2, dl);
+}
+
+/// LowerShiftLeftParts - Lower SHL_PARTS, which returns two
+/// i32 values and takes a 2 x i32 value to shift plus a shift amount.
+SDValue ARMTargetLowering::LowerShiftLeftParts(SDValue Op, SelectionDAG &DAG) {
+  assert(Op.getNumOperands() == 3 && "Not a double-shift!");
+  EVT VT = Op.getValueType();
+  unsigned VTBits = VT.getSizeInBits();
+  DebugLoc dl = Op.getDebugLoc();
+  SDValue ShOpLo = Op.getOperand(0);
+  SDValue ShOpHi = Op.getOperand(1);
+  SDValue ShAmt  = Op.getOperand(2);
+  SDValue ARMCC;
+
+  assert(Op.getOpcode() == ISD::SHL_PARTS);
+  SDValue RevShAmt = DAG.getNode(ISD::SUB, dl, MVT::i32,
+                                 DAG.getConstant(VTBits, MVT::i32), ShAmt);
+  SDValue Tmp1 = DAG.getNode(ISD::SRL, dl, VT, ShOpLo, RevShAmt);
+  SDValue ExtraShAmt = DAG.getNode(ISD::SUB, dl, MVT::i32, ShAmt,
+                                   DAG.getConstant(VTBits, MVT::i32));
+  SDValue Tmp2 = DAG.getNode(ISD::SHL, dl, VT, ShOpHi, ShAmt);
+  SDValue Tmp3 = DAG.getNode(ISD::SHL, dl, VT, ShOpLo, ExtraShAmt);
+
+  SDValue FalseVal = DAG.getNode(ISD::OR, dl, VT, Tmp1, Tmp2);
+  SDValue CCR = DAG.getRegister(ARM::CPSR, MVT::i32);
+  SDValue Cmp = getARMCmp(ExtraShAmt, DAG.getConstant(0, MVT::i32), ISD::SETGE,
+                          ARMCC, DAG, dl);
+  SDValue Lo = DAG.getNode(ISD::SHL, dl, VT, ShOpLo, ShAmt);
+  SDValue Hi = DAG.getNode(ARMISD::CMOV, dl, VT, FalseVal, Tmp3, ARMCC,
+                           CCR, Cmp);
+
+  SDValue Ops[2] = { Lo, Hi };
+  return DAG.getMergeValues(Ops, 2, dl);
+}
+
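Both *_PARTS lowerings are the textbook two-word shift: when the amount is below the word size the result mixes both halves; otherwise only the far half contributes, and ARMISD::CMOV on "ExtraShAmt >= 0" selects between the two precomputed candidates. A portable C++ sketch of the SRL_PARTS case, with explicit branches where the selected code uses the conditional move:

    #include <cstdint>

    // Sketch only: the real lowering is branchless and leans on ARM shift
    // semantics (a shift amount >= 32 yields 0), which portable C++ lacks.
    static void srl64(uint32_t Lo, uint32_t Hi, unsigned Amt,
                      uint32_t &OutLo, uint32_t &OutHi) {
      if (Amt == 0) { OutLo = Lo; OutHi = Hi; return; }
      if (Amt < 32) {                  // ExtraShAmt = Amt - 32 is negative
        OutLo = (Lo >> Amt) | (Hi << (32 - Amt));  // the FalseVal operand
        OutHi = Hi >> Amt;
      } else {                         // ExtraShAmt >= 0: only Hi survives
        OutLo = Hi >> (Amt - 32);                  // the TrueVal operand
        OutHi = 0;                     // SRA_PARTS would sign-fill instead
      }
    }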
 static SDValue LowerShift(SDNode *N, SelectionDAG &DAG,
                           const ARMSubtarget *ST) {
   EVT VT = N->getValueType(0);
@@ -2454,8 +2493,11 @@ static bool isVREVMask(const SmallVectorImpl<int> &M, EVT VT,
   assert((BlockSize==16 || BlockSize==32 || BlockSize==64) &&
          "Only possible block sizes for VREV are: 16, 32, 64");
 
-  unsigned NumElts = VT.getVectorNumElements();
   unsigned EltSz = VT.getVectorElementType().getSizeInBits();
+  if (EltSz == 64)
+    return false;
+
+  unsigned NumElts = VT.getVectorNumElements();
   unsigned BlockElts = M[0] + 1;
 
   if (BlockSize <= EltSz || BlockSize != BlockElts * EltSz)
@@ -2472,6 +2514,10 @@ static bool isVREVMask(const SmallVectorImpl<int> &M, EVT VT,
 
 static bool isVTRNMask(const SmallVectorImpl<int> &M, EVT VT,
                        unsigned &WhichResult) {
+  unsigned EltSz = VT.getVectorElementType().getSizeInBits();
+  if (EltSz == 64)
+    return false;
+
   unsigned NumElts = VT.getVectorNumElements();
   WhichResult = (M[0] == 0 ? 0 : 1);
   for (unsigned i = 0; i < NumElts; i += 2) {
@@ -2484,6 +2530,10 @@ static bool isVTRNMask(const SmallVectorImpl<int> &M, EVT VT,
 
 static bool isVUZPMask(const SmallVectorImpl<int> &M, EVT VT,
                        unsigned &WhichResult) {
+  unsigned EltSz = VT.getVectorElementType().getSizeInBits();
+  if (EltSz == 64)
+    return false;
+
   unsigned NumElts = VT.getVectorNumElements();
   WhichResult = (M[0] == 0 ? 0 : 1);
   for (unsigned i = 0; i != NumElts; ++i) {
@@ -2492,7 +2542,7 @@ static bool isVUZPMask(const SmallVectorImpl<int> &M, EVT VT,
   }
 
   // VUZP.32 for 64-bit vectors is a pseudo-instruction alias for VTRN.32.
-  if (VT.is64BitVector() && VT.getVectorElementType().getSizeInBits() == 32)
+  if (VT.is64BitVector() && EltSz == 32)
     return false;
 
   return true;
@@ -2500,6 +2550,10 @@ static bool isVUZPMask(const SmallVectorImpl<int> &M, EVT VT,
 
 static bool isVZIPMask(const SmallVectorImpl<int> &M, EVT VT,
                        unsigned &WhichResult) {
+  unsigned EltSz = VT.getVectorElementType().getSizeInBits();
+  if (EltSz == 64)
+    return false;
+
   unsigned NumElts = VT.getVectorNumElements();
   WhichResult = (M[0] == 0 ? 0 : 1);
   unsigned Idx = WhichResult * NumElts / 2;
@@ -2511,7 +2565,7 @@ static bool isVZIPMask(const SmallVectorImpl<int> &M, EVT VT,
   }
 
   // VZIP.32 for 64-bit vectors is a pseudo-instruction alias for VTRN.32.
-  if (VT.is64BitVector() && VT.getVectorElementType().getSizeInBits() == 32)
+  if (VT.is64BitVector() && EltSz == 32)
     return false;
 
   return true;
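For reference, these predicates accept the standard NEON permute masks; on a <4 x i16> shuffle of two concatenated inputs (lane indices 0..7) the first results are VTRN <0,4,2,6>, VUZP <0,2,4,6> and VZIP <0,4,1,5>. A standalone check of isVTRNMask's loop invariant against the first of these (assumed mask values):

    #include <cassert>

    int main() {
      int M[4] = {0, 4, 2, 6};  // VTRN.16, WhichResult == 0
      unsigned NumElts = 4, Which = (M[0] == 0 ? 0 : 1);
      for (unsigned i = 0; i < NumElts; i += 2) {
        assert((unsigned)M[i] == i + Which);               // from vector 1
        assert((unsigned)M[i + 1] == i + NumElts + Which); // from vector 2
      }
      return 0;
    }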
@@ -2719,6 +2773,9 @@ static SDValue LowerVECTOR_SHUFFLE(SDValue Op, SelectionDAG &DAG) {
 
   if (ShuffleVectorSDNode::isSplatMask(&ShuffleMask[0], VT)) {
     int Lane = SVN->getSplatIndex();
+    // If this is an undef splat, generate it via a plain VDUP, if possible.
+    if (Lane == -1) Lane = 0;
+
     if (Lane == 0 && V1.getOpcode() == ISD::SCALAR_TO_VECTOR) {
       return DAG.getNode(ARMISD::VDUP, dl, VT, V1.getOperand(0));
     }
@@ -2789,18 +2846,10 @@ static SDValue LowerEXTRACT_VECTOR_ELT(SDValue Op, SelectionDAG &DAG) {
   DebugLoc dl = Op.getDebugLoc();
   SDValue Vec = Op.getOperand(0);
   SDValue Lane = Op.getOperand(1);
-
-  // FIXME: This is invalid for 8 and 16-bit elements - the information about
-  // sign / zero extension is lost!
-  Op = DAG.getNode(ARMISD::VGETLANEu, dl, MVT::i32, Vec, Lane);
-  Op = DAG.getNode(ISD::AssertZext, dl, MVT::i32, Op, DAG.getValueType(VT));
-
-  if (VT.bitsLT(MVT::i32))
-    Op = DAG.getNode(ISD::TRUNCATE, dl, VT, Op);
-  else if (VT.bitsGT(MVT::i32))
-    Op = DAG.getNode(ISD::ANY_EXTEND, dl, VT, Op);
-
-  return Op;
+  assert(VT == MVT::i32 &&
+         Vec.getValueType().getVectorElementType().getSizeInBits() < 32 &&
+         "unexpected type for custom-lowering vector extract");
+  return DAG.getNode(ARMISD::VGETLANEu, dl, MVT::i32, Vec, Lane);
 }
 
 static SDValue LowerCONCAT_VECTORS(SDValue Op, SelectionDAG &DAG) {
@@ -2827,12 +2876,13 @@ SDValue ARMTargetLowering::LowerOperation(SDValue Op, SelectionDAG &DAG) {
   switch (Op.getOpcode()) {
   default: llvm_unreachable("Don't know how to custom lower this!");
   case ISD::ConstantPool:  return LowerConstantPool(Op, DAG);
+  case ISD::BlockAddress:  return LowerBlockAddress(Op, DAG);
   case ISD::GlobalAddress:
     return Subtarget->isTargetDarwin() ? LowerGlobalAddressDarwin(Op, DAG) :
       LowerGlobalAddressELF(Op, DAG);
   case ISD::GlobalTLSAddress:   return LowerGlobalTLSAddress(Op, DAG);
-  case ISD::SELECT_CC:     return LowerSELECT_CC(Op, DAG, Subtarget);
-  case ISD::BR_CC:         return LowerBR_CC(Op, DAG, Subtarget);
+  case ISD::SELECT_CC:     return LowerSELECT_CC(Op, DAG);
+  case ISD::BR_CC:         return LowerBR_CC(Op, DAG);
   case ISD::BR_JT:         return LowerBR_JT(Op, DAG);
   case ISD::DYNAMIC_STACKALLOC: return LowerDYNAMIC_STACKALLOC(Op, DAG);
   case ISD::VASTART:       return LowerVASTART(Op, DAG, VarArgsFrameIndex);
@@ -2844,13 +2894,14 @@ SDValue ARMTargetLowering::LowerOperation(SDValue Op, SelectionDAG &DAG) {
   case ISD::RETURNADDR:    break;
   case ISD::FRAMEADDR:     return LowerFRAMEADDR(Op, DAG);
   case ISD::GLOBAL_OFFSET_TABLE: return LowerGLOBAL_OFFSET_TABLE(Op, DAG);
-  case ISD::INTRINSIC_VOID:
-  case ISD::INTRINSIC_W_CHAIN: return LowerINTRINSIC_W_CHAIN(Op, DAG);
   case ISD::INTRINSIC_WO_CHAIN: return LowerINTRINSIC_WO_CHAIN(Op, DAG);
   case ISD::BIT_CONVERT:   return ExpandBIT_CONVERT(Op.getNode(), DAG);
   case ISD::SHL:
   case ISD::SRL:
   case ISD::SRA:           return LowerShift(Op.getNode(), DAG, Subtarget);
+  case ISD::SHL_PARTS:     return LowerShiftLeftParts(Op, DAG);
+  case ISD::SRL_PARTS:
+  case ISD::SRA_PARTS:     return LowerShiftRightParts(Op, DAG);
   case ISD::VSETCC:        return LowerVSETCC(Op, DAG);
   case ISD::BUILD_VECTOR:  return LowerBUILD_VECTOR(Op, DAG);
   case ISD::VECTOR_SHUFFLE: return LowerVECTOR_SHUFFLE(Op, DAG);
@@ -3125,13 +3176,12 @@ static SDValue PerformSUBCombine(SDNode *N,
   return SDValue();
 }
 
-
-/// PerformFMRRDCombine - Target-specific dag combine xforms for ARMISD::FMRRD.
-static SDValue PerformFMRRDCombine(SDNode *N,
+/// PerformVMOVRRDCombine - Target-specific dag combine xforms for
+/// ARMISD::VMOVRRD.
+static SDValue PerformVMOVRRDCombine(SDNode *N,
                                      TargetLowering::DAGCombinerInfo &DCI) {
   // vmovrrd(vmovdrr x, y) -> x,y
   SDValue InDouble = N->getOperand(0);
-  if (InDouble.getOpcode() == ARMISD::FMDRR)
+  if (InDouble.getOpcode() == ARMISD::VMOVDRR)
     return DCI.CombineTo(N, InDouble.getOperand(0), InDouble.getOperand(1));
   return SDValue();
 }
@@ -3426,7 +3476,7 @@ SDValue ARMTargetLowering::PerformDAGCombine(SDNode *N,
   default: break;
   case ISD::ADD:      return PerformADDCombine(N, DCI);
   case ISD::SUB:      return PerformSUBCombine(N, DCI);
-  case ARMISD::FMRRD: return PerformFMRRDCombine(N, DCI);
+  case ARMISD::VMOVRRD: return PerformVMOVRRDCombine(N, DCI);
   case ISD::INTRINSIC_WO_CHAIN:
     return PerformIntrinsicCombine(N, DCI.DAG);
   case ISD::SHL:
@@ -3654,6 +3704,18 @@ bool ARMTargetLowering::isLegalAddressingMode(const AddrMode &AM,
   return true;
 }
 
+/// isLegalICmpImmediate - Return true if the specified immediate is a legal
+/// icmp immediate, that is, the target has icmp instructions that can
+/// compare a register against the immediate without having to materialize
+/// the immediate into a register.
+bool ARMTargetLowering::isLegalICmpImmediate(int64_t Imm) const {
+  if (!Subtarget->isThumb())
+    return ARM_AM::getSOImmVal(Imm) != -1;
+  if (Subtarget->isThumb2())
+    return ARM_AM::getT2SOImmVal(Imm) != -1; 
+  return Imm >= 0 && Imm <= 255;
+}
+
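The ARM-mode branch above defers to getSOImmVal: a data-processing immediate must be an 8-bit value rotated right by an even amount. A self-contained restatement of that predicate (a sketch, not the library routine):

    #include <cstdint>

    // True if V is an 8-bit value rotated right by an even amount, which is
    // what ARM_AM::getSOImmVal(V) != -1 tests.
    static bool isARMSOImm(uint32_t V) {
      for (unsigned Rot = 0; Rot < 32; Rot += 2) {
        // Rotate left by Rot to undo a rotate-right-by-Rot encoding.
        uint32_t R = (V << Rot) | (V >> ((32 - Rot) & 31));
        if (R <= 0xFF)
          return true;
      }
      return false;
    }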
 static bool getARMIndexedAddressParts(SDNode *Ptr, EVT VT,
                                       bool isSEXTLoad, SDValue &Base,
                                       SDValue &Offset, bool &isInc,
@@ -3708,7 +3770,7 @@ static bool getARMIndexedAddressParts(SDNode *Ptr, EVT VT,
     return true;
   }
 
-  // FIXME: Use FLDM / FSTM to emulate indexed FP load / store.
+  // FIXME: Use VLDM / VSTM to emulate indexed FP load / store.
   return false;
 }
 
@@ -4079,3 +4141,60 @@ ARMTargetLowering::isOffsetFoldingLegal(const GlobalAddressSDNode *GA) const {
   // The ARM target isn't yet aware of offsets.
   return false;
 }
+
+int ARM::getVFPf32Imm(const APFloat &FPImm) {
+  APInt Imm = FPImm.bitcastToAPInt();
+  uint32_t Sign = Imm.lshr(31).getZExtValue() & 1;
+  int32_t Exp = (Imm.lshr(23).getSExtValue() & 0xff) - 127;  // -126 to 127
+  int64_t Mantissa = Imm.getZExtValue() & 0x7fffff;  // 23 bits
+
+  // We can handle 4 bits of mantissa.
+  // mantissa = (16+UInt(e:f:g:h))/16.
+  if (Mantissa & 0x7ffff)
+    return -1;
+  Mantissa >>= 19;
+  if ((Mantissa & 0xf) != Mantissa)
+    return -1;
+
+  // We can handle 3 bits of exponent: exp == UInt(NOT(b):c:d)-3
+  if (Exp < -3 || Exp > 4)
+    return -1;
+  Exp = ((Exp+3) & 0x7) ^ 4;
+
+  return ((int)Sign << 7) | (Exp << 4) | Mantissa;
+}
+
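A worked example: 1.0f is 0x3F800000, so Sign = 0, Exp = 0 and Mantissa = 0; both range checks pass and the encoding is (0 << 7) | ((((0+3) & 7) ^ 4) << 4) | 0 = 0x70, the VFP VMOV imm8 for 1.0. The same arithmetic as a standalone sketch (range checks omitted since the value is known to fit):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
      float F = 1.0f;
      uint32_t Bits;
      std::memcpy(&Bits, &F, sizeof Bits);          // 0x3F800000
      uint32_t Sign = Bits >> 31;                   // 0
      int32_t  Exp  = ((Bits >> 23) & 0xff) - 127;  // 0, within [-3, 4]
      uint32_t Mant = Bits & 0x7fffff;              // 0, top 4 bits only
      uint32_t Enc  = (Sign << 7) | ((((Exp + 3) & 0x7) ^ 4) << 4)
                      | (Mant >> 19);
      std::printf("imm8 = 0x%02x\n", Enc);          // imm8 = 0x70
      return 0;
    }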
+int ARM::getVFPf64Imm(const APFloat &FPImm) {
+  APInt Imm = FPImm.bitcastToAPInt();
+  uint64_t Sign = Imm.lshr(63).getZExtValue() & 1;
+  int64_t Exp = (Imm.lshr(52).getSExtValue() & 0x7ff) - 1023;   // -1022 to 1023
+  uint64_t Mantissa = Imm.getZExtValue() & 0xfffffffffffffLL;
+
+  // We can handle 4 bits of mantissa.
+  // mantissa = (16+UInt(e:f:g:h))/16.
+  if (Mantissa & 0xffffffffffffLL)
+    return -1;
+  Mantissa >>= 48;
+  if ((Mantissa & 0xf) != Mantissa)
+    return -1;
+
+  // We can handle 3 bits of exponent: exp == UInt(NOT(b):c:d)-3
+  if (Exp < -3 || Exp > 4)
+    return -1;
+  Exp = ((Exp+3) & 0x7) ^ 4;
+
+  return ((int)Sign << 7) | (Exp << 4) | Mantissa;
+}
+
+/// isFPImmLegal - Returns true if the target can instruction select the
+/// specified FP immediate natively. If false, the legalizer will
+/// materialize the FP immediate as a load from a constant pool.
+bool ARMTargetLowering::isFPImmLegal(const APFloat &Imm, EVT VT) const {
+  if (!Subtarget->hasVFP3())
+    return false;
+  if (VT == MVT::f32)
+    return ARM::getVFPf32Imm(Imm) != -1;
+  if (VT == MVT::f64)
+    return ARM::getVFPf64Imm(Imm) != -1;
+  return false;
+}
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.h b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.h
index 7d85f45..4f31f8a 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.h
@@ -62,8 +62,8 @@ namespace llvm {
       SRA_FLAG,     // V,Flag = sra_flag X -> sra X, 1 + save carry out.
       RRX,          // V = RRX X, Flag     -> srl X, 1 + shift in carry flag.
 
-      FMRRD,        // double to two gprs.
-      FMDRR,        // Two gprs to double.
+      VMOVRRD,      // double to two gprs.
+      VMOVDRR,      // Two gprs to double.
 
       EH_SJLJ_SETJMP,    // SjLj exception handling setjmp.
       EH_SJLJ_LONGJMP,   // SjLj exception handling longjmp.
@@ -137,6 +137,13 @@ namespace llvm {
     /// return the constant being splatted.  The ByteSize field indicates the
     /// number of bytes of each element [1248].
     SDValue getVMOVImm(SDNode *N, unsigned ByteSize, SelectionDAG &DAG);
+
+    /// getVFPf32Imm / getVFPf64Imm - If the given fp immediate can be
+    /// materialized with a VMOV.f32 / VMOV.f64 (i.e. fconsts / fconstd)
+    /// instruction, returns its 8-bit integer representation. Otherwise,
+    /// returns -1.
+    int getVFPf32Imm(const APFloat &FPImm);
+    int getVFPf64Imm(const APFloat &FPImm);
   }
 
   //===--------------------------------------------------------------------===//
@@ -173,6 +180,12 @@ namespace llvm {
     virtual bool isLegalAddressingMode(const AddrMode &AM, const Type *Ty)const;
     bool isLegalT2ScaledAddressingMode(const AddrMode &AM, EVT VT) const;
 
+    /// isLegalICmpImmediate - Return true if the specified immediate is a
+    /// legal icmp immediate, that is, the target has icmp instructions that
+    /// can compare a register against the immediate without having to
+    /// materialize the immediate into a register.
+    virtual bool isLegalICmpImmediate(int64_t Imm) const;
+
     /// getPreIndexedAddressParts - returns true by value, base pointer and
     /// offset pointer and addressing mode by reference if the node's address
     /// can be legally represented as pre-indexed load / store address.
@@ -224,6 +237,12 @@ namespace llvm {
 
     bool isShuffleMaskLegal(const SmallVectorImpl<int> &M, EVT VT) const;
     bool isOffsetFoldingLegal(const GlobalAddressSDNode *GA) const;
+
+    /// isFPImmLegal - Returns true if the target can instruction select the
+    /// specified FP immediate natively. If false, the legalizer will
+    /// materialize the FP immediate as a load from a constant pool.
+    virtual bool isFPImmLegal(const APFloat &Imm, EVT VT) const;
+
   private:
     /// Subtarget - Keep a pointer to the ARMSubtarget around so that we can
     /// make the right decision when generating code for different targets.
@@ -255,6 +274,7 @@ namespace llvm {
                              ISD::ArgFlagsTy Flags);
     SDValue LowerINTRINSIC_W_CHAIN(SDValue Op, SelectionDAG &DAG);
     SDValue LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG);
+    SDValue LowerBlockAddress(SDValue Op, SelectionDAG &DAG);
     SDValue LowerGlobalAddressDarwin(SDValue Op, SelectionDAG &DAG);
     SDValue LowerGlobalAddressELF(SDValue Op, SelectionDAG &DAG);
     SDValue LowerGlobalTLSAddress(SDValue Op, SelectionDAG &DAG);
@@ -264,8 +284,12 @@ namespace llvm {
                                    SelectionDAG &DAG);
     SDValue LowerGLOBAL_OFFSET_TABLE(SDValue Op, SelectionDAG &DAG);
     SDValue LowerBR_JT(SDValue Op, SelectionDAG &DAG);
+    SDValue LowerSELECT_CC(SDValue Op, SelectionDAG &DAG);
+    SDValue LowerBR_CC(SDValue Op, SelectionDAG &DAG);
     SDValue LowerFRAMEADDR(SDValue Op, SelectionDAG &DAG);
     SDValue LowerDYNAMIC_STACKALLOC(SDValue Op, SelectionDAG &DAG);
+    SDValue LowerShiftRightParts(SDValue Op, SelectionDAG &DAG);
+    SDValue LowerShiftLeftParts(SDValue Op, SelectionDAG &DAG);
 
     SDValue EmitTargetCodeForMemcpy(SelectionDAG &DAG, DebugLoc dl,
                                       SDValue Chain,
@@ -301,6 +325,9 @@ namespace llvm {
                   CallingConv::ID CallConv, bool isVarArg,
                   const SmallVectorImpl<ISD::OutputArg> &Outs,
                   DebugLoc dl, SelectionDAG &DAG);
+
+    SDValue getARMCmp(SDValue LHS, SDValue RHS, ISD::CondCode CC,
+                      SDValue &ARMCC, SelectionDAG &DAG, DebugLoc dl);
   };
 }
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
index b3c0028..e76e93c 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
@@ -108,6 +108,15 @@ def IndexModeNone : IndexMode<0>;
 def IndexModePre  : IndexMode<1>;
 def IndexModePost : IndexMode<2>;
 
+// Instruction execution domain.
+class Domain<bits<2> val> {
+  bits<2> Value = val;
+}
+def GenericDomain : Domain<0>;
+def VFPDomain     : Domain<1>; // Instructions in VFP domain only
+def NeonDomain    : Domain<2>; // Instructions in Neon domain only
+def VFPNeonDomain : Domain<3>; // Instructions in both VFP & Neon domains
+
 //===----------------------------------------------------------------------===//
 
 // ARM special operands.
@@ -136,7 +145,7 @@ def s_cc_out : OptionalDefOperand<OtherVT, (ops CCR), (ops (i32 CPSR))> {
 //
 
 class InstARM<AddrMode am, SizeFlagVal sz, IndexMode im,
-              Format f, string cstr, InstrItinClass itin>
+              Format f, Domain d, string cstr, InstrItinClass itin>
   : Instruction {
   field bits<32> Inst;
 
@@ -155,6 +164,9 @@ class InstARM<AddrMode am, SizeFlagVal sz, IndexMode im,
   Format F = f;
   bits<5> Form = F.Value;
 
+  Domain D = d;
+  bits<2> Dom = D.Value;
+
   //
   // Attributes specific to ARM instructions...
   //
@@ -167,7 +179,8 @@ class InstARM<AddrMode am, SizeFlagVal sz, IndexMode im,
 
 class PseudoInst<dag oops, dag iops, InstrItinClass itin, 
                  string asm, list<dag> pattern>
-  : InstARM<AddrModeNone, SizeSpecial, IndexModeNone, Pseudo, "", itin> {
+  : InstARM<AddrModeNone, SizeSpecial, IndexModeNone, Pseudo, GenericDomain, 
+            "", itin> {
   let OutOperandList = oops;
   let InOperandList = iops;
   let AsmString   = asm;
@@ -179,7 +192,7 @@ class I<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
         IndexMode im, Format f, InstrItinClass itin, 
         string opc, string asm, string cstr,
         list<dag> pattern>
-  : InstARM<am, sz, im, f, cstr, itin> {
+  : InstARM<am, sz, im, f, GenericDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = !con(iops, (ops pred:$p));
   let AsmString   = !strconcat(opc, !strconcat("${p}", asm));
@@ -194,7 +207,7 @@ class sI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
          IndexMode im, Format f, InstrItinClass itin,
          string opc, string asm, string cstr,
          list<dag> pattern>
-  : InstARM<am, sz, im, f, cstr, itin> {
+  : InstARM<am, sz, im, f, GenericDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = !con(iops, (ops pred:$p, cc_out:$s));
   let AsmString   = !strconcat(opc, !strconcat("${p}${s}", asm));
@@ -206,7 +219,7 @@ class sI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
 class XI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
          IndexMode im, Format f, InstrItinClass itin,
          string asm, string cstr, list<dag> pattern>
-  : InstARM<am, sz, im, f, cstr, itin> {
+  : InstARM<am, sz, im, f, GenericDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = iops;
   let AsmString   = asm;
@@ -807,7 +820,7 @@ class ARMV6Pat<dag pattern, dag result> : Pat<pattern, result> {
 
 class ThumbI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
              InstrItinClass itin, string asm, string cstr, list<dag> pattern>
-  : InstARM<am, sz, IndexModeNone, ThumbFrm, cstr, itin> {
+  : InstARM<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = iops;
   let AsmString   = asm;
@@ -833,7 +846,7 @@ class TJTI<dag oops, dag iops, InstrItinClass itin, string asm, list<dag> patter
 // Thumb1 only
 class Thumb1I<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
               InstrItinClass itin, string asm, string cstr, list<dag> pattern>
-  : InstARM<am, sz, IndexModeNone, ThumbFrm, cstr, itin> {
+  : InstARM<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = iops;
   let AsmString   = asm;
@@ -861,7 +874,7 @@ class T1It<dag oops, dag iops, InstrItinClass itin,
 class Thumb1sI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
                InstrItinClass itin,
                string opc, string asm, string cstr, list<dag> pattern>
-  : InstARM<am, sz, IndexModeNone, ThumbFrm, cstr, itin> {
+  : InstARM<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
   let OutOperandList = !con(oops, (ops s_cc_out:$s));
   let InOperandList = !con(iops, (ops pred:$p));
   let AsmString = !strconcat(opc, !strconcat("${s}${p}", asm));
@@ -883,7 +896,7 @@ class T1sIt<dag oops, dag iops, InstrItinClass itin,
 class Thumb1pI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
                InstrItinClass itin,
                string opc, string asm, string cstr, list<dag> pattern>
-  : InstARM<am, sz, IndexModeNone, ThumbFrm, cstr, itin> {
+  : InstARM<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = !con(iops, (ops pred:$p));
   let AsmString = !strconcat(opc, !strconcat("${p}", asm));
@@ -918,7 +931,7 @@ class T1pIs<dag oops, dag iops,
 class Thumb2I<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
               InstrItinClass itin,
               string opc, string asm, string cstr, list<dag> pattern>
-  : InstARM<am, sz, IndexModeNone, ThumbFrm, cstr, itin> {
+  : InstARM<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = !con(iops, (ops pred:$p));
   let AsmString = !strconcat(opc, !strconcat("${p}", asm));
@@ -934,7 +947,7 @@ class Thumb2I<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
 class Thumb2sI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
                InstrItinClass itin,
                string opc, string asm, string cstr, list<dag> pattern>
-  : InstARM<am, sz, IndexModeNone, ThumbFrm, cstr, itin> {
+  : InstARM<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = !con(iops, (ops pred:$p, cc_out:$s));
   let AsmString   = !strconcat(opc, !strconcat("${s}${p}", asm));
@@ -946,7 +959,7 @@ class Thumb2sI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
 class Thumb2XI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
                InstrItinClass itin,
                string asm, string cstr, list<dag> pattern>
-  : InstARM<am, sz, IndexModeNone, ThumbFrm, cstr, itin> {
+  : InstARM<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = iops;
   let AsmString   = asm;
@@ -993,7 +1006,7 @@ class T2Ix2<dag oops, dag iops, InstrItinClass itin,
 class T2Iidxldst<dag oops, dag iops, AddrMode am, IndexMode im,
                  InstrItinClass itin,
                  string opc, string asm, string cstr, list<dag> pattern>
-  : InstARM<am, Size4Bytes, im, ThumbFrm, cstr, itin> {
+  : InstARM<am, Size4Bytes, im, ThumbFrm, GenericDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = !con(iops, (ops pred:$p));
   let AsmString = !strconcat(opc, !strconcat("${p}", asm));
@@ -1026,7 +1039,7 @@ class T2Pat<dag pattern, dag result> : Pat<pattern, result> {
 class VFPI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
            IndexMode im, Format f, InstrItinClass itin,
            string opc, string asm, string cstr, list<dag> pattern>
-  : InstARM<am, sz, im, f, cstr, itin> {
+  : InstARM<am, sz, im, f, VFPDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = !con(iops, (ops pred:$p));
   let AsmString   = !strconcat(opc, !strconcat("${p}", asm));
@@ -1038,7 +1051,7 @@ class VFPI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
 class VFPXI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
             IndexMode im, Format f, InstrItinClass itin,
             string asm, string cstr, list<dag> pattern>
-  : InstARM<am, sz, im, f, cstr, itin> {
+  : InstARM<am, sz, im, f, VFPDomain, cstr, itin> {
   let OutOperandList = oops;
   let InOperandList = iops;
   let AsmString   = asm;
@@ -1061,6 +1074,9 @@ class ADI5<bits<4> opcod1, bits<2> opcod2, dag oops, dag iops,
   let Inst{27-24} = opcod1;
   let Inst{21-20} = opcod2;
   let Inst{11-8}  = 0b1011;
+
+  // 64-bit loads & stores operate on both NEON and VFP pipelines.
+  let Dom = VFPNeonDomain.Value;
 }
 
 class ASI5<bits<4> opcod1, bits<2> opcod2, dag oops, dag iops,
@@ -1082,6 +1098,9 @@ class AXDI5<dag oops, dag iops, InstrItinClass itin,
   // TODO: Mark the instructions with the appropriate subtarget info.
   let Inst{27-25} = 0b110;
   let Inst{11-8}  = 0b1011;
+
+  // 64-bit loads & stores operate on both NEON and VFP pipelines.
+  let Dom = VFPNeonDomain.Value;
 }
 
 class AXSI5<dag oops, dag iops, InstrItinClass itin,
@@ -1125,8 +1144,8 @@ class ASuI<bits<8> opcod1, bits<4> opcod2, bits<4> opcod3, dag oops, dag iops,
 // Single precision unary, if no NEON
 // Same as ASuI except not available if NEON is enabled
 class ASuIn<bits<8> opcod1, bits<4> opcod2, bits<4> opcod3, dag oops, dag iops,
-            InstrItinClass itin,    string opc, string asm, list<dag> pattern>
-  : ASuI<opcod1, opcod2, opcod2, oops, iops, itin, opc, asm, pattern> {
+            InstrItinClass itin, string opc, string asm, list<dag> pattern>
+  : ASuI<opcod1, opcod2, opcod3, oops, iops, itin, opc, asm, pattern> {
   list<Predicate> Predicates = [HasVFP2,DontUseNEONForFP];
 }
 
@@ -1198,32 +1217,63 @@ class AVConv5I<bits<8> opcod1, bits<4> opcod2, dag oops, dag iops,
 //
 
 class NeonI<dag oops, dag iops, AddrMode am, IndexMode im, InstrItinClass itin,
-            string asm, string cstr, list<dag> pattern>
-  : InstARM<am, Size4Bytes, im, NEONFrm, cstr, itin> {
+            string opc, string dt, string asm, string cstr, list<dag> pattern>
+  : InstARM<am, Size4Bytes, im, NEONFrm, NeonDomain, cstr, itin> {
   let OutOperandList = oops;
-  let InOperandList = iops;
-  let AsmString = asm;
+  let InOperandList = !con(iops, (ops pred:$p));
+  let AsmString = !strconcat(
+                     !strconcat(!strconcat(opc, "${p}"), !strconcat(".", dt)),
+                     !strconcat("\t", asm));
+  let Pattern = pattern;
+  list<Predicate> Predicates = [HasNEON];
+}
+
+// Same as NeonI except it does not have a "data type" specifier.
+class NeonXI<dag oops, dag iops, AddrMode am, IndexMode im, InstrItinClass itin,
+            string opc, string asm, string cstr, list<dag> pattern>
+  : InstARM<am, Size4Bytes, im, NEONFrm, NeonDomain, cstr, itin> {
+  let OutOperandList = oops;
+  let InOperandList = !con(iops, (ops pred:$p));
+  let AsmString = !strconcat(!strconcat(opc, "${p}"), !strconcat("\t", asm));
   let Pattern = pattern;
   list<Predicate> Predicates = [HasNEON];
 }
 
-class NI<dag oops, dag iops, InstrItinClass itin, string asm, list<dag> pattern>
-  : NeonI<oops, iops, AddrModeNone, IndexModeNone, itin, asm, "", pattern> {
+class NI<dag oops, dag iops, InstrItinClass itin, string opc, string asm,
+         list<dag> pattern>
+  : NeonXI<oops, iops, AddrModeNone, IndexModeNone, itin, opc, asm, "",
+          pattern> {
 }
 
-class NI4<dag oops, dag iops, InstrItinClass itin, string asm, list<dag> pattern>
-  : NeonI<oops, iops, AddrMode4, IndexModeNone, itin, asm, "", pattern> {
+class NI4<dag oops, dag iops, InstrItinClass itin, string opc,
+          string asm, list<dag> pattern>
+  : NeonXI<oops, iops, AddrMode4, IndexModeNone, itin, opc, asm, "",
+          pattern> {
 }
 
-class NLdSt<dag oops, dag iops, InstrItinClass itin,
-            string asm, string cstr, list<dag> pattern>
-  : NeonI<oops, iops, AddrMode6, IndexModeNone, itin, asm, cstr, pattern> {
+class NLdSt<bit op23, bits<2> op21_20, bits<4> op11_8, bits<4> op7_4,
+            dag oops, dag iops, InstrItinClass itin,
+            string opc, string dt, string asm, string cstr, list<dag> pattern>
+  : NeonI<oops, iops, AddrMode6, IndexModeNone, itin, opc, dt, asm, cstr,
+          pattern> {
   let Inst{31-24} = 0b11110100;
+  let Inst{23} = op23;
+  let Inst{21-20} = op21_20;
+  let Inst{11-8} = op11_8;
+  let Inst{7-4} = op7_4;
 }
 
 class NDataI<dag oops, dag iops, InstrItinClass itin,
-             string asm, string cstr, list<dag> pattern>
-  : NeonI<oops, iops, AddrModeNone, IndexModeNone, itin, asm, cstr, pattern> {
+             string opc, string dt, string asm, string cstr, list<dag> pattern>
+  : NeonI<oops, iops, AddrModeNone, IndexModeNone, itin, opc, dt, asm,
+         cstr, pattern> {
+  let Inst{31-25} = 0b1111001;
+}
+
+class NDataXI<dag oops, dag iops, InstrItinClass itin,
+             string opc, string asm, string cstr, list<dag> pattern>
+  : NeonXI<oops, iops, AddrModeNone, IndexModeNone, itin, opc, asm,
+         cstr, pattern> {
   let Inst{31-25} = 0b1111001;
 }
 
@@ -1231,8 +1281,8 @@ class NDataI<dag oops, dag iops, InstrItinClass itin,
 class N1ModImm<bit op23, bits<3> op21_19, bits<4> op11_8, bit op7, bit op6,
                bit op5, bit op4,
                dag oops, dag iops, InstrItinClass itin,
-               string asm, string cstr, list<dag> pattern>
-  : NDataI<oops, iops, itin, asm, cstr, pattern> {
+               string opc, string dt, string asm, string cstr, list<dag> pattern>
+  : NDataI<oops, iops, itin, opc, dt, asm, cstr, pattern> {
   let Inst{23} = op23;
   let Inst{21-19} = op21_19;
   let Inst{11-8} = op11_8;
@@ -1246,8 +1296,23 @@ class N1ModImm<bit op23, bits<3> op21_19, bits<4> op11_8, bit op7, bit op6,
 class N2V<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18, bits<2> op17_16,
           bits<5> op11_7, bit op6, bit op4,
           dag oops, dag iops, InstrItinClass itin,
-          string asm, string cstr, list<dag> pattern>
-  : NDataI<oops, iops, itin, asm, cstr, pattern> {
+          string opc, string dt, string asm, string cstr, list<dag> pattern>
+  : NDataI<oops, iops, itin, opc, dt, asm, cstr, pattern> {
+  let Inst{24-23} = op24_23;
+  let Inst{21-20} = op21_20;
+  let Inst{19-18} = op19_18;
+  let Inst{17-16} = op17_16;
+  let Inst{11-7} = op11_7;
+  let Inst{6} = op6;
+  let Inst{4} = op4;
+}
+
+// Same as N2V except it doesn't have a datatype suffix.
+class N2VX<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18, bits<2> op17_16,
+          bits<5> op11_7, bit op6, bit op4,
+          dag oops, dag iops, InstrItinClass itin,
+          string opc, string asm, string cstr, list<dag> pattern>
+  : NDataXI<oops, iops, itin, opc, asm, cstr, pattern> {
   let Inst{24-23} = op24_23;
   let Inst{21-20} = op21_20;
   let Inst{19-18} = op19_18;
@@ -1258,14 +1323,12 @@ class N2V<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18, bits<2> op17_16,
 }
 
 // NEON 2 vector register with immediate.
-class N2VImm<bit op24, bit op23, bits<6> op21_16, bits<4> op11_8, bit op7,
-             bit op6, bit op4,
+class N2VImm<bit op24, bit op23, bits<4> op11_8, bit op7, bit op6, bit op4,
              dag oops, dag iops, InstrItinClass itin,
-             string asm, string cstr, list<dag> pattern>
-  : NDataI<oops, iops, itin, asm, cstr, pattern> {
+             string opc, string dt, string asm, string cstr, list<dag> pattern>
+  : NDataI<oops, iops, itin, opc, dt, asm, cstr, pattern> {
   let Inst{24} = op24;
   let Inst{23} = op23;
-  let Inst{21-16} = op21_16;
   let Inst{11-8} = op11_8;
   let Inst{7} = op7;
   let Inst{6} = op6;
@@ -1275,8 +1338,21 @@ class N2VImm<bit op24, bit op23, bits<6> op21_16, bits<4> op11_8, bit op7,
 // NEON 3 vector register format.
 class N3V<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op6, bit op4,
           dag oops, dag iops, InstrItinClass itin,
-          string asm, string cstr, list<dag> pattern>
-  : NDataI<oops, iops, itin, asm, cstr, pattern> {
+          string opc, string dt, string asm, string cstr, list<dag> pattern>
+  : NDataI<oops, iops, itin, opc, dt, asm, cstr, pattern> {
+  let Inst{24} = op24;
+  let Inst{23} = op23;
+  let Inst{21-20} = op21_20;
+  let Inst{11-8} = op11_8;
+  let Inst{6} = op6;
+  let Inst{4} = op4;
+}
+
+// Same as N3V except it doesn't have a data type suffix.
+class N3VX<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op6, bit op4,
+          dag oops, dag iops, InstrItinClass itin,
+          string opc, string asm, string cstr, list<dag> pattern>
+  : NDataXI<oops, iops, itin, opc, asm, cstr, pattern> {
   let Inst{24} = op24;
   let Inst{23} = op23;
   let Inst{21-20} = op21_20;
@@ -1288,29 +1364,37 @@ class N3V<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op6, bit op4,
 // NEON VMOVs between scalar and core registers.
 class NVLaneOp<bits<8> opcod1, bits<4> opcod2, bits<2> opcod3,
                dag oops, dag iops, Format f, InstrItinClass itin,
-               string opc, string asm, list<dag> pattern>
-  : AI<oops, iops, f, itin, opc, asm, pattern> {
+               string opc, string dt, string asm, list<dag> pattern>
+  : InstARM<AddrModeNone, Size4Bytes, IndexModeNone, f, GenericDomain,
+    "", itin> {
   let Inst{27-20} = opcod1;
   let Inst{11-8} = opcod2;
   let Inst{6-5} = opcod3;
   let Inst{4} = 1;
+
+  let OutOperandList = oops;
+  let InOperandList = !con(iops, (ops pred:$p));
+  let AsmString = !strconcat(
+                     !strconcat(!strconcat(opc, "${p}"), !strconcat(".", dt)),
+                     !strconcat("\t", asm));
+  let Pattern = pattern;
   list<Predicate> Predicates = [HasNEON];
 }
 class NVGetLane<bits<8> opcod1, bits<4> opcod2, bits<2> opcod3,
                 dag oops, dag iops, InstrItinClass itin,
-                string opc, string asm, list<dag> pattern>
+                string opc, string dt, string asm, list<dag> pattern>
   : NVLaneOp<opcod1, opcod2, opcod3, oops, iops, NEONGetLnFrm, itin,
-             opc, asm, pattern>;
+             opc, dt, asm, pattern>;
 class NVSetLane<bits<8> opcod1, bits<4> opcod2, bits<2> opcod3,
                 dag oops, dag iops, InstrItinClass itin,
-                string opc, string asm, list<dag> pattern>
+                string opc, string dt, string asm, list<dag> pattern>
   : NVLaneOp<opcod1, opcod2, opcod3, oops, iops, NEONSetLnFrm, itin,
-             opc, asm, pattern>;
+             opc, dt, asm, pattern>;
 class NVDup<bits<8> opcod1, bits<4> opcod2, bits<2> opcod3,
             dag oops, dag iops, InstrItinClass itin,
-            string opc, string asm, list<dag> pattern>
+            string opc, string dt, string asm, list<dag> pattern>
   : NVLaneOp<opcod1, opcod2, opcod3, oops, iops, NEONDupFrm, itin,
-             opc, asm, pattern>;
+             opc, dt, asm, pattern>;
 
 // NEONFPPat - Same as Pat<>, but requires that the compiler be using NEON
 // for single-precision FP.
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.cpp
index 4c92891..87bb12b 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.cpp
@@ -22,11 +22,10 @@
 #include "llvm/CodeGen/MachineInstrBuilder.h"
 #include "llvm/CodeGen/MachineJumpTableInfo.h"
 #include "llvm/MC/MCAsmInfo.h"
-#include "llvm/Support/CommandLine.h"
 using namespace llvm;
 
 ARMInstrInfo::ARMInstrInfo(const ARMSubtarget &STI)
-  : RI(*this, STI), Subtarget(STI) {
+  : ARMBaseInstrInfo(STI), RI(*this, STI) {
 }
 
 unsigned ARMInstrInfo::getUnindexedOpcode(unsigned Opc) const {
@@ -68,6 +67,7 @@ bool ARMInstrInfo::BlockHasNoFallThrough(const MachineBasicBlock &MBB) const {
   case ARM::BX_RET:   // Return.
   case ARM::LDM_RET:
   case ARM::B:
+  case ARM::BRIND:
   case ARM::BR_JTr:   // Jumptable branch.
   case ARM::BR_JTm:   // Jumptable branch through mem.
   case ARM::BR_JTadd: // Jumptable branch add to pc.
@@ -80,22 +80,26 @@ bool ARMInstrInfo::BlockHasNoFallThrough(const MachineBasicBlock &MBB) const {
 }
 
 void ARMInstrInfo::
-reMaterialize(MachineBasicBlock &MBB,
-              MachineBasicBlock::iterator I,
-              unsigned DestReg, unsigned SubIdx,
-              const MachineInstr *Orig) const {
+reMaterialize(MachineBasicBlock &MBB, MachineBasicBlock::iterator I,
+              unsigned DestReg, unsigned SubIdx, const MachineInstr *Orig,
+              const TargetRegisterInfo *TRI) const {
   DebugLoc dl = Orig->getDebugLoc();
-  if (Orig->getOpcode() == ARM::MOVi2pieces) {
+  unsigned Opcode = Orig->getOpcode();
+  switch (Opcode) {
+  default:
+    break;
+  case ARM::MOVi2pieces: {
     RI.emitLoadConstPool(MBB, I, dl,
                          DestReg, SubIdx,
                          Orig->getOperand(1).getImm(),
                          (ARMCC::CondCodes)Orig->getOperand(2).getImm(),
                          Orig->getOperand(3).getReg());
+    MachineInstr *NewMI = prior(I);
+    NewMI->getOperand(0).setSubReg(SubIdx);
     return;
   }
+  }
 
-  MachineInstr *MI = MBB.getParent()->CloneMachineInstr(Orig);
-  MI->getOperand(0).setReg(DestReg);
-  MBB.insert(I, MI);
+  return ARMBaseInstrInfo::reMaterialize(MBB, I, DestReg, SubIdx, Orig, TRI);
 }
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.h b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.h
index c616949..4319577 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.h
@@ -25,7 +25,6 @@ namespace llvm {
 
 class ARMInstrInfo : public ARMBaseInstrInfo {
   ARMRegisterInfo RI;
-  const ARMSubtarget &Subtarget;
 public:
   explicit ARMInstrInfo(const ARMSubtarget &STI);
 
@@ -36,15 +35,16 @@ public:
   // Return true if the block does not fall through.
   bool BlockHasNoFallThrough(const MachineBasicBlock &MBB) const;
 
+  void reMaterialize(MachineBasicBlock &MBB, MachineBasicBlock::iterator MI,
+                     unsigned DestReg, unsigned SubIdx,
+                     const MachineInstr *Orig,
+                     const TargetRegisterInfo *TRI) const;
+
   /// getRegisterInfo - TargetInstrInfo is a superset of MRegister info.  As
   /// such, whenever a client has an instance of instruction info, it should
   /// always be able to get register info as well (through this method).
   ///
   const ARMRegisterInfo &getRegisterInfo() const { return RI; }
-
-  void reMaterialize(MachineBasicBlock &MBB, MachineBasicBlock::iterator MI,
-                     unsigned DestReg, unsigned SubIdx,
-                     const MachineInstr *Orig) const;
 };
 
 }
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
index 9571ecd..0a8ecc0 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
@@ -116,6 +116,10 @@ def IsNotDarwin : Predicate<"!Subtarget->isTargetDarwin()">;
 def CarryDefIsUnused : Predicate<"!N.getNode()->hasAnyUseOfValue(1)">;
 def CarryDefIsUsed   : Predicate<"N.getNode()->hasAnyUseOfValue(1)">;
 
+// FIXME: Eventually this will be just "hasV6T2Ops".
+def UseMovt   : Predicate<"Subtarget->useMovt()">;
+def DontUseMovt : Predicate<"!Subtarget->useMovt()">;
+
 //===----------------------------------------------------------------------===//
 // ARM Flag Definitions.
 
@@ -204,7 +208,7 @@ def hi16 : SDNodeXForm<imm, [{
 def lo16AllZero : PatLeaf<(i32 imm), [{
   // Returns true if all low 16-bits are 0.
   return (((uint32_t)N->getZExtValue()) & 0xFFFFUL) == 0;
-  }], hi16>;
+}], hi16>;
 
 /// imm0_65535 predicate - True if the 32-bit immediate is in the range 
 /// [0,65535].
@@ -284,6 +288,26 @@ def so_imm2part_2 : SDNodeXForm<imm, [{
   return CurDAG->getTargetConstant(V, MVT::i32);
 }]>;
 
+def so_neg_imm2part : Operand<i32>, PatLeaf<(imm), [{
+      return ARM_AM::isSOImmTwoPartVal(-(int)N->getZExtValue());
+    }]> {
+  let PrintMethod = "printSOImm2PartOperand";
+}
+
+def so_neg_imm2part_1 : SDNodeXForm<imm, [{
+  unsigned V = ARM_AM::getSOImmTwoPartFirst(-(int)N->getZExtValue());
+  return CurDAG->getTargetConstant(V, MVT::i32);
+}]>;
+
+def so_neg_imm2part_2 : SDNodeXForm<imm, [{
+  unsigned V = ARM_AM::getSOImmTwoPartSecond(-(int)N->getZExtValue());
+  return CurDAG->getTargetConstant(V, MVT::i32);
+}]>;
+
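These leaves match constants whose negation splits into two SO immediates, presumably so an add of a negative constant can be rewritten as two subtracts: -257 qualifies, since 257 = 0x100 + 0x001 and both parts are legal immediates, so a hypothetical "add r0, r0, #-257" becomes "sub r0, r0, #256" followed by "sub r0, r0, #1". A quick check under that assumption, reusing the SO-immediate sketch from earlier:

    #include <cassert>
    #include <cstdint>

    static bool isARMSOImm(uint32_t V) {  // as in the earlier sketch
      for (unsigned Rot = 0; Rot < 32; Rot += 2) {
        uint32_t R = (V << Rot) | (V >> ((32 - Rot) & 31));
        if (R <= 0xFF) return true;
      }
      return false;
    }

    int main() {
      uint32_t V = 257;  // -(-257)
      assert(isARMSOImm(0x100) && isARMSOImm(0x001) && 0x100 + 0x001 == V);
      return 0;
    }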
+/// imm0_31 predicate - True if the 32-bit immediate is in the range [0,31].
+def imm0_31 : Operand<i32>, PatLeaf<(imm), [{
+  return (int32_t)N->getZExtValue() < 32;
+}]>;
 
 // Define ARM specific addressing modes.
 
@@ -336,9 +360,9 @@ def addrmode5 : Operand<i32>,
 // addrmode6 := reg with optional writeback
 //
 def addrmode6 : Operand<i32>,
-                ComplexPattern<i32, 3, "SelectAddrMode6", []> {
+                ComplexPattern<i32, 4, "SelectAddrMode6", []> {
   let PrintMethod = "printAddrMode6Operand";
-  let MIOperandInfo = (ops GPR:$addr, GPR:$upd, i32imm);
+  let MIOperandInfo = (ops GPR:$addr, GPR:$upd, i32imm, i32imm);
 }
 
 // addrmodepc := pc + reg
@@ -366,42 +390,47 @@ include "ARMInstrFormats.td"
 multiclass AsI1_bin_irs<bits<4> opcod, string opc, PatFrag opnode,
                         bit Commutable = 0> {
   def ri : AsI1<opcod, (outs GPR:$dst), (ins GPR:$a, so_imm:$b), DPFrm,
-               IIC_iALUi, opc, " $dst, $a, $b",
+               IIC_iALUi, opc, "\t$dst, $a, $b",
                [(set GPR:$dst, (opnode GPR:$a, so_imm:$b))]> {
     let Inst{25} = 1;
   }
   def rr : AsI1<opcod, (outs GPR:$dst), (ins GPR:$a, GPR:$b), DPFrm,
-               IIC_iALUr, opc, " $dst, $a, $b",
+               IIC_iALUr, opc, "\t$dst, $a, $b",
                [(set GPR:$dst, (opnode GPR:$a, GPR:$b))]> {
+    let Inst{11-4} = 0b00000000;
     let Inst{25} = 0;
     let isCommutable = Commutable;
   }
   def rs : AsI1<opcod, (outs GPR:$dst), (ins GPR:$a, so_reg:$b), DPSoRegFrm,
-               IIC_iALUsr, opc, " $dst, $a, $b",
+               IIC_iALUsr, opc, "\t$dst, $a, $b",
                [(set GPR:$dst, (opnode GPR:$a, so_reg:$b))]> {
     let Inst{25} = 0;
   }
 }
 
 /// AI1_bin_s_irs - Similar to AsI1_bin_irs except it sets the 's' bit so the
-/// instruction modifies the CSPR register.
+/// instruction modifies the CPSR register.
 let Defs = [CPSR] in {
 multiclass AI1_bin_s_irs<bits<4> opcod, string opc, PatFrag opnode,
                          bit Commutable = 0> {
   def ri : AI1<opcod, (outs GPR:$dst), (ins GPR:$a, so_imm:$b), DPFrm,
-               IIC_iALUi, opc, "s $dst, $a, $b",
+               IIC_iALUi, opc, "\t$dst, $a, $b",
                [(set GPR:$dst, (opnode GPR:$a, so_imm:$b))]> {
+    let Inst{20} = 1;
     let Inst{25} = 1;
   }
   def rr : AI1<opcod, (outs GPR:$dst), (ins GPR:$a, GPR:$b), DPFrm,
-               IIC_iALUr, opc, "s $dst, $a, $b",
+               IIC_iALUr, opc, "\t$dst, $a, $b",
                [(set GPR:$dst, (opnode GPR:$a, GPR:$b))]> {
     let isCommutable = Commutable;
-	let Inst{25} = 0;
+    let Inst{11-4} = 0b00000000;
+    let Inst{20} = 1;
+    let Inst{25} = 0;
   }
   def rs : AI1<opcod, (outs GPR:$dst), (ins GPR:$a, so_reg:$b), DPSoRegFrm,
-               IIC_iALUsr, opc, "s $dst, $a, $b",
+               IIC_iALUsr, opc, "\t$dst, $a, $b",
                [(set GPR:$dst, (opnode GPR:$a, so_reg:$b))]> {
+    let Inst{20} = 1;
     let Inst{25} = 0;
   }
 }
@@ -414,19 +443,23 @@ let Defs = [CPSR] in {
 multiclass AI1_cmp_irs<bits<4> opcod, string opc, PatFrag opnode,
                        bit Commutable = 0> {
   def ri : AI1<opcod, (outs), (ins GPR:$a, so_imm:$b), DPFrm, IIC_iCMPi,
-               opc, " $a, $b",
+               opc, "\t$a, $b",
                [(opnode GPR:$a, so_imm:$b)]> {
+    let Inst{20} = 1;
     let Inst{25} = 1;
   }
   def rr : AI1<opcod, (outs), (ins GPR:$a, GPR:$b), DPFrm, IIC_iCMPr,
-               opc, " $a, $b",
+               opc, "\t$a, $b",
                [(opnode GPR:$a, GPR:$b)]> {
+    let Inst{11-4} = 0b00000000;
+    let Inst{20} = 1;
     let Inst{25} = 0;
     let isCommutable = Commutable;
   }
   def rs : AI1<opcod, (outs), (ins GPR:$a, so_reg:$b), DPSoRegFrm, IIC_iCMPsr,
-               opc, " $a, $b",
+               opc, "\t$a, $b",
                [(opnode GPR:$a, so_reg:$b)]> {
+    let Inst{20} = 1;
     let Inst{25} = 0;
   }
 }
@@ -437,28 +470,31 @@ multiclass AI1_cmp_irs<bits<4> opcod, string opc, PatFrag opnode,
 /// FIXME: Remove the 'r' variant. Its rot_imm is zero.
 multiclass AI_unary_rrot<bits<8> opcod, string opc, PatFrag opnode> {
   def r     : AExtI<opcod, (outs GPR:$dst), (ins GPR:$src),
-                 IIC_iUNAr, opc, " $dst, $src",
+                 IIC_iUNAr, opc, "\t$dst, $src",
                  [(set GPR:$dst, (opnode GPR:$src))]>,
               Requires<[IsARM, HasV6]> {
-                let Inst{19-16} = 0b1111;
-              }
+    let Inst{11-10} = 0b00;
+    let Inst{19-16} = 0b1111;
+  }
   def r_rot : AExtI<opcod, (outs GPR:$dst), (ins GPR:$src, i32imm:$rot),
-                 IIC_iUNAsi, opc, " $dst, $src, ror $rot",
+                 IIC_iUNAsi, opc, "\t$dst, $src, ror $rot",
                  [(set GPR:$dst, (opnode (rotr GPR:$src, rot_imm:$rot)))]>,
               Requires<[IsARM, HasV6]> {
-                let Inst{19-16} = 0b1111;
-              }
+    let Inst{19-16} = 0b1111;
+  }
 }
 
 /// AI_bin_rrot - A binary operation with two forms: one whose operand is a
 /// register and one whose operand is a register rotated by 8/16/24.
 multiclass AI_bin_rrot<bits<8> opcod, string opc, PatFrag opnode> {
   def rr     : AExtI<opcod, (outs GPR:$dst), (ins GPR:$LHS, GPR:$RHS),
-                  IIC_iALUr, opc, " $dst, $LHS, $RHS",
+                  IIC_iALUr, opc, "\t$dst, $LHS, $RHS",
                   [(set GPR:$dst, (opnode GPR:$LHS, GPR:$RHS))]>,
-                  Requires<[IsARM, HasV6]>;
+               Requires<[IsARM, HasV6]> {
+    let Inst{11-10} = 0b00;
+  }
   def rr_rot : AExtI<opcod, (outs GPR:$dst), (ins GPR:$LHS, GPR:$RHS, i32imm:$rot),
-                  IIC_iALUsi, opc, " $dst, $LHS, $RHS, ror $rot",
+                  IIC_iALUsi, opc, "\t$dst, $LHS, $RHS, ror $rot",
                   [(set GPR:$dst, (opnode GPR:$LHS,
                                           (rotr GPR:$RHS, rot_imm:$rot)))]>,
                   Requires<[IsARM, HasV6]>;
@@ -469,48 +505,58 @@ let Uses = [CPSR] in {
 multiclass AI1_adde_sube_irs<bits<4> opcod, string opc, PatFrag opnode,
                              bit Commutable = 0> {
   def ri : AsI1<opcod, (outs GPR:$dst), (ins GPR:$a, so_imm:$b),
-                DPFrm, IIC_iALUi, opc, " $dst, $a, $b",
+                DPFrm, IIC_iALUi, opc, "\t$dst, $a, $b",
                [(set GPR:$dst, (opnode GPR:$a, so_imm:$b))]>,
                Requires<[IsARM, CarryDefIsUnused]> {
     let Inst{25} = 1;
   }
   def rr : AsI1<opcod, (outs GPR:$dst), (ins GPR:$a, GPR:$b),
-                DPFrm, IIC_iALUr, opc, " $dst, $a, $b",
+                DPFrm, IIC_iALUr, opc, "\t$dst, $a, $b",
                [(set GPR:$dst, (opnode GPR:$a, GPR:$b))]>,
                Requires<[IsARM, CarryDefIsUnused]> {
     let isCommutable = Commutable;
+    let Inst{11-4} = 0b00000000;
     let Inst{25} = 0;
   }
   def rs : AsI1<opcod, (outs GPR:$dst), (ins GPR:$a, so_reg:$b),
-                DPSoRegFrm, IIC_iALUsr, opc, " $dst, $a, $b",
+                DPSoRegFrm, IIC_iALUsr, opc, "\t$dst, $a, $b",
                [(set GPR:$dst, (opnode GPR:$a, so_reg:$b))]>,
                Requires<[IsARM, CarryDefIsUnused]> {
     let Inst{25} = 0;
   }
-  // Carry setting variants
+}
+// Carry setting variants
+let Defs = [CPSR] in {
+multiclass AI1_adde_sube_s_irs<bits<4> opcod, string opc, PatFrag opnode,
+                             bit Commutable = 0> {
   def Sri : AXI1<opcod, (outs GPR:$dst), (ins GPR:$a, so_imm:$b),
-                DPFrm, IIC_iALUi, !strconcat(opc, "s $dst, $a, $b"),
+                DPFrm, IIC_iALUi, !strconcat(opc, "\t$dst, $a, $b"),
                [(set GPR:$dst, (opnode GPR:$a, so_imm:$b))]>,
                Requires<[IsARM, CarryDefIsUsed]> {
     let Defs = [CPSR];
+    let Inst{20} = 1;
     let Inst{25} = 1;
   }
   def Srr : AXI1<opcod, (outs GPR:$dst), (ins GPR:$a, GPR:$b),
-                DPFrm, IIC_iALUr, !strconcat(opc, "s $dst, $a, $b"),
+                DPFrm, IIC_iALUr, !strconcat(opc, "\t$dst, $a, $b"),
                [(set GPR:$dst, (opnode GPR:$a, GPR:$b))]>,
                Requires<[IsARM, CarryDefIsUsed]> {
     let Defs = [CPSR];
+    let Inst{11-4} = 0b00000000;
+    let Inst{20} = 1;
     let Inst{25} = 0;
   }
   def Srs : AXI1<opcod, (outs GPR:$dst), (ins GPR:$a, so_reg:$b),
-                DPSoRegFrm, IIC_iALUsr, !strconcat(opc, "s $dst, $a, $b"),
+                DPSoRegFrm, IIC_iALUsr, !strconcat(opc, "\t$dst, $a, $b"),
                [(set GPR:$dst, (opnode GPR:$a, so_reg:$b))]>,
                Requires<[IsARM, CarryDefIsUsed]> {
     let Defs = [CPSR];
+    let Inst{20} = 1;
     let Inst{25} = 0;
   }
 }
 }
+}
 
 //===----------------------------------------------------------------------===//
 // Instructions
@@ -542,51 +588,44 @@ PseudoInst<(outs), (ins i32imm:$amt, pred:$p), NoItinerary,
            [(ARMcallseq_start timm:$amt)]>;
 }
 
-def DWARF_LOC :
-PseudoInst<(outs), (ins i32imm:$line, i32imm:$col, i32imm:$file), NoItinerary,
-           ".loc $file, $line, $col",
-           [(dwarf_loc (i32 imm:$line), (i32 imm:$col), (i32 imm:$file))]>;
-
-
 // Address computation and loads and stores in PIC mode.
 let isNotDuplicable = 1 in {
 def PICADD : AXI1<0b0100, (outs GPR:$dst), (ins GPR:$a, pclabel:$cp, pred:$p),
-                  Pseudo, IIC_iALUr, "\n$cp:\n\tadd$p $dst, pc, $a",
+                  Pseudo, IIC_iALUr, "\n$cp:\n\tadd$p\t$dst, pc, $a",
                    [(set GPR:$dst, (ARMpic_add GPR:$a, imm:$cp))]>;
 
 let AddedComplexity = 10 in {
-let canFoldAsLoad = 1 in
 def PICLDR  : AXI2ldw<(outs GPR:$dst), (ins addrmodepc:$addr, pred:$p),
-                  Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldr$p $dst, $addr",
+                  Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldr$p\t$dst, $addr",
                   [(set GPR:$dst, (load addrmodepc:$addr))]>;
 
 def PICLDRH : AXI3ldh<(outs GPR:$dst), (ins addrmodepc:$addr, pred:$p),
-                 Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldr${p}h $dst, $addr",
+                Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldr${p}h\t$dst, $addr",
                   [(set GPR:$dst, (zextloadi16 addrmodepc:$addr))]>;
 
 def PICLDRB : AXI2ldb<(outs GPR:$dst), (ins addrmodepc:$addr, pred:$p),
-                 Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldr${p}b $dst, $addr",
+                Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldr${p}b\t$dst, $addr",
                   [(set GPR:$dst, (zextloadi8 addrmodepc:$addr))]>;
 
 def PICLDRSH : AXI3ldsh<(outs GPR:$dst), (ins addrmodepc:$addr, pred:$p),
-                Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldr${p}sh $dst, $addr",
+               Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldr${p}sh\t$dst, $addr",
                   [(set GPR:$dst, (sextloadi16 addrmodepc:$addr))]>;
 
 def PICLDRSB : AXI3ldsb<(outs GPR:$dst), (ins addrmodepc:$addr, pred:$p),
-                Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldr${p}sb $dst, $addr",
+               Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldr${p}sb\t$dst, $addr",
                   [(set GPR:$dst, (sextloadi8 addrmodepc:$addr))]>;
 }
 let AddedComplexity = 10 in {
 def PICSTR  : AXI2stw<(outs), (ins GPR:$src, addrmodepc:$addr, pred:$p),
-               Pseudo, IIC_iStorer, "\n${addr:label}:\n\tstr$p $src, $addr",
+               Pseudo, IIC_iStorer, "\n${addr:label}:\n\tstr$p\t$src, $addr",
                [(store GPR:$src, addrmodepc:$addr)]>;
 
 def PICSTRH : AXI3sth<(outs), (ins GPR:$src, addrmodepc:$addr, pred:$p),
-               Pseudo, IIC_iStorer, "\n${addr:label}:\n\tstr${p}h $src, $addr",
+               Pseudo, IIC_iStorer, "\n${addr:label}:\n\tstrh${p}\t$src, $addr",
                [(truncstorei16 GPR:$src, addrmodepc:$addr)]>;
 
 def PICSTRB : AXI2stb<(outs), (ins GPR:$src, addrmodepc:$addr, pred:$p),
-               Pseudo, IIC_iStorer, "\n${addr:label}:\n\tstr${p}b $src, $addr",
+               Pseudo, IIC_iStorer, "\n${addr:label}:\n\tstrb${p}\t$src, $addr",
                [(truncstorei8 GPR:$src, addrmodepc:$addr)]>;
 }
 } // isNotDuplicable = 1
@@ -596,10 +635,10 @@ def PICSTRB : AXI2stb<(outs), (ins GPR:$src, addrmodepc:$addr, pred:$p),
 // assembler.
 def LEApcrel : AXI1<0x0, (outs GPR:$dst), (ins i32imm:$label, pred:$p),
                     Pseudo, IIC_iALUi,
-            !strconcat(!strconcat(".set ${:private}PCRELV${:uid}, ($label-(",
-                                  "${:private}PCRELL${:uid}+8))\n"),
-                       !strconcat("${:private}PCRELL${:uid}:\n\t",
-                                  "add$p $dst, pc, #${:private}PCRELV${:uid}")),
+           !strconcat(!strconcat(".set ${:private}PCRELV${:uid}, ($label-(",
+                                 "${:private}PCRELL${:uid}+8))\n"),
+                      !strconcat("${:private}PCRELL${:uid}:\n\t",
+                                 "add$p\t$dst, pc, #${:private}PCRELV${:uid}")),
                    []>;
 
 def LEApcrelJT : AXI1<0x0, (outs GPR:$dst),
@@ -609,7 +648,7 @@ def LEApcrelJT : AXI1<0x0, (outs GPR:$dst),
                          "(${label}_${id}-(",
                                   "${:private}PCRELL${:uid}+8))\n"),
                        !strconcat("${:private}PCRELL${:uid}:\n\t",
-                                  "add$p $dst, pc, #${:private}PCRELV${:uid}")),
+                                  "add$p\t$dst, pc, #${:private}PCRELV${:uid}")),
                    []> {
     let Inst{25} = 1;
 }
@@ -620,19 +659,31 @@ def LEApcrelJT : AXI1<0x0, (outs GPR:$dst),
 
 let isReturn = 1, isTerminator = 1, isBarrier = 1 in
   def BX_RET : AI<(outs), (ins), BrMiscFrm, IIC_Br, 
-                  "bx", " lr", [(ARMretflag)]> {
+                  "bx", "\tlr", [(ARMretflag)]> {
+  let Inst{3-0}   = 0b1110;
   let Inst{7-4}   = 0b0001;
   let Inst{19-8}  = 0b111111111111;
   let Inst{27-20} = 0b00010010;
 }
 
+// Indirect branches
+let isBranch = 1, isTerminator = 1, isBarrier = 1, isIndirectBranch = 1 in {
+  def BRIND : AXI<(outs), (ins GPR:$dst), BrMiscFrm, IIC_Br, "bx\t$dst",
+                  [(brind GPR:$dst)]> {
+    let Inst{7-4}   = 0b0001;
+    let Inst{19-8}  = 0b111111111111;
+    let Inst{27-20} = 0b00010010;
+    let Inst{31-28} = 0b1110;
+  }
+}
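// Note: with the condition field (bits 31-28) hardwired to 0b1110 (AL),
// BRIND is an unconditional register-indirect branch, e.g.:
//   bx r2    @ pc := r2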
+
 // FIXME: remove when we have a way to mark a MI with these properties.
 // FIXME: Should pc be an implicit operand like PICADD, etc?
 let isReturn = 1, isTerminator = 1, isBarrier = 1, mayLoad = 1,
     hasExtraDefRegAllocReq = 1 in
   def LDM_RET : AXI4ld<(outs),
                     (ins addrmode4:$addr, pred:$p, reglist:$wb, variable_ops),
-                    LdStMulFrm, IIC_Br, "ldm${p}${addr:submode} $addr, $wb",
+                    LdStMulFrm, IIC_Br, "ldm${addr:submode}${p}\t$addr, $wb",
                     []>;
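// Note: printing ${addr:submode} before ${p} moves the condition suffix to
// the end of the mnemonic, e.g. "ldmiaeq" rather than "ldmeqia"; presumably
// this matches the suffix order the assembler expects.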
 
 // On non-Darwin platforms R9 is callee-saved.
@@ -642,18 +693,20 @@ let isCall = 1,
           D16, D17, D18, D19, D20, D21, D22, D23,
           D24, D25, D26, D27, D28, D29, D30, D31, CPSR, FPSCR] in {
   def BL  : ABXI<0b1011, (outs), (ins i32imm:$func, variable_ops),
-                IIC_Br, "bl ${func:call}",
+                IIC_Br, "bl\t${func:call}",
                 [(ARMcall tglobaladdr:$func)]>,
-            Requires<[IsARM, IsNotDarwin]>;
+            Requires<[IsARM, IsNotDarwin]> {
+    let Inst{31-28} = 0b1110;
+  }
 
   def BL_pred : ABI<0b1011, (outs), (ins i32imm:$func, variable_ops),
-                   IIC_Br, "bl", " ${func:call}",
+                   IIC_Br, "bl", "\t${func:call}",
                    [(ARMcall_pred tglobaladdr:$func)]>,
                 Requires<[IsARM, IsNotDarwin]>;
 
   // ARMv5T and above
   def BLX : AXI<(outs), (ins GPR:$func, variable_ops), BrMiscFrm,
-                IIC_Br, "blx $func",
+                IIC_Br, "blx\t$func",
                 [(ARMcall GPR:$func)]>,
             Requires<[IsARM, HasV5T, IsNotDarwin]> {
     let Inst{7-4}   = 0b0011;
@@ -663,7 +716,7 @@ let isCall = 1,
 
   // ARMv4T
   def BX : ABXIx2<(outs), (ins GPR:$func, variable_ops),
-                  IIC_Br, "mov lr, pc\n\tbx $func",
+                  IIC_Br, "mov\tlr, pc\n\tbx\t$func",
                   [(ARMcall_nolink GPR:$func)]>,
            Requires<[IsARM, IsNotDarwin]> {
     let Inst{7-4}   = 0b0001;
@@ -679,17 +732,19 @@ let isCall = 1,
           D16, D17, D18, D19, D20, D21, D22, D23,
           D24, D25, D26, D27, D28, D29, D30, D31, CPSR, FPSCR] in {
   def BLr9  : ABXI<0b1011, (outs), (ins i32imm:$func, variable_ops),
-                IIC_Br, "bl ${func:call}",
-                [(ARMcall tglobaladdr:$func)]>, Requires<[IsARM, IsDarwin]>;
+                IIC_Br, "bl\t${func:call}",
+                [(ARMcall tglobaladdr:$func)]>, Requires<[IsARM, IsDarwin]> {
+    let Inst{31-28} = 0b1110;
+  }
 
   def BLr9_pred : ABI<0b1011, (outs), (ins i32imm:$func, variable_ops),
-                   IIC_Br, "bl", " ${func:call}",
+                   IIC_Br, "bl", "\t${func:call}",
                    [(ARMcall_pred tglobaladdr:$func)]>,
                   Requires<[IsARM, IsDarwin]>;
 
   // ARMv5T and above
   def BLXr9 : AXI<(outs), (ins GPR:$func, variable_ops), BrMiscFrm,
-                IIC_Br, "blx $func",
+                IIC_Br, "blx\t$func",
                 [(ARMcall GPR:$func)]>, Requires<[IsARM, HasV5T, IsDarwin]> {
     let Inst{7-4}   = 0b0011;
     let Inst{19-8}  = 0b111111111111;
@@ -698,7 +753,7 @@ let isCall = 1,
 
   // ARMv4T
   def BXr9 : ABXIx2<(outs), (ins GPR:$func, variable_ops),
-                  IIC_Br, "mov lr, pc\n\tbx $func",
+                  IIC_Br, "mov\tlr, pc\n\tbx\t$func",
                   [(ARMcall_nolink GPR:$func)]>, Requires<[IsARM, IsDarwin]> {
     let Inst{7-4}   = 0b0001;
     let Inst{19-8}  = 0b111111111111;
@@ -711,21 +766,23 @@ let isBranch = 1, isTerminator = 1 in {
   let isBarrier = 1 in {
     let isPredicable = 1 in
     def B : ABXI<0b1010, (outs), (ins brtarget:$target), IIC_Br,
-                "b $target", [(br bb:$target)]>;
+                "b\t$target", [(br bb:$target)]>;
 
   let isNotDuplicable = 1, isIndirectBranch = 1 in {
   def BR_JTr : JTI<(outs), (ins GPR:$target, jtblock_operand:$jt, i32imm:$id),
-                    IIC_Br, "mov pc, $target \n$jt",
+                    IIC_Br, "mov\tpc, $target \n$jt",
                     [(ARMbrjt GPR:$target, tjumptable:$jt, imm:$id)]> {
+    let Inst{15-12} = 0b1111;
     let Inst{20}    = 0; // S Bit
     let Inst{24-21} = 0b1101;
     let Inst{27-25} = 0b000;
   }
   def BR_JTm : JTI<(outs),
                    (ins addrmode2:$target, jtblock_operand:$jt, i32imm:$id),
-                   IIC_Br, "ldr pc, $target \n$jt",
+                   IIC_Br, "ldr\tpc, $target \n$jt",
                    [(ARMbrjt (i32 (load addrmode2:$target)), tjumptable:$jt,
                      imm:$id)]> {
+    let Inst{15-12} = 0b1111;
     let Inst{20}    = 1; // L bit
     let Inst{21}    = 0; // W bit
     let Inst{22}    = 0; // B bit
@@ -734,9 +791,10 @@ let isBranch = 1, isTerminator = 1 in {
   }
   def BR_JTadd : JTI<(outs),
                    (ins GPR:$target, GPR:$idx, jtblock_operand:$jt, i32imm:$id),
-                    IIC_Br, "add pc, $target, $idx \n$jt",
+                    IIC_Br, "add\tpc, $target, $idx \n$jt",
                     [(ARMbrjt (add GPR:$target, GPR:$idx), tjumptable:$jt,
                       imm:$id)]> {
+    let Inst{15-12} = 0b1111;
     let Inst{20}    = 0; // S bit
     let Inst{24-21} = 0b0100;
     let Inst{27-25} = 0b000;
@@ -747,7 +805,7 @@ let isBranch = 1, isTerminator = 1 in {
   // FIXME: should be able to write a pattern for ARMBrcond, but can't use
   // a two-value operand where a dag node expects two operands. :( 
   def Bcc : ABI<0b1010, (outs), (ins brtarget:$target),
-               IIC_Br, "b", " $target",
+               IIC_Br, "b", "\t$target",
                [/*(ARMbrcond bb:$target, imm:$cc, CCR:$ccr)*/]>;
 }
 
@@ -756,142 +814,143 @@ let isBranch = 1, isTerminator = 1 in {
 //
 
 // Load
-let canFoldAsLoad = 1 in 
+let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1 in 
 def LDR  : AI2ldw<(outs GPR:$dst), (ins addrmode2:$addr), LdFrm, IIC_iLoadr,
-               "ldr", " $dst, $addr",
+               "ldr", "\t$dst, $addr",
                [(set GPR:$dst, (load addrmode2:$addr))]>;
 
 // Special LDR for loads from non-pc-relative constpools.
-let canFoldAsLoad = 1, mayLoad = 1, isReMaterializable = 1 in
+let canFoldAsLoad = 1, mayLoad = 1, isReMaterializable = 1,
+    mayHaveSideEffects = 1  in
 def LDRcp : AI2ldw<(outs GPR:$dst), (ins addrmode2:$addr), LdFrm, IIC_iLoadr,
-                 "ldr", " $dst, $addr", []>;
+                 "ldr", "\t$dst, $addr", []>;
 
 // Loads with zero extension
 def LDRH  : AI3ldh<(outs GPR:$dst), (ins addrmode3:$addr), LdMiscFrm,
-                  IIC_iLoadr, "ldr", "h $dst, $addr",
+                  IIC_iLoadr, "ldrh", "\t$dst, $addr",
                   [(set GPR:$dst, (zextloadi16 addrmode3:$addr))]>;
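// Note: folding the size suffix into the opcode string ("ldrh" instead of
// "ldr" + "h ...") places it before the predicate, which is printed between
// the two asm-string halves; a conditional load thus prints as e.g.
// "ldrheq r0, [r1]" rather than "ldreqh r0, [r1]".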
 
 def LDRB  : AI2ldb<(outs GPR:$dst), (ins addrmode2:$addr), LdFrm, 
-                  IIC_iLoadr, "ldr", "b $dst, $addr",
+                  IIC_iLoadr, "ldrb", "\t$dst, $addr",
                   [(set GPR:$dst, (zextloadi8 addrmode2:$addr))]>;
 
 // Loads with sign extension
 def LDRSH : AI3ldsh<(outs GPR:$dst), (ins addrmode3:$addr), LdMiscFrm,
-                   IIC_iLoadr, "ldr", "sh $dst, $addr",
+                   IIC_iLoadr, "ldrsh", "\t$dst, $addr",
                    [(set GPR:$dst, (sextloadi16 addrmode3:$addr))]>;
 
 def LDRSB : AI3ldsb<(outs GPR:$dst), (ins addrmode3:$addr), LdMiscFrm,
-                   IIC_iLoadr, "ldr", "sb $dst, $addr",
+                   IIC_iLoadr, "ldrsb", "\t$dst, $addr",
                    [(set GPR:$dst, (sextloadi8 addrmode3:$addr))]>;
 
 let mayLoad = 1, hasExtraDefRegAllocReq = 1 in {
 // Load doubleword
 def LDRD : AI3ldd<(outs GPR:$dst1, GPR:$dst2), (ins addrmode3:$addr), LdMiscFrm,
-                 IIC_iLoadr, "ldr", "d $dst1, $addr",
+                 IIC_iLoadr, "ldrd", "\t$dst1, $addr",
                  []>, Requires<[IsARM, HasV5TE]>;
 
 // Indexed loads
 def LDR_PRE  : AI2ldwpr<(outs GPR:$dst, GPR:$base_wb),
                      (ins addrmode2:$addr), LdFrm, IIC_iLoadru,
-                     "ldr", " $dst, $addr!", "$addr.base = $base_wb", []>;
+                     "ldr", "\t$dst, $addr!", "$addr.base = $base_wb", []>;
 
 def LDR_POST : AI2ldwpo<(outs GPR:$dst, GPR:$base_wb),
                      (ins GPR:$base, am2offset:$offset), LdFrm, IIC_iLoadru,
-                     "ldr", " $dst, [$base], $offset", "$base = $base_wb", []>;
+                     "ldr", "\t$dst, [$base], $offset", "$base = $base_wb", []>;
 
 def LDRH_PRE  : AI3ldhpr<(outs GPR:$dst, GPR:$base_wb),
                      (ins addrmode3:$addr), LdMiscFrm, IIC_iLoadru,
-                     "ldr", "h $dst, $addr!", "$addr.base = $base_wb", []>;
+                     "ldrh", "\t$dst, $addr!", "$addr.base = $base_wb", []>;
 
 def LDRH_POST : AI3ldhpo<(outs GPR:$dst, GPR:$base_wb),
                      (ins GPR:$base,am3offset:$offset), LdMiscFrm, IIC_iLoadru,
-                     "ldr", "h $dst, [$base], $offset", "$base = $base_wb", []>;
+                    "ldrh", "\t$dst, [$base], $offset", "$base = $base_wb", []>;
 
 def LDRB_PRE  : AI2ldbpr<(outs GPR:$dst, GPR:$base_wb),
                      (ins addrmode2:$addr), LdFrm, IIC_iLoadru,
-                     "ldr", "b $dst, $addr!", "$addr.base = $base_wb", []>;
+                     "ldrb", "\t$dst, $addr!", "$addr.base = $base_wb", []>;
 
 def LDRB_POST : AI2ldbpo<(outs GPR:$dst, GPR:$base_wb),
                      (ins GPR:$base,am2offset:$offset), LdFrm, IIC_iLoadru,
-                     "ldr", "b $dst, [$base], $offset", "$base = $base_wb", []>;
+                    "ldrb", "\t$dst, [$base], $offset", "$base = $base_wb", []>;
 
 def LDRSH_PRE : AI3ldshpr<(outs GPR:$dst, GPR:$base_wb),
                       (ins addrmode3:$addr), LdMiscFrm, IIC_iLoadru,
-                      "ldr", "sh $dst, $addr!", "$addr.base = $base_wb", []>;
+                      "ldrsh", "\t$dst, $addr!", "$addr.base = $base_wb", []>;
 
 def LDRSH_POST: AI3ldshpo<(outs GPR:$dst, GPR:$base_wb),
                       (ins GPR:$base,am3offset:$offset), LdMiscFrm, IIC_iLoadru,
-                    "ldr", "sh $dst, [$base], $offset", "$base = $base_wb", []>;
+                   "ldrsh", "\t$dst, [$base], $offset", "$base = $base_wb", []>;
 
 def LDRSB_PRE : AI3ldsbpr<(outs GPR:$dst, GPR:$base_wb),
                       (ins addrmode3:$addr), LdMiscFrm, IIC_iLoadru,
-                      "ldr", "sb $dst, $addr!", "$addr.base = $base_wb", []>;
+                      "ldrsb", "\t$dst, $addr!", "$addr.base = $base_wb", []>;
 
 def LDRSB_POST: AI3ldsbpo<(outs GPR:$dst, GPR:$base_wb),
                       (ins GPR:$base,am3offset:$offset), LdMiscFrm, IIC_iLoadru,
-                    "ldr", "sb $dst, [$base], $offset", "$base = $base_wb", []>;
+                   "ldrsb", "\t$dst, [$base], $offset", "$base = $base_wb", []>;
 }
 
 // Store
 def STR  : AI2stw<(outs), (ins GPR:$src, addrmode2:$addr), StFrm, IIC_iStorer,
-               "str", " $src, $addr",
+               "str", "\t$src, $addr",
                [(store GPR:$src, addrmode2:$addr)]>;
 
 // Stores with truncate
 def STRH : AI3sth<(outs), (ins GPR:$src, addrmode3:$addr), StMiscFrm, IIC_iStorer,
-               "str", "h $src, $addr",
+               "strh", "\t$src, $addr",
                [(truncstorei16 GPR:$src, addrmode3:$addr)]>;
 
 def STRB : AI2stb<(outs), (ins GPR:$src, addrmode2:$addr), StFrm, IIC_iStorer,
-               "str", "b $src, $addr",
+               "strb", "\t$src, $addr",
                [(truncstorei8 GPR:$src, addrmode2:$addr)]>;
 
 // Store doubleword
 let mayStore = 1, hasExtraSrcRegAllocReq = 1 in
 def STRD : AI3std<(outs), (ins GPR:$src1, GPR:$src2, addrmode3:$addr),
                StMiscFrm, IIC_iStorer,
-               "str", "d $src1, $addr", []>, Requires<[IsARM, HasV5TE]>;
+               "strd", "\t$src1, $addr", []>, Requires<[IsARM, HasV5TE]>;
 
 // Indexed stores
 def STR_PRE  : AI2stwpr<(outs GPR:$base_wb),
                      (ins GPR:$src, GPR:$base, am2offset:$offset), 
                      StFrm, IIC_iStoreru,
-                    "str", " $src, [$base, $offset]!", "$base = $base_wb",
+                    "str", "\t$src, [$base, $offset]!", "$base = $base_wb",
                     [(set GPR:$base_wb,
                       (pre_store GPR:$src, GPR:$base, am2offset:$offset))]>;
 
 def STR_POST : AI2stwpo<(outs GPR:$base_wb),
                      (ins GPR:$src, GPR:$base,am2offset:$offset), 
                      StFrm, IIC_iStoreru,
-                    "str", " $src, [$base], $offset", "$base = $base_wb",
+                    "str", "\t$src, [$base], $offset", "$base = $base_wb",
                     [(set GPR:$base_wb,
                       (post_store GPR:$src, GPR:$base, am2offset:$offset))]>;
 
 def STRH_PRE : AI3sthpr<(outs GPR:$base_wb),
                      (ins GPR:$src, GPR:$base,am3offset:$offset), 
                      StMiscFrm, IIC_iStoreru,
-                     "str", "h $src, [$base, $offset]!", "$base = $base_wb",
+                     "strh", "\t$src, [$base, $offset]!", "$base = $base_wb",
                     [(set GPR:$base_wb,
                       (pre_truncsti16 GPR:$src, GPR:$base,am3offset:$offset))]>;
 
 def STRH_POST: AI3sthpo<(outs GPR:$base_wb),
                      (ins GPR:$src, GPR:$base,am3offset:$offset), 
                      StMiscFrm, IIC_iStoreru,
-                     "str", "h $src, [$base], $offset", "$base = $base_wb",
+                     "strh", "\t$src, [$base], $offset", "$base = $base_wb",
                     [(set GPR:$base_wb, (post_truncsti16 GPR:$src,
                                          GPR:$base, am3offset:$offset))]>;
 
 def STRB_PRE : AI2stbpr<(outs GPR:$base_wb),
                      (ins GPR:$src, GPR:$base,am2offset:$offset), 
                      StFrm, IIC_iStoreru,
-                     "str", "b $src, [$base, $offset]!", "$base = $base_wb",
+                     "strb", "\t$src, [$base, $offset]!", "$base = $base_wb",
                     [(set GPR:$base_wb, (pre_truncsti8 GPR:$src,
                                          GPR:$base, am2offset:$offset))]>;
 
 def STRB_POST: AI2stbpo<(outs GPR:$base_wb),
                      (ins GPR:$src, GPR:$base,am2offset:$offset), 
                      StFrm, IIC_iStoreru,
-                     "str", "b $src, [$base], $offset", "$base = $base_wb",
+                     "strb", "\t$src, [$base], $offset", "$base = $base_wb",
                     [(set GPR:$base_wb, (post_truncsti8 GPR:$src,
                                          GPR:$base, am2offset:$offset))]>;
 
@@ -902,13 +961,13 @@ def STRB_POST: AI2stbpo<(outs GPR:$base_wb),
 let mayLoad = 1, hasExtraDefRegAllocReq = 1 in
 def LDM : AXI4ld<(outs),
                (ins addrmode4:$addr, pred:$p, reglist:$wb, variable_ops),
-               LdStMulFrm, IIC_iLoadm, "ldm${p}${addr:submode} $addr, $wb",
+               LdStMulFrm, IIC_iLoadm, "ldm${addr:submode}${p}\t$addr, $wb",
                []>;
 
 let mayStore = 1, hasExtraSrcRegAllocReq = 1 in
 def STM : AXI4st<(outs),
                (ins addrmode4:$addr, pred:$p, reglist:$wb, variable_ops),
-               LdStMulFrm, IIC_iStorem, "stm${p}${addr:submode} $addr, $wb",
+               LdStMulFrm, IIC_iStorem, "stm${addr:submode}${p}\t$addr, $wb",
                []>;
 
 //===----------------------------------------------------------------------===//
@@ -917,40 +976,51 @@ def STM : AXI4st<(outs),
 
 let neverHasSideEffects = 1 in
 def MOVr : AsI1<0b1101, (outs GPR:$dst), (ins GPR:$src), DPFrm, IIC_iMOVr,
-                "mov", " $dst, $src", []>, UnaryDP;
+                "mov", "\t$dst, $src", []>, UnaryDP {
+  let Inst{11-4} = 0b00000000;
+  let Inst{25} = 0;
+}
+
 def MOVs : AsI1<0b1101, (outs GPR:$dst), (ins so_reg:$src), 
                 DPSoRegFrm, IIC_iMOVsr,
-                "mov", " $dst, $src", [(set GPR:$dst, so_reg:$src)]>, UnaryDP;
+                "mov", "\t$dst, $src", [(set GPR:$dst, so_reg:$src)]>, UnaryDP {
+  let Inst{25} = 0;
+}
 
 let isReMaterializable = 1, isAsCheapAsAMove = 1 in
 def MOVi : AsI1<0b1101, (outs GPR:$dst), (ins so_imm:$src), DPFrm, IIC_iMOVi,
-                "mov", " $dst, $src", [(set GPR:$dst, so_imm:$src)]>, UnaryDP {
+                "mov", "\t$dst, $src", [(set GPR:$dst, so_imm:$src)]>, UnaryDP {
   let Inst{25} = 1;
 }
 
 let isReMaterializable = 1, isAsCheapAsAMove = 1 in
 def MOVi16 : AI1<0b1000, (outs GPR:$dst), (ins i32imm:$src), 
                  DPFrm, IIC_iMOVi,
-                 "movw", " $dst, $src",
+                 "movw", "\t$dst, $src",
                  [(set GPR:$dst, imm0_65535:$src)]>,
                  Requires<[IsARM, HasV6T2]> {
+  let Inst{20} = 0;
   let Inst{25} = 1;
 }
 
 let Constraints = "$src = $dst" in
 def MOVTi16 : AI1<0b1010, (outs GPR:$dst), (ins GPR:$src, i32imm:$imm),
                   DPFrm, IIC_iMOVi,
-                  "movt", " $dst, $imm", 
+                  "movt", "\t$dst, $imm",
                   [(set GPR:$dst,
                         (or (and GPR:$src, 0xffff), 
                             lo16AllZero:$imm))]>, UnaryDP,
                   Requires<[IsARM, HasV6T2]> {
+  let Inst{20} = 0;
   let Inst{25} = 1;
 }
 
+def : ARMPat<(or GPR:$src, 0xffff0000), (MOVTi16 GPR:$src, 0xffff)>,
+      Requires<[IsARM, HasV6T2]>;
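// Sanity check of the pattern above: movt writes its 16-bit immediate into
// the top halfword and preserves the bottom one, so "movt r0, #0xffff"
// turns r0 = 0x00001234 into 0xffff1234, i.e. exactly r0 | 0xffff0000.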
+
 let Uses = [CPSR] in
 def MOVrx : AsI1<0b1101, (outs GPR:$dst), (ins GPR:$src), Pseudo, IIC_iMOVsi,
-                 "mov", " $dst, $src, rrx",
+                 "mov", "\t$dst, $src, rrx",
                  [(set GPR:$dst, (ARMrrx GPR:$src))]>, UnaryDP;
 
 // These aren't really mov instructions, but we have to define them this way
@@ -958,10 +1028,10 @@ def MOVrx : AsI1<0b1101, (outs GPR:$dst), (ins GPR:$src), Pseudo, IIC_iMOVsi,
 
 let Defs = [CPSR] in {
 def MOVsrl_flag : AI1<0b1101, (outs GPR:$dst), (ins GPR:$src), Pseudo, 
-                      IIC_iMOVsi, "mov", "s $dst, $src, lsr #1",
+                      IIC_iMOVsi, "movs", "\t$dst, $src, lsr #1",
                       [(set GPR:$dst, (ARMsrl_flag GPR:$src))]>, UnaryDP;
 def MOVsra_flag : AI1<0b1101, (outs GPR:$dst), (ins GPR:$src), Pseudo,
-                      IIC_iMOVsi, "mov", "s $dst, $src, asr #1",
+                      IIC_iMOVsi, "movs", "\t$dst, $src, asr #1",
                       [(set GPR:$dst, (ARMsra_flag GPR:$src))]>, UnaryDP;
 }
 
@@ -1009,6 +1079,24 @@ defm UXTAH : AI_bin_rrot<0b01101111, "uxtah",
 
 // TODO: UXT(A){B|H}16
 
+def SBFX  : I<(outs GPR:$dst),
+              (ins GPR:$src, imm0_31:$lsb, imm0_31:$width),
+               AddrMode1, Size4Bytes, IndexModeNone, DPFrm, IIC_iALUi,
+               "sbfx", "\t$dst, $src, $lsb, $width", "", []>,
+               Requires<[IsARM, HasV6T2]> {
+  let Inst{27-21} = 0b0111101;
+  let Inst{6-4}   = 0b101;
+}
+
+def UBFX  : I<(outs GPR:$dst),
+              (ins GPR:$src, imm0_31:$lsb, imm0_31:$width),
+               AddrMode1, Size4Bytes, IndexModeNone, DPFrm, IIC_iALUi,
+               "ubfx", "\t$dst, $src, $lsb, $width", "", []>,
+               Requires<[IsARM, HasV6T2]> {
+  let Inst{27-21} = 0b0111111;
+  let Inst{6-4}   = 0b101;
+}
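// Note: these are the v6T2 bitfield extracts (lsb, width), e.g.:
//   ubfx r0, r1, #8, #4   @ r0 := zero-extended r1[11:8]
//   sbfx r0, r1, #8, #4   @ r0 := sign-extended r1[11:8]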
+
 //===----------------------------------------------------------------------===//
 //  Arithmetic Instructions.
 //
@@ -1019,64 +1107,80 @@ defm SUB  : AsI1_bin_irs<0b0010, "sub",
                          BinOpFrag<(sub  node:$LHS, node:$RHS)>>;
 
 // ADD and SUB with 's' bit set.
-defm ADDS : AI1_bin_s_irs<0b0100, "add",
-                          BinOpFrag<(addc node:$LHS, node:$RHS)>>;
-defm SUBS : AI1_bin_s_irs<0b0010, "sub",
+defm ADDS : AI1_bin_s_irs<0b0100, "adds",
+                          BinOpFrag<(addc node:$LHS, node:$RHS)>, 1>;
+defm SUBS : AI1_bin_s_irs<0b0010, "subs",
                           BinOpFrag<(subc node:$LHS, node:$RHS)>>;
 
 defm ADC : AI1_adde_sube_irs<0b0101, "adc",
                              BinOpFrag<(adde node:$LHS, node:$RHS)>, 1>;
 defm SBC : AI1_adde_sube_irs<0b0110, "sbc",
                              BinOpFrag<(sube node:$LHS, node:$RHS)>>;
+defm ADCS : AI1_adde_sube_s_irs<0b0101, "adcs",
+                             BinOpFrag<(adde node:$LHS, node:$RHS)>, 1>;
+defm SBCS : AI1_adde_sube_s_irs<0b0110, "sbcs",
+                             BinOpFrag<(sube node:$LHS, node:$RHS)>>;
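// Note: the addc/adde (and subc/sube) fragments model carry chains; e.g. a
// 64-bit add selects to an "adds" that sets the carry and an "adc" that
// consumes it:
//   adds r0, r0, r2   @ low word, sets C
//   adc  r1, r1, r3   @ high word, adds C in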
 
 // These don't define reg/reg forms, because they are handled above.
 def RSBri : AsI1<0b0011, (outs GPR:$dst), (ins GPR:$a, so_imm:$b), DPFrm,
-                  IIC_iALUi, "rsb", " $dst, $a, $b",
+                  IIC_iALUi, "rsb", "\t$dst, $a, $b",
                   [(set GPR:$dst, (sub so_imm:$b, GPR:$a))]> {
     let Inst{25} = 1;
 }
 
 def RSBrs : AsI1<0b0011, (outs GPR:$dst), (ins GPR:$a, so_reg:$b), DPSoRegFrm,
-                  IIC_iALUsr, "rsb", " $dst, $a, $b",
-                  [(set GPR:$dst, (sub so_reg:$b, GPR:$a))]>;
+                  IIC_iALUsr, "rsb", "\t$dst, $a, $b",
+                  [(set GPR:$dst, (sub so_reg:$b, GPR:$a))]> {
+    let Inst{25} = 0;
+}
 
 // RSB with 's' bit set.
 let Defs = [CPSR] in {
 def RSBSri : AI1<0b0011, (outs GPR:$dst), (ins GPR:$a, so_imm:$b), DPFrm,
-                 IIC_iALUi, "rsb", "s $dst, $a, $b",
+                 IIC_iALUi, "rsbs", "\t$dst, $a, $b",
                  [(set GPR:$dst, (subc so_imm:$b, GPR:$a))]> {
+    let Inst{20} = 1;
     let Inst{25} = 1;
 }
 def RSBSrs : AI1<0b0011, (outs GPR:$dst), (ins GPR:$a, so_reg:$b), DPSoRegFrm,
-                 IIC_iALUsr, "rsb", "s $dst, $a, $b",
-                 [(set GPR:$dst, (subc so_reg:$b, GPR:$a))]>;
+                 IIC_iALUsr, "rsbs", "\t$dst, $a, $b",
+                 [(set GPR:$dst, (subc so_reg:$b, GPR:$a))]> {
+    let Inst{20} = 1;
+    let Inst{25} = 0;
+}
 }
 
 let Uses = [CPSR] in {
 def RSCri : AsI1<0b0111, (outs GPR:$dst), (ins GPR:$a, so_imm:$b),
-                 DPFrm, IIC_iALUi, "rsc", " $dst, $a, $b",
+                 DPFrm, IIC_iALUi, "rsc", "\t$dst, $a, $b",
                  [(set GPR:$dst, (sube so_imm:$b, GPR:$a))]>,
                  Requires<[IsARM, CarryDefIsUnused]> {
     let Inst{25} = 1;
 }
 def RSCrs : AsI1<0b0111, (outs GPR:$dst), (ins GPR:$a, so_reg:$b),
-                 DPSoRegFrm, IIC_iALUsr, "rsc", " $dst, $a, $b",
+                 DPSoRegFrm, IIC_iALUsr, "rsc", "\t$dst, $a, $b",
                  [(set GPR:$dst, (sube so_reg:$b, GPR:$a))]>,
-                 Requires<[IsARM, CarryDefIsUnused]>;
+                 Requires<[IsARM, CarryDefIsUnused]> {
+    let Inst{25} = 0;
+}
 }
 
 // FIXME: Allow these to be predicated.
 let Defs = [CPSR], Uses = [CPSR] in {
 def RSCSri : AXI1<0b0111, (outs GPR:$dst), (ins GPR:$a, so_imm:$b),
-                  DPFrm, IIC_iALUi, "rscs $dst, $a, $b",
+                  DPFrm, IIC_iALUi, "rscs\t$dst, $a, $b",
                   [(set GPR:$dst, (sube so_imm:$b, GPR:$a))]>,
                   Requires<[IsARM, CarryDefIsUnused]> {
+    let Inst{20} = 1;
     let Inst{25} = 1;
 }
 def RSCSrs : AXI1<0b0111, (outs GPR:$dst), (ins GPR:$a, so_reg:$b),
-                  DPSoRegFrm, IIC_iALUsr, "rscs $dst, $a, $b",
+                  DPSoRegFrm, IIC_iALUsr, "rscs\t$dst, $a, $b",
                   [(set GPR:$dst, (sube so_reg:$b, GPR:$a))]>,
-                  Requires<[IsARM, CarryDefIsUnused]>;
+                  Requires<[IsARM, CarryDefIsUnused]> {
+    let Inst{20} = 1;
+    let Inst{25} = 0;
+}
 }
 
 // (sub X, imm) gets canonicalized to (add X, -imm).  Match this form.
@@ -1109,8 +1213,8 @@ defm BIC   : AsI1_bin_irs<0b1110, "bic",
                           BinOpFrag<(and node:$LHS, (not node:$RHS))>>;
 
 def BFC    : I<(outs GPR:$dst), (ins GPR:$src, bf_inv_mask_imm:$imm),
-               AddrMode1, Size4Bytes, IndexModeNone, DPFrm, IIC_iALUi,
-               "bfc", " $dst, $imm", "$src = $dst",
+               AddrMode1, Size4Bytes, IndexModeNone, DPFrm, IIC_iUNAsi,
+               "bfc", "\t$dst, $imm", "$src = $dst",
                [(set GPR:$dst, (and GPR:$src, bf_inv_mask_imm:$imm))]>,
                Requires<[IsARM, HasV6T2]> {
   let Inst{27-21} = 0b0111110;
@@ -1118,14 +1222,16 @@ def BFC    : I<(outs GPR:$dst), (ins GPR:$src, bf_inv_mask_imm:$imm),
 }
 
 def  MVNr  : AsI1<0b1111, (outs GPR:$dst), (ins GPR:$src), DPFrm, IIC_iMOVr,
-                  "mvn", " $dst, $src",
-                  [(set GPR:$dst, (not GPR:$src))]>, UnaryDP;
+                  "mvn", "\t$dst, $src",
+                  [(set GPR:$dst, (not GPR:$src))]>, UnaryDP {
+  let Inst{11-4} = 0b00000000;
+}
 def  MVNs  : AsI1<0b1111, (outs GPR:$dst), (ins so_reg:$src), DPSoRegFrm,
-                  IIC_iMOVsr, "mvn", " $dst, $src",
+                  IIC_iMOVsr, "mvn", "\t$dst, $src",
                   [(set GPR:$dst, (not so_reg:$src))]>, UnaryDP;
 let isReMaterializable = 1, isAsCheapAsAMove = 1 in
 def  MVNi  : AsI1<0b1111, (outs GPR:$dst), (ins so_imm:$imm), DPFrm, 
-                  IIC_iMOVi, "mvn", " $dst, $imm",
+                  IIC_iMOVi, "mvn", "\t$dst, $imm",
                   [(set GPR:$dst, so_imm_not:$imm)]>,UnaryDP {
     let Inst{25} = 1;
 }
@@ -1139,15 +1245,15 @@ def : ARMPat<(and   GPR:$src, so_imm_not:$imm),
 
 let isCommutable = 1 in
 def MUL   : AsMul1I<0b0000000, (outs GPR:$dst), (ins GPR:$a, GPR:$b),
-                   IIC_iMUL32, "mul", " $dst, $a, $b",
+                   IIC_iMUL32, "mul", "\t$dst, $a, $b",
                    [(set GPR:$dst, (mul GPR:$a, GPR:$b))]>;
 
 def MLA   : AsMul1I<0b0000001, (outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$c),
-                    IIC_iMAC32, "mla", " $dst, $a, $b, $c",
+                    IIC_iMAC32, "mla", "\t$dst, $a, $b, $c",
                    [(set GPR:$dst, (add (mul GPR:$a, GPR:$b), GPR:$c))]>;
 
 def MLS   : AMul1I<0b0000011, (outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$c),
-                   IIC_iMAC32, "mls", " $dst, $a, $b, $c",
+                   IIC_iMAC32, "mls", "\t$dst, $a, $b, $c",
                    [(set GPR:$dst, (sub GPR:$c, (mul GPR:$a, GPR:$b)))]>,
                    Requires<[IsARM, HasV6T2]>;
 
@@ -1156,31 +1262,31 @@ let neverHasSideEffects = 1 in {
 let isCommutable = 1 in {
 def SMULL : AsMul1I<0b0000110, (outs GPR:$ldst, GPR:$hdst),
                                (ins GPR:$a, GPR:$b), IIC_iMUL64,
-                    "smull", " $ldst, $hdst, $a, $b", []>;
+                    "smull", "\t$ldst, $hdst, $a, $b", []>;
 
 def UMULL : AsMul1I<0b0000100, (outs GPR:$ldst, GPR:$hdst),
                                (ins GPR:$a, GPR:$b), IIC_iMUL64,
-                    "umull", " $ldst, $hdst, $a, $b", []>;
+                    "umull", "\t$ldst, $hdst, $a, $b", []>;
 }
 
 // Multiply + accumulate
 def SMLAL : AsMul1I<0b0000111, (outs GPR:$ldst, GPR:$hdst),
                                (ins GPR:$a, GPR:$b), IIC_iMAC64,
-                    "smlal", " $ldst, $hdst, $a, $b", []>;
+                    "smlal", "\t$ldst, $hdst, $a, $b", []>;
 
 def UMLAL : AsMul1I<0b0000101, (outs GPR:$ldst, GPR:$hdst),
                                (ins GPR:$a, GPR:$b), IIC_iMAC64,
-                    "umlal", " $ldst, $hdst, $a, $b", []>;
+                    "umlal", "\t$ldst, $hdst, $a, $b", []>;
 
 def UMAAL : AMul1I <0b0000010, (outs GPR:$ldst, GPR:$hdst),
                                (ins GPR:$a, GPR:$b), IIC_iMAC64,
-                    "umaal", " $ldst, $hdst, $a, $b", []>,
+                    "umaal", "\t$ldst, $hdst, $a, $b", []>,
                     Requires<[IsARM, HasV6]>;
 } // neverHasSideEffects
 
 // Most significant word multiply
 def SMMUL : AMul2I <0b0111010, (outs GPR:$dst), (ins GPR:$a, GPR:$b),
-               IIC_iMUL32, "smmul", " $dst, $a, $b",
+               IIC_iMUL32, "smmul", "\t$dst, $a, $b",
                [(set GPR:$dst, (mulhs GPR:$a, GPR:$b))]>,
             Requires<[IsARM, HasV6]> {
   let Inst{7-4}   = 0b0001;
@@ -1188,7 +1294,7 @@ def SMMUL : AMul2I <0b0111010, (outs GPR:$dst), (ins GPR:$a, GPR:$b),
 }
 
 def SMMLA : AMul2I <0b0111010, (outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$c),
-               IIC_iMAC32, "smmla", " $dst, $a, $b, $c",
+               IIC_iMAC32, "smmla", "\t$dst, $a, $b, $c",
                [(set GPR:$dst, (add (mulhs GPR:$a, GPR:$b), GPR:$c))]>,
             Requires<[IsARM, HasV6]> {
   let Inst{7-4}   = 0b0001;
@@ -1196,7 +1302,7 @@ def SMMLA : AMul2I <0b0111010, (outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$c),
 
 
 def SMMLS : AMul2I <0b0111010, (outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$c),
-               IIC_iMAC32, "smmls", " $dst, $a, $b, $c",
+               IIC_iMAC32, "smmls", "\t$dst, $a, $b, $c",
                [(set GPR:$dst, (sub GPR:$c, (mulhs GPR:$a, GPR:$b)))]>,
             Requires<[IsARM, HasV6]> {
   let Inst{7-4}   = 0b1101;
@@ -1204,7 +1310,7 @@ def SMMLS : AMul2I <0b0111010, (outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$c),
 
 multiclass AI_smul<string opc, PatFrag opnode> {
   def BB : AMulxyI<0b0001011, (outs GPR:$dst), (ins GPR:$a, GPR:$b),
-              IIC_iMUL32, !strconcat(opc, "bb"), " $dst, $a, $b",
+              IIC_iMUL32, !strconcat(opc, "bb"), "\t$dst, $a, $b",
               [(set GPR:$dst, (opnode (sext_inreg GPR:$a, i16),
                                       (sext_inreg GPR:$b, i16)))]>,
            Requires<[IsARM, HasV5TE]> {
@@ -1213,7 +1319,7 @@ multiclass AI_smul<string opc, PatFrag opnode> {
            }
 
   def BT : AMulxyI<0b0001011, (outs GPR:$dst), (ins GPR:$a, GPR:$b),
-              IIC_iMUL32, !strconcat(opc, "bt"), " $dst, $a, $b",
+              IIC_iMUL32, !strconcat(opc, "bt"), "\t$dst, $a, $b",
               [(set GPR:$dst, (opnode (sext_inreg GPR:$a, i16),
                                       (sra GPR:$b, (i32 16))))]>,
            Requires<[IsARM, HasV5TE]> {
@@ -1222,7 +1328,7 @@ multiclass AI_smul<string opc, PatFrag opnode> {
            }
 
   def TB : AMulxyI<0b0001011, (outs GPR:$dst), (ins GPR:$a, GPR:$b),
-              IIC_iMUL32, !strconcat(opc, "tb"), " $dst, $a, $b",
+              IIC_iMUL32, !strconcat(opc, "tb"), "\t$dst, $a, $b",
               [(set GPR:$dst, (opnode (sra GPR:$a, (i32 16)),
                                       (sext_inreg GPR:$b, i16)))]>,
            Requires<[IsARM, HasV5TE]> {
@@ -1231,7 +1337,7 @@ multiclass AI_smul<string opc, PatFrag opnode> {
            }
 
   def TT : AMulxyI<0b0001011, (outs GPR:$dst), (ins GPR:$a, GPR:$b),
-              IIC_iMUL32, !strconcat(opc, "tt"), " $dst, $a, $b",
+              IIC_iMUL32, !strconcat(opc, "tt"), "\t$dst, $a, $b",
               [(set GPR:$dst, (opnode (sra GPR:$a, (i32 16)),
                                       (sra GPR:$b, (i32 16))))]>,
             Requires<[IsARM, HasV5TE]> {
@@ -1240,7 +1346,7 @@ multiclass AI_smul<string opc, PatFrag opnode> {
            }
 
   def WB : AMulxyI<0b0001001, (outs GPR:$dst), (ins GPR:$a, GPR:$b),
-              IIC_iMUL16, !strconcat(opc, "wb"), " $dst, $a, $b",
+              IIC_iMUL16, !strconcat(opc, "wb"), "\t$dst, $a, $b",
               [(set GPR:$dst, (sra (opnode GPR:$a,
                                     (sext_inreg GPR:$b, i16)), (i32 16)))]>,
            Requires<[IsARM, HasV5TE]> {
@@ -1249,7 +1355,7 @@ multiclass AI_smul<string opc, PatFrag opnode> {
            }
 
   def WT : AMulxyI<0b0001001, (outs GPR:$dst), (ins GPR:$a, GPR:$b),
-              IIC_iMUL16, !strconcat(opc, "wt"), " $dst, $a, $b",
+              IIC_iMUL16, !strconcat(opc, "wt"), "\t$dst, $a, $b",
               [(set GPR:$dst, (sra (opnode GPR:$a,
                                     (sra GPR:$b, (i32 16))), (i32 16)))]>,
             Requires<[IsARM, HasV5TE]> {
@@ -1261,7 +1367,7 @@ multiclass AI_smul<string opc, PatFrag opnode> {
 
 multiclass AI_smla<string opc, PatFrag opnode> {
   def BB : AMulxyI<0b0001000, (outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc),
-              IIC_iMAC16, !strconcat(opc, "bb"), " $dst, $a, $b, $acc",
+              IIC_iMAC16, !strconcat(opc, "bb"), "\t$dst, $a, $b, $acc",
               [(set GPR:$dst, (add GPR:$acc,
                                (opnode (sext_inreg GPR:$a, i16),
                                        (sext_inreg GPR:$b, i16))))]>,
@@ -1271,7 +1377,7 @@ multiclass AI_smla<string opc, PatFrag opnode> {
            }
 
   def BT : AMulxyI<0b0001000, (outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc),
-              IIC_iMAC16, !strconcat(opc, "bt"), " $dst, $a, $b, $acc",
+              IIC_iMAC16, !strconcat(opc, "bt"), "\t$dst, $a, $b, $acc",
               [(set GPR:$dst, (add GPR:$acc, (opnode (sext_inreg GPR:$a, i16),
                                                      (sra GPR:$b, (i32 16)))))]>,
            Requires<[IsARM, HasV5TE]> {
@@ -1280,7 +1386,7 @@ multiclass AI_smla<string opc, PatFrag opnode> {
            }
 
   def TB : AMulxyI<0b0001000, (outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc),
-              IIC_iMAC16, !strconcat(opc, "tb"), " $dst, $a, $b, $acc",
+              IIC_iMAC16, !strconcat(opc, "tb"), "\t$dst, $a, $b, $acc",
               [(set GPR:$dst, (add GPR:$acc, (opnode (sra GPR:$a, (i32 16)),
                                                  (sext_inreg GPR:$b, i16))))]>,
            Requires<[IsARM, HasV5TE]> {
@@ -1289,16 +1395,16 @@ multiclass AI_smla<string opc, PatFrag opnode> {
            }
 
   def TT : AMulxyI<0b0001000, (outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc),
-              IIC_iMAC16, !strconcat(opc, "tt"), " $dst, $a, $b, $acc",
-              [(set GPR:$dst, (add GPR:$acc, (opnode (sra GPR:$a, (i32 16)),
-                                                     (sra GPR:$b, (i32 16)))))]>,
+              IIC_iMAC16, !strconcat(opc, "tt"), "\t$dst, $a, $b, $acc",
+             [(set GPR:$dst, (add GPR:$acc, (opnode (sra GPR:$a, (i32 16)),
+                                                    (sra GPR:$b, (i32 16)))))]>,
             Requires<[IsARM, HasV5TE]> {
              let Inst{5} = 1;
              let Inst{6} = 1;
            }
 
   def WB : AMulxyI<0b0001001, (outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc),
-              IIC_iMAC16, !strconcat(opc, "wb"), " $dst, $a, $b, $acc",
+              IIC_iMAC16, !strconcat(opc, "wb"), "\t$dst, $a, $b, $acc",
               [(set GPR:$dst, (add GPR:$acc, (sra (opnode GPR:$a,
                                        (sext_inreg GPR:$b, i16)), (i32 16))))]>,
            Requires<[IsARM, HasV5TE]> {
@@ -1307,7 +1413,7 @@ multiclass AI_smla<string opc, PatFrag opnode> {
            }
 
   def WT : AMulxyI<0b0001001, (outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc),
-              IIC_iMAC16, !strconcat(opc, "wt"), " $dst, $a, $b, $acc",
+              IIC_iMAC16, !strconcat(opc, "wt"), "\t$dst, $a, $b, $acc",
               [(set GPR:$dst, (add GPR:$acc, (sra (opnode GPR:$a,
                                          (sra GPR:$b, (i32 16))), (i32 16))))]>,
             Requires<[IsARM, HasV5TE]> {
@@ -1327,7 +1433,7 @@ defm SMLA : AI_smla<"smla", BinOpFrag<(mul node:$LHS, node:$RHS)>>;
 //
 
 def CLZ  : AMiscA1I<0b000010110, (outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
-              "clz", " $dst, $src",
+              "clz", "\t$dst, $src",
               [(set GPR:$dst, (ctlz GPR:$src))]>, Requires<[IsARM, HasV5T]> {
   let Inst{7-4}   = 0b0001;
   let Inst{11-8}  = 0b1111;
@@ -1335,7 +1441,7 @@ def CLZ  : AMiscA1I<0b000010110, (outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
 }
 
 def REV  : AMiscA1I<0b01101011, (outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
-              "rev", " $dst, $src",
+              "rev", "\t$dst, $src",
               [(set GPR:$dst, (bswap GPR:$src))]>, Requires<[IsARM, HasV6]> {
   let Inst{7-4}   = 0b0011;
   let Inst{11-8}  = 0b1111;
@@ -1343,7 +1449,7 @@ def REV  : AMiscA1I<0b01101011, (outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
 }
 
 def REV16 : AMiscA1I<0b01101011, (outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
-               "rev16", " $dst, $src",
+               "rev16", "\t$dst, $src",
                [(set GPR:$dst,
                    (or (and (srl GPR:$src, (i32 8)), 0xFF),
                        (or (and (shl GPR:$src, (i32 8)), 0xFF00),
@@ -1356,7 +1462,7 @@ def REV16 : AMiscA1I<0b01101011, (outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
 }
 
 def REVSH : AMiscA1I<0b01101111, (outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
-               "revsh", " $dst, $src",
+               "revsh", "\t$dst, $src",
                [(set GPR:$dst,
                   (sext_inreg
                     (or (srl (and GPR:$src, 0xFF00), (i32 8)),
@@ -1369,7 +1475,7 @@ def REVSH : AMiscA1I<0b01101111, (outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
 
 def PKHBT : AMiscA1I<0b01101000, (outs GPR:$dst),
                                  (ins GPR:$src1, GPR:$src2, i32imm:$shamt),
-               IIC_iALUsi, "pkhbt", " $dst, $src1, $src2, LSL $shamt",
+               IIC_iALUsi, "pkhbt", "\t$dst, $src1, $src2, LSL $shamt",
                [(set GPR:$dst, (or (and GPR:$src1, 0xFFFF),
                                    (and (shl GPR:$src2, (i32 imm:$shamt)),
                                         0xFFFF0000)))]>,
@@ -1386,7 +1492,7 @@ def : ARMV6Pat<(or (and GPR:$src1, 0xFFFF), (shl GPR:$src2, imm16_31:$shamt)),
 
 def PKHTB : AMiscA1I<0b01101000, (outs GPR:$dst),
                                  (ins GPR:$src1, GPR:$src2, i32imm:$shamt),
-               IIC_iALUsi, "pkhtb", " $dst, $src1, $src2, ASR $shamt",
+               IIC_iALUsi, "pkhtb", "\t$dst, $src1, $src2, ASR $shamt",
                [(set GPR:$dst, (or (and GPR:$src1, 0xFFFF0000),
                                    (and (sra GPR:$src2, imm16_31:$shamt),
                                         0xFFFF)))]>, Requires<[IsARM, HasV6]> {
@@ -1432,22 +1538,27 @@ def : ARMPat<(ARMcmpZ GPR:$src, so_imm_neg:$imm),
 // FIXME: should be able to write a pattern for ARMcmov, but can't use
 // a two-value operand where a dag node expects two operands. :( 
 def MOVCCr : AI1<0b1101, (outs GPR:$dst), (ins GPR:$false, GPR:$true), DPFrm,
-                IIC_iCMOVr, "mov", " $dst, $true",
+                IIC_iCMOVr, "mov", "\t$dst, $true",
       [/*(set GPR:$dst, (ARMcmov GPR:$false, GPR:$true, imm:$cc, CCR:$ccr))*/]>,
-                RegConstraint<"$false = $dst">, UnaryDP;
+                RegConstraint<"$false = $dst">, UnaryDP {
+  let Inst{11-4} = 0b00000000;
+  let Inst{25} = 0;
+}
 
 def MOVCCs : AI1<0b1101, (outs GPR:$dst),
                         (ins GPR:$false, so_reg:$true), DPSoRegFrm, IIC_iCMOVsr,
-                "mov", " $dst, $true",
+                "mov", "\t$dst, $true",
    [/*(set GPR:$dst, (ARMcmov GPR:$false, so_reg:$true, imm:$cc, CCR:$ccr))*/]>,
-                RegConstraint<"$false = $dst">, UnaryDP;
+                RegConstraint<"$false = $dst">, UnaryDP {
+  let Inst{25} = 0;
+}
 
 def MOVCCi : AI1<0b1101, (outs GPR:$dst),
                         (ins GPR:$false, so_imm:$true), DPFrm, IIC_iCMOVi,
-                "mov", " $dst, $true",
+                "mov", "\t$dst, $true",
    [/*(set GPR:$dst, (ARMcmov GPR:$false, so_imm:$true, imm:$cc, CCR:$ccr))*/]>,
                 RegConstraint<"$false = $dst">, UnaryDP {
-    let Inst{25} = 1;
+  let Inst{25} = 1;
 }
 
 
@@ -1459,7 +1570,7 @@ def MOVCCi : AI1<0b1101, (outs GPR:$dst),
 let isCall = 1,
   Defs = [R0, R12, LR, CPSR] in {
   def TPsoft : ABXI<0b1011, (outs), (ins), IIC_Br,
-               "bl __aeabi_read_tp",
+               "bl\t__aeabi_read_tp",
                [(set R0, ARMthread_pointer)]>;
 }
 
@@ -1483,12 +1594,12 @@ let Defs =
   def Int_eh_sjlj_setjmp : XI<(outs), (ins GPR:$src),
                                AddrModeNone, SizeSpecial, IndexModeNone,
                                Pseudo, NoItinerary,
-                               "str sp, [$src, #+8] @ eh_setjmp begin\n\t"
-                               "add r12, pc, #8\n\t"
-                               "str r12, [$src, #+4]\n\t"
-                               "mov r0, #0\n\t"
-                               "add pc, pc, #0\n\t"
-                               "mov r0, #1 @ eh_setjmp end", "",
+                               "str\tsp, [$src, #+8] @ eh_setjmp begin\n\t"
+                               "add\tr12, pc, #8\n\t"
+                               "str\tr12, [$src, #+4]\n\t"
+                               "mov\tr0, #0\n\t"
+                               "add\tpc, pc, #0\n\t"
+                               "mov\tr0, #1 @ eh_setjmp end", "",
                                [(set R0, (ARMeh_sjlj_setjmp GPR:$src))]>;
 }
 
@@ -1496,19 +1607,13 @@ let Defs =
 // Non-Instruction Patterns
 //
 
-// ConstantPool, GlobalAddress, and JumpTable
-def : ARMPat<(ARMWrapper  tglobaladdr :$dst), (LEApcrel tglobaladdr :$dst)>;
-def : ARMPat<(ARMWrapper  tconstpool  :$dst), (LEApcrel tconstpool  :$dst)>;
-def : ARMPat<(ARMWrapperJT tjumptable:$dst, imm:$id),
-             (LEApcrelJT tjumptable:$dst, imm:$id)>;
-
 // Large immediate handling.
 
 // Two piece so_imms.
 let isReMaterializable = 1 in
 def MOVi2pieces : AI1x2<(outs GPR:$dst), (ins so_imm2part:$src), 
                          Pseudo, IIC_iMOVi,
-                         "mov", " $dst, $src",
+                         "mov", "\t$dst, $src",
                          [(set GPR:$dst, so_imm2part:$src)]>,
                   Requires<[IsARM, NoV6T2]>;
 
@@ -1518,16 +1623,32 @@ def : ARMPat<(or GPR:$LHS, so_imm2part:$RHS),
 def : ARMPat<(xor GPR:$LHS, so_imm2part:$RHS),
              (EORri (EORri GPR:$LHS, (so_imm2part_1 imm:$RHS)),
                     (so_imm2part_2 imm:$RHS))>;
+def : ARMPat<(add GPR:$LHS, so_imm2part:$RHS),
+             (ADDri (ADDri GPR:$LHS, (so_imm2part_1 imm:$RHS)),
+                    (so_imm2part_2 imm:$RHS))>;
+def : ARMPat<(add GPR:$LHS, so_neg_imm2part:$RHS),
+             (SUBri (SUBri GPR:$LHS, (so_neg_imm2part_1 imm:$RHS)),
+                    (so_neg_imm2part_2 imm:$RHS))>;
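// Note: so_imm2part splits a constant that is not itself a valid rotated
// 8-bit immediate into two that are, so e.g. an add of #0x12340000 can be
// emitted as:
//   add r0, r1, #0x12000000
//   add r0, r0, #0x340000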
 
 // 32-bit immediate using movw + movt.
-// This is a single pseudo instruction to make it re-materializable. Remove
-// when we can do generalized remat.
+// This is a single pseudo instruction; the benefit is that it can be remat'd
+// as a single unit instead of having to handle reg inputs.
+// FIXME: Remove this when we can do generalized remat.
 let isReMaterializable = 1 in
 def MOVi32imm : AI1x2<(outs GPR:$dst), (ins i32imm:$src), Pseudo, IIC_iMOVi,
-                     "movw", " $dst, ${src:lo16}\n\tmovt${p} $dst, ${src:hi16}",
+                    "movw", "\t$dst, ${src:lo16}\n\tmovt${p}\t$dst, ${src:hi16}",
                      [(set GPR:$dst, (i32 imm:$src))]>,
                Requires<[IsARM, HasV6T2]>;
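// Note: per the asm string above, the pseudo prints as a movw/movt pair,
// e.g. for 0x12345678:
//   movw r0, #0x5678   @ low half
//   movt r0, #0x1234   @ high half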
 
+// ConstantPool, GlobalAddress, and JumpTable
+def : ARMPat<(ARMWrapper  tglobaladdr :$dst), (LEApcrel tglobaladdr :$dst)>,
+            Requires<[IsARM, DontUseMovt]>;
+def : ARMPat<(ARMWrapper  tconstpool  :$dst), (LEApcrel tconstpool  :$dst)>;
+def : ARMPat<(ARMWrapper  tglobaladdr :$dst), (MOVi32imm tglobaladdr :$dst)>,
+            Requires<[IsARM, UseMovt]>;
+def : ARMPat<(ARMWrapperJT tjumptable:$dst, imm:$id),
+             (LEApcrelJT tjumptable:$dst, imm:$id)>;
+
 // TODO: add,sub,and, 3-instr forms?
 
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrNEON.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrNEON.td
index 57af2c1..a4fe752 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrNEON.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrNEON.td
@@ -102,6 +102,19 @@ def addrmode_neonldstm : Operand<i32>,
 }
 */
 
+def h8imm  : Operand<i8> {
+  let PrintMethod = "printHex8ImmOperand";
+}
+def h16imm : Operand<i16> {
+  let PrintMethod = "printHex16ImmOperand";
+}
+def h32imm : Operand<i32> {
+  let PrintMethod = "printHex32ImmOperand";
+}
+def h64imm : Operand<i64> {
+  let PrintMethod = "printHex64ImmOperand";
+}
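// Note: these differ from the plain immediate operands only in their
// PrintMethod, which presumably prints the value in hexadecimal (e.g.
// "#0x1" rather than "#1") for the NEON instructions that use them.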
+
 //===----------------------------------------------------------------------===//
 // NEON load / store instructions
 //===----------------------------------------------------------------------===//
@@ -111,7 +124,7 @@ let mayLoad = 1, hasExtraDefRegAllocReq = 1 in {
 def VLDMD : NI<(outs),
                (ins addrmode_neonldstm:$addr, reglist:$dst1, variable_ops),
                IIC_fpLoadm,
-               "vldm${addr:submode} ${addr:base}, $dst1",
+               "vldm", "${addr:submode} ${addr:base}, $dst1",
                []> {
   let Inst{27-25} = 0b110;
   let Inst{20}    = 1;
@@ -121,7 +134,7 @@ def VLDMD : NI<(outs),
 def VLDMS : NI<(outs),
                (ins addrmode_neonldstm:$addr, reglist:$dst1, variable_ops),
                IIC_fpLoadm,
-               "vldm${addr:submode} ${addr:base}, $dst1",
+               "vldm", "${addr:submode} ${addr:base}, $dst1",
                []> {
   let Inst{27-25} = 0b110;
   let Inst{20}    = 1;
@@ -133,7 +146,7 @@ def VLDMS : NI<(outs),
 // Use vldmia to load a Q register as a D register pair.
 def VLDRQ : NI4<(outs QPR:$dst), (ins addrmode4:$addr),
                IIC_fpLoadm,
-               "vldmia $addr, ${dst:dregpair}",
+               "vldmia", "$addr, ${dst:dregpair}",
                [(set QPR:$dst, (v2f64 (load addrmode4:$addr)))]> {
   let Inst{27-25} = 0b110;
   let Inst{24}    = 0; // P bit
@@ -145,7 +158,7 @@ def VLDRQ : NI4<(outs QPR:$dst), (ins addrmode4:$addr),
 // Use vstmia to store a Q register as a D register pair.
 def VSTRQ : NI4<(outs), (ins QPR:$src, addrmode4:$addr),
                IIC_fpStorem,
-               "vstmia $addr, ${src:dregpair}",
+               "vstmia", "$addr, ${src:dregpair}",
                [(store (v2f64 QPR:$src), addrmode4:$addr)]> {
   let Inst{27-25} = 0b110;
   let Inst{24}    = 0; // P bit
@@ -155,187 +168,446 @@ def VSTRQ : NI4<(outs), (ins QPR:$src, addrmode4:$addr),
 }
 
 //   VLD1     : Vector Load (multiple single elements)
-class VLD1D<string OpcodeStr, ValueType Ty, Intrinsic IntOp>
-  : NLdSt<(outs DPR:$dst), (ins addrmode6:$addr), IIC_VLD1,
-          !strconcat(OpcodeStr, "\t\\{$dst\\}, $addr"), "",
+class VLD1D<bits<4> op7_4, string OpcodeStr, string Dt,
+            ValueType Ty, Intrinsic IntOp>
+  : NLdSt<0,0b10,0b0111,op7_4, (outs DPR:$dst), (ins addrmode6:$addr), IIC_VLD1,
+          OpcodeStr, Dt, "\\{$dst\\}, $addr", "",
           [(set DPR:$dst, (Ty (IntOp addrmode6:$addr)))]>;
-class VLD1Q<string OpcodeStr, ValueType Ty, Intrinsic IntOp>
-  : NLdSt<(outs QPR:$dst), (ins addrmode6:$addr), IIC_VLD1,
-          !strconcat(OpcodeStr, "\t${dst:dregpair}, $addr"), "",
+class VLD1Q<bits<4> op7_4, string OpcodeStr, string Dt,
+            ValueType Ty, Intrinsic IntOp>
+  : NLdSt<0,0b10,0b1010,op7_4, (outs QPR:$dst), (ins addrmode6:$addr), IIC_VLD1,
+          OpcodeStr, Dt, "${dst:dregpair}, $addr", "",
           [(set QPR:$dst, (Ty (IntOp addrmode6:$addr)))]>;
 
-def  VLD1d8   : VLD1D<"vld1.8",  v8i8,  int_arm_neon_vld1>;
-def  VLD1d16  : VLD1D<"vld1.16", v4i16, int_arm_neon_vld1>;
-def  VLD1d32  : VLD1D<"vld1.32", v2i32, int_arm_neon_vld1>;
-def  VLD1df   : VLD1D<"vld1.32", v2f32, int_arm_neon_vld1>;
-def  VLD1d64  : VLD1D<"vld1.64", v1i64, int_arm_neon_vld1>;
+def  VLD1d8   : VLD1D<0b0000, "vld1", "8",  v8i8,  int_arm_neon_vld1>;
+def  VLD1d16  : VLD1D<0b0100, "vld1", "16", v4i16, int_arm_neon_vld1>;
+def  VLD1d32  : VLD1D<0b1000, "vld1", "32", v2i32, int_arm_neon_vld1>;
+def  VLD1df   : VLD1D<0b1000, "vld1", "32", v2f32, int_arm_neon_vld1>;
+def  VLD1d64  : VLD1D<0b1100, "vld1", "64", v1i64, int_arm_neon_vld1>;
 
-def  VLD1q8   : VLD1Q<"vld1.8",  v16i8, int_arm_neon_vld1>;
-def  VLD1q16  : VLD1Q<"vld1.16", v8i16, int_arm_neon_vld1>;
-def  VLD1q32  : VLD1Q<"vld1.32", v4i32, int_arm_neon_vld1>;
-def  VLD1qf   : VLD1Q<"vld1.32", v4f32, int_arm_neon_vld1>;
-def  VLD1q64  : VLD1Q<"vld1.64", v2i64, int_arm_neon_vld1>;
+def  VLD1q8   : VLD1Q<0b0000, "vld1", "8",  v16i8, int_arm_neon_vld1>;
+def  VLD1q16  : VLD1Q<0b0100, "vld1", "16", v8i16, int_arm_neon_vld1>;
+def  VLD1q32  : VLD1Q<0b1000, "vld1", "32", v4i32, int_arm_neon_vld1>;
+def  VLD1qf   : VLD1Q<0b1000, "vld1", "32", v4f32, int_arm_neon_vld1>;
+def  VLD1q64  : VLD1Q<0b1100, "vld1", "64", v2i64, int_arm_neon_vld1>;
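// Note: the new leading NLdSt parameters supply the fixed opcode and size
// bits per variant (e.g. op7_4 = 0b0100 for the 16-bit forms, presumably
// carrying the element size in bits 7-6), so VLD1d16 corresponds to e.g.
//   vld1.16 {d0}, [r0]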
 
 let mayLoad = 1, hasExtraDefRegAllocReq = 1 in {
 
 //   VLD2     : Vector Load (multiple 2-element structures)
-class VLD2D<string OpcodeStr>
-  : NLdSt<(outs DPR:$dst1, DPR:$dst2), (ins addrmode6:$addr), IIC_VLD2,
-          !strconcat(OpcodeStr, "\t\\{$dst1,$dst2\\}, $addr"), "", []>;
+class VLD2D<bits<4> op7_4, string OpcodeStr, string Dt>
+  : NLdSt<0,0b10,0b1000,op7_4, (outs DPR:$dst1, DPR:$dst2),
+          (ins addrmode6:$addr), IIC_VLD2,
+          OpcodeStr, Dt, "\\{$dst1,$dst2\\}, $addr", "", []>;
+class VLD2Q<bits<4> op7_4, string OpcodeStr, string Dt>
+  : NLdSt<0,0b10,0b0011,op7_4,
+          (outs DPR:$dst1, DPR:$dst2, DPR:$dst3, DPR:$dst4),
+          (ins addrmode6:$addr), IIC_VLD2,
+          OpcodeStr, Dt, "\\{$dst1,$dst2,$dst3,$dst4\\}, $addr",
+          "", []>;
 
-def  VLD2d8   : VLD2D<"vld2.8">;
-def  VLD2d16  : VLD2D<"vld2.16">;
-def  VLD2d32  : VLD2D<"vld2.32">;
+def  VLD2d8   : VLD2D<0b0000, "vld2", "8">;
+def  VLD2d16  : VLD2D<0b0100, "vld2", "16">;
+def  VLD2d32  : VLD2D<0b1000, "vld2", "32">;
+def  VLD2d64  : NLdSt<0,0b10,0b1010,0b1100, (outs DPR:$dst1, DPR:$dst2),
+                      (ins addrmode6:$addr), IIC_VLD1,
+                      "vld1", "64", "\\{$dst1,$dst2\\}, $addr", "", []>;
 
-//   VLD3     : Vector Load (multiple 3-element structures)
-class VLD3D<string OpcodeStr>
-  : NLdSt<(outs DPR:$dst1, DPR:$dst2, DPR:$dst3), (ins addrmode6:$addr),
-          IIC_VLD3,
-          !strconcat(OpcodeStr, "\t\\{$dst1,$dst2,$dst3\\}, $addr"), "", []>;
+def  VLD2q8   : VLD2Q<0b0000, "vld2", "8">;
+def  VLD2q16  : VLD2Q<0b0100, "vld2", "16">;
+def  VLD2q32  : VLD2Q<0b1000, "vld2", "32">;
 
-def  VLD3d8   : VLD3D<"vld3.8">;
-def  VLD3d16  : VLD3D<"vld3.16">;
-def  VLD3d32  : VLD3D<"vld3.32">;
+//   VLD3     : Vector Load (multiple 3-element structures)
+class VLD3D<bits<4> op7_4, string OpcodeStr, string Dt>
+  : NLdSt<0,0b10,0b0100,op7_4, (outs DPR:$dst1, DPR:$dst2, DPR:$dst3),
+          (ins addrmode6:$addr), IIC_VLD3,
+          OpcodeStr, Dt, "\\{$dst1,$dst2,$dst3\\}, $addr", "", []>;
+class VLD3WB<bits<4> op7_4, string OpcodeStr, string Dt>
+  : NLdSt<0,0b10,0b0101,op7_4, (outs DPR:$dst1, DPR:$dst2, DPR:$dst3, GPR:$wb),
+          (ins addrmode6:$addr), IIC_VLD3,
+          OpcodeStr, Dt, "\\{$dst1,$dst2,$dst3\\}, $addr",
+          "$addr.addr = $wb", []>;
+
+def  VLD3d8   : VLD3D<0b0000, "vld3", "8">;
+def  VLD3d16  : VLD3D<0b0100, "vld3", "16">;
+def  VLD3d32  : VLD3D<0b1000, "vld3", "32">;
+def  VLD3d64  : NLdSt<0,0b10,0b0110,0b1100,
+                      (outs DPR:$dst1, DPR:$dst2, DPR:$dst3),
+                      (ins addrmode6:$addr), IIC_VLD1,
+                      "vld1", "64", "\\{$dst1,$dst2,$dst3\\}, $addr", "", []>;
+
+// vld3 to double-spaced even registers.
+def  VLD3q8a  : VLD3WB<0b0000, "vld3", "8">;
+def  VLD3q16a : VLD3WB<0b0100, "vld3", "16">;
+def  VLD3q32a : VLD3WB<0b1000, "vld3", "32">;
+
+// vld3 to double-spaced odd registers.
+def  VLD3q8b  : VLD3WB<0b0000, "vld3", "8">;
+def  VLD3q16b : VLD3WB<0b0100, "vld3", "16">;
+def  VLD3q32b : VLD3WB<0b1000, "vld3", "32">;
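// Note: the "a"/"b" variants cover the even- and odd-numbered halves of a
// Q-register vld3 via double-spaced D-register lists with writeback, e.g.:
//   vld3.8 {d0, d2, d4}, [r0]!   @ even half
//   vld3.8 {d1, d3, d5}, [r0]!   @ odd half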
 
 //   VLD4     : Vector Load (multiple 4-element structures)
-class VLD4D<string OpcodeStr>
-  : NLdSt<(outs DPR:$dst1, DPR:$dst2, DPR:$dst3, DPR:$dst4),
+class VLD4D<bits<4> op7_4, string OpcodeStr, string Dt>
+  : NLdSt<0,0b10,0b0000,op7_4,
+          (outs DPR:$dst1, DPR:$dst2, DPR:$dst3, DPR:$dst4),
           (ins addrmode6:$addr), IIC_VLD4,
-          !strconcat(OpcodeStr, "\t\\{$dst1,$dst2,$dst3,$dst4\\}, $addr"),
+          OpcodeStr, Dt, "\\{$dst1,$dst2,$dst3,$dst4\\}, $addr",
           "", []>;
-
-def  VLD4d8   : VLD4D<"vld4.8">;
-def  VLD4d16  : VLD4D<"vld4.16">;
-def  VLD4d32  : VLD4D<"vld4.32">;
+class VLD4WB<bits<4> op7_4, string OpcodeStr, string Dt>
+  : NLdSt<0,0b10,0b0001,op7_4,
+          (outs DPR:$dst1, DPR:$dst2, DPR:$dst3, DPR:$dst4, GPR:$wb),
+          (ins addrmode6:$addr), IIC_VLD4,
+          OpcodeStr, Dt, "\\{$dst1,$dst2,$dst3,$dst4\\}, $addr",
+          "$addr.addr = $wb", []>;
+
+def  VLD4d8   : VLD4D<0b0000, "vld4", "8">;
+def  VLD4d16  : VLD4D<0b0100, "vld4", "16">;
+def  VLD4d32  : VLD4D<0b1000, "vld4", "32">;
+def  VLD4d64  : NLdSt<0,0b10,0b0010,0b1100,
+                      (outs DPR:$dst1, DPR:$dst2, DPR:$dst3, DPR:$dst4),
+                      (ins addrmode6:$addr), IIC_VLD1,
+                   "vld1", "64", "\\{$dst1,$dst2,$dst3,$dst4\\}, $addr", "", []>;
+
+// vld4 to double-spaced even registers.
+def  VLD4q8a  : VLD4WB<0b0000, "vld4", "8">;
+def  VLD4q16a : VLD4WB<0b0100, "vld4", "16">;
+def  VLD4q32a : VLD4WB<0b1000, "vld4", "32">;
+
+// vld4 to double-spaced odd registers.
+def  VLD4q8b  : VLD4WB<0b0000, "vld4", "8">;
+def  VLD4q16b : VLD4WB<0b0100, "vld4", "16">;
+def  VLD4q32b : VLD4WB<0b1000, "vld4", "32">;
+
+//   VLD1LN   : Vector Load (single element to one lane)
+//   FIXME: Not yet implemented.
 
 //   VLD2LN   : Vector Load (single 2-element structure to one lane)
-class VLD2LND<string OpcodeStr>
-  : NLdSt<(outs DPR:$dst1, DPR:$dst2),
-          (ins addrmode6:$addr, DPR:$src1, DPR:$src2, nohash_imm:$lane),
-          IIC_VLD2,
-          !strconcat(OpcodeStr, "\t\\{$dst1[$lane],$dst2[$lane]\\}, $addr"),
-          "$src1 = $dst1, $src2 = $dst2", []>;
+class VLD2LN<bits<4> op11_8, string OpcodeStr, string Dt>
+  : NLdSt<1,0b10,op11_8,{?,?,?,?}, (outs DPR:$dst1, DPR:$dst2),
+            (ins addrmode6:$addr, DPR:$src1, DPR:$src2, nohash_imm:$lane),
+            IIC_VLD2,
+            OpcodeStr, Dt, "\\{$dst1[$lane],$dst2[$lane]\\}, $addr",
+            "$src1 = $dst1, $src2 = $dst2", []>;
+
+// vld2 to single-spaced registers.
+def VLD2LNd8  : VLD2LN<0b0001, "vld2", "8">;
+def VLD2LNd16 : VLD2LN<0b0101, "vld2", "16"> {
+  let Inst{5} = 0;
+}
+def VLD2LNd32 : VLD2LN<0b1001, "vld2", "32"> {
+  let Inst{6} = 0;
+}
 
-def VLD2LNd8  : VLD2LND<"vld2.8">;
-def VLD2LNd16 : VLD2LND<"vld2.16">;
-def VLD2LNd32 : VLD2LND<"vld2.32">;
+// vld2 to double-spaced even registers.
+def VLD2LNq16a: VLD2LN<0b0101, "vld2", "16"> {
+  let Inst{5} = 1;
+}
+def VLD2LNq32a: VLD2LN<0b1001, "vld2", "32"> {
+  let Inst{6} = 1;
+}
+
+// vld2 to double-spaced odd registers.
+def VLD2LNq16b: VLD2LN<0b0101, "vld2", "16"> {
+  let Inst{5} = 1;
+}
+def VLD2LNq32b: VLD2LN<0b1001, "vld2", "32"> {
+  let Inst{6} = 1;
+}
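+
+// Note: in these lane forms Inst{7-4} is the ARM index_align field, whose
+// upper bits hold the lane number (hence the {?,?,?,?} placeholders in the
+// class). For 16-bit elements Inst{5}, and for 32-bit elements Inst{6}, is
+// the register-spacing bit: 0 = single-spaced, 1 = double-spaced. The same
+// scheme recurs in the VLD3LN/VLD4LN and VST*LN definitions below.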
 
 //   VLD3LN   : Vector Load (single 3-element structure to one lane)
-class VLD3LND<string OpcodeStr>
-  : NLdSt<(outs DPR:$dst1, DPR:$dst2, DPR:$dst3),
-          (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3,
-          nohash_imm:$lane), IIC_VLD3,
-          !strconcat(OpcodeStr,
-          "\t\\{$dst1[$lane],$dst2[$lane],$dst3[$lane]\\}, $addr"),
-          "$src1 = $dst1, $src2 = $dst2, $src3 = $dst3", []>;
-
-def VLD3LNd8  : VLD3LND<"vld3.8">;
-def VLD3LNd16 : VLD3LND<"vld3.16">;
-def VLD3LNd32 : VLD3LND<"vld3.32">;
+class VLD3LN<bits<4> op11_8, string OpcodeStr, string Dt>
+  : NLdSt<1,0b10,op11_8,{?,?,?,?}, (outs DPR:$dst1, DPR:$dst2, DPR:$dst3),
+            (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3,
+            nohash_imm:$lane), IIC_VLD3,
+            OpcodeStr, Dt,
+            "\\{$dst1[$lane],$dst2[$lane],$dst3[$lane]\\}, $addr",
+            "$src1 = $dst1, $src2 = $dst2, $src3 = $dst3", []>;
+
+// vld3 to single-spaced registers.
+def VLD3LNd8  : VLD3LN<0b0010, "vld3", "8"> {
+  let Inst{4} = 0;
+}
+def VLD3LNd16 : VLD3LN<0b0110, "vld3", "16"> {
+  let Inst{5-4} = 0b00;
+}
+def VLD3LNd32 : VLD3LN<0b1010, "vld3", "32"> {
+  let Inst{6-4} = 0b000;
+}
+
+// vld3 to double-spaced even registers.
+def VLD3LNq16a: VLD3LN<0b0110, "vld3", "16"> {
+  let Inst{5-4} = 0b10;
+}
+def VLD3LNq32a: VLD3LN<0b1010, "vld3", "32"> {
+  let Inst{6-4} = 0b100;
+}
+
+// vld3 to double-spaced odd registers.
+def VLD3LNq16b: VLD3LN<0b0110, "vld3", "16"> {
+  let Inst{5-4} = 0b10;
+}
+def VLD3LNq32b: VLD3LN<0b1010, "vld3", "32"> {
+  let Inst{6-4} = 0b100;
+}
 
 //   VLD4LN   : Vector Load (single 4-element structure to one lane)
-class VLD4LND<string OpcodeStr>
-  : NLdSt<(outs DPR:$dst1, DPR:$dst2, DPR:$dst3, DPR:$dst4),
-          (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3, DPR:$src4,
-          nohash_imm:$lane), IIC_VLD4,
-          !strconcat(OpcodeStr,
-          "\t\\{$dst1[$lane],$dst2[$lane],$dst3[$lane],$dst4[$lane]\\}, $addr"),
-          "$src1 = $dst1, $src2 = $dst2, $src3 = $dst3, $src4 = $dst4", []>;
-
-def VLD4LNd8  : VLD4LND<"vld4.8">;
-def VLD4LNd16 : VLD4LND<"vld4.16">;
-def VLD4LNd32 : VLD4LND<"vld4.32">;
+class VLD4LN<bits<4> op11_8, string OpcodeStr, string Dt>
+  : NLdSt<1,0b10,op11_8,{?,?,?,?},
+            (outs DPR:$dst1, DPR:$dst2, DPR:$dst3, DPR:$dst4),
+            (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3, DPR:$src4,
+            nohash_imm:$lane), IIC_VLD4,
+            OpcodeStr, Dt,
+           "\\{$dst1[$lane],$dst2[$lane],$dst3[$lane],$dst4[$lane]\\}, $addr",
+            "$src1 = $dst1, $src2 = $dst2, $src3 = $dst3, $src4 = $dst4", []>;
+
+// vld4 to single-spaced registers.
+def VLD4LNd8  : VLD4LN<0b0011, "vld4", "8">;
+def VLD4LNd16 : VLD4LN<0b0111, "vld4", "16"> {
+  let Inst{5} = 0;
+}
+def VLD4LNd32 : VLD4LN<0b1011, "vld4", "32"> {
+  let Inst{6} = 0;
+}
+
+// vld4 to double-spaced even registers.
+def VLD4LNq16a: VLD4LN<0b0111, "vld4", "16"> {
+  let Inst{5} = 1;
+}
+def VLD4LNq32a: VLD4LN<0b1011, "vld4", "32"> {
+  let Inst{6} = 1;
+}
+
+// vld4 to double-spaced odd registers.
+def VLD4LNq16b: VLD4LN<0b0111, "vld4", "16"> {
+  let Inst{5} = 1;
+}
+def VLD4LNq32b: VLD4LN<0b1011, "vld4", "32"> {
+  let Inst{6} = 1;
+}
+
+//   VLD1DUP  : Vector Load (single element to all lanes)
+//   VLD2DUP  : Vector Load (single 2-element structure to all lanes)
+//   VLD3DUP  : Vector Load (single 3-element structure to all lanes)
+//   VLD4DUP  : Vector Load (single 4-element structure to all lanes)
+//   FIXME: Not yet implemented.
 } // mayLoad = 1, hasExtraDefRegAllocReq = 1
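+
+// (mayLoad marks these as loads for scheduling and alias analysis, and
+// hasExtraDefRegAllocReq signals the register allocator that the multiple
+// D-register results carry constraints beyond their register class: the
+// destinations must be suitably consecutive or evenly spaced registers.)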
 
 //   VST1     : Vector Store (multiple single elements)
-class VST1D<string OpcodeStr, ValueType Ty, Intrinsic IntOp>
-  : NLdSt<(outs), (ins addrmode6:$addr, DPR:$src), IIC_VST,
-          !strconcat(OpcodeStr, "\t\\{$src\\}, $addr"), "",
+class VST1D<bits<4> op7_4, string OpcodeStr, string Dt,
+            ValueType Ty, Intrinsic IntOp>
+  : NLdSt<0,0b00,0b0111,op7_4, (outs), (ins addrmode6:$addr, DPR:$src), IIC_VST,
+          OpcodeStr, Dt, "\\{$src\\}, $addr", "",
           [(IntOp addrmode6:$addr, (Ty DPR:$src))]>;
-class VST1Q<string OpcodeStr, ValueType Ty, Intrinsic IntOp>
-  : NLdSt<(outs), (ins addrmode6:$addr, QPR:$src), IIC_VST,
-          !strconcat(OpcodeStr, "\t${src:dregpair}, $addr"), "",
+class VST1Q<bits<4> op7_4, string OpcodeStr, string Dt,
+            ValueType Ty, Intrinsic IntOp>
+  : NLdSt<0,0b00,0b1010,op7_4, (outs), (ins addrmode6:$addr, QPR:$src), IIC_VST,
+          OpcodeStr, Dt, "${src:dregpair}, $addr", "",
           [(IntOp addrmode6:$addr, (Ty QPR:$src))]>;
 
 let hasExtraSrcRegAllocReq = 1 in {
-def  VST1d8   : VST1D<"vst1.8",  v8i8,  int_arm_neon_vst1>;
-def  VST1d16  : VST1D<"vst1.16", v4i16, int_arm_neon_vst1>;
-def  VST1d32  : VST1D<"vst1.32", v2i32, int_arm_neon_vst1>;
-def  VST1df   : VST1D<"vst1.32", v2f32, int_arm_neon_vst1>;
-def  VST1d64  : VST1D<"vst1.64", v1i64, int_arm_neon_vst1>;
-
-def  VST1q8   : VST1Q<"vst1.8",  v16i8, int_arm_neon_vst1>;
-def  VST1q16  : VST1Q<"vst1.16", v8i16, int_arm_neon_vst1>;
-def  VST1q32  : VST1Q<"vst1.32", v4i32, int_arm_neon_vst1>;
-def  VST1qf   : VST1Q<"vst1.32", v4f32, int_arm_neon_vst1>;
-def  VST1q64  : VST1Q<"vst1.64", v2i64, int_arm_neon_vst1>;
+def  VST1d8   : VST1D<0b0000, "vst1", "8",  v8i8,  int_arm_neon_vst1>;
+def  VST1d16  : VST1D<0b0100, "vst1", "16", v4i16, int_arm_neon_vst1>;
+def  VST1d32  : VST1D<0b1000, "vst1", "32", v2i32, int_arm_neon_vst1>;
+def  VST1df   : VST1D<0b1000, "vst1", "32", v2f32, int_arm_neon_vst1>;
+def  VST1d64  : VST1D<0b1100, "vst1", "64", v1i64, int_arm_neon_vst1>;
+
+def  VST1q8   : VST1Q<0b0000, "vst1", "8",  v16i8, int_arm_neon_vst1>;
+def  VST1q16  : VST1Q<0b0100, "vst1", "16", v8i16, int_arm_neon_vst1>;
+def  VST1q32  : VST1Q<0b1000, "vst1", "32", v4i32, int_arm_neon_vst1>;
+def  VST1qf   : VST1Q<0b1000, "vst1", "32", v4f32, int_arm_neon_vst1>;
+def  VST1q64  : VST1Q<0b1100, "vst1", "64", v2i64, int_arm_neon_vst1>;
 } // hasExtraSrcRegAllocReq
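+
+// For illustration: with OpcodeStr and Dt now passed separately, the base
+// classes assemble the final asm string (roughly OpcodeStr # "." # Dt #
+// "\t" # operand string), so VST1d8 above still prints as, e.g.,
+//   vst1.8  {d0}, [r0]
+// while the ".8" data-type suffix stays available as a distinct field.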
 
 let mayStore = 1, hasExtraSrcRegAllocReq = 1 in {
 
 //   VST2     : Vector Store (multiple 2-element structures)
-class VST2D<string OpcodeStr>
-  : NLdSt<(outs), (ins addrmode6:$addr, DPR:$src1, DPR:$src2), IIC_VST,
-          !strconcat(OpcodeStr, "\t\\{$src1,$src2\\}, $addr"), "", []>;
+class VST2D<bits<4> op7_4, string OpcodeStr, string Dt>
+  : NLdSt<0,0b00,0b1000,op7_4, (outs),
+          (ins addrmode6:$addr, DPR:$src1, DPR:$src2), IIC_VST,
+          OpcodeStr, Dt, "\\{$src1,$src2\\}, $addr", "", []>;
+class VST2Q<bits<4> op7_4, string OpcodeStr, string Dt>
+  : NLdSt<0,0b00,0b0011,op7_4, (outs),
+          (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3, DPR:$src4),
+          IIC_VST,
+          OpcodeStr, Dt, "\\{$src1,$src2,$src3,$src4\\}, $addr",
+          "", []>;
 
-def  VST2d8   : VST2D<"vst2.8">;
-def  VST2d16  : VST2D<"vst2.16">;
-def  VST2d32  : VST2D<"vst2.32">;
+def  VST2d8   : VST2D<0b0000, "vst2", "8">;
+def  VST2d16  : VST2D<0b0100, "vst2", "16">;
+def  VST2d32  : VST2D<0b1000, "vst2", "32">;
+def  VST2d64  : NLdSt<0,0b00,0b1010,0b1100, (outs),
+                      (ins addrmode6:$addr, DPR:$src1, DPR:$src2), IIC_VST,
+                      "vst1", "64", "\\{$src1,$src2\\}, $addr", "", []>;
 
-//   VST3     : Vector Store (multiple 3-element structures)
-class VST3D<string OpcodeStr>
-  : NLdSt<(outs), (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3),
-          IIC_VST,
-          !strconcat(OpcodeStr, "\t\\{$src1,$src2,$src3\\}, $addr"), "", []>;
+def  VST2q8   : VST2Q<0b0000, "vst2", "8">;
+def  VST2q16  : VST2Q<0b0100, "vst2", "16">;
+def  VST2q32  : VST2Q<0b1000, "vst2", "32">;
 
-def  VST3d8   : VST3D<"vst3.8">;
-def  VST3d16  : VST3D<"vst3.16">;
-def  VST3d32  : VST3D<"vst3.32">;
+//   VST3     : Vector Store (multiple 3-element structures)
+class VST3D<bits<4> op7_4, string OpcodeStr, string Dt>
+  : NLdSt<0,0b00,0b0100,op7_4, (outs),
+          (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3), IIC_VST,
+          OpcodeStr, Dt, "\\{$src1,$src2,$src3\\}, $addr", "", []>;
+class VST3WB<bits<4> op7_4, string OpcodeStr, string Dt>
+  : NLdSt<0,0b00,0b0101,op7_4, (outs GPR:$wb),
+          (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3), IIC_VST,
+          OpcodeStr, Dt, "\\{$src1,$src2,$src3\\}, $addr",
+          "$addr.addr = $wb", []>;
+
+def  VST3d8   : VST3D<0b0000, "vst3", "8">;
+def  VST3d16  : VST3D<0b0100, "vst3", "16">;
+def  VST3d32  : VST3D<0b1000, "vst3", "32">;
+def  VST3d64  : NLdSt<0,0b00,0b0110,0b1100, (outs),
+                      (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3),
+                      IIC_VST,
+                      "vst1", "64", "\\{$src1,$src2,$src3\\}, $addr", "", []>;
+
+// vst3 to double-spaced even registers.
+def  VST3q8a  : VST3WB<0b0000, "vst3", "8">;
+def  VST3q16a : VST3WB<0b0100, "vst3", "16">;
+def  VST3q32a : VST3WB<0b1000, "vst3", "32">;
+
+// vst3 to double-spaced odd registers.
+def  VST3q8b  : VST3WB<0b0000, "vst3", "8">;
+def  VST3q16b : VST3WB<0b0100, "vst3", "16">;
+def  VST3q32b : VST3WB<0b1000, "vst3", "32">;
 
 //   VST4     : Vector Store (multiple 4-element structures)
-class VST4D<string OpcodeStr>
-  : NLdSt<(outs), (ins addrmode6:$addr,
-                   DPR:$src1, DPR:$src2, DPR:$src3, DPR:$src4), IIC_VST,
-          !strconcat(OpcodeStr, "\t\\{$src1,$src2,$src3,$src4\\}, $addr"),
+class VST4D<bits<4> op7_4, string OpcodeStr, string Dt>
+  : NLdSt<0,0b00,0b0000,op7_4, (outs),
+          (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3, DPR:$src4),
+          IIC_VST,
+          OpcodeStr, Dt, "\\{$src1,$src2,$src3,$src4\\}, $addr",
           "", []>;
-
-def  VST4d8   : VST4D<"vst4.8">;
-def  VST4d16  : VST4D<"vst4.16">;
-def  VST4d32  : VST4D<"vst4.32">;
+class VST4WB<bits<4> op7_4, string OpcodeStr, string Dt>
+  : NLdSt<0,0b00,0b0001,op7_4, (outs GPR:$wb),
+          (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3, DPR:$src4),
+          IIC_VST,
+          OpcodeStr, Dt, "\\{$src1,$src2,$src3,$src4\\}, $addr",
+          "$addr.addr = $wb", []>;
+
+def  VST4d8   : VST4D<0b0000, "vst4", "8">;
+def  VST4d16  : VST4D<0b0100, "vst4", "16">;
+def  VST4d32  : VST4D<0b1000, "vst4", "32">;
+def  VST4d64  : NLdSt<0,0b00,0b0010,0b1100, (outs),
+                      (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3,
+                       DPR:$src4), IIC_VST,
+                   "vst1", "64", "\\{$src1,$src2,$src3,$src4\\}, $addr", "", []>;
+
+// vst4 to double-spaced even registers.
+def  VST4q8a  : VST4WB<0b0000, "vst4", "8">;
+def  VST4q16a : VST4WB<0b0100, "vst4", "16">;
+def  VST4q32a : VST4WB<0b1000, "vst4", "32">;
+
+// vst4 to double-spaced odd registers.
+def  VST4q8b  : VST4WB<0b0000, "vst4", "8">;
+def  VST4q16b : VST4WB<0b0100, "vst4", "16">;
+def  VST4q32b : VST4WB<0b1000, "vst4", "32">;
+
+//   VST1LN   : Vector Store (single element from one lane)
+//   FIXME: Not yet implemented.
 
 //   VST2LN   : Vector Store (single 2-element structure from one lane)
-class VST2LND<string OpcodeStr>
-  : NLdSt<(outs), (ins addrmode6:$addr, DPR:$src1, DPR:$src2, nohash_imm:$lane),
-          IIC_VST,
-          !strconcat(OpcodeStr, "\t\\{$src1[$lane],$src2[$lane]\\}, $addr"),
-          "", []>;
+class VST2LN<bits<4> op11_8, string OpcodeStr, string Dt>
+  : NLdSt<1,0b00,op11_8,{?,?,?,?}, (outs),
+            (ins addrmode6:$addr, DPR:$src1, DPR:$src2, nohash_imm:$lane),
+            IIC_VST,
+            OpcodeStr, Dt, "\\{$src1[$lane],$src2[$lane]\\}, $addr",
+            "", []>;
+
+// vst2 to single-spaced registers.
+def VST2LNd8  : VST2LN<0b0001, "vst2", "8">;
+def VST2LNd16 : VST2LN<0b0101, "vst2", "16"> {
+  let Inst{5} = 0;
+}
+def VST2LNd32 : VST2LN<0b1001, "vst2", "32"> {
+  let Inst{6} = 0;
+}
 
-def VST2LNd8  : VST2LND<"vst2.8">;
-def VST2LNd16 : VST2LND<"vst2.16">;
-def VST2LNd32 : VST2LND<"vst2.32">;
+// vst2 to double-spaced even registers.
+def VST2LNq16a: VST2LN<0b0101, "vst2", "16"> {
+  let Inst{5} = 1;
+}
+def VST2LNq32a: VST2LN<0b1001, "vst2", "32"> {
+  let Inst{6} = 1;
+}
+
+// vst2 to double-spaced odd registers.
+def VST2LNq16b: VST2LN<0b0101, "vst2", "16"> {
+  let Inst{5} = 1;
+}
+def VST2LNq32b: VST2LN<0b1001, "vst2", "32"> {
+  let Inst{6} = 1;
+}
 
 //   VST3LN   : Vector Store (single 3-element structure from one lane)
-class VST3LND<string OpcodeStr>
-  : NLdSt<(outs), (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3,
-          nohash_imm:$lane), IIC_VST,
-          !strconcat(OpcodeStr,
-          "\t\\{$src1[$lane],$src2[$lane],$src3[$lane]\\}, $addr"), "", []>;
+class VST3LN<bits<4> op11_8, string OpcodeStr, string Dt>
+  : NLdSt<1,0b00,op11_8,{?,?,?,?}, (outs),
+            (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3,
+            nohash_imm:$lane), IIC_VST,
+            OpcodeStr, Dt,
+            "\\{$src1[$lane],$src2[$lane],$src3[$lane]\\}, $addr", "", []>;
+
+// vst3 to single-spaced registers.
+def VST3LNd8  : VST3LN<0b0010, "vst3", "8"> {
+  let Inst{4} = 0;
+}
+def VST3LNd16 : VST3LN<0b0110, "vst3", "16"> {
+  let Inst{5-4} = 0b00;
+}
+def VST3LNd32 : VST3LN<0b1010, "vst3", "32"> {
+  let Inst{6-4} = 0b000;
+}
 
-def VST3LNd8  : VST3LND<"vst3.8">;
-def VST3LNd16 : VST3LND<"vst3.16">;
-def VST3LNd32 : VST3LND<"vst3.32">;
+// vst3 to double-spaced even registers.
+def VST3LNq16a: VST3LN<0b0110, "vst3", "16"> {
+  let Inst{5-4} = 0b10;
+}
+def VST3LNq32a: VST3LN<0b1010, "vst3", "32"> {
+  let Inst{6-4} = 0b100;
+}
+
+// vst3 to double-spaced odd registers.
+def VST3LNq16b: VST3LN<0b0110, "vst3", "16"> {
+  let Inst{5-4} = 0b10;
+}
+def VST3LNq32b: VST3LN<0b1010, "vst3", "32"> {
+  let Inst{6-4} = 0b100;
+}
 
 //   VST4LN   : Vector Store (single 4-element structure from one lane)
-class VST4LND<string OpcodeStr>
-  : NLdSt<(outs), (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3,
-          DPR:$src4, nohash_imm:$lane), IIC_VST,
-          !strconcat(OpcodeStr,
-          "\t\\{$src1[$lane],$src2[$lane],$src3[$lane],$src4[$lane]\\}, $addr"),
-          "", []>;
+class VST4LN<bits<4> op11_8, string OpcodeStr, string Dt>
+  : NLdSt<1,0b00,op11_8,{?,?,?,?}, (outs),
+            (ins addrmode6:$addr, DPR:$src1, DPR:$src2, DPR:$src3, DPR:$src4,
+            nohash_imm:$lane), IIC_VST,
+            OpcodeStr, Dt,
+           "\\{$src1[$lane],$src2[$lane],$src3[$lane],$src4[$lane]\\}, $addr",
+            "", []>;
+
+// vst4 to single-spaced registers.
+def VST4LNd8  : VST4LN<0b0011, "vst4", "8">;
+def VST4LNd16 : VST4LN<0b0111, "vst4", "16"> {
+  let Inst{5} = 0;
+}
+def VST4LNd32 : VST4LN<0b1011, "vst4", "32"> {
+  let Inst{6} = 0;
+}
+
+// vst4 to double-spaced even registers.
+def VST4LNq16a: VST4LN<0b0111, "vst4", "16"> {
+  let Inst{5} = 1;
+}
+def VST4LNq32a: VST4LN<0b1011, "vst4", "32"> {
+  let Inst{6} = 1;
+}
+
+// vst4 to double-spaced odd registers.
+def VST4LNq16b: VST4LN<0b0111, "vst4", "16"> {
+  let Inst{5} = 1;
+}
+def VST4LNq32b: VST4LN<0b1011, "vst4", "32"> {
+  let Inst{6} = 1;
+}
 
-def VST4LNd8  : VST4LND<"vst4.8">;
-def VST4LNd16 : VST4LND<"vst4.16">;
-def VST4LNd32 : VST4LND<"vst4.32">;
 } // mayStore = 1, hasExtraSrcRegAllocReq = 1
 
 
@@ -384,25 +656,25 @@ def SubReg_i32_lane : SDNodeXForm<imm, [{
 
 // Basic 2-register operations, both double- and quad-register.
 class N2VD<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18,
-           bits<2> op17_16, bits<5> op11_7, bit op4, string OpcodeStr,
+           bits<2> op17_16, bits<5> op11_7, bit op4, string OpcodeStr,string Dt,
            ValueType ResTy, ValueType OpTy, SDNode OpNode>
   : N2V<op24_23, op21_20, op19_18, op17_16, op11_7, 0, op4, (outs DPR:$dst),
-        (ins DPR:$src), IIC_VUNAD, !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        (ins DPR:$src), IIC_VUNAD, OpcodeStr, Dt, "$dst, $src", "",
         [(set DPR:$dst, (ResTy (OpNode (OpTy DPR:$src))))]>;
 class N2VQ<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18,
-           bits<2> op17_16, bits<5> op11_7, bit op4, string OpcodeStr,
+           bits<2> op17_16, bits<5> op11_7, bit op4, string OpcodeStr,string Dt,
            ValueType ResTy, ValueType OpTy, SDNode OpNode>
   : N2V<op24_23, op21_20, op19_18, op17_16, op11_7, 1, op4, (outs QPR:$dst),
-        (ins QPR:$src), IIC_VUNAQ, !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        (ins QPR:$src), IIC_VUNAQ, OpcodeStr, Dt, "$dst, $src", "",
         [(set QPR:$dst, (ResTy (OpNode (OpTy QPR:$src))))]>;
 
 // Basic 2-register operations, scalar single-precision.
 class N2VDs<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18,
-            bits<2> op17_16, bits<5> op11_7, bit op4, string OpcodeStr,
+            bits<2> op17_16, bits<5> op11_7, bit op4, string OpcodeStr,string Dt,
             ValueType ResTy, ValueType OpTy, SDNode OpNode>
   : N2V<op24_23, op21_20, op19_18, op17_16, op11_7, 0, op4,
         (outs DPR_VFP2:$dst), (ins DPR_VFP2:$src),
-        IIC_VUNAD, !strconcat(OpcodeStr, "\t$dst, $src"), "", []>;
+        IIC_VUNAD, OpcodeStr, Dt, "$dst, $src", "", []>;
 
 class N2VDsPat<SDNode OpNode, ValueType ResTy, ValueType OpTy, NeonI Inst>
   : NEONFPPat<(ResTy (OpNode SPR:$a)),
@@ -413,27 +685,27 @@ class N2VDsPat<SDNode OpNode, ValueType ResTy, ValueType OpTy, NeonI Inst>
 // Basic 2-register intrinsics, both double- and quad-register.
 class N2VDInt<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18,
               bits<2> op17_16, bits<5> op11_7, bit op4, 
-              InstrItinClass itin, string OpcodeStr,
+              InstrItinClass itin, string OpcodeStr, string Dt,
               ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
   : N2V<op24_23, op21_20, op19_18, op17_16, op11_7, 0, op4, (outs DPR:$dst),
-        (ins DPR:$src), itin, !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        (ins DPR:$src), itin, OpcodeStr, Dt, "$dst, $src", "",
         [(set DPR:$dst, (ResTy (IntOp (OpTy DPR:$src))))]>;
 class N2VQInt<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18,
               bits<2> op17_16, bits<5> op11_7, bit op4,
-              InstrItinClass itin, string OpcodeStr,
+              InstrItinClass itin, string OpcodeStr, string Dt,
               ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
   : N2V<op24_23, op21_20, op19_18, op17_16, op11_7, 1, op4, (outs QPR:$dst),
-        (ins QPR:$src), itin, !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        (ins QPR:$src), itin, OpcodeStr, Dt, "$dst, $src", "",
         [(set QPR:$dst, (ResTy (IntOp (OpTy QPR:$src))))]>;
 
 // Basic 2-register intrinsics, scalar single-precision
 class N2VDInts<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18,
               bits<2> op17_16, bits<5> op11_7, bit op4, 
-              InstrItinClass itin, string OpcodeStr,
+              InstrItinClass itin, string OpcodeStr, string Dt,
               ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
   : N2V<op24_23, op21_20, op19_18, op17_16, op11_7, 0, op4,
         (outs DPR_VFP2:$dst), (ins DPR_VFP2:$src), itin,
-        !strconcat(OpcodeStr, "\t$dst, $src"), "", []>;
+        OpcodeStr, Dt, "$dst, $src", "", []>;
 
 class N2VDIntsPat<SDNode OpNode, NeonI Inst>
   : NEONFPPat<(f32 (OpNode SPR:$a)),
@@ -444,49 +716,62 @@ class N2VDIntsPat<SDNode OpNode, NeonI Inst>
 // Narrow 2-register intrinsics.
 class N2VNInt<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18,
               bits<2> op17_16, bits<5> op11_7, bit op6, bit op4,
-              InstrItinClass itin, string OpcodeStr,
+              InstrItinClass itin, string OpcodeStr, string Dt,
               ValueType TyD, ValueType TyQ, Intrinsic IntOp>
   : N2V<op24_23, op21_20, op19_18, op17_16, op11_7, op6, op4, (outs DPR:$dst),
-        (ins QPR:$src), itin, !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        (ins QPR:$src), itin, OpcodeStr, Dt, "$dst, $src", "",
         [(set DPR:$dst, (TyD (IntOp (TyQ QPR:$src))))]>;
 
-// Long 2-register intrinsics.  (This is currently only used for VMOVL and is
-// derived from N2VImm instead of N2V because of the way the size is encoded.)
-class N2VLInt<bit op24, bit op23, bits<6> op21_16, bits<4> op11_8, bit op7,
-              bit op6, bit op4, InstrItinClass itin, string OpcodeStr,
+// Long 2-register intrinsics (currently only used for VMOVL).
+class N2VLInt<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18,
+              bits<2> op17_16, bits<5> op11_7, bit op6, bit op4,
+              InstrItinClass itin, string OpcodeStr, string Dt,
               ValueType TyQ, ValueType TyD, Intrinsic IntOp>
-  : N2VImm<op24, op23, op21_16, op11_8, op7, op6, op4, (outs QPR:$dst),
-        (ins DPR:$src), itin, !strconcat(OpcodeStr, "\t$dst, $src"), "",
+  : N2V<op24_23, op21_20, op19_18, op17_16, op11_7, op6, op4, (outs QPR:$dst),
+        (ins DPR:$src), itin, OpcodeStr, Dt, "$dst, $src", "",
         [(set QPR:$dst, (TyQ (IntOp (TyD DPR:$src))))]>;
 
 // 2-register shuffles (VTRN/VZIP/VUZP), both double- and quad-register.
-class N2VDShuffle<bits<2> op19_18, bits<5> op11_7, string OpcodeStr>
+class N2VDShuffle<bits<2> op19_18, bits<5> op11_7, string OpcodeStr, string Dt>
   : N2V<0b11, 0b11, op19_18, 0b10, op11_7, 0, 0, (outs DPR:$dst1, DPR:$dst2),
         (ins DPR:$src1, DPR:$src2), IIC_VPERMD, 
-        !strconcat(OpcodeStr, "\t$dst1, $dst2"),
+        OpcodeStr, Dt, "$dst1, $dst2",
         "$src1 = $dst1, $src2 = $dst2", []>;
 class N2VQShuffle<bits<2> op19_18, bits<5> op11_7,
-                  InstrItinClass itin, string OpcodeStr>
+                  InstrItinClass itin, string OpcodeStr, string Dt>
   : N2V<0b11, 0b11, op19_18, 0b10, op11_7, 1, 0, (outs QPR:$dst1, QPR:$dst2),
         (ins QPR:$src1, QPR:$src2), itin, 
-        !strconcat(OpcodeStr, "\t$dst1, $dst2"),
+        OpcodeStr, Dt, "$dst1, $dst2",
         "$src1 = $dst1, $src2 = $dst2", []>;
 
 // Basic 3-register operations, both double- and quad-register.
 class N3VD<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
-           InstrItinClass itin, string OpcodeStr, ValueType ResTy, ValueType OpTy,
+           InstrItinClass itin, string OpcodeStr, string Dt,
+           ValueType ResTy, ValueType OpTy,
            SDNode OpNode, bit Commutable>
   : N3V<op24, op23, op21_20, op11_8, 0, op4,
         (outs DPR:$dst), (ins DPR:$src1, DPR:$src2), itin, 
-        !strconcat(OpcodeStr, "\t$dst, $src1, $src2"), "",
+        OpcodeStr, Dt, "$dst, $src1, $src2", "",
+        [(set DPR:$dst, (ResTy (OpNode (OpTy DPR:$src1), (OpTy DPR:$src2))))]> {
+  let isCommutable = Commutable;
+}
+// Same as N3VD but no data type.
+class N3VDX<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
+           InstrItinClass itin, string OpcodeStr,
+           ValueType ResTy, ValueType OpTy,
+           SDNode OpNode, bit Commutable>
+  : N3VX<op24, op23, op21_20, op11_8, 0, op4,
+        (outs DPR:$dst), (ins DPR:$src1, DPR:$src2), itin, 
+        OpcodeStr, "$dst, $src1, $src2", "",
         [(set DPR:$dst, (ResTy (OpNode (OpTy DPR:$src1), (OpTy DPR:$src2))))]> {
   let isCommutable = Commutable;
 }
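+// Note: N3VDX above (and N3VQX below) derive from N3VX and take no
+// data-type suffix; presumably these serve bitwise operations such as
+// vand/vorr/veor, whose mnemonics print without a "." qualifier.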
 class N3VDSL<bits<2> op21_20, bits<4> op11_8, 
-             InstrItinClass itin, string OpcodeStr, ValueType Ty, SDNode ShOp>
+             InstrItinClass itin, string OpcodeStr, string Dt,
+             ValueType Ty, SDNode ShOp>
   : N3V<0, 1, op21_20, op11_8, 1, 0,
         (outs DPR:$dst), (ins DPR:$src1, DPR_VFP2:$src2, nohash_imm:$lane),
-        itin, !strconcat(OpcodeStr, "\t$dst, $src1, $src2[$lane]"), "",
+        itin, OpcodeStr, Dt, "$dst, $src1, $src2[$lane]", "",
         [(set (Ty DPR:$dst),
               (Ty (ShOp (Ty DPR:$src1),
                         (Ty (NEONvduplane (Ty DPR_VFP2:$src2),
@@ -494,11 +779,11 @@ class N3VDSL<bits<2> op21_20, bits<4> op11_8,
   let isCommutable = 0;
 }
 class N3VDSL16<bits<2> op21_20, bits<4> op11_8, 
-               string OpcodeStr, ValueType Ty, SDNode ShOp>
+               string OpcodeStr, string Dt, ValueType Ty, SDNode ShOp>
   : N3V<0, 1, op21_20, op11_8, 1, 0,
         (outs DPR:$dst), (ins DPR:$src1, DPR_8:$src2, nohash_imm:$lane),
         IIC_VMULi16D,
-        !strconcat(OpcodeStr, "\t$dst, $src1, $src2[$lane]"), "",
+        OpcodeStr, Dt, "$dst, $src1, $src2[$lane]", "",
         [(set (Ty DPR:$dst),
               (Ty (ShOp (Ty DPR:$src1),
                         (Ty (NEONvduplane (Ty DPR_8:$src2),
@@ -507,20 +792,31 @@ class N3VDSL16<bits<2> op21_20, bits<4> op11_8,
 }
 
 class N3VQ<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
-           InstrItinClass itin, string OpcodeStr, ValueType ResTy, ValueType OpTy,
+           InstrItinClass itin, string OpcodeStr, string Dt,
+           ValueType ResTy, ValueType OpTy,
            SDNode OpNode, bit Commutable>
   : N3V<op24, op23, op21_20, op11_8, 1, op4,
         (outs QPR:$dst), (ins QPR:$src1, QPR:$src2), itin, 
-        !strconcat(OpcodeStr, "\t$dst, $src1, $src2"), "",
+        OpcodeStr, Dt, "$dst, $src1, $src2", "",
+        [(set QPR:$dst, (ResTy (OpNode (OpTy QPR:$src1), (OpTy QPR:$src2))))]> {
+  let isCommutable = Commutable;
+}
+class N3VQX<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
+           InstrItinClass itin, string OpcodeStr,
+           ValueType ResTy, ValueType OpTy,
+           SDNode OpNode, bit Commutable>
+  : N3VX<op24, op23, op21_20, op11_8, 1, op4,
+        (outs QPR:$dst), (ins QPR:$src1, QPR:$src2), itin, 
+        OpcodeStr, "$dst, $src1, $src2", "",
         [(set QPR:$dst, (ResTy (OpNode (OpTy QPR:$src1), (OpTy QPR:$src2))))]> {
   let isCommutable = Commutable;
 }
 class N3VQSL<bits<2> op21_20, bits<4> op11_8, 
-             InstrItinClass itin, string OpcodeStr, 
+             InstrItinClass itin, string OpcodeStr, string Dt,
              ValueType ResTy, ValueType OpTy, SDNode ShOp>
   : N3V<1, 1, op21_20, op11_8, 1, 0,
         (outs QPR:$dst), (ins QPR:$src1, DPR_VFP2:$src2, nohash_imm:$lane),
-        itin, !strconcat(OpcodeStr, "\t$dst, $src1, $src2[$lane]"), "",
+        itin, OpcodeStr, Dt, "$dst, $src1, $src2[$lane]", "",
         [(set (ResTy QPR:$dst),
               (ResTy (ShOp (ResTy QPR:$src1),
                            (ResTy (NEONvduplane (OpTy DPR_VFP2:$src2),
@@ -528,11 +824,12 @@ class N3VQSL<bits<2> op21_20, bits<4> op11_8,
   let isCommutable = 0;
 }
 class N3VQSL16<bits<2> op21_20, bits<4> op11_8, 
-               string OpcodeStr, ValueType ResTy, ValueType OpTy, SDNode ShOp>
+               string OpcodeStr, string Dt,
+               ValueType ResTy, ValueType OpTy, SDNode ShOp>
   : N3V<1, 1, op21_20, op11_8, 1, 0,
         (outs QPR:$dst), (ins QPR:$src1, DPR_8:$src2, nohash_imm:$lane),
         IIC_VMULi16Q,
-        !strconcat(OpcodeStr, "\t$dst, $src1, $src2[$lane]"), "",
+        OpcodeStr, Dt, "$dst, $src1, $src2[$lane]", "",
         [(set (ResTy QPR:$dst),
               (ResTy (ShOp (ResTy QPR:$src1),
                            (ResTy (NEONvduplane (OpTy DPR_8:$src2),
@@ -542,11 +839,11 @@ class N3VQSL16<bits<2> op21_20, bits<4> op11_8,
 
 // Basic 3-register operations, scalar single-precision
 class N3VDs<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
-           string OpcodeStr, ValueType ResTy, ValueType OpTy,
+           string OpcodeStr, string Dt, ValueType ResTy, ValueType OpTy,
            SDNode OpNode, bit Commutable>
   : N3V<op24, op23, op21_20, op11_8, 0, op4,
         (outs DPR_VFP2:$dst), (ins DPR_VFP2:$src1, DPR_VFP2:$src2), IIC_VBIND,
-        !strconcat(OpcodeStr, "\t$dst, $src1, $src2"), "", []> {
+        OpcodeStr, Dt, "$dst, $src1, $src2", "", []> {
   let isCommutable = Commutable;
 }
 class N3VDsPat<SDNode OpNode, NeonI Inst>
@@ -558,19 +855,20 @@ class N3VDsPat<SDNode OpNode, NeonI Inst>
 
 // Basic 3-register intrinsics, both double- and quad-register.
 class N3VDInt<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
-              InstrItinClass itin, string OpcodeStr, ValueType ResTy, ValueType OpTy,
+              InstrItinClass itin, string OpcodeStr, string Dt,
+              ValueType ResTy, ValueType OpTy,
               Intrinsic IntOp, bit Commutable>
   : N3V<op24, op23, op21_20, op11_8, 0, op4,
         (outs DPR:$dst), (ins DPR:$src1, DPR:$src2), itin, 
-        !strconcat(OpcodeStr, "\t$dst, $src1, $src2"), "",
+        OpcodeStr, Dt, "$dst, $src1, $src2", "",
         [(set DPR:$dst, (ResTy (IntOp (OpTy DPR:$src1), (OpTy DPR:$src2))))]> {
   let isCommutable = Commutable;
 }
 class N3VDIntSL<bits<2> op21_20, bits<4> op11_8, InstrItinClass itin, 
-                string OpcodeStr, ValueType Ty, Intrinsic IntOp>
+                string OpcodeStr, string Dt, ValueType Ty, Intrinsic IntOp>
   : N3V<0, 1, op21_20, op11_8, 1, 0,
         (outs DPR:$dst), (ins DPR:$src1, DPR_VFP2:$src2, nohash_imm:$lane),
-        itin, !strconcat(OpcodeStr, "\t$dst, $src1, $src2[$lane]"), "",
+        itin, OpcodeStr, Dt, "$dst, $src1, $src2[$lane]", "",
         [(set (Ty DPR:$dst),
               (Ty (IntOp (Ty DPR:$src1),
                          (Ty (NEONvduplane (Ty DPR_VFP2:$src2),
@@ -578,10 +876,10 @@ class N3VDIntSL<bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
   let isCommutable = 0;
 }
 class N3VDIntSL16<bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
-                  string OpcodeStr, ValueType Ty, Intrinsic IntOp>
+                  string OpcodeStr, string Dt, ValueType Ty, Intrinsic IntOp>
   : N3V<0, 1, op21_20, op11_8, 1, 0,
         (outs DPR:$dst), (ins DPR:$src1, DPR_8:$src2, nohash_imm:$lane),
-        itin, !strconcat(OpcodeStr, "\t$dst, $src1, $src2[$lane]"), "",
+        itin, OpcodeStr, Dt, "$dst, $src1, $src2[$lane]", "",
         [(set (Ty DPR:$dst),
               (Ty (IntOp (Ty DPR:$src1),
                          (Ty (NEONvduplane (Ty DPR_8:$src2),
@@ -590,19 +888,21 @@ class N3VDIntSL16<bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
 }
 
 class N3VQInt<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
-              InstrItinClass itin, string OpcodeStr, ValueType ResTy, ValueType OpTy,
+              InstrItinClass itin, string OpcodeStr, string Dt,
+              ValueType ResTy, ValueType OpTy,
               Intrinsic IntOp, bit Commutable>
   : N3V<op24, op23, op21_20, op11_8, 1, op4,
         (outs QPR:$dst), (ins QPR:$src1, QPR:$src2), itin, 
-        !strconcat(OpcodeStr, "\t$dst, $src1, $src2"), "",
+        OpcodeStr, Dt, "$dst, $src1, $src2", "",
         [(set QPR:$dst, (ResTy (IntOp (OpTy QPR:$src1), (OpTy QPR:$src2))))]> {
   let isCommutable = Commutable;
 }
 class N3VQIntSL<bits<2> op21_20, bits<4> op11_8, InstrItinClass itin, 
-                string OpcodeStr, ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
+                string OpcodeStr, string Dt,
+                ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
   : N3V<1, 1, op21_20, op11_8, 1, 0,
         (outs QPR:$dst), (ins QPR:$src1, DPR_VFP2:$src2, nohash_imm:$lane),
-        itin, !strconcat(OpcodeStr, "\t$dst, $src1, $src2[$lane]"), "",
+        itin, OpcodeStr, Dt, "$dst, $src1, $src2[$lane]", "",
         [(set (ResTy QPR:$dst),
               (ResTy (IntOp (ResTy QPR:$src1),
                             (ResTy (NEONvduplane (OpTy DPR_VFP2:$src2),
@@ -610,10 +910,11 @@ class N3VQIntSL<bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
   let isCommutable = 0;
 }
 class N3VQIntSL16<bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
-                  string OpcodeStr, ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
+                  string OpcodeStr, string Dt,
+                  ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
   : N3V<1, 1, op21_20, op11_8, 1, 0,
         (outs QPR:$dst), (ins QPR:$src1, DPR_8:$src2, nohash_imm:$lane),
-        itin, !strconcat(OpcodeStr, "\t$dst, $src1, $src2[$lane]"), "",
+        itin, OpcodeStr, Dt, "$dst, $src1, $src2[$lane]", "",
         [(set (ResTy QPR:$dst),
               (ResTy (IntOp (ResTy QPR:$src1),
                             (ResTy (NEONvduplane (OpTy DPR_8:$src2),
@@ -623,30 +924,32 @@ class N3VQIntSL16<bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
 
 // Multiply-Add/Sub operations, both double- and quad-register.
 class N3VDMulOp<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
-                InstrItinClass itin, string OpcodeStr, 
+                InstrItinClass itin, string OpcodeStr, string Dt,
                 ValueType Ty, SDNode MulOp, SDNode OpNode>
   : N3V<op24, op23, op21_20, op11_8, 0, op4,
         (outs DPR:$dst), (ins DPR:$src1, DPR:$src2, DPR:$src3), itin,
-        !strconcat(OpcodeStr, "\t$dst, $src2, $src3"), "$src1 = $dst",
+        OpcodeStr, Dt, "$dst, $src2, $src3", "$src1 = $dst",
         [(set DPR:$dst, (Ty (OpNode DPR:$src1,
                              (Ty (MulOp DPR:$src2, DPR:$src3)))))]>;
 class N3VDMulOpSL<bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
-                  string OpcodeStr, ValueType Ty, SDNode MulOp, SDNode ShOp>
+                  string OpcodeStr, string Dt,
+                  ValueType Ty, SDNode MulOp, SDNode ShOp>
   : N3V<0, 1, op21_20, op11_8, 1, 0,
         (outs DPR:$dst),
         (ins DPR:$src1, DPR:$src2, DPR_VFP2:$src3, nohash_imm:$lane), itin,
-        !strconcat(OpcodeStr, "\t$dst, $src2, $src3[$lane]"), "$src1 = $dst",
+        OpcodeStr, Dt, "$dst, $src2, $src3[$lane]", "$src1 = $dst",
         [(set (Ty DPR:$dst),
               (Ty (ShOp (Ty DPR:$src1),
                         (Ty (MulOp DPR:$src2,
                                    (Ty (NEONvduplane (Ty DPR_VFP2:$src3),
                                                      imm:$lane)))))))]>;
 class N3VDMulOpSL16<bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
-                    string OpcodeStr, ValueType Ty, SDNode MulOp, SDNode ShOp>
+                    string OpcodeStr, string Dt,
+                    ValueType Ty, SDNode MulOp, SDNode ShOp>
   : N3V<0, 1, op21_20, op11_8, 1, 0,
         (outs DPR:$dst),
         (ins DPR:$src1, DPR:$src2, DPR_8:$src3, nohash_imm:$lane), itin,
-        !strconcat(OpcodeStr, "\t$dst, $src2, $src3[$lane]"), "$src1 = $dst",
+        OpcodeStr, Dt, "$dst, $src2, $src3[$lane]", "$src1 = $dst",
         [(set (Ty DPR:$dst),
               (Ty (ShOp (Ty DPR:$src1),
                         (Ty (MulOp DPR:$src2,
@@ -654,32 +957,33 @@ class N3VDMulOpSL16<bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
                                                      imm:$lane)))))))]>;
 
 class N3VQMulOp<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
-                InstrItinClass itin, string OpcodeStr, ValueType Ty,
+                InstrItinClass itin, string OpcodeStr, string Dt, ValueType Ty,
                 SDNode MulOp, SDNode OpNode>
   : N3V<op24, op23, op21_20, op11_8, 1, op4,
         (outs QPR:$dst), (ins QPR:$src1, QPR:$src2, QPR:$src3), itin,
-        !strconcat(OpcodeStr, "\t$dst, $src2, $src3"), "$src1 = $dst",
+        OpcodeStr, Dt, "$dst, $src2, $src3", "$src1 = $dst",
         [(set QPR:$dst, (Ty (OpNode QPR:$src1,
                              (Ty (MulOp QPR:$src2, QPR:$src3)))))]>;
 class N3VQMulOpSL<bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
-                  string OpcodeStr, ValueType ResTy, ValueType OpTy,
+                  string OpcodeStr, string Dt, ValueType ResTy, ValueType OpTy,
                   SDNode MulOp, SDNode ShOp>
   : N3V<1, 1, op21_20, op11_8, 1, 0,
         (outs QPR:$dst),
         (ins QPR:$src1, QPR:$src2, DPR_VFP2:$src3, nohash_imm:$lane), itin,
-        !strconcat(OpcodeStr, "\t$dst, $src2, $src3[$lane]"), "$src1 = $dst",
+        OpcodeStr, Dt, "$dst, $src2, $src3[$lane]", "$src1 = $dst",
         [(set (ResTy QPR:$dst),
               (ResTy (ShOp (ResTy QPR:$src1),
                            (ResTy (MulOp QPR:$src2,
                                          (ResTy (NEONvduplane (OpTy DPR_VFP2:$src3),
                                                               imm:$lane)))))))]>;
 class N3VQMulOpSL16<bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
-                    string OpcodeStr, ValueType ResTy, ValueType OpTy,
+                    string OpcodeStr, string Dt,
+                    ValueType ResTy, ValueType OpTy,
                     SDNode MulOp, SDNode ShOp>
   : N3V<1, 1, op21_20, op11_8, 1, 0,
         (outs QPR:$dst),
         (ins QPR:$src1, QPR:$src2, DPR_8:$src3, nohash_imm:$lane), itin,
-        !strconcat(OpcodeStr, "\t$dst, $src2, $src3[$lane]"), "$src1 = $dst",
+        OpcodeStr, Dt, "$dst, $src2, $src3[$lane]", "$src1 = $dst",
         [(set (ResTy QPR:$dst),
               (ResTy (ShOp (ResTy QPR:$src1),
                            (ResTy (MulOp QPR:$src2,
@@ -688,12 +992,12 @@ class N3VQMulOpSL16<bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
 
 // Multiply-Add/Sub operations, scalar single-precision
 class N3VDMulOps<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
-                 InstrItinClass itin, string OpcodeStr,
+                 InstrItinClass itin, string OpcodeStr, string Dt,
                  ValueType Ty, SDNode MulOp, SDNode OpNode>
   : N3V<op24, op23, op21_20, op11_8, 0, op4,
         (outs DPR_VFP2:$dst),
         (ins DPR_VFP2:$src1, DPR_VFP2:$src2, DPR_VFP2:$src3), itin,
-        !strconcat(OpcodeStr, "\t$dst, $src2, $src3"), "$src1 = $dst", []>;
+        OpcodeStr, Dt, "$dst, $src2, $src3", "$src1 = $dst", []>;
 
 class N3VDMulOpsPat<SDNode MulNode, SDNode OpNode, NeonI Inst>
   : NEONFPPat<(f32 (OpNode SPR:$acc, (f32 (MulNode SPR:$a, SPR:$b)))),
@@ -706,50 +1010,51 @@ class N3VDMulOpsPat<SDNode MulNode, SDNode OpNode, NeonI Inst>
 // Neon 3-argument intrinsics, both double- and quad-register.
 // The destination register is also used as the first source operand register.
 class N3VDInt3<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
-               InstrItinClass itin, string OpcodeStr,
+               InstrItinClass itin, string OpcodeStr, string Dt,
                ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
   : N3V<op24, op23, op21_20, op11_8, 0, op4,
         (outs DPR:$dst), (ins DPR:$src1, DPR:$src2, DPR:$src3), itin,
-        !strconcat(OpcodeStr, "\t$dst, $src2, $src3"), "$src1 = $dst",
+        OpcodeStr, Dt, "$dst, $src2, $src3", "$src1 = $dst",
         [(set DPR:$dst, (ResTy (IntOp (OpTy DPR:$src1),
                                       (OpTy DPR:$src2), (OpTy DPR:$src3))))]>;
 class N3VQInt3<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
-               InstrItinClass itin, string OpcodeStr,
+               InstrItinClass itin, string OpcodeStr, string Dt,
                ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
   : N3V<op24, op23, op21_20, op11_8, 1, op4,
         (outs QPR:$dst), (ins QPR:$src1, QPR:$src2, QPR:$src3), itin,
-        !strconcat(OpcodeStr, "\t$dst, $src2, $src3"), "$src1 = $dst",
+        OpcodeStr, Dt, "$dst, $src2, $src3", "$src1 = $dst",
         [(set QPR:$dst, (ResTy (IntOp (OpTy QPR:$src1),
                                       (OpTy QPR:$src2), (OpTy QPR:$src3))))]>;
 
 // Neon Long 3-argument intrinsic.  The destination register is
 // a quad-register and is also used as the first source operand register.
 class N3VLInt3<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
-               InstrItinClass itin, string OpcodeStr,
+               InstrItinClass itin, string OpcodeStr, string Dt,
                ValueType TyQ, ValueType TyD, Intrinsic IntOp>
   : N3V<op24, op23, op21_20, op11_8, 0, op4,
         (outs QPR:$dst), (ins QPR:$src1, DPR:$src2, DPR:$src3), itin,
-        !strconcat(OpcodeStr, "\t$dst, $src2, $src3"), "$src1 = $dst",
+        OpcodeStr, Dt, "$dst, $src2, $src3", "$src1 = $dst",
         [(set QPR:$dst,
           (TyQ (IntOp (TyQ QPR:$src1), (TyD DPR:$src2), (TyD DPR:$src3))))]>;
 class N3VLInt3SL<bit op24, bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
-                 string OpcodeStr, ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
+                 string OpcodeStr, string Dt,
+                 ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
   : N3V<op24, 1, op21_20, op11_8, 1, 0,
         (outs QPR:$dst),
         (ins QPR:$src1, DPR:$src2, DPR_VFP2:$src3, nohash_imm:$lane), itin,
-        !strconcat(OpcodeStr, "\t$dst, $src2, $src3[$lane]"), "$src1 = $dst",
+        OpcodeStr, Dt, "$dst, $src2, $src3[$lane]", "$src1 = $dst",
         [(set (ResTy QPR:$dst),
               (ResTy (IntOp (ResTy QPR:$src1),
                             (OpTy DPR:$src2),
                             (OpTy (NEONvduplane (OpTy DPR_VFP2:$src3),
                                                 imm:$lane)))))]>;
 class N3VLInt3SL16<bit op24, bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
-                   string OpcodeStr, ValueType ResTy, ValueType OpTy,
+                   string OpcodeStr, string Dt, ValueType ResTy, ValueType OpTy,
                    Intrinsic IntOp>
   : N3V<op24, 1, op21_20, op11_8, 1, 0,
         (outs QPR:$dst),
         (ins QPR:$src1, DPR:$src2, DPR_8:$src3, nohash_imm:$lane), itin,
-        !strconcat(OpcodeStr, "\t$dst, $src2, $src3[$lane]"), "$src1 = $dst",
+        OpcodeStr, Dt, "$dst, $src2, $src3[$lane]", "$src1 = $dst",
         [(set (ResTy QPR:$dst),
               (ResTy (IntOp (ResTy QPR:$src1),
                             (OpTy DPR:$src2),
@@ -759,40 +1064,41 @@ class N3VLInt3SL16<bit op24, bits<2> op21_20, bits<4> op11_8, InstrItinClass iti
 
 // Narrowing 3-register intrinsics.
 class N3VNInt<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
-              string OpcodeStr, ValueType TyD, ValueType TyQ,
+              string OpcodeStr, string Dt, ValueType TyD, ValueType TyQ,
               Intrinsic IntOp, bit Commutable>
   : N3V<op24, op23, op21_20, op11_8, 0, op4,
         (outs DPR:$dst), (ins QPR:$src1, QPR:$src2), IIC_VBINi4D,
-        !strconcat(OpcodeStr, "\t$dst, $src1, $src2"), "",
+        OpcodeStr, Dt, "$dst, $src1, $src2", "",
         [(set DPR:$dst, (TyD (IntOp (TyQ QPR:$src1), (TyQ QPR:$src2))))]> {
   let isCommutable = Commutable;
 }
 
 // Long 3-register intrinsics.
 class N3VLInt<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
-              InstrItinClass itin, string OpcodeStr, ValueType TyQ, ValueType TyD,
-              Intrinsic IntOp, bit Commutable>
+              InstrItinClass itin, string OpcodeStr, string Dt,
+              ValueType TyQ, ValueType TyD, Intrinsic IntOp, bit Commutable>
   : N3V<op24, op23, op21_20, op11_8, 0, op4,
         (outs QPR:$dst), (ins DPR:$src1, DPR:$src2), itin,
-        !strconcat(OpcodeStr, "\t$dst, $src1, $src2"), "",
+        OpcodeStr, Dt, "$dst, $src1, $src2", "",
         [(set QPR:$dst, (TyQ (IntOp (TyD DPR:$src1), (TyD DPR:$src2))))]> {
   let isCommutable = Commutable;
 }
 class N3VLIntSL<bit op24, bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
-                string OpcodeStr, ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
+                string OpcodeStr, string Dt,
+                ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
   : N3V<op24, 1, op21_20, op11_8, 1, 0,
         (outs QPR:$dst), (ins DPR:$src1, DPR_VFP2:$src2, nohash_imm:$lane), 
-        itin, !strconcat(OpcodeStr, "\t$dst, $src1, $src2[$lane]"), "",
+        itin, OpcodeStr, Dt, "$dst, $src1, $src2[$lane]", "",
         [(set (ResTy QPR:$dst),
               (ResTy (IntOp (OpTy DPR:$src1),
                             (OpTy (NEONvduplane (OpTy DPR_VFP2:$src2),
                                                 imm:$lane)))))]>;
 class N3VLIntSL16<bit op24, bits<2> op21_20, bits<4> op11_8, InstrItinClass itin,
-                  string OpcodeStr, ValueType ResTy, ValueType OpTy, 
+                  string OpcodeStr, string Dt, ValueType ResTy, ValueType OpTy, 
                   Intrinsic IntOp>
   : N3V<op24, 1, op21_20, op11_8, 1, 0,
         (outs QPR:$dst), (ins DPR:$src1, DPR_8:$src2, nohash_imm:$lane), 
-        itin, !strconcat(OpcodeStr, "\t$dst, $src1, $src2[$lane]"), "",
+        itin, OpcodeStr, Dt, "$dst, $src1, $src2[$lane]", "",
         [(set (ResTy QPR:$dst),
               (ResTy (IntOp (OpTy DPR:$src1),
                             (OpTy (NEONvduplane (OpTy DPR_8:$src2),
@@ -800,182 +1106,202 @@ class N3VLIntSL16<bit op24, bits<2> op21_20, bits<4> op11_8, InstrItinClass itin
 
 // Wide 3-register intrinsics.
 class N3VWInt<bit op24, bit op23, bits<2> op21_20, bits<4> op11_8, bit op4,
-              string OpcodeStr, ValueType TyQ, ValueType TyD,
+              string OpcodeStr, string Dt, ValueType TyQ, ValueType TyD,
               Intrinsic IntOp, bit Commutable>
   : N3V<op24, op23, op21_20, op11_8, 0, op4,
         (outs QPR:$dst), (ins QPR:$src1, DPR:$src2), IIC_VSUBiD,
-        !strconcat(OpcodeStr, "\t$dst, $src1, $src2"), "",
+        OpcodeStr, Dt, "$dst, $src1, $src2", "",
         [(set QPR:$dst, (TyQ (IntOp (TyQ QPR:$src1), (TyD DPR:$src2))))]> {
   let isCommutable = Commutable;
 }
 
 // Pairwise long 2-register intrinsics, both double- and quad-register.
 class N2VDPLInt<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18,
-                bits<2> op17_16, bits<5> op11_7, bit op4, string OpcodeStr,
+                bits<2> op17_16, bits<5> op11_7, bit op4,
+                string OpcodeStr, string Dt,
                 ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
   : N2V<op24_23, op21_20, op19_18, op17_16, op11_7, 0, op4, (outs DPR:$dst),
-        (ins DPR:$src), IIC_VSHLiD, !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        (ins DPR:$src), IIC_VSHLiD, OpcodeStr, Dt, "$dst, $src", "",
         [(set DPR:$dst, (ResTy (IntOp (OpTy DPR:$src))))]>;
 class N2VQPLInt<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18,
-                bits<2> op17_16, bits<5> op11_7, bit op4, string OpcodeStr,
+                bits<2> op17_16, bits<5> op11_7, bit op4,
+                string OpcodeStr, string Dt,
                 ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
   : N2V<op24_23, op21_20, op19_18, op17_16, op11_7, 1, op4, (outs QPR:$dst),
-        (ins QPR:$src), IIC_VSHLiD, !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        (ins QPR:$src), IIC_VSHLiD, OpcodeStr, Dt, "$dst, $src", "",
         [(set QPR:$dst, (ResTy (IntOp (OpTy QPR:$src))))]>;
 
 // Pairwise long 2-register accumulate intrinsics,
 // both double- and quad-register.
 // The destination register is also used as the first source operand register.
 class N2VDPLInt2<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18,
-                 bits<2> op17_16, bits<5> op11_7, bit op4, string OpcodeStr,
+                 bits<2> op17_16, bits<5> op11_7, bit op4,
+                 string OpcodeStr, string Dt,
                  ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
   : N2V<op24_23, op21_20, op19_18, op17_16, op11_7, 0, op4,
         (outs DPR:$dst), (ins DPR:$src1, DPR:$src2), IIC_VPALiD,
-        !strconcat(OpcodeStr, "\t$dst, $src2"), "$src1 = $dst",
+        OpcodeStr, Dt, "$dst, $src2", "$src1 = $dst",
         [(set DPR:$dst, (ResTy (IntOp (ResTy DPR:$src1), (OpTy DPR:$src2))))]>;
 class N2VQPLInt2<bits<2> op24_23, bits<2> op21_20, bits<2> op19_18,
-                 bits<2> op17_16, bits<5> op11_7, bit op4, string OpcodeStr,
+                 bits<2> op17_16, bits<5> op11_7, bit op4,
+                 string OpcodeStr, string Dt,
                  ValueType ResTy, ValueType OpTy, Intrinsic IntOp>
   : N2V<op24_23, op21_20, op19_18, op17_16, op11_7, 1, op4,
         (outs QPR:$dst), (ins QPR:$src1, QPR:$src2), IIC_VPALiQ,
-        !strconcat(OpcodeStr, "\t$dst, $src2"), "$src1 = $dst",
+        OpcodeStr, Dt, "$dst, $src2", "$src1 = $dst",
         [(set QPR:$dst, (ResTy (IntOp (ResTy QPR:$src1), (OpTy QPR:$src2))))]>;
 
 // Shift by immediate,
 // both double- and quad-register.
-class N2VDSh<bit op24, bit op23, bits<6> op21_16, bits<4> op11_8, bit op7,
-             bit op4, InstrItinClass itin, string OpcodeStr,
+class N2VDSh<bit op24, bit op23, bits<4> op11_8, bit op7, bit op4,
+             InstrItinClass itin, string OpcodeStr, string Dt,
              ValueType Ty, SDNode OpNode>
-  : N2VImm<op24, op23, op21_16, op11_8, op7, 0, op4,
+  : N2VImm<op24, op23, op11_8, op7, 0, op4,
            (outs DPR:$dst), (ins DPR:$src, i32imm:$SIMM), itin,
-           !strconcat(OpcodeStr, "\t$dst, $src, $SIMM"), "",
+           OpcodeStr, Dt, "$dst, $src, $SIMM", "",
            [(set DPR:$dst, (Ty (OpNode (Ty DPR:$src), (i32 imm:$SIMM))))]>;
-class N2VQSh<bit op24, bit op23, bits<6> op21_16, bits<4> op11_8, bit op7,
-             bit op4, InstrItinClass itin, string OpcodeStr,
+class N2VQSh<bit op24, bit op23, bits<4> op11_8, bit op7, bit op4,
+             InstrItinClass itin, string OpcodeStr, string Dt,
              ValueType Ty, SDNode OpNode>
-  : N2VImm<op24, op23, op21_16, op11_8, op7, 1, op4,
+  : N2VImm<op24, op23, op11_8, op7, 1, op4,
            (outs QPR:$dst), (ins QPR:$src, i32imm:$SIMM), itin,
-           !strconcat(OpcodeStr, "\t$dst, $src, $SIMM"), "",
+           OpcodeStr, Dt, "$dst, $src, $SIMM", "",
            [(set QPR:$dst, (Ty (OpNode (Ty QPR:$src), (i32 imm:$SIMM))))]>;
 
 // Long shift by immediate.
-class N2VLSh<bit op24, bit op23, bits<6> op21_16, bits<4> op11_8, bit op7,
-             bit op6, bit op4, string OpcodeStr, ValueType ResTy,
-             ValueType OpTy, SDNode OpNode>
-  : N2VImm<op24, op23, op21_16, op11_8, op7, op6, op4,
+class N2VLSh<bit op24, bit op23, bits<4> op11_8, bit op7, bit op6, bit op4,
+             string OpcodeStr, string Dt,
+             ValueType ResTy, ValueType OpTy, SDNode OpNode>
+  : N2VImm<op24, op23, op11_8, op7, op6, op4,
            (outs QPR:$dst), (ins DPR:$src, i32imm:$SIMM), IIC_VSHLiD,
-           !strconcat(OpcodeStr, "\t$dst, $src, $SIMM"), "",
+           OpcodeStr, Dt, "$dst, $src, $SIMM", "",
            [(set QPR:$dst, (ResTy (OpNode (OpTy DPR:$src),
                                           (i32 imm:$SIMM))))]>;
 
 // Narrow shift by immediate.
-class N2VNSh<bit op24, bit op23, bits<6> op21_16, bits<4> op11_8, bit op7,
-             bit op6, bit op4, InstrItinClass itin, string OpcodeStr,
+class N2VNSh<bit op24, bit op23, bits<4> op11_8, bit op7, bit op6, bit op4,
+             InstrItinClass itin, string OpcodeStr, string Dt,
              ValueType ResTy, ValueType OpTy, SDNode OpNode>
-  : N2VImm<op24, op23, op21_16, op11_8, op7, op6, op4,
+  : N2VImm<op24, op23, op11_8, op7, op6, op4,
            (outs DPR:$dst), (ins QPR:$src, i32imm:$SIMM), itin,
-           !strconcat(OpcodeStr, "\t$dst, $src, $SIMM"), "",
+           OpcodeStr, Dt, "$dst, $src, $SIMM", "",
            [(set DPR:$dst, (ResTy (OpNode (OpTy QPR:$src),
                                           (i32 imm:$SIMM))))]>;
 
 // Shift right by immediate and accumulate,
 // both double- and quad-register.
-class N2VDShAdd<bit op24, bit op23, bits<6> op21_16, bits<4> op11_8, bit op7,
-                bit op4, string OpcodeStr, ValueType Ty, SDNode ShOp>
-  : N2VImm<op24, op23, op21_16, op11_8, op7, 0, op4,
-           (outs DPR:$dst), (ins DPR:$src1, DPR:$src2, i32imm:$SIMM),
-           IIC_VPALiD, 
-           !strconcat(OpcodeStr, "\t$dst, $src2, $SIMM"), "$src1 = $dst",
+class N2VDShAdd<bit op24, bit op23, bits<4> op11_8, bit op7, bit op4,
+                string OpcodeStr, string Dt, ValueType Ty, SDNode ShOp>
+  : N2VImm<op24, op23, op11_8, op7, 0, op4, (outs DPR:$dst),
+           (ins DPR:$src1, DPR:$src2, i32imm:$SIMM), IIC_VPALiD, 
+           OpcodeStr, Dt, "$dst, $src2, $SIMM", "$src1 = $dst",
            [(set DPR:$dst, (Ty (add DPR:$src1,
                                 (Ty (ShOp DPR:$src2, (i32 imm:$SIMM))))))]>;
-class N2VQShAdd<bit op24, bit op23, bits<6> op21_16, bits<4> op11_8, bit op7,
-                bit op4, string OpcodeStr, ValueType Ty, SDNode ShOp>
-  : N2VImm<op24, op23, op21_16, op11_8, op7, 1, op4,
-           (outs QPR:$dst), (ins QPR:$src1, QPR:$src2, i32imm:$SIMM),
-           IIC_VPALiD, 
-           !strconcat(OpcodeStr, "\t$dst, $src2, $SIMM"), "$src1 = $dst",
+class N2VQShAdd<bit op24, bit op23, bits<4> op11_8, bit op7, bit op4,
+                string OpcodeStr, string Dt, ValueType Ty, SDNode ShOp>
+  : N2VImm<op24, op23, op11_8, op7, 1, op4, (outs QPR:$dst),
+           (ins QPR:$src1, QPR:$src2, i32imm:$SIMM), IIC_VPALiD, 
+           OpcodeStr, Dt, "$dst, $src2, $SIMM", "$src1 = $dst",
            [(set QPR:$dst, (Ty (add QPR:$src1,
                                 (Ty (ShOp QPR:$src2, (i32 imm:$SIMM))))))]>;
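+
+// For example, a shift-right-and-accumulate such as "vsra.s8 d0, d1, #8"
+// computes $dst = $src1 + ($src2 >> $SIMM); that is the (add ..., (ShOp ...))
+// pattern above, with ShOp a shift node such as NEONvshrs or NEONvshru.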
 
 // Shift by immediate and insert,
 // both double- and quad-register.
-class N2VDShIns<bit op24, bit op23, bits<6> op21_16, bits<4> op11_8, bit op7,
-                bit op4, string OpcodeStr, ValueType Ty, SDNode ShOp>
-  : N2VImm<op24, op23, op21_16, op11_8, op7, 0, op4,
-           (outs DPR:$dst), (ins DPR:$src1, DPR:$src2, i32imm:$SIMM),
-           IIC_VSHLiD, 
-           !strconcat(OpcodeStr, "\t$dst, $src2, $SIMM"), "$src1 = $dst",
+class N2VDShIns<bit op24, bit op23, bits<4> op11_8, bit op7, bit op4,
+                string OpcodeStr, string Dt, ValueType Ty, SDNode ShOp>
+  : N2VImm<op24, op23, op11_8, op7, 0, op4, (outs DPR:$dst),
+           (ins DPR:$src1, DPR:$src2, i32imm:$SIMM), IIC_VSHLiD, 
+           OpcodeStr, Dt, "$dst, $src2, $SIMM", "$src1 = $dst",
            [(set DPR:$dst, (Ty (ShOp DPR:$src1, DPR:$src2, (i32 imm:$SIMM))))]>;
-class N2VQShIns<bit op24, bit op23, bits<6> op21_16, bits<4> op11_8, bit op7,
-                bit op4, string OpcodeStr, ValueType Ty, SDNode ShOp>
-  : N2VImm<op24, op23, op21_16, op11_8, op7, 1, op4,
-           (outs QPR:$dst), (ins QPR:$src1, QPR:$src2, i32imm:$SIMM),
-           IIC_VSHLiQ, 
-           !strconcat(OpcodeStr, "\t$dst, $src2, $SIMM"), "$src1 = $dst",
+class N2VQShIns<bit op24, bit op23, bits<4> op11_8, bit op7, bit op4,
+                string OpcodeStr, string Dt, ValueType Ty, SDNode ShOp>
+  : N2VImm<op24, op23, op11_8, op7, 1, op4, (outs QPR:$dst),
+           (ins QPR:$src1, QPR:$src2, i32imm:$SIMM), IIC_VSHLiQ, 
+           OpcodeStr, Dt, "$dst, $src2, $SIMM", "$src1 = $dst",
            [(set QPR:$dst, (Ty (ShOp QPR:$src1, QPR:$src2, (i32 imm:$SIMM))))]>;
 
 // Convert, with fractional bits immediate,
 // both double- and quad-register.
-class N2VCvtD<bit op24, bit op23, bits<6> op21_16, bits<4> op11_8, bit op7,
-              bit op4, string OpcodeStr, ValueType ResTy, ValueType OpTy,
+class N2VCvtD<bit op24, bit op23, bits<4> op11_8, bit op7, bit op4,
+              string OpcodeStr, string Dt, ValueType ResTy, ValueType OpTy,
               Intrinsic IntOp>
-  : N2VImm<op24, op23, op21_16, op11_8, op7, 0, op4,
+  : N2VImm<op24, op23, op11_8, op7, 0, op4,
            (outs DPR:$dst), (ins DPR:$src, i32imm:$SIMM), IIC_VUNAD, 
-           !strconcat(OpcodeStr, "\t$dst, $src, $SIMM"), "",
+           OpcodeStr, Dt, "$dst, $src, $SIMM", "",
            [(set DPR:$dst, (ResTy (IntOp (OpTy DPR:$src), (i32 imm:$SIMM))))]>;
-class N2VCvtQ<bit op24, bit op23, bits<6> op21_16, bits<4> op11_8, bit op7,
-              bit op4, string OpcodeStr, ValueType ResTy, ValueType OpTy,
+class N2VCvtQ<bit op24, bit op23, bits<4> op11_8, bit op7, bit op4,
+              string OpcodeStr, string Dt, ValueType ResTy, ValueType OpTy,
               Intrinsic IntOp>
-  : N2VImm<op24, op23, op21_16, op11_8, op7, 1, op4,
+  : N2VImm<op24, op23, op11_8, op7, 1, op4,
            (outs QPR:$dst), (ins QPR:$src, i32imm:$SIMM), IIC_VUNAQ, 
-           !strconcat(OpcodeStr, "\t$dst, $src, $SIMM"), "",
+           OpcodeStr, Dt, "$dst, $src, $SIMM", "",
            [(set QPR:$dst, (ResTy (IntOp (OpTy QPR:$src), (i32 imm:$SIMM))))]>;
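+
+// For example, "vcvt.s32.f32 d0, d0, #16" converts to fixed point with 16
+// fractional bits; $SIMM is that fractional-bit count, matched as the i32
+// immediate in the patterns above.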
 
 //===----------------------------------------------------------------------===//
 // Multiclasses
 //===----------------------------------------------------------------------===//
 
+// Abbreviations used in multiclass suffixes:
+//   Q = quarter int (8 bit) elements
+//   H = half int (16 bit) elements
+//   S = single int (32 bit) elements
+//   D = double int (64 bit) elements
+
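+// For illustration, a hypothetical user of the multiclasses below, e.g.
+//   defm VADD : N3V_QHSD<0, 0, 0b1000, 0, IIC_VBINiD, IIC_VBINiQ,
+//                        "vadd", "i", add, 1>;
+// would expand to VADDv8i8 ... VADDv2i64, with Dt suffixed per element size
+// so the instructions print as "vadd.i8" through "vadd.i64".
+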
 // Neon 3-register vector operations.
 
 // First with only element sizes of 8, 16 and 32 bits:
 multiclass N3V_QHS<bit op24, bit op23, bits<4> op11_8, bit op4,
                    InstrItinClass itinD16, InstrItinClass itinD32,
                    InstrItinClass itinQ16, InstrItinClass itinQ32,
-                   string OpcodeStr, SDNode OpNode, bit Commutable = 0> {
+                   string OpcodeStr, string Dt,
+                   SDNode OpNode, bit Commutable = 0> {
   // 64-bit vector types.
   def v8i8  : N3VD<op24, op23, 0b00, op11_8, op4, itinD16, 
-                   !strconcat(OpcodeStr, "8"), v8i8, v8i8, OpNode, Commutable>;
+                   OpcodeStr, !strconcat(Dt, "8"),
+                   v8i8, v8i8, OpNode, Commutable>;
   def v4i16 : N3VD<op24, op23, 0b01, op11_8, op4, itinD16,
-                   !strconcat(OpcodeStr, "16"), v4i16, v4i16, OpNode, Commutable>;
+                 OpcodeStr, !strconcat(Dt, "16"),
+                 v4i16, v4i16, OpNode, Commutable>;
   def v2i32 : N3VD<op24, op23, 0b10, op11_8, op4, itinD32,
-                   !strconcat(OpcodeStr, "32"), v2i32, v2i32, OpNode, Commutable>;
+                 OpcodeStr, !strconcat(Dt, "32"),
+                 v2i32, v2i32, OpNode, Commutable>;
 
   // 128-bit vector types.
   def v16i8 : N3VQ<op24, op23, 0b00, op11_8, op4, itinQ16,
-                   !strconcat(OpcodeStr, "8"), v16i8, v16i8, OpNode, Commutable>;
+                  OpcodeStr, !strconcat(Dt, "8"),
+                  v16i8, v16i8, OpNode, Commutable>;
   def v8i16 : N3VQ<op24, op23, 0b01, op11_8, op4, itinQ16,
-                   !strconcat(OpcodeStr, "16"), v8i16, v8i16, OpNode, Commutable>;
+                 OpcodeStr, !strconcat(Dt, "16"),
+                 v8i16, v8i16, OpNode, Commutable>;
   def v4i32 : N3VQ<op24, op23, 0b10, op11_8, op4, itinQ32,
-                   !strconcat(OpcodeStr, "32"), v4i32, v4i32, OpNode, Commutable>;
+                 OpcodeStr, !strconcat(Dt, "32"),
+                 v4i32, v4i32, OpNode, Commutable>;
 }
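
A rough sketch of how these suffixes and the new Dt operand compose (the
opcode below is hypothetical, not part of this patch):

    defm VFOO : N3V_QHS<0, 0, 0b1000, 0, IIC_VBINiD, IIC_VBINiD,
                        IIC_VBINiQ, IIC_VBINiQ, "vfoo", "i", add, 1>;
    // expands to VFOOv8i8, VFOOv4i16, VFOOv2i32, VFOOv16i8, VFOOv8i16
    // and VFOOv4i32, whose mnemonics "vfoo.i8" .. "vfoo.i32" are now
    // built from OpcodeStr plus the Dt size suffix instead of a single
    // !strconcat'd string.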
 
-multiclass N3VSL_HS<bits<4> op11_8, string OpcodeStr, SDNode ShOp> {
-  def v4i16 : N3VDSL16<0b01, op11_8, !strconcat(OpcodeStr, "16"), v4i16, ShOp>;
-  def v2i32 : N3VDSL<0b10, op11_8, IIC_VMULi32D, !strconcat(OpcodeStr, "32"), v2i32, ShOp>;
-  def v8i16 : N3VQSL16<0b01, op11_8, !strconcat(OpcodeStr, "16"), v8i16, v4i16, ShOp>;
-  def v4i32 : N3VQSL<0b10, op11_8, IIC_VMULi32Q, !strconcat(OpcodeStr, "32"), v4i32, v2i32, ShOp>;
+multiclass N3VSL_HS<bits<4> op11_8, string OpcodeStr, string Dt, SDNode ShOp> {
+  def v4i16 : N3VDSL16<0b01, op11_8, OpcodeStr, !strconcat(Dt, "16"),
+                       v4i16, ShOp>;
+  def v2i32 : N3VDSL<0b10, op11_8, IIC_VMULi32D, OpcodeStr, !strconcat(Dt,"32"),
+                     v2i32, ShOp>;
+  def v8i16 : N3VQSL16<0b01, op11_8, OpcodeStr, !strconcat(Dt, "16"),
+                       v8i16, v4i16, ShOp>;
+  def v4i32 : N3VQSL<0b10, op11_8, IIC_VMULi32Q, OpcodeStr, !strconcat(Dt,"32"),
+                     v4i32, v2i32, ShOp>;
 }
 
 // ....then also with element size of 64 bits:
 multiclass N3V_QHSD<bit op24, bit op23, bits<4> op11_8, bit op4,
                     InstrItinClass itinD, InstrItinClass itinQ,
-                    string OpcodeStr, SDNode OpNode, bit Commutable = 0>
+                    string OpcodeStr, string Dt,
+                    SDNode OpNode, bit Commutable = 0>
   : N3V_QHS<op24, op23, op11_8, op4, itinD, itinD, itinQ, itinQ,
-            OpcodeStr, OpNode, Commutable> {
+            OpcodeStr, Dt, OpNode, Commutable> {
   def v1i64 : N3VD<op24, op23, 0b11, op11_8, op4, itinD,
-                   !strconcat(OpcodeStr, "64"), v1i64, v1i64, OpNode, Commutable>;
+                   OpcodeStr, !strconcat(Dt, "64"),
+                   v1i64, v1i64, OpNode, Commutable>;
   def v2i64 : N3VQ<op24, op23, 0b11, op11_8, op4, itinQ,
-                   !strconcat(OpcodeStr, "64"), v2i64, v2i64, OpNode, Commutable>;
+                   OpcodeStr, !strconcat(Dt, "64"),
+                   v2i64, v2i64, OpNode, Commutable>;
 }
 
 
@@ -983,27 +1309,30 @@ multiclass N3V_QHSD<bit op24, bit op23, bits<4> op11_8, bit op4,
 //   source operand element sizes of 16, 32 and 64 bits:
 multiclass N2VNInt_HSD<bits<2> op24_23, bits<2> op21_20, bits<2> op17_16,
                        bits<5> op11_7, bit op6, bit op4, 
-                       InstrItinClass itin, string OpcodeStr,
+                       InstrItinClass itin, string OpcodeStr, string Dt,
                        Intrinsic IntOp> {
   def v8i8  : N2VNInt<op24_23, op21_20, 0b00, op17_16, op11_7, op6, op4,
-                      itin, !strconcat(OpcodeStr, "16"), v8i8, v8i16, IntOp>;
+                      itin, OpcodeStr, !strconcat(Dt, "16"),
+                      v8i8, v8i16, IntOp>;
   def v4i16 : N2VNInt<op24_23, op21_20, 0b01, op17_16, op11_7, op6, op4,
-                      itin, !strconcat(OpcodeStr, "32"), v4i16, v4i32, IntOp>;
+                      itin, OpcodeStr, !strconcat(Dt, "32"),
+                      v4i16, v4i32, IntOp>;
   def v2i32 : N2VNInt<op24_23, op21_20, 0b10, op17_16, op11_7, op6, op4,
-                      itin, !strconcat(OpcodeStr, "64"), v2i32, v2i64, IntOp>;
+                      itin, OpcodeStr, !strconcat(Dt, "64"),
+                      v2i32, v2i64, IntOp>;
 }
 
 
 // Neon Lengthening 2-register vector intrinsic (currently specific to VMOVL).
 //   source operand element sizes of 8, 16 and 32 bits:
-multiclass N2VLInt_QHS<bit op24, bit op23, bits<4> op11_8, bit op7, bit op6,
-                       bit op4, string OpcodeStr, Intrinsic IntOp> {
-  def v8i16 : N2VLInt<op24, op23, 0b001000, op11_8, op7, op6, op4,
-                      IIC_VQUNAiD, !strconcat(OpcodeStr, "8"), v8i16, v8i8, IntOp>;
-  def v4i32 : N2VLInt<op24, op23, 0b010000, op11_8, op7, op6, op4,
-                      IIC_VQUNAiD, !strconcat(OpcodeStr, "16"), v4i32, v4i16, IntOp>;
-  def v2i64 : N2VLInt<op24, op23, 0b100000, op11_8, op7, op6, op4,
-                      IIC_VQUNAiD, !strconcat(OpcodeStr, "32"), v2i64, v2i32, IntOp>;
+multiclass N2VLInt_QHS<bits<2> op24_23, bits<5> op11_7, bit op6, bit op4,
+                       string OpcodeStr, string Dt, Intrinsic IntOp> {
+  def v8i16 : N2VLInt<op24_23, 0b00, 0b10, 0b00, op11_7, op6, op4, IIC_VQUNAiD,
+                      OpcodeStr, !strconcat(Dt, "8"), v8i16, v8i8, IntOp>;
+  def v4i32 : N2VLInt<op24_23, 0b01, 0b00, 0b00, op11_7, op6, op4, IIC_VQUNAiD,
+                      OpcodeStr, !strconcat(Dt, "16"), v4i32, v4i16, IntOp>;
+  def v2i64 : N2VLInt<op24_23, 0b10, 0b00, 0b00, op11_7, op6, op4, IIC_VQUNAiD,
+                      OpcodeStr, !strconcat(Dt, "32"), v2i64, v2i32, IntOp>;
 }
 
 
@@ -1013,66 +1342,85 @@ multiclass N2VLInt_QHS<bit op24, bit op23, bits<4> op11_8, bit op7, bit op6,
 multiclass N3VInt_HS<bit op24, bit op23, bits<4> op11_8, bit op4,
                      InstrItinClass itinD16, InstrItinClass itinD32,
                      InstrItinClass itinQ16, InstrItinClass itinQ32,
-                     string OpcodeStr, Intrinsic IntOp, bit Commutable = 0> {
+                     string OpcodeStr, string Dt,
+                     Intrinsic IntOp, bit Commutable = 0> {
   // 64-bit vector types.
-  def v4i16 : N3VDInt<op24, op23, 0b01, op11_8, op4, itinD16, !strconcat(OpcodeStr,"16"),
+  def v4i16 : N3VDInt<op24, op23, 0b01, op11_8, op4, itinD16,
+                      OpcodeStr, !strconcat(Dt, "16"),
                       v4i16, v4i16, IntOp, Commutable>;
-  def v2i32 : N3VDInt<op24, op23, 0b10, op11_8, op4, itinD32, !strconcat(OpcodeStr,"32"),
+  def v2i32 : N3VDInt<op24, op23, 0b10, op11_8, op4, itinD32,
+                      OpcodeStr, !strconcat(Dt, "32"),
                       v2i32, v2i32, IntOp, Commutable>;
 
   // 128-bit vector types.
-  def v8i16 : N3VQInt<op24, op23, 0b01, op11_8, op4, itinQ16, !strconcat(OpcodeStr,"16"),
+  def v8i16 : N3VQInt<op24, op23, 0b01, op11_8, op4, itinQ16,
+                      OpcodeStr, !strconcat(Dt, "16"),
                       v8i16, v8i16, IntOp, Commutable>;
-  def v4i32 : N3VQInt<op24, op23, 0b10, op11_8, op4, itinQ32, !strconcat(OpcodeStr,"32"),
+  def v4i32 : N3VQInt<op24, op23, 0b10, op11_8, op4, itinQ32,
+                      OpcodeStr, !strconcat(Dt, "32"),
                       v4i32, v4i32, IntOp, Commutable>;
 }
 
 multiclass N3VIntSL_HS<bits<4> op11_8, 
                        InstrItinClass itinD16, InstrItinClass itinD32,
                        InstrItinClass itinQ16, InstrItinClass itinQ32,
-                       string OpcodeStr, Intrinsic IntOp> {
-  def v4i16 : N3VDIntSL16<0b01, op11_8, itinD16, !strconcat(OpcodeStr, "16"), v4i16, IntOp>;
-  def v2i32 : N3VDIntSL<0b10, op11_8, itinD32, !strconcat(OpcodeStr, "32"), v2i32, IntOp>;
-  def v8i16 : N3VQIntSL16<0b01, op11_8, itinQ16, !strconcat(OpcodeStr, "16"), v8i16, v4i16, IntOp>;
-  def v4i32 : N3VQIntSL<0b10, op11_8, itinQ32, !strconcat(OpcodeStr, "32"), v4i32, v2i32, IntOp>;
+                       string OpcodeStr, string Dt, Intrinsic IntOp> {
+  def v4i16 : N3VDIntSL16<0b01, op11_8, itinD16,
+                          OpcodeStr, !strconcat(Dt, "16"), v4i16, IntOp>;
+  def v2i32 : N3VDIntSL<0b10, op11_8, itinD32,
+                        OpcodeStr, !strconcat(Dt, "32"), v2i32, IntOp>;
+  def v8i16 : N3VQIntSL16<0b01, op11_8, itinQ16,
+                        OpcodeStr, !strconcat(Dt, "16"), v8i16, v4i16, IntOp>;
+  def v4i32 : N3VQIntSL<0b10, op11_8, itinQ32,
+                        OpcodeStr, !strconcat(Dt, "32"), v4i32, v2i32, IntOp>;
 }
 
 // ....then also with element size of 8 bits:
 multiclass N3VInt_QHS<bit op24, bit op23, bits<4> op11_8, bit op4,
                       InstrItinClass itinD16, InstrItinClass itinD32,
                       InstrItinClass itinQ16, InstrItinClass itinQ32,
-                      string OpcodeStr, Intrinsic IntOp, bit Commutable = 0>
+                      string OpcodeStr, string Dt,
+                      Intrinsic IntOp, bit Commutable = 0>
   : N3VInt_HS<op24, op23, op11_8, op4, itinD16, itinD32, itinQ16, itinQ32,
-              OpcodeStr, IntOp, Commutable> {
+              OpcodeStr, Dt, IntOp, Commutable> {
   def v8i8  : N3VDInt<op24, op23, 0b00, op11_8, op4, itinD16,
-                      !strconcat(OpcodeStr, "8"), v8i8, v8i8, IntOp, Commutable>;
+                     OpcodeStr, !strconcat(Dt, "8"),
+                     v8i8, v8i8, IntOp, Commutable>;
   def v16i8 : N3VQInt<op24, op23, 0b00, op11_8, op4, itinQ16,
-                      !strconcat(OpcodeStr, "8"), v16i8, v16i8, IntOp, Commutable>;
+                      OpcodeStr, !strconcat(Dt, "8"),
+                      v16i8, v16i8, IntOp, Commutable>;
 }
 
 // ....then also with element size of 64 bits:
 multiclass N3VInt_QHSD<bit op24, bit op23, bits<4> op11_8, bit op4,
                        InstrItinClass itinD16, InstrItinClass itinD32,
                        InstrItinClass itinQ16, InstrItinClass itinQ32,
-                       string OpcodeStr, Intrinsic IntOp, bit Commutable = 0>
+                       string OpcodeStr, string Dt,
+                       Intrinsic IntOp, bit Commutable = 0>
   : N3VInt_QHS<op24, op23, op11_8, op4, itinD16, itinD32, itinQ16, itinQ32,
-               OpcodeStr, IntOp, Commutable> {
+               OpcodeStr, Dt, IntOp, Commutable> {
   def v1i64 : N3VDInt<op24, op23, 0b11, op11_8, op4, itinD32,
-                      !strconcat(OpcodeStr,"64"), v1i64, v1i64, IntOp, Commutable>;
+                   OpcodeStr, !strconcat(Dt, "64"),
+                   v1i64, v1i64, IntOp, Commutable>;
   def v2i64 : N3VQInt<op24, op23, 0b11, op11_8, op4, itinQ32,
-                      !strconcat(OpcodeStr,"64"), v2i64, v2i64, IntOp, Commutable>;
+                   OpcodeStr, !strconcat(Dt, "64"),
+                   v2i64, v2i64, IntOp, Commutable>;
 }
 
 
 // Neon Narrowing 3-register vector intrinsics,
 //   source operand element sizes of 16, 32 and 64 bits:
 multiclass N3VNInt_HSD<bit op24, bit op23, bits<4> op11_8, bit op4,
-                       string OpcodeStr, Intrinsic IntOp, bit Commutable = 0> {
-  def v8i8  : N3VNInt<op24, op23, 0b00, op11_8, op4, !strconcat(OpcodeStr,"16"),
+                       string OpcodeStr, string Dt,
+                       Intrinsic IntOp, bit Commutable = 0> {
+  def v8i8  : N3VNInt<op24, op23, 0b00, op11_8, op4,
+                      OpcodeStr, !strconcat(Dt, "16"),
                       v8i8, v8i16, IntOp, Commutable>;
-  def v4i16 : N3VNInt<op24, op23, 0b01, op11_8, op4, !strconcat(OpcodeStr,"32"),
+  def v4i16 : N3VNInt<op24, op23, 0b01, op11_8, op4,
+                      OpcodeStr, !strconcat(Dt, "32"),
                       v4i16, v4i32, IntOp, Commutable>;
-  def v2i32 : N3VNInt<op24, op23, 0b10, op11_8, op4, !strconcat(OpcodeStr,"64"),
+  def v2i32 : N3VNInt<op24, op23, 0b10, op11_8, op4,
+                      OpcodeStr, !strconcat(Dt, "64"),
                       v2i32, v2i64, IntOp, Commutable>;
 }
 
@@ -1081,41 +1429,50 @@ multiclass N3VNInt_HSD<bit op24, bit op23, bits<4> op11_8, bit op4,
 
 // First with only element sizes of 16 and 32 bits:
 multiclass N3VLInt_HS<bit op24, bit op23, bits<4> op11_8, bit op4,
-                      InstrItinClass itin, string OpcodeStr,
+                      InstrItinClass itin, string OpcodeStr, string Dt,
                       Intrinsic IntOp, bit Commutable = 0> {
   def v4i32 : N3VLInt<op24, op23, 0b01, op11_8, op4, itin, 
-                      !strconcat(OpcodeStr,"16"), v4i32, v4i16, IntOp, Commutable>;
+                      OpcodeStr, !strconcat(Dt, "16"),
+                      v4i32, v4i16, IntOp, Commutable>;
   def v2i64 : N3VLInt<op24, op23, 0b10, op11_8, op4, itin,
-                      !strconcat(OpcodeStr,"32"), v2i64, v2i32, IntOp, Commutable>;
+                      OpcodeStr, !strconcat(Dt, "32"),
+                      v2i64, v2i32, IntOp, Commutable>;
 }
 
 multiclass N3VLIntSL_HS<bit op24, bits<4> op11_8,
-                        InstrItinClass itin, string OpcodeStr, Intrinsic IntOp> {
+                        InstrItinClass itin, string OpcodeStr, string Dt,
+                        Intrinsic IntOp> {
   def v4i16 : N3VLIntSL16<op24, 0b01, op11_8, itin, 
-                          !strconcat(OpcodeStr, "16"), v4i32, v4i16, IntOp>;
+                          OpcodeStr, !strconcat(Dt, "16"), v4i32, v4i16, IntOp>;
   def v2i32 : N3VLIntSL<op24, 0b10, op11_8, itin,
-                        !strconcat(OpcodeStr, "32"), v2i64, v2i32, IntOp>;
+                        OpcodeStr, !strconcat(Dt, "32"), v2i64, v2i32, IntOp>;
 }
 
 // ....then also with element size of 8 bits:
 multiclass N3VLInt_QHS<bit op24, bit op23, bits<4> op11_8, bit op4,
-                       InstrItinClass itin, string OpcodeStr,
+                       InstrItinClass itin, string OpcodeStr, string Dt,
                        Intrinsic IntOp, bit Commutable = 0>
-  : N3VLInt_HS<op24, op23, op11_8, op4, itin, OpcodeStr, IntOp, Commutable> {
+  : N3VLInt_HS<op24, op23, op11_8, op4, itin, OpcodeStr, Dt,
+               IntOp, Commutable> {
   def v8i16 : N3VLInt<op24, op23, 0b00, op11_8, op4, itin, 
-                      !strconcat(OpcodeStr, "8"), v8i16, v8i8, IntOp, Commutable>;
+                      OpcodeStr, !strconcat(Dt, "8"),
+                      v8i16, v8i8, IntOp, Commutable>;
 }
 
 
 // Neon Wide 3-register vector intrinsics,
 //   source operand element sizes of 8, 16 and 32 bits:
 multiclass N3VWInt_QHS<bit op24, bit op23, bits<4> op11_8, bit op4,
-                       string OpcodeStr, Intrinsic IntOp, bit Commutable = 0> {
-  def v8i16 : N3VWInt<op24, op23, 0b00, op11_8, op4, !strconcat(OpcodeStr, "8"),
+                       string OpcodeStr, string Dt,
+                       Intrinsic IntOp, bit Commutable = 0> {
+  def v8i16 : N3VWInt<op24, op23, 0b00, op11_8, op4,
+                      OpcodeStr, !strconcat(Dt, "8"),
                       v8i16, v8i8, IntOp, Commutable>;
-  def v4i32 : N3VWInt<op24, op23, 0b01, op11_8, op4, !strconcat(OpcodeStr,"16"),
+  def v4i32 : N3VWInt<op24, op23, 0b01, op11_8, op4,
+                      OpcodeStr, !strconcat(Dt, "16"),
                       v4i32, v4i16, IntOp, Commutable>;
-  def v2i64 : N3VWInt<op24, op23, 0b10, op11_8, op4, !strconcat(OpcodeStr,"32"),
+  def v2i64 : N3VWInt<op24, op23, 0b10, op11_8, op4,
+                      OpcodeStr, !strconcat(Dt, "32"),
                       v2i64, v2i32, IntOp, Commutable>;
 }
 
@@ -1125,57 +1482,57 @@ multiclass N3VWInt_QHS<bit op24, bit op23, bits<4> op11_8, bit op4,
 multiclass N3VMulOp_QHS<bit op24, bit op23, bits<4> op11_8, bit op4,
                         InstrItinClass itinD16, InstrItinClass itinD32,
                         InstrItinClass itinQ16, InstrItinClass itinQ32,
-                        string OpcodeStr, SDNode OpNode> {
+                        string OpcodeStr, string Dt, SDNode OpNode> {
   // 64-bit vector types.
   def v8i8  : N3VDMulOp<op24, op23, 0b00, op11_8, op4, itinD16,
-                        !strconcat(OpcodeStr, "8"), v8i8, mul, OpNode>;
+                        OpcodeStr, !strconcat(Dt, "8"), v8i8, mul, OpNode>;
   def v4i16 : N3VDMulOp<op24, op23, 0b01, op11_8, op4, itinD16,
-                        !strconcat(OpcodeStr, "16"), v4i16, mul, OpNode>;
+                        OpcodeStr, !strconcat(Dt, "16"), v4i16, mul, OpNode>;
   def v2i32 : N3VDMulOp<op24, op23, 0b10, op11_8, op4, itinD32,
-                        !strconcat(OpcodeStr, "32"), v2i32, mul, OpNode>;
+                        OpcodeStr, !strconcat(Dt, "32"), v2i32, mul, OpNode>;
 
   // 128-bit vector types.
   def v16i8 : N3VQMulOp<op24, op23, 0b00, op11_8, op4, itinQ16,
-                        !strconcat(OpcodeStr, "8"), v16i8, mul, OpNode>;
+                        OpcodeStr, !strconcat(Dt, "8"), v16i8, mul, OpNode>;
   def v8i16 : N3VQMulOp<op24, op23, 0b01, op11_8, op4, itinQ16,
-                        !strconcat(OpcodeStr, "16"), v8i16, mul, OpNode>;
+                        OpcodeStr, !strconcat(Dt, "16"), v8i16, mul, OpNode>;
   def v4i32 : N3VQMulOp<op24, op23, 0b10, op11_8, op4, itinQ32,
-                        !strconcat(OpcodeStr, "32"), v4i32, mul, OpNode>;
+                        OpcodeStr, !strconcat(Dt, "32"), v4i32, mul, OpNode>;
 }
 
 multiclass N3VMulOpSL_HS<bits<4> op11_8, 
                          InstrItinClass itinD16, InstrItinClass itinD32,
                          InstrItinClass itinQ16, InstrItinClass itinQ32,
-                         string OpcodeStr, SDNode ShOp> {
+                         string OpcodeStr, string Dt, SDNode ShOp> {
   def v4i16 : N3VDMulOpSL16<0b01, op11_8, itinD16,
-                            !strconcat(OpcodeStr, "16"), v4i16, mul, ShOp>;
+                            OpcodeStr, !strconcat(Dt, "16"), v4i16, mul, ShOp>;
   def v2i32 : N3VDMulOpSL<0b10, op11_8, itinD32,
-                          !strconcat(OpcodeStr, "32"), v2i32, mul, ShOp>;
+                          OpcodeStr, !strconcat(Dt, "32"), v2i32, mul, ShOp>;
   def v8i16 : N3VQMulOpSL16<0b01, op11_8, itinQ16,
-                            !strconcat(OpcodeStr, "16"), v8i16, v4i16, mul, ShOp>;
+                      OpcodeStr, !strconcat(Dt, "16"), v8i16, v4i16, mul, ShOp>;
   def v4i32 : N3VQMulOpSL<0b10, op11_8, itinQ32,
-                          !strconcat(OpcodeStr, "32"), v4i32, v2i32, mul, ShOp>;
+                      OpcodeStr, !strconcat(Dt, "32"), v4i32, v2i32, mul, ShOp>;
 }
 
 // Neon 3-argument intrinsics,
 //   element sizes of 8, 16 and 32 bits:
 multiclass N3VInt3_QHS<bit op24, bit op23, bits<4> op11_8, bit op4,
-                       string OpcodeStr, Intrinsic IntOp> {
+                       string OpcodeStr, string Dt, Intrinsic IntOp> {
   // 64-bit vector types.
   def v8i8  : N3VDInt3<op24, op23, 0b00, op11_8, op4, IIC_VMACi16D,
-                        !strconcat(OpcodeStr, "8"), v8i8, v8i8, IntOp>;
+                        OpcodeStr, !strconcat(Dt, "8"), v8i8, v8i8, IntOp>;
   def v4i16 : N3VDInt3<op24, op23, 0b01, op11_8, op4, IIC_VMACi16D,
-                        !strconcat(OpcodeStr, "16"), v4i16, v4i16, IntOp>;
+                        OpcodeStr, !strconcat(Dt, "16"), v4i16, v4i16, IntOp>;
   def v2i32 : N3VDInt3<op24, op23, 0b10, op11_8, op4, IIC_VMACi32D,
-                        !strconcat(OpcodeStr, "32"), v2i32, v2i32, IntOp>;
+                        OpcodeStr, !strconcat(Dt, "32"), v2i32, v2i32, IntOp>;
 
   // 128-bit vector types.
   def v16i8 : N3VQInt3<op24, op23, 0b00, op11_8, op4, IIC_VMACi16Q,
-                        !strconcat(OpcodeStr, "8"), v16i8, v16i8, IntOp>;
+                        OpcodeStr, !strconcat(Dt, "8"), v16i8, v16i8, IntOp>;
   def v8i16 : N3VQInt3<op24, op23, 0b01, op11_8, op4, IIC_VMACi16Q,
-                        !strconcat(OpcodeStr, "16"), v8i16, v8i16, IntOp>;
+                        OpcodeStr, !strconcat(Dt, "16"), v8i16, v8i16, IntOp>;
   def v4i32 : N3VQInt3<op24, op23, 0b10, op11_8, op4, IIC_VMACi32Q,
-                        !strconcat(OpcodeStr, "32"), v4i32, v4i32, IntOp>;
+                        OpcodeStr, !strconcat(Dt, "32"), v4i32, v4i32, IntOp>;
 }
 
 
@@ -1183,27 +1540,27 @@ multiclass N3VInt3_QHS<bit op24, bit op23, bits<4> op11_8, bit op4,
 
 // First with only element sizes of 16 and 32 bits:
 multiclass N3VLInt3_HS<bit op24, bit op23, bits<4> op11_8, bit op4,
-                       string OpcodeStr, Intrinsic IntOp> {
+                       string OpcodeStr, string Dt, Intrinsic IntOp> {
   def v4i32 : N3VLInt3<op24, op23, 0b01, op11_8, op4, IIC_VMACi16D,
-                       !strconcat(OpcodeStr, "16"), v4i32, v4i16, IntOp>;
+                       OpcodeStr, !strconcat(Dt, "16"), v4i32, v4i16, IntOp>;
   def v2i64 : N3VLInt3<op24, op23, 0b10, op11_8, op4, IIC_VMACi16D,
-                       !strconcat(OpcodeStr, "32"), v2i64, v2i32, IntOp>;
+                       OpcodeStr, !strconcat(Dt, "32"), v2i64, v2i32, IntOp>;
 }
 
 multiclass N3VLInt3SL_HS<bit op24, bits<4> op11_8,
-                         string OpcodeStr, Intrinsic IntOp> {
+                         string OpcodeStr, string Dt, Intrinsic IntOp> {
   def v4i16 : N3VLInt3SL16<op24, 0b01, op11_8, IIC_VMACi16D,
-                           !strconcat(OpcodeStr, "16"), v4i32, v4i16, IntOp>;
+                           OpcodeStr, !strconcat(Dt,"16"), v4i32, v4i16, IntOp>;
   def v2i32 : N3VLInt3SL<op24, 0b10, op11_8, IIC_VMACi32D,
-                         !strconcat(OpcodeStr, "32"), v2i64, v2i32, IntOp>;
+                         OpcodeStr, !strconcat(Dt, "32"), v2i64, v2i32, IntOp>;
 }
 
 // ....then also with element size of 8 bits:
 multiclass N3VLInt3_QHS<bit op24, bit op23, bits<4> op11_8, bit op4,
-                        string OpcodeStr, Intrinsic IntOp>
-  : N3VLInt3_HS<op24, op23, op11_8, op4, OpcodeStr, IntOp> {
-  def v8i16 : N3VLInt3<op24, op23, 0b01, op11_8, op4, IIC_VMACi16D,
-                       !strconcat(OpcodeStr, "8"), v8i16, v8i8, IntOp>;
+                        string OpcodeStr, string Dt, Intrinsic IntOp>
+  : N3VLInt3_HS<op24, op23, op11_8, op4, OpcodeStr, Dt, IntOp> {
+  def v8i16 : N3VLInt3<op24, op23, 0b00, op11_8, op4, IIC_VMACi16D,
+                       OpcodeStr, !strconcat(Dt, "8"), v8i16, v8i8, IntOp>;
 }
 
 
@@ -1212,22 +1569,22 @@ multiclass N3VLInt3_QHS<bit op24, bit op23, bits<4> op11_8, bit op4,
 multiclass N2VInt_QHS<bits<2> op24_23, bits<2> op21_20, bits<2> op17_16,
                       bits<5> op11_7, bit op4,
                       InstrItinClass itinD, InstrItinClass itinQ,
-                      string OpcodeStr, Intrinsic IntOp> {
+                      string OpcodeStr, string Dt, Intrinsic IntOp> {
   // 64-bit vector types.
   def v8i8  : N2VDInt<op24_23, op21_20, 0b00, op17_16, op11_7, op4,
-                      itinD, !strconcat(OpcodeStr, "8"), v8i8, v8i8, IntOp>;
+                      itinD, OpcodeStr, !strconcat(Dt, "8"), v8i8, v8i8, IntOp>;
   def v4i16 : N2VDInt<op24_23, op21_20, 0b01, op17_16, op11_7, op4,
-                      itinD, !strconcat(OpcodeStr, "16"), v4i16, v4i16, IntOp>;
+                   itinD, OpcodeStr, !strconcat(Dt, "16"), v4i16, v4i16, IntOp>;
   def v2i32 : N2VDInt<op24_23, op21_20, 0b10, op17_16, op11_7, op4,
-                      itinD, !strconcat(OpcodeStr, "32"), v2i32, v2i32, IntOp>;
+                   itinD, OpcodeStr, !strconcat(Dt, "32"), v2i32, v2i32, IntOp>;
 
   // 128-bit vector types.
   def v16i8 : N2VQInt<op24_23, op21_20, 0b00, op17_16, op11_7, op4,
-                      itinQ, !strconcat(OpcodeStr, "8"), v16i8, v16i8, IntOp>;
+                    itinQ, OpcodeStr, !strconcat(Dt, "8"), v16i8, v16i8, IntOp>;
   def v8i16 : N2VQInt<op24_23, op21_20, 0b01, op17_16, op11_7, op4,
-                      itinQ, !strconcat(OpcodeStr, "16"), v8i16, v8i16, IntOp>;
+                   itinQ, OpcodeStr, !strconcat(Dt, "16"), v8i16, v8i16, IntOp>;
   def v4i32 : N2VQInt<op24_23, op21_20, 0b10, op17_16, op11_7, op4,
-                      itinQ, !strconcat(OpcodeStr, "32"), v4i32, v4i32, IntOp>;
+                   itinQ, OpcodeStr, !strconcat(Dt, "32"), v4i32, v4i32, IntOp>;
 }
 
 
@@ -1235,22 +1592,22 @@ multiclass N2VInt_QHS<bits<2> op24_23, bits<2> op21_20, bits<2> op17_16,
 //   element sizes of 8, 16 and 32 bits:
 multiclass N2VPLInt_QHS<bits<2> op24_23, bits<2> op21_20, bits<2> op17_16,
                         bits<5> op11_7, bit op4,
-                        string OpcodeStr, Intrinsic IntOp> {
+                        string OpcodeStr, string Dt, Intrinsic IntOp> {
   // 64-bit vector types.
   def v8i8  : N2VDPLInt<op24_23, op21_20, 0b00, op17_16, op11_7, op4,
-                        !strconcat(OpcodeStr, "8"), v4i16, v8i8, IntOp>;
+                        OpcodeStr, !strconcat(Dt, "8"), v4i16, v8i8, IntOp>;
   def v4i16 : N2VDPLInt<op24_23, op21_20, 0b01, op17_16, op11_7, op4,
-                        !strconcat(OpcodeStr, "16"), v2i32, v4i16, IntOp>;
+                        OpcodeStr, !strconcat(Dt, "16"), v2i32, v4i16, IntOp>;
   def v2i32 : N2VDPLInt<op24_23, op21_20, 0b10, op17_16, op11_7, op4,
-                        !strconcat(OpcodeStr, "32"), v1i64, v2i32, IntOp>;
+                        OpcodeStr, !strconcat(Dt, "32"), v1i64, v2i32, IntOp>;
 
   // 128-bit vector types.
   def v16i8 : N2VQPLInt<op24_23, op21_20, 0b00, op17_16, op11_7, op4,
-                        !strconcat(OpcodeStr, "8"), v8i16, v16i8, IntOp>;
+                        OpcodeStr, !strconcat(Dt, "8"), v8i16, v16i8, IntOp>;
   def v8i16 : N2VQPLInt<op24_23, op21_20, 0b01, op17_16, op11_7, op4,
-                        !strconcat(OpcodeStr, "16"), v4i32, v8i16, IntOp>;
+                        OpcodeStr, !strconcat(Dt, "16"), v4i32, v8i16, IntOp>;
   def v4i32 : N2VQPLInt<op24_23, op21_20, 0b10, op17_16, op11_7, op4,
-                        !strconcat(OpcodeStr, "32"), v2i64, v4i32, IntOp>;
+                        OpcodeStr, !strconcat(Dt, "32"), v2i64, v4i32, IntOp>;
 }
 
 
@@ -1258,74 +1615,103 @@ multiclass N2VPLInt_QHS<bits<2> op24_23, bits<2> op21_20, bits<2> op17_16,
 //   element sizes of 8, 16 and 32 bits:
 multiclass N2VPLInt2_QHS<bits<2> op24_23, bits<2> op21_20, bits<2> op17_16,
                          bits<5> op11_7, bit op4,
-                         string OpcodeStr, Intrinsic IntOp> {
+                         string OpcodeStr, string Dt, Intrinsic IntOp> {
   // 64-bit vector types.
   def v8i8  : N2VDPLInt2<op24_23, op21_20, 0b00, op17_16, op11_7, op4,
-                         !strconcat(OpcodeStr, "8"), v4i16, v8i8, IntOp>;
+                         OpcodeStr, !strconcat(Dt, "8"), v4i16, v8i8, IntOp>;
   def v4i16 : N2VDPLInt2<op24_23, op21_20, 0b01, op17_16, op11_7, op4,
-                         !strconcat(OpcodeStr, "16"), v2i32, v4i16, IntOp>;
+                         OpcodeStr, !strconcat(Dt, "16"), v2i32, v4i16, IntOp>;
   def v2i32 : N2VDPLInt2<op24_23, op21_20, 0b10, op17_16, op11_7, op4,
-                         !strconcat(OpcodeStr, "32"), v1i64, v2i32, IntOp>;
+                         OpcodeStr, !strconcat(Dt, "32"), v1i64, v2i32, IntOp>;
 
   // 128-bit vector types.
   def v16i8 : N2VQPLInt2<op24_23, op21_20, 0b00, op17_16, op11_7, op4,
-                         !strconcat(OpcodeStr, "8"), v8i16, v16i8, IntOp>;
+                         OpcodeStr, !strconcat(Dt, "8"), v8i16, v16i8, IntOp>;
   def v8i16 : N2VQPLInt2<op24_23, op21_20, 0b01, op17_16, op11_7, op4,
-                         !strconcat(OpcodeStr, "16"), v4i32, v8i16, IntOp>;
+                         OpcodeStr, !strconcat(Dt, "16"), v4i32, v8i16, IntOp>;
   def v4i32 : N2VQPLInt2<op24_23, op21_20, 0b10, op17_16, op11_7, op4,
-                         !strconcat(OpcodeStr, "32"), v2i64, v4i32, IntOp>;
+                         OpcodeStr, !strconcat(Dt, "32"), v2i64, v4i32, IntOp>;
 }
 
 
 // Neon 2-register vector shift by immediate,
 //   element sizes of 8, 16, 32 and 64 bits:
 multiclass N2VSh_QHSD<bit op24, bit op23, bits<4> op11_8, bit op4,
-                      InstrItinClass itin, string OpcodeStr, SDNode OpNode> {
+                      InstrItinClass itin, string OpcodeStr, string Dt,
+                      SDNode OpNode> {
   // 64-bit vector types.
-  def v8i8  : N2VDSh<op24, op23, 0b001000, op11_8, 0, op4, itin,
-                     !strconcat(OpcodeStr, "8"), v8i8, OpNode>;
-  def v4i16 : N2VDSh<op24, op23, 0b010000, op11_8, 0, op4, itin,
-                     !strconcat(OpcodeStr, "16"), v4i16, OpNode>;
-  def v2i32 : N2VDSh<op24, op23, 0b100000, op11_8, 0, op4, itin,
-                     !strconcat(OpcodeStr, "32"), v2i32, OpNode>;
-  def v1i64 : N2VDSh<op24, op23, 0b000000, op11_8, 1, op4, itin,
-                     !strconcat(OpcodeStr, "64"), v1i64, OpNode>;
+  def v8i8  : N2VDSh<op24, op23, op11_8, 0, op4, itin,
+                     OpcodeStr, !strconcat(Dt, "8"), v8i8, OpNode> {
+    let Inst{21-19} = 0b001; // imm6 = 001xxx
+  }
+  def v4i16 : N2VDSh<op24, op23, op11_8, 0, op4, itin,
+                     OpcodeStr, !strconcat(Dt, "16"), v4i16, OpNode> {
+    let Inst{21-20} = 0b01;  // imm6 = 01xxxx
+  }
+  def v2i32 : N2VDSh<op24, op23, op11_8, 0, op4, itin,
+                     OpcodeStr, !strconcat(Dt, "32"), v2i32, OpNode> {
+    let Inst{21} = 0b1;      // imm6 = 1xxxxx
+  }
+  def v1i64 : N2VDSh<op24, op23, op11_8, 1, op4, itin,
+                     OpcodeStr, !strconcat(Dt, "64"), v1i64, OpNode>;
+                             // imm6 = xxxxxx
 
   // 128-bit vector types.
-  def v16i8 : N2VQSh<op24, op23, 0b001000, op11_8, 0, op4, itin,
-                     !strconcat(OpcodeStr, "8"), v16i8, OpNode>;
-  def v8i16 : N2VQSh<op24, op23, 0b010000, op11_8, 0, op4, itin,
-                     !strconcat(OpcodeStr, "16"), v8i16, OpNode>;
-  def v4i32 : N2VQSh<op24, op23, 0b100000, op11_8, 0, op4, itin,
-                     !strconcat(OpcodeStr, "32"), v4i32, OpNode>;
-  def v2i64 : N2VQSh<op24, op23, 0b000000, op11_8, 1, op4, itin,
-                     !strconcat(OpcodeStr, "64"), v2i64, OpNode>;
+  def v16i8 : N2VQSh<op24, op23, op11_8, 0, op4, itin,
+                     OpcodeStr, !strconcat(Dt, "8"), v16i8, OpNode> {
+    let Inst{21-19} = 0b001; // imm6 = 001xxx
+  }
+  def v8i16 : N2VQSh<op24, op23, op11_8, 0, op4, itin,
+                     OpcodeStr, !strconcat(Dt, "16"), v8i16, OpNode> {
+    let Inst{21-20} = 0b01;  // imm6 = 01xxxx
+  }
+  def v4i32 : N2VQSh<op24, op23, op11_8, 0, op4, itin,
+                     OpcodeStr, !strconcat(Dt, "32"), v4i32, OpNode> {
+    let Inst{21} = 0b1;      // imm6 = 1xxxxx
+  }
+  def v2i64 : N2VQSh<op24, op23, op11_8, 1, op4, itin,
+                     OpcodeStr, !strconcat(Dt, "64"), v2i64, OpNode>;
+                             // imm6 = xxxxxx
 }
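
To make the imm6 comments above concrete, a short worked note (restating the
encoding constraints; the exact mapping from shift amount to the low imm6 bits
is instruction-dependent and not asserted here):

    // 8-bit elements:  Inst{21-19} = 0b001, imm6 = 001xxx, so only
    //                  Inst{18-16} vary with the shift amount;
    // 16-bit elements: imm6 = 01xxxx;
    // 32-bit elements: imm6 = 1xxxxx;
    // 64-bit elements: all six bits are free (imm6 = xxxxxx) and the
    //                  class is instantiated with its op7 bit set to 1.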
 
 
 // Neon Shift-Accumulate vector operations,
 //   element sizes of 8, 16, 32 and 64 bits:
 multiclass N2VShAdd_QHSD<bit op24, bit op23, bits<4> op11_8, bit op4,
-                         string OpcodeStr, SDNode ShOp> {
+                         string OpcodeStr, string Dt, SDNode ShOp> {
   // 64-bit vector types.
-  def v8i8  : N2VDShAdd<op24, op23, 0b001000, op11_8, 0, op4,
-                        !strconcat(OpcodeStr, "8"), v8i8, ShOp>;
-  def v4i16 : N2VDShAdd<op24, op23, 0b010000, op11_8, 0, op4,
-                        !strconcat(OpcodeStr, "16"), v4i16, ShOp>;
-  def v2i32 : N2VDShAdd<op24, op23, 0b100000, op11_8, 0, op4,
-                        !strconcat(OpcodeStr, "32"), v2i32, ShOp>;
-  def v1i64 : N2VDShAdd<op24, op23, 0b000000, op11_8, 1, op4,
-                        !strconcat(OpcodeStr, "64"), v1i64, ShOp>;
+  def v8i8  : N2VDShAdd<op24, op23, op11_8, 0, op4,
+                        OpcodeStr, !strconcat(Dt, "8"), v8i8, ShOp> {
+    let Inst{21-19} = 0b001; // imm6 = 001xxx
+  }
+  def v4i16 : N2VDShAdd<op24, op23, op11_8, 0, op4,
+                        OpcodeStr, !strconcat(Dt, "16"), v4i16, ShOp> {
+    let Inst{21-20} = 0b01;  // imm6 = 01xxxx
+  }
+  def v2i32 : N2VDShAdd<op24, op23, op11_8, 0, op4,
+                        OpcodeStr, !strconcat(Dt, "32"), v2i32, ShOp> {
+    let Inst{21} = 0b1;      // imm6 = 1xxxxx
+  }
+  def v1i64 : N2VDShAdd<op24, op23, op11_8, 1, op4,
+                        OpcodeStr, !strconcat(Dt, "64"), v1i64, ShOp>;
+                             // imm6 = xxxxxx
 
   // 128-bit vector types.
-  def v16i8 : N2VQShAdd<op24, op23, 0b001000, op11_8, 0, op4,
-                        !strconcat(OpcodeStr, "8"), v16i8, ShOp>;
-  def v8i16 : N2VQShAdd<op24, op23, 0b010000, op11_8, 0, op4,
-                        !strconcat(OpcodeStr, "16"), v8i16, ShOp>;
-  def v4i32 : N2VQShAdd<op24, op23, 0b100000, op11_8, 0, op4,
-                        !strconcat(OpcodeStr, "32"), v4i32, ShOp>;
-  def v2i64 : N2VQShAdd<op24, op23, 0b000000, op11_8, 1, op4,
-                        !strconcat(OpcodeStr, "64"), v2i64, ShOp>;
+  def v16i8 : N2VQShAdd<op24, op23, op11_8, 0, op4,
+                        OpcodeStr, !strconcat(Dt, "8"), v16i8, ShOp> {
+    let Inst{21-19} = 0b001; // imm6 = 001xxx
+  }
+  def v8i16 : N2VQShAdd<op24, op23, op11_8, 0, op4,
+                        OpcodeStr, !strconcat(Dt, "16"), v8i16, ShOp> {
+    let Inst{21-20} = 0b01;  // imm6 = 01xxxx
+  }
+  def v4i32 : N2VQShAdd<op24, op23, op11_8, 0, op4,
+                        OpcodeStr, !strconcat(Dt, "32"), v4i32, ShOp> {
+    let Inst{21} = 0b1;      // imm6 = 1xxxxx
+  }
+  def v2i64 : N2VQShAdd<op24, op23, op11_8, 1, op4,
+                        OpcodeStr, !strconcat(Dt, "64"), v2i64, ShOp>;
+                             // imm6 = xxxxxx
 }
 
 
@@ -1334,24 +1720,75 @@ multiclass N2VShAdd_QHSD<bit op24, bit op23, bits<4> op11_8, bit op4,
 multiclass N2VShIns_QHSD<bit op24, bit op23, bits<4> op11_8, bit op4,
                          string OpcodeStr, SDNode ShOp> {
   // 64-bit vector types.
-  def v8i8  : N2VDShIns<op24, op23, 0b001000, op11_8, 0, op4,
-                        !strconcat(OpcodeStr, "8"), v8i8, ShOp>;
-  def v4i16 : N2VDShIns<op24, op23, 0b010000, op11_8, 0, op4,
-                        !strconcat(OpcodeStr, "16"), v4i16, ShOp>;
-  def v2i32 : N2VDShIns<op24, op23, 0b100000, op11_8, 0, op4,
-                        !strconcat(OpcodeStr, "32"), v2i32, ShOp>;
-  def v1i64 : N2VDShIns<op24, op23, 0b000000, op11_8, 1, op4,
-                        !strconcat(OpcodeStr, "64"), v1i64, ShOp>;
+  def v8i8  : N2VDShIns<op24, op23, op11_8, 0, op4,
+                        OpcodeStr, "8", v8i8, ShOp> {
+    let Inst{21-19} = 0b001; // imm6 = 001xxx
+  }
+  def v4i16 : N2VDShIns<op24, op23, op11_8, 0, op4,
+                        OpcodeStr, "16", v4i16, ShOp> {
+    let Inst{21-20} = 0b01;  // imm6 = 01xxxx
+  }
+  def v2i32 : N2VDShIns<op24, op23, op11_8, 0, op4,
+                        OpcodeStr, "32", v2i32, ShOp> {
+    let Inst{21} = 0b1;      // imm6 = 1xxxxx
+  }
+  def v1i64 : N2VDShIns<op24, op23, op11_8, 1, op4,
+                        OpcodeStr, "64", v1i64, ShOp>;
+                             // imm6 = xxxxxx
 
   // 128-bit vector types.
-  def v16i8 : N2VQShIns<op24, op23, 0b001000, op11_8, 0, op4,
-                        !strconcat(OpcodeStr, "8"), v16i8, ShOp>;
-  def v8i16 : N2VQShIns<op24, op23, 0b010000, op11_8, 0, op4,
-                        !strconcat(OpcodeStr, "16"), v8i16, ShOp>;
-  def v4i32 : N2VQShIns<op24, op23, 0b100000, op11_8, 0, op4,
-                        !strconcat(OpcodeStr, "32"), v4i32, ShOp>;
-  def v2i64 : N2VQShIns<op24, op23, 0b000000, op11_8, 1, op4,
-                        !strconcat(OpcodeStr, "64"), v2i64, ShOp>;
+  def v16i8 : N2VQShIns<op24, op23, op11_8, 0, op4,
+                        OpcodeStr, "8", v16i8, ShOp> {
+    let Inst{21-19} = 0b001; // imm6 = 001xxx
+  }
+  def v8i16 : N2VQShIns<op24, op23, op11_8, 0, op4,
+                        OpcodeStr, "16", v8i16, ShOp> {
+    let Inst{21-20} = 0b01;  // imm6 = 01xxxx
+  }
+  def v4i32 : N2VQShIns<op24, op23, op11_8, 0, op4,
+                        OpcodeStr, "32", v4i32, ShOp> {
+    let Inst{21} = 0b1;      // imm6 = 1xxxxx
+  }
+  def v2i64 : N2VQShIns<op24, op23, op11_8, 1, op4,
+                        OpcodeStr, "64", v2i64, ShOp>;
+                             // imm6 = xxxxxx
+}
+
+// Neon Shift Long operations,
+//   element sizes of 8, 16, 32 bits:
+multiclass N2VLSh_QHS<bit op24, bit op23, bits<4> op11_8, bit op7, bit op6,
+                      bit op4, string OpcodeStr, string Dt, SDNode OpNode> {
+  def v8i16 : N2VLSh<op24, op23, op11_8, op7, op6, op4,
+                 OpcodeStr, !strconcat(Dt, "8"), v8i16, v8i8, OpNode> {
+    let Inst{21-19} = 0b001; // imm6 = 001xxx
+  }
+  def v4i32 : N2VLSh<op24, op23, op11_8, op7, op6, op4,
+                  OpcodeStr, !strconcat(Dt, "16"), v4i32, v4i16, OpNode> {
+    let Inst{21-20} = 0b01;  // imm6 = 01xxxx
+  }
+  def v2i64 : N2VLSh<op24, op23, op11_8, op7, op6, op4,
+                  OpcodeStr, !strconcat(Dt, "32"), v2i64, v2i32, OpNode> {
+    let Inst{21} = 0b1;      // imm6 = 1xxxxx
+  }
+}
+
+// Neon Shift Narrow operations,
+//   element sizes of 16, 32, 64 bits:
+multiclass N2VNSh_HSD<bit op24, bit op23, bits<4> op11_8, bit op7, bit op6,
+                      bit op4, InstrItinClass itin, string OpcodeStr, string Dt,
+                      SDNode OpNode> {
+  def v8i8 : N2VNSh<op24, op23, op11_8, op7, op6, op4, itin,
+                    OpcodeStr, !strconcat(Dt, "16"), v8i8, v8i16, OpNode> {
+    let Inst{21-19} = 0b001; // imm6 = 001xxx
+  }
+  def v4i16 : N2VNSh<op24, op23, op11_8, op7, op6, op4, itin,
+                     OpcodeStr, !strconcat(Dt, "32"), v4i16, v4i32, OpNode> {
+    let Inst{21-20} = 0b01;  // imm6 = 01xxxx
+  }
+  def v2i32 : N2VNSh<op24, op23, op11_8, op7, op6, op4, itin,
+                     OpcodeStr, !strconcat(Dt, "64"), v2i32, v2i64, OpNode> {
+    let Inst{21} = 0b1;      // imm6 = 1xxxxx
+  }
 }
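
These two multiclasses are not instantiated in this hunk; a hypothetical use,
in the style of the defm lines below (opcode bits and the SDNode are
placeholders):

    defm VSHLLi : N2VLSh_QHS<0, 1, 0b1010, 0, 0, 1, "vshll", "s",
                             SomeShiftNode>;
    // would yield VSHLLiv8i16, VSHLLiv4i32 and VSHLLiv2i64, with the
    // imm6 element-size bits constrained exactly as in N2VSh_QHSD.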
 
 //===----------------------------------------------------------------------===//
@@ -1361,49 +1798,58 @@ multiclass N2VShIns_QHSD<bit op24, bit op23, bits<4> op11_8, bit op4,
 // Vector Add Operations.
 
 //   VADD     : Vector Add (integer and floating-point)
-defm VADD     : N3V_QHSD<0, 0, 0b1000, 0, IIC_VBINiD, IIC_VBINiQ, "vadd.i", add, 1>;
-def  VADDfd   : N3VD<0, 0, 0b00, 0b1101, 0, IIC_VBIND, "vadd.f32", v2f32, v2f32, fadd, 1>;
-def  VADDfq   : N3VQ<0, 0, 0b00, 0b1101, 0, IIC_VBINQ, "vadd.f32", v4f32, v4f32, fadd, 1>;
+defm VADD     : N3V_QHSD<0, 0, 0b1000, 0, IIC_VBINiD, IIC_VBINiQ, "vadd", "i",
+                         add, 1>;
+def  VADDfd   : N3VD<0, 0, 0b00, 0b1101, 0, IIC_VBIND, "vadd", "f32",
+                     v2f32, v2f32, fadd, 1>;
+def  VADDfq   : N3VQ<0, 0, 0b00, 0b1101, 0, IIC_VBINQ, "vadd", "f32",
+                     v4f32, v4f32, fadd, 1>;
 //   VADDL    : Vector Add Long (Q = D + D)
-defm VADDLs   : N3VLInt_QHS<0,1,0b0000,0, IIC_VSHLiD, "vaddl.s", int_arm_neon_vaddls, 1>;
-defm VADDLu   : N3VLInt_QHS<1,1,0b0000,0, IIC_VSHLiD, "vaddl.u", int_arm_neon_vaddlu, 1>;
+defm VADDLs   : N3VLInt_QHS<0,1,0b0000,0, IIC_VSHLiD, "vaddl", "s",
+                            int_arm_neon_vaddls, 1>;
+defm VADDLu   : N3VLInt_QHS<1,1,0b0000,0, IIC_VSHLiD, "vaddl", "u",
+                            int_arm_neon_vaddlu, 1>;
 //   VADDW    : Vector Add Wide (Q = Q + D)
-defm VADDWs   : N3VWInt_QHS<0,1,0b0001,0, "vaddw.s", int_arm_neon_vaddws, 0>;
-defm VADDWu   : N3VWInt_QHS<1,1,0b0001,0, "vaddw.u", int_arm_neon_vaddwu, 0>;
+defm VADDWs   : N3VWInt_QHS<0,1,0b0001,0, "vaddw", "s", int_arm_neon_vaddws, 0>;
+defm VADDWu   : N3VWInt_QHS<1,1,0b0001,0, "vaddw", "u", int_arm_neon_vaddwu, 0>;
 //   VHADD    : Vector Halving Add
 defm VHADDs   : N3VInt_QHS<0,0,0b0000,0, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                           IIC_VBINi4Q, "vhadd.s", int_arm_neon_vhadds, 1>;
+                           IIC_VBINi4Q, "vhadd", "s", int_arm_neon_vhadds, 1>;
 defm VHADDu   : N3VInt_QHS<1,0,0b0000,0, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                           IIC_VBINi4Q, "vhadd.u", int_arm_neon_vhaddu, 1>;
+                           IIC_VBINi4Q, "vhadd", "u", int_arm_neon_vhaddu, 1>;
 //   VRHADD   : Vector Rounding Halving Add
 defm VRHADDs  : N3VInt_QHS<0,0,0b0001,0, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                           IIC_VBINi4Q, "vrhadd.s", int_arm_neon_vrhadds, 1>;
+                           IIC_VBINi4Q, "vrhadd", "s", int_arm_neon_vrhadds, 1>;
 defm VRHADDu  : N3VInt_QHS<1,0,0b0001,0, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                           IIC_VBINi4Q, "vrhadd.u", int_arm_neon_vrhaddu, 1>;
+                           IIC_VBINi4Q, "vrhadd", "u", int_arm_neon_vrhaddu, 1>;
 //   VQADD    : Vector Saturating Add
 defm VQADDs   : N3VInt_QHSD<0,0,0b0000,1, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                            IIC_VBINi4Q, "vqadd.s", int_arm_neon_vqadds, 1>;
+                            IIC_VBINi4Q, "vqadd", "s", int_arm_neon_vqadds, 1>;
 defm VQADDu   : N3VInt_QHSD<1,0,0b0000,1, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                            IIC_VBINi4Q, "vqadd.u", int_arm_neon_vqaddu, 1>;
+                            IIC_VBINi4Q, "vqadd", "u", int_arm_neon_vqaddu, 1>;
 //   VADDHN   : Vector Add and Narrow Returning High Half (D = Q + Q)
-defm VADDHN   : N3VNInt_HSD<0,1,0b0100,0, "vaddhn.i", int_arm_neon_vaddhn, 1>;
+defm VADDHN   : N3VNInt_HSD<0,1,0b0100,0, "vaddhn", "i",
+                            int_arm_neon_vaddhn, 1>;
 //   VRADDHN  : Vector Rounding Add and Narrow Returning High Half (D = Q + Q)
-defm VRADDHN  : N3VNInt_HSD<1,1,0b0100,0, "vraddhn.i", int_arm_neon_vraddhn, 1>;
+defm VRADDHN  : N3VNInt_HSD<1,1,0b0100,0, "vraddhn", "i",
+                            int_arm_neon_vraddhn, 1>;
 
 // Vector Multiply Operations.
 
 //   VMUL     : Vector Multiply (integer, polynomial and floating-point)
-defm VMUL     : N3V_QHS<0, 0, 0b1001, 1, IIC_VMULi16D, IIC_VMULi32D, IIC_VMULi16Q,
-                        IIC_VMULi32Q, "vmul.i", mul, 1>;
-def  VMULpd   : N3VDInt<1, 0, 0b00, 0b1001, 1, IIC_VMULi16D, "vmul.p8", v8i8, v8i8,
-                        int_arm_neon_vmulp, 1>;
-def  VMULpq   : N3VQInt<1, 0, 0b00, 0b1001, 1, IIC_VMULi16Q, "vmul.p8", v16i8, v16i8,
-                        int_arm_neon_vmulp, 1>;
-def  VMULfd   : N3VD<1, 0, 0b00, 0b1101, 1, IIC_VBIND, "vmul.f32", v2f32, v2f32, fmul, 1>;
-def  VMULfq   : N3VQ<1, 0, 0b00, 0b1101, 1, IIC_VBINQ, "vmul.f32", v4f32, v4f32, fmul, 1>;
-defm VMULsl  : N3VSL_HS<0b1000, "vmul.i", mul>;
-def VMULslfd : N3VDSL<0b10, 0b1001, IIC_VBIND, "vmul.f32", v2f32, fmul>;
-def VMULslfq : N3VQSL<0b10, 0b1001, IIC_VBINQ, "vmul.f32", v4f32, v2f32, fmul>;
+defm VMUL     : N3V_QHS<0, 0, 0b1001, 1, IIC_VMULi16D, IIC_VMULi32D,
+                        IIC_VMULi16Q, IIC_VMULi32Q, "vmul", "i", mul, 1>;
+def  VMULpd   : N3VDInt<1, 0, 0b00, 0b1001, 1, IIC_VMULi16D, "vmul", "p8",
+                        v8i8, v8i8, int_arm_neon_vmulp, 1>;
+def  VMULpq   : N3VQInt<1, 0, 0b00, 0b1001, 1, IIC_VMULi16Q, "vmul", "p8",
+                        v16i8, v16i8, int_arm_neon_vmulp, 1>;
+def  VMULfd   : N3VD<1, 0, 0b00, 0b1101, 1, IIC_VBIND, "vmul", "f32",
+                        v2f32, v2f32, fmul, 1>;
+def  VMULfq   : N3VQ<1, 0, 0b00, 0b1101, 1, IIC_VBINQ, "vmul", "f32",
+                        v4f32, v4f32, fmul, 1>;
+defm VMULsl  : N3VSL_HS<0b1000, "vmul", "i", mul>;
+def VMULslfd : N3VDSL<0b10, 0b1001, IIC_VBIND, "vmul", "f32", v2f32, fmul>;
+def VMULslfq : N3VQSL<0b10, 0b1001, IIC_VBINQ, "vmul", "f32", v4f32, v2f32, fmul>;
 def : Pat<(v8i16 (mul (v8i16 QPR:$src1),
                       (v8i16 (NEONvduplane (v8i16 QPR:$src2), imm:$lane)))),
           (v8i16 (VMULslv8i16 (v8i16 QPR:$src1),
@@ -1426,66 +1872,80 @@ def : Pat<(v4f32 (fmul (v4f32 QPR:$src1),
 //   VQDMULH  : Vector Saturating Doubling Multiply Returning High Half
 defm VQDMULH  : N3VInt_HS<0, 0, 0b1011, 0, IIC_VMULi16D, IIC_VMULi32D,
                           IIC_VMULi16Q, IIC_VMULi32Q, 
-                          "vqdmulh.s", int_arm_neon_vqdmulh, 1>;
+                          "vqdmulh", "s", int_arm_neon_vqdmulh, 1>;
 defm VQDMULHsl: N3VIntSL_HS<0b1100, IIC_VMULi16D, IIC_VMULi32D,
                             IIC_VMULi16Q, IIC_VMULi32Q,
-                            "vqdmulh.s",  int_arm_neon_vqdmulh>;
+                            "vqdmulh", "s",  int_arm_neon_vqdmulh>;
 def : Pat<(v8i16 (int_arm_neon_vqdmulh (v8i16 QPR:$src1),
-                                       (v8i16 (NEONvduplane (v8i16 QPR:$src2), imm:$lane)))),
+                                       (v8i16 (NEONvduplane (v8i16 QPR:$src2),
+                                                            imm:$lane)))),
           (v8i16 (VQDMULHslv8i16 (v8i16 QPR:$src1),
                                  (v4i16 (EXTRACT_SUBREG QPR:$src2,
-                                                        (DSubReg_i16_reg imm:$lane))),
+                                                  (DSubReg_i16_reg imm:$lane))),
                                  (SubReg_i16_lane imm:$lane)))>;
 def : Pat<(v4i32 (int_arm_neon_vqdmulh (v4i32 QPR:$src1),
-                                       (v4i32 (NEONvduplane (v4i32 QPR:$src2), imm:$lane)))),
+                                       (v4i32 (NEONvduplane (v4i32 QPR:$src2),
+                                                            imm:$lane)))),
           (v4i32 (VQDMULHslv4i32 (v4i32 QPR:$src1),
                                  (v2i32 (EXTRACT_SUBREG QPR:$src2,
-                                                        (DSubReg_i32_reg imm:$lane))),
+                                                  (DSubReg_i32_reg imm:$lane))),
                                  (SubReg_i32_lane imm:$lane)))>;
 
 //   VQRDMULH : Vector Rounding Saturating Doubling Multiply Returning High Half
 defm VQRDMULH   : N3VInt_HS<1, 0, 0b1011, 0, IIC_VMULi16D, IIC_VMULi32D,
                             IIC_VMULi16Q, IIC_VMULi32Q,
-                            "vqrdmulh.s", int_arm_neon_vqrdmulh, 1>;
+                            "vqrdmulh", "s", int_arm_neon_vqrdmulh, 1>;
 defm VQRDMULHsl : N3VIntSL_HS<0b1101, IIC_VMULi16D, IIC_VMULi32D,
                               IIC_VMULi16Q, IIC_VMULi32Q,
-                              "vqrdmulh.s",  int_arm_neon_vqrdmulh>;
+                              "vqrdmulh", "s",  int_arm_neon_vqrdmulh>;
 def : Pat<(v8i16 (int_arm_neon_vqrdmulh (v8i16 QPR:$src1),
-                                        (v8i16 (NEONvduplane (v8i16 QPR:$src2), imm:$lane)))),
+                                        (v8i16 (NEONvduplane (v8i16 QPR:$src2),
+                                                             imm:$lane)))),
           (v8i16 (VQRDMULHslv8i16 (v8i16 QPR:$src1),
                                   (v4i16 (EXTRACT_SUBREG QPR:$src2,
                                                          (DSubReg_i16_reg imm:$lane))),
                                   (SubReg_i16_lane imm:$lane)))>;
 def : Pat<(v4i32 (int_arm_neon_vqrdmulh (v4i32 QPR:$src1),
-                                        (v4i32 (NEONvduplane (v4i32 QPR:$src2), imm:$lane)))),
+                                        (v4i32 (NEONvduplane (v4i32 QPR:$src2),
+                                                             imm:$lane)))),
           (v4i32 (VQRDMULHslv4i32 (v4i32 QPR:$src1),
                                   (v2i32 (EXTRACT_SUBREG QPR:$src2,
-                                                         (DSubReg_i32_reg imm:$lane))),
+                                                  (DSubReg_i32_reg imm:$lane))),
                                   (SubReg_i32_lane imm:$lane)))>;
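
The by-scalar patterns above all share one shape; as a sketch of the lane
arithmetic (assuming the helpers compute lane/4 and lane%4 for i16, which
matches their use here):

    // For lane 5 of a v8i16 in a Q register (two D regs of four i16
    // lanes each): DSubReg_i16_reg(5) picks the second D subregister
    // (5 / 4 = 1) and SubReg_i16_lane(5) picks lane 1 within it
    // (5 % 4), so the by-scalar instruction reads exactly the element
    // that NEONvduplane duplicated.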
 
 //   VMULL    : Vector Multiply Long (integer and polynomial) (Q = D * D)
-defm VMULLs   : N3VLInt_QHS<0,1,0b1100,0, IIC_VMULi16D, "vmull.s", int_arm_neon_vmulls, 1>;
-defm VMULLu   : N3VLInt_QHS<1,1,0b1100,0, IIC_VMULi16D, "vmull.u", int_arm_neon_vmullu, 1>;
-def  VMULLp   : N3VLInt<0, 1, 0b00, 0b1110, 0, IIC_VMULi16D, "vmull.p8", v8i16, v8i8,
-                        int_arm_neon_vmullp, 1>;
-defm VMULLsls : N3VLIntSL_HS<0, 0b1010, IIC_VMULi16D, "vmull.s", int_arm_neon_vmulls>;
-defm VMULLslu : N3VLIntSL_HS<1, 0b1010, IIC_VMULi16D, "vmull.u", int_arm_neon_vmullu>;
+defm VMULLs   : N3VLInt_QHS<0,1,0b1100,0, IIC_VMULi16D, "vmull", "s",
+                            int_arm_neon_vmulls, 1>;
+defm VMULLu   : N3VLInt_QHS<1,1,0b1100,0, IIC_VMULi16D, "vmull", "u",
+                            int_arm_neon_vmullu, 1>;
+def  VMULLp   : N3VLInt<0, 1, 0b00, 0b1110, 0, IIC_VMULi16D, "vmull", "p8",
+                        v8i16, v8i8, int_arm_neon_vmullp, 1>;
+defm VMULLsls : N3VLIntSL_HS<0, 0b1010, IIC_VMULi16D, "vmull", "s",
+                             int_arm_neon_vmulls>;
+defm VMULLslu : N3VLIntSL_HS<1, 0b1010, IIC_VMULi16D, "vmull", "u",
+                             int_arm_neon_vmullu>;
 
 //   VQDMULL  : Vector Saturating Doubling Multiply Long (Q = D * D)
-defm VQDMULL  : N3VLInt_HS<0,1,0b1101,0, IIC_VMULi16D, "vqdmull.s", int_arm_neon_vqdmull, 1>;
-defm VQDMULLsl: N3VLIntSL_HS<0, 0b1011, IIC_VMULi16D, "vqdmull.s", int_arm_neon_vqdmull>;
+defm VQDMULL  : N3VLInt_HS<0,1,0b1101,0, IIC_VMULi16D, "vqdmull", "s",
+                           int_arm_neon_vqdmull, 1>;
+defm VQDMULLsl: N3VLIntSL_HS<0, 0b1011, IIC_VMULi16D, "vqdmull", "s",
+                             int_arm_neon_vqdmull>;
 
 // Vector Multiply-Accumulate and Multiply-Subtract Operations.
 
 //   VMLA     : Vector Multiply Accumulate (integer and floating-point)
 defm VMLA     : N3VMulOp_QHS<0, 0, 0b1001, 0, IIC_VMACi16D, IIC_VMACi32D,
-                             IIC_VMACi16Q, IIC_VMACi32Q, "vmla.i", add>;
-def  VMLAfd   : N3VDMulOp<0, 0, 0b00, 0b1101, 1, IIC_VMACD, "vmla.f32", v2f32, fmul, fadd>;
-def  VMLAfq   : N3VQMulOp<0, 0, 0b00, 0b1101, 1, IIC_VMACQ, "vmla.f32", v4f32, fmul, fadd>;
+                             IIC_VMACi16Q, IIC_VMACi32Q, "vmla", "i", add>;
+def  VMLAfd   : N3VDMulOp<0, 0, 0b00, 0b1101, 1, IIC_VMACD, "vmla", "f32",
+                          v2f32, fmul, fadd>;
+def  VMLAfq   : N3VQMulOp<0, 0, 0b00, 0b1101, 1, IIC_VMACQ, "vmla", "f32",
+                          v4f32, fmul, fadd>;
 defm VMLAsl   : N3VMulOpSL_HS<0b0000, IIC_VMACi16D, IIC_VMACi32D,
-                              IIC_VMACi16Q, IIC_VMACi32Q, "vmla.i", add>;
-def  VMLAslfd : N3VDMulOpSL<0b10, 0b0001, IIC_VMACD, "vmla.f32", v2f32, fmul, fadd>;
-def  VMLAslfq : N3VQMulOpSL<0b10, 0b0001, IIC_VMACQ, "vmla.f32", v4f32, v2f32, fmul, fadd>;
+                              IIC_VMACi16Q, IIC_VMACi32Q, "vmla", "i", add>;
+def  VMLAslfd : N3VDMulOpSL<0b10, 0b0001, IIC_VMACD, "vmla", "f32",
+                            v2f32, fmul, fadd>;
+def  VMLAslfq : N3VQMulOpSL<0b10, 0b0001, IIC_VMACQ, "vmla", "f32",
+                            v4f32, v2f32, fmul, fadd>;
 
 def : Pat<(v8i16 (add (v8i16 QPR:$src1),
                       (mul (v8i16 QPR:$src2),
@@ -1502,7 +1962,7 @@ def : Pat<(v4i32 (add (v4i32 QPR:$src1),
           (v4i32 (VMLAslv4i32 (v4i32 QPR:$src1),
                               (v4i32 QPR:$src2),
                               (v2i32 (EXTRACT_SUBREG QPR:$src3,
-                                                     (DSubReg_i32_reg imm:$lane))),
+                                                  (DSubReg_i32_reg imm:$lane))),
                               (SubReg_i32_lane imm:$lane)))>;
 
 def : Pat<(v4f32 (fadd (v4f32 QPR:$src1),
@@ -1515,25 +1975,30 @@ def : Pat<(v4f32 (fadd (v4f32 QPR:$src1),
                            (SubReg_i32_lane imm:$lane)))>;
 
 //   VMLAL    : Vector Multiply Accumulate Long (Q += D * D)
-defm VMLALs   : N3VLInt3_QHS<0,1,0b1000,0, "vmlal.s", int_arm_neon_vmlals>;
-defm VMLALu   : N3VLInt3_QHS<1,1,0b1000,0, "vmlal.u", int_arm_neon_vmlalu>;
+defm VMLALs   : N3VLInt3_QHS<0,1,0b1000,0, "vmlal", "s", int_arm_neon_vmlals>;
+defm VMLALu   : N3VLInt3_QHS<1,1,0b1000,0, "vmlal", "u", int_arm_neon_vmlalu>;
 
-defm VMLALsls : N3VLInt3SL_HS<0, 0b0010, "vmlal.s", int_arm_neon_vmlals>;
-defm VMLALslu : N3VLInt3SL_HS<1, 0b0010, "vmlal.u", int_arm_neon_vmlalu>;
+defm VMLALsls : N3VLInt3SL_HS<0, 0b0010, "vmlal", "s", int_arm_neon_vmlals>;
+defm VMLALslu : N3VLInt3SL_HS<1, 0b0010, "vmlal", "u", int_arm_neon_vmlalu>;
 
 //   VQDMLAL  : Vector Saturating Doubling Multiply Accumulate Long (Q += D * D)
-defm VQDMLAL  : N3VLInt3_HS<0, 1, 0b1001, 0, "vqdmlal.s", int_arm_neon_vqdmlal>;
-defm VQDMLALsl: N3VLInt3SL_HS<0, 0b0011, "vqdmlal.s", int_arm_neon_vqdmlal>;
+defm VQDMLAL  : N3VLInt3_HS<0, 1, 0b1001, 0, "vqdmlal", "s",
+                            int_arm_neon_vqdmlal>;
+defm VQDMLALsl: N3VLInt3SL_HS<0, 0b0011, "vqdmlal", "s", int_arm_neon_vqdmlal>;
 
 //   VMLS     : Vector Multiply Subtract (integer and floating-point)
-defm VMLS     : N3VMulOp_QHS<0, 0, 0b1001, 0, IIC_VMACi16D, IIC_VMACi32D,
-                             IIC_VMACi16Q, IIC_VMACi32Q, "vmls.i", sub>;
-def  VMLSfd   : N3VDMulOp<0, 0, 0b10, 0b1101, 1, IIC_VMACD, "vmls.f32", v2f32, fmul, fsub>;
-def  VMLSfq   : N3VQMulOp<0, 0, 0b10, 0b1101, 1, IIC_VMACQ, "vmls.f32", v4f32, fmul, fsub>;
+defm VMLS     : N3VMulOp_QHS<1, 0, 0b1001, 0, IIC_VMACi16D, IIC_VMACi32D,
+                             IIC_VMACi16Q, IIC_VMACi32Q, "vmls", "i", sub>;
+def  VMLSfd   : N3VDMulOp<0, 0, 0b10, 0b1101, 1, IIC_VMACD, "vmls", "f32",
+                          v2f32, fmul, fsub>;
+def  VMLSfq   : N3VQMulOp<0, 0, 0b10, 0b1101, 1, IIC_VMACQ, "vmls", "f32",
+                          v4f32, fmul, fsub>;
 defm VMLSsl   : N3VMulOpSL_HS<0b0100, IIC_VMACi16D, IIC_VMACi32D,
-                              IIC_VMACi16Q, IIC_VMACi32Q, "vmls.i", sub>;
-def  VMLSslfd : N3VDMulOpSL<0b10, 0b0101, IIC_VMACD, "vmls.f32", v2f32, fmul, fsub>;
-def  VMLSslfq : N3VQMulOpSL<0b10, 0b0101, IIC_VMACQ, "vmls.f32", v4f32, v2f32, fmul, fsub>;
+                              IIC_VMACi16Q, IIC_VMACi32Q, "vmls", "i", sub>;
+def  VMLSslfd : N3VDMulOpSL<0b10, 0b0101, IIC_VMACD, "vmls", "f32",
+                            v2f32, fmul, fsub>;
+def  VMLSslfq : N3VQMulOpSL<0b10, 0b0101, IIC_VMACQ, "vmls", "f32",
+                            v4f32, v2f32, fmul, fsub>;
 
 def : Pat<(v8i16 (sub (v8i16 QPR:$src1),
                       (mul (v8i16 QPR:$src2),
@@ -1546,7 +2011,7 @@ def : Pat<(v8i16 (sub (v8i16 QPR:$src1),
 
 def : Pat<(v4i32 (sub (v4i32 QPR:$src1),
                       (mul (v4i32 QPR:$src2),
-                           (v4i32 (NEONvduplane (v4i32 QPR:$src3), imm:$lane))))),
+                         (v4i32 (NEONvduplane (v4i32 QPR:$src3), imm:$lane))))),
           (v4i32 (VMLSslv4i32 (v4i32 QPR:$src1),
                               (v4i32 QPR:$src2),
                               (v2i32 (EXTRACT_SUBREG QPR:$src3,
@@ -1555,7 +2020,7 @@ def : Pat<(v4i32 (sub (v4i32 QPR:$src1),
 
 def : Pat<(v4f32 (fsub (v4f32 QPR:$src1),
                        (fmul (v4f32 QPR:$src2),
-                             (v4f32 (NEONvduplane (v4f32 QPR:$src3), imm:$lane))))),
+                           (v4f32 (NEONvduplane (v4f32 QPR:$src3), imm:$lane))))),
           (v4f32 (VMLSslfq (v4f32 QPR:$src1),
                            (v4f32 QPR:$src2),
                            (v2f32 (EXTRACT_SUBREG QPR:$src3,
@@ -1563,146 +2028,170 @@ def : Pat<(v4f32 (fsub (v4f32 QPR:$src1),
                            (SubReg_i32_lane imm:$lane)))>;
 
 //   VMLSL    : Vector Multiply Subtract Long (Q -= D * D)
-defm VMLSLs   : N3VLInt3_QHS<0,1,0b1010,0, "vmlsl.s", int_arm_neon_vmlsls>;
-defm VMLSLu   : N3VLInt3_QHS<1,1,0b1010,0, "vmlsl.u", int_arm_neon_vmlslu>;
+defm VMLSLs   : N3VLInt3_QHS<0,1,0b1010,0, "vmlsl", "s", int_arm_neon_vmlsls>;
+defm VMLSLu   : N3VLInt3_QHS<1,1,0b1010,0, "vmlsl", "u", int_arm_neon_vmlslu>;
 
-defm VMLSLsls : N3VLInt3SL_HS<0, 0b0110, "vmlsl.s", int_arm_neon_vmlsls>;
-defm VMLSLslu : N3VLInt3SL_HS<1, 0b0110, "vmlsl.u", int_arm_neon_vmlslu>;
+defm VMLSLsls : N3VLInt3SL_HS<0, 0b0110, "vmlsl", "s", int_arm_neon_vmlsls>;
+defm VMLSLslu : N3VLInt3SL_HS<1, 0b0110, "vmlsl", "u", int_arm_neon_vmlslu>;
 
 //   VQDMLSL  : Vector Saturating Doubling Multiply Subtract Long (Q -= D * D)
-defm VQDMLSL  : N3VLInt3_HS<0, 1, 0b1011, 0, "vqdmlsl.s", int_arm_neon_vqdmlsl>;
-defm VQDMLSLsl: N3VLInt3SL_HS<0, 0b111, "vqdmlsl.s", int_arm_neon_vqdmlsl>;
+defm VQDMLSL  : N3VLInt3_HS<0, 1, 0b1011, 0, "vqdmlsl", "s",
+                            int_arm_neon_vqdmlsl>;
+defm VQDMLSLsl: N3VLInt3SL_HS<0, 0b111, "vqdmlsl", "s", int_arm_neon_vqdmlsl>;
 
 // Vector Subtract Operations.
 
 //   VSUB     : Vector Subtract (integer and floating-point)
-defm VSUB     : N3V_QHSD<1, 0, 0b1000, 0, IIC_VSUBiD, IIC_VSUBiQ, "vsub.i", sub, 0>;
-def  VSUBfd   : N3VD<0, 0, 0b10, 0b1101, 0, IIC_VBIND, "vsub.f32", v2f32, v2f32, fsub, 0>;
-def  VSUBfq   : N3VQ<0, 0, 0b10, 0b1101, 0, IIC_VBINQ, "vsub.f32", v4f32, v4f32, fsub, 0>;
+defm VSUB     : N3V_QHSD<1, 0, 0b1000, 0, IIC_VSUBiD, IIC_VSUBiQ,
+                         "vsub", "i", sub, 0>;
+def  VSUBfd   : N3VD<0, 0, 0b10, 0b1101, 0, IIC_VBIND, "vsub", "f32",
+                     v2f32, v2f32, fsub, 0>;
+def  VSUBfq   : N3VQ<0, 0, 0b10, 0b1101, 0, IIC_VBINQ, "vsub", "f32",
+                     v4f32, v4f32, fsub, 0>;
 //   VSUBL    : Vector Subtract Long (Q = D - D)
-defm VSUBLs   : N3VLInt_QHS<0,1,0b0010,0, IIC_VSHLiD, "vsubl.s", int_arm_neon_vsubls, 1>;
-defm VSUBLu   : N3VLInt_QHS<1,1,0b0010,0, IIC_VSHLiD, "vsubl.u", int_arm_neon_vsublu, 1>;
+defm VSUBLs   : N3VLInt_QHS<0,1,0b0010,0, IIC_VSHLiD, "vsubl", "s",
+                            int_arm_neon_vsubls, 1>;
+defm VSUBLu   : N3VLInt_QHS<1,1,0b0010,0, IIC_VSHLiD, "vsubl", "u",
+                            int_arm_neon_vsublu, 1>;
 //   VSUBW    : Vector Subtract Wide (Q = Q - D)
-defm VSUBWs   : N3VWInt_QHS<0,1,0b0011,0, "vsubw.s", int_arm_neon_vsubws, 0>;
-defm VSUBWu   : N3VWInt_QHS<1,1,0b0011,0, "vsubw.u", int_arm_neon_vsubwu, 0>;
+defm VSUBWs   : N3VWInt_QHS<0,1,0b0011,0, "vsubw", "s", int_arm_neon_vsubws, 0>;
+defm VSUBWu   : N3VWInt_QHS<1,1,0b0011,0, "vsubw", "u", int_arm_neon_vsubwu, 0>;
 //   VHSUB    : Vector Halving Subtract
-defm VHSUBs   : N3VInt_QHS<0, 0, 0b0010, 0, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                           IIC_VBINi4Q, "vhsub.s", int_arm_neon_vhsubs, 0>;
-defm VHSUBu   : N3VInt_QHS<1, 0, 0b0010, 0, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                           IIC_VBINi4Q, "vhsub.u", int_arm_neon_vhsubu, 0>;
+defm VHSUBs   : N3VInt_QHS<0, 0, 0b0010, 0, IIC_VBINi4D, IIC_VBINi4D,
+                           IIC_VBINi4Q, IIC_VBINi4Q,
+                           "vhsub", "s", int_arm_neon_vhsubs, 0>;
+defm VHSUBu   : N3VInt_QHS<1, 0, 0b0010, 0, IIC_VBINi4D, IIC_VBINi4D,
+                           IIC_VBINi4Q, IIC_VBINi4Q,
+                           "vhsub", "u", int_arm_neon_vhsubu, 0>;
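
"Halving" here means the full-precision difference is shifted right one bit, so the result can never overflow the lane. One lane in plain C (illustrative name):

    #include <stdint.h>

    /* vhsub.s8: (a - b) >> 1, formed at double width so the intermediate
       difference cannot overflow; >> truncates toward negative infinity
       for the signed forms */
    static int8_t vhsub_s8_lane(int8_t a, int8_t b) {
        return (int8_t)(((int16_t)a - (int16_t)b) >> 1);
    }
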
 //   VQSUB    : Vector Saturating Subtract
-defm VQSUBs   : N3VInt_QHSD<0, 0, 0b0010, 1, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                            IIC_VBINi4Q, "vqsub.s", int_arm_neon_vqsubs, 0>;
-defm VQSUBu   : N3VInt_QHSD<1, 0, 0b0010, 1, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                            IIC_VBINi4Q, "vqsub.u", int_arm_neon_vqsubu, 0>;
+defm VQSUBs   : N3VInt_QHSD<0, 0, 0b0010, 1, IIC_VBINi4D, IIC_VBINi4D,
+                            IIC_VBINi4Q, IIC_VBINi4Q,
+                            "vqsub", "s", int_arm_neon_vqsubs, 0>;
+defm VQSUBu   : N3VInt_QHSD<1, 0, 0b0010, 1, IIC_VBINi4D, IIC_VBINi4D,
+                            IIC_VBINi4Q, IIC_VBINi4Q,
+                            "vqsub", "u", int_arm_neon_vqsubu, 0>;
 //   VSUBHN   : Vector Subtract and Narrow Returning High Half (D = Q - Q)
-defm VSUBHN   : N3VNInt_HSD<0,1,0b0110,0, "vsubhn.i", int_arm_neon_vsubhn, 0>;
+defm VSUBHN   : N3VNInt_HSD<0,1,0b0110,0, "vsubhn", "i",
+                            int_arm_neon_vsubhn, 0>;
 //   VRSUBHN  : Vector Rounding Subtract and Narrow Returning High Half (D=Q-Q)
-defm VRSUBHN  : N3VNInt_HSD<1,1,0b0110,0, "vrsubhn.i", int_arm_neon_vrsubhn, 0>;
+defm VRSUBHN  : N3VNInt_HSD<1,1,0b0110,0, "vrsubhn", "i",
+                            int_arm_neon_vrsubhn, 0>;
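
The narrow-returning-high-half pair above keeps only the top half of each difference; VRSUBHN adds a rounding constant first. A sketch for the i16 element size (plain C, illustrative names):

    #include <stdint.h>

    /* vsubhn.i16:  d[i] = (int8_t)((a[i] - b[i]) >> 8)        */
    /* vrsubhn.i16: d[i] = (int8_t)((a[i] - b[i] + 0x80) >> 8) */
    static int8_t vsubhn_i16_lane(int16_t a, int16_t b) {
        return (int8_t)(((int32_t)a - b) >> 8);
    }
    static int8_t vrsubhn_i16_lane(int16_t a, int16_t b) {
        return (int8_t)((((int32_t)a - b) + 0x80) >> 8);
    }
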
 
 // Vector Comparisons.
 
 //   VCEQ     : Vector Compare Equal
 defm VCEQ     : N3V_QHS<1, 0, 0b1000, 1, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                        IIC_VBINi4Q, "vceq.i", NEONvceq, 1>;
-def  VCEQfd   : N3VD<0,0,0b00,0b1110,0, IIC_VBIND, "vceq.f32", v2i32, v2f32, NEONvceq, 1>;
-def  VCEQfq   : N3VQ<0,0,0b00,0b1110,0, IIC_VBINQ, "vceq.f32", v4i32, v4f32, NEONvceq, 1>;
+                        IIC_VBINi4Q, "vceq", "i", NEONvceq, 1>;
+def  VCEQfd   : N3VD<0,0,0b00,0b1110,0, IIC_VBIND, "vceq", "f32", v2i32, v2f32,
+                     NEONvceq, 1>;
+def  VCEQfq   : N3VQ<0,0,0b00,0b1110,0, IIC_VBINQ, "vceq", "f32", v4i32, v4f32,
+                     NEONvceq, 1>;
 //   VCGE     : Vector Compare Greater Than or Equal
 defm VCGEs    : N3V_QHS<0, 0, 0b0011, 1, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                        IIC_VBINi4Q, "vcge.s", NEONvcge, 0>;
+                        IIC_VBINi4Q, "vcge", "s", NEONvcge, 0>;
 defm VCGEu    : N3V_QHS<1, 0, 0b0011, 1, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q, 
-                        IIC_VBINi4Q, "vcge.u", NEONvcgeu, 0>;
-def  VCGEfd   : N3VD<1,0,0b00,0b1110,0, IIC_VBIND, "vcge.f32", v2i32, v2f32, NEONvcge, 0>;
-def  VCGEfq   : N3VQ<1,0,0b00,0b1110,0, IIC_VBINQ, "vcge.f32", v4i32, v4f32, NEONvcge, 0>;
+                        IIC_VBINi4Q, "vcge", "u", NEONvcgeu, 0>;
+def  VCGEfd   : N3VD<1,0,0b00,0b1110,0, IIC_VBIND, "vcge", "f32",
+                     v2i32, v2f32, NEONvcge, 0>;
+def  VCGEfq   : N3VQ<1,0,0b00,0b1110,0, IIC_VBINQ, "vcge", "f32", v4i32, v4f32,
+                     NEONvcge, 0>;
 //   VCGT     : Vector Compare Greater Than
 defm VCGTs    : N3V_QHS<0, 0, 0b0011, 0, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q, 
-                        IIC_VBINi4Q, "vcgt.s", NEONvcgt, 0>;
+                        IIC_VBINi4Q, "vcgt", "s", NEONvcgt, 0>;
 defm VCGTu    : N3V_QHS<1, 0, 0b0011, 0, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q, 
-                        IIC_VBINi4Q, "vcgt.u", NEONvcgtu, 0>;
-def  VCGTfd   : N3VD<1,0,0b10,0b1110,0, IIC_VBIND, "vcgt.f32", v2i32, v2f32, NEONvcgt, 0>;
-def  VCGTfq   : N3VQ<1,0,0b10,0b1110,0, IIC_VBINQ, "vcgt.f32", v4i32, v4f32, NEONvcgt, 0>;
+                        IIC_VBINi4Q, "vcgt", "u", NEONvcgtu, 0>;
+def  VCGTfd   : N3VD<1,0,0b10,0b1110,0, IIC_VBIND, "vcgt", "f32", v2i32, v2f32,
+                     NEONvcgt, 0>;
+def  VCGTfq   : N3VQ<1,0,0b10,0b1110,0, IIC_VBINQ, "vcgt", "f32", v4i32, v4f32,
+                     NEONvcgt, 0>;
 //   VACGE    : Vector Absolute Compare Greater Than or Equal (aka VCAGE)
-def  VACGEd   : N3VDInt<1, 0, 0b00, 0b1110, 1, IIC_VBIND, "vacge.f32", v2i32, v2f32,
-                        int_arm_neon_vacged, 0>;
-def  VACGEq   : N3VQInt<1, 0, 0b00, 0b1110, 1, IIC_VBINQ, "vacge.f32", v4i32, v4f32,
-                        int_arm_neon_vacgeq, 0>;
+def  VACGEd   : N3VDInt<1, 0, 0b00, 0b1110, 1, IIC_VBIND, "vacge", "f32",
+                        v2i32, v2f32, int_arm_neon_vacged, 0>;
+def  VACGEq   : N3VQInt<1, 0, 0b00, 0b1110, 1, IIC_VBINQ, "vacge", "f32",
+                        v4i32, v4f32, int_arm_neon_vacgeq, 0>;
 //   VACGT    : Vector Absolute Compare Greater Than (aka VCAGT)
-def  VACGTd   : N3VDInt<1, 0, 0b10, 0b1110, 1, IIC_VBIND, "vacgt.f32", v2i32, v2f32,
-                        int_arm_neon_vacgtd, 0>;
-def  VACGTq   : N3VQInt<1, 0, 0b10, 0b1110, 1, IIC_VBINQ, "vacgt.f32", v4i32, v4f32,
-                        int_arm_neon_vacgtq, 0>;
+def  VACGTd   : N3VDInt<1, 0, 0b10, 0b1110, 1, IIC_VBIND, "vacgt", "f32",
+                        v2i32, v2f32, int_arm_neon_vacgtd, 0>;
+def  VACGTq   : N3VQInt<1, 0, 0b10, 0b1110, 1, IIC_VBINQ, "vacgt", "f32",
+                        v4i32, v4f32, int_arm_neon_vacgtq, 0>;
 //   VTST     : Vector Test Bits
 defm VTST     : N3V_QHS<0, 0, 0b1000, 1, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q, 
-                        IIC_VBINi4Q, "vtst.i", NEONvtst, 1>;
+                        IIC_VBINi4Q, "vtst", "i", NEONvtst, 1>;
 
 // Vector Bitwise Operations.
 
 //   VAND     : Vector Bitwise AND
-def  VANDd    : N3VD<0, 0, 0b00, 0b0001, 1, IIC_VBINiD, "vand", v2i32, v2i32, and, 1>;
-def  VANDq    : N3VQ<0, 0, 0b00, 0b0001, 1, IIC_VBINiQ, "vand", v4i32, v4i32, and, 1>;
+def  VANDd    : N3VDX<0, 0, 0b00, 0b0001, 1, IIC_VBINiD, "vand",
+                      v2i32, v2i32, and, 1>;
+def  VANDq    : N3VQX<0, 0, 0b00, 0b0001, 1, IIC_VBINiQ, "vand",
+                      v4i32, v4i32, and, 1>;
 
 //   VEOR     : Vector Bitwise Exclusive OR
-def  VEORd    : N3VD<1, 0, 0b00, 0b0001, 1, IIC_VBINiD, "veor", v2i32, v2i32, xor, 1>;
-def  VEORq    : N3VQ<1, 0, 0b00, 0b0001, 1, IIC_VBINiQ, "veor", v4i32, v4i32, xor, 1>;
+def  VEORd    : N3VDX<1, 0, 0b00, 0b0001, 1, IIC_VBINiD, "veor",
+                      v2i32, v2i32, xor, 1>;
+def  VEORq    : N3VQX<1, 0, 0b00, 0b0001, 1, IIC_VBINiQ, "veor",
+                      v4i32, v4i32, xor, 1>;
 
 //   VORR     : Vector Bitwise OR
-def  VORRd    : N3VD<0, 0, 0b10, 0b0001, 1, IIC_VBINiD, "vorr", v2i32, v2i32, or, 1>;
-def  VORRq    : N3VQ<0, 0, 0b10, 0b0001, 1, IIC_VBINiQ, "vorr", v4i32, v4i32, or, 1>;
+def  VORRd    : N3VDX<0, 0, 0b10, 0b0001, 1, IIC_VBINiD, "vorr",
+                      v2i32, v2i32, or, 1>;
+def  VORRq    : N3VQX<0, 0, 0b10, 0b0001, 1, IIC_VBINiQ, "vorr",
+                      v4i32, v4i32, or, 1>;
 
 //   VBIC     : Vector Bitwise Bit Clear (AND NOT)
-def  VBICd    : N3V<0, 0, 0b01, 0b0001, 0, 1, (outs DPR:$dst),
+def  VBICd    : N3VX<0, 0, 0b01, 0b0001, 0, 1, (outs DPR:$dst),
                     (ins DPR:$src1, DPR:$src2), IIC_VBINiD,
-                    "vbic\t$dst, $src1, $src2", "",
+                    "vbic", "$dst, $src1, $src2", "",
                     [(set DPR:$dst, (v2i32 (and DPR:$src1,
                                                 (vnot_conv DPR:$src2))))]>;
-def  VBICq    : N3V<0, 0, 0b01, 0b0001, 1, 1, (outs QPR:$dst),
+def  VBICq    : N3VX<0, 0, 0b01, 0b0001, 1, 1, (outs QPR:$dst),
                     (ins QPR:$src1, QPR:$src2), IIC_VBINiQ,
-                    "vbic\t$dst, $src1, $src2", "",
+                    "vbic", "$dst, $src1, $src2", "",
                     [(set QPR:$dst, (v4i32 (and QPR:$src1,
                                                 (vnot_conv QPR:$src2))))]>;
 
 //   VORN     : Vector Bitwise OR NOT
-def  VORNd    : N3V<0, 0, 0b11, 0b0001, 0, 1, (outs DPR:$dst),
+def  VORNd    : N3VX<0, 0, 0b11, 0b0001, 0, 1, (outs DPR:$dst),
                     (ins DPR:$src1, DPR:$src2), IIC_VBINiD,
-                    "vorn\t$dst, $src1, $src2", "",
+                    "vorn", "$dst, $src1, $src2", "",
                     [(set DPR:$dst, (v2i32 (or DPR:$src1,
                                                (vnot_conv DPR:$src2))))]>;
-def  VORNq    : N3V<0, 0, 0b11, 0b0001, 1, 1, (outs QPR:$dst),
+def  VORNq    : N3VX<0, 0, 0b11, 0b0001, 1, 1, (outs QPR:$dst),
                     (ins QPR:$src1, QPR:$src2), IIC_VBINiQ,
-                    "vorn\t$dst, $src1, $src2", "",
+                    "vorn", "$dst, $src1, $src2", "",
                     [(set QPR:$dst, (v4i32 (or QPR:$src1,
                                                (vnot_conv QPR:$src2))))]>;
 
 //   VMVN     : Vector Bitwise NOT
-def  VMVNd    : N2V<0b11, 0b11, 0b00, 0b00, 0b01011, 0, 0,
+def  VMVNd    : N2VX<0b11, 0b11, 0b00, 0b00, 0b01011, 0, 0,
                     (outs DPR:$dst), (ins DPR:$src), IIC_VSHLiD,
-                    "vmvn\t$dst, $src", "",
+                    "vmvn", "$dst, $src", "",
                     [(set DPR:$dst, (v2i32 (vnot DPR:$src)))]>;
-def  VMVNq    : N2V<0b11, 0b11, 0b00, 0b00, 0b01011, 1, 0,
+def  VMVNq    : N2VX<0b11, 0b11, 0b00, 0b00, 0b01011, 1, 0,
                     (outs QPR:$dst), (ins QPR:$src), IIC_VSHLiD,
-                    "vmvn\t$dst, $src", "",
+                    "vmvn", "$dst, $src", "",
                     [(set QPR:$dst, (v4i32 (vnot QPR:$src)))]>;
 def : Pat<(v2i32 (vnot_conv DPR:$src)), (VMVNd DPR:$src)>;
 def : Pat<(v4i32 (vnot_conv QPR:$src)), (VMVNq QPR:$src)>;
 
 //   VBSL     : Vector Bitwise Select
-def  VBSLd    : N3V<1, 0, 0b01, 0b0001, 0, 1, (outs DPR:$dst),
+def  VBSLd    : N3VX<1, 0, 0b01, 0b0001, 0, 1, (outs DPR:$dst),
                     (ins DPR:$src1, DPR:$src2, DPR:$src3), IIC_VCNTiD,
-                    "vbsl\t$dst, $src2, $src3", "$src1 = $dst",
+                    "vbsl", "$dst, $src2, $src3", "$src1 = $dst",
                     [(set DPR:$dst,
                       (v2i32 (or (and DPR:$src2, DPR:$src1),
                                  (and DPR:$src3, (vnot_conv DPR:$src1)))))]>;
-def  VBSLq    : N3V<1, 0, 0b01, 0b0001, 1, 1, (outs QPR:$dst),
+def  VBSLq    : N3VX<1, 0, 0b01, 0b0001, 1, 1, (outs QPR:$dst),
                     (ins QPR:$src1, QPR:$src2, QPR:$src3), IIC_VCNTiQ,
-                    "vbsl\t$dst, $src2, $src3", "$src1 = $dst",
+                    "vbsl", "$dst, $src2, $src3", "$src1 = $dst",
                     [(set QPR:$dst,
                       (v4i32 (or (and QPR:$src2, QPR:$src1),
                                  (and QPR:$src3, (vnot_conv QPR:$src1)))))]>;
 
 //   VBIF     : Vector Bitwise Insert if False
-//              like VBSL but with: "vbif\t$dst, $src3, $src1", "$src2 = $dst",
+//              like VBSL but with: "vbif $dst, $src3, $src1", "$src2 = $dst",
 //   VBIT     : Vector Bitwise Insert if True
-//              like VBSL but with: "vbit\t$dst, $src2, $src1", "$src3 = $dst",
+//              like VBSL but with: "vbit $dst, $src2, $src1", "$src3 = $dst",
 // These are not yet implemented.  The TwoAddress pass will not go looking
 // for equivalent operations with different register constraints; it just
 // inserts copies.
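
All three of vbsl/vbif/vbit compute the same bitwise select and differ only in which operand is tied to the destination, which is why VBIF/VBIT would only pay off with TwoAddress-aware selection. The select itself, as in the VBSLd/VBSLq patterns above (plain C, illustrative name):

    #include <stdint.h>

    /* dst = (t & mask) | (f & ~mask), bit by bit */
    static uint64_t bitwise_select(uint64_t mask, uint64_t t, uint64_t f) {
        return (t & mask) | (f & ~mask);
    }
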
@@ -1710,296 +2199,270 @@ def  VBSLq    : N3V<1, 0, 0b01, 0b0001, 1, 1, (outs QPR:$dst),
 // Vector Absolute Differences.
 
 //   VABD     : Vector Absolute Difference
-defm VABDs    : N3VInt_QHS<0, 0, 0b0111, 0, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                           IIC_VBINi4Q, "vabd.s", int_arm_neon_vabds, 0>;
-defm VABDu    : N3VInt_QHS<1, 0, 0b0111, 0, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                           IIC_VBINi4Q, "vabd.u", int_arm_neon_vabdu, 0>;
-def  VABDfd   : N3VDInt<1, 0, 0b10, 0b1101, 0, IIC_VBIND, "vabd.f32", v2f32, v2f32,
-                        int_arm_neon_vabds, 0>;
-def  VABDfq   : N3VQInt<1, 0, 0b10, 0b1101, 0, IIC_VBINQ, "vabd.f32", v4f32, v4f32,
-                        int_arm_neon_vabds, 0>;
+defm VABDs    : N3VInt_QHS<0, 0, 0b0111, 0, IIC_VBINi4D, IIC_VBINi4D,
+                           IIC_VBINi4Q, IIC_VBINi4Q,
+                           "vabd", "s", int_arm_neon_vabds, 0>;
+defm VABDu    : N3VInt_QHS<1, 0, 0b0111, 0, IIC_VBINi4D, IIC_VBINi4D,
+                           IIC_VBINi4Q, IIC_VBINi4Q,
+                           "vabd", "u", int_arm_neon_vabdu, 0>;
+def  VABDfd   : N3VDInt<1, 0, 0b10, 0b1101, 0, IIC_VBIND,
+                        "vabd", "f32", v2f32, v2f32, int_arm_neon_vabds, 0>;
+def  VABDfq   : N3VQInt<1, 0, 0b10, 0b1101, 0, IIC_VBINQ,
+                        "vabd", "f32", v4f32, v4f32, int_arm_neon_vabds, 0>;
 
 //   VABDL    : Vector Absolute Difference Long (Q = | D - D |)
-defm VABDLs   : N3VLInt_QHS<0,1,0b0111,0, IIC_VBINi4Q, "vabdl.s", int_arm_neon_vabdls, 0>;
-defm VABDLu   : N3VLInt_QHS<1,1,0b0111,0, IIC_VBINi4Q, "vabdl.u", int_arm_neon_vabdlu, 0>;
+defm VABDLs   : N3VLInt_QHS<0,1,0b0111,0, IIC_VBINi4Q,
+                            "vabdl", "s", int_arm_neon_vabdls, 0>;
+defm VABDLu   : N3VLInt_QHS<1,1,0b0111,0, IIC_VBINi4Q,
+                             "vabdl", "u", int_arm_neon_vabdlu, 0>;
 
 //   VABA     : Vector Absolute Difference and Accumulate
-defm VABAs    : N3VInt3_QHS<0,1,0b0101,0, "vaba.s", int_arm_neon_vabas>;
-defm VABAu    : N3VInt3_QHS<1,1,0b0101,0, "vaba.u", int_arm_neon_vabau>;
+defm VABAs    : N3VInt3_QHS<0,0,0b0111,1, "vaba", "s", int_arm_neon_vabas>;
+defm VABAu    : N3VInt3_QHS<1,0,0b0111,1, "vaba", "u", int_arm_neon_vabau>;
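
VABA folds an absolute difference into an accumulator per lane. A one-lane C sketch (illustrative name):

    #include <stdint.h>

    /* vaba.s8: acc[i] += |a[i] - b[i]|; the difference is formed at
       double width, the accumulation wraps (no saturation) */
    static int8_t vaba_s8_lane(int8_t acc, int8_t a, int8_t b) {
        int16_t d = (int16_t)a - (int16_t)b;
        return (int8_t)(acc + (d < 0 ? -d : d));
    }
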
 
 //   VABAL    : Vector Absolute Difference and Accumulate Long (Q += | D - D |)
-defm VABALs   : N3VLInt3_QHS<0,1,0b0101,0, "vabal.s", int_arm_neon_vabals>;
-defm VABALu   : N3VLInt3_QHS<1,1,0b0101,0, "vabal.u", int_arm_neon_vabalu>;
+defm VABALs   : N3VLInt3_QHS<0,1,0b0101,0, "vabal", "s", int_arm_neon_vabals>;
+defm VABALu   : N3VLInt3_QHS<1,1,0b0101,0, "vabal", "u", int_arm_neon_vabalu>;
 
 // Vector Maximum and Minimum.
 
 //   VMAX     : Vector Maximum
 defm VMAXs    : N3VInt_QHS<0, 0, 0b0110, 0, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                           IIC_VBINi4Q, "vmax.s", int_arm_neon_vmaxs, 1>;
+                           IIC_VBINi4Q, "vmax", "s", int_arm_neon_vmaxs, 1>;
 defm VMAXu    : N3VInt_QHS<1, 0, 0b0110, 0, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                           IIC_VBINi4Q, "vmax.u", int_arm_neon_vmaxu, 1>;
-def  VMAXfd   : N3VDInt<0, 0, 0b00, 0b1111, 0, IIC_VBIND, "vmax.f32", v2f32, v2f32,
-                        int_arm_neon_vmaxs, 1>;
-def  VMAXfq   : N3VQInt<0, 0, 0b00, 0b1111, 0, IIC_VBINQ, "vmax.f32", v4f32, v4f32,
-                        int_arm_neon_vmaxs, 1>;
+                           IIC_VBINi4Q, "vmax", "u", int_arm_neon_vmaxu, 1>;
+def  VMAXfd   : N3VDInt<0, 0, 0b00, 0b1111, 0, IIC_VBIND, "vmax", "f32",
+                        v2f32, v2f32, int_arm_neon_vmaxs, 1>;
+def  VMAXfq   : N3VQInt<0, 0, 0b00, 0b1111, 0, IIC_VBINQ, "vmax", "f32",
+                        v4f32, v4f32, int_arm_neon_vmaxs, 1>;
 
 //   VMIN     : Vector Minimum
 defm VMINs    : N3VInt_QHS<0, 0, 0b0110, 1, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                           IIC_VBINi4Q, "vmin.s", int_arm_neon_vmins, 1>;
+                           IIC_VBINi4Q, "vmin", "s", int_arm_neon_vmins, 1>;
 defm VMINu    : N3VInt_QHS<1, 0, 0b0110, 1, IIC_VBINi4D, IIC_VBINi4D, IIC_VBINi4Q,
-                           IIC_VBINi4Q, "vmin.u", int_arm_neon_vminu, 1>;
-def  VMINfd   : N3VDInt<0, 0, 0b10, 0b1111, 0, IIC_VBIND, "vmin.f32", v2f32, v2f32,
-                        int_arm_neon_vmins, 1>;
-def  VMINfq   : N3VQInt<0, 0, 0b10, 0b1111, 0, IIC_VBINQ, "vmin.f32", v4f32, v4f32,
-                        int_arm_neon_vmins, 1>;
+                           IIC_VBINi4Q, "vmin", "u", int_arm_neon_vminu, 1>;
+def  VMINfd   : N3VDInt<0, 0, 0b10, 0b1111, 0, IIC_VBIND, "vmin", "f32",
+                        v2f32, v2f32, int_arm_neon_vmins, 1>;
+def  VMINfq   : N3VQInt<0, 0, 0b10, 0b1111, 0, IIC_VBINQ, "vmin", "f32",
+                        v4f32, v4f32, int_arm_neon_vmins, 1>;
 
 // Vector Pairwise Operations.
 
 //   VPADD    : Vector Pairwise Add
-def  VPADDi8  : N3VDInt<0, 0, 0b00, 0b1011, 1, IIC_VBINiD, "vpadd.i8", v8i8, v8i8,
-                        int_arm_neon_vpadd, 0>;
-def  VPADDi16 : N3VDInt<0, 0, 0b01, 0b1011, 1, IIC_VBINiD, "vpadd.i16", v4i16, v4i16,
-                        int_arm_neon_vpadd, 0>;
-def  VPADDi32 : N3VDInt<0, 0, 0b10, 0b1011, 1, IIC_VBINiD, "vpadd.i32", v2i32, v2i32,
-                        int_arm_neon_vpadd, 0>;
-def  VPADDf   : N3VDInt<1, 0, 0b00, 0b1101, 0, IIC_VBIND, "vpadd.f32", v2f32, v2f32,
-                        int_arm_neon_vpadd, 0>;
+def  VPADDi8  : N3VDInt<0, 0, 0b00, 0b1011, 1, IIC_VBINiD, "vpadd", "i8",
+                        v8i8, v8i8, int_arm_neon_vpadd, 0>;
+def  VPADDi16 : N3VDInt<0, 0, 0b01, 0b1011, 1, IIC_VBINiD, "vpadd", "i16",
+                        v4i16, v4i16, int_arm_neon_vpadd, 0>;
+def  VPADDi32 : N3VDInt<0, 0, 0b10, 0b1011, 1, IIC_VBINiD, "vpadd", "i32",
+                        v2i32, v2i32, int_arm_neon_vpadd, 0>;
+def  VPADDf   : N3VDInt<1, 0, 0b00, 0b1101, 0, IIC_VBIND, "vpadd", "f32",
+                        v2f32, v2f32, int_arm_neon_vpadd, 0>;
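
Pairwise add sums adjacent lanes of each source and concatenates the two half-size results, which is why every VPADD form is D-register only. A 4-lane sketch in C (illustrative name):

    #include <stdint.h>

    /* vpadd.i16 d0, d1, d2: adjacent pairs of each source,
       results placed side by side in the destination */
    static void vpadd_i16(int16_t dst[4], const int16_t a[4],
                          const int16_t b[4]) {
        dst[0] = (int16_t)(a[0] + a[1]);
        dst[1] = (int16_t)(a[2] + a[3]);
        dst[2] = (int16_t)(b[0] + b[1]);
        dst[3] = (int16_t)(b[2] + b[3]);
    }
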
 
 //   VPADDL   : Vector Pairwise Add Long
-defm VPADDLs  : N2VPLInt_QHS<0b11, 0b11, 0b00, 0b00100, 0, "vpaddl.s",
+defm VPADDLs  : N2VPLInt_QHS<0b11, 0b11, 0b00, 0b00100, 0, "vpaddl", "s",
                              int_arm_neon_vpaddls>;
-defm VPADDLu  : N2VPLInt_QHS<0b11, 0b11, 0b00, 0b00101, 0, "vpaddl.u",
+defm VPADDLu  : N2VPLInt_QHS<0b11, 0b11, 0b00, 0b00101, 0, "vpaddl", "u",
                              int_arm_neon_vpaddlu>;
 
 //   VPADAL   : Vector Pairwise Add and Accumulate Long
-defm VPADALs  : N2VPLInt2_QHS<0b11, 0b11, 0b00, 0b00100, 0, "vpadal.s",
+defm VPADALs  : N2VPLInt2_QHS<0b11, 0b11, 0b00, 0b01100, 0, "vpadal", "s",
                               int_arm_neon_vpadals>;
-defm VPADALu  : N2VPLInt2_QHS<0b11, 0b11, 0b00, 0b00101, 0, "vpadal.u",
+defm VPADALu  : N2VPLInt2_QHS<0b11, 0b11, 0b00, 0b01101, 0, "vpadal", "u",
                               int_arm_neon_vpadalu>;
 
 //   VPMAX    : Vector Pairwise Maximum
-def  VPMAXs8  : N3VDInt<0, 0, 0b00, 0b1010, 0, IIC_VBINi4D, "vpmax.s8", v8i8, v8i8,
-                        int_arm_neon_vpmaxs, 0>;
-def  VPMAXs16 : N3VDInt<0, 0, 0b01, 0b1010, 0, IIC_VBINi4D, "vpmax.s16", v4i16, v4i16,
-                        int_arm_neon_vpmaxs, 0>;
-def  VPMAXs32 : N3VDInt<0, 0, 0b10, 0b1010, 0, IIC_VBINi4D, "vpmax.s32", v2i32, v2i32,
-                        int_arm_neon_vpmaxs, 0>;
-def  VPMAXu8  : N3VDInt<1, 0, 0b00, 0b1010, 0, IIC_VBINi4D, "vpmax.u8", v8i8, v8i8,
-                        int_arm_neon_vpmaxu, 0>;
-def  VPMAXu16 : N3VDInt<1, 0, 0b01, 0b1010, 0, IIC_VBINi4D, "vpmax.u16", v4i16, v4i16,
-                        int_arm_neon_vpmaxu, 0>;
-def  VPMAXu32 : N3VDInt<1, 0, 0b10, 0b1010, 0, IIC_VBINi4D, "vpmax.u32", v2i32, v2i32,
-                        int_arm_neon_vpmaxu, 0>;
-def  VPMAXf   : N3VDInt<1, 0, 0b00, 0b1111, 0, IIC_VBINi4D, "vpmax.f32", v2f32, v2f32,
-                        int_arm_neon_vpmaxs, 0>;
+def  VPMAXs8  : N3VDInt<0, 0, 0b00, 0b1010, 0, IIC_VBINi4D, "vpmax", "s8",
+                        v8i8, v8i8, int_arm_neon_vpmaxs, 0>;
+def  VPMAXs16 : N3VDInt<0, 0, 0b01, 0b1010, 0, IIC_VBINi4D, "vpmax", "s16",
+                        v4i16, v4i16, int_arm_neon_vpmaxs, 0>;
+def  VPMAXs32 : N3VDInt<0, 0, 0b10, 0b1010, 0, IIC_VBINi4D, "vpmax", "s32",
+                        v2i32, v2i32, int_arm_neon_vpmaxs, 0>;
+def  VPMAXu8  : N3VDInt<1, 0, 0b00, 0b1010, 0, IIC_VBINi4D, "vpmax", "u8",
+                        v8i8, v8i8, int_arm_neon_vpmaxu, 0>;
+def  VPMAXu16 : N3VDInt<1, 0, 0b01, 0b1010, 0, IIC_VBINi4D, "vpmax", "u16",
+                        v4i16, v4i16, int_arm_neon_vpmaxu, 0>;
+def  VPMAXu32 : N3VDInt<1, 0, 0b10, 0b1010, 0, IIC_VBINi4D, "vpmax", "u32",
+                        v2i32, v2i32, int_arm_neon_vpmaxu, 0>;
+def  VPMAXf   : N3VDInt<1, 0, 0b00, 0b1111, 0, IIC_VBINi4D, "vpmax", "f32",
+                        v2f32, v2f32, int_arm_neon_vpmaxs, 0>;
 
 //   VPMIN    : Vector Pairwise Minimum
-def  VPMINs8  : N3VDInt<0, 0, 0b00, 0b1010, 1, IIC_VBINi4D, "vpmin.s8", v8i8, v8i8,
-                        int_arm_neon_vpmins, 0>;
-def  VPMINs16 : N3VDInt<0, 0, 0b01, 0b1010, 1, IIC_VBINi4D, "vpmin.s16", v4i16, v4i16,
-                        int_arm_neon_vpmins, 0>;
-def  VPMINs32 : N3VDInt<0, 0, 0b10, 0b1010, 1, IIC_VBINi4D, "vpmin.s32", v2i32, v2i32,
-                        int_arm_neon_vpmins, 0>;
-def  VPMINu8  : N3VDInt<1, 0, 0b00, 0b1010, 1, IIC_VBINi4D, "vpmin.u8", v8i8, v8i8,
-                        int_arm_neon_vpminu, 0>;
-def  VPMINu16 : N3VDInt<1, 0, 0b01, 0b1010, 1, IIC_VBINi4D, "vpmin.u16", v4i16, v4i16,
-                        int_arm_neon_vpminu, 0>;
-def  VPMINu32 : N3VDInt<1, 0, 0b10, 0b1010, 1, IIC_VBINi4D, "vpmin.u32", v2i32, v2i32,
-                        int_arm_neon_vpminu, 0>;
-def  VPMINf   : N3VDInt<1, 0, 0b10, 0b1111, 0, IIC_VBINi4D, "vpmin.f32", v2f32, v2f32,
-                        int_arm_neon_vpmins, 0>;
+def  VPMINs8  : N3VDInt<0, 0, 0b00, 0b1010, 1, IIC_VBINi4D, "vpmin", "s8",
+                        v8i8, v8i8, int_arm_neon_vpmins, 0>;
+def  VPMINs16 : N3VDInt<0, 0, 0b01, 0b1010, 1, IIC_VBINi4D, "vpmin", "s16",
+                        v4i16, v4i16, int_arm_neon_vpmins, 0>;
+def  VPMINs32 : N3VDInt<0, 0, 0b10, 0b1010, 1, IIC_VBINi4D, "vpmin", "s32",
+                        v2i32, v2i32, int_arm_neon_vpmins, 0>;
+def  VPMINu8  : N3VDInt<1, 0, 0b00, 0b1010, 1, IIC_VBINi4D, "vpmin", "u8",
+                        v8i8, v8i8, int_arm_neon_vpminu, 0>;
+def  VPMINu16 : N3VDInt<1, 0, 0b01, 0b1010, 1, IIC_VBINi4D, "vpmin", "u16",
+                        v4i16, v4i16, int_arm_neon_vpminu, 0>;
+def  VPMINu32 : N3VDInt<1, 0, 0b10, 0b1010, 1, IIC_VBINi4D, "vpmin", "u32",
+                        v2i32, v2i32, int_arm_neon_vpminu, 0>;
+def  VPMINf   : N3VDInt<1, 0, 0b10, 0b1111, 0, IIC_VBINi4D, "vpmin", "f32",
+                        v2f32, v2f32, int_arm_neon_vpmins, 0>;
 
 // Vector Reciprocal and Reciprocal Square Root Estimate and Step.
 
 //   VRECPE   : Vector Reciprocal Estimate
 def  VRECPEd  : N2VDInt<0b11, 0b11, 0b10, 0b11, 0b01000, 0, 
-                        IIC_VUNAD, "vrecpe.u32",
+                        IIC_VUNAD, "vrecpe", "u32",
                         v2i32, v2i32, int_arm_neon_vrecpe>;
 def  VRECPEq  : N2VQInt<0b11, 0b11, 0b10, 0b11, 0b01000, 0, 
-                        IIC_VUNAQ, "vrecpe.u32",
+                        IIC_VUNAQ, "vrecpe", "u32",
                         v4i32, v4i32, int_arm_neon_vrecpe>;
 def  VRECPEfd : N2VDInt<0b11, 0b11, 0b10, 0b11, 0b01010, 0,
-                        IIC_VUNAD, "vrecpe.f32",
+                        IIC_VUNAD, "vrecpe", "f32",
                         v2f32, v2f32, int_arm_neon_vrecpe>;
 def  VRECPEfq : N2VQInt<0b11, 0b11, 0b10, 0b11, 0b01010, 0,
-                        IIC_VUNAQ, "vrecpe.f32",
+                        IIC_VUNAQ, "vrecpe", "f32",
                         v4f32, v4f32, int_arm_neon_vrecpe>;
 
 //   VRECPS   : Vector Reciprocal Step
-def  VRECPSfd : N3VDInt<0, 0, 0b00, 0b1111, 1, IIC_VRECSD, "vrecps.f32", v2f32, v2f32,
-                        int_arm_neon_vrecps, 1>;
-def  VRECPSfq : N3VQInt<0, 0, 0b00, 0b1111, 1, IIC_VRECSQ, "vrecps.f32", v4f32, v4f32,
-                        int_arm_neon_vrecps, 1>;
+def  VRECPSfd : N3VDInt<0, 0, 0b00, 0b1111, 1,
+                        IIC_VRECSD, "vrecps", "f32",
+                        v2f32, v2f32, int_arm_neon_vrecps, 1>;
+def  VRECPSfq : N3VQInt<0, 0, 0b00, 0b1111, 1,
+                        IIC_VRECSQ, "vrecps", "f32",
+                        v4f32, v4f32, int_arm_neon_vrecps, 1>;
 
 //   VRSQRTE  : Vector Reciprocal Square Root Estimate
 def  VRSQRTEd  : N2VDInt<0b11, 0b11, 0b10, 0b11, 0b01001, 0,
-                         IIC_VUNAD, "vrsqrte.u32",
+                         IIC_VUNAD, "vrsqrte", "u32",
                          v2i32, v2i32, int_arm_neon_vrsqrte>;
 def  VRSQRTEq  : N2VQInt<0b11, 0b11, 0b10, 0b11, 0b01001, 0,
-                         IIC_VUNAQ, "vrsqrte.u32",
+                         IIC_VUNAQ, "vrsqrte", "u32",
                          v4i32, v4i32, int_arm_neon_vrsqrte>;
 def  VRSQRTEfd : N2VDInt<0b11, 0b11, 0b10, 0b11, 0b01011, 0,
-                         IIC_VUNAD, "vrsqrte.f32",
+                         IIC_VUNAD, "vrsqrte", "f32",
                          v2f32, v2f32, int_arm_neon_vrsqrte>;
 def  VRSQRTEfq : N2VQInt<0b11, 0b11, 0b10, 0b11, 0b01011, 0, 
-                         IIC_VUNAQ, "vrsqrte.f32",
+                         IIC_VUNAQ, "vrsqrte", "f32",
                          v4f32, v4f32, int_arm_neon_vrsqrte>;
 
 //   VRSQRTS  : Vector Reciprocal Square Root Step
-def VRSQRTSfd : N3VDInt<0, 0, 0b10, 0b1111, 1, IIC_VRECSD, "vrsqrts.f32", v2f32, v2f32,
-                        int_arm_neon_vrsqrts, 1>;
-def VRSQRTSfq : N3VQInt<0, 0, 0b10, 0b1111, 1, IIC_VRECSQ, "vrsqrts.f32", v4f32, v4f32,
-                        int_arm_neon_vrsqrts, 1>;
+def VRSQRTSfd : N3VDInt<0, 0, 0b10, 0b1111, 1,
+                        IIC_VRECSD, "vrsqrts", "f32",
+                        v2f32, v2f32, int_arm_neon_vrsqrts, 1>;
+def VRSQRTSfq : N3VQInt<0, 0, 0b10, 0b1111, 1,
+                        IIC_VRECSQ, "vrsqrts", "f32",
+                        v4f32, v4f32, int_arm_neon_vrsqrts, 1>;
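
The two "step" instructions exist to feed Newton-Raphson refinement of the corresponding estimates; each step roughly doubles the number of correct bits. A scalar sketch of one refinement, under the usual reading of the semantics (vrecps(a,b) = 2 - a*b, vrsqrts(a,b) = (3 - a*b)/2); names are illustrative:

    /* one Newton-Raphson step for 1/d, starting from estimate x */
    static float refine_recip(float d, float x) {
        return x * (2.0f - d * x);                 /* x * vrecps(d, x)    */
    }
    /* one step for 1/sqrt(d) */
    static float refine_rsqrt(float d, float x) {
        return x * ((3.0f - (d * x) * x) / 2.0f);  /* x * vrsqrts(d*x, x) */
    }
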
 
 // Vector Shifts.
 
 //   VSHL     : Vector Shift
 defm VSHLs    : N3VInt_QHSD<0, 0, 0b0100, 0, IIC_VSHLiD, IIC_VSHLiD, IIC_VSHLiQ,
-                            IIC_VSHLiQ, "vshl.s", int_arm_neon_vshifts, 0>;
+                            IIC_VSHLiQ, "vshl", "s", int_arm_neon_vshifts, 0>;
 defm VSHLu    : N3VInt_QHSD<1, 0, 0b0100, 0, IIC_VSHLiD, IIC_VSHLiD, IIC_VSHLiQ,
-                            IIC_VSHLiQ, "vshl.u", int_arm_neon_vshiftu, 0>;
+                            IIC_VSHLiQ, "vshl", "u", int_arm_neon_vshiftu, 0>;
 //   VSHL     : Vector Shift Left (Immediate)
-defm VSHLi    : N2VSh_QHSD<0, 1, 0b0111, 1, IIC_VSHLiD, "vshl.i", NEONvshl>;
+defm VSHLi    : N2VSh_QHSD<0, 1, 0b0101, 1, IIC_VSHLiD, "vshl", "i", NEONvshl>;
 //   VSHR     : Vector Shift Right (Immediate)
-defm VSHRs    : N2VSh_QHSD<0, 1, 0b0000, 1, IIC_VSHLiD, "vshr.s", NEONvshrs>;
-defm VSHRu    : N2VSh_QHSD<1, 1, 0b0000, 1, IIC_VSHLiD, "vshr.u", NEONvshru>;
+defm VSHRs    : N2VSh_QHSD<0, 1, 0b0000, 1, IIC_VSHLiD, "vshr", "s", NEONvshrs>;
+defm VSHRu    : N2VSh_QHSD<1, 1, 0b0000, 1, IIC_VSHLiD, "vshr", "u", NEONvshru>;
 
 //   VSHLL    : Vector Shift Left Long
-def  VSHLLs8  : N2VLSh<0, 1, 0b001000, 0b1010, 0, 0, 1, "vshll.s8",
-                       v8i16, v8i8, NEONvshlls>;
-def  VSHLLs16 : N2VLSh<0, 1, 0b010000, 0b1010, 0, 0, 1, "vshll.s16",
-                       v4i32, v4i16, NEONvshlls>;
-def  VSHLLs32 : N2VLSh<0, 1, 0b100000, 0b1010, 0, 0, 1, "vshll.s32",
-                       v2i64, v2i32, NEONvshlls>;
-def  VSHLLu8  : N2VLSh<1, 1, 0b001000, 0b1010, 0, 0, 1, "vshll.u8",
-                       v8i16, v8i8, NEONvshllu>;
-def  VSHLLu16 : N2VLSh<1, 1, 0b010000, 0b1010, 0, 0, 1, "vshll.u16",
-                       v4i32, v4i16, NEONvshllu>;
-def  VSHLLu32 : N2VLSh<1, 1, 0b100000, 0b1010, 0, 0, 1, "vshll.u32",
-                       v2i64, v2i32, NEONvshllu>;
+defm VSHLLs   : N2VLSh_QHS<0, 1, 0b1010, 0, 0, 1, "vshll", "s", NEONvshlls>;
+defm VSHLLu   : N2VLSh_QHS<1, 1, 0b1010, 0, 0, 1, "vshll", "u", NEONvshllu>;
 
 //   VSHLL    : Vector Shift Left Long (with maximum shift count)
-def  VSHLLi8  : N2VLSh<1, 1, 0b110010, 0b0011, 0, 0, 0, "vshll.i8",
-                       v8i16, v8i8, NEONvshlli>;
-def  VSHLLi16 : N2VLSh<1, 1, 0b110110, 0b0011, 0, 0, 0, "vshll.i16",
-                       v4i32, v4i16, NEONvshlli>;
-def  VSHLLi32 : N2VLSh<1, 1, 0b111010, 0b0011, 0, 0, 0, "vshll.i32",
-                       v2i64, v2i32, NEONvshlli>;
+class N2VLShMax<bit op24, bit op23, bits<6> op21_16, bits<4> op11_8, bit op7,
+                bit op6, bit op4, string OpcodeStr, string Dt, ValueType ResTy,
+                ValueType OpTy, SDNode OpNode>
+  : N2VLSh<op24, op23, op11_8, op7, op6, op4, OpcodeStr, Dt,
+           ResTy, OpTy, OpNode> {
+  let Inst{21-16} = op21_16;
+}
+def  VSHLLi8  : N2VLShMax<1, 1, 0b110010, 0b0011, 0, 0, 0, "vshll", "i8",
+                          v8i16, v8i8, NEONvshlli>;
+def  VSHLLi16 : N2VLShMax<1, 1, 0b110110, 0b0011, 0, 0, 0, "vshll", "i16",
+                          v4i32, v4i16, NEONvshlli>;
+def  VSHLLi32 : N2VLShMax<1, 1, 0b111010, 0b0011, 0, 0, 0, "vshll", "i32",
+                          v2i64, v2i32, NEONvshlli>;
 
 //   VSHRN    : Vector Shift Right and Narrow
-def  VSHRN16  : N2VNSh<0, 1, 0b001000, 0b1000, 0, 0, 1, 
-                       IIC_VSHLiD, "vshrn.i16", v8i8, v8i16, NEONvshrn>;
-def  VSHRN32  : N2VNSh<0, 1, 0b010000, 0b1000, 0, 0, 1,
-                       IIC_VSHLiD, "vshrn.i32", v4i16, v4i32, NEONvshrn>;
-def  VSHRN64  : N2VNSh<0, 1, 0b100000, 0b1000, 0, 0, 1,
-                       IIC_VSHLiD, "vshrn.i64", v2i32, v2i64, NEONvshrn>;
+defm VSHRN    : N2VNSh_HSD<0,1,0b1000,0,0,1, IIC_VSHLiD, "vshrn", "i", NEONvshrn>;
 
 //   VRSHL    : Vector Rounding Shift
 defm VRSHLs   : N3VInt_QHSD<0,0,0b0101,0, IIC_VSHLi4D, IIC_VSHLi4D, IIC_VSHLi4Q,
-                            IIC_VSHLi4Q, "vrshl.s", int_arm_neon_vrshifts, 0>;
+                            IIC_VSHLi4Q, "vrshl", "s", int_arm_neon_vrshifts, 0>;
 defm VRSHLu   : N3VInt_QHSD<1,0,0b0101,0, IIC_VSHLi4D, IIC_VSHLi4D, IIC_VSHLi4Q,
-                            IIC_VSHLi4Q, "vrshl.u", int_arm_neon_vrshiftu, 0>;
+                            IIC_VSHLi4Q, "vrshl", "u", int_arm_neon_vrshiftu, 0>;
 //   VRSHR    : Vector Rounding Shift Right
-defm VRSHRs   : N2VSh_QHSD<0, 1, 0b0010, 1, IIC_VSHLi4D, "vrshr.s", NEONvrshrs>;
-defm VRSHRu   : N2VSh_QHSD<1, 1, 0b0010, 1, IIC_VSHLi4D, "vrshr.u", NEONvrshru>;
+defm VRSHRs   : N2VSh_QHSD<0, 1, 0b0010, 1, IIC_VSHLi4D, "vrshr", "s", NEONvrshrs>;
+defm VRSHRu   : N2VSh_QHSD<1, 1, 0b0010, 1, IIC_VSHLi4D, "vrshr", "u", NEONvrshru>;
 
 //   VRSHRN   : Vector Rounding Shift Right and Narrow
-def  VRSHRN16 : N2VNSh<0, 1, 0b001000, 0b1000, 0, 1, 1,
-                       IIC_VSHLi4D, "vrshrn.i16", v8i8, v8i16, NEONvrshrn>;
-def  VRSHRN32 : N2VNSh<0, 1, 0b010000, 0b1000, 0, 1, 1, 
-                       IIC_VSHLi4D, "vrshrn.i32", v4i16, v4i32, NEONvrshrn>;
-def  VRSHRN64 : N2VNSh<0, 1, 0b100000, 0b1000, 0, 1, 1,
-                       IIC_VSHLi4D, "vrshrn.i64", v2i32, v2i64, NEONvrshrn>;
+defm VRSHRN   : N2VNSh_HSD<0, 1, 0b1000, 0, 1, 1, IIC_VSHLi4D, "vrshrn", "i",
+                           NEONvrshrn>;
 
 //   VQSHL    : Vector Saturating Shift
 defm VQSHLs   : N3VInt_QHSD<0,0,0b0100,1, IIC_VSHLi4D, IIC_VSHLi4D, IIC_VSHLi4Q,
-                            IIC_VSHLi4Q, "vqshl.s", int_arm_neon_vqshifts, 0>;
+                            IIC_VSHLi4Q, "vqshl", "s", int_arm_neon_vqshifts, 0>;
 defm VQSHLu   : N3VInt_QHSD<1,0,0b0100,1, IIC_VSHLi4D, IIC_VSHLi4D, IIC_VSHLi4Q,
-                            IIC_VSHLi4Q, "vqshl.u", int_arm_neon_vqshiftu, 0>;
+                            IIC_VSHLi4Q, "vqshl", "u", int_arm_neon_vqshiftu, 0>;
 //   VQSHL    : Vector Saturating Shift Left (Immediate)
-defm VQSHLsi  : N2VSh_QHSD<0, 1, 0b0111, 1, IIC_VSHLi4D, "vqshl.s", NEONvqshls>;
-defm VQSHLui  : N2VSh_QHSD<1, 1, 0b0111, 1, IIC_VSHLi4D, "vqshl.u", NEONvqshlu>;
+defm VQSHLsi  : N2VSh_QHSD<0, 1, 0b0111, 1, IIC_VSHLi4D, "vqshl", "s", NEONvqshls>;
+defm VQSHLui  : N2VSh_QHSD<1, 1, 0b0111, 1, IIC_VSHLi4D, "vqshl", "u", NEONvqshlu>;
 //   VQSHLU   : Vector Saturating Shift Left (Immediate, Unsigned)
-defm VQSHLsu  : N2VSh_QHSD<1, 1, 0b0110, 1, IIC_VSHLi4D, "vqshlu.s", NEONvqshlsu>;
+defm VQSHLsu  : N2VSh_QHSD<1, 1, 0b0110, 1, IIC_VSHLi4D, "vqshlu", "s", NEONvqshlsu>;
 
 //   VQSHRN   : Vector Saturating Shift Right and Narrow
-def VQSHRNs16 : N2VNSh<0, 1, 0b001000, 0b1001, 0, 0, 1, 
-                       IIC_VSHLi4D, "vqshrn.s16", v8i8, v8i16, NEONvqshrns>;
-def VQSHRNs32 : N2VNSh<0, 1, 0b010000, 0b1001, 0, 0, 1,
-                       IIC_VSHLi4D, "vqshrn.s32", v4i16, v4i32, NEONvqshrns>;
-def VQSHRNs64 : N2VNSh<0, 1, 0b100000, 0b1001, 0, 0, 1, 
-                       IIC_VSHLi4D, "vqshrn.s64", v2i32, v2i64, NEONvqshrns>;
-def VQSHRNu16 : N2VNSh<1, 1, 0b001000, 0b1001, 0, 0, 1,
-                       IIC_VSHLi4D, "vqshrn.u16", v8i8, v8i16, NEONvqshrnu>;
-def VQSHRNu32 : N2VNSh<1, 1, 0b010000, 0b1001, 0, 0, 1,
-                       IIC_VSHLi4D, "vqshrn.u32", v4i16, v4i32, NEONvqshrnu>;
-def VQSHRNu64 : N2VNSh<1, 1, 0b100000, 0b1001, 0, 0, 1,
-                       IIC_VSHLi4D, "vqshrn.u64", v2i32, v2i64, NEONvqshrnu>;
+defm VQSHRNs  : N2VNSh_HSD<0, 1, 0b1001, 0, 0, 1, IIC_VSHLi4D, "vqshrn", "s",
+                           NEONvqshrns>;
+defm VQSHRNu  : N2VNSh_HSD<1, 1, 0b1001, 0, 0, 1, IIC_VSHLi4D, "vqshrn", "u",
+                           NEONvqshrnu>;
 
 //   VQSHRUN  : Vector Saturating Shift Right and Narrow (Unsigned)
-def VQSHRUN16 : N2VNSh<1, 1, 0b001000, 0b1000, 0, 0, 1,
-                       IIC_VSHLi4D, "vqshrun.s16", v8i8, v8i16, NEONvqshrnsu>;
-def VQSHRUN32 : N2VNSh<1, 1, 0b010000, 0b1000, 0, 0, 1,
-                       IIC_VSHLi4D, "vqshrun.s32", v4i16, v4i32, NEONvqshrnsu>;
-def VQSHRUN64 : N2VNSh<1, 1, 0b100000, 0b1000, 0, 0, 1,
-                       IIC_VSHLi4D, "vqshrun.s64", v2i32, v2i64, NEONvqshrnsu>;
+defm VQSHRUN  : N2VNSh_HSD<1, 1, 0b1000, 0, 0, 1, IIC_VSHLi4D, "vqshrun", "s",
+                           NEONvqshrnsu>;
 
 //   VQRSHL   : Vector Saturating Rounding Shift
 defm VQRSHLs  : N3VInt_QHSD<0, 0, 0b0101, 1, IIC_VSHLi4D, IIC_VSHLi4D, IIC_VSHLi4Q,
-                            IIC_VSHLi4Q, "vqrshl.s", int_arm_neon_vqrshifts, 0>;
+                            IIC_VSHLi4Q, "vqrshl", "s",
+                            int_arm_neon_vqrshifts, 0>;
 defm VQRSHLu  : N3VInt_QHSD<1, 0, 0b0101, 1, IIC_VSHLi4D, IIC_VSHLi4D, IIC_VSHLi4Q,
-                            IIC_VSHLi4Q, "vqrshl.u", int_arm_neon_vqrshiftu, 0>;
+                            IIC_VSHLi4Q, "vqrshl", "u",
+                            int_arm_neon_vqrshiftu, 0>;
 
 //   VQRSHRN  : Vector Saturating Rounding Shift Right and Narrow
-def VQRSHRNs16: N2VNSh<0, 1, 0b001000, 0b1001, 0, 1, 1,
-                       IIC_VSHLi4D, "vqrshrn.s16", v8i8, v8i16, NEONvqrshrns>;
-def VQRSHRNs32: N2VNSh<0, 1, 0b010000, 0b1001, 0, 1, 1,
-                       IIC_VSHLi4D, "vqrshrn.s32", v4i16, v4i32, NEONvqrshrns>;
-def VQRSHRNs64: N2VNSh<0, 1, 0b100000, 0b1001, 0, 1, 1,
-                       IIC_VSHLi4D, "vqrshrn.s64", v2i32, v2i64, NEONvqrshrns>;
-def VQRSHRNu16: N2VNSh<1, 1, 0b001000, 0b1001, 0, 1, 1,
-                       IIC_VSHLi4D, "vqrshrn.u16", v8i8, v8i16, NEONvqrshrnu>;
-def VQRSHRNu32: N2VNSh<1, 1, 0b010000, 0b1001, 0, 1, 1,
-                       IIC_VSHLi4D, "vqrshrn.u32", v4i16, v4i32, NEONvqrshrnu>;
-def VQRSHRNu64: N2VNSh<1, 1, 0b100000, 0b1001, 0, 1, 1, 
-                       IIC_VSHLi4D, "vqrshrn.u64", v2i32, v2i64, NEONvqrshrnu>;
+defm VQRSHRNs : N2VNSh_HSD<0, 1, 0b1001, 0, 1, 1, IIC_VSHLi4D, "vqrshrn", "s",
+                           NEONvqrshrns>;
+defm VQRSHRNu : N2VNSh_HSD<1, 1, 0b1001, 0, 1, 1, IIC_VSHLi4D, "vqrshrn", "u",
+                           NEONvqrshrnu>;
 
 //   VQRSHRUN : Vector Saturating Rounding Shift Right and Narrow (Unsigned)
-def VQRSHRUN16: N2VNSh<1, 1, 0b001000, 0b1000, 0, 1, 1,
-                       IIC_VSHLi4D, "vqrshrun.s16", v8i8, v8i16, NEONvqrshrnsu>;
-def VQRSHRUN32: N2VNSh<1, 1, 0b010000, 0b1000, 0, 1, 1, 
-                       IIC_VSHLi4D, "vqrshrun.s32", v4i16, v4i32, NEONvqrshrnsu>;
-def VQRSHRUN64: N2VNSh<1, 1, 0b100000, 0b1000, 0, 1, 1,
-                       IIC_VSHLi4D, "vqrshrun.s64", v2i32, v2i64, NEONvqrshrnsu>;
+defm VQRSHRUN : N2VNSh_HSD<1, 1, 0b1000, 0, 1, 1, IIC_VSHLi4D, "vqrshrun", "s",
+                           NEONvqrshrnsu>;
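
In the rounding narrow shifts, "rounding" adds half the weight of the shift amount before shifting, and the narrowed result saturates. One lane of vqrshrn.s16 in C (illustrative name; shift amount n is 1..8):

    #include <stdint.h>

    /* vqrshrn.s16 #n: round, shift right n, saturate to s8 */
    static int8_t vqrshrn_s16_lane(int16_t a, int n) {
        int32_t r = ((int32_t)a + (1 << (n - 1))) >> n;
        if (r > 127)  r = 127;
        if (r < -128) r = -128;
        return (int8_t)r;
    }
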
 
 //   VSRA     : Vector Shift Right and Accumulate
-defm VSRAs    : N2VShAdd_QHSD<0, 1, 0b0001, 1, "vsra.s", NEONvshrs>;
-defm VSRAu    : N2VShAdd_QHSD<1, 1, 0b0001, 1, "vsra.u", NEONvshru>;
+defm VSRAs    : N2VShAdd_QHSD<0, 1, 0b0001, 1, "vsra", "s", NEONvshrs>;
+defm VSRAu    : N2VShAdd_QHSD<1, 1, 0b0001, 1, "vsra", "u", NEONvshru>;
 //   VRSRA    : Vector Rounding Shift Right and Accumulate
-defm VRSRAs   : N2VShAdd_QHSD<0, 1, 0b0011, 1, "vrsra.s", NEONvrshrs>;
-defm VRSRAu   : N2VShAdd_QHSD<1, 1, 0b0011, 1, "vrsra.u", NEONvrshru>;
+defm VRSRAs   : N2VShAdd_QHSD<0, 1, 0b0011, 1, "vrsra", "s", NEONvrshrs>;
+defm VRSRAu   : N2VShAdd_QHSD<1, 1, 0b0011, 1, "vrsra", "u", NEONvrshru>;
 
 //   VSLI     : Vector Shift Left and Insert
-defm VSLI     : N2VShIns_QHSD<1, 1, 0b0101, 1, "vsli.", NEONvsli>;
+defm VSLI     : N2VShIns_QHSD<1, 1, 0b0101, 1, "vsli", NEONvsli>;
 //   VSRI     : Vector Shift Right and Insert
-defm VSRI     : N2VShIns_QHSD<1, 1, 0b0100, 1, "vsri.", NEONvsri>;
+defm VSRI     : N2VShIns_QHSD<1, 1, 0b0100, 1, "vsri", NEONvsri>;
 
 // Vector Absolute and Saturating Absolute.
 
 //   VABS     : Vector Absolute Value
 defm VABS     : N2VInt_QHS<0b11, 0b11, 0b01, 0b00110, 0, 
-                           IIC_VUNAiD, IIC_VUNAiQ, "vabs.s",
+                           IIC_VUNAiD, IIC_VUNAiQ, "vabs", "s",
                            int_arm_neon_vabs>;
 def  VABSfd   : N2VDInt<0b11, 0b11, 0b10, 0b01, 0b01110, 0,
-                        IIC_VUNAD, "vabs.f32",
+                        IIC_VUNAD, "vabs", "f32",
                         v2f32, v2f32, int_arm_neon_vabs>;
 def  VABSfq   : N2VQInt<0b11, 0b11, 0b10, 0b01, 0b01110, 0,
-                        IIC_VUNAQ, "vabs.f32",
+                        IIC_VUNAQ, "vabs", "f32",
                         v4f32, v4f32, int_arm_neon_vabs>;
 
 //   VQABS    : Vector Saturating Absolute Value
 defm VQABS    : N2VInt_QHS<0b11, 0b11, 0b00, 0b01110, 0, 
-                           IIC_VQUNAiD, IIC_VQUNAiQ, "vqabs.s",
+                           IIC_VQUNAiD, IIC_VQUNAiQ, "vqabs", "s",
                            int_arm_neon_vqabs>;
 
 // Vector Negate.
@@ -2007,31 +2470,31 @@ defm VQABS    : N2VInt_QHS<0b11, 0b11, 0b00, 0b01110, 0,
 def vneg      : PatFrag<(ops node:$in), (sub immAllZerosV, node:$in)>;
 def vneg_conv : PatFrag<(ops node:$in), (sub immAllZerosV_bc, node:$in)>;
 
-class VNEGD<bits<2> size, string OpcodeStr, ValueType Ty>
+class VNEGD<bits<2> size, string OpcodeStr, string Dt, ValueType Ty>
   : N2V<0b11, 0b11, size, 0b01, 0b00111, 0, 0, (outs DPR:$dst), (ins DPR:$src),
-        IIC_VSHLiD, !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        IIC_VSHLiD, OpcodeStr, Dt, "$dst, $src", "",
         [(set DPR:$dst, (Ty (vneg DPR:$src)))]>;
-class VNEGQ<bits<2> size, string OpcodeStr, ValueType Ty>
+class VNEGQ<bits<2> size, string OpcodeStr, string Dt, ValueType Ty>
   : N2V<0b11, 0b11, size, 0b01, 0b00111, 1, 0, (outs QPR:$dst), (ins QPR:$src),
-        IIC_VSHLiD, !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        IIC_VSHLiD, OpcodeStr, Dt, "$dst, $src", "",
         [(set QPR:$dst, (Ty (vneg QPR:$src)))]>;
 
 //   VNEG     : Vector Negate
-def  VNEGs8d  : VNEGD<0b00, "vneg.s8", v8i8>;
-def  VNEGs16d : VNEGD<0b01, "vneg.s16", v4i16>;
-def  VNEGs32d : VNEGD<0b10, "vneg.s32", v2i32>;
-def  VNEGs8q  : VNEGQ<0b00, "vneg.s8", v16i8>;
-def  VNEGs16q : VNEGQ<0b01, "vneg.s16", v8i16>;
-def  VNEGs32q : VNEGQ<0b10, "vneg.s32", v4i32>;
+def  VNEGs8d  : VNEGD<0b00, "vneg", "s8", v8i8>;
+def  VNEGs16d : VNEGD<0b01, "vneg", "s16", v4i16>;
+def  VNEGs32d : VNEGD<0b10, "vneg", "s32", v2i32>;
+def  VNEGs8q  : VNEGQ<0b00, "vneg", "s8", v16i8>;
+def  VNEGs16q : VNEGQ<0b01, "vneg", "s16", v8i16>;
+def  VNEGs32q : VNEGQ<0b10, "vneg", "s32", v4i32>;
 
 //   VNEG     : Vector Negate (floating-point)
 def  VNEGf32d : N2V<0b11, 0b11, 0b10, 0b01, 0b01111, 0, 0,
                     (outs DPR:$dst), (ins DPR:$src), IIC_VUNAD,
-                    "vneg.f32\t$dst, $src", "",
+                    "vneg", "f32", "$dst, $src", "",
                     [(set DPR:$dst, (v2f32 (fneg DPR:$src)))]>;
 def  VNEGf32q : N2V<0b11, 0b11, 0b10, 0b01, 0b01111, 1, 0,
                     (outs QPR:$dst), (ins QPR:$src), IIC_VUNAQ,
-                    "vneg.f32\t$dst, $src", "",
+                    "vneg", "f32", "$dst, $src", "",
                     [(set QPR:$dst, (v4f32 (fneg QPR:$src)))]>;
 
 def : Pat<(v8i8 (vneg_conv DPR:$src)), (VNEGs8d DPR:$src)>;
@@ -2043,35 +2506,35 @@ def : Pat<(v4i32 (vneg_conv QPR:$src)), (VNEGs32q QPR:$src)>;
 
 //   VQNEG    : Vector Saturating Negate
 defm VQNEG    : N2VInt_QHS<0b11, 0b11, 0b00, 0b01111, 0, 
-                           IIC_VQUNAiD, IIC_VQUNAiQ, "vqneg.s",
+                           IIC_VQUNAiD, IIC_VQUNAiQ, "vqneg", "s",
                            int_arm_neon_vqneg>;
 
 // Vector Bit Counting Operations.
 
 //   VCLS     : Vector Count Leading Sign Bits
 defm VCLS     : N2VInt_QHS<0b11, 0b11, 0b00, 0b01000, 0, 
-                           IIC_VCNTiD, IIC_VCNTiQ, "vcls.s",
+                           IIC_VCNTiD, IIC_VCNTiQ, "vcls", "s",
                            int_arm_neon_vcls>;
 //   VCLZ     : Vector Count Leading Zeros
 defm VCLZ     : N2VInt_QHS<0b11, 0b11, 0b00, 0b01001, 0, 
-                           IIC_VCNTiD, IIC_VCNTiQ, "vclz.i",
+                           IIC_VCNTiD, IIC_VCNTiQ, "vclz", "i",
                            int_arm_neon_vclz>;
 //   VCNT     : Vector Count One Bits
 def  VCNTd    : N2VDInt<0b11, 0b11, 0b00, 0b00, 0b01010, 0, 
-                        IIC_VCNTiD, "vcnt.8",
+                        IIC_VCNTiD, "vcnt", "8",
                         v8i8, v8i8, int_arm_neon_vcnt>;
 def  VCNTq    : N2VQInt<0b11, 0b11, 0b00, 0b00, 0b01010, 0,
-                        IIC_VCNTiQ, "vcnt.8",
+                        IIC_VCNTiQ, "vcnt", "8",
                         v16i8, v16i8, int_arm_neon_vcnt>;
 
 // Vector Move Operations.
 
 //   VMOV     : Vector Move (Register)
 
-def  VMOVD    : N3V<0, 0, 0b10, 0b0001, 0, 1, (outs DPR:$dst), (ins DPR:$src),
-                    IIC_VMOVD, "vmov\t$dst, $src", "", []>;
-def  VMOVQ    : N3V<0, 0, 0b10, 0b0001, 1, 1, (outs QPR:$dst), (ins QPR:$src),
-                    IIC_VMOVD, "vmov\t$dst, $src", "", []>;
+def  VMOVDneon: N3VX<0, 0, 0b10, 0b0001, 0, 1, (outs DPR:$dst), (ins DPR:$src),
+                    IIC_VMOVD, "vmov", "$dst, $src", "", []>;
+def  VMOVQ    : N3VX<0, 0, 0b10, 0b0001, 1, 1, (outs QPR:$dst), (ins QPR:$src),
+                    IIC_VMOVD, "vmov", "$dst, $src", "", []>;
 
 //   VMOV     : Vector Move (Immediate)
 
@@ -2111,66 +2574,66 @@ def vmovImm64 : PatLeaf<(build_vector), [{
 // be encoded based on the immed values.
 
 def VMOVv8i8  : N1ModImm<1, 0b000, 0b1110, 0, 0, 0, 1, (outs DPR:$dst),
-                         (ins i8imm:$SIMM), IIC_VMOVImm,
-                         "vmov.i8\t$dst, $SIMM", "",
+                         (ins h8imm:$SIMM), IIC_VMOVImm,
+                         "vmov", "i8", "$dst, $SIMM", "",
                          [(set DPR:$dst, (v8i8 vmovImm8:$SIMM))]>;
 def VMOVv16i8 : N1ModImm<1, 0b000, 0b1110, 0, 1, 0, 1, (outs QPR:$dst),
-                         (ins i8imm:$SIMM), IIC_VMOVImm,
-                         "vmov.i8\t$dst, $SIMM", "",
+                         (ins h8imm:$SIMM), IIC_VMOVImm,
+                         "vmov", "i8", "$dst, $SIMM", "",
                          [(set QPR:$dst, (v16i8 vmovImm8:$SIMM))]>;
 
 def VMOVv4i16 : N1ModImm<1, 0b000, 0b1000, 0, 0, 0, 1, (outs DPR:$dst),
-                         (ins i16imm:$SIMM), IIC_VMOVImm,
-                         "vmov.i16\t$dst, $SIMM", "",
+                         (ins h16imm:$SIMM), IIC_VMOVImm,
+                         "vmov", "i16", "$dst, $SIMM", "",
                          [(set DPR:$dst, (v4i16 vmovImm16:$SIMM))]>;
 def VMOVv8i16 : N1ModImm<1, 0b000, 0b1000, 0, 1, 0, 1, (outs QPR:$dst),
-                         (ins i16imm:$SIMM), IIC_VMOVImm,
-                         "vmov.i16\t$dst, $SIMM", "",
+                         (ins h16imm:$SIMM), IIC_VMOVImm,
+                         "vmov", "i16", "$dst, $SIMM", "",
                          [(set QPR:$dst, (v8i16 vmovImm16:$SIMM))]>;
 
 def VMOVv2i32 : N1ModImm<1, 0b000, 0b0000, 0, 0, 0, 1, (outs DPR:$dst),
-                         (ins i32imm:$SIMM), IIC_VMOVImm,
-                         "vmov.i32\t$dst, $SIMM", "",
+                         (ins h32imm:$SIMM), IIC_VMOVImm,
+                         "vmov", "i32", "$dst, $SIMM", "",
                          [(set DPR:$dst, (v2i32 vmovImm32:$SIMM))]>;
 def VMOVv4i32 : N1ModImm<1, 0b000, 0b0000, 0, 1, 0, 1, (outs QPR:$dst),
-                         (ins i32imm:$SIMM), IIC_VMOVImm,
-                         "vmov.i32\t$dst, $SIMM", "",
+                         (ins h32imm:$SIMM), IIC_VMOVImm,
+                         "vmov", "i32", "$dst, $SIMM", "",
                          [(set QPR:$dst, (v4i32 vmovImm32:$SIMM))]>;
 
 def VMOVv1i64 : N1ModImm<1, 0b000, 0b1110, 0, 0, 1, 1, (outs DPR:$dst),
-                         (ins i64imm:$SIMM), IIC_VMOVImm,
-                         "vmov.i64\t$dst, $SIMM", "",
+                         (ins h64imm:$SIMM), IIC_VMOVImm,
+                         "vmov", "i64", "$dst, $SIMM", "",
                          [(set DPR:$dst, (v1i64 vmovImm64:$SIMM))]>;
 def VMOVv2i64 : N1ModImm<1, 0b000, 0b1110, 0, 1, 1, 1, (outs QPR:$dst),
-                         (ins i64imm:$SIMM), IIC_VMOVImm,
-                         "vmov.i64\t$dst, $SIMM", "",
+                         (ins h64imm:$SIMM), IIC_VMOVImm,
+                         "vmov", "i64", "$dst, $SIMM", "",
                          [(set QPR:$dst, (v2i64 vmovImm64:$SIMM))]>;
 
 //   VMOV     : Vector Get Lane (move scalar to ARM core register)
 
-def VGETLNs8  : NVGetLane<0b11100101, 0b1011, 0b00,
+def VGETLNs8  : NVGetLane<{1,1,1,0,0,1,?,1}, 0b1011, {?,?},
                           (outs GPR:$dst), (ins DPR:$src, nohash_imm:$lane),
-                          IIC_VMOVSI, "vmov", ".s8\t$dst, $src[$lane]",
+                          IIC_VMOVSI, "vmov", "s8", "$dst, $src[$lane]",
                           [(set GPR:$dst, (NEONvgetlanes (v8i8 DPR:$src),
                                            imm:$lane))]>;
-def VGETLNs16 : NVGetLane<0b11100001, 0b1011, 0b01,
+def VGETLNs16 : NVGetLane<{1,1,1,0,0,0,?,1}, 0b1011, {?,1},
                           (outs GPR:$dst), (ins DPR:$src, nohash_imm:$lane),
-                          IIC_VMOVSI, "vmov", ".s16\t$dst, $src[$lane]",
+                          IIC_VMOVSI, "vmov", "s16", "$dst, $src[$lane]",
                           [(set GPR:$dst, (NEONvgetlanes (v4i16 DPR:$src),
                                            imm:$lane))]>;
-def VGETLNu8  : NVGetLane<0b11101101, 0b1011, 0b00,
+def VGETLNu8  : NVGetLane<{1,1,1,0,1,1,?,1}, 0b1011, {?,?},
                           (outs GPR:$dst), (ins DPR:$src, nohash_imm:$lane),
-                          IIC_VMOVSI, "vmov", ".u8\t$dst, $src[$lane]",
+                          IIC_VMOVSI, "vmov", "u8", "$dst, $src[$lane]",
                           [(set GPR:$dst, (NEONvgetlaneu (v8i8 DPR:$src),
                                            imm:$lane))]>;
-def VGETLNu16 : NVGetLane<0b11101001, 0b1011, 0b01,
+def VGETLNu16 : NVGetLane<{1,1,1,0,1,0,?,1}, 0b1011, {?,1},
                           (outs GPR:$dst), (ins DPR:$src, nohash_imm:$lane),
-                          IIC_VMOVSI, "vmov", ".u16\t$dst, $src[$lane]",
+                          IIC_VMOVSI, "vmov", "u16", "$dst, $src[$lane]",
                           [(set GPR:$dst, (NEONvgetlaneu (v4i16 DPR:$src),
                                            imm:$lane))]>;
-def VGETLNi32 : NVGetLane<0b11100001, 0b1011, 0b00,
+def VGETLNi32 : NVGetLane<{1,1,1,0,0,0,?,1}, 0b1011, 0b00,
                           (outs GPR:$dst), (ins DPR:$src, nohash_imm:$lane),
-                          IIC_VMOVSI, "vmov", ".32\t$dst, $src[$lane]",
+                          IIC_VMOVSI, "vmov", "32", "$dst, $src[$lane]",
                           [(set GPR:$dst, (extractelt (v2i32 DPR:$src),
                                            imm:$lane))]>;
 // def VGETLNf32: see FMRDH and FMRDL in ARMInstrVFP.td
@@ -2195,10 +2658,10 @@ def : Pat<(extractelt (v4i32 QPR:$src), imm:$lane),
                              (DSubReg_i32_reg imm:$lane))),
                      (SubReg_i32_lane imm:$lane))>;
 def : Pat<(extractelt (v2f32 DPR:$src1), imm:$src2),
-          (EXTRACT_SUBREG (COPY_TO_REGCLASS DPR:$src1, DPR_VFP2),
+          (EXTRACT_SUBREG (v2f32 (COPY_TO_REGCLASS (v2f32 DPR:$src1), DPR_VFP2)),
                           (SSubReg_f32_reg imm:$src2))>;
 def : Pat<(extractelt (v4f32 QPR:$src1), imm:$src2),
-          (EXTRACT_SUBREG (COPY_TO_REGCLASS QPR:$src1, QPR_VFP2),
+          (EXTRACT_SUBREG (v4f32 (COPY_TO_REGCLASS (v4f32 QPR:$src1), QPR_VFP2)),
                           (SSubReg_f32_reg imm:$src2))>;
 //def : Pat<(extractelt (v2i64 QPR:$src1), imm:$src2),
 //          (EXTRACT_SUBREG QPR:$src1, (DSubReg_f64_reg imm:$src2))>;
@@ -2209,19 +2672,19 @@ def : Pat<(extractelt (v2f64 QPR:$src1), imm:$src2),
 //   VMOV     : Vector Set Lane (move ARM core register to scalar)
 
 let Constraints = "$src1 = $dst" in {
-def VSETLNi8  : NVSetLane<0b11100100, 0b1011, 0b00, (outs DPR:$dst),
+def VSETLNi8  : NVSetLane<{1,1,1,0,0,1,?,0}, 0b1011, {?,?}, (outs DPR:$dst),
                           (ins DPR:$src1, GPR:$src2, nohash_imm:$lane),
-                          IIC_VMOVISL, "vmov", ".8\t$dst[$lane], $src2",
+                          IIC_VMOVISL, "vmov", "8", "$dst[$lane], $src2",
                           [(set DPR:$dst, (vector_insert (v8i8 DPR:$src1),
                                            GPR:$src2, imm:$lane))]>;
-def VSETLNi16 : NVSetLane<0b11100000, 0b1011, 0b01, (outs DPR:$dst),
+def VSETLNi16 : NVSetLane<{1,1,1,0,0,0,?,0}, 0b1011, {?,1}, (outs DPR:$dst),
                           (ins DPR:$src1, GPR:$src2, nohash_imm:$lane),
-                          IIC_VMOVISL, "vmov", ".16\t$dst[$lane], $src2",
+                          IIC_VMOVISL, "vmov", "16", "$dst[$lane], $src2",
                           [(set DPR:$dst, (vector_insert (v4i16 DPR:$src1),
                                            GPR:$src2, imm:$lane))]>;
-def VSETLNi32 : NVSetLane<0b11100000, 0b1011, 0b00, (outs DPR:$dst),
+def VSETLNi32 : NVSetLane<{1,1,1,0,0,0,?,0}, 0b1011, 0b00, (outs DPR:$dst),
                           (ins DPR:$src1, GPR:$src2, nohash_imm:$lane),
-                          IIC_VMOVISL, "vmov", ".32\t$dst[$lane], $src2",
+                          IIC_VMOVISL, "vmov", "32", "$dst[$lane], $src2",
                           [(set DPR:$dst, (insertelt (v2i32 DPR:$src1),
                                            GPR:$src2, imm:$lane))]>;
 }
@@ -2245,11 +2708,11 @@ def : Pat<(insertelt (v4i32 QPR:$src1), GPR:$src2, imm:$lane),
                   (DSubReg_i32_reg imm:$lane)))>;
 
 def : Pat<(v2f32 (insertelt DPR:$src1, SPR:$src2, imm:$src3)),
-          (INSERT_SUBREG (COPY_TO_REGCLASS DPR:$src1, DPR_VFP2),
-                         SPR:$src2, (SSubReg_f32_reg imm:$src3))>;
+          (INSERT_SUBREG (v2f32 (COPY_TO_REGCLASS DPR:$src1, DPR_VFP2)),
+                                SPR:$src2, (SSubReg_f32_reg imm:$src3))>;
 def : Pat<(v4f32 (insertelt QPR:$src1, SPR:$src2, imm:$src3)),
-          (INSERT_SUBREG (COPY_TO_REGCLASS QPR:$src1, QPR_VFP2),
-                         SPR:$src2, (SSubReg_f32_reg imm:$src3))>;
+          (INSERT_SUBREG (v4f32 (COPY_TO_REGCLASS QPR:$src1, QPR_VFP2)),
+                                SPR:$src2, (SSubReg_f32_reg imm:$src3))>;
 
 //def : Pat<(v2i64 (insertelt QPR:$src1, DPR:$src2, imm:$src3)),
 //          (INSERT_SUBREG QPR:$src1, DPR:$src2, (DSubReg_f64_reg imm:$src3))>;
@@ -2285,54 +2748,57 @@ def : Pat<(v4i32 (scalar_to_vector GPR:$src)),
 
 //   VDUP     : Vector Duplicate (from ARM core register to all elements)
 
-class VDUPD<bits<8> opcod1, bits<2> opcod3, string asmSize, ValueType Ty>
+class VDUPD<bits<8> opcod1, bits<2> opcod3, string Dt, ValueType Ty>
   : NVDup<opcod1, 0b1011, opcod3, (outs DPR:$dst), (ins GPR:$src),
-          IIC_VMOVIS, "vdup", !strconcat(asmSize, "\t$dst, $src"),
+          IIC_VMOVIS, "vdup", Dt, "$dst, $src",
           [(set DPR:$dst, (Ty (NEONvdup (i32 GPR:$src))))]>;
-class VDUPQ<bits<8> opcod1, bits<2> opcod3, string asmSize, ValueType Ty>
+class VDUPQ<bits<8> opcod1, bits<2> opcod3, string Dt, ValueType Ty>
   : NVDup<opcod1, 0b1011, opcod3, (outs QPR:$dst), (ins GPR:$src),
-          IIC_VMOVIS, "vdup", !strconcat(asmSize, "\t$dst, $src"),
+          IIC_VMOVIS, "vdup", Dt, "$dst, $src",
           [(set QPR:$dst, (Ty (NEONvdup (i32 GPR:$src))))]>;
 
-def  VDUP8d   : VDUPD<0b11101100, 0b00, ".8", v8i8>;
-def  VDUP16d  : VDUPD<0b11101000, 0b01, ".16", v4i16>;
-def  VDUP32d  : VDUPD<0b11101000, 0b00, ".32", v2i32>;
-def  VDUP8q   : VDUPQ<0b11101110, 0b00, ".8", v16i8>;
-def  VDUP16q  : VDUPQ<0b11101010, 0b01, ".16", v8i16>;
-def  VDUP32q  : VDUPQ<0b11101010, 0b00, ".32", v4i32>;
+def  VDUP8d   : VDUPD<0b11101100, 0b00, "8", v8i8>;
+def  VDUP16d  : VDUPD<0b11101000, 0b01, "16", v4i16>;
+def  VDUP32d  : VDUPD<0b11101000, 0b00, "32", v2i32>;
+def  VDUP8q   : VDUPQ<0b11101110, 0b00, "8", v16i8>;
+def  VDUP16q  : VDUPQ<0b11101010, 0b01, "16", v8i16>;
+def  VDUP32q  : VDUPQ<0b11101010, 0b00, "32", v4i32>;
 
 def  VDUPfd   : NVDup<0b11101000, 0b1011, 0b00, (outs DPR:$dst), (ins GPR:$src),
-                      IIC_VMOVIS, "vdup", ".32\t$dst, $src",
+                      IIC_VMOVIS, "vdup", "32", "$dst, $src",
                       [(set DPR:$dst, (v2f32 (NEONvdup
                                               (f32 (bitconvert GPR:$src)))))]>;
 def  VDUPfq   : NVDup<0b11101010, 0b1011, 0b00, (outs QPR:$dst), (ins GPR:$src),
-                      IIC_VMOVIS, "vdup", ".32\t$dst, $src",
+                      IIC_VMOVIS, "vdup", "32", "$dst, $src",
                       [(set QPR:$dst, (v4f32 (NEONvdup
                                               (f32 (bitconvert GPR:$src)))))]>;
 
 //   VDUP     : Vector Duplicate Lane (from scalar to all elements)
 
-class VDUPLND<bits<2> op19_18, bits<2> op17_16, string OpcodeStr, ValueType Ty>
+class VDUPLND<bits<2> op19_18, bits<2> op17_16,
+              string OpcodeStr, string Dt, ValueType Ty>
   : N2V<0b11, 0b11, op19_18, op17_16, 0b11000, 0, 0,
         (outs DPR:$dst), (ins DPR:$src, nohash_imm:$lane), IIC_VMOVD,
-        !strconcat(OpcodeStr, "\t$dst, $src[$lane]"), "",
+        OpcodeStr, Dt, "$dst, $src[$lane]", "",
         [(set DPR:$dst, (Ty (NEONvduplane (Ty DPR:$src), imm:$lane)))]>;
 
-class VDUPLNQ<bits<2> op19_18, bits<2> op17_16, string OpcodeStr,
+class VDUPLNQ<bits<2> op19_18, bits<2> op17_16, string OpcodeStr, string Dt,
               ValueType ResTy, ValueType OpTy>
   : N2V<0b11, 0b11, op19_18, op17_16, 0b11000, 1, 0,
         (outs QPR:$dst), (ins DPR:$src, nohash_imm:$lane), IIC_VMOVD,
-        !strconcat(OpcodeStr, "\t$dst, $src[$lane]"), "",
+        OpcodeStr, Dt, "$dst, $src[$lane]", "",
         [(set QPR:$dst, (ResTy (NEONvduplane (OpTy DPR:$src), imm:$lane)))]>;
 
-def VDUPLN8d  : VDUPLND<0b00, 0b01, "vdup.8", v8i8>;
-def VDUPLN16d : VDUPLND<0b00, 0b10, "vdup.16", v4i16>;
-def VDUPLN32d : VDUPLND<0b01, 0b00, "vdup.32", v2i32>;
-def VDUPLNfd  : VDUPLND<0b01, 0b00, "vdup.32", v2f32>;
-def VDUPLN8q  : VDUPLNQ<0b00, 0b01, "vdup.8", v16i8, v8i8>;
-def VDUPLN16q : VDUPLNQ<0b00, 0b10, "vdup.16", v8i16, v4i16>;
-def VDUPLN32q : VDUPLNQ<0b01, 0b00, "vdup.32", v4i32, v2i32>;
-def VDUPLNfq  : VDUPLNQ<0b01, 0b00, "vdup.32", v4f32, v2f32>;
+// Inst{19-16} is partially specified depending on the element size.
+
+def VDUPLN8d  : VDUPLND<{?,?}, {?,1}, "vdup", "8", v8i8>;
+def VDUPLN16d : VDUPLND<{?,?}, {1,0}, "vdup", "16", v4i16>;
+def VDUPLN32d : VDUPLND<{?,1}, {0,0}, "vdup", "32", v2i32>;
+def VDUPLNfd  : VDUPLND<{?,1}, {0,0}, "vdup", "32", v2f32>;
+def VDUPLN8q  : VDUPLNQ<{?,?}, {?,1}, "vdup", "8", v16i8, v8i8>;
+def VDUPLN16q : VDUPLNQ<{?,?}, {1,0}, "vdup", "16", v8i16, v4i16>;
+def VDUPLN32q : VDUPLNQ<{?,1}, {0,0}, "vdup", "32", v4i32, v2i32>;
+def VDUPLNfq  : VDUPLNQ<{?,1}, {0,0}, "vdup", "32", v4f32, v2f32>;
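
The {?,?} placeholders above leave the lane bits unspecified until encoding.
As a rough standalone illustration (assumed helper name, not LLVM code), the
fully resolved Inst{19-16} field combines the lane number with a trailing
1/10/100 marker that selects the element size:

  #include <cassert>
  // Build the Inst{19-16} field for a VDUP-from-lane instruction (sketch).
  unsigned vdupLaneField(unsigned EltBits, unsigned Lane) {
    switch (EltBits) {
    case 8:  assert(Lane < 8); return (Lane << 1) | 0x1; // Lane:1     (xxx1)
    case 16: assert(Lane < 4); return (Lane << 2) | 0x2; // Lane:1:0   (xx10)
    case 32: assert(Lane < 2); return (Lane << 3) | 0x4; // Lane:1:0:0 (x100)
    }
    assert(false && "unsupported element size");
    return 0;
  }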
 
 def : Pat<(v16i8 (NEONvduplane (v16i8 QPR:$src), imm:$lane)),
           (v16i8 (VDUPLN8q (v8i8 (EXTRACT_SUBREG QPR:$src,
@@ -2351,14 +2817,14 @@ def : Pat<(v4f32 (NEONvduplane (v4f32 QPR:$src), imm:$lane)),
                                    (DSubReg_i32_reg imm:$lane))),
                            (SubReg_i32_lane imm:$lane)))>;
 
-def VDUPfdf   : N2V<0b11, 0b11, 0b01, 0b00, 0b11000, 0, 0,
+def  VDUPfdf  : N2V<0b11, 0b11, {?,1}, {0,0}, 0b11000, 0, 0,
                     (outs DPR:$dst), (ins SPR:$src),
-                    IIC_VMOVD, "vdup.32\t$dst, ${src:lane}", "",
+                    IIC_VMOVD, "vdup", "32", "$dst, ${src:lane}", "",
                     [(set DPR:$dst, (v2f32 (NEONvdup (f32 SPR:$src))))]>;
 
-def VDUPfqf   : N2V<0b11, 0b11, 0b01, 0b00, 0b11000, 1, 0,
+def  VDUPfqf  : N2V<0b11, 0b11, {?,1}, {0,0}, 0b11000, 1, 0,
                     (outs QPR:$dst), (ins SPR:$src),
-                    IIC_VMOVD, "vdup.32\t$dst, ${src:lane}", "",
+                    IIC_VMOVD, "vdup", "32", "$dst, ${src:lane}", "",
                     [(set QPR:$dst, (v4f32 (NEONvdup (f32 SPR:$src))))]>;
 
 def : Pat<(v2i64 (NEONvduplane (v2i64 QPR:$src), imm:$lane)),
@@ -2371,178 +2837,178 @@ def : Pat<(v2f64 (NEONvduplane (v2f64 QPR:$src), imm:$lane)),
                          (DSubReg_f64_other_reg imm:$lane))>;
 
 //   VMOVN    : Vector Narrowing Move
-defm VMOVN    : N2VNInt_HSD<0b11,0b11,0b10,0b00100,0,0, IIC_VMOVD, "vmovn.i",
-                            int_arm_neon_vmovn>;
+defm VMOVN    : N2VNInt_HSD<0b11,0b11,0b10,0b00100,0,0, IIC_VMOVD,
+                            "vmovn", "i", int_arm_neon_vmovn>;
 //   VQMOVN   : Vector Saturating Narrowing Move
-defm VQMOVNs  : N2VNInt_HSD<0b11,0b11,0b10,0b00101,0,0, IIC_VQUNAiD, "vqmovn.s",
-                            int_arm_neon_vqmovns>;
-defm VQMOVNu  : N2VNInt_HSD<0b11,0b11,0b10,0b00101,1,0, IIC_VQUNAiD, "vqmovn.u",
-                            int_arm_neon_vqmovnu>;
-defm VQMOVNsu : N2VNInt_HSD<0b11,0b11,0b10,0b00100,1,0, IIC_VQUNAiD, "vqmovun.s",
-                            int_arm_neon_vqmovnsu>;
+defm VQMOVNs  : N2VNInt_HSD<0b11,0b11,0b10,0b00101,0,0, IIC_VQUNAiD,
+                            "vqmovn", "s", int_arm_neon_vqmovns>;
+defm VQMOVNu  : N2VNInt_HSD<0b11,0b11,0b10,0b00101,1,0, IIC_VQUNAiD,
+                            "vqmovn", "u", int_arm_neon_vqmovnu>;
+defm VQMOVNsu : N2VNInt_HSD<0b11,0b11,0b10,0b00100,1,0, IIC_VQUNAiD,
+                            "vqmovun", "s", int_arm_neon_vqmovnsu>;
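
For intuition, the per-lane operation behind a signed saturating narrow such
as vqmovn.s16 reduces to the following sketch (assumed helper name); the
instruction applies it to every lane:

  #include <cstdint>
  // Narrow one signed 16-bit lane to 8 bits, clamping on overflow.
  int8_t qmovn_s16(int16_t X) {
    if (X > INT8_MAX) return INT8_MAX;   // e.g.  300 ->  127
    if (X < INT8_MIN) return INT8_MIN;   // e.g. -300 -> -128
    return (int8_t)X;
  }
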
 //   VMOVL    : Vector Lengthening Move
-defm VMOVLs   : N2VLInt_QHS<0,1,0b1010,0,0,1, "vmovl.s", int_arm_neon_vmovls>;
-defm VMOVLu   : N2VLInt_QHS<1,1,0b1010,0,0,1, "vmovl.u", int_arm_neon_vmovlu>;
+defm VMOVLs   : N2VLInt_QHS<0b01,0b10100,0,1, "vmovl", "s",
+                            int_arm_neon_vmovls>;
+defm VMOVLu   : N2VLInt_QHS<0b11,0b10100,0,1, "vmovl", "u",
+                            int_arm_neon_vmovlu>;
 
 // Vector Conversions.
 
 //   VCVT     : Vector Convert Between Floating-Point and Integers
-def  VCVTf2sd : N2VD<0b11, 0b11, 0b10, 0b11, 0b01110, 0, "vcvt.s32.f32",
+def  VCVTf2sd : N2VD<0b11, 0b11, 0b10, 0b11, 0b01110, 0, "vcvt", "s32.f32",
                      v2i32, v2f32, fp_to_sint>;
-def  VCVTf2ud : N2VD<0b11, 0b11, 0b10, 0b11, 0b01111, 0, "vcvt.u32.f32",
+def  VCVTf2ud : N2VD<0b11, 0b11, 0b10, 0b11, 0b01111, 0, "vcvt", "u32.f32",
                      v2i32, v2f32, fp_to_uint>;
-def  VCVTs2fd : N2VD<0b11, 0b11, 0b10, 0b11, 0b01100, 0, "vcvt.f32.s32",
+def  VCVTs2fd : N2VD<0b11, 0b11, 0b10, 0b11, 0b01100, 0, "vcvt", "f32.s32",
                      v2f32, v2i32, sint_to_fp>;
-def  VCVTu2fd : N2VD<0b11, 0b11, 0b10, 0b11, 0b01101, 0, "vcvt.f32.u32",
+def  VCVTu2fd : N2VD<0b11, 0b11, 0b10, 0b11, 0b01101, 0, "vcvt", "f32.u32",
                      v2f32, v2i32, uint_to_fp>;
 
-def  VCVTf2sq : N2VQ<0b11, 0b11, 0b10, 0b11, 0b01110, 0, "vcvt.s32.f32",
+def  VCVTf2sq : N2VQ<0b11, 0b11, 0b10, 0b11, 0b01110, 0, "vcvt", "s32.f32",
                      v4i32, v4f32, fp_to_sint>;
-def  VCVTf2uq : N2VQ<0b11, 0b11, 0b10, 0b11, 0b01111, 0, "vcvt.u32.f32",
+def  VCVTf2uq : N2VQ<0b11, 0b11, 0b10, 0b11, 0b01111, 0, "vcvt", "u32.f32",
                      v4i32, v4f32, fp_to_uint>;
-def  VCVTs2fq : N2VQ<0b11, 0b11, 0b10, 0b11, 0b01100, 0, "vcvt.f32.s32",
+def  VCVTs2fq : N2VQ<0b11, 0b11, 0b10, 0b11, 0b01100, 0, "vcvt", "f32.s32",
                      v4f32, v4i32, sint_to_fp>;
-def  VCVTu2fq : N2VQ<0b11, 0b11, 0b10, 0b11, 0b01101, 0, "vcvt.f32.u32",
+def  VCVTu2fq : N2VQ<0b11, 0b11, 0b10, 0b11, 0b01101, 0, "vcvt", "f32.u32",
                      v4f32, v4i32, uint_to_fp>;
 
 //   VCVT     : Vector Convert Between Floating-Point and Fixed-Point.
-// Note: Some of the opcode bits in the following VCVT instructions need to
-// be encoded based on the immed values.
-def VCVTf2xsd : N2VCvtD<0, 1, 0b000000, 0b1111, 0, 1, "vcvt.s32.f32",
+def VCVTf2xsd : N2VCvtD<0, 1, 0b1111, 0, 1, "vcvt", "s32.f32",
                         v2i32, v2f32, int_arm_neon_vcvtfp2fxs>;
-def VCVTf2xud : N2VCvtD<1, 1, 0b000000, 0b1111, 0, 1, "vcvt.u32.f32",
+def VCVTf2xud : N2VCvtD<1, 1, 0b1111, 0, 1, "vcvt", "u32.f32",
                         v2i32, v2f32, int_arm_neon_vcvtfp2fxu>;
-def VCVTxs2fd : N2VCvtD<0, 1, 0b000000, 0b1110, 0, 1, "vcvt.f32.s32",
+def VCVTxs2fd : N2VCvtD<0, 1, 0b1110, 0, 1, "vcvt", "f32.s32",
                         v2f32, v2i32, int_arm_neon_vcvtfxs2fp>;
-def VCVTxu2fd : N2VCvtD<1, 1, 0b000000, 0b1110, 0, 1, "vcvt.f32.u32",
+def VCVTxu2fd : N2VCvtD<1, 1, 0b1110, 0, 1, "vcvt", "f32.u32",
                         v2f32, v2i32, int_arm_neon_vcvtfxu2fp>;
 
-def VCVTf2xsq : N2VCvtQ<0, 1, 0b000000, 0b1111, 0, 1, "vcvt.s32.f32",
+def VCVTf2xsq : N2VCvtQ<0, 1, 0b1111, 0, 1, "vcvt", "s32.f32",
                         v4i32, v4f32, int_arm_neon_vcvtfp2fxs>;
-def VCVTf2xuq : N2VCvtQ<1, 1, 0b000000, 0b1111, 0, 1, "vcvt.u32.f32",
+def VCVTf2xuq : N2VCvtQ<1, 1, 0b1111, 0, 1, "vcvt", "u32.f32",
                         v4i32, v4f32, int_arm_neon_vcvtfp2fxu>;
-def VCVTxs2fq : N2VCvtQ<0, 1, 0b000000, 0b1110, 0, 1, "vcvt.f32.s32",
+def VCVTxs2fq : N2VCvtQ<0, 1, 0b1110, 0, 1, "vcvt", "f32.s32",
                         v4f32, v4i32, int_arm_neon_vcvtfxs2fp>;
-def VCVTxu2fq : N2VCvtQ<1, 1, 0b000000, 0b1110, 0, 1, "vcvt.f32.u32",
+def VCVTxu2fq : N2VCvtQ<1, 1, 0b1110, 0, 1, "vcvt", "f32.u32",
                         v4f32, v4i32, int_arm_neon_vcvtfxu2fp>;
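
As a worked example of the fixed-point forms (a sketch that ignores the
saturating and rounding behavior of the real instruction): vcvt.s32.f32 with
an fbits operand scales by 2^fbits before truncating, so 1.5f with #16
becomes 98304.

  #include <cstdint>
  // Per-lane fixed-point conversion sketch (assumed helper name).
  int32_t cvtToFixedS32(float F, unsigned FBits) {
    return (int32_t)(F * (float)(1u << FBits)); // truncates toward zero
  }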
 
 // Vector Reverse.
 
 //   VREV64   : Vector Reverse elements within 64-bit doublewords
 
-class VREV64D<bits<2> op19_18, string OpcodeStr, ValueType Ty>
+class VREV64D<bits<2> op19_18, string OpcodeStr, string Dt, ValueType Ty>
   : N2V<0b11, 0b11, op19_18, 0b00, 0b00000, 0, 0, (outs DPR:$dst),
         (ins DPR:$src), IIC_VMOVD, 
-        !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        OpcodeStr, Dt, "$dst, $src", "",
         [(set DPR:$dst, (Ty (NEONvrev64 (Ty DPR:$src))))]>;
-class VREV64Q<bits<2> op19_18, string OpcodeStr, ValueType Ty>
+class VREV64Q<bits<2> op19_18, string OpcodeStr, string Dt, ValueType Ty>
   : N2V<0b11, 0b11, op19_18, 0b00, 0b00000, 1, 0, (outs QPR:$dst),
         (ins QPR:$src), IIC_VMOVD, 
-        !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        OpcodeStr, Dt, "$dst, $src", "",
         [(set QPR:$dst, (Ty (NEONvrev64 (Ty QPR:$src))))]>;
 
-def VREV64d8  : VREV64D<0b00, "vrev64.8", v8i8>;
-def VREV64d16 : VREV64D<0b01, "vrev64.16", v4i16>;
-def VREV64d32 : VREV64D<0b10, "vrev64.32", v2i32>;
-def VREV64df  : VREV64D<0b10, "vrev64.32", v2f32>;
+def VREV64d8  : VREV64D<0b00, "vrev64", "8", v8i8>;
+def VREV64d16 : VREV64D<0b01, "vrev64", "16", v4i16>;
+def VREV64d32 : VREV64D<0b10, "vrev64", "32", v2i32>;
+def VREV64df  : VREV64D<0b10, "vrev64", "32", v2f32>;
 
-def VREV64q8  : VREV64Q<0b00, "vrev64.8", v16i8>;
-def VREV64q16 : VREV64Q<0b01, "vrev64.16", v8i16>;
-def VREV64q32 : VREV64Q<0b10, "vrev64.32", v4i32>;
-def VREV64qf  : VREV64Q<0b10, "vrev64.32", v4f32>;
+def VREV64q8  : VREV64Q<0b00, "vrev64", "8", v16i8>;
+def VREV64q16 : VREV64Q<0b01, "vrev64", "16", v8i16>;
+def VREV64q32 : VREV64Q<0b10, "vrev64", "32", v4i32>;
+def VREV64qf  : VREV64Q<0b10, "vrev64", "32", v4f32>;
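
A byte-granularity sketch of what vrev64.8 does within each doubleword
(assumed helper name, not LLVM code):

  #include <cstdint>
  // Reverse the eight 8-bit elements of one 64-bit doubleword.
  void vrev64_8(const uint8_t Src[8], uint8_t Dst[8]) {
    for (unsigned i = 0; i != 8; ++i)
      Dst[i] = Src[7 - i];
  }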
 
 //   VREV32   : Vector Reverse elements within 32-bit words
 
-class VREV32D<bits<2> op19_18, string OpcodeStr, ValueType Ty>
+class VREV32D<bits<2> op19_18, string OpcodeStr, string Dt, ValueType Ty>
   : N2V<0b11, 0b11, op19_18, 0b00, 0b00001, 0, 0, (outs DPR:$dst),
         (ins DPR:$src), IIC_VMOVD, 
-        !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        OpcodeStr, Dt, "$dst, $src", "",
         [(set DPR:$dst, (Ty (NEONvrev32 (Ty DPR:$src))))]>;
-class VREV32Q<bits<2> op19_18, string OpcodeStr, ValueType Ty>
+class VREV32Q<bits<2> op19_18, string OpcodeStr, string Dt, ValueType Ty>
   : N2V<0b11, 0b11, op19_18, 0b00, 0b00001, 1, 0, (outs QPR:$dst),
         (ins QPR:$src), IIC_VMOVD, 
-        !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        OpcodeStr, Dt, "$dst, $src", "",
         [(set QPR:$dst, (Ty (NEONvrev32 (Ty QPR:$src))))]>;
 
-def VREV32d8  : VREV32D<0b00, "vrev32.8", v8i8>;
-def VREV32d16 : VREV32D<0b01, "vrev32.16", v4i16>;
+def VREV32d8  : VREV32D<0b00, "vrev32", "8", v8i8>;
+def VREV32d16 : VREV32D<0b01, "vrev32", "16", v4i16>;
 
-def VREV32q8  : VREV32Q<0b00, "vrev32.8", v16i8>;
-def VREV32q16 : VREV32Q<0b01, "vrev32.16", v8i16>;
+def VREV32q8  : VREV32Q<0b00, "vrev32", "8", v16i8>;
+def VREV32q16 : VREV32Q<0b01, "vrev32", "16", v8i16>;
 
 //   VREV16   : Vector Reverse elements within 16-bit halfwords
 
-class VREV16D<bits<2> op19_18, string OpcodeStr, ValueType Ty>
+class VREV16D<bits<2> op19_18, string OpcodeStr, string Dt, ValueType Ty>
   : N2V<0b11, 0b11, op19_18, 0b00, 0b00010, 0, 0, (outs DPR:$dst),
         (ins DPR:$src), IIC_VMOVD, 
-        !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        OpcodeStr, Dt, "$dst, $src", "",
         [(set DPR:$dst, (Ty (NEONvrev16 (Ty DPR:$src))))]>;
-class VREV16Q<bits<2> op19_18, string OpcodeStr, ValueType Ty>
+class VREV16Q<bits<2> op19_18, string OpcodeStr, string Dt, ValueType Ty>
   : N2V<0b11, 0b11, op19_18, 0b00, 0b00010, 1, 0, (outs QPR:$dst),
         (ins QPR:$src), IIC_VMOVD, 
-        !strconcat(OpcodeStr, "\t$dst, $src"), "",
+        OpcodeStr, Dt, "$dst, $src", "",
         [(set QPR:$dst, (Ty (NEONvrev16 (Ty QPR:$src))))]>;
 
-def VREV16d8  : VREV16D<0b00, "vrev16.8", v8i8>;
-def VREV16q8  : VREV16Q<0b00, "vrev16.8", v16i8>;
+def VREV16d8  : VREV16D<0b00, "vrev16", "8", v8i8>;
+def VREV16q8  : VREV16Q<0b00, "vrev16", "8", v16i8>;
 
 // Other Vector Shuffles.
 
 //   VEXT     : Vector Extract
 
-class VEXTd<string OpcodeStr, ValueType Ty>
-  : N3V<0,1,0b11,0b0000,0,0, (outs DPR:$dst),
+class VEXTd<string OpcodeStr, string Dt, ValueType Ty>
+  : N3V<0,1,0b11,{?,?,?,?},0,0, (outs DPR:$dst),
         (ins DPR:$lhs, DPR:$rhs, i32imm:$index), IIC_VEXTD,
-        !strconcat(OpcodeStr, "\t$dst, $lhs, $rhs, $index"), "",
+        OpcodeStr, Dt, "$dst, $lhs, $rhs, $index", "",
         [(set DPR:$dst, (Ty (NEONvext (Ty DPR:$lhs),
                                       (Ty DPR:$rhs), imm:$index)))]>;
 
-class VEXTq<string OpcodeStr, ValueType Ty>
-  : N3V<0,1,0b11,0b0000,1,0, (outs QPR:$dst),
+class VEXTq<string OpcodeStr, string Dt, ValueType Ty>
+  : N3V<0,1,0b11,{?,?,?,?},1,0, (outs QPR:$dst),
         (ins QPR:$lhs, QPR:$rhs, i32imm:$index), IIC_VEXTQ,
-        !strconcat(OpcodeStr, "\t$dst, $lhs, $rhs, $index"), "",
+        OpcodeStr, Dt, "$dst, $lhs, $rhs, $index", "",
         [(set QPR:$dst, (Ty (NEONvext (Ty QPR:$lhs),
                                       (Ty QPR:$rhs), imm:$index)))]>;
 
-def VEXTd8  : VEXTd<"vext.8",  v8i8>;
-def VEXTd16 : VEXTd<"vext.16", v4i16>;
-def VEXTd32 : VEXTd<"vext.32", v2i32>;
-def VEXTdf  : VEXTd<"vext.32", v2f32>;
+def VEXTd8  : VEXTd<"vext", "8",  v8i8>;
+def VEXTd16 : VEXTd<"vext", "16", v4i16>;
+def VEXTd32 : VEXTd<"vext", "32", v2i32>;
+def VEXTdf  : VEXTd<"vext", "32", v2f32>;
 
-def VEXTq8  : VEXTq<"vext.8",  v16i8>;
-def VEXTq16 : VEXTq<"vext.16", v8i16>;
-def VEXTq32 : VEXTq<"vext.32", v4i32>;
-def VEXTqf  : VEXTq<"vext.32", v4f32>;
+def VEXTq8  : VEXTq<"vext", "8",  v16i8>;
+def VEXTq16 : VEXTq<"vext", "16", v8i16>;
+def VEXTq32 : VEXTq<"vext", "32", v4i32>;
+def VEXTqf  : VEXTq<"vext", "32", v4f32>;
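
What vext computes, as a hedged standalone sketch: the result is drawn from
the concatenation of the two source vectors, starting $index elements into
the first.

  #include <cstdint>
  // vext.8 dD, dN, dM, #Idx: elements Idx..7 of the first operand
  // followed by elements 0..Idx-1 of the second.
  void vext8(const uint8_t N[8], const uint8_t M[8], unsigned Idx,
             uint8_t D[8]) {
    for (unsigned i = 0; i != 8; ++i)
      D[i] = (Idx + i < 8) ? N[Idx + i] : M[Idx + i - 8];
  }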
 
 //   VTRN     : Vector Transpose
 
-def  VTRNd8   : N2VDShuffle<0b00, 0b00001, "vtrn.8">;
-def  VTRNd16  : N2VDShuffle<0b01, 0b00001, "vtrn.16">;
-def  VTRNd32  : N2VDShuffle<0b10, 0b00001, "vtrn.32">;
+def  VTRNd8   : N2VDShuffle<0b00, 0b00001, "vtrn", "8">;
+def  VTRNd16  : N2VDShuffle<0b01, 0b00001, "vtrn", "16">;
+def  VTRNd32  : N2VDShuffle<0b10, 0b00001, "vtrn", "32">;
 
-def  VTRNq8   : N2VQShuffle<0b00, 0b00001, IIC_VPERMQ, "vtrn.8">;
-def  VTRNq16  : N2VQShuffle<0b01, 0b00001, IIC_VPERMQ, "vtrn.16">;
-def  VTRNq32  : N2VQShuffle<0b10, 0b00001, IIC_VPERMQ, "vtrn.32">;
+def  VTRNq8   : N2VQShuffle<0b00, 0b00001, IIC_VPERMQ, "vtrn", "8">;
+def  VTRNq16  : N2VQShuffle<0b01, 0b00001, IIC_VPERMQ, "vtrn", "16">;
+def  VTRNq32  : N2VQShuffle<0b10, 0b00001, IIC_VPERMQ, "vtrn", "32">;
 
 //   VUZP     : Vector Unzip (Deinterleave)
 
-def  VUZPd8   : N2VDShuffle<0b00, 0b00010, "vuzp.8">;
-def  VUZPd16  : N2VDShuffle<0b01, 0b00010, "vuzp.16">;
-def  VUZPd32  : N2VDShuffle<0b10, 0b00010, "vuzp.32">;
+def  VUZPd8   : N2VDShuffle<0b00, 0b00010, "vuzp", "8">;
+def  VUZPd16  : N2VDShuffle<0b01, 0b00010, "vuzp", "16">;
+def  VUZPd32  : N2VDShuffle<0b10, 0b00010, "vuzp", "32">;
 
-def  VUZPq8   : N2VQShuffle<0b00, 0b00010, IIC_VPERMQ3, "vuzp.8">;
-def  VUZPq16  : N2VQShuffle<0b01, 0b00010, IIC_VPERMQ3, "vuzp.16">;
-def  VUZPq32  : N2VQShuffle<0b10, 0b00010, IIC_VPERMQ3, "vuzp.32">;
+def  VUZPq8   : N2VQShuffle<0b00, 0b00010, IIC_VPERMQ3, "vuzp", "8">;
+def  VUZPq16  : N2VQShuffle<0b01, 0b00010, IIC_VPERMQ3, "vuzp", "16">;
+def  VUZPq32  : N2VQShuffle<0b10, 0b00010, IIC_VPERMQ3, "vuzp", "32">;
 
 //   VZIP     : Vector Zip (Interleave)
 
-def  VZIPd8   : N2VDShuffle<0b00, 0b00011, "vzip.8">;
-def  VZIPd16  : N2VDShuffle<0b01, 0b00011, "vzip.16">;
-def  VZIPd32  : N2VDShuffle<0b10, 0b00011, "vzip.32">;
+def  VZIPd8   : N2VDShuffle<0b00, 0b00011, "vzip", "8">;
+def  VZIPd16  : N2VDShuffle<0b01, 0b00011, "vzip", "16">;
+def  VZIPd32  : N2VDShuffle<0b10, 0b00011, "vzip", "32">;
 
-def  VZIPq8   : N2VQShuffle<0b00, 0b00011, IIC_VPERMQ3, "vzip.8">;
-def  VZIPq16  : N2VQShuffle<0b01, 0b00011, IIC_VPERMQ3, "vzip.16">;
-def  VZIPq32  : N2VQShuffle<0b10, 0b00011, IIC_VPERMQ3, "vzip.32">;
+def  VZIPq8   : N2VQShuffle<0b00, 0b00011, IIC_VPERMQ3, "vzip", "8">;
+def  VZIPq16  : N2VQShuffle<0b01, 0b00011, IIC_VPERMQ3, "vzip", "16">;
+def  VZIPq32  : N2VQShuffle<0b10, 0b00011, IIC_VPERMQ3, "vzip", "32">;
 
 // Vector Table Lookup and Table Extension.
 
@@ -2550,25 +3016,25 @@ def  VZIPq32  : N2VQShuffle<0b10, 0b00011, IIC_VPERMQ3, "vzip.32">;
 def  VTBL1
   : N3V<1,1,0b11,0b1000,0,0, (outs DPR:$dst),
         (ins DPR:$tbl1, DPR:$src), IIC_VTB1,
-        "vtbl.8\t$dst, \\{$tbl1\\}, $src", "",
+        "vtbl", "8", "$dst, \\{$tbl1\\}, $src", "",
         [(set DPR:$dst, (v8i8 (int_arm_neon_vtbl1 DPR:$tbl1, DPR:$src)))]>;
 let hasExtraSrcRegAllocReq = 1 in {
 def  VTBL2
   : N3V<1,1,0b11,0b1001,0,0, (outs DPR:$dst),
         (ins DPR:$tbl1, DPR:$tbl2, DPR:$src), IIC_VTB2,
-        "vtbl.8\t$dst, \\{$tbl1,$tbl2\\}, $src", "",
+        "vtbl", "8", "$dst, \\{$tbl1,$tbl2\\}, $src", "",
         [(set DPR:$dst, (v8i8 (int_arm_neon_vtbl2
                                DPR:$tbl1, DPR:$tbl2, DPR:$src)))]>;
 def  VTBL3
   : N3V<1,1,0b11,0b1010,0,0, (outs DPR:$dst),
         (ins DPR:$tbl1, DPR:$tbl2, DPR:$tbl3, DPR:$src), IIC_VTB3,
-        "vtbl.8\t$dst, \\{$tbl1,$tbl2,$tbl3\\}, $src", "",
+        "vtbl", "8", "$dst, \\{$tbl1,$tbl2,$tbl3\\}, $src", "",
         [(set DPR:$dst, (v8i8 (int_arm_neon_vtbl3
                                DPR:$tbl1, DPR:$tbl2, DPR:$tbl3, DPR:$src)))]>;
 def  VTBL4
   : N3V<1,1,0b11,0b1011,0,0, (outs DPR:$dst),
         (ins DPR:$tbl1, DPR:$tbl2, DPR:$tbl3, DPR:$tbl4, DPR:$src), IIC_VTB4,
-        "vtbl.8\t$dst, \\{$tbl1,$tbl2,$tbl3,$tbl4\\}, $src", "",
+        "vtbl", "8", "$dst, \\{$tbl1,$tbl2,$tbl3,$tbl4\\}, $src", "",
         [(set DPR:$dst, (v8i8 (int_arm_neon_vtbl4 DPR:$tbl1, DPR:$tbl2,
                                DPR:$tbl3, DPR:$tbl4, DPR:$src)))]>;
 } // hasExtraSrcRegAllocReq = 1
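
The one-register table lookup behaves as sketched below (assumed helper
name). The vtbx variants that follow differ only in leaving the destination
byte unchanged for out-of-range indices, which is why they carry the
"$orig = $dst" constraint.

  #include <cstdint>
  // vtbl.8 with a single table register: each index byte selects a table
  // byte; indices >= 8 produce zero.
  void vtbl1(const uint8_t Tbl[8], const uint8_t Idx[8], uint8_t D[8]) {
    for (unsigned i = 0; i != 8; ++i)
      D[i] = (Idx[i] < 8) ? Tbl[Idx[i]] : 0;
  }
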
@@ -2577,26 +3043,26 @@ def  VTBL4
 def  VTBX1
   : N3V<1,1,0b11,0b1000,1,0, (outs DPR:$dst),
         (ins DPR:$orig, DPR:$tbl1, DPR:$src), IIC_VTBX1,
-        "vtbx.8\t$dst, \\{$tbl1\\}, $src", "$orig = $dst",
+        "vtbx", "8", "$dst, \\{$tbl1\\}, $src", "$orig = $dst",
         [(set DPR:$dst, (v8i8 (int_arm_neon_vtbx1
                                DPR:$orig, DPR:$tbl1, DPR:$src)))]>;
 let hasExtraSrcRegAllocReq = 1 in {
 def  VTBX2
   : N3V<1,1,0b11,0b1001,1,0, (outs DPR:$dst),
         (ins DPR:$orig, DPR:$tbl1, DPR:$tbl2, DPR:$src), IIC_VTBX2,
-        "vtbx.8\t$dst, \\{$tbl1,$tbl2\\}, $src", "$orig = $dst",
+        "vtbx", "8", "$dst, \\{$tbl1,$tbl2\\}, $src", "$orig = $dst",
         [(set DPR:$dst, (v8i8 (int_arm_neon_vtbx2
                                DPR:$orig, DPR:$tbl1, DPR:$tbl2, DPR:$src)))]>;
 def  VTBX3
   : N3V<1,1,0b11,0b1010,1,0, (outs DPR:$dst),
         (ins DPR:$orig, DPR:$tbl1, DPR:$tbl2, DPR:$tbl3, DPR:$src), IIC_VTBX3,
-        "vtbx.8\t$dst, \\{$tbl1,$tbl2,$tbl3\\}, $src", "$orig = $dst",
+        "vtbx", "8", "$dst, \\{$tbl1,$tbl2,$tbl3\\}, $src", "$orig = $dst",
         [(set DPR:$dst, (v8i8 (int_arm_neon_vtbx3 DPR:$orig, DPR:$tbl1,
                                DPR:$tbl2, DPR:$tbl3, DPR:$src)))]>;
 def  VTBX4
   : N3V<1,1,0b11,0b1011,1,0, (outs DPR:$dst), (ins DPR:$orig, DPR:$tbl1,
         DPR:$tbl2, DPR:$tbl3, DPR:$tbl4, DPR:$src), IIC_VTBX4,
-        "vtbx.8\t$dst, \\{$tbl1,$tbl2,$tbl3,$tbl4\\}, $src", "$orig = $dst",
+        "vtbx", "8", "$dst, \\{$tbl1,$tbl2,$tbl3,$tbl4\\}, $src", "$orig = $dst",
         [(set DPR:$dst, (v8i8 (int_arm_neon_vtbx4 DPR:$orig, DPR:$tbl1,
                                DPR:$tbl2, DPR:$tbl3, DPR:$tbl4, DPR:$src)))]>;
 } // hasExtraSrcRegAllocReq = 1
@@ -2610,32 +3076,35 @@ def  VTBX4
 
 // Vector Add Operations used for single-precision FP
 let neverHasSideEffects = 1 in
-def VADDfd_sfp : N3VDs<0, 0, 0b00, 0b1101, 0, "vadd.f32", v2f32, v2f32, fadd,1>;
+def VADDfd_sfp : N3VDs<0, 0, 0b00, 0b1101, 0, "vadd", "f32", v2f32, v2f32, fadd,1>;
 def : N3VDsPat<fadd, VADDfd_sfp>;
 
 // Vector Sub Operations used for single-precision FP
 let neverHasSideEffects = 1 in
-def VSUBfd_sfp : N3VDs<0, 0, 0b10, 0b1101, 0, "vsub.f32", v2f32, v2f32, fsub,0>;
+def VSUBfd_sfp : N3VDs<0, 0, 0b10, 0b1101, 0, "vsub", "f32", v2f32, v2f32, fsub,0>;
 def : N3VDsPat<fsub, VSUBfd_sfp>;
 
 // Vector Multiply Operations used for single-precision FP
 let neverHasSideEffects = 1 in
-def VMULfd_sfp : N3VDs<1, 0, 0b00, 0b1101, 1, "vmul.f32", v2f32, v2f32, fmul,1>;
+def VMULfd_sfp : N3VDs<1, 0, 0b00, 0b1101, 1, "vmul", "f32", v2f32, v2f32, fmul,1>;
 def : N3VDsPat<fmul, VMULfd_sfp>;
 
 // Vector Multiply-Accumulate/Subtract used for single-precision FP
-let neverHasSideEffects = 1 in
-def VMLAfd_sfp : N3VDMulOps<0, 0, 0b00, 0b1101, 1, IIC_VMACD, "vmla.f32", v2f32,fmul,fadd>;
-def : N3VDMulOpsPat<fmul, fadd, VMLAfd_sfp>;
+// vml[as].f32 can cause 4-8 cycle stalls in following ASIMD instructions, so
+// we want to avoid them for now, e.g. in sequences of alternating vmla and
+// vadd instructions.
 
-let neverHasSideEffects = 1 in
-def VMLSfd_sfp : N3VDMulOps<0, 0, 0b10, 0b1101, 1, IIC_VMACD, "vmls.f32", v2f32,fmul,fsub>;
-def : N3VDMulOpsPat<fmul, fsub, VMLSfd_sfp>;
+//let neverHasSideEffects = 1 in
+//def VMLAfd_sfp : N3VDMulOps<0, 0, 0b00, 0b1101, 1, IIC_VMACD, "vmla", "f32", v2f32,fmul,fadd>;
+//def : N3VDMulOpsPat<fmul, fadd, VMLAfd_sfp>;
+
+//let neverHasSideEffects = 1 in
+//def VMLSfd_sfp : N3VDMulOps<0, 0, 0b10, 0b1101, 1, IIC_VMACD, "vmls", "f32", v2f32,fmul,fsub>;
+//def : N3VDMulOpsPat<fmul, fsub, VMLSfd_sfp>;
 
 // Vector Absolute used for single-precision FP
 let neverHasSideEffects = 1 in
 def  VABSfd_sfp : N2VDInts<0b11, 0b11, 0b10, 0b01, 0b01110, 0,
-                           IIC_VUNAD, "vabs.f32",
+                           IIC_VUNAD, "vabs", "f32",
                            v2f32, v2f32, int_arm_neon_vabs>;
 def : N2VDIntsPat<fabs, VABSfd_sfp>;
 
@@ -2643,27 +3112,27 @@ def : N2VDIntsPat<fabs, VABSfd_sfp>;
 let neverHasSideEffects = 1 in
 def  VNEGf32d_sfp : N2V<0b11, 0b11, 0b10, 0b01, 0b01111, 0, 0,
                         (outs DPR_VFP2:$dst), (ins DPR_VFP2:$src), IIC_VUNAD,
-                        "vneg.f32\t$dst, $src", "", []>;
+                        "vneg", "f32", "$dst, $src", "", []>;
 def : N2VDIntsPat<fneg, VNEGf32d_sfp>;
 
 // Vector Convert between single-precision FP and integer
 let neverHasSideEffects = 1 in
-def  VCVTf2sd_sfp : N2VDs<0b11, 0b11, 0b10, 0b11, 0b01110, 0, "vcvt.s32.f32",
+def  VCVTf2sd_sfp : N2VDs<0b11, 0b11, 0b10, 0b11, 0b01110, 0, "vcvt", "s32.f32",
                           v2i32, v2f32, fp_to_sint>;
 def : N2VDsPat<arm_ftosi, f32, v2f32, VCVTf2sd_sfp>;
 
 let neverHasSideEffects = 1 in
-def  VCVTf2ud_sfp : N2VDs<0b11, 0b11, 0b10, 0b11, 0b01111, 0, "vcvt.u32.f32",
+def  VCVTf2ud_sfp : N2VDs<0b11, 0b11, 0b10, 0b11, 0b01111, 0, "vcvt", "u32.f32",
                           v2i32, v2f32, fp_to_uint>;
 def : N2VDsPat<arm_ftoui, f32, v2f32, VCVTf2ud_sfp>;
 
 let neverHasSideEffects = 1 in
-def  VCVTs2fd_sfp : N2VDs<0b11, 0b11, 0b10, 0b11, 0b01100, 0, "vcvt.f32.s32",
+def  VCVTs2fd_sfp : N2VDs<0b11, 0b11, 0b10, 0b11, 0b01100, 0, "vcvt", "f32.s32",
                           v2f32, v2i32, sint_to_fp>;
 def : N2VDsPat<arm_sitof, f32, v2i32, VCVTs2fd_sfp>;
 
 let neverHasSideEffects = 1 in
-def  VCVTu2fd_sfp : N2VDs<0b11, 0b11, 0b10, 0b11, 0b01101, 0, "vcvt.f32.u32",
+def  VCVTu2fd_sfp : N2VDs<0b11, 0b11, 0b10, 0b11, 0b01101, 0, "vcvt", "f32.u32",
                           v2f32, v2i32, uint_to_fp>;
 def : N2VDsPat<arm_uitof, f32, v2i32, VCVTu2fd_sfp>;
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td
index 9816add..b5956a3 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td
@@ -66,6 +66,11 @@ def thumb_immshifted_shamt : SDNodeXForm<imm, [{
   return CurDAG->getTargetConstant(V, MVT::i32);
 }]>;
 
+// Immediate operand scaled by 4.
+def t_imm_s4 : Operand<i32> {
+  let PrintMethod = "printThumbS4ImmOperand";
+}
+
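printThumbS4ImmOperand is only referenced here. A minimal sketch of the
convention it implements, assuming the operand stores the immediate divided
by 4 as the old "$rhs * 4" asm strings below suggest:

  #include <cstdio>
  // Print a t_imm_s4-style operand: the encoded value is a word count,
  // so it is scaled back up by 4 for display.
  void printThumbS4Imm(int EncodedImm) {
    std::printf("#%d", EncodedImm * 4); // encoded 3 prints as "#12"
  }
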
 // Define Thumb specific addressing modes.
 
 // t_addrmode_rr := reg + reg
@@ -130,61 +135,67 @@ PseudoInst<(outs), (ins i32imm:$amt), NoItinerary,
 // For both thumb1 and thumb2.
 let isNotDuplicable = 1 in
 def tPICADD : TIt<(outs GPR:$dst), (ins GPR:$lhs, pclabel:$cp), IIC_iALUr,
-                 "\n$cp:\n\tadd $dst, pc",
+                 "\n$cp:\n\tadd\t$dst, pc",
                  [(set GPR:$dst, (ARMpic_add GPR:$lhs, imm:$cp))]>;
 
 // PC relative add.
-def tADDrPCi : T1I<(outs tGPR:$dst), (ins i32imm:$rhs), IIC_iALUi,
-                  "add $dst, pc, $rhs * 4", []>;
+def tADDrPCi : T1I<(outs tGPR:$dst), (ins t_imm_s4:$rhs), IIC_iALUi,
+                  "add\t$dst, pc, $rhs", []>;
 
 // ADD rd, sp, #imm8
-def tADDrSPi : T1I<(outs tGPR:$dst), (ins GPR:$sp, i32imm:$rhs), IIC_iALUi,
-                  "add $dst, $sp, $rhs * 4", []>;
+def tADDrSPi : T1I<(outs tGPR:$dst), (ins GPR:$sp, t_imm_s4:$rhs), IIC_iALUi,
+                  "add\t$dst, $sp, $rhs", []>;
 
 // ADD sp, sp, #imm7
-def tADDspi : TIt<(outs GPR:$dst), (ins GPR:$lhs, i32imm:$rhs), IIC_iALUi,
-                  "add $dst, $rhs * 4", []>;
+def tADDspi : TIt<(outs GPR:$dst), (ins GPR:$lhs, t_imm_s4:$rhs), IIC_iALUi,
+                  "add\t$dst, $rhs", []>;
 
 // SUB sp, sp, #imm7
-def tSUBspi : TIt<(outs GPR:$dst), (ins GPR:$lhs, i32imm:$rhs), IIC_iALUi,
-                  "sub $dst, $rhs * 4", []>;
+def tSUBspi : TIt<(outs GPR:$dst), (ins GPR:$lhs, t_imm_s4:$rhs), IIC_iALUi,
+                  "sub\t$dst, $rhs", []>;
 
 // ADD rm, sp
 def tADDrSP : TIt<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
-                  "add $dst, $rhs", []>;
+                  "add\t$dst, $rhs", []>;
 
 // ADD sp, rm
 def tADDspr : TIt<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
-                  "add $dst, $rhs", []>;
+                  "add\t$dst, $rhs", []>;
 
 // Pseudo instruction that will expand into a tSUBspi + a copy.
-let usesCustomDAGSchedInserter = 1 in { // Expanded by the scheduler.
-def tSUBspi_ : PseudoInst<(outs GPR:$dst), (ins GPR:$lhs, i32imm:$rhs),
-               NoItinerary, "@ sub $dst, $rhs * 4", []>;
+let usesCustomInserter = 1 in { // Expanded after instruction selection.
+def tSUBspi_ : PseudoInst<(outs GPR:$dst), (ins GPR:$lhs, t_imm_s4:$rhs),
+               NoItinerary, "@ sub\t$dst, $rhs", []>;
 
 def tADDspr_ : PseudoInst<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs),
-               NoItinerary, "@ add $dst, $rhs", []>;
+               NoItinerary, "@ add\t$dst, $rhs", []>;
 
 let Defs = [CPSR] in
 def tANDsp : PseudoInst<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs),
-             NoItinerary, "@ and $dst, $rhs", []>;
-} // usesCustomDAGSchedInserter
+             NoItinerary, "@ and\t$dst, $rhs", []>;
+} // usesCustomInserter
 
 //===----------------------------------------------------------------------===//
 //  Control Flow Instructions.
 //
 
 let isReturn = 1, isTerminator = 1, isBarrier = 1 in {
-  def tBX_RET : TI<(outs), (ins), IIC_Br, "bx lr", [(ARMretflag)]>;
+  def tBX_RET : TI<(outs), (ins), IIC_Br, "bx\tlr", [(ARMretflag)]>;
   // Alternative return instruction used by vararg functions.
-  def tBX_RET_vararg : TI<(outs), (ins tGPR:$target), IIC_Br, "bx $target", []>;
+  def tBX_RET_vararg : TI<(outs), (ins tGPR:$target), IIC_Br, "bx\t$target", []>;
+}
+
+// Indirect branches
+let isBranch = 1, isTerminator = 1, isBarrier = 1, isIndirectBranch = 1 in {
+  def tBRIND : TI<(outs), (ins GPR:$dst), IIC_Br, "mov\tpc, $dst",
+                  [(brind GPR:$dst)]>;
 }
 
 // FIXME: remove when we have a way to mark an MI with these properties.
 let isReturn = 1, isTerminator = 1, isBarrier = 1, mayLoad = 1,
     hasExtraDefRegAllocReq = 1 in
 def tPOP_RET : T1I<(outs), (ins pred:$p, reglist:$wb, variable_ops), IIC_Br,
-                   "pop${p} $wb", []>;
+                   "pop${p}\t$wb", []>;
 
 let isCall = 1,
   Defs = [R0,  R1,  R2,  R3,  R12, LR,
@@ -193,25 +204,25 @@ let isCall = 1,
           D24, D25, D26, D27, D28, D29, D30, D31, CPSR, FPSCR] in {
   // Also used for Thumb2
   def tBL  : TIx2<(outs), (ins i32imm:$func, variable_ops), IIC_Br, 
-                   "bl ${func:call}",
+                   "bl\t${func:call}",
                    [(ARMtcall tglobaladdr:$func)]>,
              Requires<[IsThumb, IsNotDarwin]>;
 
   // ARMv5T and above, also used for Thumb2
   def tBLXi : TIx2<(outs), (ins i32imm:$func, variable_ops), IIC_Br, 
-                    "blx ${func:call}",
+                    "blx\t${func:call}",
                     [(ARMcall tglobaladdr:$func)]>,
               Requires<[IsThumb, HasV5T, IsNotDarwin]>;
 
   // Also used for Thumb2
   def tBLXr : TI<(outs), (ins GPR:$func, variable_ops), IIC_Br, 
-                  "blx $func",
+                  "blx\t$func",
                   [(ARMtcall GPR:$func)]>,
               Requires<[IsThumb, HasV5T, IsNotDarwin]>;
 
   // ARMv4T
   def tBX : TIx2<(outs), (ins tGPR:$func, variable_ops), IIC_Br, 
-                  "mov lr, pc\n\tbx $func",
+                  "mov\tlr, pc\n\tbx\t$func",
                   [(ARMcall_nolink tGPR:$func)]>,
             Requires<[IsThumb1Only, IsNotDarwin]>;
 }
@@ -224,25 +235,25 @@ let isCall = 1,
           D24, D25, D26, D27, D28, D29, D30, D31, CPSR, FPSCR] in {
   // Also used for Thumb2
   def tBLr9 : TIx2<(outs), (ins i32imm:$func, variable_ops), IIC_Br, 
-                   "bl ${func:call}",
+                   "bl\t${func:call}",
                    [(ARMtcall tglobaladdr:$func)]>,
               Requires<[IsThumb, IsDarwin]>;
 
   // ARMv5T and above, also used for Thumb2
   def tBLXi_r9 : TIx2<(outs), (ins i32imm:$func, variable_ops), IIC_Br, 
-                      "blx ${func:call}",
+                      "blx\t${func:call}",
                       [(ARMcall tglobaladdr:$func)]>,
                  Requires<[IsThumb, HasV5T, IsDarwin]>;
 
   // Also used for Thumb2
   def tBLXr_r9 : TI<(outs), (ins GPR:$func, variable_ops), IIC_Br, 
-                  "blx $func",
+                  "blx\t$func",
                   [(ARMtcall GPR:$func)]>,
                  Requires<[IsThumb, HasV5T, IsDarwin]>;
 
   // ARMv4T
   def tBXr9 : TIx2<(outs), (ins tGPR:$func, variable_ops), IIC_Br, 
-                  "mov lr, pc\n\tbx $func",
+                  "mov\tlr, pc\n\tbx\t$func",
                   [(ARMcall_nolink tGPR:$func)]>,
               Requires<[IsThumb1Only, IsDarwin]>;
 }
@@ -251,16 +262,16 @@ let isBranch = 1, isTerminator = 1 in {
   let isBarrier = 1 in {
     let isPredicable = 1 in
     def tB   : T1I<(outs), (ins brtarget:$target), IIC_Br,
-                   "b $target", [(br bb:$target)]>;
+                   "b\t$target", [(br bb:$target)]>;
 
   // Far jump
   let Defs = [LR] in
   def tBfar : TIx2<(outs), (ins brtarget:$target), IIC_Br, 
-                    "bl $target\t@ far jump",[]>;
+                    "bl\t$target\t@ far jump",[]>;
 
   def tBR_JTr : T1JTI<(outs),
                       (ins tGPR:$target, jtblock_operand:$jt, i32imm:$id),
-                      IIC_Br, "mov pc, $target\n\t.align\t2\n$jt",
+                      IIC_Br, "mov\tpc, $target\n\t.align\t2\n$jt",
                       [(ARMbrjt tGPR:$target, tjumptable:$jt, imm:$id)]>;
   }
 }
@@ -269,79 +280,90 @@ let isBranch = 1, isTerminator = 1 in {
 // a two-value operand where a dag node expects two operands. :(
 let isBranch = 1, isTerminator = 1 in
   def tBcc : T1I<(outs), (ins brtarget:$target, pred:$cc), IIC_Br,
-                 "b$cc $target",
+                 "b$cc\t$target",
                  [/*(ARMbrcond bb:$target, imm:$cc)*/]>;
 
+// Compare and branch on zero / non-zero
+let isBranch = 1, isTerminator = 1 in {
+  def tCBZ  : T1I<(outs), (ins tGPR:$cmp, brtarget:$target), IIC_Br,
+                  "cbz\t$cmp, $target", []>;
+
+  def tCBNZ : T1I<(outs), (ins tGPR:$cmp, brtarget:$target), IIC_Br,
+                  "cbnz\t$cmp, $target", []>;
+}
+
 //===----------------------------------------------------------------------===//
 //  Load Store Instructions.
 //
 
-let canFoldAsLoad = 1 in
+let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1 in
 def tLDR : T1pI4<(outs tGPR:$dst), (ins t_addrmode_s4:$addr), IIC_iLoadr, 
-               "ldr", " $dst, $addr",
+               "ldr", "\t$dst, $addr",
                [(set tGPR:$dst, (load t_addrmode_s4:$addr))]>;
 
 def tLDRB : T1pI1<(outs tGPR:$dst), (ins t_addrmode_s1:$addr), IIC_iLoadr,
-                "ldrb", " $dst, $addr",
+                "ldrb", "\t$dst, $addr",
                 [(set tGPR:$dst, (zextloadi8 t_addrmode_s1:$addr))]>;
 
 def tLDRH : T1pI2<(outs tGPR:$dst), (ins t_addrmode_s2:$addr), IIC_iLoadr,
-                "ldrh", " $dst, $addr",
+                "ldrh", "\t$dst, $addr",
                 [(set tGPR:$dst, (zextloadi16 t_addrmode_s2:$addr))]>;
 
 let AddedComplexity = 10 in
 def tLDRSB : T1pI1<(outs tGPR:$dst), (ins t_addrmode_rr:$addr), IIC_iLoadr,
-                 "ldrsb", " $dst, $addr",
+                 "ldrsb", "\t$dst, $addr",
                  [(set tGPR:$dst, (sextloadi8 t_addrmode_rr:$addr))]>;
 
 let AddedComplexity = 10 in
 def tLDRSH : T1pI2<(outs tGPR:$dst), (ins t_addrmode_rr:$addr), IIC_iLoadr,
-                 "ldrsh", " $dst, $addr",
+                 "ldrsh", "\t$dst, $addr",
                  [(set tGPR:$dst, (sextloadi16 t_addrmode_rr:$addr))]>;
 
 let canFoldAsLoad = 1 in
 def tLDRspi : T1pIs<(outs tGPR:$dst), (ins t_addrmode_sp:$addr), IIC_iLoadi,
-                  "ldr", " $dst, $addr",
+                  "ldr", "\t$dst, $addr",
                   [(set tGPR:$dst, (load t_addrmode_sp:$addr))]>;
 
 // Special instruction for restore. It cannot clobber condition register
 // when it's expanded by eliminateCallFramePseudoInstr().
 let canFoldAsLoad = 1, mayLoad = 1 in
 def tRestore : T1pIs<(outs tGPR:$dst), (ins t_addrmode_sp:$addr), IIC_iLoadi,
-                    "ldr", " $dst, $addr", []>;
+                    "ldr", "\t$dst, $addr", []>;
 
 // Load tconstpool
-let canFoldAsLoad = 1 in
+// FIXME: Use ldr.n to work around a Darwin assembler bug.
+let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1 in
 def tLDRpci : T1pIs<(outs tGPR:$dst), (ins i32imm:$addr), IIC_iLoadi,
-                  "ldr", " $dst, $addr",
+                  "ldr", ".n\t$dst, $addr",
                   [(set tGPR:$dst, (load (ARMWrapper tconstpool:$addr)))]>;
 
 // Special LDR for loads from non-pc-relative constpools.
-let canFoldAsLoad = 1, mayLoad = 1, isReMaterializable = 1 in
+let canFoldAsLoad = 1, mayLoad = 1, isReMaterializable = 1,
+    mayHaveSideEffects = 1 in
 def tLDRcp  : T1pIs<(outs tGPR:$dst), (ins i32imm:$addr), IIC_iLoadi,
-                  "ldr", " $dst, $addr", []>;
+                  "ldr", "\t$dst, $addr", []>;
 
 def tSTR : T1pI4<(outs), (ins tGPR:$src, t_addrmode_s4:$addr), IIC_iStorer,
-               "str", " $src, $addr",
+               "str", "\t$src, $addr",
                [(store tGPR:$src, t_addrmode_s4:$addr)]>;
 
 def tSTRB : T1pI1<(outs), (ins tGPR:$src, t_addrmode_s1:$addr), IIC_iStorer,
-                 "strb", " $src, $addr",
+                 "strb", "\t$src, $addr",
                  [(truncstorei8 tGPR:$src, t_addrmode_s1:$addr)]>;
 
 def tSTRH : T1pI2<(outs), (ins tGPR:$src, t_addrmode_s2:$addr), IIC_iStorer,
-                 "strh", " $src, $addr",
+                 "strh", "\t$src, $addr",
                  [(truncstorei16 tGPR:$src, t_addrmode_s2:$addr)]>;
 
 def tSTRspi : T1pIs<(outs), (ins tGPR:$src, t_addrmode_sp:$addr), IIC_iStorei,
-                   "str", " $src, $addr",
+                   "str", "\t$src, $addr",
                    [(store tGPR:$src, t_addrmode_sp:$addr)]>;
 
 let mayStore = 1 in {
 // Special instruction for spill. It cannot clobber condition register
 // when it's expanded by eliminateCallFramePseudoInstr().
 def tSpill : T1pIs<(outs), (ins tGPR:$src, t_addrmode_sp:$addr), IIC_iStorei,
-                  "str", " $src, $addr", []>;
+                  "str", "\t$src, $addr", []>;
 }
 
 //===----------------------------------------------------------------------===//
@@ -353,21 +375,21 @@ let mayLoad = 1, hasExtraDefRegAllocReq = 1 in
 def tLDM : T1I<(outs),
                (ins addrmode4:$addr, pred:$p, reglist:$wb, variable_ops),
                IIC_iLoadm,
-               "ldm${addr:submode}${p} $addr, $wb", []>;
+               "ldm${addr:submode}${p}\t$addr, $wb", []>;
 
 let mayStore = 1, hasExtraSrcRegAllocReq = 1 in
 def tSTM : T1I<(outs),
                (ins addrmode4:$addr, pred:$p, reglist:$wb, variable_ops),
                IIC_iStorem,
-               "stm${addr:submode}${p} $addr, $wb", []>;
+               "stm${addr:submode}${p}\t$addr, $wb", []>;
 
 let mayLoad = 1, Uses = [SP], Defs = [SP], hasExtraDefRegAllocReq = 1 in
 def tPOP : T1I<(outs), (ins pred:$p, reglist:$wb, variable_ops), IIC_Br,
-               "pop${p} $wb", []>;
+               "pop${p}\t$wb", []>;
 
 let mayStore = 1, Uses = [SP], Defs = [SP], hasExtraSrcRegAllocReq = 1 in
 def tPUSH : T1I<(outs), (ins pred:$p, reglist:$wb, variable_ops), IIC_Br,
-                "push${p} $wb", []>;
+                "push${p}\t$wb", []>;
 
 //===----------------------------------------------------------------------===//
 //  Arithmetic Instructions.
@@ -376,66 +398,66 @@ def tPUSH : T1I<(outs), (ins pred:$p, reglist:$wb, variable_ops), IIC_Br,
 // Add with carry register
 let isCommutable = 1, Uses = [CPSR] in
 def tADC : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
-                 "adc", " $dst, $rhs",
+                 "adc", "\t$dst, $rhs",
                  [(set tGPR:$dst, (adde tGPR:$lhs, tGPR:$rhs))]>;
 
 // Add immediate
 def tADDi3 : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iALUi,
-                   "add", " $dst, $lhs, $rhs",
+                   "add", "\t$dst, $lhs, $rhs",
                    [(set tGPR:$dst, (add tGPR:$lhs, imm0_7:$rhs))]>;
 
 def tADDi8 : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iALUi,
-                   "add", " $dst, $rhs",
+                   "add", "\t$dst, $rhs",
                    [(set tGPR:$dst, (add tGPR:$lhs, imm8_255:$rhs))]>;
 
 // Add register
 let isCommutable = 1 in
 def tADDrr : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
-                   "add", " $dst, $lhs, $rhs",
+                   "add", "\t$dst, $lhs, $rhs",
                    [(set tGPR:$dst, (add tGPR:$lhs, tGPR:$rhs))]>;
 
 let neverHasSideEffects = 1 in
 def tADDhirr : T1pIt<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
-                     "add", " $dst, $rhs", []>;
+                     "add", "\t$dst, $rhs", []>;
 
 // And register
 let isCommutable = 1 in
 def tAND : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
-                 "and", " $dst, $rhs",
+                 "and", "\t$dst, $rhs",
                  [(set tGPR:$dst, (and tGPR:$lhs, tGPR:$rhs))]>;
 
 // ASR immediate
 def tASRri : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iMOVsi,
-                  "asr", " $dst, $lhs, $rhs",
+                  "asr", "\t$dst, $lhs, $rhs",
                   [(set tGPR:$dst, (sra tGPR:$lhs, (i32 imm:$rhs)))]>;
 
 // ASR register
 def tASRrr : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iMOVsr,
-                   "asr", " $dst, $rhs",
+                   "asr", "\t$dst, $rhs",
                    [(set tGPR:$dst, (sra tGPR:$lhs, tGPR:$rhs))]>;
 
 // BIC register
 def tBIC : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
-                 "bic", " $dst, $rhs",
+                 "bic", "\t$dst, $rhs",
                  [(set tGPR:$dst, (and tGPR:$lhs, (not tGPR:$rhs)))]>;
 
 // CMN register
 let Defs = [CPSR] in {
 def tCMN : T1pI<(outs), (ins tGPR:$lhs, tGPR:$rhs), IIC_iCMPr,
-                "cmn", " $lhs, $rhs",
+                "cmn", "\t$lhs, $rhs",
                 [(ARMcmp tGPR:$lhs, (ineg tGPR:$rhs))]>;
 def tCMNZ : T1pI<(outs), (ins tGPR:$lhs, tGPR:$rhs), IIC_iCMPr,
-                 "cmn", " $lhs, $rhs",
+                 "cmn", "\t$lhs, $rhs",
                  [(ARMcmpZ tGPR:$lhs, (ineg tGPR:$rhs))]>;
 }
 
 // CMP immediate
 let Defs = [CPSR] in {
 def tCMPi8 : T1pI<(outs), (ins tGPR:$lhs, i32imm:$rhs), IIC_iCMPi,
-                  "cmp", " $lhs, $rhs",
+                  "cmp", "\t$lhs, $rhs",
                   [(ARMcmp tGPR:$lhs, imm0_255:$rhs)]>;
 def tCMPzi8 : T1pI<(outs), (ins tGPR:$lhs, i32imm:$rhs), IIC_iCMPi,
-                  "cmp", " $lhs, $rhs",
+                  "cmp", "\t$lhs, $rhs",
                   [(ARMcmpZ tGPR:$lhs, imm0_255:$rhs)]>;
 
 }
@@ -443,48 +465,48 @@ def tCMPzi8 : T1pI<(outs), (ins tGPR:$lhs, i32imm:$rhs), IIC_iCMPi,
 // CMP register
 let Defs = [CPSR] in {
 def tCMPr : T1pI<(outs), (ins tGPR:$lhs, tGPR:$rhs), IIC_iCMPr,
-                 "cmp", " $lhs, $rhs",
+                 "cmp", "\t$lhs, $rhs",
                  [(ARMcmp tGPR:$lhs, tGPR:$rhs)]>;
 def tCMPzr : T1pI<(outs), (ins tGPR:$lhs, tGPR:$rhs), IIC_iCMPr,
-                  "cmp", " $lhs, $rhs",
+                  "cmp", "\t$lhs, $rhs",
                   [(ARMcmpZ tGPR:$lhs, tGPR:$rhs)]>;
 
 def tCMPhir : T1pI<(outs), (ins GPR:$lhs, GPR:$rhs), IIC_iCMPr,
-                   "cmp", " $lhs, $rhs", []>;
+                   "cmp", "\t$lhs, $rhs", []>;
 def tCMPzhir : T1pI<(outs), (ins GPR:$lhs, GPR:$rhs), IIC_iCMPr,
-                    "cmp", " $lhs, $rhs", []>;
+                    "cmp", "\t$lhs, $rhs", []>;
 }
 
 
 // XOR register
 let isCommutable = 1 in
 def tEOR : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
-                 "eor", " $dst, $rhs",
+                 "eor", "\t$dst, $rhs",
                  [(set tGPR:$dst, (xor tGPR:$lhs, tGPR:$rhs))]>;
 
 // LSL immediate
 def tLSLri : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iMOVsi,
-                  "lsl", " $dst, $lhs, $rhs",
+                  "lsl", "\t$dst, $lhs, $rhs",
                   [(set tGPR:$dst, (shl tGPR:$lhs, (i32 imm:$rhs)))]>;
 
 // LSL register
 def tLSLrr : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iMOVsr,
-                   "lsl", " $dst, $rhs",
+                   "lsl", "\t$dst, $rhs",
                    [(set tGPR:$dst, (shl tGPR:$lhs, tGPR:$rhs))]>;
 
 // LSR immediate
 def tLSRri : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iMOVsi,
-                  "lsr", " $dst, $lhs, $rhs",
+                  "lsr", "\t$dst, $lhs, $rhs",
                   [(set tGPR:$dst, (srl tGPR:$lhs, (i32 imm:$rhs)))]>;
 
 // LSR register
 def tLSRrr : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iMOVsr,
-                   "lsr", " $dst, $rhs",
+                   "lsr", "\t$dst, $rhs",
                    [(set tGPR:$dst, (srl tGPR:$lhs, tGPR:$rhs))]>;
 
 // move register
 def tMOVi8 : T1sI<(outs tGPR:$dst), (ins i32imm:$src), IIC_iMOVi,
-                  "mov", " $dst, $src",
+                  "mov", "\t$dst, $src",
                   [(set tGPR:$dst, imm0_255:$src)]>;
 
 // TODO: A7-73: MOV(2) - mov setting flag.
@@ -493,45 +515,45 @@ def tMOVi8 : T1sI<(outs tGPR:$dst), (ins i32imm:$src), IIC_iMOVi,
 let neverHasSideEffects = 1 in {
 // FIXME: Make this predicable.
 def tMOVr       : T1I<(outs tGPR:$dst), (ins tGPR:$src), IIC_iMOVr,
-                      "mov $dst, $src", []>;
+                      "mov\t$dst, $src", []>;
 let Defs = [CPSR] in
 def tMOVSr      : T1I<(outs tGPR:$dst), (ins tGPR:$src), IIC_iMOVr,
-                       "movs $dst, $src", []>;
+                       "movs\t$dst, $src", []>;
 
 // FIXME: Make these predicable.
 def tMOVgpr2tgpr : T1I<(outs tGPR:$dst), (ins GPR:$src), IIC_iMOVr,
-                       "mov $dst, $src", []>;
+                       "mov\t$dst, $src", []>;
 def tMOVtgpr2gpr : T1I<(outs GPR:$dst), (ins tGPR:$src), IIC_iMOVr,
-                       "mov $dst, $src", []>;
+                       "mov\t$dst, $src", []>;
 def tMOVgpr2gpr  : T1I<(outs GPR:$dst), (ins GPR:$src), IIC_iMOVr,
-                       "mov $dst, $src", []>;
+                       "mov\t$dst, $src", []>;
 } // neverHasSideEffects
 
 // multiply register
 let isCommutable = 1 in
 def tMUL : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iMUL32,
-                 "mul", " $dst, $rhs",
+                 "mul", "\t$dst, $rhs",
                  [(set tGPR:$dst, (mul tGPR:$lhs, tGPR:$rhs))]>;
 
 // move inverse register
 def tMVN : T1sI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iMOVr,
-                "mvn", " $dst, $src",
+                "mvn", "\t$dst, $src",
                 [(set tGPR:$dst, (not tGPR:$src))]>;
 
 // bitwise or register
 let isCommutable = 1 in
 def tORR : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs),  IIC_iALUr,
-                 "orr", " $dst, $rhs",
+                 "orr", "\t$dst, $rhs",
                  [(set tGPR:$dst, (or tGPR:$lhs, tGPR:$rhs))]>;
 
 // swaps
 def tREV : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
-                "rev", " $dst, $src",
+                "rev", "\t$dst, $src",
                 [(set tGPR:$dst, (bswap tGPR:$src))]>,
                 Requires<[IsThumb1Only, HasV6]>;
 
 def tREV16 : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
-                  "rev16", " $dst, $src",
+                  "rev16", "\t$dst, $src",
              [(set tGPR:$dst,
                    (or (and (srl tGPR:$src, (i32 8)), 0xFF),
                        (or (and (shl tGPR:$src, (i32 8)), 0xFF00),
@@ -540,7 +562,7 @@ def tREV16 : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
                 Requires<[IsThumb1Only, HasV6]>;
 
 def tREVSH : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
-                  "revsh", " $dst, $src",
+                  "revsh", "\t$dst, $src",
                   [(set tGPR:$dst,
                         (sext_inreg
                           (or (srl (and tGPR:$src, 0xFF00), (i32 8)),
@@ -549,70 +571,70 @@ def tREVSH : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
 
 // rotate right register
 def tROR : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iMOVsr,
-                 "ror", " $dst, $rhs",
+                 "ror", "\t$dst, $rhs",
                  [(set tGPR:$dst, (rotr tGPR:$lhs, tGPR:$rhs))]>;
 
 // negate register
 def tRSB : T1sI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iALUi,
-                "rsb", " $dst, $src, #0",
+                "rsb", "\t$dst, $src, #0",
                 [(set tGPR:$dst, (ineg tGPR:$src))]>;
 
 // Subtract with carry register
 let Uses = [CPSR] in
 def tSBC : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
-                 "sbc", " $dst, $rhs",
+                 "sbc", "\t$dst, $rhs",
                  [(set tGPR:$dst, (sube tGPR:$lhs, tGPR:$rhs))]>;
 
 // Subtract immediate
 def tSUBi3 : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iALUi,
-                  "sub", " $dst, $lhs, $rhs",
+                  "sub", "\t$dst, $lhs, $rhs",
                   [(set tGPR:$dst, (add tGPR:$lhs, imm0_7_neg:$rhs))]>;
 
 def tSUBi8 : T1sIt<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iALUi,
-                   "sub", " $dst, $rhs",
+                   "sub", "\t$dst, $rhs",
                    [(set tGPR:$dst, (add tGPR:$lhs, imm8_255_neg:$rhs))]>;
 
 // subtract register
 def tSUBrr : T1sI<(outs tGPR:$dst), (ins tGPR:$lhs, tGPR:$rhs), IIC_iALUr,
-                  "sub", " $dst, $lhs, $rhs",
+                  "sub", "\t$dst, $lhs, $rhs",
                   [(set tGPR:$dst, (sub tGPR:$lhs, tGPR:$rhs))]>;
 
 // TODO: A7-96: STMIA - store multiple.
 
 // sign-extend byte
 def tSXTB  : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
-                  "sxtb", " $dst, $src",
+                  "sxtb", "\t$dst, $src",
                   [(set tGPR:$dst, (sext_inreg tGPR:$src, i8))]>,
                   Requires<[IsThumb1Only, HasV6]>;
 
 // sign-extend short
 def tSXTH  : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
-                  "sxth", " $dst, $src",
+                  "sxth", "\t$dst, $src",
                   [(set tGPR:$dst, (sext_inreg tGPR:$src, i16))]>,
                   Requires<[IsThumb1Only, HasV6]>;
 
 // test
 let isCommutable = 1, Defs = [CPSR] in
 def tTST  : T1pI<(outs), (ins tGPR:$lhs, tGPR:$rhs), IIC_iCMPr,
-                 "tst", " $lhs, $rhs",
+                 "tst", "\t$lhs, $rhs",
                  [(ARMcmpZ (and tGPR:$lhs, tGPR:$rhs), 0)]>;
 
 // zero-extend byte
 def tUXTB  : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
-                  "uxtb", " $dst, $src",
+                  "uxtb", "\t$dst, $src",
                   [(set tGPR:$dst, (and tGPR:$src, 0xFF))]>,
                   Requires<[IsThumb1Only, HasV6]>;
 
 // zero-extend short
 def tUXTH  : T1pI<(outs tGPR:$dst), (ins tGPR:$src), IIC_iUNAr,
-                  "uxth", " $dst, $src",
+                  "uxth", "\t$dst, $src",
                   [(set tGPR:$dst, (and tGPR:$src, 0xFFFF))]>,
                   Requires<[IsThumb1Only, HasV6]>;
 
 
 // Conditional move tMOVCCr - Used to implement the Thumb SELECT_CC DAG operation.
-// Expanded by the scheduler into a branch sequence.
-let usesCustomDAGSchedInserter = 1 in  // Expanded by the scheduler.
+// Expanded after instruction selection into a branch sequence.
+let usesCustomInserter = 1 in  // Expanded after instruction selection.
   def tMOVCCr_pseudo :
   PseudoInst<(outs tGPR:$dst), (ins tGPR:$false, tGPR:$true, pred:$cc),
               NoItinerary, "@ tMOVCCr $cc",
@@ -621,19 +643,19 @@ let usesCustomDAGSchedInserter = 1 in  // Expanded by the scheduler.
 
 // 16-bit movcc in IT blocks for Thumb2.
 def tMOVCCr : T1pIt<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iCMOVr,
-                    "mov", " $dst, $rhs", []>;
+                    "mov", "\t$dst, $rhs", []>;
 
 def tMOVCCi : T1pIt<(outs GPR:$dst), (ins GPR:$lhs, i32imm:$rhs), IIC_iCMOVi,
-                    "mov", " $dst, $rhs", []>;
+                    "mov", "\t$dst, $rhs", []>;
 
 // tLEApcrel - Load a pc-relative address into a register without offending the
 // assembler.
 def tLEApcrel : T1I<(outs tGPR:$dst), (ins i32imm:$label, pred:$p), IIC_iALUi,
-                    "adr$p $dst, #$label", []>;
+                    "adr$p\t$dst, #$label", []>;
 
 def tLEApcrelJT : T1I<(outs tGPR:$dst),
                       (ins i32imm:$label, nohash_imm:$id, pred:$p),
-                      IIC_iALUi, "adr$p $dst, #${label}_${id}", []>;
+                      IIC_iALUi, "adr$p\t$dst, #${label}_${id}", []>;
 
 //===----------------------------------------------------------------------===//
 // TLS Instructions
@@ -643,7 +665,7 @@ def tLEApcrelJT : T1I<(outs tGPR:$dst),
 let isCall = 1,
   Defs = [R0, LR] in {
   def tTPsoft  : TIx2<(outs), (ins), IIC_Br,
-               "bl __aeabi_read_tp",
+               "bl\t__aeabi_read_tp",
                [(set R0, ARMthread_pointer)]>;
 }
 
@@ -724,3 +746,13 @@ def : T1Pat<(i32 thumb_immshifted:$src),
 
 def : T1Pat<(i32 imm0_255_comp:$src),
             (tMVN (tMOVi8 (imm_comp_XFORM imm:$src)))>;
+
+// Pseudo instruction that combines ldr from constpool and add pc. This should
+// be expanded into two instructions late to allow if-conversion and
+// scheduling.
+let isReMaterializable = 1 in
+def tLDRpci_pic : PseudoInst<(outs GPR:$dst), (ins i32imm:$addr, pclabel:$cp),
+                   NoItinerary, "@ ldr.n\t$dst, $addr\n$cp:\n\tadd\t$dst, pc",
+               [(set GPR:$dst, (ARMpic_add (load (ARMWrapper tconstpool:$addr)),
+                                           imm:$cp))]>,
+               Requires<[IsThumb1Only]>;
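
The address arithmetic behind this pseudo is simple; a standalone sketch,
assuming Thumb semantics where reading pc yields the instruction address
plus 4:

  #include <cstdint>
  // The constant-pool entry holds (symbol - (label + 4)); adding the
  // effective pc value at the labeled add recovers the absolute address.
  uint32_t picAddress(uint32_t CPEntry, uint32_t AddInsnAddr) {
    return CPEntry + (AddInsnAddr + 4);
  }
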
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td
index 79d7108..9489815 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td
@@ -49,8 +49,8 @@ def t2_so_imm_neg_XFORM : SDNodeXForm<imm, [{
 // 8-bit immediate rotated by an arbitrary number of bits, or an 8-bit
 // immediate splatted into multiple bytes of the word. t2_so_imm values are
 // represented in the imm field in the same 12-bit form that they are encoded
-// into t2_so_imm instructions: the 8-bit immediate is the least significant bits
-// [bits 0-7], the 4-bit shift/splat amount is the next 4 bits [bits 8-11].
+// into t2_so_imm instructions: the 8-bit immediate occupies the least
+// significant bits [bits 0-7], and the 4-bit shift/splat amount occupies the
+// next 4 bits [bits 8-11].
 def t2_so_imm : Operand<i32>,
                 PatLeaf<(imm), [{
   return ARM_AM::getT2SOImmVal((uint32_t)N->getZExtValue()) != -1; 
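
A standalone decode sketch of that 12-bit form, following the ARMv7
modified-immediate rules (this is not ARM_AM::getT2SOImmVal itself; note
that the rotated variants also borrow bit 7 as part of a 5-bit rotate
amount):

  #include <cstdint>
  // Expand a 12-bit t2_so_imm-style encoding into its 32-bit value.
  uint32_t decodeT2SOImm(uint32_t Imm12) {
    uint32_t Byte = Imm12 & 0xFF;
    switch ((Imm12 >> 8) & 0xF) {
    case 0: return Byte;                          // 0x000000XY
    case 1: return (Byte << 16) | Byte;           // 0x00XY00XY
    case 2: return (Byte << 24) | (Byte << 8);    // 0xXY00XY00
    case 3: return (Byte << 24) | (Byte << 16) | (Byte << 8) | Byte;
    default: {
      // 0b1bcdefgh rotated right by the amount in bits [11:7] (8..31).
      uint32_t V = (Imm12 & 0x7F) | 0x80;
      uint32_t Rot = (Imm12 >> 7) & 0x1F;
      return (V >> Rot) | (V << (32 - Rot));
    }
    }
  }
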
@@ -69,6 +69,40 @@ def t2_so_imm_neg : Operand<i32>,
   return ARM_AM::getT2SOImmVal(-((int)N->getZExtValue())) != -1;
 }], t2_so_imm_neg_XFORM>;
 
+// Break t2_so_imm values up into two pieces.  This handles immediates with up
+// to 16 bits set in them.  This uses t2_so_imm2part to match and
+// t2_so_imm2part_[12] to get the first/second pieces.
+def t2_so_imm2part : Operand<i32>,
+                  PatLeaf<(imm), [{
+      return ARM_AM::isT2SOImmTwoPartVal((unsigned)N->getZExtValue());
+    }]> {
+}
+
+def t2_so_imm2part_1 : SDNodeXForm<imm, [{
+  unsigned V = ARM_AM::getT2SOImmTwoPartFirst((unsigned)N->getZExtValue());
+  return CurDAG->getTargetConstant(V, MVT::i32);
+}]>;
+
+def t2_so_imm2part_2 : SDNodeXForm<imm, [{
+  unsigned V = ARM_AM::getT2SOImmTwoPartSecond((unsigned)N->getZExtValue());
+  return CurDAG->getTargetConstant(V, MVT::i32);
+}]>;
+
+def t2_so_neg_imm2part : Operand<i32>, PatLeaf<(imm), [{
+      return ARM_AM::isT2SOImmTwoPartVal(-(int)N->getZExtValue());
+    }]> {
+}
+
+def t2_so_neg_imm2part_1 : SDNodeXForm<imm, [{
+  unsigned V = ARM_AM::getT2SOImmTwoPartFirst(-(int)N->getZExtValue());
+  return CurDAG->getTargetConstant(V, MVT::i32);
+}]>;
+
+def t2_so_neg_imm2part_2 : SDNodeXForm<imm, [{
+  unsigned V = ARM_AM::getT2SOImmTwoPartSecond(-(int)N->getZExtValue());
+  return CurDAG->getTargetConstant(V, MVT::i32);
+}]>;
+
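For intuition, one possible decomposition of a constant that is not a single
t2_so_imm but is a valid two-part value (the exact pieces the ARM_AM helpers
choose may differ):

  #include <cassert>
  #include <cstdint>
  int main() {
    uint32_t V = 0x0000FF0F;       // spans too many bits for one t2_so_imm
    uint32_t First = 0x0000FF00;   // 0xFF rotated into bits 15..8
    uint32_t Second = 0x0000000F;  // plain 8-bit immediate
    assert((First | Second) == V); // materialized as e.g. a mov + orr pair
    return 0;
  }
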
 /// imm1_31 predicate - True if the 32-bit immediate is in the range [1,31].
 def imm1_31 : PatLeaf<(i32 imm), [{
   return (int32_t)N->getZExtValue() >= 1 && (int32_t)N->getZExtValue() < 32;
@@ -134,18 +168,18 @@ def t2addrmode_so_reg : Operand<i32>,
 multiclass T2I_un_irs<string opc, PatFrag opnode, bit Cheap = 0, bit ReMat = 0>{
    // shifted imm
    def i : T2sI<(outs GPR:$dst), (ins t2_so_imm:$src), IIC_iMOVi,
-                opc, " $dst, $src",
+                opc, "\t$dst, $src",
                 [(set GPR:$dst, (opnode t2_so_imm:$src))]> {
      let isAsCheapAsAMove = Cheap;
      let isReMaterializable = ReMat;
    }
    // register
    def r : T2I<(outs GPR:$dst), (ins GPR:$src), IIC_iMOVr,
-               opc, ".w $dst, $src",
+               opc, ".w\t$dst, $src",
                 [(set GPR:$dst, (opnode GPR:$src))]>;
    // shifted register
    def s : T2I<(outs GPR:$dst), (ins t2_so_reg:$src), IIC_iMOVsi,
-               opc, ".w $dst, $src",
+               opc, ".w\t$dst, $src",
                [(set GPR:$dst, (opnode t2_so_reg:$src))]>;
 }
 
@@ -156,17 +190,17 @@ multiclass T2I_bin_irs<string opc, PatFrag opnode,
                        bit Commutable = 0, string wide =""> {
    // shifted imm
    def ri : T2sI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_imm:$rhs), IIC_iALUi,
-                 opc, " $dst, $lhs, $rhs",
+                 opc, "\t$dst, $lhs, $rhs",
                  [(set GPR:$dst, (opnode GPR:$lhs, t2_so_imm:$rhs))]>;
    // register
    def rr : T2sI<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
-                 opc, !strconcat(wide, " $dst, $lhs, $rhs"),
+                 opc, !strconcat(wide, "\t$dst, $lhs, $rhs"),
                  [(set GPR:$dst, (opnode GPR:$lhs, GPR:$rhs))]> {
      let isCommutable = Commutable;
    }
    // shifted register
    def rs : T2sI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_reg:$rhs), IIC_iALUsi,
-                 opc, !strconcat(wide, " $dst, $lhs, $rhs"),
+                 opc, !strconcat(wide, "\t$dst, $lhs, $rhs"),
                  [(set GPR:$dst, (opnode GPR:$lhs, t2_so_reg:$rhs))]>;
 }
 
@@ -181,11 +215,11 @@ multiclass T2I_bin_w_irs<string opc, PatFrag opnode, bit Commutable = 0> :
 multiclass T2I_rbin_is<string opc, PatFrag opnode> {
    // shifted imm
    def ri : T2I<(outs GPR:$dst), (ins GPR:$rhs, t2_so_imm:$lhs), IIC_iALUi,
-                opc, ".w $dst, $rhs, $lhs",
+                opc, ".w\t$dst, $rhs, $lhs",
                 [(set GPR:$dst, (opnode t2_so_imm:$lhs, GPR:$rhs))]>;
    // shifted register
    def rs : T2I<(outs GPR:$dst), (ins GPR:$rhs, t2_so_reg:$lhs), IIC_iALUsi,
-                opc, " $dst, $rhs, $lhs",
+                opc, "\t$dst, $rhs, $lhs",
                 [(set GPR:$dst, (opnode t2_so_reg:$lhs, GPR:$rhs))]>;
 }
 
@@ -195,17 +229,17 @@ let Defs = [CPSR] in {
 multiclass T2I_bin_s_irs<string opc, PatFrag opnode, bit Commutable = 0> {
    // shifted imm
    def ri : T2I<(outs GPR:$dst), (ins GPR:$lhs, t2_so_imm:$rhs), IIC_iALUi,
-                !strconcat(opc, "s"), ".w $dst, $lhs, $rhs",
+                !strconcat(opc, "s"), ".w\t$dst, $lhs, $rhs",
                 [(set GPR:$dst, (opnode GPR:$lhs, t2_so_imm:$rhs))]>;
    // register
    def rr : T2I<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
-                !strconcat(opc, "s"), ".w $dst, $lhs, $rhs",
+                !strconcat(opc, "s"), ".w\t$dst, $lhs, $rhs",
                 [(set GPR:$dst, (opnode GPR:$lhs, GPR:$rhs))]> {
      let isCommutable = Commutable;
    }
    // shifted register
    def rs : T2I<(outs GPR:$dst), (ins GPR:$lhs, t2_so_reg:$rhs), IIC_iALUsi,
-                !strconcat(opc, "s"), ".w $dst, $lhs, $rhs",
+                !strconcat(opc, "s"), ".w\t$dst, $lhs, $rhs",
                 [(set GPR:$dst, (opnode GPR:$lhs, t2_so_reg:$rhs))]>;
 }
 }
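
Because the whole multiclass sits under `let Defs = [CPSR]`, every record it produces clobbers the flags, and the "s" appended via !strconcat yields the flag-setting mnemonic (adds, subs, ...). A hedged sketch of an instantiation:

  // Sketch: flag-setting add, selected for carry-producing (addc) nodes.
  defm t2ADDS : T2I_bin_s_irs<"add", BinOpFrag<(addc node:$LHS, node:$RHS)>, 1>;
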
@@ -215,57 +249,57 @@ multiclass T2I_bin_s_irs<string opc, PatFrag opnode, bit Commutable = 0> {
 multiclass T2I_bin_ii12rs<string opc, PatFrag opnode, bit Commutable = 0> {
    // shifted imm
    def ri : T2sI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_imm:$rhs), IIC_iALUi,
-                 opc, ".w $dst, $lhs, $rhs",
+                 opc, ".w\t$dst, $lhs, $rhs",
                  [(set GPR:$dst, (opnode GPR:$lhs, t2_so_imm:$rhs))]>;
    // 12-bit imm
    def ri12 : T2sI<(outs GPR:$dst), (ins GPR:$lhs, imm0_4095:$rhs), IIC_iALUi,
-                   !strconcat(opc, "w"), " $dst, $lhs, $rhs",
+                   !strconcat(opc, "w"), "\t$dst, $lhs, $rhs",
                    [(set GPR:$dst, (opnode GPR:$lhs, imm0_4095:$rhs))]>;
    // register
    def rr : T2sI<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
-                 opc, ".w $dst, $lhs, $rhs",
+                 opc, ".w\t$dst, $lhs, $rhs",
                  [(set GPR:$dst, (opnode GPR:$lhs, GPR:$rhs))]> {
      let isCommutable = Commutable;
    }
    // shifted register
    def rs : T2sI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_reg:$rhs), IIC_iALUsi,
-                 opc, ".w $dst, $lhs, $rhs",
+                 opc, ".w\t$dst, $lhs, $rhs",
                  [(set GPR:$dst, (opnode GPR:$lhs, t2_so_reg:$rhs))]>;
 }
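
The ii12rs multiclass adds a fourth, 12-bit-immediate form on top of the usual three; for add this selects the Thumb-2 "addw" encoding when the constant fits imm0_4095 but is not a valid shifter-operand immediate. A hedged sketch:

  // Sketch: add over so_imm, 12-bit imm (addw), register, shifted register.
  defm t2ADD : T2I_bin_ii12rs<"add", BinOpFrag<(add node:$LHS, node:$RHS)>, 1>;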
 
-/// T2I_adde_sube_irs - Defines a set of (op reg, {so_imm|r|so_reg}) patterns for a
-/// binary operation that produces a value and use and define the carry bit.
-/// It's not predicable.
+/// T2I_adde_sube_irs - Defines a set of (op reg, {so_imm|r|so_reg}) patterns
+/// for a binary operation that produces a value and uses and defines the
+/// carry bit. It's not predicable.
 let Uses = [CPSR] in {
 multiclass T2I_adde_sube_irs<string opc, PatFrag opnode, bit Commutable = 0> {
    // shifted imm
    def ri : T2sI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_imm:$rhs), IIC_iALUi,
-                 opc, " $dst, $lhs, $rhs",
+                 opc, "\t$dst, $lhs, $rhs",
                  [(set GPR:$dst, (opnode GPR:$lhs, t2_so_imm:$rhs))]>,
                  Requires<[IsThumb2, CarryDefIsUnused]>;
    // register
    def rr : T2sI<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
-                 opc, ".w $dst, $lhs, $rhs",
+                 opc, ".w\t$dst, $lhs, $rhs",
                  [(set GPR:$dst, (opnode GPR:$lhs, GPR:$rhs))]>,
                  Requires<[IsThumb2, CarryDefIsUnused]> {
      let isCommutable = Commutable;
    }
    // shifted register
    def rs : T2sI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_reg:$rhs), IIC_iALUsi,
-                 opc, ".w $dst, $lhs, $rhs",
+                 opc, ".w\t$dst, $lhs, $rhs",
                  [(set GPR:$dst, (opnode GPR:$lhs, t2_so_reg:$rhs))]>,
                  Requires<[IsThumb2, CarryDefIsUnused]>;
    // Carry setting variants
    // shifted imm
    def Sri : T2XI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_imm:$rhs), IIC_iALUi,
-                  !strconcat(opc, "s $dst, $lhs, $rhs"),
+                  !strconcat(opc, "s\t$dst, $lhs, $rhs"),
                   [(set GPR:$dst, (opnode GPR:$lhs, t2_so_imm:$rhs))]>,
                   Requires<[IsThumb2, CarryDefIsUsed]> {
                     let Defs = [CPSR];
                   }
    // register
    def Srr : T2XI<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iALUr,
-                  !strconcat(opc, "s.w $dst, $lhs, $rhs"),
+                  !strconcat(opc, "s.w\t$dst, $lhs, $rhs"),
                   [(set GPR:$dst, (opnode GPR:$lhs, GPR:$rhs))]>,
                   Requires<[IsThumb2, CarryDefIsUsed]> {
                     let Defs = [CPSR];
@@ -273,7 +307,7 @@ multiclass T2I_adde_sube_irs<string opc, PatFrag opnode, bit Commutable = 0> {
    }
    // shifted register
    def Srs : T2XI<(outs GPR:$dst), (ins GPR:$lhs, t2_so_reg:$rhs), IIC_iALUsi,
-                  !strconcat(opc, "s.w $dst, $lhs, $rhs"),
+                  !strconcat(opc, "s.w\t$dst, $lhs, $rhs"),
                   [(set GPR:$dst, (opnode GPR:$lhs, t2_so_reg:$rhs))]>,
                   Requires<[IsThumb2, CarryDefIsUsed]> {
                     let Defs = [CPSR];
@@ -287,12 +321,12 @@ multiclass T2I_rbin_s_is<string opc, PatFrag opnode> {
    // shifted imm
    def ri : T2XI<(outs GPR:$dst), (ins GPR:$rhs, t2_so_imm:$lhs, cc_out:$s),
                  IIC_iALUi,
-                 !strconcat(opc, "${s}.w $dst, $rhs, $lhs"),
+                 !strconcat(opc, "${s}.w\t$dst, $rhs, $lhs"),
                  [(set GPR:$dst, (opnode t2_so_imm:$lhs, GPR:$rhs))]>;
    // shifted register
    def rs : T2XI<(outs GPR:$dst), (ins GPR:$rhs, t2_so_reg:$lhs, cc_out:$s),
                  IIC_iALUsi,
-                 !strconcat(opc, "${s} $dst, $rhs, $lhs"),
+                 !strconcat(opc, "${s}\t$dst, $rhs, $lhs"),
                  [(set GPR:$dst, (opnode t2_so_reg:$lhs, GPR:$rhs))]>;
 }
 }
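
As with the other "s" multiclasses these variants set the flags, but here the optional cc_out operand ($s) is made explicit in the asm string. A hedged sketch of the instantiation:

  // Sketch: flag-setting reverse subtract.
  defm t2RSBS : T2I_rbin_s_is<"rsb", BinOpFrag<(subc node:$LHS, node:$RHS)>>;
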
@@ -302,11 +336,11 @@ multiclass T2I_rbin_s_is<string opc, PatFrag opnode> {
 multiclass T2I_sh_ir<string opc, PatFrag opnode> {
    // 5-bit imm
    def ri : T2sI<(outs GPR:$dst), (ins GPR:$lhs, i32imm:$rhs), IIC_iMOVsi,
-                 opc, ".w $dst, $lhs, $rhs",
+                 opc, ".w\t$dst, $lhs, $rhs",
                  [(set GPR:$dst, (opnode GPR:$lhs, imm1_31:$rhs))]>;
    // register
    def rr : T2sI<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iMOVsr,
-                 opc, ".w $dst, $lhs, $rhs",
+                 opc, ".w\t$dst, $lhs, $rhs",
                  [(set GPR:$dst, (opnode GPR:$lhs, GPR:$rhs))]>;
 }
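
The immediate form of the shift multiclass matches imm1_31 from the top of this file — presumably because a shift by 0 is just a move and a shift by 32 needs a different encoding. A hedged sketch:

  // Sketch: logical shift left by a [1,31] immediate or by a register.
  defm t2LSL : T2I_sh_ir<"lsl", BinOpFrag<(shl node:$LHS, node:$RHS)>>;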
 
@@ -317,15 +351,15 @@ let Defs = [CPSR] in {
 multiclass T2I_cmp_is<string opc, PatFrag opnode> {
    // shifted imm
    def ri : T2I<(outs), (ins GPR:$lhs, t2_so_imm:$rhs), IIC_iCMPi,
-                opc, ".w $lhs, $rhs",
+                opc, ".w\t$lhs, $rhs",
                 [(opnode GPR:$lhs, t2_so_imm:$rhs)]>;
    // register
    def rr : T2I<(outs), (ins GPR:$lhs, GPR:$rhs), IIC_iCMPr,
-                opc, ".w $lhs, $rhs",
+                opc, ".w\t$lhs, $rhs",
                 [(opnode GPR:$lhs, GPR:$rhs)]>;
    // shifted register
    def rs : T2I<(outs), (ins GPR:$lhs, t2_so_reg:$rhs), IIC_iCMPsi,
-                opc, ".w $lhs, $rhs",
+                opc, ".w\t$lhs, $rhs",
                 [(opnode GPR:$lhs, t2_so_reg:$rhs)]>;
 }
 }
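
The compare multiclass produces no register results — (outs) is empty — and only defines CPSR via the enclosing `let Defs = [CPSR]`. A hedged sketch of an instantiation:

  // Sketch: cmp over shifted-imm, register and shifted-register operands.
  defm t2CMP : T2I_cmp_is<"cmp", BinOpFrag<(ARMcmp node:$LHS, node:$RHS)>>;
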
@@ -333,42 +367,44 @@ multiclass T2I_cmp_is<string opc, PatFrag opnode> {
 /// T2I_ld - Defines a set of (op r, {imm12|imm8|so_reg}) load patterns.
 multiclass T2I_ld<string opc, PatFrag opnode> {
   def i12 : T2Ii12<(outs GPR:$dst), (ins t2addrmode_imm12:$addr), IIC_iLoadi,
-                   opc, ".w $dst, $addr",
+                   opc, ".w\t$dst, $addr",
                    [(set GPR:$dst, (opnode t2addrmode_imm12:$addr))]>;
   def i8  : T2Ii8 <(outs GPR:$dst), (ins t2addrmode_imm8:$addr), IIC_iLoadi,
-                   opc, " $dst, $addr",
+                   opc, "\t$dst, $addr",
                    [(set GPR:$dst, (opnode t2addrmode_imm8:$addr))]>;
   def s   : T2Iso <(outs GPR:$dst), (ins t2addrmode_so_reg:$addr), IIC_iLoadr,
-                   opc, ".w $dst, $addr",
+                   opc, ".w\t$dst, $addr",
                    [(set GPR:$dst, (opnode t2addrmode_so_reg:$addr))]>;
   def pci : T2Ipc <(outs GPR:$dst), (ins i32imm:$addr), IIC_iLoadi,
-                   opc, ".w $dst, $addr",
-                   [(set GPR:$dst, (opnode (ARMWrapper tconstpool:$addr)))]>;
+                   opc, ".w\t$dst, $addr",
+                   [(set GPR:$dst, (opnode (ARMWrapper tconstpool:$addr)))]> {
+    let isReMaterializable = 1;
+  }
 }
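
Marking the pci (pc-relative, constant-pool) variant re-materializable lets the register allocator re-issue the literal load instead of spilling its result; a constant-pool entry is read-only, so repeating the load is safe. A hedged sketch of a typical instantiation of the multiclass:

  // Sketch: zero-extending halfword load in all four addressing forms
  // (imm12, imm8, shifted register, pc-relative literal).
  defm t2LDRH : T2I_ld<"ldrh", UnOpFrag<(zextloadi16 node:$Src)>>;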
 
 /// T2I_st - Defines a set of (op r, {imm12|imm8|so_reg}) store patterns.
 multiclass T2I_st<string opc, PatFrag opnode> {
   def i12 : T2Ii12<(outs), (ins GPR:$src, t2addrmode_imm12:$addr), IIC_iStorei,
-                   opc, ".w $src, $addr",
+                   opc, ".w\t$src, $addr",
                    [(opnode GPR:$src, t2addrmode_imm12:$addr)]>;
   def i8  : T2Ii8 <(outs), (ins GPR:$src, t2addrmode_imm8:$addr), IIC_iStorei,
-                   opc, " $src, $addr",
+                   opc, "\t$src, $addr",
                    [(opnode GPR:$src, t2addrmode_imm8:$addr)]>;
   def s   : T2Iso <(outs), (ins GPR:$src, t2addrmode_so_reg:$addr), IIC_iStorer,
-                   opc, ".w $src, $addr",
+                   opc, ".w\t$src, $addr",
                    [(opnode GPR:$src, t2addrmode_so_reg:$addr)]>;
 }
 
 /// T2I_picld - Defines the PIC load pattern.
 class T2I_picld<string opc, PatFrag opnode> :
       T2I<(outs GPR:$dst), (ins addrmodepc:$addr), IIC_iLoadi,
-          !strconcat("\n${addr:label}:\n\t", opc), " $dst, $addr",
+          !strconcat("\n${addr:label}:\n\t", opc), "\t$dst, $addr",
           [(set GPR:$dst, (opnode addrmodepc:$addr))]>;
 
 /// T2I_picst - Defines the PIC store pattern.
 class T2I_picst<string opc, PatFrag opnode> :
       T2I<(outs), (ins GPR:$src, addrmodepc:$addr), IIC_iStorer,
-          !strconcat("\n${addr:label}:\n\t", opc), " $src, $addr",
+          !strconcat("\n${addr:label}:\n\t", opc), "\t$src, $addr",
           [(opnode GPR:$src, addrmodepc:$addr)]>;
 
 
@@ -376,10 +412,10 @@ class T2I_picst<string opc, PatFrag opnode> :
 /// register and one whose operand is a register rotated by 8/16/24.
 multiclass T2I_unary_rrot<string opc, PatFrag opnode> {
   def r     : T2I<(outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
-                  opc, ".w $dst, $src",
+                  opc, ".w\t$dst, $src",
                  [(set GPR:$dst, (opnode GPR:$src))]>;
   def r_rot : T2I<(outs GPR:$dst), (ins GPR:$src, i32imm:$rot), IIC_iUNAsi,
-                  opc, ".w $dst, $src, ror $rot",
+                  opc, ".w\t$dst, $src, ror $rot",
                  [(set GPR:$dst, (opnode (rotr GPR:$src, rot_imm:$rot)))]>;
 }
 
@@ -387,10 +423,10 @@ multiclass T2I_unary_rrot<string opc, PatFrag opnode> {
 /// register and one whose operand is a register rotated by 8/16/24.
 multiclass T2I_bin_rrot<string opc, PatFrag opnode> {
   def rr     : T2I<(outs GPR:$dst), (ins GPR:$LHS, GPR:$RHS), IIC_iALUr,
-                  opc, " $dst, $LHS, $RHS",
+                  opc, "\t$dst, $LHS, $RHS",
                   [(set GPR:$dst, (opnode GPR:$LHS, GPR:$RHS))]>;
   def rr_rot : T2I<(outs GPR:$dst), (ins GPR:$LHS, GPR:$RHS, i32imm:$rot),
-                  IIC_iALUsr, opc, " $dst, $LHS, $RHS, ror $rot",
+                  IIC_iALUsr, opc, "\t$dst, $LHS, $RHS, ror $rot",
                   [(set GPR:$dst, (opnode GPR:$LHS,
                                           (rotr GPR:$RHS, rot_imm:$rot)))]>;
 }
@@ -406,43 +442,43 @@ multiclass T2I_bin_rrot<string opc, PatFrag opnode> {
 // LEApcrel - Load a pc-relative address into a register without offending the
 // assembler.
 def t2LEApcrel : T2XI<(outs GPR:$dst), (ins i32imm:$label, pred:$p), IIC_iALUi,
-                      "adr$p.w $dst, #$label", []>;
+                      "adr$p.w\t$dst, #$label", []>;
 
 def t2LEApcrelJT : T2XI<(outs GPR:$dst),
                         (ins i32imm:$label, nohash_imm:$id, pred:$p), IIC_iALUi,
-                        "adr$p.w $dst, #${label}_${id}", []>;
+                        "adr$p.w\t$dst, #${label}_${id}", []>;
 
 // ADD r, sp, {so_imm|i12}
 def t2ADDrSPi   : T2sI<(outs GPR:$dst), (ins GPR:$sp, t2_so_imm:$imm),
-                        IIC_iALUi, "add", ".w $dst, $sp, $imm", []>;
+                        IIC_iALUi, "add", ".w\t$dst, $sp, $imm", []>;
 def t2ADDrSPi12 : T2I<(outs GPR:$dst), (ins GPR:$sp, imm0_4095:$imm), 
-                       IIC_iALUi, "addw", " $dst, $sp, $imm", []>;
+                       IIC_iALUi, "addw", "\t$dst, $sp, $imm", []>;
 
 // ADD r, sp, so_reg
 def t2ADDrSPs   : T2sI<(outs GPR:$dst), (ins GPR:$sp, t2_so_reg:$rhs),
-                        IIC_iALUsi, "add", ".w $dst, $sp, $rhs", []>;
+                        IIC_iALUsi, "add", ".w\t$dst, $sp, $rhs", []>;
 
 // SUB r, sp, {so_imm|i12}
 def t2SUBrSPi   : T2sI<(outs GPR:$dst), (ins GPR:$sp, t2_so_imm:$imm),
-                        IIC_iALUi, "sub", ".w $dst, $sp, $imm", []>;
+                        IIC_iALUi, "sub", ".w\t$dst, $sp, $imm", []>;
 def t2SUBrSPi12 : T2I<(outs GPR:$dst), (ins GPR:$sp, imm0_4095:$imm),
-                       IIC_iALUi, "subw", " $dst, $sp, $imm", []>;
+                       IIC_iALUi, "subw", "\t$dst, $sp, $imm", []>;
 
 // SUB r, sp, so_reg
 def t2SUBrSPs   : T2sI<(outs GPR:$dst), (ins GPR:$sp, t2_so_reg:$rhs),
                        IIC_iALUsi,
-                       "sub", " $dst, $sp, $rhs", []>;
+                       "sub", "\t$dst, $sp, $rhs", []>;
 
 
 // Pseudo instruction that will expand into a t2SUBrSPi + a copy.
-let usesCustomDAGSchedInserter = 1 in { // Expanded by the scheduler.
+let usesCustomInserter = 1 in { // Expanded after instruction selection.
 def t2SUBrSPi_   : PseudoInst<(outs GPR:$dst), (ins GPR:$sp, t2_so_imm:$imm),
-                   NoItinerary, "@ sub.w $dst, $sp, $imm", []>;
+                   NoItinerary, "@ sub.w\t$dst, $sp, $imm", []>;
 def t2SUBrSPi12_ : PseudoInst<(outs GPR:$dst), (ins GPR:$sp, imm0_4095:$imm),
-                   NoItinerary, "@ subw $dst, $sp, $imm", []>;
+                   NoItinerary, "@ subw\t$dst, $sp, $imm", []>;
 def t2SUBrSPs_   : PseudoInst<(outs GPR:$dst), (ins GPR:$sp, t2_so_reg:$rhs),
-                   NoItinerary, "@ sub $dst, $sp, $rhs", []>;
-} // usesCustomDAGSchedInserter
+                   NoItinerary, "@ sub\t$dst, $sp, $rhs", []>;
+} // usesCustomInserter
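
The rename reflects an LLVM-wide change of this flag's meaning: it is no longer tied to the scheduler, and any instruction marked with it is handed to the target's EmitInstrWithCustomInserter hook after instruction selection. A hedged, hypothetical sketch (MyADJSP is not a real record):

  // Hypothetical pseudo: expanded by EmitInstrWithCustomInserter after isel.
  let usesCustomInserter = 1 in
  def MyADJSP : PseudoInst<(outs GPR:$dst), (ins GPR:$sp, t2_so_imm:$imm),
                           NoItinerary, "@ hypothetical sp adjust", []>;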
 
 
 //===----------------------------------------------------------------------===//
@@ -450,7 +486,7 @@ def t2SUBrSPs_   : PseudoInst<(outs GPR:$dst), (ins GPR:$sp, t2_so_reg:$rhs),
 //
 
 // Load
-let canFoldAsLoad = 1 in
+let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1  in 
 defm t2LDR   : T2I_ld<"ldr",  UnOpFrag<(load node:$Src)>>;
 
 // Loads with zero extension
@@ -465,10 +501,10 @@ let mayLoad = 1, hasExtraDefRegAllocReq = 1 in {
 // Load doubleword
 def t2LDRDi8  : T2Ii8s4<(outs GPR:$dst1, GPR:$dst2),
                         (ins t2addrmode_imm8s4:$addr),
-                        IIC_iLoadi, "ldrd", " $dst1, $addr", []>;
+                        IIC_iLoadi, "ldrd", "\t$dst1, $addr", []>;
 def t2LDRDpci : T2Ii8s4<(outs GPR:$dst1, GPR:$dst2),
                         (ins i32imm:$addr), IIC_iLoadi,
-                       "ldrd", " $dst1, $addr", []>;
+                       "ldrd", "\t$dst1, $addr", []>;
 }
 
 // zextload i1 -> zextload i8
@@ -516,57 +552,57 @@ let mayLoad = 1 in {
 def t2LDR_PRE  : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
                             (ins t2addrmode_imm8:$addr),
                             AddrModeT2_i8, IndexModePre, IIC_iLoadiu,
-                            "ldr", " $dst, $addr!", "$addr.base = $base_wb",
+                            "ldr", "\t$dst, $addr!", "$addr.base = $base_wb",
                             []>;
 
 def t2LDR_POST : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
                             (ins GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iLoadiu,
-                           "ldr", " $dst, [$base], $offset", "$base = $base_wb",
+                          "ldr", "\t$dst, [$base], $offset", "$base = $base_wb",
                             []>;
 
 def t2LDRB_PRE : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
                             (ins t2addrmode_imm8:$addr),
                             AddrModeT2_i8, IndexModePre, IIC_iLoadiu,
-                            "ldrb", " $dst, $addr!", "$addr.base = $base_wb",
+                            "ldrb", "\t$dst, $addr!", "$addr.base = $base_wb",
                             []>;
 def t2LDRB_POST : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
                             (ins GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iLoadiu,
-                          "ldrb", " $dst, [$base], $offset", "$base = $base_wb",
+                         "ldrb", "\t$dst, [$base], $offset", "$base = $base_wb",
                             []>;
 
 def t2LDRH_PRE : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
                             (ins t2addrmode_imm8:$addr),
                             AddrModeT2_i8, IndexModePre, IIC_iLoadiu,
-                            "ldrh", " $dst, $addr!", "$addr.base = $base_wb",
+                            "ldrh", "\t$dst, $addr!", "$addr.base = $base_wb",
                             []>;
 def t2LDRH_POST : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
                             (ins GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iLoadiu,
-                          "ldrh", " $dst, [$base], $offset", "$base = $base_wb",
+                         "ldrh", "\t$dst, [$base], $offset", "$base = $base_wb",
                             []>;
 
 def t2LDRSB_PRE : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
                             (ins t2addrmode_imm8:$addr),
                             AddrModeT2_i8, IndexModePre, IIC_iLoadiu,
-                            "ldrsb", " $dst, $addr!", "$addr.base = $base_wb",
+                            "ldrsb", "\t$dst, $addr!", "$addr.base = $base_wb",
                             []>;
 def t2LDRSB_POST : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
                             (ins GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iLoadiu,
-                         "ldrsb", " $dst, [$base], $offset", "$base = $base_wb",
+                        "ldrsb", "\t$dst, [$base], $offset", "$base = $base_wb",
                             []>;
 
 def t2LDRSH_PRE : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
                             (ins t2addrmode_imm8:$addr),
                             AddrModeT2_i8, IndexModePre, IIC_iLoadiu,
-                            "ldrsh", " $dst, $addr!", "$addr.base = $base_wb",
+                            "ldrsh", "\t$dst, $addr!", "$addr.base = $base_wb",
                             []>;
 def t2LDRSH_POST : T2Iidxldst<(outs GPR:$dst, GPR:$base_wb),
                             (ins GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iLoadiu,
-                         "ldrsh", " $dst, [$base], $offset", "$base = $base_wb",
+                        "ldrsh", "\t$dst, [$base], $offset", "$base = $base_wb",
                             []>;
 }
 
@@ -579,48 +615,48 @@ defm t2STRH  : T2I_st<"strh", BinOpFrag<(truncstorei16 node:$LHS, node:$RHS)>>;
 let mayLoad = 1, hasExtraSrcRegAllocReq = 1 in
 def t2STRDi8 : T2Ii8s4<(outs),
                        (ins GPR:$src1, GPR:$src2, t2addrmode_imm8s4:$addr),
-               IIC_iStorer, "strd", " $src1, $addr", []>;
+               IIC_iStorer, "strd", "\t$src1, $addr", []>;
 
 // Indexed stores
 def t2STR_PRE  : T2Iidxldst<(outs GPR:$base_wb),
                             (ins GPR:$src, GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePre, IIC_iStoreiu,
-                          "str", " $src, [$base, $offset]!", "$base = $base_wb",
+                         "str", "\t$src, [$base, $offset]!", "$base = $base_wb",
              [(set GPR:$base_wb,
                    (pre_store GPR:$src, GPR:$base, t2am_imm8_offset:$offset))]>;
 
 def t2STR_POST : T2Iidxldst<(outs GPR:$base_wb),
                             (ins GPR:$src, GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iStoreiu,
-                           "str", " $src, [$base], $offset", "$base = $base_wb",
+                          "str", "\t$src, [$base], $offset", "$base = $base_wb",
              [(set GPR:$base_wb,
-                   (post_store GPR:$src, GPR:$base, t2am_imm8_offset:$offset))]>;
+                  (post_store GPR:$src, GPR:$base, t2am_imm8_offset:$offset))]>;
 
 def t2STRH_PRE  : T2Iidxldst<(outs GPR:$base_wb),
                             (ins GPR:$src, GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePre, IIC_iStoreiu,
-                         "strh", " $src, [$base, $offset]!", "$base = $base_wb",
+                        "strh", "\t$src, [$base, $offset]!", "$base = $base_wb",
         [(set GPR:$base_wb,
               (pre_truncsti16 GPR:$src, GPR:$base, t2am_imm8_offset:$offset))]>;
 
 def t2STRH_POST : T2Iidxldst<(outs GPR:$base_wb),
                             (ins GPR:$src, GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iStoreiu,
-                          "strh", " $src, [$base], $offset", "$base = $base_wb",
+                         "strh", "\t$src, [$base], $offset", "$base = $base_wb",
        [(set GPR:$base_wb,
              (post_truncsti16 GPR:$src, GPR:$base, t2am_imm8_offset:$offset))]>;
 
 def t2STRB_PRE  : T2Iidxldst<(outs GPR:$base_wb),
                             (ins GPR:$src, GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePre, IIC_iStoreiu,
-                         "strb", " $src, [$base, $offset]!", "$base = $base_wb",
+                        "strb", "\t$src, [$base, $offset]!", "$base = $base_wb",
          [(set GPR:$base_wb,
                (pre_truncsti8 GPR:$src, GPR:$base, t2am_imm8_offset:$offset))]>;
 
 def t2STRB_POST : T2Iidxldst<(outs GPR:$base_wb),
                             (ins GPR:$src, GPR:$base, t2am_imm8_offset:$offset),
                             AddrModeT2_i8, IndexModePost, IIC_iStoreiu,
-                          "strb", " $src, [$base], $offset", "$base = $base_wb",
+                         "strb", "\t$src, [$base], $offset", "$base = $base_wb",
         [(set GPR:$base_wb,
               (post_truncsti8 GPR:$src, GPR:$base, t2am_imm8_offset:$offset))]>;
 
@@ -634,12 +670,12 @@ def t2STRB_POST : T2Iidxldst<(outs GPR:$base_wb),
 let mayLoad = 1, hasExtraDefRegAllocReq = 1 in
 def t2LDM : T2XI<(outs),
                  (ins addrmode4:$addr, pred:$p, reglist:$wb, variable_ops),
-              IIC_iLoadm, "ldm${addr:submode}${p}${addr:wide} $addr, $wb", []>;
+              IIC_iLoadm, "ldm${addr:submode}${p}${addr:wide}\t$addr, $wb", []>;
 
 let mayStore = 1, hasExtraSrcRegAllocReq = 1 in
 def t2STM : T2XI<(outs),
                  (ins addrmode4:$addr, pred:$p, reglist:$wb, variable_ops),
-              IIC_iStorem, "stm${addr:submode}${p}${addr:wide} $addr, $wb", []>;
+             IIC_iStorem, "stm${addr:submode}${p}${addr:wide}\t$addr, $wb", []>;
 
 //===----------------------------------------------------------------------===//
 //  Move Instructions.
@@ -647,25 +683,27 @@ def t2STM : T2XI<(outs),
 
 let neverHasSideEffects = 1 in
 def t2MOVr : T2sI<(outs GPR:$dst), (ins GPR:$src), IIC_iMOVr,
-                   "mov", ".w $dst, $src", []>;
+                   "mov", ".w\t$dst, $src", []>;
 
 // AddedComplexity to ensure isel tries t2MOVi before t2MOVi16.
 let isReMaterializable = 1, isAsCheapAsAMove = 1, AddedComplexity = 1 in
 def t2MOVi : T2sI<(outs GPR:$dst), (ins t2_so_imm:$src), IIC_iMOVi,
-                   "mov", ".w $dst, $src",
+                   "mov", ".w\t$dst, $src",
                    [(set GPR:$dst, t2_so_imm:$src)]>;
 
 let isReMaterializable = 1, isAsCheapAsAMove = 1 in
 def t2MOVi16 : T2I<(outs GPR:$dst), (ins i32imm:$src), IIC_iMOVi,
-                   "movw", " $dst, $src",
+                   "movw", "\t$dst, $src",
                    [(set GPR:$dst, imm0_65535:$src)]>;
 
 let Constraints = "$src = $dst" in
 def t2MOVTi16 : T2I<(outs GPR:$dst), (ins GPR:$src, i32imm:$imm), IIC_iMOVi,
-                    "movt", " $dst, $imm",
+                    "movt", "\t$dst, $imm",
                     [(set GPR:$dst,
                           (or (and GPR:$src, 0xffff), lo16AllZero:$imm))]>;
 
+def : T2Pat<(or GPR:$src, 0xffff0000), (t2MOVTi16 GPR:$src, 0xffff)>;
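
The new pattern folds an OR with a constant whose low half is zero into a single movt, which rewrites only the top 16 bits. A worked example — register choice and immediate rendering are illustrative:

  // With $src in r0:
  //   (or r0, 0xffff0000)  ==>  movt r0, #0xffff
  // The low halfword of r0 is preserved; the high halfword becomes 0xffff.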
+
 //===----------------------------------------------------------------------===//
 //  Extend Instructions.
 //
@@ -695,9 +733,9 @@ def : T2Pat<(and (srl GPR:$Src, (i32 8)), 0xFF00FF),
             (t2UXTB16r_rot GPR:$Src, 8)>;
 
 defm t2UXTAB : T2I_bin_rrot<"uxtab",
-                            BinOpFrag<(add node:$LHS, (and node:$RHS, 0x00FF))>>;
+                           BinOpFrag<(add node:$LHS, (and node:$RHS, 0x00FF))>>;
 defm t2UXTAH : T2I_bin_rrot<"uxtah",
-                            BinOpFrag<(add node:$LHS, (and node:$RHS, 0xFFFF))>>;
+                           BinOpFrag<(add node:$LHS, (and node:$RHS, 0xFFFF))>>;
 }
 
 //===----------------------------------------------------------------------===//
@@ -739,16 +777,16 @@ defm t2ROR  : T2I_sh_ir<"ror", BinOpFrag<(rotr node:$LHS, node:$RHS)>>;
 
 let Uses = [CPSR] in {
 def t2MOVrx : T2sI<(outs GPR:$dst), (ins GPR:$src), IIC_iMOVsi,
-                   "rrx", " $dst, $src",
+                   "rrx", "\t$dst, $src",
                    [(set GPR:$dst, (ARMrrx GPR:$src))]>;
 }
 
 let Defs = [CPSR] in {
 def t2MOVsrl_flag : T2XI<(outs GPR:$dst), (ins GPR:$src), IIC_iMOVsi,
-                         "lsrs.w $dst, $src, #1",
+                         "lsrs.w\t$dst, $src, #1",
                          [(set GPR:$dst, (ARMsrl_flag GPR:$src))]>;
 def t2MOVsra_flag : T2XI<(outs GPR:$dst), (ins GPR:$src), IIC_iMOVsi,
-                         "asrs.w $dst, $src, #1",
+                         "asrs.w\t$dst, $src, #1",
                          [(set GPR:$dst, (ARMsra_flag GPR:$src))]>;
 }
 
@@ -764,9 +802,15 @@ defm t2BIC  : T2I_bin_w_irs<"bic", BinOpFrag<(and node:$LHS, (not node:$RHS))>>;
 
 let Constraints = "$src = $dst" in
 def t2BFC : T2I<(outs GPR:$dst), (ins GPR:$src, bf_inv_mask_imm:$imm),
-                IIC_iALUi, "bfc", " $dst, $imm",
+                IIC_iUNAsi, "bfc", "\t$dst, $imm",
                 [(set GPR:$dst, (and GPR:$src, bf_inv_mask_imm:$imm))]>;
 
+def t2SBFX : T2I<(outs GPR:$dst), (ins GPR:$src, imm0_31:$lsb, imm0_31:$width),
+                 IIC_iALUi, "sbfx", "\t$dst, $src, $lsb, $width", []>;
+
+def t2UBFX : T2I<(outs GPR:$dst), (ins GPR:$src, imm0_31:$lsb, imm0_31:$width),
+                 IIC_iALUi, "ubfx", "\t$dst, $src, $lsb, $width", []>;
+
 // FIXME: A8.6.18  BFI - Bitfield insert (Encoding T1)
 
 defm t2ORN  : T2I_bin_irs<"orn", BinOpFrag<(or  node:$LHS, (not node:$RHS))>>;
@@ -792,80 +836,80 @@ def : T2Pat<(t2_so_imm_not:$src),
 //
 let isCommutable = 1 in
 def t2MUL: T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL32,
-                "mul", " $dst, $a, $b",
+                "mul", "\t$dst, $a, $b",
                 [(set GPR:$dst, (mul GPR:$a, GPR:$b))]>;
 
 def t2MLA: T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$c), IIC_iMAC32,
-		"mla", " $dst, $a, $b, $c",
+		"mla", "\t$dst, $a, $b, $c",
 		[(set GPR:$dst, (add (mul GPR:$a, GPR:$b), GPR:$c))]>;
 
 def t2MLS: T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$c), IIC_iMAC32,
-		"mls", " $dst, $a, $b, $c",
+		"mls", "\t$dst, $a, $b, $c",
                 [(set GPR:$dst, (sub GPR:$c, (mul GPR:$a, GPR:$b)))]>;
 
 // Extra precision multiplies with low / high results
 let neverHasSideEffects = 1 in {
 let isCommutable = 1 in {
 def t2SMULL : T2I<(outs GPR:$ldst, GPR:$hdst), (ins GPR:$a, GPR:$b), IIC_iMUL64,
-                   "smull", " $ldst, $hdst, $a, $b", []>;
+                   "smull", "\t$ldst, $hdst, $a, $b", []>;
 
 def t2UMULL : T2I<(outs GPR:$ldst, GPR:$hdst), (ins GPR:$a, GPR:$b), IIC_iMUL64,
-                   "umull", " $ldst, $hdst, $a, $b", []>;
+                   "umull", "\t$ldst, $hdst, $a, $b", []>;
 }
 
 // Multiply + accumulate
 def t2SMLAL : T2I<(outs GPR:$ldst, GPR:$hdst), (ins GPR:$a, GPR:$b), IIC_iMAC64,
-                  "smlal", " $ldst, $hdst, $a, $b", []>;
+                  "smlal", "\t$ldst, $hdst, $a, $b", []>;
 
 def t2UMLAL : T2I<(outs GPR:$ldst, GPR:$hdst), (ins GPR:$a, GPR:$b), IIC_iMAC64,
-                  "umlal", " $ldst, $hdst, $a, $b", []>;
+                  "umlal", "\t$ldst, $hdst, $a, $b", []>;
 
 def t2UMAAL : T2I<(outs GPR:$ldst, GPR:$hdst), (ins GPR:$a, GPR:$b), IIC_iMAC64,
-                  "umaal", " $ldst, $hdst, $a, $b", []>;
+                  "umaal", "\t$ldst, $hdst, $a, $b", []>;
 } // neverHasSideEffects
 
 // Most significant word multiply
 def t2SMMUL : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL32,
-                  "smmul", " $dst, $a, $b",
+                  "smmul", "\t$dst, $a, $b",
                   [(set GPR:$dst, (mulhs GPR:$a, GPR:$b))]>;
 
 def t2SMMLA : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$c), IIC_iMAC32,
-                  "smmla", " $dst, $a, $b, $c",
+                  "smmla", "\t$dst, $a, $b, $c",
                   [(set GPR:$dst, (add (mulhs GPR:$a, GPR:$b), GPR:$c))]>;
 
 
 def t2SMMLS : T2I <(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$c), IIC_iMAC32,
-                   "smmls", " $dst, $a, $b, $c",
+                   "smmls", "\t$dst, $a, $b, $c",
                    [(set GPR:$dst, (sub GPR:$c, (mulhs GPR:$a, GPR:$b)))]>;
 
 multiclass T2I_smul<string opc, PatFrag opnode> {
   def BB : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL32,
-              !strconcat(opc, "bb"), " $dst, $a, $b",
+              !strconcat(opc, "bb"), "\t$dst, $a, $b",
               [(set GPR:$dst, (opnode (sext_inreg GPR:$a, i16),
                                       (sext_inreg GPR:$b, i16)))]>;
 
   def BT : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL32,
-              !strconcat(opc, "bt"), " $dst, $a, $b",
+              !strconcat(opc, "bt"), "\t$dst, $a, $b",
               [(set GPR:$dst, (opnode (sext_inreg GPR:$a, i16),
                                       (sra GPR:$b, (i32 16))))]>;
 
   def TB : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL32,
-              !strconcat(opc, "tb"), " $dst, $a, $b",
+              !strconcat(opc, "tb"), "\t$dst, $a, $b",
               [(set GPR:$dst, (opnode (sra GPR:$a, (i32 16)),
                                       (sext_inreg GPR:$b, i16)))]>;
 
   def TT : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL32,
-              !strconcat(opc, "tt"), " $dst, $a, $b",
+              !strconcat(opc, "tt"), "\t$dst, $a, $b",
               [(set GPR:$dst, (opnode (sra GPR:$a, (i32 16)),
                                       (sra GPR:$b, (i32 16))))]>;
 
   def WB : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL16,
-              !strconcat(opc, "wb"), " $dst, $a, $b",
+              !strconcat(opc, "wb"), "\t$dst, $a, $b",
               [(set GPR:$dst, (sra (opnode GPR:$a,
                                     (sext_inreg GPR:$b, i16)), (i32 16)))]>;
 
   def WT : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b), IIC_iMUL16,
-              !strconcat(opc, "wt"), " $dst, $a, $b",
+              !strconcat(opc, "wt"), "\t$dst, $a, $b",
               [(set GPR:$dst, (sra (opnode GPR:$a,
                                     (sra GPR:$b, (i32 16))), (i32 16)))]>;
 }
@@ -873,33 +917,33 @@ multiclass T2I_smul<string opc, PatFrag opnode> {
 
 multiclass T2I_smla<string opc, PatFrag opnode> {
   def BB : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc), IIC_iMAC16,
-              !strconcat(opc, "bb"), " $dst, $a, $b, $acc",
+              !strconcat(opc, "bb"), "\t$dst, $a, $b, $acc",
               [(set GPR:$dst, (add GPR:$acc,
                                (opnode (sext_inreg GPR:$a, i16),
                                        (sext_inreg GPR:$b, i16))))]>;
 
   def BT : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc), IIC_iMAC16,
-             !strconcat(opc, "bt"), " $dst, $a, $b, $acc",
+             !strconcat(opc, "bt"), "\t$dst, $a, $b, $acc",
              [(set GPR:$dst, (add GPR:$acc, (opnode (sext_inreg GPR:$a, i16),
                                                     (sra GPR:$b, (i32 16)))))]>;
 
   def TB : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc), IIC_iMAC16,
-              !strconcat(opc, "tb"), " $dst, $a, $b, $acc",
+              !strconcat(opc, "tb"), "\t$dst, $a, $b, $acc",
               [(set GPR:$dst, (add GPR:$acc, (opnode (sra GPR:$a, (i32 16)),
                                                  (sext_inreg GPR:$b, i16))))]>;
 
   def TT : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc), IIC_iMAC16,
-              !strconcat(opc, "tt"), " $dst, $a, $b, $acc",
+              !strconcat(opc, "tt"), "\t$dst, $a, $b, $acc",
              [(set GPR:$dst, (add GPR:$acc, (opnode (sra GPR:$a, (i32 16)),
                                                     (sra GPR:$b, (i32 16)))))]>;
 
   def WB : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc), IIC_iMAC16,
-              !strconcat(opc, "wb"), " $dst, $a, $b, $acc",
+              !strconcat(opc, "wb"), "\t$dst, $a, $b, $acc",
               [(set GPR:$dst, (add GPR:$acc, (sra (opnode GPR:$a,
                                        (sext_inreg GPR:$b, i16)), (i32 16))))]>;
 
   def WT : T2I<(outs GPR:$dst), (ins GPR:$a, GPR:$b, GPR:$acc), IIC_iMAC16,
-              !strconcat(opc, "wt"), " $dst, $a, $b, $acc",
+              !strconcat(opc, "wt"), "\t$dst, $a, $b, $acc",
               [(set GPR:$dst, (add GPR:$acc, (sra (opnode GPR:$a,
                                          (sra GPR:$b, (i32 16))), (i32 16))))]>;
 }
@@ -916,15 +960,15 @@ defm t2SMLA : T2I_smla<"smla", BinOpFrag<(mul node:$LHS, node:$RHS)>>;
 //
 
 def t2CLZ : T2I<(outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
-                "clz", " $dst, $src",
+                "clz", "\t$dst, $src",
                 [(set GPR:$dst, (ctlz GPR:$src))]>;
 
 def t2REV : T2I<(outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
-                "rev", ".w $dst, $src",
+                "rev", ".w\t$dst, $src",
                 [(set GPR:$dst, (bswap GPR:$src))]>;
 
 def t2REV16 : T2I<(outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
-                "rev16", ".w $dst, $src",
+                "rev16", ".w\t$dst, $src",
                 [(set GPR:$dst,
                     (or (and (srl GPR:$src, (i32 8)), 0xFF),
                         (or (and (shl GPR:$src, (i32 8)), 0xFF00),
@@ -932,14 +976,14 @@ def t2REV16 : T2I<(outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
                                 (and (shl GPR:$src, (i32 8)), 0xFF000000)))))]>;
 
 def t2REVSH : T2I<(outs GPR:$dst), (ins GPR:$src), IIC_iUNAr,
-                 "revsh", ".w $dst, $src",
+                 "revsh", ".w\t$dst, $src",
                  [(set GPR:$dst,
                     (sext_inreg
                       (or (srl (and GPR:$src, 0xFF00), (i32 8)),
                           (shl GPR:$src, (i32 8))), i16))]>;
 
 def t2PKHBT : T2I<(outs GPR:$dst), (ins GPR:$src1, GPR:$src2, i32imm:$shamt),
-                  IIC_iALUsi, "pkhbt", " $dst, $src1, $src2, LSL $shamt",
+                  IIC_iALUsi, "pkhbt", "\t$dst, $src1, $src2, LSL $shamt",
                   [(set GPR:$dst, (or (and GPR:$src1, 0xFFFF),
                                       (and (shl GPR:$src2, (i32 imm:$shamt)),
                                            0xFFFF0000)))]>;
@@ -951,7 +995,7 @@ def : T2Pat<(or (and GPR:$src1, 0xFFFF), (shl GPR:$src2, imm16_31:$shamt)),
             (t2PKHBT GPR:$src1, GPR:$src2, imm16_31:$shamt)>;
 
 def t2PKHTB : T2I<(outs GPR:$dst), (ins GPR:$src1, GPR:$src2, i32imm:$shamt),
-                  IIC_iALUsi, "pkhtb", " $dst, $src1, $src2, ASR $shamt",
+                  IIC_iALUsi, "pkhtb", "\t$dst, $src1, $src2, ASR $shamt",
                   [(set GPR:$dst, (or (and GPR:$src1, 0xFFFF0000),
                                       (and (sra GPR:$src2, imm16_31:$shamt),
                                            0xFFFF)))]>;
@@ -998,26 +1042,26 @@ defm t2TEQ  : T2I_cmp_is<"teq",
 // FIXME: should be able to write a pattern for ARMcmov, but can't use
 // a two-value operand where a dag node expects two operands. :( 
 def t2MOVCCr : T2I<(outs GPR:$dst), (ins GPR:$false, GPR:$true), IIC_iCMOVr,
-                   "mov", ".w $dst, $true",
+                   "mov", ".w\t$dst, $true",
       [/*(set GPR:$dst, (ARMcmov GPR:$false, GPR:$true, imm:$cc, CCR:$ccr))*/]>,
                 RegConstraint<"$false = $dst">;
 
 def t2MOVCCi : T2I<(outs GPR:$dst), (ins GPR:$false, t2_so_imm:$true),
-                   IIC_iCMOVi, "mov", ".w $dst, $true",
+                   IIC_iCMOVi, "mov", ".w\t$dst, $true",
 [/*(set GPR:$dst, (ARMcmov GPR:$false, t2_so_imm:$true, imm:$cc, CCR:$ccr))*/]>,
                    RegConstraint<"$false = $dst">;
 
 def t2MOVCClsl : T2I<(outs GPR:$dst), (ins GPR:$false, GPR:$true, i32imm:$rhs),
-                   IIC_iCMOVsi, "lsl", ".w $dst, $true, $rhs", []>,
+                   IIC_iCMOVsi, "lsl", ".w\t$dst, $true, $rhs", []>,
                    RegConstraint<"$false = $dst">;
 def t2MOVCClsr : T2I<(outs GPR:$dst), (ins GPR:$false, GPR:$true, i32imm:$rhs),
-                   IIC_iCMOVsi, "lsr", ".w $dst, $true, $rhs", []>,
+                   IIC_iCMOVsi, "lsr", ".w\t$dst, $true, $rhs", []>,
                    RegConstraint<"$false = $dst">;
 def t2MOVCCasr : T2I<(outs GPR:$dst), (ins GPR:$false, GPR:$true, i32imm:$rhs),
-                   IIC_iCMOVsi, "asr", ".w $dst, $true, $rhs", []>,
+                   IIC_iCMOVsi, "asr", ".w\t$dst, $true, $rhs", []>,
                    RegConstraint<"$false = $dst">;
 def t2MOVCCror : T2I<(outs GPR:$dst), (ins GPR:$false, GPR:$true, i32imm:$rhs),
-                   IIC_iCMOVsi, "ror", ".w $dst, $true, $rhs", []>,
+                   IIC_iCMOVsi, "ror", ".w\t$dst, $true, $rhs", []>,
                    RegConstraint<"$false = $dst">;
 
 //===----------------------------------------------------------------------===//
@@ -1028,7 +1072,7 @@ def t2MOVCCror : T2I<(outs GPR:$dst), (ins GPR:$false, GPR:$true, i32imm:$rhs),
 let isCall = 1,
   Defs = [R0, R12, LR, CPSR] in {
   def t2TPsoft : T2XI<(outs), (ins), IIC_Br,
-                     "bl __aeabi_read_tp",
+                     "bl\t__aeabi_read_tp",
                      [(set R0, ARMthread_pointer)]>;
 }
 
@@ -1051,13 +1095,13 @@ let Defs =
     D31 ] in {
   def t2Int_eh_sjlj_setjmp : Thumb2XI<(outs), (ins GPR:$src),
                                AddrModeNone, SizeSpecial, NoItinerary,
-                               "str.w sp, [$src, #+8] @ eh_setjmp begin\n"
-                               "\tadr r12, 0f\n"
-                               "\torr r12, #1\n"
-                               "\tstr.w r12, [$src, #+4]\n"
-                               "\tmovs r0, #0\n"
-                               "\tb 1f\n"
-                               "0:\tmovs r0, #1 @ eh_setjmp end\n"
+                               "str.w\tsp, [$src, #+8] @ eh_setjmp begin\n"
+                               "\tadr\tr12, 0f\n"
+                               "\torr.w\tr12, r12, #1\n"
+                               "\tstr.w\tr12, [$src, #+4]\n"
+                               "\tmovs\tr0, #0\n"
+                               "\tb\t1f\n"
+                               "0:\tmovs\tr0, #1 @ eh_setjmp end\n"
                                "1:", "",
                                [(set R0, (ARMeh_sjlj_setjmp GPR:$src))]>;
 }
@@ -1076,32 +1120,32 @@ let isReturn = 1, isTerminator = 1, isBarrier = 1, mayLoad = 1,
     hasExtraDefRegAllocReq = 1 in
   def t2LDM_RET : T2XI<(outs),
                     (ins addrmode4:$addr, pred:$p, reglist:$wb, variable_ops),
-                    IIC_Br, "ldm${addr:submode}${p}${addr:wide} $addr, $wb",
+                    IIC_Br, "ldm${addr:submode}${p}${addr:wide}\t$addr, $wb",
                     []>;
 
 let isBranch = 1, isTerminator = 1, isBarrier = 1 in {
 let isPredicable = 1 in
 def t2B   : T2XI<(outs), (ins brtarget:$target), IIC_Br,
-                 "b.w $target",
+                 "b.w\t$target",
                  [(br bb:$target)]>;
 
 let isNotDuplicable = 1, isIndirectBranch = 1 in {
 def t2BR_JT :
     T2JTI<(outs),
           (ins GPR:$target, GPR:$index, jt2block_operand:$jt, i32imm:$id),
-           IIC_Br, "mov pc, $target\n$jt",
+           IIC_Br, "mov\tpc, $target\n$jt",
           [(ARMbr2jt GPR:$target, GPR:$index, tjumptable:$jt, imm:$id)]>;
 
 // FIXME: Add a non-pc based case that can be predicated.
 def t2TBB :
     T2JTI<(outs),
         (ins tb_addrmode:$index, jt2block_operand:$jt, i32imm:$id),
-         IIC_Br, "tbb $index\n$jt", []>;
+         IIC_Br, "tbb\t$index\n$jt", []>;
 
 def t2TBH :
     T2JTI<(outs),
         (ins tb_addrmode:$index, jt2block_operand:$jt, i32imm:$id),
-         IIC_Br, "tbh $index\n$jt", []>;
+         IIC_Br, "tbh\t$index\n$jt", []>;
 } // isNotDuplicable, isIndirectBranch
 
 } // isBranch, isTerminator, isBarrier
@@ -1110,29 +1154,57 @@ def t2TBH :
 // a two-value operand where a dag node expects two operands. :(
 let isBranch = 1, isTerminator = 1 in
 def t2Bcc : T2I<(outs), (ins brtarget:$target), IIC_Br,
-                "b", ".w $target",
+                "b", ".w\t$target",
                 [/*(ARMbrcond bb:$target, imm:$cc)*/]>;
 
 
 // IT block
 def t2IT : Thumb2XI<(outs), (ins it_pred:$cc, it_mask:$mask),
                     AddrModeNone, Size2Bytes,  IIC_iALUx,
-                    "it$mask $cc", "", []>;
+                    "it$mask\t$cc", "", []>;
 
 //===----------------------------------------------------------------------===//
 // Non-Instruction Patterns
 //
 
-// ConstantPool, GlobalAddress, and JumpTable
-def : T2Pat<(ARMWrapper  tglobaladdr :$dst), (t2LEApcrel tglobaladdr :$dst)>;
-def : T2Pat<(ARMWrapper  tconstpool  :$dst), (t2LEApcrel tconstpool  :$dst)>;
-def : T2Pat<(ARMWrapperJT tjumptable:$dst, imm:$id),
-            (t2LEApcrelJT tjumptable:$dst, imm:$id)>;
+// Two piece so_imms.
+def : T2Pat<(or GPR:$LHS, t2_so_imm2part:$RHS),
+             (t2ORRri (t2ORRri GPR:$LHS, (t2_so_imm2part_1 imm:$RHS)),
+                    (t2_so_imm2part_2 imm:$RHS))>;
+def : T2Pat<(xor GPR:$LHS, t2_so_imm2part:$RHS),
+             (t2EORri (t2EORri GPR:$LHS, (t2_so_imm2part_1 imm:$RHS)),
+                    (t2_so_imm2part_2 imm:$RHS))>;
+def : T2Pat<(add GPR:$LHS, t2_so_imm2part:$RHS),
+             (t2ADDri (t2ADDri GPR:$LHS, (t2_so_imm2part_1 imm:$RHS)),
+                    (t2_so_imm2part_2 imm:$RHS))>;
+def : T2Pat<(add GPR:$LHS, t2_so_neg_imm2part:$RHS),
+             (t2SUBri (t2SUBri GPR:$LHS, (t2_so_neg_imm2part_1 imm:$RHS)),
+                    (t2_so_neg_imm2part_2 imm:$RHS))>;
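
These patterns split an immediate that is not itself a valid Thumb-2 shifter-operand immediate, but is expressible as two that are, into two back-to-back instructions. A worked example with a hypothetical constant, assuming t2_so_imm2part splits 0x10100 into 0x10000 and 0x100:

  // add.w rd, rn, #0x10100 is not encodable in one instruction, so select:
  //   add.w rd, rn, #65536    @ t2_so_imm2part_1
  //   add.w rd, rd, #256      @ t2_so_imm2part_2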
 
 // 32-bit immediate using movw + movt.
 // This is a single pseudo instruction to make it re-materializable. Remove
 // when we can do generalized remat.
 let isReMaterializable = 1 in
 def t2MOVi32imm : T2Ix2<(outs GPR:$dst), (ins i32imm:$src), IIC_iMOVi,
-                     "movw", " $dst, ${src:lo16}\n\tmovt${p} $dst, ${src:hi16}",
+                   "movw", "\t$dst, ${src:lo16}\n\tmovt${p}\t$dst, ${src:hi16}",
                      [(set GPR:$dst, (i32 imm:$src))]>;
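
The pseudo prints as a movw/movt pair; keeping the pair fused as one instruction until late is what makes the constant cheaply re-materializable. Illustratively, with a hypothetical value and register (immediate rendering may differ):

  // t2MOVi32imm of 0x12345678 into r0 prints roughly as:
  //   movw r0, #0x5678     @ ${src:lo16}
  //   movt r0, #0x1234     @ ${src:hi16}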
+
+// ConstantPool, GlobalAddress, and JumpTable
+def : T2Pat<(ARMWrapper  tglobaladdr :$dst), (t2LEApcrel tglobaladdr :$dst)>,
+           Requires<[IsThumb2, DontUseMovt]>;
+def : T2Pat<(ARMWrapper  tconstpool  :$dst), (t2LEApcrel tconstpool  :$dst)>;
+def : T2Pat<(ARMWrapper  tglobaladdr :$dst), (t2MOVi32imm tglobaladdr :$dst)>,
+           Requires<[IsThumb2, UseMovt]>;
+
+def : T2Pat<(ARMWrapperJT tjumptable:$dst, imm:$id),
+            (t2LEApcrelJT tjumptable:$dst, imm:$id)>;
+
+// Pseudo instruction that combines ldr from constpool and add pc. This should
+// be expanded into two instructions late to allow if-conversion and
+// scheduling.
+let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1 in 
+def t2LDRpci_pic : PseudoInst<(outs GPR:$dst), (ins i32imm:$addr, pclabel:$cp),
+                   NoItinerary, "@ ldr.w\t$dst, $addr\n$cp:\n\tadd\t$dst, pc",
+               [(set GPR:$dst, (ARMpic_add (load (ARMWrapper tconstpool:$addr)),
+                                           imm:$cp))]>,
+               Requires<[IsThumb2]>;
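
The asm string above spells out what the pseudo later expands to; with hypothetical labels, the emitted sequence looks like:

  //   ldr.w r0, LCPI0_0    @ load from the constant pool
  // LPC0:
  //   add   r0, pc         @ pc-relative fix-up of the loaded address
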
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrVFP.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrVFP.td
index 56336d1..5bfe89d 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrVFP.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrVFP.td
@@ -17,7 +17,7 @@ def SDT_ITOF :
 SDTypeProfile<1, 1, [SDTCisFP<0>, SDTCisVT<1, f32>]>;
 def SDT_CMPFP0 :
 SDTypeProfile<0, 1, [SDTCisFP<0>]>;
-def SDT_FMDRR :
+def SDT_VMOVDRR :
 SDTypeProfile<1, 2, [SDTCisVT<0, f64>, SDTCisVT<1, i32>,
                      SDTCisSameAs<1, 2>]>;
 
@@ -28,28 +28,48 @@ def arm_uitof  : SDNode<"ARMISD::UITOF",  SDT_ITOF>;
 def arm_fmstat : SDNode<"ARMISD::FMSTAT", SDTNone, [SDNPInFlag,SDNPOutFlag]>;
 def arm_cmpfp  : SDNode<"ARMISD::CMPFP",  SDT_ARMCmp, [SDNPOutFlag]>;
 def arm_cmpfp0 : SDNode<"ARMISD::CMPFPw0",SDT_CMPFP0, [SDNPOutFlag]>;
-def arm_fmdrr  : SDNode<"ARMISD::FMDRR",  SDT_FMDRR>;
+def arm_fmdrr  : SDNode<"ARMISD::VMOVDRR",  SDT_VMOVDRR>;
+
+//===----------------------------------------------------------------------===//
+// Operand Definitions.
+//
+
+
+def vfp_f32imm : Operand<f32>,
+                 PatLeaf<(f32 fpimm), [{
+      return ARM::getVFPf32Imm(N->getValueAPF()) != -1;
+    }]> {
+  let PrintMethod = "printVFPf32ImmOperand";
+}
+
+def vfp_f64imm : Operand<f64>,
+                 PatLeaf<(f64 fpimm), [{
+      return ARM::getVFPf64Imm(N->getValueAPF()) != -1;
+    }]> {
+  let PrintMethod = "printVFPf64ImmOperand";
+}
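
These operands match only floating-point constants representable in the VFPv3 8-bit immediate format (values of the form n/16 x 2^r with n in 16..31 and r in -3..4, e.g. 1.0, 0.5, 2.0, 10.0); the getVFPf32Imm/getVFPf64Imm helpers return the encoding, or -1 when the constant does not fit. A hedged sketch of eventual use — FCONSTS here stands in for a VFP3 move-immediate instruction not defined in this hunk:

  // Sketch: constants that fail the predicate fall back to a pool load.
  def : Pat<(f32 vfp_f32imm:$imm), (FCONSTS vfp_f32imm:$imm)>,
        Requires<[HasVFP3]>;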
+
 
 //===----------------------------------------------------------------------===//
 //  Load / store Instructions.
 //
 
-let canFoldAsLoad = 1 in {
-def FLDD  : ADI5<0b1101, 0b01, (outs DPR:$dst), (ins addrmode5:$addr),
-                 IIC_fpLoad64, "fldd", " $dst, $addr",
+let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1 in {
+def VLDRD : ADI5<0b1101, 0b01, (outs DPR:$dst), (ins addrmode5:$addr),
+                 IIC_fpLoad64, "vldr", ".64\t$dst, $addr",
                  [(set DPR:$dst, (load addrmode5:$addr))]>;
 
-def FLDS  : ASI5<0b1101, 0b01, (outs SPR:$dst), (ins addrmode5:$addr),
-                 IIC_fpLoad32, "flds", " $dst, $addr",
+def VLDRS : ASI5<0b1101, 0b01, (outs SPR:$dst), (ins addrmode5:$addr),
+                 IIC_fpLoad32, "vldr", ".32\t$dst, $addr",
                  [(set SPR:$dst, (load addrmode5:$addr))]>;
 } // canFoldAsLoad
 
-def FSTD  : ADI5<0b1101, 0b00, (outs), (ins DPR:$src, addrmode5:$addr),
-                 IIC_fpStore64, "fstd", " $src, $addr",
+def VSTRD  : ADI5<0b1101, 0b00, (outs), (ins DPR:$src, addrmode5:$addr),
+                 IIC_fpStore64, "vstr", ".64\t$src, $addr",
                  [(store DPR:$src, addrmode5:$addr)]>;
 
-def FSTS  : ASI5<0b1101, 0b00, (outs), (ins SPR:$src, addrmode5:$addr),
-                 IIC_fpStore32, "fsts", " $src, $addr",
+def VSTRS  : ASI5<0b1101, 0b00, (outs), (ins SPR:$src, addrmode5:$addr),
+                 IIC_fpStore32, "vstr", ".32\t$src, $addr",
                  [(store SPR:$src, addrmode5:$addr)]>;
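
This block is part of the switch from the pre-UAL VFP mnemonics to the ARM unified assembler language (UAL): fldd/flds/fstd/fsts become vldr/vstr with an explicit size suffix, and the record names are renamed to match. Illustratively, the printed assembly changes as:

  //   fldd    d0, [r0]     @ before (pre-UAL)
  //   vldr.64 d0, [r0]     @ after  (UAL)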
 
 //===----------------------------------------------------------------------===//
@@ -57,32 +77,32 @@ def FSTS  : ASI5<0b1101, 0b00, (outs), (ins SPR:$src, addrmode5:$addr),
 //
 
 let mayLoad = 1, hasExtraDefRegAllocReq = 1 in {
-def FLDMD : AXDI5<(outs), (ins addrmode5:$addr, pred:$p, reglist:$wb,
+def VLDMD : AXDI5<(outs), (ins addrmode5:$addr, pred:$p, reglist:$wb,
                            variable_ops), IIC_fpLoadm,
-                  "fldm${addr:submode}d${p} ${addr:base}, $wb",
+                  "vldm${addr:submode}${p}\t${addr:base}, $wb",
                   []> {
   let Inst{20} = 1;
 }
 
-def FLDMS : AXSI5<(outs), (ins addrmode5:$addr, pred:$p, reglist:$wb,
+def VLDMS : AXSI5<(outs), (ins addrmode5:$addr, pred:$p, reglist:$wb,
                            variable_ops), IIC_fpLoadm, 
-                  "fldm${addr:submode}s${p} ${addr:base}, $wb",
+                  "vldm${addr:submode}${p}\t${addr:base}, $wb",
                   []> {
   let Inst{20} = 1;
 }
 } // mayLoad, hasExtraDefRegAllocReq
 
 let mayStore = 1, hasExtraSrcRegAllocReq = 1 in {
-def FSTMD : AXDI5<(outs), (ins addrmode5:$addr, pred:$p, reglist:$wb,
+def VSTMD : AXDI5<(outs), (ins addrmode5:$addr, pred:$p, reglist:$wb,
                            variable_ops), IIC_fpStorem,
-                 "fstm${addr:submode}d${p} ${addr:base}, $wb",
+                 "vstm${addr:submode}${p}\t${addr:base}, $wb",
                  []> {
   let Inst{20} = 0;
 }
 
-def FSTMS : AXSI5<(outs), (ins addrmode5:$addr, pred:$p, reglist:$wb,
+def VSTMS : AXSI5<(outs), (ins addrmode5:$addr, pred:$p, reglist:$wb,
                            variable_ops), IIC_fpStorem,
-                 "fstm${addr:submode}s${p} ${addr:base}, $wb",
+                 "vstm${addr:submode}${p}\t${addr:base}, $wb",
                  []> {
   let Inst{20} = 0;
 }
@@ -94,68 +114,68 @@ def FSTMS : AXSI5<(outs), (ins addrmode5:$addr, pred:$p, reglist:$wb,
 // FP Binary Operations.
 //
 
-def FADDD  : ADbI<0b11100011, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
-                 IIC_fpALU64, "faddd", " $dst, $a, $b",
+def VADDD  : ADbI<0b11100011, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
+                 IIC_fpALU64, "vadd", ".f64\t$dst, $a, $b",
                  [(set DPR:$dst, (fadd DPR:$a, DPR:$b))]>;
 
-def FADDS  : ASbIn<0b11100011, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
-                  IIC_fpALU32, "fadds", " $dst, $a, $b",
+def VADDS  : ASbIn<0b11100011, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
+                  IIC_fpALU32, "vadd", ".f32\t$dst, $a, $b",
                   [(set SPR:$dst, (fadd SPR:$a, SPR:$b))]>;
 
 // These are encoded as unary instructions.
 let Defs = [FPSCR] in {
-def FCMPED : ADuI<0b11101011, 0b0100, 0b1100, (outs), (ins DPR:$a, DPR:$b),
-                 IIC_fpCMP64, "fcmped", " $a, $b",
+def VCMPED : ADuI<0b11101011, 0b0100, 0b1100, (outs), (ins DPR:$a, DPR:$b),
+                 IIC_fpCMP64, "vcmpe", ".f64\t$a, $b",
                  [(arm_cmpfp DPR:$a, DPR:$b)]>;
 
-def FCMPES : ASuI<0b11101011, 0b0100, 0b1100, (outs), (ins SPR:$a, SPR:$b),
-                 IIC_fpCMP32, "fcmpes", " $a, $b",
+def VCMPES : ASuI<0b11101011, 0b0100, 0b1100, (outs), (ins SPR:$a, SPR:$b),
+                 IIC_fpCMP32, "vcmpe", ".f32\t$a, $b",
                  [(arm_cmpfp SPR:$a, SPR:$b)]>;
 }
 
-def FDIVD  : ADbI<0b11101000, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
-                 IIC_fpDIV64, "fdivd", " $dst, $a, $b",
+def VDIVD  : ADbI<0b11101000, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
+                 IIC_fpDIV64, "vdiv", ".f64\t$dst, $a, $b",
                  [(set DPR:$dst, (fdiv DPR:$a, DPR:$b))]>;
 
-def FDIVS  : ASbI<0b11101000, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
-                 IIC_fpDIV32, "fdivs", " $dst, $a, $b",
+def VDIVS  : ASbI<0b11101000, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
+                 IIC_fpDIV32, "vdiv", ".f32\t$dst, $a, $b",
                  [(set SPR:$dst, (fdiv SPR:$a, SPR:$b))]>;
 
-def FMULD  : ADbI<0b11100010, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
-                 IIC_fpMUL64, "fmuld", " $dst, $a, $b",
+def VMULD  : ADbI<0b11100010, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
+                 IIC_fpMUL64, "vmul", ".f64\t$dst, $a, $b",
                  [(set DPR:$dst, (fmul DPR:$a, DPR:$b))]>;
 
-def FMULS  : ASbIn<0b11100010, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
-                  IIC_fpMUL32, "fmuls", " $dst, $a, $b",
+def VMULS  : ASbIn<0b11100010, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
+                  IIC_fpMUL32, "vmul", ".f32\t$dst, $a, $b",
                   [(set SPR:$dst, (fmul SPR:$a, SPR:$b))]>;
-                 
-def FNMULD  : ADbI<0b11100010, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
-                  IIC_fpMUL64, "fnmuld", " $dst, $a, $b",
+
+def VNMULD  : ADbI<0b11100010, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
+                  IIC_fpMUL64, "vnmul", ".f64\t$dst, $a, $b",
                   [(set DPR:$dst, (fneg (fmul DPR:$a, DPR:$b)))]> {
   let Inst{6} = 1;
 }
 
-def FNMULS  : ASbI<0b11100010, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
-                  IIC_fpMUL32, "fnmuls", " $dst, $a, $b",
+def VNMULS  : ASbI<0b11100010, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
+                  IIC_fpMUL32, "vnmul", ".f32\t$dst, $a, $b",
                   [(set SPR:$dst, (fneg (fmul SPR:$a, SPR:$b)))]> {
   let Inst{6} = 1;
 }
 
 // Match reassociated forms only if not sign dependent rounding.
 def : Pat<(fmul (fneg DPR:$a), DPR:$b),
-          (FNMULD DPR:$a, DPR:$b)>, Requires<[NoHonorSignDependentRounding]>;
+          (VNMULD DPR:$a, DPR:$b)>, Requires<[NoHonorSignDependentRounding]>;
 def : Pat<(fmul (fneg SPR:$a), SPR:$b),
-          (FNMULS SPR:$a, SPR:$b)>, Requires<[NoHonorSignDependentRounding]>;
+          (VNMULS SPR:$a, SPR:$b)>, Requires<[NoHonorSignDependentRounding]>;
 
 
-def FSUBD  : ADbI<0b11100011, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
-                 IIC_fpALU64, "fsubd", " $dst, $a, $b",
+def VSUBD  : ADbI<0b11100011, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
+                 IIC_fpALU64, "vsub", ".f64\t$dst, $a, $b",
                  [(set DPR:$dst, (fsub DPR:$a, DPR:$b))]> {
   let Inst{6} = 1;
 }
 
-def FSUBS  : ASbIn<0b11100011, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
-                  IIC_fpALU32, "fsubs", " $dst, $a, $b",
+def VSUBS  : ASbIn<0b11100011, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
+                  IIC_fpALU32, "vsub", ".f32\t$dst, $a, $b",
                   [(set SPR:$dst, (fsub SPR:$a, SPR:$b))]> {
   let Inst{6} = 1;
 }
@@ -164,31 +184,31 @@ def FSUBS  : ASbIn<0b11100011, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
 // FP Unary Operations.
 //
 
-def FABSD  : ADuI<0b11101011, 0b0000, 0b1100, (outs DPR:$dst), (ins DPR:$a),
-                 IIC_fpUNA64, "fabsd", " $dst, $a",
+def VABSD  : ADuI<0b11101011, 0b0000, 0b1100, (outs DPR:$dst), (ins DPR:$a),
+                 IIC_fpUNA64, "vabs", ".f64\t$dst, $a",
                  [(set DPR:$dst, (fabs DPR:$a))]>;
 
-def FABSS  : ASuIn<0b11101011, 0b0000, 0b1100, (outs SPR:$dst), (ins SPR:$a),
-                  IIC_fpUNA32, "fabss", " $dst, $a",
+def VABSS  : ASuIn<0b11101011, 0b0000, 0b1100, (outs SPR:$dst), (ins SPR:$a),
+                  IIC_fpUNA32, "vabs", ".f32\t$dst, $a",
                   [(set SPR:$dst, (fabs SPR:$a))]>;
 
 let Defs = [FPSCR] in {
-def FCMPEZD : ADuI<0b11101011, 0b0101, 0b1100, (outs), (ins DPR:$a),
-                  IIC_fpCMP64, "fcmpezd", " $a",
+def VCMPEZD : ADuI<0b11101011, 0b0101, 0b1100, (outs), (ins DPR:$a),
+                  IIC_fpCMP64, "vcmpe", ".f64\t$a, #0",
                   [(arm_cmpfp0 DPR:$a)]>;
 
-def FCMPEZS : ASuI<0b11101011, 0b0101, 0b1100, (outs), (ins SPR:$a),
-                  IIC_fpCMP32, "fcmpezs", " $a",
+def VCMPEZS : ASuI<0b11101011, 0b0101, 0b1100, (outs), (ins SPR:$a),
+                  IIC_fpCMP32, "vcmpe", ".f32\t$a, #0",
                   [(arm_cmpfp0 SPR:$a)]>;
 }
 
-def FCVTDS : ASuI<0b11101011, 0b0111, 0b1100, (outs DPR:$dst), (ins SPR:$a),
-                 IIC_fpCVTDS, "fcvtds", " $dst, $a",
+def VCVTDS : ASuI<0b11101011, 0b0111, 0b1100, (outs DPR:$dst), (ins SPR:$a),
+                 IIC_fpCVTDS, "vcvt", ".f64.f32\t$dst, $a",
                  [(set DPR:$dst, (fextend SPR:$a))]>;
 
 // Special case encoding: bits 11-8 is 0b1011.
-def FCVTSD : VFPAI<(outs SPR:$dst), (ins DPR:$a), VFPUnaryFrm,
-                   IIC_fpCVTSD, "fcvtsd", " $dst, $a",
+def VCVTSD : VFPAI<(outs SPR:$dst), (ins DPR:$a), VFPUnaryFrm,
+                   IIC_fpCVTSD, "vcvt", ".f32.f64\t$dst, $a",
                    [(set SPR:$dst, (fround DPR:$a))]> {
   let Inst{27-23} = 0b11101;
   let Inst{21-16} = 0b110111;
@@ -197,52 +217,52 @@ def FCVTSD : VFPAI<(outs SPR:$dst), (ins DPR:$a), VFPUnaryFrm,
 }
 
 let neverHasSideEffects = 1 in {
-def FCPYD  : ADuI<0b11101011, 0b0000, 0b0100, (outs DPR:$dst), (ins DPR:$a),
-                 IIC_fpUNA64, "fcpyd", " $dst, $a", []>;
+def VMOVD: ADuI<0b11101011, 0b0000, 0b0100, (outs DPR:$dst), (ins DPR:$a),
+                 IIC_fpUNA64, "vmov", ".f64\t$dst, $a", []>;
 
-def FCPYS  : ASuI<0b11101011, 0b0000, 0b0100, (outs SPR:$dst), (ins SPR:$a),
-                 IIC_fpUNA32, "fcpys", " $dst, $a", []>;
+def VMOVS: ASuI<0b11101011, 0b0000, 0b0100, (outs SPR:$dst), (ins SPR:$a),
+                 IIC_fpUNA32, "vmov", ".f32\t$dst, $a", []>;
 } // neverHasSideEffects
 
-def FNEGD  : ADuI<0b11101011, 0b0001, 0b0100, (outs DPR:$dst), (ins DPR:$a),
-                 IIC_fpUNA64, "fnegd", " $dst, $a",
+def VNEGD  : ADuI<0b11101011, 0b0001, 0b0100, (outs DPR:$dst), (ins DPR:$a),
+                 IIC_fpUNA64, "vneg", ".f64\t$dst, $a",
                  [(set DPR:$dst, (fneg DPR:$a))]>;
 
-def FNEGS  : ASuIn<0b11101011, 0b0001, 0b0100, (outs SPR:$dst), (ins SPR:$a),
-                  IIC_fpUNA32, "fnegs", " $dst, $a",
+def VNEGS  : ASuIn<0b11101011, 0b0001, 0b0100, (outs SPR:$dst), (ins SPR:$a),
+                  IIC_fpUNA32, "vneg", ".f32\t$dst, $a",
                   [(set SPR:$dst, (fneg SPR:$a))]>;
 
-def FSQRTD  : ADuI<0b11101011, 0b0001, 0b1100, (outs DPR:$dst), (ins DPR:$a),
-                 IIC_fpSQRT64, "fsqrtd", " $dst, $a",
+def VSQRTD  : ADuI<0b11101011, 0b0001, 0b1100, (outs DPR:$dst), (ins DPR:$a),
+                 IIC_fpSQRT64, "vsqrt", ".f64\t$dst, $a",
                  [(set DPR:$dst, (fsqrt DPR:$a))]>;
 
-def FSQRTS  : ASuI<0b11101011, 0b0001, 0b1100, (outs SPR:$dst), (ins SPR:$a),
-                 IIC_fpSQRT32, "fsqrts", " $dst, $a",
+def VSQRTS  : ASuI<0b11101011, 0b0001, 0b1100, (outs SPR:$dst), (ins SPR:$a),
+                 IIC_fpSQRT32, "vsqrt", ".f32\t$dst, $a",
                  [(set SPR:$dst, (fsqrt SPR:$a))]>;
 
 //===----------------------------------------------------------------------===//
 // FP <-> GPR Copies.  Int <-> FP Conversions.
 //
 
-def FMRS   : AVConv2I<0b11100001, 0b1010, (outs GPR:$dst), (ins SPR:$src),
-                 IIC_VMOVSI, "fmrs", " $dst, $src",
+def VMOVRS : AVConv2I<0b11100001, 0b1010, (outs GPR:$dst), (ins SPR:$src),
+                 IIC_VMOVSI, "vmov", "\t$dst, $src",
                  [(set GPR:$dst, (bitconvert SPR:$src))]>;
 
-def FMSR   : AVConv4I<0b11100000, 0b1010, (outs SPR:$dst), (ins GPR:$src),
-                 IIC_VMOVIS, "fmsr", " $dst, $src",
+def VMOVSR : AVConv4I<0b11100000, 0b1010, (outs SPR:$dst), (ins GPR:$src),
+                 IIC_VMOVIS, "vmov", "\t$dst, $src",
                  [(set SPR:$dst, (bitconvert GPR:$src))]>;
 
-def FMRRD  : AVConv3I<0b11000101, 0b1011,
+def VMOVRRD  : AVConv3I<0b11000101, 0b1011,
                       (outs GPR:$wb, GPR:$dst2), (ins DPR:$src),
-                 IIC_VMOVDI, "fmrrd", " $wb, $dst2, $src",
+                 IIC_VMOVDI, "vmov", "\t$wb, $dst2, $src",
                  [/* FIXME: Can't write pattern for multiple result instr*/]>;
 
 // FMDHR: GPR -> SPR
 // FMDLR: GPR -> SPR
 
-def FMDRR : AVConv5I<0b11000100, 0b1011,
+def VMOVDRR : AVConv5I<0b11000100, 0b1011,
                      (outs DPR:$dst), (ins GPR:$src1, GPR:$src2),
-                IIC_VMOVID, "fmdrr", " $dst, $src1, $src2",
+                IIC_VMOVID, "vmov", "\t$dst, $src1, $src2",
                 [(set DPR:$dst, (arm_fmdrr GPR:$src1, GPR:$src2))]>;
 
 // FMRDH: SPR -> GPR
@@ -257,53 +277,53 @@ def FMDRR : AVConv5I<0b11000100, 0b1011,
 
 // Int to FP:
 
-def FSITOD : AVConv1I<0b11101011, 0b1000, 0b1011, (outs DPR:$dst), (ins SPR:$a),
-                 IIC_fpCVTID, "fsitod", " $dst, $a",
+def VSITOD : AVConv1I<0b11101011, 0b1000, 0b1011, (outs DPR:$dst), (ins SPR:$a),
+                 IIC_fpCVTID, "vcvt", ".f64.s32\t$dst, $a",
                  [(set DPR:$dst, (arm_sitof SPR:$a))]> {
   let Inst{7} = 1;
 }
 
-def FSITOS : AVConv1In<0b11101011, 0b1000, 0b1010, (outs SPR:$dst),(ins SPR:$a),
-                 IIC_fpCVTIS, "fsitos", " $dst, $a",
+def VSITOS : AVConv1In<0b11101011, 0b1000, 0b1010, (outs SPR:$dst),(ins SPR:$a),
+                 IIC_fpCVTIS, "vcvt", ".f32.s32\t$dst, $a",
                  [(set SPR:$dst, (arm_sitof SPR:$a))]> {
   let Inst{7} = 1;
 }
 
-def FUITOD : AVConv1I<0b11101011, 0b1000, 0b1011, (outs DPR:$dst), (ins SPR:$a),
-                 IIC_fpCVTID, "fuitod", " $dst, $a",
+def VUITOD : AVConv1I<0b11101011, 0b1000, 0b1011, (outs DPR:$dst), (ins SPR:$a),
+                 IIC_fpCVTID, "vcvt", ".f64.u32\t$dst, $a",
                  [(set DPR:$dst, (arm_uitof SPR:$a))]>;
 
-def FUITOS : AVConv1In<0b11101011, 0b1000, 0b1010, (outs SPR:$dst),(ins SPR:$a),
-                 IIC_fpCVTIS, "fuitos", " $dst, $a",
+def VUITOS : AVConv1In<0b11101011, 0b1000, 0b1010, (outs SPR:$dst),(ins SPR:$a),
+                 IIC_fpCVTIS, "vcvt", ".f32.u32\t$dst, $a",
                  [(set SPR:$dst, (arm_uitof SPR:$a))]>;
 
 // FP to Int:
 // Always set Z bit in the instruction, i.e. "round towards zero" variants.
 
-def FTOSIZD : AVConv1I<0b11101011, 0b1101, 0b1011,
+def VTOSIZD : AVConv1I<0b11101011, 0b1101, 0b1011,
                        (outs SPR:$dst), (ins DPR:$a),
-                 IIC_fpCVTDI, "ftosizd", " $dst, $a",
+                 IIC_fpCVTDI, "vcvt", ".s32.f64\t$dst, $a",
                  [(set SPR:$dst, (arm_ftosi DPR:$a))]> {
   let Inst{7} = 1; // Z bit
 }
 
-def FTOSIZS : AVConv1In<0b11101011, 0b1101, 0b1010,
+def VTOSIZS : AVConv1In<0b11101011, 0b1101, 0b1010,
                         (outs SPR:$dst), (ins SPR:$a),
-                 IIC_fpCVTSI, "ftosizs", " $dst, $a",
+                 IIC_fpCVTSI, "vcvt", ".s32.f32\t$dst, $a",
                  [(set SPR:$dst, (arm_ftosi SPR:$a))]> {
   let Inst{7} = 1; // Z bit
 }
 
-def FTOUIZD : AVConv1I<0b11101011, 0b1100, 0b1011,
+def VTOUIZD : AVConv1I<0b11101011, 0b1100, 0b1011,
                        (outs SPR:$dst), (ins DPR:$a),
-                 IIC_fpCVTDI, "ftouizd", " $dst, $a",
+                 IIC_fpCVTDI, "vcvt", ".u32.f64\t$dst, $a",
                  [(set SPR:$dst, (arm_ftoui DPR:$a))]> {
   let Inst{7} = 1; // Z bit
 }
 
-def FTOUIZS : AVConv1In<0b11101011, 0b1100, 0b1010,
+def VTOUIZS : AVConv1In<0b11101011, 0b1100, 0b1010,
                         (outs SPR:$dst), (ins SPR:$a),
-                 IIC_fpCVTSI, "ftouizs", " $dst, $a",
+                 IIC_fpCVTSI, "vcvt", ".u32.f32\t$dst, $a",
                  [(set SPR:$dst, (arm_ftoui SPR:$a))]> {
   let Inst{7} = 1; // Z bit
 }
@@ -312,54 +332,54 @@ def FTOUIZS : AVConv1In<0b11101011, 0b1100, 0b1010,
 // FP FMA Operations.
 //
 
-def FMACD : ADbI<0b11100000, (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
-                IIC_fpMAC64, "fmacd", " $dst, $a, $b",
+def VMLAD : ADbI<0b11100000, (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
+                IIC_fpMAC64, "vmla", ".f64\t$dst, $a, $b",
                 [(set DPR:$dst, (fadd (fmul DPR:$a, DPR:$b), DPR:$dstin))]>,
                 RegConstraint<"$dstin = $dst">;
 
-def FMACS : ASbIn<0b11100000, (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
-                 IIC_fpMAC32, "fmacs", " $dst, $a, $b",
+def VMLAS : ASbIn<0b11100000, (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
+                 IIC_fpMAC32, "vmla", ".f32\t$dst, $a, $b",
                  [(set SPR:$dst, (fadd (fmul SPR:$a, SPR:$b), SPR:$dstin))]>,
                  RegConstraint<"$dstin = $dst">;
 
-def FMSCD : ADbI<0b11100001, (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
-                IIC_fpMAC64, "fmscd", " $dst, $a, $b",
+def VNMLSD : ADbI<0b11100001, (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
+                IIC_fpMAC64, "vnmls", ".f64\t$dst, $a, $b",
                 [(set DPR:$dst, (fsub (fmul DPR:$a, DPR:$b), DPR:$dstin))]>,
                 RegConstraint<"$dstin = $dst">;
 
-def FMSCS : ASbI<0b11100001, (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
-                IIC_fpMAC32, "fmscs", " $dst, $a, $b",
+def VNMLSS : ASbI<0b11100001, (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
+                IIC_fpMAC32, "vnmls", ".f32\t$dst, $a, $b",
                 [(set SPR:$dst, (fsub (fmul SPR:$a, SPR:$b), SPR:$dstin))]>,
                 RegConstraint<"$dstin = $dst">;
 
-def FNMACD : ADbI<0b11100000, (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
-                 IIC_fpMAC64, "fnmacd", " $dst, $a, $b",
+def VMLSD : ADbI<0b11100000, (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
+                 IIC_fpMAC64, "vmls", ".f64\t$dst, $a, $b",
              [(set DPR:$dst, (fadd (fneg (fmul DPR:$a, DPR:$b)), DPR:$dstin))]>,
                 RegConstraint<"$dstin = $dst"> {
   let Inst{6} = 1;
 }
 
-def FNMACS : ASbIn<0b11100000, (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
-                  IIC_fpMAC32, "fnmacs", " $dst, $a, $b",
+def VMLSS : ASbIn<0b11100000, (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
+                  IIC_fpMAC32, "vmls", ".f32\t$dst, $a, $b",
              [(set SPR:$dst, (fadd (fneg (fmul SPR:$a, SPR:$b)), SPR:$dstin))]>,
                 RegConstraint<"$dstin = $dst"> {
   let Inst{6} = 1;
 }
 
 def : Pat<(fsub DPR:$dstin, (fmul DPR:$a, DPR:$b)),
-          (FNMACD DPR:$dstin, DPR:$a, DPR:$b)>, Requires<[DontUseNEONForFP]>;
+          (VMLSD DPR:$dstin, DPR:$a, DPR:$b)>, Requires<[DontUseNEONForFP]>;
 def : Pat<(fsub SPR:$dstin, (fmul SPR:$a, SPR:$b)),
-          (FNMACS SPR:$dstin, SPR:$a, SPR:$b)>, Requires<[DontUseNEONForFP]>;
+          (VMLSS SPR:$dstin, SPR:$a, SPR:$b)>, Requires<[DontUseNEONForFP]>;
 
-def FNMSCD : ADbI<0b11100001, (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
-                 IIC_fpMAC64, "fnmscd", " $dst, $a, $b",
+def VNMLAD : ADbI<0b11100001, (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
+                 IIC_fpMAC64, "vnmla", ".f64\t$dst, $a, $b",
              [(set DPR:$dst, (fsub (fneg (fmul DPR:$a, DPR:$b)), DPR:$dstin))]>,
                 RegConstraint<"$dstin = $dst"> {
   let Inst{6} = 1;
 }
 
-def FNMSCS : ASbI<0b11100001, (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
-                IIC_fpMAC32, "fnmscs", " $dst, $a, $b",
+def VNMLAS : ASbI<0b11100001, (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
+                IIC_fpMAC32, "vnmla", ".f32\t$dst, $a, $b",
              [(set SPR:$dst, (fsub (fneg (fmul SPR:$a, SPR:$b)), SPR:$dstin))]>,
                 RegConstraint<"$dstin = $dst"> {
   let Inst{6} = 1;
@@ -369,27 +389,27 @@ def FNMSCS : ASbI<0b11100001, (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
 // FP Conditional moves.
 //
 
-def FCPYDcc  : ADuI<0b11101011, 0b0000, 0b0100,
+def VMOVDcc  : ADuI<0b11101011, 0b0000, 0b0100,
                     (outs DPR:$dst), (ins DPR:$false, DPR:$true),
-                    IIC_fpUNA64, "fcpyd", " $dst, $true",
+                    IIC_fpUNA64, "vmov", ".f64\t$dst, $true",
                 [/*(set DPR:$dst, (ARMcmov DPR:$false, DPR:$true, imm:$cc))*/]>,
                     RegConstraint<"$false = $dst">;
 
-def FCPYScc  : ASuI<0b11101011, 0b0000, 0b0100,
+def VMOVScc  : ASuI<0b11101011, 0b0000, 0b0100,
                     (outs SPR:$dst), (ins SPR:$false, SPR:$true),
-                    IIC_fpUNA32, "fcpys", " $dst, $true",
+                    IIC_fpUNA32, "vmov", ".f32\t$dst, $true",
                 [/*(set SPR:$dst, (ARMcmov SPR:$false, SPR:$true, imm:$cc))*/]>,
                     RegConstraint<"$false = $dst">;
 
-def FNEGDcc  : ADuI<0b11101011, 0b0001, 0b0100,
+def VNEGDcc  : ADuI<0b11101011, 0b0001, 0b0100,
                     (outs DPR:$dst), (ins DPR:$false, DPR:$true),
-                    IIC_fpUNA64, "fnegd", " $dst, $true",
+                    IIC_fpUNA64, "vneg", ".f64\t$dst, $true",
                 [/*(set DPR:$dst, (ARMcneg DPR:$false, DPR:$true, imm:$cc))*/]>,
                     RegConstraint<"$false = $dst">;
 
-def FNEGScc  : ASuI<0b11101011, 0b0001, 0b0100,
+def VNEGScc  : ASuI<0b11101011, 0b0001, 0b0100,
                     (outs SPR:$dst), (ins SPR:$false, SPR:$true),
-                    IIC_fpUNA32, "fnegs", " $dst, $true",
+                    IIC_fpUNA32, "vneg", ".f32\t$dst, $true",
                 [/*(set SPR:$dst, (ARMcneg SPR:$false, SPR:$true, imm:$cc))*/]>,
                     RegConstraint<"$false = $dst">;
 
@@ -398,8 +418,12 @@ def FNEGScc  : ASuI<0b11101011, 0b0001, 0b0100,
 // Misc.
 //
 
+// APSR is the application level alias of CPSR. This moves the FPSCR N, Z, C, V
+// flags to APSR.
 let Defs = [CPSR], Uses = [FPSCR] in
-def FMSTAT : VFPAI<(outs), (ins), VFPMiscFrm, IIC_fpSTAT, "fmstat", "", [(arm_fmstat)]> {
+def FMSTAT : VFPAI<(outs), (ins), VFPMiscFrm, IIC_fpSTAT, "vmrs",
+                   "\tapsr_nzcv, fpscr",
+             [(arm_fmstat)]> {
   let Inst{27-20} = 0b11101111;
   let Inst{19-16} = 0b0001;
   let Inst{15-12} = 0b1111;
@@ -407,3 +431,29 @@ def FMSTAT : VFPAI<(outs), (ins), VFPMiscFrm, IIC_fpSTAT, "fmstat", "", [(arm_fm
   let Inst{7}     = 0;
   let Inst{4}     = 1;
 }
+
+
+// Materialize FP immediates. VFP3 only.
+let isReMaterializable = 1 in {
+def FCONSTD : VFPAI<(outs DPR:$dst), (ins vfp_f64imm:$imm),
+                    VFPMiscFrm, IIC_VMOVImm,
+                    "vmov", ".f64\t$dst, $imm",
+                    [(set DPR:$dst, vfp_f64imm:$imm)]>, Requires<[HasVFP3]> {
+  let Inst{27-23} = 0b11101;
+  let Inst{21-20} = 0b11;
+  let Inst{11-9}  = 0b101;
+  let Inst{8}     = 1;
+  let Inst{7-4}   = 0b0000;
+}
+
+def FCONSTS : VFPAI<(outs SPR:$dst), (ins vfp_f32imm:$imm),
+                    VFPMiscFrm, IIC_VMOVImm,
+                    "vmov", ".f32\t$dst, $imm",
+                    [(set SPR:$dst, vfp_f32imm:$imm)]>, Requires<[HasVFP3]> {
+  let Inst{27-23} = 0b11101;
+  let Inst{21-20} = 0b11;
+  let Inst{11-9}  = 0b101;
+  let Inst{8}     = 0;
+  let Inst{7-4}   = 0b0000;
+}
+}
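
The renames above move the VFP definitions to UAL mnemonics, with each
assembly string split into a base opcode ("vabs") and a typed operand
suffix (".f64\t$dst, $a"). One plausible reading of the split -- an
assumption, not something the patch states -- is that it lets the
printer place a predicate between the two halves, as UAL requires
(e.g. "vabsgt.f64"). A standalone C++ illustration:

    #include <iostream>
    #include <string>

    // Hypothetical helper mirroring how a predicated UAL mnemonic could be
    // assembled from the two-part asm strings in the .td definitions above.
    static std::string ualMnemonic(const std::string &Base,
                                   const std::string &Pred,
                                   const std::string &TypeSuffix) {
      return Base + Pred + TypeSuffix;
    }

    int main() {
      std::cout << ualMnemonic("vabs", "", ".f64") << " d0, d1\n";   // vabs.f64
      std::cout << ualMnemonic("vabs", "gt", ".f64") << " d0, d1\n"; // vabsgt.f64
    }
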
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMJITInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMJITInfo.cpp
index 24990e6..aa50cfd 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMJITInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMJITInfo.cpp
@@ -139,7 +139,8 @@ ARMJITInfo::getLazyResolverFunction(JITCompilerFn F) {
 
 void *ARMJITInfo::emitGlobalValueIndirectSym(const GlobalValue *GV, void *Ptr,
                                              JITCodeEmitter &JCE) {
-  JCE.startGVStub(GV, 4, 4);
+  MachineCodeEmitter::BufferState BS;
+  JCE.startGVStub(BS, GV, 4, 4);
   intptr_t Addr = (intptr_t)JCE.getCurrentPCValue();
   if (!sys::Memory::setRangeWritable((void*)Addr, 4)) {
     llvm_unreachable("ERROR: Unable to mark indirect symbol writable");
@@ -148,19 +149,27 @@ void *ARMJITInfo::emitGlobalValueIndirectSym(const GlobalValue *GV, void *Ptr,
   if (!sys::Memory::setRangeExecutable((void*)Addr, 4)) {
     llvm_unreachable("ERROR: Unable to mark indirect symbol executable");
   }
-  void *PtrAddr = JCE.finishGVStub(GV);
+  void *PtrAddr = JCE.finishGVStub(BS);
   addIndirectSymAddr(Ptr, (intptr_t)PtrAddr);
   return PtrAddr;
 }
 
+TargetJITInfo::StubLayout ARMJITInfo::getStubLayout() {
+  // The stub contains up to three 4-byte instructions, aligned at 4 bytes, and a
+  // 4-byte address.  See emitFunctionStub for details.
+  StubLayout Result = {16, 4};
+  return Result;
+}
+
 void *ARMJITInfo::emitFunctionStub(const Function* F, void *Fn,
                                    JITCodeEmitter &JCE) {
+  void *Addr;
   // If this is just a call to an external function, emit a branch instead of a
   // call.  The code is the same except for one bit of the last instruction.
   if (Fn != (void*)(intptr_t)ARMCompilationCallback) {
     // Branch to the corresponding function addr.
     if (IsPIC) {
-      // The stub is 8-byte size and 4-aligned.
+      // The stub is 16 bytes in size and 4-byte aligned.
       intptr_t LazyPtr = getIndirectSymAddr(Fn);
       if (!LazyPtr) {
         // In PIC mode, the function stub is loading a lazy-ptr.
@@ -172,30 +181,30 @@ void *ARMJITInfo::emitFunctionStub(const Function* F, void *Fn,
                 errs() << "JIT: Stub emitted at [" << LazyPtr
                        << "] for external function at '" << Fn << "'\n");
       }
-      JCE.startGVStub(F, 16, 4);
-      intptr_t Addr = (intptr_t)JCE.getCurrentPCValue();
-      if (!sys::Memory::setRangeWritable((void*)Addr, 16)) {
+      JCE.emitAlignment(4);
+      Addr = (void*)JCE.getCurrentPCValue();
+      if (!sys::Memory::setRangeWritable(Addr, 16)) {
         llvm_unreachable("ERROR: Unable to mark stub writable");
       }
-      JCE.emitWordLE(0xe59fc004);            // ldr pc, [pc, #+4]
+      JCE.emitWordLE(0xe59fc004);            // ldr ip, [pc, #+4]
       JCE.emitWordLE(0xe08fc00c);            // L_func$scv: add ip, pc, ip
       JCE.emitWordLE(0xe59cf000);            // ldr pc, [ip]
-      JCE.emitWordLE(LazyPtr - (Addr+4+8));  // func - (L_func$scv+8)
-      sys::Memory::InvalidateInstructionCache((void*)Addr, 16);
-      if (!sys::Memory::setRangeExecutable((void*)Addr, 16)) {
+      JCE.emitWordLE(LazyPtr - (intptr_t(Addr)+4+8));  // func - (L_func$scv+8)
+      sys::Memory::InvalidateInstructionCache(Addr, 16);
+      if (!sys::Memory::setRangeExecutable(Addr, 16)) {
         llvm_unreachable("ERROR: Unable to mark stub executable");
       }
     } else {
       // The stub is 8 bytes in size and 4-byte aligned.
-      JCE.startGVStub(F, 8, 4);
-      intptr_t Addr = (intptr_t)JCE.getCurrentPCValue();
-      if (!sys::Memory::setRangeWritable((void*)Addr, 8)) {
+      JCE.emitAlignment(4);
+      Addr = (void*)JCE.getCurrentPCValue();
+      if (!sys::Memory::setRangeWritable(Addr, 8)) {
         llvm_unreachable("ERROR: Unable to mark stub writable");
       }
       JCE.emitWordLE(0xe51ff004);    // ldr pc, [pc, #-4]
       JCE.emitWordLE((intptr_t)Fn);  // addr of function
-      sys::Memory::InvalidateInstructionCache((void*)Addr, 8);
-      if (!sys::Memory::setRangeExecutable((void*)Addr, 8)) {
+      sys::Memory::InvalidateInstructionCache(Addr, 8);
+      if (!sys::Memory::setRangeExecutable(Addr, 8)) {
         llvm_unreachable("ERROR: Unable to mark stub executable");
       }
     }
@@ -207,9 +216,9 @@ void *ARMJITInfo::emitFunctionStub(const Function* F, void *Fn,
     //
     // Branch and link to the compilation callback.
     // The stub is 16 bytes in size and 4-byte aligned.
-    JCE.startGVStub(F, 16, 4);
-    intptr_t Addr = (intptr_t)JCE.getCurrentPCValue();
-    if (!sys::Memory::setRangeWritable((void*)Addr, 16)) {
+    JCE.emitAlignment(4);
+    Addr = (void*)JCE.getCurrentPCValue();
+    if (!sys::Memory::setRangeWritable(Addr, 16)) {
       llvm_unreachable("ERROR: Unable to mark stub writable");
     }
     // Save LR so the callback can determine which stub called it.
@@ -222,13 +231,13 @@ void *ARMJITInfo::emitFunctionStub(const Function* F, void *Fn,
     JCE.emitWordLE(0xe51ff004); // ldr pc, [pc, #-4]
     // The address of the compilation callback.
     JCE.emitWordLE((intptr_t)ARMCompilationCallback);
-    sys::Memory::InvalidateInstructionCache((void*)Addr, 16);
-    if (!sys::Memory::setRangeExecutable((void*)Addr, 16)) {
+    sys::Memory::InvalidateInstructionCache(Addr, 16);
+    if (!sys::Memory::setRangeExecutable(Addr, 16)) {
       llvm_unreachable("ERROR: Unable to mark stub executable");
     }
   }
 
-  return JCE.finishGVStub(F);
+  return Addr;
 }
 
 intptr_t ARMJITInfo::resolveRelocDestAddr(MachineRelocation *MR) const {
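
The PIC stub above stores its displacement word as LazyPtr - (Addr+4+8).
A standalone sketch of that arithmetic, assuming the usual ARM rule that
pc reads as the current instruction's address plus 8, and noting that the
word is resolved relative to the second instruction (L_func$scv at
Addr+4):

    #include <cstdint>
    #include <iostream>

    // Displacement stored in the stub's literal word; the "add ip, pc, ip"
    // at L_func$scv sees pc as its own address + 8, so the stored value
    // must subtract that skew.
    static int32_t stubDisplacement(intptr_t LazyPtr, intptr_t StubAddr) {
      return static_cast<int32_t>(LazyPtr - (StubAddr + 4 + 8));
    }

    int main() {
      // Stub at 0x1000, lazy pointer at 0x2000: the stored word is 0xff4.
      std::cout << std::hex << stubDisplacement(0x2000, 0x1000) << "\n";
    }
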
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMJITInfo.h b/libclamav/c++/llvm/lib/Target/ARM/ARMJITInfo.h
index 7dfeed8..ff332b7 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMJITInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMJITInfo.h
@@ -61,6 +61,10 @@ namespace llvm {
     virtual void *emitGlobalValueIndirectSym(const GlobalValue* GV, void *ptr,
                                             JITCodeEmitter &JCE);
 
+    /// getStubLayout - Returns the size and alignment of the largest call stub
+    /// on ARM.
+    virtual StubLayout getStubLayout();
+
     /// emitFunctionStub - Use the specified JITCodeEmitter object to emit a
     /// small native function that simply calls the function at the specified
     /// address.
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
index d2ec9ee..304d0ef 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
@@ -30,7 +30,6 @@
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetRegisterInfo.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/STLExtras.h"
@@ -42,8 +41,8 @@ using namespace llvm;
 
 STATISTIC(NumLDMGened , "Number of ldm instructions generated");
 STATISTIC(NumSTMGened , "Number of stm instructions generated");
-STATISTIC(NumFLDMGened, "Number of fldm instructions generated");
-STATISTIC(NumFSTMGened, "Number of fstm instructions generated");
+STATISTIC(NumVLDMGened, "Number of vldm instructions generated");
+STATISTIC(NumVSTMGened, "Number of vstm instructions generated");
 STATISTIC(NumLdStMoved, "Number of load / store instructions moved");
 STATISTIC(NumLDRDFormed,"Number of ldrd created before allocation");
 STATISTIC(NumSTRDFormed,"Number of strd created before allocation");
@@ -56,7 +55,7 @@ STATISTIC(NumSTRD2STR,  "Number of strd instructions turned back into str's");
 /// load / store instructions to form ldm / stm instructions.
 
 namespace {
-  struct VISIBILITY_HIDDEN ARMLoadStoreOpt : public MachineFunctionPass {
+  struct ARMLoadStoreOpt : public MachineFunctionPass {
     static char ID;
     ARMLoadStoreOpt() : MachineFunctionPass(&ID) {}
 
@@ -128,18 +127,18 @@ static int getLoadStoreMultipleOpcode(int Opcode) {
   case ARM::t2STRi12:
     NumSTMGened++;
     return ARM::t2STM;
-  case ARM::FLDS:
-    NumFLDMGened++;
-    return ARM::FLDMS;
-  case ARM::FSTS:
-    NumFSTMGened++;
-    return ARM::FSTMS;
-  case ARM::FLDD:
-    NumFLDMGened++;
-    return ARM::FLDMD;
-  case ARM::FSTD:
-    NumFSTMGened++;
-    return ARM::FSTMD;
+  case ARM::VLDRS:
+    NumVLDMGened++;
+    return ARM::VLDMS;
+  case ARM::VSTRS:
+    NumVSTMGened++;
+    return ARM::VSTMS;
+  case ARM::VLDRD:
+    NumVLDMGened++;
+    return ARM::VLDMD;
+  case ARM::VSTRD:
+    NumVSTMGened++;
+    return ARM::VSTMD;
   default: llvm_unreachable("Unhandled opcode!");
   }
   return 0;
@@ -230,8 +229,8 @@ ARMLoadStoreOpt::MergeOps(MachineBasicBlock &MBB,
     BaseKill = true;  // New base is always killed right after its use.
   }
 
-  bool isDPR = Opcode == ARM::FLDD || Opcode == ARM::FSTD;
-  bool isDef = isi32Load(Opcode) || Opcode == ARM::FLDS || Opcode == ARM::FLDD;
+  bool isDPR = Opcode == ARM::VLDRD || Opcode == ARM::VSTRD;
+  bool isDef = isi32Load(Opcode) || Opcode == ARM::VLDRS || Opcode == ARM::VLDRD;
   Opcode = getLoadStoreMultipleOpcode(Opcode);
   MachineInstrBuilder MIB = (isAM4)
     ? BuildMI(MBB, MBBI, dl, TII->get(Opcode))
@@ -374,27 +373,27 @@ static inline unsigned getLSMultipleTransferSize(MachineInstr *MI) {
   case ARM::t2LDRi12:
   case ARM::t2STRi8:
   case ARM::t2STRi12:
-  case ARM::FLDS:
-  case ARM::FSTS:
+  case ARM::VLDRS:
+  case ARM::VSTRS:
     return 4;
-  case ARM::FLDD:
-  case ARM::FSTD:
+  case ARM::VLDRD:
+  case ARM::VSTRD:
     return 8;
   case ARM::LDM:
   case ARM::STM:
   case ARM::t2LDM:
   case ARM::t2STM:
     return (MI->getNumOperands() - 5) * 4;
-  case ARM::FLDMS:
-  case ARM::FSTMS:
-  case ARM::FLDMD:
-  case ARM::FSTMD:
+  case ARM::VLDMS:
+  case ARM::VSTMS:
+  case ARM::VLDMD:
+  case ARM::VSTMD:
     return ARM_AM::getAM5Offset(MI->getOperand(1).getImm()) * 4;
   }
 }
 
 /// MergeBaseUpdateLSMultiple - Fold preceding/trailing inc/dec of base
-/// register into the LDM/STM/FLDM{D|S}/FSTM{D|S} op when possible:
+/// register into the LDM/STM/VLDM{D|S}/VSTM{D|S} op when possible:
 ///
 /// stmia rn, <ra, rb, rc>
 /// rn := rn + 4 * 3;
@@ -476,7 +475,7 @@ bool ARMLoadStoreOpt::MergeBaseUpdateLSMultiple(MachineBasicBlock &MBB,
       }
     }
   } else {
-    // FLDM{D|S}, FSTM{D|S} addressing mode 5 ops.
+    // VLDM{D|S}, VSTM{D|S} addressing mode 5 ops.
     if (ARM_AM::getAM5WBFlag(MI->getOperand(1).getImm()))
       return false;
 
@@ -518,10 +517,10 @@ static unsigned getPreIndexedLoadStoreOpcode(unsigned Opc) {
   switch (Opc) {
   case ARM::LDR: return ARM::LDR_PRE;
   case ARM::STR: return ARM::STR_PRE;
-  case ARM::FLDS: return ARM::FLDMS;
-  case ARM::FLDD: return ARM::FLDMD;
-  case ARM::FSTS: return ARM::FSTMS;
-  case ARM::FSTD: return ARM::FSTMD;
+  case ARM::VLDRS: return ARM::VLDMS;
+  case ARM::VLDRD: return ARM::VLDMD;
+  case ARM::VSTRS: return ARM::VSTMS;
+  case ARM::VSTRD: return ARM::VSTMD;
   case ARM::t2LDRi8:
   case ARM::t2LDRi12:
     return ARM::t2LDR_PRE;
@@ -537,10 +536,10 @@ static unsigned getPostIndexedLoadStoreOpcode(unsigned Opc) {
   switch (Opc) {
   case ARM::LDR: return ARM::LDR_POST;
   case ARM::STR: return ARM::STR_POST;
-  case ARM::FLDS: return ARM::FLDMS;
-  case ARM::FLDD: return ARM::FLDMD;
-  case ARM::FSTS: return ARM::FSTMS;
-  case ARM::FSTD: return ARM::FSTMD;
+  case ARM::VLDRS: return ARM::VLDMS;
+  case ARM::VLDRD: return ARM::VLDMD;
+  case ARM::VSTRS: return ARM::VSTMS;
+  case ARM::VSTRD: return ARM::VSTMD;
   case ARM::t2LDRi8:
   case ARM::t2LDRi12:
     return ARM::t2LDR_POST;
@@ -565,8 +564,8 @@ bool ARMLoadStoreOpt::MergeBaseUpdateLoadStore(MachineBasicBlock &MBB,
   unsigned Bytes = getLSMultipleTransferSize(MI);
   int Opcode = MI->getOpcode();
   DebugLoc dl = MI->getDebugLoc();
-  bool isAM5 = Opcode == ARM::FLDD || Opcode == ARM::FLDS ||
-    Opcode == ARM::FSTD || Opcode == ARM::FSTS;
+  bool isAM5 = Opcode == ARM::VLDRD || Opcode == ARM::VLDRS ||
+    Opcode == ARM::VSTRD || Opcode == ARM::VSTRS;
   bool isAM2 = Opcode == ARM::LDR || Opcode == ARM::STR;
   if (isAM2 && ARM_AM::getAM2Offset(MI->getOperand(3).getImm()) != 0)
     return false;
@@ -576,7 +575,7 @@ bool ARMLoadStoreOpt::MergeBaseUpdateLoadStore(MachineBasicBlock &MBB,
     if (MI->getOperand(2).getImm() != 0)
       return false;
 
-  bool isLd = isi32Load(Opcode) || Opcode == ARM::FLDS || Opcode == ARM::FLDD;
+  bool isLd = isi32Load(Opcode) || Opcode == ARM::VLDRS || Opcode == ARM::VLDRD;
   // Can't do the merge if the destination register is the same as the would-be
   // writeback register.
   if (isLd && MI->getOperand(0).getReg() == Base)
@@ -627,7 +626,7 @@ bool ARMLoadStoreOpt::MergeBaseUpdateLoadStore(MachineBasicBlock &MBB,
   if (!DoMerge)
     return false;
 
-  bool isDPR = NewOpc == ARM::FLDMD || NewOpc == ARM::FSTMD;
+  bool isDPR = NewOpc == ARM::VLDMD || NewOpc == ARM::VSTMD;
   unsigned Offset = 0;
   if (isAM5)
     Offset = ARM_AM::getAM5Opc((AddSub == ARM_AM::sub)
@@ -639,7 +638,7 @@ bool ARMLoadStoreOpt::MergeBaseUpdateLoadStore(MachineBasicBlock &MBB,
     Offset = AddSub == ARM_AM::sub ? -Bytes : Bytes;
   if (isLd) {
     if (isAM5)
-      // FLDMS, FLDMD
+      // VLDMS, VLDMD
       BuildMI(MBB, MBBI, dl, TII->get(NewOpc))
         .addReg(Base, getKillRegState(BaseKill))
         .addImm(Offset).addImm(Pred).addReg(PredReg)
@@ -658,7 +657,7 @@ bool ARMLoadStoreOpt::MergeBaseUpdateLoadStore(MachineBasicBlock &MBB,
   } else {
     MachineOperand &MO = MI->getOperand(0);
     if (isAM5)
-      // FSTMS, FSTMD
+      // VSTMS, VSTMD
       BuildMI(MBB, MBBI, dl, TII->get(NewOpc)).addReg(Base).addImm(Offset)
         .addImm(Pred).addReg(PredReg)
         .addReg(Base, getDefRegState(true)) // WB base register
@@ -688,11 +687,11 @@ static bool isMemoryOp(const MachineInstr *MI) {
   case ARM::LDR:
   case ARM::STR:
     return MI->getOperand(1).isReg() && MI->getOperand(2).getReg() == 0;
-  case ARM::FLDS:
-  case ARM::FSTS:
+  case ARM::VLDRS:
+  case ARM::VSTRS:
     return MI->getOperand(1).isReg();
-  case ARM::FLDD:
-  case ARM::FSTD:
+  case ARM::VLDRD:
+  case ARM::VSTRD:
     return MI->getOperand(1).isReg();
   case ARM::t2LDRi8:
   case ARM::t2LDRi12:
@@ -867,6 +866,13 @@ bool ARMLoadStoreOpt::FixInvalidRegPairOp(MachineBasicBlock &MBB,
                       BaseReg, BaseKill, BaseUndef, OffReg, OffKill, OffUndef,
                       Pred, PredReg, TII, isT2);
       } else {
+        if (OddReg == EvenReg && EvenDeadKill) {
+          // If the two source operands are the same, the kill marker is probably
+          // on the first one. e.g.
+          // t2STRDi8 %R5<kill>, %R5, %R9<kill>, 0, 14, %reg0
+          EvenDeadKill = false;
+          OddDeadKill = true;
+        }
         InsertLDR_STR(MBB, MBBI, OffImm, isLd, dl, NewOpc,
                       EvenReg, EvenDeadKill, EvenUndef,
                       BaseReg, false, BaseUndef, OffReg, false, OffUndef,
@@ -974,6 +980,9 @@ bool ARMLoadStoreOpt::LoadStoreMultipleOpti(MachineBasicBlock &MBB) {
     if (Advance) {
       ++Position;
       ++MBBI;
+      if (MBBI == E)
+        // Reached the end of the block; try merging the memory instructions.
+        TryMerge = true;
     } else
       TryMerge = true;
 
@@ -1103,7 +1112,7 @@ bool ARMLoadStoreOpt::runOnMachineFunction(MachineFunction &Fn) {
 /// likely they will be combined later.
 
 namespace {
-  struct VISIBILITY_HIDDEN ARMPreAllocLoadStoreOpt : public MachineFunctionPass{
+  struct ARMPreAllocLoadStoreOpt : public MachineFunctionPass{
     static char ID;
     ARMPreAllocLoadStoreOpt() : MachineFunctionPass(&ID) {}
 
@@ -1212,7 +1221,7 @@ ARMPreAllocLoadStoreOpt::CanFormLdStDWord(MachineInstr *Op0, MachineInstr *Op1,
   if (!STI->hasV5TEOps())
     return false;
 
-  // FIXME: FLDS / FSTS -> FLDD / FSTD
+  // FIXME: VLDRS / VSTRS -> VLDRD / VSTRD
   unsigned Scale = 1;
   unsigned Opcode = Op0->getOpcode();
   if (Opcode == ARM::LDR)
@@ -1454,7 +1463,7 @@ ARMPreAllocLoadStoreOpt::RescheduleLoadStoreInstrs(MachineBasicBlock *MBB) {
         continue;
 
       int Opc = MI->getOpcode();
-      bool isLd = isi32Load(Opc) || Opc == ARM::FLDS || Opc == ARM::FLDD;
+      bool isLd = isi32Load(Opc) || Opc == ARM::VLDRS || Opc == ARM::VLDRD;
       unsigned Base = MI->getOperand(1).getReg();
       int Offset = getMemoryOpOffset(MI);
 
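MergeBaseUpdateLSMultiple, per its comment above, folds a preceding or
trailing inc/dec of the base register into the LDM/STM writeback form.
A minimal standalone sketch of the legality check, assuming 4 bytes per
32-bit register (simplified: the real pass also checks predication and
the addressing-mode flags):

    #include <iostream>

    // stmia rn, {ra,rb,rc}; rn := rn + 12  ==>  stmia rn!, {ra,rb,rc}
    // is only sound when the increment matches the bytes transferred.
    static bool canFoldWriteback(unsigned NumRegs, int IncBytes) {
      return IncBytes == static_cast<int>(NumRegs * 4);
    }

    int main() {
      std::cout << canFoldWriteback(3, 12) << "\n"; // 1: fold into writeback
      std::cout << canFoldWriteback(3, 8) << "\n";  // 0: leave the add alone
    }
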
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMMachineFunctionInfo.h b/libclamav/c++/llvm/lib/Target/ARM/ARMMachineFunctionInfo.h
index 7ae287d..2176b27 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMMachineFunctionInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMMachineFunctionInfo.h
@@ -52,10 +52,6 @@ class ARMFunctionInfo : public MachineFunctionInfo {
   /// enable far jump.
   bool LRSpilledForFarJump;
 
-  /// R3IsLiveIn - True if R3 is live in to this function.
-  /// FIXME: Remove when register scavenger for Thumb is done.
-  bool R3IsLiveIn;
-
   /// FramePtrSpillOffset - If HasStackFrame, this records the frame pointer
   /// spill stack offset.
   unsigned FramePtrSpillOffset;
@@ -100,7 +96,7 @@ public:
     hasThumb2(false),
     Align(2U),
     VarArgsRegSaveSize(0), HasStackFrame(false),
-    LRSpilledForFarJump(false), R3IsLiveIn(false),
+    LRSpilledForFarJump(false),
     FramePtrSpillOffset(0), GPRCS1Offset(0), GPRCS2Offset(0), DPRCSOffset(0),
     GPRCS1Size(0), GPRCS2Size(0), DPRCSSize(0),
     GPRCS1Frames(0), GPRCS2Frames(0), DPRCSFrames(0),
@@ -111,7 +107,7 @@ public:
     hasThumb2(MF.getTarget().getSubtarget<ARMSubtarget>().hasThumb2()),
     Align(isThumb ? 1U : 2U),
     VarArgsRegSaveSize(0), HasStackFrame(false),
-    LRSpilledForFarJump(false), R3IsLiveIn(false),
+    LRSpilledForFarJump(false),
     FramePtrSpillOffset(0), GPRCS1Offset(0), GPRCS2Offset(0), DPRCSOffset(0),
     GPRCS1Size(0), GPRCS2Size(0), DPRCSSize(0),
     GPRCS1Frames(32), GPRCS2Frames(32), DPRCSFrames(32),
@@ -134,10 +130,6 @@ public:
   bool isLRSpilledForFarJump() const { return LRSpilledForFarJump; }
   void setLRIsSpilledForFarJump(bool s) { LRSpilledForFarJump = s; }
 
-  // FIXME: Remove when register scavenger for Thumb is done.
-  bool isR3LiveIn() const { return R3IsLiveIn; }
-  void setR3IsLiveIn(bool l) { R3IsLiveIn = l; }
-
   unsigned getFramePtrSpillOffset() const { return FramePtrSpillOffset; }
   void setFramePtrSpillOffset(unsigned o) { FramePtrSpillOffset = o; }
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMRegisterInfo.h b/libclamav/c++/llvm/lib/Target/ARM/ARMRegisterInfo.h
index 8edfb9a..041afd0 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMRegisterInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMRegisterInfo.h
@@ -23,6 +23,16 @@ namespace llvm {
   class ARMBaseInstrInfo;
   class Type;
 
+namespace ARM {
+  /// SubregIndex - The index of various subregister classes. Note that 
+  /// these indices must be kept in sync with the class indices in the 
+  /// ARMRegisterInfo.td file.
+  enum SubregIndex {
+    SSUBREG_0 = 1, SSUBREG_1 = 2, SSUBREG_2 = 3, SSUBREG_3 = 4,
+    DSUBREG_0 = 5, DSUBREG_1 = 6
+  };
+}
+
 struct ARMRegisterInfo : public ARMBaseRegisterInfo {
 public:
   ARMRegisterInfo(const ARMBaseInstrInfo &tii, const ARMSubtarget &STI);
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMRegisterInfo.td b/libclamav/c++/llvm/lib/Target/ARM/ARMRegisterInfo.td
index 20a7355..d393e8d 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMRegisterInfo.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMRegisterInfo.td
@@ -129,9 +129,6 @@ def GPR : RegisterClass<"ARM", [i32], 32, [R0, R1, R2, R3, R4, R5, R6,
     iterator allocation_order_begin(const MachineFunction &MF) const;
     iterator allocation_order_end(const MachineFunction &MF) const;
   }];
-  // FIXME: We are reserving r12 in case the PEI needs to use it to
-  // generate large stack offset. Make it available once we have register
-  // scavenging. Similarly r3 is reserved in Thumb mode for now.
   let MethodBodies = [{
     // FP is R11, R9 is available.
     static const unsigned ARM_GPR_AO_1[] = {
@@ -169,10 +166,20 @@ def GPR : RegisterClass<"ARM", [i32], 32, [R0, R1, R2, R3, R4, R5, R6,
       ARM::R4, ARM::R5, ARM::R6,
       ARM::R8, ARM::R9, ARM::R10,ARM::R11,ARM::R7 };
 
+    // For Thumb1 mode, we don't want to allocate hi regs at all, as we
+    // don't know how to spill them. If we make our prologue/epilogue code
+    // smarter at some point, we can go back to using the above allocation
+    // orders for the Thumb1 instructions that know how to use hi regs.
+    static const unsigned THUMB_GPR_AO[] = {
+      ARM::R0, ARM::R1, ARM::R2, ARM::R3,
+      ARM::R4, ARM::R5, ARM::R6, ARM::R7 };
+
     GPRClass::iterator
     GPRClass::allocation_order_begin(const MachineFunction &MF) const {
       const TargetMachine &TM = MF.getTarget();
       const ARMSubtarget &Subtarget = TM.getSubtarget<ARMSubtarget>();
+      if (Subtarget.isThumb1Only())
+        return THUMB_GPR_AO;
       if (Subtarget.isTargetDarwin()) {
         if (Subtarget.isR9Reserved())
           return ARM_GPR_AO_4;
@@ -195,6 +202,12 @@ def GPR : RegisterClass<"ARM", [i32], 32, [R0, R1, R2, R3, R4, R5, R6,
       const ARMSubtarget &Subtarget = TM.getSubtarget<ARMSubtarget>();
       GPRClass::iterator I;
 
+      if (Subtarget.isThumb1Only()) {
+        I = THUMB_GPR_AO + (sizeof(THUMB_GPR_AO)/sizeof(unsigned));
+        // Mac OS X requires FP not to be clobbered for backtracing purposes.
+        return (Subtarget.isTargetDarwin() || RI->hasFP(MF)) ? I-1 : I;
+      }
+
       if (Subtarget.isTargetDarwin()) {
         if (Subtarget.isR9Reserved())
           I = ARM_GPR_AO_4 + (sizeof(ARM_GPR_AO_4)/sizeof(unsigned));
@@ -222,12 +235,9 @@ def tGPR : RegisterClass<"ARM", [i32], 32, [R0, R1, R2, R3, R4, R5, R6, R7]> {
     iterator allocation_order_begin(const MachineFunction &MF) const;
     iterator allocation_order_end(const MachineFunction &MF) const;
   }];
-  // FIXME: We are reserving r3 in Thumb mode in case the PEI needs to use it
-  // to generate large stack offset. Make it available once we have register
-  // scavenging.
   let MethodBodies = [{
     static const unsigned THUMB_tGPR_AO[] = {
-      ARM::R0, ARM::R1, ARM::R2,
+      ARM::R0, ARM::R1, ARM::R2, ARM::R3,
       ARM::R4, ARM::R5, ARM::R6, ARM::R7 };
 
     // FP is R7, only low registers available.
@@ -319,7 +329,7 @@ def DPR : RegisterClass<"ARM", [f64, v8i8, v4i16, v2i32, v1i64, v2f32], 64,
 
 // Subset of DPR that are accessible with VFP2 (and so that also have
 // 32-bit SPR subregs).
-def DPR_VFP2 : RegisterClass<"ARM", [f64, v2i32, v2f32], 64,
+def DPR_VFP2 : RegisterClass<"ARM", [f64, v8i8, v4i16, v2i32, v1i64, v2f32], 64,
                              [D0,  D1,  D2,  D3,  D4,  D5,  D6,  D7,
                               D8,  D9,  D10, D11, D12, D13, D14, D15]> {
   let SubRegClassList = [SPR, SPR];
@@ -327,7 +337,7 @@ def DPR_VFP2 : RegisterClass<"ARM", [f64, v2i32, v2f32], 64,
 
 // Subset of DPR which can be used as a source of NEON scalars for 16-bit
 // operations
-def DPR_8 : RegisterClass<"ARM", [f64, v4i16, v2f32], 64,
+def DPR_8 : RegisterClass<"ARM", [f64, v8i8, v4i16, v2i32, v1i64, v2f32], 64,
                           [D0,  D1,  D2,  D3,  D4,  D5,  D6,  D7]> {
   let SubRegClassList = [SPR_8, SPR_8];
 }
@@ -347,6 +357,13 @@ def QPR_VFP2 : RegisterClass<"ARM", [v16i8, v8i16, v4i32, v2i64, v4f32, v2f64],
   let SubRegClassList = [SPR, SPR, SPR, SPR, DPR_VFP2, DPR_VFP2];
 }
 
+// Subset of QPR that have DPR_8 and SPR_8 subregs.
+def QPR_8 : RegisterClass<"ARM", [v16i8, v8i16, v4i32, v2i64, v4f32, v2f64],
+                           128,
+                           [Q0,  Q1,  Q2,  Q3]> {
+  let SubRegClassList = [SPR_8, SPR_8, SPR_8, SPR_8, DPR_8, DPR_8];
+}
+
 // Condition code registers.
 def CCR : RegisterClass<"ARM", [i32], 32, [CPSR]>;
 
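The Thumb1 allocation-order bodies above reserve the frame pointer by
returning an end iterator one element short of the table. A standalone
sketch of that pointer trick (plain numbers stand in for the ARM::R*
enumerators):

    #include <iostream>

    static const unsigned ThumbOrder[] = {0, 1, 2, 3, 4, 5, 6, 7}; // R0..R7

    // Returning End - 1 hides the last entry (R7, the Thumb frame pointer)
    // from the allocator without duplicating the table.
    static const unsigned *allocOrderEnd(bool ReserveFP) {
      const unsigned *End =
          ThumbOrder + sizeof(ThumbOrder) / sizeof(ThumbOrder[0]);
      return ReserveFP ? End - 1 : End;
    }

    int main() {
      std::cout << allocOrderEnd(true) - ThumbOrder << "\n";  // 7: R7 reserved
      std::cout << allocOrderEnd(false) - ThumbOrder << "\n"; // 8: all usable
    }
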
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMScheduleV6.td b/libclamav/c++/llvm/lib/Target/ARM/ARMScheduleV6.td
index 1ace718..0fef466 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMScheduleV6.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMScheduleV6.td
@@ -11,4 +11,190 @@
 //
 //===----------------------------------------------------------------------===//
 
-// TODO: Add model for an ARM11
+// Model based on ARM1176
+//
+// Scheduling information derived from "ARM1176JZF-S Technical Reference Manual".
+//
+def ARMV6Itineraries : ProcessorItineraries<[
+  //
+  // No operand cycles
+  InstrItinData<IIC_iALUx    , [InstrStage<1, [FU_Pipe0]>]>,
+  //
+  // Binary Instructions that produce a result
+  InstrItinData<IIC_iALUi    , [InstrStage<1, [FU_Pipe0]>], [2, 2]>,
+  InstrItinData<IIC_iALUr    , [InstrStage<1, [FU_Pipe0]>], [2, 2, 2]>,
+  InstrItinData<IIC_iALUsi   , [InstrStage<1, [FU_Pipe0]>], [2, 2, 1]>,
+  InstrItinData<IIC_iALUsr   , [InstrStage<2, [FU_Pipe0]>], [3, 3, 2, 1]>,
+  //
+  // Unary Instructions that produce a result
+  InstrItinData<IIC_iUNAr    , [InstrStage<1, [FU_Pipe0]>], [2, 2]>,
+  InstrItinData<IIC_iUNAsi   , [InstrStage<1, [FU_Pipe0]>], [2, 1]>,
+  InstrItinData<IIC_iUNAsr   , [InstrStage<2, [FU_Pipe0]>], [3, 2, 1]>,
+  //
+  // Compare instructions
+  InstrItinData<IIC_iCMPi    , [InstrStage<1, [FU_Pipe0]>], [2]>,
+  InstrItinData<IIC_iCMPr    , [InstrStage<1, [FU_Pipe0]>], [2, 2]>,
+  InstrItinData<IIC_iCMPsi   , [InstrStage<1, [FU_Pipe0]>], [2, 1]>,
+  InstrItinData<IIC_iCMPsr   , [InstrStage<2, [FU_Pipe0]>], [3, 2, 1]>,
+  //
+  // Move instructions, unconditional
+  InstrItinData<IIC_iMOVi    , [InstrStage<1, [FU_Pipe0]>], [2]>,
+  InstrItinData<IIC_iMOVr    , [InstrStage<1, [FU_Pipe0]>], [2, 2]>,
+  InstrItinData<IIC_iMOVsi   , [InstrStage<1, [FU_Pipe0]>], [2, 1]>,
+  InstrItinData<IIC_iMOVsr   , [InstrStage<2, [FU_Pipe0]>], [3, 2, 1]>,
+  //
+  // Move instructions, conditional
+  InstrItinData<IIC_iCMOVi   , [InstrStage<1, [FU_Pipe0]>], [3]>,
+  InstrItinData<IIC_iCMOVr   , [InstrStage<1, [FU_Pipe0]>], [3, 2]>,
+  InstrItinData<IIC_iCMOVsi  , [InstrStage<1, [FU_Pipe0]>], [3, 1]>,
+  InstrItinData<IIC_iCMOVsr  , [InstrStage<1, [FU_Pipe0]>], [4, 2, 1]>,
+
+  // Integer multiply pipeline
+  //
+  InstrItinData<IIC_iMUL16   , [InstrStage<1, [FU_Pipe0]>], [4, 1, 1]>,
+  InstrItinData<IIC_iMAC16   , [InstrStage<1, [FU_Pipe0]>], [4, 1, 1, 2]>,
+  InstrItinData<IIC_iMUL32   , [InstrStage<2, [FU_Pipe0]>], [5, 1, 1]>,
+  InstrItinData<IIC_iMAC32   , [InstrStage<2, [FU_Pipe0]>], [5, 1, 1, 2]>,
+  InstrItinData<IIC_iMUL64   , [InstrStage<3, [FU_Pipe0]>], [6, 1, 1]>,
+  InstrItinData<IIC_iMAC64   , [InstrStage<3, [FU_Pipe0]>], [6, 1, 1, 2]>,
+  
+  // Integer load pipeline
+  //
+  // Immediate offset
+  InstrItinData<IIC_iLoadi   , [InstrStage<1, [FU_Pipe0]>], [4, 1]>,
+  //
+  // Register offset
+  InstrItinData<IIC_iLoadr   , [InstrStage<1, [FU_Pipe0]>], [4, 1, 1]>,
+  //
+  // Scaled register offset, issues over 2 cycles
+  InstrItinData<IIC_iLoadsi  , [InstrStage<2, [FU_Pipe0]>], [5, 2, 1]>,
+  //
+  // Immediate offset with update
+  InstrItinData<IIC_iLoadiu  , [InstrStage<1, [FU_Pipe0]>], [4, 2, 1]>,
+  //
+  // Register offset with update
+  InstrItinData<IIC_iLoadru  , [InstrStage<1, [FU_Pipe0]>], [4, 2, 1, 1]>,
+  //
+  // Scaled register offset with update, issues over 2 cycles
+  InstrItinData<IIC_iLoadsiu , [InstrStage<2, [FU_Pipe0]>], [5, 2, 2, 1]>,
+
+  //
+  // Load multiple
+  InstrItinData<IIC_iLoadm   , [InstrStage<3, [FU_Pipe0]>]>,
+
+  // Integer store pipeline
+  //
+  // Immediate offset
+  InstrItinData<IIC_iStorei  , [InstrStage<1, [FU_Pipe0]>], [2, 1]>,
+  //
+  // Register offset
+  InstrItinData<IIC_iStorer  , [InstrStage<1, [FU_Pipe0]>], [2, 1, 1]>,
+
+  //
+  // Scaled register offset, issues over 2 cycles
+  InstrItinData<IIC_iStoresi , [InstrStage<2, [FU_Pipe0]>], [2, 2, 1]>,
+  //
+  // Immediate offset with update
+  InstrItinData<IIC_iStoreiu , [InstrStage<1, [FU_Pipe0]>], [2, 2, 1]>,
+  //
+  // Register offset with update
+  InstrItinData<IIC_iStoreru , [InstrStage<1, [FU_Pipe0]>], [2, 2, 1, 1]>,
+  //
+  // Scaled register offset with update, issues over 2 cycles
+  InstrItinData<IIC_iStoresiu, [InstrStage<2, [FU_Pipe0]>], [2, 2, 2, 1]>,
+  //
+  // Store multiple
+  InstrItinData<IIC_iStorem   , [InstrStage<3, [FU_Pipe0]>]>,
+  
+  // Branch
+  //
+  // no delay slots, so the latency of a branch is unimportant
+  InstrItinData<IIC_Br      , [InstrStage<1, [FU_Pipe0]>]>,
+
+  // VFP
+  // Issue through integer pipeline, and execute in NEON unit. We assume
+  // RunFast mode so that NFP pipeline is used for single-precision when
+  // possible.
+  //
+  // FP Special Register to Integer Register File Move
+  InstrItinData<IIC_fpSTAT , [InstrStage<1, [FU_Pipe0]>], [3]>,
+  //
+  // Single-precision FP Unary
+  InstrItinData<IIC_fpUNA32 , [InstrStage<1, [FU_Pipe0]>], [5, 2]>,
+  //
+  // Double-precision FP Unary
+  InstrItinData<IIC_fpUNA64 , [InstrStage<1, [FU_Pipe0]>], [5, 2]>,
+  //
+  // Single-precision FP Compare
+  InstrItinData<IIC_fpCMP32 , [InstrStage<1, [FU_Pipe0]>], [2, 2]>,
+  //
+  // Double-precision FP Compare
+  InstrItinData<IIC_fpCMP64 , [InstrStage<1, [FU_Pipe0]>], [2, 2]>,
+  //
+  // Single to Double FP Convert
+  InstrItinData<IIC_fpCVTSD , [InstrStage<1, [FU_Pipe0]>], [5, 2]>,
+  //
+  // Double to Single FP Convert
+  InstrItinData<IIC_fpCVTDS , [InstrStage<1, [FU_Pipe0]>], [5, 2]>,
+  //
+  // Single-Precision FP to Integer Convert
+  InstrItinData<IIC_fpCVTSI , [InstrStage<1, [FU_Pipe0]>], [9, 2]>,
+  //
+  // Double-Precision FP to Integer Convert
+  InstrItinData<IIC_fpCVTDI , [InstrStage<1, [FU_Pipe0]>], [9, 2]>,
+  //
+  // Integer to Single-Precision FP Convert
+  InstrItinData<IIC_fpCVTIS , [InstrStage<1, [FU_Pipe0]>], [9, 2]>,
+  //
+  // Integer to Double-Precision FP Convert
+  InstrItinData<IIC_fpCVTID , [InstrStage<1, [FU_Pipe0]>], [9, 2]>,
+  //
+  // Single-precision FP ALU
+  InstrItinData<IIC_fpALU32 , [InstrStage<1, [FU_Pipe0]>], [9, 2, 2]>,
+  //
+  // Double-precision FP ALU
+  InstrItinData<IIC_fpALU64 , [InstrStage<1, [FU_Pipe0]>], [9, 2, 2]>,
+  //
+  // Single-precision FP Multiply
+  InstrItinData<IIC_fpMUL32 , [InstrStage<1, [FU_Pipe0]>], [9, 2, 2]>,
+  //
+  // Double-precision FP Multiply
+  InstrItinData<IIC_fpMUL64 , [InstrStage<2, [FU_Pipe0]>], [9, 2, 2]>,
+  //
+  // Single-precision FP MAC
+  InstrItinData<IIC_fpMAC32 , [InstrStage<1, [FU_Pipe0]>], [9, 2, 2, 2]>,
+  //
+  // Double-precision FP MAC
+  InstrItinData<IIC_fpMAC64 , [InstrStage<2, [FU_Pipe0]>], [9, 2, 2, 2]>,
+  //
+  // Single-precision FP DIV
+  InstrItinData<IIC_fpDIV32 , [InstrStage<15, [FU_Pipe0]>], [20, 2, 2]>,
+  //
+  // Double-precision FP DIV
+  InstrItinData<IIC_fpDIV64 , [InstrStage<29, [FU_Pipe0]>], [34, 2, 2]>,
+  //
+  // Single-precision FP SQRT
+  InstrItinData<IIC_fpSQRT32 , [InstrStage<15, [FU_Pipe0]>], [20, 2, 2]>,
+  //
+  // Double-precision FP SQRT
+  InstrItinData<IIC_fpSQRT64 , [InstrStage<29, [FU_Pipe0]>], [34, 2, 2]>,
+  //
+  // Single-precision FP Load
+  InstrItinData<IIC_fpLoad32 , [InstrStage<1, [FU_Pipe0]>], [5, 2, 2]>,
+  //
+  // Double-precision FP Load
+  InstrItinData<IIC_fpLoad64 , [InstrStage<1, [FU_Pipe0]>], [5, 2, 2]>,
+  //
+  // FP Load Multiple
+  InstrItinData<IIC_fpLoadm , [InstrStage<3, [FU_Pipe0]>]>,
+  //
+  // Single-precision FP Store
+  InstrItinData<IIC_fpStore32 , [InstrStage<1, [FU_Pipe0]>], [2, 2, 2]>,
+  //
+  // Double-precision FP Store
+  // use FU_Issue to enforce the 1 load/store per cycle limit
+  InstrItinData<IIC_fpStore64 , [InstrStage<1, [FU_Pipe0]>], [2, 2, 2]>,
+  //
+  // FP Store Multiple
+  InstrItinData<IIC_fpStorem , [InstrStage<3, [FU_Pipe0]>]>
+]>;
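
One way to read the itinerary entries above -- an assumption about the
format, not something stated in the patch -- is that the trailing list
gives per-operand cycle counts, with operand 0 the result. Under that
reading, a consumer stalls by the gap between the producer's def cycle
and its own operand-read cycle; a standalone sketch:

    #include <algorithm>
    #include <iostream>

    static int stallCycles(int ProducerDefCycle, int ConsumerReadCycle) {
      return std::max(0, ProducerDefCycle - ConsumerReadCycle);
    }

    int main() {
      // IIC_iMUL32 defines its result at cycle 5; an ALU consumer reading
      // at cycle 2 (see [2, 2, 2] above) would stall roughly 3 cycles.
      std::cout << stallCycles(5, 2) << "\n";
    }
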
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMScheduleV7.td b/libclamav/c++/llvm/lib/Target/ARM/ARMScheduleV7.td
index e565813..bbbf413 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMScheduleV7.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMScheduleV7.td
@@ -180,26 +180,26 @@ def CortexA8Itineraries : ProcessorItineraries<[
   // Double-precision FP Unary
   InstrItinData<IIC_fpUNA64 , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
                                InstrStage<4, [FU_NPipe], 0>,
-                               InstrStage<4, [FU_NLSPipe]>]>,
+                               InstrStage<4, [FU_NLSPipe]>], [4, 1]>,
   //
   // Single-precision FP Compare
   InstrItinData<IIC_fpCMP32 , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
-                               InstrStage<1, [FU_NPipe]>], [7, 1]>,
+                               InstrStage<1, [FU_NPipe]>], [1, 1]>,
   //
   // Double-precision FP Compare
   InstrItinData<IIC_fpCMP64 , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
                                InstrStage<4, [FU_NPipe], 0>,
-                               InstrStage<4, [FU_NLSPipe]>]>,
+                               InstrStage<4, [FU_NLSPipe]>], [4, 1]>,
   //
   // Single to Double FP Convert
   InstrItinData<IIC_fpCVTSD , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
                                InstrStage<7, [FU_NPipe], 0>,
-                               InstrStage<7, [FU_NLSPipe]>]>,
+                               InstrStage<7, [FU_NLSPipe]>], [7, 1]>,
   //
   // Double to Single FP Convert
   InstrItinData<IIC_fpCVTDS , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
                                InstrStage<5, [FU_NPipe], 0>,
-                               InstrStage<5, [FU_NLSPipe]>]>,
+                               InstrStage<5, [FU_NLSPipe]>], [5, 1]>,
   //
   // Single-Precision FP to Integer Convert
   InstrItinData<IIC_fpCVTSI , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
@@ -208,7 +208,7 @@ def CortexA8Itineraries : ProcessorItineraries<[
   // Double-Precision FP to Integer Convert
   InstrItinData<IIC_fpCVTDI , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
                                InstrStage<8, [FU_NPipe], 0>,
-                               InstrStage<8, [FU_NLSPipe]>]>,
+                               InstrStage<8, [FU_NLSPipe]>], [8, 1]>,
   //
   // Integer to Single-Precision FP Convert
   InstrItinData<IIC_fpCVTIS , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
@@ -217,54 +217,54 @@ def CortexA8Itineraries : ProcessorItineraries<[
   // Integer to Double-Precision FP Convert
   InstrItinData<IIC_fpCVTID , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
                                InstrStage<8, [FU_NPipe], 0>,
-                               InstrStage<8, [FU_NLSPipe]>]>,
+                               InstrStage<8, [FU_NLSPipe]>], [8, 1]>,
   //
   // Single-precision FP ALU
   InstrItinData<IIC_fpALU32 , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
-                               InstrStage<1, [FU_NPipe]>], [7, 1]>,
+                               InstrStage<1, [FU_NPipe]>], [7, 1, 1]>,
   //
   // Double-precision FP ALU
   InstrItinData<IIC_fpALU64 , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
                                InstrStage<9, [FU_NPipe], 0>,
-                               InstrStage<9, [FU_NLSPipe]>]>,
+                               InstrStage<9, [FU_NLSPipe]>], [9, 1, 1]>,
   //
   // Single-precision FP Multiply
   InstrItinData<IIC_fpMUL32 , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
-                               InstrStage<1, [FU_NPipe]>], [7, 1]>,
+                               InstrStage<1, [FU_NPipe]>], [7, 1, 1]>,
   //
   // Double-precision FP Multiply
   InstrItinData<IIC_fpMUL64 , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
                                InstrStage<11, [FU_NPipe], 0>,
-                               InstrStage<11, [FU_NLSPipe]>]>,
+                               InstrStage<11, [FU_NLSPipe]>], [11, 1, 1]>,
   //
   // Single-precision FP MAC
   InstrItinData<IIC_fpMAC32 , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
-                               InstrStage<1, [FU_NPipe]>], [7, 1]>,
+                               InstrStage<1, [FU_NPipe]>], [7, 2, 1, 1]>,
   //
   // Double-precision FP MAC
   InstrItinData<IIC_fpMAC64 , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
                                InstrStage<19, [FU_NPipe], 0>,
-                               InstrStage<19, [FU_NLSPipe]>]>,
+                               InstrStage<19, [FU_NLSPipe]>], [19, 2, 1, 1]>,
   //
   // Single-precision FP DIV
   InstrItinData<IIC_fpDIV32 , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
                                InstrStage<20, [FU_NPipe], 0>,
-                               InstrStage<20, [FU_NLSPipe]>]>,
+                               InstrStage<20, [FU_NLSPipe]>], [20, 1, 1]>,
   //
   // Double-precision FP DIV
   InstrItinData<IIC_fpDIV64 , [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
                                InstrStage<29, [FU_NPipe], 0>,
-                               InstrStage<29, [FU_NLSPipe]>]>,
+                               InstrStage<29, [FU_NLSPipe]>], [29, 1, 1]>,
   //
   // Single-precision FP SQRT
   InstrItinData<IIC_fpSQRT32, [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
                                InstrStage<19, [FU_NPipe], 0>,
-                               InstrStage<19, [FU_NLSPipe]>]>,
+                               InstrStage<19, [FU_NLSPipe]>], [19, 1]>,
   //
   // Double-precision FP SQRT
   InstrItinData<IIC_fpSQRT64, [InstrStage<1, [FU_Pipe0, FU_Pipe1]>,
                                InstrStage<29, [FU_NPipe], 0>,
-                               InstrStage<29, [FU_NLSPipe]>]>,
+                               InstrStage<29, [FU_NLSPipe]>], [29, 1]>,
   //
   // Single-precision FP Load
   // use FU_Issue to enforce the 1 load/store per cycle limit
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.cpp
index cf1ee3f..d6b072b 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.cpp
@@ -16,6 +16,7 @@
 #include "llvm/GlobalValue.h"
 #include "llvm/Target/TargetOptions.h"
 #include "llvm/Support/CommandLine.h"
+#include "llvm/ADT/SmallVector.h"
 using namespace llvm;
 
 static cl::opt<bool>
@@ -26,15 +27,20 @@ UseNEONFP("arm-use-neon-fp",
           cl::desc("Use NEON for single-precision FP"),
           cl::init(false), cl::Hidden);
 
+static cl::opt<bool>
+UseMOVT("arm-use-movt",
+        cl::init(true), cl::Hidden);
+
 ARMSubtarget::ARMSubtarget(const std::string &TT, const std::string &FS,
-                           bool isThumb)
+                           bool isT)
   : ARMArchVersion(V4T)
   , ARMFPUType(None)
   , UseNEONForSinglePrecisionFP(UseNEONFP)
-  , IsThumb(isThumb)
+  , IsThumb(isT)
   , ThumbMode(Thumb1)
   , PostRAScheduler(false)
   , IsR9Reserved(ReserveR9)
+  , UseMovt(UseMOVT)
   , stackAlignment(4)
   , CPUString("generic")
   , TargetType(isELF) // Default to ELF unless otherwise specified.
@@ -98,12 +104,18 @@ ARMSubtarget::ARMSubtarget(const std::string &TT, const std::string &FS,
   if (isTargetDarwin())
     IsR9Reserved = ReserveR9 | (ARMArchVersion < V6);
 
+  if (!isThumb() || hasThumb2())
+    PostRAScheduler = true;
+
   // Set CPU specific features.
   if (CPUString == "cortex-a8") {
-    PostRAScheduler = true;
+    // On Cortex-a8, it's faster to perform some single-precision FP
+    // operations with NEON instructions.
     if (UseNEONFP.getPosition() == 0)
       UseNEONForSinglePrecisionFP = true;
   }
+  HasBranchTargetBuffer = (CPUString == "cortex-a8" ||
+                           CPUString == "cortex-a9");
 }
 
 /// GVIsIndirectSymbol - true if the GV will be accessed via an indirect symbol.
@@ -155,3 +167,13 @@ ARMSubtarget::GVIsIndirectSymbol(GlobalValue *GV, Reloc::Model RelocM) const {
 
   return false;
 }
+
+bool ARMSubtarget::enablePostRAScheduler(
+           CodeGenOpt::Level OptLevel,
+           TargetSubtarget::AntiDepBreakMode& Mode,
+           RegClassVector& CriticalPathRCs) const {
+  Mode = TargetSubtarget::ANTIDEP_CRITICAL;
+  CriticalPathRCs.clear();
+  CriticalPathRCs.push_back(&ARM::GPRRegClass);
+  return PostRAScheduler && OptLevel >= CodeGenOpt::Default;
+}
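
enablePostRAScheduler now gates on both the subtarget flag and the
optimization level. A standalone sketch of the gating logic with
simplified types (not the LLVM API):

    #include <iostream>

    enum OptLevel { None = 0, Less = 1, Default = 2, Aggressive = 3 };

    // Post-RA scheduling runs only when the subtarget opted in (ARM mode
    // or Thumb2, per the constructor change above) and the level is at
    // least CodeGenOpt::Default.
    static bool runPostRASched(bool SubtargetOptIn, OptLevel L) {
      return SubtargetOptIn && L >= Default;
    }

    int main() {
      std::cout << runPostRASched(true, Less) << "\n";    // 0
      std::cout << runPostRASched(true, Default) << "\n"; // 1
    }
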
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.h b/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.h
index 7098fd4..b2467b0 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.h
@@ -17,6 +17,7 @@
 #include "llvm/Target/TargetInstrItineraries.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetSubtarget.h"
+#include "ARMBaseRegisterInfo.h"
 #include <string>
 
 namespace llvm {
@@ -49,6 +50,9 @@ protected:
   /// determine if NEON should actually be used.
   bool UseNEONForSinglePrecisionFP;
 
+  /// HasBranchTargetBuffer - True if processor can predict indirect branches.
+  bool HasBranchTargetBuffer;
+
   /// IsThumb - True if we are in thumb mode, false if in ARM mode.
   bool IsThumb;
 
@@ -61,6 +65,10 @@ protected:
   /// IsR9Reserved - True if R9 is a not available as general purpose register.
   bool IsR9Reserved;
 
+  /// UseMovt - True if MOVT / MOVW pairs are used for materialization of 32-bit
+  /// imms (including global addresses).
+  bool UseMovt;
+
   /// stackAlignment - The minimum alignment known to hold of the stack frame on
   /// entry to the function and which must be maintained by every function.
   unsigned stackAlignment;
@@ -122,13 +130,18 @@ protected:
   bool isThumb2() const { return IsThumb && (ThumbMode == Thumb2); }
   bool hasThumb2() const { return ThumbMode >= Thumb2; }
 
+  bool hasBranchTargetBuffer() const { return HasBranchTargetBuffer; }
+
   bool isR9Reserved() const { return IsR9Reserved; }
 
+  bool useMovt() const { return UseMovt && hasV6T2Ops(); }
+
   const std::string & getCPUString() const { return CPUString; }
-  
-  /// enablePostRAScheduler - From TargetSubtarget, return true to
-  /// enable post-RA scheduler.
-  bool enablePostRAScheduler() const { return PostRAScheduler; }
+
+  /// enablePostRAScheduler - True at 'More' optimization.
+  bool enablePostRAScheduler(CodeGenOpt::Level OptLevel,
+                             TargetSubtarget::AntiDepBreakMode& Mode,
+                             RegClassVector& CriticalPathRCs) const;
 
   /// getInstrItins - Return the instruction itineraies based on subtarget
   /// selection.
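
useMovt() above additionally requires hasV6T2Ops(), which matches MOVW
and MOVT being v6T2 instructions. A standalone sketch of the 32-bit
materialization they enable, assuming the usual low/high 16-bit split
(illustration, not code from the patch):

    #include <cstdint>
    #include <cstdio>

    // movw writes the low half, movt the high half, so any 32-bit value
    // (e.g. a global address) is built without a constant-pool load.
    static void splitImm32(uint32_t Imm, uint16_t &Lo, uint16_t &Hi) {
      Lo = static_cast<uint16_t>(Imm & 0xFFFFu); // movw rd, #Lo
      Hi = static_cast<uint16_t>(Imm >> 16);     // movt rd, #Hi
    }

    int main() {
      uint16_t Lo, Hi;
      splitImm32(0x12345678u, Lo, Hi);
      std::printf("movw r0, #0x%04x\n", (unsigned)Lo); // 0x5678
      std::printf("movt r0, #0x%04x\n", (unsigned)Hi); // 0x1234
    }
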
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.cpp
index 32ddc20..2564ed9 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.cpp
@@ -16,14 +16,12 @@
 #include "ARM.h"
 #include "llvm/PassManager.h"
 #include "llvm/CodeGen/Passes.h"
-#include "llvm/Support/CommandLine.h"
 #include "llvm/Support/FormattedStream.h"
 #include "llvm/Target/TargetOptions.h"
 #include "llvm/Target/TargetRegistry.h"
 using namespace llvm;
 
-static const MCAsmInfo *createMCAsmInfo(const Target &T,
-                                        const StringRef &TT) {
+static const MCAsmInfo *createMCAsmInfo(const Target &T, StringRef TT) {
   Triple TheTriple(TT);
   switch (TheTriple.getOS()) {
   case Triple::Darwin:
@@ -62,8 +60,8 @@ ARMTargetMachine::ARMTargetMachine(const Target &T, const std::string &TT,
                                    const std::string &FS)
   : ARMBaseTargetMachine(T, TT, FS, false), InstrInfo(Subtarget),
     DataLayout(Subtarget.isAPCS_ABI() ?
-               std::string("e-p:32:32-f64:32:32-i64:32:32") :
-               std::string("e-p:32:32-f64:64:64-i64:64:64")),
+               std::string("e-p:32:32-f64:32:32-i64:32:32-n32") :
+               std::string("e-p:32:32-f64:64:64-i64:64:64-n32")),
     TLInfo(*this) {
 }
 
@@ -75,9 +73,9 @@ ThumbTargetMachine::ThumbTargetMachine(const Target &T, const std::string &TT,
               : ((ARMBaseInstrInfo*)new Thumb1InstrInfo(Subtarget))),
     DataLayout(Subtarget.isAPCS_ABI() ?
                std::string("e-p:32:32-f64:32:32-i64:32:32-"
-                           "i16:16:32-i8:8:32-i1:8:32-a:0:32") :
+                           "i16:16:32-i8:8:32-i1:8:32-a:0:32-n32") :
                std::string("e-p:32:32-f64:64:64-i64:64:64-"
-                           "i16:16:32-i8:8:32-i1:8:32-a:0:32")),
+                           "i16:16:32-i8:8:32-i1:8:32-a:0:32-n32")),
     TLInfo(*this) {
 }
 
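Both datalayout strings gain a "-n32" suffix, which declares the
target's native integer widths so that optimizers prefer i32 over other
integer types. A toy standalone extraction of that spec (LLVM has its
own TargetData parser; this is only an illustration):

    #include <iostream>
    #include <sstream>
    #include <string>

    // Pull the "n..." token out of a datalayout string.
    static std::string nativeWidths(const std::string &Layout) {
      std::istringstream In(Layout);
      std::string Tok;
      while (std::getline(In, Tok, '-'))
        if (!Tok.empty() && Tok[0] == 'n')
          return Tok.substr(1);
      return "";
    }

    int main() {
      std::cout << nativeWidths("e-p:32:32-f64:64:64-i64:64:64-n32")
                << "\n"; // 32
    }
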
@@ -95,6 +93,10 @@ bool ARMBaseTargetMachine::addPreRegAlloc(PassManagerBase &PM,
   if (Subtarget.hasNEON())
     PM.add(createNEONPreAllocPass());
 
+  // Calculate and set max stack object alignment early, so we can decide
+  // whether we will need stack realignment (and thus FP).
+  PM.add(createARMMaxStackAlignmentCalculatorPass());
+
   // FIXME: temporarily disabling load / store optimization pass for Thumb1.
   if (OptLevel != CodeGenOpt::None && !Subtarget.isThumb1Only())
     PM.add(createARMLoadStoreOptimizationPass(true));
@@ -107,14 +109,22 @@ bool ARMBaseTargetMachine::addPreSched2(PassManagerBase &PM,
   if (OptLevel != CodeGenOpt::None && !Subtarget.isThumb1Only())
     PM.add(createARMLoadStoreOptimizationPass());
 
+  // Expand some pseudo instructions into multiple instructions to allow
+  // proper scheduling.
+  PM.add(createARMExpandPseudoPass());
+
   return true;
 }
 
 bool ARMBaseTargetMachine::addPreEmitPass(PassManagerBase &PM,
                                           CodeGenOpt::Level OptLevel) {
   // FIXME: temporarily disabling load / store optimization pass for Thumb1.
-  if (OptLevel != CodeGenOpt::None && !Subtarget.isThumb1Only())
-    PM.add(createIfConverterPass());
+  if (OptLevel != CodeGenOpt::None) {
+    if (!Subtarget.isThumb1Only())
+      PM.add(createIfConverterPass());
+    if (Subtarget.hasNEON())
+      PM.add(createNEONMoveFixPass());
+  }
 
   if (Subtarget.isThumb2()) {
     PM.add(createThumb2ITBlockPass());
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.h b/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.h
index 71a5348..dd9542e 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.h
@@ -103,7 +103,7 @@ public:
   ThumbTargetMachine(const Target &T, const std::string &TT,
                      const std::string &FS);
 
-  /// returns either Thumb1RegisterInfo of Thumb2RegisterInfo
+  /// returns either Thumb1RegisterInfo or Thumb2RegisterInfo
   virtual const ARMBaseRegisterInfo *getRegisterInfo() const {
     return &InstrInfo->getRegisterInfo();
   }
diff --git a/libclamav/c++/llvm/lib/Target/ARM/AsmParser/ARMAsmParser.cpp b/libclamav/c++/llvm/lib/Target/ARM/AsmParser/ARMAsmParser.cpp
index c0ca149..894f913 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/AsmParser/ARMAsmParser.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/AsmParser/ARMAsmParser.cpp
@@ -23,6 +23,15 @@ using namespace llvm;
 namespace {
 struct ARMOperand;
 
+// The shift types for register-controlled shifts in ARM memory addressing.
+enum ShiftType {
+  Lsl,
+  Lsr,
+  Asr,
+  Ror,
+  Rrx
+};
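+
+// For reference, these map onto the shift specifiers that may follow an
+// offset register in ARM addressing syntax; illustrative examples:
+//   ldr r0, [r1, r2, lsl #2]    @ offset is r2 shifted left by 2
+//   ldr r0, [r1, -r2, asr #1]   @ offset is -r2, arithmetically shifted by 1
+//   ldr r0, [r1, r2, rrx]       @ rotate right with extend; takes no amount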
+
 class ARMAsmParser : public TargetAsmParser {
   MCAsmParser &Parser;
 
@@ -35,8 +44,51 @@ private:
 
   bool Error(SMLoc L, const Twine &Msg) { return Parser.Error(L, Msg); }
 
+  bool MaybeParseRegister(ARMOperand &Op, bool ParseWriteBack);
+
+  bool ParseRegisterList(ARMOperand &Op);
+
+  bool ParseMemory(ARMOperand &Op);
+
+  bool ParseMemoryOffsetReg(bool &Negative,
+                            bool &OffsetRegShifted,
+                            enum ShiftType &ShiftType,
+                            const MCExpr *&ShiftAmount,
+                            const MCExpr *&Offset,
+                            bool &OffsetIsReg,
+                            int &OffsetRegNum);
+
+  bool ParseShift(enum ShiftType &St, const MCExpr *&ShiftAmount);
+
+  bool ParseOperand(ARMOperand &Op);
+
   bool ParseDirectiveWord(unsigned Size, SMLoc L);
 
+  bool ParseDirectiveThumb(SMLoc L);
+
+  bool ParseDirectiveThumbFunc(SMLoc L);
+
+  bool ParseDirectiveCode(SMLoc L);
+
+  bool ParseDirectiveSyntax(SMLoc L);
+
+  // TODO: For now, hacked versions of the next two live in this file to allow
+  // some parser testing until the TableGen versions are implemented.
+
+  /// @name Auto-generated Match Functions
+  /// {
+  bool MatchInstruction(SmallVectorImpl<ARMOperand> &Operands,
+                        MCInst &Inst);
+
+  /// MatchRegisterName - Match the given string to a register name and return
+  /// its register number, or -1 if there is no match.  To allow return values
+  /// to be used directly in register lists, ARM registers have values between
+  /// 0 and 15.
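+  /// For example, "r13" and "sp" both yield 13.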
+  int MatchRegisterName(const StringRef &Name);
+
+  /// }
+
 public:
   ARMAsmParser(const Target &T, MCAsmParser &_Parser)
     : TargetAsmParser(T), Parser(_Parser) {}
@@ -48,16 +100,529 @@ public:
   
 } // end anonymous namespace
 
+namespace {
+
+/// ARMOperand - Instances of this class represent a parsed ARM machine
+/// instruction.
+struct ARMOperand {
+  enum {
+    Token,
+    Register,
+    Immediate,
+    Memory
+  } Kind;
+
+  union {
+    struct {
+      const char *Data;
+      unsigned Length;
+    } Tok;
+
+    struct {
+      unsigned RegNum;
+      bool Writeback;
+    } Reg;
+
+    struct {
+      const MCExpr *Val;
+    } Imm;
+
+    // This is for all forms of ARM address expressions
+    struct {
+      unsigned BaseRegNum;
+      unsigned OffsetRegNum; // used when OffsetIsReg is true
+      const MCExpr *Offset; // used when OffsetIsReg is false
+      const MCExpr *ShiftAmount; // used when OffsetRegShifted is true
+      enum ShiftType ShiftType;  // used when OffsetRegShifted is true
+      unsigned
+        OffsetRegShifted : 1, // only used when OffsetIsReg is true
+        Preindexed : 1,
+        Postindexed : 1,
+        OffsetIsReg : 1,
+        Negative : 1, // only used when OffsetIsReg is true
+        Writeback : 1;
+    } Mem;
+  };
+
+  StringRef getToken() const {
+    assert(Kind == Token && "Invalid access!");
+    return StringRef(Tok.Data, Tok.Length);
+  }
+
+  unsigned getReg() const {
+    assert(Kind == Register && "Invalid access!");
+    return Reg.RegNum;
+  }
+
+  const MCExpr *getImm() const {
+    assert(Kind == Immediate && "Invalid access!");
+    return Imm.Val;
+  }
+
+  bool isToken() const { return Kind == Token; }
+
+  bool isReg() const { return Kind == Register; }
+
+  void addRegOperands(MCInst &Inst, unsigned N) const {
+    assert(N == 1 && "Invalid number of operands!");
+    Inst.addOperand(MCOperand::CreateReg(getReg()));
+  }
+
+  static ARMOperand CreateToken(StringRef Str) {
+    ARMOperand Res;
+    Res.Kind = Token;
+    Res.Tok.Data = Str.data();
+    Res.Tok.Length = Str.size();
+    return Res;
+  }
+
+  static ARMOperand CreateReg(unsigned RegNum, bool Writeback) {
+    ARMOperand Res;
+    Res.Kind = Register;
+    Res.Reg.RegNum = RegNum;
+    Res.Reg.Writeback = Writeback;
+    return Res;
+  }
+
+  static ARMOperand CreateImm(const MCExpr *Val) {
+    ARMOperand Res;
+    Res.Kind = Immediate;
+    Res.Imm.Val = Val;
+    return Res;
+  }
+
+  static ARMOperand CreateMem(unsigned BaseRegNum, bool OffsetIsReg,
+                              const MCExpr *Offset, unsigned OffsetRegNum,
+                              bool OffsetRegShifted, enum ShiftType ShiftType,
+                              const MCExpr *ShiftAmount, bool Preindexed,
+                              bool Postindexed, bool Negative, bool Writeback) {
+    ARMOperand Res;
+    Res.Kind = Memory;
+    Res.Mem.BaseRegNum = BaseRegNum;
+    Res.Mem.OffsetIsReg = OffsetIsReg;
+    Res.Mem.Offset = Offset;
+    Res.Mem.OffsetRegNum = OffsetRegNum;
+    Res.Mem.OffsetRegShifted = OffsetRegShifted;
+    Res.Mem.ShiftType = ShiftType;
+    Res.Mem.ShiftAmount = ShiftAmount;
+    Res.Mem.Preindexed = Preindexed;
+    Res.Mem.Postindexed = Postindexed;
+    Res.Mem.Negative = Negative;
+    Res.Mem.Writeback = Writeback;
+    return Res;
+  }
+};
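+
+// A minimal usage sketch (illustrative only): the operands of "ldr r0, [r1]"
+// would be built roughly as
+//   ARMOperand Mn  = ARMOperand::CreateToken("ldr");
+//   ARMOperand Dst = ARMOperand::CreateReg(0, false);
+//   ARMOperand Mem = ARMOperand::CreateMem(1, false, 0, 0, false, Lsl, 0,
+//                                          true, false, false, false);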
+
+} // end anonymous namespace.
+
+/// Try to parse a register name.  The token must be an Identifier when called.
+/// If it is a register name, a Reg operand is created, the token is eaten and
+/// false is returned; otherwise true is returned and no token is eaten.
+/// TODO: this is likely to change to allow different register types and/or to
+/// parse for a specific register type.
+bool ARMAsmParser::MaybeParseRegister(ARMOperand &Op, bool ParseWriteBack) {
+  const AsmToken &Tok = getLexer().getTok();
+  assert(Tok.is(AsmToken::Identifier) && "Token is not an Identifier");
+
+  // FIXME: Validate register for the current architecture; we have to do
+  // validation later, so maybe there is no need for this here.
+  int RegNum;
+
+  RegNum = MatchRegisterName(Tok.getString());
+  if (RegNum == -1)
+    return true;
+  getLexer().Lex(); // Eat identifier token.
+
+  bool Writeback = false;
+  if (ParseWriteBack) {
+    const AsmToken &ExclaimTok = getLexer().getTok();
+    if (ExclaimTok.is(AsmToken::Exclaim)) {
+      Writeback = true;
+      getLexer().Lex(); // Eat exclaim token
+    }
+  }
+
+  Op = ARMOperand::CreateReg(RegNum, Writeback);
+
+  return false;
+}
+
+/// Parse a register list; return false if successful, otherwise return true
+/// and report an error.  The first token must be a '{' when called.
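+/// Example of the accepted syntax (ranges like "{r4-r6}" are still TODO):
+///   {r4, r5, lr}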
+bool ARMAsmParser::ParseRegisterList(ARMOperand &Op) {
+  assert(getLexer().getTok().is(AsmToken::LCurly) &&
+         "Token is not an Left Curly Brace");
+  getLexer().Lex(); // Eat left curly brace token.
+
+  const AsmToken &RegTok = getLexer().getTok();
+  SMLoc RegLoc = RegTok.getLoc();
+  if (RegTok.isNot(AsmToken::Identifier))
+    return Error(RegLoc, "register expected");
+  int RegNum = MatchRegisterName(RegTok.getString());
+  if (RegNum == -1)
+    return Error(RegLoc, "register expected");
+  getLexer().Lex(); // Eat identifier token.
+  unsigned RegList = 1 << RegNum;
+
+  int HighRegNum = RegNum;
+  // TODO ranges like "{Rn-Rm}"
+  while (getLexer().getTok().is(AsmToken::Comma)) {
+    getLexer().Lex(); // Eat comma token.
+
+    const AsmToken &RegTok = getLexer().getTok();
+    SMLoc RegLoc = RegTok.getLoc();
+    if (RegTok.isNot(AsmToken::Identifier))
+      return Error(RegLoc, "register expected");
+    int RegNum = MatchRegisterName(RegTok.getString());
+    if (RegNum == -1)
+      return Error(RegLoc, "register expected");
+
+    if (RegList & (1 << RegNum))
+      Warning(RegLoc, "register duplicated in register list");
+    else if (RegNum <= HighRegNum)
+      Warning(RegLoc, "register not in ascending order in register list");
+    RegList |= 1 << RegNum;
+    HighRegNum = RegNum;
+
+    getLexer().Lex(); // Eat identifier token.
+  }
+  const AsmToken &RCurlyTok = getLexer().getTok();
+  if (RCurlyTok.isNot(AsmToken::RCurly))
+    return Error(RCurlyTok.getLoc(), "'}' expected");
+  getLexer().Lex(); // Eat right curly brace token.
+
+  return false;
+}
+
+/// Parse an ARM memory expression; return false if successful, otherwise
+/// return true and report an error.  The first token must be a '[' when
+/// called.
+/// TODO: Only pre-indexed and post-indexed addressing are started; unindexed
+/// with option, etc. are still to do.
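+/// Illustrative forms handled here:
+///   [r1, #4]           @ pre-indexed, immediate offset
+///   [r1, r2, lsl #2]!  @ pre-indexed, shifted register offset, writeback
+///   [r1], r2           @ post-indexed, register offset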
+bool ARMAsmParser::ParseMemory(ARMOperand &Op) {
+  assert(getLexer().getTok().is(AsmToken::LBrac) &&
+         "Token is not an Left Bracket");
+  getLexer().Lex(); // Eat left bracket token.
+
+  const AsmToken &BaseRegTok = getLexer().getTok();
+  if (BaseRegTok.isNot(AsmToken::Identifier))
+    return Error(BaseRegTok.getLoc(), "register expected");
+  if (MaybeParseRegister(Op, false))
+    return Error(BaseRegTok.getLoc(), "register expected");
+  int BaseRegNum = Op.getReg();
+
+  bool Preindexed = false;
+  bool Postindexed = false;
+  bool OffsetIsReg = false;
+  bool Negative = false;
+  bool Writeback = false;
+
+  // First look for preindexed address forms, that is after the "[Rn" we now
+  // have to see if the next token is a comma.
+  const AsmToken &Tok = getLexer().getTok();
+  if (Tok.is(AsmToken::Comma)) {
+    Preindexed = true;
+    getLexer().Lex(); // Eat comma token.
+    int OffsetRegNum;
+    bool OffsetRegShifted;
+    enum ShiftType ShiftType;
+    const MCExpr *ShiftAmount;
+    const MCExpr *Offset;
+    if (ParseMemoryOffsetReg(Negative, OffsetRegShifted, ShiftType,
+                             ShiftAmount, Offset, OffsetIsReg, OffsetRegNum))
+      return true;
+    const AsmToken &RBracTok = getLexer().getTok();
+    if (RBracTok.isNot(AsmToken::RBrac))
+      return Error(RBracTok.getLoc(), "']' expected");
+    getLexer().Lex(); // Eat right bracket token.
+
+    const AsmToken &ExclaimTok = getLexer().getTok();
+    if (ExclaimTok.is(AsmToken::Exclaim)) {
+      Writeback = true;
+      getLexer().Lex(); // Eat exclaim token
+    }
+    Op = ARMOperand::CreateMem(BaseRegNum, OffsetIsReg, Offset, OffsetRegNum,
+                               OffsetRegShifted, ShiftType, ShiftAmount,
+                               Preindexed, Postindexed, Negative, Writeback);
+    return false;
+  }
+  // The "[Rn" we have so far was not followed by a comma.
+  else if (Tok.is(AsmToken::RBrac)) {
+    // This is a post-indexed addressing form, that is, a ']' follows after
+    // the "[Rn".
+    Postindexed = true;
+    Writeback = true;
+    getLexer().Lex(); // Eat right bracket token.
+
+    int OffsetRegNum = 0;
+    bool OffsetRegShifted = false;
+    enum ShiftType ShiftType;
+    const MCExpr *ShiftAmount;
+    const MCExpr *Offset;
+
+    const AsmToken &NextTok = getLexer().getTok();
+    if (NextTok.isNot(AsmToken::EndOfStatement)) {
+      if (NextTok.isNot(AsmToken::Comma))
+        return Error(NextTok.getLoc(), "',' expected");
+      getLexer().Lex(); // Eat comma token.
+      if (ParseMemoryOffsetReg(Negative, OffsetRegShifted, ShiftType,
+                               ShiftAmount, Offset, OffsetIsReg, OffsetRegNum))
+        return true;
+    }
+
+    Op = ARMOperand::CreateMem(BaseRegNum, OffsetIsReg, Offset, OffsetRegNum,
+                               OffsetRegShifted, ShiftType, ShiftAmount,
+                               Preindexed, Postindexed, Negative, Writeback);
+    return false;
+  }
+
+  return true;
+}
+
+/// Parse the offset of a memory operand after we have seen "[Rn," or "[Rn],"
+/// we will parse the following (where +/- means that a plus or minus is
+/// optional):
+///   +/-Rm
+///   +/-Rm, shift
+///   #offset
+/// We return false on success, true otherwise.
+bool ARMAsmParser::ParseMemoryOffsetReg(bool &Negative,
+                                        bool &OffsetRegShifted,
+                                        enum ShiftType &ShiftType,
+                                        const MCExpr *&ShiftAmount,
+                                        const MCExpr *&Offset,
+                                        bool &OffsetIsReg,
+                                        int &OffsetRegNum) {
+  ARMOperand Op;
+  Negative = false;
+  OffsetRegShifted = false;
+  OffsetIsReg = false;
+  OffsetRegNum = -1;
+  const AsmToken &NextTok = getLexer().getTok();
+  if (NextTok.is(AsmToken::Plus))
+    getLexer().Lex(); // Eat plus token.
+  else if (NextTok.is(AsmToken::Minus)) {
+    Negative = true;
+    getLexer().Lex(); // Eat minus token
+  }
+  // See if there is a register following the "[Rn," or "[Rn]," we have so far.
+  const AsmToken &OffsetRegTok = getLexer().getTok();
+  if (OffsetRegTok.is(AsmToken::Identifier)) {
+    OffsetIsReg = !MaybeParseRegister(Op, false);
+    if (OffsetIsReg)
+      OffsetRegNum = Op.getReg();
+  }
+  // If we parsed a register as the offset then there can be a shift after it.
+  if (OffsetRegNum != -1) {
+    // Look for a comma then a shift
+    const AsmToken &Tok = getLexer().getTok();
+    if (Tok.is(AsmToken::Comma)) {
+      getLexer().Lex(); // Eat comma token.
+
+      const AsmToken &Tok = getLexer().getTok();
+      if (ParseShift(ShiftType, ShiftAmount))
+        return Error(Tok.getLoc(), "shift expected");
+      OffsetRegShifted = true;
+    }
+  }
+  else { // the "[Rn," or "[Rn,]" we have so far was not followed by "Rm"
+    // Look for #offset following the "[Rn," or "[Rn],"
+    const AsmToken &HashTok = getLexer().getTok();
+    if (HashTok.isNot(AsmToken::Hash))
+      return Error(HashTok.getLoc(), "'#' expected");
+    getLexer().Lex(); // Eat hash token.
+
+    if (getParser().ParseExpression(Offset))
+      return true;
+  }
+  return false;
+}
+
+/// ParseShift parses one of these two forms:
+///   ( lsl | lsr | asr | ror ) , # shift_amount
+///   rrx
+/// and returns false if it parses a shift, otherwise it returns true.
+bool ARMAsmParser::ParseShift(ShiftType &St, const MCExpr *&ShiftAmount) {
+  const AsmToken &Tok = getLexer().getTok();
+  if (Tok.isNot(AsmToken::Identifier))
+    return true;
+  const StringRef &ShiftName = Tok.getString();
+  if (ShiftName == "lsl" || ShiftName == "LSL")
+    St = Lsl;
+  else if (ShiftName == "lsr" || ShiftName == "LSR")
+    St = Lsr;
+  else if (ShiftName == "asr" || ShiftName == "ASR")
+    St = Asr;
+  else if (ShiftName == "ror" || ShiftName == "ROR")
+    St = Ror;
+  else if (ShiftName == "rrx" || ShiftName == "RRX")
+    St = Rrx;
+  else
+    return true;
+  getLexer().Lex(); // Eat shift type token.
+
+  // Rrx stands alone.
+  if (St == Rrx)
+    return false;
+
+  // Otherwise, there must be a '#' and a shift amount.
+  const AsmToken &HashTok = getLexer().getTok();
+  if (HashTok.isNot(AsmToken::Hash))
+    return Error(HashTok.getLoc(), "'#' expected");
+  getLexer().Lex(); // Eat hash token.
+
+  if (getParser().ParseExpression(ShiftAmount))
+    return true;
+
+  return false;
+}
+
+/// A hack to allow some testing, to be replaced by a real TableGen version.
+int ARMAsmParser::MatchRegisterName(const StringRef &Name) {
+  if (Name == "r0" || Name == "R0")
+    return 0;
+  else if (Name == "r1" || Name == "R1")
+    return 1;
+  else if (Name == "r2" || Name == "R2")
+    return 2;
+  else if (Name == "r3" || Name == "R3")
+    return 3;
+  else if (Name == "r3" || Name == "R3")
+    return 3;
+  else if (Name == "r4" || Name == "R4")
+    return 4;
+  else if (Name == "r5" || Name == "R5")
+    return 5;
+  else if (Name == "r6" || Name == "R6")
+    return 6;
+  else if (Name == "r7" || Name == "R7")
+    return 7;
+  else if (Name == "r8" || Name == "R8")
+    return 8;
+  else if (Name == "r9" || Name == "R9")
+    return 9;
+  else if (Name == "r10" || Name == "R10")
+    return 10;
+  else if (Name == "r11" || Name == "R11" || Name == "fp")
+    return 11;
+  else if (Name == "r12" || Name == "R12" || Name == "ip")
+    return 12;
+  else if (Name == "r13" || Name == "R13" || Name == "sp")
+    return 13;
+  else if (Name == "r14" || Name == "R14" || Name == "lr")
+    return 14;
+  else if (Name == "r15" || Name == "R15" || Name == "pc")
+    return 15;
+  return -1;
+}
+
+/// A hack to allow some testing, to be replaced by a real TableGen version.
+bool ARMAsmParser::MatchInstruction(SmallVectorImpl<ARMOperand> &Operands,
+                                    MCInst &Inst) {
+  struct ARMOperand Op0 = Operands[0];
+  assert(Op0.Kind == ARMOperand::Token && "First operand not a Token");
+  const StringRef &Mnemonic = Op0.getToken();
+  if (Mnemonic == "add" ||
+      Mnemonic == "stmfd" ||
+      Mnemonic == "str" ||
+      Mnemonic == "ldmfd" ||
+      Mnemonic == "ldr" ||
+      Mnemonic == "mov" ||
+      Mnemonic == "sub" ||
+      Mnemonic == "bl" ||
+      Mnemonic == "push" ||
+      Mnemonic == "blx" ||
+      Mnemonic == "pop") {
+    // Hard-coded to a valid instruction, until we have a real matcher.
+    Inst = MCInst();
+    Inst.setOpcode(ARM::MOVr);
+    Inst.addOperand(MCOperand::CreateReg(2));
+    Inst.addOperand(MCOperand::CreateReg(2));
+    Inst.addOperand(MCOperand::CreateImm(0));
+    Inst.addOperand(MCOperand::CreateImm(0));
+    Inst.addOperand(MCOperand::CreateReg(0));
+    return false;
+  }
+
+  return true;
+}
+
+/// Parse an ARM instruction operand.  For now this parses the operand
+/// of the mnemonic.
+bool ARMAsmParser::ParseOperand(ARMOperand &Op) {
+  switch (getLexer().getKind()) {
+  case AsmToken::Identifier:
+    if (!MaybeParseRegister(Op, true))
+      return false;
+    // This was not a register so parse other operands that start with an
+    // identifier (like labels) as expressions and create them as immediates.
+    const MCExpr *IdVal;
+    if (getParser().ParseExpression(IdVal))
+      return true;
+    Op = ARMOperand::CreateImm(IdVal);
+    return false;
+  case AsmToken::LBrac:
+    return ParseMemory(Op);
+  case AsmToken::LCurly:
+    return ParseRegisterList(Op);
+  case AsmToken::Hash:
+    // #42 -> immediate.
+    // TODO: ":lower16:" and ":upper16:" modifiers after # before immediate
+    getLexer().Lex();
+    const MCExpr *ImmVal;
+    if (getParser().ParseExpression(ImmVal))
+      return true;
+    Op = ARMOperand::CreateImm(ImmVal);
+    return false;
+  default:
+    return Error(getLexer().getTok().getLoc(), "unexpected token in operand");
+  }
+}
+
+/// Parse an ARM instruction mnemonic followed by its operands.
 bool ARMAsmParser::ParseInstruction(const StringRef &Name, MCInst &Inst) {
+  SmallVector<ARMOperand, 7> Operands;
+
+  Operands.push_back(ARMOperand::CreateToken(Name));
+
   SMLoc Loc = getLexer().getTok().getLoc();
-  Error(Loc, "ARMAsmParser::ParseInstruction currently unimplemented");
+  if (getLexer().isNot(AsmToken::EndOfStatement)) {
+    // Read the first operand.
+    Operands.push_back(ARMOperand());
+    if (ParseOperand(Operands.back()))
+      return true;
+
+    while (getLexer().is(AsmToken::Comma)) {
+      getLexer().Lex();  // Eat the comma.
+
+      // Parse and remember the operand.
+      Operands.push_back(ARMOperand());
+      if (ParseOperand(Operands.back()))
+        return true;
+    }
+  }
+  if (!MatchInstruction(Operands, Inst))
+    return false;
+
+  Error(Loc, "ARMAsmParser::ParseInstruction only partly implemented");
   return true;
 }
 
+/// ParseDirective parses the ARM-specific directives.
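+/// For illustration, these appear in an assembly file as, e.g.:
+///   .word 0x12345678
+///   .thumb
+///   .thumb_func _foo
+///   .code 16
+///   .syntax unified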
 bool ARMAsmParser::ParseDirective(AsmToken DirectiveID) {
   StringRef IDVal = DirectiveID.getIdentifier();
   if (IDVal == ".word")
     return ParseDirectiveWord(4, DirectiveID.getLoc());
+  else if (IDVal == ".thumb")
+    return ParseDirectiveThumb(DirectiveID.getLoc());
+  else if (IDVal == ".thumb_func")
+    return ParseDirectiveThumbFunc(DirectiveID.getLoc());
+  else if (IDVal == ".code")
+    return ParseDirectiveCode(DirectiveID.getLoc());
+  else if (IDVal == ".syntax")
+    return ParseDirectiveSyntax(DirectiveID.getLoc());
   return true;
 }
 
@@ -86,7 +651,94 @@ bool ARMAsmParser::ParseDirectiveWord(unsigned Size, SMLoc L) {
   return false;
 }
 
-// Force static initialization.
+/// ParseDirectiveThumb
+///  ::= .thumb
+bool ARMAsmParser::ParseDirectiveThumb(SMLoc L) {
+  if (getLexer().isNot(AsmToken::EndOfStatement))
+    return Error(L, "unexpected token in directive");
+  getLexer().Lex();
+
+  // TODO: set thumb mode
+  // TODO: tell the MC streamer the mode
+  // getParser().getStreamer().Emit???();
+  return false;
+}
+
+/// ParseDirectiveThumbFunc
+///  ::= .thumb_func symbol_name
+bool ARMAsmParser::ParseDirectiveThumbFunc(SMLoc L) {
+  const AsmToken &Tok = getLexer().getTok();
+  if (Tok.isNot(AsmToken::Identifier) && Tok.isNot(AsmToken::String))
+    return Error(L, "unexpected token in .syntax directive");
+  StringRef SymbolName = getLexer().getTok().getIdentifier();
+  getLexer().Lex(); // Consume the identifier token.
+
+  if (getLexer().isNot(AsmToken::EndOfStatement))
+    return Error(L, "unexpected token in directive");
+  getLexer().Lex();
+
+  // TODO: mark symbol as a thumb symbol
+  // getParser().getStreamer().Emit???();
+  return false;
+}
+
+/// ParseDirectiveSyntax
+///  ::= .syntax unified | divided
+bool ARMAsmParser::ParseDirectiveSyntax(SMLoc L) {
+  const AsmToken &Tok = getLexer().getTok();
+  if (Tok.isNot(AsmToken::Identifier))
+    return Error(L, "unexpected token in .syntax directive");
+  const StringRef &Mode = Tok.getString();
+  bool unified_syntax;
+  if (Mode == "unified" || Mode == "UNIFIED") {
+    getLexer().Lex();
+    unified_syntax = true;
+  }
+  else if (Mode == "divided" || Mode == "DIVIDED") {
+    getLexer().Lex();
+    unified_syntax = false;
+  }
+  else
+    return Error(L, "unrecognized syntax mode in .syntax directive");
+
+  if (getLexer().isNot(AsmToken::EndOfStatement))
+    return Error(getLexer().getTok().getLoc(), "unexpected token in directive");
+  getLexer().Lex();
+
+  // TODO tell the MC streamer the mode
+  // getParser().getStreamer().Emit???();
+  return false;
+}
+
+/// ParseDirectiveCode
+///  ::= .code 16 | 32
+bool ARMAsmParser::ParseDirectiveCode(SMLoc L) {
+  const AsmToken &Tok = getLexer().getTok();
+  if (Tok.isNot(AsmToken::Integer))
+    return Error(L, "unexpected token in .code directive");
+  int64_t Val = getLexer().getTok().getIntVal();
+  bool thumb_mode;
+  if (Val == 16) {
+    getLexer().Lex();
+    thumb_mode = true;
+  }
+  else if (Val == 32) {
+    getLexer().Lex();
+    thumb_mode = false;
+  }
+  else
+    return Error(L, "invalid operand to .code directive");
+
+  if (getLexer().isNot(AsmToken::EndOfStatement))
+    return Error(getLexer().getTok().getLoc(), "unexpected token in directive");
+  getLexer().Lex();
+
+  // TODO tell the MC streamer the mode
+  // getParser().getStreamer().Emit???();
+  return false;
+}
+
+/// Force static initialization.
 extern "C" void LLVMInitializeARMAsmParser() {
   RegisterAsmParser<ARMAsmParser> X(TheARMTarget);
   RegisterAsmParser<ARMAsmParser> Y(TheThumbTarget);
diff --git a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp
index a441993..692bb19 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp
@@ -1,3 +1,5 @@
+//===-- ARMAsmPrinter.cpp - Print machine code to an ARM .s file ----------===//
+//
 //                     The LLVM Compiler Infrastructure
 //
 // This file is distributed under the University of Illinois Open Source
@@ -13,21 +15,25 @@
 #define DEBUG_TYPE "asm-printer"
 #include "ARM.h"
 #include "ARMBuildAttrs.h"
-#include "ARMTargetMachine.h"
 #include "ARMAddressingModes.h"
 #include "ARMConstantPoolValue.h"
+#include "ARMInstPrinter.h"
 #include "ARMMachineFunctionInfo.h"
+#include "ARMMCInstLower.h"
+#include "ARMTargetMachine.h"
 #include "llvm/Constants.h"
 #include "llvm/Module.h"
 #include "llvm/Assembly/Writer.h"
 #include "llvm/CodeGen/AsmPrinter.h"
 #include "llvm/CodeGen/DwarfWriter.h"
-#include "llvm/CodeGen/MachineModuleInfo.h"
+#include "llvm/CodeGen/MachineModuleInfoImpls.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
 #include "llvm/CodeGen/MachineJumpTableInfo.h"
+#include "llvm/MC/MCAsmInfo.h"
+#include "llvm/MC/MCContext.h"
+#include "llvm/MC/MCInst.h"
 #include "llvm/MC/MCSectionMachO.h"
 #include "llvm/MC/MCStreamer.h"
-#include "llvm/MC/MCAsmInfo.h"
 #include "llvm/MC/MCSymbol.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Target/TargetLoweringObjectFile.h"
@@ -37,19 +43,24 @@
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/SmallString.h"
 #include "llvm/ADT/Statistic.h"
+#include "llvm/ADT/StringExtras.h"
 #include "llvm/ADT/StringSet.h"
-#include "llvm/Support/Compiler.h"
+#include "llvm/Support/CommandLine.h"
 #include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/FormattedStream.h"
 #include "llvm/Support/Mangler.h"
 #include "llvm/Support/MathExtras.h"
-#include "llvm/Support/FormattedStream.h"
 #include <cctype>
 using namespace llvm;
 
 STATISTIC(EmittedInsts, "Number of machine instrs printed");
 
+static cl::opt<bool>
+EnableMCInst("enable-arm-mcinst-printer", cl::Hidden,
+            cl::desc("enable experimental asmprinter gunk in the arm backend"));
+
 namespace {
-  class VISIBILITY_HIDDEN ARMAsmPrinter : public AsmPrinter {
+  class ARMAsmPrinter : public AsmPrinter {
 
     /// Subtarget - Keep a pointer to the ARMSubtarget around so that we can
     /// make the right decision when printing asm code for different targets.
@@ -63,34 +74,23 @@ namespace {
     /// MachineFunction.
     const MachineConstantPool *MCP;
 
-    /// We name each basic block in a Function with a unique number, so
-    /// that we can consistently refer to them later. This is cleared
-    /// at the beginning of each call to runOnMachineFunction().
-    ///
-    typedef std::map<const Value *, unsigned> ValueMapTy;
-    ValueMapTy NumberForBB;
-
-    /// GVNonLazyPtrs - Keeps the set of GlobalValues that require
-    /// non-lazy-pointers for indirect access.
-    StringMap<std::string> GVNonLazyPtrs;
-
-    /// HiddenGVNonLazyPtrs - Keeps the set of GlobalValues with hidden
-    /// visibility that require non-lazy-pointers for indirect access.
-    StringMap<std::string> HiddenGVNonLazyPtrs;
-
-    /// True if asm printer is printing a series of CONSTPOOL_ENTRY.
-    bool InCPMode;
   public:
     explicit ARMAsmPrinter(formatted_raw_ostream &O, TargetMachine &TM,
                            const MCAsmInfo *T, bool V)
-      : AsmPrinter(O, TM, T, V), AFI(NULL), MCP(NULL),
-        InCPMode(false) {
+      : AsmPrinter(O, TM, T, V), AFI(NULL), MCP(NULL) {
       Subtarget = &TM.getSubtarget<ARMSubtarget>();
     }
 
     virtual const char *getPassName() const {
       return "ARM Assembly Printer";
     }
+    
+    void printMCInst(const MCInst *MI) {
+      ARMInstPrinter(O, *MAI, VerboseAsm).printInstruction(MI);
+    }
+    
+    void printInstructionThroughMCStreamer(const MachineInstr *MI);
+    
 
     void printOperand(const MachineInstr *MI, int OpNum,
                       const char *Modifier = 0);
@@ -110,6 +110,7 @@ namespace {
                                 const char *Modifier = 0);
     void printBitfieldInvMaskImmOperand (const MachineInstr *MI, int OpNum);
 
+    void printThumbS4ImmOperand(const MachineInstr *MI, int OpNum);
     void printThumbITMask(const MachineInstr *MI, int OpNum);
     void printThumbAddrModeRROperand(const MachineInstr *MI, int OpNum);
     void printThumbAddrModeRI5Operand(const MachineInstr *MI, int OpNum,
@@ -136,6 +137,21 @@ namespace {
     void printJT2BlockOperand(const MachineInstr *MI, int OpNum);
     void printTBAddrMode(const MachineInstr *MI, int OpNum);
     void printNoHashImmediate(const MachineInstr *MI, int OpNum);
+    void printVFPf32ImmOperand(const MachineInstr *MI, int OpNum);
+    void printVFPf64ImmOperand(const MachineInstr *MI, int OpNum);
+
+    void printHex8ImmOperand(const MachineInstr *MI, int OpNum) {
+      O << "#0x" << utohexstr(MI->getOperand(OpNum).getImm() & 0xff);
+    }
+    void printHex16ImmOperand(const MachineInstr *MI, int OpNum) {
+      O << "#0x" << utohexstr(MI->getOperand(OpNum).getImm() & 0xffff);
+    }
+    void printHex32ImmOperand(const MachineInstr *MI, int OpNum) {
+      O << "#0x" << utohexstr(MI->getOperand(OpNum).getImm() & 0xffffffff);
+    }
+    void printHex64ImmOperand(const MachineInstr *MI, int OpNum) {
+      O << "#0x" << utohexstr(MI->getOperand(OpNum).getImm());
+    }
 
     virtual bool PrintAsmOperand(const MachineInstr *MI, unsigned OpNum,
                                  unsigned AsmVariant, const char *ExtraCode);
@@ -149,8 +165,8 @@ namespace {
 
     void printMachineInstruction(const MachineInstr *MI);
     bool runOnMachineFunction(MachineFunction &F);
-    bool doFinalization(Module &M);
     void EmitStartOfAsmFile(Module &M);
+    void EmitEndOfAsmFile(Module &M);
 
     /// EmitMachineConstantPoolValue - Print a machine constantpool value to
     /// the .s file.
@@ -158,7 +174,6 @@ namespace {
       printDataDirective(MCPV->getType());
 
       ARMConstantPoolValue *ACPV = static_cast<ARMConstantPoolValue*>(MCPV);
-      GlobalValue *GV = ACPV->getGV();
       std::string Name;
 
       if (ACPV->isLSDA()) {
@@ -166,28 +181,40 @@ namespace {
         raw_svector_ostream(LSDAName) << MAI->getPrivateGlobalPrefix() <<
           "_LSDA_" << getFunctionNumber();
         Name = LSDAName.str();
-      } else if (GV) {
+      } else if (ACPV->isBlockAddress()) {
+        Name = GetBlockAddressSymbol(ACPV->getBlockAddress())->getName();
+      } else if (ACPV->isGlobalValue()) {
+        GlobalValue *GV = ACPV->getGV();
         bool isIndirect = Subtarget->isTargetDarwin() &&
           Subtarget->GVIsIndirectSymbol(GV, TM.getRelocationModel());
         if (!isIndirect)
           Name = Mang->getMangledName(GV);
         else {
           // FIXME: Remove this when Darwin transition to @GOT like syntax.
-          std::string SymName = Mang->getMangledName(GV);
           Name = Mang->getMangledName(GV, "$non_lazy_ptr", true);
-          if (GV->hasHiddenVisibility())
-            HiddenGVNonLazyPtrs[SymName] = Name;
-          else
-            GVNonLazyPtrs[SymName] = Name;
+          MCSymbol *Sym = OutContext.GetOrCreateSymbol(StringRef(Name));
+          
+          MachineModuleInfoMachO &MMIMachO =
+            MMI->getObjFileInfo<MachineModuleInfoMachO>();
+          const MCSymbol *&StubSym =
+            GV->hasHiddenVisibility() ? MMIMachO.getHiddenGVStubEntry(Sym) :
+                                        MMIMachO.getGVStubEntry(Sym);
+          if (StubSym == 0) {
+            SmallString<128> NameStr;
+            Mang->getNameWithPrefix(NameStr, GV, false);
+            StubSym = OutContext.GetOrCreateSymbol(NameStr.str());
+          }
         }
-      } else
+      } else {
+        assert(ACPV->isExtSymbol() && "unrecognized constant pool value");
         Name = Mang->makeNameProper(ACPV->getSymbol());
+      }
       O << Name;
 
       if (ACPV->hasModifier()) O << "(" << ACPV->getModifier() << ")";
       if (ACPV->getPCAdjustment() != 0) {
         O << "-(" << MAI->getPrivateGlobalPrefix() << "PC"
-          << ACPV->getLabelId()
+          << getFunctionNumber() << "_"  << ACPV->getLabelId()
           << "+" << (unsigned)ACPV->getPCAdjustment();
          if (ACPV->mustAddCurrentAddress())
            O << "-.";
@@ -260,7 +287,6 @@ bool ARMAsmPrinter::runOnMachineFunction(MachineFunction &MF) {
     if (Subtarget->isTargetDarwin())
       O << "\t" << CurrentFnName;
     O << "\n";
-    InCPMode = false;
   } else {
     EmitAlignment(FnAlign, F);
   }
@@ -283,15 +309,13 @@ bool ARMAsmPrinter::runOnMachineFunction(MachineFunction &MF) {
   for (MachineFunction::const_iterator I = MF.begin(), E = MF.end();
        I != E; ++I) {
     // Print a label for the basic block.
-    if (I != MF.begin()) {
+    if (I != MF.begin())
       EmitBasicBlockStart(I);
-      O << '\n';
-    }
+
+    // Print the assembly for the instruction.
     for (MachineBasicBlock::const_iterator II = I->begin(), E = I->end();
-         II != E; ++II) {
-      // Print the assembly for the instruction.
+         II != E; ++II)
       printMachineInstruction(II);
-    }
   }
 
   if (MAI->hasDotTypeDotSizeDirective())
@@ -306,37 +330,41 @@ bool ARMAsmPrinter::runOnMachineFunction(MachineFunction &MF) {
 void ARMAsmPrinter::printOperand(const MachineInstr *MI, int OpNum,
                                  const char *Modifier) {
   const MachineOperand &MO = MI->getOperand(OpNum);
+  unsigned TF = MO.getTargetFlags();
+
   switch (MO.getType()) {
+  default:
+    assert(0 && "<unknown operand type>");
   case MachineOperand::MO_Register: {
     unsigned Reg = MO.getReg();
-    if (TargetRegisterInfo::isPhysicalRegister(Reg)) {
-      if (Modifier && strcmp(Modifier, "dregpair") == 0) {
-        unsigned DRegLo = TRI->getSubReg(Reg, 5); // arm_dsubreg_0
-        unsigned DRegHi = TRI->getSubReg(Reg, 6); // arm_dsubreg_1
-        O << '{'
-          << getRegisterName(DRegLo) << ',' << getRegisterName(DRegHi)
-          << '}';
-      } else if (Modifier && strcmp(Modifier, "lane") == 0) {
-        unsigned RegNum = ARMRegisterInfo::getRegisterNumbering(Reg);
-        unsigned DReg = TRI->getMatchingSuperReg(Reg, RegNum & 1 ? 2 : 1,
-                                                 &ARM::DPR_VFP2RegClass);
-        O << getRegisterName(DReg) << '[' << (RegNum & 1) << ']';
-      } else {
-        O << getRegisterName(Reg);
-      }
-    } else
-      llvm_unreachable("not implemented");
+    assert(TargetRegisterInfo::isPhysicalRegister(Reg));
+    if (Modifier && strcmp(Modifier, "dregpair") == 0) {
+      unsigned DRegLo = TRI->getSubReg(Reg, 5); // arm_dsubreg_0
+      unsigned DRegHi = TRI->getSubReg(Reg, 6); // arm_dsubreg_1
+      O << '{'
+        << getRegisterName(DRegLo) << ',' << getRegisterName(DRegHi)
+        << '}';
+    } else if (Modifier && strcmp(Modifier, "lane") == 0) {
+      unsigned RegNum = ARMRegisterInfo::getRegisterNumbering(Reg);
+      unsigned DReg = TRI->getMatchingSuperReg(Reg, RegNum & 1 ? 2 : 1,
+                                               &ARM::DPR_VFP2RegClass);
+      O << getRegisterName(DReg) << '[' << (RegNum & 1) << ']';
+    } else {
+      assert(!MO.getSubReg() && "Subregs should be eliminated!");
+      O << getRegisterName(Reg);
+    }
     break;
   }
   case MachineOperand::MO_Immediate: {
     int64_t Imm = MO.getImm();
-    if (Modifier) {
-      if (strcmp(Modifier, "lo16") == 0)
-        Imm = Imm & 0xffffLL;
-      else if (strcmp(Modifier, "hi16") == 0)
-        Imm = (Imm & 0xffff0000LL) >> 16;
-    }
-    O << '#' << Imm;
+    O << '#';
+    if ((Modifier && strcmp(Modifier, "lo16") == 0) ||
+        (TF & ARMII::MO_LO16))
+      O << ":lower16:";
+    else if ((Modifier && strcmp(Modifier, "hi16") == 0) ||
+             (TF & ARMII::MO_HI16))
+      O << ":upper16:";
+    O << Imm;
     break;
   }
   case MachineOperand::MO_MachineBasicBlock:
@@ -345,6 +373,13 @@ void ARMAsmPrinter::printOperand(const MachineInstr *MI, int OpNum,
   case MachineOperand::MO_GlobalAddress: {
     bool isCallOp = Modifier && !strcmp(Modifier, "call");
     GlobalValue *GV = MO.getGlobal();
+
+    if ((Modifier && strcmp(Modifier, "lo16") == 0) ||
+        (TF & ARMII::MO_LO16))
+      O << ":lower16:";
+    else if ((Modifier && strcmp(Modifier, "hi16") == 0) ||
+             (TF & ARMII::MO_HI16))
+      O << ":upper16:";
     O << Mang->getMangledName(GV);
 
     printOffset(MO.getOffset());
@@ -372,8 +407,6 @@ void ARMAsmPrinter::printOperand(const MachineInstr *MI, int OpNum,
     O << MAI->getPrivateGlobalPrefix() << "JTI" << getFunctionNumber()
       << '_' << MO.getIndex();
     break;
-  default:
-    O << "<unknown operand type>"; abort (); break;
   }
 }
 
@@ -391,9 +424,11 @@ static void printSOImm(formatted_raw_ostream &O, int64_t V, bool VerboseAsm,
   if (Rot) {
     O << "#" << Imm << ", " << Rot;
     // Pretty printed version.
-    if (VerboseAsm)
-      O << ' ' << MAI->getCommentString()
-        << ' ' << (int)ARM_AM::rotr32(Imm, Rot);
+    if (VerboseAsm) {
+      O.PadToColumn(MAI->getCommentColumn());
+      O << MAI->getCommentString() << ' ';
+      O << (int)ARM_AM::rotr32(Imm, Rot);
+    }
   } else {
     O << "#" << Imm;
   }
@@ -417,7 +452,7 @@ void ARMAsmPrinter::printSOImm2PartOperand(const MachineInstr *MI, int OpNum) {
   printSOImm(O, V1, VerboseAsm, MAI);
   O << "\n\torr";
   printPredicateOperand(MI, 2);
-  O << " ";
+  O << "\t";
   printOperand(MI, 0);
   O << ", ";
   printOperand(MI, 0);
@@ -584,12 +619,7 @@ void ARMAsmPrinter::printAddrMode5Operand(const MachineInstr *MI, int Op,
 
   if (Modifier && strcmp(Modifier, "submode") == 0) {
     ARM_AM::AMSubMode Mode = ARM_AM::getAM5SubMode(MO2.getImm());
-    if (MO1.getReg() == ARM::SP) {
-      bool isFLDM = (MI->getOpcode() == ARM::FLDMD ||
-                     MI->getOpcode() == ARM::FLDMS);
-      O << ARM_AM::getAMSubModeAltStr(Mode, isFLDM);
-    } else
-      O << ARM_AM::getAMSubModeStr(Mode);
+    O << ARM_AM::getAMSubModeStr(Mode);
     return;
   } else if (Modifier && strcmp(Modifier, "base") == 0) {
     // Used for FSTM{D|S} and LSTM{D|S} operations.
@@ -613,9 +643,14 @@ void ARMAsmPrinter::printAddrMode6Operand(const MachineInstr *MI, int Op) {
   const MachineOperand &MO1 = MI->getOperand(Op);
   const MachineOperand &MO2 = MI->getOperand(Op+1);
   const MachineOperand &MO3 = MI->getOperand(Op+2);
+  const MachineOperand &MO4 = MI->getOperand(Op+3);
 
-  // FIXME: No support yet for specifying alignment.
-  O << "[" << getRegisterName(MO1.getReg()) << "]";
+  O << "[" << getRegisterName(MO1.getReg());
+  if (MO4.getImm()) {
+    // FIXME: Both darwin as and GNU as violate ARM docs here.
+    O << ", :" << MO4.getImm();
+  }
+  O << "]";
 
   if (ARM_AM::getAM6WBFlag(MO3.getImm())) {
     if (MO2.getReg() == 0)
@@ -649,6 +684,10 @@ ARMAsmPrinter::printBitfieldInvMaskImmOperand(const MachineInstr *MI, int Op) {
 
 //===--------------------------------------------------------------------===//
 
+void ARMAsmPrinter::printThumbS4ImmOperand(const MachineInstr *MI, int Op) {
+  O << "#" <<  MI->getOperand(Op).getImm() * 4;
+}
+
 void
 ARMAsmPrinter::printThumbITMask(const MachineInstr *MI, int Op) {
   // (3 - the number of trailing zeros) is the number of then / else.
@@ -687,11 +726,8 @@ ARMAsmPrinter::printThumbAddrModeRI5Operand(const MachineInstr *MI, int Op,
   O << "[" << getRegisterName(MO1.getReg());
   if (MO3.getReg())
     O << ", " << getRegisterName(MO3.getReg());
-  else if (unsigned ImmOffs = MO2.getImm()) {
-    O << ", #" << ImmOffs;
-    if (Scale > 1)
-      O << " * " << Scale;
-  }
+  else if (unsigned ImmOffs = MO2.getImm())
+    O << ", #+" << ImmOffs * Scale;
   O << "]";
 }
 
@@ -713,7 +749,7 @@ void ARMAsmPrinter::printThumbAddrModeSPOperand(const MachineInstr *MI,int Op) {
   const MachineOperand &MO2 = MI->getOperand(Op+1);
   O << "[" << getRegisterName(MO1.getReg());
   if (unsigned ImmOffs = MO2.getImm())
-    O << ", #" << ImmOffs << " * 4";
+    O << ", #+" << ImmOffs*4;
   O << "]";
 }
 
@@ -779,9 +815,9 @@ void ARMAsmPrinter::printT2AddrModeImm8s4Operand(const MachineInstr *MI,
   int32_t OffImm = (int32_t)MO2.getImm() / 4;
   // Don't print +0.
   if (OffImm < 0)
-    O << ", #-" << -OffImm << " * 4";
+    O << ", #-" << -OffImm * 4;
   else if (OffImm > 0)
-    O << ", #+" << OffImm << " * 4";
+    O << ", #+" << OffImm * 4;
   O << "]";
 }
 
@@ -834,7 +870,8 @@ void ARMAsmPrinter::printSBitModifierOperand(const MachineInstr *MI, int OpNum){
 
 void ARMAsmPrinter::printPCLabel(const MachineInstr *MI, int OpNum) {
   int Id = (int)MI->getOperand(OpNum).getImm();
-  O << MAI->getPrivateGlobalPrefix() << "PC" << Id;
+  O << MAI->getPrivateGlobalPrefix()
+    << "PC" << getFunctionNumber() << "_" << Id;
 }
 
 void ARMAsmPrinter::printRegisterList(const MachineInstr *MI, int OpNum) {
@@ -968,6 +1005,26 @@ void ARMAsmPrinter::printNoHashImmediate(const MachineInstr *MI, int OpNum) {
   O << MI->getOperand(OpNum).getImm();
 }
 
+void ARMAsmPrinter::printVFPf32ImmOperand(const MachineInstr *MI, int OpNum) {
+  const ConstantFP *FP = MI->getOperand(OpNum).getFPImm();
+  O << '#' << FP->getValueAPF().convertToFloat();
+  if (VerboseAsm) {
+    O.PadToColumn(MAI->getCommentColumn());
+    O << MAI->getCommentString() << ' ';
+    WriteAsOperand(O, FP, /*PrintType=*/false);
+  }
+}
+
+void ARMAsmPrinter::printVFPf64ImmOperand(const MachineInstr *MI, int OpNum) {
+  const ConstantFP *FP = MI->getOperand(OpNum).getFPImm();
+  O << '#' << FP->getValueAPF().convertToDouble();
+  if (VerboseAsm) {
+    O.PadToColumn(MAI->getCommentColumn());
+    O << MAI->getCommentString() << ' ';
+    WriteAsOperand(O, FP, /*PrintType=*/false);
+  }
+}
+
 bool ARMAsmPrinter::PrintAsmOperand(const MachineInstr *MI, unsigned OpNum,
                                     unsigned AsmVariant, const char *ExtraCode){
   // Does this asm operand have a single letter operand modifier?
@@ -1017,32 +1074,33 @@ bool ARMAsmPrinter::PrintAsmMemoryOperand(const MachineInstr *MI,
                                           const char *ExtraCode) {
   if (ExtraCode && ExtraCode[0])
     return true; // Unknown modifier.
-  printAddrMode2Operand(MI, OpNum);
+
+  const MachineOperand &MO = MI->getOperand(OpNum);
+  assert(MO.isReg() && "unexpected inline asm memory operand");
+  O << "[" << getRegisterName(MO.getReg()) << "]";
   return false;
 }
 
 void ARMAsmPrinter::printMachineInstruction(const MachineInstr *MI) {
   ++EmittedInsts;
 
-  int Opc = MI->getOpcode();
-  switch (Opc) {
-  case ARM::CONSTPOOL_ENTRY:
-    if (!InCPMode && AFI->isThumbFunction()) {
-      EmitAlignment(2);
-      InCPMode = true;
-    }
-    break;
-  default: {
-    if (InCPMode && AFI->isThumbFunction())
-      InCPMode = false;
-  }}
-
   // Call the autogenerated instruction printer routines.
-  processDebugLoc(MI);
-  printInstruction(MI);
-  if (VerboseAsm && !MI->getDebugLoc().isUnknown())
+  processDebugLoc(MI, true);
+  
+  if (EnableMCInst) {
+    printInstructionThroughMCStreamer(MI);
+  } else {
+    int Opc = MI->getOpcode();
+    if (Opc == ARM::CONSTPOOL_ENTRY)
+      EmitAlignment(2);
+    
+    printInstruction(MI);
+  }
+  
+  if (VerboseAsm)
     EmitComments(*MI);
   O << '\n';
+  processDebugLoc(MI, false);
 }
 
 void ARMAsmPrinter::EmitStartOfAsmFile(Module &M) {
@@ -1076,9 +1134,8 @@ void ARMAsmPrinter::EmitStartOfAsmFile(Module &M) {
     }
   }
 
-  // Use unified assembler syntax mode for Thumb.
-  if (Subtarget->isThumb())
-    O << "\t.syntax unified\n";
+  // Use unified assembler syntax.
+  O << "\t.syntax unified\n";
 
   // Emit ARM Build Attributes
   if (Subtarget->isTargetELF()) {
@@ -1179,7 +1236,8 @@ void ARMAsmPrinter::PrintGlobalVariable(const GlobalVariable* GVar) {
           EmitAlignment(Align, GVar);
           O << name << ":";
           if (VerboseAsm) {
-            O << "\t\t\t\t" << MAI->getCommentString() << ' ';
+            O.PadToColumn(MAI->getCommentColumn());
+            O << MAI->getCommentString() << ' ';
             WriteAsOperand(O, GVar, /*PrintType=*/false, GVar->getParent());
           }
           O << '\n';
@@ -1202,7 +1260,8 @@ void ARMAsmPrinter::PrintGlobalVariable(const GlobalVariable* GVar) {
           O << "," << (MAI->getAlignmentIsInBytes() ? (1 << Align) : Align);
       }
       if (VerboseAsm) {
-        O << "\t\t" << MAI->getCommentString() << " ";
+        O.PadToColumn(MAI->getCommentColumn());
+        O << MAI->getCommentString() << ' ';
         WriteAsOperand(O, GVar, /*PrintType=*/false, GVar->getParent());
       }
       O << "\n";
@@ -1240,7 +1299,8 @@ void ARMAsmPrinter::PrintGlobalVariable(const GlobalVariable* GVar) {
   EmitAlignment(Align, GVar);
   O << name << ":";
   if (VerboseAsm) {
-    O << "\t\t\t\t" << MAI->getCommentString() << " ";
+    O.PadToColumn(MAI->getCommentColumn());
+    O << MAI->getCommentString() << ' ';
     WriteAsOperand(O, GVar, /*PrintType=*/false, GVar->getParent());
   }
   O << "\n";
@@ -1252,34 +1312,40 @@ void ARMAsmPrinter::PrintGlobalVariable(const GlobalVariable* GVar) {
 }
 
 
-bool ARMAsmPrinter::doFinalization(Module &M) {
+void ARMAsmPrinter::EmitEndOfAsmFile(Module &M) {
   if (Subtarget->isTargetDarwin()) {
     // All darwin targets use mach-o.
     TargetLoweringObjectFileMachO &TLOFMacho =
       static_cast<TargetLoweringObjectFileMachO &>(getObjFileLowering());
+    MachineModuleInfoMachO &MMIMacho =
+      MMI->getObjFileInfo<MachineModuleInfoMachO>();
 
     O << '\n';
 
     // Output non-lazy-pointers for external and common global variables.
-    if (!GVNonLazyPtrs.empty()) {
+    MachineModuleInfoMachO::SymbolListTy Stubs = MMIMacho.GetGVStubList();
+    
+    if (!Stubs.empty()) {
       // Switch with ".non_lazy_symbol_pointer" directive.
       OutStreamer.SwitchSection(TLOFMacho.getNonLazySymbolPointerSection());
       EmitAlignment(2);
-      for (StringMap<std::string>::iterator I = GVNonLazyPtrs.begin(),
-           E = GVNonLazyPtrs.end(); I != E; ++I) {
-        O << I->second << ":\n";
-        O << "\t.indirect_symbol " << I->getKeyData() << "\n";
-        O << "\t.long\t0\n";
+      for (unsigned i = 0, e = Stubs.size(); i != e; ++i) {
+        Stubs[i].first->print(O, MAI);
+        O << ":\n\t.indirect_symbol ";
+        Stubs[i].second->print(O, MAI);
+        O << "\n\t.long\t0\n";
       }
     }
 
-    if (!HiddenGVNonLazyPtrs.empty()) {
+    Stubs = MMIMacho.GetHiddenGVStubList();
+    if (!Stubs.empty()) {
       OutStreamer.SwitchSection(getObjFileLowering().getDataSection());
       EmitAlignment(2);
-      for (StringMap<std::string>::iterator I = HiddenGVNonLazyPtrs.begin(),
-             E = HiddenGVNonLazyPtrs.end(); I != E; ++I) {
-        O << I->second << ":\n";
-        O << "\t.long " << I->getKeyData() << "\n";
+      for (unsigned i = 0, e = Stubs.size(); i != e; ++i) {
+        Stubs[i].first->print(O, MAI);
+        O << ":\n\t.long ";
+        Stubs[i].second->print(O, MAI);
+        O << "\n";
       }
     }
 
@@ -1288,14 +1354,180 @@ bool ARMAsmPrinter::doFinalization(Module &M) {
     // implementation of multiple entry points).  If this doesn't occur, the
     // linker can safely perform dead code stripping.  Since LLVM never
     // generates code that does this, it is always safe to set.
-    O << "\t.subsections_via_symbols\n";
+    OutStreamer.EmitAssemblerFlag(MCStreamer::SubsectionsViaSymbols);
+  }
+}
+
+//===----------------------------------------------------------------------===//
+
+void ARMAsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
+  ARMMCInstLower MCInstLowering(OutContext, *Mang, *this);
+  switch (MI->getOpcode()) {
+  case ARM::t2MOVi32imm:
+    assert(0 && "Should be lowered by thumb2it pass");
+  default: break;
+  case TargetInstrInfo::DBG_LABEL:
+  case TargetInstrInfo::EH_LABEL:
+  case TargetInstrInfo::GC_LABEL:
+    printLabel(MI);
+    return;
+  case TargetInstrInfo::KILL:
+    printKill(MI);
+    return;
+  case TargetInstrInfo::INLINEASM:
+    printInlineAsm(MI);
+    return;
+  case TargetInstrInfo::IMPLICIT_DEF:
+    printImplicitDef(MI);
+    return;
+  case ARM::PICADD: { // FIXME: Remove asm string from td file.
+    // This is a pseudo op for a label + instruction sequence, which looks like:
+    // LPC0:
+    //     add r0, pc, r0
+    // This adds the address of LPC0 to r0.
+    
+    // Emit the label.
+    // FIXME: MOVE TO SHARED PLACE.
+    unsigned Id = (unsigned)MI->getOperand(2).getImm();
+    const char *Prefix = MAI->getPrivateGlobalPrefix();
+    MCSymbol *Label = OutContext.GetOrCreateSymbol(Twine(Prefix)
+                         + "PC" + Twine(getFunctionNumber()) + "_" + Twine(Id));
+    OutStreamer.EmitLabel(Label);
+
+    // Form and emit the add.
+    MCInst AddInst;
+    AddInst.setOpcode(ARM::ADDrr);
+    AddInst.addOperand(MCOperand::CreateReg(MI->getOperand(0).getReg()));
+    AddInst.addOperand(MCOperand::CreateReg(ARM::PC));
+    AddInst.addOperand(MCOperand::CreateReg(MI->getOperand(1).getReg()));
+    printMCInst(&AddInst);
+    return;
+  }
+  case ARM::CONSTPOOL_ENTRY: { // FIXME: Remove asm string from td file.
+    /// CONSTPOOL_ENTRY - This instruction represents a constant pool entry
+    /// floating in the function.  The first operand is the ID# for this
+    /// instruction, the second is the index into the MachineConstantPool that
+    /// this entry corresponds to, and the third is the size in bytes of this
+    /// constant pool entry.
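+    /// Illustratively, for function number 1 and label id 0 this emits a
+    /// label such as "LCPI1_0:" (modulo the private global prefix), followed
+    /// by the constant's data.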
+    unsigned LabelId = (unsigned)MI->getOperand(0).getImm();
+    unsigned CPIdx   = (unsigned)MI->getOperand(1).getIndex();
+
+    EmitAlignment(2);
+
+    const char *Prefix = MAI->getPrivateGlobalPrefix();
+    MCSymbol *Label = OutContext.GetOrCreateSymbol(Twine(Prefix)+"CPI"+
+                                                   Twine(getFunctionNumber())+
+                                                   "_"+ Twine(LabelId));
+    OutStreamer.EmitLabel(Label);
+
+    const MachineConstantPoolEntry &MCPE = MCP->getConstants()[CPIdx];
+    if (MCPE.isMachineConstantPoolEntry())
+      EmitMachineConstantPoolValue(MCPE.Val.MachineCPVal);
+    else
+      EmitGlobalConstant(MCPE.Val.ConstVal);
+    
+    return;
   }
+  case ARM::MOVi2pieces: { // FIXME: Remove asmstring from td file.
+    // This is a hack that lowers as a two-instruction sequence.
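+    // Illustrative example: 0xFF0000FF is not a valid so_imm, but it splits
+    // into two rotated 8-bit pieces:
+    //   mov rD, #0xFF000000
+    //   orr rD, rD, #0x000000FF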
+    unsigned DstReg = MI->getOperand(0).getReg();
+    unsigned ImmVal = (unsigned)MI->getOperand(1).getImm();
+
+    unsigned SOImmValV1 = ARM_AM::getSOImmTwoPartFirst(ImmVal);
+    unsigned SOImmValV2 = ARM_AM::getSOImmTwoPartSecond(ImmVal);
+    
+    {
+      MCInst TmpInst;
+      TmpInst.setOpcode(ARM::MOVi);
+      TmpInst.addOperand(MCOperand::CreateReg(DstReg));
+      TmpInst.addOperand(MCOperand::CreateImm(SOImmValV1));
+      
+      // Predicate.
+      TmpInst.addOperand(MCOperand::CreateImm(MI->getOperand(2).getImm()));
+      TmpInst.addOperand(MCOperand::CreateReg(MI->getOperand(3).getReg()));
+
+      TmpInst.addOperand(MCOperand::CreateReg(0));          // cc_out
+      printMCInst(&TmpInst);
+      O << '\n';
+    }
 
-  return AsmPrinter::doFinalization(M);
+    {
+      MCInst TmpInst;
+      TmpInst.setOpcode(ARM::ORRri);
+      TmpInst.addOperand(MCOperand::CreateReg(DstReg));     // dstreg
+      TmpInst.addOperand(MCOperand::CreateReg(DstReg));     // inreg
+      TmpInst.addOperand(MCOperand::CreateImm(SOImmValV2)); // so_imm
+      // Predicate.
+      TmpInst.addOperand(MCOperand::CreateImm(MI->getOperand(2).getImm()));
+      TmpInst.addOperand(MCOperand::CreateReg(MI->getOperand(3).getReg()));
+      
+      TmpInst.addOperand(MCOperand::CreateReg(0));          // cc_out
+      printMCInst(&TmpInst);
+    }
+    return; 
+  }
+  case ARM::MOVi32imm: { // FIXME: Remove asmstring from td file.
+    // This is a hack that lowers as a two-instruction sequence.
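+    // Illustrative example for ImmVal = 0x12345678:
+    //   movw rD, #0x5678    @ MOVi16,  lower16
+    //   movt rD, #0x1234    @ MOVTi16, upper16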
+    unsigned DstReg = MI->getOperand(0).getReg();
+    unsigned ImmVal = (unsigned)MI->getOperand(1).getImm();
+    
+    {
+      MCInst TmpInst;
+      TmpInst.setOpcode(ARM::MOVi16);
+      TmpInst.addOperand(MCOperand::CreateReg(DstReg));         // dstreg
+      TmpInst.addOperand(MCOperand::CreateImm(ImmVal & 65535)); // lower16(imm)
+      
+      // Predicate.
+      TmpInst.addOperand(MCOperand::CreateImm(MI->getOperand(2).getImm()));
+      TmpInst.addOperand(MCOperand::CreateReg(MI->getOperand(3).getReg()));
+      
+      printMCInst(&TmpInst);
+      O << '\n';
+    }
+    
+    {
+      MCInst TmpInst;
+      TmpInst.setOpcode(ARM::MOVTi16);
+      TmpInst.addOperand(MCOperand::CreateReg(DstReg));         // dstreg
+      TmpInst.addOperand(MCOperand::CreateReg(DstReg));         // srcreg
+      TmpInst.addOperand(MCOperand::CreateImm(ImmVal >> 16));   // upper16(imm)
+      
+      // Predicate.
+      TmpInst.addOperand(MCOperand::CreateImm(MI->getOperand(2).getImm()));
+      TmpInst.addOperand(MCOperand::CreateReg(MI->getOperand(3).getReg()));
+      
+      printMCInst(&TmpInst);
+    }
+    
+    return;
+  }
+  }
+
+  MCInst TmpInst;
+  MCInstLowering.Lower(MI, TmpInst);
+  
+  printMCInst(&TmpInst);
+}
+
+//===----------------------------------------------------------------------===//
+// Target Registry Stuff
+//===----------------------------------------------------------------------===//
+
+static MCInstPrinter *createARMMCInstPrinter(const Target &T,
+                                             unsigned SyntaxVariant,
+                                             const MCAsmInfo &MAI,
+                                             raw_ostream &O) {
+  if (SyntaxVariant == 0)
+    return new ARMInstPrinter(O, MAI, false);
+  return 0;
 }
 
 // Force static initialization.
 extern "C" void LLVMInitializeARMAsmPrinter() {
   RegisterAsmPrinter<ARMAsmPrinter> X(TheARMTarget);
   RegisterAsmPrinter<ARMAsmPrinter> Y(TheThumbTarget);
+
+  TargetRegistry::RegisterMCInstPrinter(TheARMTarget, createARMMCInstPrinter);
+  TargetRegistry::RegisterMCInstPrinter(TheThumbTarget, createARMMCInstPrinter);
 }
diff --git a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMInstPrinter.cpp b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMInstPrinter.cpp
new file mode 100644
index 0000000..9fc57e0
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMInstPrinter.cpp
@@ -0,0 +1,358 @@
+//===-- ARMInstPrinter.cpp - Convert ARM MCInst to assembly syntax --------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This class prints an ARM MCInst to a .s file.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "asm-printer"
+#include "ARM.h" // FIXME: FACTOR ENUMS BETTER.
+#include "ARMInstPrinter.h"
+#include "ARMAddressingModes.h"
+#include "llvm/MC/MCInst.h"
+#include "llvm/MC/MCAsmInfo.h"
+#include "llvm/MC/MCExpr.h"
+#include "llvm/Support/raw_ostream.h"
+using namespace llvm;
+
+// Include the auto-generated portion of the assembly writer.
+#define MachineInstr MCInst
+#define ARMAsmPrinter ARMInstPrinter  // FIXME: REMOVE.
+#define NO_ASM_WRITER_BOILERPLATE
+#include "ARMGenAsmWriter.inc"
+#undef MachineInstr
+#undef ARMAsmPrinter
+
+void ARMInstPrinter::printInst(const MCInst *MI) { printInstruction(MI); }
+
+void ARMInstPrinter::printOperand(const MCInst *MI, unsigned OpNo,
+                                  const char *Modifier) {
+  const MCOperand &Op = MI->getOperand(OpNo);
+  if (Op.isReg()) {
+    unsigned Reg = Op.getReg();
+    if (Modifier && strcmp(Modifier, "dregpair") == 0) {
+      // FIXME: Breaks e.g. ARM/vmul.ll.
+      assert(0);
+      /*
+      unsigned DRegLo = TRI->getSubReg(Reg, 5); // arm_dsubreg_0
+      unsigned DRegHi = TRI->getSubReg(Reg, 6); // arm_dsubreg_1
+      O << '{'
+      << getRegisterName(DRegLo) << ',' << getRegisterName(DRegHi)
+      << '}';*/
+    } else if (Modifier && strcmp(Modifier, "lane") == 0) {
+      assert(0);
+      /*
+      unsigned RegNum = ARMRegisterInfo::getRegisterNumbering(Reg);
+      unsigned DReg = TRI->getMatchingSuperReg(Reg, RegNum & 1 ? 2 : 1,
+                                               &ARM::DPR_VFP2RegClass);
+      O << getRegisterName(DReg) << '[' << (RegNum & 1) << ']';
+       */
+    } else {
+      O << getRegisterName(Reg);
+    }
+  } else if (Op.isImm()) {
+    assert((Modifier == 0 || Modifier[0] == 0) && "No modifiers supported");
+    O << '#' << Op.getImm();
+  } else {
+    assert((Modifier == 0 || Modifier[0] == 0) && "No modifiers supported");
+    assert(Op.isExpr() && "unknown operand kind in printOperand");
+    Op.getExpr()->print(O, &MAI);
+  }
+}
+
+static void printSOImm(raw_ostream &O, int64_t V, bool VerboseAsm,
+                       const MCAsmInfo *MAI) {
+  // Break it up into two parts that make up a shifter immediate.
+  V = ARM_AM::getSOImmVal(V);
+  assert(V != -1 && "Not a valid so_imm value!");
+  
+  unsigned Imm = ARM_AM::getSOImmValImm(V);
+  unsigned Rot = ARM_AM::getSOImmValRot(V);
+  
+  // Print low-level immediate formation info, per
+  // A5.1.3: "Data-processing operands - Immediate".
+  if (Rot) {
+    O << "#" << Imm << ", " << Rot;
+    // Pretty printed version.
+    if (VerboseAsm)
+      O << ' ' << MAI->getCommentString()
+      << ' ' << (int)ARM_AM::rotr32(Imm, Rot);
+  } else {
+    O << "#" << Imm;
+  }
+}
+
+
+/// printSOImmOperand - SOImm is a 4-bit rotate amount in bits 8-11 with an
+/// 8-bit immediate in bits 0-7.
+void ARMInstPrinter::printSOImmOperand(const MCInst *MI, unsigned OpNum) {
+  const MCOperand &MO = MI->getOperand(OpNum);
+  assert(MO.isImm() && "Not a valid so_imm value!");
+  printSOImm(O, MO.getImm(), VerboseAsm, &MAI);
+}
+
+/// printSOImm2PartOperand - SOImm is broken into two pieces using a 'mov'
+/// followed by an 'orr' to materialize.
+void ARMInstPrinter::printSOImm2PartOperand(const MCInst *MI, unsigned OpNum) {
+  // FIXME: REMOVE this method.
+  abort();
+}
+
+// so_reg is a 4-operand unit corresponding to register forms of the A5.1
+// "Addressing Mode 1 - Data-processing operands" forms.  This includes:
+//    REG 0   0           - e.g. R5
+//    REG REG 0,SH_OPC    - e.g. R5, ROR R3
+//    REG 0   IMM,SH_OPC  - e.g. R5, LSL #3
+void ARMInstPrinter::printSORegOperand(const MCInst *MI, unsigned OpNum) {
+  const MCOperand &MO1 = MI->getOperand(OpNum);
+  const MCOperand &MO2 = MI->getOperand(OpNum+1);
+  const MCOperand &MO3 = MI->getOperand(OpNum+2);
+  
+  O << getRegisterName(MO1.getReg());
+  
+  // Print the shift opc.
+  O << ", "
+    << ARM_AM::getShiftOpcStr(ARM_AM::getSORegShOp(MO3.getImm()))
+    << ' ';
+  
+  if (MO2.getReg()) {
+    O << getRegisterName(MO2.getReg());
+    assert(ARM_AM::getSORegOffset(MO3.getImm()) == 0);
+  } else {
+    O << "#" << ARM_AM::getSORegOffset(MO3.getImm());
+  }
+}
+
+
+void ARMInstPrinter::printAddrMode2Operand(const MCInst *MI, unsigned Op) {
+  const MCOperand &MO1 = MI->getOperand(Op);
+  const MCOperand &MO2 = MI->getOperand(Op+1);
+  const MCOperand &MO3 = MI->getOperand(Op+2);
+  
+  if (!MO1.isReg()) {   // FIXME: This is for CP entries, but isn't right.
+    printOperand(MI, Op);
+    return;
+  }
+  
+  O << "[" << getRegisterName(MO1.getReg());
+  
+  if (!MO2.getReg()) {
+    if (ARM_AM::getAM2Offset(MO3.getImm()))  // Don't print +0.
+      O << ", #"
+      << (char)ARM_AM::getAM2Op(MO3.getImm())
+      << ARM_AM::getAM2Offset(MO3.getImm());
+    O << "]";
+    return;
+  }
+  
+  O << ", "
+  << (char)ARM_AM::getAM2Op(MO3.getImm())
+  << getRegisterName(MO2.getReg());
+  
+  if (unsigned ShImm = ARM_AM::getAM2Offset(MO3.getImm()))
+    O << ", "
+    << ARM_AM::getShiftOpcStr(ARM_AM::getAM2ShiftOpc(MO3.getImm()))
+    << " #" << ShImm;
+  O << "]";
+}  
+
+void ARMInstPrinter::printAddrMode2OffsetOperand(const MCInst *MI,
+                                                 unsigned OpNum) {
+  const MCOperand &MO1 = MI->getOperand(OpNum);
+  const MCOperand &MO2 = MI->getOperand(OpNum+1);
+  
+  if (!MO1.getReg()) {
+    unsigned ImmOffs = ARM_AM::getAM2Offset(MO2.getImm());
+    assert(ImmOffs && "Malformed indexed load / store!");
+    O << '#' << (char)ARM_AM::getAM2Op(MO2.getImm()) << ImmOffs;
+    return;
+  }
+  
+  O << (char)ARM_AM::getAM2Op(MO2.getImm()) << getRegisterName(MO1.getReg());
+  
+  if (unsigned ShImm = ARM_AM::getAM2Offset(MO2.getImm()))
+    O << ", "
+    << ARM_AM::getShiftOpcStr(ARM_AM::getAM2ShiftOpc(MO2.getImm()))
+    << " #" << ShImm;
+}
+
+void ARMInstPrinter::printAddrMode3Operand(const MCInst *MI, unsigned OpNum) {
+  const MCOperand &MO1 = MI->getOperand(OpNum);
+  const MCOperand &MO2 = MI->getOperand(OpNum+1);
+  const MCOperand &MO3 = MI->getOperand(OpNum+2);
+  
+  O << '[' << getRegisterName(MO1.getReg());
+  
+  if (MO2.getReg()) {
+    O << ", " << (char)ARM_AM::getAM3Op(MO3.getImm())
+      << getRegisterName(MO2.getReg()) << ']';
+    return;
+  }
+  
+  if (unsigned ImmOffs = ARM_AM::getAM3Offset(MO3.getImm()))
+    O << ", #"
+    << (char)ARM_AM::getAM3Op(MO3.getImm())
+    << ImmOffs;
+  O << ']';
+}
+
+void ARMInstPrinter::printAddrMode3OffsetOperand(const MCInst *MI,
+                                                 unsigned OpNum) {
+  const MCOperand &MO1 = MI->getOperand(OpNum);
+  const MCOperand &MO2 = MI->getOperand(OpNum+1);
+  
+  if (MO1.getReg()) {
+    O << (char)ARM_AM::getAM3Op(MO2.getImm())
+    << getRegisterName(MO1.getReg());
+    return;
+  }
+  
+  unsigned ImmOffs = ARM_AM::getAM3Offset(MO2.getImm());
+  assert(ImmOffs && "Malformed indexed load / store!");
+  O << "#"
+  << (char)ARM_AM::getAM3Op(MO2.getImm())
+  << ImmOffs;
+}
+
+
+void ARMInstPrinter::printAddrMode4Operand(const MCInst *MI, unsigned OpNum,
+                                           const char *Modifier) {
+  const MCOperand &MO1 = MI->getOperand(OpNum);
+  const MCOperand &MO2 = MI->getOperand(OpNum+1);
+  ARM_AM::AMSubMode Mode = ARM_AM::getAM4SubMode(MO2.getImm());
+  if (Modifier && strcmp(Modifier, "submode") == 0) {
+    if (MO1.getReg() == ARM::SP) {
+      // FIXME
+      bool isLDM = (MI->getOpcode() == ARM::LDM ||
+                    MI->getOpcode() == ARM::LDM_RET ||
+                    MI->getOpcode() == ARM::t2LDM ||
+                    MI->getOpcode() == ARM::t2LDM_RET);
+      O << ARM_AM::getAMSubModeAltStr(Mode, isLDM);
+    } else
+      O << ARM_AM::getAMSubModeStr(Mode);
+  } else if (Modifier && strcmp(Modifier, "wide") == 0) {
+    ARM_AM::AMSubMode Mode = ARM_AM::getAM4SubMode(MO2.getImm());
+    if (Mode == ARM_AM::ia)
+      O << ".w";
+  } else {
+    printOperand(MI, OpNum);
+    if (ARM_AM::getAM4WBFlag(MO2.getImm()))
+      O << "!";
+  }
+}
+
+void ARMInstPrinter::printAddrMode5Operand(const MCInst *MI, unsigned OpNum,
+                                           const char *Modifier) {
+  const MCOperand &MO1 = MI->getOperand(OpNum);
+  const MCOperand &MO2 = MI->getOperand(OpNum+1);
+  
+  if (!MO1.isReg()) {   // FIXME: This is for CP entries, but isn't right.
+    printOperand(MI, OpNum);
+    return;
+  }
+  
+  if (Modifier && strcmp(Modifier, "submode") == 0) {
+    ARM_AM::AMSubMode Mode = ARM_AM::getAM5SubMode(MO2.getImm());
+    O << ARM_AM::getAMSubModeStr(Mode);
+    return;
+  } else if (Modifier && strcmp(Modifier, "base") == 0) {
+    // Used for FSTM{D|S} and LSTM{D|S} operations.
+    O << getRegisterName(MO1.getReg());
+    if (ARM_AM::getAM5WBFlag(MO2.getImm()))
+      O << "!";
+    return;
+  }
+  
+  O << "[" << getRegisterName(MO1.getReg());
+  
+  if (unsigned ImmOffs = ARM_AM::getAM5Offset(MO2.getImm())) {
+    O << ", #"
+      << (char)ARM_AM::getAM5Op(MO2.getImm())
+      << ImmOffs*4;
+  }
+  O << "]";
+}
+
+void ARMInstPrinter::printAddrMode6Operand(const MCInst *MI, unsigned OpNum) {
+  const MCOperand &MO1 = MI->getOperand(OpNum);
+  const MCOperand &MO2 = MI->getOperand(OpNum+1);
+  const MCOperand &MO3 = MI->getOperand(OpNum+2);
+  
+  // FIXME: No support yet for specifying alignment.
+  O << '[' << getRegisterName(MO1.getReg()) << ']';
+  
+  if (ARM_AM::getAM6WBFlag(MO3.getImm())) {
+    if (MO2.getReg() == 0)
+      O << '!';
+    else
+      O << ", " << getRegisterName(MO2.getReg());
+  }
+}
+
+void ARMInstPrinter::printAddrModePCOperand(const MCInst *MI, unsigned OpNum,
+                                            const char *Modifier) {
+  assert(0 && "FIXME: Implement printAddrModePCOperand");
+}
+
+void ARMInstPrinter::printBitfieldInvMaskImmOperand (const MCInst *MI,
+                                                     unsigned OpNum) {
+  const MCOperand &MO = MI->getOperand(OpNum);
+  assert(MO.isImm() && "Not a valid bf_inv_mask_imm value!");
+  uint32_t v = ~MO.getImm();
+  int32_t lsb = CountTrailingZeros_32(v);
+  int32_t width = (32 - CountLeadingZeros_32(v)) - lsb;
+  O << '#' << lsb << ", #" << width;
+}
+
+void ARMInstPrinter::printRegisterList(const MCInst *MI, unsigned OpNum) {
+  O << "{";
+  // Always skip the first operand: it's the optional (and implicit) writeback.
+  for (unsigned i = OpNum+1, e = MI->getNumOperands(); i != e; ++i) {
+    if (i != OpNum+1) O << ", ";
+    O << getRegisterName(MI->getOperand(i).getReg());
+  }
+  O << "}";
+}
+
+void ARMInstPrinter::printPredicateOperand(const MCInst *MI, unsigned OpNum) {
+  ARMCC::CondCodes CC = (ARMCC::CondCodes)MI->getOperand(OpNum).getImm();
+  if (CC != ARMCC::AL)
+    O << ARMCondCodeToString(CC);
+}
+
+void ARMInstPrinter::printSBitModifierOperand(const MCInst *MI, unsigned OpNum){
+  if (MI->getOperand(OpNum).getReg()) {
+    assert(MI->getOperand(OpNum).getReg() == ARM::CPSR &&
+           "Expect ARM CPSR register!");
+    O << 's';
+  }
+}
+
+
+
+void ARMInstPrinter::printCPInstOperand(const MCInst *MI, unsigned OpNum,
+                                        const char *Modifier) {
+  // FIXME: remove this.
+  abort();
+}
+
+void ARMInstPrinter::printNoHashImmediate(const MCInst *MI, unsigned OpNum) {
+  O << MI->getOperand(OpNum).getImm();
+}
+
+
+void ARMInstPrinter::printPCLabel(const MCInst *MI, unsigned OpNum) {
+  // FIXME: remove this.
+  abort();
+}
+
+void ARMInstPrinter::printThumbS4ImmOperand(const MCInst *MI, unsigned OpNum) {
+  // FIXME: remove this.
+  abort();
+}
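
For reference: printSOImm above prints the raw (imm, rot) pair and, in verbose
mode, the decoded value as a trailing comment. The decode is a rotate-right;
a standalone sketch, with the ARM_AM helper reimplemented here (an so_imm is
an 8-bit immediate rotated right by an even amount):

    #include <cstdint>
    #include <cstdio>

    // Reimplementation of the rotate-right used by printSOImm above.
    static uint32_t rotr32(uint32_t Val, unsigned Amt) {
      Amt &= 31;
      return Amt ? (Val >> Amt) | (Val << (32 - Amt)) : Val;
    }

    int main() {
      unsigned Imm = 0xFF, Rot = 8;      // printed as "#255, 8"
      printf("#%u, %u -> 0x%08x\n", Imm, Rot, rotr32(Imm, Rot));
      return 0;                          // decoded value: 0xff000000
    }
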
diff --git a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMInstPrinter.h b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMInstPrinter.h
new file mode 100644
index 0000000..23a7f05
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMInstPrinter.h
@@ -0,0 +1,96 @@
+//===-- ARMInstPrinter.h - Convert ARM MCInst to assembly syntax ----------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This class prints an ARM MCInst to a .s file.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef ARMINSTPRINTER_H
+#define ARMINSTPRINTER_H
+
+#include "llvm/MC/MCInstPrinter.h"
+
+namespace llvm {
+  class MCOperand;
+  
+class ARMInstPrinter : public MCInstPrinter {
+  bool VerboseAsm;
+public:
+  ARMInstPrinter(raw_ostream &O, const MCAsmInfo &MAI, bool verboseAsm)
+    : MCInstPrinter(O, MAI), VerboseAsm(verboseAsm) {}
+
+  virtual void printInst(const MCInst *MI);
+  
+  // Autogenerated by tblgen.
+  void printInstruction(const MCInst *MI);
+  static const char *getRegisterName(unsigned RegNo);
+
+
+  void printOperand(const MCInst *MI, unsigned OpNo,
+                    const char *Modifier = 0);
+    
+  void printSOImmOperand(const MCInst *MI, unsigned OpNum);
+  void printSOImm2PartOperand(const MCInst *MI, unsigned OpNum);
+  
+  void printSORegOperand(const MCInst *MI, unsigned OpNum);
+  void printAddrMode2Operand(const MCInst *MI, unsigned OpNum);
+  void printAddrMode2OffsetOperand(const MCInst *MI, unsigned OpNum);
+  void printAddrMode3Operand(const MCInst *MI, unsigned OpNum);
+  void printAddrMode3OffsetOperand(const MCInst *MI, unsigned OpNum);
+  void printAddrMode4Operand(const MCInst *MI, unsigned OpNum,
+                             const char *Modifier = 0);
+  void printAddrMode5Operand(const MCInst *MI, unsigned OpNum,
+                             const char *Modifier = 0);
+  void printAddrMode6Operand(const MCInst *MI, unsigned OpNum);
+  void printAddrModePCOperand(const MCInst *MI, unsigned OpNum,
+                              const char *Modifier = 0);
+    
+  void printBitfieldInvMaskImmOperand(const MCInst *MI, unsigned OpNum);
+
+  void printThumbS4ImmOperand(const MCInst *MI, unsigned OpNum);
+  void printThumbITMask(const MCInst *MI, unsigned OpNum) {}
+  void printThumbAddrModeRROperand(const MCInst *MI, unsigned OpNum) {}
+  void printThumbAddrModeRI5Operand(const MCInst *MI, unsigned OpNum,
+                                    unsigned Scale) {}
+  void printThumbAddrModeS1Operand(const MCInst *MI, unsigned OpNum) {}
+  void printThumbAddrModeS2Operand(const MCInst *MI, unsigned OpNum) {}
+  void printThumbAddrModeS4Operand(const MCInst *MI, unsigned OpNum) {}
+  void printThumbAddrModeSPOperand(const MCInst *MI, unsigned OpNum) {}
+  
+  void printT2SOOperand(const MCInst *MI, unsigned OpNum) {}
+  void printT2AddrModeImm12Operand(const MCInst *MI, unsigned OpNum) {}
+  void printT2AddrModeImm8Operand(const MCInst *MI, unsigned OpNum) {}
+  void printT2AddrModeImm8s4Operand(const MCInst *MI, unsigned OpNum) {}
+  void printT2AddrModeImm8OffsetOperand(const MCInst *MI, unsigned OpNum) {}
+  void printT2AddrModeSoRegOperand(const MCInst *MI, unsigned OpNum) {}
+  
+  void printPredicateOperand(const MCInst *MI, unsigned OpNum);
+  void printSBitModifierOperand(const MCInst *MI, unsigned OpNum);
+  void printRegisterList(const MCInst *MI, unsigned OpNum);
+  void printCPInstOperand(const MCInst *MI, unsigned OpNum,
+                          const char *Modifier);
+  void printJTBlockOperand(const MCInst *MI, unsigned OpNum) {}
+  void printJT2BlockOperand(const MCInst *MI, unsigned OpNum) {}
+  void printTBAddrMode(const MCInst *MI, unsigned OpNum) {}
+  void printNoHashImmediate(const MCInst *MI, unsigned OpNum);
+  void printVFPf32ImmOperand(const MCInst *MI, int OpNum) {}
+  void printVFPf64ImmOperand(const MCInst *MI, int OpNum) {}
+  void printHex8ImmOperand(const MCInst *MI, int OpNum) {}
+  void printHex16ImmOperand(const MCInst *MI, int OpNum) {}
+  void printHex32ImmOperand(const MCInst *MI, int OpNum) {}
+  void printHex64ImmOperand(const MCInst *MI, int OpNum) {}
+
+  void printPCLabel(const MCInst *MI, unsigned OpNum);  
+  // FIXME: Implement.
+  void PrintSpecial(const MCInst *MI, const char *Kind) {}
+};
+  
+}
+
+#endif
diff --git a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMMCInstLower.cpp b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMMCInstLower.cpp
new file mode 100644
index 0000000..c49fee3
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMMCInstLower.cpp
@@ -0,0 +1,171 @@
+//===-- ARMMCInstLower.cpp - Convert ARM MachineInstr to an MCInst --------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file contains code to lower ARM MachineInstrs to their corresponding
+// MCInst records.
+//
+//===----------------------------------------------------------------------===//
+
+#include "ARMMCInstLower.h"
+//#include "llvm/CodeGen/MachineModuleInfoImpls.h"
+#include "llvm/CodeGen/AsmPrinter.h"
+#include "llvm/CodeGen/MachineBasicBlock.h"
+#include "llvm/MC/MCAsmInfo.h"
+#include "llvm/MC/MCContext.h"
+#include "llvm/MC/MCExpr.h"
+#include "llvm/MC/MCInst.h"
+//#include "llvm/MC/MCStreamer.h"
+#include "llvm/Support/raw_ostream.h"
+#include "llvm/Support/Mangler.h"
+#include "llvm/ADT/SmallString.h"
+using namespace llvm;
+
+
+#if 0
+const ARMSubtarget &ARMMCInstLower::getSubtarget() const {
+  return AsmPrinter.getSubtarget();
+}
+
+MachineModuleInfoMachO &ARMMCInstLower::getMachOMMI() const {
+  assert(getSubtarget().isTargetDarwin() && "Can only get MachO info on darwin");
+  return AsmPrinter.MMI->getObjFileInfo<MachineModuleInfoMachO>(); 
+}
+#endif
+
+MCSymbol *ARMMCInstLower::
+GetGlobalAddressSymbol(const MachineOperand &MO) const {
+  const GlobalValue *GV = MO.getGlobal();
+  
+  SmallString<128> Name;
+  Mang.getNameWithPrefix(Name, GV, false);
+  
+  // FIXME: HANDLE PLT references how??
+  switch (MO.getTargetFlags()) {
+  default: assert(0 && "Unknown target flag on GV operand");
+  case 0: break;
+  }
+  
+  return Ctx.GetOrCreateSymbol(Name.str());
+}
+
+MCSymbol *ARMMCInstLower::
+GetExternalSymbolSymbol(const MachineOperand &MO) const {
+  SmallString<128> Name;
+  Name += Printer.MAI->getGlobalPrefix();
+  Name += MO.getSymbolName();
+  
+  // FIXME: HANDLE PLT references how??
+  switch (MO.getTargetFlags()) {
+  default: assert(0 && "Unknown target flag on GV operand");
+  case 0: break;
+  }
+  
+  return Ctx.GetOrCreateSymbol(Name.str());
+}
+
+
+
+MCSymbol *ARMMCInstLower::
+GetJumpTableSymbol(const MachineOperand &MO) const {
+  SmallString<256> Name;
+  raw_svector_ostream(Name) << Printer.MAI->getPrivateGlobalPrefix() << "JTI"
+    << Printer.getFunctionNumber() << '_' << MO.getIndex();
+  
+#if 0
+  switch (MO.getTargetFlags()) {
+    default: llvm_unreachable("Unknown target flag on GV operand");
+  }
+#endif
+  
+  // Create a symbol for the name.
+  return Ctx.GetOrCreateSymbol(Name.str());
+}
+
+MCSymbol *ARMMCInstLower::
+GetConstantPoolIndexSymbol(const MachineOperand &MO) const {
+  SmallString<256> Name;
+  raw_svector_ostream(Name) << Printer.MAI->getPrivateGlobalPrefix() << "CPI"
+    << Printer.getFunctionNumber() << '_' << MO.getIndex();
+  
+#if 0
+  switch (MO.getTargetFlags()) {
+  default: llvm_unreachable("Unknown target flag on GV operand");
+  }
+#endif
+  
+  // Create a symbol for the name.
+  return Ctx.GetOrCreateSymbol(Name.str());
+}
+  
+MCOperand ARMMCInstLower::
+LowerSymbolOperand(const MachineOperand &MO, MCSymbol *Sym) const {
+  // FIXME: We would like an efficient form for this, so we don't have to do a
+  // lot of extra uniquing.
+  const MCExpr *Expr = MCSymbolRefExpr::Create(Sym, Ctx);
+  
+#if 0
+  switch (MO.getTargetFlags()) {
+  default: llvm_unreachable("Unknown target flag on GV operand");
+  }
+#endif
+  
+  if (!MO.isJTI() && MO.getOffset())
+    Expr = MCBinaryExpr::CreateAdd(Expr,
+                                   MCConstantExpr::Create(MO.getOffset(), Ctx),
+                                   Ctx);
+  return MCOperand::CreateExpr(Expr);
+}
+
+
+void ARMMCInstLower::Lower(const MachineInstr *MI, MCInst &OutMI) const {
+  OutMI.setOpcode(MI->getOpcode());
+  
+  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+    const MachineOperand &MO = MI->getOperand(i);
+    
+    MCOperand MCOp;
+    switch (MO.getType()) {
+    default:
+      MI->dump();
+      assert(0 && "unknown operand type");
+    case MachineOperand::MO_Register:
+      // Ignore all implicit register operands.
+      if (MO.isImplicit()) continue;
+      assert(!MO.getSubReg() && "Subregs should be eliminated!");
+      MCOp = MCOperand::CreateReg(MO.getReg());
+      break;
+    case MachineOperand::MO_Immediate:
+      MCOp = MCOperand::CreateImm(MO.getImm());
+      break;
+    case MachineOperand::MO_MachineBasicBlock:
+      MCOp = MCOperand::CreateExpr(MCSymbolRefExpr::Create(
+                       Printer.GetMBBSymbol(MO.getMBB()->getNumber()), Ctx));
+      break;
+    case MachineOperand::MO_GlobalAddress:
+      MCOp = LowerSymbolOperand(MO, GetGlobalAddressSymbol(MO));
+      break;
+    case MachineOperand::MO_ExternalSymbol:
+      MCOp = LowerSymbolOperand(MO, GetExternalSymbolSymbol(MO));
+      break;
+    case MachineOperand::MO_JumpTableIndex:
+      MCOp = LowerSymbolOperand(MO, GetJumpTableSymbol(MO));
+      break;
+    case MachineOperand::MO_ConstantPoolIndex:
+      MCOp = LowerSymbolOperand(MO, GetConstantPoolIndexSymbol(MO));
+      break;
+    case MachineOperand::MO_BlockAddress:
+      MCOp = LowerSymbolOperand(MO, Printer.GetBlockAddressSymbol(
+                                              MO.getBlockAddress()));
+      break;
+    }
+    
+    OutMI.addOperand(MCOp);
+  }
+  
+}
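
The intended call pattern for this lowering mirrors the ARMAsmPrinter hunk
near the top of this mail. A sketch only (the surrounding printer machinery
is elided, and emitInstr is a hypothetical name):

    // Assumes an ARMMCInstLower constructed with the printer's MCContext,
    // Mangler and AsmPrinter, per the header below.
    void emitInstr(const MachineInstr *MI, ARMMCInstLower &MCInstLowering) {
      MCInst TmpInst;
      MCInstLowering.Lower(MI, TmpInst);  // MachineOperands -> MCOperands
      // ... hand TmpInst to an MCInstPrinter such as ARMInstPrinter ...
    }
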
diff --git a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMMCInstLower.h b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMMCInstLower.h
new file mode 100644
index 0000000..383d30d
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMMCInstLower.h
@@ -0,0 +1,56 @@
+//===-- ARMMCInstLower.h - Lower MachineInstr to MCInst -------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef ARM_MCINSTLOWER_H
+#define ARM_MCINSTLOWER_H
+
+#include "llvm/Support/Compiler.h"
+
+namespace llvm {
+  class AsmPrinter;
+  class MCAsmInfo;
+  class MCContext;
+  class MCInst;
+  class MCOperand;
+  class MCSymbol;
+  class MachineInstr;
+  class MachineModuleInfoMachO;
+  class MachineOperand;
+  class Mangler;
+  //class ARMSubtarget;
+  
+/// ARMMCInstLower - This class is used to lower a MachineInstr into an MCInst.
+class VISIBILITY_HIDDEN ARMMCInstLower {
+  MCContext &Ctx;
+  Mangler &Mang;
+  AsmPrinter &Printer;
+
+  //const ARMSubtarget &getSubtarget() const;
+public:
+  ARMMCInstLower(MCContext &ctx, Mangler &mang, AsmPrinter &printer)
+    : Ctx(ctx), Mang(mang), Printer(printer) {}
+  
+  void Lower(const MachineInstr *MI, MCInst &OutMI) const;
+
+  //MCSymbol *GetPICBaseSymbol() const;
+  MCSymbol *GetGlobalAddressSymbol(const MachineOperand &MO) const;
+  MCSymbol *GetExternalSymbolSymbol(const MachineOperand &MO) const;
+  MCSymbol *GetJumpTableSymbol(const MachineOperand &MO) const;
+  MCSymbol *GetConstantPoolIndexSymbol(const MachineOperand &MO) const;
+  MCOperand LowerSymbolOperand(const MachineOperand &MO, MCSymbol *Sym) const;
+  
+/*
+private:
+  MachineModuleInfoMachO &getMachOMMI() const;
+ */
+};
+
+}
+
+#endif
diff --git a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/CMakeLists.txt b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/CMakeLists.txt
index a67fc84..4e299f8 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/CMakeLists.txt
@@ -2,5 +2,7 @@ include_directories( ${CMAKE_CURRENT_BINARY_DIR}/.. ${CMAKE_CURRENT_SOURCE_DIR}/
 
 add_llvm_library(LLVMARMAsmPrinter
   ARMAsmPrinter.cpp
+  ARMInstPrinter.cpp
+  ARMMCInstLower.cpp
   )
-add_dependencies(LLVMARMAsmPrinter ARMCodeGenTable_gen)
\ No newline at end of file
+add_dependencies(LLVMARMAsmPrinter ARMCodeGenTable_gen)
diff --git a/libclamav/c++/llvm/lib/Target/ARM/CMakeLists.txt b/libclamav/c++/llvm/lib/Target/ARM/CMakeLists.txt
index 6e09eb2..964551f 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Target/ARM/CMakeLists.txt
@@ -17,15 +17,17 @@ add_llvm_target(ARMCodeGen
   ARMCodeEmitter.cpp
   ARMConstantIslandPass.cpp
   ARMConstantPoolValue.cpp
-  ARMInstrInfo.cpp
+  ARMExpandPseudoInsts.cpp
   ARMISelDAGToDAG.cpp
   ARMISelLowering.cpp
+  ARMInstrInfo.cpp
   ARMJITInfo.cpp
   ARMLoadStoreOptimizer.cpp
   ARMMCAsmInfo.cpp
   ARMRegisterInfo.cpp
   ARMSubtarget.cpp
   ARMTargetMachine.cpp
+  NEONMoveFix.cpp
   NEONPreAllocPass.cpp
   Thumb1InstrInfo.cpp
   Thumb1RegisterInfo.cpp
diff --git a/libclamav/c++/llvm/lib/Target/ARM/NEONMoveFix.cpp b/libclamav/c++/llvm/lib/Target/ARM/NEONMoveFix.cpp
new file mode 100644
index 0000000..50abcf4
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/ARM/NEONMoveFix.cpp
@@ -0,0 +1,141 @@
+//===-- NEONMoveFix.cpp - Convert vfp reg-reg moves into neon ---*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "neon-mov-fix"
+#include "ARM.h"
+#include "ARMMachineFunctionInfo.h"
+#include "ARMInstrInfo.h"
+#include "llvm/CodeGen/MachineInstr.h"
+#include "llvm/CodeGen/MachineInstrBuilder.h"
+#include "llvm/CodeGen/MachineFunctionPass.h"
+#include "llvm/ADT/Statistic.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/raw_ostream.h"
+using namespace llvm;
+
+STATISTIC(NumVMovs, "Number of reg-reg moves converted");
+
+namespace {
+  struct NEONMoveFixPass : public MachineFunctionPass {
+    static char ID;
+    NEONMoveFixPass() : MachineFunctionPass(&ID) {}
+
+    virtual bool runOnMachineFunction(MachineFunction &Fn);
+
+    virtual const char *getPassName() const {
+      return "NEON reg-reg move conversion";
+    }
+
+  private:
+    const TargetRegisterInfo *TRI;
+    const ARMBaseInstrInfo *TII;
+
+    typedef DenseMap<unsigned, const MachineInstr*> RegMap;
+
+    bool InsertMoves(MachineBasicBlock &MBB);
+  };
+  char NEONMoveFixPass::ID = 0;
+}
+
+bool NEONMoveFixPass::InsertMoves(MachineBasicBlock &MBB) {
+  RegMap Defs;
+  bool Modified = false;
+
+  // Walk over MBB tracking the def points of the registers.
+  MachineBasicBlock::iterator MII = MBB.begin(), E = MBB.end();
+  MachineBasicBlock::iterator NextMII;
+  for (; MII != E; MII = NextMII) {
+    NextMII = next(MII);
+    MachineInstr *MI = &*MII;
+
+    if (MI->getOpcode() == ARM::VMOVD &&
+        !TII->isPredicated(MI)) {
+      unsigned SrcReg = MI->getOperand(1).getReg();
+      // If we do not find an instruction defining the reg, this means the
+      // register should be live-in for this BB. It's always better to use
+      // NEON reg-reg moves.
+      unsigned Domain = ARMII::DomainNEON;
+      RegMap::iterator DefMI = Defs.find(SrcReg);
+      if (DefMI != Defs.end()) {
+        Domain = DefMI->second->getDesc().TSFlags & ARMII::DomainMask;
+        // Instructions in the general domain are subreg accesses;
+        // map them to NEON reg-reg moves.
+        if (Domain == ARMII::DomainGeneral)
+          Domain = ARMII::DomainNEON;
+      }
+
+      if (Domain & ARMII::DomainNEON) {
+        // Convert VMOVD to VMOVDneon
+        unsigned DestReg = MI->getOperand(0).getReg();
+
+        DEBUG({errs() << "vmov convert: "; MI->dump();});
+
+        // It's safe to ignore imp-defs / imp-uses here, since:
+        //  - We're running late, no intelligent codegen passes should be run
+        //    afterwards
+        //  - The imp-defs / imp-uses are superregs only, we don't care about
+        //    them.
+        AddDefaultPred(BuildMI(MBB, *MI, MI->getDebugLoc(),
+                             TII->get(ARM::VMOVDneon), DestReg).addReg(SrcReg));
+        MBB.erase(MI);
+        MachineBasicBlock::iterator I = prior(NextMII);
+        MI = &*I;
+
+        DEBUG({errs() << "        into: "; MI->dump();});
+
+        Modified = true;
+        ++NumVMovs;
+      } else {
+        assert((Domain & ARMII::DomainVFP) && "Invalid domain!");
+        // Do nothing.
+      }
+    }
+
+    // Update def information.
+    for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+      const MachineOperand& MO = MI->getOperand(i);
+      if (!MO.isReg() || !MO.isDef())
+        continue;
+      unsigned MOReg = MO.getReg();
+
+      Defs[MOReg] = MI;
+      // Catch subregs as well.
+      for (const unsigned *R = TRI->getSubRegisters(MOReg); *R; ++R)
+        Defs[*R] = MI;
+    }
+  }
+
+  return Modified;
+}
+
+bool NEONMoveFixPass::runOnMachineFunction(MachineFunction &Fn) {
+  ARMFunctionInfo *AFI = Fn.getInfo<ARMFunctionInfo>();
+  const TargetMachine &TM = Fn.getTarget();
+
+  if (AFI->isThumbFunction())
+    return false;
+
+  TRI = TM.getRegisterInfo();
+  TII = static_cast<const ARMBaseInstrInfo*>(TM.getInstrInfo());
+
+  bool Modified = false;
+  for (MachineFunction::iterator MFI = Fn.begin(), E = Fn.end(); MFI != E;
+       ++MFI) {
+    MachineBasicBlock &MBB = *MFI;
+    Modified |= InsertMoves(MBB);
+  }
+
+  return Modified;
+}
+
+/// createNEONMoveFixPass - Returns an instance of the NEON reg-reg moves fix
+/// pass.
+FunctionPass *llvm::createNEONMoveFixPass() {
+  return new NEONMoveFixPass();
+}
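
The conversion above keys off the defining instruction's domain bits in its
TSFlags. A toy standalone version of just that decision -- the enum values
here are made up for illustration; the real ones come from ARMII:

    #include <cstdio>

    enum { DomainVFP = 1, DomainNEON = 2, DomainGeneral = 4 };  // illustrative

    int main() {
      unsigned Domain = DomainGeneral;   // e.g. the def was a subreg access
      if (Domain == DomainGeneral)       // general-domain defs map to NEON
        Domain = DomainNEON;
      puts(Domain & DomainNEON ? "VMOVD -> VMOVDneon" : "keep VMOVD");
      return 0;
    }
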
diff --git a/libclamav/c++/llvm/lib/Target/ARM/NEONPreAllocPass.cpp b/libclamav/c++/llvm/lib/Target/ARM/NEONPreAllocPass.cpp
index 985cc86..206677b 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/NEONPreAllocPass.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/NEONPreAllocPass.cpp
@@ -16,7 +16,7 @@
 using namespace llvm;
 
 namespace {
-  class VISIBILITY_HIDDEN NEONPreAllocPass : public MachineFunctionPass {
+  class NEONPreAllocPass : public MachineFunctionPass {
     const TargetInstrInfo *TII;
 
   public:
@@ -36,8 +36,12 @@ namespace {
   char NEONPreAllocPass::ID = 0;
 }
 
-static bool isNEONMultiRegOp(int Opcode, unsigned &FirstOpnd,
-                             unsigned &NumRegs) {
+static bool isNEONMultiRegOp(int Opcode, unsigned &FirstOpnd, unsigned &NumRegs,
+                             unsigned &Offset, unsigned &Stride) {
+  // Default to unit stride with no offset.
+  Stride = 1;
+  Offset = 0;
+
   switch (Opcode) {
   default:
     break;
@@ -45,6 +49,7 @@ static bool isNEONMultiRegOp(int Opcode, unsigned &FirstOpnd,
   case ARM::VLD2d8:
   case ARM::VLD2d16:
   case ARM::VLD2d32:
+  case ARM::VLD2d64:
   case ARM::VLD2LNd8:
   case ARM::VLD2LNd16:
   case ARM::VLD2LNd32:
@@ -52,9 +57,33 @@ static bool isNEONMultiRegOp(int Opcode, unsigned &FirstOpnd,
     NumRegs = 2;
     return true;
 
+  case ARM::VLD2q8:
+  case ARM::VLD2q16:
+  case ARM::VLD2q32:
+    FirstOpnd = 0;
+    NumRegs = 4;
+    return true;
+
+  case ARM::VLD2LNq16a:
+  case ARM::VLD2LNq32a:
+    FirstOpnd = 0;
+    NumRegs = 2;
+    Offset = 0;
+    Stride = 2;
+    return true;
+
+  case ARM::VLD2LNq16b:
+  case ARM::VLD2LNq32b:
+    FirstOpnd = 0;
+    NumRegs = 2;
+    Offset = 1;
+    Stride = 2;
+    return true;
+
   case ARM::VLD3d8:
   case ARM::VLD3d16:
   case ARM::VLD3d32:
+  case ARM::VLD3d64:
   case ARM::VLD3LNd8:
   case ARM::VLD3LNd16:
   case ARM::VLD3LNd32:
@@ -62,9 +91,44 @@ static bool isNEONMultiRegOp(int Opcode, unsigned &FirstOpnd,
     NumRegs = 3;
     return true;
 
+  case ARM::VLD3q8a:
+  case ARM::VLD3q16a:
+  case ARM::VLD3q32a:
+    FirstOpnd = 0;
+    NumRegs = 3;
+    Offset = 0;
+    Stride = 2;
+    return true;
+
+  case ARM::VLD3q8b:
+  case ARM::VLD3q16b:
+  case ARM::VLD3q32b:
+    FirstOpnd = 0;
+    NumRegs = 3;
+    Offset = 1;
+    Stride = 2;
+    return true;
+
+  case ARM::VLD3LNq16a:
+  case ARM::VLD3LNq32a:
+    FirstOpnd = 0;
+    NumRegs = 3;
+    Offset = 0;
+    Stride = 2;
+    return true;
+
+  case ARM::VLD3LNq16b:
+  case ARM::VLD3LNq32b:
+    FirstOpnd = 0;
+    NumRegs = 3;
+    Offset = 1;
+    Stride = 2;
+    return true;
+
   case ARM::VLD4d8:
   case ARM::VLD4d16:
   case ARM::VLD4d32:
+  case ARM::VLD4d64:
   case ARM::VLD4LNd8:
   case ARM::VLD4LNd16:
   case ARM::VLD4LNd32:
@@ -72,34 +136,162 @@ static bool isNEONMultiRegOp(int Opcode, unsigned &FirstOpnd,
     NumRegs = 4;
     return true;
 
+  case ARM::VLD4q8a:
+  case ARM::VLD4q16a:
+  case ARM::VLD4q32a:
+    FirstOpnd = 0;
+    NumRegs = 4;
+    Offset = 0;
+    Stride = 2;
+    return true;
+
+  case ARM::VLD4q8b:
+  case ARM::VLD4q16b:
+  case ARM::VLD4q32b:
+    FirstOpnd = 0;
+    NumRegs = 4;
+    Offset = 1;
+    Stride = 2;
+    return true;
+
+  case ARM::VLD4LNq16a:
+  case ARM::VLD4LNq32a:
+    FirstOpnd = 0;
+    NumRegs = 4;
+    Offset = 0;
+    Stride = 2;
+    return true;
+
+  case ARM::VLD4LNq16b:
+  case ARM::VLD4LNq32b:
+    FirstOpnd = 0;
+    NumRegs = 4;
+    Offset = 1;
+    Stride = 2;
+    return true;
+
   case ARM::VST2d8:
   case ARM::VST2d16:
   case ARM::VST2d32:
+  case ARM::VST2d64:
   case ARM::VST2LNd8:
   case ARM::VST2LNd16:
   case ARM::VST2LNd32:
-    FirstOpnd = 3;
+    FirstOpnd = 4;
+    NumRegs = 2;
+    return true;
+
+  case ARM::VST2q8:
+  case ARM::VST2q16:
+  case ARM::VST2q32:
+    FirstOpnd = 4;
+    NumRegs = 4;
+    return true;
+
+  case ARM::VST2LNq16a:
+  case ARM::VST2LNq32a:
+    FirstOpnd = 4;
+    NumRegs = 2;
+    Offset = 0;
+    Stride = 2;
+    return true;
+
+  case ARM::VST2LNq16b:
+  case ARM::VST2LNq32b:
+    FirstOpnd = 4;
     NumRegs = 2;
+    Offset = 1;
+    Stride = 2;
     return true;
 
   case ARM::VST3d8:
   case ARM::VST3d16:
   case ARM::VST3d32:
+  case ARM::VST3d64:
   case ARM::VST3LNd8:
   case ARM::VST3LNd16:
   case ARM::VST3LNd32:
-    FirstOpnd = 3;
+    FirstOpnd = 4;
     NumRegs = 3;
     return true;
 
+  case ARM::VST3q8a:
+  case ARM::VST3q16a:
+  case ARM::VST3q32a:
+    FirstOpnd = 5;
+    NumRegs = 3;
+    Offset = 0;
+    Stride = 2;
+    return true;
+
+  case ARM::VST3q8b:
+  case ARM::VST3q16b:
+  case ARM::VST3q32b:
+    FirstOpnd = 5;
+    NumRegs = 3;
+    Offset = 1;
+    Stride = 2;
+    return true;
+
+  case ARM::VST3LNq16a:
+  case ARM::VST3LNq32a:
+    FirstOpnd = 4;
+    NumRegs = 3;
+    Offset = 0;
+    Stride = 2;
+    return true;
+
+  case ARM::VST3LNq16b:
+  case ARM::VST3LNq32b:
+    FirstOpnd = 4;
+    NumRegs = 3;
+    Offset = 1;
+    Stride = 2;
+    return true;
+
   case ARM::VST4d8:
   case ARM::VST4d16:
   case ARM::VST4d32:
+  case ARM::VST4d64:
   case ARM::VST4LNd8:
   case ARM::VST4LNd16:
   case ARM::VST4LNd32:
-    FirstOpnd = 3;
+    FirstOpnd = 4;
+    NumRegs = 4;
+    return true;
+
+  case ARM::VST4q8a:
+  case ARM::VST4q16a:
+  case ARM::VST4q32a:
+    FirstOpnd = 5;
+    NumRegs = 4;
+    Offset = 0;
+    Stride = 2;
+    return true;
+
+  case ARM::VST4q8b:
+  case ARM::VST4q16b:
+  case ARM::VST4q32b:
+    FirstOpnd = 5;
+    NumRegs = 4;
+    Offset = 1;
+    Stride = 2;
+    return true;
+
+  case ARM::VST4LNq16a:
+  case ARM::VST4LNq32a:
+    FirstOpnd = 4;
+    NumRegs = 4;
+    Offset = 0;
+    Stride = 2;
+    return true;
+
+  case ARM::VST4LNq16b:
+  case ARM::VST4LNq32b:
+    FirstOpnd = 4;
     NumRegs = 4;
+    Offset = 1;
+    Stride = 2;
     return true;
 
   case ARM::VTBL2:
@@ -142,8 +334,8 @@ bool NEONPreAllocPass::PreAllocNEONRegisters(MachineBasicBlock &MBB) {
   MachineBasicBlock::iterator MBBI = MBB.begin(), E = MBB.end();
   for (; MBBI != E; ++MBBI) {
     MachineInstr *MI = &*MBBI;
-    unsigned FirstOpnd, NumRegs;
-    if (!isNEONMultiRegOp(MI->getOpcode(), FirstOpnd, NumRegs))
+    unsigned FirstOpnd, NumRegs, Offset, Stride;
+    if (!isNEONMultiRegOp(MI->getOpcode(), FirstOpnd, NumRegs, Offset, Stride))
       continue;
 
     MachineBasicBlock::iterator NextI = next(MBBI);
@@ -157,15 +349,15 @@ bool NEONPreAllocPass::PreAllocNEONRegisters(MachineBasicBlock &MBB) {
       // For now, just assign a fixed set of adjacent registers.
       // This leaves plenty of room for future improvements.
       static const unsigned NEONDRegs[] = {
-        ARM::D0, ARM::D1, ARM::D2, ARM::D3
+        ARM::D0, ARM::D1, ARM::D2, ARM::D3,
+        ARM::D4, ARM::D5, ARM::D6, ARM::D7
       };
-      MO.setReg(NEONDRegs[R]);
+      MO.setReg(NEONDRegs[Offset + R * Stride]);
 
       if (MO.isUse()) {
         // Insert a copy from VirtReg.
-        AddDefaultPred(BuildMI(MBB, MBBI, MI->getDebugLoc(),
-                               TII->get(ARM::FCPYD), MO.getReg())
-                       .addReg(VirtReg));
+        TII->copyRegToReg(MBB, MBBI, MO.getReg(), VirtReg,
+                          ARM::DPRRegisterClass, ARM::DPRRegisterClass);
         if (MO.isKill()) {
           MachineInstr *CopyMI = prior(MBBI);
           CopyMI->findRegisterUseOperand(VirtReg)->setIsKill();
@@ -173,9 +365,8 @@ bool NEONPreAllocPass::PreAllocNEONRegisters(MachineBasicBlock &MBB) {
         MO.setIsKill();
       } else if (MO.isDef() && !MO.isDead()) {
         // Add a copy to VirtReg.
-        AddDefaultPred(BuildMI(MBB, NextI, MI->getDebugLoc(),
-                               TII->get(ARM::FCPYD), VirtReg)
-                       .addReg(MO.getReg()));
+        TII->copyRegToReg(MBB, NextI, VirtReg, MO.getReg(),
+                          ARM::DPRRegisterClass, ARM::DPRRegisterClass);
       }
     }
   }
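
The new Offset/Stride parameters feed the fixed-assignment formula
NEONDRegs[Offset + R * Stride] above. A standalone illustration of which D
registers the two q-variants of a VLD3 get under that formula:

    #include <cstdio>

    int main() {
      const char *NEONDRegs[] = { "d0", "d1", "d2", "d3",
                                  "d4", "d5", "d6", "d7" };
      unsigned NumRegs = 3, Stride = 2;
      for (unsigned Offset = 0; Offset != 2; ++Offset) {  // 'a' then 'b'
        printf("VLD3q..%c: ", Offset ? 'b' : 'a');
        for (unsigned R = 0; R != NumRegs; ++R)           // d0,d2,d4 / d1,d3,d5
          printf("%s ", NEONDRegs[Offset + R * Stride]);
        printf("\n");
      }
      return 0;
    }
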
diff --git a/libclamav/c++/llvm/lib/Target/ARM/README-Thumb.txt b/libclamav/c++/llvm/lib/Target/ARM/README-Thumb.txt
index a961a57..6b605bb 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/README-Thumb.txt
+++ b/libclamav/c++/llvm/lib/Target/ARM/README-Thumb.txt
@@ -37,7 +37,7 @@ LPCRELL0:
 	mov r1, #PCRELV0
 	add r1, pc
 	ldr r0, [r0, r1]
-	cpy pc, r0 
+	mov pc, r0 
 	.align	2
 LJTI1_0_0:
 	.long	 LBB1_3
@@ -51,7 +51,7 @@ We should be able to generate:
 LPCRELL0:
 	add r1, LJTI1_0_0
 	ldr r0, [r0, r1]
-	cpy pc, r0 
+	mov pc, r0 
 	.align	2
 LJTI1_0_0:
 	.long	 LBB1_3
@@ -196,14 +196,6 @@ This is especially bad when dynamic alloca is used. The all fixed size stack
 objects are referenced off the frame pointer with negative offsets. See
 oggenc for an example.
 
-//===---------------------------------------------------------------------===//
-
-We are reserving R3 as a scratch register under thumb mode. So if it is live in
-to the function, we save / restore R3 to / from R12. Until register scavenging
-is done, we should save R3 to a high callee saved reg at emitPrologue time
-(when hasFP is true or stack size is large) and restore R3 from that register
-instead. This allows us to at least get rid of the save to r12 everytime it is
-used.
 
 //===---------------------------------------------------------------------===//
 
@@ -214,8 +206,8 @@ LPC0:
 	add r5, pc
 	ldr r6, LCPI1_1
 	ldr r2, LCPI1_2
-	cpy r3, r6
-	cpy lr, pc
+	mov r3, r6
+	mov lr, pc
 	bx r5
 
 //===---------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/Target/ARM/README.txt b/libclamav/c++/llvm/lib/Target/ARM/README.txt
index 8fb1da3..11c48ad 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/README.txt
+++ b/libclamav/c++/llvm/lib/Target/ARM/README.txt
@@ -8,12 +8,8 @@ Reimplement 'select' in terms of 'SEL'.
   add doesn't need to overflow between the two 16-bit chunks.
 
 * Implement pre/post increment support.  (e.g. PR935)
-* Coalesce stack slots!
 * Implement smarter constant generation for binops with large immediates.
 
-* Consider materializing FP constants like 0.0f and 1.0f using integer 
-  immediate instructions then copy to FPU.  Slower than load into FPU?
-
 //===---------------------------------------------------------------------===//
 
 Crazy idea:  Consider code that uses lots of 8-bit or 16-bit values.  By the
@@ -325,7 +321,7 @@ time.
 4) Once we added support for multiple result patterns, write indexed loads
    patterns instead of C++ instruction selection code.
 
-5) Use FLDM / FSTM to emulate indexed FP load / store.
+5) Use VLDM / VSTM to emulate indexed FP load / store.
 
 //===---------------------------------------------------------------------===//
 
@@ -422,14 +418,6 @@ are not remembered when the same two values are compared twice.
 
 //===---------------------------------------------------------------------===//
 
-More register scavenging work:
-
-1. Use the register scavenger to track frame index materialized into registers
-   (those that do not fit in addressing modes) to allow reuse in the same BB.
-2. Finish scavenging for Thumb.
-
-//===---------------------------------------------------------------------===//
-
 More LSR enhancements possible:
 
 1. Teach LSR about pre- and post- indexed ops to allow iv increment be merged
@@ -540,10 +528,6 @@ while ARMConstantIslandPass only need to worry about LDR (literal).
 
 //===---------------------------------------------------------------------===//
 
-We need to fix constant isel for ARMv6t2 to use MOVT.
-
-//===---------------------------------------------------------------------===//
-
 Constant island pass should make use of full range SoImm values for LEApcrel.
 Be careful though as the last attempt caused infinite looping on lencod.
 
@@ -593,10 +577,22 @@ it saves an instruction and a register.
 
 //===---------------------------------------------------------------------===//
 
-add/sub/and/or + i32 imm can be simplified by folding part of the immediate
-into the operation.
+It might be profitable to cse MOVi16 if there are lots of 32-bit immediates
+with the same bottom half.
+
+//===---------------------------------------------------------------------===//
+
+Robert Muth started working on an alternate jump table implementation that
+does not put the tables in-line in the text.  This is more like the llvm
+default jump table implementation.  This might be useful sometime.  Several
+revisions of patches are on the mailing list, beginning at:
+http://lists.cs.uiuc.edu/pipermail/llvmdev/2009-June/022763.html
 
 //===---------------------------------------------------------------------===//
 
-It might be profitable to cse MOVi16 if there are lots of 32-bit immediates
-with the same bottom half.
+Make use of the "rbit" instruction.
+
+//===---------------------------------------------------------------------===//
+
+Take a look at test/CodeGen/Thumb2/machine-licm.ll. ARM should be taught how
+to licm and cse the unnecessary load from cp#1.
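
On the new "rbit" note above: rbit reverses the bit order of a 32-bit
register. A portable sketch of the operation the instruction would replace:

    #include <cstdint>
    #include <cstdio>

    static uint32_t rbit(uint32_t V) {
      uint32_t R = 0;
      for (int i = 0; i != 32; ++i, V >>= 1)
        R = (R << 1) | (V & 1);          // emit bits in reverse order
      return R;
    }

    int main() {
      printf("rbit(0x00000001) = 0x%08x\n", rbit(0x1));  // 0x80000000
      return 0;
    }
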
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.cpp
index 7eed30e..7602b6d 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.cpp
@@ -1,4 +1,4 @@
-//===- Thumb1InstrInfo.cpp - Thumb-1 Instruction Information --------*- C++ -*-===//
+//===- Thumb1InstrInfo.cpp - Thumb-1 Instruction Information ----*- C++ -*-===//
 //
 //                     The LLVM Compiler Infrastructure
 //
@@ -11,18 +11,21 @@
 //
 //===----------------------------------------------------------------------===//
 
-#include "ARMInstrInfo.h"
+#include "Thumb1InstrInfo.h"
 #include "ARM.h"
 #include "ARMGenInstrInfo.inc"
 #include "ARMMachineFunctionInfo.h"
 #include "llvm/CodeGen/MachineFrameInfo.h"
 #include "llvm/CodeGen/MachineInstrBuilder.h"
+#include "llvm/CodeGen/MachineMemOperand.h"
+#include "llvm/CodeGen/PseudoSourceValue.h"
 #include "llvm/ADT/SmallVector.h"
 #include "Thumb1InstrInfo.h"
 
 using namespace llvm;
 
-Thumb1InstrInfo::Thumb1InstrInfo(const ARMSubtarget &STI) : RI(*this, STI) {
+Thumb1InstrInfo::Thumb1InstrInfo(const ARMSubtarget &STI)
+  : ARMBaseInstrInfo(STI), RI(*this, STI) {
 }
 
 unsigned Thumb1InstrInfo::getUnindexedOpcode(unsigned Opc) const {
@@ -38,6 +41,7 @@ Thumb1InstrInfo::BlockHasNoFallThrough(const MachineBasicBlock &MBB) const {
   case ARM::tBX_RET_vararg:
   case ARM::tPOP_RET:
   case ARM::tB:
+  case ARM::tBRIND:
   case ARM::tBR_JTr:
     return true;
   default:
@@ -121,9 +125,16 @@ storeRegToStackSlot(MachineBasicBlock &MBB, MachineBasicBlock::iterator I,
            isARMLowRegister(SrcReg))) && "Unknown regclass!");
 
   if (RC == ARM::tGPRRegisterClass) {
+    MachineFunction &MF = *MBB.getParent();
+    MachineFrameInfo &MFI = *MF.getFrameInfo();
+    MachineMemOperand *MMO =
+      MF.getMachineMemOperand(PseudoSourceValue::getFixedStack(FI),
+                              MachineMemOperand::MOStore, 0,
+                              MFI.getObjectSize(FI),
+                              MFI.getObjectAlignment(FI));
     AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::tSpill))
                    .addReg(SrcReg, getKillRegState(isKill))
-                   .addFrameIndex(FI).addImm(0));
+                   .addFrameIndex(FI).addImm(0).addMemOperand(MMO));
   }
 }
 
@@ -139,8 +150,15 @@ loadRegFromStackSlot(MachineBasicBlock &MBB, MachineBasicBlock::iterator I,
            isARMLowRegister(DestReg))) && "Unknown regclass!");
 
   if (RC == ARM::tGPRRegisterClass) {
+    MachineFunction &MF = *MBB.getParent();
+    MachineFrameInfo &MFI = *MF.getFrameInfo();
+    MachineMemOperand *MMO =
+      MF.getMachineMemOperand(PseudoSourceValue::getFixedStack(FI),
+                              MachineMemOperand::MOLoad, 0,
+                              MFI.getObjectSize(FI),
+                              MFI.getObjectAlignment(FI));
     AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::tRestore), DestReg)
-                   .addFrameIndex(FI).addImm(0));
+                   .addFrameIndex(FI).addImm(0).addMemOperand(MMO));
   }
 }
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.h b/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.h
index 13cc578..b28229d 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.h
@@ -1,4 +1,4 @@
-//===- Thumb1InstrInfo.h - Thumb-1 Instruction Information ----------*- C++ -*-===//
+//===- Thumb1InstrInfo.h - Thumb-1 Instruction Information ------*- C++ -*-===//
 //
 //                     The LLVM Compiler Infrastructure
 //
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb1RegisterInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/Thumb1RegisterInfo.cpp
index 0cea27f..37adf37 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb1RegisterInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb1RegisterInfo.cpp
@@ -1,4 +1,4 @@
-//===- Thumb1RegisterInfo.cpp - Thumb-1 Register Information -------*- C++ -*-===//
+//===- Thumb1RegisterInfo.cpp - Thumb-1 Register Information ----*- C++ -*-===//
 //
 //                     The LLVM Compiler Infrastructure
 //
@@ -7,7 +7,8 @@
 //
 //===----------------------------------------------------------------------===//
 //
-// This file contains the Thumb-1 implementation of the TargetRegisterInfo class.
+// This file contains the Thumb-1 implementation of the TargetRegisterInfo
+// class.
 //
 //===----------------------------------------------------------------------===//
 
@@ -32,16 +33,10 @@
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/ADT/BitVector.h"
 #include "llvm/ADT/SmallVector.h"
-#include "llvm/Support/CommandLine.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
-// FIXME: This cmd line option conditionalizes the new register scavenging
-// implemenation in PEI. Remove the option when scavenging works well enough
-// to be the default.
-extern cl::opt<bool> FrameIndexVirtualScavenging;
-
 Thumb1RegisterInfo::Thumb1RegisterInfo(const ARMBaseInstrInfo &tii,
                                        const ARMSubtarget &sti)
   : ARMBaseRegisterInfo(tii, sti) {
@@ -82,11 +77,6 @@ Thumb1RegisterInfo::getPhysicalRegisterRegClass(unsigned Reg, EVT VT) const {
   return TargetRegisterInfo::getPhysicalRegisterRegClass(Reg, VT);
 }
 
-bool
-Thumb1RegisterInfo::requiresRegisterScavenging(const MachineFunction &MF) const {
-  return FrameIndexVirtualScavenging;
-}
-
 bool Thumb1RegisterInfo::hasReservedCallFrame(MachineFunction &MF) const {
   const MachineFrameInfo *FFI = MF.getFrameInfo();
   unsigned CFSize = FFI->getMaxCallFrameSize();
@@ -128,13 +118,7 @@ void emitThumbRegPlusImmInReg(MachineBasicBlock &MBB,
     unsigned LdReg = DestReg;
     if (DestReg == ARM::SP) {
       assert(BaseReg == ARM::SP && "Unexpected!");
-      if (FrameIndexVirtualScavenging) {
-        LdReg = MF.getRegInfo().createVirtualRegister(ARM::tGPRRegisterClass);
-      } else {
-        LdReg = ARM::R3;
-        BuildMI(MBB, MBBI, dl, TII.get(ARM::tMOVtgpr2gpr), ARM::R12)
-          .addReg(ARM::R3, RegState::Kill);
-      }
+      LdReg = MF.getRegInfo().createVirtualRegister(ARM::tGPRRegisterClass);
     }
 
     if (NumBytes <= 255 && NumBytes >= 0)
@@ -159,10 +143,6 @@ void emitThumbRegPlusImmInReg(MachineBasicBlock &MBB,
     else
       MIB.addReg(LdReg).addReg(BaseReg, RegState::Kill);
     AddDefaultPred(MIB);
-
-    if (!FrameIndexVirtualScavenging && DestReg == ARM::SP)
-      BuildMI(MBB, MBBI, dl, TII.get(ARM::tMOVgpr2tgpr), ARM::R3)
-        .addReg(ARM::R12, RegState::Kill);
 }
 
 /// calcNumMI - Returns the number of instructions required to materialize
@@ -402,8 +382,53 @@ rewriteFrameIndex(MachineInstr &MI, unsigned FrameRegIdx,
   return 0;
 }
 
-void Thumb1RegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
-                                             int SPAdj, RegScavenger *RS) const{
+/// saveScavengerRegister - Spill the register so it can be used by the
+/// register scavenger. Return true.
+bool
+Thumb1RegisterInfo::saveScavengerRegister(MachineBasicBlock &MBB,
+                                          MachineBasicBlock::iterator I,
+                                          MachineBasicBlock::iterator &UseMI,
+                                          const TargetRegisterClass *RC,
+                                          unsigned Reg) const {
+  // Thumb1 can't use the emergency spill slot on the stack because ldr/str
+  // immediate offsets must be positive, and if we're referencing off the
+  // frame pointer (if, for example, there are alloca() calls in the
+  // function), the offset will be negative. Use R12 instead, since that's a
+  // call-clobbered register that we know won't be used in Thumb1 mode.
+  DebugLoc DL = DebugLoc::getUnknownLoc();
+  BuildMI(MBB, I, DL, TII.get(ARM::tMOVtgpr2gpr)).
+    addReg(ARM::R12, RegState::Define).addReg(Reg, RegState::Kill);
+
+  // The UseMI is where we would like to restore the register. If there's
+  // interference with R12 before then, however, we'll need to restore it
+  // before that instead and adjust the UseMI.
+  bool done = false;
+  for (MachineBasicBlock::iterator II = I; !done && II != UseMI ; ++II) {
+    // If this instruction affects R12, adjust our restore point.
+    for (unsigned i = 0, e = II->getNumOperands(); i != e; ++i) {
+      const MachineOperand &MO = II->getOperand(i);
+      if (!MO.isReg() || MO.isUndef() || !MO.getReg() ||
+          TargetRegisterInfo::isVirtualRegister(MO.getReg()))
+        continue;
+      if (MO.getReg() == ARM::R12) {
+        UseMI = II;
+        done = true;
+        break;
+      }
+    }
+  }
+  // Restore the register from R12
+  BuildMI(MBB, UseMI, DL, TII.get(ARM::tMOVgpr2tgpr)).
+    addReg(Reg, RegState::Define).addReg(ARM::R12, RegState::Kill);
+
+  return true;
+}
+
+unsigned
+Thumb1RegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
+                                        int SPAdj, int *Value,
+                                        RegScavenger *RS) const{
+  unsigned VReg = 0;
   unsigned i = 0;
   MachineInstr &MI = *II;
   MachineBasicBlock &MBB = *MI.getParent();
@@ -459,7 +484,7 @@ void Thumb1RegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
       MI.setDesc(TII.get(ARM::tMOVgpr2tgpr));
       MI.getOperand(i).ChangeToRegister(FrameReg, false);
       MI.RemoveOperand(i+1);
-      return;
+      return 0;
     }
 
     // Common case: small offset, fits into instruction.
@@ -475,7 +500,7 @@ void Thumb1RegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
         MI.getOperand(i).ChangeToRegister(FrameReg, false);
         MI.getOperand(i+1).ChangeToImmediate(Offset / Scale);
       }
-      return;
+      return 0;
     }
 
     unsigned DestReg = MI.getOperand(0).getReg();
@@ -487,7 +512,7 @@ void Thumb1RegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
       emitThumbRegPlusImmediate(MBB, II, DestReg, FrameReg, Offset, TII,
                                 *this, dl);
       MBB.erase(II);
-      return;
+      return 0;
     }
 
     if (Offset > 0) {
@@ -520,7 +545,7 @@ void Thumb1RegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
         AddDefaultPred(MIB);
       }
     }
-    return;
+    return 0;
   } else {
     unsigned ImmIdx = 0;
     int InstrOffs = 0;
@@ -550,7 +575,7 @@ void Thumb1RegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
       // Replace the FrameIndex with sp
       MI.getOperand(i).ChangeToRegister(FrameReg, false);
       ImmOp.ChangeToImmediate(ImmedOffset);
-      return;
+      return 0;
     }
 
     bool isThumSpillRestore = Opcode == ARM::tRestore || Opcode == ARM::tSpill;
@@ -607,73 +632,28 @@ void Thumb1RegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
     else  // tLDR has an extra register operand.
       MI.addOperand(MachineOperand::CreateReg(0, false));
   } else if (Desc.mayStore()) {
-    if (FrameIndexVirtualScavenging) {
-      unsigned TmpReg =
-        MF.getRegInfo().createVirtualRegister(ARM::tGPRRegisterClass);
+      VReg = MF.getRegInfo().createVirtualRegister(ARM::tGPRRegisterClass);
+      assert(Value && "Frame index virtually allocated, but Value arg is NULL!");
+      *Value = Offset;
       bool UseRR = false;
+
       if (Opcode == ARM::tSpill) {
         if (FrameReg == ARM::SP)
-          emitThumbRegPlusImmInReg(MBB, II, TmpReg, FrameReg,
-                                   Offset, false, TII, *this, dl);
-        else {
-          emitLoadConstPool(MBB, II, dl, TmpReg, 0, Offset);
-          UseRR = true;
-        }
-      } else
-        emitThumbRegPlusImmediate(MBB, II, TmpReg, FrameReg, Offset, TII,
-                                  *this, dl);
-      MI.setDesc(TII.get(ARM::tSTR));
-      MI.getOperand(i).ChangeToRegister(TmpReg, false, false, true);
-      if (UseRR)  // Use [reg, reg] addrmode.
-        MI.addOperand(MachineOperand::CreateReg(FrameReg, false));
-      else // tSTR has an extra register operand.
-        MI.addOperand(MachineOperand::CreateReg(0, false));
-    } else {
-      // FIXME! This is horrific!!! We need register scavenging.
-      // Our temporary workaround has marked r3 unavailable. Of course, r3 is
-      // also a ABI register so it's possible that is is the register that is
-      // being storing here. If that's the case, we do the following:
-      // r12 = r2
-      // Use r2 to materialize sp + offset
-      // str r3, r2
-      // r2 = r12
-      unsigned ValReg = MI.getOperand(0).getReg();
-      unsigned TmpReg = ARM::R3;
-      bool UseRR = false;
-      if (ValReg == ARM::R3) {
-        BuildMI(MBB, II, dl, TII.get(ARM::tMOVtgpr2gpr), ARM::R12)
-          .addReg(ARM::R2, RegState::Kill);
-        TmpReg = ARM::R2;
-      }
-      if (TmpReg == ARM::R3 && AFI->isR3LiveIn())
-        BuildMI(MBB, II, dl, TII.get(ARM::tMOVtgpr2gpr), ARM::R12)
-          .addReg(ARM::R3, RegState::Kill);
-      if (Opcode == ARM::tSpill) {
-        if (FrameReg == ARM::SP)
-          emitThumbRegPlusImmInReg(MBB, II, TmpReg, FrameReg,
+          emitThumbRegPlusImmInReg(MBB, II, VReg, FrameReg,
                                    Offset, false, TII, *this, dl);
         else {
-          emitLoadConstPool(MBB, II, dl, TmpReg, 0, Offset);
+          emitLoadConstPool(MBB, II, dl, VReg, 0, Offset);
           UseRR = true;
         }
       } else
-        emitThumbRegPlusImmediate(MBB, II, TmpReg, FrameReg, Offset, TII,
+        emitThumbRegPlusImmediate(MBB, II, VReg, FrameReg, Offset, TII,
                                   *this, dl);
       MI.setDesc(TII.get(ARM::tSTR));
-      MI.getOperand(i).ChangeToRegister(TmpReg, false, false, true);
+      MI.getOperand(i).ChangeToRegister(VReg, false, false, true);
       if (UseRR)  // Use [reg, reg] addrmode.
         MI.addOperand(MachineOperand::CreateReg(FrameReg, false));
       else // tSTR has an extra register operand.
         MI.addOperand(MachineOperand::CreateReg(0, false));
-
-      MachineBasicBlock::iterator NII = next(II);
-      if (ValReg == ARM::R3)
-        BuildMI(MBB, NII, dl, TII.get(ARM::tMOVgpr2tgpr), ARM::R2)
-          .addReg(ARM::R12, RegState::Kill);
-      if (TmpReg == ARM::R3 && AFI->isR3LiveIn())
-        BuildMI(MBB, NII, dl, TII.get(ARM::tMOVgpr2tgpr), ARM::R3)
-          .addReg(ARM::R12, RegState::Kill);
-    }
   } else
     assert(false && "Unexpected opcode!");
 
@@ -682,6 +662,7 @@ void Thumb1RegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
     MachineInstrBuilder MIB(&MI);
     AddDefaultPred(MIB);
   }
+  return VReg;
 }
 
 void Thumb1RegisterInfo::emitPrologue(MachineFunction &MF) const {
@@ -695,15 +676,6 @@ void Thumb1RegisterInfo::emitPrologue(MachineFunction &MF) const {
   DebugLoc dl = (MBBI != MBB.end() ?
                  MBBI->getDebugLoc() : DebugLoc::getUnknownLoc());
 
-  // Check if R3 is live in. It might have to be used as a scratch register.
-  for (MachineRegisterInfo::livein_iterator I =MF.getRegInfo().livein_begin(),
-         E = MF.getRegInfo().livein_end(); I != E; ++I) {
-    if (I->first == ARM::R3) {
-      AFI->setR3IsLiveIn(true);
-      break;
-    }
-  }
-
   // Thumb add/sub sp, imm8 instructions implicitly multiply the offset by 4.
   NumBytes = (NumBytes + 3) & ~3;
   MFI->setStackSize(NumBytes);
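
The rounding above is the usual align-up-to-4 bit trick; a self-contained
check of the arithmetic:

    #include <cassert>

    // Round a byte count up to the next multiple of 4, as the prologue does
    // before emitting Thumb sp-adjusting instructions (which scale by 4).
    static unsigned alignTo4(unsigned NumBytes) {
      return (NumBytes + 3) & ~3u;
    }

    int main() {
      assert(alignTo4(0) == 0);
      assert(alignTo4(1) == 4);
      assert(alignTo4(13) == 16);
      assert(alignTo4(16) == 16);
      return 0;
    }
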
@@ -823,7 +795,7 @@ void Thumb1RegisterInfo::emitEpilogue(MachineFunction &MF,
     if (NumBytes != 0)
       emitSPUpdate(MBB, MBBI, TII, dl, *this, NumBytes);
   } else {
-    // Unwind MBBI to point to first LDR / FLDD.
+    // Unwind MBBI to point to first LDR / VLDRD.
     const unsigned *CSRegs = getCalleeSavedRegs();
     if (MBBI != MBB.begin()) {
       do
@@ -861,7 +833,6 @@ void Thumb1RegisterInfo::emitEpilogue(MachineFunction &MF,
 
   if (VARegSaveSize) {
     // Epilogue for vararg functions: pop LR to R3 and branch off it.
-    // FIXME: Verify this is still ok when R3 is no longer being reserved.
     AddDefaultPred(BuildMI(MBB, MBBI, dl, TII.get(ARM::tPOP)))
       .addReg(0) // No write back.
       .addReg(ARM::R3, RegState::Define);
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb1RegisterInfo.h b/libclamav/c++/llvm/lib/Target/ARM/Thumb1RegisterInfo.h
index 6eae904..37ad388 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb1RegisterInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb1RegisterInfo.h
@@ -1,4 +1,4 @@
-//===- Thumb1RegisterInfo.h - Thumb-1 Register Information Impl ----*- C++ -*-===//
+//===- Thumb1RegisterInfo.h - Thumb-1 Register Information Impl -*- C++ -*-===//
 //
 //                     The LLVM Compiler Infrastructure
 //
@@ -7,7 +7,8 @@
 //
 //===----------------------------------------------------------------------===//
 //
-// This file contains the Thumb-1 implementation of the TargetRegisterInfo class.
+// This file contains the Thumb-1 implementation of the TargetRegisterInfo
+// class.
 //
 //===----------------------------------------------------------------------===//
 
@@ -40,8 +41,6 @@ public:
   const TargetRegisterClass *
     getPhysicalRegisterRegClass(unsigned Reg, EVT VT = MVT::Other) const;
 
-  bool requiresRegisterScavenging(const MachineFunction &MF) const;
-
   bool hasReservedCallFrame(MachineFunction &MF) const;
 
   void eliminateCallFramePseudoInstr(MachineFunction &MF,
@@ -54,8 +53,14 @@ public:
                         unsigned FrameReg, int Offset,
                         unsigned MOVOpc, unsigned ADDriOpc, unsigned SUBriOpc) const;
 
-  void eliminateFrameIndex(MachineBasicBlock::iterator II,
-                           int SPAdj, RegScavenger *RS = NULL) const;
+  bool saveScavengerRegister(MachineBasicBlock &MBB,
+                             MachineBasicBlock::iterator I,
+                             MachineBasicBlock::iterator &UseMI,
+                             const TargetRegisterClass *RC,
+                             unsigned Reg) const;
+  unsigned eliminateFrameIndex(MachineBasicBlock::iterator II,
+                               int SPAdj, int *Value = NULL,
+                               RegScavenger *RS = NULL) const;
 
   void emitPrologue(MachineFunction &MF) const;
   void emitEpilogue(MachineFunction &MF, MachineBasicBlock &MBB) const;
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb2ITBlockPass.cpp b/libclamav/c++/llvm/lib/Target/ARM/Thumb2ITBlockPass.cpp
index 98b5cbd..f5ba155 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb2ITBlockPass.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb2ITBlockPass.cpp
@@ -1,4 +1,4 @@
-//===-- Thumb2ITBlockPass.cpp - Insert Thumb IT blocks -----------*- C++ -*-===//
+//===-- Thumb2ITBlockPass.cpp - Insert Thumb IT blocks ----------*- C++ -*-===//
 //
 //                     The LLVM Compiler Infrastructure
 //
@@ -14,14 +14,13 @@
 #include "llvm/CodeGen/MachineInstr.h"
 #include "llvm/CodeGen/MachineInstrBuilder.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/ADT/Statistic.h"
 using namespace llvm;
 
 STATISTIC(NumITs,     "Number of IT blocks inserted");
 
 namespace {
-  struct VISIBILITY_HIDDEN Thumb2ITBlockPass : public MachineFunctionPass {
+  struct Thumb2ITBlockPass : public MachineFunctionPass {
     static char ID;
     Thumb2ITBlockPass() : MachineFunctionPass(&ID) {}
 
@@ -35,10 +34,6 @@ namespace {
     }
 
   private:
-    MachineBasicBlock::iterator
-      SplitT2MOV32imm(MachineBasicBlock &MBB, MachineBasicBlock::iterator MBBI,
-                      MachineInstr *MI, DebugLoc dl,
-                      unsigned PredReg, ARMCC::CondCodes CC);
     bool InsertITBlocks(MachineBasicBlock &MBB);
   };
   char Thumb2ITBlockPass::ID = 0;
@@ -51,34 +46,6 @@ static ARMCC::CondCodes getPredicate(const MachineInstr *MI, unsigned &PredReg){
   return llvm::getInstrPredicate(MI, PredReg);
 }
 
-MachineBasicBlock::iterator
-Thumb2ITBlockPass::SplitT2MOV32imm(MachineBasicBlock &MBB,
-                                   MachineBasicBlock::iterator MBBI,
-                                   MachineInstr *MI,
-                                   DebugLoc dl, unsigned PredReg,
-                                   ARMCC::CondCodes CC) {
-  // Splitting t2MOVi32imm into a pair of t2MOVi16 + t2MOVTi16 here.
-  // The only reason it was a single instruction was so it could be
-  // re-materialized. We want to split it before this and the thumb2
-  // size reduction pass to make sure the IT mask is correct and expose
-  // width reduction opportunities. It doesn't make sense to do this in a 
-  // separate pass so here it is.
-  unsigned DstReg = MI->getOperand(0).getReg();
-  bool DstDead = MI->getOperand(0).isDead(); // Is this possible?
-  unsigned Imm = MI->getOperand(1).getImm();
-  unsigned Lo16 = Imm & 0xffff;
-  unsigned Hi16 = (Imm >> 16) & 0xffff;
-  BuildMI(MBB, MBBI, dl, TII->get(ARM::t2MOVi16), DstReg)
-    .addImm(Lo16).addImm(CC).addReg(PredReg);
-  BuildMI(MBB, MBBI, dl, TII->get(ARM::t2MOVTi16))
-    .addReg(DstReg, getDefRegState(true) | getDeadRegState(DstDead))
-    .addReg(DstReg).addImm(Hi16).addImm(CC).addReg(PredReg);
-  --MBBI;
-  --MBBI;
-  MI->eraseFromParent();
-  return MBBI;
-}
-
 bool Thumb2ITBlockPass::InsertITBlocks(MachineBasicBlock &MBB) {
   bool Modified = false;
 
@@ -89,11 +56,6 @@ bool Thumb2ITBlockPass::InsertITBlocks(MachineBasicBlock &MBB) {
     unsigned PredReg = 0;
     ARMCC::CondCodes CC = getPredicate(MI, PredReg);
 
-    if (MI->getOpcode() == ARM::t2MOVi32imm) {
-      MBBI = SplitT2MOV32imm(MBB, MBBI, MI, dl, PredReg, CC);
-      continue;
-    }
-
     if (CC == ARMCC::AL) {
       ++MBBI;
       continue;
@@ -107,16 +69,15 @@ bool Thumb2ITBlockPass::InsertITBlocks(MachineBasicBlock &MBB) {
     // Finalize IT mask.
     ARMCC::CondCodes OCC = ARMCC::getOppositeCondition(CC);
     unsigned Mask = 0, Pos = 3;
-    while (MBBI != E && Pos) {
+    // Branches, including tricky ones like LDM_RET, need to end an IT
+    // block, so check the instruction we just put in the block.
+    while (MBBI != E && Pos &&
+           (!MI->getDesc().isBranch() && !MI->getDesc().isReturn())) {
       MachineInstr *NMI = &*MBBI;
+      MI = NMI;
       DebugLoc ndl = NMI->getDebugLoc();
       unsigned NPredReg = 0;
       ARMCC::CondCodes NCC = getPredicate(NMI, NPredReg);
-      if (NMI->getOpcode() == ARM::t2MOVi32imm) {
-        MBBI = SplitT2MOV32imm(MBB, MBBI, NMI, ndl, NPredReg, NCC);
-        continue;
-      }
-
       if (NCC == OCC) {
         Mask |= (1 << Pos);
       } else if (NCC != CC)
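
A standalone sketch of the bookkeeping in the loop above (toy code: it
mirrors how the pass accumulates the mask, with a bit set for each
opposite-condition instruction and a final stop bit at the position where
the block ends, as the full pass does just after the loop; the actual
hardware IT-mask encoding also depends on the condition and is handled by
later stages):

    #include <cassert>
    #include <string>

    // thenElse describes the instructions after the first one in the block,
    // e.g. "TE" for an IT block of the form ITTE.
    static unsigned itMaskToy(const std::string &thenElse) {
      unsigned Mask = 0, Pos = 3;
      for (char c : thenElse) {
        if (!Pos) break;
        if (c == 'E') Mask |= (1u << Pos);
        --Pos;
      }
      Mask |= (1u << Pos); // stop bit
      return Mask;
    }

    int main() {
      assert(itMaskToy("") == 8);   // a lone conditional instruction
      assert(itMaskToy("TE") == 6); // ITTE: bit 2 for the E, stop at bit 1
      return 0;
    }
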
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.cpp
index 264601b..16c1e6f 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.cpp
@@ -1,4 +1,4 @@
-//===- Thumb2InstrInfo.cpp - Thumb-2 Instruction Information --------*- C++ -*-===//
+//===- Thumb2InstrInfo.cpp - Thumb-2 Instruction Information ----*- C++ -*-===//
 //
 //                     The LLVM Compiler Infrastructure
 //
@@ -11,19 +11,23 @@
 //
 //===----------------------------------------------------------------------===//
 
-#include "ARMInstrInfo.h"
+#include "Thumb2InstrInfo.h"
 #include "ARM.h"
+#include "ARMConstantPoolValue.h"
 #include "ARMAddressingModes.h"
 #include "ARMGenInstrInfo.inc"
 #include "ARMMachineFunctionInfo.h"
 #include "llvm/CodeGen/MachineFrameInfo.h"
 #include "llvm/CodeGen/MachineInstrBuilder.h"
+#include "llvm/CodeGen/MachineMemOperand.h"
+#include "llvm/CodeGen/PseudoSourceValue.h"
 #include "llvm/ADT/SmallVector.h"
 #include "Thumb2InstrInfo.h"
 
 using namespace llvm;
 
-Thumb2InstrInfo::Thumb2InstrInfo(const ARMSubtarget &STI) : RI(*this, STI) {
+Thumb2InstrInfo::Thumb2InstrInfo(const ARMSubtarget &STI)
+  : ARMBaseInstrInfo(STI), RI(*this, STI) {
 }
 
 unsigned Thumb2InstrInfo::getUnindexedOpcode(unsigned Opc) const {
@@ -46,6 +50,7 @@ Thumb2InstrInfo::BlockHasNoFallThrough(const MachineBasicBlock &MBB) const {
   case ARM::tBX_RET_vararg:
   case ARM::tPOP_RET:
   case ARM::tB:
+  case ARM::tBRIND:
     return true;
   default:
     break;
@@ -89,9 +94,16 @@ storeRegToStackSlot(MachineBasicBlock &MBB, MachineBasicBlock::iterator I,
   if (I != MBB.end()) DL = I->getDebugLoc();
 
   if (RC == ARM::GPRRegisterClass) {
+    MachineFunction &MF = *MBB.getParent();
+    MachineFrameInfo &MFI = *MF.getFrameInfo();
+    MachineMemOperand *MMO =
+      MF.getMachineMemOperand(PseudoSourceValue::getFixedStack(FI),
+                              MachineMemOperand::MOStore, 0,
+                              MFI.getObjectSize(FI),
+                              MFI.getObjectAlignment(FI));
     AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::t2STRi12))
                    .addReg(SrcReg, getKillRegState(isKill))
-                   .addFrameIndex(FI).addImm(0));
+                   .addFrameIndex(FI).addImm(0).addMemOperand(MMO));
     return;
   }
 
@@ -106,15 +118,21 @@ loadRegFromStackSlot(MachineBasicBlock &MBB, MachineBasicBlock::iterator I,
   if (I != MBB.end()) DL = I->getDebugLoc();
 
   if (RC == ARM::GPRRegisterClass) {
+    MachineFunction &MF = *MBB.getParent();
+    MachineFrameInfo &MFI = *MF.getFrameInfo();
+    MachineMemOperand *MMO =
+      MF.getMachineMemOperand(PseudoSourceValue::getFixedStack(FI),
+                              MachineMemOperand::MOLoad, 0,
+                              MFI.getObjectSize(FI),
+                              MFI.getObjectAlignment(FI));
     AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::t2LDRi12), DestReg)
-                   .addFrameIndex(FI).addImm(0));
+                   .addFrameIndex(FI).addImm(0).addMemOperand(MMO));
     return;
   }
 
   ARMBaseInstrInfo::loadRegFromStackSlot(MBB, I, DestReg, FI, RC);
 }
 
-
 void llvm::emitT2RegPlusImmediate(MachineBasicBlock &MBB,
                                MachineBasicBlock::iterator &MBBI, DebugLoc dl,
                                unsigned DestReg, unsigned BaseReg, int NumBytes,
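
The new MachineMemOperands on the t2 spill and reload above record that each
access touches a specific fixed stack slot, which lets later passes
disambiguate memory references. A toy model (not LLVM code) of why that
matters:

    #include <cassert>

    struct FixedSlot { int FrameIndex; unsigned Size; };

    // Two accesses to fixed stack slots can only alias if they name the
    // same frame index: distinct slots are disjoint objects.
    static bool mayAlias(const FixedSlot &A, const FixedSlot &B) {
      return A.FrameIndex == B.FrameIndex;
    }

    int main() {
      FixedSlot S0{0, 4}, S1{1, 4};
      assert(!mayAlias(S0, S1)); // a reload from S1 may move past a spill to S0
      assert(mayAlias(S0, S0));
      return 0;
    }
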
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.h b/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.h
index f3688c0..663a60b 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.h
@@ -1,4 +1,4 @@
-//===- Thumb2InstrInfo.h - Thumb-2 Instruction Information ----------*- C++ -*-===//
+//===- Thumb2InstrInfo.h - Thumb-2 Instruction Information ------*- C++ -*-===//
 //
 //                     The LLVM Compiler Infrastructure
 //
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb2RegisterInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/Thumb2RegisterInfo.cpp
index 6c4c15d..f24d3e2 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb2RegisterInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb2RegisterInfo.cpp
@@ -1,4 +1,4 @@
-//===- Thumb2RegisterInfo.cpp - Thumb-2 Register Information -------*- C++ -*-===//
+//===- Thumb2RegisterInfo.cpp - Thumb-2 Register Information ----*- C++ -*-===//
 //
 //                     The LLVM Compiler Infrastructure
 //
@@ -7,7 +7,8 @@
 //
 //===----------------------------------------------------------------------===//
 //
-// This file contains the Thumb-2 implementation of the TargetRegisterInfo class.
+// This file contains the Thumb-2 implementation of the TargetRegisterInfo
+// class.
 //
 //===----------------------------------------------------------------------===//
 
@@ -32,7 +33,6 @@
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/ADT/BitVector.h"
 #include "llvm/ADT/SmallVector.h"
-#include "llvm/Support/CommandLine.h"
 #include "llvm/Support/ErrorHandling.h"
 using namespace llvm;
 
@@ -60,8 +60,3 @@ void Thumb2RegisterInfo::emitLoadConstPool(MachineBasicBlock &MBB,
     .addReg(DestReg, getDefRegState(true), SubIdx)
     .addConstantPoolIndex(Idx).addImm((int64_t)ARMCC::AL).addReg(0);
 }
-
-bool Thumb2RegisterInfo::
-requiresRegisterScavenging(const MachineFunction &MF) const {
-  return true;
-}
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb2RegisterInfo.h b/libclamav/c++/llvm/lib/Target/ARM/Thumb2RegisterInfo.h
index a63c60b..b3cf2e5 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb2RegisterInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb2RegisterInfo.h
@@ -1,4 +1,4 @@
-//===- Thumb2RegisterInfo.h - Thumb-2 Register Information Impl ----*- C++ -*-===//
+//===- Thumb2RegisterInfo.h - Thumb-2 Register Information Impl -*- C++ -*-===//
 //
 //                     The LLVM Compiler Infrastructure
 //
@@ -7,7 +7,8 @@
 //
 //===----------------------------------------------------------------------===//
 //
-// This file contains the Thumb-2 implementation of the TargetRegisterInfo class.
+// This file contains the Thumb-2 implementation of the TargetRegisterInfo
+// class.
 //
 //===----------------------------------------------------------------------===//
 
@@ -35,8 +36,6 @@ public:
                          unsigned DestReg, unsigned SubIdx, int Val,
                          ARMCC::CondCodes Pred = ARMCC::AL,
                          unsigned PredReg = 0) const;
-
-  bool requiresRegisterScavenging(const MachineFunction &MF) const;
 };
 }
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb2SizeReduction.cpp b/libclamav/c++/llvm/lib/Target/ARM/Thumb2SizeReduction.cpp
index b8879d2..b2fd7b3 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb2SizeReduction.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb2SizeReduction.cpp
@@ -17,7 +17,6 @@
 #include "llvm/CodeGen/MachineInstrBuilder.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
 #include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/ADT/DenseMap.h"
@@ -79,7 +78,7 @@ namespace {
     { ARM::t2LSRri, ARM::tLSRri,  0,             5,   0,    1,   0,  0,0, 0 },
     { ARM::t2LSRrr, 0,            ARM::tLSRrr,   0,   0,    0,   1,  0,0, 0 },
     { ARM::t2MOVi,  ARM::tMOVi8,  0,             8,   0,    1,   0,  0,0, 0 },
-    { ARM::t2MOVi16,ARM::tMOVi8,  0,             8,   0,    1,   0,  0,0, 0 },
+    { ARM::t2MOVi16,ARM::tMOVi8,  0,             8,   0,    1,   0,  0,0, 1 },
     // FIXME: Do we need the 16-bit 'S' variant?
     { ARM::t2MOVr,ARM::tMOVgpr2gpr,0,            0,   0,    0,   0,  1,0, 0 },
     { ARM::t2MOVCCr,0,            ARM::tMOVCCr,  0,   0,    0,   0,  0,1, 0 },
@@ -106,7 +105,7 @@ namespace {
 
     // FIXME: Clean this up after splitting each Thumb load / store opcode
     // into multiple ones.
-    { ARM::t2LDRi12,ARM::tLDR,    0,             5,   0,    1,   0,  0,0, 1 },
+    { ARM::t2LDRi12,ARM::tLDR,    ARM::tLDRspi,  5,   8,    1,   0,  0,0, 1 },
     { ARM::t2LDRs,  ARM::tLDR,    0,             0,   0,    1,   0,  0,0, 1 },
     { ARM::t2LDRBi12,ARM::tLDRB,  0,             5,   0,    1,   0,  0,0, 1 },
     { ARM::t2LDRBs, ARM::tLDRB,   0,             0,   0,    1,   0,  0,0, 1 },
@@ -114,7 +113,7 @@ namespace {
     { ARM::t2LDRHs, ARM::tLDRH,   0,             0,   0,    1,   0,  0,0, 1 },
     { ARM::t2LDRSBs,ARM::tLDRSB,  0,             0,   0,    1,   0,  0,0, 1 },
     { ARM::t2LDRSHs,ARM::tLDRSH,  0,             0,   0,    1,   0,  0,0, 1 },
-    { ARM::t2STRi12,ARM::tSTR,    0,             5,   0,    1,   0,  0,0, 1 },
+    { ARM::t2STRi12,ARM::tSTR,    ARM::tSTRspi,  5,   8,    1,   0,  0,0, 1 },
     { ARM::t2STRs,  ARM::tSTR,    0,             0,   0,    1,   0,  0,0, 1 },
     { ARM::t2STRBi12,ARM::tSTRB,  0,             5,   0,    1,   0,  0,0, 1 },
     { ARM::t2STRBs, ARM::tSTRB,   0,             0,   0,    1,   0,  0,0, 1 },
@@ -126,7 +125,7 @@ namespace {
     { ARM::t2STM,   ARM::tSTM,    ARM::tPUSH,    0,   0,    1,   1,  1,1, 1 },
   };
 
-  class VISIBILITY_HIDDEN Thumb2SizeReduce : public MachineFunctionPass {
+  class Thumb2SizeReduce : public MachineFunctionPass {
   public:
     static char ID;
     Thumb2SizeReduce();
@@ -245,8 +244,13 @@ static bool VerifyLowRegs(MachineInstr *MI) {
       continue;
     if (isLROk && Reg == ARM::LR)
       continue;
-    if (isSPOk && Reg == ARM::SP)
-      continue;
+    if (Reg == ARM::SP) {
+      if (isSPOk)
+        continue;
+      if (i == 1 && (Opc == ARM::t2LDRi12 || Opc == ARM::t2STRi12))
+        // Special case for these ldr / str with sp as base register.
+        continue;
+    }
     if (!isARMLowRegister(Reg))
       return false;
   }
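
A self-contained sketch of the operand check the hunk above relaxes (the
register numbering here is illustrative, not LLVM's): 16-bit Thumb encodings
have 3-bit register fields, so operands normally must be one of r0-r7, but
SP is now tolerated as the base of a t2LDRi12 / t2STRi12 because those can
shrink to the SP-relative tLDRspi / tSTRspi forms.

    #include <cassert>

    enum ToyReg { R0, R1, R2, R3, R4, R5, R6, R7, R8, SP = 13 };

    static bool isLowRegister(unsigned R) { return R <= R7; }

    static bool operandOkForNarrowing(unsigned R, bool isBaseOfLdStI12) {
      if (R == SP)
        return isBaseOfLdStI12; // only as the base of these ldr / str
      return isLowRegister(R);
    }

    int main() {
      assert(operandOkForNarrowing(R2, false));
      assert(!operandOkForNarrowing(R8, false));
      assert(operandOkForNarrowing(SP, true));
      assert(!operandOkForNarrowing(SP, false));
      return 0;
    }
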
@@ -262,17 +266,26 @@ Thumb2SizeReduce::ReduceLoadStore(MachineBasicBlock &MBB, MachineInstr *MI,
   unsigned Scale = 1;
   bool HasImmOffset = false;
   bool HasShift = false;
+  bool HasOffReg = true;
   bool isLdStMul = false;
   unsigned Opc = Entry.NarrowOpc1;
   unsigned OpNum = 3; // First 'rest' of operands.
+  uint8_t  ImmLimit = Entry.Imm1Limit;
   switch (Entry.WideOpc) {
   default:
     llvm_unreachable("Unexpected Thumb2 load / store opcode!");
   case ARM::t2LDRi12:
-  case ARM::t2STRi12:
+  case ARM::t2STRi12: {
+    unsigned BaseReg = MI->getOperand(1).getReg();
+    if (BaseReg == ARM::SP) {
+      Opc = Entry.NarrowOpc2;
+      ImmLimit = Entry.Imm2Limit;
+      HasOffReg = false;
+    }
     Scale = 4;
     HasImmOffset = true;
     break;
+  }
   case ARM::t2LDRBi12:
   case ARM::t2STRBi12:
     HasImmOffset = true;
@@ -326,7 +339,7 @@ Thumb2SizeReduce::ReduceLoadStore(MachineBasicBlock &MBB, MachineInstr *MI,
   unsigned OffsetImm = 0;
   if (HasImmOffset) {
     OffsetImm = MI->getOperand(2).getImm();
-    unsigned MaxOffset = ((1 << Entry.Imm1Limit) - 1) * Scale;
+    unsigned MaxOffset = ((1 << ImmLimit) - 1) * Scale;
     if ((OffsetImm & (Scale-1)) || OffsetImm > MaxOffset)
       // Make sure the immediate field fits.
       return false;
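
With the limits from the table above, the immediate check works out as
follows: tLDR / tSTR take a 5-bit word offset (0-124 bytes) while the
SP-relative tLDRspi / tSTRspi take an 8-bit word offset (0-1020 bytes), and
both require 4-byte alignment. A self-contained check of the arithmetic:

    #include <cassert>

    static bool immFits(unsigned OffsetImm, unsigned ImmLimit, unsigned Scale) {
      unsigned MaxOffset = ((1u << ImmLimit) - 1) * Scale;
      return !(OffsetImm & (Scale - 1)) && OffsetImm <= MaxOffset;
    }

    int main() {
      assert(immFits(124, 5, 4));   // fits tLDR's imm5 * 4
      assert(!immFits(128, 5, 4));  // too big for tLDR...
      assert(immFits(128, 8, 4));   // ...but fine for tLDRspi's imm8 * 4
      assert(!immFits(1024, 8, 4)); // beyond even tLDRspi
      assert(!immFits(2, 8, 4));    // not word-aligned
      return 0;
    }
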
@@ -338,7 +351,7 @@ Thumb2SizeReduce::ReduceLoadStore(MachineBasicBlock &MBB, MachineInstr *MI,
   MachineInstrBuilder MIB = BuildMI(MBB, *MI, dl, TII->get(Opc));
   if (!isLdStMul) {
     MIB.addOperand(MI->getOperand(0)).addOperand(MI->getOperand(1));
-    if (Entry.NarrowOpc1 != ARM::tLDRSB && Entry.NarrowOpc1 != ARM::tLDRSH) {
+    if (Opc != ARM::tLDRSB && Opc != ARM::tLDRSH) {
       // tLDRSB and tLDRSH do not have an immediate offset field. On the other
       // hand, they must have an offset register.
       // FIXME: Remove this special case.
@@ -346,13 +359,17 @@ Thumb2SizeReduce::ReduceLoadStore(MachineBasicBlock &MBB, MachineInstr *MI,
     }
     assert((!HasShift || OffsetReg) && "Invalid so_reg load / store address!");
 
-    MIB.addReg(OffsetReg, getKillRegState(OffsetKill));
+    if (HasOffReg)
+      MIB.addReg(OffsetReg, getKillRegState(OffsetKill));
   }
 
   // Transfer the rest of operands.
   for (unsigned e = MI->getNumOperands(); OpNum != e; ++OpNum)
     MIB.addOperand(MI->getOperand(OpNum));
 
+  // Transfer memoperands.
+  (*MIB).setMemRefs(MI->memoperands_begin(), MI->memoperands_end());
+
   DEBUG(errs() << "Converted 32-bit: " << *MI << "       to 16-bit: " << *MIB);
 
   MBB.erase(MI);
@@ -396,6 +413,12 @@ Thumb2SizeReduce::ReduceSpecial(MachineBasicBlock &MBB, MachineInstr *MI,
     if (MI->getOperand(2).getImm() == 0)
       return ReduceToNarrow(MBB, MI, Entry, LiveCPSR);
     break;
+  case ARM::t2MOVi16:
+    // Can convert only 'pure' immediate operands, not immediates obtained as
+    // globals' addresses.
+    if (MI->getOperand(1).isImm())
+      return ReduceToNarrow(MBB, MI, Entry, LiveCPSR);
+    break;
   }
   return false;
 }
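
A standalone sketch of the new t2MOVi16 case (the 8-bit limit comes from the
tMOVi8 row in the table above; the struct here is a toy, not LLVM's operand
type): narrowing is possible only when the operand is a plain integer small
enough for tMOVi8, never when it is a relocation such as the low half of a
global's address.

    #include <cassert>

    struct ToyOperand { bool isImm; unsigned Imm; };

    static bool canNarrowMOVi16(const ToyOperand &Op) {
      return Op.isImm && Op.Imm <= 255; // tMOVi8's 8-bit immediate
    }

    int main() {
      assert(canNarrowMOVi16({true, 200}));
      assert(!canNarrowMOVi16({true, 300}));
      assert(!canNarrowMOVi16({false, 0})); // e.g. :lower16:some_global
      return 0;
    }
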
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/AsmPrinter/PPCAsmPrinter.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/AsmPrinter/PPCAsmPrinter.cpp
index 750cec9..aae4607 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/AsmPrinter/PPCAsmPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/AsmPrinter/PPCAsmPrinter.cpp
@@ -45,7 +45,6 @@
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/FormattedStream.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/StringExtras.h"
@@ -55,7 +54,7 @@ using namespace llvm;
 STATISTIC(EmittedInsts, "Number of machine instrs printed");
 
 namespace {
-  class VISIBILITY_HIDDEN PPCAsmPrinter : public AsmPrinter {
+  class PPCAsmPrinter : public AsmPrinter {
   protected:
     struct FnStubInfo {
       std::string Stub, LazyPtr, AnonSymbol;
@@ -344,7 +343,7 @@ namespace {
   };
 
   /// PPCLinuxAsmPrinter - PowerPC assembly printer, customized for Linux
-  class VISIBILITY_HIDDEN PPCLinuxAsmPrinter : public PPCAsmPrinter {
+  class PPCLinuxAsmPrinter : public PPCAsmPrinter {
   public:
     explicit PPCLinuxAsmPrinter(formatted_raw_ostream &O, TargetMachine &TM,
                                 const MCAsmInfo *T, bool V)
@@ -369,7 +368,7 @@ namespace {
 
   /// PPCDarwinAsmPrinter - PowerPC assembly printer, customized for Darwin/Mac
   /// OS X
-  class VISIBILITY_HIDDEN PPCDarwinAsmPrinter : public PPCAsmPrinter {
+  class PPCDarwinAsmPrinter : public PPCAsmPrinter {
     formatted_raw_ostream &OS;
   public:
     explicit PPCDarwinAsmPrinter(formatted_raw_ostream &O, TargetMachine &TM,
@@ -415,6 +414,9 @@ void PPCAsmPrinter::printOp(const MachineOperand &MO) {
     O << MAI->getPrivateGlobalPrefix() << "CPI" << getFunctionNumber()
       << '_' << MO.getIndex();
     return;
+  case MachineOperand::MO_BlockAddress:
+    GetBlockAddressSymbol(MO.getBlockAddress())->print(O, MAI);
+    return;
   case MachineOperand::MO_ExternalSymbol: {
     // Computing the address of an external symbol, not calling it.
     std::string Name(MAI->getGlobalPrefix());
@@ -545,7 +547,7 @@ void PPCAsmPrinter::printPredicateOperand(const MachineInstr *MI, unsigned OpNo,
 void PPCAsmPrinter::printMachineInstruction(const MachineInstr *MI) {
   ++EmittedInsts;
   
-  processDebugLoc(MI);
+  processDebugLoc(MI, true);
 
   // Check for slwi/srwi mnemonics.
   if (MI->getOpcode() == PPC::RLWINM) {
@@ -592,9 +594,11 @@ void PPCAsmPrinter::printMachineInstruction(const MachineInstr *MI) {
 
   printInstruction(MI);
   
-  if (VerboseAsm && !MI->getDebugLoc().isUnknown())
+  if (VerboseAsm)
     EmitComments(*MI);
   O << '\n';
+
+  processDebugLoc(MI, false);
 }
 
 /// runOnMachineFunction - This uses the printMachineInstruction()
@@ -658,7 +662,6 @@ bool PPCLinuxAsmPrinter::runOnMachineFunction(MachineFunction &MF) {
     // Print a label for the basic block.
     if (I != MF.begin()) {
       EmitBasicBlockStart(I);
-      O << '\n';
     }
     for (MachineBasicBlock::const_iterator II = I->begin(), E = I->end();
          II != E; ++II) {
@@ -669,14 +672,14 @@ bool PPCLinuxAsmPrinter::runOnMachineFunction(MachineFunction &MF) {
 
   O << "\t.size\t" << CurrentFnName << ",.-" << CurrentFnName << '\n';
 
-  // Print out jump tables referenced by the function.
-  EmitJumpTableInfo(MF.getJumpTableInfo(), MF);
-
   OutStreamer.SwitchSection(getObjFileLowering().SectionForGlobal(F, Mang, TM));
 
   // Emit post-function debug information.
   DW->EndFunction(&MF);
 
+  // Print out jump tables referenced by the function.
+  EmitJumpTableInfo(MF.getJumpTableInfo(), MF);
+
   // We didn't modify anything.
   return false;
 }
@@ -842,7 +845,6 @@ bool PPCDarwinAsmPrinter::runOnMachineFunction(MachineFunction &MF) {
     // Print a label for the basic block.
     if (I != MF.begin()) {
       EmitBasicBlockStart(I);
-      O << '\n';
     }
     for (MachineBasicBlock::const_iterator II = I->begin(), IE = I->end();
          II != IE; ++II) {
@@ -851,12 +853,12 @@ bool PPCDarwinAsmPrinter::runOnMachineFunction(MachineFunction &MF) {
     }
   }
 
-  // Print out jump tables referenced by the function.
-  EmitJumpTableInfo(MF.getJumpTableInfo(), MF);
-
   // Emit post-function debug information.
   DW->EndFunction(&MF);
 
+  // Print out jump tables referenced by the function.
+  EmitJumpTableInfo(MF.getJumpTableInfo(), MF);
+
   // We didn't modify anything.
   return false;
 }
@@ -1129,7 +1131,7 @@ bool PPCDarwinAsmPrinter::doFinalization(Module &M) {
   // implementation of multiple entry points).  If this doesn't occur, the
   // linker can safely perform dead code stripping.  Since LLVM never generates
   // code that does this, it is always safe to set.
-  O << "\t.subsections_via_symbols\n";
+  OutStreamer.EmitAssemblerFlag(MCStreamer::SubsectionsViaSymbols);
 
   return AsmPrinter::doFinalization(M);
 }
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCBranchSelector.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCBranchSelector.cpp
index b95a502..a752421 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCBranchSelector.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCBranchSelector.cpp
@@ -23,14 +23,13 @@
 #include "llvm/CodeGen/MachineFunctionPass.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/MathExtras.h"
 using namespace llvm;
 
 STATISTIC(NumExpanded, "Number of branches expanded to long format");
 
 namespace {
-  struct VISIBILITY_HIDDEN PPCBSel : public MachineFunctionPass {
+  struct PPCBSel : public MachineFunctionPass {
     static char ID;
     PPCBSel() : MachineFunctionPass(&ID) {}
 
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCCodeEmitter.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCCodeEmitter.cpp
index 16d55a3..da9ea36 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCCodeEmitter.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCCodeEmitter.cpp
@@ -25,7 +25,6 @@
 #include "llvm/CodeGen/MachineModuleInfo.h"
 #include "llvm/CodeGen/Passes.h"
 #include "llvm/Support/Debug.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Target/TargetOptions.h"
@@ -57,8 +56,7 @@ namespace {
   };
 
   template <class CodeEmitter>
-  class VISIBILITY_HIDDEN Emitter : public MachineFunctionPass,
-      public PPCCodeEmitter {
+  class Emitter : public MachineFunctionPass, public PPCCodeEmitter {
     TargetMachine &TM;
     CodeEmitter &MCE;
 
@@ -132,7 +130,7 @@ void Emitter<CodeEmitter>::emitBasicBlock(MachineBasicBlock &MBB) {
 
   for (MachineBasicBlock::iterator I = MBB.begin(), E = MBB.end(); I != E; ++I){
     const MachineInstr &MI = *I;
-    MCE.processDebugLoc(MI.getDebugLoc());
+    MCE.processDebugLoc(MI.getDebugLoc(), true);
     switch (MI.getOpcode()) {
     default:
       MCE.emitWordBE(getBinaryCodeForInstr(MI));
@@ -151,6 +149,7 @@ void Emitter<CodeEmitter>::emitBasicBlock(MachineBasicBlock &MBB) {
       MCE.emitWordBE(0x48000005);   // bl 1
       break;
     }
+    MCE.processDebugLoc(MI.getDebugLoc(), false);
   }
 }
 
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCFrameInfo.h b/libclamav/c++/llvm/lib/Target/PowerPC/PPCFrameInfo.h
index 65f113e..73d30bf 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCFrameInfo.h
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCFrameInfo.h
@@ -42,11 +42,12 @@ public:
   /// frame pointer.
   static unsigned getFramePointerSaveOffset(bool isPPC64, bool isDarwinABI) {
     // For the Darwin ABI:
-    // Use the TOC save slot in the PowerPC linkage area for saving the frame
-    // pointer (if needed.)  LLVM does not generate code that uses the TOC (R2
-    // is treated as a caller saved register.)
+    // We cannot use the TOC save slot (offset +20) in the PowerPC linkage area
+    // for saving the frame pointer (if needed).  While the published ABI has
+    // not used this slot since at least MacOSX 10.2, there is older code
+    // around that does use it, and that needs to continue to work.
     if (isDarwinABI)
-      return isPPC64 ? 40 : 20;
+      return isPPC64 ? -8U : -4U;
     
     // SVR4 ABI: First slot in the general register save area.
     return -4U;
@@ -90,6 +91,17 @@ public:
   // With the SVR4 ABI, callee-saved registers have fixed offsets on the stack.
   const SpillSlot *
   getCalleeSavedSpillSlots(unsigned &NumEntries) const {
+    if (TM.getSubtarget<PPCSubtarget>().isDarwinABI()) {
+      NumEntries = 1;
+      if (TM.getSubtarget<PPCSubtarget>().isPPC64()) {
+        static const SpillSlot darwin64Offsets = {PPC::X31, -8};
+        return &darwin64Offsets;
+      } else {
+        static const SpillSlot darwinOffsets = {PPC::R31, -4};
+        return &darwinOffsets;
+      }
+    }
+
     // Early exit if not using the SVR4 ABI.
     if (!TM.getSubtarget<PPCSubtarget>().isSVR4ABI()) {
       NumEntries = 0;
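
The new spill slots agree with the revised getFramePointerSaveOffset above;
a self-contained restatement of the correspondence (offsets in bytes,
relative to the incoming stack pointer):

    #include <cassert>

    static int fpSaveOffset(bool isPPC64, bool isDarwinABI) {
      if (isDarwinABI)
        return isPPC64 ? -8 : -4; // no longer the +40 / +20 TOC slot
      return -4; // SVR4: first slot in the general register save area
    }

    int main() {
      // Darwin pins X31 at -8 (64-bit) and R31 at -4 (32-bit), matching the
      // SpillSlot entries returned above.
      assert(fpSaveOffset(true, true) == -8);
      assert(fpSaveOffset(false, true) == -4);
      assert(fpSaveOffset(false, false) == -4);
      return 0;
    }
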
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelDAGToDAG.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelDAGToDAG.cpp
index 8fa6a66..e7334b5 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelDAGToDAG.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelDAGToDAG.cpp
@@ -31,7 +31,6 @@
 #include "llvm/Intrinsics.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/MathExtras.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
@@ -41,7 +40,7 @@ namespace {
   /// PPCDAGToDAGISel - PPC specific code to select PPC machine
   /// instructions for SelectionDAG operations.
   ///
-  class VISIBILITY_HIDDEN PPCDAGToDAGISel : public SelectionDAGISel {
+  class PPCDAGToDAGISel : public SelectionDAGISel {
     PPCTargetMachine &TM;
     PPCTargetLowering &PPCLowering;
     const PPCSubtarget &PPCSubTarget;
@@ -87,7 +86,7 @@ namespace {
 
     /// isRotateAndMask - Returns true if Mask and Shift can be folded into a
     /// rotate and mask opcode and mask operation.
-    static bool isRotateAndMask(SDNode *N, unsigned Mask, bool IsShiftMask,
+    static bool isRotateAndMask(SDNode *N, unsigned Mask, bool isShiftMask,
                                 unsigned &SH, unsigned &MB, unsigned &ME);
     
     /// getGlobalBaseReg - insert code into the entry mbb to materialize the PIC
@@ -188,8 +187,6 @@ private:
 /// InstructionSelect - This callback is invoked by
 /// SelectionDAGISel when it has created a SelectionDAG for us to codegen.
 void PPCDAGToDAGISel::InstructionSelect() {
-  DEBUG(BB->dump());
-
   // Select target instructions for the DAG.
   SelectRoot(*CurDAG);
   CurDAG->RemoveDeadNodes();
@@ -361,7 +358,7 @@ bool PPCDAGToDAGISel::isRunOfOnes(unsigned Val, unsigned &MB, unsigned &ME) {
 }
 
 bool PPCDAGToDAGISel::isRotateAndMask(SDNode *N, unsigned Mask, 
-                                      bool IsShiftMask, unsigned &SH, 
+                                      bool isShiftMask, unsigned &SH, 
                                       unsigned &MB, unsigned &ME) {
   // Don't even go down this path for i64, since different logic will be
   // necessary for rldicl/rldicr/rldimi.
@@ -377,12 +374,12 @@ bool PPCDAGToDAGISel::isRotateAndMask(SDNode *N, unsigned Mask,
   
   if (Opcode == ISD::SHL) {
     // apply shift left to mask if it comes first
-    if (IsShiftMask) Mask = Mask << Shift;
+    if (isShiftMask) Mask = Mask << Shift;
     // determine which bits are made indeterminant by shift
     Indeterminant = ~(0xFFFFFFFFu << Shift);
   } else if (Opcode == ISD::SRL) { 
     // apply shift right to mask if it comes first
-    if (IsShiftMask) Mask = Mask >> Shift;
+    if (isShiftMask) Mask = Mask >> Shift;
     // determine which bits are made indeterminant by shift
     Indeterminant = ~(0xFFFFFFFFu >> Shift);
     // adjust for the left rotate
@@ -446,8 +443,7 @@ SDNode *PPCDAGToDAGISel::SelectBitfieldInsert(SDNode *N) {
     
     unsigned MB, ME;
     if (InsertMask && isRunOfOnes(InsertMask, MB, ME)) {
-      SDValue Tmp1, Tmp2, Tmp3;
-      bool DisjointMask = (TargetMask ^ InsertMask) == 0xFFFFFFFF;
+      SDValue Tmp1, Tmp2;
 
       if ((Op1Opc == ISD::SHL || Op1Opc == ISD::SRL) &&
           isInt32Immediate(Op1.getOperand(1), Value)) {
@@ -464,10 +460,9 @@ SDNode *PPCDAGToDAGISel::SelectBitfieldInsert(SDNode *N) {
           Op1 = Op1.getOperand(0);
         }
       }
-      
-      Tmp3 = (Op0Opc == ISD::AND && DisjointMask) ? Op0.getOperand(0) : Op0;
+
       SH &= 31;
-      SDValue Ops[] = { Tmp3, Op1, getI32Imm(SH), getI32Imm(MB),
+      SDValue Ops[] = { Op0, Op1, getI32Imm(SH), getI32Imm(MB),
                           getI32Imm(ME) };
       return CurDAG->getMachineNode(PPC::RLWIMI, dl, MVT::i32, Ops, 5);
     }
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
index 3920b38..30a7861 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
@@ -182,10 +182,6 @@ PPCTargetLowering::PPCTargetLowering(PPCTargetMachine &TM)
   // We cannot sextinreg(i1).  Expand to shifts.
   setOperationAction(ISD::SIGN_EXTEND_INREG, MVT::i1, Expand);
 
-  // Support label based line numbers.
-  setOperationAction(ISD::DBG_STOPPOINT, MVT::Other, Expand);
-  setOperationAction(ISD::DEBUG_LOC, MVT::Other, Expand);
-
   setOperationAction(ISD::EXCEPTIONADDR, MVT::i64, Expand);
   setOperationAction(ISD::EHSELECTION,   MVT::i64, Expand);
   setOperationAction(ISD::EXCEPTIONADDR, MVT::i32, Expand);
@@ -196,10 +192,12 @@ PPCTargetLowering::PPCTargetLowering(PPCTargetMachine &TM)
   // appropriate instructions to materialize the address.
   setOperationAction(ISD::GlobalAddress, MVT::i32, Custom);
   setOperationAction(ISD::GlobalTLSAddress, MVT::i32, Custom);
+  setOperationAction(ISD::BlockAddress,  MVT::i32, Custom);
   setOperationAction(ISD::ConstantPool,  MVT::i32, Custom);
   setOperationAction(ISD::JumpTable,     MVT::i32, Custom);
   setOperationAction(ISD::GlobalAddress, MVT::i64, Custom);
   setOperationAction(ISD::GlobalTLSAddress, MVT::i64, Custom);
+  setOperationAction(ISD::BlockAddress,  MVT::i64, Custom);
   setOperationAction(ISD::ConstantPool,  MVT::i64, Custom);
   setOperationAction(ISD::JumpTable,     MVT::i64, Custom);
 
@@ -635,7 +633,7 @@ bool PPC::isAllNegativeZeroVector(SDNode *N) {
   unsigned BitSize;
   bool HasAnyUndefs;
   
-  if (BV->isConstantSplat(APVal, APUndef, BitSize, HasAnyUndefs, 32))
+  if (BV->isConstantSplat(APVal, APUndef, BitSize, HasAnyUndefs, 32, true))
     if (ConstantFPSDNode *CFP = dyn_cast<ConstantFPSDNode>(N->getOperand(0)))
       return CFP->getValueAPF().isNegZero();
 
@@ -1167,6 +1165,36 @@ SDValue PPCTargetLowering::LowerGlobalTLSAddress(SDValue Op,
   return SDValue(); // Not reached
 }
 
+SDValue PPCTargetLowering::LowerBlockAddress(SDValue Op, SelectionDAG &DAG) {
+  EVT PtrVT = Op.getValueType();
+  DebugLoc DL = Op.getDebugLoc();
+
+  BlockAddress *BA = cast<BlockAddressSDNode>(Op)->getBlockAddress();
+  SDValue TgtBA = DAG.getBlockAddress(BA, PtrVT, /*isTarget=*/true);
+  SDValue Zero = DAG.getConstant(0, PtrVT);
+  SDValue Hi = DAG.getNode(PPCISD::Hi, DL, PtrVT, TgtBA, Zero);
+  SDValue Lo = DAG.getNode(PPCISD::Lo, DL, PtrVT, TgtBA, Zero);
+
+  // If this is a non-darwin platform, we don't support non-static relo models
+  // yet.
+  const TargetMachine &TM = DAG.getTarget();
+  if (TM.getRelocationModel() == Reloc::Static ||
+      !TM.getSubtarget<PPCSubtarget>().isDarwin()) {
+    // Generate non-pic code that has direct accesses to globals.
+    // The address of the global is just (hi(&g)+lo(&g)).
+    return DAG.getNode(ISD::ADD, DL, PtrVT, Hi, Lo);
+  }
+
+  if (TM.getRelocationModel() == Reloc::PIC_) {
+    // With PIC, the first instruction is actually "GR+hi(&G)".
+    Hi = DAG.getNode(ISD::ADD, DL, PtrVT,
+                     DAG.getNode(PPCISD::GlobalBaseReg,
+                                 DebugLoc::getUnknownLoc(), PtrVT), Hi);
+  }
+
+  return DAG.getNode(ISD::ADD, DL, PtrVT, Hi, Lo);
+}
+
 SDValue PPCTargetLowering::LowerGlobalAddress(SDValue Op,
                                               SelectionDAG &DAG) {
   EVT PtrVT = Op.getValueType();
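
The Hi/Lo pair built above is recombined with an add, so (as printed on
Darwin as ha16()/lo16() relocations) the high half has to be adjusted
whenever the low half is negative as a signed 16-bit value. A
self-contained sketch of that arithmetic:

    #include <cassert>
    #include <cstdint>
    #include <initializer_list>

    static uint16_t lo16(uint32_t a) { return static_cast<uint16_t>(a); }

    static uint16_t ha16(uint32_t a) {
      return static_cast<uint16_t>((a >> 16) + ((a & 0x8000u) ? 1 : 0));
    }

    int main() {
      for (uint32_t a : {0x00000000u, 0x00007FFFu, 0x12348000u, 0xDEADBEEFu}) {
        uint32_t rebuilt = (static_cast<uint32_t>(ha16(a)) << 16) +
                           static_cast<uint32_t>(static_cast<int16_t>(lo16(a)));
        assert(rebuilt == a);
      }
      return 0;
    }
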
@@ -1593,7 +1621,7 @@ PPCTargetLowering::LowerFormalArguments_SVR4(
 
       unsigned ArgSize = VA.getLocVT().getSizeInBits() / 8;
       int FI = MFI->CreateFixedObject(ArgSize, VA.getLocMemOffset(),
-                                      isImmutable);
+                                      isImmutable, false);
 
       // Create load nodes to retrieve arguments from the stack.
       SDValue FIN = DAG.getFrameIndex(FI, PtrVT);
@@ -1658,9 +1686,10 @@ PPCTargetLowering::LowerFormalArguments_SVR4(
                 NumFPArgRegs * EVT(MVT::f64).getSizeInBits()/8;
 
     VarArgsStackOffset = MFI->CreateFixedObject(PtrVT.getSizeInBits()/8,
-                                                CCInfo.getNextStackOffset());
+                                                CCInfo.getNextStackOffset(),
+                                                true, false);
 
-    VarArgsFrameIndex = MFI->CreateStackObject(Depth, 8);
+    VarArgsFrameIndex = MFI->CreateStackObject(Depth, 8, false);
     SDValue FIN = DAG.getFrameIndex(VarArgsFrameIndex, PtrVT);
 
     // The fixed integer arguments of a variadic function are
@@ -1863,7 +1892,7 @@ PPCTargetLowering::LowerFormalArguments_Darwin(
         CurArgOffset = CurArgOffset + (4 - ObjSize);
       }
       // The value of the object is its address.
-      int FI = MFI->CreateFixedObject(ObjSize, CurArgOffset);
+      int FI = MFI->CreateFixedObject(ObjSize, CurArgOffset, true, false);
       SDValue FIN = DAG.getFrameIndex(FI, PtrVT);
       InVals.push_back(FIN);
       if (ObjSize==1 || ObjSize==2) {
@@ -1886,7 +1915,7 @@ PPCTargetLowering::LowerFormalArguments_Darwin(
         // the object.
         if (GPR_idx != Num_GPR_Regs) {
           unsigned VReg = MF.addLiveIn(GPR[GPR_idx], &PPC::GPRCRegClass);
-          int FI = MFI->CreateFixedObject(PtrByteSize, ArgOffset);
+          int FI = MFI->CreateFixedObject(PtrByteSize, ArgOffset, true, false);
           SDValue FIN = DAG.getFrameIndex(FI, PtrVT);
           SDValue Val = DAG.getCopyFromReg(Chain, dl, VReg, PtrVT);
           SDValue Store = DAG.getStore(Val.getValue(1), dl, Val, FIN, NULL, 0);
@@ -2011,7 +2040,7 @@ PPCTargetLowering::LowerFormalArguments_Darwin(
     if (needsLoad) {
       int FI = MFI->CreateFixedObject(ObjSize,
                                       CurArgOffset + (ArgSize - ObjSize),
-                                      isImmutable);
+                                      isImmutable, false);
       SDValue FIN = DAG.getFrameIndex(FI, PtrVT);
       ArgVal = DAG.getLoad(ObjectVT, dl, Chain, FIN, NULL, 0);
     }
@@ -2044,7 +2073,7 @@ PPCTargetLowering::LowerFormalArguments_Darwin(
     int Depth = ArgOffset;
 
     VarArgsFrameIndex = MFI->CreateFixedObject(PtrVT.getSizeInBits()/8,
-                                               Depth);
+                                               Depth, true, false);
     SDValue FIN = DAG.getFrameIndex(VarArgsFrameIndex, PtrVT);
 
     // If this function is vararg, store any remaining integer argument regs
@@ -2144,10 +2173,10 @@ CalculateParameterAndLinkageAreaSize(SelectionDAG &DAG,
 
 /// CalculateTailCallSPDiff - Get the amount the stack pointer has to be
 /// adjusted to accommodate the arguments for the tailcall.
-static int CalculateTailCallSPDiff(SelectionDAG& DAG, bool IsTailCall,
+static int CalculateTailCallSPDiff(SelectionDAG& DAG, bool isTailCall,
                                    unsigned ParamSize) {
 
-  if (!IsTailCall) return 0;
+  if (!isTailCall) return 0;
 
   PPCFunctionInfo *FI = DAG.getMachineFunction().getInfo<PPCFunctionInfo>();
   unsigned CallerMinReservedArea = FI->getMinReservedArea();
@@ -2257,7 +2286,8 @@ static SDValue EmitTailCallStoreFPAndRetAddr(SelectionDAG &DAG,
     int NewRetAddrLoc = SPDiff + PPCFrameInfo::getReturnSaveOffset(isPPC64,
                                                                    isDarwinABI);
     int NewRetAddr = MF.getFrameInfo()->CreateFixedObject(SlotSize,
-                                                          NewRetAddrLoc);
+                                                          NewRetAddrLoc,
+                                                          true, false);
     EVT VT = isPPC64 ? MVT::i64 : MVT::i32;
     SDValue NewRetAddrFrIdx = DAG.getFrameIndex(NewRetAddr, VT);
     Chain = DAG.getStore(Chain, dl, OldRetAddr, NewRetAddrFrIdx,
@@ -2268,7 +2298,8 @@ static SDValue EmitTailCallStoreFPAndRetAddr(SelectionDAG &DAG,
     if (isDarwinABI) {
       int NewFPLoc =
         SPDiff + PPCFrameInfo::getFramePointerSaveOffset(isPPC64, isDarwinABI);
-      int NewFPIdx = MF.getFrameInfo()->CreateFixedObject(SlotSize, NewFPLoc);
+      int NewFPIdx = MF.getFrameInfo()->CreateFixedObject(SlotSize, NewFPLoc,
+                                                          true, false);
       SDValue NewFramePtrIdx = DAG.getFrameIndex(NewFPIdx, VT);
       Chain = DAG.getStore(Chain, dl, OldFP, NewFramePtrIdx,
                            PseudoSourceValue::getFixedStack(NewFPIdx), 0);
@@ -2285,7 +2316,7 @@ CalculateTailCallArgDest(SelectionDAG &DAG, MachineFunction &MF, bool isPPC64,
                       SmallVector<TailCallArgumentInfo, 8>& TailCallArguments) {
   int Offset = ArgOffset + SPDiff;
   uint32_t OpSize = (Arg.getValueType().getSizeInBits()+7)/8;
-  int FI = MF.getFrameInfo()->CreateFixedObject(OpSize, Offset);
+  int FI = MF.getFrameInfo()->CreateFixedObject(OpSize, Offset, true,false);
   EVT VT = isPPC64 ? MVT::i64 : MVT::i32;
   SDValue FIN = DAG.getFrameIndex(FI, VT);
   TailCallArgumentInfo Info;
@@ -3155,8 +3186,8 @@ SDValue PPCTargetLowering::LowerSTACKRESTORE(SDValue Op, SelectionDAG &DAG,
   EVT PtrVT = DAG.getTargetLoweringInfo().getPointerTy();
 
   // Construct the stack pointer operand.
-  bool IsPPC64 = Subtarget.isPPC64();
-  unsigned SP = IsPPC64 ? PPC::X1 : PPC::R1;
+  bool isPPC64 = Subtarget.isPPC64();
+  unsigned SP = isPPC64 ? PPC::X1 : PPC::R1;
   SDValue StackPtr = DAG.getRegister(SP, PtrVT);
 
   // Get the operands for the STACKRESTORE.
@@ -3178,7 +3209,7 @@ SDValue PPCTargetLowering::LowerSTACKRESTORE(SDValue Op, SelectionDAG &DAG,
 SDValue
 PPCTargetLowering::getReturnAddrFrameIndex(SelectionDAG & DAG) const {
   MachineFunction &MF = DAG.getMachineFunction();
-  bool IsPPC64 = PPCSubTarget.isPPC64();
+  bool isPPC64 = PPCSubTarget.isPPC64();
   bool isDarwinABI = PPCSubTarget.isDarwinABI();
   EVT PtrVT = DAG.getTargetLoweringInfo().getPointerTy();
 
@@ -3190,9 +3221,10 @@ PPCTargetLowering::getReturnAddrFrameIndex(SelectionDAG & DAG) const {
   // If the frame pointer save index hasn't been defined yet.
   if (!RASI) {
     // Find out the fixed offset of the return address save area.
-    int LROffset = PPCFrameInfo::getReturnSaveOffset(IsPPC64, isDarwinABI);
+    int LROffset = PPCFrameInfo::getReturnSaveOffset(isPPC64, isDarwinABI);
     // Allocate the frame index for the return address save area.
-    RASI = MF.getFrameInfo()->CreateFixedObject(IsPPC64? 8 : 4, LROffset);
+    RASI = MF.getFrameInfo()->CreateFixedObject(isPPC64? 8 : 4, LROffset,
+                                                true, false);
     // Save the result.
     FI->setReturnAddrSaveIndex(RASI);
   }
@@ -3202,7 +3234,7 @@ PPCTargetLowering::getReturnAddrFrameIndex(SelectionDAG & DAG) const {
 SDValue
 PPCTargetLowering::getFramePointerFrameIndex(SelectionDAG & DAG) const {
   MachineFunction &MF = DAG.getMachineFunction();
-  bool IsPPC64 = PPCSubTarget.isPPC64();
+  bool isPPC64 = PPCSubTarget.isPPC64();
   bool isDarwinABI = PPCSubTarget.isDarwinABI();
   EVT PtrVT = DAG.getTargetLoweringInfo().getPointerTy();
 
@@ -3214,11 +3246,12 @@ PPCTargetLowering::getFramePointerFrameIndex(SelectionDAG & DAG) const {
   // If the frame pointer save index hasn't been defined yet.
   if (!FPSI) {
     // Find out what the fixed offset of the frame pointer save area is.
-    int FPOffset = PPCFrameInfo::getFramePointerSaveOffset(IsPPC64,
+    int FPOffset = PPCFrameInfo::getFramePointerSaveOffset(isPPC64,
                                                            isDarwinABI);
 
     // Allocate the frame index for frame pointer save area.
-    FPSI = MF.getFrameInfo()->CreateFixedObject(IsPPC64? 8 : 4, FPOffset);
+    FPSI = MF.getFrameInfo()->CreateFixedObject(isPPC64? 8 : 4, FPOffset,
+                                                true, false);
     // Save the result.
     FI->setFramePointerSaveIndex(FPSI);
   }
@@ -3379,7 +3412,7 @@ SDValue PPCTargetLowering::LowerSINT_TO_FP(SDValue Op, SelectionDAG &DAG) {
   // then lfd it and fcfid it.
   MachineFunction &MF = DAG.getMachineFunction();
   MachineFrameInfo *FrameInfo = MF.getFrameInfo();
-  int FrameIdx = FrameInfo->CreateStackObject(8, 8);
+  int FrameIdx = FrameInfo->CreateStackObject(8, 8, false);
   EVT PtrVT = DAG.getTargetLoweringInfo().getPointerTy();
   SDValue FIdx = DAG.getFrameIndex(FrameIdx, PtrVT);
 
@@ -3437,7 +3470,7 @@ SDValue PPCTargetLowering::LowerFLT_ROUNDS_(SDValue Op, SelectionDAG &DAG) {
   SDValue Chain = DAG.getNode(PPCISD::MFFS, dl, NodeTys, &InFlag, 0);
 
   // Save FP register to stack slot
-  int SSFI = MF.getFrameInfo()->CreateStackObject(8, 8);
+  int SSFI = MF.getFrameInfo()->CreateStackObject(8, 8, false);
   SDValue StackSlot = DAG.getFrameIndex(SSFI, PtrVT);
   SDValue Store = DAG.getStore(DAG.getEntryNode(), dl, Chain,
                                  StackSlot, NULL, 0);
@@ -3635,7 +3668,7 @@ SDValue PPCTargetLowering::LowerBUILD_VECTOR(SDValue Op, SelectionDAG &DAG) {
   unsigned SplatBitSize;
   bool HasAnyUndefs;
   if (! BVN->isConstantSplat(APSplatBits, APSplatUndef, SplatBitSize,
-                             HasAnyUndefs) || SplatBitSize > 32)
+                             HasAnyUndefs, 0, true) || SplatBitSize > 32)
     return SDValue();
 
   unsigned SplatBits = APSplatBits.getZExtValue();
@@ -4105,7 +4138,7 @@ SDValue PPCTargetLowering::LowerSCALAR_TO_VECTOR(SDValue Op,
   DebugLoc dl = Op.getDebugLoc();
   // Create a stack slot that is 16-byte aligned.
   MachineFrameInfo *FrameInfo = DAG.getMachineFunction().getFrameInfo();
-  int FrameIdx = FrameInfo->CreateStackObject(16, 16);
+  int FrameIdx = FrameInfo->CreateStackObject(16, 16, false);
   EVT PtrVT = DAG.getTargetLoweringInfo().getPointerTy();
   SDValue FIdx = DAG.getFrameIndex(FrameIdx, PtrVT);
 
@@ -4181,6 +4214,7 @@ SDValue PPCTargetLowering::LowerOperation(SDValue Op, SelectionDAG &DAG) {
   switch (Op.getOpcode()) {
   default: llvm_unreachable("Wasn't expecting to be able to lower this!");
   case ISD::ConstantPool:       return LowerConstantPool(Op, DAG);
+  case ISD::BlockAddress:       return LowerBlockAddress(Op, DAG);
   case ISD::GlobalAddress:      return LowerGlobalAddress(Op, DAG);
   case ISD::GlobalTLSAddress:   return LowerGlobalTLSAddress(Op, DAG);
   case ISD::JumpTable:          return LowerJumpTable(Op, DAG);
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.h b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.h
index ac72d87..e45b261 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.h
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.h
@@ -361,6 +361,7 @@ namespace llvm {
     SDValue LowerRETURNADDR(SDValue Op, SelectionDAG &DAG);
     SDValue LowerFRAMEADDR(SDValue Op, SelectionDAG &DAG);
     SDValue LowerConstantPool(SDValue Op, SelectionDAG &DAG);
+    SDValue LowerBlockAddress(SDValue Op, SelectionDAG &DAG);
     SDValue LowerGlobalAddress(SDValue Op, SelectionDAG &DAG);
     SDValue LowerGlobalTLSAddress(SDValue Op, SelectionDAG &DAG);
     SDValue LowerJumpTable(SDValue Op, SelectionDAG &DAG);
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstr64Bit.td b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstr64Bit.td
index 0f68fb9..ebdc58b 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstr64Bit.td
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstr64Bit.td
@@ -127,7 +127,7 @@ def : Pat<(PPCnop),
           (NOP)>;
 
 // Atomic operations
-let usesCustomDAGSchedInserter = 1 in {
+let usesCustomInserter = 1 in {
   let Uses = [CR0] in {
     def ATOMIC_LOAD_ADD_I64 : Pseudo<
       (outs G8RC:$dst), (ins memrr:$ptr, G8RC:$incr),
@@ -731,9 +731,13 @@ def : Pat<(PPChi tconstpool:$in , 0), (LIS8 tconstpool:$in)>;
 def : Pat<(PPClo tconstpool:$in , 0), (LI8  tconstpool:$in)>;
 def : Pat<(PPChi tjumptable:$in , 0), (LIS8 tjumptable:$in)>;
 def : Pat<(PPClo tjumptable:$in , 0), (LI8  tjumptable:$in)>;
+def : Pat<(PPChi tblockaddress:$in, 0), (LIS8 tblockaddress:$in)>;
+def : Pat<(PPClo tblockaddress:$in, 0), (LI8  tblockaddress:$in)>;
 def : Pat<(add G8RC:$in, (PPChi tglobaladdr:$g, 0)),
           (ADDIS8 G8RC:$in, tglobaladdr:$g)>;
 def : Pat<(add G8RC:$in, (PPChi tconstpool:$g, 0)),
           (ADDIS8 G8RC:$in, tconstpool:$g)>;
 def : Pat<(add G8RC:$in, (PPChi tjumptable:$g, 0)),
           (ADDIS8 G8RC:$in, tjumptable:$g)>;
+def : Pat<(add G8RC:$in, (PPChi tblockaddress:$g, 0)),
+          (ADDIS8 G8RC:$in, tblockaddress:$g)>;
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.h b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.h
index a69a616..ab341bd 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.h
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.h
@@ -146,18 +146,13 @@ public:
   virtual bool BlockHasNoFallThrough(const MachineBasicBlock &MBB) const;
   virtual
   bool ReverseBranchCondition(SmallVectorImpl<MachineOperand> &Cond) const;
-
-  virtual bool isDeadInstruction(const MachineInstr *MI) const {
-    // FIXME: Without this, ppc llvm-gcc doesn't bootstrap. That means some
-    // instruction definitions are not modeling side effects correctly.
-    // This is a workaround until we know the exact cause.
-    return false;
-  }
   
   /// GetInstSize - Return the number of bytes of code the specified
   /// instruction may be.  This returns the maximum number of bytes.
   ///
   virtual unsigned GetInstSizeInBytes(const MachineInstr *MI) const;
+
+  virtual bool isProfitableToDuplicateIndirectBranch() const { return true; }
 };
 
 }
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.td b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.td
index dc5db6f..2b3f80d 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.td
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.td
@@ -363,9 +363,9 @@ def DYNALLOC : Pseudo<(outs GPRC:$result), (ins GPRC:$negsize, memri:$fpsi),
                        [(set GPRC:$result,
                              (PPCdynalloc GPRC:$negsize, iaddr:$fpsi))]>;
                          
-// SELECT_CC_* - Used to implement the SELECT_CC DAG operation.  Expanded by the
-// scheduler into a branch sequence.
-let usesCustomDAGSchedInserter = 1,    // Expanded by the scheduler.
+// SELECT_CC_* - Used to implement the SELECT_CC DAG operation.  Expanded after
+// instruction selection into a branch sequence.
+let usesCustomInserter = 1,    // Expanded after instruction selection.
     PPC970_Single = 1 in {
   def SELECT_CC_I4 : Pseudo<(outs GPRC:$dst), (ins CRRC:$cond, GPRC:$T, GPRC:$F,
                               i32imm:$BROPC), "${:comment} SELECT_CC PSEUDO!",
@@ -539,7 +539,7 @@ def DCBZL  : DCB_Form<1014, 1, (outs), (ins memrr:$dst),
                       PPC970_DGroup_Single;
 
 // Atomic operations
-let usesCustomDAGSchedInserter = 1 in {
+let usesCustomInserter = 1 in {
   let Uses = [CR0] in {
     def ATOMIC_LOAD_ADD_I8 : Pseudo<
       (outs GPRC:$dst), (ins memrr:$ptr, GPRC:$incr),
@@ -1358,15 +1358,6 @@ def RLWNM  : MForm_2<23,
 
 
 //===----------------------------------------------------------------------===//
-// DWARF Pseudo Instructions
-//
-
-def DWARF_LOC        : Pseudo<(outs), (ins i32imm:$line, i32imm:$col, i32imm:$file),
-                              "${:comment} .loc $file, $line, $col",
-                      [(dwarf_loc (i32 imm:$line), (i32 imm:$col),
-                                  (i32 imm:$file))]>;
-
-//===----------------------------------------------------------------------===//
 // PowerPC Instruction Patterns
 //
 
@@ -1436,12 +1427,16 @@ def : Pat<(PPChi tconstpool:$in, 0), (LIS tconstpool:$in)>;
 def : Pat<(PPClo tconstpool:$in, 0), (LI tconstpool:$in)>;
 def : Pat<(PPChi tjumptable:$in, 0), (LIS tjumptable:$in)>;
 def : Pat<(PPClo tjumptable:$in, 0), (LI tjumptable:$in)>;
+def : Pat<(PPChi tblockaddress:$in, 0), (LIS tblockaddress:$in)>;
+def : Pat<(PPClo tblockaddress:$in, 0), (LI tblockaddress:$in)>;
 def : Pat<(add GPRC:$in, (PPChi tglobaladdr:$g, 0)),
           (ADDIS GPRC:$in, tglobaladdr:$g)>;
 def : Pat<(add GPRC:$in, (PPChi tconstpool:$g, 0)),
           (ADDIS GPRC:$in, tconstpool:$g)>;
 def : Pat<(add GPRC:$in, (PPChi tjumptable:$g, 0)),
           (ADDIS GPRC:$in, tjumptable:$g)>;
+def : Pat<(add GPRC:$in, (PPChi tblockaddress:$g, 0)),
+          (ADDIS GPRC:$in, tblockaddress:$g)>;
 
 // Fused negative multiply subtract, alternate pattern
 def : Pat<(fsub F8RC:$B, (fmul F8RC:$A, F8RC:$C)),
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCJITInfo.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCJITInfo.cpp
index ef25d92..c679bcd 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCJITInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCJITInfo.cpp
@@ -323,6 +323,15 @@ PPCJITInfo::getLazyResolverFunction(JITCompilerFn Fn) {
   return is64Bit ? PPC64CompilationCallback : PPC32CompilationCallback;
 }
 
+TargetJITInfo::StubLayout PPCJITInfo::getStubLayout() {
+  // The stub contains up to 10 4-byte instructions, aligned at 4 bytes: 3
+  // instructions to save the caller's address if this is a lazy-compilation
+  // stub, plus a 1-, 4-, or 7-instruction sequence to load an arbitrary address
+  // into a register and jump through it.
+  StubLayout Result = {10*4, 4};
+  return Result;
+}
+
 #if (defined(__POWERPC__) || defined (__ppc__) || defined(_POWER)) && \
 defined(__APPLE__)
 extern "C" void sys_icache_invalidate(const void *Addr, size_t len);
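
The worst-case arithmetic in the getStubLayout comment above is worth
pinning down: 3 save instructions plus a 7-instruction far jump, all 4-byte
PowerPC words. A toy restatement:

    #include <cassert>

    struct ToyStubLayout { unsigned Size, Alignment; };

    static ToyStubLayout ppcStubLayout() {
      const unsigned SaveInsns = 3, FarJumpInsns = 7, InsnBytes = 4;
      return ToyStubLayout{(SaveInsns + FarJumpInsns) * InsnBytes, InsnBytes};
    }

    int main() {
      ToyStubLayout L = ppcStubLayout();
      assert(L.Size == 40 && L.Alignment == 4);
      return 0;
    }
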
@@ -330,12 +339,12 @@ extern "C" void sys_icache_invalidate(const void *Addr, size_t len);
 
 void *PPCJITInfo::emitFunctionStub(const Function* F, void *Fn,
                                    JITCodeEmitter &JCE) {
+  MachineCodeEmitter::BufferState BS;
   // If this is just a call to an external function, emit a branch instead of a
   // call.  The code is the same except for one bit of the last instruction.
   if (Fn != (void*)(intptr_t)PPC32CompilationCallback && 
       Fn != (void*)(intptr_t)PPC64CompilationCallback) {
-    JCE.startGVStub(F, 7*4);
-    intptr_t Addr = (intptr_t)JCE.getCurrentPCValue();
+    void *Addr = (void*)JCE.getCurrentPCValue();
     JCE.emitWordBE(0);
     JCE.emitWordBE(0);
     JCE.emitWordBE(0);
@@ -343,13 +352,12 @@ void *PPCJITInfo::emitFunctionStub(const Function* F, void *Fn,
     JCE.emitWordBE(0);
     JCE.emitWordBE(0);
     JCE.emitWordBE(0);
-    EmitBranchToAt(Addr, (intptr_t)Fn, false, is64Bit);
-    sys::Memory::InvalidateInstructionCache((void*)Addr, 7*4);
-    return JCE.finishGVStub(F);
+    EmitBranchToAt((intptr_t)Addr, (intptr_t)Fn, false, is64Bit);
+    sys::Memory::InvalidateInstructionCache(Addr, 7*4);
+    return Addr;
   }
 
-  JCE.startGVStub(F, 10*4);
-  intptr_t Addr = (intptr_t)JCE.getCurrentPCValue();
+  void *Addr = (void*)JCE.getCurrentPCValue();
   if (is64Bit) {
     JCE.emitWordBE(0xf821ffb1);     // stdu r1,-80(r1)
     JCE.emitWordBE(0x7d6802a6);     // mflr r11
@@ -372,8 +380,8 @@ void *PPCJITInfo::emitFunctionStub(const Function* F, void *Fn,
   JCE.emitWordBE(0);
   JCE.emitWordBE(0);
   EmitBranchToAt(BranchAddr, (intptr_t)Fn, true, is64Bit);
-  sys::Memory::InvalidateInstructionCache((void*)Addr, 10*4);
-  return JCE.finishGVStub(F);
+  sys::Memory::InvalidateInstructionCache(Addr, 10*4);
+  return Addr;
 }
 
 
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCJITInfo.h b/libclamav/c++/llvm/lib/Target/PowerPC/PPCJITInfo.h
index 2e25b29..47ead59 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCJITInfo.h
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCJITInfo.h
@@ -30,6 +30,7 @@ namespace llvm {
       is64Bit = tmIs64Bit;
     }
 
+    virtual StubLayout getStubLayout();
     virtual void *emitFunctionStub(const Function* F, void *Fn,
                                    JITCodeEmitter &JCE);
     virtual LazyResolverFn getLazyResolverFunction(JITCompilerFn);
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCRegisterInfo.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCRegisterInfo.cpp
index f120caa..0c3c8eb 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCRegisterInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCRegisterInfo.cpp
@@ -699,8 +699,10 @@ void PPCRegisterInfo::lowerCRSpilling(MachineBasicBlock::iterator II,
   MBB.erase(II);
 }
 
-void PPCRegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
-                                          int SPAdj, RegScavenger *RS) const {
+unsigned
+PPCRegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
+                                     int SPAdj, int *Value,
+                                     RegScavenger *RS) const {
   assert(SPAdj == 0 && "Unexpected");
 
   // Get the instruction.
@@ -739,14 +741,14 @@ void PPCRegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
   if (FPSI && FrameIndex == FPSI &&
       (OpC == PPC::DYNALLOC || OpC == PPC::DYNALLOC8)) {
     lowerDynamicAlloc(II, SPAdj, RS);
-    return;
+    return 0;
   }
 
   // Special case for pseudo-op SPILL_CR.
   if (EnableRegisterScavenging) // FIXME (64-bit): Enable by default.
     if (OpC == PPC::SPILL_CR) {
       lowerCRSpilling(II, FrameIndex, SPAdj, RS);
-      return;
+      return 0;
     }
 
   // Replace the FrameIndex with base register with GPR1 (SP) or GPR31 (FP).
@@ -788,7 +790,7 @@ void PPCRegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
     if (isIXAddr)
       Offset >>= 2;    // The actual encoded value has the low two bits zero.
     MI.getOperand(OffsetOperandNo).ChangeToImmediate(Offset);
-    return;
+    return 0;
   }
 
   // The offset doesn't fit into a single register, scavenge one to build the
@@ -828,6 +830,7 @@ void PPCRegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
   unsigned StackReg = MI.getOperand(FIOperandNo).getReg();
   MI.getOperand(OperandBase).ChangeToRegister(StackReg, false);
   MI.getOperand(OperandBase + 1).ChangeToRegister(SReg, false);
+  return 0;
 }
 
 /// VRRegNo - Map from a numbered VR register to its enum value.
@@ -1029,18 +1032,18 @@ PPCRegisterInfo::processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
 
   //  Save R31 if necessary
   int FPSI = FI->getFramePointerSaveIndex();
-  bool IsPPC64 = Subtarget.isPPC64();
-  bool IsSVR4ABI = Subtarget.isSVR4ABI();
+  bool isPPC64 = Subtarget.isPPC64();
   bool isDarwinABI  = Subtarget.isDarwinABI();
   MachineFrameInfo *MFI = MF.getFrameInfo();
  
   // If the frame pointer save index hasn't been defined yet.
-  if (!FPSI && needsFP(MF) && IsSVR4ABI) {
+  if (!FPSI && needsFP(MF)) {
     // Find out what the fixed offset of the frame pointer save area is.
-    int FPOffset = PPCFrameInfo::getFramePointerSaveOffset(IsPPC64,
+    int FPOffset = PPCFrameInfo::getFramePointerSaveOffset(isPPC64,
                                                            isDarwinABI);
     // Allocate the frame index for frame pointer save area.
-    FPSI = MF.getFrameInfo()->CreateFixedObject(IsPPC64? 8 : 4, FPOffset);
+    FPSI = MF.getFrameInfo()->CreateFixedObject(isPPC64? 8 : 4, FPOffset,
+                                                true, false);
     // Save the result.
     FI->setFramePointerSaveIndex(FPSI);                      
   }
@@ -1048,7 +1051,8 @@ PPCRegisterInfo::processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
   // Reserve stack space to move the linkage area to in case of a tail call.
   int TCSPDelta = 0;
   if (PerformTailCallOpt && (TCSPDelta = FI->getTailCallSPDelta()) < 0) {
-    MF.getFrameInfo()->CreateFixedObject(-1 * TCSPDelta, TCSPDelta);
+    MF.getFrameInfo()->CreateFixedObject(-1 * TCSPDelta, TCSPDelta,
+                                         true, false);
   }
   
   // Reserve a slot closest to SP or frame pointer if we have a dynalloc or
@@ -1062,9 +1066,10 @@ PPCRegisterInfo::processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
     if (needsFP(MF) || spillsCR(MF)) {
       const TargetRegisterClass *GPRC = &PPC::GPRCRegClass;
       const TargetRegisterClass *G8RC = &PPC::G8RCRegClass;
-      const TargetRegisterClass *RC = IsPPC64 ? G8RC : GPRC;
+      const TargetRegisterClass *RC = isPPC64 ? G8RC : GPRC;
       RS->setScavengingFrameIndex(MFI->CreateStackObject(RC->getSize(),
-                                                         RC->getAlignment()));
+                                                         RC->getAlignment(),
+                                                         false));
     }
 }
 
@@ -1291,7 +1296,7 @@ PPCRegisterInfo::emitPrologue(MachineFunction &MF) const {
   int NegFrameSize = -FrameSize;
   
   // Get processor type.
-  bool IsPPC64 = Subtarget.isPPC64();
+  bool isPPC64 = Subtarget.isPPC64();
   // Get operating system
   bool isDarwinABI = Subtarget.isDarwinABI();
   // Check if the link register (LR) must be saved.
@@ -1300,7 +1305,7 @@ PPCRegisterInfo::emitPrologue(MachineFunction &MF) const {
   // Do we have a frame pointer for this function?
   bool HasFP = hasFP(MF) && FrameSize;
   
-  int LROffset = PPCFrameInfo::getReturnSaveOffset(IsPPC64, isDarwinABI);
+  int LROffset = PPCFrameInfo::getReturnSaveOffset(isPPC64, isDarwinABI);
 
   int FPOffset = 0;
   if (HasFP) {
@@ -1310,11 +1315,11 @@ PPCRegisterInfo::emitPrologue(MachineFunction &MF) const {
       assert(FPIndex && "No Frame Pointer Save Slot!");
       FPOffset = FFI->getObjectOffset(FPIndex);
     } else {
-      FPOffset = PPCFrameInfo::getFramePointerSaveOffset(IsPPC64, isDarwinABI);
+      FPOffset = PPCFrameInfo::getFramePointerSaveOffset(isPPC64, isDarwinABI);
     }
   }
 
-  if (IsPPC64) {
+  if (isPPC64) {
     if (MustSaveLR)
       BuildMI(MBB, MBBI, dl, TII.get(PPC::MFLR8), PPC::X0);
       
@@ -1353,15 +1358,9 @@ PPCRegisterInfo::emitPrologue(MachineFunction &MF) const {
   unsigned TargetAlign = MF.getTarget().getFrameInfo()->getStackAlignment();
   unsigned MaxAlign = MFI->getMaxAlignment();
 
-  if (needsFrameMoves) {
-    // Mark effective beginning of when frame pointer becomes valid.
-    FrameLabelId = MMI->NextLabelID();
-    BuildMI(MBB, MBBI, dl, TII.get(PPC::DBG_LABEL)).addImm(FrameLabelId);
-  }
-  
   // Adjust stack pointer: r1 += NegFrameSize.
   // If there is a preferred stack alignment, align R1 now
-  if (!IsPPC64) {
+  if (!isPPC64) {
     // PPC32.
     if (ALIGN_STACK && MaxAlign > TargetAlign) {
       assert(isPowerOf2_32(MaxAlign)&&isInt16(MaxAlign)&&"Invalid alignment!");
@@ -1428,54 +1427,44 @@ PPCRegisterInfo::emitPrologue(MachineFunction &MF) const {
         .addReg(PPC::X0);
     }
   }
+
+  std::vector<MachineMove> &Moves = MMI->getFrameMoves();
   
+  // Add the "machine moves" for the instructions we generated above, but in
+  // reverse order.
   if (needsFrameMoves) {
-    std::vector<MachineMove> &Moves = MMI->getFrameMoves();
-    
+    // Mark effective beginning of when frame pointer becomes valid.
+    FrameLabelId = MMI->NextLabelID();
+    BuildMI(MBB, MBBI, dl, TII.get(PPC::DBG_LABEL)).addImm(FrameLabelId);
+  
+    // Show update of SP.
     if (NegFrameSize) {
-      // Show update of SP.
       MachineLocation SPDst(MachineLocation::VirtualFP);
       MachineLocation SPSrc(MachineLocation::VirtualFP, NegFrameSize);
       Moves.push_back(MachineMove(FrameLabelId, SPDst, SPSrc));
     } else {
-      MachineLocation SP(IsPPC64 ? PPC::X31 : PPC::R31);
+      MachineLocation SP(isPPC64 ? PPC::X31 : PPC::R31);
       Moves.push_back(MachineMove(FrameLabelId, SP, SP));
     }
     
     if (HasFP) {
       MachineLocation FPDst(MachineLocation::VirtualFP, FPOffset);
-      MachineLocation FPSrc(IsPPC64 ? PPC::X31 : PPC::R31);
+      MachineLocation FPSrc(isPPC64 ? PPC::X31 : PPC::R31);
       Moves.push_back(MachineMove(FrameLabelId, FPDst, FPSrc));
     }
 
-    // Add callee saved registers to move list.
-    const std::vector<CalleeSavedInfo> &CSI = MFI->getCalleeSavedInfo();
-    for (unsigned I = 0, E = CSI.size(); I != E; ++I) {
-      int Offset = MFI->getObjectOffset(CSI[I].getFrameIdx());
-      unsigned Reg = CSI[I].getReg();
-      if (Reg == PPC::LR || Reg == PPC::LR8 || Reg == PPC::RM) continue;
-      MachineLocation CSDst(MachineLocation::VirtualFP, Offset);
-      MachineLocation CSSrc(Reg);
-      Moves.push_back(MachineMove(FrameLabelId, CSDst, CSSrc));
+    if (MustSaveLR) {
+      MachineLocation LRDst(MachineLocation::VirtualFP, LROffset);
+      MachineLocation LRSrc(isPPC64 ? PPC::LR8 : PPC::LR);
+      Moves.push_back(MachineMove(FrameLabelId, LRDst, LRSrc));
     }
-    
-    MachineLocation LRDst(MachineLocation::VirtualFP, LROffset);
-    MachineLocation LRSrc(IsPPC64 ? PPC::LR8 : PPC::LR);
-    Moves.push_back(MachineMove(FrameLabelId, LRDst, LRSrc));
-    
-    // Mark effective beginning of when frame pointer is ready.
-    unsigned ReadyLabelId = MMI->NextLabelID();
-    BuildMI(MBB, MBBI, dl, TII.get(PPC::DBG_LABEL)).addImm(ReadyLabelId);
-    
-    MachineLocation FPDst(HasFP ? (IsPPC64 ? PPC::X31 : PPC::R31) :
-                                  (IsPPC64 ? PPC::X1 : PPC::R1));
-    MachineLocation FPSrc(MachineLocation::VirtualFP);
-    Moves.push_back(MachineMove(ReadyLabelId, FPDst, FPSrc));
   }
 
+  unsigned ReadyLabelId = 0;
+
   // If there is a frame pointer, copy R1 into R31
   if (HasFP) {
-    if (!IsPPC64) {
+    if (!isPPC64) {
       BuildMI(MBB, MBBI, dl, TII.get(PPC::OR), PPC::R31)
         .addReg(PPC::R1)
         .addReg(PPC::R1);
@@ -1484,6 +1473,33 @@ PPCRegisterInfo::emitPrologue(MachineFunction &MF) const {
         .addReg(PPC::X1)
         .addReg(PPC::X1);
     }
+
+    if (needsFrameMoves) {
+      ReadyLabelId = MMI->NextLabelID();
+
+      // Mark effective beginning of when frame pointer is ready.
+      BuildMI(MBB, MBBI, dl, TII.get(PPC::DBG_LABEL)).addImm(ReadyLabelId);
+
+      MachineLocation FPDst(HasFP ? (isPPC64 ? PPC::X31 : PPC::R31) :
+                                    (isPPC64 ? PPC::X1 : PPC::R1));
+      MachineLocation FPSrc(MachineLocation::VirtualFP);
+      Moves.push_back(MachineMove(ReadyLabelId, FPDst, FPSrc));
+    }
+  }
+
+  if (needsFrameMoves) {
+    unsigned LabelId = HasFP ? ReadyLabelId : FrameLabelId;
+
+    // Add callee saved registers to move list.
+    const std::vector<CalleeSavedInfo> &CSI = MFI->getCalleeSavedInfo();
+    for (unsigned I = 0, E = CSI.size(); I != E; ++I) {
+      int Offset = MFI->getObjectOffset(CSI[I].getFrameIdx());
+      unsigned Reg = CSI[I].getReg();
+      if (Reg == PPC::LR || Reg == PPC::LR8 || Reg == PPC::RM) continue;
+      MachineLocation CSDst(MachineLocation::VirtualFP, Offset);
+      MachineLocation CSSrc(Reg);
+      Moves.push_back(MachineMove(LabelId, CSDst, CSSrc));
+    }
   }
 }
 
@@ -1511,7 +1527,7 @@ void PPCRegisterInfo::emitEpilogue(MachineFunction &MF,
   int FrameSize = MFI->getStackSize();
 
   // Get processor type.
-  bool IsPPC64 = Subtarget.isPPC64();
+  bool isPPC64 = Subtarget.isPPC64();
   // Get operating system
   bool isDarwinABI = Subtarget.isDarwinABI();
   // Check if the link register (LR) has been saved.
@@ -1520,7 +1536,7 @@ void PPCRegisterInfo::emitEpilogue(MachineFunction &MF,
   // Do we have a frame pointer for this function?
   bool HasFP = hasFP(MF) && FrameSize;
   
-  int LROffset = PPCFrameInfo::getReturnSaveOffset(IsPPC64, isDarwinABI);
+  int LROffset = PPCFrameInfo::getReturnSaveOffset(isPPC64, isDarwinABI);
 
   int FPOffset = 0;
   if (HasFP) {
@@ -1530,7 +1546,7 @@ void PPCRegisterInfo::emitEpilogue(MachineFunction &MF,
       assert(FPIndex && "No Frame Pointer Save Slot!");
       FPOffset = FFI->getObjectOffset(FPIndex);
     } else {
-      FPOffset = PPCFrameInfo::getFramePointerSaveOffset(IsPPC64, isDarwinABI);
+      FPOffset = PPCFrameInfo::getFramePointerSaveOffset(isPPC64, isDarwinABI);
     }
   }
   
@@ -1558,7 +1574,7 @@ void PPCRegisterInfo::emitEpilogue(MachineFunction &MF,
   if (FrameSize) {
     // The loaded (or persistent) stack pointer value is offset by the 'stwu'
     // on entry to the function.  Add this offset back now.
-    if (!IsPPC64) {
+    if (!isPPC64) {
       // If this function contained a fastcc call and PerformTailCallOpt is
       // enabled (=> hasFastCall()==true) the fastcc call might contain a tail
       // call which invalidates the stack pointer value in SP(0). So we use the
@@ -1612,7 +1628,7 @@ void PPCRegisterInfo::emitEpilogue(MachineFunction &MF,
     }
   }
 
-  if (IsPPC64) {
+  if (isPPC64) {
     if (MustSaveLR)
       BuildMI(MBB, MBBI, dl, TII.get(PPC::LD), PPC::X0)
         .addImm(LROffset/4).addReg(PPC::X1);
@@ -1642,13 +1658,13 @@ void PPCRegisterInfo::emitEpilogue(MachineFunction &MF,
       MF.getFunction()->getCallingConv() == CallingConv::Fast) {
      PPCFunctionInfo *FI = MF.getInfo<PPCFunctionInfo>();
      unsigned CallerAllocatedAmt = FI->getMinReservedArea();
-     unsigned StackReg = IsPPC64 ? PPC::X1 : PPC::R1;
-     unsigned FPReg = IsPPC64 ? PPC::X31 : PPC::R31;
-     unsigned TmpReg = IsPPC64 ? PPC::X0 : PPC::R0;
-     unsigned ADDIInstr = IsPPC64 ? PPC::ADDI8 : PPC::ADDI;
-     unsigned ADDInstr = IsPPC64 ? PPC::ADD8 : PPC::ADD4;
-     unsigned LISInstr = IsPPC64 ? PPC::LIS8 : PPC::LIS;
-     unsigned ORIInstr = IsPPC64 ? PPC::ORI8 : PPC::ORI;
+     unsigned StackReg = isPPC64 ? PPC::X1 : PPC::R1;
+     unsigned FPReg = isPPC64 ? PPC::X31 : PPC::R31;
+     unsigned TmpReg = isPPC64 ? PPC::X0 : PPC::R0;
+     unsigned ADDIInstr = isPPC64 ? PPC::ADDI8 : PPC::ADDI;
+     unsigned ADDInstr = isPPC64 ? PPC::ADD8 : PPC::ADD4;
+     unsigned LISInstr = isPPC64 ? PPC::LIS8 : PPC::LIS;
+     unsigned ORIInstr = isPPC64 ? PPC::ORI8 : PPC::ORI;
 
      if (CallerAllocatedAmt && isInt16(CallerAllocatedAmt)) {
        BuildMI(MBB, MBBI, dl, TII.get(ADDIInstr), StackReg)
@@ -1697,7 +1713,7 @@ unsigned PPCRegisterInfo::getRARegister() const {
   return !Subtarget.isPPC64() ? PPC::LR : PPC::LR8;
 }
 
-unsigned PPCRegisterInfo::getFrameRegister(MachineFunction &MF) const {
+unsigned PPCRegisterInfo::getFrameRegister(const MachineFunction &MF) const {
   if (!Subtarget.isPPC64())
     return hasFP(MF) ? PPC::R31 : PPC::R1;
   else
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCRegisterInfo.h b/libclamav/c++/llvm/lib/Target/PowerPC/PPCRegisterInfo.h
index 2b5ad14..3aeed80 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCRegisterInfo.h
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCRegisterInfo.h
@@ -66,8 +66,9 @@ public:
                          int SPAdj, RegScavenger *RS) const;
   void lowerCRSpilling(MachineBasicBlock::iterator II, unsigned FrameIndex,
                        int SPAdj, RegScavenger *RS) const;
-  void eliminateFrameIndex(MachineBasicBlock::iterator II,
-                           int SPAdj, RegScavenger *RS = NULL) const;
+  unsigned eliminateFrameIndex(MachineBasicBlock::iterator II,
+                               int SPAdj, int *Value = NULL,
+                               RegScavenger *RS = NULL) const;
 
   /// determineFrameLayout - Determine the size of the frame and maximum call
   /// frame size.
@@ -82,7 +83,7 @@ public:
 
   // Debug information queries.
   unsigned getRARegister() const;
-  unsigned getFrameRegister(MachineFunction &MF) const;
+  unsigned getFrameRegister(const MachineFunction &MF) const;
   void getInitialFrameState(std::vector<MachineMove> &Moves) const;
 
   // Exception handling queries.
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCSubtarget.h b/libclamav/c++/llvm/lib/Target/PowerPC/PPCSubtarget.h
index 02c8ad7..75fcf62 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCSubtarget.h
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCSubtarget.h
@@ -101,8 +101,8 @@ public:
   const char *getTargetDataString() const {
     // Note, the alignment values for f64 and i64 on ppc64 in Darwin
     // documentation are wrong; these are correct (i.e. "what gcc does").
-    return isPPC64() ? "E-p:64:64-f64:64:64-i64:64:64-f128:64:128"
-                     : "E-p:32:32-f64:32:64-i64:32:64-f128:64:128";
+    return isPPC64() ? "E-p:64:64-f64:64:64-i64:64:64-f128:64:128-n32:64"
+                     : "E-p:32:32-f64:32:64-i64:32:64-f128:64:128-n32";
   }
 
   /// isPPC64 - Return true if we are generating code for 64-bit pointer mode.
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCTargetMachine.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCTargetMachine.cpp
index 3371954..8079c6e 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCTargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCTargetMachine.cpp
@@ -20,8 +20,7 @@
 #include "llvm/Support/FormattedStream.h"
 using namespace llvm;
 
-static const MCAsmInfo *createMCAsmInfo(const Target &T,
-                                                const StringRef &TT) {
+static const MCAsmInfo *createMCAsmInfo(const Target &T, StringRef TT) {
   Triple TheTriple(TT);
   bool isPPC64 = TheTriple.getArch() == Triple::ppc64;
   if (TheTriple.getOS() == Triple::Darwin)
diff --git a/libclamav/c++/llvm/lib/Target/README.txt b/libclamav/c++/llvm/lib/Target/README.txt
index 89ea9d0..e7a55a0 100644
--- a/libclamav/c++/llvm/lib/Target/README.txt
+++ b/libclamav/c++/llvm/lib/Target/README.txt
@@ -220,7 +220,7 @@ so cool to turn it into something like:
 ... which would only do one 32-bit XOR per loop iteration instead of two.
 
 It would also be nice to recognize that reg->size doesn't alias reg->node[i], but
-alas.
+this requires TBAA.
 
 //===---------------------------------------------------------------------===//
 
@@ -280,6 +280,9 @@ unsigned int popcount(unsigned int input) {
   return count;
 }
 
+This is a form of idiom recognition for loops, the same thing that could be
+useful for recognizing memset/memcpy.
+
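For illustration (hypothetical input, not from this note), the memset
case of that idiom is a loop such as:

void zero_bytes(char *p, unsigned n) {
  for (unsigned i = 0; i != n; ++i)
    p[i] = 0;    // the whole loop is equivalent to memset(p, 0, n)
}
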
 //===---------------------------------------------------------------------===//
 
 These should turn into single 16-bit (unaligned?) loads on little/big endian
@@ -339,9 +342,11 @@ we don't have whole-function selection dags.  On x86, this means we use one
 extra register for the function when effective_addr2 is declared as U64 than
 when it is declared U32.
 
+PHI Slicing could be extended to do this.
+
 //===---------------------------------------------------------------------===//
 
-LSR should know what GPR types a target has.  This code:
+LSR should know what GPR types a target has from TargetData.  This code:
 
 volatile short X, Y; // globals
 
@@ -367,7 +372,6 @@ LBB1_2:
 
 LSR should reuse the "+" IV for the exit test.
 
-
 //===---------------------------------------------------------------------===//
 
 Tail call elim should be more aggressive, checking to see if the call is
@@ -406,22 +410,6 @@ return:		; preds = %then.1, %else.0, %then.0
 
 //===---------------------------------------------------------------------===//
 
-Tail recursion elimination is not transforming this function, because it is
-returning n, which fails the isDynamicConstant check in the accumulator 
-recursion checks.
-
-long long fib(const long long n) {
-  switch(n) {
-    case 0:
-    case 1:
-      return n;
-    default:
-      return fib(n-1) + fib(n-2);
-  }
-}
-
-//===---------------------------------------------------------------------===//
-
 Tail recursion elimination should handle:
 
 int pow2m1(int n) {
@@ -455,25 +443,6 @@ entry:
 
 //===---------------------------------------------------------------------===//
 
-"basicaa" should know how to look through "or" instructions that act like add
-instructions.  For example in this code, the x*4+1 is turned into x*4 | 1, and
-basicaa can't analyze the array subscript, leading to duplicated loads in the
-generated code:
-
-void test(int X, int Y, int a[]) {
-int i;
-  for (i=2; i<1000; i+=4) {
-  a[i+0] = a[i-1+0]*a[i-2+0];
-  a[i+1] = a[i-1+1]*a[i-2+1];
-  a[i+2] = a[i-1+2]*a[i-2+2];
-  a[i+3] = a[i-1+3]*a[i-2+3];
-  }
-}
-
-BasicAA also doesn't do this for add.  It needs to know that &A[i+1] != &A[i].
-
-//===---------------------------------------------------------------------===//
-
 We should investigate an instruction sinking pass.  Consider this silly
 example in pic mode:
 
@@ -1227,6 +1196,7 @@ bb3:		; preds = %bb1, %bb2, %bb
 
 GCC PR33344 is a similar case.
 
+
 //===---------------------------------------------------------------------===//
 
 There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
@@ -1281,8 +1251,6 @@ http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35287 [LPRE crit edge splitting]
 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34677 (licm does this, LPRE crit edge)
   llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as | opt -mem2reg -simplifycfg -gvn | llvm-dis
 
-http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16799 [BITCAST PHI TRANS]
-
 //===---------------------------------------------------------------------===//
 
 Type based alias analysis:
@@ -1594,53 +1562,123 @@ int int_char(char m) {if(m>7) return 0; return m;}
 
 //===---------------------------------------------------------------------===//
 
-Instcombine should replace the load with a constant in:
+int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }
 
-  static const char x[4] = {'a', 'b', 'c', 'd'};
-  
-  unsigned int y(void) {
-    return *(unsigned int *)x;
-  }
+Generates this:
+
+define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
+entry:
+  %0 = and i32 %a, 128                            ; <i32> [#uses=1]
+  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
+  %2 = or i32 %b, 128                             ; <i32> [#uses=1]
+  %3 = and i32 %b, -129                           ; <i32> [#uses=1]
+  %b_addr.0 = select i1 %1, i32 %3, i32 %2        ; <i32> [#uses=1]
+  ret i32 %b_addr.0
+}
+
+However, it's functionally equivalent to:
+
+         b = (b & ~0x80) | (a & 0x80);
 
-It currently only does this transformation when the size of the constant 
-is the same as the size of the integer (so, try x[5]) and the last byte 
-is a null (making it a C string). There's no need for these restrictions.
+Which generates this:
+
+define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
+entry:
+  %0 = and i32 %b, -129                           ; <i32> [#uses=1]
+  %1 = and i32 %a, 128                            ; <i32> [#uses=1]
+  %2 = or i32 %0, %1                              ; <i32> [#uses=1]
+  ret i32 %2
+}
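
A quick standalone check (illustrative only) that the branchy original
and the bit-twiddling rewrite agree on every byte pair:

#include <assert.h>
int f1(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }
int f2(int a, int b) { return (b & ~0x80) | (a & 0x80); }
int main(void) {
  for (int a = 0; a < 256; ++a)
    for (int b = 0; b < 256; ++b)
      assert(f1(a, b) == f2(a, b));
  return 0;
}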
+
+This can be generalized for other forms:
+
+     b = (b & ~0x80) | (a & 0x40) << 1;
 
 //===---------------------------------------------------------------------===//
 
-InstCombine's "turn load from constant into constant" optimization should be
-more aggressive in the presence of bitcasts.  For example, because of unions,
-this code:
+These two functions produce different code. They shouldn't:
 
-union vec2d {
-    double e[2];
-    double v __attribute__((vector_size(16)));
-};
-typedef union vec2d vec2d;
+#include <stdint.h>
+ 
+uint8_t p1(uint8_t b, uint8_t a) {
+  b = (b & ~0xc0) | (a & 0xc0);
+  return (b);
+}
+ 
+uint8_t p2(uint8_t b, uint8_t a) {
+  b = (b & ~0x40) | (a & 0x40);
+  b = (b & ~0x80) | (a & 0x80);
+  return (b);
+}
 
-static vec2d a={{1,2}}, b={{3,4}};
-    
-vec2d foo () {
-    return (vec2d){ .v = a.v + b.v * (vec2d){{5,5}}.v };
+define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
+entry:
+  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
+  %1 = and i8 %a, -64                             ; <i8> [#uses=1]
+  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
+  ret i8 %2
 }
 
-Compiles into:
+define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
+entry:
+  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
+  %.masked = and i8 %a, 64                        ; <i8> [#uses=1]
+  %1 = and i8 %a, -128                            ; <i8> [#uses=1]
+  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
+  %3 = or i8 %2, %.masked                         ; <i8> [#uses=1]
+  ret i8 %3
+}
 
- at a = internal constant %0 { [2 x double] 
-           [double 1.000000e+00, double 2.000000e+00] }, align 16
- at b = internal constant %0 { [2 x double]
-           [double 3.000000e+00, double 4.000000e+00] }, align 16
-...
-define void @foo(%struct.vec2d* noalias nocapture sret %agg.result) nounwind {
+//===---------------------------------------------------------------------===//
+
+IPSCCP does not currently propagate argument-dependent constants through
+functions where it does not see all of the callers.  This includes functions
+with normal external linkage as well as templates, C99 inline functions, etc.
+Specifically, it does nothing to:
+
+define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
 entry:
-	%0 = load <2 x double>* getelementptr (%struct.vec2d* 
-           bitcast (%0* @a to %struct.vec2d*), i32 0, i32 0), align 16
-	%1 = load <2 x double>* getelementptr (%struct.vec2d* 
-           bitcast (%0* @b to %struct.vec2d*), i32 0, i32 0), align 16
+  %0 = add nsw i32 %y, %z                         
+  %1 = mul i32 %0, %x                             
+  %2 = mul i32 %y, %z                             
+  %3 = add nsw i32 %1, %2                         
+  ret i32 %3
+}
 
+define i32 @test2() nounwind {
+entry:
+  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
+  ret i32 %0
+}
+
+It would be interesting to extend IPSCCP to handle simple cases like
+this, where all of the arguments to a call are constant.  Because IPSCCP runs
+before inlining, trivial templates and inline functions are not yet inlined.
+The results for a function + set of constant arguments should be memoized in a
+map.
+
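A rough sketch of such a memoization table (hypothetical structure, not
an existing LLVM API):

// Maps a function plus one specific constant-argument list to the
// constant it was resolved to, so each specialization is evaluated once.
std::map<std::pair<const Function*, std::vector<Constant*> >,
         Constant*> SpecializedRetVals;
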
+//===---------------------------------------------------------------------===//
 
-Instcombine should be able to optimize away the loads (and thus the globals).
+The libcall constant folding stuff should be moved out of SimplifyLibcalls into
+libanalysis' constantfolding logic.  This would allow IPSCCP to handle
+simple things like this:
+
+static int foo(const char *X) { return strlen(X); }
+int bar() { return foo("abcd"); }
+
+//===---------------------------------------------------------------------===//
+
+InstCombine should use SimplifyDemandedBits to remove the or instruction:
+
+define i1 @test(i8 %x, i8 %y) {
+  %A = or i8 %x, 1
+  %B = icmp ugt i8 %A, 3
+  ret i1 %B
+}
 
-See also PR4973
+Currently instcombine calls SimplifyDemandedBits with either all bits or just
+the sign bit, if the comparison is obviously a sign test. In this case, we only
+need all but the bottom two bits from %A, and if we gave that mask to SDB it
+would delete the or instruction for us.
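
Concretely (standalone illustration): or'ing in bit 0 can never change
an unsigned >3 comparison, because only bits 2 and up decide it:

#include <assert.h>
int main(void) {
  for (unsigned x = 0; x < 256; ++x)
    assert(((x | 1) > 3) == (x > 3));  // the 'or' is dead for this test
  return 0;
}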
 
 //===---------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/Target/SubtargetFeature.cpp b/libclamav/c++/llvm/lib/Target/SubtargetFeature.cpp
index 664a43c..590574e 100644
--- a/libclamav/c++/llvm/lib/Target/SubtargetFeature.cpp
+++ b/libclamav/c++/llvm/lib/Target/SubtargetFeature.cpp
@@ -357,3 +357,30 @@ void SubtargetFeatures::print(raw_ostream &OS) const {
 void SubtargetFeatures::dump() const {
   print(errs());
 }
+
+/// getDefaultSubtargetFeatures - Return a string listing
+/// the features associated with the target triple.
+///
+/// FIXME: This is an inelegant way of specifying the features of a
+/// subtarget. It would be better if we could encode this information
+/// into the IR. See <rdar://5972456>.
+///
+std::string SubtargetFeatures::getDefaultSubtargetFeatures(
+                                               const Triple& Triple) {
+  switch (Triple.getVendor()) {
+  case Triple::Apple:
+    switch (Triple.getArch()) {
+    case Triple::ppc:   // powerpc-apple-*
+      return std::string("altivec");
+    case Triple::ppc64: // powerpc64-apple-*
+      return std::string("64bit,altivec");
+    default:
+      break;
+    }
+    break;
+  default:
+    break;
+  } 
+
+  return std::string("");
+}
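
For example (hypothetical caller, not part of this commit; this assumes
the member is declared static, as the stateless definition suggests), a
tool constructing a target machine could seed its feature string with:

Triple T("powerpc64-apple-darwin9");
std::string FS = SubtargetFeatures::getDefaultSubtargetFeatures(T);
// FS is now "64bit,altivec"; for non-Apple triples it stays "".
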
diff --git a/libclamav/c++/llvm/lib/Target/TargetData.cpp b/libclamav/c++/llvm/lib/Target/TargetData.cpp
index 5bcd658..fc71bc3 100644
--- a/libclamav/c++/llvm/lib/Target/TargetData.cpp
+++ b/libclamav/c++/llvm/lib/Target/TargetData.cpp
@@ -17,16 +17,16 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/Target/TargetData.h"
-#include "llvm/Module.h"
-#include "llvm/DerivedTypes.h"
 #include "llvm/Constants.h"
+#include "llvm/DerivedTypes.h"
+#include "llvm/Module.h"
 #include "llvm/Support/GetElementPtrTypeIterator.h"
 #include "llvm/Support/MathExtras.h"
 #include "llvm/Support/ManagedStatic.h"
 #include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/raw_ostream.h"
 #include "llvm/System/Mutex.h"
 #include "llvm/ADT/DenseMap.h"
-#include "llvm/ADT/StringExtras.h"
 #include <algorithm>
 #include <cstdlib>
 using namespace llvm;
@@ -132,50 +132,18 @@ const TargetAlignElem TargetData::InvalidAlignmentElem =
 //                       TargetData Class Implementation
 //===----------------------------------------------------------------------===//
 
-/*!
- A TargetDescription string consists of a sequence of hyphen-delimited
- specifiers for target endianness, pointer size and alignments, and various
- primitive type sizes and alignments. A typical string looks something like:
- <br><br>
- "E-p:32:32:32-i1:8:8-i8:8:8-i32:32:32-i64:32:64-f32:32:32-f64:32:64"
- <br><br>
- (note: this string is not fully specified and is only an example.)
- \p
- Alignments come in two flavors: ABI and preferred. ABI alignment (abi_align,
- below) dictates how a type will be aligned within an aggregate and when used
- as an argument.  Preferred alignment (pref_align, below) determines a type's
- alignment when emitted as a global.
- \p
- Specifier string details:
- <br><br>
- <i>[E|e]</i>: Endianness. "E" specifies a big-endian target data model, "e"
- specifies a little-endian target data model.
- <br><br>
- <i>p:@verbatim<size>:<abi_align>:<pref_align>@endverbatim</i>: Pointer size, 
- ABI and preferred alignment.
- <br><br>
- <i>@verbatim<type><size>:<abi_align>:<pref_align>@endverbatim</i>: Numeric type
- alignment. Type is
- one of <i>i|f|v|a</i>, corresponding to integer, floating point, vector, or
- aggregate.  Size indicates the size, e.g., 32 or 64 bits.
- \p
- The default string, fully specified, is:
- <br><br>
- "E-p:64:64:64-a0:0:8-f32:32:32-f64:64:64"
- "-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64"
- "-v64:64:64-v128:128:128"
- <br><br>
- Note that in the case of aggregates, 0 is the default ABI and preferred
- alignment. This is a special case, where the aggregate's computed worst-case
- alignment will be used.
- */ 
-void TargetData::init(const std::string &TargetDescription) {
-  std::string temp = TargetDescription;
-  
+/// getInt - Get an integer ignoring errors.
+static unsigned getInt(StringRef R) {
+  unsigned Result = 0;
+  R.getAsInteger(10, Result);
+  return Result;
+}
+
+void TargetData::init(StringRef Desc) {
   LayoutMap = 0;
   LittleEndian = false;
   PointerMemSize = 8;
-  PointerABIAlign   = 8;
+  PointerABIAlign = 8;
   PointerPrefAlign = PointerABIAlign;
 
   // Default alignments
@@ -190,11 +158,21 @@ void TargetData::init(const std::string &TargetDescription) {
   setAlignment(VECTOR_ALIGN,   16, 16, 128); // v16i8, v8i16, v4i32, ...
   setAlignment(AGGREGATE_ALIGN, 0,  8,  0);  // struct
 
-  while (!temp.empty()) {
-    std::string token = getToken(temp, "-");
-    std::string arg0 = getToken(token, ":");
-    const char *p = arg0.c_str();
-    switch(*p) {
+  while (!Desc.empty()) {
+    std::pair<StringRef, StringRef> Split = Desc.split('-');
+    StringRef Token = Split.first;
+    Desc = Split.second;
+    
+    if (Token.empty())
+      continue;
+    
+    Split = Token.split(':');
+    StringRef Specifier = Split.first;
+    Token = Split.second;
+    
+    assert(!Specifier.empty() && "Can't be empty here");
+    
+    switch (Specifier[0]) {
     case 'E':
       LittleEndian = false;
       break;
@@ -202,9 +180,12 @@ void TargetData::init(const std::string &TargetDescription) {
       LittleEndian = true;
       break;
     case 'p':
-      PointerMemSize = atoi(getToken(token,":").c_str()) / 8;
-      PointerABIAlign = atoi(getToken(token,":").c_str()) / 8;
-      PointerPrefAlign = atoi(getToken(token,":").c_str()) / 8;
+      Split = Token.split(':');
+      PointerMemSize = getInt(Split.first) / 8;
+      Split = Split.second.split(':');
+      PointerABIAlign = getInt(Split.first) / 8;
+      Split = Split.second.split(':');
+      PointerPrefAlign = getInt(Split.first) / 8;
       if (PointerPrefAlign == 0)
         PointerPrefAlign = PointerABIAlign;
       break;
@@ -213,28 +194,52 @@ void TargetData::init(const std::string &TargetDescription) {
     case 'f':
     case 'a':
     case 's': {
-      AlignTypeEnum align_type = STACK_ALIGN; // Dummy init, silence warning
-      switch(*p) {
-        case 'i': align_type = INTEGER_ALIGN; break;
-        case 'v': align_type = VECTOR_ALIGN; break;
-        case 'f': align_type = FLOAT_ALIGN; break;
-        case 'a': align_type = AGGREGATE_ALIGN; break;
-        case 's': align_type = STACK_ALIGN; break;
+      AlignTypeEnum AlignType;
+      switch (Specifier[0]) {
+      default:
+      case 'i': AlignType = INTEGER_ALIGN; break;
+      case 'v': AlignType = VECTOR_ALIGN; break;
+      case 'f': AlignType = FLOAT_ALIGN; break;
+      case 'a': AlignType = AGGREGATE_ALIGN; break;
+      case 's': AlignType = STACK_ALIGN; break;
       }
-      uint32_t size = (uint32_t) atoi(++p);
-      unsigned char abi_align = atoi(getToken(token, ":").c_str()) / 8;
-      unsigned char pref_align = atoi(getToken(token, ":").c_str()) / 8;
-      if (pref_align == 0)
-        pref_align = abi_align;
-      setAlignment(align_type, abi_align, pref_align, size);
+      unsigned Size = getInt(Specifier.substr(1));
+      Split = Token.split(':');
+      unsigned char ABIAlign = getInt(Split.first) / 8;
+      
+      Split = Split.second.split(':');
+      unsigned char PrefAlign = getInt(Split.first) / 8;
+      if (PrefAlign == 0)
+        PrefAlign = ABIAlign;
+      setAlignment(AlignType, ABIAlign, PrefAlign, Size);
       break;
     }
+    case 'n':  // Native integer types.
+      Specifier = Specifier.substr(1);
+      do {
+        if (unsigned Width = getInt(Specifier))
+          LegalIntWidths.push_back(Width);
+        Split = Token.split(':');
+        Specifier = Split.first;
+        Token = Split.second;
+      } while (!Specifier.empty() || !Token.empty());
+      break;
+        
     default:
       break;
     }
   }
 }
 
+/// Default ctor.
+///
+/// @note This has to exist, because this is a pass, but it should never be
+/// used.
+TargetData::TargetData() : ImmutablePass(&ID) {
+  llvm_report_error("Bad TargetData ctor used.  "
+                    "Tool did not specify a TargetData to use?");
+}
+
 TargetData::TargetData(const Module *M) 
   : ImmutablePass(&ID) {
   init(M->getDataLayout());
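
As a worked example of what this parser accepts, the ppc64 string added
earlier in this commit,

  "E-p:64:64-f64:64:64-i64:64:64-f128:64:128-n32:64"

breaks down as: 'E' = big-endian; 'p:64:64' = 64-bit pointers with
64-bit ABI alignment (the preferred alignment, left unspecified,
defaults to the ABI value); 'f64:64:64', 'i64:64:64' and 'f128:64:128'
override per-type ABI/preferred alignments in bits; and the new 'n32:64'
specifier records 32 and 64 in LegalIntWidths as the target's native
integer widths.
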
@@ -318,37 +323,130 @@ unsigned TargetData::getAlignmentInfo(AlignTypeEnum AlignType,
                  : Alignments[BestMatchIdx].PrefAlign;
 }
 
-typedef DenseMap<const StructType*, StructLayout*>LayoutInfoTy;
+typedef DenseMap<const StructType*, StructLayout*> LayoutInfoTy;
 
-TargetData::~TargetData() {
-  if (!LayoutMap)
-    return;
-  
-  // Remove any layouts for this TD.
-  LayoutInfoTy &TheMap = *static_cast<LayoutInfoTy*>(LayoutMap);
-  for (LayoutInfoTy::iterator I = TheMap.begin(), E = TheMap.end(); I != E; ) {
-    I->second->~StructLayout();
-    free(I->second);
-    TheMap.erase(I++);
+namespace llvm {
+
+class StructLayoutMap : public AbstractTypeUser {
+  LayoutInfoTy LayoutInfo;
+
+  /// refineAbstractType - The callback method invoked when an abstract type is
+  /// resolved to another type.  An object must override this method to update
+  /// its internal state to reference NewType instead of OldType.
+  ///
+  virtual void refineAbstractType(const DerivedType *OldTy,
+                                  const Type *) {
+    const StructType *STy = dyn_cast<const StructType>(OldTy);
+    if (!STy) {
+      OldTy->removeAbstractTypeUser(this);
+      return;
+    }
+
+    StructLayout *SL = LayoutInfo[STy];
+    if (SL) {
+      SL->~StructLayout();
+      free(SL);
+      LayoutInfo[STy] = NULL;
+    }
+
+    OldTy->removeAbstractTypeUser(this);
   }
-  
-  delete static_cast<LayoutInfoTy*>(LayoutMap);
+
+  /// typeBecameConcrete - The other case which AbstractTypeUsers must be aware
+  /// of is when a type makes the transition from being abstract (where it has
+  /// clients on its AbstractTypeUsers list) to concrete (where it does not).
+  /// This method notifies ATU's when this occurs for a type.
+  ///
+  virtual void typeBecameConcrete(const DerivedType *AbsTy) {
+    const StructType *STy = dyn_cast<const StructType>(AbsTy);
+    if (!STy) {
+      AbsTy->removeAbstractTypeUser(this);
+      return;
+    }
+
+    StructLayout *SL = LayoutInfo[STy];
+    if (SL) {
+      SL->~StructLayout();
+      free(SL);
+      LayoutInfo[STy] = NULL;
+    }
+
+    AbsTy->removeAbstractTypeUser(this);
+  }
+
+  bool insert(const Type *Ty) {
+    if (Ty->isAbstract())
+      Ty->addAbstractTypeUser(this);
+    return true;
+  }
+
+public:
+  virtual ~StructLayoutMap() {
+    // Remove any layouts.
+    for (LayoutInfoTy::iterator
+           I = LayoutInfo.begin(), E = LayoutInfo.end(); I != E; ++I)
+      if (StructLayout *SL = I->second) {
+        SL->~StructLayout();
+        free(SL);
+      }
+  }
+
+  inline LayoutInfoTy::iterator begin() {
+    return LayoutInfo.begin();
+  }
+  inline LayoutInfoTy::iterator end() {
+    return LayoutInfo.end();
+  }
+  inline LayoutInfoTy::const_iterator begin() const {
+    return LayoutInfo.begin();
+  }
+  inline LayoutInfoTy::const_iterator end() const {
+    return LayoutInfo.end();
+  }
+
+  LayoutInfoTy::iterator find(const StructType *&Val) {
+    return LayoutInfo.find(Val);
+  }
+  LayoutInfoTy::const_iterator find(const StructType *&Val) const {
+    return LayoutInfo.find(Val);
+  }
+
+  bool erase(const StructType *&Val) {
+    return LayoutInfo.erase(Val);
+  }
+  bool erase(LayoutInfoTy::iterator I) {
+    return LayoutInfo.erase(I);
+  }
+
+  StructLayout *&operator[](const Type *Key) {
+    const StructType *STy = dyn_cast<const StructType>(Key);
+    assert(STy && "Trying to access the struct layout map with a non-struct!");
+    insert(STy);
+    return LayoutInfo[STy];
+  }
+
+  // for debugging...
+  virtual void dump() const {}
+};
+
+} // end namespace llvm
+
+TargetData::~TargetData() {
+  delete LayoutMap;
 }
 
 const StructLayout *TargetData::getStructLayout(const StructType *Ty) const {
   if (!LayoutMap)
-    LayoutMap = static_cast<void*>(new LayoutInfoTy());
-  
-  LayoutInfoTy &TheMap = *static_cast<LayoutInfoTy*>(LayoutMap);
+    LayoutMap = new StructLayoutMap();
   
-  StructLayout *&SL = TheMap[Ty];
+  StructLayout *&SL = (*LayoutMap)[Ty];
   if (SL) return SL;
 
   // Otherwise, create the struct layout.  Because it is variable length, we 
   // malloc it, then use placement new.
   int NumElts = Ty->getNumElements();
   StructLayout *L =
-    (StructLayout *)malloc(sizeof(StructLayout)+(NumElts-1)*sizeof(uint64_t));
+    (StructLayout *)malloc(sizeof(StructLayout)+(NumElts-1) * sizeof(uint64_t));
   
   // Set SL before calling StructLayout's ctor.  The ctor could cause other
   // entries to be added to LayoutMap, invalidating our reference.
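
The allocation idiom used here, shown standalone (illustrative sketch,
not LLVM code): a struct with a trailing variable-length array is
over-allocated with malloc and constructed with placement new, which is
why teardown throughout this file is an explicit destructor call plus
free() rather than delete.

#include <cstdlib>
#include <new>

struct VarLayout {
  unsigned NumElts;
  unsigned long long Offsets[1];   // tail grows past the struct's end
  explicit VarLayout(unsigned N) : NumElts(N) {}
};

VarLayout *makeVarLayout(unsigned N) {
  void *Mem = std::malloc(sizeof(VarLayout) +
                          (N - 1) * sizeof(unsigned long long));
  return new (Mem) VarLayout(N);   // construct in the raw buffer
}

void freeVarLayout(VarLayout *L) {
  L->~VarLayout();                 // destroy, then release the raw buffer
  std::free(L);
}
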
@@ -365,31 +463,35 @@ const StructLayout *TargetData::getStructLayout(const StructType *Ty) const {
 void TargetData::InvalidateStructLayoutInfo(const StructType *Ty) const {
   if (!LayoutMap) return;  // No cache.
   
-  LayoutInfoTy* LayoutInfo = static_cast<LayoutInfoTy*>(LayoutMap);
-  LayoutInfoTy::iterator I = LayoutInfo->find(Ty);
-  if (I == LayoutInfo->end()) return;
+  DenseMap<const StructType*, StructLayout*>::iterator I = LayoutMap->find(Ty);
+  if (I == LayoutMap->end()) return;
   
   I->second->~StructLayout();
   free(I->second);
-  LayoutInfo->erase(I);
+  LayoutMap->erase(I);
 }
 
 
 std::string TargetData::getStringRepresentation() const {
-  std::string repr;
-  repr.append(LittleEndian ? "e" : "E");
-  repr.append("-p:").append(itostr((int64_t) (PointerMemSize * 8))).
-      append(":").append(itostr((int64_t) (PointerABIAlign * 8))).
-      append(":").append(itostr((int64_t) (PointerPrefAlign * 8)));
-  for (align_const_iterator I = Alignments.begin();
-       I != Alignments.end();
-       ++I) {
-    repr.append("-").append(1, (char) I->AlignType).
-      append(utostr((int64_t) I->TypeBitWidth)).
-      append(":").append(utostr((uint64_t) (I->ABIAlign * 8))).
-      append(":").append(utostr((uint64_t) (I->PrefAlign * 8)));
+  std::string Result;
+  raw_string_ostream OS(Result);
+  
+  OS << (LittleEndian ? "e" : "E")
+     << "-p:" << PointerMemSize*8 << ':' << PointerABIAlign*8
+     << ':' << PointerPrefAlign*8;
+  for (unsigned i = 0, e = Alignments.size(); i != e; ++i) {
+    const TargetAlignElem &AI = Alignments[i];
+    OS << '-' << (char)AI.AlignType << AI.TypeBitWidth << ':'
+       << AI.ABIAlign*8 << ':' << AI.PrefAlign*8;
+  }
+  
+  if (!LegalIntWidths.empty()) {
+    OS << "-n" << (unsigned)LegalIntWidths[0];
+    
+    for (unsigned i = 1, e = LegalIntWidths.size(); i != e; ++i)
+      OS << ':' << (unsigned)LegalIntWidths[i];
   }
-  return repr;
+  return OS.str();
 }
 
 
diff --git a/libclamav/c++/llvm/lib/Target/TargetIntrinsicInfo.cpp b/libclamav/c++/llvm/lib/Target/TargetIntrinsicInfo.cpp
index d8da08e..e049a1d 100644
--- a/libclamav/c++/llvm/lib/Target/TargetIntrinsicInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/TargetIntrinsicInfo.cpp
@@ -12,11 +12,19 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/Target/TargetIntrinsicInfo.h"
+#include "llvm/Function.h"
+#include "llvm/ADT/StringMap.h"
 using namespace llvm;
 
-TargetIntrinsicInfo::TargetIntrinsicInfo(const char **desc, unsigned count)
-  : Intrinsics(desc), NumIntrinsics(count) {
+TargetIntrinsicInfo::TargetIntrinsicInfo() {
 }
 
 TargetIntrinsicInfo::~TargetIntrinsicInfo() {
 }
+
+unsigned TargetIntrinsicInfo::getIntrinsicID(Function *F) const {
+  const ValueName *ValName = F->getValueName();
+  if (!ValName)
+    return 0;
+  return lookupName(ValName->getKeyData(), ValName->getKeyLength());
+}
diff --git a/libclamav/c++/llvm/lib/Target/TargetLoweringObjectFile.cpp b/libclamav/c++/llvm/lib/Target/TargetLoweringObjectFile.cpp
index c1aab99..f887523 100644
--- a/libclamav/c++/llvm/lib/Target/TargetLoweringObjectFile.cpp
+++ b/libclamav/c++/llvm/lib/Target/TargetLoweringObjectFile.cpp
@@ -24,6 +24,7 @@
 #include "llvm/Target/TargetData.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetOptions.h"
+#include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/Mangler.h"
 #include "llvm/ADT/SmallString.h"
 #include "llvm/ADT/StringExtras.h"
@@ -151,7 +152,7 @@ SectionKind TargetLoweringObjectFile::getKindForGlobal(const GlobalValue *GV,
     // relocation, then we may have to drop this into a writable data section
     // even though it is marked const.
     switch (C->getRelocationInfo()) {
-    default: llvm_unreachable("unknown relocation info kind");
+    default: assert(0 && "unknown relocation info kind");
     case Constant::NoRelocation:
       // If initializer is a null-terminated string, put it in a "cstring"
       // section of the right width.
@@ -219,7 +220,7 @@ SectionKind TargetLoweringObjectFile::getKindForGlobal(const GlobalValue *GV,
     return SectionKind::getDataNoRel();
 
   switch (C->getRelocationInfo()) {
-  default: llvm_unreachable("unknown relocation info kind");
+  default: assert(0 && "unknown relocation info kind");
   case Constant::NoRelocation:
     return SectionKind::getDataNoRel();
   case Constant::LocalRelocation:
@@ -671,7 +672,7 @@ TargetLoweringObjectFileMachO::~TargetLoweringObjectFileMachO() {
 
 
 const MCSectionMachO *TargetLoweringObjectFileMachO::
-getMachOSection(const StringRef &Segment, const StringRef &Section,
+getMachOSection(StringRef Segment, StringRef Section,
                 unsigned TypeAndAttributes,
                 unsigned Reserved2, SectionKind Kind) const {
   // We unique sections by their segment/section pair.  The returned section
diff --git a/libclamav/c++/llvm/lib/Target/TargetRegisterInfo.cpp b/libclamav/c++/llvm/lib/Target/TargetRegisterInfo.cpp
index 4312399..fac67e2 100644
--- a/libclamav/c++/llvm/lib/Target/TargetRegisterInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/TargetRegisterInfo.cpp
@@ -62,14 +62,14 @@ TargetRegisterInfo::getPhysicalRegisterRegClass(unsigned reg, EVT VT) const {
 
 /// getAllocatableSetForRC - Toggle the bits that represent allocatable
 /// registers for the specific register class.
-static void getAllocatableSetForRC(MachineFunction &MF,
+static void getAllocatableSetForRC(const MachineFunction &MF,
                                    const TargetRegisterClass *RC, BitVector &R){  
   for (TargetRegisterClass::iterator I = RC->allocation_order_begin(MF),
          E = RC->allocation_order_end(MF); I != E; ++I)
     R.set(*I);
 }
 
-BitVector TargetRegisterInfo::getAllocatableSet(MachineFunction &MF,
+BitVector TargetRegisterInfo::getAllocatableSet(const MachineFunction &MF,
                                           const TargetRegisterClass *RC) const {
   BitVector Allocatable(NumRegs);
   if (RC) {
diff --git a/libclamav/c++/llvm/lib/Target/TargetSubtarget.cpp b/libclamav/c++/llvm/lib/Target/TargetSubtarget.cpp
index 95c92ca..edb76f9 100644
--- a/libclamav/c++/llvm/lib/Target/TargetSubtarget.cpp
+++ b/libclamav/c++/llvm/lib/Target/TargetSubtarget.cpp
@@ -12,6 +12,7 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/Target/TargetSubtarget.h"
+#include "llvm/ADT/SmallVector.h"
 using namespace llvm;
 
 //---------------------------------------------------------------------------
@@ -20,3 +21,13 @@ using namespace llvm;
 TargetSubtarget::TargetSubtarget() {}
 
 TargetSubtarget::~TargetSubtarget() {}
+
+bool TargetSubtarget::enablePostRAScheduler(
+          CodeGenOpt::Level OptLevel,
+          AntiDepBreakMode& Mode,
+          RegClassVector& CriticalPathRCs) const {
+  Mode = ANTIDEP_NONE;
+  CriticalPathRCs.clear();
+  return false;
+}
+
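A target that wants post-RA scheduling would override this default along
these lines (hypothetical subtarget class, not part of this commit):

bool MySubtarget::enablePostRAScheduler(CodeGenOpt::Level OptLevel,
                                        AntiDepBreakMode &Mode,
                                        RegClassVector &CriticalPathRCs) const {
  Mode = ANTIDEP_CRITICAL;  // break anti-dependences on the critical path only
  CriticalPathRCs.clear();
  return OptLevel >= CodeGenOpt::Default;
}
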
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/CMakeLists.txt b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/CMakeLists.txt
index 8aec1e8..b70a587 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/CMakeLists.txt
@@ -1,8 +1,8 @@
 include_directories( ${CMAKE_CURRENT_BINARY_DIR}/.. ${CMAKE_CURRENT_SOURCE_DIR}/.. )
 
 add_llvm_library(LLVMX86AsmPrinter
-  X86AsmPrinter.cpp
   X86ATTInstPrinter.cpp
+  X86AsmPrinter.cpp
   X86IntelInstPrinter.cpp
   X86MCInstLower.cpp
   )
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.cpp b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.cpp
index bc70ffe..8ec5b62 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.cpp
@@ -57,9 +57,7 @@ void X86ATTInstPrinter::print_pcrel_imm(const MCInst *MI, unsigned OpNo) {
   }
 }
 
-void X86ATTInstPrinter::printOperand(const MCInst *MI, unsigned OpNo,
-                                     const char *Modifier) {
-  assert(Modifier == 0 && "Modifiers should not be used");
+void X86ATTInstPrinter::printOperand(const MCInst *MI, unsigned OpNo) {
   
   const MCOperand &Op = MI->getOperand(OpNo);
   if (Op.isReg()) {
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.h b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.h
index 5f28fa4..3180618 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.h
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.h
@@ -32,8 +32,7 @@ public:
   static const char *getRegisterName(unsigned RegNo);
 
 
-  void printOperand(const MCInst *MI, unsigned OpNo,
-                    const char *Modifier = 0);
+  void printOperand(const MCInst *MI, unsigned OpNo);
   void printMemReference(const MCInst *MI, unsigned Op);
   void printLeaMemReference(const MCInst *MI, unsigned Op);
   void printSSECC(const MCInst *MI, unsigned Op);
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86AsmPrinter.cpp b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86AsmPrinter.cpp
index 4f89b71..b88063f 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86AsmPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86AsmPrinter.cpp
@@ -165,13 +165,7 @@ bool X86AsmPrinter::runOnMachineFunction(MachineFunction &MF) {
   for (MachineFunction::const_iterator I = MF.begin(), E = MF.end();
        I != E; ++I) {
     // Print a label for the basic block.
-    if (!VerboseAsm && (I->pred_empty() || I->isOnlyReachableByFallthrough())) {
-      // This is an entry block or a block that's only reachable via a
-      // fallthrough edge. In non-VerboseAsm mode, don't print the label.
-    } else {
-      EmitBasicBlockStart(I);
-      O << '\n';
-    }
+    EmitBasicBlockStart(I);
     for (MachineBasicBlock::const_iterator II = I->begin(), IE = I->end();
          II != IE; ++II) {
       // Print the assembly for the instruction.
@@ -231,7 +225,7 @@ void X86AsmPrinter::printSymbolOperand(const MachineOperand &MO) {
     
     std::string Name = Mang->getMangledName(GV, Suffix, Suffix[0] != '\0');
     if (Subtarget->isTargetCygMing()) {
-      X86COFFMachineModuleInfo &COFFMMI = 
+      X86COFFMachineModuleInfo &COFFMMI =
         MMI->getObjFileInfo<X86COFFMachineModuleInfo>();
       COFFMMI.DecorateCygMingName(Name, GV, *TM.getTargetData());
     }
@@ -294,12 +288,12 @@ void X86AsmPrinter::printSymbolOperand(const MachineOperand &MO) {
     std::string Name = Mang->makeNameProper(MO.getSymbolName());
     if (MO.getTargetFlags() == X86II::MO_DARWIN_STUB) {
       Name += "$stub";
-      MCSymbol *Sym = OutContext.GetOrCreateSymbol(Name);
+      MCSymbol *Sym = OutContext.GetOrCreateSymbol(StringRef(Name));
       const MCSymbol *&StubSym =
         MMI->getObjFileInfo<MachineModuleInfoMachO>().getFnStubEntry(Sym);
       if (StubSym == 0) {
         Name.erase(Name.end()-5, Name.end());
-        StubSym = OutContext.GetOrCreateSymbol(Name);
+        StubSym = OutContext.GetOrCreateSymbol(StringRef(Name));
       }
     }
     
@@ -653,13 +647,15 @@ bool X86AsmPrinter::PrintAsmMemoryOperand(const MachineInstr *MI,
 void X86AsmPrinter::printMachineInstruction(const MachineInstr *MI) {
   ++EmittedInsts;
 
-  processDebugLoc(MI);
+  processDebugLoc(MI, true);
   
   printInstructionThroughMCStreamer(MI);
   
-  if (VerboseAsm && !MI->getDebugLoc().isUnknown())
+  if (VerboseAsm)
     EmitComments(*MI);
   O << '\n';
+
+  processDebugLoc(MI, false);
 }
 
 void X86AsmPrinter::PrintGlobalVariable(const GlobalVariable* GVar) {
@@ -874,49 +870,55 @@ void X86AsmPrinter::EmitEndOfAsmFile(Module &M) {
     // implementation of multiple entry points).  If this doesn't occur, the
     // linker can safely perform dead code stripping.  Since LLVM never
     // generates code that does this, it is always safe to set.
-    O << "\t.subsections_via_symbols\n";
-  }  
-  
-  if (Subtarget->isTargetCOFF()) {
-    // Necessary for dllexport support
-    std::vector<std::string> DLLExportedFns, DLLExportedGlobals;
+    OutStreamer.EmitAssemblerFlag(MCStreamer::SubsectionsViaSymbols);
+  }
 
-    X86COFFMachineModuleInfo &COFFMMI = 
+  if (Subtarget->isTargetCOFF()) {
+    X86COFFMachineModuleInfo &COFFMMI =
       MMI->getObjFileInfo<X86COFFMachineModuleInfo>();
-    TargetLoweringObjectFileCOFF &TLOFCOFF = 
-      static_cast<TargetLoweringObjectFileCOFF&>(getObjFileLowering());
 
-    for (Module::const_iterator I = M.begin(), E = M.end(); I != E; ++I)
-      if (I->hasDLLExportLinkage())
-        DLLExportedFns.push_back(Mang->getMangledName(I));
-    
-    for (Module::const_global_iterator I = M.global_begin(), E = M.global_end();
-         I != E; ++I)
-      if (I->hasDLLExportLinkage())
-        DLLExportedGlobals.push_back(Mang->getMangledName(I));
-    
-    if (Subtarget->isTargetCygMing()) {
-      // Emit type information for external functions
-      for (X86COFFMachineModuleInfo::stub_iterator I = COFFMMI.stub_begin(),
+    // Emit type information for external functions
+    for (X86COFFMachineModuleInfo::stub_iterator I = COFFMMI.stub_begin(),
            E = COFFMMI.stub_end(); I != E; ++I) {
-        O << "\t.def\t " << I->getKeyData()
+      O << "\t.def\t " << I->getKeyData()
         << ";\t.scl\t" << COFF::C_EXT
         << ";\t.type\t" << (COFF::DT_FCN << COFF::N_BTSHFT)
         << ";\t.endef\n";
-      }
     }
-  
-    // Output linker support code for dllexported globals on windows.
-    if (!DLLExportedGlobals.empty() || !DLLExportedFns.empty()) {
-      OutStreamer.SwitchSection(TLOFCOFF.getCOFFSection(".section .drectve",
-                                                        true,
+
+    if (Subtarget->isTargetCygMing()) {
+      // Necessary for dllexport support
+      std::vector<std::string> DLLExportedFns, DLLExportedGlobals;
+
+      TargetLoweringObjectFileCOFF &TLOFCOFF =
+        static_cast<TargetLoweringObjectFileCOFF&>(getObjFileLowering());
+
+      for (Module::const_iterator I = M.begin(), E = M.end(); I != E; ++I)
+        if (I->hasDLLExportLinkage()) {
+          std::string Name = Mang->getMangledName(I);
+          COFFMMI.DecorateCygMingName(Name, I, *TM.getTargetData());
+          DLLExportedFns.push_back(Name);
+        }
+
+      for (Module::const_global_iterator I = M.global_begin(),
+             E = M.global_end(); I != E; ++I)
+        if (I->hasDLLExportLinkage()) {
+          std::string Name = Mang->getMangledName(I);
+          COFFMMI.DecorateCygMingName(Name, I, *TM.getTargetData());
+          DLLExportedGlobals.push_back(Name);
+        }
+
+      // Output linker support code for dllexported globals on windows.
+      if (!DLLExportedGlobals.empty() || !DLLExportedFns.empty()) {
+        OutStreamer.SwitchSection(TLOFCOFF.getCOFFSection(".section .drectve",
+                                                          true,
                                                    SectionKind::getMetadata()));
-    
-      for (unsigned i = 0, e = DLLExportedGlobals.size(); i != e; ++i)
-        O << "\t.ascii \" -export:" << DLLExportedGlobals[i] << ",data\"\n";
-    
-      for (unsigned i = 0, e = DLLExportedFns.size(); i != e; ++i)
-        O << "\t.ascii \" -export:" << DLLExportedFns[i] << "\"\n";
+        for (unsigned i = 0, e = DLLExportedGlobals.size(); i != e; ++i)
+          O << "\t.ascii \" -export:" << DLLExportedGlobals[i] << ",data\"\n";
+
+        for (unsigned i = 0, e = DLLExportedFns.size(); i != e; ++i)
+          O << "\t.ascii \" -export:" << DLLExportedFns[i] << "\"\n";
+      }
     }
   }
 }
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.cpp b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.cpp
index 5ccddf5..38c0c28 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.cpp
@@ -21,6 +21,7 @@
 #include "llvm/MC/MCExpr.h"
 #include "llvm/MC/MCInst.h"
 #include "llvm/MC/MCStreamer.h"
+#include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/FormattedStream.h"
 #include "llvm/Support/Mangler.h"
 #include "llvm/ADT/SmallString.h"
@@ -38,13 +39,10 @@ MachineModuleInfoMachO &X86MCInstLower::getMachOMMI() const {
 
 
 MCSymbol *X86MCInstLower::GetPICBaseSymbol() const {
-  SmallString<60> Name;
-  raw_svector_ostream(Name) << AsmPrinter.MAI->getPrivateGlobalPrefix()
-    << AsmPrinter.getFunctionNumber() << "$pb";
-  return Ctx.GetOrCreateSymbol(Name.str());
+  return Ctx.GetOrCreateSymbol(Twine(AsmPrinter.MAI->getPrivateGlobalPrefix())+
+                               Twine(AsmPrinter.getFunctionNumber())+"$pb");
 }
 
-
 /// LowerGlobalAddressOperand - Lower an MO_GlobalAddress operand to an
 /// MCOperand.
 MCSymbol *X86MCInstLower::
@@ -232,6 +230,19 @@ GetConstantPoolIndexSymbol(const MachineOperand &MO) const {
   return Ctx.GetOrCreateSymbol(Name.str());
 }
 
+MCSymbol *X86MCInstLower::
+GetBlockAddressSymbol(const MachineOperand &MO) const {
+  const char *Suffix = "";
+  switch (MO.getTargetFlags()) {
+  default: llvm_unreachable("Unknown target flag on BA operand");
+  case X86II::MO_NO_FLAG:         break; // No flag.
+  case X86II::MO_PIC_BASE_OFFSET: break; // Doesn't modify symbol name.
+  case X86II::MO_GOTOFF: Suffix = "@GOTOFF"; break;
+  }
+
+  return AsmPrinter.GetBlockAddressSymbol(MO.getBlockAddress(), Suffix);
+}
+
 MCOperand X86MCInstLower::LowerSymbolOperand(const MachineOperand &MO,
                                              MCSymbol *Sym) const {
   // FIXME: We would like an efficient form for this, so we don't have to do a
@@ -308,6 +319,8 @@ void X86MCInstLower::Lower(const MachineInstr *MI, MCInst &OutMI) const {
       MI->dump();
       llvm_unreachable("unknown operand type");
     case MachineOperand::MO_Register:
+      // Ignore all implicit register operands.
+      if (MO.isImplicit()) continue;
       MCOp = MCOperand::CreateReg(MO.getReg());
       break;
     case MachineOperand::MO_Immediate:
@@ -329,6 +342,9 @@ void X86MCInstLower::Lower(const MachineInstr *MI, MCInst &OutMI) const {
     case MachineOperand::MO_ConstantPoolIndex:
       MCOp = LowerSymbolOperand(MO, GetConstantPoolIndexSymbol(MO));
       break;
+    case MachineOperand::MO_BlockAddress:
+      MCOp = LowerSymbolOperand(MO, GetBlockAddressSymbol(MO));
+      break;
     }
     
     OutMI.addOperand(MCOp);
@@ -401,13 +417,13 @@ void X86AsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
     printLabel(MI);
     return;
   case TargetInstrInfo::INLINEASM:
-    O << '\t';
     printInlineAsm(MI);
     return;
   case TargetInstrInfo::IMPLICIT_DEF:
     printImplicitDef(MI);
     return;
   case TargetInstrInfo::KILL:
+    printKill(MI);
     return;
   case X86::MOVPC32r: {
     MCInst TmpInst;
@@ -449,10 +465,9 @@ void X86AsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
     //   MYGLOBAL + (. - PICBASE)
     // However, we can't generate a ".", so just emit a new label here and refer
     // to it.  We know that this operand flag occurs at most once per function.
-    SmallString<64> Name;
-    raw_svector_ostream(Name) << MAI->getPrivateGlobalPrefix()
-      << "picbaseref" << getFunctionNumber();
-    MCSymbol *DotSym = OutContext.GetOrCreateSymbol(Name.str());
+    const char *Prefix = MAI->getPrivateGlobalPrefix();
+    MCSymbol *DotSym = OutContext.GetOrCreateSymbol(Twine(Prefix)+"picbaseref"+
+                                                    Twine(getFunctionNumber()));
     OutStreamer.EmitLabel(DotSym);
     
     // Now that we have emitted the label, lower the complex operand expression.
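
The GetPICBaseSymbol and picbaseref rewrites above drop the
SmallString/raw_svector_ostream pairs in favor of Twine expressions. A Twine
is a lazy concatenation: it records references to its operands and renders
them only when consumed (here by GetOrCreateSymbol). A small sketch of the
same pattern; the helper name is hypothetical and the snippet assumes LLVM's
ADT headers:

    #include "llvm/ADT/Twine.h"
    #include <cstdio>
    #include <string>

    // Render "<prefix><number>$pb" without an intermediate stream. The
    // Twine must be consumed before its operands go out of scope.
    static std::string makePICBaseName(const char *PrivatePrefix,
                                       unsigned FunctionNumber) {
      return (llvm::Twine(PrivatePrefix) + llvm::Twine(FunctionNumber) +
              "$pb").str();
    }

    int main() {
      std::printf("%s\n", makePICBaseName("L", 7).c_str()); // prints L7$pb
      return 0;
    }
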
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.h b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.h
index fa25b90..94f8bfc 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.h
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.h
@@ -43,6 +43,7 @@ public:
   MCSymbol *GetExternalSymbolSymbol(const MachineOperand &MO) const;
   MCSymbol *GetJumpTableSymbol(const MachineOperand &MO) const;
   MCSymbol *GetConstantPoolIndexSymbol(const MachineOperand &MO) const;
+  MCSymbol *GetBlockAddressSymbol(const MachineOperand &MO) const;
   MCOperand LowerSymbolOperand(const MachineOperand &MO, MCSymbol *Sym) const;
   
 private:
diff --git a/libclamav/c++/llvm/lib/Target/X86/Disassembler/CMakeLists.txt b/libclamav/c++/llvm/lib/Target/X86/Disassembler/CMakeLists.txt
new file mode 100644
index 0000000..b329e89
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/X86/Disassembler/CMakeLists.txt
@@ -0,0 +1,6 @@
+include_directories( ${CMAKE_CURRENT_BINARY_DIR}/.. ${CMAKE_CURRENT_SOURCE_DIR}/.. )
+
+add_llvm_library(LLVMX86Disassembler
+  X86Disassembler.cpp
+  )
+add_dependencies(LLVMX86Disassembler X86CodeGenTable_gen)
diff --git a/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86Disassembler.cpp b/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86Disassembler.cpp
new file mode 100644
index 0000000..2ebbc9b
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/X86/Disassembler/X86Disassembler.cpp
@@ -0,0 +1,29 @@
+//===- X86Disassembler.cpp - Disassembler for x86 and x86_64 ----*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/MC/MCDisassembler.h"
+#include "llvm/Target/TargetRegistry.h"
+#include "X86.h"
+using namespace llvm;
+
+static const MCDisassembler *createX86_32Disassembler(const Target &T) {
+  return 0;
+}
+
+static const MCDisassembler *createX86_64Disassembler(const Target &T) {
+  return 0;
+}
+
+extern "C" void LLVMInitializeX86Disassembler() {
+  // Register the disassembler.
+  TargetRegistry::RegisterMCDisassembler(TheX86_32Target,
+                                         createX86_32Disassembler);
+  TargetRegistry::RegisterMCDisassembler(TheX86_64Target,
+                                         createX86_64Disassembler);
+}
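
The new disassembler file is deliberately a stub: it registers factory
callbacks that still return 0, so both x86 targets become queryable before a
real decoder exists. Underneath, TargetRegistry is essentially a factory
registry; here is a generic sketch of that idea with hypothetical names, not
the LLVM API itself:

    #include <cstdio>
    #include <map>
    #include <string>

    struct Disassembler { virtual ~Disassembler() {} };

    typedef const Disassembler *(*DisasmFactory)();
    static std::map<std::string, DisasmFactory> Registry;

    static void registerFactory(const std::string &Name, DisasmFactory F) {
      Registry[Name] = F; // later registrations overwrite earlier ones
    }

    static const Disassembler *createX86_32() { return 0; } // stub for now

    int main() {
      registerFactory("x86", createX86_32);
      const Disassembler *D = Registry["x86"]();
      std::printf("got %s\n", D ? "a disassembler" : "a null stub");
      return 0;
    }
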
diff --git a/libclamav/c++/llvm/lib/Target/X86/Makefile b/libclamav/c++/llvm/lib/Target/X86/Makefile
index 220831d..b311a6e 100644
--- a/libclamav/c++/llvm/lib/Target/X86/Makefile
+++ b/libclamav/c++/llvm/lib/Target/X86/Makefile
@@ -18,6 +18,6 @@ BUILT_SOURCES = X86GenRegisterInfo.h.inc X86GenRegisterNames.inc \
                 X86GenFastISel.inc \
                 X86GenCallingConv.inc X86GenSubtarget.inc
 
-DIRS = AsmPrinter AsmParser TargetInfo
+DIRS = AsmPrinter AsmParser Disassembler TargetInfo
 
 include $(LEVEL)/Makefile.common
diff --git a/libclamav/c++/llvm/lib/Target/X86/README.txt b/libclamav/c++/llvm/lib/Target/X86/README.txt
index 046d35c..9b7aab8 100644
--- a/libclamav/c++/llvm/lib/Target/X86/README.txt
+++ b/libclamav/c++/llvm/lib/Target/X86/README.txt
@@ -1952,3 +1952,5 @@ fact these instructions are identical to the non-lock versions. We need a way to
 add target specific information to target nodes and have this information
 carried over to machine instructions. Asm printer (or JIT) can use this
 information to add the "lock" prefix.
+
+//===---------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86CodeEmitter.cpp b/libclamav/c++/llvm/lib/Target/X86/X86CodeEmitter.cpp
index 4c12edd..4892e17 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86CodeEmitter.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86CodeEmitter.cpp
@@ -19,6 +19,7 @@
 #include "X86TargetMachine.h"
 #include "X86Relocations.h"
 #include "X86.h"
+#include "llvm/LLVMContext.h"
 #include "llvm/PassManager.h"
 #include "llvm/CodeGen/MachineCodeEmitter.h"
 #include "llvm/CodeGen/JITCodeEmitter.h"
@@ -32,7 +33,6 @@
 #include "llvm/MC/MCCodeEmitter.h"
 #include "llvm/MC/MCExpr.h"
 #include "llvm/MC/MCInst.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
@@ -43,7 +43,7 @@ STATISTIC(NumEmitted, "Number of machine instructions emitted");
 
 namespace {
   template<class CodeEmitter>
-  class VISIBILITY_HIDDEN Emitter : public MachineFunctionPass {
+  class Emitter : public MachineFunctionPass {
     const X86InstrInfo  *II;
     const TargetData    *TD;
     X86TargetMachine    &TM;
@@ -82,7 +82,7 @@ namespace {
     void emitPCRelativeBlockAddress(MachineBasicBlock *MBB);
     void emitGlobalAddress(GlobalValue *GV, unsigned Reloc,
                            intptr_t Disp = 0, intptr_t PCAdj = 0,
-                           bool NeedStub = false, bool Indirect = false);
+                           bool Indirect = false);
     void emitExternalSymbolAddress(const char *ES, unsigned Reloc);
     void emitConstPoolAddress(unsigned CPI, unsigned Reloc, intptr_t Disp = 0,
                               intptr_t PCAdj = 0);
@@ -176,7 +176,6 @@ template<class CodeEmitter>
 void Emitter<CodeEmitter>::emitGlobalAddress(GlobalValue *GV, unsigned Reloc,
                                 intptr_t Disp /* = 0 */,
                                 intptr_t PCAdj /* = 0 */,
-                                bool NeedStub /* = false */,
                                 bool Indirect /* = false */) {
   intptr_t RelocCST = Disp;
   if (Reloc == X86::reloc_picrel_word)
@@ -185,9 +184,9 @@ void Emitter<CodeEmitter>::emitGlobalAddress(GlobalValue *GV, unsigned Reloc,
     RelocCST = PCAdj;
   MachineRelocation MR = Indirect
     ? MachineRelocation::getIndirectSymbol(MCE.getCurrentPCOffset(), Reloc,
-                                           GV, RelocCST, NeedStub)
+                                           GV, RelocCST, false)
     : MachineRelocation::getGV(MCE.getCurrentPCOffset(), Reloc,
-                               GV, RelocCST, NeedStub);
+                               GV, RelocCST, false);
   MCE.addRelocation(MR);
   // The relocated value will be added to the displacement
   if (Reloc == X86::reloc_absolute_dword)
@@ -333,10 +332,9 @@ void Emitter<CodeEmitter>::emitDisplacementField(const MachineOperand *RelocOp,
    // do it; otherwise fall back to absolute (this is determined by IsPCRel).
     //  89 05 00 00 00 00     mov    %eax,0(%rip)  # PC-relative
     //  89 04 25 00 00 00 00  mov    %eax,0x0      # Absolute
-    bool NeedStub = isa<Function>(RelocOp->getGlobal());
     bool Indirect = gvNeedsNonLazyPtr(*RelocOp, TM);
     emitGlobalAddress(RelocOp->getGlobal(), RelocType, RelocOp->getOffset(),
-                      Adj, NeedStub, Indirect);
+                      Adj, Indirect);
   } else if (RelocOp->isSymbol()) {
     emitExternalSymbolAddress(RelocOp->getSymbolName(), RelocType);
   } else if (RelocOp->isCPI()) {
@@ -481,7 +479,7 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
                                            const TargetInstrDesc *Desc) {
   DEBUG(errs() << MI);
 
-  MCE.processDebugLoc(MI.getDebugLoc());
+  MCE.processDebugLoc(MI.getDebugLoc(), true);
 
   unsigned Opcode = Desc->Opcode;
 
@@ -587,8 +585,8 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
     case TargetInstrInfo::INLINEASM:
       // We allow inline assembler nodes with empty bodies - they can
       // implicitly define registers, which is ok for JIT.
-      assert(MI.getOperand(0).getSymbolName()[0] == 0 && 
-             "JIT does not support inline asm!");
+      if (MI.getOperand(0).getSymbolName()[0])
+        llvm_report_error("JIT does not support inline asm!");
       break;
     case TargetInstrInfo::DBG_LABEL:
     case TargetInstrInfo::EH_LABEL:
@@ -597,7 +595,6 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
       break;
     case TargetInstrInfo::IMPLICIT_DEF:
     case TargetInstrInfo::KILL:
-    case X86::DWARF_LOC:
     case X86::FP_REG_KILL:
       break;
     case X86::MOVPC32r: {
@@ -633,14 +630,8 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
     }
     
     if (MO.isGlobal()) {
-      // Assume undefined functions may be outside the Small codespace.
-      bool NeedStub = 
-        (Is64BitMode && 
-            (TM.getCodeModel() == CodeModel::Large ||
-             TM.getSubtarget<X86Subtarget>().isTargetDarwin())) ||
-        Opcode == X86::TAILJMPd;
       emitGlobalAddress(MO.getGlobal(), X86::reloc_pcrel_word,
-                        MO.getOffset(), 0, NeedStub);
+                        MO.getOffset(), 0);
       break;
     }
     
@@ -681,10 +672,9 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
     if (Opcode == X86::MOV64ri)
       rt = X86::reloc_absolute_dword;  // FIXME: add X86II flag?
     if (MO1.isGlobal()) {
-      bool NeedStub = isa<Function>(MO1.getGlobal());
       bool Indirect = gvNeedsNonLazyPtr(MO1, TM);
       emitGlobalAddress(MO1.getGlobal(), rt, MO1.getOffset(), 0,
-                        NeedStub, Indirect);
+                        Indirect);
     } else if (MO1.isSymbol())
       emitExternalSymbolAddress(MO1.getSymbolName(), rt);
     else if (MO1.isCPI())
@@ -790,10 +780,9 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
     if (Opcode == X86::MOV64ri32)
       rt = X86::reloc_absolute_word_sext;  // FIXME: add X86II flag?
     if (MO1.isGlobal()) {
-      bool NeedStub = isa<Function>(MO1.getGlobal());
       bool Indirect = gvNeedsNonLazyPtr(MO1, TM);
       emitGlobalAddress(MO1.getGlobal(), rt, MO1.getOffset(), 0,
-                        NeedStub, Indirect);
+                        Indirect);
     } else if (MO1.isSymbol())
       emitExternalSymbolAddress(MO1.getSymbolName(), rt);
     else if (MO1.isCPI())
@@ -831,10 +820,9 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
     if (Opcode == X86::MOV64mi32)
       rt = X86::reloc_absolute_word_sext;  // FIXME: add X86II flag?
     if (MO.isGlobal()) {
-      bool NeedStub = isa<Function>(MO.getGlobal());
       bool Indirect = gvNeedsNonLazyPtr(MO, TM);
       emitGlobalAddress(MO.getGlobal(), rt, MO.getOffset(), 0,
-                        NeedStub, Indirect);
+                        Indirect);
     } else if (MO.isSymbol())
       emitExternalSymbolAddress(MO.getSymbolName(), rt);
     else if (MO.isCPI())
@@ -859,6 +847,8 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
 #endif
     llvm_unreachable(0);
   }
+
+  MCE.processDebugLoc(MI.getDebugLoc(), false);
 }
 
 // Adapt the Emitter / CodeEmitter interfaces to MCCodeEmitter.
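
Two behavior changes in this file are easy to miss: processDebugLoc is now
called both before (true) and after (false) each instruction, bracketing its
emission, and the inline-asm check became llvm_report_error instead of an
assert, so malformed input still fails in NDEBUG builds where asserts compile
away. A sketch of that second distinction; reportError below is a stand-in
for llvm_report_error:

    #include <cstdio>
    #include <cstdlib>

    static void reportError(const char *Msg) { // stand-in for llvm_report_error
      std::fprintf(stderr, "error: %s\n", Msg);
      std::exit(1);
    }

    static void emitInlineAsm(const char *Body) {
      // An assert() here would vanish under -DNDEBUG and let bad input
      // through; a hard error fires in every build configuration.
      if (Body[0] != '\0')
        reportError("JIT does not support inline asm!");
      // Empty-bodied inline asm is fine: it may only define registers.
    }

    int main() {
      emitInlineAsm("");    // allowed
      emitInlineAsm("nop"); // reports the error and exits
      return 0;
    }
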
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86FastISel.cpp b/libclamav/c++/llvm/lib/Target/X86/X86FastISel.cpp
index ef931bd..431c120 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86FastISel.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86FastISel.cpp
@@ -1058,9 +1058,9 @@ bool X86FastISel::X86SelectSelect(Instruction *I) {
 bool X86FastISel::X86SelectFPExt(Instruction *I) {
   // fpext from float to double.
   if (Subtarget->hasSSE2() &&
-      I->getType() == Type::getDoubleTy(I->getContext())) {
+      I->getType()->isDoubleTy()) {
     Value *V = I->getOperand(0);
-    if (V->getType() == Type::getFloatTy(I->getContext())) {
+    if (V->getType()->isFloatTy()) {
       unsigned OpReg = getRegForValue(V);
       if (OpReg == 0) return false;
       unsigned ResultReg = createResultReg(X86::FR64RegisterClass);
@@ -1075,9 +1075,9 @@ bool X86FastISel::X86SelectFPExt(Instruction *I) {
 
 bool X86FastISel::X86SelectFPTrunc(Instruction *I) {
   if (Subtarget->hasSSE2()) {
-    if (I->getType() == Type::getFloatTy(I->getContext())) {
+    if (I->getType()->isFloatTy()) {
       Value *V = I->getOperand(0);
-      if (V->getType() == Type::getDoubleTy(I->getContext())) {
+      if (V->getType()->isDoubleTy()) {
         unsigned OpReg = getRegForValue(V);
         if (OpReg == 0) return false;
         unsigned ResultReg = createResultReg(X86::FR32RegisterClass);
@@ -1244,7 +1244,7 @@ bool X86FastISel::X86SelectCall(Instruction *I) {
   // Handle *simple* calls for now.
   const Type *RetTy = CS.getType();
   EVT RetVT;
-  if (RetTy == Type::getVoidTy(I->getContext()))
+  if (RetTy->isVoidTy())
     RetVT = MVT::isVoid;
   else if (!isTypeLegal(RetTy, RetVT, true))
     return false;
@@ -1493,7 +1493,7 @@ bool X86FastISel::X86SelectCall(Instruction *I) {
       EVT ResVT = RVLocs[0].getValVT();
       unsigned Opc = ResVT == MVT::f32 ? X86::ST_Fp80m32 : X86::ST_Fp80m64;
       unsigned MemSize = ResVT.getSizeInBits()/8;
-      int FI = MFI.CreateStackObject(MemSize, MemSize);
+      int FI = MFI.CreateStackObject(MemSize, MemSize, false);
       addFrameReference(BuildMI(MBB, DL, TII.get(Opc)), FI).addReg(ResultReg);
       DstRC = ResVT == MVT::f32
         ? X86::FR32RegisterClass : X86::FR64RegisterClass;
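
The FastISel edits above replace pointer-identity checks against per-context
singleton types (Type::getFloatTy/getDoubleTy) with the predicate form
(isFloatTy/isDoubleTy), which reads better and needs no LLVMContext in scope.
A toy model of the difference, using hypothetical types rather than the LLVM
classes:

    #include <cassert>

    struct Ctx;
    struct Type {
      enum Kind { FloatK, DoubleK };
      Kind K;
      explicit Type(Kind K) : K(K) {}
      bool isDoubleTy() const { return K == DoubleK; } // new style: no context
      static Type *getDoubleTy(Ctx &C);                // old style: needs one
    };

    struct Ctx {
      Type FloatTy, DoubleTy;
      Ctx() : FloatTy(Type::FloatK), DoubleTy(Type::DoubleK) {}
    };

    Type *Type::getDoubleTy(Ctx &C) { return &C.DoubleTy; }

    int main() {
      Ctx C;
      Type *T = &C.DoubleTy;
      assert(T == Type::getDoubleTy(C)); // identity against the singleton
      assert(T->isDoubleTy());           // self-describing predicate
      return 0;
    }
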
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86FloatingPoint.cpp b/libclamav/c++/llvm/lib/Target/X86/X86FloatingPoint.cpp
index d9a05a8..a2fe9b0 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86FloatingPoint.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86FloatingPoint.cpp
@@ -40,7 +40,6 @@
 #include "llvm/CodeGen/MachineInstrBuilder.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
 #include "llvm/CodeGen/Passes.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
@@ -53,7 +52,7 @@ STATISTIC(NumFXCH, "Number of fxch instructions inserted");
 STATISTIC(NumFP  , "Number of floating point instructions");
 
 namespace {
-  struct VISIBILITY_HIDDEN FPS : public MachineFunctionPass {
+  struct FPS : public MachineFunctionPass {
     static char ID;
     FPS() : MachineFunctionPass(&ID) {}
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86FloatingPointRegKill.cpp b/libclamav/c++/llvm/lib/Target/X86/X86FloatingPointRegKill.cpp
index 3e0385c..34a0045 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86FloatingPointRegKill.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86FloatingPointRegKill.cpp
@@ -22,7 +22,6 @@
 #include "llvm/CodeGen/Passes.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Support/Debug.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/CFG.h"
 #include "llvm/ADT/Statistic.h"
 using namespace llvm;
@@ -30,7 +29,7 @@ using namespace llvm;
 STATISTIC(NumFPKill, "Number of FP_REG_KILL instructions added");
 
 namespace {
-  struct VISIBILITY_HIDDEN FPRegKiller : public MachineFunctionPass {
+  struct FPRegKiller : public MachineFunctionPass {
     static char ID;
     FPRegKiller() : MachineFunctionPass(&ID) {}
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86ISelDAGToDAG.cpp b/libclamav/c++/llvm/lib/Target/X86/X86ISelDAGToDAG.cpp
index 71b4062..a9a78be 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86ISelDAGToDAG.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86ISelDAGToDAG.cpp
@@ -12,6 +12,15 @@
 //
 //===----------------------------------------------------------------------===//
 
+// Force NDEBUG on in any optimized build on Darwin.
+//
+// FIXME: This is a huge hack, to work around ridiculously awful compile times
+// on this file with gcc-4.2 on Darwin, in Release mode.
+#if (!defined(__llvm__) && defined(__APPLE__) && \
+     defined(__OPTIMIZE__) && !defined(NDEBUG))
+#define NDEBUG
+#endif
+
 #define DEBUG_TYPE "x86-isel"
 #include "X86.h"
 #include "X86InstrBuilder.h"
@@ -33,7 +42,6 @@
 #include "llvm/CodeGen/SelectionDAGISel.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetOptions.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/MathExtras.h"
@@ -72,6 +80,7 @@ namespace {
     SDValue Segment;
     GlobalValue *GV;
     Constant *CP;
+    BlockAddress *BlockAddr;
     const char *ES;
     int JT;
     unsigned Align;    // CP alignment.
@@ -79,12 +88,12 @@ namespace {
 
     X86ISelAddressMode()
       : BaseType(RegBase), Scale(1), IndexReg(), Disp(0),
-        Segment(), GV(0), CP(0), ES(0), JT(-1), Align(0),
+        Segment(), GV(0), CP(0), BlockAddr(0), ES(0), JT(-1), Align(0),
         SymbolFlags(X86II::MO_NO_FLAG) {
     }
 
     bool hasSymbolicDisplacement() const {
-      return GV != 0 || CP != 0 || ES != 0 || JT != -1;
+      return GV != 0 || CP != 0 || ES != 0 || JT != -1 || BlockAddr != 0;
     }
     
     bool hasBaseOrIndexReg() const {
@@ -147,7 +156,7 @@ namespace {
   /// ISel - X86 specific code to select X86 machine instructions for
   /// SelectionDAG operations.
   ///
-  class VISIBILITY_HIDDEN X86DAGToDAGISel : public SelectionDAGISel {
+  class X86DAGToDAGISel : public SelectionDAGISel {
     /// X86Lowering - This object fully describes how to lower LLVM code to an
     /// X86-specific SelectionDAG.
     X86TargetLowering &X86Lowering;
@@ -242,6 +251,9 @@ namespace {
         Disp = CurDAG->getTargetExternalSymbol(AM.ES, MVT::i32, AM.SymbolFlags);
       else if (AM.JT != -1)
         Disp = CurDAG->getTargetJumpTable(AM.JT, MVT::i32, AM.SymbolFlags);
+      else if (AM.BlockAddr)
+        Disp = CurDAG->getBlockAddress(AM.BlockAddr, MVT::i32,
+                                       true, AM.SymbolFlags);
       else
         Disp = CurDAG->getTargetConstant(AM.Disp, MVT::i32);
 
@@ -658,7 +670,6 @@ void X86DAGToDAGISel::InstructionSelect() {
   const Function *F = MF->getFunction();
   OptForSize = F->hasFnAttr(Attribute::OptimizeForSize);
 
-  DEBUG(BB->dump());
   if (OptLevel != CodeGenOpt::None)
     PreprocessForRMW();
 
@@ -761,10 +772,12 @@ bool X86DAGToDAGISel::MatchWrapper(SDValue N, X86ISelAddressMode &AM) {
     } else if (ExternalSymbolSDNode *S = dyn_cast<ExternalSymbolSDNode>(N0)) {
       AM.ES = S->getSymbol();
       AM.SymbolFlags = S->getTargetFlags();
-    } else {
-      JumpTableSDNode *J = cast<JumpTableSDNode>(N0);
+    } else if (JumpTableSDNode *J = dyn_cast<JumpTableSDNode>(N0)) {
       AM.JT = J->getIndex();
       AM.SymbolFlags = J->getTargetFlags();
+    } else {
+      AM.BlockAddr = cast<BlockAddressSDNode>(N0)->getBlockAddress();
+      AM.SymbolFlags = cast<BlockAddressSDNode>(N0)->getTargetFlags();
     }
 
     if (N.getOpcode() == X86ISD::WrapperRIP)
@@ -790,10 +803,12 @@ bool X86DAGToDAGISel::MatchWrapper(SDValue N, X86ISelAddressMode &AM) {
     } else if (ExternalSymbolSDNode *S = dyn_cast<ExternalSymbolSDNode>(N0)) {
       AM.ES = S->getSymbol();
       AM.SymbolFlags = S->getTargetFlags();
-    } else {
-      JumpTableSDNode *J = cast<JumpTableSDNode>(N0);
+    } else if (JumpTableSDNode *J = dyn_cast<JumpTableSDNode>(N0)) {
       AM.JT = J->getIndex();
       AM.SymbolFlags = J->getTargetFlags();
+    } else {
+      AM.BlockAddr = cast<BlockAddressSDNode>(N0)->getBlockAddress();
+      AM.SymbolFlags = cast<BlockAddressSDNode>(N0)->getTargetFlags();
     }
     return false;
   }
@@ -1625,6 +1640,68 @@ SDNode *X86DAGToDAGISel::SelectAtomicLoadAdd(SDNode *Node, EVT NVT) {
   }
 }
 
+/// HasNoSignedComparisonUses - Test whether the given X86ISD::CMP node has
+/// any uses which require the SF or OF bits to be accurate.
+static bool HasNoSignedComparisonUses(SDNode *N) {
+  // Examine each user of the node.
+  for (SDNode::use_iterator UI = N->use_begin(),
+         UE = N->use_end(); UI != UE; ++UI) {
+    // Only examine CopyToReg uses.
+    if (UI->getOpcode() != ISD::CopyToReg)
+      return false;
+    // Only examine CopyToReg uses that copy to EFLAGS.
+    if (cast<RegisterSDNode>(UI->getOperand(1))->getReg() !=
+          X86::EFLAGS)
+      return false;
+    // Examine each user of the CopyToReg use.
+    for (SDNode::use_iterator FlagUI = UI->use_begin(),
+           FlagUE = UI->use_end(); FlagUI != FlagUE; ++FlagUI) {
+      // Only examine the Flag result.
+      if (FlagUI.getUse().getResNo() != 1) continue;
+      // Anything unusual: assume conservatively.
+      if (!FlagUI->isMachineOpcode()) return false;
+      // Examine the opcode of the user.
+      switch (FlagUI->getMachineOpcode()) {
+      // These comparisons don't treat the most significant bit specially.
+      case X86::SETAr: case X86::SETAEr: case X86::SETBr: case X86::SETBEr:
+      case X86::SETEr: case X86::SETNEr: case X86::SETPr: case X86::SETNPr:
+      case X86::SETAm: case X86::SETAEm: case X86::SETBm: case X86::SETBEm:
+      case X86::SETEm: case X86::SETNEm: case X86::SETPm: case X86::SETNPm:
+      case X86::JA: case X86::JAE: case X86::JB: case X86::JBE:
+      case X86::JE: case X86::JNE: case X86::JP: case X86::JNP:
+      case X86::CMOVA16rr: case X86::CMOVA16rm:
+      case X86::CMOVA32rr: case X86::CMOVA32rm:
+      case X86::CMOVA64rr: case X86::CMOVA64rm:
+      case X86::CMOVAE16rr: case X86::CMOVAE16rm:
+      case X86::CMOVAE32rr: case X86::CMOVAE32rm:
+      case X86::CMOVAE64rr: case X86::CMOVAE64rm:
+      case X86::CMOVB16rr: case X86::CMOVB16rm:
+      case X86::CMOVB32rr: case X86::CMOVB32rm:
+      case X86::CMOVB64rr: case X86::CMOVB64rm:
+      case X86::CMOVBE16rr: case X86::CMOVBE16rm:
+      case X86::CMOVBE32rr: case X86::CMOVBE32rm:
+      case X86::CMOVBE64rr: case X86::CMOVBE64rm:
+      case X86::CMOVE16rr: case X86::CMOVE16rm:
+      case X86::CMOVE32rr: case X86::CMOVE32rm:
+      case X86::CMOVE64rr: case X86::CMOVE64rm:
+      case X86::CMOVNE16rr: case X86::CMOVNE16rm:
+      case X86::CMOVNE32rr: case X86::CMOVNE32rm:
+      case X86::CMOVNE64rr: case X86::CMOVNE64rm:
+      case X86::CMOVNP16rr: case X86::CMOVNP16rm:
+      case X86::CMOVNP32rr: case X86::CMOVNP32rm:
+      case X86::CMOVNP64rr: case X86::CMOVNP64rm:
+      case X86::CMOVP16rr: case X86::CMOVP16rm:
+      case X86::CMOVP32rr: case X86::CMOVP32rm:
+      case X86::CMOVP64rr: case X86::CMOVP64rm:
+        continue;
+      // Anything else: assume conservatively.
+      default: return false;
+      }
+    }
+  }
+  return true;
+}
+
 SDNode *X86DAGToDAGISel::Select(SDValue N) {
   SDNode *Node = N.getNode();
   EVT NVT = Node->getValueType(0);
@@ -1881,14 +1958,12 @@ SDNode *X86DAGToDAGISel::Select(SDValue N) {
                             0);
           // We just did a 32-bit clear, insert it into a 64-bit register to
           // clear the whole 64-bit reg.
-          SDValue Undef =
-            SDValue(CurDAG->getMachineNode(TargetInstrInfo::IMPLICIT_DEF,
-                                           dl, MVT::i64), 0);
+          SDValue Zero = CurDAG->getTargetConstant(0, MVT::i64);
           SDValue SubRegNo =
             CurDAG->getTargetConstant(X86::SUBREG_32BIT, MVT::i32);
           ClrNode =
-            SDValue(CurDAG->getMachineNode(TargetInstrInfo::INSERT_SUBREG, dl,
-                                           MVT::i64, Undef, ClrNode, SubRegNo),
+            SDValue(CurDAG->getMachineNode(TargetInstrInfo::SUBREG_TO_REG, dl,
+                                           MVT::i64, Zero, ClrNode, SubRegNo),
                     0);
         } else {
           ClrNode = SDValue(CurDAG->getMachineNode(ClrOpcode, dl, NVT), 0);
@@ -1978,7 +2053,9 @@ SDNode *X86DAGToDAGISel::Select(SDValue N) {
       if (!C) break;
 
       // For example, convert "testl %eax, $8" to "testb %al, $8"
-      if ((C->getZExtValue() & ~UINT64_C(0xff)) == 0) {
+      if ((C->getZExtValue() & ~UINT64_C(0xff)) == 0 &&
+          (!(C->getZExtValue() & 0x80) ||
+           HasNoSignedComparisonUses(Node))) {
         SDValue Imm = CurDAG->getTargetConstant(C->getZExtValue(), MVT::i8);
         SDValue Reg = N0.getNode()->getOperand(0);
 
@@ -2004,7 +2081,9 @@ SDNode *X86DAGToDAGISel::Select(SDValue N) {
       }
 
       // For example, "testl %eax, $2048" to "testb %ah, $8".
-      if ((C->getZExtValue() & ~UINT64_C(0xff00)) == 0) {
+      if ((C->getZExtValue() & ~UINT64_C(0xff00)) == 0 &&
+          (!(C->getZExtValue() & 0x8000) ||
+           HasNoSignedComparisonUses(Node))) {
         // Shift the immediate right by 8 bits.
         SDValue ShiftedImm = CurDAG->getTargetConstant(C->getZExtValue() >> 8,
                                                        MVT::i8);
@@ -2034,7 +2113,9 @@ SDNode *X86DAGToDAGISel::Select(SDValue N) {
 
       // For example, "testl %eax, $32776" to "testw %ax, $32776".
       if ((C->getZExtValue() & ~UINT64_C(0xffff)) == 0 &&
-          N0.getValueType() != MVT::i16) {
+          N0.getValueType() != MVT::i16 &&
+          (!(C->getZExtValue() & 0x8000) ||
+           HasNoSignedComparisonUses(Node))) {
         SDValue Imm = CurDAG->getTargetConstant(C->getZExtValue(), MVT::i16);
         SDValue Reg = N0.getNode()->getOperand(0);
 
@@ -2048,7 +2129,9 @@ SDNode *X86DAGToDAGISel::Select(SDValue N) {
 
       // For example, "testq %rax, $268468232" to "testl %eax, $268468232".
       if ((C->getZExtValue() & ~UINT64_C(0xffffffff)) == 0 &&
-          N0.getValueType() == MVT::i64) {
+          N0.getValueType() == MVT::i64 &&
+          (!(C->getZExtValue() & 0x80000000) ||
+           HasNoSignedComparisonUses(Node))) {
         SDValue Imm = CurDAG->getTargetConstant(C->getZExtValue(), MVT::i32);
         SDValue Reg = N0.getNode()->getOperand(0);
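
The HasNoSignedComparisonUses guard added above is what keeps these test
narrowings safe: shrinking testl $imm, %eax to testb $imm, %al preserves ZF,
CF, OF and PF, but the sign flag is computed from a different bit once the
operand width shrinks, so the transform is only applied when the top bit of
the immediate at the narrow width is clear or when no user reads SF/OF. A
small demonstration of the SF mismatch:

    #include <cstdint>
    #include <cstdio>

    // SF after "test" is the top bit of (val & imm) at the operand width.
    static int sf32(uint32_t V, uint32_t Imm) { return (V & Imm) >> 31; }
    static int sf8(uint8_t V, uint8_t Imm) { return ((V & Imm) >> 7) & 1; }

    int main() {
      uint32_t V = 0x80;
      // testl $0x80, %eax -> SF=0, but testb $0x80, %al -> SF=1.
      std::printf("SF32=%d SF8=%d\n", sf32(V, 0x80),
                  sf8((uint8_t)V, (uint8_t)0x80));
      return 0;
    }
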
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
index de44adf..8567ca4 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -328,11 +328,13 @@ X86TargetLowering::X86TargetLowering(X86TargetMachine &TM)
   if (Subtarget->is64Bit())
     setOperationAction(ISD::GlobalTLSAddress, MVT::i64, Custom);
   setOperationAction(ISD::ExternalSymbol  , MVT::i32  , Custom);
+  setOperationAction(ISD::BlockAddress    , MVT::i32  , Custom);
   if (Subtarget->is64Bit()) {
     setOperationAction(ISD::ConstantPool  , MVT::i64  , Custom);
     setOperationAction(ISD::JumpTable     , MVT::i64  , Custom);
     setOperationAction(ISD::GlobalAddress , MVT::i64  , Custom);
     setOperationAction(ISD::ExternalSymbol, MVT::i64  , Custom);
+    setOperationAction(ISD::BlockAddress  , MVT::i64  , Custom);
   }
   // 64-bit add, sub, shl, sra, srl (iff 32-bit x86)
   setOperationAction(ISD::SHL_PARTS       , MVT::i32  , Custom);
@@ -371,13 +373,10 @@ X86TargetLowering::X86TargetLowering(X86TargetMachine &TM)
     setOperationAction(ISD::ATOMIC_SWAP, MVT::i64, Custom);
   }
 
-  // Use the default ISD::DBG_STOPPOINT.
-  setOperationAction(ISD::DBG_STOPPOINT, MVT::Other, Expand);
   // FIXME - use subtarget debug flags
   if (!Subtarget->isTargetDarwin() &&
       !Subtarget->isTargetELF() &&
       !Subtarget->isTargetCygMing()) {
-    setOperationAction(ISD::DBG_LABEL, MVT::Other, Expand);
     setOperationAction(ISD::EH_LABEL, MVT::Other, Expand);
   }
 
@@ -1085,6 +1084,17 @@ unsigned X86TargetLowering::getFunctionAlignment(const Function *F) const {
 
 #include "X86GenCallingConv.inc"
 
+bool 
+X86TargetLowering::CanLowerReturn(CallingConv::ID CallConv, bool isVarArg,
+                        const SmallVectorImpl<EVT> &OutTys,
+                        const SmallVectorImpl<ISD::ArgFlagsTy> &ArgsFlags,
+                        SelectionDAG &DAG) {
+  SmallVector<CCValAssign, 16> RVLocs;
+  CCState CCInfo(CallConv, isVarArg, getTargetMachine(),
+                 RVLocs, *DAG.getContext());
+  return CCInfo.CheckReturn(OutTys, ArgsFlags, RetCC_X86);
+}
+
 SDValue
 X86TargetLowering::LowerReturn(SDValue Chain,
                                CallingConv::ID CallConv, bool isVarArg,
@@ -1162,6 +1172,9 @@ X86TargetLowering::LowerReturn(SDValue Chain,
 
     Chain = DAG.getCopyToReg(Chain, dl, X86::RAX, Val, Flag);
     Flag = Chain.getValue(1);
+
+    // RAX now acts like a return value.
+    MF.getRegInfo().addLiveOut(X86::RAX);
   }
 
   RetOps[0] = Chain;  // Update chain.
@@ -1365,7 +1378,7 @@ X86TargetLowering::LowerMemArgument(SDValue Chain,
   // In case of tail call optimization, mark all arguments mutable, since they
   // could be overwritten by lowering of arguments in case of a tail call.
   int FI = MFI->CreateFixedObject(ValVT.getSizeInBits()/8,
-                                  VA.getLocMemOffset(), isImmutable);
+                                  VA.getLocMemOffset(), isImmutable, false);
   SDValue FIN = DAG.getFrameIndex(FI, getPointerTy());
   if (Flags.isByVal())
     return FIN;
@@ -1494,7 +1507,7 @@ X86TargetLowering::LowerFormalArguments(SDValue Chain,
   // the start of the first vararg value... for expansion of llvm.va_start.
   if (isVarArg) {
     if (Is64Bit || CallConv != CallingConv::X86_FastCall) {
-      VarArgsFrameIndex = MFI->CreateFixedObject(1, StackSize);
+      VarArgsFrameIndex = MFI->CreateFixedObject(1, StackSize, true, false);
     }
     if (Is64Bit) {
       unsigned TotalNumIntRegs = 0, TotalNumXMMRegs = 0;
@@ -1545,7 +1558,8 @@ X86TargetLowering::LowerFormalArguments(SDValue Chain,
       VarArgsGPOffset = NumIntRegs * 8;
       VarArgsFPOffset = TotalNumIntRegs * 8 + NumXMMRegs * 16;
       RegSaveFrameIndex = MFI->CreateStackObject(TotalNumIntRegs * 8 +
-                                                 TotalNumXMMRegs * 16, 16);
+                                                 TotalNumXMMRegs * 16, 16,
+                                                 false);
 
       // Store the integer parameter registers.
       SmallVector<SDValue, 8> MemOps;
@@ -1666,7 +1680,8 @@ EmitTailCallStoreRetAddr(SelectionDAG & DAG, MachineFunction &MF,
   // Calculate the new stack slot for the return address.
   int SlotSize = Is64Bit ? 8 : 4;
   int NewReturnAddrFI =
-    MF.getFrameInfo()->CreateFixedObject(SlotSize, FPDiff-SlotSize);
+    MF.getFrameInfo()->CreateFixedObject(SlotSize, FPDiff-SlotSize,
+                                         true, false);
   EVT VT = Is64Bit ? MVT::i64 : MVT::i32;
   SDValue NewRetAddrFrIdx = DAG.getFrameIndex(NewReturnAddrFI, VT);
   Chain = DAG.getStore(Chain, dl, RetAddrFrIdx, NewRetAddrFrIdx,
@@ -1879,7 +1894,7 @@ X86TargetLowering::LowerCall(SDValue Chain, SDValue Callee,
         // Create frame index.
         int32_t Offset = VA.getLocMemOffset()+FPDiff;
         uint32_t OpSize = (VA.getLocVT().getSizeInBits()+7)/8;
-        FI = MF.getFrameInfo()->CreateFixedObject(OpSize, Offset);
+        FI = MF.getFrameInfo()->CreateFixedObject(OpSize, Offset, true, false);
         FIN = DAG.getFrameIndex(FI, getPointerTy());
 
         if (Flags.isByVal()) {
@@ -1919,9 +1934,19 @@ X86TargetLowering::LowerCall(SDValue Chain, SDValue Callee,
                                      FPDiff, dl);
   }
 
-  // If the callee is a GlobalAddress node (quite common, every direct call is)
-  // turn it into a TargetGlobalAddress node so that legalize doesn't hack it.
-  if (GlobalAddressSDNode *G = dyn_cast<GlobalAddressSDNode>(Callee)) {
+  bool WasGlobalOrExternal = false;
+  if (getTargetMachine().getCodeModel() == CodeModel::Large) {
+    assert(Is64Bit && "Large code model is only legal in 64-bit mode.");
+    // In the 64-bit large code model, we have to make all calls
+    // through a register, since the call instruction's 32-bit
+    // pc-relative offset may not be large enough to hold the whole
+    // address.
+  } else if (GlobalAddressSDNode *G = dyn_cast<GlobalAddressSDNode>(Callee)) {
+    WasGlobalOrExternal = true;
+    // If the callee is a GlobalAddress node (quite common, every direct call
+    // is) turn it into a TargetGlobalAddress node so that legalize doesn't hack
+    // it.
+
     // We should use extra load for direct calls to dllimported functions in
     // non-JIT mode.
     GlobalValue *GV = G->getGlobal();
@@ -1949,6 +1974,7 @@ X86TargetLowering::LowerCall(SDValue Chain, SDValue Callee,
                                           G->getOffset(), OpFlags);
     }
   } else if (ExternalSymbolSDNode *S = dyn_cast<ExternalSymbolSDNode>(Callee)) {
+    WasGlobalOrExternal = true;
     unsigned char OpFlags = 0;
 
     // On ELF targets, in either X86-64 or X86-32 mode, direct calls to external
@@ -1966,7 +1992,9 @@ X86TargetLowering::LowerCall(SDValue Chain, SDValue Callee,
 
     Callee = DAG.getTargetExternalSymbol(S->getSymbol(), getPointerTy(),
                                          OpFlags);
-  } else if (isTailCall) {
+  }
+
+  if (isTailCall && !WasGlobalOrExternal) {
     unsigned Opc = Is64Bit ? X86::R11 : X86::EAX;
 
     Chain = DAG.getCopyToReg(Chain,  dl,
@@ -2164,7 +2192,8 @@ SDValue X86TargetLowering::getReturnAddressFrameIndex(SelectionDAG &DAG) {
   if (ReturnAddrIndex == 0) {
     // Set up a frame object for the return address.
     uint64_t SlotSize = TD->getPointerSize();
-    ReturnAddrIndex = MF.getFrameInfo()->CreateFixedObject(SlotSize, -SlotSize);
+    ReturnAddrIndex = MF.getFrameInfo()->CreateFixedObject(SlotSize, -SlotSize,
+                                                           true, false);
     FuncInfo->setRAIndex(ReturnAddrIndex);
   }
 
@@ -2283,6 +2312,8 @@ static unsigned TranslateX86CC(ISD::CondCode SetCCOpcode, bool isFP,
   case ISD::SETNE:   return X86::COND_NE;
   case ISD::SETUO:   return X86::COND_P;
   case ISD::SETO:    return X86::COND_NP;
+  case ISD::SETOEQ:
+  case ISD::SETUNE:  return X86::COND_INVALID;
   }
 }
 
@@ -2305,6 +2336,17 @@ static bool hasFPCMov(unsigned X86CC) {
   }
 }
 
+/// isFPImmLegal - Returns true if the target can instruction select the
+/// specified FP immediate natively. If false, the legalizer will
+/// materialize the FP immediate as a load from a constant pool.
+bool X86TargetLowering::isFPImmLegal(const APFloat &Imm, EVT VT) const {
+  for (unsigned i = 0, e = LegalFPImmediates.size(); i != e; ++i) {
+    if (Imm.bitwiseIsEqual(LegalFPImmediates[i]))
+      return true;
+  }
+  return false;
+}
+
 /// isUndefOrInRange - Return true if Val is undef or if its value falls within
 /// the specified range (L, H].
 static bool isUndefOrInRange(int Val, int Low, int Hi) {
@@ -2386,6 +2428,56 @@ bool X86::isPSHUFLWMask(ShuffleVectorSDNode *N) {
   return ::isPSHUFLWMask(M, N->getValueType(0));
 }
 
+/// isPALIGNRMask - Return true if the node specifies a shuffle of elements that
+/// is suitable for input to PALIGNR.
+static bool isPALIGNRMask(const SmallVectorImpl<int> &Mask, EVT VT,
+                          bool hasSSSE3) {
+  int i, e = VT.getVectorNumElements();
+  
+  // Do not handle v2i64 / v2f64 shuffles with palignr.
+  if (e < 4 || !hasSSSE3)
+    return false;
+  
+  for (i = 0; i != e; ++i)
+    if (Mask[i] >= 0)
+      break;
+  
+  // All undef, not a palignr.
+  if (i == e)
+    return false;
+
+  // Determine if it's ok to perform a palignr with only the LHS, since we
+  // don't have access to the actual shuffle elements to see if RHS is undef.
+  bool Unary = Mask[i] < (int)e;
+  bool NeedsUnary = false;
+
+  int s = Mask[i] - i;
+  
+  // Check the rest of the elements to see if they are consecutive.
+  for (++i; i != e; ++i) {
+    int m = Mask[i];
+    if (m < 0) 
+      continue;
+    
+    Unary = Unary && (m < (int)e);
+    NeedsUnary = NeedsUnary || (m < s);
+
+    if (NeedsUnary && !Unary)
+      return false;
+    if (Unary && m != ((s+i) & (e-1)))
+      return false;
+    if (!Unary && m != (s+i))
+      return false;
+  }
+  return true;
+}
+
+bool X86::isPALIGNRMask(ShuffleVectorSDNode *N) {
+  SmallVector<int, 8> M;
+  N->getMask(M);
+  return ::isPALIGNRMask(M, N->getValueType(0), true);
+}
+
 /// isSHUFPMask - Return true if the specified VECTOR_SHUFFLE operand
 /// specifies a shuffle of elements that is suitable for input to SHUFP*.
 static bool isSHUFPMask(const SmallVectorImpl<int> &Mask, EVT VT) {
@@ -2449,6 +2541,21 @@ bool X86::isMOVHLPSMask(ShuffleVectorSDNode *N) {
          isUndefOrEqual(N->getMaskElt(3), 3);
 }
 
+/// isMOVHLPS_v_undef_Mask - Special case of isMOVHLPSMask for canonical form
+/// of vector_shuffle v, v, <2, 3, 2, 3>, i.e. vector_shuffle v, undef,
+/// <2, 3, 2, 3>
+bool X86::isMOVHLPS_v_undef_Mask(ShuffleVectorSDNode *N) {
+  unsigned NumElems = N->getValueType(0).getVectorNumElements();
+
+  if (NumElems != 4)
+    return false;
+
+  return isUndefOrEqual(N->getMaskElt(0), 2) &&
+         isUndefOrEqual(N->getMaskElt(1), 3) &&
+         isUndefOrEqual(N->getMaskElt(2), 2) &&
+         isUndefOrEqual(N->getMaskElt(3), 3);
+}
+
 /// isMOVLPMask - Return true if the specified VECTOR_SHUFFLE operand
 /// specifies a shuffle of elements that is suitable for input to MOVLP{S|D}.
 bool X86::isMOVLPMask(ShuffleVectorSDNode *N) {
@@ -2468,10 +2575,9 @@ bool X86::isMOVLPMask(ShuffleVectorSDNode *N) {
   return true;
 }
 
-/// isMOVHPMask - Return true if the specified VECTOR_SHUFFLE operand
-/// specifies a shuffle of elements that is suitable for input to MOVHP{S|D}
-/// and MOVLHPS.
-bool X86::isMOVHPMask(ShuffleVectorSDNode *N) {
+/// isMOVLHPSMask - Return true if the specified VECTOR_SHUFFLE operand
+/// specifies a shuffle of elements that is suitable for input to MOVLHPS.
+bool X86::isMOVLHPSMask(ShuffleVectorSDNode *N) {
   unsigned NumElems = N->getValueType(0).getVectorNumElements();
 
   if (NumElems != 2 && NumElems != 4)
@@ -2488,21 +2594,6 @@ bool X86::isMOVHPMask(ShuffleVectorSDNode *N) {
   return true;
 }
 
-/// isMOVHLPS_v_undef_Mask - Special case of isMOVHLPSMask for canonical form
-/// of vector_shuffle v, v, <2, 3, 2, 3>, i.e. vector_shuffle v, undef,
-/// <2, 3, 2, 3>
-bool X86::isMOVHLPS_v_undef_Mask(ShuffleVectorSDNode *N) {
-  unsigned NumElems = N->getValueType(0).getVectorNumElements();
-
-  if (NumElems != 4)
-    return false;
-
-  return isUndefOrEqual(N->getMaskElt(0), 2) &&
-         isUndefOrEqual(N->getMaskElt(1), 3) &&
-         isUndefOrEqual(N->getMaskElt(2), 2) &&
-         isUndefOrEqual(N->getMaskElt(3), 3);
-}
-
 /// isUNPCKLMask - Return true if the specified VECTOR_SHUFFLE operand
 /// specifies a shuffle of elements that is suitable for input to UNPCKL.
 static bool isUNPCKLMask(const SmallVectorImpl<int> &Mask, EVT VT,
@@ -2730,8 +2821,7 @@ bool X86::isMOVDDUPMask(ShuffleVectorSDNode *N) {
 }
 
 /// getShuffleSHUFImmediate - Return the appropriate immediate to shuffle
-/// the specified isShuffleMask VECTOR_SHUFFLE mask with PSHUF* and SHUFP*
-/// instructions.
+/// the specified VECTOR_SHUFFLE mask with PSHUF* and SHUFP* instructions.
 unsigned X86::getShuffleSHUFImmediate(SDNode *N) {
   ShuffleVectorSDNode *SVOp = cast<ShuffleVectorSDNode>(N);
   int NumOperands = SVOp->getValueType(0).getVectorNumElements();
@@ -2750,8 +2840,7 @@ unsigned X86::getShuffleSHUFImmediate(SDNode *N) {
 }
 
 /// getShufflePSHUFHWImmediate - Return the appropriate immediate to shuffle
-/// the specified isShuffleMask VECTOR_SHUFFLE mask with PSHUFHW
-/// instructions.
+/// the specified VECTOR_SHUFFLE mask with the PSHUFHW instruction.
 unsigned X86::getShufflePSHUFHWImmediate(SDNode *N) {
   ShuffleVectorSDNode *SVOp = cast<ShuffleVectorSDNode>(N);
   unsigned Mask = 0;
@@ -2767,8 +2856,7 @@ unsigned X86::getShufflePSHUFHWImmediate(SDNode *N) {
 }
 
 /// getShufflePSHUFLWImmediate - Return the appropriate immediate to shuffle
-/// the specified isShuffleMask VECTOR_SHUFFLE mask with PSHUFLW
-/// instructions.
+/// the specified VECTOR_SHUFFLE mask with the PSHUFLW instruction.
 unsigned X86::getShufflePSHUFLWImmediate(SDNode *N) {
   ShuffleVectorSDNode *SVOp = cast<ShuffleVectorSDNode>(N);
   unsigned Mask = 0;
@@ -2783,6 +2871,23 @@ unsigned X86::getShufflePSHUFLWImmediate(SDNode *N) {
   return Mask;
 }
 
+/// getShufflePALIGNRImmediate - Return the appropriate immediate to shuffle
+/// the specified VECTOR_SHUFFLE mask with the PALIGNR instruction.
+unsigned X86::getShufflePALIGNRImmediate(SDNode *N) {
+  ShuffleVectorSDNode *SVOp = cast<ShuffleVectorSDNode>(N);
+  EVT VVT = N->getValueType(0);
+  unsigned EltSize = VVT.getVectorElementType().getSizeInBits() >> 3;
+  int Val = 0;
+
+  unsigned i, e;
+  for (i = 0, e = VVT.getVectorNumElements(); i != e; ++i) {
+    Val = SVOp->getMaskElt(i);
+    if (Val >= 0)
+      break;
+  }
+  return (Val - i) * EltSize;
+}
+
 /// isZeroNode - Returns true if Elt is a constant zero or a floating point
 /// constant +0.0.
 bool X86::isZeroNode(SDValue Elt) {
@@ -4182,7 +4287,7 @@ X86TargetLowering::LowerVECTOR_SHUFFLE(SDValue Op, SelectionDAG &DAG) {
   if (!isMMX && (X86::isMOVSHDUPMask(SVOp) ||
                  X86::isMOVSLDUPMask(SVOp) ||
                  X86::isMOVHLPSMask(SVOp) ||
-                 X86::isMOVHPMask(SVOp) ||
+                 X86::isMOVLHPSMask(SVOp) ||
                  X86::isMOVLPMask(SVOp)))
     return Op;
 
@@ -4613,6 +4718,33 @@ X86TargetLowering::LowerExternalSymbol(SDValue Op, SelectionDAG &DAG) {
 }
 
 SDValue
+X86TargetLowering::LowerBlockAddress(SDValue Op, SelectionDAG &DAG) {
+  // Create the TargetBlockAddress node.
+  unsigned char OpFlags =
+    Subtarget->ClassifyBlockAddressReference();
+  CodeModel::Model M = getTargetMachine().getCodeModel();
+  BlockAddress *BA = cast<BlockAddressSDNode>(Op)->getBlockAddress();
+  DebugLoc dl = Op.getDebugLoc();
+  SDValue Result = DAG.getBlockAddress(BA, getPointerTy(),
+                                       /*isTarget=*/true, OpFlags);
+
+  if (Subtarget->isPICStyleRIPRel() &&
+      (M == CodeModel::Small || M == CodeModel::Kernel))
+    Result = DAG.getNode(X86ISD::WrapperRIP, dl, getPointerTy(), Result);
+  else
+    Result = DAG.getNode(X86ISD::Wrapper, dl, getPointerTy(), Result);
+
+  // With PIC, the address is actually $g + Offset.
+  if (isGlobalRelativeToPICBase(OpFlags)) {
+    Result = DAG.getNode(ISD::ADD, dl, getPointerTy(),
+                         DAG.getNode(X86ISD::GlobalBaseReg, dl, getPointerTy()),
+                         Result);
+  }
+
+  return Result;
+}
+
+SDValue
 X86TargetLowering::LowerGlobalAddress(const GlobalValue *GV, DebugLoc dl,
                                       int64_t Offset,
                                       SelectionDAG &DAG) const {
@@ -4861,7 +4993,7 @@ SDValue X86TargetLowering::LowerSINT_TO_FP(SDValue Op, SelectionDAG &DAG) {
   DebugLoc dl = Op.getDebugLoc();
   unsigned Size = SrcVT.getSizeInBits()/8;
   MachineFunction &MF = DAG.getMachineFunction();
-  int SSFI = MF.getFrameInfo()->CreateStackObject(Size, Size);
+  int SSFI = MF.getFrameInfo()->CreateStackObject(Size, Size, false);
   SDValue StackSlot = DAG.getFrameIndex(SSFI, getPointerTy());
   SDValue Chain = DAG.getStore(DAG.getEntryNode(), dl, Op.getOperand(0),
                                StackSlot,
@@ -4895,7 +5027,7 @@ SDValue X86TargetLowering::BuildFILD(SDValue Op, EVT SrcVT, SDValue Chain,
     // shouldn't be necessary except that RFP cannot be live across
     // multiple blocks. When stackifier is fixed, they can be uncoupled.
     MachineFunction &MF = DAG.getMachineFunction();
-    int SSFI = MF.getFrameInfo()->CreateStackObject(8, 8);
+    int SSFI = MF.getFrameInfo()->CreateStackObject(8, 8, false);
     SDValue StackSlot = DAG.getFrameIndex(SSFI, getPointerTy());
     Tys = DAG.getVTList(MVT::Other);
     SmallVector<SDValue, 8> Ops;
@@ -5105,7 +5237,7 @@ FP_TO_INTHelper(SDValue Op, SelectionDAG &DAG, bool IsSigned) {
   // stack slot.
   MachineFunction &MF = DAG.getMachineFunction();
   unsigned MemSize = DstTy.getSizeInBits()/8;
-  int SSFI = MF.getFrameInfo()->CreateStackObject(MemSize, MemSize);
+  int SSFI = MF.getFrameInfo()->CreateStackObject(MemSize, MemSize, false);
   SDValue StackSlot = DAG.getFrameIndex(SSFI, getPointerTy());
 
   unsigned Opc;
@@ -5128,7 +5260,7 @@ FP_TO_INTHelper(SDValue Op, SelectionDAG &DAG, bool IsSigned) {
     };
     Value = DAG.getNode(X86ISD::FLD, dl, Tys, Ops, 3);
     Chain = Value.getValue(1);
-    SSFI = MF.getFrameInfo()->CreateStackObject(MemSize, MemSize);
+    SSFI = MF.getFrameInfo()->CreateStackObject(MemSize, MemSize, false);
     StackSlot = DAG.getFrameIndex(SSFI, getPointerTy());
   }
 
@@ -5499,6 +5631,8 @@ SDValue X86TargetLowering::LowerSETCC(SDValue Op, SelectionDAG &DAG) {
 
   bool isFP = Op.getOperand(1).getValueType().isFloatingPoint();
   unsigned X86CC = TranslateX86CC(CC, isFP, Op0, Op1, DAG);
+  if (X86CC == X86::COND_INVALID)
+    return SDValue();
 
   SDValue Cond = EmitCmp(Op0, Op1, X86CC, DAG);
   return DAG.getNode(X86ISD::SETCC, dl, MVT::i8,
@@ -5647,8 +5781,11 @@ SDValue X86TargetLowering::LowerSELECT(SDValue Op, SelectionDAG &DAG) {
   DebugLoc dl = Op.getDebugLoc();
   SDValue CC;
 
-  if (Cond.getOpcode() == ISD::SETCC)
-    Cond = LowerSETCC(Cond, DAG);
+  if (Cond.getOpcode() == ISD::SETCC) {
+    SDValue NewCond = LowerSETCC(Cond, DAG);
+    if (NewCond.getNode())
+      Cond = NewCond;
+  }
 
   // If condition flag is set by a X86ISD::CMP, then use it as the condition
   // setting operand in place of the X86ISD::SETCC.
@@ -5721,8 +5858,11 @@ SDValue X86TargetLowering::LowerBRCOND(SDValue Op, SelectionDAG &DAG) {
   DebugLoc dl = Op.getDebugLoc();
   SDValue CC;
 
-  if (Cond.getOpcode() == ISD::SETCC)
-    Cond = LowerSETCC(Cond, DAG);
+  if (Cond.getOpcode() == ISD::SETCC) {
+    SDValue NewCond = LowerSETCC(Cond, DAG);
+    if (NewCond.getNode())
+      Cond = NewCond;
+  }
 #if 0
   // FIXME: LowerXALUO doesn't handle these!!
   else if (Cond.getOpcode() == X86ISD::ADD  ||
@@ -6271,6 +6411,7 @@ X86TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG) {
     SDValue LHS = Op.getOperand(1);
     SDValue RHS = Op.getOperand(2);
     unsigned X86CC = TranslateX86CC(CC, true, LHS, RHS, DAG);
+    assert(X86CC != X86::COND_INVALID && "Unexpected illegal condition!");
     SDValue Cond = DAG.getNode(Opc, dl, MVT::i32, LHS, RHS);
     SDValue SetCC = DAG.getNode(X86ISD::SETCC, dl, MVT::i8,
                                 DAG.getConstant(X86CC, MVT::i8), Cond);
@@ -6643,7 +6784,7 @@ SDValue X86TargetLowering::LowerFLT_ROUNDS_(SDValue Op, SelectionDAG &DAG) {
   DebugLoc dl = Op.getDebugLoc();
 
   // Save FP Control Word to stack slot
-  int SSFI = MF.getFrameInfo()->CreateStackObject(2, StackAlignment);
+  int SSFI = MF.getFrameInfo()->CreateStackObject(2, StackAlignment, false);
   SDValue StackSlot = DAG.getFrameIndex(SSFI, getPointerTy());
 
   SDValue Chain = DAG.getNode(X86ISD::FNSTCW16m, dl, MVT::Other,
@@ -6930,6 +7071,7 @@ SDValue X86TargetLowering::LowerOperation(SDValue Op, SelectionDAG &DAG) {
   case ISD::GlobalAddress:      return LowerGlobalAddress(Op, DAG);
   case ISD::GlobalTLSAddress:   return LowerGlobalTLSAddress(Op, DAG);
   case ISD::ExternalSymbol:     return LowerExternalSymbol(Op, DAG);
+  case ISD::BlockAddress:       return LowerBlockAddress(Op, DAG);
   case ISD::SHL_PARTS:
   case ISD::SRA_PARTS:
   case ISD::SRL_PARTS:          return LowerShift(Op, DAG);
@@ -7271,7 +7413,7 @@ X86TargetLowering::isShuffleMaskLegal(const SmallVectorImpl<int> &M,
   if (VT.getSizeInBits() == 64)
     return false;
 
-  // FIXME: pshufb, blends, palignr, shifts.
+  // FIXME: pshufb, blends, shifts.
   return (VT.getVectorNumElements() == 2 ||
           ShuffleVectorSDNode::isSplatMask(&M[0], VT) ||
           isMOVLMask(M, VT) ||
@@ -7279,6 +7421,7 @@ X86TargetLowering::isShuffleMaskLegal(const SmallVectorImpl<int> &M,
           isPSHUFDMask(M, VT) ||
           isPSHUFHWMask(M, VT) ||
           isPSHUFLWMask(M, VT) ||
+          isPALIGNRMask(M, VT, Subtarget->hasSSSE3()) ||
           isUNPCKLMask(M, VT) ||
           isUNPCKHMask(M, VT) ||
           isUNPCKL_v_undef_Mask(M, VT) ||
@@ -7866,7 +8009,7 @@ X86TargetLowering::EmitInstrWithCustomInserter(MachineInstr *MI,
     // Change the floating point control register to use "round towards zero"
     // mode when truncating to an integer value.
     MachineFunction *F = BB->getParent();
-    int CWFrameIdx = F->getFrameInfo()->CreateStackObject(2, 2);
+    int CWFrameIdx = F->getFrameInfo()->CreateStackObject(2, 2, false);
     addFrameReference(BuildMI(BB, DL, TII->get(X86::FNSTCW16m)), CWFrameIdx);
 
     // Load the old value of the high byte of the control word...
@@ -9397,7 +9540,6 @@ X86TargetLowering::getRegForInlineAsmConstraint(const std::string &Constraint,
     switch (Constraint[0]) {
     default: break;
     case 'r':   // GENERAL_REGS
-    case 'R':   // LEGACY_REGS
     case 'l':   // INDEX_REGS
       if (VT == MVT::i8)
         return std::make_pair(0U, X86::GR8RegisterClass);
@@ -9406,6 +9548,14 @@ X86TargetLowering::getRegForInlineAsmConstraint(const std::string &Constraint,
       if (VT == MVT::i32 || !Subtarget->is64Bit())
         return std::make_pair(0U, X86::GR32RegisterClass);
       return std::make_pair(0U, X86::GR64RegisterClass);
+    case 'R':   // LEGACY_REGS
+      if (VT == MVT::i8)
+        return std::make_pair(0U, X86::GR8_NOREXRegisterClass);
+      if (VT == MVT::i16)
+        return std::make_pair(0U, X86::GR16_NOREXRegisterClass);
+      if (VT == MVT::i32 || !Subtarget->is64Bit())
+        return std::make_pair(0U, X86::GR32_NOREXRegisterClass);
+      return std::make_pair(0U, X86::GR64_NOREXRegisterClass);
     case 'f':  // FP Stack registers.
       // If SSE is enabled for this VT, use f80 to ensure the isel moves the
       // value to the correct fpstack register class.
@@ -9467,14 +9617,14 @@ X86TargetLowering::getRegForInlineAsmConstraint(const std::string &Constraint,
     }
 
     // GCC allows "st(0)" to be called just plain "st".
-    if (StringsEqualNoCase("{st}", Constraint)) {
+    if (StringRef("{st}").equals_lower(Constraint)) {
       Res.first = X86::ST0;
       Res.second = X86::RFP80RegisterClass;
       return Res;
     }
 
     // flags -> EFLAGS
-    if (StringsEqualNoCase("{flags}", Constraint)) {
+    if (StringRef("{flags}").equals_lower(Constraint)) {
       Res.first = X86::EFLAGS;
       Res.second = X86::CCRRegisterClass;
       return Res;
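
One detail worth unpacking from the PALIGNR support above:
getShufflePALIGNRImmediate derives the byte rotation from the first defined
mask element, as (Val - i) * EltSizeInBytes. A standalone mirror of that
computation (hypothetical helper name):

    #include <cstdio>

    // Find the first non-undef (>= 0) mask element and convert its
    // displacement from its position into a byte count for PALIGNR.
    static int palignrImm(const int *Mask, int NumElts, int EltBytes) {
      int i = 0, Val = 0;
      for (; i != NumElts; ++i)
        if (Mask[i] >= 0) { Val = Mask[i]; break; }
      return (Val - i) * EltBytes;
    }

    int main() {
      // v8i16 shuffle <1,2,3,4,5,6,7,8>: rotate by one 2-byte element.
      const int Mask[8] = {1, 2, 3, 4, 5, 6, 7, 8};
      std::printf("imm = %d\n", palignrImm(Mask, 8, 2)); // prints imm = 2
      return 0;
    }
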
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h
index 2f7b8ba..7b4ab62 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h
@@ -286,7 +286,7 @@ namespace llvm {
     /// isMOVLHPSMask - Return true if the specified VECTOR_SHUFFLE operand
     /// specifies a shuffle of elements that is suitable for input to MOVLHPS.
-    bool isMOVHPMask(ShuffleVectorSDNode *N);
+    bool isMOVLHPSMask(ShuffleVectorSDNode *N);
 
     /// isUNPCKLMask - Return true if the specified VECTOR_SHUFFLE operand
     /// specifies a shuffle of elements that is suitable for input to UNPCKL.
@@ -323,21 +323,27 @@ namespace llvm {
     /// specifies a shuffle of elements that is suitable for input to MOVDDUP.
     bool isMOVDDUPMask(ShuffleVectorSDNode *N);
 
+    /// isPALIGNRMask - Return true if the specified VECTOR_SHUFFLE operand
+    /// specifies a shuffle of elements that is suitable for input to PALIGNR.
+    bool isPALIGNRMask(ShuffleVectorSDNode *N);
+
     /// getShuffleSHUFImmediate - Return the appropriate immediate to shuffle
     /// the specified VECTOR_SHUFFLE mask with PSHUF* and SHUFP*
     /// instructions.
     unsigned getShuffleSHUFImmediate(SDNode *N);
 
     /// getShufflePSHUFHWImmediate - Return the appropriate immediate to shuffle
-    /// the specified isShuffleMask VECTOR_SHUFFLE mask with PSHUFHW
-    /// instructions.
+    /// the specified VECTOR_SHUFFLE mask with the PSHUFHW instruction.
     unsigned getShufflePSHUFHWImmediate(SDNode *N);
 
-    /// getShufflePSHUFKWImmediate - Return the appropriate immediate to shuffle
-    /// the specified isShuffleMask VECTOR_SHUFFLE mask with PSHUFLW
-    /// instructions.
+    /// getShufflePSHUFLWImmediate - Return the appropriate immediate to shuffle
+    /// the specified VECTOR_SHUFFLE mask with the PSHUFLW instruction.
     unsigned getShufflePSHUFLWImmediate(SDNode *N);
 
+    /// getShufflePALIGNRImmediate - Return the appropriate immediate to shuffle
+    /// the specified VECTOR_SHUFFLE mask with the PALIGNR instruction.
+    unsigned getShufflePALIGNRImmediate(SDNode *N);
+
     /// isZeroNode - Returns true if Elt is a constant zero or a floating point
     /// constant +0.0.
     bool isZeroNode(SDValue Elt);
@@ -493,6 +499,11 @@ namespace llvm {
     /// from i32 to i8 but not from i32 to i16.
     virtual bool isNarrowingProfitable(EVT VT1, EVT VT2) const;
 
+    /// isFPImmLegal - Returns true if the target can instruction select the
+    /// specified FP immediate natively. If false, the legalizer will
+    /// materialize the FP immediate as a load from a constant pool.
+    virtual bool isFPImmLegal(const APFloat &Imm, EVT VT) const;
+
     /// isShuffleMaskLegal - Targets can use this to indicate that they only
     /// support *some* VECTOR_SHUFFLE operations, those with specific masks.
     /// By default, if a target supports the VECTOR_SHUFFLE node, all mask
@@ -578,6 +589,15 @@ namespace llvm {
     bool X86ScalarSSEf32;
     bool X86ScalarSSEf64;
 
+    /// LegalFPImmediates - A list of legal fp immediates.
+    std::vector<APFloat> LegalFPImmediates;
+
+    /// addLegalFPImmediate - Indicate that this x86 target can instruction
+    /// select the specified FP immediate natively.
+    void addLegalFPImmediate(const APFloat& Imm) {
+      LegalFPImmediates.push_back(Imm);
+    }
+
     SDValue LowerCallResult(SDValue Chain, SDValue InFlag,
                             CallingConv::ID CallConv, bool isVarArg,
                             const SmallVectorImpl<ISD::InputArg> &Ins,
@@ -615,6 +635,7 @@ namespace llvm {
     SDValue LowerINSERT_VECTOR_ELT_SSE4(SDValue Op, SelectionDAG &DAG);
     SDValue LowerSCALAR_TO_VECTOR(SDValue Op, SelectionDAG &DAG);
     SDValue LowerConstantPool(SDValue Op, SelectionDAG &DAG);
+    SDValue LowerBlockAddress(SDValue Op, SelectionDAG &DAG);
     SDValue LowerGlobalAddress(const GlobalValue *GV, DebugLoc dl,
                                int64_t Offset, SelectionDAG &DAG) const;
     SDValue LowerGlobalAddress(SDValue Op, SelectionDAG &DAG);
@@ -678,6 +699,12 @@ namespace llvm {
                   const SmallVectorImpl<ISD::OutputArg> &Outs,
                   DebugLoc dl, SelectionDAG &DAG);
 
+    virtual bool
+      CanLowerReturn(CallingConv::ID CallConv, bool isVarArg,
+                     const SmallVectorImpl<EVT> &OutTys,
+                     const SmallVectorImpl<ISD::ArgFlagsTy> &ArgsFlags,
+                     SelectionDAG &DAG);
+
     void ReplaceATOMIC_BINARY_64(SDNode *N, SmallVectorImpl<SDValue> &Results,
                                  SelectionDAG &DAG, unsigned NewOp);
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td b/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
index ef19823..a01534b 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
@@ -309,7 +309,7 @@ def MOV64ri32 : RIi32<0xC7, MRM0r, (outs GR64:$dst), (ins i64i32imm:$src),
                       [(set GR64:$dst, i64immSExt32:$src)]>;
 }
 
-let canFoldAsLoad = 1 in
+let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1 in
 def MOV64rm : RI<0x8B, MRMSrcMem, (outs GR64:$dst), (ins i64mem:$src),
                  "mov{q}\t{$src, $dst|$dst, $src}",
                  [(set GR64:$dst, (load addr:$src))]>;
@@ -368,19 +368,15 @@ def MOVSX64rm32: RI<0x63, MRMSrcMem, (outs GR64:$dst), (ins i32mem:$src),
 // Use movzbl instead of movzbq when the destination is a register; it's
 // equivalent due to implicit zero-extending, and it has a smaller encoding.
 def MOVZX64rr8 : I<0xB6, MRMSrcReg, (outs GR64:$dst), (ins GR8 :$src),
-                   "movz{bl|x}\t{$src, ${dst:subreg32}|${dst:subreg32}, $src}",
-                   [(set GR64:$dst, (zext GR8:$src))]>, TB;
+                   "", [(set GR64:$dst, (zext GR8:$src))]>, TB;
 def MOVZX64rm8 : I<0xB6, MRMSrcMem, (outs GR64:$dst), (ins i8mem :$src),
-                   "movz{bl|x}\t{$src, ${dst:subreg32}|${dst:subreg32}, $src}",
-                   [(set GR64:$dst, (zextloadi64i8 addr:$src))]>, TB;
+                   "", [(set GR64:$dst, (zextloadi64i8 addr:$src))]>, TB;
 // Use movzwl instead of movzwq when the destination is a register; it's
 // equivalent due to implicit zero-extending, and it has a smaller encoding.
 def MOVZX64rr16: I<0xB7, MRMSrcReg, (outs GR64:$dst), (ins GR16:$src),
-                   "movz{wl|x}\t{$src, ${dst:subreg32}|${dst:subreg32}, $src}",
-                   [(set GR64:$dst, (zext GR16:$src))]>, TB;
+                   "", [(set GR64:$dst, (zext GR16:$src))]>, TB;
 def MOVZX64rm16: I<0xB7, MRMSrcMem, (outs GR64:$dst), (ins i16mem:$src),
-                   "movz{wl|x}\t{$src, ${dst:subreg32}|${dst:subreg32}, $src}",
-                   [(set GR64:$dst, (zextloadi64i16 addr:$src))]>, TB;
+                   "", [(set GR64:$dst, (zextloadi64i16 addr:$src))]>, TB;
 
 // There's no movzlq instruction, but movl can be used for this purpose, using
 // implicit zero-extension. The preferred way to do 32-bit-to-64-bit zero
@@ -390,11 +386,9 @@ def MOVZX64rm16: I<0xB7, MRMSrcMem, (outs GR64:$dst), (ins i16mem:$src),
 // necessarily all zero. In such cases, we fall back to these explicit zext
 // instructions.
 def MOVZX64rr32 : I<0x89, MRMDestReg, (outs GR64:$dst), (ins GR32:$src),
-                    "mov{l}\t{$src, ${dst:subreg32}|${dst:subreg32}, $src}",
-                    [(set GR64:$dst, (zext GR32:$src))]>;
+                    "", [(set GR64:$dst, (zext GR32:$src))]>;
 def MOVZX64rm32 : I<0x8B, MRMSrcMem, (outs GR64:$dst), (ins i32mem:$src),
-                    "mov{l}\t{$src, ${dst:subreg32}|${dst:subreg32}, $src}",
-                    [(set GR64:$dst, (zextloadi64i32 addr:$src))]>;
+                    "", [(set GR64:$dst, (zextloadi64i32 addr:$src))]>;
 
 // Any instruction that defines a 32-bit result leaves the high half of the
 // register. Truncate can be lowered to EXTRACT_SUBREG. CopyFromReg may
@@ -1455,8 +1449,7 @@ def : Pat<(i64 0),
 // Materialize i64 constant where top 32-bits are zero.
 let AddedComplexity = 1, isReMaterializable = 1, isAsCheapAsAMove = 1 in
 def MOV64ri64i32 : Ii32<0xB8, AddRegFrm, (outs GR64:$dst), (ins i64i32imm:$src),
-                        "mov{l}\t{$src, ${dst:subreg32}|${dst:subreg32}, $src}",
-                        [(set GR64:$dst, i64immZExt32:$src)]>;
+                        "", [(set GR64:$dst, i64immZExt32:$src)]>;
 
 //===----------------------------------------------------------------------===//
 // Thread Local Storage Instructions
@@ -1515,6 +1508,7 @@ def XCHG64rm : RI<0x87, MRMSrcMem, (outs GR64:$dst), (ins i64mem:$ptr,GR64:$val)
 }
 
 // Optimized codegen when the non-memory output is not used.
+let Defs = [EFLAGS] in {
 // FIXME: Use normal add / sub instructions and add lock prefix dynamically.
 def LOCK_ADD64mr : RI<0x03, MRMDestMem, (outs), (ins i64mem:$dst, GR64:$src2),
                       "lock\n\t"
@@ -1544,10 +1538,10 @@ def LOCK_INC64m : RI<0xFF, MRM0m, (outs), (ins i64mem:$dst),
 def LOCK_DEC64m : RI<0xFF, MRM1m, (outs), (ins i64mem:$dst),
                       "lock\n\t"
                       "dec{q}\t$dst", []>, LOCK;
-
+}
 // Atomic exchange, and, or, xor
 let Constraints = "$val = $dst", Defs = [EFLAGS],
-                  usesCustomDAGSchedInserter = 1 in {
+                  usesCustomInserter = 1 in {
 def ATOMAND64 : I<0, Pseudo, (outs GR64:$dst),(ins i64mem:$ptr, GR64:$val),
                "#ATOMAND64 PSEUDO!", 
                [(set GR64:$dst, (atomic_load_and_64 addr:$ptr, GR64:$val))]>;
@@ -1601,6 +1595,8 @@ def : Pat<(i64 (X86Wrapper tglobaladdr :$dst)),
           (MOV64ri tglobaladdr :$dst)>, Requires<[FarData]>;
 def : Pat<(i64 (X86Wrapper texternalsym:$dst)),
           (MOV64ri texternalsym:$dst)>, Requires<[FarData]>;
+def : Pat<(i64 (X86Wrapper tblockaddress:$dst)),
+          (MOV64ri tblockaddress:$dst)>, Requires<[FarData]>;
 
 // In static codegen with small code model, we can get the address of a label
 // into a register with 'movl'.  FIXME: This is a hack, the 'imm' predicate of
@@ -1613,6 +1609,8 @@ def : Pat<(i64 (X86Wrapper tglobaladdr :$dst)),
           (MOV64ri64i32 tglobaladdr :$dst)>, Requires<[SmallCode]>;
 def : Pat<(i64 (X86Wrapper texternalsym:$dst)),
           (MOV64ri64i32 texternalsym:$dst)>, Requires<[SmallCode]>;
+def : Pat<(i64 (X86Wrapper tblockaddress:$dst)),
+          (MOV64ri64i32 tblockaddress:$dst)>, Requires<[SmallCode]>;
 
 // In kernel code model, we can get the address of a label
 // into a register with 'movq'.  FIXME: This is a hack, the 'imm' predicate of
@@ -1625,6 +1623,8 @@ def : Pat<(i64 (X86Wrapper tglobaladdr :$dst)),
           (MOV64ri32 tglobaladdr :$dst)>, Requires<[KernelCode]>;
 def : Pat<(i64 (X86Wrapper texternalsym:$dst)),
           (MOV64ri32 texternalsym:$dst)>, Requires<[KernelCode]>;
+def : Pat<(i64 (X86Wrapper tblockaddress:$dst)),
+          (MOV64ri32 tblockaddress:$dst)>, Requires<[KernelCode]>;
 
 // If we have small model and -static mode, it is safe to store global addresses
 // directly as immediates.  FIXME: This is really a hack, the 'imm' predicate
@@ -1641,6 +1641,9 @@ def : Pat<(store (i64 (X86Wrapper tglobaladdr:$src)), addr:$dst),
 def : Pat<(store (i64 (X86Wrapper texternalsym:$src)), addr:$dst),
           (MOV64mi32 addr:$dst, texternalsym:$src)>,
           Requires<[NearData, IsStatic]>;
+def : Pat<(store (i64 (X86Wrapper tblockaddress:$src)), addr:$dst),
+          (MOV64mi32 addr:$dst, tblockaddress:$src)>,
+          Requires<[NearData, IsStatic]>;
 
 // Calls
 // Direct PC relative function call for small code model. 32-bit displacement
@@ -1805,43 +1808,43 @@ def : Pat<(and (srl_su GR64:$src, (i8 8)), (i64 255)),
           (SUBREG_TO_REG
             (i64 0),
             (MOVZX32_NOREXrr8
-              (EXTRACT_SUBREG (COPY_TO_REGCLASS GR64:$src, GR64_ABCD),
+              (EXTRACT_SUBREG (i64 (COPY_TO_REGCLASS GR64:$src, GR64_ABCD)),
                               x86_subreg_8bit_hi)),
             x86_subreg_32bit)>;
 def : Pat<(and (srl_su GR32:$src, (i8 8)), (i32 255)),
           (MOVZX32_NOREXrr8
-            (EXTRACT_SUBREG (COPY_TO_REGCLASS GR32:$src, GR32_ABCD),
+            (EXTRACT_SUBREG (i32 (COPY_TO_REGCLASS GR32:$src, GR32_ABCD)),
                             x86_subreg_8bit_hi))>,
       Requires<[In64BitMode]>;
 def : Pat<(srl_su GR16:$src, (i8 8)),
           (EXTRACT_SUBREG
             (MOVZX32_NOREXrr8
-              (EXTRACT_SUBREG (COPY_TO_REGCLASS GR16:$src, GR16_ABCD),
+              (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, GR16_ABCD)),
                               x86_subreg_8bit_hi)),
             x86_subreg_16bit)>,
       Requires<[In64BitMode]>;
 def : Pat<(i32 (zext (srl_su GR16:$src, (i8 8)))),
           (MOVZX32_NOREXrr8
-            (EXTRACT_SUBREG (COPY_TO_REGCLASS GR16:$src, GR16_ABCD),
+            (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, GR16_ABCD)),
                             x86_subreg_8bit_hi))>,
       Requires<[In64BitMode]>;
 def : Pat<(i32 (anyext (srl_su GR16:$src, (i8 8)))),
           (MOVZX32_NOREXrr8
-            (EXTRACT_SUBREG (COPY_TO_REGCLASS GR16:$src, GR16_ABCD),
+            (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, GR16_ABCD)),
                             x86_subreg_8bit_hi))>,
       Requires<[In64BitMode]>;
 def : Pat<(i64 (zext (srl_su GR16:$src, (i8 8)))),
           (SUBREG_TO_REG
             (i64 0),
             (MOVZX32_NOREXrr8
-              (EXTRACT_SUBREG (COPY_TO_REGCLASS GR16:$src, GR16_ABCD),
+              (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, GR16_ABCD)),
                               x86_subreg_8bit_hi)),
             x86_subreg_32bit)>;
 def : Pat<(i64 (anyext (srl_su GR16:$src, (i8 8)))),
           (SUBREG_TO_REG
             (i64 0),
             (MOVZX32_NOREXrr8
-              (EXTRACT_SUBREG (COPY_TO_REGCLASS GR16:$src, GR16_ABCD),
+              (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, GR16_ABCD)),
                               x86_subreg_8bit_hi)),
             x86_subreg_32bit)>;
 
@@ -1849,18 +1852,18 @@ def : Pat<(i64 (anyext (srl_su GR16:$src, (i8 8)))),
 def : Pat<(store (i8 (trunc_su (srl_su GR64:$src, (i8 8)))), addr:$dst),
           (MOV8mr_NOREX
             addr:$dst,
-            (EXTRACT_SUBREG (COPY_TO_REGCLASS GR64:$src, GR64_ABCD),
+            (EXTRACT_SUBREG (i64 (COPY_TO_REGCLASS GR64:$src, GR64_ABCD)),
                             x86_subreg_8bit_hi))>;
 def : Pat<(store (i8 (trunc_su (srl_su GR32:$src, (i8 8)))), addr:$dst),
           (MOV8mr_NOREX
             addr:$dst,
-            (EXTRACT_SUBREG (COPY_TO_REGCLASS GR32:$src, GR32_ABCD),
+            (EXTRACT_SUBREG (i32 (COPY_TO_REGCLASS GR32:$src, GR32_ABCD)),
                             x86_subreg_8bit_hi))>,
       Requires<[In64BitMode]>;
 def : Pat<(store (i8 (trunc_su (srl_su GR16:$src, (i8 8)))), addr:$dst),
           (MOV8mr_NOREX
             addr:$dst,
-            (EXTRACT_SUBREG (COPY_TO_REGCLASS GR16:$src, GR16_ABCD),
+            (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, GR16_ABCD)),
                             x86_subreg_8bit_hi))>,
       Requires<[In64BitMode]>;
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrFPStack.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrFPStack.td
index 7e37373..b0b0409 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrFPStack.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrFPStack.td
@@ -69,7 +69,7 @@ def fpimmneg1 : PatLeaf<(fpimm), [{
 }]>;
 
 // Some 'special' instructions
-let usesCustomDAGSchedInserter = 1 in {  // Expanded by the scheduler.
+let usesCustomInserter = 1 in {  // Expanded after instruction selection.
   def FP32_TO_INT16_IN_MEM : I<0, Pseudo,
                               (outs), (ins i16mem:$dst, RFP32:$src),
                               "##FP32_TO_INT16_IN_MEM PSEUDO!",
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrFormats.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrFormats.td
index abdb313..2f14bb0 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrFormats.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrFormats.td
@@ -224,10 +224,10 @@ class S3I<bits<8> o, Format F, dag outs, dag ins, string asm, list<dag> pattern>
 
 class SS38I<bits<8> o, Format F, dag outs, dag ins, string asm,
             list<dag> pattern>
-      : I<o, F, outs, ins, asm, pattern>, T8, Requires<[HasSSSE3]>;
+      : Ii8<o, F, outs, ins, asm, pattern>, T8, Requires<[HasSSSE3]>;
 class SS3AI<bits<8> o, Format F, dag outs, dag ins, string asm,
             list<dag> pattern>
-      : I<o, F, outs, ins, asm, pattern>, TA, Requires<[HasSSSE3]>;
+      : Ii8<o, F, outs, ins, asm, pattern>, TA, Requires<[HasSSSE3]>;
 
 // SSE4.1 Instruction Templates:
 // 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
index 363674b..a37013d 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
@@ -18,7 +18,6 @@
 #include "X86MachineFunctionInfo.h"
 #include "X86Subtarget.h"
 #include "X86TargetMachine.h"
-#include "llvm/GlobalVariable.h"
 #include "llvm/DerivedTypes.h"
 #include "llvm/LLVMContext.h"
 #include "llvm/ADT/STLExtras.h"
@@ -27,11 +26,15 @@
 #include "llvm/CodeGen/MachineInstrBuilder.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
 #include "llvm/CodeGen/LiveVariables.h"
+#include "llvm/CodeGen/PseudoSourceValue.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Target/TargetOptions.h"
 #include "llvm/MC/MCAsmInfo.h"
+
+#include <limits>
+
 using namespace llvm;
 
 static cl::opt<bool>
@@ -708,9 +711,23 @@ bool X86InstrInfo::isMoveInstr(const MachineInstr& MI,
   }
 }
 
-unsigned X86InstrInfo::isLoadFromStackSlot(const MachineInstr *MI, 
-                                           int &FrameIndex) const {
-  switch (MI->getOpcode()) {
+/// isFrameOperand - Return true and the FrameIndex if the specified
+/// operand and the following operands form a reference to the stack frame.
+bool X86InstrInfo::isFrameOperand(const MachineInstr *MI, unsigned int Op,
+                                  int &FrameIndex) const {
+  if (MI->getOperand(Op).isFI() && MI->getOperand(Op+1).isImm() &&
+      MI->getOperand(Op+2).isReg() && MI->getOperand(Op+3).isImm() &&
+      MI->getOperand(Op+1).getImm() == 1 &&
+      MI->getOperand(Op+2).getReg() == 0 &&
+      MI->getOperand(Op+3).getImm() == 0) {
+    FrameIndex = MI->getOperand(Op).getIndex();
+    return true;
+  }
+  return false;
+}
+
+static bool isFrameLoadOpcode(int Opcode) {
+  switch (Opcode) {
   default: break;
   case X86::MOV8rm:
   case X86::MOV16rm:
@@ -724,22 +741,14 @@ unsigned X86InstrInfo::isLoadFromStackSlot(const MachineInstr *MI,
   case X86::MOVDQArm:
   case X86::MMX_MOVD64rm:
   case X86::MMX_MOVQ64rm:
-    if (MI->getOperand(1).isFI() && MI->getOperand(2).isImm() &&
-        MI->getOperand(3).isReg() && MI->getOperand(4).isImm() &&
-        MI->getOperand(2).getImm() == 1 &&
-        MI->getOperand(3).getReg() == 0 &&
-        MI->getOperand(4).getImm() == 0) {
-      FrameIndex = MI->getOperand(1).getIndex();
-      return MI->getOperand(0).getReg();
-    }
+    return true;
     break;
   }
-  return 0;
+  return false;
 }
 
-unsigned X86InstrInfo::isStoreToStackSlot(const MachineInstr *MI,
-                                          int &FrameIndex) const {
-  switch (MI->getOpcode()) {
+static bool isFrameStoreOpcode(int Opcode) {
+  switch (Opcode) {
   default: break;
   case X86::MOV8mr:
   case X86::MOV16mr:
@@ -754,19 +763,83 @@ unsigned X86InstrInfo::isStoreToStackSlot(const MachineInstr *MI,
   case X86::MMX_MOVD64mr:
   case X86::MMX_MOVQ64mr:
   case X86::MMX_MOVNTQmr:
-    if (MI->getOperand(0).isFI() && MI->getOperand(1).isImm() &&
-        MI->getOperand(2).isReg() && MI->getOperand(3).isImm() &&
-        MI->getOperand(1).getImm() == 1 &&
-        MI->getOperand(2).getReg() == 0 &&
-        MI->getOperand(3).getImm() == 0) {
-      FrameIndex = MI->getOperand(0).getIndex();
+    return true;
+  }
+  return false;
+}
+
+unsigned X86InstrInfo::isLoadFromStackSlot(const MachineInstr *MI, 
+                                           int &FrameIndex) const {
+  if (isFrameLoadOpcode(MI->getOpcode()))
+    if (isFrameOperand(MI, 1, FrameIndex))
+      return MI->getOperand(0).getReg();
+  return 0;
+}
+
+unsigned X86InstrInfo::isLoadFromStackSlotPostFE(const MachineInstr *MI, 
+                                                 int &FrameIndex) const {
+  if (isFrameLoadOpcode(MI->getOpcode())) {
+    unsigned Reg;
+    if ((Reg = isLoadFromStackSlot(MI, FrameIndex)))
+      return Reg;
+    // Check for post-frame index elimination operations
+    return hasLoadFromStackSlot(MI, FrameIndex);
+  }
+  return 0;
+}
+
+bool X86InstrInfo::hasLoadFromStackSlot(const MachineInstr *MI,
+                                        int &FrameIndex) const {
+  for (MachineInstr::mmo_iterator o = MI->memoperands_begin(),
+         oe = MI->memoperands_end();
+       o != oe;
+       ++o) {
+    if ((*o)->isLoad() && (*o)->getValue())
+      if (const FixedStackPseudoSourceValue *Value =
+          dyn_cast<const FixedStackPseudoSourceValue>((*o)->getValue())) {
+        FrameIndex = Value->getFrameIndex();
+        return true;
+      }
+  }
+  return false;
+}
+
+unsigned X86InstrInfo::isStoreToStackSlot(const MachineInstr *MI,
+                                          int &FrameIndex) const {
+  if (isFrameStoreOpcode(MI->getOpcode()))
+    if (isFrameOperand(MI, 0, FrameIndex))
       return MI->getOperand(X86AddrNumOperands).getReg();
-    }
-    break;
+  return 0;
+}
+
+unsigned X86InstrInfo::isStoreToStackSlotPostFE(const MachineInstr *MI,
+                                                int &FrameIndex) const {
+  if (isFrameStoreOpcode(MI->getOpcode())) {
+    unsigned Reg;
+    if ((Reg = isStoreToStackSlot(MI, FrameIndex)))
+      return Reg;
+    // Check for post-frame index elimination operations
+    return hasStoreToStackSlot(MI, FrameIndex);
   }
   return 0;
 }
 
+bool X86InstrInfo::hasStoreToStackSlot(const MachineInstr *MI,
+                                       int &FrameIndex) const {
+  for (MachineInstr::mmo_iterator o = MI->memoperands_begin(),
+         oe = MI->memoperands_end();
+       o != oe;
+       ++o) {
+    if ((*o)->isStore() && (*o)->getValue())
+      if (const FixedStackPseudoSourceValue *Value =
+          dyn_cast<const FixedStackPseudoSourceValue>((*o)->getValue())) {
+        FrameIndex = Value->getFrameIndex();
+        return true;
+      }
+  }
+  return false;
+}
+
/// regIsPICBase - Return true if the register is a PIC base (i.e. defined
/// by X86::MOVPC32r).
 static bool regIsPICBase(unsigned BaseReg, const MachineRegisterInfo &MRI) {
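
The refactoring above separates two ways of recognizing stack-slot accesses:
an exact operand check (base is a frame index with scale 1, no index
register, and zero displacement) and, once frame-index elimination has
rewritten the operands, a scan of the instruction's attached memory
references for fixed-stack values. A distilled sketch, using assumed
simplified stand-ins for the Machine* types:

    #include <vector>

    // Assumed stand-ins for MachineOperand and MachineMemOperand.
    struct Op { bool IsFI, IsImm, IsReg; int Index; long Imm; unsigned Reg; };
    struct MemRef { bool IsLoad; bool IsFixedStack; int FrameIndex; };

    // A memory reference is a plain stack slot iff it is [FI + 0]:
    // scale == 1, no index register, zero displacement. The caller must
    // guarantee the four address operands starting at I are present.
    bool isFrameOperand(const std::vector<Op> &Ops, unsigned I,
                        int &FrameIndex) {
      if (Ops[I].IsFI && Ops[I + 1].IsImm && Ops[I + 2].IsReg &&
          Ops[I + 3].IsImm && Ops[I + 1].Imm == 1 && Ops[I + 2].Reg == 0 &&
          Ops[I + 3].Imm == 0) {
        FrameIndex = Ops[I].Index;
        return true;
      }
      return false;
    }

    // Post-frame-index-elimination fallback: trust the attached memory
    // references instead of the (now rewritten) address operands.
    bool hasLoadFromStackSlot(const std::vector<MemRef> &MMOs,
                              int &FrameIndex) {
      for (const MemRef &M : MMOs)
        if (M.IsLoad && M.IsFixedStack) {
          FrameIndex = M.FrameIndex;
          return true;
        }
      return false;
    }
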
@@ -782,31 +855,9 @@ static bool regIsPICBase(unsigned BaseReg, const MachineRegisterInfo &MRI) {
   return isPICBase;
 }
 
-/// CanRematLoadWithDispOperand - Return true if a load with the specified
-/// operand is a candidate for remat: for this to be true we need to know that
-/// the load will always return the same value, even if moved.
-static bool CanRematLoadWithDispOperand(const MachineOperand &MO,
-                                        X86TargetMachine &TM) {
-  // Loads from constant pool entries can be remat'd.
-  if (MO.isCPI()) return true;
-  
-  // We can remat globals in some cases.
-  if (MO.isGlobal()) {
-    // If this is a load of a stub, not of the global, we can remat it.  This
-    // access will always return the address of the global.
-    if (isGlobalStubReference(MO.getTargetFlags()))
-      return true;
-    
-    // If the global itself is constant, we can remat the load.
-    if (GlobalVariable *GV = dyn_cast<GlobalVariable>(MO.getGlobal()))
-      if (GV->isConstant())
-        return true;
-  }
-  return false;
-}
- 
 bool
-X86InstrInfo::isReallyTriviallyReMaterializable(const MachineInstr *MI) const {
+X86InstrInfo::isReallyTriviallyReMaterializable(const MachineInstr *MI,
+                                                AliasAnalysis *AA) const {
   switch (MI->getOpcode()) {
   default: break;
     case X86::MOV8rm:
@@ -817,15 +868,19 @@ X86InstrInfo::isReallyTriviallyReMaterializable(const MachineInstr *MI) const {
     case X86::MOVSSrm:
     case X86::MOVSDrm:
     case X86::MOVAPSrm:
+    case X86::MOVUPSrm:
+    case X86::MOVUPSrm_Int:
     case X86::MOVAPDrm:
     case X86::MOVDQArm:
     case X86::MMX_MOVD64rm:
-    case X86::MMX_MOVQ64rm: {
+    case X86::MMX_MOVQ64rm:
+    case X86::FsMOVAPSrm:
+    case X86::FsMOVAPDrm: {
       // Loads from constant pools are trivially rematerializable.
       if (MI->getOperand(1).isReg() &&
           MI->getOperand(2).isImm() &&
           MI->getOperand(3).isReg() && MI->getOperand(3).getReg() == 0 &&
-          CanRematLoadWithDispOperand(MI->getOperand(4), TM)) {
+          MI->isInvariantLoad(AA)) {
         unsigned BaseReg = MI->getOperand(1).getReg();
         if (BaseReg == 0 || BaseReg == X86::RIP)
           return true;
@@ -876,7 +931,7 @@ X86InstrInfo::isReallyTriviallyReMaterializable(const MachineInstr *MI) const {
 /// isSafeToClobberEFLAGS - Return true if it's safe to insert an instruction that
 /// would clobber the EFLAGS condition register. Note the result may be
 /// conservative. If it cannot definitely determine the safety after visiting
-/// two instructions it assumes it's not safe.
+/// a few instructions in each direction it assumes it's not safe.
 static bool isSafeToClobberEFLAGS(MachineBasicBlock &MBB,
                                   MachineBasicBlock::iterator I) {
   // It's always safe to clobber EFLAGS at the end of a block.
@@ -884,11 +939,13 @@ static bool isSafeToClobberEFLAGS(MachineBasicBlock &MBB,
     return true;
 
   // For compile time consideration, if we are not able to determine the
-  // safety after visiting 2 instructions, we will assume it's not safe.
-  for (unsigned i = 0; i < 2; ++i) {
+  // safety after visiting 4 instructions in each direction, we will assume
+  // it's not safe.
+  MachineBasicBlock::iterator Iter = I;
+  for (unsigned i = 0; i < 4; ++i) {
     bool SeenDef = false;
-    for (unsigned j = 0, e = I->getNumOperands(); j != e; ++j) {
-      MachineOperand &MO = I->getOperand(j);
+    for (unsigned j = 0, e = Iter->getNumOperands(); j != e; ++j) {
+      MachineOperand &MO = Iter->getOperand(j);
       if (!MO.isReg())
         continue;
       if (MO.getReg() == X86::EFLAGS) {
@@ -901,10 +958,33 @@ static bool isSafeToClobberEFLAGS(MachineBasicBlock &MBB,
     if (SeenDef)
       // This instruction defines EFLAGS, no need to look any further.
       return true;
-    ++I;
+    ++Iter;
 
     // If we make it to the end of the block, it's safe to clobber EFLAGS.
-    if (I == MBB.end())
+    if (Iter == MBB.end())
+      return true;
+  }
+
+  Iter = I;
+  for (unsigned i = 0; i < 4; ++i) {
+    // If we make it to the beginning of the block, it's safe to clobber
+    // EFLAGS iff EFLAGS is not live-in.
+    if (Iter == MBB.begin())
+      return !MBB.isLiveIn(X86::EFLAGS);
+
+    --Iter;
+    bool SawKill = false;
+    for (unsigned j = 0, e = Iter->getNumOperands(); j != e; ++j) {
+      MachineOperand &MO = Iter->getOperand(j);
+      if (MO.isReg() && MO.getReg() == X86::EFLAGS) {
+        if (MO.isDef()) return MO.isDead();
+        if (MO.isKill()) SawKill = true;
+      }
+    }
+
+    if (SawKill)
+      // This instruction kills EFLAGS and doesn't redefine it, so
+      // there's no need to look further.
       return true;
   }
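
The rewritten scan decides whether EFLAGS may be clobbered at a point by
looking up to four instructions forward for a redefinition (safe) or a read
(unsafe), and up to four instructions backward for a dead def or a kill
(safe), with block boundaries handled via liveness. A compact model of that
logic, assuming each instruction's EFLAGS operands have been pre-summarized:

    #include <cstddef>
    #include <vector>

    // Pre-summarized EFLAGS behavior of one instruction (an assumption of
    // this sketch; the real code inspects MachineOperands directly).
    struct EFlagsRef { bool Reads, Defines, DefIsDead, Kills; };

    // After: instructions following the insertion point, in program order.
    // Before: instructions preceding it, nearest first.
    bool safeToClobberEFLAGS(const std::vector<EFlagsRef> &After,
                             const std::vector<EFlagsRef> &Before,
                             bool LiveInAtBlockEntry) {
      // Forward: a def before any read means the value is overwritten anyway.
      for (std::size_t i = 0; i < After.size() && i < 4; ++i) {
        if (After[i].Reads)   return false;
        if (After[i].Defines) return true;
      }
      if (After.size() < 4)
        return true; // fell off the end of the block
      // Backward: a dead def or a kill means the value is never consumed.
      for (std::size_t i = 0; i < Before.size() && i < 4; ++i) {
        if (Before[i].Defines) return Before[i].DefIsDead;
        if (Before[i].Kills)   return true;
      }
      if (Before.size() < 4)
        return !LiveInAtBlockEntry; // reached the top of the block
      return false; // undetermined within the window: assume unsafe
    }
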
 
@@ -915,12 +995,13 @@ static bool isSafeToClobberEFLAGS(MachineBasicBlock &MBB,
 void X86InstrInfo::reMaterialize(MachineBasicBlock &MBB,
                                  MachineBasicBlock::iterator I,
                                  unsigned DestReg, unsigned SubIdx,
-                                 const MachineInstr *Orig) const {
+                                 const MachineInstr *Orig,
+                                 const TargetRegisterInfo *TRI) const {
   DebugLoc DL = DebugLoc::getUnknownLoc();
   if (I != MBB.end()) DL = I->getDebugLoc();
 
   if (SubIdx && TargetRegisterInfo::isPhysicalRegister(DestReg)) {
-    DestReg = RI.getSubReg(DestReg, SubIdx);
+    DestReg = TRI->getSubReg(DestReg, SubIdx);
     SubIdx = 0;
   }
 
@@ -958,43 +1039,6 @@ void X86InstrInfo::reMaterialize(MachineBasicBlock &MBB,
   NewMI->getOperand(0).setSubReg(SubIdx);
 }
 
-/// isInvariantLoad - Return true if the specified instruction (which is marked
-/// mayLoad) is loading from a location whose value is invariant across the
-/// function.  For example, loading a value from the constant pool or from
-/// from the argument area of a function if it does not change.  This should
-/// only return true of *all* loads the instruction does are invariant (if it
-/// does multiple loads).
-bool X86InstrInfo::isInvariantLoad(const MachineInstr *MI) const {
-  // This code cares about loads from three cases: constant pool entries,
-  // invariant argument slots, and global stubs.  In order to handle these cases
-  // for all of the myriad of X86 instructions, we just scan for a CP/FI/GV
-  // operand and base our analysis on it.  This is safe because the address of
-  // none of these three cases is ever used as anything other than a load base
-  // and X86 doesn't have any instructions that load from multiple places.
-  
-  for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
-    const MachineOperand &MO = MI->getOperand(i);
-    // Loads from constant pools are trivially invariant.
-    if (MO.isCPI())
-      return true;
-
-    if (MO.isGlobal())
-      return isGlobalStubReference(MO.getTargetFlags());
-
-    // If this is a load from an invariant stack slot, the load is a constant.
-    if (MO.isFI()) {
-      const MachineFrameInfo &MFI =
-        *MI->getParent()->getParent()->getFrameInfo();
-      int Idx = MO.getIndex();
-      return MFI.isFixedObjectIndex(Idx) && MFI.isImmutableObjectIndex(Idx);
-    }
-  }
-  
-  // All other instances of these instructions are presumed to have other
-  // issues.
-  return false;
-}
-
 /// hasLiveCondCodeDef - True if MI has a condition code def, e.g. EFLAGS, that
 /// is not marked dead.
 static bool hasLiveCondCodeDef(MachineInstr *MI) {
@@ -1923,15 +1967,17 @@ void X86InstrInfo::storeRegToAddr(MachineFunction &MF, unsigned SrcReg,
                                   bool isKill,
                                   SmallVectorImpl<MachineOperand> &Addr,
                                   const TargetRegisterClass *RC,
+                                  MachineInstr::mmo_iterator MMOBegin,
+                                  MachineInstr::mmo_iterator MMOEnd,
                                   SmallVectorImpl<MachineInstr*> &NewMIs) const {
-  bool isAligned = (RI.getStackAlignment() >= 16) ||
-    RI.needsStackRealignment(MF);
+  bool isAligned = (*MMOBegin)->getAlignment() >= 16;
   unsigned Opc = getStoreRegOpcode(SrcReg, RC, isAligned, TM);
   DebugLoc DL = DebugLoc::getUnknownLoc();
   MachineInstrBuilder MIB = BuildMI(MF, DL, get(Opc));
   for (unsigned i = 0, e = Addr.size(); i != e; ++i)
     MIB.addOperand(Addr[i]);
   MIB.addReg(SrcReg, getKillRegState(isKill));
+  (*MIB).setMemRefs(MMOBegin, MMOEnd);
   NewMIs.push_back(MIB);
 }
 
@@ -2014,14 +2060,16 @@ void X86InstrInfo::loadRegFromStackSlot(MachineBasicBlock &MBB,
 void X86InstrInfo::loadRegFromAddr(MachineFunction &MF, unsigned DestReg,
                                  SmallVectorImpl<MachineOperand> &Addr,
                                  const TargetRegisterClass *RC,
+                                 MachineInstr::mmo_iterator MMOBegin,
+                                 MachineInstr::mmo_iterator MMOEnd,
                                  SmallVectorImpl<MachineInstr*> &NewMIs) const {
-  bool isAligned = (RI.getStackAlignment() >= 16) ||
-    RI.needsStackRealignment(MF);
+  bool isAligned = (*MMOBegin)->getAlignment() >= 16;
   unsigned Opc = getLoadRegOpcode(DestReg, RC, isAligned, TM);
   DebugLoc DL = DebugLoc::getUnknownLoc();
   MachineInstrBuilder MIB = BuildMI(MF, DL, get(Opc), DestReg);
   for (unsigned i = 0, e = Addr.size(); i != e; ++i)
     MIB.addOperand(Addr[i]);
+  (*MIB).setMemRefs(MMOBegin, MMOEnd);
   NewMIs.push_back(MIB);
 }
 
@@ -2199,7 +2247,7 @@ X86InstrInfo::foldMemoryOperandImpl(MachineFunction &MF,
   // If table selected...
   if (OpcodeTablePtr) {
     // Find the Opcode to fuse
-    DenseMap<unsigned*, std::pair<unsigned,unsigned> >::iterator I =
+    DenseMap<unsigned*, std::pair<unsigned,unsigned> >::const_iterator I =
       OpcodeTablePtr->find((unsigned*)MI->getOpcode());
     if (I != OpcodeTablePtr->end()) {
       unsigned Opcode = I->second.first;
@@ -2431,7 +2479,7 @@ bool X86InstrInfo::canFoldMemoryOperand(const MachineInstr *MI,
   
   if (OpcodeTablePtr) {
     // Find the Opcode to fuse
-    DenseMap<unsigned*, std::pair<unsigned,unsigned> >::iterator I =
+    DenseMap<unsigned*, std::pair<unsigned,unsigned> >::const_iterator I =
       OpcodeTablePtr->find((unsigned*)Opc);
     if (I != OpcodeTablePtr->end())
       return true;
@@ -2442,7 +2490,7 @@ bool X86InstrInfo::canFoldMemoryOperand(const MachineInstr *MI,
 bool X86InstrInfo::unfoldMemoryOperand(MachineFunction &MF, MachineInstr *MI,
                                 unsigned Reg, bool UnfoldLoad, bool UnfoldStore,
                                 SmallVectorImpl<MachineInstr*> &NewMIs) const {
-  DenseMap<unsigned*, std::pair<unsigned,unsigned> >::iterator I =
+  DenseMap<unsigned*, std::pair<unsigned,unsigned> >::const_iterator I =
     MemOp2RegOpTable.find((unsigned*)MI->getOpcode());
   if (I == MemOp2RegOpTable.end())
     return false;
@@ -2479,7 +2527,11 @@ bool X86InstrInfo::unfoldMemoryOperand(MachineFunction &MF, MachineInstr *MI,
 
   // Emit the load instruction.
   if (UnfoldLoad) {
-    loadRegFromAddr(MF, Reg, AddrOps, RC, NewMIs);
+    std::pair<MachineInstr::mmo_iterator,
+              MachineInstr::mmo_iterator> MMOs =
+      MF.extractLoadMemRefs(MI->memoperands_begin(),
+                            MI->memoperands_end());
+    loadRegFromAddr(MF, Reg, AddrOps, RC, MMOs.first, MMOs.second, NewMIs);
     if (UnfoldStore) {
       // Address operands cannot be marked isKill.
       for (unsigned i = 1; i != 1 + X86AddrNumOperands; ++i) {
@@ -2539,7 +2591,11 @@ bool X86InstrInfo::unfoldMemoryOperand(MachineFunction &MF, MachineInstr *MI,
   // Emit the store instruction.
   if (UnfoldStore) {
     const TargetRegisterClass *DstRC = TID.OpInfo[0].getRegClass(&RI);
-    storeRegToAddr(MF, Reg, true, AddrOps, DstRC, NewMIs);
+    std::pair<MachineInstr::mmo_iterator,
+              MachineInstr::mmo_iterator> MMOs =
+      MF.extractStoreMemRefs(MI->memoperands_begin(),
+                             MI->memoperands_end());
+    storeRegToAddr(MF, Reg, true, AddrOps, DstRC, MMOs.first, MMOs.second, NewMIs);
   }
 
   return true;
@@ -2551,7 +2607,7 @@ X86InstrInfo::unfoldMemoryOperand(SelectionDAG &DAG, SDNode *N,
   if (!N->isMachineOpcode())
     return false;
 
-  DenseMap<unsigned*, std::pair<unsigned,unsigned> >::iterator I =
+  DenseMap<unsigned*, std::pair<unsigned,unsigned> >::const_iterator I =
     MemOp2RegOpTable.find((unsigned*)N->getMachineOpcode());
   if (I == MemOp2RegOpTable.end())
     return false;
@@ -2581,14 +2637,20 @@ X86InstrInfo::unfoldMemoryOperand(SelectionDAG &DAG, SDNode *N,
 
   // Emit the load instruction.
   SDNode *Load = 0;
-  const MachineFunction &MF = DAG.getMachineFunction();
+  MachineFunction &MF = DAG.getMachineFunction();
   if (FoldedLoad) {
     EVT VT = *RC->vt_begin();
-    bool isAligned = (RI.getStackAlignment() >= 16) ||
-      RI.needsStackRealignment(MF);
+    std::pair<MachineInstr::mmo_iterator,
+              MachineInstr::mmo_iterator> MMOs =
+      MF.extractLoadMemRefs(cast<MachineSDNode>(N)->memoperands_begin(),
+                            cast<MachineSDNode>(N)->memoperands_end());
+    bool isAligned = (*MMOs.first)->getAlignment() >= 16;
     Load = DAG.getMachineNode(getLoadRegOpcode(0, RC, isAligned, TM), dl,
                               VT, MVT::Other, &AddrOps[0], AddrOps.size());
     NewNodes.push_back(Load);
+
+    // Preserve memory reference information.
+    cast<MachineSDNode>(Load)->setMemRefs(MMOs.first, MMOs.second);
   }
 
   // Emit the data processing instruction.
@@ -2615,21 +2677,28 @@ X86InstrInfo::unfoldMemoryOperand(SelectionDAG &DAG, SDNode *N,
     AddrOps.pop_back();
     AddrOps.push_back(SDValue(NewNode, 0));
     AddrOps.push_back(Chain);
-    bool isAligned = (RI.getStackAlignment() >= 16) ||
-      RI.needsStackRealignment(MF);
+    std::pair<MachineInstr::mmo_iterator,
+              MachineInstr::mmo_iterator> MMOs =
+      MF.extractStoreMemRefs(cast<MachineSDNode>(N)->memoperands_begin(),
+                             cast<MachineSDNode>(N)->memoperands_end());
+    bool isAligned = (*MMOs.first)->getAlignment() >= 16;
     SDNode *Store = DAG.getMachineNode(getStoreRegOpcode(0, DstRC,
                                                          isAligned, TM),
                                        dl, MVT::Other,
                                        &AddrOps[0], AddrOps.size());
     NewNodes.push_back(Store);
+
+    // Preserve memory reference information.
+    cast<MachineSDNode>(Store)->setMemRefs(MMOs.first, MMOs.second);
   }
 
   return true;
 }
 
 unsigned X86InstrInfo::getOpcodeAfterMemoryUnfold(unsigned Opc,
-                                      bool UnfoldLoad, bool UnfoldStore) const {
-  DenseMap<unsigned*, std::pair<unsigned,unsigned> >::iterator I =
+                                      bool UnfoldLoad, bool UnfoldStore,
+                                      unsigned *LoadRegIndex) const {
+  DenseMap<unsigned*, std::pair<unsigned,unsigned> >::const_iterator I =
     MemOp2RegOpTable.find((unsigned*)Opc);
   if (I == MemOp2RegOpTable.end())
     return 0;
@@ -2639,6 +2708,8 @@ unsigned X86InstrInfo::getOpcodeAfterMemoryUnfold(unsigned Opc,
     return 0;
   if (UnfoldStore && !FoldedStore)
     return 0;
+  if (LoadRegIndex)
+    *LoadRegIndex = I->second.second & 0xf;
   return I->second.first;
 }
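
The unfoldMemoryOperand changes above preserve memory reference information
by splitting the folded instruction's memory operands between the new load
and store, and by deriving the 16-byte alignment test from those operands
rather than from the function's stack alignment. A simplified sketch of both
steps, with an assumed stand-in for MachineMemOperand:

    #include <utility>
    #include <vector>

    // Assumed stand-in for MachineMemOperand.
    struct MMO { bool IsLoad, IsStore; unsigned Alignment; };

    // Partition a folded instruction's memory references so each unfolded
    // node keeps only those describing its own access, mirroring the
    // extractLoadMemRefs/extractStoreMemRefs calls above.
    std::pair<std::vector<MMO>, std::vector<MMO>>
    splitMemRefs(const std::vector<MMO> &MMOs) {
      std::vector<MMO> Loads, Stores;
      for (const MMO &M : MMOs) {
        if (M.IsLoad)  Loads.push_back(M);
        if (M.IsStore) Stores.push_back(M);
      }
      return {Loads, Stores};
    }

    // The alignment test now reads straight off the first memory operand
    // instead of consulting the function's stack alignment:
    bool isAligned16(const std::vector<MMO> &MMOs) {
      return !MMOs.empty() && MMOs.front().Alignment >= 16;
    }
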
 
@@ -3062,7 +3133,6 @@ static unsigned GetInstSizeWithDesc(const MachineInstr &MI,
       break;
     case TargetInstrInfo::IMPLICIT_DEF:
     case TargetInstrInfo::KILL:
-    case X86::DWARF_LOC:
     case X86::FP_REG_KILL:
       break;
     case X86::MOVPC32r: {
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.h b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.h
index aff3603..3d4c2f6 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.h
@@ -74,31 +74,31 @@ namespace X86II {
     //===------------------------------------------------------------------===//
     // X86 Specific MachineOperand flags.
     
-    MO_NO_FLAG = 0,
+    MO_NO_FLAG,
     
     /// MO_GOT_ABSOLUTE_ADDRESS - On a symbol operand, this represents a
     /// relocation of:
     ///    SYMBOL_LABEL + [. - PICBASELABEL]
-    MO_GOT_ABSOLUTE_ADDRESS = 1,
+    MO_GOT_ABSOLUTE_ADDRESS,
     
     /// MO_PIC_BASE_OFFSET - On a symbol operand this indicates that the
     /// immediate should get the value of the symbol minus the PIC base label:
     ///    SYMBOL_LABEL - PICBASELABEL
-    MO_PIC_BASE_OFFSET = 2,
+    MO_PIC_BASE_OFFSET,
 
     /// MO_GOT - On a symbol operand this indicates that the immediate is the
     /// offset to the GOT entry for the symbol name from the base of the GOT.
     ///
     /// See the X86-64 ELF ABI supplement for more details. 
     ///    SYMBOL_LABEL @GOT
-    MO_GOT = 3,
+    MO_GOT,
     
     /// MO_GOTOFF - On a symbol operand this indicates that the immediate is
     /// the offset to the location of the symbol name from the base of the GOT. 
     ///
     /// See the X86-64 ELF ABI supplement for more details. 
     ///    SYMBOL_LABEL @GOTOFF
-    MO_GOTOFF = 4,
+    MO_GOTOFF,
     
     /// MO_GOTPCREL - On a symbol operand this indicates that the immediate is
     /// offset to the GOT entry for the symbol name from the current code
@@ -106,75 +106,75 @@ namespace X86II {
     ///
     /// See the X86-64 ELF ABI supplement for more details. 
     ///    SYMBOL_LABEL @GOTPCREL
-    MO_GOTPCREL = 5,
+    MO_GOTPCREL,
     
     /// MO_PLT - On a symbol operand this indicates that the immediate is
     /// offset to the PLT entry of symbol name from the current code location. 
     ///
     /// See the X86-64 ELF ABI supplement for more details. 
     ///    SYMBOL_LABEL @PLT
-    MO_PLT = 6,
+    MO_PLT,
     
     /// MO_TLSGD - On a symbol operand this indicates that the immediate is
     /// some TLS offset.
     ///
     /// See 'ELF Handling for Thread-Local Storage' for more details. 
     ///    SYMBOL_LABEL @TLSGD
-    MO_TLSGD = 7,
+    MO_TLSGD,
     
     /// MO_GOTTPOFF - On a symbol operand this indicates that the immediate is
     /// some TLS offset.
     ///
     /// See 'ELF Handling for Thread-Local Storage' for more details. 
     ///    SYMBOL_LABEL @GOTTPOFF
-    MO_GOTTPOFF = 8,
+    MO_GOTTPOFF,
    
     /// MO_INDNTPOFF - On a symbol operand this indicates that the immediate is
     /// some TLS offset.
     ///
     /// See 'ELF Handling for Thread-Local Storage' for more details. 
     ///    SYMBOL_LABEL @INDNTPOFF
-    MO_INDNTPOFF = 9,
+    MO_INDNTPOFF,
     
     /// MO_TPOFF - On a symbol operand this indicates that the immediate is
     /// some TLS offset.
     ///
     /// See 'ELF Handling for Thread-Local Storage' for more details. 
     ///    SYMBOL_LABEL @TPOFF
-    MO_TPOFF = 10,
+    MO_TPOFF,
     
     /// MO_NTPOFF - On a symbol operand this indicates that the immediate is
     /// some TLS offset.
     ///
     /// See 'ELF Handling for Thread-Local Storage' for more details. 
     ///    SYMBOL_LABEL @NTPOFF
-    MO_NTPOFF = 11,
+    MO_NTPOFF,
     
     /// MO_DLLIMPORT - On a symbol operand "FOO", this indicates that the
     /// reference is actually to the "__imp_FOO" symbol.  This is used for
     /// dllimport linkage on windows.
-    MO_DLLIMPORT = 12,
+    MO_DLLIMPORT,
     
     /// MO_DARWIN_STUB - On a symbol operand "FOO", this indicates that the
     /// reference is actually to the "FOO$stub" symbol.  This is used for calls
     /// and jumps to external functions on Tiger and before.
-    MO_DARWIN_STUB = 13,
+    MO_DARWIN_STUB,
     
     /// MO_DARWIN_NONLAZY - On a symbol operand "FOO", this indicates that the
     /// reference is actually to the "FOO$non_lazy_ptr" symbol, which is a
     /// non-PIC-base-relative reference to a non-hidden dyld lazy pointer stub.
-    MO_DARWIN_NONLAZY = 14,
+    MO_DARWIN_NONLAZY,
 
     /// MO_DARWIN_NONLAZY_PIC_BASE - On a symbol operand "FOO", this indicates
     /// that the reference is actually to "FOO$non_lazy_ptr - PICBASE", which is
     /// a PIC-base-relative reference to a non-hidden dyld lazy pointer stub.
-    MO_DARWIN_NONLAZY_PIC_BASE = 15,
+    MO_DARWIN_NONLAZY_PIC_BASE,
     
     /// MO_DARWIN_HIDDEN_NONLAZY_PIC_BASE - On a symbol operand "FOO", this
     /// indicates that the reference is actually to "FOO$non_lazy_ptr -PICBASE",
     /// which is a PIC-base-relative reference to a hidden dyld lazy pointer
     /// stub.
-    MO_DARWIN_HIDDEN_NONLAZY_PIC_BASE = 16
+    MO_DARWIN_HIDDEN_NONLAZY_PIC_BASE
   };
 }
 
@@ -449,14 +449,41 @@ public:
                            unsigned &SrcSubIdx, unsigned &DstSubIdx) const;
 
   unsigned isLoadFromStackSlot(const MachineInstr *MI, int &FrameIndex) const;
-  unsigned isStoreToStackSlot(const MachineInstr *MI, int &FrameIndex) const;
+  /// isLoadFromStackSlotPostFE - Check for post-frame ptr elimination
+  /// stack locations as well.  This uses a heuristic so it isn't
+  /// reliable for correctness.
+  unsigned isLoadFromStackSlotPostFE(const MachineInstr *MI,
+                                     int &FrameIndex) const;
+
+  /// hasLoadFromStackSlot - If the specified machine instruction has
+  /// a load from a stack slot, return true along with the FrameIndex
+  /// of the loaded stack slot.  If not, return false.  Unlike
+  /// isLoadFromStackSlot, this returns true for any instruction that
+  /// loads from the stack.  This is a hint only and may not catch all
+  /// cases.
+  bool hasLoadFromStackSlot(const MachineInstr *MI, int &FrameIndex) const;
 
-  bool isReallyTriviallyReMaterializable(const MachineInstr *MI) const;
+  unsigned isStoreToStackSlot(const MachineInstr *MI, int &FrameIndex) const;
+  /// isStoreToStackSlotPostFE - Check for post-frame ptr elimination
+  /// stack locations as well.  This uses a heuristic so it isn't
+  /// reliable for correctness.
+  unsigned isStoreToStackSlotPostFE(const MachineInstr *MI,
+                                    int &FrameIndex) const;
+
+  /// hasStoreToStackSlot - If the specified machine instruction has a
+  /// store to a stack slot, return true along with the FrameIndex of
+  /// the stored stack slot.  If not, return false.  Unlike
+  /// isStoreToStackSlot, this returns true for any instruction that
+  /// stores to the stack.  This is a hint only and may not catch all
+  /// cases.
+  bool hasStoreToStackSlot(const MachineInstr *MI, int &FrameIndex) const;
+
+  bool isReallyTriviallyReMaterializable(const MachineInstr *MI,
+                                         AliasAnalysis *AA) const;
   void reMaterialize(MachineBasicBlock &MBB, MachineBasicBlock::iterator MI,
                      unsigned DestReg, unsigned SubIdx,
-                     const MachineInstr *Orig) const;
-
-  bool isInvariantLoad(const MachineInstr *MI) const;
+                     const MachineInstr *Orig,
+                     const TargetRegisterInfo *TRI) const;
 
   /// convertToThreeAddress - This method must be implemented by targets that
   /// set the M_CONVERTIBLE_TO_3_ADDR flag.  When this flag is set, the target
@@ -500,6 +527,8 @@ public:
   virtual void storeRegToAddr(MachineFunction &MF, unsigned SrcReg, bool isKill,
                               SmallVectorImpl<MachineOperand> &Addr,
                               const TargetRegisterClass *RC,
+                              MachineInstr::mmo_iterator MMOBegin,
+                              MachineInstr::mmo_iterator MMOEnd,
                               SmallVectorImpl<MachineInstr*> &NewMIs) const;
 
   virtual void loadRegFromStackSlot(MachineBasicBlock &MBB,
@@ -510,6 +539,8 @@ public:
   virtual void loadRegFromAddr(MachineFunction &MF, unsigned DestReg,
                                SmallVectorImpl<MachineOperand> &Addr,
                                const TargetRegisterClass *RC,
+                               MachineInstr::mmo_iterator MMOBegin,
+                               MachineInstr::mmo_iterator MMOEnd,
                                SmallVectorImpl<MachineInstr*> &NewMIs) const;
   
   virtual bool spillCalleeSavedRegisters(MachineBasicBlock &MBB,
@@ -557,9 +588,12 @@ public:
   /// getOpcodeAfterMemoryUnfold - Returns the opcode of the would be new
   /// instruction after load / store are unfolded from an instruction of the
   /// specified opcode. It returns zero if the specified unfolding is not
-  /// possible.
+  /// possible. If LoadRegIndex is non-null, it is filled in with the operand
+  /// index of the operand which will hold the register holding the loaded
+  /// value.
   virtual unsigned getOpcodeAfterMemoryUnfold(unsigned Opc,
-                                      bool UnfoldLoad, bool UnfoldStore) const;
+                                      bool UnfoldLoad, bool UnfoldStore,
+                                      unsigned *LoadRegIndex = 0) const;
   
   virtual bool BlockHasNoFallThrough(const MachineBasicBlock &MBB) const;
   virtual
@@ -598,12 +632,19 @@ public:
   ///
   unsigned getGlobalBaseReg(MachineFunction *MF) const;
 
+  virtual bool isProfitableToDuplicateIndirectBranch() const { return true; }
+
 private:
   MachineInstr* foldMemoryOperandImpl(MachineFunction &MF,
                                      MachineInstr* MI,
                                      unsigned OpNum,
                                      const SmallVectorImpl<MachineOperand> &MOs,
                                      unsigned Size, unsigned Alignment) const;
+
+  /// isFrameOperand - Return true and the FrameIndex if the specified
+  /// operand and the following operands form a reference to the stack frame.
+  bool isFrameOperand(const MachineInstr *MI, unsigned int Op,
+                      int &FrameIndex) const;
 };
 
 } // End llvm namespace
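
A hypothetical caller of the extended unfold query above; only the queried
method and its signature come from this patch, the surrounding names and the
include path are invented for illustration:

    #include "X86InstrInfo.h" // path as in this tree (assumed)

    // Ask for the register-form opcode and learn which operand index will
    // receive the reloaded value once the load is split out.
    unsigned queryUnfoldedOpcode(const llvm::X86InstrInfo &TII,
                                 unsigned MemOpc) {
      unsigned LoadRegIndex = 0;
      unsigned RegOpc = TII.getOpcodeAfterMemoryUnfold(MemOpc,
                                                       /*UnfoldLoad=*/true,
                                                       /*UnfoldStore=*/false,
                                                       &LoadRegIndex);
      // On success, operand LoadRegIndex of the RegOpc instruction holds
      // the register that receives the reloaded value.
      return RegOpc;
    }
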
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
index 30b57d8..1cf5529 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
@@ -524,7 +524,7 @@ def ADJCALLSTACKUP32   : I<0, Pseudo, (outs), (ins i32imm:$amt1, i32imm:$amt2),
 }
 
 // x86-64 va_start lowering magic.
-let usesCustomDAGSchedInserter = 1 in
+let usesCustomInserter = 1 in
 def VASTART_SAVE_XMM_REGS : I<0, Pseudo,
                               (outs),
                               (ins GR8:$al,
@@ -543,7 +543,7 @@ let neverHasSideEffects = 1 in {
 }
 
 // Trap
-def INT3 : I<0xcc, RawFrm, (outs), (ins), "int 3", []>;
+def INT3 : I<0xcc, RawFrm, (outs), (ins), "int\t3", []>;
 def INT : I<0xcd, RawFrm, (outs), (ins i8imm:$trap), "int\t$trap", []>;
 
 // PIC base construction.  This expands to code that looks like this:
@@ -1129,13 +1129,13 @@ let isTwoAddress = 1 in {
 // Conditional moves
 let Uses = [EFLAGS] in {
 
-// X86 doesn't have 8-bit conditional moves. Use a customDAGSchedInserter to
+// X86 doesn't have 8-bit conditional moves. Use a customInserter to
 // emit control flow. An alternative to this is to mark i8 SELECT as Promote,
 // however that requires promoting the operands, and can induce additional
 // i8 register pressure. Note that CMOV_GR8 is conservatively considered to
 // clobber EFLAGS, because if one of the operands is zero, the expansion
 // could involve an xor.
-let usesCustomDAGSchedInserter = 1, isTwoAddress = 0, Defs = [EFLAGS] in
+let usesCustomInserter = 1, isTwoAddress = 0, Defs = [EFLAGS] in
 def CMOV_GR8 : I<0, Pseudo,
                  (outs GR8:$dst), (ins GR8:$src1, GR8:$src2, i8imm:$cond),
                  "#CMOV_GR8 PSEUDO!",
@@ -3392,11 +3392,9 @@ def BT32mi8 : Ii8<0xBA, MRM4m, (outs), (ins i32mem:$src1, i32i8imm:$src2),
 // of the register here. This has a smaller encoding and avoids a
 // partial-register update.
 def MOVSX16rr8 : I<0xBE, MRMSrcReg, (outs GR16:$dst), (ins GR8 :$src),
-                   "movs{bl|x}\t{$src, ${dst:subreg32}|${dst:subreg32}, $src}",
-                   [(set GR16:$dst, (sext GR8:$src))]>, TB;
+                   "", [(set GR16:$dst, (sext GR8:$src))]>, TB;
 def MOVSX16rm8 : I<0xBE, MRMSrcMem, (outs GR16:$dst), (ins i8mem :$src),
-                   "movs{bl|x}\t{$src, ${dst:subreg32}|${dst:subreg32}, $src}",
-                   [(set GR16:$dst, (sextloadi16i8 addr:$src))]>, TB;
+                   "", [(set GR16:$dst, (sextloadi16i8 addr:$src))]>, TB;
 def MOVSX32rr8 : I<0xBE, MRMSrcReg, (outs GR32:$dst), (ins GR8 :$src),
                    "movs{bl|x}\t{$src, $dst|$dst, $src}",
                    [(set GR32:$dst, (sext GR8:$src))]>, TB;
@@ -3414,11 +3412,9 @@ def MOVSX32rm16: I<0xBF, MRMSrcMem, (outs GR32:$dst), (ins i16mem:$src),
 // of the register here. This has a smaller encoding and avoids a
 // partial-register update.
 def MOVZX16rr8 : I<0xB6, MRMSrcReg, (outs GR16:$dst), (ins GR8 :$src),
-                   "movz{bl|x}\t{$src, ${dst:subreg32}|${dst:subreg32}, $src}",
-                   [(set GR16:$dst, (zext GR8:$src))]>, TB;
+                   "", [(set GR16:$dst, (zext GR8:$src))]>, TB;
 def MOVZX16rm8 : I<0xB6, MRMSrcMem, (outs GR16:$dst), (ins i8mem :$src),
-                   "movz{bl|x}\t{$src, ${dst:subreg32}|${dst:subreg32}, $src}",
-                   [(set GR16:$dst, (zextloadi16i8 addr:$src))]>, TB;
+                   "", [(set GR16:$dst, (zextloadi16i8 addr:$src))]>, TB;
 def MOVZX32rr8 : I<0xB6, MRMSrcReg, (outs GR32:$dst), (ins GR8 :$src),
                    "movz{bl|x}\t{$src, $dst|$dst, $src}",
                    [(set GR32:$dst, (zext GR8:$src))]>, TB;
@@ -3474,9 +3470,8 @@ def MOV8r0   : I<0x30, MRMInitReg, (outs GR8 :$dst), (ins),
                  [(set GR8:$dst, 0)]>;
 // Use xorl instead of xorw since we don't care about the high 16 bits,
 // it's smaller, and it avoids a partial-register update.
-def MOV16r0  : I<0x31, MRMInitReg,  (outs GR16:$dst), (ins),
-                 "xor{l}\t${dst:subreg32}, ${dst:subreg32}",
-                 [(set GR16:$dst, 0)]>;
+def MOV16r0  : I<0x31, MRMInitReg, (outs GR16:$dst), (ins),
+                 "", [(set GR16:$dst, 0)]>;
 def MOV32r0  : I<0x31, MRMInitReg,  (outs GR32:$dst), (ins),
                  "xor{l}\t$dst, $dst",
                  [(set GR32:$dst, 0)]>;
@@ -3511,16 +3506,6 @@ def FS_MOV32rm : I<0x8B, MRMSrcMem, (outs GR32:$dst), (ins i32mem:$src),
                    [(set GR32:$dst, (fsload addr:$src))]>, SegFS;
 
 //===----------------------------------------------------------------------===//
-// DWARF Pseudo Instructions
-//
-
-def DWARF_LOC   : I<0, Pseudo, (outs),
-                    (ins i32imm:$line, i32imm:$col, i32imm:$file),
-                    ".loc\t$file $line $col",
-                    [(dwarf_loc (i32 imm:$line), (i32 imm:$col),
-                      (i32 imm:$file))]>;
-
-//===----------------------------------------------------------------------===//
 // EH Pseudo Instructions
 //
 let isTerminator = 1, isReturn = 1, isBarrier = 1,
@@ -3598,6 +3583,7 @@ def LXADD8  : I<0xC0, MRMSrcMem, (outs GR8:$dst), (ins i8mem:$ptr, GR8:$val),
 
 // Optimized codegen when the non-memory output is not used.
 // FIXME: Use normal add / sub instructions and add lock prefix dynamically.
+let Defs = [EFLAGS] in {
 def LOCK_ADD8mr  : I<0x00, MRMDestMem, (outs), (ins i8mem:$dst, GR8:$src2),
                     "lock\n\t"
                     "add{b}\t{$src2, $dst|$dst, $src2}", []>, LOCK;
@@ -3667,10 +3653,11 @@ def LOCK_DEC16m : I<0xFF, MRM1m, (outs), (ins i16mem:$dst),
 def LOCK_DEC32m : I<0xFF, MRM1m, (outs), (ins i32mem:$dst),
                     "lock\n\t"
                     "dec{l}\t$dst", []>, LOCK;
+}
 
 // Atomic exchange, and, or, xor
 let Constraints = "$val = $dst", Defs = [EFLAGS],
-                  usesCustomDAGSchedInserter = 1 in {
+                  usesCustomInserter = 1 in {
 def ATOMAND32 : I<0, Pseudo, (outs GR32:$dst),(ins i32mem:$ptr, GR32:$val),
                "#ATOMAND32 PSEUDO!", 
                [(set GR32:$dst, (atomic_load_and_32 addr:$ptr, GR32:$val))]>;
@@ -3739,7 +3726,7 @@ let Constraints = "$val1 = $dst1, $val2 = $dst2",
                   Defs = [EFLAGS, EAX, EBX, ECX, EDX],
                   Uses = [EAX, EBX, ECX, EDX],
                   mayLoad = 1, mayStore = 1,
-                  usesCustomDAGSchedInserter = 1 in {
+                  usesCustomInserter = 1 in {
 def ATOMAND6432 : I<0, Pseudo, (outs GR32:$dst1, GR32:$dst2),
                                (ins i64mem:$ptr, GR32:$val1, GR32:$val2),
                "#ATOMAND6432 PSEUDO!", []>;
@@ -3792,6 +3779,7 @@ def : Pat<(i32 (X86Wrapper tjumptable  :$dst)), (MOV32ri tjumptable  :$dst)>;
 def : Pat<(i32 (X86Wrapper tglobaltlsaddr:$dst)),(MOV32ri tglobaltlsaddr:$dst)>;
 def : Pat<(i32 (X86Wrapper tglobaladdr :$dst)), (MOV32ri tglobaladdr :$dst)>;
 def : Pat<(i32 (X86Wrapper texternalsym:$dst)), (MOV32ri texternalsym:$dst)>;
+def : Pat<(i32 (X86Wrapper tblockaddress:$dst)), (MOV32ri tblockaddress:$dst)>;
 
 def : Pat<(add GR32:$src1, (X86Wrapper tconstpool:$src2)),
           (ADD32ri GR32:$src1, tconstpool:$src2)>;
@@ -3801,11 +3789,15 @@ def : Pat<(add GR32:$src1, (X86Wrapper tglobaladdr :$src2)),
           (ADD32ri GR32:$src1, tglobaladdr:$src2)>;
 def : Pat<(add GR32:$src1, (X86Wrapper texternalsym:$src2)),
           (ADD32ri GR32:$src1, texternalsym:$src2)>;
+def : Pat<(add GR32:$src1, (X86Wrapper tblockaddress:$src2)),
+          (ADD32ri GR32:$src1, tblockaddress:$src2)>;
 
 def : Pat<(store (i32 (X86Wrapper tglobaladdr:$src)), addr:$dst),
           (MOV32mi addr:$dst, tglobaladdr:$src)>;
 def : Pat<(store (i32 (X86Wrapper texternalsym:$src)), addr:$dst),
           (MOV32mi addr:$dst, texternalsym:$src)>;
+def : Pat<(store (i32 (X86Wrapper tblockaddress:$src)), addr:$dst),
+          (MOV32mi addr:$dst, tblockaddress:$src)>;
 
 // Calls
 // tailcall stuff
@@ -3967,12 +3959,14 @@ def : Pat<(and GR32:$src1, 0xffff),
           (MOVZX32rr16 (EXTRACT_SUBREG GR32:$src1, x86_subreg_16bit))>;
 // r & (2^8-1) ==> movz
 def : Pat<(and GR32:$src1, 0xff),
-          (MOVZX32rr8 (EXTRACT_SUBREG (COPY_TO_REGCLASS GR32:$src1, GR32_ABCD),
+          (MOVZX32rr8 (EXTRACT_SUBREG (i32 (COPY_TO_REGCLASS GR32:$src1, 
+                                                             GR32_ABCD)),
                                       x86_subreg_8bit))>,
       Requires<[In32BitMode]>;
 // r & (2^8-1) ==> movz
 def : Pat<(and GR16:$src1, 0xff),
-          (MOVZX16rr8 (EXTRACT_SUBREG (COPY_TO_REGCLASS GR16:$src1, GR16_ABCD),
+          (MOVZX16rr8 (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src1, 
+                                                             GR16_ABCD)),
                                       x86_subreg_8bit))>,
       Requires<[In32BitMode]>;
 
@@ -3980,11 +3974,13 @@ def : Pat<(and GR16:$src1, 0xff),
 def : Pat<(sext_inreg GR32:$src, i16),
           (MOVSX32rr16 (EXTRACT_SUBREG GR32:$src, x86_subreg_16bit))>;
 def : Pat<(sext_inreg GR32:$src, i8),
-          (MOVSX32rr8 (EXTRACT_SUBREG (COPY_TO_REGCLASS GR32:$src, GR32_ABCD),
+          (MOVSX32rr8 (EXTRACT_SUBREG (i32 (COPY_TO_REGCLASS GR32:$src, 
+                                                             GR32_ABCD)),
                                       x86_subreg_8bit))>,
       Requires<[In32BitMode]>;
 def : Pat<(sext_inreg GR16:$src, i8),
-          (MOVSX16rr8 (EXTRACT_SUBREG (COPY_TO_REGCLASS GR16:$src, GR16_ABCD),
+          (MOVSX16rr8 (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, 
+                                                             GR16_ABCD)),
                                       x86_subreg_8bit))>,
       Requires<[In32BitMode]>;
 
@@ -3992,40 +3988,40 @@ def : Pat<(sext_inreg GR16:$src, i8),
 def : Pat<(i16 (trunc GR32:$src)),
           (EXTRACT_SUBREG GR32:$src, x86_subreg_16bit)>;
 def : Pat<(i8 (trunc GR32:$src)),
-          (EXTRACT_SUBREG (COPY_TO_REGCLASS GR32:$src, GR32_ABCD),
+          (EXTRACT_SUBREG (i32 (COPY_TO_REGCLASS GR32:$src, GR32_ABCD)),
                           x86_subreg_8bit)>,
       Requires<[In32BitMode]>;
 def : Pat<(i8 (trunc GR16:$src)),
-          (EXTRACT_SUBREG (COPY_TO_REGCLASS GR16:$src, GR16_ABCD),
+          (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, GR16_ABCD)),
                           x86_subreg_8bit)>,
       Requires<[In32BitMode]>;
 
 // h-register tricks
 def : Pat<(i8 (trunc (srl_su GR16:$src, (i8 8)))),
-          (EXTRACT_SUBREG (COPY_TO_REGCLASS GR16:$src, GR16_ABCD),
+          (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, GR16_ABCD)),
                           x86_subreg_8bit_hi)>,
       Requires<[In32BitMode]>;
 def : Pat<(i8 (trunc (srl_su GR32:$src, (i8 8)))),
-          (EXTRACT_SUBREG (COPY_TO_REGCLASS GR32:$src, GR32_ABCD),
+          (EXTRACT_SUBREG (i32 (COPY_TO_REGCLASS GR32:$src, GR32_ABCD)),
                           x86_subreg_8bit_hi)>,
       Requires<[In32BitMode]>;
 def : Pat<(srl_su GR16:$src, (i8 8)),
           (EXTRACT_SUBREG
             (MOVZX32rr8
-              (EXTRACT_SUBREG (COPY_TO_REGCLASS GR16:$src, GR16_ABCD),
+              (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, GR16_ABCD)),
                               x86_subreg_8bit_hi)),
             x86_subreg_16bit)>,
       Requires<[In32BitMode]>;
 def : Pat<(i32 (zext (srl_su GR16:$src, (i8 8)))),
-          (MOVZX32rr8 (EXTRACT_SUBREG (COPY_TO_REGCLASS GR16:$src, GR16_ABCD),
+          (MOVZX32rr8 (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, GR16_ABCD)),
                                       x86_subreg_8bit_hi))>,
       Requires<[In32BitMode]>;
 def : Pat<(i32 (anyext (srl_su GR16:$src, (i8 8)))),
-          (MOVZX32rr8 (EXTRACT_SUBREG (COPY_TO_REGCLASS GR16:$src, GR16_ABCD),
+          (MOVZX32rr8 (EXTRACT_SUBREG (i16 (COPY_TO_REGCLASS GR16:$src, GR16_ABCD)),
                                       x86_subreg_8bit_hi))>,
       Requires<[In32BitMode]>;
 def : Pat<(and (srl_su GR32:$src, (i8 8)), (i32 255)),
-          (MOVZX32rr8 (EXTRACT_SUBREG (COPY_TO_REGCLASS GR32:$src, GR32_ABCD),
+          (MOVZX32rr8 (EXTRACT_SUBREG (i32 (COPY_TO_REGCLASS GR32:$src, GR32_ABCD)),
                                       x86_subreg_8bit_hi))>,
       Requires<[In32BitMode]>;
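
These patterns exploit the fact that bits 8-15 of eax, ebx, ecx, and edx are
directly addressable as the ah, bh, ch, and dh subregisters, so an
expression of the following shape needs no shift at all, just a read of the
high-byte subregister:

    // Compiles to a single high-byte move (e.g. movzbl %ah-style access)
    // when the source lives in one of the ABCD registers.
    static unsigned highByte(unsigned x) { return (x >> 8) & 0xff; }
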
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrMMX.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrMMX.td
index ce76b4e..500785b 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrMMX.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrMMX.td
@@ -706,10 +706,9 @@ def : Pat<(v2i32 (X86pcmpgtd VR64:$src1, VR64:$src2)),
 def : Pat<(v2i32 (X86pcmpgtd VR64:$src1, (bitconvert (load_mmx addr:$src2)))),
           (MMX_PCMPGTDrm VR64:$src1, addr:$src2)>;
 
-// CMOV* - Used to implement the SELECT DAG operation.  Expanded by the
-// scheduler into a branch sequence.
-// These are expanded by the scheduler.
-let Uses = [EFLAGS], usesCustomDAGSchedInserter = 1 in {
+// CMOV* - Used to implement the SELECT DAG operation.  Expanded after
+// instruction selection into a branch sequence.
+let Uses = [EFLAGS], usesCustomInserter = 1 in {
   def CMOV_V1I64 : I<0, Pseudo,
                     (outs VR64:$dst), (ins VR64:$t, VR64:$f, i8imm:$cond),
                     "#CMOV_V1I64 PSEUDO!",
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td
index 96fc932..dfdd4ce 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td
@@ -174,7 +174,8 @@ def fp32imm0 : PatLeaf<(f32 fpimm), [{
   return N->isExactlyValue(+0.0);
 }]>;
 
-def PSxLDQ_imm  : SDNodeXForm<imm, [{
+// BYTE_imm - Transform bit immediates into byte immediates.
+def BYTE_imm  : SDNodeXForm<imm, [{
   // Transformation function: imm >> 3
   return getI32Imm(N->getZExtValue() >> 3);
 }]>;
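Aside (not part of the patch): PSLLDQ/PSRLDQ shift by whole bytes, while the int_x86_sse2_psll_dq / int_x86_sse2_psrl_dq intrinsics, unlike the _bs variants matched further down, express the amount in bits, so the xform divides by eight. A tiny compile-time check of that arithmetic:

  #include <cstdint>

  // Mirrors the xform body: getI32Imm(N->getZExtValue() >> 3).
  constexpr uint32_t byteImm(uint32_t bitImm) { return bitImm >> 3; }

  static_assert(byteImm(64) == 8, "a 64-bit shift moves 8 bytes");
  static_assert(byteImm(128) == 16, "a full XMM register is 16 bytes");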
@@ -197,6 +198,12 @@ def SHUFFLE_get_pshuflw_imm : SDNodeXForm<vector_shuffle, [{
   return getI8Imm(X86::getShufflePSHUFLWImmediate(N));
 }]>;
 
+// SHUFFLE_get_palign_imm xform function: convert vector_shuffle mask to
+// a PALIGNR imm.
+def SHUFFLE_get_palign_imm : SDNodeXForm<vector_shuffle, [{
+  return getI8Imm(X86::getShufflePALIGNRImmediate(N));
+}]>;
+
 def splat_lo : PatFrag<(ops node:$lhs, node:$rhs),
                        (vector_shuffle node:$lhs, node:$rhs), [{
   ShuffleVectorSDNode *SVOp = cast<ShuffleVectorSDNode>(N);
@@ -218,9 +225,9 @@ def movhlps_undef : PatFrag<(ops node:$lhs, node:$rhs),
   return X86::isMOVHLPS_v_undef_Mask(cast<ShuffleVectorSDNode>(N));
 }]>;
 
-def movhp : PatFrag<(ops node:$lhs, node:$rhs),
-                    (vector_shuffle node:$lhs, node:$rhs), [{
-  return X86::isMOVHPMask(cast<ShuffleVectorSDNode>(N));
+def movlhps : PatFrag<(ops node:$lhs, node:$rhs),
+                      (vector_shuffle node:$lhs, node:$rhs), [{
+  return X86::isMOVLHPSMask(cast<ShuffleVectorSDNode>(N));
 }]>;
 
 def movlp : PatFrag<(ops node:$lhs, node:$rhs),
@@ -283,14 +290,18 @@ def pshuflw : PatFrag<(ops node:$lhs, node:$rhs),
   return X86::isPSHUFLWMask(cast<ShuffleVectorSDNode>(N));
 }], SHUFFLE_get_pshuflw_imm>;
 
+def palign : PatFrag<(ops node:$lhs, node:$rhs),
+                     (vector_shuffle node:$lhs, node:$rhs), [{
+  return X86::isPALIGNRMask(cast<ShuffleVectorSDNode>(N));
+}], SHUFFLE_get_palign_imm>;
+
 //===----------------------------------------------------------------------===//
 // SSE scalar FP Instructions
 //===----------------------------------------------------------------------===//
 
-// CMOV* - Used to implement the SSE SELECT DAG operation.  Expanded by the
-// scheduler into a branch sequence.
-// These are expanded by the scheduler.
-let Uses = [EFLAGS], usesCustomDAGSchedInserter = 1 in {
+// CMOV* - Used to implement the SSE SELECT DAG operation.  Expanded after
+// instruction selection into a branch sequence.
+let Uses = [EFLAGS], usesCustomInserter = 1 in {
   def CMOV_FR32 : I<0, Pseudo,
                     (outs FR32:$dst), (ins FR32:$t, FR32:$f, i8imm:$cond),
                     "#CMOV_FR32 PSEUDO!",
@@ -486,7 +497,7 @@ def FsMOVAPSrr : PSI<0x28, MRMSrcReg, (outs FR32:$dst), (ins FR32:$src),
 
 // Alias instruction to load FR32 from f128mem using movaps. Upper bits are
 // disregarded.
-let canFoldAsLoad = 1 in
+let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1 in
 def FsMOVAPSrm : PSI<0x28, MRMSrcMem, (outs FR32:$dst), (ins f128mem:$src),
                      "movaps\t{$src, $dst|$dst, $src}",
                      [(set FR32:$dst, (alignedloadfsf32 addr:$src))]>;
@@ -695,7 +706,7 @@ def MOVAPSmr : PSI<0x29, MRMDestMem, (outs), (ins f128mem:$dst, VR128:$src),
 let neverHasSideEffects = 1 in
 def MOVUPSrr : PSI<0x10, MRMSrcReg, (outs VR128:$dst), (ins VR128:$src),
                    "movups\t{$src, $dst|$dst, $src}", []>;
-let canFoldAsLoad = 1 in
+let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1 in
 def MOVUPSrm : PSI<0x10, MRMSrcMem, (outs VR128:$dst), (ins f128mem:$src),
                    "movups\t{$src, $dst|$dst, $src}",
                    [(set VR128:$dst, (loadv4f32 addr:$src))]>;
@@ -704,7 +715,7 @@ def MOVUPSmr : PSI<0x11, MRMDestMem, (outs), (ins f128mem:$dst, VR128:$src),
                    [(store (v4f32 VR128:$src), addr:$dst)]>;
 
 // Intrinsic forms of MOVUPS load and store
-let canFoldAsLoad = 1 in
+let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1 in
 def MOVUPSrm_Int : PSI<0x10, MRMSrcMem, (outs VR128:$dst), (ins f128mem:$src),
                        "movups\t{$src, $dst|$dst, $src}",
                        [(set VR128:$dst, (int_x86_sse_loadu_ps addr:$src))]>;
@@ -724,7 +735,7 @@ let Constraints = "$src1 = $dst" in {
                        (outs VR128:$dst), (ins VR128:$src1, f64mem:$src2),
                        "movhps\t{$src2, $dst|$dst, $src2}",
        [(set VR128:$dst,
-         (movhp VR128:$src1,
+         (movlhps VR128:$src1,
                 (bc_v4f32 (v2f64 (scalar_to_vector (loadf64 addr:$src2))))))]>;
   } // AddedComplexity
 } // Constraints = "$src1 = $dst"
@@ -749,7 +760,7 @@ def MOVLHPSrr : PSI<0x16, MRMSrcReg, (outs VR128:$dst),
                                      (ins VR128:$src1, VR128:$src2),
                     "movlhps\t{$src2, $dst|$dst, $src2}",
                     [(set VR128:$dst,
-                      (v4f32 (movhp VR128:$src1, VR128:$src2)))]>;
+                      (v4f32 (movlhps VR128:$src1, VR128:$src2)))]>;
 
 def MOVHLPSrr : PSI<0x12, MRMSrcReg, (outs VR128:$dst),
                                      (ins VR128:$src1, VR128:$src2),
@@ -1245,7 +1256,7 @@ def FsMOVAPDrr : PDI<0x28, MRMSrcReg, (outs FR64:$dst), (ins FR64:$src),
 
 // Alias instruction to load FR64 from f128mem using movapd. Upper bits are
 // disregarded.
-let canFoldAsLoad = 1 in
+let canFoldAsLoad = 1, isReMaterializable = 1, mayHaveSideEffects = 1 in
 def FsMOVAPDrm : PDI<0x28, MRMSrcMem, (outs FR64:$dst), (ins f128mem:$src),
                      "movapd\t{$src, $dst|$dst, $src}",
                      [(set FR64:$dst, (alignedloadfsf64 addr:$src))]>;
@@ -1483,7 +1494,7 @@ let Constraints = "$src1 = $dst" in {
                        (outs VR128:$dst), (ins VR128:$src1, f64mem:$src2),
                        "movhpd\t{$src2, $dst|$dst, $src2}",
                        [(set VR128:$dst,
-                         (v2f64 (movhp VR128:$src1,
+                         (v2f64 (movlhps VR128:$src1,
                                  (scalar_to_vector (loadf64 addr:$src2)))))]>;
   } // AddedComplexity
 } // Constraints = "$src1 = $dst"
@@ -1985,21 +1996,21 @@ let Constraints = "$src1 = $dst", neverHasSideEffects = 1 in {
 
 let Predicates = [HasSSE2] in {
   def : Pat<(int_x86_sse2_psll_dq VR128:$src1, imm:$src2),
-            (v2i64 (PSLLDQri VR128:$src1, (PSxLDQ_imm imm:$src2)))>;
+            (v2i64 (PSLLDQri VR128:$src1, (BYTE_imm imm:$src2)))>;
   def : Pat<(int_x86_sse2_psrl_dq VR128:$src1, imm:$src2),
-            (v2i64 (PSRLDQri VR128:$src1, (PSxLDQ_imm imm:$src2)))>;
+            (v2i64 (PSRLDQri VR128:$src1, (BYTE_imm imm:$src2)))>;
   def : Pat<(int_x86_sse2_psll_dq_bs VR128:$src1, imm:$src2),
             (v2i64 (PSLLDQri VR128:$src1, imm:$src2))>;
   def : Pat<(int_x86_sse2_psrl_dq_bs VR128:$src1, imm:$src2),
             (v2i64 (PSRLDQri VR128:$src1, imm:$src2))>;
   def : Pat<(v2f64 (X86fsrl VR128:$src1, i32immSExt8:$src2)),
-            (v2f64 (PSRLDQri VR128:$src1, (PSxLDQ_imm imm:$src2)))>;
+            (v2f64 (PSRLDQri VR128:$src1, (BYTE_imm imm:$src2)))>;
 
   // Shift up / down and insert zeros.
   def : Pat<(v2i64 (X86vshl  VR128:$src, (i8 imm:$amt))),
-            (v2i64 (PSLLDQri VR128:$src, (PSxLDQ_imm imm:$amt)))>;
+            (v2i64 (PSLLDQri VR128:$src, (BYTE_imm imm:$amt)))>;
   def : Pat<(v2i64 (X86vshr  VR128:$src, (i8 imm:$amt))),
-            (v2i64 (PSRLDQri VR128:$src, (PSxLDQ_imm imm:$amt)))>;
+            (v2i64 (PSRLDQri VR128:$src, (BYTE_imm imm:$amt)))>;
 }
 
 // Logical
@@ -2062,6 +2073,7 @@ defm PACKSSDW : PDI_binop_rm_int<0x6B, "packssdw", int_x86_sse2_packssdw_128>;
 defm PACKUSWB : PDI_binop_rm_int<0x67, "packuswb", int_x86_sse2_packuswb_128>;
 
 // Shuffle and unpack instructions
+let AddedComplexity = 5 in {
 def PSHUFDri : PDIi8<0x70, MRMSrcReg,
                      (outs VR128:$dst), (ins VR128:$src1, i8imm:$src2),
                      "pshufd\t{$src2, $src1, $dst|$dst, $src1, $src2}",
@@ -2073,6 +2085,7 @@ def PSHUFDmi : PDIi8<0x70, MRMSrcMem,
                      [(set VR128:$dst, (v4i32 (pshufd:$src2
                                              (bc_v4i32(memopv2i64 addr:$src1)),
                                              (undef))))]>;
+}
 
 // SSE2 with ImmT == Imm8 and XS prefix.
 def PSHUFHWri : Ii8<0x70, MRMSrcReg,
@@ -2807,36 +2820,60 @@ defm PSIGND      : SS3I_binop_rm_int_32<0x0A, "psignd",
 
 let Constraints = "$src1 = $dst" in {
   def PALIGNR64rr  : SS3AI<0x0F, MRMSrcReg, (outs VR64:$dst),
-                           (ins VR64:$src1, VR64:$src2, i16imm:$src3),
+                           (ins VR64:$src1, VR64:$src2, i8imm:$src3),
                            "palignr\t{$src3, $src2, $dst|$dst, $src2, $src3}",
-                           [(set VR64:$dst,
-                             (int_x86_ssse3_palign_r
-                              VR64:$src1, VR64:$src2,
-                              imm:$src3))]>;
+                           []>;
   def PALIGNR64rm  : SS3AI<0x0F, MRMSrcMem, (outs VR64:$dst),
-                           (ins VR64:$src1, i64mem:$src2, i16imm:$src3),
+                           (ins VR64:$src1, i64mem:$src2, i8imm:$src3),
                            "palignr\t{$src3, $src2, $dst|$dst, $src2, $src3}",
-                           [(set VR64:$dst,
-                             (int_x86_ssse3_palign_r
-                              VR64:$src1,
-                              (bitconvert (memopv2i32 addr:$src2)),
-                              imm:$src3))]>;
+                           []>;
 
   def PALIGNR128rr : SS3AI<0x0F, MRMSrcReg, (outs VR128:$dst),
-                           (ins VR128:$src1, VR128:$src2, i32imm:$src3),
+                           (ins VR128:$src1, VR128:$src2, i8imm:$src3),
                            "palignr\t{$src3, $src2, $dst|$dst, $src2, $src3}",
-                           [(set VR128:$dst,
-                             (int_x86_ssse3_palign_r_128
-                              VR128:$src1, VR128:$src2,
-                              imm:$src3))]>, OpSize;
+                           []>, OpSize;
   def PALIGNR128rm : SS3AI<0x0F, MRMSrcMem, (outs VR128:$dst),
-                           (ins VR128:$src1, i128mem:$src2, i32imm:$src3),
+                           (ins VR128:$src1, i128mem:$src2, i8imm:$src3),
                            "palignr\t{$src3, $src2, $dst|$dst, $src2, $src3}",
-                           [(set VR128:$dst,
-                             (int_x86_ssse3_palign_r_128
-                              VR128:$src1,
-                              (bitconvert (memopv4i32 addr:$src2)),
-                              imm:$src3))]>, OpSize;
+                           []>, OpSize;
+}
+
+// palignr patterns.
+def : Pat<(int_x86_ssse3_palign_r VR64:$src1, VR64:$src2, (i8 imm:$src3)),
+          (PALIGNR64rr VR64:$src1, VR64:$src2, (BYTE_imm imm:$src3))>,
+          Requires<[HasSSSE3]>;
+def : Pat<(int_x86_ssse3_palign_r VR64:$src1,
+                                      (memop64 addr:$src2),
+                                      (i8 imm:$src3)),
+          (PALIGNR64rm VR64:$src1, addr:$src2, (BYTE_imm imm:$src3))>,
+          Requires<[HasSSSE3]>;
+
+def : Pat<(int_x86_ssse3_palign_r_128 VR128:$src1, VR128:$src2, (i8 imm:$src3)),
+          (PALIGNR128rr VR128:$src1, VR128:$src2, (BYTE_imm imm:$src3))>,
+          Requires<[HasSSSE3]>;
+def : Pat<(int_x86_ssse3_palign_r_128 VR128:$src1,
+                                      (memopv2i64 addr:$src2),
+                                      (i8 imm:$src3)),
+          (PALIGNR128rm VR128:$src1, addr:$src2, (BYTE_imm imm:$src3))>,
+          Requires<[HasSSSE3]>;
+
+let AddedComplexity = 5 in {
+def : Pat<(v4i32 (palign:$src3 VR128:$src1, VR128:$src2)),
+          (PALIGNR128rr VR128:$src2, VR128:$src1,
+                        (SHUFFLE_get_palign_imm VR128:$src3))>,
+      Requires<[HasSSSE3]>;
+def : Pat<(v4f32 (palign:$src3 VR128:$src1, VR128:$src2)),
+          (PALIGNR128rr VR128:$src2, VR128:$src1,
+                        (SHUFFLE_get_palign_imm VR128:$src3))>,
+      Requires<[HasSSSE3]>;
+def : Pat<(v8i16 (palign:$src3 VR128:$src1, VR128:$src2)),
+          (PALIGNR128rr VR128:$src2, VR128:$src1,
+                        (SHUFFLE_get_palign_imm VR128:$src3))>,
+      Requires<[HasSSSE3]>;
+def : Pat<(v16i8 (palign:$src3 VR128:$src1, VR128:$src2)),
+          (PALIGNR128rr VR128:$src2, VR128:$src1,
+                        (SHUFFLE_get_palign_imm VR128:$src3))>,
+      Requires<[HasSSSE3]>;
 }
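For reference, a behavioral model of PALIGNR (a sketch, not LLVM code): the destination and source are concatenated with the destination in the high half, and a 16-byte window is taken imm bytes up from the low end. That operand order is why the patterns above pass $src2 before $src1:

  #include <array>
  #include <cstdint>

  typedef std::array<uint8_t, 16> V16;

  // PALIGNR dst, src, imm: bytes of {dst:src} shifted right by imm bytes;
  // the low 16 bytes of the 32-byte intermediate are the result.
  V16 palignr(const V16 &dst, const V16 &src, unsigned imm) {
    uint8_t cat[32];
    for (int i = 0; i < 16; ++i) { cat[i] = src[i]; cat[16 + i] = dst[i]; }
    V16 out = {};
    for (unsigned i = 0; i < 16; ++i)
      out[i] = (imm + i < 32) ? cat[imm + i] : 0;
    return out;
  }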
 
 def : Pat<(X86pshufb VR128:$src, VR128:$mask),
@@ -2998,7 +3035,7 @@ def : Pat<(v4i32 (unpckh_undef VR128:$src, (undef))),
 
 let AddedComplexity = 20 in {
 // vector_shuffle v1, v2 <0, 1, 4, 5> using MOVLHPS
-def : Pat<(v4i32 (movhp VR128:$src1, VR128:$src2)),
+def : Pat<(v4i32 (movlhps VR128:$src1, VR128:$src2)),
           (MOVLHPSrr VR128:$src1, VR128:$src2)>;
 
 // vector_shuffle v1, v2 <6, 7, 2, 3> using MOVHLPS
@@ -3014,48 +3051,26 @@ def : Pat<(v4i32 (movhlps_undef VR128:$src1, (undef))),
 
 let AddedComplexity = 20 in {
 // vector_shuffle v1, (load v2) <4, 5, 2, 3> using MOVLPS
-// vector_shuffle v1, (load v2) <0, 1, 4, 5> using MOVHPS
 def : Pat<(v4f32 (movlp VR128:$src1, (load addr:$src2))),
           (MOVLPSrm VR128:$src1, addr:$src2)>, Requires<[HasSSE1]>;
 def : Pat<(v2f64 (movlp VR128:$src1, (load addr:$src2))),
           (MOVLPDrm VR128:$src1, addr:$src2)>, Requires<[HasSSE2]>;
-def : Pat<(v4f32 (movhp VR128:$src1, (load addr:$src2))),
-          (MOVHPSrm VR128:$src1, addr:$src2)>, Requires<[HasSSE1]>;
-def : Pat<(v2f64 (movhp VR128:$src1, (load addr:$src2))),
-          (MOVHPDrm VR128:$src1, addr:$src2)>, Requires<[HasSSE2]>;
-
 def : Pat<(v4i32 (movlp VR128:$src1, (load addr:$src2))),
           (MOVLPSrm VR128:$src1, addr:$src2)>, Requires<[HasSSE2]>;
 def : Pat<(v2i64 (movlp VR128:$src1, (load addr:$src2))),
           (MOVLPDrm VR128:$src1, addr:$src2)>, Requires<[HasSSE2]>;
-def : Pat<(v4i32 (movhp VR128:$src1, (load addr:$src2))),
-          (MOVHPSrm VR128:$src1, addr:$src2)>, Requires<[HasSSE1]>;
-def : Pat<(v2i64 (movhp VR128:$src1, (load addr:$src2))),
-          (MOVHPDrm VR128:$src1, addr:$src2)>, Requires<[HasSSE2]>;
 }
 
 // (store (vector_shuffle (load addr), v2, <4, 5, 2, 3>), addr) using MOVLPS
-// (store (vector_shuffle (load addr), v2, <0, 1, 4, 5>), addr) using MOVHPS
 def : Pat<(store (v4f32 (movlp (load addr:$src1), VR128:$src2)), addr:$src1),
           (MOVLPSmr addr:$src1, VR128:$src2)>, Requires<[HasSSE1]>;
 def : Pat<(store (v2f64 (movlp (load addr:$src1), VR128:$src2)), addr:$src1),
           (MOVLPDmr addr:$src1, VR128:$src2)>, Requires<[HasSSE2]>;
-def : Pat<(store (v4f32 (movhp (load addr:$src1), VR128:$src2)), addr:$src1),
-          (MOVHPSmr addr:$src1, VR128:$src2)>, Requires<[HasSSE1]>;
-def : Pat<(store (v2f64 (movhp (load addr:$src1), VR128:$src2)), addr:$src1),
-          (MOVHPDmr addr:$src1, VR128:$src2)>, Requires<[HasSSE2]>;
-
 def : Pat<(store (v4i32 (movlp (bc_v4i32 (loadv2i64 addr:$src1)), VR128:$src2)),
                  addr:$src1),
           (MOVLPSmr addr:$src1, VR128:$src2)>, Requires<[HasSSE1]>;
 def : Pat<(store (v2i64 (movlp (load addr:$src1), VR128:$src2)), addr:$src1),
           (MOVLPDmr addr:$src1, VR128:$src2)>, Requires<[HasSSE2]>;
-def : Pat<(store (v4i32 (movhp (bc_v4i32 (loadv2i64 addr:$src1)), VR128:$src2)),
-                 addr:$src1),
-          (MOVHPSmr addr:$src1, VR128:$src2)>, Requires<[HasSSE1]>;
-def : Pat<(store (v2i64 (movhp (load addr:$src1), VR128:$src2)), addr:$src1),
-          (MOVHPDmr addr:$src1, VR128:$src2)>, Requires<[HasSSE2]>;
-
 
 let AddedComplexity = 15 in {
 // Setting the lowest element in the vector.
@@ -3769,7 +3784,7 @@ let Constraints = "$src1 = $dst" in {
 }
 
 // String/text processing instructions.
-let Defs = [EFLAGS], usesCustomDAGSchedInserter = 1 in {
+let Defs = [EFLAGS], usesCustomInserter = 1 in {
 def PCMPISTRM128REG : SS42AI<0, Pseudo, (outs VR128:$dst),
 			(ins VR128:$src1, VR128:$src2, i8imm:$src3),
 		    "#PCMPISTRM128rr PSEUDO!",
@@ -3797,7 +3812,7 @@ def PCMPISTRM128rm : SS42AI<0x62, MRMSrcMem, (outs),
 }
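Note (a sketch, not from the patch): usesCustomInserter, the new name for usesCustomDAGSchedInserter, flags pseudos that survive instruction selection and are lowered later by the target's EmitInstrWithCustomInserter hook; the rename reflects that the expansion happens after ISel rather than in the scheduler. For the CMOV_* pseudos changed earlier in this patch, that hook emits a branch diamond, roughly:

  // Hypothetical illustration only: select between $t and $f on a
  // condition already computed into EFLAGS.
  template <typename VecT>
  VecT cmovPseudo(bool eflagsCond, VecT t, VecT f) {
    // Lowered as: jCC trueBB; copy f; jmp doneBB; trueBB: copy t;
    // doneBB: PHI of the two copies.
    return eflagsCond ? t : f;
  }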
 
 let Defs = [EFLAGS], Uses = [EAX, EDX],
-	usesCustomDAGSchedInserter = 1 in {
+	usesCustomInserter = 1 in {
 def PCMPESTRM128REG : SS42AI<0, Pseudo, (outs VR128:$dst),
 			(ins VR128:$src1, VR128:$src3, i8imm:$src5),
 		    "#PCMPESTRM128rr PSEUDO!",
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.cpp b/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.cpp
index 62ca47f..ce06f0f 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.cpp
@@ -367,8 +367,9 @@ X86CompilationCallback2(intptr_t *StackPtr, intptr_t RetAddr) {
   // Rewrite the call target... so that we don't end up here every time we
   // execute the call.
 #if defined (X86_64_JIT)
-  if (!isStub)
-    *(intptr_t *)(RetAddr - 0xa) = NewVal;
+  assert(isStub &&
+         "X86-64 doesn't support rewriting non-stub lazy compilation calls:"
+         " the call instruction varies too much.");
 #else
   *(intptr_t *)RetAddr = (intptr_t)(NewVal-RetAddr-4);
 #endif
@@ -425,83 +426,77 @@ X86JITInfo::X86JITInfo(X86TargetMachine &tm) : TM(tm) {
 
 void *X86JITInfo::emitGlobalValueIndirectSym(const GlobalValue* GV, void *ptr,
                                              JITCodeEmitter &JCE) {
+  MachineCodeEmitter::BufferState BS;
 #if defined (X86_64_JIT)
-  JCE.startGVStub(GV, 8, 8);
+  JCE.startGVStub(BS, GV, 8, 8);
   JCE.emitWordLE((unsigned)(intptr_t)ptr);
   JCE.emitWordLE((unsigned)(((intptr_t)ptr) >> 32));
 #else
-  JCE.startGVStub(GV, 4, 4);
+  JCE.startGVStub(BS, GV, 4, 4);
   JCE.emitWordLE((intptr_t)ptr);
 #endif
-  return JCE.finishGVStub(GV);
+  return JCE.finishGVStub(BS);
 }
 
-void *X86JITInfo::emitFunctionStub(const Function* F, void *Fn,
+TargetJITInfo::StubLayout X86JITInfo::getStubLayout() {
+  // The 64-bit stub contains:
+  //   movabs r10 <- 8-byte-target-address  # 10 bytes
+  //   call|jmp *r10  # 3 bytes
+  // The 32-bit stub contains a 5-byte call|jmp.
+  // If the stub is a call to the compilation callback, an extra byte is added
+  // to mark it as a stub.
+  StubLayout Result = {14, 4};
+  return Result;
+}
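The {14, 4} is just the byte count of the larger (64-bit, compilation-callback) stub; a quick arithmetic check under the layout described in the comment above:

  #include <cassert>

  int main() {
    const int MovAbs = 2 + 8;      // REX.W + B8+r opcode, then an imm64
    const int IndirectBranch = 3;  // REX, 0xFF, ModRM for call/jmp *r10
    const int Marker = 1;          // trailing 0xCE on callback stubs
    assert(MovAbs + IndirectBranch + Marker == 14);  // StubLayout{14, 4}
    return 0;
  }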
+
+void *X86JITInfo::emitFunctionStub(const Function* F, void *Target,
                                    JITCodeEmitter &JCE) {
+  MachineCodeEmitter::BufferState BS;
   // Note, we cast to intptr_t here to silence a -pedantic warning that 
   // complains about casting a function pointer to a normal pointer.
 #if defined (X86_32_JIT) && !defined (_MSC_VER)
-  bool NotCC = (Fn != (void*)(intptr_t)X86CompilationCallback &&
-                Fn != (void*)(intptr_t)X86CompilationCallback_SSE);
+  bool NotCC = (Target != (void*)(intptr_t)X86CompilationCallback &&
+                Target != (void*)(intptr_t)X86CompilationCallback_SSE);
 #else
-  bool NotCC = Fn != (void*)(intptr_t)X86CompilationCallback;
+  bool NotCC = Target != (void*)(intptr_t)X86CompilationCallback;
 #endif
+  JCE.emitAlignment(4);
+  void *Result = (void*)JCE.getCurrentPCValue();
   if (NotCC) {
 #if defined (X86_64_JIT)
-    JCE.startGVStub(F, 13, 4);
     JCE.emitByte(0x49);          // REX prefix
     JCE.emitByte(0xB8+2);        // movabsq r10
-    JCE.emitWordLE((unsigned)(intptr_t)Fn);
-    JCE.emitWordLE((unsigned)(((intptr_t)Fn) >> 32));
+    JCE.emitWordLE((unsigned)(intptr_t)Target);
+    JCE.emitWordLE((unsigned)(((intptr_t)Target) >> 32));
     JCE.emitByte(0x41);          // REX prefix
     JCE.emitByte(0xFF);          // jmpq *r10
     JCE.emitByte(2 | (4 << 3) | (3 << 6));
 #else
-    JCE.startGVStub(F, 5, 4);
     JCE.emitByte(0xE9);
-    JCE.emitWordLE((intptr_t)Fn-JCE.getCurrentPCValue()-4);
+    JCE.emitWordLE((intptr_t)Target-JCE.getCurrentPCValue()-4);
 #endif
-    return JCE.finishGVStub(F);
+    return Result;
   }
 
 #if defined (X86_64_JIT)
-  JCE.startGVStub(F, 14, 4);
   JCE.emitByte(0x49);          // REX prefix
   JCE.emitByte(0xB8+2);        // movabsq r10
-  JCE.emitWordLE((unsigned)(intptr_t)Fn);
-  JCE.emitWordLE((unsigned)(((intptr_t)Fn) >> 32));
+  JCE.emitWordLE((unsigned)(intptr_t)Target);
+  JCE.emitWordLE((unsigned)(((intptr_t)Target) >> 32));
   JCE.emitByte(0x41);          // REX prefix
   JCE.emitByte(0xFF);          // callq *r10
   JCE.emitByte(2 | (2 << 3) | (3 << 6));
 #else
-  JCE.startGVStub(F, 6, 4);
   JCE.emitByte(0xE8);   // Call with 32 bit pc-rel destination...
 
-  JCE.emitWordLE((intptr_t)Fn-JCE.getCurrentPCValue()-4);
+  JCE.emitWordLE((intptr_t)Target-JCE.getCurrentPCValue()-4);
 #endif
 
   // This used to use 0xCD, but that value is used by JITMemoryManager to
   // initialize the buffer with garbage, which means it may follow a
   // noreturn function call, confusing X86CompilationCallback2.  PR 4929.
   JCE.emitByte(0xCE);   // Interrupt - Just a marker identifying the stub!
-  return JCE.finishGVStub(F);
-}
-
-void X86JITInfo::emitFunctionStubAtAddr(const Function* F, void *Fn, void *Stub,
-                                        JITCodeEmitter &JCE) {
-  // Note, we cast to intptr_t here to silence a -pedantic warning that 
-  // complains about casting a function pointer to a normal pointer.
-  JCE.startGVStub(F, Stub, 5);
-  JCE.emitByte(0xE9);
-#if defined (X86_64_JIT) && !defined (NDEBUG)
-  // Yes, we need both of these casts, or some broken versions of GCC (4.2.4)
-  // get the signed-ness of the expression wrong.  Go figure.
-  intptr_t Displacement = (intptr_t)Fn - (intptr_t)JCE.getCurrentPCValue() - 5;
-  assert(((Displacement << 32) >> 32) == Displacement
-         && "PIC displacement does not fit in displacement field!");
-#endif
-  JCE.emitWordLE((intptr_t)Fn-JCE.getCurrentPCValue()-4);
-  JCE.finishGVStub(F);
+  return Result;
 }
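On the "- 4" in the 32-bit paths above (a sketch, not from the patch): E8/E9 take a rel32 measured from the next instruction, and getCurrentPCValue() is sampled at the start of the 4-byte displacement field, so the field's own size must be subtracted:

  #include <cstdint>

  // fieldAddr is where the 4-byte displacement is written; the CPU
  // resolves the branch relative to fieldAddr + 4.
  int32_t pcRelDisp32(intptr_t target, intptr_t fieldAddr) {
    return static_cast<int32_t>(target - fieldAddr - 4);
  }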
 
 /// getPICJumpTableEntry - Returns the value of the jumptable entry for the
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.h b/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.h
index c381433..238420c 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.h
@@ -43,18 +43,16 @@ namespace llvm {
     virtual void *emitGlobalValueIndirectSym(const GlobalValue* GV, void *ptr,
                                              JITCodeEmitter &JCE);
 
+    // getStubLayout - Returns the size and alignment of the largest call stub
+    // on X86.
+    virtual StubLayout getStubLayout();
+
     /// emitFunctionStub - Use the specified JITCodeEmitter object to emit a
     /// small native function that simply calls the function at the specified
     /// address.
-    virtual void *emitFunctionStub(const Function* F, void *Fn,
+    virtual void *emitFunctionStub(const Function* F, void *Target,
                                    JITCodeEmitter &JCE);
 
-    /// emitFunctionStubAtAddr - Use the specified JITCodeEmitter object to
-    /// emit a small native function that simply calls Fn. Emit the stub into
-    /// the supplied buffer.
-    virtual void emitFunctionStubAtAddr(const Function* F, void *Fn,
-                                        void *Buffer, JITCodeEmitter &JCE);
-
     /// getPICJumpTableEntry - Returns the value of the jumptable entry for the
     /// specific basic block.
     virtual uintptr_t getPICJumpTableEntry(uintptr_t BB, uintptr_t JTBase);
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.cpp b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.cpp
index 64bd97e..f577fcf 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.cpp
@@ -38,7 +38,6 @@
 #include "llvm/ADT/BitVector.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 using namespace llvm;
 
@@ -393,6 +392,11 @@ BitVector X86RegisterInfo::getReservedRegs(const MachineFunction &MF) const {
   Reserved.set(X86::SP);
   Reserved.set(X86::SPL);
 
+  // Set the instruction pointer register and its aliases as reserved.
+  Reserved.set(X86::RIP);
+  Reserved.set(X86::EIP);
+  Reserved.set(X86::IP);
+
   // Set the frame-pointer register and its aliases as reserved if needed.
   if (hasFP(MF)) {
     Reserved.set(X86::RBP);
@@ -451,12 +455,17 @@ bool X86RegisterInfo::hasFP(const MachineFunction &MF) const {
 
 bool X86RegisterInfo::needsStackRealignment(const MachineFunction &MF) const {
   const MachineFrameInfo *MFI = MF.getFrameInfo();
+  bool requiresRealignment =
+    RealignStack && (MFI->getMaxAlignment() > StackAlign);
 
   // FIXME: Currently we don't support stack realignment for functions with
-  //        variable-sized allocas
-  return (RealignStack &&
-          (MFI->getMaxAlignment() > StackAlign &&
-           !MFI->hasVarSizedObjects()));
+  //        variable-sized allocas.
+  // FIXME: Temporarily disable the error - it seems to be too conservative.
+  if (0 && requiresRealignment && MFI->hasVarSizedObjects())
+    llvm_report_error(
+      "Stack realignment in presence of dynamic allocas is not supported");
+
+  return (requiresRealignment && !MFI->hasVarSizedObjects());
 }
 
 bool X86RegisterInfo::hasReservedCallFrame(MachineFunction &MF) const {
@@ -579,8 +588,10 @@ eliminateCallFramePseudoInstr(MachineFunction &MF, MachineBasicBlock &MBB,
   MBB.erase(I);
 }
 
-void X86RegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
-                                          int SPAdj, RegScavenger *RS) const{
+unsigned
+X86RegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
+                                     int SPAdj, int *Value,
+                                     RegScavenger *RS) const{
   assert(SPAdj == 0 && "Unexpected");
 
   unsigned i = 0;
@@ -609,14 +620,15 @@ void X86RegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
     // Offset is a 32-bit integer.
     int Offset = getFrameIndexOffset(MF, FrameIndex) +
       (int)(MI.getOperand(i + 3).getImm());
-  
-     MI.getOperand(i + 3).ChangeToImmediate(Offset);
+
+    MI.getOperand(i + 3).ChangeToImmediate(Offset);
   } else {
     // Offset is symbolic. This is extremely rare.
     uint64_t Offset = getFrameIndexOffset(MF, FrameIndex) +
                       (uint64_t)MI.getOperand(i+3).getOffset();
     MI.getOperand(i+3).setOffset(Offset);
   }
+  return 0;
 }
 
 void
@@ -645,7 +657,8 @@ X86RegisterInfo::processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
     //   }
     //   [EBP]
     MFI->CreateFixedObject(-TailCallReturnAddrDelta,
-                           (-1U*SlotSize)+TailCallReturnAddrDelta);
+                           (-1U*SlotSize)+TailCallReturnAddrDelta,
+                           true, false);
   }
 
   if (hasFP(MF)) {
@@ -657,7 +670,8 @@ X86RegisterInfo::processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
     int FrameIdx = MFI->CreateFixedObject(SlotSize,
                                           -(int)SlotSize +
                                           TFI.getOffsetOfLocalArea() +
-                                          TailCallReturnAddrDelta);
+                                          TailCallReturnAddrDelta,
+                                          true, false);
     assert(FrameIdx == MFI->getObjectIndexBegin() &&
            "Slot for EBP register must be last in order to be found!");
     FrameIdx = 0;
@@ -1269,7 +1283,7 @@ unsigned X86RegisterInfo::getRARegister() const {
                  : X86::EIP;    // Should have dwarf #8.
 }
 
-unsigned X86RegisterInfo::getFrameRegister(MachineFunction &MF) const {
+unsigned X86RegisterInfo::getFrameRegister(const MachineFunction &MF) const {
   return hasFP(MF) ? FramePtr : StackPtr;
 }
 
@@ -1470,7 +1484,7 @@ unsigned getX86SubSuperRegister(unsigned Reg, EVT VT, bool High) {
 #include "X86GenRegisterInfo.inc"
 
 namespace {
-  struct VISIBILITY_HIDDEN MSAC : public MachineFunctionPass {
+  struct MSAC : public MachineFunctionPass {
     static char ID;
     MSAC() : MachineFunctionPass(&ID) {}
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.h b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.h
index c89a57c..f281a3c 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.h
@@ -139,8 +139,9 @@ public:
                                      MachineBasicBlock &MBB,
                                      MachineBasicBlock::iterator MI) const;
 
-  void eliminateFrameIndex(MachineBasicBlock::iterator MI,
-                           int SPAdj, RegScavenger *RS = NULL) const;
+  unsigned eliminateFrameIndex(MachineBasicBlock::iterator MI,
+                               int SPAdj, int *Value = NULL,
+                               RegScavenger *RS = NULL) const;
 
   void processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
                                             RegScavenger *RS = NULL) const;
@@ -152,7 +153,7 @@ public:
 
   // Debug information queries.
   unsigned getRARegister() const;
-  unsigned getFrameRegister(MachineFunction &MF) const;
+  unsigned getFrameRegister(const MachineFunction &MF) const;
   int getFrameIndexOffset(MachineFunction &MF, int FI) const;
   void getInitialFrameState(std::vector<MachineMove> &Moves) const;
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.td b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.td
index 469a3d8..7bf074d 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.td
@@ -555,7 +555,7 @@ def GR32_NOREX : RegisterClass<"X86", [i32], 32,
 }
 // GR64_NOREX - GR64 registers which do not require a REX prefix.
 def GR64_NOREX : RegisterClass<"X86", [i64], 64,
-                               [RAX, RCX, RDX, RSI, RDI, RBX, RBP, RSP]> {
+                               [RAX, RCX, RDX, RSI, RDI, RBX, RBP, RSP, RIP]> {
   let SubRegClassList = [GR8_NOREX, GR8_NOREX, GR16_NOREX, GR32_NOREX];
   let MethodProtos = [{
     iterator allocation_order_end(const MachineFunction &MF) const;
@@ -567,11 +567,11 @@ def GR64_NOREX : RegisterClass<"X86", [i64], 64,
       const TargetRegisterInfo *RI = TM.getRegisterInfo();
       // Does the function dedicate RBP to being a frame ptr?
       if (RI->hasFP(MF))
-        // If so, don't allocate RSP or RBP.
-        return end() - 2;
+        // If so, don't allocate RIP, RSP or RBP.
+        return end() - 3;
       else
-        // If not, just don't allocate RSP.
-        return end() - 1;
+        // If not, just don't allocate RIP or RSP.
+        return end() - 2;
     }
   }];
 }
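The idiom here, sketched below (not LLVM code): registers that must never be handed out are listed last in the class, and allocation_order_end trims them off the range the allocator walks. With RIP appended to GR64_NOREX, each case hides one more entry:

  #include <cstddef>

  // The list ends ..., RBX, RBP, RSP, RIP, so hiding N tail entries
  // removes exactly the reserved registers.
  std::size_t allocatableCount(bool hasFP, std::size_t numRegs) {
    return numRegs - (hasFP ? 3   // hide RIP, RSP and RBP
                            : 2); // hide RIP and RSP
  }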
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.cpp b/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.cpp
index fb76aeb..661f560 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.cpp
@@ -18,14 +18,31 @@
 #include "llvm/GlobalValue.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
+#include "llvm/System/Host.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetOptions.h"
+#include "llvm/ADT/SmallVector.h"
 using namespace llvm;
 
 #if defined(_MSC_VER)
 #include <intrin.h>
 #endif
 
+/// ClassifyBlockAddressReference - Classify a blockaddress reference for the
+/// current subtarget according to how we should reference it in a non-pcrel
+/// context.
+unsigned char X86Subtarget::
+ClassifyBlockAddressReference() const {
+  if (isPICStyleGOT())    // 32-bit ELF targets.
+    return X86II::MO_GOTOFF;
+  
+  if (isPICStyleStubPIC())   // Darwin/32 in PIC mode.
+    return X86II::MO_PIC_BASE_OFFSET;
+  
+  // Direct static reference to label.
+  return X86II::MO_NO_FLAG;
+}
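The same decision in miniature (assumed, simplified enums for illustration; the real flags are the X86II values used above):

  enum MOFlag { MO_NO_FLAG, MO_GOTOFF, MO_PIC_BASE_OFFSET };
  enum PICStyle { PIC_None, PIC_GOT, PIC_StubPIC };

  MOFlag classifyBlockAddress(PICStyle S) {
    if (S == PIC_GOT)     return MO_GOTOFF;           // 32-bit ELF PIC
    if (S == PIC_StubPIC) return MO_PIC_BASE_OFFSET;  // Darwin/32 PIC
    return MO_NO_FLAG;                                // direct reference
  }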
+
 /// ClassifyGlobalReference - Classify a global variable reference for the
 /// current subtarget according to how we should reference it in a non-pcrel
 /// context.
@@ -257,118 +274,6 @@ void X86Subtarget::AutoDetectSubtargetFeatures() {
   }
 }
 
-static const char *GetCurrentX86CPU() {
-  unsigned EAX = 0, EBX = 0, ECX = 0, EDX = 0;
-  if (GetCpuIDAndInfo(0x1, &EAX, &EBX, &ECX, &EDX))
-    return "generic";
-  unsigned Family = 0;
-  unsigned Model  = 0;
-  DetectFamilyModel(EAX, Family, Model);
-
-  GetCpuIDAndInfo(0x80000001, &EAX, &EBX, &ECX, &EDX);
-  bool Em64T = (EDX >> 29) & 0x1;
-  bool HasSSE3 = (ECX & 0x1);
-
-  union {
-    unsigned u[3];
-    char     c[12];
-  } text;
-
-  GetCpuIDAndInfo(0, &EAX, text.u+0, text.u+2, text.u+1);
-  if (memcmp(text.c, "GenuineIntel", 12) == 0) {
-    switch (Family) {
-      case 3:
-        return "i386";
-      case 4:
-        return "i486";
-      case 5:
-        switch (Model) {
-        case 4:  return "pentium-mmx";
-        default: return "pentium";
-        }
-      case 6:
-        switch (Model) {
-        case 1:  return "pentiumpro";
-        case 3:
-        case 5:
-        case 6:  return "pentium2";
-        case 7:
-        case 8:
-        case 10:
-        case 11: return "pentium3";
-        case 9:
-        case 13: return "pentium-m";
-        case 14: return "yonah";
-        case 15:
-        case 22: // Celeron M 540
-          return "core2";
-        case 23: // 45nm: Penryn , Wolfdale, Yorkfield (XE)
-          return "penryn";
-        default: return "i686";
-        }
-      case 15: {
-        switch (Model) {
-        case 3:  
-        case 4:
-        case 6: // same as 4, but 65nm
-          return (Em64T) ? "nocona" : "prescott";
-        case 26:
-          return "corei7";
-        case 28:
-          return "atom";
-        default:
-          return (Em64T) ? "x86-64" : "pentium4";
-        }
-      }
-        
-    default:
-      return "generic";
-    }
-  } else if (memcmp(text.c, "AuthenticAMD", 12) == 0) {
-    // FIXME: this poorly matches the generated SubtargetFeatureKV table.  There
-    // appears to be no way to generate the wide variety of AMD-specific targets
-    // from the information returned from CPUID.
-    switch (Family) {
-      case 4:
-        return "i486";
-      case 5:
-        switch (Model) {
-        case 6:
-        case 7:  return "k6";
-        case 8:  return "k6-2";
-        case 9:
-        case 13: return "k6-3";
-        default: return "pentium";
-        }
-      case 6:
-        switch (Model) {
-        case 4:  return "athlon-tbird";
-        case 6:
-        case 7:
-        case 8:  return "athlon-mp";
-        case 10: return "athlon-xp";
-        default: return "athlon";
-        }
-      case 15:
-        if (HasSSE3) {
-          return "k8-sse3";
-        } else {
-          switch (Model) {
-          case 1:  return "opteron";
-          case 5:  return "athlon-fx"; // also opteron
-          default: return "athlon64";
-          }
-        }
-      case 16:
-        return "amdfam10";
-    default:
-      return "generic";
-    }
-  } else {
-    return "generic";
-  }
-}
-
 X86Subtarget::X86Subtarget(const std::string &TT, const std::string &FS, 
                            bool is64Bit)
   : PICStyle(PICStyles::None)
@@ -382,7 +287,6 @@ X86Subtarget::X86Subtarget(const std::string &TT, const std::string &FS,
   , HasFMA4(false)
   , IsBTMemSlow(false)
   , DarwinVers(0)
-  , IsLinux(false)
   , stackAlignment(8)
   // FIXME: this is a known good value for Yonah. How about others?
   , MaxInlineSizeThreshold(128)
@@ -396,7 +300,7 @@ X86Subtarget::X86Subtarget(const std::string &TT, const std::string &FS,
   // Determine default and user specified characteristics
   if (!FS.empty()) {
     // If feature string is not empty, parse features string.
-    std::string CPU = GetCurrentX86CPU();
+    std::string CPU = sys::getHostCPUName();
     ParseSubtargetFeatures(FS, CPU);
     // All X86-64 CPUs also have SSE2, however user might request no SSE via 
     // -mattr, so don't force SSELevel here.
@@ -434,7 +338,6 @@ X86Subtarget::X86Subtarget(const std::string &TT, const std::string &FS,
     } else if (TT.find("linux") != std::string::npos) {
       // Linux doesn't imply ELF, but we don't currently support anything else.
       TargetType = isELF;
-      IsLinux = true;
     } else if (TT.find("cygwin") != std::string::npos) {
       TargetType = isCygwin;
     } else if (TT.find("mingw") != std::string::npos) {
@@ -457,3 +360,12 @@ X86Subtarget::X86Subtarget(const std::string &TT, const std::string &FS,
   if (StackAlignment)
     stackAlignment = StackAlignment;
 }
+
+bool X86Subtarget::enablePostRAScheduler(
+            CodeGenOpt::Level OptLevel,
+            TargetSubtarget::AntiDepBreakMode& Mode,
+            RegClassVector& CriticalPathRCs) const {
+  Mode = TargetSubtarget::ANTIDEP_CRITICAL;
+  CriticalPathRCs.clear();
+  return OptLevel >= CodeGenOpt::Default;
+}
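Since CodeGenOpt levels are ordered None < Less < Default < Aggressive, the hook enables post-RA scheduling from -O2 upward, with anti-dependence breaking limited to the critical path. The gate in miniature (a sketch):

  // Ascending order mirrors llvm::CodeGenOpt::Level.
  enum OptLevel { OptNone, OptLess, OptDefault, OptAggressive };

  bool enablePostRA(OptLevel L) {
    return L >= OptDefault;  // on for -O2 and above
  }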
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.h b/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.h
index a2e368d..fb457dd 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.h
@@ -82,9 +82,6 @@ protected:
   /// version of the platform, e.g. 8 = 10.4 (Tiger), 9 = 10.5 (Leopard), etc.
   unsigned char DarwinVers; // Is any darwin-x86 platform.
 
-  /// isLinux - true if this is a "linux" platform.
-  bool IsLinux;
-
   /// stackAlignment - The minimum alignment known to hold of the stack frame on
   /// entry to the function and which must be maintained by every function.
   unsigned stackAlignment;
@@ -169,11 +166,11 @@ public:
   std::string getDataLayout() const {
     const char *p;
     if (is64Bit())
-      p = "e-p:64:64-s:64-f64:64:64-i64:64:64-f80:128:128";
+      p = "e-p:64:64-s:64-f64:64:64-i64:64:64-f80:128:128-n8:16:32:64";
     else if (isTargetDarwin())
-      p = "e-p:32:32-f64:32:64-i64:32:64-f80:128:128";
+      p = "e-p:32:32-f64:32:64-i64:32:64-f80:128:128-n8:16:32";
     else
-      p = "e-p:32:32-f64:32:64-i64:32:64-f80:32:32";
+      p = "e-p:32:32-f64:32:64-i64:32:64-f80:32:32-n8:16:32";
     return std::string(p);
   }
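The new "-n8:16:32[:64]" component declares the target's native integer widths, which optimizers consult before inventing odd-sized integer types. A small parser sketch for just that component (an assumed helper, not LLVM API):

  #include <sstream>
  #include <string>
  #include <vector>

  // Extract the widths from the "n" component of a datalayout string,
  // e.g. "e-p:64:64-...-n8:16:32:64" yields {8, 16, 32, 64}.
  std::vector<int> nativeWidths(const std::string &layout) {
    std::vector<int> widths;
    std::istringstream components(layout);
    std::string part;
    while (std::getline(components, part, '-'))
      if (!part.empty() && part[0] == 'n') {
        std::istringstream fields(part.substr(1));
        std::string w;
        while (std::getline(fields, w, ':'))
          widths.push_back(std::stoi(w));
      }
    return widths;
  }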
 
@@ -195,17 +192,18 @@ public:
   /// getDarwinVers - Return the darwin version number, 8 = Tiger, 9 = Leopard,
   /// 10 = Snow Leopard, etc.
   unsigned getDarwinVers() const { return DarwinVers; }
-  
-  /// isLinux - Return true if the target is "Linux".
-  bool isLinux() const { return IsLinux; }
-
-  
+    
   /// ClassifyGlobalReference - Classify a global variable reference for the
   /// current subtarget according to how we should reference it in a non-pcrel
   /// context.
   unsigned char ClassifyGlobalReference(const GlobalValue *GV,
                                         const TargetMachine &TM)const;
 
+  /// ClassifyBlockAddressReference - Classify a blockaddress reference for the
+  /// current subtarget according to how we should reference it in a non-pcrel
+  /// context.
+  unsigned char ClassifyBlockAddressReference() const;
+
   /// IsLegalToCallImmediateAddr - Return true if the subtarget allows calls
   /// to immediate address.
   bool IsLegalToCallImmediateAddr(const TargetMachine &TM) const;
@@ -222,6 +220,12 @@ public:
   /// indicating the number of scheduling cycles of backscheduling that
   /// should be attempted.
   unsigned getSpecialAddressLatency() const;
+
+  /// enablePostRAScheduler - X86 target is enabling post-alloc scheduling
+  /// at 'More' optimization level.
+  bool enablePostRAScheduler(CodeGenOpt::Level OptLevel,
+                             TargetSubtarget::AntiDepBreakMode& Mode,
+                             RegClassVector& CriticalPathRCs) const;
 };
 
 } // End llvm namespace
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp b/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp
index a61de1c..0cda8bc 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp
@@ -22,8 +22,7 @@
 #include "llvm/Target/TargetRegistry.h"
 using namespace llvm;
 
-static const MCAsmInfo *createMCAsmInfo(const Target &T,
-                                                const StringRef &TT) {
+static const MCAsmInfo *createMCAsmInfo(const Target &T, StringRef TT) {
   Triple TheTriple(TT);
   switch (TheTriple.getOS()) {
   case Triple::Darwin:
@@ -186,14 +185,8 @@ bool X86TargetMachine::addCodeEmitter(PassManagerBase &PM,
   }
   
   // 64-bit JIT places everything in the same buffer except external functions.
-  // On Darwin, use small code model but hack the call instruction for 
-  // externals.  Elsewhere, do not assume globals are in the lower 4G.
-  if (Subtarget.is64Bit()) {
-    if (Subtarget.isTargetDarwin())
-      setCodeModel(CodeModel::Small);
-    else
+  if (Subtarget.is64Bit())
       setCodeModel(CodeModel::Large);
-  }
 
   PM.add(createX86CodeEmitterPass(*this, MCE));
 
@@ -212,14 +205,8 @@ bool X86TargetMachine::addCodeEmitter(PassManagerBase &PM,
   }
   
   // 64-bit JIT places everything in the same buffer except external functions.
-  // On Darwin, use small code model but hack the call instruction for 
-  // externals.  Elsewhere, do not assume globals are in the lower 4G.
-  if (Subtarget.is64Bit()) {
-    if (Subtarget.isTargetDarwin())
-      setCodeModel(CodeModel::Small);
-    else
+  if (Subtarget.is64Bit())
       setCodeModel(CodeModel::Large);
-  }
 
   PM.add(createX86JITCodeEmitterPass(*this, JCE));
 
diff --git a/libclamav/c++/llvm/lib/Transforms/Hello/CMakeLists.txt b/libclamav/c++/llvm/lib/Transforms/Hello/CMakeLists.txt
index b80d15b..917b745 100644
--- a/libclamav/c++/llvm/lib/Transforms/Hello/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Transforms/Hello/CMakeLists.txt
@@ -1,3 +1,3 @@
-add_llvm_library( LLVMHello
+add_llvm_loadable_module( LLVMHello
   Hello.cpp
   )
diff --git a/libclamav/c++/llvm/lib/Transforms/Hello/Hello.cpp b/libclamav/c++/llvm/lib/Transforms/Hello/Hello.cpp
index 8000d0d..91534a7 100644
--- a/libclamav/c++/llvm/lib/Transforms/Hello/Hello.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Hello/Hello.cpp
@@ -30,9 +30,8 @@ namespace {
 
     virtual bool runOnFunction(Function &F) {
       HelloCounter++;
-      std::string fname = F.getName();
-      EscapeString(fname);
-      errs() << "Hello: " << fname << "\n";
+      errs() << "Hello: ";
+      errs().write_escaped(F.getName()) << '\n';
       return false;
     }
   };
@@ -49,9 +48,8 @@ namespace {
 
     virtual bool runOnFunction(Function &F) {
       HelloCounter++;
-      std::string fname = F.getName();
-      EscapeString(fname);
-      errs() << "Hello: " << fname << "\n";
+      errs() << "Hello: ";
+      errs().write_escaped(F.getName()) << '\n';
       return false;
     }
 
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/ArgumentPromotion.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/ArgumentPromotion.cpp
index 5b91f3d..dd5a6d8 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/ArgumentPromotion.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/ArgumentPromotion.cpp
@@ -41,7 +41,6 @@
 #include "llvm/Analysis/CallGraph.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Support/CallSite.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/CFG.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
@@ -59,7 +58,7 @@ STATISTIC(NumArgumentsDead     , "Number of dead pointer args eliminated");
 namespace {
   /// ArgPromotion - The 'by reference' to 'by value' argument promotion pass.
   ///
-  struct VISIBILITY_HIDDEN ArgPromotion : public CallGraphSCCPass {
+  struct ArgPromotion : public CallGraphSCCPass {
     virtual void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.addRequired<AliasAnalysis>();
       CallGraphSCCPass::getAnalysisUsage(AU);
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/CMakeLists.txt b/libclamav/c++/llvm/lib/Transforms/IPO/CMakeLists.txt
index ec0f1e1..92bef3b 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/CMakeLists.txt
@@ -9,7 +9,6 @@ add_llvm_library(LLVMipo
   GlobalOpt.cpp
   IPConstantPropagation.cpp
   IPO.cpp
-  IndMemRemoval.cpp
   InlineAlways.cpp
   InlineSimple.cpp
   Inliner.cpp
@@ -20,7 +19,6 @@ add_llvm_library(LLVMipo
   PartialInlining.cpp
   PartialSpecialization.cpp
   PruneEH.cpp
-  RaiseAllocations.cpp
   StripDeadPrototypes.cpp
   StripSymbols.cpp
   StructRetPromotion.cpp
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/ConstantMerge.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/ConstantMerge.cpp
index c1a1045..4972687 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/ConstantMerge.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/ConstantMerge.cpp
@@ -22,14 +22,13 @@
 #include "llvm/Module.h"
 #include "llvm/Pass.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Support/Compiler.h"
 #include <map>
 using namespace llvm;
 
 STATISTIC(NumMerged, "Number of global constants merged");
 
 namespace {
-  struct VISIBILITY_HIDDEN ConstantMerge : public ModulePass {
+  struct ConstantMerge : public ModulePass {
     static char ID; // Pass identification, replacement for typeid
     ConstantMerge() : ModulePass(&ID) {}
 
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/DeadArgumentElimination.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/DeadArgumentElimination.cpp
index 79a32f0..a3db836 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/DeadArgumentElimination.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/DeadArgumentElimination.cpp
@@ -33,7 +33,6 @@
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/StringExtras.h"
-#include "llvm/Support/Compiler.h"
 #include <map>
 #include <set>
 using namespace llvm;
@@ -44,7 +43,7 @@ STATISTIC(NumRetValsEliminated  , "Number of unused return values removed");
 namespace {
   /// DAE - The dead argument elimination pass.
   ///
-  class VISIBILITY_HIDDEN DAE : public ModulePass {
+  class DAE : public ModulePass {
   public:
 
     /// Struct that represents (part of) either a return value or a function
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/DeadTypeElimination.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/DeadTypeElimination.cpp
index 85aed2b..025d77e 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/DeadTypeElimination.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/DeadTypeElimination.cpp
@@ -19,13 +19,12 @@
 #include "llvm/TypeSymbolTable.h"
 #include "llvm/DerivedTypes.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Support/Compiler.h"
 using namespace llvm;
 
 STATISTIC(NumKilled, "Number of unused typenames removed from symtab");
 
 namespace {
-  struct VISIBILITY_HIDDEN DTE : public ModulePass {
+  struct DTE : public ModulePass {
     static char ID; // Pass identification, replacement for typeid
     DTE() : ModulePass(&ID) {}
 
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/ExtractGV.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/ExtractGV.cpp
index 3dd3a80..7f67e48 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/ExtractGV.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/ExtractGV.cpp
@@ -17,13 +17,12 @@
 #include "llvm/Pass.h"
 #include "llvm/Constants.h"
 #include "llvm/Transforms/IPO.h"
-#include "llvm/Support/Compiler.h"
 #include <algorithm>
 using namespace llvm;
 
 namespace {
   /// @brief A pass to extract specific functions and their dependencies.
-  class VISIBILITY_HIDDEN GVExtractorPass : public ModulePass {
+  class GVExtractorPass : public ModulePass {
     std::vector<GlobalValue*> Named;
     bool deleteStuff;
     bool reLink;
@@ -102,7 +101,7 @@ namespace {
       {
         std::vector<Constant *> AUGs;
         const Type *SBP=
-              PointerType::getUnqual(Type::getInt8Ty(M.getContext()));
+              Type::getInt8PtrTy(M.getContext());
         for (std::vector<GlobalValue*>::iterator GI = Named.begin(), 
                GE = Named.end(); GI != GE; ++GI) {
           (*GI)->setLinkage(GlobalValue::ExternalLinkage);
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/FunctionAttrs.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/FunctionAttrs.cpp
index 7edaa7f..a16d335 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/FunctionAttrs.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/FunctionAttrs.cpp
@@ -26,11 +26,10 @@
 #include "llvm/Analysis/AliasAnalysis.h"
 #include "llvm/Analysis/CallGraph.h"
 #include "llvm/Analysis/CaptureTracking.h"
-#include "llvm/Analysis/MallocHelper.h"
+#include "llvm/Analysis/MemoryBuiltins.h"
 #include "llvm/ADT/SmallSet.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/UniqueVector.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/InstIterator.h"
 using namespace llvm;
 
@@ -40,7 +39,7 @@ STATISTIC(NumNoCapture, "Number of arguments marked nocapture");
 STATISTIC(NumNoAlias, "Number of function returns marked noalias");
 
 namespace {
-  struct VISIBILITY_HIDDEN FunctionAttrs : public CallGraphSCCPass {
+  struct FunctionAttrs : public CallGraphSCCPass {
     static char ID; // Pass identification, replacement for typeid
     FunctionAttrs() : CallGraphSCCPass(&ID) {}
 
@@ -153,7 +152,7 @@ bool FunctionAttrs::AddReadAttrs(const std::vector<CallGraphNode *> &SCC) {
         // Writes memory.  Just give up.
         return false;
 
-      if (isa<MallocInst>(I))
+      if (isMalloc(I))
         // malloc claims not to write memory!  PR3754.
         return false;
 
@@ -213,7 +212,7 @@ bool FunctionAttrs::AddNoCaptureAttrs(const std::vector<CallGraphNode *> &SCC) {
 
     for (Function::arg_iterator A = F->arg_begin(), E = F->arg_end(); A!=E; ++A)
       if (isa<PointerType>(A->getType()) && !A->hasNoCaptureAttr() &&
-          !PointerMayBeCaptured(A, true)) {
+          !PointerMayBeCaptured(A, true, /*StoreCaptures=*/false)) {
         A->addAttr(Attribute::NoCapture);
         ++NumNoCapture;
         Changed = true;
@@ -267,11 +266,8 @@ bool FunctionAttrs::IsFunctionMallocLike(Function *F,
 
         // Check whether the pointer came from an allocation.
         case Instruction::Alloca:
-        case Instruction::Malloc:
           break;
         case Instruction::Call:
-          if (isMalloc(RVI))
-            break;
         case Instruction::Invoke: {
           CallSite CS(RVI);
           if (CS.paramHasAttr(0, Attribute::NoAlias))
@@ -284,7 +280,7 @@ bool FunctionAttrs::IsFunctionMallocLike(Function *F,
           return false;  // Did not come from an allocation.
       }
 
-    if (PointerMayBeCaptured(RetVal, false))
+    if (PointerMayBeCaptured(RetVal, false, /*StoreCaptures=*/false))
       return false;
   }
 
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/GlobalDCE.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/GlobalDCE.cpp
index 09f9e7c..44216a6 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/GlobalDCE.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/GlobalDCE.cpp
@@ -20,9 +20,8 @@
 #include "llvm/Constants.h"
 #include "llvm/Module.h"
 #include "llvm/Pass.h"
+#include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Support/Compiler.h"
-#include <set>
 using namespace llvm;
 
 STATISTIC(NumAliases  , "Number of global aliases removed");
@@ -30,7 +29,7 @@ STATISTIC(NumFunctions, "Number of functions removed");
 STATISTIC(NumVariables, "Number of global variables removed");
 
 namespace {
-  struct VISIBILITY_HIDDEN GlobalDCE : public ModulePass {
+  struct GlobalDCE : public ModulePass {
     static char ID; // Pass identification, replacement for typeid
     GlobalDCE() : ModulePass(&ID) {}
 
@@ -40,7 +39,7 @@ namespace {
     bool runOnModule(Module &M);
 
   private:
-    std::set<GlobalValue*> AliveGlobals;
+    SmallPtrSet<GlobalValue*, 32> AliveGlobals;
 
     /// GlobalIsNeeded - mark the specific global value as needed, and
     /// recursively mark anything that it uses as also needed.
@@ -92,7 +91,8 @@ bool GlobalDCE::runOnModule(Module &M) {
 
   // The first pass is to drop initializers of global variables which are dead.
   std::vector<GlobalVariable*> DeadGlobalVars;   // Keep track of dead globals
-  for (Module::global_iterator I = M.global_begin(), E = M.global_end(); I != E; ++I)
+  for (Module::global_iterator I = M.global_begin(), E = M.global_end();
+       I != E; ++I)
     if (!AliveGlobals.count(I)) {
       DeadGlobalVars.push_back(I);         // Keep track of dead globals
       I->setInitializer(0);
@@ -149,22 +149,16 @@ bool GlobalDCE::runOnModule(Module &M) {
   // Make sure that all memory is released
   AliveGlobals.clear();
 
-  // Remove dead metadata.
-  Changed |= M.getContext().RemoveDeadMetadata();
   return Changed;
 }
 
 /// GlobalIsNeeded - mark the specific global value as needed, and
 /// recursively mark anything that it uses as also needed.
 void GlobalDCE::GlobalIsNeeded(GlobalValue *G) {
-  std::set<GlobalValue*>::iterator I = AliveGlobals.find(G);
-
   // If the global is already in the set, no need to reprocess it.
-  if (I != AliveGlobals.end()) return;
-
-  // Otherwise insert it now, so we do not infinitely recurse
-  AliveGlobals.insert(I, G);
-
+  if (!AliveGlobals.insert(G))
+    return;
+  
   if (GlobalVariable *GV = dyn_cast<GlobalVariable>(G)) {
     // If this is a global variable, we must make sure to add any global values
     // referenced by the initializer to the alive set.
@@ -179,11 +173,9 @@ void GlobalDCE::GlobalIsNeeded(GlobalValue *G) {
     // operands.  Any operands of these types must be processed to ensure that
     // any globals used will be marked as needed.
     Function *F = cast<Function>(G);
-    // For all basic blocks...
+
     for (Function::iterator BB = F->begin(), E = F->end(); BB != E; ++BB)
-      // For all instructions...
       for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ++I)
-        // For all operands...
         for (User::op_iterator U = I->op_begin(), E = I->op_end(); U != E; ++U)
           if (GlobalValue *GV = dyn_cast<GlobalValue>(*U))
             GlobalIsNeeded(GV);
@@ -194,13 +186,13 @@ void GlobalDCE::GlobalIsNeeded(GlobalValue *G) {
 
 void GlobalDCE::MarkUsedGlobalsAsNeeded(Constant *C) {
   if (GlobalValue *GV = dyn_cast<GlobalValue>(C))
-    GlobalIsNeeded(GV);
-  else {
-    // Loop over all of the operands of the constant, adding any globals they
-    // use to the list of needed globals.
-    for (User::op_iterator I = C->op_begin(), E = C->op_end(); I != E; ++I)
-      MarkUsedGlobalsAsNeeded(cast<Constant>(*I));
-  }
+    return GlobalIsNeeded(GV);
+  
+  // Loop over all of the operands of the constant, adding any globals they
+  // use to the list of needed globals.
+  for (User::op_iterator I = C->op_begin(), E = C->op_end(); I != E; ++I)
+    if (Constant *OpC = dyn_cast<Constant>(*I))
+      MarkUsedGlobalsAsNeeded(OpC);
 }
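The mark phase these hunks streamline, in miniature (a sketch with a hypothetical stand-in type): the set insert now doubles as the visited test, so each global is expanded exactly once:

  #include <cstddef>
  #include <set>
  #include <vector>

  struct GV { std::vector<GV*> referenced; };  // hypothetical stand-in

  void markNeeded(GV *G, std::set<GV*> &Alive) {
    if (!Alive.insert(G).second)
      return;                      // already alive: stop the recursion
    for (std::size_t i = 0; i < G->referenced.size(); ++i)
      markNeeded(G->referenced[i], Alive);  // initializer/operand refs
  }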
 
 // RemoveUnusedGlobalValue - Loop over all of the uses of the specified
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/GlobalOpt.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/GlobalOpt.cpp
index 8edd79c..4635d0e 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/GlobalOpt.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/GlobalOpt.cpp
@@ -20,14 +20,12 @@
 #include "llvm/DerivedTypes.h"
 #include "llvm/Instructions.h"
 #include "llvm/IntrinsicInst.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/Module.h"
 #include "llvm/Pass.h"
 #include "llvm/Analysis/ConstantFolding.h"
-#include "llvm/Analysis/MallocHelper.h"
+#include "llvm/Analysis/MemoryBuiltins.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Support/CallSite.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/GetElementPtrTypeIterator.h"
@@ -57,7 +55,7 @@ STATISTIC(NumAliasesResolved, "Number of global aliases resolved");
 STATISTIC(NumAliasesRemoved, "Number of global aliases eliminated");
 
 namespace {
-  struct VISIBILITY_HIDDEN GlobalOpt : public ModulePass {
+  struct GlobalOpt : public ModulePass {
     virtual void getAnalysisUsage(AnalysisUsage &AU) const {
     }
     static char ID; // Pass identification, replacement for typeid
@@ -85,7 +83,7 @@ namespace {
 /// GlobalStatus - As we analyze each global, keep track of some information
 /// about it.  If we find out that the address of the global is taken, none of
 /// this info will be accurate.
-struct VISIBILITY_HIDDEN GlobalStatus {
+struct GlobalStatus {
   /// isLoaded - True if the global is ever loaded.  If the global isn't ever
   /// loaded it can be deleted.
   bool isLoaded;
@@ -246,8 +244,7 @@ static bool AnalyzeGlobal(Value *V, GlobalStatus &GS,
   return false;
 }
 
-static Constant *getAggregateConstantElement(Constant *Agg, Constant *Idx,
-                                             LLVMContext &Context) {
+static Constant *getAggregateConstantElement(Constant *Agg, Constant *Idx) {
   ConstantInt *CI = dyn_cast<ConstantInt>(Idx);
   if (!CI) return 0;
   unsigned IdxV = CI->getZExtValue();
@@ -283,8 +280,7 @@ static Constant *getAggregateConstantElement(Constant *Agg, Constant *Idx,
 /// users of the global, cleaning up the obvious ones.  This is largely just a
 /// quick scan over the use list to clean up the easy and obvious cruft.  This
 /// returns true if it made a change.
-static bool CleanupConstantGlobalUsers(Value *V, Constant *Init,
-                                       LLVMContext &Context) {
+static bool CleanupConstantGlobalUsers(Value *V, Constant *Init) {
   bool Changed = false;
   for (Value::use_iterator UI = V->use_begin(), E = V->use_end(); UI != E;) {
     User *U = *UI++;
@@ -304,12 +300,12 @@ static bool CleanupConstantGlobalUsers(Value *V, Constant *Init,
       if (CE->getOpcode() == Instruction::GetElementPtr) {
         Constant *SubInit = 0;
         if (Init)
-          SubInit = ConstantFoldLoadThroughGEPConstantExpr(Init, CE, Context);
-        Changed |= CleanupConstantGlobalUsers(CE, SubInit, Context);
+          SubInit = ConstantFoldLoadThroughGEPConstantExpr(Init, CE);
+        Changed |= CleanupConstantGlobalUsers(CE, SubInit);
       } else if (CE->getOpcode() == Instruction::BitCast && 
                  isa<PointerType>(CE->getType())) {
         // Pointer cast, delete any stores and memsets to the global.
-        Changed |= CleanupConstantGlobalUsers(CE, 0, Context);
+        Changed |= CleanupConstantGlobalUsers(CE, 0);
       }
 
       if (CE->use_empty()) {
@@ -323,11 +319,11 @@ static bool CleanupConstantGlobalUsers(Value *V, Constant *Init,
       Constant *SubInit = 0;
       if (!isa<ConstantExpr>(GEP->getOperand(0))) {
         ConstantExpr *CE = 
-          dyn_cast_or_null<ConstantExpr>(ConstantFoldInstruction(GEP, Context));
+          dyn_cast_or_null<ConstantExpr>(ConstantFoldInstruction(GEP));
         if (Init && CE && CE->getOpcode() == Instruction::GetElementPtr)
-          SubInit = ConstantFoldLoadThroughGEPConstantExpr(Init, CE, Context);
+          SubInit = ConstantFoldLoadThroughGEPConstantExpr(Init, CE);
       }
-      Changed |= CleanupConstantGlobalUsers(GEP, SubInit, Context);
+      Changed |= CleanupConstantGlobalUsers(GEP, SubInit);
 
       if (GEP->use_empty()) {
         GEP->eraseFromParent();
@@ -345,7 +341,7 @@ static bool CleanupConstantGlobalUsers(Value *V, Constant *Init,
       if (SafeToDestroyConstant(C)) {
         C->destroyConstant();
         // This could have invalidated UI, start over from scratch.
-        CleanupConstantGlobalUsers(V, Init, Context);
+        CleanupConstantGlobalUsers(V, Init);
         return true;
       }
     }
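
The restart in the block above is the standard answer to iterator invalidation; a small C++ sketch of the same idiom (invented names, not part of the patch):

    #include <list>

    // When a mutation may invalidate the iterator (as destroyConstant can
    // invalidate UI above), restart the walk instead of advancing it.
    static void DropZeros(std::list<int> &L) {
      for (std::list<int>::iterator I = L.begin(), E = L.end(); I != E; ++I)
        if (*I == 0) {
          L.erase(I);      // I is now dead...
          DropZeros(L);    // ...so start over from scratch and let the
          return;          // recursive call finish the job
        }
    }
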
@@ -470,8 +466,7 @@ static bool GlobalUsersSafeToSRA(GlobalValue *GV) {
 /// behavior of the program in a more fine-grained way.  We have determined that
 /// this transformation is safe already.  We return the first global variable we
 /// insert so that the caller can reprocess it.
-static GlobalVariable *SRAGlobal(GlobalVariable *GV, const TargetData &TD,
-                                 LLVMContext &Context) {
+static GlobalVariable *SRAGlobal(GlobalVariable *GV, const TargetData &TD) {
   // Make sure this global only has simple uses that we can SRA.
   if (!GlobalUsersSafeToSRA(GV))
     return 0;
@@ -493,11 +488,9 @@ static GlobalVariable *SRAGlobal(GlobalVariable *GV, const TargetData &TD,
     const StructLayout &Layout = *TD.getStructLayout(STy);
     for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i) {
       Constant *In = getAggregateConstantElement(Init,
-                                ConstantInt::get(Type::getInt32Ty(Context), i),
-                                    Context);
+                    ConstantInt::get(Type::getInt32Ty(STy->getContext()), i));
       assert(In && "Couldn't get element of initializer?");
-      GlobalVariable *NGV = new GlobalVariable(Context,
-                                               STy->getElementType(i), false,
+      GlobalVariable *NGV = new GlobalVariable(STy->getElementType(i), false,
                                                GlobalVariable::InternalLinkage,
                                                In, GV->getName()+"."+Twine(i),
                                                GV->isThreadLocal(),
@@ -528,12 +521,10 @@ static GlobalVariable *SRAGlobal(GlobalVariable *GV, const TargetData &TD,
     unsigned EltAlign = TD.getABITypeAlignment(STy->getElementType());
     for (unsigned i = 0, e = NumElements; i != e; ++i) {
       Constant *In = getAggregateConstantElement(Init,
-                                ConstantInt::get(Type::getInt32Ty(Context), i),
-                                    Context);
+                    ConstantInt::get(Type::getInt32Ty(Init->getContext()), i));
       assert(In && "Couldn't get element of initializer?");
 
-      GlobalVariable *NGV = new GlobalVariable(Context,
-                                               STy->getElementType(), false,
+      GlobalVariable *NGV = new GlobalVariable(STy->getElementType(), false,
                                                GlobalVariable::InternalLinkage,
                                                In, GV->getName()+"."+Twine(i),
                                                GV->isThreadLocal(),
@@ -555,7 +546,7 @@ static GlobalVariable *SRAGlobal(GlobalVariable *GV, const TargetData &TD,
 
   DEBUG(errs() << "PERFORMING GLOBAL SRA ON: " << *GV);
 
-  Constant *NullInt = Constant::getNullValue(Type::getInt32Ty(Context));
+  Constant *NullInt =
+      Constant::getNullValue(Type::getInt32Ty(GV->getContext()));
 
   // Loop over all of the uses of the global, replacing the constantexpr geps,
   // with smaller constantexpr geps or direct references.
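
Expressed at the source level, SRAGlobal's effect is roughly the following, a hedged C++ sketch with invented names:

    struct Pair { int First; int Second; };
    static Pair G = { 1, 2 };                  // before: one aggregate global
    static int UseBefore() { return G.First + G.Second; }

    static int G0 = 1, G1 = 2;                 // after: "G.0" and "G.1"
    static int UseAfter() { return G0 + G1; }  // GEPs become direct references
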
@@ -679,8 +670,7 @@ static bool AllUsesOfLoadedValueWillTrapIfNull(GlobalVariable *GV) {
   return true;
 }
 
-static bool OptimizeAwayTrappingUsesOfValue(Value *V, Constant *NewV,
-                                           LLVMContext &Context) {
+static bool OptimizeAwayTrappingUsesOfValue(Value *V, Constant *NewV) {
   bool Changed = false;
   for (Value::use_iterator UI = V->use_begin(), E = V->use_end(); UI != E; ) {
     Instruction *I = cast<Instruction>(*UI++);
@@ -713,7 +703,7 @@ static bool OptimizeAwayTrappingUsesOfValue(Value *V, Constant *NewV,
     } else if (CastInst *CI = dyn_cast<CastInst>(I)) {
       Changed |= OptimizeAwayTrappingUsesOfValue(CI,
                                 ConstantExpr::getCast(CI->getOpcode(),
-                                                NewV, CI->getType()), Context);
+                                                      NewV, CI->getType()));
       if (CI->use_empty()) {
         Changed = true;
         CI->eraseFromParent();
@@ -731,7 +721,7 @@ static bool OptimizeAwayTrappingUsesOfValue(Value *V, Constant *NewV,
       if (Idxs.size() == GEPI->getNumOperands()-1)
         Changed |= OptimizeAwayTrappingUsesOfValue(GEPI,
                           ConstantExpr::getGetElementPtr(NewV, &Idxs[0],
-                                                        Idxs.size()), Context);
+                                                        Idxs.size()));
       if (GEPI->use_empty()) {
         Changed = true;
         GEPI->eraseFromParent();
@@ -747,8 +737,7 @@ static bool OptimizeAwayTrappingUsesOfValue(Value *V, Constant *NewV,
 /// value stored into it.  If there are uses of the loaded value that would trap
 /// if the loaded value is dynamically null, then we know that they cannot be
 /// reachable with a null pointer, so we can optimize away the load.
-static bool OptimizeAwayTrappingUsesOfLoads(GlobalVariable *GV, Constant *LV,
-                                            LLVMContext &Context) {
+static bool OptimizeAwayTrappingUsesOfLoads(GlobalVariable *GV, Constant *LV) {
   bool Changed = false;
 
   // Keep track of whether we are able to remove all the uses of the global
@@ -759,7 +748,7 @@ static bool OptimizeAwayTrappingUsesOfLoads(GlobalVariable *GV, Constant *LV,
   for (Value::use_iterator GUI = GV->use_begin(), E = GV->use_end(); GUI != E;){
     User *GlobalUser = *GUI++;
     if (LoadInst *LI = dyn_cast<LoadInst>(GlobalUser)) {
-      Changed |= OptimizeAwayTrappingUsesOfValue(LI, LV, Context);
+      Changed |= OptimizeAwayTrappingUsesOfValue(LI, LV);
       // If we were able to delete all uses of the loads
       if (LI->use_empty()) {
         LI->eraseFromParent();
@@ -790,7 +779,7 @@ static bool OptimizeAwayTrappingUsesOfLoads(GlobalVariable *GV, Constant *LV,
   // nor is the global.
   if (AllNonStoreUsesGone) {
     DEBUG(errs() << "  *** GLOBAL NOW DEAD!\n");
-    CleanupConstantGlobalUsers(GV, 0, Context);
+    CleanupConstantGlobalUsers(GV, 0);
     if (GV->use_empty()) {
       GV->eraseFromParent();
       ++NumDeleted;
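
A rough C++ picture of the situation this function exploits (invented names, editor's sketch): every use of the loaded pointer would trap on null, so the loaded value can only ever be the single non-null value stored.

    static int Slot;
    static int *P = 0;                 // the only store is P = &Slot
    static void Init() { P = &Slot; }
    static int Use() { return *P; }    // would trap if P were null, so the
                                       // pass may read Slot directly
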
@@ -802,10 +791,10 @@ static bool OptimizeAwayTrappingUsesOfLoads(GlobalVariable *GV, Constant *LV,
 
 /// ConstantPropUsersOf - Walk the use list of V, constant folding all of the
 /// instructions that are foldable.
-static void ConstantPropUsersOf(Value *V, LLVMContext &Context) {
+static void ConstantPropUsersOf(Value *V) {
   for (Value::use_iterator UI = V->use_begin(), E = V->use_end(); UI != E; )
     if (Instruction *I = dyn_cast<Instruction>(*UI++))
-      if (Constant *NewC = ConstantFoldInstruction(I, Context)) {
+      if (Constant *NewC = ConstantFoldInstruction(I)) {
         I->replaceAllUsesWith(NewC);
 
         // Advance UI to the next non-I use to avoid invalidating it!
@@ -822,163 +811,46 @@ static void ConstantPropUsersOf(Value *V, LLVMContext &Context) {
 /// malloc, there is no reason to actually DO the malloc.  Instead, turn the
 /// malloc into a global, and any loads of GV as uses of the new global.
 static GlobalVariable *OptimizeGlobalAddressOfMalloc(GlobalVariable *GV,
-                                                     MallocInst *MI,
-                                                     LLVMContext &Context) {
-  DEBUG(errs() << "PROMOTING MALLOC GLOBAL: " << *GV << "  MALLOC = " << *MI);
-  ConstantInt *NElements = cast<ConstantInt>(MI->getArraySize());
-
-  if (NElements->getZExtValue() != 1) {
-    // If we have an array allocation, transform it to a single element
-    // allocation to make the code below simpler.
-    Type *NewTy = ArrayType::get(MI->getAllocatedType(),
-                                 NElements->getZExtValue());
-    MallocInst *NewMI =
-      new MallocInst(NewTy, Constant::getNullValue(Type::getInt32Ty(Context)),
-                     MI->getAlignment(), MI->getName(), MI);
-    Value* Indices[2];
-    Indices[0] = Indices[1] = Constant::getNullValue(Type::getInt32Ty(Context));
-    Value *NewGEP = GetElementPtrInst::Create(NewMI, Indices, Indices + 2,
-                                              NewMI->getName()+".el0", MI);
-    MI->replaceAllUsesWith(NewGEP);
-    MI->eraseFromParent();
-    MI = NewMI;
-  }
-
-  // Create the new global variable.  The contents of the malloc'd memory is
-  // undefined, so initialize with an undef value.
-  // FIXME: This new global should have the alignment returned by malloc.  Code
-  // could depend on malloc returning large alignment (on the mac, 16 bytes) but
-  // this would only guarantee some lower alignment.
-  Constant *Init = UndefValue::get(MI->getAllocatedType());
-  GlobalVariable *NewGV = new GlobalVariable(*GV->getParent(), 
-                                             MI->getAllocatedType(), false,
-                                             GlobalValue::InternalLinkage, Init,
-                                             GV->getName()+".body",
-                                             GV,
-                                             GV->isThreadLocal());
-  
-  // Anything that used the malloc now uses the global directly.
-  MI->replaceAllUsesWith(NewGV);
-
-  Constant *RepValue = NewGV;
-  if (NewGV->getType() != GV->getType()->getElementType())
-    RepValue = ConstantExpr::getBitCast(RepValue, 
-                                        GV->getType()->getElementType());
-
-  // If there is a comparison against null, we will insert a global bool to
-  // keep track of whether the global was initialized yet or not.
-  GlobalVariable *InitBool =
-    new GlobalVariable(Context, Type::getInt1Ty(Context), false,
-                       GlobalValue::InternalLinkage,
-                       ConstantInt::getFalse(Context), GV->getName()+".init",
-                       GV->isThreadLocal());
-  bool InitBoolUsed = false;
-
-  // Loop over all uses of GV, processing them in turn.
-  std::vector<StoreInst*> Stores;
-  while (!GV->use_empty())
-    if (LoadInst *LI = dyn_cast<LoadInst>(GV->use_back())) {
-      while (!LI->use_empty()) {
-        Use &LoadUse = LI->use_begin().getUse();
-        if (!isa<ICmpInst>(LoadUse.getUser()))
-          LoadUse = RepValue;
-        else {
-          ICmpInst *CI = cast<ICmpInst>(LoadUse.getUser());
-          // Replace the cmp X, 0 with a use of the bool value.
-          Value *LV = new LoadInst(InitBool, InitBool->getName()+".val", CI);
-          InitBoolUsed = true;
-          switch (CI->getPredicate()) {
-          default: llvm_unreachable("Unknown ICmp Predicate!");
-          case ICmpInst::ICMP_ULT:
-          case ICmpInst::ICMP_SLT:
-            LV = ConstantInt::getFalse(Context);   // X < null -> always false
-            break;
-          case ICmpInst::ICMP_ULE:
-          case ICmpInst::ICMP_SLE:
-          case ICmpInst::ICMP_EQ:
-            LV = BinaryOperator::CreateNot(LV, "notinit", CI);
-            break;
-          case ICmpInst::ICMP_NE:
-          case ICmpInst::ICMP_UGE:
-          case ICmpInst::ICMP_SGE:
-          case ICmpInst::ICMP_UGT:
-          case ICmpInst::ICMP_SGT:
-            break;  // no change.
-          }
-          CI->replaceAllUsesWith(LV);
-          CI->eraseFromParent();
-        }
-      }
-      LI->eraseFromParent();
-    } else {
-      StoreInst *SI = cast<StoreInst>(GV->use_back());
-      // The global is initialized when the store to it occurs.
-      new StoreInst(ConstantInt::getTrue(Context), InitBool, SI);
-      SI->eraseFromParent();
-    }
-
-  // If the initialization boolean was used, insert it, otherwise delete it.
-  if (!InitBoolUsed) {
-    while (!InitBool->use_empty())  // Delete initializations
-      cast<Instruction>(InitBool->use_back())->eraseFromParent();
-    delete InitBool;
-  } else
-    GV->getParent()->getGlobalList().insert(GV, InitBool);
-
-
-  // Now the GV is dead, nuke it and the malloc.
-  GV->eraseFromParent();
-  MI->eraseFromParent();
-
-  // To further other optimizations, loop over all users of NewGV and try to
-  // constant prop them.  This will promote GEP instructions with constant
-  // indices into GEP constant-exprs, which will allow global-opt to hack on it.
-  ConstantPropUsersOf(NewGV, Context);
-  if (RepValue != NewGV)
-    ConstantPropUsersOf(RepValue, Context);
-
-  return NewGV;
-}
-
-/// OptimizeGlobalAddressOfMalloc - This function takes the specified global
-/// variable, and transforms the program as if it always contained the result of
-/// the specified malloc.  Because it is always the result of the specified
-/// malloc, there is no reason to actually DO the malloc.  Instead, turn the
-/// malloc into a global, and any loads of GV as uses of the new global.
-static GlobalVariable *OptimizeGlobalAddressOfMalloc(GlobalVariable *GV,
                                                      CallInst *CI,
-                                                     BitCastInst *BCI,
-                                                     LLVMContext &Context,
+                                                     const Type *AllocTy,
+                                                     Value* NElems,
                                                      TargetData* TD) {
-  const Type *IntPtrTy = TD->getIntPtrType(Context);
+  DEBUG(errs() << "PROMOTING GLOBAL: " << *GV << "  CALL = " << *CI << '\n');
+
+  const Type *IntPtrTy = TD->getIntPtrType(GV->getContext());
   
-  DEBUG(errs() << "PROMOTING MALLOC GLOBAL: " << *GV << "  MALLOC = " << *CI);
+  // CI has either 0 or 1 bitcast uses (getMallocType() would otherwise have
+  // returned NULL and we would not be here).
+  BitCastInst *BCI = NULL;
+  for (Value::use_iterator UI = CI->use_begin(), E = CI->use_end(); UI != E; )
+    if ((BCI = dyn_cast<BitCastInst>(cast<Instruction>(*UI++))))
+      break;
 
-  ConstantInt *NElements = cast<ConstantInt>(getMallocArraySize(CI,
-                                                                Context, TD));
+  ConstantInt *NElements = cast<ConstantInt>(NElems);
   if (NElements->getZExtValue() != 1) {
     // If we have an array allocation, transform it to a single element
     // allocation to make the code below simpler.
-    Type *NewTy = ArrayType::get(getMallocAllocatedType(CI),
-                                 NElements->getZExtValue());
-    Value* NewM = CallInst::CreateMalloc(CI, IntPtrTy, NewTy);
-    Instruction* NewMI = cast<Instruction>(NewM);
+    Type *NewTy = ArrayType::get(AllocTy, NElements->getZExtValue());
+    unsigned TypeSize = TD->getTypeAllocSize(NewTy);
+    if (const StructType *ST = dyn_cast<StructType>(NewTy))
+      TypeSize = TD->getStructLayout(ST)->getSizeInBytes();
+    Instruction *NewCI = CallInst::CreateMalloc(CI, IntPtrTy, NewTy,
+                                         ConstantInt::get(IntPtrTy, TypeSize));
     Value* Indices[2];
     Indices[0] = Indices[1] = Constant::getNullValue(IntPtrTy);
-    Value *NewGEP = GetElementPtrInst::Create(NewMI, Indices, Indices + 2,
-                                              NewMI->getName()+".el0", CI);
-    BCI->replaceAllUsesWith(NewGEP);
-    BCI->eraseFromParent();
+    Value *NewGEP = GetElementPtrInst::Create(NewCI, Indices, Indices + 2,
+                                              NewCI->getName()+".el0", CI);
+    Value *Cast = new BitCastInst(NewGEP, CI->getType(), "el0", CI);
+    if (BCI) BCI->replaceAllUsesWith(NewGEP);
+    CI->replaceAllUsesWith(Cast);
+    if (BCI) BCI->eraseFromParent();
     CI->eraseFromParent();
-    BCI = cast<BitCastInst>(NewMI);
-    CI = extractMallocCallFromBitCast(NewMI);
+    BCI = dyn_cast<BitCastInst>(NewCI);
+    CI = BCI ? extractMallocCallFromBitCast(BCI) : cast<CallInst>(NewCI);
   }
 
   // Create the new global variable.  The contents of the malloc'd memory are
   // undefined, so initialize with an undef value.
-  // FIXME: This new global should have the alignment returned by malloc.  Code
-  // could depend on malloc returning large alignment (on the mac, 16 bytes) but
-  // this would only guarantee some lower alignment.
   const Type *MAT = getMallocAllocatedType(CI);
   Constant *Init = UndefValue::get(MAT);
   GlobalVariable *NewGV = new GlobalVariable(*GV->getParent(), 
@@ -988,8 +860,9 @@ static GlobalVariable *OptimizeGlobalAddressOfMalloc(GlobalVariable *GV,
                                              GV,
                                              GV->isThreadLocal());
   
-  // Anything that used the malloc now uses the global directly.
-  BCI->replaceAllUsesWith(NewGV);
+  // Anything that used the malloc or its bitcast now uses the global directly.
+  if (BCI) BCI->replaceAllUsesWith(NewGV);
+  CI->replaceAllUsesWith(new BitCastInst(NewGV, CI->getType(), "newgv", CI));
 
   Constant *RepValue = NewGV;
   if (NewGV->getType() != GV->getType()->getElementType())
@@ -999,10 +872,10 @@ static GlobalVariable *OptimizeGlobalAddressOfMalloc(GlobalVariable *GV,
   // If there is a comparison against null, we will insert a global bool to
   // keep track of whether the global was initialized yet or not.
   GlobalVariable *InitBool =
-    new GlobalVariable(Context, Type::getInt1Ty(Context), false,
+    new GlobalVariable(Type::getInt1Ty(GV->getContext()), false,
                        GlobalValue::InternalLinkage,
-                       ConstantInt::getFalse(Context), GV->getName()+".init",
-                       GV->isThreadLocal());
+                       ConstantInt::getFalse(GV->getContext()),
+                       GV->getName()+".init", GV->isThreadLocal());
   bool InitBoolUsed = false;
 
   // Loop over all uses of GV, processing them in turn.
@@ -1021,8 +894,8 @@ static GlobalVariable *OptimizeGlobalAddressOfMalloc(GlobalVariable *GV,
           switch (ICI->getPredicate()) {
           default: llvm_unreachable("Unknown ICmp Predicate!");
           case ICmpInst::ICMP_ULT:
-          case ICmpInst::ICMP_SLT:
-            LV = ConstantInt::getFalse(Context);   // X < null -> always false
+          case ICmpInst::ICMP_SLT:   // X < null -> always false
+            LV = ConstantInt::getFalse(GV->getContext());
             break;
           case ICmpInst::ICMP_ULE:
           case ICmpInst::ICMP_SLE:
@@ -1044,7 +917,7 @@ static GlobalVariable *OptimizeGlobalAddressOfMalloc(GlobalVariable *GV,
     } else {
       StoreInst *SI = cast<StoreInst>(GV->use_back());
       // The global is initialized when the store to it occurs.
-      new StoreInst(ConstantInt::getTrue(Context), InitBool, SI);
+      new StoreInst(ConstantInt::getTrue(GV->getContext()), InitBool, SI);
       SI->eraseFromParent();
     }
 
@@ -1057,17 +930,17 @@ static GlobalVariable *OptimizeGlobalAddressOfMalloc(GlobalVariable *GV,
     GV->getParent()->getGlobalList().insert(GV, InitBool);
 
 
-  // Now the GV is dead, nuke it and the malloc.
+  // Now the GV is dead, nuke it and the malloc (both CI and BCI).
   GV->eraseFromParent();
-  BCI->eraseFromParent();
+  if (BCI) BCI->eraseFromParent();
   CI->eraseFromParent();
 
   // To further other optimizations, loop over all users of NewGV and try to
   // constant prop them.  This will promote GEP instructions with constant
   // indices into GEP constant-exprs, which will allow global-opt to hack on it.
-  ConstantPropUsersOf(NewGV, Context);
+  ConstantPropUsersOf(NewGV);
   if (RepValue != NewGV)
-    ConstantPropUsersOf(RepValue, Context);
+    ConstantPropUsersOf(RepValue);
 
   return NewGV;
 }
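
OptimizeGlobalAddressOfMalloc's end result, sketched in C++ (invented names; in the real IR the body is undef-initialized and the flag is the ".init" global):

    #include <cstdlib>

    static int *P;                                    // before: @GV
    static void InitBefore() { P = (int*)std::malloc(sizeof(int)); }
    static int UseBefore() { return P ? *P : 0; }     // compare against null

    static int PBody;                                 // after: @GV.body
    static bool PInit = false;                        // after: @GV.init
    static void InitAfter() { PInit = true; }         // the store sets the flag
    static int UseAfter() { return PInit ? PBody : 0; }
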
@@ -1269,8 +1142,7 @@ static bool AllGlobalLoadUsesSimpleEnoughForHeapSRA(GlobalVariable *GV,
 
 static Value *GetHeapSROAValue(Value *V, unsigned FieldNo,
                DenseMap<Value*, std::vector<Value*> > &InsertedScalarizedValues,
-                   std::vector<std::pair<PHINode*, unsigned> > &PHIsToRewrite,
-                   LLVMContext &Context) {
+                   std::vector<std::pair<PHINode*, unsigned> > &PHIsToRewrite) {
   std::vector<Value*> &FieldVals = InsertedScalarizedValues[V];
   
   if (FieldNo >= FieldVals.size())
@@ -1288,7 +1160,7 @@ static Value *GetHeapSROAValue(Value *V, unsigned FieldNo,
     // a new Load of the scalarized global.
     Result = new LoadInst(GetHeapSROAValue(LI->getOperand(0), FieldNo,
                                            InsertedScalarizedValues,
-                                           PHIsToRewrite, Context),
+                                           PHIsToRewrite),
                           LI->getName()+".f"+Twine(FieldNo), LI);
   } else if (PHINode *PN = dyn_cast<PHINode>(V)) {
     // PN's type is pointer to struct.  Make a new PHI of pointer to struct
@@ -1312,16 +1184,14 @@ static Value *GetHeapSROAValue(Value *V, unsigned FieldNo,
 /// the load, rewrite the derived value to use the HeapSRoA'd load.
 static void RewriteHeapSROALoadUser(Instruction *LoadUser, 
              DenseMap<Value*, std::vector<Value*> > &InsertedScalarizedValues,
-                   std::vector<std::pair<PHINode*, unsigned> > &PHIsToRewrite,
-                   LLVMContext &Context) {
+                   std::vector<std::pair<PHINode*, unsigned> > &PHIsToRewrite) {
   // If this is a comparison against null, handle it.
   if (ICmpInst *SCI = dyn_cast<ICmpInst>(LoadUser)) {
     assert(isa<ConstantPointerNull>(SCI->getOperand(1)));
     // If we have a setcc of the loaded pointer, we can use a setcc of any
     // field.
     Value *NPtr = GetHeapSROAValue(SCI->getOperand(0), 0,
-                                   InsertedScalarizedValues, PHIsToRewrite,
-                                   Context);
+                                   InsertedScalarizedValues, PHIsToRewrite);
     
     Value *New = new ICmpInst(SCI, SCI->getPredicate(), NPtr,
                               Constant::getNullValue(NPtr->getType()), 
@@ -1339,8 +1209,7 @@ static void RewriteHeapSROALoadUser(Instruction *LoadUser,
     // Load the pointer for this field.
     unsigned FieldNo = cast<ConstantInt>(GEPI->getOperand(2))->getZExtValue();
     Value *NewPtr = GetHeapSROAValue(GEPI->getOperand(0), FieldNo,
-                                     InsertedScalarizedValues, PHIsToRewrite,
-                                     Context);
+                                     InsertedScalarizedValues, PHIsToRewrite);
     
     // Create the new GEP idx vector.
     SmallVector<Value*, 8> GEPIdx;
@@ -1372,8 +1241,7 @@ static void RewriteHeapSROALoadUser(Instruction *LoadUser,
   // users.
   for (Value::use_iterator UI = PN->use_begin(), E = PN->use_end(); UI != E; ) {
     Instruction *User = cast<Instruction>(*UI++);
-    RewriteHeapSROALoadUser(User, InsertedScalarizedValues, PHIsToRewrite,
-                            Context);
+    RewriteHeapSROALoadUser(User, InsertedScalarizedValues, PHIsToRewrite);
   }
 }
 
@@ -1383,13 +1251,11 @@ static void RewriteHeapSROALoadUser(Instruction *LoadUser,
 /// AllGlobalLoadUsesSimpleEnoughForHeapSRA.
 static void RewriteUsesOfLoadForHeapSRoA(LoadInst *Load, 
                DenseMap<Value*, std::vector<Value*> > &InsertedScalarizedValues,
-                   std::vector<std::pair<PHINode*, unsigned> > &PHIsToRewrite,
-                   LLVMContext &Context) {
+                   std::vector<std::pair<PHINode*, unsigned> > &PHIsToRewrite) {
   for (Value::use_iterator UI = Load->use_begin(), E = Load->use_end();
        UI != E; ) {
     Instruction *User = cast<Instruction>(*UI++);
-    RewriteHeapSROALoadUser(User, InsertedScalarizedValues, PHIsToRewrite,
-                            Context);
+    RewriteHeapSROALoadUser(User, InsertedScalarizedValues, PHIsToRewrite);
   }
   
   if (Load->use_empty()) {
@@ -1398,193 +1264,11 @@ static void RewriteUsesOfLoadForHeapSRoA(LoadInst *Load,
   }
 }
 
-/// PerformHeapAllocSRoA - MI is an allocation of an array of structures.  Break
-/// it up into multiple allocations of arrays of the fields.
-static GlobalVariable *PerformHeapAllocSRoA(GlobalVariable *GV, MallocInst *MI,
-                                            LLVMContext &Context){
-  DEBUG(errs() << "SROA HEAP ALLOC: " << *GV << "  MALLOC = " << *MI);
-  const StructType *STy = cast<StructType>(MI->getAllocatedType());
-
-  // There is guaranteed to be at least one use of the malloc (storing
-  // it into GV).  If there are other uses, change them to be uses of
-  // the global to simplify later code.  This also deletes the store
-  // into GV.
-  ReplaceUsesOfMallocWithGlobal(MI, GV);
-  
-  // Okay, at this point, there are no users of the malloc.  Insert N
-  // new mallocs at the same place as MI, and N globals.
-  std::vector<Value*> FieldGlobals;
-  std::vector<MallocInst*> FieldMallocs;
-  
-  for (unsigned FieldNo = 0, e = STy->getNumElements(); FieldNo != e;++FieldNo){
-    const Type *FieldTy = STy->getElementType(FieldNo);
-    const Type *PFieldTy = PointerType::getUnqual(FieldTy);
-    
-    GlobalVariable *NGV =
-      new GlobalVariable(*GV->getParent(),
-                         PFieldTy, false, GlobalValue::InternalLinkage,
-                         Constant::getNullValue(PFieldTy),
-                         GV->getName() + ".f" + Twine(FieldNo), GV,
-                         GV->isThreadLocal());
-    FieldGlobals.push_back(NGV);
-    
-    MallocInst *NMI = new MallocInst(FieldTy, MI->getArraySize(),
-                                     MI->getName() + ".f" + Twine(FieldNo), MI);
-    FieldMallocs.push_back(NMI);
-    new StoreInst(NMI, NGV, MI);
-  }
-  
-  // The tricky aspect of this transformation is handling the case when malloc
-  // fails.  In the original code, malloc failing would set the result pointer
-  // of malloc to null.  In this case, some mallocs could succeed and others
-  // could fail.  As such, we emit code that looks like this:
-  //    F0 = malloc(field0)
-  //    F1 = malloc(field1)
-  //    F2 = malloc(field2)
-  //    if (F0 == 0 || F1 == 0 || F2 == 0) {
-  //      if (F0) { free(F0); F0 = 0; }
-  //      if (F1) { free(F1); F1 = 0; }
-  //      if (F2) { free(F2); F2 = 0; }
-  //    }
-  Value *RunningOr = 0;
-  for (unsigned i = 0, e = FieldMallocs.size(); i != e; ++i) {
-    Value *Cond = new ICmpInst(MI, ICmpInst::ICMP_EQ, FieldMallocs[i],
-                              Constant::getNullValue(FieldMallocs[i]->getType()),
-                                  "isnull");
-    if (!RunningOr)
-      RunningOr = Cond;   // First seteq
-    else
-      RunningOr = BinaryOperator::CreateOr(RunningOr, Cond, "tmp", MI);
-  }
-
-  // Split the basic block at the old malloc.
-  BasicBlock *OrigBB = MI->getParent();
-  BasicBlock *ContBB = OrigBB->splitBasicBlock(MI, "malloc_cont");
-  
-  // Create the block to check the first condition.  Put all these blocks at the
-  // end of the function as they are unlikely to be executed.
-  BasicBlock *NullPtrBlock = BasicBlock::Create(Context, "malloc_ret_null",
-                                                OrigBB->getParent());
-  
-  // Remove the uncond branch from OrigBB to ContBB, turning it into a cond
-  // branch on RunningOr.
-  OrigBB->getTerminator()->eraseFromParent();
-  BranchInst::Create(NullPtrBlock, ContBB, RunningOr, OrigBB);
-  
-  // Within the NullPtrBlock, we need to emit a comparison and branch for each
-  // pointer, because some may be null while others are not.
-  for (unsigned i = 0, e = FieldGlobals.size(); i != e; ++i) {
-    Value *GVVal = new LoadInst(FieldGlobals[i], "tmp", NullPtrBlock);
-    Value *Cmp = new ICmpInst(*NullPtrBlock, ICmpInst::ICMP_NE, GVVal, 
-                              Constant::getNullValue(GVVal->getType()),
-                              "tmp");
-    BasicBlock *FreeBlock = BasicBlock::Create(Context, "free_it", 
-                                               OrigBB->getParent());
-    BasicBlock *NextBlock = BasicBlock::Create(Context, "next", 
-                                               OrigBB->getParent());
-    BranchInst::Create(FreeBlock, NextBlock, Cmp, NullPtrBlock);
-
-    // Fill in FreeBlock.
-    new FreeInst(GVVal, FreeBlock);
-    new StoreInst(Constant::getNullValue(GVVal->getType()), FieldGlobals[i],
-                  FreeBlock);
-    BranchInst::Create(NextBlock, FreeBlock);
-    
-    NullPtrBlock = NextBlock;
-  }
-  
-  BranchInst::Create(ContBB, NullPtrBlock);
-  
-  // MI is no longer needed, remove it.
-  MI->eraseFromParent();
-
-  /// InsertedScalarizedLoads - As we process loads, if we can't immediately
-  /// update all uses of the load, keep track of what scalarized loads are
-  /// inserted for a given load.
-  DenseMap<Value*, std::vector<Value*> > InsertedScalarizedValues;
-  InsertedScalarizedValues[GV] = FieldGlobals;
-  
-  std::vector<std::pair<PHINode*, unsigned> > PHIsToRewrite;
-  
-  // Okay, the malloc site is completely handled.  All of the uses of GV are now
-  // loads, and all uses of those loads are simple.  Rewrite them to use loads
-  // of the per-field globals instead.
-  for (Value::use_iterator UI = GV->use_begin(), E = GV->use_end(); UI != E;) {
-    Instruction *User = cast<Instruction>(*UI++);
-    
-    if (LoadInst *LI = dyn_cast<LoadInst>(User)) {
-      RewriteUsesOfLoadForHeapSRoA(LI, InsertedScalarizedValues, PHIsToRewrite,
-                                   Context);
-      continue;
-    }
-    
-    // Must be a store of null.
-    StoreInst *SI = cast<StoreInst>(User);
-    assert(isa<ConstantPointerNull>(SI->getOperand(0)) &&
-           "Unexpected heap-sra user!");
-    
-    // Insert a store of null into each global.
-    for (unsigned i = 0, e = FieldGlobals.size(); i != e; ++i) {
-      const PointerType *PT = cast<PointerType>(FieldGlobals[i]->getType());
-      Constant *Null = Constant::getNullValue(PT->getElementType());
-      new StoreInst(Null, FieldGlobals[i], SI);
-    }
-    // Erase the original store.
-    SI->eraseFromParent();
-  }
-
-  // While we have PHIs that are interesting to rewrite, do it.
-  while (!PHIsToRewrite.empty()) {
-    PHINode *PN = PHIsToRewrite.back().first;
-    unsigned FieldNo = PHIsToRewrite.back().second;
-    PHIsToRewrite.pop_back();
-    PHINode *FieldPN = cast<PHINode>(InsertedScalarizedValues[PN][FieldNo]);
-    assert(FieldPN->getNumIncomingValues() == 0 &&"Already processed this phi");
-
-    // Add all the incoming values.  This can materialize more phis.
-    for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
-      Value *InVal = PN->getIncomingValue(i);
-      InVal = GetHeapSROAValue(InVal, FieldNo, InsertedScalarizedValues,
-                               PHIsToRewrite, Context);
-      FieldPN->addIncoming(InVal, PN->getIncomingBlock(i));
-    }
-  }
-  
-  // Drop all inter-phi links and any loads that made it this far.
-  for (DenseMap<Value*, std::vector<Value*> >::iterator
-       I = InsertedScalarizedValues.begin(), E = InsertedScalarizedValues.end();
-       I != E; ++I) {
-    if (PHINode *PN = dyn_cast<PHINode>(I->first))
-      PN->dropAllReferences();
-    else if (LoadInst *LI = dyn_cast<LoadInst>(I->first))
-      LI->dropAllReferences();
-  }
-  
-  // Delete all the phis and loads now that inter-references are dead.
-  for (DenseMap<Value*, std::vector<Value*> >::iterator
-       I = InsertedScalarizedValues.begin(), E = InsertedScalarizedValues.end();
-       I != E; ++I) {
-    if (PHINode *PN = dyn_cast<PHINode>(I->first))
-      PN->eraseFromParent();
-    else if (LoadInst *LI = dyn_cast<LoadInst>(I->first))
-      LI->eraseFromParent();
-  }
-  
-  // The old global is now dead, remove it.
-  GV->eraseFromParent();
-
-  ++NumHeapSRA;
-  return cast<GlobalVariable>(FieldGlobals[0]);
-}
-
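
Both the retired MallocInst version above and the CallInst rewrite below implement the same source-level split; roughly, in C++ (invented names, with the failure handling the F0/F1/F2 comment describes, plus the "isneg" guard the rewritten code adds):

    #include <cstdlib>

    struct S { int A; float B; };
    static S *G;                          // before: one malloc of S[N]
    static int *G0; static float *G1;     // after: @GV.f0 and @GV.f1

    static bool AllocAfter(long N) {
      G0 = (int*)std::malloc(N * sizeof(int));
      G1 = (float*)std::malloc(N * sizeof(float));
      if (N < 0 || !G0 || !G1) {          // "isneg" plus per-field "isnull"
        std::free(G0); std::free(G1);     // free(NULL) is a no-op
        G0 = 0; G1 = 0;
        return false;
      }
      return true;
    }
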
 /// PerformHeapAllocSRoA - CI is an allocation of an array of structures.  Break
 /// it up into multiple allocations of arrays of the fields.
-static GlobalVariable *PerformHeapAllocSRoA(GlobalVariable *GV,
-                                            CallInst *CI, BitCastInst* BCI, 
-                                            LLVMContext &Context,
-                                            TargetData *TD){
-  DEBUG(errs() << "SROA HEAP ALLOC: " << *GV << "  MALLOC CALL = " << *CI 
-               << " BITCAST = " << *BCI << '\n');
+static GlobalVariable *PerformHeapAllocSRoA(GlobalVariable *GV, CallInst *CI,
+                                            Value* NElems, TargetData *TD) {
+  DEBUG(errs() << "SROA HEAP ALLOC: " << *GV << "  MALLOC = " << *CI << '\n');
   const Type* MAT = getMallocAllocatedType(CI);
   const StructType *STy = cast<StructType>(MAT);
 
@@ -1592,8 +1276,8 @@ static GlobalVariable *PerformHeapAllocSRoA(GlobalVariable *GV,
   // it into GV).  If there are other uses, change them to be uses of
   // the global to simplify later code.  This also deletes the store
   // into GV.
-  ReplaceUsesOfMallocWithGlobal(BCI, GV);
-  
+  ReplaceUsesOfMallocWithGlobal(CI, GV);
+
   // Okay, at this point, there are no users of the malloc.  Insert N
   // new mallocs at the same place as CI, and N globals.
   std::vector<Value*> FieldGlobals;
@@ -1611,11 +1295,18 @@ static GlobalVariable *PerformHeapAllocSRoA(GlobalVariable *GV,
                          GV->isThreadLocal());
     FieldGlobals.push_back(NGV);
     
-    Value *NMI = CallInst::CreateMalloc(CI, TD->getIntPtrType(Context), FieldTy,
-                                        getMallocArraySize(CI, Context, TD),
-                                        BCI->getName() + ".f" + Twine(FieldNo));
-    FieldMallocs.push_back(NMI);
-    new StoreInst(NMI, NGV, BCI);
+    unsigned TypeSize = TD->getTypeAllocSize(FieldTy);
+    if (const StructType *ST = dyn_cast<StructType>(FieldTy))
+      TypeSize = TD->getStructLayout(ST)->getSizeInBytes();
+    const Type *IntPtrTy = TD->getIntPtrType(CI->getContext());
+    Value *NMI = CallInst::CreateMalloc(CI, IntPtrTy, FieldTy,
+                                        ConstantInt::get(IntPtrTy, TypeSize),
+                                        NElems,
+                                        CI->getName() + ".f" + Twine(FieldNo));
+    CallInst *NCI = dyn_cast<BitCastInst>(NMI) ?
+                    extractMallocCallFromBitCast(NMI) : cast<CallInst>(NMI);
+    FieldMallocs.push_back(NCI);
+    new StoreInst(NMI, NGV, CI);
   }
   
   // The tricky aspect of this transformation is handling the case when malloc
@@ -1630,24 +1321,25 @@ static GlobalVariable *PerformHeapAllocSRoA(GlobalVariable *GV,
   //      if (F1) { free(F1); F1 = 0; }
   //      if (F2) { free(F2); F2 = 0; }
   //    }
-  Value *RunningOr = 0;
+  // The malloc can also fail if its argument is too large.
+  Constant *ConstantZero = ConstantInt::get(CI->getOperand(1)->getType(), 0);
+  Value *RunningOr = new ICmpInst(CI, ICmpInst::ICMP_SLT, CI->getOperand(1),
+                                  ConstantZero, "isneg");
   for (unsigned i = 0, e = FieldMallocs.size(); i != e; ++i) {
-    Value *Cond = new ICmpInst(BCI, ICmpInst::ICMP_EQ, FieldMallocs[i],
-                              Constant::getNullValue(FieldMallocs[i]->getType()),
-                                  "isnull");
-    if (!RunningOr)
-      RunningOr = Cond;   // First seteq
-    else
-      RunningOr = BinaryOperator::CreateOr(RunningOr, Cond, "tmp", BCI);
+    Value *Cond = new ICmpInst(CI, ICmpInst::ICMP_EQ, FieldMallocs[i],
+                             Constant::getNullValue(FieldMallocs[i]->getType()),
+                               "isnull");
+    RunningOr = BinaryOperator::CreateOr(RunningOr, Cond, "tmp", CI);
   }
 
   // Split the basic block at the old malloc.
-  BasicBlock *OrigBB = BCI->getParent();
-  BasicBlock *ContBB = OrigBB->splitBasicBlock(BCI, "malloc_cont");
+  BasicBlock *OrigBB = CI->getParent();
+  BasicBlock *ContBB = OrigBB->splitBasicBlock(CI, "malloc_cont");
   
   // Create the block to check the first condition.  Put all these blocks at the
   // end of the function as they are unlikely to be executed.
-  BasicBlock *NullPtrBlock = BasicBlock::Create(Context, "malloc_ret_null",
+  BasicBlock *NullPtrBlock = BasicBlock::Create(OrigBB->getContext(),
+                                                "malloc_ret_null",
                                                 OrigBB->getParent());
   
   // Remove the uncond branch from OrigBB to ContBB, turning it into a cond
@@ -1662,14 +1354,15 @@ static GlobalVariable *PerformHeapAllocSRoA(GlobalVariable *GV,
     Value *Cmp = new ICmpInst(*NullPtrBlock, ICmpInst::ICMP_NE, GVVal, 
                               Constant::getNullValue(GVVal->getType()),
                               "tmp");
-    BasicBlock *FreeBlock = BasicBlock::Create(Context, "free_it",
+    BasicBlock *FreeBlock = BasicBlock::Create(Cmp->getContext(), "free_it",
                                                OrigBB->getParent());
-    BasicBlock *NextBlock = BasicBlock::Create(Context, "next",
+    BasicBlock *NextBlock = BasicBlock::Create(Cmp->getContext(), "next",
                                                OrigBB->getParent());
-    BranchInst::Create(FreeBlock, NextBlock, Cmp, NullPtrBlock);
+    Instruction *BI = BranchInst::Create(FreeBlock, NextBlock,
+                                         Cmp, NullPtrBlock);
 
     // Fill in FreeBlock.
-    new FreeInst(GVVal, FreeBlock);
+    CallInst::CreateFree(GVVal, BI);
     new StoreInst(Constant::getNullValue(GVVal->getType()), FieldGlobals[i],
                   FreeBlock);
     BranchInst::Create(NextBlock, FreeBlock);
@@ -1678,9 +1371,8 @@ static GlobalVariable *PerformHeapAllocSRoA(GlobalVariable *GV,
   }
   
   BranchInst::Create(ContBB, NullPtrBlock);
-  
-  // CI and BCI are no longer needed, remove them.
-  BCI->eraseFromParent();
+
+  // CI is no longer needed, remove it.
   CI->eraseFromParent();
 
   /// InsertedScalarizedLoads - As we process loads, if we can't immediately
@@ -1698,8 +1390,7 @@ static GlobalVariable *PerformHeapAllocSRoA(GlobalVariable *GV,
     Instruction *User = cast<Instruction>(*UI++);
     
     if (LoadInst *LI = dyn_cast<LoadInst>(User)) {
-      RewriteUsesOfLoadForHeapSRoA(LI, InsertedScalarizedValues, PHIsToRewrite,
-                                   Context);
+      RewriteUsesOfLoadForHeapSRoA(LI, InsertedScalarizedValues, PHIsToRewrite);
       continue;
     }
     
@@ -1730,7 +1421,7 @@ static GlobalVariable *PerformHeapAllocSRoA(GlobalVariable *GV,
     for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
       Value *InVal = PN->getIncomingValue(i);
       InVal = GetHeapSROAValue(InVal, FieldNo, InsertedScalarizedValues,
-                               PHIsToRewrite, Context);
+                               PHIsToRewrite);
       FieldPN->addIncoming(InVal, PN->getIncomingBlock(i));
     }
   }
@@ -1766,104 +1457,10 @@ static GlobalVariable *PerformHeapAllocSRoA(GlobalVariable *GV,
 /// pointer global variable with a single value stored into it that is a
 /// malloc or cast of malloc.
 static bool TryToOptimizeStoreOfMallocToGlobal(GlobalVariable *GV,
-                                               MallocInst *MI,
-                                               Module::global_iterator &GVI,
-                                               TargetData *TD,
-                                               LLVMContext &Context) {
-  // If this is a malloc of an abstract type, don't touch it.
-  if (!MI->getAllocatedType()->isSized())
-    return false;
-  
-  // We can't optimize this global unless all uses of it are *known* to be
-  // of the malloc value, not of the null initializer value (consider a use
-  // that compares the global's value against zero to see if the malloc has
-  // been reached).  To do this, we check to see if all uses of the global
-  // would trap if the global were null: this proves that they must all
-  // happen after the malloc.
-  if (!AllUsesOfLoadedValueWillTrapIfNull(GV))
-    return false;
-  
-  // We can't optimize this if the malloc itself is used in a complex way,
-  // for example, being stored into multiple globals.  This allows the
-  // malloc to be stored into the specified global, loaded setcc'd, and
-  // GEP'd.  These are all things we could transform to using the global
-  // for.
-  {
-    SmallPtrSet<PHINode*, 8> PHIs;
-    if (!ValueIsOnlyUsedLocallyOrStoredToOneGlobal(MI, GV, PHIs))
-      return false;
-  }
-  
-  
-  // If we have a global that is only initialized with a fixed size malloc,
-  // transform the program to use global memory instead of malloc'd memory.
-  // This eliminates dynamic allocation, avoids an indirection accessing the
-  // data, and exposes the resultant global to further GlobalOpt.
-  if (ConstantInt *NElements = dyn_cast<ConstantInt>(MI->getArraySize())) {
-    // Restrict this transformation to only working on small allocations
-    // (2048 bytes currently), as we don't want to introduce a 16M global or
-    // something.
-    if (TD &&
-        NElements->getZExtValue()*
-        TD->getTypeAllocSize(MI->getAllocatedType()) < 2048) {
-      GVI = OptimizeGlobalAddressOfMalloc(GV, MI, Context);
-      return true;
-    }
-  }
-  
-  // If the allocation is an array of structures, consider transforming this
-  // into multiple malloc'd arrays, one for each field.  This is basically
-  // SRoA for malloc'd memory.
-  const Type *AllocTy = MI->getAllocatedType();
-  
-  // If this is an allocation of a fixed size array of structs, analyze as a
-  // variable size array.  malloc [100 x struct],1 -> malloc struct, 100
-  if (!MI->isArrayAllocation())
-    if (const ArrayType *AT = dyn_cast<ArrayType>(AllocTy))
-      AllocTy = AT->getElementType();
-  
-  if (const StructType *AllocSTy = dyn_cast<StructType>(AllocTy)) {
-    // This the structure has an unreasonable number of fields, leave it
-    // alone.
-    if (AllocSTy->getNumElements() <= 16 && AllocSTy->getNumElements() != 0 &&
-        AllGlobalLoadUsesSimpleEnoughForHeapSRA(GV, MI)) {
-      
-      // If this is a fixed size array, transform the Malloc to be an alloc of
-      // structs.  malloc [100 x struct],1 -> malloc struct, 100
-      if (const ArrayType *AT = dyn_cast<ArrayType>(MI->getAllocatedType())) {
-        MallocInst *NewMI = 
-          new MallocInst(AllocSTy, 
-                  ConstantInt::get(Type::getInt32Ty(Context),
-                  AT->getNumElements()),
-                         "", MI);
-        NewMI->takeName(MI);
-        Value *Cast = new BitCastInst(NewMI, MI->getType(), "tmp", MI);
-        MI->replaceAllUsesWith(Cast);
-        MI->eraseFromParent();
-        MI = NewMI;
-      }
-      
-      GVI = PerformHeapAllocSRoA(GV, MI, Context);
-      return true;
-    }
-  }
-  
-  return false;
-}  
-
-/// TryToOptimizeStoreOfMallocToGlobal - This function is called when we see a
-/// pointer global variable with a single value stored it that is a malloc or
-/// cast of malloc.
-static bool TryToOptimizeStoreOfMallocToGlobal(GlobalVariable *GV,
                                                CallInst *CI,
-                                               BitCastInst *BCI,
+                                               const Type *AllocTy,
                                                Module::global_iterator &GVI,
-                                               TargetData *TD,
-                                               LLVMContext &Context) {
-  // If we can't figure out the type being malloced, then we can't optimize.
-  const Type *AllocTy = getMallocAllocatedType(CI);
-  assert(AllocTy);
-
+                                               TargetData *TD) {
   // If this is a malloc of an abstract type, don't touch it.
   if (!AllocTy->isSized())
     return false;
@@ -1884,7 +1481,7 @@ static bool TryToOptimizeStoreOfMallocToGlobal(GlobalVariable *GV,
   // for.
   {
     SmallPtrSet<PHINode*, 8> PHIs;
-    if (!ValueIsOnlyUsedLocallyOrStoredToOneGlobal(BCI, GV, PHIs))
+    if (!ValueIsOnlyUsedLocallyOrStoredToOneGlobal(CI, GV, PHIs))
       return false;
   }  
 
@@ -1892,52 +1489,55 @@ static bool TryToOptimizeStoreOfMallocToGlobal(GlobalVariable *GV,
   // transform the program to use global memory instead of malloc'd memory.
   // This eliminates dynamic allocation, avoids an indirection accessing the
   // data, and exposes the resultant global to further GlobalOpt.
-  if (ConstantInt *NElements =
-              dyn_cast<ConstantInt>(getMallocArraySize(CI, Context, TD))) {
-    // Restrict this transformation to only working on small allocations
-    // (2048 bytes currently), as we don't want to introduce a 16M global or
-    // something.
-    if (TD && 
-        NElements->getZExtValue() * TD->getTypeAllocSize(AllocTy) < 2048) {
-      GVI = OptimizeGlobalAddressOfMalloc(GV, CI, BCI, Context, TD);
-      return true;
-    }
-  }
-  
-  // If the allocation is an array of structures, consider transforming this
-  // into multiple malloc'd arrays, one for each field.  This is basically
-  // SRoA for malloc'd memory.
-
-  // If this is an allocation of a fixed size array of structs, analyze as a
-  // variable size array.  malloc [100 x struct],1 -> malloc struct, 100
-  if (!isArrayMalloc(CI, Context, TD))
-    if (const ArrayType *AT = dyn_cast<ArrayType>(AllocTy))
-      AllocTy = AT->getElementType();
-  
-  if (const StructType *AllocSTy = dyn_cast<StructType>(AllocTy)) {
-    // This the structure has an unreasonable number of fields, leave it
-    // alone.
-    if (AllocSTy->getNumElements() <= 16 && AllocSTy->getNumElements() != 0 &&
-        AllGlobalLoadUsesSimpleEnoughForHeapSRA(GV, BCI)) {
-
-      // If this is a fixed size array, transform the Malloc to be an alloc of
-      // structs.  malloc [100 x struct],1 -> malloc struct, 100
-      if (const ArrayType *AT = dyn_cast<ArrayType>(getMallocAllocatedType(CI))) {
-        Value* NumElements = ConstantInt::get(Type::getInt32Ty(Context),
-                                              AT->getNumElements());
-        Value* NewMI = CallInst::CreateMalloc(CI, TD->getIntPtrType(Context),
-                                              AllocSTy, NumElements,
-                                              BCI->getName());
-        Value *Cast = new BitCastInst(NewMI, getMallocType(CI), "tmp", CI);
-        BCI->replaceAllUsesWith(Cast);
-        BCI->eraseFromParent();
-        CI->eraseFromParent();
-        BCI = cast<BitCastInst>(NewMI);
-        CI = extractMallocCallFromBitCast(NewMI);
+  // We cannot optimize the malloc if we cannot determine its array size.
+  if (Value *NElems = getMallocArraySize(CI, TD, true)) {
+    if (ConstantInt *NElements = dyn_cast<ConstantInt>(NElems))
+      // Restrict this transformation to only working on small allocations
+      // (2048 bytes currently), as we don't want to introduce a 16M global or
+      // something.
+      if (TD && 
+          NElements->getZExtValue() * TD->getTypeAllocSize(AllocTy) < 2048) {
+        GVI = OptimizeGlobalAddressOfMalloc(GV, CI, AllocTy, NElems, TD);
+        return true;
       }
+  
+    // If the allocation is an array of structures, consider transforming this
+    // into multiple malloc'd arrays, one for each field.  This is basically
+    // SRoA for malloc'd memory.
+
+    // If this is an allocation of a fixed size array of structs, analyze as a
+    // variable size array.  malloc [100 x struct],1 -> malloc struct, 100
+    if (NElems == ConstantInt::get(CI->getOperand(1)->getType(), 1))
+      if (const ArrayType *AT = dyn_cast<ArrayType>(AllocTy))
+        AllocTy = AT->getElementType();
+  
+    if (const StructType *AllocSTy = dyn_cast<StructType>(AllocTy)) {
+      // If the structure has an unreasonable number of fields, leave it
+      // alone.
+      if (AllocSTy->getNumElements() <= 16 && AllocSTy->getNumElements() != 0 &&
+          AllGlobalLoadUsesSimpleEnoughForHeapSRA(GV, CI)) {
+
+        // If this is a fixed size array, transform the Malloc to be an alloc of
+        // structs.  malloc [100 x struct],1 -> malloc struct, 100
+        if (const ArrayType *AT =
+                              dyn_cast<ArrayType>(getMallocAllocatedType(CI))) {
+          const Type *IntPtrTy = TD->getIntPtrType(CI->getContext());
+          unsigned TypeSize = TD->getStructLayout(AllocSTy)->getSizeInBytes();
+          Value *AllocSize = ConstantInt::get(IntPtrTy, TypeSize);
+          Value *NumElements = ConstantInt::get(IntPtrTy, AT->getNumElements());
+          Instruction *Malloc = CallInst::CreateMalloc(CI, IntPtrTy, AllocSTy,
+                                                       AllocSize, NumElements,
+                                                       CI->getName());
+          Instruction *Cast = new BitCastInst(Malloc, CI->getType(), "tmp", CI);
+          CI->replaceAllUsesWith(Cast);
+          CI->eraseFromParent();
+          CI = dyn_cast<BitCastInst>(Malloc) ?
+               extractMallocCallFromBitCast(Malloc) : cast<CallInst>(Malloc);
+        }
       
-      GVI = PerformHeapAllocSRoA(GV, CI, BCI, Context, TD);
-      return true;
+        GVI = PerformHeapAllocSRoA(GV, CI, getMallocArraySize(CI, TD, true),TD);
+        return true;
+      }
     }
   }
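
The fixed-size-array re-analysis above ("malloc [100 x struct],1 -> malloc struct, 100") changes only the bookkeeping, not the storage requested; in C++ terms (invented names, editor's sketch):

    #include <cstdlib>

    struct T { int X; int Y; };
    // One element of type [100 x T] ...
    static void *AsArrayType()   { return std::malloc(sizeof(T[100])); }
    // ... is the same request as 100 elements of type T.
    static void *AsElementType() { return std::malloc(100 * sizeof(T)); }
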
   
@@ -1948,7 +1548,7 @@ static bool TryToOptimizeStoreOfMallocToGlobal(GlobalVariable *GV,
 // that only one value (besides its initializer) is ever stored to the global.
 static bool OptimizeOnceStoredGlobal(GlobalVariable *GV, Value *StoredOnceVal,
                                      Module::global_iterator &GVI,
-                                     TargetData *TD, LLVMContext &Context) {
+                                     TargetData *TD) {
   // Ignore no-op GEPs and bitcasts.
   StoredOnceVal = StoredOnceVal->stripPointerCasts();
 
@@ -1964,21 +1564,13 @@ static bool OptimizeOnceStoredGlobal(GlobalVariable *GV, Value *StoredOnceVal,
          ConstantExpr::getBitCast(SOVC, GV->getInitializer()->getType());
 
       // Optimize away any trapping uses of the loaded value.
-      if (OptimizeAwayTrappingUsesOfLoads(GV, SOVC, Context))
-        return true;
-    } else if (MallocInst *MI = dyn_cast<MallocInst>(StoredOnceVal)) {
-      if (TryToOptimizeStoreOfMallocToGlobal(GV, MI, GVI, TD, Context))
+      if (OptimizeAwayTrappingUsesOfLoads(GV, SOVC))
         return true;
     } else if (CallInst *CI = extractMallocCall(StoredOnceVal)) {
-      if (getMallocAllocatedType(CI)) {
-        BitCastInst* BCI = NULL;
-        for (Value::use_iterator UI = CI->use_begin(), E = CI->use_end();
-             UI != E; )
-          BCI = dyn_cast<BitCastInst>(cast<Instruction>(*UI++));
-        if (BCI &&
-            TryToOptimizeStoreOfMallocToGlobal(GV, CI, BCI, GVI, TD, Context))
-          return true;
-      }
+      const Type* MallocType = getMallocAllocatedType(CI);
+      if (MallocType && TryToOptimizeStoreOfMallocToGlobal(GV, CI, MallocType, 
+                                                           GVI, TD))
+        return true;
     }
   }
 
@@ -1989,8 +1581,7 @@ static bool OptimizeOnceStoredGlobal(GlobalVariable *GV, Value *StoredOnceVal,
 /// two values ever stored into GV are its initializer and OtherVal.  See if we
 /// can shrink the global into a boolean and select between the two values
 /// whenever it is used.  This exposes the values to other scalar optimizations.
-static bool TryToShrinkGlobalToBoolean(GlobalVariable *GV, Constant *OtherVal,
-                                       LLVMContext &Context) {
+static bool TryToShrinkGlobalToBoolean(GlobalVariable *GV, Constant *OtherVal) {
   const Type *GVElType = GV->getType()->getElementType();
   
   // If GVElType is already i1, it is already shrunk.  If the type of the GV is
@@ -1998,7 +1589,8 @@ static bool TryToShrinkGlobalToBoolean(GlobalVariable *GV, Constant *OtherVal,
   // between them is very expensive and unlikely to lead to later
   // simplification.  In these cases, we typically end up with "cond ? v1 : v2"
   // where v1 and v2 both require constant pool loads, a big loss.
-  if (GVElType == Type::getInt1Ty(Context) || GVElType->isFloatingPoint() ||
+  if (GVElType == Type::getInt1Ty(GV->getContext()) ||
+      GVElType->isFloatingPoint() ||
       isa<PointerType>(GVElType) || isa<VectorType>(GVElType))
     return false;
   
@@ -2011,15 +1603,16 @@ static bool TryToShrinkGlobalToBoolean(GlobalVariable *GV, Constant *OtherVal,
   DEBUG(errs() << "   *** SHRINKING TO BOOL: " << *GV);
   
   // Create the new global, initializing it to false.
-  GlobalVariable *NewGV = new GlobalVariable(Context,
-                                             Type::getInt1Ty(Context), false,
-         GlobalValue::InternalLinkage, ConstantInt::getFalse(Context),
+  GlobalVariable *NewGV = new GlobalVariable(Type::getInt1Ty(GV->getContext()),
+                                             false,
+                                             GlobalValue::InternalLinkage, 
+                                        ConstantInt::getFalse(GV->getContext()),
                                              GV->getName()+".b",
                                              GV->isThreadLocal());
   GV->getParent()->getGlobalList().insert(GV, NewGV);
 
   Constant *InitVal = GV->getInitializer();
-  assert(InitVal->getType() != Type::getInt1Ty(Context) &&
+  assert(InitVal->getType() != Type::getInt1Ty(GV->getContext()) &&
          "No reason to shrink to bool!");
 
   // If initialized to zero and storing one into the global, we can use a cast
@@ -2036,7 +1629,8 @@ static bool TryToShrinkGlobalToBoolean(GlobalVariable *GV, Constant *OtherVal,
       // Only do this if we weren't storing a loaded value.
       Value *StoreVal;
       if (StoringOther || SI->getOperand(0) == InitVal)
-        StoreVal = ConstantInt::get(Type::getInt1Ty(Context), StoringOther);
+        StoreVal = ConstantInt::get(Type::getInt1Ty(GV->getContext()),
+                                    StoringOther);
       else {
         // Otherwise, we are storing a previously loaded copy.  To do this,
         // change the copy from copying the original value to just copying the
@@ -2095,24 +1689,26 @@ bool GlobalOpt::ProcessInternalGlobal(GlobalVariable *GV,
 
   if (!AnalyzeGlobal(GV, GS, PHIUsers)) {
 #if 0
-    cerr << "Global: " << *GV;
-    cerr << "  isLoaded = " << GS.isLoaded << "\n";
-    cerr << "  StoredType = ";
+    DEBUG(errs() << "Global: " << *GV);
+    DEBUG(errs() << "  isLoaded = " << GS.isLoaded << "\n");
+    DEBUG(errs() << "  StoredType = ");
     switch (GS.StoredType) {
-    case GlobalStatus::NotStored: cerr << "NEVER STORED\n"; break;
-    case GlobalStatus::isInitializerStored: cerr << "INIT STORED\n"; break;
-    case GlobalStatus::isStoredOnce: cerr << "STORED ONCE\n"; break;
-    case GlobalStatus::isStored: cerr << "stored\n"; break;
+    case GlobalStatus::NotStored: DEBUG(errs() << "NEVER STORED\n"); break;
+    case GlobalStatus::isInitializerStored: DEBUG(errs() << "INIT STORED\n");
+                                            break;
+    case GlobalStatus::isStoredOnce: DEBUG(errs() << "STORED ONCE\n"); break;
+    case GlobalStatus::isStored: DEBUG(errs() << "stored\n"); break;
     }
     if (GS.StoredType == GlobalStatus::isStoredOnce && GS.StoredOnceValue)
-      cerr << "  StoredOnceValue = " << *GS.StoredOnceValue << "\n";
+      DEBUG(errs() << "  StoredOnceValue = " << *GS.StoredOnceValue << "\n");
     if (GS.AccessingFunction && !GS.HasMultipleAccessingFunctions)
-      cerr << "  AccessingFunction = " << GS.AccessingFunction->getName()
-                << "\n";
-    cerr << "  HasMultipleAccessingFunctions =  "
-              << GS.HasMultipleAccessingFunctions << "\n";
-    cerr << "  HasNonInstructionUser = " << GS.HasNonInstructionUser<<"\n";
-    cerr << "\n";
+      DEBUG(errs() << "  AccessingFunction = " << GS.AccessingFunction->getName()
+                  << "\n");
+    DEBUG(errs() << "  HasMultipleAccessingFunctions =  "
+                 << GS.HasMultipleAccessingFunctions << "\n");
+    DEBUG(errs() << "  HasNonInstructionUser = " 
+                 << GS.HasNonInstructionUser<<"\n");
+    DEBUG(errs() << "\n");
 #endif
     
     // If this is a first class global and has only one accessing function
@@ -2151,8 +1747,7 @@ bool GlobalOpt::ProcessInternalGlobal(GlobalVariable *GV,
 
       // Delete any stores we can find to the global.  We may not be able to
       // make it completely dead though.
-      bool Changed = CleanupConstantGlobalUsers(GV, GV->getInitializer(), 
-                                                GV->getContext());
+      bool Changed = CleanupConstantGlobalUsers(GV, GV->getInitializer());
 
       // If the global is dead now, delete it.
       if (GV->use_empty()) {
@@ -2167,7 +1762,7 @@ bool GlobalOpt::ProcessInternalGlobal(GlobalVariable *GV,
       GV->setConstant(true);
 
       // Clean up any obviously simplifiable users now.
-      CleanupConstantGlobalUsers(GV, GV->getInitializer(), GV->getContext());
+      CleanupConstantGlobalUsers(GV, GV->getInitializer());
 
       // If the global is dead now, just nuke it.
       if (GV->use_empty()) {
@@ -2181,8 +1776,7 @@ bool GlobalOpt::ProcessInternalGlobal(GlobalVariable *GV,
       return true;
     } else if (!GV->getInitializer()->getType()->isSingleValueType()) {
       if (TargetData *TD = getAnalysisIfAvailable<TargetData>())
-        if (GlobalVariable *FirstNewGV = SRAGlobal(GV, *TD,
-                                                   GV->getContext())) {
+        if (GlobalVariable *FirstNewGV = SRAGlobal(GV, *TD)) {
           GVI = FirstNewGV;  // Don't skip the newly produced globals!
           return true;
         }
@@ -2197,8 +1791,7 @@ bool GlobalOpt::ProcessInternalGlobal(GlobalVariable *GV,
           GV->setInitializer(SOVConstant);
 
           // Clean up any obviously simplifiable users now.
-          CleanupConstantGlobalUsers(GV, GV->getInitializer(), 
-                                     GV->getContext());
+          CleanupConstantGlobalUsers(GV, GV->getInitializer());
 
           if (GV->use_empty()) {
             DEBUG(errs() << "   *** Substituting initializer allowed us to "
@@ -2215,14 +1808,13 @@ bool GlobalOpt::ProcessInternalGlobal(GlobalVariable *GV,
       // Try to optimize globals based on the knowledge that only one value
       // (besides its initializer) is ever stored to the global.
       if (OptimizeOnceStoredGlobal(GV, GS.StoredOnceValue, GVI,
-                                   getAnalysisIfAvailable<TargetData>(),
-                                   GV->getContext()))
+                                   getAnalysisIfAvailable<TargetData>()))
         return true;
 
       // Otherwise, if the global was not a boolean, we can shrink it to be a
       // boolean.
       if (Constant *SOVConstant = dyn_cast<Constant>(GS.StoredOnceValue))
-        if (TryToShrinkGlobalToBoolean(GV, SOVConstant, GV->getContext())) {
+        if (TryToShrinkGlobalToBoolean(GV, SOVConstant)) {
           ++NumShrunkToBool;
           return true;
         }
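
The boolean-shrinking transformation invoked above can be pictured at the
C++ source level. A minimal sketch of the idea, with invented names (kInit,
kOther, loadG, storeOther are illustrations, not LLVM API):

    #include <cassert>

    // A global known to hold only its initializer (0) or one other value
    // (42) is replaced by a bool: loads become selects, stores become
    // stores of true/false.
    constexpr int kInit = 0, kOther = 42;
    static bool GB = false;                   // the shrunk ".b" global

    static int loadG() { return GB ? kOther : kInit; }   // load -> select
    static void storeOther() { GB = true; }              // store 42 -> true
    static void storeInit() { GB = false; }              // store 0  -> false

    int main() {
      assert(loadG() == kInit);
      storeOther();
      assert(loadG() == kOther);
      storeInit();
      assert(loadG() == kInit);
      return 0;
    }
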
@@ -2269,9 +1861,8 @@ bool GlobalOpt::OptimizeFunctions(Module &M) {
     if (!F->hasName() && !F->isDeclaration())
       F->setLinkage(GlobalValue::InternalLinkage);
     F->removeDeadConstantUsers();
-    if (F->use_empty() && (F->hasLocalLinkage() ||
-                           F->hasLinkOnceLinkage())) {
-      M.getFunctionList().erase(F);
+    if (F->use_empty() && (F->hasLocalLinkage() || F->hasLinkOnceLinkage())) {
+      F->eraseFromParent();
       Changed = true;
       ++NumFnDeleted;
     } else if (F->hasLocalLinkage()) {
@@ -2307,6 +1898,15 @@ bool GlobalOpt::OptimizeGlobalVars(Module &M) {
     // Global variables without names cannot be referenced outside this module.
     if (!GV->hasName() && !GV->isDeclaration())
       GV->setLinkage(GlobalValue::InternalLinkage);
+    // Simplify the initializer.
+    if (GV->hasInitializer())
+      if (ConstantExpr *CE = dyn_cast<ConstantExpr>(GV->getInitializer())) {
+        TargetData *TD = getAnalysisIfAvailable<TargetData>();
+        Constant *New = ConstantFoldConstantExpression(CE, TD);
+        if (New && New != CE)
+          GV->setInitializer(New);
+      }
+    // Do more involved optimizations if the global is internal.
     if (!GV->isConstant() && GV->hasLocalLinkage() &&
         GV->hasInitializer())
       Changed |= ProcessInternalGlobal(GV, GVI);
@@ -2375,11 +1975,10 @@ static std::vector<Function*> ParseGlobalCtors(GlobalVariable *GV) {
 /// InstallGlobalCtors - Given a specified llvm.global_ctors list, install the
 /// specified array, returning the new global to use.
 static GlobalVariable *InstallGlobalCtors(GlobalVariable *GCL, 
-                                          const std::vector<Function*> &Ctors,
-                                          LLVMContext &Context) {
+                                          const std::vector<Function*> &Ctors) {
   // If we made a change, reassemble the initializer list.
   std::vector<Constant*> CSVals;
-  CSVals.push_back(ConstantInt::get(Type::getInt32Ty(Context), 65535));
+  CSVals.push_back(ConstantInt::get(Type::getInt32Ty(GCL->getContext()),65535));
   CSVals.push_back(0);
   
   // Create the new init list.
@@ -2388,12 +1987,14 @@ static GlobalVariable *InstallGlobalCtors(GlobalVariable *GCL,
     if (Ctors[i]) {
       CSVals[1] = Ctors[i];
     } else {
-      const Type *FTy = FunctionType::get(Type::getVoidTy(Context), false);
+      const Type *FTy = FunctionType::get(Type::getVoidTy(GCL->getContext()),
+                                          false);
       const PointerType *PFTy = PointerType::getUnqual(FTy);
       CSVals[1] = Constant::getNullValue(PFTy);
-      CSVals[0] = ConstantInt::get(Type::getInt32Ty(Context), 2147483647);
+      CSVals[0] = ConstantInt::get(Type::getInt32Ty(GCL->getContext()),
+                                   2147483647);
     }
-    CAList.push_back(ConstantStruct::get(Context, CSVals, false));
+    CAList.push_back(ConstantStruct::get(GCL->getContext(), CSVals, false));
   }
   
   // Create the array initializer.
@@ -2409,8 +2010,7 @@ static GlobalVariable *InstallGlobalCtors(GlobalVariable *GCL,
   }
   
   // Create the new global and insert it next to the existing list.
-  GlobalVariable *NGV = new GlobalVariable(Context, CA->getType(), 
-                                           GCL->isConstant(),
+  GlobalVariable *NGV = new GlobalVariable(CA->getType(), GCL->isConstant(),
                                            GCL->getLinkage(), CA, "",
                                            GCL->isThreadLocal());
   GCL->getParent()->getGlobalList().insert(GCL, NGV);
@@ -2444,7 +2044,7 @@ static Constant *getVal(DenseMap<Value*, Constant*> &ComputedValues,
 /// enough for us to understand.  In particular, if it is a cast of something,
 /// we punt.  We basically just support direct accesses to globals and GEP's of
 /// globals.  This should be kept up to date with CommitValueTo.
-static bool isSimpleEnoughPointerToCommit(Constant *C, LLVMContext &Context) {
+static bool isSimpleEnoughPointerToCommit(Constant *C) {
   // Conservatively, avoid aggregate types. This is because we don't
   // want to worry about them partially overlapping other stores.
   if (!cast<PointerType>(C->getType())->getElementType()->isSingleValueType())
@@ -2475,8 +2075,7 @@ static bool isSimpleEnoughPointerToCommit(Constant *C, LLVMContext &Context) {
       if (!CE->isGEPWithNoNotionalOverIndexing())
         return false;
 
-      return ConstantFoldLoadThroughGEPConstantExpr(GV->getInitializer(), CE,
-                                                    Context);
+      return ConstantFoldLoadThroughGEPConstantExpr(GV->getInitializer(), CE);
     }
   return false;
 }
@@ -2485,8 +2084,7 @@ static bool isSimpleEnoughPointerToCommit(Constant *C, LLVMContext &Context) {
 /// initializer.  This returns 'Init' modified to reflect 'Val' stored into it.
 /// At this point, the GEP operands of Addr [0, OpNo) have been stepped into.
 static Constant *EvaluateStoreInto(Constant *Init, Constant *Val,
-                                   ConstantExpr *Addr, unsigned OpNo,
-                                   LLVMContext &Context) {
+                                   ConstantExpr *Addr, unsigned OpNo) {
   // Base case of the recursion.
   if (OpNo == Addr->getNumOperands()) {
     assert(Val->getType() == Init->getType() && "Type mismatch!");
@@ -2515,10 +2113,11 @@ static Constant *EvaluateStoreInto(Constant *Init, Constant *Val,
     ConstantInt *CU = cast<ConstantInt>(Addr->getOperand(OpNo));
     unsigned Idx = CU->getZExtValue();
     assert(Idx < STy->getNumElements() && "Struct index out of range!");
-    Elts[Idx] = EvaluateStoreInto(Elts[Idx], Val, Addr, OpNo+1, Context);
+    Elts[Idx] = EvaluateStoreInto(Elts[Idx], Val, Addr, OpNo+1);
     
     // Return the modified struct.
-    return ConstantStruct::get(Context, &Elts[0], Elts.size(), STy->isPacked());
+    return ConstantStruct::get(Init->getContext(), &Elts[0], Elts.size(),
+                               STy->isPacked());
   } else {
     ConstantInt *CI = cast<ConstantInt>(Addr->getOperand(OpNo));
     const ArrayType *ATy = cast<ArrayType>(Init->getType());
@@ -2541,15 +2140,14 @@ static Constant *EvaluateStoreInto(Constant *Init, Constant *Val,
     
     assert(CI->getZExtValue() < ATy->getNumElements());
     Elts[CI->getZExtValue()] =
-      EvaluateStoreInto(Elts[CI->getZExtValue()], Val, Addr, OpNo+1, Context);
+      EvaluateStoreInto(Elts[CI->getZExtValue()], Val, Addr, OpNo+1);
     return ConstantArray::get(ATy, Elts);
   }    
 }
 
 /// CommitValueTo - We have decided that Addr (which satisfies the predicate
 /// isSimpleEnoughPointerToCommit) should get Val as its value.  Make it happen.
-static void CommitValueTo(Constant *Val, Constant *Addr,
-                          LLVMContext &Context) {
+static void CommitValueTo(Constant *Val, Constant *Addr) {
   if (GlobalVariable *GV = dyn_cast<GlobalVariable>(Addr)) {
     assert(GV->hasInitializer());
     GV->setInitializer(Val);
@@ -2560,7 +2158,7 @@ static void CommitValueTo(Constant *Val, Constant *Addr,
   GlobalVariable *GV = cast<GlobalVariable>(CE->getOperand(0));
   
   Constant *Init = GV->getInitializer();
-  Init = EvaluateStoreInto(Init, Val, CE, 2, Context);
+  Init = EvaluateStoreInto(Init, Val, CE, 2);
   GV->setInitializer(Init);
 }
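
CommitValueTo above delegates GEP-addressed stores to EvaluateStoreInto,
which rebuilds the constant aggregate one level at a time. A toy sketch of
that recursion, assuming an invented Val type in place of LLVM's Constant
hierarchy:

    #include <cassert>
    #include <vector>

    // Toy stand-in for a constant: a scalar plus optional elements.
    struct Val {
      int scalar = 0;
      std::vector<Val> elems;
    };

    // Walk the index path; at each level copy the aggregate with one
    // element replaced (base case and recursive step as in the pass).
    static Val storeInto(Val Init, const Val &V,
                         const std::vector<unsigned> &Path, unsigned OpNo) {
      if (OpNo == Path.size())
        return V;                             // base case: replace outright
      unsigned Idx = Path[OpNo];
      assert(Idx < Init.elems.size() && "index out of range");
      Init.elems[Idx] = storeInto(Init.elems[Idx], V, Path, OpNo + 1);
      return Init;                            // hand back the rebuilt level
    }

    int main() {
      Val Init;                               // a 2x2 nested aggregate
      Init.elems = { Val{0, {Val{}, Val{}}}, Val{0, {Val{}, Val{}}} };
      Val New = storeInto(Init, Val{7, {}}, {1, 0}, 0);
      assert(New.elems[1].elems[0].scalar == 7);  // only [1][0] changed
      assert(New.elems[0].elems[0].scalar == 0);
      return 0;
    }
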
 
@@ -2568,8 +2166,7 @@ static void CommitValueTo(Constant *Val, Constant *Addr,
 /// P after the stores reflected by 'memory' have been performed.  If we can't
 /// decide, return null.
 static Constant *ComputeLoadResult(Constant *P,
-                                const DenseMap<Constant*, Constant*> &Memory,
-                                LLVMContext &Context) {
+                                const DenseMap<Constant*, Constant*> &Memory) {
   // If this memory location has been recently stored, use the stored value: it
   // is the most up-to-date.
   DenseMap<Constant*, Constant*>::const_iterator I = Memory.find(P);
@@ -2588,8 +2185,7 @@ static Constant *ComputeLoadResult(Constant *P,
         isa<GlobalVariable>(CE->getOperand(0))) {
       GlobalVariable *GV = cast<GlobalVariable>(CE->getOperand(0));
       if (GV->hasDefinitiveInitializer())
-        return ConstantFoldLoadThroughGEPConstantExpr(GV->getInitializer(), CE,
-                                                      Context);
+        return ConstantFoldLoadThroughGEPConstantExpr(GV->getInitializer(), CE);
     }
 
   return 0;  // don't know how to evaluate.
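
ComputeLoadResult treats the pending stores as an overlay on top of the
globals' initializers. A minimal standard-library sketch of the same lookup
order (Addr and both maps are invented stand-ins for the Constant* keys):

    #include <map>
    #include <optional>
    #include <string>

    using Addr = std::string;                  // stand-in for Constant*
    static std::map<Addr, int> MutatedMemory;  // stores performed so far
    static std::map<Addr, int> Initializers;   // definitive initializers

    static std::optional<int> computeLoadResult(const Addr &P) {
      auto I = MutatedMemory.find(P);
      if (I != MutatedMemory.end())
        return I->second;                      // most recent store wins
      auto J = Initializers.find(P);
      if (J != Initializers.end())
        return J->second;                      // fall back to initializer
      return std::nullopt;                     // don't know how to evaluate
    }

    int main() {
      Initializers["g"] = 1;
      MutatedMemory["g"] = 2;
      return computeLoadResult("g") == 2 ? 0 : 1;  // overlay shadows init
    }
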
@@ -2608,8 +2204,6 @@ static bool EvaluateFunction(Function *F, Constant *&RetVal,
   if (std::find(CallStack.begin(), CallStack.end(), F) != CallStack.end())
     return false;
   
-  LLVMContext &Context = F->getContext();
-  
   CallStack.push_back(F);
   
   /// Values - As we compute SSA register values, we store their contents here.
@@ -2636,7 +2230,7 @@ static bool EvaluateFunction(Function *F, Constant *&RetVal,
     if (StoreInst *SI = dyn_cast<StoreInst>(CurInst)) {
       if (SI->isVolatile()) return false;  // no volatile accesses.
       Constant *Ptr = getVal(Values, SI->getOperand(1));
-      if (!isSimpleEnoughPointerToCommit(Ptr, Context))
+      if (!isSimpleEnoughPointerToCommit(Ptr))
         // If this is too complex for us to commit, reject it.
         return false;
       Constant *Val = getVal(Values, SI->getOperand(0));
@@ -2670,12 +2264,12 @@ static bool EvaluateFunction(Function *F, Constant *&RetVal,
     } else if (LoadInst *LI = dyn_cast<LoadInst>(CurInst)) {
       if (LI->isVolatile()) return false;  // no volatile accesses.
       InstResult = ComputeLoadResult(getVal(Values, LI->getOperand(0)),
-                                     MutatedMemory, Context);
+                                     MutatedMemory);
       if (InstResult == 0) return false; // Could not evaluate load.
     } else if (AllocaInst *AI = dyn_cast<AllocaInst>(CurInst)) {
       if (AI->isArrayAllocation()) return false;  // Cannot handle array allocs.
       const Type *Ty = AI->getType()->getElementType();
-      AllocaTmps.push_back(new GlobalVariable(Context, Ty, false,
+      AllocaTmps.push_back(new GlobalVariable(Ty, false,
                                               GlobalValue::InternalLinkage,
                                               UndefValue::get(Ty),
                                               AI->getName()));
@@ -2736,6 +2330,12 @@ static bool EvaluateFunction(Function *F, Constant *&RetVal,
           dyn_cast<ConstantInt>(getVal(Values, SI->getCondition()));
         if (!Val) return false;  // Cannot determine.
         NewBB = SI->getSuccessor(SI->findCaseValue(Val));
+      } else if (IndirectBrInst *IBI = dyn_cast<IndirectBrInst>(CurInst)) {
+        Value *Val = getVal(Values, IBI->getAddress())->stripPointerCasts();
+        if (BlockAddress *BA = dyn_cast<BlockAddress>(Val))
+          NewBB = BA->getBasicBlock();
+        else
+          return false;  // Cannot determine.
       } else if (ReturnInst *RI = dyn_cast<ReturnInst>(CurInst)) {
         if (RI->getNumOperands())
           RetVal = getVal(Values, RI->getOperand(0));
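
The new IndirectBrInst case above only lets evaluation continue when the
branch address, once pointer casts are stripped, is a concrete BlockAddress.
A toy model of that resolution (Block and Addr are invented types):

    #include <optional>

    struct Block {};                      // stand-in for BasicBlock

    // An address is either a known block address or a cast of another one.
    struct Addr {
      const Block *Known = nullptr;       // non-null: like a BlockAddress
      const Addr *Cast = nullptr;         // non-null: like a pointer cast
    };

    static const Addr *stripCasts(const Addr *A) {
      while (A->Cast) A = A->Cast;        // mirrors stripPointerCasts()
      return A;
    }

    static std::optional<const Block *> resolveIndirectBr(const Addr *A) {
      A = stripCasts(A);
      if (A->Known)
        return A->Known;                  // unique successor found
      return std::nullopt;                // cannot determine; give up
    }

    int main() {
      Block BB;
      Addr Direct{&BB, nullptr};
      Addr Casted{nullptr, &Direct};
      return resolveIndirectBr(&Casted) == &BB ? 0 : 1;
    }
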
@@ -2807,7 +2407,7 @@ static bool EvaluateStaticConstructor(Function *F) {
           << " stores.\n");
     for (DenseMap<Constant*, Constant*>::iterator I = MutatedMemory.begin(),
          E = MutatedMemory.end(); I != E; ++I)
-      CommitValueTo(I->second, I->first, F->getContext());
+      CommitValueTo(I->second, I->first);
   }
   
   // At this point, we are done interpreting.  If we created any 'alloca'
@@ -2864,7 +2464,7 @@ bool GlobalOpt::OptimizeGlobalCtorsList(GlobalVariable *&GCL) {
   
   if (!MadeChange) return false;
   
-  GCL = InstallGlobalCtors(GCL, Ctors, GCL->getContext());
+  GCL = InstallGlobalCtors(GCL, Ctors);
   return true;
 }
 
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/IPConstantPropagation.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/IPConstantPropagation.cpp
index 7b0e9c7..df2456f 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/IPConstantPropagation.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/IPConstantPropagation.cpp
@@ -19,12 +19,10 @@
 #include "llvm/Transforms/IPO.h"
 #include "llvm/Constants.h"
 #include "llvm/Instructions.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/Module.h"
 #include "llvm/Pass.h"
 #include "llvm/Analysis/ValueTracking.h"
 #include "llvm/Support/CallSite.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/SmallVector.h"
 using namespace llvm;
@@ -35,7 +33,7 @@ STATISTIC(NumReturnValProped, "Number of return values turned into constants");
 namespace {
   /// IPCP - The interprocedural constant propagation pass
   ///
-  struct VISIBILITY_HIDDEN IPCP : public ModulePass {
+  struct IPCP : public ModulePass {
     static char ID; // Pass identification, replacement for typeid
     IPCP() : ModulePass(&ID) {}
 
@@ -87,6 +85,9 @@ bool IPCP::PropagateConstantsIntoArguments(Function &F) {
 
   unsigned NumNonconstant = 0;
   for (Value::use_iterator UI = F.use_begin(), E = F.use_end(); UI != E; ++UI) {
+    // Ignore blockaddress uses.
+    if (isa<BlockAddress>(*UI)) continue;
+    
     // If used by a non-instruction, or not as the callee of a call, do not
     // transform.
     if (!isa<CallInst>(*UI) && !isa<InvokeInst>(*UI))
@@ -153,7 +154,7 @@ bool IPCP::PropagateConstantsIntoArguments(Function &F) {
 // callers will be updated to use the value they pass in directly instead of
 // using the return value.
 bool IPCP::PropagateConstantReturn(Function &F) {
-  if (F.getReturnType() == Type::getVoidTy(F.getContext()))
+  if (F.getReturnType()->isVoidTy())
     return false; // No return value.
 
   // If this function could be overridden later in the link stage, we can't
@@ -161,8 +162,6 @@ bool IPCP::PropagateConstantReturn(Function &F) {
   if (F.mayBeOverridden())
     return false;
     
-  LLVMContext &Context = F.getContext();
-  
   // Check to see if this function returns a constant.
   SmallVector<Value *,4> RetVals;
   const StructType *STy = dyn_cast<StructType>(F.getReturnType());
@@ -186,7 +185,7 @@ bool IPCP::PropagateConstantReturn(Function &F) {
         if (!STy)
           V = RI->getOperand(i);
         else
-          V = FindInsertedValue(RI->getOperand(0), i, Context);
+          V = FindInsertedValue(RI->getOperand(0), i);
 
         if (V) {
           // Ignore undefs, we can change them into anything
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/IPO.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/IPO.cpp
index 4306607..83e8624 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/IPO.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/IPO.cpp
@@ -63,7 +63,7 @@ void LLVMAddPruneEHPass(LLVMPassManagerRef PM) {
 }
 
 void LLVMAddRaiseAllocationsPass(LLVMPassManagerRef PM) {
-  unwrap(PM)->add(createRaiseAllocationsPass());
+  // FIXME: Remove in LLVM 3.0.
 }
 
 void LLVMAddStripDeadPrototypesPass(LLVMPassManagerRef PM) {
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/IndMemRemoval.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/IndMemRemoval.cpp
deleted file mode 100644
index e7884ec..0000000
--- a/libclamav/c++/llvm/lib/Transforms/IPO/IndMemRemoval.cpp
+++ /dev/null
@@ -1,90 +0,0 @@
-//===-- IndMemRemoval.cpp - Remove indirect allocations and frees ---------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This pass finds places where memory allocation functions may escape into
-// indirect land.  Some transforms are much easier (or only possible) if free
-// or malloc are not called indirectly.
-// Thus we find places where the address of a memory function is taken and
-// construct bounce functions that call those functions directly.
-//
-//===----------------------------------------------------------------------===//
-
-#define DEBUG_TYPE "indmemrem"
-#include "llvm/Transforms/IPO.h"
-#include "llvm/Pass.h"
-#include "llvm/Module.h"
-#include "llvm/Instructions.h"
-#include "llvm/Type.h"
-#include "llvm/DerivedTypes.h"
-#include "llvm/ADT/Statistic.h"
-#include "llvm/Support/Compiler.h"
-using namespace llvm;
-
-STATISTIC(NumBounceSites, "Number of sites modified");
-STATISTIC(NumBounce     , "Number of bounce functions created");
-
-namespace {
-  class VISIBILITY_HIDDEN IndMemRemPass : public ModulePass {
-  public:
-    static char ID; // Pass identification, replacement for typeid
-    IndMemRemPass() : ModulePass(&ID) {}
-
-    virtual bool runOnModule(Module &M);
-  };
-} // end anonymous namespace
-
-char IndMemRemPass::ID = 0;
-static RegisterPass<IndMemRemPass>
-X("indmemrem","Indirect Malloc and Free Removal");
-
-bool IndMemRemPass::runOnModule(Module &M) {
-  // In theory, all direct calls of malloc and free should be promoted
-  // to intrinsics.  Therefore, this goes through and finds where the
-  // address of free or malloc are taken and replaces those with bounce
-  // functions, ensuring that all malloc and free that might happen
-  // happen through intrinsics.
-  bool changed = false;
-  if (Function* F = M.getFunction("free")) {
-    if (F->isDeclaration() && F->arg_size() == 1 && !F->use_empty()) {
-      Function* FN = Function::Create(F->getFunctionType(),
-                                      GlobalValue::LinkOnceAnyLinkage,
-                                      "free_llvm_bounce", &M);
-      BasicBlock* bb = BasicBlock::Create(M.getContext(), "entry",FN);
-      Instruction* R = ReturnInst::Create(M.getContext(), bb);
-      new FreeInst(FN->arg_begin(), R);
-      ++NumBounce;
-      NumBounceSites += F->getNumUses();
-      F->replaceAllUsesWith(FN);
-      changed = true;
-    }
-  }
-  if (Function* F = M.getFunction("malloc")) {
-    if (F->isDeclaration() && F->arg_size() == 1 && !F->use_empty()) {
-      Function* FN = Function::Create(F->getFunctionType(), 
-                                      GlobalValue::LinkOnceAnyLinkage,
-                                      "malloc_llvm_bounce", &M);
-      FN->setDoesNotAlias(0);
-      BasicBlock* bb = BasicBlock::Create(M.getContext(), "entry",FN);
-      Instruction* c = CastInst::CreateIntegerCast(
-          FN->arg_begin(), Type::getInt32Ty(M.getContext()), false, "c", bb);
-      Instruction* a = new MallocInst(Type::getInt8Ty(M.getContext()),
-                                      c, "m", bb);
-      ReturnInst::Create(M.getContext(), a, bb);
-      ++NumBounce;
-      NumBounceSites += F->getNumUses();
-      F->replaceAllUsesWith(FN);
-      changed = true;
-    }
-  }
-  return changed;
-}
-
-ModulePass *llvm::createIndMemRemPass() {
-  return new IndMemRemPass();
-}
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/InlineAlways.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/InlineAlways.cpp
index 5f9ea54..f11ecae 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/InlineAlways.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/InlineAlways.cpp
@@ -19,11 +19,10 @@
 #include "llvm/Module.h"
 #include "llvm/Type.h"
 #include "llvm/Analysis/CallGraph.h"
+#include "llvm/Analysis/InlineCost.h"
 #include "llvm/Support/CallSite.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Transforms/IPO.h"
 #include "llvm/Transforms/IPO/InlinerPass.h"
-#include "llvm/Transforms/Utils/InlineCost.h"
 #include "llvm/ADT/SmallPtrSet.h"
 
 using namespace llvm;
@@ -31,7 +30,7 @@ using namespace llvm;
 namespace {
 
   // AlwaysInliner only inlines functions that are marked as "always inline".
-  class VISIBILITY_HIDDEN AlwaysInliner : public Inliner {
+  class AlwaysInliner : public Inliner {
     // Functions that are never inlined
     SmallPtrSet<const Function*, 16> NeverInline; 
     InlineCostAnalyzer CA;
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/InlineSimple.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/InlineSimple.cpp
index b7765dc..598043d 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/InlineSimple.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/InlineSimple.cpp
@@ -18,18 +18,17 @@
 #include "llvm/Module.h"
 #include "llvm/Type.h"
 #include "llvm/Analysis/CallGraph.h"
+#include "llvm/Analysis/InlineCost.h"
 #include "llvm/Support/CallSite.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Transforms/IPO.h"
 #include "llvm/Transforms/IPO/InlinerPass.h"
-#include "llvm/Transforms/Utils/InlineCost.h"
 #include "llvm/ADT/SmallPtrSet.h"
 
 using namespace llvm;
 
 namespace {
 
-  class VISIBILITY_HIDDEN SimpleInliner : public Inliner {
+  class SimpleInliner : public Inliner {
     // Functions that are never inlined
     SmallPtrSet<const Function*, 16> NeverInline; 
     InlineCostAnalyzer CA;
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/Inliner.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/Inliner.cpp
index 6177265..6918fe8 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/Inliner.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/Inliner.cpp
@@ -18,11 +18,12 @@
 #include "llvm/Instructions.h"
 #include "llvm/IntrinsicInst.h"
 #include "llvm/Analysis/CallGraph.h"
-#include "llvm/Support/CallSite.h"
+#include "llvm/Analysis/InlineCost.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Transforms/IPO/InlinerPass.h"
-#include "llvm/Transforms/Utils/InlineCost.h"
 #include "llvm/Transforms/Utils/Cloning.h"
+#include "llvm/Transforms/Utils/Local.h"
+#include "llvm/Support/CallSite.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
@@ -32,6 +33,7 @@
 using namespace llvm;
 
 STATISTIC(NumInlined, "Number of functions inlined");
+STATISTIC(NumCallsDeleted, "Number of call sites deleted, not inlined");
 STATISTIC(NumDeleted, "Number of functions deleted because all callers found");
 STATISTIC(NumMergedAllocas, "Number of allocas merged together");
 
@@ -189,9 +191,10 @@ bool Inliner::shouldInline(CallSite CS) {
   
   int Cost = IC.getValue();
   int CurrentThreshold = InlineThreshold;
-  Function *Fn = CS.getCaller();
-  if (Fn && !Fn->isDeclaration() &&
-      Fn->hasFnAttr(Attribute::OptimizeForSize) &&
+  Function *Caller = CS.getCaller();
+  if (Caller && !Caller->isDeclaration() &&
+      Caller->hasFnAttr(Attribute::OptimizeForSize) &&
+      InlineLimit.getNumOccurrences() == 0 &&
       InlineThreshold != 50)
     CurrentThreshold = 50;
   
@@ -202,8 +205,73 @@ bool Inliner::shouldInline(CallSite CS) {
     return false;
   }
   
+  // Try to detect the case where the current inlining candidate caller
+  // (call it B) is a static function and is an inlining candidate elsewhere,
+  // and the current candidate callee (call it C) is large enough that
+  // inlining it into B would make B too big to inline later.  In these
+  // circumstances it may be best not to inline C into B, but to inline B
+  // into its callers.
+  if (Caller->hasLocalLinkage()) {
+    int TotalSecondaryCost = 0;
+    bool outerCallsFound = false;
+    bool allOuterCallsWillBeInlined = true;
+    bool someOuterCallWouldNotBeInlined = false;
+    for (Value::use_iterator I = Caller->use_begin(), E =Caller->use_end(); 
+         I != E; ++I) {
+      CallSite CS2 = CallSite::get(*I);
+
+      // If this isn't a call to Caller (it could be some other sort
+      // of reference) skip it.
+      if (CS2.getInstruction() == 0 || CS2.getCalledFunction() != Caller)
+        continue;
+
+      InlineCost IC2 = getInlineCost(CS2);
+      if (IC2.isNever())
+        allOuterCallsWillBeInlined = false;
+      if (IC2.isAlways() || IC2.isNever())
+        continue;
+
+      outerCallsFound = true;
+      int Cost2 = IC2.getValue();
+      int CurrentThreshold2 = InlineThreshold;
+      Function *Caller2 = CS2.getCaller();
+      if (Caller2 && !Caller2->isDeclaration() &&
+          Caller2->hasFnAttr(Attribute::OptimizeForSize) &&
+          InlineThreshold != 50)
+        CurrentThreshold2 = 50;
+
+      float FudgeFactor2 = getInlineFudgeFactor(CS2);
+
+      if (Cost2 >= (int)(CurrentThreshold2 * FudgeFactor2))
+        allOuterCallsWillBeInlined = false;
+
+      // See if we have this case.  We subtract off the penalty
+      // for the call instruction, which we would be deleting.
+      if (Cost2 < (int)(CurrentThreshold2 * FudgeFactor2) &&
+          Cost2 + Cost - (InlineConstants::CallPenalty + 1) >= 
+                (int)(CurrentThreshold2 * FudgeFactor2)) {
+        someOuterCallWouldNotBeInlined = true;
+        TotalSecondaryCost += Cost2;
+      }
+    }
+    // If all outer calls to Caller would get inlined, the cost for the last
+    // one is set very low by getInlineCost, in anticipation that Caller will
+    // be removed entirely.  We did not account for this above unless there
+    // is only one caller of Caller.
+    if (allOuterCallsWillBeInlined && Caller->use_begin() != Caller->use_end())
+      TotalSecondaryCost += InlineConstants::LastCallToStaticBonus;
+
+    if (outerCallsFound && someOuterCallWouldNotBeInlined && 
+        TotalSecondaryCost < Cost) {
+      DEBUG(errs() << "    NOT Inlining: " << *CS.getInstruction() << 
+           " Cost = " << Cost << 
+           ", outer Cost = " << TotalSecondaryCost << '\n');
+      return false;
+    }
+  }
+
   DEBUG(errs() << "    Inlining: cost=" << Cost
-        << ", Call: " << *CS.getInstruction() << "\n");
+        << ", Call: " << *CS.getInstruction() << '\n');
   return true;
 }
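
The caller-deferral heuristic added above condenses to a small standalone
sketch; Site, shouldDeferInlining, and the sample numbers are invented, and
the thresholds are simplified relative to the real cost model:

    #include <vector>

    struct Site {
      int Cost;      // inline cost of calling the caller from this site
      float Fudge;   // per-site fudge factor applied to the threshold
    };

    // True when inlining a callee of cost CalleeCost into a static caller
    // would stop profitable inlining of the caller at its own call sites.
    static bool shouldDeferInlining(int CalleeCost, int Threshold,
                                    int CallPenalty,
                                    const std::vector<Site> &OuterCalls) {
      int TotalSecondaryCost = 0;
      bool SomeOuterCallWouldNotBeInlined = false;
      for (const Site &S : OuterCalls) {
        int Limit = static_cast<int>(Threshold * S.Fudge);
        // The outer call inlines today but would stop inlining once the
        // callee's size (minus the deleted call's penalty) folds in.
        if (S.Cost < Limit &&
            S.Cost + CalleeCost - (CallPenalty + 1) >= Limit) {
          SomeOuterCallWouldNotBeInlined = true;
          TotalSecondaryCost += S.Cost;
        }
      }
      return SomeOuterCallWouldNotBeInlined && TotalSecondaryCost < CalleeCost;
    }

    int main() {
      // A 300-cost callee pushes an outer call (cost 180, threshold 200)
      // over its limit, and the forgone outer inline is the cheaper loss.
      std::vector<Site> Outer = { {180, 1.0f} };
      return shouldDeferInlining(300, 200, 25, Outer) ? 0 : 1;
    }
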
 
@@ -231,7 +299,7 @@ bool Inliner::runOnSCC(std::vector<CallGraphNode*> &SCC) {
     for (Function::iterator BB = F->begin(), E = F->end(); BB != E; ++BB)
       for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ++I) {
         CallSite CS = CallSite::get(I);
-        // If this this isn't a call, or it is a call to an intrinsic, it can
+        // If this isn't a call, or it is a call to an intrinsic, it can
         // never be inlined.
         if (CS.getInstruction() == 0 || isa<IntrinsicInst>(I))
           continue;
@@ -270,23 +338,38 @@ bool Inliner::runOnSCC(std::vector<CallGraphNode*> &SCC) {
     for (unsigned CSi = 0; CSi != CallSites.size(); ++CSi) {
       CallSite CS = CallSites[CSi];
       
+      Function *Caller = CS.getCaller();
       Function *Callee = CS.getCalledFunction();
-      // We can only inline direct calls to non-declarations.
-      if (Callee == 0 || Callee->isDeclaration()) continue;
-      
-      // If the policy determines that we should inline this function,
-      // try to do so.
-      if (!shouldInline(CS))
-        continue;
+
+      // If this call site is dead and it is to a readonly function, we should
+      // just delete the call instead of trying to inline it, regardless of
+      // size.  This happens because IPSCCP propagates the result out of the
+      // call and then we're left with the dead call.
+      if (isInstructionTriviallyDead(CS.getInstruction())) {
+        DEBUG(errs() << "    -> Deleting dead call: "
+                     << *CS.getInstruction() << "\n");
+        // Update the call graph by deleting the edge from Callee to Caller.
+        CG[Caller]->removeCallEdgeFor(CS);
+        CS.getInstruction()->eraseFromParent();
+        ++NumCallsDeleted;
+      } else {
+        // We can only inline direct calls to non-declarations.
+        if (Callee == 0 || Callee->isDeclaration()) continue;
       
-      Function *Caller = CS.getCaller();
-      // Attempt to inline the function...
-      if (!InlineCallIfPossible(CS, CG, TD, InlinedArrayAllocas))
-        continue;
+        // If the policy determines that we should inline this function,
+        // try to do so.
+        if (!shouldInline(CS))
+          continue;
+
+        // Attempt to inline the function...
+        if (!InlineCallIfPossible(CS, CG, TD, InlinedArrayAllocas))
+          continue;
+        ++NumInlined;
+      }
       
-      // If we inlined the last possible call site to the function, delete the
-      // function body now.
-      if (Callee->use_empty() && Callee->hasLocalLinkage() &&
+      // If we inlined or deleted the last possible call site to the function,
+      // delete the function body now.
+      if (Callee && Callee->use_empty() && Callee->hasLocalLinkage() &&
           // TODO: Can remove if in SCC now.
           !SCCFunctions.count(Callee) &&
           
@@ -325,7 +408,6 @@ bool Inliner::runOnSCC(std::vector<CallGraphNode*> &SCC) {
       }
       --CSi;
 
-      ++NumInlined;
       Changed = true;
       LocalChange = true;
     }
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/Internalize.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/Internalize.cpp
index e3c3c67..20ae0d5 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/Internalize.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/Internalize.cpp
@@ -19,7 +19,6 @@
 #include "llvm/Pass.h"
 #include "llvm/Module.h"
 #include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/ADT/Statistic.h"
@@ -44,7 +43,7 @@ APIList("internalize-public-api-list", cl::value_desc("list"),
         cl::CommaSeparated);
 
 namespace {
-  class VISIBILITY_HIDDEN InternalizePass : public ModulePass {
+  class InternalizePass : public ModulePass {
     std::set<std::string> ExternalNames;
     /// If no API symbols were specified and a main function is defined,
     /// assume the main function is the only API.
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/LoopExtractor.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/LoopExtractor.cpp
index 02ac3bb..cb81330 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/LoopExtractor.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/LoopExtractor.cpp
@@ -22,7 +22,6 @@
 #include "llvm/Analysis/Dominators.h"
 #include "llvm/Analysis/LoopPass.h"
 #include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Transforms/Scalar.h"
 #include "llvm/Transforms/Utils/FunctionUtils.h"
 #include "llvm/ADT/Statistic.h"
@@ -33,7 +32,7 @@ using namespace llvm;
 STATISTIC(NumExtracted, "Number of loops extracted");
 
 namespace {
-  struct VISIBILITY_HIDDEN LoopExtractor : public LoopPass {
+  struct LoopExtractor : public LoopPass {
     static char ID; // Pass identification, replacement for typeid
     unsigned NumLoops;
 
@@ -76,6 +75,10 @@ bool LoopExtractor::runOnLoop(Loop *L, LPPassManager &LPM) {
   if (L->getParentLoop())
     return false;
 
+  // If LoopSimplify form is not available, stay out of trouble.
+  if (!L->isLoopSimplifyForm())
+    return false;
+
   DominatorTree &DT = getAnalysis<DominatorTree>();
   bool Changed = false;
 
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/LowerSetJmp.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/LowerSetJmp.cpp
index 5dff47a..4d61e83 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/LowerSetJmp.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/LowerSetJmp.cpp
@@ -43,14 +43,10 @@
 #include "llvm/Module.h"
 #include "llvm/Pass.h"
 #include "llvm/Support/CFG.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/InstVisitor.h"
 #include "llvm/Transforms/Utils/Local.h"
 #include "llvm/ADT/DepthFirstIterator.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/ADT/StringExtras.h"
-#include "llvm/ADT/VectorExtras.h"
-#include "llvm/ADT/SmallVector.h"
 #include <map>
 using namespace llvm;
 
@@ -62,8 +58,7 @@ STATISTIC(InvokesTransformed , "Number of invokes modified");
 namespace {
   //===--------------------------------------------------------------------===//
   // LowerSetJmp pass implementation.
-  class VISIBILITY_HIDDEN LowerSetJmp : public ModulePass,
-                      public InstVisitor<LowerSetJmp> {
+  class LowerSetJmp : public ModulePass, public InstVisitor<LowerSetJmp> {
     // LLVM library functions...
     Constant *InitSJMap;        // __llvm_sjljeh_init_setjmpmap
     Constant *DestroySJMap;     // __llvm_sjljeh_destroy_setjmpmap
@@ -110,7 +105,7 @@ namespace {
     void TransformLongJmpCall(CallInst* Inst);
     void TransformSetJmpCall(CallInst* Inst);
 
-    bool IsTransformableFunction(const std::string& Name);
+    bool IsTransformableFunction(StringRef Name);
   public:
     static char ID; // Pass identification, replacement for typeid
     LowerSetJmp() : ModulePass(&ID) {}
@@ -201,7 +196,7 @@ bool LowerSetJmp::runOnModule(Module& M) {
 // This function is always successful, unless it isn't.
 bool LowerSetJmp::doInitialization(Module& M)
 {
-  const Type *SBPTy = PointerType::getUnqual(Type::getInt8Ty(M.getContext()));
+  const Type *SBPTy = Type::getInt8PtrTy(M.getContext());
   const Type *SBPPTy = PointerType::getUnqual(SBPTy);
 
   // N.B. See llvm/runtime/GCCLibraries/libexception/SJLJ-Exception.h for
@@ -251,13 +246,8 @@ bool LowerSetJmp::doInitialization(Module& M)
 // "llvm.{setjmp,longjmp}" functions and none of the setjmp/longjmp error
 // handling functions (beginning with __llvm_sjljeh_...they don't throw
 // exceptions).
-bool LowerSetJmp::IsTransformableFunction(const std::string& Name) {
-  std::string SJLJEh("__llvm_sjljeh");
-
-  if (Name.size() > SJLJEh.size())
-    return std::string(Name.begin(), Name.begin() + SJLJEh.size()) != SJLJEh;
-
-  return true;
+bool LowerSetJmp::IsTransformableFunction(StringRef Name) {
+  return !Name.startswith("__llvm_sjljeh_");
 }
 
 // TransformLongJmpCall - Transform a longjmp call into a call to the
@@ -265,8 +255,7 @@ bool LowerSetJmp::IsTransformableFunction(const std::string& Name) {
 // throwing the exception for us.
 void LowerSetJmp::TransformLongJmpCall(CallInst* Inst)
 {
-  const Type* SBPTy =
-        PointerType::getUnqual(Type::getInt8Ty(Inst->getContext()));
+  const Type* SBPTy = Type::getInt8PtrTy(Inst->getContext());
 
   // Create the call to "__llvm_sjljeh_throw_longjmp". This takes the
   // same parameters as "longjmp", except that the buffer is cast to a
@@ -274,10 +263,8 @@ void LowerSetJmp::TransformLongJmpCall(CallInst* Inst)
   // Inst's uses and doesn't get a name.
   CastInst* CI = 
     new BitCastInst(Inst->getOperand(1), SBPTy, "LJBuf", Inst);
-  SmallVector<Value *, 2> Args;
-  Args.push_back(CI);
-  Args.push_back(Inst->getOperand(2));
-  CallInst::Create(ThrowLongJmp, Args.begin(), Args.end(), "", Inst);
+  Value *Args[] = { CI, Inst->getOperand(2) };
+  CallInst::Create(ThrowLongJmp, Args, Args + 2, "", Inst);
 
   SwitchValuePair& SVP = SwitchValMap[Inst->getParent()->getParent()];
 
@@ -319,7 +306,7 @@ AllocaInst* LowerSetJmp::GetSetJmpMap(Function* Func)
 
   // Fill in the alloca and call to initialize the SJ map.
   const Type *SBPTy =
-        PointerType::getUnqual(Type::getInt8Ty(Func->getContext()));
+        Type::getInt8PtrTy(Func->getContext());
   AllocaInst* Map = new AllocaInst(SBPTy, 0, "SJMap", Inst);
   CallInst::Create(InitSJMap, Map, "", Inst);
   return SJMap[Func] = Map;
@@ -389,14 +376,14 @@ void LowerSetJmp::TransformSetJmpCall(CallInst* Inst)
 
   // Add this setjmp to the setjmp map.
   const Type* SBPTy =
-          PointerType::getUnqual(Type::getInt8Ty(Inst->getContext()));
+          Type::getInt8PtrTy(Inst->getContext());
   CastInst* BufPtr = 
     new BitCastInst(Inst->getOperand(1), SBPTy, "SBJmpBuf", Inst);
-  std::vector<Value*> Args = 
-    make_vector<Value*>(GetSetJmpMap(Func), BufPtr,
-                        ConstantInt::get(Type::getInt32Ty(Inst->getContext()),
-                                         SetJmpIDMap[Func]++), 0);
-  CallInst::Create(AddSJToMap, Args.begin(), Args.end(), "", Inst);
+  Value *Args[] = {
+    GetSetJmpMap(Func), BufPtr,
+    ConstantInt::get(Type::getInt32Ty(Inst->getContext()), SetJmpIDMap[Func]++)
+  };
+  CallInst::Create(AddSJToMap, Args, Args + 3, "", Inst);
 
   // We are guaranteed that there are no values live across basic blocks
   // (because we are "not in SSA form" yet), but there can still be values live
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/MergeFunctions.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/MergeFunctions.cpp
index 13bbf9c..b2bdabc 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/MergeFunctions.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/MergeFunctions.cpp
@@ -51,7 +51,6 @@
 #include "llvm/Module.h"
 #include "llvm/Pass.h"
 #include "llvm/Support/CallSite.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
@@ -62,7 +61,7 @@ using namespace llvm;
 STATISTIC(NumFunctionsMerged, "Number of functions merged");
 
 namespace {
-  struct VISIBILITY_HIDDEN MergeFunctions : public ModulePass {
+  struct MergeFunctions : public ModulePass {
     static char ID; // Pass identification, replacement for typeid
     MergeFunctions() : ModulePass(&ID) {}
 
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/PartialInlining.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/PartialInlining.cpp
index 8f858d3..b955b97 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/PartialInlining.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/PartialInlining.cpp
@@ -21,14 +21,13 @@
 #include "llvm/Transforms/Utils/Cloning.h"
 #include "llvm/Transforms/Utils/FunctionUtils.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/CFG.h"
 using namespace llvm;
 
 STATISTIC(NumPartialInlined, "Number of functions partially inlined");
 
 namespace {
-  struct VISIBILITY_HIDDEN PartialInliner : public ModulePass {
+  struct PartialInliner : public ModulePass {
     virtual void getAnalysisUsage(AnalysisUsage &AU) const { }
     static char ID; // Pass identification, replacement for typeid
     PartialInliner() : ModulePass(&ID) {}
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/PartialSpecialization.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/PartialSpecialization.cpp
index 0e1fdb9..084b94e 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/PartialSpecialization.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/PartialSpecialization.cpp
@@ -27,7 +27,6 @@
 #include "llvm/ADT/Statistic.h"
 #include "llvm/Transforms/Utils/Cloning.h"
 #include "llvm/Support/CallSite.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/ADT/DenseSet.h"
 #include <map>
 using namespace llvm;
@@ -41,7 +40,7 @@ static const int CallsMin = 5;
 static const double ConstValPercent = .1;
 
 namespace {
-  class VISIBILITY_HIDDEN PartSpec : public ModulePass {
+  class PartSpec : public ModulePass {
     void scanForInterest(Function&, SmallVector<int, 6>&);
     int scanDistribution(Function&, int, std::map<Constant*, int>&);
   public :
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/PruneEH.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/PruneEH.cpp
index daf81e9..3ae771c 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/PruneEH.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/PruneEH.cpp
@@ -27,7 +27,6 @@
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/Support/CFG.h"
-#include "llvm/Support/Compiler.h"
 #include <set>
 #include <algorithm>
 using namespace llvm;
@@ -36,7 +35,7 @@ STATISTIC(NumRemoved, "Number of invokes removed");
 STATISTIC(NumUnreach, "Number of noreturn calls optimized");
 
 namespace {
-  struct VISIBILITY_HIDDEN PruneEH : public CallGraphSCCPass {
+  struct PruneEH : public CallGraphSCCPass {
     static char ID; // Pass identification, replacement for typeid
     PruneEH() : CallGraphSCCPass(&ID) {}
 
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/RaiseAllocations.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/RaiseAllocations.cpp
deleted file mode 100644
index 7b4ad27..0000000
--- a/libclamav/c++/llvm/lib/Transforms/IPO/RaiseAllocations.cpp
+++ /dev/null
@@ -1,261 +0,0 @@
-//===- RaiseAllocations.cpp - Convert @malloc & @free calls to insts ------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file defines the RaiseAllocations pass which convert malloc and free
-// calls to malloc and free instructions.
-//
-//===----------------------------------------------------------------------===//
-
-#define DEBUG_TYPE "raiseallocs"
-#include "llvm/Transforms/IPO.h"
-#include "llvm/Constants.h"
-#include "llvm/DerivedTypes.h"
-#include "llvm/LLVMContext.h"
-#include "llvm/Module.h"
-#include "llvm/Instructions.h"
-#include "llvm/Pass.h"
-#include "llvm/Support/CallSite.h"
-#include "llvm/Support/Compiler.h"
-#include "llvm/ADT/Statistic.h"
-#include <algorithm>
-using namespace llvm;
-
-STATISTIC(NumRaised, "Number of allocations raised");
-
-namespace {
-  // RaiseAllocations - Turn @malloc and @free calls into the appropriate
-  // instruction.
-  //
-  class VISIBILITY_HIDDEN RaiseAllocations : public ModulePass {
-    Function *MallocFunc;   // Functions in the module we are processing
-    Function *FreeFunc;     // Initialized by doPassInitializationVirt
-  public:
-    static char ID; // Pass identification, replacement for typeid
-    RaiseAllocations() 
-      : ModulePass(&ID), MallocFunc(0), FreeFunc(0) {}
-
-    // doPassInitialization - For the raise allocations pass, this finds a
-    // declaration for malloc and free if they exist.
-    //
-    void doInitialization(Module &M);
-
-    // run - This method does the actual work of converting instructions over.
-    //
-    bool runOnModule(Module &M);
-  };
-}  // end anonymous namespace
-
-char RaiseAllocations::ID = 0;
-static RegisterPass<RaiseAllocations>
-X("raiseallocs", "Raise allocations from calls to instructions");
-
-// createRaiseAllocationsPass - The interface to this file...
-ModulePass *llvm::createRaiseAllocationsPass() {
-  return new RaiseAllocations();
-}
-
-
-// If the module has a symbol table, it might refer to the malloc and free
-// functions.  If this is the case, grab the function pointers that the
-// module is using.
-//
-// Lookup @malloc and @free in the symbol table, for later use.  If they don't
-// exist, or are not external, we do not worry about converting calls to that
-// function into the appropriate instruction.
-//
-void RaiseAllocations::doInitialization(Module &M) {
-  // Get Malloc and free prototypes if they exist!
-  MallocFunc = M.getFunction("malloc");
-  if (MallocFunc) {
-    const FunctionType* TyWeHave = MallocFunc->getFunctionType();
-
-    // Get the expected prototype for malloc
-    const FunctionType *Malloc1Type = 
-      FunctionType::get(PointerType::getUnqual(Type::getInt8Ty(M.getContext())),
-                      std::vector<const Type*>(1,
-                                      Type::getInt64Ty(M.getContext())), false);
-
-    // Check to see if we got the expected malloc
-    if (TyWeHave != Malloc1Type) {
-      // Check to see if the prototype is wrong, giving us i8*(i32) * malloc
-      // This handles the common declaration of: 'void *malloc(unsigned);'
-      const FunctionType *Malloc2Type = 
-        FunctionType::get(PointerType::getUnqual(
-                          Type::getInt8Ty(M.getContext())),
-                          std::vector<const Type*>(1, 
-                                      Type::getInt32Ty(M.getContext())), false);
-      if (TyWeHave != Malloc2Type) {
-        // Check to see if the prototype is missing, giving us 
-        // i8*(...) * malloc
-        // This handles the common declaration of: 'void *malloc();'
-        const FunctionType *Malloc3Type = 
-          FunctionType::get(PointerType::getUnqual(
-                                    Type::getInt8Ty(M.getContext())), 
-                                    true);
-        if (TyWeHave != Malloc3Type)
-          // Give up
-          MallocFunc = 0;
-      }
-    }
-  }
-
-  FreeFunc = M.getFunction("free");
-  if (FreeFunc) {
-    const FunctionType* TyWeHave = FreeFunc->getFunctionType();
-    
-    // Get the expected prototype for void free(i8*)
-    const FunctionType *Free1Type =
-      FunctionType::get(Type::getVoidTy(M.getContext()),
-        std::vector<const Type*>(1, PointerType::getUnqual(
-                                 Type::getInt8Ty(M.getContext()))), 
-                                 false);
-
-    if (TyWeHave != Free1Type) {
-      // Check to see if the prototype was forgotten, giving us 
-      // void (...) * free
-      // This handles the common forward declaration of: 'void free();'
-      const FunctionType* Free2Type =
-                    FunctionType::get(Type::getVoidTy(M.getContext()), true);
-
-      if (TyWeHave != Free2Type) {
-        // One last try, check to see if we can find free as 
-        // int (...)* free.  This handles the case where NOTHING was declared.
-        const FunctionType* Free3Type =
-                    FunctionType::get(Type::getInt32Ty(M.getContext()), true);
-        
-        if (TyWeHave != Free3Type) {
-          // Give up.
-          FreeFunc = 0;
-        }
-      }
-    }
-  }
-
-  // Don't mess with locally defined versions of these functions...
-  if (MallocFunc && !MallocFunc->isDeclaration()) MallocFunc = 0;
-  if (FreeFunc && !FreeFunc->isDeclaration())     FreeFunc = 0;
-}
-
-// run - Transform calls into instructions...
-//
-bool RaiseAllocations::runOnModule(Module &M) {
-  // Find the malloc/free prototypes...
-  doInitialization(M);
-  
-  bool Changed = false;
-
-  // First, process all of the malloc calls...
-  if (MallocFunc) {
-    std::vector<User*> Users(MallocFunc->use_begin(), MallocFunc->use_end());
-    std::vector<Value*> EqPointers;   // Values equal to MallocFunc
-    while (!Users.empty()) {
-      User *U = Users.back();
-      Users.pop_back();
-
-      if (Instruction *I = dyn_cast<Instruction>(U)) {
-        CallSite CS = CallSite::get(I);
-        if (CS.getInstruction() && !CS.arg_empty() &&
-            (CS.getCalledFunction() == MallocFunc ||
-             std::find(EqPointers.begin(), EqPointers.end(),
-                       CS.getCalledValue()) != EqPointers.end())) {
-
-          Value *Source = *CS.arg_begin();
-
-          // If no prototype was provided for malloc, we may need to cast the
-          // source size.
-          if (Source->getType() != Type::getInt32Ty(M.getContext()))
-            Source = 
-              CastInst::CreateIntegerCast(Source, 
-                                          Type::getInt32Ty(M.getContext()), 
-                                          false/*ZExt*/,
-                                          "MallocAmtCast", I);
-
-          MallocInst *MI = new MallocInst(Type::getInt8Ty(M.getContext()),
-                                          Source, "", I);
-          MI->takeName(I);
-          I->replaceAllUsesWith(MI);
-
-          // If the old instruction was an invoke, add an unconditional branch
-          // before the invoke, which will become the new terminator.
-          if (InvokeInst *II = dyn_cast<InvokeInst>(I))
-            BranchInst::Create(II->getNormalDest(), I);
-
-          // Delete the old call site
-          I->eraseFromParent();
-          Changed = true;
-          ++NumRaised;
-        }
-      } else if (GlobalValue *GV = dyn_cast<GlobalValue>(U)) {
-        Users.insert(Users.end(), GV->use_begin(), GV->use_end());
-        EqPointers.push_back(GV);
-      } else if (ConstantExpr *CE = dyn_cast<ConstantExpr>(U)) {
-        if (CE->isCast()) {
-          Users.insert(Users.end(), CE->use_begin(), CE->use_end());
-          EqPointers.push_back(CE);
-        }
-      }
-    }
-  }
-
-  // Next, process all free calls...
-  if (FreeFunc) {
-    std::vector<User*> Users(FreeFunc->use_begin(), FreeFunc->use_end());
-    std::vector<Value*> EqPointers;   // Values equal to FreeFunc
-
-    while (!Users.empty()) {
-      User *U = Users.back();
-      Users.pop_back();
-
-      if (Instruction *I = dyn_cast<Instruction>(U)) {
-        if (isa<InvokeInst>(I))
-          continue;
-        CallSite CS = CallSite::get(I);
-        if (CS.getInstruction() && !CS.arg_empty() &&
-            (CS.getCalledFunction() == FreeFunc ||
-             std::find(EqPointers.begin(), EqPointers.end(),
-                       CS.getCalledValue()) != EqPointers.end())) {
-
-          // If no prototype was provided for free, we may need to cast the
-          // source pointer.  This should be really uncommon, but it's necessary
-          // just in case we are dealing with weird code like this:
-          //   free((long)ptr);
-          //
-          Value *Source = *CS.arg_begin();
-          if (!isa<PointerType>(Source->getType()))
-            Source = new IntToPtrInst(Source,           
-                        PointerType::getUnqual(Type::getInt8Ty(M.getContext())), 
-                                      "FreePtrCast", I);
-          new FreeInst(Source, I);
-
-          // If the old instruction was an invoke, add an unconditional branch
-          // before the invoke, which will become the new terminator.
-          if (InvokeInst *II = dyn_cast<InvokeInst>(I))
-            BranchInst::Create(II->getNormalDest(), I);
-
-          // Delete the old call site
-          if (I->getType() != Type::getVoidTy(M.getContext()))
-            I->replaceAllUsesWith(UndefValue::get(I->getType()));
-          I->eraseFromParent();
-          Changed = true;
-          ++NumRaised;
-        }
-      } else if (GlobalValue *GV = dyn_cast<GlobalValue>(U)) {
-        Users.insert(Users.end(), GV->use_begin(), GV->use_end());
-        EqPointers.push_back(GV);
-      } else if (ConstantExpr *CE = dyn_cast<ConstantExpr>(U)) {
-        if (CE->isCast()) {
-          Users.insert(Users.end(), CE->use_begin(), CE->use_end());
-          EqPointers.push_back(CE);
-        }
-      }
-    }
-  }
-
-  return Changed;
-}
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/StripDeadPrototypes.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/StripDeadPrototypes.cpp
index a94d78e..4566a76 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/StripDeadPrototypes.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/StripDeadPrototypes.cpp
@@ -19,7 +19,6 @@
 #include "llvm/Pass.h"
 #include "llvm/Module.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Support/Compiler.h"
 using namespace llvm;
 
 STATISTIC(NumDeadPrototypes, "Number of dead prototypes removed");
@@ -27,7 +26,7 @@ STATISTIC(NumDeadPrototypes, "Number of dead prototypes removed");
 namespace {
 
 /// @brief Pass to remove unused function declarations.
-class VISIBILITY_HIDDEN StripDeadPrototypesPass : public ModulePass {
+class StripDeadPrototypesPass : public ModulePass {
 public:
   static char ID; // Pass identification, replacement for typeid
   StripDeadPrototypesPass() : ModulePass(&ID) { }
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/StripSymbols.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/StripSymbols.cpp
index 77d44b2..0b5e007 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/StripSymbols.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/StripSymbols.cpp
@@ -112,11 +112,11 @@ static bool OnlyUsedBy(Value *V, Value *Usr) {
 
 static void RemoveDeadConstant(Constant *C) {
   assert(C->use_empty() && "Constant is not dead!");
-  SmallPtrSet<Constant *, 4> Operands;
+  SmallPtrSet<Constant*, 4> Operands;
   for (unsigned i = 0, e = C->getNumOperands(); i != e; ++i)
     if (isa<DerivedType>(C->getOperand(i)->getType()) &&
         OnlyUsedBy(C->getOperand(i), C)) 
-      Operands.insert(C->getOperand(i));
+      Operands.insert(cast<Constant>(C->getOperand(i)));
   if (GlobalVariable *GV = dyn_cast<GlobalVariable>(C)) {
     if (!GV->hasLocalLinkage()) return;   // Don't delete non static globals.
     GV->eraseFromParent();
@@ -126,7 +126,7 @@ static void RemoveDeadConstant(Constant *C) {
       C->destroyConstant();
 
   // If the constant referenced anything, see if we can delete it as well.
-  for (SmallPtrSet<Constant *, 4>::iterator OI = Operands.begin(),
+  for (SmallPtrSet<Constant*, 4>::iterator OI = Operands.begin(),
          OE = Operands.end(); OI != OE; ++OI)
     RemoveDeadConstant(*OI);
 }
@@ -202,56 +202,36 @@ static bool StripSymbolNames(Module &M, bool PreserveDbgInfo) {
 // llvm.dbg.region.end calls, and any globals they point to if now dead.
 static bool StripDebugInfo(Module &M) {
 
+  bool Changed = false;
+
   // Remove all of the calls to the debugger intrinsics, and remove them from
   // the module.
-  Function *FuncStart = M.getFunction("llvm.dbg.func.start");
-  Function *StopPoint = M.getFunction("llvm.dbg.stoppoint");
-  Function *RegionStart = M.getFunction("llvm.dbg.region.start");
-  Function *RegionEnd = M.getFunction("llvm.dbg.region.end");
-  Function *Declare = M.getFunction("llvm.dbg.declare");
-
-  if (FuncStart) {
-    while (!FuncStart->use_empty()) {
-      CallInst *CI = cast<CallInst>(FuncStart->use_back());
-      CI->eraseFromParent();
-    }
-    FuncStart->eraseFromParent();
-  }
-  if (StopPoint) {
-    while (!StopPoint->use_empty()) {
-      CallInst *CI = cast<CallInst>(StopPoint->use_back());
-      CI->eraseFromParent();
-    }
-    StopPoint->eraseFromParent();
-  }
-  if (RegionStart) {
-    while (!RegionStart->use_empty()) {
-      CallInst *CI = cast<CallInst>(RegionStart->use_back());
-      CI->eraseFromParent();
-    }
-    RegionStart->eraseFromParent();
-  }
-  if (RegionEnd) {
-    while (!RegionEnd->use_empty()) {
-      CallInst *CI = cast<CallInst>(RegionEnd->use_back());
-      CI->eraseFromParent();
-    }
-    RegionEnd->eraseFromParent();
-  }
-  if (Declare) {
+  if (Function *Declare = M.getFunction("llvm.dbg.declare")) {
     while (!Declare->use_empty()) {
       CallInst *CI = cast<CallInst>(Declare->use_back());
       CI->eraseFromParent();
     }
     Declare->eraseFromParent();
+    Changed = true;
   }
 
   NamedMDNode *NMD = M.getNamedMetadata("llvm.dbg.gv");
-  if (NMD)
+  if (NMD) {
+    Changed = true;
     NMD->eraseFromParent();
+  }
+  MetadataContext &TheMetadata = M.getContext().getMetadata();
+  unsigned MDDbgKind = TheMetadata.getMDKind("dbg");
+  if (!MDDbgKind)
+    return Changed;
+
+  for (Module::iterator MI = M.begin(), ME = M.end(); MI != ME; ++MI) 
+    for (Function::iterator FI = MI->begin(), FE = MI->end(); FI != FE;
+         ++FI)
+      for (BasicBlock::iterator BI = FI->begin(), BE = FI->end(); BI != BE;
+           ++BI) 
+        TheMetadata.removeMD(MDDbgKind, BI);
 
-  // Remove dead metadata.
-  M.getContext().RemoveDeadMetadata();
   return true;
 }
 
@@ -292,23 +272,13 @@ bool StripDebugDeclare::runOnModule(Module &M) {
     Declare->eraseFromParent();
   }
 
-  // Delete all llvm.dbg.global_variables.
-  for (Module::global_iterator I = M.global_begin(), E = M.global_end(); 
-       I != E; ++I) {
-    GlobalVariable *GV = dyn_cast<GlobalVariable>(I);
-    if (!GV) continue;
-    if (GV->use_empty() && GV->getName().startswith("llvm.dbg.global_variable"))
-      DeadConstants.push_back(GV);
-  }
-
   while (!DeadConstants.empty()) {
     Constant *C = DeadConstants.back();
     DeadConstants.pop_back();
     if (GlobalVariable *GV = dyn_cast<GlobalVariable>(C)) {
       if (GV->hasLocalLinkage())
         RemoveDeadConstant(GV);
-    }
-    else
+    } else
       RemoveDeadConstant(C);
   }
 
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/StructRetPromotion.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/StructRetPromotion.cpp
index 4442820..67fc934 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/StructRetPromotion.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/StructRetPromotion.cpp
@@ -34,7 +34,6 @@
 #include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
@@ -44,7 +43,7 @@ namespace {
   /// SRETPromotion - This pass removes sret parameter and updates
   /// function to use multiple return value.
   ///
-  struct VISIBILITY_HIDDEN SRETPromotion : public CallGraphSCCPass {
+  struct SRETPromotion : public CallGraphSCCPass {
     virtual void getAnalysisUsage(AnalysisUsage &AU) const {
       CallGraphSCCPass::getAnalysisUsage(AU);
     }
diff --git a/libclamav/c++/llvm/lib/Transforms/Instrumentation/BlockProfiling.cpp b/libclamav/c++/llvm/lib/Transforms/Instrumentation/BlockProfiling.cpp
index eb8f225..211a6d6 100644
--- a/libclamav/c++/llvm/lib/Transforms/Instrumentation/BlockProfiling.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Instrumentation/BlockProfiling.cpp
@@ -22,7 +22,6 @@
 #include "llvm/DerivedTypes.h"
 #include "llvm/Module.h"
 #include "llvm/Pass.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Transforms/Instrumentation.h"
 #include "RSProfiling.h"
@@ -30,7 +29,7 @@
 using namespace llvm;
 
 namespace {
-  class VISIBILITY_HIDDEN FunctionProfiler : public RSProfilers_std {
+  class FunctionProfiler : public RSProfilers_std {
   public:
     static char ID;
     bool runOnModule(Module &M);
diff --git a/libclamav/c++/llvm/lib/Transforms/Instrumentation/EdgeProfiling.cpp b/libclamav/c++/llvm/lib/Transforms/Instrumentation/EdgeProfiling.cpp
index b9cb275..9ae3786 100644
--- a/libclamav/c++/llvm/lib/Transforms/Instrumentation/EdgeProfiling.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Instrumentation/EdgeProfiling.cpp
@@ -20,7 +20,6 @@
 #include "ProfilingUtils.h"
 #include "llvm/Module.h"
 #include "llvm/Pass.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
 #include "llvm/Transforms/Instrumentation.h"
@@ -31,7 +30,7 @@ using namespace llvm;
 STATISTIC(NumEdgesInserted, "The # of edges inserted.");
 
 namespace {
-  class VISIBILITY_HIDDEN EdgeProfiler : public ModulePass {
+  class EdgeProfiler : public ModulePass {
     bool runOnModule(Module &M);
   public:
     static char ID; // Pass identification, replacement for typeid
diff --git a/libclamav/c++/llvm/lib/Transforms/Instrumentation/OptimalEdgeProfiling.cpp b/libclamav/c++/llvm/lib/Transforms/Instrumentation/OptimalEdgeProfiling.cpp
index b2e6747..0a46fe5 100644
--- a/libclamav/c++/llvm/lib/Transforms/Instrumentation/OptimalEdgeProfiling.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Instrumentation/OptimalEdgeProfiling.cpp
@@ -19,7 +19,6 @@
 #include "llvm/Analysis/Passes.h"
 #include "llvm/Analysis/ProfileInfo.h"
 #include "llvm/Analysis/ProfileInfoLoader.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
@@ -33,7 +32,7 @@ using namespace llvm;
 STATISTIC(NumEdgesInserted, "The # of edges inserted.");
 
 namespace {
-  class VISIBILITY_HIDDEN OptimalEdgeProfiler : public ModulePass {
+  class OptimalEdgeProfiler : public ModulePass {
     bool runOnModule(Module &M);
   public:
     static char ID; // Pass identification, replacement for typeid
diff --git a/libclamav/c++/llvm/lib/Transforms/Instrumentation/ProfilingUtils.cpp b/libclamav/c++/llvm/lib/Transforms/Instrumentation/ProfilingUtils.cpp
index 88a1d2a..1679bea 100644
--- a/libclamav/c++/llvm/lib/Transforms/Instrumentation/ProfilingUtils.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Instrumentation/ProfilingUtils.cpp
@@ -25,9 +25,9 @@ void llvm::InsertProfilingInitCall(Function *MainFn, const char *FnName,
                                    GlobalValue *Array) {
   LLVMContext &Context = MainFn->getContext();
   const Type *ArgVTy = 
-    PointerType::getUnqual(PointerType::getUnqual(Type::getInt8Ty(Context)));
+    PointerType::getUnqual(Type::getInt8PtrTy(Context));
   const PointerType *UIntPtr =
-        PointerType::getUnqual(Type::getInt32Ty(Context));
+        Type::getInt32PtrTy(Context);
   Module &M = *MainFn->getParent();
   Constant *InitFn = M.getOrInsertFunction(FnName, Type::getInt32Ty(Context),
                                            Type::getInt32Ty(Context),
diff --git a/libclamav/c++/llvm/lib/Transforms/Instrumentation/RSProfiling.cpp b/libclamav/c++/llvm/lib/Transforms/Instrumentation/RSProfiling.cpp
index 3b72260..c08efc1 100644
--- a/libclamav/c++/llvm/lib/Transforms/Instrumentation/RSProfiling.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Instrumentation/RSProfiling.cpp
@@ -42,7 +42,6 @@
 #include "llvm/Transforms/Scalar.h"
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
 #include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
@@ -72,7 +71,7 @@ namespace {
   /// NullProfilerRS - The basic profiler that does nothing.  It is the default
   /// profiler and thus terminates RSProfiler chains.  It is useful for 
   /// measuring framework overhead
-  class VISIBILITY_HIDDEN NullProfilerRS : public RSProfilers {
+  class NullProfilerRS : public RSProfilers {
   public:
     static char ID; // Pass identification, replacement for typeid
     bool isProfiling(Value* v) {
@@ -94,7 +93,7 @@ static RegisterAnalysisGroup<RSProfilers, true> NPT(NP);
 
 namespace {
   /// Chooser - Something that chooses when to make a sample of the profiled code
-  class VISIBILITY_HIDDEN Chooser {
+  class Chooser {
   public:
     /// ProcessChoicePoint - is called for each basic block inserted to choose 
     /// between normal and sample code
@@ -108,7 +107,7 @@ namespace {
   //Things that implement sampling policies
   //A global value that is read-mod-stored to choose when to sample.
   //A sample is taken when the global counter hits 0
-  class VISIBILITY_HIDDEN GlobalRandomCounter : public Chooser {
+  class GlobalRandomCounter : public Chooser {
     GlobalVariable* Counter;
     Value* ResetValue;
     const IntegerType* T;
@@ -120,7 +119,7 @@ namespace {
   };
 
   //Same is GRC, but allow register allocation of the global counter
-  class VISIBILITY_HIDDEN GlobalRandomCounterOpt : public Chooser {
+  class GlobalRandomCounterOpt : public Chooser {
     GlobalVariable* Counter;
     Value* ResetValue;
     AllocaInst* AI;
@@ -134,7 +133,7 @@ namespace {
 
   //Use the cycle counter intrinsic as a source of pseudo randomness when
   //deciding when to sample.
-  class VISIBILITY_HIDDEN CycleCounter : public Chooser {
+  class CycleCounter : public Chooser {
     uint64_t rm;
     Constant *F;
   public:
@@ -145,7 +144,7 @@ namespace {
   };
 
   /// ProfilerRS - Insert the random sampling framework
-  struct VISIBILITY_HIDDEN ProfilerRS : public FunctionPass {
+  struct ProfilerRS : public FunctionPass {
     static char ID; // Pass identification, replacement for typeid
     ProfilerRS() : FunctionPass(&ID) {}
 
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/ABCD.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/ABCD.cpp
new file mode 100644
index 0000000..e58fa63
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/ABCD.cpp
@@ -0,0 +1,1117 @@
+//===------- ABCD.cpp - Removes redundant conditional branches ------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This pass removes redundant branch instructions. This algorithm was
+// described by Rastislav Bodik, Rajiv Gupta and Vivek Sarkar in their paper
+// "ABCD: Eliminating Array Bounds Checks on Demand (2000)". The original
+// Algorithm was created to remove array bound checks for strongly typed
+// languages. This implementation expands the idea and removes any conditional
+// branches that can be proved redundant, not only those used in array bound
+// checks. With the SSI representation, each variable has a
+// constraint. By analyzing these constraints we can prove that a branch is
+// redundant. When a branch is proved redundant it means that
+// one direction will always be taken; thus, we can change this branch into an
+// unconditional jump.
+// It is advisable to run SimplifyCFG and Aggressive Dead Code Elimination
+// after ABCD to clean up the code.
+// This implementation is based on the ABCD algorithm implementation in the
+// Jitrino compiler.
+//
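+// As an illustrative (hypothetical) example, given the IR
+//
+//   %c1 = icmp slt i32 %i, %n
+//   br i1 %c1, label %then, label %exit
+//
+// where %then contains a second "icmp slt i32 %i, %n" guarding another
+// branch, the constraints derived from the first test prove that the second
+// branch always goes the same way, so it can be rewritten as an
+// unconditional jump.
+//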
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "abcd"
+#include "llvm/ADT/DenseMap.h"
+#include "llvm/ADT/SmallPtrSet.h"
+#include "llvm/ADT/Statistic.h"
+#include "llvm/Constants.h"
+#include "llvm/Function.h"
+#include "llvm/Instructions.h"
+#include "llvm/Pass.h"
+#include "llvm/Support/raw_ostream.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Transforms/Scalar.h"
+#include "llvm/Transforms/Utils/SSI.h"
+
+using namespace llvm;
+
+STATISTIC(NumBranchTested, "Number of conditional branches analyzed");
+STATISTIC(NumBranchRemoved, "Number of conditional branches removed");
+
+namespace {
+
+class ABCD : public FunctionPass {
+ public:
+  static char ID;  // Pass identification, replacement for typeid.
+  ABCD() : FunctionPass(&ID) {}
+
+  void getAnalysisUsage(AnalysisUsage &AU) const {
+    AU.addRequired<SSI>();
+  }
+
+  bool runOnFunction(Function &F);
+
+ private:
+  /// Keep track of whether we've modified the program yet.
+  bool modified;
+
+  enum ProveResult {
+    False = 0,
+    Reduced = 1,
+    True = 2
+  };
+
+  typedef ProveResult (*meet_function)(ProveResult, ProveResult);
+  static ProveResult max(ProveResult res1, ProveResult res2) {
+    return (ProveResult) std::max(res1, res2);
+  }
+  static ProveResult min(ProveResult res1, ProveResult res2) {
+    return (ProveResult) std::min(res1, res2);
+  }
+
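+  /// A Bound packages an APInt constant with a direction flag. Roughly
+  /// speaking, an upper bound stands for a constraint "x <= value" and a
+  /// lower bound for "x >= value", which is why the comparison helpers
+  /// below flip their predicate depending on upper_bound.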
+  class Bound {
+   public:
+    Bound(APInt v, bool upper) : value(v), upper_bound(upper) {}
+    Bound(const Bound *b, int cnst)
+      : value(b->value - cnst), upper_bound(b->upper_bound) {}
+    Bound(const Bound *b, const APInt &cnst)
+      : value(b->value - cnst), upper_bound(b->upper_bound) {}
+
+    /// Test if Bound is an upper bound
+    bool isUpperBound() const { return upper_bound; }
+
+    /// Get the bitwidth of this bound
+    int32_t getBitWidth() const { return value.getBitWidth(); }
+
+    /// Creates a Bound incrementing the one received
+    static Bound *createIncrement(const Bound *b) {
+      return new Bound(b->isUpperBound() ? b->value+1 : b->value-1,
+                       b->upper_bound);
+    }
+
+    /// Creates a Bound decrementing the one received
+    static Bound *createDecrement(const Bound *b) {
+      return new Bound(b->isUpperBound() ? b->value-1 : b->value+1,
+                       b->upper_bound);
+    }
+
+    /// Test if two bounds are equal
+    static bool eq(const Bound *a, const Bound *b) {
+      if (!a || !b) return false;
+
+      assert(a->isUpperBound() == b->isUpperBound());
+      return a->value == b->value;
+    }
+
+    /// Test if val is less than or equal to Bound b
+    static bool leq(APInt val, const Bound *b) {
+      if (!b) return false;
+      return b->isUpperBound() ? val.sle(b->value) : val.sge(b->value);
+    }
+
+    /// Test if Bound a is less than or equal to Bound b
+    static bool leq(const Bound *a, const Bound *b) {
+      if (!a || !b) return false;
+
+      assert(a->isUpperBound() == b->isUpperBound());
+      return a->isUpperBound() ? a->value.sle(b->value) :
+                                 a->value.sge(b->value);
+    }
+
+    /// Test if Bound a is less than Bound b
+    static bool lt(const Bound *a, const Bound *b) {
+      if (!a || !b) return false;
+
+      assert(a->isUpperBound() == b->isUpperBound());
+      return a->isUpperBound() ? a->value.slt(b->value) :
+                                 a->value.sgt(b->value);
+    }
+
+    /// Test if Bound b is greater than or equal to val
+    static bool geq(const Bound *b, APInt val) {
+      return leq(val, b);
+    }
+
+    /// Test if Bound a is greater than or equal to Bound b
+    static bool geq(const Bound *a, const Bound *b) {
+      return leq(b, a);
+    }
+
+   private:
+    APInt value;
+    bool upper_bound;
+  };
+
+  /// This class is used to store results for some parts of the graph,
+  /// so the information does not need to be recalculated. The maximum false,
+  /// minimum true and minimum reduced results are stored.
+  class MemoizedResultChart {
+   public:
+     MemoizedResultChart()
+       : max_false(NULL), min_true(NULL), min_reduced(NULL) {}
+
+    /// Returns the max false
+    Bound *getFalse() const { return max_false; }
+
+    /// Returns the min true
+    Bound *getTrue() const { return min_true; }
+
+    /// Returns the min reduced
+    Bound *getReduced() const { return min_reduced; }
+
+    /// Return the stored result for this bound
+    ProveResult getResult(const Bound *bound) const;
+
+    /// Records a bound that was proven False
+    void addFalse(Bound *bound);
+
+    /// Records a bound that was proven True
+    void addTrue(Bound *bound);
+
+    /// Records a bound that was proven Reduced
+    void addReduced(Bound *bound);
+
+    /// Clears redundant reduced
+    /// If a min_true is smaller than a min_reduced then the min_reduced
+    /// is unnecessary and is removed. It also works for min_reduced
+    /// being smaller than max_false.
+    void clearRedundantReduced();
+
+    void clear() {
+      delete max_false;
+      delete min_true;
+      delete min_reduced;
+    }
+
+  private:
+    Bound *max_false, *min_true, *min_reduced;
+  };
+
+  /// This class stores the result found for a node of the graph,
+  /// so these results do not need to be recalculated, only searched for.
+  class MemoizedResult {
+  public:
+    /// Test if there is a true result stored from b to a
+    /// that is less than the bound
+    bool hasTrue(Value *b, const Bound *bound) const {
+      Bound *trueBound = map.lookup(b).getTrue();
+      return trueBound && Bound::leq(trueBound, bound);
+    }
+
+    /// Test if there is a false result stored from b to a
+    /// that is less than the bound
+    bool hasFalse(Value *b, const Bound *bound) const {
+      Bound *falseBound = map.lookup(b).getFalse();
+      return falseBound && Bound::leq(falseBound, bound);
+    }
+
+    /// Test if there is a reduced result stored from b to a
+    /// that is less than the bound
+    bool hasReduced(Value *b, const Bound *bound) const {
+      Bound *reducedBound = map.lookup(b).getReduced();
+      return reducedBound && Bound::leq(reducedBound, bound);
+    }
+
+    /// Returns the memoized result for b under the given bound
+    ProveResult getBoundResult(Value *b, Bound *bound) {
+      return map[b].getResult(bound);
+    }
+
+    /// Clears the map
+    void clear() {
+      DenseMapIterator<Value*, MemoizedResultChart> begin = map.begin();
+      DenseMapIterator<Value*, MemoizedResultChart> end = map.end();
+      for (; begin != end; ++begin) {
+        begin->second.clear();
+      }
+      map.clear();
+    }
+
+    /// Records the result res found for b under the given bound
+    void updateBound(Value *b, Bound *bound, const ProveResult res);
+
+  private:
+    // Maps a node in the graph to its memoized results.
+    DenseMap<Value*, MemoizedResultChart> map;
+  };
+
+  /// This class represents an edge in the inequality graph used by the
+  /// ABCD algorithm. An edge connects node v to node u with a value c if
+  /// we could infer a constraint v <= u + c in the source program.
+  class Edge {
+  public:
+    Edge(Value *V, APInt val, bool upper)
+      : vertex(V), value(val), upper_bound(upper) {}
+
+    Value *getVertex() const { return vertex; }
+    const APInt &getValue() const { return value; }
+    bool isUpperBound() const { return upper_bound; }
+
+  private:
+    Value *vertex;
+    APInt value;
+    bool upper_bound;
+  };
+
+  /// Weighted and Directed graph to represent constraints.
+  /// There is one type of constraint, a <= b + X, which will generate an
+  /// edge from b to a with weight X.
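+  /// For example (illustrative, assuming 32-bit values), the constraint
+  /// a <= b + 1 is recorded as an edge b->a carrying APInt(32, 1).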
+  class InequalityGraph {
+  public:
+
+    /// Adds an edge from V_from to V_to with the given weight. Note the
+    /// parameter order: the destination vertex comes first, matching the
+    /// out-of-line definition below.
+    void addEdge(Value *V_to, Value *V_from, APInt value, bool upper);
+
+    /// Test if there is a node V
+    bool hasNode(Value *V) const { return graph.count(V); }
+
+    /// Test if there is any edge from V in the given (upper or lower) direction
+    bool hasEdge(Value *V, bool upper) const;
+
+    /// Returns all edges that leave vertex V
+    SmallPtrSet<Edge *, 16> getEdges(Value *V) const {
+      return graph.lookup(V);
+    }
+
+    /// Prints the graph in dot format.
+    /// Blue edges represent upper bounds and red edges lower bounds.
+    void printGraph(raw_ostream &OS, Function &F) const {
+      printHeader(OS, F);
+      printBody(OS);
+      printFooter(OS);
+    }
+
+    /// Clear the graph
+    void clear() {
+      graph.clear();
+    }
+
+  private:
+    DenseMap<Value *, SmallPtrSet<Edge *, 16> > graph;
+
+    /// Adds a Node to the graph.
+    DenseMap<Value *, SmallPtrSet<Edge *, 16> >::iterator addNode(Value *V) {
+      SmallPtrSet<Edge *, 16> p;
+      return graph.insert(std::make_pair(V, p)).first;
+    }
+
+    /// Prints the header of the dot file
+    void printHeader(raw_ostream &OS, Function &F) const;
+
+    /// Prints the footer of the dot file
+    void printFooter(raw_ostream &OS) const {
+      OS << "}\n";
+    }
+
+    /// Prints the body of the dot file
+    void printBody(raw_ostream &OS) const;
+
+    /// Prints vertex source to the dot file
+    void printVertex(raw_ostream &OS, Value *source) const;
+
+    /// Prints the edge to the dot file
+    void printEdge(raw_ostream &OS, Value *source, Edge *edge) const;
+
+    void printName(raw_ostream &OS, Value *info) const;
+  };
+
+  /// Iterates through all BasicBlocks; if the Terminator Instruction
+  /// uses a Comparator Instruction, all operands of this comparator
+  /// are sent to be transformed to SSI. Only Instruction operands are
+  /// transformed.
+  void createSSI(Function &F);
+
+  /// Creates the graphs for this function.
+  /// It will look for all comparators used in branches and create
+  /// constraints from them. Constraints are also created for any
+  /// Instruction used as a comparator operand.
+  void executeABCD(Function &F);
+
+  /// Seeks redundancies in the comparator instruction ICI.
+  /// If the ABCD algorithm can prove that the comparator ICI always
+  /// takes one way, then the Terminator Instruction TI is changed from
+  /// a conditional branch into an unconditional one.
+  /// This code basically receives a comparator, and verifies which kind of
+  /// instruction it is. Depending on the kind of instruction, we use different
+  /// strategies to prove its redundancy.
+  void seekRedundancy(ICmpInst *ICI, TerminatorInst *TI);
+
+  /// Substitutes Terminator Instruction TI, that is a conditional branch,
+  /// with one unconditional branch. Succ_edge determines if the new
+  /// unconditional edge will be the first or second edge of the former TI
+  /// instruction.
+  void removeRedundancy(TerminatorInst *TI, bool Succ_edge);
+
+  /// When a conditional branch is removed, the BasicBlock that is no longer
+  /// reachable will have problems in its phi functions. This method fixes
+  /// these phis by removing the former BasicBlock from the list of incoming
+  /// BasicBlocks of all phis. In case a phi is left with no predecessor it
+  /// will be marked to be removed later.
+  void fixPhi(BasicBlock *BB, BasicBlock *Succ);
+
+  /// Removes phis that have no predecessor
+  void removePhis();
+
+  /// Creates constraints for Instructions.
+  /// If the constraint for this instruction has already been created
+  /// nothing is done.
+  void createConstraintInstruction(Instruction *I);
+
+  /// Creates constraints for Binary Operators.
+  /// It will create constraints only for addition and subtraction,
+  /// the other binary operations are not treated by ABCD.
+  /// For additions in the form a = b + X and a = X + b, where X is a constant,
+  /// the constraint a <= b + X can be obtained. For this constraint, an edge
+  /// a->b with weight X is added to the lower bound graph, and an edge
+  /// b->a with weight -X is added to the upper bound graph.
+  /// Only subtractions in the format a = b - X are used by ABCD.
+  /// Edges are created using the same semantics as addition.
+  void createConstraintBinaryOperator(BinaryOperator *BO);
+
+  /// Creates constraints for Comparator Instructions.
+  /// Only comparators that have any of the following operators
+  /// are used to create constraints: >=, >, <=, <, and only if
+  /// at least one operand is an Instruction. In a Comparator Instruction
+  /// a op b, there will be 4 sigma functions a_t, a_f, b_t and b_f, where
+  /// t and f represent sigma for operands in true and false branches. The
+  /// following constraints can be obtained. a_t <= a, a_f <= a, b_t <= b and
+  /// b_f <= b. There are two more constraints that depend on the operator.
+  /// For the operator <= : a_t <= b_t   and b_f <= a_f-1
+  /// For the operator <  : a_t <= b_t-1 and b_f <= a_f
+  /// For the operator >= : b_t <= a_t   and a_f <= b_f-1
+  /// For the operator >  : b_t <= a_t-1 and a_f <= b_f
+  void createConstraintCmpInst(ICmpInst *ICI, TerminatorInst *TI);
+
+  /// Creates constraints for PHI nodes.
+  /// In a PHI node a = phi(b,c) we can create the constraint
+  /// a<= max(b,c). With this constraint there will be the edges,
+  /// b->a and c->a with weight 0 in the lower bound graph, and the edges
+  /// a->b and a->c with weight 0 in the upper bound graph.
+  void createConstraintPHINode(PHINode *PN);
+
+  /// Given a binary operator, we are only interested in the case
+  /// that one operand is an Instruction and the other is a ConstantInt. In
+  /// this case the method returns true, otherwise false. It also obtains the
+  /// Instruction and ConstantInt from the BinaryOperator and returns them.
+  bool createBinaryOperatorInfo(BinaryOperator *BO, Instruction **I1,
+                                Instruction **I2, ConstantInt **C1,
+                                ConstantInt **C2);
+
+  /// This method creates a constraint between a Sigma and an Instruction.
+  /// These constraints are created as soon as we find a comparator that uses a
+  /// SSI variable.
+  void createConstraintSigInst(Instruction *I_op, BasicBlock *BB_succ_t,
+                               BasicBlock *BB_succ_f, PHINode **SIG_op_t,
+                               PHINode **SIG_op_f);
+
+  /// If PN_op1 and PN_op2 are different from NULL, create a constraint
+  /// PN_op2 -> PN_op1 with value. In case any of them is NULL, replace
+  /// with the respective V_op#, if V_op# is a ConstantInt.
+  void createConstraintSigSig(PHINode *SIG_op1, PHINode *SIG_op2, 
+                              ConstantInt *V_op1, ConstantInt *V_op2,
+                              APInt value);
+
+  /// Returns the sigma representing the Instruction I in BasicBlock BB.
+  /// Returns NULL in case there is no sigma for this Instruction in this
+  /// Basic Block. This method assumes that sigmas are the first instructions
+  /// in a block, and that there can be only two sigmas in a block. So it will
+  /// only look at the first two instructions of BasicBlock BB.
+  PHINode *findSigma(BasicBlock *BB, Instruction *I);
+
+  /// Original ABCD algorithm to prove redundant checks.
+  /// This implementation works on any kind of inequality branch.
+  bool demandProve(Value *a, Value *b, int c, bool upper_bound);
+
+  /// Prove that the distance between b and a is <= bound
+  ProveResult prove(Value *a, Value *b, Bound *bound, unsigned level);
+
+  /// Updates the distance value for a and b
+  void updateMemDistance(Value *a, Value *b, Bound *bound, unsigned level,
+                         meet_function meet);
+
+  InequalityGraph inequality_graph;
+  MemoizedResult mem_result;
+  DenseMap<Value*, Bound*> active;
+  SmallPtrSet<Value*, 16> created;
+  SmallVector<PHINode *, 16> phis_to_remove;
+};
+
+}  // end anonymous namespace.
+
+char ABCD::ID = 0;
+static RegisterPass<ABCD> X("abcd", "ABCD: Eliminating Array Bounds Checks on Demand");
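+// Since the pass is registered under the name "abcd", it can be invoked
+// through opt, e.g. (illustrative): opt -abcd -simplifycfg in.bc -o out.bc,
+// with SimplifyCFG run afterwards as the header comment recommends.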
+
+
+bool ABCD::runOnFunction(Function &F) {
+  modified = false;
+  createSSI(F);
+  executeABCD(F);
+  DEBUG(inequality_graph.printGraph(errs(), F));
+  removePhis();
+
+  inequality_graph.clear();
+  mem_result.clear();
+  active.clear();
+  created.clear();
+  phis_to_remove.clear();
+  return modified;
+}
+
+/// Iterates through all BasicBlocks; if the Terminator Instruction
+/// uses a Comparator Instruction, all operands of this comparator
+/// are sent to be transformed to SSI. Only Instruction operands are
+/// transformed.
+void ABCD::createSSI(Function &F) {
+  SSI *ssi = &getAnalysis<SSI>();
+
+  SmallVector<Instruction *, 16> Insts;
+
+  for (Function::iterator begin = F.begin(), end = F.end();
+       begin != end; ++begin) {
+    BasicBlock *BB = begin;
+    TerminatorInst *TI = BB->getTerminator();
+    if (TI->getNumOperands() == 0)
+      continue;
+
+    if (ICmpInst *ICI = dyn_cast<ICmpInst>(TI->getOperand(0))) {
+      if (Instruction *I = dyn_cast<Instruction>(ICI->getOperand(0))) {
+        modified = true;  // XXX: even though createSSI might do nothing
+        Insts.push_back(I);
+      }
+      if (Instruction *I = dyn_cast<Instruction>(ICI->getOperand(1))) {
+        modified = true;
+        Insts.push_back(I);
+      }
+    }
+  }
+  ssi->createSSI(Insts);
+}
+
+/// Creates the graphs for this function.
+/// It will look for all comparators used in branches and create
+/// constraints from them. Constraints are also created for any
+/// Instruction used as a comparator operand.
+void ABCD::executeABCD(Function &F) {
+  for (Function::iterator begin = F.begin(), end = F.end();
+       begin != end; ++begin) {
+    BasicBlock *BB = begin;
+    TerminatorInst *TI = BB->getTerminator();
+    if (TI->getNumOperands() == 0)
+      continue;
+
+    ICmpInst *ICI = dyn_cast<ICmpInst>(TI->getOperand(0));
+    if (!ICI || !isa<IntegerType>(ICI->getOperand(0)->getType()))
+      continue;
+
+    createConstraintCmpInst(ICI, TI);
+    seekRedundancy(ICI, TI);
+  }
+}
+
+/// Seeks redundancies in the comparator instruction ICI.
+/// If the ABCD algorithm can prove that the comparator ICI always
+/// takes one way, then the Terminator Instruction TI is changed from
+/// a conditional branch into an unconditional one.
+/// This code basically receives a comparator, and verifies which kind of
+/// instruction it is. Depending on the kind of instruction, we use different
+/// strategies to prove its redundancy.
+void ABCD::seekRedundancy(ICmpInst *ICI, TerminatorInst *TI) {
+  CmpInst::Predicate Pred = ICI->getPredicate();
+
+  Value *source, *dest;
+  int distance1, distance2;
+  bool upper;
+
+  switch(Pred) {
+    case CmpInst::ICMP_SGT: // signed greater than
+      upper = false;
+      distance1 = 1;
+      distance2 = 0;
+      break;
+
+    case CmpInst::ICMP_SGE: // signed greater or equal
+      upper = false;
+      distance1 = 0;
+      distance2 = -1;
+      break;
+
+    case CmpInst::ICMP_SLT: // signed less than
+      upper = true;
+      distance1 = -1;
+      distance2 = 0;
+      break;
+
+    case CmpInst::ICMP_SLE: // signed less or equal
+      upper = true;
+      distance1 = 0;
+      distance2 = 1;
+      break;
+
+    default:
+      return;
+  }
+
+  ++NumBranchTested;
+  source = ICI->getOperand(0);
+  dest = ICI->getOperand(1);
+  if (demandProve(dest, source, distance1, upper)) {
+    removeRedundancy(TI, true);
+  } else if (demandProve(dest, source, distance2, !upper)) {
+    removeRedundancy(TI, false);
+  }
+}
+
+/// Substitutes Terminator Instruction TI, that is a conditional branch,
+/// with one unconditional branch. Succ_edge determines if the new
+/// unconditional edge will be the first or second edge of the former TI
+/// instruction.
+void ABCD::removeRedundancy(TerminatorInst *TI, bool Succ_edge) {
+  BasicBlock *Succ;
+  if (Succ_edge) {
+    Succ = TI->getSuccessor(0);
+    fixPhi(TI->getParent(), TI->getSuccessor(1));
+  } else {
+    Succ = TI->getSuccessor(1);
+    fixPhi(TI->getParent(), TI->getSuccessor(0));
+  }
+
+  BranchInst::Create(Succ, TI);
+  TI->eraseFromParent();  // XXX: invoke
+  ++NumBranchRemoved;
+  modified = true;
+}
+
+/// When a conditional branch is removed, the BasicBlock that is no longer
+/// reachable will have problems in its phi functions. This method fixes
+/// these phis by removing the former BasicBlock from the list of incoming
+/// BasicBlocks of all phis. In case a phi is left with no predecessor it
+/// will be marked to be removed later.
+void ABCD::fixPhi(BasicBlock *BB, BasicBlock *Succ) {
+  BasicBlock::iterator begin = Succ->begin();
+  while (PHINode *PN = dyn_cast<PHINode>(begin++)) {
+    PN->removeIncomingValue(BB, false);
+    if (PN->getNumIncomingValues() == 0)
+      phis_to_remove.push_back(PN);
+  }
+}
+
+/// Removes phis that have no predecessor
+void ABCD::removePhis() {
+  for (unsigned i = 0, e = phis_to_remove.size(); i != e; ++i) {
+    PHINode *PN = phis_to_remove[i];
+    PN->replaceAllUsesWith(UndefValue::get(PN->getType()));
+    PN->eraseFromParent();
+  }
+}
+
+/// Creates constraints for Instructions.
+/// If the constraint for this instruction has already been created
+/// nothing is done.
+void ABCD::createConstraintInstruction(Instruction *I) {
+  // Test if this instruction has not been created before
+  if (created.insert(I)) {
+    if (BinaryOperator *BO = dyn_cast<BinaryOperator>(I)) {
+      createConstraintBinaryOperator(BO);
+    } else if (PHINode *PN = dyn_cast<PHINode>(I)) {
+      createConstraintPHINode(PN);
+    }
+  }
+}
+
+/// Creates constraints for Binary Operators.
+/// It will create constraints only for addition and subtraction,
+/// the other binary operations are not treated by ABCD.
+/// For additions in the form a = b + X and a = X + b, where X is a constant,
+/// the constraint a <= b + X can be obtained. For this constraint, an edge
+/// a->b with weight X is added to the lower bound graph, and an edge
+/// b->a with weight -X is added to the upper bound graph.
+/// Only subtractions in the format a = b - X are used by ABCD.
+/// Edges are created using the same semantics as addition.
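+/// For example (illustrative), "%a = add i32 %b, 5" yields the constraint
+/// a <= b + 5: an edge a->b with weight 5 in the lower bound graph and an
+/// edge b->a with weight -5 in the upper bound graph.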
+void ABCD::createConstraintBinaryOperator(BinaryOperator *BO) {
+  Instruction *I1 = NULL, *I2 = NULL;
+  ConstantInt *CI1 = NULL, *CI2 = NULL;
+
+  // Test if an operand is an Instruction and the other is a Constant
+  if (!createBinaryOperatorInfo(BO, &I1, &I2, &CI1, &CI2))
+    return;
+
+  Instruction *I = 0;
+  APInt value;
+
+  switch (BO->getOpcode()) {
+    case Instruction::Add:
+      if (I1) {
+        I = I1;
+        value = CI2->getValue();
+      } else if (I2) {
+        I = I2;
+        value = CI1->getValue();
+      }
+      break;
+
+    case Instruction::Sub:
+      // Instructions like a = X-b, where X is a constant, are not
+      // represented in the graph.
+      if (!I1)
+        return;
+
+      I = I1;
+      value = -CI2->getValue();
+      break;
+
+    default:
+      return;
+  }
+
+  inequality_graph.addEdge(I, BO, value, true);
+  inequality_graph.addEdge(BO, I, -value, false);
+  createConstraintInstruction(I);
+}
+
+/// Given a binary operator, we are only interested in the case
+/// that one operand is an Instruction and the other is a ConstantInt. In
+/// this case the method returns true, otherwise false. It also obtains the
+/// Instruction and ConstantInt from the BinaryOperator and returns them.
+bool ABCD::createBinaryOperatorInfo(BinaryOperator *BO, Instruction **I1,
+                                    Instruction **I2, ConstantInt **C1,
+                                    ConstantInt **C2) {
+  Value *op1 = BO->getOperand(0);
+  Value *op2 = BO->getOperand(1);
+
+  if ((*I1 = dyn_cast<Instruction>(op1))) {
+    if ((*C2 = dyn_cast<ConstantInt>(op2)))
+      return true; // First is Instruction and second ConstantInt
+
+    return false; // Both are Instruction
+  } else {
+    if ((*C1 = dyn_cast<ConstantInt>(op1)) &&
+        (*I2 = dyn_cast<Instruction>(op2)))
+      return true; // First is ConstantInt and second Instruction
+
+    return false; // Both are not Instruction
+  }
+}
+
+/// Creates constraints for Comparator Instructions.
+/// Only comparators that have any of the following operators
+/// are used to create constraints: >=, >, <=, <, and only if
+/// at least one operand is an Instruction. In a Comparator Instruction
+/// a op b, there will be 4 sigma functions a_t, a_f, b_t and b_f, where
+/// t and f represent sigma for operands in true and false branches. The
+/// following constraints can be obtained. a_t <= a, a_f <= a, b_t <= b and
+/// b_f <= b. There are two more constraints that depend on the operator.
+/// For the operator <= : a_t <= b_t   and b_f <= a_f-1
+/// For the operator <  : a_t <= b_t-1 and b_f <= a_f
+/// For the operator >= : b_t <= a_t   and a_f <= b_f-1
+/// For the operator >  : b_t <= a_t-1 and a_f <= b_f
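+/// For example (illustrative), for a branch on "a < b" the true-successor
+/// sigmas receive the extra constraint a_t <= b_t - 1, encoding that a < b
+/// holds on that path.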
+void ABCD::createConstraintCmpInst(ICmpInst *ICI, TerminatorInst *TI) {
+  Value *V_op1 = ICI->getOperand(0);
+  Value *V_op2 = ICI->getOperand(1);
+
+  if (!isa<IntegerType>(V_op1->getType()))
+    return;
+
+  Instruction *I_op1 = dyn_cast<Instruction>(V_op1);
+  Instruction *I_op2 = dyn_cast<Instruction>(V_op2);
+
+  // Test if at least one operand is an Instruction
+  if (!I_op1 && !I_op2)
+    return;
+
+  BasicBlock *BB_succ_t = TI->getSuccessor(0);
+  BasicBlock *BB_succ_f = TI->getSuccessor(1);
+
+  PHINode *SIG_op1_t = NULL, *SIG_op1_f = NULL,
+          *SIG_op2_t = NULL, *SIG_op2_f = NULL;
+
+  createConstraintSigInst(I_op1, BB_succ_t, BB_succ_f, &SIG_op1_t, &SIG_op1_f);
+  createConstraintSigInst(I_op2, BB_succ_t, BB_succ_f, &SIG_op2_t, &SIG_op2_f);
+
+  int32_t width = cast<IntegerType>(V_op1->getType())->getBitWidth();
+  APInt MinusOne = APInt::getAllOnesValue(width);
+  APInt Zero = APInt::getNullValue(width);
+
+  CmpInst::Predicate Pred = ICI->getPredicate();
+  ConstantInt *CI1 = dyn_cast<ConstantInt>(V_op1);
+  ConstantInt *CI2 = dyn_cast<ConstantInt>(V_op2);
+  switch (Pred) {
+  case CmpInst::ICMP_SGT:  // signed greater than
+    createConstraintSigSig(SIG_op2_t, SIG_op1_t, CI2, CI1, MinusOne);
+    createConstraintSigSig(SIG_op1_f, SIG_op2_f, CI1, CI2, Zero);
+    break;
+
+  case CmpInst::ICMP_SGE:  // signed greater or equal
+    createConstraintSigSig(SIG_op2_t, SIG_op1_t, CI2, CI1, Zero);
+    createConstraintSigSig(SIG_op1_f, SIG_op2_f, CI1, CI2, MinusOne);
+    break;
+
+  case CmpInst::ICMP_SLT:  // signed less than
+    createConstraintSigSig(SIG_op1_t, SIG_op2_t, CI1, CI2, MinusOne);
+    createConstraintSigSig(SIG_op2_f, SIG_op1_f, CI2, CI1, Zero);
+    break;
+
+  case CmpInst::ICMP_SLE:  // signed less or equal
+    createConstraintSigSig(SIG_op1_t, SIG_op2_t, CI1, CI2, Zero);
+    createConstraintSigSig(SIG_op2_f, SIG_op1_f, CI2, CI1, MinusOne);
+    break;
+
+  default:
+    break;
+  }
+
+  if (I_op1)
+    createConstraintInstruction(I_op1);
+  if (I_op2)
+    createConstraintInstruction(I_op2);
+}
+
+/// Creates constraints for PHI nodes.
+/// In a PHI node a = phi(b,c) we can create the constraint
+/// a<= max(b,c). With this constraint there will be the edges,
+/// b->a and c->a with weight 0 in the lower bound graph, and the edges
+/// a->b and a->c with weight 0 in the upper bound graph.
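+/// For example (illustrative), for a 32-bit a = phi(b, c) each weight-0
+/// edge is created with APInt(32, 0), using the phi's bit width.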
+void ABCD::createConstraintPHINode(PHINode *PN) {
+  // FIXME: We really want to disallow sigma nodes, but I don't know a
+  // better way to detect them than this.
+  if (PN->getNumOperands() == 2) return;
+  
+  int32_t width = cast<IntegerType>(PN->getType())->getBitWidth();
+  for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
+    Value *V = PN->getIncomingValue(i);
+    if (Instruction *I = dyn_cast<Instruction>(V)) {
+      createConstraintInstruction(I);
+    }
+    inequality_graph.addEdge(V, PN, APInt(width, 0), true);
+    inequality_graph.addEdge(V, PN, APInt(width, 0), false);
+  }
+}
+
+/// This method creates a constraint between a Sigma and an Instruction.
+/// These constraints are created as soon as we find a comparator that uses a
+/// SSI variable.
+void ABCD::createConstraintSigInst(Instruction *I_op, BasicBlock *BB_succ_t,
+                                   BasicBlock *BB_succ_f, PHINode **SIG_op_t,
+                                   PHINode **SIG_op_f) {
+  *SIG_op_t = findSigma(BB_succ_t, I_op);
+  *SIG_op_f = findSigma(BB_succ_f, I_op);
+
+  if (*SIG_op_t) {
+    int32_t width = cast<IntegerType>((*SIG_op_t)->getType())->getBitWidth();
+    inequality_graph.addEdge(I_op, *SIG_op_t, APInt(width, 0), true);
+    inequality_graph.addEdge(*SIG_op_t, I_op, APInt(width, 0), false);
+  }
+  if (*SIG_op_f) {
+    int32_t width = cast<IntegerType>((*SIG_op_f)->getType())->getBitWidth();
+    inequality_graph.addEdge(I_op, *SIG_op_f, APInt(width, 0), true);
+    inequality_graph.addEdge(*SIG_op_f, I_op, APInt(width, 0), false);
+  }
+}
+
+/// If PN_op1 and PN_op2 are different from NULL, create a constraint
+/// PN_op2 -> PN_op1 with value. In case any of them is NULL, replace
+/// with the respective V_op#, if V_op# is a ConstantInt.
+void ABCD::createConstraintSigSig(PHINode *SIG_op1, PHINode *SIG_op2,
+                                  ConstantInt *V_op1, ConstantInt *V_op2,
+                                  APInt value) {
+  if (SIG_op1 && SIG_op2) {
+    inequality_graph.addEdge(SIG_op2, SIG_op1, value, true);
+    inequality_graph.addEdge(SIG_op1, SIG_op2, -value, false);
+  } else if (SIG_op1 && V_op2) {
+    inequality_graph.addEdge(V_op2, SIG_op1, value, true);
+    inequality_graph.addEdge(SIG_op1, V_op2, -value, false);
+  } else if (SIG_op2 && V_op1) {
+    inequality_graph.addEdge(SIG_op2, V_op1, value, true);
+    inequality_graph.addEdge(V_op1, SIG_op2, -value, false);
+  }
+}
+
+/// Returns the sigma representing the Instruction I in BasicBlock BB.
+/// Returns NULL in case there is no sigma for this Instruction in this
+/// Basic Block. This method assumes that sigmas are the first instructions
+/// in a block, and that there can be only two sigmas in a block. So it will
+/// only look at the first two instructions of BasicBlock BB.
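+/// For example (illustrative), in a successor block that starts with
+/// "%a.sigma = phi i32 [ %a, %pred ]", calling findSigma on that block and
+/// %a returns the %a.sigma node.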
+PHINode *ABCD::findSigma(BasicBlock *BB, Instruction *I) {
+  // If BB has more than one predecessor, it cannot have sigmas.
+  if (I == NULL || BB->getSinglePredecessor() == NULL)
+    return NULL;
+
+  BasicBlock::iterator begin = BB->begin();
+  BasicBlock::iterator end = BB->end();
+
+  for (unsigned i = 0; i < 2 && begin != end; ++i, ++begin) {
+    Instruction *I_succ = begin;
+    if (PHINode *PN = dyn_cast<PHINode>(I_succ))
+      if (PN->getIncomingValue(0) == I)
+        return PN;
+  }
+
+  return NULL;
+}
+
+/// Original ABCD algorithm to prove redundant checks.
+/// This implementation works on any kind of inequality branch.
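+/// demandProve(a, b, c, upper_bound) asks whether b - a <= c can be proven
+/// on the selected graph; any result other than False counts as proven.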
+bool ABCD::demandProve(Value *a, Value *b, int c, bool upper_bound) {
+  int32_t width = cast<IntegerType>(a->getType())->getBitWidth();
+  Bound *bound = new Bound(APInt(width, c), upper_bound);
+
+  mem_result.clear();
+  active.clear();
+
+  ProveResult res = prove(a, b, bound, 0);
+  return res != False;
+}
+
+/// Prove that the distance between b and a is <= bound
+ABCD::ProveResult ABCD::prove(Value *a, Value *b, Bound *bound,
+                              unsigned level) {
+  // If C[b-a<=e] == True for some e <= bound, the same or stronger
+  // difference was already proven.
+  if (mem_result.hasTrue(b, bound))
+    return True;
+
+  // If C[b-a<=e] == False for some e >= bound, the same or weaker
+  // difference was already disproved.
+  if (mem_result.hasFalse(b, bound))
+    return False;
+
+  // If C[b-a<=e] == Reduced for some e <= bound, b is on a cycle that was
+  // reduced for the same or stronger difference.
+  if (mem_result.hasReduced(b, bound))
+    return Reduced;
+
+  // traversal reached the source vertex
+  if (a == b && Bound::geq(bound, APInt(bound->getBitWidth(), 0, true)))
+    return True;
+
+  // if b has no predecessor then fail
+  if (!inequality_graph.hasEdge(b, bound->isUpperBound()))
+    return False;
+
+  // a cycle was encountered
+  if (active.count(b)) {
+    if (Bound::leq(active.lookup(b), bound))
+      return Reduced; // a "harmless" cycle
+
+    return False; // an amplifying cycle
+  }
+
+  active[b] = bound;
+  PHINode *PN = dyn_cast<PHINode>(b);
+
+  // Test if a Value is a phi. If it is a PHINode with more than 1 incoming
+  // value, then it is a phi; if it has 1 incoming value, it is a sigma.
+  if (PN && PN->getNumIncomingValues() > 1)
+    updateMemDistance(a, b, bound, level, min);
+  else
+    updateMemDistance(a, b, bound, level, max);
+
+  active.erase(b);
+
+  ABCD::ProveResult res = mem_result.getBoundResult(b, bound);
+  return res;
+}
+
+/// Updates the distance value for a and b
+void ABCD::updateMemDistance(Value *a, Value *b, Bound *bound, unsigned level,
+                             meet_function meet) {
+  ABCD::ProveResult res = (meet == max) ? False : True;
+
+  SmallPtrSet<Edge *, 16> Edges = inequality_graph.getEdges(b);
+  SmallPtrSet<Edge *, 16>::iterator begin = Edges.begin(), end = Edges.end();
+
+  for (; begin != end ; ++begin) {
+    if (((res >= Reduced) && (meet == max)) ||
+       ((res == False) && (meet == min))) {
+      break;
+    }
+    Edge *in = *begin;
+    if (in->isUpperBound() == bound->isUpperBound()) {
+      Value *succ = in->getVertex();
+      res = meet(res, prove(a, succ, new Bound(bound, in->getValue()),
+                 level+1));
+    }
+  }
+
+  mem_result.updateBound(b, bound, res);
+}
+
+/// Return the stored result for this bound
+ABCD::ProveResult
+ABCD::MemoizedResultChart::getResult(const Bound *bound) const {
+  if (max_false && Bound::leq(bound, max_false))
+    return False;
+  if (min_true && Bound::leq(min_true, bound))
+    return True;
+  if (min_reduced && Bound::leq(min_reduced, bound))
+    return Reduced;
+  return False;
+}
+
+/// Records a bound that was proven False
+void ABCD::MemoizedResultChart::addFalse(Bound *bound) {
+  if (!max_false || Bound::leq(max_false, bound))
+    max_false = bound;
+
+  if (Bound::eq(max_false, min_reduced))
+    min_reduced = Bound::createIncrement(min_reduced);
+  if (Bound::eq(max_false, min_true))
+    min_true = Bound::createIncrement(min_true);
+  if (Bound::eq(min_reduced, min_true))
+    min_reduced = NULL;
+  clearRedundantReduced();
+}
+
+/// Records a bound that was proven True
+void ABCD::MemoizedResultChart::addTrue(Bound *bound) {
+  if (!min_true || Bound::leq(bound, min_true))
+    min_true = bound;
+
+  if (Bound::eq(min_true, min_reduced))
+    min_reduced = Bound::createDecrement(min_reduced);
+  if (Bound::eq(min_true, max_false))
+    max_false = Bound::createDecrement(max_false);
+  if (Bound::eq(max_false, min_reduced))
+    min_reduced = NULL;
+  clearRedundantReduced();
+}
+
+/// Records a bound that was proven Reduced
+void ABCD::MemoizedResultChart::addReduced(Bound *bound) {
+  if (!min_reduced || Bound::leq(bound, min_reduced))
+    min_reduced = bound;
+
+  if (Bound::eq(min_reduced, min_true))
+    min_true = Bound::createIncrement(min_true);
+  if (Bound::eq(min_reduced, max_false))
+    max_false = Bound::createDecrement(max_false);
+}
+
+/// Clears redundant reduced
+/// If a min_true is smaller than a min_reduced then the min_reduced
+/// is unnecessary and is removed. It also works for min_reduced
+/// being smaller than max_false.
+void ABCD::MemoizedResultChart::clearRedundantReduced() {
+  if (min_true && min_reduced && Bound::lt(min_true, min_reduced))
+    min_reduced = NULL;
+  if (max_false && min_reduced && Bound::lt(min_reduced, max_false))
+    min_reduced = NULL;
+}
+
+/// Records the result res found for b under the given bound
+void ABCD::MemoizedResult::updateBound(Value *b, Bound *bound,
+                                       const ProveResult res) {
+  if (res == False) {
+    map[b].addFalse(bound);
+  } else if (res == True) {
+    map[b].addTrue(bound);
+  } else {
+    map[b].addReduced(bound);
+  }
+}
+
+/// Adds an edge from V_from to V_to with weight value
+void ABCD::InequalityGraph::addEdge(Value *V_to, Value *V_from,
+                                    APInt value, bool upper) {
+  assert(V_from->getType() == V_to->getType());
+  assert(cast<IntegerType>(V_from->getType())->getBitWidth() ==
+         value.getBitWidth());
+
+  DenseMap<Value *, SmallPtrSet<Edge *, 16> >::iterator from;
+  from = addNode(V_from);
+  from->second.insert(new Edge(V_to, value, upper));
+}
+
+/// Test if there is any edge from V in the given (upper or lower) direction
+bool ABCD::InequalityGraph::hasEdge(Value *V, bool upper) const {
+  SmallPtrSet<Edge *, 16> it = graph.lookup(V);
+
+  SmallPtrSet<Edge *, 16>::iterator begin = it.begin();
+  SmallPtrSet<Edge *, 16>::iterator end = it.end();
+  for (; begin != end; ++begin) {
+    if ((*begin)->isUpperBound() == upper) {
+      return true;
+    }
+  }
+  return false;
+}
+
+/// Prints the header of the dot file
+void ABCD::InequalityGraph::printHeader(raw_ostream &OS, Function &F) const {
+  OS << "digraph dotgraph {\n";
+  OS << "label=\"Inequality Graph for \'";
+  OS << F.getNameStr() << "\' function\";\n";
+  OS << "node [shape=record,fontname=\"Times-Roman\",fontsize=14];\n";
+}
+
+/// Prints the body of the dot file
+void ABCD::InequalityGraph::printBody(raw_ostream &OS) const {
+  DenseMap<Value *, SmallPtrSet<Edge *, 16> >::const_iterator begin =
+      graph.begin(), end = graph.end();
+
+  for (; begin != end ; ++begin) {
+    SmallPtrSet<Edge *, 16>::iterator begin_par =
+        begin->second.begin(), end_par = begin->second.end();
+    Value *source = begin->first;
+
+    printVertex(OS, source);
+
+    for (; begin_par != end_par ; ++begin_par) {
+      Edge *edge = *begin_par;
+      printEdge(OS, source, edge);
+    }
+  }
+}
+
+/// Prints vertex source to the dot file
+///
+void ABCD::InequalityGraph::printVertex(raw_ostream &OS, Value *source) const {
+  OS << "\"";
+  printName(OS, source);
+  OS << "\"";
+  OS << " [label=\"{";
+  printName(OS, source);
+  OS << "}\"];\n";
+}
+
+/// Prints the edge to the dot file
+void ABCD::InequalityGraph::printEdge(raw_ostream &OS, Value *source,
+                                      Edge *edge) const {
+  Value *dest = edge->getVertex();
+  APInt value = edge->getValue();
+  bool upper = edge->isUpperBound();
+
+  OS << "\"";
+  printName(OS, source);
+  OS << "\"";
+  OS << " -> ";
+  OS << "\"";
+  printName(OS, dest);
+  OS << "\"";
+  OS << " [label=\"" << value << "\"";
+  if (upper) {
+    OS << "color=\"blue\"";
+  } else {
+    OS << "color=\"red\"";
+  }
+  OS << "];\n";
+}
+
+void ABCD::InequalityGraph::printName(raw_ostream &OS, Value *info) const {
+  if (ConstantInt *CI = dyn_cast<ConstantInt>(info)) {
+    OS << *CI;
+  } else {
+    if (!info->hasName()) {
+      info->setName("V");
+    }
+    OS << info->getNameStr();
+  }
+}
+
+/// createABCDPass - The public interface to this file...
+FunctionPass *llvm::createABCDPass() {
+  return new ABCD();
+}
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/CMakeLists.txt b/libclamav/c++/llvm/lib/Transforms/Scalar/CMakeLists.txt
index 7f516b7..5a92399 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/CMakeLists.txt
@@ -1,12 +1,12 @@
 add_llvm_library(LLVMScalarOpts
+  ABCD.cpp
   ADCE.cpp
   BasicBlockPlacement.cpp
-  CodeGenLICM.cpp
   CodeGenPrepare.cpp
-  CondPropagate.cpp
   ConstantProp.cpp
   DCE.cpp
   DeadStoreElimination.cpp
+  GEPSplitter.cpp
   GVN.cpp
   IndVarSimplify.cpp
   InstructionCombining.cpp
@@ -16,13 +16,13 @@ add_llvm_library(LLVMScalarOpts
   LoopIndexSplit.cpp
   LoopRotation.cpp
   LoopStrengthReduce.cpp
-  LoopUnroll.cpp
+  LoopUnrollPass.cpp
   LoopUnswitch.cpp
   MemCpyOptimizer.cpp
-  PredicateSimplifier.cpp
   Reassociate.cpp
   Reg2Mem.cpp
   SCCP.cpp
+  SCCVN.cpp
   Scalar.cpp
   ScalarReplAggregates.cpp
   SimplifyCFGPass.cpp
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenLICM.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenLICM.cpp
deleted file mode 100644
index 9f1d148..0000000
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenLICM.cpp
+++ /dev/null
@@ -1,112 +0,0 @@
-//===- CodeGenLICM.cpp - LICM a function for code generation --------------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This function performs late LICM, hoisting constants out of loops that
-// are not valid immediates. It should not be followed by instcombine,
-// because instcombine would quickly stuff the constants back into the loop.
-//
-//===----------------------------------------------------------------------===//
-
-#define DEBUG_TYPE "codegen-licm"
-#include "llvm/Transforms/Scalar.h"
-#include "llvm/Constants.h"
-#include "llvm/DerivedTypes.h"
-#include "llvm/Instructions.h"
-#include "llvm/IntrinsicInst.h"
-#include "llvm/LLVMContext.h"
-#include "llvm/Analysis/LoopPass.h"
-#include "llvm/Analysis/AliasAnalysis.h"
-#include "llvm/Analysis/ScalarEvolution.h"
-#include "llvm/Analysis/IVUsers.h"
-#include "llvm/ADT/DenseMap.h"
-using namespace llvm;
-
-namespace {
-  class CodeGenLICM : public LoopPass {
-    virtual bool runOnLoop(Loop *L, LPPassManager &LPM);
-    virtual void getAnalysisUsage(AnalysisUsage &AU) const;
-  public:
-    static char ID; // Pass identification, replacement for typeid
-    explicit CodeGenLICM() : LoopPass(&ID) {}
-  };
-}
-
-char CodeGenLICM::ID = 0;
-static RegisterPass<CodeGenLICM> X("codegen-licm",
-                                   "hoist constants out of loops");
-
-Pass *llvm::createCodeGenLICMPass() {
-  return new CodeGenLICM();
-}
-
-bool CodeGenLICM::runOnLoop(Loop *L, LPPassManager &) {
-  bool Changed = false;
-
-  // Only visit outermost loops.
-  if (L->getParentLoop()) return Changed;
-
-  Instruction *PreheaderTerm = L->getLoopPreheader()->getTerminator();
-  DenseMap<Constant *, BitCastInst *> HoistedConstants;
-
-  for (Loop::block_iterator I = L->block_begin(), E = L->block_end();
-       I != E; ++I) {
-    BasicBlock *BB = *I;
-    for (BasicBlock::iterator BBI = BB->begin(), BBE = BB->end();
-         BBI != BBE; ++BBI) {
-      Instruction *I = BBI;
-      // TODO: For now, skip all intrinsic instructions, because some of them
-      // can require their operands to be constants, and we don't want to
-      // break that.
-      if (isa<IntrinsicInst>(I))
-        continue;
-      // LLVM represents fneg as -0.0-x; don't hoist the -0.0 out.
-      if (BinaryOperator::isFNeg(I) ||
-          BinaryOperator::isNeg(I) ||
-          BinaryOperator::isNot(I))
-        continue;
-      for (unsigned i = 0, e = I->getNumOperands(); i != e; ++i) {
-        // Don't hoist out switch case constants.
-        if (isa<SwitchInst>(I) && i == 1)
-          break;
-        // Don't hoist out shuffle masks.
-        if (isa<ShuffleVectorInst>(I) && i == 2)
-          break;
-        Value *Op = I->getOperand(i);
-        Constant *C = dyn_cast<Constant>(Op);
-        if (!C) continue;
-        // TODO: Ask the target which constants are legal. This would allow
-        // us to add support for hoisting ConstantInts and GlobalValues too.
-        if (isa<ConstantFP>(C) ||
-            isa<ConstantVector>(C) ||
-            isa<ConstantAggregateZero>(C)) {
-          BitCastInst *&BC = HoistedConstants[C];
-          if (!BC)
-            BC = new BitCastInst(C, C->getType(), "hoist", PreheaderTerm);
-          I->setOperand(i, BC);
-          Changed = true;
-        }
-      }
-    }
-  }
-
-  return Changed;
-}
-
-void CodeGenLICM::getAnalysisUsage(AnalysisUsage &AU) const {
-  // This pass preserves just about everything. List some popular things here.
-  AU.setPreservesCFG();
-  AU.addPreservedID(LoopSimplifyID);
-  AU.addPreserved<LoopInfo>();
-  AU.addPreserved<AliasAnalysis>();
-  AU.addPreserved<ScalarEvolution>();
-  AU.addPreserved<IVUsers>();
-
-  // Hoisting requires a loop preheader.
-  AU.addRequiredID(LoopSimplifyID);
-}
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
index a3e3fea..9ca90c3 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
@@ -73,6 +73,7 @@ namespace {
                             DenseMap<Value*,Value*> &SunkAddrs);
     bool OptimizeInlineAsmInst(Instruction *I, CallSite CS,
                                DenseMap<Value*,Value*> &SunkAddrs);
+    bool MoveExtToFormExtLoad(Instruction *I);
     bool OptimizeExtUses(Instruction *I);
     void findLoopBackEdges(const Function &F);
   };
@@ -317,6 +318,7 @@ static void SplitEdgeNicely(TerminatorInst *TI, unsigned SuccNum,
     if (Invoke->getSuccessor(1) == Dest)
       return;
   }
+  
 
   // As a hack, never split backedges of loops.  Even though the copy for any
   // PHIs inserted on the backedge would be dead for exits from the loop, we
@@ -731,6 +733,43 @@ bool CodeGenPrepare::OptimizeInlineAsmInst(Instruction *I, CallSite CS,
   return MadeChange;
 }
 
+/// MoveExtToFormExtLoad - Move a zext or sext fed by a load into the same
+/// basic block as the load, unless conditions are unfavorable. This allows
+/// SelectionDAG to fold the extend into the load.
+///
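+/// For example (illustrative): when an i8 load sits in one block and its
+/// zext to i32 in another, the extend is moved next to the load so that
+/// instruction selection can form a single extending load (ZEXTLOAD) where
+/// the target supports it.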
+bool CodeGenPrepare::MoveExtToFormExtLoad(Instruction *I) {
+  // Look for a load being extended.
+  LoadInst *LI = dyn_cast<LoadInst>(I->getOperand(0));
+  if (!LI) return false;
+
+  // If they're already in the same block, there's nothing to do.
+  if (LI->getParent() == I->getParent())
+    return false;
+
+  // If the load has other users and the truncate is not free, this probably
+  // isn't worthwhile.
+  if (!LI->hasOneUse() &&
+      TLI && !TLI->isTruncateFree(I->getType(), LI->getType()))
+    return false;
+
+  // Check whether the target supports casts folded into loads.
+  unsigned LType;
+  if (isa<ZExtInst>(I))
+    LType = ISD::ZEXTLOAD;
+  else {
+    assert(isa<SExtInst>(I) && "Unexpected ext type!");
+    LType = ISD::SEXTLOAD;
+  }
+  if (TLI && !TLI->isLoadExtLegal(LType, TLI->getValueType(LI->getType())))
+    return false;
+
+  // Move the extend into the same block as the load, so that SelectionDAG
+  // can fold it.
+  I->removeFromParent();
+  I->insertAfter(LI);
+  return true;
+}
+
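The profitability tests above compress to a small decision: moving the
extend is only worthwhile when the load and the extend sit in different
blocks, when widening will not strand other users of the plain load (unless
the truncate back is free), and when the target really has an extending
load. A hedged plain-C++ model of that decision (illustrative only, not the
LLVM API):

    // Inputs mirror the checks in MoveExtToFormExtLoad above.
    struct ExtDecision {
      bool SameBlockAsLoad;  // already adjacent: nothing to do
      bool LoadHasOneUse;    // other users would keep the plain load alive
      bool TruncIsFree;      // target can cheaply re-narrow the wide value
      bool ExtLoadLegal;     // target has a zext/sext-ing load for the type
    };

    bool shouldMoveExt(const ExtDecision &D) {
      if (D.SameBlockAsLoad)
        return false;
      if (!D.LoadHasOneUse && !D.TruncIsFree)
        return false;
      return D.ExtLoadLegal;
    }
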
 bool CodeGenPrepare::OptimizeExtUses(Instruction *I) {
   BasicBlock *DefBB = I->getParent();
 
@@ -814,7 +853,7 @@ bool CodeGenPrepare::OptimizeBlock(BasicBlock &BB) {
 
   // Split all critical edges where the dest block has a PHI.
   TerminatorInst *BBTI = BB.getTerminator();
-  if (BBTI->getNumSuccessors() > 1) {
+  if (BBTI->getNumSuccessors() > 1 && !isa<IndirectBrInst>(BBTI)) {
     for (unsigned i = 0, e = BBTI->getNumSuccessors(); i != e; ++i) {
       BasicBlock *SuccBB = BBTI->getSuccessor(i);
       if (isa<PHINode>(SuccBB->begin()) && isCriticalEdge(BBTI, i, true))
@@ -846,8 +885,10 @@ bool CodeGenPrepare::OptimizeBlock(BasicBlock &BB) {
         MadeChange |= Change;
       }
 
-      if (!Change && (isa<ZExtInst>(I) || isa<SExtInst>(I)))
+      if (!Change && (isa<ZExtInst>(I) || isa<SExtInst>(I))) {
+        MadeChange |= MoveExtToFormExtLoad(I);
         MadeChange |= OptimizeExtUses(I);
+      }
     } else if (CmpInst *CI = dyn_cast<CmpInst>(I)) {
       MadeChange |= OptimizeCmpExpression(CI);
     } else if (LoadInst *LI = dyn_cast<LoadInst>(I)) {
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/CondPropagate.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/CondPropagate.cpp
deleted file mode 100644
index 5b573f4..0000000
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/CondPropagate.cpp
+++ /dev/null
@@ -1,287 +0,0 @@
-//===-- CondPropagate.cpp - Propagate Conditional Expressions -------------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This pass propagates information about conditional expressions through the
-// program, allowing it to eliminate conditional branches in some cases.
-//
-//===----------------------------------------------------------------------===//
-
-#define DEBUG_TYPE "condprop"
-#include "llvm/Transforms/Scalar.h"
-#include "llvm/Instructions.h"
-#include "llvm/IntrinsicInst.h"
-#include "llvm/Pass.h"
-#include "llvm/Type.h"
-#include "llvm/Transforms/Utils/BasicBlockUtils.h"
-#include "llvm/Transforms/Utils/Local.h"
-#include "llvm/ADT/Statistic.h"
-#include "llvm/ADT/SmallVector.h"
-using namespace llvm;
-
-STATISTIC(NumBrThread, "Number of CFG edges threaded through branches");
-STATISTIC(NumSwThread, "Number of CFG edges threaded through switches");
-
-namespace {
-  struct CondProp : public FunctionPass {
-    static char ID; // Pass identification, replacement for typeid
-    CondProp() : FunctionPass(&ID) {}
-
-    virtual bool runOnFunction(Function &F);
-
-    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
-      AU.addRequiredID(BreakCriticalEdgesID);
-      //AU.addRequired<DominanceFrontier>();
-    }
-
-  private:
-    bool MadeChange;
-    SmallVector<BasicBlock *, 4> DeadBlocks;
-    void SimplifyBlock(BasicBlock *BB);
-    void SimplifyPredecessors(BranchInst *BI);
-    void SimplifyPredecessors(SwitchInst *SI);
-    void RevectorBlockTo(BasicBlock *FromBB, BasicBlock *ToBB);
-    bool RevectorBlockTo(BasicBlock *FromBB, Value *Cond, BranchInst *BI);
-  };
-}
-  
-char CondProp::ID = 0;
-static RegisterPass<CondProp> X("condprop", "Conditional Propagation");
-
-FunctionPass *llvm::createCondPropagationPass() {
-  return new CondProp();
-}
-
-bool CondProp::runOnFunction(Function &F) {
-  bool EverMadeChange = false;
-  DeadBlocks.clear();
-
-  // While we are simplifying blocks, keep iterating.
-  do {
-    MadeChange = false;
-    for (Function::iterator BB = F.begin(), E = F.end(); BB != E;)
-      SimplifyBlock(BB++);
-    EverMadeChange = EverMadeChange || MadeChange;
-  } while (MadeChange);
-
-  if (EverMadeChange) {
-    while (!DeadBlocks.empty()) {
-      BasicBlock *BB = DeadBlocks.back(); DeadBlocks.pop_back();
-      DeleteDeadBlock(BB);
-    }
-  }
-  return EverMadeChange;
-}
-
-void CondProp::SimplifyBlock(BasicBlock *BB) {
-  if (BranchInst *BI = dyn_cast<BranchInst>(BB->getTerminator())) {
-    // If this is a conditional branch based on a phi node that is defined in
-    // this block, see if we can simplify predecessors of this block.
-    if (BI->isConditional() && isa<PHINode>(BI->getCondition()) &&
-        cast<PHINode>(BI->getCondition())->getParent() == BB)
-      SimplifyPredecessors(BI);
-
-  } else if (SwitchInst *SI = dyn_cast<SwitchInst>(BB->getTerminator())) {
-    if (isa<PHINode>(SI->getCondition()) &&
-        cast<PHINode>(SI->getCondition())->getParent() == BB)
-      SimplifyPredecessors(SI);
-  }
-
-  // If possible, simplify the terminator of this block.
-  if (ConstantFoldTerminator(BB))
-    MadeChange = true;
-
-  // If this block ends with an unconditional branch and the only successor has
-  // only this block as a predecessor, merge the two blocks together.
-  if (BranchInst *BI = dyn_cast<BranchInst>(BB->getTerminator()))
-    if (BI->isUnconditional() && BI->getSuccessor(0)->getSinglePredecessor() &&
-        BB != BI->getSuccessor(0)) {
-      BasicBlock *Succ = BI->getSuccessor(0);
-      
-      // If Succ has any PHI nodes, they are all single-entry PHI's.  Eliminate
-      // them.
-      FoldSingleEntryPHINodes(Succ);
-      
-      // Remove BI.
-      BI->eraseFromParent();
-
-      // Move over all of the instructions.
-      BB->getInstList().splice(BB->end(), Succ->getInstList());
-
-      // Any phi nodes that had entries for Succ now have entries from BB.
-      Succ->replaceAllUsesWith(BB);
-
-      // Succ is now dead, but we cannot delete it without potentially
-      // invalidating iterators elsewhere.  Just insert an unreachable
-      // instruction in it and delete this block later on.
-      new UnreachableInst(BB->getContext(), Succ);
-      DeadBlocks.push_back(Succ);
-      MadeChange = true;
-    }
-}
-
-// SimplifyPredecessors(branches) - We know that BI is a conditional branch
-// based on a PHI node defined in this block.  If the phi node contains constant
-// operands, then the blocks corresponding to those operands can be modified to
-// jump directly to the destination instead of going through this block.
-void CondProp::SimplifyPredecessors(BranchInst *BI) {
-  // TODO: We currently only handle the most trivial case, where the PHI node
-  // has one use (the branch), and is the only instruction besides the branch
-  // and dbg intrinsics in the block.
-  PHINode *PN = cast<PHINode>(BI->getCondition());
-
-  if (PN->getNumIncomingValues() == 1) {
-    // Eliminate single-entry PHI nodes.
-    FoldSingleEntryPHINodes(PN->getParent());
-    return;
-  }
-  
-  
-  if (!PN->hasOneUse()) return;
-
-  BasicBlock *BB = BI->getParent();
-  if (&*BB->begin() != PN)
-    return;
-  BasicBlock::iterator BBI = BB->begin();
-  BasicBlock::iterator BBE = BB->end();
-  while (BBI != BBE && isa<DbgInfoIntrinsic>(++BBI)) /* empty */;
-  if (&*BBI != BI)
-    return;
-
-  // Ok, we have this really simple case, walk the PHI operands, looking for
-  // constants.  Walk from the end to remove operands from the end when
-  // possible, and to avoid invalidating "i".
-  for (unsigned i = PN->getNumIncomingValues(); i != 0; --i) {
-    Value *InVal = PN->getIncomingValue(i-1);
-    if (!RevectorBlockTo(PN->getIncomingBlock(i-1), InVal, BI))
-      continue;
-
-    ++NumBrThread;
-
-    // If there were two predecessors before this simplification, or if the
-    // PHI node contained all the same value except for the one we just
-    // substituted, the PHI node may be deleted.  Don't iterate through it the
-    // last time.
-    if (BI->getCondition() != PN) return;
-  }
-}
-
-// SimplifyPredecessors(switch) - We know that SI is a switch based on a PHI node
-// defined in this block.  If the phi node contains constant operands, then the
-// blocks corresponding to those operands can be modified to jump directly to
-// the destination instead of going through this block.
-void CondProp::SimplifyPredecessors(SwitchInst *SI) {
-  // TODO: We currently only handle the most trivial case, where the PHI node
-  // has one use (the branch), and is the only instruction besides the branch
-  // and dbg intrinsics in the block.
-  PHINode *PN = cast<PHINode>(SI->getCondition());
-  if (!PN->hasOneUse()) return;
-
-  BasicBlock *BB = SI->getParent();
-  if (&*BB->begin() != PN)
-    return;
-  BasicBlock::iterator BBI = BB->begin();
-  BasicBlock::iterator BBE = BB->end();
-  while (BBI != BBE && isa<DbgInfoIntrinsic>(++BBI)) /* empty */;
-  if (&*BBI != SI)
-    return;
-
-  // Ok, we have this really simple case, walk the PHI operands, looking for
-  // constants.  Walk from the end to remove operands from the end when
-  // possible, and to avoid invalidating "i".
-  for (unsigned i = PN->getNumIncomingValues(); i != 0; --i)
-    if (ConstantInt *CI = dyn_cast<ConstantInt>(PN->getIncomingValue(i-1))) {
-      // If we have a constant, forward the edge from its current to its
-      // ultimate destination.
-      unsigned DestCase = SI->findCaseValue(CI);
-      RevectorBlockTo(PN->getIncomingBlock(i-1),
-                      SI->getSuccessor(DestCase));
-      ++NumSwThread;
-
-      // If there were two predecessors before this simplification, or if the
-      // PHI node contained all the same value except for the one we just
-      // substituted, the PHI node may be deleted.  Don't iterate through it the
-      // last time.
-      if (SI->getCondition() != PN) return;
-    }
-}
-
-
-// RevectorBlockTo - Revector the unconditional branch at the end of FromBB to
-// the ToBB block, which is one of the successors of its current successor.
-void CondProp::RevectorBlockTo(BasicBlock *FromBB, BasicBlock *ToBB) {
-  BranchInst *FromBr = cast<BranchInst>(FromBB->getTerminator());
-  assert(FromBr->isUnconditional() && "FromBB should end with uncond br!");
-
-  // Get the old block we are threading through.
-  BasicBlock *OldSucc = FromBr->getSuccessor(0);
-
-  // OldSucc had multiple successors. If ToBB has multiple predecessors, then 
-  // the edge between them would be critical, which we already took care of.
-  // If ToBB has a single-operand PHI node, take care of it here.
-  FoldSingleEntryPHINodes(ToBB);
-
-  // Update PHI nodes in OldSucc to know that FromBB no longer branches to it.
-  OldSucc->removePredecessor(FromBB);
-
-  // Change FromBr to branch to the new destination.
-  FromBr->setSuccessor(0, ToBB);
-
-  MadeChange = true;
-}
-
-bool CondProp::RevectorBlockTo(BasicBlock *FromBB, Value *Cond, BranchInst *BI){
-  BranchInst *FromBr = cast<BranchInst>(FromBB->getTerminator());
-  if (!FromBr->isUnconditional())
-    return false;
-
-  // Get the old block we are threading through.
-  BasicBlock *OldSucc = FromBr->getSuccessor(0);
-
-  // If the condition is a constant, simply revector the unconditional branch at
-  // the end of FromBB to one of the successors of its current successor.
-  if (ConstantInt *CB = dyn_cast<ConstantInt>(Cond)) {
-    BasicBlock *ToBB = BI->getSuccessor(CB->isZero());
-
-    // OldSucc had multiple successors. If ToBB has multiple predecessors, then 
-    // the edge between them would be critical, which we already took care of.
-    // If ToBB has a single-operand PHI node, take care of it here.
-    FoldSingleEntryPHINodes(ToBB);
-
-    // Update PHI nodes in OldSucc to know that FromBB no longer branches to it.
-    OldSucc->removePredecessor(FromBB);
-
-    // Change FromBr to branch to the new destination.
-    FromBr->setSuccessor(0, ToBB);
-  } else {
-    BasicBlock *Succ0 = BI->getSuccessor(0);
-    // Do not perform transform if the new destination has PHI nodes. The
-    // transform will add new preds to the PHI's.
-    if (isa<PHINode>(Succ0->begin()))
-      return false;
-
-    BasicBlock *Succ1 = BI->getSuccessor(1);
-    if (isa<PHINode>(Succ1->begin()))
-      return false;
-
-    // Insert the new conditional branch.
-    BranchInst::Create(Succ0, Succ1, Cond, FromBr);
-
-    FoldSingleEntryPHINodes(Succ0);
-    FoldSingleEntryPHINodes(Succ1);
-
-    // Update PHI nodes in OldSucc to know that FromBB no longer branches to it.
-    OldSucc->removePredecessor(FromBB);
-
-    // Delete the old branch.
-    FromBr->eraseFromParent();
-  }
-
-  MadeChange = true;
-  return true;
-}
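
The threading idea in the removed pass: when a block's terminator tests a
phi, any predecessor feeding that phi a constant already knows which way the
branch will go, so its edge can be redirected straight to the final
destination. A source-level picture in plain C++ (a hedged illustration, not
the pass itself):

    #include <cstdio>

    static void onTrue()  { std::puts("true path"); }
    static void onFalse() { std::puts("false path"); }

    // The phi + conditional branch: every caller funnels through one test.
    static void test(bool Cond) { Cond ? onTrue() : onFalse(); }

    int main() {
      test(true);   // after threading: jumps straight to onTrue()
      test(false);  // after threading: jumps straight to onFalse()
      return 0;
    }
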
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/ConstantProp.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/ConstantProp.cpp
index 4fee327..ea20813 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/ConstantProp.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/ConstantProp.cpp
@@ -66,7 +66,7 @@ bool ConstantPropagation::runOnFunction(Function &F) {
     WorkList.erase(WorkList.begin());    // Get an element from the worklist...
 
     if (!I->use_empty())                 // Don't muck with dead instructions...
-      if (Constant *C = ConstantFoldInstruction(I, F.getContext())) {
+      if (Constant *C = ConstantFoldInstruction(I)) {
         // Add all of the users of this instruction to the worklist, they might
         // be constant propagatable now...
         for (Value::use_iterator UI = I->use_begin(), UE = I->use_end();
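
The worklist discipline matters here: folding one instruction to a constant
can make each of its users foldable in turn, so the users are queued rather
than rescanning the whole function. A minimal sketch in plain C++ (hedged;
hypothetical types, not the pass's own):

    #include <set>
    #include <vector>

    struct Node {
      std::vector<Node *> Users;
      bool Folded = false;
    };

    void propagate(std::set<Node *> &WorkList) {
      while (!WorkList.empty()) {
        Node *N = *WorkList.begin();
        WorkList.erase(WorkList.begin());
        if (N->Folded)
          continue;
        N->Folded = true;           // stands in for "replace with constant"
        for (Node *U : N->Users)
          WorkList.insert(U);       // users may now fold too
      }
    }
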
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
index a7b3e75..b0988b5 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
@@ -26,6 +26,7 @@
 #include "llvm/ADT/Statistic.h"
 #include "llvm/Analysis/AliasAnalysis.h"
 #include "llvm/Analysis/Dominators.h"
+#include "llvm/Analysis/MemoryBuiltins.h"
 #include "llvm/Analysis/MemoryDependenceAnalysis.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Transforms/Utils/Local.h"
@@ -49,7 +50,7 @@ namespace {
     }
     
     bool runOnBasicBlock(BasicBlock &BB);
-    bool handleFreeWithNonTrivialDependency(FreeInst *F, MemDepResult Dep);
+    bool handleFreeWithNonTrivialDependency(Instruction *F, MemDepResult Dep);
     bool handleEndBlock(BasicBlock &BB);
     bool RemoveUndeadPointers(Value* Ptr, uint64_t killPointerSize,
                               BasicBlock::iterator& BBI,
@@ -77,6 +78,98 @@ static RegisterPass<DSE> X("dse", "Dead Store Elimination");
 
 FunctionPass *llvm::createDeadStoreEliminationPass() { return new DSE(); }
 
+/// doesClobberMemory - Does this instruction clobber (write without reading)
+/// some memory?
+static bool doesClobberMemory(Instruction *I) {
+  if (isa<StoreInst>(I))
+    return true;
+  if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(I)) {
+    switch (II->getIntrinsicID()) {
+    default: return false;
+    case Intrinsic::memset: case Intrinsic::memmove: case Intrinsic::memcpy:
+    case Intrinsic::init_trampoline: case Intrinsic::lifetime_end: return true;
+    }
+  }
+  return false;
+}
+
+/// isElidable - If the value of this instruction and the memory it writes to
+/// are unused, may we delete this instruction?
+static bool isElidable(Instruction *I) {
+  assert(doesClobberMemory(I));
+  if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(I))
+    return II->getIntrinsicID() != Intrinsic::lifetime_end;
+  if (StoreInst *SI = dyn_cast<StoreInst>(I))
+    return !SI->isVolatile();
+  return true;
+}
+
+/// getPointerOperand - Return the pointer that is being clobbered.
+static Value *getPointerOperand(Instruction *I) {
+  assert(doesClobberMemory(I));
+  if (StoreInst *SI = dyn_cast<StoreInst>(I))
+    return SI->getPointerOperand();
+  if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(I))
+    return MI->getOperand(1);
+  IntrinsicInst *II = cast<IntrinsicInst>(I);
+  switch (II->getIntrinsicID()) {
+    default:
+      assert(false && "Unexpected intrinsic!");
+    case Intrinsic::init_trampoline:
+      return II->getOperand(1);
+    case Intrinsic::lifetime_end:
+      return II->getOperand(2);
+  }
+}
+
+/// getStoreSize - Return the length in bytes of the write by the clobbering
+/// instruction. If variable or unknown, returns -1.
+static unsigned getStoreSize(Instruction *I, const TargetData *TD) {
+  assert(doesClobberMemory(I));
+  if (StoreInst *SI = dyn_cast<StoreInst>(I)) {
+    if (!TD) return -1u;
+    return TD->getTypeStoreSize(SI->getOperand(0)->getType());
+  }
+
+  Value *Len;
+  if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(I)) {
+    Len = MI->getLength();
+  } else {
+    IntrinsicInst *II = cast<IntrinsicInst>(I);
+    switch (II->getIntrinsicID()) {
+      default:
+        assert(false && "Unexpected intrinsic!");
+      case Intrinsic::init_trampoline:
+        return -1u;
+      case Intrinsic::lifetime_end:
+        Len = II->getOperand(1);
+        break;
+    }
+  }
+  if (ConstantInt *LenCI = dyn_cast<ConstantInt>(Len))
+    if (!LenCI->isAllOnesValue())
+      return LenCI->getZExtValue();
+  return -1u;
+}
+
+/// isStoreAtLeastAsWideAs - Return true if the size of the store in I1 is
+/// greater than or equal to the size of the store in I2.  This returns false
+/// if we don't know.
+///
+static bool isStoreAtLeastAsWideAs(Instruction *I1, Instruction *I2,
+                                   const TargetData *TD) {
+  const Type *I1Ty = getPointerOperand(I1)->getType();
+  const Type *I2Ty = getPointerOperand(I2)->getType();
+  
+  // Exactly the same type, must have exactly the same size.
+  if (I1Ty == I2Ty) return true;
+  
+  int I1Size = getStoreSize(I1, TD);
+  int I2Size = getStoreSize(I2, TD);
+  
+  return I1Size != -1 && I2Size != -1 && I1Size >= I2Size;
+}
+
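Note the conservatism baked into these helpers: getStoreSize() reports
unknown or variable lengths as -1u, and the width comparison must treat
"don't know" as "not at least as wide", never as a huge known size. In
miniature (plain C++, hedged):

    #include <cstdint>

    // A later write kills an earlier write to the same location only if it
    // covers at least as many bytes; unknown sizes keep both stores.
    bool laterStoreKillsEarlier(int64_t LaterSize, int64_t EarlierSize) {
      if (LaterSize < 0 || EarlierSize < 0)
        return false;               // unknown: be conservative
      return LaterSize >= EarlierSize;
    }
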
 bool DSE::runOnBasicBlock(BasicBlock &BB) {
   MemoryDependenceAnalysis& MD = getAnalysis<MemoryDependenceAnalysis>();
   TD = getAnalysisIfAvailable<TargetData>();
@@ -88,14 +181,9 @@ bool DSE::runOnBasicBlock(BasicBlock &BB) {
     Instruction *Inst = BBI++;
     
     // If we find a store or a free, get its memory dependence.
-    if (!isa<StoreInst>(Inst) && !isa<FreeInst>(Inst))
+    if (!doesClobberMemory(Inst) && !isFreeCall(Inst))
       continue;
     
-    // Don't molest volatile stores or do queries that will return "clobber".
-    if (StoreInst *SI = dyn_cast<StoreInst>(Inst))
-      if (SI->isVolatile())
-        continue;
-
     MemDepResult InstDep = MD.getDependency(Inst);
     
     // Ignore non-local stores.
@@ -103,23 +191,21 @@ bool DSE::runOnBasicBlock(BasicBlock &BB) {
     if (InstDep.isNonLocal()) continue;
   
     // Handle frees whose dependencies are non-trivial.
-    if (FreeInst *FI = dyn_cast<FreeInst>(Inst)) {
-      MadeChange |= handleFreeWithNonTrivialDependency(FI, InstDep);
+    if (isFreeCall(Inst)) {
+      MadeChange |= handleFreeWithNonTrivialDependency(Inst, InstDep);
       continue;
     }
     
-    StoreInst *SI = cast<StoreInst>(Inst);
-    
     // If not a definite must-alias dependency, ignore it.
     if (!InstDep.isDef())
       continue;
     
     // If this is a store-store dependence, then the previous store is dead so
     // long as this store is at least as big as it.
-    if (StoreInst *DepStore = dyn_cast<StoreInst>(InstDep.getInst()))
-      if (TD &&
-          TD->getTypeStoreSize(DepStore->getOperand(0)->getType()) <=
-          TD->getTypeStoreSize(SI->getOperand(0)->getType())) {
+    if (doesClobberMemory(InstDep.getInst())) {
+      Instruction *DepStore = InstDep.getInst();
+      if (isStoreAtLeastAsWideAs(Inst, DepStore, TD) &&
+          isElidable(DepStore)) {
         // Delete the store and now-dead instructions that feed it.
         DeleteDeadInstruction(DepStore);
         NumFastStores++;
@@ -132,17 +218,43 @@ bool DSE::runOnBasicBlock(BasicBlock &BB) {
           --BBI;
         continue;
       }
+    }
+    
+    if (!isElidable(Inst))
+      continue;
     
     // If we're storing the same value back to a pointer that we just
     // loaded from, then the store can be removed.
-    if (LoadInst *DepLoad = dyn_cast<LoadInst>(InstDep.getInst())) {
-      if (SI->getPointerOperand() == DepLoad->getPointerOperand() &&
-          SI->getOperand(0) == DepLoad) {
+    if (StoreInst *SI = dyn_cast<StoreInst>(Inst)) {
+      if (LoadInst *DepLoad = dyn_cast<LoadInst>(InstDep.getInst())) {
+        if (SI->getPointerOperand() == DepLoad->getPointerOperand() &&
+            SI->getOperand(0) == DepLoad) {
+          // DeleteDeadInstruction can delete the current instruction.  Save BBI
+          // in case we need it.
+          WeakVH NextInst(BBI);
+          
+          DeleteDeadInstruction(SI);
+          
+          if (NextInst == 0)  // Next instruction deleted.
+            BBI = BB.begin();
+          else if (BBI != BB.begin())  // Revisit this instruction if possible.
+            --BBI;
+          NumFastStores++;
+          MadeChange = true;
+          continue;
+        }
+      }
+    }
+    
+    // If this is a lifetime end marker, we can throw away the store.
+    if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(InstDep.getInst())) {
+      if (II->getIntrinsicID() == Intrinsic::lifetime_end) {
+        // Delete the store and now-dead instructions that feed it.
         // DeleteDeadInstruction can delete the current instruction.  Save BBI
         // in case we need it.
         WeakVH NextInst(BBI);
         
-        DeleteDeadInstruction(SI);
+        DeleteDeadInstruction(Inst);
         
         if (NextInst == 0)  // Next instruction deleted.
           BBI = BB.begin();
@@ -165,17 +277,17 @@ bool DSE::runOnBasicBlock(BasicBlock &BB) {
 
 /// handleFreeWithNonTrivialDependency - Handle frees of entire structures whose
 /// dependency is a store to a field of that structure.
-bool DSE::handleFreeWithNonTrivialDependency(FreeInst *F, MemDepResult Dep) {
+bool DSE::handleFreeWithNonTrivialDependency(Instruction *F, MemDepResult Dep) {
   AliasAnalysis &AA = getAnalysis<AliasAnalysis>();
   
-  StoreInst *Dependency = dyn_cast_or_null<StoreInst>(Dep.getInst());
-  if (!Dependency || Dependency->isVolatile())
+  Instruction *Dependency = Dep.getInst();
+  if (!Dependency || !doesClobberMemory(Dependency) || !isElidable(Dependency))
     return false;
   
-  Value *DepPointer = Dependency->getPointerOperand()->getUnderlyingObject();
+  Value *DepPointer = getPointerOperand(Dependency)->getUnderlyingObject();
 
   // Check for aliasing.
-  if (AA.alias(F->getPointerOperand(), 1, DepPointer, 1) !=
+  if (AA.alias(F->getOperand(1), 1, DepPointer, 1) !=
          AliasAnalysis::MustAlias)
     return false;
   
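The pattern this free-handling targets, at source level: a store whose only
possible observer is an immediately following free of the same object can
never be read, so it is dead. A hedged plain-C++ example:

    #include <cstdlib>

    void example() {
      int *P = static_cast<int *>(std::malloc(sizeof(int)));
      if (!P)
        return;
      *P = 7;        // dead store: no read can happen before...
      std::free(P);  // ...the memory is released
    }
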
@@ -217,39 +329,28 @@ bool DSE::handleEndBlock(BasicBlock &BB) {
     --BBI;
     
     // If we find a store whose pointer is dead.
-    if (StoreInst* S = dyn_cast<StoreInst>(BBI)) {
-      if (!S->isVolatile()) {
+    if (doesClobberMemory(BBI)) {
+      if (isElidable(BBI)) {
         // See through pointer-to-pointer bitcasts
-        Value* pointerOperand = S->getPointerOperand()->getUnderlyingObject();
+        Value *pointerOperand = getPointerOperand(BBI)->getUnderlyingObject();
 
         // Alloca'd pointers or byval arguments (which are functionally like
         // alloca's) are valid candidates for removal.
         if (deadPointers.count(pointerOperand)) {
           // DCE instructions only used to calculate that store.
+          Instruction *Dead = BBI;
           BBI++;
-          DeleteDeadInstruction(S, &deadPointers);
+          DeleteDeadInstruction(Dead, &deadPointers);
           NumFastStores++;
           MadeChange = true;
+          continue;
         }
       }
       
-      continue;
-    }
-    
-    // We can also remove memcpy's to local variables at the end of a function.
-    if (MemCpyInst *M = dyn_cast<MemCpyInst>(BBI)) {
-      Value *dest = M->getDest()->getUnderlyingObject();
-
-      if (deadPointers.count(dest)) {
-        BBI++;
-        DeleteDeadInstruction(M, &deadPointers);
-        NumFastOther++;
-        MadeChange = true;
+      // Because a memcpy or memmove is also a load, we can't skip it if we
+      // didn't remove it.
+      if (!isa<MemTransferInst>(BBI))
         continue;
-      }
-      
-      // Because a memcpy is also a load, we can't skip it if we didn't remove
-      // it.
     }
     
     Value* killPointer = 0;
@@ -270,11 +371,11 @@ bool DSE::handleEndBlock(BasicBlock &BB) {
       killPointer = L->getPointerOperand();
     } else if (VAArgInst* V = dyn_cast<VAArgInst>(BBI)) {
       killPointer = V->getOperand(0);
-    } else if (isa<MemCpyInst>(BBI) &&
-               isa<ConstantInt>(cast<MemCpyInst>(BBI)->getLength())) {
-      killPointer = cast<MemCpyInst>(BBI)->getSource();
+    } else if (isa<MemTransferInst>(BBI) &&
+               isa<ConstantInt>(cast<MemTransferInst>(BBI)->getLength())) {
+      killPointer = cast<MemTransferInst>(BBI)->getSource();
       killPointerSize = cast<ConstantInt>(
-                            cast<MemCpyInst>(BBI)->getLength())->getZExtValue();
+                       cast<MemTransferInst>(BBI)->getLength())->getZExtValue();
     } else if (AllocaInst* A = dyn_cast<AllocaInst>(BBI)) {
       deadPointers.erase(A);
       
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/GEPSplitter.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/GEPSplitter.cpp
new file mode 100644
index 0000000..610a41d
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/GEPSplitter.cpp
@@ -0,0 +1,81 @@
+//===- GEPSplitter.cpp - Split complex GEPs into simple ones --------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This pass breaks GEPs with more than 2 non-zero operands into smaller
+// GEPs each with no more than 2 non-zero operands. This exposes redundancy
+// between GEPs with common initial operand sequences.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "split-geps"
+#include "llvm/Transforms/Scalar.h"
+#include "llvm/Constants.h"
+#include "llvm/Function.h"
+#include "llvm/Instructions.h"
+#include "llvm/Pass.h"
+using namespace llvm;
+
+namespace {
+  class GEPSplitter : public FunctionPass {
+    virtual bool runOnFunction(Function &F);
+    virtual void getAnalysisUsage(AnalysisUsage &AU) const;
+  public:
+    static char ID; // Pass identification, replacement for typeid
+    explicit GEPSplitter() : FunctionPass(&ID) {}
+  };
+}
+
+char GEPSplitter::ID = 0;
+static RegisterPass<GEPSplitter> X("split-geps",
+                                   "split complex GEPs into simple GEPs");
+
+FunctionPass *llvm::createGEPSplitterPass() {
+  return new GEPSplitter();
+}
+
+bool GEPSplitter::runOnFunction(Function &F) {
+  bool Changed = false;
+
+  // Visit each GEP instruction.
+  for (Function::iterator I = F.begin(), E = F.end(); I != E; ++I)
+    for (BasicBlock::iterator II = I->begin(), IE = I->end(); II != IE; )
+      if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(II++)) {
+        unsigned NumOps = GEP->getNumOperands();
+        // Ignore GEPs which are already simple.
+        if (NumOps <= 2)
+          continue;
+        bool FirstIndexIsZero = isa<ConstantInt>(GEP->getOperand(1)) &&
+                                cast<ConstantInt>(GEP->getOperand(1))->isZero();
+        if (NumOps == 3 && FirstIndexIsZero)
+          continue;
+        // The first index is special and gets expanded with a 2-operand GEP
+        // (unless it's zero, in which case we can skip this).
+        Value *NewGEP = FirstIndexIsZero ?
+          GEP->getOperand(0) :
+          GetElementPtrInst::Create(GEP->getOperand(0), GEP->getOperand(1),
+                                    "tmp", GEP);
+        // All remaining indices get expanded with a 3-operand GEP with zero
+        // as the second operand.
+        Value *Idxs[2];
+        Idxs[0] = ConstantInt::get(Type::getInt64Ty(F.getContext()), 0);
+        for (unsigned i = 2; i != NumOps; ++i) {
+          Idxs[1] = GEP->getOperand(i);
+          NewGEP = GetElementPtrInst::Create(NewGEP, Idxs, Idxs+2, "tmp", GEP);
+        }
+        GEP->replaceAllUsesWith(NewGEP);
+        GEP->eraseFromParent();
+        Changed = true;
+      }
+
+  return Changed;
+}
+
+void GEPSplitter::getAnalysisUsage(AnalysisUsage &AU) const {
+  AU.setPreservesCFG();
+}
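
Why splitting helps: two composite address computations that share a prefix
of indices compute the same intermediate address, and once each step is its
own instruction a later redundancy pass can reuse the shared prefix. A
hedged plain-C++ analogy (hypothetical scales, not real GEP semantics):

    #include <cstddef>

    // One "simple GEP": advance a base pointer by a single scaled index.
    char *step(char *Base, std::size_t Index, std::size_t Scale) {
      return Base + Index * Scale;
    }

    // A three-index address split into chained single-index steps. Any two
    // calls sharing the same (Base, I) now share the intermediate A.
    char *addr(char *Base, std::size_t I, std::size_t J, std::size_t K) {
      char *A = step(Base, I, 4096);  // common prefix, exposed for CSE
      char *B = step(A, J, 64);
      return step(B, K, 4);
    }
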
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
index 86bbc60..72eb900 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
@@ -33,7 +33,7 @@
 #include "llvm/ADT/Statistic.h"
 #include "llvm/Analysis/Dominators.h"
 #include "llvm/Analysis/AliasAnalysis.h"
-#include "llvm/Analysis/MallocHelper.h"
+#include "llvm/Analysis/MemoryBuiltins.h"
 #include "llvm/Analysis/MemoryDependenceAnalysis.h"
 #include "llvm/Support/CFG.h"
 #include "llvm/Support/CommandLine.h"
@@ -44,6 +44,7 @@
 #include "llvm/Target/TargetData.h"
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
 #include "llvm/Transforms/Utils/Local.h"
+#include "llvm/Transforms/Utils/SSAUpdater.h"
 #include <cstdio>
 using namespace llvm;
 
@@ -77,13 +78,10 @@ namespace {
                             SHUFFLE, SELECT, TRUNC, ZEXT, SEXT, FPTOUI,
                             FPTOSI, UITOFP, SITOFP, FPTRUNC, FPEXT,
                             PTRTOINT, INTTOPTR, BITCAST, GEP, CALL, CONSTANT,
-                            EMPTY, TOMBSTONE };
+                            INSERTVALUE, EXTRACTVALUE, EMPTY, TOMBSTONE };
 
     ExpressionOpcode opcode;
     const Type* type;
-    uint32_t firstVN;
-    uint32_t secondVN;
-    uint32_t thirdVN;
     SmallVector<uint32_t, 4> varargs;
     Value *function;
 
@@ -99,12 +97,6 @@ namespace {
         return false;
       else if (function != other.function)
         return false;
-      else if (firstVN != other.firstVN)
-        return false;
-      else if (secondVN != other.secondVN)
-        return false;
-      else if (thirdVN != other.thirdVN)
-        return false;
       else {
         if (varargs.size() != other.varargs.size())
           return false;
@@ -145,6 +137,10 @@ namespace {
       Expression create_expression(GetElementPtrInst* G);
       Expression create_expression(CallInst* C);
       Expression create_expression(Constant* C);
+      Expression create_expression(ExtractValueInst* C);
+      Expression create_expression(InsertValueInst* C);
+      
+      uint32_t lookup_or_add_call(CallInst* C);
     public:
       ValueTable() : nextValueNumber(1) { }
       uint32_t lookup_or_add(Value *V);
@@ -175,13 +171,8 @@ template <> struct DenseMapInfo<Expression> {
   static unsigned getHashValue(const Expression e) {
     unsigned hash = e.opcode;
 
-    hash = e.firstVN + hash * 37;
-    hash = e.secondVN + hash * 37;
-    hash = e.thirdVN + hash * 37;
-
     hash = ((unsigned)((uintptr_t)e.type >> 4) ^
-            (unsigned)((uintptr_t)e.type >> 9)) +
-           hash * 37;
+            (unsigned)((uintptr_t)e.type >> 9));
 
     for (SmallVector<uint32_t, 4>::const_iterator I = e.varargs.begin(),
          E = e.varargs.end(); I != E; ++I)
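
With the three fixed slots gone, the hash folds every component in through
the same multiplicative step used before, so both the operand values and
their order influence the result. The scheme in isolation (plain C++,
hedged):

    #include <vector>

    // Classic multiply-by-odd-constant hash combining, as in the loop above.
    unsigned combineHash(unsigned Opcode, const std::vector<unsigned> &Ops) {
      unsigned Hash = Opcode;
      for (unsigned V : Ops)
        Hash = V + Hash * 37;   // order-sensitive: swap two ops, new hash
      return Hash;
    }
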
@@ -289,9 +280,6 @@ Expression ValueTable::create_expression(CallInst* C) {
   Expression e;
 
   e.type = C->getType();
-  e.firstVN = 0;
-  e.secondVN = 0;
-  e.thirdVN = 0;
   e.function = C->getCalledFunction();
   e.opcode = Expression::CALL;
 
@@ -304,10 +292,8 @@ Expression ValueTable::create_expression(CallInst* C) {
 
 Expression ValueTable::create_expression(BinaryOperator* BO) {
   Expression e;
-
-  e.firstVN = lookup_or_add(BO->getOperand(0));
-  e.secondVN = lookup_or_add(BO->getOperand(1));
-  e.thirdVN = 0;
+  e.varargs.push_back(lookup_or_add(BO->getOperand(0)));
+  e.varargs.push_back(lookup_or_add(BO->getOperand(1)));
   e.function = 0;
   e.type = BO->getType();
   e.opcode = getOpcode(BO);
@@ -318,9 +304,8 @@ Expression ValueTable::create_expression(BinaryOperator* BO) {
 Expression ValueTable::create_expression(CmpInst* C) {
   Expression e;
 
-  e.firstVN = lookup_or_add(C->getOperand(0));
-  e.secondVN = lookup_or_add(C->getOperand(1));
-  e.thirdVN = 0;
+  e.varargs.push_back(lookup_or_add(C->getOperand(0)));
+  e.varargs.push_back(lookup_or_add(C->getOperand(1)));
   e.function = 0;
   e.type = C->getType();
   e.opcode = getOpcode(C);
@@ -331,9 +316,7 @@ Expression ValueTable::create_expression(CmpInst* C) {
 Expression ValueTable::create_expression(CastInst* C) {
   Expression e;
 
-  e.firstVN = lookup_or_add(C->getOperand(0));
-  e.secondVN = 0;
-  e.thirdVN = 0;
+  e.varargs.push_back(lookup_or_add(C->getOperand(0)));
   e.function = 0;
   e.type = C->getType();
   e.opcode = getOpcode(C);
@@ -344,9 +327,9 @@ Expression ValueTable::create_expression(CastInst* C) {
 Expression ValueTable::create_expression(ShuffleVectorInst* S) {
   Expression e;
 
-  e.firstVN = lookup_or_add(S->getOperand(0));
-  e.secondVN = lookup_or_add(S->getOperand(1));
-  e.thirdVN = lookup_or_add(S->getOperand(2));
+  e.varargs.push_back(lookup_or_add(S->getOperand(0)));
+  e.varargs.push_back(lookup_or_add(S->getOperand(1)));
+  e.varargs.push_back(lookup_or_add(S->getOperand(2)));
   e.function = 0;
   e.type = S->getType();
   e.opcode = Expression::SHUFFLE;
@@ -357,9 +340,8 @@ Expression ValueTable::create_expression(ShuffleVectorInst* S) {
 Expression ValueTable::create_expression(ExtractElementInst* E) {
   Expression e;
 
-  e.firstVN = lookup_or_add(E->getOperand(0));
-  e.secondVN = lookup_or_add(E->getOperand(1));
-  e.thirdVN = 0;
+  e.varargs.push_back(lookup_or_add(E->getOperand(0)));
+  e.varargs.push_back(lookup_or_add(E->getOperand(1)));
   e.function = 0;
   e.type = E->getType();
   e.opcode = Expression::EXTRACT;
@@ -370,9 +352,9 @@ Expression ValueTable::create_expression(ExtractElementInst* E) {
 Expression ValueTable::create_expression(InsertElementInst* I) {
   Expression e;
 
-  e.firstVN = lookup_or_add(I->getOperand(0));
-  e.secondVN = lookup_or_add(I->getOperand(1));
-  e.thirdVN = lookup_or_add(I->getOperand(2));
+  e.varargs.push_back(lookup_or_add(I->getOperand(0)));
+  e.varargs.push_back(lookup_or_add(I->getOperand(1)));
+  e.varargs.push_back(lookup_or_add(I->getOperand(2)));
   e.function = 0;
   e.type = I->getType();
   e.opcode = Expression::INSERT;
@@ -383,9 +365,9 @@ Expression ValueTable::create_expression(InsertElementInst* I) {
 Expression ValueTable::create_expression(SelectInst* I) {
   Expression e;
 
-  e.firstVN = lookup_or_add(I->getCondition());
-  e.secondVN = lookup_or_add(I->getTrueValue());
-  e.thirdVN = lookup_or_add(I->getFalseValue());
+  e.varargs.push_back(lookup_or_add(I->getCondition()));
+  e.varargs.push_back(lookup_or_add(I->getTrueValue()));
+  e.varargs.push_back(lookup_or_add(I->getFalseValue()));
   e.function = 0;
   e.type = I->getType();
   e.opcode = Expression::SELECT;
@@ -396,9 +378,7 @@ Expression ValueTable::create_expression(SelectInst* I) {
 Expression ValueTable::create_expression(GetElementPtrInst* G) {
   Expression e;
 
-  e.firstVN = lookup_or_add(G->getPointerOperand());
-  e.secondVN = 0;
-  e.thirdVN = 0;
+  e.varargs.push_back(lookup_or_add(G->getPointerOperand()));
   e.function = 0;
   e.type = G->getType();
   e.opcode = Expression::GEP;
@@ -410,6 +390,35 @@ Expression ValueTable::create_expression(GetElementPtrInst* G) {
   return e;
 }
 
+Expression ValueTable::create_expression(ExtractValueInst* E) {
+  Expression e;
+
+  e.varargs.push_back(lookup_or_add(E->getAggregateOperand()));
+  for (ExtractValueInst::idx_iterator II = E->idx_begin(), IE = E->idx_end();
+       II != IE; ++II)
+    e.varargs.push_back(*II);
+  e.function = 0;
+  e.type = E->getType();
+  e.opcode = Expression::EXTRACTVALUE;
+
+  return e;
+}
+
+Expression ValueTable::create_expression(InsertValueInst* E) {
+  Expression e;
+
+  e.varargs.push_back(lookup_or_add(E->getAggregateOperand()));
+  e.varargs.push_back(lookup_or_add(E->getInsertedValueOperand()));
+  for (InsertValueInst::idx_iterator II = E->idx_begin(), IE = E->idx_end();
+       II != IE; ++II)
+    e.varargs.push_back(*II);
+  e.function = 0;
+  e.type = E->getType();
+  e.opcode = Expression::INSERTVALUE;
+
+  return e;
+}
+
 //===----------------------------------------------------------------------===//
 //                     ValueTable External Functions
 //===----------------------------------------------------------------------===//
@@ -419,238 +428,208 @@ void ValueTable::add(Value *V, uint32_t num) {
   valueNumbering.insert(std::make_pair(V, num));
 }
 
-/// lookup_or_add - Returns the value number for the specified value, assigning
-/// it a new number if it did not have one before.
-uint32_t ValueTable::lookup_or_add(Value *V) {
-  DenseMap<Value*, uint32_t>::iterator VI = valueNumbering.find(V);
-  if (VI != valueNumbering.end())
-    return VI->second;
-
-  if (CallInst* C = dyn_cast<CallInst>(V)) {
-    if (AA->doesNotAccessMemory(C)) {
-      Expression e = create_expression(C);
-
-      DenseMap<Expression, uint32_t>::iterator EI = expressionNumbering.find(e);
-      if (EI != expressionNumbering.end()) {
-        valueNumbering.insert(std::make_pair(V, EI->second));
-        return EI->second;
-      } else {
-        expressionNumbering.insert(std::make_pair(e, nextValueNumber));
-        valueNumbering.insert(std::make_pair(V, nextValueNumber));
-
-        return nextValueNumber++;
-      }
-    } else if (AA->onlyReadsMemory(C)) {
-      Expression e = create_expression(C);
-
-      if (expressionNumbering.find(e) == expressionNumbering.end()) {
-        expressionNumbering.insert(std::make_pair(e, nextValueNumber));
-        valueNumbering.insert(std::make_pair(V, nextValueNumber));
-        return nextValueNumber++;
-      }
-
-      MemDepResult local_dep = MD->getDependency(C);
-
-      if (!local_dep.isDef() && !local_dep.isNonLocal()) {
-        valueNumbering.insert(std::make_pair(V, nextValueNumber));
-        return nextValueNumber++;
-      }
-
-      if (local_dep.isDef()) {
-        CallInst* local_cdep = cast<CallInst>(local_dep.getInst());
-
-        if (local_cdep->getNumOperands() != C->getNumOperands()) {
-          valueNumbering.insert(std::make_pair(V, nextValueNumber));
-          return nextValueNumber++;
-        }
-
-        for (unsigned i = 1; i < C->getNumOperands(); ++i) {
-          uint32_t c_vn = lookup_or_add(C->getOperand(i));
-          uint32_t cd_vn = lookup_or_add(local_cdep->getOperand(i));
-          if (c_vn != cd_vn) {
-            valueNumbering.insert(std::make_pair(V, nextValueNumber));
-            return nextValueNumber++;
-          }
-        }
-
-        uint32_t v = lookup_or_add(local_cdep);
-        valueNumbering.insert(std::make_pair(V, v));
-        return v;
-      }
-
-      // Non-local case.
-      const MemoryDependenceAnalysis::NonLocalDepInfo &deps =
-        MD->getNonLocalCallDependency(CallSite(C));
-      // FIXME: call/call dependencies for readonly calls should return def, not
-      // clobber!  Move the checking logic to MemDep!
-      CallInst* cdep = 0;
-
-      // Check to see if we have a single dominating call instruction that is
-      // identical to C.
-      for (unsigned i = 0, e = deps.size(); i != e; ++i) {
-        const MemoryDependenceAnalysis::NonLocalDepEntry *I = &deps[i];
-        // Ignore non-local dependencies.
-        if (I->second.isNonLocal())
-          continue;
+uint32_t ValueTable::lookup_or_add_call(CallInst* C) {
+  if (AA->doesNotAccessMemory(C)) {
+    Expression exp = create_expression(C);
+    uint32_t& e = expressionNumbering[exp];
+    if (!e) e = nextValueNumber++;
+    valueNumbering[C] = e;
+    return e;
+  } else if (AA->onlyReadsMemory(C)) {
+    Expression exp = create_expression(C);
+    uint32_t& e = expressionNumbering[exp];
+    if (!e) {
+      e = nextValueNumber++;
+      valueNumbering[C] = e;
+      return e;
+    }
+    if (!MD) {
+      e = nextValueNumber++;
+      valueNumbering[C] = e;
+      return e;
+    }
 
-        // We don't handle non-dependencies.  If we already have a call, reject
-        // instruction dependencies.
-        if (I->second.isClobber() || cdep != 0) {
-          cdep = 0;
-          break;
-        }
+    MemDepResult local_dep = MD->getDependency(C);
 
-        CallInst *NonLocalDepCall = dyn_cast<CallInst>(I->second.getInst());
-        // FIXME: All duplicated with non-local case.
-        if (NonLocalDepCall && DT->properlyDominates(I->first, C->getParent())){
-          cdep = NonLocalDepCall;
-          continue;
-        }
+    if (!local_dep.isDef() && !local_dep.isNonLocal()) {
+      valueNumbering[C] =  nextValueNumber;
+      return nextValueNumber++;
+    }
 
-        cdep = 0;
-        break;
-      }
+    if (local_dep.isDef()) {
+      CallInst* local_cdep = cast<CallInst>(local_dep.getInst());
 
-      if (!cdep) {
-        valueNumbering.insert(std::make_pair(V, nextValueNumber));
+      if (local_cdep->getNumOperands() != C->getNumOperands()) {
+        valueNumbering[C] = nextValueNumber;
         return nextValueNumber++;
       }
 
-      if (cdep->getNumOperands() != C->getNumOperands()) {
-        valueNumbering.insert(std::make_pair(V, nextValueNumber));
-        return nextValueNumber++;
-      }
       for (unsigned i = 1; i < C->getNumOperands(); ++i) {
         uint32_t c_vn = lookup_or_add(C->getOperand(i));
-        uint32_t cd_vn = lookup_or_add(cdep->getOperand(i));
+        uint32_t cd_vn = lookup_or_add(local_cdep->getOperand(i));
         if (c_vn != cd_vn) {
-          valueNumbering.insert(std::make_pair(V, nextValueNumber));
+          valueNumbering[C] = nextValueNumber;
           return nextValueNumber++;
         }
       }
 
-      uint32_t v = lookup_or_add(cdep);
-      valueNumbering.insert(std::make_pair(V, v));
+      uint32_t v = lookup_or_add(local_cdep);
+      valueNumbering[C] = v;
       return v;
-
-    } else {
-      valueNumbering.insert(std::make_pair(V, nextValueNumber));
-      return nextValueNumber++;
     }
-  } else if (BinaryOperator* BO = dyn_cast<BinaryOperator>(V)) {
-    Expression e = create_expression(BO);
-
-    DenseMap<Expression, uint32_t>::iterator EI = expressionNumbering.find(e);
-    if (EI != expressionNumbering.end()) {
-      valueNumbering.insert(std::make_pair(V, EI->second));
-      return EI->second;
-    } else {
-      expressionNumbering.insert(std::make_pair(e, nextValueNumber));
-      valueNumbering.insert(std::make_pair(V, nextValueNumber));
 
-      return nextValueNumber++;
-    }
-  } else if (CmpInst* C = dyn_cast<CmpInst>(V)) {
-    Expression e = create_expression(C);
-
-    DenseMap<Expression, uint32_t>::iterator EI = expressionNumbering.find(e);
-    if (EI != expressionNumbering.end()) {
-      valueNumbering.insert(std::make_pair(V, EI->second));
-      return EI->second;
-    } else {
-      expressionNumbering.insert(std::make_pair(e, nextValueNumber));
-      valueNumbering.insert(std::make_pair(V, nextValueNumber));
+    // Non-local case.
+    const MemoryDependenceAnalysis::NonLocalDepInfo &deps =
+      MD->getNonLocalCallDependency(CallSite(C));
+    // FIXME: call/call dependencies for readonly calls should return def, not
+    // clobber!  Move the checking logic to MemDep!
+    CallInst* cdep = 0;
+
+    // Check to see if we have a single dominating call instruction that is
+    // identical to C.
+    for (unsigned i = 0, e = deps.size(); i != e; ++i) {
+      const MemoryDependenceAnalysis::NonLocalDepEntry *I = &deps[i];
+      // Ignore non-local dependencies.
+      if (I->second.isNonLocal())
+        continue;
 
-      return nextValueNumber++;
-    }
-  } else if (ShuffleVectorInst* U = dyn_cast<ShuffleVectorInst>(V)) {
-    Expression e = create_expression(U);
+      // We don't handle non-dependencies.  If we already have a call, reject
+      // instruction dependencies.
+      if (I->second.isClobber() || cdep != 0) {
+        cdep = 0;
+        break;
+      }
 
-    DenseMap<Expression, uint32_t>::iterator EI = expressionNumbering.find(e);
-    if (EI != expressionNumbering.end()) {
-      valueNumbering.insert(std::make_pair(V, EI->second));
-      return EI->second;
-    } else {
-      expressionNumbering.insert(std::make_pair(e, nextValueNumber));
-      valueNumbering.insert(std::make_pair(V, nextValueNumber));
+      CallInst *NonLocalDepCall = dyn_cast<CallInst>(I->second.getInst());
+      // FIXME: All duplicated with non-local case.
+      if (NonLocalDepCall && DT->properlyDominates(I->first, C->getParent())){
+        cdep = NonLocalDepCall;
+        continue;
+      }
 
-      return nextValueNumber++;
+      cdep = 0;
+      break;
     }
-  } else if (ExtractElementInst* U = dyn_cast<ExtractElementInst>(V)) {
-    Expression e = create_expression(U);
-
-    DenseMap<Expression, uint32_t>::iterator EI = expressionNumbering.find(e);
-    if (EI != expressionNumbering.end()) {
-      valueNumbering.insert(std::make_pair(V, EI->second));
-      return EI->second;
-    } else {
-      expressionNumbering.insert(std::make_pair(e, nextValueNumber));
-      valueNumbering.insert(std::make_pair(V, nextValueNumber));
 
+    if (!cdep) {
+      valueNumbering[C] = nextValueNumber;
       return nextValueNumber++;
     }
-  } else if (InsertElementInst* U = dyn_cast<InsertElementInst>(V)) {
-    Expression e = create_expression(U);
-
-    DenseMap<Expression, uint32_t>::iterator EI = expressionNumbering.find(e);
-    if (EI != expressionNumbering.end()) {
-      valueNumbering.insert(std::make_pair(V, EI->second));
-      return EI->second;
-    } else {
-      expressionNumbering.insert(std::make_pair(e, nextValueNumber));
-      valueNumbering.insert(std::make_pair(V, nextValueNumber));
 
+    if (cdep->getNumOperands() != C->getNumOperands()) {
+      valueNumbering[C] = nextValueNumber;
       return nextValueNumber++;
     }
-  } else if (SelectInst* U = dyn_cast<SelectInst>(V)) {
-    Expression e = create_expression(U);
-
-    DenseMap<Expression, uint32_t>::iterator EI = expressionNumbering.find(e);
-    if (EI != expressionNumbering.end()) {
-      valueNumbering.insert(std::make_pair(V, EI->second));
-      return EI->second;
-    } else {
-      expressionNumbering.insert(std::make_pair(e, nextValueNumber));
-      valueNumbering.insert(std::make_pair(V, nextValueNumber));
-
-      return nextValueNumber++;
+    for (unsigned i = 1; i < C->getNumOperands(); ++i) {
+      uint32_t c_vn = lookup_or_add(C->getOperand(i));
+      uint32_t cd_vn = lookup_or_add(cdep->getOperand(i));
+      if (c_vn != cd_vn) {
+        valueNumbering[C] = nextValueNumber;
+        return nextValueNumber++;
+      }
     }
-  } else if (CastInst* U = dyn_cast<CastInst>(V)) {
-    Expression e = create_expression(U);
 
-    DenseMap<Expression, uint32_t>::iterator EI = expressionNumbering.find(e);
-    if (EI != expressionNumbering.end()) {
-      valueNumbering.insert(std::make_pair(V, EI->second));
-      return EI->second;
-    } else {
-      expressionNumbering.insert(std::make_pair(e, nextValueNumber));
-      valueNumbering.insert(std::make_pair(V, nextValueNumber));
+    uint32_t v = lookup_or_add(cdep);
+    valueNumbering[C] = v;
+    return v;
 
-      return nextValueNumber++;
-    }
-  } else if (GetElementPtrInst* U = dyn_cast<GetElementPtrInst>(V)) {
-    Expression e = create_expression(U);
+  } else {
+    valueNumbering[C] = nextValueNumber;
+    return nextValueNumber++;
+  }
+}
 
-    DenseMap<Expression, uint32_t>::iterator EI = expressionNumbering.find(e);
-    if (EI != expressionNumbering.end()) {
-      valueNumbering.insert(std::make_pair(V, EI->second));
-      return EI->second;
-    } else {
-      expressionNumbering.insert(std::make_pair(e, nextValueNumber));
-      valueNumbering.insert(std::make_pair(V, nextValueNumber));
+/// lookup_or_add - Returns the value number for the specified value, assigning
+/// it a new number if it did not have one before.
+uint32_t ValueTable::lookup_or_add(Value *V) {
+  DenseMap<Value*, uint32_t>::iterator VI = valueNumbering.find(V);
+  if (VI != valueNumbering.end())
+    return VI->second;
 
-      return nextValueNumber++;
-    }
-  } else {
-    valueNumbering.insert(std::make_pair(V, nextValueNumber));
+  if (!isa<Instruction>(V)) {
+    valueNumbering[V] = nextValueNumber;
     return nextValueNumber++;
   }
+  
+  Instruction* I = cast<Instruction>(V);
+  Expression exp;
+  switch (I->getOpcode()) {
+    case Instruction::Call:
+      return lookup_or_add_call(cast<CallInst>(I));
+    case Instruction::Add:
+    case Instruction::FAdd:
+    case Instruction::Sub:
+    case Instruction::FSub:
+    case Instruction::Mul:
+    case Instruction::FMul:
+    case Instruction::UDiv:
+    case Instruction::SDiv:
+    case Instruction::FDiv:
+    case Instruction::URem:
+    case Instruction::SRem:
+    case Instruction::FRem:
+    case Instruction::Shl:
+    case Instruction::LShr:
+    case Instruction::AShr:
+    case Instruction::And:
+    case Instruction::Or :
+    case Instruction::Xor:
+      exp = create_expression(cast<BinaryOperator>(I));
+      break;
+    case Instruction::ICmp:
+    case Instruction::FCmp:
+      exp = create_expression(cast<CmpInst>(I));
+      break;
+    case Instruction::Trunc:
+    case Instruction::ZExt:
+    case Instruction::SExt:
+    case Instruction::FPToUI:
+    case Instruction::FPToSI:
+    case Instruction::UIToFP:
+    case Instruction::SIToFP:
+    case Instruction::FPTrunc:
+    case Instruction::FPExt:
+    case Instruction::PtrToInt:
+    case Instruction::IntToPtr:
+    case Instruction::BitCast:
+      exp = create_expression(cast<CastInst>(I));
+      break;
+    case Instruction::Select:
+      exp = create_expression(cast<SelectInst>(I));
+      break;
+    case Instruction::ExtractElement:
+      exp = create_expression(cast<ExtractElementInst>(I));
+      break;
+    case Instruction::InsertElement:
+      exp = create_expression(cast<InsertElementInst>(I));
+      break;
+    case Instruction::ShuffleVector:
+      exp = create_expression(cast<ShuffleVectorInst>(I));
+      break;
+    case Instruction::ExtractValue:
+      exp = create_expression(cast<ExtractValueInst>(I));
+      break;
+    case Instruction::InsertValue:
+      exp = create_expression(cast<InsertValueInst>(I));
+      break;      
+    case Instruction::GetElementPtr:
+      exp = create_expression(cast<GetElementPtrInst>(I));
+      break;
+    default:
+      valueNumbering[V] = nextValueNumber;
+      return nextValueNumber++;
+  }
+
+  uint32_t& e = expressionNumbering[exp];
+  if (!e) e = nextValueNumber++;
+  valueNumbering[V] = e;
+  return e;
 }
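
The whole table reduces to one invariant: structurally identical expressions
(same opcode, same operand value numbers) receive the same number, so
redundancy detection never has to compare instruction pointers. A
self-contained miniature in plain C++ (hedged; hypothetical types):

    #include <cstdint>
    #include <map>
    #include <tuple>
    #include <vector>

    struct Expr {
      int Opcode;
      std::vector<uint32_t> Ops;   // value numbers of the operands
      bool operator<(const Expr &O) const {
        return std::tie(Opcode, Ops) < std::tie(O.Opcode, O.Ops);
      }
    };

    static std::map<Expr, uint32_t> Numbering;
    static uint32_t NextNumber = 1;

    uint32_t lookupOrAdd(const Expr &E) {
      uint32_t &N = Numbering[E];  // zero on first insertion
      if (!N)
        N = NextNumber++;          // same get-or-create idiom as the patch
      return N;
    }
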
 
 /// lookup - Returns the value number of the specified value. Fails if
 /// the value has not yet been numbered.
 uint32_t ValueTable::lookup(Value *V) const {
-  DenseMap<Value*, uint32_t>::iterator VI = valueNumbering.find(V);
+  DenseMap<Value*, uint32_t>::const_iterator VI = valueNumbering.find(V);
   assert(VI != valueNumbering.end() && "Value not numbered?");
   return VI->second;
 }
@@ -670,7 +649,7 @@ void ValueTable::erase(Value *V) {
 /// verifyRemoved - Verify that the value is removed from all internal data
 /// structures.
 void ValueTable::verifyRemoved(const Value *V) const {
-  for (DenseMap<Value*, uint32_t>::iterator
+  for (DenseMap<Value*, uint32_t>::const_iterator
          I = valueNumbering.begin(), E = valueNumbering.end(); I != E; ++I) {
     assert(I->first != V && "Inst still occurs in value numbering map!");
   }
@@ -695,23 +674,23 @@ namespace {
     bool runOnFunction(Function &F);
   public:
     static char ID; // Pass identification, replacement for typeid
-    GVN() : FunctionPass(&ID) { }
+    explicit GVN(bool nopre = false, bool noloads = false)
+      : FunctionPass(&ID), NoPRE(nopre), NoLoads(noloads), MD(0) { }
 
   private:
+    bool NoPRE;
+    bool NoLoads;
     MemoryDependenceAnalysis *MD;
     DominatorTree *DT;
 
     ValueTable VN;
     DenseMap<BasicBlock*, ValueNumberScope*> localAvail;
 
-    typedef DenseMap<Value*, SmallPtrSet<Instruction*, 4> > PhiMapType;
-    PhiMapType phiMap;
-
-
     // This transformation requires dominator and postdominator info
     virtual void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.addRequired<DominatorTree>();
-      AU.addRequired<MemoryDependenceAnalysis>();
+      if (!NoLoads)
+        AU.addRequired<MemoryDependenceAnalysis>();
       AU.addRequired<AliasAnalysis>();
 
       AU.addPreserved<DominatorTree>();
@@ -727,15 +706,11 @@ namespace {
     bool processNonLocalLoad(LoadInst* L,
                              SmallVectorImpl<Instruction*> &toErase);
     bool processBlock(BasicBlock *BB);
-    Value *GetValueForBlock(BasicBlock *BB, Instruction *orig,
-                            DenseMap<BasicBlock*, Value*> &Phis,
-                            bool top_level = false);
     void dump(DenseMap<uint32_t, Value*>& d);
     bool iterateOnFunction(Function &F);
     Value *CollapsePhi(PHINode* p);
     bool performPRE(Function& F);
     Value *lookupNumber(BasicBlock *BB, uint32_t num);
-    Value *AttemptRedundancyElimination(Instruction *orig, unsigned valno);
     void cleanupGlobalSets();
     void verifyRemoved(const Instruction *I) const;
   };
@@ -744,7 +719,9 @@ namespace {
 }
 
 // createGVNPass - The public interface to this file...
-FunctionPass *llvm::createGVNPass() { return new GVN(); }
+FunctionPass *llvm::createGVNPass(bool NoPRE, bool NoLoads) {
+  return new GVN(NoPRE, NoLoads);
+}
 
 static RegisterPass<GVN> X("gvn",
                            "Global Value Numbering");
@@ -786,82 +763,6 @@ Value *GVN::CollapsePhi(PHINode *PN) {
   return 0;
 }
 
-/// GetValueForBlock - Get the value to use within the specified basic block.
-/// available values are in Phis.
-Value *GVN::GetValueForBlock(BasicBlock *BB, Instruction *Orig,
-                             DenseMap<BasicBlock*, Value*> &Phis,
-                             bool TopLevel) {
-
-  // If we have already computed this value, return the previously computed val.
-  DenseMap<BasicBlock*, Value*>::iterator V = Phis.find(BB);
-  if (V != Phis.end() && !TopLevel) return V->second;
-
-  // If the block is unreachable, just return undef, since this path
-  // can't actually occur at runtime.
-  if (!DT->isReachableFromEntry(BB))
-    return Phis[BB] = UndefValue::get(Orig->getType());
-
-  if (BasicBlock *Pred = BB->getSinglePredecessor()) {
-    Value *ret = GetValueForBlock(Pred, Orig, Phis);
-    Phis[BB] = ret;
-    return ret;
-  }
-
-  // Get the number of predecessors of this block so we can reserve space later.
-  // If there is already a PHI in it, use the #preds from it, otherwise count.
-  // Getting it from the PHI is constant time.
-  unsigned NumPreds;
-  if (PHINode *ExistingPN = dyn_cast<PHINode>(BB->begin()))
-    NumPreds = ExistingPN->getNumIncomingValues();
-  else
-    NumPreds = std::distance(pred_begin(BB), pred_end(BB));
-
-  // Otherwise, the idom is the loop, so we need to insert a PHI node.  Do so
-  // now, then get values to fill in the incoming values for the PHI.
-  PHINode *PN = PHINode::Create(Orig->getType(), Orig->getName()+".rle",
-                                BB->begin());
-  PN->reserveOperandSpace(NumPreds);
-
-  Phis.insert(std::make_pair(BB, PN));
-
-  // Fill in the incoming values for the block.
-  for (pred_iterator PI = pred_begin(BB), E = pred_end(BB); PI != E; ++PI) {
-    Value *val = GetValueForBlock(*PI, Orig, Phis);
-    PN->addIncoming(val, *PI);
-  }
-
-  VN.getAliasAnalysis()->copyValue(Orig, PN);
-
-  // Attempt to collapse PHI nodes that are trivially redundant
-  Value *v = CollapsePhi(PN);
-  if (!v) {
-    // Cache our phi construction results
-    if (LoadInst* L = dyn_cast<LoadInst>(Orig))
-      phiMap[L->getPointerOperand()].insert(PN);
-    else
-      phiMap[Orig].insert(PN);
-
-    return PN;
-  }
-
-  PN->replaceAllUsesWith(v);
-  if (isa<PointerType>(v->getType()))
-    MD->invalidateCachedPointerInfo(v);
-
-  for (DenseMap<BasicBlock*, Value*>::iterator I = Phis.begin(),
-       E = Phis.end(); I != E; ++I)
-    if (I->second == PN)
-      I->second = v;
-
-  DEBUG(errs() << "GVN removed: " << *PN << '\n');
-  MD->removeInstruction(PN);
-  PN->eraseFromParent();
-  DEBUG(verifyRemoved(PN));
-
-  Phis[BB] = v;
-  return v;
-}
-
 /// IsValueFullyAvailableInBlock - Return true if we can prove that the value
 /// we're analyzing is fully available in the specified block.  As we go, keep
 /// track of which blocks we know are fully alive in FullyAvailableBlocks.  This
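
IsValueFullyAvailableInBlock (its comment is cut off by the hunk boundary
above) asks whether the value reaches a block along every incoming path. A
simplified standalone sketch of that backwards walk over a toy CFG, using
plain C++ containers in place of LLVM's; the optimistic cache entry plays the
role of FullyAvailableBlocks on back edges:

    #include <cassert>
    #include <map>
    #include <set>
    #include <vector>

    // Toy CFG: block -> list of predecessors.
    using CFG = std::map<int, std::vector<int>>;

    // Is the value (produced in the blocks in AvailIn) available on every
    // path into BB?  Recurse into predecessors, caching results and assuming
    // availability on back edges, in the spirit of the GVN routine.
    static bool fullyAvailable(const CFG &G, int BB,
                               const std::set<int> &AvailIn,
                               std::map<int, bool> &Cache) {
      if (AvailIn.count(BB)) return true;
      std::map<int, bool>::iterator It = Cache.find(BB);
      if (It != Cache.end()) return It->second;
      Cache[BB] = true;                 // optimistic: tolerate back edges
      const std::vector<int> &Preds = G.at(BB);
      if (Preds.empty())                // reached entry without a definition
        return Cache[BB] = false;
      for (size_t i = 0; i != Preds.size(); ++i)
        if (!fullyAvailable(G, Preds[i], AvailIn, Cache))
          return Cache[BB] = false;
      return true;
    }

    int main() {
      // Diamond: 0 -> {1,2} -> 3.  Value computed in blocks 1 and 2 only.
      CFG G = {{0, {}}, {1, {0}}, {2, {0}}, {3, {1, 2}}};
      std::map<int, bool> Cache;
      assert(fullyAvailable(G, 3, {1, 2}, Cache));  // both paths covered
      Cache.clear();
      assert(!fullyAvailable(G, 3, {1}, Cache));    // path through 2 misses
    }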
@@ -1234,22 +1135,26 @@ struct AvailableValueInBlock {
   }
 };
 
-/// GetAvailableBlockValues - Given the ValuesPerBlock list, convert all of the
-/// available values to values of the expected LoadTy in their blocks and insert
-/// the new values into BlockReplValues.
-static void 
-GetAvailableBlockValues(DenseMap<BasicBlock*, Value*> &BlockReplValues,
-                  const SmallVector<AvailableValueInBlock, 16> &ValuesPerBlock,
-                        const Type *LoadTy,
-                        const TargetData *TD) {
-
+/// ConstructSSAForLoadSet - Given a set of loads specified by ValuesPerBlock,
+/// construct SSA form, allowing us to eliminate LI.  This returns the value
+/// that should be used at LI's definition site.
+static Value *ConstructSSAForLoadSet(LoadInst *LI, 
+                         SmallVectorImpl<AvailableValueInBlock> &ValuesPerBlock,
+                                     const TargetData *TD,
+                                     AliasAnalysis *AA) {
+  SmallVector<PHINode*, 8> NewPHIs;
+  SSAUpdater SSAUpdate(&NewPHIs);
+  SSAUpdate.Initialize(LI);
+  
+  const Type *LoadTy = LI->getType();
+  
   for (unsigned i = 0, e = ValuesPerBlock.size(); i != e; ++i) {
     BasicBlock *BB = ValuesPerBlock[i].BB;
     Value *AvailableVal = ValuesPerBlock[i].V;
     unsigned Offset = ValuesPerBlock[i].Offset;
     
-    Value *&BlockEntry = BlockReplValues[BB];
-    if (BlockEntry) continue;
+    if (SSAUpdate.HasValueForBlock(BB))
+      continue;
     
     if (AvailableVal->getType() != LoadTy) {
       assert(TD && "Need target data to handle type mismatch case");
@@ -1258,17 +1163,28 @@ GetAvailableBlockValues(DenseMap<BasicBlock*, Value*> &BlockReplValues,
       
       if (Offset) {
         DEBUG(errs() << "GVN COERCED NONLOCAL VAL:\n"
-            << *ValuesPerBlock[i].V << '\n'
-            << *AvailableVal << '\n' << "\n\n\n");
+              << *ValuesPerBlock[i].V << '\n'
+              << *AvailableVal << '\n' << "\n\n\n");
       }
       
       
       DEBUG(errs() << "GVN COERCED NONLOCAL VAL:\n"
-                   << *ValuesPerBlock[i].V << '\n'
-                   << *AvailableVal << '\n' << "\n\n\n");
+            << *ValuesPerBlock[i].V << '\n'
+            << *AvailableVal << '\n' << "\n\n\n");
     }
-    BlockEntry = AvailableVal;
+    
+    SSAUpdate.AddAvailableValue(BB, AvailableVal);
   }
+  
+  // Perform PHI construction.
+  Value *V = SSAUpdate.GetValueInMiddleOfBlock(LI->getParent());
+  
+  // If new PHI nodes were created, notify alias analysis.
+  if (isa<PointerType>(V->getType()))
+    for (unsigned i = 0, e = NewPHIs.size(); i != e; ++i)
+      AA->copyValue(LI, NewPHIs[i]);
+
+  return V;
 }
 
 /// processNonLocalLoad - Attempt to eliminate a load whose dependencies are
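
The new ConstructSSAForLoadSet delegates PHI placement to SSAUpdater instead
of the removed GetValueForBlock. A sketch of the call pattern the hunk relies
on (Initialize, HasValueForBlock, AddAvailableValue, GetValueInMiddleOfBlock);
the wrapper function and includes are illustrative assumptions, not part of
the patch:

    #include <utility>
    #include "llvm/Instructions.h"
    #include "llvm/ADT/SmallVector.h"
    #include "llvm/Transforms/Utils/SSAUpdater.h"

    // Sketch: merge one available definition per predecessor block into a
    // single SSA value usable at the start of MergeBB.
    llvm::Value *mergeAvailableValues(
        llvm::Instruction *Proto, llvm::BasicBlock *MergeBB,
        llvm::SmallVectorImpl<std::pair<llvm::BasicBlock*,
                                        llvm::Value*> > &Avail) {
      llvm::SmallVector<llvm::PHINode*, 8> NewPHIs;
      llvm::SSAUpdater Updater(&NewPHIs);  // collects any PHIs it creates
      Updater.Initialize(Proto);           // type and name come from Proto
      for (unsigned i = 0, e = Avail.size(); i != e; ++i)
        if (!Updater.HasValueForBlock(Avail[i].first))
          Updater.AddAvailableValue(Avail[i].first, Avail[i].second);
      return Updater.GetValueInMiddleOfBlock(MergeBB);
    }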
@@ -1338,11 +1254,20 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
     Instruction *DepInst = DepInfo.getInst();
 
     // Loading the allocation -> undef.
-    if (isa<AllocationInst>(DepInst) || isMalloc(DepInst)) {
+    if (isa<AllocaInst>(DepInst) || isMalloc(DepInst)) {
       ValuesPerBlock.push_back(AvailableValueInBlock::get(DepBB,
                                              UndefValue::get(LI->getType())));
       continue;
     }
+    
+    // Loading immediately after lifetime begin or end -> undef.
+    if (IntrinsicInst* II = dyn_cast<IntrinsicInst>(DepInst)) {
+      if (II->getIntrinsicID() == Intrinsic::lifetime_start ||
+          II->getIntrinsicID() == Intrinsic::lifetime_end) {
+        ValuesPerBlock.push_back(AvailableValueInBlock::get(DepBB,
+                                             UndefValue::get(LI->getType())));
+      }
+    }
 
     if (StoreInst *S = dyn_cast<StoreInst>(DepInst)) {
       // Reject loads and stores that are to the same address but are of
@@ -1394,33 +1319,11 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
   // load, then it is fully redundant and we can use PHI insertion to compute
   // its value.  Insert PHIs and remove the fully redundant value now.
   if (UnavailableBlocks.empty()) {
-    // Use cached PHI construction information from previous runs
-    SmallPtrSet<Instruction*, 4> &p = phiMap[LI->getPointerOperand()];
-    // FIXME: What does phiMap do? Are we positive it isn't getting invalidated?
-    for (SmallPtrSet<Instruction*, 4>::iterator I = p.begin(), E = p.end();
-         I != E; ++I) {
-      if ((*I)->getParent() == LI->getParent()) {
-        DEBUG(errs() << "GVN REMOVING NONLOCAL LOAD #1: " << *LI << '\n');
-        LI->replaceAllUsesWith(*I);
-        if (isa<PointerType>((*I)->getType()))
-          MD->invalidateCachedPointerInfo(*I);
-        toErase.push_back(LI);
-        NumGVNLoad++;
-        return true;
-      }
-
-      ValuesPerBlock.push_back(AvailableValueInBlock::get((*I)->getParent(),
-                                                          *I));
-    }
-
     DEBUG(errs() << "GVN REMOVING NONLOCAL LOAD: " << *LI << '\n');
-
-    // Convert the block information to a map, and insert coersions as needed.
-    DenseMap<BasicBlock*, Value*> BlockReplValues;
-    GetAvailableBlockValues(BlockReplValues, ValuesPerBlock, LI->getType(), TD);
     
     // Perform PHI construction.
-    Value *V = GetValueForBlock(LI->getParent(), LI, BlockReplValues, true);
+    Value *V = ConstructSSAForLoadSet(LI, ValuesPerBlock, TD,
+                                      VN.getAliasAnalysis());
     LI->replaceAllUsesWith(V);
 
     if (isa<PHINode>(V))
@@ -1522,26 +1425,40 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
   assert(UnavailablePred != 0 &&
          "Fully available value should be eliminated above!");
 
-  // If the loaded pointer is PHI node defined in this block, do PHI translation
-  // to get its value in the predecessor.
-  Value *LoadPtr = LI->getOperand(0)->DoPHITranslation(LoadBB, UnavailablePred);
-
-  // Make sure the value is live in the predecessor.  If it was defined by a
-  // non-PHI instruction in this block, we don't know how to recompute it above.
-  if (Instruction *LPInst = dyn_cast<Instruction>(LoadPtr))
-    if (!DT->dominates(LPInst->getParent(), UnavailablePred)) {
-      DEBUG(errs() << "COULDN'T PRE LOAD BECAUSE PTR IS UNAVAILABLE IN PRED: "
-                   << *LPInst << '\n' << *LI << "\n");
-      return false;
-    }
-
   // We don't currently handle critical edges :(
   if (UnavailablePred->getTerminator()->getNumSuccessors() != 1) {
     DEBUG(errs() << "COULD NOT PRE LOAD BECAUSE OF CRITICAL EDGE '"
                  << UnavailablePred->getName() << "': " << *LI << '\n');
     return false;
   }
-
+  
+  // If the loaded pointer is a PHI node defined in this block, do PHI
+  // translation to get its value in the predecessor.
+  Value *LoadPtr = MD->PHITranslatePointer(LI->getOperand(0),
+                                           LoadBB, UnavailablePred, TD);
+  // Make sure the value is live in the predecessor.  If MemDep found a
+  // computation of LPInst with the right value, but it does not dominate
+  // UnavailablePred, then we can't use it.
+  if (Instruction *LPInst = dyn_cast_or_null<Instruction>(LoadPtr))
+    if (!DT->dominates(LPInst->getParent(), UnavailablePred))
+      LoadPtr = 0;
+
+  // If we don't have a computation of this phi translated value, try to insert
+  // one.
+  if (LoadPtr == 0) {
+    LoadPtr = MD->InsertPHITranslatedPointer(LI->getOperand(0),
+                                             LoadBB, UnavailablePred, TD);
+    if (LoadPtr == 0) {
+      DEBUG(errs() << "COULDN'T INSERT PHI TRANSLATED VALUE OF: "
+                   << *LI->getOperand(0) << "\n");
+      return false;
+    }
+    
+    // FIXME: This inserts a computation, but we don't tell scalar GVN
+    // optimization stuff about it.  How do we do this?
+    DEBUG(errs() << "INSERTED PHI TRANSLATED VALUE: " << *LoadPtr << "\n");
+  }
+  
   // Make sure it is valid to move this load here.  We have to watch out for:
   //  @1 = getelementptr (i8* p, ...
   //  test p and branch if == 0
@@ -1564,17 +1481,12 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
                                 LI->getAlignment(),
                                 UnavailablePred->getTerminator());
 
-  SmallPtrSet<Instruction*, 4> &p = phiMap[LI->getPointerOperand()];
-  for (SmallPtrSet<Instruction*, 4>::iterator I = p.begin(), E = p.end();
-       I != E; ++I)
-    ValuesPerBlock.push_back(AvailableValueInBlock::get((*I)->getParent(), *I));
-
-  DenseMap<BasicBlock*, Value*> BlockReplValues;
-  GetAvailableBlockValues(BlockReplValues, ValuesPerBlock, LI->getType(), TD);
-  BlockReplValues[UnavailablePred] = NewLoad;
+  // Add the newly created load.
+  ValuesPerBlock.push_back(AvailableValueInBlock::get(UnavailablePred,NewLoad));
 
   // Perform PHI construction.
-  Value *V = GetValueForBlock(LI->getParent(), LI, BlockReplValues, true);
+  Value *V = ConstructSSAForLoadSet(LI, ValuesPerBlock, TD,
+                                    VN.getAliasAnalysis());
   LI->replaceAllUsesWith(V);
   if (isa<PHINode>(V))
     V->takeName(LI);
@@ -1588,6 +1500,9 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
 /// processLoad - Attempt to eliminate a load, first by eliminating it
 /// locally, and then attempting non-local elimination if that fails.
 bool GVN::processLoad(LoadInst *L, SmallVectorImpl<Instruction*> &toErase) {
+  if (!MD)
+    return false;
+
   if (L->isVolatile())
     return false;
 
@@ -1652,15 +1567,18 @@ bool GVN::processLoad(LoadInst *L, SmallVectorImpl<Instruction*> &toErase) {
     // actually have the same type.  See if we know how to reuse the stored
     // value (depending on its type).
     const TargetData *TD = 0;
-    if (StoredVal->getType() != L->getType() &&
-        (TD = getAnalysisIfAvailable<TargetData>())) {
-      StoredVal = CoerceAvailableValueToLoadType(StoredVal, L->getType(),
-                                                 L, *TD);
-      if (StoredVal == 0)
+    if (StoredVal->getType() != L->getType()) {
+      if ((TD = getAnalysisIfAvailable<TargetData>())) {
+        StoredVal = CoerceAvailableValueToLoadType(StoredVal, L->getType(),
+                                                   L, *TD);
+        if (StoredVal == 0)
+          return false;
+        
+        DEBUG(errs() << "GVN COERCED STORE:\n" << *DepSI << '\n' << *StoredVal
+                     << '\n' << *L << "\n\n\n");
+      }
+      else 
         return false;
-      
-      DEBUG(errs() << "GVN COERCED STORE:\n" << *DepSI << '\n' << *StoredVal
-                   << '\n' << *L << "\n\n\n");
     }
 
     // Remove it!
@@ -1679,14 +1597,17 @@ bool GVN::processLoad(LoadInst *L, SmallVectorImpl<Instruction*> &toErase) {
     // the same type.  See if we know how to reuse the previously loaded value
     // (depending on its type).
     const TargetData *TD = 0;
-    if (DepLI->getType() != L->getType() &&
-        (TD = getAnalysisIfAvailable<TargetData>())) {
-      AvailableVal = CoerceAvailableValueToLoadType(DepLI, L->getType(), L,*TD);
-      if (AvailableVal == 0)
-        return false;
+    if (DepLI->getType() != L->getType()) {
+      if ((TD = getAnalysisIfAvailable<TargetData>())) {
+        AvailableVal = CoerceAvailableValueToLoadType(DepLI, L->getType(), L,*TD);
+        if (AvailableVal == 0)
+          return false;
       
-      DEBUG(errs() << "GVN COERCED LOAD:\n" << *DepLI << "\n" << *AvailableVal
-                   << "\n" << *L << "\n\n\n");
+        DEBUG(errs() << "GVN COERCED LOAD:\n" << *DepLI << "\n" << *AvailableVal
+                     << "\n" << *L << "\n\n\n");
+      }
+      else 
+        return false;
     }
     
     // Remove it!
@@ -1701,12 +1622,24 @@ bool GVN::processLoad(LoadInst *L, SmallVectorImpl<Instruction*> &toErase) {
   // If this load really doesn't depend on anything, then we must be loading an
   // undef value.  This can happen when loading for a fresh allocation with no
   // intervening stores, for example.
-  if (isa<AllocationInst>(DepInst) || isMalloc(DepInst)) {
+  if (isa<AllocaInst>(DepInst) || isMalloc(DepInst)) {
     L->replaceAllUsesWith(UndefValue::get(L->getType()));
     toErase.push_back(L);
     NumGVNLoad++;
     return true;
   }
+  
+  // If this load occurs right after either a lifetime begin or a lifetime
+  // end, then the loaded value is undefined.
+  if (IntrinsicInst* II = dyn_cast<IntrinsicInst>(DepInst)) {
+    if (II->getIntrinsicID() == Intrinsic::lifetime_start ||
+        II->getIntrinsicID() == Intrinsic::lifetime_end) {
+      L->replaceAllUsesWith(UndefValue::get(L->getType()));
+      toErase.push_back(L);
+      NumGVNLoad++;
+      return true;
+    }
+  }
 
   return false;
 }
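
The lifetime_start/lifetime_end case corresponds to the C-level fact that
storage has no defined contents immediately after its lifetime begins or
ends. A deliberately undefined standalone illustration of the source pattern
GVN exploits here:

    int f() {
      int x;     // lifetime.start: x has no defined value yet
      return x;  // load with no intervening store: the value is
                 // indeterminate, so GVN may legally fold the load to undef
    }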
@@ -1727,59 +1660,6 @@ Value *GVN::lookupNumber(BasicBlock *BB, uint32_t num) {
   return 0;
 }
 
-/// AttemptRedundancyElimination - If the "fast path" of redundancy elimination
-/// by inheritance from the dominator fails, see if we can perform phi
-/// construction to eliminate the redundancy.
-Value *GVN::AttemptRedundancyElimination(Instruction *orig, unsigned valno) {
-  BasicBlock *BaseBlock = orig->getParent();
-
-  SmallPtrSet<BasicBlock*, 4> Visited;
-  SmallVector<BasicBlock*, 8> Stack;
-  Stack.push_back(BaseBlock);
-
-  DenseMap<BasicBlock*, Value*> Results;
-
-  // Walk backwards through our predecessors, looking for instances of the
-  // value number we're looking for.  Instances are recorded in the Results
-  // map, which is then used to perform phi construction.
-  while (!Stack.empty()) {
-    BasicBlock *Current = Stack.back();
-    Stack.pop_back();
-
-    // If we've walked all the way to a proper dominator, then give up. Cases
-    // where the instance is in the dominator will have been caught by the fast
-    // path, and any cases that require phi construction further than this are
-    // probably not worth it anyways.  Note that this is a SIGNIFICANT compile
-    // time improvement.
-    if (DT->properlyDominates(Current, orig->getParent())) return 0;
-
-    DenseMap<BasicBlock*, ValueNumberScope*>::iterator LA =
-                                                       localAvail.find(Current);
-    if (LA == localAvail.end()) return 0;
-    DenseMap<uint32_t, Value*>::iterator V = LA->second->table.find(valno);
-
-    if (V != LA->second->table.end()) {
-      // Found an instance, record it.
-      Results.insert(std::make_pair(Current, V->second));
-      continue;
-    }
-
-    // If we reach the beginning of the function, then give up.
-    if (pred_begin(Current) == pred_end(Current))
-      return 0;
-
-    for (pred_iterator PI = pred_begin(Current), PE = pred_end(Current);
-         PI != PE; ++PI)
-      if (Visited.insert(*PI))
-        Stack.push_back(*PI);
-  }
-
-  // If we didn't find instances, give up.  Otherwise, perform phi construction.
-  if (Results.size() == 0)
-    return 0;
-  else
-    return GetValueForBlock(BaseBlock, orig, Results, true);
-}
 
 /// processInstruction - When calculating availability, handle an instruction
 /// by inserting it into the appropriate sets
@@ -1822,7 +1702,7 @@ bool GVN::processInstruction(Instruction *I,
 
   // Allocations are always uniquely numbered, so we can save time and memory
   // by fast failing them.
-  } else if (isa<AllocationInst>(I) || isa<TerminatorInst>(I)) {
+  } else if (isa<AllocaInst>(I) || isa<TerminatorInst>(I)) {
     localAvail[I->getParent()]->table.insert(std::make_pair(Num, I));
     return false;
   }
@@ -1832,12 +1712,8 @@ bool GVN::processInstruction(Instruction *I,
     Value *constVal = CollapsePhi(p);
 
     if (constVal) {
-      for (PhiMapType::iterator PI = phiMap.begin(), PE = phiMap.end();
-           PI != PE; ++PI)
-        PI->second.erase(p);
-
       p->replaceAllUsesWith(constVal);
-      if (isa<PointerType>(constVal->getType()))
+      if (MD && isa<PointerType>(constVal->getType()))
         MD->invalidateCachedPointerInfo(constVal);
       VN.erase(p);
 
@@ -1858,22 +1734,11 @@ bool GVN::processInstruction(Instruction *I,
     // Remove it!
     VN.erase(I);
     I->replaceAllUsesWith(repl);
-    if (isa<PointerType>(repl->getType()))
+    if (MD && isa<PointerType>(repl->getType()))
       MD->invalidateCachedPointerInfo(repl);
     toErase.push_back(I);
     return true;
 
-#if 0
-  // Perform slow-pathvalue-number based elimination with phi construction.
-  } else if (Value *repl = AttemptRedundancyElimination(I, Num)) {
-    // Remove it!
-    VN.erase(I);
-    I->replaceAllUsesWith(repl);
-    if (isa<PointerType>(repl->getType()))
-      MD->invalidateCachedPointerInfo(repl);
-    toErase.push_back(I);
-    return true;
-#endif
   } else {
     localAvail[I->getParent()]->table.insert(std::make_pair(Num, I));
   }
@@ -1883,7 +1748,8 @@ bool GVN::processInstruction(Instruction *I,
 
 /// runOnFunction - This is the main transformation entry point for a function.
 bool GVN::runOnFunction(Function& F) {
-  MD = &getAnalysis<MemoryDependenceAnalysis>();
+  if (!NoLoads)
+    MD = &getAnalysis<MemoryDependenceAnalysis>();
   DT = &getAnalysis<DominatorTree>();
   VN.setAliasAnalysis(&getAnalysis<AliasAnalysis>());
   VN.setMemDep(MD);
@@ -1955,7 +1821,7 @@ bool GVN::processBlock(BasicBlock *BB) {
     for (SmallVector<Instruction*, 4>::iterator I = toErase.begin(),
          E = toErase.end(); I != E; ++I) {
       DEBUG(errs() << "GVN removed: " << **I << '\n');
-      MD->removeInstruction(*I);
+      if (MD) MD->removeInstruction(*I);
       (*I)->eraseFromParent();
       DEBUG(verifyRemoved(*I));
     }
@@ -1972,7 +1838,7 @@ bool GVN::processBlock(BasicBlock *BB) {
 
 /// performPRE - Perform a purely local form of PRE that looks for diamond
 /// control flow patterns and attempts to perform simple PRE at the join point.
-bool GVN::performPRE(Function& F) {
+bool GVN::performPRE(Function &F) {
   bool Changed = false;
   SmallVector<std::pair<TerminatorInst*, unsigned>, 4> toSplit;
   DenseMap<BasicBlock*, Value*> predMap;
@@ -1987,9 +1853,9 @@ bool GVN::performPRE(Function& F) {
          BE = CurrentBlock->end(); BI != BE; ) {
       Instruction *CurInst = BI++;
 
-      if (isa<AllocationInst>(CurInst) ||
+      if (isa<AllocaInst>(CurInst) ||
           isa<TerminatorInst>(CurInst) || isa<PHINode>(CurInst) ||
-          (CurInst->getType() == Type::getVoidTy(F.getContext())) ||
+          CurInst->getType()->isVoidTy() ||
           CurInst->mayReadFromMemory() || CurInst->mayHaveSideEffects() ||
           isa<DbgInfoIntrinsic>(CurInst))
         continue;
@@ -2037,6 +1903,10 @@ bool GVN::performPRE(Function& F) {
       // we would need to insert instructions in more than one pred.
       if (NumWithout != 1 || NumWith == 0)
         continue;
+      
+      // Don't do PRE across indirect branch.
+      if (isa<IndirectBrInst>(PREPred->getTerminator()))
+        continue;
 
       // We can't do PRE safely on a critical edge, so instead we schedule
       // the edge to be split and perform the PRE the next time we iterate
@@ -2104,12 +1974,12 @@ bool GVN::performPRE(Function& F) {
       localAvail[CurrentBlock]->table[ValNo] = Phi;
 
       CurInst->replaceAllUsesWith(Phi);
-      if (isa<PointerType>(Phi->getType()))
+      if (MD && isa<PointerType>(Phi->getType()))
         MD->invalidateCachedPointerInfo(Phi);
       VN.erase(CurInst);
 
       DEBUG(errs() << "GVN PRE removed: " << *CurInst << '\n');
-      MD->removeInstruction(CurInst);
+      if (MD) MD->removeInstruction(CurInst);
       CurInst->eraseFromParent();
       DEBUG(verifyRemoved(CurInst));
       Changed = true;
@@ -2155,7 +2025,6 @@ bool GVN::iterateOnFunction(Function &F) {
 
 void GVN::cleanupGlobalSets() {
   VN.clear();
-  phiMap.clear();
 
   for (DenseMap<BasicBlock*, ValueNumberScope*>::iterator
        I = localAvail.begin(), E = localAvail.end(); I != E; ++I)
@@ -2168,26 +2037,14 @@ void GVN::cleanupGlobalSets() {
 void GVN::verifyRemoved(const Instruction *Inst) const {
   VN.verifyRemoved(Inst);
 
-  // Walk through the PHI map to make sure the instruction isn't hiding in there
-  // somewhere.
-  for (PhiMapType::iterator
-         I = phiMap.begin(), E = phiMap.end(); I != E; ++I) {
-    assert(I->first != Inst && "Inst is still a key in PHI map!");
-
-    for (SmallPtrSet<Instruction*, 4>::iterator
-           II = I->second.begin(), IE = I->second.end(); II != IE; ++II) {
-      assert(*II != Inst && "Inst is still a value in PHI map!");
-    }
-  }
-
   // Walk through the value number scope to make sure the instruction isn't
   // ferreted away in it.
-  for (DenseMap<BasicBlock*, ValueNumberScope*>::iterator
+  for (DenseMap<BasicBlock*, ValueNumberScope*>::const_iterator
          I = localAvail.begin(), E = localAvail.end(); I != E; ++I) {
     const ValueNumberScope *VNS = I->second;
 
     while (VNS) {
-      for (DenseMap<uint32_t, Value*>::iterator
+      for (DenseMap<uint32_t, Value*>::const_iterator
              II = VNS->table.begin(), IE = VNS->table.end(); II != IE; ++II) {
         assert(II->second != Inst && "Inst still in value numbering scope!");
       }
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
index e2d9e0b..2912421 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
@@ -292,7 +292,7 @@ void IndVarSimplify::RewriteLoopExitValues(Loop *L,
       if (NumPreds != 1) {
         // Clone the PHI and delete the original one. This lets IVUsers and
         // any other maps purge the original user from their records.
-        PHINode *NewPN = PN->clone();
+        PHINode *NewPN = cast<PHINode>(PN->clone());
         NewPN->takeName(PN);
         NewPN->insertBefore(PN);
         PN->replaceAllUsesWith(NewPN);
@@ -322,7 +322,7 @@ void IndVarSimplify::RewriteNonIntegerIVs(Loop *L) {
   // may not have been able to compute a trip count. Now that we've done some
   // re-writing, the trip count may be computable.
   if (Changed)
-    SE->forgetLoopBackedgeTakenCount(L);
+    SE->forgetLoop(L);
 }
 
 bool IndVarSimplify::runOnLoop(Loop *L, LPPassManager &LPM) {
@@ -536,8 +536,10 @@ void IndVarSimplify::SinkUnusedInvariants(Loop *L) {
   BasicBlock *ExitBlock = L->getExitBlock();
   if (!ExitBlock) return;
 
-  Instruction *InsertPt = ExitBlock->getFirstNonPHI();
   BasicBlock *Preheader = L->getLoopPreheader();
+  if (!Preheader) return;
+
+  Instruction *InsertPt = ExitBlock->getFirstNonPHI();
   BasicBlock::iterator I = Preheader->getTerminator();
   while (I != Preheader->begin()) {
     --I;
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp
index 561527c..95563b0 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp
@@ -42,7 +42,8 @@
 #include "llvm/GlobalVariable.h"
 #include "llvm/Operator.h"
 #include "llvm/Analysis/ConstantFolding.h"
-#include "llvm/Analysis/MallocHelper.h"
+#include "llvm/Analysis/InstructionSimplify.h"
+#include "llvm/Analysis/MemoryBuiltins.h"
 #include "llvm/Analysis/ValueTracking.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
@@ -56,6 +57,7 @@
 #include "llvm/Support/IRBuilder.h"
 #include "llvm/Support/MathExtras.h"
 #include "llvm/Support/PatternMatch.h"
+#include "llvm/Support/TargetFolder.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/SmallVector.h"
@@ -90,9 +92,10 @@ namespace {
     /// Add - Add the specified instruction to the worklist if it isn't already
     /// in it.
     void Add(Instruction *I) {
-      DEBUG(errs() << "IC: ADD: " << *I << '\n');
-      if (WorklistMap.insert(std::make_pair(I, Worklist.size())).second)
+      if (WorklistMap.insert(std::make_pair(I, Worklist.size())).second) {
+        DEBUG(errs() << "IC: ADD: " << *I << '\n');
         Worklist.push_back(I);
+      }
     }
     
     void AddValue(Value *V) {
@@ -100,6 +103,20 @@ namespace {
         Add(I);
     }
     
+    /// AddInitialGroup - Add the specified batch of instructions in reverse
+    /// order.  This should only be done when the worklist is empty and the
+    /// group has no duplicates.
+    void AddInitialGroup(Instruction *const *List, unsigned NumEntries) {
+      assert(Worklist.empty() && "Worklist must be empty to add initial group");
+      Worklist.reserve(NumEntries+16);
+      DEBUG(errs() << "IC: ADDING: " << NumEntries << " instrs to worklist\n");
+      for (; NumEntries; --NumEntries) {
+        Instruction *I = List[NumEntries-1];
+        WorklistMap.insert(std::make_pair(I, Worklist.size()));
+        Worklist.push_back(I);
+      }
+    }
+    
     // Remove - remove I from the worklist if it exists.
     void Remove(Instruction *I) {
       DenseMap<Instruction*, unsigned>::iterator It = WorklistMap.find(I);
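
The worklist above pairs a vector (iteration order) with a DenseMap from
instruction to vector index, giving O(1) duplicate checks and removal by
tombstoning. A standalone sketch of the same structure over plain ints,
including the bulk AddInitialGroup-style seeding that skips per-element
duplicate checks:

    #include <cassert>
    #include <unordered_map>
    #include <vector>

    // Vector-backed worklist with a map for O(1) membership tests.  Removal
    // tombstones the slot instead of shifting the vector, like InstCombine's
    // Worklist/WorklistMap pair.
    struct Worklist {
      std::vector<int> Items;                 // -1 marks a removed slot
      std::unordered_map<int, size_t> Index;

      void add(int V) {
        if (Index.insert({V, Items.size()}).second)  // only if not present
          Items.push_back(V);
      }
      // Bulk-seed in reverse; caller guarantees emptiness and uniqueness.
      void addInitialGroup(const int *List, size_t N) {
        assert(Items.empty() && "worklist must be empty");
        Items.reserve(N);
        for (; N; --N) {
          Index.insert({List[N - 1], Items.size()});
          Items.push_back(List[N - 1]);
        }
      }
      void remove(int V) {
        std::unordered_map<int, size_t>::iterator It = Index.find(V);
        if (It == Index.end()) return;
        Items[It->second] = -1;               // tombstone, don't shift
        Index.erase(It);
      }
    };

    int main() {
      Worklist W;
      const int Seed[] = {1, 2, 3};
      W.addInitialGroup(Seed, 3);   // stored as 3, 2, 1
      W.add(2);                     // duplicate: ignored
      W.remove(3);
      assert(W.Items.size() == 3 && W.Items[0] == -1);
    }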
@@ -171,7 +188,7 @@ namespace {
 
     /// Builder - This is an IRBuilder that automatically inserts new
     /// instructions into the worklist when they are created.
-    typedef IRBuilder<true, ConstantFolder, InstCombineIRInserter> BuilderTy;
+    typedef IRBuilder<true, TargetFolder, InstCombineIRInserter> BuilderTy;
     BuilderTy *Builder;
         
     static char ID; // Pass identification, replacement for typeid
@@ -201,6 +218,7 @@ namespace {
     //
     Instruction *visitAdd(BinaryOperator &I);
     Instruction *visitFAdd(BinaryOperator &I);
+    Value *OptimizePointerDifference(Value *LHS, Value *RHS, const Type *Ty);
     Instruction *visitSub(BinaryOperator &I);
     Instruction *visitFSub(BinaryOperator &I);
     Instruction *visitMul(BinaryOperator &I);
@@ -266,10 +284,12 @@ namespace {
     Instruction *visitSelectInstWithICmp(SelectInst &SI, ICmpInst *ICI);
     Instruction *visitCallInst(CallInst &CI);
     Instruction *visitInvokeInst(InvokeInst &II);
+
+    Instruction *SliceUpIllegalIntegerPHI(PHINode &PN);
     Instruction *visitPHINode(PHINode &PN);
     Instruction *visitGetElementPtrInst(GetElementPtrInst &GEP);
-    Instruction *visitAllocationInst(AllocationInst &AI);
-    Instruction *visitFreeInst(FreeInst &FI);
+    Instruction *visitAllocaInst(AllocaInst &AI);
+    Instruction *visitFree(Instruction &FI);
     Instruction *visitLoadInst(LoadInst &LI);
     Instruction *visitStoreInst(StoreInst &SI);
     Instruction *visitBranchInst(BranchInst &BI);
@@ -363,10 +383,6 @@ namespace {
     /// commutative operators.
     bool SimplifyCommutative(BinaryOperator &I);
 
-    /// SimplifyCompare - This reorders the operands of a CmpInst to get them in
-    /// most-complex to least-complex order.
-    bool SimplifyCompare(CmpInst &I);
-
     /// SimplifyDemandedUseBits - Attempts to replace V with a simpler value
     /// based on the demanded bits.
     Value *SimplifyDemandedUseBits(Value *V, APInt DemandedMask, 
@@ -400,6 +416,7 @@ namespace {
     Instruction *FoldPHIArgOpIntoPHI(PHINode &PN);
     Instruction *FoldPHIArgBinOpIntoPHI(PHINode &PN);
     Instruction *FoldPHIArgGEPIntoPHI(PHINode &PN);
+    Instruction *FoldPHIArgLoadIntoPHI(PHINode &PN);
 
     
     Instruction *OptAndOp(Instruction *Op, ConstantInt *OpRHS,
@@ -409,7 +426,7 @@ namespace {
                               bool isSub, Instruction &I);
     Instruction *InsertRangeTest(Value *V, Constant *Lo, Constant *Hi,
                                  bool isSigned, bool Inside, Instruction &IB);
-    Instruction *PromoteCastOfAllocation(BitCastInst &CI, AllocationInst &AI);
+    Instruction *PromoteCastOfAllocation(BitCastInst &CI, AllocaInst &AI);
     Instruction *MatchBSwap(BinaryOperator &I);
     bool SimplifyStoreAtEndOfBlock(StoreInst &SI);
     Instruction *SimplifyMemTransfer(MemIntrinsic *MI);
@@ -460,6 +477,34 @@ static const Type *getPromotedType(const Type *Ty) {
   return Ty;
 }
 
+/// ShouldChangeType - Return true if it is desirable to convert a computation
+/// from 'From' to 'To'.  We don't want to convert from a legal to an illegal
+/// type for example, or from a smaller to a larger illegal type.
+static bool ShouldChangeType(const Type *From, const Type *To,
+                             const TargetData *TD) {
+  assert(isa<IntegerType>(From) && isa<IntegerType>(To));
+  
+  // If we don't have TD, we don't know if the source/dest are legal.
+  if (!TD) return false;
+  
+  unsigned FromWidth = From->getPrimitiveSizeInBits();
+  unsigned ToWidth = To->getPrimitiveSizeInBits();
+  bool FromLegal = TD->isLegalInteger(FromWidth);
+  bool ToLegal = TD->isLegalInteger(ToWidth);
+  
+  // If the 'From' type is a legal integer and the result would be an illegal
+  // type, don't do the transformation.
+  if (FromLegal && !ToLegal)
+    return false;
+  
+  // Otherwise, if both are illegal, do not increase the size of the result. We
+  // do allow things like i160 -> i64, but not i64 -> i160.
+  if (!FromLegal && !ToLegal && ToWidth > FromWidth)
+    return false;
+  
+  return true;
+}
+
 /// getBitCastOperand - If the specified operand is a CastInst, a constant
 /// expression bitcast, or a GetElementPtrInst with all zero indices, return the
 /// operand value, otherwise return null.
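
A standalone rendering of ShouldChangeType's rule, with the common x86-style
legal widths {8, 16, 32, 64} standing in for TargetData::isLegalInteger (that
width set is an assumption of the sketch):

    #include <cassert>

    static bool isLegalInteger(unsigned W) {  // stand-in for TargetData
      return W == 8 || W == 16 || W == 32 || W == 64;
    }

    // Same decision procedure as ShouldChangeType above.
    static bool shouldChangeType(unsigned FromWidth, unsigned ToWidth) {
      bool FromLegal = isLegalInteger(FromWidth);
      bool ToLegal   = isLegalInteger(ToWidth);
      if (FromLegal && !ToLegal)
        return false;                         // never legal -> illegal
      if (!FromLegal && !ToLegal && ToWidth > FromWidth)
        return false;                         // don't grow illegal types
      return true;
    }

    int main() {
      assert(!shouldChangeType(32, 17));   // legal -> illegal: refuse
      assert( shouldChangeType(160, 64));  // illegal -> legal: i160 -> i64
      assert(!shouldChangeType(17, 160));  // both illegal, growing: refuse
      assert( shouldChangeType(17, 9));    // both illegal, shrinking: fine
    }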
@@ -566,17 +611,6 @@ bool InstCombiner::SimplifyCommutative(BinaryOperator &I) {
   return Changed;
 }
 
-/// SimplifyCompare - For a CmpInst this function just orders the operands
-/// so that theyare listed from right (least complex) to left (most complex).
-/// This puts constants before unary operators before binary operators.
-bool InstCombiner::SimplifyCompare(CmpInst &I) {
-  if (getComplexity(I.getOperand(0)) >= getComplexity(I.getOperand(1)))
-    return false;
-  I.swapOperands();
-  // Compare instructions are not associative so there's nothing else we can do.
-  return true;
-}
-
 // dyn_castNegVal - Given a 'sub' instruction, return the RHS of the instruction
 // if the LHS is a constant zero (which is the 'negate' form).
 //
@@ -614,9 +648,32 @@ static inline Value *dyn_castFNegVal(Value *V) {
   return 0;
 }
 
-static inline Value *dyn_castNotVal(Value *V) {
+/// isFreeToInvert - Return true if the specified value is free to invert (apply
+/// ~ to).  This happens in cases where the ~ can be eliminated.
+static inline bool isFreeToInvert(Value *V) {
+  // ~(~(X)) -> X.
   if (BinaryOperator::isNot(V))
-    return BinaryOperator::getNotArgument(V);
+    return true;
+  
+  // Constants can be considered to be not'ed values.
+  if (isa<ConstantInt>(V))
+    return true;
+  
+  // Compares can be inverted if they have a single use.
+  if (CmpInst *CI = dyn_cast<CmpInst>(V))
+    return CI->hasOneUse();
+  
+  return false;
+}
+
+static inline Value *dyn_castNotVal(Value *V) {
+  // If this is not(not(x)) don't return that this is a not: we want the two
+  // not's to be folded first.
+  if (BinaryOperator::isNot(V)) {
+    Value *Operand = BinaryOperator::getNotArgument(V);
+    if (!isFreeToInvert(Operand))
+      return Operand;
+  }
 
   // Constants can be considered to be not'ed values...
   if (ConstantInt *C = dyn_cast<ConstantInt>(V))
@@ -624,6 +681,8 @@ static inline Value *dyn_castNotVal(Value *V) {
   return 0;
 }
 
+
+
 // dyn_castFoldableMul - If this value is a multiply that can be folded into
 // other computations (because it has a constant operand), return the
 // non-constant operand of the multiply, and set CST to point to the multiplier.
@@ -774,7 +833,7 @@ bool InstCombiner::SimplifyDemandedBits(Use &U, APInt DemandedMask,
   Value *NewVal = SimplifyDemandedUseBits(U.get(), DemandedMask,
                                           KnownZero, KnownOne, Depth);
   if (NewVal == 0) return false;
-  U.set(NewVal);
+  U = NewVal;
   return true;
 }
 
@@ -1046,6 +1105,33 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
     if (ShrinkDemandedConstant(I, 1, DemandedMask))
       return I;
     
+    // If our LHS is an 'and' and if it has one use, and if any of the bits we
+    // are flipping are known to be set, then the xor is just resetting those
+    // bits to zero.  We can just knock out bits from the 'and' and the 'xor',
+    // simplifying both of them.
+    if (Instruction *LHSInst = dyn_cast<Instruction>(I->getOperand(0)))
+      if (LHSInst->getOpcode() == Instruction::And && LHSInst->hasOneUse() &&
+          isa<ConstantInt>(I->getOperand(1)) &&
+          isa<ConstantInt>(LHSInst->getOperand(1)) &&
+          (LHSKnownOne & RHSKnownOne & DemandedMask) != 0) {
+        ConstantInt *AndRHS = cast<ConstantInt>(LHSInst->getOperand(1));
+        ConstantInt *XorRHS = cast<ConstantInt>(I->getOperand(1));
+        APInt NewMask = ~(LHSKnownOne & RHSKnownOne & DemandedMask);
+        
+        Constant *AndC =
+          ConstantInt::get(I->getType(), NewMask & AndRHS->getValue());
+        Instruction *NewAnd = 
+          BinaryOperator::CreateAnd(I->getOperand(0), AndC, "tmp");
+        InsertNewInstBefore(NewAnd, *I);
+        
+        Constant *XorC =
+          ConstantInt::get(I->getType(), NewMask & XorRHS->getValue());
+        Instruction *NewXor =
+          BinaryOperator::CreateXor(NewAnd, XorC, "tmp");
+        return InsertNewInstBefore(NewXor, *I);
+      }
+          
+          
     RHSKnownZero = KnownZeroOut;
     RHSKnownOne  = KnownOneOut;
     break;
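
The new xor case clears any bit that is known one on both sides, since
1 ^ 1 == 0. A standalone brute-force check of the rewrite for one concrete
mask choice: with bit 7 of X known set, (X & 0xFF) ^ 0x80 computes the same
value as (X & 0x7F) ^ 0x00:

    #include <cassert>

    int main() {
      // Assume bit 7 of X is known one.  Then bit 7 is known one in both
      // (X & 0xFF) and the xor constant 0x80, so the transform knocks it
      // out of both masks: AndC 0xFF -> 0x7F, XorC 0x80 -> 0x00.
      for (unsigned X = 0; X < 256; ++X) {
        unsigned V = X | 0x80;                // enforce the known bit
        assert(((V & 0xFF) ^ 0x80) == ((V & 0x7F) ^ 0x00));
      }
    }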
@@ -2351,8 +2437,8 @@ Instruction *InstCombiner::visitAdd(BinaryOperator &I) {
           ConstantExpr::getSExt(CI, I.getType()) == RHSC &&
           WillNotOverflowSignedAdd(LHSConv->getOperand(0), CI)) {
         // Insert the new, smaller add.
-        Value *NewAdd = Builder->CreateAdd(LHSConv->getOperand(0), 
-                                           CI, "addconv");
+        Value *NewAdd = Builder->CreateNSWAdd(LHSConv->getOperand(0), 
+                                              CI, "addconv");
         return new SExtInst(NewAdd, I.getType());
       }
     }
@@ -2367,8 +2453,8 @@ Instruction *InstCombiner::visitAdd(BinaryOperator &I) {
           WillNotOverflowSignedAdd(LHSConv->getOperand(0),
                                    RHSConv->getOperand(0))) {
         // Insert the new integer add.
-        Value *NewAdd = Builder->CreateAdd(LHSConv->getOperand(0), 
-                                           RHSConv->getOperand(0), "addconv");
+        Value *NewAdd = Builder->CreateNSWAdd(LHSConv->getOperand(0), 
+                                              RHSConv->getOperand(0), "addconv");
         return new SExtInst(NewAdd, I.getType());
       }
     }
@@ -2424,8 +2510,8 @@ Instruction *InstCombiner::visitFAdd(BinaryOperator &I) {
           ConstantExpr::getSIToFP(CI, I.getType()) == CFP &&
           WillNotOverflowSignedAdd(LHSConv->getOperand(0), CI)) {
         // Insert the new integer add.
-        Value *NewAdd = Builder->CreateAdd(LHSConv->getOperand(0),
-                                           CI, "addconv");
+        Value *NewAdd = Builder->CreateNSWAdd(LHSConv->getOperand(0),
+                                              CI, "addconv");
         return new SIToFPInst(NewAdd, I.getType());
       }
     }
@@ -2440,8 +2526,8 @@ Instruction *InstCombiner::visitFAdd(BinaryOperator &I) {
           WillNotOverflowSignedAdd(LHSConv->getOperand(0),
                                    RHSConv->getOperand(0))) {
         // Insert the new integer add.
-        Value *NewAdd = Builder->CreateAdd(LHSConv->getOperand(0), 
-                                           RHSConv->getOperand(0), "addconv");
+        Value *NewAdd = Builder->CreateNSWAdd(LHSConv->getOperand(0), 
+                                              RHSConv->getOperand(0),"addconv");
         return new SIToFPInst(NewAdd, I.getType());
       }
     }
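
Both CreateNSWAdd changes above rest on the same fact: when the narrow signed
add cannot overflow, adding narrow and sign-extending matches sign-extending
first and adding wide, so the narrow add may be tagged no-signed-wrap. An
exhaustive standalone check at i8/i32:

    #include <cassert>
    #include <cstdint>

    int main() {
      for (int a = -128; a <= 127; ++a)
        for (int b = -128; b <= 127; ++b) {
          int wide = a + b;                  // sext(a) + sext(b) at i32
          if (wide < -128 || wide > 127)
            continue;                        // narrow add would overflow
          int8_t narrow = (int8_t)(a + b);   // NSW add at i8
          assert((int32_t)narrow == wide);   // sext of it == wide add
        }
    }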
@@ -2450,13 +2536,210 @@ Instruction *InstCombiner::visitFAdd(BinaryOperator &I) {
   return Changed ? &I : 0;
 }
 
+
+/// EmitGEPOffset - Given a getelementptr instruction/constantexpr, emit the
+/// code necessary to compute the offset from the base pointer (without adding
+/// in the base pointer).  Return the result as a signed integer of intptr size.
+static Value *EmitGEPOffset(User *GEP, InstCombiner &IC) {
+  TargetData &TD = *IC.getTargetData();
+  gep_type_iterator GTI = gep_type_begin(GEP);
+  const Type *IntPtrTy = TD.getIntPtrType(GEP->getContext());
+  Value *Result = Constant::getNullValue(IntPtrTy);
+
+  // Build a mask for high order bits.
+  unsigned IntPtrWidth = TD.getPointerSizeInBits();
+  uint64_t PtrSizeMask = ~0ULL >> (64-IntPtrWidth);
+
+  for (User::op_iterator i = GEP->op_begin() + 1, e = GEP->op_end(); i != e;
+       ++i, ++GTI) {
+    Value *Op = *i;
+    uint64_t Size = TD.getTypeAllocSize(GTI.getIndexedType()) & PtrSizeMask;
+    if (ConstantInt *OpC = dyn_cast<ConstantInt>(Op)) {
+      if (OpC->isZero()) continue;
+      
+      // Handle a struct index, which adds its field offset to the pointer.
+      if (const StructType *STy = dyn_cast<StructType>(*GTI)) {
+        Size = TD.getStructLayout(STy)->getElementOffset(OpC->getZExtValue());
+        
+        Result = IC.Builder->CreateAdd(Result,
+                                       ConstantInt::get(IntPtrTy, Size),
+                                       GEP->getName()+".offs");
+        continue;
+      }
+      
+      Constant *Scale = ConstantInt::get(IntPtrTy, Size);
+      Constant *OC =
+              ConstantExpr::getIntegerCast(OpC, IntPtrTy, true /*SExt*/);
+      Scale = ConstantExpr::getMul(OC, Scale);
+      // Emit an add instruction.
+      Result = IC.Builder->CreateAdd(Result, Scale, GEP->getName()+".offs");
+      continue;
+    }
+    // Convert to correct type.
+    if (Op->getType() != IntPtrTy)
+      Op = IC.Builder->CreateIntCast(Op, IntPtrTy, true, Op->getName()+".c");
+    if (Size != 1) {
+      Constant *Scale = ConstantInt::get(IntPtrTy, Size);
+      // We'll let instcombine(mul) convert this to a shl if possible.
+      Op = IC.Builder->CreateMul(Op, Scale, GEP->getName()+".idx");
+    }
+
+    // Emit an add instruction.
+    Result = IC.Builder->CreateAdd(Op, Result, GEP->getName()+".offs");
+  }
+  return Result;
+}
+
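EmitGEPOffset just accumulates "field offset for struct indices, index times
element size for array indices". The same arithmetic against a concrete C
struct, with sizeof/offsetof standing in for TargetData (a standalone sketch,
not the LLVM code):

    #include <cassert>
    #include <cstddef>
    #include <cstdint>

    struct S { int32_t A; int32_t B[8]; };

    // Byte offset of &base[Idx0].B[Idx1], computed the way EmitGEPOffset
    // does: scale array indices by element size, add struct field offsets.
    static intptr_t gepOffset(intptr_t Idx0, intptr_t Idx1) {
      intptr_t Off = 0;
      Off += Idx0 * (intptr_t)sizeof(S);        // array index over S
      Off += (intptr_t)offsetof(S, B);          // struct field index
      Off += Idx1 * (intptr_t)sizeof(int32_t);  // array index over B
      return Off;
    }

    int main() {
      S Arr[4];
      char *Base = (char *)Arr;
      assert((char *)&Arr[2].B[3] - Base == gepOffset(2, 3));
    }
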
+
+/// EvaluateGEPOffsetExpression - Return a value that can be used to compare
+/// the *offset* implied by a GEP to zero.  For example, if we have &A[i], we
+/// want to return 'i' for "icmp ne i, 0".  Note that, in general, indices can
+/// be complex, and scales are involved.  The above expression would also be
+/// legal to codegen as "icmp ne (i*4), 0" (assuming A is a pointer to i32).
+/// This later form is less amenable to optimization though, and we are allowed
+/// to generate the first by knowing that pointer arithmetic doesn't overflow.
+///
+/// If we can't emit an optimized form for this expression, this returns null.
+/// 
+static Value *EvaluateGEPOffsetExpression(User *GEP, Instruction &I,
+                                          InstCombiner &IC) {
+  TargetData &TD = *IC.getTargetData();
+  gep_type_iterator GTI = gep_type_begin(GEP);
+
+  // Check to see if this gep only has a single variable index.  If so, and if
+  // any constant indices are a multiple of its scale, then we can compute this
+  // in terms of the scale of the variable index.  For example, if the GEP
+  // implies an offset of "12 + i*4", then we can codegen this as "3 + i",
+  // because the expression will cross zero at the same point.
+  unsigned i, e = GEP->getNumOperands();
+  int64_t Offset = 0;
+  for (i = 1; i != e; ++i, ++GTI) {
+    if (ConstantInt *CI = dyn_cast<ConstantInt>(GEP->getOperand(i))) {
+      // Compute the aggregate offset of constant indices.
+      if (CI->isZero()) continue;
+
+      // Handle a struct index, which adds its field offset to the pointer.
+      if (const StructType *STy = dyn_cast<StructType>(*GTI)) {
+        Offset += TD.getStructLayout(STy)->getElementOffset(CI->getZExtValue());
+      } else {
+        uint64_t Size = TD.getTypeAllocSize(GTI.getIndexedType());
+        Offset += Size*CI->getSExtValue();
+      }
+    } else {
+      // Found our variable index.
+      break;
+    }
+  }
+  
+  // If there are no variable indices, we must have a constant offset, just
+  // evaluate it the general way.
+  if (i == e) return 0;
+  
+  Value *VariableIdx = GEP->getOperand(i);
+  // Determine the scale factor of the variable element.  For example, this is
+  // 4 if the variable index is into an array of i32.
+  uint64_t VariableScale = TD.getTypeAllocSize(GTI.getIndexedType());
+  
+  // Verify that there are no other variable indices.  If so, emit the hard way.
+  for (++i, ++GTI; i != e; ++i, ++GTI) {
+    ConstantInt *CI = dyn_cast<ConstantInt>(GEP->getOperand(i));
+    if (!CI) return 0;
+   
+    // Compute the aggregate offset of constant indices.
+    if (CI->isZero()) continue;
+    
+    // Handle a struct index, which adds its field offset to the pointer.
+    if (const StructType *STy = dyn_cast<StructType>(*GTI)) {
+      Offset += TD.getStructLayout(STy)->getElementOffset(CI->getZExtValue());
+    } else {
+      uint64_t Size = TD.getTypeAllocSize(GTI.getIndexedType());
+      Offset += Size*CI->getSExtValue();
+    }
+  }
+  
+  // Okay, we know we have a single variable index, which must be a
+  // pointer/array/vector index.  If there is no offset, life is simple, return
+  // the index.
+  unsigned IntPtrWidth = TD.getPointerSizeInBits();
+  if (Offset == 0) {
+    // Cast to intptrty in case a truncation occurs.  If an extension is needed,
+    // we don't need to bother extending: the extension won't affect where the
+    // computation crosses zero.
+    if (VariableIdx->getType()->getPrimitiveSizeInBits() > IntPtrWidth)
+      VariableIdx = new TruncInst(VariableIdx, 
+                                  TD.getIntPtrType(VariableIdx->getContext()),
+                                  VariableIdx->getName(), &I);
+    return VariableIdx;
+  }
+  
+  // Otherwise, there is an index.  The computation we will do will be modulo
+  // the pointer size, so get it.
+  uint64_t PtrSizeMask = ~0ULL >> (64-IntPtrWidth);
+  
+  Offset &= PtrSizeMask;
+  VariableScale &= PtrSizeMask;
+
+  // To do this transformation, any constant index must be a multiple of the
+  // variable scale factor.  For example, we can evaluate "12 + 4*i" as "3 + i",
+  // but we can't evaluate "10 + 3*i" in terms of i.  Check that the offset is a
+  // multiple of the variable scale.
+  int64_t NewOffs = Offset / (int64_t)VariableScale;
+  if (Offset != NewOffs*(int64_t)VariableScale)
+    return 0;
+
+  // Okay, we can do this evaluation.  Start by converting the index to intptr.
+  const Type *IntPtrTy = TD.getIntPtrType(VariableIdx->getContext());
+  if (VariableIdx->getType() != IntPtrTy)
+    VariableIdx = CastInst::CreateIntegerCast(VariableIdx, IntPtrTy,
+                                              true /*SExt*/, 
+                                              VariableIdx->getName(), &I);
+  Constant *OffsetVal = ConstantInt::get(IntPtrTy, NewOffs);
+  return BinaryOperator::CreateAdd(VariableIdx, OffsetVal, "offset", &I);
+}
+
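The observation EvaluateGEPOffsetExpression exploits, checked standalone: an
offset of the form 12 + 4*i compares against zero exactly where 3 + i does,
because the constant part is a multiple of the scale (10 + 3*i would not
qualify):

    #include <cassert>

    int main() {
      // "icmp eq (12 + i*4), 0" can be evaluated as "icmp eq (3 + i), 0".
      for (long i = -1000; i <= 1000; ++i)
        assert(((12 + 4 * i) == 0) == ((3 + i) == 0));
    }
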
+
+/// Optimize pointer differences in the same array into a size.  Consider:
+///  &A[10] - &A[0]: we should compile this to "10".  LHS/RHS are the pointer
+/// operands to the ptrtoint instructions for the LHS/RHS of the subtract.
+///
+Value *InstCombiner::OptimizePointerDifference(Value *LHS, Value *RHS,
+                                               const Type *Ty) {
+  assert(TD && "Must have target data info for this");
+  
+  // If LHS is a gep based on RHS or RHS is a gep based on LHS, we can optimize
+  // this.
+  bool Swapped;
+  GetElementPtrInst *GEP;
+  
+  if ((GEP = dyn_cast<GetElementPtrInst>(LHS)) &&
+      GEP->getOperand(0) == RHS)
+    Swapped = false;
+  else if ((GEP = dyn_cast<GetElementPtrInst>(RHS)) &&
+           GEP->getOperand(0) == LHS)
+    Swapped = true;
+  else
+    return 0;
+  
+  // TODO: Could also optimize &A[i] - &A[j] -> "i-j".
+  
+  // Emit the offset of the GEP as an intptr_t.
+  Value *Result = EmitGEPOffset(GEP, *this);
+
+  // If we have p - gep(p, ...)  then we have to negate the result.
+  if (Swapped)
+    Result = Builder->CreateNeg(Result, "diff.neg");
+
+  return Builder->CreateIntCast(Result, Ty, true);
+}
+
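The source-level pattern OptimizePointerDifference targets, as a standalone
check: subtracting two pointers into the same object reduces to the constant
byte offset, negated when the plain pointer is on the left:

    #include <cassert>
    #include <cstdint>

    int main() {
      int A[16];
      // &A[10] - &A[0] folds to the byte offset of 10 elements.
      assert((intptr_t)&A[10] - (intptr_t)&A[0]
             == 10 * (intptr_t)sizeof(int));
      // p - gep(p, 3): same offset, negated because the GEP is on the RHS.
      assert((intptr_t)&A[0] - (intptr_t)&A[3]
             == -3 * (intptr_t)sizeof(int));
    }
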
+
 Instruction *InstCombiner::visitSub(BinaryOperator &I) {
   Value *Op0 = I.getOperand(0), *Op1 = I.getOperand(1);
 
   if (Op0 == Op1)                        // sub X, X  -> 0
     return ReplaceInstUsesWith(I, Constant::getNullValue(I.getType()));
 
-  // If this is a 'B = x-(-A)', change to B = x+A...
+  // If this is a 'B = x-(-A)', change to B = x+A.
   if (Value *V = dyn_castNegVal(Op1))
     return BinaryOperator::CreateAdd(Op0, V);
 
@@ -2464,9 +2747,11 @@ Instruction *InstCombiner::visitSub(BinaryOperator &I) {
     return ReplaceInstUsesWith(I, Op0);    // undef - X -> undef
   if (isa<UndefValue>(Op1))
     return ReplaceInstUsesWith(I, Op1);    // X - undef -> undef
-
+  if (I.getType() == Type::getInt1Ty(*Context))
+    return BinaryOperator::CreateXor(Op0, Op1);
+  
   if (ConstantInt *C = dyn_cast<ConstantInt>(Op0)) {
-    // Replace (-1 - A) with (~A)...
+    // Replace (-1 - A) with (~A).
     if (C->isAllOnesValue())
       return BinaryOperator::CreateNot(Op1);
 
@@ -2489,8 +2774,7 @@ Instruction *InstCombiner::visitSub(BinaryOperator &I) {
                                           SI->getOperand(0), CU, SI->getName());
             }
           }
-        }
-        else if (SI->getOpcode() == Instruction::AShr) {
+        } else if (SI->getOpcode() == Instruction::AShr) {
           if (ConstantInt *CU = dyn_cast<ConstantInt>(SI->getOperand(1))) {
             // Check to see if we are shifting out everything but the sign bit.
             if (CU->getLimitedValue(SI->getType()->getPrimitiveSizeInBits()) ==
@@ -2515,9 +2799,6 @@ Instruction *InstCombiner::visitSub(BinaryOperator &I) {
         return SelectInst::Create(ZI->getOperand(0), SubOne(C), C);
   }
 
-  if (I.getType() == Type::getInt1Ty(*Context))
-    return BinaryOperator::CreateXor(Op0, Op1);
-
   if (BinaryOperator *Op1I = dyn_cast<BinaryOperator>(Op1)) {
     if (Op1I->getOpcode() == Instruction::Add) {
       if (Op1I->getOperand(0) == Op0)              // X-(X+Y) == -Y
@@ -2599,6 +2880,28 @@ Instruction *InstCombiner::visitSub(BinaryOperator &I) {
     if (X == dyn_castFoldableMul(Op1, C2))
       return BinaryOperator::CreateMul(X, ConstantExpr::getSub(C1, C2));
   }
+  
+  // Optimize pointer differences in the same array into a size.  Consider:
+  //  &A[10] - &A[0]: we should compile this to "10".
+  if (TD) {
+    if (PtrToIntInst *LHS = dyn_cast<PtrToIntInst>(Op0))
+      if (PtrToIntInst *RHS = dyn_cast<PtrToIntInst>(Op1))
+        if (Value *Res = OptimizePointerDifference(LHS->getOperand(0),
+                                                   RHS->getOperand(0),
+                                                   I.getType()))
+          return ReplaceInstUsesWith(I, Res);
+    
+    // trunc(p)-trunc(q) -> trunc(p-q)
+    if (TruncInst *LHST = dyn_cast<TruncInst>(Op0))
+      if (TruncInst *RHST = dyn_cast<TruncInst>(Op1))
+        if (PtrToIntInst *LHS = dyn_cast<PtrToIntInst>(LHST->getOperand(0)))
+          if (PtrToIntInst *RHS = dyn_cast<PtrToIntInst>(RHST->getOperand(0)))
+            if (Value *Res = OptimizePointerDifference(LHS->getOperand(0),
+                                                       RHS->getOperand(0),
+                                                       I.getType()))
+              return ReplaceInstUsesWith(I, Res);
+  }
+  
   return 0;
 }
 
@@ -2655,14 +2958,14 @@ static bool isSignBitCheck(ICmpInst::Predicate pred, ConstantInt *RHS,
 
 Instruction *InstCombiner::visitMul(BinaryOperator &I) {
   bool Changed = SimplifyCommutative(I);
-  Value *Op0 = I.getOperand(0);
+  Value *Op0 = I.getOperand(0), *Op1 = I.getOperand(1);
 
-  if (isa<UndefValue>(I.getOperand(1)))              // undef * X -> 0
+  if (isa<UndefValue>(Op1))              // undef * X -> 0
     return ReplaceInstUsesWith(I, Constant::getNullValue(I.getType()));
 
-  // Simplify mul instructions with a constant RHS...
-  if (Constant *Op1 = dyn_cast<Constant>(I.getOperand(1))) {
-    if (ConstantInt *CI = dyn_cast<ConstantInt>(Op1)) {
+  // Simplify mul instructions with a constant RHS.
+  if (Constant *Op1C = dyn_cast<Constant>(Op1)) {
+    if (ConstantInt *CI = dyn_cast<ConstantInt>(Op1C)) {
 
       // ((X << C1)*C2) == (X * (C2 << C1))
       if (BinaryOperator *SI = dyn_cast<BinaryOperator>(Op0))
@@ -2672,7 +2975,7 @@ Instruction *InstCombiner::visitMul(BinaryOperator &I) {
                                         ConstantExpr::getShl(CI, ShOp));
 
       if (CI->isZero())
-        return ReplaceInstUsesWith(I, Op1);  // X * 0  == 0
+        return ReplaceInstUsesWith(I, Op1C);  // X * 0  == 0
       if (CI->equalsInt(1))                  // X * 1  == X
         return ReplaceInstUsesWith(I, Op0);
       if (CI->isAllOnesValue())              // X * -1 == 0 - X
@@ -2683,11 +2986,11 @@ Instruction *InstCombiner::visitMul(BinaryOperator &I) {
         return BinaryOperator::CreateShl(Op0,
                  ConstantInt::get(Op0->getType(), Val.logBase2()));
       }
-    } else if (isa<VectorType>(Op1->getType())) {
-      if (Op1->isNullValue())
-        return ReplaceInstUsesWith(I, Op1);
+    } else if (isa<VectorType>(Op1C->getType())) {
+      if (Op1C->isNullValue())
+        return ReplaceInstUsesWith(I, Op1C);
 
-      if (ConstantVector *Op1V = dyn_cast<ConstantVector>(Op1)) {
+      if (ConstantVector *Op1V = dyn_cast<ConstantVector>(Op1C)) {
         if (Op1V->isAllOnesValue())              // X * -1 == 0 - X
           return BinaryOperator::CreateNeg(Op0, I.getName());
 
@@ -2702,10 +3005,10 @@ Instruction *InstCombiner::visitMul(BinaryOperator &I) {
     
     if (BinaryOperator *Op0I = dyn_cast<BinaryOperator>(Op0))
       if (Op0I->getOpcode() == Instruction::Add && Op0I->hasOneUse() &&
-          isa<ConstantInt>(Op0I->getOperand(1)) && isa<ConstantInt>(Op1)) {
+          isa<ConstantInt>(Op0I->getOperand(1)) && isa<ConstantInt>(Op1C)) {
         // Canonicalize (X+C1)*C2 -> X*C2+C1*C2.
-        Value *Add = Builder->CreateMul(Op0I->getOperand(0), Op1, "tmp");
-        Value *C1C2 = Builder->CreateMul(Op1, Op0I->getOperand(1));
+        Value *Add = Builder->CreateMul(Op0I->getOperand(0), Op1C, "tmp");
+        Value *C1C2 = Builder->CreateMul(Op1C, Op0I->getOperand(1));
         return BinaryOperator::CreateAdd(Add, C1C2);
         
       }
@@ -2721,23 +3024,23 @@ Instruction *InstCombiner::visitMul(BinaryOperator &I) {
   }
 
   if (Value *Op0v = dyn_castNegVal(Op0))     // -X * -Y = X*Y
-    if (Value *Op1v = dyn_castNegVal(I.getOperand(1)))
+    if (Value *Op1v = dyn_castNegVal(Op1))
       return BinaryOperator::CreateMul(Op0v, Op1v);
 
   // (X / Y) *  Y = X - (X % Y)
   // (X / Y) * -Y = (X % Y) - X
   {
-    Value *Op1 = I.getOperand(1);
+    Value *Op1C = Op1;
     BinaryOperator *BO = dyn_cast<BinaryOperator>(Op0);
     if (!BO ||
         (BO->getOpcode() != Instruction::UDiv && 
          BO->getOpcode() != Instruction::SDiv)) {
-      Op1 = Op0;
-      BO = dyn_cast<BinaryOperator>(I.getOperand(1));
+      Op1C = Op0;
+      BO = dyn_cast<BinaryOperator>(Op1);
     }
-    Value *Neg = dyn_castNegVal(Op1);
+    Value *Neg = dyn_castNegVal(Op1C);
     if (BO && BO->hasOneUse() &&
-        (BO->getOperand(1) == Op1 || BO->getOperand(1) == Neg) &&
+        (BO->getOperand(1) == Op1C || BO->getOperand(1) == Neg) &&
         (BO->getOpcode() == Instruction::UDiv ||
          BO->getOpcode() == Instruction::SDiv)) {
       Value *Op0BO = BO->getOperand(0), *Op1BO = BO->getOperand(1);
@@ -2745,10 +3048,9 @@ Instruction *InstCombiner::visitMul(BinaryOperator &I) {
       // If the division is exact, X % Y is zero.
       if (SDivOperator *SDiv = dyn_cast<SDivOperator>(BO))
         if (SDiv->isExact()) {
-          if (Op1BO == Op1)
+          if (Op1BO == Op1C)
             return ReplaceInstUsesWith(I, Op0BO);
-          else
-            return BinaryOperator::CreateNeg(Op0BO);
+          return BinaryOperator::CreateNeg(Op0BO);
         }
 
       Value *Rem;
@@ -2758,52 +3060,43 @@ Instruction *InstCombiner::visitMul(BinaryOperator &I) {
         Rem = Builder->CreateSRem(Op0BO, Op1BO);
       Rem->takeName(BO);
 
-      if (Op1BO == Op1)
+      if (Op1BO == Op1C)
         return BinaryOperator::CreateSub(Op0BO, Rem);
       return BinaryOperator::CreateSub(Rem, Op0BO);
     }
   }
 
+  // i1 mul -> i1 and.
   if (I.getType() == Type::getInt1Ty(*Context))
-    return BinaryOperator::CreateAnd(Op0, I.getOperand(1));
+    return BinaryOperator::CreateAnd(Op0, Op1);
 
+  // X*(1 << Y) --> X << Y
+  // (1 << Y)*X --> X << Y
+  {
+    Value *Y;
+    if (match(Op0, m_Shl(m_One(), m_Value(Y))))
+      return BinaryOperator::CreateShl(Op1, Y);
+    if (match(Op1, m_Shl(m_One(), m_Value(Y))))
+      return BinaryOperator::CreateShl(Op0, Y);
+  }
+  
   // If one of the operands of the multiply is a cast from a boolean value, then
   // we know the bool is either zero or one, so this is a 'masking' multiply.
-  // See if we can simplify things based on how the boolean was originally
-  // formed.
-  CastInst *BoolCast = 0;
-  if (ZExtInst *CI = dyn_cast<ZExtInst>(Op0))
-    if (CI->getOperand(0)->getType() == Type::getInt1Ty(*Context))
-      BoolCast = CI;
-  if (!BoolCast)
-    if (ZExtInst *CI = dyn_cast<ZExtInst>(I.getOperand(1)))
-      if (CI->getOperand(0)->getType() == Type::getInt1Ty(*Context))
-        BoolCast = CI;
-  if (BoolCast) {
-    if (ICmpInst *SCI = dyn_cast<ICmpInst>(BoolCast->getOperand(0))) {
-      Value *SCIOp0 = SCI->getOperand(0), *SCIOp1 = SCI->getOperand(1);
-      const Type *SCOpTy = SCIOp0->getType();
-      bool TIS = false;
-      
-      // If the icmp is true iff the sign bit of X is set, then convert this
-      // multiply into a shift/and combination.
-      if (isa<ConstantInt>(SCIOp1) &&
-          isSignBitCheck(SCI->getPredicate(), cast<ConstantInt>(SCIOp1), TIS) &&
-          TIS) {
-        // Shift the X value right to turn it into "all signbits".
-        Constant *Amt = ConstantInt::get(SCIOp0->getType(),
-                                          SCOpTy->getPrimitiveSizeInBits()-1);
-        Value *V = Builder->CreateAShr(SCIOp0, Amt,
-                                    BoolCast->getOperand(0)->getName()+".mask");
-
-        // If the multiply type is not the same as the source type, sign extend
-        // or truncate to the multiply type.
-        if (I.getType() != V->getType())
-          V = Builder->CreateIntCast(V, I.getType(), true);
+  //   X * Y (where Y is 0 or 1) -> X & (0-Y)
+  if (!isa<VectorType>(I.getType())) {
+    // -2 is "-1 << 1" so it is all bits set except the low one.
+    APInt Negative2(I.getType()->getPrimitiveSizeInBits(), (uint64_t)-2, true);
+    
+    Value *BoolCast = 0, *OtherOp = 0;
+    if (MaskedValueIsZero(Op0, Negative2))
+      BoolCast = Op0, OtherOp = Op1;
+    else if (MaskedValueIsZero(Op1, Negative2))
+      BoolCast = Op1, OtherOp = Op0;
 
-        Value *OtherOp = Op0 == BoolCast ? I.getOperand(1) : Op0;
-        return BinaryOperator::CreateAnd(V, OtherOp);
-      }
+    if (BoolCast) {
+      Value *V = Builder->CreateSub(Constant::getNullValue(I.getType()),
+                                    BoolCast, "tmp");
+      return BinaryOperator::CreateAnd(V, OtherOp);
     }
   }
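
Two of the mul rewrites above, verified exhaustively in standalone form:
X * (1 << Y) is X << Y, and when Y is known to be 0 or 1 (every bit but the
lowest known zero), X * Y equals X & (0 - Y):

    #include <cassert>
    #include <cstdint>

    int main() {
      for (uint32_t X = 0; X < 256; ++X) {
        // X * (1 << Y)  -->  X << Y
        for (uint32_t Y = 0; Y < 8; ++Y)
          assert(X * (1u << Y) == X << Y);

        // X * Y (Y is 0 or 1)  -->  X & (0 - Y):
        // 0 - 1 is all-ones, 0 - 0 is zero.
        for (uint32_t Y = 0; Y <= 1; ++Y)
          assert(X * Y == (X & (0u - Y)));
      }
    }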
 
@@ -2812,17 +3105,17 @@ Instruction *InstCombiner::visitMul(BinaryOperator &I) {
 
 Instruction *InstCombiner::visitFMul(BinaryOperator &I) {
   bool Changed = SimplifyCommutative(I);
-  Value *Op0 = I.getOperand(0);
+  Value *Op0 = I.getOperand(0), *Op1 = I.getOperand(1);
 
   // Simplify mul instructions with a constant RHS...
-  if (Constant *Op1 = dyn_cast<Constant>(I.getOperand(1))) {
-    if (ConstantFP *Op1F = dyn_cast<ConstantFP>(Op1)) {
+  if (Constant *Op1C = dyn_cast<Constant>(Op1)) {
+    if (ConstantFP *Op1F = dyn_cast<ConstantFP>(Op1C)) {
       // "In IEEE floating point, x*1 is not equivalent to x for nans.  However,
       // ANSI says we can drop signals, so we can do this anyway." (from GCC)
       if (Op1F->isExactlyValue(1.0))
         return ReplaceInstUsesWith(I, Op0);  // Eliminate 'mul double %X, 1.0'
-    } else if (isa<VectorType>(Op1->getType())) {
-      if (ConstantVector *Op1V = dyn_cast<ConstantVector>(Op1)) {
+    } else if (isa<VectorType>(Op1C->getType())) {
+      if (ConstantVector *Op1V = dyn_cast<ConstantVector>(Op1C)) {
         // As above, vector X*splat(1.0) -> X in all defined cases.
         if (Constant *Splat = Op1V->getSplatValue()) {
           if (ConstantFP *F = dyn_cast<ConstantFP>(Splat))
@@ -2843,7 +3136,7 @@ Instruction *InstCombiner::visitFMul(BinaryOperator &I) {
   }
 
   if (Value *Op0v = dyn_castFNegVal(Op0))     // -X * -Y = X*Y
-    if (Value *Op1v = dyn_castFNegVal(I.getOperand(1)))
+    if (Value *Op1v = dyn_castFNegVal(Op1))
       return BinaryOperator::CreateFMul(Op0v, Op1v);
 
   return Changed ? &I : 0;
@@ -3477,9 +3770,9 @@ static Value *getFCmpValue(bool isordered, unsigned code,
 /// PredicatesFoldable - Return true if both predicates match sign or if at
 /// least one of them is an equality comparison (which is signless).
 static bool PredicatesFoldable(ICmpInst::Predicate p1, ICmpInst::Predicate p2) {
-  return (ICmpInst::isSignedPredicate(p1) == ICmpInst::isSignedPredicate(p2)) ||
-         (ICmpInst::isSignedPredicate(p1) && ICmpInst::isEquality(p2)) ||
-         (ICmpInst::isSignedPredicate(p2) && ICmpInst::isEquality(p1));
+  return (CmpInst::isSigned(p1) == CmpInst::isSigned(p2)) ||
+         (CmpInst::isSigned(p1) && ICmpInst::isEquality(p2)) ||
+         (CmpInst::isSigned(p2) && ICmpInst::isEquality(p1));
 }
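+// For example (illustrative): (slt, sgt) and (slt, eq) are foldable under
+// this rule, while (slt, ult) is not, since those predicates disagree on
+// signedness and neither is an equality.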
 
 namespace { 
@@ -3516,9 +3809,7 @@ struct FoldICmpLogical {
     default: llvm_unreachable("Illegal logical opcode!"); return 0;
     }
 
-    bool isSigned = ICmpInst::isSignedPredicate(RHSICI->getPredicate()) || 
-                    ICmpInst::isSignedPredicate(ICI->getPredicate());
-      
+    bool isSigned = RHSICI->isSigned() || ICI->isSigned();
     Value *RV = getICmpValue(isSigned, Code, LHS, RHS, IC.getContext());
     if (Instruction *I = dyn_cast<Instruction>(RV))
       return I;
@@ -3815,9 +4106,9 @@ Instruction *InstCombiner::FoldAndOfICmps(Instruction &I,
     
   // Ensure that the larger constant is on the RHS.
   bool ShouldSwap;
-  if (ICmpInst::isSignedPredicate(LHSCC) ||
+  if (CmpInst::isSigned(LHSCC) ||
       (ICmpInst::isEquality(LHSCC) && 
-       ICmpInst::isSignedPredicate(RHSCC)))
+       CmpInst::isSigned(RHSCC)))
     ShouldSwap = LHSCst->getValue().sgt(RHSCst->getValue());
   else
     ShouldSwap = LHSCst->getValue().ugt(RHSCst->getValue());
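+  // E.g. (illustrative) with constants -1 and 1: the signed comparison (sgt)
+  // orders -1 below 1, while ugt would treat -1 as a huge unsigned value.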
@@ -4029,55 +4320,43 @@ Instruction *InstCombiner::visitAnd(BinaryOperator &I) {
   bool Changed = SimplifyCommutative(I);
   Value *Op0 = I.getOperand(0), *Op1 = I.getOperand(1);
 
-  if (isa<UndefValue>(Op1))                         // X & undef -> 0
-    return ReplaceInstUsesWith(I, Constant::getNullValue(I.getType()));
-
-  // and X, X = X
-  if (Op0 == Op1)
-    return ReplaceInstUsesWith(I, Op1);
+  if (Value *V = SimplifyAndInst(Op0, Op1, TD))
+    return ReplaceInstUsesWith(I, V);
+    
 
   // See if we can simplify any instructions used by the instruction whose sole 
   // purpose is to compute bits we don't care about.
   if (SimplifyDemandedInstructionBits(I))
     return &I;
-  if (isa<VectorType>(I.getType())) {
-    if (ConstantVector *CP = dyn_cast<ConstantVector>(Op1)) {
-      if (CP->isAllOnesValue())            // X & <-1,-1> -> X
-        return ReplaceInstUsesWith(I, I.getOperand(0));
-    } else if (isa<ConstantAggregateZero>(Op1)) {
-      return ReplaceInstUsesWith(I, Op1);  // X & <0,0> -> <0,0>
-    }
-  }
+  
 
   if (ConstantInt *AndRHS = dyn_cast<ConstantInt>(Op1)) {
-    const APInt& AndRHSMask = AndRHS->getValue();
+    const APInt &AndRHSMask = AndRHS->getValue();
     APInt NotAndRHS(~AndRHSMask);
 
     // Optimize a variety of ((val OP C1) & C2) combinations...
-    if (isa<BinaryOperator>(Op0)) {
-      Instruction *Op0I = cast<Instruction>(Op0);
+    if (BinaryOperator *Op0I = dyn_cast<BinaryOperator>(Op0)) {
       Value *Op0LHS = Op0I->getOperand(0);
       Value *Op0RHS = Op0I->getOperand(1);
       switch (Op0I->getOpcode()) {
+      default: break;
       case Instruction::Xor:
       case Instruction::Or:
         // If the mask is only needed on one incoming arm, push it up.
-        if (Op0I->hasOneUse()) {
-          if (MaskedValueIsZero(Op0LHS, NotAndRHS)) {
-            // Not masking anything out for the LHS, move to RHS.
-            Value *NewRHS = Builder->CreateAnd(Op0RHS, AndRHS,
-                                               Op0RHS->getName()+".masked");
-            return BinaryOperator::Create(
-                       cast<BinaryOperator>(Op0I)->getOpcode(), Op0LHS, NewRHS);
-          }
-          if (!isa<Constant>(Op0RHS) &&
-              MaskedValueIsZero(Op0RHS, NotAndRHS)) {
-            // Not masking anything out for the RHS, move to LHS.
-            Value *NewLHS = Builder->CreateAnd(Op0LHS, AndRHS,
-                                               Op0LHS->getName()+".masked");
-            return BinaryOperator::Create(
-                       cast<BinaryOperator>(Op0I)->getOpcode(), NewLHS, Op0RHS);
-          }
+        if (!Op0I->hasOneUse()) break;
+          
+        if (MaskedValueIsZero(Op0LHS, NotAndRHS)) {
+          // Not masking anything out for the LHS, move to RHS.
+          Value *NewRHS = Builder->CreateAnd(Op0RHS, AndRHS,
+                                             Op0RHS->getName()+".masked");
+          return BinaryOperator::Create(Op0I->getOpcode(), Op0LHS, NewRHS);
+        }
+        if (!isa<Constant>(Op0RHS) &&
+            MaskedValueIsZero(Op0RHS, NotAndRHS)) {
+          // Not masking anything out for the RHS, move to LHS.
+          Value *NewLHS = Builder->CreateAnd(Op0LHS, AndRHS,
+                                             Op0LHS->getName()+".masked");
+          return BinaryOperator::Create(Op0I->getOpcode(), NewLHS, Op0RHS);
         }
 
         break;
@@ -4136,7 +4415,7 @@ Instruction *InstCombiner::visitAnd(BinaryOperator &I) {
       if (Instruction *CastOp = dyn_cast<Instruction>(CI->getOperand(0))) {
         if ((isa<TruncInst>(CI) || isa<BitCastInst>(CI)) &&
             CastOp->getNumOperands() == 2)
-          if (ConstantInt *AndCI = dyn_cast<ConstantInt>(CastOp->getOperand(1))) {
+          if (ConstantInt *AndCI =
+                dyn_cast<ConstantInt>(CastOp->getOperand(1))) {
             if (CastOp->getOpcode() == Instruction::And) {
               // Change: and (cast (and X, C1) to T), C2
               // into  : and (cast X to T), trunc_or_bitcast(C1)&C2
@@ -4170,42 +4449,29 @@ Instruction *InstCombiner::visitAnd(BinaryOperator &I) {
         return NV;
   }
 
-  Value *Op0NotVal = dyn_castNotVal(Op0);
-  Value *Op1NotVal = dyn_castNotVal(Op1);
-
-  if (Op0NotVal == Op1 || Op1NotVal == Op0)  // A & ~A  == ~A & A == 0
-    return ReplaceInstUsesWith(I, Constant::getNullValue(I.getType()));
 
   // (~A & ~B) == (~(A | B)) - De Morgan's Law
-  if (Op0NotVal && Op1NotVal && isOnlyUse(Op0) && isOnlyUse(Op1)) {
-    Value *Or = Builder->CreateOr(Op0NotVal, Op1NotVal,
-                                  I.getName()+".demorgan");
-    return BinaryOperator::CreateNot(Or);
-  }
-  
+  if (Value *Op0NotVal = dyn_castNotVal(Op0))
+    if (Value *Op1NotVal = dyn_castNotVal(Op1))
+      if (Op0->hasOneUse() && Op1->hasOneUse()) {
+        Value *Or = Builder->CreateOr(Op0NotVal, Op1NotVal,
+                                      I.getName()+".demorgan");
+        return BinaryOperator::CreateNot(Or);
+      }
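+  // Illustrative IR for the De Morgan fold above (not part of this patch):
+  //   and (xor %a, -1), (xor %b, -1)  -->  xor (or %a, %b), -1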
+
   {
     Value *A = 0, *B = 0, *C = 0, *D = 0;
-    if (match(Op0, m_Or(m_Value(A), m_Value(B)))) {
-      if (A == Op1 || B == Op1)    // (A | ?) & A  --> A
-        return ReplaceInstUsesWith(I, Op1);
-    
-      // (A|B) & ~(A&B) -> A^B
-      if (match(Op1, m_Not(m_And(m_Value(C), m_Value(D))))) {
-        if ((A == C && B == D) || (A == D && B == C))
-          return BinaryOperator::CreateXor(A, B);
-      }
-    }
+    // (A|B) & ~(A&B) -> A^B
+    if (match(Op0, m_Or(m_Value(A), m_Value(B))) &&
+        match(Op1, m_Not(m_And(m_Value(C), m_Value(D)))) &&
+        ((A == C && B == D) || (A == D && B == C)))
+      return BinaryOperator::CreateXor(A, B);
     
-    if (match(Op1, m_Or(m_Value(A), m_Value(B)))) {
-      if (A == Op0 || B == Op0)    // A & (A | ?)  --> A
-        return ReplaceInstUsesWith(I, Op0);
-
-      // ~(A&B) & (A|B) -> A^B
-      if (match(Op0, m_Not(m_And(m_Value(C), m_Value(D))))) {
-        if ((A == C && B == D) || (A == D && B == C))
-          return BinaryOperator::CreateXor(A, B);
-      }
-    }
+    // ~(A&B) & (A|B) -> A^B
+    if (match(Op1, m_Or(m_Value(A), m_Value(B))) &&
+        match(Op0, m_Not(m_And(m_Value(C), m_Value(D)))) &&
+        ((A == C && B == D) || (A == D && B == C)))
+      return BinaryOperator::CreateXor(A, B);
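+    // Illustrative: and (or %a, %b), (xor (and %a, %b), -1) folds to
+    // xor %a, %b, in either operand order.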
     
     if (Op0->hasOneUse() &&
         match(Op0, m_Xor(m_Value(A), m_Value(B)))) {
@@ -4505,9 +4771,9 @@ Instruction *InstCombiner::FoldOrOfICmps(Instruction &I,
   
   // Ensure that the larger constant is on the RHS.
   bool ShouldSwap;
-  if (ICmpInst::isSignedPredicate(LHSCC) ||
+  if (CmpInst::isSigned(LHSCC) ||
       (ICmpInst::isEquality(LHSCC) && 
-       ICmpInst::isSignedPredicate(RHSCC)))
+       CmpInst::isSigned(RHSCC)))
     ShouldSwap = LHSCst->getValue().sgt(RHSCst->getValue());
   else
     ShouldSwap = LHSCst->getValue().ugt(RHSCst->getValue());
@@ -4737,27 +5003,15 @@ Instruction *InstCombiner::visitOr(BinaryOperator &I) {
   bool Changed = SimplifyCommutative(I);
   Value *Op0 = I.getOperand(0), *Op1 = I.getOperand(1);
 
-  if (isa<UndefValue>(Op1))                       // X | undef -> -1
-    return ReplaceInstUsesWith(I, Constant::getAllOnesValue(I.getType()));
-
-  // or X, X = X
-  if (Op0 == Op1)
-    return ReplaceInstUsesWith(I, Op0);
-
+  if (Value *V = SimplifyOrInst(Op0, Op1, TD))
+    return ReplaceInstUsesWith(I, V);
+  
+  
   // See if we can simplify any instructions used by the instruction whose sole 
   // purpose is to compute bits we don't care about.
   if (SimplifyDemandedInstructionBits(I))
     return &I;
-  if (isa<VectorType>(I.getType())) {
-    if (isa<ConstantAggregateZero>(Op1)) {
-      return ReplaceInstUsesWith(I, Op0);  // X | <0,0> -> X
-    } else if (ConstantVector *CP = dyn_cast<ConstantVector>(Op1)) {
-      if (CP->isAllOnesValue())            // X | <-1,-1> -> <-1,-1>
-        return ReplaceInstUsesWith(I, I.getOperand(1));
-    }
-  }
 
-  // or X, -1 == -1
   if (ConstantInt *RHS = dyn_cast<ConstantInt>(Op1)) {
     ConstantInt *C1 = 0; Value *X = 0;
     // (X & C1) | C2 --> (X | C2) & (C1|C2)
@@ -4790,13 +5044,6 @@ Instruction *InstCombiner::visitOr(BinaryOperator &I) {
   Value *A = 0, *B = 0;
   ConstantInt *C1 = 0, *C2 = 0;
 
-  if (match(Op0, m_And(m_Value(A), m_Value(B))))
-    if (A == Op1 || B == Op1)    // (A & ?) | A  --> A
-      return ReplaceInstUsesWith(I, Op1);
-  if (match(Op1, m_And(m_Value(A), m_Value(B))))
-    if (A == Op0 || B == Op0)    // A | (A & ?)  --> A
-      return ReplaceInstUsesWith(I, Op0);
-
   // (A | B) | C  and  A | (B | C)                  -> bswap if possible.
   // (A >> B) | (C << D)  and  (A << B) | (B >> C)  -> bswap if possible.
   if (match(Op0, m_Or(m_Value(), m_Value())) ||
@@ -4930,23 +5177,14 @@ Instruction *InstCombiner::visitOr(BinaryOperator &I) {
     if (Ret) return Ret;
   }
 
-  if (match(Op0, m_Not(m_Value(A)))) {   // ~A | Op1
-    if (A == Op1)   // ~A | A == -1
-      return ReplaceInstUsesWith(I, Constant::getAllOnesValue(I.getType()));
-  } else {
-    A = 0;
-  }
-  // Note, A is still live here!
-  if (match(Op1, m_Not(m_Value(B)))) {   // Op0 | ~B
-    if (Op0 == B)
-      return ReplaceInstUsesWith(I, Constant::getAllOnesValue(I.getType()));
-
-    // (~A | ~B) == (~(A & B)) - De Morgan's Law
-    if (A && isOnlyUse(Op0) && isOnlyUse(Op1)) {
-      Value *And = Builder->CreateAnd(A, B, I.getName()+".demorgan");
-      return BinaryOperator::CreateNot(And);
-    }
-  }
+  // (~A | ~B) == (~(A & B)) - De Morgan's Law
+  if (Value *Op0NotVal = dyn_castNotVal(Op0))
+    if (Value *Op1NotVal = dyn_castNotVal(Op1))
+      if (Op0->hasOneUse() && Op1->hasOneUse()) {
+        Value *And = Builder->CreateAnd(Op0NotVal, Op1NotVal,
+                                        I.getName()+".demorgan");
+        return BinaryOperator::CreateNot(And);
+      }
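+  // Dual of the fold in visitAnd (illustrative):
+  //   or (xor %a, -1), (xor %b, -1)  -->  xor (and %a, %b), -1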
 
   // (icmp1 A, B) | (icmp2 A, B) --> (icmp3 A, B)
   if (ICmpInst *RHS = dyn_cast<ICmpInst>(I.getOperand(1))) {
@@ -5034,12 +5272,13 @@ Instruction *InstCombiner::visitXor(BinaryOperator &I) {
 
   // Is this a ~ operation?
   if (Value *NotOp = dyn_castNotVal(&I)) {
-    // ~(~X & Y) --> (X | ~Y) - De Morgan's Law
-    // ~(~X | Y) === (X & ~Y) - De Morgan's Law
     if (BinaryOperator *Op0I = dyn_cast<BinaryOperator>(NotOp)) {
       if (Op0I->getOpcode() == Instruction::And || 
           Op0I->getOpcode() == Instruction::Or) {
-        if (dyn_castNotVal(Op0I->getOperand(1))) Op0I->swapOperands();
+        // ~(~X & Y) --> (X | ~Y) - De Morgan's Law
+        // ~(~X | Y) === (X & ~Y) - De Morgan's Law
+        if (dyn_castNotVal(Op0I->getOperand(1)))
+          Op0I->swapOperands();
         if (Value *Op0NotVal = dyn_castNotVal(Op0I->getOperand(0))) {
           Value *NotY =
             Builder->CreateNot(Op0I->getOperand(1),
@@ -5048,13 +5287,26 @@ Instruction *InstCombiner::visitXor(BinaryOperator &I) {
             return BinaryOperator::CreateOr(Op0NotVal, NotY);
           return BinaryOperator::CreateAnd(Op0NotVal, NotY);
         }
+        
+        // ~(X & Y) --> (~X | ~Y) - De Morgan's Law
+        // ~(X | Y) === (~X & ~Y) - De Morgan's Law
+        if (isFreeToInvert(Op0I->getOperand(0)) && 
+            isFreeToInvert(Op0I->getOperand(1))) {
+          Value *NotX =
+            Builder->CreateNot(Op0I->getOperand(0), "notlhs");
+          Value *NotY =
+            Builder->CreateNot(Op0I->getOperand(1), "notrhs");
+          if (Op0I->getOpcode() == Instruction::And)
+            return BinaryOperator::CreateOr(NotX, NotY);
+          return BinaryOperator::CreateAnd(NotX, NotY);
+        }
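+        // Sketch (assuming single-use compares count as free to invert):
+        //   ~((icmp eq %a, %b) & (icmp eq %c, %d))
+        // becomes (icmp ne %a, %b) | (icmp ne %c, %d) once the nots fold.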
       }
     }
   }
   
   
   if (ConstantInt *RHS = dyn_cast<ConstantInt>(Op1)) {
-    if (RHS == ConstantInt::getTrue(*Context) && Op0->hasOneUse()) {
+    if (RHS->isOne() && Op0->hasOneUse()) {
       // xor (cmp A, B), true = not (cmp A, B) = !cmp A, B
       if (ICmpInst *ICI = dyn_cast<ICmpInst>(Op0))
         return new ICmpInst(ICI->getInversePredicate(),
@@ -5348,166 +5600,6 @@ static bool SubWithOverflow(Constant *&Result, Constant *In1,
                         IsSigned);
 }
 
-/// EmitGEPOffset - Given a getelementptr instruction/constantexpr, emit the
-/// code necessary to compute the offset from the base pointer (without adding
-/// in the base pointer).  Return the result as a signed integer of intptr size.
-static Value *EmitGEPOffset(User *GEP, Instruction &I, InstCombiner &IC) {
-  TargetData &TD = *IC.getTargetData();
-  gep_type_iterator GTI = gep_type_begin(GEP);
-  const Type *IntPtrTy = TD.getIntPtrType(I.getContext());
-  Value *Result = Constant::getNullValue(IntPtrTy);
-
-  // Build a mask for high order bits.
-  unsigned IntPtrWidth = TD.getPointerSizeInBits();
-  uint64_t PtrSizeMask = ~0ULL >> (64-IntPtrWidth);
-
-  for (User::op_iterator i = GEP->op_begin() + 1, e = GEP->op_end(); i != e;
-       ++i, ++GTI) {
-    Value *Op = *i;
-    uint64_t Size = TD.getTypeAllocSize(GTI.getIndexedType()) & PtrSizeMask;
-    if (ConstantInt *OpC = dyn_cast<ConstantInt>(Op)) {
-      if (OpC->isZero()) continue;
-      
-      // Handle a struct index, which adds its field offset to the pointer.
-      if (const StructType *STy = dyn_cast<StructType>(*GTI)) {
-        Size = TD.getStructLayout(STy)->getElementOffset(OpC->getZExtValue());
-        
-        Result = IC.Builder->CreateAdd(Result,
-                                       ConstantInt::get(IntPtrTy, Size),
-                                       GEP->getName()+".offs");
-        continue;
-      }
-      
-      Constant *Scale = ConstantInt::get(IntPtrTy, Size);
-      Constant *OC =
-              ConstantExpr::getIntegerCast(OpC, IntPtrTy, true /*SExt*/);
-      Scale = ConstantExpr::getMul(OC, Scale);
-      // Emit an add instruction.
-      Result = IC.Builder->CreateAdd(Result, Scale, GEP->getName()+".offs");
-      continue;
-    }
-    // Convert to correct type.
-    if (Op->getType() != IntPtrTy)
-      Op = IC.Builder->CreateIntCast(Op, IntPtrTy, true, Op->getName()+".c");
-    if (Size != 1) {
-      Constant *Scale = ConstantInt::get(IntPtrTy, Size);
-      // We'll let instcombine(mul) convert this to a shl if possible.
-      Op = IC.Builder->CreateMul(Op, Scale, GEP->getName()+".idx");
-    }
-
-    // Emit an add instruction.
-    Result = IC.Builder->CreateAdd(Op, Result, GEP->getName()+".offs");
-  }
-  return Result;
-}
-
-
-/// EvaluateGEPOffsetExpression - Return a value that can be used to compare
-/// the *offset* implied by a GEP to zero.  For example, if we have &A[i], we
-/// want to return 'i' for "icmp ne i, 0".  Note that, in general, indices can
-/// be complex, and scales are involved.  The above expression would also be
-/// legal to codegen as "icmp ne (i*4), 0" (assuming A is a pointer to i32).
-/// This later form is less amenable to optimization though, and we are allowed
-/// to generate the first by knowing that pointer arithmetic doesn't overflow.
-///
-/// If we can't emit an optimized form for this expression, this returns null.
-/// 
-static Value *EvaluateGEPOffsetExpression(User *GEP, Instruction &I,
-                                          InstCombiner &IC) {
-  TargetData &TD = *IC.getTargetData();
-  gep_type_iterator GTI = gep_type_begin(GEP);
-
-  // Check to see if this gep only has a single variable index.  If so, and if
-  // any constant indices are a multiple of its scale, then we can compute this
-  // in terms of the scale of the variable index.  For example, if the GEP
-  // implies an offset of "12 + i*4", then we can codegen this as "3 + i",
-  // because the expression will cross zero at the same point.
-  unsigned i, e = GEP->getNumOperands();
-  int64_t Offset = 0;
-  for (i = 1; i != e; ++i, ++GTI) {
-    if (ConstantInt *CI = dyn_cast<ConstantInt>(GEP->getOperand(i))) {
-      // Compute the aggregate offset of constant indices.
-      if (CI->isZero()) continue;
-
-      // Handle a struct index, which adds its field offset to the pointer.
-      if (const StructType *STy = dyn_cast<StructType>(*GTI)) {
-        Offset += TD.getStructLayout(STy)->getElementOffset(CI->getZExtValue());
-      } else {
-        uint64_t Size = TD.getTypeAllocSize(GTI.getIndexedType());
-        Offset += Size*CI->getSExtValue();
-      }
-    } else {
-      // Found our variable index.
-      break;
-    }
-  }
-  
-  // If there are no variable indices, we must have a constant offset, just
-  // evaluate it the general way.
-  if (i == e) return 0;
-  
-  Value *VariableIdx = GEP->getOperand(i);
-  // Determine the scale factor of the variable element.  For example, this is
-  // 4 if the variable index is into an array of i32.
-  uint64_t VariableScale = TD.getTypeAllocSize(GTI.getIndexedType());
-  
-  // Verify that there are no other variable indices.  If so, emit the hard way.
-  for (++i, ++GTI; i != e; ++i, ++GTI) {
-    ConstantInt *CI = dyn_cast<ConstantInt>(GEP->getOperand(i));
-    if (!CI) return 0;
-   
-    // Compute the aggregate offset of constant indices.
-    if (CI->isZero()) continue;
-    
-    // Handle a struct index, which adds its field offset to the pointer.
-    if (const StructType *STy = dyn_cast<StructType>(*GTI)) {
-      Offset += TD.getStructLayout(STy)->getElementOffset(CI->getZExtValue());
-    } else {
-      uint64_t Size = TD.getTypeAllocSize(GTI.getIndexedType());
-      Offset += Size*CI->getSExtValue();
-    }
-  }
-  
-  // Okay, we know we have a single variable index, which must be a
-  // pointer/array/vector index.  If there is no offset, life is simple, return
-  // the index.
-  unsigned IntPtrWidth = TD.getPointerSizeInBits();
-  if (Offset == 0) {
-    // Cast to intptrty in case a truncation occurs.  If an extension is needed,
-    // we don't need to bother extending: the extension won't affect where the
-    // computation crosses zero.
-    if (VariableIdx->getType()->getPrimitiveSizeInBits() > IntPtrWidth)
-      VariableIdx = new TruncInst(VariableIdx, 
-                                  TD.getIntPtrType(VariableIdx->getContext()),
-                                  VariableIdx->getName(), &I);
-    return VariableIdx;
-  }
-  
-  // Otherwise, there is an index.  The computation we will do will be modulo
-  // the pointer size, so get it.
-  uint64_t PtrSizeMask = ~0ULL >> (64-IntPtrWidth);
-  
-  Offset &= PtrSizeMask;
-  VariableScale &= PtrSizeMask;
-
-  // To do this transformation, any constant index must be a multiple of the
-  // variable scale factor.  For example, we can evaluate "12 + 4*i" as "3 + i",
-  // but we can't evaluate "10 + 3*i" in terms of i.  Check that the offset is a
-  // multiple of the variable scale.
-  int64_t NewOffs = Offset / (int64_t)VariableScale;
-  if (Offset != NewOffs*(int64_t)VariableScale)
-    return 0;
-
-  // Okay, we can do this evaluation.  Start by converting the index to intptr.
-  const Type *IntPtrTy = TD.getIntPtrType(VariableIdx->getContext());
-  if (VariableIdx->getType() != IntPtrTy)
-    VariableIdx = CastInst::CreateIntegerCast(VariableIdx, IntPtrTy,
-                                              true /*SExt*/, 
-                                              VariableIdx->getName(), &I);
-  Constant *OffsetVal = ConstantInt::get(IntPtrTy, NewOffs);
-  return BinaryOperator::CreateAdd(VariableIdx, OffsetVal, "offset", &I);
-}
-
 
 /// FoldGEPICmp - Fold comparisons between a GEP instruction and something
 /// else.  At this point we know that the GEP is on the LHS of the comparison.
@@ -5528,7 +5620,7 @@ Instruction *InstCombiner::FoldGEPICmp(GEPOperator *GEPLHS, Value *RHS,
     
     // If not, synthesize the offset the hard way.
     if (Offset == 0)
-      Offset = EmitGEPOffset(GEPLHS, I, *this);
+      Offset = EmitGEPOffset(GEPLHS, *this);
     return new ICmpInst(ICmpInst::getSignedPredicate(Cond), Offset,
                         Constant::getNullValue(Offset->getType()));
   } else if (GEPOperator *GEPRHS = dyn_cast<GEPOperator>(RHS)) {
@@ -5614,8 +5706,8 @@ Instruction *InstCombiner::FoldGEPICmp(GEPOperator *GEPLHS, Value *RHS,
         (isa<ConstantExpr>(GEPLHS) || GEPLHS->hasOneUse()) &&
         (isa<ConstantExpr>(GEPRHS) || GEPRHS->hasOneUse())) {
       // ((gep Ptr, OFFSET1) cmp (gep Ptr, OFFSET2)  --->  (OFFSET1 cmp OFFSET2)
-      Value *L = EmitGEPOffset(GEPLHS, I, *this);
-      Value *R = EmitGEPOffset(GEPRHS, I, *this);
+      Value *L = EmitGEPOffset(GEPLHS, *this);
+      Value *R = EmitGEPOffset(GEPRHS, *this);
       return new ICmpInst(ICmpInst::getSignedPredicate(Cond), L, R);
     }
   }
@@ -5815,28 +5907,25 @@ Instruction *InstCombiner::FoldFCmp_IntToFP_Cst(FCmpInst &I,
 }
 
 Instruction *InstCombiner::visitFCmpInst(FCmpInst &I) {
-  bool Changed = SimplifyCompare(I);
-  Value *Op0 = I.getOperand(0), *Op1 = I.getOperand(1);
+  bool Changed = false;
+  
+  // Order the operands of the compare so that they are listed from most
+  // complex to least complex: binary operators before unary operators,
+  // with constants last.
+  if (getComplexity(I.getOperand(0)) < getComplexity(I.getOperand(1))) {
+    I.swapOperands();
+    Changed = true;
+  }
 
-  // Fold trivial predicates.
-  if (I.getPredicate() == FCmpInst::FCMP_FALSE)
-    return ReplaceInstUsesWith(I, ConstantInt::get(I.getType(), 0));
-  if (I.getPredicate() == FCmpInst::FCMP_TRUE)
-    return ReplaceInstUsesWith(I, ConstantInt::get(I.getType(), 1));
+  Value *Op0 = I.getOperand(0), *Op1 = I.getOperand(1);
   
+  if (Value *V = SimplifyFCmpInst(I.getPredicate(), Op0, Op1, TD))
+    return ReplaceInstUsesWith(I, V);
+
   // Simplify 'fcmp pred X, X'
   if (Op0 == Op1) {
     switch (I.getPredicate()) {
     default: llvm_unreachable("Unknown predicate!");
-    case FCmpInst::FCMP_UEQ:    // True if unordered or equal
-    case FCmpInst::FCMP_UGE:    // True if unordered, greater than, or equal
-    case FCmpInst::FCMP_ULE:    // True if unordered, less than, or equal
-      return ReplaceInstUsesWith(I, ConstantInt::get(I.getType(), 1));
-    case FCmpInst::FCMP_OGT:    // True if ordered and greater than
-    case FCmpInst::FCMP_OLT:    // True if ordered and less than
-    case FCmpInst::FCMP_ONE:    // True if ordered and operands are unequal
-      return ReplaceInstUsesWith(I, ConstantInt::get(I.getType(), 0));
-      
     case FCmpInst::FCMP_UNO:    // True if unordered: isnan(X) | isnan(Y)
     case FCmpInst::FCMP_ULT:    // True if unordered or less than
     case FCmpInst::FCMP_UGT:    // True if unordered or greater than
@@ -5857,23 +5946,8 @@ Instruction *InstCombiner::visitFCmpInst(FCmpInst &I) {
     }
   }
     
-  if (isa<UndefValue>(Op1))                  // fcmp pred X, undef -> undef
-    return ReplaceInstUsesWith(I, UndefValue::get(I.getType()));
-
   // Handle fcmp with constant RHS
   if (Constant *RHSC = dyn_cast<Constant>(Op1)) {
-    // If the constant is a nan, see if we can fold the comparison based on it.
-    if (ConstantFP *CFP = dyn_cast<ConstantFP>(RHSC)) {
-      if (CFP->getValueAPF().isNaN()) {
-        if (FCmpInst::isOrdered(I.getPredicate()))   // True if ordered and...
-          return ReplaceInstUsesWith(I, ConstantInt::getFalse(*Context));
-        assert(FCmpInst::isUnordered(I.getPredicate()) &&
-               "Comparison must be either ordered or unordered!");
-        // True if unordered.
-        return ReplaceInstUsesWith(I, ConstantInt::getTrue(*Context));
-      }
-    }
-    
     if (Instruction *LHSI = dyn_cast<Instruction>(Op0))
       switch (LHSI->getOpcode()) {
       case Instruction::PHI:
@@ -5920,26 +5994,22 @@ Instruction *InstCombiner::visitFCmpInst(FCmpInst &I) {
 }
 
 Instruction *InstCombiner::visitICmpInst(ICmpInst &I) {
-  bool Changed = SimplifyCompare(I);
+  bool Changed = false;
+  
+  // Order the operands of the compare so that they are listed from most
+  // complex to least complex: binary operators before unary operators,
+  // with constants last.
+  if (getComplexity(I.getOperand(0)) < getComplexity(I.getOperand(1))) {
+    I.swapOperands();
+    Changed = true;
+  }
+  
   Value *Op0 = I.getOperand(0), *Op1 = I.getOperand(1);
-  const Type *Ty = Op0->getType();
-
-  // icmp X, X
-  if (Op0 == Op1)
-    return ReplaceInstUsesWith(I, ConstantInt::get(I.getType(),
-                                                   I.isTrueWhenEqual()));
-
-  if (isa<UndefValue>(Op1))                  // X icmp undef -> undef
-    return ReplaceInstUsesWith(I, UndefValue::get(I.getType()));
   
-  // icmp <global/alloca*/null>, <global/alloca*/null> - Global/Stack value
-  // addresses never equal each other!  We already know that Op0 != Op1.
-  if ((isa<GlobalValue>(Op0) || isa<AllocaInst>(Op0) || isMalloc(Op0) ||
-       isa<ConstantPointerNull>(Op0)) &&
-      (isa<GlobalValue>(Op1) || isa<AllocaInst>(Op1) || isMalloc(Op1) ||
-       isa<ConstantPointerNull>(Op1)))
-    return ReplaceInstUsesWith(I, ConstantInt::get(Type::getInt1Ty(*Context), 
-                                                   !I.isTrueWhenEqual()));
+  if (Value *V = SimplifyICmpInst(I.getPredicate(), Op0, Op1, TD))
+    return ReplaceInstUsesWith(I, V);
+  
+  const Type *Ty = Op0->getType();
 
   // icmp's with boolean values can always be turned into bitwise operations
   if (Ty == Type::getInt1Ty(*Context)) {
@@ -6004,27 +6074,24 @@ Instruction *InstCombiner::visitICmpInst(ICmpInst &I) {
     
     // If we have an icmp le or icmp ge instruction, turn it into the
     // appropriate icmp lt or icmp gt instruction.  This allows us to rely on
-    // them being folded in the code below.
+    // them being folded in the code below.  The SimplifyICmpInst code has
+    // already handled the edge cases for us, so we just assert on them.
     switch (I.getPredicate()) {
     default: break;
     case ICmpInst::ICMP_ULE:
-      if (CI->isMaxValue(false))                 // A <=u MAX -> TRUE
-        return ReplaceInstUsesWith(I, ConstantInt::getTrue(*Context));
+      assert(!CI->isMaxValue(false));                 // A <=u MAX -> TRUE
       return new ICmpInst(ICmpInst::ICMP_ULT, Op0,
                           AddOne(CI));
     case ICmpInst::ICMP_SLE:
-      if (CI->isMaxValue(true))                  // A <=s MAX -> TRUE
-        return ReplaceInstUsesWith(I, ConstantInt::getTrue(*Context));
+      assert(!CI->isMaxValue(true));                  // A <=s MAX -> TRUE
       return new ICmpInst(ICmpInst::ICMP_SLT, Op0,
                           AddOne(CI));
     case ICmpInst::ICMP_UGE:
-      if (CI->isMinValue(false))                 // A >=u MIN -> TRUE
-        return ReplaceInstUsesWith(I, ConstantInt::getTrue(*Context));
+      assert(!CI->isMinValue(false));                  // A >=u MIN -> TRUE
       return new ICmpInst(ICmpInst::ICMP_UGT, Op0,
                           SubOne(CI));
     case ICmpInst::ICMP_SGE:
-      if (CI->isMinValue(true))                  // A >=s MIN -> TRUE
-        return ReplaceInstUsesWith(I, ConstantInt::getTrue(*Context));
+      assert(!CI->isMinValue(true));                   // A >=s MIN -> TRUE
       return new ICmpInst(ICmpInst::ICMP_SGT, Op0,
                           SubOne(CI));
     }
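+    // For example (illustrative): "icmp ule i32 %x, 5" is rewritten to
+    // "icmp ult i32 %x, 6".  SimplifyICmpInst has already folded the
+    // A <=u MAX style edge cases, so AddOne/SubOne cannot wrap here.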
@@ -6056,7 +6123,7 @@ Instruction *InstCombiner::visitICmpInst(ICmpInst &I) {
     // EQ and NE we use unsigned values.
     APInt Op0Min(BitWidth, 0), Op0Max(BitWidth, 0);
     APInt Op1Min(BitWidth, 0), Op1Max(BitWidth, 0);
-    if (ICmpInst::isSignedPredicate(I.getPredicate())) {
+    if (I.isSigned()) {
       ComputeSignedMinMaxValuesFromKnownBits(Op0KnownZero, Op0KnownOne,
                                              Op0Min, Op0Max);
       ComputeSignedMinMaxValuesFromKnownBits(Op1KnownZero, Op1KnownOne,
@@ -6186,7 +6253,7 @@ Instruction *InstCombiner::visitICmpInst(ICmpInst &I) {
 
     // Turn a signed comparison into an unsigned one if both operands
     // are known to have the same sign.
-    if (I.isSignedPredicate() &&
+    if (I.isSigned() &&
         ((Op0KnownZero.isNegative() && Op1KnownZero.isNegative()) ||
          (Op0KnownOne.isNegative() && Op1KnownOne.isNegative())))
       return new ICmpInst(I.getUnsignedPredicate(), Op0, Op1);
@@ -6269,25 +6336,33 @@ Instruction *InstCombiner::visitICmpInst(ICmpInst &I) {
           return SelectInst::Create(LHSI->getOperand(0), Op1, Op2);
         break;
       }
-      case Instruction::Malloc:
-        // If we have (malloc != null), and if the malloc has a single use, we
-        // can assume it is successful and remove the malloc.
-        if (LHSI->hasOneUse() && isa<ConstantPointerNull>(RHSC)) {
-          Worklist.Add(LHSI);
-          return ReplaceInstUsesWith(I,
-                                     ConstantInt::get(Type::getInt1Ty(*Context),
-                                                      !I.isTrueWhenEqual()));
-        }
-        break;
       case Instruction::Call:
         // If we have (malloc != null), and if the malloc has a single use, we
         // can assume it is successful and remove the malloc.
         if (isMalloc(LHSI) && LHSI->hasOneUse() &&
             isa<ConstantPointerNull>(RHSC)) {
-          Worklist.Add(LHSI);
-          return ReplaceInstUsesWith(I,
+          // We need to erase the malloc call explicitly here, rather than
+          // adding it to the Worklist, because it would never be DCE'd from
+          // the Worklist: isInstructionTriviallyDead() returns false for
+          // function calls.  Replacing LHSI/MallocCall with Undef is OK
+          // because the instruction that uses it will be erased via the
+          // Worklist.
+          if (extractMallocCall(LHSI)) {
+            LHSI->replaceAllUsesWith(UndefValue::get(LHSI->getType()));
+            EraseInstFromFunction(*LHSI);
+            return ReplaceInstUsesWith(I,
+                                     ConstantInt::get(Type::getInt1Ty(*Context),
+                                                      !I.isTrueWhenEqual()));
+          }
+          if (CallInst* MallocCall = extractMallocCallFromBitCast(LHSI))
+            if (MallocCall->hasOneUse()) {
+              MallocCall->replaceAllUsesWith(
+                                        UndefValue::get(MallocCall->getType()));
+              EraseInstFromFunction(*MallocCall);
+              Worklist.Add(LHSI); // The malloc's bitcast use.
+              return ReplaceInstUsesWith(I,
                                      ConstantInt::get(Type::getInt1Ty(*Context),
                                                       !I.isTrueWhenEqual()));
+            }
         }
         break;
       }
@@ -6358,7 +6433,7 @@ Instruction *InstCombiner::visitICmpInst(ICmpInst &I) {
           // icmp u/s (a ^ signbit), (b ^ signbit) --> icmp s/u a, b
           if (ConstantInt *CI = dyn_cast<ConstantInt>(Op0I->getOperand(1))) {
             if (CI->getValue().isSignBit()) {
-              ICmpInst::Predicate Pred = I.isSignedPredicate()
+              ICmpInst::Predicate Pred = I.isSigned()
                                              ? I.getUnsignedPredicate()
                                              : I.getSignedPredicate();
               return new ICmpInst(Pred, Op0I->getOperand(0),
@@ -6366,7 +6441,7 @@ Instruction *InstCombiner::visitICmpInst(ICmpInst &I) {
             }
             
             if (CI->getValue().isMaxSignedValue()) {
-              ICmpInst::Predicate Pred = I.isSignedPredicate()
+              ICmpInst::Predicate Pred = I.isSigned()
                                              ? I.getUnsignedPredicate()
                                              : I.getSignedPredicate();
               Pred = I.getSwappedPredicate(Pred);
@@ -6503,7 +6578,7 @@ Instruction *InstCombiner::FoldICmpDivCst(ICmpInst &ICI, BinaryOperator *DivI,
   // work. :(  The if statement below tests that condition and bails 
   // if it finds it. 
   bool DivIsSigned = DivI->getOpcode() == Instruction::SDiv;
-  if (!ICI.isEquality() && DivIsSigned != ICI.isSignedPredicate())
+  if (!ICI.isEquality() && DivIsSigned != ICI.isSigned())
     return 0;
   if (DivRHS->isZero())
     return 0; // The ProdOV computation fails on divide by zero.
@@ -6702,7 +6777,7 @@ Instruction *InstCombiner::visitICmpInstWithInstAndIntCst(ICmpInst &ICI,
         // (icmp u/s (xor A SignBit), C) -> (icmp s/u A, (xor C SignBit))
         if (!ICI.isEquality() && XorCST->getValue().isSignBit()) {
           const APInt &SignBit = XorCST->getValue();
-          ICmpInst::Predicate Pred = ICI.isSignedPredicate()
+          ICmpInst::Predicate Pred = ICI.isSigned()
                                          ? ICI.getUnsignedPredicate()
                                          : ICI.getSignedPredicate();
           return new ICmpInst(Pred, LHSI->getOperand(0),
@@ -6712,7 +6787,7 @@ Instruction *InstCombiner::visitICmpInstWithInstAndIntCst(ICmpInst &ICI,
         // (icmp u/s (xor A ~SignBit), C) -> (icmp s/u (xor C ~SignBit), A)
         if (!ICI.isEquality() && XorCST->getValue().isMaxSignedValue()) {
           const APInt &NotSignBit = XorCST->getValue();
-          ICmpInst::Predicate Pred = ICI.isSignedPredicate()
+          ICmpInst::Predicate Pred = ICI.isSigned()
                                          ? ICI.getUnsignedPredicate()
                                          : ICI.getSignedPredicate();
           Pred = ICI.getSwappedPredicate(Pred);
@@ -6970,7 +7045,7 @@ Instruction *InstCombiner::visitICmpInstWithInstAndIntCst(ICmpInst &ICI,
       ConstantRange CR = ICI.makeConstantRange(ICI.getPredicate(), RHSV)
                             .subtract(LHSV);
 
-      if (ICI.isSignedPredicate()) {
+      if (ICI.isSigned()) {
         if (CR.getLower().isSignBit()) {
           return new ICmpInst(ICmpInst::ICMP_SLT, LHSI->getOperand(0),
                               ConstantInt::get(*Context, CR.getUpper()));
@@ -7145,7 +7220,7 @@ Instruction *InstCombiner::visitICmpInstWithCastAndCast(ICmpInst &ICI) {
     return 0;
 
   bool isSignedExt = LHSCI->getOpcode() == Instruction::SExt;
-  bool isSignedCmp = ICI.isSignedPredicate();
+  bool isSignedCmp = ICI.isSigned();
 
   if (CastInst *CI = dyn_cast<CastInst>(ICI.getOperand(1))) {
     // Not an extension from the same type?
@@ -7706,7 +7781,7 @@ static Value *DecomposeSimpleLinearExpr(Value *Val, unsigned &Scale,
 /// PromoteCastOfAllocation - If we find a cast of an allocation instruction,
 /// try to eliminate the cast by moving the type information into the alloc.
 Instruction *InstCombiner::PromoteCastOfAllocation(BitCastInst &CI,
-                                                   AllocationInst &AI) {
+                                                   AllocaInst &AI) {
   const PointerType *PTy = cast<PointerType>(CI.getType());
   
   BuilderTy AllocaBuilder(*Builder);
@@ -7778,11 +7853,7 @@ Instruction *InstCombiner::PromoteCastOfAllocation(BitCastInst &CI,
     Amt = AllocaBuilder.CreateAdd(Amt, Off, "tmp");
   }
   
-  AllocationInst *New;
-  if (isa<MallocInst>(AI))
-    New = AllocaBuilder.CreateMalloc(CastElTy, Amt);
-  else
-    New = AllocaBuilder.CreateAlloca(CastElTy, Amt);
+  AllocaInst *New = AllocaBuilder.CreateAlloca(CastElTy, Amt);
   New->setAlignment(AI.getAlignment());
   New->takeName(&AI);
   
@@ -7952,8 +8023,7 @@ bool InstCombiner::CanEvaluateInDifferentType(Value *V, const Type *Ty,
 Value *InstCombiner::EvaluateInDifferentType(Value *V, const Type *Ty, 
                                              bool isSigned) {
   if (Constant *C = dyn_cast<Constant>(V))
-    return ConstantExpr::getIntegerCast(C, Ty,
-                                               isSigned /*Sext or ZExt*/);
+    return ConstantExpr::getIntegerCast(C, Ty, isSigned /*Sext or ZExt*/);
 
   // Otherwise, it must be an instruction.
   Instruction *I = cast<Instruction>(V);
@@ -7986,8 +8056,7 @@ Value *InstCombiner::EvaluateInDifferentType(Value *V, const Type *Ty,
       return I->getOperand(0);
     
     // Otherwise, must be the same type of cast, so just reinsert a new one.
-    Res = CastInst::Create(cast<CastInst>(I)->getOpcode(), I->getOperand(0),
-                           Ty);
+    Res = CastInst::Create(cast<CastInst>(I)->getOpcode(), I->getOperand(0),Ty);
     break;
   case Instruction::Select: {
     Value *True = EvaluateInDifferentType(I->getOperand(1), Ty, isSigned);
@@ -8036,9 +8105,15 @@ Instruction *InstCombiner::commonCastTransforms(CastInst &CI) {
       return NV;
 
   // If we are casting a PHI then fold the cast into the PHI
-  if (isa<PHINode>(Src))
-    if (Instruction *NV = FoldOpIntoPhi(CI))
-      return NV;
+  if (isa<PHINode>(Src)) {
+    // We don't do this if it would create a PHI node with an illegal type
+    // when the current type is legal.
+    if (!isa<IntegerType>(Src->getType()) ||
+        !isa<IntegerType>(CI.getType()) ||
+        ShouldChangeType(CI.getType(), Src->getType(), TD))
+      if (Instruction *NV = FoldOpIntoPhi(CI))
+        return NV;
+  }
   
   return 0;
 }
@@ -8128,8 +8203,7 @@ Instruction *InstCombiner::commonPointerCastTransforms(CastInst &CI) {
     if (TD && GEP->hasOneUse() && isa<BitCastInst>(GEP->getOperand(0))) {
       if (GEP->hasAllConstantIndices()) {
         // We are guaranteed to get a constant from EmitGEPOffset.
-        ConstantInt *OffsetV =
-                      cast<ConstantInt>(EmitGEPOffset(GEP, CI, *this));
+        ConstantInt *OffsetV = cast<ConstantInt>(EmitGEPOffset(GEP, *this));
         int64_t Offset = OffsetV->getSExtValue();
         
         // Get the base pointer input of the bitcast, and the type it points to.
@@ -8159,23 +8233,6 @@ Instruction *InstCombiner::commonPointerCastTransforms(CastInst &CI) {
   return commonCastTransforms(CI);
 }
 
-/// isSafeIntegerType - Return true if this is a basic integer type, not a crazy
-/// type like i42.  We don't want to introduce operations on random non-legal
-/// integer types where they don't already exist in the code.  In the future,
-/// we should consider making this based off target-data, so that 32-bit targets
-/// won't get i64 operations etc.
-static bool isSafeIntegerType(const Type *Ty) {
-  switch (Ty->getPrimitiveSizeInBits()) {
-  case 8:
-  case 16:
-  case 32:
-  case 64:
-    return true;
-  default: 
-    return false;
-  }
-}
-
 /// commonIntCastTransforms - This function implements the common transforms
 /// for trunc, zext, and sext.
 Instruction *InstCombiner::commonIntCastTransforms(CastInst &CI) {
@@ -8204,8 +8261,8 @@ Instruction *InstCombiner::commonIntCastTransforms(CastInst &CI) {
   // Only do this if the dest type is a simple type, don't convert the
   // expression tree to something weird like i93 unless the source is also
   // strange.
-  if ((isSafeIntegerType(DestTy->getScalarType()) ||
-       !isSafeIntegerType(SrcI->getType()->getScalarType())) &&
+  if ((isa<VectorType>(DestTy) ||
+       ShouldChangeType(SrcI->getType(), DestTy, TD)) &&
       CanEvaluateInDifferentType(SrcI, DestTy,
                                  CI.getOpcode(), NumCastsRemoved)) {
     // If this cast is a truncate, evaluating in a different type always
@@ -8226,6 +8283,7 @@ Instruction *InstCombiner::commonIntCastTransforms(CastInst &CI) {
       break;
     case Instruction::ZExt: {
       DoXForm = NumCastsRemoved >= 1;
+      
       if (!DoXForm && 0) {
         // If it's unnecessary to issue an AND to clear the high bits, it's
         // always profitable to do this xform.
@@ -8392,7 +8450,7 @@ Instruction *InstCombiner::visitTrunc(TruncInst &CI) {
       return BinaryOperator::CreateLShr(V1, V2);
     }
   }
-  
+ 
   return 0;
 }
 
@@ -8481,6 +8539,36 @@ Instruction *InstCombiner::transformZExtICmp(ICmpInst *ICI, Instruction &CI,
     }
   }
 
+  // icmp ne A, B is equivalent to xor A, B when only the low bit of A and B
+  // can be set.
+  // It is also profitable to transform icmp eq into not(xor(A, B)) because that
+  // may lead to additional simplifications.
+  if (ICI->isEquality() && CI.getType() == ICI->getOperand(0)->getType()) {
+    if (const IntegerType *ITy = dyn_cast<IntegerType>(CI.getType())) {
+      uint32_t BitWidth = ITy->getBitWidth();
+      if (BitWidth > 1) {
+        Value *LHS = ICI->getOperand(0);
+        Value *RHS = ICI->getOperand(1);
+
+        APInt KnownZeroLHS(BitWidth, 0), KnownOneLHS(BitWidth, 0);
+        APInt KnownZeroRHS(BitWidth, 0), KnownOneRHS(BitWidth, 0);
+        APInt TypeMask(APInt::getHighBitsSet(BitWidth, BitWidth-1));
+        ComputeMaskedBits(LHS, TypeMask, KnownZeroLHS, KnownOneLHS);
+        ComputeMaskedBits(RHS, TypeMask, KnownZeroRHS, KnownOneRHS);
+
+        if (KnownZeroLHS.countLeadingOnes() == BitWidth-1 &&
+            KnownZeroRHS.countLeadingOnes() == BitWidth-1) {
+          if (!DoXform) return ICI;
+
+          Value *Xor = Builder->CreateXor(LHS, RHS);
+          if (ICI->getPredicate() == ICmpInst::ICMP_EQ)
+            Xor = Builder->CreateXor(Xor, ConstantInt::get(ITy, 1));
+          Xor->takeName(ICI);
+          return ReplaceInstUsesWith(CI, Xor);
+        }
+      }
+    }
+  }
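+  // Illustrative: when only the low bit of %a and %b (both i8) can be set,
+  //   zext (icmp ne i8 %a, %b) to i8  -->  xor i8 %a, %b
+  //   zext (icmp eq i8 %a, %b) to i8  -->  xor (xor i8 %a, %b), 1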
+
   return 0;
 }
 
@@ -8843,7 +8931,7 @@ Instruction *InstCombiner::visitBitCast(BitCastInst &CI) {
     // size, rewrite the allocation instruction to allocate the "right" type.
     // There is no need to modify malloc calls because it is their bitcast that
     // needs to be cleaned up.
-    if (AllocationInst *AI = dyn_cast<AllocationInst>(Src))
+    if (AllocaInst *AI = dyn_cast<AllocaInst>(Src))
       if (Instruction *V = PromoteCastOfAllocation(CI, *AI))
         return V;
     
@@ -9239,14 +9327,44 @@ Instruction *InstCombiner::visitSelectInstWithICmp(SelectInst &SI,
   return Changed ? &SI : 0;
 }
 
-/// isDefinedInBB - Return true if the value is an instruction defined in the
-/// specified basicblock.
-static bool isDefinedInBB(const Value *V, const BasicBlock *BB) {
+
+/// CanSelectOperandBeMappingIntoPredBlock - SI is a select whose condition is a
+/// PHI node (but the two may be in different blocks).  See if the true/false
+/// values (V) are live in all of the predecessor blocks of the PHI.  For
+/// example, cases like this cannot be mapped:
+///
+///   X = phi [ C1, BB1], [C2, BB2]
+///   Y = add
+///   Z = select X, Y, 0
+///
+/// because Y is not live in BB1/BB2.
+///
+static bool CanSelectOperandBeMappingIntoPredBlock(const Value *V,
+                                                   const SelectInst &SI) {
+  // If the value is a non-instruction value like a constant or argument, it
+  // can always be mapped.
   const Instruction *I = dyn_cast<Instruction>(V);
-  return I != 0 && I->getParent() == BB;
+  if (I == 0) return true;
+  
+  // If V is a PHI node defined in the same block as the condition PHI, we can
+  // map the arguments.
+  const PHINode *CondPHI = cast<PHINode>(SI.getCondition());
+  
+  if (const PHINode *VP = dyn_cast<PHINode>(I))
+    if (VP->getParent() == CondPHI->getParent())
+      return true;
+  
+  // Otherwise, if the PHI and select are defined in the same block and if V is
+  // defined in a different block, then we can transform it.
+  if (SI.getParent() == CondPHI->getParent() &&
+      I->getParent() != CondPHI->getParent())
+    return true;
+  
+  // Otherwise we have a 'hard' case and we can't tell without doing more
+  // detailed dominator based analysis, punt.
+  return false;
 }
 
-
 Instruction *InstCombiner::visitSelectInst(SelectInst &SI) {
   Value *CondVal = SI.getCondition();
   Value *TrueVal = SI.getTrueValue();
@@ -9458,16 +9576,13 @@ Instruction *InstCombiner::visitSelectInst(SelectInst &SI) {
       return FoldI;
   }
 
-  // See if we can fold the select into a phi node.  The true/false values have
-  // to be live in the predecessor blocks.  If they are instructions in SI's
-  // block, we can't map to the predecessor.
-  if (isa<PHINode>(SI.getCondition()) &&
-      (!isDefinedInBB(SI.getTrueValue(), SI.getParent()) ||
-       isa<PHINode>(SI.getTrueValue())) &&
-      (!isDefinedInBB(SI.getFalseValue(), SI.getParent()) ||
-       isa<PHINode>(SI.getFalseValue())))
-    if (Instruction *NV = FoldOpIntoPhi(SI))
-      return NV;
+  // See if we can fold the select into a phi node if the condition is a phi.
+  if (isa<PHINode>(SI.getCondition())) 
+    // The true/false values have to be live in the PHI predecessor's blocks.
+    if (CanSelectOperandBeMappingIntoPredBlock(TrueVal, SI) &&
+        CanSelectOperandBeMappingIntoPredBlock(FalseVal, SI))
+      if (Instruction *NV = FoldOpIntoPhi(SI))
+        return NV;
 
   if (BinaryOperator::isNot(CondVal)) {
     SI.setOperand(0, BinaryOperator::getNotArgument(CondVal));
@@ -9685,6 +9800,9 @@ Instruction *InstCombiner::SimplifyMemSet(MemSetInst *MI) {
 /// the heavy lifting.
 ///
 Instruction *InstCombiner::visitCallInst(CallInst &CI) {
+  if (isFreeCall(&CI))
+    return visitFree(CI);
+
   // If the caller function is nounwind, mark the call as nounwind, even if the
   // callee isn't.
   if (CI.getParent()->getParent()->doesNotThrow() &&
@@ -9754,6 +9872,123 @@ Instruction *InstCombiner::visitCallInst(CallInst &CI) {
       if (Operand->getIntrinsicID() == Intrinsic::bswap)
         return ReplaceInstUsesWith(CI, Operand->getOperand(1));
     break;
+  case Intrinsic::uadd_with_overflow: {
+    Value *LHS = II->getOperand(1), *RHS = II->getOperand(2);
+    const IntegerType *IT = cast<IntegerType>(II->getOperand(1)->getType());
+    uint32_t BitWidth = IT->getBitWidth();
+    APInt Mask = APInt::getSignBit(BitWidth);
+    APInt LHSKnownZero(BitWidth, 0);
+    APInt LHSKnownOne(BitWidth, 0);
+    ComputeMaskedBits(LHS, Mask, LHSKnownZero, LHSKnownOne);
+    bool LHSKnownNegative = LHSKnownOne[BitWidth - 1];
+    bool LHSKnownPositive = LHSKnownZero[BitWidth - 1];
+
+    if (LHSKnownNegative || LHSKnownPositive) {
+      APInt RHSKnownZero(BitWidth, 0);
+      APInt RHSKnownOne(BitWidth, 0);
+      ComputeMaskedBits(RHS, Mask, RHSKnownZero, RHSKnownOne);
+      bool RHSKnownNegative = RHSKnownOne[BitWidth - 1];
+      bool RHSKnownPositive = RHSKnownZero[BitWidth - 1];
+      if (LHSKnownNegative && RHSKnownNegative) {
+        // The sign bit is set in both cases: this MUST overflow.
+        // Create a simple add instruction, and insert it into the struct.
+        Instruction *Add = BinaryOperator::CreateAdd(LHS, RHS, "", &CI);
+        Worklist.Add(Add);
+        Constant *V[2];
+        V[0] = UndefValue::get(LHS->getType());
+        V[1] = ConstantInt::getTrue(*Context);
+        Constant *Struct = ConstantStruct::get(*Context, V, 2, false);
+        return InsertValueInst::Create(Struct, Add, 0);
+      }
+      
+      if (LHSKnownPositive && RHSKnownPositive) {
+        // The sign bit is clear in both cases: this CANNOT overflow.
+        // Create a simple add instruction, and insert it into the struct.
+        Instruction *Add = BinaryOperator::CreateNUWAdd(LHS, RHS, "", &CI);
+        Worklist.Add(Add);
+        Constant *V[2];
+        V[0] = UndefValue::get(LHS->getType());
+        V[1] = ConstantInt::getFalse(*Context);
+        Constant *Struct = ConstantStruct::get(*Context, V, 2, false);
+        return InsertValueInst::Create(Struct, Add, 0);
+      }
+    }
+  }
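+  // Illustrative: if both i32 operands have a known-set sign bit (each is
+  // >= 2^31), the unsigned add must wrap, so the overflow flag folds to
+  // true and only a plain add remains.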
+  // FALL THROUGH: uadd shares the remaining handling with sadd.
+  case Intrinsic::sadd_with_overflow:
+    // Canonicalize constants into the RHS.
+    if (isa<Constant>(II->getOperand(1)) &&
+        !isa<Constant>(II->getOperand(2))) {
+      Value *LHS = II->getOperand(1);
+      II->setOperand(1, II->getOperand(2));
+      II->setOperand(2, LHS);
+      return II;
+    }
+
+    // X + undef -> undef
+    if (isa<UndefValue>(II->getOperand(2)))
+      return ReplaceInstUsesWith(CI, UndefValue::get(II->getType()));
+      
+    if (ConstantInt *RHS = dyn_cast<ConstantInt>(II->getOperand(2))) {
+      // X + 0 -> {X, false}
+      if (RHS->isZero()) {
+        Constant *V[] = {
+          UndefValue::get(II->getType()), ConstantInt::getFalse(*Context)
+        };
+        Constant *Struct = ConstantStruct::get(*Context, V, 2, false);
+        return InsertValueInst::Create(Struct, II->getOperand(1), 0);
+      }
+    }
+    break;
+  case Intrinsic::usub_with_overflow:
+  case Intrinsic::ssub_with_overflow:
+    // undef - X -> undef
+    // X - undef -> undef
+    if (isa<UndefValue>(II->getOperand(1)) ||
+        isa<UndefValue>(II->getOperand(2)))
+      return ReplaceInstUsesWith(CI, UndefValue::get(II->getType()));
+      
+    if (ConstantInt *RHS = dyn_cast<ConstantInt>(II->getOperand(2))) {
+      // X - 0 -> {X, false}
+      if (RHS->isZero()) {
+        Constant *V[] = {
+          UndefValue::get(II->getType()), ConstantInt::getFalse(*Context)
+        };
+        Constant *Struct = ConstantStruct::get(*Context, V, 2, false);
+        return InsertValueInst::Create(Struct, II->getOperand(1), 0);
+      }
+    }
+    break;
+  case Intrinsic::umul_with_overflow:
+  case Intrinsic::smul_with_overflow:
+    // Canonicalize constants into the RHS.
+    if (isa<Constant>(II->getOperand(1)) &&
+        !isa<Constant>(II->getOperand(2))) {
+      Value *LHS = II->getOperand(1);
+      II->setOperand(1, II->getOperand(2));
+      II->setOperand(2, LHS);
+      return II;
+    }
+
+    // X * undef -> undef
+    if (isa<UndefValue>(II->getOperand(2)))
+      return ReplaceInstUsesWith(CI, UndefValue::get(II->getType()));
+      
+    if (ConstantInt *RHSI = dyn_cast<ConstantInt>(II->getOperand(2))) {
+      // X*0 -> {0, false}
+      if (RHSI->isZero())
+        return ReplaceInstUsesWith(CI, Constant::getNullValue(II->getType()));
+      
+      // X * 1 -> {X, false}
+      if (RHSI->equalsInt(1)) {
+        Constant *V[2];
+        V[0] = UndefValue::get(II->getType());
+        V[1] = ConstantInt::getFalse(*Context);
+        Constant *Struct = ConstantStruct::get(*Context, V, 2, false);
+        return InsertValueInst::Create(Struct, II->getOperand(1), 0);
+      }
+    }
+    break;
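+    // E.g. umul.with.overflow(%x, 1) becomes {%x, false}: %x is inserted
+    // into the {undef, false} struct at index 0, the result field.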
   case Intrinsic::ppc_altivec_lvx:
   case Intrinsic::ppc_altivec_lvxl:
   case Intrinsic::x86_sse_loadu_ps:
@@ -9947,9 +10182,11 @@ Instruction *InstCombiner::visitCallSite(CallSite CS) {
       // If the call and callee calling conventions don't match, this call must
       // be unreachable, as the call is undefined.
       new StoreInst(ConstantInt::getTrue(*Context),
-                UndefValue::get(PointerType::getUnqual(Type::getInt1Ty(*Context))), 
+                UndefValue::get(Type::getInt1PtrTy(*Context)), 
                                   OldCall);
-      if (!OldCall->use_empty())
+      // If OldCall does not return void, replaceAllUsesWith undef.
+      // This allows ValueHandles and custom metadata to adjust themselves.
+      if (!OldCall->getType()->isVoidTy())
         OldCall->replaceAllUsesWith(UndefValue::get(OldCall->getType()));
       if (isa<CallInst>(OldCall))   // Not worth removing an invoke here.
         return EraseInstFromFunction(*OldCall);
@@ -9961,10 +10198,12 @@ Instruction *InstCombiner::visitCallSite(CallSite CS) {
     // undef so that we know that this code is not reachable, despite the fact
     // that we can't modify the CFG here.
     new StoreInst(ConstantInt::getTrue(*Context),
-               UndefValue::get(PointerType::getUnqual(Type::getInt1Ty(*Context))),
+               UndefValue::get(Type::getInt1PtrTy(*Context)),
                   CS.getInstruction());
 
-    if (!CS.getInstruction()->use_empty())
+    // If CS does not return void, replaceAllUsesWith undef.
+    // This allows ValueHandles and custom metadata to adjust themselves.
+    if (!CS.getInstruction()->getType()->isVoidTy())
       CS.getInstruction()->
         replaceAllUsesWith(UndefValue::get(CS.getInstruction()->getType()));
 
@@ -10043,7 +10282,7 @@ bool InstCombiner::transformConstExprCastCall(CallSite CS) {
 
     if (!Caller->use_empty() &&
         // void -> non-void is handled specially
-        NewRetTy != Type::getVoidTy(*Context) && !CastInst::isCastable(NewRetTy, OldRetTy))
+        !NewRetTy->isVoidTy() && !CastInst::isCastable(NewRetTy, OldRetTy))
       return false;   // Cannot transform this return value.
 
     if (!CallerPAL.isEmpty() && !Caller->use_empty()) {
@@ -10175,7 +10414,7 @@ bool InstCombiner::transformConstExprCastCall(CallSite CS) {
   if (Attributes FnAttrs =  CallerPAL.getFnAttributes())
     attrVec.push_back(AttributeWithIndex::get(~0, FnAttrs));
 
-  if (NewRetTy == Type::getVoidTy(*Context))
+  if (NewRetTy->isVoidTy())
     Caller->setName("");   // Void type should not have a name.
 
   const AttrListPtr &NewCallerPAL = AttrListPtr::get(attrVec.begin(),
@@ -10201,7 +10440,7 @@ bool InstCombiner::transformConstExprCastCall(CallSite CS) {
   // Insert a cast of the return type as necessary.
   Value *NV = NC;
   if (OldRetTy != NV->getType() && !Caller->use_empty()) {
-    if (NV->getType() != Type::getVoidTy(*Context)) {
+    if (!NV->getType()->isVoidTy()) {
       Instruction::CastOps opcode = CastInst::getCastOpcode(NC, false, 
                                                             OldRetTy, false);
       NV = NC = CastInst::Create(opcode, NC, OldRetTy, "tmp");
@@ -10221,7 +10460,7 @@ bool InstCombiner::transformConstExprCastCall(CallSite CS) {
     }
   }
 
-  
+
   if (!Caller->use_empty())
     Caller->replaceAllUsesWith(NV);
   
@@ -10368,7 +10607,7 @@ Instruction *InstCombiner::transformCallThroughTrampoline(CallSite CS) {
           setCallingConv(cast<CallInst>(Caller)->getCallingConv());
         cast<CallInst>(NewCaller)->setAttributes(NewPAL);
       }
-      if (Caller->getType() != Type::getVoidTy(*Context) && !Caller->use_empty())
+      if (!Caller->getType()->isVoidTy())
         Caller->replaceAllUsesWith(NewCaller);
       Caller->eraseFromParent();
       Worklist.Remove(Caller);
@@ -10625,73 +10864,143 @@ static bool isSafeAndProfitableToSinkLoad(LoadInst *L) {
   return true;
 }
 
+Instruction *InstCombiner::FoldPHIArgLoadIntoPHI(PHINode &PN) {
+  LoadInst *FirstLI = cast<LoadInst>(PN.getIncomingValue(0));
+  
+  // When processing loads, we need to propagate two bits of information to the
+  // sunk load: whether it is volatile, and what its alignment is.  We currently
+  // don't sink loads when some have their alignment specified and some don't.
+  // visitLoadInst will propagate an alignment onto the load when TD is around,
+  // and if TD isn't around, we can't handle the mixed case.
+  bool isVolatile = FirstLI->isVolatile();
+  unsigned LoadAlignment = FirstLI->getAlignment();
+  
+  // We can't sink the load if the loaded value could be modified between the
+  // load and the PHI.
+  if (FirstLI->getParent() != PN.getIncomingBlock(0) ||
+      !isSafeAndProfitableToSinkLoad(FirstLI))
+    return 0;
+  
+  // If the PHI is of volatile loads and the load block has multiple
+  // successors, sinking it would remove a load of the volatile value from
+  // the path through the other successor.
+  if (isVolatile && 
+      FirstLI->getParent()->getTerminator()->getNumSuccessors() != 1)
+    return 0;
+  
+  // Check to see if all arguments are the same operation.
+  for (unsigned i = 1, e = PN.getNumIncomingValues(); i != e; ++i) {
+    LoadInst *LI = dyn_cast<LoadInst>(PN.getIncomingValue(i));
+    if (!LI || !LI->hasOneUse())
+      return 0;
+    
+    // We can't sink the load if the loaded value could be modified between 
+    // the load and the PHI.
+    if (LI->isVolatile() != isVolatile ||
+        LI->getParent() != PN.getIncomingBlock(i) ||
+        !isSafeAndProfitableToSinkLoad(LI))
+      return 0;
+      
+    // If some of the loads have an alignment specified but not all of them,
+    // we can't do the transformation.
+    if ((LoadAlignment != 0) != (LI->getAlignment() != 0))
+      return 0;
+    
+    LoadAlignment = std::min(LoadAlignment, LI->getAlignment());
+    
+    // If the PHI is of volatile loads and the load block has multiple
+    // successors, sinking it would remove a load of the volatile value from
+    // the path through the other successor.
+    if (isVolatile &&
+        LI->getParent()->getTerminator()->getNumSuccessors() != 1)
+      return 0;
+  }
+  
+  // Okay, they are all the same operation.  Create a new PHI node of the
+  // correct type, and PHI together all of the LHS's of the instructions.
+  PHINode *NewPN = PHINode::Create(FirstLI->getOperand(0)->getType(),
+                                   PN.getName()+".in");
+  NewPN->reserveOperandSpace(PN.getNumOperands()/2);
+  
+  Value *InVal = FirstLI->getOperand(0);
+  NewPN->addIncoming(InVal, PN.getIncomingBlock(0));
+  
+  // Add all operands to the new PHI.
+  for (unsigned i = 1, e = PN.getNumIncomingValues(); i != e; ++i) {
+    Value *NewInVal = cast<LoadInst>(PN.getIncomingValue(i))->getOperand(0);
+    if (NewInVal != InVal)
+      InVal = 0;
+    NewPN->addIncoming(NewInVal, PN.getIncomingBlock(i));
+  }
+  
+  Value *PhiVal;
+  if (InVal) {
+    // The new PHI unions all of the same values together.  This is really
+    // common, so we handle it intelligently here for compile-time speed.
+    PhiVal = InVal;
+    delete NewPN;
+  } else {
+    InsertNewInstBefore(NewPN, PN);
+    PhiVal = NewPN;
+  }
+  
+  // If this was a volatile load that we are merging, make sure to loop through
+  // and mark all the input loads as non-volatile.  If we don't do this, we will
+  // insert a new volatile load and the old ones will not be deletable.
+  if (isVolatile)
+    for (unsigned i = 0, e = PN.getNumIncomingValues(); i != e; ++i)
+      cast<LoadInst>(PN.getIncomingValue(i))->setVolatile(false);
+  
+  return new LoadInst(PhiVal, "", isVolatile, LoadAlignment);
+}
+
+
 
-// FoldPHIArgOpIntoPHI - If all operands to a PHI node are the same "unary"
-// operator and they all are only used by the PHI, PHI together their
-// inputs, and do the operation once, to the result of the PHI.
+/// FoldPHIArgOpIntoPHI - If all operands to a PHI node are the same "unary"
+/// operator and they all are only used by the PHI, PHI together their
+/// inputs, and do the operation once, to the result of the PHI.
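+///
+/// E.g. (illustrative sketch) for the "+42" case:
+///   %a = add i32 %x, 42      ; in pred1
+///   %b = add i32 %y, 42      ; in pred2
+///   %r = phi i32 [ %a, %pred1 ], [ %b, %pred2 ]
+/// becomes %r.in = phi i32 [ %x, ... ], [ %y, ... ] followed by a single
+/// %r = add i32 %r.in, 42.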
 Instruction *InstCombiner::FoldPHIArgOpIntoPHI(PHINode &PN) {
   Instruction *FirstInst = cast<Instruction>(PN.getIncomingValue(0));
 
+  if (isa<GetElementPtrInst>(FirstInst))
+    return FoldPHIArgGEPIntoPHI(PN);
+  if (isa<LoadInst>(FirstInst))
+    return FoldPHIArgLoadIntoPHI(PN);
+  
   // Scan the instruction, looking for input operations that can be folded away.
   // If all input operands to the phi are the same instruction (e.g. a cast from
   // the same type or "+42") we can pull the operation through the PHI, reducing
   // code size and simplifying code.
   Constant *ConstantOp = 0;
   const Type *CastSrcTy = 0;
-  bool isVolatile = false;
+  
   if (isa<CastInst>(FirstInst)) {
     CastSrcTy = FirstInst->getOperand(0)->getType();
+
+    // Be careful about transforming integer PHIs.  We don't want to pessimize
+    // the code by turning an i32 into an i1293.
+    if (isa<IntegerType>(PN.getType()) && isa<IntegerType>(CastSrcTy)) {
+      if (!ShouldChangeType(PN.getType(), CastSrcTy, TD))
+        return 0;
+    }
   } else if (isa<BinaryOperator>(FirstInst) || isa<CmpInst>(FirstInst)) {
     // Can fold binop, compare or shift here if the RHS is a constant, 
     // otherwise call FoldPHIArgBinOpIntoPHI.
     ConstantOp = dyn_cast<Constant>(FirstInst->getOperand(1));
     if (ConstantOp == 0)
       return FoldPHIArgBinOpIntoPHI(PN);
-  } else if (LoadInst *LI = dyn_cast<LoadInst>(FirstInst)) {
-    isVolatile = LI->isVolatile();
-    // We can't sink the load if the loaded value could be modified between the
-    // load and the PHI.
-    if (LI->getParent() != PN.getIncomingBlock(0) ||
-        !isSafeAndProfitableToSinkLoad(LI))
-      return 0;
-    
-    // If the PHI is of volatile loads and the load block has multiple
-    // successors, sinking it would remove a load of the volatile value from
-    // the path through the other successor.
-    if (isVolatile &&
-        LI->getParent()->getTerminator()->getNumSuccessors() != 1)
-      return 0;
-    
-  } else if (isa<GetElementPtrInst>(FirstInst)) {
-    return FoldPHIArgGEPIntoPHI(PN);
   } else {
     return 0;  // Cannot fold this operation.
   }
 
   // Check to see if all arguments are the same operation.
   for (unsigned i = 1, e = PN.getNumIncomingValues(); i != e; ++i) {
-    if (!isa<Instruction>(PN.getIncomingValue(i))) return 0;
-    Instruction *I = cast<Instruction>(PN.getIncomingValue(i));
-    if (!I->hasOneUse() || !I->isSameOperationAs(FirstInst))
+    Instruction *I = dyn_cast<Instruction>(PN.getIncomingValue(i));
+    if (I == 0 || !I->hasOneUse() || !I->isSameOperationAs(FirstInst))
       return 0;
     if (CastSrcTy) {
       if (I->getOperand(0)->getType() != CastSrcTy)
         return 0;  // Cast operation must match.
-    } else if (LoadInst *LI = dyn_cast<LoadInst>(I)) {
-      // We can't sink the load if the loaded value could be modified between 
-      // the load and the PHI.
-      if (LI->isVolatile() != isVolatile ||
-          LI->getParent() != PN.getIncomingBlock(i) ||
-          !isSafeAndProfitableToSinkLoad(LI))
-        return 0;
-      
-      // If the PHI is of volatile loads and the load block has multiple
-      // successors, sinking it would remove a load of the volatile value from
-      // the path through the other successor.
-      if (isVolatile &&
-          LI->getParent()->getTerminator()->getNumSuccessors() != 1)
-        return 0;
-      
     } else if (I->getOperand(1) != ConstantOp) {
       return 0;
     }
@@ -10726,23 +11035,15 @@ Instruction *InstCombiner::FoldPHIArgOpIntoPHI(PHINode &PN) {
   }
 
   // Insert and return the new operation.
-  if (CastInst* FirstCI = dyn_cast<CastInst>(FirstInst))
+  if (CastInst *FirstCI = dyn_cast<CastInst>(FirstInst))
     return CastInst::Create(FirstCI->getOpcode(), PhiVal, PN.getType());
+  
   if (BinaryOperator *BinOp = dyn_cast<BinaryOperator>(FirstInst))
     return BinaryOperator::Create(BinOp->getOpcode(), PhiVal, ConstantOp);
-  if (CmpInst *CIOp = dyn_cast<CmpInst>(FirstInst))
-    return CmpInst::Create(CIOp->getOpcode(), CIOp->getPredicate(),
-                           PhiVal, ConstantOp);
-  assert(isa<LoadInst>(FirstInst) && "Unknown operation");
   
-  // If this was a volatile load that we are merging, make sure to loop through
-  // and mark all the input loads as non-volatile.  If we don't do this, we will
-  // insert a new volatile load and the old ones will not be deletable.
-  if (isVolatile)
-    for (unsigned i = 0, e = PN.getNumIncomingValues(); i != e; ++i)
-      cast<LoadInst>(PN.getIncomingValue(i))->setVolatile(false);
-  
-  return new LoadInst(PhiVal, "", isVolatile);
+  CmpInst *CIOp = cast<CmpInst>(FirstInst);
+  return CmpInst::Create(CIOp->getOpcode(), CIOp->getPredicate(),
+                         PhiVal, ConstantOp);
 }
 
 /// DeadPHICycle - Return true if this PHI node is only used by a PHI node cycle
@@ -10794,6 +11095,222 @@ static bool PHIsEqualValue(PHINode *PN, Value *NonPhiInVal,
 }
 
 
+namespace {
+struct PHIUsageRecord {
+  unsigned PHIId;     // The ID # of the PHI (something deterministic to sort on)
+  unsigned Shift;     // The amount shifted.
+  Instruction *Inst;  // The trunc instruction.
+  
+  PHIUsageRecord(unsigned pn, unsigned Sh, Instruction *User)
+    : PHIId(pn), Shift(Sh), Inst(User) {}
+  
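+  // Records sort by PHI id, then shift amount, then extract width, so that
+  // identical extracts of the same PHI end up adjacent after sorting.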
+  bool operator<(const PHIUsageRecord &RHS) const {
+    if (PHIId < RHS.PHIId) return true;
+    if (PHIId > RHS.PHIId) return false;
+    if (Shift < RHS.Shift) return true;
+    if (Shift > RHS.Shift) return false;
+    return Inst->getType()->getPrimitiveSizeInBits() <
+           RHS.Inst->getType()->getPrimitiveSizeInBits();
+  }
+};
+  
+struct LoweredPHIRecord {
+  PHINode *PN;        // The PHI that was lowered.
+  unsigned Shift;     // The amount shifted.
+  unsigned Width;     // The width extracted.
+  
+  LoweredPHIRecord(PHINode *pn, unsigned Sh, const Type *Ty)
+    : PN(pn), Shift(Sh), Width(Ty->getPrimitiveSizeInBits()) {}
+  
+  // Ctor form used by DenseMap.
+  LoweredPHIRecord(PHINode *pn, unsigned Sh)
+    : PN(pn), Shift(Sh), Width(0) {}
+};
+}
+
+namespace llvm {
+  template<>
+  struct DenseMapInfo<LoweredPHIRecord> {
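+    // DenseMap requires two sentinel keys that never compare equal to real
+    // data; a real record always has a non-null PHI, so (PN=0, Shift=0) and
+    // (PN=0, Shift=1) are safe choices here.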
+    static inline LoweredPHIRecord getEmptyKey() {
+      return LoweredPHIRecord(0, 0);
+    }
+    static inline LoweredPHIRecord getTombstoneKey() {
+      return LoweredPHIRecord(0, 1);
+    }
+    static unsigned getHashValue(const LoweredPHIRecord &Val) {
+      return DenseMapInfo<PHINode*>::getHashValue(Val.PN) ^ (Val.Shift>>3) ^
+             (Val.Width>>3);
+    }
+    static bool isEqual(const LoweredPHIRecord &LHS,
+                        const LoweredPHIRecord &RHS) {
+      return LHS.PN == RHS.PN && LHS.Shift == RHS.Shift &&
+             LHS.Width == RHS.Width;
+    }
+    static bool isPod() { return true; }
+  };
+}
+
+
+/// SliceUpIllegalIntegerPHI - This is an integer PHI and we know that it has an
+/// illegal type: see if it is only used by trunc or trunc(lshr) operations.  If
+/// so, we split the PHI into the various pieces being extracted.  This sort of
+/// thing is introduced when SROA promotes an aggregate to large integer values.
+///
+/// TODO: The user of the trunc may be a bitcast to float/double/vector or an
+/// inttoptr.  We should produce new PHIs in the right type.
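+///
+/// Illustrative sketch (assuming i64 is not legal for the target):
+///   %t0 = trunc i64 %phi to i32         ; bits 0..31
+///   %hi = lshr i64 %phi, 32
+///   %t1 = trunc i64 %hi to i32          ; bits 32..63
+/// is rewritten so that %t0 and %t1 are fed by two separate i32 PHIs.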
+///
+Instruction *InstCombiner::SliceUpIllegalIntegerPHI(PHINode &FirstPhi) {
+  // PHIUsers - Keep track of all of the truncated values extracted from a set
+  // of PHIs, along with their offset.  These are the things we want to rewrite.
+  SmallVector<PHIUsageRecord, 16> PHIUsers;
+  
+  // PHIs are often mutually cyclic, so we keep track of a whole set of PHI
+  // nodes which are extracted from. PHIsInspected is a set we use to avoid
+  // revisiting PHIs, and PHIsToSlice is an ordered list of PHIs that we need to
+  // check the uses of (to ensure they are all extracts).
+  SmallVector<PHINode*, 8> PHIsToSlice;
+  SmallPtrSet<PHINode*, 8> PHIsInspected;
+  
+  PHIsToSlice.push_back(&FirstPhi);
+  PHIsInspected.insert(&FirstPhi);
+  
+  for (unsigned PHIId = 0; PHIId != PHIsToSlice.size(); ++PHIId) {
+    PHINode *PN = PHIsToSlice[PHIId];
+    
+    for (Value::use_iterator UI = PN->use_begin(), E = PN->use_end();
+         UI != E; ++UI) {
+      Instruction *User = cast<Instruction>(*UI);
+      
+      // If the user is a PHI, inspect its uses recursively.
+      if (PHINode *UserPN = dyn_cast<PHINode>(User)) {
+        if (PHIsInspected.insert(UserPN))
+          PHIsToSlice.push_back(UserPN);
+        continue;
+      }
+      
+      // Truncates are always ok.
+      if (isa<TruncInst>(User)) {
+        PHIUsers.push_back(PHIUsageRecord(PHIId, 0, User));
+        continue;
+      }
+      
+      // Otherwise it must be a lshr which can only be used by one trunc.
+      if (User->getOpcode() != Instruction::LShr ||
+          !User->hasOneUse() || !isa<TruncInst>(User->use_back()) ||
+          !isa<ConstantInt>(User->getOperand(1)))
+        return 0;
+      
+      unsigned Shift = cast<ConstantInt>(User->getOperand(1))->getZExtValue();
+      PHIUsers.push_back(PHIUsageRecord(PHIId, Shift, User->use_back()));
+    }
+  }
+  
+  // If we recorded no users, all uses must be self-uses within the PHI
+  // cycle, so the PHIs are dead; just nuke the PHI.
+  if (PHIUsers.empty())
+    return ReplaceInstUsesWith(FirstPhi, UndefValue::get(FirstPhi.getType()));
+  
+  // If this phi node is transformable, create new PHIs for all the pieces
+  // extracted out of it.  First, sort the users by their offset and size.
+  array_pod_sort(PHIUsers.begin(), PHIUsers.end());
+  
+  DEBUG(errs() << "SLICING UP PHI: " << FirstPhi << '\n';
+            for (unsigned i = 1, e = PHIsToSlice.size(); i != e; ++i)
+              errs() << "AND USER PHI #" << i << ": " << *PHIsToSlice[i] <<'\n';
+        );
+  
+  // PredValues - This is a temporary used when rewriting PHI nodes.  It is
+  // hoisted out here to avoid construction/destruction thrashing.
+  DenseMap<BasicBlock*, Value*> PredValues;
+  
+  // ExtractedVals - Each new PHI we introduce is saved here so we don't
+  // introduce redundant PHIs.
+  DenseMap<LoweredPHIRecord, PHINode*> ExtractedVals;
+  
+  for (unsigned UserI = 0, UserE = PHIUsers.size(); UserI != UserE; ++UserI) {
+    unsigned PHIId = PHIUsers[UserI].PHIId;
+    PHINode *PN = PHIsToSlice[PHIId];
+    unsigned Offset = PHIUsers[UserI].Shift;
+    const Type *Ty = PHIUsers[UserI].Inst->getType();
+    
+    PHINode *EltPHI;
+    
+    // If we've already lowered a user like this, reuse the previously lowered
+    // value.
+    if ((EltPHI = ExtractedVals[LoweredPHIRecord(PN, Offset, Ty)]) == 0) {
+      
+      // Otherwise, create the new PHI node for this user.
+      EltPHI = PHINode::Create(Ty, PN->getName()+".off"+Twine(Offset), PN);
+      assert(EltPHI->getType() != PN->getType() &&
+             "Truncate didn't shrink phi?");
+    
+      for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
+        BasicBlock *Pred = PN->getIncomingBlock(i);
+        Value *&PredVal = PredValues[Pred];
+        
+        // If we already have a value for this predecessor, reuse it.
+        if (PredVal) {
+          EltPHI->addIncoming(PredVal, Pred);
+          continue;
+        }
+
+        // Handle the PHI self-reuse case.
+        Value *InVal = PN->getIncomingValue(i);
+        if (InVal == PN) {
+          PredVal = EltPHI;
+          EltPHI->addIncoming(PredVal, Pred);
+          continue;
+        } else if (PHINode *InPHI = dyn_cast<PHINode>(InVal)) {
+          // If the incoming value was a PHI, and if it was one of the PHIs we
+          // already rewrote, just use the lowered value.
+          if (Value *Res = ExtractedVals[LoweredPHIRecord(InPHI, Offset, Ty)]) {
+            PredVal = Res;
+            EltPHI->addIncoming(PredVal, Pred);
+            continue;
+          }
+        }
+        
+        // Otherwise, do an extract in the predecessor.
+        Builder->SetInsertPoint(Pred, Pred->getTerminator());
+        Value *Res = InVal;
+        if (Offset)
+          Res = Builder->CreateLShr(Res, ConstantInt::get(InVal->getType(),
+                                                          Offset), "extract");
+        Res = Builder->CreateTrunc(Res, Ty, "extract.t");
+        PredVal = Res;
+        EltPHI->addIncoming(Res, Pred);
+        
+        // If the incoming value was a PHI, and if it was one of the PHIs we are
+        // rewriting, we will ultimately delete the code we inserted.  This
+        // means we need to revisit that PHI to make sure we extract out the
+        // needed piece.
+        if (PHINode *OldInVal = dyn_cast<PHINode>(PN->getIncomingValue(i)))
+          if (PHIsInspected.count(OldInVal)) {
+            unsigned RefPHIId = std::find(PHIsToSlice.begin(),PHIsToSlice.end(),
+                                          OldInVal)-PHIsToSlice.begin();
+            PHIUsers.push_back(PHIUsageRecord(RefPHIId, Offset, 
+                                              cast<Instruction>(Res)));
+            ++UserE;
+          }
+      }
+      PredValues.clear();
+      
+      DEBUG(errs() << "  Made element PHI for offset " << Offset << ": "
+                   << *EltPHI << '\n');
+      ExtractedVals[LoweredPHIRecord(PN, Offset, Ty)] = EltPHI;
+    }
+    
+    // Replace the use of this piece with the PHI node.
+    ReplaceInstUsesWith(*PHIUsers[UserI].Inst, EltPHI);
+  }
+  
+  // Replace all the remaining uses of the PHI nodes (self uses and the lshrs)
+  // with undefs.
+  Value *Undef = UndefValue::get(FirstPhi.getType());
+  for (unsigned i = 1, e = PHIsToSlice.size(); i != e; ++i)
+    ReplaceInstUsesWith(*PHIsToSlice[i], Undef);
+  return ReplaceInstUsesWith(FirstPhi, Undef);
+}
+
 // PHINode simplification
 //
 Instruction *InstCombiner::visitPHINode(PHINode &PN) {
@@ -10874,25 +11391,54 @@ Instruction *InstCombiner::visitPHINode(PHINode &PN) {
       }
     }
   }
+
+  // If there are multiple PHIs, sort their operands so that they all list
+  // the blocks in the same order. This will help identical PHIs be eliminated
+  // by other passes. Other passes shouldn't depend on this for correctness,
+  // however.
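+  // E.g. (sketch): given %p1 = phi [ %a, %bb1 ], [ %b, %bb2 ] and
+  // %p2 = phi [ %c, %bb2 ], [ %d, %bb1 ], %p2's operands are swapped so
+  // that both PHIs list %bb1 first.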
+  PHINode *FirstPN = cast<PHINode>(PN.getParent()->begin());
+  if (&PN != FirstPN)
+    for (unsigned i = 0, e = FirstPN->getNumIncomingValues(); i != e; ++i) {
+      BasicBlock *BBA = PN.getIncomingBlock(i);
+      BasicBlock *BBB = FirstPN->getIncomingBlock(i);
+      if (BBA != BBB) {
+        Value *VA = PN.getIncomingValue(i);
+        unsigned j = PN.getBasicBlockIndex(BBB);
+        Value *VB = PN.getIncomingValue(j);
+        PN.setIncomingBlock(i, BBB);
+        PN.setIncomingValue(i, VB);
+        PN.setIncomingBlock(j, BBA);
+        PN.setIncomingValue(j, VA);
+        // NOTE: Instcombine normally would want us to "return &PN" if we
+        // modified any of the operands of an instruction.  However, since we
+        // aren't adding or removing uses (just rearranging them) we don't do
+        // this in this case.
+      }
+    }
+
+  // If this is an integer PHI and we know that it has an illegal type, see if
+  // it is only used by trunc or trunc(lshr) operations.  If so, we split the
+  // PHI into the various pieces being extracted.  This sort of thing is
+  // introduced when SROA promotes an aggregate to a single large integer type.
+  if (isa<IntegerType>(PN.getType()) && TD &&
+      !TD->isLegalInteger(PN.getType()->getPrimitiveSizeInBits()))
+    if (Instruction *Res = SliceUpIllegalIntegerPHI(PN))
+      return Res;
+  
   return 0;
 }
 
 Instruction *InstCombiner::visitGetElementPtrInst(GetElementPtrInst &GEP) {
+  SmallVector<Value*, 8> Ops(GEP.op_begin(), GEP.op_end());
+
+  if (Value *V = SimplifyGEPInst(&Ops[0], Ops.size(), TD))
+    return ReplaceInstUsesWith(GEP, V);
+
   Value *PtrOp = GEP.getOperand(0);
-  // Eliminate 'getelementptr %P, i32 0' and 'getelementptr %P', they are noops.
-  if (GEP.getNumOperands() == 1)
-    return ReplaceInstUsesWith(GEP, PtrOp);
 
   if (isa<UndefValue>(GEP.getOperand(0)))
     return ReplaceInstUsesWith(GEP, UndefValue::get(GEP.getType()));
 
-  bool HasZeroPointerIndex = false;
-  if (Constant *C = dyn_cast<Constant>(GEP.getOperand(1)))
-    HasZeroPointerIndex = C->isNullValue();
-
-  if (GEP.getNumOperands() == 2 && HasZeroPointerIndex)
-    return ReplaceInstUsesWith(GEP, PtrOp);
-
   // Eliminate unneeded casts for indices.
   if (TD) {
     bool MadeChange = false;
@@ -10997,6 +11543,10 @@ Instruction *InstCombiner::visitGetElementPtrInst(GetElementPtrInst &GEP) {
       return 0;
     }
     
+    bool HasZeroPointerIndex = false;
+    if (ConstantInt *C = dyn_cast<ConstantInt>(GEP.getOperand(1)))
+      HasZeroPointerIndex = C->isZero();
+    
     // Transform: GEP (bitcast [10 x i8]* X to [0 x i8]*), i32 0, ...
     // into     : GEP [10 x i8]* X, i32 0, ...
     //
@@ -11124,8 +11674,7 @@ Instruction *InstCombiner::visitGetElementPtrInst(GetElementPtrInst &GEP) {
         !isa<BitCastInst>(BCI->getOperand(0)) && GEP.hasAllConstantIndices()) {
       // Determine how much the GEP moves the pointer.  We are guaranteed to get
       // a constant back from EmitGEPOffset.
-      ConstantInt *OffsetV =
-                    cast<ConstantInt>(EmitGEPOffset(&GEP, GEP, *this));
+      ConstantInt *OffsetV = cast<ConstantInt>(EmitGEPOffset(&GEP, *this));
       int64_t Offset = OffsetV->getSExtValue();
       
       // If this GEP instruction doesn't move the pointer, just replace the GEP
@@ -11133,7 +11682,7 @@ Instruction *InstCombiner::visitGetElementPtrInst(GetElementPtrInst &GEP) {
       if (Offset == 0) {
         // If the bitcast is of an allocation, and the allocation will be
         // converted to match the type of the cast, don't touch this.
-        if (isa<AllocationInst>(BCI->getOperand(0)) ||
+        if (isa<AllocaInst>(BCI->getOperand(0)) ||
             isMalloc(BCI->getOperand(0))) {
           // See if the bitcast simplifies, if so, don't nuke this GEP yet.
           if (Instruction *I = visitBitCast(*BCI)) {
@@ -11172,28 +11721,21 @@ Instruction *InstCombiner::visitGetElementPtrInst(GetElementPtrInst &GEP) {
   return 0;
 }
 
-Instruction *InstCombiner::visitAllocationInst(AllocationInst &AI) {
-  // Convert: malloc Ty, C - where C is a constant != 1 into: malloc [C x Ty], 1
+Instruction *InstCombiner::visitAllocaInst(AllocaInst &AI) {
+  // Convert: alloca Ty, C - where C is a constant != 1 into: alloca [C x Ty], 1
   if (AI.isArrayAllocation()) {  // Check C != 1
     if (const ConstantInt *C = dyn_cast<ConstantInt>(AI.getArraySize())) {
       const Type *NewTy = 
         ArrayType::get(AI.getAllocatedType(), C->getZExtValue());
-      AllocationInst *New = 0;
-
-      // Create and insert the replacement instruction...
-      if (isa<MallocInst>(AI))
-        New = Builder->CreateMalloc(NewTy, 0, AI.getName());
-      else {
-        assert(isa<AllocaInst>(AI) && "Unknown type of allocation inst!");
-        New = Builder->CreateAlloca(NewTy, 0, AI.getName());
-      }
+      assert(isa<AllocaInst>(AI) && "Unknown type of allocation inst!");
+      AllocaInst *New = Builder->CreateAlloca(NewTy, 0, AI.getName());
       New->setAlignment(AI.getAlignment());
 
       // Scan to the end of the allocation instructions, to skip over a block of
       // allocas if possible...also skip interleaved debug info
       //
       BasicBlock::iterator It = New;
-      while (isa<AllocationInst>(*It) || isa<DbgInfoIntrinsic>(*It)) ++It;
+      while (isa<AllocaInst>(*It) || isa<DbgInfoIntrinsic>(*It)) ++It;
 
       // Now that It is pointing to the first non-allocation-inst in the block,
       // insert our getelementptr instruction...
@@ -11228,14 +11770,14 @@ Instruction *InstCombiner::visitAllocationInst(AllocationInst &AI) {
   return 0;
 }
 
-Instruction *InstCombiner::visitFreeInst(FreeInst &FI) {
-  Value *Op = FI.getOperand(0);
+Instruction *InstCombiner::visitFree(Instruction &FI) {
+  Value *Op = FI.getOperand(1);
 
   // free undef -> unreachable.
   if (isa<UndefValue>(Op)) {
     // Insert a new store to null because we cannot modify the CFG here.
     new StoreInst(ConstantInt::getTrue(*Context),
-           UndefValue::get(PointerType::getUnqual(Type::getInt1Ty(*Context))), &FI);
+           UndefValue::get(Type::getInt1PtrTy(*Context)), &FI);
     return EraseInstFromFunction(FI);
   }
   
@@ -11243,28 +11785,8 @@ Instruction *InstCombiner::visitFreeInst(FreeInst &FI) {
   // when lots of inlining happens.
   if (isa<ConstantPointerNull>(Op))
     return EraseInstFromFunction(FI);
-  
-  // Change free <ty>* (cast <ty2>* X to <ty>*) into free <ty2>* X
-  if (BitCastInst *CI = dyn_cast<BitCastInst>(Op)) {
-    FI.setOperand(0, CI->getOperand(0));
-    return &FI;
-  }
-  
-  // Change free (gep X, 0,0,0,0) into free(X)
-  if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(Op)) {
-    if (GEPI->hasAllZeroIndices()) {
-      Worklist.Add(GEPI);
-      FI.setOperand(0, GEPI->getOperand(0));
-      return &FI;
-    }
-  }
-  
-  // Change free(malloc) into nothing, if the malloc has a single use.
-  if (MallocInst *MI = dyn_cast<MallocInst>(Op))
-    if (MI->hasOneUse()) {
-      EraseInstFromFunction(FI);
-      return EraseInstFromFunction(*MI);
-    }
+
+  // If we have a malloc call whose only use is a free call, delete both.
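+  // E.g. (illustrative): %m = call i8* @malloc(i32 4) whose only use is
+  // call void @free(i8* %m) leaves both calls dead, so both are erased.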
   if (isMalloc(Op)) {
     if (CallInst* CI = extractMallocCallFromBitCast(Op)) {
       if (Op->hasOneUse() && CI->hasOneUse()) {
@@ -11284,7 +11806,6 @@ Instruction *InstCombiner::visitFreeInst(FreeInst &FI) {
   return 0;
 }
 
-
 /// InstCombineLoadCast - Fold 'load (cast P)' -> cast (load P)' when possible.
 static Instruction *InstCombineLoadCast(InstCombiner &IC, LoadInst &LI,
                                         const TargetData *TD) {
@@ -11292,40 +11813,6 @@ static Instruction *InstCombineLoadCast(InstCombiner &IC, LoadInst &LI,
   Value *CastOp = CI->getOperand(0);
   LLVMContext *Context = IC.getContext();
 
-  if (TD) {
-    if (ConstantExpr *CE = dyn_cast<ConstantExpr>(CI)) {
-      // Instead of loading constant c string, use corresponding integer value
-      // directly if string length is small enough.
-      std::string Str;
-      if (GetConstantStringInfo(CE->getOperand(0), Str) && !Str.empty()) {
-        unsigned len = Str.length();
-        const Type *Ty = cast<PointerType>(CE->getType())->getElementType();
-        unsigned numBits = Ty->getPrimitiveSizeInBits();
-        // Replace LI with immediate integer store.
-        if ((numBits >> 3) == len + 1) {
-          APInt StrVal(numBits, 0);
-          APInt SingleChar(numBits, 0);
-          if (TD->isLittleEndian()) {
-            for (signed i = len-1; i >= 0; i--) {
-              SingleChar = (uint64_t) Str[i] & UCHAR_MAX;
-              StrVal = (StrVal << 8) | SingleChar;
-            }
-          } else {
-            for (unsigned i = 0; i < len; i++) {
-              SingleChar = (uint64_t) Str[i] & UCHAR_MAX;
-              StrVal = (StrVal << 8) | SingleChar;
-            }
-            // Append NULL at the end.
-            SingleChar = 0;
-            StrVal = (StrVal << 8) | SingleChar;
-          }
-          Value *NL = ConstantInt::get(*Context, StrVal);
-          return IC.ReplaceInstUsesWith(LI, NL);
-        }
-      }
-    }
-  }
-
   const PointerType *DestTy = cast<PointerType>(CI->getType());
   const Type *DestPTy = DestTy->getElementType();
   if (const PointerType *SrcTy = dyn_cast<PointerType>(CastOp->getType())) {
@@ -11345,7 +11832,8 @@ static Instruction *InstCombineLoadCast(InstCombiner &IC, LoadInst &LI,
         if (Constant *CSrc = dyn_cast<Constant>(CastOp))
           if (ASrcTy->getNumElements() != 0) {
             Value *Idxs[2];
-            Idxs[0] = Idxs[1] = Constant::getNullValue(Type::getInt32Ty(*Context));
+            Idxs[0] = Constant::getNullValue(Type::getInt32Ty(*Context));
+            Idxs[1] = Idxs[0];
             CastOp = ConstantExpr::getGetElementPtr(CSrc, Idxs, 2);
             SrcTy = cast<PointerType>(CastOp->getType());
             SrcPTy = SrcTy->getElementType();
@@ -11401,6 +11889,7 @@ Instruction *InstCombiner::visitLoadInst(LoadInst &LI) {
   if (Value *AvailableVal = FindAvailableLoadedValue(Op, LI.getParent(), BBI,6))
     return ReplaceInstUsesWith(LI, AvailableVal);
 
+  // load(gep null, ...) -> unreachable
   if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(Op)) {
     const Value *GEPI0 = GEPI->getOperand(0);
     // TODO: Consider a target hook for valid address spaces for this xform.
@@ -11415,61 +11904,24 @@ Instruction *InstCombiner::visitLoadInst(LoadInst &LI) {
     }
   } 
 
-  if (Constant *C = dyn_cast<Constant>(Op)) {
-    // load null/undef -> undef
-    // TODO: Consider a target hook for valid address spaces for this xform.
-    if (isa<UndefValue>(C) ||
-        (C->isNullValue() && LI.getPointerAddressSpace() == 0)) {
-      // Insert a new store to null instruction before the load to indicate that
-      // this code is not reachable.  We do this instead of inserting an
-      // unreachable instruction directly because we cannot modify the CFG.
-      new StoreInst(UndefValue::get(LI.getType()),
-                    Constant::getNullValue(Op->getType()), &LI);
-      return ReplaceInstUsesWith(LI, UndefValue::get(LI.getType()));
-    }
-
-    // Instcombine load (constant global) into the value loaded.
-    if (GlobalVariable *GV = dyn_cast<GlobalVariable>(Op))
-      if (GV->isConstant() && GV->hasDefinitiveInitializer())
-        return ReplaceInstUsesWith(LI, GV->getInitializer());
-
-    // Instcombine load (constantexpr_GEP global, 0, ...) into the value loaded.
-    if (ConstantExpr *CE = dyn_cast<ConstantExpr>(Op)) {
-      if (CE->getOpcode() == Instruction::GetElementPtr) {
-        if (GlobalVariable *GV = dyn_cast<GlobalVariable>(CE->getOperand(0)))
-          if (GV->isConstant() && GV->hasDefinitiveInitializer())
-            if (Constant *V = 
-               ConstantFoldLoadThroughGEPConstantExpr(GV->getInitializer(), CE, 
-                                                      *Context))
-              return ReplaceInstUsesWith(LI, V);
-        if (CE->getOperand(0)->isNullValue()) {
-          // Insert a new store to null instruction before the load to indicate
-          // that this code is not reachable.  We do this instead of inserting
-          // an unreachable instruction directly because we cannot modify the
-          // CFG.
-          new StoreInst(UndefValue::get(LI.getType()),
-                        Constant::getNullValue(Op->getType()), &LI);
-          return ReplaceInstUsesWith(LI, UndefValue::get(LI.getType()));
-        }
-
-      } else if (CE->isCast()) {
-        if (Instruction *Res = InstCombineLoadCast(*this, LI, TD))
-          return Res;
-      }
-    }
-  }
-    
-  // If this load comes from anywhere in a constant global, and if the global
-  // is all undef or zero, we know what it loads.
-  if (GlobalVariable *GV = dyn_cast<GlobalVariable>(Op->getUnderlyingObject())){
-    if (GV->isConstant() && GV->hasDefinitiveInitializer()) {
-      if (GV->getInitializer()->isNullValue())
-        return ReplaceInstUsesWith(LI, Constant::getNullValue(LI.getType()));
-      else if (isa<UndefValue>(GV->getInitializer()))
-        return ReplaceInstUsesWith(LI, UndefValue::get(LI.getType()));
-    }
+  // load null/undef -> unreachable
+  // TODO: Consider a target hook for valid address spaces for this xform.
+  if (isa<UndefValue>(Op) ||
+      (isa<ConstantPointerNull>(Op) && LI.getPointerAddressSpace() == 0)) {
+    // Insert a new store to null instruction before the load to indicate that
+    // this code is not reachable.  We do this instead of inserting an
+    // unreachable instruction directly because we cannot modify the CFG.
+    new StoreInst(UndefValue::get(LI.getType()),
+                  Constant::getNullValue(Op->getType()), &LI);
+    return ReplaceInstUsesWith(LI, UndefValue::get(LI.getType()));
   }
 
+  // Instcombine load (constantexpr_cast global) -> cast (load global)
+  if (ConstantExpr *CE = dyn_cast<ConstantExpr>(Op))
+    if (CE->isCast())
+      if (Instruction *Res = InstCombineLoadCast(*this, LI, TD))
+        return Res;
+  
   if (Op->hasOneUse()) {
     // Change select and PHI nodes to select values instead of addresses: this
     // helps alias analysis out a lot, allows many others simplifications, and
@@ -11646,12 +12098,6 @@ Instruction *InstCombiner::visitStoreInst(StoreInst &SI) {
   Value *Val = SI.getOperand(0);
   Value *Ptr = SI.getOperand(1);
 
-  if (isa<UndefValue>(Ptr)) {     // store X, undef -> noop (even if volatile)
-    EraseInstFromFunction(SI);
-    ++NumCombined;
-    return 0;
-  }
-  
   // If the RHS is an alloca with a single use, zapify the store, making the
   // alloca dead.
   // If the RHS is an alloca with a two uses, the other one being a 
@@ -11854,9 +12300,11 @@ bool InstCombiner::SimplifyStoreAtEndOfBlock(StoreInst &SI) {
         return false;
       --BBI;
     }
-    // If this isn't a store, or isn't a store to the same location, bail out.
+    // If this isn't a store, isn't a store to the same location, or if the
+    // alignments differ, bail out.
     OtherStore = dyn_cast<StoreInst>(BBI);
-    if (!OtherStore || OtherStore->getOperand(1) != SI.getOperand(1))
+    if (!OtherStore || OtherStore->getOperand(1) != SI.getOperand(1) ||
+        OtherStore->getAlignment() != SI.getAlignment())
       return false;
   } else {
     // Otherwise, the other block ended with a conditional branch. If one of the
@@ -11871,7 +12319,8 @@ bool InstCombiner::SimplifyStoreAtEndOfBlock(StoreInst &SI) {
     for (;; --BBI) {
       // Check to see if we find the matching store.
       if ((OtherStore = dyn_cast<StoreInst>(BBI))) {
-        if (OtherStore->getOperand(1) != SI.getOperand(1))
+        if (OtherStore->getOperand(1) != SI.getOperand(1) ||
+            OtherStore->getAlignment() != SI.getAlignment())
           return false;
         break;
       }
@@ -11906,7 +12355,8 @@ bool InstCombiner::SimplifyStoreAtEndOfBlock(StoreInst &SI) {
   // insert it.
   BBI = DestBB->getFirstNonPHI();
   InsertNewInstBefore(new StoreInst(MergedVal, SI.getOperand(1),
-                                    OtherStore->isVolatile()), *BBI);
+                                    OtherStore->isVolatile(),
+                                    SI.getAlignment()), *BBI);
   
   // Nuke the old stores.
   EraseInstFromFunction(SI);
@@ -12061,6 +12511,47 @@ Instruction *InstCombiner::visitExtractValueInst(ExtractValueInst &EV) {
       return ExtractValueInst::Create(IV->getInsertedValueOperand(), 
                                       exti, exte);
   }
+  if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(Agg)) {
+    // We're extracting from an intrinsic; see if we're the only user, which
+    // allows us to simplify multiple-result intrinsics to simpler things that
+    // just get one value.
+    if (II->hasOneUse()) {
+      // Check if we're grabbing the overflow bit or the result of a 'with
+      // overflow' intrinsic.  If it's the latter we can remove the intrinsic
+      // and replace it with a traditional binary instruction.
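+      // E.g. (illustrative): %s = call @llvm.uadd.with.overflow.i32(%a, %b)
+      // where only extractvalue %s, 0 is used becomes a plain 'add %a, %b'.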
+      switch (II->getIntrinsicID()) {
+      case Intrinsic::uadd_with_overflow:
+      case Intrinsic::sadd_with_overflow:
+        if (*EV.idx_begin() == 0) {  // Normal result.
+          Value *LHS = II->getOperand(1), *RHS = II->getOperand(2);
+          II->replaceAllUsesWith(UndefValue::get(II->getType()));
+          EraseInstFromFunction(*II);
+          return BinaryOperator::CreateAdd(LHS, RHS);
+        }
+        break;
+      case Intrinsic::usub_with_overflow:
+      case Intrinsic::ssub_with_overflow:
+        if (*EV.idx_begin() == 0) {  // Normal result.
+          Value *LHS = II->getOperand(1), *RHS = II->getOperand(2);
+          II->replaceAllUsesWith(UndefValue::get(II->getType()));
+          EraseInstFromFunction(*II);
+          return BinaryOperator::CreateSub(LHS, RHS);
+        }
+        break;
+      case Intrinsic::umul_with_overflow:
+      case Intrinsic::smul_with_overflow:
+        if (*EV.idx_begin() == 0) {  // Normal result.
+          Value *LHS = II->getOperand(1), *RHS = II->getOperand(2);
+          II->replaceAllUsesWith(UndefValue::get(II->getType()));
+          EraseInstFromFunction(*II);
+          return BinaryOperator::CreateMul(LHS, RHS);
+        }
+        break;
+      default:
+        break;
+      }
+    }
+  }
   // Can't simplify extracts from other values. Note that nested extracts are
   // already simplified implicitly by the above (extract ( extract (insert) )
   // will be translated into extract ( insert ( extract ) ) first and then just
@@ -12457,28 +12948,6 @@ Instruction *InstCombiner::visitInsertElementInst(InsertElementInst &IE) {
       if (EI->getOperand(0) == VecOp && ExtractedIdx == InsertedIdx)
         return ReplaceInstUsesWith(IE, VecOp);      
       
-      // We could theoretically do this for ANY input.  However, doing so could
-      // turn chains of insertelement instructions into a chain of shufflevector
-      // instructions, and right now we do not merge shufflevectors.  As such,
-      // only do this in a situation where it is clear that there is benefit.
-      if (isa<UndefValue>(VecOp) || isa<ConstantAggregateZero>(VecOp)) {
-        // Turn this into shuffle(EIOp0, VecOp, Mask).  The result has all of
-        // the values of VecOp, except then one read from EIOp0.
-        // Build a new shuffle mask.
-        std::vector<Constant*> Mask;
-        if (isa<UndefValue>(VecOp))
-          Mask.assign(NumVectorElts, UndefValue::get(Type::getInt32Ty(*Context)));
-        else {
-          assert(isa<ConstantAggregateZero>(VecOp) && "Unknown thing");
-          Mask.assign(NumVectorElts, ConstantInt::get(Type::getInt32Ty(*Context),
-                                                       NumVectorElts));
-        } 
-        Mask[InsertedIdx] = 
-                           ConstantInt::get(Type::getInt32Ty(*Context), ExtractedIdx);
-        return new ShuffleVectorInst(EI->getOperand(0), VecOp,
-                                     ConstantVector::get(Mask));
-      }
-      
       // If this insertelement isn't used by some other insertelement, turn it
       // (and any insertelements it points to), into one big shuffle.
       if (!IE.hasOneUse() || !isa<InsertElementInst>(IE.use_back())) {
@@ -12588,29 +13057,33 @@ Instruction *InstCombiner::visitShuffleVectorInst(ShuffleVectorInst &SVI) {
     if (isa<UndefValue>(RHS)) {
       std::vector<unsigned> LHSMask = getShuffleMask(LHSSVI);
 
-      std::vector<unsigned> NewMask;
-      for (unsigned i = 0, e = Mask.size(); i != e; ++i)
-        if (Mask[i] >= 2*e)
-          NewMask.push_back(2*e);
-        else
-          NewMask.push_back(LHSMask[Mask[i]]);
+      if (LHSMask.size() == Mask.size()) {
+        std::vector<unsigned> NewMask;
+        for (unsigned i = 0, e = Mask.size(); i != e; ++i)
+          if (Mask[i] >= e)
+            NewMask.push_back(2*e);
+          else
+            NewMask.push_back(LHSMask[Mask[i]]);
       
-      // If the result mask is equal to the src shuffle or this shuffle mask, do
-      // the replacement.
-      if (NewMask == LHSMask || NewMask == Mask) {
-        unsigned LHSInNElts =
-          cast<VectorType>(LHSSVI->getOperand(0)->getType())->getNumElements();
-        std::vector<Constant*> Elts;
-        for (unsigned i = 0, e = NewMask.size(); i != e; ++i) {
-          if (NewMask[i] >= LHSInNElts*2) {
-            Elts.push_back(UndefValue::get(Type::getInt32Ty(*Context)));
-          } else {
-            Elts.push_back(ConstantInt::get(Type::getInt32Ty(*Context), NewMask[i]));
+        // If the result mask is equal to the src shuffle or this
+        // shuffle mask, do the replacement.
+        if (NewMask == LHSMask || NewMask == Mask) {
+          unsigned LHSInNElts =
+            cast<VectorType>(LHSSVI->getOperand(0)->getType())->
+            getNumElements();
+          std::vector<Constant*> Elts;
+          for (unsigned i = 0, e = NewMask.size(); i != e; ++i) {
+            if (NewMask[i] >= LHSInNElts*2) {
+              Elts.push_back(UndefValue::get(Type::getInt32Ty(*Context)));
+            } else {
+              Elts.push_back(ConstantInt::get(Type::getInt32Ty(*Context),
+                                              NewMask[i]));
+            }
           }
+          return new ShuffleVectorInst(LHSSVI->getOperand(0),
+                                       LHSSVI->getOperand(1),
+                                       ConstantVector::get(Elts));
         }
-        return new ShuffleVectorInst(LHSSVI->getOperand(0),
-                                     LHSSVI->getOperand(1),
-                                     ConstantVector::get(Elts));
       }
     }
   }
@@ -12664,13 +13137,19 @@ static bool TryToSinkInstruction(Instruction *I, BasicBlock *DestBlock) {
 /// many instructions are dead or constant).  Additionally, if we find a branch
 /// whose condition is a known constant, we only visit the reachable successors.
 ///
-static void AddReachableCodeToWorklist(BasicBlock *BB, 
+static bool AddReachableCodeToWorklist(BasicBlock *BB, 
                                        SmallPtrSet<BasicBlock*, 64> &Visited,
                                        InstCombiner &IC,
                                        const TargetData *TD) {
+  bool MadeIRChange = false;
   SmallVector<BasicBlock*, 256> Worklist;
   Worklist.push_back(BB);
+  
+  std::vector<Instruction*> InstrsForInstCombineWorklist;
+  InstrsForInstCombineWorklist.reserve(128);
 
+  SmallPtrSet<ConstantExpr*, 64> FoldedConstants;
+  
   while (!Worklist.empty()) {
     BB = Worklist.back();
     Worklist.pop_back();
@@ -12678,7 +13157,6 @@ static void AddReachableCodeToWorklist(BasicBlock *BB,
     // We have now visited this block!  If we've already been here, ignore it.
     if (!Visited.insert(BB)) continue;
 
-    DbgInfoIntrinsic *DBI_Prev = NULL;
     for (BasicBlock::iterator BBI = BB->begin(), E = BB->end(); BBI != E; ) {
       Instruction *Inst = BBI++;
       
@@ -12691,32 +13169,39 @@ static void AddReachableCodeToWorklist(BasicBlock *BB,
       }
       
       // ConstantProp instruction if trivially constant.
-      if (Constant *C = ConstantFoldInstruction(Inst, BB->getContext(), TD)) {
-        DEBUG(errs() << "IC: ConstFold to: " << *C << " from: "
-                     << *Inst << '\n');
-        Inst->replaceAllUsesWith(C);
-        ++NumConstProp;
-        Inst->eraseFromParent();
-        continue;
-      }
-     
-      // If there are two consecutive llvm.dbg.stoppoint calls then
-      // it is likely that the optimizer deleted code in between these
-      // two intrinsics. 
-      DbgInfoIntrinsic *DBI_Next = dyn_cast<DbgInfoIntrinsic>(Inst);
-      if (DBI_Next) {
-        if (DBI_Prev
-            && DBI_Prev->getIntrinsicID() == llvm::Intrinsic::dbg_stoppoint
-            && DBI_Next->getIntrinsicID() == llvm::Intrinsic::dbg_stoppoint) {
-          IC.Worklist.Remove(DBI_Prev);
-          DBI_Prev->eraseFromParent();
+      if (!Inst->use_empty() && isa<Constant>(Inst->getOperand(0)))
+        if (Constant *C = ConstantFoldInstruction(Inst, TD)) {
+          DEBUG(errs() << "IC: ConstFold to: " << *C << " from: "
+                       << *Inst << '\n');
+          Inst->replaceAllUsesWith(C);
+          ++NumConstProp;
+          Inst->eraseFromParent();
+          continue;
+        }
+      
+      if (TD) {
+        // See if we can constant fold its operands.
+        for (User::op_iterator i = Inst->op_begin(), e = Inst->op_end();
+             i != e; ++i) {
+          ConstantExpr *CE = dyn_cast<ConstantExpr>(i);
+          if (CE == 0) continue;
+          
+          // If we already folded this constant, don't try again.
+          if (!FoldedConstants.insert(CE))
+            continue;
+          
+          Constant *NewC = ConstantFoldConstantExpression(CE, TD);
+          if (NewC && NewC != CE) {
+            *i = NewC;
+            MadeIRChange = true;
+          }
         }
-        DBI_Prev = DBI_Next;
-      } else {
-        DBI_Prev = 0;
       }
+      
 
-      IC.Worklist.Add(Inst);
+      InstrsForInstCombineWorklist.push_back(Inst);
     }
 
     // Recursively visit successors.  If this is a branch or switch on a
@@ -12748,11 +13233,20 @@ static void AddReachableCodeToWorklist(BasicBlock *BB,
     for (unsigned i = 0, e = TI->getNumSuccessors(); i != e; ++i)
       Worklist.push_back(TI->getSuccessor(i));
   }
+  
+  // Once we've found all of the instructions to add to instcombine's worklist,
+  // add them in reverse order.  This way instcombine will visit from the top
+  // of the function down.  This jives well with the way that it adds all uses
+  // of instructions to the worklist after doing a transformation, thus avoiding
+  // some N^2 behavior in pathological cases.
+  IC.Worklist.AddInitialGroup(&InstrsForInstCombineWorklist[0],
+                              InstrsForInstCombineWorklist.size());
+  
+  return MadeIRChange;
 }
 
 bool InstCombiner::DoOneIteration(Function &F, unsigned Iteration) {
   MadeIRChange = false;
-  TD = getAnalysisIfAvailable<TargetData>();
   
   DEBUG(errs() << "\n\nINSTCOMBINE ITERATION #" << Iteration << " on "
         << F.getNameStr() << "\n");
@@ -12762,7 +13256,7 @@ bool InstCombiner::DoOneIteration(Function &F, unsigned Iteration) {
     // the reachable instructions.  Ignore blocks that are not reachable.  Keep
     // track of which blocks we visit.
     SmallPtrSet<BasicBlock*, 64> Visited;
-    AddReachableCodeToWorklist(F.begin(), Visited, *this, TD);
+    MadeIRChange |= AddReachableCodeToWorklist(F.begin(), Visited, *this, TD);
 
     // Do a quick scan over the function.  If we find any blocks that are
     // unreachable, remove any instructions inside of them.  This prevents
@@ -12780,7 +13274,10 @@ bool InstCombiner::DoOneIteration(Function &F, unsigned Iteration) {
             ++NumDeadInst;
             MadeIRChange = true;
           }
-          if (!I->use_empty())
+
+          // If I is not of void type, replaceAllUsesWith undef.
+          // This allows ValueHandles and custom metadata to adjust themselves.
+          if (!I->getType()->isVoidTy())
             I->replaceAllUsesWith(UndefValue::get(I->getType()));
           I->eraseFromParent();
         }
@@ -12801,33 +13298,30 @@ bool InstCombiner::DoOneIteration(Function &F, unsigned Iteration) {
     }
 
     // Instruction isn't dead, see if we can constant propagate it.
-    if (Constant *C = ConstantFoldInstruction(I, F.getContext(), TD)) {
-      DEBUG(errs() << "IC: ConstFold to: " << *C << " from: " << *I << '\n');
+    if (!I->use_empty() && isa<Constant>(I->getOperand(0)))
+      if (Constant *C = ConstantFoldInstruction(I, TD)) {
+        DEBUG(errs() << "IC: ConstFold to: " << *C << " from: " << *I << '\n');
 
-      // Add operands to the worklist.
-      ReplaceInstUsesWith(*I, C);
-      ++NumConstProp;
-      EraseInstFromFunction(*I);
-      MadeIRChange = true;
-      continue;
-    }
-
-    if (TD) {
-      // See if we can constant fold its operands.
-      for (User::op_iterator i = I->op_begin(), e = I->op_end(); i != e; ++i)
-        if (ConstantExpr *CE = dyn_cast<ConstantExpr>(i))
-          if (Constant *NewC = ConstantFoldConstantExpression(CE,   
-                                  F.getContext(), TD))
-            if (NewC != CE) {
-              i->set(NewC);
-              MadeIRChange = true;
-            }
-    }
+        // Add operands to the worklist.
+        ReplaceInstUsesWith(*I, C);
+        ++NumConstProp;
+        EraseInstFromFunction(*I);
+        MadeIRChange = true;
+        continue;
+      }
 
     // See if we can trivially sink this instruction to a successor basic block.
     if (I->hasOneUse()) {
       BasicBlock *BB = I->getParent();
-      BasicBlock *UserParent = cast<Instruction>(I->use_back())->getParent();
+      Instruction *UserInst = cast<Instruction>(I->use_back());
+      BasicBlock *UserParent;
+      
+      // Get the block the use occurs in.
+      if (PHINode *PN = dyn_cast<PHINode>(UserInst))
+        UserParent = PN->getIncomingBlock(I->use_begin().getUse());
+      else
+        UserParent = UserInst->getParent();
+      
       if (UserParent != BB) {
         bool UserIsSuccessor = false;
         // See if the user is one of our successors.
@@ -12840,8 +13334,7 @@ bool InstCombiner::DoOneIteration(Function &F, unsigned Iteration) {
         // If the user is one of our immediate successors, and if that successor
         // only has us as a predecessor (we'd have to split the critical edge
         // otherwise), we can keep going.
-        if (UserIsSuccessor && !isa<PHINode>(I->use_back()) &&
-            next(pred_begin(UserParent)) == pred_end(UserParent))
+        if (UserIsSuccessor && UserParent->getSinglePredecessor())
           // Okay, the CFG is simple enough, try to sink this instruction.
           MadeIRChange |= TryToSinkInstruction(I, UserParent);
       }
@@ -12854,7 +13347,8 @@ bool InstCombiner::DoOneIteration(Function &F, unsigned Iteration) {
     std::string OrigI;
 #endif
     DEBUG(raw_string_ostream SS(OrigI); I->print(SS); OrigI = SS.str(););
-    
+    DEBUG(errs() << "IC: Visiting: " << OrigI << '\n');
+
     if (Instruction *Result = visit(*I)) {
       ++NumCombined;
       // Should we replace the old instruction with a new one?
@@ -12910,12 +13404,13 @@ bool InstCombiner::DoOneIteration(Function &F, unsigned Iteration) {
 bool InstCombiner::runOnFunction(Function &F) {
   MustPreserveLCSSA = mustPreserveAnalysisID(LCSSAID);
   Context = &F.getContext();
-  
+  TD = getAnalysisIfAvailable<TargetData>();
+
   
   /// Builder - This is an IRBuilder that automatically inserts new
   /// instructions into the worklist when they are created.
-  IRBuilder<true, ConstantFolder, InstCombineIRInserter> 
-    TheBuilder(F.getContext(), ConstantFolder(F.getContext()),
+  IRBuilder<true, TargetFolder, InstCombineIRInserter> 
+    TheBuilder(F.getContext(), TargetFolder(TD),
                InstCombineIRInserter(Worklist));
   Builder = &TheBuilder;
   
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp
index 21b6ceb..5864113 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp
@@ -16,9 +16,11 @@
 #include "llvm/IntrinsicInst.h"
 #include "llvm/LLVMContext.h"
 #include "llvm/Pass.h"
-#include "llvm/Analysis/ConstantFolding.h"
+#include "llvm/Analysis/InstructionSimplify.h"
+#include "llvm/Analysis/LazyValueInfo.h"
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
 #include "llvm/Transforms/Utils/Local.h"
+#include "llvm/Transforms/Utils/SSAUpdater.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/Statistic.h"
@@ -27,18 +29,24 @@
 #include "llvm/ADT/SmallSet.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
-#include "llvm/Support/ValueHandle.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
 STATISTIC(NumThreads, "Number of jumps threaded");
 STATISTIC(NumFolds,   "Number of terminators folded");
+STATISTIC(NumDupes,   "Number of branch blocks duplicated to eliminate phi");
 
 static cl::opt<unsigned>
 Threshold("jump-threading-threshold", 
           cl::desc("Max block size to duplicate for jump threading"),
           cl::init(6), cl::Hidden);
 
+// Turn on use of LazyValueInfo.
+static cl::opt<bool>
+EnableLVI("enable-jump-threading-lvi", cl::ReallyHidden);
+
+
+
 namespace {
   /// This pass performs 'jump threading', which looks at blocks that have
   /// multiple predecessors and multiple successors.  If one or more of the
@@ -58,6 +66,7 @@ namespace {
   ///
   class JumpThreading : public FunctionPass {
     TargetData *TD;
+    LazyValueInfo *LVI;
 #ifdef NDEBUG
     SmallPtrSet<BasicBlock*, 16> LoopHeaders;
 #else
@@ -67,22 +76,32 @@ namespace {
     static char ID; // Pass identification
     JumpThreading() : FunctionPass(&ID) {}
 
+    bool runOnFunction(Function &F);
+    
     virtual void getAnalysisUsage(AnalysisUsage &AU) const {
+      if (EnableLVI)
+        AU.addRequired<LazyValueInfo>();
     }
-
-    bool runOnFunction(Function &F);
-    void FindLoopHeaders(Function &F);
     
+    void FindLoopHeaders(Function &F);
     bool ProcessBlock(BasicBlock *BB);
-    bool ThreadEdge(BasicBlock *BB, BasicBlock *PredBB, BasicBlock *SuccBB,
-                    unsigned JumpThreadCost);
-    BasicBlock *FactorCommonPHIPreds(PHINode *PN, Value *Val);
+    bool ThreadEdge(BasicBlock *BB, const SmallVectorImpl<BasicBlock*> &PredBBs,
+                    BasicBlock *SuccBB);
+    bool DuplicateCondBranchOnPHIIntoPred(BasicBlock *BB,
+                                          BasicBlock *PredBB);
+    
+    typedef SmallVectorImpl<std::pair<ConstantInt*,
+                                      BasicBlock*> > PredValueInfo;
+    
+    bool ComputeValueKnownInPredecessors(Value *V, BasicBlock *BB,
+                                         PredValueInfo &Result);
+    bool ProcessThreadableEdges(Value *Cond, BasicBlock *BB);
+    
+    
     bool ProcessBranchOnDuplicateCond(BasicBlock *PredBB, BasicBlock *DestBB);
     bool ProcessSwitchOnDuplicateCond(BasicBlock *PredBB, BasicBlock *DestBB);
 
     bool ProcessJumpOnPHI(PHINode *PN);
-    bool ProcessBranchOnLogical(Value *V, BasicBlock *BB, bool isAnd);
-    bool ProcessBranchOnCompare(CmpInst *Cmp, BasicBlock *BB);
     
     bool SimplifyPartiallyRedundantLoad(LoadInst *LI);
   };
@@ -100,6 +119,7 @@ FunctionPass *llvm::createJumpThreadingPass() { return new JumpThreading(); }
 bool JumpThreading::runOnFunction(Function &F) {
   DEBUG(errs() << "Jump threading on function '" << F.getName() << "'\n");
   TD = getAnalysisIfAvailable<TargetData>();
+  LVI = EnableLVI ? &getAnalysis<LazyValueInfo>() : 0;
   
   FindLoopHeaders(F);
   
@@ -109,6 +129,7 @@ bool JumpThreading::runOnFunction(Function &F) {
     bool Changed = false;
     for (Function::iterator I = F.begin(), E = F.end(); I != E;) {
       BasicBlock *BB = I;
+      // Thread all of the branches we can over this block. 
       while (ProcessBlock(BB))
         Changed = true;
       
@@ -119,10 +140,33 @@ bool JumpThreading::runOnFunction(Function &F) {
       if (pred_begin(BB) == pred_end(BB) &&
           BB != &BB->getParent()->getEntryBlock()) {
         DEBUG(errs() << "  JT: Deleting dead block '" << BB->getName()
-              << "' with terminator: " << *BB->getTerminator());
+              << "' with terminator: " << *BB->getTerminator() << '\n');
         LoopHeaders.erase(BB);
         DeleteDeadBlock(BB);
         Changed = true;
+      } else if (BranchInst *BI = dyn_cast<BranchInst>(BB->getTerminator())) {
+        // Can't thread an unconditional jump, but if the block is "almost
+        // empty", we can replace uses of it with uses of the successor and make
+        // this dead.
+        if (BI->isUnconditional() && 
+            BB != &BB->getParent()->getEntryBlock()) {
+          BasicBlock::iterator BBI = BB->getFirstNonPHI();
+          // Ignore dbg intrinsics.
+          while (isa<DbgInfoIntrinsic>(BBI))
+            ++BBI;
+          // If the terminator is the only non-phi instruction, try to nuke it.
+          if (BBI->isTerminator()) {
+            // Since TryToSimplifyUncondBranchFromEmptyBlock may delete the
+            // block, we have to make sure it isn't in the LoopHeaders set.  We
+            // reinsert afterward in the rare case when the block isn't deleted.
+            bool ErasedFromLoopHeaders = LoopHeaders.erase(BB);
+            
+            if (TryToSimplifyUncondBranchFromEmptyBlock(BB))
+              Changed = true;
+            else if (ErasedFromLoopHeaders)
+              LoopHeaders.insert(BB);
+          }
+        }
       }
     }
     AnotherIteration = Changed;
@@ -133,59 +177,16 @@ bool JumpThreading::runOnFunction(Function &F) {
   return EverChanged;
 }
 
-/// FindLoopHeaders - We do not want jump threading to turn proper loop
-/// structures into irreducible loops.  Doing this breaks up the loop nesting
-/// hierarchy and pessimizes later transformations.  To prevent this from
-/// happening, we first have to find the loop headers.  Here we approximate this
-/// by finding targets of backedges in the CFG.
-///
-/// Note that there definitely are cases when we want to allow threading of
-/// edges across a loop header.  For example, threading a jump from outside the
-/// loop (the preheader) to an exit block of the loop is definitely profitable.
-/// It is also almost always profitable to thread backedges from within the loop
-/// to exit blocks, and is often profitable to thread backedges to other blocks
-/// within the loop (forming a nested loop).  This simple analysis is not rich
-/// enough to track all of these properties and keep it up-to-date as the CFG
-/// mutates, so we don't allow any of these transformations.
-///
-void JumpThreading::FindLoopHeaders(Function &F) {
-  SmallVector<std::pair<const BasicBlock*,const BasicBlock*>, 32> Edges;
-  FindFunctionBackedges(F, Edges);
-  
-  for (unsigned i = 0, e = Edges.size(); i != e; ++i)
-    LoopHeaders.insert(const_cast<BasicBlock*>(Edges[i].second));
-}
-
-
-/// FactorCommonPHIPreds - If there are multiple preds with the same incoming
-/// value for the PHI, factor them together so we get one block to thread for
-/// the whole group.
-/// This is important for things like "phi i1 [true, true, false, true, x]"
-/// where we only need to clone the block for the true blocks once.
-///
-BasicBlock *JumpThreading::FactorCommonPHIPreds(PHINode *PN, Value *Val) {
-  SmallVector<BasicBlock*, 16> CommonPreds;
-  for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i)
-    if (PN->getIncomingValue(i) == Val)
-      CommonPreds.push_back(PN->getIncomingBlock(i));
-  
-  if (CommonPreds.size() == 1)
-    return CommonPreds[0];
-    
-  DEBUG(errs() << "  Factoring out " << CommonPreds.size()
-        << " common predecessors.\n");
-  return SplitBlockPredecessors(PN->getParent(),
-                                &CommonPreds[0], CommonPreds.size(),
-                                ".thr_comm", this);
-}
-  
-
 /// getJumpThreadDuplicationCost - Return the cost of duplicating this block to
 /// thread across it.
 static unsigned getJumpThreadDuplicationCost(const BasicBlock *BB) {
   /// Ignore PHI nodes, these will be flattened when duplication happens.
   BasicBlock::const_iterator I = BB->getFirstNonPHI();
-
+  
+  // FIXME: THREADING will delete values that are just used to compute the
+  // branch, so they shouldn't count against the duplication cost.
+  
+  
   // Sum up the cost of each instruction until we get to the terminator.  Don't
   // include the terminator because the copy won't include it.
   unsigned Size = 0;
@@ -220,6 +221,227 @@ static unsigned getJumpThreadDuplicationCost(const BasicBlock *BB) {
   return Size;
 }
 
+/// FindLoopHeaders - We do not want jump threading to turn proper loop
+/// structures into irreducible loops.  Doing this breaks up the loop nesting
+/// hierarchy and pessimizes later transformations.  To prevent this from
+/// happening, we first have to find the loop headers.  Here we approximate this
+/// by finding targets of backedges in the CFG.
+///
+/// Note that there definitely are cases when we want to allow threading of
+/// edges across a loop header.  For example, threading a jump from outside the
+/// loop (the preheader) to an exit block of the loop is definitely profitable.
+/// It is also almost always profitable to thread backedges from within the loop
+/// to exit blocks, and is often profitable to thread backedges to other blocks
+/// within the loop (forming a nested loop).  This simple analysis is not rich
+/// enough to track all of these properties and keep it up-to-date as the CFG
+/// mutates, so we don't allow any of these transformations.
+///
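+/// For example (a hypothetical CFG):
+///   entry:  br label %header
+///   header: br i1 %c, label %body, label %exit
+///   body:   br label %header   ; backedge, so 'header' is recorded
+///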
+void JumpThreading::FindLoopHeaders(Function &F) {
+  SmallVector<std::pair<const BasicBlock*,const BasicBlock*>, 32> Edges;
+  FindFunctionBackedges(F, Edges);
+  
+  for (unsigned i = 0, e = Edges.size(); i != e; ++i)
+    LoopHeaders.insert(const_cast<BasicBlock*>(Edges[i].second));
+}
+
+/// ComputeValueKnownInPredecessors - Given a basic block BB and a value V, see
+/// if we can infer that the value is a known ConstantInt in any of our
+/// predecessors.  If so, return the list of known (value, pred BB) pairs in the
+/// result vector.  If a value is known to be undef, it is returned as null.
+///
+/// This returns true if there were any known values.
+///
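+/// For example (hypothetical IR), if BB begins with
+///   %p = phi i1 [ true, %pred1 ], [ %x, %pred2 ]
+/// then querying %p yields the pair (true, %pred1): the value is known only
+/// on the edge from %pred1.
+///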
+bool JumpThreading::
+ComputeValueKnownInPredecessors(Value *V, BasicBlock *BB,PredValueInfo &Result){
+  // If V is a ConstantInt or undef, then it is known in all predecessors.
+  if (isa<ConstantInt>(V) || isa<UndefValue>(V)) {
+    ConstantInt *CI = dyn_cast<ConstantInt>(V);
+    
+    for (pred_iterator PI = pred_begin(BB), E = pred_end(BB); PI != E; ++PI)
+      Result.push_back(std::make_pair(CI, *PI));
+    return true;
+  }
+  
+  // If V is a non-instruction value, or an instruction in a different block,
+  // then it can't be derived from a PHI.
+  Instruction *I = dyn_cast<Instruction>(V);
+  if (I == 0 || I->getParent() != BB) {
+    
+    // Okay, if this is a live-in value, see if it has a known value at the end
+    // of any of our predecessors.
+    //
+    // FIXME: This should be an edge property, not a block end property.
+    // TODO: Per PR2563, we could infer value range information about a
+    // predecessor based on its terminator.
+    //
+    if (LVI) {
+      // FIXME: change this to use the richer 'getPredicateOnEdge' method if
+      // "I" is a non-local compare-with-a-constant instruction.  This would be
+      // able to handle value inequalities better, for example if the compare is
+      // "X < 4" and "X < 3" is known true but "X < 4" itself is not available.
+      // Perhaps getConstantOnEdge should be smart enough to do this?
+      
+      for (pred_iterator PI = pred_begin(BB), E = pred_end(BB); PI != E; ++PI) {
+        // If the value is known by LazyValueInfo to be a constant in a
+        // predecessor, use that information to try to thread this block.
+        Constant *PredCst = LVI->getConstantOnEdge(V, *PI, BB);
+        if (PredCst == 0 ||
+            (!isa<ConstantInt>(PredCst) && !isa<UndefValue>(PredCst)))
+          continue;
+        
+        Result.push_back(std::make_pair(dyn_cast<ConstantInt>(PredCst), *PI));
+      }
+      
+      return !Result.empty();
+    }
+    
+    return false;
+  }
+  
+  // If I is a PHI node, then we know the incoming values for any constants.
+  if (PHINode *PN = dyn_cast<PHINode>(I)) {
+    for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
+      Value *InVal = PN->getIncomingValue(i);
+      if (isa<ConstantInt>(InVal) || isa<UndefValue>(InVal)) {
+        ConstantInt *CI = dyn_cast<ConstantInt>(InVal);
+        Result.push_back(std::make_pair(CI, PN->getIncomingBlock(i)));
+      }
+    }
+    return !Result.empty();
+  }
+  
+  SmallVector<std::pair<ConstantInt*, BasicBlock*>, 8> LHSVals, RHSVals;
+
+  // Handle some boolean conditions.
+  if (I->getType()->getPrimitiveSizeInBits() == 1) { 
+    // X | true -> true
+    // X & false -> false
+    if (I->getOpcode() == Instruction::Or ||
+        I->getOpcode() == Instruction::And) {
+      ComputeValueKnownInPredecessors(I->getOperand(0), BB, LHSVals);
+      ComputeValueKnownInPredecessors(I->getOperand(1), BB, RHSVals);
+      
+      if (LHSVals.empty() && RHSVals.empty())
+        return false;
+      
+      ConstantInt *InterestingVal;
+      if (I->getOpcode() == Instruction::Or)
+        InterestingVal = ConstantInt::getTrue(I->getContext());
+      else
+        InterestingVal = ConstantInt::getFalse(I->getContext());
+      
+      // Scan for entries equal to the interesting value (or undef, kept as null).
+      for (unsigned i = 0, e = LHSVals.size(); i != e; ++i)
+        if (LHSVals[i].first == InterestingVal || LHSVals[i].first == 0)
+          Result.push_back(LHSVals[i]);
+      for (unsigned i = 0, e = RHSVals.size(); i != e; ++i)
+        if (RHSVals[i].first == InterestingVal || RHSVals[i].first == 0)
+          Result.push_back(RHSVals[i]);
+      return !Result.empty();
+    }
+    
+    // Handle the NOT form of XOR.
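+    // For example (hypothetical): for '%n = xor i1 %p, true' where %p is
+    // known true in %pred1, we record %n as false in %pred1.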
+    if (I->getOpcode() == Instruction::Xor &&
+        isa<ConstantInt>(I->getOperand(1)) &&
+        cast<ConstantInt>(I->getOperand(1))->isOne()) {
+      ComputeValueKnownInPredecessors(I->getOperand(0), BB, Result);
+      if (Result.empty())
+        return false;
+
+      // Invert the known values.
+      for (unsigned i = 0, e = Result.size(); i != e; ++i)
+        if (Result[i].first)
+          Result[i].first =
+            cast<ConstantInt>(ConstantExpr::getNot(Result[i].first));
+      return true;
+    }
+  }
+  
+  // Handle compare with phi operand, where the PHI is defined in this block.
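+  // For example (hypothetical): for '%r = icmp eq i32 %p, 42' with
+  //   %p = phi i32 [ 42, %pred1 ], [ %x, %pred2 ]
+  // the compare folds to true on the edge from %pred1.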
+  if (CmpInst *Cmp = dyn_cast<CmpInst>(I)) {
+    PHINode *PN = dyn_cast<PHINode>(Cmp->getOperand(0));
+    if (PN && PN->getParent() == BB) {
+      // We can do this simplification if any comparisons fold to true or false.
+      // See if any do.
+      for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
+        BasicBlock *PredBB = PN->getIncomingBlock(i);
+        Value *LHS = PN->getIncomingValue(i);
+        Value *RHS = Cmp->getOperand(1)->DoPHITranslation(BB, PredBB);
+        
+        Value *Res = SimplifyCmpInst(Cmp->getPredicate(), LHS, RHS, TD);
+        if (Res == 0) {
+          if (!LVI || !isa<Constant>(RHS))
+            continue;
+          
+          LazyValueInfo::Tristate 
+            ResT = LVI->getPredicateOnEdge(Cmp->getPredicate(), LHS,
+                                           cast<Constant>(RHS), PredBB, BB);
+          if (ResT == LazyValueInfo::Unknown)
+            continue;
+          Res = ConstantInt::get(Type::getInt1Ty(LHS->getContext()), ResT);
+        }
+        
+        if (isa<UndefValue>(Res))
+          Result.push_back(std::make_pair((ConstantInt*)0, PredBB));
+        else if (ConstantInt *CI = dyn_cast<ConstantInt>(Res))
+          Result.push_back(std::make_pair(CI, PredBB));
+      }
+      
+      return !Result.empty();
+    }
+    
+    
+    // If comparing a live-in value against a constant, see if we know the
+    // live-in value on any predecessors.
+    if (LVI && isa<Constant>(Cmp->getOperand(1)) &&
+        Cmp->getType()->isInteger() && // Not vector compare.
+        (!isa<Instruction>(Cmp->getOperand(0)) ||
+         cast<Instruction>(Cmp->getOperand(0))->getParent() != BB)) {
+      Constant *RHSCst = cast<Constant>(Cmp->getOperand(1));
+      
+      for (pred_iterator PI = pred_begin(BB), E = pred_end(BB); PI != E; ++PI) {
+        // If the value is known by LazyValueInfo to be a constant in a
+        // predecessor, use that information to try to thread this block.
+        LazyValueInfo::Tristate
+          Res = LVI->getPredicateOnEdge(Cmp->getPredicate(), Cmp->getOperand(0),
+                                        RHSCst, *PI, BB);
+        if (Res == LazyValueInfo::Unknown)
+          continue;
+
+        Constant *ResC = ConstantInt::get(Cmp->getType(), Res);
+        Result.push_back(std::make_pair(cast<ConstantInt>(ResC), *PI));
+      }
+      
+      return !Result.empty();
+    }
+  }
+  return false;
+}
+
+
+
+/// GetBestDestForJumpOnUndef - If we determine that the specified block ends
+/// in an undefined jump, decide which block is best to revector to.
+///
+/// Since we can pick an arbitrary destination, we pick the successor with the
+/// fewest predecessors.  This should reduce the in-degree of the others.
+///
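+/// For example (hypothetical): if the block ends in 'br i1 undef, label %a,
+/// label %b' where %a has three predecessors and %b has one, we return the
+/// index of %b.
+///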
+static unsigned GetBestDestForJumpOnUndef(BasicBlock *BB) {
+  TerminatorInst *BBTerm = BB->getTerminator();
+  unsigned MinSucc = 0;
+  BasicBlock *TestBB = BBTerm->getSuccessor(MinSucc);
+  // Compute the successor with the minimum number of predecessors.
+  unsigned MinNumPreds = std::distance(pred_begin(TestBB), pred_end(TestBB));
+  for (unsigned i = 1, e = BBTerm->getNumSuccessors(); i != e; ++i) {
+    TestBB = BBTerm->getSuccessor(i);
+    unsigned NumPreds = std::distance(pred_begin(TestBB), pred_end(TestBB));
+    if (NumPreds < MinNumPreds) {
+      MinSucc = i;
+      // Track the new minimum so later successors are compared against it.
+      MinNumPreds = NumPreds;
+    }
+  }
+  
+  return MinSucc;
+}
+
 /// ProcessBlock - If there are any predecessors whose control can be threaded
 /// through to a successor, transform them now.
 bool JumpThreading::ProcessBlock(BasicBlock *BB) {
@@ -227,7 +449,7 @@ bool JumpThreading::ProcessBlock(BasicBlock *BB) {
   // successor, merge the blocks.  This encourages recursive jump threading
   // because now the condition in this block can be threaded through
   // predecessors of our predecessor block.
-  if (BasicBlock *SinglePred = BB->getSinglePredecessor())
+  if (BasicBlock *SinglePred = BB->getSinglePredecessor()) {
     if (SinglePred->getTerminator()->getNumSuccessors() == 1 &&
         SinglePred != BB) {
       // If SinglePred was a loop header, BB becomes one.
@@ -243,10 +465,10 @@ bool JumpThreading::ProcessBlock(BasicBlock *BB) {
         BB->moveBefore(&BB->getParent()->getEntryBlock());
       return true;
     }
-  
-  // See if this block ends with a branch or switch.  If so, see if the
-  // condition is a phi node.  If so, and if an entry of the phi node is a
-  // constant, we can thread the block.
+  }
+
+  // Look to see if the terminator is a branch or switch; if not, we can't
+  // thread it.
   Value *Condition;
   if (BranchInst *BI = dyn_cast<BranchInst>(BB->getTerminator())) {
     // Can't thread an unconditional jump.
@@ -262,38 +484,27 @@ bool JumpThreading::ProcessBlock(BasicBlock *BB) {
   // other blocks.
   if (isa<ConstantInt>(Condition)) {
     DEBUG(errs() << "  In block '" << BB->getName()
-          << "' folding terminator: " << *BB->getTerminator());
+          << "' folding terminator: " << *BB->getTerminator() << '\n');
     ++NumFolds;
     ConstantFoldTerminator(BB);
     return true;
   }
   
   // If the terminator is branching on an undef, we can pick any of the
-  // successors to branch to.  Since this is arbitrary, we pick the successor
-  // with the fewest predecessors.  This should reduce the in-degree of the
-  // others.
+  // successors to branch to.  Let GetBestDestForJumpOnUndef decide.
   if (isa<UndefValue>(Condition)) {
-    TerminatorInst *BBTerm = BB->getTerminator();
-    unsigned MinSucc = 0;
-    BasicBlock *TestBB = BBTerm->getSuccessor(MinSucc);
-    // Compute the successor with the minimum number of predecessors.
-    unsigned MinNumPreds = std::distance(pred_begin(TestBB), pred_end(TestBB));
-    for (unsigned i = 1, e = BBTerm->getNumSuccessors(); i != e; ++i) {
-      TestBB = BBTerm->getSuccessor(i);
-      unsigned NumPreds = std::distance(pred_begin(TestBB), pred_end(TestBB));
-      if (NumPreds < MinNumPreds)
-        MinSucc = i;
-    }
+    unsigned BestSucc = GetBestDestForJumpOnUndef(BB);
     
     // Fold the branch/switch.
+    TerminatorInst *BBTerm = BB->getTerminator();
     for (unsigned i = 0, e = BBTerm->getNumSuccessors(); i != e; ++i) {
-      if (i == MinSucc) continue;
-      BBTerm->getSuccessor(i)->removePredecessor(BB);
+      if (i == BestSucc) continue;
+      RemovePredecessorAndSimplify(BBTerm->getSuccessor(i), BB, TD);
     }
     
     DEBUG(errs() << "  In block '" << BB->getName()
-          << "' folding undef terminator: " << *BBTerm);
-    BranchInst::Create(BBTerm->getSuccessor(MinSucc), BBTerm);
+          << "' folding undef terminator: " << *BBTerm << '\n');
+    BranchInst::Create(BBTerm->getSuccessor(BestSucc), BBTerm);
     BBTerm->eraseFromParent();
     return true;
   }
@@ -305,7 +516,8 @@ bool JumpThreading::ProcessBlock(BasicBlock *BB) {
   //     br COND, BBX, BBY
   //  BBX:
   //     br COND, BBZ, BBW
-  if (!Condition->hasOneUse() && // Multiple uses.
+  if (!LVI &&
+      !Condition->hasOneUse() && // Multiple uses.
       (CondInst == 0 || CondInst->getParent() != BB)) { // Non-local definition.
     pred_iterator PI = pred_begin(BB), E = pred_end(BB);
     if (isa<BranchInst>(BB->getTerminator())) {
@@ -325,52 +537,40 @@ bool JumpThreading::ProcessBlock(BasicBlock *BB) {
   }
 
   // All the rest of our checks depend on the condition being an instruction.
-  if (CondInst == 0)
+  if (CondInst == 0) {
+    // FIXME: Unify this with code below.
+    if (LVI && ProcessThreadableEdges(Condition, BB))
+      return true;
     return false;
+  }  
+    
   
   // See if this is a phi node in the current block.
   if (PHINode *PN = dyn_cast<PHINode>(CondInst))
     if (PN->getParent() == BB)
       return ProcessJumpOnPHI(PN);
   
-  // If this is a conditional branch whose condition is and/or of a phi, try to
-  // simplify it.
-  if ((CondInst->getOpcode() == Instruction::And || 
-       CondInst->getOpcode() == Instruction::Or) &&
-      isa<BranchInst>(BB->getTerminator()) &&
-      ProcessBranchOnLogical(CondInst, BB,
-                             CondInst->getOpcode() == Instruction::And))
-    return true;
-  
   if (CmpInst *CondCmp = dyn_cast<CmpInst>(CondInst)) {
-    if (isa<PHINode>(CondCmp->getOperand(0))) {
-      // If we have "br (phi != 42)" and the phi node has any constant values
-      // as operands, we can thread through this block.
-      // 
-      // If we have "br (cmp phi, x)" and the phi node contains x such that the
-      // comparison uniquely identifies the branch target, we can thread
-      // through this block.
-
-      if (ProcessBranchOnCompare(CondCmp, BB))
-        return true;      
-    }
-    
-    // If we have a comparison, loop over the predecessors to see if there is
-    // a condition with the same value.
-    pred_iterator PI = pred_begin(BB), E = pred_end(BB);
-    for (; PI != E; ++PI)
-      if (BranchInst *PBI = dyn_cast<BranchInst>((*PI)->getTerminator()))
-        if (PBI->isConditional() && *PI != BB) {
-          if (CmpInst *CI = dyn_cast<CmpInst>(PBI->getCondition())) {
-            if (CI->getOperand(0) == CondCmp->getOperand(0) &&
-                CI->getOperand(1) == CondCmp->getOperand(1) &&
-                CI->getPredicate() == CondCmp->getPredicate()) {
-              // TODO: Could handle things like (x != 4) --> (x == 17)
-              if (ProcessBranchOnDuplicateCond(*PI, BB))
-                return true;
+    if (!LVI &&
+        (!isa<PHINode>(CondCmp->getOperand(0)) ||
+         cast<PHINode>(CondCmp->getOperand(0))->getParent() != BB)) {
+      // If we have a comparison, loop over the predecessors to see if there is
+      // a condition with a lexically identical value.
+      pred_iterator PI = pred_begin(BB), E = pred_end(BB);
+      for (; PI != E; ++PI)
+        if (BranchInst *PBI = dyn_cast<BranchInst>((*PI)->getTerminator()))
+          if (PBI->isConditional() && *PI != BB) {
+            if (CmpInst *CI = dyn_cast<CmpInst>(PBI->getCondition())) {
+              if (CI->getOperand(0) == CondCmp->getOperand(0) &&
+                  CI->getOperand(1) == CondCmp->getOperand(1) &&
+                  CI->getPredicate() == CondCmp->getPredicate()) {
+                // TODO: Could handle things like (x != 4) --> (x == 17)
+                if (ProcessBranchOnDuplicateCond(*PI, BB))
+                  return true;
+              }
             }
           }
-        }
+    }
   }
 
   // Check for some cases that are worth simplifying.  Right now we want to look
@@ -385,10 +585,21 @@ bool JumpThreading::ProcessBlock(BasicBlock *BB) {
     if (isa<Constant>(CondCmp->getOperand(1)))
       SimplifyValue = CondCmp->getOperand(0);
   
+  // TODO: There are other places where load PRE would be profitable, such as
+  // more complex comparisons.
   if (LoadInst *LI = dyn_cast<LoadInst>(SimplifyValue))
     if (SimplifyPartiallyRedundantLoad(LI))
       return true;
   
+  
+  // Handle a variety of cases where we are branching on something derived from
+  // a PHI node in the current block.  If we can prove that any predecessors
+  // compute a predictable value based on a PHI node, thread those predecessors.
+  //
+  if (ProcessThreadableEdges(CondInst, BB))
+    return true;
+  
+  
   // TODO: If we have: "br (X > 0)"  and we have a predecessor where we know
   // "(X == 4)" thread through this block.
   
@@ -419,7 +630,7 @@ bool JumpThreading::ProcessBranchOnDuplicateCond(BasicBlock *PredBB,
     BranchDir = false;
   else {
     DEBUG(errs() << "  In block '" << PredBB->getName()
-          << "' folding terminator: " << *PredBB->getTerminator());
+          << "' folding terminator: " << *PredBB->getTerminator() << '\n');
     ++NumFolds;
     ConstantFoldTerminator(PredBB);
     return true;
@@ -432,28 +643,25 @@ bool JumpThreading::ProcessBranchOnDuplicateCond(BasicBlock *PredBB,
   if (BB->getSinglePredecessor()) {
     DEBUG(errs() << "  In block '" << BB->getName()
           << "' folding condition to '" << BranchDir << "': "
-          << *BB->getTerminator());
+          << *BB->getTerminator() << '\n');
     ++NumFolds;
+    Value *OldCond = DestBI->getCondition();
     DestBI->setCondition(ConstantInt::get(Type::getInt1Ty(BB->getContext()),
                                           BranchDir));
     ConstantFoldTerminator(BB);
+    RecursivelyDeleteTriviallyDeadInstructions(OldCond);
     return true;
   }
-  
-  // Otherwise we need to thread from PredBB to DestBB's successor which
-  // involves code duplication.  Check to see if it is worth it.
-  unsigned JumpThreadCost = getJumpThreadDuplicationCost(BB);
-  if (JumpThreadCost > Threshold) {
-    DEBUG(errs() << "  Not threading BB '" << BB->getName()
-          << "' - Cost is too high: " << JumpThreadCost << "\n");
-    return false;
-  }
+ 
   
   // Next, figure out which successor we are threading to.
   BasicBlock *SuccBB = DestBI->getSuccessor(!BranchDir);
   
+  SmallVector<BasicBlock*, 2> Preds;
+  Preds.push_back(PredBB);
+  
   // Ok, try to thread it!
-  return ThreadEdge(BB, PredBB, SuccBB, JumpThreadCost);
+  return ThreadEdge(BB, Preds, SuccBB);
 }
 
 /// ProcessSwitchOnDuplicateCond - We found a block and a predecessor of that
@@ -472,7 +680,6 @@ bool JumpThreading::ProcessSwitchOnDuplicateCond(BasicBlock *PredBB,
   if (PredBB == DestBB)
     return false;
   
-  
   SwitchInst *PredSI = cast<SwitchInst>(PredBB->getTerminator());
   SwitchInst *DestSI = cast<SwitchInst>(DestBB->getTerminator());
 
@@ -547,7 +754,7 @@ bool JumpThreading::SimplifyPartiallyRedundantLoad(LoadInst *LI) {
   Value *LoadedPtr = LI->getOperand(0);
 
   // If the loaded operand is defined in the LoadBB, it can't be available.
-  // FIXME: Could do PHI translation, that would be fun :)
+  // TODO: Could do simple PHI translation, that would be fun :)
   if (Instruction *PtrOp = dyn_cast<Instruction>(LoadedPtr))
     if (PtrOp->getParent() == LoadBB)
       return false;
@@ -556,8 +763,8 @@ bool JumpThreading::SimplifyPartiallyRedundantLoad(LoadInst *LI) {
   // the entry to its block.
   BasicBlock::iterator BBIt = LI;
 
-  if (Value *AvailableVal = FindAvailableLoadedValue(LoadedPtr, LoadBB, 
-                                                     BBIt, 6)) {
+  if (Value *AvailableVal = 
+        FindAvailableLoadedValue(LoadedPtr, LoadBB, BBIt, 6)) {
     // If the value of the load is locally available within the block, just use
     // it.  This frequently occurs for reg2mem'd allocas.
     //cerr << "LOAD ELIMINATED:\n" << *BBIt << *LI << "\n";
@@ -640,7 +847,7 @@ bool JumpThreading::SimplifyPartiallyRedundantLoad(LoadInst *LI) {
     // Split them out to their own block.
     UnavailablePred =
       SplitBlockPredecessors(LoadBB, &PredsToSplit[0], PredsToSplit.size(),
-                             "thread-split", this);
+                             "thread-pre-split", this);
   }
   
   // If the value isn't available in all predecessors, then there will be
@@ -649,7 +856,8 @@ bool JumpThreading::SimplifyPartiallyRedundantLoad(LoadInst *LI) {
   if (UnavailablePred) {
     assert(UnavailablePred->getTerminator()->getNumSuccessors() == 1 &&
            "Can't handle critical edge here!");
-    Value *NewVal = new LoadInst(LoadedPtr, LI->getName()+".pr",
+    Value *NewVal = new LoadInst(LoadedPtr, LI->getName()+".pr", false,
+                                 LI->getAlignment(),
                                  UnavailablePred->getTerminator());
     AvailablePreds.push_back(std::make_pair(UnavailablePred, NewVal));
   }
@@ -684,204 +892,233 @@ bool JumpThreading::SimplifyPartiallyRedundantLoad(LoadInst *LI) {
   return true;
 }
 
-
-/// ProcessJumpOnPHI - We have a conditional branch of switch on a PHI node in
-/// the current block.  See if there are any simplifications we can do based on
-/// inputs to the phi node.
-/// 
-bool JumpThreading::ProcessJumpOnPHI(PHINode *PN) {
-  // See if the phi node has any constant values.  If so, we can determine where
-  // the corresponding predecessor will branch.
-  ConstantInt *PredCst = 0;
-  for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i)
-    if ((PredCst = dyn_cast<ConstantInt>(PN->getIncomingValue(i))))
-      break;
-  
-  // If no incoming value has a constant, we don't know the destination of any
-  // predecessors.
-  if (PredCst == 0)
-    return false;
-  
-  // See if the cost of duplicating this block is low enough.
-  BasicBlock *BB = PN->getParent();
-  unsigned JumpThreadCost = getJumpThreadDuplicationCost(BB);
-  if (JumpThreadCost > Threshold) {
-    DEBUG(errs() << "  Not threading BB '" << BB->getName()
-          << "' - Cost is too high: " << JumpThreadCost << "\n");
-    return false;
-  }
-  
-  // If so, we can actually do this threading.  Merge any common predecessors
-  // that will act the same.
-  BasicBlock *PredBB = FactorCommonPHIPreds(PN, PredCst);
-  
-  // Next, figure out which successor we are threading to.
-  BasicBlock *SuccBB;
-  if (BranchInst *BI = dyn_cast<BranchInst>(BB->getTerminator()))
-    SuccBB = BI->getSuccessor(PredCst ==
-                                   ConstantInt::getFalse(PredBB->getContext()));
-  else {
-    SwitchInst *SI = cast<SwitchInst>(BB->getTerminator());
-    SuccBB = SI->getSuccessor(SI->findCaseValue(PredCst));
+/// FindMostPopularDest - The specified list contains multiple possible
+/// threadable destinations.  Pick the one that occurs the most frequently in
+/// the list.
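+/// For example (hypothetical): given pairs (p1 -> %a), (p2 -> %b), (p3 -> %a),
+/// %a wins with two occurrences; ties are broken by picking whichever tied
+/// block appears first in BB's successor list.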
+static BasicBlock *
+FindMostPopularDest(BasicBlock *BB,
+                    const SmallVectorImpl<std::pair<BasicBlock*,
+                                  BasicBlock*> > &PredToDestList) {
+  assert(!PredToDestList.empty());
+  
+  // Determine popularity.  If there are multiple possible destinations, we
+  // explicitly choose to ignore 'undef' destinations.  We prefer to thread
+  // blocks with known and real destinations to threading undef.  We'll handle
+  // them later if interesting.
+  DenseMap<BasicBlock*, unsigned> DestPopularity;
+  for (unsigned i = 0, e = PredToDestList.size(); i != e; ++i)
+    if (PredToDestList[i].second)
+      DestPopularity[PredToDestList[i].second]++;
+  
+  // Find the most popular dest.
+  DenseMap<BasicBlock*, unsigned>::iterator DPI = DestPopularity.begin();
+  BasicBlock *MostPopularDest = DPI->first;
+  unsigned Popularity = DPI->second;
+  SmallVector<BasicBlock*, 4> SamePopularity;
+  
+  for (++DPI; DPI != DestPopularity.end(); ++DPI) {
+    // If the popularity of this entry isn't higher than the popularity we've
+    // seen so far, ignore it.
+    if (DPI->second < Popularity)
+      ; // ignore.
+    else if (DPI->second == Popularity) {
+      // If it is the same as what we've seen so far, keep track of it.
+      SamePopularity.push_back(DPI->first);
+    } else {
+      // If it is more popular, remember it.
+      SamePopularity.clear();
+      MostPopularDest = DPI->first;
+      Popularity = DPI->second;
+    }      
   }
   
-  // Ok, try to thread it!
-  return ThreadEdge(BB, PredBB, SuccBB, JumpThreadCost);
-}
-
-/// ProcessJumpOnLogicalPHI - PN's basic block contains a conditional branch
-/// whose condition is an AND/OR where one side is PN.  If PN has constant
-/// operands that permit us to evaluate the condition for some operand, thread
-/// through the block.  For example with:
-///   br (and X, phi(Y, Z, false))
-/// the predecessor corresponding to the 'false' will always jump to the false
-/// destination of the branch.
-///
-bool JumpThreading::ProcessBranchOnLogical(Value *V, BasicBlock *BB,
-                                           bool isAnd) {
-  // If this is a binary operator tree of the same AND/OR opcode, check the
-  // LHS/RHS.
-  if (BinaryOperator *BO = dyn_cast<BinaryOperator>(V))
-    if ((isAnd && BO->getOpcode() == Instruction::And) ||
-        (!isAnd && BO->getOpcode() == Instruction::Or)) {
-      if (ProcessBranchOnLogical(BO->getOperand(0), BB, isAnd))
-        return true;
-      if (ProcessBranchOnLogical(BO->getOperand(1), BB, isAnd))
-        return true;
-    }
+  // Okay, now we know the most popular destination.  If there is more than
+  // one destination, we need to pick one.  This is arbitrary, but we need
+  // to make a deterministic decision.  Pick the first one that appears in the
+  // successor list.
+  if (!SamePopularity.empty()) {
+    SamePopularity.push_back(MostPopularDest);
+    TerminatorInst *TI = BB->getTerminator();
+    for (unsigned i = 0; ; ++i) {
+      assert(i != TI->getNumSuccessors() && "Didn't find any successor!");
       
-  // If this isn't a PHI node, we can't handle it.
-  PHINode *PN = dyn_cast<PHINode>(V);
-  if (!PN || PN->getParent() != BB) return false;
-                                             
-  // We can only do the simplification for phi nodes of 'false' with AND or
-  // 'true' with OR.  See if we have any entries in the phi for this.
-  unsigned PredNo = ~0U;
-  ConstantInt *PredCst = ConstantInt::get(Type::getInt1Ty(BB->getContext()),
-                                          !isAnd);
-  for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
-    if (PN->getIncomingValue(i) == PredCst) {
-      PredNo = i;
+      if (std::find(SamePopularity.begin(), SamePopularity.end(),
+                    TI->getSuccessor(i)) == SamePopularity.end())
+        continue;
+      
+      MostPopularDest = TI->getSuccessor(i);
       break;
     }
   }
   
-  // If no match, bail out.
-  if (PredNo == ~0U)
-    return false;
-  
-  // See if the cost of duplicating this block is low enough.
-  unsigned JumpThreadCost = getJumpThreadDuplicationCost(BB);
-  if (JumpThreadCost > Threshold) {
-    DEBUG(errs() << "  Not threading BB '" << BB->getName()
-          << "' - Cost is too high: " << JumpThreadCost << "\n");
-    return false;
-  }
-
-  // If so, we can actually do this threading.  Merge any common predecessors
-  // that will act the same.
-  BasicBlock *PredBB = FactorCommonPHIPreds(PN, PredCst);
-  
-  // Next, figure out which successor we are threading to.  If this was an AND,
-  // the constant must be FALSE, and we must be targeting the 'false' block.
-  // If this is an OR, the constant must be TRUE, and we must be targeting the
-  // 'true' block.
-  BasicBlock *SuccBB = BB->getTerminator()->getSuccessor(isAnd);
-  
-  // Ok, try to thread it!
-  return ThreadEdge(BB, PredBB, SuccBB, JumpThreadCost);
-}
-
-/// GetResultOfComparison - Given an icmp/fcmp predicate and the left and right
-/// hand sides of the compare instruction, try to determine the result. If the
-/// result can not be determined, a null pointer is returned.
-static Constant *GetResultOfComparison(CmpInst::Predicate pred,
-                                       Value *LHS, Value *RHS,
-                                       LLVMContext &Context) {
-  if (Constant *CLHS = dyn_cast<Constant>(LHS))
-    if (Constant *CRHS = dyn_cast<Constant>(RHS))
-      return ConstantExpr::getCompare(pred, CLHS, CRHS);
-
-  if (LHS == RHS)
-    if (isa<IntegerType>(LHS->getType()) || isa<PointerType>(LHS->getType()))
-      return ICmpInst::isTrueWhenEqual(pred) ? 
-                 ConstantInt::getTrue(Context) : ConstantInt::getFalse(Context);
-
-  return 0;
+  // Okay, we have finally picked the most popular destination.
+  return MostPopularDest;
 }
 
-/// ProcessBranchOnCompare - We found a branch on a comparison between a phi
-/// node and a value.  If we can identify when the comparison is true between
-/// the phi inputs and the value, we can fold the compare for that edge and
-/// thread through it.
-bool JumpThreading::ProcessBranchOnCompare(CmpInst *Cmp, BasicBlock *BB) {
-  PHINode *PN = cast<PHINode>(Cmp->getOperand(0));
-  Value *RHS = Cmp->getOperand(1);
-  
-  // If the phi isn't in the current block, an incoming edge to this block
-  // doesn't control the destination.
-  if (PN->getParent() != BB)
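+/// ProcessThreadableEdges - Given a value 'Cond' that BB branches or switches
+/// on, compute which predecessors know its value, pick the most popular
+/// destination among them, and try to thread those predecessor edges to it.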
+bool JumpThreading::ProcessThreadableEdges(Value *Cond, BasicBlock *BB) {
+  // If threading this would thread across a loop header, don't even try to
+  // thread the edge.
+  if (LoopHeaders.count(BB))
     return false;
   
-  // We can do this simplification if any comparisons fold to true or false.
-  // See if any do.
-  Value *PredVal = 0;
-  bool TrueDirection = false;
-  for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
-    PredVal = PN->getIncomingValue(i);
+  SmallVector<std::pair<ConstantInt*, BasicBlock*>, 8> PredValues;
+  if (!ComputeValueKnownInPredecessors(Cond, BB, PredValues))
+    return false;
+  assert(!PredValues.empty() &&
+         "ComputeValueKnownInPredecessors returned true with no values");
+
+  DEBUG(errs() << "IN BB: " << *BB;
+        for (unsigned i = 0, e = PredValues.size(); i != e; ++i) {
+          errs() << "  BB '" << BB->getName() << "': FOUND condition = ";
+          if (PredValues[i].first)
+            errs() << *PredValues[i].first;
+          else
+            errs() << "UNDEF";
+          errs() << " for pred '" << PredValues[i].second->getName()
+          << "'.\n";
+        });
+  
+  // Decide what we want to thread through.  Convert our list of known values to
+  // a list of known destinations for each pred.  This also discards duplicate
+  // predecessors and keeps track of the undefined inputs (which are represented
+  // as a null dest in the PredToDestList).
+  SmallPtrSet<BasicBlock*, 16> SeenPreds;
+  SmallVector<std::pair<BasicBlock*, BasicBlock*>, 16> PredToDestList;
+  
+  BasicBlock *OnlyDest = 0;
+  BasicBlock *MultipleDestSentinel = (BasicBlock*)(intptr_t)~0ULL;
+  
+  for (unsigned i = 0, e = PredValues.size(); i != e; ++i) {
+    BasicBlock *Pred = PredValues[i].second;
+    if (!SeenPreds.insert(Pred))
+      continue;  // Duplicate predecessor entry.
     
-    Constant *Res = GetResultOfComparison(Cmp->getPredicate(), PredVal,
-                                          RHS, Cmp->getContext());
-    if (!Res) {
-      PredVal = 0;
+    // If the predecessor ends with an indirect goto, we can't change its
+    // destination.
+    if (isa<IndirectBrInst>(Pred->getTerminator()))
       continue;
-    }
     
-    // If this folded to a constant expr, we can't do anything.
-    if (ConstantInt *ResC = dyn_cast<ConstantInt>(Res)) {
-      TrueDirection = ResC->getZExtValue();
-      break;
-    }
-    // If this folded to undef, just go the false way.
-    if (isa<UndefValue>(Res)) {
-      TrueDirection = false;
-      break;
+    ConstantInt *Val = PredValues[i].first;
+    
+    BasicBlock *DestBB;
+    if (Val == 0)      // Undef.
+      DestBB = 0;
+    else if (BranchInst *BI = dyn_cast<BranchInst>(BB->getTerminator()))
+      DestBB = BI->getSuccessor(Val->isZero());
+    else {
+      SwitchInst *SI = cast<SwitchInst>(BB->getTerminator());
+      DestBB = SI->getSuccessor(SI->findCaseValue(Val));
     }
+
+    // If we have exactly one destination, remember it for efficiency below.
+    if (i == 0)
+      OnlyDest = DestBB;
+    else if (OnlyDest != DestBB)
+      OnlyDest = MultipleDestSentinel;
     
-    // Otherwise, we can't fold this input.
-    PredVal = 0;
+    PredToDestList.push_back(std::make_pair(Pred, DestBB));
   }
   
-  // If no match, bail out.
-  if (PredVal == 0)
-    return false;
-  
-  // See if the cost of duplicating this block is low enough.
-  unsigned JumpThreadCost = getJumpThreadDuplicationCost(BB);
-  if (JumpThreadCost > Threshold) {
-    DEBUG(errs() << "  Not threading BB '" << BB->getName()
-          << "' - Cost is too high: " << JumpThreadCost << "\n");
+  // If all edges were unthreadable, we fail.
+  if (PredToDestList.empty())
     return false;
-  }
   
-  // If so, we can actually do this threading.  Merge any common predecessors
-  // that will act the same.
-  BasicBlock *PredBB = FactorCommonPHIPreds(PN, PredVal);
+  // Determine which is the most common successor.  If we have many inputs and
+  // this block is a switch, we want to thread the batch that goes to the
+  // most popular destination first.  If we only know about one
+  // threadable destination (the common case) we can avoid this.
+  BasicBlock *MostPopularDest = OnlyDest;
+  
+  if (MostPopularDest == MultipleDestSentinel)
+    MostPopularDest = FindMostPopularDest(BB, PredToDestList);
+  
+  // Now that we know what the most popular destination is, factor all
+  // predecessors that will jump to it into a single predecessor.
+  SmallVector<BasicBlock*, 16> PredsToFactor;
+  for (unsigned i = 0, e = PredToDestList.size(); i != e; ++i)
+    if (PredToDestList[i].second == MostPopularDest) {
+      BasicBlock *Pred = PredToDestList[i].first;
+      
+      // This predecessor may be a switch or something else that has multiple
+      // edges to the block.  Factor each of these edges by listing them
+      // according to # occurrences in PredsToFactor.
+      TerminatorInst *PredTI = Pred->getTerminator();
+      for (unsigned i = 0, e = PredTI->getNumSuccessors(); i != e; ++i)
+        if (PredTI->getSuccessor(i) == BB)
+          PredsToFactor.push_back(Pred);
+    }
+
+  // If the threadable edges are branching on an undefined value, we get to pick
+  // the destination that these predecessors should get to.
+  if (MostPopularDest == 0)
+    MostPopularDest = BB->getTerminator()->
+                            getSuccessor(GetBestDestForJumpOnUndef(BB));
+        
+  // Ok, try to thread it!
+  return ThreadEdge(BB, PredsToFactor, MostPopularDest);
+}
+
+/// ProcessJumpOnPHI - We have a conditional branch or switch on a PHI node in
+/// the current block.  See if there are any simplifications we can do based on
+/// inputs to the phi node.
+/// 
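+/// For example (hypothetical): if %pred ends in 'br label %BB' and BB is
+///   %p = phi i1 [ %c, %pred ], [ true, %other ]
+///   br i1 %p, label %T, label %F
+/// duplicating BB into %pred leaves it branching directly on %c.
+///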
+bool JumpThreading::ProcessJumpOnPHI(PHINode *PN) {
+  BasicBlock *BB = PN->getParent();
   
-  // Next, get our successor.
-  BasicBlock *SuccBB = BB->getTerminator()->getSuccessor(!TrueDirection);
+  // If any of the predecessor blocks end in an unconditional branch, we can
+  // *duplicate* the jump into that block in order to further encourage jump
+  // threading and to eliminate cases where we have branch on a phi of an icmp
+  // (branch on icmp is much better).
+
+  // We don't want to do this transformation for switches, because we don't
+  // really want to duplicate a switch.
+  if (isa<SwitchInst>(BB->getTerminator()))
+    return false;
   
-  // Ok, try to thread it!
-  return ThreadEdge(BB, PredBB, SuccBB, JumpThreadCost);
+  // Look for unconditional branch predecessors.
+  for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
+    BasicBlock *PredBB = PN->getIncomingBlock(i);
+    if (BranchInst *PredBr = dyn_cast<BranchInst>(PredBB->getTerminator()))
+      if (PredBr->isUnconditional() &&
+          // Try to duplicate BB into PredBB.
+          DuplicateCondBranchOnPHIIntoPred(BB, PredBB))
+        return true;
+  }
+
+  return false;
 }
 
 
-/// ThreadEdge - We have decided that it is safe and profitable to thread an
-/// edge from PredBB to SuccBB across BB.  Transform the IR to reflect this
-/// change.
-bool JumpThreading::ThreadEdge(BasicBlock *BB, BasicBlock *PredBB, 
-                               BasicBlock *SuccBB, unsigned JumpThreadCost) {
+/// AddPHINodeEntriesForMappedBlock - We're adding 'NewPred' as a new
+/// predecessor to the PHIBB block.  If it has PHI nodes, add entries for
+/// NewPred using the entries from OldPred (suitably mapped).
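+///
+/// For example (hypothetical): if PHIBB contains '%v = phi i32 [ %x, %OldPred ]'
+/// and ValueMap maps %x to its clone %x.clone, we add the entry
+/// [ %x.clone, %NewPred ].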
+static void AddPHINodeEntriesForMappedBlock(BasicBlock *PHIBB,
+                                            BasicBlock *OldPred,
+                                            BasicBlock *NewPred,
+                                     DenseMap<Instruction*, Value*> &ValueMap) {
+  for (BasicBlock::iterator PNI = PHIBB->begin();
+       PHINode *PN = dyn_cast<PHINode>(PNI); ++PNI) {
+    // Ok, we have a PHI node.  Figure out what the incoming value was for the
+    // DestBlock.
+    Value *IV = PN->getIncomingValueForBlock(OldPred);
+    
+    // Remap the value if necessary.
+    if (Instruction *Inst = dyn_cast<Instruction>(IV)) {
+      DenseMap<Instruction*, Value*>::iterator I = ValueMap.find(Inst);
+      if (I != ValueMap.end())
+        IV = I->second;
+    }
+    
+    PN->addIncoming(IV, NewPred);
+  }
+}
 
+/// ThreadEdge - We have decided that it is safe and profitable to factor the
+/// blocks in PredBBs to one predecessor, then thread an edge from it to SuccBB
+/// across BB.  Transform the IR to reflect this change.
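+///
+/// For example (hypothetical): threading %pred -> BB -> %succ clones BB's
+/// non-PHI instructions into a new block, retargets %pred's terminator at the
+/// clone, and ends the clone with an unconditional branch to %succ.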
+bool JumpThreading::ThreadEdge(BasicBlock *BB, 
+                               const SmallVectorImpl<BasicBlock*> &PredBBs, 
+                               BasicBlock *SuccBB) {
   // If threading to the same block as we come from, we would infinite loop.
   if (SuccBB == BB) {
     DEBUG(errs() << "  Not threading across BB '" << BB->getName()
@@ -892,31 +1129,36 @@ bool JumpThreading::ThreadEdge(BasicBlock *BB, BasicBlock *PredBB,
   // If threading this would thread across a loop header, don't thread the edge.
   // See the comments above FindLoopHeaders for justifications and caveats.
   if (LoopHeaders.count(BB)) {
-    DEBUG(errs() << "  Not threading from '" << PredBB->getName()
-          << "' across loop header BB '" << BB->getName()
+    DEBUG(errs() << "  Not threading across loop header BB '" << BB->getName()
           << "' to dest BB '" << SuccBB->getName()
           << "' - it might create an irreducible loop!\n");
     return false;
   }
 
+  unsigned JumpThreadCost = getJumpThreadDuplicationCost(BB);
+  if (JumpThreadCost > Threshold) {
+    DEBUG(errs() << "  Not threading BB '" << BB->getName()
+          << "' - Cost is too high: " << JumpThreadCost << "\n");
+    return false;
+  }
+  
+  // And finally, do it!  Start by factoring the predecessors if needed.
+  BasicBlock *PredBB;
+  if (PredBBs.size() == 1)
+    PredBB = PredBBs[0];
+  else {
+    DEBUG(errs() << "  Factoring out " << PredBBs.size()
+          << " common predecessors.\n");
+    PredBB = SplitBlockPredecessors(BB, &PredBBs[0], PredBBs.size(),
+                                    ".thr_comm", this);
+  }
+  
   // And finally, do it!
   DEBUG(errs() << "  Threading edge from '" << PredBB->getName() << "' to '"
         << SuccBB->getName() << "' with cost: " << JumpThreadCost
         << ", across block:\n    "
         << *BB << "\n");
   
-  // Jump Threading can not update SSA properties correctly if the values
-  // defined in the duplicated block are used outside of the block itself.  For
-  // this reason, we spill all values that are used outside of BB to the stack.
-  for (BasicBlock::iterator I = BB->begin(); I != BB->end(); ++I) {
-    if (!I->isUsedOutsideOfBlock(BB))
-      continue;
-    
-    // We found a use of I outside of BB.  Create a new stack slot to
-    // break this inter-block usage pattern.
-    DemoteRegToStack(*I);
-  }
- 
   // We are going to have to map operands from the original BB block to the new
   // copy of the block 'NewBB'.  If there are PHI nodes in BB, evaluate them to
   // account for entry from PredBB.
@@ -954,28 +1196,55 @@ bool JumpThreading::ThreadEdge(BasicBlock *BB, BasicBlock *PredBB,
   
   // Check to see if SuccBB has PHI nodes. If so, we need to add entries to the
   // PHI nodes for NewBB now.
-  for (BasicBlock::iterator PNI = SuccBB->begin(); isa<PHINode>(PNI); ++PNI) {
-    PHINode *PN = cast<PHINode>(PNI);
-    // Ok, we have a PHI node.  Figure out what the incoming value was for the
-    // DestBlock.
-    Value *IV = PN->getIncomingValueForBlock(BB);
-    
-    // Remap the value if necessary.
-    if (Instruction *Inst = dyn_cast<Instruction>(IV)) {
-      DenseMap<Instruction*, Value*>::iterator I = ValueMapping.find(Inst);
-      if (I != ValueMapping.end())
-        IV = I->second;
+  AddPHINodeEntriesForMappedBlock(SuccBB, BB, NewBB, ValueMapping);
+  
+  // If there were values defined in BB that are used outside the block, then we
+  // now have to update all uses of the value to use either the original value,
+  // the cloned value, or some PHI derived value.  This can require arbitrary
+  // PHI insertion, which we are prepared to do; clean these up now.
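+  //
+  // For example (hypothetical): if %v is defined in BB and used in %succ,
+  // SSAUpdater can insert '%v.merge = phi [ %v, %BB ], [ %v.clone, %NewBB ]'
+  // in %succ and rewrite the outside use to %v.merge.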
+  SSAUpdater SSAUpdate;
+  SmallVector<Use*, 16> UsesToRename;
+  for (BasicBlock::iterator I = BB->begin(); I != BB->end(); ++I) {
+    // Scan all uses of this instruction to see if it is used outside of its
+    // block, and if so, record them in UsesToRename.
+    for (Value::use_iterator UI = I->use_begin(), E = I->use_end(); UI != E;
+         ++UI) {
+      Instruction *User = cast<Instruction>(*UI);
+      if (PHINode *UserPN = dyn_cast<PHINode>(User)) {
+        if (UserPN->getIncomingBlock(UI) == BB)
+          continue;
+      } else if (User->getParent() == BB)
+        continue;
+      
+      UsesToRename.push_back(&UI.getUse());
     }
-    PN->addIncoming(IV, NewBB);
+    
+    // If there are no uses outside the block, we're done with this instruction.
+    if (UsesToRename.empty())
+      continue;
+    
+    DEBUG(errs() << "JT: Renaming non-local uses of: " << *I << "\n");
+
+    // We found a use of I outside of BB.  Rename all uses of I that are outside
+    // its block to be uses of the appropriate PHI node etc., seeding the
+    // SSAUpdater with the two values we know.
+    SSAUpdate.Initialize(I);
+    SSAUpdate.AddAvailableValue(BB, I);
+    SSAUpdate.AddAvailableValue(NewBB, ValueMapping[I]);
+    
+    while (!UsesToRename.empty())
+      SSAUpdate.RewriteUse(*UsesToRename.pop_back_val());
+    DEBUG(errs() << "\n");
   }
   
+  
   // Ok, NewBB is good to go.  Update the terminator of PredBB to jump to
   // NewBB instead of BB.  This eliminates predecessors from BB, which requires
   // us to simplify any PHI nodes in BB.
   TerminatorInst *PredTerm = PredBB->getTerminator();
   for (unsigned i = 0, e = PredTerm->getNumSuccessors(); i != e; ++i)
     if (PredTerm->getSuccessor(i) == BB) {
-      BB->removePredecessor(PredBB);
+      RemovePredecessorAndSimplify(BB, PredBB, TD);
       PredTerm->setSuccessor(i, NewBB);
     }
   
@@ -985,9 +1254,12 @@ bool JumpThreading::ThreadEdge(BasicBlock *BB, BasicBlock *PredBB,
   BI = NewBB->begin();
   for (BasicBlock::iterator E = NewBB->end(); BI != E; ) {
     Instruction *Inst = BI++;
-    if (Constant *C = ConstantFoldInstruction(Inst, BB->getContext(), TD)) {
-      Inst->replaceAllUsesWith(C);
-      Inst->eraseFromParent();
+    
+    if (Value *V = SimplifyInstruction(Inst, TD)) {
+      WeakVH BIHandle(BI);
+      ReplaceAndSimplifyAllUses(Inst, V, TD);
+      if (BIHandle == 0)
+        BI = NewBB->begin();
       continue;
     }
     
@@ -998,3 +1270,120 @@ bool JumpThreading::ThreadEdge(BasicBlock *BB, BasicBlock *PredBB,
   ++NumThreads;
   return true;
 }
+
+/// DuplicateCondBranchOnPHIIntoPred - PredBB contains an unconditional branch
+/// to BB which contains an i1 PHI node and a conditional branch on that PHI.
+/// If we can duplicate the contents of BB up into PredBB do so now, this
+/// improves the odds that the branch will be on an analyzable instruction like
+/// a compare.
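+///
+/// For example (hypothetical): with PredBB ending in 'br label %BB' and BB:
+///   %p = phi i1 [ false, %PredBB ], [ %c, %other ]
+///   br i1 %p, label %T, label %F
+/// the copy in PredBB maps %p to false, so its cloned branch becomes
+/// 'br i1 false, ...', which later simplification folds to 'br label %F'.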
+bool JumpThreading::DuplicateCondBranchOnPHIIntoPred(BasicBlock *BB,
+                                                     BasicBlock *PredBB) {
+  // If BB is a loop header, then duplicating this block outside the loop would
+  // cause us to transform this into an irreducible loop, don't do this.
+  // See the comments above FindLoopHeaders for justifications and caveats.
+  if (LoopHeaders.count(BB)) {
+    DEBUG(errs() << "  Not duplicating loop header '" << BB->getName()
+          << "' into predecessor block '" << PredBB->getName()
+          << "' - it might create an irreducible loop!\n");
+    return false;
+  }
+  
+  unsigned DuplicationCost = getJumpThreadDuplicationCost(BB);
+  if (DuplicationCost > Threshold) {
+    DEBUG(errs() << "  Not duplicating BB '" << BB->getName()
+          << "' - Cost is too high: " << DuplicationCost << "\n");
+    return false;
+  }
+  
+  // Okay, we decided to do this!  Clone all the instructions in BB onto the end
+  // of PredBB.
+  DEBUG(errs() << "  Duplicating block '" << BB->getName() << "' into end of '"
+        << PredBB->getName() << "' to eliminate branch on phi.  Cost: "
+        << DuplicationCost << " block is:" << *BB << "\n");
+  
+  // We are going to have to map operands from the original BB block into the
+  // PredBB block.  Evaluate PHI nodes in BB.
+  DenseMap<Instruction*, Value*> ValueMapping;
+  
+  BasicBlock::iterator BI = BB->begin();
+  for (; PHINode *PN = dyn_cast<PHINode>(BI); ++BI)
+    ValueMapping[PN] = PN->getIncomingValueForBlock(PredBB);
+  
+  BranchInst *OldPredBranch = cast<BranchInst>(PredBB->getTerminator());
+  
+  // Clone the non-phi instructions of BB into PredBB, keeping track of the
+  // mapping and using it to remap operands in the cloned instructions.
+  for (; BI != BB->end(); ++BI) {
+    Instruction *New = BI->clone();
+    New->setName(BI->getName());
+    PredBB->getInstList().insert(OldPredBranch, New);
+    ValueMapping[BI] = New;
+    
+    // Remap operands to patch up intra-block references.
+    for (unsigned i = 0, e = New->getNumOperands(); i != e; ++i)
+      if (Instruction *Inst = dyn_cast<Instruction>(New->getOperand(i))) {
+        DenseMap<Instruction*, Value*>::iterator I = ValueMapping.find(Inst);
+        if (I != ValueMapping.end())
+          New->setOperand(i, I->second);
+      }
+  }
+  
+  // Check to see if the targets of the branch had PHI nodes. If so, we need to
+  // add entries to the PHI nodes for branch from PredBB now.
+  BranchInst *BBBranch = cast<BranchInst>(BB->getTerminator());
+  AddPHINodeEntriesForMappedBlock(BBBranch->getSuccessor(0), BB, PredBB,
+                                  ValueMapping);
+  AddPHINodeEntriesForMappedBlock(BBBranch->getSuccessor(1), BB, PredBB,
+                                  ValueMapping);
+  
+  // If there were values defined in BB that are used outside the block, then we
+  // now have to update all uses of the value to use either the original value,
+  // the cloned value, or some PHI derived value.  This can require arbitrary
+  // PHI insertion, which we are prepared to do; clean these up now.
+  SSAUpdater SSAUpdate;
+  SmallVector<Use*, 16> UsesToRename;
+  for (BasicBlock::iterator I = BB->begin(); I != BB->end(); ++I) {
+    // Scan all uses of this instruction to see if it is used outside of its
+    // block, and if so, record them in UsesToRename.
+    for (Value::use_iterator UI = I->use_begin(), E = I->use_end(); UI != E;
+         ++UI) {
+      Instruction *User = cast<Instruction>(*UI);
+      if (PHINode *UserPN = dyn_cast<PHINode>(User)) {
+        if (UserPN->getIncomingBlock(UI) == BB)
+          continue;
+      } else if (User->getParent() == BB)
+        continue;
+      
+      UsesToRename.push_back(&UI.getUse());
+    }
+    
+    // If there are no uses outside the block, we're done with this instruction.
+    if (UsesToRename.empty())
+      continue;
+    
+    DEBUG(errs() << "JT: Renaming non-local uses of: " << *I << "\n");
+    
+    // We found a use of I outside of BB.  Rename all uses of I that are outside
+    // its block to be uses of the appropriate PHI node etc., seeding the
+    // SSAUpdater with the two values we know.
+    SSAUpdate.Initialize(I);
+    SSAUpdate.AddAvailableValue(BB, I);
+    SSAUpdate.AddAvailableValue(PredBB, ValueMapping[I]);
+    
+    while (!UsesToRename.empty())
+      SSAUpdate.RewriteUse(*UsesToRename.pop_back_val());
+    DEBUG(errs() << "\n");
+  }
+  
+  // PredBB no longer jumps to BB, remove entries in the PHI node for the edge
+  // that we nuked.
+  RemovePredecessorAndSimplify(BB, PredBB, TD);
+  
+  // Remove the unconditional branch at the end of the PredBB block.
+  OldPredBranch->eraseFromParent();
+  
+  ++NumDupes;
+  return true;
+}
+
+
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LICM.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LICM.cpp
index 6df246f..5511387 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LICM.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LICM.cpp
@@ -35,6 +35,7 @@
 #include "llvm/Transforms/Scalar.h"
 #include "llvm/Constants.h"
 #include "llvm/DerivedTypes.h"
+#include "llvm/IntrinsicInst.h"
 #include "llvm/Instructions.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Analysis/LoopInfo.h"
@@ -62,15 +63,6 @@ static cl::opt<bool>
 DisablePromotion("disable-licm-promotion", cl::Hidden,
                  cl::desc("Disable memory promotion in LICM pass"));
 
-// This feature is currently disabled by default because CodeGen is not yet
-// capable of rematerializing these constants in PIC mode, so it can lead to
-// degraded performance. Compile test/CodeGen/X86/remat-constant.ll with
-// -relocation-model=pic to see an example of this.
-static cl::opt<bool>
-EnableLICMConstantMotion("enable-licm-constant-variables", cl::Hidden,
-                         cl::desc("Enable hoisting/sinking of constant "
-                                  "global variables"));
-
 namespace {
   struct LICM : public LoopPass {
     static char ID; // Pass identification, replacement for typeid
@@ -262,7 +254,6 @@ bool LICM::runOnLoop(Loop *L, LPPassManager &LPM) {
 
   // Get the preheader block to move instructions into...
   Preheader = L->getLoopPreheader();
-  assert(Preheader&&"Preheader insertion pass guarantees we have a preheader!");
 
   // Loop over the body of this loop, looking for calls, invokes, and stores.
   // Because subloops have already been incorporated into AST, we skip blocks in
@@ -285,12 +276,14 @@ bool LICM::runOnLoop(Loop *L, LPPassManager &LPM) {
   // us to sink instructions in one pass, without iteration.  After sinking
   // instructions, we perform another pass to hoist them out of the loop.
   //
-  SinkRegion(DT->getNode(L->getHeader()));
-  HoistRegion(DT->getNode(L->getHeader()));
+  if (L->hasDedicatedExits())
+    SinkRegion(DT->getNode(L->getHeader()));
+  if (Preheader)
+    HoistRegion(DT->getNode(L->getHeader()));
 
   // Now that all loop invariants have been removed from the loop, promote any
   // memory references to scalars that we can...
-  if (!DisablePromotion)
+  if (!DisablePromotion && Preheader && L->hasDedicatedExits())
     PromoteValuesInLoop();
 
   // Clear out loops state information for the next iteration
@@ -338,7 +331,6 @@ void LICM::SinkRegion(DomTreeNode *N) {
   }
 }
 
-
 /// HoistRegion - Walk the specified region of the CFG (defined by all blocks
 /// dominated by the specified block, and that are in the current loop) in depth
 /// first order w.r.t the DominatorTree.  This allows us to visit definitions
@@ -382,8 +374,7 @@ bool LICM::canSinkOrHoistInst(Instruction &I) {
 
     // Loads from constant memory are always safe to move, even if they end up
     // in the same alias set as something that ends up being modified.
-    if (EnableLICMConstantMotion &&
-        AA->pointsToConstantMemory(LI->getOperand(0)))
+    if (AA->pointsToConstantMemory(LI->getOperand(0)))
       return true;
     
     // Don't hoist loads which have may-aliased stores in loop.
@@ -392,6 +383,10 @@ bool LICM::canSinkOrHoistInst(Instruction &I) {
       Size = AA->getTypeStoreSize(LI->getType());
     return !pointerInvalidatedByLoop(LI->getOperand(0), Size);
   } else if (CallInst *CI = dyn_cast<CallInst>(&I)) {
+    if (isa<DbgStopPointInst>(CI)) {
+      // Don't hoist/sink dbg stop points; we handle them separately.
+      return false;
+    }
     // Handle obvious cases efficiently.
     AliasAnalysis::ModRefBehavior Behavior = AA->getModRefBehavior(CI);
     if (Behavior == AliasAnalysis::DoesNotAccessMemory)
@@ -482,21 +477,26 @@ void LICM::sink(Instruction &I) {
     if (!isExitBlockDominatedByBlockInLoop(ExitBlocks[0], I.getParent())) {
       // Instruction is not used, just delete it.
       CurAST->deleteValue(&I);
-      if (!I.use_empty())  // If I has users in unreachable blocks, eliminate.
+      // If I has users in unreachable blocks, eliminate.
+      // If I is not of void type, replaceAllUsesWith undef.
+      // This allows value handles and custom metadata to adjust themselves.
+      if (!I.getType()->isVoidTy())
         I.replaceAllUsesWith(UndefValue::get(I.getType()));
       I.eraseFromParent();
     } else {
       // Move the instruction to the start of the exit block, after any PHI
       // nodes in it.
       I.removeFromParent();
-
       BasicBlock::iterator InsertPt = ExitBlocks[0]->getFirstNonPHI();
       ExitBlocks[0]->getInstList().insert(InsertPt, &I);
     }
   } else if (ExitBlocks.empty()) {
     // The instruction is actually dead if there ARE NO exit blocks.
     CurAST->deleteValue(&I);
-    if (!I.use_empty())  // If I has users in unreachable blocks, eliminate.
+    // If I has users in unreachable blocks, eliminate.
+    // If I is not of void type, replaceAllUsesWith undef.
+    // This allows value handles and custom metadata to adjust themselves.
+    if (!I.getType()->isVoidTy())
       I.replaceAllUsesWith(UndefValue::get(I.getType()));
     I.eraseFromParent();
   } else {
@@ -507,7 +507,7 @@ void LICM::sink(Instruction &I) {
     // Firstly, we create a stack object to hold the value...
     AllocaInst *AI = 0;
 
-    if (I.getType() != Type::getVoidTy(I.getContext())) {
+    if (!I.getType()->isVoidTy()) {
       AI = new AllocaInst(I.getType(), 0, I.getName(),
                           I.getParent()->getParent()->getEntryBlock().begin());
       CurAST->add(AI);
@@ -593,7 +593,7 @@ void LICM::sink(Instruction &I) {
     if (AI) {
       std::vector<AllocaInst*> Allocas;
       Allocas.push_back(AI);
-      PromoteMemToReg(Allocas, *DT, *DF, AI->getContext(), CurAST);
+      PromoteMemToReg(Allocas, *DT, *DF, CurAST);
     }
   }
 }
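
Both places where sink() deletes a dead instruction now share the same idiom for handling stray users in unreachable blocks. As a sketch (the helper name eraseDeadInstruction is invented here for illustration; the individual calls appear verbatim in the hunks above):

    // Sketch: erase a dead instruction whose only remaining users, if
    // any, sit in unreachable blocks.  Replacing the uses with undef
    // first lets value handles and custom metadata adjust themselves;
    // void-typed values have no uses to rewrite.
    static void eraseDeadInstruction(Instruction &I) {
      if (!I.getType()->isVoidTy())
        I.replaceAllUsesWith(UndefValue::get(I.getType()));
      I.eraseFromParent();
    }
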
@@ -602,7 +602,8 @@ void LICM::sink(Instruction &I) {
 /// that is safe to hoist, this instruction is called to do the dirty work.
 ///
 void LICM::hoist(Instruction &I) {
-  DEBUG(errs() << "LICM hoisting to " << Preheader->getName() << ": " << I);
+  DEBUG(errs() << "LICM hoisting to " << Preheader->getName() << ": "
+        << I << "\n");
 
   // Remove the instruction from its current basic block... but don't delete the
   // instruction.
@@ -768,7 +769,7 @@ void LICM::PromoteValuesInLoop() {
   PromotedAllocas.reserve(PromotedValues.size());
   for (unsigned i = 0, e = PromotedValues.size(); i != e; ++i)
     PromotedAllocas.push_back(PromotedValues[i].first);
-  PromoteMemToReg(PromotedAllocas, *DT, *DF, Preheader->getContext(), CurAST);
+  PromoteMemToReg(PromotedAllocas, *DT, *DF, CurAST);
 }
 
 /// FindPromotableValuesInLoop - Check the current loop for stores to definite
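
Taken together, the LICM hunks above replace one blanket canonical-form assumption with per-transformation guards. A condensed sketch of the resulting control flow in LICM::runOnLoop, using only names from the patch (illustrative, not a verbatim excerpt):

    // Sinking needs dedicated exits: every predecessor of each exit
    // block is inside the loop, so sunk code cannot be reached along
    // paths that bypassed the loop.
    if (L->hasDedicatedExits())
      SinkRegion(DT->getNode(L->getHeader()));
    // Hoisting needs somewhere to hoist to.
    if (Preheader)
      HoistRegion(DT->getNode(L->getHeader()));
    // Promotion inserts a load in the preheader and stores in the exit
    // blocks, so it needs both guarantees.
    if (!DisablePromotion && Preheader && L->hasDedicatedExits())
      PromoteValuesInLoop();
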
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopDeletion.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopDeletion.cpp
index 5f93756..48817ab 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopDeletion.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopDeletion.cpp
@@ -33,8 +33,6 @@ namespace {
     // Possibly eliminate loop L if it is dead.
     bool runOnLoop(Loop* L, LPPassManager& LPM);
     
-    bool SingleDominatingExit(Loop* L,
-                              SmallVector<BasicBlock*, 4>& exitingBlocks);
     bool IsLoopDead(Loop* L, SmallVector<BasicBlock*, 4>& exitingBlocks,
                     SmallVector<BasicBlock*, 4>& exitBlocks,
                     bool &Changed, BasicBlock *Preheader);
@@ -63,25 +61,6 @@ Pass* llvm::createLoopDeletionPass() {
   return new LoopDeletion();
 }
 
-/// SingleDominatingExit - Checks that there is only a single block that
-/// branches out of the loop, and that it also dominates the latch block.  Loops
-/// with multiple or non-latch-dominating exiting blocks could be dead, but we'd
-/// have to do more extensive analysis to make sure, for instance, that the 
-/// control flow logic involved was or could be made loop-invariant.
-bool LoopDeletion::SingleDominatingExit(Loop* L,
-                                   SmallVector<BasicBlock*, 4>& exitingBlocks) {
-  
-  if (exitingBlocks.size() != 1)
-    return false;
-  
-  BasicBlock* latch = L->getLoopLatch();
-  if (!latch)
-    return false;
-  
-  DominatorTree& DT = getAnalysis<DominatorTree>();
-  return DT.dominates(exitingBlocks[0], latch);
-}
-
 /// IsLoopDead - Determines if a loop is dead.  This assumes that we've already
 /// checked for unique exit and exiting blocks, and that the code is in LCSSA
 /// form.
@@ -136,6 +115,10 @@ bool LoopDeletion::runOnLoop(Loop* L, LPPassManager& LPM) {
   if (!preheader)
     return false;
   
+  // If LoopSimplify form is not available, stay out of trouble.
+  if (!L->hasDedicatedExits())
+    return false;
+
   // We can't remove loops that contain subloops.  If the subloops were dead,
   // they would already have been removed in earlier executions of this pass.
   if (L->begin() != L->end())
@@ -154,9 +137,8 @@ bool LoopDeletion::runOnLoop(Loop* L, LPPassManager& LPM) {
   if (exitBlocks.size() != 1)
     return false;
   
-  // Loops with multiple exits or exits that don't dominate the latch
-  // are too complicated to handle correctly.
-  if (!SingleDominatingExit(L, exitingBlocks))
+  // Loops with multiple exiting blocks are too complicated to handle correctly.
+  if (exitingBlocks.size() != 1)
     return false;
   
   // Finally, we have to check that the loop really is dead.
@@ -167,7 +149,7 @@ bool LoopDeletion::runOnLoop(Loop* L, LPPassManager& LPM) {
   // Don't remove loops for which we can't solve the trip count.
   // They could be infinite, in which case we'd be changing program behavior.
   ScalarEvolution& SE = getAnalysis<ScalarEvolution>();
-  const SCEV *S = SE.getBackedgeTakenCount(L);
+  const SCEV *S = SE.getMaxBackedgeTakenCount(L);
   if (isa<SCEVCouldNotCompute>(S))
     return Changed;
   
@@ -183,7 +165,7 @@ bool LoopDeletion::runOnLoop(Loop* L, LPPassManager& LPM) {
   // Tell ScalarEvolution that the loop is deleted. Do this before
   // deleting the loop so that ScalarEvolution can look at the loop
   // to determine what it needs to clean up.
-  SE.forgetLoopBackedgeTakenCount(L);
+  SE.forgetLoop(L);
 
   // Connect the preheader directly to the exit block.
   TerminatorInst* TI = preheader->getTerminator();
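
With the SingleDominatingExit helper gone, LoopDeletion::runOnLoop's eligibility tests reduce to a short checklist. A sketch assembled from the hunks above (canConsiderDeleting is an invented name for illustration):

    // Sketch of the post-patch bail-out conditions in LoopDeletion.
    static bool canConsiderDeleting(Loop *L, ScalarEvolution &SE,
                           SmallVector<BasicBlock*, 4> &exitingBlocks,
                           SmallVector<BasicBlock*, 4> &exitBlocks) {
      if (!L->getLoopPreheader()) return false;    // need a preheader
      if (!L->hasDedicatedExits()) return false;   // need LoopSimplify exits
      if (L->begin() != L->end()) return false;    // no subloops
      if (exitBlocks.size() != 1) return false;    // unique exit block
      if (exitingBlocks.size() != 1) return false; // unique exiting block
      // If even the maximum trip count is unknown, the loop may be
      // infinite, and deleting it would change program behavior.
      return !isa<SCEVCouldNotCompute>(SE.getMaxBackedgeTakenCount(L));
    }
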
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopIndexSplit.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopIndexSplit.cpp
index 5f9d370..8b6a233 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopIndexSplit.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopIndexSplit.cpp
@@ -209,6 +209,10 @@ bool LoopIndexSplit::runOnLoop(Loop *IncomingLoop, LPPassManager &LPM_Ref) {
   L = IncomingLoop;
   LPM = &LPM_Ref;
 
+  // If LoopSimplify form is not available, stay out of trouble.
+  if (!L->isLoopSimplifyForm())
+    return false;
+
   // FIXME - Nested loops make dominator info updates tricky. 
   if (!L->getSubLoops().empty())
     return false;
@@ -426,7 +430,7 @@ bool LoopIndexSplit::processOneIterationLoop() {
   //      c1 = icmp uge i32 SplitValue, StartValue
   //      c2 = icmp ult i32 SplitValue, ExitValue
   //      and i32 c1, c2 
-  Instruction *C1 = new ICmpInst(BR, ExitCondition->isSignedPredicate() ? 
+  Instruction *C1 = new ICmpInst(BR, ExitCondition->isSigned() ? 
                                  ICmpInst::ICMP_SGE : ICmpInst::ICMP_UGE,
                                  SplitValue, StartValue, "lisplit");
 
@@ -478,7 +482,7 @@ bool LoopIndexSplit::processOneIterationLoop() {
 /// with a loop invariant value. Update loop's lower and upper bound based on 
 /// the loop invariant value.
 bool LoopIndexSplit::restrictLoopBound(ICmpInst &Op) {
-  bool Sign = Op.isSignedPredicate();
+  bool Sign = Op.isSigned();
   Instruction *PHTerm = L->getLoopPreheader()->getTerminator();
 
   if (IVisGT(*ExitCondition) || IVisGE(*ExitCondition)) {
@@ -933,7 +937,7 @@ bool LoopIndexSplit::splitLoop() {
     return false;
 
   // If the predicate sign does not match then skip.
-  if (ExitCondition->isSignedPredicate() != SplitCondition->isSignedPredicate())
+  if (ExitCondition->isSigned() != SplitCondition->isSigned())
     return false;
 
   unsigned EVOpNum = (ExitCondition->getOperand(1) == IVExitValue);
@@ -963,7 +967,7 @@ bool LoopIndexSplit::splitLoop() {
   //[*] Calculate new loop bounds.
   Value *AEV = SplitValue;
   Value *BSV = SplitValue;
-  bool Sign = SplitCondition->isSignedPredicate();
+  bool Sign = SplitCondition->isSigned();
   Instruction *PHTerm = L->getLoopPreheader()->getTerminator();
 
   if (IVisLT(*ExitCondition)) {
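
The isSignedPredicate() -> isSigned() changes in this file track an ICmpInst API rename; the behavior is meant to be unchanged. The pattern, in a minimal sketch mirroring processOneIterationLoop above:

    // The signedness of the exit comparison decides whether the new
    // bound checks use signed or unsigned predicates.
    ICmpInst::Predicate GEPred = ExitCondition->isSigned()
                                     ? ICmpInst::ICMP_SGE
                                     : ICmpInst::ICMP_UGE;
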
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopRotation.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopRotation.cpp
index 70c69bb..5004483 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopRotation.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopRotation.cpp
@@ -15,12 +15,12 @@
 #include "llvm/Transforms/Scalar.h"
 #include "llvm/Function.h"
 #include "llvm/IntrinsicInst.h"
-#include "llvm/Analysis/LoopInfo.h"
 #include "llvm/Analysis/LoopPass.h"
 #include "llvm/Analysis/Dominators.h"
 #include "llvm/Analysis/ScalarEvolution.h"
 #include "llvm/Transforms/Utils/Local.h"
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
+#include "llvm/Transforms/Utils/SSAUpdater.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/ADT/Statistic.h"
@@ -32,16 +32,6 @@ using namespace llvm;
 STATISTIC(NumRotated, "Number of loops rotated");
 namespace {
 
-  class RenameData {
-  public:
-    RenameData(Instruction *O, Value *P, Instruction *H) 
-      : Original(O), PreHeader(P), Header(H) { }
-  public:
-    Instruction *Original; // Original instruction
-    Value *PreHeader; // Original pre-header replacement
-    Instruction *Header; // New header replacement
-  };
-  
   class LoopRotate : public LoopPass {
   public:
     static char ID; // Pass ID, replacement for typeid
@@ -58,6 +48,7 @@ namespace {
       AU.addRequiredID(LCSSAID);
       AU.addPreservedID(LCSSAID);
       AU.addPreserved<ScalarEvolution>();
+      AU.addRequired<LoopInfo>();
       AU.addPreserved<LoopInfo>();
       AU.addPreserved<DominatorTree>();
       AU.addPreserved<DominanceFrontier>();
@@ -71,25 +62,12 @@ namespace {
     /// Initialize local data
     void initialize();
 
-    /// Make sure all Exit block PHINodes have required incoming values.
-    /// If incoming value is constant or defined outside the loop then
-    /// PHINode may not have an entry for original pre-header. 
-    void  updateExitBlock();
-
-    /// Return true if this instruction is used outside original header.
-    bool usedOutsideOriginalHeader(Instruction *In);
-
-    /// Find Replacement information for instruction. Return NULL if it is
-    /// not available.
-    const RenameData *findReplacementData(Instruction *I);
-
     /// After loop rotation, loop pre-header has multiple successors.
     /// Insert one forwarding basic block to ensure that loop pre-header
     /// has only one successor.
     void preserveCanonicalLoopForm(LPPassManager &LPM);
 
   private:
-
     Loop *L;
     BasicBlock *OrigHeader;
     BasicBlock *OrigPreHeader;
@@ -97,7 +75,6 @@ namespace {
     BasicBlock *NewHeader;
     BasicBlock *Exit;
     LPPassManager *LPM_Ptr;
-    SmallVector<RenameData, MAX_HEADER_SIZE> LoopHeaderInfo;
   };
 }
   
@@ -127,21 +104,22 @@ bool LoopRotate::runOnLoop(Loop *Lp, LPPassManager &LPM) {
 bool LoopRotate::rotateLoop(Loop *Lp, LPPassManager &LPM) {
   L = Lp;
 
-  OrigHeader =  L->getHeader();
   OrigPreHeader = L->getLoopPreheader();
+  if (!OrigPreHeader) return false;
+
   OrigLatch = L->getLoopLatch();
+  if (!OrigLatch) return false;
+
+  OrigHeader =  L->getHeader();
 
   // If the loop has only one block then there is not much to rotate.
   if (L->getBlocks().size() == 1)
     return false;
 
-  assert(OrigHeader && OrigLatch && OrigPreHeader &&
-         "Loop is not in canonical form");
-
   // If the loop header is not one of the loop exiting blocks then
   // either this loop is already rotated or it is not
   // suitable for loop rotation transformations.
-  if (!L->isLoopExit(OrigHeader))
+  if (!L->isLoopExiting(OrigHeader))
     return false;
 
   BranchInst *BI = dyn_cast<BranchInst>(OrigHeader->getTerminator());
@@ -180,7 +158,7 @@ bool LoopRotate::rotateLoop(Loop *Lp, LPPassManager &LPM) {
   // Anything ScalarEvolution may know about this loop or the PHI nodes
   // in its header will soon be invalidated.
   if (ScalarEvolution *SE = getAnalysisIfAvailable<ScalarEvolution>())
-    SE->forgetLoopBackedgeTakenCount(L);
+    SE->forgetLoop(L);
 
   // Find new Loop header. NewHeader is the Header's one and only successor
   // that is inside the loop.  The Header's other successor is outside the
@@ -199,168 +177,88 @@ bool LoopRotate::rotateLoop(Loop *Lp, LPPassManager &LPM) {
          "New header doesn't have one pred!");
   FoldSingleEntryPHINodes(NewHeader);
 
-  // Copy PHI nodes and other instructions from the original header
-  // into the original pre-header. Unlike the original header, the original
-  // pre-header is not a member of the loop.
-  //
-  // The new loop header is the one and only successor of original header that
-  // is inside the loop. All other original header successors are outside 
-  // the loop. Copy PHI Nodes from the original header into the new loop header.
-  // Add second incoming value, from original loop pre-header into these phi 
-  // nodes. If a value defined in original header is used outside original 
-  // header then new loop header will need new phi nodes with two incoming 
-  // values, one definition from original header and second definition is 
-  // from original loop pre-header.
-
-  // Remove terminator from Original pre-header. Original pre-header will
-  // receive a clone of original header terminator as a new terminator.
-  OrigPreHeader->getInstList().pop_back();
+  // Begin by walking OrigHeader and populating ValueMap with an entry for
+  // each Instruction.
   BasicBlock::iterator I = OrigHeader->begin(), E = OrigHeader->end();
-  PHINode *PN = 0;
-  for (; (PN = dyn_cast<PHINode>(I)); ++I) {
-    // PHI nodes are not copied into original pre-header. Instead their values
-    // are directly propagated.
-    Value *NPV = PN->getIncomingValueForBlock(OrigPreHeader);
-
-    // Create a new PHI node with two incoming values for NewHeader.
-    // One incoming value is from OrigLatch (through OrigHeader) and the
-    // second incoming value is from original pre-header.
-    PHINode *NH = PHINode::Create(PN->getType(), PN->getName(),
-                                  NewHeader->begin());
-    NH->addIncoming(PN->getIncomingValueForBlock(OrigLatch), OrigHeader);
-    NH->addIncoming(NPV, OrigPreHeader);
-    
-    // "In" can be replaced by NH at various places.
-    LoopHeaderInfo.push_back(RenameData(PN, NPV, NH));
-  }
+  DenseMap<const Value *, Value *> ValueMap;
 
-  // Now, handle non-phi instructions.
-  for (; I != E; ++I) {
-    Instruction *In = I;
-    assert(!isa<PHINode>(In) && "PHINode is not expected here");
-    
-    // This is not a PHI instruction. Insert its clone into original pre-header.
-    // If this instruction is using a value from same basic block then
-    // update it to use value from cloned instruction.
-    Instruction *C = In->clone();
-    C->setName(In->getName());
-    OrigPreHeader->getInstList().push_back(C);
-
-    for (unsigned opi = 0, e = In->getNumOperands(); opi != e; ++opi) {
-      Instruction *OpInsn = dyn_cast<Instruction>(In->getOperand(opi));
-      if (!OpInsn) continue;  // Ignore non-instruction values.
-      if (const RenameData *D = findReplacementData(OpInsn))
-        C->setOperand(opi, D->PreHeader);
-    }
+  // For PHI nodes, the value available in OrigPreHeader is just the
+  // incoming value from OrigPreHeader.
+  for (; PHINode *PN = dyn_cast<PHINode>(I); ++I)
+    ValueMap[PN] = PN->getIncomingValue(PN->getBasicBlockIndex(OrigPreHeader));
 
-    // If this instruction is used outside this basic block then
-    // create new PHINode for this instruction.
-    Instruction *NewHeaderReplacement = NULL;
-    if (usedOutsideOriginalHeader(In)) {
-      PHINode *PN = PHINode::Create(In->getType(), In->getName(),
-                                    NewHeader->begin());
-      PN->addIncoming(In, OrigHeader);
-      PN->addIncoming(C, OrigPreHeader);
-      NewHeaderReplacement = PN;
-    }
-    LoopHeaderInfo.push_back(RenameData(In, C, NewHeaderReplacement));
+  // For the rest of the instructions, create a clone in the OrigPreHeader.
+  TerminatorInst *LoopEntryBranch = OrigPreHeader->getTerminator();
+  for (; I != E; ++I) {
+    Instruction *C = I->clone();
+    C->setName(I->getName());
+    C->insertBefore(LoopEntryBranch);
+    ValueMap[I] = C;
   }
 
-  // Rename uses of original header instructions to reflect their new
-  // definitions (either from original pre-header node or from newly created
-  // new header PHINodes).
-  //
-  // Original header instructions are used in
-  // 1) Original header:
-  //
-  //    If the instruction is used in non-phi instructions then it is using
-  //    the definition from the original header itself. Do not replace this use
-  //    with definition from new header or original pre-header.
-  //
-  //    If instruction is used in phi node then it is an incoming 
-  //    value. Rename its use to reflect new definition from new-preheader
-  //    or new header.
-  //
-  // 2) Inside loop but not in original header
-  //
-  //    Replace this use to reflect definition from new header.
-  for (unsigned LHI = 0, LHI_E = LoopHeaderInfo.size(); LHI != LHI_E; ++LHI) {
-    const RenameData &ILoopHeaderInfo = LoopHeaderInfo[LHI];
-
-    if (!ILoopHeaderInfo.Header)
-      continue;
-
-    Instruction *OldPhi = ILoopHeaderInfo.Original;
-    Instruction *NewPhi = ILoopHeaderInfo.Header;
-
-    // Before replacing uses, collect them first, so that iterator is
-    // not invalidated.
-    SmallVector<Instruction *, 16> AllUses;
-    for (Value::use_iterator UI = OldPhi->use_begin(), UE = OldPhi->use_end();
-         UI != UE; ++UI)
-      AllUses.push_back(cast<Instruction>(UI));
-
-    for (SmallVector<Instruction *, 16>::iterator UI = AllUses.begin(), 
-           UE = AllUses.end(); UI != UE; ++UI) {
-      Instruction *U = *UI;
-      BasicBlock *Parent = U->getParent();
-
-      // Used inside original header
-      if (Parent == OrigHeader) {
-        // Do not rename uses inside original header non-phi instructions.
-        PHINode *PU = dyn_cast<PHINode>(U);
-        if (!PU)
+  // Along with all the other instructions, we just cloned OrigHeader's
+  // terminator into OrigPreHeader. Fix up the PHI nodes in each of OrigHeader's
+  // successors by duplicating their incoming values for OrigHeader.
+  TerminatorInst *TI = OrigHeader->getTerminator();
+  for (unsigned i = 0, e = TI->getNumSuccessors(); i != e; ++i)
+    for (BasicBlock::iterator BI = TI->getSuccessor(i)->begin();
+         PHINode *PN = dyn_cast<PHINode>(BI); ++BI)
+      PN->addIncoming(PN->getIncomingValueForBlock(OrigHeader), OrigPreHeader);
+
+  // Now that OrigPreHeader has a clone of OrigHeader's terminator, remove
+  // OrigPreHeader's old terminator (the original branch into the loop), and
+  // remove the corresponding incoming values from the PHI nodes in OrigHeader.
+  LoopEntryBranch->eraseFromParent();
+  for (I = OrigHeader->begin(); PHINode *PN = dyn_cast<PHINode>(I); ++I)
+    PN->removeIncomingValue(PN->getBasicBlockIndex(OrigPreHeader));
+
+  // Now fix up users of the instructions in OrigHeader, inserting PHI nodes
+  // as necessary.
+  SSAUpdater SSA;
+  for (I = OrigHeader->begin(); I != E; ++I) {
+    Value *OrigHeaderVal = I;
+    Value *OrigPreHeaderVal = ValueMap[OrigHeaderVal];
+
+    // The value now exists in two versions: the initial value in the preheader
+    // and the loop "next" value in the original header.
+    SSA.Initialize(OrigHeaderVal);
+    SSA.AddAvailableValue(OrigHeader, OrigHeaderVal);
+    SSA.AddAvailableValue(OrigPreHeader, OrigPreHeaderVal);
+
+    // Visit each use of the OrigHeader instruction.
+    for (Value::use_iterator UI = OrigHeaderVal->use_begin(),
+         UE = OrigHeaderVal->use_end(); UI != UE; ) {
+      // Grab the use before incrementing the iterator.
+      Use &U = UI.getUse();
+
+      // Increment the iterator before removing the use from the list.
+      ++UI;
+
+      // SSAUpdater can't handle a non-PHI use in the same block as an
+      // earlier def. We can easily handle those cases manually.
+      Instruction *UserInst = cast<Instruction>(U.getUser());
+      if (!isa<PHINode>(UserInst)) {
+        BasicBlock *UserBB = UserInst->getParent();
+
+        // The original users in the OrigHeader are already using the
+        // original definitions.
+        if (UserBB == OrigHeader)
           continue;
 
-        // Do not rename uses inside original header phi nodes, if the
-        // incoming value is for new header.
-        if (PU->getBasicBlockIndex(NewHeader) != -1
-            && PU->getIncomingValueForBlock(NewHeader) == U)
+        // Users in the OrigPreHeader need to use the value to which the
+        // original definitions are mapped.
+        if (UserBB == OrigPreHeader) {
+          U = OrigPreHeaderVal;
           continue;
-        
-       U->replaceUsesOfWith(OldPhi, NewPhi);
-       continue;
+        }
       }
 
-      // Used inside loop, but not in original header.
-      if (L->contains(U->getParent())) {
-        if (U != NewPhi)
-          U->replaceUsesOfWith(OldPhi, NewPhi);
-        continue;
-      }
-      
-      // Used inside Exit Block. Since we are in LCSSA form, U must be PHINode.
-      if (U->getParent() == Exit) {
-        assert(isa<PHINode>(U) && "Use in Exit Block that is not PHINode");
-        
-        PHINode *UPhi = cast<PHINode>(U);
-        // UPhi already has one incoming argument from original header. 
-        // Add second incoming argument from new Pre header.
-        UPhi->addIncoming(ILoopHeaderInfo.PreHeader, OrigPreHeader);
-      } else {
-        // Used outside Exit block. Create a new PHI node in the exit block
-        // to receive the value from the new header and pre-header.
-        PHINode *PN = PHINode::Create(U->getType(), U->getName(),
-                                      Exit->begin());
-        PN->addIncoming(ILoopHeaderInfo.PreHeader, OrigPreHeader);
-        PN->addIncoming(OldPhi, OrigHeader);
-        U->replaceUsesOfWith(OldPhi, PN);
-      }
+      // Anything else can be handled by SSAUpdater.
+      SSA.RewriteUse(U);
     }
   }
-  
-  /// Make sure all Exit block PHINodes have required incoming values.
-  updateExitBlock();
-
-  // Update CFG
-
-  // Removing incoming branch from loop preheader to original header.
-  // Now original header is inside the loop.
-  for (BasicBlock::iterator I = OrigHeader->begin();
-       (PN = dyn_cast<PHINode>(I)); ++I)
-    PN->removeIncomingValue(OrigPreHeader);
 
-  // Make NewHeader as the new header for the loop.
+  // NewHeader is now the header of the loop.
   L->moveToHeader(NewHeader);
 
   preserveCanonicalLoopForm(LPM);
@@ -369,31 +267,6 @@ bool LoopRotate::rotateLoop(Loop *Lp, LPPassManager &LPM) {
   return true;
 }
 
-/// Make sure all Exit block PHINodes have required incoming values.
-/// If an incoming value is constant or defined outside the loop then
-/// PHINode may not have an entry for the original pre-header.
-void LoopRotate::updateExitBlock() {
-
-  PHINode *PN;
-  for (BasicBlock::iterator I = Exit->begin();
-       (PN = dyn_cast<PHINode>(I)); ++I) {
-
-    // There is already one incoming value from original pre-header block.
-    if (PN->getBasicBlockIndex(OrigPreHeader) != -1)
-      continue;
-
-    const RenameData *ILoopHeaderInfo;
-    Value *V = PN->getIncomingValueForBlock(OrigHeader);
-    if (isa<Instruction>(V) &&
-        (ILoopHeaderInfo = findReplacementData(cast<Instruction>(V)))) {
-      assert(ILoopHeaderInfo->PreHeader && "Missing New Preheader Instruction");
-      PN->addIncoming(ILoopHeaderInfo->PreHeader, OrigPreHeader);
-    } else {
-      PN->addIncoming(V, OrigPreHeader);
-    }
-  }
-}
-
 /// Initialize local data
 void LoopRotate::initialize() {
   L = NULL;
@@ -401,34 +274,6 @@ void LoopRotate::initialize() {
   OrigPreHeader = NULL;
   NewHeader = NULL;
   Exit = NULL;
-
-  LoopHeaderInfo.clear();
-}
-
-/// Return true if this instruction is used by any instructions in the loop that
-/// aren't in original header.
-bool LoopRotate::usedOutsideOriginalHeader(Instruction *In) {
-  for (Value::use_iterator UI = In->use_begin(), UE = In->use_end();
-       UI != UE; ++UI) {
-    BasicBlock *UserBB = cast<Instruction>(UI)->getParent();
-    if (UserBB != OrigHeader && L->contains(UserBB))
-      return true;
-  }
-
-  return false;
-}
-
-/// Find Replacement information for instruction. Return NULL if it is
-/// not available.
-const RenameData *LoopRotate::findReplacementData(Instruction *In) {
-
-  // Since LoopHeaderInfo is small, linear walk is OK.
-  for (unsigned LHI = 0, LHI_E = LoopHeaderInfo.size(); LHI != LHI_E; ++LHI) {
-    const RenameData &ILoopHeaderInfo = LoopHeaderInfo[LHI];
-    if (ILoopHeaderInfo.Original == In)
-      return &ILoopHeaderInfo;
-  }
-  return NULL;
 }
 
 /// After loop rotation, loop pre-header has multiple successors.
@@ -443,7 +288,7 @@ void LoopRotate::preserveCanonicalLoopForm(LPPassManager &LPM) {
                                                 "bb.nph",
                                                 OrigHeader->getParent(), 
                                                 NewHeader);
-  LoopInfo &LI = LPM.getAnalysis<LoopInfo>();
+  LoopInfo &LI = getAnalysis<LoopInfo>();
   if (Loop *PL = LI.getLoopFor(OrigPreHeader))
     PL->addBasicBlockToLoop(NewPreHeader, LI.getBase());
   BranchInst::Create(NewHeader, NewPreHeader);
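
The heart of the rewritten rotateLoop is the SSAUpdater pattern: clone the header's instructions into the preheader, register both definitions, and let SSAUpdater insert whatever PHI nodes the remaining uses require. A stripped-down sketch of that pattern (omitting the same-block special cases the patch handles manually; the wrapper function is invented for illustration):

    #include "llvm/BasicBlock.h"
    #include "llvm/Value.h"
    #include "llvm/Transforms/Utils/SSAUpdater.h"
    using namespace llvm;

    // OrigHeaderVal is defined in OrigHeader; OrigPreHeaderVal is its
    // clone in OrigPreHeader.  Rewrite every use to see the right
    // version, creating PHI nodes at join points as needed.
    static void rewriteUsesViaSSAUpdater(Value *OrigHeaderVal,
                                         Value *OrigPreHeaderVal,
                                         BasicBlock *OrigHeader,
                                         BasicBlock *OrigPreHeader) {
      SSAUpdater SSA;
      SSA.Initialize(OrigHeaderVal);
      SSA.AddAvailableValue(OrigHeader, OrigHeaderVal);
      SSA.AddAvailableValue(OrigPreHeader, OrigPreHeaderVal);
      for (Value::use_iterator UI = OrigHeaderVal->use_begin(),
           UE = OrigHeaderVal->use_end(); UI != UE; ) {
        Use &U = UI.getUse();
        ++UI;  // advance first: RewriteUse may remove U from the use list
        SSA.RewriteUse(U);
      }
    }
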
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
index d8f6cc1..564c7ac 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
@@ -51,6 +51,7 @@ STATISTIC(NumEliminated,  "Number of strides eliminated");
 STATISTIC(NumShadow,      "Number of Shadow IVs optimized");
 STATISTIC(NumImmSunk,     "Number of common expr immediates sunk into uses");
 STATISTIC(NumLoopCond,    "Number of loop terminating conds optimized");
+STATISTIC(NumCountZero,   "Number of count iv optimized to count toward zero");
 
 static cl::opt<bool> EnableFullLSRMode("enable-full-lsr",
                                        cl::init(false),
@@ -107,7 +108,7 @@ namespace {
 
   public:
     static char ID; // Pass ID, replacement for typeid
-    explicit LoopStrengthReduce(const TargetLowering *tli = NULL) : 
+    explicit LoopStrengthReduce(const TargetLowering *tli = NULL) :
       LoopPass(&ID), TLI(tli) {
     }
 
@@ -131,12 +132,10 @@ namespace {
     }
 
   private:
-    ICmpInst *ChangeCompareStride(Loop *L, ICmpInst *Cond,
-                                  IVStrideUse* &CondUse,
-                                  const SCEV *const *  &CondStride);
-
     void OptimizeIndvars(Loop *L);
-    void OptimizeLoopCountIV(Loop *L);
+
+    /// OptimizeLoopTermCond - Change loop terminating condition to use the
+    /// postinc iv when possible.
     void OptimizeLoopTermCond(Loop *L);
 
     /// OptimizeShadowIV - If IV is used in an int-to-float cast
@@ -148,8 +147,28 @@ namespace {
     ICmpInst *OptimizeMax(Loop *L, ICmpInst *Cond,
                           IVStrideUse* &CondUse);
 
+    /// OptimizeLoopCountIV - If, after all sharing of IVs, the IV used for
+    /// deciding when to exit the loop is used only for that purpose, try to
+    /// rearrange things so it counts down to a test against zero.
+    bool OptimizeLoopCountIV(Loop *L);
+    bool OptimizeLoopCountIVOfStride(const SCEV* &Stride,
+                                     IVStrideUse* &CondUse, Loop *L);
+
+    /// StrengthReduceIVUsersOfStride - Strength reduce all of the users of a
+    /// single stride of IV.  All of the users may have different starting
+    /// values, and this may not be the only stride.
+    void StrengthReduceIVUsersOfStride(const SCEV *const &Stride,
+                                      IVUsersOfOneStride &Uses,
+                                      Loop *L);
+    void StrengthReduceIVUsers(Loop *L);
+
+    ICmpInst *ChangeCompareStride(Loop *L, ICmpInst *Cond,
+                                  IVStrideUse* &CondUse,
+                                  const SCEV* &CondStride,
+                                  bool PostPass = false);
+
     bool FindIVUserForCond(ICmpInst *Cond, IVStrideUse *&CondUse,
-                           const SCEV *const * &CondStride);
+                           const SCEV* &CondStride);
     bool RequiresTypeConversion(const Type *Ty, const Type *NewTy);
     const SCEV *CheckForIVReuse(bool, bool, bool, const SCEV *const&,
                              IVExpr&, const Type*,
@@ -164,6 +183,7 @@ namespace {
                               bool &AllUsesAreAddresses,
                               bool &AllUsesAreOutsideLoop,
                               std::vector<BasedUser> &UsersToProcess);
+    bool StrideMightBeShared(const SCEV *Stride, Loop *L, bool CheckPreInc);
     bool ShouldUseFullStrengthReductionMode(
                                 const std::vector<BasedUser> &UsersToProcess,
                                 const Loop *L,
@@ -188,9 +208,7 @@ namespace {
                                   Instruction *IVIncInsertPt,
                                   const Loop *L,
                                   SCEVExpander &PreheaderRewriter);
-    void StrengthReduceStridedIVUsers(const SCEV *const &Stride,
-                                      IVUsersOfOneStride &Uses,
-                                      Loop *L);
+
     void DeleteTriviallyDeadInstructions();
   };
 }
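
OptimizeLoopCountIV, declared above, rewrites a loop whose induction variable feeds only the exit test so that it counts down to a test against zero. In source terms the intended effect is roughly this (plain C++ for illustration, not LLVM IR):

    void work() {}

    // Before: i exists only to feed the exit compare.
    void countUp(int n) {
      for (int i = 0; i != n; ++i)
        work();
    }

    // After: equivalent for n >= 0, but counting toward zero.  Compares
    // against zero are cheap, and many targets can fold the decrement
    // into the exit test.
    void countDown(int n) {
      for (int c = n; c != 0; --c)
        work();
    }
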
@@ -208,11 +226,11 @@ Pass *llvm::createLoopStrengthReducePass(const TargetLowering *TLI) {
 /// their operands subsequently dead.
 void LoopStrengthReduce::DeleteTriviallyDeadInstructions() {
   if (DeadInsts.empty()) return;
-  
+
   while (!DeadInsts.empty()) {
     Instruction *I = dyn_cast_or_null<Instruction>(DeadInsts.back());
     DeadInsts.pop_back();
-    
+
     if (I == 0 || !isInstructionTriviallyDead(I))
       continue;
 
@@ -223,14 +241,14 @@ void LoopStrengthReduce::DeleteTriviallyDeadInstructions() {
           DeadInsts.push_back(U);
       }
     }
-    
+
     I->eraseFromParent();
     Changed = true;
   }
 }
 
-/// containsAddRecFromDifferentLoop - Determine whether expression S involves a 
-/// subexpression that is an AddRec from a loop other than L.  An outer loop 
+/// containsAddRecFromDifferentLoop - Determine whether expression S involves a
+/// subexpression that is an AddRec from a loop other than L.  An outer loop
 /// of L is OK, but not an inner loop nor a disjoint loop.
 static bool containsAddRecFromDifferentLoop(const SCEV *S, Loop *L) {
   // This is very common, put it first.
@@ -256,7 +274,7 @@ static bool containsAddRecFromDifferentLoop(const SCEV *S, Loop *L) {
     return containsAddRecFromDifferentLoop(DE->getLHS(), L) ||
            containsAddRecFromDifferentLoop(DE->getRHS(), L);
 #if 0
-  // SCEVSDivExpr has been backed out temporarily, but will be back; we'll 
+  // SCEVSDivExpr has been backed out temporarily, but will be back; we'll
   // need this when it is.
   if (const SCEVSDivExpr *DE = dyn_cast<SCEVSDivExpr>(S))
     return containsAddRecFromDifferentLoop(DE->getLHS(), L) ||
@@ -328,7 +346,7 @@ namespace {
     /// field to the Imm field (below).  BasedUser values are sorted by this
     /// field.
     const SCEV *Base;
-    
+
     /// Inst - The instruction using the induction variable.
     Instruction *Inst;
 
@@ -352,11 +370,11 @@ namespace {
     // instruction for a loop and uses outside the loop that are dominated by
     // the loop.
     bool isUseOfPostIncrementedValue;
-    
+
     BasedUser(IVStrideUse &IVSU, ScalarEvolution *se)
       : SE(se), Base(IVSU.getOffset()), Inst(IVSU.getUser()),
         OperandValToReplace(IVSU.getOperandValToReplace()),
-        Imm(SE->getIntegerSCEV(0, Base->getType())), 
+        Imm(SE->getIntegerSCEV(0, Base->getType())),
         isUseOfPostIncrementedValue(IVSU.isUseOfPostIncrementedValue()) {}
 
     // Once we rewrite the code to insert the new IVs we want, update the
@@ -367,8 +385,8 @@ namespace {
                                        SCEVExpander &Rewriter, Loop *L, Pass *P,
                                         LoopInfo &LI,
                                         SmallVectorImpl<WeakVH> &DeadInsts);
-    
-    Value *InsertCodeForBaseAtPosition(const SCEV *const &NewBase, 
+
+    Value *InsertCodeForBaseAtPosition(const SCEV *const &NewBase,
                                        const Type *Ty,
                                        SCEVExpander &Rewriter,
                                        Instruction *IP, Loop *L,
@@ -383,7 +401,7 @@ void BasedUser::dump() const {
   errs() << "   Inst: " << *Inst;
 }
 
-Value *BasedUser::InsertCodeForBaseAtPosition(const SCEV *const &NewBase, 
+Value *BasedUser::InsertCodeForBaseAtPosition(const SCEV *const &NewBase,
                                               const Type *Ty,
                                               SCEVExpander &Rewriter,
                                               Instruction *IP, Loop *L,
@@ -393,10 +411,10 @@ Value *BasedUser::InsertCodeForBaseAtPosition(const SCEV *const &NewBase,
   // want to insert this expression before the user, we'd rather pull it out as
   // many loops as possible.
   Instruction *BaseInsertPt = IP;
-  
+
   // Figure out the most-nested loop that IP is in.
   Loop *InsertLoop = LI.getLoopFor(IP->getParent());
-  
+
   // If InsertLoop is not L, and InsertLoop is nested inside of L, figure out
   // the preheader of the outer-most loop where NewBase is not loop invariant.
   if (L->contains(IP->getParent()))
@@ -404,7 +422,7 @@ Value *BasedUser::InsertCodeForBaseAtPosition(const SCEV *const &NewBase,
       BaseInsertPt = InsertLoop->getLoopPreheader()->getTerminator();
       InsertLoop = InsertLoop->getParentLoop();
     }
-  
+
   Value *Base = Rewriter.expandCodeFor(NewBase, 0, BaseInsertPt);
 
   const SCEV *NewValSCEV = SE->getUnknown(Base);
@@ -430,7 +448,7 @@ void BasedUser::RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
   if (!isa<PHINode>(Inst)) {
     // By default, insert code at the user instruction.
     BasicBlock::iterator InsertPt = Inst;
-    
+
     // However, if the Operand is itself an instruction, the (potentially
     // complex) inserted code may be shared by many users.  Because of this, we
     // want to emit code for the computation of the operand right before its old
@@ -442,7 +460,7 @@ void BasedUser::RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
     //
     // If this is a use outside the loop (which means after, since it is based
     // on a loop indvar) we use the post-incremented value, so that we don't
-    // artificially make the preinc value live out the bottom of the loop. 
+    // artificially make the preinc value live out the bottom of the loop.
     if (!isUseOfPostIncrementedValue && L->contains(Inst->getParent())) {
       if (NewBasePt && isa<PHINode>(OperandValToReplace)) {
         InsertPt = NewBasePt;
@@ -477,7 +495,7 @@ void BasedUser::RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
     if (PN->getIncomingValue(i) == OperandValToReplace) {
       // If the original expression is outside the loop, put the replacement
       // code in the same place as the original expression,
-      // which need not be an immediate predecessor of this PHI.  This way we 
+      // which need not be an immediate predecessor of this PHI.  This way we
       // need only one copy of it even if it is referenced multiple times in
       // the PHI.  We don't do this when the original expression is inside the
       // loop because multiple copies sometimes do useful sinking of code in
@@ -490,6 +508,7 @@ void BasedUser::RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
         // is the canonical backedge for this loop, as this can make some
         // inserted code be in an illegal position.
         if (e != 1 && PHIPred->getTerminator()->getNumSuccessors() > 1 &&
+            !isa<IndirectBrInst>(PHIPred->getTerminator()) &&
             (PN->getParent() != L->getHeader() || !L->contains(PHIPred))) {
 
           // First step, split the critical edge.
@@ -572,11 +591,11 @@ static bool fitsInAddressMode(const SCEV *const &V, const Type *AccessTy,
 static void MoveLoopVariantsToImmediateField(const SCEV *&Val, const SCEV *&Imm,
                                              Loop *L, ScalarEvolution *SE) {
   if (Val->isLoopInvariant(L)) return;  // Nothing to do.
-  
+
   if (const SCEVAddExpr *SAE = dyn_cast<SCEVAddExpr>(Val)) {
     SmallVector<const SCEV *, 4> NewOps;
     NewOps.reserve(SAE->getNumOperands());
-    
+
     for (unsigned i = 0; i != SAE->getNumOperands(); ++i)
       if (!SAE->getOperand(i)->isLoopInvariant(L)) {
         // If this is a loop-variant expression, it must stay in the immediate
@@ -594,7 +613,7 @@ static void MoveLoopVariantsToImmediateField(const SCEV *&Val, const SCEV *&Imm,
     // Try to pull immediates out of the start value of nested addrec's.
     const SCEV *Start = SARE->getStart();
     MoveLoopVariantsToImmediateField(Start, Imm, L, SE);
-    
+
     SmallVector<const SCEV *, 4> Ops(SARE->op_begin(), SARE->op_end());
     Ops[0] = Start;
     Val = SE->getAddRecExpr(Ops, SARE->getLoop());
@@ -617,11 +636,11 @@ static void MoveImmediateValues(const TargetLowering *TLI,
   if (const SCEVAddExpr *SAE = dyn_cast<SCEVAddExpr>(Val)) {
     SmallVector<const SCEV *, 4> NewOps;
     NewOps.reserve(SAE->getNumOperands());
-    
+
     for (unsigned i = 0; i != SAE->getNumOperands(); ++i) {
       const SCEV *NewOp = SAE->getOperand(i);
       MoveImmediateValues(TLI, AccessTy, NewOp, Imm, isAddress, L, SE);
-      
+
       if (!NewOp->isLoopInvariant(L)) {
         // If this is a loop-variant expression, it must stay in the immediate
         // field of the expression.
@@ -640,7 +659,7 @@ static void MoveImmediateValues(const TargetLowering *TLI,
     // Try to pull immediates out of the start value of nested addrec's.
     const SCEV *Start = SARE->getStart();
     MoveImmediateValues(TLI, AccessTy, Start, Imm, isAddress, L, SE);
-    
+
     if (Start != SARE->getStart()) {
       SmallVector<const SCEV *, 4> Ops(SARE->op_begin(), SARE->op_end());
       Ops[0] = Start;
@@ -656,8 +675,8 @@ static void MoveImmediateValues(const TargetLowering *TLI,
       const SCEV *SubImm = SE->getIntegerSCEV(0, Val->getType());
       const SCEV *NewOp = SME->getOperand(1);
       MoveImmediateValues(TLI, AccessTy, NewOp, SubImm, isAddress, L, SE);
-      
-      // If we extracted something out of the subexpressions, see if we can 
+
+      // If we extracted something out of the subexpressions, see if we can
       // simplify this!
       if (NewOp != SME->getOperand(1)) {
         // Scale SubImm up by "8".  If the result is a target constant, we are
@@ -666,7 +685,7 @@ static void MoveImmediateValues(const TargetLowering *TLI,
         if (fitsInAddressMode(SubImm, AccessTy, TLI, false)) {
           // Accumulate the immediate.
           Imm = SE->getAddExpr(Imm, SubImm);
-          
+
           // Update what is left of 'Val'.
           Val = SE->getMulExpr(SME->getOperand(0), NewOp);
           return;
@@ -714,7 +733,7 @@ static void SeparateSubExprs(SmallVector<const SCEV *, 16> &SubExprs,
       SmallVector<const SCEV *, 4> Ops(SARE->op_begin(), SARE->op_end());
       Ops[0] = Zero;   // Start with zero base.
       SubExprs.push_back(SE->getAddRecExpr(Ops, SARE->getLoop()));
-      
+
 
       SeparateSubExprs(SubExprs, SARE->getOperand(0), SE);
     }
@@ -724,7 +743,7 @@ static void SeparateSubExprs(SmallVector<const SCEV *, 16> &SubExprs,
   }
 }
 
-// This is logically local to the following function, but C++ says we have 
+// This is logically local to the following function, but C++ says we have
 // to make it file scope.
 struct SubExprUseData { unsigned Count; bool notAllUsesAreFree; };
 
@@ -762,7 +781,7 @@ RemoveCommonExpressionsFromUseBases(std::vector<BasedUser> &Uses,
   // an addressing mode "for free"; such expressions are left within the loop.
   // struct SubExprUseData { unsigned Count; bool notAllUsesAreFree; };
   std::map<const SCEV *, SubExprUseData> SubExpressionUseData;
-  
+
   // UniqueSubExprs - Keep track of all of the subexpressions we see in the
   // order we see them.
   SmallVector<const SCEV *, 16> UniqueSubExprs;
@@ -779,7 +798,7 @@ RemoveCommonExpressionsFromUseBases(std::vector<BasedUser> &Uses,
     if (!L->contains(Uses[i].Inst->getParent()))
       continue;
     NumUsesInsideLoop++;
-    
+
     // If the base is zero (which is common), return zero now, there are no
     // CSEs we can find.
     if (Uses[i].Base == Zero) return Zero;
@@ -811,13 +830,13 @@ RemoveCommonExpressionsFromUseBases(std::vector<BasedUser> &Uses,
   // Now that we know how many times each is used, build Result.  Iterate over
   // UniqueSubexprs so that we have a stable ordering.
   for (unsigned i = 0, e = UniqueSubExprs.size(); i != e; ++i) {
-    std::map<const SCEV *, SubExprUseData>::iterator I = 
+    std::map<const SCEV *, SubExprUseData>::iterator I =
        SubExpressionUseData.find(UniqueSubExprs[i]);
     assert(I != SubExpressionUseData.end() && "Entry not found?");
-    if (I->second.Count == NumUsesInsideLoop) { // Found CSE! 
+    if (I->second.Count == NumUsesInsideLoop) { // Found CSE!
       if (I->second.notAllUsesAreFree)
         Result = SE->getAddExpr(Result, I->first);
-      else 
+      else
         FreeResult = SE->getAddExpr(FreeResult, I->first);
     } else
       // Remove non-cse's from SubExpressionUseData.
@@ -849,13 +868,13 @@ RemoveCommonExpressionsFromUseBases(std::vector<BasedUser> &Uses,
 
   // If we found no CSE's, return now.
   if (Result == Zero) return Result;
-  
+
   // If we still have a FreeResult, remove its subexpressions from
   // SubExpressionUseData.  This means they will remain in the use Bases.
   if (FreeResult != Zero) {
     SeparateSubExprs(SubExprs, FreeResult, SE);
     for (unsigned j = 0, e = SubExprs.size(); j != e; ++j) {
-      std::map<const SCEV *, SubExprUseData>::iterator I = 
+      std::map<const SCEV *, SubExprUseData>::iterator I =
          SubExpressionUseData.find(SubExprs[j]);
       SubExpressionUseData.erase(I);
     }
@@ -882,7 +901,7 @@ RemoveCommonExpressionsFromUseBases(std::vector<BasedUser> &Uses,
         SubExprs.erase(SubExprs.begin()+j);
         --j; --e;
       }
-    
+
     // Finally, add the non-shared expressions together.
     if (SubExprs.empty())
       Uses[i].Base = Zero;
@@ -890,11 +909,11 @@ RemoveCommonExpressionsFromUseBases(std::vector<BasedUser> &Uses,
       Uses[i].Base = SE->getAddExpr(SubExprs);
     SubExprs.clear();
   }
- 
+
   return Result;
 }
 
-/// ValidScale - Check whether the given Scale is valid for all loads and 
+/// ValidScale - Check whether the given Scale is valid for all loads and
 /// stores in UsersToProcess.
 ///
 bool LoopStrengthReduce::ValidScale(bool HasBaseReg, int64_t Scale,
@@ -911,7 +930,7 @@ bool LoopStrengthReduce::ValidScale(bool HasBaseReg, int64_t Scale,
       AccessTy = getAccessType(UsersToProcess[i].Inst);
     else if (isa<PHINode>(UsersToProcess[i].Inst))
       continue;
-    
+
     TargetLowering::AddrMode AM;
     if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(UsersToProcess[i].Imm))
       AM.BaseOffs = SC->getValue()->getSExtValue();
@@ -983,13 +1002,13 @@ bool LoopStrengthReduce::RequiresTypeConversion(const Type *Ty1,
 /// reuse is possible.  Factors can be negative on some targets, e.g. ARM.
 ///
 /// If all uses are outside the loop, we don't require that all multiplies
-/// be folded into the addressing mode, nor even that the factor be constant; 
-/// a multiply (executed once) outside the loop is better than another IV 
+/// be folded into the addressing mode, nor even that the factor be constant;
+/// a multiply (executed once) outside the loop is better than another IV
 /// within.  Well, usually.
 const SCEV *LoopStrengthReduce::CheckForIVReuse(bool HasBaseReg,
                                 bool AllUsesAreAddresses,
                                 bool AllUsesAreOutsideLoop,
-                                const SCEV *const &Stride, 
+                                const SCEV *const &Stride,
                                 IVExpr &IV, const Type *Ty,
                                 const std::vector<BasedUser>& UsersToProcess) {
   if (StrideNoReuse.count(Stride))
@@ -999,11 +1018,16 @@ const SCEV *LoopStrengthReduce::CheckForIVReuse(bool HasBaseReg,
     int64_t SInt = SC->getValue()->getSExtValue();
     for (unsigned NewStride = 0, e = IU->StrideOrder.size();
          NewStride != e; ++NewStride) {
-      std::map<const SCEV *, IVsOfOneStride>::iterator SI = 
+      std::map<const SCEV *, IVsOfOneStride>::iterator SI =
                 IVsByStride.find(IU->StrideOrder[NewStride]);
       if (SI == IVsByStride.end() || !isa<SCEVConstant>(SI->first) ||
           StrideNoReuse.count(SI->first))
         continue;
+      // The other stride has no uses; don't reuse it.
+      std::map<const SCEV *, IVUsersOfOneStride *>::iterator UI =
+        IU->IVUsesByStride.find(IU->StrideOrder[NewStride]);
+      if (UI->second->Users.empty())
+        continue;
       int64_t SSInt = cast<SCEVConstant>(SI->first)->getValue()->getSExtValue();
       if (SI->first != Stride &&
           (unsigned(abs64(SInt)) < SSInt || (SInt % SSInt) != 0))
@@ -1052,7 +1076,7 @@ const SCEV *LoopStrengthReduce::CheckForIVReuse(bool HasBaseReg,
     // an existing IV if we can.
     for (unsigned NewStride = 0, e = IU->StrideOrder.size();
          NewStride != e; ++NewStride) {
-      std::map<const SCEV *, IVsOfOneStride>::iterator SI = 
+      std::map<const SCEV *, IVsOfOneStride>::iterator SI =
                 IVsByStride.find(IU->StrideOrder[NewStride]);
       if (SI == IVsByStride.end() || !isa<SCEVConstant>(SI->first))
         continue;
@@ -1072,9 +1096,9 @@ const SCEV *LoopStrengthReduce::CheckForIVReuse(bool HasBaseReg,
     // -1*old.
     for (unsigned NewStride = 0, e = IU->StrideOrder.size();
          NewStride != e; ++NewStride) {
-      std::map<const SCEV *, IVsOfOneStride>::iterator SI = 
+      std::map<const SCEV *, IVsOfOneStride>::iterator SI =
                 IVsByStride.find(IU->StrideOrder[NewStride]);
-      if (SI == IVsByStride.end()) 
+      if (SI == IVsByStride.end())
         continue;
       if (const SCEVMulExpr *ME = dyn_cast<SCEVMulExpr>(SI->first))
         if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(ME->getOperand(0)))
@@ -1104,18 +1128,18 @@ static bool PartitionByIsUseOfPostIncrementedValue(const BasedUser &Val) {
 static bool isNonConstantNegative(const SCEV *const &Expr) {
   const SCEVMulExpr *Mul = dyn_cast<SCEVMulExpr>(Expr);
   if (!Mul) return false;
-  
+
   // If there is a constant factor, it will be first.
   const SCEVConstant *SC = dyn_cast<SCEVConstant>(Mul->getOperand(0));
   if (!SC) return false;
-  
+
   // Return true if the value is negative; this matches things like (-42 * V).
   return SC->getValue()->getValue().isNegative();
 }
 
 /// CollectIVUsers - Transform our list of users and offsets to a bit more
-/// complex table. In this new vector, each 'BasedUser' contains 'Base', the base
-/// of the strided accesses, as well as the old information from Uses. We
+/// complex table. In this new vector, each 'BasedUser' contains 'Base', the
+/// base of the strided accesses, as well as the old information from Uses. We
 /// progressively move information from the Base field to the Imm field, until
 /// we eventually have the full access expression to rewrite the use.
 const SCEV *LoopStrengthReduce::CollectIVUsers(const SCEV *const &Stride,
@@ -1145,7 +1169,7 @@ const SCEV *LoopStrengthReduce::CollectIVUsers(const SCEV *const &Stride,
   // We now have a whole bunch of uses of like-strided induction variables, but
   // they might all have different bases.  We want to emit one PHI node for this
   // stride which we fold as many common expressions (between the IVs) into as
-  // possible.  Start by identifying the common expressions in the base values 
+  // possible.  Start by identifying the common expressions in the base values
   // for the strides (e.g. if we have "A+C+B" and "A+B+D" as our bases, find
   // "A+B"), emit it to the preheader, then remove the expression from the
   // UsersToProcess base values.
@@ -1165,11 +1189,11 @@ const SCEV *LoopStrengthReduce::CollectIVUsers(const SCEV *const &Stride,
     if (!L->contains(UsersToProcess[i].Inst->getParent())) {
       UsersToProcess[i].Imm = SE->getAddExpr(UsersToProcess[i].Imm,
                                              UsersToProcess[i].Base);
-      UsersToProcess[i].Base = 
+      UsersToProcess[i].Base =
         SE->getIntegerSCEV(0, UsersToProcess[i].Base->getType());
     } else {
       // Not all uses are outside the loop.
-      AllUsesAreOutsideLoop = false; 
+      AllUsesAreOutsideLoop = false;
 
       // Addressing modes can be folded into loads and stores.  Be careful that
       // the store is through the expression, not of the expression though.
@@ -1183,11 +1207,11 @@ const SCEV *LoopStrengthReduce::CollectIVUsers(const SCEV *const &Stride,
 
       if (isAddress)
         HasAddress = true;
-     
+
       // If this use isn't an address, then not all uses are addresses.
       if (!isAddress && !isPHI)
         AllUsesAreAddresses = false;
-      
+
       MoveImmediateValues(TLI, UsersToProcess[i].Inst, UsersToProcess[i].Base,
                           UsersToProcess[i].Imm, isAddress, L, SE);
     }
@@ -1198,7 +1222,7 @@ const SCEV *LoopStrengthReduce::CollectIVUsers(const SCEV *const &Stride,
   // for one fewer iv.
   if (NumPHI > 1)
     AllUsesAreAddresses = false;
-    
+
   // There are no in-loop address uses.
   if (AllUsesAreAddresses && (!HasAddress && !AllUsesAreOutsideLoop))
     AllUsesAreAddresses = false;
@@ -1491,12 +1515,13 @@ static bool IsImmFoldedIntoAddrMode(GlobalValue *GV, int64_t Offset,
   return true;
 }
 
-/// StrengthReduceStridedIVUsers - Strength reduce all of the users of a single
+/// StrengthReduceIVUsersOfStride - Strength reduce all of the users of a single
 /// stride of IV.  All of the users may have different starting values, and this
 /// may not be the only stride.
-void LoopStrengthReduce::StrengthReduceStridedIVUsers(const SCEV *const &Stride,
-                                                      IVUsersOfOneStride &Uses,
-                                                      Loop *L) {
+void
+LoopStrengthReduce::StrengthReduceIVUsersOfStride(const SCEV *const &Stride,
+                                                  IVUsersOfOneStride &Uses,
+                                                  Loop *L) {
   // If all the users are moved to another stride, then there is nothing to do.
   if (Uses.Users.empty())
     return;
@@ -1518,8 +1543,8 @@ void LoopStrengthReduce::StrengthReduceStridedIVUsers(const SCEV *const &Stride,
   // have the full access expression to rewrite the use.
   std::vector<BasedUser> UsersToProcess;
   const SCEV *CommonExprs = CollectIVUsers(Stride, Uses, L, AllUsesAreAddresses,
-                                          AllUsesAreOutsideLoop,
-                                          UsersToProcess);
+                                           AllUsesAreOutsideLoop,
+                                           UsersToProcess);
 
   // Sort the UsersToProcess array so that users with common bases are
   // next to each other.
@@ -1588,12 +1613,12 @@ void LoopStrengthReduce::StrengthReduceStridedIVUsers(const SCEV *const &Stride,
   const SCEV *RewriteFactor = SE->getIntegerSCEV(0, ReplacedTy);
   IVExpr   ReuseIV(SE->getIntegerSCEV(0,
                                     Type::getInt32Ty(Preheader->getContext())),
-                   SE->getIntegerSCEV(0, 
+                   SE->getIntegerSCEV(0,
                                     Type::getInt32Ty(Preheader->getContext())),
                    0);
 
-  /// Choose a strength-reduction strategy and prepare for it by creating
-  /// the necessary PHIs and adjusting the bookkeeping.
+  // Choose a strength-reduction strategy and prepare for it by creating
+  // the necessary PHIs and adjusting the bookkeeping.
   if (ShouldUseFullStrengthReductionMode(UsersToProcess, L,
                                          AllUsesAreAddresses, Stride)) {
     PrepareToStrengthReduceFully(UsersToProcess, Stride, CommonExprs, L,
@@ -1606,7 +1631,7 @@ void LoopStrengthReduce::StrengthReduceStridedIVUsers(const SCEV *const &Stride,
     // If all uses are addresses, check if it is possible to reuse an IV.  The
     // new IV must have a stride that is a multiple of the old stride; the
     // multiple must be a number that can be encoded in the scale field of the
-    // target addressing mode; and we must have a valid instruction after this 
+    // target addressing mode; and we must have a valid instruction after this
     // substitution, including the immediate field, if any.
     RewriteFactor = CheckForIVReuse(HaveCommonExprs, AllUsesAreAddresses,
                                     AllUsesAreOutsideLoop,
@@ -1649,7 +1674,7 @@ void LoopStrengthReduce::StrengthReduceStridedIVUsers(const SCEV *const &Stride,
         // We want this constant emitted into the preheader! This is just
         // using cast as a copy so BitCast (no-op cast) is appropriate
         BaseV = new BitCastInst(BaseV, BaseV->getType(), "preheaderinsert",
-                                PreInsertPt);       
+                                PreInsertPt);
       }
     }
 
@@ -1723,7 +1748,7 @@ void LoopStrengthReduce::StrengthReduceStridedIVUsers(const SCEV *const &Stride,
             assert(SE->getTypeSizeInBits(RewriteExpr->getType()) <
                    SE->getTypeSizeInBits(ReuseIV.Base->getType()) &&
                    "Unexpected lengthening conversion!");
-            typedBase = SE->getTruncateExpr(ReuseIV.Base, 
+            typedBase = SE->getTruncateExpr(ReuseIV.Base,
                                             RewriteExpr->getType());
           }
           RewriteExpr = SE->getMinusSCEV(RewriteExpr, typedBase);
@@ -1775,11 +1800,29 @@ void LoopStrengthReduce::StrengthReduceStridedIVUsers(const SCEV *const &Stride,
   // different starting values, into different PHIs.
 }
 
+void LoopStrengthReduce::StrengthReduceIVUsers(Loop *L) {
+  // Note: this processes each stride/type pair individually.  All users
+  // passed into StrengthReduceIVUsersOfStride have the same type AND stride.
+  // Also, note that we iterate over IVUsesByStride indirectly by using
+  // StrideOrder. This extra layer of indirection makes the ordering of
+  // strides deterministic - not dependent on map order.
+  for (unsigned Stride = 0, e = IU->StrideOrder.size(); Stride != e; ++Stride) {
+    std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
+      IU->IVUsesByStride.find(IU->StrideOrder[Stride]);
+    assert(SI != IU->IVUsesByStride.end() && "Stride doesn't exist!");
+    // FIXME: Generalize to non-affine IV's.
+    if (!SI->first->isLoopInvariant(L))
+      continue;
+    StrengthReduceIVUsersOfStride(SI->first, *SI->second, L);
+  }
+}
+
 /// FindIVUserForCond - If Cond has an operand that is an expression of an IV,
 /// set the IV user and stride information and return true, otherwise return
 /// false.
-bool LoopStrengthReduce::FindIVUserForCond(ICmpInst *Cond, IVStrideUse *&CondUse,
-                                       const SCEV *const * &CondStride) {
+bool LoopStrengthReduce::FindIVUserForCond(ICmpInst *Cond,
+                                           IVStrideUse *&CondUse,
+                                           const SCEV* &CondStride) {
   for (unsigned Stride = 0, e = IU->StrideOrder.size();
        Stride != e && !CondUse; ++Stride) {
     std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
@@ -1793,12 +1836,12 @@ bool LoopStrengthReduce::FindIVUserForCond(ICmpInst *Cond, IVStrideUse *&CondUse
         // InstCombine does it as well for simple uses, it's not clear that it
         // occurs enough in real life to handle.
         CondUse = UI;
-        CondStride = &SI->first;
+        CondStride = SI->first;
         return true;
       }
   }
   return false;
-}    
+}
 
 namespace {
   // Constant strides come first, which in turn are sorted by their absolute
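
The new StrengthReduceIVUsers wrapper, added a few hunks up, deliberately reaches IVUsesByStride through IU->StrideOrder: the map is keyed by SCEV pointers, so walking it directly would process strides in an address-dependent, run-to-run varying order, while StrideOrder records a stable one. The lookup idiom, sketched:

    // Deterministic iteration over the stride -> users map.
    for (unsigned S = 0, e = IU->StrideOrder.size(); S != e; ++S) {
      std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
          IU->IVUsesByStride.find(IU->StrideOrder[S]);
      assert(SI != IU->IVUsesByStride.end() && "Stride doesn't exist!");
      if (!SI->first->isLoopInvariant(L))
        continue;  // the patch's FIXME: affine IVs only for now
      StrengthReduceIVUsersOfStride(SI->first, *SI->second, L);
    }
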
@@ -1851,8 +1894,9 @@ namespace {
 /// v1 = v1 + 3
 /// if (v1 < 30) goto loop
 ICmpInst *LoopStrengthReduce::ChangeCompareStride(Loop *L, ICmpInst *Cond,
-                                                IVStrideUse* &CondUse,
-                                              const SCEV *const* &CondStride) {
+                                                  IVStrideUse* &CondUse,
+                                                  const SCEV* &CondStride,
+                                                  bool PostPass) {
   // If there's only one stride in the loop, there's nothing to do here.
   if (IU->StrideOrder.size() < 2)
     return Cond;
@@ -1860,23 +1904,31 @@ ICmpInst *LoopStrengthReduce::ChangeCompareStride(Loop *L, ICmpInst *Cond,
   // trying to change the condition because the stride will still
   // remain.
   std::map<const SCEV *, IVUsersOfOneStride *>::iterator I =
-    IU->IVUsesByStride.find(*CondStride);
-  if (I == IU->IVUsesByStride.end() ||
-      I->second->Users.size() != 1)
+    IU->IVUsesByStride.find(CondStride);
+  if (I == IU->IVUsesByStride.end())
     return Cond;
+  if (I->second->Users.size() > 1) {
+    for (ilist<IVStrideUse>::iterator II = I->second->Users.begin(),
+           EE = I->second->Users.end(); II != EE; ++II) {
+      if (II->getUser() == Cond)
+        continue;
+      if (!isInstructionTriviallyDead(II->getUser()))
+        return Cond;
+    }
+  }
   // Only handle constant strides for now.
-  const SCEVConstant *SC = dyn_cast<SCEVConstant>(*CondStride);
+  const SCEVConstant *SC = dyn_cast<SCEVConstant>(CondStride);
   if (!SC) return Cond;
 
   ICmpInst::Predicate Predicate = Cond->getPredicate();
   int64_t CmpSSInt = SC->getValue()->getSExtValue();
-  unsigned BitWidth = SE->getTypeSizeInBits((*CondStride)->getType());
+  unsigned BitWidth = SE->getTypeSizeInBits(CondStride->getType());
   uint64_t SignBit = 1ULL << (BitWidth-1);
   const Type *CmpTy = Cond->getOperand(0)->getType();
   const Type *NewCmpTy = NULL;
   unsigned TyBits = SE->getTypeSizeInBits(CmpTy);
   unsigned NewTyBits = 0;
-  const SCEV **NewStride = NULL;
+  const SCEV *NewStride = NULL;
   Value *NewCmpLHS = NULL;
   Value *NewCmpRHS = NULL;
   int64_t Scale = 1;
@@ -1885,16 +1937,31 @@ ICmpInst *LoopStrengthReduce::ChangeCompareStride(Loop *L, ICmpInst *Cond,
   if (ConstantInt *C = dyn_cast<ConstantInt>(Cond->getOperand(1))) {
     int64_t CmpVal = C->getValue().getSExtValue();
 
+    // Check the relevant induction variable for conformance to
+    // the pattern.
+    const SCEV *IV = SE->getSCEV(Cond->getOperand(0));
+    const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(IV);
+    if (!AR || !AR->isAffine())
+      return Cond;
+
+    const SCEVConstant *StartC = dyn_cast<SCEVConstant>(AR->getStart());
     // Check stride constant and the comparison constant signs to detect
     // overflow.
-    if ((CmpVal & SignBit) != (CmpSSInt & SignBit))
-      return Cond;
+    if (StartC) {
+      if ((StartC->getValue()->getSExtValue() < CmpVal && CmpSSInt < 0) ||
+          (StartC->getValue()->getSExtValue() > CmpVal && CmpSSInt > 0))
+        return Cond;
+    } else {
+      // More restrictive check for the other cases.
+      if ((CmpVal & SignBit) != (CmpSSInt & SignBit))
+        return Cond;
+    }
 
     // Look for a suitable stride / iv as replacement.
     for (unsigned i = 0, e = IU->StrideOrder.size(); i != e; ++i) {
       std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
         IU->IVUsesByStride.find(IU->StrideOrder[i]);
-      if (!isa<SCEVConstant>(SI->first))
+      if (!isa<SCEVConstant>(SI->first) || SI->second->Users.empty())
         continue;
       int64_t SSInt = cast<SCEVConstant>(SI->first)->getValue()->getSExtValue();
       if (SSInt == CmpSSInt ||
@@ -1904,6 +1971,14 @@ ICmpInst *LoopStrengthReduce::ChangeCompareStride(Loop *L, ICmpInst *Cond,
 
       Scale = SSInt / CmpSSInt;
       int64_t NewCmpVal = CmpVal * Scale;
+
+      // If the old icmp value fits in the icmp immediate field but the new
+      // one doesn't, try something else.
+      if (TLI &&
+          TLI->isLegalICmpImmediate(CmpVal) &&
+          !TLI->isLegalICmpImmediate(NewCmpVal))
+        continue;
+
       APInt Mul = APInt(BitWidth*2, CmpVal, true);
       Mul = Mul * APInt(BitWidth*2, Scale, true);
       // Check for overflow.
@@ -1914,12 +1989,10 @@ ICmpInst *LoopStrengthReduce::ChangeCompareStride(Loop *L, ICmpInst *Cond,
         continue;
 
       // Watch out for overflow.
-      if (ICmpInst::isSignedPredicate(Predicate) &&
+      if (ICmpInst::isSigned(Predicate) &&
           (CmpVal & SignBit) != (NewCmpVal & SignBit))
         continue;
 
-      if (NewCmpVal == CmpVal)
-        continue;
       // Pick the best iv to use trying to avoid a cast.
       NewCmpLHS = NULL;
       for (ilist<IVStrideUse>::iterator UI = SI->second->Users.begin(),
@@ -1956,7 +2029,7 @@ ICmpInst *LoopStrengthReduce::ChangeCompareStride(Loop *L, ICmpInst *Cond,
         // Check if it is possible to rewrite it using
         // an iv / stride of a smaller integer type.
         unsigned Bits = NewTyBits;
-        if (ICmpInst::isSignedPredicate(Predicate))
+        if (ICmpInst::isSigned(Predicate))
           --Bits;
         uint64_t Mask = (1ULL << Bits) - 1;
         if (((uint64_t)NewCmpVal & Mask) != (uint64_t)NewCmpVal)
@@ -1969,19 +2042,21 @@ ICmpInst *LoopStrengthReduce::ChangeCompareStride(Loop *L, ICmpInst *Cond,
       if (NewTyBits != TyBits && !isa<SCEVConstant>(CondUse->getOffset()))
         continue;
 
-      bool AllUsesAreAddresses = true;
-      bool AllUsesAreOutsideLoop = true;
-      std::vector<BasedUser> UsersToProcess;
-      const SCEV *CommonExprs = CollectIVUsers(SI->first, *SI->second, L,
-                                              AllUsesAreAddresses,
-                                              AllUsesAreOutsideLoop,
-                                              UsersToProcess);
-      // Avoid rewriting the compare instruction with an iv of new stride
-      // if it's likely the new stride uses will be rewritten using the
-      // stride of the compare instruction.
-      if (AllUsesAreAddresses &&
-          ValidScale(!CommonExprs->isZero(), Scale, UsersToProcess))
-        continue;
+      if (!PostPass) {
+        bool AllUsesAreAddresses = true;
+        bool AllUsesAreOutsideLoop = true;
+        std::vector<BasedUser> UsersToProcess;
+        const SCEV *CommonExprs = CollectIVUsers(SI->first, *SI->second, L,
+                                                 AllUsesAreAddresses,
+                                                 AllUsesAreOutsideLoop,
+                                                 UsersToProcess);
+        // Avoid rewriting the compare instruction with an iv of new stride
+        // if it's likely the new stride uses will be rewritten using the
+        // stride of the compare instruction.
+        if (AllUsesAreAddresses &&
+            ValidScale(!CommonExprs->isZero(), Scale, UsersToProcess))
+          continue;
+      }
 
       // Avoid rewriting the compare instruction with an iv which has
       // implicit extension or truncation built into it.
@@ -1994,7 +2069,7 @@ ICmpInst *LoopStrengthReduce::ChangeCompareStride(Loop *L, ICmpInst *Cond,
       if (Scale < 0 && !Cond->isEquality())
         Predicate = ICmpInst::getSwappedPredicate(Predicate);
 
-      NewStride = &IU->StrideOrder[i];
+      NewStride = IU->StrideOrder[i];
       if (!isa<PointerType>(NewCmpTy))
         NewCmpRHS = ConstantInt::get(NewCmpTy, NewCmpVal);
       else {
@@ -2031,13 +2106,16 @@ ICmpInst *LoopStrengthReduce::ChangeCompareStride(Loop *L, ICmpInst *Cond,
     Cond = new ICmpInst(OldCond, Predicate, NewCmpLHS, NewCmpRHS,
                         L->getHeader()->getName() + ".termcond");
 
+    DEBUG(errs() << "    Change compare stride in Inst " << *OldCond);
+    DEBUG(errs() << " to " << *Cond << '\n');
+
     // Remove the old compare instruction. The old indvar is probably dead too.
     DeadInsts.push_back(CondUse->getOperandValToReplace());
     OldCond->replaceAllUsesWith(Cond);
     OldCond->eraseFromParent();
 
-    IU->IVUsesByStride[*NewStride]->addUser(NewOffset, Cond, NewCmpLHS);
-    CondUse = &IU->IVUsesByStride[*NewStride]->Users.back();
+    IU->IVUsesByStride[NewStride]->addUser(NewOffset, Cond, NewCmpLHS);
+    CondUse = &IU->IVUsesByStride[NewStride]->Users.back();
     CondStride = NewStride;
     ++NumEliminated;
     Changed = true;
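
For concreteness, here is the arithmetic this function performs on the compare constant, as a standalone sketch with strides chosen so the result matches the example in the doc comment above (the compare migrates from a v2 += 1 iv tested against 10 to the v1 += 3 iv):

  #include <cassert>
  #include <cstdint>
  #include <cstdio>

  int main() {
    int64_t CmpSSInt = 1;   // stride of the iv the compare currently uses
    int64_t SSInt    = 3;   // stride of the candidate replacement iv
    int64_t CmpVal   = 10;  // original constant: if (v2 < 10)

    assert(SSInt % CmpSSInt == 0 && "candidate must be an exact multiple");
    int64_t Scale     = SSInt / CmpSSInt;  // 3
    int64_t NewCmpVal = CmpVal * Scale;    // 30, i.e. if (v1 < 30)
    std::printf("Scale=%lld NewCmpVal=%lld\n",
                (long long)Scale, (long long)NewCmpVal);
    return 0;
  }

The real code additionally guards against overflow of CmpVal * Scale (via the widened APInt multiply) and, with the new TLI hook, against producing an immediate the target cannot encode.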
@@ -2180,7 +2258,7 @@ void LoopStrengthReduce::OptimizeShadowIV(Loop *L) {
   const SCEV *BackedgeTakenCount = SE->getBackedgeTakenCount(L);
   if (isa<SCEVCouldNotCompute>(BackedgeTakenCount))
     return;
-    
+
   for (unsigned Stride = 0, e = IU->StrideOrder.size(); Stride != e;
        ++Stride) {
     std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
@@ -2199,13 +2277,13 @@ void LoopStrengthReduce::OptimizeShadowIV(Loop *L) {
       /* If shadow use is a int->float cast then insert a second IV
          to eliminate this cast.
 
-           for (unsigned i = 0; i < n; ++i) 
+           for (unsigned i = 0; i < n; ++i)
              foo((double)i);
 
          is transformed into
 
            double d = 0.0;
-           for (unsigned i = 0; i < n; ++i, ++d) 
+           for (unsigned i = 0; i < n; ++i, ++d)
              foo(d);
       */
       if (UIToFPInst *UCast = dyn_cast<UIToFPInst>(CandidateUI->getUser()))
@@ -2227,7 +2305,7 @@ void LoopStrengthReduce::OptimizeShadowIV(Loop *L) {
 
       const Type *SrcTy = PH->getType();
       int Mantissa = DestTy->getFPMantissaWidth();
-      if (Mantissa == -1) continue; 
+      if (Mantissa == -1) continue;
       if ((int)SE->getTypeSizeInBits(SrcTy) > Mantissa)
         continue;
 
@@ -2239,12 +2317,12 @@ void LoopStrengthReduce::OptimizeShadowIV(Loop *L) {
         Entry = 1;
         Latch = 0;
       }
-        
+
       ConstantInt *Init = dyn_cast<ConstantInt>(PH->getIncomingValue(Entry));
       if (!Init) continue;
       Constant *NewInit = ConstantFP::get(DestTy, Init->getZExtValue());
 
-      BinaryOperator *Incr = 
+      BinaryOperator *Incr =
         dyn_cast<BinaryOperator>(PH->getIncomingValue(Latch));
       if (!Incr) continue;
       if (Incr->getOpcode() != Instruction::Add
@@ -2262,12 +2340,16 @@ void LoopStrengthReduce::OptimizeShadowIV(Loop *L) {
 
       if (!C) continue;
 
+      // Ignore negative constants, as the code below doesn't handle them
+      // correctly. TODO: Remove this restriction.
+      if (!C->getValue().isStrictlyPositive()) continue;
+
       /* Add new PHINode. */
       PHINode *NewPH = PHINode::Create(DestTy, "IV.S.", PH);
 
       /* Create new increment, '++d' in the above example. */
       Constant *CFP = ConstantFP::get(DestTy, C->getZExtValue());
-      BinaryOperator *NewIncr = 
+      BinaryOperator *NewIncr =
         BinaryOperator::Create(Incr->getOpcode() == Instruction::Add ?
                                  Instruction::FAdd : Instruction::FSub,
                                NewPH, CFP, "IV.S.next.", Incr);
@@ -2293,237 +2375,385 @@ void LoopStrengthReduce::OptimizeIndvars(Loop *L) {
   OptimizeShadowIV(L);
 }
 
-/// OptimizeLoopTermCond - Change loop terminating condition to use the 
+bool LoopStrengthReduce::StrideMightBeShared(const SCEV* Stride, Loop *L,
+                                             bool CheckPreInc) {
+  int64_t SInt = cast<SCEVConstant>(Stride)->getValue()->getSExtValue();
+  for (unsigned i = 0, e = IU->StrideOrder.size(); i != e; ++i) {
+    std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
+      IU->IVUsesByStride.find(IU->StrideOrder[i]);
+    const SCEV *Share = SI->first;
+    if (!isa<SCEVConstant>(SI->first) || Share == Stride)
+      continue;
+    int64_t SSInt = cast<SCEVConstant>(Share)->getValue()->getSExtValue();
+    if (SSInt == SInt)
+      return true; // This can definitely be reused.
+    if (unsigned(abs64(SSInt)) < SInt || (SSInt % SInt) != 0)
+      continue;
+    int64_t Scale = SSInt / SInt;
+    bool AllUsesAreAddresses = true;
+    bool AllUsesAreOutsideLoop = true;
+    std::vector<BasedUser> UsersToProcess;
+    const SCEV *CommonExprs = CollectIVUsers(SI->first, *SI->second, L,
+                                             AllUsesAreAddresses,
+                                             AllUsesAreOutsideLoop,
+                                             UsersToProcess);
+    if (AllUsesAreAddresses &&
+        ValidScale(!CommonExprs->isZero(), Scale, UsersToProcess)) {
+      if (!CheckPreInc)
+        return true;
+      // Any pre-inc iv use?
+      IVUsersOfOneStride &StrideUses = *IU->IVUsesByStride[Share];
+      for (ilist<IVStrideUse>::iterator I = StrideUses.Users.begin(),
+             E = StrideUses.Users.end(); I != E; ++I) {
+        if (!I->isUseOfPostIncrementedValue())
+          return true;
+      }
+    }
+  }
+  return false;
+}
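
A minimal sketch of the divisibility test at the heart of this routine (positive strides assumed for brevity; names are illustrative):

  #include <cstdint>
  #include <cstdio>

  static bool mightShare(int64_t SInt, int64_t SSInt) {
    if (SSInt == SInt)
      return true;                    // identical stride: definitely reusable
    if (SSInt < SInt || SSInt % SInt != 0)
      return false;                   // no integral scale factor exists
    return true;                      // Scale = SSInt / SInt may be foldable
  }

  int main() {
    std::printf("%d\n", mightShare(4, 8)); // 1: stride 8 is 2 x stride 4
    std::printf("%d\n", mightShare(4, 6)); // 0: 6 is not a multiple of 4
    return 0;
  }

The full version also asks, via CollectIVUsers and ValidScale, whether all uses are addresses and whether the implied scale can actually be folded into the target's addressing modes.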
+
+/// isUsedByExitBranch - Return true if the icmp is used by a loop terminating
+/// conditional branch, either directly or and'ed / or'ed with other conditions
+/// before being used as the branch condition.
+static bool isUsedByExitBranch(ICmpInst *Cond, Loop *L) {
+  BasicBlock *CondBB = Cond->getParent();
+  if (!L->isLoopExiting(CondBB))
+    return false;
+  BranchInst *TermBr = dyn_cast<BranchInst>(CondBB->getTerminator());
+  if (!TermBr || !TermBr->isConditional())
+    return false;
+
+  Value *User = *Cond->use_begin();
+  Instruction *UserInst = dyn_cast<Instruction>(User);
+  while (UserInst &&
+         (UserInst->getOpcode() == Instruction::And ||
+          UserInst->getOpcode() == Instruction::Or)) {
+    if (!UserInst->hasOneUse() || UserInst->getParent() != CondBB)
+      return false;
+    User = *User->use_begin();
+    UserInst = dyn_cast<Instruction>(User);
+  }
+  return User == TermBr;
+}
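
The shape this predicate accepts, written as plain C++ (a hypothetical example, not part of the patch): the icmp reaches the exiting branch through at most a chain of single-use and/or instructions in the same block.

  #include <cstdio>

  static int sumFirst(const int *A, int n, bool p) {
    int s = 0;
    for (int i = 0; i < n && p; ++i)  // the icmp (i < n) is and'ed with p
      s += A[i];                      // before reaching the exiting branch
    return s;
  }

  int main() {
    int A[] = {1, 2, 3, 4};
    std::printf("%d\n", sumFirst(A, 4, true)); // prints 10
    return 0;
  }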
+
+static bool ShouldCountToZero(ICmpInst *Cond, IVStrideUse* &CondUse,
+                              ScalarEvolution *SE, Loop *L,
+                              const TargetLowering *TLI = 0) {
+  if (!L->contains(Cond->getParent()))
+    return false;
+
+  if (!isa<SCEVConstant>(CondUse->getOffset()))
+    return false;
+
+  // Handle only tests for equality for the moment.
+  if (!Cond->isEquality() || !Cond->hasOneUse())
+    return false;
+  if (!isUsedByExitBranch(Cond, L))
+    return false;
+
+  Value *CondOp0 = Cond->getOperand(0);
+  const SCEV *IV = SE->getSCEV(CondOp0);
+  const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(IV);
+  if (!AR || !AR->isAffine())
+    return false;
+
+  const SCEVConstant *SC = dyn_cast<SCEVConstant>(AR->getStepRecurrence(*SE));
+  if (!SC || SC->getValue()->getSExtValue() < 0)
+    // If it's already counting down, don't do anything.
+    return false;
+
+  // If the RHS of the comparison is not loop invariant, the rewrite
+  // cannot be done. Also bail out if it's already comparing against zero.
+  // If we are checking this before the cmp stride optimization, also bail
+  // out if it's comparing against an already legal immediate.
+  Value *RHS = Cond->getOperand(1);
+  ConstantInt *RHSC = dyn_cast<ConstantInt>(RHS);
+  if (!L->isLoopInvariant(RHS) ||
+      (RHSC && RHSC->isZero()) ||
+      (RHSC && TLI && TLI->isLegalICmpImmediate(RHSC->getSExtValue())))
+    return false;
+
+  // Make sure the IV is only used for counting.  Value may be preinc or
+  // postinc; 2 uses in either case.
+  if (!CondOp0->hasNUses(2))
+    return false;
+
+  return true;
+}
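
Putting the conditions together, this is the kind of loop the predicate accepts (hypothetical source, assuming a positive constant step): an equality exit test against a loop-invariant bound that isn't already zero, with the iv used for nothing but counting.

  #include <cstdio>

  static void pulse(int n) {
    for (int i = 0; i != n; ++i)  // i has exactly two uses: ++i and i != n
      std::printf(".");
    std::printf("\n");
  }

  int main() { pulse(5); return 0; }  // prints five dots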
+
+/// OptimizeLoopTermCond - Change loop terminating condition to use the
 /// postinc iv when possible.
 void LoopStrengthReduce::OptimizeLoopTermCond(Loop *L) {
-  // Finally, get the terminating condition for the loop if possible.  If we
-  // can, we want to change it to use a post-incremented version of its
-  // induction variable, to allow coalescing the live ranges for the IV into
-  // one register value.
   BasicBlock *LatchBlock = L->getLoopLatch();
-  BasicBlock *ExitingBlock = L->getExitingBlock();
-  
-  if (!ExitingBlock)
-    // Multiple exits, just look at the exit in the latch block if there is one.
-    ExitingBlock = LatchBlock;
-  BranchInst *TermBr = dyn_cast<BranchInst>(ExitingBlock->getTerminator());
-  if (!TermBr)
-    return;
-  if (TermBr->isUnconditional() || !isa<ICmpInst>(TermBr->getCondition()))
-    return;
+  bool LatchExit = L->isLoopExiting(LatchBlock);
+  SmallVector<BasicBlock*, 8> ExitingBlocks;
+  L->getExitingBlocks(ExitingBlocks);
 
-  // Search IVUsesByStride to find Cond's IVUse if there is one.
-  IVStrideUse *CondUse = 0;
-  const SCEV *const *CondStride = 0;
-  ICmpInst *Cond = cast<ICmpInst>(TermBr->getCondition());
-  if (!FindIVUserForCond(Cond, CondUse, CondStride))
-    return; // setcc doesn't use the IV.
-
-  if (ExitingBlock != LatchBlock) {
-    if (!Cond->hasOneUse())
-      // See below, we don't want the condition to be cloned.
-      return;
-
-    // If exiting block is the latch block, we know it's safe and profitable to
-    // transform the icmp to use post-inc iv. Otherwise do so only if it would
-    // not reuse another iv and its iv would be reused by other uses. We are
-    // optimizing for the case where the icmp is the only use of the iv.
-    IVUsersOfOneStride &StrideUses = *IU->IVUsesByStride[*CondStride];
-    for (ilist<IVStrideUse>::iterator I = StrideUses.Users.begin(),
-         E = StrideUses.Users.end(); I != E; ++I) {
-      if (I->getUser() == Cond)
-        continue;
-      if (!I->isUseOfPostIncrementedValue())
-        return;
-    }
+  for (unsigned i = 0, e = ExitingBlocks.size(); i != e; ++i) {
+    BasicBlock *ExitingBlock = ExitingBlocks[i];
 
-    // FIXME: This is expensive, and worse still ChangeCompareStride does a
-    // similar check. Can we perform all the icmp related transformations after
-    // StrengthReduceStridedIVUsers?
-    if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(*CondStride)) {
-      int64_t SInt = SC->getValue()->getSExtValue();
-      for (unsigned NewStride = 0, ee = IU->StrideOrder.size(); NewStride != ee;
-           ++NewStride) {
-        std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
-          IU->IVUsesByStride.find(IU->StrideOrder[NewStride]);
-        if (!isa<SCEVConstant>(SI->first) || SI->first == *CondStride)
-          continue;
-        int64_t SSInt =
-          cast<SCEVConstant>(SI->first)->getValue()->getSExtValue();
-        if (SSInt == SInt)
-          return; // This can definitely be reused.
-        if (unsigned(abs64(SSInt)) < SInt || (SSInt % SInt) != 0)
-          continue;
-        int64_t Scale = SSInt / SInt;
-        bool AllUsesAreAddresses = true;
-        bool AllUsesAreOutsideLoop = true;
-        std::vector<BasedUser> UsersToProcess;
-        const SCEV *CommonExprs = CollectIVUsers(SI->first, *SI->second, L,
-                                                AllUsesAreAddresses,
-                                                AllUsesAreOutsideLoop,
-                                                UsersToProcess);
-        // Avoid rewriting the compare instruction with an iv of new stride
-        // if it's likely the new stride uses will be rewritten using the
-        // stride of the compare instruction.
-        if (AllUsesAreAddresses &&
-            ValidScale(!CommonExprs->isZero(), Scale, UsersToProcess))
-          return;
-      }
-    }
+    // Finally, get the terminating condition for the loop if possible.  If we
+    // can, we want to change it to use a post-incremented version of its
+    // induction variable, to allow coalescing the live ranges for the IV into
+    // one register value.
 
-    StrideNoReuse.insert(*CondStride);
-  }
+    BranchInst *TermBr = dyn_cast<BranchInst>(ExitingBlock->getTerminator());
+    if (!TermBr)
+      continue;
+    // FIXME: Overly conservative; the termination condition could be an 'or', etc.
+    if (TermBr->isUnconditional() || !isa<ICmpInst>(TermBr->getCondition()))
+      continue;
 
-  // If the trip count is computed in terms of a max (due to ScalarEvolution
-  // being unable to find a sufficient guard, for example), change the loop
-  // comparison to use SLT or ULT instead of NE.
-  Cond = OptimizeMax(L, Cond, CondUse);
-
-  // If possible, change stride and operands of the compare instruction to
-  // eliminate one stride.
-  if (ExitingBlock == LatchBlock)
-    Cond = ChangeCompareStride(L, Cond, CondUse, CondStride);
-
-  // It's possible for the setcc instruction to be anywhere in the loop, and
-  // possible for it to have multiple users.  If it is not immediately before
-  // the latch block branch, move it.
-  if (&*++BasicBlock::iterator(Cond) != (Instruction*)TermBr) {
-    if (Cond->hasOneUse()) {   // Condition has a single use, just move it.
-      Cond->moveBefore(TermBr);
-    } else {
-      // Otherwise, clone the terminating condition and insert into the loopend.
-      Cond = cast<ICmpInst>(Cond->clone());
-      Cond->setName(L->getHeader()->getName() + ".termcond");
-      LatchBlock->getInstList().insert(TermBr, Cond);
-      
-      // Clone the IVUse, as the old use still exists!
-      IU->IVUsesByStride[*CondStride]->addUser(CondUse->getOffset(), Cond,
-                                             CondUse->getOperandValToReplace());
-      CondUse = &IU->IVUsesByStride[*CondStride]->Users.back();
+    // Search IVUsesByStride to find Cond's IVUse if there is one.
+    IVStrideUse *CondUse = 0;
+    const SCEV *CondStride = 0;
+    ICmpInst *Cond = cast<ICmpInst>(TermBr->getCondition());
+    if (!FindIVUserForCond(Cond, CondUse, CondStride))
+      continue;
+
+    // If the latch block is exiting and it's not a single-block loop, it's
+    // not safe to use the postinc iv in other exiting blocks. FIXME: overly
+    // conservative? How about the icmp stride optimization?
+    bool UsePostInc = !(e > 1 && LatchExit && ExitingBlock != LatchBlock);
+    if (UsePostInc && ExitingBlock != LatchBlock) {
+      if (!Cond->hasOneUse())
+        // See below, we don't want the condition to be cloned.
+        UsePostInc = false;
+      else {
+        // If exiting block is the latch block, we know it's safe and profitable
+        // to transform the icmp to use post-inc iv. Otherwise do so only if it
+        // would not reuse another iv and its iv would be reused by other uses.
+        // We are optimizing for the case where the icmp is the only use of the
+        // iv.
+        IVUsersOfOneStride &StrideUses = *IU->IVUsesByStride[CondStride];
+        for (ilist<IVStrideUse>::iterator I = StrideUses.Users.begin(),
+               E = StrideUses.Users.end(); I != E; ++I) {
+          if (I->getUser() == Cond)
+            continue;
+          if (!I->isUseOfPostIncrementedValue()) {
+            UsePostInc = false;
+            break;
+          }
+        }
+      }
+
+      // If the iv for this stride might be shared and any of its users might
+      // use the pre-inc iv, then it's not safe to use the post-inc iv.
+      if (UsePostInc &&
+          isa<SCEVConstant>(CondStride) &&
+          StrideMightBeShared(CondStride, L, true))
+        UsePostInc = false;
     }
-  }
 
-  // If we get to here, we know that we can transform the setcc instruction to
-  // use the post-incremented version of the IV, allowing us to coalesce the
-  // live ranges for the IV correctly.
-  CondUse->setOffset(SE->getMinusSCEV(CondUse->getOffset(), *CondStride));
-  CondUse->setIsUseOfPostIncrementedValue(true);
-  Changed = true;
+    // If the trip count is computed in terms of a max (due to ScalarEvolution
+    // being unable to find a sufficient guard, for example), change the loop
+    // comparison to use SLT or ULT instead of NE.
+    Cond = OptimizeMax(L, Cond, CondUse);
+
+    // If possible, change stride and operands of the compare instruction to
+    // eliminate one stride. However, avoid rewriting the compare instruction
+    // with an iv of new stride if it's likely the new stride uses will be
+    // rewritten using the stride of the compare instruction.
+    if (ExitingBlock == LatchBlock && isa<SCEVConstant>(CondStride)) {
+      // If the condition stride is a constant and the compare is its only
+      // use, we might want to optimize it first by making it count toward
+      // zero.
+      if (!StrideMightBeShared(CondStride, L, false) &&
+          !ShouldCountToZero(Cond, CondUse, SE, L, TLI))
+        Cond = ChangeCompareStride(L, Cond, CondUse, CondStride);
+    }
 
-  ++NumLoopCond;
-}
+    if (!UsePostInc)
+      continue;
 
-/// OptimizeLoopCountIV - If, after all sharing of IVs, the IV used for deciding
-/// when to exit the loop is used only for that purpose, try to rearrange things
-/// so it counts down to a test against zero.
-void LoopStrengthReduce::OptimizeLoopCountIV(Loop *L) {
+    DEBUG(errs() << "  Change loop exiting icmp to use postinc iv: "
+          << *Cond << '\n');
 
-  // If the number of times the loop is executed isn't computable, give up.
-  const SCEV *BackedgeTakenCount = SE->getBackedgeTakenCount(L);
-  if (isa<SCEVCouldNotCompute>(BackedgeTakenCount))
-    return;
+    // It's possible for the setcc instruction to be anywhere in the loop, and
+    // possible for it to have multiple users.  If it is not immediately before
+    // the exiting block branch, move it.
+    if (&*++BasicBlock::iterator(Cond) != (Instruction*)TermBr) {
+      if (Cond->hasOneUse()) {   // Condition has a single use, just move it.
+        Cond->moveBefore(TermBr);
+      } else {
+        // Otherwise, clone the terminating condition and insert it at the
+        // end of the loop.
+        Cond = cast<ICmpInst>(Cond->clone());
+        Cond->setName(L->getHeader()->getName() + ".termcond");
+        ExitingBlock->getInstList().insert(TermBr, Cond);
+
+        // Clone the IVUse, as the old use still exists!
+        IU->IVUsesByStride[CondStride]->addUser(CondUse->getOffset(), Cond,
+                                             CondUse->getOperandValToReplace());
+        CondUse = &IU->IVUsesByStride[CondStride]->Users.back();
+      }
+    }
 
-  // Get the terminating condition for the loop if possible (this isn't
-  // necessarily in the latch, or a block that's a predecessor of the header).
-  if (!L->getExitBlock())
-    return; // More than one loop exit blocks.
+    // If we get to here, we know that we can transform the setcc instruction to
+    // use the post-incremented version of the IV, allowing us to coalesce the
+    // live ranges for the IV correctly.
+    CondUse->setOffset(SE->getMinusSCEV(CondUse->getOffset(), CondStride));
+    CondUse->setIsUseOfPostIncrementedValue(true);
+    Changed = true;
 
-  // Okay, there is one exit block.  Try to find the condition that causes the
-  // loop to be exited.
-  BasicBlock *ExitingBlock = L->getExitingBlock();
-  if (!ExitingBlock)
-    return; // More than one block exiting!
+    ++NumLoopCond;
+  }
+}
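
The offset adjustment at the end (setOffset with getMinusSCEV) preserves the value the compare sees; a standalone sketch of that identity, with illustrative numbers:

  #include <cassert>
  #include <cstdint>
  #include <cstdio>

  int main() {
    const int64_t Stride = 1, Offset = 3, N = 10;
    const int64_t PostIncOffset = Offset - Stride;  // mirrors getMinusSCEV
    for (int64_t iv = 0;;) {
      int64_t PreVal = iv + Offset;         // what the pre-inc test would see
      iv += Stride;                         // the increment now comes first
      int64_t PostVal = iv + PostIncOffset; // same value, via the post-inc iv
      assert(PreVal == PostVal && "offset rewrite preserves the value");
      if (PostVal == N)                     // exit test uses the post-inc iv
        break;
    }
    std::printf("ok\n");
    return 0;
  }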
 
-  // Okay, we've computed the exiting block.  See what condition causes us to
-  // exit.
-  //
-  // FIXME: we should be able to handle switch instructions (with a single exit)
-  BranchInst *TermBr = dyn_cast<BranchInst>(ExitingBlock->getTerminator());
-  if (TermBr == 0) return;
-  assert(TermBr->isConditional() && "If unconditional, it can't be in loop!");
-  if (!isa<ICmpInst>(TermBr->getCondition()))
-    return;
-  ICmpInst *Cond = cast<ICmpInst>(TermBr->getCondition());
+bool LoopStrengthReduce::OptimizeLoopCountIVOfStride(const SCEV* &Stride,
+                                                     IVStrideUse* &CondUse,
+                                                     Loop *L) {
+  // If the only use is an icmp feeding a loop exiting conditional branch,
+  // then attempt the optimization.
+  BasedUser User = BasedUser(*CondUse, SE);
+  assert(isa<ICmpInst>(User.Inst) && "Expecting an ICMPInst!");
+  ICmpInst *Cond = cast<ICmpInst>(User.Inst);
+
+  // Less strict check now that compare stride optimization is done.
+  if (!ShouldCountToZero(Cond, CondUse, SE, L))
+    return false;
 
-  // Handle only tests for equality for the moment, and only stride 1.
-  if (Cond->getPredicate() != CmpInst::ICMP_EQ)
-    return;
-  const SCEV *IV = SE->getSCEV(Cond->getOperand(0));
-  const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(IV);
-  const SCEV *One = SE->getIntegerSCEV(1, BackedgeTakenCount->getType());
-  if (!AR || !AR->isAffine() || AR->getStepRecurrence(*SE) != One)
-    return;
-  // If the RHS of the comparison is defined inside the loop, the rewrite
-  // cannot be done.
-  if (Instruction *CR = dyn_cast<Instruction>(Cond->getOperand(1)))
-    if (L->contains(CR->getParent()))
-      return;
+  Value *CondOp0 = Cond->getOperand(0);
+  PHINode *PHIExpr = dyn_cast<PHINode>(CondOp0);
+  Instruction *Incr;
+  if (!PHIExpr) {
+    // Value tested is postinc. Find the phi node.
+    Incr = dyn_cast<BinaryOperator>(CondOp0);
+    // FIXME: Just use User.OperandValToReplace here?
+    if (!Incr || Incr->getOpcode() != Instruction::Add)
+      return false;
 
-  // Make sure the IV is only used for counting.  Value may be preinc or
-  // postinc; 2 uses in either case.
-  if (!Cond->getOperand(0)->hasNUses(2))
-    return;
-  PHINode *phi = dyn_cast<PHINode>(Cond->getOperand(0));
-  Instruction *incr;
-  if (phi && phi->getParent()==L->getHeader()) {
-    // value tested is preinc.  Find the increment.
-    // A CmpInst is not a BinaryOperator; we depend on this.
-    Instruction::use_iterator UI = phi->use_begin();
-    incr = dyn_cast<BinaryOperator>(UI);
-    if (!incr)
-      incr = dyn_cast<BinaryOperator>(++UI);
-    // 1 use for postinc value, the phi.  Unnecessarily conservative?
-    if (!incr || !incr->hasOneUse() || incr->getOpcode()!=Instruction::Add)
-      return;
-  } else {
-    // Value tested is postinc.  Find the phi node.
-    incr = dyn_cast<BinaryOperator>(Cond->getOperand(0));
-    if (!incr || incr->getOpcode()!=Instruction::Add)
-      return;
-
-    Instruction::use_iterator UI = Cond->getOperand(0)->use_begin();
-    phi = dyn_cast<PHINode>(UI);
-    if (!phi)
-      phi = dyn_cast<PHINode>(++UI);
+    PHIExpr = dyn_cast<PHINode>(Incr->getOperand(0));
+    if (!PHIExpr)
+      return false;
+    // One use for the preinc value, the increment.
-    if (!phi || phi->getParent()!=L->getHeader() || !phi->hasOneUse())
-      return;
+    if (!PHIExpr->hasOneUse())
+      return false;
+  } else {
+    assert(isa<PHINode>(CondOp0) &&
+           "Unexpected loop exiting counting instruction sequence!");
+    PHIExpr = cast<PHINode>(CondOp0);
+    // Value tested is preinc.  Find the increment.
+    // A CmpInst is not a BinaryOperator; we depend on this.
+    Instruction::use_iterator UI = PHIExpr->use_begin();
+    Incr = dyn_cast<BinaryOperator>(UI);
+    if (!Incr)
+      Incr = dyn_cast<BinaryOperator>(++UI);
+    // One use for postinc value, the phi.  Unnecessarily conservative?
+    if (!Incr || !Incr->hasOneUse() || Incr->getOpcode() != Instruction::Add)
+      return false;
   }
 
   // Replace the increment with a decrement.
-  BinaryOperator *decr = 
-    BinaryOperator::Create(Instruction::Sub, incr->getOperand(0),
-                           incr->getOperand(1), "tmp", incr);
-  incr->replaceAllUsesWith(decr);
-  incr->eraseFromParent();
+  DEBUG(errs() << "LSR: Examining use ");
+  DEBUG(WriteAsOperand(errs(), CondOp0, /*PrintType=*/false));
+  DEBUG(errs() << " in Inst: " << *Cond << '\n');
+  BinaryOperator *Decr = BinaryOperator::Create(Instruction::Sub,
+                         Incr->getOperand(0), Incr->getOperand(1), "tmp", Incr);
+  Incr->replaceAllUsesWith(Decr);
+  Incr->eraseFromParent();
 
   // Substitute endval-startval for the original startval, and 0 for the
-  // original endval.  Since we're only testing for equality this is OK even 
+  // original endval.  Since we're only testing for equality this is OK even
   // if the computation wraps around.
   BasicBlock  *Preheader = L->getLoopPreheader();
   Instruction *PreInsertPt = Preheader->getTerminator();
-  int inBlock = L->contains(phi->getIncomingBlock(0)) ? 1 : 0;
-  Value *startVal = phi->getIncomingValue(inBlock);
-  Value *endVal = Cond->getOperand(1);
-  // FIXME check for case where both are constant
+  unsigned InBlock = L->contains(PHIExpr->getIncomingBlock(0)) ? 1 : 0;
+  Value *StartVal = PHIExpr->getIncomingValue(InBlock);
+  Value *EndVal = Cond->getOperand(1);
+  DEBUG(errs() << "    Optimize loop counting iv to count down ["
+        << *EndVal << " .. " << *StartVal << "]\n");
+
+  // FIXME: check for case where both are constant.
   Constant* Zero = ConstantInt::get(Cond->getOperand(1)->getType(), 0);
-  BinaryOperator *NewStartVal = 
-    BinaryOperator::Create(Instruction::Sub, endVal, startVal,
-                           "tmp", PreInsertPt);
-  phi->setIncomingValue(inBlock, NewStartVal);
+  BinaryOperator *NewStartVal = BinaryOperator::Create(Instruction::Sub,
+                                          EndVal, StartVal, "tmp", PreInsertPt);
+  PHIExpr->setIncomingValue(InBlock, NewStartVal);
   Cond->setOperand(1, Zero);
+  DEBUG(errs() << "    New icmp: " << *Cond << "\n");
+
+  int64_t SInt = cast<SCEVConstant>(Stride)->getValue()->getSExtValue();
+  const SCEV *NewStride = 0;
+  bool Found = false;
+  for (unsigned i = 0, e = IU->StrideOrder.size(); i != e; ++i) {
+    const SCEV *OldStride = IU->StrideOrder[i];
+    if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(OldStride))
+      if (SC->getValue()->getSExtValue() == -SInt) {
+        Found = true;
+        NewStride = OldStride;
+        break;
+      }
+  }
+
+  if (!Found)
+    NewStride = SE->getIntegerSCEV(-SInt, Stride->getType());
+  IU->AddUser(NewStride, CondUse->getOffset(), Cond, Cond->getOperand(0));
+  IU->IVUsesByStride[Stride]->removeUser(CondUse);
+
+  CondUse = &IU->IVUsesByStride[NewStride]->Users.back();
+  Stride = NewStride;
 
-  Changed = true;
+  ++NumCountZero;
+
+  return true;
 }
 
-bool LoopStrengthReduce::runOnLoop(Loop *L, LPPassManager &LPM) {
+/// OptimizeLoopCountIV - If, after all sharing of IVs, the IV used for deciding
+/// when to exit the loop is used only for that purpose, try to rearrange things
+/// so it counts down to a test against zero.
+bool LoopStrengthReduce::OptimizeLoopCountIV(Loop *L) {
+  bool ThisChanged = false;
+  for (unsigned i = 0, e = IU->StrideOrder.size(); i != e; ++i) {
+    const SCEV *Stride = IU->StrideOrder[i];
+    std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
+      IU->IVUsesByStride.find(Stride);
+    assert(SI != IU->IVUsesByStride.end() && "Stride doesn't exist!");
+    // FIXME: Generalize to non-affine IV's.
+    if (!SI->first->isLoopInvariant(L))
+      continue;
+    // If the stride is a constant and it has an ICmpInst use, check if we
+    // can optimize the loop to count down.
+    if (isa<SCEVConstant>(Stride) && SI->second->Users.size() == 1) {
+      Instruction *User = SI->second->Users.begin()->getUser();
+      if (!isa<ICmpInst>(User))
+        continue;
+      const SCEV *CondStride = Stride;
+      IVStrideUse *Use = &*SI->second->Users.begin();
+      if (!OptimizeLoopCountIVOfStride(CondStride, Use, L))
+        continue;
+      ThisChanged = true;
 
+      // Now check if it's possible to reuse this iv for other stride uses.
+      for (unsigned j = 0, ee = IU->StrideOrder.size(); j != ee; ++j) {
+        const SCEV *SStride = IU->StrideOrder[j];
+        if (SStride == CondStride)
+          continue;
+        std::map<const SCEV *, IVUsersOfOneStride *>::iterator SII =
+          IU->IVUsesByStride.find(SStride);
+        assert(SII != IU->IVUsesByStride.end() && "Stride doesn't exist!");
+        // FIXME: Generalize to non-affine IV's.
+        if (!SII->first->isLoopInvariant(L))
+          continue;
+        // FIXME: Rewrite other stride using CondStride.
+      }
+    }
+  }
+
+  Changed |= ThisChanged;
+  return ThisChanged;
+}
+
+bool LoopStrengthReduce::runOnLoop(Loop *L, LPPassManager &LPM) {
   IU = &getAnalysis<IVUsers>();
   LI = &getAnalysis<LoopInfo>();
   DT = &getAnalysis<DominatorTree>();
   SE = &getAnalysis<ScalarEvolution>();
   Changed = false;
 
+  // If LoopSimplify form is not available, stay out of trouble.
+  if (!L->getLoopPreheader() || !L->getLoopLatch())
+    return false;
+
   if (!IU->IVUsesByStride.empty()) {
     DEBUG(errs() << "\nLSR on \"" << L->getHeader()->getParent()->getName()
           << "\" ";
@@ -2541,7 +2771,7 @@ bool LoopStrengthReduce::runOnLoop(Loop *L, LPPassManager &LPM) {
 
     // Change loop terminating condition to use the postinc iv when possible
     // and optimize loop terminating compare. FIXME: Move this after
-    // StrengthReduceStridedIVUsers?
+    // StrengthReduceIVUsersOfStride?
     OptimizeLoopTermCond(L);
 
     // FIXME: We can shrink overlarge IV's here.  e.g. if the code has
@@ -2557,26 +2787,12 @@ bool LoopStrengthReduce::runOnLoop(Loop *L, LPPassManager &LPM) {
     // IVsByStride keeps IVs for one particular loop.
     assert(IVsByStride.empty() && "Stale entries in IVsByStride?");
 
-    // Note: this processes each stride/type pair individually.  All users
-    // passed into StrengthReduceStridedIVUsers have the same type AND stride.
-    // Also, note that we iterate over IVUsesByStride indirectly by using
-    // StrideOrder. This extra layer of indirection makes the ordering of
-    // strides deterministic - not dependent on map order.
-    for (unsigned Stride = 0, e = IU->StrideOrder.size();
-         Stride != e; ++Stride) {
-      std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
-        IU->IVUsesByStride.find(IU->StrideOrder[Stride]);
-      assert(SI != IU->IVUsesByStride.end() && "Stride doesn't exist!");
-      // FIXME: Generalize to non-affine IV's.
-      if (!SI->first->isLoopInvariant(L))
-        continue;
-      StrengthReduceStridedIVUsers(SI->first, *SI->second, L);
-    }
-  }
+    StrengthReduceIVUsers(L);
 
-  // After all sharing is done, see if we can adjust the loop to test against
-  // zero instead of counting up to a maximum.  This is usually faster.
-  OptimizeLoopCountIV(L);
+    // After all sharing is done, see if we can adjust the loop to test against
+    // zero instead of counting up to a maximum.  This is usually faster.
+    OptimizeLoopCountIV(L);
+  }
 
   // We're done analyzing this loop; release all the state we built up for it.
   IVsByStride.clear();
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnroll.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnroll.cpp
deleted file mode 100644
index 837ec59..0000000
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnroll.cpp
+++ /dev/null
@@ -1,177 +0,0 @@
-//===-- LoopUnroll.cpp - Loop unroller pass -------------------------------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This pass implements a simple loop unroller.  It works best when loops have
-// been canonicalized by the -indvars pass, allowing it to determine the trip
-// counts of loops easily.
-//===----------------------------------------------------------------------===//
-
-#define DEBUG_TYPE "loop-unroll"
-#include "llvm/IntrinsicInst.h"
-#include "llvm/Transforms/Scalar.h"
-#include "llvm/Analysis/LoopInfo.h"
-#include "llvm/Analysis/LoopPass.h"
-#include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Debug.h"
-#include "llvm/Support/raw_ostream.h"
-#include "llvm/Transforms/Utils/UnrollLoop.h"
-#include <climits>
-
-using namespace llvm;
-
-static cl::opt<unsigned>
-UnrollThreshold("unroll-threshold", cl::init(100), cl::Hidden,
-  cl::desc("The cut-off point for automatic loop unrolling"));
-
-static cl::opt<unsigned>
-UnrollCount("unroll-count", cl::init(0), cl::Hidden,
-  cl::desc("Use this unroll count for all loops, for testing purposes"));
-
-static cl::opt<bool>
-UnrollAllowPartial("unroll-allow-partial", cl::init(false), cl::Hidden,
-  cl::desc("Allows loops to be partially unrolled until "
-           "-unroll-threshold loop size is reached."));
-
-namespace {
-  class LoopUnroll : public LoopPass {
-  public:
-    static char ID; // Pass ID, replacement for typeid
-    LoopUnroll() : LoopPass(&ID) {}
-
-    /// A magic value for use with the Threshold parameter to indicate
-    /// that the loop unroll should be performed regardless of how much
-    /// code expansion would result.
-    static const unsigned NoThreshold = UINT_MAX;
-
-    bool runOnLoop(Loop *L, LPPassManager &LPM);
-
-    /// This transformation requires natural loop information & requires that
-    /// loop preheaders be inserted into the CFG...
-    ///
-    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
-      AU.addRequiredID(LoopSimplifyID);
-      AU.addRequiredID(LCSSAID);
-      AU.addRequired<LoopInfo>();
-      AU.addPreservedID(LCSSAID);
-      AU.addPreserved<LoopInfo>();
-      // FIXME: Loop unroll requires LCSSA. And LCSSA requires dom info.
-      // If loop unroll does not preserve dom info then LCSSA pass on next
-      // loop will receive invalid dom info.
-      // For now, recreate dom info, if loop is unrolled.
-      AU.addPreserved<DominatorTree>();
-      AU.addPreserved<DominanceFrontier>();
-    }
-  };
-}
-
-char LoopUnroll::ID = 0;
-static RegisterPass<LoopUnroll> X("loop-unroll", "Unroll loops");
-
-Pass *llvm::createLoopUnrollPass() { return new LoopUnroll(); }
-
-/// ApproximateLoopSize - Approximate the size of the loop.
-static unsigned ApproximateLoopSize(const Loop *L) {
-  unsigned Size = 0;
-  for (Loop::block_iterator I = L->block_begin(), E = L->block_end();
-       I != E; ++I) {
-    BasicBlock *BB = *I;
-    Instruction *Term = BB->getTerminator();
-    for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ++I) {
-      if (isa<PHINode>(I) && BB == L->getHeader()) {
-        // Ignore PHI nodes in the header.
-      } else if (I->hasOneUse() && I->use_back() == Term) {
-        // Ignore instructions only used by the loop terminator.
-      } else if (isa<DbgInfoIntrinsic>(I)) {
-        // Ignore debug instructions
-      } else if (isa<GetElementPtrInst>(I) && I->hasOneUse()) {
-        // Ignore GEP as they generally are subsumed into a load or store.
-      } else if (isa<CallInst>(I)) {
-        // Estimate size overhead introduced by call instructions which
-        // is higher than other instructions. Here 3 and 10 are magic
-        // numbers that help one isolated test case from PR2067 without
-        // negatively impacting measured benchmarks.
-        Size += isa<IntrinsicInst>(I) ? 3 : 10;
-      } else {
-        ++Size;
-      }
-
-      // TODO: Ignore expressions derived from PHI and constants if inval of phi
-      // is a constant, or if operation is associative.  This will get induction
-      // variables.
-    }
-  }
-
-  return Size;
-}
-
-bool LoopUnroll::runOnLoop(Loop *L, LPPassManager &LPM) {
-  assert(L->isLCSSAForm());
-  LoopInfo *LI = &getAnalysis<LoopInfo>();
-
-  BasicBlock *Header = L->getHeader();
-  DEBUG(errs() << "Loop Unroll: F[" << Header->getParent()->getName()
-        << "] Loop %" << Header->getName() << "\n");
-  (void)Header;
-
-  // Find trip count
-  unsigned TripCount = L->getSmallConstantTripCount();
-  unsigned Count = UnrollCount;
-
-  // Automatically select an unroll count.
-  if (Count == 0) {
-    // Conservative heuristic: if we know the trip count, see if we can
-    // completely unroll (subject to the threshold, checked below); otherwise
-    // try to find greatest modulo of the trip count which is still under
-    // threshold value.
-    if (TripCount == 0)
-      return false;
-    Count = TripCount;
-  }
-
-  // Enforce the threshold.
-  if (UnrollThreshold != NoThreshold) {
-    unsigned LoopSize = ApproximateLoopSize(L);
-    DEBUG(errs() << "  Loop Size = " << LoopSize << "\n");
-    uint64_t Size = (uint64_t)LoopSize*Count;
-    if (TripCount != 1 && Size > UnrollThreshold) {
-      DEBUG(errs() << "  Too large to fully unroll with count: " << Count
-            << " because size: " << Size << ">" << UnrollThreshold << "\n");
-      if (!UnrollAllowPartial) {
-        DEBUG(errs() << "  will not try to unroll partially because "
-              << "-unroll-allow-partial not given\n");
-        return false;
-      }
-      // Reduce unroll count to be modulo of TripCount for partial unrolling
-      Count = UnrollThreshold / LoopSize;
-      while (Count != 0 && TripCount%Count != 0) {
-        Count--;
-      }
-      if (Count < 2) {
-        DEBUG(errs() << "  could not unroll partially\n");
-        return false;
-      }
-      DEBUG(errs() << "  partially unrolling with count: " << Count << "\n");
-    }
-  }
-
-  // Unroll the loop.
-  Function *F = L->getHeader()->getParent();
-  if (!UnrollLoop(L, Count, LI, &LPM))
-    return false;
-
-  // FIXME: Reconstruct dom info, because it is not preserved properly.
-  DominatorTree *DT = getAnalysisIfAvailable<DominatorTree>();
-  if (DT) {
-    DT->runOnFunction(*F);
-    DominanceFrontier *DF = getAnalysisIfAvailable<DominanceFrontier>();
-    if (DF)
-      DF->runOnFunction(*F);
-  }
-  return true;
-}
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnrollPass.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnrollPass.cpp
new file mode 100644
index 0000000..c2bf9f2
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnrollPass.cpp
@@ -0,0 +1,151 @@
+//===-- LoopUnroll.cpp - Loop unroller pass -------------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This pass implements a simple loop unroller.  It works best when loops have
+// been canonicalized by the -indvars pass, allowing it to determine the trip
+// counts of loops easily.
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "loop-unroll"
+#include "llvm/IntrinsicInst.h"
+#include "llvm/Transforms/Scalar.h"
+#include "llvm/Analysis/LoopPass.h"
+#include "llvm/Analysis/InlineCost.h"
+#include "llvm/Support/CommandLine.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/raw_ostream.h"
+#include "llvm/Transforms/Utils/UnrollLoop.h"
+#include <climits>
+
+using namespace llvm;
+
+static cl::opt<unsigned>
+UnrollThreshold("unroll-threshold", cl::init(100), cl::Hidden,
+  cl::desc("The cut-off point for automatic loop unrolling"));
+
+static cl::opt<unsigned>
+UnrollCount("unroll-count", cl::init(0), cl::Hidden,
+  cl::desc("Use this unroll count for all loops, for testing purposes"));
+
+static cl::opt<bool>
+UnrollAllowPartial("unroll-allow-partial", cl::init(false), cl::Hidden,
+  cl::desc("Allows loops to be partially unrolled until "
+           "-unroll-threshold loop size is reached."));
+
+namespace {
+  class LoopUnroll : public LoopPass {
+  public:
+    static char ID; // Pass ID, replacement for typeid
+    LoopUnroll() : LoopPass(&ID) {}
+
+    /// A magic value for use with the Threshold parameter to indicate
+    /// that the loop unroll should be performed regardless of how much
+    /// code expansion would result.
+    static const unsigned NoThreshold = UINT_MAX;
+
+    bool runOnLoop(Loop *L, LPPassManager &LPM);
+
+    /// This transformation requires natural loop information & requires that
+    /// loop preheaders be inserted into the CFG...
+    ///
+    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
+      AU.addRequiredID(LoopSimplifyID);
+      AU.addRequiredID(LCSSAID);
+      AU.addRequired<LoopInfo>();
+      AU.addPreservedID(LCSSAID);
+      AU.addPreserved<LoopInfo>();
+      // FIXME: Loop unroll requires LCSSA. And LCSSA requires dom info.
+      // If loop unroll does not preserve dom info then LCSSA pass on next
+      // loop will receive invalid dom info.
+      // For now, recreate dom info, if loop is unrolled.
+      AU.addPreserved<DominatorTree>();
+      AU.addPreserved<DominanceFrontier>();
+    }
+  };
+}
+
+char LoopUnroll::ID = 0;
+static RegisterPass<LoopUnroll> X("loop-unroll", "Unroll loops");
+
+Pass *llvm::createLoopUnrollPass() { return new LoopUnroll(); }
+
+/// ApproximateLoopSize - Approximate the size of the loop.
+static unsigned ApproximateLoopSize(const Loop *L) {
+  CodeMetrics Metrics;
+  for (Loop::block_iterator I = L->block_begin(), E = L->block_end();
+       I != E; ++I)
+    Metrics.analyzeBasicBlock(*I);
+  return Metrics.NumInsts;
+}
+
+bool LoopUnroll::runOnLoop(Loop *L, LPPassManager &LPM) {
+  assert(L->isLCSSAForm());
+  LoopInfo *LI = &getAnalysis<LoopInfo>();
+
+  BasicBlock *Header = L->getHeader();
+  DEBUG(errs() << "Loop Unroll: F[" << Header->getParent()->getName()
+        << "] Loop %" << Header->getName() << "\n");
+  (void)Header;
+
+  // Find trip count
+  unsigned TripCount = L->getSmallConstantTripCount();
+  unsigned Count = UnrollCount;
+
+  // Automatically select an unroll count.
+  if (Count == 0) {
+    // Conservative heuristic: if we know the trip count, see if we can
+    // completely unroll (subject to the threshold, checked below); otherwise
+    // try to find greatest modulo of the trip count which is still under
+    // threshold value.
+    if (TripCount == 0)
+      return false;
+    Count = TripCount;
+  }
+
+  // Enforce the threshold.
+  if (UnrollThreshold != NoThreshold) {
+    unsigned LoopSize = ApproximateLoopSize(L);
+    DEBUG(errs() << "  Loop Size = " << LoopSize << "\n");
+    uint64_t Size = (uint64_t)LoopSize*Count;
+    if (TripCount != 1 && Size > UnrollThreshold) {
+      DEBUG(errs() << "  Too large to fully unroll with count: " << Count
+            << " because size: " << Size << ">" << UnrollThreshold << "\n");
+      if (!UnrollAllowPartial) {
+        DEBUG(errs() << "  will not try to unroll partially because "
+              << "-unroll-allow-partial not given\n");
+        return false;
+      }
+      // Reduce unroll count to be modulo of TripCount for partial unrolling
+      Count = UnrollThreshold / LoopSize;
+      while (Count != 0 && TripCount%Count != 0) {
+        Count--;
+      }
+      if (Count < 2) {
+        DEBUG(errs() << "  could not unroll partially\n");
+        return false;
+      }
+      DEBUG(errs() << "  partially unrolling with count: " << Count << "\n");
+    }
+  }
+
+  // Unroll the loop.
+  Function *F = L->getHeader()->getParent();
+  if (!UnrollLoop(L, Count, LI, &LPM))
+    return false;
+
+  // FIXME: Reconstruct dom info, because it is not preserved properly.
+  DominatorTree *DT = getAnalysisIfAvailable<DominatorTree>();
+  if (DT) {
+    DT->runOnFunction(*F);
+    DominanceFrontier *DF = getAnalysisIfAvailable<DominanceFrontier>();
+    if (DF)
+      DF->runOnFunction(*F);
+  }
+  return true;
+}
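
A worked example of the count-selection logic in runOnLoop above, with made-up numbers:

  #include <cstdio>

  int main() {
    unsigned UnrollThreshold = 100, LoopSize = 18, TripCount = 12;
    unsigned Count = UnrollThreshold / LoopSize;  // 5
    while (Count != 0 && TripCount % Count != 0)
      --Count;                                    // 5 -> 4, and 12 % 4 == 0
    if (Count < 2)
      std::printf("could not unroll partially\n");
    else
      std::printf("partially unrolling with count: %u\n", Count); // 4
    return 0;
  }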
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp
index b1f4214..38d267a 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp
@@ -32,8 +32,8 @@
 #include "llvm/DerivedTypes.h"
 #include "llvm/Function.h"
 #include "llvm/Instructions.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/Analysis/ConstantFolding.h"
+#include "llvm/Analysis/InlineCost.h"
 #include "llvm/Analysis/LoopInfo.h"
 #include "llvm/Analysis/LoopPass.h"
 #include "llvm/Analysis/Dominators.h"
@@ -56,9 +56,11 @@ STATISTIC(NumSelects , "Number of selects unswitched");
 STATISTIC(NumTrivial , "Number of unswitches that are trivial");
 STATISTIC(NumSimplify, "Number of simplifications of unswitched code");
 
+// The specific value of 50 here was chosen based only on intuition and a
+// few specific examples.
 static cl::opt<unsigned>
 Threshold("loop-unswitch-threshold", cl::desc("Max loop size to unswitch"),
-          cl::init(10), cl::Hidden);
+          cl::init(50), cl::Hidden);
   
 namespace {
   class LoopUnswitch : public LoopPass {
@@ -135,7 +137,6 @@ namespace {
     void SplitExitEdges(Loop *L, const SmallVector<BasicBlock *, 8> &ExitBlocks);
 
     bool UnswitchIfProfitable(Value *LoopCond, Constant *Val);
-    unsigned getLoopUnswitchCost(Value *LIC);
     void UnswitchTrivialCondition(Loop *L, Value *Cond, Constant *Val,
                                   BasicBlock *ExitBlock);
     void UnswitchNontrivialCondition(Value *LIC, Constant *OnVal, Loop *L);
@@ -397,40 +398,6 @@ bool LoopUnswitch::IsTrivialUnswitchCondition(Value *Cond, Constant **Val,
   return true;
 }
 
-/// getLoopUnswitchCost - Return the cost (code size growth) that will happen if
-/// we choose to unswitch current loop on the specified value.
-///
-unsigned LoopUnswitch::getLoopUnswitchCost(Value *LIC) {
-  // If the condition is trivial, always unswitch.  There is no code growth for
-  // this case.
-  if (IsTrivialUnswitchCondition(LIC))
-    return 0;
-  
-  // FIXME: This is really overly conservative.  However, more liberal 
-  // estimations have thus far resulted in excessive unswitching, which is bad
-  // both in compile time and in code size.  This should be replaced once
-  // someone figures out how a good estimation.
-  return currentLoop->getBlocks().size();
-  
-  unsigned Cost = 0;
-  // FIXME: this is brain dead.  It should take into consideration code
-  // shrinkage.
-  for (Loop::block_iterator I = currentLoop->block_begin(), 
-         E = currentLoop->block_end();
-       I != E; ++I) {
-    BasicBlock *BB = *I;
-    // Do not include empty blocks in the cost calculation.  This happen due to
-    // loop canonicalization and will be removed.
-    if (BB->begin() == BasicBlock::iterator(BB->getTerminator()))
-      continue;
-    
-    // Count basic blocks.
-    ++Cost;
-  }
-
-  return Cost;
-}
-
 /// UnswitchIfProfitable - We have found that we can unswitch currentLoop when
 /// LoopCond == Val to simplify the loop.  If we decide that this is profitable,
 /// unswitch the loop, reprocess the pieces, then return true.
@@ -439,24 +406,40 @@ bool LoopUnswitch::UnswitchIfProfitable(Value *LoopCond, Constant *Val){
   initLoopData();
   Function *F = loopHeader->getParent();
 
+  // If LoopSimplify was unable to form a preheader, don't do any unswitching.
+  if (!loopPreheader)
+    return false;
 
-  // Check to see if it would be profitable to unswitch current loop.
-  unsigned Cost = getLoopUnswitchCost(LoopCond);
+  // If the condition is trivial, always unswitch.  There is no code growth for
+  // this case.
+  if (!IsTrivialUnswitchCondition(LoopCond)) {
+    // Check to see if it would be profitable to unswitch current loop.
 
-  // Do not do non-trivial unswitch while optimizing for size.
-  if (Cost && OptimizeForSize)
-    return false;
-  if (Cost && !F->isDeclaration() && F->hasFnAttr(Attribute::OptimizeForSize))
-    return false;
+    // Do not do non-trivial unswitch while optimizing for size.
+    if (OptimizeForSize || F->hasFnAttr(Attribute::OptimizeForSize))
+      return false;
 
-  if (Cost > Threshold) {
-    // FIXME: this should estimate growth by the amount of code shared by the
-    // resultant unswitched loops.
-    //
-    DEBUG(errs() << "NOT unswitching loop %"
-          << currentLoop->getHeader()->getName() << ", cost too high: "
-          << currentLoop->getBlocks().size() << "\n");
-    return false;
+    // FIXME: This is overly conservative because it does not take into
+    // consideration code simplification opportunities and code that can
+    // be shared by the resultant unswitched loops.
+    CodeMetrics Metrics;
+    for (Loop::block_iterator I = currentLoop->block_begin(), 
+           E = currentLoop->block_end();
+         I != E; ++I)
+      Metrics.analyzeBasicBlock(*I);
+
+    // Limit the number of instructions to avoid causing significant code
+    // expansion, and the number of basic blocks to avoid loops with large
+    // numbers of branches, which make loop unswitching go crazy. This is a
+    // very ad-hoc heuristic.
+    if (Metrics.NumInsts > Threshold ||
+        Metrics.NumBlocks * 5 > Threshold ||
+        Metrics.NeverInline) {
+      DEBUG(errs() << "NOT unswitching loop %"
+            << currentLoop->getHeader()->getName() << ", cost too high: "
+            << currentLoop->getBlocks().size() << "\n");
+      return false;
+    }
   }
 
   Constant *CondVal;
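
With the new default Threshold of 50, the bounds above amount to roughly: at most 50 instructions and at most 10 basic blocks, since each block is charged a factor of 5. A small sketch of the decision, leaving aside the NeverInline bit:

  #include <cstdio>

  static bool withinUnswitchBudget(unsigned NumInsts, unsigned NumBlocks,
                                   unsigned Threshold = 50) {
    return NumInsts <= Threshold && NumBlocks * 5 <= Threshold;
  }

  int main() {
    std::printf("%d\n", withinUnswitchBudget(40, 8));  // 1: small enough
    std::printf("%d\n", withinUnswitchBudget(40, 12)); // 0: too many blocks
    return 0;
  }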
@@ -793,7 +776,9 @@ void LoopUnswitch::RemoveBlockIfDead(BasicBlock *BB,
     
     // Anything that uses the instructions in this basic block should have their
     // uses replaced with undefs.
-    if (!I->use_empty())
+    // If I is not of void type, replace all of its uses with undef.
+    // This allows ValueHandles and custom metadata to adjust themselves.
+    if (!I->getType()->isVoidTy())
       I->replaceAllUsesWith(UndefValue::get(I->getType()));
   }
   
@@ -975,7 +960,7 @@ void LoopUnswitch::SimplifyCode(std::vector<Instruction*> &Worklist, Loop *L) {
     Worklist.pop_back();
     
     // Simple constant folding.
-    if (Constant *C = ConstantFoldInstruction(I, I->getContext())) {
+    if (Constant *C = ConstantFoldInstruction(I)) {
       ReplaceUsesOfWith(I, C, Worklist, L, LPM);
       continue;
     }
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/MemCpyOptimizer.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/MemCpyOptimizer.cpp
index b131384..c922814 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/MemCpyOptimizer.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/MemCpyOptimizer.cpp
@@ -38,16 +38,18 @@ STATISTIC(NumMoveToCpy,   "Number of memmoves converted to memcpy");
 /// true for all i8 values obviously, but is also true for i32 0, i32 -1,
 /// i16 0xF0F0, double 0.0 etc.  If the value can't be handled with a repeated
 /// byte store (e.g. i16 0x1234), return null.
-static Value *isBytewiseValue(Value *V, LLVMContext &Context) {
+static Value *isBytewiseValue(Value *V) {
+  LLVMContext &Context = V->getContext();
+  
   // All byte-wide stores are splatable, even of arbitrary variables.
   if (V->getType() == Type::getInt8Ty(Context)) return V;
   
   // Constant float and double values can be handled as integer values if the
   // corresponding integer value is "byteable".  An important case is 0.0. 
   if (ConstantFP *CFP = dyn_cast<ConstantFP>(V)) {
-    if (CFP->getType() == Type::getFloatTy(Context))
+    if (CFP->getType()->isFloatTy())
       V = ConstantExpr::getBitCast(CFP, Type::getInt32Ty(Context));
-    if (CFP->getType() == Type::getDoubleTy(Context))
+    if (CFP->getType()->isDoubleTy())
       V = ConstantExpr::getBitCast(CFP, Type::getInt64Ty(Context));
     // Don't handle long double formats, which have strange constraints.
   }
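
For illustration only (not from the patch): the byte-splat test that
isBytewiseValue performs, sketched as standalone code. splatByte is a
hypothetical helper and assumes the constant already fits in a uint64_t.

    #include <cstdint>
    #include <cstdio>
    #include <optional>

    // Returns the repeated byte if every byte of V is identical
    // (e.g. 0xF0F0 -> 0xF0); returns nothing for values like 0x1234.
    static std::optional<uint8_t> splatByte(uint64_t V, unsigned Bytes) {
      uint8_t B = V & 0xFF;
      for (unsigned i = 1; i < Bytes; ++i)
        if (((V >> (8 * i)) & 0xFF) != B)
          return std::nullopt;
      return B;
    }

    int main() {
      std::printf("%d\n", splatByte(0xF0F0, 2).has_value()); // 1
      std::printf("%d\n", splatByte(0x1234, 2).has_value()); // 0
    }
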
@@ -349,7 +351,7 @@ bool MemCpyOpt::processStore(StoreInst *SI, BasicBlock::iterator &BBI) {
   // Ensure that the value being stored is something that is memset'able a
   // byte at a time, like "0" or "-1" of any width, as well as things like
   // 0xA0A0A0A0 and 0.0.
-  Value *ByteVal = isBytewiseValue(SI->getOperand(0), Context);
+  Value *ByteVal = isBytewiseValue(SI->getOperand(0));
   if (!ByteVal)
     return false;
 
@@ -390,7 +392,7 @@ bool MemCpyOpt::processStore(StoreInst *SI, BasicBlock::iterator &BBI) {
     if (NextStore->isVolatile()) break;
     
     // Check to see if this stored value is the same byte-splattable value.
-    if (ByteVal != isBytewiseValue(NextStore->getOperand(0), Context))
+    if (ByteVal != isBytewiseValue(NextStore->getOperand(0)))
       break;
 
     // Check to see if this store is to a constant offset from the start ptr.
@@ -441,7 +443,7 @@ bool MemCpyOpt::processStore(StoreInst *SI, BasicBlock::iterator &BBI) {
     StartPtr = Range.StartPtr;
   
     // Cast the start ptr to be i8* as memset requires.
-    const Type *i8Ptr = PointerType::getUnqual(Type::getInt8Ty(Context));
+    const Type *i8Ptr = Type::getInt8PtrTy(Context);
     if (StartPtr->getType() != i8Ptr)
       StartPtr = new BitCastInst(StartPtr, i8Ptr, StartPtr->getName(),
                                  InsertPt);
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/PredicateSimplifier.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/PredicateSimplifier.cpp
deleted file mode 100644
index b8ac182..0000000
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/PredicateSimplifier.cpp
+++ /dev/null
@@ -1,2704 +0,0 @@
-//===-- PredicateSimplifier.cpp - Path Sensitive Simplifier ---------------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// Path-sensitive optimizer. In a branch where x == y, replace uses of
-// x with y. Permits further optimization, such as the elimination of
-// the unreachable call:
-//
-// void test(int *p, int *q)
-// {
-//   if (p != q)
-//     return;
-// 
-//   if (*p != *q)
-//     foo(); // unreachable
-// }
-//
-//===----------------------------------------------------------------------===//
-//
-// The InequalityGraph focusses on four properties; equals, not equals,
-// less-than and less-than-or-equals-to. The greater-than forms are also held
-// just to allow walking from a lesser node to a greater one. These properties
-// are stored in a lattice; LE can become LT or EQ, NE can become LT or GT.
-//
-// These relationships define a graph between values of the same type. Each
-// Value is stored in a map table that retrieves the associated Node. This
-// is how EQ relationships are stored; the map contains pointers from equal
-// Values to the same node. The node contains the most canonical Value* form
-// and the list of known relationships with other nodes.
-//
-// If two nodes are known to be inequal, then they will contain pointers to
-// each other with an "NE" relationship. If node getNode(%x) is less than
-// getNode(%y), then the %x node will contain <%y, GT> and %y will contain
-// <%x, LT>. This allows us to tie nodes together into a graph like this:
-//
-//   %a < %b < %c < %d
-//
-// with four nodes representing the properties. The InequalityGraph provides
-// querying with "isRelatedBy" and mutators "addEquality" and "addInequality".
-// To find a relationship, we start with one of the nodes and binary search
-// through its list to find where the relationships with the second node start.
-// Then we iterate through those to find the first relationship that dominates
-// our context node.
-//
-// To create these properties, we wait until a branch or switch instruction
-// implies that a particular value is true (or false). The VRPSolver is
-// responsible for analyzing the variable and seeing what new inferences
-// can be made from each property. For example:
-//
-//   %P = icmp ne i32* %ptr, null
-//   %a = and i1 %P, %Q
-//   br i1 %a label %cond_true, label %cond_false
-//
-// For the true branch, the VRPSolver will start with %a EQ true and look at
-// the definition of %a and find that it can infer that %P and %Q are both
-// true. From %P being true, it can infer that %ptr NE null. For the false
-// branch it can't infer anything from the "and" instruction.
-//
-// Besides branches, we can also infer properties from instructions that may
-// have undefined behaviour in certain cases. For example, the divisor of
-// a division may never be zero. After the division instruction, we may assume
-// that the divisor is not equal to zero.
-//
-//===----------------------------------------------------------------------===//
-//
-// The ValueRanges class stores the known integer bounds of a Value. When we
-// encounter i8 %a u< %b, the ValueRanges stores that %a = [0, 254] and
-// %b = [1, 255].
-//
-// It never stores a single-element range, since that is an equality
-// relationship better stored in the InequalityGraph, nor an empty range,
-// since an empty range means that the code is unreachable and is better
-// tracked in UnreachableBlocks.
-//
-//===----------------------------------------------------------------------===//
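
For illustration only (not part of the deleted file): a back-of-the-envelope
sketch of the bound derivation for the "i8 %a u< %b" example above, with
plain unsigned ints standing in for ConstantRange.

    #include <cstdio>

    int main() {
      unsigned aMin = 0, aMax = 255;  // full i8 range for %a
      unsigned bMin = 0, bMax = 255;  // full i8 range for %b
      // Apply %a u< %b: %a can never be 255, %b can never be 0.
      aMax = bMax - 1;                // -> 254
      bMin = aMin + 1;                // -> 1
      std::printf("a in [%u,%u], b in [%u,%u]\n", aMin, aMax, bMin, bMax);
    }
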
-
-#define DEBUG_TYPE "predsimplify"
-#include "llvm/Transforms/Scalar.h"
-#include "llvm/Constants.h"
-#include "llvm/DerivedTypes.h"
-#include "llvm/Instructions.h"
-#include "llvm/Pass.h"
-#include "llvm/ADT/DepthFirstIterator.h"
-#include "llvm/ADT/SetOperations.h"
-#include "llvm/ADT/SetVector.h"
-#include "llvm/ADT/Statistic.h"
-#include "llvm/ADT/STLExtras.h"
-#include "llvm/Analysis/Dominators.h"
-#include "llvm/Assembly/Writer.h"
-#include "llvm/Support/CFG.h"
-#include "llvm/Support/ConstantRange.h"
-#include "llvm/Support/Debug.h"
-#include "llvm/Support/InstVisitor.h"
-#include "llvm/Support/raw_ostream.h"
-#include "llvm/Target/TargetData.h"
-#include "llvm/Transforms/Utils/Local.h"
-#include <algorithm>
-#include <deque>
-#include <stack>
-using namespace llvm;
-
-STATISTIC(NumVarsReplaced, "Number of argument substitutions");
-STATISTIC(NumInstruction , "Number of instructions removed");
-STATISTIC(NumSimple      , "Number of simple replacements");
-STATISTIC(NumBlocks      , "Number of blocks marked unreachable");
-STATISTIC(NumSnuggle     , "Number of comparisons snuggled");
-
-static const ConstantRange empty(1, false);
-
-namespace {
-  class DomTreeDFS {
-  public:
-    class Node {
-      friend class DomTreeDFS;
-    public:
-      typedef std::vector<Node *>::iterator       iterator;
-      typedef std::vector<Node *>::const_iterator const_iterator;
-
-      unsigned getDFSNumIn()  const { return DFSin;  }
-      unsigned getDFSNumOut() const { return DFSout; }
-
-      BasicBlock *getBlock() const { return BB; }
-
-      iterator begin() { return Children.begin(); }
-      iterator end()   { return Children.end();   }
-
-      const_iterator begin() const { return Children.begin(); }
-      const_iterator end()   const { return Children.end();   }
-
-      bool dominates(const Node *N) const {
-        return DFSin <= N->DFSin && DFSout >= N->DFSout;
-      }
-
-      bool DominatedBy(const Node *N) const {
-        return N->dominates(this);
-      }
-
-      /// Sorts by the number of descendants. With this, you can iterate
-      /// through a sorted list and the first matching entry is the most
-      /// specific match for your basic block. The order provided is stable;
-      /// DomTreeDFS::Nodes with the same number of descendants are sorted by
-      /// DFS in number.
-      bool operator<(const Node &N) const {
-        unsigned   spread =   DFSout -   DFSin;
-        unsigned N_spread = N.DFSout - N.DFSin;
-        if (spread == N_spread) return DFSin < N.DFSin;
-        return spread < N_spread;
-      }
-      bool operator>(const Node &N) const { return N < *this; }
-
-    private:
-      unsigned DFSin, DFSout;
-      BasicBlock *BB;
-
-      std::vector<Node *> Children;
-    };
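
The interval test used by Node::dominates can be demonstrated in isolation
(hypothetical standalone code, not from the file): a node dominates another
exactly when its DFS [in, out] interval encloses the other's.

    #include <cstdio>

    struct N { unsigned In, Out; };  // DFS entry/exit numbers

    static bool dominates(N A, N B) { return A.In <= B.In && A.Out >= B.Out; }

    int main() {
      N Root = {1, 10}, Child = {2, 5}, Sibling = {6, 9};
      std::printf("%d %d\n", dominates(Root, Child),     // 1
                             dominates(Child, Sibling)); // 0
    }
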
-
-    // XXX: this may be slow. Instead of using "new" for each node, consider
-    // putting them in a vector to keep them contiguous.
-    explicit DomTreeDFS(DominatorTree *DT) {
-      std::stack<std::pair<Node *, DomTreeNode *> > S;
-
-      Entry = new Node;
-      Entry->BB = DT->getRootNode()->getBlock();
-      S.push(std::make_pair(Entry, DT->getRootNode()));
-
-      NodeMap[Entry->BB] = Entry;
-
-      while (!S.empty()) {
-        std::pair<Node *, DomTreeNode *> &Pair = S.top();
-        Node *N = Pair.first;
-        DomTreeNode *DTNode = Pair.second;
-        S.pop();
-
-        for (DomTreeNode::iterator I = DTNode->begin(), E = DTNode->end();
-             I != E; ++I) {
-          Node *NewNode = new Node;
-          NewNode->BB = (*I)->getBlock();
-          N->Children.push_back(NewNode);
-          S.push(std::make_pair(NewNode, *I));
-
-          NodeMap[NewNode->BB] = NewNode;
-        }
-      }
-
-      renumber();
-
-#ifndef NDEBUG
-      DEBUG(dump());
-#endif
-    }
-
-#ifndef NDEBUG
-    virtual
-#endif
-    ~DomTreeDFS() {
-      std::stack<Node *> S;
-
-      S.push(Entry);
-      while (!S.empty()) {
-        Node *N = S.top(); S.pop();
-
-        for (Node::iterator I = N->begin(), E = N->end(); I != E; ++I)
-          S.push(*I);
-
-        delete N;
-      }
-    }
-
-    /// getRootNode - This returns the entry node for the CFG of the function.
-    Node *getRootNode() const { return Entry; }
-
-    /// getNodeForBlock - return the node for the specified basic block.
-    Node *getNodeForBlock(BasicBlock *BB) const {
-      if (!NodeMap.count(BB)) return 0;
-      return const_cast<DomTreeDFS*>(this)->NodeMap[BB];
-    }
-
-    /// dominates - returns true if the basic block for I1 dominates the
-    /// basic block for I2. If the instructions belong to the same basic
-    /// block, the instruction that comes first sequentially in the block is
-    /// considered dominating.
-    bool dominates(Instruction *I1, Instruction *I2) {
-      BasicBlock *BB1 = I1->getParent(),
-                 *BB2 = I2->getParent();
-      if (BB1 == BB2) {
-        if (isa<TerminatorInst>(I1)) return false;
-        if (isa<TerminatorInst>(I2)) return true;
-        if ( isa<PHINode>(I1) && !isa<PHINode>(I2)) return true;
-        if (!isa<PHINode>(I1) &&  isa<PHINode>(I2)) return false;
-
-        for (BasicBlock::const_iterator I = BB2->begin(), E = BB2->end();
-             I != E; ++I) {
-          if (&*I == I1) return true;
-          else if (&*I == I2) return false;
-        }
-        assert(!"Instructions not found in parent BasicBlock?");
-      } else {
-        Node *Node1 = getNodeForBlock(BB1),
-             *Node2 = getNodeForBlock(BB2);
-        return Node1 && Node2 && Node1->dominates(Node2);
-      }
-      return false; // Not reached
-    }
-
-  private:
-    /// renumber - calculates the depth first search numberings and applies
-    /// them onto the nodes.
-    void renumber() {
-      std::stack<std::pair<Node *, Node::iterator> > S;
-      unsigned n = 0;
-
-      Entry->DFSin = ++n;
-      S.push(std::make_pair(Entry, Entry->begin()));
-
-      while (!S.empty()) {
-        std::pair<Node *, Node::iterator> &Pair = S.top();
-        Node *N = Pair.first;
-        Node::iterator &I = Pair.second;
-
-        if (I == N->end()) {
-          N->DFSout = ++n;
-          S.pop();
-        } else {
-          Node *Next = *I++;
-          Next->DFSin = ++n;
-          S.push(std::make_pair(Next, Next->begin()));
-        }
-      }
-    }
-
-#ifndef NDEBUG
-    virtual void dump() const {
-      dump(errs());
-    }
-
-    void dump(raw_ostream &os) const {
-      os << "Predicate simplifier DomTreeDFS: \n";
-      dump(Entry, 0, os);
-      os << "\n\n";
-    }
-
-    void dump(Node *N, int depth, raw_ostream &os) const {
-      ++depth;
-      for (int i = 0; i < depth; ++i) { os << " "; }
-      os << "[" << depth << "] ";
-
-      os << N->getBlock()->getNameStr() << " (" << N->getDFSNumIn()
-         << ", " << N->getDFSNumOut() << ")\n";
-
-      for (Node::iterator I = N->begin(), E = N->end(); I != E; ++I)
-        dump(*I, depth, os);
-    }
-#endif
-
-    Node *Entry;
-    std::map<BasicBlock *, Node *> NodeMap;
-  };
-
-  // SLT SGT ULT UGT EQ
-  //   0   1   0   1  0 -- GT                  10
-  //   0   1   0   1  1 -- GE                  11
-  //   0   1   1   0  0 -- SGTULT              12
-  //   0   1   1   0  1 -- SGEULE              13
-  //   0   1   1   1  0 -- SGT                 14
-  //   0   1   1   1  1 -- SGE                 15
-  //   1   0   0   1  0 -- SLTUGT              18
-  //   1   0   0   1  1 -- SLEUGE              19
-  //   1   0   1   0  0 -- LT                  20
-  //   1   0   1   0  1 -- LE                  21
-  //   1   0   1   1  0 -- SLT                 22
-  //   1   0   1   1  1 -- SLE                 23
-  //   1   1   0   1  0 -- UGT                 26
-  //   1   1   0   1  1 -- UGE                 27
-  //   1   1   1   0  0 -- ULT                 28
-  //   1   1   1   0  1 -- ULE                 29
-  //   1   1   1   1  0 -- NE                  30
-  enum LatticeBits {
-    EQ_BIT = 1, UGT_BIT = 2, ULT_BIT = 4, SGT_BIT = 8, SLT_BIT = 16
-  };
-  enum LatticeVal {
-    GT = SGT_BIT | UGT_BIT,
-    GE = GT | EQ_BIT,
-    LT = SLT_BIT | ULT_BIT,
-    LE = LT | EQ_BIT,
-    NE = SLT_BIT | SGT_BIT | ULT_BIT | UGT_BIT,
-    SGTULT = SGT_BIT | ULT_BIT,
-    SGEULE = SGTULT | EQ_BIT,
-    SLTUGT = SLT_BIT | UGT_BIT,
-    SLEUGE = SLTUGT | EQ_BIT,
-    ULT = SLT_BIT | SGT_BIT | ULT_BIT,
-    UGT = SLT_BIT | SGT_BIT | UGT_BIT,
-    SLT = SLT_BIT | ULT_BIT | UGT_BIT,
-    SGT = SGT_BIT | ULT_BIT | UGT_BIT,
-    SLE = SLT | EQ_BIT,
-    SGE = SGT | EQ_BIT,
-    ULE = ULT | EQ_BIT,
-    UGE = UGT | EQ_BIT
-  };
-
-#ifndef NDEBUG
-  /// validPredicate - determines whether a given value is actually a lattice
-  /// value. Only used in assertions or debugging.
-  static bool validPredicate(LatticeVal LV) {
-    switch (LV) {
-      case GT: case GE: case LT: case LE: case NE:
-      case SGTULT: case SGT: case SGEULE:
-      case SLTUGT: case SLT: case SLEUGE:
-      case ULT: case UGT:
-      case SLE: case SGE: case ULE: case UGE:
-        return true;
-      default:
-        return false;
-    }
-  }
-#endif
-
-  /// reversePredicate - reverse the direction of the inequality
-  static LatticeVal reversePredicate(LatticeVal LV) {
-    unsigned reverse = LV ^ (SLT_BIT|SGT_BIT|ULT_BIT|UGT_BIT); //preserve EQ_BIT
-
-    if ((reverse & (SLT_BIT|SGT_BIT)) == 0)
-      reverse |= (SLT_BIT|SGT_BIT);
-
-    if ((reverse & (ULT_BIT|UGT_BIT)) == 0)
-      reverse |= (ULT_BIT|UGT_BIT);
-
-    LatticeVal Rev = static_cast<LatticeVal>(reverse);
-    assert(validPredicate(Rev) && "Failed reversing predicate.");
-    return Rev;
-  }
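
For illustration only (not from the file): how the XOR in reversePredicate
flips a lattice value, using the bit constants defined above.

    #include <cstdio>

    enum { EQ_BIT = 1, UGT_BIT = 2, ULT_BIT = 4, SGT_BIT = 8, SLT_BIT = 16 };

    int main() {
      // XOR-ing with all four direction bits turns "<" into ">"
      // while leaving EQ_BIT untouched.
      unsigned LT = SLT_BIT | ULT_BIT;
      unsigned GT = LT ^ (SLT_BIT | SGT_BIT | ULT_BIT | UGT_BIT);
      std::printf("LT=%u GT=%u\n", LT, GT); // 20 and 10, as in the table
      // For ULT (28) the flip leaves the signed pair empty (28^30 == 2);
      // the two fix-up branches in reversePredicate restore it, giving 26.
      return 0;
    }
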
-
-  /// ValueNumbering stores the scope-specific value numbers for a given Value.
-  class ValueNumbering {
-
-    /// VNPair is a tuple of {Value, index number, DomTreeDFS::Node}. It
-    /// includes the comparison operators necessary to allow you to store it
-    /// in a sorted vector.
-    class VNPair {
-    public:
-      Value *V;
-      unsigned index;
-      DomTreeDFS::Node *Subtree;
-
-      VNPair(Value *V, unsigned index, DomTreeDFS::Node *Subtree)
-        : V(V), index(index), Subtree(Subtree) {}
-
-      bool operator==(const VNPair &RHS) const {
-        return V == RHS.V && Subtree == RHS.Subtree;
-      }
-
-      bool operator<(const VNPair &RHS) const {
-        if (V != RHS.V) return V < RHS.V;
-        return *Subtree < *RHS.Subtree;
-      }
-
-      bool operator<(Value *RHS) const {
-        return V < RHS;
-      }
-
-      bool operator>(Value *RHS) const {
-        return V > RHS;
-      }
-
-      friend bool operator<(Value *RHS, const VNPair &pair) {
-        return pair.operator>(RHS);
-      }
-    };
-
-    typedef std::vector<VNPair> VNMapType;
-    VNMapType VNMap;
-
-    /// The canonical choice for value number at index.
-    std::vector<Value *> Values;
-
-    DomTreeDFS *DTDFS;
-
-  public:
-#ifndef NDEBUG
-    virtual ~ValueNumbering() {}
-    virtual void dump() {
-      print(errs());
-    }
-
-    void print(raw_ostream &os) {
-      for (unsigned i = 1; i <= Values.size(); ++i) {
-        os << i << " = ";
-        WriteAsOperand(os, Values[i-1]);
-        os << " {";
-        for (unsigned j = 0; j < VNMap.size(); ++j) {
-          if (VNMap[j].index == i) {
-            WriteAsOperand(os, VNMap[j].V);
-            os << " (" << VNMap[j].Subtree->getDFSNumIn() << ")  ";
-          }
-        }
-        os << "}\n";
-      }
-    }
-#endif
-
-    /// compare - returns true if V1 is a better canonical value than V2.
-    bool compare(Value *V1, Value *V2) const {
-      if (isa<Constant>(V1))
-        return !isa<Constant>(V2);
-      else if (isa<Constant>(V2))
-        return false;
-      else if (isa<Argument>(V1))
-        return !isa<Argument>(V2);
-      else if (isa<Argument>(V2))
-        return false;
-
-      Instruction *I1 = dyn_cast<Instruction>(V1);
-      Instruction *I2 = dyn_cast<Instruction>(V2);
-
-      if (!I1 || !I2)
-        return V1->getNumUses() < V2->getNumUses();
-
-      return DTDFS->dominates(I1, I2);
-    }
-
-    ValueNumbering(DomTreeDFS *DTDFS) : DTDFS(DTDFS) {}
-
-    /// valueNumber - finds the value number for V under the Subtree. If
-    /// there is no value number, returns zero.
-    unsigned valueNumber(Value *V, DomTreeDFS::Node *Subtree) {
-      if (!(isa<Constant>(V) || isa<Argument>(V) || isa<Instruction>(V)) || 
-          V->getType() == Type::getVoidTy(V->getContext())) return 0;
-
-      VNMapType::iterator E = VNMap.end();
-      VNPair pair(V, 0, Subtree);
-      VNMapType::iterator I = std::lower_bound(VNMap.begin(), E, pair);
-      while (I != E && I->V == V) {
-        if (I->Subtree->dominates(Subtree))
-          return I->index;
-        ++I;
-      }
-      return 0;
-    }
-
-    /// getOrInsertVN - always returns a value number, creating it if necessary.
-    unsigned getOrInsertVN(Value *V, DomTreeDFS::Node *Subtree) {
-      if (unsigned n = valueNumber(V, Subtree))
-        return n;
-      else
-        return newVN(V);
-    }
-
-    /// newVN - creates a new value number. Value V must not already have a
-    /// value number assigned.
-    unsigned newVN(Value *V) {
-      assert((isa<Constant>(V) || isa<Argument>(V) || isa<Instruction>(V)) &&
-             "Bad Value for value numbering.");
-      assert(V->getType() != Type::getVoidTy(V->getContext()) &&
-             "Won't value number a void value");
-
-      Values.push_back(V);
-
-      VNPair pair = VNPair(V, Values.size(), DTDFS->getRootNode());
-      VNMapType::iterator I = std::lower_bound(VNMap.begin(), VNMap.end(), pair);
-      assert((I == VNMap.end() || value(I->index) != V) &&
-             "Attempt to create a duplicate value number.");
-      VNMap.insert(I, pair);
-
-      return Values.size();
-    }
-
-    /// value - returns the Value associated with a value number.
-    Value *value(unsigned index) const {
-      assert(index != 0 && "Zero index is reserved for not found.");
-      assert(index <= Values.size() && "Index out of range.");
-      return Values[index-1];
-    }
-
-    /// canonicalize - return a Value that is equal to V under Subtree.
-    Value *canonicalize(Value *V, DomTreeDFS::Node *Subtree) {
-      if (isa<Constant>(V)) return V;
-
-      if (unsigned n = valueNumber(V, Subtree))
-        return value(n);
-      else
-        return V;
-    }
-
-    /// addEquality - records that value V belongs to the set of equivalent
-    /// values defined by value number n under Subtree.
-    void addEquality(unsigned n, Value *V, DomTreeDFS::Node *Subtree) {
-      assert(canonicalize(value(n), Subtree) == value(n) &&
-             "Node's 'canonical' choice isn't best within this subtree.");
-
-      // Suppose that we are given "%x -> node #1 (%y)". The problem is that
-      // we may already have "%z -> node #2 (%x)" somewhere above us in the
-      // graph. We need to find those edges and add "%z -> node #1 (%y)"
-      // to keep the lookups canonical.
-
-      std::vector<Value *> ToRepoint(1, V);
-
-      if (unsigned Conflict = valueNumber(V, Subtree)) {
-        for (VNMapType::iterator I = VNMap.begin(), E = VNMap.end();
-             I != E; ++I) {
-          if (I->index == Conflict && I->Subtree->dominates(Subtree))
-            ToRepoint.push_back(I->V);
-        }
-      }
-
-      for (std::vector<Value *>::iterator VI = ToRepoint.begin(),
-           VE = ToRepoint.end(); VI != VE; ++VI) {
-        Value *V = *VI;
-
-        VNPair pair(V, n, Subtree);
-        VNMapType::iterator B = VNMap.begin(), E = VNMap.end();
-        VNMapType::iterator I = std::lower_bound(B, E, pair);
-        if (I != E && I->V == V && I->Subtree == Subtree)
-          I->index = n; // Update best choice
-        else
-          VNMap.insert(I, pair); // New Value
-
-        // XXX: we currently don't have to worry about updating values with
-        // more specific Subtrees, but we will need to for PHI node support.
-
-#ifndef NDEBUG
-        Value *V_n = value(n);
-        if (isa<Constant>(V) && isa<Constant>(V_n)) {
-          assert(V == V_n && "Constant equals different constant?");
-        }
-#endif
-      }
-    }
-
-    /// remove - removes all references to value V.
-    void remove(Value *V) {
-      VNMapType::iterator B = VNMap.begin(), E = VNMap.end();
-      VNPair pair(V, 0, DTDFS->getRootNode());
-      VNMapType::iterator J = std::upper_bound(B, E, pair);
-      VNMapType::iterator I = J;
-
-      while (I != B && (I == E || I->V == V)) --I;
-
-      VNMap.erase(I, J);
-    }
-  };
-
-  /// The InequalityGraph stores the relationships between values.
-  /// Each Value in the graph is assigned to a Node. Nodes are pointer
-  /// comparable for equality. The caller is expected to maintain the logical
-  /// consistency of the system.
-  ///
-  /// The InequalityGraph class may invalidate Node*s after any mutator call.
-  /// @brief The InequalityGraph stores the relationships between values.
-  class InequalityGraph {
-    ValueNumbering &VN;
-    DomTreeDFS::Node *TreeRoot;
-
-    InequalityGraph();                  // DO NOT IMPLEMENT
-    InequalityGraph(InequalityGraph &); // DO NOT IMPLEMENT
-  public:
-    InequalityGraph(ValueNumbering &VN, DomTreeDFS::Node *TreeRoot)
-      : VN(VN), TreeRoot(TreeRoot) {}
-
-    class Node;
-
-    /// An Edge is contained inside a Node making one end of the edge implicit
-    /// and contains a pointer to the other end. The edge contains a lattice
-    /// value specifying the relationship and a DomTreeDFS::Node specifying
-    /// the root in the dominator tree to which this edge applies.
-    class Edge {
-    public:
-      Edge(unsigned T, LatticeVal V, DomTreeDFS::Node *ST)
-        : To(T), LV(V), Subtree(ST) {}
-
-      unsigned To;
-      LatticeVal LV;
-      DomTreeDFS::Node *Subtree;
-
-      bool operator<(const Edge &edge) const {
-        if (To != edge.To) return To < edge.To;
-        return *Subtree < *edge.Subtree;
-      }
-
-      bool operator<(unsigned to) const {
-        return To < to;
-      }
-
-      bool operator>(unsigned to) const {
-        return To > to;
-      }
-
-      friend bool operator<(unsigned to, const Edge &edge) {
-        return edge.operator>(to);
-      }
-    };
-
-    /// A single node in the InequalityGraph. This stores the canonical Value
-    /// for the node, as well as the relationships with the neighbours.
-    ///
-    /// @brief A single node in the InequalityGraph.
-    class Node {
-      friend class InequalityGraph;
-
-      typedef SmallVector<Edge, 4> RelationsType;
-      RelationsType Relations;
-
-      // TODO: can this idea improve performance?
-      //friend class std::vector<Node>;
-      //Node(Node &N) { RelationsType.swap(N.RelationsType); }
-
-    public:
-      typedef RelationsType::iterator       iterator;
-      typedef RelationsType::const_iterator const_iterator;
-
-#ifndef NDEBUG
-      virtual ~Node() {}
-      virtual void dump() const {
-        dump(errs());
-      }
-    private:
-      void dump(raw_ostream &os) const {
-        static const std::string names[32] =
-          { "000000", "000001", "000002", "000003", "000004", "000005",
-            "000006", "000007", "000008", "000009", "     >", "    >=",
-            "  s>u<", "s>=u<=", "    s>", "   s>=", "000016", "000017",
-            "  s<u>", "s<=u>=", "     <", "    <=", "    s<", "   s<=",
-            "000024", "000025", "    u>", "   u>=", "    u<", "   u<=",
-            "    !=", "000031" };
-        for (Node::const_iterator NI = begin(), NE = end(); NI != NE; ++NI) {
-          os << names[NI->LV] << " " << NI->To
-             << " (" << NI->Subtree->getDFSNumIn() << "), ";
-        }
-      }
-    public:
-#endif
-
-      iterator begin()             { return Relations.begin(); }
-      iterator end()               { return Relations.end();   }
-      const_iterator begin() const { return Relations.begin(); }
-      const_iterator end()   const { return Relations.end();   }
-
-      iterator find(unsigned n, DomTreeDFS::Node *Subtree) {
-        iterator E = end();
-        for (iterator I = std::lower_bound(begin(), E, n);
-             I != E && I->To == n; ++I) {
-          if (Subtree->DominatedBy(I->Subtree))
-            return I;
-        }
-        return E;
-      }
-
-      const_iterator find(unsigned n, DomTreeDFS::Node *Subtree) const {
-        const_iterator E = end();
-        for (const_iterator I = std::lower_bound(begin(), E, n);
-             I != E && I->To == n; ++I) {
-          if (Subtree->DominatedBy(I->Subtree))
-            return I;
-        }
-        return E;
-      }
-
-      /// update - updates the lattice value for a given node, creating a new
-      /// entry if one doesn't exist. The new lattice value must not be
-      /// inconsistent with any previously existing value.
-      void update(unsigned n, LatticeVal R, DomTreeDFS::Node *Subtree) {
-        assert(validPredicate(R) && "Invalid predicate.");
-
-        Edge edge(n, R, Subtree);
-        iterator B = begin(), E = end();
-        iterator I = std::lower_bound(B, E, edge);
-
-        iterator J = I;
-        while (J != E && J->To == n) {
-          if (Subtree->DominatedBy(J->Subtree))
-            break;
-          ++J;
-        }
-
-        if (J != E && J->To == n) {
-          edge.LV = static_cast<LatticeVal>(J->LV & R);
-          assert(validPredicate(edge.LV) && "Invalid union of lattice values.");
-
-          if (edge.LV == J->LV)
-            return; // This update adds nothing new.
-        }
-
-        if (I != B) {
-          // We also have to tighten any edge beneath our update.
-          for (iterator K = I - 1; K->To == n; --K) {
-            if (K->Subtree->DominatedBy(Subtree)) {
-              LatticeVal LV = static_cast<LatticeVal>(K->LV & edge.LV);
-              assert(validPredicate(LV) && "Invalid union of lattice values");
-              K->LV = LV;
-            }
-            if (K == B) break;
-          }
-        }
-
-        // Insert new edge at Subtree if it isn't already there.
-        if (I == E || I->To != n || Subtree != I->Subtree)
-          Relations.insert(I, edge);
-      }
-    };
-
-  private:
-
-    std::vector<Node> Nodes;
-
-  public:
-    /// node - returns the node object at a given value number. The pointer
-    /// returned may be invalidated on the next call to node().
-    Node *node(unsigned index) {
-      assert(VN.value(index)); // This triggers the necessary checks.
-      if (Nodes.size() < index) Nodes.resize(index);
-      return &Nodes[index-1];
-    }
-
-    /// isRelatedBy - true iff n1 op n2
-    bool isRelatedBy(unsigned n1, unsigned n2, DomTreeDFS::Node *Subtree,
-                     LatticeVal LV) {
-      if (n1 == n2) return LV & EQ_BIT;
-
-      Node *N1 = node(n1);
-      Node::iterator I = N1->find(n2, Subtree), E = N1->end();
-      if (I != E) return (I->LV & LV) == I->LV;
-
-      return false;
-    }
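
The subset test "(I->LV & LV) == I->LV" encodes implication between
predicates; a quick standalone check (illustrative code only):

    #include <cstdio>

    int main() {
      unsigned LT = 20, LE = 21;  // lattice values from the table above
      // A stored relation implies a queried one iff its bits are a
      // subset of the query's bits.
      std::printf("%d\n", (LT & LE) == LT); // 1: "<" implies "<="
      std::printf("%d\n", (LE & LT) == LE); // 0: "<=" does not imply "<"
    }
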
-
-    // The add* methods assume that your input is logically valid and may 
-    // assertion-fail or infinitely loop if you attempt a contradiction.
-
-    /// addInequality - Sets n1 op n2.
-    /// It is also an error to call this on an inequality that is already true.
-    void addInequality(unsigned n1, unsigned n2, DomTreeDFS::Node *Subtree,
-                       LatticeVal LV1) {
-      assert(n1 != n2 && "A node can't be inequal to itself.");
-
-      if (LV1 != NE)
-        assert(!isRelatedBy(n1, n2, Subtree, reversePredicate(LV1)) &&
-               "Contradictory inequality.");
-
-      // Suppose we're adding %n1 < %n2. Find all the %a < %n1 and
-      // add %a < %n2 too. This keeps the graph fully connected.
-      if (LV1 != NE) {
-        // Break up the relationship into signed and unsigned comparison parts.
-        // If the signed parts of %a op1 %n1 match that of %n1 op2 %n2, and
-        // op1 and op2 aren't NE, then add %a op3 %n2. The new relationship
-        // should have the EQ_BIT iff it's set for both op1 and op2.
-
-        unsigned LV1_s = LV1 & (SLT_BIT|SGT_BIT);
-        unsigned LV1_u = LV1 & (ULT_BIT|UGT_BIT);
-
-        for (Node::iterator I = node(n1)->begin(), E = node(n1)->end(); I != E; ++I) {
-          if (I->LV != NE && I->To != n2) {
-
-            DomTreeDFS::Node *Local_Subtree = NULL;
-            if (Subtree->DominatedBy(I->Subtree))
-              Local_Subtree = Subtree;
-            else if (I->Subtree->DominatedBy(Subtree))
-              Local_Subtree = I->Subtree;
-
-            if (Local_Subtree) {
-              unsigned new_relationship = 0;
-              LatticeVal ILV = reversePredicate(I->LV);
-              unsigned ILV_s = ILV & (SLT_BIT|SGT_BIT);
-              unsigned ILV_u = ILV & (ULT_BIT|UGT_BIT);
-
-              if (LV1_s != (SLT_BIT|SGT_BIT) && ILV_s == LV1_s)
-                new_relationship |= ILV_s;
-              if (LV1_u != (ULT_BIT|UGT_BIT) && ILV_u == LV1_u)
-                new_relationship |= ILV_u;
-
-              if (new_relationship) {
-                if ((new_relationship & (SLT_BIT|SGT_BIT)) == 0)
-                  new_relationship |= (SLT_BIT|SGT_BIT);
-                if ((new_relationship & (ULT_BIT|UGT_BIT)) == 0)
-                  new_relationship |= (ULT_BIT|UGT_BIT);
-                if ((LV1 & EQ_BIT) && (ILV & EQ_BIT))
-                  new_relationship |= EQ_BIT;
-
-                LatticeVal NewLV = static_cast<LatticeVal>(new_relationship);
-
-                node(I->To)->update(n2, NewLV, Local_Subtree);
-                node(n2)->update(I->To, reversePredicate(NewLV), Local_Subtree);
-              }
-            }
-          }
-        }
-
-        for (Node::iterator I = node(n2)->begin(), E = node(n2)->end(); I != E; ++I) {
-          if (I->LV != NE && I->To != n1) {
-            DomTreeDFS::Node *Local_Subtree = NULL;
-            if (Subtree->DominatedBy(I->Subtree))
-              Local_Subtree = Subtree;
-            else if (I->Subtree->DominatedBy(Subtree))
-              Local_Subtree = I->Subtree;
-
-            if (Local_Subtree) {
-              unsigned new_relationship = 0;
-              unsigned ILV_s = I->LV & (SLT_BIT|SGT_BIT);
-              unsigned ILV_u = I->LV & (ULT_BIT|UGT_BIT);
-
-              if (LV1_s != (SLT_BIT|SGT_BIT) && ILV_s == LV1_s)
-                new_relationship |= ILV_s;
-
-              if (LV1_u != (ULT_BIT|UGT_BIT) && ILV_u == LV1_u)
-                new_relationship |= ILV_u;
-
-              if (new_relationship) {
-                if ((new_relationship & (SLT_BIT|SGT_BIT)) == 0)
-                  new_relationship |= (SLT_BIT|SGT_BIT);
-                if ((new_relationship & (ULT_BIT|UGT_BIT)) == 0)
-                  new_relationship |= (ULT_BIT|UGT_BIT);
-                if ((LV1 & EQ_BIT) && (I->LV & EQ_BIT))
-                  new_relationship |= EQ_BIT;
-
-                LatticeVal NewLV = static_cast<LatticeVal>(new_relationship);
-
-                node(n1)->update(I->To, NewLV, Local_Subtree);
-                node(I->To)->update(n1, reversePredicate(NewLV), Local_Subtree);
-              }
-            }
-          }
-        }
-      }
-
-      node(n1)->update(n2, LV1, Subtree);
-      node(n2)->update(n1, reversePredicate(LV1), Subtree);
-    }
-
-    /// remove - removes a node from the graph by removing all references to
-    /// and from it.
-    void remove(unsigned n) {
-      Node *N = node(n);
-      for (Node::iterator NI = N->begin(), NE = N->end(); NI != NE; ++NI) {
-        Node::iterator Iter = node(NI->To)->find(n, TreeRoot);
-        do {
-          node(NI->To)->Relations.erase(Iter);
-          Iter = node(NI->To)->find(n, TreeRoot);
-        } while (Iter != node(NI->To)->end());
-      }
-      N->Relations.clear();
-    }
-
-#ifndef NDEBUG
-    virtual ~InequalityGraph() {}
-    virtual void dump() {
-      dump(errs());
-    }
-
-    void dump(raw_ostream &os) {
-      for (unsigned i = 1; i <= Nodes.size(); ++i) {
-        os << i << " = {";
-        node(i)->dump(os);
-        os << "}\n";
-      }
-    }
-#endif
-  };
-
-  class VRPSolver;
-
-  /// ValueRanges tracks the known integer ranges and anti-ranges of the nodes
-  /// in the InequalityGraph.
-  class ValueRanges {
-    ValueNumbering &VN;
-    TargetData *TD;
-    LLVMContext *Context;
-
-    class ScopedRange {
-      typedef std::vector<std::pair<DomTreeDFS::Node *, ConstantRange> >
-              RangeListType;
-      RangeListType RangeList;
-
-      static bool swo(const std::pair<DomTreeDFS::Node *, ConstantRange> &LHS,
-                      const std::pair<DomTreeDFS::Node *, ConstantRange> &RHS) {
-        return *LHS.first < *RHS.first;
-      }
-
-    public:
-#ifndef NDEBUG
-      virtual ~ScopedRange() {}
-      virtual void dump() const {
-        dump(errs());
-      }
-
-      void dump(raw_ostream &os) const {
-        os << "{";
-        for (const_iterator I = begin(), E = end(); I != E; ++I) {
-          os << &I->second << " (" << I->first->getDFSNumIn() << "), ";
-        }
-        os << "}";
-      }
-#endif
-
-      typedef RangeListType::iterator       iterator;
-      typedef RangeListType::const_iterator const_iterator;
-
-      iterator begin() { return RangeList.begin(); }
-      iterator end()   { return RangeList.end(); }
-      const_iterator begin() const { return RangeList.begin(); }
-      const_iterator end()   const { return RangeList.end(); }
-
-      iterator find(DomTreeDFS::Node *Subtree) {
-        iterator E = end();
-        iterator I = std::lower_bound(begin(), E,
-                                      std::make_pair(Subtree, empty), swo);
-
-        while (I != E && !I->first->dominates(Subtree)) ++I;
-        return I;
-      }
-
-      const_iterator find(DomTreeDFS::Node *Subtree) const {
-        const_iterator E = end();
-        const_iterator I = std::lower_bound(begin(), E,
-                                            std::make_pair(Subtree, empty), swo);
-
-        while (I != E && !I->first->dominates(Subtree)) ++I;
-        return I;
-      }
-
-      void update(const ConstantRange &CR, DomTreeDFS::Node *Subtree) {
-        assert(!CR.isEmptySet() && "Empty ConstantRange.");
-        assert(!CR.isSingleElement() && "Refusing to store single element.");
-
-        iterator E = end();
-        iterator I =
-            std::lower_bound(begin(), E, std::make_pair(Subtree, empty), swo);
-
-        if (I != end() && I->first == Subtree) {
-          ConstantRange CR2 = I->second.intersectWith(CR);
-          assert(!CR2.isEmptySet() && !CR2.isSingleElement() &&
-                 "Invalid union of ranges.");
-          I->second = CR2;
-        } else
-          RangeList.insert(I, std::make_pair(Subtree, CR));
-      }
-    };
-
-    std::vector<ScopedRange> Ranges;
-
-    void update(unsigned n, const ConstantRange &CR, DomTreeDFS::Node *Subtree){
-      if (CR.isFullSet()) return;
-      if (Ranges.size() < n) Ranges.resize(n);
-      Ranges[n-1].update(CR, Subtree);
-    }
-
-    /// create - Creates a ConstantRange that matches the given LatticeVal
-    /// relation with a given range.
-    ConstantRange create(LatticeVal LV, const ConstantRange &CR) {
-      assert(!CR.isEmptySet() && "Can't deal with empty set.");
-
-      if (LV == NE)
-        return ConstantRange::makeICmpRegion(ICmpInst::ICMP_NE, CR);
-
-      unsigned LV_s = LV & (SGT_BIT|SLT_BIT);
-      unsigned LV_u = LV & (UGT_BIT|ULT_BIT);
-      bool hasEQ = LV & EQ_BIT;
-
-      ConstantRange Range(CR.getBitWidth());
-
-      if (LV_s == SGT_BIT) {
-        Range = Range.intersectWith(ConstantRange::makeICmpRegion(
-                    hasEQ ? ICmpInst::ICMP_SGE : ICmpInst::ICMP_SGT, CR));
-      } else if (LV_s == SLT_BIT) {
-        Range = Range.intersectWith(ConstantRange::makeICmpRegion(
-                    hasEQ ? ICmpInst::ICMP_SLE : ICmpInst::ICMP_SLT, CR));
-      }
-
-      if (LV_u == UGT_BIT) {
-        Range = Range.intersectWith(ConstantRange::makeICmpRegion(
-                    hasEQ ? ICmpInst::ICMP_UGE : ICmpInst::ICMP_UGT, CR));
-      } else if (LV_u == ULT_BIT) {
-        Range = Range.intersectWith(ConstantRange::makeICmpRegion(
-                    hasEQ ? ICmpInst::ICMP_ULE : ICmpInst::ICMP_ULT, CR));
-      }
-
-      return Range;
-    }
-
-#ifndef NDEBUG
-    bool isCanonical(Value *V, DomTreeDFS::Node *Subtree) {
-      return V == VN.canonicalize(V, Subtree);
-    }
-#endif
-
-  public:
-
-    ValueRanges(ValueNumbering &VN, TargetData *TD, LLVMContext *C) :
-      VN(VN), TD(TD), Context(C) {}
-
-#ifndef NDEBUG
-    virtual ~ValueRanges() {}
-
-    virtual void dump() const {
-      dump(errs());
-    }
-
-    void dump(raw_ostream &os) const {
-      for (unsigned i = 0, e = Ranges.size(); i != e; ++i) {
-        os << (i+1) << " = ";
-        Ranges[i].dump(os);
-        os << "\n";
-      }
-    }
-#endif
-
-    /// range - looks up the ConstantRange associated with a value number.
-    ConstantRange range(unsigned n, DomTreeDFS::Node *Subtree) {
-      assert(VN.value(n)); // performs range checks
-
-      if (n <= Ranges.size()) {
-        ScopedRange::iterator I = Ranges[n-1].find(Subtree);
-        if (I != Ranges[n-1].end()) return I->second;
-      }
-
-      Value *V = VN.value(n);
-      ConstantRange CR = range(V);
-      return CR;
-    }
-
-    /// range - determine a range from a Value without performing any lookups.
-    ConstantRange range(Value *V) const {
-      if (ConstantInt *C = dyn_cast<ConstantInt>(V))
-        return ConstantRange(C->getValue());
-      else if (isa<ConstantPointerNull>(V))
-        return ConstantRange(APInt::getNullValue(typeToWidth(V->getType())));
-      else
-        return ConstantRange(typeToWidth(V->getType()));
-    }
-
-    // typeToWidth - returns the number of bits necessary to store a value of
-    // this type, or zero if unknown.
-    uint32_t typeToWidth(const Type *Ty) const {
-      if (TD)
-        return TD->getTypeSizeInBits(Ty);
-      else
-        return Ty->getPrimitiveSizeInBits();
-    }
-
-    static bool isRelatedBy(const ConstantRange &CR1, const ConstantRange &CR2,
-                            LatticeVal LV) {
-      switch (LV) {
-      default: assert(!"Impossible lattice value!");
-      case NE:
-        return CR1.intersectWith(CR2).isEmptySet();
-      case ULT:
-        return CR1.getUnsignedMax().ult(CR2.getUnsignedMin());
-      case ULE:
-        return CR1.getUnsignedMax().ule(CR2.getUnsignedMin());
-      case UGT:
-        return CR1.getUnsignedMin().ugt(CR2.getUnsignedMax());
-      case UGE:
-        return CR1.getUnsignedMin().uge(CR2.getUnsignedMax());
-      case SLT:
-        return CR1.getSignedMax().slt(CR2.getSignedMin());
-      case SLE:
-        return CR1.getSignedMax().sle(CR2.getSignedMin());
-      case SGT:
-        return CR1.getSignedMin().sgt(CR2.getSignedMax());
-      case SGE:
-        return CR1.getSignedMin().sge(CR2.getSignedMax());
-      case LT:
-        return CR1.getUnsignedMax().ult(CR2.getUnsignedMin()) &&
-               CR1.getSignedMax().slt(CR2.getUnsignedMin());
-      case LE:
-        return CR1.getUnsignedMax().ule(CR2.getUnsignedMin()) &&
-               CR1.getSignedMax().sle(CR2.getUnsignedMin());
-      case GT:
-        return CR1.getUnsignedMin().ugt(CR2.getUnsignedMax()) &&
-               CR1.getSignedMin().sgt(CR2.getSignedMax());
-      case GE:
-        return CR1.getUnsignedMin().uge(CR2.getUnsignedMax()) &&
-               CR1.getSignedMin().sge(CR2.getSignedMax());
-      case SLTUGT:
-        return CR1.getSignedMax().slt(CR2.getSignedMin()) &&
-               CR1.getUnsignedMin().ugt(CR2.getUnsignedMax());
-      case SLEUGE:
-        return CR1.getSignedMax().sle(CR2.getSignedMin()) &&
-               CR1.getUnsignedMin().uge(CR2.getUnsignedMax());
-      case SGTULT:
-        return CR1.getSignedMin().sgt(CR2.getSignedMax()) &&
-               CR1.getUnsignedMax().ult(CR2.getUnsignedMin());
-      case SGEULE:
-        return CR1.getSignedMin().sge(CR2.getSignedMax()) &&
-               CR1.getUnsignedMax().ule(CR2.getUnsignedMin());
-      }
-    }
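
For illustration only (hypothetical standalone code): the ULT case above
holds exactly when every value of the first range is below every value of
the second, i.e. max(CR1) < min(CR2).

    #include <cstdio>

    struct Range { unsigned Min, Max; };  // closed interval stand-in

    static bool allULT(Range R1, Range R2) { return R1.Max < R2.Min; }

    int main() {
      Range A = {0, 9}, B = {10, 20}, C = {5, 20};
      std::printf("%d %d\n", allULT(A, B),  // 1: [0,9] entirely below [10,20]
                             allULT(A, C)); // 0: [0,9] overlaps [5,20]
    }
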
-
-    bool isRelatedBy(unsigned n1, unsigned n2, DomTreeDFS::Node *Subtree,
-                     LatticeVal LV) {
-      ConstantRange CR1 = range(n1, Subtree);
-      ConstantRange CR2 = range(n2, Subtree);
-
-      // True iff all values in CR1 are LV to all values in CR2.
-      return isRelatedBy(CR1, CR2, LV);
-    }
-
-    void addToWorklist(Value *V, Constant *C, ICmpInst::Predicate Pred,
-                       VRPSolver *VRP);
-    void markBlock(VRPSolver *VRP);
-
-    void mergeInto(Value **I, unsigned n, unsigned New,
-                   DomTreeDFS::Node *Subtree, VRPSolver *VRP) {
-      ConstantRange CR_New = range(New, Subtree);
-      ConstantRange Merged = CR_New;
-
-      for (; n != 0; ++I, --n) {
-        unsigned i = VN.valueNumber(*I, Subtree);
-        ConstantRange CR_Kill = i ? range(i, Subtree) : range(*I);
-        if (CR_Kill.isFullSet()) continue;
-        Merged = Merged.intersectWith(CR_Kill);
-      }
-
-      if (Merged.isFullSet() || Merged == CR_New) return;
-
-      applyRange(New, Merged, Subtree, VRP);
-    }
-
-    void applyRange(unsigned n, const ConstantRange &CR,
-                    DomTreeDFS::Node *Subtree, VRPSolver *VRP) {
-      ConstantRange Merged = CR.intersectWith(range(n, Subtree));
-      if (Merged.isEmptySet()) {
-        markBlock(VRP);
-        return;
-      }
-
-      if (const APInt *I = Merged.getSingleElement()) {
-        Value *V = VN.value(n); // XXX: redesign worklist.
-        const Type *Ty = V->getType();
-        if (Ty->isInteger()) {
-          addToWorklist(V, ConstantInt::get(*Context, *I),
-                        ICmpInst::ICMP_EQ, VRP);
-          return;
-        } else if (const PointerType *PTy = dyn_cast<PointerType>(Ty)) {
-          assert(*I == 0 && "Pointer is null but not zero?");
-          addToWorklist(V, ConstantPointerNull::get(PTy),
-                        ICmpInst::ICMP_EQ, VRP);
-          return;
-        }
-      }
-
-      update(n, Merged, Subtree);
-    }
-
-    void addNotEquals(unsigned n1, unsigned n2, DomTreeDFS::Node *Subtree,
-                      VRPSolver *VRP) {
-      ConstantRange CR1 = range(n1, Subtree);
-      ConstantRange CR2 = range(n2, Subtree);
-
-      uint32_t W = CR1.getBitWidth();
-
-      if (const APInt *I = CR1.getSingleElement()) {
-        if (CR2.isFullSet()) {
-          ConstantRange NewCR2(CR1.getUpper(), CR1.getLower());
-          applyRange(n2, NewCR2, Subtree, VRP);
-        } else if (*I == CR2.getLower()) {
-          APInt NewLower(CR2.getLower() + 1),
-                NewUpper(CR2.getUpper());
-          if (NewLower == NewUpper)
-            NewLower = NewUpper = APInt::getMinValue(W);
-
-          ConstantRange NewCR2(NewLower, NewUpper);
-          applyRange(n2, NewCR2, Subtree, VRP);
-        } else if (*I == CR2.getUpper() - 1) {
-          APInt NewLower(CR2.getLower()),
-                NewUpper(CR2.getUpper() - 1);
-          if (NewLower == NewUpper)
-            NewLower = NewUpper = APInt::getMinValue(W);
-
-          ConstantRange NewCR2(NewLower, NewUpper);
-          applyRange(n2, NewCR2, Subtree, VRP);
-        }
-      }
-
-      if (const APInt *I = CR2.getSingleElement()) {
-        if (CR1.isFullSet()) {
-          ConstantRange NewCR1(CR2.getUpper(), CR2.getLower());
-          applyRange(n1, NewCR1, Subtree, VRP);
-        } else if (*I == CR1.getLower()) {
-          APInt NewLower(CR1.getLower() + 1),
-                NewUpper(CR1.getUpper());
-          if (NewLower == NewUpper)
-            NewLower = NewUpper = APInt::getMinValue(W);
-
-          ConstantRange NewCR1(NewLower, NewUpper);
-          applyRange(n1, NewCR1, Subtree, VRP);
-        } else if (*I == CR1.getUpper() - 1) {
-          APInt NewLower(CR1.getLower()),
-                NewUpper(CR1.getUpper() - 1);
-          if (NewLower == NewUpper)
-            NewLower = NewUpper = APInt::getMinValue(W);
-
-          ConstantRange NewCR1(NewLower, NewUpper);
-          applyRange(n1, NewCR1, Subtree, VRP);
-        }
-      }
-    }
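
The endpoint-trimming that addNotEquals performs can be sketched with plain
integers (illustrative code, not from the file): excluding a value only
tightens a range when that value sits on one of its ends.

    #include <cstdio>

    int main() {
      unsigned lo = 0, hi = 255, c = 0;  // x in [0,255], learn x != 0
      if (c == lo) ++lo;                 // trim the lower end
      else if (c == hi) --hi;            // or the upper end
      std::printf("x in [%u,%u]\n", lo, hi); // x in [1,255]
    }
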
-
-    void addInequality(unsigned n1, unsigned n2, DomTreeDFS::Node *Subtree,
-                       LatticeVal LV, VRPSolver *VRP) {
-      assert(!isRelatedBy(n1, n2, Subtree, LV) && "Asked to do useless work.");
-
-      if (LV == NE) {
-        addNotEquals(n1, n2, Subtree, VRP);
-        return;
-      }
-
-      ConstantRange CR1 = range(n1, Subtree);
-      ConstantRange CR2 = range(n2, Subtree);
-
-      if (!CR1.isSingleElement()) {
-        ConstantRange NewCR1 = CR1.intersectWith(create(LV, CR2));
-        if (NewCR1 != CR1)
-          applyRange(n1, NewCR1, Subtree, VRP);
-      }
-
-      if (!CR2.isSingleElement()) {
-        ConstantRange NewCR2 = CR2.intersectWith(
-                                       create(reversePredicate(LV), CR1));
-        if (NewCR2 != CR2)
-          applyRange(n2, NewCR2, Subtree, VRP);
-      }
-    }
-  };
-
-  /// UnreachableBlocks keeps track of blocks that are for one reason or
-  /// another discovered to be unreachable. This is used to cull the graph when
-  /// analyzing instructions, and to mark blocks with the "unreachable"
-  /// terminator instruction after the pass has finished executing.
-  class UnreachableBlocks {
-  private:
-    std::vector<BasicBlock *> DeadBlocks;
-
-  public:
-    /// mark - mark a block as dead
-    void mark(BasicBlock *BB) {
-      std::vector<BasicBlock *>::iterator E = DeadBlocks.end();
-      std::vector<BasicBlock *>::iterator I =
-        std::lower_bound(DeadBlocks.begin(), E, BB);
-
-      if (I == E || *I != BB) DeadBlocks.insert(I, BB);
-    }
-
-    /// isDead - returns whether a block is known to be dead already
-    bool isDead(BasicBlock *BB) {
-      std::vector<BasicBlock *>::iterator E = DeadBlocks.end();
-      std::vector<BasicBlock *>::iterator I =
-        std::lower_bound(DeadBlocks.begin(), E, BB);
-
-      return I != E && *I == BB;
-    }
-
-    /// kill - replace the dead blocks' terminator with an UnreachableInst.
-    bool kill() {
-      bool modified = false;
-      for (std::vector<BasicBlock *>::iterator I = DeadBlocks.begin(),
-           E = DeadBlocks.end(); I != E; ++I) {
-        BasicBlock *BB = *I;
-
-        DEBUG(errs() << "unreachable block: " << BB->getName() << "\n");
-
-        for (succ_iterator SI = succ_begin(BB), SE = succ_end(BB);
-             SI != SE; ++SI) {
-          BasicBlock *Succ = *SI;
-          Succ->removePredecessor(BB);
-        }
-
-        TerminatorInst *TI = BB->getTerminator();
-        TI->replaceAllUsesWith(UndefValue::get(TI->getType()));
-        TI->eraseFromParent();
-        new UnreachableInst(BB->getContext(), BB);
-        ++NumBlocks;
-        modified = true;
-      }
-      DeadBlocks.clear();
-      return modified;
-    }
-  };
-
-  /// VRPSolver keeps track of how changes to one variable affect other
-  /// variables, and forwards changes along to the InequalityGraph. It
-  /// also maintains the correct choice for "canonical" in the IG.
-  /// @brief VRPSolver calculates inferences from a new relationship.
-  class VRPSolver {
-  private:
-    friend class ValueRanges;
-
-    struct Operation {
-      Value *LHS, *RHS;
-      ICmpInst::Predicate Op;
-
-      BasicBlock *ContextBB; // XXX use a DomTreeDFS::Node instead
-      Instruction *ContextInst;
-    };
-    std::deque<Operation> WorkList;
-
-    ValueNumbering &VN;
-    InequalityGraph &IG;
-    UnreachableBlocks &UB;
-    ValueRanges &VR;
-    DomTreeDFS *DTDFS;
-    DomTreeDFS::Node *Top;
-    BasicBlock *TopBB;
-    Instruction *TopInst;
-    bool &modified;
-    LLVMContext *Context;
-
-    typedef InequalityGraph::Node Node;
-
-    // below - true if the Instruction is dominated by the current context
-    // block or instruction
-    bool below(Instruction *I) {
-      BasicBlock *BB = I->getParent();
-      if (TopInst && TopInst->getParent() == BB) {
-        if (isa<TerminatorInst>(TopInst)) return false;
-        if (isa<TerminatorInst>(I)) return true;
-        if ( isa<PHINode>(TopInst) && !isa<PHINode>(I)) return true;
-        if (!isa<PHINode>(TopInst) &&  isa<PHINode>(I)) return false;
-
-        for (BasicBlock::const_iterator Iter = BB->begin(), E = BB->end();
-             Iter != E; ++Iter) {
-          if (&*Iter == TopInst) return true;
-          else if (&*Iter == I) return false;
-        }
-        assert(!"Instructions not found in parent BasicBlock?");
-      } else {
-        DomTreeDFS::Node *Node = DTDFS->getNodeForBlock(BB);
-        if (!Node) return false;
-        return Top->dominates(Node);
-      }
-      return false; // Not reached
-    }
-
-    // aboveOrBelow - true if the Instruction either dominates or is dominated
-    // by the current context block or instruction
-    bool aboveOrBelow(Instruction *I) {
-      BasicBlock *BB = I->getParent();
-      DomTreeDFS::Node *Node = DTDFS->getNodeForBlock(BB);
-      if (!Node) return false;
-
-      return Top == Node || Top->dominates(Node) || Node->dominates(Top);
-    }
-
-    bool makeEqual(Value *V1, Value *V2) {
-      DEBUG(errs() << "makeEqual(" << *V1 << ", " << *V2 << ")\n");
-      DEBUG(errs() << "context is ");
-      DEBUG(if (TopInst) 
-              errs() << "I: " << *TopInst << "\n";
-            else 
-              errs() << "BB: " << TopBB->getName()
-                     << "(" << Top->getDFSNumIn() << ")\n");
-
-      assert(V1->getType() == V2->getType() &&
-             "Can't make two values with different types equal.");
-
-      if (V1 == V2) return true;
-
-      if (isa<Constant>(V1) && isa<Constant>(V2))
-        return false;
-
-      unsigned n1 = VN.valueNumber(V1, Top), n2 = VN.valueNumber(V2, Top);
-
-      if (n1 && n2) {
-        if (n1 == n2) return true;
-        if (IG.isRelatedBy(n1, n2, Top, NE)) return false;
-      }
-
-      if (n1) assert(V1 == VN.value(n1) && "Value isn't canonical.");
-      if (n2) assert(V2 == VN.value(n2) && "Value isn't canonical.");
-
-      assert(!VN.compare(V2, V1) && "Please order parameters to makeEqual.");
-
-      assert(!isa<Constant>(V2) && "Tried to remove a constant.");
-
-      SetVector<unsigned> Remove;
-      if (n2) Remove.insert(n2);
-
-      if (n1 && n2) {
-        // Suppose we're being told that %x == %y, and %x <= %z and %y >= %z.
-        // We can't just merge %x and %y because the relationship with %z would
-        // be EQ and that's invalid. What we're doing is looking for any nodes
-        // %z such that %x <= %z and %y >= %z, and vice versa.
-
-        Node::iterator end = IG.node(n2)->end();
-
-        // Find the intersection between N1 and N2 which is dominated by
-        // Top. If we find %x where N1 <= %x <= N2 (or >=) then add %x to
-        // Remove.
-        for (Node::iterator I = IG.node(n1)->begin(), E = IG.node(n1)->end();
-             I != E; ++I) {
-          if (!(I->LV & EQ_BIT) || !Top->DominatedBy(I->Subtree)) continue;
-
-          unsigned ILV_s = I->LV & (SLT_BIT|SGT_BIT);
-          unsigned ILV_u = I->LV & (ULT_BIT|UGT_BIT);
-          Node::iterator NI = IG.node(n2)->find(I->To, Top);
-          if (NI != end) {
-            LatticeVal NILV = reversePredicate(NI->LV);
-            unsigned NILV_s = NILV & (SLT_BIT|SGT_BIT);
-            unsigned NILV_u = NILV & (ULT_BIT|UGT_BIT);
-
-            if ((ILV_s != (SLT_BIT|SGT_BIT) && ILV_s == NILV_s) ||
-                (ILV_u != (ULT_BIT|UGT_BIT) && ILV_u == NILV_u))
-              Remove.insert(I->To);
-          }
-        }
-
-        // See if one of the nodes about to be removed is actually a better
-        // canonical choice than n1.
-        unsigned orig_n1 = n1;
-        SetVector<unsigned>::iterator DontRemove = Remove.end();
-        for (SetVector<unsigned>::iterator I = Remove.begin()+1 /* skip n2 */,
-             E = Remove.end(); I != E; ++I) {
-          unsigned n = *I;
-          Value *V = VN.value(n);
-          if (VN.compare(V, V1)) {
-            V1 = V;
-            n1 = n;
-            DontRemove = I;
-          }
-        }
-        if (DontRemove != Remove.end()) {
-          unsigned n = *DontRemove;
-          Remove.remove(n);
-          Remove.insert(orig_n1);
-        }
-      }
-
-      // We'd like to allow makeEqual on two values to perform a simple
-      // substitution without creating nodes in the IG whenever possible.
-      //
-      // The first iteration through this loop operates on V2 before going
-      // through the Remove list and operating on those too. If all of the
-      // iterations performed simple replacements then we exit early.
-      bool mergeIGNode = false;
-      unsigned i = 0;
-      for (Value *R = V2; i == 0 || i < Remove.size(); ++i) {
-        if (i) R = VN.value(Remove[i]); // skip n2.
-
-        // Try to replace the whole instruction. If we can, we're done.
-        Instruction *I2 = dyn_cast<Instruction>(R);
-        if (I2 && below(I2)) {
-          std::vector<Instruction *> ToNotify;
-          for (Value::use_iterator UI = I2->use_begin(), UE = I2->use_end();
-               UI != UE;) {
-            Use &TheUse = UI.getUse();
-            ++UI;
-            Instruction *I = cast<Instruction>(TheUse.getUser());
-            ToNotify.push_back(I);
-          }
-
-          DEBUG(errs() << "Simply removing " << *I2
-                       << ", replacing with " << *V1 << "\n");
-          I2->replaceAllUsesWith(V1);
-          // leave it dead; it'll get erased later.
-          ++NumInstruction;
-          modified = true;
-
-          for (std::vector<Instruction *>::iterator II = ToNotify.begin(),
-               IE = ToNotify.end(); II != IE; ++II) {
-            opsToDef(*II);
-          }
-
-          continue;
-        }
-
-        // Otherwise, replace all dominated uses.
-        for (Value::use_iterator UI = R->use_begin(), UE = R->use_end();
-             UI != UE;) {
-          Use &TheUse = UI.getUse();
-          ++UI;
-          if (Instruction *I = dyn_cast<Instruction>(TheUse.getUser())) {
-            if (below(I)) {
-              TheUse.set(V1);
-              modified = true;
-              ++NumVarsReplaced;
-              opsToDef(I);
-            }
-          }
-        }
-
-        // If that killed the instruction, stop here.
-        if (I2 && isInstructionTriviallyDead(I2)) {
-          DEBUG(errs() << "Killed all uses of " << *I2
-                       << ", replacing with " << *V1 << "\n");
-          continue;
-        }
-
-        // If we make it to here, then we will need to create a node for N1.
-        // Otherwise, we can skip out early!
-        mergeIGNode = true;
-      }
-
-      if (!isa<Constant>(V1)) {
-        if (Remove.empty()) {
-          VR.mergeInto(&V2, 1, VN.getOrInsertVN(V1, Top), Top, this);
-        } else {
-          std::vector<Value*> RemoveVals;
-          RemoveVals.reserve(Remove.size());
-
-          for (SetVector<unsigned>::iterator I = Remove.begin(),
-               E = Remove.end(); I != E; ++I) {
-            Value *V = VN.value(*I);
-            if (!V->use_empty())
-              RemoveVals.push_back(V);
-          }
-          VR.mergeInto(&RemoveVals[0], RemoveVals.size(), 
-                       VN.getOrInsertVN(V1, Top), Top, this);
-        }
-      }
-
-      if (mergeIGNode) {
-        // Create N1.
-        if (!n1) n1 = VN.getOrInsertVN(V1, Top);
-        IG.node(n1); // Ensure that IG.Nodes won't get resized
-
-        // Migrate relationships from removed nodes to N1.
-        for (SetVector<unsigned>::iterator I = Remove.begin(), E = Remove.end();
-             I != E; ++I) {
-          unsigned n = *I;
-          for (Node::iterator NI = IG.node(n)->begin(), NE = IG.node(n)->end();
-               NI != NE; ++NI) {
-            if (NI->Subtree->DominatedBy(Top)) {
-              if (NI->To == n1) {
-                assert((NI->LV & EQ_BIT) && "Node inequal to itself.");
-                continue;
-              }
-              if (Remove.count(NI->To))
-                continue;
-
-              IG.node(NI->To)->update(n1, reversePredicate(NI->LV), Top);
-              IG.node(n1)->update(NI->To, NI->LV, Top);
-            }
-          }
-        }
-
-        // Point V2 (and all items in Remove) to N1.
-        if (!n2)
-          VN.addEquality(n1, V2, Top);
-        else {
-          for (SetVector<unsigned>::iterator I = Remove.begin(),
-               E = Remove.end(); I != E; ++I) {
-            VN.addEquality(n1, VN.value(*I), Top);
-          }
-        }
-
-        // If !Remove.empty() then V2 == VN.value(Remove[0]).
-        // Even when Remove is empty, we still want to process V2.
-        i = 0;
-        for (Value *R = V2; i == 0 || i < Remove.size(); ++i) {
-          if (i) R = VN.value(Remove[i]); // skip n2.
-
-          if (Instruction *I2 = dyn_cast<Instruction>(R)) {
-            if (aboveOrBelow(I2))
-              defToOps(I2);
-          }
-          for (Value::use_iterator UI = R->use_begin(), UE = R->use_end();
-               UI != UE;) {
-            Use &TheUse = UI.getUse();
-            ++UI;
-            if (Instruction *I = dyn_cast<Instruction>(TheUse.getUser())) {
-              if (aboveOrBelow(I))
-                opsToDef(I);
-            }
-          }
-        }
-      }
-
-      // re-opsToDef all dominated users of V1.
-      if (Instruction *I = dyn_cast<Instruction>(V1)) {
-        for (Value::use_iterator UI = I->use_begin(), UE = I->use_end();
-             UI != UE;) {
-          Use &TheUse = UI.getUse();
-          ++UI;
-          Value *V = TheUse.getUser();
-          if (!V->use_empty()) {
-            Instruction *Inst = cast<Instruction>(V);
-            if (aboveOrBelow(Inst))
-              opsToDef(Inst);
-          }
-        }
-      }
-
-      return true;
-    }
-
-    /// cmpInstToLattice - converts an ICmpInst::Predicate to a lattice value.
-    /// Requires that the predicate be valid; does not accept ICMP_EQ.
-    static LatticeVal cmpInstToLattice(ICmpInst::Predicate Pred) {
-      switch (Pred) {
-        case ICmpInst::ICMP_EQ:
-          assert(!"No matching lattice value.");
-          return static_cast<LatticeVal>(EQ_BIT);
-        default:
-          assert(!"Invalid 'icmp' predicate.");
-        case ICmpInst::ICMP_NE:
-          return NE;
-        case ICmpInst::ICMP_UGT:
-          return UGT;
-        case ICmpInst::ICMP_UGE:
-          return UGE;
-        case ICmpInst::ICMP_ULT:
-          return ULT;
-        case ICmpInst::ICMP_ULE:
-          return ULE;
-        case ICmpInst::ICMP_SGT:
-          return SGT;
-        case ICmpInst::ICMP_SGE:
-          return SGE;
-        case ICmpInst::ICMP_SLT:
-          return SLT;
-        case ICmpInst::ICMP_SLE:
-          return SLE;
-      }
-    }
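A minimal standalone sketch of the bitmask scheme these lattice values rely on; the bit positions below are illustrative (the real EQ_BIT, ULT_BIT, etc. are defined earlier in this file), but the idea is the same: reversing a relation for the other operand just swaps the LT and GT bits while keeping EQ.

    #include <cassert>

    enum LatticeBits {             // illustrative positions only
      EQ_BIT  = 1 << 0,
      ULT_BIT = 1 << 1,
      UGT_BIT = 1 << 2,
      SLT_BIT = 1 << 3,
      SGT_BIT = 1 << 4
    };

    // "a < b" viewed from b's side is "b > a": swap LT/GT, keep EQ.
    static unsigned reversed(unsigned LV) {
      unsigned R = LV & EQ_BIT;
      if (LV & ULT_BIT) R |= UGT_BIT;
      if (LV & UGT_BIT) R |= ULT_BIT;
      if (LV & SLT_BIT) R |= SGT_BIT;
      if (LV & SGT_BIT) R |= SLT_BIT;
      return R;
    }

    int main() {
      unsigned ULE = ULT_BIT | EQ_BIT;             // u<=
      assert(reversed(ULE) == (UGT_BIT | EQ_BIT)); // u>=
      return 0;
    }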
-
-  public:
-    VRPSolver(ValueNumbering &VN, InequalityGraph &IG, UnreachableBlocks &UB,
-              ValueRanges &VR, DomTreeDFS *DTDFS, bool &modified,
-              BasicBlock *TopBB)
-      : VN(VN),
-        IG(IG),
-        UB(UB),
-        VR(VR),
-        DTDFS(DTDFS),
-        Top(DTDFS->getNodeForBlock(TopBB)),
-        TopBB(TopBB),
-        TopInst(NULL),
-        modified(modified),
-        Context(&TopBB->getContext())
-    {
-      assert(Top && "VRPSolver created for unreachable basic block.");
-    }
-
-    VRPSolver(ValueNumbering &VN, InequalityGraph &IG, UnreachableBlocks &UB,
-              ValueRanges &VR, DomTreeDFS *DTDFS, bool &modified,
-              Instruction *TopInst)
-      : VN(VN),
-        IG(IG),
-        UB(UB),
-        VR(VR),
-        DTDFS(DTDFS),
-        Top(DTDFS->getNodeForBlock(TopInst->getParent())),
-        TopBB(TopInst->getParent()),
-        TopInst(TopInst),
-        modified(modified),
-        Context(&TopInst->getContext())
-    {
-      assert(Top && "VRPSolver created for unreachable basic block.");
-      assert(Top->getBlock() == TopInst->getParent() && "Context mismatch.");
-    }
-
-    bool isRelatedBy(Value *V1, Value *V2, ICmpInst::Predicate Pred) const {
-      if (Constant *C1 = dyn_cast<Constant>(V1))
-        if (Constant *C2 = dyn_cast<Constant>(V2))
-          return ConstantExpr::getCompare(Pred, C1, C2) ==
-                 ConstantInt::getTrue(*Context);
-
-      unsigned n1 = VN.valueNumber(V1, Top);
-      unsigned n2 = VN.valueNumber(V2, Top);
-
-      if (n1 && n2) {
-        if (n1 == n2) return Pred == ICmpInst::ICMP_EQ ||
-                             Pred == ICmpInst::ICMP_ULE ||
-                             Pred == ICmpInst::ICMP_UGE ||
-                             Pred == ICmpInst::ICMP_SLE ||
-                             Pred == ICmpInst::ICMP_SGE;
-        if (Pred == ICmpInst::ICMP_EQ) return false;
-        if (IG.isRelatedBy(n1, n2, Top, cmpInstToLattice(Pred))) return true;
-        if (VR.isRelatedBy(n1, n2, Top, cmpInstToLattice(Pred))) return true;
-      }
-
-      if ((n1 && !n2 && isa<Constant>(V2)) ||
-          (n2 && !n1 && isa<Constant>(V1))) {
-        ConstantRange CR1 = n1 ? VR.range(n1, Top) : VR.range(V1);
-        ConstantRange CR2 = n2 ? VR.range(n2, Top) : VR.range(V2);
-
-        if (Pred == ICmpInst::ICMP_EQ)
-          return CR1.isSingleElement() &&
-                 CR1.getSingleElement() == CR2.getSingleElement();
-
-        return VR.isRelatedBy(CR1, CR2, cmpInstToLattice(Pred));
-      }
-      if (Pred == ICmpInst::ICMP_EQ) return V1 == V2;
-      return false;
-    }
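When only one side has a value number and the other is a constant, the answer comes from interval reasoning. A hedged sketch of that fallback, with plain half-open intervals standing in for llvm::ConstantRange:

    #include <cassert>

    struct Range { unsigned Lo, Hi; }; // half-open [Lo, Hi), not wrapped

    // Provable "A u< B": the largest member of A sits below the smallest
    // member of B. (Assumes both ranges are non-empty.)
    static bool provablyULT(Range A, Range B) {
      return A.Hi - 1 < B.Lo;
    }

    int main() {
      Range A = {0, 4};           // %a known to lie in [0, 4)
      Range B = {4, 10};          // %b known to lie in [4, 10)
      assert(provablyULT(A, B));  // so "icmp ult %a, %b" folds to true
      assert(!provablyULT(B, A));
      return 0;
    }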
-
-    /// add - adds a new property to the work queue
-    void add(Value *V1, Value *V2, ICmpInst::Predicate Pred,
-             Instruction *I = NULL) {
-      DEBUG(errs() << "adding " << *V1 << " " << Pred << " " << *V2);
-      if (I)
-        DEBUG(errs() << " context: " << *I);
-      else 
-        DEBUG(errs() << " default context (" << Top->getDFSNumIn() << ")");
-      DEBUG(errs() << "\n");
-
-      assert(V1->getType() == V2->getType() &&
-             "Can't relate two values with different types.");
-
-      WorkList.push_back(Operation());
-      Operation &O = WorkList.back();
-      O.LHS = V1, O.RHS = V2, O.Op = Pred, O.ContextInst = I;
-      O.ContextBB = I ? I->getParent() : TopBB;
-    }
-
-    /// defToOps - Given an instruction definition that we've learned something
-    /// new about, find any new relationships between its operands.
-    void defToOps(Instruction *I) {
-      Instruction *NewContext = below(I) ? I : TopInst;
-      Value *Canonical = VN.canonicalize(I, Top);
-
-      if (BinaryOperator *BO = dyn_cast<BinaryOperator>(I)) {
-        const Type *Ty = BO->getType();
-        assert(!Ty->isFPOrFPVector() && "Float in work queue!");
-
-        Value *Op0 = VN.canonicalize(BO->getOperand(0), Top);
-        Value *Op1 = VN.canonicalize(BO->getOperand(1), Top);
-
-        // TODO: "and i32 -1, %x" EQ %y then %x EQ %y.
-
-        switch (BO->getOpcode()) {
-          case Instruction::And: {
-            // "and i32 %a, %b" EQ -1 then %a EQ -1 and %b EQ -1
-            ConstantInt *CI = cast<ConstantInt>(Constant::getAllOnesValue(Ty));
-            if (Canonical == CI) {
-              add(CI, Op0, ICmpInst::ICMP_EQ, NewContext);
-              add(CI, Op1, ICmpInst::ICMP_EQ, NewContext);
-            }
-          } break;
-          case Instruction::Or: {
-            // "or i32 %a, %b" EQ 0 then %a EQ 0 and %b EQ 0
-            Constant *Zero = Constant::getNullValue(Ty);
-            if (Canonical == Zero) {
-              add(Zero, Op0, ICmpInst::ICMP_EQ, NewContext);
-              add(Zero, Op1, ICmpInst::ICMP_EQ, NewContext);
-            }
-          } break;
-          case Instruction::Xor: {
-            // "xor i32 %c, %a" EQ %b then %a EQ %c ^ %b
-            // "xor i32 %c, %a" EQ %c then %a EQ 0
-            // "xor i32 %c, %a" NE %c then %a NE 0
-            // Repeat the above, with order of operands reversed.
-            Value *LHS = Op0;
-            Value *RHS = Op1;
-            if (!isa<Constant>(LHS)) std::swap(LHS, RHS);
-
-            if (ConstantInt *CI = dyn_cast<ConstantInt>(Canonical)) {
-              if (ConstantInt *Arg = dyn_cast<ConstantInt>(LHS)) {
-                add(RHS,
-                  ConstantInt::get(*Context, CI->getValue() ^ Arg->getValue()),
-                    ICmpInst::ICMP_EQ, NewContext);
-              }
-            }
-            if (Canonical == LHS) {
-              if (isa<ConstantInt>(Canonical))
-                add(RHS, Constant::getNullValue(Ty), ICmpInst::ICMP_EQ,
-                    NewContext);
-            } else if (isRelatedBy(LHS, Canonical, ICmpInst::ICMP_NE)) {
-              add(RHS, Constant::getNullValue(Ty), ICmpInst::ICMP_NE,
-                  NewContext);
-            }
-          } break;
-          default:
-            break;
-        }
-      } else if (ICmpInst *IC = dyn_cast<ICmpInst>(I)) {
-        // "icmp ult i32 %a, %y" EQ true then %a u< y
-        // etc.
-
-        if (Canonical == ConstantInt::getTrue(*Context)) {
-          add(IC->getOperand(0), IC->getOperand(1), IC->getPredicate(),
-              NewContext);
-        } else if (Canonical == ConstantInt::getFalse(*Context)) {
-          add(IC->getOperand(0), IC->getOperand(1),
-              ICmpInst::getInversePredicate(IC->getPredicate()), NewContext);
-        }
-      } else if (SelectInst *SI = dyn_cast<SelectInst>(I)) {
-        if (I->getType()->isFPOrFPVector()) return;
-
-        // Given: "%a = select i1 %x, i32 %b, i32 %c"
-        // %a EQ %b and %b NE %c then %x EQ true
-        // %a EQ %c and %b NE %c then %x EQ false
-
-        Value *True  = SI->getTrueValue();
-        Value *False = SI->getFalseValue();
-        if (isRelatedBy(True, False, ICmpInst::ICMP_NE)) {
-          if (Canonical == VN.canonicalize(True, Top) ||
-              isRelatedBy(Canonical, False, ICmpInst::ICMP_NE))
-            add(SI->getCondition(), ConstantInt::getTrue(*Context),
-                ICmpInst::ICMP_EQ, NewContext);
-          else if (Canonical == VN.canonicalize(False, Top) ||
-                   isRelatedBy(Canonical, True, ICmpInst::ICMP_NE))
-            add(SI->getCondition(), ConstantInt::getFalse(*Context),
-                ICmpInst::ICMP_EQ, NewContext);
-        }
-      } else if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(I)) {
-        for (GetElementPtrInst::op_iterator OI = GEPI->idx_begin(),
-             OE = GEPI->idx_end(); OI != OE; ++OI) {
-          ConstantInt *Op = dyn_cast<ConstantInt>(VN.canonicalize(*OI, Top));
-          if (!Op || !Op->isZero()) return;
-        }
-        // TODO: The GEPI indices are all zero. Copy from definition to operand,
-        // jumping the type plane as needed.
-        if (isRelatedBy(GEPI, Constant::getNullValue(GEPI->getType()),
-                        ICmpInst::ICMP_NE)) {
-          Value *Ptr = GEPI->getPointerOperand();
-          add(Ptr, Constant::getNullValue(Ptr->getType()), ICmpInst::ICMP_NE,
-              NewContext);
-        }
-      } else if (CastInst *CI = dyn_cast<CastInst>(I)) {
-        const Type *SrcTy = CI->getSrcTy();
-
-        unsigned ci = VN.getOrInsertVN(CI, Top);
-        uint32_t W = VR.typeToWidth(SrcTy);
-        if (!W) return;
-        ConstantRange CR = VR.range(ci, Top);
-
-        if (CR.isFullSet()) return;
-
-        switch (CI->getOpcode()) {
-          default: break;
-          case Instruction::ZExt:
-          case Instruction::SExt:
-            VR.applyRange(VN.getOrInsertVN(CI->getOperand(0), Top),
-                          CR.truncate(W), Top, this);
-            break;
-          case Instruction::BitCast:
-            VR.applyRange(VN.getOrInsertVN(CI->getOperand(0), Top),
-                          CR, Top, this);
-            break;
-        }
-      }
-    }
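One concrete instance of the Xor rule above, as a hedged standalone check: because xor is its own inverse, knowing the result and one constant operand pins down the other operand exactly.

    #include <cassert>
    #include <cstdint>

    int main() {
      uint32_t c = 0xF0, a = 0x3C;
      uint32_t y = c ^ a;   // "%y = xor i32 %c, %a", with %y and %c known
      assert((y ^ c) == a); // so defToOps can queue "%a EQ (%y ^ %c)"
      return 0;
    }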
-
-    /// opsToDef - A new relationship was discovered involving one of this
-    /// instruction's operands. Find any new relationship involving the
-    /// definition, or another operand.
-    void opsToDef(Instruction *I) {
-      Instruction *NewContext = below(I) ? I : TopInst;
-
-      if (BinaryOperator *BO = dyn_cast<BinaryOperator>(I)) {
-        Value *Op0 = VN.canonicalize(BO->getOperand(0), Top);
-        Value *Op1 = VN.canonicalize(BO->getOperand(1), Top);
-
-        if (ConstantInt *CI0 = dyn_cast<ConstantInt>(Op0))
-          if (ConstantInt *CI1 = dyn_cast<ConstantInt>(Op1)) {
-            add(BO, ConstantExpr::get(BO->getOpcode(), CI0, CI1),
-                ICmpInst::ICMP_EQ, NewContext);
-            return;
-          }
-
-        // "%y = and i1 true, %x" then %x EQ %y
-        // "%y = or i1 false, %x" then %x EQ %y
-        // "%x = add i32 %y, 0" then %x EQ %y
-        // "%x = mul i32 %y, 0" then %x EQ 0
-
-        Instruction::BinaryOps Opcode = BO->getOpcode();
-        const Type *Ty = BO->getType();
-        assert(!Ty->isFPOrFPVector() && "Float in work queue!");
-
-        Constant *Zero = Constant::getNullValue(Ty);
-        Constant *One = ConstantInt::get(Ty, 1);
-        ConstantInt *AllOnes = cast<ConstantInt>(Constant::getAllOnesValue(Ty));
-
-        switch (Opcode) {
-          default: break;
-          case Instruction::LShr:
-          case Instruction::AShr:
-          case Instruction::Shl:
-            if (Op1 == Zero) {
-              add(BO, Op0, ICmpInst::ICMP_EQ, NewContext);
-              return;
-            }
-            break;
-          case Instruction::Sub:
-            if (Op1 == Zero) {
-              add(BO, Op0, ICmpInst::ICMP_EQ, NewContext);
-              return;
-            }
-            if (ConstantInt *CI0 = dyn_cast<ConstantInt>(Op0)) {
-              unsigned n_ci0 = VN.getOrInsertVN(Op1, Top);
-              ConstantRange CR = VR.range(n_ci0, Top);
-              if (!CR.isFullSet()) {
-                CR.subtract(CI0->getValue());
-                unsigned n_bo = VN.getOrInsertVN(BO, Top);
-                VR.applyRange(n_bo, CR, Top, this);
-                return;
-              }
-            }
-            if (ConstantInt *CI1 = dyn_cast<ConstantInt>(Op1)) {
-              unsigned n_ci1 = VN.getOrInsertVN(Op0, Top);
-              ConstantRange CR = VR.range(n_ci1, Top);
-              if (!CR.isFullSet()) {
-                CR.subtract(CI1->getValue());
-                unsigned n_bo = VN.getOrInsertVN(BO, Top);
-                VR.applyRange(n_bo, CR, Top, this);
-                return;
-              }
-            }
-            break;
-          case Instruction::Or:
-            if (Op0 == AllOnes || Op1 == AllOnes) {
-              add(BO, AllOnes, ICmpInst::ICMP_EQ, NewContext);
-              return;
-            }
-            if (Op0 == Zero) {
-              add(BO, Op1, ICmpInst::ICMP_EQ, NewContext);
-              return;
-            } else if (Op1 == Zero) {
-              add(BO, Op0, ICmpInst::ICMP_EQ, NewContext);
-              return;
-            }
-            break;
-          case Instruction::Add:
-            if (ConstantInt *CI0 = dyn_cast<ConstantInt>(Op0)) {
-              unsigned n_ci0 = VN.getOrInsertVN(Op1, Top);
-              ConstantRange CR = VR.range(n_ci0, Top);
-              if (!CR.isFullSet()) {
-                CR.subtract(-CI0->getValue());
-                unsigned n_bo = VN.getOrInsertVN(BO, Top);
-                VR.applyRange(n_bo, CR, Top, this);
-                return;
-              }
-            }
-            if (ConstantInt *CI1 = dyn_cast<ConstantInt>(Op1)) {
-              unsigned n_ci1 = VN.getOrInsertVN(Op0, Top);
-              ConstantRange CR = VR.range(n_ci1, Top);
-              if (!CR.isFullSet()) {
-                CR.subtract(-CI1->getValue());
-                unsigned n_bo = VN.getOrInsertVN(BO, Top);
-                VR.applyRange(n_bo, CR, Top, this);
-                return;
-              }
-            }
-            // fall-through
-          case Instruction::Xor:
-            if (Op0 == Zero) {
-              add(BO, Op1, ICmpInst::ICMP_EQ, NewContext);
-              return;
-            } else if (Op1 == Zero) {
-              add(BO, Op0, ICmpInst::ICMP_EQ, NewContext);
-              return;
-            }
-            break;
-          case Instruction::And:
-            if (Op0 == AllOnes) {
-              add(BO, Op1, ICmpInst::ICMP_EQ, NewContext);
-              return;
-            } else if (Op1 == AllOnes) {
-              add(BO, Op0, ICmpInst::ICMP_EQ, NewContext);
-              return;
-            }
-            if (Op0 == Zero || Op1 == Zero) {
-              add(BO, Zero, ICmpInst::ICMP_EQ, NewContext);
-              return;
-            }
-            break;
-          case Instruction::Mul:
-            if (Op0 == Zero || Op1 == Zero) {
-              add(BO, Zero, ICmpInst::ICMP_EQ, NewContext);
-              return;
-            }
-            if (Op0 == One) {
-              add(BO, Op1, ICmpInst::ICMP_EQ, NewContext);
-              return;
-            } else if (Op1 == One) {
-              add(BO, Op0, ICmpInst::ICMP_EQ, NewContext);
-              return;
-            }
-            break;
-        }
-
-        // "%x = add i32 %y, %z" and %x EQ %y then %z EQ 0
-        // "%x = add i32 %y, %z" and %x EQ %z then %y EQ 0
-        // "%x = shl i32 %y, %z" and %x EQ %y and %y NE 0 then %z EQ 0
-        // "%x = udiv i32 %y, %z" and %x EQ %y and %y NE 0 then %z EQ 1
-
-        Value *Known = Op0, *Unknown = Op1,
-              *TheBO = VN.canonicalize(BO, Top);
-        if (Known != TheBO) std::swap(Known, Unknown);
-        if (Known == TheBO) {
-          switch (Opcode) {
-            default: break;
-            case Instruction::LShr:
-            case Instruction::AShr:
-            case Instruction::Shl:
-              if (!isRelatedBy(Known, Zero, ICmpInst::ICMP_NE)) break;
-              // otherwise, fall-through.
-            case Instruction::Sub:
-              if (Unknown == Op0) break;
-              // otherwise, fall-through.
-            case Instruction::Xor:
-            case Instruction::Add:
-              add(Unknown, Zero, ICmpInst::ICMP_EQ, NewContext);
-              break;
-            case Instruction::UDiv:
-            case Instruction::SDiv:
-              if (Unknown == Op1) break;
-              if (isRelatedBy(Known, Zero, ICmpInst::ICMP_NE))
-                add(Unknown, One, ICmpInst::ICMP_EQ, NewContext);
-              break;
-          }
-        }
-
-        // TODO: "%a = add i32 %b, 1" and %b > %z then %a >= %z.
-
-      } else if (ICmpInst *IC = dyn_cast<ICmpInst>(I)) {
-        // "%a = icmp ult i32 %b, %c" and %b u<  %c then %a EQ true
-        // "%a = icmp ult i32 %b, %c" and %b u>= %c then %a EQ false
-        // etc.
-
-        Value *Op0 = VN.canonicalize(IC->getOperand(0), Top);
-        Value *Op1 = VN.canonicalize(IC->getOperand(1), Top);
-
-        ICmpInst::Predicate Pred = IC->getPredicate();
-        if (isRelatedBy(Op0, Op1, Pred))
-          add(IC, ConstantInt::getTrue(*Context), ICmpInst::ICMP_EQ, NewContext);
-        else if (isRelatedBy(Op0, Op1, ICmpInst::getInversePredicate(Pred)))
-          add(IC, ConstantInt::getFalse(*Context),
-              ICmpInst::ICMP_EQ, NewContext);
-
-      } else if (SelectInst *SI = dyn_cast<SelectInst>(I)) {
-        if (I->getType()->isFPOrFPVector()) return;
-
-        // Given: "%a = select i1 %x, i32 %b, i32 %c"
-        // %x EQ true  then %a EQ %b
-        // %x EQ false then %a EQ %c
-        // %b EQ %c then %a EQ %b
-
-        Value *Canonical = VN.canonicalize(SI->getCondition(), Top);
-        if (Canonical == ConstantInt::getTrue(*Context)) {
-          add(SI, SI->getTrueValue(), ICmpInst::ICMP_EQ, NewContext);
-        } else if (Canonical == ConstantInt::getFalse(*Context)) {
-          add(SI, SI->getFalseValue(), ICmpInst::ICMP_EQ, NewContext);
-        } else if (VN.canonicalize(SI->getTrueValue(), Top) ==
-                   VN.canonicalize(SI->getFalseValue(), Top)) {
-          add(SI, SI->getTrueValue(), ICmpInst::ICMP_EQ, NewContext);
-        }
-      } else if (CastInst *CI = dyn_cast<CastInst>(I)) {
-        const Type *DestTy = CI->getDestTy();
-        if (DestTy->isFPOrFPVector()) return;
-
-        Value *Op = VN.canonicalize(CI->getOperand(0), Top);
-        Instruction::CastOps Opcode = CI->getOpcode();
-
-        if (Constant *C = dyn_cast<Constant>(Op)) {
-          add(CI, ConstantExpr::getCast(Opcode, C, DestTy),
-              ICmpInst::ICMP_EQ, NewContext);
-        }
-
-        uint32_t W = VR.typeToWidth(DestTy);
-        unsigned ci = VN.getOrInsertVN(CI, Top);
-        ConstantRange CR = VR.range(VN.getOrInsertVN(Op, Top), Top);
-
-        if (!CR.isFullSet()) {
-          switch (Opcode) {
-            default: break;
-            case Instruction::ZExt:
-              VR.applyRange(ci, CR.zeroExtend(W), Top, this);
-              break;
-            case Instruction::SExt:
-              VR.applyRange(ci, CR.signExtend(W), Top, this);
-              break;
-            case Instruction::Trunc: {
-              ConstantRange Result = CR.truncate(W);
-              if (!Result.isFullSet())
-                VR.applyRange(ci, Result, Top, this);
-            } break;
-            case Instruction::BitCast:
-              VR.applyRange(ci, CR, Top, this);
-              break;
-            // TODO: other casts?
-          }
-        }
-      } else if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(I)) {
-        for (GetElementPtrInst::op_iterator OI = GEPI->idx_begin(),
-             OE = GEPI->idx_end(); OI != OE; ++OI) {
-          ConstantInt *Op = dyn_cast<ConstantInt>(VN.canonicalize(*OI, Top));
-          if (!Op || !Op->isZero()) return;
-        }
-        // TODO: The GEPI indices are all zero. Copy from operand to definition,
-        // jumping the type plane as needed.
-        Value *Ptr = GEPI->getPointerOperand();
-        if (isRelatedBy(Ptr, Constant::getNullValue(Ptr->getType()),
-                        ICmpInst::ICMP_NE)) {
-          add(GEPI, Constant::getNullValue(GEPI->getType()), ICmpInst::ICMP_NE,
-              NewContext);
-        }
-      }
-    }
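The Add and Sub cases in opsToDef shift a known operand range by the constant operand; note the pass spells "add C" as CR.subtract(-C). A hedged sketch of that endpoint arithmetic, with wrapping uint8_t values standing in for APInt:

    #include <cassert>
    #include <cstdint>

    struct Range { uint8_t Lo, Hi; }; // half-open [Lo, Hi) modulo 256

    static Range shiftByConstant(Range CR, uint8_t C) {
      // Endpoint-wise add with wrap-around: subtract(-C) == add C.
      return Range{static_cast<uint8_t>(CR.Lo + C),
                   static_cast<uint8_t>(CR.Hi + C)};
    }

    int main() {
      Range Y = {10, 20};               // %y known in [10, 20)
      Range X = shiftByConstant(Y, 5);  // "%x = add i8 %y, 5"
      assert(X.Lo == 15 && X.Hi == 25); // %x in [15, 25)
      return 0;
    }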
-
-    /// solve - process the work queue
-    void solve() {
-      //DEBUG(errs() << "WorkList entry, size: " << WorkList.size() << "\n");
-      while (!WorkList.empty()) {
-        //DEBUG(errs() << "WorkList size: " << WorkList.size() << "\n");
-
-        Operation &O = WorkList.front();
-        TopInst = O.ContextInst;
-        TopBB = O.ContextBB;
-        Top = DTDFS->getNodeForBlock(TopBB); // XXX move this into Context
-
-        O.LHS = VN.canonicalize(O.LHS, Top);
-        O.RHS = VN.canonicalize(O.RHS, Top);
-
-        assert(O.LHS == VN.canonicalize(O.LHS, Top) && "Canonicalize isn't.");
-        assert(O.RHS == VN.canonicalize(O.RHS, Top) && "Canonicalize isn't.");
-
-        DEBUG(errs() << "solving " << *O.LHS << " " << O.Op << " " << *O.RHS;
-              if (O.ContextInst) 
-                errs() << " context inst: " << *O.ContextInst;
-              else
-                errs() << " context block: " << O.ContextBB->getName();
-              errs() << "\n";
-
-              VN.dump();
-              IG.dump();
-              VR.dump(););
-
-        // If they're both Constant, skip it. Check for contradiction and mark
-        // the BB as unreachable if so.
-        if (Constant *CI_L = dyn_cast<Constant>(O.LHS)) {
-          if (Constant *CI_R = dyn_cast<Constant>(O.RHS)) {
-            if (ConstantExpr::getCompare(O.Op, CI_L, CI_R) ==
-                ConstantInt::getFalse(*Context))
-              UB.mark(TopBB);
-
-            WorkList.pop_front();
-            continue;
-          }
-        }
-
-        if (VN.compare(O.LHS, O.RHS)) {
-          std::swap(O.LHS, O.RHS);
-          O.Op = ICmpInst::getSwappedPredicate(O.Op);
-        }
-
-        if (O.Op == ICmpInst::ICMP_EQ) {
-          if (!makeEqual(O.RHS, O.LHS))
-            UB.mark(TopBB);
-        } else {
-          LatticeVal LV = cmpInstToLattice(O.Op);
-
-          if ((LV & EQ_BIT) &&
-              isRelatedBy(O.LHS, O.RHS, ICmpInst::getSwappedPredicate(O.Op))) {
-            if (!makeEqual(O.RHS, O.LHS))
-              UB.mark(TopBB);
-          } else {
-            if (isRelatedBy(O.LHS, O.RHS, ICmpInst::getInversePredicate(O.Op))){
-              UB.mark(TopBB);
-              WorkList.pop_front();
-              continue;
-            }
-
-            unsigned n1 = VN.getOrInsertVN(O.LHS, Top);
-            unsigned n2 = VN.getOrInsertVN(O.RHS, Top);
-
-            if (n1 == n2) {
-              if (O.Op != ICmpInst::ICMP_UGE && O.Op != ICmpInst::ICMP_ULE &&
-                  O.Op != ICmpInst::ICMP_SGE && O.Op != ICmpInst::ICMP_SLE)
-                UB.mark(TopBB);
-
-              WorkList.pop_front();
-              continue;
-            }
-
-            if (VR.isRelatedBy(n1, n2, Top, LV) ||
-                IG.isRelatedBy(n1, n2, Top, LV)) {
-              WorkList.pop_front();
-              continue;
-            }
-
-            VR.addInequality(n1, n2, Top, LV, this);
-            if ((!isa<ConstantInt>(O.RHS) && !isa<ConstantInt>(O.LHS)) ||
-                LV == NE)
-              IG.addInequality(n1, n2, Top, LV);
-
-            if (Instruction *I1 = dyn_cast<Instruction>(O.LHS)) {
-              if (aboveOrBelow(I1))
-                defToOps(I1);
-            }
-            if (isa<Instruction>(O.LHS) || isa<Argument>(O.LHS)) {
-              for (Value::use_iterator UI = O.LHS->use_begin(),
-                   UE = O.LHS->use_end(); UI != UE;) {
-                Use &TheUse = UI.getUse();
-                ++UI;
-                Instruction *I = cast<Instruction>(TheUse.getUser());
-                if (aboveOrBelow(I))
-                  opsToDef(I);
-              }
-            }
-            if (Instruction *I2 = dyn_cast<Instruction>(O.RHS)) {
-              if (aboveOrBelow(I2))
-                defToOps(I2);
-            }
-            if (isa<Instruction>(O.RHS) || isa<Argument>(O.RHS)) {
-              for (Value::use_iterator UI = O.RHS->use_begin(),
-                   UE = O.RHS->use_end(); UI != UE;) {
-                Use &TheUse = UI.getUse();
-                ++UI;
-                Instruction *I = cast<Instruction>(TheUse.getUser());
-                if (aboveOrBelow(I))
-                  opsToDef(I);
-              }
-            }
-          }
-        }
-        WorkList.pop_front();
-      }
-    }
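The driver shape here is a plain FIFO fixed-point loop: each fact may enqueue further facts via defToOps/opsToDef, and an entry is popped only after it has been processed. A minimal sketch, with strings standing in for the Operation records:

    #include <deque>
    #include <iostream>
    #include <string>

    int main() {
      std::deque<std::string> WorkList = {"%a EQ true"};
      bool Derived = false;
      while (!WorkList.empty()) {
        std::cout << "solving " << WorkList.front() << "\n";
        // ...canonicalize, detect contradiction, record the relation...
        if (!Derived) {                   // recording a fact can imply
          WorkList.push_back("%b u< %c"); // new ones: queue, don't recurse
          Derived = true;
        }
        WorkList.pop_front();             // pop after processing, as above
      }
      return 0;
    }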
-  };
-
-  void ValueRanges::addToWorklist(Value *V, Constant *C,
-                                  ICmpInst::Predicate Pred, VRPSolver *VRP) {
-    VRP->add(V, C, Pred, VRP->TopInst);
-  }
-
-  void ValueRanges::markBlock(VRPSolver *VRP) {
-    VRP->UB.mark(VRP->TopBB);
-  }
-
-  /// PredicateSimplifier - This class is a simplifier that replaces
-  /// one equivalent variable with another. It also tracks what
-  /// can't be equal and will solve setcc instructions when possible.
-  /// @brief Root of the predicate simplifier optimization.
-  class PredicateSimplifier : public FunctionPass {
-    DomTreeDFS *DTDFS;
-    bool modified;
-    ValueNumbering *VN;
-    InequalityGraph *IG;
-    UnreachableBlocks UB;
-    ValueRanges *VR;
-
-    std::vector<DomTreeDFS::Node *> WorkList;
-
-    LLVMContext *Context;
-  public:
-    static char ID; // Pass identification, replacement for typeid
-    PredicateSimplifier() : FunctionPass(&ID) {}
-
-    bool runOnFunction(Function &F);
-
-    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
-      AU.addRequiredID(BreakCriticalEdgesID);
-      AU.addRequired<DominatorTree>();
-    }
-
-  private:
-    /// Forwards - Adds new properties to VRPSolver and uses them to
-    /// simplify instructions. Because new properties sometimes apply to
-    /// a transition from one BasicBlock to another, this will use the
-    /// PredicateSimplifier::proceedToSuccessor(s) interface to enter the
-    /// basic block.
-    /// @brief Performs abstract execution of the program.
-    class Forwards : public InstVisitor<Forwards> {
-      friend class InstVisitor<Forwards>;
-      PredicateSimplifier *PS;
-      DomTreeDFS::Node *DTNode;
-
-    public:
-      ValueNumbering &VN;
-      InequalityGraph &IG;
-      UnreachableBlocks &UB;
-      ValueRanges &VR;
-
-      Forwards(PredicateSimplifier *PS, DomTreeDFS::Node *DTNode)
-        : PS(PS), DTNode(DTNode), VN(*PS->VN), IG(*PS->IG), UB(PS->UB),
-          VR(*PS->VR) {}
-
-      void visitTerminatorInst(TerminatorInst &TI);
-      void visitBranchInst(BranchInst &BI);
-      void visitSwitchInst(SwitchInst &SI);
-
-      void visitAllocaInst(AllocaInst &AI);
-      void visitLoadInst(LoadInst &LI);
-      void visitStoreInst(StoreInst &SI);
-
-      void visitSExtInst(SExtInst &SI);
-      void visitZExtInst(ZExtInst &ZI);
-
-      void visitBinaryOperator(BinaryOperator &BO);
-      void visitICmpInst(ICmpInst &IC);
-    };
-  
-    // Used by terminator instructions to proceed from the current basic
-    // block to the next. Queues each dominator-tree child of "current" on
-    // the work list so that visitBasicBlock runs on it later.
-    void proceedToSuccessors(DomTreeDFS::Node *Current) {
-      for (DomTreeDFS::Node::iterator I = Current->begin(),
-           E = Current->end(); I != E; ++I) {
-        WorkList.push_back(*I);
-      }
-    }
-
-    void proceedToSuccessor(DomTreeDFS::Node *Next) {
-      WorkList.push_back(Next);
-    }
-
-    // Visits each instruction in the basic block.
-    void visitBasicBlock(DomTreeDFS::Node *Node) {
-      BasicBlock *BB = Node->getBlock();
-      DEBUG(errs() << "Entering Basic Block: " << BB->getName()
-            << " (" << Node->getDFSNumIn() << ")\n");
-      for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E;) {
-        visitInstruction(I++, Node);
-      }
-    }
-
-    // Tries to simplify each Instruction and add new properties.
-    void visitInstruction(Instruction *I, DomTreeDFS::Node *DT) {
-      DEBUG(errs() << "Considering instruction " << *I << "\n");
-      DEBUG(VN->dump());
-      DEBUG(IG->dump());
-      DEBUG(VR->dump());
-
-      // Sometimes instructions are killed in earlier analysis.
-      if (isInstructionTriviallyDead(I)) {
-        ++NumSimple;
-        modified = true;
-        if (unsigned n = VN->valueNumber(I, DTDFS->getRootNode()))
-          if (VN->value(n) == I) IG->remove(n);
-        VN->remove(I);
-        I->eraseFromParent();
-        return;
-      }
-
-#ifndef NDEBUG
-      // Try to replace the whole instruction.
-      Value *V = VN->canonicalize(I, DT);
-      assert(V == I && "Late instruction canonicalization.");
-      if (V != I) {
-        modified = true;
-        ++NumInstruction;
-        DEBUG(errs() << "Removing " << *I << ", replacing with " << *V << "\n");
-        if (unsigned n = VN->valueNumber(I, DTDFS->getRootNode()))
-          if (VN->value(n) == I) IG->remove(n);
-        VN->remove(I);
-        I->replaceAllUsesWith(V);
-        I->eraseFromParent();
-        return;
-      }
-
-      // Try to substitute operands.
-      for (unsigned i = 0, e = I->getNumOperands(); i != e; ++i) {
-        Value *Oper = I->getOperand(i);
-        Value *V = VN->canonicalize(Oper, DT);
-        assert(V == Oper && "Late operand canonicalization.");
-        if (V != Oper) {
-          modified = true;
-          ++NumVarsReplaced;
-          DEBUG(errs() << "Resolving " << *I);
-          I->setOperand(i, V);
-          DEBUG(errs() << " into " << *I);
-        }
-      }
-#endif
-
-      std::string name = I->getParent()->getName();
-      DEBUG(errs() << "push (%" << name << ")\n");
-      Forwards visit(this, DT);
-      visit.visit(*I);
-      DEBUG(errs() << "pop (%" << name << ")\n");
-    }
-  };
-
-  bool PredicateSimplifier::runOnFunction(Function &F) {
-    DominatorTree *DT = &getAnalysis<DominatorTree>();
-    DTDFS = new DomTreeDFS(DT);
-    TargetData *TD = getAnalysisIfAvailable<TargetData>();
-
-    // FIXME: PredicateSimplifier should still be able to do basic
-    // optimizations without TargetData. But for now, just exit if
-    // it's not available.
-    if (!TD) return false;
-
-    Context = &F.getContext();
-
-    DEBUG(errs() << "Entering Function: " << F.getName() << "\n");
-
-    modified = false;
-    DomTreeDFS::Node *Root = DTDFS->getRootNode();
-    VN = new ValueNumbering(DTDFS);
-    IG = new InequalityGraph(*VN, Root);
-    VR = new ValueRanges(*VN, TD, Context);
-    WorkList.push_back(Root);
-
-    do {
-      DomTreeDFS::Node *DTNode = WorkList.back();
-      WorkList.pop_back();
-      if (!UB.isDead(DTNode->getBlock())) visitBasicBlock(DTNode);
-    } while (!WorkList.empty());
-
-    delete DTDFS;
-    delete VR;
-    delete IG;
-    delete VN;
-
-    modified |= UB.kill();
-
-    return modified;
-  }
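The traversal order matters: blocks are pulled from a LIFO work list seeded with the dominator-tree root, so every block is visited after its dominators and inherits their facts. A hedged standalone sketch of that walk:

    #include <iostream>
    #include <string>
    #include <vector>

    struct DomNode {
      std::string Name;
      std::vector<DomNode*> Children; // dominator-tree children
    };

    int main() {
      DomNode C{"then", {}}, B{"else", {}}, A{"entry", {&B, &C}};
      std::vector<DomNode*> WorkList{&A};
      do {
        DomNode *N = WorkList.back();
        WorkList.pop_back();
        std::cout << N->Name << "\n"; // visitBasicBlock(N) in the pass
        for (DomNode *Child : N->Children)
          WorkList.push_back(Child);
      } while (!WorkList.empty());
      return 0;
    }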
-
-  void PredicateSimplifier::Forwards::visitTerminatorInst(TerminatorInst &TI) {
-    PS->proceedToSuccessors(DTNode);
-  }
-
-  void PredicateSimplifier::Forwards::visitBranchInst(BranchInst &BI) {
-    if (BI.isUnconditional()) {
-      PS->proceedToSuccessors(DTNode);
-      return;
-    }
-
-    Value *Condition = BI.getCondition();
-    BasicBlock *TrueDest  = BI.getSuccessor(0);
-    BasicBlock *FalseDest = BI.getSuccessor(1);
-
-    if (isa<Constant>(Condition) || TrueDest == FalseDest) {
-      PS->proceedToSuccessors(DTNode);
-      return;
-    }
-
-    LLVMContext *Context = &BI.getContext();
-
-    for (DomTreeDFS::Node::iterator I = DTNode->begin(), E = DTNode->end();
-         I != E; ++I) {
-      BasicBlock *Dest = (*I)->getBlock();
-      DEBUG(errs() << "Branch thinking about %" << Dest->getName()
-            << "(" << PS->DTDFS->getNodeForBlock(Dest)->getDFSNumIn() << ")\n");
-
-      if (Dest == TrueDest) {
-        DEBUG(errs() << "(" << DTNode->getBlock()->getName() 
-              << ") true set:\n");
-        VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, Dest);
-        VRP.add(ConstantInt::getTrue(*Context), Condition, ICmpInst::ICMP_EQ);
-        VRP.solve();
-        DEBUG(VN.dump());
-        DEBUG(IG.dump());
-        DEBUG(VR.dump());
-      } else if (Dest == FalseDest) {
-        DEBUG(errs() << "(" << DTNode->getBlock()->getName() 
-              << ") false set:\n");
-        VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, Dest);
-        VRP.add(ConstantInt::getFalse(*Context), Condition, ICmpInst::ICMP_EQ);
-        VRP.solve();
-        DEBUG(VN.dump());
-        DEBUG(IG.dump());
-        DEBUG(VR.dump());
-      }
-
-      PS->proceedToSuccessor(*I);
-    }
-  }
-
-  void PredicateSimplifier::Forwards::visitSwitchInst(SwitchInst &SI) {
-    Value *Condition = SI.getCondition();
-
-    // Set the EQ property in each of the case BBs, and the NE properties
-    // in the default BB.
-
-    for (DomTreeDFS::Node::iterator I = DTNode->begin(), E = DTNode->end();
-         I != E; ++I) {
-      BasicBlock *BB = (*I)->getBlock();
-      DEBUG(errs() << "Switch thinking about BB %" << BB->getName()
-            << "(" << PS->DTDFS->getNodeForBlock(BB)->getDFSNumIn() << ")\n");
-
-      VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, BB);
-      if (BB == SI.getDefaultDest()) {
-        for (unsigned i = 1, e = SI.getNumCases(); i < e; ++i)
-          if (SI.getSuccessor(i) != BB)
-            VRP.add(Condition, SI.getCaseValue(i), ICmpInst::ICMP_NE);
-        VRP.solve();
-      } else if (ConstantInt *CI = SI.findCaseDest(BB)) {
-        VRP.add(Condition, CI, ICmpInst::ICMP_EQ);
-        VRP.solve();
-      }
-      PS->proceedToSuccessor(*I);
-    }
-  }
-
-  void PredicateSimplifier::Forwards::visitAllocaInst(AllocaInst &AI) {
-    VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, &AI);
-    VRP.add(Constant::getNullValue(AI.getType()),
-            &AI, ICmpInst::ICMP_NE);
-    VRP.solve();
-  }
-
-  void PredicateSimplifier::Forwards::visitLoadInst(LoadInst &LI) {
-    Value *Ptr = LI.getPointerOperand();
-    // avoid "load i8* null" -> null NE null.
-    if (isa<Constant>(Ptr)) return;
-
-    VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, &LI);
-    VRP.add(Constant::getNullValue(Ptr->getType()),
-            Ptr, ICmpInst::ICMP_NE);
-    VRP.solve();
-  }
-
-  void PredicateSimplifier::Forwards::visitStoreInst(StoreInst &SI) {
-    Value *Ptr = SI.getPointerOperand();
-    if (isa<Constant>(Ptr)) return;
-
-    VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, &SI);
-    VRP.add(Constant::getNullValue(Ptr->getType()),
-            Ptr, ICmpInst::ICMP_NE);
-    VRP.solve();
-  }
-
-  void PredicateSimplifier::Forwards::visitSExtInst(SExtInst &SI) {
-    VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, &SI);
-    LLVMContext &Context = SI.getContext();
-    uint32_t SrcBitWidth = cast<IntegerType>(SI.getSrcTy())->getBitWidth();
-    uint32_t DstBitWidth = cast<IntegerType>(SI.getDestTy())->getBitWidth();
-    APInt Min(APInt::getHighBitsSet(DstBitWidth, DstBitWidth-SrcBitWidth+1));
-    APInt Max(APInt::getLowBitsSet(DstBitWidth, SrcBitWidth-1));
-    VRP.add(ConstantInt::get(Context, Min), &SI, ICmpInst::ICMP_SLE);
-    VRP.add(ConstantInt::get(Context, Max), &SI, ICmpInst::ICMP_SGE);
-    VRP.solve();
-  }
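A worked instance of the bit math above, for sext i8 to i32: getHighBitsSet(32, 25) yields 0xFFFFFF80 (-128, the smallest possible result) and getLowBitsSet(32, 7) yields 0x7F (127, the largest). A hedged standalone check, with plain masks standing in for APInt:

    #include <cassert>
    #include <cstdint>

    static uint32_t highBitsSet(unsigned Width, unsigned N) {
      return N == 0 ? 0u : ~0u << (Width - N);
    }
    static uint32_t lowBitsSet(unsigned N) {
      return N >= 32 ? ~0u : (1u << N) - 1u;
    }

    int main() {
      unsigned Src = 8, Dst = 32;        // sext i8 -> i32
      int32_t Min = (int32_t)highBitsSet(Dst, Dst - Src + 1);
      int32_t Max = (int32_t)lowBitsSet(Src - 1);
      assert(Min == -128 && Max == 127); // Min s<= %sext s<= Max
      return 0;
    }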
-
-  void PredicateSimplifier::Forwards::visitZExtInst(ZExtInst &ZI) {
-    VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, &ZI);
-    LLVMContext &Context = ZI.getContext();
-    uint32_t SrcBitWidth = cast<IntegerType>(ZI.getSrcTy())->getBitWidth();
-    uint32_t DstBitWidth = cast<IntegerType>(ZI.getDestTy())->getBitWidth();
-    APInt Max(APInt::getLowBitsSet(DstBitWidth, SrcBitWidth));
-    VRP.add(ConstantInt::get(Context, Max), &ZI, ICmpInst::ICMP_UGE);
-    VRP.solve();
-  }
-
-  void PredicateSimplifier::Forwards::visitBinaryOperator(BinaryOperator &BO) {
-    Instruction::BinaryOps ops = BO.getOpcode();
-
-    switch (ops) {
-      default: break;
-      case Instruction::URem:
-      case Instruction::SRem:
-      case Instruction::UDiv:
-      case Instruction::SDiv: {
-        Value *Divisor = BO.getOperand(1);
-        VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, &BO);
-        VRP.add(Constant::getNullValue(Divisor->getType()), 
-                Divisor, ICmpInst::ICMP_NE);
-        VRP.solve();
-        break;
-      }
-    }
-
-    switch (ops) {
-      default: break;
-      case Instruction::Shl: {
-        VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, &BO);
-        VRP.add(&BO, BO.getOperand(0), ICmpInst::ICMP_UGE);
-        VRP.solve();
-      } break;
-      case Instruction::AShr: {
-        VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, &BO);
-        VRP.add(&BO, BO.getOperand(0), ICmpInst::ICMP_SLE);
-        VRP.solve();
-      } break;
-      case Instruction::LShr:
-      case Instruction::UDiv: {
-        VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, &BO);
-        VRP.add(&BO, BO.getOperand(0), ICmpInst::ICMP_ULE);
-        VRP.solve();
-      } break;
-      case Instruction::URem: {
-        VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, &BO);
-        VRP.add(&BO, BO.getOperand(1), ICmpInst::ICMP_ULE);
-        VRP.solve();
-      } break;
-      case Instruction::And: {
-        VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, &BO);
-        VRP.add(&BO, BO.getOperand(0), ICmpInst::ICMP_ULE);
-        VRP.add(&BO, BO.getOperand(1), ICmpInst::ICMP_ULE);
-        VRP.solve();
-      } break;
-      case Instruction::Or: {
-        VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, &BO);
-        VRP.add(&BO, BO.getOperand(0), ICmpInst::ICMP_UGE);
-        VRP.add(&BO, BO.getOperand(1), ICmpInst::ICMP_UGE);
-        VRP.solve();
-      } break;
-    }
-  }
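Several of the orderings seeded here (the And, Or, LShr, and URem cases) are unconditional identities, which an exhaustive 8-bit check confirms. A minimal standalone sketch:

    #include <cassert>

    int main() {
      for (unsigned x = 0; x < 256; ++x)
        for (unsigned y = 0; y < 256; ++y) {
          assert((x & y) <= x);          // and:  result u<= each operand
          assert((x | y) >= x);          // or:   result u>= each operand
          assert((x >> (y % 32)) <= x);  // lshr: result u<= operand 0
          if (y != 0)
            assert(x % y <= y);          // urem: result u<= operand 1
        }
      return 0;
    }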
-
-  void PredicateSimplifier::Forwards::visitICmpInst(ICmpInst &IC) {
-    // If possible, squeeze the ICmp predicate into something simpler.
-    // E.g., if %x = [0, 4) and we're being asked "icmp uge %x, 3" then
-    // change the predicate to eq.
-
-    // XXX: once we do full PHI handling, modifying the instruction in the
-    // Forwards visitor will cause missed optimizations.
-
-    ICmpInst::Predicate Pred = IC.getPredicate();
-
-    switch (Pred) {
-      default: break;
-      case ICmpInst::ICMP_ULE: Pred = ICmpInst::ICMP_ULT; break;
-      case ICmpInst::ICMP_UGE: Pred = ICmpInst::ICMP_UGT; break;
-      case ICmpInst::ICMP_SLE: Pred = ICmpInst::ICMP_SLT; break;
-      case ICmpInst::ICMP_SGE: Pred = ICmpInst::ICMP_SGT; break;
-    }
-    if (Pred != IC.getPredicate()) {
-      VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, &IC);
-      if (VRP.isRelatedBy(IC.getOperand(1), IC.getOperand(0),
-                          ICmpInst::ICMP_NE)) {
-        ++NumSnuggle;
-        PS->modified = true;
-        IC.setPredicate(Pred);
-      }
-    }
-
-    Pred = IC.getPredicate();
-
-    LLVMContext &Context = IC.getContext();
-
-    if (ConstantInt *Op1 = dyn_cast<ConstantInt>(IC.getOperand(1))) {
-      ConstantInt *NextVal = 0;
-      switch (Pred) {
-        default: break;
-        case ICmpInst::ICMP_SLT:
-        case ICmpInst::ICMP_ULT:
-          if (Op1->getValue() != 0)
-            NextVal = ConstantInt::get(Context, Op1->getValue()-1);
-          break;
-        case ICmpInst::ICMP_SGT:
-        case ICmpInst::ICMP_UGT:
-          if (!Op1->getValue().isAllOnesValue())
-            NextVal = ConstantInt::get(Context, Op1->getValue()+1);
-          break;
-      }
-
-      if (NextVal) {
-        VRPSolver VRP(VN, IG, UB, VR, PS->DTDFS, PS->modified, &IC);
-        if (VRP.isRelatedBy(IC.getOperand(0), NextVal,
-                            ICmpInst::getInversePredicate(Pred))) {
-          ICmpInst *NewIC = new ICmpInst(&IC, ICmpInst::ICMP_EQ, 
-                                         IC.getOperand(0), NextVal, "");
-          NewIC->takeName(&IC);
-          IC.replaceAllUsesWith(NewIC);
-
-          // XXX: prove this isn't necessary
-          if (unsigned n = VN.valueNumber(&IC, PS->DTDFS->getRootNode()))
-            if (VN.value(n) == &IC) IG.remove(n);
-          VN.remove(&IC);
-
-          IC.eraseFromParent();
-          ++NumSnuggle;
-          PS->modified = true;
-        }
-      }
-    }
-  }
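A worked instance of the rewrite above: with %x known in [0, 4), "icmp uge %x, 3" can only hold when %x is exactly 3, so the compare is replaced by an equality test. Brute force over the range verifies the equivalence:

    #include <cassert>

    int main() {
      for (unsigned x = 0; x < 4; ++x)  // %x = [0, 4)
        assert((x >= 3) == (x == 3));   // uge 3  <=>  eq 3 on this range
      return 0;
    }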
-}
-
-char PredicateSimplifier::ID = 0;
-static RegisterPass<PredicateSimplifier>
-X("predsimplify", "Predicate Simplifier");
-
-FunctionPass *llvm::createPredicateSimplifierPass() {
-  return new PredicateSimplifier();
-}
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/Reassociate.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/Reassociate.cpp
index e6ffac2..8466918 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/Reassociate.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/Reassociate.cpp
@@ -27,7 +27,6 @@
 #include "llvm/Function.h"
 #include "llvm/Instructions.h"
 #include "llvm/IntrinsicInst.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/Pass.h"
 #include "llvm/Assembly/Writer.h"
 #include "llvm/Support/CFG.h"
@@ -121,7 +120,6 @@ static bool isUnmovableInstruction(Instruction *I) {
   if (I->getOpcode() == Instruction::PHI ||
       I->getOpcode() == Instruction::Alloca ||
       I->getOpcode() == Instruction::Load ||
-      I->getOpcode() == Instruction::Malloc ||
       I->getOpcode() == Instruction::Invoke ||
       (I->getOpcode() == Instruction::Call &&
        !isa<DbgInfoIntrinsic>(I)) ||
@@ -199,8 +197,7 @@ static BinaryOperator *isReassociableOp(Value *V, unsigned Opcode) {
 /// LowerNegateToMultiply - Replace 0-X with X*-1.
 ///
 static Instruction *LowerNegateToMultiply(Instruction *Neg,
-                              std::map<AssertingVH<>, unsigned> &ValueRankMap,
-                              LLVMContext &Context) {
+                              std::map<AssertingVH<>, unsigned> &ValueRankMap) {
   Constant *Cst = Constant::getAllOnesValue(Neg->getType());
 
   Instruction *Res = BinaryOperator::CreateMul(Neg->getOperand(1), Cst, "",Neg);
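The identity behind LowerNegateToMultiply holds for every two's-complement bit pattern, which a hedged exhaustive 16-bit check confirms:

    #include <cassert>
    #include <cstdint>

    int main() {
      for (uint32_t i = 0; i <= 0xFFFF; ++i) {
        uint16_t x   = (uint16_t)i;
        uint16_t neg = (uint16_t)(0u - x);      // 0 - X
        uint16_t mul = (uint16_t)(x * 0xFFFFu); // X * -1 (all-ones)
        assert(neg == mul);
      }
      return 0;
    }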
@@ -256,7 +253,6 @@ void Reassociate::LinearizeExprTree(BinaryOperator *I,
                                     std::vector<ValueEntry> &Ops) {
   Value *LHS = I->getOperand(0), *RHS = I->getOperand(1);
   unsigned Opcode = I->getOpcode();
-  LLVMContext &Context = I->getContext();
 
   // First step, linearize the expression if it is in ((A+B)+(C+D)) form.
   BinaryOperator *LHSBO = isReassociableOp(LHS, Opcode);
@@ -266,13 +262,11 @@ void Reassociate::LinearizeExprTree(BinaryOperator *I,
   // transform them into multiplies by -1 so they can be reassociated.
   if (I->getOpcode() == Instruction::Mul) {
     if (!LHSBO && LHS->hasOneUse() && BinaryOperator::isNeg(LHS)) {
-      LHS = LowerNegateToMultiply(cast<Instruction>(LHS),
-                                  ValueRankMap, Context);
+      LHS = LowerNegateToMultiply(cast<Instruction>(LHS), ValueRankMap);
       LHSBO = isReassociableOp(LHS, Opcode);
     }
     if (!RHSBO && RHS->hasOneUse() && BinaryOperator::isNeg(RHS)) {
-      RHS = LowerNegateToMultiply(cast<Instruction>(RHS),
-                                  ValueRankMap, Context);
+      RHS = LowerNegateToMultiply(cast<Instruction>(RHS), ValueRankMap);
       RHSBO = isReassociableOp(RHS, Opcode);
     }
   }
@@ -374,7 +368,7 @@ void Reassociate::RewriteExprTree(BinaryOperator *I,
 // version of the value is returned, and BI is left pointing at the instruction
 // that should be processed next by the reassociation pass.
 //
-static Value *NegateValue(LLVMContext &Context, Value *V, Instruction *BI) {
+static Value *NegateValue(Value *V, Instruction *BI) {
   // We are trying to expose opportunity for reassociation.  One of the things
   // that we want to do to achieve this is to push a negation as deep into an
   // expression chain as possible, to expose the add instructions.  In practice,
@@ -387,8 +381,8 @@ static Value *NegateValue(LLVMContext &Context, Value *V, Instruction *BI) {
   if (Instruction *I = dyn_cast<Instruction>(V))
     if (I->getOpcode() == Instruction::Add && I->hasOneUse()) {
       // Push the negates through the add.
-      I->setOperand(0, NegateValue(Context, I->getOperand(0), BI));
-      I->setOperand(1, NegateValue(Context, I->getOperand(1), BI));
+      I->setOperand(0, NegateValue(I->getOperand(0), BI));
+      I->setOperand(1, NegateValue(I->getOperand(1), BI));
 
       // We must move the add instruction here, because the neg instructions do
       // not dominate the old add instruction in general.  By moving it, we are
@@ -408,7 +402,7 @@ static Value *NegateValue(LLVMContext &Context, Value *V, Instruction *BI) {
 
 /// ShouldBreakUpSubtract - Return true if we should break up this subtract of
 /// X-Y into (X + -Y).
-static bool ShouldBreakUpSubtract(LLVMContext &Context, Instruction *Sub) {
+static bool ShouldBreakUpSubtract(Instruction *Sub) {
   // If this is a negation, we can't split it up!
   if (BinaryOperator::isNeg(Sub))
     return false;
@@ -432,7 +426,7 @@ static bool ShouldBreakUpSubtract(LLVMContext &Context, Instruction *Sub) {
 /// BreakUpSubtract - If we have (X-Y), and if either X is an add, or if this is
 /// only used by an add, transform this into (X+(0-Y)) to promote better
 /// reassociation.
-static Instruction *BreakUpSubtract(LLVMContext &Context, Instruction *Sub,
+static Instruction *BreakUpSubtract(Instruction *Sub,
                               std::map<AssertingVH<>, unsigned> &ValueRankMap) {
   // Convert a subtract into an add and a neg instruction... so that sub
   // instructions can be commuted with other add instructions...
@@ -440,7 +434,7 @@ static Instruction *BreakUpSubtract(LLVMContext &Context, Instruction *Sub,
   // Calculate the negative value of Operand 1 of the sub instruction...
   // and set it as the RHS of the add instruction we just made...
   //
-  Value *NegVal = NegateValue(Context, Sub->getOperand(1), Sub);
+  Value *NegVal = NegateValue(Sub->getOperand(1), Sub);
   Instruction *New =
     BinaryOperator::CreateAdd(Sub->getOperand(0), NegVal, "", Sub);
   New->takeName(Sub);
@@ -458,8 +452,7 @@ static Instruction *BreakUpSubtract(LLVMContext &Context, Instruction *Sub,
 /// by one, change this into a multiply by a constant to assist with further
 /// reassociation.
 static Instruction *ConvertShiftToMul(Instruction *Shl, 
-                              std::map<AssertingVH<>, unsigned> &ValueRankMap,
-                              LLVMContext &Context) {
+                              std::map<AssertingVH<>, unsigned> &ValueRankMap) {
   // If an operand of this shift is a reassociable multiply, or if the shift
   // is used by a reassociable multiply or add, turn into a multiply.
   if (isReassociableOp(Shl->getOperand(0), Instruction::Mul) ||
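ConvertShiftToMul relies on a left shift by a constant being a multiply by a power of two, which then reassociates with other multiplies. A quick hedged check of that identity:

    #include <cassert>

    int main() {
      for (unsigned x = 0; x < 1000; ++x)
        for (unsigned c = 0; c < 8; ++c)
          assert((x << c) == x * (1u << c));
      return 0;
    }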
@@ -782,13 +775,11 @@ Value *Reassociate::OptimizeExpression(BinaryOperator *I,
 /// ReassociateBB - Inspect all of the instructions in this basic block,
 /// reassociating them as we go.
 void Reassociate::ReassociateBB(BasicBlock *BB) {
-  LLVMContext &Context = BB->getContext();
-  
   for (BasicBlock::iterator BBI = BB->begin(); BBI != BB->end(); ) {
     Instruction *BI = BBI++;
     if (BI->getOpcode() == Instruction::Shl &&
         isa<ConstantInt>(BI->getOperand(1)))
-      if (Instruction *NI = ConvertShiftToMul(BI, ValueRankMap, Context)) {
+      if (Instruction *NI = ConvertShiftToMul(BI, ValueRankMap)) {
         MadeChange = true;
         BI = NI;
       }
@@ -801,8 +792,8 @@ void Reassociate::ReassociateBB(BasicBlock *BB) {
     // If this is a subtract instruction which is not already in negate form,
     // see if we can convert it to X+-Y.
     if (BI->getOpcode() == Instruction::Sub) {
-      if (ShouldBreakUpSubtract(Context, BI)) {
-        BI = BreakUpSubtract(Context, BI, ValueRankMap);
+      if (ShouldBreakUpSubtract(BI)) {
+        BI = BreakUpSubtract(BI, ValueRankMap);
         MadeChange = true;
       } else if (BinaryOperator::isNeg(BI)) {
         // Otherwise, this is a negation.  See if the operand is a multiply tree
@@ -810,7 +801,7 @@ void Reassociate::ReassociateBB(BasicBlock *BB) {
         if (isReassociableOp(BI->getOperand(1), Instruction::Mul) &&
             (!BI->hasOneUse() ||
              !isReassociableOp(BI->use_back(), Instruction::Mul))) {
-          BI = LowerNegateToMultiply(BI, ValueRankMap, Context);
+          BI = LowerNegateToMultiply(BI, ValueRankMap);
           MadeChange = true;
         }
       }
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SCCP.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SCCP.cpp
index 2f49d25..d8c59b1 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/SCCP.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SCCP.cpp
@@ -15,10 +15,6 @@
 //   * Proves values to be constant, and replaces them with constants
 //   * Proves conditional branches to be unconditional
 //
-// Notice that:
-//   * This pass has a habit of making definitions be dead.  It is a good idea
-//     to to run a DCE pass sometime after running this pass.
-//
 //===----------------------------------------------------------------------===//
 
 #define DEBUG_TYPE "sccp"
@@ -27,11 +23,11 @@
 #include "llvm/Constants.h"
 #include "llvm/DerivedTypes.h"
 #include "llvm/Instructions.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/Pass.h"
 #include "llvm/Analysis/ConstantFolding.h"
 #include "llvm/Analysis/ValueTracking.h"
 #include "llvm/Transforms/Utils/Local.h"
+#include "llvm/Target/TargetData.h"
 #include "llvm/Support/CallSite.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
@@ -39,7 +35,8 @@
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/DenseSet.h"
-#include "llvm/ADT/SmallSet.h"
+#include "llvm/ADT/PointerIntPair.h"
+#include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/STLExtras.h"
@@ -51,7 +48,6 @@ STATISTIC(NumInstRemoved, "Number of instructions removed");
 STATISTIC(NumDeadBlocks , "Number of basic blocks unreachable");
 
 STATISTIC(IPNumInstRemoved, "Number of instructions removed by IPSCCP");
-STATISTIC(IPNumDeadBlocks , "Number of basic blocks unreachable by IPSCCP");
 STATISTIC(IPNumArgsElimed ,"Number of arguments constant propagated by IPSCCP");
 STATISTIC(IPNumGlobalConst, "Number of globals found to be constant by IPSCCP");
 
@@ -60,7 +56,7 @@ namespace {
 /// an LLVM value may occupy.  It is a simple class with value semantics.
 ///
 class LatticeVal {
-  enum {
+  enum LatticeValueTy {
     /// undefined - This LLVM Value has no known value yet.
     undefined,
     
@@ -76,63 +72,82 @@ class LatticeVal {
     /// overdefined - This instruction is not known to be constant, and we know
     /// it has a value.
     overdefined
-  } LatticeValue;    // The current lattice position
+  };
+
+  /// Val: This stores the current lattice value along with the Constant* for
+  /// the constant if this is a 'constant' or 'forcedconstant' value.
+  PointerIntPair<Constant *, 2, LatticeValueTy> Val;
+  
+  LatticeValueTy getLatticeValue() const {
+    return Val.getInt();
+  }
   
-  Constant *ConstantVal; // If Constant value, the current value
 public:
-  inline LatticeVal() : LatticeValue(undefined), ConstantVal(0) {}
+  LatticeVal() : Val(0, undefined) {}
   
-  // markOverdefined - Return true if this is a new status to be in...
-  inline bool markOverdefined() {
-    if (LatticeValue != overdefined) {
-      LatticeValue = overdefined;
-      return true;
-    }
-    return false;
+  bool isUndefined() const { return getLatticeValue() == undefined; }
+  bool isConstant() const {
+    return getLatticeValue() == constant || getLatticeValue() == forcedconstant;
+  }
+  bool isOverdefined() const { return getLatticeValue() == overdefined; }
+  
+  Constant *getConstant() const {
+    assert(isConstant() && "Cannot get the constant of a non-constant!");
+    return Val.getPointer();
+  }
+  
+  /// markOverdefined - Return true if this is a change in status.
+  bool markOverdefined() {
+    if (isOverdefined())
+      return false;
+    
+    Val.setInt(overdefined);
+    return true;
   }
 
-  // markConstant - Return true if this is a new status for us.
-  inline bool markConstant(Constant *V) {
-    if (LatticeValue != constant) {
-      if (LatticeValue == undefined) {
-        LatticeValue = constant;
-        assert(V && "Marking constant with NULL");
-        ConstantVal = V;
-      } else {
-        assert(LatticeValue == forcedconstant && 
-               "Cannot move from overdefined to constant!");
-        // Stay at forcedconstant if the constant is the same.
-        if (V == ConstantVal) return false;
-        
-        // Otherwise, we go to overdefined.  Assumptions made based on the
-        // forced value are possibly wrong.  Assuming this is another constant
-        // could expose a contradiction.
-        LatticeValue = overdefined;
-      }
-      return true;
+  /// markConstant - Return true if this is a change in status.
+  bool markConstant(Constant *V) {
+    if (getLatticeValue() == constant) { // Constant but not forcedconstant.
+      assert(getConstant() == V && "Marking constant with different value");
+      return false;
+    }
+    
+    if (isUndefined()) {
+      Val.setInt(constant);
+      assert(V && "Marking constant with NULL");
+      Val.setPointer(V);
     } else {
-      assert(ConstantVal == V && "Marking constant with different value");
+      assert(getLatticeValue() == forcedconstant && 
+             "Cannot move from overdefined to constant!");
+      // Stay at forcedconstant if the constant is the same.
+      if (V == getConstant()) return false;
+      
+      // Otherwise, we go to overdefined.  Assumptions made based on the
+      // forced value are possibly wrong.  Assuming this is another constant
+      // could expose a contradiction.
+      Val.setInt(overdefined);
     }
-    return false;
+    return true;
   }
 
-  inline void markForcedConstant(Constant *V) {
-    assert(LatticeValue == undefined && "Can't force a defined value!");
-    LatticeValue = forcedconstant;
-    ConstantVal = V;
+  /// getConstantInt - If this is a constant with a ConstantInt value, return
+  /// it; otherwise return null.
+  ConstantInt *getConstantInt() const {
+    if (isConstant())
+      return dyn_cast<ConstantInt>(getConstant());
+    return 0;
   }
   
-  inline bool isUndefined() const { return LatticeValue == undefined; }
-  inline bool isConstant() const {
-    return LatticeValue == constant || LatticeValue == forcedconstant;
-  }
-  inline bool isOverdefined() const { return LatticeValue == overdefined; }
-
-  inline Constant *getConstant() const {
-    assert(isConstant() && "Cannot get the constant of a non-constant!");
-    return ConstantVal;
+  void markForcedConstant(Constant *V) {
+    assert(isUndefined() && "Can't force a defined value!");
+    Val.setInt(forcedconstant);
+    Val.setPointer(V);
   }
 };
+} // end anonymous namespace.
+
+
+namespace {
 
 //===----------------------------------------------------------------------===//
 //
@@ -140,10 +155,15 @@ public:
 /// Constant Propagation.
 ///
 class SCCPSolver : public InstVisitor<SCCPSolver> {
-  LLVMContext *Context;
-  DenseSet<BasicBlock*> BBExecutable;// The basic blocks that are executable
-  std::map<Value*, LatticeVal> ValueState;  // The state each value is in.
+  const TargetData *TD;
+  SmallPtrSet<BasicBlock*, 8> BBExecutable;  // The BBs that are executable.
+  DenseMap<Value*, LatticeVal> ValueState;  // The state each value is in.
 
+  /// StructValueState - This maintains ValueState for values that have
+  /// StructType, for example for formal arguments, calls, insertvalue, etc.
+  ///
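+  /// e.g. for a hypothetical '%r = insertvalue {i32,i32} %a, i32 1, 0', the
+  /// fields (%r, 0) and (%r, 1) get independent lattice cells.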
+  DenseMap<std::pair<Value*, unsigned>, LatticeVal> StructValueState;
+  
   /// GlobalValue - If we are tracking any values for the contents of a global
   /// variable, we keep a mapping from the constant accessor to the element of
   /// the global, to the currently known value.  If the value becomes
@@ -158,13 +178,23 @@ class SCCPSolver : public InstVisitor<SCCPSolver> {
   /// TrackedMultipleRetVals - Same as TrackedRetVals, but used for functions
   /// that return multiple values.
   DenseMap<std::pair<Function*, unsigned>, LatticeVal> TrackedMultipleRetVals;
-
-  // The reason for two worklists is that overdefined is the lowest state
-  // on the lattice, and moving things to overdefined as fast as possible
-  // makes SCCP converge much faster.
-  // By having a separate worklist, we accomplish this because everything
-  // possibly overdefined will become overdefined at the soonest possible
-  // point.
+  
+  /// MRVFunctionsTracked - Each function in TrackedMultipleRetVals is
+  /// represented here for efficient lookup.
+  SmallPtrSet<Function*, 16> MRVFunctionsTracked;
+
+  /// TrackingIncomingArguments - This is the set of functions whose arguments
+  /// we make optimistic assumptions about and try to prove to be constants.
+  SmallPtrSet<Function*, 16> TrackingIncomingArguments;
+  
+  /// The reason for two worklists is that overdefined is the lowest state
+  /// on the lattice, and moving things to overdefined as fast as possible
+  /// makes SCCP converge much faster.
+  ///
+  /// By having a separate worklist, we accomplish this because everything
+  /// possibly overdefined will become overdefined at the soonest possible
+  /// point.
   SmallVector<Value*, 64> OverdefinedInstWorkList;
   SmallVector<Value*, 64> InstWorkList;
 
@@ -180,14 +210,17 @@ class SCCPSolver : public InstVisitor<SCCPSolver> {
   typedef std::pair<BasicBlock*, BasicBlock*> Edge;
   DenseSet<Edge> KnownFeasibleEdges;
 public:
-  void setContext(LLVMContext *C) { Context = C; }
+  SCCPSolver(const TargetData *td) : TD(td) {}
 
   /// MarkBlockExecutable - This method can be used by clients to mark all of
   /// the blocks that are known to be intrinsically live in the processed unit.
-  void MarkBlockExecutable(BasicBlock *BB) {
+  ///
+  /// This returns true if the block was not considered live before.
+  bool MarkBlockExecutable(BasicBlock *BB) {
+    if (!BBExecutable.insert(BB)) return false;
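+    // (SmallPtrSet::insert returns false when BB was already present, so each
+    // block is queued on BBWorkList at most once.)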
     DEBUG(errs() << "Marking Block Executable: " << BB->getName() << "\n");
-    BBExecutable.insert(BB);   // Basic block is executable!
     BBWorkList.push_back(BB);  // Add the block to the work list!
+    return true;
   }
 
   /// TrackValueOfGlobalVariable - Clients can use this method to
@@ -195,8 +228,8 @@ public:
   /// specified global variable if it can.  This is only legal to call if
   /// performing Interprocedural SCCP.
   void TrackValueOfGlobalVariable(GlobalVariable *GV) {
-    const Type *ElTy = GV->getType()->getElementType();
-    if (ElTy->isFirstClassType()) {
+    // We only track the contents of scalar globals.
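+    // (isSingleValueType rejects struct and array types, so aggregate globals
+    // are never tracked here.)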
+    if (GV->getType()->getElementType()->isSingleValueType()) {
       LatticeVal &IV = TrackedGlobals[GV];
       if (!isa<UndefValue>(GV->getInitializer()))
         IV.markConstant(GV->getInitializer());
@@ -207,9 +240,9 @@ public:
   /// and out of the specified function (which cannot have its address taken),
   /// this method must be called.
   void AddTrackedFunction(Function *F) {
-    assert(F->hasLocalLinkage() && "Can only track internal functions!");
     // Add an entry, F -> undef.
     if (const StructType *STy = dyn_cast<StructType>(F->getReturnType())) {
+      MRVFunctionsTracked.insert(F);
       for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i)
         TrackedMultipleRetVals.insert(std::make_pair(std::make_pair(F, i),
                                                      LatticeVal()));
@@ -217,6 +250,10 @@ public:
       TrackedRetVals.insert(std::make_pair(F, LatticeVal()));
   }
 
+  void AddArgumentTrackedFunction(Function *F) {
+    TrackingIncomingArguments.insert(F);
+  }
+  
   /// Solve - Solve for constants and executable blocks.
   ///
   void Solve();
@@ -232,10 +269,17 @@ public:
     return BBExecutable.count(BB);
   }
 
-  /// getValueMapping - Once we have solved for constants, return the mapping of
-  /// LLVM values to LatticeVals.
-  std::map<Value*, LatticeVal> &getValueMapping() {
-    return ValueState;
+  LatticeVal getLatticeValueFor(Value *V) const {
+    DenseMap<Value*, LatticeVal>::const_iterator I = ValueState.find(V);
+    assert(I != ValueState.end() && "V is not in valuemap!");
+    return I->second;
+  }
+  
+  LatticeVal getStructLatticeValueFor(Value *V, unsigned i) const {
+    DenseMap<std::pair<Value*, unsigned>, LatticeVal>::const_iterator I = 
+      StructValueState.find(std::make_pair(V, i));
+    assert(I != StructValueState.end() && "V is not in valuemap!");
+    return I->second;
   }
 
   /// getTrackedRetVals - Get the inferred return value map.
@@ -250,48 +294,61 @@ public:
     return TrackedGlobals;
   }
 
-  inline void markOverdefined(Value *V) {
+  void markOverdefined(Value *V) {
+    assert(!isa<StructType>(V->getType()) && "Should use other method");
     markOverdefined(ValueState[V], V);
   }
 
+  /// markAnythingOverdefined - Mark the specified value overdefined.  This
+  /// works with both scalars and structs.
+  void markAnythingOverdefined(Value *V) {
+    if (const StructType *STy = dyn_cast<StructType>(V->getType()))
+      for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i)
+        markOverdefined(getStructValueState(V, i), V);
+    else
+      markOverdefined(V);
+  }
+  
 private:
   // markConstant - Make a value be marked as "constant".  If the value
   // is not already a constant, add it to the instruction work list so that
   // the users of the instruction are updated later.
   //
-  inline void markConstant(LatticeVal &IV, Value *V, Constant *C) {
-    if (IV.markConstant(C)) {
-      DEBUG(errs() << "markConstant: " << *C << ": " << *V << '\n');
-      InstWorkList.push_back(V);
-    }
-  }
-  
-  inline void markForcedConstant(LatticeVal &IV, Value *V, Constant *C) {
-    IV.markForcedConstant(C);
-    DEBUG(errs() << "markForcedConstant: " << *C << ": " << *V << '\n');
+  void markConstant(LatticeVal &IV, Value *V, Constant *C) {
+    if (!IV.markConstant(C)) return;
+    DEBUG(errs() << "markConstant: " << *C << ": " << *V << '\n');
     InstWorkList.push_back(V);
   }
   
-  inline void markConstant(Value *V, Constant *C) {
+  void markConstant(Value *V, Constant *C) {
+    assert(!isa<StructType>(V->getType()) && "Should use other method");
     markConstant(ValueState[V], V, C);
   }
 
+  void markForcedConstant(Value *V, Constant *C) {
+    assert(!isa<StructType>(V->getType()) && "Should use other method");
+    ValueState[V].markForcedConstant(C);
+    DEBUG(errs() << "markForcedConstant: " << *C << ": " << *V << '\n');
+    InstWorkList.push_back(V);
+  }
+  
+  
   // markOverdefined - Make a value be marked as "overdefined". If the
   // value is not already overdefined, add it to the overdefined instruction
   // work list so that the users of the instruction are updated later.
-  inline void markOverdefined(LatticeVal &IV, Value *V) {
-    if (IV.markOverdefined()) {
-      DEBUG(errs() << "markOverdefined: ";
-            if (Function *F = dyn_cast<Function>(V))
-              errs() << "Function '" << F->getName() << "'\n";
-            else
-              errs() << *V << '\n');
-      // Only instructions go on the work list
-      OverdefinedInstWorkList.push_back(V);
-    }
+  void markOverdefined(LatticeVal &IV, Value *V) {
+    if (!IV.markOverdefined()) return;
+    
+    DEBUG(errs() << "markOverdefined: ";
+          if (Function *F = dyn_cast<Function>(V))
+            errs() << "Function '" << F->getName() << "'\n";
+          else
+            errs() << *V << '\n');
+    // Only instructions go on the work list
+    OverdefinedInstWorkList.push_back(V);
   }
 
-  inline void mergeInValue(LatticeVal &IV, Value *V, LatticeVal &MergeWithV) {
+  void mergeInValue(LatticeVal &IV, Value *V, LatticeVal MergeWithV) {
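+    // This is the lattice merge (meet): undefined is the identity,
+    // overdefined absorbs everything, and two disagreeing constants collapse
+    // to overdefined.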
     if (IV.isOverdefined() || MergeWithV.isUndefined())
       return;  // Noop.
     if (MergeWithV.isOverdefined())
@@ -302,53 +359,85 @@ private:
       markOverdefined(IV, V);
   }
   
-  inline void mergeInValue(Value *V, LatticeVal &MergeWithV) {
-    return mergeInValue(ValueState[V], V, MergeWithV);
+  void mergeInValue(Value *V, LatticeVal MergeWithV) {
+    assert(!isa<StructType>(V->getType()) && "Should use other method");
+    mergeInValue(ValueState[V], V, MergeWithV);
   }
 
 
-  // getValueState - Return the LatticeVal object that corresponds to the value.
-  // This function is necessary because not all values should start out in the
-  // underdefined state... Argument's should be overdefined, and
-  // constants should be marked as constants.  If a value is not known to be an
-  // Instruction object, then use this accessor to get its value from the map.
-  //
-  inline LatticeVal &getValueState(Value *V) {
-    std::map<Value*, LatticeVal>::iterator I = ValueState.find(V);
-    if (I != ValueState.end()) return I->second;  // Common case, in the map
+  /// getValueState - Return the LatticeVal object that corresponds to the
+  /// value.  This function handles the case when the value hasn't been seen yet
+  /// by properly seeding constants etc.
+  LatticeVal &getValueState(Value *V) {
+    assert(!isa<StructType>(V->getType()) && "Should use getStructValueState");
+
+    std::pair<DenseMap<Value*, LatticeVal>::iterator, bool> I =
+      ValueState.insert(std::make_pair(V, LatticeVal()));
+    LatticeVal &LV = I.first->second;
+
+    if (!I.second)
+      return LV;  // Common case, already in the map.
 
     if (Constant *C = dyn_cast<Constant>(V)) {
-      if (isa<UndefValue>(V)) {
-        // Nothing to do, remain undefined.
-      } else {
-        LatticeVal &LV = ValueState[C];
+      // Undef values remain undefined.
+      if (!isa<UndefValue>(V))
         LV.markConstant(C);          // Constants are constant
-        return LV;
-      }
     }
-    // All others are underdefined by default...
-    return ValueState[V];
+    
+    // All others are underdefined by default.
+    return LV;
   }
 
-  // markEdgeExecutable - Mark a basic block as executable, adding it to the BB
-  // work list if it is not already executable...
-  //
+  /// getStructValueState - Return the LatticeVal object that corresponds to the
+  /// value/field pair.  This function handles the case when the value hasn't
+  /// been seen yet by properly seeding constants etc.
+  LatticeVal &getStructValueState(Value *V, unsigned i) {
+    assert(isa<StructType>(V->getType()) && "Should use getValueState");
+    assert(i < cast<StructType>(V->getType())->getNumElements() &&
+           "Invalid element #");
+
+    std::pair<DenseMap<std::pair<Value*, unsigned>, LatticeVal>::iterator,
+              bool> I = StructValueState.insert(
+                        std::make_pair(std::make_pair(V, i), LatticeVal()));
+    LatticeVal &LV = I.first->second;
+
+    if (!I.second)
+      return LV;  // Common case, already in the map.
+
+    if (Constant *C = dyn_cast<Constant>(V)) {
+      if (isa<UndefValue>(C))
+        ; // Undef values remain undefined.
+      else if (ConstantStruct *CS = dyn_cast<ConstantStruct>(C))
+        LV.markConstant(CS->getOperand(i));      // Constants are constant.
+      else if (isa<ConstantAggregateZero>(C)) {
+        const Type *FieldTy = cast<StructType>(V->getType())->getElementType(i);
+        LV.markConstant(Constant::getNullValue(FieldTy));
+      } else
+        LV.markOverdefined();      // Unknown sort of constant.
+    }
+    
+    // All others are underdefined by default.
+    return LV;
+  }
+  
+
+  /// markEdgeExecutable - Mark a basic block as executable, adding it to the BB
+  /// work list if it is not already executable.
   void markEdgeExecutable(BasicBlock *Source, BasicBlock *Dest) {
     if (!KnownFeasibleEdges.insert(Edge(Source, Dest)).second)
       return;  // This edge is already known to be executable!
 
-    if (BBExecutable.count(Dest)) {
-      DEBUG(errs() << "Marking Edge Executable: " << Source->getName()
-            << " -> " << Dest->getName() << "\n");
-
-      // The destination is already executable, but we just made an edge
+    if (!MarkBlockExecutable(Dest)) {
+      // If the destination is already executable, we just made an *edge*
       // feasible that wasn't before.  Revisit the PHI nodes in the block
       // because they have potentially new operands.
-      for (BasicBlock::iterator I = Dest->begin(); isa<PHINode>(I); ++I)
-        visitPHINode(*cast<PHINode>(I));
+      DEBUG(errs() << "Marking Edge Executable: " << Source->getName()
+            << " -> " << Dest->getName() << "\n");
 
-    } else {
-      MarkBlockExecutable(Dest);
+      PHINode *PN;
+      for (BasicBlock::iterator I = Dest->begin();
+           (PN = dyn_cast<PHINode>(I)); ++I)
+        visitPHINode(*PN);
     }
   }
 
@@ -358,28 +447,39 @@ private:
   void getFeasibleSuccessors(TerminatorInst &TI, SmallVector<bool, 16> &Succs);
 
   // isEdgeFeasible - Return true if the control flow edge from the 'From' basic
-  // block to the 'To' basic block is currently feasible...
+  // block to the 'To' basic block is currently feasible.
   //
   bool isEdgeFeasible(BasicBlock *From, BasicBlock *To);
 
   // OperandChangedState - This method is invoked on all of the users of an
-  // instruction that was just changed state somehow....  Based on this
+  // instruction that just changed state somehow.  Based on this
   // information, we need to update the specified user of this instruction.
   //
-  void OperandChangedState(User *U) {
-    // Only instructions use other variable values!
-    Instruction &I = cast<Instruction>(*U);
-    if (BBExecutable.count(I.getParent()))   // Inst is executable?
-      visit(I);
+  void OperandChangedState(Instruction *I) {
+    if (BBExecutable.count(I->getParent()))   // Inst is executable?
+      visit(*I);
+  }
+  
+  /// RemoveFromOverdefinedPHIs - If I has any entries in the
+  /// UsersOfOverdefinedPHIs map for PN, remove them now.
+  void RemoveFromOverdefinedPHIs(Instruction *I, PHINode *PN) {
+    if (UsersOfOverdefinedPHIs.empty()) return;
+    std::multimap<PHINode*, Instruction*>::iterator It, E;
+    tie(It, E) = UsersOfOverdefinedPHIs.equal_range(PN);
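+    // (erase(It++) is the usual multimap idiom: the iterator advances past
+    // the node before it is erased and invalidated.)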
+    while (It != E) {
+      if (It->second == I)
+        UsersOfOverdefinedPHIs.erase(It++);
+      else
+        ++It;
+    }
   }
 
 private:
   friend class InstVisitor<SCCPSolver>;
 
-  // visit implementations - Something changed in this instruction... Either an
+  // visit implementations - Something changed in this instruction.  Either an
   // operand made a transition, or the instruction is newly executable.  Change
   // the value type of I to reflect these changes if appropriate.
-  //
   void visitPHINode(PHINode &I);
 
   // Terminators
@@ -396,11 +496,11 @@ private:
   void visitExtractValueInst(ExtractValueInst &EVI);
   void visitInsertValueInst(InsertValueInst &IVI);
 
-  // Instructions that cannot be folded away...
-  void visitStoreInst     (Instruction &I);
+  // Instructions that cannot be folded away.
+  void visitStoreInst     (StoreInst &I);
   void visitLoadInst      (LoadInst &I);
   void visitGetElementPtrInst(GetElementPtrInst &I);
-  void visitCallInst      (CallInst &I) { 
+  void visitCallInst      (CallInst &I) {
     visitCallSite(CallSite::get(&I));
   }
   void visitInvokeInst    (InvokeInst &II) {
@@ -410,15 +510,14 @@ private:
   void visitCallSite      (CallSite CS);
   void visitUnwindInst    (TerminatorInst &I) { /*returns void*/ }
   void visitUnreachableInst(TerminatorInst &I) { /*returns void*/ }
-  void visitAllocationInst(Instruction &I) { markOverdefined(&I); }
+  void visitAllocaInst    (Instruction &I) { markOverdefined(&I); }
   void visitVANextInst    (Instruction &I) { markOverdefined(&I); }
-  void visitVAArgInst     (Instruction &I) { markOverdefined(&I); }
-  void visitFreeInst      (Instruction &I) { /*returns void*/ }
+  void visitVAArgInst     (Instruction &I) { markAnythingOverdefined(&I); }
 
   void visitInstruction(Instruction &I) {
-    // If a new instruction is added to LLVM that we don't handle...
+    // If a new instruction is added to LLVM that we don't handle, be
+    // conservative and give up on it.
     errs() << "SCCP: Don't know how to handle: " << I;
-    markOverdefined(&I);   // Just in case
+    markAnythingOverdefined(&I);   // Just in case
   }
 };
 
@@ -434,37 +533,61 @@ void SCCPSolver::getFeasibleSuccessors(TerminatorInst &TI,
   if (BranchInst *BI = dyn_cast<BranchInst>(&TI)) {
     if (BI->isUnconditional()) {
       Succs[0] = true;
-    } else {
-      LatticeVal &BCValue = getValueState(BI->getCondition());
-      if (BCValue.isOverdefined() ||
-          (BCValue.isConstant() && !isa<ConstantInt>(BCValue.getConstant()))) {
-        // Overdefined condition variables, and branches on unfoldable constant
-        // conditions, mean the branch could go either way.
+      return;
+    }
+    
+    LatticeVal BCValue = getValueState(BI->getCondition());
+    ConstantInt *CI = BCValue.getConstantInt();
+    if (CI == 0) {
+      // Overdefined condition variables, and branches on unfoldable constant
+      // conditions, mean the branch could go either way.
+      if (!BCValue.isUndefined())
         Succs[0] = Succs[1] = true;
-      } else if (BCValue.isConstant()) {
-        // Constant condition variables mean the branch can only go a single way
-        Succs[BCValue.getConstant() == ConstantInt::getFalse(*Context)] = true;
-      }
+      return;
     }
-  } else if (isa<InvokeInst>(&TI)) {
+    
+    // Constant condition variables mean the branch can only go a single way.
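+    // Successor 0 is the true edge and successor 1 the false edge, so
+    // indexing by CI->isZero() selects exactly the taken side.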
+    Succs[CI->isZero()] = true;
+    return;
+  }
+  
+  if (isa<InvokeInst>(TI)) {
     // Invoke instructions successors are always executable.
     Succs[0] = Succs[1] = true;
-  } else if (SwitchInst *SI = dyn_cast<SwitchInst>(&TI)) {
-    LatticeVal &SCValue = getValueState(SI->getCondition());
-    if (SCValue.isOverdefined() ||   // Overdefined condition?
-        (SCValue.isConstant() && !isa<ConstantInt>(SCValue.getConstant()))) {
+    return;
+  }
+  
+  if (SwitchInst *SI = dyn_cast<SwitchInst>(&TI)) {
+    LatticeVal SCValue = getValueState(SI->getCondition());
+    ConstantInt *CI = SCValue.getConstantInt();
+    
+    if (CI == 0) {   // Overdefined or undefined condition?
       // All destinations are executable!
-      Succs.assign(TI.getNumSuccessors(), true);
-    } else if (SCValue.isConstant())
-      Succs[SI->findCaseValue(cast<ConstantInt>(SCValue.getConstant()))] = true;
-  } else {
-    llvm_unreachable("SCCP: Don't know how to handle this terminator!");
+      if (!SCValue.isUndefined())
+        Succs.assign(TI.getNumSuccessors(), true);
+      return;
+    }
+      
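+    // findCaseValue returns the matching case number, or 0 when no case
+    // matches; successor 0 is the default destination, so the default edge
+    // is handled by the same indexing.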
+    Succs[SI->findCaseValue(CI)] = true;
+    return;
+  }
+  
+  // TODO: This could be improved if the operand is a [cast of a] BlockAddress.
+  if (isa<IndirectBrInst>(&TI)) {
+    // Just mark all destinations executable!
+    Succs.assign(TI.getNumSuccessors(), true);
+    return;
   }
+  
+#ifndef NDEBUG
+  errs() << "Unknown terminator instruction: " << TI << '\n';
+#endif
+  llvm_unreachable("SCCP: Don't know how to handle this terminator!");
 }
 
 
 // isEdgeFeasible - Return true if the control flow edge from the 'From' basic
-// block to the 'To' basic block is currently feasible...
+// block to the 'To' basic block is currently feasible.
 //
 bool SCCPSolver::isEdgeFeasible(BasicBlock *From, BasicBlock *To) {
   assert(BBExecutable.count(To) && "Dest should always be alive!");
@@ -472,58 +595,57 @@ bool SCCPSolver::isEdgeFeasible(BasicBlock *From, BasicBlock *To) {
   // Make sure the source basic block is executable!!
   if (!BBExecutable.count(From)) return false;
 
-  // Check to make sure this edge itself is actually feasible now...
+  // Check to make sure this edge itself is actually feasible now.
   TerminatorInst *TI = From->getTerminator();
   if (BranchInst *BI = dyn_cast<BranchInst>(TI)) {
     if (BI->isUnconditional())
       return true;
-    else {
-      LatticeVal &BCValue = getValueState(BI->getCondition());
-      if (BCValue.isOverdefined()) {
-        // Overdefined condition variables mean the branch could go either way.
-        return true;
-      } else if (BCValue.isConstant()) {
-        // Not branching on an evaluatable constant?
-        if (!isa<ConstantInt>(BCValue.getConstant())) return true;
+    
+    LatticeVal BCValue = getValueState(BI->getCondition());
 
-        // Constant condition variables mean the branch can only go a single way
-        return BI->getSuccessor(BCValue.getConstant() ==
-                                       ConstantInt::getFalse(*Context)) == To;
-      }
-      return false;
-    }
-  } else if (isa<InvokeInst>(TI)) {
-    // Invoke instructions successors are always executable.
+    // Overdefined condition variables mean the branch could go either way;
+    // undef conditions mean that neither edge is feasible yet.
+    ConstantInt *CI = BCValue.getConstantInt();
+    if (CI == 0)
+      return !BCValue.isUndefined();
+    
+    // Constant condition variables mean the branch can only go a single way.
+    return BI->getSuccessor(CI->isZero()) == To;
+  }
+  
+  // Invoke instructions successors are always executable.
+  if (isa<InvokeInst>(TI))
     return true;
-  } else if (SwitchInst *SI = dyn_cast<SwitchInst>(TI)) {
-    LatticeVal &SCValue = getValueState(SI->getCondition());
-    if (SCValue.isOverdefined()) {  // Overdefined condition?
-      // All destinations are executable!
-      return true;
-    } else if (SCValue.isConstant()) {
-      Constant *CPV = SCValue.getConstant();
-      if (!isa<ConstantInt>(CPV))
-        return true;  // not a foldable constant?
-
-      // Make sure to skip the "default value" which isn't a value
-      for (unsigned i = 1, E = SI->getNumSuccessors(); i != E; ++i)
-        if (SI->getSuccessorValue(i) == CPV) // Found the taken branch...
-          return SI->getSuccessor(i) == To;
-
-      // Constant value not equal to any of the branches... must execute
-      // default branch then...
-      return SI->getDefaultDest() == To;
-    }
-    return false;
-  } else {
+  
+  if (SwitchInst *SI = dyn_cast<SwitchInst>(TI)) {
+    LatticeVal SCValue = getValueState(SI->getCondition());
+    ConstantInt *CI = SCValue.getConstantInt();
+    
+    if (CI == 0)
+      return !SCValue.isUndefined();
+
+    // Make sure to skip the "default value" which isn't a value
+    for (unsigned i = 1, E = SI->getNumSuccessors(); i != E; ++i)
+      if (SI->getSuccessorValue(i) == CI) // Found the taken branch.
+        return SI->getSuccessor(i) == To;
+
+    // If the constant value is not equal to any of the branches, we must
+    // execute the default branch.
+    return SI->getDefaultDest() == To;
+  }
+  
+  // All edges out of an indirectbr are considered feasible.
+  // TODO: This could be improved if the operand is a [cast of a] BlockAddress.
+  if (isa<IndirectBrInst>(TI))
+    return true;
+  
 #ifndef NDEBUG
-    errs() << "Unknown terminator instruction: " << *TI << '\n';
+  errs() << "Unknown terminator instruction: " << *TI << '\n';
 #endif
-    llvm_unreachable(0);
-  }
+  llvm_unreachable(0);
 }
 
-// visit Implementations - Something changed in this instruction... Either an
+// visit Implementations - Something changed in this instruction: either an
 // operand made a transition, or the instruction is newly executable.  Change
 // the value type of I to reflect these changes if appropriate.  This method
 // makes sure to do the following actions:
@@ -542,31 +664,33 @@ bool SCCPSolver::isEdgeFeasible(BasicBlock *From, BasicBlock *To) {
 //    successors executable.
 //
 void SCCPSolver::visitPHINode(PHINode &PN) {
-  LatticeVal &PNIV = getValueState(&PN);
-  if (PNIV.isOverdefined()) {
+  // If this PN returns a struct, just mark the result overdefined.
+  // TODO: We could do a lot better than this if code actually uses this.
+  if (isa<StructType>(PN.getType()))
+    return markAnythingOverdefined(&PN);
+  
+  if (getValueState(&PN).isOverdefined()) {
     // There may be instructions using this PHI node that are not overdefined
     // themselves.  If so, make sure that they know that the PHI node operand
     // changed.
     std::multimap<PHINode*, Instruction*>::iterator I, E;
     tie(I, E) = UsersOfOverdefinedPHIs.equal_range(&PN);
-    if (I != E) {
-      SmallVector<Instruction*, 16> Users;
-      for (; I != E; ++I) Users.push_back(I->second);
-      while (!Users.empty()) {
-        visit(Users.back());
-        Users.pop_back();
-      }
-    }
+    if (I == E)
+      return;
+    
+    SmallVector<Instruction*, 16> Users;
+    for (; I != E; ++I)
+      Users.push_back(I->second);
+    while (!Users.empty())
+      visit(Users.pop_back_val());
     return;  // Quick exit
   }
 
   // Super-extra-high-degree PHI nodes are unlikely to ever be marked constant,
   // and slow us down a lot.  Just mark them overdefined.
-  if (PN.getNumIncomingValues() > 64) {
-    markOverdefined(PNIV, &PN);
-    return;
-  }
-
+  if (PN.getNumIncomingValues() > 64)
+    return markOverdefined(&PN);
+  
   // Look at all of the executable operands of the PHI node.  If any of them
   // are overdefined, the PHI becomes overdefined as well.  If they are all
   // constant, and they agree with each other, the PHI becomes the identical
@@ -575,32 +699,28 @@ void SCCPSolver::visitPHINode(PHINode &PN) {
   //
   Constant *OperandVal = 0;
   for (unsigned i = 0, e = PN.getNumIncomingValues(); i != e; ++i) {
-    LatticeVal &IV = getValueState(PN.getIncomingValue(i));
+    LatticeVal IV = getValueState(PN.getIncomingValue(i));
     if (IV.isUndefined()) continue;  // Doesn't influence PHI node.
 
-    if (isEdgeFeasible(PN.getIncomingBlock(i), PN.getParent())) {
-      if (IV.isOverdefined()) {   // PHI node becomes overdefined!
-        markOverdefined(&PN);
-        return;
-      }
+    if (!isEdgeFeasible(PN.getIncomingBlock(i), PN.getParent()))
+      continue;
+    
+    if (IV.isOverdefined())    // PHI node becomes overdefined!
+      return markOverdefined(&PN);
 
-      if (OperandVal == 0) {   // Grab the first value...
-        OperandVal = IV.getConstant();
-      } else {                // Another value is being merged in!
-        // There is already a reachable operand.  If we conflict with it,
-        // then the PHI node becomes overdefined.  If we agree with it, we
-        // can continue on.
-
-        // Check to see if there are two different constants merging...
-        if (IV.getConstant() != OperandVal) {
-          // Yes there is.  This means the PHI node is not constant.
-          // You must be overdefined poor PHI.
-          //
-          markOverdefined(&PN);    // The PHI node now becomes overdefined
-          return;    // I'm done analyzing you
-        }
-      }
+    if (OperandVal == 0) {   // Grab the first value.
+      OperandVal = IV.getConstant();
+      continue;
     }
+    
+    // There is already a reachable operand.  If we conflict with it,
+    // then the PHI node becomes overdefined.  If we agree with it, we
+    // can continue on.
+    
+    // Check to see if there are two different constants merging; if so, the
+    // PHI node is overdefined.
+    if (IV.getConstant() != OperandVal)
+      return markOverdefined(&PN);
   }
 
   // If we exited the loop, this means that the PHI node only has constant
@@ -612,44 +732,33 @@ void SCCPSolver::visitPHINode(PHINode &PN) {
     markConstant(&PN, OperandVal);      // Acquire operand value
 }
 
+
 void SCCPSolver::visitReturnInst(ReturnInst &I) {
-  if (I.getNumOperands() == 0) return;  // Ret void
+  if (I.getNumOperands() == 0) return;  // ret void
 
   Function *F = I.getParent()->getParent();
+  Value *ResultOp = I.getOperand(0);
+  
   // If we are tracking the return value of this function, merge it in.
-  if (!F->hasLocalLinkage())
-    return;
-
-  if (!TrackedRetVals.empty() && I.getNumOperands() == 1) {
+  if (!TrackedRetVals.empty() && !isa<StructType>(ResultOp->getType())) {
     DenseMap<Function*, LatticeVal>::iterator TFRVI =
       TrackedRetVals.find(F);
-    if (TFRVI != TrackedRetVals.end() &&
-        !TFRVI->second.isOverdefined()) {
-      LatticeVal &IV = getValueState(I.getOperand(0));
-      mergeInValue(TFRVI->second, F, IV);
+    if (TFRVI != TrackedRetVals.end()) {
+      mergeInValue(TFRVI->second, F, getValueState(ResultOp));
       return;
     }
   }
   
   // Handle functions that return multiple values.
-  if (!TrackedMultipleRetVals.empty() && I.getNumOperands() > 1) {
-    for (unsigned i = 0, e = I.getNumOperands(); i != e; ++i) {
-      DenseMap<std::pair<Function*, unsigned>, LatticeVal>::iterator
-        It = TrackedMultipleRetVals.find(std::make_pair(F, i));
-      if (It == TrackedMultipleRetVals.end()) break;
-      mergeInValue(It->second, F, getValueState(I.getOperand(i)));
-    }
-  } else if (!TrackedMultipleRetVals.empty() &&
-             I.getNumOperands() == 1 &&
-             isa<StructType>(I.getOperand(0)->getType())) {
-    for (unsigned i = 0, e = I.getOperand(0)->getType()->getNumContainedTypes();
-         i != e; ++i) {
-      DenseMap<std::pair<Function*, unsigned>, LatticeVal>::iterator
-        It = TrackedMultipleRetVals.find(std::make_pair(F, i));
-      if (It == TrackedMultipleRetVals.end()) break;
-      if (Value *Val = FindInsertedValue(I.getOperand(0), i, I.getContext()))
-        mergeInValue(It->second, F, getValueState(Val));
-    }
+  if (!TrackedMultipleRetVals.empty()) {
+    if (const StructType *STy = dyn_cast<StructType>(ResultOp->getType()))
+      if (MRVFunctionsTracked.count(F))
+        for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i)
+          mergeInValue(TrackedMultipleRetVals[std::make_pair(F, i)], F,
+                       getStructValueState(ResultOp, i));
+    
   }
 }
 
@@ -659,356 +768,311 @@ void SCCPSolver::visitTerminatorInst(TerminatorInst &TI) {
 
   BasicBlock *BB = TI.getParent();
 
-  // Mark all feasible successors executable...
+  // Mark all feasible successors executable.
   for (unsigned i = 0, e = SuccFeasible.size(); i != e; ++i)
     if (SuccFeasible[i])
       markEdgeExecutable(BB, TI.getSuccessor(i));
 }
 
 void SCCPSolver::visitCastInst(CastInst &I) {
-  Value *V = I.getOperand(0);
-  LatticeVal &VState = getValueState(V);
-  if (VState.isOverdefined())          // Inherit overdefinedness of operand
+  LatticeVal OpSt = getValueState(I.getOperand(0));
+  if (OpSt.isOverdefined())          // Inherit overdefinedness of operand
     markOverdefined(&I);
-  else if (VState.isConstant())        // Propagate constant value
+  else if (OpSt.isConstant())        // Propagate constant value
     markConstant(&I, ConstantExpr::getCast(I.getOpcode(), 
-                                           VState.getConstant(), I.getType()));
+                                           OpSt.getConstant(), I.getType()));
 }
 
-void SCCPSolver::visitExtractValueInst(ExtractValueInst &EVI) {
-  Value *Aggr = EVI.getAggregateOperand();
-
-  // If the operand to the extractvalue is an undef, the result is undef.
-  if (isa<UndefValue>(Aggr))
-    return;
 
-  // Currently only handle single-index extractvalues.
-  if (EVI.getNumIndices() != 1) {
-    markOverdefined(&EVI);
-    return;
-  }
-  
-  Function *F = 0;
-  if (CallInst *CI = dyn_cast<CallInst>(Aggr))
-    F = CI->getCalledFunction();
-  else if (InvokeInst *II = dyn_cast<InvokeInst>(Aggr))
-    F = II->getCalledFunction();
-
-  // TODO: If IPSCCP resolves the callee of this function, we could propagate a
-  // result back!
-  if (F == 0 || TrackedMultipleRetVals.empty()) {
-    markOverdefined(&EVI);
-    return;
-  }
-  
-  // See if we are tracking the result of the callee.  If not tracking this
-  // function (for example, it is a declaration) just move to overdefined.
-  if (!TrackedMultipleRetVals.count(std::make_pair(F, *EVI.idx_begin()))) {
-    markOverdefined(&EVI);
-    return;
+void SCCPSolver::visitExtractValueInst(ExtractValueInst &EVI) {
+  // If this returns a struct, mark all elements overdefined; we don't track
+  // structs in structs.
+  if (isa<StructType>(EVI.getType()))
+    return markAnythingOverdefined(&EVI);
+    
+  // If this is extracting from more than one level of struct, we don't know.
+  if (EVI.getNumIndices() != 1)
+    return markOverdefined(&EVI);
+
+  Value *AggVal = EVI.getAggregateOperand();
+  if (isa<StructType>(AggVal->getType())) {
+    unsigned i = *EVI.idx_begin();
+    LatticeVal EltVal = getStructValueState(AggVal, i);
+    mergeInValue(getValueState(&EVI), &EVI, EltVal);
+  } else {
+    // Otherwise, must be extracting from an array.
+    return markOverdefined(&EVI);
   }
-  
-  // Otherwise, the value will be merged in here as a result of CallSite
-  // handling.
 }
 
 void SCCPSolver::visitInsertValueInst(InsertValueInst &IVI) {
+  const StructType *STy = dyn_cast<StructType>(IVI.getType());
+  if (STy == 0)
+    return markOverdefined(&IVI);
+  
+  // If this has more than one index, we can't handle it; drive all results to
+  // overdefined.
+  if (IVI.getNumIndices() != 1)
+    return markAnythingOverdefined(&IVI);
+  
   Value *Aggr = IVI.getAggregateOperand();
-  Value *Val = IVI.getInsertedValueOperand();
-
-  // If the operands to the insertvalue are undef, the result is undef.
-  if (isa<UndefValue>(Aggr) && isa<UndefValue>(Val))
-    return;
-
-  // Currently only handle single-index insertvalues.
-  if (IVI.getNumIndices() != 1) {
-    markOverdefined(&IVI);
-    return;
-  }
-
-  // Currently only handle insertvalue instructions that are in a single-use
-  // chain that builds up a return value.
-  for (const InsertValueInst *TmpIVI = &IVI; ; ) {
-    if (!TmpIVI->hasOneUse()) {
-      markOverdefined(&IVI);
-      return;
+  unsigned Idx = *IVI.idx_begin();
+  
+  // Compute the result based on what we're inserting.
+  for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i) {
+    // This passes through all values that aren't the inserted element.
+    if (i != Idx) {
+      LatticeVal EltVal = getStructValueState(Aggr, i);
+      mergeInValue(getStructValueState(&IVI, i), &IVI, EltVal);
+      continue;
     }
-    const Value *V = *TmpIVI->use_begin();
-    if (isa<ReturnInst>(V))
-      break;
-    TmpIVI = dyn_cast<InsertValueInst>(V);
-    if (!TmpIVI) {
-      markOverdefined(&IVI);
-      return;
+    
+    Value *Val = IVI.getInsertedValueOperand();
+    if (isa<StructType>(Val->getType()))
+      // We don't track structs in structs.
+      markOverdefined(getStructValueState(&IVI, i), &IVI);
+    else {
+      LatticeVal InVal = getValueState(Val);
+      mergeInValue(getStructValueState(&IVI, i), &IVI, InVal);
     }
   }
-  
-  // See if we are tracking the result of the callee.
-  Function *F = IVI.getParent()->getParent();
-  DenseMap<std::pair<Function*, unsigned>, LatticeVal>::iterator
-    It = TrackedMultipleRetVals.find(std::make_pair(F, *IVI.idx_begin()));
-
-  // Merge in the inserted member value.
-  if (It != TrackedMultipleRetVals.end())
-    mergeInValue(It->second, F, getValueState(Val));
-
-  // Mark the aggregate result of the IVI overdefined; any tracking that we do
-  // will be done on the individual member values.
-  markOverdefined(&IVI);
 }
 
 void SCCPSolver::visitSelectInst(SelectInst &I) {
-  LatticeVal &CondValue = getValueState(I.getCondition());
+  // If this select returns a struct, just mark the result overdefined.
+  // TODO: We could do a lot better than this if code actually uses this.
+  if (isa<StructType>(I.getType()))
+    return markAnythingOverdefined(&I);
+  
+  LatticeVal CondValue = getValueState(I.getCondition());
   if (CondValue.isUndefined())
     return;
-  if (CondValue.isConstant()) {
-    if (ConstantInt *CondCB = dyn_cast<ConstantInt>(CondValue.getConstant())){
-      mergeInValue(&I, getValueState(CondCB->getZExtValue() ? I.getTrueValue()
-                                                          : I.getFalseValue()));
-      return;
-    }
+  
+  if (ConstantInt *CondCB = CondValue.getConstantInt()) {
+    Value *OpVal = CondCB->isZero() ? I.getFalseValue() : I.getTrueValue();
+    mergeInValue(&I, getValueState(OpVal));
+    return;
   }
   
   // Otherwise, the condition is overdefined or a constant we can't evaluate.
   // See if we can produce something better than overdefined based on the T/F
   // value.
-  LatticeVal &TVal = getValueState(I.getTrueValue());
-  LatticeVal &FVal = getValueState(I.getFalseValue());
+  LatticeVal TVal = getValueState(I.getTrueValue());
+  LatticeVal FVal = getValueState(I.getFalseValue());
   
   // select ?, C, C -> C.
   if (TVal.isConstant() && FVal.isConstant() && 
-      TVal.getConstant() == FVal.getConstant()) {
-    markConstant(&I, FVal.getConstant());
-    return;
-  }
+      TVal.getConstant() == FVal.getConstant())
+    return markConstant(&I, FVal.getConstant());
 
-  if (TVal.isUndefined()) {  // select ?, undef, X -> X.
-    mergeInValue(&I, FVal);
-  } else if (FVal.isUndefined()) {  // select ?, X, undef -> X.
-    mergeInValue(&I, TVal);
-  } else {
-    markOverdefined(&I);
-  }
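+  // undef is free to take the value of the other arm, so these selects
+  // collapse to the defined operand.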
+  if (TVal.isUndefined())   // select ?, undef, X -> X.
+    return mergeInValue(&I, FVal);
+  if (FVal.isUndefined())   // select ?, X, undef -> X.
+    return mergeInValue(&I, TVal);
+  markOverdefined(&I);
 }
 
-// Handle BinaryOperators and Shift Instructions...
+// Handle Binary Operators.
 void SCCPSolver::visitBinaryOperator(Instruction &I) {
+  LatticeVal V1State = getValueState(I.getOperand(0));
+  LatticeVal V2State = getValueState(I.getOperand(1));
+  
   LatticeVal &IV = ValueState[&I];
   if (IV.isOverdefined()) return;
 
-  LatticeVal &V1State = getValueState(I.getOperand(0));
-  LatticeVal &V2State = getValueState(I.getOperand(1));
-
-  if (V1State.isOverdefined() || V2State.isOverdefined()) {
-    // If this is an AND or OR with 0 or -1, it doesn't matter that the other
-    // operand is overdefined.
-    if (I.getOpcode() == Instruction::And || I.getOpcode() == Instruction::Or) {
-      LatticeVal *NonOverdefVal = 0;
-      if (!V1State.isOverdefined()) {
-        NonOverdefVal = &V1State;
-      } else if (!V2State.isOverdefined()) {
-        NonOverdefVal = &V2State;
+  if (V1State.isConstant() && V2State.isConstant())
+    return markConstant(IV, &I,
+                        ConstantExpr::get(I.getOpcode(), V1State.getConstant(),
+                                          V2State.getConstant()));
+  
+  // If something is undef, wait for it to resolve.
+  if (!V1State.isOverdefined() && !V2State.isOverdefined())
+    return;
+  
+  // Otherwise, one of our operands is overdefined.  Try to produce something
+  // better than overdefined with some tricks.
+  
+  // If this is an AND or OR with 0 or -1, it doesn't matter that the other
+  // operand is overdefined.
+  if (I.getOpcode() == Instruction::And || I.getOpcode() == Instruction::Or) {
+    LatticeVal *NonOverdefVal = 0;
+    if (!V1State.isOverdefined())
+      NonOverdefVal = &V1State;
+    else if (!V2State.isOverdefined())
+      NonOverdefVal = &V2State;
+
+    if (NonOverdefVal) {
+      if (NonOverdefVal->isUndefined()) {
+        // Could annihilate value.
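+        // undef may take any value, so pick the annihilator for the opcode:
+        // 0 for 'and' (x & 0 == 0), all-ones for 'or' (x | -1 == -1).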
+        if (I.getOpcode() == Instruction::And)
+          markConstant(IV, &I, Constant::getNullValue(I.getType()));
+        else if (const VectorType *PT = dyn_cast<VectorType>(I.getType()))
+          markConstant(IV, &I, Constant::getAllOnesValue(PT));
+        else
+          markConstant(IV, &I,
+                       Constant::getAllOnesValue(I.getType()));
+        return;
       }
-
-      if (NonOverdefVal) {
-        if (NonOverdefVal->isUndefined()) {
-          // Could annihilate value.
-          if (I.getOpcode() == Instruction::And)
-            markConstant(IV, &I, Constant::getNullValue(I.getType()));
-          else if (const VectorType *PT = dyn_cast<VectorType>(I.getType()))
-            markConstant(IV, &I, Constant::getAllOnesValue(PT));
-          else
-            markConstant(IV, &I,
-                         Constant::getAllOnesValue(I.getType()));
-          return;
-        } else {
-          if (I.getOpcode() == Instruction::And) {
-            if (NonOverdefVal->getConstant()->isNullValue()) {
-              markConstant(IV, &I, NonOverdefVal->getConstant());
-              return;      // X and 0 = 0
-            }
-          } else {
-            if (ConstantInt *CI =
-                     dyn_cast<ConstantInt>(NonOverdefVal->getConstant()))
-              if (CI->isAllOnesValue()) {
-                markConstant(IV, &I, NonOverdefVal->getConstant());
-                return;    // X or -1 = -1
-              }
-          }
-        }
+      
+      if (I.getOpcode() == Instruction::And) {
+        // X and 0 = 0
+        if (NonOverdefVal->getConstant()->isNullValue())
+          return markConstant(IV, &I, NonOverdefVal->getConstant());
+      } else {
+        if (ConstantInt *CI = NonOverdefVal->getConstantInt())
+          if (CI->isAllOnesValue())     // X or -1 = -1
+            return markConstant(IV, &I, NonOverdefVal->getConstant());
       }
     }
+  }
 
 
-    // If both operands are PHI nodes, it is possible that this instruction has
-    // a constant value, despite the fact that the PHI node doesn't.  Check for
-    // this condition now.
-    if (PHINode *PN1 = dyn_cast<PHINode>(I.getOperand(0)))
-      if (PHINode *PN2 = dyn_cast<PHINode>(I.getOperand(1)))
-        if (PN1->getParent() == PN2->getParent()) {
-          // Since the two PHI nodes are in the same basic block, they must have
-          // entries for the same predecessors.  Walk the predecessor list, and
-          // if all of the incoming values are constants, and the result of
-          // evaluating this expression with all incoming value pairs is the
-          // same, then this expression is a constant even though the PHI node
-          // is not a constant!
-          LatticeVal Result;
-          for (unsigned i = 0, e = PN1->getNumIncomingValues(); i != e; ++i) {
-            LatticeVal &In1 = getValueState(PN1->getIncomingValue(i));
-            BasicBlock *InBlock = PN1->getIncomingBlock(i);
-            LatticeVal &In2 =
-              getValueState(PN2->getIncomingValueForBlock(InBlock));
-
-            if (In1.isOverdefined() || In2.isOverdefined()) {
+  // If both operands are PHI nodes, it is possible that this instruction has
+  // a constant value, despite the fact that the PHI node doesn't.  Check for
+  // this condition now.
+  if (PHINode *PN1 = dyn_cast<PHINode>(I.getOperand(0)))
+    if (PHINode *PN2 = dyn_cast<PHINode>(I.getOperand(1)))
+      if (PN1->getParent() == PN2->getParent()) {
+        // Since the two PHI nodes are in the same basic block, they must have
+        // entries for the same predecessors.  Walk the predecessor list, and
+        // if all of the incoming values are constants, and the result of
+        // evaluating this expression with all incoming value pairs is the
+        // same, then this expression is a constant even though the PHI node
+        // is not a constant!
+        LatticeVal Result;
+        for (unsigned i = 0, e = PN1->getNumIncomingValues(); i != e; ++i) {
+          LatticeVal In1 = getValueState(PN1->getIncomingValue(i));
+          BasicBlock *InBlock = PN1->getIncomingBlock(i);
+          LatticeVal In2 =getValueState(PN2->getIncomingValueForBlock(InBlock));
+
+          if (In1.isOverdefined() || In2.isOverdefined()) {
+            Result.markOverdefined();
+            break;  // Cannot fold this operation over the PHI nodes!
+          }
+          
+          if (In1.isConstant() && In2.isConstant()) {
+            Constant *V = ConstantExpr::get(I.getOpcode(), In1.getConstant(),
+                                            In2.getConstant());
+            if (Result.isUndefined())
+              Result.markConstant(V);
+            else if (Result.isConstant() && Result.getConstant() != V) {
               Result.markOverdefined();
-              break;  // Cannot fold this operation over the PHI nodes!
-            } else if (In1.isConstant() && In2.isConstant()) {
-              Constant *V =
-                     ConstantExpr::get(I.getOpcode(), In1.getConstant(),
-                                              In2.getConstant());
-              if (Result.isUndefined())
-                Result.markConstant(V);
-              else if (Result.isConstant() && Result.getConstant() != V) {
-                Result.markOverdefined();
-                break;
-              }
+              break;
             }
           }
+        }
 
-          // If we found a constant value here, then we know the instruction is
-          // constant despite the fact that the PHI nodes are overdefined.
-          if (Result.isConstant()) {
-            markConstant(IV, &I, Result.getConstant());
-            // Remember that this instruction is virtually using the PHI node
-            // operands.
-            UsersOfOverdefinedPHIs.insert(std::make_pair(PN1, &I));
-            UsersOfOverdefinedPHIs.insert(std::make_pair(PN2, &I));
-            return;
-          } else if (Result.isUndefined()) {
-            return;
-          }
-
-          // Okay, this really is overdefined now.  Since we might have
-          // speculatively thought that this was not overdefined before, and
-          // added ourselves to the UsersOfOverdefinedPHIs list for the PHIs,
-          // make sure to clean out any entries that we put there, for
-          // efficiency.
-          std::multimap<PHINode*, Instruction*>::iterator It, E;
-          tie(It, E) = UsersOfOverdefinedPHIs.equal_range(PN1);
-          while (It != E) {
-            if (It->second == &I) {
-              UsersOfOverdefinedPHIs.erase(It++);
-            } else
-              ++It;
-          }
-          tie(It, E) = UsersOfOverdefinedPHIs.equal_range(PN2);
-          while (It != E) {
-            if (It->second == &I) {
-              UsersOfOverdefinedPHIs.erase(It++);
-            } else
-              ++It;
-          }
+        // If we found a constant value here, then we know the instruction is
+        // constant despite the fact that the PHI nodes are overdefined.
+        if (Result.isConstant()) {
+          markConstant(IV, &I, Result.getConstant());
+          // Remember that this instruction is virtually using the PHI node
+          // operands.
+          UsersOfOverdefinedPHIs.insert(std::make_pair(PN1, &I));
+          UsersOfOverdefinedPHIs.insert(std::make_pair(PN2, &I));
+          return;
         }
+        
+        if (Result.isUndefined())
+          return;
 
-    markOverdefined(IV, &I);
-  } else if (V1State.isConstant() && V2State.isConstant()) {
-    markConstant(IV, &I,
-                ConstantExpr::get(I.getOpcode(), V1State.getConstant(),
-                                           V2State.getConstant()));
-  }
+        // Okay, this really is overdefined now.  Since we might have
+        // speculatively thought that this was not overdefined before, and
+        // added ourselves to the UsersOfOverdefinedPHIs list for the PHIs,
+        // make sure to clean out any entries that we put there, for
+        // efficiency.
+        RemoveFromOverdefinedPHIs(&I, PN1);
+        RemoveFromOverdefinedPHIs(&I, PN2);
+      }
+
+  markOverdefined(&I);
 }
 
-// Handle ICmpInst instruction...
+// Handle compare instructions.
 void SCCPSolver::visitCmpInst(CmpInst &I) {
+  LatticeVal V1State = getValueState(I.getOperand(0));
+  LatticeVal V2State = getValueState(I.getOperand(1));
+
   LatticeVal &IV = ValueState[&I];
   if (IV.isOverdefined()) return;
 
-  LatticeVal &V1State = getValueState(I.getOperand(0));
-  LatticeVal &V2State = getValueState(I.getOperand(1));
-
-  if (V1State.isOverdefined() || V2State.isOverdefined()) {
-    // If both operands are PHI nodes, it is possible that this instruction has
-    // a constant value, despite the fact that the PHI node doesn't.  Check for
-    // this condition now.
-    if (PHINode *PN1 = dyn_cast<PHINode>(I.getOperand(0)))
-      if (PHINode *PN2 = dyn_cast<PHINode>(I.getOperand(1)))
-        if (PN1->getParent() == PN2->getParent()) {
-          // Since the two PHI nodes are in the same basic block, they must have
-          // entries for the same predecessors.  Walk the predecessor list, and
-          // if all of the incoming values are constants, and the result of
-          // evaluating this expression with all incoming value pairs is the
-          // same, then this expression is a constant even though the PHI node
-          // is not a constant!
-          LatticeVal Result;
-          for (unsigned i = 0, e = PN1->getNumIncomingValues(); i != e; ++i) {
-            LatticeVal &In1 = getValueState(PN1->getIncomingValue(i));
-            BasicBlock *InBlock = PN1->getIncomingBlock(i);
-            LatticeVal &In2 =
-              getValueState(PN2->getIncomingValueForBlock(InBlock));
-
-            if (In1.isOverdefined() || In2.isOverdefined()) {
+  if (V1State.isConstant() && V2State.isConstant())
+    return markConstant(IV, &I, ConstantExpr::getCompare(I.getPredicate(), 
+                                                         V1State.getConstant(), 
+                                                        V2State.getConstant()));
+  
+  // If operands are still undefined, wait for them to resolve.
+  if (!V1State.isOverdefined() && !V2State.isOverdefined())
+    return;
+  
+  // If something is overdefined, use some tricks to avoid ending up
+  // overdefined if we can.
+  
+  // If both operands are PHI nodes, it is possible that this instruction has
+  // a constant value, despite the fact that the PHI node doesn't.  Check for
+  // this condition now.
+  if (PHINode *PN1 = dyn_cast<PHINode>(I.getOperand(0)))
+    if (PHINode *PN2 = dyn_cast<PHINode>(I.getOperand(1)))
+      if (PN1->getParent() == PN2->getParent()) {
+        // Since the two PHI nodes are in the same basic block, they must have
+        // entries for the same predecessors.  Walk the predecessor list, and
+        // if all of the incoming values are constants, and the result of
+        // evaluating this expression with all incoming value pairs is the
+        // same, then this expression is a constant even though the PHI node
+        // is not a constant!
+        LatticeVal Result;
+        for (unsigned i = 0, e = PN1->getNumIncomingValues(); i != e; ++i) {
+          LatticeVal In1 = getValueState(PN1->getIncomingValue(i));
+          BasicBlock *InBlock = PN1->getIncomingBlock(i);
+          LatticeVal In2 =getValueState(PN2->getIncomingValueForBlock(InBlock));
+
+          if (In1.isOverdefined() || In2.isOverdefined()) {
+            Result.markOverdefined();
+            break;  // Cannot fold this operation over the PHI nodes!
+          }
+          
+          if (In1.isConstant() && In2.isConstant()) {
+            Constant *V = ConstantExpr::getCompare(I.getPredicate(), 
+                                                   In1.getConstant(), 
+                                                   In2.getConstant());
+            if (Result.isUndefined())
+              Result.markConstant(V);
+            else if (Result.isConstant() && Result.getConstant() != V) {
               Result.markOverdefined();
-              break;  // Cannot fold this operation over the PHI nodes!
-            } else if (In1.isConstant() && In2.isConstant()) {
-              Constant *V = ConstantExpr::getCompare(I.getPredicate(), 
-                                                     In1.getConstant(), 
-                                                     In2.getConstant());
-              if (Result.isUndefined())
-                Result.markConstant(V);
-              else if (Result.isConstant() && Result.getConstant() != V) {
-                Result.markOverdefined();
-                break;
-              }
+              break;
             }
           }
+        }
 
-          // If we found a constant value here, then we know the instruction is
-          // constant despite the fact that the PHI nodes are overdefined.
-          if (Result.isConstant()) {
-            markConstant(IV, &I, Result.getConstant());
-            // Remember that this instruction is virtually using the PHI node
-            // operands.
-            UsersOfOverdefinedPHIs.insert(std::make_pair(PN1, &I));
-            UsersOfOverdefinedPHIs.insert(std::make_pair(PN2, &I));
-            return;
-          } else if (Result.isUndefined()) {
-            return;
-          }
-
-          // Okay, this really is overdefined now.  Since we might have
-          // speculatively thought that this was not overdefined before, and
-          // added ourselves to the UsersOfOverdefinedPHIs list for the PHIs,
-          // make sure to clean out any entries that we put there, for
-          // efficiency.
-          std::multimap<PHINode*, Instruction*>::iterator It, E;
-          tie(It, E) = UsersOfOverdefinedPHIs.equal_range(PN1);
-          while (It != E) {
-            if (It->second == &I) {
-              UsersOfOverdefinedPHIs.erase(It++);
-            } else
-              ++It;
-          }
-          tie(It, E) = UsersOfOverdefinedPHIs.equal_range(PN2);
-          while (It != E) {
-            if (It->second == &I) {
-              UsersOfOverdefinedPHIs.erase(It++);
-            } else
-              ++It;
-          }
+        // If we found a constant value here, then we know the instruction is
+        // constant despite the fact that the PHI nodes are overdefined.
+        if (Result.isConstant()) {
+          markConstant(&I, Result.getConstant());
+          // Remember that this instruction is virtually using the PHI node
+          // operands.
+          UsersOfOverdefinedPHIs.insert(std::make_pair(PN1, &I));
+          UsersOfOverdefinedPHIs.insert(std::make_pair(PN2, &I));
+          return;
         }
+        
+        if (Result.isUndefined())
+          return;
 
-    markOverdefined(IV, &I);
-  } else if (V1State.isConstant() && V2State.isConstant()) {
-    markConstant(IV, &I, ConstantExpr::getCompare(I.getPredicate(), 
-                                                  V1State.getConstant(), 
-                                                  V2State.getConstant()));
-  }
+        // Okay, this really is overdefined now.  Since we might have
+        // speculatively thought that this was not overdefined before, and
+        // added ourselves to the UsersOfOverdefinedPHIs list for the PHIs,
+        // make sure to clean out any entries that we put there, for
+        // efficiency.
+        RemoveFromOverdefinedPHIs(&I, PN1);
+        RemoveFromOverdefinedPHIs(&I, PN2);
+      }
+
+  markOverdefined(&I);
 }
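
The lattice behind all of the SCCP changes above has three levels (undefined, constant, overdefined), and the icmp-over-PHI folding only holds together while every incoming pair is constant and agrees. For reference, here is a minimal standalone C++ sketch of that lattice and its merge rule; the names Lattice and mergeIn are illustrative, not LLVM's LatticeVal API.

    // Toy three-level SCCP lattice: undefined < constant(c) < overdefined.
    // Standalone sketch only; not LLVM's LatticeVal.
    #include <cassert>

    struct Lattice {
      enum State { Undefined, Constant, Overdefined } state = Undefined;
      int value = 0;  // Meaningful only when state == Constant.

      static Lattice constant(int v) { return {Constant, v}; }
      static Lattice overdefined() { return {Overdefined, 0}; }

      // Merge another lattice value into this one; returns true on change.
      bool mergeIn(const Lattice &other) {
        if (other.state == Undefined || state == Overdefined) return false;
        if (state == Undefined) { *this = other; return true; }
        // Both sides known: stay if the constants agree, else go to the top.
        if (other.state == Constant && other.value == value) return false;
        state = Overdefined;
        return true;
      }
    };

    int main() {
      Lattice v;                        // starts undefined
      v.mergeIn(Lattice::constant(7));  // undefined -> constant 7
      v.mergeIn(Lattice::constant(7));  // same constant: unchanged
      assert(v.state == Lattice::Constant && v.value == 7);
      v.mergeIn(Lattice::constant(8));  // disagreement: overdefined
      assert(v.state == Lattice::Overdefined);
    }

Once a value reaches overdefined it never leaves, which is what makes the fixpoint iteration in Solve() terminate.
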
 
 void SCCPSolver::visitExtractElementInst(ExtractElementInst &I) {
-  // FIXME : SCCP does not handle vectors properly.
-  markOverdefined(&I);
-  return;
+  // TODO: SCCP does not handle vectors properly.
+  return markOverdefined(&I);
 
 #if 0
   LatticeVal &ValState = getValueState(I.getOperand(0));
@@ -1023,9 +1087,8 @@ void SCCPSolver::visitExtractElementInst(ExtractElementInst &I) {
 }
 
 void SCCPSolver::visitInsertElementInst(InsertElementInst &I) {
-  // FIXME : SCCP does not handle vectors properly.
-  markOverdefined(&I);
-  return;
+  // TODO: SCCP does not handle vectors properly.
+  return markOverdefined(&I);
 #if 0
   LatticeVal &ValState = getValueState(I.getOperand(0));
   LatticeVal &EltState = getValueState(I.getOperand(1));
@@ -1048,9 +1111,8 @@ void SCCPSolver::visitInsertElementInst(InsertElementInst &I) {
 }
 
 void SCCPSolver::visitShuffleVectorInst(ShuffleVectorInst &I) {
-  // FIXME : SCCP does not handle vectors properly.
-  markOverdefined(&I);
-  return;
+  // TODO: SCCP does not handle vectors properly.
+  return markOverdefined(&I);
 #if 0
   LatticeVal &V1State   = getValueState(I.getOperand(0));
   LatticeVal &V2State   = getValueState(I.getOperand(1));
@@ -1076,46 +1138,46 @@ void SCCPSolver::visitShuffleVectorInst(ShuffleVectorInst &I) {
 #endif
 }
 
-// Handle getelementptr instructions... if all operands are constants then we
+// Handle getelementptr instructions.  If all operands are constants then we
 // can turn this into a getelementptr ConstantExpr.
 //
 void SCCPSolver::visitGetElementPtrInst(GetElementPtrInst &I) {
-  LatticeVal &IV = ValueState[&I];
-  if (IV.isOverdefined()) return;
+  if (ValueState[&I].isOverdefined()) return;
 
   SmallVector<Constant*, 8> Operands;
   Operands.reserve(I.getNumOperands());
 
   for (unsigned i = 0, e = I.getNumOperands(); i != e; ++i) {
-    LatticeVal &State = getValueState(I.getOperand(i));
+    LatticeVal State = getValueState(I.getOperand(i));
     if (State.isUndefined())
-      return;  // Operands are not resolved yet...
-    else if (State.isOverdefined()) {
-      markOverdefined(IV, &I);
-      return;
-    }
+      return;  // Operands are not resolved yet.
+    
+    if (State.isOverdefined())
+      return markOverdefined(&I);
+
     assert(State.isConstant() && "Unknown state!");
     Operands.push_back(State.getConstant());
   }
 
   Constant *Ptr = Operands[0];
-  Operands.erase(Operands.begin());  // Erase the pointer from idx list...
-
-  markConstant(IV, &I, ConstantExpr::getGetElementPtr(Ptr, &Operands[0],
-                                                      Operands.size()));
+  markConstant(&I, ConstantExpr::getGetElementPtr(Ptr, &Operands[0]+1,
+                                                  Operands.size()-1));
 }
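
The rewritten visitGetElementPtrInst is the clearest instance of the transfer-function ladder used throughout the solver: an undefined operand means wait, an overdefined operand forces the result overdefined, and only all-constant operands fold. A standalone sketch of that ladder, with a sum standing in for the getelementptr ConstantExpr construction; Val and transfer are illustrative names.

    // Standalone sketch of the SCCP operand ladder: undefined operand ->
    // wait; overdefined operand -> overdefined result; all-constant
    // operands -> fold.  The "folder" here just sums, as a stand-in.
    #include <cassert>
    #include <vector>

    struct Val {
      enum { Undef, Const, Over } state = Undef;
      int c = 0;
    };

    Val transfer(const std::vector<Val> &ops) {
      std::vector<int> consts;
      for (const Val &op : ops) {
        if (op.state == Val::Undef) return {};             // not resolved yet
        if (op.state == Val::Over)  return {Val::Over, 0}; // result overdefined
        consts.push_back(op.c);
      }
      int sum = 0;                      // stand-in folder: fold to the sum
      for (int v : consts) sum += v;
      return {Val::Const, sum};
    }

    int main() {
      std::vector<Val> ops = {{Val::Const, 2}, {Val::Const, 3}};
      assert(transfer(ops).state == Val::Const && transfer(ops).c == 5);
      ops.push_back({Val::Over, 0});
      assert(transfer(ops).state == Val::Over);
    }
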
 
-void SCCPSolver::visitStoreInst(Instruction &SI) {
+void SCCPSolver::visitStoreInst(StoreInst &SI) {
+  // If this store is of a struct, ignore it.
+  if (isa<StructType>(SI.getOperand(0)->getType()))
+    return;
+  
   if (TrackedGlobals.empty() || !isa<GlobalVariable>(SI.getOperand(1)))
     return;
+  
   GlobalVariable *GV = cast<GlobalVariable>(SI.getOperand(1));
   DenseMap<GlobalVariable*, LatticeVal>::iterator I = TrackedGlobals.find(GV);
   if (I == TrackedGlobals.end() || I->second.isOverdefined()) return;
 
-  // Get the value we are storing into the global.
-  LatticeVal &PtrVal = getValueState(SI.getOperand(0));
-
-  mergeInValue(I->second, GV, PtrVal);
+  // Get the value we are storing into the global, then merge it.
+  mergeInValue(I->second, GV, getValueState(SI.getOperand(0)));
   if (I->second.isOverdefined())
     TrackedGlobals.erase(I);      // No need to keep tracking this!
 }
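
The tracked-globals bookkeeping in visitStoreInst merges every stored value into the global's lattice cell and stops tracking the global once the cell reaches overdefined, since nothing more can be learned about it. A hedged standalone sketch of that bookkeeping; the map and names stand in for TrackedGlobals, and the real code goes through mergeInValue.

    // Sketch: merging stores into tracked globals, SCCP-style.  Once a
    // global's lattice value can no longer be a single constant, the entry
    // is dropped, mirroring TrackedGlobals.erase(I) above.
    #include <cassert>
    #include <map>
    #include <string>

    enum State { Undef, Const, Over };
    struct Cell { State s = Undef; int c = 0; };

    std::map<std::string, Cell> trackedGlobals;  // stand-in for TrackedGlobals

    void visitStore(const std::string &global, int storedConst) {
      auto it = trackedGlobals.find(global);
      if (it == trackedGlobals.end()) return;    // not tracked
      Cell &cell = it->second;
      if (cell.s == Undef) { cell = {Const, storedConst}; return; }
      if (cell.s == Const && cell.c == storedConst) return;
      trackedGlobals.erase(it);                  // overdefined: stop tracking
    }

    int main() {
      trackedGlobals["g"] = Cell{};
      visitStore("g", 42);                       // g is the constant 42 so far
      assert(trackedGlobals.at("g").s == Const);
      visitStore("g", 43);                       // conflicting store: untracked
      assert(!trackedGlobals.count("g"));
    }
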
@@ -1124,51 +1186,42 @@ void SCCPSolver::visitStoreInst(Instruction &SI) {
 // Handle load instructions.  If the operand is a constant pointer to a constant
 // global, we can replace the load with the loaded constant value!
 void SCCPSolver::visitLoadInst(LoadInst &I) {
+  // If this load is of a struct, just mark the result overdefined.
+  if (isa<StructType>(I.getType()))
+    return markAnythingOverdefined(&I);
+  
+  LatticeVal PtrVal = getValueState(I.getOperand(0));
+  if (PtrVal.isUndefined()) return;   // The pointer is not resolved yet!
+  
   LatticeVal &IV = ValueState[&I];
   if (IV.isOverdefined()) return;
 
-  LatticeVal &PtrVal = getValueState(I.getOperand(0));
-  if (PtrVal.isUndefined()) return;   // The pointer is not resolved yet!
-  if (PtrVal.isConstant() && !I.isVolatile()) {
-    Value *Ptr = PtrVal.getConstant();
-    // TODO: Consider a target hook for valid address spaces for this xform.
-    if (isa<ConstantPointerNull>(Ptr) && I.getPointerAddressSpace() == 0) {
-      // load null -> null
-      markConstant(IV, &I, Constant::getNullValue(I.getType()));
-      return;
-    }
+  if (!PtrVal.isConstant() || I.isVolatile())
+    return markOverdefined(IV, &I);
+    
+  Constant *Ptr = PtrVal.getConstant();
 
-    // Transform load (constant global) into the value loaded.
-    if (GlobalVariable *GV = dyn_cast<GlobalVariable>(Ptr)) {
-      if (GV->isConstant()) {
-        if (GV->hasDefinitiveInitializer()) {
-          markConstant(IV, &I, GV->getInitializer());
-          return;
-        }
-      } else if (!TrackedGlobals.empty()) {
-        // If we are tracking this global, merge in the known value for it.
-        DenseMap<GlobalVariable*, LatticeVal>::iterator It =
-          TrackedGlobals.find(GV);
-        if (It != TrackedGlobals.end()) {
-          mergeInValue(IV, &I, It->second);
-          return;
-        }
+  // load null -> null
+  if (isa<ConstantPointerNull>(Ptr) && I.getPointerAddressSpace() == 0)
+    return markConstant(IV, &I, Constant::getNullValue(I.getType()));
+  
+  // Transform load (constant global) into the value loaded.
+  if (GlobalVariable *GV = dyn_cast<GlobalVariable>(Ptr)) {
+    if (!TrackedGlobals.empty()) {
+      // If we are tracking this global, merge in the known value for it.
+      DenseMap<GlobalVariable*, LatticeVal>::iterator It =
+        TrackedGlobals.find(GV);
+      if (It != TrackedGlobals.end()) {
+        mergeInValue(IV, &I, It->second);
+        return;
       }
     }
-
-    // Transform load (constantexpr_GEP global, 0, ...) into the value loaded.
-    if (ConstantExpr *CE = dyn_cast<ConstantExpr>(Ptr))
-      if (CE->getOpcode() == Instruction::GetElementPtr)
-    if (GlobalVariable *GV = dyn_cast<GlobalVariable>(CE->getOperand(0)))
-      if (GV->isConstant() && GV->hasDefinitiveInitializer())
-        if (Constant *V =
-             ConstantFoldLoadThroughGEPConstantExpr(GV->getInitializer(), CE,
-                                                    *Context)) {
-          markConstant(IV, &I, V);
-          return;
-        }
   }
 
+  // Transform load from a constant into a constant if possible.
+  if (Constant *C = ConstantFoldLoadFromConstPtr(Ptr, TD))
+    return markConstant(IV, &I, C);
+
   // Otherwise we cannot say for certain what value this load will produce.
   // Bail out.
   markOverdefined(IV, &I);
@@ -1181,106 +1234,95 @@ void SCCPSolver::visitCallSite(CallSite CS) {
   // The common case is that we aren't tracking the callee, either because we
   // are not doing interprocedural analysis or the callee is indirect, or is
   // external.  Handle these cases first.
-  if (F == 0 || !F->hasLocalLinkage()) {
+  if (F == 0 || F->isDeclaration()) {
 CallOverdefined:
     // Void return and not tracking callee, just bail.
-    if (I->getType() == Type::getVoidTy(I->getContext())) return;
+    if (I->getType()->isVoidTy()) return;
     
     // Otherwise, if we have a single return value case, and if the function is
     // a declaration, maybe we can constant fold it.
-    if (!isa<StructType>(I->getType()) && F && F->isDeclaration() && 
+    if (F && F->isDeclaration() && !isa<StructType>(I->getType()) &&
         canConstantFoldCallTo(F)) {
       
       SmallVector<Constant*, 8> Operands;
       for (CallSite::arg_iterator AI = CS.arg_begin(), E = CS.arg_end();
            AI != E; ++AI) {
-        LatticeVal &State = getValueState(*AI);
+        LatticeVal State = getValueState(*AI);
+        
         if (State.isUndefined())
           return;  // Operands are not resolved yet.
-        else if (State.isOverdefined()) {
-          markOverdefined(I);
-          return;
-        }
+        if (State.isOverdefined())
+          return markOverdefined(I);
         assert(State.isConstant() && "Unknown state!");
         Operands.push_back(State.getConstant());
       }
      
       // If we can constant fold this, mark the result of the call as a
       // constant.
-      if (Constant *C = ConstantFoldCall(F, Operands.data(), Operands.size())) {
-        markConstant(I, C);
-        return;
-      }
+      if (Constant *C = ConstantFoldCall(F, Operands.data(), Operands.size()))
+        return markConstant(I, C);
     }
 
     // Otherwise, we don't know anything about this call, mark it overdefined.
-    markOverdefined(I);
-    return;
+    return markAnythingOverdefined(I);
   }
 
-  // If this is a single/zero retval case, see if we're tracking the function.
-  DenseMap<Function*, LatticeVal>::iterator TFRVI = TrackedRetVals.find(F);
-  if (TFRVI != TrackedRetVals.end()) {
-    // If so, propagate the return value of the callee into this call result.
-    mergeInValue(I, TFRVI->second);
-  } else if (isa<StructType>(I->getType())) {
-    // Check to see if we're tracking this callee, if not, handle it in the
-    // common path above.
-    DenseMap<std::pair<Function*, unsigned>, LatticeVal>::iterator
-    TMRVI = TrackedMultipleRetVals.find(std::make_pair(F, 0));
-    if (TMRVI == TrackedMultipleRetVals.end())
-      goto CallOverdefined;
+  // If this is a local function that doesn't have its address taken, mark its
+  // entry block executable and merge the actual arguments of the call into
+  // the formal arguments of the function.
+  if (!TrackingIncomingArguments.empty() && TrackingIncomingArguments.count(F)){
+    MarkBlockExecutable(F->begin());
     
-    // If we are tracking this callee, propagate the return values of the call
-    // into this call site.  We do this by walking all the uses. Single-index
-    // ExtractValueInst uses can be tracked; anything more complicated is
-    // currently handled conservatively.
-    for (Value::use_iterator UI = I->use_begin(), E = I->use_end();
-         UI != E; ++UI) {
-      if (ExtractValueInst *EVI = dyn_cast<ExtractValueInst>(*UI)) {
-        if (EVI->getNumIndices() == 1) {
-          mergeInValue(EVI, 
-                  TrackedMultipleRetVals[std::make_pair(F, *EVI->idx_begin())]);
-          continue;
+    // Propagate information from this call site into the callee.
+    CallSite::arg_iterator CAI = CS.arg_begin();
+    for (Function::arg_iterator AI = F->arg_begin(), E = F->arg_end();
+         AI != E; ++AI, ++CAI) {
+      // If this argument is byval, and if the function is not readonly, there
+      // will be an implicit copy formed of the input aggregate.
+      if (AI->hasByValAttr() && !F->onlyReadsMemory()) {
+        markOverdefined(AI);
+        continue;
+      }
+      
+      if (const StructType *STy = dyn_cast<StructType>(AI->getType())) {
+        for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i) {
+          LatticeVal CallArg = getStructValueState(*CAI, i);
+          mergeInValue(getStructValueState(AI, i), AI, CallArg);
         }
+      } else {
+        mergeInValue(AI, getValueState(*CAI));
       }
-      // The aggregate value is used in a way not handled here. Assume nothing.
-      markOverdefined(*UI);
     }
-  } else {
-    // Otherwise we're not tracking this callee, so handle it in the
-    // common path above.
-    goto CallOverdefined;
   }
-   
-  // Finally, if this is the first call to the function hit, mark its entry
-  // block executable.
-  if (!BBExecutable.count(F->begin()))
-    MarkBlockExecutable(F->begin());
   
-  // Propagate information from this call site into the callee.
-  CallSite::arg_iterator CAI = CS.arg_begin();
-  for (Function::arg_iterator AI = F->arg_begin(), E = F->arg_end();
-       AI != E; ++AI, ++CAI) {
-    LatticeVal &IV = ValueState[AI];
-    if (AI->hasByValAttr() && !F->onlyReadsMemory()) {
-      IV.markOverdefined();
-      continue;
-    }
-    if (!IV.isOverdefined())
-      mergeInValue(IV, AI, getValueState(*CAI));
+  // If this is a single/zero retval case, see if we're tracking the function.
+  if (const StructType *STy = dyn_cast<StructType>(F->getReturnType())) {
+    if (!MRVFunctionsTracked.count(F))
+      goto CallOverdefined;  // Not tracking this callee.
+    
+    // If we are tracking this callee, propagate the result of the function
+    // into this call site.
+    for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i)
+      mergeInValue(getStructValueState(I, i), I, 
+                   TrackedMultipleRetVals[std::make_pair(F, i)]);
+  } else {
+    DenseMap<Function*, LatticeVal>::iterator TFRVI = TrackedRetVals.find(F);
+    if (TFRVI == TrackedRetVals.end())
+      goto CallOverdefined;  // Not tracking this callee.
+      
+    // If so, propagate the return value of the callee into this call result.
+    mergeInValue(I, TFRVI->second);
   }
 }
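
For argument-tracked functions, the loop above makes each formal parameter's lattice value the merge of the actual arguments from every call site the solver can see, so a formal stays constant only while all visible calls agree. A standalone sketch of that merge; Cell and mergeActual are illustrative names.

    // Sketch: a formal parameter's lattice value is the merge of the actuals
    // from every call site the solver has seen (cf. the mergeInValue(AI, ...)
    // loop above).
    #include <cassert>

    enum State { Undef, Const, Over };
    struct Cell { State s = Undef; int c = 0; };

    void mergeActual(Cell &formal, int actual) {
      if (formal.s == Over) return;
      if (formal.s == Undef) { formal = {Const, actual}; return; }
      if (formal.c != actual) formal.s = Over;   // call sites disagree
    }

    int main() {
      Cell x;                 // formal parameter of some internal function
      mergeActual(x, 5);      // f(5)
      mergeActual(x, 5);      // f(5) again: still the constant 5
      assert(x.s == Const && x.c == 5);
      mergeActual(x, 6);      // f(6): conflicting actuals
      assert(x.s == Over);
    }
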
 
-
 void SCCPSolver::Solve() {
   // Process the work lists until they are empty!
   while (!BBWorkList.empty() || !InstWorkList.empty() ||
          !OverdefinedInstWorkList.empty()) {
-    // Process the instruction work list...
+    // Process the overdefined instruction work list first; it drives other
+    // values to overdefined more quickly.
     while (!OverdefinedInstWorkList.empty()) {
-      Value *I = OverdefinedInstWorkList.back();
-      OverdefinedInstWorkList.pop_back();
+      Value *I = OverdefinedInstWorkList.pop_back_val();
 
       DEBUG(errs() << "\nPopped off OI-WL: " << *I << '\n');
 
@@ -1289,33 +1331,35 @@ void SCCPSolver::Solve() {
       //
       // Anything on this worklist that is overdefined need not be visited
       // since all of its users will have already been marked as overdefined
-      // Update all of the users of this instruction's value...
+      // Update all of the users of this instruction's value.
       //
       for (Value::use_iterator UI = I->use_begin(), E = I->use_end();
            UI != E; ++UI)
-        OperandChangedState(*UI);
+        if (Instruction *I = dyn_cast<Instruction>(*UI))
+          OperandChangedState(I);
     }
-    // Process the instruction work list...
+    
+    // Process the instruction work list.
     while (!InstWorkList.empty()) {
-      Value *I = InstWorkList.back();
-      InstWorkList.pop_back();
+      Value *I = InstWorkList.pop_back_val();
 
       DEBUG(errs() << "\nPopped off I-WL: " << *I << '\n');
 
-      // "I" got into the work list because it either made the transition from
-      // bottom to constant
+      // "I" got into the work list because it made the transition from undef to
+      // constant.
       //
       // Anything on this worklist that is overdefined need not be visited
       // since all of its users will have already been marked as overdefined.
-      // Update all of the users of this instruction's value...
+      // Update all of the users of this instruction's value.
       //
-      if (!getValueState(I).isOverdefined())
+      if (isa<StructType>(I->getType()) || !getValueState(I).isOverdefined())
         for (Value::use_iterator UI = I->use_begin(), E = I->use_end();
              UI != E; ++UI)
-          OperandChangedState(*UI);
+          if (Instruction *I = dyn_cast<Instruction>(*UI))
+            OperandChangedState(I);
     }
 
-    // Process the basic block work list...
+    // Process the basic block work list.
     while (!BBWorkList.empty()) {
       BasicBlock *BB = BBWorkList.back();
       BBWorkList.pop_back();
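
Solve() now drains the overdefined worklist before the ordinary instruction worklist on every outer pass, matching the new comment: propagating overdefined first means fewer speculative constant states get created only to be torn down again. A standalone skeleton of that drain order; processOverdefined and processInst stand in for the visit logic.

    // Skeleton of the solver's drain order: empty the overdefined worklist
    // before the ordinary one on each outer iteration, so "bad news"
    // propagates as fast as possible.
    #include <iostream>
    #include <vector>

    static void processOverdefined(int v) { std::cout << "OI-WL: " << v << '\n'; }
    static void processInst(int v)        { std::cout << "I-WL:  " << v << '\n'; }

    int main() {
      std::vector<int> overdefinedWL = {1, 2};
      std::vector<int> instWL = {10, 11};

      while (!overdefinedWL.empty() || !instWL.empty()) {
        // Overdefined work first (mirrors OverdefinedInstWorkList above).
        while (!overdefinedWL.empty()) {
          int v = overdefinedWL.back();
          overdefinedWL.pop_back();              // cf. pop_back_val()
          processOverdefined(v);
        }
        while (!instWL.empty()) {
          int v = instWL.back();
          instWL.pop_back();
          processInst(v);
        }
      }
    }
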
@@ -1354,15 +1398,37 @@ bool SCCPSolver::ResolvedUndefsIn(Function &F) {
     
     for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ++I) {
       // Look for instructions which produce undef values.
-      if (I->getType() == Type::getVoidTy(F.getContext())) continue;
+      if (I->getType()->isVoidTy()) continue;
+      
+      if (const StructType *STy = dyn_cast<StructType>(I->getType())) {
+        // Only a few things that can be structs matter for undef.  Just send
+        // all their results to overdefined.  We could be more precise than this
+        // but it isn't worth bothering.
+        if (isa<CallInst>(I) || isa<SelectInst>(I)) {
+          for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i) {
+            LatticeVal &LV = getStructValueState(I, i);
+            if (LV.isUndefined())
+              markOverdefined(LV, I);
+          }
+        }
+        continue;
+      }
       
       LatticeVal &LV = getValueState(I);
       if (!LV.isUndefined()) continue;
 
+      // No instructions using structs need disambiguation.
+      if (isa<StructType>(I->getOperand(0)->getType()))
+        continue;
+
       // Get the lattice values of the first two operands for use below.
-      LatticeVal &Op0LV = getValueState(I->getOperand(0));
+      LatticeVal Op0LV = getValueState(I->getOperand(0));
       LatticeVal Op1LV;
       if (I->getNumOperands() == 2) {
+        // No instructions using structs need disambiguation.
+        if (isa<StructType>(I->getOperand(1)->getType()))
+          continue;
+        
         // If this is a two-operand instruction, and if both operands are
         // undefs, the result stays undef.
         Op1LV = getValueState(I->getOperand(1));
@@ -1379,23 +1445,18 @@ bool SCCPSolver::ResolvedUndefsIn(Function &F) {
         // After a zero extend, we know the top part is zero.  SExt doesn't have
         // to be handled here, because we don't know whether the top part is 1's
         // or 0's.
-        assert(Op0LV.isUndefined());
-        markForcedConstant(LV, I, Constant::getNullValue(ITy));
+        markForcedConstant(I, Constant::getNullValue(ITy));
         return true;
       case Instruction::Mul:
       case Instruction::And:
         // undef * X -> 0.   X could be zero.
         // undef & X -> 0.   X could be zero.
-        markForcedConstant(LV, I, Constant::getNullValue(ITy));
+        markForcedConstant(I, Constant::getNullValue(ITy));
         return true;
 
       case Instruction::Or:
         // undef | X -> -1.   X could be -1.
-        if (const VectorType *PTy = dyn_cast<VectorType>(ITy))
-          markForcedConstant(LV, I,
-                             Constant::getAllOnesValue(PTy));
-        else          
-          markForcedConstant(LV, I, Constant::getAllOnesValue(ITy));
+        markForcedConstant(I, Constant::getAllOnesValue(ITy));
         return true;
 
       case Instruction::SDiv:
@@ -1408,7 +1469,7 @@ bool SCCPSolver::ResolvedUndefsIn(Function &F) {
         
         // undef / X -> 0.   X could be maxint.
         // undef % X -> 0.   X could be 1.
-        markForcedConstant(LV, I, Constant::getNullValue(ITy));
+        markForcedConstant(I, Constant::getNullValue(ITy));
         return true;
         
       case Instruction::AShr:
@@ -1417,9 +1478,9 @@ bool SCCPSolver::ResolvedUndefsIn(Function &F) {
         
         // X >>s undef -> X.  X could be 0, X could have the high-bit known set.
         if (Op0LV.isConstant())
-          markForcedConstant(LV, I, Op0LV.getConstant());
+          markForcedConstant(I, Op0LV.getConstant());
         else
-          markOverdefined(LV, I);
+          markOverdefined(I);
         return true;
       case Instruction::LShr:
       case Instruction::Shl:
@@ -1429,7 +1490,7 @@ bool SCCPSolver::ResolvedUndefsIn(Function &F) {
         
         // X >> undef -> 0.  X could be 0.
         // X << undef -> 0.  X could be 0.
-        markForcedConstant(LV, I, Constant::getNullValue(ITy));
+        markForcedConstant(I, Constant::getNullValue(ITy));
         return true;
       case Instruction::Select:
         // undef ? X : Y  -> X or Y.  There could be commonality between X/Y.
@@ -1447,15 +1508,15 @@ bool SCCPSolver::ResolvedUndefsIn(Function &F) {
         }
         
         if (Op1LV.isConstant())
-          markForcedConstant(LV, I, Op1LV.getConstant());
+          markForcedConstant(I, Op1LV.getConstant());
         else
-          markOverdefined(LV, I);
+          markOverdefined(I);
         return true;
       case Instruction::Call:
         // If a call has an undef result, it is because it is constant foldable
         // but one of the inputs was undef.  Just force the result to
         // overdefined.
-        markOverdefined(LV, I);
+        markOverdefined(I);
         return true;
       }
     }
@@ -1466,7 +1527,7 @@ bool SCCPSolver::ResolvedUndefsIn(Function &F) {
       if (!getValueState(BI->getCondition()).isUndefined())
         continue;
     } else if (SwitchInst *SI = dyn_cast<SwitchInst>(TI)) {
-      if (SI->getNumSuccessors()<2)   // no cases
+      if (SI->getNumSuccessors() < 2)   // no cases
         continue;
       if (!getValueState(SI->getCondition()).isUndefined())
         continue;
@@ -1492,7 +1553,7 @@ bool SCCPSolver::ResolvedUndefsIn(Function &F) {
     // as undef, then further analysis could think the undef went another way
     // leading to an inconsistent set of conclusions.
     if (BranchInst *BI = dyn_cast<BranchInst>(TI)) {
-      BI->setCondition(ConstantInt::getFalse(*Context));
+      BI->setCondition(ConstantInt::getFalse(BI->getContext()));
     } else {
       SwitchInst *SI = cast<SwitchInst>(TI);
       SI->setCondition(SI->getCaseValue(1));
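
The undef-forcing rules in ResolvedUndefsIn above are sound because the forced constant is always one feasible result of the operation: undef could be 0 for the & and * cases, and all-ones for the | case. A standalone exhaustive check over 8-bit values, purely as an illustration.

    // Exhaustive 8-bit check of the undef-forcing rules used above:
    //   undef & X -> 0, undef * X -> 0  (choose undef == 0)
    //   undef | X -> -1                 (choose undef == all-ones)
    // This only demonstrates that the forced constants are feasible results;
    // it is not part of the pass.
    #include <cassert>
    #include <cstdint>

    int main() {
      for (unsigned x = 0; x <= 0xFF; ++x) {
        uint8_t X = static_cast<uint8_t>(x);
        assert((uint8_t)(0 & X) == 0);        // undef == 0 justifies & -> 0
        assert((uint8_t)(0 * X) == 0);        // undef == 0 justifies * -> 0
        assert((uint8_t)(0xFF | X) == 0xFF);  // undef == ~0 justifies | -> -1
      }
    }
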
@@ -1530,26 +1591,40 @@ char SCCP::ID = 0;
 static RegisterPass<SCCP>
 X("sccp", "Sparse Conditional Constant Propagation");
 
-// createSCCPPass - This is the public interface to this file...
+// createSCCPPass - This is the public interface to this file.
 FunctionPass *llvm::createSCCPPass() {
   return new SCCP();
 }
 
+static void DeleteInstructionInBlock(BasicBlock *BB) {
+  DEBUG(errs() << "  BasicBlock Dead:" << *BB);
+  ++NumDeadBlocks;
+  
+  // Delete the instructions backwards; this reduces the number of def-use
+  // and use-def chain updates that have to be made.
+  while (!isa<TerminatorInst>(BB->begin())) {
+    Instruction *I = --BasicBlock::iterator(BB->getTerminator());
+    
+    if (!I->use_empty())
+      I->replaceAllUsesWith(UndefValue::get(I->getType()));
+    BB->getInstList().erase(I);
+    ++NumInstRemoved;
+  }
+}
 
 // runOnFunction() - Run the Sparse Conditional Constant Propagation algorithm,
 // and return true if the function was modified.
 //
 bool SCCP::runOnFunction(Function &F) {
   DEBUG(errs() << "SCCP on function '" << F.getName() << "'\n");
-  SCCPSolver Solver;
-  Solver.setContext(&F.getContext());
+  SCCPSolver Solver(getAnalysisIfAvailable<TargetData>());
 
   // Mark the first block of the function as being executable.
   Solver.MarkBlockExecutable(F.begin());
 
   // Mark all arguments to the function as being overdefined.
   for (Function::arg_iterator AI = F.arg_begin(), E = F.arg_end(); AI != E;++AI)
-    Solver.markOverdefined(AI);
+    Solver.markAnythingOverdefined(AI);
 
   // Solve for constants.
   bool ResolvedUndefs = true;
@@ -1564,58 +1639,45 @@ bool SCCP::runOnFunction(Function &F) {
   // If we decided that there are basic blocks that are dead in this function,
   // delete their contents now.  Note that we cannot actually delete the blocks,
   // as we cannot modify the CFG of the function.
-  //
-  SmallVector<Instruction*, 512> Insts;
-  std::map<Value*, LatticeVal> &Values = Solver.getValueMapping();
 
-  for (Function::iterator BB = F.begin(), E = F.end(); BB != E; ++BB)
+  for (Function::iterator BB = F.begin(), E = F.end(); BB != E; ++BB) {
     if (!Solver.isBlockExecutable(BB)) {
-      DEBUG(errs() << "  BasicBlock Dead:" << *BB);
-      ++NumDeadBlocks;
-
-      // Delete the instructions backwards, as it has a reduced likelihood of
-      // having to update as many def-use and use-def chains.
-      for (BasicBlock::iterator I = BB->begin(), E = BB->getTerminator();
-           I != E; ++I)
-        Insts.push_back(I);
-      while (!Insts.empty()) {
-        Instruction *I = Insts.back();
-        Insts.pop_back();
-        if (!I->use_empty())
-          I->replaceAllUsesWith(UndefValue::get(I->getType()));
-        BB->getInstList().erase(I);
-        MadeChanges = true;
-        ++NumInstRemoved;
-      }
-    } else {
-      // Iterate over all of the instructions in a function, replacing them with
-      // constants if we have found them to be of constant values.
-      //
-      for (BasicBlock::iterator BI = BB->begin(), E = BB->end(); BI != E; ) {
-        Instruction *Inst = BI++;
-        if (Inst->getType() == Type::getVoidTy(F.getContext()) ||
-            isa<TerminatorInst>(Inst))
-          continue;
-        
-        LatticeVal &IV = Values[Inst];
-        if (!IV.isConstant() && !IV.isUndefined())
-          continue;
-        
-        Constant *Const = IV.isConstant()
-          ? IV.getConstant() : UndefValue::get(Inst->getType());
-        DEBUG(errs() << "  Constant: " << *Const << " = " << *Inst);
+      DeleteInstructionInBlock(BB);
+      MadeChanges = true;
+      continue;
+    }
+  
+    // Iterate over all of the instructions in a function, replacing them with
+    // constants if we have found them to be of constant values.
+    //
+    for (BasicBlock::iterator BI = BB->begin(), E = BB->end(); BI != E; ) {
+      Instruction *Inst = BI++;
+      if (Inst->getType()->isVoidTy() || isa<TerminatorInst>(Inst))
+        continue;
+      
+      // TODO: Reconstruct structs from their elements.
+      if (isa<StructType>(Inst->getType()))
+        continue;
+      
+      LatticeVal IV = Solver.getLatticeValueFor(Inst);
+      if (IV.isOverdefined())
+        continue;
+      
+      Constant *Const = IV.isConstant()
+        ? IV.getConstant() : UndefValue::get(Inst->getType());
+      DEBUG(errs() << "  Constant: " << *Const << " = " << *Inst);
 
-        // Replaces all of the uses of a variable with uses of the constant.
-        Inst->replaceAllUsesWith(Const);
-        
-        // Delete the instruction.
-        Inst->eraseFromParent();
-        
-        // Hey, we just changed something!
-        MadeChanges = true;
-        ++NumInstRemoved;
-      }
+      // Replaces all of the uses of a variable with uses of the constant.
+      Inst->replaceAllUsesWith(Const);
+      
+      // Delete the instruction.
+      Inst->eraseFromParent();
+      
+      // Hey, we just changed something!
+      MadeChanges = true;
+      ++NumInstRemoved;
     }
+  }
 
   return MadeChanges;
 }
@@ -1637,7 +1699,7 @@ char IPSCCP::ID = 0;
 static RegisterPass<IPSCCP>
 Y("ipsccp", "Interprocedural Sparse Conditional Constant Propagation");
 
-// createIPSCCPPass - This is the public interface to this file...
+// createIPSCCPPass - This is the public interface to this file.
 ModulePass *llvm::createIPSCCPPass() {
   return new IPSCCP();
 }
@@ -1654,12 +1716,14 @@ static bool AddressIsTaken(GlobalValue *GV) {
         return true;  // Storing addr of GV.
     } else if (isa<InvokeInst>(*UI) || isa<CallInst>(*UI)) {
       // Make sure we are calling the function, not passing the address.
-      CallSite CS = CallSite::get(cast<Instruction>(*UI));
-      if (CS.hasArgument(GV))
+      if (UI.getOperandNo() != 0)
         return true;
     } else if (LoadInst *LI = dyn_cast<LoadInst>(*UI)) {
       if (LI->isVolatile())
         return true;
+    } else if (isa<BlockAddress>(*UI)) {
+      // blockaddress doesn't take the address of the function, it takes addr
+      // of label.
     } else {
       return true;
     }
@@ -1667,25 +1731,37 @@ static bool AddressIsTaken(GlobalValue *GV) {
 }
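
After the change above, AddressIsTaken accepts a call or invoke use only when the function occupies the callee slot (operand 0 in this era's IR layout), and it explicitly tolerates blockaddress, which refers to a label rather than to the function itself. A simplified standalone sketch of that use scan; Use and UseKind are illustrative, and the real code also handles stores.

    // Sketch of the address-taken scan: an address is considered taken
    // unless every use is a direct call in the callee slot, a non-volatile
    // load, or a blockaddress.
    #include <cassert>
    #include <vector>

    enum UseKind { CallUse, LoadUse, BlockAddressUse, OtherUse };
    struct Use {
      UseKind kind;
      int operandNo = 0;        // for CallUse: 0 == callee position here
      bool isVolatile = false;  // for LoadUse
    };

    bool addressIsTaken(const std::vector<Use> &uses) {
      for (const Use &u : uses) {
        switch (u.kind) {
        case CallUse:
          if (u.operandNo != 0) return true;   // passed as an argument
          break;
        case LoadUse:
          if (u.isVolatile) return true;
          break;
        case BlockAddressUse:
          break;                               // takes the label's address only
        default:
          return true;
        }
      }
      return false;
    }

    int main() {
      assert(!addressIsTaken({{CallUse, 0}, {BlockAddressUse}}));
      assert(addressIsTaken({{CallUse, 1}}));  // address escapes as an argument
    }
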
 
 bool IPSCCP::runOnModule(Module &M) {
-  LLVMContext *Context = &M.getContext();
-  
-  SCCPSolver Solver;
-  Solver.setContext(Context);
+  SCCPSolver Solver(getAnalysisIfAvailable<TargetData>());
 
   // Loop over all functions, marking arguments to those with their addresses
   // taken or that are external as overdefined.
   //
-  for (Module::iterator F = M.begin(), E = M.end(); F != E; ++F)
-    if (!F->hasLocalLinkage() || AddressIsTaken(F)) {
-      if (!F->isDeclaration())
-        Solver.MarkBlockExecutable(F->begin());
-      for (Function::arg_iterator AI = F->arg_begin(), E = F->arg_end();
-           AI != E; ++AI)
-        Solver.markOverdefined(AI);
-    } else {
+  for (Module::iterator F = M.begin(), E = M.end(); F != E; ++F) {
+    if (F->isDeclaration())
+      continue;
+    
+    // If this is a strong or ODR definition of this function, then we can
+    // propagate information about its result into callsites of it.
+    if (!F->mayBeOverridden())
       Solver.AddTrackedFunction(F);
+    
+    // If this function only has direct calls that we can see, we can track its
+    // arguments and return value aggressively, and can assume it is not called
+    // unless we see evidence to the contrary.
+    if (F->hasLocalLinkage() && !AddressIsTaken(F)) {
+      Solver.AddArgumentTrackedFunction(F);
+      continue;
     }
 
+    // Assume the function is called.
+    Solver.MarkBlockExecutable(F->begin());
+    
+    // Assume nothing about the incoming arguments.
+    for (Function::arg_iterator AI = F->arg_begin(), E = F->arg_end();
+         AI != E; ++AI)
+      Solver.markAnythingOverdefined(AI);
+  }
+
   // Loop over global variables.  We inform the solver about any internal global
   // variables that do not have their 'addresses taken'.  If they don't have
   // their addresses taken, we can propagate constants through them.
@@ -1710,48 +1786,37 @@ bool IPSCCP::runOnModule(Module &M) {
   // Iterate over all of the instructions in the module, replacing them with
   // constants if we have found them to be of constant values.
   //
-  SmallVector<Instruction*, 512> Insts;
   SmallVector<BasicBlock*, 512> BlocksToErase;
-  std::map<Value*, LatticeVal> &Values = Solver.getValueMapping();
 
   for (Module::iterator F = M.begin(), E = M.end(); F != E; ++F) {
-    for (Function::arg_iterator AI = F->arg_begin(), E = F->arg_end();
-         AI != E; ++AI)
-      if (!AI->use_empty()) {
-        LatticeVal &IV = Values[AI];
-        if (IV.isConstant() || IV.isUndefined()) {
-          Constant *CST = IV.isConstant() ?
-            IV.getConstant() : UndefValue::get(AI->getType());
-          DEBUG(errs() << "***  Arg " << *AI << " = " << *CST <<"\n");
-
-          // Replaces all of the uses of a variable with uses of the
-          // constant.
-          AI->replaceAllUsesWith(CST);
-          ++IPNumArgsElimed;
-        }
+    if (Solver.isBlockExecutable(F->begin())) {
+      for (Function::arg_iterator AI = F->arg_begin(), E = F->arg_end();
+           AI != E; ++AI) {
+        if (AI->use_empty() || isa<StructType>(AI->getType())) continue;
+        
+        // TODO: Could use getStructLatticeValueFor to find out if the entire
+        // result is a constant and replace it entirely if so.
+
+        LatticeVal IV = Solver.getLatticeValueFor(AI);
+        if (IV.isOverdefined()) continue;
+        
+        Constant *CST = IV.isConstant() ?
+          IV.getConstant() : UndefValue::get(AI->getType());
+        DEBUG(errs() << "***  Arg " << *AI << " = " << *CST <<"\n");
+        
+        // Replaces all of the uses of a variable with uses of the
+        // constant.
+        AI->replaceAllUsesWith(CST);
+        ++IPNumArgsElimed;
       }
+    }
 
-    for (Function::iterator BB = F->begin(), E = F->end(); BB != E; ++BB)
+    for (Function::iterator BB = F->begin(), E = F->end(); BB != E; ++BB) {
       if (!Solver.isBlockExecutable(BB)) {
-        DEBUG(errs() << "  BasicBlock Dead:" << *BB);
-        ++IPNumDeadBlocks;
+        DeleteInstructionInBlock(BB);
+        MadeChanges = true;
 
-        // Delete the instructions backwards, as it has a reduced likelihood of
-        // having to update as many def-use and use-def chains.
         TerminatorInst *TI = BB->getTerminator();
-        for (BasicBlock::iterator I = BB->begin(), E = TI; I != E; ++I)
-          Insts.push_back(I);
-
-        while (!Insts.empty()) {
-          Instruction *I = Insts.back();
-          Insts.pop_back();
-          if (!I->use_empty())
-            I->replaceAllUsesWith(UndefValue::get(I->getType()));
-          BB->getInstList().erase(I);
-          MadeChanges = true;
-          ++IPNumInstRemoved;
-        }
-
         for (unsigned i = 0, e = TI->getNumSuccessors(); i != e; ++i) {
           BasicBlock *Succ = TI->getSuccessor(i);
           if (!Succ->empty() && isa<PHINode>(Succ->begin()))
@@ -1759,40 +1824,44 @@ bool IPSCCP::runOnModule(Module &M) {
         }
         if (!TI->use_empty())
           TI->replaceAllUsesWith(UndefValue::get(TI->getType()));
-        BB->getInstList().erase(TI);
+        TI->eraseFromParent();
 
         if (&*BB != &F->front())
           BlocksToErase.push_back(BB);
         else
           new UnreachableInst(M.getContext(), BB);
+        continue;
+      }
+      
+      for (BasicBlock::iterator BI = BB->begin(), E = BB->end(); BI != E; ) {
+        Instruction *Inst = BI++;
+        if (Inst->getType()->isVoidTy() || isa<StructType>(Inst->getType()))
+          continue;
+        
+        // TODO: Could use getStructLatticeValueFor to find out if the entire
+        // result is a constant and replace it entirely if so.
+        
+        LatticeVal IV = Solver.getLatticeValueFor(Inst);
+        if (IV.isOverdefined())
+          continue;
+        
+        Constant *Const = IV.isConstant()
+          ? IV.getConstant() : UndefValue::get(Inst->getType());
+        DEBUG(errs() << "  Constant: " << *Const << " = " << *Inst);
 
-      } else {
-        for (BasicBlock::iterator BI = BB->begin(), E = BB->end(); BI != E; ) {
-          Instruction *Inst = BI++;
-          if (Inst->getType() == Type::getVoidTy(M.getContext()))
-            continue;
-          
-          LatticeVal &IV = Values[Inst];
-          if (!IV.isConstant() && !IV.isUndefined())
-            continue;
-          
-          Constant *Const = IV.isConstant()
-            ? IV.getConstant() : UndefValue::get(Inst->getType());
-          DEBUG(errs() << "  Constant: " << *Const << " = " << *Inst);
-
-          // Replaces all of the uses of a variable with uses of the
-          // constant.
-          Inst->replaceAllUsesWith(Const);
-          
-          // Delete the instruction.
-          if (!isa<CallInst>(Inst) && !isa<TerminatorInst>(Inst))
-            Inst->eraseFromParent();
+        // Replaces all of the uses of a variable with uses of the
+        // constant.
+        Inst->replaceAllUsesWith(Const);
+        
+        // Delete the instruction.
+        if (!isa<CallInst>(Inst) && !isa<TerminatorInst>(Inst))
+          Inst->eraseFromParent();
 
-          // Hey, we just changed something!
-          MadeChanges = true;
-          ++IPNumInstRemoved;
-        }
+        // Hey, we just changed something!
+        MadeChanges = true;
+        ++IPNumInstRemoved;
       }
+    }
 
     // Now that all instructions in the function are constant folded, erase dead
     // blocks, because we can now use ConstantFoldTerminator to get rid of
@@ -1800,8 +1869,16 @@ bool IPSCCP::runOnModule(Module &M) {
     for (unsigned i = 0, e = BlocksToErase.size(); i != e; ++i) {
       // If there are any PHI nodes in this successor, drop entries for BB now.
       BasicBlock *DeadBB = BlocksToErase[i];
-      while (!DeadBB->use_empty()) {
-        Instruction *I = cast<Instruction>(DeadBB->use_back());
+      for (Value::use_iterator UI = DeadBB->use_begin(), UE = DeadBB->use_end();
+           UI != UE; ) {
+        // Grab the user and then increment the iterator early, as the user
+        // will be deleted. Step past all adjacent uses from the same user.
+        Instruction *I = dyn_cast<Instruction>(*UI);
+        do { ++UI; } while (UI != UE && *UI == I);
+
+        // Ignore blockaddress users; BasicBlock's dtor will handle them.
+        if (!I) continue;
+
         bool Folded = ConstantFoldTerminator(I->getParent());
         if (!Folded) {
           // The constant folder may not have been able to fold the terminator
@@ -1844,16 +1921,21 @@ bool IPSCCP::runOnModule(Module &M) {
   // TODO: Process multiple value ret instructions also.
   const DenseMap<Function*, LatticeVal> &RV = Solver.getTrackedRetVals();
   for (DenseMap<Function*, LatticeVal>::const_iterator I = RV.begin(),
-         E = RV.end(); I != E; ++I)
-    if (!I->second.isOverdefined() &&
-        I->first->getReturnType() != Type::getVoidTy(M.getContext())) {
-      Function *F = I->first;
-      for (Function::iterator BB = F->begin(), E = F->end(); BB != E; ++BB)
-        if (ReturnInst *RI = dyn_cast<ReturnInst>(BB->getTerminator()))
-          if (!isa<UndefValue>(RI->getOperand(0)))
-            RI->setOperand(0, UndefValue::get(F->getReturnType()));
-    }
-
+       E = RV.end(); I != E; ++I) {
+    Function *F = I->first;
+    if (I->second.isOverdefined() || F->getReturnType()->isVoidTy())
+      continue;
+  
+    // We can only do this if we know that nothing else can call the function.
+    if (!F->hasLocalLinkage() || AddressIsTaken(F))
+      continue;
+    
+    for (Function::iterator BB = F->begin(), E = F->end(); BB != E; ++BB)
+      if (ReturnInst *RI = dyn_cast<ReturnInst>(BB->getTerminator()))
+        if (!isa<UndefValue>(RI->getOperand(0)))
+          RI->setOperand(0, UndefValue::get(F->getReturnType()));
+  }
+    
   // If we inferred constant or undef values for global variables, we can delete
   // the global and any stores that remain to it.
   const DenseMap<GlobalVariable*, LatticeVal> &TG = Solver.getTrackedGlobals();
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp
new file mode 100644
index 0000000..001267a
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp
@@ -0,0 +1,717 @@
+//===- SCCVN.cpp - Eliminate redundant values -----------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This pass performs global value numbering to eliminate fully redundant
+// instructions.  This is based on the paper "SCC-based Value Numbering"
+// by Cooper, et al.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "sccvn"
+#include "llvm/Transforms/Scalar.h"
+#include "llvm/BasicBlock.h"
+#include "llvm/Constants.h"
+#include "llvm/DerivedTypes.h"
+#include "llvm/Function.h"
+#include "llvm/LLVMContext.h"
+#include "llvm/Operator.h"
+#include "llvm/Value.h"
+#include "llvm/ADT/DenseMap.h"
+#include "llvm/ADT/DepthFirstIterator.h"
+#include "llvm/ADT/PostOrderIterator.h"
+#include "llvm/ADT/SmallPtrSet.h"
+#include "llvm/ADT/SmallVector.h"
+#include "llvm/ADT/SparseBitVector.h"
+#include "llvm/ADT/Statistic.h"
+#include "llvm/Analysis/Dominators.h"
+#include "llvm/Support/CFG.h"
+#include "llvm/Support/CommandLine.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/ErrorHandling.h"
+#include "llvm/Transforms/Utils/SSAUpdater.h"
+#include <cstdio>
+using namespace llvm;
+
+STATISTIC(NumSCCVNInstr,  "Number of instructions deleted by SCCVN");
+STATISTIC(NumSCCVNPhi,  "Number of phis deleted by SCCVN");
+
+//===----------------------------------------------------------------------===//
+//                         ValueTable Class
+//===----------------------------------------------------------------------===//
+
+/// This class holds the mapping between values and value numbers.  It is used
+/// as an efficient mechanism to determine the expression-wise equivalence of
+/// two values.
+namespace {
+  struct Expression {
+    enum ExpressionOpcode { ADD, FADD, SUB, FSUB, MUL, FMUL,
+                            UDIV, SDIV, FDIV, UREM, SREM,
+                            FREM, SHL, LSHR, ASHR, AND, OR, XOR, ICMPEQ,
+                            ICMPNE, ICMPUGT, ICMPUGE, ICMPULT, ICMPULE,
+                            ICMPSGT, ICMPSGE, ICMPSLT, ICMPSLE, FCMPOEQ,
+                            FCMPOGT, FCMPOGE, FCMPOLT, FCMPOLE, FCMPONE,
+                            FCMPORD, FCMPUNO, FCMPUEQ, FCMPUGT, FCMPUGE,
+                            FCMPULT, FCMPULE, FCMPUNE, EXTRACT, INSERT,
+                            SHUFFLE, SELECT, TRUNC, ZEXT, SEXT, FPTOUI,
+                            FPTOSI, UITOFP, SITOFP, FPTRUNC, FPEXT,
+                            PTRTOINT, INTTOPTR, BITCAST, GEP, CALL, CONSTANT,
+                            INSERTVALUE, EXTRACTVALUE, EMPTY, TOMBSTONE };
+
+    ExpressionOpcode opcode;
+    const Type* type;
+    SmallVector<uint32_t, 4> varargs;
+
+    Expression() { }
+    Expression(ExpressionOpcode o) : opcode(o) { }
+
+    bool operator==(const Expression &other) const {
+      if (opcode != other.opcode)
+        return false;
+      else if (opcode == EMPTY || opcode == TOMBSTONE)
+        return true;
+      else if (type != other.type)
+        return false;
+      else {
+        if (varargs.size() != other.varargs.size())
+          return false;
+
+        for (size_t i = 0; i < varargs.size(); ++i)
+          if (varargs[i] != other.varargs[i])
+            return false;
+
+        return true;
+      }
+    }
+
+    bool operator!=(const Expression &other) const {
+      return !(*this == other);
+    }
+  };
+
+  class ValueTable {
+    private:
+      DenseMap<Value*, uint32_t> valueNumbering;
+      DenseMap<Expression, uint32_t> expressionNumbering;
+      DenseMap<Value*, uint32_t> constantsNumbering;
+
+      uint32_t nextValueNumber;
+
+      Expression::ExpressionOpcode getOpcode(BinaryOperator* BO);
+      Expression::ExpressionOpcode getOpcode(CmpInst* C);
+      Expression::ExpressionOpcode getOpcode(CastInst* C);
+      Expression create_expression(BinaryOperator* BO);
+      Expression create_expression(CmpInst* C);
+      Expression create_expression(ShuffleVectorInst* V);
+      Expression create_expression(ExtractElementInst* C);
+      Expression create_expression(InsertElementInst* V);
+      Expression create_expression(SelectInst* V);
+      Expression create_expression(CastInst* C);
+      Expression create_expression(GetElementPtrInst* G);
+      Expression create_expression(CallInst* C);
+      Expression create_expression(Constant* C);
+      Expression create_expression(ExtractValueInst* C);
+      Expression create_expression(InsertValueInst* C);
+    public:
+      ValueTable() : nextValueNumber(1) { }
+      uint32_t computeNumber(Value *V);
+      uint32_t lookup(Value *V);
+      void add(Value *V, uint32_t num);
+      void clear();
+      void clearExpressions();
+      void erase(Value *v);
+      unsigned size();
+      void verifyRemoved(const Value *) const;
+  };
+}
+
+namespace llvm {
+template <> struct DenseMapInfo<Expression> {
+  static inline Expression getEmptyKey() {
+    return Expression(Expression::EMPTY);
+  }
+
+  static inline Expression getTombstoneKey() {
+    return Expression(Expression::TOMBSTONE);
+  }
+
+  static unsigned getHashValue(const Expression e) {
+    unsigned hash = e.opcode;
+
+    hash = ((unsigned)((uintptr_t)e.type >> 4) ^
+            (unsigned)((uintptr_t)e.type >> 9)) +
+           hash * 37;
+
+    for (SmallVector<uint32_t, 4>::const_iterator I = e.varargs.begin(),
+         E = e.varargs.end(); I != E; ++I)
+      hash = *I + hash * 37;
+
+    return hash;
+  }
+  static bool isEqual(const Expression &LHS, const Expression &RHS) {
+    return LHS == RHS;
+  }
+  static bool isPod() { return true; }
+};
+}
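
The DenseMapInfo<Expression> specialization above supplies the three things DenseMap needs from a custom key type: reserved empty and tombstone keys, a hash, and equality. The same shape expressed with the standard library needs only the hash and the equality, since std::unordered_map manages its own buckets; the sketch below folds the opcode into the running "x + hash * 37" combine.

    // Analogous custom-key hashing with the standard library: an
    // Expression-like key with opcode/type/operand-number fields.
    // Standalone illustration only.
    #include <cassert>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct Expr {
      uint32_t opcode;
      uint32_t type;                  // stand-in for the Type* identity
      std::vector<uint32_t> varargs;  // operand value numbers

      bool operator==(const Expr &o) const {
        return opcode == o.opcode && type == o.type && varargs == o.varargs;
      }
    };

    struct ExprHash {
      size_t operator()(const Expr &e) const {
        size_t h = e.opcode;
        h = e.type + h * 37;          // combine, keeping the opcode's bits
        for (uint32_t v : e.varargs)
          h = v + h * 37;
        return h;
      }
    };

    int main() {
      std::unordered_map<Expr, uint32_t, ExprHash> numbering;
      Expr add1{/*opcode=*/1, /*type=*/7, {10, 11}};
      Expr add2{1, 7, {10, 11}};      // structurally identical expression
      numbering[add1] = 42;
      assert(numbering[add2] == 42);  // same expression, same value number
    }
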
+
+//===----------------------------------------------------------------------===//
+//                     ValueTable Internal Functions
+//===----------------------------------------------------------------------===//
+Expression::ExpressionOpcode ValueTable::getOpcode(BinaryOperator* BO) {
+  switch(BO->getOpcode()) {
+  default: // THIS SHOULD NEVER HAPPEN
+    llvm_unreachable("Binary operator with unknown opcode?");
+  case Instruction::Add:  return Expression::ADD;
+  case Instruction::FAdd: return Expression::FADD;
+  case Instruction::Sub:  return Expression::SUB;
+  case Instruction::FSub: return Expression::FSUB;
+  case Instruction::Mul:  return Expression::MUL;
+  case Instruction::FMul: return Expression::FMUL;
+  case Instruction::UDiv: return Expression::UDIV;
+  case Instruction::SDiv: return Expression::SDIV;
+  case Instruction::FDiv: return Expression::FDIV;
+  case Instruction::URem: return Expression::UREM;
+  case Instruction::SRem: return Expression::SREM;
+  case Instruction::FRem: return Expression::FREM;
+  case Instruction::Shl:  return Expression::SHL;
+  case Instruction::LShr: return Expression::LSHR;
+  case Instruction::AShr: return Expression::ASHR;
+  case Instruction::And:  return Expression::AND;
+  case Instruction::Or:   return Expression::OR;
+  case Instruction::Xor:  return Expression::XOR;
+  }
+}
+
+Expression::ExpressionOpcode ValueTable::getOpcode(CmpInst* C) {
+  if (isa<ICmpInst>(C)) {
+    switch (C->getPredicate()) {
+    default:  // THIS SHOULD NEVER HAPPEN
+      llvm_unreachable("Comparison with unknown predicate?");
+    case ICmpInst::ICMP_EQ:  return Expression::ICMPEQ;
+    case ICmpInst::ICMP_NE:  return Expression::ICMPNE;
+    case ICmpInst::ICMP_UGT: return Expression::ICMPUGT;
+    case ICmpInst::ICMP_UGE: return Expression::ICMPUGE;
+    case ICmpInst::ICMP_ULT: return Expression::ICMPULT;
+    case ICmpInst::ICMP_ULE: return Expression::ICMPULE;
+    case ICmpInst::ICMP_SGT: return Expression::ICMPSGT;
+    case ICmpInst::ICMP_SGE: return Expression::ICMPSGE;
+    case ICmpInst::ICMP_SLT: return Expression::ICMPSLT;
+    case ICmpInst::ICMP_SLE: return Expression::ICMPSLE;
+    }
+  } else {
+    switch (C->getPredicate()) {
+    default: // THIS SHOULD NEVER HAPPEN
+      llvm_unreachable("Comparison with unknown predicate?");
+    case FCmpInst::FCMP_OEQ: return Expression::FCMPOEQ;
+    case FCmpInst::FCMP_OGT: return Expression::FCMPOGT;
+    case FCmpInst::FCMP_OGE: return Expression::FCMPOGE;
+    case FCmpInst::FCMP_OLT: return Expression::FCMPOLT;
+    case FCmpInst::FCMP_OLE: return Expression::FCMPOLE;
+    case FCmpInst::FCMP_ONE: return Expression::FCMPONE;
+    case FCmpInst::FCMP_ORD: return Expression::FCMPORD;
+    case FCmpInst::FCMP_UNO: return Expression::FCMPUNO;
+    case FCmpInst::FCMP_UEQ: return Expression::FCMPUEQ;
+    case FCmpInst::FCMP_UGT: return Expression::FCMPUGT;
+    case FCmpInst::FCMP_UGE: return Expression::FCMPUGE;
+    case FCmpInst::FCMP_ULT: return Expression::FCMPULT;
+    case FCmpInst::FCMP_ULE: return Expression::FCMPULE;
+    case FCmpInst::FCMP_UNE: return Expression::FCMPUNE;
+    }
+  }
+}
+
+Expression::ExpressionOpcode ValueTable::getOpcode(CastInst* C) {
+  switch(C->getOpcode()) {
+  default: // THIS SHOULD NEVER HAPPEN
+    llvm_unreachable("Cast operator with unknown opcode?");
+  case Instruction::Trunc:    return Expression::TRUNC;
+  case Instruction::ZExt:     return Expression::ZEXT;
+  case Instruction::SExt:     return Expression::SEXT;
+  case Instruction::FPToUI:   return Expression::FPTOUI;
+  case Instruction::FPToSI:   return Expression::FPTOSI;
+  case Instruction::UIToFP:   return Expression::UITOFP;
+  case Instruction::SIToFP:   return Expression::SITOFP;
+  case Instruction::FPTrunc:  return Expression::FPTRUNC;
+  case Instruction::FPExt:    return Expression::FPEXT;
+  case Instruction::PtrToInt: return Expression::PTRTOINT;
+  case Instruction::IntToPtr: return Expression::INTTOPTR;
+  case Instruction::BitCast:  return Expression::BITCAST;
+  }
+}
+
+Expression ValueTable::create_expression(CallInst* C) {
+  Expression e;
+
+  e.type = C->getType();
+  e.opcode = Expression::CALL;
+
+  e.varargs.push_back(lookup(C->getCalledFunction()));
+  for (CallInst::op_iterator I = C->op_begin()+1, E = C->op_end();
+       I != E; ++I)
+    e.varargs.push_back(lookup(*I));
+
+  return e;
+}
+
+Expression ValueTable::create_expression(BinaryOperator* BO) {
+  Expression e;
+  e.varargs.push_back(lookup(BO->getOperand(0)));
+  e.varargs.push_back(lookup(BO->getOperand(1)));
+  e.type = BO->getType();
+  e.opcode = getOpcode(BO);
+
+  return e;
+}
+
+Expression ValueTable::create_expression(CmpInst* C) {
+  Expression e;
+
+  e.varargs.push_back(lookup(C->getOperand(0)));
+  e.varargs.push_back(lookup(C->getOperand(1)));
+  e.type = C->getType();
+  e.opcode = getOpcode(C);
+
+  return e;
+}
+
+Expression ValueTable::create_expression(CastInst* C) {
+  Expression e;
+
+  e.varargs.push_back(lookup(C->getOperand(0)));
+  e.type = C->getType();
+  e.opcode = getOpcode(C);
+
+  return e;
+}
+
+Expression ValueTable::create_expression(ShuffleVectorInst* S) {
+  Expression e;
+
+  e.varargs.push_back(lookup(S->getOperand(0)));
+  e.varargs.push_back(lookup(S->getOperand(1)));
+  e.varargs.push_back(lookup(S->getOperand(2)));
+  e.type = S->getType();
+  e.opcode = Expression::SHUFFLE;
+
+  return e;
+}
+
+Expression ValueTable::create_expression(ExtractElementInst* E) {
+  Expression e;
+
+  e.varargs.push_back(lookup(E->getOperand(0)));
+  e.varargs.push_back(lookup(E->getOperand(1)));
+  e.type = E->getType();
+  e.opcode = Expression::EXTRACT;
+
+  return e;
+}
+
+Expression ValueTable::create_expression(InsertElementInst* I) {
+  Expression e;
+
+  e.varargs.push_back(lookup(I->getOperand(0)));
+  e.varargs.push_back(lookup(I->getOperand(1)));
+  e.varargs.push_back(lookup(I->getOperand(2)));
+  e.type = I->getType();
+  e.opcode = Expression::INSERT;
+
+  return e;
+}
+
+Expression ValueTable::create_expression(SelectInst* I) {
+  Expression e;
+
+  e.varargs.push_back(lookup(I->getCondition()));
+  e.varargs.push_back(lookup(I->getTrueValue()));
+  e.varargs.push_back(lookup(I->getFalseValue()));
+  e.type = I->getType();
+  e.opcode = Expression::SELECT;
+
+  return e;
+}
+
+Expression ValueTable::create_expression(GetElementPtrInst* G) {
+  Expression e;
+
+  e.varargs.push_back(lookup(G->getPointerOperand()));
+  e.type = G->getType();
+  e.opcode = Expression::GEP;
+
+  for (GetElementPtrInst::op_iterator I = G->idx_begin(), E = G->idx_end();
+       I != E; ++I)
+    e.varargs.push_back(lookup(*I));
+
+  return e;
+}
+
+Expression ValueTable::create_expression(ExtractValueInst* E) {
+  Expression e;
+
+  e.varargs.push_back(lookup(E->getAggregateOperand()));
+  for (ExtractValueInst::idx_iterator II = E->idx_begin(), IE = E->idx_end();
+       II != IE; ++II)
+    e.varargs.push_back(*II);
+  e.type = E->getType();
+  e.opcode = Expression::EXTRACTVALUE;
+
+  return e;
+}
+
+Expression ValueTable::create_expression(InsertValueInst* E) {
+  Expression e;
+
+  e.varargs.push_back(lookup(E->getAggregateOperand()));
+  e.varargs.push_back(lookup(E->getInsertedValueOperand()));
+  for (InsertValueInst::idx_iterator II = E->idx_begin(), IE = E->idx_end();
+       II != IE; ++II)
+    e.varargs.push_back(*II);
+  e.type = E->getType();
+  e.opcode = Expression::INSERTVALUE;
+
+  return e;
+}
+
+//===----------------------------------------------------------------------===//
+//                     ValueTable External Functions
+//===----------------------------------------------------------------------===//
+
+/// add - Insert a value into the table with a specified value number.
+void ValueTable::add(Value *V, uint32_t num) {
+  valueNumbering[V] = num;
+}
+
+/// computeNumber - Returns the value number for the specified value, assigning
+/// it a new number if it did not have one before.
+uint32_t ValueTable::computeNumber(Value *V) {
+  if (uint32_t v = valueNumbering[V])
+    return v;
+  else if (uint32_t v = constantsNumbering[V])
+    return v;
+
+  if (!isa<Instruction>(V)) {
+    constantsNumbering[V] = nextValueNumber;
+    return nextValueNumber++;
+  }
+  
+  Instruction* I = cast<Instruction>(V);
+  Expression exp;
+  switch (I->getOpcode()) {
+    case Instruction::Add:
+    case Instruction::FAdd:
+    case Instruction::Sub:
+    case Instruction::FSub:
+    case Instruction::Mul:
+    case Instruction::FMul:
+    case Instruction::UDiv:
+    case Instruction::SDiv:
+    case Instruction::FDiv:
+    case Instruction::URem:
+    case Instruction::SRem:
+    case Instruction::FRem:
+    case Instruction::Shl:
+    case Instruction::LShr:
+    case Instruction::AShr:
+    case Instruction::And:
+    case Instruction::Or :
+    case Instruction::Xor:
+      exp = create_expression(cast<BinaryOperator>(I));
+      break;
+    case Instruction::ICmp:
+    case Instruction::FCmp:
+      exp = create_expression(cast<CmpInst>(I));
+      break;
+    case Instruction::Trunc:
+    case Instruction::ZExt:
+    case Instruction::SExt:
+    case Instruction::FPToUI:
+    case Instruction::FPToSI:
+    case Instruction::UIToFP:
+    case Instruction::SIToFP:
+    case Instruction::FPTrunc:
+    case Instruction::FPExt:
+    case Instruction::PtrToInt:
+    case Instruction::IntToPtr:
+    case Instruction::BitCast:
+      exp = create_expression(cast<CastInst>(I));
+      break;
+    case Instruction::Select:
+      exp = create_expression(cast<SelectInst>(I));
+      break;
+    case Instruction::ExtractElement:
+      exp = create_expression(cast<ExtractElementInst>(I));
+      break;
+    case Instruction::InsertElement:
+      exp = create_expression(cast<InsertElementInst>(I));
+      break;
+    case Instruction::ShuffleVector:
+      exp = create_expression(cast<ShuffleVectorInst>(I));
+      break;
+    case Instruction::ExtractValue:
+      exp = create_expression(cast<ExtractValueInst>(I));
+      break;
+    case Instruction::InsertValue:
+      exp = create_expression(cast<InsertValueInst>(I));
+      break;      
+    case Instruction::GetElementPtr:
+      exp = create_expression(cast<GetElementPtrInst>(I));
+      break;
+    default:
+      valueNumbering[V] = nextValueNumber;
+      return nextValueNumber++;
+  }
+
+  uint32_t& e = expressionNumbering[exp];
+  if (!e) e = nextValueNumber++;
+  valueNumbering[V] = e;
+  
+  return e;
+}
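
computeNumber canonicalizes an instruction into an Expression and hands out one number per distinct expression, so structurally identical computations share a value number. A standalone toy of the expressionNumbering idea, with a tuple as the expression key; the names are illustrative, not the ValueTable API.

    // Toy value numbering: two computations with the same opcode and the
    // same operand numbers receive the same value number.
    #include <cassert>
    #include <cstdint>
    #include <map>
    #include <tuple>

    using Key = std::tuple<char, uint32_t, uint32_t>;  // (opcode, lhsVN, rhsVN)

    std::map<Key, uint32_t> expressionNumbering;
    uint32_t nextValueNumber = 1;

    uint32_t numberOf(char op, uint32_t lhs, uint32_t rhs) {
      uint32_t &n = expressionNumbering[std::make_tuple(op, lhs, rhs)];
      if (!n) n = nextValueNumber++;    // first time we see this expression
      return n;
    }

    int main() {
      uint32_t a = nextValueNumber++;   // pretend 'a' and 'b' are leaves
      uint32_t b = nextValueNumber++;
      uint32_t t1 = numberOf('+', a, b);
      uint32_t t2 = numberOf('+', a, b);   // redundant: same expression
      uint32_t t3 = numberOf('+', a, t1);  // different operands, new number
      assert(t1 == t2 && t1 != t3);
    }
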
+
+/// lookup - Returns the value number of the specified value. Returns 0 if
+/// the value has not yet been numbered.
+uint32_t ValueTable::lookup(Value *V) {
+  if (!isa<Instruction>(V)) {
+    if (!constantsNumbering.count(V))
+      constantsNumbering[V] = nextValueNumber++;
+    return constantsNumbering[V];
+  }
+  
+  return valueNumbering[V];
+}
+
+/// clear - Remove all entries from the ValueTable
+void ValueTable::clear() {
+  valueNumbering.clear();
+  expressionNumbering.clear();
+  constantsNumbering.clear();
+  nextValueNumber = 1;
+}
+
+void ValueTable::clearExpressions() {
+  expressionNumbering.clear();
+  constantsNumbering.clear();
+  nextValueNumber = 1;
+}
+
+/// erase - Remove a value from the value numbering
+void ValueTable::erase(Value *V) {
+  valueNumbering.erase(V);
+}
+
+/// verifyRemoved - Verify that the value is removed from all internal data
+/// structures.
+void ValueTable::verifyRemoved(const Value *V) const {
+  for (DenseMap<Value*, uint32_t>::const_iterator
+         I = valueNumbering.begin(), E = valueNumbering.end(); I != E; ++I) {
+    assert(I->first != V && "Inst still occurs in value numbering map!");
+  }
+}
+
+//===----------------------------------------------------------------------===//
+//                              SCCVN Pass
+//===----------------------------------------------------------------------===//
+
+namespace {
+
+  struct ValueNumberScope {
+    ValueNumberScope* parent;
+    DenseMap<uint32_t, Value*> table;
+    SparseBitVector<128> availIn;
+    SparseBitVector<128> availOut;
+    
+    ValueNumberScope(ValueNumberScope* p) : parent(p) { }
+  };
+
+  class SCCVN : public FunctionPass {
+    bool runOnFunction(Function &F);
+  public:
+    static char ID; // Pass identification, replacement for typeid
+    SCCVN() : FunctionPass(&ID) { }
+
+  private:
+    ValueTable VT;
+    DenseMap<BasicBlock*, ValueNumberScope*> BBMap;
+    
+    // This transformation requires dominator info
+    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
+      AU.addRequired<DominatorTree>();
+
+      AU.addPreserved<DominatorTree>();
+      AU.setPreservesCFG();
+    }
+  };
+
+  char SCCVN::ID = 0;
+}
+
+// createSCCVNPass - The public interface to this file...
+FunctionPass *llvm::createSCCVNPass() { return new SCCVN(); }
+
+static RegisterPass<SCCVN> X("sccvn",
+                              "SCC Value Numbering");
+
+static Value *lookupNumber(ValueNumberScope *Locals, uint32_t num) {
+  while (Locals) {
+    DenseMap<uint32_t, Value*>::iterator I = Locals->table.find(num);
+    if (I != Locals->table.end())
+      return I->second;
+    Locals = Locals->parent;
+  }
+
+  return 0;
+}
+
+bool SCCVN::runOnFunction(Function& F) {
+  // Implement the RPO version of the SCCVN algorithm.  Conceptually,
+  // we optimistically assume that all instructions with the same opcode have
+  // the same VN.  Then we deepen the comparison by one level, so that only
+  // instructions whose operands have the same VNs keep the same VN.  We
+  // iterate this process until the partitioning stops changing, at which
+  // point we have computed a full numbering.
+  ReversePostOrderTraversal<Function*> RPOT(&F);
+  bool done = false;
+  while (!done) {
+    done = true;
+    VT.clearExpressions();
+    for (ReversePostOrderTraversal<Function*>::rpo_iterator I = RPOT.begin(),
+         E = RPOT.end(); I != E; ++I) {
+      BasicBlock* BB = *I;
+      for (BasicBlock::iterator BI = BB->begin(), BE = BB->end();
+           BI != BE; ++BI) {
+         uint32_t origVN = VT.lookup(BI);
+         uint32_t newVN = VT.computeNumber(BI);
+         if (origVN != newVN)
+           done = false;
+      }
+    }
+  }
+  
+  // Now, do a dominator walk, eliminating simple, dominated redundancies as we
+  // go.  Also, build the ValueNumberScope structure that will be used for
+  // computing full availability.
+  DominatorTree& DT = getAnalysis<DominatorTree>();
+  bool changed = false;
+  for (df_iterator<DomTreeNode*> DI = df_begin(DT.getRootNode()),
+       DE = df_end(DT.getRootNode()); DI != DE; ++DI) {
+    BasicBlock* BB = DI->getBlock();
+    if (DI->getIDom())
+      BBMap[BB] = new ValueNumberScope(BBMap[DI->getIDom()->getBlock()]);
+    else
+      BBMap[BB] = new ValueNumberScope(0);
+    
+    for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ) {
+      uint32_t num = VT.lookup(I);
+      Value* repl = lookupNumber(BBMap[BB], num);
+      
+      if (repl) {
+        if (isa<PHINode>(I))
+          ++NumSCCVNPhi;
+        else
+          ++NumSCCVNInstr;
+        I->replaceAllUsesWith(repl);
+        Instruction* OldInst = I;
+        ++I;
+        BBMap[BB]->table[num] = repl;
+        OldInst->eraseFromParent();
+        changed = true;
+      } else {
+        BBMap[BB]->table[num] = I;
+        BBMap[BB]->availOut.set(num);
+  
+        ++I;
+      }
+    }
+  }
+
+  // Perform a forward data-flow to compute availability at all points on
+  // the CFG.
+  do {
+    changed = false;
+    for (ReversePostOrderTraversal<Function*>::rpo_iterator I = RPOT.begin(),
+         E = RPOT.end(); I != E; ++I) {
+      BasicBlock* BB = *I;
+      ValueNumberScope *VNS = BBMap[BB];
+      
+      SparseBitVector<128> preds;
+      bool first = true;
+      for (pred_iterator PI = pred_begin(BB), PE = pred_end(BB);
+           PI != PE; ++PI) {
+        if (first) {
+          preds = BBMap[*PI]->availOut;
+          first = false;
+        } else {
+          preds &= BBMap[*PI]->availOut;
+        }
+      }
+      
+      changed |= (VNS->availIn |= preds);
+      changed |= (VNS->availOut |= preds);
+    }
+  } while (changed);
+  
+  // Use full availability information to perform non-dominated replacements.
+  SSAUpdater SSU; 
+  for (Function::iterator FI = F.begin(), FE = F.end(); FI != FE; ++FI) {
+    if (!BBMap.count(FI)) continue;
+    for (BasicBlock::iterator BI = FI->begin(), BE = FI->end();
+         BI != BE; ) {
+      uint32_t num = VT.lookup(BI);
+      if (!BBMap[FI]->availIn.test(num)) {
+        ++BI;
+        continue;
+      }
+      
+      SSU.Initialize(BI);
+      
+      SmallPtrSet<BasicBlock*, 8> visited;
+      SmallVector<BasicBlock*, 8> stack;
+      visited.insert(FI);
+      for (pred_iterator PI = pred_begin(FI), PE = pred_end(FI);
+           PI != PE; ++PI)
+        if (!visited.count(*PI))
+          stack.push_back(*PI);
+      
+      while (!stack.empty()) {
+        BasicBlock* CurrBB = stack.back();
+        stack.pop_back();
+        visited.insert(CurrBB);
+        
+        ValueNumberScope* S = BBMap[CurrBB];
+        if (S->table.count(num)) {
+          SSU.AddAvailableValue(CurrBB, S->table[num]);
+        } else {
+          for (pred_iterator PI = pred_begin(CurrBB), PE = pred_end(CurrBB);
+               PI != PE; ++PI)
+            if (!visited.count(*PI))
+              stack.push_back(*PI);
+        }
+      }
+      
+      Value* repl = SSU.GetValueInMiddleOfBlock(FI);
+      BI->replaceAllUsesWith(repl);
+      Instruction* CurInst = BI;
+      ++BI;
+      BBMap[FI]->table[num] = repl;
+      if (isa<PHINode>(CurInst))
+        ++NumSCCVNPhi;
+      else
+        ++NumSCCVNInstr;
+        
+      CurInst->eraseFromParent();
+    }
+  }
+
+  VT.clear();
+  for (DenseMap<BasicBlock*, ValueNumberScope*>::iterator
+       I = BBMap.begin(), E = BBMap.end(); I != E; ++I)
+    delete I->second;
+  BBMap.clear();
+  
+  return changed;
+}
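
Two loops above do the heavy lifting: the optimistic RPO iteration renumbers
until the partition stabilizes, and the forward data-flow intersects the
predecessors' availOut sets into each block's availIn. A host-side sketch of
that meet step, with std::bitset standing in for SparseBitVector and the
return value playing the role of the boolean result of |= (a sketch under
those assumptions, not the pass's code):

    #include <bitset>
    #include <vector>

    // availIn(B) is the intersection of availOut(P) over all predecessors P;
    // both sets only grow, so iterating in RPO reaches a fixed point quickly.
    static bool meetOverPreds(std::bitset<128> &AvailIn,
                              std::bitset<128> &AvailOut,
                              const std::vector<std::bitset<128> > &PredsOut) {
      std::bitset<128> Meet;
      bool First = true;
      for (size_t i = 0; i != PredsOut.size(); ++i) {
        if (First) { Meet = PredsOut[i]; First = false; }
        else       { Meet &= PredsOut[i]; }
      }
      const std::bitset<128> OldIn = AvailIn, OldOut = AvailOut;
      AvailIn  |= Meet;
      AvailOut |= Meet;
      return AvailIn != OldIn || AvailOut != OldOut;  // the "changed" flag
    }
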
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/Scalar.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/Scalar.cpp
index 5669da0..b54565c 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/Scalar.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/Scalar.cpp
@@ -26,10 +26,6 @@ void LLVMAddCFGSimplificationPass(LLVMPassManagerRef PM) {
   unwrap(PM)->add(createCFGSimplificationPass());
 }
 
-void LLVMAddCondPropagationPass(LLVMPassManagerRef PM) {
-  unwrap(PM)->add(createCondPropagationPass());
-}
-
 void LLVMAddDeadStoreEliminationPass(LLVMPassManagerRef PM) {
   unwrap(PM)->add(createDeadStoreEliminationPass());
 }
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
index 6d06959..047d279 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
@@ -100,32 +100,32 @@ namespace {
 
     void MarkUnsafe(AllocaInfo &I) { I.isUnsafe = true; }
 
-    int isSafeAllocaToScalarRepl(AllocationInst *AI);
+    int isSafeAllocaToScalarRepl(AllocaInst *AI);
 
-    void isSafeUseOfAllocation(Instruction *User, AllocationInst *AI,
+    void isSafeUseOfAllocation(Instruction *User, AllocaInst *AI,
                                AllocaInfo &Info);
-    void isSafeElementUse(Value *Ptr, bool isFirstElt, AllocationInst *AI,
+    void isSafeElementUse(Value *Ptr, bool isFirstElt, AllocaInst *AI,
                          AllocaInfo &Info);
-    void isSafeMemIntrinsicOnAllocation(MemIntrinsic *MI, AllocationInst *AI,
+    void isSafeMemIntrinsicOnAllocation(MemIntrinsic *MI, AllocaInst *AI,
                                         unsigned OpNo, AllocaInfo &Info);
-    void isSafeUseOfBitCastedAllocation(BitCastInst *User, AllocationInst *AI,
+    void isSafeUseOfBitCastedAllocation(BitCastInst *User, AllocaInst *AI,
                                         AllocaInfo &Info);
     
-    void DoScalarReplacement(AllocationInst *AI, 
-                             std::vector<AllocationInst*> &WorkList);
+    void DoScalarReplacement(AllocaInst *AI, 
+                             std::vector<AllocaInst*> &WorkList);
     void CleanupGEP(GetElementPtrInst *GEP);
-    void CleanupAllocaUsers(AllocationInst *AI);
-    AllocaInst *AddNewAlloca(Function &F, const Type *Ty, AllocationInst *Base);
+    void CleanupAllocaUsers(AllocaInst *AI);
+    AllocaInst *AddNewAlloca(Function &F, const Type *Ty, AllocaInst *Base);
     
-    void RewriteBitCastUserOfAlloca(Instruction *BCInst, AllocationInst *AI,
+    void RewriteBitCastUserOfAlloca(Instruction *BCInst, AllocaInst *AI,
                                     SmallVector<AllocaInst*, 32> &NewElts);
     
     void RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
-                                      AllocationInst *AI,
+                                      AllocaInst *AI,
                                       SmallVector<AllocaInst*, 32> &NewElts);
-    void RewriteStoreUserOfWholeAlloca(StoreInst *SI, AllocationInst *AI,
+    void RewriteStoreUserOfWholeAlloca(StoreInst *SI, AllocaInst *AI,
                                        SmallVector<AllocaInst*, 32> &NewElts);
-    void RewriteLoadUserOfWholeAlloca(LoadInst *LI, AllocationInst *AI,
+    void RewriteLoadUserOfWholeAlloca(LoadInst *LI, AllocaInst *AI,
                                       SmallVector<AllocaInst*, 32> &NewElts);
     
     bool CanConvertToScalar(Value *V, bool &IsNotTrivial, const Type *&VecTy,
@@ -135,7 +135,7 @@ namespace {
                                      uint64_t Offset, IRBuilder<> &Builder);
     Value *ConvertScalar_InsertValue(Value *StoredVal, Value *ExistingVal,
                                      uint64_t Offset, IRBuilder<> &Builder);
-    static Instruction *isOnlyCopiedFromConstantGlobal(AllocationInst *AI);
+    static Instruction *isOnlyCopiedFromConstantGlobal(AllocaInst *AI);
   };
 }
 
@@ -192,7 +192,7 @@ bool SROA::performPromotion(Function &F) {
 
     if (Allocas.empty()) break;
 
-    PromoteMemToReg(Allocas, DT, DF, F.getContext());
+    PromoteMemToReg(Allocas, DT, DF);
     NumPromoted += Allocas.size();
     Changed = true;
   }
@@ -213,18 +213,18 @@ static uint64_t getNumSAElements(const Type *T) {
 // them if they are only used by getelementptr instructions.
 //
 bool SROA::performScalarRepl(Function &F) {
-  std::vector<AllocationInst*> WorkList;
+  std::vector<AllocaInst*> WorkList;
 
   // Scan the entry basic block, adding any allocas to the worklist.
   BasicBlock &BB = F.getEntryBlock();
   for (BasicBlock::iterator I = BB.begin(), E = BB.end(); I != E; ++I)
-    if (AllocationInst *A = dyn_cast<AllocationInst>(I))
+    if (AllocaInst *A = dyn_cast<AllocaInst>(I))
       WorkList.push_back(A);
 
   // Process the worklist
   bool Changed = false;
   while (!WorkList.empty()) {
-    AllocationInst *AI = WorkList.back();
+    AllocaInst *AI = WorkList.back();
     WorkList.pop_back();
     
     // Handle dead allocas trivially.  These can be formed by SROA'ing arrays
@@ -335,8 +335,8 @@ bool SROA::performScalarRepl(Function &F) {
 
 /// DoScalarReplacement - This alloca satisfied the isSafeAllocaToScalarRepl
 /// predicate, do SROA now.
-void SROA::DoScalarReplacement(AllocationInst *AI, 
-                               std::vector<AllocationInst*> &WorkList) {
+void SROA::DoScalarReplacement(AllocaInst *AI, 
+                               std::vector<AllocaInst*> &WorkList) {
   DEBUG(errs() << "Found inst to SROA: " << *AI << '\n');
   SmallVector<AllocaInst*, 32> ElementAllocas;
   if (const StructType *ST = dyn_cast<StructType>(AI->getAllocatedType())) {
@@ -455,7 +455,7 @@ void SROA::DoScalarReplacement(AllocationInst *AI,
 /// getelementptr instruction of an array aggregate allocation.  isFirstElt
 /// indicates whether Ptr is known to point to the start of the aggregate.
 ///
-void SROA::isSafeElementUse(Value *Ptr, bool isFirstElt, AllocationInst *AI,
+void SROA::isSafeElementUse(Value *Ptr, bool isFirstElt, AllocaInst *AI,
                             AllocaInfo &Info) {
   for (Value::use_iterator I = Ptr->use_begin(), E = Ptr->use_end();
        I != E; ++I) {
@@ -520,7 +520,7 @@ static bool AllUsersAreLoads(Value *Ptr) {
 /// isSafeUseOfAllocation - Check to see if this user is an allowed use for an
 /// aggregate allocation.
 ///
-void SROA::isSafeUseOfAllocation(Instruction *User, AllocationInst *AI,
+void SROA::isSafeUseOfAllocation(Instruction *User, AllocaInst *AI,
                                  AllocaInfo &Info) {
   if (BitCastInst *C = dyn_cast<BitCastInst>(User))
     return isSafeUseOfBitCastedAllocation(C, AI, Info);
@@ -605,7 +605,7 @@ void SROA::isSafeUseOfAllocation(Instruction *User, AllocationInst *AI,
 /// isSafeMemIntrinsicOnAllocation - Return true if the specified memory
 /// intrinsic can be promoted by SROA.  At this point, we know that the operand
 /// of the memintrinsic is a pointer to the beginning of the allocation.
-void SROA::isSafeMemIntrinsicOnAllocation(MemIntrinsic *MI, AllocationInst *AI,
+void SROA::isSafeMemIntrinsicOnAllocation(MemIntrinsic *MI, AllocaInst *AI,
                                           unsigned OpNo, AllocaInfo &Info) {
   // If not constant length, give up.
   ConstantInt *Length = dyn_cast<ConstantInt>(MI->getLength());
@@ -632,7 +632,7 @@ void SROA::isSafeMemIntrinsicOnAllocation(MemIntrinsic *MI, AllocationInst *AI,
 
 /// isSafeUseOfBitCastedAllocation - Check that all users of this bitcast
 /// are safe uses, recording any unsafe use in Info.
-void SROA::isSafeUseOfBitCastedAllocation(BitCastInst *BC, AllocationInst *AI,
+void SROA::isSafeUseOfBitCastedAllocation(BitCastInst *BC, AllocaInst *AI,
                                           AllocaInfo &Info) {
   for (Value::use_iterator UI = BC->use_begin(), E = BC->use_end();
        UI != E; ++UI) {
@@ -690,7 +690,7 @@ void SROA::isSafeUseOfBitCastedAllocation(BitCastInst *BC, AllocationInst *AI,
 /// RewriteBitCastUserOfAlloca - BCInst (transitively) bitcasts AI, or indexes
 /// to its first element.  Transform users of the cast to use the new values
 /// instead.
-void SROA::RewriteBitCastUserOfAlloca(Instruction *BCInst, AllocationInst *AI,
+void SROA::RewriteBitCastUserOfAlloca(Instruction *BCInst, AllocaInst *AI,
                                       SmallVector<AllocaInst*, 32> &NewElts) {
   Value::use_iterator UI = BCInst->use_begin(), UE = BCInst->use_end();
   while (UI != UE) {
@@ -729,7 +729,7 @@ void SROA::RewriteBitCastUserOfAlloca(Instruction *BCInst, AllocationInst *AI,
 /// RewriteMemIntrinUserOfAlloca - MI is a memcpy/memset/memmove from or to AI.
 /// Rewrite it to copy or set the elements of the scalarized memory.
 void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
-                                        AllocationInst *AI,
+                                        AllocaInst *AI,
                                         SmallVector<AllocaInst*, 32> &NewElts) {
   
   // If this is a memcpy/memmove, construct the other pointer as the
@@ -905,8 +905,7 @@ void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
 /// RewriteStoreUserOfWholeAlloca - We found a store of an integer that
 /// overwrites the entire allocation.  Extract out the pieces of the stored
 /// integer and store them individually.
-void SROA::RewriteStoreUserOfWholeAlloca(StoreInst *SI,
-                                         AllocationInst *AI,
+void SROA::RewriteStoreUserOfWholeAlloca(StoreInst *SI, AllocaInst *AI,
                                          SmallVector<AllocaInst*, 32> &NewElts){
   // Extract each element out of the integer according to its structure offset
   // and store the element value to the individual alloca.
@@ -1029,7 +1028,7 @@ void SROA::RewriteStoreUserOfWholeAlloca(StoreInst *SI,
 
 /// RewriteLoadUserOfWholeAlloca - We found a load of the entire allocation to
 /// an integer.  Load the individual pieces to form the aggregate value.
-void SROA::RewriteLoadUserOfWholeAlloca(LoadInst *LI, AllocationInst *AI,
+void SROA::RewriteLoadUserOfWholeAlloca(LoadInst *LI, AllocaInst *AI,
                                         SmallVector<AllocaInst*, 32> &NewElts) {
   // Extract each element out of the NewElts according to its structure offset
   // and form the result value.
@@ -1162,7 +1161,7 @@ static bool HasPadding(const Type *Ty, const TargetData &TD) {
 /// an aggregate can be broken down into elements.  Return 0 if not, 3 if safe,
 /// or 1 if safe after canonicalization has been performed.
 ///
-int SROA::isSafeAllocaToScalarRepl(AllocationInst *AI) {
+int SROA::isSafeAllocaToScalarRepl(AllocaInst *AI) {
   // Loop over the use list of the alloca.  We can only transform it if all of
   // the users are safe to transform.
   AllocaInfo Info;
@@ -1245,7 +1244,7 @@ void SROA::CleanupGEP(GetElementPtrInst *GEPI) {
 
 /// CleanupAllocaUsers - If SROA reported that it can promote the specified
 /// allocation, but only if cleaned up, perform the cleanups required.
-void SROA::CleanupAllocaUsers(AllocationInst *AI) {
+void SROA::CleanupAllocaUsers(AllocaInst *AI) {
   // At this point, we know that the end result will be SROA'd and promoted, so
   // we can insert ugly code if required so long as sroa+mem2reg will clean it
   // up.
@@ -1297,8 +1296,7 @@ static void MergeInType(const Type *In, uint64_t Offset, const Type *&VecTy,
           VecTy = VInTy;
         return;
       }
-    } else if (In == Type::getFloatTy(Context) ||
-               In == Type::getDoubleTy(Context) ||
+    } else if (In->isFloatTy() || In->isDoubleTy() ||
                (isa<IntegerType>(In) && In->getPrimitiveSizeInBits() >= 8 &&
                 isPowerOf2_32(In->getPrimitiveSizeInBits()))) {
       // If we're accessing something that could be an element of a vector, see
@@ -1854,7 +1852,7 @@ static bool isOnlyCopiedFromConstantGlobal(Value *V, Instruction *&TheCopy,
 /// isOnlyCopiedFromConstantGlobal - Return true if the specified alloca is only
 /// modified by a copy from a constant global.  If we can prove this, we can
 /// replace any uses of the alloca with uses of the global directly.
-Instruction *SROA::isOnlyCopiedFromConstantGlobal(AllocationInst *AI) {
+Instruction *SROA::isOnlyCopiedFromConstantGlobal(AllocaInst *AI) {
   Instruction *TheCopy = 0;
   if (::isOnlyCopiedFromConstantGlobal(AI, TheCopy, false))
     return TheCopy;
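
One detail of the SROA interface above that is easy to misread:
isSafeAllocaToScalarRepl returns a tri-state encoded as a bare int (0, 1, or
3). Named constants spell out the convention documented in its comment
(illustrative only; the pass itself uses the raw values):

    enum ScalarReplSafety {
      SR_Unsafe       = 0,  // cannot scalarize this alloca
      SR_NeedsCleanup = 1,  // safe once CleanupAllocaUsers canonicalizes uses
      SR_Safe         = 3   // safe to replace as-is
    };
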
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp
index 29712b3..e905952 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp
@@ -26,7 +26,6 @@
 #include "llvm/Transforms/Utils/Local.h"
 #include "llvm/Constants.h"
 #include "llvm/Instructions.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/Module.h"
 #include "llvm/Attributes.h"
 #include "llvm/Support/CFG.h"
@@ -57,7 +56,7 @@ FunctionPass *llvm::createCFGSimplificationPass() {
 
 /// ChangeToUnreachable - Insert an unreachable instruction before the specified
 /// instruction, making it and the rest of the code in the block dead.
-static void ChangeToUnreachable(Instruction *I, LLVMContext &Context) {
+static void ChangeToUnreachable(Instruction *I) {
   BasicBlock *BB = I->getParent();
   // Loop over all of the successors, removing BB's entry from any PHI
   // nodes.
@@ -95,8 +94,7 @@ static void ChangeToCall(InvokeInst *II) {
 }
 
 static bool MarkAliveBlocks(BasicBlock *BB,
-                            SmallPtrSet<BasicBlock*, 128> &Reachable,
-                            LLVMContext &Context) {
+                            SmallPtrSet<BasicBlock*, 128> &Reachable) {
   
   SmallVector<BasicBlock*, 128> Worklist;
   Worklist.push_back(BB);
@@ -119,20 +117,23 @@ static bool MarkAliveBlocks(BasicBlock *BB,
           // though.
           ++BBI;
           if (!isa<UnreachableInst>(BBI)) {
-            ChangeToUnreachable(BBI, Context);
+            ChangeToUnreachable(BBI);
             Changed = true;
           }
           break;
         }
       }
       
+      // Store to undef and store to null are undefined and used to signal that
+      // they should be changed to unreachable by passes that can't modify the
+      // CFG.
       if (StoreInst *SI = dyn_cast<StoreInst>(BBI)) {
         Value *Ptr = SI->getOperand(1);
         
         if (isa<UndefValue>(Ptr) ||
             (isa<ConstantPointerNull>(Ptr) &&
              SI->getPointerAddressSpace() == 0)) {
-          ChangeToUnreachable(SI, Context);
+          ChangeToUnreachable(SI);
           Changed = true;
           break;
         }
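
The new comment above describes a convention worth stating in isolation: a
store through undef, or through null in address space 0, is undefined and
marks the rest of its block dead. A minimal predicate with the same condition
as the code (a sketch, not part of the patch):

    #include "llvm/Constants.h"
    #include "llvm/Instructions.h"
    using namespace llvm;

    // A store through undef, or through null in address space 0, is undefined
    // behavior, so the block past it can be truncated to 'unreachable'.
    static bool isStoreToUndefOrNull(const StoreInst *SI) {
      const Value *Ptr = SI->getOperand(1);
      return isa<UndefValue>(Ptr) ||
             (isa<ConstantPointerNull>(Ptr) &&
              SI->getPointerAddressSpace() == 0);
    }
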
@@ -158,7 +159,7 @@ static bool MarkAliveBlocks(BasicBlock *BB,
 /// otherwise.
 static bool RemoveUnreachableBlocksFromFn(Function &F) {
   SmallPtrSet<BasicBlock*, 128> Reachable;
-  bool Changed = MarkAliveBlocks(F.begin(), Reachable, F.getContext());
+  bool Changed = MarkAliveBlocks(F.begin(), Reachable);
   
   // If there are unreachable blocks in the CFG...
   if (Reachable.size() == F.size())
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
index 57a7d05..f9b929c 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
@@ -57,9 +57,9 @@ public:
   /// performed.  If it returns CI, then it transformed the call and CI is to be
   /// deleted.  If it returns something else, replace CI with the new value and
   /// delete CI.
-  virtual Value *CallOptimizer(Function *Callee, CallInst *CI, IRBuilder<> &B) 
+  virtual Value *CallOptimizer(Function *Callee, CallInst *CI, IRBuilder<> &B)
     =0;
-  
+
   Value *OptimizeCall(CallInst *CI, const TargetData *TD, IRBuilder<> &B) {
     Caller = CI->getParent()->getParent();
     this->TD = TD;
@@ -75,12 +75,17 @@ public:
   /// specified pointer.  Ptr is required to be some pointer type, and the
   /// return value has 'intptr_t' type.
   Value *EmitStrLen(Value *Ptr, IRBuilder<> &B);
-  
+
   /// EmitMemCpy - Emit a call to the memcpy function to the builder.  This
   /// always expects that the size has type 'intptr_t' and Dst/Src are pointers.
-  Value *EmitMemCpy(Value *Dst, Value *Src, Value *Len, 
+  Value *EmitMemCpy(Value *Dst, Value *Src, Value *Len,
                     unsigned Align, IRBuilder<> &B);
-  
+
+  /// EmitMemMove - Emit a call to the memmove function to the builder.  This
+  /// always expects that the size has type 'intptr_t' and Dst/Src are pointers.
+  Value *EmitMemMove(Value *Dst, Value *Src, Value *Len,
+		     unsigned Align, IRBuilder<> &B);
+
   /// EmitMemChr - Emit a call to the memchr function.  This assumes that Ptr is
   /// a pointer, Val is an i32 value, and Len is an 'intptr_t' value.
   Value *EmitMemChr(Value *Ptr, Value *Val, Value *Len, IRBuilder<> &B);
@@ -97,34 +102,34 @@ public:
   /// is added as the suffix of name, if 'Op' is a float, we add a 'f' suffix.
   Value *EmitUnaryFloatFnCall(Value *Op, const char *Name, IRBuilder<> &B,
                               const AttrListPtr &Attrs);
-  
+
   /// EmitPutChar - Emit a call to the putchar function.  This assumes that Char
   /// is an integer.
-  void EmitPutChar(Value *Char, IRBuilder<> &B);
-  
+  Value *EmitPutChar(Value *Char, IRBuilder<> &B);
+
   /// EmitPutS - Emit a call to the puts function.  This assumes that Str is
   /// some pointer.
   void EmitPutS(Value *Str, IRBuilder<> &B);
-    
+
   /// EmitFPutC - Emit a call to the fputc function.  This assumes that Char is
   /// an i32, and File is a pointer to FILE.
   void EmitFPutC(Value *Char, Value *File, IRBuilder<> &B);
-  
+
 /// EmitFPutS - Emit a call to the fputs function.  Str is required to be a
   /// pointer and File is a pointer to FILE.
   void EmitFPutS(Value *Str, Value *File, IRBuilder<> &B);
-  
+
   /// EmitFWrite - Emit a call to the fwrite function.  This assumes that Ptr is
   /// a pointer, Size is an 'intptr_t', and File is a pointer to FILE.
   void EmitFWrite(Value *Ptr, Value *Size, Value *File, IRBuilder<> &B);
-  
+
 };
 } // End anonymous namespace.
 
 /// CastToCStr - Return V if it is an i8*, otherwise cast it to i8*.
 Value *LibCallOptimization::CastToCStr(Value *V, IRBuilder<> &B) {
   return
-        B.CreateBitCast(V, PointerType::getUnqual(Type::getInt8Ty(*Context)), "cstr");
+        B.CreateBitCast(V, Type::getInt8PtrTy(*Context), "cstr");
 }
 
 /// EmitStrLen - Emit a call to the strlen function to the builder, for the
@@ -138,7 +143,7 @@ Value *LibCallOptimization::EmitStrLen(Value *Ptr, IRBuilder<> &B) {
 
   Constant *StrLen =M->getOrInsertFunction("strlen", AttrListPtr::get(AWI, 2),
                                            TD->getIntPtrType(*Context),
-                                    PointerType::getUnqual(Type::getInt8Ty(*Context)),
+					   Type::getInt8PtrTy(*Context),
                                            NULL);
   CallInst *CI = B.CreateCall(StrLen, CastToCStr(Ptr, B), "strlen");
   if (const Function *F = dyn_cast<Function>(StrLen->stripPointerCasts()))
@@ -160,6 +165,21 @@ Value *LibCallOptimization::EmitMemCpy(Value *Dst, Value *Src, Value *Len,
                        ConstantInt::get(Type::getInt32Ty(*Context), Align));
 }
 
+/// EmitMemMove - Emit a call to the memmove function to the builder.  This
+/// always expects that the size has type 'intptr_t' and Dst/Src are pointers.
+Value *LibCallOptimization::EmitMemMove(Value *Dst, Value *Src, Value *Len,
+					unsigned Align, IRBuilder<> &B) {
+  Module *M = Caller->getParent();
+  Intrinsic::ID IID = Intrinsic::memmove;
+  const Type *Tys[1];
+  Tys[0] = TD->getIntPtrType(*Context);
+  Value *MemMove = Intrinsic::getDeclaration(M, IID, Tys, 1);
+  Value *D = CastToCStr(Dst, B);
+  Value *S = CastToCStr(Src, B);
+  Value *A = ConstantInt::get(Type::getInt32Ty(*Context), Align);
+  return B.CreateCall4(MemMove, D, S, Len, A);
+}
+
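
For reference, the new helper is exercised further down in this same patch,
where MemMoveOpt lowers a memmove libcall at alignment 1. The call shape
(fragment; operand names as in that optimizer):

    // memmove(x, y, n) -> llvm.memmove(x, y, n, 1), via the new helper:
    EmitMemMove(CI->getOperand(1),  // Dst
                CI->getOperand(2),  // Src
                CI->getOperand(3),  // Len, already intptr_t
                /*Align=*/1, B);
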
 /// EmitMemChr - Emit a call to the memchr function.  This assumes that Ptr is
 /// a pointer, Val is an i32 value, and Len is an 'intptr_t' value.
 Value *LibCallOptimization::EmitMemChr(Value *Ptr, Value *Val,
@@ -169,9 +189,10 @@ Value *LibCallOptimization::EmitMemChr(Value *Ptr, Value *Val,
   AWI = AttributeWithIndex::get(~0u, Attribute::ReadOnly | Attribute::NoUnwind);
 
   Value *MemChr = M->getOrInsertFunction("memchr", AttrListPtr::get(&AWI, 1),
-                                    PointerType::getUnqual(Type::getInt8Ty(*Context)),
-                                    PointerType::getUnqual(Type::getInt8Ty(*Context)),
-                                         Type::getInt32Ty(*Context), TD->getIntPtrType(*Context),
+					 Type::getInt8PtrTy(*Context),
+					 Type::getInt8PtrTy(*Context),
+                                         Type::getInt32Ty(*Context),
+					 TD->getIntPtrType(*Context),
                                          NULL);
   CallInst *CI = B.CreateCall3(MemChr, CastToCStr(Ptr, B), Val, Len, "memchr");
 
@@ -193,8 +214,8 @@ Value *LibCallOptimization::EmitMemCmp(Value *Ptr1, Value *Ptr2,
 
   Value *MemCmp = M->getOrInsertFunction("memcmp", AttrListPtr::get(AWI, 3),
                                          Type::getInt32Ty(*Context),
-                                    PointerType::getUnqual(Type::getInt8Ty(*Context)),
-                                    PointerType::getUnqual(Type::getInt8Ty(*Context)),
+                                    Type::getInt8PtrTy(*Context),
+                                    Type::getInt8PtrTy(*Context),
                                          TD->getIntPtrType(*Context), NULL);
   CallInst *CI = B.CreateCall3(MemCmp, CastToCStr(Ptr1, B), CastToCStr(Ptr2, B),
                                Len, "memcmp");
@@ -225,12 +246,12 @@ Value *LibCallOptimization::EmitUnaryFloatFnCall(Value *Op, const char *Name,
                                                  IRBuilder<> &B,
                                                  const AttrListPtr &Attrs) {
   char NameBuffer[20];
-  if (Op->getType() != Type::getDoubleTy(*Context)) {
+  if (!Op->getType()->isDoubleTy()) {
     // If we need to add a suffix, copy into NameBuffer.
     unsigned NameLen = strlen(Name);
     assert(NameLen < sizeof(NameBuffer)-2);
     memcpy(NameBuffer, Name, NameLen);
-    if (Op->getType() == Type::getFloatTy(*Context))
+    if (Op->getType()->isFloatTy())
       NameBuffer[NameLen] = 'f';  // floorf
     else
       NameBuffer[NameLen] = 'l';  // floorl
@@ -251,16 +272,20 @@ Value *LibCallOptimization::EmitUnaryFloatFnCall(Value *Op, const char *Name,
 
 /// EmitPutChar - Emit a call to the putchar function.  This assumes that Char
 /// is an integer.
-void LibCallOptimization::EmitPutChar(Value *Char, IRBuilder<> &B) {
+Value *LibCallOptimization::EmitPutChar(Value *Char, IRBuilder<> &B) {
   Module *M = Caller->getParent();
   Value *PutChar = M->getOrInsertFunction("putchar", Type::getInt32Ty(*Context),
                                           Type::getInt32Ty(*Context), NULL);
   CallInst *CI = B.CreateCall(PutChar,
-                              B.CreateIntCast(Char, Type::getInt32Ty(*Context), "chari"),
+                              B.CreateIntCast(Char,
+					      Type::getInt32Ty(*Context),
+                                              /*isSigned*/true,
+					      "chari"),
                               "putchar");
 
   if (const Function *F = dyn_cast<Function>(PutChar->stripPointerCasts()))
     CI->setCallingConv(F->getCallingConv());
+  return CI;
 }
 
 /// EmitPutS - Emit a call to the puts function.  This assumes that Str is
@@ -273,7 +298,7 @@ void LibCallOptimization::EmitPutS(Value *Str, IRBuilder<> &B) {
 
   Value *PutS = M->getOrInsertFunction("puts", AttrListPtr::get(AWI, 2),
                                        Type::getInt32Ty(*Context),
-                                    PointerType::getUnqual(Type::getInt8Ty(*Context)),
+                                    Type::getInt8PtrTy(*Context),
                                        NULL);
   CallInst *CI = B.CreateCall(PutS, CastToCStr(Str, B), "puts");
   if (const Function *F = dyn_cast<Function>(PutS->stripPointerCasts()))
@@ -290,12 +315,17 @@ void LibCallOptimization::EmitFPutC(Value *Char, Value *File, IRBuilder<> &B) {
   AWI[1] = AttributeWithIndex::get(~0u, Attribute::NoUnwind);
   Constant *F;
   if (isa<PointerType>(File->getType()))
-    F = M->getOrInsertFunction("fputc", AttrListPtr::get(AWI, 2), Type::getInt32Ty(*Context),
-                               Type::getInt32Ty(*Context), File->getType(), NULL);
+    F = M->getOrInsertFunction("fputc", AttrListPtr::get(AWI, 2),
+			       Type::getInt32Ty(*Context),
+                               Type::getInt32Ty(*Context), File->getType(),
+			       NULL);
   else
-    F = M->getOrInsertFunction("fputc", Type::getInt32Ty(*Context), Type::getInt32Ty(*Context),
+    F = M->getOrInsertFunction("fputc",
+			       Type::getInt32Ty(*Context),
+			       Type::getInt32Ty(*Context),
                                File->getType(), NULL);
-  Char = B.CreateIntCast(Char, Type::getInt32Ty(*Context), "chari");
+  Char = B.CreateIntCast(Char, Type::getInt32Ty(*Context), /*isSigned*/true,
+                         "chari");
   CallInst *CI = B.CreateCall2(F, Char, File, "fputc");
 
   if (const Function *Fn = dyn_cast<Function>(F->stripPointerCasts()))
@@ -312,12 +342,13 @@ void LibCallOptimization::EmitFPutS(Value *Str, Value *File, IRBuilder<> &B) {
   AWI[2] = AttributeWithIndex::get(~0u, Attribute::NoUnwind);
   Constant *F;
   if (isa<PointerType>(File->getType()))
-    F = M->getOrInsertFunction("fputs", AttrListPtr::get(AWI, 3), Type::getInt32Ty(*Context),
-                               PointerType::getUnqual(Type::getInt8Ty(*Context)),
+    F = M->getOrInsertFunction("fputs", AttrListPtr::get(AWI, 3),
+			       Type::getInt32Ty(*Context),
+                               Type::getInt8PtrTy(*Context),
                                File->getType(), NULL);
   else
     F = M->getOrInsertFunction("fputs", Type::getInt32Ty(*Context),
-                               PointerType::getUnqual(Type::getInt8Ty(*Context)),
+                               Type::getInt8PtrTy(*Context),
                                File->getType(), NULL);
   CallInst *CI = B.CreateCall2(F, CastToCStr(Str, B), File, "fputs");
 
@@ -338,13 +369,15 @@ void LibCallOptimization::EmitFWrite(Value *Ptr, Value *Size, Value *File,
   if (isa<PointerType>(File->getType()))
     F = M->getOrInsertFunction("fwrite", AttrListPtr::get(AWI, 3),
                                TD->getIntPtrType(*Context),
-                               PointerType::getUnqual(Type::getInt8Ty(*Context)),
-                               TD->getIntPtrType(*Context), TD->getIntPtrType(*Context),
+                               Type::getInt8PtrTy(*Context),
+                               TD->getIntPtrType(*Context),
+			       TD->getIntPtrType(*Context),
                                File->getType(), NULL);
   else
     F = M->getOrInsertFunction("fwrite", TD->getIntPtrType(*Context),
-                               PointerType::getUnqual(Type::getInt8Ty(*Context)),
-                               TD->getIntPtrType(*Context), TD->getIntPtrType(*Context),
+                               Type::getInt8PtrTy(*Context),
+                               TD->getIntPtrType(*Context),
+			       TD->getIntPtrType(*Context),
                                File->getType(), NULL);
   CallInst *CI = B.CreateCall4(F, CastToCStr(Ptr, B), Size,
                         ConstantInt::get(TD->getIntPtrType(*Context), 1), File);
@@ -363,30 +396,30 @@ static uint64_t GetStringLengthH(Value *V, SmallPtrSet<PHINode*, 32> &PHIs) {
   // Look through noop bitcast instructions.
   if (BitCastInst *BCI = dyn_cast<BitCastInst>(V))
     return GetStringLengthH(BCI->getOperand(0), PHIs);
-  
+
   // If this is a PHI node, there are two cases: either we have already seen it
   // or we haven't.
   if (PHINode *PN = dyn_cast<PHINode>(V)) {
     if (!PHIs.insert(PN))
       return ~0ULL;  // already in the set.
-    
+
     // If it was new, see if all the input strings are the same length.
     uint64_t LenSoFar = ~0ULL;
     for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
       uint64_t Len = GetStringLengthH(PN->getIncomingValue(i), PHIs);
       if (Len == 0) return 0; // Unknown length -> unknown.
-      
+
       if (Len == ~0ULL) continue;
-      
+
       if (Len != LenSoFar && LenSoFar != ~0ULL)
         return 0;    // Disagree -> unknown.
       LenSoFar = Len;
     }
-    
+
     // Success, all agree.
     return LenSoFar;
   }
-  
+
   // strlen(select(c,x,y)) -> strlen(x) ^ strlen(y)
   if (SelectInst *SI = dyn_cast<SelectInst>(V)) {
     uint64_t Len1 = GetStringLengthH(SI->getTrueValue(), PHIs);
@@ -398,7 +431,7 @@ static uint64_t GetStringLengthH(Value *V, SmallPtrSet<PHINode*, 32> &PHIs) {
     if (Len1 != Len2) return 0;
     return Len1;
   }
-  
+
   // If the value is not a GEP instruction nor a constant expression with a
   // GEP instruction, then return unknown.
   User *GEP = 0;
@@ -411,11 +444,11 @@ static uint64_t GetStringLengthH(Value *V, SmallPtrSet<PHINode*, 32> &PHIs) {
   } else {
     return 0;
   }
-  
+
   // Make sure the GEP has exactly three arguments.
   if (GEP->getNumOperands() != 3)
     return 0;
-  
+
   // Check to make sure that the first operand of the GEP is an integer and
   // has value 0 so that we are sure we're indexing into the initializer.
   if (ConstantInt *Idx = dyn_cast<ConstantInt>(GEP->getOperand(1))) {
@@ -423,7 +456,7 @@ static uint64_t GetStringLengthH(Value *V, SmallPtrSet<PHINode*, 32> &PHIs) {
       return 0;
   } else
     return 0;
-  
+
   // If the second index isn't a ConstantInt, then this is a variable index
   // into the array.  If this occurs, we can't say anything meaningful about
   // the string.
@@ -432,7 +465,7 @@ static uint64_t GetStringLengthH(Value *V, SmallPtrSet<PHINode*, 32> &PHIs) {
     StartIdx = CI->getZExtValue();
   else
     return 0;
-  
+
   // The GEP, whether constant expression or instruction, must reference a
   // global
   // variable that is a constant and is initialized. The referenced constant
   // initializer is the array that we'll use for optimization.
@@ -441,21 +474,21 @@ static uint64_t GetStringLengthH(Value *V, SmallPtrSet<PHINode*, 32> &PHIs) {
       GV->mayBeOverridden())
     return 0;
   Constant *GlobalInit = GV->getInitializer();
-  
+
   // Handle the ConstantAggregateZero case, which is a degenerate case. The
   // initializer is constant zero so the length of the string must be zero.
   if (isa<ConstantAggregateZero>(GlobalInit))
     return 1;  // Len = 0 offset by 1.
-  
+
   // Must be a Constant Array
   ConstantArray *Array = dyn_cast<ConstantArray>(GlobalInit);
   if (!Array ||
       Array->getType()->getElementType() != Type::getInt8Ty(V->getContext()))
     return 0;
-  
+
   // Get the number of elements in the array
   uint64_t NumElts = Array->getType()->getNumElements();
-  
+
   // Traverse the constant array from StartIdx (derived above) which is
   // the place the GEP refers to in the array.
   for (unsigned i = StartIdx; i != NumElts; ++i) {
@@ -466,7 +499,7 @@ static uint64_t GetStringLengthH(Value *V, SmallPtrSet<PHINode*, 32> &PHIs) {
     if (CI->isZero())
       return i-StartIdx+1; // We found end of string, success!
   }
-  
+
   return 0; // The array isn't null terminated, conservatively return 'unknown'.
 }
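
The array walk at the end of GetStringLengthH reduces to a scan for the nul
byte with a len+1 return convention. A host-side analogue over raw
initializer bytes (illustrative, no LLVM types):

    #include <stdint.h>

    // Walk the constant initializer from StartIdx; return length+1 on
    // finding the nul, or 0 ("unknown") when the array is not nul-terminated.
    static uint64_t stringLenPlusOne(const char *Init, uint64_t NumElts,
                                     uint64_t StartIdx) {
      for (uint64_t i = StartIdx; i != NumElts; ++i)
        if (Init[i] == 0)
          return i - StartIdx + 1;
      return 0;
    }
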
 
@@ -474,7 +507,7 @@ static uint64_t GetStringLengthH(Value *V, SmallPtrSet<PHINode*, 32> &PHIs) {
 /// the specified pointer, return 'len+1'.  If we can't, return 0.
 static uint64_t GetStringLength(Value *V) {
   if (!isa<PointerType>(V->getType())) return 0;
-  
+
   SmallPtrSet<PHINode*, 32> PHIs;
   uint64_t Len = GetStringLengthH(V, PHIs);
   // If Len is ~0ULL, we had an infinite phi cycle: this is dead code, so return
@@ -483,7 +516,7 @@ static uint64_t GetStringLength(Value *V) {
 }
 
 /// IsOnlyUsedInZeroEqualityComparison - Return true if it only matters that the
-/// value is equal or not-equal to zero. 
+/// value is equal or not-equal to zero.
 static bool IsOnlyUsedInZeroEqualityComparison(Value *V) {
   for (Value::use_iterator UI = V->use_begin(), E = V->use_end();
        UI != E; ++UI) {
@@ -510,20 +543,20 @@ struct StrCatOpt : public LibCallOptimization {
     // Verify the "strcat" function prototype.
     const FunctionType *FT = Callee->getFunctionType();
     if (FT->getNumParams() != 2 ||
-        FT->getReturnType() != PointerType::getUnqual(Type::getInt8Ty(*Context)) ||
+        FT->getReturnType() != Type::getInt8PtrTy(*Context) ||
         FT->getParamType(0) != FT->getReturnType() ||
         FT->getParamType(1) != FT->getReturnType())
       return 0;
-    
+
     // Extract some information from the instruction
     Value *Dst = CI->getOperand(1);
     Value *Src = CI->getOperand(2);
-    
+
     // See if we can get the length of the input string.
     uint64_t Len = GetStringLength(Src);
     if (Len == 0) return 0;
     --Len;  // Unbias length.
-    
+
     // Handle the simple, do-nothing case: strcat(x, "") -> x
     if (Len == 0)
       return Dst;
@@ -539,12 +572,12 @@ struct StrCatOpt : public LibCallOptimization {
     // We need to find the end of the destination string.  That's where the
     // memory is to be moved to. We just generate a call to strlen.
     Value *DstLen = EmitStrLen(Dst, B);
-    
+
     // Now that we have the destination's length, we must index into the
     // destination's pointer to get the actual memcpy destination (end of
     // the string .. we're concatenating).
     Value *CpyDst = B.CreateGEP(Dst, DstLen, "endptr");
-    
+
     // We have enough information to now generate the memcpy call to do the
     // concatenation for us.  Make a memcpy to copy the nul byte with align = 1.
     EmitMemCpy(CpyDst, Src,
@@ -560,7 +593,7 @@ struct StrNCatOpt : public StrCatOpt {
     // Verify the "strncat" function prototype.
     const FunctionType *FT = Callee->getFunctionType();
     if (FT->getNumParams() != 3 ||
-        FT->getReturnType() != PointerType::getUnqual(Type::getInt8Ty(*Context)) ||
+        FT->getReturnType() != Type::getInt8PtrTy(*Context) ||
         FT->getParamType(0) != FT->getReturnType() ||
         FT->getParamType(1) != FT->getReturnType() ||
         !isa<IntegerType>(FT->getParamType(2)))
@@ -608,12 +641,12 @@ struct StrChrOpt : public LibCallOptimization {
     // Verify the "strchr" function prototype.
     const FunctionType *FT = Callee->getFunctionType();
     if (FT->getNumParams() != 2 ||
-        FT->getReturnType() != PointerType::getUnqual(Type::getInt8Ty(*Context)) ||
+        FT->getReturnType() != Type::getInt8PtrTy(*Context) ||
         FT->getParamType(0) != FT->getReturnType())
       return 0;
-    
+
     Value *SrcStr = CI->getOperand(1);
-    
+
     // If the second operand is non-constant, see if we can compute the length
     // of the input string and turn this into memchr.
     ConstantInt *CharC = dyn_cast<ConstantInt>(CI->getOperand(2));
@@ -622,9 +655,10 @@ struct StrChrOpt : public LibCallOptimization {
       if (!TD) return 0;
 
       uint64_t Len = GetStringLength(SrcStr);
-      if (Len == 0 || FT->getParamType(1) != Type::getInt32Ty(*Context)) // memchr needs i32.
+      if (Len == 0 ||
+          FT->getParamType(1) != Type::getInt32Ty(*Context)) // memchr needs i32.
         return 0;
-      
+
       return EmitMemChr(SrcStr, CI->getOperand(2), // include nul.
                         ConstantInt::get(TD->getIntPtrType(*Context), Len), B);
     }
@@ -634,11 +668,11 @@ struct StrChrOpt : public LibCallOptimization {
     std::string Str;
     if (!GetConstantStringInfo(SrcStr, Str))
       return 0;
-    
+
     // strchr can find the nul character.
     Str += '\0';
     char CharValue = CharC->getSExtValue();
-    
+
     // Compute the offset.
     uint64_t i = 0;
     while (1) {
@@ -649,7 +683,7 @@ struct StrChrOpt : public LibCallOptimization {
         break;
       ++i;
     }
-    
+
     // strchr(s+n,c)  -> gep(s+n+i,c)
     Value *Idx = ConstantInt::get(Type::getInt64Ty(*Context), i);
     return B.CreateGEP(SrcStr, Idx, "strchr");
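
The constant-string case of strchr above is just an offset scan, and the
appended nul also lets strchr(s, 0) find the terminator. A host-side analogue
of the fold (illustrative): strchr("hello", 'l') becomes a GEP at offset 2.

    #include <string>

    // Returns the folded GEP offset, or -1 when the call folds to null.
    static long strchrOffset(std::string Str, char C) {
      Str += '\0';  // strchr can legitimately find the nul terminator
      for (std::string::size_type i = 0; i != Str.size(); ++i)
        if (Str[i] == C)
          return static_cast<long>(i);  // -> gep s, i
      return -1;
    }
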
@@ -663,28 +697,29 @@ struct StrCmpOpt : public LibCallOptimization {
   virtual Value *CallOptimizer(Function *Callee, CallInst *CI, IRBuilder<> &B) {
     // Verify the "strcmp" function prototype.
     const FunctionType *FT = Callee->getFunctionType();
-    if (FT->getNumParams() != 2 || FT->getReturnType() != Type::getInt32Ty(*Context) ||
+    if (FT->getNumParams() != 2 ||
+	FT->getReturnType() != Type::getInt32Ty(*Context) ||
         FT->getParamType(0) != FT->getParamType(1) ||
-        FT->getParamType(0) != PointerType::getUnqual(Type::getInt8Ty(*Context)))
+        FT->getParamType(0) != Type::getInt8PtrTy(*Context))
       return 0;
-    
+
     Value *Str1P = CI->getOperand(1), *Str2P = CI->getOperand(2);
     if (Str1P == Str2P)      // strcmp(x,x)  -> 0
       return ConstantInt::get(CI->getType(), 0);
-    
+
     std::string Str1, Str2;
     bool HasStr1 = GetConstantStringInfo(Str1P, Str1);
     bool HasStr2 = GetConstantStringInfo(Str2P, Str2);
-    
+
     if (HasStr1 && Str1.empty()) // strcmp("", x) -> *x
       return B.CreateZExt(B.CreateLoad(Str2P, "strcmpload"), CI->getType());
-    
+
     if (HasStr2 && Str2.empty()) // strcmp(x,"") -> *x
       return B.CreateZExt(B.CreateLoad(Str1P, "strcmpload"), CI->getType());
-    
+
     // strcmp(x, y)  -> cnst  (if both x and y are constant strings)
     if (HasStr1 && HasStr2)
-      return ConstantInt::get(CI->getType(), 
+      return ConstantInt::get(CI->getType(),
                                      strcmp(Str1.c_str(),Str2.c_str()));
 
     // strcmp(P, "x") -> memcmp(P, "x", 2)
@@ -710,36 +745,37 @@ struct StrNCmpOpt : public LibCallOptimization {
   virtual Value *CallOptimizer(Function *Callee, CallInst *CI, IRBuilder<> &B) {
     // Verify the "strncmp" function prototype.
     const FunctionType *FT = Callee->getFunctionType();
-    if (FT->getNumParams() != 3 || FT->getReturnType() != Type::getInt32Ty(*Context) ||
+    if (FT->getNumParams() != 3 ||
+	FT->getReturnType() != Type::getInt32Ty(*Context) ||
         FT->getParamType(0) != FT->getParamType(1) ||
-        FT->getParamType(0) != PointerType::getUnqual(Type::getInt8Ty(*Context)) ||
+        FT->getParamType(0) != Type::getInt8PtrTy(*Context) ||
         !isa<IntegerType>(FT->getParamType(2)))
       return 0;
-    
+
     Value *Str1P = CI->getOperand(1), *Str2P = CI->getOperand(2);
     if (Str1P == Str2P)      // strncmp(x,x,n)  -> 0
       return ConstantInt::get(CI->getType(), 0);
-    
+
     // Get the length argument if it is constant.
     uint64_t Length;
     if (ConstantInt *LengthArg = dyn_cast<ConstantInt>(CI->getOperand(3)))
       Length = LengthArg->getZExtValue();
     else
       return 0;
-    
+
     if (Length == 0) // strncmp(x,y,0)   -> 0
       return ConstantInt::get(CI->getType(), 0);
-    
+
     std::string Str1, Str2;
     bool HasStr1 = GetConstantStringInfo(Str1P, Str1);
     bool HasStr2 = GetConstantStringInfo(Str2P, Str2);
-    
+
     if (HasStr1 && Str1.empty())  // strncmp("", x, n) -> *x
       return B.CreateZExt(B.CreateLoad(Str2P, "strcmpload"), CI->getType());
-    
+
     if (HasStr2 && Str2.empty())  // strncmp(x, "", n) -> *x
       return B.CreateZExt(B.CreateLoad(Str1P, "strcmpload"), CI->getType());
-    
+
     // strncmp(x, y)  -> cnst  (if both x and y are constant strings)
     if (HasStr1 && HasStr2)
       return ConstantInt::get(CI->getType(),
@@ -758,20 +794,20 @@ struct StrCpyOpt : public LibCallOptimization {
     const FunctionType *FT = Callee->getFunctionType();
     if (FT->getNumParams() != 2 || FT->getReturnType() != FT->getParamType(0) ||
         FT->getParamType(0) != FT->getParamType(1) ||
-        FT->getParamType(0) != PointerType::getUnqual(Type::getInt8Ty(*Context)))
+        FT->getParamType(0) != Type::getInt8PtrTy(*Context))
       return 0;
-    
+
     Value *Dst = CI->getOperand(1), *Src = CI->getOperand(2);
     if (Dst == Src)      // strcpy(x,x)  -> x
       return Src;
-    
+
     // These optimizations require TargetData.
     if (!TD) return 0;
 
     // See if we can get the length of the input string.
     uint64_t Len = GetStringLength(Src);
     if (Len == 0) return 0;
-    
+
     // We have enough information to now generate the memcpy call to do the
     // copy for us.  Make a memcpy to copy the string plus nul with align = 1.
     EmitMemCpy(Dst, Src,
@@ -788,7 +824,7 @@ struct StrNCpyOpt : public LibCallOptimization {
     const FunctionType *FT = Callee->getFunctionType();
     if (FT->getNumParams() != 3 || FT->getReturnType() != FT->getParamType(0) ||
         FT->getParamType(0) != FT->getParamType(1) ||
-        FT->getParamType(0) != PointerType::getUnqual(Type::getInt8Ty(*Context)) ||
+        FT->getParamType(0) != Type::getInt8PtrTy(*Context) ||
         !isa<IntegerType>(FT->getParamType(2)))
       return 0;
 
@@ -803,7 +839,8 @@ struct StrNCpyOpt : public LibCallOptimization {
 
     if (SrcLen == 0) {
       // strncpy(x, "", y) -> memset(x, '\0', y, 1)
-      EmitMemSet(Dst, ConstantInt::get(Type::getInt8Ty(*Context), '\0'), LenOp, B);
+      EmitMemSet(Dst, ConstantInt::get(Type::getInt8Ty(*Context), '\0'), LenOp,
+		 B);
       return Dst;
     }
 
@@ -836,10 +873,10 @@ struct StrLenOpt : public LibCallOptimization {
   virtual Value *CallOptimizer(Function *Callee, CallInst *CI, IRBuilder<> &B) {
     const FunctionType *FT = Callee->getFunctionType();
     if (FT->getNumParams() != 1 ||
-        FT->getParamType(0) != PointerType::getUnqual(Type::getInt8Ty(*Context)) ||
+        FT->getParamType(0) != Type::getInt8PtrTy(*Context) ||
         !isa<IntegerType>(FT->getReturnType()))
       return 0;
-    
+
     Value *Src = CI->getOperand(1);
 
     // Constant folding: strlen("xyz") -> 3
@@ -920,6 +957,17 @@ struct MemCmpOpt : public LibCallOptimization {
       return B.CreateZExt(B.CreateXor(LHSV, RHSV, "shortdiff"), CI->getType());
     }
 
+    // Constant folding: memcmp(x, y, l) -> cnst (all arguments are constant)
+    std::string LHSStr, RHSStr;
+    if (GetConstantStringInfo(LHS, LHSStr) &&
+        GetConstantStringInfo(RHS, RHSStr)) {
+      // Make sure we're not reading out-of-bounds memory.
+      if (Len > LHSStr.length() || Len > RHSStr.length())
+        return 0;
+      uint64_t Ret = memcmp(LHSStr.data(), RHSStr.data(), Len);
+      return ConstantInt::get(CI->getType(), Ret);
+    }
+
     return 0;
   }
 };
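
The newly added constant fold can be sanity-checked on the host; note the
same out-of-bounds guard before touching the bytes (a sketch in plain C++):

    #include <stdint.h>
    #include <cstring>
    #include <string>

    // Fold memcmp(L, R, Len) only when both constant strings cover Len
    // bytes, mirroring the guard above; returns false when no fold applies.
    static bool foldMemCmp(const std::string &L, const std::string &R,
                           uint64_t Len, int &Result) {
      if (Len > L.length() || Len > R.length())
        return false;  // would read past a known string: leave the call alone
      Result = std::memcmp(L.data(), R.data(), (size_t)Len);
      return true;
    }
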
@@ -961,16 +1009,7 @@ struct MemMoveOpt : public LibCallOptimization {
       return 0;
 
     // memmove(x, y, n) -> llvm.memmove(x, y, n, 1)
-    Module *M = Caller->getParent();
-    Intrinsic::ID IID = Intrinsic::memmove;
-    const Type *Tys[1];
-    Tys[0] = TD->getIntPtrType(*Context);
-    Value *MemMove = Intrinsic::getDeclaration(M, IID, Tys, 1);
-    Value *Dst = CastToCStr(CI->getOperand(1), B);
-    Value *Src = CastToCStr(CI->getOperand(2), B);
-    Value *Size = CI->getOperand(3);
-    Value *Align = ConstantInt::get(Type::getInt32Ty(*Context), 1);
-    B.CreateCall4(MemMove, Dst, Src, Size, Align);
+    EmitMemMove(CI->getOperand(1), CI->getOperand(2), CI->getOperand(3), 1, B);
     return CI->getOperand(1);
   }
 };
@@ -991,13 +1030,126 @@ struct MemSetOpt : public LibCallOptimization {
       return 0;
 
     // memset(p, v, n) -> llvm.memset(p, v, n, 1)
-    Value *Val = B.CreateIntCast(CI->getOperand(2), Type::getInt8Ty(*Context), false);
+    Value *Val = B.CreateIntCast(CI->getOperand(2), Type::getInt8Ty(*Context),
+				 false);
     EmitMemSet(CI->getOperand(1), Val,  CI->getOperand(3), B);
     return CI->getOperand(1);
   }
 };
 
 //===----------------------------------------------------------------------===//
+// Object Size Checking Optimizations
+//===----------------------------------------------------------------------===//
+
+//===---------------------------------------===//
+// 'object size'
+namespace {
+struct SizeOpt : public LibCallOptimization {
+  virtual Value *CallOptimizer(Function *Callee, CallInst *CI, IRBuilder<> &B) {
+    // TODO: We can do more with this, but delaying to here should be no change
+    // in behavior.
+    ConstantInt *Const = dyn_cast<ConstantInt>(CI->getOperand(2));
+
+    if (!Const) return 0;
+
+    const Type *Ty = Callee->getFunctionType()->getReturnType();
+
+    if (Const->getZExtValue() < 2)
+      return Constant::getAllOnesValue(Ty);
+    else
+      return ConstantInt::get(Ty, 0);
+  }
+};
+}
+
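
SizeOpt folds the objectsize intrinsic for unknown objects: kind 0/1 asks for
the maximum possible size and gets all ones, kind 2/3 asks for the minimum
and gets zero. As a plain function (illustrative):

    #include <stdint.h>

    // For an unknown object, kinds 0 and 1 answer "maximum possible size"
    // (all ones) and kinds 2 and 3 answer "minimum possible size" (zero).
    static uint64_t objectSizeForUnknown(uint64_t Kind) {
      return Kind < 2 ? ~0ULL : 0;
    }
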
+//===---------------------------------------===//
+// 'memcpy_chk' Optimizations
+
+struct MemCpyChkOpt : public LibCallOptimization {
+  virtual Value *CallOptimizer(Function *Callee, CallInst *CI, IRBuilder<> &B) {
+    // These optimizations require TargetData.
+    if (!TD) return 0;
+
+    const FunctionType *FT = Callee->getFunctionType();
+    if (FT->getNumParams() != 4 || FT->getReturnType() != FT->getParamType(0) ||
+        !isa<PointerType>(FT->getParamType(0)) ||
+        !isa<PointerType>(FT->getParamType(1)) ||
+	!isa<IntegerType>(FT->getParamType(3)) ||
+	FT->getParamType(2) != TD->getIntPtrType(*Context))
+      return 0;
+
+    ConstantInt *SizeCI = dyn_cast<ConstantInt>(CI->getOperand(4));
+    if (!SizeCI)
+      return 0;
+    if (SizeCI->isAllOnesValue()) {
+      EmitMemCpy(CI->getOperand(1), CI->getOperand(2), CI->getOperand(3), 1, B);
+      return CI->getOperand(1);
+    }
+
+    return 0;
+  }
+};
+
+//===---------------------------------------===//
+// 'memset_chk' Optimizations
+
+struct MemSetChkOpt : public LibCallOptimization {
+  virtual Value *CallOptimizer(Function *Callee, CallInst *CI, IRBuilder<> &B) {
+    // These optimizations require TargetData.
+    if (!TD) return 0;
+
+    const FunctionType *FT = Callee->getFunctionType();
+    if (FT->getNumParams() != 4 || FT->getReturnType() != FT->getParamType(0) ||
+        !isa<PointerType>(FT->getParamType(0)) ||
+        !isa<IntegerType>(FT->getParamType(1)) ||
+	!isa<IntegerType>(FT->getParamType(3)) ||
+        FT->getParamType(2) != TD->getIntPtrType(*Context))
+      return 0;
+
+    ConstantInt *SizeCI = dyn_cast<ConstantInt>(CI->getOperand(4));
+    if (!SizeCI)
+      return 0;
+    if (SizeCI->isAllOnesValue()) {
+      Value *Val = B.CreateIntCast(CI->getOperand(2), Type::getInt8Ty(*Context),
+				   false);
+      EmitMemSet(CI->getOperand(1), Val,  CI->getOperand(3), B);
+      return CI->getOperand(1);
+    }
+
+    return 0;
+  }
+};
+
+//===---------------------------------------===//
+// 'memmove_chk' Optimizations
+
+struct MemMoveChkOpt : public LibCallOptimization {
+  virtual Value *CallOptimizer(Function *Callee, CallInst *CI, IRBuilder<> &B) {
+    // These optimizations require TargetData.
+    if (!TD) return 0;
+
+    const FunctionType *FT = Callee->getFunctionType();
+    if (FT->getNumParams() != 4 || FT->getReturnType() != FT->getParamType(0) ||
+        !isa<PointerType>(FT->getParamType(0)) ||
+        !isa<PointerType>(FT->getParamType(1)) ||
+	!isa<IntegerType>(FT->getParamType(3)) ||
+        FT->getParamType(2) != TD->getIntPtrType(*Context))
+      return 0;
+
+    ConstantInt *SizeCI = dyn_cast<ConstantInt>(CI->getOperand(4));
+    if (!SizeCI)
+      return 0;
+    if (SizeCI->isAllOnesValue()) {
+      EmitMemMove(CI->getOperand(1), CI->getOperand(2), CI->getOperand(3),
+		  1, B);
+      return CI->getOperand(1);
+    }
+
+    return 0;
+  }
+};
+
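
All three _chk folds above share a single trigger: an object-size operand of
-1 (all ones), the value emitted when the size could not be computed, in
which case the checked call carries no information beyond the plain libcall.
Host-side shape of the memcpy case (a sketch; memmove_chk and memset_chk
follow the same pattern):

    #include <cstring>

    // An object size of -1 means "unknown", so the check can never be
    // justified and the call degenerates to plain memcpy.
    static void *memcpy_chk_like(void *Dst, const void *Src, std::size_t Len,
                                 std::size_t ObjSize) {
      if (ObjSize == static_cast<std::size_t>(-1))
        return std::memcpy(Dst, Src, Len);
      return 0;  // any other size: the optimizer leaves the call in place
    }
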
+//===----------------------------------------------------------------------===//
 // Math Library Optimizations
 //===----------------------------------------------------------------------===//
 
@@ -1013,7 +1165,7 @@ struct PowOpt : public LibCallOptimization {
         FT->getParamType(0) != FT->getParamType(1) ||
         !FT->getParamType(0)->isFloatingPoint())
       return 0;
-    
+
     Value *Op1 = CI->getOperand(1), *Op2 = CI->getOperand(2);
     if (ConstantFP *Op1C = dyn_cast<ConstantFP>(Op1)) {
       if (Op1C->isExactlyValue(1.0))  // pow(1.0, x) -> 1.0
@@ -1021,13 +1173,13 @@ struct PowOpt : public LibCallOptimization {
       if (Op1C->isExactlyValue(2.0))  // pow(2.0, x) -> exp2(x)
         return EmitUnaryFloatFnCall(Op2, "exp2", B, Callee->getAttributes());
     }
-    
+
     ConstantFP *Op2C = dyn_cast<ConstantFP>(Op2);
     if (Op2C == 0) return 0;
-    
+
     if (Op2C->getValueAPF().isZero())  // pow(x, 0.0) -> 1.0
       return ConstantFP::get(CI->getType(), 1.0);
-    
+
     if (Op2C->isExactlyValue(0.5)) {
       // Expand pow(x, 0.5) to (x == -infinity ? +infinity : fabs(sqrt(x))).
       // This is faster than calling pow, and still handles negative zero
@@ -1044,7 +1196,7 @@ struct PowOpt : public LibCallOptimization {
       Value *Sel = B.CreateSelect(FCmp, Inf, FAbs, "tmp");
       return Sel;
     }
-    
+
     if (Op2C->isExactlyValue(1.0))  // pow(x, 1.0) -> x
       return Op1;
     if (Op2C->isExactlyValue(2.0))  // pow(x, 2.0) -> x*x
@@ -1067,35 +1219,38 @@ struct Exp2Opt : public LibCallOptimization {
     if (FT->getNumParams() != 1 || FT->getReturnType() != FT->getParamType(0) ||
         !FT->getParamType(0)->isFloatingPoint())
       return 0;
-    
+
     Value *Op = CI->getOperand(1);
     // Turn exp2(sitofp(x)) -> ldexp(1.0, sext(x))  if sizeof(x) <= 32
     // Turn exp2(uitofp(x)) -> ldexp(1.0, zext(x))  if sizeof(x) < 32
     Value *LdExpArg = 0;
     if (SIToFPInst *OpC = dyn_cast<SIToFPInst>(Op)) {
       if (OpC->getOperand(0)->getType()->getPrimitiveSizeInBits() <= 32)
-        LdExpArg = B.CreateSExt(OpC->getOperand(0), Type::getInt32Ty(*Context), "tmp");
+        LdExpArg = B.CreateSExt(OpC->getOperand(0),
+				Type::getInt32Ty(*Context), "tmp");
     } else if (UIToFPInst *OpC = dyn_cast<UIToFPInst>(Op)) {
       if (OpC->getOperand(0)->getType()->getPrimitiveSizeInBits() < 32)
-        LdExpArg = B.CreateZExt(OpC->getOperand(0), Type::getInt32Ty(*Context), "tmp");
+        LdExpArg = B.CreateZExt(OpC->getOperand(0),
+				Type::getInt32Ty(*Context), "tmp");
     }
 
     if (LdExpArg) {
       const char *Name;
-      if (Op->getType() == Type::getFloatTy(*Context))
+      if (Op->getType()->isFloatTy())
         Name = "ldexpf";
-      else if (Op->getType() == Type::getDoubleTy(*Context))
+      else if (Op->getType()->isDoubleTy())
         Name = "ldexp";
       else
         Name = "ldexpl";
 
       Constant *One = ConstantFP::get(*Context, APFloat(1.0f));
-      if (Op->getType() != Type::getFloatTy(*Context))
+      if (!Op->getType()->isFloatTy())
         One = ConstantExpr::getFPExtend(One, Op->getType());
 
       Module *M = Caller->getParent();
       Value *Callee = M->getOrInsertFunction(Name, Op->getType(),
-                                             Op->getType(), Type::getInt32Ty(*Context),NULL);
+                                             Op->getType(),
+					     Type::getInt32Ty(*Context),NULL);
       CallInst *CI = B.CreateCall2(Callee, One, LdExpArg);
       if (const Function *F = dyn_cast<Function>(Callee->stripPointerCasts()))
         CI->setCallingConv(F->getCallingConv());
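
The ldexp rewrite is exact because ldexp(1.0, n) builds 2^n directly in
the exponent field, with no polynomial evaluation. The width checks are
asymmetric on purpose: a signed source of up to 32 bits sign-extends
into the i32 exponent argument, but an unsigned source must be narrower
than 32 bits so that zero-extension can never produce a negative i32. A
small sketch of the identity (hypothetical test; exp2 is C99/C++11):

    #include <cmath>
    #include <cstdio>

    int main() {
      // exp2((double)n) == ldexp(1.0, n) for integer n.
      const int Ns[] = { -3, 0, 10, 100 };
      for (unsigned i = 0; i != sizeof(Ns)/sizeof(Ns[0]); ++i)
        std::printf("n=%d exp2=%g ldexp=%g\n", Ns[i],
                    std::exp2((double)Ns[i]), std::ldexp(1.0, Ns[i]));
      return 0;
    }
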
@@ -1112,13 +1267,13 @@ struct Exp2Opt : public LibCallOptimization {
 struct UnaryDoubleFPOpt : public LibCallOptimization {
   virtual Value *CallOptimizer(Function *Callee, CallInst *CI, IRBuilder<> &B) {
     const FunctionType *FT = Callee->getFunctionType();
-    if (FT->getNumParams() != 1 || FT->getReturnType() != Type::getDoubleTy(*Context) ||
-        FT->getParamType(0) != Type::getDoubleTy(*Context))
+    if (FT->getNumParams() != 1 || !FT->getReturnType()->isDoubleTy() ||
+        !FT->getParamType(0)->isDoubleTy())
       return 0;
 
     // If this is something like 'floor((double)floatval)', convert to floorf.
     FPExtInst *Cast = dyn_cast<FPExtInst>(CI->getOperand(1));
-    if (Cast == 0 || Cast->getOperand(0)->getType() != Type::getFloatTy(*Context))
+    if (Cast == 0 || !Cast->getOperand(0)->getType()->isFloatTy())
       return 0;
 
     // floor((double)floatval) -> (double)floorf(floatval)
@@ -1141,12 +1296,13 @@ struct FFSOpt : public LibCallOptimization {
     const FunctionType *FT = Callee->getFunctionType();
     // Just make sure this takes a single integer argument and that the
     // result type is i32.
-    if (FT->getNumParams() != 1 || FT->getReturnType() != Type::getInt32Ty(*Context) ||
+    if (FT->getNumParams() != 1 ||
+	FT->getReturnType() != Type::getInt32Ty(*Context) ||
         !isa<IntegerType>(FT->getParamType(0)))
       return 0;
-    
+
     Value *Op = CI->getOperand(1);
-    
+
     // Constant fold.
     if (ConstantInt *CI = dyn_cast<ConstantInt>(Op)) {
       if (CI->getValue() == 0)  // ffs(0) -> 0.
@@ -1154,7 +1310,7 @@ struct FFSOpt : public LibCallOptimization {
       return ConstantInt::get(Type::getInt32Ty(*Context), // ffs(c) -> cttz(c)+1
                               CI->getValue().countTrailingZeros()+1);
     }
-    
+
     // ffs(x) -> x != 0 ? (i32)llvm.cttz(x)+1 : 0
     const Type *ArgType = Op->getType();
     Value *F = Intrinsic::getDeclaration(Callee->getParent(),
@@ -1162,9 +1318,10 @@ struct FFSOpt : public LibCallOptimization {
     Value *V = B.CreateCall(F, Op, "cttz");
     V = B.CreateAdd(V, ConstantInt::get(V->getType(), 1), "tmp");
     V = B.CreateIntCast(V, Type::getInt32Ty(*Context), false, "tmp");
-    
+
     Value *Cond = B.CreateICmpNE(Op, Constant::getNullValue(ArgType), "tmp");
-    return B.CreateSelect(Cond, V, ConstantInt::get(Type::getInt32Ty(*Context), 0));
+    return B.CreateSelect(Cond, V,
+			  ConstantInt::get(Type::getInt32Ty(*Context), 0));
   }
 };
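
ffs(x) is one plus the index of the least significant set bit, with
ffs(0) defined as 0, which is exactly the cttz-plus-select sequence the
code emits. The same identity at the C level, using the GCC/Clang
builtin as a stand-in for llvm.cttz (hypothetical test; __builtin_ctz
is undefined at zero, hence the guard, and ffs() comes from POSIX
<strings.h>):

    #include <cstdio>
    #include <strings.h>

    // ffs(x) -> x != 0 ? cttz(x) + 1 : 0
    static int ffs_via_cttz(unsigned x) {
      return x ? __builtin_ctz(x) + 1 : 0;
    }

    int main() {
      const unsigned Xs[] = { 0u, 1u, 8u, 0x80000000u };
      for (unsigned i = 0; i != sizeof(Xs)/sizeof(Xs[0]); ++i)
        std::printf("x=%#x ffs=%d cttz+1=%d\n", Xs[i], ffs((int)Xs[i]),
                    ffs_via_cttz(Xs[i]));
      return 0;
    }
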
 
@@ -1178,12 +1335,12 @@ struct IsDigitOpt : public LibCallOptimization {
     if (FT->getNumParams() != 1 || !isa<IntegerType>(FT->getReturnType()) ||
         FT->getParamType(0) != Type::getInt32Ty(*Context))
       return 0;
-    
+
     // isdigit(c) -> (c-'0') <u 10
     Value *Op = CI->getOperand(1);
-    Op = B.CreateSub(Op, ConstantInt::get(Type::getInt32Ty(*Context), '0'), 
+    Op = B.CreateSub(Op, ConstantInt::get(Type::getInt32Ty(*Context), '0'),
                      "isdigittmp");
-    Op = B.CreateICmpULT(Op, ConstantInt::get(Type::getInt32Ty(*Context), 10), 
+    Op = B.CreateICmpULT(Op, ConstantInt::get(Type::getInt32Ty(*Context), 10),
                          "isdigit");
     return B.CreateZExt(Op, CI->getType());
   }
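
The unsigned compare folds the usual two-sided range test into a single
instruction: after subtracting '0', any c below '0' wraps around to a
huge unsigned value, so one unsigned < 10 covers both bounds at once.
In C terms (hypothetical test):

    #include <cctype>
    #include <cstdio>

    // isdigit(c) -> (unsigned)(c - '0') < 10, replacing
    // ('0' <= c && c <= '9') with one compare.
    static int isdigit_trick(int c) {
      return (unsigned)(c - '0') < 10u;
    }

    int main() {
      const int Cs[] = { '0', '9', 'a', ' ', '/' };
      for (unsigned i = 0; i != sizeof(Cs)/sizeof(Cs[0]); ++i)
        std::printf("'%c': isdigit=%d trick=%d\n", Cs[i],
                    std::isdigit(Cs[i]) ? 1 : 0, isdigit_trick(Cs[i]));
      return 0;
    }
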
@@ -1199,7 +1356,7 @@ struct IsAsciiOpt : public LibCallOptimization {
     if (FT->getNumParams() != 1 || !isa<IntegerType>(FT->getReturnType()) ||
         FT->getParamType(0) != Type::getInt32Ty(*Context))
       return 0;
-    
+
     // isascii(c) -> c <u 128
     Value *Op = CI->getOperand(1);
     Op = B.CreateICmpULT(Op, ConstantInt::get(Type::getInt32Ty(*Context), 128),
@@ -1207,7 +1364,7 @@ struct IsAsciiOpt : public LibCallOptimization {
     return B.CreateZExt(Op, CI->getType());
   }
 };
-  
+
 //===---------------------------------------===//
 // 'abs', 'labs', 'llabs' Optimizations
 
@@ -1218,17 +1375,17 @@ struct AbsOpt : public LibCallOptimization {
     if (FT->getNumParams() != 1 || !isa<IntegerType>(FT->getReturnType()) ||
         FT->getParamType(0) != FT->getReturnType())
       return 0;
-    
+
     // abs(x) -> x >s -1 ? x : -x
     Value *Op = CI->getOperand(1);
-    Value *Pos = B.CreateICmpSGT(Op, 
+    Value *Pos = B.CreateICmpSGT(Op,
                              Constant::getAllOnesValue(Op->getType()),
                                  "ispos");
     Value *Neg = B.CreateNeg(Op, "neg");
     return B.CreateSelect(Pos, Op, Neg);
   }
 };
-  
+
 
 //===---------------------------------------===//
 // 'toascii' Optimizations
@@ -1240,7 +1397,7 @@ struct ToAsciiOpt : public LibCallOptimization {
     if (FT->getNumParams() != 1 || FT->getReturnType() != FT->getParamType(0) ||
         FT->getParamType(0) != Type::getInt32Ty(*Context))
       return 0;
-    
+
     // toascii(c) -> c & 0x7f
     return B.CreateAnd(CI->getOperand(1),
                        ConstantInt::get(CI->getType(),0x7F));
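
The three rewrites in this stretch are all single-instruction
identities: isascii becomes one unsigned compare, abs becomes a
branchless compare-plus-select, and toascii is a mask. A combined
sketch (hypothetical test; as with the libcall, abs(INT_MIN) stays
undefined):

    #include <cstdio>

    static int isascii_ult(int c) { return (unsigned)c < 128u; } // c <u 128
    static int abs_select(int x)  { return x > -1 ? x : -x; }    // x >s -1 ? x : -x
    static int toascii_and(int c) { return c & 0x7f; }

    int main() {
      std::printf("%d %d %d\n", isascii_ult(200), abs_select(-5),
                  toascii_and(0x1C1));  // prints: 0 5 65
      return 0;
    }
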
@@ -1260,9 +1417,9 @@ struct PrintFOpt : public LibCallOptimization {
     const FunctionType *FT = Callee->getFunctionType();
     if (FT->getNumParams() < 1 || !isa<PointerType>(FT->getParamType(0)) ||
         !(isa<IntegerType>(FT->getReturnType()) ||
-          FT->getReturnType() == Type::getVoidTy(*Context)))
+          FT->getReturnType()->isVoidTy()))
       return 0;
-    
+
     // Check for a fixed format string.
     std::string FormatStr;
     if (!GetConstantStringInfo(CI->getOperand(1), FormatStr))
@@ -1270,16 +1427,18 @@ struct PrintFOpt : public LibCallOptimization {
 
     // Empty format string -> noop.
     if (FormatStr.empty())  // Tolerate printf's declared void.
-      return CI->use_empty() ? (Value*)CI : 
+      return CI->use_empty() ? (Value*)CI :
                                ConstantInt::get(CI->getType(), 0);
-    
-    // printf("x") -> putchar('x'), even for '%'.
+
+    // printf("x") -> putchar('x'), even for '%'.  Return the result of putchar
+    // in case there is an error writing to stdout.
     if (FormatStr.size() == 1) {
-      EmitPutChar(ConstantInt::get(Type::getInt32Ty(*Context), FormatStr[0]), B);
-      return CI->use_empty() ? (Value*)CI : 
-                               ConstantInt::get(CI->getType(), 1);
+      Value *Res = EmitPutChar(ConstantInt::get(Type::getInt32Ty(*Context),
+                                                FormatStr[0]), B);
+      if (CI->use_empty()) return CI;
+      return B.CreateIntCast(Res, CI->getType(), true);
     }
-    
+
     // printf("foo\n") --> puts("foo")
     if (FormatStr[FormatStr.size()-1] == '\n' &&
         FormatStr.find('%') == std::string::npos) {  // no format characters.
@@ -1290,19 +1449,20 @@ struct PrintFOpt : public LibCallOptimization {
       C = new GlobalVariable(*Callee->getParent(), C->getType(), true,
                              GlobalVariable::InternalLinkage, C, "str");
       EmitPutS(C, B);
-      return CI->use_empty() ? (Value*)CI : 
+      return CI->use_empty() ? (Value*)CI :
                     ConstantInt::get(CI->getType(), FormatStr.size()+1);
     }
-    
+
     // Optimize specific format strings.
     // printf("%c", chr) --> putchar(chr)
     if (FormatStr == "%c" && CI->getNumOperands() > 2 &&
         isa<IntegerType>(CI->getOperand(2)->getType())) {
-      EmitPutChar(CI->getOperand(2), B);
-      return CI->use_empty() ? (Value*)CI : 
-                               ConstantInt::get(CI->getType(), 1);
+      Value *Res = EmitPutChar(CI->getOperand(2), B);
+
+      if (CI->use_empty()) return CI;
+      return B.CreateIntCast(Res, CI->getType(), true);
     }
-    
+
     // printf("%s\n", str) --> puts(str)
     if (FormatStr == "%s\n" && CI->getNumOperands() > 2 &&
         isa<PointerType>(CI->getOperand(2)->getType()) &&
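
The behavioral change in this hunk is the return value: the old code
reported a hard-coded 1 after rewriting printf("x") to putchar, while
the new code forwards putchar's own result. putchar returns the
character written on success and EOF (a negative value) on failure, so
the sign of the forwarded, sign-extended result still tells a caller
whether the write failed. A sketch of the contract being relied on
(hypothetical test):

    #include <cstdio>

    int main() {
      // Stand-in for the rewritten printf("x"): nonnegative on
      // success, EOF (< 0) if the write to stdout failed.
      int r = std::putchar('x');
      return r == EOF ? 1 : 0;
    }
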
@@ -1330,7 +1490,7 @@ struct SPrintFOpt : public LibCallOptimization {
     std::string FormatStr;
     if (!GetConstantStringInfo(CI->getOperand(2), FormatStr))
       return 0;
-    
+
     // If we just have a format string (nothing else crazy) transform it.
     if (CI->getNumOperands() == 3) {
       // Make sure there's no % in the constant array.  We could try to handle
@@ -1347,25 +1507,27 @@ struct SPrintFOpt : public LibCallOptimization {
           ConstantInt::get(TD->getIntPtrType(*Context), FormatStr.size()+1),1,B);
       return ConstantInt::get(CI->getType(), FormatStr.size());
     }
-    
+
     // The remaining optimizations require the format string to be "%s" or "%c"
     // and have an extra operand.
     if (FormatStr.size() != 2 || FormatStr[0] != '%' || CI->getNumOperands() <4)
       return 0;
-    
+
     // Decode the second character of the format string.
     if (FormatStr[1] == 'c') {
       // sprintf(dst, "%c", chr) --> *(i8*)dst = chr; *((i8*)dst+1) = 0
       if (!isa<IntegerType>(CI->getOperand(3)->getType())) return 0;
-      Value *V = B.CreateTrunc(CI->getOperand(3), Type::getInt8Ty(*Context), "char");
+      Value *V = B.CreateTrunc(CI->getOperand(3),
+			       Type::getInt8Ty(*Context), "char");
       Value *Ptr = CastToCStr(CI->getOperand(1), B);
       B.CreateStore(V, Ptr);
-      Ptr = B.CreateGEP(Ptr, ConstantInt::get(Type::getInt32Ty(*Context), 1), "nul");
+      Ptr = B.CreateGEP(Ptr, ConstantInt::get(Type::getInt32Ty(*Context), 1),
+			"nul");
       B.CreateStore(Constant::getNullValue(Type::getInt8Ty(*Context)), Ptr);
-      
+
       return ConstantInt::get(CI->getType(), 1);
     }
-    
+
     if (FormatStr[1] == 's') {
       // These optimizations require TargetData.
       if (!TD) return 0;
@@ -1378,7 +1540,7 @@ struct SPrintFOpt : public LibCallOptimization {
                                   ConstantInt::get(Len->getType(), 1),
                                   "leninc");
       EmitMemCpy(CI->getOperand(1), CI->getOperand(3), IncLen, 1, B);
-      
+
       // The sprintf result is the unincremented number of bytes in the string.
       return B.CreateIntCast(Len, CI->getType(), false);
     }
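
The "%s" case turns sprintf into strlen plus a memcpy of len+1 bytes,
so the terminating NUL travels with the string while the returned value
stays the unincremented length, matching sprintf's contract. A
standalone version of the expansion (hypothetical test; like sprintf
itself, it assumes dst and str do not overlap):

    #include <cstdio>
    #include <cstring>

    // sprintf(dst, "%s", str) -> memcpy(dst, str, strlen(str) + 1)
    static int sprintf_s_expansion(char *dst, const char *str) {
      std::size_t len = std::strlen(str);
      std::memcpy(dst, str, len + 1);  // +1 copies the NUL
      return (int)len;                 // result excludes the NUL
    }

    int main() {
      char a[16], b[16];
      int ra = std::sprintf(a, "%s", "hello");
      int rb = sprintf_s_expansion(b, "hello");
      std::printf("%d %d %s %s\n", ra, rb, a, b);  // 5 5 hello hello
      return 0;
    }
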
@@ -1399,17 +1561,17 @@ struct FWriteOpt : public LibCallOptimization {
         !isa<PointerType>(FT->getParamType(3)) ||
         !isa<IntegerType>(FT->getReturnType()))
       return 0;
-    
+
     // Get the element size and count.
     ConstantInt *SizeC = dyn_cast<ConstantInt>(CI->getOperand(2));
     ConstantInt *CountC = dyn_cast<ConstantInt>(CI->getOperand(3));
     if (!SizeC || !CountC) return 0;
     uint64_t Bytes = SizeC->getZExtValue()*CountC->getZExtValue();
-    
+
     // If this is writing zero records, remove the call (it's a noop).
     if (Bytes == 0)
       return ConstantInt::get(CI->getType(), 0);
-    
+
     // If this is writing one byte, turn it into fputc.
     if (Bytes == 1) {  // fwrite(S,1,1,F) -> fputc(S[0],F)
       Value *Char = B.CreateLoad(CastToCStr(CI->getOperand(1), B), "char");
@@ -1435,7 +1597,7 @@ struct FPutsOpt : public LibCallOptimization {
         !isa<PointerType>(FT->getParamType(1)) ||
         !CI->use_empty())
       return 0;
-    
+
     // fputs(s,F) --> fwrite(s,1,strlen(s),F)
     uint64_t Len = GetStringLength(CI->getOperand(1));
     if (!Len) return 0;
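
Both FILE* rewrites trade a libcall for a cheaper one: a single-byte
fwrite becomes fputc, and fputs becomes an fwrite of strlen(s) bytes.
Note the fputs transform only fires when the call's result is unused
(the !CI->use_empty() bail-out above), since fputs and fwrite return
different values on success. In C terms (hypothetical test):

    #include <cstdio>
    #include <cstring>

    int main() {
      const char s[] = "hi\n";
      // fwrite(s, 1, 1, F) -> fputc(s[0], F)
      std::fputc(s[0], stdout);
      // fputs(s, F) -> fwrite(s, 1, strlen(s), F), result discarded
      std::fwrite(s, 1, std::strlen(s), stdout);
      return 0;
    }
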
@@ -1457,7 +1619,7 @@ struct FPrintFOpt : public LibCallOptimization {
         !isa<PointerType>(FT->getParamType(1)) ||
         !isa<IntegerType>(FT->getReturnType()))
       return 0;
-    
+
     // All the optimizations depend on the format string.
     std::string FormatStr;
     if (!GetConstantStringInfo(CI->getOperand(2), FormatStr))
@@ -1477,12 +1639,12 @@ struct FPrintFOpt : public LibCallOptimization {
                  CI->getOperand(1), B);
       return ConstantInt::get(CI->getType(), FormatStr.size());
     }
-    
+
     // The remaining optimizations require the format string to be "%s" or "%c"
     // and have an extra operand.
     if (FormatStr.size() != 2 || FormatStr[0] != '%' || CI->getNumOperands() <4)
       return 0;
-    
+
     // Decode the second character of the format string.
     if (FormatStr[1] == 'c') {
       // fprintf(F, "%c", chr) --> fputc(chr, F)
@@ -1490,7 +1652,7 @@ struct FPrintFOpt : public LibCallOptimization {
       EmitFPutC(CI->getOperand(3), CI->getOperand(1), B);
       return ConstantInt::get(CI->getType(), 1);
     }
-    
+
     if (FormatStr[1] == 's') {
       // fprintf(F, "%s", str) -> fputs(str, F)
       if (!isa<PointerType>(CI->getOperand(3)->getType()) || !CI->use_empty())
@@ -1527,6 +1689,10 @@ namespace {
     SPrintFOpt SPrintF; PrintFOpt PrintF;
     FWriteOpt FWrite; FPutsOpt FPuts; FPrintFOpt FPrintF;
 
+    // Object Size Checking
+    SizeOpt ObjectSize;
+    MemCpyChkOpt MemCpyChk; MemSetChkOpt MemSetChk; MemMoveChkOpt MemMoveChk;
+
     bool Modified;  // This is only used by doInitialization.
   public:
     static char ID; // Pass identification
@@ -1553,7 +1719,7 @@ X("simplify-libcalls", "Simplify well-known library calls");
 
 // Public interface to the Simplify LibCalls pass.
 FunctionPass *llvm::createSimplifyLibCallsPass() {
-  return new SimplifyLibCalls(); 
+  return new SimplifyLibCalls();
 }
 
 /// Optimizations - Populate the Optimizations map with all the optimizations
@@ -1579,7 +1745,7 @@ void SimplifyLibCalls::InitOptimizations() {
   Optimizations["memcpy"] = &MemCpy;
   Optimizations["memmove"] = &MemMove;
   Optimizations["memset"] = &MemSet;
-  
+
   // Math Library Optimizations
   Optimizations["powf"] = &Pow;
   Optimizations["pow"] = &Pow;
@@ -1597,7 +1763,7 @@ void SimplifyLibCalls::InitOptimizations() {
   Optimizations["llvm.exp2.f80"] = &Exp2;
   Optimizations["llvm.exp2.f64"] = &Exp2;
   Optimizations["llvm.exp2.f32"] = &Exp2;
-  
+
 #ifdef HAVE_FLOORF
   Optimizations["floor"] = &UnaryDoubleFP;
 #endif
@@ -1613,7 +1779,7 @@ void SimplifyLibCalls::InitOptimizations() {
 #ifdef HAVE_NEARBYINTF
   Optimizations["nearbyint"] = &UnaryDoubleFP;
 #endif
-  
+
   // Integer Optimizations
   Optimizations["ffs"] = &FFS;
   Optimizations["ffsl"] = &FFS;
@@ -1624,13 +1790,20 @@ void SimplifyLibCalls::InitOptimizations() {
   Optimizations["isdigit"] = &IsDigit;
   Optimizations["isascii"] = &IsAscii;
   Optimizations["toascii"] = &ToAscii;
-  
+
   // Formatting and IO Optimizations
   Optimizations["sprintf"] = &SPrintF;
   Optimizations["printf"] = &PrintF;
   Optimizations["fwrite"] = &FWrite;
   Optimizations["fputs"] = &FPuts;
   Optimizations["fprintf"] = &FPrintF;
+
+  // Object Size Checking
+  Optimizations["llvm.objectsize.i32"] = &ObjectSize;
+  Optimizations["llvm.objectsize.i64"] = &ObjectSize;
+  Optimizations["__memcpy_chk"] = &MemCpyChk;
+  Optimizations["__memset_chk"] = &MemSetChk;
+  Optimizations["__memmove_chk"] = &MemMoveChk;
 }
 
 
@@ -1639,9 +1812,9 @@ void SimplifyLibCalls::InitOptimizations() {
 bool SimplifyLibCalls::runOnFunction(Function &F) {
   if (Optimizations.empty())
     InitOptimizations();
-  
+
   const TargetData *TD = getAnalysisIfAvailable<TargetData>();
-  
+
   IRBuilder<> Builder(F.getContext());
 
   bool Changed = false;
@@ -1650,35 +1823,35 @@ bool SimplifyLibCalls::runOnFunction(Function &F) {
       // Ignore non-calls.
       CallInst *CI = dyn_cast<CallInst>(I++);
       if (!CI) continue;
-      
+
       // Ignore indirect calls and calls to non-external functions.
       Function *Callee = CI->getCalledFunction();
       if (Callee == 0 || !Callee->isDeclaration() ||
           !(Callee->hasExternalLinkage() || Callee->hasDLLImportLinkage()))
         continue;
-      
+
       // Ignore unknown calls.
       LibCallOptimization *LCO = Optimizations.lookup(Callee->getName());
       if (!LCO) continue;
-      
+
       // Set the builder to the instruction after the call.
       Builder.SetInsertPoint(BB, I);
-      
+
       // Try to optimize this call.
       Value *Result = LCO->OptimizeCall(CI, TD, Builder);
       if (Result == 0) continue;
 
       DEBUG(errs() << "SimplifyLibCalls simplified: " << *CI;
             errs() << "  into: " << *Result << "\n");
-      
+
       // Something changed!
       Changed = true;
       ++NumSimplified;
-      
+
       // Inspect the instruction after the call (which was potentially just
       // added) next.
       I = CI; ++I;
-      
+
       if (CI != Result && !CI->use_empty()) {
         CI->replaceAllUsesWith(Result);
         if (!Result->hasName())
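
The driver itself is a name-keyed dispatch: each optimizer object is
registered under every libcall name it handles, and runOnFunction looks
up the direct callee's name and hands the call over. A toy version of
the pattern (hypothetical types, not the pass's real interface):

    #include <cstdio>
    #include <map>
    #include <string>

    struct Opt { virtual const char *run() = 0; virtual ~Opt() {} };
    struct PowOpt : Opt { const char *run() { return "pow simplified"; } };
    struct FFSOpt : Opt { const char *run() { return "ffs simplified"; } };

    int main() {
      PowOpt Pow; FFSOpt FFS;
      std::map<std::string, Opt*> Optimizations;
      Optimizations["pow"]  = &Pow;  // one object serves
      Optimizations["powf"] = &Pow;  // several libcall names
      Optimizations["ffs"]  = &FFS;
      if (Opt *LCO = Optimizations["powf"])
        std::puts(LCO->run());
      return 0;
    }
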
@@ -2432,10 +2605,6 @@ bool SimplifyLibCalls::doInitialization(Module &M) {
 // lround, lroundf, lroundl:
 //   * lround(cnst) -> cnst'
 //
-// memcmp:
-//   * memcmp(x,y,l)   -> cnst
-//      (if all arguments are constant and strlen(x) <= l and strlen(y) <= l)
-//
 // pow, powf, powl:
 //   * pow(exp(x),y)  -> exp(x*y)
 //   * pow(sqrt(x),y) -> pow(x,y*0.5)
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/TailDuplication.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/TailDuplication.cpp
index 68689d6..b06ae3d 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/TailDuplication.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/TailDuplication.cpp
@@ -129,7 +129,7 @@ bool TailDup::shouldEliminateUnconditionalBranch(TerminatorInst *TI,
     if (isa<CallInst>(I) || isa<InvokeInst>(I)) return false;
 
     // Also alloca and malloc.
-    if (isa<AllocationInst>(I)) return false;
+    if (isa<AllocaInst>(I)) return false;
 
     // Some vector instructions can expand into a number of instructions.
     if (isa<ShuffleVectorInst>(I) || isa<ExtractElementInst>(I) ||
@@ -359,8 +359,7 @@ void TailDup::eliminateUnconditionalBranch(BranchInst *Branch) {
       Instruction *Inst = BI++;
       if (isInstructionTriviallyDead(Inst))
         Inst->eraseFromParent();
-      else if (Constant *C = ConstantFoldInstruction(Inst,
-                                                     Inst->getContext())) {
+      else if (Constant *C = ConstantFoldInstruction(Inst)) {
         Inst->replaceAllUsesWith(C);
         Inst->eraseFromParent();
       }
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/TailRecursionElimination.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/TailRecursionElimination.cpp
index b56e170..4119cb9 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/TailRecursionElimination.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/TailRecursionElimination.cpp
@@ -25,7 +25,7 @@
 //     unlikely, that the return returns something else (like constant 0), and
 //     can still be TRE'd.  It can be TRE'd if ALL OTHER return instructions in
 //     the function return the exact same value.
-//  4. If it can prove that callees do not access theier caller stack frame,
+//  4. If it can prove that callees do not access their caller stack frame,
 //     they are marked as eligible for tail call elimination (by the code
 //     generator).
 //
@@ -58,6 +58,7 @@
 #include "llvm/Function.h"
 #include "llvm/Instructions.h"
 #include "llvm/Pass.h"
+#include "llvm/Analysis/CaptureTracking.h"
 #include "llvm/Support/CFG.h"
 #include "llvm/ADT/Statistic.h"
 using namespace llvm;
@@ -75,7 +76,7 @@ namespace {
   private:
     bool ProcessReturningBlock(ReturnInst *RI, BasicBlock *&OldEntry,
                                bool &TailCallsAreMarkedTail,
-                               std::vector<PHINode*> &ArgumentPHIs,
+                               SmallVector<PHINode*, 8> &ArgumentPHIs,
                                bool CannotTailCallElimCallsMarkedTail);
     bool CanMoveAboveCall(Instruction *I, CallInst *CI);
     Value *CanTransformAccumulatorRecursion(Instruction *I, CallInst *CI);
@@ -90,7 +91,6 @@ FunctionPass *llvm::createTailCallEliminationPass() {
   return new TailCallElim();
 }
 
-
 /// AllocaMightEscapeToCalls - Return true if this alloca may be accessed by
 /// callees of this function.  We only do very simple analysis right now, this
 /// could be expanded in the future to use mod/ref information for particular
@@ -100,7 +100,7 @@ static bool AllocaMightEscapeToCalls(AllocaInst *AI) {
   return true;
 }
 
-/// FunctionContainsAllocas - Scan the specified basic block for alloca
+/// CheckForEscapingAllocas - Scan the specified basic block for alloca
 /// instructions.  If it contains any that might be accessed by calls, return
 /// true.
 static bool CheckForEscapingAllocas(BasicBlock *BB,
@@ -127,7 +127,7 @@ bool TailCallElim::runOnFunction(Function &F) {
 
   BasicBlock *OldEntry = 0;
   bool TailCallsAreMarkedTail = false;
-  std::vector<PHINode*> ArgumentPHIs;
+  SmallVector<PHINode*, 8> ArgumentPHIs;
   bool MadeChange = false;
 
   bool FunctionContainsEscapingAllocas = false;
@@ -154,7 +154,6 @@ bool TailCallElim::runOnFunction(Function &F) {
   /// happen.  This bug is PR962.
   if (FunctionContainsEscapingAllocas)
     return false;
-  
 
   // Second pass, change any tail calls to loops.
   for (Function::iterator BB = F.begin(), E = F.end(); BB != E; ++BB)
@@ -204,7 +203,7 @@ bool TailCallElim::CanMoveAboveCall(Instruction *I, CallInst *CI) {
   if (I->mayHaveSideEffects())  // This also handles volatile loads.
     return false;
   
-  if (LoadInst* L = dyn_cast<LoadInst>(I)) {
+  if (LoadInst *L = dyn_cast<LoadInst>(I)) {
     // Loads may always be moved above calls without side effects.
     if (CI->mayHaveSideEffects()) {
       // Non-volatile loads may be moved above a call with side effects if it
@@ -235,7 +234,7 @@ bool TailCallElim::CanMoveAboveCall(Instruction *I, CallInst *CI) {
 // We currently handle static constants and arguments that are not modified as
 // part of the recursion.
 //
-static bool isDynamicConstant(Value *V, CallInst *CI) {
+static bool isDynamicConstant(Value *V, CallInst *CI, ReturnInst *RI) {
   if (isa<Constant>(V)) return true; // Static constants are always dyn consts
 
   // Check to see if this is an immutable argument, if so, the value
@@ -253,6 +252,15 @@ static bool isDynamicConstant(Value *V, CallInst *CI) {
     if (CI->getOperand(ArgNo+1) == Arg)
       return true;
   }
+
+  // Switch cases are always constant integers. If the value is being switched
+  // on and the return is only reachable from one of its cases, it's
+  // effectively constant.
+  if (BasicBlock *UniquePred = RI->getParent()->getUniquePredecessor())
+    if (SwitchInst *SI = dyn_cast<SwitchInst>(UniquePred->getTerminator()))
+      if (SI->getCondition() == V)
+        return SI->getDefaultDest() != RI->getParent();
+
   // Not a constant or immutable argument, we can't safely transform.
   return false;
 }
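
The new isDynamicConstant case covers returns dominated by a single
switch case: when the returned value is the very value being switched
on and the returning block is reachable only through one non-default
case, that value is a known constant on the path in question. A
function shaped like this (hypothetical) is what the check recognizes:

    #include <cstdio>

    // `return x` below executes only when x == 0, so x is effectively
    // the constant 0 there; the default-case tail call can therefore
    // still be turned into a loop.
    static int f(int x) {
      switch (x) {
      case 0:  return x;
      default: return f(x - 1);
      }
    }

    int main() {
      std::printf("%d\n", f(5));  // prints 0
      return 0;
    }
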
@@ -265,10 +273,6 @@ static Value *getCommonReturnValue(ReturnInst *TheRI, CallInst *CI) {
   Function *F = TheRI->getParent()->getParent();
   Value *ReturnedValue = 0;
 
-  // TODO: Handle multiple value ret instructions;
-  if (isa<StructType>(F->getReturnType()))
-      return 0;
-
   for (Function::iterator BBI = F->begin(), E = F->end(); BBI != E; ++BBI)
     if (ReturnInst *RI = dyn_cast<ReturnInst>(BBI->getTerminator()))
       if (RI != TheRI) {
@@ -278,7 +282,7 @@ static Value *getCommonReturnValue(ReturnInst *TheRI, CallInst *CI) {
         // evaluatable at the start of the initial invocation of the function,
         // instead of at the end of the evaluation.
         //
-        if (!isDynamicConstant(RetOp, CI))
+        if (!isDynamicConstant(RetOp, CI, RI))
           return 0;
 
         if (ReturnedValue && RetOp != ReturnedValue)
@@ -315,7 +319,7 @@ Value *TailCallElim::CanTransformAccumulatorRecursion(Instruction *I,
 
 bool TailCallElim::ProcessReturningBlock(ReturnInst *Ret, BasicBlock *&OldEntry,
                                          bool &TailCallsAreMarkedTail,
-                                         std::vector<PHINode*> &ArgumentPHIs,
+                                         SmallVector<PHINode*, 8> &ArgumentPHIs,
                                        bool CannotTailCallElimCallsMarkedTail) {
   BasicBlock *BB = Ret->getParent();
   Function *F = BB->getParent();
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/BasicBlockUtils.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/BasicBlockUtils.cpp
index 4931ab3..2974592 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/BasicBlockUtils.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/BasicBlockUtils.cpp
@@ -65,9 +65,6 @@ void llvm::DeleteDeadBlock(BasicBlock *BB) {
 /// when all entries to the PHI nodes in a block are guaranteed equal, such as
 /// when the block has exactly one predecessor.
 void llvm::FoldSingleEntryPHINodes(BasicBlock *BB) {
-  if (!isa<PHINode>(BB->begin()))
-    return;
-  
   while (PHINode *PN = dyn_cast<PHINode>(BB->begin())) {
     if (PN->getIncomingValue(0) != PN)
       PN->replaceAllUsesWith(PN->getIncomingValue(0));
@@ -97,10 +94,14 @@ void llvm::DeleteDeadPHIs(BasicBlock *BB) {
 
 /// MergeBlockIntoPredecessor - Attempts to merge a block into its predecessor,
 /// if possible.  The return value indicates success or failure.
-bool llvm::MergeBlockIntoPredecessor(BasicBlock* BB, Pass* P) {
+bool llvm::MergeBlockIntoPredecessor(BasicBlock *BB, Pass *P) {
   pred_iterator PI(pred_begin(BB)), PE(pred_end(BB));
-  // Can't merge the entry block.
-  if (pred_begin(BB) == pred_end(BB)) return false;
+  // Can't merge the entry block.  Don't merge away blocks that have their
+  // address taken: this is a bug if the predecessor block is the entry node
+  // (because we'd end up taking the address of the entry) and undesirable in
+  // any case.
+  if (pred_begin(BB) == pred_end(BB) ||
+      BB->hasAddressTaken()) return false;
   
   BasicBlock *PredBB = *PI++;
   for (; PI != PE; ++PI)  // Search all predecessors, see if they are all same
@@ -383,6 +384,12 @@ BasicBlock *llvm::SplitBlockPredecessors(BasicBlock *BB,
   bool IsLoopEntry = !!L;
   bool SplitMakesNewLoopHeader = false;
   for (unsigned i = 0; i != NumPreds; ++i) {
+    // This is slightly more strict than necessary; the minimum requirement
+    // is that there be no more than one indirectbr branching to BB. And
+    // all BlockAddress uses would need to be updated.
+    assert(!isa<IndirectBrInst>(Preds[i]->getTerminator()) &&
+           "Cannot split an edge from an IndirectBrInst");
+
     Preds[i]->getTerminator()->replaceUsesOfWith(BB, NewBB);
 
     if (LI) {
@@ -425,14 +432,26 @@ BasicBlock *llvm::SplitBlockPredecessors(BasicBlock *BB,
 
   if (L) {
     if (IsLoopEntry) {
-      if (Loop *PredLoop = LI->getLoopFor(Preds[0])) {
-        // Add the new block to the nearest enclosing loop (and not an
-        // adjacent loop).
-        while (PredLoop && !PredLoop->contains(BB))
-          PredLoop = PredLoop->getParentLoop();
-        if (PredLoop)
-          PredLoop->addBasicBlockToLoop(NewBB, LI->getBase());
-      }
+      // Add the new block to the nearest enclosing loop (and not an
+      // adjacent loop). To find this, examine each of the predecessors and
+      // determine which loops enclose them, and select the most-nested loop
+      // which contains the loop containing the block being split.
+      Loop *InnermostPredLoop = 0;
+      for (unsigned i = 0; i != NumPreds; ++i)
+        if (Loop *PredLoop = LI->getLoopFor(Preds[i])) {
+          // Seek a loop which actually contains the block being split (to
+          // avoid adjacent loops).
+          while (PredLoop && !PredLoop->contains(BB))
+            PredLoop = PredLoop->getParentLoop();
+          // Select the most-nested of these loops which contains the block.
+          if (PredLoop &&
+              PredLoop->contains(BB) &&
+              (!InnermostPredLoop ||
+               InnermostPredLoop->getLoopDepth() < PredLoop->getLoopDepth()))
+            InnermostPredLoop = PredLoop;
+        }
+      if (InnermostPredLoop)
+        InnermostPredLoop->addBasicBlockToLoop(NewBB, LI->getBase());
     } else {
       L->addBasicBlockToLoop(NewBB, LI->getBase());
       if (SplitMakesNewLoopHeader)
@@ -663,7 +682,7 @@ void llvm::CopyPrecedingStopPoint(Instruction *I,
   if (I != I->getParent()->begin()) {
     BasicBlock::iterator BBI = I;  --BBI;
     if (DbgStopPointInst *DSPI = dyn_cast<DbgStopPointInst>(BBI)) {
-      CallInst *newDSPI = DSPI->clone();
+      CallInst *newDSPI = cast<CallInst>(DSPI->clone());
       newDSPI->insertBefore(InsertPos);
     }
   }
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/BasicInliner.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/BasicInliner.cpp
index 4b720b1..b5ffe06 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/BasicInliner.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/BasicInliner.cpp
@@ -34,7 +34,7 @@ namespace llvm {
 
   /// BasicInlinerImpl - BasicInliner implementation class. This hides
   /// container info, used by basic inliner, from public interface.
-  struct VISIBILITY_HIDDEN BasicInlinerImpl {
+  struct BasicInlinerImpl {
     
     BasicInlinerImpl(const BasicInlinerImpl&); // DO NOT IMPLEMENT
     void operator=(const BasicInlinerImpl&); // DO NOT IMPLEMENT
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/BreakCriticalEdges.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/BreakCriticalEdges.cpp
index 849b2b5..ccd97c8 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/BreakCriticalEdges.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/BreakCriticalEdges.cpp
@@ -26,7 +26,6 @@
 #include "llvm/Instructions.h"
 #include "llvm/Type.h"
 #include "llvm/Support/CFG.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/Statistic.h"
@@ -35,7 +34,7 @@ using namespace llvm;
 STATISTIC(NumBroken, "Number of blocks inserted");
 
 namespace {
-  struct VISIBILITY_HIDDEN BreakCriticalEdges : public FunctionPass {
+  struct BreakCriticalEdges : public FunctionPass {
     static char ID; // Pass identification, replacement for typeid
     BreakCriticalEdges() : FunctionPass(&ID) {}
 
@@ -70,7 +69,7 @@ bool BreakCriticalEdges::runOnFunction(Function &F) {
   bool Changed = false;
   for (Function::iterator I = F.begin(), E = F.end(); I != E; ++I) {
     TerminatorInst *TI = I->getTerminator();
-    if (TI->getNumSuccessors() > 1)
+    if (TI->getNumSuccessors() > 1 && !isa<IndirectBrInst>(TI))
       for (unsigned i = 0, e = TI->getNumSuccessors(); i != e; ++i)
         if (SplitCriticalEdge(TI, i, this)) {
           ++NumBroken;
@@ -151,14 +150,29 @@ static void CreatePHIsForSplitLoopExit(SmallVectorImpl<BasicBlock *> &Preds,
 
 /// SplitCriticalEdge - If this edge is a critical edge, insert a new node to
 /// split the critical edge.  This will update DominatorTree and
-/// DominatorFrontier  information if it is available, thus calling this pass
-/// will not invalidate  any of them.  This returns true if the edge was split,
-/// false otherwise.  This ensures that all edges to that dest go to one block
-/// instead of each going to a different block.
-//
+/// DominatorFrontier information if it is available, thus calling this pass
+/// will not invalidate either of them. This returns the new block if the edge
+/// was split, null otherwise.
+///
+/// If MergeIdenticalEdges is true (not the default), *all* edges from TI to the
+/// specified successor will be merged into the same critical edge block.  
+/// This is most commonly interesting with switch instructions, which may 
+/// have many edges to any one destination.  This ensures that all edges to that
+/// dest go to one block instead of each going to a different block, but isn't 
+/// the standard definition of a "critical edge".
+///
+/// It is invalid to call this function on a critical edge that starts at an
+/// IndirectBrInst.  Splitting these edges will almost always create an invalid
+/// program because the address of the new block won't be the one that is jumped
+/// to.
+///
 BasicBlock *llvm::SplitCriticalEdge(TerminatorInst *TI, unsigned SuccNum,
                                     Pass *P, bool MergeIdenticalEdges) {
   if (!isCriticalEdge(TI, SuccNum, MergeIdenticalEdges)) return 0;
+  
+  assert(!isa<IndirectBrInst>(TI) &&
+         "Cannot split critical edge from IndirectBrInst");
+  
   BasicBlock *TIBB = TI->getParent();
   BasicBlock *DestBB = TI->getSuccessor(SuccNum);
 
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/CMakeLists.txt b/libclamav/c++/llvm/lib/Transforms/Utils/CMakeLists.txt
index aca4bca..93577b4 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/CMakeLists.txt
@@ -8,21 +8,20 @@ add_llvm_library(LLVMTransformUtils
   CloneModule.cpp
   CodeExtractor.cpp
   DemoteRegToStack.cpp
-  InlineCost.cpp
   InlineFunction.cpp
   InstructionNamer.cpp
   LCSSA.cpp
   Local.cpp
   LoopSimplify.cpp
-  LowerAllocations.cpp
+  LoopUnroll.cpp
   LowerInvoke.cpp
   LowerSwitch.cpp
   Mem2Reg.cpp
   PromoteMemoryToRegister.cpp
+  SSAUpdater.cpp
   SSI.cpp
   SimplifyCFG.cpp
   UnifyFunctionExitNodes.cpp
-  UnrollLoop.cpp
   ValueMapper.cpp
   )
 
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/CloneFunction.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/CloneFunction.cpp
index f042bc9..162d7b3 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/CloneFunction.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/CloneFunction.cpp
@@ -22,7 +22,6 @@
 #include "llvm/Function.h"
 #include "llvm/LLVMContext.h"
 #include "llvm/Support/CFG.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Transforms/Utils/ValueMapper.h"
 #include "llvm/Analysis/ConstantFolding.h"
 #include "llvm/Analysis/DebugInfo.h"
@@ -176,7 +175,7 @@ Function *llvm::CloneFunction(const Function *F,
 namespace {
   /// PruningFunctionCloner - This class is a private class used to implement
   /// the CloneAndPruneFunctionInto method.
-  struct VISIBILITY_HIDDEN PruningFunctionCloner {
+  struct PruningFunctionCloner {
     Function *NewFunc;
     const Function *OldFunc;
     DenseMap<const Value*, Value*> &ValueMap;
@@ -324,21 +323,17 @@ void PruningFunctionCloner::CloneBlock(const BasicBlock *BB,
 /// mapping its operands through ValueMap if they are available.
 Constant *PruningFunctionCloner::
 ConstantFoldMappedInstruction(const Instruction *I) {
-  LLVMContext &Context = I->getContext();
-  
   SmallVector<Constant*, 8> Ops;
   for (unsigned i = 0, e = I->getNumOperands(); i != e; ++i)
     if (Constant *Op = dyn_cast_or_null<Constant>(MapValue(I->getOperand(i),
-                                                           ValueMap,
-                                                           Context)))
+                                                           ValueMap)))
       Ops.push_back(Op);
     else
       return 0;  // All operands not constant!
 
   if (const CmpInst *CI = dyn_cast<CmpInst>(I))
-    return ConstantFoldCompareInstOperands(CI->getPredicate(),
-                                           &Ops[0], Ops.size(), 
-                                           Context, TD);
+    return ConstantFoldCompareInstOperands(CI->getPredicate(), Ops[0], Ops[1],
+                                           TD);
 
   if (const LoadInst *LI = dyn_cast<LoadInst>(I))
     if (ConstantExpr *CE = dyn_cast<ConstantExpr>(Ops[0]))
@@ -346,10 +341,31 @@ ConstantFoldMappedInstruction(const Instruction *I) {
         if (GlobalVariable *GV = dyn_cast<GlobalVariable>(CE->getOperand(0)))
           if (GV->isConstant() && GV->hasDefinitiveInitializer())
             return ConstantFoldLoadThroughGEPConstantExpr(GV->getInitializer(),
-                                                          CE, Context);
+                                                          CE);
 
   return ConstantFoldInstOperands(I->getOpcode(), I->getType(), &Ops[0],
-                                  Ops.size(), Context, TD);
+                                  Ops.size(), TD);
+}
+
+static MDNode *UpdateInlinedAtInfo(MDNode *InsnMD, MDNode *TheCallMD,
+                                   LLVMContext &Context) {
+  DILocation ILoc(InsnMD);
+  if (ILoc.isNull()) return InsnMD;
+
+  DILocation CallLoc(TheCallMD);
+  if (CallLoc.isNull()) return InsnMD;
+
+  DILocation OrigLocation = ILoc.getOrigLocation();
+  MDNode *NewLoc = TheCallMD;
+  if (!OrigLocation.isNull())
+    NewLoc = UpdateInlinedAtInfo(OrigLocation.getNode(), TheCallMD, Context);
+
+  SmallVector<Value *, 4> MDVs;
+  MDVs.push_back(InsnMD->getElement(0)); // Line
+  MDVs.push_back(InsnMD->getElement(1)); // Col
+  MDVs.push_back(InsnMD->getElement(2)); // Scope
+  MDVs.push_back(NewLoc);
+  return MDNode::get(Context, MDVs.data(), MDVs.size());
 }
 
 /// CloneAndPruneFunctionInto - This works exactly like CloneFunctionInto,
@@ -364,9 +380,9 @@ void llvm::CloneAndPruneFunctionInto(Function *NewFunc, const Function *OldFunc,
                                      SmallVectorImpl<ReturnInst*> &Returns,
                                      const char *NameSuffix, 
                                      ClonedCodeInfo *CodeInfo,
-                                     const TargetData *TD) {
+                                     const TargetData *TD,
+                                     Instruction *TheCall) {
   assert(NameSuffix && "NameSuffix cannot be null!");
-  LLVMContext &Context = OldFunc->getContext();
   
 #ifndef NDEBUG
   for (Function::const_arg_iterator II = OldFunc->arg_begin(), 
@@ -404,19 +420,52 @@ void llvm::CloneAndPruneFunctionInto(Function *NewFunc, const Function *OldFunc,
     // references as we go.  This uses ValueMap to do all the hard work.
     //
     BasicBlock::iterator I = NewBB->begin();
+
+    LLVMContext &Context = OldFunc->getContext();
+    unsigned DbgKind = Context.getMetadata().getMDKind("dbg");
+    MDNode *TheCallMD = NULL;
+    SmallVector<Value *, 4> MDVs;
+    if (TheCall && TheCall->hasMetadata()) 
+      TheCallMD = Context.getMetadata().getMD(DbgKind, TheCall);
     
     // Handle PHI nodes specially, as we have to remove references to dead
     // blocks.
     if (PHINode *PN = dyn_cast<PHINode>(I)) {
       // Skip over all PHI nodes, remembering them for later.
       BasicBlock::const_iterator OldI = BI->begin();
-      for (; (PN = dyn_cast<PHINode>(I)); ++I, ++OldI)
+      for (; (PN = dyn_cast<PHINode>(I)); ++I, ++OldI) {
+        if (I->hasMetadata()) {
+          if (TheCallMD) {
+            if (MDNode *IMD = Context.getMetadata().getMD(DbgKind, I)) {
+              MDNode *NewMD = UpdateInlinedAtInfo(IMD, TheCallMD, Context);
+              Context.getMetadata().addMD(DbgKind, NewMD, I);
+            }
+          } else {
+            // The cloned instruction has dbg info but the call instruction
+            // does not have dbg info. Remove dbg info from cloned instruction.
+            Context.getMetadata().removeMD(DbgKind, I);
+          }
+        }
         PHIToResolve.push_back(cast<PHINode>(OldI));
+      }
     }
     
     // Otherwise, remap the rest of the instructions normally.
-    for (; I != NewBB->end(); ++I)
+    for (; I != NewBB->end(); ++I) {
+      if (I->hasMetadata()) {
+        if (TheCallMD) {
+          if (MDNode *IMD = Context.getMetadata().getMD(DbgKind, I)) {
+            MDNode *NewMD = UpdateInlinedAtInfo(IMD, TheCallMD, Context);
+            Context.getMetadata().addMD(DbgKind, NewMD, I);
+          }
+        } else {
+          // The cloned instruction has dbg info but the call instruction
+          // does not have dbg info. Remove dbg info from cloned instruction.
+          Context.getMetadata().removeMD(DbgKind, I);
+        }
+      }
       RemapInstruction(I, ValueMap);
+    }
   }
   
   // Defer PHI resolution until rest of function is resolved, PHI resolution
@@ -437,7 +486,7 @@ void llvm::CloneAndPruneFunctionInto(Function *NewFunc, const Function *OldFunc,
         if (BasicBlock *MappedBlock = 
             cast_or_null<BasicBlock>(ValueMap[PN->getIncomingBlock(pred)])) {
           Value *InVal = MapValue(PN->getIncomingValue(pred),
-                                  ValueMap, Context);
+                                  ValueMap);
           assert(InVal && "Unknown input value?");
           PN->setIncomingValue(pred, InVal);
           PN->setIncomingBlock(pred, MappedBlock);
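
UpdateInlinedAtInfo walks the location's existing inlined-at chain and
splices the call site's location onto its tail, so that code inlined
through several levels still reports every enclosing frame in its debug
info. The recursion in miniature (hypothetical struct standing in for
the "dbg" metadata node):

    #include <cstdio>

    struct Loc { int line; const Loc *inlinedAt; };

    // Return a copy of loc whose inlined-at chain ends in callSite
    // (toy fixed-size allocator, for brevity only).
    static const Loc *appendInlinedAt(const Loc *loc, const Loc *callSite) {
      static Loc pool[16];
      static int n = 0;
      Loc *copy = &pool[n++];
      copy->line = loc->line;
      copy->inlinedAt = loc->inlinedAt
                            ? appendInlinedAt(loc->inlinedAt, callSite)
                            : callSite;
      return copy;
    }

    int main() {
      Loc inner = { 10, 0 };  // location inside the callee
      Loc call  = { 42, 0 };  // location of the call being inlined
      const Loc *updated = appendInlinedAt(&inner, &call);
      std::printf("line %d, inlined at line %d\n", updated->line,
                  updated->inlinedAt->line);
      return 0;
    }
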
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/CloneModule.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/CloneModule.cpp
index 0285f8c..a163f89 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/CloneModule.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/CloneModule.cpp
@@ -89,8 +89,7 @@ Module *llvm::CloneModule(const Module *M,
     GlobalVariable *GV = cast<GlobalVariable>(ValueMap[I]);
     if (I->hasInitializer())
       GV->setInitializer(cast<Constant>(MapValue(I->getInitializer(),
-                                                 ValueMap,
-                                                 M->getContext())));
+                                                 ValueMap)));
     GV->setLinkage(I->getLinkage());
     GV->setThreadLocal(I->isThreadLocal());
     GV->setConstant(I->isConstant());
@@ -121,7 +120,7 @@ Module *llvm::CloneModule(const Module *M,
     GlobalAlias *GA = cast<GlobalAlias>(ValueMap[I]);
     GA->setLinkage(I->getLinkage());
     if (const Constant* C = I->getAliasee())
-      GA->setAliasee(cast<Constant>(MapValue(C, ValueMap, M->getContext())));
+      GA->setAliasee(cast<Constant>(MapValue(C, ValueMap)));
   }
   
   return New;
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/CodeExtractor.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/CodeExtractor.cpp
index c39ccf7..f966681 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/CodeExtractor.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/CodeExtractor.cpp
@@ -26,7 +26,6 @@
 #include "llvm/Analysis/Verifier.h"
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
 #include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
@@ -44,7 +43,7 @@ AggregateArgsOpt("aggregate-extracted-args", cl::Hidden,
                  cl::desc("Aggregate arguments to code-extracted functions"));
 
 namespace {
-  class VISIBILITY_HIDDEN CodeExtractor {
+  class CodeExtractor {
     typedef std::vector<Value*> Values;
     std::set<BasicBlock*> BlocksToExtract;
     DominatorTree* DT;
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/InlineCost.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/InlineCost.cpp
deleted file mode 100644
index a61b1a9..0000000
--- a/libclamav/c++/llvm/lib/Transforms/Utils/InlineCost.cpp
+++ /dev/null
@@ -1,333 +0,0 @@
-//===- InlineCost.cpp - Cost analysis for inliner -------------------------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file implements inline cost analysis.
-//
-//===----------------------------------------------------------------------===//
-
-#include "llvm/Transforms/Utils/InlineCost.h"
-#include "llvm/Support/CallSite.h"
-#include "llvm/CallingConv.h"
-#include "llvm/IntrinsicInst.h"
-#include "llvm/ADT/SmallPtrSet.h"
-using namespace llvm;
-
-// CountCodeReductionForConstant - Figure out an approximation for how many
-// instructions will be constant folded if the specified value is constant.
-//
-unsigned InlineCostAnalyzer::FunctionInfo::
-         CountCodeReductionForConstant(Value *V) {
-  unsigned Reduction = 0;
-  for (Value::use_iterator UI = V->use_begin(), E = V->use_end(); UI != E; ++UI)
-    if (isa<BranchInst>(*UI))
-      Reduction += 40;          // Eliminating a conditional branch is a big win
-    else if (SwitchInst *SI = dyn_cast<SwitchInst>(*UI))
-      // Eliminating a switch is a big win, proportional to the number of edges
-      // deleted.
-      Reduction += (SI->getNumSuccessors()-1) * 40;
-    else if (CallInst *CI = dyn_cast<CallInst>(*UI)) {
-      // Turning an indirect call into a direct call is a BIG win
-      Reduction += CI->getCalledValue() == V ? 500 : 0;
-    } else if (InvokeInst *II = dyn_cast<InvokeInst>(*UI)) {
-      // Turning an indirect call into a direct call is a BIG win
-      Reduction += II->getCalledValue() == V ? 500 : 0;
-    } else {
-      // Figure out if this instruction will be removed due to simple constant
-      // propagation.
-      Instruction &Inst = cast<Instruction>(**UI);
-      
-      // We can't constant propagate instructions which have effects or
-      // read memory.
-      //
-      // FIXME: It would be nice to capture the fact that a load from a
-      // pointer-to-constant-global is actually a *really* good thing to zap.
-      // Unfortunately, we don't know the pointer that may get propagated here,
-      // so we can't make this decision.
-      if (Inst.mayReadFromMemory() || Inst.mayHaveSideEffects() ||
-          isa<AllocationInst>(Inst)) 
-        continue;
-
-      bool AllOperandsConstant = true;
-      for (unsigned i = 0, e = Inst.getNumOperands(); i != e; ++i)
-        if (!isa<Constant>(Inst.getOperand(i)) && Inst.getOperand(i) != V) {
-          AllOperandsConstant = false;
-          break;
-        }
-
-      if (AllOperandsConstant) {
-        // We will get to remove this instruction...
-        Reduction += 7;
-
-        // And any other instructions that use it which become constants
-        // themselves.
-        Reduction += CountCodeReductionForConstant(&Inst);
-      }
-    }
-
-  return Reduction;
-}
-
-// CountCodeReductionForAlloca - Figure out an approximation of how much smaller
-// the function will be if it is inlined into a context where an argument
-// becomes an alloca.
-//
-unsigned InlineCostAnalyzer::FunctionInfo::
-         CountCodeReductionForAlloca(Value *V) {
-  if (!isa<PointerType>(V->getType())) return 0;  // Not a pointer
-  unsigned Reduction = 0;
-  for (Value::use_iterator UI = V->use_begin(), E = V->use_end(); UI != E;++UI){
-    Instruction *I = cast<Instruction>(*UI);
-    if (isa<LoadInst>(I) || isa<StoreInst>(I))
-      Reduction += 10;
-    else if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(I)) {
-      // If the GEP has variable indices, we won't be able to do much with it.
-      if (!GEP->hasAllConstantIndices())
-        Reduction += CountCodeReductionForAlloca(GEP)+15;
-    } else {
-      // If there is some other strange instruction, we're not going to be able
-      // to do much if we inline this.
-      return 0;
-    }
-  }
-
-  return Reduction;
-}
-
-/// analyzeFunction - Fill in the current structure with information gleaned
-/// from the specified function.
-void InlineCostAnalyzer::FunctionInfo::analyzeFunction(Function *F) {
-  unsigned NumInsts = 0, NumBlocks = 0, NumVectorInsts = 0, NumRets = 0;
-
-  // Look at the size of the callee.  Each basic block counts as 20 units, and
-  // each instruction counts as 5.
-  for (Function::const_iterator BB = F->begin(), E = F->end(); BB != E; ++BB) {
-    for (BasicBlock::const_iterator II = BB->begin(), E = BB->end();
-         II != E; ++II) {
-      if (isa<PHINode>(II)) continue;           // PHI nodes don't count.
-
-      // Special handling for calls.
-      if (isa<CallInst>(II) || isa<InvokeInst>(II)) {
-        if (isa<DbgInfoIntrinsic>(II))
-          continue;  // Debug intrinsics don't count as size.
-        
-        CallSite CS = CallSite::get(const_cast<Instruction*>(&*II));
-        
-        // If this function contains a call to setjmp or _setjmp, never inline
-        // it.  This is a hack because we depend on the user marking their local
-        // variables as volatile if they are live across a setjmp call, and they
-        // probably won't do this in callers.
-        if (Function *F = CS.getCalledFunction())
-          if (F->isDeclaration() && 
-              (F->getName() == "setjmp" || F->getName() == "_setjmp")) {
-            NeverInline = true;
-            return;
-          }
-
-        // Calls often compile into many machine instructions.  Bump up their
-        // cost to reflect this.
-        if (!isa<IntrinsicInst>(II))
-          NumInsts += 5;
-      }
-      
-      if (const AllocaInst *AI = dyn_cast<AllocaInst>(II)) {
-        if (!AI->isStaticAlloca())
-          this->usesDynamicAlloca = true;
-      }
-
-      if (isa<ExtractElementInst>(II) || isa<VectorType>(II->getType()))
-        ++NumVectorInsts; 
-      
-      // Noop casts, including ptr <-> int,  don't count.
-      if (const CastInst *CI = dyn_cast<CastInst>(II)) {
-        if (CI->isLosslessCast() || isa<IntToPtrInst>(CI) || 
-            isa<PtrToIntInst>(CI))
-          continue;
-      } else if (const GetElementPtrInst *GEPI =
-                 dyn_cast<GetElementPtrInst>(II)) {
-        // If a GEP has all constant indices, it will probably be folded with
-        // a load/store.
-        if (GEPI->hasAllConstantIndices())
-          continue;
-      }
-
-      if (isa<ReturnInst>(II))
-        ++NumRets;
-      
-      ++NumInsts;
-    }
-
-    ++NumBlocks;
-  }
-
-  // A function with exactly one return has it removed during the inlining
-  // process (see InlineFunction), so don't count it.
-  if (NumRets==1)
-    --NumInsts;
-
-  this->NumBlocks      = NumBlocks;
-  this->NumInsts       = NumInsts;
-  this->NumVectorInsts = NumVectorInsts;
-
-  // Check out all of the arguments to the function, figuring out how much
-  // code can be eliminated if one of the arguments is a constant.
-  for (Function::arg_iterator I = F->arg_begin(), E = F->arg_end(); I != E; ++I)
-    ArgumentWeights.push_back(ArgInfo(CountCodeReductionForConstant(I),
-                                      CountCodeReductionForAlloca(I)));
-}
-
-
-
-// getInlineCost - The heuristic used to determine if we should inline the
-// function call or not.
-//
-InlineCost InlineCostAnalyzer::getInlineCost(CallSite CS,
-                               SmallPtrSet<const Function *, 16> &NeverInline) {
-  Instruction *TheCall = CS.getInstruction();
-  Function *Callee = CS.getCalledFunction();
-  Function *Caller = TheCall->getParent()->getParent();
-
-  // Don't inline functions which can be redefined at link-time to mean
-  // something else.  Don't inline functions marked noinline.
-  if (Callee->mayBeOverridden() ||
-      Callee->hasFnAttr(Attribute::NoInline) || NeverInline.count(Callee))
-    return llvm::InlineCost::getNever();
-
-  // InlineCost - This value measures how good of an inline candidate this call
-  // site is to inline.  A lower inline cost make is more likely for the call to
-  // be inlined.  This value may go negative.
-  //
-  int InlineCost = 0;
-  
-  // If there is only one call of the function, and it has internal linkage,
-  // make it almost guaranteed to be inlined.
-  //
-  if (Callee->hasLocalLinkage() && Callee->hasOneUse())
-    InlineCost -= 15000;
-  
-  // If this function uses the coldcc calling convention, prefer not to inline
-  // it.
-  if (Callee->getCallingConv() == CallingConv::Cold)
-    InlineCost += 2000;
-  
-  // If the instruction after the call, or if the normal destination of the
-  // invoke is an unreachable instruction, the function is noreturn.  As such,
-  // there is little point in inlining this.
-  if (InvokeInst *II = dyn_cast<InvokeInst>(TheCall)) {
-    if (isa<UnreachableInst>(II->getNormalDest()->begin()))
-      InlineCost += 10000;
-  } else if (isa<UnreachableInst>(++BasicBlock::iterator(TheCall)))
-    InlineCost += 10000;
-  
-  // Get information about the callee...
-  FunctionInfo &CalleeFI = CachedFunctionInfo[Callee];
-  
-  // If we haven't calculated this information yet, do so now.
-  if (CalleeFI.NumBlocks == 0)
-    CalleeFI.analyzeFunction(Callee);
-
-  // If we should never inline this, return a huge cost.
-  if (CalleeFI.NeverInline)
-    return InlineCost::getNever();
-
-  // FIXME: It would be nice to kill off CalleeFI.NeverInline. Then we
-  // could move this up and avoid computing the FunctionInfo for
-  // things we are going to just return always inline for. This
-  // requires handling setjmp somewhere else, however.
-  if (!Callee->isDeclaration() && Callee->hasFnAttr(Attribute::AlwaysInline))
-    return InlineCost::getAlways();
-    
-  if (CalleeFI.usesDynamicAlloca) {
-    // Get information about the caller...
-    FunctionInfo &CallerFI = CachedFunctionInfo[Caller];
-
-    // If we haven't calculated this information yet, do so now.
-    if (CallerFI.NumBlocks == 0)
-      CallerFI.analyzeFunction(Caller);
-
-    // Don't inline a callee with dynamic alloca into a caller without them.
-    // Functions containing dynamic alloca's are inefficient in various ways;
-    // don't create more inefficiency.
-    if (!CallerFI.usesDynamicAlloca)
-      return InlineCost::getNever();
-  }
-
-  // Add to the inline quality for properties that make the call valuable to
-  // inline.  This includes factors that indicate that the result of inlining
-  // the function will be optimizable.  Currently this just looks at arguments
-  // passed into the function.
-  //
-  unsigned ArgNo = 0;
-  for (CallSite::arg_iterator I = CS.arg_begin(), E = CS.arg_end();
-       I != E; ++I, ++ArgNo) {
-    // Each argument passed in has a cost at both the caller and the callee
-    // sides.  This favors functions that take many arguments over functions
-    // that take few arguments.
-    InlineCost -= 20;
-    
-    // If this is a function being passed in, it is very likely that we will be
-    // able to turn an indirect function call into a direct function call.
-    if (isa<Function>(I))
-      InlineCost -= 100;
-    
-    // If an alloca is passed in, inlining this function is likely to allow
-    // significant future optimization possibilities (like scalar promotion, and
-    // scalarization), so encourage the inlining of the function.
-    //
-    else if (isa<AllocaInst>(I)) {
-      if (ArgNo < CalleeFI.ArgumentWeights.size())
-        InlineCost -= CalleeFI.ArgumentWeights[ArgNo].AllocaWeight;
-      
-      // If this is a constant being passed into the function, use the argument
-      // weights calculated for the callee to determine how much will be folded
-      // away with this information.
-    } else if (isa<Constant>(I)) {
-      if (ArgNo < CalleeFI.ArgumentWeights.size())
-        InlineCost -= CalleeFI.ArgumentWeights[ArgNo].ConstantWeight;
-    }
-  }
-  
-  // Now that we have considered all of the factors that make the call site more
-  // likely to be inlined, look at factors that make us not want to inline it.
-  
-  // Don't inline into something too big, which would make it bigger.
-  // "size" here is the number of basic blocks, not instructions.
-  //
-  InlineCost += Caller->size()/15;
-  
-  // Look at the size of the callee. Each instruction counts as 5.
-  InlineCost += CalleeFI.NumInsts*5;
-
-  return llvm::InlineCost::get(InlineCost);
-}
-
-// getInlineFudgeFactor - Return a > 1.0 factor if the inliner should use a
-// higher threshold to determine if the function call should be inlined.
-float InlineCostAnalyzer::getInlineFudgeFactor(CallSite CS) {
-  Function *Callee = CS.getCalledFunction();
-  
-  // Get information about the callee...
-  FunctionInfo &CalleeFI = CachedFunctionInfo[Callee];
-  
-  // If we haven't calculated this information yet, do so now.
-  if (CalleeFI.NumBlocks == 0)
-    CalleeFI.analyzeFunction(Callee);
-
-  float Factor = 1.0f;
-  // Single BB functions are often written to be inlined.
-  if (CalleeFI.NumBlocks == 1)
-    Factor += 0.5f;
-
-  // Be more aggressive if the function contains a good chunk (if it makes up
-  // at least 10% of the instructions) of vector instructions.
-  if (CalleeFI.NumVectorInsts > CalleeFI.NumInsts/2)
-    Factor += 2.0f;
-  else if (CalleeFI.NumVectorInsts > CalleeFI.NumInsts/10)
-    Factor += 1.5f;
-  return Factor;
-}
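
For reference, the deleted analyzer's scoring is plain integer
arithmetic: fixed bonuses and penalties accumulate into one signed
score, and lower means more likely to inline. A worked example with
hypothetical inputs (a callee of 40 instructions, a caller of 30 basic
blocks, one constant argument with a precomputed ConstantWeight of 7),
using the constants from the code above:

    #include <cstdio>

    int main() {
      int InlineCost = 0;
      InlineCost -= 20;       // every argument passed earns a bonus
      InlineCost -= 7;        // ConstantWeight of the constant argument
      InlineCost += 30 / 15;  // caller size: one point per 15 blocks
      InlineCost += 40 * 5;   // callee size: five points per instruction
      std::printf("inline cost = %d\n", InlineCost);  // prints 175
      return 0;
    }
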
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/InlineFunction.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/InlineFunction.cpp
index f9efc34..043046c 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/InlineFunction.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/InlineFunction.cpp
@@ -316,7 +316,7 @@ bool llvm::InlineFunction(CallSite CS, CallGraph *CG, const TargetData *TD,
           !CalledFunc->onlyReadsMemory()) {
         const Type *AggTy = cast<PointerType>(I->getType())->getElementType();
         const Type *VoidPtrTy = 
-            PointerType::getUnqual(Type::getInt8Ty(Context));
+            Type::getInt8PtrTy(Context);
 
         // Create the alloca.  If we have TargetData, use nice alignment.
         unsigned Align = 1;
@@ -386,7 +386,7 @@ bool llvm::InlineFunction(CallSite CS, CallGraph *CG, const TargetData *TD,
     // (which can happen, e.g., because an argument was constant), but we'll be
     // happy with whatever the cloner can do.
     CloneAndPruneFunctionInto(Caller, CalledFunc, ValueMap, Returns, ".i",
-                              &InlinedFunctionInfo, TD);
+                              &InlinedFunctionInfo, TD, TheCall);
 
     // Remember the first block that is newly cloned over.
     FirstNewBlock = LastBlock; ++FirstNewBlock;
@@ -444,18 +444,15 @@ bool llvm::InlineFunction(CallSite CS, CallGraph *CG, const TargetData *TD,
   if (InlinedFunctionInfo.ContainsDynamicAllocas) {
     Module *M = Caller->getParent();
     // Get the two intrinsics we care about.
-    Constant *StackSave, *StackRestore;
-    StackSave    = Intrinsic::getDeclaration(M, Intrinsic::stacksave);
-    StackRestore = Intrinsic::getDeclaration(M, Intrinsic::stackrestore);
+    Function *StackSave = Intrinsic::getDeclaration(M, Intrinsic::stacksave);
+    Function *StackRestore =
+      Intrinsic::getDeclaration(M, Intrinsic::stackrestore);
 
     // If we are preserving the callgraph, add edges to the stacksave/restore
     // functions for the calls we insert.
     CallGraphNode *StackSaveCGN = 0, *StackRestoreCGN = 0, *CallerNode = 0;
     if (CG) {
-      // We know that StackSave/StackRestore are Function*'s, because they are
-      // intrinsics which must have the right types.
-      StackSaveCGN    = CG->getOrInsertFunction(cast<Function>(StackSave));
-      StackRestoreCGN = CG->getOrInsertFunction(cast<Function>(StackRestore));
+      StackSaveCGN    = CG->getOrInsertFunction(StackSave);
+      StackRestoreCGN = CG->getOrInsertFunction(StackRestore);
       CallerNode = (*CG)[Caller];
     }
 
@@ -480,7 +477,8 @@ bool llvm::InlineFunction(CallSite CS, CallGraph *CG, const TargetData *TD,
       for (Function::iterator BB = FirstNewBlock, E = Caller->end();
            BB != E; ++BB)
         if (UnwindInst *UI = dyn_cast<UnwindInst>(BB->getTerminator())) {
-          CallInst::Create(StackRestore, SavedPtr, "", UI);
+          CallInst *CI = CallInst::Create(StackRestore, SavedPtr, "", UI);
+          if (CG) CallerNode->addCalledFunction(CI, StackRestoreCGN);
           ++NumStackRestores;
         }
     }
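For readers unfamiliar with these intrinsics, a minimal sketch of the
save/restore pairing the hunk above maintains. M and FirstNewBlock are the
values from the surrounding function; ReturnPoint is a hypothetical stand-in
for each exit point the real code instruments:

    Function *StackSave    = Intrinsic::getDeclaration(M, Intrinsic::stacksave);
    Function *StackRestore = Intrinsic::getDeclaration(M, Intrinsic::stackrestore);

    // Capture the stack pointer before the inlined body runs...
    CallInst *SavedPtr = CallInst::Create(StackSave, "savedstack",
                                          FirstNewBlock->begin());
    // ...and restore it at each exit, so dynamic allocas created by the
    // inlined code don't permanently grow the caller's stack frame.
    CallInst::Create(StackRestore, SavedPtr, "", ReturnPoint);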
@@ -621,8 +619,17 @@ bool llvm::InlineFunction(CallSite CS, CallGraph *CG, const TargetData *TD,
                "Ret value not consistent in function!");
         PHI->addIncoming(RI->getReturnValue(), RI->getParent());
       }
+    
+      // Now that we inserted the PHI, check to see if it has a single value
+      // (e.g. all the entries are the same or undef).  If so, remove the PHI so
+      // it doesn't block other optimizations.
+      if (Value *V = PHI->hasConstantValue()) {
+        PHI->replaceAllUsesWith(V);
+        PHI->eraseFromParent();
+      }
     }
 
+
     // Add a branch to the merge points and remove return instructions.
     for (unsigned i = 0, e = Returns.size(); i != e; ++i) {
       ReturnInst *RI = Returns[i];
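The hasConstantValue() cleanup above fires when every return in the callee
feeds the same value into the merge PHI. A sketch of the shape being folded,
with hypothetical IR names:

    // Before: %retval = phi i32 [ %v, %ret1 ], [ %v, %ret2 ]
    // hasConstantValue() returns %v, so the PHI folds away entirely:
    if (Value *V = PHI->hasConstantValue()) {
      PHI->replaceAllUsesWith(V);  // users now read %v directly
      PHI->eraseFromParent();      // the blocking PHI disappears
    }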
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/InstructionNamer.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/InstructionNamer.cpp
index 1fa51a3..7f11acf 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/InstructionNamer.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/InstructionNamer.cpp
@@ -33,11 +33,11 @@ namespace {
       for (Function::arg_iterator AI = F.arg_begin(), AE = F.arg_end();
            AI != AE; ++AI)
         if (!AI->hasName() && AI->getType() != Type::getVoidTy(F.getContext()))
-          AI->setName("tmp");
+          AI->setName("arg");
 
       for (Function::iterator BB = F.begin(), E = F.end(); BB != E; ++BB) {
         if (!BB->hasName())
-          BB->setName("BB");
+          BB->setName("bb");
         
         for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ++I)
           if (!I->hasName() && I->getType() != Type::getVoidTy(F.getContext()))
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/LCSSA.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/LCSSA.cpp
index 48e6a17..590d667 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/LCSSA.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/LCSSA.cpp
@@ -33,28 +33,23 @@
 #include "llvm/Pass.h"
 #include "llvm/Function.h"
 #include "llvm/Instructions.h"
-#include "llvm/LLVMContext.h"
-#include "llvm/ADT/SetVector.h"
-#include "llvm/ADT/Statistic.h"
 #include "llvm/Analysis/Dominators.h"
 #include "llvm/Analysis/LoopPass.h"
 #include "llvm/Analysis/ScalarEvolution.h"
-#include "llvm/Support/CFG.h"
-#include "llvm/Support/Compiler.h"
+#include "llvm/Transforms/Utils/SSAUpdater.h"
+#include "llvm/ADT/Statistic.h"
+#include "llvm/ADT/STLExtras.h"
 #include "llvm/Support/PredIteratorCache.h"
-#include <algorithm>
-#include <map>
 using namespace llvm;
 
 STATISTIC(NumLCSSA, "Number of loop live-out values rewritten");
 
 namespace {
-  struct VISIBILITY_HIDDEN LCSSA : public LoopPass {
+  struct LCSSA : public LoopPass {
     static char ID; // Pass identification, replacement for typeid
     LCSSA() : LoopPass(&ID) {}
 
     // Cached analysis information for the current function.
-    LoopInfo *LI;
     DominatorTree *DT;
     std::vector<BasicBlock*> LoopBlocks;
     PredIteratorCache PredCache;
@@ -62,15 +57,15 @@ namespace {
     
     virtual bool runOnLoop(Loop *L, LPPassManager &LPM);
 
-    void ProcessInstruction(Instruction* Instr,
-                            const SmallVector<BasicBlock*, 8>& exitBlocks);
-    
     /// This transformation requires natural loop information & requires that
     /// loop preheaders be inserted into the CFG.  It maintains both of these,
     /// as well as the CFG.  It also requires dominator information.
     ///
     virtual void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.setPreservesCFG();
+
+      // LCSSA doesn't actually require LoopSimplify, but the PassManager
+      // doesn't know how to schedule LoopSimplify by itself.
       AU.addRequiredID(LoopSimplifyID);
       AU.addPreservedID(LoopSimplifyID);
       AU.addRequiredTransitive<LoopInfo>();
@@ -87,22 +82,17 @@ namespace {
       AU.addPreserved<DominanceFrontier>();
     }
   private:
-
+    bool ProcessInstruction(Instruction *Inst,
+                            const SmallVectorImpl<BasicBlock*> &ExitBlocks);
+    
     /// verifyAnalysis() - Verify loop nest.
     virtual void verifyAnalysis() const {
       // Check the special guarantees that LCSSA makes.
       assert(L->isLCSSAForm() && "LCSSA form not preserved!");
     }
 
-    void getLoopValuesUsedOutsideLoop(Loop *L,
-                                      SetVector<Instruction*> &AffectedValues,
-                                 const SmallVector<BasicBlock*, 8>& exitBlocks);
-
-    Value *GetValueForBlock(DomTreeNode *BB, Instruction *OrigInst,
-                            DenseMap<DomTreeNode*, Value*> &Phis);
-
     /// inLoop - returns true if the given block is within the current loop
-    bool inLoop(BasicBlock* B) {
+    bool inLoop(BasicBlock *B) const {
       return std::binary_search(LoopBlocks.begin(), LoopBlocks.end(), B);
     }
   };
@@ -114,182 +104,171 @@ static RegisterPass<LCSSA> X("lcssa", "Loop-Closed SSA Form Pass");
 Pass *llvm::createLCSSAPass() { return new LCSSA(); }
 const PassInfo *const llvm::LCSSAID = &X;
 
+
+/// BlockDominatesAnExit - Return true if the specified block dominates at least
+/// one of the blocks in the specified list.
+static bool BlockDominatesAnExit(BasicBlock *BB,
+                                 const SmallVectorImpl<BasicBlock*> &ExitBlocks,
+                                 DominatorTree *DT) {
+  DomTreeNode *DomNode = DT->getNode(BB);
+  for (unsigned i = 0, e = ExitBlocks.size(); i != e; ++i)
+    if (DT->dominates(DomNode, DT->getNode(ExitBlocks[i])))
+      return true;
+
+  return false;
+}
+
+
 /// runOnLoop - Process the specified loop, putting it into LCSSA form.
-bool LCSSA::runOnLoop(Loop *l, LPPassManager &LPM) {
-  L = l;
-  PredCache.clear();
+bool LCSSA::runOnLoop(Loop *TheLoop, LPPassManager &LPM) {
+  L = TheLoop;
   
-  LI = &LPM.getAnalysis<LoopInfo>();
   DT = &getAnalysis<DominatorTree>();
 
-  // Speed up queries by creating a sorted list of blocks
+  // Get the set of exiting blocks.
+  SmallVector<BasicBlock*, 8> ExitBlocks;
+  L->getExitBlocks(ExitBlocks);
+  
+  if (ExitBlocks.empty())
+    return false;
+  
+  // Speed up queries by creating a sorted vector of blocks.
   LoopBlocks.clear();
   LoopBlocks.insert(LoopBlocks.end(), L->block_begin(), L->block_end());
-  std::sort(LoopBlocks.begin(), LoopBlocks.end());
+  array_pod_sort(LoopBlocks.begin(), LoopBlocks.end());
   
-  SmallVector<BasicBlock*, 8> exitBlocks;
-  L->getExitBlocks(exitBlocks);
+  // Look at all the instructions in the loop, checking to see if they have uses
+  // outside the loop.  If so, rewrite those uses.
+  bool MadeChange = false;
   
-  SetVector<Instruction*> AffectedValues;
-  getLoopValuesUsedOutsideLoop(L, AffectedValues, exitBlocks);
+  for (Loop::block_iterator BBI = L->block_begin(), E = L->block_end();
+       BBI != E; ++BBI) {
+    BasicBlock *BB = *BBI;
+    
+    // For large loops, avoid use-scanning by using dominance information:  In
+    // particular, if a block does not dominate any of the loop exits, then none
+    // of the values defined in the block could be used outside the loop.
+    if (!BlockDominatesAnExit(BB, ExitBlocks, DT))
+      continue;
+    
+    for (BasicBlock::iterator I = BB->begin(), E = BB->end();
+         I != E; ++I) {
+      // Reject two common cases fast: instructions with no uses (like stores)
+      // and instructions whose single use is a non-PHI in this same block.
+      if (I->use_empty() ||
+          (I->hasOneUse() && I->use_back()->getParent() == BB &&
+           !isa<PHINode>(I->use_back())))
+        continue;
+      
+      MadeChange |= ProcessInstruction(I, ExitBlocks);
+    }
+  }
   
-  // If no values are affected, we can save a lot of work, since we know that
-  // nothing will be changed.
-  if (AffectedValues.empty())
-    return false;
+  assert(L->isLCSSAForm());
+  PredCache.clear();
+
+  return MadeChange;
+}
+
+/// isExitBlock - Return true if the specified block is in the list.
+static bool isExitBlock(BasicBlock *BB,
+                        const SmallVectorImpl<BasicBlock*> &ExitBlocks) {
+  for (unsigned i = 0, e = ExitBlocks.size(); i != e; ++i)
+    if (ExitBlocks[i] == BB)
+      return true;
+  return false;
+}
+
+/// ProcessInstruction - Given an instruction in the loop, check to see if it
+/// has any uses that are outside the current loop.  If so, insert LCSSA PHI
+/// nodes and rewrite the uses.
+bool LCSSA::ProcessInstruction(Instruction *Inst,
+                               const SmallVectorImpl<BasicBlock*> &ExitBlocks) {
+  SmallVector<Use*, 16> UsesToRewrite;
   
-  // Iterate over all affected values for this loop and insert Phi nodes
-  // for them in the appropriate exit blocks
+  BasicBlock *InstBB = Inst->getParent();
   
-  for (SetVector<Instruction*>::iterator I = AffectedValues.begin(),
-       E = AffectedValues.end(); I != E; ++I)
-    ProcessInstruction(*I, exitBlocks);
+  for (Value::use_iterator UI = Inst->use_begin(), E = Inst->use_end();
+       UI != E; ++UI) {
+    BasicBlock *UserBB = cast<Instruction>(*UI)->getParent();
+    if (PHINode *PN = dyn_cast<PHINode>(*UI))
+      UserBB = PN->getIncomingBlock(UI);
+    
+    if (InstBB != UserBB && !inLoop(UserBB))
+      UsesToRewrite.push_back(&UI.getUse());
+  }
   
-  assert(L->isLCSSAForm());
+  // If there are no uses outside the loop, exit with no change.
+  if (UsesToRewrite.empty()) return false;
   
-  return true;
-}
-
-/// processInstruction - Given a live-out instruction, insert LCSSA Phi nodes,
-/// eliminate all out-of-loop uses.
-void LCSSA::ProcessInstruction(Instruction *Instr,
-                               const SmallVector<BasicBlock*, 8>& exitBlocks) {
   ++NumLCSSA; // We are applying the transformation
 
-  // Keep track of the blocks that have the value available already.
-  DenseMap<DomTreeNode*, Value*> Phis;
-
-  BasicBlock *DomBB = Instr->getParent();
-
   // Invoke instructions are special in that their result value is not available
   // along their unwind edge. The code below tests to see whether DomBB dominates
   // the value, so adjust DomBB to the normal destination block, which is
   // effectively where the value is first usable.
-  if (InvokeInst *Inv = dyn_cast<InvokeInst>(Instr))
+  BasicBlock *DomBB = Inst->getParent();
+  if (InvokeInst *Inv = dyn_cast<InvokeInst>(Inst))
     DomBB = Inv->getNormalDest();
 
   DomTreeNode *DomNode = DT->getNode(DomBB);
 
-  // Insert the LCSSA phi's into the exit blocks (dominated by the value), and
-  // add them to the Phi's map.
-  for (SmallVector<BasicBlock*, 8>::const_iterator BBI = exitBlocks.begin(),
-      BBE = exitBlocks.end(); BBI != BBE; ++BBI) {
-    BasicBlock *BB = *BBI;
-    DomTreeNode *ExitBBNode = DT->getNode(BB);
-    Value *&Phi = Phis[ExitBBNode];
-    if (!Phi && DT->dominates(DomNode, ExitBBNode)) {
-      PHINode *PN = PHINode::Create(Instr->getType(), Instr->getName()+".lcssa",
-                                    BB->begin());
-      PN->reserveOperandSpace(PredCache.GetNumPreds(BB));
+  SSAUpdater SSAUpdate;
+  SSAUpdate.Initialize(Inst);
+  
+  // Insert LCSSA PHIs into all of the exit blocks dominated by the value,
+  // and register each one with the SSAUpdater.
+  for (SmallVectorImpl<BasicBlock*>::const_iterator BBI = ExitBlocks.begin(),
+      BBE = ExitBlocks.end(); BBI != BBE; ++BBI) {
+    BasicBlock *ExitBB = *BBI;
+    if (!DT->dominates(DomNode, DT->getNode(ExitBB))) continue;
+    
+    // If we already inserted something for this BB, don't reprocess it.
+    if (SSAUpdate.HasValueForBlock(ExitBB)) continue;
+    
+    PHINode *PN = PHINode::Create(Inst->getType(), Inst->getName()+".lcssa",
+                                  ExitBB->begin());
+    PN->reserveOperandSpace(PredCache.GetNumPreds(ExitBB));
 
-      // Remember that this phi makes the value alive in this block.
-      Phi = PN;
+    // Add inputs from inside the loop for this PHI.
+    for (BasicBlock **PI = PredCache.GetPreds(ExitBB); *PI; ++PI) {
+      PN->addIncoming(Inst, *PI);
 
-      // Add inputs from inside the loop for this PHI.
-      for (BasicBlock** PI = PredCache.GetPreds(BB); *PI; ++PI)
-        PN->addIncoming(Instr, *PI);
+      // If the exit block has a predecessor not within the loop, arrange for
+      // the incoming value use corresponding to that predecessor to be
+      // rewritten in terms of a different LCSSA PHI.
+      if (!inLoop(*PI))
+        UsesToRewrite.push_back(
+          &PN->getOperandUse(
+            PN->getOperandNumForIncomingValue(PN->getNumIncomingValues()-1)));
     }
+    
+    // Remember that this phi makes the value alive in this block.
+    SSAUpdate.AddAvailableValue(ExitBB, PN);
   }
   
-  
-  // Record all uses of Instr outside the loop.  We need to rewrite these.  The
-  // LCSSA phis won't be included because they use the value in the loop.
-  for (Value::use_iterator UI = Instr->use_begin(), E = Instr->use_end();
-       UI != E;) {
-    BasicBlock *UserBB = cast<Instruction>(*UI)->getParent();
-    if (PHINode *P = dyn_cast<PHINode>(*UI)) {
-      UserBB = P->getIncomingBlock(UI);
-    }
-    
-    // If the user is in the loop, don't rewrite it!
-    if (UserBB == Instr->getParent() || inLoop(UserBB)) {
-      ++UI;
+  // Rewrite all uses outside the loop in terms of the new PHIs we just
+  // inserted.
+  for (unsigned i = 0, e = UsesToRewrite.size(); i != e; ++i) {
+    // If this use is in an exit block, rewrite to use the newly inserted PHI.
+    // This is required for correctness because SSAUpdate doesn't handle uses in
+    // the same block.  It assumes the PHI we inserted is at the end of the
+    // block.
+    Instruction *User = cast<Instruction>(UsesToRewrite[i]->getUser());
+    BasicBlock *UserBB = User->getParent();
+    if (PHINode *PN = dyn_cast<PHINode>(User))
+      UserBB = PN->getIncomingBlock(*UsesToRewrite[i]);
+
+    if (isa<PHINode>(UserBB->begin()) &&
+        isExitBlock(UserBB, ExitBlocks)) {
+      UsesToRewrite[i]->set(UserBB->begin());
       continue;
     }
     
-    // Otherwise, patch up uses of the value with the appropriate LCSSA Phi,
-    // inserting PHI nodes into join points where needed.
-    Value *Val = GetValueForBlock(DT->getNode(UserBB), Instr, Phis);
-    
-    // Preincrement the iterator to avoid invalidating it when we change the
-    // value.
-    Use &U = UI.getUse();
-    ++UI;
-    U.set(Val);
-  }
-}
-
-/// getLoopValuesUsedOutsideLoop - Return any values defined in the loop that
-/// are used by instructions outside of it.
-void LCSSA::getLoopValuesUsedOutsideLoop(Loop *L,
-                                      SetVector<Instruction*> &AffectedValues,
-                                const SmallVector<BasicBlock*, 8>& exitBlocks) {
-  // FIXME: For large loops, we may be able to avoid a lot of use-scanning
-  // by using dominance information.  In particular, if a block does not
-  // dominate any of the loop exits, then none of the values defined in the
-  // block could be used outside the loop.
-  for (Loop::block_iterator BB = L->block_begin(), BE = L->block_end();
-       BB != BE; ++BB) {
-    for (BasicBlock::iterator I = (*BB)->begin(), E = (*BB)->end(); I != E; ++I)
-      for (Value::use_iterator UI = I->use_begin(), UE = I->use_end(); UI != UE;
-           ++UI) {
-        BasicBlock *UserBB = cast<Instruction>(*UI)->getParent();
-        if (PHINode* p = dyn_cast<PHINode>(*UI)) {
-          UserBB = p->getIncomingBlock(UI);
-        }
-        
-        if (*BB != UserBB && !inLoop(UserBB)) {
-          AffectedValues.insert(I);
-          break;
-        }
-      }
+    // Otherwise, do full PHI insertion.
+    SSAUpdate.RewriteUse(*UsesToRewrite[i]);
   }
-}
-
-/// GetValueForBlock - Get the value to use within the specified basic block.
-/// available values are in Phis.
-Value *LCSSA::GetValueForBlock(DomTreeNode *BB, Instruction *OrigInst,
-                               DenseMap<DomTreeNode*, Value*> &Phis) {
-  // If there is no dominator info for this BB, it is unreachable.
-  if (BB == 0)
-    return UndefValue::get(OrigInst->getType());
-                                 
-  // If we have already computed this value, return the previously computed val.
-  if (Phis.count(BB)) return Phis[BB];
-
-  DomTreeNode *IDom = BB->getIDom();
-
-  // Otherwise, there are two cases: we either have to insert a PHI node or we
-  // don't.  We need to insert a PHI node if this block is not dominated by one
-  // of the exit nodes from the loop (the loop could have multiple exits, and
-  // though the value defined *inside* the loop dominated all its uses, each
-  // exit by itself may not dominate all the uses).
-  //
-  // The simplest way to check for this condition is by checking to see if the
-  // idom is in the loop.  If so, we *know* that none of the exit blocks
-  // dominate this block.  Note that we *know* that the block defining the
-  // original instruction is in the idom chain, because if it weren't, then the
-  // original value didn't dominate this use.
-  if (!inLoop(IDom->getBlock())) {
-    // Idom is not in the loop, we must still be "below" the exit block and must
-    // be fully dominated by the value live in the idom.
-    Value* val = GetValueForBlock(IDom, OrigInst, Phis);
-    Phis.insert(std::make_pair(BB, val));
-    return val;
-  }
-  
-  BasicBlock *BBN = BB->getBlock();
   
-  // Otherwise, the idom is the loop, so we need to insert a PHI node.  Do so
-  // now, then get values to fill in the incoming values for the PHI.
-  PHINode *PN = PHINode::Create(OrigInst->getType(),
-                                OrigInst->getName() + ".lcssa", BBN->begin());
-  PN->reserveOperandSpace(PredCache.GetNumPreds(BBN));
-  Phis.insert(std::make_pair(BB, PN));
-                                 
-  // Fill in the incoming values for the block.
-  for (BasicBlock** PI = PredCache.GetPreds(BBN); *PI; ++PI)
-    PN->addIncoming(GetValueForBlock(DT->getNode(*PI), OrigInst, Phis), *PI);
-  return PN;
+  return true;
 }
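The rewrite above retires the hand-rolled dominator walk in favor of
SSAUpdater. A minimal sketch of the driving pattern, using only the calls
visible in the new code; Inst, ExitBB, and U are placeholders for the
loop-defined value, one dominated exit block, and one out-of-loop use:

    SSAUpdater SSAUpdate;
    SSAUpdate.Initialize(Inst);           // the value being made loop-closed

    // Seed the updater with one .lcssa PHI per dominated exit block.
    if (!SSAUpdate.HasValueForBlock(ExitBB)) {
      PHINode *PN = PHINode::Create(Inst->getType(),
                                    Inst->getName() + ".lcssa",
                                    ExitBB->begin());
      // (incoming values from the exit block's in-loop preds go here)
      SSAUpdate.AddAvailableValue(ExitBB, PN);
    }

    // Rewrite each use outside the loop; SSAUpdater inserts whatever
    // extra PHIs are needed at joins between the exits and the use.
    SSAUpdate.RewriteUse(U);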
 
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/Local.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/Local.cpp
index b622611..aef0f5f 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/Local.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/Local.cpp
@@ -24,10 +24,14 @@
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/Analysis/ConstantFolding.h"
 #include "llvm/Analysis/DebugInfo.h"
+#include "llvm/Analysis/InstructionSimplify.h"
 #include "llvm/Analysis/ProfileInfo.h"
 #include "llvm/Target/TargetData.h"
+#include "llvm/Support/CFG.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/GetElementPtrTypeIterator.h"
 #include "llvm/Support/MathExtras.h"
+#include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
 //===----------------------------------------------------------------------===//
@@ -59,9 +63,8 @@ bool llvm::isSafeToLoadUnconditionally(Value *V, Instruction *ScanFrom) {
 
     // If we see a free or a call which may write to memory (i.e. which might do
     // a free) the pointer could be marked invalid.
-    if (isa<FreeInst>(BBI) || 
-        (isa<CallInst>(BBI) && BBI->mayWriteToMemory() &&
-         !isa<DbgInfoIntrinsic>(BBI)))
+    if (isa<CallInst>(BBI) && BBI->mayWriteToMemory() &&
+        !isa<DbgInfoIntrinsic>(BBI))
       return false;
 
     if (LoadInst *LI = dyn_cast<LoadInst>(BBI)) {
@@ -110,7 +113,9 @@ bool llvm::ConstantFoldTerminator(BasicBlock *BB) {
       // unconditional branch.
       BI->setUnconditionalDest(Destination);
       return true;
-    } else if (Dest2 == Dest1) {       // Conditional branch to same location?
+    }
+    
+    if (Dest2 == Dest1) {       // Conditional branch to same location?
       // This branch matches something like this:
       //     br bool %cond, label %Dest, label %Dest
       // and changes it into:  br label %Dest
@@ -123,7 +128,10 @@ bool llvm::ConstantFoldTerminator(BasicBlock *BB) {
       BI->setUnconditionalDest(Dest1);
       return true;
     }
-  } else if (SwitchInst *SI = dyn_cast<SwitchInst>(T)) {
+    return false;
+  }
+  
+  if (SwitchInst *SI = dyn_cast<SwitchInst>(T)) {
     // If we are switching on a constant, we can convert the switch into a
     // single branch instruction!
     ConstantInt *CI = dyn_cast<ConstantInt>(SI->getCondition());
@@ -132,7 +140,7 @@ bool llvm::ConstantFoldTerminator(BasicBlock *BB) {
     assert(TheOnlyDest == SI->getDefaultDest() &&
            "Default destination is not successor #0?");
 
-    // Figure out which case it goes to...
+    // Figure out which case it goes to.
     for (unsigned i = 1, e = SI->getNumSuccessors(); i != e; ++i) {
       // Found case matching a constant operand?
       if (SI->getSuccessorValue(i) == CI) {
@@ -143,7 +151,7 @@ bool llvm::ConstantFoldTerminator(BasicBlock *BB) {
       // Check to see if this branch is going to the same place as the default
       // dest.  If so, eliminate it as an explicit compare.
       if (SI->getSuccessor(i) == DefaultDest) {
-        // Remove this entry...
+        // Remove this entry.
         DefaultDest->removePredecessor(SI->getParent());
         SI->removeCase(i);
         --i; --e;  // Don't skip an entry...
@@ -165,7 +173,7 @@ bool llvm::ConstantFoldTerminator(BasicBlock *BB) {
     // If we found a single destination that we can fold the switch into, do so
     // now.
     if (TheOnlyDest) {
-      // Insert the new branch..
+      // Insert the new branch.
       BranchInst::Create(TheOnlyDest, SI);
       BasicBlock *BB = SI->getParent();
 
@@ -179,28 +187,60 @@ bool llvm::ConstantFoldTerminator(BasicBlock *BB) {
           Succ->removePredecessor(BB);
       }
 
-      // Delete the old switch...
+      // Delete the old switch.
       BB->getInstList().erase(SI);
       return true;
-    } else if (SI->getNumSuccessors() == 2) {
+    }
+    
+    if (SI->getNumSuccessors() == 2) {
       // Otherwise, we can fold this switch into a conditional branch
       // instruction if it has only one non-default destination.
       Value *Cond = new ICmpInst(SI, ICmpInst::ICMP_EQ, SI->getCondition(),
                                  SI->getSuccessorValue(1), "cond");
-      // Insert the new branch...
+      // Insert the new branch.
       BranchInst::Create(SI->getSuccessor(1), SI->getSuccessor(0), Cond, SI);
 
-      // Delete the old switch...
+      // Delete the old switch.
       SI->eraseFromParent();
       return true;
     }
+    return false;
   }
+
+  if (IndirectBrInst *IBI = dyn_cast<IndirectBrInst>(T)) {
+    // indirectbr blockaddress(@F, @BB) -> br label @BB
+    if (BlockAddress *BA =
+          dyn_cast<BlockAddress>(IBI->getAddress()->stripPointerCasts())) {
+      BasicBlock *TheOnlyDest = BA->getBasicBlock();
+      // Insert the new branch.
+      BranchInst::Create(TheOnlyDest, IBI);
+      
+      for (unsigned i = 0, e = IBI->getNumDestinations(); i != e; ++i) {
+        if (IBI->getDestination(i) == TheOnlyDest)
+          TheOnlyDest = 0;
+        else
+          IBI->getDestination(i)->removePredecessor(IBI->getParent());
+      }
+      IBI->eraseFromParent();
+      
+      // If we didn't find our destination in the IBI successor list, then we
+      // have undefined behavior.  Replace the unconditional branch with an
+      // 'unreachable' instruction.
+      if (TheOnlyDest) {
+        BB->getTerminator()->eraseFromParent();
+        new UnreachableInst(BB->getContext(), BB);
+      }
+      
+      return true;
+    }
+  }
+  
   return false;
 }
 
 
 //===----------------------------------------------------------------------===//
-//  Local dead code elimination...
+//  Local dead code elimination.
 //
 
 /// isInstructionTriviallyDead - Return true if the result produced by the
@@ -212,6 +252,9 @@ bool llvm::isInstructionTriviallyDead(Instruction *I) {
   // We don't want debug info removed by anything this general.
   if (isa<DbgInfoIntrinsic>(I)) return false;
 
+  // Likewise for memory use markers.
+  if (isa<MemoryUseIntrinsic>(I)) return false;
+
   if (!I->mayHaveSideEffects()) return true;
 
   // Special case intrinsics that "may have side effects" but can be deleted
@@ -287,9 +330,53 @@ llvm::RecursivelyDeleteDeadPHINode(PHINode *PN) {
 }
 
 //===----------------------------------------------------------------------===//
-//  Control Flow Graph Restructuring...
+//  Control Flow Graph Restructuring.
 //
 
+
+/// RemovePredecessorAndSimplify - Like BasicBlock::removePredecessor, this
+/// method is called when we're about to delete Pred as a predecessor of BB.  If
+/// BB contains any PHI nodes, this drops the entries in the PHI nodes for Pred.
+///
+/// Unlike the removePredecessor method, this attempts to simplify uses of PHI
+/// nodes that collapse into identity values.  For example, if we have:
+///   x = phi(1, 0, 0, 0)
+///   y = and x, z
+///
+/// ... and delete the predecessor corresponding to the '1', this will attempt to
+/// recursively fold the and to 0.
+void llvm::RemovePredecessorAndSimplify(BasicBlock *BB, BasicBlock *Pred,
+                                        TargetData *TD) {
+  // This only adjusts blocks with PHI nodes.
+  if (!isa<PHINode>(BB->begin()))
+    return;
+  
+  // Remove the entries for Pred from the PHI nodes in BB, but do not simplify
+  // them down.  This will leave us with single entry phi nodes and other phis
+  // that can be removed.
+  BB->removePredecessor(Pred, true);
+  
+  WeakVH PhiIt = &BB->front();
+  while (PHINode *PN = dyn_cast<PHINode>(PhiIt)) {
+    PhiIt = &*++BasicBlock::iterator(cast<Instruction>(PhiIt));
+    
+    Value *PNV = PN->hasConstantValue();
+    if (PNV == 0) continue;
+    
+    // If we're able to simplify the phi to a single value, substitute the new
+    // value into all of its uses.
+    assert(PNV != PN && "hasConstantValue broken");
+    
+    ReplaceAndSimplifyAllUses(PN, PNV, TD);
+    
+    // If recursive simplification ended up deleting the next PHI node we would
+    // iterate to, then our iterator is invalid, restart scanning from the top
+    // of the block.
+    if (PhiIt == 0) PhiIt = &BB->front();
+  }
+}
+
+
 /// MergeBasicBlockIntoOnlyPred - DestBB is a block with one predecessor and its
 /// predecessor is known to have one successor (DestBB!).  Eliminate the edge
 /// between them, moving the instructions in the predecessor into DestBB and
@@ -326,6 +413,174 @@ void llvm::MergeBasicBlockIntoOnlyPred(BasicBlock *DestBB, Pass *P) {
   PredBB->eraseFromParent();
 }
 
+/// CanPropagatePredecessorsForPHIs - Return true if we can fold BB, an
+/// almost-empty BB ending in an unconditional branch to Succ, into Succ.
+///
+/// Assumption: Succ is the single successor for BB.
+///
+static bool CanPropagatePredecessorsForPHIs(BasicBlock *BB, BasicBlock *Succ) {
+  assert(*succ_begin(BB) == Succ && "Succ is not successor of BB!");
+
+  DEBUG(errs() << "Looking to fold " << BB->getName() << " into " 
+        << Succ->getName() << "\n");
+  // Shortcut: if there is only a single predecessor, it must be BB and
+  // merging is always safe.
+  if (Succ->getSinglePredecessor()) return true;
+
+  // Make a list of the predecessors of BB
+  typedef SmallPtrSet<BasicBlock*, 16> BlockSet;
+  BlockSet BBPreds(pred_begin(BB), pred_end(BB));
+
+  // Use that list to make another list of common predecessors of BB and Succ
+  BlockSet CommonPreds;
+  for (pred_iterator PI = pred_begin(Succ), PE = pred_end(Succ);
+        PI != PE; ++PI)
+    if (BBPreds.count(*PI))
+      CommonPreds.insert(*PI);
+
+  // Shortcut: if there are no common predecessors, merging is always safe.
+  if (CommonPreds.empty())
+    return true;
+  
+  // Look at all the phi nodes in Succ, to see if they present a conflict when
+  // merging these blocks
+  for (BasicBlock::iterator I = Succ->begin(); isa<PHINode>(I); ++I) {
+    PHINode *PN = cast<PHINode>(I);
+
+    // If the incoming value from BB is again a PHINode in
+    // BB which has the same incoming value for *PI as PN does, we can
+    // merge the phi nodes and then the blocks can still be merged
+    PHINode *BBPN = dyn_cast<PHINode>(PN->getIncomingValueForBlock(BB));
+    if (BBPN && BBPN->getParent() == BB) {
+      for (BlockSet::iterator PI = CommonPreds.begin(), PE = CommonPreds.end();
+            PI != PE; PI++) {
+        if (BBPN->getIncomingValueForBlock(*PI) 
+              != PN->getIncomingValueForBlock(*PI)) {
+          DEBUG(errs() << "Can't fold, phi node " << PN->getName() << " in " 
+                << Succ->getName() << " is conflicting with " 
+                << BBPN->getName() << " with regard to common predecessor "
+                << (*PI)->getName() << "\n");
+          return false;
+        }
+      }
+    } else {
+      Value* Val = PN->getIncomingValueForBlock(BB);
+      for (BlockSet::iterator PI = CommonPreds.begin(), PE = CommonPreds.end();
+            PI != PE; PI++) {
+        // See if the incoming value for the common predecessor is equal to the
+        // one for BB, in which case this phi node will not prevent the merging
+        // of the block.
+        if (Val != PN->getIncomingValueForBlock(*PI)) {
+          DEBUG(errs() << "Can't fold, phi node " << PN->getName() << " in " 
+                << Succ->getName() << " is conflicting with regard to common "
+                << "predecessor " << (*PI)->getName() << "\n");
+          return false;
+        }
+      }
+    }
+  }
+
+  return true;
+}
+
+/// TryToSimplifyUncondBranchFromEmptyBlock - BB is known to contain an
+/// unconditional branch, and contains no instructions other than PHI nodes,
+/// potential debug intrinsics and the branch.  If possible, eliminate BB by
+/// rewriting all the predecessors to branch to the successor block and return
+/// true.  If we can't transform, return false.
+bool llvm::TryToSimplifyUncondBranchFromEmptyBlock(BasicBlock *BB) {
+  // We can't eliminate infinite loops.
+  BasicBlock *Succ = cast<BranchInst>(BB->getTerminator())->getSuccessor(0);
+  if (BB == Succ) return false;
+  
+  // Check to see if merging these blocks would cause conflicts for any of the
+  // phi nodes in BB or Succ. If not, we can safely merge.
+  if (!CanPropagatePredecessorsForPHIs(BB, Succ)) return false;
+
+  // Check for cases where Succ has multiple predecessors and a PHI node in BB
+  // has uses which will not disappear when the PHI nodes are merged.  It is
+  // possible to handle such cases, but difficult: it requires checking whether
+  // BB dominates Succ, which is non-trivial to calculate in the case where
+  // Succ has multiple predecessors.  Also, it requires checking whether
+  // constructing the necessary self-referential PHI node doesn't introduce any
+  // conflicts; this isn't too difficult, but the previous code for doing this
+  // was incorrect.
+  //
+  // Note that if this check finds a live use, BB dominates Succ, so BB is
+  // something like a loop pre-header (or rarely, a part of an irreducible CFG);
+  // folding the branch isn't profitable in that case anyway.
+  if (!Succ->getSinglePredecessor()) {
+    BasicBlock::iterator BBI = BB->begin();
+    while (isa<PHINode>(*BBI)) {
+      for (Value::use_iterator UI = BBI->use_begin(), E = BBI->use_end();
+           UI != E; ++UI) {
+        if (PHINode* PN = dyn_cast<PHINode>(*UI)) {
+          if (PN->getIncomingBlock(UI) != BB)
+            return false;
+        } else {
+          return false;
+        }
+      }
+      ++BBI;
+    }
+  }
+
+  DEBUG(errs() << "Killing Trivial BB: \n" << *BB);
+  
+  if (isa<PHINode>(Succ->begin())) {
+    // If there is more than one pred of succ, and there are PHI nodes in
+    // the successor, then we need to add incoming edges for the PHI nodes
+    //
+    const SmallVector<BasicBlock*, 16> BBPreds(pred_begin(BB), pred_end(BB));
+    
+    // Loop over all of the PHI nodes in the successor of BB.
+    for (BasicBlock::iterator I = Succ->begin(); isa<PHINode>(I); ++I) {
+      PHINode *PN = cast<PHINode>(I);
+      Value *OldVal = PN->removeIncomingValue(BB, false);
+      assert(OldVal && "No entry in PHI for Pred BB!");
+      
+      // If this incoming value is one of the PHI nodes in BB, the new entries
+      // in the PHI node are the entries from the old PHI.
+      if (isa<PHINode>(OldVal) && cast<PHINode>(OldVal)->getParent() == BB) {
+        PHINode *OldValPN = cast<PHINode>(OldVal);
+        for (unsigned i = 0, e = OldValPN->getNumIncomingValues(); i != e; ++i)
+          // Note that, since we are merging phi nodes and BB and Succ might
+          // have common predecessors, we could end up with a phi node with
+          // identical incoming branches. This will be cleaned up later (and
+          // will trigger asserts if we try to clean it up now, without also
+          // simplifying the corresponding conditional branch).
+          PN->addIncoming(OldValPN->getIncomingValue(i),
+                          OldValPN->getIncomingBlock(i));
+      } else {
+        // Add an incoming value for each of the new incoming values.
+        for (unsigned i = 0, e = BBPreds.size(); i != e; ++i)
+          PN->addIncoming(OldVal, BBPreds[i]);
+      }
+    }
+  }
+  
+  while (PHINode *PN = dyn_cast<PHINode>(&BB->front())) {
+    if (Succ->getSinglePredecessor()) {
+      // BB is the only predecessor of Succ, so Succ will end up with exactly
+      // the same predecessors BB had.
+      Succ->getInstList().splice(Succ->begin(),
+                                 BB->getInstList(), BB->begin());
+    } else {
+      // We explicitly check for such uses in CanPropagatePredecessorsForPHIs.
+      assert(PN->use_empty() && "There shouldn't be any uses here!");
+      PN->eraseFromParent();
+    }
+  }
+    
+  // Everything that jumped to BB now goes to Succ.
+  BB->replaceAllUsesWith(Succ);
+  if (!Succ->hasName()) Succ->takeName(BB);
+  BB->eraseFromParent();              // Delete the old basic block.
+  return true;
+}
+
+
+
 /// OnlyUsedByDbgIntrinsics - Return true if the instruction I is only used
 /// by DbgIntrinsics. If DbgInUses is specified then the vector is filled 
 /// with the DbgInfoIntrinsic that use the instruction I.
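A hypothetical driver for the new TryToSimplifyUncondBranchFromEmptyBlock
helper, showing the preconditions a caller establishes first (simplifycfg does
this as part of a larger walk; FoldEmptyBlocks itself is illustrative, not
part of the patch):

    static bool FoldEmptyBlocks(Function &F) {
      bool Changed = false;
      for (Function::iterator FI = F.begin(), E = F.end(); FI != E; ) {
        BasicBlock *BB = FI++;  // advance first: BB may be erased below
        if (BB == &F.getEntryBlock()) continue;  // never fold the entry
        BranchInst *BI = dyn_cast<BranchInst>(BB->getTerminator());
        // The helper expects an unconditional branch and nothing else in
        // the block besides PHI nodes (and possibly debug intrinsics).
        if (!BI || BI->isConditional()) continue;
        if (BB->getFirstNonPHI() != BI) continue;
        Changed |= TryToSimplifyUncondBranchFromEmptyBlock(BB);
      }
      return Changed;
    }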
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/LoopSimplify.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/LoopSimplify.cpp
index c22708a..690972d 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/LoopSimplify.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/LoopSimplify.cpp
@@ -23,6 +23,11 @@
 //
 // This pass also guarantees that loops will have exactly one backedge.
 //
+// Indirectbr instructions introduce several complications. If the loop
+// contains or is entered by an indirectbr instruction, it may not be possible
+// to transform the loop and make these guarantees. Client code should check
+// that these conditions are true before relying on them.
+//
 // Note that the simplifycfg pass will clean up blocks which are split out but
 // end up being unnecessary, so usage of this pass should not pessimize
 // generated code.
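The underlying problem is that an indirectbr transfers control through
blockaddress values, so one of its edges cannot be redirected to a freshly
split block without changing where the computed jump lands. The guards added
below all reduce to a check of this shape (HasIndirectBrPred is an
illustrative name, not part of the patch):

    // True if any predecessor reaches BB via an indirectbr; such edges
    // can't be split, so canonicalization is skipped around them.
    static bool HasIndirectBrPred(BasicBlock *BB) {
      for (pred_iterator PI = pred_begin(BB), PE = pred_end(BB);
           PI != PE; ++PI)
        if (isa<IndirectBrInst>((*PI)->getTerminator()))
          return true;
      return false;
    }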
@@ -46,7 +51,6 @@
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
 #include "llvm/Transforms/Utils/Local.h"
 #include "llvm/Support/CFG.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/ADT/SetOperations.h"
 #include "llvm/ADT/SetVector.h"
 #include "llvm/ADT/Statistic.h"
@@ -57,7 +61,7 @@ STATISTIC(NumInserted, "Number of pre-header or exit blocks inserted");
 STATISTIC(NumNested  , "Number of nested loops split out");
 
 namespace {
-  struct VISIBILITY_HIDDEN LoopSimplify : public LoopPass {
+  struct LoopSimplify : public LoopPass {
     static char ID; // Pass identification, replacement for typeid
     LoopSimplify() : LoopPass(&ID) {}
 
@@ -82,17 +86,15 @@ namespace {
       AU.addPreservedID(BreakCriticalEdgesID);  // No critical edges added.
     }
 
-    /// verifyAnalysis() - Verify loop nest.
-    void verifyAnalysis() const {
-      assert(L->isLoopSimplifyForm() && "LoopSimplify form not preserved!");
-    }
+    /// verifyAnalysis() - Verify LoopSimplifyForm's guarantees.
+    void verifyAnalysis() const;
 
   private:
     bool ProcessLoop(Loop *L, LPPassManager &LPM);
     BasicBlock *RewriteLoopExitBlock(Loop *L, BasicBlock *Exit);
     BasicBlock *InsertPreheaderForLoop(Loop *L);
     Loop *SeparateNestedLoop(Loop *L, LPPassManager &LPM);
-    void InsertUniqueBackedgeBlock(Loop *L, BasicBlock *Preheader);
+    BasicBlock *InsertUniqueBackedgeBlock(Loop *L, BasicBlock *Preheader);
     void PlaceSplitBlockCarefully(BasicBlock *NewBB,
                                   SmallVectorImpl<BasicBlock*> &SplitPreds,
                                   Loop *L);
@@ -161,8 +163,10 @@ ReprocessLoop:
   BasicBlock *Preheader = L->getLoopPreheader();
   if (!Preheader) {
     Preheader = InsertPreheaderForLoop(L);
-    NumInserted++;
-    Changed = true;
+    if (Preheader) {
+      NumInserted++;
+      Changed = true;
+    }
   }
 
   // Next, check to make sure that all exit nodes of the loop only have
@@ -181,21 +185,22 @@ ReprocessLoop:
       // Must be exactly this loop: no subloops, parent loops, or non-loop preds
       // allowed.
       if (!L->contains(*PI)) {
-        RewriteLoopExitBlock(L, ExitBlock);
-        NumInserted++;
-        Changed = true;
+        if (RewriteLoopExitBlock(L, ExitBlock)) {
+          NumInserted++;
+          Changed = true;
+        }
         break;
       }
   }
 
   // If the header has more than two predecessors at this point (from the
   // preheader and from multiple backedges), we must adjust the loop.
-  unsigned NumBackedges = L->getNumBackEdges();
-  if (NumBackedges != 1) {
+  BasicBlock *LoopLatch = L->getLoopLatch();
+  if (!LoopLatch) {
     // If this is really a nested loop, rip it out into a child loop.  Don't do
     // this for loops with a giant number of backedges, just factor them into a
     // common backedge instead.
-    if (NumBackedges < 8) {
+    if (L->getNumBackEdges() < 8) {
       if (SeparateNestedLoop(L, LPM)) {
         ++NumNested;
         // This is a big restructuring change, reprocess the whole loop.
@@ -208,9 +213,11 @@ ReprocessLoop:
     // If we either couldn't, or didn't want to, identify nesting of the loops,
     // insert a new block that all backedges target, then make it jump to the
     // loop header.
-    InsertUniqueBackedgeBlock(L, Preheader);
-    NumInserted++;
-    Changed = true;
+    LoopLatch = InsertUniqueBackedgeBlock(L, Preheader);
+    if (LoopLatch) {
+      NumInserted++;
+      Changed = true;
+    }
   }
 
   // Scan over the PHI nodes in the loop header.  Since they now have only two
@@ -234,7 +241,14 @@ ReprocessLoop:
   // loop-invariant instructions out of the way to open up more
   // opportunities, and the disadvantage of having the responsibility
   // to preserve dominator information.
-  if (ExitBlocks.size() > 1 && L->getUniqueExitBlock()) {
+  bool UniqueExit = true;
+  if (!ExitBlocks.empty())
+    for (unsigned i = 1, e = ExitBlocks.size(); i != e; ++i)
+      if (ExitBlocks[i] != ExitBlocks[0]) {
+        UniqueExit = false;
+        break;
+      }
+  if (UniqueExit) {
     SmallVector<BasicBlock*, 8> ExitingBlocks;
     L->getExitingBlocks(ExitingBlocks);
     for (unsigned i = 0, e = ExitingBlocks.size(); i != e; ++i) {
@@ -252,7 +266,8 @@ ReprocessLoop:
         Instruction *Inst = I++;
         if (Inst == CI)
           continue;
-        if (!L->makeLoopInvariant(Inst, Changed, Preheader->getTerminator())) {
+        if (!L->makeLoopInvariant(Inst, Changed,
+                                  Preheader ? Preheader->getTerminator() : 0)) {
           AllInvariant = false;
           break;
         }
@@ -290,6 +305,12 @@ ReprocessLoop:
     }
   }
 
+  // If there are duplicate phi nodes (for example, from loop rotation),
+  // get rid of them.
+  for (Loop::block_iterator BB = L->block_begin(), E = L->block_end();
+       BB != E; ++BB)
+    EliminateDuplicatePHINodes(*BB);
+
   return Changed;
 }
 
@@ -304,8 +325,15 @@ BasicBlock *LoopSimplify::InsertPreheaderForLoop(Loop *L) {
   SmallVector<BasicBlock*, 8> OutsideBlocks;
   for (pred_iterator PI = pred_begin(Header), PE = pred_end(Header);
        PI != PE; ++PI)
-    if (!L->contains(*PI))           // Coming in from outside the loop?
-      OutsideBlocks.push_back(*PI);  // Keep track of it...
+    if (!L->contains(*PI)) {         // Coming in from outside the loop?
+      // If the loop is branched to from an indirect branch, we won't
+      // be able to fully transform the loop, because it prohibits
+      // edge splitting.
+      if (isa<IndirectBrInst>((*PI)->getTerminator())) return 0;
+
+      // Keep track of it.
+      OutsideBlocks.push_back(*PI);
+    }
 
   // Split out the loop pre-header.
   BasicBlock *NewBB =
@@ -325,8 +353,12 @@ BasicBlock *LoopSimplify::InsertPreheaderForLoop(Loop *L) {
 BasicBlock *LoopSimplify::RewriteLoopExitBlock(Loop *L, BasicBlock *Exit) {
   SmallVector<BasicBlock*, 8> LoopBlocks;
   for (pred_iterator I = pred_begin(Exit), E = pred_end(Exit); I != E; ++I)
-    if (L->contains(*I))
+    if (L->contains(*I)) {
+      // Don't do this if the loop is exited via an indirect branch.
+      if (isa<IndirectBrInst>((*I)->getTerminator())) return 0;
+
       LoopBlocks.push_back(*I);
+    }
 
   assert(!LoopBlocks.empty() && "No edges coming in from outside the loop?");
   BasicBlock *NewBB = SplitBlockPredecessors(Exit, &LoopBlocks[0], 
@@ -445,8 +477,13 @@ Loop *LoopSimplify::SeparateNestedLoop(Loop *L, LPPassManager &LPM) {
   SmallVector<BasicBlock*, 8> OuterLoopPreds;
   for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i)
     if (PN->getIncomingValue(i) != PN ||
-        !L->contains(PN->getIncomingBlock(i)))
+        !L->contains(PN->getIncomingBlock(i))) {
+      // We can't split indirectbr edges.
+      if (isa<IndirectBrInst>(PN->getIncomingBlock(i)->getTerminator()))
+        return 0;
+
       OuterLoopPreds.push_back(PN->getIncomingBlock(i));
+    }
 
   BasicBlock *Header = L->getHeader();
   BasicBlock *NewBB = SplitBlockPredecessors(Header, &OuterLoopPreds[0],
@@ -520,13 +557,18 @@ Loop *LoopSimplify::SeparateNestedLoop(Loop *L, LPPassManager &LPM) {
 /// backedges to target a new basic block and have that block branch to the loop
 /// header.  This ensures that loops have exactly one backedge.
 ///
-void LoopSimplify::InsertUniqueBackedgeBlock(Loop *L, BasicBlock *Preheader) {
+BasicBlock *
+LoopSimplify::InsertUniqueBackedgeBlock(Loop *L, BasicBlock *Preheader) {
   assert(L->getNumBackEdges() > 1 && "Must have > 1 backedge!");
 
   // Get information about the loop
   BasicBlock *Header = L->getHeader();
   Function *F = Header->getParent();
 
+  // Unique backedge insertion currently depends on having a preheader.
+  if (!Preheader)
+    return 0;
+
   // Figure out which basic blocks contain back-edges to the loop header.
   std::vector<BasicBlock*> BackedgeBlocks;
   for (pred_iterator I = pred_begin(Header), E = pred_end(Header); I != E; ++I)
@@ -613,4 +655,40 @@ void LoopSimplify::InsertUniqueBackedgeBlock(Loop *L, BasicBlock *Preheader) {
   DT->splitBlock(BEBlock);
   if (DominanceFrontier *DF = getAnalysisIfAvailable<DominanceFrontier>())
     DF->splitBlock(BEBlock);
+
+  return BEBlock;
+}
+
+void LoopSimplify::verifyAnalysis() const {
+  // It used to be possible to just assert L->isLoopSimplifyForm(), however
+  // with the introduction of indirectbr, there are now cases where it's
+  // not possible to transform a loop as necessary. We can at least check
+  // that there is an indirectbr near any time there's trouble.
+
+  // Indirectbr can interfere with preheader and unique backedge insertion.
+  if (!L->getLoopPreheader() || !L->getLoopLatch()) {
+    bool HasIndBrPred = false;
+    for (pred_iterator PI = pred_begin(L->getHeader()),
+         PE = pred_end(L->getHeader()); PI != PE; ++PI)
+      if (isa<IndirectBrInst>((*PI)->getTerminator())) {
+        HasIndBrPred = true;
+        break;
+      }
+    assert(HasIndBrPred &&
+           "LoopSimplify has no excuse for missing loop header info!");
+  }
+
+  // Indirectbr can interfere with exit block canonicalization.
+  if (!L->hasDedicatedExits()) {
+    bool HasIndBrExiting = false;
+    SmallVector<BasicBlock*, 8> ExitingBlocks;
+    L->getExitingBlocks(ExitingBlocks);
+    for (unsigned i = 0, e = ExitingBlocks.size(); i != e; ++i)
+      if (isa<IndirectBrInst>((ExitingBlocks[i])->getTerminator())) {
+        HasIndBrExiting = true;
+        break;
+      }
+    assert(HasIndBrExiting &&
+           "LoopSimplify has no excuse for missing exit block info!");
+  }
 }
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/LoopUnroll.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/LoopUnroll.cpp
new file mode 100644
index 0000000..6232f32
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/LoopUnroll.cpp
@@ -0,0 +1,382 @@
+//===-- LoopUnroll.cpp - Loop unrolling utilities -------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements some loop unrolling utilities. It does not define any
+// actual pass or policy, but provides a single function to perform loop
+// unrolling.
+//
+// It works best when loops have been canonicalized by the -indvars pass,
+// allowing it to determine the trip counts of loops easily.
+//
+// The process of unrolling can produce extraneous basic blocks linked with
+// unconditional branches.  This will be corrected in the future.
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "loop-unroll"
+#include "llvm/Transforms/Utils/UnrollLoop.h"
+#include "llvm/BasicBlock.h"
+#include "llvm/ADT/Statistic.h"
+#include "llvm/Analysis/ConstantFolding.h"
+#include "llvm/Analysis/LoopPass.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/raw_ostream.h"
+#include "llvm/Transforms/Utils/BasicBlockUtils.h"
+#include "llvm/Transforms/Utils/Cloning.h"
+#include "llvm/Transforms/Utils/Local.h"
+#include <cstdio>
+
+using namespace llvm;
+
+// TODO: Should these be here or in LoopUnroll?
+STATISTIC(NumCompletelyUnrolled, "Number of loops completely unrolled");
+STATISTIC(NumUnrolled,    "Number of loops unrolled (completely or otherwise)");
+
+/// RemapInstruction - Convert the instruction operands from referencing the
+/// current values into those specified by ValueMap.
+static inline void RemapInstruction(Instruction *I,
+                                    DenseMap<const Value *, Value*> &ValueMap) {
+  for (unsigned op = 0, E = I->getNumOperands(); op != E; ++op) {
+    Value *Op = I->getOperand(op);
+    DenseMap<const Value *, Value*>::iterator It = ValueMap.find(Op);
+    if (It != ValueMap.end())
+      I->setOperand(op, It->second);
+  }
+}
+
+/// FoldBlockIntoPredecessor - Folds a basic block into its predecessor if it
+/// only has one predecessor, and that predecessor only has one successor.
+/// The LoopInfo Analysis that is passed will be kept consistent.
+/// Returns the new combined block.
+static BasicBlock *FoldBlockIntoPredecessor(BasicBlock *BB, LoopInfo* LI) {
+  // Merge basic blocks into their predecessor if there is only one distinct
+  // pred, and if there is only one distinct successor of the predecessor, and
+  // if there are no PHI nodes.
+  BasicBlock *OnlyPred = BB->getSinglePredecessor();
+  if (!OnlyPred) return 0;
+
+  if (OnlyPred->getTerminator()->getNumSuccessors() != 1)
+    return 0;
+
+  DEBUG(errs() << "Merging: " << *BB << " into: " << *OnlyPred);
+
+  // Resolve any PHI nodes at the start of the block.  They are all
+  // guaranteed to have exactly one entry if they exist, unless there are
+  // multiple duplicate (but guaranteed to be equal) entries for the
+  // incoming edges.  This occurs when there are multiple edges from
+  // OnlyPred to OnlySucc.
+  FoldSingleEntryPHINodes(BB);
+
+  // Delete the unconditional branch from the predecessor...
+  OnlyPred->getInstList().pop_back();
+
+  // Move all definitions in the successor to the predecessor...
+  OnlyPred->getInstList().splice(OnlyPred->end(), BB->getInstList());
+
+  // Make all PHI nodes that referred to BB now refer to Pred as their
+  // source...
+  BB->replaceAllUsesWith(OnlyPred);
+
+  std::string OldName = BB->getName();
+
+  // Erase basic block from the function...
+  LI->removeBlock(BB);
+  BB->eraseFromParent();
+
+  // Inherit predecessor's name if it exists...
+  if (!OldName.empty() && !OnlyPred->hasName())
+    OnlyPred->setName(OldName);
+
+  return OnlyPred;
+}
+
+/// Unroll the given loop by Count. The loop must be in LCSSA form. Returns true
+/// if unrolling was successful, or false if the loop was unmodified. Unrolling
+/// can only fail when the loop's latch block is not terminated by a conditional
+/// branch instruction. However, if the trip count (and multiple) are not known,
+/// loop unrolling will usually produce more code that is no faster.
+///
+/// The LoopInfo Analysis that is passed will be kept consistent.
+///
+/// If a LoopPassManager is passed in, and the loop is fully removed, it will be
+/// removed from the LoopPassManager as well. LPM can also be NULL.
+bool llvm::UnrollLoop(Loop *L, unsigned Count, LoopInfo* LI, LPPassManager* LPM) {
+  assert(L->isLCSSAForm());
+
+  BasicBlock *Preheader = L->getLoopPreheader();
+  if (!Preheader) {
+    DEBUG(errs() << "  Can't unroll; loop preheader-insertion failed.\n");
+    return false;
+  }
+
+  BasicBlock *LatchBlock = L->getLoopLatch();
+  if (!LatchBlock) {
+    DEBUG(errs() << "  Can't unroll; loop latch-block-insertion failed.\n");
+    return false;
+  }
+
+  BasicBlock *Header = L->getHeader();
+  BranchInst *BI = dyn_cast<BranchInst>(LatchBlock->getTerminator());
+  
+  if (!BI || BI->isUnconditional()) {
+    // The loop-rotate pass can be helpful to avoid this in many cases.
+    DEBUG(errs() <<
+             "  Can't unroll; loop not terminated by a conditional branch.\n");
+    return false;
+  }
+
+  // Find trip count
+  unsigned TripCount = L->getSmallConstantTripCount();
+  // Find trip multiple if count is not available
+  unsigned TripMultiple = 1;
+  if (TripCount == 0)
+    TripMultiple = L->getSmallConstantTripMultiple();
+
+  if (TripCount != 0)
+    DEBUG(errs() << "  Trip Count = " << TripCount << "\n");
+  if (TripMultiple != 1)
+    DEBUG(errs() << "  Trip Multiple = " << TripMultiple << "\n");
+
+  // Effectively "DCE" unrolled iterations that are beyond the tripcount
+  // and will never be executed.
+  if (TripCount != 0 && Count > TripCount)
+    Count = TripCount;
+
+  assert(Count > 0);
+  assert(TripMultiple > 0);
+  assert(TripCount == 0 || TripCount % TripMultiple == 0);
+
+  // Are we eliminating the loop control altogether?
+  bool CompletelyUnroll = Count == TripCount;
+
+  // If we know the trip count, we know the multiple...
+  unsigned BreakoutTrip = 0;
+  if (TripCount != 0) {
+    BreakoutTrip = TripCount % Count;
+    TripMultiple = 0;
+  } else {
+    // Figure out what multiple to use.
+    BreakoutTrip = TripMultiple =
+      (unsigned)GreatestCommonDivisor64(Count, TripMultiple);
+  }
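+  // For example (hypothetical numbers): TripCount = 10 with Count = 4 gives
+  // BreakoutTrip = 10 % 4 = 2 and TripMultiple = 0.  With an unknown trip
+  // count, TripMultiple = 6, and Count = 4, both become gcd(4, 6) = 2, so
+  // every other unrolled latch keeps its conditional branch (see the
+  // NeedConditional logic below).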
+
+  if (CompletelyUnroll) {
+    DEBUG(errs() << "COMPLETELY UNROLLING loop %" << Header->getName()
+          << " with trip count " << TripCount << "!\n");
+  } else {
+    DEBUG(errs() << "UNROLLING loop %" << Header->getName()
+          << " by " << Count);
+    if (TripMultiple == 0 || BreakoutTrip != TripMultiple) {
+      DEBUG(errs() << " with a breakout at trip " << BreakoutTrip);
+    } else if (TripMultiple != 1) {
+      DEBUG(errs() << " with " << TripMultiple << " trips per branch");
+    }
+    DEBUG(errs() << "!\n");
+  }
+
+  std::vector<BasicBlock*> LoopBlocks = L->getBlocks();
+
+  bool ContinueOnTrue = L->contains(BI->getSuccessor(0));
+  BasicBlock *LoopExit = BI->getSuccessor(ContinueOnTrue);
+
+  // For the first iteration of the loop, we should use the precloned values for
+  // PHI nodes.  Insert associations now.
+  typedef DenseMap<const Value*, Value*> ValueMapTy;
+  ValueMapTy LastValueMap;
+  std::vector<PHINode*> OrigPHINode;
+  for (BasicBlock::iterator I = Header->begin(); isa<PHINode>(I); ++I) {
+    PHINode *PN = cast<PHINode>(I);
+    OrigPHINode.push_back(PN);
+    if (Instruction *I = 
+                dyn_cast<Instruction>(PN->getIncomingValueForBlock(LatchBlock)))
+      if (L->contains(I->getParent()))
+        LastValueMap[I] = I;
+  }
+
+  std::vector<BasicBlock*> Headers;
+  std::vector<BasicBlock*> Latches;
+  Headers.push_back(Header);
+  Latches.push_back(LatchBlock);
+
+  for (unsigned It = 1; It != Count; ++It) {
+    char SuffixBuffer[100];
+    sprintf(SuffixBuffer, ".%d", It);
+    
+    std::vector<BasicBlock*> NewBlocks;
+    
+    for (std::vector<BasicBlock*>::iterator BB = LoopBlocks.begin(),
+         E = LoopBlocks.end(); BB != E; ++BB) {
+      ValueMapTy ValueMap;
+      BasicBlock *New = CloneBasicBlock(*BB, ValueMap, SuffixBuffer);
+      Header->getParent()->getBasicBlockList().push_back(New);
+
+      // Loop over all of the PHI nodes in the block, changing them to use the
+      // incoming values from the previous block.
+      if (*BB == Header)
+        for (unsigned i = 0, e = OrigPHINode.size(); i != e; ++i) {
+          PHINode *NewPHI = cast<PHINode>(ValueMap[OrigPHINode[i]]);
+          Value *InVal = NewPHI->getIncomingValueForBlock(LatchBlock);
+          if (Instruction *InValI = dyn_cast<Instruction>(InVal))
+            if (It > 1 && L->contains(InValI->getParent()))
+              InVal = LastValueMap[InValI];
+          ValueMap[OrigPHINode[i]] = InVal;
+          New->getInstList().erase(NewPHI);
+        }
+
+      // Update our running map of newest clones
+      LastValueMap[*BB] = New;
+      for (ValueMapTy::iterator VI = ValueMap.begin(), VE = ValueMap.end();
+           VI != VE; ++VI)
+        LastValueMap[VI->first] = VI->second;
+
+      L->addBasicBlockToLoop(New, LI->getBase());
+
+      // Add phi entries for newly created values to all exit blocks except
+      // the successor of the latch block.  The latch block's exit successor
+      // will be updated specially after unrolling all the way.
+      if (*BB != LatchBlock)
+        for (Value::use_iterator UI = (*BB)->use_begin(), UE = (*BB)->use_end();
+             UI != UE;) {
+          Instruction *UseInst = cast<Instruction>(*UI);
+          ++UI;
+          if (isa<PHINode>(UseInst) && !L->contains(UseInst->getParent())) {
+            PHINode *phi = cast<PHINode>(UseInst);
+            Value *Incoming = phi->getIncomingValueForBlock(*BB);
+            phi->addIncoming(Incoming, New);
+          }
+        }
+
+      // Keep track of new headers and latches as we create them, so that
+      // we can insert the proper branches later.
+      if (*BB == Header)
+        Headers.push_back(New);
+      if (*BB == LatchBlock) {
+        Latches.push_back(New);
+
+        // Also, clear out the new latch's back edge so that it doesn't look
+        // like a new loop and is amenable to being merged with adjacent
+        // blocks later on.
+        TerminatorInst *Term = New->getTerminator();
+        assert(L->contains(Term->getSuccessor(!ContinueOnTrue)));
+        assert(Term->getSuccessor(ContinueOnTrue) == LoopExit);
+        Term->setSuccessor(!ContinueOnTrue, NULL);
+      }
+
+      NewBlocks.push_back(New);
+    }
+    
+    // Remap all instructions in the most recent iteration
+    for (unsigned i = 0; i < NewBlocks.size(); ++i)
+      for (BasicBlock::iterator I = NewBlocks[i]->begin(),
+           E = NewBlocks[i]->end(); I != E; ++I)
+        RemapInstruction(I, LastValueMap);
+  }
+  
+  // The latch block exits the loop.  If there are any PHI nodes in the
+  // successor blocks, update them to use the appropriate values computed as the
+  // last iteration of the loop.
+  if (Count != 1) {
+    SmallPtrSet<PHINode*, 8> Users;
+    for (Value::use_iterator UI = LatchBlock->use_begin(),
+         UE = LatchBlock->use_end(); UI != UE; ++UI)
+      if (PHINode *phi = dyn_cast<PHINode>(*UI))
+        Users.insert(phi);
+    
+    BasicBlock *LastIterationBB = cast<BasicBlock>(LastValueMap[LatchBlock]);
+    for (SmallPtrSet<PHINode*,8>::iterator SI = Users.begin(), SE = Users.end();
+         SI != SE; ++SI) {
+      PHINode *PN = *SI;
+      Value *InVal = PN->removeIncomingValue(LatchBlock, false);
+      // If this value was defined in the loop, take the value defined by the
+      // last iteration of the loop.
+      if (Instruction *InValI = dyn_cast<Instruction>(InVal)) {
+        if (L->contains(InValI->getParent()))
+          InVal = LastValueMap[InVal];
+      }
+      PN->addIncoming(InVal, LastIterationBB);
+    }
+  }
+
+  // Now, if we're doing complete unrolling, loop over the PHI nodes in the
+  // original block, setting them to their incoming values.
+  if (CompletelyUnroll) {
+    BasicBlock *Preheader = L->getLoopPreheader();
+    for (unsigned i = 0, e = OrigPHINode.size(); i != e; ++i) {
+      PHINode *PN = OrigPHINode[i];
+      PN->replaceAllUsesWith(PN->getIncomingValueForBlock(Preheader));
+      Header->getInstList().erase(PN);
+    }
+  }
+
+  // Now that all the basic blocks for the unrolled iterations are in place,
+  // set up the branches to connect them.
+  for (unsigned i = 0, e = Latches.size(); i != e; ++i) {
+    // The original branch was replicated in each unrolled iteration.
+    BranchInst *Term = cast<BranchInst>(Latches[i]->getTerminator());
+
+    // The branch destination.
+    unsigned j = (i + 1) % e;
+    BasicBlock *Dest = Headers[j];
+    bool NeedConditional = true;
+
+    // For a complete unroll, make the last iteration end with a branch
+    // to the exit block.
+    if (CompletelyUnroll && j == 0) {
+      Dest = LoopExit;
+      NeedConditional = false;
+    }
+
+    // If we know the trip count or a multiple of it, we can safely use an
+    // unconditional branch for some iterations.
+    if (j != BreakoutTrip && (TripMultiple == 0 || j % TripMultiple != 0)) {
+      NeedConditional = false;
+    }
+
+    if (NeedConditional) {
+      // Update the conditional branch's successor for the following
+      // iteration.
+      Term->setSuccessor(!ContinueOnTrue, Dest);
+    } else {
+      Term->setUnconditionalDest(Dest);
+      // Merge adjacent basic blocks, if possible.
+      if (BasicBlock *Fold = FoldBlockIntoPredecessor(Dest, LI)) {
+        std::replace(Latches.begin(), Latches.end(), Dest, Fold);
+        std::replace(Headers.begin(), Headers.end(), Dest, Fold);
+      }
+    }
+  }
+  
+  // At this point, the code is well formed.  We now do a quick sweep over the
+  // inserted code, doing constant propagation and dead code elimination as we
+  // go.
+  const std::vector<BasicBlock*> &NewLoopBlocks = L->getBlocks();
+  for (std::vector<BasicBlock*>::const_iterator BB = NewLoopBlocks.begin(),
+       BBE = NewLoopBlocks.end(); BB != BBE; ++BB)
+    for (BasicBlock::iterator I = (*BB)->begin(), E = (*BB)->end(); I != E; ) {
+      Instruction *Inst = I++;
+
+      if (isInstructionTriviallyDead(Inst))
+        (*BB)->getInstList().erase(Inst);
+      else if (Constant *C = ConstantFoldInstruction(Inst)) {
+        Inst->replaceAllUsesWith(C);
+        (*BB)->getInstList().erase(Inst);
+      }
+    }
+
+  NumCompletelyUnrolled += CompletelyUnroll;
+  ++NumUnrolled;
+  // Remove the loop from the LoopPassManager if it's completely removed.
+  if (CompletelyUnroll && LPM != NULL)
+    LPM->deleteLoopFromQueue(L);
+
+  // If we didn't completely unroll the loop, it should still be in LCSSA form.
+  if (!CompletelyUnroll)
+    assert(L->isLCSSAForm());
+
+  return true;
+}
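
The branch-rewiring rule above is dense, so here is a minimal standalone
sketch (not part of the patch; Count, BreakoutTrip and TripMultiple are
hypothetical example values) of which unrolled latches keep a conditional
branch:

    #include <cstdio>

    int main() {
      // Hypothetical unroll: factor 4, breakout at trip 1, trip multiple 2.
      const unsigned Count = 4, BreakoutTrip = 1, TripMultiple = 2;
      for (unsigned i = 0; i != Count; ++i) {
        unsigned j = (i + 1) % Count;      // destination header index
        bool NeedConditional = true;
        // Same test as the unrolling loop above.
        if (j != BreakoutTrip && (TripMultiple == 0 || j % TripMultiple != 0))
          NeedConditional = false;
        std::printf("latch %u -> header %u: %s\n", i, j,
                    NeedConditional ? "conditional" : "unconditional");
      }
      return 0;
    }

In the complete-unroll case the patch additionally forces the last latch to
branch unconditionally to LoopExit, which this sketch leaves out.
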
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/LowerAllocations.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/LowerAllocations.cpp
deleted file mode 100644
index 2df953c..0000000
--- a/libclamav/c++/llvm/lib/Transforms/Utils/LowerAllocations.cpp
+++ /dev/null
@@ -1,140 +0,0 @@
-//===- LowerAllocations.cpp - Reduce malloc & free insts to calls ---------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// The LowerAllocations transformation is a target-dependent transformation
-// because it depends on the size of data types and alignment constraints.
-//
-//===----------------------------------------------------------------------===//
-
-#define DEBUG_TYPE "lowerallocs"
-#include "llvm/Transforms/Scalar.h"
-#include "llvm/Transforms/Utils/UnifyFunctionExitNodes.h"
-#include "llvm/Module.h"
-#include "llvm/DerivedTypes.h"
-#include "llvm/Instructions.h"
-#include "llvm/Constants.h"
-#include "llvm/LLVMContext.h"
-#include "llvm/Pass.h"
-#include "llvm/ADT/Statistic.h"
-#include "llvm/Target/TargetData.h"
-#include "llvm/Support/Compiler.h"
-using namespace llvm;
-
-STATISTIC(NumLowered, "Number of allocations lowered");
-
-namespace {
-  /// LowerAllocations - Turn malloc and free instructions into @malloc and
-  /// @free calls.
-  ///
-  class VISIBILITY_HIDDEN LowerAllocations : public BasicBlockPass {
-    Constant *FreeFunc;   // Functions in the module we are processing
-                          // Initialized by doInitialization
-    bool LowerMallocArgToInteger;
-  public:
-    static char ID; // Pass ID, replacement for typeid
-    explicit LowerAllocations(bool LowerToInt = false)
-      : BasicBlockPass(&ID), FreeFunc(0), 
-        LowerMallocArgToInteger(LowerToInt) {}
-
-    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
-      AU.addRequired<TargetData>();
-      AU.setPreservesCFG();
-
-      // This is a cluster of orthogonal Transforms:
-      AU.addPreserved<UnifyFunctionExitNodes>();
-      AU.addPreservedID(PromoteMemoryToRegisterID);
-      AU.addPreservedID(LowerSwitchID);
-      AU.addPreservedID(LowerInvokePassID);
-    }
-
-    /// doPassInitialization - For the lower allocations pass, this ensures that
-    /// a module contains a declaration for a malloc and a free function.
-    ///
-    bool doInitialization(Module &M);
-
-    virtual bool doInitialization(Function &F) {
-      return doInitialization(*F.getParent());
-    }
-
-    /// runOnBasicBlock - This method does the actual work of converting
-    /// instructions over, assuming that the pass has already been initialized.
-    ///
-    bool runOnBasicBlock(BasicBlock &BB);
-  };
-}
-
-char LowerAllocations::ID = 0;
-static RegisterPass<LowerAllocations>
-X("lowerallocs", "Lower allocations from instructions to calls");
-
-// Publicly exposed interface to pass...
-const PassInfo *const llvm::LowerAllocationsID = &X;
-// createLowerAllocationsPass - Interface to this file...
-Pass *llvm::createLowerAllocationsPass(bool LowerMallocArgToInteger) {
-  return new LowerAllocations(LowerMallocArgToInteger);
-}
-
-
-// doInitialization - For the lower allocations pass, this ensures that a
-// module contains a declaration for a malloc and a free function.
-//
-// This function is always successful.
-//
-bool LowerAllocations::doInitialization(Module &M) {
-  const Type *BPTy = PointerType::getUnqual(Type::getInt8Ty(M.getContext()));
-  FreeFunc = M.getOrInsertFunction("free"  , Type::getVoidTy(M.getContext()),
-                                   BPTy, (Type *)0);
-  return true;
-}
-
-// runOnBasicBlock - This method does the actual work of converting
-// instructions over, assuming that the pass has already been initialized.
-//
-bool LowerAllocations::runOnBasicBlock(BasicBlock &BB) {
-  bool Changed = false;
-  assert(FreeFunc && "Pass not initialized!");
-
-  BasicBlock::InstListType &BBIL = BB.getInstList();
-
-  const TargetData &TD = getAnalysis<TargetData>();
-  const Type *IntPtrTy = TD.getIntPtrType(BB.getContext());
-
-  // Loop over all of the instructions, looking for malloc or free instructions
-  for (BasicBlock::iterator I = BB.begin(), E = BB.end(); I != E; ++I) {
-    if (MallocInst *MI = dyn_cast<MallocInst>(I)) {
-      Value *ArraySize = MI->getOperand(0);
-      if (ArraySize->getType() != IntPtrTy)
-        ArraySize = CastInst::CreateIntegerCast(ArraySize, IntPtrTy,
-                                                false /*ZExt*/, "", I);
-      Value *MCast = CallInst::CreateMalloc(I, IntPtrTy,
-                                            MI->getAllocatedType(), ArraySize);
-
-      // Replace all uses of the old malloc inst with the cast inst
-      MI->replaceAllUsesWith(MCast);
-      I = --BBIL.erase(I);         // remove and delete the malloc instr...
-      Changed = true;
-      ++NumLowered;
-    } else if (FreeInst *FI = dyn_cast<FreeInst>(I)) {
-      Value *PtrCast = 
-        new BitCastInst(FI->getOperand(0),
-               PointerType::getUnqual(Type::getInt8Ty(BB.getContext())), "", I);
-
-      // Insert a call to the free function...
-      CallInst::Create(FreeFunc, PtrCast, "", I)->setTailCall();
-
-      // Delete the old free instruction
-      I = --BBIL.erase(I);
-      Changed = true;
-      ++NumLowered;
-    }
-  }
-
-  return Changed;
-}
-
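
One idiom in the deleted runOnBasicBlock deserves a note: "I = --BBIL.erase(I)"
steps the iterator back one slot so that the enclosing loop's ++I lands on the
instruction after the erased one. The same erase-while-iterating pattern,
written without that compensation on a plain std::list (a hypothetical toy,
not ClamAV or LLVM code):

    #include <list>

    // erase() returns the next valid iterator, so advance only when we
    // did not erase; no "--" compensation is needed in this form.
    void eraseEvens(std::list<int> &L) {
      for (std::list<int>::iterator I = L.begin(), E = L.end(); I != E; ) {
        if (*I % 2 == 0)
          I = L.erase(I);
        else
          ++I;
      }
    }
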
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/LowerInvoke.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/LowerInvoke.cpp
index 4ecf6d7..6e6e8d2 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/LowerInvoke.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/LowerInvoke.cpp
@@ -47,7 +47,6 @@
 #include "llvm/Transforms/Utils/Local.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/Support/CommandLine.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Target/TargetLowering.h"
 #include <csetjmp>
 #include <set>
@@ -61,7 +60,7 @@ static cl::opt<bool> ExpensiveEHSupport("enable-correct-eh-support",
  cl::desc("Make the -lowerinvoke pass insert expensive, but correct, EH code"));
 
 namespace {
-  class VISIBILITY_HIDDEN LowerInvoke : public FunctionPass {
+  class LowerInvoke : public FunctionPass {
     // Used for both models.
     Constant *WriteFn;
     Constant *AbortFn;
@@ -87,7 +86,6 @@ namespace {
       // This is a cluster of orthogonal Transforms
       AU.addPreservedID(PromoteMemoryToRegisterID);
       AU.addPreservedID(LowerSwitchID);
-      AU.addPreservedID(LowerAllocationsID);
     }
 
   private:
@@ -116,7 +114,7 @@ FunctionPass *llvm::createLowerInvokePass(const TargetLowering *TLI) {
 // current module.
 bool LowerInvoke::doInitialization(Module &M) {
   const Type *VoidPtrTy =
-          PointerType::getUnqual(Type::getInt8Ty(M.getContext()));
+          Type::getInt8PtrTy(M.getContext());
   AbortMessage = 0;
   if (ExpensiveEHSupport) {
     // Insert a type for the linked list of jump buffers.
@@ -530,7 +528,7 @@ bool LowerInvoke::insertExpensiveEHSupport(Function &F) {
                                                  "TheJmpBuf",
                                                  EntryBB->getTerminator());
     JmpBufPtr = new BitCastInst(JmpBufPtr,
-                        PointerType::getUnqual(Type::getInt8Ty(F.getContext())),
+                        Type::getInt8PtrTy(F.getContext()),
                                 "tmp", EntryBB->getTerminator());
     Value *SJRet = CallInst::Create(SetJmpFn, JmpBufPtr, "sjret",
                                     EntryBB->getTerminator());
@@ -585,7 +583,7 @@ bool LowerInvoke::insertExpensiveEHSupport(Function &F) {
   Idx[0] = GetElementPtrInst::Create(BufPtr, Idx.begin(), Idx.end(), "JmpBuf",
                                      UnwindBlock);
   Idx[0] = new BitCastInst(Idx[0],
-             PointerType::getUnqual(Type::getInt8Ty(F.getContext())),
+             Type::getInt8PtrTy(F.getContext()),
                            "tmp", UnwindBlock);
   Idx[1] = ConstantInt::get(Type::getInt32Ty(F.getContext()), 1);
   CallInst::Create(LongJmpFn, Idx.begin(), Idx.end(), "", UnwindBlock);
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/LowerSwitch.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/LowerSwitch.cpp
index 764f098..8c18b59 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/LowerSwitch.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/LowerSwitch.cpp
@@ -21,8 +21,8 @@
 #include "llvm/LLVMContext.h"
 #include "llvm/Pass.h"
 #include "llvm/ADT/STLExtras.h"
-#include "llvm/Support/Debug.h"
 #include "llvm/Support/Compiler.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 #include <algorithm>
 using namespace llvm;
@@ -31,7 +31,7 @@ namespace {
   /// LowerSwitch Pass - Replace all SwitchInst instructions with chained branch
   /// instructions.  Note that this cannot be a BasicBlock pass because it
   /// modifies the CFG!
-  class VISIBILITY_HIDDEN LowerSwitch : public FunctionPass {
+  class LowerSwitch : public FunctionPass {
   public:
     static char ID; // Pass identification, replacement for typeid
     LowerSwitch() : FunctionPass(&ID) {} 
@@ -43,7 +43,6 @@ namespace {
       AU.addPreserved<UnifyFunctionExitNodes>();
       AU.addPreservedID(PromoteMemoryToRegisterID);
       AU.addPreservedID(LowerInvokePassID);
-      AU.addPreservedID(LowerAllocationsID);
     }
 
     struct CaseRange {
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/Mem2Reg.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/Mem2Reg.cpp
index 5df0832..99203b6 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/Mem2Reg.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/Mem2Reg.cpp
@@ -20,13 +20,12 @@
 #include "llvm/Instructions.h"
 #include "llvm/Function.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Support/Compiler.h"
 using namespace llvm;
 
 STATISTIC(NumPromoted, "Number of alloca's promoted");
 
 namespace {
-  struct VISIBILITY_HIDDEN PromotePass : public FunctionPass {
+  struct PromotePass : public FunctionPass {
     static char ID; // Pass identification, replacement for typeid
     PromotePass() : FunctionPass(&ID) {}
 
@@ -45,7 +44,6 @@ namespace {
       AU.addPreserved<UnifyFunctionExitNodes>();
       AU.addPreservedID(LowerSwitchID);
       AU.addPreservedID(LowerInvokePassID);
-      AU.addPreservedID(LowerAllocationsID);
     }
   };
 }  // end of anonymous namespace
@@ -75,7 +73,7 @@ bool PromotePass::runOnFunction(Function &F) {
 
     if (Allocas.empty()) break;
 
-    PromoteMemToReg(Allocas, DT, DF, F.getContext());
+    PromoteMemToReg(Allocas, DT, DF);
     NumPromoted += Allocas.size();
     Changed = true;
   }
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/PromoteMemoryToRegister.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/PromoteMemoryToRegister.cpp
index 9ca06bd..e25f9e2 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/PromoteMemoryToRegister.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/PromoteMemoryToRegister.cpp
@@ -23,7 +23,6 @@
 #include "llvm/Function.h"
 #include "llvm/Instructions.h"
 #include "llvm/IntrinsicInst.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/Analysis/Dominators.h"
 #include "llvm/Analysis/AliasSetTracker.h"
 #include "llvm/ADT/DenseMap.h"
@@ -32,7 +31,6 @@
 #include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/Support/CFG.h"
-#include "llvm/Support/Compiler.h"
 #include <algorithm>
 using namespace llvm;
 
@@ -100,7 +98,7 @@ namespace {
   struct AllocaInfo;
 
   // Data package used by RenamePass()
-  class VISIBILITY_HIDDEN RenamePassData {
+  class RenamePassData {
   public:
     typedef std::vector<Value *> ValVector;
     
@@ -123,7 +121,7 @@ namespace {
   ///
   /// This functionality is important because it avoids scanning large basic
   /// blocks multiple times when promoting many allocas in the same block.
-  class VISIBILITY_HIDDEN LargeBlockInfo {
+  class LargeBlockInfo {
     /// InstNumbers - For each instruction that we track, keep the index of the
     /// instruction.  The index starts out as the number of the instruction from
     /// the start of the block.
@@ -170,7 +168,7 @@ namespace {
     }
   };
 
-  struct VISIBILITY_HIDDEN PromoteMem2Reg {
+  struct PromoteMem2Reg {
     /// Allocas - The alloca instructions being promoted.
     ///
     std::vector<AllocaInst*> Allocas;
@@ -181,8 +179,6 @@ namespace {
     ///
     AliasSetTracker *AST;
     
-    LLVMContext &Context;
-
     /// AllocaLookup - Reverse mapping of Allocas.
     ///
     std::map<AllocaInst*, unsigned>  AllocaLookup;
@@ -213,9 +209,8 @@ namespace {
     DenseMap<const BasicBlock*, unsigned> BBNumPreds;
   public:
     PromoteMem2Reg(const std::vector<AllocaInst*> &A, DominatorTree &dt,
-                   DominanceFrontier &df, AliasSetTracker *ast,
-                   LLVMContext &C)
-      : Allocas(A), DT(dt), DF(df), AST(ast), Context(C) {}
+                   DominanceFrontier &df, AliasSetTracker *ast)
+      : Allocas(A), DT(dt), DF(df), AST(ast) {}
 
     void run();
 
@@ -750,7 +745,12 @@ void PromoteMem2Reg::RewriteSingleStoreAlloca(AllocaInst *AI,
     }
     
     // Otherwise, we *can* safely rewrite this load.
-    LI->replaceAllUsesWith(OnlyStore->getOperand(0));
+    Value *ReplVal = OnlyStore->getOperand(0);
+    // If the replacement value is the load, this must occur in unreachable
+    // code.
+    if (ReplVal == LI)
+      ReplVal = UndefValue::get(LI->getType());
+    LI->replaceAllUsesWith(ReplVal);
     if (AST && isa<PointerType>(LI->getType()))
       AST->deleteValue(LI);
     LI->eraseFromParent();
@@ -999,9 +999,9 @@ NextIteration:
 ///
 void llvm::PromoteMemToReg(const std::vector<AllocaInst*> &Allocas,
                            DominatorTree &DT, DominanceFrontier &DF,
-                           LLVMContext &Context, AliasSetTracker *AST) {
+                           AliasSetTracker *AST) {
   // If there is nothing to do, bail out...
   if (Allocas.empty()) return;
 
-  PromoteMem2Reg(Allocas, DT, DF, AST, Context).run();
+  PromoteMem2Reg(Allocas, DT, DF, AST).run();
 }
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp
new file mode 100644
index 0000000..8a07c35
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp
@@ -0,0 +1,336 @@
+//===- SSAUpdater.cpp - Unstructured SSA Update Tool ----------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements the SSAUpdater class.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/Transforms/Utils/SSAUpdater.h"
+#include "llvm/Instructions.h"
+#include "llvm/ADT/DenseMap.h"
+#include "llvm/Support/CFG.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/ValueHandle.h"
+#include "llvm/Support/raw_ostream.h"
+using namespace llvm;
+
+typedef DenseMap<BasicBlock*, TrackingVH<Value> > AvailableValsTy;
+typedef std::vector<std::pair<BasicBlock*, TrackingVH<Value> > >
+                IncomingPredInfoTy;
+
+static AvailableValsTy &getAvailableVals(void *AV) {
+  return *static_cast<AvailableValsTy*>(AV);
+}
+
+static IncomingPredInfoTy &getIncomingPredInfo(void *IPI) {
+  return *static_cast<IncomingPredInfoTy*>(IPI);
+}
+
+
+SSAUpdater::SSAUpdater(SmallVectorImpl<PHINode*> *NewPHI)
+  : AV(0), PrototypeValue(0), IPI(0), InsertedPHIs(NewPHI) {}
+
+SSAUpdater::~SSAUpdater() {
+  delete &getAvailableVals(AV);
+  delete &getIncomingPredInfo(IPI);
+}
+
+/// Initialize - Reset this object to get ready for a new set of SSA
+/// updates.  ProtoValue is the value used to name PHI nodes.
+void SSAUpdater::Initialize(Value *ProtoValue) {
+  if (AV == 0)
+    AV = new AvailableValsTy();
+  else
+    getAvailableVals(AV).clear();
+
+  if (IPI == 0)
+    IPI = new IncomingPredInfoTy();
+  else
+    getIncomingPredInfo(IPI).clear();
+  PrototypeValue = ProtoValue;
+}
+
+/// HasValueForBlock - Return true if the SSAUpdater already has a value for
+/// the specified block.
+bool SSAUpdater::HasValueForBlock(BasicBlock *BB) const {
+  return getAvailableVals(AV).count(BB);
+}
+
+/// AddAvailableValue - Indicate that a rewritten value is available in the
+/// specified block with the specified value.
+void SSAUpdater::AddAvailableValue(BasicBlock *BB, Value *V) {
+  assert(PrototypeValue != 0 && "Need to initialize SSAUpdater");
+  assert(PrototypeValue->getType() == V->getType() &&
+         "All rewritten values must have the same type");
+  getAvailableVals(AV)[BB] = V;
+}
+
+/// GetValueAtEndOfBlock - Construct SSA form, materializing a value that is
+/// live at the end of the specified block.
+Value *SSAUpdater::GetValueAtEndOfBlock(BasicBlock *BB) {
+  assert(getIncomingPredInfo(IPI).empty() && "Unexpected Internal State");
+  Value *Res = GetValueAtEndOfBlockInternal(BB);
+  assert(getIncomingPredInfo(IPI).empty() && "Unexpected Internal State");
+  return Res;
+}
+
+/// GetValueInMiddleOfBlock - Construct SSA form, materializing a value that
+/// is live in the middle of the specified block.
+///
+/// GetValueInMiddleOfBlock is the same as GetValueAtEndOfBlock except in one
+/// important case: if there is a definition of the rewritten value after the
+/// 'use' in BB.  Consider code like this:
+///
+///      X1 = ...
+///   SomeBB:
+///      use(X)
+///      X2 = ...
+///      br Cond, SomeBB, OutBB
+///
+/// In this case, there are two values (X1 and X2) added to the AvailableVals
+/// set by the client of the rewriter, and those values are both live out of
+/// their respective blocks.  However, the use of X happens in the *middle* of
+/// a block.  Because of this, we need to insert a new PHI node in SomeBB to
+/// merge the appropriate values, and this value isn't live out of the block.
+///
+Value *SSAUpdater::GetValueInMiddleOfBlock(BasicBlock *BB) {
+  // If there is no definition of the renamed variable in this block, just use
+  // GetValueAtEndOfBlock to do our work.
+  if (!getAvailableVals(AV).count(BB))
+    return GetValueAtEndOfBlock(BB);
+
+  // Otherwise, we have the hard case.  Get the live-in values for each
+  // predecessor.
+  SmallVector<std::pair<BasicBlock*, Value*>, 8> PredValues;
+  Value *SingularValue = 0;
+
+  // We can get our predecessor info by walking the pred_iterator list, but it
+  // is relatively slow.  If we already have PHI nodes in this block, walk one
+  // of them to get the predecessor list instead.
+  if (PHINode *SomePhi = dyn_cast<PHINode>(BB->begin())) {
+    for (unsigned i = 0, e = SomePhi->getNumIncomingValues(); i != e; ++i) {
+      BasicBlock *PredBB = SomePhi->getIncomingBlock(i);
+      Value *PredVal = GetValueAtEndOfBlock(PredBB);
+      PredValues.push_back(std::make_pair(PredBB, PredVal));
+
+      // Compute SingularValue.
+      if (i == 0)
+        SingularValue = PredVal;
+      else if (PredVal != SingularValue)
+        SingularValue = 0;
+    }
+  } else {
+    bool isFirstPred = true;
+    for (pred_iterator PI = pred_begin(BB), E = pred_end(BB); PI != E; ++PI) {
+      BasicBlock *PredBB = *PI;
+      Value *PredVal = GetValueAtEndOfBlock(PredBB);
+      PredValues.push_back(std::make_pair(PredBB, PredVal));
+
+      // Compute SingularValue.
+      if (isFirstPred) {
+        SingularValue = PredVal;
+        isFirstPred = false;
+      } else if (PredVal != SingularValue)
+        SingularValue = 0;
+    }
+  }
+
+  // If there are no predecessors, just return undef.
+  if (PredValues.empty())
+    return UndefValue::get(PrototypeValue->getType());
+
+  // Otherwise, if all the merged values are the same, just use it.
+  if (SingularValue != 0)
+    return SingularValue;
+
+  // Otherwise, we do need a PHI: insert one now.
+  PHINode *InsertedPHI = PHINode::Create(PrototypeValue->getType(),
+                                         PrototypeValue->getName(),
+                                         &BB->front());
+  InsertedPHI->reserveOperandSpace(PredValues.size());
+
+  // Fill in all the predecessors of the PHI.
+  for (unsigned i = 0, e = PredValues.size(); i != e; ++i)
+    InsertedPHI->addIncoming(PredValues[i].second, PredValues[i].first);
+
+  // See if the PHI node can be merged to a single value.  This can happen in
+  // loop cases when we get a PHI of itself and one other value.
+  if (Value *ConstVal = InsertedPHI->hasConstantValue()) {
+    InsertedPHI->eraseFromParent();
+    return ConstVal;
+  }
+
+  // If the client wants to know about all new instructions, tell it.
+  if (InsertedPHIs) InsertedPHIs->push_back(InsertedPHI);
+
+  DEBUG(errs() << "  Inserted PHI: " << *InsertedPHI << "\n");
+  return InsertedPHI;
+}
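
For orientation, a hedged sketch of how a client might drive the API this
file implements (the function and the block/value names are hypothetical,
not code from the patch):

    #include "llvm/Transforms/Utils/SSAUpdater.h"
    #include "llvm/ADT/SmallVector.h"
    using namespace llvm;

    // OrigVal is available as V1 at the end of BB1 and as V2 at the end
    // of BB2; ask the updater for the value live in a common successor.
    Value *valueAtMerge(Value *OrigVal, BasicBlock *BB1, Value *V1,
                        BasicBlock *BB2, Value *V2, BasicBlock *MergeBB) {
      SmallVector<PHINode*, 4> NewPHIs;
      SSAUpdater SSA(&NewPHIs);     // collect any PHIs the updater inserts
      SSA.Initialize(OrigVal);      // new PHIs take OrigVal's type and name
      SSA.AddAvailableValue(BB1, V1);
      SSA.AddAvailableValue(BB2, V2);
      return SSA.GetValueInMiddleOfBlock(MergeBB); // V1, V2, or a new PHI
    }
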
+
+/// RewriteUse - Rewrite a use of the symbolic value.  This handles PHI nodes,
+/// which use their value in the corresponding predecessor.
+void SSAUpdater::RewriteUse(Use &U) {
+  Instruction *User = cast<Instruction>(U.getUser());
+  
+  Value *V;
+  if (PHINode *UserPN = dyn_cast<PHINode>(User))
+    V = GetValueAtEndOfBlock(UserPN->getIncomingBlock(U));
+  else
+    V = GetValueInMiddleOfBlock(User->getParent());
+
+  U.set(V);
+}
+
+
+/// GetValueAtEndOfBlockInternal - Check to see if AvailableVals has an entry
+/// for the specified BB and if so, return it.  If not, construct SSA form by
+/// walking predecessors inserting PHI nodes as needed until we get to a block
+/// where the value is available.
+///
+Value *SSAUpdater::GetValueAtEndOfBlockInternal(BasicBlock *BB) {
+  AvailableValsTy &AvailableVals = getAvailableVals(AV);
+
+  // Query AvailableVals by doing an insertion of null.
+  std::pair<AvailableValsTy::iterator, bool> InsertRes =
+  AvailableVals.insert(std::make_pair(BB, WeakVH()));
+
+  // Handle the case when the insertion fails because we have already seen BB.
+  if (!InsertRes.second) {
+    // If the insertion failed, there are two cases.  The first case is that the
+    // value is already available for the specified block.  If we get this, just
+    // return the value.
+    if (InsertRes.first->second != 0)
+      return InsertRes.first->second;
+
+    // Otherwise, if the value we find is null, then the value is not
+    // known but it is being computed elsewhere in our recursion.  This means
+    // that we have a cycle.  Handle this by inserting a PHI node and returning
+    // it.  When we get back to the first instance of the recursion we will fill
+    // in the PHI node.
+    return InsertRes.first->second =
+    PHINode::Create(PrototypeValue->getType(), PrototypeValue->getName(),
+                    &BB->front());
+  }
+
+  // Okay, the value isn't in the map and we just inserted a null in the entry
+  // to indicate that we're processing the block.  Since we have no idea what
+  // value is in this block, we have to recurse through our predecessors.
+  //
+  // While we're walking our predecessors, we keep track of them in a vector,
+  // then insert a PHI node in the end if we actually need one.  We could use a
+  // smallvector here, but that would take a lot of stack space for every level
+  // of the recursion; instead we use IncomingPredInfo as an explicit stack.
+  IncomingPredInfoTy &IncomingPredInfo = getIncomingPredInfo(IPI);
+  unsigned FirstPredInfoEntry = IncomingPredInfo.size();
+
+  // As we're walking the predecessors, keep track of whether they are all
+  // producing the same value.  If so, this value will capture it, if not, it
+  // will get reset to null.  We distinguish the no-predecessor case explicitly
+  // below.
+  TrackingVH<Value> SingularValue;
+
+  // We can get our predecessor info by walking the pred_iterator list, but it
+  // is relatively slow.  If we already have PHI nodes in this block, walk one
+  // of them to get the predecessor list instead.
+  if (PHINode *SomePhi = dyn_cast<PHINode>(BB->begin())) {
+    for (unsigned i = 0, e = SomePhi->getNumIncomingValues(); i != e; ++i) {
+      BasicBlock *PredBB = SomePhi->getIncomingBlock(i);
+      Value *PredVal = GetValueAtEndOfBlockInternal(PredBB);
+      IncomingPredInfo.push_back(std::make_pair(PredBB, PredVal));
+
+      // Compute SingularValue.
+      if (i == 0)
+        SingularValue = PredVal;
+      else if (PredVal != SingularValue)
+        SingularValue = 0;
+    }
+  } else {
+    bool isFirstPred = true;
+    for (pred_iterator PI = pred_begin(BB), E = pred_end(BB); PI != E; ++PI) {
+      BasicBlock *PredBB = *PI;
+      Value *PredVal = GetValueAtEndOfBlockInternal(PredBB);
+      IncomingPredInfo.push_back(std::make_pair(PredBB, PredVal));
+
+      // Compute SingularValue.
+      if (isFirstPred) {
+        SingularValue = PredVal;
+        isFirstPred = false;
+      } else if (PredVal != SingularValue)
+        SingularValue = 0;
+    }
+  }
+
+  // If there are no predecessors, then we must have found an unreachable block;
+  // just return 'undef'.  Since there are no predecessors, InsertRes must not
+  // be invalidated.
+  if (IncomingPredInfo.size() == FirstPredInfoEntry)
+    return InsertRes.first->second = UndefValue::get(PrototypeValue->getType());
+
+  /// Look up BB's entry in AvailableVals.  'InsertRes' may be invalidated.  If
+  /// this block is involved in a loop, a no-entry PHI node will have been
+  /// inserted as InsertedVal.  Otherwise, we'll still have the null we inserted
+  /// above.
+  TrackingVH<Value> &InsertedVal = AvailableVals[BB];
+
+  // If all the predecessor values are the same then we don't need to insert a
+  // PHI.  This is the simple and common case.
+  if (SingularValue) {
+    // If a PHI node got inserted, replace it with the singular value and delete
+    // it.
+    if (InsertedVal) {
+      PHINode *OldVal = cast<PHINode>(InsertedVal);
+      // Be careful about dead loops.  These RAUW's also update InsertedVal.
+      if (InsertedVal != SingularValue)
+        OldVal->replaceAllUsesWith(SingularValue);
+      else
+        OldVal->replaceAllUsesWith(UndefValue::get(InsertedVal->getType()));
+      OldVal->eraseFromParent();
+    } else {
+      InsertedVal = SingularValue;
+    }
+
+    // Drop the entries we added in IncomingPredInfo to restore the stack.
+    IncomingPredInfo.erase(IncomingPredInfo.begin()+FirstPredInfoEntry,
+                           IncomingPredInfo.end());
+    return InsertedVal;
+  }
+
+  // Otherwise, we do need a PHI: insert one now if we don't already have one.
+  if (InsertedVal == 0)
+    InsertedVal = PHINode::Create(PrototypeValue->getType(),
+                                  PrototypeValue->getName(), &BB->front());
+
+  PHINode *InsertedPHI = cast<PHINode>(InsertedVal);
+  InsertedPHI->reserveOperandSpace(IncomingPredInfo.size()-FirstPredInfoEntry);
+
+  // Fill in all the predecessors of the PHI.
+  for (IncomingPredInfoTy::iterator I =
+         IncomingPredInfo.begin()+FirstPredInfoEntry,
+       E = IncomingPredInfo.end(); I != E; ++I)
+    InsertedPHI->addIncoming(I->second, I->first);
+
+  // Drop the entries we added in IncomingPredInfo to restore the stack.
+  IncomingPredInfo.erase(IncomingPredInfo.begin()+FirstPredInfoEntry,
+                         IncomingPredInfo.end());
+
+  // See if the PHI node can be merged to a single value.  This can happen in
+  // loop cases when we get a PHI of itself and one other value.
+  if (Value *ConstVal = InsertedPHI->hasConstantValue()) {
+    InsertedPHI->replaceAllUsesWith(ConstVal);
+    InsertedPHI->eraseFromParent();
+    InsertedVal = ConstVal;
+  } else {
+    DEBUG(errs() << "  Inserted PHI: " << *InsertedPHI << "\n");
+
+    // If the client wants to know about all new instructions, tell it.
+    if (InsertedPHIs) InsertedPHIs->push_back(InsertedPHI);
+  }
+
+  return InsertedVal;
+}
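
The insert-null-then-check trick in GetValueAtEndOfBlockInternal is a general
way to memoize a recursion over a graph that may contain cycles: claim the
map slot with a placeholder before recursing, so a cycle finds the placeholder
instead of recursing forever. A toy sketch of just that pattern (hypothetical
types; std::map is used here, whose iterators, unlike DenseMap's, stay valid
across later insertions):

    #include <map>

    struct Node { Node *Pred; int KnownVal; }; // KnownVal < 0: unknown here

    int valueFor(Node *N, std::map<Node*, int> &Memo) {
      // Claim the slot with a -1 placeholder before recursing.
      std::pair<std::map<Node*, int>::iterator, bool> Ins =
          Memo.insert(std::make_pair(N, -1));
      if (!Ins.second)
        return Ins.first->second;     // cached result, or -1 on a cycle
      int V = (N->KnownVal >= 0) ? N->KnownVal : valueFor(N->Pred, Memo);
      return Ins.first->second = V;   // safe: std::map iterators persist
    }
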
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/SSI.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/SSI.cpp
index e5a1dd1..1c4afff 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/SSI.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/SSI.cpp
@@ -31,15 +31,13 @@ using namespace llvm;
 static const std::string SSI_PHI = "SSI_phi";
 static const std::string SSI_SIG = "SSI_sigma";
 
-static const unsigned UNSIGNED_INFINITE = ~0U;
-
 STATISTIC(NumSigmaInserted, "Number of sigma functions inserted");
 STATISTIC(NumPhiInserted, "Number of phi functions inserted");
 
 void SSI::getAnalysisUsage(AnalysisUsage &AU) const {
-  AU.addRequired<DominanceFrontier>();
-  AU.addRequired<DominatorTree>();
-  AU.setPreservesCFG();
+  AU.addRequiredTransitive<DominanceFrontier>();
+  AU.addRequiredTransitive<DominatorTree>();
+  AU.setPreservesAll();
 }
 
 bool SSI::runOnFunction(Function &F) {
@@ -49,22 +47,23 @@ bool SSI::runOnFunction(Function &F) {
 
 /// This method creates the SSI representation for the list of values
 /// received. It will only create SSI representation if a value is used
-/// in a to decide a branch. Repeated values are created only once.
+/// to decide a branch. Repeated values are created only once.
 ///
 void SSI::createSSI(SmallVectorImpl<Instruction *> &value) {
   init(value);
 
-  for (unsigned i = 0; i < num_values; ++i) {
-    if (created.insert(value[i])) {
-      needConstruction[i] = true;
-    }
-  }
-  insertSigmaFunctions(value);
+  SmallPtrSet<Instruction*, 4> needConstruction;
+  for (SmallVectorImpl<Instruction*>::iterator I = value.begin(),
+       E = value.end(); I != E; ++I)
+    if (created.insert(*I))
+      needConstruction.insert(*I);
+
+  insertSigmaFunctions(needConstruction);
 
   // Test if there is a need to transform to SSI
-  if (needConstruction.any()) {
-    insertPhiFunctions(value);
-    renameInit(value);
+  if (!needConstruction.empty()) {
+    insertPhiFunctions(needConstruction);
+    renameInit(needConstruction);
     rename(DT_->getRoot());
     fixPhis();
   }
@@ -75,21 +74,19 @@ void SSI::createSSI(SmallVectorImpl<Instruction *> &value) {
 /// Insert sigma functions (a sigma function is a phi function with one
 /// operator)
 ///
-void SSI::insertSigmaFunctions(SmallVectorImpl<Instruction *> &value) {
-  for (unsigned i = 0; i < num_values; ++i) {
-    if (!needConstruction[i])
-      continue;
-
-    for (Value::use_iterator begin = value[i]->use_begin(), end =
-         value[i]->use_end(); begin != end; ++begin) {
+void SSI::insertSigmaFunctions(SmallPtrSet<Instruction*, 4> &value) {
+  for (SmallPtrSet<Instruction*, 4>::iterator I = value.begin(),
+       E = value.end(); I != E; ++I) {
+    for (Value::use_iterator begin = (*I)->use_begin(),
+         end = (*I)->use_end(); begin != end; ++begin) {
       // Test if the Use of the Value is in a comparator
       if (CmpInst *CI = dyn_cast<CmpInst>(begin)) {
         // Iterates through all uses of CmpInst
-        for (Value::use_iterator begin_ci = CI->use_begin(), end_ci =
-             CI->use_end(); begin_ci != end_ci; ++begin_ci) {
+        for (Value::use_iterator begin_ci = CI->use_begin(),
+             end_ci = CI->use_end(); begin_ci != end_ci; ++begin_ci) {
           // Test if any use of CmpInst is in a Terminator
           if (TerminatorInst *TI = dyn_cast<TerminatorInst>(begin_ci)) {
-            insertSigma(TI, value[i], i);
+            insertSigma(TI, *I);
           }
         }
       }
@@ -100,7 +97,7 @@ void SSI::insertSigmaFunctions(SmallVectorImpl<Instruction *> &value) {
 /// Inserts Sigma Functions in every BasicBlock successor to Terminator
 /// Instruction TI. All inserted Sigma Function are related to Instruction I.
 ///
-void SSI::insertSigma(TerminatorInst *TI, Instruction *I, unsigned pos) {
+void SSI::insertSigma(TerminatorInst *TI, Instruction *I) {
   // Basic Block of the Terminator Instruction
   BasicBlock *BB = TI->getParent();
   for (unsigned i = 0, e = TI->getNumSuccessors(); i < e; ++i) {
@@ -111,10 +108,9 @@ void SSI::insertSigma(TerminatorInst *TI, Instruction *I, unsigned pos) {
         dominateAny(BB_next, I)) {
       PHINode *PN = PHINode::Create(I->getType(), SSI_SIG, BB_next->begin());
       PN->addIncoming(I, BB);
-      sigmas.insert(std::make_pair(PN, pos));
+      sigmas[PN] = I;
       created.insert(PN);
-      needConstruction[pos] = true;
-      defsites[pos].push_back(BB_next);
+      defsites[I].push_back(BB_next);
       ++NumSigmaInserted;
     }
   }
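
For readers new to SSI: a sigma function is just a single-incoming PHI placed
at the top of a branch successor, so each arm of the branch gets its own
renamable copy of I. A minimal sketch of that one step, following the
PHINode::Create call above (names hypothetical):

    #include "llvm/BasicBlock.h"
    #include "llvm/Instructions.h"
    using namespace llvm;

    // Give the branch successor Succ its own copy of I, fed from From.
    static PHINode *makeSigma(Instruction *I, BasicBlock *From,
                              BasicBlock *Succ) {
      PHINode *Sigma = PHINode::Create(I->getType(), "SSI_sigma",
                                       Succ->begin());
      Sigma->addIncoming(I, From);
      return Sigma;
    }
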
@@ -122,66 +118,63 @@ void SSI::insertSigma(TerminatorInst *TI, Instruction *I, unsigned pos) {
 
 /// Insert phi functions when necessary
 ///
-void SSI::insertPhiFunctions(SmallVectorImpl<Instruction *> &value) {
+void SSI::insertPhiFunctions(SmallPtrSet<Instruction*, 4> &value) {
   DominanceFrontier *DF = &getAnalysis<DominanceFrontier>();
-  for (unsigned i = 0; i < num_values; ++i) {
+  for (SmallPtrSet<Instruction*, 4>::iterator I = value.begin(),
+       E = value.end(); I != E; ++I) {
     // Test if there were any sigmas for this variable
-    if (needConstruction[i]) {
-
-      SmallPtrSet<BasicBlock *, 16> BB_visited;
-
-      // Insert phi functions if there is any sigma function
-      while (!defsites[i].empty()) {
-
-        BasicBlock *BB = defsites[i].back();
-
-        defsites[i].pop_back();
-        DominanceFrontier::iterator DF_BB = DF->find(BB);
-
-        // The BB is unreachable. Skip it.
-        if (DF_BB == DF->end())
-          continue; 
-
-        // Iterates through all the dominance frontier of BB
-        for (std::set<BasicBlock *>::iterator DF_BB_begin =
-             DF_BB->second.begin(), DF_BB_end = DF_BB->second.end();
-             DF_BB_begin != DF_BB_end; ++DF_BB_begin) {
-          BasicBlock *BB_dominated = *DF_BB_begin;
-
-          // Test if has not yet visited this node and if the
-          // original definition dominates this node
-          if (BB_visited.insert(BB_dominated) &&
-              DT_->properlyDominates(value_original[i], BB_dominated) &&
-              dominateAny(BB_dominated, value[i])) {
-            PHINode *PN = PHINode::Create(
-                value[i]->getType(), SSI_PHI, BB_dominated->begin());
-            phis.insert(std::make_pair(PN, i));
-            created.insert(PN);
-
-            defsites[i].push_back(BB_dominated);
-            ++NumPhiInserted;
-          }
+    SmallPtrSet<BasicBlock *, 16> BB_visited;
+
+    // Insert phi functions if there is any sigma function
+    while (!defsites[*I].empty()) {
+
+      BasicBlock *BB = defsites[*I].back();
+
+      defsites[*I].pop_back();
+      DominanceFrontier::iterator DF_BB = DF->find(BB);
+
+      // The BB is unreachable. Skip it.
+      if (DF_BB == DF->end())
+        continue; 
+
+      // Iterate over the dominance frontier of BB
+      for (std::set<BasicBlock *>::iterator DF_BB_begin =
+           DF_BB->second.begin(), DF_BB_end = DF_BB->second.end();
+           DF_BB_begin != DF_BB_end; ++DF_BB_begin) {
+        BasicBlock *BB_dominated = *DF_BB_begin;
+
+        // Test if we have not yet visited this node and if the
+        // original definition dominates this node
+        if (BB_visited.insert(BB_dominated) &&
+            DT_->properlyDominates(value_original[*I], BB_dominated) &&
+            dominateAny(BB_dominated, *I)) {
+          PHINode *PN = PHINode::Create(
+              (*I)->getType(), SSI_PHI, BB_dominated->begin());
+          phis.insert(std::make_pair(PN, *I));
+          created.insert(PN);
+
+          defsites[*I].push_back(BB_dominated);
+          ++NumPhiInserted;
         }
       }
-      BB_visited.clear();
     }
+    BB_visited.clear();
   }
 }
 
 /// Some initialization for the rename part
 ///
-void SSI::renameInit(SmallVectorImpl<Instruction *> &value) {
-  value_stack.resize(num_values);
-  for (unsigned i = 0; i < num_values; ++i) {
-    value_stack[i].push_back(value[i]);
-  }
+void SSI::renameInit(SmallPtrSet<Instruction*, 4> &value) {
+  for (SmallPtrSet<Instruction*, 4>::iterator I = value.begin(),
+       E = value.end(); I != E; ++I)
+    value_stack[*I].push_back(*I);
 }
 
 /// Renames all variables in the specified BasicBlock.
 /// Only variables that need to be renamed will be.
 ///
 void SSI::rename(BasicBlock *BB) {
-  BitVector *defined = new BitVector(num_values, false);
+  SmallPtrSet<Instruction*, 8> defined;
 
   // Iterate through instructions and make appropriate renaming.
   // For SSI_PHI (b = PHI()), store b at value_stack as a new
@@ -195,19 +188,17 @@ void SSI::rename(BasicBlock *BB) {
        begin != end; ++begin) {
     Instruction *I = begin;
     if (PHINode *PN = dyn_cast<PHINode>(I)) { // Treat PHI functions
-      int position;
+      Instruction* position;
 
       // Treat SSI_PHI
-      if ((position = getPositionPhi(PN)) != -1) {
+      if ((position = getPositionPhi(PN))) {
         value_stack[position].push_back(PN);
-        (*defined)[position] = true;
-      }
-
+        defined.insert(position);
       // Treat SSI_SIG
-      else if ((position = getPositionSigma(PN)) != -1) {
+      } else if ((position = getPositionSigma(PN))) {
         substituteUse(I);
         value_stack[position].push_back(PN);
-        (*defined)[position] = true;
+        defined.insert(position);
       }
 
       // Treat all other PHI functions
@@ -234,8 +225,8 @@ void SSI::rename(BasicBlock *BB) {
          notPhi = BB_succ->getFirstNonPHI(); begin != *notPhi; ++begin) {
       Instruction *I = begin;
       PHINode *PN = dyn_cast<PHINode>(I);
-      int position;
-      if (PN && ((position = getPositionPhi(PN)) != -1)) {
+      Instruction* position;
+      if (PN && ((position = getPositionPhi(PN)))) {
         PN->addIncoming(value_stack[position].back(), BB);
       }
     }
@@ -253,15 +244,9 @@ void SSI::rename(BasicBlock *BB) {
 
   // Now we remove all inserted definitions of a variable from the top of
   // the stack leaving the previous one as the top.
-  if (defined->any()) {
-    for (unsigned i = 0; i < num_values; ++i) {
-      if ((*defined)[i]) {
-        value_stack[i].pop_back();
-      }
-    }
-  }
-
-  delete defined;
+  for (SmallPtrSet<Instruction*, 8>::iterator DI = defined.begin(),
+       DE = defined.end(); DI != DE; ++DI)
+    value_stack[*DI].pop_back();
 }
 
 /// Substitute any use in this instruction for the last definition of
@@ -270,23 +255,24 @@ void SSI::rename(BasicBlock *BB) {
 void SSI::substituteUse(Instruction *I) {
   for (unsigned i = 0, e = I->getNumOperands(); i < e; ++i) {
     Value *operand = I->getOperand(i);
-    for (unsigned j = 0; j < num_values; ++j) {
-      if (operand == value_stack[j].front() &&
-          I != value_stack[j].back()) {
+    for (DenseMap<Instruction*, SmallVector<Instruction*, 1> >::iterator
+         VI = value_stack.begin(), VE = value_stack.end(); VI != VE; ++VI) {
+      if (operand == VI->second.front() &&
+          I != VI->second.back()) {
         PHINode *PN_I = dyn_cast<PHINode>(I);
-        PHINode *PN_vs = dyn_cast<PHINode>(value_stack[j].back());
+        PHINode *PN_vs = dyn_cast<PHINode>(VI->second.back());
 
         // If a phi created in a BasicBlock is used as an operand of another
         // created in the same BasicBlock, this step marks this second phi,
         // to fix this issue later. It cannot be fixed now, because the
         // operands of the first phi are not final yet.
         if (PN_I && PN_vs &&
-            value_stack[j].back()->getParent() == I->getParent()) {
+            VI->second.back()->getParent() == I->getParent()) {
 
           phisToFix.insert(PN_I);
         }
 
-        I->setOperand(i, value_stack[j].back());
+        I->setOperand(i, VI->second.back());
         break;
       }
     }
@@ -333,7 +319,7 @@ void SSI::fixPhis() {
     }
   }
 
-  for (DenseMapIterator<PHINode *, unsigned> begin = phis.begin(),
+  for (DenseMapIterator<PHINode *, Instruction*> begin = phis.begin(),
        end = phis.end(); begin != end; ++begin) {
     PHINode *PN = begin->first;
     BasicBlock *BB = PN->getParent();
@@ -359,10 +345,10 @@ void SSI::fixPhis() {
 /// Return which variable (position on the vector of variables) this phi
 /// represents on the phis list.
 ///
-unsigned SSI::getPositionPhi(PHINode *PN) {
-  DenseMap<PHINode *, unsigned>::iterator val = phis.find(PN);
+Instruction* SSI::getPositionPhi(PHINode *PN) {
+  DenseMap<PHINode *, Instruction*>::iterator val = phis.find(PN);
   if (val == phis.end())
-    return UNSIGNED_INFINITE;
+    return 0;
   else
     return val->second;
 }
@@ -370,10 +356,10 @@ unsigned SSI::getPositionPhi(PHINode *PN) {
 /// Return which variable (position on the vector of variables) this phi
 /// represents on the sigmas list.
 ///
-unsigned SSI::getPositionSigma(PHINode *PN) {
-  DenseMap<PHINode *, unsigned>::iterator val = sigmas.find(PN);
+Instruction* SSI::getPositionSigma(PHINode *PN) {
+  DenseMap<PHINode *, Instruction*>::iterator val = sigmas.find(PN);
   if (val == sigmas.end())
-    return UNSIGNED_INFINITE;
+    return 0;
   else
     return val->second;
 }
@@ -381,27 +367,16 @@ unsigned SSI::getPositionSigma(PHINode *PN) {
 /// Initializes the per-value bookkeeping (value_original and defsites)
 ///
 void SSI::init(SmallVectorImpl<Instruction *> &value) {
-  num_values = value.size();
-  needConstruction.resize(num_values, false);
-
-  value_original.resize(num_values);
-  defsites.resize(num_values);
-
-  for (unsigned i = 0; i < num_values; ++i) {
-    value_original[i] = value[i]->getParent();
-    defsites[i].push_back(value_original[i]);
+  for (SmallVectorImpl<Instruction *>::iterator I = value.begin(),
+       E = value.end(); I != E; ++I) {
+    value_original[*I] = (*I)->getParent();
+    defsites[*I].push_back((*I)->getParent());
   }
 }
 
 /// Clean all used resources in this creation of SSI
 ///
 void SSI::clean() {
-  for (unsigned i = 0; i < num_values; ++i) {
-    defsites[i].clear();
-    if (i < value_stack.size())
-      value_stack[i].clear();
-  }
-
   phis.clear();
   sigmas.clear();
   phisToFix.clear();
@@ -409,7 +384,6 @@ void SSI::clean() {
   defsites.clear();
   value_stack.clear();
   value_original.clear();
-  needConstruction.clear();
 }
 
 /// createSSIPass - The public interface to this file...
@@ -422,7 +396,7 @@ static RegisterPass<SSI> X("ssi", "Static Single Information Construction");
 /// SSIEverything - A pass that runs createSSI on every non-void variable,
 /// intended for debugging.
 namespace {
-  struct VISIBILITY_HIDDEN SSIEverything : public FunctionPass {
+  struct SSIEverything : public FunctionPass {
     static char ID; // Pass identification, replacement for typeid
     SSIEverything() : FunctionPass(&ID) {}
 
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/SimplifyCFG.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/SimplifyCFG.cpp
index 92b1335..89b0bd9 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/SimplifyCFG.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/SimplifyCFG.cpp
@@ -24,6 +24,7 @@
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Analysis/ConstantFolding.h"
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
+#include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/Statistic.h"
@@ -77,166 +78,6 @@ static void AddPredecessorToBlock(BasicBlock *Succ, BasicBlock *NewPred,
     PN->addIncoming(PN->getIncomingValueForBlock(ExistPred), NewPred);
 }
 
-/// CanPropagatePredecessorsForPHIs - Return true if we can fold BB, an
-/// almost-empty BB ending in an unconditional branch to Succ, into succ.
-///
-/// Assumption: Succ is the single successor for BB.
-///
-static bool CanPropagatePredecessorsForPHIs(BasicBlock *BB, BasicBlock *Succ) {
-  assert(*succ_begin(BB) == Succ && "Succ is not successor of BB!");
-
-  DEBUG(errs() << "Looking to fold " << BB->getName() << " into " 
-        << Succ->getName() << "\n");
-  // Shortcut, if there is only a single predecessor it must be BB and merging
-  // is always safe
-  if (Succ->getSinglePredecessor()) return true;
-
-  // Make a list of the predecessors of BB
-  typedef SmallPtrSet<BasicBlock*, 16> BlockSet;
-  BlockSet BBPreds(pred_begin(BB), pred_end(BB));
-
-  // Use that list to make another list of common predecessors of BB and Succ
-  BlockSet CommonPreds;
-  for (pred_iterator PI = pred_begin(Succ), PE = pred_end(Succ);
-        PI != PE; ++PI)
-    if (BBPreds.count(*PI))
-      CommonPreds.insert(*PI);
-
-  // Shortcut, if there are no common predecessors, merging is always safe
-  if (CommonPreds.empty())
-    return true;
-  
-  // Look at all the phi nodes in Succ, to see if they present a conflict when
-  // merging these blocks
-  for (BasicBlock::iterator I = Succ->begin(); isa<PHINode>(I); ++I) {
-    PHINode *PN = cast<PHINode>(I);
-
-    // If the incoming value from BB is again a PHINode in
-    // BB which has the same incoming value for *PI as PN does, we can
-    // merge the phi nodes and then the blocks can still be merged
-    PHINode *BBPN = dyn_cast<PHINode>(PN->getIncomingValueForBlock(BB));
-    if (BBPN && BBPN->getParent() == BB) {
-      for (BlockSet::iterator PI = CommonPreds.begin(), PE = CommonPreds.end();
-            PI != PE; PI++) {
-        if (BBPN->getIncomingValueForBlock(*PI) 
-              != PN->getIncomingValueForBlock(*PI)) {
-          DEBUG(errs() << "Can't fold, phi node " << PN->getName() << " in " 
-                << Succ->getName() << " is conflicting with " 
-                << BBPN->getName() << " with regard to common predecessor "
-                << (*PI)->getName() << "\n");
-          return false;
-        }
-      }
-    } else {
-      Value* Val = PN->getIncomingValueForBlock(BB);
-      for (BlockSet::iterator PI = CommonPreds.begin(), PE = CommonPreds.end();
-            PI != PE; PI++) {
-        // See if the incoming value for the common predecessor is equal to the
-        // one for BB, in which case this phi node will not prevent the merging
-        // of the block.
-        if (Val != PN->getIncomingValueForBlock(*PI)) {
-          DEBUG(errs() << "Can't fold, phi node " << PN->getName() << " in " 
-                << Succ->getName() << " is conflicting with regard to common "
-                << "predecessor " << (*PI)->getName() << "\n");
-          return false;
-        }
-      }
-    }
-  }
-
-  return true;
-}
-
-/// TryToSimplifyUncondBranchFromEmptyBlock - BB contains an unconditional
-/// branch to Succ, and contains no instructions other than PHI nodes and the
-/// branch.  If possible, eliminate BB.
-static bool TryToSimplifyUncondBranchFromEmptyBlock(BasicBlock *BB,
-                                                    BasicBlock *Succ) {
-  // Check to see if merging these blocks would cause conflicts for any of the
-  // phi nodes in BB or Succ. If not, we can safely merge.
-  if (!CanPropagatePredecessorsForPHIs(BB, Succ)) return false;
-
-  // Check for cases where Succ has multiple predecessors and a PHI node in BB
-  // has uses which will not disappear when the PHI nodes are merged.  It is
-  // possible to handle such cases, but difficult: it requires checking whether
-  // BB dominates Succ, which is non-trivial to calculate in the case where
-  // Succ has multiple predecessors.  Also, it requires checking whether
-// constructing the necessary self-referential PHI node doesn't introduce any
-  // conflicts; this isn't too difficult, but the previous code for doing this
-  // was incorrect.
-  //
-  // Note that if this check finds a live use, BB dominates Succ, so BB is
-  // something like a loop pre-header (or rarely, a part of an irreducible CFG);
-  // folding the branch isn't profitable in that case anyway.
-  if (!Succ->getSinglePredecessor()) {
-    BasicBlock::iterator BBI = BB->begin();
-    while (isa<PHINode>(*BBI)) {
-      for (Value::use_iterator UI = BBI->use_begin(), E = BBI->use_end();
-           UI != E; ++UI) {
-        if (PHINode* PN = dyn_cast<PHINode>(*UI)) {
-          if (PN->getIncomingBlock(UI) != BB)
-            return false;
-        } else {
-          return false;
-        }
-      }
-      ++BBI;
-    }
-  }
-
-  DEBUG(errs() << "Killing Trivial BB: \n" << *BB);
-  
-  if (isa<PHINode>(Succ->begin())) {
-    // If there is more than one pred of succ, and there are PHI nodes in
-    // the successor, then we need to add incoming edges for the PHI nodes
-    //
-    const SmallVector<BasicBlock*, 16> BBPreds(pred_begin(BB), pred_end(BB));
-    
-    // Loop over all of the PHI nodes in the successor of BB.
-    for (BasicBlock::iterator I = Succ->begin(); isa<PHINode>(I); ++I) {
-      PHINode *PN = cast<PHINode>(I);
-      Value *OldVal = PN->removeIncomingValue(BB, false);
-      assert(OldVal && "No entry in PHI for Pred BB!");
-      
-      // If this incoming value is one of the PHI nodes in BB, the new entries
-      // in the PHI node are the entries from the old PHI.
-      if (isa<PHINode>(OldVal) && cast<PHINode>(OldVal)->getParent() == BB) {
-        PHINode *OldValPN = cast<PHINode>(OldVal);
-        for (unsigned i = 0, e = OldValPN->getNumIncomingValues(); i != e; ++i)
-          // Note that, since we are merging phi nodes and BB and Succ might
-          // have common predecessors, we could end up with a phi node with
-          // identical incoming branches. This will be cleaned up later (and
-          // will trigger asserts if we try to clean it up now, without also
-          // simplifying the corresponding conditional branch).
-          PN->addIncoming(OldValPN->getIncomingValue(i),
-                          OldValPN->getIncomingBlock(i));
-      } else {
-        // Add an incoming value for each of the new incoming values.
-        for (unsigned i = 0, e = BBPreds.size(); i != e; ++i)
-          PN->addIncoming(OldVal, BBPreds[i]);
-      }
-    }
-  }
-  
-  while (PHINode *PN = dyn_cast<PHINode>(&BB->front())) {
-    if (Succ->getSinglePredecessor()) {
-      // BB is the only predecessor of Succ, so Succ will end up with exactly
-      // the same predecessors BB had.
-      Succ->getInstList().splice(Succ->begin(),
-                                 BB->getInstList(), BB->begin());
-    } else {
-      // We explicitly check for such uses in CanPropagatePredecessorsForPHIs.
-      assert(PN->use_empty() && "There shouldn't be any uses here!");
-      PN->eraseFromParent();
-    }
-  }
-    
-  // Everything that jumped to BB now goes to Succ.
-  BB->replaceAllUsesWith(Succ);
-  if (!Succ->hasName()) Succ->takeName(BB);
-  BB->eraseFromParent();              // Delete the old basic block.
-  return true;
-}
 
 /// GetIfCondition - Given a basic block (BB) with two predecessors (and
 /// presumably PHI nodes in it), check to see if the merge at this block is due
@@ -1216,7 +1057,7 @@ static bool FoldCondBranchOnPHI(BranchInst *BI) {
           }
           
           // Check for trivial simplification.
-          if (Constant *C = ConstantFoldInstruction(N, BB->getContext())) {
+          if (Constant *C = ConstantFoldInstruction(N)) {
             TranslateMap[BBI] = C;
             delete N;   // Constant folded away, don't need actual inst
           } else {
@@ -1748,6 +1589,68 @@ static bool SimplifyCondBranchToCondBranch(BranchInst *PBI, BranchInst *BI) {
   return true;
 }
 
+/// EliminateDuplicatePHINodes - Check for and eliminate duplicate PHI
+/// nodes in this block. This doesn't try to be clever about PHI nodes
+/// which differ only in the order of the incoming values, but instcombine
+/// orders them so it usually won't matter.
+///
+bool llvm::EliminateDuplicatePHINodes(BasicBlock *BB) {
+  bool Changed = false;
+  
+  // This implementation doesn't currently consider undef operands
+  // specially. Theoretically, two phis which are identical except for
+  // one having an undef where the other doesn't could be collapsed.
+
+  // Map from PHI hash values to PHI nodes. If multiple PHIs have
+  // the same hash value, the element is the first PHI in the
+  // linked list in CollisionMap.
+  DenseMap<uintptr_t, PHINode *> HashMap;
+
+  // Maintain linked lists of PHI nodes with common hash values.
+  DenseMap<PHINode *, PHINode *> CollisionMap;
+
+  // Examine each PHI.
+  for (BasicBlock::iterator I = BB->begin();
+       PHINode *PN = dyn_cast<PHINode>(I++); ) {
+    // Compute a hash value on the operands. Instcombine will likely have sorted
+    // them, which helps expose duplicates, but we have to check all the
+    // operands to be safe in case instcombine hasn't run.
+    uintptr_t Hash = 0;
+    for (User::op_iterator I = PN->op_begin(), E = PN->op_end(); I != E; ++I) {
+      // This hash algorithm is quite weak as hash functions go, but it seems
+      // to do a good enough job for this particular purpose, and is very quick.
+      Hash ^= reinterpret_cast<uintptr_t>(static_cast<Value *>(*I));
+      Hash = (Hash << 7) | (Hash >> (sizeof(uintptr_t) * CHAR_BIT - 7));
+    }
+    // If we've never seen this hash value before, it's a unique PHI.
+    std::pair<DenseMap<uintptr_t, PHINode *>::iterator, bool> Pair =
+      HashMap.insert(std::make_pair(Hash, PN));
+    if (Pair.second) continue;
+    // Otherwise it's either a duplicate or a hash collision.
+    for (PHINode *OtherPN = Pair.first->second; ; ) {
+      if (OtherPN->isIdenticalTo(PN)) {
+        // A duplicate. Replace this PHI with its duplicate.
+        PN->replaceAllUsesWith(OtherPN);
+        PN->eraseFromParent();
+        Changed = true;
+        break;
+      }
+      // A non-duplicate hash collision.
+      DenseMap<PHINode *, PHINode *>::iterator I = CollisionMap.find(OtherPN);
+      if (I == CollisionMap.end()) {
+        // Set this PHI to be the head of the linked list of colliding PHIs.
+        PHINode *Old = Pair.first->second;
+        Pair.first->second = PN;
+        CollisionMap[PN] = Old;
+        break;
+      }
+      // Proceed to the next PHI in the list.
+      OtherPN = I->second;
+    }
+  }
+
+  return Changed;
+}
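
The hash loop above is a cheap XOR-and-rotate over the PHI's operand
pointers: order-sensitive and deliberately weak, which is why colliding
PHIs are still confirmed with isIdenticalTo(). A minimal standalone sketch
of just the hashing step, with plain void* standing in for llvm::Value*
and the DenseMap bucketing elided:

    #include <climits>
    #include <stdint.h>
    #include <vector>

    static uintptr_t hashPHIOperands(const std::vector<void *> &Ops) {
      uintptr_t Hash = 0;
      for (size_t i = 0, e = Ops.size(); i != e; ++i) {
        Hash ^= reinterpret_cast<uintptr_t>(Ops[i]);
        // Rotate left by 7 bits so operand order still influences the hash.
        Hash = (Hash << 7) | (Hash >> (sizeof(uintptr_t) * CHAR_BIT - 7));
      }
      return Hash;
    }

Equal hashes only mean "possibly equal"; a mismatch on isIdenticalTo()
sends the PHI into the collision list rather than eliminating it.
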
 
 /// SimplifyCFG - This function is used to do simplification of a CFG.  For
 /// example, it adjusts branches to branches to eliminate the extra hop, it
@@ -1777,6 +1680,9 @@ bool llvm::SimplifyCFG(BasicBlock *BB) {
   // away...
   Changed |= ConstantFoldTerminator(BB);
 
+  // Check for and eliminate duplicate PHI nodes in this block.
+  Changed |= EliminateDuplicatePHINodes(BB);
+
   // If there is a trivial two-entry PHI node in this basic block, and we can
   // eliminate it, do so now.
   if (PHINode *PN = dyn_cast<PHINode>(BB->begin()))
@@ -1860,33 +1766,26 @@ bool llvm::SimplifyCFG(BasicBlock *BB) {
   } else if (isa<UnwindInst>(BB->begin())) {
     // Check to see if the first instruction in this block is just an unwind.
     // If so, replace any invoke instructions which use this as an exception
-    // destination with call instructions, and any unconditional branch
-    // predecessor with an unwind.
+    // destination with call instructions.
     //
     SmallVector<BasicBlock*, 8> Preds(pred_begin(BB), pred_end(BB));
     while (!Preds.empty()) {
       BasicBlock *Pred = Preds.back();
-      if (BranchInst *BI = dyn_cast<BranchInst>(Pred->getTerminator())) {
-        if (BI->isUnconditional()) {
-          Pred->getInstList().pop_back();  // nuke uncond branch
-          new UnwindInst(Pred->getContext(), Pred);            // Use unwind.
-          Changed = true;
-        }
-      } else if (InvokeInst *II = dyn_cast<InvokeInst>(Pred->getTerminator()))
+      if (InvokeInst *II = dyn_cast<InvokeInst>(Pred->getTerminator()))
         if (II->getUnwindDest() == BB) {
           // Insert a new branch instruction before the invoke, because this
-          // is now a fall through...
+          // is now a fall through.
           BranchInst *BI = BranchInst::Create(II->getNormalDest(), II);
           Pred->getInstList().remove(II);   // Take out of symbol table
 
-          // Insert the call now...
+          // Insert the call now.
           SmallVector<Value*,8> Args(II->op_begin()+3, II->op_end());
           CallInst *CI = CallInst::Create(II->getCalledValue(),
                                           Args.begin(), Args.end(),
                                           II->getName(), BI);
           CI->setCallingConv(II->getCallingConv());
           CI->setAttributes(II->getAttributes());
-          // If the invoke produced a value, the Call now does instead
+          // If the invoke produced a value, the Call now does instead.
           II->replaceAllUsesWith(CI);
           delete II;
           Changed = true;
@@ -1924,13 +1823,11 @@ bool llvm::SimplifyCFG(BasicBlock *BB) {
     if (BI->isUnconditional()) {
       BasicBlock::iterator BBI = BB->getFirstNonPHI();
 
-      BasicBlock *Succ = BI->getSuccessor(0);
       // Ignore dbg intrinsics.
       while (isa<DbgInfoIntrinsic>(BBI))
         ++BBI;
-      if (BBI->isTerminator() &&  // Terminator is the only non-phi instruction!
-          Succ != BB)             // Don't hurt infinite loops!
-        if (TryToSimplifyUncondBranchFromEmptyBlock(BB, Succ))
+      if (BBI->isTerminator()) // Terminator is the only non-phi instruction!
+        if (TryToSimplifyUncondBranchFromEmptyBlock(BB))
           return true;
       
     } else {  // Conditional branch
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/UnrollLoop.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/UnrollLoop.cpp
deleted file mode 100644
index 4d838b5..0000000
--- a/libclamav/c++/llvm/lib/Transforms/Utils/UnrollLoop.cpp
+++ /dev/null
@@ -1,372 +0,0 @@
-//===-- UnrollLoop.cpp - Loop unrolling utilities -------------------------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file implements some loop unrolling utilities. It does not define any
-// actual pass or policy, but provides a single function to perform loop
-// unrolling.
-//
-// It works best when loops have been canonicalized by the -indvars pass,
-// allowing it to determine the trip counts of loops easily.
-//
-// The process of unrolling can produce extraneous basic blocks linked with
-// unconditional branches.  This will be corrected in the future.
-//===----------------------------------------------------------------------===//
-
-#define DEBUG_TYPE "loop-unroll"
-#include "llvm/Transforms/Utils/UnrollLoop.h"
-#include "llvm/BasicBlock.h"
-#include "llvm/ADT/Statistic.h"
-#include "llvm/Analysis/ConstantFolding.h"
-#include "llvm/Analysis/LoopPass.h"
-#include "llvm/Support/Debug.h"
-#include "llvm/Support/raw_ostream.h"
-#include "llvm/Transforms/Utils/BasicBlockUtils.h"
-#include "llvm/Transforms/Utils/Cloning.h"
-#include "llvm/Transforms/Utils/Local.h"
-#include <cstdio>
-
-using namespace llvm;
-
-// TODO: Should these be here or in LoopUnroll?
-STATISTIC(NumCompletelyUnrolled, "Number of loops completely unrolled");
-STATISTIC(NumUnrolled,    "Number of loops unrolled (completely or otherwise)");
-
-/// RemapInstruction - Convert the instruction operands from referencing the
-/// current values into those specified by ValueMap.
-static inline void RemapInstruction(Instruction *I,
-                                    DenseMap<const Value *, Value*> &ValueMap) {
-  for (unsigned op = 0, E = I->getNumOperands(); op != E; ++op) {
-    Value *Op = I->getOperand(op);
-    DenseMap<const Value *, Value*>::iterator It = ValueMap.find(Op);
-    if (It != ValueMap.end()) Op = It->second;
-    I->setOperand(op, Op);
-  }
-}
-
-/// FoldBlockIntoPredecessor - Folds a basic block into its predecessor if it
-/// only has one predecessor, and that predecessor only has one successor.
-/// The LoopInfo Analysis that is passed will be kept consistent.
-/// Returns the new combined block.
-static BasicBlock *FoldBlockIntoPredecessor(BasicBlock *BB, LoopInfo* LI) {
-  // Merge basic blocks into their predecessor if there is only one distinct
-  // pred, and if there is only one distinct successor of the predecessor, and
-  // if there are no PHI nodes.
-  BasicBlock *OnlyPred = BB->getSinglePredecessor();
-  if (!OnlyPred) return 0;
-
-  if (OnlyPred->getTerminator()->getNumSuccessors() != 1)
-    return 0;
-
-  DEBUG(errs() << "Merging: " << *BB << "into: " << *OnlyPred);
-
-  // Resolve any PHI nodes at the start of the block.  They are all
-  // guaranteed to have exactly one entry if they exist, unless there are
-  // multiple duplicate (but guaranteed to be equal) entries for the
-  // incoming edges.  This occurs when there are multiple edges from
-  // OnlyPred to OnlySucc.
-  FoldSingleEntryPHINodes(BB);
-
-  // Delete the unconditional branch from the predecessor...
-  OnlyPred->getInstList().pop_back();
-
-  // Move all definitions in the successor to the predecessor...
-  OnlyPred->getInstList().splice(OnlyPred->end(), BB->getInstList());
-
-  // Make all PHI nodes that referred to BB now refer to Pred as their
-  // source...
-  BB->replaceAllUsesWith(OnlyPred);
-
-  std::string OldName = BB->getName();
-
-  // Erase basic block from the function...
-  LI->removeBlock(BB);
-  BB->eraseFromParent();
-
-  // Inherit predecessor's name if it exists...
-  if (!OldName.empty() && !OnlyPred->hasName())
-    OnlyPred->setName(OldName);
-
-  return OnlyPred;
-}
-
-/// Unroll the given loop by Count. The loop must be in LCSSA form. Returns true
-/// if unrolling was successful, or false if the loop was unmodified. Unrolling
-/// can only fail when the loop's latch block is not terminated by a conditional
-/// branch instruction. However, if the trip count (and multiple) are not known,
-/// loop unrolling will mostly produce more code that is no faster.
-///
-/// The LoopInfo Analysis that is passed will be kept consistent.
-///
-/// If a LoopPassManager is passed in, and the loop is fully removed, it will be
-/// removed from the LoopPassManager as well. LPM can also be NULL.
-bool llvm::UnrollLoop(Loop *L, unsigned Count, LoopInfo* LI, LPPassManager* LPM) {
-  assert(L->isLCSSAForm());
-
-  BasicBlock *Header = L->getHeader();
-  BasicBlock *LatchBlock = L->getLoopLatch();
-  BranchInst *BI = dyn_cast<BranchInst>(LatchBlock->getTerminator());
-  
-  if (!BI || BI->isUnconditional()) {
-    // The loop-rotate pass can be helpful to avoid this in many cases.
-    DEBUG(errs() <<
-             "  Can't unroll; loop not terminated by a conditional branch.\n");
-    return false;
-  }
-
-  // Find trip count
-  unsigned TripCount = L->getSmallConstantTripCount();
-  // Find trip multiple if count is not available
-  unsigned TripMultiple = 1;
-  if (TripCount == 0)
-    TripMultiple = L->getSmallConstantTripMultiple();
-
-  if (TripCount != 0)
-    DEBUG(errs() << "  Trip Count = " << TripCount << "\n");
-  if (TripMultiple != 1)
-    DEBUG(errs() << "  Trip Multiple = " << TripMultiple << "\n");
-
-  // Effectively "DCE" unrolled iterations that are beyond the tripcount
-  // and will never be executed.
-  if (TripCount != 0 && Count > TripCount)
-    Count = TripCount;
-
-  assert(Count > 0);
-  assert(TripMultiple > 0);
-  assert(TripCount == 0 || TripCount % TripMultiple == 0);
-
-  // Are we eliminating the loop control altogether?
-  bool CompletelyUnroll = Count == TripCount;
-
-  // If we know the trip count, we know the multiple...
-  unsigned BreakoutTrip = 0;
-  if (TripCount != 0) {
-    BreakoutTrip = TripCount % Count;
-    TripMultiple = 0;
-  } else {
-    // Figure out what multiple to use.
-    BreakoutTrip = TripMultiple =
-      (unsigned)GreatestCommonDivisor64(Count, TripMultiple);
-  }
-
-  if (CompletelyUnroll) {
-    DEBUG(errs() << "COMPLETELY UNROLLING loop %" << Header->getName()
-          << " with trip count " << TripCount << "!\n");
-  } else {
-    DEBUG(errs() << "UNROLLING loop %" << Header->getName()
-          << " by " << Count);
-    if (TripMultiple == 0 || BreakoutTrip != TripMultiple) {
-      DEBUG(errs() << " with a breakout at trip " << BreakoutTrip);
-    } else if (TripMultiple != 1) {
-      DEBUG(errs() << " with " << TripMultiple << " trips per branch");
-    }
-    DEBUG(errs() << "!\n");
-  }
-
-  std::vector<BasicBlock*> LoopBlocks = L->getBlocks();
-
-  bool ContinueOnTrue = L->contains(BI->getSuccessor(0));
-  BasicBlock *LoopExit = BI->getSuccessor(ContinueOnTrue);
-
-  // For the first iteration of the loop, we should use the precloned values for
-  // PHI nodes.  Insert associations now.
-  typedef DenseMap<const Value*, Value*> ValueMapTy;
-  ValueMapTy LastValueMap;
-  std::vector<PHINode*> OrigPHINode;
-  for (BasicBlock::iterator I = Header->begin(); isa<PHINode>(I); ++I) {
-    PHINode *PN = cast<PHINode>(I);
-    OrigPHINode.push_back(PN);
-    if (Instruction *I = 
-                dyn_cast<Instruction>(PN->getIncomingValueForBlock(LatchBlock)))
-      if (L->contains(I->getParent()))
-        LastValueMap[I] = I;
-  }
-
-  std::vector<BasicBlock*> Headers;
-  std::vector<BasicBlock*> Latches;
-  Headers.push_back(Header);
-  Latches.push_back(LatchBlock);
-
-  for (unsigned It = 1; It != Count; ++It) {
-    char SuffixBuffer[100];
-    sprintf(SuffixBuffer, ".%d", It);
-    
-    std::vector<BasicBlock*> NewBlocks;
-    
-    for (std::vector<BasicBlock*>::iterator BB = LoopBlocks.begin(),
-         E = LoopBlocks.end(); BB != E; ++BB) {
-      ValueMapTy ValueMap;
-      BasicBlock *New = CloneBasicBlock(*BB, ValueMap, SuffixBuffer);
-      Header->getParent()->getBasicBlockList().push_back(New);
-
-      // Loop over all of the PHI nodes in the block, changing them to use the
-      // incoming values from the previous block.
-      if (*BB == Header)
-        for (unsigned i = 0, e = OrigPHINode.size(); i != e; ++i) {
-          PHINode *NewPHI = cast<PHINode>(ValueMap[OrigPHINode[i]]);
-          Value *InVal = NewPHI->getIncomingValueForBlock(LatchBlock);
-          if (Instruction *InValI = dyn_cast<Instruction>(InVal))
-            if (It > 1 && L->contains(InValI->getParent()))
-              InVal = LastValueMap[InValI];
-          ValueMap[OrigPHINode[i]] = InVal;
-          New->getInstList().erase(NewPHI);
-        }
-
-      // Update our running map of newest clones
-      LastValueMap[*BB] = New;
-      for (ValueMapTy::iterator VI = ValueMap.begin(), VE = ValueMap.end();
-           VI != VE; ++VI)
-        LastValueMap[VI->first] = VI->second;
-
-      L->addBasicBlockToLoop(New, LI->getBase());
-
-      // Add phi entries for newly created values to all exit blocks except
-      // the successor of the latch block.  The successor of the exit block will
-      // be updated specially after unrolling all the way.
-      if (*BB != LatchBlock)
-        for (Value::use_iterator UI = (*BB)->use_begin(), UE = (*BB)->use_end();
-             UI != UE;) {
-          Instruction *UseInst = cast<Instruction>(*UI);
-          ++UI;
-          if (isa<PHINode>(UseInst) && !L->contains(UseInst->getParent())) {
-            PHINode *phi = cast<PHINode>(UseInst);
-            Value *Incoming = phi->getIncomingValueForBlock(*BB);
-            phi->addIncoming(Incoming, New);
-          }
-        }
-
-      // Keep track of new headers and latches as we create them, so that
-      // we can insert the proper branches later.
-      if (*BB == Header)
-        Headers.push_back(New);
-      if (*BB == LatchBlock) {
-        Latches.push_back(New);
-
-        // Also, clear out the new latch's back edge so that it doesn't look
-        // like a new loop, so that it's amenable to being merged with adjacent
-        // blocks later on.
-        TerminatorInst *Term = New->getTerminator();
-        assert(L->contains(Term->getSuccessor(!ContinueOnTrue)));
-        assert(Term->getSuccessor(ContinueOnTrue) == LoopExit);
-        Term->setSuccessor(!ContinueOnTrue, NULL);
-      }
-
-      NewBlocks.push_back(New);
-    }
-    
-    // Remap all instructions in the most recent iteration
-    for (unsigned i = 0; i < NewBlocks.size(); ++i)
-      for (BasicBlock::iterator I = NewBlocks[i]->begin(),
-           E = NewBlocks[i]->end(); I != E; ++I)
-        RemapInstruction(I, LastValueMap);
-  }
-  
-  // The latch block exits the loop.  If there are any PHI nodes in the
-  // successor blocks, update them to use the appropriate values computed as the
-  // last iteration of the loop.
-  if (Count != 1) {
-    SmallPtrSet<PHINode*, 8> Users;
-    for (Value::use_iterator UI = LatchBlock->use_begin(),
-         UE = LatchBlock->use_end(); UI != UE; ++UI)
-      if (PHINode *phi = dyn_cast<PHINode>(*UI))
-        Users.insert(phi);
-    
-    BasicBlock *LastIterationBB = cast<BasicBlock>(LastValueMap[LatchBlock]);
-    for (SmallPtrSet<PHINode*,8>::iterator SI = Users.begin(), SE = Users.end();
-         SI != SE; ++SI) {
-      PHINode *PN = *SI;
-      Value *InVal = PN->removeIncomingValue(LatchBlock, false);
-      // If this value was defined in the loop, take the value defined by the
-      // last iteration of the loop.
-      if (Instruction *InValI = dyn_cast<Instruction>(InVal)) {
-        if (L->contains(InValI->getParent()))
-          InVal = LastValueMap[InVal];
-      }
-      PN->addIncoming(InVal, LastIterationBB);
-    }
-  }
-
-  // Now, if we're doing complete unrolling, loop over the PHI nodes in the
-  // original block, setting them to their incoming values.
-  if (CompletelyUnroll) {
-    BasicBlock *Preheader = L->getLoopPreheader();
-    for (unsigned i = 0, e = OrigPHINode.size(); i != e; ++i) {
-      PHINode *PN = OrigPHINode[i];
-      PN->replaceAllUsesWith(PN->getIncomingValueForBlock(Preheader));
-      Header->getInstList().erase(PN);
-    }
-  }
-
-  // Now that all the basic blocks for the unrolled iterations are in place,
-  // set up the branches to connect them.
-  for (unsigned i = 0, e = Latches.size(); i != e; ++i) {
-    // The original branch was replicated in each unrolled iteration.
-    BranchInst *Term = cast<BranchInst>(Latches[i]->getTerminator());
-
-    // The branch destination.
-    unsigned j = (i + 1) % e;
-    BasicBlock *Dest = Headers[j];
-    bool NeedConditional = true;
-
-    // For a complete unroll, make the last iteration end with a branch
-    // to the exit block.
-    if (CompletelyUnroll && j == 0) {
-      Dest = LoopExit;
-      NeedConditional = false;
-    }
-
-    // If we know the trip count or a multiple of it, we can safely use an
-    // unconditional branch for some iterations.
-    if (j != BreakoutTrip && (TripMultiple == 0 || j % TripMultiple != 0)) {
-      NeedConditional = false;
-    }
-
-    if (NeedConditional) {
-      // Update the conditional branch's successor for the following
-      // iteration.
-      Term->setSuccessor(!ContinueOnTrue, Dest);
-    } else {
-      Term->setUnconditionalDest(Dest);
-      // Merge adjacent basic blocks, if possible.
-      if (BasicBlock *Fold = FoldBlockIntoPredecessor(Dest, LI)) {
-        std::replace(Latches.begin(), Latches.end(), Dest, Fold);
-        std::replace(Headers.begin(), Headers.end(), Dest, Fold);
-      }
-    }
-  }
-  
-  // At this point, the code is well formed.  We now do a quick sweep over the
-  // inserted code, doing constant propagation and dead code elimination as we
-  // go.
-  const std::vector<BasicBlock*> &NewLoopBlocks = L->getBlocks();
-  for (std::vector<BasicBlock*>::const_iterator BB = NewLoopBlocks.begin(),
-       BBE = NewLoopBlocks.end(); BB != BBE; ++BB)
-    for (BasicBlock::iterator I = (*BB)->begin(), E = (*BB)->end(); I != E; ) {
-      Instruction *Inst = I++;
-
-      if (isInstructionTriviallyDead(Inst))
-        (*BB)->getInstList().erase(Inst);
-      else if (Constant *C = ConstantFoldInstruction(Inst, 
-                                                     Header->getContext())) {
-        Inst->replaceAllUsesWith(C);
-        (*BB)->getInstList().erase(Inst);
-      }
-    }
-
-  NumCompletelyUnrolled += CompletelyUnroll;
-  ++NumUnrolled;
-  // Remove the loop from the LoopPassManager if it's completely removed.
-  if (CompletelyUnroll && LPM != NULL)
-    LPM->deleteLoopFromQueue(L);
-
-  // If we didn't completely unroll the loop, it should still be in LCSSA form.
-  if (!CompletelyUnroll)
-    assert(L->isLCSSAForm());
-
-  return true;
-}
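
The interesting policy in the deleted UnrollLoop above is which unrolled
latch copies keep a conditional branch. A self-contained sketch of that
rule with illustrative numbers only: a known trip count of 10 unrolled by
4 gives BreakoutTrip = 10 % 4 = 2, and TripMultiple is forced to 0 because
the exact count is known.

    #include <cstdio>

    // j is the copy the latch branches to, i.e. (i + 1) % Count, exactly
    // as in the deleted code; the two early-outs mirror its two rules.
    static bool latchNeedsConditional(unsigned j, unsigned BreakoutTrip,
                                      unsigned TripMultiple,
                                      bool CompletelyUnroll) {
      if (CompletelyUnroll && j == 0)
        return false;  // last copy branches straight to the loop exit
      // Copies that provably cannot be the breakout iteration take an
      // unconditional branch to the next copy.
      if (j != BreakoutTrip && (TripMultiple == 0 || j % TripMultiple != 0))
        return false;
      return true;
    }

    int main() {
      for (unsigned i = 0, e = 4; i != e; ++i) {   // Count = 4
        unsigned j = (i + 1) % e;
        std::printf("latch %u -> copy %u: %s\n", i, j,
                    latchNeedsConditional(j, 2, 0, false) ? "cond"
                                                          : "uncond");
      }
      return 0;
    }

Only the latch whose successor is the breakout copy keeps its conditional
exit; every other back-to-back branch is folded away.
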
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/ValueMapper.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/ValueMapper.cpp
index 2d8332f..39331d7 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/ValueMapper.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/ValueMapper.cpp
@@ -13,18 +13,15 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/Transforms/Utils/ValueMapper.h"
-#include "llvm/BasicBlock.h"
 #include "llvm/DerivedTypes.h"  // For getNullValue(Type::Int32Ty)
 #include "llvm/Constants.h"
-#include "llvm/GlobalValue.h"
-#include "llvm/Instruction.h"
-#include "llvm/LLVMContext.h"
+#include "llvm/Function.h"
 #include "llvm/Metadata.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/Support/ErrorHandling.h"
 using namespace llvm;
 
-Value *llvm::MapValue(const Value *V, ValueMapTy &VM, LLVMContext &Context) {
+Value *llvm::MapValue(const Value *V, ValueMapTy &VM) {
   Value *&VMSlot = VM[V];
   if (VMSlot) return VMSlot;      // Does it exist in the map yet?
   
@@ -36,80 +33,91 @@ Value *llvm::MapValue(const Value *V, ValueMapTy &VM, LLVMContext &Context) {
   if (isa<GlobalValue>(V) || isa<InlineAsm>(V) || isa<MetadataBase>(V))
     return VMSlot = const_cast<Value*>(V);
 
-  if (Constant *C = const_cast<Constant*>(dyn_cast<Constant>(V))) {
-    if (isa<ConstantInt>(C) || isa<ConstantFP>(C) ||
-        isa<ConstantPointerNull>(C) || isa<ConstantAggregateZero>(C) ||
-        isa<UndefValue>(C) || isa<MDString>(C))
-      return VMSlot = C;           // Primitive constants map directly
-    else if (ConstantArray *CA = dyn_cast<ConstantArray>(C)) {
-      for (User::op_iterator b = CA->op_begin(), i = b, e = CA->op_end();
-           i != e; ++i) {
-        Value *MV = MapValue(*i, VM, Context);
-        if (MV != *i) {
-          // This array must contain a reference to a global, make a new array
-          // and return it.
-          //
-          std::vector<Constant*> Values;
-          Values.reserve(CA->getNumOperands());
-          for (User::op_iterator j = b; j != i; ++j)
-            Values.push_back(cast<Constant>(*j));
-          Values.push_back(cast<Constant>(MV));
-          for (++i; i != e; ++i)
-            Values.push_back(cast<Constant>(MapValue(*i, VM, Context)));
-          return VM[V] = ConstantArray::get(CA->getType(), Values);
-        }
+  Constant *C = const_cast<Constant*>(dyn_cast<Constant>(V));
+  if (C == 0) return 0;
+  
+  if (isa<ConstantInt>(C) || isa<ConstantFP>(C) ||
+      isa<ConstantPointerNull>(C) || isa<ConstantAggregateZero>(C) ||
+      isa<UndefValue>(C) || isa<MDString>(C))
+    return VMSlot = C;           // Primitive constants map directly
+  
+  if (ConstantArray *CA = dyn_cast<ConstantArray>(C)) {
+    for (User::op_iterator b = CA->op_begin(), i = b, e = CA->op_end();
+         i != e; ++i) {
+      Value *MV = MapValue(*i, VM);
+      if (MV != *i) {
+        // This array must contain a reference to a global, make a new array
+        // and return it.
+        //
+        std::vector<Constant*> Values;
+        Values.reserve(CA->getNumOperands());
+        for (User::op_iterator j = b; j != i; ++j)
+          Values.push_back(cast<Constant>(*j));
+        Values.push_back(cast<Constant>(MV));
+        for (++i; i != e; ++i)
+          Values.push_back(cast<Constant>(MapValue(*i, VM)));
+        return VM[V] = ConstantArray::get(CA->getType(), Values);
       }
-      return VM[V] = C;
-
-    } else if (ConstantStruct *CS = dyn_cast<ConstantStruct>(C)) {
-      for (User::op_iterator b = CS->op_begin(), i = b, e = CS->op_end();
-           i != e; ++i) {
-        Value *MV = MapValue(*i, VM, Context);
-        if (MV != *i) {
-          // This struct must contain a reference to a global, make a new struct
-          // and return it.
-          //
-          std::vector<Constant*> Values;
-          Values.reserve(CS->getNumOperands());
-          for (User::op_iterator j = b; j != i; ++j)
-            Values.push_back(cast<Constant>(*j));
-          Values.push_back(cast<Constant>(MV));
-          for (++i; i != e; ++i)
-            Values.push_back(cast<Constant>(MapValue(*i, VM, Context)));
-          return VM[V] = ConstantStruct::get(CS->getType(), Values);
-        }
+    }
+    return VM[V] = C;
+  }
+  
+  if (ConstantStruct *CS = dyn_cast<ConstantStruct>(C)) {
+    for (User::op_iterator b = CS->op_begin(), i = b, e = CS->op_end();
+         i != e; ++i) {
+      Value *MV = MapValue(*i, VM);
+      if (MV != *i) {
+        // This struct must contain a reference to a global, make a new struct
+        // and return it.
+        //
+        std::vector<Constant*> Values;
+        Values.reserve(CS->getNumOperands());
+        for (User::op_iterator j = b; j != i; ++j)
+          Values.push_back(cast<Constant>(*j));
+        Values.push_back(cast<Constant>(MV));
+        for (++i; i != e; ++i)
+          Values.push_back(cast<Constant>(MapValue(*i, VM)));
+        return VM[V] = ConstantStruct::get(CS->getType(), Values);
       }
-      return VM[V] = C;
-
-    } else if (ConstantExpr *CE = dyn_cast<ConstantExpr>(C)) {
-      std::vector<Constant*> Ops;
-      for (User::op_iterator i = CE->op_begin(), e = CE->op_end(); i != e; ++i)
-        Ops.push_back(cast<Constant>(MapValue(*i, VM, Context)));
-      return VM[V] = CE->getWithOperands(Ops);
-    } else if (ConstantVector *CP = dyn_cast<ConstantVector>(C)) {
-      for (User::op_iterator b = CP->op_begin(), i = b, e = CP->op_end();
-           i != e; ++i) {
-        Value *MV = MapValue(*i, VM, Context);
-        if (MV != *i) {
-          // This vector value must contain a reference to a global, make a new
-          // vector constant and return it.
-          //
-          std::vector<Constant*> Values;
-          Values.reserve(CP->getNumOperands());
-          for (User::op_iterator j = b; j != i; ++j)
-            Values.push_back(cast<Constant>(*j));
-          Values.push_back(cast<Constant>(MV));
-          for (++i; i != e; ++i)
-            Values.push_back(cast<Constant>(MapValue(*i, VM, Context)));
-          return VM[V] = ConstantVector::get(Values);
-        }
+    }
+    return VM[V] = C;
+  }
+  
+  if (ConstantExpr *CE = dyn_cast<ConstantExpr>(C)) {
+    std::vector<Constant*> Ops;
+    for (User::op_iterator i = CE->op_begin(), e = CE->op_end(); i != e; ++i)
+      Ops.push_back(cast<Constant>(MapValue(*i, VM)));
+    return VM[V] = CE->getWithOperands(Ops);
+  }
+  
+  if (ConstantVector *CV = dyn_cast<ConstantVector>(C)) {
+    for (User::op_iterator b = CV->op_begin(), i = b, e = CV->op_end();
+         i != e; ++i) {
+      Value *MV = MapValue(*i, VM);
+      if (MV != *i) {
+        // This vector value must contain a reference to a global, make a new
+        // vector constant and return it.
+        //
+        std::vector<Constant*> Values;
+        Values.reserve(CV->getNumOperands());
+        for (User::op_iterator j = b; j != i; ++j)
+          Values.push_back(cast<Constant>(*j));
+        Values.push_back(cast<Constant>(MV));
+        for (++i; i != e; ++i)
+          Values.push_back(cast<Constant>(MapValue(*i, VM)));
+        return VM[V] = ConstantVector::get(Values);
       }
-      return VM[V] = C;
-      
-    } else {
-      llvm_unreachable("Unknown type of constant!");
     }
+    return VM[V] = C;
   }
+  
+  if (BlockAddress *BA = dyn_cast<BlockAddress>(C)) {
+    Function *F = cast<Function>(MapValue(BA->getFunction(), VM));
+    BasicBlock *BB = cast_or_null<BasicBlock>(MapValue(BA->getBasicBlock(),VM));
+    return VM[V] = BlockAddress::get(F, BB ? BB : BA->getBasicBlock());
+  }
+  
+  llvm_unreachable("Unknown type of constant!");
   return 0;
 }
 
@@ -118,7 +126,7 @@ Value *llvm::MapValue(const Value *V, ValueMapTy &VM, LLVMContext &Context) {
 ///
 void llvm::RemapInstruction(Instruction *I, ValueMapTy &ValueMap) {
   for (User::op_iterator op = I->op_begin(), E = I->op_end(); op != E; ++op) {
-    Value *V = MapValue(*op, ValueMap, I->getParent()->getContext());
+    Value *V = MapValue(*op, ValueMap);
     assert(V && "Referenced value not in value map!");
     *op = V;
   }
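
The de-nested ConstantArray/ConstantStruct/ConstantVector cases above all
share one copy-on-write idiom: scan the elements and only materialize a
new container once the first remapped element differs from the original.
A minimal sketch of the idiom outside LLVM, with int standing in for
Constant* and std::map for the value map:

    #include <map>
    #include <vector>

    typedef std::map<int, int> ValueMapTy;  // stand-in for the real map

    static int mapValue(int V, const ValueMapTy &VM) {
      ValueMapTy::const_iterator It = VM.find(V);
      return It == VM.end() ? V : It->second;
    }

    static std::vector<int> mapAggregate(const std::vector<int> &Elts,
                                         const ValueMapTy &VM) {
      for (size_t i = 0, e = Elts.size(); i != e; ++i) {
        int MV = mapValue(Elts[i], VM);
        if (MV == Elts[i])
          continue;  // nothing changed yet; keep the original
        // First changed element: copy the untouched prefix, then remap
        // the remainder, exactly as the loops above do.
        std::vector<int> Values(Elts.begin(), Elts.begin() + i);
        Values.push_back(MV);
        for (++i; i != e; ++i)
          Values.push_back(mapValue(Elts[i], VM));
        return Values;
      }
      return Elts;  // fully unchanged; the caller can keep sharing it
    }
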
diff --git a/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp b/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp
index 34ce7bb..82d7914 100644
--- a/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp
@@ -23,6 +23,7 @@
 #include "llvm/InlineAsm.h"
 #include "llvm/Instruction.h"
 #include "llvm/Instructions.h"
+#include "llvm/LLVMContext.h"
 #include "llvm/Operator.h"
 #include "llvm/Metadata.h"
 #include "llvm/Module.h"
@@ -680,6 +681,8 @@ void SlotTracker::processFunction() {
   ST_DEBUG("Inserting Instructions:\n");
 
   MetadataContext &TheMetadata = TheFunction->getContext().getMetadata();
+  typedef SmallVector<std::pair<unsigned, TrackingVH<MDNode> >, 2> MDMapTy;
+  MDMapTy MDs;
 
   // Add all of the basic blocks and instructions with no names.
   for (Function::const_iterator BB = TheFunction->begin(),
@@ -696,12 +699,11 @@ void SlotTracker::processFunction() {
           CreateMetadataSlot(N);
 
       // Process metadata attached with this instruction.
-      const MetadataContext::MDMapTy *MDs = TheMetadata.getMDs(I);
-      if (MDs)
-        for (MetadataContext::MDMapTy::const_iterator MI = MDs->begin(),
-               ME = MDs->end(); MI != ME; ++MI)
-          if (MDNode *MDN = dyn_cast_or_null<MDNode>(MI->second))
-            CreateMetadataSlot(MDN);
+      MDs.clear();
+      TheMetadata.getMDs(I, MDs);
+      for (MDMapTy::const_iterator MI = MDs.begin(), ME = MDs.end(); MI != ME; 
+           ++MI)
+        CreateMetadataSlot(MI->second);
     }
   }
 
@@ -818,9 +820,8 @@ void SlotTracker::CreateMetadataSlot(const MDNode *N) {
   unsigned DestSlot = mdnNext++;
   mdnMap[N] = DestSlot;
 
-  for (MDNode::const_elem_iterator MDI = N->elem_begin(),
-         MDE = N->elem_end(); MDI != MDE; ++MDI) {
-    const Value *TV = *MDI;
+  for (unsigned i = 0, e = N->getNumElements(); i != e; ++i) {
+    const Value *TV = N->getElement(i);
     if (TV)
       if (const MDNode *N2 = dyn_cast<MDNode>(TV))
         CreateMetadataSlot(N2);
@@ -906,9 +907,8 @@ static void WriteMDNodes(formatted_raw_ostream &Out, TypePrinting &TypePrinter,
     Out << '!' << i << " = metadata ";
     const MDNode *Node = Nodes[i];
     Out << "!{";
-    for (MDNode::const_elem_iterator NI = Node->elem_begin(),
-           NE = Node->elem_end(); NI != NE;) {
-      const Value *V = *NI;
+    for (unsigned mi = 0, me = Node->getNumElements(); mi != me; ++mi) {
+      const Value *V = Node->getElement(mi);
       if (!V)
         Out << "null";
       else if (const MDNode *N = dyn_cast<MDNode>(V)) {
@@ -916,11 +916,12 @@ static void WriteMDNodes(formatted_raw_ostream &Out, TypePrinting &TypePrinter,
         Out << '!' << Machine.getMetadataSlot(N);
       }
       else {
-        TypePrinter.print((*NI)->getType(), Out);
+        TypePrinter.print(V->getType(), Out);
         Out << ' ';
-        WriteAsOperandInternal(Out, *NI, &TypePrinter, &Machine);
+        WriteAsOperandInternal(Out, Node->getElement(mi), 
+                               &TypePrinter, &Machine);
       }
-      if (++NI != NE)
+      if (mi + 1 != me)
         Out << ", ";
     }
 
@@ -1059,6 +1060,15 @@ static void WriteConstantInt(raw_ostream &Out, const Constant *CV,
     Out << "zeroinitializer";
     return;
   }
+  
+  if (const BlockAddress *BA = dyn_cast<BlockAddress>(CV)) {
+    Out << "blockaddress(";
+    WriteAsOperandInternal(Out, BA->getFunction(), &TypePrinter, Machine);
+    Out << ", ";
+    WriteAsOperandInternal(Out, BA->getBasicBlock(), &TypePrinter, Machine);
+    Out << ")";
+    return;
+  }
 
   if (const ConstantArray *CA = dyn_cast<ConstantArray>(CV)) {
     // As a special case, print the array as a string if it is an array of
@@ -1206,6 +1216,8 @@ static void WriteAsOperandInternal(raw_ostream &Out, const Value *V,
     Out << "asm ";
     if (IA->hasSideEffects())
       Out << "sideeffect ";
+    if (IA->isAlignStack())
+      Out << "alignstack ";
     Out << '"';
     PrintEscapedString(IA->getAsmString(), Out);
     Out << "\", \"";
@@ -1226,7 +1238,8 @@ static void WriteAsOperandInternal(raw_ostream &Out, const Value *V,
     return;
   }
 
-  if (V->getValueID() == Value::PseudoSourceValueVal) {
+  if (V->getValueID() == Value::PseudoSourceValueVal ||
+      V->getValueID() == Value::FixedStackPseudoSourceValueVal) {
     V->print(Out);
     return;
   }
@@ -1294,7 +1307,7 @@ class AssemblyWriter {
   TypePrinting TypePrinter;
   AssemblyAnnotationWriter *AnnotationWriter;
   std::vector<const Type*> NumberedTypes;
-  DenseMap<unsigned, const char *> MDNames;
+  DenseMap<unsigned, StringRef> MDNames;
 
 public:
   inline AssemblyWriter(formatted_raw_ostream &o, SlotTracker &Mac,
@@ -1303,12 +1316,15 @@ public:
     : Out(o), Machine(Mac), TheModule(M), AnnotationWriter(AAW) {
     AddModuleTypesToPrinter(TypePrinter, NumberedTypes, M);
     // FIXME: Provide MDPrinter
-    MetadataContext &TheMetadata = M->getContext().getMetadata();
-    const StringMap<unsigned> *Names = TheMetadata.getHandlerNames();
-    for (StringMapConstIterator<unsigned> I = Names->begin(),
-           E = Names->end(); I != E; ++I) {
-      const StringMapEntry<unsigned> &Entry = *I;
-      MDNames[I->second] = Entry.getKeyData();
+    if (M) {
+      MetadataContext &TheMetadata = M->getContext().getMetadata();
+      SmallVector<std::pair<unsigned, StringRef>, 4> Names;
+      TheMetadata.getHandlerNames(Names);
+      for (SmallVector<std::pair<unsigned, StringRef>, 4>::iterator 
+             I = Names.begin(),
+             E = Names.end(); I != E; ++I) {
+        MDNames[I->first] = I->second;
+      }
     }
   }
 
@@ -1482,8 +1498,8 @@ static void PrintLinkage(GlobalValue::LinkageTypes LT,
   case GlobalValue::AvailableExternallyLinkage:
     Out << "available_externally ";
     break;
-  case GlobalValue::GhostLinkage:
-    llvm_unreachable("GhostLinkage not allowed in AsmWriter!");
+    // This is invalid syntax and just a debugging aid.
+  case GlobalValue::GhostLinkage:  Out << "ghost ";  break;
   }
 }
 
@@ -1826,7 +1842,7 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
     writeOperand(BI.getSuccessor(1), true);
 
   } else if (isa<SwitchInst>(I)) {
-    // Special case switch statement to get formatting nice and correct...
+    // Special case switch instruction to get formatting nice and correct.
     Out << ' ';
     writeOperand(Operand        , true);
     Out << ", ";
@@ -1840,6 +1856,18 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
       writeOperand(I.getOperand(op+1), true);
     }
     Out << "\n  ]";
+  } else if (isa<IndirectBrInst>(I)) {
+    // Special case indirectbr instruction to get formatting nice and correct.
+    Out << ' ';
+    writeOperand(Operand, true);
+    Out << ", [";
+    
+    for (unsigned i = 1, e = I.getNumOperands(); i != e; ++i) {
+      if (i != 1)
+        Out << ", ";
+      writeOperand(I.getOperand(i), true);
+    }
+    Out << ']';
   } else if (isa<PHINode>(I)) {
     Out << ' ';
     TypePrinter.print(I.getType(), Out);
@@ -1961,7 +1989,7 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
     Out << " unwind ";
     writeOperand(II->getUnwindDest(), true);
 
-  } else if (const AllocationInst *AI = dyn_cast<AllocationInst>(&I)) {
+  } else if (const AllocaInst *AI = dyn_cast<AllocaInst>(&I)) {
     Out << ' ';
     TypePrinter.print(AI->getType()->getElementType(), Out);
     if (!AI->getArraySize() || AI->isArrayAllocation()) {
@@ -2029,15 +2057,16 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
   }
 
   // Print Metadata info
-  MetadataContext &TheMetadata = I.getContext().getMetadata();
-  const MetadataContext::MDMapTy *MDMap = TheMetadata.getMDs(&I);
-  if (MDMap)
-    for (MetadataContext::MDMapTy::const_iterator MI = MDMap->begin(),
-           ME = MDMap->end(); MI != ME; ++MI)
-      if (const MDNode *MD = dyn_cast_or_null<MDNode>(MI->second))
-        Out << ", !" << MDNames[MI->first]
-            << " !" << Machine.getMetadataSlot(MD);
-
+  if (!MDNames.empty()) {
+    MetadataContext &TheMetadata = I.getContext().getMetadata();
+    typedef SmallVector<std::pair<unsigned, TrackingVH<MDNode> >, 2> MDMapTy;
+    MDMapTy MDs;
+    TheMetadata.getMDs(&I, MDs);
+    for (MDMapTy::const_iterator MI = MDs.begin(), ME = MDs.end(); MI != ME; 
+         ++MI)
+      Out << ", !" << MDNames[MI->first]
+          << " !" << Machine.getMetadataSlot(MI->second);
+  }
   printInfoComment(I);
 }
 
diff --git a/libclamav/c++/llvm/lib/VMCore/AutoUpgrade.cpp b/libclamav/c++/llvm/lib/VMCore/AutoUpgrade.cpp
index 3f23b8d..77ab19f 100644
--- a/libclamav/c++/llvm/lib/VMCore/AutoUpgrade.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/AutoUpgrade.cpp
@@ -120,6 +120,31 @@ static bool UpgradeIntrinsicFunction1(Function *F, Function *&NewFn) {
     }
     break;
 
+  case 'e':
+    //  The old llvm.eh.selector.i32 is equivalent to the new llvm.eh.selector.
+    if (Name.compare("llvm.eh.selector.i32") == 0) {
+      F->setName("llvm.eh.selector");
+      NewFn = F;
+      return true;
+    }
+    //  The old llvm.eh.typeid.for.i32 is equivalent to llvm.eh.typeid.for.
+    if (Name.compare("llvm.eh.typeid.for.i32") == 0) {
+      F->setName("llvm.eh.typeid.for");
+      NewFn = F;
+      return true;
+    }
+    //  Convert the old llvm.eh.selector.i64 to a call to llvm.eh.selector.
+    if (Name.compare("llvm.eh.selector.i64") == 0) {
+      NewFn = Intrinsic::getDeclaration(M, Intrinsic::eh_selector);
+      return true;
+    }
+    //  Convert the old llvm.eh.typeid.for.i64 to a call to llvm.eh.typeid.for.
+    if (Name.compare("llvm.eh.typeid.for.i64") == 0) {
+      NewFn = Intrinsic::getDeclaration(M, Intrinsic::eh_typeid_for);
+      return true;
+    }
+    break;
+
   case 'p':
     //  This upgrades the llvm.part.select overloaded intrinsic names to only 
     //  use one type specifier in the name. We only care about the old format
@@ -265,7 +290,7 @@ void llvm::UpgradeIntrinsicCall(CallInst *CI, Function *NewFn) {
       if (isLoadH || isLoadL) {
         Value *Op1 = UndefValue::get(Op0->getType());
         Value *Addr = new BitCastInst(CI->getOperand(2), 
-                                  PointerType::getUnqual(Type::getDoubleTy(C)),
+                                  Type::getDoublePtrTy(C),
                                       "upgraded.", CI);
         Value *Load = new LoadInst(Addr, "upgraded.", false, 8, CI);
         Value *Idx = ConstantInt::get(Type::getInt32Ty(C), 0);
@@ -409,6 +434,27 @@ void llvm::UpgradeIntrinsicCall(CallInst *CI, Function *NewFn) {
     CI->eraseFromParent();
   }
   break;
+  case Intrinsic::eh_selector:
+  case Intrinsic::eh_typeid_for: {
+    // Only the return type changed.
+    SmallVector<Value*, 8> Operands(CI->op_begin() + 1, CI->op_end());
+    CallInst *NewCI = CallInst::Create(NewFn, Operands.begin(), Operands.end(),
+                                       "upgraded." + CI->getName(), CI);
+    NewCI->setTailCall(CI->isTailCall());
+    NewCI->setCallingConv(CI->getCallingConv());
+
+    //  Handle any uses of the old CallInst.
+    if (!CI->use_empty()) {
+      //  Construct an appropriate cast from the new return type to the old.
+      CastInst *RetCast =
+        CastInst::Create(CastInst::getCastOpcode(NewCI, true,
+                                                 F->getReturnType(), true),
+                         NewCI, F->getReturnType(), NewCI->getName(), CI);
+      CI->replaceAllUsesWith(RetCast);
+    }
+    CI->eraseFromParent();
+  }
+  break;
   }
 }
 
diff --git a/libclamav/c++/llvm/lib/VMCore/BasicBlock.cpp b/libclamav/c++/llvm/lib/VMCore/BasicBlock.cpp
index 50cf84c..23d0557 100644
--- a/libclamav/c++/llvm/lib/VMCore/BasicBlock.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/BasicBlock.cpp
@@ -58,6 +58,24 @@ BasicBlock::BasicBlock(LLVMContext &C, const Twine &Name, Function *NewParent,
 
 
 BasicBlock::~BasicBlock() {
+  // If the address of the block is taken and it is being deleted (e.g. because
+  // it is dead), this means that there is either a dangling constant expr
+  // hanging off the block, or an undefined use of the block (source code
+  // expecting the address of a label to keep the block alive even though there
+  // is no indirect branch).  Handle these cases by zapping the BlockAddress
+  // nodes.  There are no other possible uses at this point.
+  if (hasAddressTaken()) {
+    assert(!use_empty() && "There should be at least one blockaddress!");
+    Constant *Replacement =
+      ConstantInt::get(llvm::Type::getInt32Ty(getContext()), 1);
+    while (!use_empty()) {
+      BlockAddress *BA = cast<BlockAddress>(use_back());
+      BA->replaceAllUsesWith(ConstantExpr::getIntToPtr(Replacement,
+                                                       BA->getType()));
+      BA->destroyConstant();
+    }
+  }
+  
   assert(getParent() == 0 && "BasicBlock still linked into the program!");
   dropAllReferences();
   InstList.clear();
@@ -277,3 +295,4 @@ BasicBlock *BasicBlock::splitBasicBlock(iterator I, const Twine &BBName) {
   }
   return New;
 }
+
diff --git a/libclamav/c++/llvm/lib/VMCore/ConstantFold.cpp b/libclamav/c++/llvm/lib/VMCore/ConstantFold.cpp
index 5411549..7f713d1 100644
--- a/libclamav/c++/llvm/lib/VMCore/ConstantFold.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/ConstantFold.cpp
@@ -179,6 +179,151 @@ static Constant *FoldBitCast(LLVMContext &Context,
 }
 
 
+/// ExtractConstantBytes - V is an integer constant which only has a subset of
+/// its bytes used.  The bytes used are indicated by ByteStart (which is the
+/// first byte used, counting from the least significant byte) and ByteSize,
+/// which is the number of bytes used.
+///
+/// This function analyzes the specified constant to see if the specified byte
+/// range can be returned as a simplified constant.  If so, the constant is
+/// returned, otherwise null is returned.
+/// 
+static Constant *ExtractConstantBytes(Constant *C, unsigned ByteStart,
+                                      unsigned ByteSize) {
+  assert(isa<IntegerType>(C->getType()) &&
+         (cast<IntegerType>(C->getType())->getBitWidth() & 7) == 0 &&
+         "Non-byte sized integer input");
+  unsigned CSize = cast<IntegerType>(C->getType())->getBitWidth()/8;
+  assert(ByteSize && "Must be accessing some piece");
+  assert(ByteStart+ByteSize <= CSize && "Extracting invalid piece from input");
+  assert(ByteSize != CSize && "Should not extract everything");
+  
+  // Constant Integers are simple.
+  if (ConstantInt *CI = dyn_cast<ConstantInt>(C)) {
+    APInt V = CI->getValue();
+    if (ByteStart)
+      V = V.lshr(ByteStart*8);
+    V.trunc(ByteSize*8);
+    return ConstantInt::get(CI->getContext(), V);
+  }
+  
+  // If the input is a constant expr, we might be able to recursively simplify.
+  // If not, we definitely can't do anything.
+  ConstantExpr *CE = dyn_cast<ConstantExpr>(C);
+  if (CE == 0) return 0;
+  
+  switch (CE->getOpcode()) {
+  default: return 0;
+  case Instruction::Or: {
+    Constant *RHS = ExtractConstantBytes(CE->getOperand(1), ByteStart,ByteSize);
+    if (RHS == 0)
+      return 0;
+    
+    // X | -1 -> -1.
+    if (ConstantInt *RHSC = dyn_cast<ConstantInt>(RHS))
+      if (RHSC->isAllOnesValue())
+        return RHSC;
+    
+    Constant *LHS = ExtractConstantBytes(CE->getOperand(0), ByteStart,ByteSize);
+    if (LHS == 0)
+      return 0;
+    return ConstantExpr::getOr(LHS, RHS);
+  }
+  case Instruction::And: {
+    Constant *RHS = ExtractConstantBytes(CE->getOperand(1), ByteStart,ByteSize);
+    if (RHS == 0)
+      return 0;
+    
+    // X & 0 -> 0.
+    if (RHS->isNullValue())
+      return RHS;
+    
+    Constant *LHS = ExtractConstantBytes(CE->getOperand(0), ByteStart,ByteSize);
+    if (LHS == 0)
+      return 0;
+    return ConstantExpr::getAnd(LHS, RHS);
+  }
+  case Instruction::LShr: {
+    ConstantInt *Amt = dyn_cast<ConstantInt>(CE->getOperand(1));
+    if (Amt == 0)
+      return 0;
+    unsigned ShAmt = Amt->getZExtValue();
+    // Cannot analyze non-byte shifts.
+    if ((ShAmt & 7) != 0)
+      return 0;
+    ShAmt >>= 3;
+    
+    // If the extract is known to be all zeros, return zero.
+    if (ByteStart >= CSize-ShAmt)
+      return Constant::getNullValue(IntegerType::get(CE->getContext(),
+                                                     ByteSize*8));
+    // If the extract is known to be fully in the input, extract it.
+    if (ByteStart+ByteSize+ShAmt <= CSize)
+      return ExtractConstantBytes(CE->getOperand(0), ByteStart+ShAmt, ByteSize);
+    
+    // TODO: Handle the 'partially zero' case.
+    return 0;
+  }
+    
+  case Instruction::Shl: {
+    ConstantInt *Amt = dyn_cast<ConstantInt>(CE->getOperand(1));
+    if (Amt == 0)
+      return 0;
+    unsigned ShAmt = Amt->getZExtValue();
+    // Cannot analyze non-byte shifts.
+    if ((ShAmt & 7) != 0)
+      return 0;
+    ShAmt >>= 3;
+    
+    // If the extract is known to be all zeros, return zero.
+    if (ByteStart+ByteSize <= ShAmt)
+      return Constant::getNullValue(IntegerType::get(CE->getContext(),
+                                                     ByteSize*8));
+    // If the extract is known to be fully in the input, extract it.
+    if (ByteStart >= ShAmt)
+      return ExtractConstantBytes(CE->getOperand(0), ByteStart-ShAmt, ByteSize);
+    
+    // TODO: Handle the 'partially zero' case.
+    return 0;
+  }
+      
+  case Instruction::ZExt: {
+    unsigned SrcBitSize =
+      cast<IntegerType>(CE->getOperand(0)->getType())->getBitWidth();
+    
+    // If extracting something that is completely zero, return 0.
+    if (ByteStart*8 >= SrcBitSize)
+      return Constant::getNullValue(IntegerType::get(CE->getContext(),
+                                                     ByteSize*8));
+
+    // If exactly extracting the input, return it.
+    if (ByteStart == 0 && ByteSize*8 == SrcBitSize)
+      return CE->getOperand(0);
+    
+    // If extracting something completely in the input, and the input is a
+    // multiple of 8 bits, recurse.
+    if ((SrcBitSize&7) == 0 && (ByteStart+ByteSize)*8 <= SrcBitSize)
+      return ExtractConstantBytes(CE->getOperand(0), ByteStart, ByteSize);
+      
+    // Otherwise, if extracting a subset of an input that is not a multiple of
+    // 8 bits wide, do a shift and trunc to get the bits.
+    if ((ByteStart+ByteSize)*8 < SrcBitSize) {
+      assert((SrcBitSize&7) && "Shouldn't get byte sized case here");
+      Constant *Res = CE->getOperand(0);
+      if (ByteStart)
+        Res = ConstantExpr::getLShr(Res, 
+                                 ConstantInt::get(Res->getType(), ByteStart*8));
+      return ConstantExpr::getTrunc(Res, IntegerType::get(C->getContext(),
+                                                          ByteSize*8));
+    }
+    
+    // TODO: Handle the 'partially zero' case.
+    return 0;
+  }
+  }
+}
+
+
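
The ConstantInt fast path of ExtractConstantBytes above is a logical
shift right followed by a truncation. The same arithmetic on a plain
64-bit integer, purely as illustration (widths fixed at 8 bytes):

    #include <cassert>
    #include <cstdio>
    #include <stdint.h>

    static uint64_t extractBytes(uint64_t V, unsigned ByteStart,
                                 unsigned ByteSize) {
      assert(ByteSize && ByteStart + ByteSize <= 8 && "bad byte range");
      if (ByteStart)
        V >>= ByteStart * 8;                       // the lshr step
      if (ByteSize < 8)
        V &= ((uint64_t)1 << (ByteSize * 8)) - 1;  // the trunc step
      return V;
    }

    int main() {
      // Bytes 1..2 of 0x11223344, counting from the least significant
      // byte, are 0x2233.
      std::printf("0x%llx\n",
                  (unsigned long long)extractBytes(0x11223344u, 1, 2));
      return 0;
    }
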
 Constant *llvm::ConstantFoldCastInstruction(LLVMContext &Context, 
                                             unsigned opc, Constant *V,
                                             const Type *DestTy) {
@@ -192,7 +337,7 @@ Constant *llvm::ConstantFoldCastInstruction(LLVMContext &Context,
     return UndefValue::get(DestTy);
   }
   // No compile-time operations on this type yet.
-  if (V->getType() == Type::getPPC_FP128Ty(Context) || DestTy == Type::getPPC_FP128Ty(Context))
+  if (V->getType()->isPPC_FP128Ty() || DestTy->isPPC_FP128Ty())
     return 0;
 
   // If the cast operand is a constant expression, there are a few things we can
@@ -236,15 +381,17 @@ Constant *llvm::ConstantFoldCastInstruction(LLVMContext &Context,
   // We actually have to do a cast now. Perform the cast according to the
   // opcode specified.
   switch (opc) {
+  default:
+    llvm_unreachable("Failed to cast constant expression");
   case Instruction::FPTrunc:
   case Instruction::FPExt:
     if (ConstantFP *FPC = dyn_cast<ConstantFP>(V)) {
       bool ignored;
       APFloat Val = FPC->getValueAPF();
-      Val.convert(DestTy == Type::getFloatTy(Context) ? APFloat::IEEEsingle :
-                  DestTy == Type::getDoubleTy(Context) ? APFloat::IEEEdouble :
-                  DestTy == Type::getX86_FP80Ty(Context) ? APFloat::x87DoubleExtended :
-                  DestTy == Type::getFP128Ty(Context) ? APFloat::IEEEquad :
+      Val.convert(DestTy->isFloatTy() ? APFloat::IEEEsingle :
+                  DestTy->isDoubleTy() ? APFloat::IEEEdouble :
+                  DestTy->isX86_FP80Ty() ? APFloat::x87DoubleExtended :
+                  DestTy->isFP128Ty() ? APFloat::IEEEquad :
                   APFloat::Bogus,
                   APFloat::rmNearestTiesToEven, &ignored);
       return ConstantFP::get(Context, Val);
@@ -300,23 +447,27 @@ Constant *llvm::ConstantFoldCastInstruction(LLVMContext &Context,
       return ConstantInt::get(Context, Result);
     }
     return 0;
-  case Instruction::Trunc:
+  case Instruction::Trunc: {
+    uint32_t DestBitWidth = cast<IntegerType>(DestTy)->getBitWidth();
     if (ConstantInt *CI = dyn_cast<ConstantInt>(V)) {
-      uint32_t BitWidth = cast<IntegerType>(DestTy)->getBitWidth();
       APInt Result(CI->getValue());
-      Result.trunc(BitWidth);
+      Result.trunc(DestBitWidth);
       return ConstantInt::get(Context, Result);
     }
+    
+    // The input must be a constantexpr.  See if we can simplify this based on
+    // the bytes we are demanding.  Only do this if the source and dest are an
+    // even multiple of a byte.
+    if ((DestBitWidth & 7) == 0 &&
+        (cast<IntegerType>(V->getType())->getBitWidth() & 7) == 0)
+      if (Constant *Res = ExtractConstantBytes(V, 0, DestBitWidth / 8))
+        return Res;
+      
     return 0;
+  }
   case Instruction::BitCast:
     return FoldBitCast(Context, V, DestTy);
-  default:
-    assert(!"Invalid CE CastInst opcode");
-    break;
   }
-
-  llvm_unreachable("Failed to cast constant expression");
-  return 0;
 }
 
 Constant *llvm::ConstantFoldSelectInstruction(LLVMContext&,
@@ -483,7 +634,15 @@ Constant *llvm::ConstantFoldExtractValueInstruction(LLVMContext &Context,
                                                               Idxs + NumIdx));
 
   // Otherwise recurse.
-  return ConstantFoldExtractValueInstruction(Context, Agg->getOperand(*Idxs),
+  if (ConstantStruct *CS = dyn_cast<ConstantStruct>(Agg))
+    return ConstantFoldExtractValueInstruction(Context, CS->getOperand(*Idxs),
+                                               Idxs+1, NumIdx-1);
+
+  if (ConstantArray *CA = dyn_cast<ConstantArray>(Agg))
+    return ConstantFoldExtractValueInstruction(Context, CA->getOperand(*Idxs),
+                                               Idxs+1, NumIdx-1);
+  ConstantVector *CV = cast<ConstantVector>(Agg);
+  return ConstantFoldExtractValueInstruction(Context, CV->getOperand(*Idxs),
                                              Idxs+1, NumIdx-1);
 }
 
@@ -563,11 +722,10 @@ Constant *llvm::ConstantFoldInsertValueInstruction(LLVMContext &Context,
     // Insertion of constant into aggregate constant.
     std::vector<Constant*> Ops(Agg->getNumOperands());
     for (unsigned i = 0; i < Agg->getNumOperands(); ++i) {
-      Constant *Op =
-        (*Idxs == i) ?
-        ConstantFoldInsertValueInstruction(Context, Agg->getOperand(i),
-                                           Val, Idxs+1, NumIdx-1) :
-        Agg->getOperand(i);
+      Constant *Op = cast<Constant>(Agg->getOperand(i));
+      if (*Idxs == i)
+        Op = ConstantFoldInsertValueInstruction(Context, Op,
+                                                Val, Idxs+1, NumIdx-1);
       Ops[i] = Op;
     }
     
@@ -584,7 +742,7 @@ Constant *llvm::ConstantFoldBinaryInstruction(LLVMContext &Context,
                                               unsigned Opcode,
                                               Constant *C1, Constant *C2) {
   // No compile-time operations on this type yet.
-  if (C1->getType() == Type::getPPC_FP128Ty(Context))
+  if (C1->getType()->isPPC_FP128Ty())
     return 0;
 
   // Handle UndefValue up front.
@@ -1110,7 +1268,7 @@ static FCmpInst::Predicate evaluateFCmpRelation(LLVMContext &Context,
          "Cannot compare values of different types!");
 
   // No compile-time operations on this type yet.
-  if (V1->getType() == Type::getPPC_FP128Ty(Context))
+  if (V1->getType()->isPPC_FP128Ty())
     return FCmpInst::BAD_FCMP_PREDICATE;
 
   // Handle degenerate case quickly
@@ -1403,7 +1561,7 @@ Constant *llvm::ConstantFoldCompareInstruction(LLVMContext &Context,
     return UndefValue::get(ResultTy);
 
   // No compile-time operations on this type yet.
-  if (C1->getType() == Type::getPPC_FP128Ty(Context))
+  if (C1->getType()->isPPC_FP128Ty())
     return 0;
 
   // icmp eq/ne(null,GV) -> false/true
@@ -1837,7 +1995,8 @@ Constant *llvm::ConstantFoldGetElementPtr(LLVMContext &Context,
     // This happens with pointers to member functions in C++.
     if (CE->getOpcode() == Instruction::IntToPtr && NumIdx == 1 &&
         isa<ConstantInt>(CE->getOperand(0)) && isa<ConstantInt>(Idxs[0]) &&
-        cast<PointerType>(CE->getType())->getElementType() == Type::getInt8Ty(Context)) {
+        cast<PointerType>(CE->getType())->getElementType() ==
+            Type::getInt8Ty(Context)) {
       Constant *Base = CE->getOperand(0);
       Constant *Offset = Idxs[0];
 
diff --git a/libclamav/c++/llvm/lib/VMCore/Constants.cpp b/libclamav/c++/llvm/lib/VMCore/Constants.cpp
index 2da95b0..c622558 100644
--- a/libclamav/c++/llvm/lib/VMCore/Constants.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Constants.cpp
@@ -7,7 +7,7 @@
 //
 //===----------------------------------------------------------------------===//
 //
-// This file implements the Constant* classes...
+// This file implements the Constant* classes.
 //
 //===----------------------------------------------------------------------===//
 
@@ -29,9 +29,6 @@
 #include "llvm/Support/MathExtras.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Support/GetElementPtrTypeIterator.h"
-#include "llvm/System/Mutex.h"
-#include "llvm/System/RWMutex.h"
-#include "llvm/System/Threading.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/SmallVector.h"
 #include <algorithm>
@@ -44,7 +41,7 @@ using namespace llvm;
 
 // Constructor to create a '0' constant of arbitrary type...
 static const uint64_t zero[2] = {0, 0};
-Constant* Constant::getNullValue(const Type* Ty) {
+Constant *Constant::getNullValue(const Type *Ty) {
   switch (Ty->getTypeID()) {
   case Type::IntegerTyID:
     return ConstantInt::get(Ty, 0);
@@ -72,7 +69,7 @@ Constant* Constant::getNullValue(const Type* Ty) {
   }
 }
 
-Constant* Constant::getIntegerValue(const Type* Ty, const APInt &V) {
+Constant* Constant::getIntegerValue(const Type *Ty, const APInt &V) {
   const Type *ScalarTy = Ty->getScalarType();
 
   // Create the base integer constant.
@@ -89,13 +86,13 @@ Constant* Constant::getIntegerValue(const Type* Ty, const APInt &V) {
   return C;
 }
 
-Constant* Constant::getAllOnesValue(const Type* Ty) {
-  if (const IntegerType* ITy = dyn_cast<IntegerType>(Ty))
+Constant* Constant::getAllOnesValue(const Type *Ty) {
+  if (const IntegerType *ITy = dyn_cast<IntegerType>(Ty))
     return ConstantInt::get(Ty->getContext(),
                             APInt::getAllOnesValue(ITy->getBitWidth()));
   
   std::vector<Constant*> Elts;
-  const VectorType* VTy = cast<VectorType>(Ty);
+  const VectorType *VTy = cast<VectorType>(Ty);
   Elts.resize(VTy->getNumElements(), getAllOnesValue(VTy->getElementType()));
   assert(Elts[0] && "Not a vector integer type!");
   return cast<ConstantVector>(ConstantVector::get(Elts));
@@ -140,7 +137,7 @@ bool Constant::canTrap() const {
   
   // ConstantExpr traps if any operands can trap. 
   for (unsigned i = 0, e = getNumOperands(); i != e; ++i)
-    if (getOperand(i)->canTrap()) 
+    if (CE->getOperand(i)->canTrap()) 
       return true;
 
   // Otherwise, only specific operations can trap.
@@ -154,12 +151,27 @@ bool Constant::canTrap() const {
   case Instruction::SRem:
   case Instruction::FRem:
     // Div and rem can trap if the RHS is not known to be non-zero.
-    if (!isa<ConstantInt>(getOperand(1)) || getOperand(1)->isNullValue())
+    if (!isa<ConstantInt>(CE->getOperand(1)) ||CE->getOperand(1)->isNullValue())
       return true;
     return false;
   }
 }
 
+/// isConstantUsed - Return true if the constant has users other than constant
+/// exprs and other dangling things.
+bool Constant::isConstantUsed() const {
+  for (use_const_iterator UI = use_begin(), E = use_end(); UI != E; ++UI) {
+    const Constant *UC = dyn_cast<Constant>(*UI);
+    if (UC == 0 || isa<GlobalValue>(UC))
+      return true;
+    
+    if (UC->isConstantUsed())
+      return true;
+  }
+  return false;
+}
+
+
 
 /// getRelocationInfo - This method classifies the entry according to
 /// whether or not it may generate a relocation entry.  This must be
@@ -182,9 +194,13 @@ Constant::PossibleRelocationsTy Constant::getRelocationInfo() const {
     return GlobalRelocations;    // Global reference.
   }
   
+  if (const BlockAddress *BA = dyn_cast<BlockAddress>(this))
+    return BA->getFunction()->getRelocationInfo();
+  
   PossibleRelocationsTy Result = NoRelocation;
   for (unsigned i = 0, e = getNumOperands(); i != e; ++i)
-    Result = std::max(Result, getOperand(i)->getRelocationInfo());
+    Result = std::max(Result,
+                      cast<Constant>(getOperand(i))->getRelocationInfo());
   
   return Result;
 }
@@ -232,7 +248,6 @@ ConstantInt::ConstantInt(const IntegerType *Ty, const APInt& V)
 
 ConstantInt* ConstantInt::getTrue(LLVMContext &Context) {
   LLVMContextImpl *pImpl = Context.pImpl;
-  sys::SmartScopedWriter<true>(pImpl->ConstantsLock);
   if (pImpl->TheTrueVal)
     return pImpl->TheTrueVal;
   else
@@ -242,7 +257,6 @@ ConstantInt* ConstantInt::getTrue(LLVMContext &Context) {
 
 ConstantInt* ConstantInt::getFalse(LLVMContext &Context) {
   LLVMContextImpl *pImpl = Context.pImpl;
-  sys::SmartScopedWriter<true>(pImpl->ConstantsLock);
   if (pImpl->TheFalseVal)
     return pImpl->TheFalseVal;
   else
@@ -261,22 +275,9 @@ ConstantInt *ConstantInt::get(LLVMContext &Context, const APInt& V) {
   const IntegerType *ITy = IntegerType::get(Context, V.getBitWidth());
   // get an existing value or the insertion position
   DenseMapAPIntKeyInfo::KeyTy Key(V, ITy);
-  
-  Context.pImpl->ConstantsLock.reader_acquire();
   ConstantInt *&Slot = Context.pImpl->IntConstants[Key]; 
-  Context.pImpl->ConstantsLock.reader_release();
-    
-  if (!Slot) {
-    sys::SmartScopedWriter<true> Writer(Context.pImpl->ConstantsLock);
-    ConstantInt *&NewSlot = Context.pImpl->IntConstants[Key]; 
-    if (!Slot) {
-      NewSlot = new ConstantInt(ITy, V);
-    }
-    
-    return NewSlot;
-  } else {
-    return Slot;
-  }
+  if (!Slot) Slot = new ConstantInt(ITy, V);
+  return Slot;
 }
 
 Constant* ConstantInt::get(const Type* Ty, uint64_t V, bool isSigned) {
@@ -317,7 +318,7 @@ Constant* ConstantInt::get(const Type* Ty, const APInt& V) {
   return C;
 }
 
-ConstantInt* ConstantInt::get(const IntegerType* Ty, const StringRef& Str,
+ConstantInt* ConstantInt::get(const IntegerType* Ty, StringRef Str,
                               uint8_t radix) {
   return get(Ty->getContext(), APInt(Ty->getBitWidth(), Str, radix));
 }
@@ -327,16 +328,16 @@ ConstantInt* ConstantInt::get(const IntegerType* Ty, const StringRef& Str,
 //===----------------------------------------------------------------------===//
 
 static const fltSemantics *TypeToFloatSemantics(const Type *Ty) {
-  if (Ty == Type::getFloatTy(Ty->getContext()))
+  if (Ty->isFloatTy())
     return &APFloat::IEEEsingle;
-  if (Ty == Type::getDoubleTy(Ty->getContext()))
+  if (Ty->isDoubleTy())
     return &APFloat::IEEEdouble;
-  if (Ty == Type::getX86_FP80Ty(Ty->getContext()))
+  if (Ty->isX86_FP80Ty())
     return &APFloat::x87DoubleExtended;
-  else if (Ty == Type::getFP128Ty(Ty->getContext()))
+  else if (Ty->isFP128Ty())
     return &APFloat::IEEEquad;
   
-  assert(Ty == Type::getPPC_FP128Ty(Ty->getContext()) && "Unknown FP format");
+  assert(Ty->isPPC_FP128Ty() && "Unknown FP format");
   return &APFloat::PPCDoubleDouble;
 }
 
@@ -361,7 +362,7 @@ Constant* ConstantFP::get(const Type* Ty, double V) {
 }
 
 
-Constant* ConstantFP::get(const Type* Ty, const StringRef& Str) {
+Constant* ConstantFP::get(const Type* Ty, StringRef Str) {
   LLVMContext &Context = Ty->getContext();
 
   APFloat FV(*TypeToFloatSemantics(Ty->getScalarType()), Str);
@@ -405,32 +406,24 @@ ConstantFP* ConstantFP::get(LLVMContext &Context, const APFloat& V) {
   
   LLVMContextImpl* pImpl = Context.pImpl;
   
-  pImpl->ConstantsLock.reader_acquire();
   ConstantFP *&Slot = pImpl->FPConstants[Key];
-  pImpl->ConstantsLock.reader_release();
     
   if (!Slot) {
-    sys::SmartScopedWriter<true> Writer(pImpl->ConstantsLock);
-    ConstantFP *&NewSlot = pImpl->FPConstants[Key];
-    if (!NewSlot) {
-      const Type *Ty;
-      if (&V.getSemantics() == &APFloat::IEEEsingle)
-        Ty = Type::getFloatTy(Context);
-      else if (&V.getSemantics() == &APFloat::IEEEdouble)
-        Ty = Type::getDoubleTy(Context);
-      else if (&V.getSemantics() == &APFloat::x87DoubleExtended)
-        Ty = Type::getX86_FP80Ty(Context);
-      else if (&V.getSemantics() == &APFloat::IEEEquad)
-        Ty = Type::getFP128Ty(Context);
-      else {
-        assert(&V.getSemantics() == &APFloat::PPCDoubleDouble && 
-               "Unknown FP format");
-        Ty = Type::getPPC_FP128Ty(Context);
-      }
-      NewSlot = new ConstantFP(Ty, V);
+    const Type *Ty;
+    if (&V.getSemantics() == &APFloat::IEEEsingle)
+      Ty = Type::getFloatTy(Context);
+    else if (&V.getSemantics() == &APFloat::IEEEdouble)
+      Ty = Type::getDoubleTy(Context);
+    else if (&V.getSemantics() == &APFloat::x87DoubleExtended)
+      Ty = Type::getX86_FP80Ty(Context);
+    else if (&V.getSemantics() == &APFloat::IEEEquad)
+      Ty = Type::getFP128Ty(Context);
+    else {
+      assert(&V.getSemantics() == &APFloat::PPCDoubleDouble && 
+             "Unknown FP format");
+      Ty = Type::getPPC_FP128Ty(Context);
     }
-    
-    return NewSlot;
+    Slot = new ConstantFP(Ty, V);
   }
   
   return Slot;
@@ -472,9 +465,7 @@ ConstantArray::ConstantArray(const ArrayType *T,
   for (std::vector<Constant*>::const_iterator I = V.begin(), E = V.end();
        I != E; ++I, ++OL) {
     Constant *C = *I;
-    assert((C->getType() == T->getElementType() ||
-            (T->isAbstract() &&
-             C->getType()->getTypeID() == T->getElementType()->getTypeID())) &&
+    assert(C->getType() == T->getElementType() &&
            "Initializer for array element doesn't match array element type!");
     *OL = C;
   }
@@ -517,7 +508,7 @@ Constant* ConstantArray::get(const ArrayType* T, Constant* const* Vals,
 /// Otherwise, the length parameter specifies how much of the string to use 
 /// and it won't be null terminated.
 ///
-Constant* ConstantArray::get(LLVMContext &Context, const StringRef &Str,
+Constant* ConstantArray::get(LLVMContext &Context, StringRef Str,
                              bool AddNull) {
   std::vector<Constant*> ElementVals;
   for (unsigned i = 0; i < Str.size(); ++i)
@@ -545,11 +536,7 @@ ConstantStruct::ConstantStruct(const StructType *T,
   for (std::vector<Constant*>::const_iterator I = V.begin(), E = V.end();
        I != E; ++I, ++OL) {
     Constant *C = *I;
-    assert((C->getType() == T->getElementType(I-V.begin()) ||
-            ((T->getElementType(I-V.begin())->isAbstract() ||
-              C->getType()->isAbstract()) &&
-             T->getElementType(I-V.begin())->getTypeID() == 
-                   C->getType()->getTypeID())) &&
+    assert(C->getType() == T->getElementType(I-V.begin()) &&
            "Initializer for struct element doesn't match struct element type!");
     *OL = C;
   }
@@ -594,9 +581,7 @@ ConstantVector::ConstantVector(const VectorType *T,
     for (std::vector<Constant*>::const_iterator I = V.begin(), E = V.end();
          I != E; ++I, ++OL) {
       Constant *C = *I;
-      assert((C->getType() == T->getElementType() ||
-            (T->isAbstract() &&
-             C->getType()->getTypeID() == T->getElementType()->getTypeID())) &&
+      assert(C->getType() == T->getElementType() &&
            "Initializer for vector element doesn't match vector element type!");
     *OL = C;
   }
@@ -1018,7 +1003,7 @@ Constant *ConstantVector::getSplatValue() {
   return Elt;
 }
 
-//---- ConstantPointerNull::get() implementation...
+//---- ConstantPointerNull::get() implementation.
 //
 
 ConstantPointerNull *ConstantPointerNull::get(const PointerType *Ty) {
@@ -1035,23 +1020,95 @@ void ConstantPointerNull::destroyConstant() {
 }
 
 
-//---- UndefValue::get() implementation...
+//---- UndefValue::get() implementation.
 //
 
 UndefValue *UndefValue::get(const Type *Ty) {
-  // Implicitly locked.
   return Ty->getContext().pImpl->UndefValueConstants.getOrCreate(Ty, 0);
 }
 
 // destroyConstant - Remove the constant from the constant table.
 //
 void UndefValue::destroyConstant() {
-  // Implicitly locked.
   getType()->getContext().pImpl->UndefValueConstants.remove(this);
   destroyConstantImpl();
 }
 
-//---- ConstantExpr::get() implementations...
+//---- BlockAddress::get() implementation.
+//
+
+BlockAddress *BlockAddress::get(BasicBlock *BB) {
+  assert(BB->getParent() != 0 && "Block must have a parent");
+  return get(BB->getParent(), BB);
+}
+
+BlockAddress *BlockAddress::get(Function *F, BasicBlock *BB) {
+  BlockAddress *&BA =
+    F->getContext().pImpl->BlockAddresses[std::make_pair(F, BB)];
+  if (BA == 0)
+    BA = new BlockAddress(F, BB);
+  
+  assert(BA->getFunction() == F && "Basic block moved between functions");
+  return BA;
+}
+
+BlockAddress::BlockAddress(Function *F, BasicBlock *BB)
+: Constant(Type::getInt8PtrTy(F->getContext()), Value::BlockAddressVal,
+           &Op<0>(), 2) {
+  setOperand(0, F);
+  setOperand(1, BB);
+  BB->AdjustBlockAddressRefCount(1);
+}
+
+
+// destroyConstant - Remove the constant from the constant table.
+//
+void BlockAddress::destroyConstant() {
+  getFunction()->getType()->getContext().pImpl
+    ->BlockAddresses.erase(std::make_pair(getFunction(), getBasicBlock()));
+  getBasicBlock()->AdjustBlockAddressRefCount(-1);
+  destroyConstantImpl();
+}
+
+void BlockAddress::replaceUsesOfWithOnConstant(Value *From, Value *To, Use *U) {
+  // This could be replacing either the Basic Block or the Function.  In either
+  // case, we have to remove the map entry.
+  Function *NewF = getFunction();
+  BasicBlock *NewBB = getBasicBlock();
+  
+  if (U == &Op<0>())
+    NewF = cast<Function>(To);
+  else
+    NewBB = cast<BasicBlock>(To);
+  
+  // See if the 'new' entry already exists, if not, just update this in place
+  // and return early.
+  BlockAddress *&NewBA =
+    getContext().pImpl->BlockAddresses[std::make_pair(NewF, NewBB)];
+  if (NewBA == 0) {
+    getBasicBlock()->AdjustBlockAddressRefCount(-1);
+    
+    // Remove the old entry, this can't cause the map to rehash (just a
+    // tombstone will get added).
+    getContext().pImpl->BlockAddresses.erase(std::make_pair(getFunction(),
+                                                            getBasicBlock()));
+    NewBA = this;
+    setOperand(0, NewF);
+    setOperand(1, NewBB);
+    getBasicBlock()->AdjustBlockAddressRefCount(1);
+    return;
+  }
+
+  // Otherwise, I do need to replace this with an existing value.
+  assert(NewBA != this && "I didn't contain From!");
+  
+  // Everyone using this now uses the replacement.
+  uncheckedReplaceAllUsesWith(NewBA);
+  
+  destroyConstant();
+}
+
+//---- ConstantExpr::get() implementations.
 //
 
 /// This is a utility function to handle folding of casts and lookup of the
@@ -1869,7 +1926,7 @@ const char *ConstantExpr::getOpcodeName() const {
 /// single invocation handles all 1000 uses.  Handling them one at a time would
 /// work, but would be really slow because it would have to unique each updated
 /// array instance.
-
+///
 void ConstantArray::replaceUsesOfWithOnConstant(Value *From, Value *To,
                                                 Use *U) {
   assert(isa<Constant>(To) && "Cannot make Constant refer to non-constant!");
@@ -1916,7 +1973,6 @@ void ConstantArray::replaceUsesOfWithOnConstant(Value *From, Value *To,
     Replacement = ConstantAggregateZero::get(getType());
   } else {
     // Check to see if we have this array type already.
-    sys::SmartScopedWriter<true> Writer(pImpl->ConstantsLock);
     bool Exists;
     LLVMContextImpl::ArrayConstantsTy::MapTy::iterator I =
       pImpl->ArrayConstants.InsertOrGetItem(Lookup, Exists);
@@ -1995,7 +2051,6 @@ void ConstantStruct::replaceUsesOfWithOnConstant(Value *From, Value *To,
     Replacement = ConstantAggregateZero::get(getType());
   } else {
     // Check to see if we have this array type already.
-    sys::SmartScopedWriter<true> Writer(pImpl->ConstantsLock);
     bool Exists;
     LLVMContextImpl::StructConstantsTy::MapTy::iterator I =
       pImpl->StructConstants.InsertOrGetItem(Lookup, Exists);
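
Usage sketch (not part of the patch) for the new BlockAddress API above;
F and BB are assumed to be an existing Function* and one of its blocks:

    BlockAddress *BA1 = BlockAddress::get(F, BB);
    BlockAddress *BA2 = BlockAddress::get(BB);  // parent taken from BB
    // Both overloads hit the same LLVMContextImpl::BlockAddresses map,
    // so the constant is uniqued per (Function, BasicBlock) pair.
    assert(BA1 == BA2 && "BlockAddress constants are uniqued");
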
diff --git a/libclamav/c++/llvm/lib/VMCore/ConstantsContext.h b/libclamav/c++/llvm/lib/VMCore/ConstantsContext.h
index 526b4b1..268a660 100644
--- a/libclamav/c++/llvm/lib/VMCore/ConstantsContext.h
+++ b/libclamav/c++/llvm/lib/VMCore/ConstantsContext.h
@@ -20,8 +20,6 @@
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
-#include "llvm/System/Mutex.h"
-#include "llvm/System/RWMutex.h"
 #include <map>
 
 namespace llvm {
@@ -332,7 +330,7 @@ struct ExprMapKeyType {
 // The number of operands for each ConstantCreator::create method is
 // determined by the ConstantTraits template.
 // ConstantCreator - A class that is used to create constants by
-// ValueMap*.  This class should be partially specialized if there is
+// ConstantUniqueMap*.  This class should be partially specialized if there is
 // something strange that needs to be done to interface to the ctor for the
 // constant.
 //
@@ -506,7 +504,7 @@ struct ConstantKeyData<UndefValue> {
 
 template<class ValType, class TypeClass, class ConstantClass,
          bool HasLargeKey = false /*true for arrays and structs*/ >
-class ValueMap : public AbstractTypeUser {
+class ConstantUniqueMap : public AbstractTypeUser {
 public:
   typedef std::pair<const TypeClass*, ValType> MapKey;
   typedef std::map<MapKey, ConstantClass *> MapTy;
@@ -529,12 +527,7 @@ private:
   ///
   AbstractTypeMapTy AbstractTypeMap;
     
-  /// ValueMapLock - Mutex for this map.
-  sys::SmartMutex<true> ValueMapLock;
-
 public:
-  // NOTE: This function is not locked.  It is the caller's responsibility
-  // to enforce proper synchronization.
   typename MapTy::iterator map_begin() { return Map.begin(); }
   typename MapTy::iterator map_end() { return Map.end(); }
 
@@ -551,8 +544,6 @@ public:
   /// entry and Exists=true.  If not, the iterator points to the newly
   /// inserted entry and returns Exists=false.  Newly inserted entries have
   /// I->second == 0, and should be filled in.
-  /// NOTE: This function is not locked.  It is the caller's responsibility
-  // to enforce proper synchronization.
   typename MapTy::iterator InsertOrGetItem(std::pair<MapKey, ConstantClass *>
                                  &InsertVal,
                                  bool &Exists) {
@@ -619,7 +610,6 @@ public:
   /// getOrCreate - Return the specified constant from the map, creating it if
   /// necessary.
   ConstantClass *getOrCreate(const TypeClass *Ty, const ValType &V) {
-    sys::SmartScopedLock<true> Lock(ValueMapLock);
     MapKey Lookup(Ty, V);
     ConstantClass* Result = 0;
     
@@ -674,7 +664,6 @@ public:
   }
 
   void remove(ConstantClass *CP) {
-    sys::SmartScopedLock<true> Lock(ValueMapLock);
     typename MapTy::iterator I = FindExistingElement(CP);
     assert(I != Map.end() && "Constant not found in constant table!");
     assert(I->second == CP && "Didn't find correct element?");
@@ -694,8 +683,6 @@ public:
   /// MoveConstantToNewSlot - If we are about to change C to be the element
   /// specified by I, update our internal data structures to reflect this
   /// fact.
-  /// NOTE: This function is not locked. It is the responsibility of the
-  /// caller to enforce proper synchronization if using this method.
   void MoveConstantToNewSlot(ConstantClass *C, typename MapTy::iterator I) {
     // First, remove the old location of the specified constant in the map.
     typename MapTy::iterator OldI = FindExistingElement(C);
@@ -725,7 +712,6 @@ public:
   }
     
   void refineAbstractType(const DerivedType *OldTy, const Type *NewTy) {
-    sys::SmartScopedLock<true> Lock(ValueMapLock);
     typename AbstractTypeMapTy::iterator I = AbstractTypeMap.find(OldTy);
 
     assert(I != AbstractTypeMap.end() &&
@@ -778,7 +764,7 @@ public:
   }
 
   void dump() const {
-    DEBUG(errs() << "Constant.cpp: ValueMap\n");
+    DEBUG(errs() << "Constant.cpp: ConstantUniqueMap\n");
   }
 };
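
The InsertOrGetItem contract above (iterator plus an Exists flag, with
I->second == 0 for a fresh slot) is the usual single-lookup uniquing
idiom. A generic sketch with a plain std::map; Key, K and createConstant
are placeholders, not the real types:

    std::map<Key, Constant*> Map;  // stand-in for ConstantUniqueMap's MapTy
    std::pair<std::map<Key, Constant*>::iterator, bool> IB =
      Map.insert(std::make_pair(K, (Constant*)0));
    if (IB.second)                           // freshly inserted, second == 0
      IB.first->second = createConstant(K);  // fill the slot exactly once
    Constant *Result = IB.first->second;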
 
diff --git a/libclamav/c++/llvm/lib/VMCore/Core.cpp b/libclamav/c++/llvm/lib/VMCore/Core.cpp
index 77fd432..449e967 100644
--- a/libclamav/c++/llvm/lib/VMCore/Core.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Core.cpp
@@ -389,6 +389,9 @@ void LLVMDumpValue(LLVMValueRef Val) {
   unwrap(Val)->dump();
 }
 
+void LLVMReplaceAllUsesWith(LLVMValueRef OldVal, LLVMValueRef NewVal) {
+  unwrap(OldVal)->replaceAllUsesWith(unwrap(NewVal));
+}
 
 /*--.. Conversion functions ................................................--*/
 
@@ -399,6 +402,31 @@ void LLVMDumpValue(LLVMValueRef Val) {
 
 LLVM_FOR_EACH_VALUE_SUBCLASS(LLVM_DEFINE_VALUE_CAST)
 
+/*--.. Operations on Uses ..................................................--*/
+LLVMUseIteratorRef LLVMGetFirstUse(LLVMValueRef Val) {
+  Value *V = unwrap(Val);
+  Value::use_iterator I = V->use_begin();
+  if (I == V->use_end())
+    return 0;
+  return wrap(&(I.getUse()));
+}
+
+LLVMUseIteratorRef LLVMGetNextUse(LLVMUseIteratorRef UR) {
+  return wrap(unwrap(UR)->getNext());
+}
+
+LLVMValueRef LLVMGetUser(LLVMUseIteratorRef UR) {
+  return wrap(unwrap(UR)->getUser());
+}
+
+LLVMValueRef LLVMGetUsedValue(LLVMUseIteratorRef UR) {
+  return wrap(unwrap(UR)->get());
+}
+
+/*--.. Operations on Users .................................................--*/
+LLVMValueRef LLVMGetOperand(LLVMValueRef Val, unsigned Index) {
+  return wrap(unwrap<User>(Val)->getOperand(Index));
+}
 
 /*--.. Operations on constants of any type .................................--*/
 
@@ -465,6 +493,14 @@ LLVMValueRef LLVMConstRealOfStringAndSize(LLVMTypeRef RealTy, const char Str[],
   return wrap(ConstantFP::get(unwrap(RealTy), StringRef(Str, SLen)));
 }
 
+unsigned long long LLVMConstIntGetZExtValue(LLVMValueRef ConstantVal) {
+  return unwrap<ConstantInt>(ConstantVal)->getZExtValue();
+}
+
+long long LLVMConstIntGetSExtValue(LLVMValueRef ConstantVal) {
+  return unwrap<ConstantInt>(ConstantVal)->getSExtValue();
+}
+
 /*--.. Operations on composite constants ...................................--*/
 
 LLVMValueRef LLVMConstStringInContext(LLVMContextRef C, const char *Str,
@@ -506,6 +542,10 @@ LLVMValueRef LLVMConstVector(LLVMValueRef *ScalarConstantVals, unsigned Size) {
 
 /*--.. Constant expressions ................................................--*/
 
+LLVMOpcode LLVMGetConstOpcode(LLVMValueRef ConstantVal) {
+  return (LLVMOpcode)unwrap<ConstantExpr>(ConstantVal)->getOpcode();
+}
+
 LLVMValueRef LLVMAlignOf(LLVMTypeRef Ty) {
   return wrap(ConstantExpr::getAlignOf(unwrap(Ty)));
 }
@@ -844,9 +884,10 @@ LLVMValueRef LLVMConstInsertValue(LLVMValueRef AggConstant,
 }
 
 LLVMValueRef LLVMConstInlineAsm(LLVMTypeRef Ty, const char *AsmString, 
-                                const char *Constraints, int HasSideEffects) {
+                                const char *Constraints, int HasSideEffects,
+                                int IsAlignStack) {
   return wrap(InlineAsm::get(dyn_cast<FunctionType>(unwrap(Ty)), AsmString, 
-                             Constraints, HasSideEffects));
+                             Constraints, HasSideEffects, IsAlignStack));
 }
 
 /*--.. Operations on global variables, functions, and aliases (globals) ....--*/
@@ -1027,7 +1068,10 @@ void LLVMDeleteGlobal(LLVMValueRef GlobalVar) {
 }
 
 LLVMValueRef LLVMGetInitializer(LLVMValueRef GlobalVar) {
-  return wrap(unwrap<GlobalVariable>(GlobalVar)->getInitializer());
+  GlobalVariable *GV = unwrap<GlobalVariable>(GlobalVar);
+  if (!GV->hasInitializer())
+    return 0;
+  return wrap(GV->getInitializer());
 }
 
 void LLVMSetInitializer(LLVMValueRef GlobalVar, LLVMValueRef ConstantVal) {
@@ -1149,6 +1193,13 @@ void LLVMRemoveFunctionAttr(LLVMValueRef Fn, LLVMAttribute PA) {
   Func->setAttributes(PALnew);
 }
 
+LLVMAttribute LLVMGetFunctionAttr(LLVMValueRef Fn) {
+  Function *Func = unwrap<Function>(Fn);
+  const AttrListPtr PAL = Func->getAttributes();
+  Attributes attr = PAL.getFnAttributes();
+  return (LLVMAttribute)attr;
+}
+
 /*--.. Operations on parameters ............................................--*/
 
 unsigned LLVMCountParams(LLVMValueRef FnRef) {
@@ -1215,6 +1266,14 @@ void LLVMRemoveAttribute(LLVMValueRef Arg, LLVMAttribute PA) {
   unwrap<Argument>(Arg)->removeAttr(PA);
 }
 
+LLVMAttribute LLVMGetAttribute(LLVMValueRef Arg) {
+  Argument *A = unwrap<Argument>(Arg);
+  Attributes attr = A->getParent()->getAttributes().getParamAttributes(
+    A->getArgNo()+1);
+  return (LLVMAttribute)attr;
+}
+  
+
 void LLVMSetParamAlignment(LLVMValueRef Arg, unsigned align) {
   unwrap<Argument>(Arg)->addAttr(
           Attribute::constructAlignmentFromInt(align));
@@ -1640,12 +1699,24 @@ LLVMValueRef LLVMBuildNot(LLVMBuilderRef B, LLVMValueRef V, const char *Name) {
 
 LLVMValueRef LLVMBuildMalloc(LLVMBuilderRef B, LLVMTypeRef Ty,
                              const char *Name) {
-  return wrap(unwrap(B)->CreateMalloc(unwrap(Ty), 0, Name));
+  const Type* ITy = Type::getInt32Ty(unwrap(B)->GetInsertBlock()->getContext());
+  Constant* AllocSize = ConstantExpr::getSizeOf(unwrap(Ty));
+  AllocSize = ConstantExpr::getTruncOrBitCast(AllocSize, ITy);
+  Instruction* Malloc = CallInst::CreateMalloc(unwrap(B)->GetInsertBlock(), 
+                                               ITy, unwrap(Ty), AllocSize, 
+                                               0, 0, "");
+  return wrap(unwrap(B)->Insert(Malloc, Twine(Name)));
 }
 
 LLVMValueRef LLVMBuildArrayMalloc(LLVMBuilderRef B, LLVMTypeRef Ty,
                                   LLVMValueRef Val, const char *Name) {
-  return wrap(unwrap(B)->CreateMalloc(unwrap(Ty), unwrap(Val), Name));
+  const Type* ITy = Type::getInt32Ty(unwrap(B)->GetInsertBlock()->getContext());
+  Constant* AllocSize = ConstantExpr::getSizeOf(unwrap(Ty));
+  AllocSize = ConstantExpr::getTruncOrBitCast(AllocSize, ITy);
+  Instruction* Malloc = CallInst::CreateMalloc(unwrap(B)->GetInsertBlock(), 
+                                               ITy, unwrap(Ty), AllocSize, 
+                                               unwrap(Val), 0, "");
+  return wrap(unwrap(B)->Insert(Malloc, Twine(Name)));
 }
 
 LLVMValueRef LLVMBuildAlloca(LLVMBuilderRef B, LLVMTypeRef Ty,
@@ -1659,7 +1730,8 @@ LLVMValueRef LLVMBuildArrayAlloca(LLVMBuilderRef B, LLVMTypeRef Ty,
 }
 
 LLVMValueRef LLVMBuildFree(LLVMBuilderRef B, LLVMValueRef PointerVal) {
-  return wrap(unwrap(B)->CreateFree(unwrap(PointerVal)));
+  return wrap(unwrap(B)->Insert(
+     CallInst::CreateFree(unwrap(PointerVal), unwrap(B)->GetInsertBlock())));
 }
 
 
@@ -1789,7 +1861,8 @@ LLVMValueRef LLVMBuildPointerCast(LLVMBuilderRef B, LLVMValueRef Val,
 
 LLVMValueRef LLVMBuildIntCast(LLVMBuilderRef B, LLVMValueRef Val,
                               LLVMTypeRef DestTy, const char *Name) {
-  return wrap(unwrap(B)->CreateIntCast(unwrap(Val), unwrap(DestTy), Name));
+  return wrap(unwrap(B)->CreateIntCast(unwrap(Val), unwrap(DestTy),
+                                       /*isSigned*/true, Name));
 }
 
 LLVMValueRef LLVMBuildFPCast(LLVMBuilderRef B, LLVMValueRef Val,
@@ -1915,13 +1988,15 @@ int LLVMCreateMemoryBufferWithContentsOfFile(const char *Path,
 
 int LLVMCreateMemoryBufferWithSTDIN(LLVMMemoryBufferRef *OutMemBuf,
                                     char **OutMessage) {
-  if (MemoryBuffer *MB = MemoryBuffer::getSTDIN()) {
-    *OutMemBuf = wrap(MB);
-    return 0;
+  MemoryBuffer *MB = MemoryBuffer::getSTDIN();
+  if (!MB->getBufferSize()) {
+    delete MB;
+    *OutMessage = strdup("stdin is empty.");
+    return 1;
   }
-  
-  *OutMessage = strdup("stdin is empty.");
-  return 1;
+
+  *OutMemBuf = wrap(MB);
+  return 0;
 }
 
 void LLVMDisposeMemoryBuffer(LLVMMemoryBufferRef MemBuf) {
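
Usage sketch (not part of the patch) for the use-iteration functions
added to the C API above; Val is an assumed LLVMValueRef. GetFirstUse
returns 0 for a value with no uses, and GetNextUse returns 0 at the end
of the use list, so a plain for loop walks every use:

    LLVMUseIteratorRef U;
    for (U = LLVMGetFirstUse(Val); U; U = LLVMGetNextUse(U)) {
      LLVMValueRef User = LLVMGetUser(U);       /* who uses the value */
      LLVMValueRef Used = LLVMGetUsedValue(U);  /* == Val here */
      LLVMDumpValue(User);
      (void)Used;
    }
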
diff --git a/libclamav/c++/llvm/lib/VMCore/Dominators.cpp b/libclamav/c++/llvm/lib/VMCore/Dominators.cpp
index b49faf8..26c02e0 100644
--- a/libclamav/c++/llvm/lib/VMCore/Dominators.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Dominators.cpp
@@ -322,7 +322,7 @@ DominanceFrontier::calculate(const DominatorTree &DT,
 
 void DominanceFrontierBase::print(raw_ostream &OS, const Module* ) const {
   for (const_iterator I = begin(), E = end(); I != E; ++I) {
-    OS << "  DomFrontier for BB";
+    OS << "  DomFrontier for BB ";
     if (I->first)
       WriteAsOperand(OS, I->first, false);
     else
@@ -332,11 +332,13 @@ void DominanceFrontierBase::print(raw_ostream &OS, const Module* ) const {
     const std::set<BasicBlock*> &BBs = I->second;
     
     for (std::set<BasicBlock*>::const_iterator I = BBs.begin(), E = BBs.end();
-         I != E; ++I)
+         I != E; ++I) {
+      OS << ' ';
       if (*I)
         WriteAsOperand(OS, *I, false);
       else
-        OS << " <<exit node>>";
+        OS << "<<exit node>>";
+    }
     OS << "\n";
   }
 }
diff --git a/libclamav/c++/llvm/lib/VMCore/Function.cpp b/libclamav/c++/llvm/lib/VMCore/Function.cpp
index 8ad885c..6cf2c81 100644
--- a/libclamav/c++/llvm/lib/VMCore/Function.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Function.cpp
@@ -217,7 +217,20 @@ void Function::setParent(Module *parent) {
 void Function::dropAllReferences() {
   for (iterator I = begin(), E = end(); I != E; ++I)
     I->dropAllReferences();
-  BasicBlocks.clear();    // Delete all basic blocks...
+  
+  // Delete all basic blocks.
+  while (!BasicBlocks.empty()) {
+    // If there is still a reference to the block, it must be a 'blockaddress'
+    // constant pointing to it.  Just replace the BlockAddress with undef.
+    BasicBlock *BB = BasicBlocks.begin();
+    if (!BB->use_empty()) {
+      BlockAddress *BA = cast<BlockAddress>(BB->use_back());
+      BA->replaceAllUsesWith(UndefValue::get(BA->getType()));
+      BA->destroyConstant();
+    }
+    
+    BB->eraseFromParent();
+  }
 }
 
 void Function::addAttribute(unsigned i, Attributes attr) {
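
The dropAllReferences() change above matters when deleting a function
whose blocks are still referenced by blockaddress constants. A sketch
(F assumed to be such a Function*); the BlockAddress users are replaced
with undef instead of tripping an assert:

    F->deleteBody();       // runs dropAllReferences() on every block
    F->eraseFromParent();  // safe even with former blockaddress users
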
diff --git a/libclamav/c++/llvm/lib/VMCore/Globals.cpp b/libclamav/c++/llvm/lib/VMCore/Globals.cpp
index d18a201..94bf3de 100644
--- a/libclamav/c++/llvm/lib/VMCore/Globals.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Globals.cpp
@@ -16,7 +16,6 @@
 #include "llvm/GlobalVariable.h"
 #include "llvm/GlobalAlias.h"
 #include "llvm/DerivedTypes.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/Module.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/Support/ErrorHandling.h"
@@ -75,6 +74,7 @@ void GlobalValue::removeDeadConstantUsers() const {
   }
 }
 
+
 /// Override destroyConstant to make sure it doesn't get called on
 /// GlobalValue's because they shouldn't be treated like other constants.
 void GlobalValue::destroyConstant() {
@@ -94,8 +94,7 @@ void GlobalValue::copyAttributesFrom(const GlobalValue *Src) {
 // GlobalVariable Implementation
 //===----------------------------------------------------------------------===//
 
-GlobalVariable::GlobalVariable(LLVMContext &Context, const Type *Ty,
-                               bool constant, LinkageTypes Link,
+GlobalVariable::GlobalVariable(const Type *Ty, bool constant, LinkageTypes Link,
                                Constant *InitVal, const Twine &Name,
                                bool ThreadLocal, unsigned AddressSpace)
   : GlobalValue(PointerType::get(Ty, AddressSpace), 
@@ -172,6 +171,21 @@ void GlobalVariable::replaceUsesOfWithOnConstant(Value *From, Value *To,
   this->setOperand(0, cast<Constant>(To));
 }
 
+void GlobalVariable::setInitializer(Constant *InitVal) {
+  if (InitVal == 0) {
+    if (hasInitializer()) {
+      Op<0>().set(0);
+      NumOperands = 0;
+    }
+  } else {
+    assert(InitVal->getType() == getType()->getElementType() &&
+           "Initializer type must match GlobalVariable type");
+    if (!hasInitializer())
+      NumOperands = 1;
+    Op<0>().set(InitVal);
+  }
+}
+
 /// copyAttributesFrom - copy all additional attributes (those not needed to
 /// create a GlobalVariable) from the GlobalVariable Src to this one.
 void GlobalVariable::copyAttributesFrom(const GlobalValue *Src) {
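
Sketch (not part of the patch) of the new setInitializer() behavior
above; GV is an assumed i32 GlobalVariable* and Ctx its LLVMContext.
Passing null now drops the initializer operand entirely, which is what
lets LLVMGetInitializer() return 0 in the Core.cpp hunk earlier:

    GV->setInitializer(ConstantInt::get(Type::getInt32Ty(Ctx), 42));
    assert(GV->hasInitializer());
    GV->setInitializer(0);          // removes the operand (NumOperands = 0)
    assert(!GV->hasInitializer());
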
diff --git a/libclamav/c++/llvm/lib/VMCore/InlineAsm.cpp b/libclamav/c++/llvm/lib/VMCore/InlineAsm.cpp
index fbd6b90..16de1af 100644
--- a/libclamav/c++/llvm/lib/VMCore/InlineAsm.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/InlineAsm.cpp
@@ -26,18 +26,22 @@ InlineAsm::~InlineAsm() {
 // NOTE: when memoizing the function type, we have to be careful to handle the
 // case when the type gets refined.
 
-InlineAsm *InlineAsm::get(const FunctionType *Ty, const StringRef &AsmString,
-                          const StringRef &Constraints, bool hasSideEffects) {
+InlineAsm *InlineAsm::get(const FunctionType *Ty, StringRef AsmString,
+                          StringRef Constraints, bool hasSideEffects,
+                          bool isAlignStack) {
   // FIXME: memoize!
-  return new InlineAsm(Ty, AsmString, Constraints, hasSideEffects);  
+  return new InlineAsm(Ty, AsmString, Constraints, hasSideEffects, 
+                       isAlignStack);
 }
 
-InlineAsm::InlineAsm(const FunctionType *Ty, const StringRef &asmString,
-                     const StringRef &constraints, bool hasSideEffects)
+InlineAsm::InlineAsm(const FunctionType *Ty, StringRef asmString,
+                     StringRef constraints, bool hasSideEffects,
+                     bool isAlignStack)
   : Value(PointerType::getUnqual(Ty), 
           Value::InlineAsmVal), 
     AsmString(asmString), 
-    Constraints(constraints), HasSideEffects(hasSideEffects) {
+    Constraints(constraints), HasSideEffects(hasSideEffects), 
+    IsAlignStack(isAlignStack) {
 
   // Do various checks on the constraint string and type.
   assert(Verify(Ty, constraints) && "Function type not legal for constraints!");
@@ -50,7 +54,7 @@ const FunctionType *InlineAsm::getFunctionType() const {
 /// Parse - Analyze the specified string (e.g. "==&{eax}") and fill in the
 /// fields in this structure.  If the constraint string is not understood,
 /// return true, otherwise return false.
-bool InlineAsm::ConstraintInfo::Parse(const StringRef &Str,
+bool InlineAsm::ConstraintInfo::Parse(StringRef Str,
                      std::vector<InlineAsm::ConstraintInfo> &ConstraintsSoFar) {
   StringRef::iterator I = Str.begin(), E = Str.end();
   
@@ -145,7 +149,7 @@ bool InlineAsm::ConstraintInfo::Parse(const StringRef &Str,
 }
 
 std::vector<InlineAsm::ConstraintInfo>
-InlineAsm::ParseConstraints(const StringRef &Constraints) {
+InlineAsm::ParseConstraints(StringRef Constraints) {
   std::vector<ConstraintInfo> Result;
   
   // Scan the constraints string.
@@ -179,7 +183,7 @@ InlineAsm::ParseConstraints(const StringRef &Constraints) {
 
 /// Verify - Verify that the specified constraint string is reasonable for the
 /// specified function type, and otherwise validate the constraint string.
-bool InlineAsm::Verify(const FunctionType *Ty, const StringRef &ConstStr) {
+bool InlineAsm::Verify(const FunctionType *Ty, StringRef ConstStr) {
   if (Ty->isVarArg()) return false;
   
   std::vector<ConstraintInfo> Constraints = ParseConstraints(ConstStr);
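
With the extra isAlignStack flag above, creating inline asm now takes
five arguments (sketch; FTy is an assumed FunctionType* taking and
returning nothing, and memoization is still a FIXME):

    InlineAsm *IA = InlineAsm::get(FTy, "nop", "",
                                   /*hasSideEffects=*/true,
                                   /*isAlignStack=*/false);
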
diff --git a/libclamav/c++/llvm/lib/VMCore/Instruction.cpp b/libclamav/c++/llvm/lib/VMCore/Instruction.cpp
index 4df536e..ce253d6 100644
--- a/libclamav/c++/llvm/lib/VMCore/Instruction.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Instruction.cpp
@@ -103,6 +103,7 @@ const char *Instruction::getOpcodeName(unsigned OpCode) {
   case Ret:    return "ret";
   case Br:     return "br";
   case Switch: return "switch";
+  case IndirectBr: return "indirectbr";
   case Invoke: return "invoke";
   case Unwind: return "unwind";
   case Unreachable: return "unreachable";
@@ -127,8 +128,6 @@ const char *Instruction::getOpcodeName(unsigned OpCode) {
   case Xor: return "xor";
 
   // Memory instructions...
-  case Malloc:        return "malloc";
-  case Free:          return "free";
   case Alloca:        return "alloca";
   case Load:          return "load";
   case Store:         return "store";
@@ -309,7 +308,6 @@ bool Instruction::isUsedOutsideOfBlock(const BasicBlock *BB) const {
 bool Instruction::mayReadFromMemory() const {
   switch (getOpcode()) {
   default: return false;
-  case Instruction::Free:
   case Instruction::VAArg:
   case Instruction::Load:
     return true;
@@ -327,7 +325,6 @@ bool Instruction::mayReadFromMemory() const {
 bool Instruction::mayWriteToMemory() const {
   switch (getOpcode()) {
   default: return false;
-  case Instruction::Free:
   case Instruction::Store:
   case Instruction::VAArg:
     return true;
@@ -381,7 +378,7 @@ bool Instruction::isCommutative(unsigned op) {
   }
 }
 
-// Code here matches isMalloc from MallocHelper, which is not in VMCore.
+// Code here matches isMalloc from MemoryBuiltins, which is not in VMCore.
 static bool isMalloc(const Value* I) {
   const CallInst *CI = dyn_cast<CallInst>(I);
   if (!CI) {
@@ -391,15 +388,25 @@ static bool isMalloc(const Value* I) {
     CI = dyn_cast<CallInst>(BCI->getOperand(0));
   }
 
-  if (!CI) return false;
-
-  const Module* M = CI->getParent()->getParent()->getParent();
-  Constant *MallocFunc = M->getFunction("malloc");
+  if (!CI)
+    return false;
+  Function *Callee = CI->getCalledFunction();
+  if (Callee == 0 || !Callee->isDeclaration() || Callee->getName() != "malloc")
+    return false;
 
-  if (CI->getOperand(0) != MallocFunc)
+  // Check malloc prototype.
+  // FIXME: workaround for PR5130, this will be obsolete when a nobuiltin 
+  // attribute will exist.
+  const FunctionType *FTy = Callee->getFunctionType();
+  if (FTy->getNumParams() != 1)
     return false;
+  if (IntegerType *ITy = dyn_cast<IntegerType>(FTy->param_begin()->get())) {
+    if (ITy->getBitWidth() != 32 && ITy->getBitWidth() != 64)
+      return false;
+    return true;
+  }
 
-  return true;
+  return false;
 }
 
 bool Instruction::isSafeToSpeculativelyExecute() const {
@@ -427,7 +434,7 @@ bool Instruction::isSafeToSpeculativelyExecute() const {
   case Load: {
     if (cast<LoadInst>(this)->isVolatile())
       return false;
-    if (isa<AllocationInst>(getOperand(0)) || isMalloc(getOperand(0)))
+    if (isa<AllocaInst>(getOperand(0)) || isMalloc(getOperand(0)))
       return true;
     if (GlobalVariable *GV = dyn_cast<GlobalVariable>(getOperand(0)))
       return !GV->hasExternalWeakLinkage();
@@ -442,11 +449,9 @@ bool Instruction::isSafeToSpeculativelyExecute() const {
                   // overflow-checking arithmetic, etc.)
   case VAArg:
   case Alloca:
-  case Malloc:
   case Invoke:
   case PHI:
   case Store:
-  case Free:
   case Ret:
   case Br:
   case Switch:
@@ -455,3 +460,11 @@ bool Instruction::isSafeToSpeculativelyExecute() const {
     return false; // Misc instructions which have effects
   }
 }
+
+Instruction *Instruction::clone() const {
+  Instruction *New = clone_impl();
+  New->SubclassOptionalData = SubclassOptionalData;
+  if (hasMetadata())
+    getContext().pImpl->TheMetadata.ValueIsCloned(this, New);
+  return New;
+}
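
The clone() refactor above (completed in Instructions.cpp below) moves
the flag and metadata copying into one place; subclasses now only
provide clone_impl(). From the caller's side nothing changes (sketch;
I is an assumed Instruction*):

    Instruction *New = I->clone();  // copies SubclassOptionalData and any
                                    // attached metadata, then returns the
                                    // subclass copy made by clone_impl()
    // clone() leaves New unlinked; the caller inserts it where needed.
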
diff --git a/libclamav/c++/llvm/lib/VMCore/Instructions.cpp b/libclamav/c++/llvm/lib/VMCore/Instructions.cpp
index c2fddfa..b03ee93 100644
--- a/libclamav/c++/llvm/lib/VMCore/Instructions.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Instructions.cpp
@@ -24,7 +24,6 @@
 #include "llvm/Support/CallSite.h"
 #include "llvm/Support/ConstantRange.h"
 #include "llvm/Support/MathExtras.h"
-
 using namespace llvm;
 
 //===----------------------------------------------------------------------===//
@@ -448,21 +447,11 @@ static bool IsConstantOne(Value *val) {
   return isa<ConstantInt>(val) && cast<ConstantInt>(val)->isOne();
 }
 
-static Value *checkArraySize(Value *Amt, const Type *IntPtrTy) {
-  if (!Amt)
-    Amt = ConstantInt::get(IntPtrTy, 1);
-  else {
-    assert(!isa<BasicBlock>(Amt) &&
-           "Passed basic block into malloc size parameter! Use other ctor");
-    assert(Amt->getType() == IntPtrTy &&
-           "Malloc array size is not an intptr!");
-  }
-  return Amt;
-}
-
-static Value *createMalloc(Instruction *InsertBefore, BasicBlock *InsertAtEnd,
-                           const Type *IntPtrTy, const Type *AllocTy,
-                           Value *ArraySize, const Twine &NameStr) {
+static Instruction *createMalloc(Instruction *InsertBefore,
+                                 BasicBlock *InsertAtEnd, const Type *IntPtrTy,
+                                 const Type *AllocTy, Value *AllocSize, 
+                                 Value *ArraySize, Function *MallocF,
+                                 const Twine &Name) {
   assert(((!InsertBefore && InsertAtEnd) || (InsertBefore && !InsertAtEnd)) &&
          "createMalloc needs either InsertBefore or InsertAtEnd");
 
@@ -470,10 +459,16 @@ static Value *createMalloc(Instruction *InsertBefore, BasicBlock *InsertAtEnd,
   //       bitcast (i8* malloc(typeSize)) to type*
   // malloc(type, arraySize) becomes:
   //       bitcast (i8 *malloc(typeSize*arraySize)) to type*
-  Value *AllocSize = ConstantExpr::getSizeOf(AllocTy);
-  AllocSize = ConstantExpr::getTruncOrBitCast(cast<Constant>(AllocSize),
-                                              IntPtrTy);
-  ArraySize = checkArraySize(ArraySize, IntPtrTy);
+  if (!ArraySize)
+    ArraySize = ConstantInt::get(IntPtrTy, 1);
+  else if (ArraySize->getType() != IntPtrTy) {
+    if (InsertBefore)
+      ArraySize = CastInst::CreateIntegerCast(ArraySize, IntPtrTy, false,
+                                              "", InsertBefore);
+    else
+      ArraySize = CastInst::CreateIntegerCast(ArraySize, IntPtrTy, false,
+                                              "", InsertAtEnd);
+  }
 
   if (!IsConstantOne(ArraySize)) {
     if (IsConstantOne(AllocSize)) {
@@ -498,28 +493,38 @@ static Value *createMalloc(Instruction *InsertBefore, BasicBlock *InsertAtEnd,
   // Create the call to Malloc.
   BasicBlock* BB = InsertBefore ? InsertBefore->getParent() : InsertAtEnd;
   Module* M = BB->getParent()->getParent();
-  const Type *BPTy = PointerType::getUnqual(Type::getInt8Ty(BB->getContext()));
-  // prototype malloc as "void *malloc(size_t)"
-  Constant *MallocF = M->getOrInsertFunction("malloc", BPTy, IntPtrTy, NULL);
-  if (!cast<Function>(MallocF)->doesNotAlias(0))
-    cast<Function>(MallocF)->setDoesNotAlias(0);
+  const Type *BPTy = Type::getInt8PtrTy(BB->getContext());
+  Value *MallocFunc = MallocF;
+  if (!MallocFunc)
+    // prototype malloc as "void *malloc(size_t)"
+    MallocFunc = M->getOrInsertFunction("malloc", BPTy, IntPtrTy, NULL);
   const PointerType *AllocPtrType = PointerType::getUnqual(AllocTy);
   CallInst *MCall = NULL;
-  Value    *MCast = NULL;
+  Instruction *Result = NULL;
   if (InsertBefore) {
-    MCall = CallInst::Create(MallocF, AllocSize, "malloccall", InsertBefore);
-    // Create a cast instruction to convert to the right type...
-    MCast = new BitCastInst(MCall, AllocPtrType, NameStr, InsertBefore);
+    MCall = CallInst::Create(MallocFunc, AllocSize, "malloccall", InsertBefore);
+    Result = MCall;
+    if (Result->getType() != AllocPtrType)
+      // Create a cast instruction to convert to the right type...
+      Result = new BitCastInst(MCall, AllocPtrType, Name, InsertBefore);
   } else {
-    MCall = CallInst::Create(MallocF, AllocSize, "malloccall", InsertAtEnd);
-    // Create a cast instruction to convert to the right type...
-    MCast = new BitCastInst(MCall, AllocPtrType, NameStr);
+    MCall = CallInst::Create(MallocFunc, AllocSize, "malloccall");
+    Result = MCall;
+    if (Result->getType() != AllocPtrType) {
+      InsertAtEnd->getInstList().push_back(MCall);
+      // Create a cast instruction to convert to the right type...
+      Result = new BitCastInst(MCall, AllocPtrType, Name);
+    }
   }
   MCall->setTailCall();
+  if (Function *F = dyn_cast<Function>(MallocFunc)) {
+    MCall->setCallingConv(F->getCallingConv());
+    if (!F->doesNotAlias(0)) F->setDoesNotAlias(0);
+  }
   assert(MCall->getType() != Type::getVoidTy(BB->getContext()) &&
          "Malloc has void return type");
 
-  return MCast;
+  return Result;
 }
 
 /// CreateMalloc - Generate the IR for a call to malloc:
@@ -528,10 +533,12 @@ static Value *createMalloc(Instruction *InsertBefore, BasicBlock *InsertAtEnd,
 ///    constant 1.
 /// 2. Call malloc with that argument.
 /// 3. Bitcast the result of the malloc call to the specified type.
-Value *CallInst::CreateMalloc(Instruction *InsertBefore, const Type *IntPtrTy,
-                              const Type *AllocTy, Value *ArraySize,
-                              const Twine &Name) {
-  return createMalloc(InsertBefore, NULL, IntPtrTy, AllocTy, ArraySize, Name);
+Instruction *CallInst::CreateMalloc(Instruction *InsertBefore,
+                                    const Type *IntPtrTy, const Type *AllocTy,
+                                    Value *AllocSize, Value *ArraySize,
+                                    const Twine &Name) {
+  return createMalloc(InsertBefore, NULL, IntPtrTy, AllocTy, AllocSize,
+                      ArraySize, NULL, Name);
 }
 
 /// CreateMalloc - Generate the IR for a call to malloc:
@@ -542,10 +549,58 @@ Value *CallInst::CreateMalloc(Instruction *InsertBefore, const Type *IntPtrTy,
 /// 3. Bitcast the result of the malloc call to the specified type.
 /// Note: This function does not add the bitcast to the basic block, that is the
 /// responsibility of the caller.
-Value *CallInst::CreateMalloc(BasicBlock *InsertAtEnd, const Type *IntPtrTy,
-                              const Type *AllocTy, Value *ArraySize, 
-                              const Twine &Name) {
-  return createMalloc(NULL, InsertAtEnd, IntPtrTy, AllocTy, ArraySize, Name);
+Instruction *CallInst::CreateMalloc(BasicBlock *InsertAtEnd,
+                                    const Type *IntPtrTy, const Type *AllocTy,
+                                    Value *AllocSize, Value *ArraySize, 
+                                    Function *MallocF, const Twine &Name) {
+  return createMalloc(NULL, InsertAtEnd, IntPtrTy, AllocTy, AllocSize,
+                      ArraySize, MallocF, Name);
+}
+
+static Instruction* createFree(Value* Source, Instruction *InsertBefore,
+                               BasicBlock *InsertAtEnd) {
+  assert(((!InsertBefore && InsertAtEnd) || (InsertBefore && !InsertAtEnd)) &&
+         "createFree needs either InsertBefore or InsertAtEnd");
+  assert(isa<PointerType>(Source->getType()) &&
+         "Can not free something of nonpointer type!");
+
+  BasicBlock* BB = InsertBefore ? InsertBefore->getParent() : InsertAtEnd;
+  Module* M = BB->getParent()->getParent();
+
+  const Type *VoidTy = Type::getVoidTy(M->getContext());
+  const Type *IntPtrTy = Type::getInt8PtrTy(M->getContext());
+  // prototype free as "void free(void*)"
+  Value *FreeFunc = M->getOrInsertFunction("free", VoidTy, IntPtrTy, NULL);
+  CallInst* Result = NULL;
+  Value *PtrCast = Source;
+  if (InsertBefore) {
+    if (Source->getType() != IntPtrTy)
+      PtrCast = new BitCastInst(Source, IntPtrTy, "", InsertBefore);
+    Result = CallInst::Create(FreeFunc, PtrCast, "", InsertBefore);
+  } else {
+    if (Source->getType() != IntPtrTy)
+      PtrCast = new BitCastInst(Source, IntPtrTy, "", InsertAtEnd);
+    Result = CallInst::Create(FreeFunc, PtrCast, "");
+  }
+  Result->setTailCall();
+  if (Function *F = dyn_cast<Function>(FreeFunc))
+    Result->setCallingConv(F->getCallingConv());
+
+  return Result;
+}
+
+/// CreateFree - Generate the IR for a call to the builtin free function.
+void CallInst::CreateFree(Value* Source, Instruction *InsertBefore) {
+  createFree(Source, InsertBefore, NULL);
+}
+
+/// CreateFree - Generate the IR for a call to the builtin free function.
+/// Note: This function does not add the call to the basic block; that is the
+/// responsibility of the caller.
+Instruction* CallInst::CreateFree(Value* Source, BasicBlock *InsertAtEnd) {
+  Instruction* FreeCall = createFree(Source, NULL, InsertAtEnd);
+  assert(FreeCall && "CreateFree did not create a CallInst");
+  return FreeCall;
 }
 
 //===----------------------------------------------------------------------===//
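
Usage sketch (not part of the patch) for the reworked malloc/free
helpers above; BB, Ty and IntPtrTy are assumed. With the InsertAtEnd
forms the final bitcast/call is returned unlinked, so the caller
appends it, mirroring the LLVMBuildMalloc change in Core.cpp:

    Constant *AllocSize = ConstantExpr::getSizeOf(Ty);
    AllocSize = ConstantExpr::getTruncOrBitCast(AllocSize, IntPtrTy);
    Instruction *Obj = CallInst::CreateMalloc(BB, IntPtrTy, Ty, AllocSize,
                                              0, 0, "obj");
    BB->getInstList().push_back(Obj);        // caller inserts the result
    Instruction *FreeCall = CallInst::CreateFree(Obj, BB);
    BB->getInstList().push_back(FreeCall);
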
@@ -827,7 +882,7 @@ void BranchInst::setSuccessorV(unsigned idx, BasicBlock *B) {
 
 
 //===----------------------------------------------------------------------===//
-//                        AllocationInst Implementation
+//                        AllocaInst Implementation
 //===----------------------------------------------------------------------===//
 
 static Value *getAISize(LLVMContext &Context, Value *Amt) {
@@ -837,25 +892,59 @@ static Value *getAISize(LLVMContext &Context, Value *Amt) {
     assert(!isa<BasicBlock>(Amt) &&
            "Passed basic block into allocation size parameter! Use other ctor");
     assert(Amt->getType() == Type::getInt32Ty(Context) &&
-           "Malloc/Allocation array size is not a 32-bit integer!");
+           "Allocation array size is not a 32-bit integer!");
   }
   return Amt;
 }
 
-AllocationInst::AllocationInst(const Type *Ty, Value *ArraySize, unsigned iTy,
-                               unsigned Align, const Twine &Name,
-                               Instruction *InsertBefore)
-  : UnaryInstruction(PointerType::getUnqual(Ty), iTy,
+AllocaInst::AllocaInst(const Type *Ty, Value *ArraySize,
+                       const Twine &Name, Instruction *InsertBefore)
+  : UnaryInstruction(PointerType::getUnqual(Ty), Alloca,
+                     getAISize(Ty->getContext(), ArraySize), InsertBefore) {
+  setAlignment(0);
+  assert(Ty != Type::getVoidTy(Ty->getContext()) && "Cannot allocate void!");
+  setName(Name);
+}
+
+AllocaInst::AllocaInst(const Type *Ty, Value *ArraySize,
+                       const Twine &Name, BasicBlock *InsertAtEnd)
+  : UnaryInstruction(PointerType::getUnqual(Ty), Alloca,
+                     getAISize(Ty->getContext(), ArraySize), InsertAtEnd) {
+  setAlignment(0);
+  assert(Ty != Type::getVoidTy(Ty->getContext()) && "Cannot allocate void!");
+  setName(Name);
+}
+
+AllocaInst::AllocaInst(const Type *Ty, const Twine &Name,
+                       Instruction *InsertBefore)
+  : UnaryInstruction(PointerType::getUnqual(Ty), Alloca,
+                     getAISize(Ty->getContext(), 0), InsertBefore) {
+  setAlignment(0);
+  assert(Ty != Type::getVoidTy(Ty->getContext()) && "Cannot allocate void!");
+  setName(Name);
+}
+
+AllocaInst::AllocaInst(const Type *Ty, const Twine &Name,
+                       BasicBlock *InsertAtEnd)
+  : UnaryInstruction(PointerType::getUnqual(Ty), Alloca,
+                     getAISize(Ty->getContext(), 0), InsertAtEnd) {
+  setAlignment(0);
+  assert(Ty != Type::getVoidTy(Ty->getContext()) && "Cannot allocate void!");
+  setName(Name);
+}
+
+AllocaInst::AllocaInst(const Type *Ty, Value *ArraySize, unsigned Align,
+                       const Twine &Name, Instruction *InsertBefore)
+  : UnaryInstruction(PointerType::getUnqual(Ty), Alloca,
                      getAISize(Ty->getContext(), ArraySize), InsertBefore) {
   setAlignment(Align);
   assert(Ty != Type::getVoidTy(Ty->getContext()) && "Cannot allocate void!");
   setName(Name);
 }
 
-AllocationInst::AllocationInst(const Type *Ty, Value *ArraySize, unsigned iTy,
-                               unsigned Align, const Twine &Name,
-                               BasicBlock *InsertAtEnd)
-  : UnaryInstruction(PointerType::getUnqual(Ty), iTy,
+AllocaInst::AllocaInst(const Type *Ty, Value *ArraySize, unsigned Align,
+                       const Twine &Name, BasicBlock *InsertAtEnd)
+  : UnaryInstruction(PointerType::getUnqual(Ty), Alloca,
                      getAISize(Ty->getContext(), ArraySize), InsertAtEnd) {
   setAlignment(Align);
   assert(Ty != Type::getVoidTy(Ty->getContext()) && "Cannot allocate void!");
@@ -863,22 +952,22 @@ AllocationInst::AllocationInst(const Type *Ty, Value *ArraySize, unsigned iTy,
 }
 
 // Out of line virtual method, so the vtable, etc has a home.
-AllocationInst::~AllocationInst() {
+AllocaInst::~AllocaInst() {
 }
 
-void AllocationInst::setAlignment(unsigned Align) {
+void AllocaInst::setAlignment(unsigned Align) {
   assert((Align & (Align-1)) == 0 && "Alignment is not a power of 2!");
   SubclassData = Log2_32(Align) + 1;
   assert(getAlignment() == Align && "Alignment representation error!");
 }
 
-bool AllocationInst::isArrayAllocation() const {
+bool AllocaInst::isArrayAllocation() const {
   if (ConstantInt *CI = dyn_cast<ConstantInt>(getOperand(0)))
     return CI->getZExtValue() != 1;
   return true;
 }
 
-const Type *AllocationInst::getAllocatedType() const {
+const Type *AllocaInst::getAllocatedType() const {
   return getType()->getElementType();
 }
 
@@ -895,28 +984,6 @@ bool AllocaInst::isStaticAlloca() const {
 }
 
 //===----------------------------------------------------------------------===//
-//                             FreeInst Implementation
-//===----------------------------------------------------------------------===//
-
-void FreeInst::AssertOK() {
-  assert(isa<PointerType>(getOperand(0)->getType()) &&
-         "Can not free something of nonpointer type!");
-}
-
-FreeInst::FreeInst(Value *Ptr, Instruction *InsertBefore)
-  : UnaryInstruction(Type::getVoidTy(Ptr->getContext()),
-                     Free, Ptr, InsertBefore) {
-  AssertOK();
-}
-
-FreeInst::FreeInst(Value *Ptr, BasicBlock *InsertAtEnd)
-  : UnaryInstruction(Type::getVoidTy(Ptr->getContext()),
-                     Free, Ptr, InsertAtEnd) {
-  AssertOK();
-}
-
-
-//===----------------------------------------------------------------------===//
 //                           LoadInst Implementation
 //===----------------------------------------------------------------------===//
 
@@ -2769,17 +2836,6 @@ ICmpInst::Predicate ICmpInst::getUnsignedPredicate(Predicate pred) {
   }
 }
 
-bool ICmpInst::isSignedPredicate(Predicate pred) {
-  switch (pred) {
-    default: assert(! "Unknown icmp predicate!");
-    case ICMP_SGT: case ICMP_SLT: case ICMP_SGE: case ICMP_SLE: 
-      return true;
-    case ICMP_EQ:  case ICMP_NE: case ICMP_UGT: case ICMP_ULT: 
-    case ICMP_UGE: case ICMP_ULE:
-      return false;
-  }
-}
-
 /// Initialize a set of values that all satisfy the condition with C.
 ///
 ConstantRange 
@@ -2853,7 +2909,7 @@ bool CmpInst::isUnsigned(unsigned short predicate) {
   }
 }
 
-bool CmpInst::isSigned(unsigned short predicate){
+bool CmpInst::isSigned(unsigned short predicate) {
   switch (predicate) {
     default: return false;
     case ICmpInst::ICMP_SLT: case ICmpInst::ICMP_SLE: case ICmpInst::ICMP_SGT: 
@@ -2879,6 +2935,23 @@ bool CmpInst::isUnordered(unsigned short predicate) {
   }
 }
 
+bool CmpInst::isTrueWhenEqual(unsigned short predicate) {
+  switch(predicate) {
+    default: return false;
+    case ICMP_EQ:   case ICMP_UGE: case ICMP_ULE: case ICMP_SGE: case ICMP_SLE:
+    case FCMP_TRUE: case FCMP_UEQ: case FCMP_UGE: case FCMP_ULE: return true;
+  }
+}
+
+bool CmpInst::isFalseWhenEqual(unsigned short predicate) {
+  switch(predicate) {
+  case ICMP_NE:    case ICMP_UGT: case ICMP_ULT: case ICMP_SGT: case ICMP_SLT:
+  case FCMP_FALSE: case FCMP_ONE: case FCMP_OGT: case FCMP_OLT: return true;
+  default: return false;
+  }
+}
+
+
 //===----------------------------------------------------------------------===//
 //                        SwitchInst Implementation
 //===----------------------------------------------------------------------===//
@@ -3012,376 +3085,272 @@ void SwitchInst::setSuccessorV(unsigned idx, BasicBlock *B) {
   setSuccessor(idx, B);
 }
 
+//===----------------------------------------------------------------------===//
+//                        IndirectBrInst Implementation
+//===----------------------------------------------------------------------===//
+
+void IndirectBrInst::init(Value *Address, unsigned NumDests) {
+  assert(Address && isa<PointerType>(Address->getType()) &&
+         "Address of indirectbr must be a pointer");
+  ReservedSpace = 1+NumDests;
+  NumOperands = 1;
+  OperandList = allocHungoffUses(ReservedSpace);
+  
+  OperandList[0] = Address;
+}
+
+
+/// resizeOperands - This adjusts the length of the operands list according to
+/// the following behavior:
+///   1. If NumOps == 0, grow the operand list in response to a push_back
+///      style of operation.  This doubles the number of operands.
+///   2. If NumOps > NumOperands, reserve space for NumOps operands.
+///   3. If NumOps == NumOperands, trim the reserved space.
+///
+void IndirectBrInst::resizeOperands(unsigned NumOps) {
+  unsigned e = getNumOperands();
+  if (NumOps == 0) {
+    NumOps = e*2;
+  } else if (NumOps*2 > NumOperands) {
+    // No resize needed.
+    if (ReservedSpace >= NumOps) return;
+  } else if (NumOps == NumOperands) {
+    if (ReservedSpace == NumOps) return;
+  } else {
+    return;
+  }
+  
+  ReservedSpace = NumOps;
+  Use *NewOps = allocHungoffUses(NumOps);
+  Use *OldOps = OperandList;
+  for (unsigned i = 0; i != e; ++i)
+    NewOps[i] = OldOps[i];
+  OperandList = NewOps;
+  if (OldOps) Use::zap(OldOps, OldOps + e, true);
+}
+
+IndirectBrInst::IndirectBrInst(Value *Address, unsigned NumCases,
+                               Instruction *InsertBefore)
+: TerminatorInst(Type::getVoidTy(Address->getContext()),Instruction::IndirectBr,
+                 0, 0, InsertBefore) {
+  init(Address, NumCases);
+}
+
+IndirectBrInst::IndirectBrInst(Value *Address, unsigned NumCases,
+                               BasicBlock *InsertAtEnd)
+: TerminatorInst(Type::getVoidTy(Address->getContext()),Instruction::IndirectBr,
+                 0, 0, InsertAtEnd) {
+  init(Address, NumCases);
+}
+
+IndirectBrInst::IndirectBrInst(const IndirectBrInst &IBI)
+  : TerminatorInst(Type::getVoidTy(IBI.getContext()), Instruction::IndirectBr,
+                   allocHungoffUses(IBI.getNumOperands()),
+                   IBI.getNumOperands()) {
+  Use *OL = OperandList, *InOL = IBI.OperandList;
+  for (unsigned i = 0, E = IBI.getNumOperands(); i != E; ++i)
+    OL[i] = InOL[i];
+  SubclassOptionalData = IBI.SubclassOptionalData;
+}
+
+IndirectBrInst::~IndirectBrInst() {
+  dropHungoffUses(OperandList);
+}
+
+/// addDestination - Add a destination.
+///
+void IndirectBrInst::addDestination(BasicBlock *DestBB) {
+  unsigned OpNo = NumOperands;
+  if (OpNo+1 > ReservedSpace)
+    resizeOperands(0);  // Get more space!
+  // Initialize some new operands.
+  assert(OpNo < ReservedSpace && "Growing didn't work!");
+  NumOperands = OpNo+1;
+  OperandList[OpNo] = DestBB;
+}
+
+/// removeDestination - This method removes the specified successor from the
+/// indirectbr instruction.
+void IndirectBrInst::removeDestination(unsigned idx) {
+  assert(idx < getNumOperands()-1 && "Successor index out of range!");
+  
+  unsigned NumOps = getNumOperands();
+  Use *OL = OperandList;
+
+  // Replace this value with the last one.
+  OL[idx+1] = OL[NumOps-1];
+  
+  // Nuke the last value.
+  OL[NumOps-1].set(0);
+  NumOperands = NumOps-1;
+}
+
+BasicBlock *IndirectBrInst::getSuccessorV(unsigned idx) const {
+  return getSuccessor(idx);
+}
+unsigned IndirectBrInst::getNumSuccessorsV() const {
+  return getNumSuccessors();
+}
+void IndirectBrInst::setSuccessorV(unsigned idx, BasicBlock *B) {
+  setSuccessor(idx, B);
+}
+
+//===----------------------------------------------------------------------===//
+//                           clone_impl() implementations
+//===----------------------------------------------------------------------===//
+
 // Define these methods here so vtables don't get emitted into every translation
 // unit that uses these classes.
 
-GetElementPtrInst *GetElementPtrInst::clone() const {
-  GetElementPtrInst *New = new(getNumOperands()) GetElementPtrInst(*this);
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+GetElementPtrInst *GetElementPtrInst::clone_impl() const {
+  return new (getNumOperands()) GetElementPtrInst(*this);
 }
 
-BinaryOperator *BinaryOperator::clone() const {
-  BinaryOperator *New = Create(getOpcode(), Op<0>(), Op<1>());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+BinaryOperator *BinaryOperator::clone_impl() const {
+  return Create(getOpcode(), Op<0>(), Op<1>());
 }
 
-FCmpInst* FCmpInst::clone() const {
-  FCmpInst *New = new FCmpInst(getPredicate(), Op<0>(), Op<1>());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
-}
-ICmpInst* ICmpInst::clone() const {
-  ICmpInst *New = new ICmpInst(getPredicate(), Op<0>(), Op<1>());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+FCmpInst* FCmpInst::clone_impl() const {
+  return new FCmpInst(getPredicate(), Op<0>(), Op<1>());
 }
 
-ExtractValueInst *ExtractValueInst::clone() const {
-  ExtractValueInst *New = new ExtractValueInst(*this);
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
-}
-InsertValueInst *InsertValueInst::clone() const {
-  InsertValueInst *New = new InsertValueInst(*this);
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+ICmpInst* ICmpInst::clone_impl() const {
+  return new ICmpInst(getPredicate(), Op<0>(), Op<1>());
 }
 
-MallocInst *MallocInst::clone() const {
-  MallocInst *New = new MallocInst(getAllocatedType(),
-                                   (Value*)getOperand(0),
-                                   getAlignment());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+ExtractValueInst *ExtractValueInst::clone_impl() const {
+  return new ExtractValueInst(*this);
 }
 
-AllocaInst *AllocaInst::clone() const {
-  AllocaInst *New = new AllocaInst(getAllocatedType(),
-                                   (Value*)getOperand(0),
-                                   getAlignment());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+InsertValueInst *InsertValueInst::clone_impl() const {
+  return new InsertValueInst(*this);
 }
 
-FreeInst *FreeInst::clone() const {
-  FreeInst *New = new FreeInst(getOperand(0));
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+AllocaInst *AllocaInst::clone_impl() const {
+  return new AllocaInst(getAllocatedType(),
+                        (Value*)getOperand(0),
+                        getAlignment());
 }
 
-LoadInst *LoadInst::clone() const {
-  LoadInst *New = new LoadInst(getOperand(0),
-                               Twine(), isVolatile(),
-                               getAlignment());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+LoadInst *LoadInst::clone_impl() const {
+  return new LoadInst(getOperand(0),
+                      Twine(), isVolatile(),
+                      getAlignment());
 }
 
-StoreInst *StoreInst::clone() const {
-  StoreInst *New = new StoreInst(getOperand(0), getOperand(1),
-                                 isVolatile(), getAlignment());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+StoreInst *StoreInst::clone_impl() const {
+  return new StoreInst(getOperand(0), getOperand(1),
+                       isVolatile(), getAlignment());
 }
 
-TruncInst *TruncInst::clone() const {
-  TruncInst *New = new TruncInst(getOperand(0), getType());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+TruncInst *TruncInst::clone_impl() const {
+  return new TruncInst(getOperand(0), getType());
 }
 
-ZExtInst *ZExtInst::clone() const {
-  ZExtInst *New = new ZExtInst(getOperand(0), getType());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+ZExtInst *ZExtInst::clone_impl() const {
+  return new ZExtInst(getOperand(0), getType());
 }
 
-SExtInst *SExtInst::clone() const {
-  SExtInst *New = new SExtInst(getOperand(0), getType());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+SExtInst *SExtInst::clone_impl() const {
+  return new SExtInst(getOperand(0), getType());
 }
 
-FPTruncInst *FPTruncInst::clone() const {
-  FPTruncInst *New = new FPTruncInst(getOperand(0), getType());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+FPTruncInst *FPTruncInst::clone_impl() const {
+  return new FPTruncInst(getOperand(0), getType());
 }
 
-FPExtInst *FPExtInst::clone() const {
-  FPExtInst *New = new FPExtInst(getOperand(0), getType());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+FPExtInst *FPExtInst::clone_impl() const {
+  return new FPExtInst(getOperand(0), getType());
 }
 
-UIToFPInst *UIToFPInst::clone() const {
-  UIToFPInst *New = new UIToFPInst(getOperand(0), getType());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+UIToFPInst *UIToFPInst::clone_impl() const {
+  return new UIToFPInst(getOperand(0), getType());
 }
 
-SIToFPInst *SIToFPInst::clone() const {
-  SIToFPInst *New = new SIToFPInst(getOperand(0), getType());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+SIToFPInst *SIToFPInst::clone_impl() const {
+  return new SIToFPInst(getOperand(0), getType());
 }
 
-FPToUIInst *FPToUIInst::clone() const {
-  FPToUIInst *New = new FPToUIInst(getOperand(0), getType());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+FPToUIInst *FPToUIInst::clone_impl() const {
+  return new FPToUIInst(getOperand(0), getType());
 }
 
-FPToSIInst *FPToSIInst::clone() const {
-  FPToSIInst *New = new FPToSIInst(getOperand(0), getType());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+FPToSIInst *FPToSIInst::clone_impl() const {
+  return new FPToSIInst(getOperand(0), getType());
 }
 
-PtrToIntInst *PtrToIntInst::clone() const {
-  PtrToIntInst *New = new PtrToIntInst(getOperand(0), getType());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+PtrToIntInst *PtrToIntInst::clone_impl() const {
+  return new PtrToIntInst(getOperand(0), getType());
 }
 
-IntToPtrInst *IntToPtrInst::clone() const {
-  IntToPtrInst *New = new IntToPtrInst(getOperand(0), getType());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+IntToPtrInst *IntToPtrInst::clone_impl() const {
+  return new IntToPtrInst(getOperand(0), getType());
 }
 
-BitCastInst *BitCastInst::clone() const {
-  BitCastInst *New = new BitCastInst(getOperand(0), getType());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+BitCastInst *BitCastInst::clone_impl() const {
+  return new BitCastInst(getOperand(0), getType());
 }
 
-CallInst *CallInst::clone() const {
-  CallInst *New = new(getNumOperands()) CallInst(*this);
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+CallInst *CallInst::clone_impl() const {
+  return new(getNumOperands()) CallInst(*this);
 }
 
-SelectInst *SelectInst::clone() const {
-  SelectInst *New = SelectInst::Create(getOperand(0),
-                                       getOperand(1),
-                                       getOperand(2));
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+SelectInst *SelectInst::clone_impl() const {
+  return SelectInst::Create(getOperand(0), getOperand(1), getOperand(2));
 }
 
-VAArgInst *VAArgInst::clone() const {
-  VAArgInst *New = new VAArgInst(getOperand(0), getType());
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+VAArgInst *VAArgInst::clone_impl() const {
+  return new VAArgInst(getOperand(0), getType());
 }
 
-ExtractElementInst *ExtractElementInst::clone() const {
-  ExtractElementInst *New = ExtractElementInst::Create(getOperand(0),
-                                                       getOperand(1));
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+ExtractElementInst *ExtractElementInst::clone_impl() const {
+  return ExtractElementInst::Create(getOperand(0), getOperand(1));
 }
 
-InsertElementInst *InsertElementInst::clone() const {
-  InsertElementInst *New = InsertElementInst::Create(getOperand(0),
-                                                     getOperand(1),
-                                                     getOperand(2));
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+InsertElementInst *InsertElementInst::clone_impl() const {
+  return InsertElementInst::Create(getOperand(0),
+                                   getOperand(1),
+                                   getOperand(2));
 }
 
-ShuffleVectorInst *ShuffleVectorInst::clone() const {
-  ShuffleVectorInst *New = new ShuffleVectorInst(getOperand(0),
-                                                 getOperand(1),
-                                                 getOperand(2));
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+ShuffleVectorInst *ShuffleVectorInst::clone_impl() const {
+  return new ShuffleVectorInst(getOperand(0),
+                               getOperand(1),
+                               getOperand(2));
 }
 
-PHINode *PHINode::clone() const {
-  PHINode *New = new PHINode(*this);
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+PHINode *PHINode::clone_impl() const {
+  return new PHINode(*this);
 }
 
-ReturnInst *ReturnInst::clone() const {
-  ReturnInst *New = new(getNumOperands()) ReturnInst(*this);
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+ReturnInst *ReturnInst::clone_impl() const {
+  return new(getNumOperands()) ReturnInst(*this);
 }
 
-BranchInst *BranchInst::clone() const {
+BranchInst *BranchInst::clone_impl() const {
   unsigned Ops(getNumOperands());
-  BranchInst *New = new(Ops, Ops == 1) BranchInst(*this);
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+  return new(Ops, Ops == 1) BranchInst(*this);
 }
 
-SwitchInst *SwitchInst::clone() const {
-  SwitchInst *New = new SwitchInst(*this);
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+SwitchInst *SwitchInst::clone_impl() const {
+  return new SwitchInst(*this);
 }
 
-InvokeInst *InvokeInst::clone() const {
-  InvokeInst *New = new(getNumOperands()) InvokeInst(*this);
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata()) {
-    LLVMContext &Context = getContext();
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  }
-  return New;
+IndirectBrInst *IndirectBrInst::clone_impl() const {
+  return new IndirectBrInst(*this);
+}
+
+InvokeInst *InvokeInst::clone_impl() const {
+  return new(getNumOperands()) InvokeInst(*this);
 }
 
-UnwindInst *UnwindInst::clone() const {
+UnwindInst *UnwindInst::clone_impl() const {
   LLVMContext &Context = getContext();
-  UnwindInst *New = new UnwindInst(Context);
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata())
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  return New;
+  return new UnwindInst(Context);
 }
 
-UnreachableInst *UnreachableInst::clone() const {
+UnreachableInst *UnreachableInst::clone_impl() const {
   LLVMContext &Context = getContext();
-  UnreachableInst *New = new UnreachableInst(Context);
-  New->SubclassOptionalData = SubclassOptionalData;
-  if (hasMetadata())
-    Context.pImpl->TheMetadata.ValueIsCloned(this, New);
-  return New;
+  return new UnreachableInst(Context);
 }
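
The clone() bodies removed above each repeated the same bookkeeping: copy
SubclassOptionalData, then re-register any attached metadata. The new
clone_impl() overrides only allocate the copy, so the shared work
presumably moves into a single non-virtual Instruction::clone() caller
that does not appear in this hunk. A minimal self-contained sketch of
that split, using stand-in class and member names rather than LLVM's
actual declarations:

    #include <iostream>

    class Inst {
    public:
      Inst() : SubclassOptionalData(0) {}
      virtual ~Inst() {}
      // The shared bookkeeping happens once, here, for every subclass.
      Inst *clone() const {
        Inst *New = clone_impl();                         // subclass allocates
        New->SubclassOptionalData = SubclassOptionalData; // copied in one place
        return New;
      }
      unsigned char SubclassOptionalData;
    protected:
      virtual Inst *clone_impl() const = 0;  // the only per-class part
    };

    class AddInst : public Inst {
    protected:
      virtual Inst *clone_impl() const { return new AddInst(*this); }
    };

    int main() {
      AddInst A;
      A.SubclassOptionalData = 3;
      Inst *B = A.clone();
      std::cout << (int)B->SubclassOptionalData << "\n"; // prints 3
      delete B;
      return 0;
    }
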
diff --git a/libclamav/c++/llvm/lib/VMCore/LLVMContext.cpp b/libclamav/c++/llvm/lib/VMCore/LLVMContext.cpp
index 39ed7ed..3b4a1a3 100644
--- a/libclamav/c++/llvm/lib/VMCore/LLVMContext.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/LLVMContext.cpp
@@ -45,32 +45,6 @@ GetElementPtrConstantExpr::GetElementPtrConstantExpr
     OperandList[i+1] = IdxList[i];
 }
 
-bool LLVMContext::RemoveDeadMetadata() {
-  std::vector<WeakVH> DeadMDNodes;
-  bool Changed = false;
-  while (1) {
-
-    for (FoldingSet<MDNode>::iterator 
-           I = pImpl->MDNodeSet.begin(),
-           E = pImpl->MDNodeSet.end(); I != E; ++I) {
-      MDNode *N = &(*I);
-      if (N->use_empty()) 
-        DeadMDNodes.push_back(WeakVH(N));
-    }
-    
-    if (DeadMDNodes.empty())
-      return Changed;
-
-    while (!DeadMDNodes.empty()) {
-      Value *V = DeadMDNodes.back(); DeadMDNodes.pop_back();
-      if (const MDNode *N = dyn_cast_or_null<MDNode>(V))
-        if (N->use_empty())
-          delete N;
-    }
-  }
-  return Changed;
-}
-
 MetadataContext &LLVMContext::getMetadata() {
   return pImpl->TheMetadata;
 }
diff --git a/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h b/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h
index 83888c3..1c3244b 100644
--- a/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h
+++ b/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h
@@ -22,8 +22,6 @@
 #include "llvm/Metadata.h"
 #include "llvm/Constants.h"
 #include "llvm/DerivedTypes.h"
-#include "llvm/System/Mutex.h"
-#include "llvm/System/RWMutex.h"
 #include "llvm/Assembly/Writer.h"
 #include "llvm/ADT/APFloat.h"
 #include "llvm/ADT/APInt.h"
@@ -96,7 +94,6 @@ struct DenseMapAPFloatKeyInfo {
 
 class LLVMContextImpl {
 public:
-  sys::SmartRWMutex<true> ConstantsLock;
   typedef DenseMap<DenseMapAPIntKeyInfo::KeyTy, ConstantInt*, 
                          DenseMapAPIntKeyInfo> IntMapTy;
   IntMapTy IntConstants;
@@ -109,41 +106,32 @@ public:
   
   FoldingSet<MDNode> MDNodeSet;
   
-  ValueMap<char, Type, ConstantAggregateZero> AggZeroConstants;
+  ConstantUniqueMap<char, Type, ConstantAggregateZero> AggZeroConstants;
 
-  typedef ValueMap<std::vector<Constant*>, ArrayType, 
+  typedef ConstantUniqueMap<std::vector<Constant*>, ArrayType,
     ConstantArray, true /*largekey*/> ArrayConstantsTy;
   ArrayConstantsTy ArrayConstants;
   
-  typedef ValueMap<std::vector<Constant*>, StructType,
-                   ConstantStruct, true /*largekey*/> StructConstantsTy;
+  typedef ConstantUniqueMap<std::vector<Constant*>, StructType,
+    ConstantStruct, true /*largekey*/> StructConstantsTy;
   StructConstantsTy StructConstants;
   
-  typedef ValueMap<std::vector<Constant*>, VectorType,
-                   ConstantVector> VectorConstantsTy;
+  typedef ConstantUniqueMap<std::vector<Constant*>, VectorType,
+                            ConstantVector> VectorConstantsTy;
   VectorConstantsTy VectorConstants;
   
-  ValueMap<char, PointerType, ConstantPointerNull> NullPtrConstants;
+  ConstantUniqueMap<char, PointerType, ConstantPointerNull> NullPtrConstants;
   
-  ValueMap<char, Type, UndefValue> UndefValueConstants;
+  ConstantUniqueMap<char, Type, UndefValue> UndefValueConstants;
   
-  ValueMap<ExprMapKeyType, Type, ConstantExpr> ExprConstants;
+  DenseMap<std::pair<Function*, BasicBlock*>, BlockAddress*> BlockAddresses;
+  ConstantUniqueMap<ExprMapKeyType, Type, ConstantExpr> ExprConstants;
   
   ConstantInt *TheTrueVal;
   ConstantInt *TheFalseVal;
   
-  // Lock used for guarding access to the leak detector
-  sys::SmartMutex<true> LLVMObjectsLock;
   LeakDetectorImpl<Value> LLVMObjects;
   
-  // Lock used for guarding access to the type maps.
-  sys::SmartMutex<true> TypeMapLock;
-  
-  // Recursive lock used for guarding access to AbstractTypeUsers.
-  // NOTE: The true template parameter means this will no-op when we're not in
-  // multithreaded mode.
-  sys::SmartMutex<true> AbstractTypeUsersLock;
-
   // Basic type instances.
   const Type VoidTy;
   const Type LabelTy;
@@ -204,9 +192,6 @@ public:
     AggZeroConstants.freeConstants();
     NullPtrConstants.freeConstants();
     UndefValueConstants.freeConstants();
-    for (FoldingSet<MDNode>::iterator I = MDNodeSet.begin(), 
-           E = MDNodeSet.end(); I != E; ++I)
-      I->dropAllReferences();
     for (IntMapTy::iterator I = IntConstants.begin(), E = IntConstants.end(); 
          I != E; ++I) {
       if (I->second->use_empty())
@@ -217,6 +202,7 @@ public:
       if (I->second->use_empty())
         delete I->second;
     }
+    MDNodeSet.clear();
   }
 };
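
The ValueMap to ConstantUniqueMap renames above preserve the same
contract: one canonical object per (type, key), which is what lets LLVM
compare constants by pointer identity. A toy illustration of that
uniquing, with std::map and a stand-in Constant type rather than the
real ConstantUniqueMap:

    #include <iostream>
    #include <map>
    #include <utility>

    struct Constant {
      int Val;
      Constant(int V) : Val(V) {}
    };

    // (type id, value) -> the single canonical Constant for that key.
    std::map<std::pair<int, int>, Constant *> UniqueMap;

    Constant *getConstant(int TypeId, int Val) {
      Constant *&Slot = UniqueMap[std::make_pair(TypeId, Val)];
      if (!Slot) Slot = new Constant(Val);  // created once per key
      return Slot;
    }

    int main() {
      Constant *A = getConstant(32, 7);
      Constant *B = getConstant(32, 7);
      std::cout << (A == B ? "same object" : "different") << "\n";
      return 0;                             // prints "same object"
    }
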
 
diff --git a/libclamav/c++/llvm/lib/VMCore/LeakDetector.cpp b/libclamav/c++/llvm/lib/VMCore/LeakDetector.cpp
index 5ebd4f5..a44f61d 100644
--- a/libclamav/c++/llvm/lib/VMCore/LeakDetector.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/LeakDetector.cpp
@@ -36,7 +36,6 @@ void LeakDetector::addGarbageObjectImpl(void *Object) {
 
 void LeakDetector::addGarbageObjectImpl(const Value *Object) {
   LLVMContextImpl *pImpl = Object->getContext().pImpl;
-  sys::SmartScopedLock<true> Lock(pImpl->LLVMObjectsLock);
   pImpl->LLVMObjects.addGarbage(Object);
 }
 
@@ -47,7 +46,6 @@ void LeakDetector::removeGarbageObjectImpl(void *Object) {
 
 void LeakDetector::removeGarbageObjectImpl(const Value *Object) {
   LLVMContextImpl *pImpl = Object->getContext().pImpl;
-  sys::SmartScopedLock<true> Lock(pImpl->LLVMObjectsLock);
   pImpl->LLVMObjects.removeGarbage(Object);
 }
 
@@ -55,7 +53,6 @@ void LeakDetector::checkForGarbageImpl(LLVMContext &Context,
                                        const std::string &Message) {
   LLVMContextImpl *pImpl = Context.pImpl;
   sys::SmartScopedLock<true> Lock(*ObjectsLock);
-  sys::SmartScopedLock<true> CLock(pImpl->LLVMObjectsLock);
   
   Objects->setName("GENERIC");
   pImpl->LLVMObjects.setName("LLVM");
diff --git a/libclamav/c++/llvm/lib/VMCore/LeaksContext.h b/libclamav/c++/llvm/lib/VMCore/LeaksContext.h
index b0c3a14..bd10a47 100644
--- a/libclamav/c++/llvm/lib/VMCore/LeaksContext.h
+++ b/libclamav/c++/llvm/lib/VMCore/LeaksContext.h
@@ -14,7 +14,6 @@
 
 #include "llvm/Value.h"
 #include "llvm/ADT/SmallPtrSet.h"
-#include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
 template <class T>
diff --git a/libclamav/c++/llvm/lib/VMCore/Metadata.cpp b/libclamav/c++/llvm/lib/VMCore/Metadata.cpp
index 2f2345f..854f86c 100644
--- a/libclamav/c++/llvm/lib/VMCore/Metadata.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Metadata.cpp
@@ -16,73 +16,51 @@
 #include "llvm/LLVMContext.h"
 #include "llvm/Module.h"
 #include "llvm/Instruction.h"
+#include "llvm/ADT/DenseMap.h"
+#include "llvm/ADT/StringMap.h"
 #include "SymbolTableListTraitsImpl.h"
 using namespace llvm;
 
 //===----------------------------------------------------------------------===//
-//MetadataBase implementation
+// MetadataBase implementation.
 //
 
-/// resizeOperands - Metadata keeps track of other metadata uses using 
-/// OperandList. Resize this list to hold anticipated number of metadata
-/// operands.
-void MetadataBase::resizeOperands(unsigned NumOps) {
-  unsigned e = getNumOperands();
-  if (NumOps == 0) {
-    NumOps = e*2;
-    if (NumOps < 2) NumOps = 2;  
-  } else if (NumOps > NumOperands) {
-    // No resize needed.
-    if (ReservedSpace >= NumOps) return;
-  } else if (NumOps == NumOperands) {
-    if (ReservedSpace == NumOps) return;
-  } else {
-    return;
-  }
-
-  ReservedSpace = NumOps;
-  Use *OldOps = OperandList;
-  Use *NewOps = allocHungoffUses(NumOps);
-  std::copy(OldOps, OldOps + e, NewOps);
-  OperandList = NewOps;
-  if (OldOps) Use::zap(OldOps, OldOps + e, true);
-}
 //===----------------------------------------------------------------------===//
-//MDString implementation
+// MDString implementation.
 //
-MDString *MDString::get(LLVMContext &Context, const StringRef &Str) {
+MDString *MDString::get(LLVMContext &Context, StringRef Str) {
   LLVMContextImpl *pImpl = Context.pImpl;
-  sys::SmartScopedWriter<true> Writer(pImpl->ConstantsLock);
   StringMapEntry<MDString *> &Entry = 
     pImpl->MDStringCache.GetOrCreateValue(Str);
   MDString *&S = Entry.getValue();
-  if (!S) S = new MDString(Context, Entry.getKeyData(),
-                           Entry.getKeyLength());
+  if (!S) S = new MDString(Context, Entry.getKey());
+  return S;
+}
 
+MDString *MDString::get(LLVMContext &Context, const char *Str) {
+  LLVMContextImpl *pImpl = Context.pImpl;
+  StringMapEntry<MDString *> &Entry = 
+    pImpl->MDStringCache.GetOrCreateValue(Str ? StringRef(Str) : StringRef());
+  MDString *&S = Entry.getValue();
+  if (!S) S = new MDString(Context, Entry.getKey());
   return S;
 }
 
 //===----------------------------------------------------------------------===//
-//MDNode implementation
+// MDNode implementation.
 //
-MDNode::MDNode(LLVMContext &C, Value*const* Vals, unsigned NumVals)
+MDNode::MDNode(LLVMContext &C, Value *const *Vals, unsigned NumVals)
   : MetadataBase(Type::getMetadataTy(C), Value::MDNodeVal) {
-  NumOperands = 0;
-  resizeOperands(NumVals);
-  for (unsigned i = 0; i != NumVals; ++i) {
-    // Only record metadata uses.
-    if (MetadataBase *MB = dyn_cast_or_null<MetadataBase>(Vals[i]))
-      OperandList[NumOperands++] = MB;
-    else if(Vals[i] && 
-            Vals[i]->getType()->getTypeID() == Type::MetadataTyID)
-      OperandList[NumOperands++] = Vals[i];
-    Node.push_back(ElementVH(Vals[i], this));
-  }
+  NodeSize = NumVals;
+  Node = new ElementVH[NodeSize];
+  ElementVH *Ptr = Node;
+  for (unsigned i = 0; i != NumVals; ++i) 
+    *Ptr++ = ElementVH(Vals[i], this);
 }
 
 void MDNode::Profile(FoldingSetNodeID &ID) const {
-  for (const_elem_iterator I = elem_begin(), E = elem_end(); I != E; ++I)
-    ID.AddPointer(*I);
+  for (unsigned i = 0, e = getNumElements(); i != e; ++i)
+    ID.AddPointer(getElement(i));
 }
 
 MDNode *MDNode::get(LLVMContext &Context, Value*const* Vals, unsigned NumVals) {
@@ -91,37 +69,22 @@ MDNode *MDNode::get(LLVMContext &Context, Value*const* Vals, unsigned NumVals) {
   for (unsigned i = 0; i != NumVals; ++i)
     ID.AddPointer(Vals[i]);
 
-  pImpl->ConstantsLock.reader_acquire();
   void *InsertPoint;
   MDNode *N = pImpl->MDNodeSet.FindNodeOrInsertPos(ID, InsertPoint);
-  pImpl->ConstantsLock.reader_release();
-  
   if (!N) {
-    sys::SmartScopedWriter<true> Writer(pImpl->ConstantsLock);
-    N = pImpl->MDNodeSet.FindNodeOrInsertPos(ID, InsertPoint);
-    if (!N) {
-      // InsertPoint will have been set by the FindNodeOrInsertPos call.
-      N = new MDNode(Context, Vals, NumVals);
-      pImpl->MDNodeSet.InsertNode(N, InsertPoint);
-    }
+    // InsertPoint will have been set by the FindNodeOrInsertPos call.
+    N = new MDNode(Context, Vals, NumVals);
+    pImpl->MDNodeSet.InsertNode(N, InsertPoint);
   }
-
   return N;
 }
 
-/// dropAllReferences - Remove all uses and clear node vector.
-void MDNode::dropAllReferences() {
-  User::dropAllReferences();
-  Node.clear();
-}
-
+/// ~MDNode - Destroy MDNode.
 MDNode::~MDNode() {
-  {
-    LLVMContextImpl *pImpl = getType()->getContext().pImpl;
-    sys::SmartScopedWriter<true> Writer(pImpl->ConstantsLock);
-    pImpl->MDNodeSet.RemoveNode(this);
-  }
-  dropAllReferences();
+  LLVMContextImpl *pImpl = getType()->getContext().pImpl;
+  pImpl->MDNodeSet.RemoveNode(this);
+  delete [] Node;
+  Node = NULL;
 }
 
 // Replace value from this node's element list.
@@ -136,9 +99,8 @@ void MDNode::replaceElement(Value *From, Value *To) {
   // From in this MDNode's element list.
   SmallVector<unsigned, 4> Indexes;
   unsigned Index = 0;
-  for (SmallVector<ElementVH, 4>::iterator I = Node.begin(),
-         E = Node.end(); I != E; ++I, ++Index) {
-    Value *V = *I;
+  for (unsigned i = 0, e = getNumElements(); i != e; ++i, ++Index) {
+    Value *V = getElement(i);
     if (V && V == From) 
       Indexes.push_back(Index);
   }
@@ -147,31 +109,7 @@ void MDNode::replaceElement(Value *From, Value *To) {
     return;
 
   // Remove "this" from the context map. 
-  {
-    sys::SmartScopedWriter<true> Writer(pImpl->ConstantsLock);
-    pImpl->MDNodeSet.RemoveNode(this);
-  }
-
-  // MDNode only lists metadata elements in operand list, because MDNode
-  // used by MDNode is considered a valid use. However on the side, MDNode
-  // using a non-metadata value is not considered a "use" of non-metadata
-  // value.
-  SmallVector<unsigned, 4> OpIndexes;
-  unsigned OpIndex = 0;
-  for (User::op_iterator OI = op_begin(), OE = op_end();
-       OI != OE; ++OI, OpIndex++) {
-    if (*OI == From)
-      OpIndexes.push_back(OpIndex);
-  }
-  if (MetadataBase *MDTo = dyn_cast_or_null<MetadataBase>(To)) {
-    for (SmallVector<unsigned, 4>::iterator OI = OpIndexes.begin(),
-           OE = OpIndexes.end(); OI != OE; ++OI)
-      setOperand(*OI, MDTo);
-  } else {
-    for (SmallVector<unsigned, 4>::iterator OI = OpIndexes.begin(),
-           OE = OpIndexes.end(); OI != OE; ++OI)
-      setOperand(*OI, 0);
-  }
+  pImpl->MDNodeSet.RemoveNode(this);
 
   // Replace From element(s) in place.
   for (SmallVector<unsigned, 4>::iterator I = Indexes.begin(), E = Indexes.end(); 
@@ -186,10 +124,8 @@ void MDNode::replaceElement(Value *From, Value *To) {
   // node with updated "this" node.
   FoldingSetNodeID ID;
   Profile(ID);
-  pImpl->ConstantsLock.reader_acquire();
   void *InsertPoint;
   MDNode *N = pImpl->MDNodeSet.FindNodeOrInsertPos(ID, InsertPoint);
-  pImpl->ConstantsLock.reader_release();
 
   if (N) {
     N->replaceAllUsesWith(this);
@@ -197,39 +133,32 @@ void MDNode::replaceElement(Value *From, Value *To) {
     N = 0;
   }
 
-  {
-    sys::SmartScopedWriter<true> Writer(pImpl->ConstantsLock);
-    N = pImpl->MDNodeSet.FindNodeOrInsertPos(ID, InsertPoint);
-    if (!N) {
-      // InsertPoint will have been set by the FindNodeOrInsertPos call.
-      N = this;
-      pImpl->MDNodeSet.InsertNode(N, InsertPoint);
-    }
+  N = pImpl->MDNodeSet.FindNodeOrInsertPos(ID, InsertPoint);
+  if (!N) {
+    // InsertPoint will have been set by the FindNodeOrInsertPos call.
+    N = this;
+    pImpl->MDNodeSet.InsertNode(N, InsertPoint);
   }
 }
 
 //===----------------------------------------------------------------------===//
-//NamedMDNode implementation
+// NamedMDNode implementation.
 //
 NamedMDNode::NamedMDNode(LLVMContext &C, const Twine &N,
-                         MetadataBase*const* MDs, 
+                         MetadataBase *const *MDs, 
                          unsigned NumMDs, Module *ParentModule)
   : MetadataBase(Type::getMetadataTy(C), Value::NamedMDNodeVal), Parent(0) {
   setName(N);
-  NumOperands = 0;
-  resizeOperands(NumMDs);
 
-  for (unsigned i = 0; i != NumMDs; ++i) {
-    if (MDs[i])
-      OperandList[NumOperands++] = MDs[i];
-    Node.push_back(WeakMetadataVH(MDs[i]));
-  }
+  for (unsigned i = 0; i != NumMDs; ++i)
+    Node.push_back(TrackingVH<MetadataBase>(MDs[i]));
+
   if (ParentModule)
     ParentModule->getNamedMDList().push_back(this);
 }
 
 NamedMDNode *NamedMDNode::Create(const NamedMDNode *NMD, Module *M) {
-  assert (NMD && "Invalid source NamedMDNode!");
+  assert(NMD && "Invalid source NamedMDNode!");
   SmallVector<MetadataBase *, 4> Elems;
   for (unsigned i = 0, e = NMD->getNumElements(); i != e; ++i)
     Elems.push_back(NMD->getElement(i));
@@ -245,7 +174,6 @@ void NamedMDNode::eraseFromParent() {
 
 /// dropAllReferences - Remove all uses and clear node vector.
 void NamedMDNode::dropAllReferences() {
-  User::dropAllReferences();
   Node.clear();
 }
 
@@ -254,65 +182,100 @@ NamedMDNode::~NamedMDNode() {
 }
 
 //===----------------------------------------------------------------------===//
-//Metadata implementation
+// MetadataContextImpl implementation.
 //
+namespace llvm {
+class MetadataContextImpl {
+public:
+  typedef std::pair<unsigned, TrackingVH<MDNode> > MDPairTy;
+  typedef SmallVector<MDPairTy, 2> MDMapTy;
+  typedef DenseMap<const Instruction *, MDMapTy> MDStoreTy;
+  friend class BitcodeReader;
+private:
+
+  /// MetadataStore - Collection of metadata used in this context.
+  MDStoreTy MetadataStore;
+
+  /// MDHandlerNames - Map to hold metadata handler names.
+  StringMap<unsigned> MDHandlerNames;
+
+public:
+  /// registerMDKind - Register a new metadata kind and return its ID.
+  /// A metadata kind can be registered only once. 
+  unsigned registerMDKind(StringRef Name);
+
+  /// getMDKind - Return metadata kind. If the requested metadata kind
+  /// is not registered then return 0.
+  unsigned getMDKind(StringRef Name) const;
+
+  /// getMD - Get the metadata of given kind attached to an Instruction.
+  /// If the metadata is not found then return 0.
+  MDNode *getMD(unsigned Kind, const Instruction *Inst);
+
+  /// getMDs - Get the metadata attached to an Instruction.
+  void getMDs(const Instruction *Inst, SmallVectorImpl<MDPairTy> &MDs) const;
+
+  /// addMD - Attach the metadata of given kind to an Instruction.
+  void addMD(unsigned Kind, MDNode *Node, Instruction *Inst);
+  
+  /// removeMD - Remove metadata of given kind attached to an instruction.
+  void removeMD(unsigned Kind, Instruction *Inst);
+  
+  /// removeAllMetadata - Remove all metadata attached to an instruction.
+  void removeAllMetadata(Instruction *Inst);
+
+  /// copyMD - If metadata is attached to Instruction In1 then attach
+  /// the same metadata to In2.
+  void copyMD(Instruction *In1, Instruction *In2);
+
+  /// getHandlerNames - Populate client-supplied smallvector using custom
+  /// metadata name and ID.
+  void getHandlerNames(SmallVectorImpl<std::pair<unsigned, StringRef> > &) const;
+
+  /// ValueIsDeleted - This handler is used to update metadata store
+  /// when a value is deleted.
+  void ValueIsDeleted(const Value *) {}
+  void ValueIsDeleted(Instruction *Inst) {
+    removeAllMetadata(Inst);
+  }
+  void ValueIsRAUWd(Value *V1, Value *V2);
 
-/// RegisterMDKind - Register a new metadata kind and return its ID.
-/// A metadata kind can be registered only once. 
-unsigned MetadataContext::RegisterMDKind(const char *Name) {
-  assert (validName(Name) && "Invalid custome metadata name!");
-  unsigned Count = MDHandlerNames.size();
-  assert(MDHandlerNames.find(Name) == MDHandlerNames.end() 
-         && "Already registered MDKind!");
-  MDHandlerNames[Name] = Count + 1;
-  return Count + 1;
+  /// ValueIsCloned - This handler is used to update metadata store
+  /// when In1 is cloned to create In2.
+  void ValueIsCloned(const Instruction *In1, Instruction *In2);
+};
 }
 
-/// validName - Return true if Name is a valid custom metadata handler name.
-bool MetadataContext::validName(const char *Name) {
-  if (!Name)
-    return false;
-
-  if (!isalpha(*Name))
-    return false;
-
-  unsigned Length = strlen(Name);  
-  unsigned Count = 1;
-  ++Name;
-  while (Name &&
-         (isalnum(*Name) || *Name == '_' || *Name == '-' || *Name == '.')) {
-    ++Name;
-    ++Count;
-  }
-  if (Length != Count)
-    return false;
-  return true;
+/// registerMDKind - Register a new metadata kind and return its ID.
+/// A metadata kind can be registered only once. 
+unsigned MetadataContextImpl::registerMDKind(StringRef Name) {
+  unsigned Count = MDHandlerNames.size();
+  assert(MDHandlerNames.count(Name) == 0 && "Already registered MDKind!");
+  return MDHandlerNames[Name] = Count + 1;
 }
 
 /// getMDKind - Return metadata kind. If the requested metadata kind
 /// is not registered then return 0.
-unsigned MetadataContext::getMDKind(const char *Name) {
-  assert (validName(Name) && "Invalid custome metadata name!");
-  StringMap<unsigned>::iterator I = MDHandlerNames.find(Name);
+unsigned MetadataContextImpl::getMDKind(StringRef Name) const {
+  StringMap<unsigned>::const_iterator I = MDHandlerNames.find(Name);
   if (I == MDHandlerNames.end())
     return 0;
 
   return I->getValue();
 }
 
-/// addMD - Attach the metadata of given kind with an Instruction.
-void MetadataContext::addMD(unsigned MDKind, MDNode *Node, Instruction *Inst) {
-  assert (Node && "Unable to add custome metadata");
+/// addMD - Attach the metadata of given kind to an Instruction.
+void MetadataContextImpl::addMD(unsigned MDKind, MDNode *Node, 
+                                Instruction *Inst) {
+  assert(Node && "Invalid null MDNode");
   Inst->HasMetadata = true;
-  MDStoreTy::iterator I = MetadataStore.find(Inst);
-  if (I == MetadataStore.end()) {
-    MDMapTy Info;
+  MDMapTy &Info = MetadataStore[Inst];
+  if (Info.empty()) {
     Info.push_back(std::make_pair(MDKind, Node));
     MetadataStore.insert(std::make_pair(Inst, Info));
     return;
   }
 
-  MDMapTy &Info = I->second;
   // If there is an entry for this MDKind then replace it.
   for (unsigned i = 0, e = Info.size(); i != e; ++i) {
     MDPairTy &P = Info[i];
@@ -324,11 +287,10 @@ void MetadataContext::addMD(unsigned MDKind, MDNode *Node, Instruction *Inst) {
 
   // Otherwise add a new entry.
   Info.push_back(std::make_pair(MDKind, Node));
-  return;
 }
 
-/// removeMD - Remove metadata of given kind attached with an instuction.
-void MetadataContext::removeMD(unsigned Kind, Instruction *Inst) {
+/// removeMD - Remove metadata of given kind attached to an instruction.
+void MetadataContextImpl::removeMD(unsigned Kind, Instruction *Inst) {
   MDStoreTy::iterator I = MetadataStore.find(Inst);
   if (I == MetadataStore.end())
     return;
@@ -341,66 +303,179 @@ void MetadataContext::removeMD(unsigned Kind, Instruction *Inst) {
       return;
     }
   }
-
-  return;
 }
-  
-/// removeMDs - Remove all metadata attached with an instruction.
-void MetadataContext::removeMDs(const Instruction *Inst) {
-  // Find Metadata handles for this instruction.
-  MDStoreTy::iterator I = MetadataStore.find(Inst);
-  assert (I != MetadataStore.end() && "Invalid custom metadata info!");
-  MDMapTy &Info = I->second;
-  
-  // FIXME : Give all metadata handlers a chance to adjust.
-  
-  // Remove the entries for this instruction.
-  Info.clear();
-  MetadataStore.erase(I);
+
+/// removeAllMetadata - Remove all metadata attached to an instruction.
+void MetadataContextImpl::removeAllMetadata(Instruction *Inst) {
+  MetadataStore.erase(Inst);
+  Inst->HasMetadata = false;
 }
 
+/// copyMD - If metadata is attached to Instruction In1 then attach
+/// the same metadata to In2.
+void MetadataContextImpl::copyMD(Instruction *In1, Instruction *In2) {
+  assert(In1 && In2 && "Invalid instruction!");
+  MDMapTy &In1Info = MetadataStore[In1];
+  if (In1Info.empty())
+    return;
+
+  for (MDMapTy::iterator I = In1Info.begin(), E = In1Info.end(); I != E; ++I)
+    addMD(I->first, I->second, In2);
+}
 
-/// getMD - Get the metadata of given kind attached with an Instruction.
+/// getMD - Get the metadata of given kind attached to an Instruction.
 /// If the metadata is not found then return 0.
-MDNode *MetadataContext::getMD(unsigned MDKind, const Instruction *Inst) {
-  MDStoreTy::iterator I = MetadataStore.find(Inst);
-  if (I == MetadataStore.end())
+MDNode *MetadataContextImpl::getMD(unsigned MDKind, const Instruction *Inst) {
+  MDMapTy &Info = MetadataStore[Inst];
+  if (Info.empty())
     return NULL;
-  
-  MDMapTy &Info = I->second;
+
   for (MDMapTy::iterator I = Info.begin(), E = Info.end(); I != E; ++I)
     if (I->first == MDKind)
-      return dyn_cast_or_null<MDNode>(I->second);
+      return I->second;
   return NULL;
 }
 
-/// getMDs - Get the metadata attached with an Instruction.
-const MetadataContext::MDMapTy *MetadataContext::getMDs(const Instruction *Inst) {
-  MDStoreTy::iterator I = MetadataStore.find(Inst);
+/// getMDs - Get the metadata attached to an Instruction.
+void MetadataContextImpl::
+getMDs(const Instruction *Inst, SmallVectorImpl<MDPairTy> &MDs) const {
+  MDStoreTy::const_iterator I = MetadataStore.find(Inst);
   if (I == MetadataStore.end())
-    return NULL;
-  
-  return &(I->second);
+    return;
+  MDs.resize(I->second.size());
+  for (MDMapTy::const_iterator MI = I->second.begin(), ME = I->second.end();
+       MI != ME; ++MI)
+    // MD kinds are numbered from 1.
+    MDs[MI->first - 1] = std::make_pair(MI->first, MI->second);
 }
 
-/// getHandlerNames - Get handler names. This is used by bitcode
-/// writer.
-const StringMap<unsigned> *MetadataContext::getHandlerNames() {
-  return &MDHandlerNames;
+/// getHandlerNames - Populate client-supplied smallvector using custom
+/// metadata name and ID.
+void MetadataContextImpl::
+getHandlerNames(SmallVectorImpl<std::pair<unsigned, StringRef> > &Names) const {
+  Names.resize(MDHandlerNames.size());
+  for (StringMap<unsigned>::const_iterator I = MDHandlerNames.begin(),
+         E = MDHandlerNames.end(); I != E; ++I) 
+    // MD Handlers are numbered from 1.
+    Names[I->second - 1] = std::make_pair(I->second, I->first());
 }
 
 /// ValueIsCloned - This handler is used to update metadata store
 /// when In1 is cloned to create In2.
-void MetadataContext::ValueIsCloned(const Instruction *In1, Instruction *In2) {
+void MetadataContextImpl::ValueIsCloned(const Instruction *In1, 
+                                        Instruction *In2) {
   // Find Metadata handles for In1.
   MDStoreTy::iterator I = MetadataStore.find(In1);
-  assert (I != MetadataStore.end() && "Invalid custom metadata info!");
+  assert(I != MetadataStore.end() && "Invalid custom metadata info!");
 
   // FIXME : Give all metadata handlers a chance to adjust.
 
   MDMapTy &In1Info = I->second;
   MDMapTy In2Info;
   for (MDMapTy::iterator I = In1Info.begin(), E = In1Info.end(); I != E; ++I)
-    if (MDNode *MD = dyn_cast_or_null<MDNode>(I->second))
-      addMD(I->first, MD, In2);
+    addMD(I->first, I->second, In2);
+}
+
+/// ValueIsRAUWd - This handler is used when all of V1's uses are replaced
+/// by V2.
+void MetadataContextImpl::ValueIsRAUWd(Value *V1, Value *V2) {
+  Instruction *I1 = dyn_cast<Instruction>(V1);
+  Instruction *I2 = dyn_cast<Instruction>(V2);
+  if (!I1 || !I2)
+    return;
+
+  // FIXME : Give custom handlers a chance to override this.
+  ValueIsCloned(I1, I2);
+}
+
+//===----------------------------------------------------------------------===//
+// MetadataContext implementation.
+//
+MetadataContext::MetadataContext() 
+  : pImpl(new MetadataContextImpl()) { }
+MetadataContext::~MetadataContext() { delete pImpl; }
+
+/// isValidName - Return true if Name is a valid custom metadata handler name.
+bool MetadataContext::isValidName(StringRef MDName) {
+  if (MDName.empty())
+    return false;
+
+  if (!isalpha(MDName[0]))
+    return false;
+
+  for (StringRef::iterator I = MDName.begin() + 1, E = MDName.end(); I != E;
+       ++I) {
+    if (!isalnum(*I) && *I != '_' && *I != '-' && *I != '.')
+      return false;
+  }
+  return true;
+}
+
+/// registerMDKind - Register a new metadata kind and return its ID.
+/// A metadata kind can be registered only once. 
+unsigned MetadataContext::registerMDKind(StringRef Name) {
+  assert(isValidName(Name) && "Invalid custom metadata name!");
+  return pImpl->registerMDKind(Name);
+}
+
+/// getMDKind - Return metadata kind. If the requested metadata kind
+/// is not registered then return 0.
+unsigned MetadataContext::getMDKind(StringRef Name) const {
+  return pImpl->getMDKind(Name);
+}
+
+/// getMD - Get the metadata of given kind attached to an Instruction.
+/// If the metadata is not found then return 0.
+MDNode *MetadataContext::getMD(unsigned Kind, const Instruction *Inst) {
+  return pImpl->getMD(Kind, Inst);
+}
+
+/// getMDs - Get the metadata attached to an Instruction.
+void MetadataContext::
+getMDs(const Instruction *Inst, 
+       SmallVectorImpl<std::pair<unsigned, TrackingVH<MDNode> > > &MDs) const {
+  return pImpl->getMDs(Inst, MDs);
+}
+
+/// addMD - Attach the metadata of given kind to an Instruction.
+void MetadataContext::addMD(unsigned Kind, MDNode *Node, Instruction *Inst) {
+  pImpl->addMD(Kind, Node, Inst);
+}
+
+/// removeMD - Remove metadata of given kind attached to an instruction.
+void MetadataContext::removeMD(unsigned Kind, Instruction *Inst) {
+  pImpl->removeMD(Kind, Inst);
+}
+
+/// removeAllMetadata - Remove all metadata attached to an instruction.
+void MetadataContext::removeAllMetadata(Instruction *Inst) {
+  pImpl->removeAllMetadata(Inst);
+}
+
+/// copyMD - If metadata is attached to Instruction In1 then attach
+/// the same metadata to In2.
+void MetadataContext::copyMD(Instruction *In1, Instruction *In2) {
+  pImpl->copyMD(In1, In2);
+}
+
+/// getHandlerNames - Populate client-supplied smallvector using custom
+/// metadata name and ID.
+void MetadataContext::
+getHandlerNames(SmallVectorImpl<std::pair<unsigned, StringRef> > &N) const {
+  pImpl->getHandlerNames(N);
+}
+
+/// ValueIsDeleted - This handler is used to update metadata store
+/// when a value is deleted.
+void MetadataContext::ValueIsDeleted(Instruction *Inst) {
+  pImpl->ValueIsDeleted(Inst);
+}
+void MetadataContext::ValueIsRAUWd(Value *V1, Value *V2) {
+  pImpl->ValueIsRAUWd(V1, V2);
+}
+
+/// ValueIsCloned - This handler is used to update metadata store
+/// when In1 is cloned to create In2.
+void MetadataContext::ValueIsCloned(const Instruction *In1, Instruction *In2) {
+  pImpl->ValueIsCloned(In1, In2);
 }
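
The MetadataContextImpl introduced above keeps one small vector of
(kind, node) pairs per instruction, and addMD either replaces an existing
entry of the same kind in place or appends a new one. A runnable toy
model of that store, with std containers and a string standing in for
DenseMap, SmallVector, and MDNode:

    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    typedef std::pair<unsigned, std::string> MDPairTy; // (kind, node stand-in)
    typedef std::vector<MDPairTy> MDMapTy;
    std::map<const void *, MDMapTy> MetadataStore;     // keyed by instruction

    void addMD(unsigned Kind, const std::string &Node, const void *Inst) {
      MDMapTy &Info = MetadataStore[Inst];
      // If there is an entry for this kind, replace it in place...
      for (unsigned i = 0, e = Info.size(); i != e; ++i)
        if (Info[i].first == Kind) { Info[i].second = Node; return; }
      // ...otherwise add a new entry.
      Info.push_back(std::make_pair(Kind, Node));
    }

    int main() {
      int Inst = 0;                        // any address serves as a key here
      addMD(1, "dbg-node-A", &Inst);
      addMD(1, "dbg-node-B", &Inst);       // same kind: replaces, not appends
      std::cout << MetadataStore[&Inst].size() << " entry: "
                << MetadataStore[&Inst][0].second << "\n"; // 1 entry: dbg-node-B
      return 0;
    }
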
diff --git a/libclamav/c++/llvm/lib/VMCore/Module.cpp b/libclamav/c++/llvm/lib/VMCore/Module.cpp
index add2449..3efd3e3 100644
--- a/libclamav/c++/llvm/lib/VMCore/Module.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Module.cpp
@@ -31,8 +31,7 @@ using namespace llvm;
 //
 
 GlobalVariable *ilist_traits<GlobalVariable>::createSentinel() {
-  GlobalVariable *Ret = new GlobalVariable(getGlobalContext(), 
-                                           Type::getInt32Ty(getGlobalContext()),
+  GlobalVariable *Ret = new GlobalVariable(Type::getInt32Ty(getGlobalContext()),
                                            false, GlobalValue::ExternalLinkage);
   // This should not be garbage monitored.
   LeakDetector::removeGarbageObject(Ret);
@@ -56,7 +55,7 @@ template class SymbolTableListTraits<GlobalAlias, Module>;
 // Primitive Module methods.
 //
 
-Module::Module(const StringRef &MID, LLVMContext& C)
+Module::Module(StringRef MID, LLVMContext& C)
   : Context(C), ModuleID(MID), DataLayout("")  {
   ValSymTab = new ValueSymbolTable();
   TypeSymTab = new TypeSymbolTable();
@@ -115,7 +114,7 @@ Module::PointerSize Module::getPointerSize() const {
 /// getNamedValue - Return the first global value in the module with
 /// the specified name, of arbitrary type.  This method returns null
 /// if a global with the specified name is not found.
-GlobalValue *Module::getNamedValue(const StringRef &Name) const {
+GlobalValue *Module::getNamedValue(StringRef Name) const {
   return cast_or_null<GlobalValue>(getValueSymbolTable().lookup(Name));
 }
 
@@ -128,7 +127,7 @@ GlobalValue *Module::getNamedValue(const StringRef &Name) const {
 // it.  This is nice because it allows most passes to get away with not handling
 // the symbol table directly for this common task.
 //
-Constant *Module::getOrInsertFunction(const StringRef &Name,
+Constant *Module::getOrInsertFunction(StringRef Name,
                                       const FunctionType *Ty,
                                       AttrListPtr AttributeList) {
   // See if we have a definition for the specified function already.
@@ -161,7 +160,7 @@ Constant *Module::getOrInsertFunction(const StringRef &Name,
   return F;  
 }
 
-Constant *Module::getOrInsertTargetIntrinsic(const StringRef &Name,
+Constant *Module::getOrInsertTargetIntrinsic(StringRef Name,
                                              const FunctionType *Ty,
                                              AttrListPtr AttributeList) {
   // See if we have a definition for the specified function already.
@@ -178,7 +177,7 @@ Constant *Module::getOrInsertTargetIntrinsic(const StringRef &Name,
   return F;  
 }
 
-Constant *Module::getOrInsertFunction(const StringRef &Name,
+Constant *Module::getOrInsertFunction(StringRef Name,
                                       const FunctionType *Ty) {
   AttrListPtr AttributeList = AttrListPtr::get((AttributeWithIndex *)0, 0);
   return getOrInsertFunction(Name, Ty, AttributeList);
@@ -189,7 +188,7 @@ Constant *Module::getOrInsertFunction(const StringRef &Name,
 // This version of the method takes a null terminated list of function
 // arguments, which makes it easier for clients to use.
 //
-Constant *Module::getOrInsertFunction(const StringRef &Name,
+Constant *Module::getOrInsertFunction(StringRef Name,
                                       AttrListPtr AttributeList,
                                       const Type *RetTy, ...) {
   va_list Args;
@@ -208,7 +207,7 @@ Constant *Module::getOrInsertFunction(const StringRef &Name,
                              AttributeList);
 }
 
-Constant *Module::getOrInsertFunction(const StringRef &Name,
+Constant *Module::getOrInsertFunction(StringRef Name,
                                       const Type *RetTy, ...) {
   va_list Args;
   va_start(Args, RetTy);
@@ -229,7 +228,7 @@ Constant *Module::getOrInsertFunction(const StringRef &Name,
 // getFunction - Look up the specified function in the module symbol table.
 // If it does not exist, return null.
 //
-Function *Module::getFunction(const StringRef &Name) const {
+Function *Module::getFunction(StringRef Name) const {
   return dyn_cast_or_null<Function>(getNamedValue(Name));
 }
 
@@ -244,7 +243,7 @@ Function *Module::getFunction(const StringRef &Name) const {
 /// If AllowLocal is set to true, this function will return types that
 /// have an local. By default, these types are not returned.
 ///
-GlobalVariable *Module::getGlobalVariable(const StringRef &Name,
+GlobalVariable *Module::getGlobalVariable(StringRef Name,
                                           bool AllowLocal) const {
   if (GlobalVariable *Result = 
       dyn_cast_or_null<GlobalVariable>(getNamedValue(Name)))
@@ -259,7 +258,7 @@ GlobalVariable *Module::getGlobalVariable(const StringRef &Name,
 ///      with a constantexpr cast to the right type.
 ///   3. Finally, if the existing global is the correct delclaration, return the
 ///      existing global.
-Constant *Module::getOrInsertGlobal(const StringRef &Name, const Type *Ty) {
+Constant *Module::getOrInsertGlobal(StringRef Name, const Type *Ty) {
   // See if we have a definition for the specified global already.
   GlobalVariable *GV = dyn_cast_or_null<GlobalVariable>(getNamedValue(Name));
   if (GV == 0) {
@@ -286,21 +285,21 @@ Constant *Module::getOrInsertGlobal(const StringRef &Name, const Type *Ty) {
 // getNamedAlias - Look up the specified global in the module symbol table.
 // If it does not exist, return null.
 //
-GlobalAlias *Module::getNamedAlias(const StringRef &Name) const {
+GlobalAlias *Module::getNamedAlias(StringRef Name) const {
   return dyn_cast_or_null<GlobalAlias>(getNamedValue(Name));
 }
 
 /// getNamedMetadata - Return the first NamedMDNode in the module with the
 /// specified name. This method returns null if a NamedMDNode with the 
 //// specified name is not found.
-NamedMDNode *Module::getNamedMetadata(const StringRef &Name) const {
+NamedMDNode *Module::getNamedMetadata(StringRef Name) const {
   return dyn_cast_or_null<NamedMDNode>(getValueSymbolTable().lookup(Name));
 }
 
 /// getOrInsertNamedMetadata - Return the first named MDNode in the module 
 /// with the specified name. This method returns a new NamedMDNode if a 
 /// NamedMDNode with the specified name is not found.
-NamedMDNode *Module::getOrInsertNamedMetadata(const StringRef &Name) {
+NamedMDNode *Module::getOrInsertNamedMetadata(StringRef Name) {
   NamedMDNode *NMD =
     dyn_cast_or_null<NamedMDNode>(getValueSymbolTable().lookup(Name));
   if (!NMD)
@@ -317,7 +316,7 @@ NamedMDNode *Module::getOrInsertNamedMetadata(const StringRef &Name) {
 // there is already an entry for this name, true is returned and the symbol
 // table is not modified.
 //
-bool Module::addTypeName(const StringRef &Name, const Type *Ty) {
+bool Module::addTypeName(StringRef Name, const Type *Ty) {
   TypeSymbolTable &ST = getTypeSymbolTable();
 
   if (ST.lookup(Name)) return true;  // Already in symtab...
@@ -331,7 +330,7 @@ bool Module::addTypeName(const StringRef &Name, const Type *Ty) {
 
 /// getTypeByName - Return the type with the specified name in this module, or
 /// null if there is none by that name.
-const Type *Module::getTypeByName(const StringRef &Name) const {
+const Type *Module::getTypeByName(StringRef Name) const {
   const TypeSymbolTable &ST = getTypeSymbolTable();
   return cast_or_null<Type>(ST.lookup(Name));
 }
@@ -377,14 +376,14 @@ void Module::dropAllReferences() {
     I->dropAllReferences();
 }
 
-void Module::addLibrary(const StringRef& Lib) {
+void Module::addLibrary(StringRef Lib) {
   for (Module::lib_iterator I = lib_begin(), E = lib_end(); I != E; ++I)
     if (*I == Lib)
       return;
   LibraryList.push_back(Lib);
 }
 
-void Module::removeLibrary(const StringRef& Lib) {
+void Module::removeLibrary(StringRef Lib) {
   LibraryListType::iterator I = LibraryList.begin();
   LibraryListType::iterator E = LibraryList.end();
   for (;I != E; ++I)
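
Every signature in Module.cpp changes from 'const StringRef &' to plain
StringRef because a StringRef is only a pointer plus a length: copying it
costs the same as passing a reference and removes one indirection. A
minimal stand-in model (not LLVM's StringRef) that shows the size being
copied:

    #include <cstring>
    #include <iostream>

    struct StringRefModel {
      const char *Data;
      std::size_t Length;
      StringRefModel(const char *S) : Data(S), Length(std::strlen(S)) {}
    };

    // By value: two words are copied; a 'const &' would pass one word and
    // then chase it on every access.
    std::size_t nameLength(StringRefModel Name) { return Name.Length; }

    int main() {
      std::cout << nameLength("getNamedValue") << "\n";  // prints 13
      std::cout << sizeof(StringRefModel) << " bytes copied per call\n";
      return 0;
    }
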
diff --git a/libclamav/c++/llvm/lib/VMCore/Pass.cpp b/libclamav/c++/llvm/lib/VMCore/Pass.cpp
index 1278074..1232fe2 100644
--- a/libclamav/c++/llvm/lib/VMCore/Pass.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Pass.cpp
@@ -18,6 +18,7 @@
 #include "llvm/Module.h"
 #include "llvm/ModuleProvider.h"
 #include "llvm/ADT/STLExtras.h"
+#include "llvm/ADT/StringMap.h"
 #include "llvm/Support/ManagedStatic.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/System/Atomic.h"
@@ -129,6 +130,9 @@ class PassRegistrar {
   /// pass.
   typedef std::map<intptr_t, const PassInfo*> MapType;
   MapType PassInfoMap;
+
+  typedef StringMap<const PassInfo*> StringMapType;
+  StringMapType PassInfoStringMap;
   
   /// AnalysisGroupInfo - Keep track of information for each analysis group.
   struct AnalysisGroupInfo {
@@ -145,10 +149,16 @@ public:
     return I != PassInfoMap.end() ? I->second : 0;
   }
   
+  const PassInfo *GetPassInfo(StringRef Arg) const {
+    StringMapType::const_iterator I = PassInfoStringMap.find(Arg);
+    return I != PassInfoStringMap.end() ? I->second : 0;
+  }
+  
   void RegisterPass(const PassInfo &PI) {
     bool Inserted =
       PassInfoMap.insert(std::make_pair(PI.getTypeInfo(),&PI)).second;
     assert(Inserted && "Pass registered multiple times!"); Inserted=Inserted;
+    PassInfoStringMap[PI.getPassArgument()] = &PI;
   }
   
   void UnregisterPass(const PassInfo &PI) {
@@ -157,6 +167,7 @@ public:
     
     // Remove pass from the map.
     PassInfoMap.erase(I);
+    PassInfoStringMap.erase(PI.getPassArgument());
   }
   
   void EnumerateWith(PassRegistrationListener *L) {
@@ -227,6 +238,10 @@ const PassInfo *Pass::lookupPassInfo(intptr_t TI) {
   return getPassRegistrar()->GetPassInfo(TI);
 }
 
+const PassInfo *Pass::lookupPassInfo(StringRef Arg) {
+  return getPassRegistrar()->GetPassInfo(Arg);
+}
+
 void PassInfo::registerPass() {
   getPassRegistrar()->RegisterPass(*this);
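
The Pass.cpp hunks above give the registrar a second index: passes stay
findable by type id, and the new StringMap lets lookupPassInfo resolve a
command-line argument string as well. A toy model of that dual-keyed
registry, with std::map standing in for the real containers:

    #include <cassert>
    #include <map>
    #include <string>

    struct PassInfo { std::string Arg; };  // stand-in for llvm::PassInfo

    std::map<long, const PassInfo *> PassInfoMap;              // by type id
    std::map<std::string, const PassInfo *> PassInfoStringMap; // by argument

    void RegisterPass(long TypeId, const PassInfo &PI) {
      PassInfoMap[TypeId] = &PI;
      PassInfoStringMap[PI.Arg] = &PI;  // the second index added above
    }

    int main() {
      PassInfo GVN;
      GVN.Arg = "gvn";
      RegisterPass(42, GVN);
      // Both indexes now resolve to the same PassInfo.
      assert(PassInfoMap[42] == PassInfoStringMap["gvn"]);
      return 0;
    }
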
 
diff --git a/libclamav/c++/llvm/lib/VMCore/PassManager.cpp b/libclamav/c++/llvm/lib/VMCore/PassManager.cpp
index f10bc6f..ae418a0 100644
--- a/libclamav/c++/llvm/lib/VMCore/PassManager.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/PassManager.cpp
@@ -105,8 +105,7 @@ namespace {
 /// BBPassManager manages BasicBlockPass. It batches all the
 /// pass together and sequence them to process one basic block before
 /// processing next basic block.
-class VISIBILITY_HIDDEN BBPassManager : public PMDataManager, 
-                                        public FunctionPass {
+class BBPassManager : public PMDataManager, public FunctionPass {
 
 public:
   static char ID;
@@ -367,7 +366,7 @@ namespace {
 
 static ManagedStatic<sys::SmartMutex<true> > TimingInfoMutex;
 
-class VISIBILITY_HIDDEN TimingInfo {
+class TimingInfo {
   std::map<Pass*, Timer> TimingData;
   TimerGroup TG;
 
@@ -747,7 +746,7 @@ void PMDataManager::removeNotPreservedAnalysis(Pass *P) {
 }
 
 /// Remove analysis passes that are not used any longer
-void PMDataManager::removeDeadPasses(Pass *P, const StringRef &Msg,
+void PMDataManager::removeDeadPasses(Pass *P, StringRef Msg,
                                      enum PassDebuggingString DBG_STR) {
 
   SmallVector<Pass *, 12> DeadPasses;
@@ -769,7 +768,7 @@ void PMDataManager::removeDeadPasses(Pass *P, const StringRef &Msg,
     freePass(*I, Msg, DBG_STR);
 }
 
-void PMDataManager::freePass(Pass *P, const StringRef &Msg,
+void PMDataManager::freePass(Pass *P, StringRef Msg,
                              enum PassDebuggingString DBG_STR) {
   dumpPassInfo(P, FREEING_MSG, DBG_STR, Msg);
 
@@ -973,7 +972,7 @@ void PMDataManager::dumpPassArguments() const {
 
 void PMDataManager::dumpPassInfo(Pass *P, enum PassDebuggingString S1,
                                  enum PassDebuggingString S2,
-                                 const StringRef &Msg) {
+                                 StringRef Msg) {
   if (PassDebugging < Executions)
     return;
   errs() << (void*)this << std::string(getDepth()*2+1, ' ');
@@ -1029,7 +1028,7 @@ void PMDataManager::dumpPreservedSet(const Pass *P) const {
   dumpAnalysisUsage("Preserved", P, analysisUsage.getPreservedSet());
 }
 
-void PMDataManager::dumpAnalysisUsage(const StringRef &Msg, const Pass *P,
+void PMDataManager::dumpAnalysisUsage(StringRef Msg, const Pass *P,
                                    const AnalysisUsage::VectorType &Set) const {
   assert(PassDebugging >= Details);
   if (Set.empty())
@@ -1232,6 +1231,9 @@ bool FunctionPassManager::doFinalization() {
 bool FunctionPassManagerImpl::doInitialization(Module &M) {
   bool Changed = false;
 
+  dumpArguments();
+  dumpPasses();
+
   for (unsigned Index = 0; Index < getNumContainedManagers(); ++Index)
     Changed |= getContainedManager(Index)->doInitialization(M);
 
@@ -1275,9 +1277,6 @@ bool FunctionPassManagerImpl::run(Function &F) {
   bool Changed = false;
   TimingInfo::createTheTimeInfo();
 
-  dumpArguments();
-  dumpPasses();
-
   initializeAllAnalysisInfo();
   for (unsigned Index = 0; Index < getNumContainedManagers(); ++Index)
     Changed |= getContainedManager(Index)->runOnFunction(F);
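
For illustration only (not part of the patch): moving dumpArguments()/dumpPasses() from run() into doInitialization() means the pass-structure debug output is printed once, at setup, instead of on every function run. A rough usage sketch under that assumption; ModuleProvider is the API of this snapshot, and runGVNOverModule is an illustrative name:

    #include "llvm/Module.h"
    #include "llvm/ModuleProvider.h"
    #include "llvm/PassManager.h"
    #include "llvm/Transforms/Scalar.h"

    void runGVNOverModule(llvm::ModuleProvider *MP, llvm::Module &M) {
      llvm::FunctionPassManager FPM(MP);
      FPM.add(llvm::createGVNPass());
      FPM.doInitialization();   // pass structure is dumped here now, once
      for (llvm::Module::iterator F = M.begin(), E = M.end(); F != E; ++F)
        if (!F->isDeclaration())
          FPM.run(*F);          // no longer dumps on every call
      FPM.doFinalization();
    }
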
diff --git a/libclamav/c++/llvm/lib/VMCore/PrintModulePass.cpp b/libclamav/c++/llvm/lib/VMCore/PrintModulePass.cpp
index 0a7f449..3d4f19d 100644
--- a/libclamav/c++/llvm/lib/VMCore/PrintModulePass.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/PrintModulePass.cpp
@@ -16,13 +16,12 @@
 #include "llvm/Function.h"
 #include "llvm/Module.h"
 #include "llvm/Pass.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
 namespace {
 
-  class VISIBILITY_HIDDEN PrintModulePass : public ModulePass {
+  class PrintModulePass : public ModulePass {
     raw_ostream *Out;       // raw_ostream to print on
     bool DeleteStream;      // Delete the ostream in our dtor?
   public:
diff --git a/libclamav/c++/llvm/lib/VMCore/Type.cpp b/libclamav/c++/llvm/lib/VMCore/Type.cpp
index 087464d..739c463 100644
--- a/libclamav/c++/llvm/lib/VMCore/Type.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Type.cpp
@@ -27,8 +27,6 @@
 #include "llvm/Support/ManagedStatic.h"
 #include "llvm/Support/MathExtras.h"
 #include "llvm/Support/raw_ostream.h"
-#include "llvm/System/Mutex.h"
-#include "llvm/System/RWMutex.h"
 #include "llvm/System/Threading.h"
 #include <algorithm>
 #include <cstdarg>
@@ -358,6 +356,46 @@ const IntegerType *Type::getInt64Ty(LLVMContext &C) {
   return &C.pImpl->Int64Ty;
 }
 
+const PointerType *Type::getFloatPtrTy(LLVMContext &C, unsigned AS) {
+  return getFloatTy(C)->getPointerTo(AS);
+}
+
+const PointerType *Type::getDoublePtrTy(LLVMContext &C, unsigned AS) {
+  return getDoubleTy(C)->getPointerTo(AS);
+}
+
+const PointerType *Type::getX86_FP80PtrTy(LLVMContext &C, unsigned AS) {
+  return getX86_FP80Ty(C)->getPointerTo(AS);
+}
+
+const PointerType *Type::getFP128PtrTy(LLVMContext &C, unsigned AS) {
+  return getFP128Ty(C)->getPointerTo(AS);
+}
+
+const PointerType *Type::getPPC_FP128PtrTy(LLVMContext &C, unsigned AS) {
+  return getPPC_FP128Ty(C)->getPointerTo(AS);
+}
+
+const PointerType *Type::getInt1PtrTy(LLVMContext &C, unsigned AS) {
+  return getInt1Ty(C)->getPointerTo(AS);
+}
+
+const PointerType *Type::getInt8PtrTy(LLVMContext &C, unsigned AS) {
+  return getInt8Ty(C)->getPointerTo(AS);
+}
+
+const PointerType *Type::getInt16PtrTy(LLVMContext &C, unsigned AS) {
+  return getInt16Ty(C)->getPointerTo(AS);
+}
+
+const PointerType *Type::getInt32PtrTy(LLVMContext &C, unsigned AS) {
+  return getInt32Ty(C)->getPointerTo(AS);
+}
+
+const PointerType *Type::getInt64PtrTy(LLVMContext &C, unsigned AS) {
+  return getInt64Ty(C)->getPointerTo(AS);
+}
+
 //===----------------------------------------------------------------------===//
 //                          Derived Type Constructors
 //===----------------------------------------------------------------------===//
@@ -728,7 +766,6 @@ const IntegerType *IntegerType::get(LLVMContext &C, unsigned NumBits) {
   
   // First, see if the type is already in the table.
-  sys::SmartScopedLock<true> L(pImpl->TypeMapLock);
   ITy = pImpl->IntegerTypes.get(IVT);
     
   if (!ITy) {
@@ -770,7 +807,6 @@ FunctionType *FunctionType::get(const Type *ReturnType,
   
   LLVMContextImpl *pImpl = ReturnType->getContext().pImpl;
   
-  sys::SmartScopedLock<true> L(pImpl->TypeMapLock);
   FT = pImpl->FunctionTypes.get(VT);
   
   if (!FT) {
@@ -795,7 +831,6 @@ ArrayType *ArrayType::get(const Type *ElementType, uint64_t NumElements) {
 
   LLVMContextImpl *pImpl = ElementType->getContext().pImpl;
   
-  sys::SmartScopedLock<true> L(pImpl->TypeMapLock);
   AT = pImpl->ArrayTypes.get(AVT);
       
   if (!AT) {
@@ -821,7 +856,6 @@ VectorType *VectorType::get(const Type *ElementType, unsigned NumElements) {
   
   LLVMContextImpl *pImpl = ElementType->getContext().pImpl;
   
-  sys::SmartScopedLock<true> L(pImpl->TypeMapLock);
   PT = pImpl->VectorTypes.get(PVT);
     
   if (!PT) {
@@ -850,7 +884,6 @@ StructType *StructType::get(LLVMContext &Context,
   
   LLVMContextImpl *pImpl = Context.pImpl;
   
-  sys::SmartScopedLock<true> L(pImpl->TypeMapLock);
   ST = pImpl->StructTypes.get(STV);
     
   if (!ST) {
@@ -898,7 +931,6 @@ PointerType *PointerType::get(const Type *ValueType, unsigned AddressSpace) {
   
   LLVMContextImpl *pImpl = ValueType->getContext().pImpl;
   
-  sys::SmartScopedLock<true> L(pImpl->TypeMapLock);
   PT = pImpl->PointerTypes.get(PVT);
   
   if (!PT) {
@@ -911,7 +943,7 @@ PointerType *PointerType::get(const Type *ValueType, unsigned AddressSpace) {
   return PT;
 }
 
-PointerType *Type::getPointerTo(unsigned addrs) const {
+const PointerType *Type::getPointerTo(unsigned addrs) const {
   return PointerType::get(this, addrs);
 }
 
@@ -930,10 +962,7 @@ bool PointerType::isValidElementType(const Type *ElemTy) {
 // it.  This function is called primarily by the PATypeHandle class.
 void Type::addAbstractTypeUser(AbstractTypeUser *U) const {
   assert(isAbstract() && "addAbstractTypeUser: Current type not abstract!");
-  LLVMContextImpl *pImpl = getContext().pImpl;
-  pImpl->AbstractTypeUsersLock.acquire();
   AbstractTypeUsers.push_back(U);
-  pImpl->AbstractTypeUsersLock.release();
 }
 
 
@@ -943,8 +972,6 @@ void Type::addAbstractTypeUser(AbstractTypeUser *U) const {
 // is annihilated, because there is no way to get a reference to it ever again.
 //
 void Type::removeAbstractTypeUser(AbstractTypeUser *U) const {
-  LLVMContextImpl *pImpl = getContext().pImpl;
-  pImpl->AbstractTypeUsersLock.acquire();
   
   // Search from back to front because we will notify users from back to
   // front.  Also, it is likely that there will be a stack-like behavior to
@@ -973,7 +1000,6 @@ void Type::removeAbstractTypeUser(AbstractTypeUser *U) const {
   this->destroy();
   }
   
-  pImpl->AbstractTypeUsersLock.release();
 }
 
 // unlockedRefineAbstractTypeTo - This function is used when it is discovered
@@ -1025,7 +1051,6 @@ void DerivedType::unlockedRefineAbstractTypeTo(const Type *NewType) {
   // will not cause users to drop off of the use list.  If we resolve to ourself
   // we succeed!
   //
-  pImpl->AbstractTypeUsersLock.acquire();
   while (!AbstractTypeUsers.empty() && NewTy != this) {
     AbstractTypeUser *User = AbstractTypeUsers.back();
 
@@ -1041,7 +1066,6 @@ void DerivedType::unlockedRefineAbstractTypeTo(const Type *NewType) {
     assert(AbstractTypeUsers.size() != OldSize &&
            "AbsTyUser did not remove self from user list!");
   }
-  pImpl->AbstractTypeUsersLock.release();
 
   // If we were successful removing all users from the type, 'this' will be
   // deleted when the last PATypeHolder is destroyed or updated from this type.
@@ -1055,7 +1079,6 @@ void DerivedType::unlockedRefineAbstractTypeTo(const Type *NewType) {
 void DerivedType::refineAbstractTypeTo(const Type *NewType) {
   // All recursive calls will go through unlockedRefineAbstractTypeTo,
   // to avoid deadlock problems.
-  sys::SmartScopedLock<true> L(NewType->getContext().pImpl->TypeMapLock);
   unlockedRefineAbstractTypeTo(NewType);
 }
 
@@ -1067,9 +1090,6 @@ void DerivedType::notifyUsesThatTypeBecameConcrete() {
   DEBUG(errs() << "typeIsREFINED type: " << (void*)this << " " << *this <<"\n");
 #endif
 
-  LLVMContextImpl *pImpl = getContext().pImpl;
-
-  pImpl->AbstractTypeUsersLock.acquire();
   unsigned OldSize = AbstractTypeUsers.size(); OldSize=OldSize;
   while (!AbstractTypeUsers.empty()) {
     AbstractTypeUser *ATU = AbstractTypeUsers.back();
@@ -1078,7 +1098,6 @@ void DerivedType::notifyUsesThatTypeBecameConcrete() {
     assert(AbstractTypeUsers.size() < OldSize-- &&
            "AbstractTypeUser did not remove itself from the use list!");
   }
-  pImpl->AbstractTypeUsersLock.release();
 }
 
 // refineAbstractType - Called when a contained type is found to be more
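
For illustration only (not part of the patch): each of the pointer-type helpers added above is a thin wrapper, so getInt8PtrTy(C, AS) names exactly the type that PointerType::get(Type::getInt8Ty(C), AS) does. A minimal sketch; pointerHelperExample is an illustrative name:

    #include "llvm/DerivedTypes.h"
    #include "llvm/LLVMContext.h"
    #include <cassert>

    void pointerHelperExample(llvm::LLVMContext &C) {
      const llvm::PointerType *A = llvm::Type::getInt8PtrTy(C);  // addrspace 0
      const llvm::PointerType *B =
          llvm::PointerType::get(llvm::Type::getInt8Ty(C), 0);
      // Types are uniqued per context, so both spellings yield one object.
      assert(A == B && "pointer types are uniqued per context");
      (void)A; (void)B;
    }
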
diff --git a/libclamav/c++/llvm/lib/VMCore/TypeSymbolTable.cpp b/libclamav/c++/llvm/lib/VMCore/TypeSymbolTable.cpp
index f31ea66..0d0cdf5 100644
--- a/libclamav/c++/llvm/lib/VMCore/TypeSymbolTable.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/TypeSymbolTable.cpp
@@ -17,16 +17,12 @@
 #include "llvm/ADT/StringRef.h"
 #include "llvm/Support/ManagedStatic.h"
 #include "llvm/Support/raw_ostream.h"
-#include "llvm/System/RWMutex.h"
-#include "llvm/System/Threading.h"
 #include <algorithm>
 using namespace llvm;
 
 #define DEBUG_SYMBOL_TABLE 0
 #define DEBUG_ABSTYPE 0
 
-static ManagedStatic<sys::SmartRWMutex<true> > TypeSymbolTableLock;
-
 TypeSymbolTable::~TypeSymbolTable() {
   // Drop all abstract type references in the type plane...
   for (iterator TI = tmap.begin(), TE = tmap.end(); TI != TE; ++TI) {
@@ -35,11 +31,9 @@ TypeSymbolTable::~TypeSymbolTable() {
   }
 }
 
-std::string TypeSymbolTable::getUniqueName(const StringRef &BaseName) const {
+std::string TypeSymbolTable::getUniqueName(StringRef BaseName) const {
   std::string TryName = BaseName;
   
-  sys::SmartScopedReader<true> Reader(*TypeSymbolTableLock);
-  
   const_iterator End = tmap.end();
 
   // See if the name exists
@@ -49,9 +43,7 @@ std::string TypeSymbolTable::getUniqueName(const StringRef &BaseName) const {
 }
 
 // lookup a type by name - returns null on failure
-Type* TypeSymbolTable::lookup(const StringRef &Name) const {
-  sys::SmartScopedReader<true> Reader(*TypeSymbolTableLock);
-  
+Type* TypeSymbolTable::lookup(StringRef Name) const {
   const_iterator TI = tmap.find(Name);
   Type* result = 0;
   if (TI != tmap.end())
@@ -59,21 +51,8 @@ Type* TypeSymbolTable::lookup(const StringRef &Name) const {
   return result;
 }
 
-TypeSymbolTable::iterator TypeSymbolTable::find(const StringRef &Name) {
-  sys::SmartScopedReader<true> Reader(*TypeSymbolTableLock);  
-  return tmap.find(Name);
-}
-
-TypeSymbolTable::const_iterator
-TypeSymbolTable::find(const StringRef &Name) const {
-  sys::SmartScopedReader<true> Reader(*TypeSymbolTableLock);  
-  return tmap.find(Name);
-}
-
 // remove - Remove a type from the symbol table...
 Type* TypeSymbolTable::remove(iterator Entry) {
-  TypeSymbolTableLock->writer_acquire();
-  
   assert(Entry != tmap.end() && "Invalid entry to remove!");
   const Type* Result = Entry->second;
 
@@ -84,8 +63,6 @@ Type* TypeSymbolTable::remove(iterator Entry) {
 
   tmap.erase(Entry);
   
-  TypeSymbolTableLock->writer_release();
-
   // If we are removing an abstract type, remove the symbol table from its use
   // list...
   if (Result->isAbstract()) {
@@ -102,11 +79,9 @@ Type* TypeSymbolTable::remove(iterator Entry) {
 
 
 // insert - Insert a type into the symbol table with the specified name...
-void TypeSymbolTable::insert(const StringRef &Name, const Type* T) {
+void TypeSymbolTable::insert(StringRef Name, const Type* T) {
   assert(T && "Can't insert null type into symbol table!");
 
-  TypeSymbolTableLock->writer_acquire();
-
   if (tmap.insert(std::make_pair(Name, T)).second) {
     // Type inserted fine with no conflict.
     
@@ -132,8 +107,6 @@ void TypeSymbolTable::insert(const StringRef &Name, const Type* T) {
     tmap.insert(make_pair(UniqueName, T));
   }
   
-  TypeSymbolTableLock->writer_release();
-
   // If we are adding an abstract type, add the symbol table to its use list.
   if (T->isAbstract()) {
     cast<DerivedType>(T)->addAbstractTypeUser(this);
@@ -146,8 +119,6 @@ void TypeSymbolTable::insert(const StringRef &Name, const Type* T) {
 // This function is called when one of the types in the type plane is refined
 void TypeSymbolTable::refineAbstractType(const DerivedType *OldType,
                                          const Type *NewType) {
-  sys::SmartScopedReader<true> Reader(*TypeSymbolTableLock);
-  
   // Loop over all of the types in the symbol table, replacing any references
   // to OldType with references to NewType.  Note that there may be multiple
   // occurrences, and although we only need to remove one at a time, it's
@@ -177,7 +148,6 @@ void TypeSymbolTable::typeBecameConcrete(const DerivedType *AbsTy) {
   // Loop over all of the types in the symbol table, dropping any abstract
   // type user entries for AbsTy which occur because there are names for the
   // type.
-  sys::SmartScopedReader<true> Reader(*TypeSymbolTableLock);
   for (iterator TI = begin(), TE = end(); TI != TE; ++TI)
     if (TI->second == const_cast<Type*>(static_cast<const Type*>(AbsTy)))
       AbsTy->removeAbstractTypeUser(this);
@@ -191,8 +161,6 @@ static void DumpTypes(const std::pair<const std::string, const Type*>& T ) {
 
 void TypeSymbolTable::dump() const {
   errs() << "TypeSymbolPlane: ";
-  sys::SmartScopedReader<true> Reader(*TypeSymbolTableLock);
   for_each(tmap.begin(), tmap.end(), DumpTypes);
 }
 
-// vim: sw=2 ai
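
For illustration only (not part of the patch): with every internal reader/writer lock removed, TypeSymbolTable no longer synchronizes itself, so code that shares one table across threads presumably has to serialize access externally. A hedged sketch of that arrangement; TableLock and lookupLocked are illustrative names, not anything this patch introduces:

    #include "llvm/TypeSymbolTable.h"
    #include "llvm/ADT/StringRef.h"
    #include "llvm/System/Mutex.h"

    static llvm::sys::Mutex TableLock;  // hypothetical, owned by the caller

    llvm::Type *lookupLocked(llvm::TypeSymbolTable &TST, llvm::StringRef Name) {
      TableLock.acquire();
      llvm::Type *T = TST.lookup(Name);  // no locking inside anymore
      TableLock.release();
      return T;
    }
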
diff --git a/libclamav/c++/llvm/lib/VMCore/Value.cpp b/libclamav/c++/llvm/lib/VMCore/Value.cpp
index 9fce24a..826e8a1 100644
--- a/libclamav/c++/llvm/lib/VMCore/Value.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Value.cpp
@@ -27,9 +27,6 @@
 #include "llvm/Support/LeakDetector.h"
 #include "llvm/Support/ManagedStatic.h"
 #include "llvm/Support/ValueHandle.h"
-#include "llvm/Support/raw_ostream.h"
-#include "llvm/System/RWMutex.h"
-#include "llvm/System/Threading.h"
 #include "llvm/ADT/DenseMap.h"
 #include <algorithm>
 using namespace llvm;
@@ -309,6 +306,10 @@ void Value::uncheckedReplaceAllUsesWith(Value *New) {
   // Notify all ValueHandles (if present) that this value is going away.
   if (HasValueHandle)
     ValueHandleBase::ValueIsRAUWd(this, New);
+  if (HasMetadata) {
+    LLVMContext &Context = getContext();
+    Context.pImpl->TheMetadata.ValueIsRAUWd(this, New);
+  }
 
   while (!use_empty()) {
     Use &U = *UseList;
@@ -411,6 +412,16 @@ void ValueHandleBase::AddToExistingUseList(ValueHandleBase **List) {
   }
 }
 
+void ValueHandleBase::AddToExistingUseListAfter(ValueHandleBase *List) {
+  assert(List && "Must insert after existing node");
+
+  Next = List->Next;
+  setPrevPtr(&List->Next);
+  List->Next = this;
+  if (Next)
+    Next->setPrevPtr(&Next);
+}
+
 /// AddToUseList - Add this ValueHandle to the use list for VP.
 void ValueHandleBase::AddToUseList() {
   assert(VP && "Null pointer doesn't have a use list!");
@@ -490,37 +501,46 @@ void ValueHandleBase::ValueIsDeleted(Value *V) {
   ValueHandleBase *Entry = pImpl->ValueHandles[V];
   assert(Entry && "Value bit set but no entries exist");
 
-  while (Entry) {
-    // Advance pointer to avoid invalidation.
-    ValueHandleBase *ThisNode = Entry;
-    Entry = Entry->Next;
+  // We use a local ValueHandleBase as an iterator so that
+  // ValueHandles can add and remove themselves from the list without
+  // breaking our iteration.  This is not really an AssertingVH; we
+  // just have to give ValueHandleBase some kind.
+  for (ValueHandleBase Iterator(Assert, *Entry); Entry; Entry = Iterator.Next) {
+    Iterator.RemoveFromUseList();
+    Iterator.AddToExistingUseListAfter(Entry);
+    assert(Entry->Next == &Iterator && "Loop invariant broken.");
 
-    switch (ThisNode->getKind()) {
+    switch (Entry->getKind()) {
     case Assert:
-#ifndef NDEBUG      // Only in -g mode...
-      errs() << "While deleting: " << *V->getType() << " %" << V->getNameStr()
-             << "\n";
-#endif
-      llvm_unreachable("An asserting value handle still pointed to this"
-                       " value!");
+      break;
     case Tracking:
       // Mark that this value has been deleted by setting it to an invalid Value
       // pointer.
-      ThisNode->operator=(DenseMapInfo<Value *>::getTombstoneKey());
+      Entry->operator=(DenseMapInfo<Value *>::getTombstoneKey());
       break;
     case Weak:
       // Weak just goes to null, which will unlink it from the list.
-      ThisNode->operator=(0);
+      Entry->operator=(0);
       break;
     case Callback:
       // Forward to the subclass's implementation.
-      static_cast<CallbackVH*>(ThisNode)->deleted();
+      static_cast<CallbackVH*>(Entry)->deleted();
       break;
     }
   }
 
-  // All callbacks and weak references should be dropped by now.
-  assert(!V->HasValueHandle && "All references to V were not removed?");
+  // All callbacks, weak references, and assertingVHs should be dropped by now.
+  if (V->HasValueHandle) {
+#ifndef NDEBUG      // Only in +Asserts mode...
+    errs() << "While deleting: " << *V->getType() << " %" << V->getNameStr()
+           << "\n";
+    if (pImpl->ValueHandles[V]->getKind() == Assert)
+      llvm_unreachable("An asserting value handle still pointed to this"
+                       " value!");
+
+#endif
+    llvm_unreachable("All references to V were not removed?");
+  }
 }
 
 
@@ -535,12 +555,16 @@ void ValueHandleBase::ValueIsRAUWd(Value *Old, Value *New) {
 
   assert(Entry && "Value bit set but no entries exist");
 
-  while (Entry) {
-    // Advance pointer to avoid invalidation.
-    ValueHandleBase *ThisNode = Entry;
-    Entry = Entry->Next;
+  // We use a local ValueHandleBase as an iterator so that
+  // ValueHandles can add and remove themselves from the list without
+  // breaking our iteration.  This is not really an AssertingVH; we
+  // just have to give ValueHandleBase some kind.
+  for (ValueHandleBase Iterator(Assert, *Entry); Entry; Entry = Iterator.Next) {
+    Iterator.RemoveFromUseList();
+    Iterator.AddToExistingUseListAfter(Entry);
+    assert(Entry->Next == &Iterator && "Loop invariant broken.");
 
-    switch (ThisNode->getKind()) {
+    switch (Entry->getKind()) {
     case Assert:
       // Asserting handle does not follow RAUW implicitly.
       break;
@@ -553,11 +577,11 @@ void ValueHandleBase::ValueIsRAUWd(Value *Old, Value *New) {
       // FALLTHROUGH
     case Weak:
       // Weak goes to the new value, which will unlink it from Old's list.
-      ThisNode->operator=(New);
+      Entry->operator=(New);
       break;
     case Callback:
       // Forward to the subclass's implementation.
-      static_cast<CallbackVH*>(ThisNode)->allUsesReplacedWith(New);
+      static_cast<CallbackVH*>(Entry)->allUsesReplacedWith(New);
       break;
     }
   }
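
For illustration only (not part of the patch): the iterator-based rewrite of ValueIsDeleted/ValueIsRAUWd exists so that a handle's callback may unlink the handle, or add new ones, while the list is being walked. A sketch of the kind of handle that exercises this; SelfErasingVH is an illustrative name:

    #include "llvm/Support/ValueHandle.h"
    #include "llvm/Value.h"

    // A callback handle that drops itself when its value dies: exactly
    // the list mutation the new loop is written to survive.
    class SelfErasingVH : public llvm::CallbackVH {
    public:
      explicit SelfErasingVH(llvm::Value *V) : llvm::CallbackVH(V) {}
      virtual void deleted() {
        // Clearing the pointer unlinks this handle from the use list
        // while ValueIsDeleted is still iterating over that list.
        setValPtr(0);
      }
    };
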
diff --git a/libclamav/c++/llvm/lib/VMCore/ValueSymbolTable.cpp b/libclamav/c++/llvm/lib/VMCore/ValueSymbolTable.cpp
index 7765a98..9d39a50 100644
--- a/libclamav/c++/llvm/lib/VMCore/ValueSymbolTable.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/ValueSymbolTable.cpp
@@ -77,7 +77,7 @@ void ValueSymbolTable::removeValueName(ValueName *V) {
 /// createValueName - This method attempts to create a value name and insert
 /// it into the symbol table with the specified name.  If it conflicts, it
 /// auto-renames the name and returns that instead.
-ValueName *ValueSymbolTable::createValueName(const StringRef &Name, Value *V) {
+ValueName *ValueSymbolTable::createValueName(StringRef Name, Value *V) {
   // In the common case, the name is not already in the symbol table.
   ValueName &Entry = vmap.GetOrCreateValue(Name);
   if (Entry.getValue() == 0) {
diff --git a/libclamav/c++/llvm/lib/VMCore/Verifier.cpp b/libclamav/c++/llvm/lib/VMCore/Verifier.cpp
index 4f7c847..7aa86b7 100644
--- a/libclamav/c++/llvm/lib/VMCore/Verifier.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Verifier.cpp
@@ -62,7 +62,6 @@
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/StringExtras.h"
 #include "llvm/ADT/STLExtras.h"
-#include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 #include <algorithm>
@@ -70,7 +69,7 @@
 using namespace llvm;
 
 namespace {  // Anonymous namespace for class
-  struct VISIBILITY_HIDDEN PreVerifier : public FunctionPass {
+  struct PreVerifier : public FunctionPass {
     static char ID; // Pass ID, replacement for typeid
 
     PreVerifier() : FunctionPass(&ID) { }
@@ -321,7 +320,7 @@ namespace {
     void visitUserOp1(Instruction &I);
     void visitUserOp2(Instruction &I) { visitUserOp1(I); }
     void visitIntrinsicFunctionCall(Intrinsic::ID ID, CallInst &CI);
-    void visitAllocationInst(AllocationInst &AI);
+    void visitAllocaInst(AllocaInst &AI);
     void visitExtractValueInst(ExtractValueInst &EVI);
     void visitInsertValueInst(InsertValueInst &IVI);
 
@@ -339,10 +338,10 @@ namespace {
     void WriteValue(const Value *V) {
       if (!V) return;
       if (isa<Instruction>(V)) {
-        MessagesStr << *V;
+        MessagesStr << *V << '\n';
       } else {
         WriteAsOperand(MessagesStr, V, true, Mod);
-        MessagesStr << "\n";
+        MessagesStr << '\n';
       }
     }
 
@@ -600,12 +599,11 @@ void Verifier::visitFunction(Function &F) {
           "# formal arguments must match # of arguments for function type!",
           &F, FT);
   Assert1(F.getReturnType()->isFirstClassType() ||
-          F.getReturnType()->getTypeID() == Type::VoidTyID || 
+          F.getReturnType()->isVoidTy() || 
           isa<StructType>(F.getReturnType()),
           "Functions cannot return aggregate values!", &F);
 
-  Assert1(!F.hasStructRetAttr() ||
-          F.getReturnType()->getTypeID() == Type::VoidTyID,
+  Assert1(!F.hasStructRetAttr() || F.getReturnType()->isVoidTy(),
           "Invalid struct return type!", &F);
 
   const AttrListPtr &Attrs = F.getAttributes();
@@ -643,7 +641,7 @@ void Verifier::visitFunction(Function &F) {
     Assert1(I->getType()->isFirstClassType(),
             "Function arguments must have first-class types!", I);
     if (!isLLVMdotName)
-      Assert2(I->getType() != Type::getMetadataTy(F.getContext()),
+      Assert2(!I->getType()->isMetadataTy(),
               "Function takes metadata but isn't an intrinsic", I, &F);
   }
 
@@ -660,6 +658,12 @@ void Verifier::visitFunction(Function &F) {
     BasicBlock *Entry = &F.getEntryBlock();
     Assert1(pred_begin(Entry) == pred_end(Entry),
             "Entry block to function must not have predecessors!", Entry);
+    
+    // The address of the entry block cannot be taken, unless it is dead.
+    if (Entry->hasAddressTaken()) {
+      Assert1(!BlockAddress::get(Entry)->isConstantUsed(),
+              "blockaddress may not be used with the entry block!", Entry);
+    }
   }
   
   // If this function is actually an intrinsic, verify that it is only used in
@@ -738,7 +742,7 @@ void Verifier::visitTerminatorInst(TerminatorInst &I) {
 void Verifier::visitReturnInst(ReturnInst &RI) {
   Function *F = RI.getParent()->getParent();
   unsigned N = RI.getNumOperands();
-  if (F->getReturnType()->getTypeID() == Type::VoidTyID) 
+  if (F->getReturnType()->isVoidTy()) 
     Assert2(N == 0,
             "Found return instr that returns non-void in Function of void "
             "return type!", &RI, F->getReturnType());
@@ -776,9 +780,13 @@ void Verifier::visitSwitchInst(SwitchInst &SI) {
   // Check to make sure that all of the constants in the switch instruction
   // have the same type as the switched-on value.
   const Type *SwitchTy = SI.getCondition()->getType();
-  for (unsigned i = 1, e = SI.getNumCases(); i != e; ++i)
+  SmallPtrSet<ConstantInt*, 32> Constants;
+  for (unsigned i = 1, e = SI.getNumCases(); i != e; ++i) {
     Assert1(SI.getCaseValue(i)->getType() == SwitchTy,
             "Switch constants must all be same type as switch value!", &SI);
+    Assert2(Constants.insert(SI.getCaseValue(i)),
+            "Duplicate integer as switch case", &SI, SI.getCaseValue(i));
+  }
 
   visitTerminatorInst(SI);
 }
@@ -1103,7 +1111,7 @@ void Verifier::VerifyCallSite(CallSite CS) {
       CS.getCalledFunction()->getName().substr(0, 5) != "llvm.") {
     for (FunctionType::param_iterator PI = FTy->param_begin(),
            PE = FTy->param_end(); PI != PE; ++PI)
-      Assert1(PI->get() != Type::getMetadataTy(I->getContext()),
+      Assert1(!PI->get()->isMetadataTy(),
               "Function has metadata parameter but isn't an intrinsic", I);
   }
 
@@ -1283,7 +1291,7 @@ void Verifier::visitStoreInst(StoreInst &SI) {
   visitInstruction(SI);
 }
 
-void Verifier::visitAllocationInst(AllocationInst &AI) {
+void Verifier::visitAllocaInst(AllocaInst &AI) {
   const PointerType *PTy = AI.getType();
   Assert1(PTy->getAddressSpace() == 0, 
           "Allocation instruction pointer not in the generic address space!",
@@ -1329,18 +1337,18 @@ void Verifier::visitInstruction(Instruction &I) {
     Assert1(BB->getTerminator() == &I, "Terminator not at end of block!", &I);
 
   // Check that void typed values don't have names
-  Assert1(I.getType() != Type::getVoidTy(I.getContext()) || !I.hasName(),
+  Assert1(!I.getType()->isVoidTy() || !I.hasName(),
           "Instruction has a name, but provides a void value!", &I);
 
   // Check that the return value of the instruction is either void or a legal
   // value type.
-  Assert1(I.getType()->getTypeID() == Type::VoidTyID || 
+  Assert1(I.getType()->isVoidTy() || 
           I.getType()->isFirstClassType(),
           "Instruction returns a non-scalar type!", &I);
 
   // Check that the instruction doesn't produce metadata. Calls are already
   // checked against the callee type.
-  Assert1(I.getType()->getTypeID() != Type::MetadataTyID ||
+  Assert1(!I.getType()->isMetadataTy() ||
           isa<CallInst>(I) || isa<InvokeInst>(I),
           "Invalid use of metadata!", &I);
 
@@ -1467,6 +1475,9 @@ void Verifier::visitInstruction(Instruction &I) {
 void Verifier::VerifyType(const Type *Ty) {
   if (!Types.insert(Ty)) return;
 
+  Assert1(&Mod->getContext() == &Ty->getContext(),
+          "Type context does not match Module context!", Ty);
+
   switch (Ty->getTypeID()) {
   case Type::FunctionTyID: {
     const FunctionType *FTy = cast<FunctionType>(Ty);
@@ -1579,6 +1590,17 @@ void Verifier::visitIntrinsicFunctionCall(Intrinsic::ID ID, CallInst &CI) {
             "llvm.stackprotector parameter #2 must resolve to an alloca.",
             &CI);
     break;
+  case Intrinsic::lifetime_start:
+  case Intrinsic::lifetime_end:
+  case Intrinsic::invariant_start:
+    Assert1(isa<ConstantInt>(CI.getOperand(1)),
+            "size argument of memory use markers must be a constant integer",
+            &CI);
+    break;
+  case Intrinsic::invariant_end:
+    Assert1(isa<ConstantInt>(CI.getOperand(2)),
+            "llvm.invariant.end parameter #2 must be a constant integer", &CI);
+    break;
   }
 }
 
diff --git a/libclamav/c++/llvm/test/Analysis/BasicAA/2006-03-03-BadArraySubscript.ll b/libclamav/c++/llvm/test/Analysis/BasicAA/2006-03-03-BadArraySubscript.ll
index 5d08312..49327ac 100644
--- a/libclamav/c++/llvm/test/Analysis/BasicAA/2006-03-03-BadArraySubscript.ll
+++ b/libclamav/c++/llvm/test/Analysis/BasicAA/2006-03-03-BadArraySubscript.ll
@@ -1,5 +1,6 @@
 ; RUN: opt < %s -aa-eval -disable-output |& grep {2 no alias respon}
 ; TEST that A[1][0] may alias A[0][i].
+target datalayout = "E-p:64:64:64-a0:0:8-f32:32:32-f64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-v64:64:64-v128:128:128"
 
 define void @test(i32 %N) {
 entry:
diff --git a/libclamav/c++/llvm/test/Analysis/BasicAA/2008-12-09-GEP-IndicesAlias.ll b/libclamav/c++/llvm/test/Analysis/BasicAA/2008-12-09-GEP-IndicesAlias.ll
deleted file mode 100644
index aaf9061..0000000
--- a/libclamav/c++/llvm/test/Analysis/BasicAA/2008-12-09-GEP-IndicesAlias.ll
+++ /dev/null
@@ -1,16 +0,0 @@
-; RUN: opt < %s -aa-eval -print-all-alias-modref-info -disable-output |& grep {MustAlias:.*%R,.*%r}
-; Make sure that basicaa thinks R and r are must aliases.
-
-define i32 @test(i8 * %P) {
-entry:
-	%Q = bitcast i8* %P to {i32, i32}*
-	%R = getelementptr {i32, i32}* %Q, i32 0, i32 1
-	%S = load i32* %R
-
-	%q = bitcast i8* %P to {i32, i32}*
-	%r = getelementptr {i32, i32}* %q, i32 0, i32 1
-	%s = load i32* %r
-
-	%t = sub i32 %S, %s
-	ret i32 %t
-}
diff --git a/libclamav/c++/llvm/test/Analysis/BasicAA/2009-10-13-AtomicModRef.ll b/libclamav/c++/llvm/test/Analysis/BasicAA/2009-10-13-AtomicModRef.ll
new file mode 100644
index 0000000..6475471
--- /dev/null
+++ b/libclamav/c++/llvm/test/Analysis/BasicAA/2009-10-13-AtomicModRef.ll
@@ -0,0 +1,17 @@
+; RUN: opt -gvn -instcombine -S < %s | FileCheck %s
+target datalayout = "E-p:64:64:64-a0:0:8-f32:32:32-f64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-v64:64:64-v128:128:128"
+
+declare i8 @llvm.atomic.load.add.i8.p0i8(i8*, i8)
+
+define i8 @foo(i8* %ptr) {
+  %P = getelementptr i8* %ptr, i32 0
+  %Q = getelementptr i8* %ptr, i32 1
+; CHECK: getelementptr
+  %X = load i8* %P
+  %Y = call i8 @llvm.atomic.load.add.i8.p0i8(i8* %Q, i8 1)
+  %Z = load i8* %P
+; CHECK-NOT: = load
+  %A = sub i8 %X, %Z
+  ret i8 %A
+; CHECK: ret i8 0
+}
diff --git a/libclamav/c++/llvm/test/Analysis/BasicAA/2009-10-13-GEP-BaseNoAlias.ll b/libclamav/c++/llvm/test/Analysis/BasicAA/2009-10-13-GEP-BaseNoAlias.ll
new file mode 100644
index 0000000..771636f
--- /dev/null
+++ b/libclamav/c++/llvm/test/Analysis/BasicAA/2009-10-13-GEP-BaseNoAlias.ll
@@ -0,0 +1,30 @@
+; RUN: opt < %s -aa-eval -print-all-alias-modref-info -disable-output |& grep {NoAlias:.*%P,.*@Z}
+; If GEP base doesn't alias Z, then GEP doesn't alias Z.
+; rdar://7282591
+
+@Y = common global i32 0
+@Z = common global i32 0
+
+define void @foo(i32 %cond) nounwind ssp {
+entry:
+  %a = alloca i32
+  %tmp = icmp ne i32 %cond, 0
+  br i1 %tmp, label %bb, label %bb1
+
+bb:
+  %b = getelementptr i32* %a, i32 0
+  br label %bb2
+
+bb1:
+  br label %bb2
+
+bb2:
+  %P = phi i32* [ %b, %bb ], [ @Y, %bb1 ]
+  %tmp1 = load i32* @Z, align 4
+  store i32 123, i32* %P, align 4
+  %tmp2 = load i32* @Z, align 4
+  br label %return
+
+return:
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/Analysis/BasicAA/cas.ll b/libclamav/c++/llvm/test/Analysis/BasicAA/cas.ll
index 87772bf..4ce7811 100644
--- a/libclamav/c++/llvm/test/Analysis/BasicAA/cas.ll
+++ b/libclamav/c++/llvm/test/Analysis/BasicAA/cas.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -basicaa -gvn -S | grep load | count 1
+; RUN: opt < %s -basicaa -gvn -instcombine -S | grep {ret i32 0}
 
 @flag0 = internal global i32 zeroinitializer
 @turn = internal global i32 zeroinitializer
@@ -6,9 +6,10 @@
 
 define i32 @main() {
   %a = load i32* @flag0
-	%b = tail call i32 @llvm.atomic.swap.i32.p0i32(i32* @turn, i32 1)
+  %b = tail call i32 @llvm.atomic.swap.i32.p0i32(i32* @turn, i32 1)
   %c = load i32* @flag0
-	ret i32 %c
+  %d = sub i32 %a, %c
+  ret i32 %d
 }
 
 declare i32 @llvm.atomic.swap.i32.p0i32(i32*, i32) nounwind
\ No newline at end of file
diff --git a/libclamav/c++/llvm/test/Analysis/BasicAA/featuretest.ll b/libclamav/c++/llvm/test/Analysis/BasicAA/featuretest.ll
index 737ee45..50dc886 100644
--- a/libclamav/c++/llvm/test/Analysis/BasicAA/featuretest.ll
+++ b/libclamav/c++/llvm/test/Analysis/BasicAA/featuretest.ll
@@ -2,6 +2,7 @@
 ; determine, as noted in the comments.
 
 ; RUN: opt < %s -basicaa -gvn -instcombine -dce -S | not grep REMOVE
+target datalayout = "E-p:64:64:64-a0:0:8-f32:32:32-f64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-v64:64:64-v128:128:128"
 
 @Global = external global { i32 }
 
diff --git a/libclamav/c++/llvm/test/Analysis/BasicAA/gep-alias.ll b/libclamav/c++/llvm/test/Analysis/BasicAA/gep-alias.ll
new file mode 100644
index 0000000..1ed0312
--- /dev/null
+++ b/libclamav/c++/llvm/test/Analysis/BasicAA/gep-alias.ll
@@ -0,0 +1,171 @@
+; RUN: opt < %s -gvn -instcombine -S |& FileCheck %s
+
+target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
+
+; Make sure that basicaa thinks R and r are must aliases.
+define i32 @test1(i8 * %P) {
+entry:
+	%Q = bitcast i8* %P to {i32, i32}*
+	%R = getelementptr {i32, i32}* %Q, i32 0, i32 1
+	%S = load i32* %R
+
+	%q = bitcast i8* %P to {i32, i32}*
+	%r = getelementptr {i32, i32}* %q, i32 0, i32 1
+	%s = load i32* %r
+
+	%t = sub i32 %S, %s
+	ret i32 %t
+; CHECK: @test1
+; CHECK: ret i32 0
+}
+
+define i32 @test2(i8 * %P) {
+entry:
+	%Q = bitcast i8* %P to {i32, i32, i32}*
+	%R = getelementptr {i32, i32, i32}* %Q, i32 0, i32 1
+	%S = load i32* %R
+
+	%r = getelementptr {i32, i32, i32}* %Q, i32 0, i32 2
+  store i32 42, i32* %r
+
+	%s = load i32* %R
+
+	%t = sub i32 %S, %s
+	ret i32 %t
+; CHECK: @test2
+; CHECK: ret i32 0
+}
+
+
+; This was a miscompilation.
+define i32 @test3({float, {i32, i32, i32}}* %P) {
+entry:
+  %P2 = getelementptr {float, {i32, i32, i32}}* %P, i32 0, i32 1
+	%R = getelementptr {i32, i32, i32}* %P2, i32 0, i32 1
+	%S = load i32* %R
+
+	%r = getelementptr {i32, i32, i32}* %P2, i32 0, i32 2
+  store i32 42, i32* %r
+
+	%s = load i32* %R
+
+	%t = sub i32 %S, %s
+	ret i32 %t
+; CHECK: @test3
+; CHECK: ret i32 0
+}
+
+
+;; This is reduced from the SmallPtrSet constructor.
+%SmallPtrSetImpl = type { i8**, i32, i32, i32, [1 x i8*] }
+%SmallPtrSet64 = type { %SmallPtrSetImpl, [64 x i8*] }
+
+define i32 @test4(%SmallPtrSet64* %P) {
+entry:
+  %tmp2 = getelementptr inbounds %SmallPtrSet64* %P, i64 0, i32 0, i32 1
+  store i32 64, i32* %tmp2, align 8
+  %tmp3 = getelementptr inbounds %SmallPtrSet64* %P, i64 0, i32 0, i32 4, i64 64
+  store i8* null, i8** %tmp3, align 8
+  %tmp4 = load i32* %tmp2, align 8
+	ret i32 %tmp4
+; CHECK: @test4
+; CHECK: ret i32 64
+}
+
+; P[i] != p[i+1]
+define i32 @test5(i32* %p, i64 %i) {
+  %pi = getelementptr i32* %p, i64 %i
+  %i.next = add i64 %i, 1
+  %pi.next = getelementptr i32* %p, i64 %i.next
+  %x = load i32* %pi
+  store i32 42, i32* %pi.next
+  %y = load i32* %pi
+  %z = sub i32 %x, %y
+  ret i32 %z
+; CHECK: @test5
+; CHECK: ret i32 0
+}
+
+; P[i] != p[(i*4)|1]
+define i32 @test6(i32* %p, i64 %i1) {
+  %i = shl i64 %i1, 2
+  %pi = getelementptr i32* %p, i64 %i
+  %i.next = or i64 %i, 1
+  %pi.next = getelementptr i32* %p, i64 %i.next
+  %x = load i32* %pi
+  store i32 42, i32* %pi.next
+  %y = load i32* %pi
+  %z = sub i32 %x, %y
+  ret i32 %z
+; CHECK: @test6
+; CHECK: ret i32 0
+}
+
+; P[1] != P[i*4]
+define i32 @test7(i32* %p, i64 %i) {
+  %pi = getelementptr i32* %p, i64 1
+  %i.next = shl i64 %i, 2
+  %pi.next = getelementptr i32* %p, i64 %i.next
+  %x = load i32* %pi
+  store i32 42, i32* %pi.next
+  %y = load i32* %pi
+  %z = sub i32 %x, %y
+  ret i32 %z
+; CHECK: @test7
+; CHECK: ret i32 0
+}
+
+; P[zext(i)] != p[zext(i+1)]
+; PR1143
+define i32 @test8(i32* %p, i32 %i) {
+  %i1 = zext i32 %i to i64
+  %pi = getelementptr i32* %p, i64 %i1
+  %i.next = add i32 %i, 1
+  %i.next2 = zext i32 %i.next to i64
+  %pi.next = getelementptr i32* %p, i64 %i.next2
+  %x = load i32* %pi
+  store i32 42, i32* %pi.next
+  %y = load i32* %pi
+  %z = sub i32 %x, %y
+  ret i32 %z
+; CHECK: @test8
+; CHECK: ret i32 0
+}
+
+define i8 @test9([4 x i8] *%P, i32 %i, i32 %j) {
+  %i2 = shl i32 %i, 2
+  %i3 = add i32 %i2, 1
+  ; P2 = P + 1 + 4*i
+  %P2 = getelementptr [4 x i8] *%P, i32 0, i32 %i3
+
+  %j2 = shl i32 %j, 2
+  
+  ; P4 = P + 4*j
+  %P4 = getelementptr [4 x i8]* %P, i32 0, i32 %j2
+
+  %x = load i8* %P2
+  store i8 42, i8* %P4
+  %y = load i8* %P2
+  %z = sub i8 %x, %y
+  ret i8 %z
+; CHECK: @test9
+; CHECK: ret i8 0
+}
+
+define i8 @test10([4 x i8] *%P, i32 %i) {
+  %i2 = shl i32 %i, 2
+  %i3 = add i32 %i2, 4
+  ; P2 = P + 4 + 4*i
+  %P2 = getelementptr [4 x i8] *%P, i32 0, i32 %i3
+  
+  ; P4 = P + 4*i
+  %P4 = getelementptr [4 x i8]* %P, i32 0, i32 %i2
+
+  %x = load i8* %P2
+  store i8 42, i8* %P4
+  %y = load i8* %P2
+  %z = sub i8 %x, %y
+  ret i8 %z
+; CHECK: @test10
+; CHECK: ret i8 0
+}
diff --git a/libclamav/c++/llvm/test/Analysis/BasicAA/global-size.ll b/libclamav/c++/llvm/test/Analysis/BasicAA/global-size.ll
index 0a643d4..b9cbbcc 100644
--- a/libclamav/c++/llvm/test/Analysis/BasicAA/global-size.ll
+++ b/libclamav/c++/llvm/test/Analysis/BasicAA/global-size.ll
@@ -2,6 +2,7 @@
 ; the global.
 
 ; RUN: opt < %s -basicaa -gvn -instcombine -S | not grep load
+target datalayout = "E-p:64:64:64-a0:0:8-f32:32:32-f64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-v64:64:64-v128:128:128"
 
 @B = global i16 8               ; <i16*> [#uses=2]
 
diff --git a/libclamav/c++/llvm/test/Analysis/BasicAA/modref.ll b/libclamav/c++/llvm/test/Analysis/BasicAA/modref.ll
index 8f7c0a7..3f642cf 100644
--- a/libclamav/c++/llvm/test/Analysis/BasicAA/modref.ll
+++ b/libclamav/c++/llvm/test/Analysis/BasicAA/modref.ll
@@ -1,15 +1,125 @@
-; A very rudimentary test on AliasAnalysis::getModRefInfo.
-; RUN: opt < %s -print-all-alias-modref-info -aa-eval -disable-output |& \
-; RUN: not grep NoModRef
+; RUN: opt < %s -basicaa -gvn -dse -S | FileCheck %s
+target datalayout = "E-p:64:64:64-a0:0:8-f32:32:32-f64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-v64:64:64-v128:128:128"
 
-define i32 @callee() {
-        %X = alloca { i32, i32 }                ; <{ i32, i32 }*> [#uses=1]
-        %Y = getelementptr { i32, i32 }* %X, i64 0, i32 0               ; <i32*> [#uses=1]
-        %Z = load i32* %Y               ; <i32> [#uses=1]
-        ret i32 %Z
+declare void @llvm.memset.i32(i8*, i8, i32, i32)
+declare void @llvm.memset.i8(i8*, i8, i8, i32)
+declare void @llvm.memcpy.i8(i8*, i8*, i8, i32)
+declare void @llvm.memcpy.i32(i8*, i8*, i32, i32)
+declare void @llvm.lifetime.end(i64, i8* nocapture)
+
+declare void @external(i32*) 
+
+define i32 @test0(i8* %P) {
+  %A = alloca i32
+  call void @external(i32* %A)
+  
+  store i32 0, i32* %A
+  
+  call void @llvm.memset.i32(i8* %P, i8 0, i32 42, i32 1)
+  
+  %B = load i32* %A
+  ret i32 %B
+  
+; CHECK: @test0
+; CHECK: ret i32 0
+}
+
+define i8 @test1() {
+; CHECK: @test1
+  %A = alloca i8
+  %B = alloca i8
+
+  store i8 2, i8* %B  ;; Not written to by memcpy
+
+  call void @llvm.memcpy.i8(i8* %A, i8* %B, i8 -1, i32 0)
+
+  %C = load i8* %B
+  ret i8 %C
+; CHECK: ret i8 2
+}
+
+define i8 @test2(i8* %P) {
+; CHECK: @test2
+  %P2 = getelementptr i8* %P, i32 127
+  store i8 1, i8* %P2  ;; Not dead across memset
+  call void @llvm.memset.i8(i8* %P, i8 2, i8 127, i32 0)
+  %A = load i8* %P2
+  ret i8 %A
+; CHECK: ret i8 1
 }
 
-define i32 @caller() {
-        %X = call i32 @callee( )                ; <i32> [#uses=1]
-        ret i32 %X
+define i8 @test2a(i8* %P) {
+; CHECK: @test2a
+  %P2 = getelementptr i8* %P, i32 126
+  
+  ;; FIXME: DSE isn't zapping this dead store.
+  store i8 1, i8* %P2  ;; Dead, clobbered by memset.
+  
+  call void @llvm.memset.i8(i8* %P, i8 2, i8 127, i32 0)
+  %A = load i8* %P2
+  ret i8 %A
+; CHECK: %A = load i8* %P2
+; CHECK: ret i8 %A
 }
+
+define void @test3(i8* %P, i8 %X) {
+; CHECK: @test3
+; CHECK-NOT: store
+; CHECK-NOT: %Y
+  %Y = add i8 %X, 1     ;; Dead, because the only use (the store) is dead.
+  
+  %P2 = getelementptr i8* %P, i32 2
+  store i8 %Y, i8* %P2  ;; Not read by lifetime.end, should be removed.
+; CHECK: store i8 2, i8* %P2
+  call void @llvm.lifetime.end(i64 1, i8* %P)
+  store i8 2, i8* %P2
+; CHECK-NOT: store
+  ret void
+; CHECK: ret void
+}
+
+define void @test3a(i8* %P, i8 %X) {
+; CHECK: @test3a
+  %Y = add i8 %X, 1     ;; Dead, because the only use (the store) is dead.
+  
+  %P2 = getelementptr i8* %P, i32 2
+  store i8 %Y, i8* %P2  ;; FIXME: Killed by llvm.lifetime.end, should be zapped.
+; CHECK: store i8 %Y, i8* %P2
+  call void @llvm.lifetime.end(i64 10, i8* %P)
+  ret void
+; CHECK: ret void
+}
+
+@G1 = external global i32
+@G2 = external global [4000 x i32]
+
+define i32 @test4(i8* %P) {
+  %tmp = load i32* @G1
+  call void @llvm.memset.i32(i8* bitcast ([4000 x i32]* @G2 to i8*), i8 0, i32 4000, i32 1)
+  %tmp2 = load i32* @G1
+  %sub = sub i32 %tmp2, %tmp
+  ret i32 %sub
+; CHECK: @test4
+; CHECK: load i32* @G
+; CHECK: memset.i32
+; CHECK-NOT: load
+; CHECK: sub i32 %tmp, %tmp
+}
+
+; Verify that basicaa is handling variable-length memcpy, knowing it doesn't
+; write to G1.
+define i32 @test5(i8* %P, i32 %Len) {
+  %tmp = load i32* @G1
+  call void @llvm.memcpy.i32(i8* bitcast ([4000 x i32]* @G2 to i8*), i8* bitcast (i32* @G1 to i8*), i32 %Len, i32 1)
+  %tmp2 = load i32* @G1
+  %sub = sub i32 %tmp2, %tmp
+  ret i32 %sub
+; CHECK: @test5
+; CHECK: load i32* @G
+; CHECK: memcpy.i32
+; CHECK-NOT: load
+; CHECK: sub i32 %tmp, %tmp
+}
+
diff --git a/libclamav/c++/llvm/test/Analysis/BasicAA/phi-aa.ll b/libclamav/c++/llvm/test/Analysis/BasicAA/phi-aa.ll
new file mode 100644
index 0000000..0288960
--- /dev/null
+++ b/libclamav/c++/llvm/test/Analysis/BasicAA/phi-aa.ll
@@ -0,0 +1,29 @@
+; RUN: opt < %s -basicaa -aa-eval -print-all-alias-modref-info -disable-output |& grep {NoAlias:.*%P,.*@Z}
+; rdar://7282591
+
+@X = common global i32 0
+@Y = common global i32 0
+@Z = common global i32 0
+
+define void @foo(i32 %cond) nounwind ssp {
+entry:
+  %"alloca point" = bitcast i32 0 to i32
+  %tmp = icmp ne i32 %cond, 0
+  br i1 %tmp, label %bb, label %bb1
+
+bb:
+  br label %bb2
+
+bb1:
+  br label %bb2
+
+bb2:
+  %P = phi i32* [ @X, %bb ], [ @Y, %bb1 ]
+  %tmp1 = load i32* @Z, align 4
+  store i32 123, i32* %P, align 4
+  %tmp2 = load i32* @Z, align 4
+  br label %return
+
+return:
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/Analysis/BasicAA/phi-and-select.ll b/libclamav/c++/llvm/test/Analysis/BasicAA/phi-and-select.ll
new file mode 100644
index 0000000..c69e824
--- /dev/null
+++ b/libclamav/c++/llvm/test/Analysis/BasicAA/phi-and-select.ll
@@ -0,0 +1,73 @@
+; RUN: opt < %s -aa-eval -print-all-alias-modref-info -disable-output \
+; RUN:   |& grep {NoAlias:	double\\* \[%\]a, double\\* \[%\]b\$} | count 4
+
+; BasicAA should detect NoAliases in PHIs and Selects.
+
+; Two PHIs in the same block.
+define void @foo(i1 %m, double* noalias %x, double* noalias %y) {
+entry:
+  br i1 %m, label %true, label %false
+
+true:
+  br label %exit
+
+false:
+  br label %exit
+
+exit:
+  %a = phi double* [ %x, %true ], [ %y, %false ]
+  %b = phi double* [ %x, %false ], [ %y, %true ]
+  volatile store double 0.0, double* %a
+  volatile store double 1.0, double* %b
+  ret void
+}
+
+; Two selects with the same condition.
+define void @bar(i1 %m, double* noalias %x, double* noalias %y) {
+entry:
+  %a = select i1 %m, double* %x, double* %y
+  %b = select i1 %m, double* %y, double* %x
+  volatile store double 0.000000e+00, double* %a
+  volatile store double 1.000000e+00, double* %b
+  ret void
+}
+
+; Two PHIs with disjoint sets of inputs.
+define void @qux(i1 %m, double* noalias %x, double* noalias %y,
+                 i1 %n, double* noalias %v, double* noalias %w) {
+entry:
+  br i1 %m, label %true, label %false
+
+true:
+  br label %exit
+
+false:
+  br label %exit
+
+exit:
+  %a = phi double* [ %x, %true ], [ %y, %false ]
+  br i1 %n, label %ntrue, label %nfalse
+
+ntrue:
+  br label %nexit
+
+nfalse:
+  br label %nexit
+
+nexit:
+  %b = phi double* [ %v, %ntrue ], [ %w, %nfalse ]
+  volatile store double 0.0, double* %a
+  volatile store double 1.0, double* %b
+  ret void
+}
+
+; Two selects with disjoint sets of arms.
+define void @fin(i1 %m, double* noalias %x, double* noalias %y,
+                 i1 %n, double* noalias %v, double* noalias %w) {
+entry:
+  %a = select i1 %m, double* %x, double* %y
+  %b = select i1 %n, double* %v, double* %w
+  volatile store double 0.000000e+00, double* %a
+  volatile store double 1.000000e+00, double* %b
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/Analysis/BasicAA/store-promote.ll b/libclamav/c++/llvm/test/Analysis/BasicAA/store-promote.ll
index d8e7c75..33d0f3a 100644
--- a/libclamav/c++/llvm/test/Analysis/BasicAA/store-promote.ll
+++ b/libclamav/c++/llvm/test/Analysis/BasicAA/store-promote.ll
@@ -3,6 +3,7 @@
 ; two pointers, then the load should be hoisted, and the store sunk.
 
 ; RUN: opt < %s -basicaa -licm -S | FileCheck %s
+target datalayout = "E-p:64:64:64-a0:0:8-f32:32:32-f64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-v64:64:64-v128:128:128"
 
 @A = global i32 7               ; <i32*> [#uses=3]
 @B = global i32 8               ; <i32*> [#uses=2]
diff --git a/libclamav/c++/llvm/test/Analysis/LoopInfo/2003-05-15-NestingProblem.ll b/libclamav/c++/llvm/test/Analysis/LoopInfo/2003-05-15-NestingProblem.ll
index 617c23f..9355aee 100644
--- a/libclamav/c++/llvm/test/Analysis/LoopInfo/2003-05-15-NestingProblem.ll
+++ b/libclamav/c++/llvm/test/Analysis/LoopInfo/2003-05-15-NestingProblem.ll
@@ -2,7 +2,7 @@
 ; not a child of the loopentry.6 loop.
 ;
 ; RUN: opt < %s -analyze -loops | \
-; RUN:   grep {^            Loop at depth 4 containing: %loopentry.7<header><latch><exit>}
+; RUN:   grep {^            Loop at depth 4 containing: %loopentry.7<header><latch><exiting>}
 
 define void @getAndMoveToFrontDecode() {
 	br label %endif.2
diff --git a/libclamav/c++/llvm/test/Analysis/PointerTracking/sizes.ll b/libclamav/c++/llvm/test/Analysis/PointerTracking/sizes.ll
index c0b0606..c8ca648 100644
--- a/libclamav/c++/llvm/test/Analysis/PointerTracking/sizes.ll
+++ b/libclamav/c++/llvm/test/Analysis/PointerTracking/sizes.ll
@@ -60,9 +60,9 @@ entry:
 	ret i32 %add16
 }
 
-define i32 @foo2(i32 %n) nounwind {
+define i32 @foo2(i64 %n) nounwind {
 entry:
-	%call = malloc i8, i32 %n		; <i8*> [#uses=1]
+	%call = tail call i8* @malloc(i64 %n)  ; <i8*> [#uses=1]
 ; CHECK: %call =
 ; CHECK: ==> %n elements, %n bytes allocated
 	%call2 = tail call i8* @calloc(i64 2, i64 4) nounwind		; <i8*> [#uses=1]
@@ -74,11 +74,13 @@ entry:
 	%call6 = tail call i32 @bar(i8* %call) nounwind		; <i32> [#uses=1]
 	%call8 = tail call i32 @bar(i8* %call2) nounwind		; <i32> [#uses=1]
 	%call10 = tail call i32 @bar(i8* %call4) nounwind		; <i32> [#uses=1]
-	%add = add i32 %call8, %call6		; <i32> [#uses=1]
-	%add11 = add i32 %add, %call10		; <i32> [#uses=1]
+	%add = add i32 %call8, %call6                   ; <i32> [#uses=1]
+	%add11 = add i32 %add, %call10                ; <i32> [#uses=1]
 	ret i32 %add11
 }
 
+declare noalias i8* @malloc(i64) nounwind
+
 declare noalias i8* @calloc(i64, i64) nounwind
 
 declare noalias i8* @realloc(i8* nocapture, i64) nounwind
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-11-18-OrInstruction.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-11-18-OrInstruction.ll
index 2b3c982..27fe714 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-11-18-OrInstruction.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-11-18-OrInstruction.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output | grep -e {-->  %b}
+; RUN: opt < %s -analyze -scalar-evolution -disable-output | FileCheck %s
 ; PR1810
 
 define void @fun() {
@@ -16,3 +16,6 @@ body:
 exit:        
         ret void
 }
+
+; CHECK: -->  %b
+
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SGTTripCount.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SGTTripCount.ll
index 97d0640..37b5b94 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SGTTripCount.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SGTTripCount.ll
@@ -1,6 +1,5 @@
 ; RUN: opt < %s -analyze -scalar-evolution -disable-output \
-; RUN:   -scalar-evolution-max-iterations=0 | \
-; RUN: grep -F "backedge-taken count is (-1 + (-1 * %j))"
+; RUN:   -scalar-evolution-max-iterations=0 | FileCheck %s
 ; PR2607
 
 define i32 @_Z1aj(i32 %j) nounwind  {
@@ -25,3 +24,5 @@ return:		; preds = %return.loopexit, %entry
 	ret i32 %i.0.lcssa
 }
 
+; CHECK: backedge-taken count is (-1 + (-1 * %j))
+
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SMinExpr.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SMinExpr.ll
index 7f4de91..d54b3b4 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SMinExpr.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SMinExpr.ll
@@ -1,6 +1,5 @@
 ; RUN: opt < %s -analyze -scalar-evolution -disable-output \
-; RUN:   -scalar-evolution-max-iterations=0 | \
-; RUN: grep -F "backedge-taken count is (-2147483632 + ((-1 + (-1 * %x)) smax (-1 + (-1 * %y))))"
+; RUN:   -scalar-evolution-max-iterations=0 | FileCheck %s
 ; PR2607
 
 define i32 @b(i32 %x, i32 %y) nounwind {
@@ -22,3 +21,6 @@ afterfor:		; preds = %forinc, %entry
 	%j.0.lcssa = phi i32 [ -2147483632, %entry ], [ %dec, %forinc ]
 	ret i32 %j.0.lcssa
 }
+
+; CHECK: backedge-taken count is (-2147483632 + ((-1 + (-1 * %x)) smax (-1 + (-1 * %y))))
+
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-IVOverflow.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-IVOverflow.ll
index fa09895..06200ae 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-IVOverflow.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-IVOverflow.ll
@@ -1,5 +1,5 @@
 ; RUN: opt < %s -analyze -scalar-evolution -disable-output \
-; RUN:   -scalar-evolution-max-iterations=0 | grep -F "Exits: 20028"
+; RUN:   -scalar-evolution-max-iterations=0 | FileCheck %s
 ; PR2621
 
 define i32 @a() nounwind  {
@@ -23,3 +23,5 @@ bb2:
 	ret i32 %4
 }
 
+; CHECK: Exits: 20028
+
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-LongAddRec.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-LongAddRec.ll
index 5a28117..f3c703a 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-LongAddRec.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-LongAddRec.ll
@@ -1,5 +1,5 @@
 ; RUN: opt < %s -analyze -scalar-evolution -disable-output \
-; RUN:   -scalar-evolution-max-iterations=0 | grep -F "Exits: -19168"
+; RUN:   -scalar-evolution-max-iterations=0 | FileCheck %s
 ; PR2621
 
 define i32 @a() nounwind  {
@@ -54,3 +54,5 @@ bb2:		; preds = %bb1
 	ret i32 %19
 }
 
+; CHECK: Exits: -19168
+
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll
index 6ed2614..e81530e 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll
@@ -1,5 +1,6 @@
 ; RUN: opt < %s -analyze -scalar-evolution -disable-output | grep {count is 2}
 ; PR3171
+target datalayout = "E-p:64:64:64-a0:0:8-f32:32:32-f64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-v64:64:64-v128:128:128"
 
 	%struct.Foo = type { i32 }
 	%struct.NonPod = type { [2 x %struct.Foo] }
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/scev-aa.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/scev-aa.ll
index 0dcf529..371d07c 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/scev-aa.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/scev-aa.ll
@@ -1,8 +1,8 @@
 ; RUN: opt < %s -scev-aa -aa-eval -print-all-alias-modref-info \
 ; RUN:   |& FileCheck %s
 
-; At the time of this writing, all of these CHECK lines are cases that
-; plain -basicaa misses.
+; At the time of this writing, -basicaa only misses the example of the form
+; A[i+(j+1)] != A[i+j].  However, it does get A[(i+j)+1] != A[i+j].
 
 target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64"
 
diff --git a/libclamav/c++/llvm/test/Assembler/alignstack.ll b/libclamav/c++/llvm/test/Assembler/alignstack.ll
new file mode 100644
index 0000000..9f2059f
--- /dev/null
+++ b/libclamav/c++/llvm/test/Assembler/alignstack.ll
@@ -0,0 +1,36 @@
+; RUN: llvm-as < %s | llvm-dis | FileCheck %s
+target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
+target triple = "i386-apple-darwin10.0"
+
+define void @test1() nounwind {
+; CHECK: test1
+; CHECK: sideeffect
+; CHECK-NOT: alignstack
+	tail call void asm sideeffect "mov", "~{dirflag},~{fpsr},~{flags}"() nounwind
+	ret void
+; CHECK: ret
+}
+define void @test2() nounwind {
+; CHECK: test2
+; CHECK: sideeffect
+; CHECK: alignstack
+	tail call void asm sideeffect alignstack "mov", "~{dirflag},~{fpsr},~{flags}"() nounwind
+	ret void
+; CHECK: ret
+}
+define void @test3() nounwind {
+; CHECK: test3
+; CHECK-NOT: sideeffect
+; CHECK: alignstack
+	tail call void asm alignstack "mov", "~{dirflag},~{fpsr},~{flags}"() nounwind
+	ret void
+; CHECK: ret
+}
+define void @test4() nounwind {
+; CHECK: test4
+; CHECK-NOT: sideeffect
+; CHECK-NOT: alignstack
+	tail call void asm  "mov", "~{dirflag},~{fpsr},~{flags}"() nounwind
+	ret void
+; CHECK: ret
+}
diff --git a/libclamav/c++/llvm/test/CMakeLists.txt b/libclamav/c++/llvm/test/CMakeLists.txt
index 627b57d..d7037ab 100644
--- a/libclamav/c++/llvm/test/CMakeLists.txt
+++ b/libclamav/c++/llvm/test/CMakeLists.txt
@@ -1,31 +1,39 @@
-include(GetTargetTriple)
-get_target_triple(target)
-
 foreach(c ${LLVM_TARGETS_TO_BUILD})
   set(TARGETS_BUILT "${TARGETS_BUILT} ${c}")
 endforeach(c)
 set(TARGETS_TO_BUILD ${TARGETS_BUILT})
 
+# FIXME: This won't work for project files; we need to use a --param.
+set(LLVM_LIBS_DIR "${LLVM_BINARY_DIR}/lib/${CMAKE_CFG_INTDIR}")
+set(SHLIBEXT "${LTDL_SHLIB_EXT}")
+
 include(FindPythonInterp)
 if(PYTHONINTERP_FOUND)
-  get_target_property(LLVM_TOOLS_PATH llvm-config RUNTIME_OUTPUT_DIRECTORY)
-
   configure_file(
     ${CMAKE_CURRENT_SOURCE_DIR}/site.exp.in
     ${CMAKE_CURRENT_BINARY_DIR}/site.exp)
 
-  add_custom_target(llvm-test
+  MAKE_DIRECTORY(${CMAKE_CURRENT_BINARY_DIR}/Unit)
+
+  add_custom_target(check
     COMMAND sed -e "s#\@LLVM_SOURCE_DIR\@#${LLVM_MAIN_SRC_DIR}#"
                 -e "s#\@LLVM_BINARY_DIR\@#${LLVM_BINARY_DIR}#"
-                -e "s#\@LLVM_TOOLS_DIR\@#${LLVM_TOOLS_PATH}/${CMAKE_CFG_INTDIR}#"
-                -e "s#\@LLVMGCC_DIR\@##"
+                -e "s#\@LLVM_TOOLS_DIR\@#${LLVM_TOOLS_BINARY_DIR}/${CMAKE_CFG_INTDIR}#"
+                -e "s#\@LLVMGCCDIR\@##"
                 ${CMAKE_CURRENT_SOURCE_DIR}/lit.site.cfg.in >
                 ${CMAKE_CURRENT_BINARY_DIR}/lit.site.cfg
-    COMMAND ${PYTHON_EXECUTABLE} 
+    COMMAND sed -e "s#\@LLVM_SOURCE_DIR\@#${LLVM_MAIN_SRC_DIR}#"
+                -e "s#\@LLVM_BINARY_DIR\@#${LLVM_BINARY_DIR}#"
+                -e "s#\@LLVM_TOOLS_DIR\@#${LLVM_TOOLS_BINARY_DIR}/${CMAKE_CFG_INTDIR}#"
+                -e "s#\@LLVMGCCDIR\@##"
+                -e "s#\@LLVM_BUILD_MODE\@#${CMAKE_CFG_INTDIR}#"
+                ${CMAKE_CURRENT_SOURCE_DIR}/Unit/lit.site.cfg.in >
+                ${CMAKE_CURRENT_BINARY_DIR}/Unit/lit.site.cfg
+    COMMAND ${PYTHON_EXECUTABLE}
                 ${LLVM_SOURCE_DIR}/utils/lit/lit.py
                 -sv
                 ${CMAKE_CURRENT_BINARY_DIR}
                 DEPENDS
                 COMMENT "Running LLVM regression tests")
 
-endif()  
+endif()
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2008-11-19-ScavengerAssert.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2008-11-19-ScavengerAssert.ll
deleted file mode 100644
index 35ca7b4..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/2008-11-19-ScavengerAssert.ll
+++ /dev/null
@@ -1,414 +0,0 @@
-; RUN: llc < %s -mtriple=arm-apple-darwin9 -stats |& grep asm-printer | grep 154
-
-	%"struct.Adv5::Ekin<3>" = type <{ i8 }>
-	%"struct.Adv5::X::Energyflux<3>" = type { double }
-	%"struct.BinaryNode<OpAdd,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> > >" = type { %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >", %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >" }
-	%"struct.BinaryNode<OpAdd,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > >" = type { %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,Remote<BrickView> >", %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,Remote<BrickView> >" }
-	%"struct.BinaryNode<OpMultiply,Scalar<double>,BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > >" = type { %"struct.Adv5::X::Energyflux<3>", %"struct.BinaryNode<OpAdd,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> > >" }
-	%"struct.BinaryNode<OpMultiply,Scalar<double>,BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > > >" = type { %"struct.Adv5::X::Energyflux<3>", %"struct.BinaryNode<OpAdd,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > >" }
-	%"struct.Centering<3>" = type { i32, i32, %"struct.std::vector<Loc<3>,std::allocator<Loc<3> > >", %"struct.std::vector<Vector<3, double, Full>,std::allocator<Vector<3, double, Full> > >" }
-	%"struct.ContextMapper<1>" = type { i32 (...)** }
-	%"struct.DataBlockController<double>" = type { %"struct.RefBlockController<double>", %"struct.Adv5::Ekin<3>"*, i8, %"struct.SingleObservable<int>", i32 }
-	%"struct.DataBlockPtr<double,false>" = type { %"struct.RefCountedBlockPtr<double,false,DataBlockController<double> >" }
-	%"struct.Domain<1,DomainTraits<Interval<1> > >" = type { %"struct.DomainBase<DomainTraits<Interval<1> > >" }
-	%"struct.Domain<1,DomainTraits<Loc<1> > >" = type { %"struct.DomainBase<DomainTraits<Loc<1> > >" }
-	%"struct.Domain<1,DomainTraits<Range<1> > >" = type { %"struct.DomainBase<DomainTraits<Range<1> > >" }
-	%"struct.Domain<3,DomainTraits<Interval<3> > >" = type { %"struct.DomainBase<DomainTraits<Interval<3> > >" }
-	%"struct.Domain<3,DomainTraits<Loc<3> > >" = type { %"struct.DomainBase<DomainTraits<Loc<3> > >" }
-	%"struct.Domain<3,DomainTraits<Range<3> > >" = type { %"struct.DomainBase<DomainTraits<Range<3> > >" }
-	%"struct.DomainBase<DomainTraits<Interval<1> > >" = type { [2 x i32] }
-	%"struct.DomainBase<DomainTraits<Interval<3> > >" = type { [3 x %"struct.WrapNoInit<Interval<1> >"] }
-	%"struct.DomainBase<DomainTraits<Loc<1> > >" = type { i32 }
-	%"struct.DomainBase<DomainTraits<Loc<3> > >" = type { [3 x %"struct.WrapNoInit<Loc<1> >"] }
-	%"struct.DomainBase<DomainTraits<Range<1> > >" = type { [3 x i32] }
-	%"struct.DomainBase<DomainTraits<Range<3> > >" = type { [3 x %"struct.WrapNoInit<Range<1> >"] }
-	%"struct.DomainLayout<3>" = type { %"struct.Node<Interval<3>,Interval<3> >" }
-	%"struct.DomainMap<Interval<1>,int>" = type { i32, %"struct.DomainMapNode<Interval<1>,int>"*, %"struct.DomainMapIterator<Interval<1>,int>" }
-	%"struct.DomainMapIterator<Interval<1>,int>" = type { %"struct.DomainMapNode<Interval<1>,int>"*, %"struct.std::_List_const_iterator<Interval<3> >" }
-	%"struct.DomainMapNode<Interval<1>,int>" = type { %"struct.Interval<1>", %"struct.DomainMapNode<Interval<1>,int>"*, %"struct.DomainMapNode<Interval<1>,int>"*, %"struct.DomainMapNode<Interval<1>,int>"*, %"struct.std::list<Interval<3>,std::allocator<Interval<3> > >" }
-	%"struct.Engine<3,Zero<double>,ConstantFunction>" = type { %"struct.Adv5::Ekin<3>", %"struct.Interval<3>", [3 x i32] }
-	%"struct.Engine<3,double,Brick>" = type { %"struct.Pooma::BrickBase<3>", %"struct.DataBlockPtr<double,false>", double* }
-	%"struct.Engine<3,double,BrickView>" = type { %"struct.Pooma::BrickViewBase<3>", %"struct.DataBlockPtr<double,false>", double* }
-	%"struct.Engine<3,double,ConstantFunction>" = type { double, %"struct.Interval<3>", [3 x i32] }
-	%"struct.Engine<3,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > > > >" = type { %"struct.BinaryNode<OpMultiply,Scalar<double>,BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > >", %"struct.Interval<3>" }
-	%"struct.Engine<3,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > > > > >" = type { %"struct.BinaryNode<OpMultiply,Scalar<double>,BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > > >", %"struct.Interval<3>" }
-	%"struct.Engine<3,double,MultiPatch<GridTag, Remote<Brick> > >" = type { %"struct.ContextMapper<1>", %"struct.GridLayout<3>", %"struct.RefCountedBlockPtr<Engine<3, double, Remote<Brick> >,false,RefBlockController<Engine<3, double, Remote<Brick> > > >", i32* }
-	%"struct.Engine<3,double,MultiPatchView<GridTag, Remote<Brick>, 3> >" = type { %"struct.GridLayoutView<3,3>", %"struct.Engine<3,double,MultiPatch<GridTag, Remote<Brick> > >" }
-	%"struct.Engine<3,double,Remote<Brick> >" = type { %"struct.Interval<3>", i32, %"struct.RefCountedPtr<Shared<Engine<3, double, Brick> > >" }
-	%"struct.Engine<3,double,Remote<BrickView> >" = type { %"struct.Interval<3>", i32, %"struct.RefCountedPtr<Shared<Engine<3, double, BrickView> > >" }
-	%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,Zero<double>,ConstantFunction>" = type { %"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,Zero<double>,ConstantFunction>" }
-	%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ConstantFunction>" = type { %"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ConstantFunction>" }
-	%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > > > >" = type { %"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > > > >" }
-	%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > > > > >" = type { %"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > > > > >" }
-	%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >" = type { %"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >" }
-	%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >" = type { %"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >" }
-	%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,Remote<BrickView> >" = type { %"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,Remote<BrickView> >" }
-	%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,Zero<double>,ConstantFunction>" = type { i32, %"struct.Centering<3>", i32, %"struct.RefCountedBlockPtr<FieldEngineBaseData<3, Zero<double>, ConstantFunction>,false,RefBlockController<FieldEngineBaseData<3, Zero<double>, ConstantFunction> > >", %"struct.Interval<3>", %"struct.GuardLayers<3>", %"struct.UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >" }
-	%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ConstantFunction>" = type { i32, %"struct.Centering<3>", i32, %"struct.RefCountedBlockPtr<FieldEngineBaseData<3, double, ConstantFunction>,false,RefBlockController<FieldEngineBaseData<3, double, ConstantFunction> > >", %"struct.Interval<3>", %"struct.GuardLayers<3>", %"struct.UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >" }
-	%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > > > >" = type { %"struct.Engine<3,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > > > >", %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"* }
-	%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > > > > >" = type { %"struct.Engine<3,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > > > > >", %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,Remote<BrickView> >"* }
-	%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >" = type { i32, %"struct.Centering<3>", i32, %"struct.RefCountedBlockPtr<FieldEngineBaseData<3, double, MultiPatch<GridTag, Remote<Brick> > >,false,RefBlockController<FieldEngineBaseData<3, double, MultiPatch<GridTag, Remote<Brick> > > > >", %"struct.Interval<3>", %"struct.GuardLayers<3>", %"struct.UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >" }
-	%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >" = type { i32, %"struct.Centering<3>", i32, %"struct.RefCountedBlockPtr<FieldEngineBaseData<3, double, MultiPatchView<GridTag, Remote<Brick>, 3> >,false,RefBlockController<FieldEngineBaseData<3, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > >", %"struct.Interval<3>", %"struct.GuardLayers<3>", %"struct.UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >" }
-	%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,Remote<BrickView> >" = type { i32, %"struct.Centering<3>", i32, %"struct.RefCountedBlockPtr<FieldEngineBaseData<3, double, Remote<BrickView> >,false,RefBlockController<FieldEngineBaseData<3, double, Remote<BrickView> > > >", %"struct.Interval<3>", %"struct.GuardLayers<3>", %"struct.UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >" }
-	%"struct.FieldEngineBaseData<3,Zero<double>,ConstantFunction>" = type { %"struct.Engine<3,Zero<double>,ConstantFunction>", %struct.RelationList }
-	%"struct.FieldEngineBaseData<3,double,ConstantFunction>" = type { %"struct.Engine<3,double,ConstantFunction>", %struct.RelationList }
-	%"struct.FieldEngineBaseData<3,double,MultiPatch<GridTag, Remote<Brick> > >" = type { %"struct.Engine<3,double,MultiPatch<GridTag, Remote<Brick> > >", %struct.RelationList }
-	%"struct.FieldEngineBaseData<3,double,MultiPatchView<GridTag, Remote<Brick>, 3> >" = type { %"struct.Engine<3,double,MultiPatchView<GridTag, Remote<Brick>, 3> >", %struct.RelationList }
-	%"struct.FieldEngineBaseData<3,double,Remote<BrickView> >" = type { %"struct.Engine<3,double,Remote<BrickView> >", %struct.RelationList }
-	%struct.GlobalIDDataBase = type { %"struct.std::vector<GlobalIDDataBase::Pack,std::allocator<GlobalIDDataBase::Pack> >", %"struct.std::map<int,InformStream*,std::less<int>,std::allocator<std::pair<const int, InformStream*> > >" }
-	%"struct.GlobalIDDataBase::Pack" = type { i32, i32, i32, i32 }
-	%"struct.GridLayout<3>" = type { %"struct.ContextMapper<1>", %"struct.LayoutBase<3,GridLayoutData<3> >", %"struct.Observable<GridLayout<3> >" }
-	%"struct.GridLayoutData<3>" = type { %"struct.LayoutBaseData<3>", %struct.RefCounted, [21 x i8], i8, [3 x i32], [3 x %"struct.DomainMap<Interval<1>,int>"], [3 x %"struct.DomainMap<Interval<1>,int>"] }
-	%"struct.GridLayoutView<3,3>" = type { %"struct.LayoutBaseView<3,3,GridLayoutViewData<3, 3> >" }
-	%"struct.GridLayoutViewData<3,3>" = type { %"struct.LayoutBaseViewData<3,3,GridLayout<3> >", %struct.RefCounted }
-	%"struct.GuardLayers<3>" = type { [3 x i32], [3 x i32] }
-	%"struct.INode<3>" = type { %"struct.Interval<3>", %struct.GlobalIDDataBase*, i32 }
-	%"struct.Interval<1>" = type { %"struct.Domain<1,DomainTraits<Interval<1> > >" }
-	%"struct.Interval<3>" = type { %"struct.Domain<3,DomainTraits<Interval<3> > >" }
-	%"struct.LayoutBase<3,GridLayoutData<3> >" = type { %"struct.RefCountedPtr<GridLayoutData<3> >" }
-	%"struct.LayoutBaseData<3>" = type { i32, %"struct.Interval<3>", %"struct.Interval<3>", %"struct.std::vector<Node<Interval<3>, Interval<3> >*,std::allocator<Node<Interval<3>, Interval<3> >*> >", %"struct.std::vector<Node<Interval<3>, Interval<3> >*,std::allocator<Node<Interval<3>, Interval<3> >*> >", %"struct.std::vector<Node<Interval<3>, Interval<3> >*,std::allocator<Node<Interval<3>, Interval<3> >*> >", i8, i8, %"struct.GuardLayers<3>", %"struct.GuardLayers<3>", %"struct.std::vector<LayoutBaseData<3>::GCFillInfo,std::allocator<LayoutBaseData<3>::GCFillInfo> >", [3 x i32], [3 x i32], %"struct.Loc<3>" }
-	%"struct.LayoutBaseData<3>::GCFillInfo" = type { %"struct.Interval<3>", i32, i32, i32 }
-	%"struct.LayoutBaseView<3,3,GridLayoutViewData<3, 3> >" = type { %"struct.RefCountedPtr<GridLayoutViewData<3, 3> >" }
-	%"struct.LayoutBaseViewData<3,3,GridLayout<3> >" = type { i32, %"struct.GridLayout<3>", %"struct.GuardLayers<3>", %"struct.GuardLayers<3>", %"struct.ViewIndexer<3,3>", %"struct.std::vector<Node<Interval<3>, Interval<3> >*,std::allocator<Node<Interval<3>, Interval<3> >*> >", %"struct.std::vector<Node<Interval<3>, Interval<3> >*,std::allocator<Node<Interval<3>, Interval<3> >*> >", %"struct.std::vector<Node<Interval<3>, Interval<3> >*,std::allocator<Node<Interval<3>, Interval<3> >*> >", i8 }
-	%"struct.Loc<1>" = type { %"struct.Domain<1,DomainTraits<Loc<1> > >" }
-	%"struct.Loc<3>" = type { %"struct.Domain<3,DomainTraits<Loc<3> > >" }
-	%"struct.MultiArg6<Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatch<GridTag, Remote<Brick> > >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatch<GridTag, Remote<Brick> > >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatch<GridTag, Remote<Brick> > >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatch<GridTag, Remote<Brick> > >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, ConstantFunction>,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, Zero<double>, ConstantFunction> >" = type { %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >", %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >", %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >", %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >", %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ConstantFunction>", %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,Zero<double>,ConstantFunction>" }
-	%"struct.NoMeshData<3>" = type { %struct.RefCounted, %"struct.Interval<3>", %"struct.Interval<3>", %"struct.Interval<3>", %"struct.Interval<3>" }
-	%"struct.Node<Interval<3>,Interval<3> >" = type { %"struct.Interval<3>", %"struct.Interval<3>", i32, i32, i32, i32 }
-	%"struct.Observable<GridLayout<3> >" = type { %"struct.GridLayout<3>"*, %"struct.std::vector<Observer<GridLayout<3> >*,std::allocator<Observer<GridLayout<3> >*> >", i32, %"struct.Adv5::Ekin<3>" }
-	%"struct.Pooma::BrickBase<3>" = type { %"struct.DomainLayout<3>", [3 x i32], [3 x i32], i32, i8 }
-	%"struct.Pooma::BrickViewBase<3>" = type { %"struct.Interval<3>", [3 x i32], [3 x i32], i32, i8 }
-	%"struct.Range<1>" = type { %"struct.Domain<1,DomainTraits<Range<1> > >" }
-	%"struct.Range<3>" = type { %"struct.Domain<3,DomainTraits<Range<3> > >" }
-	%"struct.RefBlockController<Engine<3, double, Remote<Brick> > >" = type { %struct.RefCounted, %"struct.Engine<3,double,Remote<Brick> >"*, %"struct.Engine<3,double,Remote<Brick> >"*, %"struct.Engine<3,double,Remote<Brick> >"*, i8 }
-	%"struct.RefBlockController<FieldEngineBaseData<3, Zero<double>, ConstantFunction> >" = type { %struct.RefCounted, %"struct.FieldEngineBaseData<3,Zero<double>,ConstantFunction>"*, %"struct.FieldEngineBaseData<3,Zero<double>,ConstantFunction>"*, %"struct.FieldEngineBaseData<3,Zero<double>,ConstantFunction>"*, i8 }
-	%"struct.RefBlockController<FieldEngineBaseData<3, double, ConstantFunction> >" = type { %struct.RefCounted, %"struct.FieldEngineBaseData<3,double,ConstantFunction>"*, %"struct.FieldEngineBaseData<3,double,ConstantFunction>"*, %"struct.FieldEngineBaseData<3,double,ConstantFunction>"*, i8 }
-	%"struct.RefBlockController<FieldEngineBaseData<3, double, MultiPatch<GridTag, Remote<Brick> > > >" = type { %struct.RefCounted, %"struct.FieldEngineBaseData<3,double,MultiPatch<GridTag, Remote<Brick> > >"*, %"struct.FieldEngineBaseData<3,double,MultiPatch<GridTag, Remote<Brick> > >"*, %"struct.FieldEngineBaseData<3,double,MultiPatch<GridTag, Remote<Brick> > >"*, i8 }
-	%"struct.RefBlockController<FieldEngineBaseData<3, double, MultiPatchView<GridTag, Remote<Brick>, 3> > >" = type { %struct.RefCounted, %"struct.FieldEngineBaseData<3,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"*, %"struct.FieldEngineBaseData<3,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"*, %"struct.FieldEngineBaseData<3,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"*, i8 }
-	%"struct.RefBlockController<FieldEngineBaseData<3, double, Remote<BrickView> > >" = type { %struct.RefCounted, %"struct.FieldEngineBaseData<3,double,Remote<BrickView> >"*, %"struct.FieldEngineBaseData<3,double,Remote<BrickView> >"*, %"struct.FieldEngineBaseData<3,double,Remote<BrickView> >"*, i8 }
-	%"struct.RefBlockController<double>" = type { %struct.RefCounted, double*, double*, double*, i8 }
-	%struct.RefCounted = type { i32, %"struct.Adv5::Ekin<3>" }
-	%"struct.RefCountedBlockPtr<Engine<3, double, Remote<Brick> >,false,RefBlockController<Engine<3, double, Remote<Brick> > > >" = type { i32, %"struct.RefCountedPtr<RefBlockController<Engine<3, double, Remote<Brick> > > >" }
-	%"struct.RefCountedBlockPtr<FieldEngineBaseData<3, Zero<double>, ConstantFunction>,false,RefBlockController<FieldEngineBaseData<3, Zero<double>, ConstantFunction> > >" = type { i32, %"struct.RefCountedPtr<RefBlockController<FieldEngineBaseData<3, Zero<double>, ConstantFunction> > >" }
-	%"struct.RefCountedBlockPtr<FieldEngineBaseData<3, double, ConstantFunction>,false,RefBlockController<FieldEngineBaseData<3, double, ConstantFunction> > >" = type { i32, %"struct.RefCountedPtr<RefBlockController<FieldEngineBaseData<3, double, ConstantFunction> > >" }
-	%"struct.RefCountedBlockPtr<FieldEngineBaseData<3, double, MultiPatch<GridTag, Remote<Brick> > >,false,RefBlockController<FieldEngineBaseData<3, double, MultiPatch<GridTag, Remote<Brick> > > > >" = type { i32, %"struct.RefCountedPtr<RefBlockController<FieldEngineBaseData<3, double, MultiPatch<GridTag, Remote<Brick> > > > >" }
-	%"struct.RefCountedBlockPtr<FieldEngineBaseData<3, double, MultiPatchView<GridTag, Remote<Brick>, 3> >,false,RefBlockController<FieldEngineBaseData<3, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > >" = type { i32, %"struct.RefCountedPtr<RefBlockController<FieldEngineBaseData<3, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > >" }
-	%"struct.RefCountedBlockPtr<FieldEngineBaseData<3, double, Remote<BrickView> >,false,RefBlockController<FieldEngineBaseData<3, double, Remote<BrickView> > > >" = type { i32, %"struct.RefCountedPtr<RefBlockController<FieldEngineBaseData<3, double, Remote<BrickView> > > >" }
-	%"struct.RefCountedBlockPtr<double,false,DataBlockController<double> >" = type { i32, %"struct.RefCountedPtr<DataBlockController<double> >" }
-	%"struct.RefCountedPtr<DataBlockController<double> >" = type { %"struct.DataBlockController<double>"* }
-	%"struct.RefCountedPtr<GridLayoutData<3> >" = type { %"struct.GridLayoutData<3>"* }
-	%"struct.RefCountedPtr<GridLayoutViewData<3, 3> >" = type { %"struct.GridLayoutViewData<3,3>"* }
-	%"struct.RefCountedPtr<RefBlockController<Engine<3, double, Remote<Brick> > > >" = type { %"struct.RefBlockController<Engine<3, double, Remote<Brick> > >"* }
-	%"struct.RefCountedPtr<RefBlockController<FieldEngineBaseData<3, Zero<double>, ConstantFunction> > >" = type { %"struct.RefBlockController<FieldEngineBaseData<3, Zero<double>, ConstantFunction> >"* }
-	%"struct.RefCountedPtr<RefBlockController<FieldEngineBaseData<3, double, ConstantFunction> > >" = type { %"struct.RefBlockController<FieldEngineBaseData<3, double, ConstantFunction> >"* }
-	%"struct.RefCountedPtr<RefBlockController<FieldEngineBaseData<3, double, MultiPatch<GridTag, Remote<Brick> > > > >" = type { %"struct.RefBlockController<FieldEngineBaseData<3, double, MultiPatch<GridTag, Remote<Brick> > > >"* }
-	%"struct.RefCountedPtr<RefBlockController<FieldEngineBaseData<3, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > >" = type { %"struct.RefBlockController<FieldEngineBaseData<3, double, MultiPatchView<GridTag, Remote<Brick>, 3> > >"* }
-	%"struct.RefCountedPtr<RefBlockController<FieldEngineBaseData<3, double, Remote<BrickView> > > >" = type { %"struct.RefBlockController<FieldEngineBaseData<3, double, Remote<BrickView> > >"* }
-	%"struct.RefCountedPtr<RelationListData>" = type { %struct.RelationListData* }
-	%"struct.RefCountedPtr<Shared<Engine<3, double, Brick> > >" = type { %"struct.Shared<Engine<3, double, Brick> >"* }
-	%"struct.RefCountedPtr<Shared<Engine<3, double, BrickView> > >" = type { %"struct.Shared<Engine<3, double, BrickView> >"* }
-	%"struct.RefCountedPtr<UniformRectilinearMeshData<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> > >" = type { %"struct.UniformRectilinearMeshData<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >"* }
-	%struct.RelationList = type { %"struct.RefCountedPtr<RelationListData>" }
-	%struct.RelationListData = type { %struct.RefCounted, %"struct.std::vector<RelationListItem*,std::allocator<RelationListItem*> >" }
-	%struct.RelationListItem = type { i32 (...)**, i32, i32, i8 }
-	%"struct.Shared<Engine<3, double, Brick> >" = type { %struct.RefCounted, %"struct.Engine<3,double,Brick>" }
-	%"struct.Shared<Engine<3, double, BrickView> >" = type { %struct.RefCounted, %"struct.Engine<3,double,BrickView>" }
-	%"struct.SingleObservable<int>" = type { %"struct.ContextMapper<1>"* }
-	%"struct.UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >" = type { %"struct.RefCountedPtr<UniformRectilinearMeshData<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> > >" }
-	%"struct.UniformRectilinearMeshData<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >" = type { %"struct.NoMeshData<3>", %"struct.Vector<3,double,Full>", %"struct.Vector<3,double,Full>" }
-	%"struct.Vector<3,double,Full>" = type { %"struct.VectorEngine<3,double,Full>" }
-	%"struct.VectorEngine<3,double,Full>" = type { [3 x double] }
-	%"struct.ViewIndexer<3,3>" = type { %"struct.Interval<3>", %"struct.Range<3>", [3 x i32], [3 x i32], %"struct.Loc<3>" }
-	%"struct.WrapNoInit<Interval<1> >" = type { %"struct.Interval<1>" }
-	%"struct.WrapNoInit<Loc<1> >" = type { %"struct.Loc<1>" }
-	%"struct.WrapNoInit<Range<1> >" = type { %"struct.Range<1>" }
-	%"struct.std::_List_base<Interval<3>,std::allocator<Interval<3> > >" = type { %"struct.std::_List_base<Interval<3>,std::allocator<Interval<3> > >::_List_impl" }
-	%"struct.std::_List_base<Interval<3>,std::allocator<Interval<3> > >::_List_impl" = type { %"struct.std::_List_node_base" }
-	%"struct.std::_List_const_iterator<Interval<3> >" = type { %"struct.std::_List_node_base"* }
-	%"struct.std::_List_node_base" = type { %"struct.std::_List_node_base"*, %"struct.std::_List_node_base"* }
-	%"struct.std::_Rb_tree<int,std::pair<const int, InformStream*>,std::_Select1st<std::pair<const int, InformStream*> >,std::less<int>,std::allocator<std::pair<const int, InformStream*> > >" = type { %"struct.std::_Rb_tree<int,std::pair<const int, InformStream*>,std::_Select1st<std::pair<const int, InformStream*> >,std::less<int>,std::allocator<std::pair<const int, InformStream*> > >::_Rb_tree_impl<std::less<int>,false>" }
-	%"struct.std::_Rb_tree<int,std::pair<const int, InformStream*>,std::_Select1st<std::pair<const int, InformStream*> >,std::less<int>,std::allocator<std::pair<const int, InformStream*> > >::_Rb_tree_impl<std::less<int>,false>" = type { %"struct.Adv5::Ekin<3>", %"struct.std::_Rb_tree_node_base", i32 }
-	%"struct.std::_Rb_tree_node_base" = type { i32, %"struct.std::_Rb_tree_node_base"*, %"struct.std::_Rb_tree_node_base"*, %"struct.std::_Rb_tree_node_base"* }
-	%"struct.std::_Vector_base<GlobalIDDataBase::Pack,std::allocator<GlobalIDDataBase::Pack> >" = type { %"struct.std::_Vector_base<GlobalIDDataBase::Pack,std::allocator<GlobalIDDataBase::Pack> >::_Vector_impl" }
-	%"struct.std::_Vector_base<GlobalIDDataBase::Pack,std::allocator<GlobalIDDataBase::Pack> >::_Vector_impl" = type { %"struct.GlobalIDDataBase::Pack"*, %"struct.GlobalIDDataBase::Pack"*, %"struct.GlobalIDDataBase::Pack"* }
-	%"struct.std::_Vector_base<LayoutBaseData<3>::GCFillInfo,std::allocator<LayoutBaseData<3>::GCFillInfo> >" = type { %"struct.std::_Vector_base<LayoutBaseData<3>::GCFillInfo,std::allocator<LayoutBaseData<3>::GCFillInfo> >::_Vector_impl" }
-	%"struct.std::_Vector_base<LayoutBaseData<3>::GCFillInfo,std::allocator<LayoutBaseData<3>::GCFillInfo> >::_Vector_impl" = type { %"struct.LayoutBaseData<3>::GCFillInfo"*, %"struct.LayoutBaseData<3>::GCFillInfo"*, %"struct.LayoutBaseData<3>::GCFillInfo"* }
-	%"struct.std::_Vector_base<Loc<3>,std::allocator<Loc<3> > >" = type { %"struct.std::_Vector_base<Loc<3>,std::allocator<Loc<3> > >::_Vector_impl" }
-	%"struct.std::_Vector_base<Loc<3>,std::allocator<Loc<3> > >::_Vector_impl" = type { %"struct.Loc<3>"*, %"struct.Loc<3>"*, %"struct.Loc<3>"* }
-	%"struct.std::_Vector_base<Node<Interval<3>, Interval<3> >*,std::allocator<Node<Interval<3>, Interval<3> >*> >" = type { %"struct.std::_Vector_base<Node<Interval<3>, Interval<3> >*,std::allocator<Node<Interval<3>, Interval<3> >*> >::_Vector_impl" }
-	%"struct.std::_Vector_base<Node<Interval<3>, Interval<3> >*,std::allocator<Node<Interval<3>, Interval<3> >*> >::_Vector_impl" = type { %"struct.Node<Interval<3>,Interval<3> >"**, %"struct.Node<Interval<3>,Interval<3> >"**, %"struct.Node<Interval<3>,Interval<3> >"** }
-	%"struct.std::_Vector_base<Observer<GridLayout<3> >*,std::allocator<Observer<GridLayout<3> >*> >" = type { %"struct.std::_Vector_base<Observer<GridLayout<3> >*,std::allocator<Observer<GridLayout<3> >*> >::_Vector_impl" }
-	%"struct.std::_Vector_base<Observer<GridLayout<3> >*,std::allocator<Observer<GridLayout<3> >*> >::_Vector_impl" = type { %"struct.ContextMapper<1>"**, %"struct.ContextMapper<1>"**, %"struct.ContextMapper<1>"** }
-	%"struct.std::_Vector_base<RelationListItem*,std::allocator<RelationListItem*> >" = type { %"struct.std::_Vector_base<RelationListItem*,std::allocator<RelationListItem*> >::_Vector_impl" }
-	%"struct.std::_Vector_base<RelationListItem*,std::allocator<RelationListItem*> >::_Vector_impl" = type { %struct.RelationListItem**, %struct.RelationListItem**, %struct.RelationListItem** }
-	%"struct.std::_Vector_base<Vector<3, double, Full>,std::allocator<Vector<3, double, Full> > >" = type { %"struct.std::_Vector_base<Vector<3, double, Full>,std::allocator<Vector<3, double, Full> > >::_Vector_impl" }
-	%"struct.std::_Vector_base<Vector<3, double, Full>,std::allocator<Vector<3, double, Full> > >::_Vector_impl" = type { %"struct.Vector<3,double,Full>"*, %"struct.Vector<3,double,Full>"*, %"struct.Vector<3,double,Full>"* }
-	%"struct.std::list<Interval<3>,std::allocator<Interval<3> > >" = type { %"struct.std::_List_base<Interval<3>,std::allocator<Interval<3> > >" }
-	%"struct.std::map<int,InformStream*,std::less<int>,std::allocator<std::pair<const int, InformStream*> > >" = type { %"struct.std::_Rb_tree<int,std::pair<const int, InformStream*>,std::_Select1st<std::pair<const int, InformStream*> >,std::less<int>,std::allocator<std::pair<const int, InformStream*> > >" }
-	%"struct.std::vector<GlobalIDDataBase::Pack,std::allocator<GlobalIDDataBase::Pack> >" = type { %"struct.std::_Vector_base<GlobalIDDataBase::Pack,std::allocator<GlobalIDDataBase::Pack> >" }
-	%"struct.std::vector<LayoutBaseData<3>::GCFillInfo,std::allocator<LayoutBaseData<3>::GCFillInfo> >" = type { %"struct.std::_Vector_base<LayoutBaseData<3>::GCFillInfo,std::allocator<LayoutBaseData<3>::GCFillInfo> >" }
-	%"struct.std::vector<Loc<3>,std::allocator<Loc<3> > >" = type { %"struct.std::_Vector_base<Loc<3>,std::allocator<Loc<3> > >" }
-	%"struct.std::vector<Node<Interval<3>, Interval<3> >*,std::allocator<Node<Interval<3>, Interval<3> >*> >" = type { %"struct.std::_Vector_base<Node<Interval<3>, Interval<3> >*,std::allocator<Node<Interval<3>, Interval<3> >*> >" }
-	%"struct.std::vector<Observer<GridLayout<3> >*,std::allocator<Observer<GridLayout<3> >*> >" = type { %"struct.std::_Vector_base<Observer<GridLayout<3> >*,std::allocator<Observer<GridLayout<3> >*> >" }
-	%"struct.std::vector<RelationListItem*,std::allocator<RelationListItem*> >" = type { %"struct.std::_Vector_base<RelationListItem*,std::allocator<RelationListItem*> >" }
-	%"struct.std::vector<Vector<3, double, Full>,std::allocator<Vector<3, double, Full> > >" = type { %"struct.std::_Vector_base<Vector<3, double, Full>,std::allocator<Vector<3, double, Full> > >" }
-
-declare void @llvm.memcpy.i32(i8*, i8*, i32, i32) nounwind
-
-declare fastcc void @_ZN11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd10MultiPatchI7GridTag6RemoteI5BrickEEEC1ERKSC_(%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*, %"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*) nounwind
-
-declare fastcc void @_ZN9CenteringILi3EEC1ERKS0_i(%"struct.Centering<3>"*, %"struct.Centering<3>"*, i32) nounwind
-
-declare fastcc void @_ZN11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd6RemoteI9BrickViewEED1Ev(%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,Remote<BrickView> >"*) nounwind
-
-declare fastcc void @_ZN11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd6RemoteI9BrickViewEEC1Id14MultiPatchViewI7GridTagS6_I5BrickELi3EEEERKS_IS5_T_T0_ERK5INodeILi3EE(%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,Remote<BrickView> >"*, %"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"*, %"struct.INode<3>"*) nounwind
-
-declare fastcc void @_ZN11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd14MultiPatchViewI7GridTag6RemoteI5BrickELi3EEED1Ev(%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"*) nounwind
-
-declare fastcc void @_ZN11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd14MultiPatchViewI7GridTag6RemoteI5BrickELi3EEEC1Id10MultiPatchIS7_SA_EEERKS_IS5_T_T0_ERK8IntervalILi3EE(%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"*, %"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*, %"struct.Interval<3>"*) nounwind
-
-define fastcc void @t(double %dt, %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %rh, %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %T, %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %v, %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %pg, %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %ph, %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %cs, %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ConstantFunction>"* %cv, %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,Zero<double>,ConstantFunction>"* %dlmdlt, %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ConstantFunction>"* %xmue, %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %vint, %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %cent, %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %fvis, double %c_nr, double %c_av, i8 zeroext %cartvis_f) nounwind {
-entry:
-	%0 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > > > > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > > > > >"*> [#uses=4]
-	%s.i.i.i.i.i = alloca %"struct.Interval<3>"		; <%"struct.Interval<3>"*> [#uses=0]
-	%1 = alloca %"struct.BinaryNode<OpMultiply,Scalar<double>,BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > >"		; <%"struct.BinaryNode<OpMultiply,Scalar<double>,BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > >"*> [#uses=2]
-	%multiArg.i = alloca %"struct.MultiArg6<Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatch<GridTag, Remote<Brick> > >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatch<GridTag, Remote<Brick> > >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatch<GridTag, Remote<Brick> > >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatch<GridTag, Remote<Brick> > >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, ConstantFunction>,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, Zero<double>, ConstantFunction> >"		; <%"struct.MultiArg6<Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatch<GridTag, Remote<Brick> > >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatch<GridTag, Remote<Brick> > >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatch<GridTag, Remote<Brick> > >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatch<GridTag, Remote<Brick> > >,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, ConstantFunction>,Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, Zero<double>, ConstantFunction> >"*> [#uses=0]
-	%2 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=6]
-	%3 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=2]
-	%4 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"*> [#uses=0]
-	%5 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=2]
-	%6 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"*> [#uses=0]
-	%7 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%8 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%9 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%10 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%11 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%12 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%13 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%14 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%15 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%16 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%17 = alloca %"struct.Interval<3>"		; <%"struct.Interval<3>"*> [#uses=0]
-	%18 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%19 = alloca double		; <double*> [#uses=0]
-	%20 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"*> [#uses=0]
-	%21 = alloca %"struct.Interval<3>"		; <%"struct.Interval<3>"*> [#uses=0]
-	%22 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"*> [#uses=0]
-	%23 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"*> [#uses=0]
-	%24 = alloca %"struct.Interval<3>"		; <%"struct.Interval<3>"*> [#uses=0]
-	%25 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%26 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%27 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%28 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%29 = alloca %"struct.Interval<3>"		; <%"struct.Interval<3>"*> [#uses=0]
-	%30 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%31 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%32 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%33 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%34 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%35 = alloca %"struct.Interval<3>"		; <%"struct.Interval<3>"*> [#uses=0]
-	%36 = alloca %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"		; <%"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=0]
-	%37 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %v, i32 0, i32 0, i32 5		; <%"struct.GuardLayers<3>"*> [#uses=1]
-	%38 = bitcast %"struct.GuardLayers<3>"* %37 to i8*		; <i8*> [#uses=1]
-	br label %bb.i.i.i.i.i
-
-bb.i.i.i.i.i:		; preds = %bb.i.i.i.i.i, %entry
-	%39 = icmp eq i32* null, null		; <i1> [#uses=1]
-	br i1 %39, label %_ZN14ScalarCodeInfoILi3ELi4EEC1Ev.exit.i, label %bb.i.i.i.i.i
-
-_ZN14ScalarCodeInfoILi3ELi4EEC1Ev.exit.i:		; preds = %bb.i.i.i.i.i
-	br label %bb.i.i.i35.i.i34
-
-bb.i.i.i35.i.i34:		; preds = %bb.i.i.i35.i.i34, %_ZN14ScalarCodeInfoILi3ELi4EEC1Ev.exit.i
-	%40 = icmp eq i32* null, null		; <i1> [#uses=1]
-	br i1 %40, label %_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit36.i.i37, label %bb.i.i.i35.i.i34
-
-_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit36.i.i37:		; preds = %bb.i.i.i35.i.i34
-	br label %bb.i.i.i19.i.i47
-
-bb.i.i.i19.i.i47:		; preds = %bb.i.i.i19.i.i47, %_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit36.i.i37
-	%41 = icmp eq i32* null, null		; <i1> [#uses=1]
-	br i1 %41, label %_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit20.i.i50, label %bb.i.i.i19.i.i47
-
-_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit20.i.i50:		; preds = %bb.i.i.i19.i.i47
-	%42 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %rh, i32 0, i32 0		; <%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"*> [#uses=1]
-	br label %bb.i.i.i19.i.i.i
-
-bb.i.i.i19.i.i.i:		; preds = %bb.i.i.i19.i.i.i, %_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit20.i.i50
-	%43 = icmp eq i32* null, null		; <i1> [#uses=1]
-	br i1 %43, label %_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit20.i.i.i, label %bb.i.i.i19.i.i.i
-
-_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit20.i.i.i:		; preds = %bb.i.i.i19.i.i.i
-	br label %bb.i.i.i35.i.i433
-
-bb.i.i.i35.i.i433:		; preds = %bb.i.i.i35.i.i433, %_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit20.i.i.i
-	%44 = icmp eq i32* null, null		; <i1> [#uses=1]
-	br i1 %44, label %_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit36.i.i436, label %bb.i.i.i35.i.i433
-
-_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit36.i.i436:		; preds = %bb.i.i.i35.i.i433
-	br label %bb.i.i.i19.i.i446
-
-bb.i.i.i19.i.i446:		; preds = %bb.i.i.i19.i.i446, %_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit36.i.i436
-	%45 = icmp eq i32* null, null		; <i1> [#uses=1]
-	br i1 %45, label %_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit20.i.i449, label %bb.i.i.i19.i.i446
-
-_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit20.i.i449:		; preds = %bb.i.i.i19.i.i446
-	br label %bb.i.i.i.i.i459
-
-bb.i.i.i.i.i459:		; preds = %bb.i.i.i.i.i459, %_ZNSt6vectorIbSaIbEEC1EmRKbRKS0_.exit20.i.i449
-	%46 = icmp eq i32* null, null		; <i1> [#uses=1]
-	br i1 %46, label %_ZN14ScalarCodeInfoILi3ELi6EEC1Ev.exit.i460, label %bb.i.i.i.i.i459
-
-_ZN14ScalarCodeInfoILi3ELi6EEC1Ev.exit.i460:		; preds = %bb.i.i.i.i.i459
-	%47 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %5, i32 0, i32 0, i32 1		; <%"struct.Centering<3>"*> [#uses=1]
-	%48 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %vint, i32 0, i32 0, i32 1		; <%"struct.Centering<3>"*> [#uses=2]
-	%49 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %5, i32 0, i32 0, i32 5		; <%"struct.GuardLayers<3>"*> [#uses=1]
-	%50 = bitcast %"struct.GuardLayers<3>"* %49 to i8*		; <i8*> [#uses=1]
-	%51 = bitcast %"struct.GuardLayers<3>"* null to i8*		; <i8*> [#uses=2]
-	%52 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %3, i32 0, i32 0, i32 1		; <%"struct.Centering<3>"*> [#uses=1]
-	%53 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %3, i32 0, i32 0, i32 5		; <%"struct.GuardLayers<3>"*> [#uses=1]
-	%54 = bitcast %"struct.GuardLayers<3>"* %53 to i8*		; <i8*> [#uses=1]
-	%55 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %2, i32 0, i32 0, i32 1		; <%"struct.Centering<3>"*> [#uses=1]
-	%56 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %2, i32 0, i32 0, i32 4, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0, i32 1		; <i32*> [#uses=1]
-	%57 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %2, i32 0, i32 0, i32 4, i32 0, i32 0, i32 0, i32 1, i32 0, i32 0, i32 0, i32 0, i32 1		; <i32*> [#uses=1]
-	%58 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %2, i32 0, i32 0, i32 4, i32 0, i32 0, i32 0, i32 2, i32 0, i32 0, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	%59 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %2, i32 0, i32 0, i32 4, i32 0, i32 0, i32 0, i32 2, i32 0, i32 0, i32 0, i32 0, i32 1		; <i32*> [#uses=1]
-	%60 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %2, i32 0, i32 0, i32 5		; <%"struct.GuardLayers<3>"*> [#uses=1]
-	%61 = bitcast %"struct.GuardLayers<3>"* %60 to i8*		; <i8*> [#uses=1]
-	%62 = getelementptr %"struct.BinaryNode<OpMultiply,Scalar<double>,BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > >"* %1, i32 0, i32 1, i32 0, i32 0		; <%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"*> [#uses=1]
-	%63 = getelementptr %"struct.BinaryNode<OpMultiply,Scalar<double>,BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > >"* %1, i32 0, i32 1, i32 1, i32 0		; <%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"*> [#uses=1]
-	%64 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, MultiPatchView<GridTag, Remote<Brick>, 3> > > > > >"* null, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0		; <double*> [#uses=1]
-	%65 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > > > > >"* %0, i32 0, i32 0, i32 0, i32 0, i32 1, i32 1, i32 0		; <%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,Remote<BrickView> >"*> [#uses=2]
-	%66 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > > > > >"* %0, i32 0, i32 0, i32 0, i32 0, i32 1, i32 0, i32 0, i32 4, i32 0, i32 0, i32 0, i32 1, i32 0, i32 0, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	%67 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > > > > >"* %0, i32 0, i32 0, i32 0, i32 0, i32 1, i32 0, i32 0, i32 4, i32 0, i32 0, i32 0, i32 1, i32 0, i32 0, i32 0, i32 0, i32 1		; <i32*> [#uses=1]
-	%68 = getelementptr %"struct.Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,ExpressionTag<BinaryNode<OpMultiply, Scalar<double>, BinaryNode<OpAdd, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> >, Field<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >, double, Remote<BrickView> > > > > >"* %0, i32 0, i32 0, i32 0, i32 0, i32 1, i32 0, i32 0, i32 4, i32 0, i32 0, i32 0, i32 2, i32 0, i32 0, i32 0, i32 0, i32 1		; <i32*> [#uses=1]
-	br label %bb15
-
-bb15:		; preds = %_Z6assignI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd14MultiPatchViewI7GridTag6RemoteI5BrickELi3EES5_d13ExpressionTagI10BinaryNodeI10OpMultiply6ScalarIdESD_I5OpAdd9ReferenceI5FieldIS5_dSB_EESL_EEE8OpAssignERKSJ_IT_T0_T1_ESV_RKSJ_IT2_T3_T4_ERKT5_.exit, %_ZN14ScalarCodeInfoILi3ELi6EEC1Ev.exit.i460
-	%i.0.reg2mem.0 = phi i32 [ 0, %_ZN14ScalarCodeInfoILi3ELi6EEC1Ev.exit.i460 ], [ %indvar.next, %_Z6assignI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd14MultiPatchViewI7GridTag6RemoteI5BrickELi3EES5_d13ExpressionTagI10BinaryNodeI10OpMultiply6ScalarIdESD_I5OpAdd9ReferenceI5FieldIS5_dSB_EESL_EEE8OpAssignERKSJ_IT_T0_T1_ESV_RKSJ_IT2_T3_T4_ERKT5_.exit ]		; <i32> [#uses=4]
-	call fastcc void @_ZN9CenteringILi3EEC1ERKS0_i(%"struct.Centering<3>"* %47, %"struct.Centering<3>"* %48, i32 %i.0.reg2mem.0) nounwind
-	call void @llvm.memcpy.i32(i8* %50, i8* %51, i32 24, i32 4) nounwind
-	call fastcc void @_ZN9CenteringILi3EEC1ERKS0_i(%"struct.Centering<3>"* %52, %"struct.Centering<3>"* %48, i32 %i.0.reg2mem.0) nounwind
-	call void @llvm.memcpy.i32(i8* %54, i8* %51, i32 24, i32 4) nounwind
-	br i1 false, label %bb.i940, label %bb4.i943
-
-bb.i940:		; preds = %bb15
-	br label %_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd10MultiPatchI7GridTag6RemoteI5BrickEEE11totalDomainEv.exit944
-
-bb4.i943:		; preds = %bb15
-	br label %_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd10MultiPatchI7GridTag6RemoteI5BrickEEE11totalDomainEv.exit944
-
-_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd10MultiPatchI7GridTag6RemoteI5BrickEEE11totalDomainEv.exit944:		; preds = %bb4.i943, %bb.i940
-	call fastcc void @_ZN11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd14MultiPatchViewI7GridTag6RemoteI5BrickELi3EEEC1Id10MultiPatchIS7_SA_EEERKS_IS5_T_T0_ERK8IntervalILi3EE(%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"* null, %"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* null, %"struct.Interval<3>"* null) nounwind
-	call fastcc void @_ZN9CenteringILi3EEC1ERKS0_i(%"struct.Centering<3>"* %55, %"struct.Centering<3>"* null, i32 %i.0.reg2mem.0) nounwind
-	call void @llvm.memcpy.i32(i8* %61, i8* %38, i32 24, i32 4) nounwind
-	%69 = load %"struct.Loc<3>"** null, align 4		; <%"struct.Loc<3>"*> [#uses=1]
-	%70 = ptrtoint %"struct.Loc<3>"* %69 to i32		; <i32> [#uses=1]
-	%.off.i911 = sub i32 0, %70		; <i32> [#uses=1]
-	%71 = icmp ult i32 %.off.i911, 12		; <i1> [#uses=1]
-	%72 = sub i32 0, 0		; <i32> [#uses=2]
-	%73 = load i32* %56, align 4		; <i32> [#uses=1]
-	%74 = add i32 %73, 0		; <i32> [#uses=1]
-	%75 = sub i32 %74, %72		; <i32> [#uses=1]
-	%76 = add i32 %75, 0		; <i32> [#uses=1]
-	%77 = load i32* null, align 8		; <i32> [#uses=2]
-	%78 = load i32* null, align 4		; <i32> [#uses=1]
-	%79 = sub i32 %77, %78		; <i32> [#uses=1]
-	%80 = load i32* %57, align 4		; <i32> [#uses=1]
-	%81 = load i32* null, align 4		; <i32> [#uses=1]
-	br i1 %71, label %bb.i912, label %bb4.i915
-
-bb.i912:		; preds = %_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd10MultiPatchI7GridTag6RemoteI5BrickEEE11totalDomainEv.exit944
-	br label %_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd10MultiPatchI7GridTag6RemoteI5BrickEEE11totalDomainEv.exit916
-
-bb4.i915:		; preds = %_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd10MultiPatchI7GridTag6RemoteI5BrickEEE11totalDomainEv.exit944
-	%82 = sub i32 %77, %79		; <i32> [#uses=1]
-	%83 = add i32 %82, %80		; <i32> [#uses=1]
-	%84 = add i32 %83, %81		; <i32> [#uses=1]
-	%85 = load i32* %58, align 8		; <i32> [#uses=2]
-	%86 = load i32* null, align 8		; <i32> [#uses=1]
-	%87 = sub i32 %85, %86		; <i32> [#uses=2]
-	%88 = load i32* %59, align 4		; <i32> [#uses=1]
-	%89 = load i32* null, align 4		; <i32> [#uses=1]
-	%90 = sub i32 %85, %87		; <i32> [#uses=1]
-	%91 = add i32 %90, %88		; <i32> [#uses=1]
-	%92 = add i32 %91, %89		; <i32> [#uses=1]
-	br label %_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd10MultiPatchI7GridTag6RemoteI5BrickEEE11totalDomainEv.exit916
-
-_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd10MultiPatchI7GridTag6RemoteI5BrickEEE11totalDomainEv.exit916:		; preds = %bb4.i915, %bb.i912
-	%.0978.0.0.1.0.0.0.0.1.0 = phi i32 [ %84, %bb4.i915 ], [ 0, %bb.i912 ]		; <i32> [#uses=0]
-	%.0978.0.0.2.0.0.0.0.0.0 = phi i32 [ %87, %bb4.i915 ], [ 0, %bb.i912 ]		; <i32> [#uses=1]
-	%.0978.0.0.2.0.0.0.0.1.0 = phi i32 [ %92, %bb4.i915 ], [ 0, %bb.i912 ]		; <i32> [#uses=0]
-	store i32 %72, i32* null, align 8
-	store i32 %76, i32* null, align 4
-	store i32 %.0978.0.0.2.0.0.0.0.0.0, i32* null, align 8
-	call fastcc void @_ZN11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd14MultiPatchViewI7GridTag6RemoteI5BrickELi3EEEC1Id10MultiPatchIS7_SA_EEERKS_IS5_T_T0_ERK8IntervalILi3EE(%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"* null, %"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* null, %"struct.Interval<3>"* null) nounwind
-	%93 = load i32* null, align 8		; <i32> [#uses=1]
-	%94 = icmp sgt i32 %93, 0		; <i1> [#uses=1]
-	br i1 %94, label %bb1.i, label %_Z6assignI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd14MultiPatchViewI7GridTag6RemoteI5BrickELi3EES5_d13ExpressionTagI10BinaryNodeI10OpMultiply6ScalarIdESD_I5OpAdd9ReferenceI5FieldIS5_dSB_EESL_EEE8OpAssignERKSJ_IT_T0_T1_ESV_RKSJ_IT2_T3_T4_ERKT5_.exit
-
-bb1.i:		; preds = %bb3.i23.i.i, %_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd10MultiPatchI7GridTag6RemoteI5BrickEEE11totalDomainEv.exit916
-	call fastcc void @_ZN11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd14MultiPatchViewI7GridTag6RemoteI5BrickELi3EEED1Ev(%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"* %63) nounwind
-	call fastcc void @_ZN11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd14MultiPatchViewI7GridTag6RemoteI5BrickELi3EEED1Ev(%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"* %62) nounwind
-	br label %bb.i17.i14.i
-
-bb.i17.i14.i:		; preds = %_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd6RemoteI9BrickViewEE14physicalDomainEv.exit26.i.i.i.i, %bb1.i
-	%i.0.02.rec.i.i.i = phi i32 [ %.rec.i.i.i641, %_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd6RemoteI9BrickViewEE14physicalDomainEv.exit26.i.i.i.i ], [ 0, %bb1.i ]		; <i32> [#uses=1]
-	%95 = load double* %64, align 8		; <double> [#uses=1]
-	store double %95, double* null, align 8
-	call fastcc void @_ZN11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd6RemoteI9BrickViewEEC1Id14MultiPatchViewI7GridTagS6_I5BrickELi3EEEERKS_IS5_T_T0_ERK5INodeILi3EE(%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,Remote<BrickView> >"* %65, %"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatchView<GridTag, Remote<Brick>, 3> >"* null, %"struct.INode<3>"* null) nounwind
-	%96 = load %"struct.Loc<3>"** null, align 4		; <%"struct.Loc<3>"*> [#uses=1]
-	%97 = ptrtoint %"struct.Loc<3>"* %96 to i32		; <i32> [#uses=1]
-	%.off.i21.i.i.i.i = sub i32 0, %97		; <i32> [#uses=1]
-	%98 = icmp ult i32 %.off.i21.i.i.i.i, 12		; <i1> [#uses=1]
-	br i1 %98, label %bb.i22.i.i.i.i, label %bb3.i25.i.i.i.i
-
-bb.i22.i.i.i.i:		; preds = %bb.i17.i14.i
-	%99 = load i32* null, align 4		; <i32> [#uses=1]
-	%100 = icmp eq i32 %99, 1		; <i1> [#uses=1]
-	%101 = load i32* null, align 4		; <i32> [#uses=1]
-	br i1 %100, label %_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd6RemoteI9BrickViewEE14physicalDomainEv.exit26.i.i.i.i, label %bb6.i.i24.i.i.i.i
-
-bb6.i.i24.i.i.i.i:		; preds = %bb.i22.i.i.i.i
-	br label %_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd6RemoteI9BrickViewEE14physicalDomainEv.exit26.i.i.i.i
-
-bb3.i25.i.i.i.i:		; preds = %bb.i17.i14.i
-	%102 = load i32* %66, align 8		; <i32> [#uses=2]
-	%103 = load i32* %67, align 4		; <i32> [#uses=1]
-	%104 = load i32* %68, align 4		; <i32> [#uses=1]
-	br label %_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd6RemoteI9BrickViewEE14physicalDomainEv.exit26.i.i.i.i
-
-_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd6RemoteI9BrickViewEE14physicalDomainEv.exit26.i.i.i.i:		; preds = %bb3.i25.i.i.i.i, %bb6.i.i24.i.i.i.i, %bb.i22.i.i.i.i
-	%.rle1279 = phi i32 [ 0, %bb3.i25.i.i.i.i ], [ 0, %bb6.i.i24.i.i.i.i ], [ 0, %bb.i22.i.i.i.i ]		; <i32> [#uses=1]
-	%.rle1277 = phi i32 [ %102, %bb3.i25.i.i.i.i ], [ 0, %bb6.i.i24.i.i.i.i ], [ 0, %bb.i22.i.i.i.i ]		; <i32> [#uses=1]
-	%.rle1275 = phi i32 [ 0, %bb3.i25.i.i.i.i ], [ 0, %bb6.i.i24.i.i.i.i ], [ 0, %bb.i22.i.i.i.i ]		; <i32> [#uses=1]
-	%.01034.0.0.2.0.0.0.0.1.0 = phi i32 [ %104, %bb3.i25.i.i.i.i ], [ 0, %bb6.i.i24.i.i.i.i ], [ 0, %bb.i22.i.i.i.i ]		; <i32> [#uses=1]
-	%.01034.0.0.2.0.0.0.0.0.0 = phi i32 [ 0, %bb3.i25.i.i.i.i ], [ 0, %bb6.i.i24.i.i.i.i ], [ 0, %bb.i22.i.i.i.i ]		; <i32> [#uses=1]
-	%.01034.0.0.1.0.0.0.0.1.0 = phi i32 [ %103, %bb3.i25.i.i.i.i ], [ 0, %bb6.i.i24.i.i.i.i ], [ 0, %bb.i22.i.i.i.i ]		; <i32> [#uses=1]
-	%.01034.0.0.1.0.0.0.0.0.0 = phi i32 [ %102, %bb3.i25.i.i.i.i ], [ 0, %bb6.i.i24.i.i.i.i ], [ 0, %bb.i22.i.i.i.i ]		; <i32> [#uses=1]
-	%.01034.0.0.0.0.0.0.0.1.0 = phi i32 [ 0, %bb3.i25.i.i.i.i ], [ 0, %bb6.i.i24.i.i.i.i ], [ %101, %bb.i22.i.i.i.i ]		; <i32> [#uses=1]
-	%.01034.0.0.0.0.0.0.0.0.0 = phi i32 [ 0, %bb3.i25.i.i.i.i ], [ 0, %bb6.i.i24.i.i.i.i ], [ 0, %bb.i22.i.i.i.i ]		; <i32> [#uses=1]
-	%105 = sub i32 %.01034.0.0.0.0.0.0.0.0.0, %.rle1275		; <i32> [#uses=0]
-	%106 = sub i32 %.01034.0.0.1.0.0.0.0.0.0, %.rle1277		; <i32> [#uses=0]
-	%107 = sub i32 %.01034.0.0.2.0.0.0.0.0.0, %.rle1279		; <i32> [#uses=0]
-	store i32 %.01034.0.0.0.0.0.0.0.1.0, i32* null, align 4
-	store i32 %.01034.0.0.1.0.0.0.0.1.0, i32* null, align 4
-	store i32 %.01034.0.0.2.0.0.0.0.1.0, i32* null, align 4
-	call fastcc void @_ZN11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd6RemoteI9BrickViewEED1Ev(%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,Remote<BrickView> >"* %65) nounwind
-	%.rec.i.i.i641 = add i32 %i.0.02.rec.i.i.i, 1		; <i32> [#uses=1]
-	%108 = load %"struct.INode<3>"** null, align 4		; <%"struct.INode<3>"*> [#uses=1]
-	%109 = icmp eq %"struct.INode<3>"* null, %108		; <i1> [#uses=1]
-	br i1 %109, label %bb3.i23.i.i, label %bb.i17.i14.i
-
-bb3.i23.i.i:		; preds = %_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd6RemoteI9BrickViewEE14physicalDomainEv.exit26.i.i.i.i
-	br label %bb1.i
-
-_Z6assignI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd14MultiPatchViewI7GridTag6RemoteI5BrickELi3EES5_d13ExpressionTagI10BinaryNodeI10OpMultiply6ScalarIdESD_I5OpAdd9ReferenceI5FieldIS5_dSB_EESL_EEE8OpAssignERKSJ_IT_T0_T1_ESV_RKSJ_IT2_T3_T4_ERKT5_.exit:		; preds = %_ZNK11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd10MultiPatchI7GridTag6RemoteI5BrickEEE11totalDomainEv.exit916
-	%indvar.next = add i32 %i.0.reg2mem.0, 1		; <i32> [#uses=2]
-	%exitcond = icmp eq i32 %indvar.next, 3		; <i1> [#uses=1]
-	br i1 %exitcond, label %bb18, label %bb15
-
-bb18:		; preds = %_Z6assignI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd14MultiPatchViewI7GridTag6RemoteI5BrickELi3EES5_d13ExpressionTagI10BinaryNodeI10OpMultiply6ScalarIdESD_I5OpAdd9ReferenceI5FieldIS5_dSB_EESL_EEE8OpAssignERKSJ_IT_T0_T1_ESV_RKSJ_IT2_T3_T4_ERKT5_.exit
-	call fastcc void @_ZN11FieldEngineI22UniformRectilinearMeshI10MeshTraitsILi3Ed21UniformRectilinearTag12CartesianTagLi3EEEd10MultiPatchI7GridTag6RemoteI5BrickEEEC1ERKSC_(%"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* null, %"struct.FieldEngine<UniformRectilinearMesh<MeshTraits<3, double, UniformRectilinearTag, CartesianTag, 3> >,double,MultiPatch<GridTag, Remote<Brick> > >"* %42) nounwind
-	unreachable
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-05-18-InlineAsmMem.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-05-18-InlineAsmMem.ll
index 2fc9eb3..1e2707f 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/2009-05-18-InlineAsmMem.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-05-18-InlineAsmMem.ll
@@ -1,7 +1,9 @@
-; RUN: llc < %s -march=arm | grep swp
+; RUN: llc < %s -march=arm | FileCheck %s
+; RUN: llc < %s -march=thumb | FileCheck %s
 ; PR4091
 
 define void @foo(i32 %i, i32* %p) nounwind {
+;CHECK: swp r2, r0, [r1]
 	%asmtmp = call i32 asm sideeffect "swp $0, $2, $3", "=&r,=*m,r,*m,~{memory}"(i32* %p, i32 %i, i32* %p) nounwind
 	ret void
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-07-18-RewriterBug.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-07-18-RewriterBug.ll
index ee93fde..2b7ccd8 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/2009-07-18-RewriterBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-07-18-RewriterBug.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -mtriple=armv6-apple-darwin10 -mattr=+vfp2 | grep fcmpezd | count 13
+; RUN: llc < %s -mtriple=armv6-apple-darwin10 -mattr=+vfp2 | grep vcmpe | count 13
 
 	%struct.EDGE_PAIR = type { %struct.edge_rec*, %struct.edge_rec* }
 	%struct.VEC2 = type { double, double, double }
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-09-01-PostRAProlog.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-09-01-PostRAProlog.ll
index f0301a8..bf91fe0 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/2009-09-01-PostRAProlog.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-09-01-PostRAProlog.ll
@@ -1,4 +1,4 @@
-; RUN: llvm-as < %s | llc -asm-verbose=false -O3 -relocation-model=pic -disable-fp-elim -mtriple=thumbv7-apple-darwin -mcpu=cortex-a8  | FileCheck %s
+; RUN: llc -asm-verbose=false -O3 -relocation-model=pic -disable-fp-elim -mtriple=thumbv7-apple-darwin -mcpu=cortex-a8 < %s | FileCheck %s
 
 target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32"
 target triple = "thumbv7-apple-darwin9"
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-09-09-fpcmp-ole.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-09-09-fpcmp-ole.ll
index 98cab9a..3909c6a 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/2009-09-09-fpcmp-ole.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-09-09-fpcmp-ole.ll
@@ -9,7 +9,7 @@ define void @test(double* %x, double* %y) nounwind {
   br i1 %4, label %bb1, label %bb2
 
 bb1:
-;CHECK: fstdhi
+;CHECK: vstrhi.64
   store double %1, double* %y, align 4
   br label %bb2
 
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-09-24-spill-align.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-09-24-spill-align.ll
index 6281775..5476d5f 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/2009-09-24-spill-align.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-09-24-spill-align.ll
@@ -6,7 +6,7 @@ entry:
   %arg0_poly16x4_t = alloca <4 x i16>             ; <<4 x i16>*> [#uses=1]
   %out_poly16_t = alloca i16                      ; <i16*> [#uses=1]
   %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
-; CHECK: fldd
+; CHECK: vldr.64
   %0 = load <4 x i16>* %arg0_poly16x4_t, align 8  ; <<4 x i16>> [#uses=1]
   %1 = extractelement <4 x i16> %0, i32 1         ; <i16> [#uses=1]
   store i16 %1, i16* %out_poly16_t, align 2
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-02-NEONSubregsBug.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-02-NEONSubregsBug.ll
new file mode 100644
index 0000000..465368b
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-02-NEONSubregsBug.ll
@@ -0,0 +1,63 @@
+; RUN: llc -mtriple=armv7-eabi -mcpu=cortex-a8 -enable-unsafe-fp-math < %s
+; PR5367
+
+define arm_aapcs_vfpcc void @_Z27Benchmark_SceDualQuaternionPvm(i8* nocapture %pBuffer, i32 %numItems) nounwind {
+entry:
+  br i1 undef, label %return, label %bb
+
+bb:                                               ; preds = %bb, %entry
+  %0 = load float* undef, align 4                 ; <float> [#uses=1]
+  %1 = load float* null, align 4                  ; <float> [#uses=1]
+  %2 = insertelement <4 x float> undef, float undef, i32 1 ; <<4 x float>> [#uses=1]
+  %3 = insertelement <4 x float> %2, float %1, i32 2 ; <<4 x float>> [#uses=2]
+  %4 = insertelement <4 x float> undef, float %0, i32 2 ; <<4 x float>> [#uses=1]
+  %5 = insertelement <4 x float> %4, float 0.000000e+00, i32 3 ; <<4 x float>> [#uses=4]
+  %6 = fsub <4 x float> zeroinitializer, %3       ; <<4 x float>> [#uses=1]
+  %7 = shufflevector <4 x float> %6, <4 x float> undef, <4 x i32> zeroinitializer ; <<4 x float>> [#uses=2]
+  %8 = shufflevector <4 x float> %5, <4 x float> undef, <2 x i32> <i32 0, i32 1> ; <<2 x float>> [#uses=1]
+  %9 = shufflevector <2 x float> %8, <2 x float> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x float>> [#uses=2]
+  %10 = fmul <4 x float> %7, %9                   ; <<4 x float>> [#uses=1]
+  %11 = shufflevector <4 x float> zeroinitializer, <4 x float> undef, <4 x i32> zeroinitializer ; <<4 x float>> [#uses=1]
+  %12 = shufflevector <4 x float> %5, <4 x float> undef, <2 x i32> <i32 2, i32 3> ; <<2 x float>> [#uses=2]
+  %13 = shufflevector <2 x float> %12, <2 x float> undef, <4 x i32> zeroinitializer ; <<4 x float>> [#uses=1]
+  %14 = fmul <4 x float> %11, %13                 ; <<4 x float>> [#uses=1]
+  %15 = fadd <4 x float> %10, %14                 ; <<4 x float>> [#uses=1]
+  %16 = shufflevector <2 x float> %12, <2 x float> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x float>> [#uses=1]
+  %17 = fadd <4 x float> %15, zeroinitializer     ; <<4 x float>> [#uses=1]
+  %18 = shufflevector <4 x float> %17, <4 x float> zeroinitializer, <4 x i32> <i32 0, i32 5, i32 undef, i32 undef> ; <<4 x float>> [#uses=1]
+  %19 = fmul <4 x float> %7, %16                  ; <<4 x float>> [#uses=1]
+  %20 = fadd <4 x float> %19, zeroinitializer     ; <<4 x float>> [#uses=1]
+  %21 = shufflevector <4 x float> %3, <4 x float> undef, <4 x i32> <i32 2, i32 undef, i32 undef, i32 undef> ; <<4 x float>> [#uses=1]
+  %22 = shufflevector <4 x float> %21, <4 x float> undef, <4 x i32> zeroinitializer ; <<4 x float>> [#uses=1]
+  %23 = fmul <4 x float> %22, %9                  ; <<4 x float>> [#uses=1]
+  %24 = fadd <4 x float> %20, %23                 ; <<4 x float>> [#uses=1]
+  %25 = shufflevector <4 x float> %18, <4 x float> %24, <4 x i32> <i32 0, i32 1, i32 6, i32 undef> ; <<4 x float>> [#uses=1]
+  %26 = shufflevector <4 x float> %25, <4 x float> undef, <4 x i32> <i32 0, i32 1, i32 2, i32 7> ; <<4 x float>> [#uses=1]
+  %27 = fmul <4 x float> %26, <float 5.000000e-01, float 5.000000e-01, float 5.000000e-01, float 5.000000e-01> ; <<4 x float>> [#uses=1]
+  %28 = fsub <4 x float> <float -0.000000e+00, float -0.000000e+00, float -0.000000e+00, float -0.000000e+00>, %5 ; <<4 x float>> [#uses=1]
+  %29 = tail call <4 x float> @llvm.arm.neon.vrecpe.v4f32(<4 x float> zeroinitializer) nounwind ; <<4 x float>> [#uses=1]
+  %30 = fmul <4 x float> zeroinitializer, %29     ; <<4 x float>> [#uses=1]
+  %31 = fmul <4 x float> %30, <float 2.000000e+00, float 2.000000e+00, float 2.000000e+00, float 2.000000e+00> ; <<4 x float>> [#uses=1]
+  %32 = shufflevector <4 x float> %27, <4 x float> undef, <4 x i32> zeroinitializer ; <<4 x float>> [#uses=1]
+  %33 = shufflevector <4 x float> %28, <4 x float> undef, <2 x i32> <i32 2, i32 3> ; <<2 x float>> [#uses=1]
+  %34 = shufflevector <2 x float> %33, <2 x float> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x float>> [#uses=1]
+  %35 = fmul <4 x float> %32, %34                 ; <<4 x float>> [#uses=1]
+  %36 = fadd <4 x float> %35, zeroinitializer     ; <<4 x float>> [#uses=1]
+  %37 = shufflevector <4 x float> %5, <4 x float> undef, <4 x i32> <i32 1, i32 undef, i32 undef, i32 undef> ; <<4 x float>> [#uses=1]
+  %38 = shufflevector <4 x float> %37, <4 x float> undef, <4 x i32> zeroinitializer ; <<4 x float>> [#uses=1]
+  %39 = fmul <4 x float> zeroinitializer, %38     ; <<4 x float>> [#uses=1]
+  %40 = fadd <4 x float> %36, %39                 ; <<4 x float>> [#uses=1]
+  %41 = fadd <4 x float> %40, zeroinitializer     ; <<4 x float>> [#uses=1]
+  %42 = shufflevector <4 x float> undef, <4 x float> %41, <4 x i32> <i32 0, i32 1, i32 6, i32 3> ; <<4 x float>> [#uses=1]
+  %43 = fmul <4 x float> %42, %31                 ; <<4 x float>> [#uses=1]
+  store float undef, float* undef, align 4
+  store float 0.000000e+00, float* null, align 4
+  %44 = extractelement <4 x float> %43, i32 1     ; <float> [#uses=1]
+  store float %44, float* undef, align 4
+  br i1 undef, label %return, label %bb
+
+return:                                           ; preds = %bb, %entry
+  ret void
+}
+
+declare <4 x float> @llvm.arm.neon.vrecpe.v4f32(<4 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-21-InvalidFNeg.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-21-InvalidFNeg.ll
new file mode 100644
index 0000000..0f021d2
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-21-InvalidFNeg.ll
@@ -0,0 +1,48 @@
+; RUN: llc -mcpu=cortex-a8 -mattr=+neon < %s | grep vneg
+target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64"
+target triple = "armv7-eabi"
+
+%aaa = type { %fff, %fff }
+%bbb = type { [6 x %ddd] }
+%ccc = type { %eee, %fff }
+%ddd = type { %fff }
+%eee = type { %fff, %fff, %fff, %fff }
+%fff = type { %struct.vec_float4 }
+%struct.vec_float4 = type { <4 x float> }
+
+define linkonce_odr arm_aapcs_vfpcc void @foo(%eee* noalias sret %agg.result, i64 %tfrm.0.0, i64 %tfrm.0.1, i64 %tfrm.0.2, i64 %tfrm.0.3, i64 %tfrm.0.4, i64 %tfrm.0.5, i64 %tfrm.0.6, i64 %tfrm.0.7) nounwind noinline {
+entry:
+  %tmp104 = zext i64 %tfrm.0.2 to i512            ; <i512> [#uses=1]
+  %tmp105 = shl i512 %tmp104, 128                 ; <i512> [#uses=1]
+  %tmp118 = zext i64 %tfrm.0.3 to i512            ; <i512> [#uses=1]
+  %tmp119 = shl i512 %tmp118, 192                 ; <i512> [#uses=1]
+  %ins121 = or i512 %tmp119, %tmp105              ; <i512> [#uses=1]
+  %tmp99 = zext i64 %tfrm.0.4 to i512             ; <i512> [#uses=1]
+  %tmp100 = shl i512 %tmp99, 256                  ; <i512> [#uses=1]
+  %tmp123 = zext i64 %tfrm.0.5 to i512            ; <i512> [#uses=1]
+  %tmp124 = shl i512 %tmp123, 320                 ; <i512> [#uses=1]
+  %tmp96 = zext i64 %tfrm.0.6 to i512             ; <i512> [#uses=1]
+  %tmp97 = shl i512 %tmp96, 384                   ; <i512> [#uses=1]
+  %tmp128 = zext i64 %tfrm.0.7 to i512            ; <i512> [#uses=1]
+  %tmp129 = shl i512 %tmp128, 448                 ; <i512> [#uses=1]
+  %mask.masked = or i512 %tmp124, %tmp100         ; <i512> [#uses=1]
+  %ins131 = or i512 %tmp129, %tmp97               ; <i512> [#uses=1]
+  %tmp109132 = zext i64 %tfrm.0.0 to i128         ; <i128> [#uses=1]
+  %tmp113134 = zext i64 %tfrm.0.1 to i128         ; <i128> [#uses=1]
+  %tmp114133 = shl i128 %tmp113134, 64            ; <i128> [#uses=1]
+  %tmp94 = or i128 %tmp114133, %tmp109132         ; <i128> [#uses=1]
+  %tmp95 = bitcast i128 %tmp94 to <4 x float>     ; <<4 x float>> [#uses=0]
+  %tmp82 = lshr i512 %ins121, 128                 ; <i512> [#uses=1]
+  %tmp83 = trunc i512 %tmp82 to i128              ; <i128> [#uses=1]
+  %tmp84 = bitcast i128 %tmp83 to <4 x float>     ; <<4 x float>> [#uses=0]
+  %tmp86 = lshr i512 %mask.masked, 256            ; <i512> [#uses=1]
+  %tmp87 = trunc i512 %tmp86 to i128              ; <i128> [#uses=1]
+  %tmp88 = bitcast i128 %tmp87 to <4 x float>     ; <<4 x float>> [#uses=0]
+  %tmp90 = lshr i512 %ins131, 384                 ; <i512> [#uses=1]
+  %tmp91 = trunc i512 %tmp90 to i128              ; <i128> [#uses=1]
+  %tmp92 = bitcast i128 %tmp91 to <4 x float>     ; <<4 x float>> [#uses=1]
+  %tmp = fsub <4 x float> <float -0.000000e+00, float -0.000000e+00, float -0.000000e+00, float -0.000000e+00>, %tmp92 ; <<4 x float>> [#uses=1]
+  %tmp28 = getelementptr inbounds %eee* %agg.result, i32 0, i32 3, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
+  store <4 x float> %tmp, <4 x float>* %tmp28, align 16
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-27-double-align.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-27-double-align.ll
new file mode 100644
index 0000000..a4e7685
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-27-double-align.ll
@@ -0,0 +1,14 @@
+; RUN: llc < %s  -mtriple=arm-linux-gnueabi  | FileCheck %s
+
+ at .str = private constant [1 x i8] zeroinitializer, align 1
+
+define arm_aapcscc void @g() {
+entry:
+;CHECK: [sp, #+8]
+;CHECK: [sp, #+12]
+;CHECK: [sp]
+        tail call arm_aapcscc  void (i8*, ...)* @f(i8* getelementptr ([1 x i8]* @.str, i32 0, i32 0), i32 1, double 2.000000e+00, i32 3, double 4.000000e+00)
+        ret void
+}
+
+declare arm_aapcscc void @f(i8*, ...)
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-30.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-30.ll
new file mode 100644
index 0000000..8256386
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-30.ll
@@ -0,0 +1,17 @@
+; RUN: llc < %s  -mtriple=arm-linux-gnueabi  | FileCheck %s
+; This test checks that the address of the varg arguments is correctly
+; computed when there are 5 or more regular arguments.
+
+define void @f(i32 %a1, i32 %a2, i32 %a3, i32 %a4, i32 %a5, ...) {
+entry:
+;CHECK: sub	sp, sp, #4
+;CHECK: add	r0, sp, #8
+;CHECK: str	r0, [sp], #+4
+;CHECK: bx	lr
+	%ap = alloca i8*, align 4
+	%ap1 = bitcast i8** %ap to i8*
+	call void @llvm.va_start(i8* %ap1)
+	ret void
+}
+
+declare void @llvm.va_start(i8*) nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-01-NeonMoves.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-01-NeonMoves.ll
new file mode 100644
index 0000000..62f3786
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-01-NeonMoves.ll
@@ -0,0 +1,40 @@
+; RUN: llc -mcpu=cortex-a8 < %s | FileCheck %s
+
+target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64"
+target triple = "armv7-eabi"
+
+%foo = type { <4 x float> }
+
+define arm_aapcs_vfpcc void @bar(%foo* noalias sret %agg.result, <4 x float> %quat.0) nounwind {
+entry:
+  %quat_addr = alloca %foo, align 16              ; <%foo*> [#uses=2]
+  %0 = getelementptr inbounds %foo* %quat_addr, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
+  store <4 x float> %quat.0, <4 x float>* %0
+  %1 = call arm_aapcs_vfpcc  <4 x float> @quux(%foo* %quat_addr) nounwind ; <<4 x float>> [#uses=3]
+;CHECK: vmov.f32
+;CHECK: vmov.f32
+  %2 = fmul <4 x float> %1, %1                    ; <<4 x float>> [#uses=2]
+  %3 = shufflevector <4 x float> %2, <4 x float> undef, <2 x i32> <i32 0, i32 1> ; <<2 x float>> [#uses=1]
+  %4 = shufflevector <4 x float> %2, <4 x float> undef, <2 x i32> <i32 2, i32 3> ; <<2 x float>> [#uses=1]
+  %5 = call <2 x float> @llvm.arm.neon.vpadd.v2f32(<2 x float> %3, <2 x float> %4) nounwind ; <<2 x float>> [#uses=2]
+  %6 = call <2 x float> @llvm.arm.neon.vpadd.v2f32(<2 x float> %5, <2 x float> %5) nounwind ; <<2 x float>> [#uses=2]
+  %7 = shufflevector <2 x float> %6, <2 x float> %6, <4 x i32> <i32 0, i32 1, i32 2, i32 3> ; <<4 x float>> [#uses=2]
+;CHECK: vmov
+  %8 = call <4 x float> @llvm.arm.neon.vrsqrte.v4f32(<4 x float> %7) nounwind ; <<4 x float>> [#uses=3]
+  %9 = fmul <4 x float> %8, %8                    ; <<4 x float>> [#uses=1]
+  %10 = call <4 x float> @llvm.arm.neon.vrsqrts.v4f32(<4 x float> %9, <4 x float> %7) nounwind ; <<4 x float>> [#uses=1]
+  %11 = fmul <4 x float> %10, %8                  ; <<4 x float>> [#uses=1]
+  %12 = fmul <4 x float> %11, %1                  ; <<4 x float>> [#uses=1]
+  %13 = call arm_aapcs_vfpcc  %foo* @baz(%foo* %agg.result, <4 x float> %12) nounwind ; <%foo*> [#uses=0]
+  ret void
+}
+
+declare arm_aapcs_vfpcc %foo* @baz(%foo*, <4 x float>) nounwind
+
+declare arm_aapcs_vfpcc <4 x float> @quux(%foo* nocapture) nounwind readonly
+
+declare <2 x float> @llvm.arm.neon.vpadd.v2f32(<2 x float>, <2 x float>) nounwind readnone
+
+declare <4 x float> @llvm.arm.neon.vrsqrte.v4f32(<4 x float>) nounwind readnone
+
+declare <4 x float> @llvm.arm.neon.vrsqrts.v4f32(<4 x float>, <4 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-02-NegativeLane.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-02-NegativeLane.ll
new file mode 100644
index 0000000..f2288c3
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-02-NegativeLane.ll
@@ -0,0 +1,20 @@
+; RUN: llc -mcpu=cortex-a8 < %s | grep vdup.32
+target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64"
+target triple = "armv7-eabi"
+
+define arm_aapcs_vfpcc void @foo(i8* nocapture %pBuffer, i32 %numItems) nounwind {
+entry:
+  br i1 undef, label %return, label %bb
+
+bb:                                               ; preds = %bb, %entry
+  %0 = load float* undef, align 4                 ; <float> [#uses=1]
+  %1 = insertelement <4 x float> undef, float %0, i32 2 ; <<4 x float>> [#uses=1]
+  %2 = insertelement <4 x float> %1, float undef, i32 3 ; <<4 x float>> [#uses=1]
+  %3 = fmul <4 x float> undef, %2                 ; <<4 x float>> [#uses=1]
+  %4 = extractelement <4 x float> %3, i32 1       ; <float> [#uses=1]
+  store float %4, float* undef, align 4
+  br i1 undef, label %return, label %bb
+
+return:                                           ; preds = %bb, %entry
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-07-SubRegAsmPrinting.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-07-SubRegAsmPrinting.ll
new file mode 100644
index 0000000..7aae3ac
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-07-SubRegAsmPrinting.ll
@@ -0,0 +1,66 @@
+; RUN: llc -mcpu=cortex-a8 < %s | FileCheck %s
+; PR5423
+
+target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64"
+target triple = "armv7-eabi"
+
+define arm_aapcs_vfpcc void @foo() {
+entry:
+  %0 = load float* null, align 4                  ; <float> [#uses=2]
+  %1 = fmul float %0, undef                       ; <float> [#uses=2]
+  %2 = fmul float 0.000000e+00, %1                ; <float> [#uses=2]
+  %3 = fmul float %0, %1                          ; <float> [#uses=1]
+  %4 = fadd float 0.000000e+00, %3                ; <float> [#uses=1]
+  %5 = fsub float 1.000000e+00, %4                ; <float> [#uses=1]
+; CHECK: foo:
+; CHECK: vmov.f32 s{{[0-9]+}}, #1.000000e+00
+  %6 = fsub float 1.000000e+00, undef             ; <float> [#uses=2]
+  %7 = fsub float %2, undef                       ; <float> [#uses=1]
+  %8 = fsub float 0.000000e+00, undef             ; <float> [#uses=3]
+  %9 = fadd float %2, undef                       ; <float> [#uses=3]
+  %10 = load float* undef, align 8                ; <float> [#uses=3]
+  %11 = fmul float %8, %10                        ; <float> [#uses=1]
+  %12 = fadd float undef, %11                     ; <float> [#uses=2]
+  %13 = fmul float undef, undef                   ; <float> [#uses=1]
+  %14 = fmul float %6, 0.000000e+00               ; <float> [#uses=1]
+  %15 = fadd float %13, %14                       ; <float> [#uses=1]
+  %16 = fmul float %9, %10                        ; <float> [#uses=1]
+  %17 = fadd float %15, %16                       ; <float> [#uses=2]
+  %18 = fmul float 0.000000e+00, undef            ; <float> [#uses=1]
+  %19 = fadd float %18, 0.000000e+00              ; <float> [#uses=1]
+  %20 = fmul float undef, %10                     ; <float> [#uses=1]
+  %21 = fadd float %19, %20                       ; <float> [#uses=1]
+  %22 = load float* undef, align 8                ; <float> [#uses=1]
+  %23 = fmul float %5, %22                        ; <float> [#uses=1]
+  %24 = fadd float %23, undef                     ; <float> [#uses=1]
+  %25 = load float* undef, align 8                ; <float> [#uses=2]
+  %26 = fmul float %8, %25                        ; <float> [#uses=1]
+  %27 = fadd float %24, %26                       ; <float> [#uses=1]
+  %28 = fmul float %9, %25                        ; <float> [#uses=1]
+  %29 = fadd float undef, %28                     ; <float> [#uses=1]
+  %30 = fmul float %8, undef                      ; <float> [#uses=1]
+  %31 = fadd float undef, %30                     ; <float> [#uses=1]
+  %32 = fmul float %6, undef                      ; <float> [#uses=1]
+  %33 = fadd float undef, %32                     ; <float> [#uses=1]
+  %34 = fmul float %9, undef                      ; <float> [#uses=1]
+  %35 = fadd float %33, %34                       ; <float> [#uses=1]
+  %36 = fmul float 0.000000e+00, undef            ; <float> [#uses=1]
+  %37 = fmul float %7, undef                      ; <float> [#uses=1]
+  %38 = fadd float %36, %37                       ; <float> [#uses=1]
+  %39 = fmul float undef, undef                   ; <float> [#uses=1]
+  %40 = fadd float %38, %39                       ; <float> [#uses=1]
+  store float %12, float* undef, align 8
+  store float %17, float* undef, align 4
+  store float %21, float* undef, align 8
+  store float %27, float* undef, align 8
+  store float %29, float* undef, align 4
+  store float %31, float* undef, align 8
+  store float %40, float* undef, align 8
+  store float %12, float* null, align 8
+  %41 = fmul float %17, undef                     ; <float> [#uses=1]
+  %42 = fadd float %41, undef                     ; <float> [#uses=1]
+  %43 = fmul float %35, undef                     ; <float> [#uses=1]
+  %44 = fadd float %42, %43                       ; <float> [#uses=1]
+  store float %44, float* null, align 4
+  unreachable
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-13-CoalescerCrash.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-13-CoalescerCrash.ll
new file mode 100644
index 0000000..efc4be1
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-13-CoalescerCrash.ll
@@ -0,0 +1,20 @@
+; RUN: llc -mtriple=armv7-eabi -mcpu=cortex-a8 < %s
+; PR5410
+
+%0 = type { float, float, float, float }
+%pln = type { %vec, float }
+%vec = type { [4 x float] }
+
+define arm_aapcs_vfpcc float @aaa(%vec* nocapture %ustart, %vec* nocapture %udir, %vec* nocapture %vstart, %vec* nocapture %vdir, %vec* %upoint, %vec* %vpoint) {
+entry:
+  br i1 undef, label %bb81, label %bb48
+
+bb48:                                             ; preds = %entry
+  %0 = call arm_aapcs_vfpcc  %0 @bbb(%pln* undef, %vec* %vstart, %vec* undef) nounwind ; <%0> [#uses=0]
+  ret float 0.000000e+00
+
+bb81:                                             ; preds = %entry
+  ret float 0.000000e+00
+}
+
+declare arm_aapcs_vfpcc %0 @bbb(%pln* nocapture, %vec* nocapture, %vec* nocapture) nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-13-ScavengerAssert.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-13-ScavengerAssert.ll
new file mode 100644
index 0000000..6cce02d
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-13-ScavengerAssert.ll
@@ -0,0 +1,42 @@
+; RUN: llc -mtriple=armv7-eabi -mcpu=cortex-a8 < %s
+; PR5411
+
+%bar = type { %quad, float, float, [3 x %quux*], [3 x %bar*], [2 x %bar*], [3 x i8], i8 }
+%baz = type { %bar*, i32 }
+%foo = type { i8, %quuz, %quad, float, [64 x %quux], [128 x %bar], i32, %baz, %baz }
+%quad = type { [4 x float] }
+%quux = type { %quad, %quad }
+%quuz = type { [4 x %quux*], [4 x float], i32 }
+
+define arm_aapcs_vfpcc %bar* @aaa(%foo* nocapture %this, %quux* %a, %quux* %b, %quux* %c, i8 zeroext %forced) {
+entry:
+  br i1 undef, label %bb85, label %bb
+
+bb:                                               ; preds = %entry
+  %0 = getelementptr inbounds %bar* null, i32 0, i32 0, i32 0, i32 2 ; <float*> [#uses=2]
+  %1 = load float* undef, align 4                 ; <float> [#uses=1]
+  %2 = fsub float 0.000000e+00, undef             ; <float> [#uses=2]
+  %3 = fmul float 0.000000e+00, undef             ; <float> [#uses=1]
+  %4 = load float* %0, align 4                    ; <float> [#uses=3]
+  %5 = fmul float %4, %2                          ; <float> [#uses=1]
+  %6 = fsub float %3, %5                          ; <float> [#uses=1]
+  %7 = fmul float %4, undef                       ; <float> [#uses=1]
+  %8 = fsub float %7, undef                       ; <float> [#uses=1]
+  %9 = fmul float undef, %2                       ; <float> [#uses=1]
+  %10 = fmul float 0.000000e+00, undef            ; <float> [#uses=1]
+  %11 = fsub float %9, %10                        ; <float> [#uses=1]
+  %12 = fmul float undef, %6                      ; <float> [#uses=1]
+  %13 = fmul float 0.000000e+00, %8               ; <float> [#uses=1]
+  %14 = fadd float %12, %13                       ; <float> [#uses=1]
+  %15 = fmul float %1, %11                        ; <float> [#uses=1]
+  %16 = fadd float %14, %15                       ; <float> [#uses=1]
+  %17 = select i1 undef, float undef, float %16   ; <float> [#uses=1]
+  %18 = fdiv float %17, 0.000000e+00              ; <float> [#uses=1]
+  store float %18, float* undef, align 4
+  %19 = fmul float %4, undef                      ; <float> [#uses=1]
+  store float %19, float* %0, align 4
+  ret %bar* null
+
+bb85:                                             ; preds = %entry
+  ret %bar* null
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-13-ScavengerAssert2.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-13-ScavengerAssert2.ll
new file mode 100644
index 0000000..3ff6631
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-13-ScavengerAssert2.ll
@@ -0,0 +1,123 @@
+; RUN: llc -mtriple=armv7-eabi -mcpu=cortex-a8 < %s
+; PR5412
+
+%bar = type { %quad, float, float, [3 x %quuz*], [3 x %bar*], [2 x %bar*], [3 x i8], i8 }
+%baz = type { %bar*, i32 }
+%foo = type { i8, %quux, %quad, float, [64 x %quuz], [128 x %bar], i32, %baz, %baz }
+%quad = type { [4 x float] }
+%quux = type { [4 x %quuz*], [4 x float], i32 }
+%quuz = type { %quad, %quad }
+
+define arm_aapcs_vfpcc %bar* @aaa(%foo* nocapture %this, %quuz* %a, %quuz* %b, %quuz* %c, i8 zeroext %forced) {
+entry:
+  br i1 undef, label %bb85, label %bb
+
+bb:                                               ; preds = %entry
+  br i1 undef, label %bb3.i, label %bb2.i
+
+bb2.i:                                            ; preds = %bb
+  br label %bb3.i
+
+bb3.i:                                            ; preds = %bb2.i, %bb
+  %0 = getelementptr inbounds %quuz* %a, i32 0, i32 1, i32 0, i32 0 ; <float*> [#uses=0]
+  %1 = fsub float 0.000000e+00, undef             ; <float> [#uses=1]
+  %2 = getelementptr inbounds %quuz* %b, i32 0, i32 1, i32 0, i32 1 ; <float*> [#uses=2]
+  %3 = load float* %2, align 4                    ; <float> [#uses=1]
+  %4 = getelementptr inbounds %quuz* %a, i32 0, i32 1, i32 0, i32 1 ; <float*> [#uses=1]
+  %5 = fsub float %3, undef                       ; <float> [#uses=2]
+  %6 = getelementptr inbounds %quuz* %b, i32 0, i32 1, i32 0, i32 2 ; <float*> [#uses=2]
+  %7 = load float* %6, align 4                    ; <float> [#uses=1]
+  %8 = fsub float %7, undef                       ; <float> [#uses=1]
+  %9 = getelementptr inbounds %quuz* %c, i32 0, i32 1, i32 0, i32 0 ; <float*> [#uses=2]
+  %10 = load float* %9, align 4                   ; <float> [#uses=1]
+  %11 = fsub float %10, undef                     ; <float> [#uses=2]
+  %12 = getelementptr inbounds %quuz* %c, i32 0, i32 1, i32 0, i32 1 ; <float*> [#uses=2]
+  %13 = load float* %12, align 4                  ; <float> [#uses=1]
+  %14 = fsub float %13, undef                     ; <float> [#uses=1]
+  %15 = load float* undef, align 4                ; <float> [#uses=1]
+  %16 = fsub float %15, undef                     ; <float> [#uses=1]
+  %17 = fmul float %5, %16                        ; <float> [#uses=1]
+  %18 = fsub float %17, 0.000000e+00              ; <float> [#uses=5]
+  %19 = fmul float %8, %11                        ; <float> [#uses=1]
+  %20 = fsub float %19, undef                     ; <float> [#uses=3]
+  %21 = fmul float %1, %14                        ; <float> [#uses=1]
+  %22 = fmul float %5, %11                        ; <float> [#uses=1]
+  %23 = fsub float %21, %22                       ; <float> [#uses=2]
+  store float %18, float* undef
+  %24 = getelementptr inbounds %bar* null, i32 0, i32 0, i32 0, i32 1 ; <float*> [#uses=2]
+  store float %20, float* %24
+  store float %23, float* undef
+  %25 = getelementptr inbounds %bar* null, i32 0, i32 0, i32 0, i32 3 ; <float*> [#uses=0]
+  %26 = fmul float %18, %18                       ; <float> [#uses=1]
+  %27 = fadd float %26, undef                     ; <float> [#uses=1]
+  %28 = fadd float %27, undef                     ; <float> [#uses=1]
+  %29 = call arm_aapcs_vfpcc  float @sqrtf(float %28) readnone ; <float> [#uses=1]
+  %30 = load float* null, align 4                 ; <float> [#uses=2]
+  %31 = load float* %4, align 4                   ; <float> [#uses=2]
+  %32 = load float* %2, align 4                   ; <float> [#uses=2]
+  %33 = load float* null, align 4                 ; <float> [#uses=3]
+  %34 = load float* %6, align 4                   ; <float> [#uses=2]
+  %35 = fsub float %33, %34                       ; <float> [#uses=2]
+  %36 = fmul float %20, %35                       ; <float> [#uses=1]
+  %37 = fsub float %36, undef                     ; <float> [#uses=1]
+  %38 = fmul float %23, 0.000000e+00              ; <float> [#uses=1]
+  %39 = fmul float %18, %35                       ; <float> [#uses=1]
+  %40 = fsub float %38, %39                       ; <float> [#uses=1]
+  %41 = fmul float %18, 0.000000e+00              ; <float> [#uses=1]
+  %42 = fmul float %20, 0.000000e+00              ; <float> [#uses=1]
+  %43 = fsub float %41, %42                       ; <float> [#uses=1]
+  %44 = fmul float 0.000000e+00, %37              ; <float> [#uses=1]
+  %45 = fmul float %31, %40                       ; <float> [#uses=1]
+  %46 = fadd float %44, %45                       ; <float> [#uses=1]
+  %47 = fmul float %33, %43                       ; <float> [#uses=1]
+  %48 = fadd float %46, %47                       ; <float> [#uses=2]
+  %49 = load float* %9, align 4                   ; <float> [#uses=2]
+  %50 = fsub float %30, %49                       ; <float> [#uses=1]
+  %51 = load float* %12, align 4                  ; <float> [#uses=3]
+  %52 = fsub float %32, %51                       ; <float> [#uses=2]
+  %53 = load float* undef, align 4                ; <float> [#uses=2]
+  %54 = load float* %24, align 4                  ; <float> [#uses=2]
+  %55 = fmul float %54, undef                     ; <float> [#uses=1]
+  %56 = fmul float undef, %52                     ; <float> [#uses=1]
+  %57 = fsub float %55, %56                       ; <float> [#uses=1]
+  %58 = fmul float undef, %52                     ; <float> [#uses=1]
+  %59 = fmul float %54, %50                       ; <float> [#uses=1]
+  %60 = fsub float %58, %59                       ; <float> [#uses=1]
+  %61 = fmul float %30, %57                       ; <float> [#uses=1]
+  %62 = fmul float %32, 0.000000e+00              ; <float> [#uses=1]
+  %63 = fadd float %61, %62                       ; <float> [#uses=1]
+  %64 = fmul float %34, %60                       ; <float> [#uses=1]
+  %65 = fadd float %63, %64                       ; <float> [#uses=2]
+  %66 = fcmp olt float %48, %65                   ; <i1> [#uses=1]
+  %67 = fsub float %49, 0.000000e+00              ; <float> [#uses=1]
+  %68 = fsub float %51, %31                       ; <float> [#uses=1]
+  %69 = fsub float %53, %33                       ; <float> [#uses=1]
+  %70 = fmul float undef, %67                     ; <float> [#uses=1]
+  %71 = load float* undef, align 4                ; <float> [#uses=2]
+  %72 = fmul float %71, %69                       ; <float> [#uses=1]
+  %73 = fsub float %70, %72                       ; <float> [#uses=1]
+  %74 = fmul float %71, %68                       ; <float> [#uses=1]
+  %75 = fsub float %74, 0.000000e+00              ; <float> [#uses=1]
+  %76 = fmul float %51, %73                       ; <float> [#uses=1]
+  %77 = fadd float undef, %76                     ; <float> [#uses=1]
+  %78 = fmul float %53, %75                       ; <float> [#uses=1]
+  %79 = fadd float %77, %78                       ; <float> [#uses=1]
+  %80 = select i1 %66, float %48, float %65       ; <float> [#uses=1]
+  %81 = select i1 undef, float %80, float %79     ; <float> [#uses=1]
+  %iftmp.164.0 = select i1 undef, float %29, float 1.000000e+00 ; <float> [#uses=1]
+  %82 = fdiv float %81, %iftmp.164.0              ; <float> [#uses=1]
+  %iftmp.165.0 = select i1 undef, float %82, float 0.000000e+00 ; <float> [#uses=1]
+  store float %iftmp.165.0, float* undef, align 4
+  br i1 false, label %bb4.i97, label %ccc.exit98
+
+bb4.i97:                                          ; preds = %bb3.i
+  br label %ccc.exit98
+
+ccc.exit98:                                       ; preds = %bb4.i97, %bb3.i
+  ret %bar* null
+
+bb85:                                             ; preds = %entry
+  ret %bar* null
+}
+
+declare arm_aapcs_vfpcc float @sqrtf(float) readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-13-VRRewriterCrash.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-13-VRRewriterCrash.ll
new file mode 100644
index 0000000..832ff4f
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-13-VRRewriterCrash.ll
@@ -0,0 +1,113 @@
+; RUN: llc -mtriple=armv7-eabi -mcpu=cortex-a8 < %s
+; PR5412
+; rdar://7384107
+
+%bar = type { %quad, float, float, [3 x %quuz*], [3 x %bar*], [2 x %bar*], [3 x i8], i8 }
+%baz = type { %bar*, i32 }
+%foo = type { i8, %quux, %quad, float, [64 x %quuz], [128 x %bar], i32, %baz, %baz }
+%quad = type { [4 x float] }
+%quux = type { [4 x %quuz*], [4 x float], i32 }
+%quuz = type { %quad, %quad }
+
+define arm_aapcs_vfpcc %bar* @aaa(%foo* nocapture %this, %quuz* %a, %quuz* %b, %quuz* %c, i8 zeroext %forced) {
+entry:
+  %0 = load %bar** undef, align 4                 ; <%bar*> [#uses=2]
+  br i1 false, label %bb85, label %bb
+
+bb:                                               ; preds = %entry
+  br i1 undef, label %bb3.i, label %bb2.i
+
+bb2.i:                                            ; preds = %bb
+  br label %bb3.i
+
+bb3.i:                                            ; preds = %bb2.i, %bb
+  %1 = getelementptr inbounds %quuz* %a, i32 0, i32 1, i32 0, i32 0 ; <float*> [#uses=1]
+  %2 = fsub float 0.000000e+00, undef             ; <float> [#uses=1]
+  %3 = getelementptr inbounds %quuz* %b, i32 0, i32 1, i32 0, i32 1 ; <float*> [#uses=1]
+  %4 = getelementptr inbounds %quuz* %b, i32 0, i32 1, i32 0, i32 2 ; <float*> [#uses=1]
+  %5 = fsub float 0.000000e+00, undef             ; <float> [#uses=1]
+  %6 = getelementptr inbounds %quuz* %c, i32 0, i32 1, i32 0, i32 0 ; <float*> [#uses=1]
+  %7 = getelementptr inbounds %quuz* %c, i32 0, i32 1, i32 0, i32 1 ; <float*> [#uses=1]
+  %8 = fsub float undef, undef                    ; <float> [#uses=1]
+  %9 = fmul float 0.000000e+00, %8                ; <float> [#uses=1]
+  %10 = fmul float %5, 0.000000e+00               ; <float> [#uses=1]
+  %11 = fsub float %9, %10                        ; <float> [#uses=3]
+  %12 = fmul float %2, 0.000000e+00               ; <float> [#uses=1]
+  %13 = fmul float 0.000000e+00, undef            ; <float> [#uses=1]
+  %14 = fsub float %12, %13                       ; <float> [#uses=2]
+  store float %14, float* undef
+  %15 = getelementptr inbounds %bar* %0, i32 0, i32 0, i32 0, i32 3 ; <float*> [#uses=1]
+  store float 0.000000e+00, float* %15
+  %16 = fmul float %11, %11                       ; <float> [#uses=1]
+  %17 = fadd float %16, 0.000000e+00              ; <float> [#uses=1]
+  %18 = fadd float %17, undef                     ; <float> [#uses=1]
+  %19 = call arm_aapcs_vfpcc  float @sqrtf(float %18) readnone ; <float> [#uses=2]
+  %20 = fcmp ogt float %19, 0x3F1A36E2E0000000    ; <i1> [#uses=1]
+  %21 = load float* %1, align 4                   ; <float> [#uses=2]
+  %22 = load float* %3, align 4                   ; <float> [#uses=2]
+  %23 = load float* undef, align 4                ; <float> [#uses=2]
+  %24 = load float* %4, align 4                   ; <float> [#uses=2]
+  %25 = fsub float %23, %24                       ; <float> [#uses=2]
+  %26 = fmul float 0.000000e+00, %25              ; <float> [#uses=1]
+  %27 = fsub float %26, undef                     ; <float> [#uses=1]
+  %28 = fmul float %14, 0.000000e+00              ; <float> [#uses=1]
+  %29 = fmul float %11, %25                       ; <float> [#uses=1]
+  %30 = fsub float %28, %29                       ; <float> [#uses=1]
+  %31 = fsub float undef, 0.000000e+00            ; <float> [#uses=1]
+  %32 = fmul float %21, %27                       ; <float> [#uses=1]
+  %33 = fmul float undef, %30                     ; <float> [#uses=1]
+  %34 = fadd float %32, %33                       ; <float> [#uses=1]
+  %35 = fmul float %23, %31                       ; <float> [#uses=1]
+  %36 = fadd float %34, %35                       ; <float> [#uses=1]
+  %37 = load float* %6, align 4                   ; <float> [#uses=2]
+  %38 = load float* %7, align 4                   ; <float> [#uses=2]
+  %39 = fsub float %22, %38                       ; <float> [#uses=2]
+  %40 = load float* undef, align 4                ; <float> [#uses=1]
+  %41 = load float* null, align 4                 ; <float> [#uses=2]
+  %42 = fmul float %41, undef                     ; <float> [#uses=1]
+  %43 = fmul float undef, %39                     ; <float> [#uses=1]
+  %44 = fsub float %42, %43                       ; <float> [#uses=1]
+  %45 = fmul float undef, %39                     ; <float> [#uses=1]
+  %46 = fmul float %41, 0.000000e+00              ; <float> [#uses=1]
+  %47 = fsub float %45, %46                       ; <float> [#uses=1]
+  %48 = fmul float 0.000000e+00, %44              ; <float> [#uses=1]
+  %49 = fmul float %22, undef                     ; <float> [#uses=1]
+  %50 = fadd float %48, %49                       ; <float> [#uses=1]
+  %51 = fmul float %24, %47                       ; <float> [#uses=1]
+  %52 = fadd float %50, %51                       ; <float> [#uses=1]
+  %53 = fsub float %37, %21                       ; <float> [#uses=2]
+  %54 = fmul float undef, undef                   ; <float> [#uses=1]
+  %55 = fmul float undef, undef                   ; <float> [#uses=1]
+  %56 = fsub float %54, %55                       ; <float> [#uses=1]
+  %57 = fmul float undef, %53                     ; <float> [#uses=1]
+  %58 = load float* undef, align 4                ; <float> [#uses=2]
+  %59 = fmul float %58, undef                     ; <float> [#uses=1]
+  %60 = fsub float %57, %59                       ; <float> [#uses=1]
+  %61 = fmul float %58, undef                     ; <float> [#uses=1]
+  %62 = fmul float undef, %53                     ; <float> [#uses=1]
+  %63 = fsub float %61, %62                       ; <float> [#uses=1]
+  %64 = fmul float %37, %56                       ; <float> [#uses=1]
+  %65 = fmul float %38, %60                       ; <float> [#uses=1]
+  %66 = fadd float %64, %65                       ; <float> [#uses=1]
+  %67 = fmul float %40, %63                       ; <float> [#uses=1]
+  %68 = fadd float %66, %67                       ; <float> [#uses=1]
+  %69 = select i1 undef, float %36, float %52     ; <float> [#uses=1]
+  %70 = select i1 undef, float %69, float %68     ; <float> [#uses=1]
+  %iftmp.164.0 = select i1 %20, float %19, float 1.000000e+00 ; <float> [#uses=1]
+  %71 = fdiv float %70, %iftmp.164.0              ; <float> [#uses=1]
+  store float %71, float* null, align 4
+  %72 = icmp eq %bar* null, %0                    ; <i1> [#uses=1]
+  br i1 %72, label %bb4.i97, label %ccc.exit98
+
+bb4.i97:                                          ; preds = %bb3.i
+  %73 = load %bar** undef, align 4                ; <%bar*> [#uses=0]
+  br label %ccc.exit98
+
+ccc.exit98:                                       ; preds = %bb4.i97, %bb3.i
+  ret %bar* null
+
+bb85:                                             ; preds = %entry
+  ret %bar* null
+}
+
+declare arm_aapcs_vfpcc float @sqrtf(float) readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/alloca.ll b/libclamav/c++/llvm/test/CodeGen/ARM/alloca.ll
index 15cf677..82a8c98 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/alloca.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/alloca.ll
@@ -1,13 +1,12 @@
-; RUN: llc < %s -march=arm -mtriple=arm-linux-gnu | \
-; RUN:   grep {mov r11, sp}
-; RUN: llc < %s -march=arm -mtriple=arm-linux-gnu | \
-; RUN:   grep {mov sp, r11}
+; RUN: llc < %s -march=arm -mtriple=arm-linux-gnu | FileCheck %s
 
 define void @f(i32 %a) {
 entry:
+; CHECK: mov r11, sp
         %tmp = alloca i8, i32 %a                ; <i8*> [#uses=1]
         call void @g( i8* %tmp, i32 %a, i32 1, i32 2, i32 3 )
         ret void
+; CHECK: mov sp, r11
 }
 
 declare void @g(i8*, i32, i32, i32, i32)
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/arguments.ll b/libclamav/c++/llvm/test/CodeGen/ARM/arguments.ll
index ad5b2d6..cc71839 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/arguments.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/arguments.ll
@@ -1,9 +1,9 @@
-; RUN: llc < %s -mtriple=arm-linux-gnueabi | \
-; RUN:   grep {mov r0, r2} | count 1
-; RUN: llc < %s -mtriple=arm-apple-darwin | \
-; RUN:   grep {mov r0, r1} | count 1
+; RUN: llc < %s -mtriple=arm-linux-gnueabi | FileCheck %s -check-prefix=ELF
+; RUN: llc < %s -mtriple=arm-apple-darwin  | FileCheck %s -check-prefix=DARWIN
 
 define i32 @f(i32 %a, i64 %b) {
+; ELF: mov r0, r2
+; DARWIN: mov r0, r1
         %tmp = call i32 @g(i64 %b)
         ret i32 %tmp
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/arguments_f64_backfill.ll b/libclamav/c++/llvm/test/CodeGen/ARM/arguments_f64_backfill.ll
index 690f488..062133e 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/arguments_f64_backfill.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/arguments_f64_backfill.ll
@@ -1,6 +1,7 @@
-; RUN: llc < %s -mtriple=arm-linux-gnueabi -mattr=+vfp2 -float-abi=hard | grep {fcpys s0, s1}
+; RUN: llc < %s -mtriple=arm-linux-gnueabi -mattr=+vfp2 -float-abi=hard | FileCheck %s
 
 define float @f(float %z, double %a, float %b) {
+; CHECK: vmov.f32 s0, s1
         %tmp = call float @g(float %b)
         ret float %tmp
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/arm-negative-stride.ll b/libclamav/c++/llvm/test/CodeGen/ARM/arm-negative-stride.ll
index c4b4ec6..72ec8ef 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/arm-negative-stride.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/arm-negative-stride.ll
@@ -1,7 +1,8 @@
-; RUN: llc < %s -march=arm | grep {str r1, \\\[r.*, -r.*, lsl #2\}
+; RUN: llc < %s -march=arm | FileCheck %s
 
 define void @test(i32* %P, i32 %A, i32 %i) nounwind {
 entry:
+; CHECK: str r1, [{{r.*}}, -{{r.*}}, lsl #2]
         icmp eq i32 %i, 0               ; <i1>:0 [#uses=1]
         br i1 %0, label %return, label %bb
 
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/bfc.ll b/libclamav/c++/llvm/test/CodeGen/ARM/bfc.ll
index 53392de..c4a44b4 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/bfc.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/bfc.ll
@@ -1,19 +1,25 @@
-; RUN: llc < %s -march=arm -mattr=+v6t2 | grep "bfc " | count 3
+; RUN: llc < %s -march=arm -mattr=+v6t2 | FileCheck %s
 
 ; 4278190095 = 0xff00000f
 define i32 @f1(i32 %a) {
+; CHECK: f1:
+; CHECK: bfc
     %tmp = and i32 %a, 4278190095
     ret i32 %tmp
 }
 
 ; 4286578688 = 0xff800000
 define i32 @f2(i32 %a) {
+; CHECK: f2:
+; CHECK: bfc
     %tmp = and i32 %a, 4286578688
     ret i32 %tmp
 }
 
 ; 4095 = 0x00000fff
 define i32 @f3(i32 %a) {
+; CHECK: f3:
+; CHECK: bfc
     %tmp = and i32 %a, 4095
     ret i32 %tmp
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/bic.ll b/libclamav/c++/llvm/test/CodeGen/ARM/bic.ll
index b16dcc6..1dfd627 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/bic.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/bic.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=arm | grep {bic\\W*r\[0-9\]*,\\W*r\[0-9\]*,\\W*r\[0-9\]*} | count 2
+; RUN: llc < %s -march=arm | FileCheck %s
 
 define i32 @f1(i32 %a, i32 %b) {
     %tmp = xor i32 %b, 4294967295
@@ -6,8 +6,12 @@ define i32 @f1(i32 %a, i32 %b) {
     ret i32 %tmp1
 }
 
+; CHECK: bic	r0, r0, r1
+
 define i32 @f2(i32 %a, i32 %b) {
     %tmp = xor i32 %b, 4294967295
     %tmp1 = and i32 %tmp, %a
     ret i32 %tmp1
 }
+
+; CHECK: bic	r0, r0, r1
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/call.ll b/libclamav/c++/llvm/test/CodeGen/ARM/call.ll
index 52246c3..3dd66ae 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/call.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/call.ll
@@ -1,13 +1,16 @@
-; RUN: llc < %s -march=arm | grep {mov lr, pc}
-; RUN: llc < %s -march=arm -mattr=+v5t | grep blx
+; RUN: llc < %s -march=arm | FileCheck %s -check-prefix=CHECKV4
+; RUN: llc < %s -march=arm -mattr=+v5t | FileCheck %s -check-prefix=CHECKV5
 ; RUN: llc < %s -march=arm -mtriple=arm-linux-gnueabi\
-; RUN:   -relocation-model=pic | grep {PLT}
+; RUN:   -relocation-model=pic | FileCheck %s -check-prefix=CHECKELF
 
 @t = weak global i32 ()* null           ; <i32 ()**> [#uses=1]
 
 declare void @g(i32, i32, i32, i32)
 
 define void @f() {
+; CHECKV4: mov lr, pc
+; CHECKV5: blx
+; CHECKELF: PLT
         call void @g( i32 1, i32 2, i32 3, i32 4 )
         ret void
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/carry.ll b/libclamav/c++/llvm/test/CodeGen/ARM/carry.ll
index 294de5f..a6a7ed6 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/carry.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/carry.ll
@@ -1,14 +1,19 @@
-; RUN: llc < %s -march=arm | grep "subs r" | count 2
-; RUN: llc < %s -march=arm | grep "adc r"
-; RUN: llc < %s -march=arm | grep "sbc r"  | count 2
+; RUN: llc < %s -march=arm | FileCheck %s
 
 define i64 @f1(i64 %a, i64 %b) {
+; CHECK: f1:
+; CHECK: subs r
+; CHECK: sbc r
 entry:
 	%tmp = sub i64 %a, %b
 	ret i64 %tmp
 }
 
 define i64 @f2(i64 %a, i64 %b) {
+; CHECK: f2:
+; CHECK: adc r
+; CHECK: subs r
+; CHECK: sbc r
 entry:
         %tmp1 = shl i64 %a, 1
 	%tmp2 = sub i64 %tmp1, %b
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/compare-call.ll b/libclamav/c++/llvm/test/CodeGen/ARM/compare-call.ll
index 5f3ed1d..fac2bc5 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/compare-call.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/compare-call.ll
@@ -1,5 +1,5 @@
 ; RUN: llc < %s -march=arm -mattr=+v6,+vfp2 | \
-; RUN:   grep fcmpes
+; RUN:   grep vcmpe.f32
 
 define void @test3(float* %glob, i32 %X) {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/constants.ll b/libclamav/c++/llvm/test/CodeGen/ARM/constants.ll
index e2d8ddc..ce91936 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/constants.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/constants.ll
@@ -1,39 +1,44 @@
-; RUN: llc < %s -march=arm | \
-; RUN:   grep {mov r0, #0} | count 1
-; RUN: llc < %s -march=arm | \
-; RUN:   grep {mov r0, #255$} | count 1
-; RUN: llc < %s -march=arm -asm-verbose | \
-; RUN:   grep {mov r0.*256} | count 1
-; RUN: llc < %s -march=arm -asm-verbose | grep {orr.*256} | count 1
-; RUN: llc < %s -march=arm -asm-verbose | grep {mov r0, .*-1073741761} | count 1
-; RUN: llc < %s -march=arm -asm-verbose | grep {mov r0, .*1008} | count 1
-; RUN: llc < %s -march=arm | grep {cmp r0, #1, 16} | count 1
+; RUN: llc < %s -march=arm | FileCheck %s
 
 define i32 @f1() {
+; CHECK: f1
+; CHECK: mov r0, #0
         ret i32 0
 }
 
 define i32 @f2() {
+; CHECK: f2
+; CHECK: mov r0, #255
         ret i32 255
 }
 
 define i32 @f3() {
+; CHECK: f3
+; CHECK: mov r0{{.*}}256
         ret i32 256
 }
 
 define i32 @f4() {
+; CHECK: f4
+; CHECK: orr{{.*}}256
         ret i32 257
 }
 
 define i32 @f5() {
+; CHECK: f5
+; CHECK: mov r0, {{.*}}-1073741761
         ret i32 -1073741761
 }
 
 define i32 @f6() {
+; CHECK: f6
+; CHECK: mov r0, {{.*}}1008
         ret i32 1008
 }
 
 define void @f7(i32 %a) {
+; CHECK: f7
+; CHECK: cmp r0, #1, 16
         %b = icmp ugt i32 %a, 65536             ; <i1> [#uses=1]
         br i1 %b, label %r, label %r
 
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fabss.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fabss.ll
index 5690a01..e5b5791 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fabss.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fabss.ll
@@ -1,8 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+vfp2 | grep -E {fabss\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | grep -E {vabs.f32\\W*d\[0-9\]+,\\W*d\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | grep -E {fabss\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mcpu=cortex-a8 | grep -E {vabs.f32\\W*d\[0-9\]+,\\W*d\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mcpu=cortex-a9 | grep -E {fabss\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s -check-prefix=VFP2
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | FileCheck %s -check-prefix=NFP1
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | FileCheck %s -check-prefix=NFP0
+; RUN: llc < %s -march=arm -mcpu=cortex-a8 | FileCheck %s -check-prefix=CORTEXA8
+; RUN: llc < %s -march=arm -mcpu=cortex-a9 | FileCheck %s -check-prefix=CORTEXA9
 
 define float @test(float %a, float %b) {
 entry:
@@ -13,3 +13,16 @@ entry:
 }
 
 declare float @fabsf(float)
+
+; VFP2: test:
+; VFP2: 	vabs.f32	s1, s1
+
+; NFP1: test:
+; NFP1: 	vabs.f32	d1, d1
+; NFP0: test:
+; NFP0: 	vabs.f32	s1, s1
+
+; CORTEXA8: test:
+; CORTEXA8: 	vabs.f32	d1, d1
+; CORTEXA9: test:
+; CORTEXA9: 	vabs.f32	s1, s1
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fadds.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fadds.ll
index a01f868..db18a86 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fadds.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fadds.ll
@@ -1,8 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+vfp2 | grep -E {fadds\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | grep -E {vadd.f32\\W*d\[0-9\]+,\\W*d\[0-9\]+,\\W*d\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | grep -E {fadds\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mcpu=cortex-a8 | grep -E {vadd.f32\\W*d\[0-9\]+,\\W*d\[0-9\]+,\\W*d\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mcpu=cortex-a9 | grep -E {fadds\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s -check-prefix=VFP2
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | FileCheck %s -check-prefix=NFP1
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | FileCheck %s -check-prefix=NFP0
+; RUN: llc < %s -march=arm -mcpu=cortex-a8 | FileCheck %s -check-prefix=CORTEXA8
+; RUN: llc < %s -march=arm -mcpu=cortex-a9 | FileCheck %s -check-prefix=CORTEXA9
 
 define float @test(float %a, float %b) {
 entry:
@@ -10,3 +10,15 @@ entry:
 	ret float %0
 }
 
+; VFP2: test:
+; VFP2: 	vadd.f32	s0, s1, s0
+
+; NFP1: test:
+; NFP1: 	vadd.f32	d0, d1, d0
+; NFP0: test:
+; NFP0: 	vadd.f32	s0, s1, s0
+
+; CORTEXA8: test:
+; CORTEXA8: 	vadd.f32	d0, d1, d0
+; CORTEXA9: test:
+; CORTEXA9: 	vadd.f32	s0, s1, s0
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fcopysign.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fcopysign.ll
index bf7c305..a6d7410 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fcopysign.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fcopysign.ll
@@ -1,6 +1,6 @@
 ; RUN: llc < %s -march=arm | grep bic | count 2
 ; RUN: llc < %s -march=arm -mattr=+v6,+vfp2 | \
-; RUN:   grep fneg | count 2
+; RUN:   grep vneg | count 2
 
 define float @test1(float %x, double %y) {
 	%tmp = fpext float %x to double
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fdivs.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fdivs.ll
index 2af250d..a5c86bf 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fdivs.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fdivs.ll
@@ -1,8 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+vfp2 | grep -E {fdivs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | grep -E {fdivs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | grep -E {fdivs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mcpu=cortex-a8 | grep -E {fdivs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mcpu=cortex-a9 | grep -E {fdivs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s -check-prefix=VFP2
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | FileCheck %s -check-prefix=NFP1
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | FileCheck %s -check-prefix=NFP0
+; RUN: llc < %s -march=arm -mcpu=cortex-a8 | FileCheck %s -check-prefix=CORTEXA8
+; RUN: llc < %s -march=arm -mcpu=cortex-a9 | FileCheck %s -check-prefix=CORTEXA9
 
 define float @test(float %a, float %b) {
 entry:
@@ -10,3 +10,15 @@ entry:
 	ret float %0
 }
 
+; VFP2: test:
+; VFP2: 	vdiv.f32	s0, s1, s0
+
+; NFP1: test:
+; NFP1: 	vdiv.f32	s0, s1, s0
+; NFP0: test:
+; NFP0: 	vdiv.f32	s0, s1, s0
+
+; CORTEXA8: test:
+; CORTEXA8: 	vdiv.f32	s0, s1, s0
+; CORTEXA9: test:
+; CORTEXA9: 	vdiv.f32	s0, s1, s0
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fixunsdfdi.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fixunsdfdi.ll
index ebf1d84..6db2385 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fixunsdfdi.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fixunsdfdi.ll
@@ -1,5 +1,5 @@
 ; RUN: llc < %s -march=arm -mattr=+vfp2
-; RUN: llc < %s -march=arm -mattr=vfp2 | not grep fstd
+; RUN: llc < %s -march=arm -mattr=vfp2 | not grep vstr.64
 
 define hidden i64 @__fixunsdfdi(double %x) nounwind readnone {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fmacs.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fmacs.ll
index 1a1cd07..904a587 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fmacs.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fmacs.ll
@@ -1,8 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+vfp2 | grep -E {fmacs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | grep -E {vmla.f32\\W*d\[0-9\]+,\\W*d\[0-9\]+,\\W*d\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | grep -E {fmacs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mcpu=cortex-a8 | grep -E {vmla.f32\\W*d\[0-9\]+,\\W*d\[0-9\]+,\\W*d\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mcpu=cortex-a9 | grep -E {fmacs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s -check-prefix=VFP2
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | FileCheck %s -check-prefix=NFP1
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | FileCheck %s -check-prefix=NFP0
+; RUN: llc < %s -march=arm -mcpu=cortex-a8 | FileCheck %s -check-prefix=CORTEXA8
+; RUN: llc < %s -march=arm -mcpu=cortex-a9 | FileCheck %s -check-prefix=CORTEXA9
 
 define float @test(float %acc, float %a, float %b) {
 entry:
@@ -11,3 +11,15 @@ entry:
 	ret float %1
 }
 
+; VFP2: test:
+; VFP2: 	vmla.f32	s2, s1, s0
+
+; NFP1: test:
+; NFP1: 	vmul.f32	d0, d1, d0
+; NFP0: test:
+; NFP0: 	vmla.f32	s2, s1, s0
+
+; CORTEXA8: test:
+; CORTEXA8: 	vmul.f32	d0, d1, d0
+; CORTEXA9: test:
+; CORTEXA9: 	vmla.f32	s2, s1, s0
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fmscs.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fmscs.ll
index c6e6d40..7b9e029 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fmscs.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fmscs.ll
@@ -1,8 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+vfp2 | grep -E {fmscs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | grep -E {fmscs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | grep -E {fmscs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mcpu=cortex-a8 | grep -E {fmscs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mcpu=cortex-a9 | grep -E {fmscs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s -check-prefix=VFP2
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | FileCheck %s -check-prefix=NFP1
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | FileCheck %s -check-prefix=NFP0
+; RUN: llc < %s -march=arm -mcpu=cortex-a8 | FileCheck %s -check-prefix=CORTEXA8
+; RUN: llc < %s -march=arm -mcpu=cortex-a9 | FileCheck %s -check-prefix=CORTEXA9
 
 define float @test(float %acc, float %a, float %b) {
 entry:
@@ -11,3 +11,15 @@ entry:
 	ret float %1
 }
 
+; VFP2: test:
+; VFP2: 	vnmls.f32	s2, s1, s0
+
+; NFP1: test:
+; NFP1: 	vnmls.f32	s2, s1, s0
+; NFP0: test:
+; NFP0: 	vnmls.f32	s2, s1, s0
+
+; CORTEXA8: test:
+; CORTEXA8: 	vnmls.f32	s2, s1, s0
+; CORTEXA9: test:
+; CORTEXA9: 	vnmls.f32	s2, s1, s0
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fmuls.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fmuls.ll
index cb5dade..d3c9c82 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fmuls.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fmuls.ll
@@ -1,8 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+vfp2 | grep -E {fmuls\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | grep -E {vmul.f32\\W*d\[0-9\]+,\\W*d\[0-9\]+,\\W*d\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | grep -E {fmuls\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mcpu=cortex-a8 | grep -E {vmul.f32\\W*d\[0-9\]+,\\W*d\[0-9\]+,\\W*d\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mcpu=cortex-a9 | grep -E {fmuls\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s -check-prefix=VFP2
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | FileCheck %s -check-prefix=NFP1
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | FileCheck %s -check-prefix=NFP0
+; RUN: llc < %s -march=arm -mcpu=cortex-a8 | FileCheck %s -check-prefix=CORTEXA8
+; RUN: llc < %s -march=arm -mcpu=cortex-a9 | FileCheck %s -check-prefix=CORTEXA9
 
 define float @test(float %a, float %b) {
 entry:
@@ -10,3 +10,15 @@ entry:
 	ret float %0
 }
 
+; VFP2: test:
+; VFP2: 	vmul.f32	s0, s1, s0
+
+; NFP1: test:
+; NFP1: 	vmul.f32	d0, d1, d0
+; NFP0: test:
+; NFP0: 	vmul.f32	s0, s1, s0
+
+; CORTEXA8: test:
+; CORTEXA8: 	vmul.f32	d0, d1, d0
+; CORTEXA9: test:
+; CORTEXA9: 	vmul.f32	s0, s1, s0
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fnegs.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fnegs.ll
index 7da443d..d6c22f1 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fnegs.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fnegs.ll
@@ -1,8 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+vfp2 | grep -E {fnegs\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 2
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | grep -E {vneg.f32\\W*d\[0-9\]+,\\W*d\[0-9\]+} | count 2
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | grep -E {fnegs\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 2
-; RUN: llc < %s -march=arm -mcpu=cortex-a8 | grep -E {vneg.f32\\W*d\[0-9\]+,\\W*d\[0-9\]+} | count 2
-; RUN: llc < %s -march=arm -mcpu=cortex-a9 | grep -E {fnegs\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 2
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s -check-prefix=VFP2
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | FileCheck %s -check-prefix=NFP1
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | FileCheck %s -check-prefix=NFP0
+; RUN: llc < %s -march=arm -mcpu=cortex-a8 | FileCheck %s -check-prefix=CORTEXA8
+; RUN: llc < %s -march=arm -mcpu=cortex-a9 | FileCheck %s -check-prefix=CORTEXA9
 
 define float @test1(float* %a) {
 entry:
@@ -13,6 +13,20 @@ entry:
 	%retval = select i1 %3, float %1, float %0		; <float> [#uses=1]
 	ret float %retval
 }
+; VFP2: test1:
+; VFP2: 	vneg.f32	s1, s0
+
+; NFP1: test1:
+; NFP1: 	vneg.f32	d1, d0
+
+; NFP0: test1:
+; NFP0: 	vneg.f32	s1, s0
+
+; CORTEXA8: test1:
+; CORTEXA8: 	vneg.f32	d1, d0
+
+; CORTEXA9: test1:
+; CORTEXA9: 	vneg.f32	s1, s0
 
 define float @test2(float* %a) {
 entry:
@@ -23,3 +37,18 @@ entry:
 	%retval = select i1 %3, float %1, float %0		; <float> [#uses=1]
 	ret float %retval
 }
+; VFP2: test2:
+; VFP2: 	vneg.f32	s1, s0
+
+; NFP1: test2:
+; NFP1: 	vneg.f32	d1, d0
+
+; NFP0: test2:
+; NFP0: 	vneg.f32	s1, s0
+
+; CORTEXA8: test2:
+; CORTEXA8: 	vneg.f32	d1, d0
+
+; CORTEXA9: test2:
+; CORTEXA9: 	vneg.f32	s1, s0
+
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fnmacs.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fnmacs.ll
index e57bbbb..724947e 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fnmacs.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fnmacs.ll
@@ -1,11 +1,18 @@
-; RUN: llc < %s -march=arm -mattr=+vfp2 | grep -E {fnmacs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | grep -E {vmls.f32\\W*d\[0-9\]+,\\W*d\[0-9\]+,\\W*d\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | grep -E {fnmacs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mcpu=cortex-a8 | grep -E {vmls.f32\\W*d\[0-9\]+,\\W*d\[0-9\]+,\\W*d\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mcpu=cortex-a9 | grep -E {fnmacs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s -check-prefix=VFP2
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | FileCheck %s -check-prefix=NEON
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | FileCheck %s -check-prefix=NEONFP
 
 define float @test(float %acc, float %a, float %b) {
 entry:
+; VFP2: vmls.f32
+; NEON: vmls.f32
+
+; NEONFP-NOT: vmls
+; NEONFP-NOT: vmov.f32
+; NEONFP:     vmul.f32
+; NEONFP:     vsub.f32
+; NEONFP:     vmov
+
 	%0 = fmul float %a, %b
         %1 = fsub float %acc, %0
 	ret float %1
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fnmscs.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fnmscs.ll
index 3ae437d..ad21882 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fnmscs.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fnmscs.ll
@@ -5,7 +5,7 @@
 ; RUN: llc < %s -march=arm -mcpu=cortex-a9 | FileCheck %s
 
 define float @test1(float %acc, float %a, float %b) nounwind {
-; CHECK: fnmscs s2, s1, s0 
+; CHECK: vnmla.f32 s2, s1, s0
 entry:
 	%0 = fmul float %a, %b
 	%1 = fsub float -0.0, %0
@@ -14,7 +14,7 @@ entry:
 }
 
 define float @test2(float %acc, float %a, float %b) nounwind {
-; CHECK: fnmscs s2, s1, s0 
+; CHECK: vnmla.f32 s2, s1, s0
 entry:
 	%0 = fmul float %a, %b
 	%1 = fmul float -1.0, %0
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fnmul.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fnmul.ll
index 613b347..6d7bc05 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fnmul.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fnmul.ll
@@ -1,5 +1,5 @@
-; RUN: llc < %s -march=arm -mattr=+v6,+vfp2 | grep fnmuld
-; RUN: llc < %s -march=arm -mattr=+v6,+vfp2 -enable-sign-dependent-rounding-fp-math | grep fmul
+; RUN: llc < %s -march=arm -mattr=+v6,+vfp2 | grep vnmul.f64
+; RUN: llc < %s -march=arm -mattr=+v6,+vfp2 -enable-sign-dependent-rounding-fp-math | grep vmul.f64
 
 
 define double @t1(double %a, double %b) {
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fp.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fp.ll
index 301a796..8fbd45b 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fp.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fp.ll
@@ -1,55 +1,71 @@
-; RUN: llc < %s -march=arm -mattr=+vfp2 > %t
-; RUN: grep fmsr %t | count 4
-; RUN: grep fsitos %t
-; RUN: grep fmrs %t | count 2
-; RUN: grep fsitod %t
-; RUN: grep fmrrd %t | count 3
-; RUN: not grep fmdrr %t 
-; RUN: grep fldd %t
-; RUN: grep fuitod %t
-; RUN: grep fuitos %t
-; RUN: grep 1065353216 %t
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s
 
 define float @f(i32 %a) {
+;CHECK: f:
+;CHECK: vmov
+;CHECK-NEXT: vcvt.f32.s32
+;CHECK-NEXT: vmov
 entry:
         %tmp = sitofp i32 %a to float           ; <float> [#uses=1]
         ret float %tmp
 }
 
 define double @g(i32 %a) {
+;CHECK: g:
+;CHECK: vmov
+;CHECK-NEXT: vcvt.f64.s32
+;CHECK-NEXT: vmov
 entry:
         %tmp = sitofp i32 %a to double          ; <double> [#uses=1]
         ret double %tmp
 }
 
 define double @uint_to_double(i32 %a) {
+;CHECK: uint_to_double:
+;CHECK: vmov
+;CHECK-NEXT: vcvt.f64.u32
+;CHECK-NEXT: vmov
 entry:
         %tmp = uitofp i32 %a to double          ; <double> [#uses=1]
         ret double %tmp
 }
 
 define float @uint_to_float(i32 %a) {
+;CHECK: uint_to_float:
+;CHECK: vmov
+;CHECK-NEXT: vcvt.f32.u32
+;CHECK-NEXT: vmov
 entry:
         %tmp = uitofp i32 %a to float           ; <float> [#uses=1]
         ret float %tmp
 }
 
 define double @h(double* %v) {
+;CHECK: h:
+;CHECK: vldr.64 
+;CHECK-NEXT: vmov
 entry:
         %tmp = load double* %v          ; <double> [#uses=1]
         ret double %tmp
 }
 
 define float @h2() {
+;CHECK: h2:
+;CHECK: 1065353216
 entry:
         ret float 1.000000e+00
 }
 
 define double @f2(double %a) {
+;CHECK: f2:
+;CHECK-NOT: vmov
         ret double %a
 }
 
 define void @f3() {
+;CHECK: f3:
+;CHECK-NOT: vmov
+;CHECK: f4
 entry:
         %tmp = call double @f5( )               ; <double> [#uses=1]
         call void @f4( double %tmp )
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fp_convert.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fp_convert.ll
index 9ce2ac5..2adac78 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fp_convert.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fp_convert.ll
@@ -6,7 +6,7 @@
 
 define i32 @test1(float %a, float %b) {
 ; VFP2: test1:
-; VFP2: ftosizs s0, s0
+; VFP2: vcvt.s32.f32 s0, s0
 ; NEON: test1:
 ; NEON: vcvt.s32.f32 d0, d0
 entry:
@@ -17,7 +17,7 @@ entry:
 
 define i32 @test2(float %a, float %b) {
 ; VFP2: test2:
-; VFP2: ftouizs s0, s0
+; VFP2: vcvt.u32.f32 s0, s0
 ; NEON: test2:
 ; NEON: vcvt.u32.f32 d0, d0
 entry:
@@ -28,7 +28,7 @@ entry:
 
 define float @test3(i32 %a, i32 %b) {
 ; VFP2: test3:
-; VFP2: fuitos s0, s0
+; VFP2: vcvt.f32.u32 s0, s0
 ; NEON: test3:
 ; NEON: vcvt.f32.u32 d0, d0
 entry:
@@ -39,7 +39,7 @@ entry:
 
 define float @test4(i32 %a, i32 %b) {
 ; VFP2: test4:
-; VFP2: fsitos s0, s0
+; VFP2: vcvt.f32.s32 s0, s0
 ; NEON: test4:
 ; NEON: vcvt.f32.s32 d0, d0
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fparith.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fparith.ll
index 7386b91..ce6d6b2 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fparith.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fparith.ll
@@ -1,74 +1,88 @@
-; RUN: llc < %s -march=arm -mattr=+vfp2 > %t
-; RUN: grep fadds %t
-; RUN: grep faddd %t
-; RUN: grep fmuls %t
-; RUN: grep fmuld %t
-; RUN: grep eor %t
-; RUN: grep fnegd %t
-; RUN: grep fdivs %t
-; RUN: grep fdivd %t
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s
 
 define float @f1(float %a, float %b) {
+;CHECK: f1:
+;CHECK: vadd.f32
 entry:
 	%tmp = fadd float %a, %b		; <float> [#uses=1]
 	ret float %tmp
 }
 
 define double @f2(double %a, double %b) {
+;CHECK: f2:
+;CHECK: vadd.f64
 entry:
 	%tmp = fadd double %a, %b		; <double> [#uses=1]
 	ret double %tmp
 }
 
 define float @f3(float %a, float %b) {
+;CHECK: f3:
+;CHECK: vmul.f32
 entry:
 	%tmp = fmul float %a, %b		; <float> [#uses=1]
 	ret float %tmp
 }
 
 define double @f4(double %a, double %b) {
+;CHECK: f4:
+;CHECK: vmul.f64
 entry:
 	%tmp = fmul double %a, %b		; <double> [#uses=1]
 	ret double %tmp
 }
 
 define float @f5(float %a, float %b) {
+;CHECK: f5:
+;CHECK: vsub.f32
 entry:
 	%tmp = fsub float %a, %b		; <float> [#uses=1]
 	ret float %tmp
 }
 
 define double @f6(double %a, double %b) {
+;CHECK: f6:
+;CHECK: vsub.f64
 entry:
 	%tmp = fsub double %a, %b		; <double> [#uses=1]
 	ret double %tmp
 }
 
 define float @f7(float %a) {
+;CHECK: f7:
+;CHECK: eor
 entry:
 	%tmp1 = fsub float -0.000000e+00, %a		; <float> [#uses=1]
 	ret float %tmp1
 }
 
 define double @f8(double %a) {
+;CHECK: f8:
+;CHECK: vneg.f64
 entry:
 	%tmp1 = fsub double -0.000000e+00, %a		; <double> [#uses=1]
 	ret double %tmp1
 }
 
 define float @f9(float %a, float %b) {
+;CHECK: f9:
+;CHECK: vdiv.f32
 entry:
 	%tmp1 = fdiv float %a, %b		; <float> [#uses=1]
 	ret float %tmp1
 }
 
 define double @f10(double %a, double %b) {
+;CHECK: f10:
+;CHECK: vdiv.f64
 entry:
 	%tmp1 = fdiv double %a, %b		; <double> [#uses=1]
 	ret double %tmp1
 }
 
 define float @f11(float %a) {
+;CHECK: f11:
+;CHECK: bic
 entry:
 	%tmp1 = call float @fabsf( float %a )		; <float> [#uses=1]
 	ret float %tmp1
@@ -77,6 +91,8 @@ entry:
 declare float @fabsf(float)
 
 define double @f12(double %a) {
+;CHECK: f12:
+;CHECK: vabs.f64
 entry:
 	%tmp1 = call double @fabs( double %a )		; <double> [#uses=1]
 	ret double %tmp1
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fpcmp.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fpcmp.ll
index 8370fbb..260ec49 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fpcmp.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fpcmp.ll
@@ -1,13 +1,9 @@
-; RUN: llc < %s -march=arm -mattr=+vfp2 > %t
-; RUN: grep movmi %t
-; RUN: grep moveq %t
-; RUN: grep movgt %t
-; RUN: grep movge %t
-; RUN: grep movne %t
-; RUN: grep fcmped %t | count 1
-; RUN: grep fcmpes %t | count 6
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s
 
 define i32 @f1(float %a) {
+;CHECK: f1:
+;CHECK: vcmpe.f32
+;CHECK: movmi
 entry:
         %tmp = fcmp olt float %a, 1.000000e+00          ; <i1> [#uses=1]
         %tmp1 = zext i1 %tmp to i32              ; <i32> [#uses=1]
@@ -15,6 +11,9 @@ entry:
 }
 
 define i32 @f2(float %a) {
+;CHECK: f2:
+;CHECK: vcmpe.f32
+;CHECK: moveq
 entry:
         %tmp = fcmp oeq float %a, 1.000000e+00          ; <i1> [#uses=1]
         %tmp2 = zext i1 %tmp to i32              ; <i32> [#uses=1]
@@ -22,6 +21,9 @@ entry:
 }
 
 define i32 @f3(float %a) {
+;CHECK: f3:
+;CHECK: vcmpe.f32
+;CHECK: movgt
 entry:
         %tmp = fcmp ogt float %a, 1.000000e+00          ; <i1> [#uses=1]
         %tmp3 = zext i1 %tmp to i32              ; <i32> [#uses=1]
@@ -29,6 +31,9 @@ entry:
 }
 
 define i32 @f4(float %a) {
+;CHECK: f4:
+;CHECK: vcmpe.f32
+;CHECK: movge
 entry:
         %tmp = fcmp oge float %a, 1.000000e+00          ; <i1> [#uses=1]
         %tmp4 = zext i1 %tmp to i32              ; <i32> [#uses=1]
@@ -36,6 +41,9 @@ entry:
 }
 
 define i32 @f5(float %a) {
+;CHECK: f5:
+;CHECK: vcmpe.f32
+;CHECK: movls
 entry:
         %tmp = fcmp ole float %a, 1.000000e+00          ; <i1> [#uses=1]
         %tmp5 = zext i1 %tmp to i32              ; <i32> [#uses=1]
@@ -43,6 +51,9 @@ entry:
 }
 
 define i32 @f6(float %a) {
+;CHECK: f6:
+;CHECK: vcmpe.f32
+;CHECK: movne
 entry:
         %tmp = fcmp une float %a, 1.000000e+00          ; <i1> [#uses=1]
         %tmp6 = zext i1 %tmp to i32              ; <i32> [#uses=1]
@@ -50,6 +61,9 @@ entry:
 }
 
 define i32 @g1(double %a) {
+;CHECK: g1:
+;CHECK: vcmpe.f64
+;CHECK: movmi
 entry:
         %tmp = fcmp olt double %a, 1.000000e+00         ; <i1> [#uses=1]
         %tmp7 = zext i1 %tmp to i32              ; <i32> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fpconsts.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fpconsts.ll
new file mode 100644
index 0000000..710994d
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fpconsts.ll
@@ -0,0 +1,33 @@
+; RUN: llc < %s -march=arm -mattr=+vfp3 | FileCheck %s
+
+define arm_apcscc float @t1(float %x) nounwind readnone optsize {
+entry:
+; CHECK: t1:
+; CHECK: vmov.f32 s1, #4.000000e+00
+  %0 = fadd float %x, 4.000000e+00
+  ret float %0
+}
+
+define arm_apcscc double @t2(double %x) nounwind readnone optsize {
+entry:
+; CHECK: t2:
+; CHECK: vmov.f64 d1, #3.000000e+00
+  %0 = fadd double %x, 3.000000e+00
+  ret double %0
+}
+
+define arm_apcscc double @t3(double %x) nounwind readnone optsize {
+entry:
+; CHECK: t3:
+; CHECK: vmov.f64 d1, #-1.300000e+01
+  %0 = fmul double %x, -1.300000e+01
+  ret double %0
+}
+
+define arm_apcscc float @t4(float %x) nounwind readnone optsize {
+entry:
+; CHECK: t4:
+; CHECK: vmov.f32 s1, #-2.400000e+01
+  %0 = fmul float %x, -2.400000e+01
+  ret float %0
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fpconv.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fpconv.ll
index 5420106..bf197a4 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fpconv.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fpconv.ll
@@ -1,81 +1,101 @@
-; RUN: llc < %s -march=arm -mattr=+vfp2 > %t
-; RUN: grep fcvtsd %t
-; RUN: grep fcvtds %t
-; RUN: grep ftosizs %t
-; RUN: grep ftouizs %t
-; RUN: grep ftosizd %t
-; RUN: grep ftouizd %t
-; RUN: grep fsitos %t
-; RUN: grep fsitod %t
-; RUN: grep fuitos %t
-; RUN: grep fuitod %t
-; RUN: llc < %s -march=arm > %t
-; RUN: grep truncdfsf2 %t
-; RUN: grep extendsfdf2 %t
-; RUN: grep fixsfsi %t
-; RUN: grep fixunssfsi %t
-; RUN: grep fixdfsi %t
-; RUN: grep fixunsdfsi %t
-; RUN: grep floatsisf %t
-; RUN: grep floatsidf %t
-; RUN: grep floatunsisf %t
-; RUN: grep floatunsidf %t
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s --check-prefix=CHECK-VFP
+; RUN: llc < %s -march=arm | FileCheck %s
 
 define float @f1(double %x) {
+;CHECK-VFP: f1:
+;CHECK-VFP: vcvt.f32.f64
+;CHECK: f1:
+;CHECK: truncdfsf2
 entry:
 	%tmp1 = fptrunc double %x to float		; <float> [#uses=1]
 	ret float %tmp1
 }
 
 define double @f2(float %x) {
+;CHECK-VFP: f2:
+;CHECK-VFP: vcvt.f64.f32
+;CHECK: f2:
+;CHECK: extendsfdf2
 entry:
 	%tmp1 = fpext float %x to double		; <double> [#uses=1]
 	ret double %tmp1
 }
 
 define i32 @f3(float %x) {
+;CHECK-VFP: f3:
+;CHECK-VFP: vcvt.s32.f32
+;CHECK: f3:
+;CHECK: fixsfsi
 entry:
 	%tmp = fptosi float %x to i32		; <i32> [#uses=1]
 	ret i32 %tmp
 }
 
 define i32 @f4(float %x) {
+;CHECK-VFP: f4:
+;CHECK-VFP: vcvt.u32.f32
+;CHECK: f4:
+;CHECK: fixunssfsi
 entry:
 	%tmp = fptoui float %x to i32		; <i32> [#uses=1]
 	ret i32 %tmp
 }
 
 define i32 @f5(double %x) {
+;CHECK-VFP: f5:
+;CHECK-VFP: vcvt.s32.f64
+;CHECK: f5:
+;CHECK: fixdfsi
 entry:
 	%tmp = fptosi double %x to i32		; <i32> [#uses=1]
 	ret i32 %tmp
 }
 
 define i32 @f6(double %x) {
+;CHECK-VFP: f6:
+;CHECK-VFP: vcvt.u32.f64
+;CHECK: f6:
+;CHECK: fixunsdfsi
 entry:
 	%tmp = fptoui double %x to i32		; <i32> [#uses=1]
 	ret i32 %tmp
 }
 
 define float @f7(i32 %a) {
+;CHECK-VFP: f7:
+;CHECK-VFP: vcvt.f32.s32
+;CHECK: f7:
+;CHECK: floatsisf
 entry:
 	%tmp = sitofp i32 %a to float		; <float> [#uses=1]
 	ret float %tmp
 }
 
 define double @f8(i32 %a) {
+;CHECK-VFP: f8:
+;CHECK-VFP: vcvt.f64.s32
+;CHECK: f8:
+;CHECK: floatsidf
 entry:
 	%tmp = sitofp i32 %a to double		; <double> [#uses=1]
 	ret double %tmp
 }
 
 define float @f9(i32 %a) {
+;CHECK-VFP: f9:
+;CHECK-VFP: vcvt.f32.u32
+;CHECK: f9:
+;CHECK: floatunsisf
 entry:
 	%tmp = uitofp i32 %a to float		; <float> [#uses=1]
 	ret float %tmp
 }
 
 define double @f10(i32 %a) {
+;CHECK-VFP: f10:
+;CHECK-VFP: vcvt.f64.u32
+;CHECK: f10:
+;CHECK: floatunsidf
 entry:
 	%tmp = uitofp i32 %a to double		; <double> [#uses=1]
 	ret double %tmp
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fpmem.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fpmem.ll
index fa897bf..c3cff18 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fpmem.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fpmem.ll
@@ -1,21 +1,22 @@
-; RUN: llc < %s -march=arm | \
-; RUN:   grep {mov r0, #0} | count 1
-; RUN: llc < %s -march=arm -mattr=+vfp2 | \
-; RUN:   grep {flds.*\\\[} | count 1
-; RUN: llc < %s -march=arm -mattr=+vfp2 | \
-; RUN:   grep {fsts.*\\\[} | count 1
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s
 
 define float @f1(float %a) {
+; CHECK: f1:
+; CHECK: mov r0, #0
         ret float 0.000000e+00
 }
 
 define float @f2(float* %v, float %u) {
+; CHECK: f2:
+; CHECK: vldr.32{{.*}}[
         %tmp = load float* %v           ; <float> [#uses=1]
         %tmp1 = fadd float %tmp, %u              ; <float> [#uses=1]
         ret float %tmp1
 }
 
 define void @f3(float %a, float %b, float* %v) {
+; CHECK: f3:
+; CHECK: vstr.32{{.*}}[
         %tmp = fadd float %a, %b         ; <float> [#uses=1]
         store float %tmp, float* %v
         ret void
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fptoint.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fptoint.ll
index 0d270b0..299cb8f 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fptoint.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fptoint.ll
@@ -1,5 +1,4 @@
-; RUN: llc < %s -march=arm -mattr=+v6,+vfp2 | grep fmrs | count 1
-; RUN: llc < %s -march=arm -mattr=+v6,+vfp2 | not grep fmrrd
+; RUN: llc < %s -march=arm -mattr=+v6,+vfp2 | FileCheck %s
 
 @i = weak global i32 0		; <i32*> [#uses=2]
 @u = weak global i32 0		; <i32*> [#uses=2]
@@ -45,3 +44,6 @@ define void @foo9(double %x) {
 	store i16 %tmp, i16* null
 	ret void
 }
+; CHECK: foo9:
+; CHECK: 	vmov	r0, s0
+
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/fsubs.ll b/libclamav/c++/llvm/test/CodeGen/ARM/fsubs.ll
index 060dd46..ae98be3 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/fsubs.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/fsubs.ll
@@ -1,6 +1,6 @@
-; RUN: llc < %s -march=arm -mattr=+vfp2 | grep -E {fsubs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | grep -E {vsub.f32\\W*d\[0-9\]+,\\W*d\[0-9\]+,\\W*d\[0-9\]+} | count 1
-; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | grep -E {fsubs\\W*s\[0-9\]+,\\W*s\[0-9\]+,\\W*s\[0-9\]+} | count 1
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s -check-prefix=VFP2
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=1 | FileCheck %s -check-prefix=NFP1
+; RUN: llc < %s -march=arm -mattr=+neon -arm-use-neon-fp=0 | FileCheck %s -check-prefix=NFP0
 
 define float @test(float %a, float %b) {
 entry:
@@ -8,3 +8,6 @@ entry:
 	ret float %0
 }
 
+; VFP2: vsub.f32	s0, s1, s0
+; NFP1: vsub.f32	d0, d1, d0
+; NFP0: vsub.f32	s0, s1, s0
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/globals.ll b/libclamav/c++/llvm/test/CodeGen/ARM/globals.ll
new file mode 100644
index 0000000..886c0d5
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/globals.ll
@@ -0,0 +1,75 @@
+; RUN: llc < %s -mtriple=arm-apple-darwin -relocation-model=static | FileCheck %s -check-prefix=DarwinStatic
+; RUN: llc < %s -mtriple=arm-apple-darwin -relocation-model=dynamic-no-pic | FileCheck %s -check-prefix=DarwinDynamic
+; RUN: llc < %s -mtriple=arm-apple-darwin -relocation-model=pic | FileCheck %s -check-prefix=DarwinPIC
+; RUN: llc < %s -mtriple=arm-linux-gnueabi -relocation-model=pic | FileCheck %s -check-prefix=LinuxPIC
+
+@G = external global i32
+
+define i32 @test1() {
+	%tmp = load i32* @G
+	ret i32 %tmp
+}
+
+; DarwinStatic: _test1:
+; DarwinStatic: 	ldr r0, LCPI1_0
+; DarwinStatic:	        ldr r0, [r0]
+; DarwinStatic:	        bx lr
+
+; DarwinStatic: 	.align	2
+; DarwinStatic:	LCPI1_0:
+; DarwinStatic: 	.long	{{_G$}}
+
+
+; DarwinDynamic: _test1:
+; DarwinDynamic: 	ldr r0, LCPI1_0
+; DarwinDynamic:        ldr r0, [r0]
+; DarwinDynamic:        ldr r0, [r0]
+; DarwinDynamic:        bx lr
+
+; DarwinDynamic: 	.align	2
+; DarwinDynamic:	LCPI1_0:
+; DarwinDynamic: 	.long	L_G$non_lazy_ptr
+
+; DarwinDynamic: 	.section __DATA,__nl_symbol_ptr,non_lazy_symbol_pointers
+; DarwinDynamic:	.align	2
+; DarwinDynamic: L_G$non_lazy_ptr:
+; DarwinDynamic:	.indirect_symbol _G
+; DarwinDynamic:	.long	0
+
+
+
+; DarwinPIC: _test1:
+; DarwinPIC: 	ldr r0, LCPI1_0
+; DarwinPIC: LPC1_0:
+; DarwinPIC:    ldr r0, [pc, +r0]
+; DarwinPIC:    ldr r0, [r0]
+; DarwinPIC:    bx lr
+
+; DarwinPIC: 	.align	2
+; DarwinPIC: LCPI1_0:
+; DarwinPIC: 	.long	L_G$non_lazy_ptr-(LPC1_0+8)
+
+; DarwinPIC: 	.section __DATA,__nl_symbol_ptr,non_lazy_symbol_pointers
+; DarwinPIC:	.align	2
+; DarwinPIC: L_G$non_lazy_ptr:
+; DarwinPIC:	.indirect_symbol _G
+; DarwinPIC:	.long	0
+
+
+
+; LinuxPIC: test1:
+; LinuxPIC: 	ldr r0, .LCPI1_0
+; LinuxPIC: 	ldr r1, .LCPI1_1
+	
+; LinuxPIC: .LPC1_0:
+; LinuxPIC: 	add r0, pc, r0
+; LinuxPIC: 	ldr r0, [r1, +r0]
+; LinuxPIC: 	ldr r0, [r0]
+; LinuxPIC: 	bx lr
+
+; LinuxPIC: .align 2
+; LinuxPIC: .LCPI1_0:
+; LinuxPIC:     .long _GLOBAL_OFFSET_TABLE_-(.LPC1_0+8)
+; LinuxPIC: .align 2
+; LinuxPIC: .LCPI1_1:
+; LinuxPIC:     .long	G(GOT)
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt5.ll b/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt5.ll
index e9145ac..623f2cb 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt5.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt5.ll
@@ -11,7 +11,7 @@ entry:
 
 define void @t1(i32 %a, i32 %b) {
 ; CHECK: t1:
-; CHECK: ldmltfd sp!, {r7, pc}
+; CHECK: ldmfdlt sp!, {r7, pc}
 entry:
 	%tmp1 = icmp sgt i32 %a, 10		; <i1> [#uses=1]
 	br i1 %tmp1, label %cond_true, label %UnifiedReturnBlock
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt6.ll b/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt6.ll
index 5824115..d7fcf7d 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt6.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt6.ll
@@ -1,7 +1,7 @@
 ; RUN: llc < %s -march=arm -mtriple=arm-apple-darwin | \
 ; RUN:   grep cmpne | count 1
 ; RUN: llc < %s -march=arm -mtriple=arm-apple-darwin | \
-; RUN:   grep ldmhi | count 1
+; RUN:   grep ldmfdhi | count 1
 
 define void @foo(i32 %X, i32 %Y) {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt7.ll b/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt7.ll
index f9cf88f..c60ad93 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt7.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt7.ll
@@ -3,7 +3,7 @@
 ; RUN: llc < %s -march=arm -mtriple=arm-apple-darwin | \
 ; RUN:   grep moveq | count 1
 ; RUN: llc < %s -march=arm -mtriple=arm-apple-darwin | \
-; RUN:   grep ldmeq | count 1
+; RUN:   grep ldmfdeq | count 1
 ; FIXME: Need post-ifcvt branch folding to get rid of the extra br at end of BB1.
 
 	%struct.quad_struct = type { i32, i32, %struct.quad_struct*, %struct.quad_struct*, %struct.quad_struct*, %struct.quad_struct*, %struct.quad_struct* }
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt8.ll b/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt8.ll
index 6cb8e7b..a7da834 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt8.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/ifcvt8.ll
@@ -1,5 +1,5 @@
 ; RUN: llc < %s -march=arm -mtriple=arm-apple-darwin | \
-; RUN:   grep ldmne | count 1
+; RUN:   grep ldmfdne | count 1
 
 	%struct.SString = type { i8*, i32, i32 }
 
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/indirectbr.ll b/libclamav/c++/llvm/test/CodeGen/ARM/indirectbr.ll
new file mode 100644
index 0000000..8b56f13
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/indirectbr.ll
@@ -0,0 +1,60 @@
+; RUN: llc < %s -relocation-model=pic -mtriple=arm-apple-darwin | FileCheck %s -check-prefix=ARM
+; RUN: llc < %s -relocation-model=pic -mtriple=thumb-apple-darwin | FileCheck %s -check-prefix=THUMB
+; RUN: llc < %s -relocation-model=static -mtriple=thumbv7-apple-darwin | FileCheck %s -check-prefix=THUMB2
+
+@nextaddr = global i8* null                       ; <i8**> [#uses=2]
+@C.0.2070 = private constant [5 x i8*] [i8* blockaddress(@foo, %L1), i8* blockaddress(@foo, %L2), i8* blockaddress(@foo, %L3), i8* blockaddress(@foo, %L4), i8* blockaddress(@foo, %L5)] ; <[5 x i8*]*> [#uses=1]
+
+define internal arm_apcscc i32 @foo(i32 %i) nounwind {
+; ARM: foo:
+; THUMB: foo:
+; THUMB2: foo:
+entry:
+  %0 = load i8** @nextaddr, align 4               ; <i8*> [#uses=2]
+  %1 = icmp eq i8* %0, null                       ; <i1> [#uses=1]
+  br i1 %1, label %bb3, label %bb2
+
+bb2:                                              ; preds = %entry, %bb3
+  %gotovar.4.0 = phi i8* [ %gotovar.4.0.pre, %bb3 ], [ %0, %entry ] ; <i8*> [#uses=1]
+; ARM: bx
+; THUMB: mov pc, r1
+; THUMB2: mov pc, r1
+  indirectbr i8* %gotovar.4.0, [label %L5, label %L4, label %L3, label %L2, label %L1]
+
+bb3:                                              ; preds = %entry
+  %2 = getelementptr inbounds [5 x i8*]* @C.0.2070, i32 0, i32 %i ; <i8**> [#uses=1]
+  %gotovar.4.0.pre = load i8** %2, align 4        ; <i8*> [#uses=1]
+  br label %bb2
+
+L5:                                               ; preds = %bb2
+  br label %L4
+
+L4:                                               ; preds = %L5, %bb2
+  %res.0 = phi i32 [ 385, %L5 ], [ 35, %bb2 ]     ; <i32> [#uses=1]
+  br label %L3
+
+L3:                                               ; preds = %L4, %bb2
+  %res.1 = phi i32 [ %res.0, %L4 ], [ 5, %bb2 ]   ; <i32> [#uses=1]
+  br label %L2
+
+L2:                                               ; preds = %L3, %bb2
+  %res.2 = phi i32 [ %res.1, %L3 ], [ 1, %bb2 ]   ; <i32> [#uses=1]
+  %phitmp = mul i32 %res.2, 6                     ; <i32> [#uses=1]
+  br label %L1
+
+L1:                                               ; preds = %L2, %bb2
+  %res.3 = phi i32 [ %phitmp, %L2 ], [ 2, %bb2 ]  ; <i32> [#uses=1]
+; ARM: ldr r1, LCPI
+; ARM: add r1, pc, r1
+; ARM: str r1
+; THUMB: ldr.n r2, LCPI
+; THUMB: add r2, pc
+; THUMB: str r2
+; THUMB2: ldr.n r2, LCPI
+; THUMB2-NEXT: str r2
+  store i8* blockaddress(@foo, %L5), i8** @nextaddr, align 4
+  ret i32 %res.3
+}
+; ARM: .long LBA4__foo__L5-(LPC{{.*}}+8)
+; THUMB: .long LBA4__foo__L5-(LPC{{.*}}+4)
+; THUMB2: .long LBA4__foo__L5
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/ispositive.ll b/libclamav/c++/llvm/test/CodeGen/ARM/ispositive.ll
index 5116ac8..245ed51 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/ispositive.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/ispositive.ll
@@ -1,6 +1,7 @@
-; RUN: llc < %s -march=arm | grep {mov r0, r0, lsr #31}
+; RUN: llc < %s -march=arm | FileCheck %s
 
 define i32 @test1(i32 %X) {
+; CHECK: mov r0, r0, lsr #31
 entry:
         icmp slt i32 %X, 0              ; <i1>:0 [#uses=1]
         zext i1 %0 to i32               ; <i32>:1 [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/ldm.ll b/libclamav/c++/llvm/test/CodeGen/ARM/ldm.ll
index 774b3c0..1a016a0 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/ldm.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/ldm.ll
@@ -1,13 +1,10 @@
-; RUN: llc < %s -march=arm | \
-; RUN:   grep ldmia | count 2
-; RUN: llc < %s -march=arm | \
-; RUN:   grep ldmib | count 1
-; RUN: llc < %s -mtriple=arm-apple-darwin | \
-; RUN:   grep {ldmfd sp\!} | count 3
+; RUN: llc < %s -mtriple=arm-apple-darwin | FileCheck %s
 
 @X = external global [0 x i32]          ; <[0 x i32]*> [#uses=5]
 
 define i32 @t1() {
+; CHECK: t1:
+; CHECK: ldmia
         %tmp = load i32* getelementptr ([0 x i32]* @X, i32 0, i32 0)            ; <i32> [#uses=1]
         %tmp3 = load i32* getelementptr ([0 x i32]* @X, i32 0, i32 1)           ; <i32> [#uses=1]
         %tmp4 = tail call i32 @f1( i32 %tmp, i32 %tmp3 )                ; <i32> [#uses=1]
@@ -15,6 +12,8 @@ define i32 @t1() {
 }
 
 define i32 @t2() {
+; CHECK: t2:
+; CHECK: ldmia
         %tmp = load i32* getelementptr ([0 x i32]* @X, i32 0, i32 2)            ; <i32> [#uses=1]
         %tmp3 = load i32* getelementptr ([0 x i32]* @X, i32 0, i32 3)           ; <i32> [#uses=1]
         %tmp5 = load i32* getelementptr ([0 x i32]* @X, i32 0, i32 4)           ; <i32> [#uses=1]
@@ -23,6 +22,9 @@ define i32 @t2() {
 }
 
 define i32 @t3() {
+; CHECK: t3:
+; CHECK: ldmib
+; CHECK: ldmfd sp!
         %tmp = load i32* getelementptr ([0 x i32]* @X, i32 0, i32 1)            ; <i32> [#uses=1]
         %tmp3 = load i32* getelementptr ([0 x i32]* @X, i32 0, i32 2)           ; <i32> [#uses=1]
         %tmp5 = load i32* getelementptr ([0 x i32]* @X, i32 0, i32 3)           ; <i32> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/ldr.ll b/libclamav/c++/llvm/test/CodeGen/ARM/ldr.ll
index 954fb5b..011e61c 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/ldr.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/ldr.ll
@@ -1,16 +1,16 @@
-; RUN: llc < %s -march=arm | grep {ldr r0} | count 7
-; RUN: llc < %s -march=arm | grep mov | grep 1
-; RUN: llc < %s -march=arm | not grep mvn
-; RUN: llc < %s -march=arm | grep ldr | grep lsl
-; RUN: llc < %s -march=arm | grep ldr | grep lsr
+; RUN: llc < %s -march=arm | FileCheck %s
 
 define i32 @f1(i32* %v) {
+; CHECK: f1:
+; CHECK: ldr r0
 entry:
         %tmp = load i32* %v
         ret i32 %tmp
 }
 
 define i32 @f2(i32* %v) {
+; CHECK: f2:
+; CHECK: ldr r0
 entry:
         %tmp2 = getelementptr i32* %v, i32 1023
         %tmp = load i32* %tmp2
@@ -18,6 +18,9 @@ entry:
 }
 
 define i32 @f3(i32* %v) {
+; CHECK: f3:
+; CHECK: mov
+; CHECK: ldr r0
 entry:
         %tmp2 = getelementptr i32* %v, i32 1024
         %tmp = load i32* %tmp2
@@ -25,6 +28,9 @@ entry:
 }
 
 define i32 @f4(i32 %base) {
+; CHECK: f4:
+; CHECK-NOT: mvn
+; CHECK: ldr r0
 entry:
         %tmp1 = sub i32 %base, 128
         %tmp2 = inttoptr i32 %tmp1 to i32*
@@ -33,6 +39,8 @@ entry:
 }
 
 define i32 @f5(i32 %base, i32 %offset) {
+; CHECK: f5:
+; CHECK: ldr r0
 entry:
         %tmp1 = add i32 %base, %offset
         %tmp2 = inttoptr i32 %tmp1 to i32*
@@ -41,6 +49,8 @@ entry:
 }
 
 define i32 @f6(i32 %base, i32 %offset) {
+; CHECK: f6:
+; CHECK: ldr r0{{.*}}lsl{{.*}}
 entry:
         %tmp1 = shl i32 %offset, 2
         %tmp2 = add i32 %base, %tmp1
@@ -50,6 +60,8 @@ entry:
 }
 
 define i32 @f7(i32 %base, i32 %offset) {
+; CHECK: f7:
+; CHECK: ldr r0{{.*}}lsr{{.*}}
 entry:
         %tmp1 = lshr i32 %offset, 2
         %tmp2 = add i32 %base, %tmp1
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/ldrd.ll b/libclamav/c++/llvm/test/CodeGen/ARM/ldrd.ll
index 8f7ae55..c366e2d 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/ldrd.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/ldrd.ll
@@ -7,13 +7,13 @@
 
 define i64 @t(i64 %a) nounwind readonly {
 entry:
-;V6:      ldrd r2, [r2]
+;V6:   ldrd r2, [r2]
 
-;V5:      ldr r3, [r2]
-;V5-NEXT: ldr r2, [r2, #+4]
+;V5:   ldr r3, [r2]
+;V5:   ldr r2, [r2, #+4]
 
-;EABI:      ldr r3, [r2]
-;EABI-NEXT: ldr r2, [r2, #+4]
+;EABI: ldr r3, [r2]
+;EABI: ldr r2, [r2, #+4]
 
 	%0 = load i64** @b, align 4
 	%1 = load i64* %0, align 4
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/load-global.ll b/libclamav/c++/llvm/test/CodeGen/ARM/load-global.ll
deleted file mode 100644
index 56a4a47..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/load-global.ll
+++ /dev/null
@@ -1,15 +0,0 @@
-; RUN: llc < %s -mtriple=arm-apple-darwin -relocation-model=static | \
-; RUN:   not grep {L_G\$non_lazy_ptr}
-; RUN: llc < %s -mtriple=arm-apple-darwin -relocation-model=dynamic-no-pic | \
-; RUN:   grep {L_G\$non_lazy_ptr} | count 2
-; RUN: llc < %s -mtriple=arm-apple-darwin -relocation-model=pic | \
-; RUN:   grep {ldr.*pc} | count 1
-; RUN: llc < %s -mtriple=arm-linux-gnueabi -relocation-model=pic | \
-; RUN:   grep {GOT} | count 1
-
- at G = external global i32
-
-define i32 @test1() {
-	%tmp = load i32* @G
-	ret i32 %tmp
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/long.ll b/libclamav/c++/llvm/test/CodeGen/ARM/long.ll
index 2fcaac0..16ef7cc 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/long.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/long.ll
@@ -1,47 +1,50 @@
-; RUN: llc < %s -march=arm -asm-verbose | \
-; RUN:   grep -- {-2147483648} | count 3
-; RUN: llc < %s -march=arm | grep mvn | count 3
-; RUN: llc < %s -march=arm | grep adds | count 1
-; RUN: llc < %s -march=arm | grep adc | count 1
-; RUN: llc < %s -march=arm | grep {subs } | count 1
-; RUN: llc < %s -march=arm | grep sbc | count 1
-; RUN: llc < %s -march=arm | \
-; RUN:   grep smull | count 1
-; RUN: llc < %s -march=arm | \
-; RUN:   grep umull | count 1
+; RUN: llc < %s -march=arm | FileCheck %s
 
 define i64 @f1() {
+; CHECK: f1:
 entry:
         ret i64 0
 }
 
 define i64 @f2() {
+; CHECK: f2:
 entry:
         ret i64 1
 }
 
 define i64 @f3() {
+; CHECK: f3:
+; CHECK: mvn{{.*}}-2147483648
 entry:
         ret i64 2147483647
 }
 
 define i64 @f4() {
+; CHECK: f4:
+; CHECK: -2147483648
 entry:
         ret i64 2147483648
 }
 
 define i64 @f5() {
+; CHECK: f5:
+; CHECK: mvn
+; CHECK: mvn{{.*}}-2147483648
 entry:
         ret i64 9223372036854775807
 }
 
 define i64 @f6(i64 %x, i64 %y) {
+; CHECK: f6:
+; CHECK: adds
+; CHECK: adc
 entry:
         %tmp1 = add i64 %y, 1           ; <i64> [#uses=1]
         ret i64 %tmp1
 }
 
 define void @f7() {
+; CHECK: f7:
 entry:
         %tmp = call i64 @f8( )          ; <i64> [#uses=0]
         ret void
@@ -50,12 +53,17 @@ entry:
 declare i64 @f8()
 
 define i64 @f9(i64 %a, i64 %b) {
+; CHECK: f9:
+; CHECK: subs r
+; CHECK: sbc
 entry:
         %tmp = sub i64 %a, %b           ; <i64> [#uses=1]
         ret i64 %tmp
 }
 
 define i64 @f(i32 %a, i32 %b) {
+; CHECK: f:
+; CHECK: smull
 entry:
         %tmp = sext i32 %a to i64               ; <i64> [#uses=1]
         %tmp1 = sext i32 %b to i64              ; <i64> [#uses=1]
@@ -64,6 +72,8 @@ entry:
 }
 
 define i64 @g(i32 %a, i32 %b) {
+; CHECK: g:
+; CHECK: umull
 entry:
         %tmp = zext i32 %a to i64               ; <i64> [#uses=1]
         %tmp1 = zext i32 %b to i64              ; <i64> [#uses=1]
@@ -72,9 +82,9 @@ entry:
 }
 
 define i64 @f10() {
+; CHECK: f10:
 entry:
         %a = alloca i64, align 8                ; <i64*> [#uses=1]
         %retval = load i64* %a          ; <i64> [#uses=1]
         ret i64 %retval
 }
-
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/long_shift.ll b/libclamav/c++/llvm/test/CodeGen/ARM/long_shift.ll
index 057b5f0..688b7bc 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/long_shift.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/long_shift.ll
@@ -1,10 +1,11 @@
-; RUN: llc < %s -march=arm > %t
-; RUN: grep rrx %t | count 1
-; RUN: grep __ashldi3 %t
-; RUN: grep __ashrdi3 %t
-; RUN: grep __lshrdi3 %t
+; RUN: llc < %s -march=arm | FileCheck %s
 
 define i64 @f0(i64 %A, i64 %B) {
+; CHECK: f0
+; CHECK:      movs    r3, r3, lsr #1
+; CHECK-NEXT: mov     r2, r2, rrx
+; CHECK-NEXT: subs    r0, r0, r2
+; CHECK-NEXT: sbc     r1, r1, r3
 	%tmp = bitcast i64 %A to i64
 	%tmp2 = lshr i64 %B, 1
 	%tmp3 = sub i64 %tmp, %tmp2
@@ -12,18 +13,34 @@ define i64 @f0(i64 %A, i64 %B) {
 }
 
 define i32 @f1(i64 %x, i64 %y) {
+; CHECK: f1
+; CHECK: mov r0, r0, lsl r2
 	%a = shl i64 %x, %y
 	%b = trunc i64 %a to i32
 	ret i32 %b
 }
 
 define i32 @f2(i64 %x, i64 %y) {
+; CHECK: f2
+; CHECK:      mov     r0, r0, lsr r2
+; CHECK-NEXT: rsb     r3, r2, #32
+; CHECK-NEXT: sub     r2, r2, #32
+; CHECK-NEXT: cmp     r2, #0
+; CHECK-NEXT: orr     r0, r0, r1, lsl r3
+; CHECK-NEXT: movge   r0, r1, asr r2
 	%a = ashr i64 %x, %y
 	%b = trunc i64 %a to i32
 	ret i32 %b
 }
 
 define i32 @f3(i64 %x, i64 %y) {
+; CHECK: f3
+; CHECK:      mov     r0, r0, lsr r2
+; CHECK-NEXT: rsb     r3, r2, #32
+; CHECK-NEXT: sub     r2, r2, #32
+; CHECK-NEXT: cmp     r2, #0
+; CHECK-NEXT: orr     r0, r0, r1, lsl r3
+; CHECK-NEXT: movge   r0, r1, lsr r2
 	%a = lshr i64 %x, %y
 	%b = trunc i64 %a to i32
 	ret i32 %b
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/mls.ll b/libclamav/c++/llvm/test/CodeGen/ARM/mls.ll
index 85407fa..a6cdba4 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/mls.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/mls.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=arm -mattr=+v6t2 | grep {mls\\W*r\[0-9\],\\W*r\[0-9\],\\W*r\[0-9\],\\W*r\[0-9\]} | count 1
+; RUN: llc < %s -march=arm -mattr=+v6t2 | FileCheck %s
 
 define i32 @f1(i32 %a, i32 %b, i32 %c) {
     %tmp1 = mul i32 %a, %b
@@ -12,3 +12,5 @@ define i32 @f2(i32 %a, i32 %b, i32 %c) {
     %tmp2 = sub i32 %tmp1, %c
     ret i32 %tmp2
 }
+
+; CHECK: mls	r0, r0, r1, r2
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/movt-movw-global.ll b/libclamav/c++/llvm/test/CodeGen/ARM/movt-movw-global.ll
new file mode 100644
index 0000000..886ff3f
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/movt-movw-global.ll
@@ -0,0 +1,20 @@
+; RUN: llc < %s | FileCheck %s
+target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64"
+target triple = "armv7-eabi"
+
+@foo = common global i32 0                        ; <i32*> [#uses=1]
+
+define arm_aapcs_vfpcc i32* @bar1() nounwind readnone {
+entry:
+; CHECK:      movw    r0, :lower16:foo
+; CHECK-NEXT: movt    r0, :upper16:foo
+  ret i32* @foo
+}
+
+define arm_aapcs_vfpcc void @bar2(i32 %baz) nounwind {
+entry:
+; CHECK:      movw    r1, :lower16:foo
+; CHECK-NEXT: movt    r1, :upper16:foo
+  store i32 %baz, i32* @foo, align 4
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/movt.ll b/libclamav/c++/llvm/test/CodeGen/ARM/movt.ll
new file mode 100644
index 0000000..e82aca0
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/movt.ll
@@ -0,0 +1,19 @@
+; RUN: llc < %s -march=arm -mattr=+thumb2 | FileCheck %s
+; rdar://7317664
+
+define i32 @t(i32 %X) nounwind {
+; CHECK: t:
+; CHECK: movt r0, #65535
+entry:
+	%0 = or i32 %X, -65536
+	ret i32 %0
+}
+
+define i32 @t2(i32 %X) nounwind {
+; CHECK: t2:
+; CHECK: movt r0, #65534
+entry:
+	%0 = or i32 %X, -131072
+	%1 = and i32 %0, -65537
+	ret i32 %1
+}
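
In t, the constant -65536 is 0xffff0000, so the or forces the upper halfword to 0xffff while leaving the lower halfword of %X intact, which is exactly what movt r0, #65535 does. In t2, or with -131072 (0xfffe0000) followed by and with -65537 (0xfffeffff) pins the upper halfword to 0xfffe = 65534 and leaves the lower halfword alone, hence movt r0, #65534.
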
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/neon_ld1.ll b/libclamav/c++/llvm/test/CodeGen/ARM/neon_ld1.ll
index 2796dec..c78872a 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/neon_ld1.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/neon_ld1.ll
@@ -1,6 +1,6 @@
-; RUN: llc < %s -march=arm -mattr=+neon | grep fldd | count 4
-; RUN: llc < %s -march=arm -mattr=+neon | grep fstd
-; RUN: llc < %s -march=arm -mattr=+neon | grep fmrrd
+; RUN: llc < %s -march=arm -mattr=+neon | grep vldr.64 | count 4
+; RUN: llc < %s -march=arm -mattr=+neon | grep vstr.64
+; RUN: llc < %s -march=arm -mattr=+neon | grep vmov
 
 define void @t1(<2 x i32>* %r, <4 x i16>* %a, <4 x i16>* %b) nounwind {
 entry:
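
The renamed mnemonics reflect the move to ARM's unified assembler syntax for VFP/NEON, under which the old f-prefixed forms fldd, fstd and fmrrd became vldr.64, vstr.64 and vmov; the greps track the new spellings rather than any change in the generated code.
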
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/neon_ld2.ll b/libclamav/c++/llvm/test/CodeGen/ARM/neon_ld2.ll
index 547bab7..130277b 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/neon_ld2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/neon_ld2.ll
@@ -1,6 +1,6 @@
 ; RUN: llc < %s -march=arm -mattr=+neon | grep vldmia | count 4
 ; RUN: llc < %s -march=arm -mattr=+neon | grep vstmia | count 1
-; RUN: llc < %s -march=arm -mattr=+neon | grep fmrrd  | count 2
+; RUN: llc < %s -march=arm -mattr=+neon | grep vmov  | count 2
 
 define void @t1(<4 x i32>* %r, <2 x i64>* %a, <2 x i64>* %b) nounwind {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/remat-2.ll b/libclamav/c++/llvm/test/CodeGen/ARM/remat-2.ll
new file mode 100644
index 0000000..1a871d2
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/remat-2.ll
@@ -0,0 +1,65 @@
+; RUN: llc < %s -march=arm -mattr=+v6,+vfp2 -stats -info-output-file - | grep "Number of re-materialization"
+
+define arm_apcscc i32 @main(i32 %argc, i8** nocapture %argv) nounwind {
+entry:
+  br i1 undef, label %smvp.exit, label %bb.i3
+
+bb.i3:                                            ; preds = %bb.i3, %bb134
+  br i1 undef, label %smvp.exit, label %bb.i3
+
+smvp.exit:                                        ; preds = %bb.i3
+  %0 = fmul double undef, 2.400000e-03            ; <double> [#uses=2]
+  br i1 undef, label %bb138.preheader, label %bb159
+
+bb138.preheader:                                  ; preds = %smvp.exit
+  br label %bb138
+
+bb138:                                            ; preds = %bb138, %bb138.preheader
+  br i1 undef, label %bb138, label %bb145.loopexit
+
+bb142:                                            ; preds = %bb.nph218.bb.nph218.split_crit_edge, %phi0.exit
+  %1 = fmul double undef, -1.200000e-03           ; <double> [#uses=1]
+  %2 = fadd double undef, %1                      ; <double> [#uses=1]
+  %3 = fmul double %2, undef                      ; <double> [#uses=1]
+  %4 = fsub double 0.000000e+00, %3               ; <double> [#uses=1]
+  br i1 %14, label %phi1.exit, label %bb.i35
+
+bb.i35:                                           ; preds = %bb142
+  %5 = call arm_apcscc  double @sin(double %15) nounwind readonly ; <double> [#uses=1]
+  %6 = fmul double %5, 0x4031740AFA84AD8A         ; <double> [#uses=1]
+  %7 = fsub double 1.000000e+00, undef            ; <double> [#uses=1]
+  %8 = fdiv double %7, 6.000000e-01               ; <double> [#uses=1]
+  br label %phi1.exit
+
+phi1.exit:                                        ; preds = %bb.i35, %bb142
+  %.pn = phi double [ %6, %bb.i35 ], [ 0.000000e+00, %bb142 ] ; <double> [#uses=0]
+  %9 = phi double [ %8, %bb.i35 ], [ 0.000000e+00, %bb142 ] ; <double> [#uses=1]
+  %10 = fmul double undef, %9                     ; <double> [#uses=0]
+  br i1 %14, label %phi0.exit, label %bb.i
+
+bb.i:                                             ; preds = %phi1.exit
+  unreachable
+
+phi0.exit:                                        ; preds = %phi1.exit
+  %11 = fsub double %4, undef                     ; <double> [#uses=1]
+  %12 = fadd double 0.000000e+00, %11             ; <double> [#uses=1]
+  store double %12, double* undef, align 4
+  br label %bb142
+
+bb145.loopexit:                                   ; preds = %bb138
+  br i1 undef, label %bb.nph218.bb.nph218.split_crit_edge, label %bb159
+
+bb.nph218.bb.nph218.split_crit_edge:              ; preds = %bb145.loopexit
+  %13 = fmul double %0, 0x401921FB54442D18        ; <double> [#uses=1]
+  %14 = fcmp ugt double %0, 6.000000e-01          ; <i1> [#uses=2]
+  %15 = fdiv double %13, 6.000000e-01             ; <double> [#uses=1]
+  br label %bb142
+
+bb159:                                            ; preds = %bb145.loopexit, %smvp.exit, %bb134
+  unreachable
+
+bb166:                                            ; preds = %bb127
+  unreachable
+}
+
+declare arm_apcscc double @sin(double) nounwind readonly
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/remat.ll b/libclamav/c++/llvm/test/CodeGen/ARM/remat.ll
index 50da997..9565c8b 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/remat.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/remat.ll
@@ -1,5 +1,5 @@
 ; RUN: llc < %s -mtriple=arm-apple-darwin 
-; RUN: llc < %s -mtriple=arm-apple-darwin -stats -info-output-file - | grep "Number of re-materialization" | grep 5
+; RUN: llc < %s -mtriple=arm-apple-darwin -stats -info-output-file - | grep "Number of re-materialization" | grep 3
 
 	%struct.CONTENTBOX = type { i32, i32, i32, i32, i32 }
 	%struct.LOCBOX = type { i32, i32, i32, i32 }
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/sbfx.ll b/libclamav/c++/llvm/test/CodeGen/ARM/sbfx.ll
new file mode 100644
index 0000000..6f1d87d
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/sbfx.ll
@@ -0,0 +1,47 @@
+; RUN: llc < %s -march=arm -mattr=+v6t2 | FileCheck %s
+
+define i32 @f1(i32 %a) {
+entry:
+; CHECK: f1:
+; CHECK: sbfx r0, r0, #0, #20
+    %tmp = shl i32 %a, 12
+    %tmp2 = ashr i32 %tmp, 12
+    ret i32 %tmp2
+}
+
+define i32 @f2(i32 %a) {
+entry:
+; CHECK: f2:
+; CHECK: ubfx r0, r0, #0, #20
+    %tmp = shl i32 %a, 12
+    %tmp2 = lshr i32 %tmp, 12
+    ret i32 %tmp2
+}
+
+define i32 @f3(i32 %a) {
+entry:
+; CHECK: f3:
+; CHECK: sbfx r0, r0, #5, #3
+    %tmp = shl i32 %a, 24
+    %tmp2 = ashr i32 %tmp, 29
+    ret i32 %tmp2
+}
+
+define i32 @f4(i32 %a) {
+entry:
+; CHECK: f4:
+; CHECK: ubfx r0, r0, #5, #3
+    %tmp = shl i32 %a, 24
+    %tmp2 = lshr i32 %tmp, 29
+    ret i32 %tmp2
+}
+
+define i32 @f5(i32 %a) {
+entry:
+; CHECK: f5:
+; CHECK-NOT: sbfx
+; CHECK: bx
+    %tmp = shl i32 %a, 3
+    %tmp2 = ashr i32 %tmp, 1
+    ret i32 %tmp2
+}
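
Each shift pair here isolates a bitfield: shifting left by 12 and arithmetic-shifting right by 12 keeps the low 20 bits sign-extended, i.e. sbfx r0, r0, #0, #20, and shl 24 / ashr 29 extracts the 3 bits starting at bit 5, i.e. sbfx r0, r0, #5, #3, with ubfx covering the logical-shift (zero-extending) variants. f5 shifts by unequal amounts (shl 3, ashr 1), which is not a pure field extract, hence CHECK-NOT: sbfx.
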
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/select-imm.ll b/libclamav/c++/llvm/test/CodeGen/ARM/select-imm.ll
new file mode 100644
index 0000000..07edc91
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/select-imm.ll
@@ -0,0 +1,48 @@
+; RUN: llc < %s -march=arm                | FileCheck %s --check-prefix=ARM
+; RUN: llc < %s -march=arm -mattr=+thumb2 | FileCheck %s --check-prefix=T2
+
+define arm_apcscc i32 @t1(i32 %c) nounwind readnone {
+entry:
+; ARM: t1:
+; ARM: mov r1, #101
+; ARM: orr r1, r1, #1, 24
+; ARM: movgt r0, #123
+
+; T2: t1:
+; T2: movw r0, #357
+; T2: movgt r0, #123
+
+  %0 = icmp sgt i32 %c, 1
+  %1 = select i1 %0, i32 123, i32 357
+  ret i32 %1
+}
+
+define arm_apcscc i32 @t2(i32 %c) nounwind readnone {
+entry:
+; ARM: t2:
+; ARM: mov r1, #101
+; ARM: orr r1, r1, #1, 24
+; ARM: movle r0, #123
+
+; T2: t2:
+; T2: movw r0, #357
+; T2: movle r0, #123
+
+  %0 = icmp sgt i32 %c, 1
+  %1 = select i1 %0, i32 357, i32 123
+  ret i32 %1
+}
+
+define arm_apcscc i32 @t3(i32 %a) nounwind readnone {
+entry:
+; ARM: t3:
+; ARM: mov r0, #0
+; ARM: moveq r0, #1
+
+; T2: t3:
+; T2: mov r0, #0
+; T2: moveq r0, #1
+  %0 = icmp eq i32 %a, 160
+  %1 = zext i1 %0 to i32
+  ret i32 %1
+}
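
The plain-ARM sequence builds 357 out of two encodable immediates: "#1, 24" is the 8-bit value 1 rotated right by 24 bits, i.e. 1 << 8 = 256, and 101 | 256 = 357. Thumb-2's movw takes a full 16-bit immediate, so it materializes 357 in a single instruction before the predicated mov overwrites it on the taken side.
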
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/select.ll b/libclamav/c++/llvm/test/CodeGen/ARM/select.ll
index d1565d1..29c55c6 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/select.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/select.ll
@@ -1,13 +1,9 @@
-; RUN: llc < %s -march=arm | grep moveq | count 1
-; RUN: llc < %s -march=arm | grep movgt | count 1
-; RUN: llc < %s -march=arm | grep movlt | count 3
-; RUN: llc < %s -march=arm | grep movle | count 1
-; RUN: llc < %s -march=arm | grep movls | count 1
-; RUN: llc < %s -march=arm | grep movhi | count 1
-; RUN: llc < %s -march=arm -mattr=+vfp2 | \
-; RUN:   grep fcpydmi | count 1
+; RUN: llc < %s -march=arm | FileCheck %s
+; RUN: llc < %s -march=arm -mattr=+vfp2 | FileCheck %s --check-prefix=CHECK-VFP
 
 define i32 @f1(i32 %a.s) {
+;CHECK: f1:
+;CHECK: moveq
 entry:
     %tmp = icmp eq i32 %a.s, 4
     %tmp1.s = select i1 %tmp, i32 2, i32 3
@@ -15,6 +11,8 @@ entry:
 }
 
 define i32 @f2(i32 %a.s) {
+;CHECK: f2:
+;CHECK: movgt
 entry:
     %tmp = icmp sgt i32 %a.s, 4
     %tmp1.s = select i1 %tmp, i32 2, i32 3
@@ -22,6 +20,8 @@ entry:
 }
 
 define i32 @f3(i32 %a.s, i32 %b.s) {
+;CHECK: f3:
+;CHECK: movlt
 entry:
     %tmp = icmp slt i32 %a.s, %b.s
     %tmp1.s = select i1 %tmp, i32 2, i32 3
@@ -29,6 +29,8 @@ entry:
 }
 
 define i32 @f4(i32 %a.s, i32 %b.s) {
+;CHECK: f4:
+;CHECK: movle
 entry:
     %tmp = icmp sle i32 %a.s, %b.s
     %tmp1.s = select i1 %tmp, i32 2, i32 3
@@ -36,6 +38,8 @@ entry:
 }
 
 define i32 @f5(i32 %a.u, i32 %b.u) {
+;CHECK: f5:
+;CHECK: movls
 entry:
     %tmp = icmp ule i32 %a.u, %b.u
     %tmp1.s = select i1 %tmp, i32 2, i32 3
@@ -43,6 +47,8 @@ entry:
 }
 
 define i32 @f6(i32 %a.u, i32 %b.u) {
+;CHECK: f6:
+;CHECK: movhi
 entry:
     %tmp = icmp ugt i32 %a.u, %b.u
     %tmp1.s = select i1 %tmp, i32 2, i32 3
@@ -50,6 +56,11 @@ entry:
 }
 
 define double @f7(double %a, double %b) {
+;CHECK: f7:
+;CHECK: movlt
+;CHECK: movlt
+;CHECK-VFP: f7:
+;CHECK-VFP: vmovmi
     %tmp = fcmp olt double %a, 1.234e+00
     %tmp1 = select i1 %tmp, double -1.000e+00, double %b
     ret double %tmp1
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/spill-q.ll b/libclamav/c++/llvm/test/CodeGen/ARM/spill-q.ll
index f4b27a7..5ad7ecc 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/spill-q.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/spill-q.ll
@@ -11,8 +11,9 @@ declare <4 x float> @llvm.arm.neon.vld1.v4f32(i8*) nounwind readonly
 
 define arm_apcscc void @aaa(%quuz* %this, i8* %block) {
 ; CHECK: aaa:
-; CHECK: vstmia sp
-; CHECK: vldmia sp
+; CHECK: bic sp, sp, #15
+; CHECK: vst1.64 {{.*}}sp, :128
+; CHECK: vld1.64 {{.*}}sp, :128
 entry:
   %0 = call <4 x float> @llvm.arm.neon.vld1.v4f32(i8* undef) nounwind ; <<4 x float>> [#uses=1]
   store float 6.300000e+01, float* undef, align 4
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/str_post.ll b/libclamav/c++/llvm/test/CodeGen/ARM/str_post.ll
index 801b9ce..97916f1 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/str_post.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/str_post.ll
@@ -1,9 +1,8 @@
-; RUN: llc < %s -march=arm | \
-; RUN:   grep {strh .*\\\[.*\], #-4} | count 1
-; RUN: llc < %s -march=arm | \
-; RUN:   grep {str .*\\\[.*\],} | count 1
+; RUN: llc < %s -march=arm | FileCheck %s
 
 define i16 @test1(i32* %X, i16* %A) {
+; CHECK: test1:
+; CHECK: strh {{.*}}[{{.*}}], #-4
         %Y = load i32* %X               ; <i32> [#uses=1]
         %tmp1 = trunc i32 %Y to i16             ; <i16> [#uses=1]
         store i16 %tmp1, i16* %A
@@ -13,6 +12,8 @@ define i16 @test1(i32* %X, i16* %A) {
 }
 
 define i32 @test2(i32* %X, i32* %A) {
+; CHECK: test2:
+; CHECK: str {{.*}}[{{.*}}],
         %Y = load i32* %X               ; <i32> [#uses=1]
         store i32 %Y, i32* %A
         %tmp1 = ptrtoint i32* %A to i32         ; <i32> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/t2-imm.ll b/libclamav/c++/llvm/test/CodeGen/ARM/t2-imm.ll
index 8b619bf..848a4df 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/t2-imm.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/t2-imm.ll
@@ -2,8 +2,8 @@
 
 define i32 @f6(i32 %a) {
 ; CHECK:f6
-; CHECK: movw r0, #1123
-; CHECK: movt r0, #1000
+; CHECK: movw r0, #:lower16:65537123
+; CHECK: movt r0, #:upper16:65537123
     %tmp = add i32 0, 65537123
     ret i32 %tmp
 }
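
Both the old and new CHECK lines encode the same split of 65537123 = 1000 * 65536 + 1123: movw writes the low halfword (1123) and movt the high halfword (1000). The #:lower16:/#:upper16: syntax spells each operand as a relocation-style expression on the full constant and leaves the halving to the assembler, instead of hard-coding the two halves in the compiler output.
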
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/tail-opts.ll b/libclamav/c++/llvm/test/CodeGen/ARM/tail-opts.ll
new file mode 100644
index 0000000..1a867a9
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/tail-opts.ll
@@ -0,0 +1,64 @@
+; RUN: llc < %s -mtriple=arm-apple-darwin -mcpu=cortex-a8 -asm-verbose=false | FileCheck %s
+
+declare void @bar(i32)
+declare void @car(i32)
+declare void @dar(i32)
+declare void @ear(i32)
+declare void @far(i32)
+declare i1 @qux()
+
+ at GHJK = global i32 0
+
+declare i8* @choose(i8*, i8*);
+
+; BranchFolding should tail-duplicate the indirect jump to avoid
+; redundant branching.
+
+; CHECK: tail_duplicate_me:
+; CHECK:      qux
+; CHECK:      qux
+; CHECK:      ldr r{{.}}, LCPI
+; CHECK:      str r
+; CHECK-NEXT: bx r
+; CHECK:      ldr r{{.}}, LCPI
+; CHECK:      str r
+; CHECK-NEXT: bx r
+; CHECK:      ldr r{{.}}, LCPI
+; CHECK:      str r
+; CHECK-NEXT: bx r
+
+define void @tail_duplicate_me() nounwind {
+entry:
+  %a = call i1 @qux()
+  %c = call i8* @choose(i8* blockaddress(@tail_duplicate_me, %return),
+                        i8* blockaddress(@tail_duplicate_me, %altret))
+  br i1 %a, label %A, label %next
+next:
+  %b = call i1 @qux()
+  br i1 %b, label %B, label %C
+
+A:
+  call void @bar(i32 0)
+  store i32 0, i32* @GHJK
+  br label %M
+
+B:
+  call void @car(i32 1)
+  store i32 0, i32* @GHJK
+  br label %M
+
+C:
+  call void @dar(i32 2)
+  store i32 0, i32* @GHJK
+  br label %M
+
+M:
+  indirectbr i8* %c, [label %return, label %altret]
+
+return:
+  call void @ear(i32 1000)
+  ret void
+altret:
+  call void @far(i32 1001)
+  ret void
+}
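
Without tail duplication, A, B and C would each branch to the single indirectbr in M; duplicating M's load/store/branch tail into every predecessor removes those intermediate branches, which is why the CHECK lines expect the constant-pool ldr, the str and the bx register jump three times, once per path.
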
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/tls2.ll b/libclamav/c++/llvm/test/CodeGen/ARM/tls2.ll
index 3284720..d932f90 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/tls2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/tls2.ll
@@ -1,19 +1,27 @@
-; RUN: llc < %s -march=arm -mtriple=arm-linux-gnueabi | \
-; RUN:     grep {i(gottpoff)}
-; RUN: llc < %s -march=arm -mtriple=arm-linux-gnueabi | \
-; RUN:     grep {ldr r., \[pc, r.\]}
 ; RUN: llc < %s -march=arm -mtriple=arm-linux-gnueabi \
-; RUN:     -relocation-model=pic | grep {__tls_get_addr}
+; RUN:   | FileCheck %s -check-prefix=CHECK-NONPIC
+; RUN: llc < %s -march=arm -mtriple=arm-linux-gnueabi \
+; RUN:   -relocation-model=pic | FileCheck %s -check-prefix=CHECK-PIC
 
 @i = external thread_local global i32		; <i32*> [#uses=2]
 
 define i32 @f() {
+; CHECK-NONPIC: f:
+; CHECK-NONPIC: ldr {{r.}}, [pc, +{{r.}}]
+; CHECK-NONPIC: i(gottpoff)
+; CHECK-PIC: f:
+; CHECK-PIC: __tls_get_addr
 entry:
 	%tmp1 = load i32* @i		; <i32> [#uses=1]
 	ret i32 %tmp1
 }
 
 define i32* @g() {
+; CHECK-NONPIC: g:
+; CHECK-NONPIC: ldr {{r.}}, [pc, +{{r.}}]
+; CHECK-NONPIC: i(gottpoff)
+; CHECK-PIC: g:
+; CHECK-PIC: __tls_get_addr
 entry:
 	ret i32* @i
 }
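
The two check prefixes track the two TLS access models: without -relocation-model=pic the backend uses the initial-exec scheme, loading the i(gottpoff) offset PC-relatively and adding it to the thread pointer, while the PIC build falls back to the general-dynamic scheme and calls __tls_get_addr.
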
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vaba.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vaba.ll
index 5d58e29..e2dca46 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vaba.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vaba.ll
@@ -135,3 +135,71 @@ declare <4 x i32> @llvm.arm.neon.vabas.v4i32(<4 x i32>, <4 x i32>, <4 x i32>) no
 declare <16 x i8> @llvm.arm.neon.vabau.v16i8(<16 x i8>, <16 x i8>, <16 x i8>) nounwind readnone
 declare <8 x i16> @llvm.arm.neon.vabau.v8i16(<8 x i16>, <8 x i16>, <8 x i16>) nounwind readnone
 declare <4 x i32> @llvm.arm.neon.vabau.v4i32(<4 x i32>, <4 x i32>, <4 x i32>) nounwind readnone
+
+define <8 x i16> @vabals8(<8 x i16>* %A, <8 x i8>* %B, <8 x i8>* %C) nounwind {
+;CHECK: vabals8:
+;CHECK: vabal.s8
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = load <8 x i8>* %C
+	%tmp4 = call <8 x i16> @llvm.arm.neon.vabals.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2, <8 x i8> %tmp3)
+	ret <8 x i16> %tmp4
+}
+
+define <4 x i32> @vabals16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
+;CHECK: vabals16:
+;CHECK: vabal.s16
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = load <4 x i16>* %C
+	%tmp4 = call <4 x i32> @llvm.arm.neon.vabals.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
+	ret <4 x i32> %tmp4
+}
+
+define <2 x i64> @vabals32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
+;CHECK: vabals32:
+;CHECK: vabal.s32
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = load <2 x i32>* %C
+	%tmp4 = call <2 x i64> @llvm.arm.neon.vabals.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
+	ret <2 x i64> %tmp4
+}
+
+define <8 x i16> @vabalu8(<8 x i16>* %A, <8 x i8>* %B, <8 x i8>* %C) nounwind {
+;CHECK: vabalu8:
+;CHECK: vabal.u8
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = load <8 x i8>* %C
+	%tmp4 = call <8 x i16> @llvm.arm.neon.vabalu.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2, <8 x i8> %tmp3)
+	ret <8 x i16> %tmp4
+}
+
+define <4 x i32> @vabalu16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
+;CHECK: vabalu16:
+;CHECK: vabal.u16
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = load <4 x i16>* %C
+	%tmp4 = call <4 x i32> @llvm.arm.neon.vabalu.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
+	ret <4 x i32> %tmp4
+}
+
+define <2 x i64> @vabalu32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
+;CHECK: vabalu32:
+;CHECK: vabal.u32
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = load <2 x i32>* %C
+	%tmp4 = call <2 x i64> @llvm.arm.neon.vabalu.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
+	ret <2 x i64> %tmp4
+}
+
+declare <8 x i16> @llvm.arm.neon.vabals.v8i16(<8 x i16>, <8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vabals.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vabals.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i16> @llvm.arm.neon.vabalu.v8i16(<8 x i16>, <8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vabalu.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vabalu.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vabal.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vabal.ll
deleted file mode 100644
index 89efd5b..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vabal.ll
+++ /dev/null
@@ -1,69 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
-
-define <8 x i16> @vabals8(<8 x i16>* %A, <8 x i8>* %B, <8 x i8>* %C) nounwind {
-;CHECK: vabals8:
-;CHECK: vabal.s8
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = load <8 x i8>* %C
-	%tmp4 = call <8 x i16> @llvm.arm.neon.vabals.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2, <8 x i8> %tmp3)
-	ret <8 x i16> %tmp4
-}
-
-define <4 x i32> @vabals16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
-;CHECK: vabals16:
-;CHECK: vabal.s16
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = load <4 x i16>* %C
-	%tmp4 = call <4 x i32> @llvm.arm.neon.vabals.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
-	ret <4 x i32> %tmp4
-}
-
-define <2 x i64> @vabals32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
-;CHECK: vabals32:
-;CHECK: vabal.s32
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = load <2 x i32>* %C
-	%tmp4 = call <2 x i64> @llvm.arm.neon.vabals.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
-	ret <2 x i64> %tmp4
-}
-
-define <8 x i16> @vabalu8(<8 x i16>* %A, <8 x i8>* %B, <8 x i8>* %C) nounwind {
-;CHECK: vabalu8:
-;CHECK: vabal.u8
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = load <8 x i8>* %C
-	%tmp4 = call <8 x i16> @llvm.arm.neon.vabalu.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2, <8 x i8> %tmp3)
-	ret <8 x i16> %tmp4
-}
-
-define <4 x i32> @vabalu16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
-;CHECK: vabalu16:
-;CHECK: vabal.u16
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = load <4 x i16>* %C
-	%tmp4 = call <4 x i32> @llvm.arm.neon.vabalu.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
-	ret <4 x i32> %tmp4
-}
-
-define <2 x i64> @vabalu32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
-;CHECK: vabalu32:
-;CHECK: vabal.u32
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = load <2 x i32>* %C
-	%tmp4 = call <2 x i64> @llvm.arm.neon.vabalu.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
-	ret <2 x i64> %tmp4
-}
-
-declare <8 x i16> @llvm.arm.neon.vabals.v8i16(<8 x i16>, <8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vabals.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vabals.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i16> @llvm.arm.neon.vabalu.v8i16(<8 x i16>, <8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vabalu.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vabalu.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vabd.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vabd.ll
index db762bd..2b45393 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vabd.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vabd.ll
@@ -145,3 +145,65 @@ declare <8 x i16> @llvm.arm.neon.vabdu.v8i16(<8 x i16>, <8 x i16>) nounwind read
 declare <4 x i32> @llvm.arm.neon.vabdu.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
 
 declare <4 x float> @llvm.arm.neon.vabds.v4f32(<4 x float>, <4 x float>) nounwind readnone
+
+define <8 x i16> @vabdls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vabdls8:
+;CHECK: vabdl.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vabdls.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vabdls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vabdls16:
+;CHECK: vabdl.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vabdls.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vabdls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vabdls32:
+;CHECK: vabdl.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vabdls.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+define <8 x i16> @vabdlu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vabdlu8:
+;CHECK: vabdl.u8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vabdlu.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vabdlu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vabdlu16:
+;CHECK: vabdl.u16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vabdlu.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vabdlu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vabdlu32:
+;CHECK: vabdl.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vabdlu.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+declare <8 x i16> @llvm.arm.neon.vabdls.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vabdls.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vabdls.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i16> @llvm.arm.neon.vabdlu.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vabdlu.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vabdlu.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vabdl.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vabdl.ll
deleted file mode 100644
index 23840f7..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vabdl.ll
+++ /dev/null
@@ -1,63 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
-
-define <8 x i16> @vabdls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-;CHECK: vabdls8:
-;CHECK: vabdl.s8
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vabdls.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vabdls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-;CHECK: vabdls16:
-;CHECK: vabdl.s16
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vabdls.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vabdls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-;CHECK: vabdls32:
-;CHECK: vabdl.s32
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vabdls.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-define <8 x i16> @vabdlu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-;CHECK: vabdlu8:
-;CHECK: vabdl.u8
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vabdlu.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vabdlu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-;CHECK: vabdlu16:
-;CHECK: vabdl.u16
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vabdlu.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vabdlu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-;CHECK: vabdlu32:
-;CHECK: vabdl.u32
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vabdlu.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-declare <8 x i16> @llvm.arm.neon.vabdls.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vabdls.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vabdls.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i16> @llvm.arm.neon.vabdlu.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vabdlu.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vabdlu.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vabs.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vabs.ll
index f1aafed..18ba61f 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vabs.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vabs.ll
@@ -74,3 +74,58 @@ declare <8 x i16> @llvm.arm.neon.vabs.v8i16(<8 x i16>) nounwind readnone
 declare <4 x i32> @llvm.arm.neon.vabs.v4i32(<4 x i32>) nounwind readnone
 declare <4 x float> @llvm.arm.neon.vabs.v4f32(<4 x float>) nounwind readnone
 
+define <8 x i8> @vqabss8(<8 x i8>* %A) nounwind {
+;CHECK: vqabss8:
+;CHECK: vqabs.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = call <8 x i8> @llvm.arm.neon.vqabs.v8i8(<8 x i8> %tmp1)
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vqabss16(<4 x i16>* %A) nounwind {
+;CHECK: vqabss16:
+;CHECK: vqabs.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vqabs.v4i16(<4 x i16> %tmp1)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vqabss32(<2 x i32>* %A) nounwind {
+;CHECK: vqabss32:
+;CHECK: vqabs.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vqabs.v2i32(<2 x i32> %tmp1)
+	ret <2 x i32> %tmp2
+}
+
+define <16 x i8> @vqabsQs8(<16 x i8>* %A) nounwind {
+;CHECK: vqabsQs8:
+;CHECK: vqabs.s8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = call <16 x i8> @llvm.arm.neon.vqabs.v16i8(<16 x i8> %tmp1)
+	ret <16 x i8> %tmp2
+}
+
+define <8 x i16> @vqabsQs16(<8 x i16>* %A) nounwind {
+;CHECK: vqabsQs16:
+;CHECK: vqabs.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <8 x i16> @llvm.arm.neon.vqabs.v8i16(<8 x i16> %tmp1)
+	ret <8 x i16> %tmp2
+}
+
+define <4 x i32> @vqabsQs32(<4 x i32>* %A) nounwind {
+;CHECK: vqabsQs32:
+;CHECK: vqabs.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i32> @llvm.arm.neon.vqabs.v4i32(<4 x i32> %tmp1)
+	ret <4 x i32> %tmp2
+}
+
+declare <8 x i8>  @llvm.arm.neon.vqabs.v8i8(<8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vqabs.v4i16(<4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vqabs.v2i32(<2 x i32>) nounwind readnone
+
+declare <16 x i8> @llvm.arm.neon.vqabs.v16i8(<16 x i8>) nounwind readnone
+declare <8 x i16> @llvm.arm.neon.vqabs.v8i16(<8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vqabs.v4i32(<4 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vacge.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vacge.ll
deleted file mode 100644
index b178446..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vacge.ll
+++ /dev/null
@@ -1,22 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
-
-define <2 x i32> @vacgef32(<2 x float>* %A, <2 x float>* %B) nounwind {
-;CHECK: vacgef32:
-;CHECK: vacge.f32
-	%tmp1 = load <2 x float>* %A
-	%tmp2 = load <2 x float>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vacged(<2 x float> %tmp1, <2 x float> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <4 x i32> @vacgeQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
-;CHECK: vacgeQf32:
-;CHECK: vacge.f32
-	%tmp1 = load <4 x float>* %A
-	%tmp2 = load <4 x float>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vacgeq(<4 x float> %tmp1, <4 x float> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-declare <2 x i32> @llvm.arm.neon.vacged(<2 x float>, <2 x float>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vacgeq(<4 x float>, <4 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vacgt.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vacgt.ll
deleted file mode 100644
index fd01163..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vacgt.ll
+++ /dev/null
@@ -1,22 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
-
-define <2 x i32> @vacgtf32(<2 x float>* %A, <2 x float>* %B) nounwind {
-;CHECK: vacgtf32:
-;CHECK: vacgt.f32
-	%tmp1 = load <2 x float>* %A
-	%tmp2 = load <2 x float>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vacgtd(<2 x float> %tmp1, <2 x float> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <4 x i32> @vacgtQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
-;CHECK: vacgtQf32:
-;CHECK: vacgt.f32
-	%tmp1 = load <4 x float>* %A
-	%tmp2 = load <4 x float>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vacgtq(<4 x float> %tmp1, <4 x float> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-declare <2 x i32> @llvm.arm.neon.vacgtd(<2 x float>, <2 x float>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vacgtq(<4 x float>, <4 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vadd.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vadd.ll
index 9d2deb9..9fa5307 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vadd.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vadd.ll
@@ -89,3 +89,189 @@ define <4 x float> @vaddQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
 	%tmp3 = add <4 x float> %tmp1, %tmp2
 	ret <4 x float> %tmp3
 }
+
+define <8 x i8> @vaddhni16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vaddhni16:
+;CHECK: vaddhn.i16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vaddhn.v8i8(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vaddhni32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vaddhni32:
+;CHECK: vaddhn.i32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vaddhn.v4i16(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vaddhni64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vaddhni64:
+;CHECK: vaddhn.i64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i64>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vaddhn.v2i32(<2 x i64> %tmp1, <2 x i64> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+declare <8 x i8>  @llvm.arm.neon.vaddhn.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vaddhn.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vaddhn.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
+
+define <8 x i8> @vraddhni16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vraddhni16:
+;CHECK: vraddhn.i16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vraddhn.v8i8(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vraddhni32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vraddhni32:
+;CHECK: vraddhn.i32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vraddhn.v4i16(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vraddhni64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vraddhni64:
+;CHECK: vraddhn.i64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i64>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vraddhn.v2i32(<2 x i64> %tmp1, <2 x i64> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+declare <8 x i8>  @llvm.arm.neon.vraddhn.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vraddhn.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vraddhn.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
+
+define <8 x i16> @vaddls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vaddls8:
+;CHECK: vaddl.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vaddls.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vaddls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vaddls16:
+;CHECK: vaddl.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vaddls.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vaddls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vaddls32:
+;CHECK: vaddl.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vaddls.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+define <8 x i16> @vaddlu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vaddlu8:
+;CHECK: vaddl.u8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vaddlu.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vaddlu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vaddlu16:
+;CHECK: vaddl.u16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vaddlu.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vaddlu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vaddlu32:
+;CHECK: vaddl.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vaddlu.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+declare <8 x i16> @llvm.arm.neon.vaddls.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vaddls.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vaddls.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i16> @llvm.arm.neon.vaddlu.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vaddlu.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vaddlu.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
+
+define <8 x i16> @vaddws8(<8 x i16>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vaddws8:
+;CHECK: vaddw.s8
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vaddws.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vaddws16(<4 x i32>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vaddws16:
+;CHECK: vaddw.s16
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vaddws.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vaddws32(<2 x i64>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vaddws32:
+;CHECK: vaddw.s32
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vaddws.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+define <8 x i16> @vaddwu8(<8 x i16>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vaddwu8:
+;CHECK: vaddw.u8
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vaddwu.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vaddwu16(<4 x i32>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vaddwu16:
+;CHECK: vaddw.u16
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vaddwu.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vaddwu32(<2 x i64>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vaddwu32:
+;CHECK: vaddw.u32
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vaddwu.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+declare <8 x i16> @llvm.arm.neon.vaddws.v8i16(<8 x i16>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vaddws.v4i32(<4 x i32>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vaddws.v2i64(<2 x i64>, <2 x i32>) nounwind readnone
+
+declare <8 x i16> @llvm.arm.neon.vaddwu.v8i16(<8 x i16>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vaddwu.v4i32(<4 x i32>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vaddwu.v2i64(<2 x i64>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vaddhn.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vaddhn.ll
deleted file mode 100644
index aba5712..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vaddhn.ll
+++ /dev/null
@@ -1,32 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
-
-define <8 x i8> @vaddhni16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-;CHECK: vaddhni16:
-;CHECK: vaddhn.i16
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vaddhn.v8i8(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vaddhni32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-;CHECK: vaddhni32:
-;CHECK: vaddhn.i32
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vaddhn.v4i16(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vaddhni64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
-;CHECK: vaddhni64:
-;CHECK: vaddhn.i64
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i64>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vaddhn.v2i32(<2 x i64> %tmp1, <2 x i64> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-declare <8 x i8>  @llvm.arm.neon.vaddhn.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vaddhn.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vaddhn.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vaddl.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vaddl.ll
deleted file mode 100644
index 3a31b95..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vaddl.ll
+++ /dev/null
@@ -1,63 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
-
-define <8 x i16> @vaddls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-;CHECK: vaddls8:
-;CHECK: vaddl.s8
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vaddls.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vaddls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-;CHECK: vaddls16:
-;CHECK: vaddl.s16
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vaddls.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vaddls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-;CHECK: vaddls32:
-;CHECK: vaddl.s32
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vaddls.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-define <8 x i16> @vaddlu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-;CHECK: vaddlu8:
-;CHECK: vaddl.u8
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vaddlu.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vaddlu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-;CHECK: vaddlu16:
-;CHECK: vaddl.u16
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vaddlu.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vaddlu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-;CHECK: vaddlu32:
-;CHECK: vaddl.u32
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vaddlu.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-declare <8 x i16> @llvm.arm.neon.vaddls.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vaddls.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vaddls.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i16> @llvm.arm.neon.vaddlu.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vaddlu.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vaddlu.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vaddw.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vaddw.ll
deleted file mode 100644
index 6d0459e..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vaddw.ll
+++ /dev/null
@@ -1,63 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
-
-define <8 x i16> @vaddws8(<8 x i16>* %A, <8 x i8>* %B) nounwind {
-;CHECK: vaddws8:
-;CHECK: vaddw.s8
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vaddws.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vaddws16(<4 x i32>* %A, <4 x i16>* %B) nounwind {
-;CHECK: vaddws16:
-;CHECK: vaddw.s16
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vaddws.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vaddws32(<2 x i64>* %A, <2 x i32>* %B) nounwind {
-;CHECK: vaddws32:
-;CHECK: vaddw.s32
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vaddws.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-define <8 x i16> @vaddwu8(<8 x i16>* %A, <8 x i8>* %B) nounwind {
-;CHECK: vaddwu8:
-;CHECK: vaddw.u8
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vaddwu.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vaddwu16(<4 x i32>* %A, <4 x i16>* %B) nounwind {
-;CHECK: vaddwu16:
-;CHECK: vaddw.u16
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vaddwu.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vaddwu32(<2 x i64>* %A, <2 x i32>* %B) nounwind {
-;CHECK: vaddwu32:
-;CHECK: vaddw.u32
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vaddwu.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-declare <8 x i16> @llvm.arm.neon.vaddws.v8i16(<8 x i16>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vaddws.v4i32(<4 x i32>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vaddws.v2i64(<2 x i64>, <2 x i32>) nounwind readnone
-
-declare <8 x i16> @llvm.arm.neon.vaddwu.v8i16(<8 x i16>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vaddwu.v4i32(<4 x i32>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vaddwu.v2i64(<2 x i64>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vand.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vand.ll
deleted file mode 100644
index 653a70b..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vand.ll
+++ /dev/null
@@ -1,73 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
-
-define <8 x i8> @v_andi8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-;CHECK: v_andi8:
-;CHECK: vand
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = and <8 x i8> %tmp1, %tmp2
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @v_andi16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-;CHECK: v_andi16:
-;CHECK: vand
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = and <4 x i16> %tmp1, %tmp2
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @v_andi32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-;CHECK: v_andi32:
-;CHECK: vand
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = and <2 x i32> %tmp1, %tmp2
-	ret <2 x i32> %tmp3
-}
-
-define <1 x i64> @v_andi64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
-;CHECK: v_andi64:
-;CHECK: vand
-	%tmp1 = load <1 x i64>* %A
-	%tmp2 = load <1 x i64>* %B
-	%tmp3 = and <1 x i64> %tmp1, %tmp2
-	ret <1 x i64> %tmp3
-}
-
-define <16 x i8> @v_andQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-;CHECK: v_andQi8:
-;CHECK: vand
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = and <16 x i8> %tmp1, %tmp2
-	ret <16 x i8> %tmp3
-}
-
-define <8 x i16> @v_andQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-;CHECK: v_andQi16:
-;CHECK: vand
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = and <8 x i16> %tmp1, %tmp2
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @v_andQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-;CHECK: v_andQi32:
-;CHECK: vand
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = and <4 x i32> %tmp1, %tmp2
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @v_andQi64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
-;CHECK: v_andQi64:
-;CHECK: vand
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i64>* %B
-	%tmp3 = and <2 x i64> %tmp1, %tmp2
-	ret <2 x i64> %tmp3
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vbic.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vbic.ll
deleted file mode 100644
index 2f79232..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vbic.ll
+++ /dev/null
@@ -1,81 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
-
-define <8 x i8> @v_bici8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-;CHECK: v_bici8:
-;CHECK: vbic
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = xor <8 x i8> %tmp2, < i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1 >
-	%tmp4 = and <8 x i8> %tmp1, %tmp3
-	ret <8 x i8> %tmp4
-}
-
-define <4 x i16> @v_bici16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-;CHECK: v_bici16:
-;CHECK: vbic
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = xor <4 x i16> %tmp2, < i16 -1, i16 -1, i16 -1, i16 -1 >
-	%tmp4 = and <4 x i16> %tmp1, %tmp3
-	ret <4 x i16> %tmp4
-}
-
-define <2 x i32> @v_bici32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-;CHECK: v_bici32:
-;CHECK: vbic
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = xor <2 x i32> %tmp2, < i32 -1, i32 -1 >
-	%tmp4 = and <2 x i32> %tmp1, %tmp3
-	ret <2 x i32> %tmp4
-}
-
-define <1 x i64> @v_bici64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
-;CHECK: v_bici64:
-;CHECK: vbic
-	%tmp1 = load <1 x i64>* %A
-	%tmp2 = load <1 x i64>* %B
-	%tmp3 = xor <1 x i64> %tmp2, < i64 -1 >
-	%tmp4 = and <1 x i64> %tmp1, %tmp3
-	ret <1 x i64> %tmp4
-}
-
-define <16 x i8> @v_bicQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-;CHECK: v_bicQi8:
-;CHECK: vbic
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = xor <16 x i8> %tmp2, < i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1 >
-	%tmp4 = and <16 x i8> %tmp1, %tmp3
-	ret <16 x i8> %tmp4
-}
-
-define <8 x i16> @v_bicQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-;CHECK: v_bicQi16:
-;CHECK: vbic
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = xor <8 x i16> %tmp2, < i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1 >
-	%tmp4 = and <8 x i16> %tmp1, %tmp3
-	ret <8 x i16> %tmp4
-}
-
-define <4 x i32> @v_bicQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-;CHECK: v_bicQi32:
-;CHECK: vbic
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = xor <4 x i32> %tmp2, < i32 -1, i32 -1, i32 -1, i32 -1 >
-	%tmp4 = and <4 x i32> %tmp1, %tmp3
-	ret <4 x i32> %tmp4
-}
-
-define <2 x i64> @v_bicQi64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
-;CHECK: v_bicQi64:
-;CHECK: vbic
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i64>* %B
-	%tmp3 = xor <2 x i64> %tmp2, < i64 -1, i64 -1 >
-	%tmp4 = and <2 x i64> %tmp1, %tmp3
-	ret <2 x i64> %tmp4
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vbits.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vbits.ll
new file mode 100644
index 0000000..e1d23a1
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vbits.ll
@@ -0,0 +1,507 @@
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
+
+define <8 x i8> @v_andi8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: v_andi8:
+;CHECK: vand
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = and <8 x i8> %tmp1, %tmp2
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @v_andi16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: v_andi16:
+;CHECK: vand
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = and <4 x i16> %tmp1, %tmp2
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @v_andi32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: v_andi32:
+;CHECK: vand
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = and <2 x i32> %tmp1, %tmp2
+	ret <2 x i32> %tmp3
+}
+
+define <1 x i64> @v_andi64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: v_andi64:
+;CHECK: vand
+	%tmp1 = load <1 x i64>* %A
+	%tmp2 = load <1 x i64>* %B
+	%tmp3 = and <1 x i64> %tmp1, %tmp2
+	ret <1 x i64> %tmp3
+}
+
+define <16 x i8> @v_andQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: v_andQi8:
+;CHECK: vand
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = and <16 x i8> %tmp1, %tmp2
+	ret <16 x i8> %tmp3
+}
+
+define <8 x i16> @v_andQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: v_andQi16:
+;CHECK: vand
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = and <8 x i16> %tmp1, %tmp2
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @v_andQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: v_andQi32:
+;CHECK: vand
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = and <4 x i32> %tmp1, %tmp2
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @v_andQi64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: v_andQi64:
+;CHECK: vand
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i64>* %B
+	%tmp3 = and <2 x i64> %tmp1, %tmp2
+	ret <2 x i64> %tmp3
+}
+
+define <8 x i8> @v_bici8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: v_bici8:
+;CHECK: vbic
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = xor <8 x i8> %tmp2, < i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1 >
+	%tmp4 = and <8 x i8> %tmp1, %tmp3
+	ret <8 x i8> %tmp4
+}
+
+define <4 x i16> @v_bici16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: v_bici16:
+;CHECK: vbic
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = xor <4 x i16> %tmp2, < i16 -1, i16 -1, i16 -1, i16 -1 >
+	%tmp4 = and <4 x i16> %tmp1, %tmp3
+	ret <4 x i16> %tmp4
+}
+
+define <2 x i32> @v_bici32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: v_bici32:
+;CHECK: vbic
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = xor <2 x i32> %tmp2, < i32 -1, i32 -1 >
+	%tmp4 = and <2 x i32> %tmp1, %tmp3
+	ret <2 x i32> %tmp4
+}
+
+define <1 x i64> @v_bici64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: v_bici64:
+;CHECK: vbic
+	%tmp1 = load <1 x i64>* %A
+	%tmp2 = load <1 x i64>* %B
+	%tmp3 = xor <1 x i64> %tmp2, < i64 -1 >
+	%tmp4 = and <1 x i64> %tmp1, %tmp3
+	ret <1 x i64> %tmp4
+}
+
+define <16 x i8> @v_bicQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: v_bicQi8:
+;CHECK: vbic
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = xor <16 x i8> %tmp2, < i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1 >
+	%tmp4 = and <16 x i8> %tmp1, %tmp3
+	ret <16 x i8> %tmp4
+}
+
+define <8 x i16> @v_bicQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: v_bicQi16:
+;CHECK: vbic
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = xor <8 x i16> %tmp2, < i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1 >
+	%tmp4 = and <8 x i16> %tmp1, %tmp3
+	ret <8 x i16> %tmp4
+}
+
+define <4 x i32> @v_bicQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: v_bicQi32:
+;CHECK: vbic
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = xor <4 x i32> %tmp2, < i32 -1, i32 -1, i32 -1, i32 -1 >
+	%tmp4 = and <4 x i32> %tmp1, %tmp3
+	ret <4 x i32> %tmp4
+}
+
+define <2 x i64> @v_bicQi64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: v_bicQi64:
+;CHECK: vbic
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i64>* %B
+	%tmp3 = xor <2 x i64> %tmp2, < i64 -1, i64 -1 >
+	%tmp4 = and <2 x i64> %tmp1, %tmp3
+	ret <2 x i64> %tmp4
+}
+
+define <8 x i8> @v_eori8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: v_eori8:
+;CHECK: veor
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = xor <8 x i8> %tmp1, %tmp2
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @v_eori16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: v_eori16:
+;CHECK: veor
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = xor <4 x i16> %tmp1, %tmp2
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @v_eori32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: v_eori32:
+;CHECK: veor
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = xor <2 x i32> %tmp1, %tmp2
+	ret <2 x i32> %tmp3
+}
+
+define <1 x i64> @v_eori64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: v_eori64:
+;CHECK: veor
+	%tmp1 = load <1 x i64>* %A
+	%tmp2 = load <1 x i64>* %B
+	%tmp3 = xor <1 x i64> %tmp1, %tmp2
+	ret <1 x i64> %tmp3
+}
+
+define <16 x i8> @v_eorQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: v_eorQi8:
+;CHECK: veor
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = xor <16 x i8> %tmp1, %tmp2
+	ret <16 x i8> %tmp3
+}
+
+define <8 x i16> @v_eorQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: v_eorQi16:
+;CHECK: veor
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = xor <8 x i16> %tmp1, %tmp2
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @v_eorQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: v_eorQi32:
+;CHECK: veor
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = xor <4 x i32> %tmp1, %tmp2
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @v_eorQi64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: v_eorQi64:
+;CHECK: veor
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i64>* %B
+	%tmp3 = xor <2 x i64> %tmp1, %tmp2
+	ret <2 x i64> %tmp3
+}
+
+define <8 x i8> @v_mvni8(<8 x i8>* %A) nounwind {
+;CHECK: v_mvni8:
+;CHECK: vmvn
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = xor <8 x i8> %tmp1, < i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1 >
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @v_mvni16(<4 x i16>* %A) nounwind {
+;CHECK: v_mvni16:
+;CHECK: vmvn
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = xor <4 x i16> %tmp1, < i16 -1, i16 -1, i16 -1, i16 -1 >
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @v_mvni32(<2 x i32>* %A) nounwind {
+;CHECK: v_mvni32:
+;CHECK: vmvn
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = xor <2 x i32> %tmp1, < i32 -1, i32 -1 >
+	ret <2 x i32> %tmp2
+}
+
+define <1 x i64> @v_mvni64(<1 x i64>* %A) nounwind {
+;CHECK: v_mvni64:
+;CHECK: vmvn
+	%tmp1 = load <1 x i64>* %A
+	%tmp2 = xor <1 x i64> %tmp1, < i64 -1 >
+	ret <1 x i64> %tmp2
+}
+
+define <16 x i8> @v_mvnQi8(<16 x i8>* %A) nounwind {
+;CHECK: v_mvnQi8:
+;CHECK: vmvn
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = xor <16 x i8> %tmp1, < i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1 >
+	ret <16 x i8> %tmp2
+}
+
+define <8 x i16> @v_mvnQi16(<8 x i16>* %A) nounwind {
+;CHECK: v_mvnQi16:
+;CHECK: vmvn
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = xor <8 x i16> %tmp1, < i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1 >
+	ret <8 x i16> %tmp2
+}
+
+define <4 x i32> @v_mvnQi32(<4 x i32>* %A) nounwind {
+;CHECK: v_mvnQi32:
+;CHECK: vmvn
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = xor <4 x i32> %tmp1, < i32 -1, i32 -1, i32 -1, i32 -1 >
+	ret <4 x i32> %tmp2
+}
+
+define <2 x i64> @v_mvnQi64(<2 x i64>* %A) nounwind {
+;CHECK: v_mvnQi64:
+;CHECK: vmvn
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = xor <2 x i64> %tmp1, < i64 -1, i64 -1 >
+	ret <2 x i64> %tmp2
+}
+
+define <8 x i8> @v_orri8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: v_orri8:
+;CHECK: vorr
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = or <8 x i8> %tmp1, %tmp2
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @v_orri16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: v_orri16:
+;CHECK: vorr
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = or <4 x i16> %tmp1, %tmp2
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @v_orri32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: v_orri32:
+;CHECK: vorr
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = or <2 x i32> %tmp1, %tmp2
+	ret <2 x i32> %tmp3
+}
+
+define <1 x i64> @v_orri64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: v_orri64:
+;CHECK: vorr
+	%tmp1 = load <1 x i64>* %A
+	%tmp2 = load <1 x i64>* %B
+	%tmp3 = or <1 x i64> %tmp1, %tmp2
+	ret <1 x i64> %tmp3
+}
+
+define <16 x i8> @v_orrQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: v_orrQi8:
+;CHECK: vorr
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = or <16 x i8> %tmp1, %tmp2
+	ret <16 x i8> %tmp3
+}
+
+define <8 x i16> @v_orrQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: v_orrQi16:
+;CHECK: vorr
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = or <8 x i16> %tmp1, %tmp2
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @v_orrQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: v_orrQi32:
+;CHECK: vorr
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = or <4 x i32> %tmp1, %tmp2
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @v_orrQi64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: v_orrQi64:
+;CHECK: vorr
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i64>* %B
+	%tmp3 = or <2 x i64> %tmp1, %tmp2
+	ret <2 x i64> %tmp3
+}
+
+define <8 x i8> @v_orni8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: v_orni8:
+;CHECK: vorn
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = xor <8 x i8> %tmp2, < i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1 >
+	%tmp4 = or <8 x i8> %tmp1, %tmp3
+	ret <8 x i8> %tmp4
+}
+
+define <4 x i16> @v_orni16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: v_orni16:
+;CHECK: vorn
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = xor <4 x i16> %tmp2, < i16 -1, i16 -1, i16 -1, i16 -1 >
+	%tmp4 = or <4 x i16> %tmp1, %tmp3
+	ret <4 x i16> %tmp4
+}
+
+define <2 x i32> @v_orni32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: v_orni32:
+;CHECK: vorn
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = xor <2 x i32> %tmp2, < i32 -1, i32 -1 >
+	%tmp4 = or <2 x i32> %tmp1, %tmp3
+	ret <2 x i32> %tmp4
+}
+
+define <1 x i64> @v_orni64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: v_orni64:
+;CHECK: vorn
+	%tmp1 = load <1 x i64>* %A
+	%tmp2 = load <1 x i64>* %B
+	%tmp3 = xor <1 x i64> %tmp2, < i64 -1 >
+	%tmp4 = or <1 x i64> %tmp1, %tmp3
+	ret <1 x i64> %tmp4
+}
+
+define <16 x i8> @v_ornQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: v_ornQi8:
+;CHECK: vorn
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = xor <16 x i8> %tmp2, < i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1 >
+	%tmp4 = or <16 x i8> %tmp1, %tmp3
+	ret <16 x i8> %tmp4
+}
+
+define <8 x i16> @v_ornQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: v_ornQi16:
+;CHECK: vorn
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = xor <8 x i16> %tmp2, < i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1 >
+	%tmp4 = or <8 x i16> %tmp1, %tmp3
+	ret <8 x i16> %tmp4
+}
+
+define <4 x i32> @v_ornQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: v_ornQi32:
+;CHECK: vorn
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = xor <4 x i32> %tmp2, < i32 -1, i32 -1, i32 -1, i32 -1 >
+	%tmp4 = or <4 x i32> %tmp1, %tmp3
+	ret <4 x i32> %tmp4
+}
+
+define <2 x i64> @v_ornQi64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: v_ornQi64:
+;CHECK: vorn
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i64>* %B
+	%tmp3 = xor <2 x i64> %tmp2, < i64 -1, i64 -1 >
+	%tmp4 = or <2 x i64> %tmp1, %tmp3
+	ret <2 x i64> %tmp4
+}
+
+define <8 x i8> @vtsti8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vtsti8:
+;CHECK: vtst.i8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = and <8 x i8> %tmp1, %tmp2
+	%tmp4 = icmp ne <8 x i8> %tmp3, zeroinitializer
+        %tmp5 = sext <8 x i1> %tmp4 to <8 x i8>
+	ret <8 x i8> %tmp5
+}
+
+define <4 x i16> @vtsti16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vtsti16:
+;CHECK: vtst.i16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = and <4 x i16> %tmp1, %tmp2
+	%tmp4 = icmp ne <4 x i16> %tmp3, zeroinitializer
+        %tmp5 = sext <4 x i1> %tmp4 to <4 x i16>
+	ret <4 x i16> %tmp5
+}
+
+define <2 x i32> @vtsti32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vtsti32:
+;CHECK: vtst.i32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = and <2 x i32> %tmp1, %tmp2
+	%tmp4 = icmp ne <2 x i32> %tmp3, zeroinitializer
+        %tmp5 = sext <2 x i1> %tmp4 to <2 x i32>
+	ret <2 x i32> %tmp5
+}
+
+define <16 x i8> @vtstQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vtstQi8:
+;CHECK: vtst.i8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = and <16 x i8> %tmp1, %tmp2
+	%tmp4 = icmp ne <16 x i8> %tmp3, zeroinitializer
+        %tmp5 = sext <16 x i1> %tmp4 to <16 x i8>
+	ret <16 x i8> %tmp5
+}
+
+define <8 x i16> @vtstQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vtstQi16:
+;CHECK: vtst.i16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = and <8 x i16> %tmp1, %tmp2
+	%tmp4 = icmp ne <8 x i16> %tmp3, zeroinitializer
+        %tmp5 = sext <8 x i1> %tmp4 to <8 x i16>
+	ret <8 x i16> %tmp5
+}
+
+define <4 x i32> @vtstQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vtstQi32:
+;CHECK: vtst.i32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = and <4 x i32> %tmp1, %tmp2
+	%tmp4 = icmp ne <4 x i32> %tmp3, zeroinitializer
+        %tmp5 = sext <4 x i1> %tmp4 to <4 x i32>
+	ret <4 x i32> %tmp5
+}
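
The bitwise tests above lean entirely on pattern matching in the instruction selector: LLVM IR has no VORN or VTST operators, so VORN is spelled as an `or` whose second operand is an all-ones `xor`, and VTST as an `and` followed by `icmp ne` and a `sext` back to the element type. A minimal standalone sketch of the VORN idiom, with a hypothetical function name (not part of the patch):

    define <8 x i8> @orn_sketch(<8 x i8> %x, <8 x i8> %y) nounwind {
      ; not(y) written as xor with all-ones, then or(x, not(y));
      ; the NEON selector should fold this into a single vorn
      %noty = xor <8 x i8> %y, < i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1 >
      %r = or <8 x i8> %x, %noty
      ret <8 x i8> %r
    }
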
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vcge.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vcge.ll
index b8debd9..2c16111 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vcge.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vcge.ll
@@ -139,3 +139,24 @@ define <4 x i32> @vcgeQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
         %tmp4 = sext <4 x i1> %tmp3 to <4 x i32>
 	ret <4 x i32> %tmp4
 }
+
+define <2 x i32> @vacgef32(<2 x float>* %A, <2 x float>* %B) nounwind {
+;CHECK: vacgef32:
+;CHECK: vacge.f32
+	%tmp1 = load <2 x float>* %A
+	%tmp2 = load <2 x float>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vacged(<2 x float> %tmp1, <2 x float> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <4 x i32> @vacgeQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
+;CHECK: vacgeQf32:
+;CHECK: vacge.f32
+	%tmp1 = load <4 x float>* %A
+	%tmp2 = load <4 x float>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vacgeq(<4 x float> %tmp1, <4 x float> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+declare <2 x i32> @llvm.arm.neon.vacged(<2 x float>, <2 x float>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vacgeq(<4 x float>, <4 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vcgt.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vcgt.ll
index eae29b2..6b11ba5 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vcgt.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vcgt.ll
@@ -139,3 +139,24 @@ define <4 x i32> @vcgtQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
         %tmp4 = sext <4 x i1> %tmp3 to <4 x i32>
 	ret <4 x i32> %tmp4
 }
+
+define <2 x i32> @vacgtf32(<2 x float>* %A, <2 x float>* %B) nounwind {
+;CHECK: vacgtf32:
+;CHECK: vacgt.f32
+	%tmp1 = load <2 x float>* %A
+	%tmp2 = load <2 x float>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vacgtd(<2 x float> %tmp1, <2 x float> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <4 x i32> @vacgtQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
+;CHECK: vacgtQf32:
+;CHECK: vacgt.f32
+	%tmp1 = load <4 x float>* %A
+	%tmp2 = load <4 x float>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vacgtq(<4 x float> %tmp1, <4 x float> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+declare <2 x i32> @llvm.arm.neon.vacgtd(<2 x float>, <2 x float>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vacgtq(<4 x float>, <4 x float>) nounwind readnone
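
The new vacge/vacgt cases cover NEON's absolute compares (|a| >= |b| and |a| > |b|). There is no generic IR form for these, so the tests call the target intrinsics directly, which is why the matching `declare` lines are appended. A reduced sketch using the intrinsic signature declared above (function name hypothetical):

    declare <2 x i32> @llvm.arm.neon.vacgtd(<2 x float>, <2 x float>) nounwind readnone

    define <2 x i32> @acgt_sketch(<2 x float> %a, <2 x float> %b) nounwind {
      ; |a| > |b| per lane; selects to vacgt.f32 and yields an all-ones/all-zeros mask
      %m = call <2 x i32> @llvm.arm.neon.vacgtd(<2 x float> %a, <2 x float> %b)
      ret <2 x i32> %m
    }
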
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vcls.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vcls.ll
deleted file mode 100644
index 43bd3f9..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vcls.ll
+++ /dev/null
@@ -1,57 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
-
-define <8 x i8> @vclss8(<8 x i8>* %A) nounwind {
-;CHECK: vclss8:
-;CHECK: vcls.s8
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = call <8 x i8> @llvm.arm.neon.vcls.v8i8(<8 x i8> %tmp1)
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vclss16(<4 x i16>* %A) nounwind {
-;CHECK: vclss16:
-;CHECK: vcls.s16
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vcls.v4i16(<4 x i16> %tmp1)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vclss32(<2 x i32>* %A) nounwind {
-;CHECK: vclss32:
-;CHECK: vcls.s32
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vcls.v2i32(<2 x i32> %tmp1)
-	ret <2 x i32> %tmp2
-}
-
-define <16 x i8> @vclsQs8(<16 x i8>* %A) nounwind {
-;CHECK: vclsQs8:
-;CHECK: vcls.s8
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = call <16 x i8> @llvm.arm.neon.vcls.v16i8(<16 x i8> %tmp1)
-	ret <16 x i8> %tmp2
-}
-
-define <8 x i16> @vclsQs16(<8 x i16>* %A) nounwind {
-;CHECK: vclsQs16:
-;CHECK: vcls.s16
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <8 x i16> @llvm.arm.neon.vcls.v8i16(<8 x i16> %tmp1)
-	ret <8 x i16> %tmp2
-}
-
-define <4 x i32> @vclsQs32(<4 x i32>* %A) nounwind {
-;CHECK: vclsQs32:
-;CHECK: vcls.s32
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i32> @llvm.arm.neon.vcls.v4i32(<4 x i32> %tmp1)
-	ret <4 x i32> %tmp2
-}
-
-declare <8 x i8>  @llvm.arm.neon.vcls.v8i8(<8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vcls.v4i16(<4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vcls.v2i32(<2 x i32>) nounwind readnone
-
-declare <16 x i8> @llvm.arm.neon.vcls.v16i8(<16 x i8>) nounwind readnone
-declare <8 x i16> @llvm.arm.neon.vcls.v8i16(<8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vcls.v4i32(<4 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vclz.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vclz.ll
deleted file mode 100644
index ec439db..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vclz.ll
+++ /dev/null
@@ -1,57 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
-
-define <8 x i8> @vclz8(<8 x i8>* %A) nounwind {
-;CHECK: vclz8:
-;CHECK: vclz.i8
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = call <8 x i8> @llvm.arm.neon.vclz.v8i8(<8 x i8> %tmp1)
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vclz16(<4 x i16>* %A) nounwind {
-;CHECK: vclz16:
-;CHECK: vclz.i16
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vclz.v4i16(<4 x i16> %tmp1)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vclz32(<2 x i32>* %A) nounwind {
-;CHECK: vclz32:
-;CHECK: vclz.i32
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vclz.v2i32(<2 x i32> %tmp1)
-	ret <2 x i32> %tmp2
-}
-
-define <16 x i8> @vclzQ8(<16 x i8>* %A) nounwind {
-;CHECK: vclzQ8:
-;CHECK: vclz.i8
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = call <16 x i8> @llvm.arm.neon.vclz.v16i8(<16 x i8> %tmp1)
-	ret <16 x i8> %tmp2
-}
-
-define <8 x i16> @vclzQ16(<8 x i16>* %A) nounwind {
-;CHECK: vclzQ16:
-;CHECK: vclz.i16
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <8 x i16> @llvm.arm.neon.vclz.v8i16(<8 x i16> %tmp1)
-	ret <8 x i16> %tmp2
-}
-
-define <4 x i32> @vclzQ32(<4 x i32>* %A) nounwind {
-;CHECK: vclzQ32:
-;CHECK: vclz.i32
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i32> @llvm.arm.neon.vclz.v4i32(<4 x i32> %tmp1)
-	ret <4 x i32> %tmp2
-}
-
-declare <8 x i8>  @llvm.arm.neon.vclz.v8i8(<8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vclz.v4i16(<4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vclz.v2i32(<2 x i32>) nounwind readnone
-
-declare <16 x i8> @llvm.arm.neon.vclz.v16i8(<16 x i8>) nounwind readnone
-declare <8 x i16> @llvm.arm.neon.vclz.v8i16(<8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vclz.v4i32(<4 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vcnt.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vcnt.ll
index 7e045ee..450f90d 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vcnt.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vcnt.ll
@@ -18,3 +18,115 @@ define <16 x i8> @vcntQ8(<16 x i8>* %A) nounwind {
 
 declare <8 x i8>  @llvm.arm.neon.vcnt.v8i8(<8 x i8>) nounwind readnone
 declare <16 x i8> @llvm.arm.neon.vcnt.v16i8(<16 x i8>) nounwind readnone
+
+define <8 x i8> @vclz8(<8 x i8>* %A) nounwind {
+;CHECK: vclz8:
+;CHECK: vclz.i8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = call <8 x i8> @llvm.arm.neon.vclz.v8i8(<8 x i8> %tmp1)
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vclz16(<4 x i16>* %A) nounwind {
+;CHECK: vclz16:
+;CHECK: vclz.i16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vclz.v4i16(<4 x i16> %tmp1)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vclz32(<2 x i32>* %A) nounwind {
+;CHECK: vclz32:
+;CHECK: vclz.i32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vclz.v2i32(<2 x i32> %tmp1)
+	ret <2 x i32> %tmp2
+}
+
+define <16 x i8> @vclzQ8(<16 x i8>* %A) nounwind {
+;CHECK: vclzQ8:
+;CHECK: vclz.i8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = call <16 x i8> @llvm.arm.neon.vclz.v16i8(<16 x i8> %tmp1)
+	ret <16 x i8> %tmp2
+}
+
+define <8 x i16> @vclzQ16(<8 x i16>* %A) nounwind {
+;CHECK: vclzQ16:
+;CHECK: vclz.i16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <8 x i16> @llvm.arm.neon.vclz.v8i16(<8 x i16> %tmp1)
+	ret <8 x i16> %tmp2
+}
+
+define <4 x i32> @vclzQ32(<4 x i32>* %A) nounwind {
+;CHECK: vclzQ32:
+;CHECK: vclz.i32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i32> @llvm.arm.neon.vclz.v4i32(<4 x i32> %tmp1)
+	ret <4 x i32> %tmp2
+}
+
+declare <8 x i8>  @llvm.arm.neon.vclz.v8i8(<8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vclz.v4i16(<4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vclz.v2i32(<2 x i32>) nounwind readnone
+
+declare <16 x i8> @llvm.arm.neon.vclz.v16i8(<16 x i8>) nounwind readnone
+declare <8 x i16> @llvm.arm.neon.vclz.v8i16(<8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vclz.v4i32(<4 x i32>) nounwind readnone
+
+define <8 x i8> @vclss8(<8 x i8>* %A) nounwind {
+;CHECK: vclss8:
+;CHECK: vcls.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = call <8 x i8> @llvm.arm.neon.vcls.v8i8(<8 x i8> %tmp1)
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vclss16(<4 x i16>* %A) nounwind {
+;CHECK: vclss16:
+;CHECK: vcls.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vcls.v4i16(<4 x i16> %tmp1)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vclss32(<2 x i32>* %A) nounwind {
+;CHECK: vclss32:
+;CHECK: vcls.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vcls.v2i32(<2 x i32> %tmp1)
+	ret <2 x i32> %tmp2
+}
+
+define <16 x i8> @vclsQs8(<16 x i8>* %A) nounwind {
+;CHECK: vclsQs8:
+;CHECK: vcls.s8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = call <16 x i8> @llvm.arm.neon.vcls.v16i8(<16 x i8> %tmp1)
+	ret <16 x i8> %tmp2
+}
+
+define <8 x i16> @vclsQs16(<8 x i16>* %A) nounwind {
+;CHECK: vclsQs16:
+;CHECK: vcls.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <8 x i16> @llvm.arm.neon.vcls.v8i16(<8 x i16> %tmp1)
+	ret <8 x i16> %tmp2
+}
+
+define <4 x i32> @vclsQs32(<4 x i32>* %A) nounwind {
+;CHECK: vclsQs32:
+;CHECK: vcls.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i32> @llvm.arm.neon.vcls.v4i32(<4 x i32> %tmp1)
+	ret <4 x i32> %tmp2
+}
+
+declare <8 x i8>  @llvm.arm.neon.vcls.v8i8(<8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vcls.v4i16(<4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vcls.v2i32(<2 x i32>) nounwind readnone
+
+declare <16 x i8> @llvm.arm.neon.vcls.v16i8(<16 x i8>) nounwind readnone
+declare <8 x i16> @llvm.arm.neon.vcls.v8i16(<8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vcls.v4i32(<4 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vcvt.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vcvt.ll
index 795caa4..f4cc536 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vcvt.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vcvt.ll
@@ -63,3 +63,78 @@ define <4 x float> @vcvtQ_u32tof32(<4 x i32>* %A) nounwind {
 	%tmp2 = uitofp <4 x i32> %tmp1 to <4 x float>
 	ret <4 x float> %tmp2
 }
+
+define <2 x i32> @vcvt_n_f32tos32(<2 x float>* %A) nounwind {
+;CHECK: vcvt_n_f32tos32:
+;CHECK: vcvt.s32.f32
+	%tmp1 = load <2 x float>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vcvtfp2fxs.v2i32.v2f32(<2 x float> %tmp1, i32 1)
+	ret <2 x i32> %tmp2
+}
+
+define <2 x i32> @vcvt_n_f32tou32(<2 x float>* %A) nounwind {
+;CHECK: vcvt_n_f32tou32:
+;CHECK: vcvt.u32.f32
+	%tmp1 = load <2 x float>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vcvtfp2fxu.v2i32.v2f32(<2 x float> %tmp1, i32 1)
+	ret <2 x i32> %tmp2
+}
+
+define <2 x float> @vcvt_n_s32tof32(<2 x i32>* %A) nounwind {
+;CHECK: vcvt_n_s32tof32:
+;CHECK: vcvt.f32.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = call <2 x float> @llvm.arm.neon.vcvtfxs2fp.v2f32.v2i32(<2 x i32> %tmp1, i32 1)
+	ret <2 x float> %tmp2
+}
+
+define <2 x float> @vcvt_n_u32tof32(<2 x i32>* %A) nounwind {
+;CHECK: vcvt_n_u32tof32:
+;CHECK: vcvt.f32.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = call <2 x float> @llvm.arm.neon.vcvtfxu2fp.v2f32.v2i32(<2 x i32> %tmp1, i32 1)
+	ret <2 x float> %tmp2
+}
+
+declare <2 x i32> @llvm.arm.neon.vcvtfp2fxs.v2i32.v2f32(<2 x float>, i32) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vcvtfp2fxu.v2i32.v2f32(<2 x float>, i32) nounwind readnone
+declare <2 x float> @llvm.arm.neon.vcvtfxs2fp.v2f32.v2i32(<2 x i32>, i32) nounwind readnone
+declare <2 x float> @llvm.arm.neon.vcvtfxu2fp.v2f32.v2i32(<2 x i32>, i32) nounwind readnone
+
+define <4 x i32> @vcvtQ_n_f32tos32(<4 x float>* %A) nounwind {
+;CHECK: vcvtQ_n_f32tos32:
+;CHECK: vcvt.s32.f32
+	%tmp1 = load <4 x float>* %A
+	%tmp2 = call <4 x i32> @llvm.arm.neon.vcvtfp2fxs.v4i32.v4f32(<4 x float> %tmp1, i32 1)
+	ret <4 x i32> %tmp2
+}
+
+define <4 x i32> @vcvtQ_n_f32tou32(<4 x float>* %A) nounwind {
+;CHECK: vcvtQ_n_f32tou32:
+;CHECK: vcvt.u32.f32
+	%tmp1 = load <4 x float>* %A
+	%tmp2 = call <4 x i32> @llvm.arm.neon.vcvtfp2fxu.v4i32.v4f32(<4 x float> %tmp1, i32 1)
+	ret <4 x i32> %tmp2
+}
+
+define <4 x float> @vcvtQ_n_s32tof32(<4 x i32>* %A) nounwind {
+;CHECK: vcvtQ_n_s32tof32:
+;CHECK: vcvt.f32.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x float> @llvm.arm.neon.vcvtfxs2fp.v4f32.v4i32(<4 x i32> %tmp1, i32 1)
+	ret <4 x float> %tmp2
+}
+
+define <4 x float> @vcvtQ_n_u32tof32(<4 x i32>* %A) nounwind {
+;CHECK: vcvtQ_n_u32tof32:
+;CHECK: vcvt.f32.u32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x float> @llvm.arm.neon.vcvtfxu2fp.v4f32.v4i32(<4 x i32> %tmp1, i32 1)
+	ret <4 x float> %tmp2
+}
+
+declare <4 x i32> @llvm.arm.neon.vcvtfp2fxs.v4i32.v4f32(<4 x float>, i32) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vcvtfp2fxu.v4i32.v4f32(<4 x float>, i32) nounwind readnone
+declare <4 x float> @llvm.arm.neon.vcvtfxs2fp.v4f32.v4i32(<4 x i32>, i32) nounwind readnone
+declare <4 x float> @llvm.arm.neon.vcvtfxu2fp.v4f32.v4i32(<4 x i32>, i32) nounwind readnone
+
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vcvt_n.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vcvt_n.ll
deleted file mode 100644
index 0ee9976..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vcvt_n.ll
+++ /dev/null
@@ -1,76 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
-
-define <2 x i32> @vcvt_f32tos32(<2 x float>* %A) nounwind {
-;CHECK: vcvt_f32tos32:
-;CHECK: vcvt.s32.f32
-	%tmp1 = load <2 x float>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vcvtfp2fxs.v2i32.v2f32(<2 x float> %tmp1, i32 1)
-	ret <2 x i32> %tmp2
-}
-
-define <2 x i32> @vcvt_f32tou32(<2 x float>* %A) nounwind {
-;CHECK: vcvt_f32tou32:
-;CHECK: vcvt.u32.f32
-	%tmp1 = load <2 x float>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vcvtfp2fxu.v2i32.v2f32(<2 x float> %tmp1, i32 1)
-	ret <2 x i32> %tmp2
-}
-
-define <2 x float> @vcvt_s32tof32(<2 x i32>* %A) nounwind {
-;CHECK: vcvt_s32tof32:
-;CHECK: vcvt.f32.s32
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = call <2 x float> @llvm.arm.neon.vcvtfxs2fp.v2f32.v2i32(<2 x i32> %tmp1, i32 1)
-	ret <2 x float> %tmp2
-}
-
-define <2 x float> @vcvt_u32tof32(<2 x i32>* %A) nounwind {
-;CHECK: vcvt_u32tof32:
-;CHECK: vcvt.f32.u32
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = call <2 x float> @llvm.arm.neon.vcvtfxu2fp.v2f32.v2i32(<2 x i32> %tmp1, i32 1)
-	ret <2 x float> %tmp2
-}
-
-declare <2 x i32> @llvm.arm.neon.vcvtfp2fxs.v2i32.v2f32(<2 x float>, i32) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vcvtfp2fxu.v2i32.v2f32(<2 x float>, i32) nounwind readnone
-declare <2 x float> @llvm.arm.neon.vcvtfxs2fp.v2f32.v2i32(<2 x i32>, i32) nounwind readnone
-declare <2 x float> @llvm.arm.neon.vcvtfxu2fp.v2f32.v2i32(<2 x i32>, i32) nounwind readnone
-
-define <4 x i32> @vcvtQ_f32tos32(<4 x float>* %A) nounwind {
-;CHECK: vcvtQ_f32tos32:
-;CHECK: vcvt.s32.f32
-	%tmp1 = load <4 x float>* %A
-	%tmp2 = call <4 x i32> @llvm.arm.neon.vcvtfp2fxs.v4i32.v4f32(<4 x float> %tmp1, i32 1)
-	ret <4 x i32> %tmp2
-}
-
-define <4 x i32> @vcvtQ_f32tou32(<4 x float>* %A) nounwind {
-;CHECK: vcvtQ_f32tou32:
-;CHECK: vcvt.u32.f32
-	%tmp1 = load <4 x float>* %A
-	%tmp2 = call <4 x i32> @llvm.arm.neon.vcvtfp2fxu.v4i32.v4f32(<4 x float> %tmp1, i32 1)
-	ret <4 x i32> %tmp2
-}
-
-define <4 x float> @vcvtQ_s32tof32(<4 x i32>* %A) nounwind {
-;CHECK: vcvtQ_s32tof32:
-;CHECK: vcvt.f32.s32
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x float> @llvm.arm.neon.vcvtfxs2fp.v4f32.v4i32(<4 x i32> %tmp1, i32 1)
-	ret <4 x float> %tmp2
-}
-
-define <4 x float> @vcvtQ_u32tof32(<4 x i32>* %A) nounwind {
-;CHECK: vcvtQ_u32tof32:
-;CHECK: vcvt.f32.u32
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x float> @llvm.arm.neon.vcvtfxu2fp.v4f32.v4i32(<4 x i32> %tmp1, i32 1)
-	ret <4 x float> %tmp2
-}
-
-declare <4 x i32> @llvm.arm.neon.vcvtfp2fxs.v4i32.v4f32(<4 x float>, i32) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vcvtfp2fxu.v4i32.v4f32(<4 x float>, i32) nounwind readnone
-declare <4 x float> @llvm.arm.neon.vcvtfxs2fp.v4f32.v4i32(<4 x i32>, i32) nounwind readnone
-declare <4 x float> @llvm.arm.neon.vcvtfxu2fp.v4f32.v4i32(<4 x i32>, i32) nounwind readnone
-
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vdup.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vdup.ll
index cee24c8..c9a68ca 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vdup.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vdup.ll
@@ -179,3 +179,91 @@ define <4 x float> @v_shuffledupQfloat2(float* %A) nounwind {
         %tmp2 = shufflevector <4 x float> %tmp1, <4 x float> undef, <4 x i32> zeroinitializer
         ret <4 x float> %tmp2
 }
+
+define <8 x i8> @vduplane8(<8 x i8>* %A) nounwind {
+;CHECK: vduplane8:
+;CHECK: vdup.8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = shufflevector <8 x i8> %tmp1, <8 x i8> undef, <8 x i32> < i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1 >
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vduplane16(<4 x i16>* %A) nounwind {
+;CHECK: vduplane16:
+;CHECK: vdup.16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = shufflevector <4 x i16> %tmp1, <4 x i16> undef, <4 x i32> < i32 1, i32 1, i32 1, i32 1 >
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vduplane32(<2 x i32>* %A) nounwind {
+;CHECK: vduplane32:
+;CHECK: vdup.32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = shufflevector <2 x i32> %tmp1, <2 x i32> undef, <2 x i32> < i32 1, i32 1 >
+	ret <2 x i32> %tmp2
+}
+
+define <2 x float> @vduplanefloat(<2 x float>* %A) nounwind {
+;CHECK: vduplanefloat:
+;CHECK: vdup.32
+	%tmp1 = load <2 x float>* %A
+	%tmp2 = shufflevector <2 x float> %tmp1, <2 x float> undef, <2 x i32> < i32 1, i32 1 >
+	ret <2 x float> %tmp2
+}
+
+define <16 x i8> @vduplaneQ8(<8 x i8>* %A) nounwind {
+;CHECK: vduplaneQ8:
+;CHECK: vdup.8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = shufflevector <8 x i8> %tmp1, <8 x i8> undef, <16 x i32> < i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1 >
+	ret <16 x i8> %tmp2
+}
+
+define <8 x i16> @vduplaneQ16(<4 x i16>* %A) nounwind {
+;CHECK: vduplaneQ16:
+;CHECK: vdup.16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = shufflevector <4 x i16> %tmp1, <4 x i16> undef, <8 x i32> < i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1 >
+	ret <8 x i16> %tmp2
+}
+
+define <4 x i32> @vduplaneQ32(<2 x i32>* %A) nounwind {
+;CHECK: vduplaneQ32:
+;CHECK: vdup.32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = shufflevector <2 x i32> %tmp1, <2 x i32> undef, <4 x i32> < i32 1, i32 1, i32 1, i32 1 >
+	ret <4 x i32> %tmp2
+}
+
+define <4 x float> @vduplaneQfloat(<2 x float>* %A) nounwind {
+;CHECK: vduplaneQfloat:
+;CHECK: vdup.32
+	%tmp1 = load <2 x float>* %A
+	%tmp2 = shufflevector <2 x float> %tmp1, <2 x float> undef, <4 x i32> < i32 1, i32 1, i32 1, i32 1 >
+	ret <4 x float> %tmp2
+}
+
+define arm_apcscc <2 x i64> @foo(<2 x i64> %arg0_int64x1_t) nounwind readnone {
+entry:
+  %0 = shufflevector <2 x i64> %arg0_int64x1_t, <2 x i64> undef, <2 x i32> <i32 1, i32 1>
+  ret <2 x i64> %0
+}
+
+define arm_apcscc <2 x i64> @bar(<2 x i64> %arg0_int64x1_t) nounwind readnone {
+entry:
+  %0 = shufflevector <2 x i64> %arg0_int64x1_t, <2 x i64> undef, <2 x i32> <i32 0, i32 0>
+  ret <2 x i64> %0
+}
+
+define arm_apcscc <2 x double> @baz(<2 x double> %arg0_int64x1_t) nounwind readnone {
+entry:
+  %0 = shufflevector <2 x double> %arg0_int64x1_t, <2 x double> undef, <2 x i32> <i32 1, i32 1>
+  ret <2 x double> %0
+}
+
+define arm_apcscc <2 x double> @qux(<2 x double> %arg0_int64x1_t) nounwind readnone {
+entry:
+  %0 = shufflevector <2 x double> %arg0_int64x1_t, <2 x double> undef, <2 x i32> <i32 0, i32 0>
+  ret <2 x double> %0
+}
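
The vduplane additions rely on the same selection trick: a `shufflevector` whose constant mask repeats a single source index is recognized as a lane splat and lowered to one `vdup`. A reduced sketch (hypothetical name, mirroring the masks used above):

    define <2 x i32> @dup_sketch(<2 x i32> %v) nounwind {
      ; splat lane 1 of %v; the NEON selector should emit vdup.32 from that lane
      %r = shufflevector <2 x i32> %v, <2 x i32> undef, <2 x i32> < i32 1, i32 1 >
      ret <2 x i32> %r
    }
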
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vdup_lane.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vdup_lane.ll
deleted file mode 100644
index 313260a..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vdup_lane.ll
+++ /dev/null
@@ -1,89 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
-
-define <8 x i8> @vduplane8(<8 x i8>* %A) nounwind {
-;CHECK: vduplane8:
-;CHECK: vdup.8
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = shufflevector <8 x i8> %tmp1, <8 x i8> undef, <8 x i32> < i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1 >
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vduplane16(<4 x i16>* %A) nounwind {
-;CHECK: vduplane16:
-;CHECK: vdup.16
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = shufflevector <4 x i16> %tmp1, <4 x i16> undef, <4 x i32> < i32 1, i32 1, i32 1, i32 1 >
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vduplane32(<2 x i32>* %A) nounwind {
-;CHECK: vduplane32:
-;CHECK: vdup.32
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = shufflevector <2 x i32> %tmp1, <2 x i32> undef, <2 x i32> < i32 1, i32 1 >
-	ret <2 x i32> %tmp2
-}
-
-define <2 x float> @vduplanefloat(<2 x float>* %A) nounwind {
-;CHECK: vduplanefloat:
-;CHECK: vdup.32
-	%tmp1 = load <2 x float>* %A
-	%tmp2 = shufflevector <2 x float> %tmp1, <2 x float> undef, <2 x i32> < i32 1, i32 1 >
-	ret <2 x float> %tmp2
-}
-
-define <16 x i8> @vduplaneQ8(<8 x i8>* %A) nounwind {
-;CHECK: vduplaneQ8:
-;CHECK: vdup.8
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = shufflevector <8 x i8> %tmp1, <8 x i8> undef, <16 x i32> < i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1 >
-	ret <16 x i8> %tmp2
-}
-
-define <8 x i16> @vduplaneQ16(<4 x i16>* %A) nounwind {
-;CHECK: vduplaneQ16:
-;CHECK: vdup.16
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = shufflevector <4 x i16> %tmp1, <4 x i16> undef, <8 x i32> < i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1 >
-	ret <8 x i16> %tmp2
-}
-
-define <4 x i32> @vduplaneQ32(<2 x i32>* %A) nounwind {
-;CHECK: vduplaneQ32:
-;CHECK: vdup.32
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = shufflevector <2 x i32> %tmp1, <2 x i32> undef, <4 x i32> < i32 1, i32 1, i32 1, i32 1 >
-	ret <4 x i32> %tmp2
-}
-
-define <4 x float> @vduplaneQfloat(<2 x float>* %A) nounwind {
-;CHECK: vduplaneQfloat:
-;CHECK: vdup.32
-	%tmp1 = load <2 x float>* %A
-	%tmp2 = shufflevector <2 x float> %tmp1, <2 x float> undef, <4 x i32> < i32 1, i32 1, i32 1, i32 1 >
-	ret <4 x float> %tmp2
-}
-
-define arm_apcscc <2 x i64> @foo(<2 x i64> %arg0_int64x1_t) nounwind readnone {
-entry:
-  %0 = shufflevector <2 x i64> %arg0_int64x1_t, <2 x i64> undef, <2 x i32> <i32 1, i32 1>
-  ret <2 x i64> %0
-}
-
-define arm_apcscc <2 x i64> @bar(<2 x i64> %arg0_int64x1_t) nounwind readnone {
-entry:
-  %0 = shufflevector <2 x i64> %arg0_int64x1_t, <2 x i64> undef, <2 x i32> <i32 0, i32 0>
-  ret <2 x i64> %0
-}
-
-define arm_apcscc <2 x double> @baz(<2 x double> %arg0_int64x1_t) nounwind readnone {
-entry:
-  %0 = shufflevector <2 x double> %arg0_int64x1_t, <2 x double> undef, <2 x i32> <i32 1, i32 1>
-  ret <2 x double> %0
-}
-
-define arm_apcscc <2 x double> @qux(<2 x double> %arg0_int64x1_t) nounwind readnone {
-entry:
-  %0 = shufflevector <2 x double> %arg0_int64x1_t, <2 x double> undef, <2 x i32> <i32 0, i32 0>
-  ret <2 x double> %0
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/veor.ll b/libclamav/c++/llvm/test/CodeGen/ARM/veor.ll
deleted file mode 100644
index febceb4..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/veor.ll
+++ /dev/null
@@ -1,73 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
-
-define <8 x i8> @v_eori8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-;CHECK: v_eori8:
-;CHECK: veor
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = xor <8 x i8> %tmp1, %tmp2
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @v_eori16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-;CHECK: v_eori16:
-;CHECK: veor
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = xor <4 x i16> %tmp1, %tmp2
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @v_eori32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-;CHECK: v_eori32:
-;CHECK: veor
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = xor <2 x i32> %tmp1, %tmp2
-	ret <2 x i32> %tmp3
-}
-
-define <1 x i64> @v_eori64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
-;CHECK: v_eori64:
-;CHECK: veor
-	%tmp1 = load <1 x i64>* %A
-	%tmp2 = load <1 x i64>* %B
-	%tmp3 = xor <1 x i64> %tmp1, %tmp2
-	ret <1 x i64> %tmp3
-}
-
-define <16 x i8> @v_eorQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-;CHECK: v_eorQi8:
-;CHECK: veor
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = xor <16 x i8> %tmp1, %tmp2
-	ret <16 x i8> %tmp3
-}
-
-define <8 x i16> @v_eorQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-;CHECK: v_eorQi16:
-;CHECK: veor
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = xor <8 x i16> %tmp1, %tmp2
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @v_eorQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-;CHECK: v_eorQi32:
-;CHECK: veor
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = xor <4 x i32> %tmp1, %tmp2
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @v_eorQi64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
-;CHECK: v_eorQi64:
-;CHECK: veor
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i64>* %B
-	%tmp3 = xor <2 x i64> %tmp1, %tmp2
-	ret <2 x i64> %tmp3
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vfp.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vfp.ll
index 50000e3..44a44af 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vfp.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vfp.ll
@@ -15,11 +15,11 @@ declare double @fabs(double)
 define void @test_abs(float* %P, double* %D) {
 ;CHECK: test_abs:
 	%a = load float* %P		; <float> [#uses=1]
-;CHECK: fabss
+;CHECK: vabs.f32
 	%b = call float @fabsf( float %a )		; <float> [#uses=1]
 	store float %b, float* %P
 	%A = load double* %D		; <double> [#uses=1]
-;CHECK: fabsd
+;CHECK: vabs.f64
 	%B = call double @fabs( double %A )		; <double> [#uses=1]
 	store double %B, double* %D
 	ret void
@@ -39,10 +39,10 @@ define void @test_add(float* %P, double* %D) {
 define void @test_ext_round(float* %P, double* %D) {
 ;CHECK: test_ext_round:
 	%a = load float* %P		; <float> [#uses=1]
-;CHECK: fcvtds
+;CHECK: vcvt.f64.f32
 	%b = fpext float %a to double		; <double> [#uses=1]
 	%A = load double* %D		; <double> [#uses=1]
-;CHECK: fcvtsd
+;CHECK: vcvt.f32.f64
 	%B = fptrunc double %A to float		; <float> [#uses=1]
 	store double %b, double* %D
 	store float %B, float* %P
@@ -54,7 +54,7 @@ define void @test_fma(float* %P1, float* %P2, float* %P3) {
 	%a1 = load float* %P1		; <float> [#uses=1]
 	%a2 = load float* %P2		; <float> [#uses=1]
 	%a3 = load float* %P3		; <float> [#uses=1]
-;CHECK: fmscs
+;CHECK: vnmls.f32
 	%X = fmul float %a1, %a2		; <float> [#uses=1]
 	%Y = fsub float %X, %a3		; <float> [#uses=1]
 	store float %Y, float* %P1
@@ -64,7 +64,7 @@ define void @test_fma(float* %P1, float* %P2, float* %P3) {
 define i32 @test_ftoi(float* %P1) {
 ;CHECK: test_ftoi:
 	%a1 = load float* %P1		; <float> [#uses=1]
-;CHECK: ftosizs
+;CHECK: vcvt.s32.f32
 	%b1 = fptosi float %a1 to i32		; <i32> [#uses=1]
 	ret i32 %b1
 }
@@ -72,7 +72,7 @@ define i32 @test_ftoi(float* %P1) {
 define i32 @test_ftou(float* %P1) {
 ;CHECK: test_ftou:
 	%a1 = load float* %P1		; <float> [#uses=1]
-;CHECK: ftouizs
+;CHECK: vcvt.u32.f32
 	%b1 = fptoui float %a1 to i32		; <i32> [#uses=1]
 	ret i32 %b1
 }
@@ -80,7 +80,7 @@ define i32 @test_ftou(float* %P1) {
 define i32 @test_dtoi(double* %P1) {
 ;CHECK: test_dtoi:
 	%a1 = load double* %P1		; <double> [#uses=1]
-;CHECK: ftosizd
+;CHECK: vcvt.s32.f64
 	%b1 = fptosi double %a1 to i32		; <i32> [#uses=1]
 	ret i32 %b1
 }
@@ -88,14 +88,14 @@ define i32 @test_dtoi(double* %P1) {
 define i32 @test_dtou(double* %P1) {
 ;CHECK: test_dtou:
 	%a1 = load double* %P1		; <double> [#uses=1]
-;CHECK: ftouizd
+;CHECK: vcvt.u32.f64
 	%b1 = fptoui double %a1 to i32		; <i32> [#uses=1]
 	ret i32 %b1
 }
 
 define void @test_utod(double* %P1, i32 %X) {
 ;CHECK: test_utod:
-;CHECK: fuitod
+;CHECK: vcvt.f64.u32
 	%b1 = uitofp i32 %X to double		; <double> [#uses=1]
 	store double %b1, double* %P1
 	ret void
@@ -103,7 +103,7 @@ define void @test_utod(double* %P1, i32 %X) {
 
 define void @test_utod2(double* %P1, i8 %X) {
 ;CHECK: test_utod2:
-;CHECK: fuitod
+;CHECK: vcvt.f64.u32
 	%b1 = uitofp i8 %X to double		; <double> [#uses=1]
 	store double %b1, double* %P1
 	ret void
@@ -141,7 +141,7 @@ define void @test_cmpfp0(float* %glob, i32 %X) {
 ;CHECK: test_cmpfp0:
 entry:
 	%tmp = load float* %glob		; <float> [#uses=1]
-;CHECK: fcmpezs
+;CHECK: vcmpe.f32
 	%tmp.upgrd.3 = fcmp ogt float %tmp, 0.000000e+00		; <i1> [#uses=1]
 	br i1 %tmp.upgrd.3, label %cond_true, label %cond_false
 
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vget_lane.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vget_lane.ll
index b4f093c..5dd87d6 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vget_lane.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vget_lane.ll
@@ -1,4 +1,6 @@
-; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
+; RUN: llc < %s -mattr=+neon | FileCheck %s
+target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32"
+target triple = "thumbv7-elf"
 
 define i32 @vget_lanes8(<8 x i8>* %A) nounwind {
 ;CHECK: vget_lanes8:
@@ -91,3 +93,120 @@ define i32 @vgetQ_lanei32(<4 x i32>* %A) nounwind {
 	%tmp3 = extractelement <4 x i32> %tmp2, i32 1
 	ret i32 %tmp3
 }
+
+define arm_aapcs_vfpcc void @test_vget_laneu16() nounwind {
+entry:
+; CHECK: vmov.u16 r0, d0[1]
+  %arg0_uint16x4_t = alloca <4 x i16>             ; <<4 x i16>*> [#uses=1]
+  %out_uint16_t = alloca i16                      ; <i16*> [#uses=1]
+  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
+  %0 = load <4 x i16>* %arg0_uint16x4_t, align 8  ; <<4 x i16>> [#uses=1]
+  %1 = extractelement <4 x i16> %0, i32 1         ; <i16> [#uses=1]
+  store i16 %1, i16* %out_uint16_t, align 2
+  br label %return
+
+return:                                           ; preds = %entry
+  ret void
+}
+
+define arm_aapcs_vfpcc void @test_vget_laneu8() nounwind {
+entry:
+; CHECK: vmov.u8 r0, d0[1]
+  %arg0_uint8x8_t = alloca <8 x i8>               ; <<8 x i8>*> [#uses=1]
+  %out_uint8_t = alloca i8                        ; <i8*> [#uses=1]
+  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
+  %0 = load <8 x i8>* %arg0_uint8x8_t, align 8    ; <<8 x i8>> [#uses=1]
+  %1 = extractelement <8 x i8> %0, i32 1          ; <i8> [#uses=1]
+  store i8 %1, i8* %out_uint8_t, align 1
+  br label %return
+
+return:                                           ; preds = %entry
+  ret void
+}
+
+define arm_aapcs_vfpcc void @test_vgetQ_laneu16() nounwind {
+entry:
+; CHECK: vmov.u16 r0, d0[1]
+  %arg0_uint16x8_t = alloca <8 x i16>             ; <<8 x i16>*> [#uses=1]
+  %out_uint16_t = alloca i16                      ; <i16*> [#uses=1]
+  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
+  %0 = load <8 x i16>* %arg0_uint16x8_t, align 16 ; <<8 x i16>> [#uses=1]
+  %1 = extractelement <8 x i16> %0, i32 1         ; <i16> [#uses=1]
+  store i16 %1, i16* %out_uint16_t, align 2
+  br label %return
+
+return:                                           ; preds = %entry
+  ret void
+}
+
+define arm_aapcs_vfpcc void @test_vgetQ_laneu8() nounwind {
+entry:
+; CHECK: vmov.u8 r0, d0[1]
+  %arg0_uint8x16_t = alloca <16 x i8>             ; <<16 x i8>*> [#uses=1]
+  %out_uint8_t = alloca i8                        ; <i8*> [#uses=1]
+  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
+  %0 = load <16 x i8>* %arg0_uint8x16_t, align 16 ; <<16 x i8>> [#uses=1]
+  %1 = extractelement <16 x i8> %0, i32 1         ; <i8> [#uses=1]
+  store i8 %1, i8* %out_uint8_t, align 1
+  br label %return
+
+return:                                           ; preds = %entry
+  ret void
+}
+
+define <8 x i8> @vset_lane8(<8 x i8>* %A, i8 %B) nounwind {
+;CHECK: vset_lane8:
+;CHECK: vmov.8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = insertelement <8 x i8> %tmp1, i8 %B, i32 1
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vset_lane16(<4 x i16>* %A, i16 %B) nounwind {
+;CHECK: vset_lane16:
+;CHECK: vmov.16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = insertelement <4 x i16> %tmp1, i16 %B, i32 1
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vset_lane32(<2 x i32>* %A, i32 %B) nounwind {
+;CHECK: vset_lane32:
+;CHECK: vmov.32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = insertelement <2 x i32> %tmp1, i32 %B, i32 1
+	ret <2 x i32> %tmp2
+}
+
+define <16 x i8> @vsetQ_lane8(<16 x i8>* %A, i8 %B) nounwind {
+;CHECK: vsetQ_lane8:
+;CHECK: vmov.8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = insertelement <16 x i8> %tmp1, i8 %B, i32 1
+	ret <16 x i8> %tmp2
+}
+
+define <8 x i16> @vsetQ_lane16(<8 x i16>* %A, i16 %B) nounwind {
+;CHECK: vsetQ_lane16:
+;CHECK: vmov.16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = insertelement <8 x i16> %tmp1, i16 %B, i32 1
+	ret <8 x i16> %tmp2
+}
+
+define <4 x i32> @vsetQ_lane32(<4 x i32>* %A, i32 %B) nounwind {
+;CHECK: vsetQ_lane32:
+;CHECK: vmov.32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = insertelement <4 x i32> %tmp1, i32 %B, i32 1
+	ret <4 x i32> %tmp2
+}
+
+define arm_aapcs_vfpcc <2 x float> @test_vset_lanef32(float %arg0_float32_t, <2 x float> %arg1_float32x2_t) nounwind {
+;CHECK: test_vset_lanef32:
+;CHECK: vmov.f32
+;CHECK: vmov.f32
+entry:
+  %0 = insertelement <2 x float> %arg1_float32x2_t, float %arg0_float32_t, i32 1 ; <<2 x float>> [#uses=1]
+  ret <2 x float> %0
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vget_lane2.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vget_lane2.ll
deleted file mode 100644
index 1981bf9..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vget_lane2.ll
+++ /dev/null
@@ -1,63 +0,0 @@
-; RUN: llc < %s -mattr=+neon | FileCheck %s
-target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32"
-target triple = "thumbv7-elf"
-
-define arm_aapcs_vfpcc void @test_vget_laneu16() nounwind {
-entry:
-; CHECK: vmov.u16 r0, d0[1]
-  %arg0_uint16x4_t = alloca <4 x i16>             ; <<4 x i16>*> [#uses=1]
-  %out_uint16_t = alloca i16                      ; <i16*> [#uses=1]
-  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
-  %0 = load <4 x i16>* %arg0_uint16x4_t, align 8  ; <<4 x i16>> [#uses=1]
-  %1 = extractelement <4 x i16> %0, i32 1         ; <i16> [#uses=1]
-  store i16 %1, i16* %out_uint16_t, align 2
-  br label %return
-
-return:                                           ; preds = %entry
-  ret void
-}
-
-define arm_aapcs_vfpcc void @test_vget_laneu8() nounwind {
-entry:
-; CHECK: vmov.u8 r0, d0[1]
-  %arg0_uint8x8_t = alloca <8 x i8>               ; <<8 x i8>*> [#uses=1]
-  %out_uint8_t = alloca i8                        ; <i8*> [#uses=1]
-  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
-  %0 = load <8 x i8>* %arg0_uint8x8_t, align 8    ; <<8 x i8>> [#uses=1]
-  %1 = extractelement <8 x i8> %0, i32 1          ; <i8> [#uses=1]
-  store i8 %1, i8* %out_uint8_t, align 1
-  br label %return
-
-return:                                           ; preds = %entry
-  ret void
-}
-
-define arm_aapcs_vfpcc void @test_vgetQ_laneu16() nounwind {
-entry:
-; CHECK: vmov.u16 r0, d0[1]
-  %arg0_uint16x8_t = alloca <8 x i16>             ; <<8 x i16>*> [#uses=1]
-  %out_uint16_t = alloca i16                      ; <i16*> [#uses=1]
-  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
-  %0 = load <8 x i16>* %arg0_uint16x8_t, align 16 ; <<8 x i16>> [#uses=1]
-  %1 = extractelement <8 x i16> %0, i32 1         ; <i16> [#uses=1]
-  store i16 %1, i16* %out_uint16_t, align 2
-  br label %return
-
-return:                                           ; preds = %entry
-  ret void
-}
-
-define arm_aapcs_vfpcc void @test_vgetQ_laneu8() nounwind {
-entry:
-; CHECK: vmov.u8 r0, d0[1]
-  %arg0_uint8x16_t = alloca <16 x i8>             ; <<16 x i8>*> [#uses=1]
-  %out_uint8_t = alloca i8                        ; <i8*> [#uses=1]
-  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
-  %0 = load <16 x i8>* %arg0_uint8x16_t, align 16 ; <<16 x i8>> [#uses=1]
-  %1 = extractelement <16 x i8> %0, i32 1         ; <i8> [#uses=1]
-  store i8 %1, i8* %out_uint8_t, align 1
-  br label %return
-
-return:                                           ; preds = %entry
-  ret void
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vhadd.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vhadd.ll
index d767097..379e062 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vhadd.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vhadd.ll
@@ -123,3 +123,127 @@ declare <4 x i32> @llvm.arm.neon.vhadds.v4i32(<4 x i32>, <4 x i32>) nounwind rea
 declare <16 x i8> @llvm.arm.neon.vhaddu.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
 declare <8 x i16> @llvm.arm.neon.vhaddu.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
 declare <4 x i32> @llvm.arm.neon.vhaddu.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
+
+define <8 x i8> @vrhadds8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vrhadds8:
+;CHECK: vrhadd.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vrhadds.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vrhadds16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vrhadds16:
+;CHECK: vrhadd.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vrhadds.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vrhadds32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vrhadds32:
+;CHECK: vrhadd.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vrhadds.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <8 x i8> @vrhaddu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vrhaddu8:
+;CHECK: vrhadd.u8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vrhaddu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vrhaddu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vrhaddu16:
+;CHECK: vrhadd.u16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vrhaddu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vrhaddu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vrhaddu32:
+;CHECK: vrhadd.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vrhaddu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <16 x i8> @vrhaddQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vrhaddQs8:
+;CHECK: vrhadd.s8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = call <16 x i8> @llvm.arm.neon.vrhadds.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
+	ret <16 x i8> %tmp3
+}
+
+define <8 x i16> @vrhaddQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vrhaddQs16:
+;CHECK: vrhadd.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vrhadds.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vrhaddQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vrhaddQs32:
+;CHECK: vrhadd.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vrhadds.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <16 x i8> @vrhaddQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vrhaddQu8:
+;CHECK: vrhadd.u8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = call <16 x i8> @llvm.arm.neon.vrhaddu.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
+	ret <16 x i8> %tmp3
+}
+
+define <8 x i16> @vrhaddQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vrhaddQu16:
+;CHECK: vrhadd.u16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vrhaddu.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vrhaddQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vrhaddQu32:
+;CHECK: vrhadd.u32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vrhaddu.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+declare <8 x i8>  @llvm.arm.neon.vrhadds.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vrhadds.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vrhadds.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i8>  @llvm.arm.neon.vrhaddu.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vrhaddu.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vrhaddu.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <16 x i8> @llvm.arm.neon.vrhadds.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
+declare <8 x i16> @llvm.arm.neon.vrhadds.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vrhadds.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
+
+declare <16 x i8> @llvm.arm.neon.vrhaddu.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
+declare <8 x i16> @llvm.arm.neon.vrhaddu.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vrhaddu.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vicmp.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vicmp.ll
index fb0f4cc..2d8cb89 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vicmp.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vicmp.ll
@@ -1,12 +1,4 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vceq\\.i8} %t | count 2
-; RUN: grep {vceq\\.i16} %t | count 2
-; RUN: grep {vceq\\.i32} %t | count 2
-; RUN: grep vmvn %t | count 6
-; RUN: grep {vcgt\\.s8} %t | count 1
-; RUN: grep {vcge\\.s16} %t | count 1
-; RUN: grep {vcgt\\.u16} %t | count 1
-; RUN: grep {vcge\\.u32} %t | count 1
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 ; This tests icmp operations that do not map directly to NEON instructions.
 ; Not-equal (ne) operations are implemented by VCEQ/VMVN.  Less-than (lt/ult)
@@ -15,6 +7,9 @@
 ; the other operations.
 
 define <8 x i8> @vcnei8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vcnei8:
+;CHECK: vceq.i8
+;CHECK-NEXT: vmvn
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = icmp ne <8 x i8> %tmp1, %tmp2
@@ -23,6 +18,9 @@ define <8 x i8> @vcnei8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vcnei16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vcnei16:
+;CHECK: vceq.i16
+;CHECK-NEXT: vmvn
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = icmp ne <4 x i16> %tmp1, %tmp2
@@ -31,6 +29,9 @@ define <4 x i16> @vcnei16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vcnei32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vcnei32:
+;CHECK: vceq.i32
+;CHECK-NEXT: vmvn
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = icmp ne <2 x i32> %tmp1, %tmp2
@@ -39,6 +40,9 @@ define <2 x i32> @vcnei32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <16 x i8> @vcneQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vcneQi8:
+;CHECK: vceq.i8
+;CHECK-NEXT: vmvn
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = icmp ne <16 x i8> %tmp1, %tmp2
@@ -47,6 +51,9 @@ define <16 x i8> @vcneQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vcneQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vcneQi16:
+;CHECK: vceq.i16
+;CHECK-NEXT: vmvn
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = icmp ne <8 x i16> %tmp1, %tmp2
@@ -55,6 +62,9 @@ define <8 x i16> @vcneQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vcneQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vcneQi32:
+;CHECK: vceq.i32
+;CHECK-NEXT: vmvn
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = icmp ne <4 x i32> %tmp1, %tmp2
@@ -63,6 +73,8 @@ define <4 x i32> @vcneQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <16 x i8> @vcltQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vcltQs8:
+;CHECK: vcgt.s8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = icmp slt <16 x i8> %tmp1, %tmp2
@@ -71,6 +83,8 @@ define <16 x i8> @vcltQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vcles16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vcles16:
+;CHECK: vcge.s16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = icmp sle <4 x i16> %tmp1, %tmp2
@@ -79,6 +93,8 @@ define <4 x i16> @vcles16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <4 x i16> @vcltu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vcltu16:
+;CHECK: vcgt.u16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = icmp ult <4 x i16> %tmp1, %tmp2
@@ -87,6 +103,8 @@ define <4 x i16> @vcltu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vcleQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vcleQu32:
+;CHECK: vcge.u32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = icmp ule <4 x i32> %tmp1, %tmp2
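
As the retained comment in vicmp.ll explains, ne/lt/le comparisons have no direct NEON instruction and are built from VCEQ/VMVN or from VCGT/VCGE with swapped operands. The rewritten RUN line makes this checkable per function: FileCheck matches its patterns in order within the output, and CHECK-NEXT pins the vmvn to the line immediately after the vceq, something the old `grep ... | count N` lines could not express. A reduced sketch of the pattern (hypothetical test, same conventions as above):

    ; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
    define <8 x i8> @ne_sketch(<8 x i8> %a, <8 x i8> %b) nounwind {
    ;CHECK: ne_sketch:
    ;CHECK: vceq.i8
    ;CHECK-NEXT: vmvn
      %c = icmp ne <8 x i8> %a, %b
      %r = sext <8 x i1> %c to <8 x i8>
      ret <8 x i8> %r
    }
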
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vld2.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vld2.ll
index 36e54bd..23f7d2c 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vld2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vld2.ll
@@ -1,16 +1,22 @@
 ; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
-%struct.__builtin_neon_v8qi2 = type { <8 x i8>,  <8 x i8> }
-%struct.__builtin_neon_v4hi2 = type { <4 x i16>, <4 x i16> }
-%struct.__builtin_neon_v2si2 = type { <2 x i32>, <2 x i32> }
-%struct.__builtin_neon_v2sf2 = type { <2 x float>, <2 x float> }
+%struct.__neon_int8x8x2_t = type { <8 x i8>,  <8 x i8> }
+%struct.__neon_int16x4x2_t = type { <4 x i16>, <4 x i16> }
+%struct.__neon_int32x2x2_t = type { <2 x i32>, <2 x i32> }
+%struct.__neon_float32x2x2_t = type { <2 x float>, <2 x float> }
+%struct.__neon_int64x1x2_t = type { <1 x i64>, <1 x i64> }
+
+%struct.__neon_int8x16x2_t = type { <16 x i8>,  <16 x i8> }
+%struct.__neon_int16x8x2_t = type { <8 x i16>, <8 x i16> }
+%struct.__neon_int32x4x2_t = type { <4 x i32>, <4 x i32> }
+%struct.__neon_float32x4x2_t = type { <4 x float>, <4 x float> }
 
 define <8 x i8> @vld2i8(i8* %A) nounwind {
 ;CHECK: vld2i8:
 ;CHECK: vld2.8
-	%tmp1 = call %struct.__builtin_neon_v8qi2 @llvm.arm.neon.vld2.v8i8(i8* %A)
-        %tmp2 = extractvalue %struct.__builtin_neon_v8qi2 %tmp1, 0
-        %tmp3 = extractvalue %struct.__builtin_neon_v8qi2 %tmp1, 1
+	%tmp1 = call %struct.__neon_int8x8x2_t @llvm.arm.neon.vld2.v8i8(i8* %A)
+        %tmp2 = extractvalue %struct.__neon_int8x8x2_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int8x8x2_t %tmp1, 1
         %tmp4 = add <8 x i8> %tmp2, %tmp3
 	ret <8 x i8> %tmp4
 }
@@ -18,9 +24,9 @@ define <8 x i8> @vld2i8(i8* %A) nounwind {
 define <4 x i16> @vld2i16(i16* %A) nounwind {
 ;CHECK: vld2i16:
 ;CHECK: vld2.16
-	%tmp1 = call %struct.__builtin_neon_v4hi2 @llvm.arm.neon.vld2.v4i16(i16* %A)
-        %tmp2 = extractvalue %struct.__builtin_neon_v4hi2 %tmp1, 0
-        %tmp3 = extractvalue %struct.__builtin_neon_v4hi2 %tmp1, 1
+	%tmp1 = call %struct.__neon_int16x4x2_t @llvm.arm.neon.vld2.v4i16(i16* %A)
+        %tmp2 = extractvalue %struct.__neon_int16x4x2_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int16x4x2_t %tmp1, 1
         %tmp4 = add <4 x i16> %tmp2, %tmp3
 	ret <4 x i16> %tmp4
 }
@@ -28,9 +34,9 @@ define <4 x i16> @vld2i16(i16* %A) nounwind {
 define <2 x i32> @vld2i32(i32* %A) nounwind {
 ;CHECK: vld2i32:
 ;CHECK: vld2.32
-	%tmp1 = call %struct.__builtin_neon_v2si2 @llvm.arm.neon.vld2.v2i32(i32* %A)
-        %tmp2 = extractvalue %struct.__builtin_neon_v2si2 %tmp1, 0
-        %tmp3 = extractvalue %struct.__builtin_neon_v2si2 %tmp1, 1
+	%tmp1 = call %struct.__neon_int32x2x2_t @llvm.arm.neon.vld2.v2i32(i32* %A)
+        %tmp2 = extractvalue %struct.__neon_int32x2x2_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int32x2x2_t %tmp1, 1
         %tmp4 = add <2 x i32> %tmp2, %tmp3
 	ret <2 x i32> %tmp4
 }
@@ -38,14 +44,70 @@ define <2 x i32> @vld2i32(i32* %A) nounwind {
 define <2 x float> @vld2f(float* %A) nounwind {
 ;CHECK: vld2f:
 ;CHECK: vld2.32
-	%tmp1 = call %struct.__builtin_neon_v2sf2 @llvm.arm.neon.vld2.v2f32(float* %A)
-        %tmp2 = extractvalue %struct.__builtin_neon_v2sf2 %tmp1, 0
-        %tmp3 = extractvalue %struct.__builtin_neon_v2sf2 %tmp1, 1
+	%tmp1 = call %struct.__neon_float32x2x2_t @llvm.arm.neon.vld2.v2f32(float* %A)
+        %tmp2 = extractvalue %struct.__neon_float32x2x2_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_float32x2x2_t %tmp1, 1
         %tmp4 = add <2 x float> %tmp2, %tmp3
 	ret <2 x float> %tmp4
 }
 
-declare %struct.__builtin_neon_v8qi2 @llvm.arm.neon.vld2.v8i8(i8*) nounwind readonly
-declare %struct.__builtin_neon_v4hi2 @llvm.arm.neon.vld2.v4i16(i8*) nounwind readonly
-declare %struct.__builtin_neon_v2si2 @llvm.arm.neon.vld2.v2i32(i8*) nounwind readonly
-declare %struct.__builtin_neon_v2sf2 @llvm.arm.neon.vld2.v2f32(i8*) nounwind readonly
+define <1 x i64> @vld2i64(i64* %A) nounwind {
+;CHECK: vld2i64:
+;CHECK: vld1.64
+	%tmp1 = call %struct.__neon_int64x1x2_t @llvm.arm.neon.vld2.v1i64(i64* %A)
+        %tmp2 = extractvalue %struct.__neon_int64x1x2_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int64x1x2_t %tmp1, 1
+        %tmp4 = add <1 x i64> %tmp2, %tmp3
+	ret <1 x i64> %tmp4
+}
+
+define <16 x i8> @vld2Qi8(i8* %A) nounwind {
+;CHECK: vld2Qi8:
+;CHECK: vld2.8
+	%tmp1 = call %struct.__neon_int8x16x2_t @llvm.arm.neon.vld2.v16i8(i8* %A)
+        %tmp2 = extractvalue %struct.__neon_int8x16x2_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int8x16x2_t %tmp1, 1
+        %tmp4 = add <16 x i8> %tmp2, %tmp3
+	ret <16 x i8> %tmp4
+}
+
+define <8 x i16> @vld2Qi16(i16* %A) nounwind {
+;CHECK: vld2Qi16:
+;CHECK: vld2.16
+	%tmp1 = call %struct.__neon_int16x8x2_t @llvm.arm.neon.vld2.v8i16(i16* %A)
+        %tmp2 = extractvalue %struct.__neon_int16x8x2_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int16x8x2_t %tmp1, 1
+        %tmp4 = add <8 x i16> %tmp2, %tmp3
+	ret <8 x i16> %tmp4
+}
+
+define <4 x i32> @vld2Qi32(i32* %A) nounwind {
+;CHECK: vld2Qi32:
+;CHECK: vld2.32
+	%tmp1 = call %struct.__neon_int32x4x2_t @llvm.arm.neon.vld2.v4i32(i32* %A)
+        %tmp2 = extractvalue %struct.__neon_int32x4x2_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int32x4x2_t %tmp1, 1
+        %tmp4 = add <4 x i32> %tmp2, %tmp3
+	ret <4 x i32> %tmp4
+}
+
+define <4 x float> @vld2Qf(float* %A) nounwind {
+;CHECK: vld2Qf:
+;CHECK: vld2.32
+	%tmp1 = call %struct.__neon_float32x4x2_t @llvm.arm.neon.vld2.v4f32(float* %A)
+        %tmp2 = extractvalue %struct.__neon_float32x4x2_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_float32x4x2_t %tmp1, 1
+        %tmp4 = add <4 x float> %tmp2, %tmp3
+	ret <4 x float> %tmp4
+}
+
+declare %struct.__neon_int8x8x2_t @llvm.arm.neon.vld2.v8i8(i8*) nounwind readonly
+declare %struct.__neon_int16x4x2_t @llvm.arm.neon.vld2.v4i16(i8*) nounwind readonly
+declare %struct.__neon_int32x2x2_t @llvm.arm.neon.vld2.v2i32(i8*) nounwind readonly
+declare %struct.__neon_float32x2x2_t @llvm.arm.neon.vld2.v2f32(i8*) nounwind readonly
+declare %struct.__neon_int64x1x2_t @llvm.arm.neon.vld2.v1i64(i8*) nounwind readonly
+
+declare %struct.__neon_int8x16x2_t @llvm.arm.neon.vld2.v16i8(i8*) nounwind readonly
+declare %struct.__neon_int16x8x2_t @llvm.arm.neon.vld2.v8i16(i8*) nounwind readonly
+declare %struct.__neon_int32x4x2_t @llvm.arm.neon.vld2.v4i32(i8*) nounwind readonly
+declare %struct.__neon_float32x4x2_t @llvm.arm.neon.vld2.v4f32(i8*) nounwind readonly
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vld3.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vld3.ll
index aa38bb0..207dc6a 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vld3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vld3.ll
@@ -1,16 +1,22 @@
 ; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
-%struct.__builtin_neon_v8qi3 = type { <8 x i8>,  <8 x i8>,  <8 x i8> }
-%struct.__builtin_neon_v4hi3 = type { <4 x i16>, <4 x i16>, <4 x i16> }
-%struct.__builtin_neon_v2si3 = type { <2 x i32>, <2 x i32>, <2 x i32> }
-%struct.__builtin_neon_v2sf3 = type { <2 x float>, <2 x float>, <2 x float> }
+%struct.__neon_int8x8x3_t = type { <8 x i8>,  <8 x i8>,  <8 x i8> }
+%struct.__neon_int16x4x3_t = type { <4 x i16>, <4 x i16>, <4 x i16> }
+%struct.__neon_int32x2x3_t = type { <2 x i32>, <2 x i32>, <2 x i32> }
+%struct.__neon_float32x2x3_t = type { <2 x float>, <2 x float>, <2 x float> }
+%struct.__neon_int64x1x3_t = type { <1 x i64>, <1 x i64>, <1 x i64> }
+
+%struct.__neon_int8x16x3_t = type { <16 x i8>,  <16 x i8>,  <16 x i8> }
+%struct.__neon_int16x8x3_t = type { <8 x i16>, <8 x i16>, <8 x i16> }
+%struct.__neon_int32x4x3_t = type { <4 x i32>, <4 x i32>, <4 x i32> }
+%struct.__neon_float32x4x3_t = type { <4 x float>, <4 x float>, <4 x float> }
 
 define <8 x i8> @vld3i8(i8* %A) nounwind {
 ;CHECK: vld3i8:
 ;CHECK: vld3.8
-	%tmp1 = call %struct.__builtin_neon_v8qi3 @llvm.arm.neon.vld3.v8i8(i8* %A)
-        %tmp2 = extractvalue %struct.__builtin_neon_v8qi3 %tmp1, 0
-        %tmp3 = extractvalue %struct.__builtin_neon_v8qi3 %tmp1, 2
+	%tmp1 = call %struct.__neon_int8x8x3_t @llvm.arm.neon.vld3.v8i8(i8* %A)
+        %tmp2 = extractvalue %struct.__neon_int8x8x3_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int8x8x3_t %tmp1, 2
         %tmp4 = add <8 x i8> %tmp2, %tmp3
 	ret <8 x i8> %tmp4
 }
@@ -18,9 +24,9 @@ define <8 x i8> @vld3i8(i8* %A) nounwind {
 define <4 x i16> @vld3i16(i16* %A) nounwind {
 ;CHECK: vld3i16:
 ;CHECK: vld3.16
-	%tmp1 = call %struct.__builtin_neon_v4hi3 @llvm.arm.neon.vld3.v4i16(i16* %A)
-        %tmp2 = extractvalue %struct.__builtin_neon_v4hi3 %tmp1, 0
-        %tmp3 = extractvalue %struct.__builtin_neon_v4hi3 %tmp1, 2
+	%tmp1 = call %struct.__neon_int16x4x3_t @llvm.arm.neon.vld3.v4i16(i16* %A)
+        %tmp2 = extractvalue %struct.__neon_int16x4x3_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int16x4x3_t %tmp1, 2
         %tmp4 = add <4 x i16> %tmp2, %tmp3
 	ret <4 x i16> %tmp4
 }
@@ -28,9 +34,9 @@ define <4 x i16> @vld3i16(i16* %A) nounwind {
 define <2 x i32> @vld3i32(i32* %A) nounwind {
 ;CHECK: vld3i32:
 ;CHECK: vld3.32
-	%tmp1 = call %struct.__builtin_neon_v2si3 @llvm.arm.neon.vld3.v2i32(i32* %A)
-        %tmp2 = extractvalue %struct.__builtin_neon_v2si3 %tmp1, 0
-        %tmp3 = extractvalue %struct.__builtin_neon_v2si3 %tmp1, 2
+	%tmp1 = call %struct.__neon_int32x2x3_t @llvm.arm.neon.vld3.v2i32(i32* %A)
+        %tmp2 = extractvalue %struct.__neon_int32x2x3_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int32x2x3_t %tmp1, 2
         %tmp4 = add <2 x i32> %tmp2, %tmp3
 	ret <2 x i32> %tmp4
 }
@@ -38,14 +44,74 @@ define <2 x i32> @vld3i32(i32* %A) nounwind {
 define <2 x float> @vld3f(float* %A) nounwind {
 ;CHECK: vld3f:
 ;CHECK: vld3.32
-	%tmp1 = call %struct.__builtin_neon_v2sf3 @llvm.arm.neon.vld3.v2f32(float* %A)
-        %tmp2 = extractvalue %struct.__builtin_neon_v2sf3 %tmp1, 0
-        %tmp3 = extractvalue %struct.__builtin_neon_v2sf3 %tmp1, 2
+	%tmp1 = call %struct.__neon_float32x2x3_t @llvm.arm.neon.vld3.v2f32(float* %A)
+        %tmp2 = extractvalue %struct.__neon_float32x2x3_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_float32x2x3_t %tmp1, 2
         %tmp4 = add <2 x float> %tmp2, %tmp3
 	ret <2 x float> %tmp4
 }
 
-declare %struct.__builtin_neon_v8qi3 @llvm.arm.neon.vld3.v8i8(i8*) nounwind readonly
-declare %struct.__builtin_neon_v4hi3 @llvm.arm.neon.vld3.v4i16(i8*) nounwind readonly
-declare %struct.__builtin_neon_v2si3 @llvm.arm.neon.vld3.v2i32(i8*) nounwind readonly
-declare %struct.__builtin_neon_v2sf3 @llvm.arm.neon.vld3.v2f32(i8*) nounwind readonly
+define <1 x i64> @vld3i64(i64* %A) nounwind {
+;CHECK: vld3i64:
+;CHECK: vld1.64
+	%tmp1 = call %struct.__neon_int64x1x3_t @llvm.arm.neon.vld3.v1i64(i64* %A)
+        %tmp2 = extractvalue %struct.__neon_int64x1x3_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int64x1x3_t %tmp1, 2
+        %tmp4 = add <1 x i64> %tmp2, %tmp3
+	ret <1 x i64> %tmp4
+}
+
+define <16 x i8> @vld3Qi8(i8* %A) nounwind {
+;CHECK: vld3Qi8:
+;CHECK: vld3.8
+;CHECK: vld3.8
+	%tmp1 = call %struct.__neon_int8x16x3_t @llvm.arm.neon.vld3.v16i8(i8* %A)
+        %tmp2 = extractvalue %struct.__neon_int8x16x3_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int8x16x3_t %tmp1, 2
+        %tmp4 = add <16 x i8> %tmp2, %tmp3
+	ret <16 x i8> %tmp4
+}
+
+define <8 x i16> @vld3Qi16(i16* %A) nounwind {
+;CHECK: vld3Qi16:
+;CHECK: vld3.16
+;CHECK: vld3.16
+	%tmp1 = call %struct.__neon_int16x8x3_t @llvm.arm.neon.vld3.v8i16(i16* %A)
+        %tmp2 = extractvalue %struct.__neon_int16x8x3_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int16x8x3_t %tmp1, 2
+        %tmp4 = add <8 x i16> %tmp2, %tmp3
+	ret <8 x i16> %tmp4
+}
+
+define <4 x i32> @vld3Qi32(i32* %A) nounwind {
+;CHECK: vld3Qi32:
+;CHECK: vld3.32
+;CHECK: vld3.32
+	%tmp1 = call %struct.__neon_int32x4x3_t @llvm.arm.neon.vld3.v4i32(i32* %A)
+        %tmp2 = extractvalue %struct.__neon_int32x4x3_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int32x4x3_t %tmp1, 2
+        %tmp4 = add <4 x i32> %tmp2, %tmp3
+	ret <4 x i32> %tmp4
+}
+
+define <4 x float> @vld3Qf(float* %A) nounwind {
+;CHECK: vld3Qf:
+;CHECK: vld3.32
+;CHECK: vld3.32
+	%tmp1 = call %struct.__neon_float32x4x3_t @llvm.arm.neon.vld3.v4f32(float* %A)
+        %tmp2 = extractvalue %struct.__neon_float32x4x3_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_float32x4x3_t %tmp1, 2
+        %tmp4 = add <4 x float> %tmp2, %tmp3
+	ret <4 x float> %tmp4
+}
+
+declare %struct.__neon_int8x8x3_t @llvm.arm.neon.vld3.v8i8(i8*) nounwind readonly
+declare %struct.__neon_int16x4x3_t @llvm.arm.neon.vld3.v4i16(i8*) nounwind readonly
+declare %struct.__neon_int32x2x3_t @llvm.arm.neon.vld3.v2i32(i8*) nounwind readonly
+declare %struct.__neon_float32x2x3_t @llvm.arm.neon.vld3.v2f32(i8*) nounwind readonly
+declare %struct.__neon_int64x1x3_t @llvm.arm.neon.vld3.v1i64(i8*) nounwind readonly
+
+declare %struct.__neon_int8x16x3_t @llvm.arm.neon.vld3.v16i8(i8*) nounwind readonly
+declare %struct.__neon_int16x8x3_t @llvm.arm.neon.vld3.v8i16(i8*) nounwind readonly
+declare %struct.__neon_int32x4x3_t @llvm.arm.neon.vld3.v4i32(i8*) nounwind readonly
+declare %struct.__neon_float32x4x3_t @llvm.arm.neon.vld3.v4f32(i8*) nounwind readonly
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vld4.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vld4.ll
index 4d59a88..0624f29 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vld4.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vld4.ll
@@ -1,16 +1,22 @@
 ; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
-%struct.__builtin_neon_v8qi4 = type { <8 x i8>,  <8 x i8>,  <8 x i8>, <8 x i8> }
-%struct.__builtin_neon_v4hi4 = type { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> }
-%struct.__builtin_neon_v2si4 = type { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> }
-%struct.__builtin_neon_v2sf4 = type { <2 x float>, <2 x float>, <2 x float>, <2 x float> }
+%struct.__neon_int8x8x4_t = type { <8 x i8>,  <8 x i8>,  <8 x i8>, <8 x i8> }
+%struct.__neon_int16x4x4_t = type { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> }
+%struct.__neon_int32x2x4_t = type { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> }
+%struct.__neon_float32x2x4_t = type { <2 x float>, <2 x float>, <2 x float>, <2 x float> }
+%struct.__neon_int64x1x4_t = type { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> }
+
+%struct.__neon_int8x16x4_t = type { <16 x i8>,  <16 x i8>,  <16 x i8>, <16 x i8> }
+%struct.__neon_int16x8x4_t = type { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> }
+%struct.__neon_int32x4x4_t = type { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> }
+%struct.__neon_float32x4x4_t = type { <4 x float>, <4 x float>, <4 x float>, <4 x float> }
 
 define <8 x i8> @vld4i8(i8* %A) nounwind {
 ;CHECK: vld4i8:
 ;CHECK: vld4.8
-	%tmp1 = call %struct.__builtin_neon_v8qi4 @llvm.arm.neon.vld4.v8i8(i8* %A)
-        %tmp2 = extractvalue %struct.__builtin_neon_v8qi4 %tmp1, 0
-        %tmp3 = extractvalue %struct.__builtin_neon_v8qi4 %tmp1, 2
+	%tmp1 = call %struct.__neon_int8x8x4_t @llvm.arm.neon.vld4.v8i8(i8* %A)
+        %tmp2 = extractvalue %struct.__neon_int8x8x4_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int8x8x4_t %tmp1, 2
         %tmp4 = add <8 x i8> %tmp2, %tmp3
 	ret <8 x i8> %tmp4
 }
@@ -18,9 +24,9 @@ define <8 x i8> @vld4i8(i8* %A) nounwind {
 define <4 x i16> @vld4i16(i16* %A) nounwind {
 ;CHECK: vld4i16:
 ;CHECK: vld4.16
-	%tmp1 = call %struct.__builtin_neon_v4hi4 @llvm.arm.neon.vld4.v4i16(i16* %A)
-        %tmp2 = extractvalue %struct.__builtin_neon_v4hi4 %tmp1, 0
-        %tmp3 = extractvalue %struct.__builtin_neon_v4hi4 %tmp1, 2
+	%tmp1 = call %struct.__neon_int16x4x4_t @llvm.arm.neon.vld4.v4i16(i16* %A)
+        %tmp2 = extractvalue %struct.__neon_int16x4x4_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int16x4x4_t %tmp1, 2
         %tmp4 = add <4 x i16> %tmp2, %tmp3
 	ret <4 x i16> %tmp4
 }
@@ -28,9 +34,9 @@ define <4 x i16> @vld4i16(i16* %A) nounwind {
 define <2 x i32> @vld4i32(i32* %A) nounwind {
 ;CHECK: vld4i32:
 ;CHECK: vld4.32
-	%tmp1 = call %struct.__builtin_neon_v2si4 @llvm.arm.neon.vld4.v2i32(i32* %A)
-        %tmp2 = extractvalue %struct.__builtin_neon_v2si4 %tmp1, 0
-        %tmp3 = extractvalue %struct.__builtin_neon_v2si4 %tmp1, 2
+	%tmp1 = call %struct.__neon_int32x2x4_t @llvm.arm.neon.vld4.v2i32(i32* %A)
+        %tmp2 = extractvalue %struct.__neon_int32x2x4_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int32x2x4_t %tmp1, 2
         %tmp4 = add <2 x i32> %tmp2, %tmp3
 	ret <2 x i32> %tmp4
 }
@@ -38,14 +44,74 @@ define <2 x i32> @vld4i32(i32* %A) nounwind {
 define <2 x float> @vld4f(float* %A) nounwind {
 ;CHECK: vld4f:
 ;CHECK: vld4.32
-	%tmp1 = call %struct.__builtin_neon_v2sf4 @llvm.arm.neon.vld4.v2f32(float* %A)
-        %tmp2 = extractvalue %struct.__builtin_neon_v2sf4 %tmp1, 0
-        %tmp3 = extractvalue %struct.__builtin_neon_v2sf4 %tmp1, 2
+	%tmp1 = call %struct.__neon_float32x2x4_t @llvm.arm.neon.vld4.v2f32(float* %A)
+        %tmp2 = extractvalue %struct.__neon_float32x2x4_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_float32x2x4_t %tmp1, 2
         %tmp4 = add <2 x float> %tmp2, %tmp3
 	ret <2 x float> %tmp4
 }
 
-declare %struct.__builtin_neon_v8qi4 @llvm.arm.neon.vld4.v8i8(i8*) nounwind readonly
-declare %struct.__builtin_neon_v4hi4 @llvm.arm.neon.vld4.v4i16(i8*) nounwind readonly
-declare %struct.__builtin_neon_v2si4 @llvm.arm.neon.vld4.v2i32(i8*) nounwind readonly
-declare %struct.__builtin_neon_v2sf4 @llvm.arm.neon.vld4.v2f32(i8*) nounwind readonly
+define <1 x i64> @vld4i64(i64* %A) nounwind {
+;CHECK: vld4i64:
+;CHECK: vld1.64
+	%tmp1 = call %struct.__neon_int64x1x4_t @llvm.arm.neon.vld4.v1i64(i64* %A)
+        %tmp2 = extractvalue %struct.__neon_int64x1x4_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int64x1x4_t %tmp1, 2
+        %tmp4 = add <1 x i64> %tmp2, %tmp3
+	ret <1 x i64> %tmp4
+}
+
+define <16 x i8> @vld4Qi8(i8* %A) nounwind {
+;CHECK: vld4Qi8:
+;CHECK: vld4.8
+;CHECK: vld4.8
+	%tmp1 = call %struct.__neon_int8x16x4_t @llvm.arm.neon.vld4.v16i8(i8* %A)
+        %tmp2 = extractvalue %struct.__neon_int8x16x4_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int8x16x4_t %tmp1, 2
+        %tmp4 = add <16 x i8> %tmp2, %tmp3
+	ret <16 x i8> %tmp4
+}
+
+define <8 x i16> @vld4Qi16(i16* %A) nounwind {
+;CHECK: vld4Qi16:
+;CHECK: vld4.16
+;CHECK: vld4.16
+	%tmp1 = call %struct.__neon_int16x8x4_t @llvm.arm.neon.vld4.v8i16(i16* %A)
+        %tmp2 = extractvalue %struct.__neon_int16x8x4_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int16x8x4_t %tmp1, 2
+        %tmp4 = add <8 x i16> %tmp2, %tmp3
+	ret <8 x i16> %tmp4
+}
+
+define <4 x i32> @vld4Qi32(i32* %A) nounwind {
+;CHECK: vld4Qi32:
+;CHECK: vld4.32
+;CHECK: vld4.32
+	%tmp1 = call %struct.__neon_int32x4x4_t @llvm.arm.neon.vld4.v4i32(i32* %A)
+        %tmp2 = extractvalue %struct.__neon_int32x4x4_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_int32x4x4_t %tmp1, 2
+        %tmp4 = add <4 x i32> %tmp2, %tmp3
+	ret <4 x i32> %tmp4
+}
+
+define <4 x float> @vld4Qf(float* %A) nounwind {
+;CHECK: vld4Qf:
+;CHECK: vld4.32
+;CHECK: vld4.32
+	%tmp1 = call %struct.__neon_float32x4x4_t @llvm.arm.neon.vld4.v4f32(float* %A)
+        %tmp2 = extractvalue %struct.__neon_float32x4x4_t %tmp1, 0
+        %tmp3 = extractvalue %struct.__neon_float32x4x4_t %tmp1, 2
+        %tmp4 = add <4 x float> %tmp2, %tmp3
+	ret <4 x float> %tmp4
+}
+
+declare %struct.__neon_int8x8x4_t @llvm.arm.neon.vld4.v8i8(i8*) nounwind readonly
+declare %struct.__neon_int16x4x4_t @llvm.arm.neon.vld4.v4i16(i8*) nounwind readonly
+declare %struct.__neon_int32x2x4_t @llvm.arm.neon.vld4.v2i32(i8*) nounwind readonly
+declare %struct.__neon_float32x2x4_t @llvm.arm.neon.vld4.v2f32(i8*) nounwind readonly
+declare %struct.__neon_int64x1x4_t @llvm.arm.neon.vld4.v1i64(i8*) nounwind readonly
+
+declare %struct.__neon_int8x16x4_t @llvm.arm.neon.vld4.v16i8(i8*) nounwind readonly
+declare %struct.__neon_int16x8x4_t @llvm.arm.neon.vld4.v8i16(i8*) nounwind readonly
+declare %struct.__neon_int32x4x4_t @llvm.arm.neon.vld4.v4i32(i8*) nounwind readonly
+declare %struct.__neon_float32x4x4_t @llvm.arm.neon.vld4.v4f32(i8*) nounwind readonly
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vldlane.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vldlane.ll
index 01334a6..53881a3 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vldlane.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vldlane.ll
@@ -1,17 +1,21 @@
 ; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
-%struct.__builtin_neon_v8qi2 = type { <8 x i8>,  <8 x i8> }
-%struct.__builtin_neon_v4hi2 = type { <4 x i16>, <4 x i16> }
-%struct.__builtin_neon_v2si2 = type { <2 x i32>, <2 x i32> }
-%struct.__builtin_neon_v2sf2 = type { <2 x float>, <2 x float> }
+%struct.__neon_int8x8x2_t = type { <8 x i8>,  <8 x i8> }
+%struct.__neon_int16x4x2_t = type { <4 x i16>, <4 x i16> }
+%struct.__neon_int32x2x2_t = type { <2 x i32>, <2 x i32> }
+%struct.__neon_float32x2x2_t = type { <2 x float>, <2 x float> }
+
+%struct.__neon_int16x8x2_t = type { <8 x i16>, <8 x i16> }
+%struct.__neon_int32x4x2_t = type { <4 x i32>, <4 x i32> }
+%struct.__neon_float32x4x2_t = type { <4 x float>, <4 x float> }
 
 define <8 x i8> @vld2lanei8(i8* %A, <8 x i8>* %B) nounwind {
 ;CHECK: vld2lanei8:
 ;CHECK: vld2.8
 	%tmp1 = load <8 x i8>* %B
-	%tmp2 = call %struct.__builtin_neon_v8qi2 @llvm.arm.neon.vld2lane.v8i8(i8* %A, <8 x i8> %tmp1, <8 x i8> %tmp1, i32 1)
-        %tmp3 = extractvalue %struct.__builtin_neon_v8qi2 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v8qi2 %tmp2, 1
+	%tmp2 = call %struct.__neon_int8x8x2_t @llvm.arm.neon.vld2lane.v8i8(i8* %A, <8 x i8> %tmp1, <8 x i8> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_int8x8x2_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int8x8x2_t %tmp2, 1
         %tmp5 = add <8 x i8> %tmp3, %tmp4
 	ret <8 x i8> %tmp5
 }
@@ -20,9 +24,9 @@ define <4 x i16> @vld2lanei16(i16* %A, <4 x i16>* %B) nounwind {
 ;CHECK: vld2lanei16:
 ;CHECK: vld2.16
 	%tmp1 = load <4 x i16>* %B
-	%tmp2 = call %struct.__builtin_neon_v4hi2 @llvm.arm.neon.vld2lane.v4i16(i16* %A, <4 x i16> %tmp1, <4 x i16> %tmp1, i32 1)
-        %tmp3 = extractvalue %struct.__builtin_neon_v4hi2 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v4hi2 %tmp2, 1
+	%tmp2 = call %struct.__neon_int16x4x2_t @llvm.arm.neon.vld2lane.v4i16(i16* %A, <4 x i16> %tmp1, <4 x i16> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_int16x4x2_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int16x4x2_t %tmp2, 1
         %tmp5 = add <4 x i16> %tmp3, %tmp4
 	ret <4 x i16> %tmp5
 }
@@ -31,9 +35,9 @@ define <2 x i32> @vld2lanei32(i32* %A, <2 x i32>* %B) nounwind {
 ;CHECK: vld2lanei32:
 ;CHECK: vld2.32
 	%tmp1 = load <2 x i32>* %B
-	%tmp2 = call %struct.__builtin_neon_v2si2 @llvm.arm.neon.vld2lane.v2i32(i32* %A, <2 x i32> %tmp1, <2 x i32> %tmp1, i32 1)
-        %tmp3 = extractvalue %struct.__builtin_neon_v2si2 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v2si2 %tmp2, 1
+	%tmp2 = call %struct.__neon_int32x2x2_t @llvm.arm.neon.vld2lane.v2i32(i32* %A, <2 x i32> %tmp1, <2 x i32> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_int32x2x2_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int32x2x2_t %tmp2, 1
         %tmp5 = add <2 x i32> %tmp3, %tmp4
 	ret <2 x i32> %tmp5
 }
@@ -42,31 +46,72 @@ define <2 x float> @vld2lanef(float* %A, <2 x float>* %B) nounwind {
 ;CHECK: vld2lanef:
 ;CHECK: vld2.32
 	%tmp1 = load <2 x float>* %B
-	%tmp2 = call %struct.__builtin_neon_v2sf2 @llvm.arm.neon.vld2lane.v2f32(float* %A, <2 x float> %tmp1, <2 x float> %tmp1, i32 1)
-        %tmp3 = extractvalue %struct.__builtin_neon_v2sf2 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v2sf2 %tmp2, 1
+	%tmp2 = call %struct.__neon_float32x2x2_t @llvm.arm.neon.vld2lane.v2f32(float* %A, <2 x float> %tmp1, <2 x float> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_float32x2x2_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_float32x2x2_t %tmp2, 1
         %tmp5 = add <2 x float> %tmp3, %tmp4
 	ret <2 x float> %tmp5
 }
 
-declare %struct.__builtin_neon_v8qi2 @llvm.arm.neon.vld2lane.v8i8(i8*, <8 x i8>, <8 x i8>, i32) nounwind readonly
-declare %struct.__builtin_neon_v4hi2 @llvm.arm.neon.vld2lane.v4i16(i8*, <4 x i16>, <4 x i16>, i32) nounwind readonly
-declare %struct.__builtin_neon_v2si2 @llvm.arm.neon.vld2lane.v2i32(i8*, <2 x i32>, <2 x i32>, i32) nounwind readonly
-declare %struct.__builtin_neon_v2sf2 @llvm.arm.neon.vld2lane.v2f32(i8*, <2 x float>, <2 x float>, i32) nounwind readonly
+define <8 x i16> @vld2laneQi16(i16* %A, <8 x i16>* %B) nounwind {
+;CHECK: vld2laneQi16:
+;CHECK: vld2.16
+	%tmp1 = load <8 x i16>* %B
+	%tmp2 = call %struct.__neon_int16x8x2_t @llvm.arm.neon.vld2lane.v8i16(i16* %A, <8 x i16> %tmp1, <8 x i16> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_int16x8x2_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int16x8x2_t %tmp2, 1
+        %tmp5 = add <8 x i16> %tmp3, %tmp4
+	ret <8 x i16> %tmp5
+}
+
+define <4 x i32> @vld2laneQi32(i32* %A, <4 x i32>* %B) nounwind {
+;CHECK: vld2laneQi32:
+;CHECK: vld2.32
+	%tmp1 = load <4 x i32>* %B
+	%tmp2 = call %struct.__neon_int32x4x2_t @llvm.arm.neon.vld2lane.v4i32(i32* %A, <4 x i32> %tmp1, <4 x i32> %tmp1, i32 2)
+        %tmp3 = extractvalue %struct.__neon_int32x4x2_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int32x4x2_t %tmp2, 1
+        %tmp5 = add <4 x i32> %tmp3, %tmp4
+	ret <4 x i32> %tmp5
+}
+
+define <4 x float> @vld2laneQf(float* %A, <4 x float>* %B) nounwind {
+;CHECK: vld2laneQf:
+;CHECK: vld2.32
+	%tmp1 = load <4 x float>* %B
+	%tmp2 = call %struct.__neon_float32x4x2_t @llvm.arm.neon.vld2lane.v4f32(float* %A, <4 x float> %tmp1, <4 x float> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_float32x4x2_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_float32x4x2_t %tmp2, 1
+        %tmp5 = add <4 x float> %tmp3, %tmp4
+	ret <4 x float> %tmp5
+}
 
-%struct.__builtin_neon_v8qi3 = type { <8 x i8>,  <8 x i8>,  <8 x i8> }
-%struct.__builtin_neon_v4hi3 = type { <4 x i16>, <4 x i16>, <4 x i16> }
-%struct.__builtin_neon_v2si3 = type { <2 x i32>, <2 x i32>, <2 x i32> }
-%struct.__builtin_neon_v2sf3 = type { <2 x float>, <2 x float>, <2 x float> }
+declare %struct.__neon_int8x8x2_t @llvm.arm.neon.vld2lane.v8i8(i8*, <8 x i8>, <8 x i8>, i32) nounwind readonly
+declare %struct.__neon_int16x4x2_t @llvm.arm.neon.vld2lane.v4i16(i8*, <4 x i16>, <4 x i16>, i32) nounwind readonly
+declare %struct.__neon_int32x2x2_t @llvm.arm.neon.vld2lane.v2i32(i8*, <2 x i32>, <2 x i32>, i32) nounwind readonly
+declare %struct.__neon_float32x2x2_t @llvm.arm.neon.vld2lane.v2f32(i8*, <2 x float>, <2 x float>, i32) nounwind readonly
+
+declare %struct.__neon_int16x8x2_t @llvm.arm.neon.vld2lane.v8i16(i8*, <8 x i16>, <8 x i16>, i32) nounwind readonly
+declare %struct.__neon_int32x4x2_t @llvm.arm.neon.vld2lane.v4i32(i8*, <4 x i32>, <4 x i32>, i32) nounwind readonly
+declare %struct.__neon_float32x4x2_t @llvm.arm.neon.vld2lane.v4f32(i8*, <4 x float>, <4 x float>, i32) nounwind readonly
+
+%struct.__neon_int8x8x3_t = type { <8 x i8>,  <8 x i8>,  <8 x i8> }
+%struct.__neon_int16x4x3_t = type { <4 x i16>, <4 x i16>, <4 x i16> }
+%struct.__neon_int32x2x3_t = type { <2 x i32>, <2 x i32>, <2 x i32> }
+%struct.__neon_float32x2x3_t = type { <2 x float>, <2 x float>, <2 x float> }
+
+%struct.__neon_int16x8x3_t = type { <8 x i16>, <8 x i16>, <8 x i16> }
+%struct.__neon_int32x4x3_t = type { <4 x i32>, <4 x i32>, <4 x i32> }
+%struct.__neon_float32x4x3_t = type { <4 x float>, <4 x float>, <4 x float> }
 
 define <8 x i8> @vld3lanei8(i8* %A, <8 x i8>* %B) nounwind {
 ;CHECK: vld3lanei8:
 ;CHECK: vld3.8
 	%tmp1 = load <8 x i8>* %B
-	%tmp2 = call %struct.__builtin_neon_v8qi3 @llvm.arm.neon.vld3lane.v8i8(i8* %A, <8 x i8> %tmp1, <8 x i8> %tmp1, <8 x i8> %tmp1, i32 1)
-        %tmp3 = extractvalue %struct.__builtin_neon_v8qi3 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v8qi3 %tmp2, 1
-        %tmp5 = extractvalue %struct.__builtin_neon_v8qi3 %tmp2, 2
+	%tmp2 = call %struct.__neon_int8x8x3_t @llvm.arm.neon.vld3lane.v8i8(i8* %A, <8 x i8> %tmp1, <8 x i8> %tmp1, <8 x i8> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_int8x8x3_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int8x8x3_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_int8x8x3_t %tmp2, 2
         %tmp6 = add <8 x i8> %tmp3, %tmp4
         %tmp7 = add <8 x i8> %tmp5, %tmp6
 	ret <8 x i8> %tmp7
@@ -76,10 +121,10 @@ define <4 x i16> @vld3lanei16(i16* %A, <4 x i16>* %B) nounwind {
 ;CHECK: vld3lanei16:
 ;CHECK: vld3.16
 	%tmp1 = load <4 x i16>* %B
-	%tmp2 = call %struct.__builtin_neon_v4hi3 @llvm.arm.neon.vld3lane.v4i16(i16* %A, <4 x i16> %tmp1, <4 x i16> %tmp1, <4 x i16> %tmp1, i32 1)
-        %tmp3 = extractvalue %struct.__builtin_neon_v4hi3 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v4hi3 %tmp2, 1
-        %tmp5 = extractvalue %struct.__builtin_neon_v4hi3 %tmp2, 2
+	%tmp2 = call %struct.__neon_int16x4x3_t @llvm.arm.neon.vld3lane.v4i16(i16* %A, <4 x i16> %tmp1, <4 x i16> %tmp1, <4 x i16> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_int16x4x3_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int16x4x3_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_int16x4x3_t %tmp2, 2
         %tmp6 = add <4 x i16> %tmp3, %tmp4
         %tmp7 = add <4 x i16> %tmp5, %tmp6
 	ret <4 x i16> %tmp7
@@ -89,10 +134,10 @@ define <2 x i32> @vld3lanei32(i32* %A, <2 x i32>* %B) nounwind {
 ;CHECK: vld3lanei32:
 ;CHECK: vld3.32
 	%tmp1 = load <2 x i32>* %B
-	%tmp2 = call %struct.__builtin_neon_v2si3 @llvm.arm.neon.vld3lane.v2i32(i32* %A, <2 x i32> %tmp1, <2 x i32> %tmp1, <2 x i32> %tmp1, i32 1)
-        %tmp3 = extractvalue %struct.__builtin_neon_v2si3 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v2si3 %tmp2, 1
-        %tmp5 = extractvalue %struct.__builtin_neon_v2si3 %tmp2, 2
+	%tmp2 = call %struct.__neon_int32x2x3_t @llvm.arm.neon.vld3lane.v2i32(i32* %A, <2 x i32> %tmp1, <2 x i32> %tmp1, <2 x i32> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_int32x2x3_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int32x2x3_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_int32x2x3_t %tmp2, 2
         %tmp6 = add <2 x i32> %tmp3, %tmp4
         %tmp7 = add <2 x i32> %tmp5, %tmp6
 	ret <2 x i32> %tmp7
@@ -102,34 +147,81 @@ define <2 x float> @vld3lanef(float* %A, <2 x float>* %B) nounwind {
 ;CHECK: vld3lanef:
 ;CHECK: vld3.32
 	%tmp1 = load <2 x float>* %B
-	%tmp2 = call %struct.__builtin_neon_v2sf3 @llvm.arm.neon.vld3lane.v2f32(float* %A, <2 x float> %tmp1, <2 x float> %tmp1, <2 x float> %tmp1, i32 1)
-        %tmp3 = extractvalue %struct.__builtin_neon_v2sf3 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v2sf3 %tmp2, 1
-        %tmp5 = extractvalue %struct.__builtin_neon_v2sf3 %tmp2, 2
+	%tmp2 = call %struct.__neon_float32x2x3_t @llvm.arm.neon.vld3lane.v2f32(float* %A, <2 x float> %tmp1, <2 x float> %tmp1, <2 x float> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_float32x2x3_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_float32x2x3_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_float32x2x3_t %tmp2, 2
         %tmp6 = add <2 x float> %tmp3, %tmp4
         %tmp7 = add <2 x float> %tmp5, %tmp6
 	ret <2 x float> %tmp7
 }
 
-declare %struct.__builtin_neon_v8qi3 @llvm.arm.neon.vld3lane.v8i8(i8*, <8 x i8>, <8 x i8>, <8 x i8>, i32) nounwind readonly
-declare %struct.__builtin_neon_v4hi3 @llvm.arm.neon.vld3lane.v4i16(i8*, <4 x i16>, <4 x i16>, <4 x i16>, i32) nounwind readonly
-declare %struct.__builtin_neon_v2si3 @llvm.arm.neon.vld3lane.v2i32(i8*, <2 x i32>, <2 x i32>, <2 x i32>, i32) nounwind readonly
-declare %struct.__builtin_neon_v2sf3 @llvm.arm.neon.vld3lane.v2f32(i8*, <2 x float>, <2 x float>, <2 x float>, i32) nounwind readonly
+define <8 x i16> @vld3laneQi16(i16* %A, <8 x i16>* %B) nounwind {
+;CHECK: vld3laneQi16:
+;CHECK: vld3.16
+	%tmp1 = load <8 x i16>* %B
+	%tmp2 = call %struct.__neon_int16x8x3_t @llvm.arm.neon.vld3lane.v8i16(i16* %A, <8 x i16> %tmp1, <8 x i16> %tmp1, <8 x i16> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_int16x8x3_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int16x8x3_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_int16x8x3_t %tmp2, 2
+        %tmp6 = add <8 x i16> %tmp3, %tmp4
+        %tmp7 = add <8 x i16> %tmp5, %tmp6
+	ret <8 x i16> %tmp7
+}
 
-%struct.__builtin_neon_v8qi4 = type { <8 x i8>,  <8 x i8>,  <8 x i8>,  <8 x i8> }
-%struct.__builtin_neon_v4hi4 = type { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> }
-%struct.__builtin_neon_v2si4 = type { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> }
-%struct.__builtin_neon_v2sf4 = type { <2 x float>, <2 x float>, <2 x float>, <2 x float> }
+define <4 x i32> @vld3laneQi32(i32* %A, <4 x i32>* %B) nounwind {
+;CHECK: vld3laneQi32:
+;CHECK: vld3.32
+	%tmp1 = load <4 x i32>* %B
+	%tmp2 = call %struct.__neon_int32x4x3_t @llvm.arm.neon.vld3lane.v4i32(i32* %A, <4 x i32> %tmp1, <4 x i32> %tmp1, <4 x i32> %tmp1, i32 3)
+        %tmp3 = extractvalue %struct.__neon_int32x4x3_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int32x4x3_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_int32x4x3_t %tmp2, 2
+        %tmp6 = add <4 x i32> %tmp3, %tmp4
+        %tmp7 = add <4 x i32> %tmp5, %tmp6
+	ret <4 x i32> %tmp7
+}
+
+define <4 x float> @vld3laneQf(float* %A, <4 x float>* %B) nounwind {
+;CHECK: vld3laneQf:
+;CHECK: vld3.32
+	%tmp1 = load <4 x float>* %B
+	%tmp2 = call %struct.__neon_float32x4x3_t @llvm.arm.neon.vld3lane.v4f32(float* %A, <4 x float> %tmp1, <4 x float> %tmp1, <4 x float> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_float32x4x3_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_float32x4x3_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_float32x4x3_t %tmp2, 2
+        %tmp6 = add <4 x float> %tmp3, %tmp4
+        %tmp7 = add <4 x float> %tmp5, %tmp6
+	ret <4 x float> %tmp7
+}
+
+declare %struct.__neon_int8x8x3_t @llvm.arm.neon.vld3lane.v8i8(i8*, <8 x i8>, <8 x i8>, <8 x i8>, i32) nounwind readonly
+declare %struct.__neon_int16x4x3_t @llvm.arm.neon.vld3lane.v4i16(i8*, <4 x i16>, <4 x i16>, <4 x i16>, i32) nounwind readonly
+declare %struct.__neon_int32x2x3_t @llvm.arm.neon.vld3lane.v2i32(i8*, <2 x i32>, <2 x i32>, <2 x i32>, i32) nounwind readonly
+declare %struct.__neon_float32x2x3_t @llvm.arm.neon.vld3lane.v2f32(i8*, <2 x float>, <2 x float>, <2 x float>, i32) nounwind readonly
+
+declare %struct.__neon_int16x8x3_t @llvm.arm.neon.vld3lane.v8i16(i8*, <8 x i16>, <8 x i16>, <8 x i16>, i32) nounwind readonly
+declare %struct.__neon_int32x4x3_t @llvm.arm.neon.vld3lane.v4i32(i8*, <4 x i32>, <4 x i32>, <4 x i32>, i32) nounwind readonly
+declare %struct.__neon_float32x4x3_t @llvm.arm.neon.vld3lane.v4f32(i8*, <4 x float>, <4 x float>, <4 x float>, i32) nounwind readonly
+
+%struct.__neon_int8x8x4_t = type { <8 x i8>,  <8 x i8>,  <8 x i8>,  <8 x i8> }
+%struct.__neon_int16x4x4_t = type { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> }
+%struct.__neon_int32x2x4_t = type { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> }
+%struct.__neon_float32x2x4_t = type { <2 x float>, <2 x float>, <2 x float>, <2 x float> }
+
+%struct.__neon_int16x8x4_t = type { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> }
+%struct.__neon_int32x4x4_t = type { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> }
+%struct.__neon_float32x4x4_t = type { <4 x float>, <4 x float>, <4 x float>, <4 x float> }
 
 define <8 x i8> @vld4lanei8(i8* %A, <8 x i8>* %B) nounwind {
 ;CHECK: vld4lanei8:
 ;CHECK: vld4.8
 	%tmp1 = load <8 x i8>* %B
-	%tmp2 = call %struct.__builtin_neon_v8qi4 @llvm.arm.neon.vld4lane.v8i8(i8* %A, <8 x i8> %tmp1, <8 x i8> %tmp1, <8 x i8> %tmp1, <8 x i8> %tmp1, i32 1)
-        %tmp3 = extractvalue %struct.__builtin_neon_v8qi4 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v8qi4 %tmp2, 1
-        %tmp5 = extractvalue %struct.__builtin_neon_v8qi4 %tmp2, 2
-        %tmp6 = extractvalue %struct.__builtin_neon_v8qi4 %tmp2, 3
+	%tmp2 = call %struct.__neon_int8x8x4_t @llvm.arm.neon.vld4lane.v8i8(i8* %A, <8 x i8> %tmp1, <8 x i8> %tmp1, <8 x i8> %tmp1, <8 x i8> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_int8x8x4_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int8x8x4_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_int8x8x4_t %tmp2, 2
+        %tmp6 = extractvalue %struct.__neon_int8x8x4_t %tmp2, 3
         %tmp7 = add <8 x i8> %tmp3, %tmp4
         %tmp8 = add <8 x i8> %tmp5, %tmp6
         %tmp9 = add <8 x i8> %tmp7, %tmp8
@@ -140,11 +232,11 @@ define <4 x i16> @vld4lanei16(i16* %A, <4 x i16>* %B) nounwind {
 ;CHECK: vld4lanei16:
 ;CHECK: vld4.16
 	%tmp1 = load <4 x i16>* %B
-	%tmp2 = call %struct.__builtin_neon_v4hi4 @llvm.arm.neon.vld4lane.v4i16(i16* %A, <4 x i16> %tmp1, <4 x i16> %tmp1, <4 x i16> %tmp1, <4 x i16> %tmp1, i32 1)
-        %tmp3 = extractvalue %struct.__builtin_neon_v4hi4 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v4hi4 %tmp2, 1
-        %tmp5 = extractvalue %struct.__builtin_neon_v4hi4 %tmp2, 2
-        %tmp6 = extractvalue %struct.__builtin_neon_v4hi4 %tmp2, 3
+	%tmp2 = call %struct.__neon_int16x4x4_t @llvm.arm.neon.vld4lane.v4i16(i16* %A, <4 x i16> %tmp1, <4 x i16> %tmp1, <4 x i16> %tmp1, <4 x i16> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_int16x4x4_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int16x4x4_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_int16x4x4_t %tmp2, 2
+        %tmp6 = extractvalue %struct.__neon_int16x4x4_t %tmp2, 3
         %tmp7 = add <4 x i16> %tmp3, %tmp4
         %tmp8 = add <4 x i16> %tmp5, %tmp6
         %tmp9 = add <4 x i16> %tmp7, %tmp8
@@ -155,11 +247,11 @@ define <2 x i32> @vld4lanei32(i32* %A, <2 x i32>* %B) nounwind {
 ;CHECK: vld4lanei32:
 ;CHECK: vld4.32
 	%tmp1 = load <2 x i32>* %B
-	%tmp2 = call %struct.__builtin_neon_v2si4 @llvm.arm.neon.vld4lane.v2i32(i32* %A, <2 x i32> %tmp1, <2 x i32> %tmp1, <2 x i32> %tmp1, <2 x i32> %tmp1, i32 1)
-        %tmp3 = extractvalue %struct.__builtin_neon_v2si4 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v2si4 %tmp2, 1
-        %tmp5 = extractvalue %struct.__builtin_neon_v2si4 %tmp2, 2
-        %tmp6 = extractvalue %struct.__builtin_neon_v2si4 %tmp2, 3
+	%tmp2 = call %struct.__neon_int32x2x4_t @llvm.arm.neon.vld4lane.v2i32(i32* %A, <2 x i32> %tmp1, <2 x i32> %tmp1, <2 x i32> %tmp1, <2 x i32> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_int32x2x4_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int32x2x4_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_int32x2x4_t %tmp2, 2
+        %tmp6 = extractvalue %struct.__neon_int32x2x4_t %tmp2, 3
         %tmp7 = add <2 x i32> %tmp3, %tmp4
         %tmp8 = add <2 x i32> %tmp5, %tmp6
         %tmp9 = add <2 x i32> %tmp7, %tmp8
@@ -170,18 +262,67 @@ define <2 x float> @vld4lanef(float* %A, <2 x float>* %B) nounwind {
 ;CHECK: vld4lanef:
 ;CHECK: vld4.32
 	%tmp1 = load <2 x float>* %B
-	%tmp2 = call %struct.__builtin_neon_v2sf4 @llvm.arm.neon.vld4lane.v2f32(float* %A, <2 x float> %tmp1, <2 x float> %tmp1, <2 x float> %tmp1, <2 x float> %tmp1, i32 1)
-        %tmp3 = extractvalue %struct.__builtin_neon_v2sf4 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v2sf4 %tmp2, 1
-        %tmp5 = extractvalue %struct.__builtin_neon_v2sf4 %tmp2, 2
-        %tmp6 = extractvalue %struct.__builtin_neon_v2sf4 %tmp2, 3
+	%tmp2 = call %struct.__neon_float32x2x4_t @llvm.arm.neon.vld4lane.v2f32(float* %A, <2 x float> %tmp1, <2 x float> %tmp1, <2 x float> %tmp1, <2 x float> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_float32x2x4_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_float32x2x4_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_float32x2x4_t %tmp2, 2
+        %tmp6 = extractvalue %struct.__neon_float32x2x4_t %tmp2, 3
         %tmp7 = add <2 x float> %tmp3, %tmp4
         %tmp8 = add <2 x float> %tmp5, %tmp6
         %tmp9 = add <2 x float> %tmp7, %tmp8
 	ret <2 x float> %tmp9
 }
 
-declare %struct.__builtin_neon_v8qi4 @llvm.arm.neon.vld4lane.v8i8(i8*, <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8>, i32) nounwind readonly
-declare %struct.__builtin_neon_v4hi4 @llvm.arm.neon.vld4lane.v4i16(i8*, <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16>, i32) nounwind readonly
-declare %struct.__builtin_neon_v2si4 @llvm.arm.neon.vld4lane.v2i32(i8*, <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32>, i32) nounwind readonly
-declare %struct.__builtin_neon_v2sf4 @llvm.arm.neon.vld4lane.v2f32(i8*, <2 x float>, <2 x float>, <2 x float>, <2 x float>, i32) nounwind readonly
+define <8 x i16> @vld4laneQi16(i16* %A, <8 x i16>* %B) nounwind {
+;CHECK: vld4laneQi16:
+;CHECK: vld4.16
+	%tmp1 = load <8 x i16>* %B
+	%tmp2 = call %struct.__neon_int16x8x4_t @llvm.arm.neon.vld4lane.v8i16(i16* %A, <8 x i16> %tmp1, <8 x i16> %tmp1, <8 x i16> %tmp1, <8 x i16> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_int16x8x4_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int16x8x4_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_int16x8x4_t %tmp2, 2
+        %tmp6 = extractvalue %struct.__neon_int16x8x4_t %tmp2, 3
+        %tmp7 = add <8 x i16> %tmp3, %tmp4
+        %tmp8 = add <8 x i16> %tmp5, %tmp6
+        %tmp9 = add <8 x i16> %tmp7, %tmp8
+	ret <8 x i16> %tmp9
+}
+
+define <4 x i32> @vld4laneQi32(i32* %A, <4 x i32>* %B) nounwind {
+;CHECK: vld4laneQi32:
+;CHECK: vld4.32
+	%tmp1 = load <4 x i32>* %B
+	%tmp2 = call %struct.__neon_int32x4x4_t @llvm.arm.neon.vld4lane.v4i32(i32* %A, <4 x i32> %tmp1, <4 x i32> %tmp1, <4 x i32> %tmp1, <4 x i32> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_int32x4x4_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int32x4x4_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_int32x4x4_t %tmp2, 2
+        %tmp6 = extractvalue %struct.__neon_int32x4x4_t %tmp2, 3
+        %tmp7 = add <4 x i32> %tmp3, %tmp4
+        %tmp8 = add <4 x i32> %tmp5, %tmp6
+        %tmp9 = add <4 x i32> %tmp7, %tmp8
+	ret <4 x i32> %tmp9
+}
+
+define <4 x float> @vld4laneQf(float* %A, <4 x float>* %B) nounwind {
+;CHECK: vld4laneQf:
+;CHECK: vld4.32
+	%tmp1 = load <4 x float>* %B
+	%tmp2 = call %struct.__neon_float32x4x4_t @llvm.arm.neon.vld4lane.v4f32(float* %A, <4 x float> %tmp1, <4 x float> %tmp1, <4 x float> %tmp1, <4 x float> %tmp1, i32 1)
+        %tmp3 = extractvalue %struct.__neon_float32x4x4_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_float32x4x4_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_float32x4x4_t %tmp2, 2
+        %tmp6 = extractvalue %struct.__neon_float32x4x4_t %tmp2, 3
+        %tmp7 = add <4 x float> %tmp3, %tmp4
+        %tmp8 = add <4 x float> %tmp5, %tmp6
+        %tmp9 = add <4 x float> %tmp7, %tmp8
+	ret <4 x float> %tmp9
+}
+
+declare %struct.__neon_int8x8x4_t @llvm.arm.neon.vld4lane.v8i8(i8*, <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8>, i32) nounwind readonly
+declare %struct.__neon_int16x4x4_t @llvm.arm.neon.vld4lane.v4i16(i8*, <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16>, i32) nounwind readonly
+declare %struct.__neon_int32x2x4_t @llvm.arm.neon.vld4lane.v2i32(i8*, <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32>, i32) nounwind readonly
+declare %struct.__neon_float32x2x4_t @llvm.arm.neon.vld4lane.v2f32(i8*, <2 x float>, <2 x float>, <2 x float>, <2 x float>, i32) nounwind readonly
+
+declare %struct.__neon_int16x8x4_t @llvm.arm.neon.vld4lane.v8i16(i8*, <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16>, i32) nounwind readonly
+declare %struct.__neon_int32x4x4_t @llvm.arm.neon.vld4lane.v4i32(i8*, <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32>, i32) nounwind readonly
+declare %struct.__neon_float32x4x4_t @llvm.arm.neon.vld4lane.v4f32(i8*, <4 x float>, <4 x float>, <4 x float>, <4 x float>, i32) nounwind readonly
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmax.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmax.ll
deleted file mode 100644
index 85ff310..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmax.ll
+++ /dev/null
@@ -1,126 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vmax\\.s8} %t | count 2
-; RUN: grep {vmax\\.s16} %t | count 2
-; RUN: grep {vmax\\.s32} %t | count 2
-; RUN: grep {vmax\\.u8} %t | count 2
-; RUN: grep {vmax\\.u16} %t | count 2
-; RUN: grep {vmax\\.u32} %t | count 2
-; RUN: grep {vmax\\.f32} %t | count 2
-
-define <8 x i8> @vmaxs8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vmaxs.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vmaxs16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vmaxs.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vmaxs32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vmaxs.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <8 x i8> @vmaxu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vmaxu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vmaxu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vmaxu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vmaxu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vmaxu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <2 x float> @vmaxf32(<2 x float>* %A, <2 x float>* %B) nounwind {
-	%tmp1 = load <2 x float>* %A
-	%tmp2 = load <2 x float>* %B
-	%tmp3 = call <2 x float> @llvm.arm.neon.vmaxs.v2f32(<2 x float> %tmp1, <2 x float> %tmp2)
-	ret <2 x float> %tmp3
-}
-
-define <16 x i8> @vmaxQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = call <16 x i8> @llvm.arm.neon.vmaxs.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
-	ret <16 x i8> %tmp3
-}
-
-define <8 x i16> @vmaxQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vmaxs.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vmaxQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vmaxs.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <16 x i8> @vmaxQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = call <16 x i8> @llvm.arm.neon.vmaxu.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
-	ret <16 x i8> %tmp3
-}
-
-define <8 x i16> @vmaxQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vmaxu.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vmaxQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vmaxu.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <4 x float> @vmaxQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
-	%tmp1 = load <4 x float>* %A
-	%tmp2 = load <4 x float>* %B
-	%tmp3 = call <4 x float> @llvm.arm.neon.vmaxs.v4f32(<4 x float> %tmp1, <4 x float> %tmp2)
-	ret <4 x float> %tmp3
-}
-
-declare <8 x i8>  @llvm.arm.neon.vmaxs.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vmaxs.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vmaxs.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i8>  @llvm.arm.neon.vmaxu.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vmaxu.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vmaxu.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <2 x float> @llvm.arm.neon.vmaxs.v2f32(<2 x float>, <2 x float>) nounwind readnone
-
-declare <16 x i8> @llvm.arm.neon.vmaxs.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
-declare <8 x i16> @llvm.arm.neon.vmaxs.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vmaxs.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
-
-declare <16 x i8> @llvm.arm.neon.vmaxu.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
-declare <8 x i16> @llvm.arm.neon.vmaxu.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vmaxu.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
-
-declare <4 x float> @llvm.arm.neon.vmaxs.v4f32(<4 x float>, <4 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmin.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmin.ll
deleted file mode 100644
index ecde35a..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmin.ll
+++ /dev/null
@@ -1,126 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vmin\\.s8} %t | count 2
-; RUN: grep {vmin\\.s16} %t | count 2
-; RUN: grep {vmin\\.s32} %t | count 2
-; RUN: grep {vmin\\.u8} %t | count 2
-; RUN: grep {vmin\\.u16} %t | count 2
-; RUN: grep {vmin\\.u32} %t | count 2
-; RUN: grep {vmin\\.f32} %t | count 2
-
-define <8 x i8> @vmins8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vmins.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vmins16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vmins.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vmins32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vmins.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <8 x i8> @vminu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vminu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vminu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vminu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vminu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vminu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <2 x float> @vminf32(<2 x float>* %A, <2 x float>* %B) nounwind {
-	%tmp1 = load <2 x float>* %A
-	%tmp2 = load <2 x float>* %B
-	%tmp3 = call <2 x float> @llvm.arm.neon.vmins.v2f32(<2 x float> %tmp1, <2 x float> %tmp2)
-	ret <2 x float> %tmp3
-}
-
-define <16 x i8> @vminQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = call <16 x i8> @llvm.arm.neon.vmins.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
-	ret <16 x i8> %tmp3
-}
-
-define <8 x i16> @vminQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vmins.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vminQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vmins.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <16 x i8> @vminQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = call <16 x i8> @llvm.arm.neon.vminu.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
-	ret <16 x i8> %tmp3
-}
-
-define <8 x i16> @vminQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vminu.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vminQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vminu.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <4 x float> @vminQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
-	%tmp1 = load <4 x float>* %A
-	%tmp2 = load <4 x float>* %B
-	%tmp3 = call <4 x float> @llvm.arm.neon.vmins.v4f32(<4 x float> %tmp1, <4 x float> %tmp2)
-	ret <4 x float> %tmp3
-}
-
-declare <8 x i8>  @llvm.arm.neon.vmins.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vmins.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vmins.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i8>  @llvm.arm.neon.vminu.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vminu.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vminu.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <2 x float> @llvm.arm.neon.vmins.v2f32(<2 x float>, <2 x float>) nounwind readnone
-
-declare <16 x i8> @llvm.arm.neon.vmins.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
-declare <8 x i16> @llvm.arm.neon.vmins.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vmins.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
-
-declare <16 x i8> @llvm.arm.neon.vminu.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
-declare <8 x i16> @llvm.arm.neon.vminu.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vminu.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
-
-declare <4 x float> @llvm.arm.neon.vmins.v4f32(<4 x float>, <4 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vminmax.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vminmax.ll
new file mode 100644
index 0000000..e3527c1
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vminmax.ll
@@ -0,0 +1,293 @@
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
+
+define <8 x i8> @vmins8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vmins8:
+;CHECK: vmin.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vmins.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vmins16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vmins16:
+;CHECK: vmin.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vmins.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vmins32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vmins32:
+;CHECK: vmin.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vmins.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <8 x i8> @vminu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vminu8:
+;CHECK: vmin.u8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vminu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vminu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vminu16:
+;CHECK: vmin.u16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vminu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vminu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vminu32:
+;CHECK: vmin.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vminu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <2 x float> @vminf32(<2 x float>* %A, <2 x float>* %B) nounwind {
+;CHECK: vminf32:
+;CHECK: vmin.f32
+	%tmp1 = load <2 x float>* %A
+	%tmp2 = load <2 x float>* %B
+	%tmp3 = call <2 x float> @llvm.arm.neon.vmins.v2f32(<2 x float> %tmp1, <2 x float> %tmp2)
+	ret <2 x float> %tmp3
+}
+
+define <16 x i8> @vminQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vminQs8:
+;CHECK: vmin.s8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = call <16 x i8> @llvm.arm.neon.vmins.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
+	ret <16 x i8> %tmp3
+}
+
+define <8 x i16> @vminQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vminQs16:
+;CHECK: vmin.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vmins.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vminQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vminQs32:
+;CHECK: vmin.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vmins.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <16 x i8> @vminQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vminQu8:
+;CHECK: vmin.u8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = call <16 x i8> @llvm.arm.neon.vminu.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
+	ret <16 x i8> %tmp3
+}
+
+define <8 x i16> @vminQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vminQu16:
+;CHECK: vmin.u16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vminu.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vminQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vminQu32:
+;CHECK: vmin.u32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vminu.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <4 x float> @vminQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
+;CHECK: vminQf32:
+;CHECK: vmin.f32
+	%tmp1 = load <4 x float>* %A
+	%tmp2 = load <4 x float>* %B
+	%tmp3 = call <4 x float> @llvm.arm.neon.vmins.v4f32(<4 x float> %tmp1, <4 x float> %tmp2)
+	ret <4 x float> %tmp3
+}
+
+declare <8 x i8>  @llvm.arm.neon.vmins.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vmins.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vmins.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i8>  @llvm.arm.neon.vminu.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vminu.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vminu.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <2 x float> @llvm.arm.neon.vmins.v2f32(<2 x float>, <2 x float>) nounwind readnone
+
+declare <16 x i8> @llvm.arm.neon.vmins.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
+declare <8 x i16> @llvm.arm.neon.vmins.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vmins.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
+
+declare <16 x i8> @llvm.arm.neon.vminu.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
+declare <8 x i16> @llvm.arm.neon.vminu.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vminu.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
+
+declare <4 x float> @llvm.arm.neon.vmins.v4f32(<4 x float>, <4 x float>) nounwind readnone
+
+define <8 x i8> @vmaxs8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vmaxs8:
+;CHECK: vmax.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vmaxs.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vmaxs16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vmaxs16:
+;CHECK: vmax.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vmaxs.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vmaxs32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vmaxs32:
+;CHECK: vmax.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vmaxs.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <8 x i8> @vmaxu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vmaxu8:
+;CHECK: vmax.u8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vmaxu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vmaxu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vmaxu16:
+;CHECK: vmax.u16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vmaxu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vmaxu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vmaxu32:
+;CHECK: vmax.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vmaxu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <2 x float> @vmaxf32(<2 x float>* %A, <2 x float>* %B) nounwind {
+;CHECK: vmaxf32:
+;CHECK: vmax.f32
+	%tmp1 = load <2 x float>* %A
+	%tmp2 = load <2 x float>* %B
+	%tmp3 = call <2 x float> @llvm.arm.neon.vmaxs.v2f32(<2 x float> %tmp1, <2 x float> %tmp2)
+	ret <2 x float> %tmp3
+}
+
+define <16 x i8> @vmaxQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vmaxQs8:
+;CHECK: vmax.s8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = call <16 x i8> @llvm.arm.neon.vmaxs.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
+	ret <16 x i8> %tmp3
+}
+
+define <8 x i16> @vmaxQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vmaxQs16:
+;CHECK: vmax.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vmaxs.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vmaxQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vmaxQs32:
+;CHECK: vmax.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vmaxs.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <16 x i8> @vmaxQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vmaxQu8:
+;CHECK: vmax.u8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = call <16 x i8> @llvm.arm.neon.vmaxu.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
+	ret <16 x i8> %tmp3
+}
+
+define <8 x i16> @vmaxQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vmaxQu16:
+;CHECK: vmax.u16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vmaxu.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vmaxQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vmaxQu32:
+;CHECK: vmax.u32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vmaxu.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <4 x float> @vmaxQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
+;CHECK: vmaxQf32:
+;CHECK: vmax.f32
+	%tmp1 = load <4 x float>* %A
+	%tmp2 = load <4 x float>* %B
+	%tmp3 = call <4 x float> @llvm.arm.neon.vmaxs.v4f32(<4 x float> %tmp1, <4 x float> %tmp2)
+	ret <4 x float> %tmp3
+}
+
+declare <8 x i8>  @llvm.arm.neon.vmaxs.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vmaxs.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vmaxs.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i8>  @llvm.arm.neon.vmaxu.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vmaxu.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vmaxu.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <2 x float> @llvm.arm.neon.vmaxs.v2f32(<2 x float>, <2 x float>) nounwind readnone
+
+declare <16 x i8> @llvm.arm.neon.vmaxs.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
+declare <8 x i16> @llvm.arm.neon.vmaxs.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vmaxs.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
+
+declare <16 x i8> @llvm.arm.neon.vmaxu.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
+declare <8 x i16> @llvm.arm.neon.vmaxu.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vmaxu.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
+
+declare <4 x float> @llvm.arm.neon.vmaxs.v4f32(<4 x float>, <4 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmla.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmla.ll
index 3103d7f..8405218 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmla.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vmla.ll
@@ -1,10 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vmla\\.i8} %t | count 2
-; RUN: grep {vmla\\.i16} %t | count 2
-; RUN: grep {vmla\\.i32} %t | count 2
-; RUN: grep {vmla\\.f32} %t | count 2
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vmlai8(<8 x i8>* %A, <8 x i8>* %B, <8 x i8> * %C) nounwind {
+;CHECK: vmlai8:
+;CHECK: vmla.i8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = load <8 x i8>* %C
@@ -14,6 +12,8 @@ define <8 x i8> @vmlai8(<8 x i8>* %A, <8 x i8>* %B, <8 x i8> * %C) nounwind {
 }
 
 define <4 x i16> @vmlai16(<4 x i16>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
+;CHECK: vmlai16:
+;CHECK: vmla.i16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = load <4 x i16>* %C
@@ -23,6 +23,8 @@ define <4 x i16> @vmlai16(<4 x i16>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind
 }
 
 define <2 x i32> @vmlai32(<2 x i32>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
+;CHECK: vmlai32:
+;CHECK: vmla.i32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = load <2 x i32>* %C
@@ -32,6 +34,8 @@ define <2 x i32> @vmlai32(<2 x i32>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind
 }
 
 define <2 x float> @vmlaf32(<2 x float>* %A, <2 x float>* %B, <2 x float>* %C) nounwind {
+;CHECK: vmlaf32:
+;CHECK: vmla.f32
 	%tmp1 = load <2 x float>* %A
 	%tmp2 = load <2 x float>* %B
 	%tmp3 = load <2 x float>* %C
@@ -41,6 +45,8 @@ define <2 x float> @vmlaf32(<2 x float>* %A, <2 x float>* %B, <2 x float>* %C) n
 }
 
 define <16 x i8> @vmlaQi8(<16 x i8>* %A, <16 x i8>* %B, <16 x i8> * %C) nounwind {
+;CHECK: vmlaQi8:
+;CHECK: vmla.i8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = load <16 x i8>* %C
@@ -50,6 +56,8 @@ define <16 x i8> @vmlaQi8(<16 x i8>* %A, <16 x i8>* %B, <16 x i8> * %C) nounwind
 }
 
 define <8 x i16> @vmlaQi16(<8 x i16>* %A, <8 x i16>* %B, <8 x i16>* %C) nounwind {
+;CHECK: vmlaQi16:
+;CHECK: vmla.i16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = load <8 x i16>* %C
@@ -59,6 +67,8 @@ define <8 x i16> @vmlaQi16(<8 x i16>* %A, <8 x i16>* %B, <8 x i16>* %C) nounwind
 }
 
 define <4 x i32> @vmlaQi32(<4 x i32>* %A, <4 x i32>* %B, <4 x i32>* %C) nounwind {
+;CHECK: vmlaQi32:
+;CHECK: vmla.i32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = load <4 x i32>* %C
@@ -68,6 +78,8 @@ define <4 x i32> @vmlaQi32(<4 x i32>* %A, <4 x i32>* %B, <4 x i32>* %C) nounwind
 }
 
 define <4 x float> @vmlaQf32(<4 x float>* %A, <4 x float>* %B, <4 x float>* %C) nounwind {
+;CHECK: vmlaQf32:
+;CHECK: vmla.f32
 	%tmp1 = load <4 x float>* %A
 	%tmp2 = load <4 x float>* %B
 	%tmp3 = load <4 x float>* %C
@@ -75,3 +87,107 @@ define <4 x float> @vmlaQf32(<4 x float>* %A, <4 x float>* %B, <4 x float>* %C)
 	%tmp5 = add <4 x float> %tmp1, %tmp4
 	ret <4 x float> %tmp5
 }
+
+define <8 x i16> @vmlals8(<8 x i16>* %A, <8 x i8>* %B, <8 x i8>* %C) nounwind {
+;CHECK: vmlals8:
+;CHECK: vmlal.s8
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = load <8 x i8>* %C
+	%tmp4 = call <8 x i16> @llvm.arm.neon.vmlals.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2, <8 x i8> %tmp3)
+	ret <8 x i16> %tmp4
+}
+
+define <4 x i32> @vmlals16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
+;CHECK: vmlals16:
+;CHECK: vmlal.s16
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = load <4 x i16>* %C
+	%tmp4 = call <4 x i32> @llvm.arm.neon.vmlals.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
+	ret <4 x i32> %tmp4
+}
+
+define <2 x i64> @vmlals32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
+;CHECK: vmlals32:
+;CHECK: vmlal.s32
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = load <2 x i32>* %C
+	%tmp4 = call <2 x i64> @llvm.arm.neon.vmlals.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
+	ret <2 x i64> %tmp4
+}
+
+define <8 x i16> @vmlalu8(<8 x i16>* %A, <8 x i8>* %B, <8 x i8>* %C) nounwind {
+;CHECK: vmlalu8:
+;CHECK: vmlal.u8
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = load <8 x i8>* %C
+	%tmp4 = call <8 x i16> @llvm.arm.neon.vmlalu.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2, <8 x i8> %tmp3)
+	ret <8 x i16> %tmp4
+}
+
+define <4 x i32> @vmlalu16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
+;CHECK: vmlalu16:
+;CHECK: vmlal.u16
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = load <4 x i16>* %C
+	%tmp4 = call <4 x i32> @llvm.arm.neon.vmlalu.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
+	ret <4 x i32> %tmp4
+}
+
+define <2 x i64> @vmlalu32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
+;CHECK: vmlalu32:
+;CHECK: vmlal.u32
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = load <2 x i32>* %C
+	%tmp4 = call <2 x i64> @llvm.arm.neon.vmlalu.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
+	ret <2 x i64> %tmp4
+}
+
+define arm_aapcs_vfpcc <4 x i32> @test_vmlal_lanes16(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %arg2_int16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vmlal_lanes16
+; CHECK: vmlal.s16 q0, d2, d3[1]
+  %0 = shufflevector <4 x i16> %arg2_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
+  %1 = tail call <4 x i32> @llvm.arm.neon.vmlals.v4i32(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
+  ret <4 x i32> %1
+}
+
+define arm_aapcs_vfpcc <2 x i64> @test_vmlal_lanes32(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %arg2_int32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vmlal_lanes32
+; CHECK: vmlal.s32 q0, d2, d3[1]
+  %0 = shufflevector <2 x i32> %arg2_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
+  %1 = tail call <2 x i64> @llvm.arm.neon.vmlals.v2i64(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
+  ret <2 x i64> %1
+}
+
+define arm_aapcs_vfpcc <4 x i32> @test_vmlal_laneu16(<4 x i32> %arg0_uint32x4_t, <4 x i16> %arg1_uint16x4_t, <4 x i16> %arg2_uint16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vmlal_laneu16
+; CHECK: vmlal.u16 q0, d2, d3[1]
+  %0 = shufflevector <4 x i16> %arg2_uint16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
+  %1 = tail call <4 x i32> @llvm.arm.neon.vmlalu.v4i32(<4 x i32> %arg0_uint32x4_t, <4 x i16> %arg1_uint16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
+  ret <4 x i32> %1
+}
+
+define arm_aapcs_vfpcc <2 x i64> @test_vmlal_laneu32(<2 x i64> %arg0_uint64x2_t, <2 x i32> %arg1_uint32x2_t, <2 x i32> %arg2_uint32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vmlal_laneu32
+; CHECK: vmlal.u32 q0, d2, d3[1]
+  %0 = shufflevector <2 x i32> %arg2_uint32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
+  %1 = tail call <2 x i64> @llvm.arm.neon.vmlalu.v2i64(<2 x i64> %arg0_uint64x2_t, <2 x i32> %arg1_uint32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
+  ret <2 x i64> %1
+}
+
+declare <8 x i16> @llvm.arm.neon.vmlals.v8i16(<8 x i16>, <8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vmlals.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vmlals.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i16> @llvm.arm.neon.vmlalu.v8i16(<8 x i16>, <8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vmlalu.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vmlalu.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmlal.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmlal.ll
deleted file mode 100644
index 08c4d88..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmlal.ll
+++ /dev/null
@@ -1,63 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vmlal\\.s8} %t | count 1
-; RUN: grep {vmlal\\.s16} %t | count 1
-; RUN: grep {vmlal\\.s32} %t | count 1
-; RUN: grep {vmlal\\.u8} %t | count 1
-; RUN: grep {vmlal\\.u16} %t | count 1
-; RUN: grep {vmlal\\.u32} %t | count 1
-
-define <8 x i16> @vmlals8(<8 x i16>* %A, <8 x i8>* %B, <8 x i8>* %C) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = load <8 x i8>* %C
-	%tmp4 = call <8 x i16> @llvm.arm.neon.vmlals.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2, <8 x i8> %tmp3)
-	ret <8 x i16> %tmp4
-}
-
-define <4 x i32> @vmlals16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = load <4 x i16>* %C
-	%tmp4 = call <4 x i32> @llvm.arm.neon.vmlals.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
-	ret <4 x i32> %tmp4
-}
-
-define <2 x i64> @vmlals32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = load <2 x i32>* %C
-	%tmp4 = call <2 x i64> @llvm.arm.neon.vmlals.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
-	ret <2 x i64> %tmp4
-}
-
-define <8 x i16> @vmlalu8(<8 x i16>* %A, <8 x i8>* %B, <8 x i8>* %C) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = load <8 x i8>* %C
-	%tmp4 = call <8 x i16> @llvm.arm.neon.vmlalu.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2, <8 x i8> %tmp3)
-	ret <8 x i16> %tmp4
-}
-
-define <4 x i32> @vmlalu16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = load <4 x i16>* %C
-	%tmp4 = call <4 x i32> @llvm.arm.neon.vmlalu.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
-	ret <4 x i32> %tmp4
-}
-
-define <2 x i64> @vmlalu32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = load <2 x i32>* %C
-	%tmp4 = call <2 x i64> @llvm.arm.neon.vmlalu.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
-	ret <2 x i64> %tmp4
-}
-
-declare <8 x i16> @llvm.arm.neon.vmlals.v8i16(<8 x i16>, <8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vmlals.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vmlals.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i16> @llvm.arm.neon.vmlalu.v8i16(<8 x i16>, <8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vmlalu.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vmlalu.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmlal_lane.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmlal_lane.ll
deleted file mode 100644
index 5bb0621..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmlal_lane.ll
+++ /dev/null
@@ -1,47 +0,0 @@
-; RUN: llc -mattr=+neon < %s | FileCheck %s
-target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32"
-target triple = "thumbv7-elf"
-
-define arm_aapcs_vfpcc <4 x i32> @test_vmlal_lanes16(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %arg2_int16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vmlal_lanes16
-; CHECK: vmlal.s16 q0, d2, d3[1]
-  %0 = shufflevector <4 x i16> %arg2_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
-  %1 = tail call <4 x i32> @llvm.arm.neon.vmlals.v4i32(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
-  ret <4 x i32> %1
-}
-
-declare <4 x i32> @llvm.arm.neon.vmlals.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
-
-define arm_aapcs_vfpcc <2 x i64> @test_vmlal_lanes32(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %arg2_int32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vmlal_lanes32
-; CHECK: vmlal.s32 q0, d2, d3[1]
-  %0 = shufflevector <2 x i32> %arg2_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
-  %1 = tail call <2 x i64> @llvm.arm.neon.vmlals.v2i64(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
-  ret <2 x i64> %1
-}
-
-declare <2 x i64> @llvm.arm.neon.vmlals.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
-
-define arm_aapcs_vfpcc <4 x i32> @test_vmlal_laneu16(<4 x i32> %arg0_uint32x4_t, <4 x i16> %arg1_uint16x4_t, <4 x i16> %arg2_uint16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vmlal_laneu16
-; CHECK: vmlal.u16 q0, d2, d3[1]
-  %0 = shufflevector <4 x i16> %arg2_uint16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
-  %1 = tail call <4 x i32> @llvm.arm.neon.vmlalu.v4i32(<4 x i32> %arg0_uint32x4_t, <4 x i16> %arg1_uint16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
-  ret <4 x i32> %1
-}
-
-declare <4 x i32> @llvm.arm.neon.vmlalu.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
-
-define arm_aapcs_vfpcc <2 x i64> @test_vmlal_laneu32(<2 x i64> %arg0_uint64x2_t, <2 x i32> %arg1_uint32x2_t, <2 x i32> %arg2_uint32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vmlal_laneu32
-; CHECK: vmlal.u32 q0, d2, d3[1]
-  %0 = shufflevector <2 x i32> %arg2_uint32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
-  %1 = tail call <2 x i64> @llvm.arm.neon.vmlalu.v2i64(<2 x i64> %arg0_uint64x2_t, <2 x i32> %arg1_uint32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
-  ret <2 x i64> %1
-}
-
-declare <2 x i64> @llvm.arm.neon.vmlalu.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmls.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmls.ll
index d3996a3..c89552e 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmls.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vmls.ll
@@ -1,10 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vmls\\.i8} %t | count 2
-; RUN: grep {vmls\\.i16} %t | count 2
-; RUN: grep {vmls\\.i32} %t | count 2
-; RUN: grep {vmls\\.f32} %t | count 2
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vmlsi8(<8 x i8>* %A, <8 x i8>* %B, <8 x i8> * %C) nounwind {
+;CHECK: vmlsi8:
+;CHECK: vmls.i8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = load <8 x i8>* %C
@@ -14,6 +12,8 @@ define <8 x i8> @vmlsi8(<8 x i8>* %A, <8 x i8>* %B, <8 x i8> * %C) nounwind {
 }
 
 define <4 x i16> @vmlsi16(<4 x i16>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
+;CHECK: vmlsi16:
+;CHECK: vmls.i16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = load <4 x i16>* %C
@@ -23,6 +23,8 @@ define <4 x i16> @vmlsi16(<4 x i16>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind
 }
 
 define <2 x i32> @vmlsi32(<2 x i32>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
+;CHECK: vmlsi32:
+;CHECK: vmls.i32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = load <2 x i32>* %C
@@ -32,6 +34,8 @@ define <2 x i32> @vmlsi32(<2 x i32>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind
 }
 
 define <2 x float> @vmlsf32(<2 x float>* %A, <2 x float>* %B, <2 x float>* %C) nounwind {
+;CHECK: vmlsf32:
+;CHECK: vmls.f32
 	%tmp1 = load <2 x float>* %A
 	%tmp2 = load <2 x float>* %B
 	%tmp3 = load <2 x float>* %C
@@ -41,6 +45,8 @@ define <2 x float> @vmlsf32(<2 x float>* %A, <2 x float>* %B, <2 x float>* %C) n
 }
 
 define <16 x i8> @vmlsQi8(<16 x i8>* %A, <16 x i8>* %B, <16 x i8> * %C) nounwind {
+;CHECK: vmlsQi8:
+;CHECK: vmls.i8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = load <16 x i8>* %C
@@ -50,6 +56,8 @@ define <16 x i8> @vmlsQi8(<16 x i8>* %A, <16 x i8>* %B, <16 x i8> * %C) nounwind
 }
 
 define <8 x i16> @vmlsQi16(<8 x i16>* %A, <8 x i16>* %B, <8 x i16>* %C) nounwind {
+;CHECK: vmlsQi16:
+;CHECK: vmls.i16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = load <8 x i16>* %C
@@ -59,6 +67,8 @@ define <8 x i16> @vmlsQi16(<8 x i16>* %A, <8 x i16>* %B, <8 x i16>* %C) nounwind
 }
 
 define <4 x i32> @vmlsQi32(<4 x i32>* %A, <4 x i32>* %B, <4 x i32>* %C) nounwind {
+;CHECK: vmlsQi32:
+;CHECK: vmls.i32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = load <4 x i32>* %C
@@ -68,6 +78,8 @@ define <4 x i32> @vmlsQi32(<4 x i32>* %A, <4 x i32>* %B, <4 x i32>* %C) nounwind
 }
 
 define <4 x float> @vmlsQf32(<4 x float>* %A, <4 x float>* %B, <4 x float>* %C) nounwind {
+;CHECK: vmlsQf32:
+;CHECK: vmls.f32
 	%tmp1 = load <4 x float>* %A
 	%tmp2 = load <4 x float>* %B
 	%tmp3 = load <4 x float>* %C
@@ -75,3 +87,107 @@ define <4 x float> @vmlsQf32(<4 x float>* %A, <4 x float>* %B, <4 x float>* %C)
 	%tmp5 = sub <4 x float> %tmp1, %tmp4
 	ret <4 x float> %tmp5
 }
+
+define <8 x i16> @vmlsls8(<8 x i16>* %A, <8 x i8>* %B, <8 x i8>* %C) nounwind {
+;CHECK: vmlsls8:
+;CHECK: vmlsl.s8
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = load <8 x i8>* %C
+	%tmp4 = call <8 x i16> @llvm.arm.neon.vmlsls.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2, <8 x i8> %tmp3)
+	ret <8 x i16> %tmp4
+}
+
+define <4 x i32> @vmlsls16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
+;CHECK: vmlsls16:
+;CHECK: vmlsl.s16
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = load <4 x i16>* %C
+	%tmp4 = call <4 x i32> @llvm.arm.neon.vmlsls.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
+	ret <4 x i32> %tmp4
+}
+
+define <2 x i64> @vmlsls32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
+;CHECK: vmlsls32:
+;CHECK: vmlsl.s32
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = load <2 x i32>* %C
+	%tmp4 = call <2 x i64> @llvm.arm.neon.vmlsls.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
+	ret <2 x i64> %tmp4
+}
+
+define <8 x i16> @vmlslu8(<8 x i16>* %A, <8 x i8>* %B, <8 x i8>* %C) nounwind {
+;CHECK: vmlslu8:
+;CHECK: vmlsl.u8
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = load <8 x i8>* %C
+	%tmp4 = call <8 x i16> @llvm.arm.neon.vmlslu.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2, <8 x i8> %tmp3)
+	ret <8 x i16> %tmp4
+}
+
+define <4 x i32> @vmlslu16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
+;CHECK: vmlslu16:
+;CHECK: vmlsl.u16
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = load <4 x i16>* %C
+	%tmp4 = call <4 x i32> @llvm.arm.neon.vmlslu.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
+	ret <4 x i32> %tmp4
+}
+
+define <2 x i64> @vmlslu32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
+;CHECK: vmlslu32:
+;CHECK: vmlsl.u32
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = load <2 x i32>* %C
+	%tmp4 = call <2 x i64> @llvm.arm.neon.vmlslu.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
+	ret <2 x i64> %tmp4
+}
+
+define arm_aapcs_vfpcc <4 x i32> @test_vmlsl_lanes16(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %arg2_int16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vmlsl_lanes16
+; CHECK: vmlsl.s16 q0, d2, d3[1]
+  %0 = shufflevector <4 x i16> %arg2_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
+  %1 = tail call <4 x i32> @llvm.arm.neon.vmlsls.v4i32(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
+  ret <4 x i32> %1
+}
+
+define arm_aapcs_vfpcc <2 x i64> @test_vmlsl_lanes32(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %arg2_int32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vmlsl_lanes32
+; CHECK: vmlsl.s32 q0, d2, d3[1]
+  %0 = shufflevector <2 x i32> %arg2_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
+  %1 = tail call <2 x i64> @llvm.arm.neon.vmlsls.v2i64(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
+  ret <2 x i64> %1
+}
+
+define arm_aapcs_vfpcc <4 x i32> @test_vmlsl_laneu16(<4 x i32> %arg0_uint32x4_t, <4 x i16> %arg1_uint16x4_t, <4 x i16> %arg2_uint16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vmlsl_laneu16
+; CHECK: vmlsl.u16 q0, d2, d3[1]
+  %0 = shufflevector <4 x i16> %arg2_uint16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
+  %1 = tail call <4 x i32> @llvm.arm.neon.vmlslu.v4i32(<4 x i32> %arg0_uint32x4_t, <4 x i16> %arg1_uint16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
+  ret <4 x i32> %1
+}
+
+define arm_aapcs_vfpcc <2 x i64> @test_vmlsl_laneu32(<2 x i64> %arg0_uint64x2_t, <2 x i32> %arg1_uint32x2_t, <2 x i32> %arg2_uint32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vmlsl_laneu32
+; CHECK: vmlsl.u32 q0, d2, d3[1]
+  %0 = shufflevector <2 x i32> %arg2_uint32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
+  %1 = tail call <2 x i64> @llvm.arm.neon.vmlslu.v2i64(<2 x i64> %arg0_uint64x2_t, <2 x i32> %arg1_uint32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
+  ret <2 x i64> %1
+}
+
+declare <8 x i16> @llvm.arm.neon.vmlsls.v8i16(<8 x i16>, <8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vmlsls.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vmlsls.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i16> @llvm.arm.neon.vmlslu.v8i16(<8 x i16>, <8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vmlslu.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vmlslu.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmlsl.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmlsl.ll
deleted file mode 100644
index 253157d..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmlsl.ll
+++ /dev/null
@@ -1,63 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vmlsl\\.s8} %t | count 1
-; RUN: grep {vmlsl\\.s16} %t | count 1
-; RUN: grep {vmlsl\\.s32} %t | count 1
-; RUN: grep {vmlsl\\.u8} %t | count 1
-; RUN: grep {vmlsl\\.u16} %t | count 1
-; RUN: grep {vmlsl\\.u32} %t | count 1
-
-define <8 x i16> @vmlsls8(<8 x i16>* %A, <8 x i8>* %B, <8 x i8>* %C) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = load <8 x i8>* %C
-	%tmp4 = call <8 x i16> @llvm.arm.neon.vmlsls.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2, <8 x i8> %tmp3)
-	ret <8 x i16> %tmp4
-}
-
-define <4 x i32> @vmlsls16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = load <4 x i16>* %C
-	%tmp4 = call <4 x i32> @llvm.arm.neon.vmlsls.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
-	ret <4 x i32> %tmp4
-}
-
-define <2 x i64> @vmlsls32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = load <2 x i32>* %C
-	%tmp4 = call <2 x i64> @llvm.arm.neon.vmlsls.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
-	ret <2 x i64> %tmp4
-}
-
-define <8 x i16> @vmlslu8(<8 x i16>* %A, <8 x i8>* %B, <8 x i8>* %C) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = load <8 x i8>* %C
-	%tmp4 = call <8 x i16> @llvm.arm.neon.vmlslu.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2, <8 x i8> %tmp3)
-	ret <8 x i16> %tmp4
-}
-
-define <4 x i32> @vmlslu16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = load <4 x i16>* %C
-	%tmp4 = call <4 x i32> @llvm.arm.neon.vmlslu.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
-	ret <4 x i32> %tmp4
-}
-
-define <2 x i64> @vmlslu32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = load <2 x i32>* %C
-	%tmp4 = call <2 x i64> @llvm.arm.neon.vmlslu.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
-	ret <2 x i64> %tmp4
-}
-
-declare <8 x i16> @llvm.arm.neon.vmlsls.v8i16(<8 x i16>, <8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vmlsls.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vmlsls.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i16> @llvm.arm.neon.vmlslu.v8i16(<8 x i16>, <8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vmlslu.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vmlslu.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmlsl_lane.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmlsl_lane.ll
deleted file mode 100644
index 1effbd6..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmlsl_lane.ll
+++ /dev/null
@@ -1,47 +0,0 @@
-; RUN: llc -mattr=+neon < %s | FileCheck %s
-target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32"
-target triple = "thumbv7-elf"
-
-define arm_aapcs_vfpcc <4 x i32> @test_vmlsl_lanes16(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %arg2_int16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vmlsl_lanes16
-; CHECK: vmlsl.s16 q0, d2, d3[1]
-  %0 = shufflevector <4 x i16> %arg2_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
-  %1 = tail call <4 x i32> @llvm.arm.neon.vmlsls.v4i32(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
-  ret <4 x i32> %1
-}
-
-declare <4 x i32> @llvm.arm.neon.vmlsls.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
-
-define arm_aapcs_vfpcc <2 x i64> @test_vmlsl_lanes32(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %arg2_int32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vmlsl_lanes32
-; CHECK: vmlsl.s32 q0, d2, d3[1]
-  %0 = shufflevector <2 x i32> %arg2_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
-  %1 = tail call <2 x i64> @llvm.arm.neon.vmlsls.v2i64(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
-  ret <2 x i64> %1
-}
-
-declare <2 x i64> @llvm.arm.neon.vmlsls.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
-
-define arm_aapcs_vfpcc <4 x i32> @test_vmlsl_laneu16(<4 x i32> %arg0_uint32x4_t, <4 x i16> %arg1_uint16x4_t, <4 x i16> %arg2_uint16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vmlsl_laneu16
-; CHECK: vmlsl.u16 q0, d2, d3[1]
-  %0 = shufflevector <4 x i16> %arg2_uint16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
-  %1 = tail call <4 x i32> @llvm.arm.neon.vmlslu.v4i32(<4 x i32> %arg0_uint32x4_t, <4 x i16> %arg1_uint16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
-  ret <4 x i32> %1
-}
-
-declare <4 x i32> @llvm.arm.neon.vmlslu.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
-
-define arm_aapcs_vfpcc <2 x i64> @test_vmlsl_laneu32(<2 x i64> %arg0_uint64x2_t, <2 x i32> %arg1_uint32x2_t, <2 x i32> %arg2_uint32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vmlsl_laneu32
-; CHECK: vmlsl.u32 q0, d2, d3[1]
-  %0 = shufflevector <2 x i32> %arg2_uint32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
-  %1 = tail call <2 x i64> @llvm.arm.neon.vmlslu.v2i64(<2 x i64> %arg0_uint64x2_t, <2 x i32> %arg1_uint32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
-  ret <2 x i64> %1
-}
-
-declare <2 x i64> @llvm.arm.neon.vmlslu.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmov.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmov.ll
index c4cb204..e4368d6 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmov.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vmov.ll
@@ -1,101 +1,323 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep vmov.i8 %t | count 2
-; RUN: grep vmov.i16 %t | count 4
-; RUN: grep vmov.i32 %t | count 12
-; RUN: grep vmov.i64 %t | count 2
-; Note: function names do not include "vmov" to allow simple grep for opcodes
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @v_movi8() nounwind {
+;CHECK: v_movi8:
+;CHECK: vmov.i8
 	ret <8 x i8> < i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8 >
 }
 
 define <4 x i16> @v_movi16a() nounwind {
+;CHECK: v_movi16a:
+;CHECK: vmov.i16
 	ret <4 x i16> < i16 16, i16 16, i16 16, i16 16 >
 }
 
 ; 0x1000 = 4096
 define <4 x i16> @v_movi16b() nounwind {
+;CHECK: v_movi16b:
+;CHECK: vmov.i16
 	ret <4 x i16> < i16 4096, i16 4096, i16 4096, i16 4096 >
 }
 
 define <2 x i32> @v_movi32a() nounwind {
+;CHECK: v_movi32a:
+;CHECK: vmov.i32
 	ret <2 x i32> < i32 32, i32 32 >
 }
 
 ; 0x2000 = 8192
 define <2 x i32> @v_movi32b() nounwind {
+;CHECK: v_movi32b:
+;CHECK: vmov.i32
 	ret <2 x i32> < i32 8192, i32 8192 >
 }
 
 ; 0x200000 = 2097152
 define <2 x i32> @v_movi32c() nounwind {
+;CHECK: v_movi32c:
+;CHECK: vmov.i32
 	ret <2 x i32> < i32 2097152, i32 2097152 >
 }
 
 ; 0x20000000 = 536870912
 define <2 x i32> @v_movi32d() nounwind {
+;CHECK: v_movi32d:
+;CHECK: vmov.i32
 	ret <2 x i32> < i32 536870912, i32 536870912 >
 }
 
 ; 0x20ff = 8447
 define <2 x i32> @v_movi32e() nounwind {
+;CHECK: v_movi32e:
+;CHECK: vmov.i32
 	ret <2 x i32> < i32 8447, i32 8447 >
 }
 
 ; 0x20ffff = 2162687
 define <2 x i32> @v_movi32f() nounwind {
+;CHECK: v_movi32f:
+;CHECK: vmov.i32
 	ret <2 x i32> < i32 2162687, i32 2162687 >
 }
 
 ; 0xff0000ff0000ffff = 18374687574888349695
 define <1 x i64> @v_movi64() nounwind {
+;CHECK: v_movi64:
+;CHECK: vmov.i64
 	ret <1 x i64> < i64 18374687574888349695 >
 }
 
 define <16 x i8> @v_movQi8() nounwind {
+;CHECK: v_movQi8:
+;CHECK: vmov.i8
 	ret <16 x i8> < i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8 >
 }
 
 define <8 x i16> @v_movQi16a() nounwind {
+;CHECK: v_movQi16a:
+;CHECK: vmov.i16
 	ret <8 x i16> < i16 16, i16 16, i16 16, i16 16, i16 16, i16 16, i16 16, i16 16 >
 }
 
 ; 0x1000 = 4096
 define <8 x i16> @v_movQi16b() nounwind {
+;CHECK: v_movQi16b:
+;CHECK: vmov.i16
 	ret <8 x i16> < i16 4096, i16 4096, i16 4096, i16 4096, i16 4096, i16 4096, i16 4096, i16 4096 >
 }
 
 define <4 x i32> @v_movQi32a() nounwind {
+;CHECK: v_movQi32a:
+;CHECK: vmov.i32
 	ret <4 x i32> < i32 32, i32 32, i32 32, i32 32 >
 }
 
 ; 0x2000 = 8192
 define <4 x i32> @v_movQi32b() nounwind {
+;CHECK: v_movQi32b:
+;CHECK: vmov.i32
 	ret <4 x i32> < i32 8192, i32 8192, i32 8192, i32 8192 >
 }
 
 ; 0x200000 = 2097152
 define <4 x i32> @v_movQi32c() nounwind {
+;CHECK: v_movQi32c:
+;CHECK: vmov.i32
 	ret <4 x i32> < i32 2097152, i32 2097152, i32 2097152, i32 2097152 >
 }
 
 ; 0x20000000 = 536870912
 define <4 x i32> @v_movQi32d() nounwind {
+;CHECK: v_movQi32d:
+;CHECK: vmov.i32
 	ret <4 x i32> < i32 536870912, i32 536870912, i32 536870912, i32 536870912 >
 }
 
 ; 0x20ff = 8447
 define <4 x i32> @v_movQi32e() nounwind {
+;CHECK: v_movQi32e:
+;CHECK: vmov.i32
 	ret <4 x i32> < i32 8447, i32 8447, i32 8447, i32 8447 >
 }
 
 ; 0x20ffff = 2162687
 define <4 x i32> @v_movQi32f() nounwind {
+;CHECK: v_movQi32f:
+;CHECK: vmov.i32
 	ret <4 x i32> < i32 2162687, i32 2162687, i32 2162687, i32 2162687 >
 }
 
 ; 0xff0000ff0000ffff = 18374687574888349695
 define <2 x i64> @v_movQi64() nounwind {
+;CHECK: v_movQi64:
+;CHECK: vmov.i64
 	ret <2 x i64> < i64 18374687574888349695, i64 18374687574888349695 >
 }
 
+; Check for correct assembler printing for immediate values.
+%struct.int8x8_t = type { <8 x i8> }
+define arm_apcscc void @vdupn128(%struct.int8x8_t* noalias nocapture sret %agg.result) nounwind {
+entry:
+;CHECK: vdupn128:
+;CHECK: vmov.i8 d0, #0x80
+  %0 = getelementptr inbounds %struct.int8x8_t* %agg.result, i32 0, i32 0 ; <<8 x i8>*> [#uses=1]
+  store <8 x i8> <i8 -128, i8 -128, i8 -128, i8 -128, i8 -128, i8 -128, i8 -128, i8 -128>, <8 x i8>* %0, align 8
+  ret void
+}
+
+define arm_apcscc void @vdupnneg75(%struct.int8x8_t* noalias nocapture sret %agg.result) nounwind {
+entry:
+;CHECK: vdupnneg75:
+;CHECK: vmov.i8 d0, #0xB5
+  %0 = getelementptr inbounds %struct.int8x8_t* %agg.result, i32 0, i32 0 ; <<8 x i8>*> [#uses=1]
+  store <8 x i8> <i8 -75, i8 -75, i8 -75, i8 -75, i8 -75, i8 -75, i8 -75, i8 -75>, <8 x i8>* %0, align 8
+  ret void
+}
+
+define <8 x i16> @vmovls8(<8 x i8>* %A) nounwind {
+;CHECK: vmovls8:
+;CHECK: vmovl.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = call <8 x i16> @llvm.arm.neon.vmovls.v8i16(<8 x i8> %tmp1)
+	ret <8 x i16> %tmp2
+}
+
+define <4 x i32> @vmovls16(<4 x i16>* %A) nounwind {
+;CHECK: vmovls16:
+;CHECK: vmovl.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = call <4 x i32> @llvm.arm.neon.vmovls.v4i32(<4 x i16> %tmp1)
+	ret <4 x i32> %tmp2
+}
+
+define <2 x i64> @vmovls32(<2 x i32>* %A) nounwind {
+;CHECK: vmovls32:
+;CHECK: vmovl.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = call <2 x i64> @llvm.arm.neon.vmovls.v2i64(<2 x i32> %tmp1)
+	ret <2 x i64> %tmp2
+}
+
+define <8 x i16> @vmovlu8(<8 x i8>* %A) nounwind {
+;CHECK: vmovlu8:
+;CHECK: vmovl.u8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = call <8 x i16> @llvm.arm.neon.vmovlu.v8i16(<8 x i8> %tmp1)
+	ret <8 x i16> %tmp2
+}
+
+define <4 x i32> @vmovlu16(<4 x i16>* %A) nounwind {
+;CHECK: vmovlu16:
+;CHECK: vmovl.u16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = call <4 x i32> @llvm.arm.neon.vmovlu.v4i32(<4 x i16> %tmp1)
+	ret <4 x i32> %tmp2
+}
+
+define <2 x i64> @vmovlu32(<2 x i32>* %A) nounwind {
+;CHECK: vmovlu32:
+;CHECK: vmovl.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = call <2 x i64> @llvm.arm.neon.vmovlu.v2i64(<2 x i32> %tmp1)
+	ret <2 x i64> %tmp2
+}
+
+declare <8 x i16> @llvm.arm.neon.vmovls.v8i16(<8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vmovls.v4i32(<4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vmovls.v2i64(<2 x i32>) nounwind readnone
+
+declare <8 x i16> @llvm.arm.neon.vmovlu.v8i16(<8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vmovlu.v4i32(<4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vmovlu.v2i64(<2 x i32>) nounwind readnone
+
+define <8 x i8> @vmovni16(<8 x i16>* %A) nounwind {
+;CHECK: vmovni16:
+;CHECK: vmovn.i16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <8 x i8> @llvm.arm.neon.vmovn.v8i8(<8 x i16> %tmp1)
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vmovni32(<4 x i32>* %A) nounwind {
+;CHECK: vmovni32:
+;CHECK: vmovn.i32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vmovn.v4i16(<4 x i32> %tmp1)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vmovni64(<2 x i64>* %A) nounwind {
+;CHECK: vmovni64:
+;CHECK: vmovn.i64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vmovn.v2i32(<2 x i64> %tmp1)
+	ret <2 x i32> %tmp2
+}
+
+declare <8 x i8>  @llvm.arm.neon.vmovn.v8i8(<8 x i16>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vmovn.v4i16(<4 x i32>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vmovn.v2i32(<2 x i64>) nounwind readnone
+
+define <8 x i8> @vqmovns16(<8 x i16>* %A) nounwind {
+;CHECK: vqmovns16:
+;CHECK: vqmovn.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <8 x i8> @llvm.arm.neon.vqmovns.v8i8(<8 x i16> %tmp1)
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vqmovns32(<4 x i32>* %A) nounwind {
+;CHECK: vqmovns32:
+;CHECK: vqmovn.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vqmovns.v4i16(<4 x i32> %tmp1)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vqmovns64(<2 x i64>* %A) nounwind {
+;CHECK: vqmovns64:
+;CHECK: vqmovn.s64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vqmovns.v2i32(<2 x i64> %tmp1)
+	ret <2 x i32> %tmp2
+}
+
+define <8 x i8> @vqmovnu16(<8 x i16>* %A) nounwind {
+;CHECK: vqmovnu16:
+;CHECK: vqmovn.u16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <8 x i8> @llvm.arm.neon.vqmovnu.v8i8(<8 x i16> %tmp1)
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vqmovnu32(<4 x i32>* %A) nounwind {
+;CHECK: vqmovnu32:
+;CHECK: vqmovn.u32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vqmovnu.v4i16(<4 x i32> %tmp1)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vqmovnu64(<2 x i64>* %A) nounwind {
+;CHECK: vqmovnu64:
+;CHECK: vqmovn.u64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vqmovnu.v2i32(<2 x i64> %tmp1)
+	ret <2 x i32> %tmp2
+}
+
+define <8 x i8> @vqmovuns16(<8 x i16>* %A) nounwind {
+;CHECK: vqmovuns16:
+;CHECK: vqmovun.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <8 x i8> @llvm.arm.neon.vqmovnsu.v8i8(<8 x i16> %tmp1)
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vqmovuns32(<4 x i32>* %A) nounwind {
+;CHECK: vqmovuns32:
+;CHECK: vqmovun.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vqmovnsu.v4i16(<4 x i32> %tmp1)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vqmovuns64(<2 x i64>* %A) nounwind {
+;CHECK: vqmovuns64:
+;CHECK: vqmovun.s64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vqmovnsu.v2i32(<2 x i64> %tmp1)
+	ret <2 x i32> %tmp2
+}
+
+declare <8 x i8>  @llvm.arm.neon.vqmovns.v8i8(<8 x i16>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vqmovns.v4i16(<4 x i32>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vqmovns.v2i32(<2 x i64>) nounwind readnone
+
+declare <8 x i8>  @llvm.arm.neon.vqmovnu.v8i8(<8 x i16>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vqmovnu.v4i16(<4 x i32>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vqmovnu.v2i32(<2 x i64>) nounwind readnone
+
+declare <8 x i8>  @llvm.arm.neon.vqmovnsu.v8i8(<8 x i16>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vqmovnsu.v4i16(<4 x i32>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vqmovnsu.v2i32(<2 x i64>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmovl.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmovl.ll
deleted file mode 100644
index 4757680..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmovl.ll
+++ /dev/null
@@ -1,51 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vmovl\\.s8} %t | count 1
-; RUN: grep {vmovl\\.s16} %t | count 1
-; RUN: grep {vmovl\\.s32} %t | count 1
-; RUN: grep {vmovl\\.u8} %t | count 1
-; RUN: grep {vmovl\\.u16} %t | count 1
-; RUN: grep {vmovl\\.u32} %t | count 1
-
-define <8 x i16> @vmovls8(<8 x i8>* %A) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = call <8 x i16> @llvm.arm.neon.vmovls.v8i16(<8 x i8> %tmp1)
-	ret <8 x i16> %tmp2
-}
-
-define <4 x i32> @vmovls16(<4 x i16>* %A) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = call <4 x i32> @llvm.arm.neon.vmovls.v4i32(<4 x i16> %tmp1)
-	ret <4 x i32> %tmp2
-}
-
-define <2 x i64> @vmovls32(<2 x i32>* %A) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = call <2 x i64> @llvm.arm.neon.vmovls.v2i64(<2 x i32> %tmp1)
-	ret <2 x i64> %tmp2
-}
-
-define <8 x i16> @vmovlu8(<8 x i8>* %A) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = call <8 x i16> @llvm.arm.neon.vmovlu.v8i16(<8 x i8> %tmp1)
-	ret <8 x i16> %tmp2
-}
-
-define <4 x i32> @vmovlu16(<4 x i16>* %A) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = call <4 x i32> @llvm.arm.neon.vmovlu.v4i32(<4 x i16> %tmp1)
-	ret <4 x i32> %tmp2
-}
-
-define <2 x i64> @vmovlu32(<2 x i32>* %A) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = call <2 x i64> @llvm.arm.neon.vmovlu.v2i64(<2 x i32> %tmp1)
-	ret <2 x i64> %tmp2
-}
-
-declare <8 x i16> @llvm.arm.neon.vmovls.v8i16(<8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vmovls.v4i32(<4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vmovls.v2i64(<2 x i32>) nounwind readnone
-
-declare <8 x i16> @llvm.arm.neon.vmovlu.v8i16(<8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vmovlu.v4i32(<4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vmovlu.v2i64(<2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmovn.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmovn.ll
deleted file mode 100644
index 173bb52..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmovn.ll
+++ /dev/null
@@ -1,26 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vmovn\\.i16} %t | count 1
-; RUN: grep {vmovn\\.i32} %t | count 1
-; RUN: grep {vmovn\\.i64} %t | count 1
-
-define <8 x i8> @vmovni16(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <8 x i8> @llvm.arm.neon.vmovn.v8i8(<8 x i16> %tmp1)
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vmovni32(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vmovn.v4i16(<4 x i32> %tmp1)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vmovni64(<2 x i64>* %A) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vmovn.v2i32(<2 x i64> %tmp1)
-	ret <2 x i32> %tmp2
-}
-
-declare <8 x i8>  @llvm.arm.neon.vmovn.v8i8(<8 x i16>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vmovn.v4i16(<4 x i32>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vmovn.v2i32(<2 x i64>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmul.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmul.ll
index 38abcca..325da5d 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmul.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vmul.ll
@@ -1,11 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vmul\\.i8} %t | count 2
-; RUN: grep {vmul\\.i16} %t | count 2
-; RUN: grep {vmul\\.i32} %t | count 2
-; RUN: grep {vmul\\.f32} %t | count 2
-; RUN: grep {vmul\\.p8} %t | count 2
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vmuli8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vmuli8:
+;CHECK: vmul.i8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = mul <8 x i8> %tmp1, %tmp2
@@ -13,6 +10,8 @@ define <8 x i8> @vmuli8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vmuli16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vmuli16:
+;CHECK: vmul.i16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = mul <4 x i16> %tmp1, %tmp2
@@ -20,6 +19,8 @@ define <4 x i16> @vmuli16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vmuli32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vmuli32:
+;CHECK: vmul.i32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = mul <2 x i32> %tmp1, %tmp2
@@ -27,6 +28,8 @@ define <2 x i32> @vmuli32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <2 x float> @vmulf32(<2 x float>* %A, <2 x float>* %B) nounwind {
+;CHECK: vmulf32:
+;CHECK: vmul.f32
 	%tmp1 = load <2 x float>* %A
 	%tmp2 = load <2 x float>* %B
 	%tmp3 = mul <2 x float> %tmp1, %tmp2
@@ -34,6 +37,8 @@ define <2 x float> @vmulf32(<2 x float>* %A, <2 x float>* %B) nounwind {
 }
 
 define <8 x i8> @vmulp8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vmulp8:
+;CHECK: vmul.p8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <8 x i8> @llvm.arm.neon.vmulp.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
@@ -41,6 +46,8 @@ define <8 x i8> @vmulp8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <16 x i8> @vmulQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vmulQi8:
+;CHECK: vmul.i8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = mul <16 x i8> %tmp1, %tmp2
@@ -48,6 +55,8 @@ define <16 x i8> @vmulQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vmulQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vmulQi16:
+;CHECK: vmul.i16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = mul <8 x i16> %tmp1, %tmp2
@@ -55,6 +64,8 @@ define <8 x i16> @vmulQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vmulQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vmulQi32:
+;CHECK: vmul.i32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = mul <4 x i32> %tmp1, %tmp2
@@ -62,6 +73,8 @@ define <4 x i32> @vmulQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <4 x float> @vmulQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
+;CHECK: vmulQf32:
+;CHECK: vmul.f32
 	%tmp1 = load <4 x float>* %A
 	%tmp2 = load <4 x float>* %B
 	%tmp3 = mul <4 x float> %tmp1, %tmp2
@@ -69,6 +82,8 @@ define <4 x float> @vmulQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
 }
 
 define <16 x i8> @vmulQp8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vmulQp8:
+;CHECK: vmul.p8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <16 x i8> @llvm.arm.neon.vmulp.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
@@ -77,3 +92,166 @@ define <16 x i8> @vmulQp8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 
 declare <8 x i8>  @llvm.arm.neon.vmulp.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
 declare <16 x i8>  @llvm.arm.neon.vmulp.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
+
+define arm_aapcs_vfpcc <2 x float> @test_vmul_lanef32(<2 x float> %arg0_float32x2_t, <2 x float> %arg1_float32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vmul_lanef32:
+; CHECK: vmul.f32 d0, d0, d1[0]
+  %0 = shufflevector <2 x float> %arg1_float32x2_t, <2 x float> undef, <2 x i32> zeroinitializer ; <<2 x float>> [#uses=1]
+  %1 = fmul <2 x float> %0, %arg0_float32x2_t     ; <<2 x float>> [#uses=1]
+  ret <2 x float> %1
+}
+
+define arm_aapcs_vfpcc <4 x i16> @test_vmul_lanes16(<4 x i16> %arg0_int16x4_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vmul_lanes16:
+; CHECK: vmul.i16 d0, d0, d1[1]
+  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
+  %1 = mul <4 x i16> %0, %arg0_int16x4_t          ; <<4 x i16>> [#uses=1]
+  ret <4 x i16> %1
+}
+
+define arm_aapcs_vfpcc <2 x i32> @test_vmul_lanes32(<2 x i32> %arg0_int32x2_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vmul_lanes32:
+; CHECK: vmul.i32 d0, d0, d1[1]
+  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
+  %1 = mul <2 x i32> %0, %arg0_int32x2_t          ; <<2 x i32>> [#uses=1]
+  ret <2 x i32> %1
+}
+
+define arm_aapcs_vfpcc <4 x float> @test_vmulQ_lanef32(<4 x float> %arg0_float32x4_t, <2 x float> %arg1_float32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vmulQ_lanef32:
+; CHECK: vmul.f32 q0, q0, d2[1]
+  %0 = shufflevector <2 x float> %arg1_float32x2_t, <2 x float> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x float>> [#uses=1]
+  %1 = fmul <4 x float> %0, %arg0_float32x4_t     ; <<4 x float>> [#uses=1]
+  ret <4 x float> %1
+}
+
+define arm_aapcs_vfpcc <8 x i16> @test_vmulQ_lanes16(<8 x i16> %arg0_int16x8_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vmulQ_lanes16:
+; CHECK: vmul.i16 q0, q0, d2[1]
+  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <8 x i32> <i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1>
+  %1 = mul <8 x i16> %0, %arg0_int16x8_t          ; <<8 x i16>> [#uses=1]
+  ret <8 x i16> %1
+}
+
+define arm_aapcs_vfpcc <4 x i32> @test_vmulQ_lanes32(<4 x i32> %arg0_int32x4_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vmulQ_lanes32:
+; CHECK: vmul.i32 q0, q0, d2[1]
+  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i32>> [#uses=1]
+  %1 = mul <4 x i32> %0, %arg0_int32x4_t          ; <<4 x i32>> [#uses=1]
+  ret <4 x i32> %1
+}
+
+define <8 x i16> @vmulls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vmulls8:
+;CHECK: vmull.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vmulls.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vmulls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vmulls16:
+;CHECK: vmull.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vmulls.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vmulls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vmulls32:
+;CHECK: vmull.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vmulls.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+define <8 x i16> @vmullu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vmullu8:
+;CHECK: vmull.u8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vmullu.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vmullu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vmullu16:
+;CHECK: vmull.u16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vmullu.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vmullu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vmullu32:
+;CHECK: vmull.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vmullu.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+define <8 x i16> @vmullp8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vmullp8:
+;CHECK: vmull.p8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vmullp.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define arm_aapcs_vfpcc <4 x i32> @test_vmull_lanes16(<4 x i16> %arg0_int16x4_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vmull_lanes16
+; CHECK: vmull.s16 q0, d0, d1[1]
+  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
+  %1 = tail call <4 x i32> @llvm.arm.neon.vmulls.v4i32(<4 x i16> %arg0_int16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
+  ret <4 x i32> %1
+}
+
+define arm_aapcs_vfpcc <2 x i64> @test_vmull_lanes32(<2 x i32> %arg0_int32x2_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vmull_lanes32
+; CHECK: vmull.s32 q0, d0, d1[1]
+  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
+  %1 = tail call <2 x i64> @llvm.arm.neon.vmulls.v2i64(<2 x i32> %arg0_int32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
+  ret <2 x i64> %1
+}
+
+define arm_aapcs_vfpcc <4 x i32> @test_vmull_laneu16(<4 x i16> %arg0_uint16x4_t, <4 x i16> %arg1_uint16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vmull_laneu16
+; CHECK: vmull.u16 q0, d0, d1[1]
+  %0 = shufflevector <4 x i16> %arg1_uint16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
+  %1 = tail call <4 x i32> @llvm.arm.neon.vmullu.v4i32(<4 x i16> %arg0_uint16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
+  ret <4 x i32> %1
+}
+
+define arm_aapcs_vfpcc <2 x i64> @test_vmull_laneu32(<2 x i32> %arg0_uint32x2_t, <2 x i32> %arg1_uint32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vmull_laneu32
+; CHECK: vmull.u32 q0, d0, d1[1]
+  %0 = shufflevector <2 x i32> %arg1_uint32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
+  %1 = tail call <2 x i64> @llvm.arm.neon.vmullu.v2i64(<2 x i32> %arg0_uint32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
+  ret <2 x i64> %1
+}
+
+declare <8 x i16> @llvm.arm.neon.vmulls.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vmulls.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vmulls.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i16> @llvm.arm.neon.vmullu.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vmullu.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vmullu.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i16>  @llvm.arm.neon.vmullp.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmul_lane.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmul_lane.ll
deleted file mode 100644
index 7edd873..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmul_lane.ll
+++ /dev/null
@@ -1,57 +0,0 @@
-; RUN: llc -mattr=+neon < %s | FileCheck %s
-target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32"
-target triple = "thumbv7-elf"
-
-define arm_aapcs_vfpcc <2 x float> @test_vmul_lanef32(<2 x float> %arg0_float32x2_t, <2 x float> %arg1_float32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vmul_lanef32:
-; CHECK: vmul.f32 d0, d0, d1[0]
-  %0 = shufflevector <2 x float> %arg1_float32x2_t, <2 x float> undef, <2 x i32> zeroinitializer ; <<2 x float>> [#uses=1]
-  %1 = fmul <2 x float> %0, %arg0_float32x2_t     ; <<2 x float>> [#uses=1]
-  ret <2 x float> %1
-}
-
-define arm_aapcs_vfpcc <4 x i16> @test_vmul_lanes16(<4 x i16> %arg0_int16x4_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vmul_lanes16:
-; CHECK: vmul.i16 d0, d0, d1[1]
-  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
-  %1 = mul <4 x i16> %0, %arg0_int16x4_t          ; <<4 x i16>> [#uses=1]
-  ret <4 x i16> %1
-}
-
-define arm_aapcs_vfpcc <2 x i32> @test_vmul_lanes32(<2 x i32> %arg0_int32x2_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vmul_lanes32:
-; CHECK: vmul.i32 d0, d0, d1[1]
-  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
-  %1 = mul <2 x i32> %0, %arg0_int32x2_t          ; <<2 x i32>> [#uses=1]
-  ret <2 x i32> %1
-}
-
-define arm_aapcs_vfpcc <4 x float> @test_vmulQ_lanef32(<4 x float> %arg0_float32x4_t, <2 x float> %arg1_float32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vmulQ_lanef32:
-; CHECK: vmul.f32 q0, q0, d2[1]
-  %0 = shufflevector <2 x float> %arg1_float32x2_t, <2 x float> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x float>> [#uses=1]
-  %1 = fmul <4 x float> %0, %arg0_float32x4_t     ; <<4 x float>> [#uses=1]
-  ret <4 x float> %1
-}
-
-define arm_aapcs_vfpcc <8 x i16> @test_vmulQ_lanes16(<8 x i16> %arg0_int16x8_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vmulQ_lanes16:
-; CHECK: vmul.i16 q0, q0, d2[1]
-  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <8 x i32> <i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1>
-  %1 = mul <8 x i16> %0, %arg0_int16x8_t          ; <<8 x i16>> [#uses=1]
-  ret <8 x i16> %1
-}
-
-define arm_aapcs_vfpcc <4 x i32> @test_vmulQ_lanes32(<4 x i32> %arg0_int32x4_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vmulQ_lanes32:
-; CHECK: vmul.i32 q0, q0, d2[1]
-  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i32>> [#uses=1]
-  %1 = mul <4 x i32> %0, %arg0_int32x4_t          ; <<4 x i32>> [#uses=1]
-  ret <4 x i32> %1
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmull.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmull.ll
deleted file mode 100644
index c3bd141..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmull.ll
+++ /dev/null
@@ -1,67 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vmull\\.s8} %t | count 1
-; RUN: grep {vmull\\.s16} %t | count 1
-; RUN: grep {vmull\\.s32} %t | count 1
-; RUN: grep {vmull\\.u8} %t | count 1
-; RUN: grep {vmull\\.u16} %t | count 1
-; RUN: grep {vmull\\.u32} %t | count 1
-; RUN: grep {vmull\\.p8} %t | count 1
-
-define <8 x i16> @vmulls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vmulls.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vmulls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vmulls.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vmulls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vmulls.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-define <8 x i16> @vmullu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vmullu.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vmullu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vmullu.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vmullu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vmullu.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-define <8 x i16> @vmullp8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vmullp.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-declare <8 x i16> @llvm.arm.neon.vmulls.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vmulls.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vmulls.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i16> @llvm.arm.neon.vmullu.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vmullu.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vmullu.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i16>  @llvm.arm.neon.vmullp.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmull_lane.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmull_lane.ll
deleted file mode 100644
index 72cb3b1..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmull_lane.ll
+++ /dev/null
@@ -1,47 +0,0 @@
-; RUN: llc -mattr=+neon < %s | FileCheck %s
-target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32"
-target triple = "thumbv7-elf"
-
-define arm_aapcs_vfpcc <4 x i32> @test_vmull_lanes16(<4 x i16> %arg0_int16x4_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vmull_lanes16
-; CHECK: vmull.s16 q0, d0, d1[1]
-  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
-  %1 = tail call <4 x i32> @llvm.arm.neon.vmulls.v4i32(<4 x i16> %arg0_int16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
-  ret <4 x i32> %1
-}
-
-declare <4 x i32> @llvm.arm.neon.vmulls.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
-
-define arm_aapcs_vfpcc <2 x i64> @test_vmull_lanes32(<2 x i32> %arg0_int32x2_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vmull_lanes32
-; CHECK: vmull.s32 q0, d0, d1[1]
-  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
-  %1 = tail call <2 x i64> @llvm.arm.neon.vmulls.v2i64(<2 x i32> %arg0_int32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
-  ret <2 x i64> %1
-}
-
-declare <2 x i64> @llvm.arm.neon.vmulls.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
-
-define arm_aapcs_vfpcc <4 x i32> @test_vmull_laneu16(<4 x i16> %arg0_uint16x4_t, <4 x i16> %arg1_uint16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vmull_laneu16
-; CHECK: vmull.u16 q0, d0, d1[1]
-  %0 = shufflevector <4 x i16> %arg1_uint16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
-  %1 = tail call <4 x i32> @llvm.arm.neon.vmullu.v4i32(<4 x i16> %arg0_uint16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
-  ret <4 x i32> %1
-}
-
-declare <4 x i32> @llvm.arm.neon.vmullu.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
-
-define arm_aapcs_vfpcc <2 x i64> @test_vmull_laneu32(<2 x i32> %arg0_uint32x2_t, <2 x i32> %arg1_uint32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vmull_laneu32
-; CHECK: vmull.u32 q0, d0, d1[1]
-  %0 = shufflevector <2 x i32> %arg1_uint32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
-  %1 = tail call <2 x i64> @llvm.arm.neon.vmullu.v2i64(<2 x i32> %arg0_uint32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
-  ret <2 x i64> %1
-}
-
-declare <2 x i64> @llvm.arm.neon.vmullu.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vmvn.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vmvn.ll
deleted file mode 100644
index f71777b..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vmvn.ll
+++ /dev/null
@@ -1,51 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep vmvn %t | count 8
-; Note: function names do not include "vmvn" to allow simple grep for opcodes
-
-define <8 x i8> @v_mvni8(<8 x i8>* %A) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = xor <8 x i8> %tmp1, < i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1 >
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @v_mvni16(<4 x i16>* %A) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = xor <4 x i16> %tmp1, < i16 -1, i16 -1, i16 -1, i16 -1 >
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @v_mvni32(<2 x i32>* %A) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = xor <2 x i32> %tmp1, < i32 -1, i32 -1 >
-	ret <2 x i32> %tmp2
-}
-
-define <1 x i64> @v_mvni64(<1 x i64>* %A) nounwind {
-	%tmp1 = load <1 x i64>* %A
-	%tmp2 = xor <1 x i64> %tmp1, < i64 -1 >
-	ret <1 x i64> %tmp2
-}
-
-define <16 x i8> @v_mvnQi8(<16 x i8>* %A) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = xor <16 x i8> %tmp1, < i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1 >
-	ret <16 x i8> %tmp2
-}
-
-define <8 x i16> @v_mvnQi16(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = xor <8 x i16> %tmp1, < i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1 >
-	ret <8 x i16> %tmp2
-}
-
-define <4 x i32> @v_mvnQi32(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = xor <4 x i32> %tmp1, < i32 -1, i32 -1, i32 -1, i32 -1 >
-	ret <4 x i32> %tmp2
-}
-
-define <2 x i64> @v_mvnQi64(<2 x i64>* %A) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = xor <2 x i64> %tmp1, < i64 -1, i64 -1 >
-	ret <2 x i64> %tmp2
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vneg.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vneg.ll
index e5dc832..7764e87 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vneg.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vneg.ll
@@ -1,53 +1,121 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vneg\\.s8} %t | count 2
-; RUN: grep {vneg\\.s16} %t | count 2
-; RUN: grep {vneg\\.s32} %t | count 2
-; RUN: grep {vneg\\.f32} %t | count 2
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vnegs8(<8 x i8>* %A) nounwind {
+;CHECK: vnegs8:
+;CHECK: vneg.s8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = sub <8 x i8> zeroinitializer, %tmp1
 	ret <8 x i8> %tmp2
 }
 
 define <4 x i16> @vnegs16(<4 x i16>* %A) nounwind {
+;CHECK: vnegs16:
+;CHECK: vneg.s16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = sub <4 x i16> zeroinitializer, %tmp1
 	ret <4 x i16> %tmp2
 }
 
 define <2 x i32> @vnegs32(<2 x i32>* %A) nounwind {
+;CHECK: vnegs32:
+;CHECK: vneg.s32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = sub <2 x i32> zeroinitializer, %tmp1
 	ret <2 x i32> %tmp2
 }
 
 define <2 x float> @vnegf32(<2 x float>* %A) nounwind {
+;CHECK: vnegf32:
+;CHECK: vneg.f32
 	%tmp1 = load <2 x float>* %A
 	%tmp2 = sub <2 x float> < float -0.000000e+00, float -0.000000e+00 >, %tmp1
 	ret <2 x float> %tmp2
 }
 
 define <16 x i8> @vnegQs8(<16 x i8>* %A) nounwind {
+;CHECK: vnegQs8:
+;CHECK: vneg.s8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = sub <16 x i8> zeroinitializer, %tmp1
 	ret <16 x i8> %tmp2
 }
 
 define <8 x i16> @vnegQs16(<8 x i16>* %A) nounwind {
+;CHECK: vnegQs16:
+;CHECK: vneg.s16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = sub <8 x i16> zeroinitializer, %tmp1
 	ret <8 x i16> %tmp2
 }
 
 define <4 x i32> @vnegQs32(<4 x i32>* %A) nounwind {
+;CHECK: vnegQs32:
+;CHECK: vneg.s32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = sub <4 x i32> zeroinitializer, %tmp1
 	ret <4 x i32> %tmp2
 }
 
 define <4 x float> @vnegQf32(<4 x float>* %A) nounwind {
+;CHECK: vnegQf32:
+;CHECK: vneg.f32
 	%tmp1 = load <4 x float>* %A
 	%tmp2 = sub <4 x float> < float -0.000000e+00, float -0.000000e+00, float -0.000000e+00, float -0.000000e+00 >, %tmp1
 	ret <4 x float> %tmp2
 }
+
+define <8 x i8> @vqnegs8(<8 x i8>* %A) nounwind {
+;CHECK: vqnegs8:
+;CHECK: vqneg.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = call <8 x i8> @llvm.arm.neon.vqneg.v8i8(<8 x i8> %tmp1)
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vqnegs16(<4 x i16>* %A) nounwind {
+;CHECK: vqnegs16:
+;CHECK: vqneg.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vqneg.v4i16(<4 x i16> %tmp1)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vqnegs32(<2 x i32>* %A) nounwind {
+;CHECK: vqnegs32:
+;CHECK: vqneg.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vqneg.v2i32(<2 x i32> %tmp1)
+	ret <2 x i32> %tmp2
+}
+
+define <16 x i8> @vqnegQs8(<16 x i8>* %A) nounwind {
+;CHECK: vqnegQs8:
+;CHECK: vqneg.s8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = call <16 x i8> @llvm.arm.neon.vqneg.v16i8(<16 x i8> %tmp1)
+	ret <16 x i8> %tmp2
+}
+
+define <8 x i16> @vqnegQs16(<8 x i16>* %A) nounwind {
+;CHECK: vqnegQs16:
+;CHECK: vqneg.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <8 x i16> @llvm.arm.neon.vqneg.v8i16(<8 x i16> %tmp1)
+	ret <8 x i16> %tmp2
+}
+
+define <4 x i32> @vqnegQs32(<4 x i32>* %A) nounwind {
+;CHECK: vqnegQs32:
+;CHECK: vqneg.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i32> @llvm.arm.neon.vqneg.v4i32(<4 x i32> %tmp1)
+	ret <4 x i32> %tmp2
+}
+
+declare <8 x i8>  @llvm.arm.neon.vqneg.v8i8(<8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vqneg.v4i16(<4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vqneg.v2i32(<2 x i32>) nounwind readnone
+
+declare <16 x i8> @llvm.arm.neon.vqneg.v16i8(<16 x i8>) nounwind readnone
+declare <8 x i16> @llvm.arm.neon.vqneg.v8i16(<8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vqneg.v4i32(<4 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vorn.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vorn.ll
deleted file mode 100644
index 23cbbf0..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vorn.ll
+++ /dev/null
@@ -1,67 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep vorn %t | count 8
-; Note: function names do not include "vorn" to allow simple grep for opcodes
-
-define <8 x i8> @v_orni8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = xor <8 x i8> %tmp2, < i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1 >
-	%tmp4 = or <8 x i8> %tmp1, %tmp3
-	ret <8 x i8> %tmp4
-}
-
-define <4 x i16> @v_orni16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = xor <4 x i16> %tmp2, < i16 -1, i16 -1, i16 -1, i16 -1 >
-	%tmp4 = or <4 x i16> %tmp1, %tmp3
-	ret <4 x i16> %tmp4
-}
-
-define <2 x i32> @v_orni32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = xor <2 x i32> %tmp2, < i32 -1, i32 -1 >
-	%tmp4 = or <2 x i32> %tmp1, %tmp3
-	ret <2 x i32> %tmp4
-}
-
-define <1 x i64> @v_orni64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
-	%tmp1 = load <1 x i64>* %A
-	%tmp2 = load <1 x i64>* %B
-	%tmp3 = xor <1 x i64> %tmp2, < i64 -1 >
-	%tmp4 = or <1 x i64> %tmp1, %tmp3
-	ret <1 x i64> %tmp4
-}
-
-define <16 x i8> @v_ornQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = xor <16 x i8> %tmp2, < i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1, i8 -1 >
-	%tmp4 = or <16 x i8> %tmp1, %tmp3
-	ret <16 x i8> %tmp4
-}
-
-define <8 x i16> @v_ornQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = xor <8 x i16> %tmp2, < i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1, i16 -1 >
-	%tmp4 = or <8 x i16> %tmp1, %tmp3
-	ret <8 x i16> %tmp4
-}
-
-define <4 x i32> @v_ornQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = xor <4 x i32> %tmp2, < i32 -1, i32 -1, i32 -1, i32 -1 >
-	%tmp4 = or <4 x i32> %tmp1, %tmp3
-	ret <4 x i32> %tmp4
-}
-
-define <2 x i64> @v_ornQi64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i64>* %B
-	%tmp3 = xor <2 x i64> %tmp2, < i64 -1, i64 -1 >
-	%tmp4 = or <2 x i64> %tmp1, %tmp3
-	ret <2 x i64> %tmp4
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vorr.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vorr.ll
deleted file mode 100644
index 5788bb2..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vorr.ll
+++ /dev/null
@@ -1,59 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep vorr %t | count 8
-; Note: function names do not include "vorr" to allow simple grep for opcodes
-
-define <8 x i8> @v_orri8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = or <8 x i8> %tmp1, %tmp2
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @v_orri16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = or <4 x i16> %tmp1, %tmp2
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @v_orri32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = or <2 x i32> %tmp1, %tmp2
-	ret <2 x i32> %tmp3
-}
-
-define <1 x i64> @v_orri64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
-	%tmp1 = load <1 x i64>* %A
-	%tmp2 = load <1 x i64>* %B
-	%tmp3 = or <1 x i64> %tmp1, %tmp2
-	ret <1 x i64> %tmp3
-}
-
-define <16 x i8> @v_orrQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = or <16 x i8> %tmp1, %tmp2
-	ret <16 x i8> %tmp3
-}
-
-define <8 x i16> @v_orrQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = or <8 x i16> %tmp1, %tmp2
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @v_orrQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = or <4 x i32> %tmp1, %tmp2
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @v_orrQi64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i64>* %B
-	%tmp3 = or <2 x i64> %tmp1, %tmp2
-	ret <2 x i64> %tmp3
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vpadal.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vpadal.ll
index 8423b1b..7296e93 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vpadal.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vpadal.ll
@@ -1,12 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vpadal\\.s8} %t | count 2
-; RUN: grep {vpadal\\.s16} %t | count 2
-; RUN: grep {vpadal\\.s32} %t | count 2
-; RUN: grep {vpadal\\.u8} %t | count 2
-; RUN: grep {vpadal\\.u16} %t | count 2
-; RUN: grep {vpadal\\.u32} %t | count 2
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <4 x i16> @vpadals8(<4 x i16>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vpadals8:
+;CHECK: vpadal.s8
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vpadals.v4i16.v8i8(<4 x i16> %tmp1, <8 x i8> %tmp2)
@@ -14,6 +10,8 @@ define <4 x i16> @vpadals8(<4 x i16>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <2 x i32> @vpadals16(<2 x i32>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vpadals16:
+;CHECK: vpadal.s16
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vpadals.v2i32.v4i16(<2 x i32> %tmp1, <4 x i16> %tmp2)
@@ -21,6 +19,8 @@ define <2 x i32> @vpadals16(<2 x i32>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <1 x i64> @vpadals32(<1 x i64>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vpadals32:
+;CHECK: vpadal.s32
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <1 x i64> @llvm.arm.neon.vpadals.v1i64.v2i32(<1 x i64> %tmp1, <2 x i32> %tmp2)
@@ -28,6 +28,8 @@ define <1 x i64> @vpadals32(<1 x i64>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <4 x i16> @vpadalu8(<4 x i16>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vpadalu8:
+;CHECK: vpadal.u8
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vpadalu.v4i16.v8i8(<4 x i16> %tmp1, <8 x i8> %tmp2)
@@ -35,6 +37,8 @@ define <4 x i16> @vpadalu8(<4 x i16>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <2 x i32> @vpadalu16(<2 x i32>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vpadalu16:
+;CHECK: vpadal.u16
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vpadalu.v2i32.v4i16(<2 x i32> %tmp1, <4 x i16> %tmp2)
@@ -42,6 +46,8 @@ define <2 x i32> @vpadalu16(<2 x i32>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <1 x i64> @vpadalu32(<1 x i64>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vpadalu32:
+;CHECK: vpadal.u32
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <1 x i64> @llvm.arm.neon.vpadalu.v1i64.v2i32(<1 x i64> %tmp1, <2 x i32> %tmp2)
@@ -49,6 +55,8 @@ define <1 x i64> @vpadalu32(<1 x i64>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <8 x i16> @vpadalQs8(<8 x i16>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vpadalQs8:
+;CHECK: vpadal.s8
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <8 x i16> @llvm.arm.neon.vpadals.v8i16.v16i8(<8 x i16> %tmp1, <16 x i8> %tmp2)
@@ -56,6 +64,8 @@ define <8 x i16> @vpadalQs8(<8 x i16>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <4 x i32> @vpadalQs16(<4 x i32>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vpadalQs16:
+;CHECK: vpadal.s16
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = call <4 x i32> @llvm.arm.neon.vpadals.v4i32.v8i16(<4 x i32> %tmp1, <8 x i16> %tmp2)
@@ -63,6 +73,8 @@ define <4 x i32> @vpadalQs16(<4 x i32>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <2 x i64> @vpadalQs32(<2 x i64>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vpadalQs32:
+;CHECK: vpadal.s32
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = call <2 x i64> @llvm.arm.neon.vpadals.v2i64.v4i32(<2 x i64> %tmp1, <4 x i32> %tmp2)
@@ -70,6 +82,8 @@ define <2 x i64> @vpadalQs32(<2 x i64>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <8 x i16> @vpadalQu8(<8 x i16>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vpadalQu8:
+;CHECK: vpadal.u8
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <8 x i16> @llvm.arm.neon.vpadalu.v8i16.v16i8(<8 x i16> %tmp1, <16 x i8> %tmp2)
@@ -77,6 +91,8 @@ define <8 x i16> @vpadalQu8(<8 x i16>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <4 x i32> @vpadalQu16(<4 x i32>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vpadalQu16:
+;CHECK: vpadal.u16
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = call <4 x i32> @llvm.arm.neon.vpadalu.v4i32.v8i16(<4 x i32> %tmp1, <8 x i16> %tmp2)
@@ -84,6 +100,8 @@ define <4 x i32> @vpadalQu16(<4 x i32>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <2 x i64> @vpadalQu32(<2 x i64>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vpadalQu32:
+;CHECK: vpadal.u32
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = call <2 x i64> @llvm.arm.neon.vpadalu.v2i64.v4i32(<2 x i64> %tmp1, <4 x i32> %tmp2)
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vpadd.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vpadd.ll
index 3e6179d..2125573 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vpadd.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vpadd.ll
@@ -1,10 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vpadd\\.i8} %t | count 1
-; RUN: grep {vpadd\\.i16} %t | count 1
-; RUN: grep {vpadd\\.i32} %t | count 1
-; RUN: grep {vpadd\\.f32} %t | count 1
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vpaddi8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vpaddi8:
+;CHECK: vpadd.i8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <8 x i8> @llvm.arm.neon.vpadd.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
@@ -12,6 +10,8 @@ define <8 x i8> @vpaddi8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vpaddi16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vpaddi16:
+;CHECK: vpadd.i16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vpadd.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
@@ -19,6 +19,8 @@ define <4 x i16> @vpaddi16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vpaddi32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vpaddi32:
+;CHECK: vpadd.i32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vpadd.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
@@ -26,6 +28,8 @@ define <2 x i32> @vpaddi32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <2 x float> @vpaddf32(<2 x float>* %A, <2 x float>* %B) nounwind {
+;CHECK: vpaddf32:
+;CHECK: vpadd.f32
 	%tmp1 = load <2 x float>* %A
 	%tmp2 = load <2 x float>* %B
 	%tmp3 = call <2 x float> @llvm.arm.neon.vpadd.v2f32(<2 x float> %tmp1, <2 x float> %tmp2)
@@ -37,3 +41,115 @@ declare <4 x i16> @llvm.arm.neon.vpadd.v4i16(<4 x i16>, <4 x i16>) nounwind read
 declare <2 x i32> @llvm.arm.neon.vpadd.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
 
 declare <2 x float> @llvm.arm.neon.vpadd.v2f32(<2 x float>, <2 x float>) nounwind readnone
+
+define <4 x i16> @vpaddls8(<8 x i8>* %A) nounwind {
+;CHECK: vpaddls8:
+;CHECK: vpaddl.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vpaddls.v4i16.v8i8(<8 x i8> %tmp1)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vpaddls16(<4 x i16>* %A) nounwind {
+;CHECK: vpaddls16:
+;CHECK: vpaddl.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vpaddls.v2i32.v4i16(<4 x i16> %tmp1)
+	ret <2 x i32> %tmp2
+}
+
+define <1 x i64> @vpaddls32(<2 x i32>* %A) nounwind {
+;CHECK: vpaddls32:
+;CHECK: vpaddl.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = call <1 x i64> @llvm.arm.neon.vpaddls.v1i64.v2i32(<2 x i32> %tmp1)
+	ret <1 x i64> %tmp2
+}
+
+define <4 x i16> @vpaddlu8(<8 x i8>* %A) nounwind {
+;CHECK: vpaddlu8:
+;CHECK: vpaddl.u8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vpaddlu.v4i16.v8i8(<8 x i8> %tmp1)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vpaddlu16(<4 x i16>* %A) nounwind {
+;CHECK: vpaddlu16:
+;CHECK: vpaddl.u16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vpaddlu.v2i32.v4i16(<4 x i16> %tmp1)
+	ret <2 x i32> %tmp2
+}
+
+define <1 x i64> @vpaddlu32(<2 x i32>* %A) nounwind {
+;CHECK: vpaddlu32:
+;CHECK: vpaddl.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = call <1 x i64> @llvm.arm.neon.vpaddlu.v1i64.v2i32(<2 x i32> %tmp1)
+	ret <1 x i64> %tmp2
+}
+
+define <8 x i16> @vpaddlQs8(<16 x i8>* %A) nounwind {
+;CHECK: vpaddlQs8:
+;CHECK: vpaddl.s8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = call <8 x i16> @llvm.arm.neon.vpaddls.v8i16.v16i8(<16 x i8> %tmp1)
+	ret <8 x i16> %tmp2
+}
+
+define <4 x i32> @vpaddlQs16(<8 x i16>* %A) nounwind {
+;CHECK: vpaddlQs16:
+;CHECK: vpaddl.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <4 x i32> @llvm.arm.neon.vpaddls.v4i32.v8i16(<8 x i16> %tmp1)
+	ret <4 x i32> %tmp2
+}
+
+define <2 x i64> @vpaddlQs32(<4 x i32>* %A) nounwind {
+;CHECK: vpaddlQs32:
+;CHECK: vpaddl.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <2 x i64> @llvm.arm.neon.vpaddls.v2i64.v4i32(<4 x i32> %tmp1)
+	ret <2 x i64> %tmp2
+}
+
+define <8 x i16> @vpaddlQu8(<16 x i8>* %A) nounwind {
+;CHECK: vpaddlQu8:
+;CHECK: vpaddl.u8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = call <8 x i16> @llvm.arm.neon.vpaddlu.v8i16.v16i8(<16 x i8> %tmp1)
+	ret <8 x i16> %tmp2
+}
+
+define <4 x i32> @vpaddlQu16(<8 x i16>* %A) nounwind {
+;CHECK: vpaddlQu16:
+;CHECK: vpaddl.u16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <4 x i32> @llvm.arm.neon.vpaddlu.v4i32.v8i16(<8 x i16> %tmp1)
+	ret <4 x i32> %tmp2
+}
+
+define <2 x i64> @vpaddlQu32(<4 x i32>* %A) nounwind {
+;CHECK: vpaddlQu32:
+;CHECK: vpaddl.u32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <2 x i64> @llvm.arm.neon.vpaddlu.v2i64.v4i32(<4 x i32> %tmp1)
+	ret <2 x i64> %tmp2
+}
+
+declare <4 x i16> @llvm.arm.neon.vpaddls.v4i16.v8i8(<8 x i8>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vpaddls.v2i32.v4i16(<4 x i16>) nounwind readnone
+declare <1 x i64> @llvm.arm.neon.vpaddls.v1i64.v2i32(<2 x i32>) nounwind readnone
+
+declare <4 x i16> @llvm.arm.neon.vpaddlu.v4i16.v8i8(<8 x i8>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vpaddlu.v2i32.v4i16(<4 x i16>) nounwind readnone
+declare <1 x i64> @llvm.arm.neon.vpaddlu.v1i64.v2i32(<2 x i32>) nounwind readnone
+
+declare <8 x i16> @llvm.arm.neon.vpaddls.v8i16.v16i8(<16 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vpaddls.v4i32.v8i16(<8 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vpaddls.v2i64.v4i32(<4 x i32>) nounwind readnone
+
+declare <8 x i16> @llvm.arm.neon.vpaddlu.v8i16.v16i8(<16 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vpaddlu.v4i32.v8i16(<8 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vpaddlu.v2i64.v4i32(<4 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vpaddl.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vpaddl.ll
deleted file mode 100644
index d975710..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vpaddl.ll
+++ /dev/null
@@ -1,95 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vpaddl\\.s8} %t | count 2
-; RUN: grep {vpaddl\\.s16} %t | count 2
-; RUN: grep {vpaddl\\.s32} %t | count 2
-; RUN: grep {vpaddl\\.u8} %t | count 2
-; RUN: grep {vpaddl\\.u16} %t | count 2
-; RUN: grep {vpaddl\\.u32} %t | count 2
-
-define <4 x i16> @vpaddls8(<8 x i8>* %A) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vpaddls.v4i16.v8i8(<8 x i8> %tmp1)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vpaddls16(<4 x i16>* %A) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vpaddls.v2i32.v4i16(<4 x i16> %tmp1)
-	ret <2 x i32> %tmp2
-}
-
-define <1 x i64> @vpaddls32(<2 x i32>* %A) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = call <1 x i64> @llvm.arm.neon.vpaddls.v1i64.v2i32(<2 x i32> %tmp1)
-	ret <1 x i64> %tmp2
-}
-
-define <4 x i16> @vpaddlu8(<8 x i8>* %A) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vpaddlu.v4i16.v8i8(<8 x i8> %tmp1)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vpaddlu16(<4 x i16>* %A) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vpaddlu.v2i32.v4i16(<4 x i16> %tmp1)
-	ret <2 x i32> %tmp2
-}
-
-define <1 x i64> @vpaddlu32(<2 x i32>* %A) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = call <1 x i64> @llvm.arm.neon.vpaddlu.v1i64.v2i32(<2 x i32> %tmp1)
-	ret <1 x i64> %tmp2
-}
-
-define <8 x i16> @vpaddlQs8(<16 x i8>* %A) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = call <8 x i16> @llvm.arm.neon.vpaddls.v8i16.v16i8(<16 x i8> %tmp1)
-	ret <8 x i16> %tmp2
-}
-
-define <4 x i32> @vpaddlQs16(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <4 x i32> @llvm.arm.neon.vpaddls.v4i32.v8i16(<8 x i16> %tmp1)
-	ret <4 x i32> %tmp2
-}
-
-define <2 x i64> @vpaddlQs32(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <2 x i64> @llvm.arm.neon.vpaddls.v2i64.v4i32(<4 x i32> %tmp1)
-	ret <2 x i64> %tmp2
-}
-
-define <8 x i16> @vpaddlQu8(<16 x i8>* %A) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = call <8 x i16> @llvm.arm.neon.vpaddlu.v8i16.v16i8(<16 x i8> %tmp1)
-	ret <8 x i16> %tmp2
-}
-
-define <4 x i32> @vpaddlQu16(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <4 x i32> @llvm.arm.neon.vpaddlu.v4i32.v8i16(<8 x i16> %tmp1)
-	ret <4 x i32> %tmp2
-}
-
-define <2 x i64> @vpaddlQu32(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <2 x i64> @llvm.arm.neon.vpaddlu.v2i64.v4i32(<4 x i32> %tmp1)
-	ret <2 x i64> %tmp2
-}
-
-declare <4 x i16> @llvm.arm.neon.vpaddls.v4i16.v8i8(<8 x i8>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vpaddls.v2i32.v4i16(<4 x i16>) nounwind readnone
-declare <1 x i64> @llvm.arm.neon.vpaddls.v1i64.v2i32(<2 x i32>) nounwind readnone
-
-declare <4 x i16> @llvm.arm.neon.vpaddlu.v4i16.v8i8(<8 x i8>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vpaddlu.v2i32.v4i16(<4 x i16>) nounwind readnone
-declare <1 x i64> @llvm.arm.neon.vpaddlu.v1i64.v2i32(<2 x i32>) nounwind readnone
-
-declare <8 x i16> @llvm.arm.neon.vpaddls.v8i16.v16i8(<16 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vpaddls.v4i32.v8i16(<8 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vpaddls.v2i64.v4i32(<4 x i32>) nounwind readnone
-
-declare <8 x i16> @llvm.arm.neon.vpaddlu.v8i16.v16i8(<16 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vpaddlu.v4i32.v8i16(<8 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vpaddlu.v2i64.v4i32(<4 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vpmax.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vpmax.ll
deleted file mode 100644
index 8f6fb57..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vpmax.ll
+++ /dev/null
@@ -1,67 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vpmax\\.s8} %t | count 1
-; RUN: grep {vpmax\\.s16} %t | count 1
-; RUN: grep {vpmax\\.s32} %t | count 1
-; RUN: grep {vpmax\\.u8} %t | count 1
-; RUN: grep {vpmax\\.u16} %t | count 1
-; RUN: grep {vpmax\\.u32} %t | count 1
-; RUN: grep {vpmax\\.f32} %t | count 1
-
-define <8 x i8> @vpmaxs8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vpmaxs.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vpmaxs16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vpmaxs.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vpmaxs32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vpmaxs.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <8 x i8> @vpmaxu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vpmaxu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vpmaxu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vpmaxu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vpmaxu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vpmaxu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <2 x float> @vpmaxf32(<2 x float>* %A, <2 x float>* %B) nounwind {
-	%tmp1 = load <2 x float>* %A
-	%tmp2 = load <2 x float>* %B
-	%tmp3 = call <2 x float> @llvm.arm.neon.vpmaxs.v2f32(<2 x float> %tmp1, <2 x float> %tmp2)
-	ret <2 x float> %tmp3
-}
-
-declare <8 x i8>  @llvm.arm.neon.vpmaxs.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vpmaxs.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vpmaxs.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i8>  @llvm.arm.neon.vpmaxu.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vpmaxu.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vpmaxu.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <2 x float> @llvm.arm.neon.vpmaxs.v2f32(<2 x float>, <2 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vpmin.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vpmin.ll
deleted file mode 100644
index 3771258..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vpmin.ll
+++ /dev/null
@@ -1,67 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vpmin\\.s8} %t | count 1
-; RUN: grep {vpmin\\.s16} %t | count 1
-; RUN: grep {vpmin\\.s32} %t | count 1
-; RUN: grep {vpmin\\.u8} %t | count 1
-; RUN: grep {vpmin\\.u16} %t | count 1
-; RUN: grep {vpmin\\.u32} %t | count 1
-; RUN: grep {vpmin\\.f32} %t | count 1
-
-define <8 x i8> @vpmins8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vpmins.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vpmins16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vpmins.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vpmins32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vpmins.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <8 x i8> @vpminu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vpminu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vpminu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vpminu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vpminu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vpminu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <2 x float> @vpminf32(<2 x float>* %A, <2 x float>* %B) nounwind {
-	%tmp1 = load <2 x float>* %A
-	%tmp2 = load <2 x float>* %B
-	%tmp3 = call <2 x float> @llvm.arm.neon.vpmins.v2f32(<2 x float> %tmp1, <2 x float> %tmp2)
-	ret <2 x float> %tmp3
-}
-
-declare <8 x i8>  @llvm.arm.neon.vpmins.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vpmins.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vpmins.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i8>  @llvm.arm.neon.vpminu.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vpminu.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vpminu.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <2 x float> @llvm.arm.neon.vpmins.v2f32(<2 x float>, <2 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vpminmax.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vpminmax.ll
new file mode 100644
index 0000000..b75bcc9
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vpminmax.ll
@@ -0,0 +1,147 @@
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
+
+define <8 x i8> @vpmins8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vpmins8:
+;CHECK: vpmin.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vpmins.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vpmins16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vpmins16:
+;CHECK: vpmin.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vpmins.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vpmins32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vpmins32:
+;CHECK: vpmin.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vpmins.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <8 x i8> @vpminu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vpminu8:
+;CHECK: vpmin.u8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vpminu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vpminu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vpminu16:
+;CHECK: vpmin.u16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vpminu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vpminu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vpminu32:
+;CHECK: vpmin.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vpminu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <2 x float> @vpminf32(<2 x float>* %A, <2 x float>* %B) nounwind {
+;CHECK: vpminf32:
+;CHECK: vpmin.f32
+	%tmp1 = load <2 x float>* %A
+	%tmp2 = load <2 x float>* %B
+	%tmp3 = call <2 x float> @llvm.arm.neon.vpmins.v2f32(<2 x float> %tmp1, <2 x float> %tmp2)
+	ret <2 x float> %tmp3
+}
+
+declare <8 x i8>  @llvm.arm.neon.vpmins.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vpmins.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vpmins.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i8>  @llvm.arm.neon.vpminu.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vpminu.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vpminu.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <2 x float> @llvm.arm.neon.vpmins.v2f32(<2 x float>, <2 x float>) nounwind readnone
+
+define <8 x i8> @vpmaxs8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vpmaxs8:
+;CHECK: vpmax.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vpmaxs.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vpmaxs16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vpmaxs16:
+;CHECK: vpmax.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vpmaxs.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vpmaxs32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vpmaxs32:
+;CHECK: vpmax.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vpmaxs.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <8 x i8> @vpmaxu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vpmaxu8:
+;CHECK: vpmax.u8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vpmaxu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vpmaxu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vpmaxu16:
+;CHECK: vpmax.u16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vpmaxu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vpmaxu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vpmaxu32:
+;CHECK: vpmax.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vpmaxu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <2 x float> @vpmaxf32(<2 x float>* %A, <2 x float>* %B) nounwind {
+;CHECK: vpmaxf32:
+;CHECK: vpmax.f32
+	%tmp1 = load <2 x float>* %A
+	%tmp2 = load <2 x float>* %B
+	%tmp3 = call <2 x float> @llvm.arm.neon.vpmaxs.v2f32(<2 x float> %tmp1, <2 x float> %tmp2)
+	ret <2 x float> %tmp3
+}
+
+declare <8 x i8>  @llvm.arm.neon.vpmaxs.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vpmaxs.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vpmaxs.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i8>  @llvm.arm.neon.vpmaxu.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vpmaxu.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vpmaxu.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <2 x float> @llvm.arm.neon.vpmaxs.v2f32(<2 x float>, <2 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqRdmulh_lane.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqRdmulh_lane.ll
deleted file mode 100644
index f308c52..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqRdmulh_lane.ll
+++ /dev/null
@@ -1,47 +0,0 @@
-; RUN: llc -mattr=+neon < %s | FileCheck %s
-target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32"
-target triple = "thumbv7-elf"
-
-define arm_aapcs_vfpcc <8 x i16> @test_vqRdmulhQ_lanes16(<8 x i16> %arg0_int16x8_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vqRdmulhQ_lanes16
-; CHECK: vqrdmulh.s16 q0, q0, d2[1]
-  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <8 x i32> <i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1> ; <<8 x i16>> [#uses=1]
-  %1 = tail call <8 x i16> @llvm.arm.neon.vqrdmulh.v8i16(<8 x i16> %arg0_int16x8_t, <8 x i16> %0) ; <<8 x i16>> [#uses=1]
-  ret <8 x i16> %1
-}
-
-declare <8 x i16> @llvm.arm.neon.vqrdmulh.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
-
-define arm_aapcs_vfpcc <4 x i32> @test_vqRdmulhQ_lanes32(<4 x i32> %arg0_int32x4_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vqRdmulhQ_lanes32
-; CHECK: vqrdmulh.s32 q0, q0, d2[1]
-  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i32>> [#uses=1]
-  %1 = tail call <4 x i32> @llvm.arm.neon.vqrdmulh.v4i32(<4 x i32> %arg0_int32x4_t, <4 x i32> %0) ; <<4 x i32>> [#uses=1]
-  ret <4 x i32> %1
-}
-
-declare <4 x i32> @llvm.arm.neon.vqrdmulh.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
-
-define arm_aapcs_vfpcc <4 x i16> @test_vqRdmulh_lanes16(<4 x i16> %arg0_int16x4_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vqRdmulh_lanes16
-; CHECK: vqrdmulh.s16 d0, d0, d1[1]
-  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
-  %1 = tail call <4 x i16> @llvm.arm.neon.vqrdmulh.v4i16(<4 x i16> %arg0_int16x4_t, <4 x i16> %0) ; <<4 x i16>> [#uses=1]
-  ret <4 x i16> %1
-}
-
-declare <4 x i16> @llvm.arm.neon.vqrdmulh.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-
-define arm_aapcs_vfpcc <2 x i32> @test_vqRdmulh_lanes32(<2 x i32> %arg0_int32x2_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vqRdmulh_lanes32
-; CHECK: vqrdmulh.s32 d0, d0, d1[1]
-  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
-  %1 = tail call <2 x i32> @llvm.arm.neon.vqrdmulh.v2i32(<2 x i32> %arg0_int32x2_t, <2 x i32> %0) ; <<2 x i32>> [#uses=1]
-  ret <2 x i32> %1
-}
-
-declare <2 x i32> @llvm.arm.neon.vqrdmulh.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqabs.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqabs.ll
deleted file mode 100644
index 84e5938..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqabs.ll
+++ /dev/null
@@ -1,48 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vqabs\\.s8} %t | count 2
-; RUN: grep {vqabs\\.s16} %t | count 2
-; RUN: grep {vqabs\\.s32} %t | count 2
-
-define <8 x i8> @vqabss8(<8 x i8>* %A) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = call <8 x i8> @llvm.arm.neon.vqabs.v8i8(<8 x i8> %tmp1)
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vqabss16(<4 x i16>* %A) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vqabs.v4i16(<4 x i16> %tmp1)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vqabss32(<2 x i32>* %A) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vqabs.v2i32(<2 x i32> %tmp1)
-	ret <2 x i32> %tmp2
-}
-
-define <16 x i8> @vqabsQs8(<16 x i8>* %A) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = call <16 x i8> @llvm.arm.neon.vqabs.v16i8(<16 x i8> %tmp1)
-	ret <16 x i8> %tmp2
-}
-
-define <8 x i16> @vqabsQs16(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <8 x i16> @llvm.arm.neon.vqabs.v8i16(<8 x i16> %tmp1)
-	ret <8 x i16> %tmp2
-}
-
-define <4 x i32> @vqabsQs32(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i32> @llvm.arm.neon.vqabs.v4i32(<4 x i32> %tmp1)
-	ret <4 x i32> %tmp2
-}
-
-declare <8 x i8>  @llvm.arm.neon.vqabs.v8i8(<8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vqabs.v4i16(<4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vqabs.v2i32(<2 x i32>) nounwind readnone
-
-declare <16 x i8> @llvm.arm.neon.vqabs.v16i8(<16 x i8>) nounwind readnone
-declare <8 x i16> @llvm.arm.neon.vqabs.v8i16(<8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vqabs.v4i32(<4 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqadd.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqadd.ll
index bce677a..a1669b6 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqadd.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vqadd.ll
@@ -1,14 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vqadd\\.s8} %t | count 2
-; RUN: grep {vqadd\\.s16} %t | count 2
-; RUN: grep {vqadd\\.s32} %t | count 2
-; RUN: grep {vqadd\\.s64} %t | count 2
-; RUN: grep {vqadd\\.u8} %t | count 2
-; RUN: grep {vqadd\\.u16} %t | count 2
-; RUN: grep {vqadd\\.u32} %t | count 2
-; RUN: grep {vqadd\\.u64} %t | count 2
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vqadds8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vqadds8:
+;CHECK: vqadd.s8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <8 x i8> @llvm.arm.neon.vqadds.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
@@ -16,6 +10,8 @@ define <8 x i8> @vqadds8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vqadds16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vqadds16:
+;CHECK: vqadd.s16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vqadds.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
@@ -23,6 +19,8 @@ define <4 x i16> @vqadds16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vqadds32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vqadds32:
+;CHECK: vqadd.s32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vqadds.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
@@ -30,6 +28,8 @@ define <2 x i32> @vqadds32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vqadds64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vqadds64:
+;CHECK: vqadd.s64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = call <1 x i64> @llvm.arm.neon.vqadds.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
@@ -37,6 +37,8 @@ define <1 x i64> @vqadds64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <8 x i8> @vqaddu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vqaddu8:
+;CHECK: vqadd.u8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <8 x i8> @llvm.arm.neon.vqaddu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
@@ -44,6 +46,8 @@ define <8 x i8> @vqaddu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vqaddu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vqaddu16:
+;CHECK: vqadd.u16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vqaddu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
@@ -51,6 +55,8 @@ define <4 x i16> @vqaddu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vqaddu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vqaddu32:
+;CHECK: vqadd.u32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vqaddu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
@@ -58,6 +64,8 @@ define <2 x i32> @vqaddu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vqaddu64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vqaddu64:
+;CHECK: vqadd.u64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = call <1 x i64> @llvm.arm.neon.vqaddu.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
@@ -65,6 +73,8 @@ define <1 x i64> @vqaddu64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vqaddQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vqaddQs8:
+;CHECK: vqadd.s8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <16 x i8> @llvm.arm.neon.vqadds.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
@@ -72,6 +82,8 @@ define <16 x i8> @vqaddQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vqaddQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vqaddQs16:
+;CHECK: vqadd.s16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = call <8 x i16> @llvm.arm.neon.vqadds.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
@@ -79,6 +91,8 @@ define <8 x i16> @vqaddQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vqaddQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vqaddQs32:
+;CHECK: vqadd.s32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = call <4 x i32> @llvm.arm.neon.vqadds.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
@@ -86,6 +100,8 @@ define <4 x i32> @vqaddQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vqaddQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vqaddQs64:
+;CHECK: vqadd.s64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = call <2 x i64> @llvm.arm.neon.vqadds.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
@@ -93,6 +109,8 @@ define <2 x i64> @vqaddQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vqaddQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vqaddQu8:
+;CHECK: vqadd.u8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <16 x i8> @llvm.arm.neon.vqaddu.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
@@ -100,6 +118,8 @@ define <16 x i8> @vqaddQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vqaddQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vqaddQu16:
+;CHECK: vqadd.u16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = call <8 x i16> @llvm.arm.neon.vqaddu.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
@@ -107,6 +127,8 @@ define <8 x i16> @vqaddQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vqaddQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vqaddQu32:
+;CHECK: vqadd.u32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = call <4 x i32> @llvm.arm.neon.vqaddu.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
@@ -114,6 +136,8 @@ define <4 x i32> @vqaddQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vqaddQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vqaddQu64:
+;CHECK: vqadd.u64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = call <2 x i64> @llvm.arm.neon.vqaddu.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmlal.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqdmlal.ll
deleted file mode 100644
index 3f9cde2..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmlal.ll
+++ /dev/null
@@ -1,22 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vqdmlal\\.s16} %t | count 1
-; RUN: grep {vqdmlal\\.s32} %t | count 1
-
-define <4 x i32> @vqdmlals16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = load <4 x i16>* %C
-	%tmp4 = call <4 x i32> @llvm.arm.neon.vqdmlal.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
-	ret <4 x i32> %tmp4
-}
-
-define <2 x i64> @vqdmlals32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = load <2 x i32>* %C
-	%tmp4 = call <2 x i64> @llvm.arm.neon.vqdmlal.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
-	ret <2 x i64> %tmp4
-}
-
-declare <4 x i32>  @llvm.arm.neon.vqdmlal.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64>  @llvm.arm.neon.vqdmlal.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmlal_lanes.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqdmlal_lanes.ll
deleted file mode 100644
index ff532f3..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmlal_lanes.ll
+++ /dev/null
@@ -1,25 +0,0 @@
-; RUN: llc -mattr=+neon < %s | FileCheck %s
-target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32"
-target triple = "thumbv7-elf"
-
-define arm_aapcs_vfpcc <4 x i32> @test_vqdmlal_lanes16(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %arg2_int16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vqdmlal_lanes16
-; CHECK: vqdmlal.s16 q0, d2, d3[1]
-  %0 = shufflevector <4 x i16> %arg2_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
-  %1 = tail call <4 x i32> @llvm.arm.neon.vqdmlal.v4i32(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
-  ret <4 x i32> %1
-}
-
-declare <4 x i32> @llvm.arm.neon.vqdmlal.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
-
-define arm_aapcs_vfpcc <2 x i64> @test_vqdmlal_lanes32(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %arg2_int32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vqdmlal_lanes32
-; CHECK: vqdmlal.s32 q0, d2, d3[1]
-  %0 = shufflevector <2 x i32> %arg2_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
-  %1 = tail call <2 x i64> @llvm.arm.neon.vqdmlal.v2i64(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
-  ret <2 x i64> %1
-}
-
-declare <2 x i64> @llvm.arm.neon.vqdmlal.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmlsl.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqdmlsl.ll
deleted file mode 100644
index 2802916..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmlsl.ll
+++ /dev/null
@@ -1,22 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vqdmlsl\\.s16} %t | count 1
-; RUN: grep {vqdmlsl\\.s32} %t | count 1
-
-define <4 x i32> @vqdmlsls16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = load <4 x i16>* %C
-	%tmp4 = call <4 x i32> @llvm.arm.neon.vqdmlsl.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
-	ret <4 x i32> %tmp4
-}
-
-define <2 x i64> @vqdmlsls32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = load <2 x i32>* %C
-	%tmp4 = call <2 x i64> @llvm.arm.neon.vqdmlsl.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
-	ret <2 x i64> %tmp4
-}
-
-declare <4 x i32>  @llvm.arm.neon.vqdmlsl.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64>  @llvm.arm.neon.vqdmlsl.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmlsl_lanes.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqdmlsl_lanes.ll
deleted file mode 100644
index 1a834ff..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmlsl_lanes.ll
+++ /dev/null
@@ -1,25 +0,0 @@
-; RUN: llc -mattr=+neon < %s | FileCheck %s
-target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32"
-target triple = "thumbv7-elf"
-
-define arm_aapcs_vfpcc <4 x i32> @test_vqdmlsl_lanes16(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %arg2_int16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vqdmlsl_lanes16
-; CHECK: vqdmlsl.s16 q0, d2, d3[1]
-  %0 = shufflevector <4 x i16> %arg2_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
-  %1 = tail call <4 x i32> @llvm.arm.neon.vqdmlsl.v4i32(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
-  ret <4 x i32> %1
-}
-
-declare <4 x i32> @llvm.arm.neon.vqdmlsl.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
-
-define arm_aapcs_vfpcc <2 x i64> @test_vqdmlsl_lanes32(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %arg2_int32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vqdmlsl_lanes32
-; CHECK: vqdmlsl.s32 q0, d2, d3[1]
-  %0 = shufflevector <2 x i32> %arg2_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
-  %1 = tail call <2 x i64> @llvm.arm.neon.vqdmlsl.v2i64(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
-  ret <2 x i64> %1
-}
-
-declare <2 x i64> @llvm.arm.neon.vqdmlsl.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmul.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqdmul.ll
new file mode 100644
index 0000000..8dcc7f7
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vqdmul.ll
@@ -0,0 +1,280 @@
+; RUN: llc -mattr=+neon < %s | FileCheck %s
+target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32"
+target triple = "thumbv7-elf"
+
+define <4 x i16> @vqdmulhs16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vqdmulhs16:
+;CHECK: vqdmulh.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vqdmulh.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vqdmulhs32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vqdmulhs32:
+;CHECK: vqdmulh.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vqdmulh.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <8 x i16> @vqdmulhQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vqdmulhQs16:
+;CHECK: vqdmulh.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vqdmulh.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vqdmulhQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vqdmulhQs32:
+;CHECK: vqdmulh.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vqdmulh.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define arm_aapcs_vfpcc <8 x i16> @test_vqdmulhQ_lanes16(<8 x i16> %arg0_int16x8_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vqdmulhQ_lanes16
+; CHECK: vqdmulh.s16 q0, q0, d2[1]
+  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <8 x i32> <i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1> ; <<8 x i16>> [#uses=1]
+  %1 = tail call <8 x i16> @llvm.arm.neon.vqdmulh.v8i16(<8 x i16> %arg0_int16x8_t, <8 x i16> %0) ; <<8 x i16>> [#uses=1]
+  ret <8 x i16> %1
+}
+
+define arm_aapcs_vfpcc <4 x i32> @test_vqdmulhQ_lanes32(<4 x i32> %arg0_int32x4_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vqdmulhQ_lanes32
+; CHECK: vqdmulh.s32 q0, q0, d2[1]
+  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i32>> [#uses=1]
+  %1 = tail call <4 x i32> @llvm.arm.neon.vqdmulh.v4i32(<4 x i32> %arg0_int32x4_t, <4 x i32> %0) ; <<4 x i32>> [#uses=1]
+  ret <4 x i32> %1
+}
+
+define arm_aapcs_vfpcc <4 x i16> @test_vqdmulh_lanes16(<4 x i16> %arg0_int16x4_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vqdmulh_lanes16
+; CHECK: vqdmulh.s16 d0, d0, d1[1]
+  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
+  %1 = tail call <4 x i16> @llvm.arm.neon.vqdmulh.v4i16(<4 x i16> %arg0_int16x4_t, <4 x i16> %0) ; <<4 x i16>> [#uses=1]
+  ret <4 x i16> %1
+}
+
+define arm_aapcs_vfpcc <2 x i32> @test_vqdmulh_lanes32(<2 x i32> %arg0_int32x2_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vqdmulh_lanes32
+; CHECK: vqdmulh.s32 d0, d0, d1[1]
+  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
+  %1 = tail call <2 x i32> @llvm.arm.neon.vqdmulh.v2i32(<2 x i32> %arg0_int32x2_t, <2 x i32> %0) ; <<2 x i32>> [#uses=1]
+  ret <2 x i32> %1
+}
+
+declare <4 x i16> @llvm.arm.neon.vqdmulh.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vqdmulh.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i16> @llvm.arm.neon.vqdmulh.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vqdmulh.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
+
+define <4 x i16> @vqrdmulhs16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vqrdmulhs16:
+;CHECK: vqrdmulh.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vqrdmulh.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vqrdmulhs32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vqrdmulhs32:
+;CHECK: vqrdmulh.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vqrdmulh.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <8 x i16> @vqrdmulhQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vqrdmulhQs16:
+;CHECK: vqrdmulh.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vqrdmulh.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vqrdmulhQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vqrdmulhQs32:
+;CHECK: vqrdmulh.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vqrdmulh.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define arm_aapcs_vfpcc <8 x i16> @test_vqRdmulhQ_lanes16(<8 x i16> %arg0_int16x8_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vqRdmulhQ_lanes16
+; CHECK: vqrdmulh.s16 q0, q0, d2[1]
+  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <8 x i32> <i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1> ; <<8 x i16>> [#uses=1]
+  %1 = tail call <8 x i16> @llvm.arm.neon.vqrdmulh.v8i16(<8 x i16> %arg0_int16x8_t, <8 x i16> %0) ; <<8 x i16>> [#uses=1]
+  ret <8 x i16> %1
+}
+
+define arm_aapcs_vfpcc <4 x i32> @test_vqRdmulhQ_lanes32(<4 x i32> %arg0_int32x4_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vqRdmulhQ_lanes32
+; CHECK: vqrdmulh.s32 q0, q0, d2[1]
+  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i32>> [#uses=1]
+  %1 = tail call <4 x i32> @llvm.arm.neon.vqrdmulh.v4i32(<4 x i32> %arg0_int32x4_t, <4 x i32> %0) ; <<4 x i32>> [#uses=1]
+  ret <4 x i32> %1
+}
+
+define arm_aapcs_vfpcc <4 x i16> @test_vqRdmulh_lanes16(<4 x i16> %arg0_int16x4_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vqRdmulh_lanes16
+; CHECK: vqrdmulh.s16 d0, d0, d1[1]
+  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
+  %1 = tail call <4 x i16> @llvm.arm.neon.vqrdmulh.v4i16(<4 x i16> %arg0_int16x4_t, <4 x i16> %0) ; <<4 x i16>> [#uses=1]
+  ret <4 x i16> %1
+}
+
+define arm_aapcs_vfpcc <2 x i32> @test_vqRdmulh_lanes32(<2 x i32> %arg0_int32x2_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vqRdmulh_lanes32
+; CHECK: vqrdmulh.s32 d0, d0, d1[1]
+  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
+  %1 = tail call <2 x i32> @llvm.arm.neon.vqrdmulh.v2i32(<2 x i32> %arg0_int32x2_t, <2 x i32> %0) ; <<2 x i32>> [#uses=1]
+  ret <2 x i32> %1
+}
+
+declare <4 x i16> @llvm.arm.neon.vqrdmulh.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vqrdmulh.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i16> @llvm.arm.neon.vqrdmulh.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vqrdmulh.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
+
+define <4 x i32> @vqdmulls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vqdmulls16:
+;CHECK: vqdmull.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vqdmull.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vqdmulls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vqdmulls32:
+;CHECK: vqdmull.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vqdmull.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+define arm_aapcs_vfpcc <4 x i32> @test_vqdmull_lanes16(<4 x i16> %arg0_int16x4_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vqdmull_lanes16
+; CHECK: vqdmull.s16 q0, d0, d1[1]
+  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
+  %1 = tail call <4 x i32> @llvm.arm.neon.vqdmull.v4i32(<4 x i16> %arg0_int16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
+  ret <4 x i32> %1
+}
+
+define arm_aapcs_vfpcc <2 x i64> @test_vqdmull_lanes32(<2 x i32> %arg0_int32x2_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vqdmull_lanes32
+; CHECK: vqdmull.s32 q0, d0, d1[1]
+  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
+  %1 = tail call <2 x i64> @llvm.arm.neon.vqdmull.v2i64(<2 x i32> %arg0_int32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
+  ret <2 x i64> %1
+}
+
+declare <4 x i32>  @llvm.arm.neon.vqdmull.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64>  @llvm.arm.neon.vqdmull.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
+
+define <4 x i32> @vqdmlals16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
+;CHECK: vqdmlals16:
+;CHECK: vqdmlal.s16
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = load <4 x i16>* %C
+	%tmp4 = call <4 x i32> @llvm.arm.neon.vqdmlal.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
+	ret <4 x i32> %tmp4
+}
+
+define <2 x i64> @vqdmlals32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
+;CHECK: vqdmlals32:
+;CHECK: vqdmlal.s32
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = load <2 x i32>* %C
+	%tmp4 = call <2 x i64> @llvm.arm.neon.vqdmlal.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
+	ret <2 x i64> %tmp4
+}
+
+define arm_aapcs_vfpcc <4 x i32> @test_vqdmlal_lanes16(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %arg2_int16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vqdmlal_lanes16
+; CHECK: vqdmlal.s16 q0, d2, d3[1]
+  %0 = shufflevector <4 x i16> %arg2_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
+  %1 = tail call <4 x i32> @llvm.arm.neon.vqdmlal.v4i32(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
+  ret <4 x i32> %1
+}
+
+define arm_aapcs_vfpcc <2 x i64> @test_vqdmlal_lanes32(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %arg2_int32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vqdmlal_lanes32
+; CHECK: vqdmlal.s32 q0, d2, d3[1]
+  %0 = shufflevector <2 x i32> %arg2_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
+  %1 = tail call <2 x i64> @llvm.arm.neon.vqdmlal.v2i64(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
+  ret <2 x i64> %1
+}
+
+declare <4 x i32>  @llvm.arm.neon.vqdmlal.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64>  @llvm.arm.neon.vqdmlal.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
+
+define <4 x i32> @vqdmlsls16(<4 x i32>* %A, <4 x i16>* %B, <4 x i16>* %C) nounwind {
+;CHECK: vqdmlsls16:
+;CHECK: vqdmlsl.s16
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = load <4 x i16>* %C
+	%tmp4 = call <4 x i32> @llvm.arm.neon.vqdmlsl.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2, <4 x i16> %tmp3)
+	ret <4 x i32> %tmp4
+}
+
+define <2 x i64> @vqdmlsls32(<2 x i64>* %A, <2 x i32>* %B, <2 x i32>* %C) nounwind {
+;CHECK: vqdmlsls32:
+;CHECK: vqdmlsl.s32
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = load <2 x i32>* %C
+	%tmp4 = call <2 x i64> @llvm.arm.neon.vqdmlsl.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2, <2 x i32> %tmp3)
+	ret <2 x i64> %tmp4
+}
+
+define arm_aapcs_vfpcc <4 x i32> @test_vqdmlsl_lanes16(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %arg2_int16x4_t) nounwind readnone {
+entry:
+; CHECK: test_vqdmlsl_lanes16
+; CHECK: vqdmlsl.s16 q0, d2, d3[1]
+  %0 = shufflevector <4 x i16> %arg2_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
+  %1 = tail call <4 x i32> @llvm.arm.neon.vqdmlsl.v4i32(<4 x i32> %arg0_int32x4_t, <4 x i16> %arg1_int16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
+  ret <4 x i32> %1
+}
+
+define arm_aapcs_vfpcc <2 x i64> @test_vqdmlsl_lanes32(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %arg2_int32x2_t) nounwind readnone {
+entry:
+; CHECK: test_vqdmlsl_lanes32
+; CHECK: vqdmlsl.s32 q0, d2, d3[1]
+  %0 = shufflevector <2 x i32> %arg2_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
+  %1 = tail call <2 x i64> @llvm.arm.neon.vqdmlsl.v2i64(<2 x i64> %arg0_int64x2_t, <2 x i32> %arg1_int32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
+  ret <2 x i64> %1
+}
+
+declare <4 x i32>  @llvm.arm.neon.vqdmlsl.v4i32(<4 x i32>, <4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64>  @llvm.arm.neon.vqdmlsl.v2i64(<2 x i64>, <2 x i32>, <2 x i32>) nounwind readnone
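
For reference: the new vqdmul.ll above consolidates the vqdmulh, vqrdmulh,
vqdmull, vqdmlal and vqdmlsl tests into one FileCheck-driven file. Per 16-bit
lane, the operations it exercises behave roughly like the scalar C sketch
below; the helper names are illustrative only and not part of the patch, and
arithmetic right shift of negative values is assumed.

#include <stdint.h>

/* vqdmulh.s16: saturating doubling multiply, returning the high half.
 * The doubled product only overflows i32 when a == b == INT16_MIN. */
static int16_t vqdmulh_s16(int16_t a, int16_t b) {
    int64_t p = 2 * (int64_t)a * b;
    if (p > INT32_MAX) p = INT32_MAX;   /* 2*(-32768)^2 == 2^31 saturates */
    return (int16_t)((int32_t)p >> 16);
}

/* vqrdmulh.s16: as above, but rounds before taking the high half. */
static int16_t vqrdmulh_s16(int16_t a, int16_t b) {
    int64_t p = 2 * (int64_t)a * b + (1 << 15);
    if (p > INT32_MAX) p = INT32_MAX;
    return (int16_t)((int32_t)p >> 16);
}

/* vqdmull.s16: saturating doubling multiply with a widened i32 result;
 * vqdmlal/vqdmlsl then saturating-add/subtract this into an accumulator. */
static int32_t vqdmull_s16(int16_t a, int16_t b) {
    int64_t p = 2 * (int64_t)a * b;
    return p > INT32_MAX ? INT32_MAX : (int32_t)p;
}

The _lane variants (e.g. vqdmulh.s16 q0, q0, d2[1]) apply the same arithmetic
with the second operand broadcast from a single lane, which is what the
shufflevector with a constant <1, 1, ...> mask expresses in the IR above.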
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmulh.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqdmulh.ll
deleted file mode 100644
index 1600dc5..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmulh.ll
+++ /dev/null
@@ -1,73 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vqdmulh\\.s16} %t | count 2
-; RUN: grep {vqdmulh\\.s32} %t | count 2
-; RUN: grep {vqrdmulh\\.s16} %t | count 2
-; RUN: grep {vqrdmulh\\.s32} %t | count 2
-
-define <4 x i16> @vqdmulhs16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vqdmulh.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vqdmulhs32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vqdmulh.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <8 x i16> @vqdmulhQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vqdmulh.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vqdmulhQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vqdmulh.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-declare <4 x i16> @llvm.arm.neon.vqdmulh.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vqdmulh.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i16> @llvm.arm.neon.vqdmulh.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vqdmulh.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
-
-define <4 x i16> @vqrdmulhs16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vqrdmulh.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vqrdmulhs32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vqrdmulh.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <8 x i16> @vqrdmulhQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vqrdmulh.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vqrdmulhQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vqrdmulh.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-declare <4 x i16> @llvm.arm.neon.vqrdmulh.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vqrdmulh.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i16> @llvm.arm.neon.vqrdmulh.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vqrdmulh.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmulh_lane.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqdmulh_lane.ll
deleted file mode 100644
index 874f5f3..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmulh_lane.ll
+++ /dev/null
@@ -1,47 +0,0 @@
-; RUN: llc -mattr=+neon < %s | FileCheck %s
-target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32"
-target triple = "thumbv7-elf"
-
-define arm_aapcs_vfpcc <8 x i16> @test_vqdmulhQ_lanes16(<8 x i16> %arg0_int16x8_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vqdmulhQ_lanes16
-; CHECK: vqdmulh.s16 q0, q0, d2[1]
-  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <8 x i32> <i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1> ; <<8 x i16>> [#uses=1]
-  %1 = tail call <8 x i16> @llvm.arm.neon.vqdmulh.v8i16(<8 x i16> %arg0_int16x8_t, <8 x i16> %0) ; <<8 x i16>> [#uses=1]
-  ret <8 x i16> %1
-}
-
-declare <8 x i16> @llvm.arm.neon.vqdmulh.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
-
-define arm_aapcs_vfpcc <4 x i32> @test_vqdmulhQ_lanes32(<4 x i32> %arg0_int32x4_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vqdmulhQ_lanes32
-; CHECK: vqdmulh.s32 q0, q0, d2[1]
-  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i32>> [#uses=1]
-  %1 = tail call <4 x i32> @llvm.arm.neon.vqdmulh.v4i32(<4 x i32> %arg0_int32x4_t, <4 x i32> %0) ; <<4 x i32>> [#uses=1]
-  ret <4 x i32> %1
-}
-
-declare <4 x i32> @llvm.arm.neon.vqdmulh.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
-
-define arm_aapcs_vfpcc <4 x i16> @test_vqdmulh_lanes16(<4 x i16> %arg0_int16x4_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vqdmulh_lanes16
-; CHECK: vqdmulh.s16 d0, d0, d1[1]
-  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
-  %1 = tail call <4 x i16> @llvm.arm.neon.vqdmulh.v4i16(<4 x i16> %arg0_int16x4_t, <4 x i16> %0) ; <<4 x i16>> [#uses=1]
-  ret <4 x i16> %1
-}
-
-declare <4 x i16> @llvm.arm.neon.vqdmulh.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-
-define arm_aapcs_vfpcc <2 x i32> @test_vqdmulh_lanes32(<2 x i32> %arg0_int32x2_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vqdmulh_lanes32
-; CHECK: vqdmulh.s32 d0, d0, d1[1]
-  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
-  %1 = tail call <2 x i32> @llvm.arm.neon.vqdmulh.v2i32(<2 x i32> %arg0_int32x2_t, <2 x i32> %0) ; <<2 x i32>> [#uses=1]
-  ret <2 x i32> %1
-}
-
-declare <2 x i32> @llvm.arm.neon.vqdmulh.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmull.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqdmull.ll
deleted file mode 100644
index 8ddd4d6..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmull.ll
+++ /dev/null
@@ -1,20 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vqdmull\\.s16} %t | count 1
-; RUN: grep {vqdmull\\.s32} %t | count 1
-
-define <4 x i32> @vqdmulls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vqdmull.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vqdmulls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vqdmull.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-declare <4 x i32>  @llvm.arm.neon.vqdmull.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64>  @llvm.arm.neon.vqdmull.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmull_lane.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqdmull_lane.ll
deleted file mode 100644
index 21f4e94..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqdmull_lane.ll
+++ /dev/null
@@ -1,25 +0,0 @@
-; RUN: llc -mattr=+neon < %s | FileCheck %s
-target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32"
-target triple = "thumbv7-elf"
-
-define arm_aapcs_vfpcc <4 x i32> @test_vqdmull_lanes16(<4 x i16> %arg0_int16x4_t, <4 x i16> %arg1_int16x4_t) nounwind readnone {
-entry:
-; CHECK: test_vqdmull_lanes16
-; CHECK: vqdmull.s16 q0, d0, d1[1]
-  %0 = shufflevector <4 x i16> %arg1_int16x4_t, <4 x i16> undef, <4 x i32> <i32 1, i32 1, i32 1, i32 1> ; <<4 x i16>> [#uses=1]
-  %1 = tail call <4 x i32> @llvm.arm.neon.vqdmull.v4i32(<4 x i16> %arg0_int16x4_t, <4 x i16> %0) ; <<4 x i32>> [#uses=1]
-  ret <4 x i32> %1
-}
-
-declare <4 x i32> @llvm.arm.neon.vqdmull.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
-
-define arm_aapcs_vfpcc <2 x i64> @test_vqdmull_lanes32(<2 x i32> %arg0_int32x2_t, <2 x i32> %arg1_int32x2_t) nounwind readnone {
-entry:
-; CHECK: test_vqdmull_lanes32
-; CHECK: vqdmull.s32 q0, d0, d1[1]
-  %0 = shufflevector <2 x i32> %arg1_int32x2_t, <2 x i32> undef, <2 x i32> <i32 1, i32 1> ; <<2 x i32>> [#uses=1]
-  %1 = tail call <2 x i64> @llvm.arm.neon.vqdmull.v2i64(<2 x i32> %arg0_int32x2_t, <2 x i32> %0) ; <<2 x i64>> [#uses=1]
-  ret <2 x i64> %1
-}
-
-declare <2 x i64> @llvm.arm.neon.vqdmull.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqmovn.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqmovn.ll
deleted file mode 100644
index 06e5f1e..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqmovn.ll
+++ /dev/null
@@ -1,76 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vqmovn\\.s16} %t | count 1
-; RUN: grep {vqmovn\\.s32} %t | count 1
-; RUN: grep {vqmovn\\.s64} %t | count 1
-; RUN: grep {vqmovn\\.u16} %t | count 1
-; RUN: grep {vqmovn\\.u32} %t | count 1
-; RUN: grep {vqmovn\\.u64} %t | count 1
-; RUN: grep {vqmovun\\.s16} %t | count 1
-; RUN: grep {vqmovun\\.s32} %t | count 1
-; RUN: grep {vqmovun\\.s64} %t | count 1
-
-define <8 x i8> @vqmovns16(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <8 x i8> @llvm.arm.neon.vqmovns.v8i8(<8 x i16> %tmp1)
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vqmovns32(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vqmovns.v4i16(<4 x i32> %tmp1)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vqmovns64(<2 x i64>* %A) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vqmovns.v2i32(<2 x i64> %tmp1)
-	ret <2 x i32> %tmp2
-}
-
-define <8 x i8> @vqmovnu16(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <8 x i8> @llvm.arm.neon.vqmovnu.v8i8(<8 x i16> %tmp1)
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vqmovnu32(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vqmovnu.v4i16(<4 x i32> %tmp1)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vqmovnu64(<2 x i64>* %A) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vqmovnu.v2i32(<2 x i64> %tmp1)
-	ret <2 x i32> %tmp2
-}
-
-define <8 x i8> @vqmovuns16(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <8 x i8> @llvm.arm.neon.vqmovnsu.v8i8(<8 x i16> %tmp1)
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vqmovuns32(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vqmovnsu.v4i16(<4 x i32> %tmp1)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vqmovuns64(<2 x i64>* %A) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vqmovnsu.v2i32(<2 x i64> %tmp1)
-	ret <2 x i32> %tmp2
-}
-
-declare <8 x i8>  @llvm.arm.neon.vqmovns.v8i8(<8 x i16>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vqmovns.v4i16(<4 x i32>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vqmovns.v2i32(<2 x i64>) nounwind readnone
-
-declare <8 x i8>  @llvm.arm.neon.vqmovnu.v8i8(<8 x i16>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vqmovnu.v4i16(<4 x i32>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vqmovnu.v2i32(<2 x i64>) nounwind readnone
-
-declare <8 x i8>  @llvm.arm.neon.vqmovnsu.v8i8(<8 x i16>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vqmovnsu.v4i16(<4 x i32>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vqmovnsu.v2i32(<2 x i64>) nounwind readnone
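
The deleted vqmovn.ll covered the saturating narrow family. Per lane these
behave roughly as in this scalar C sketch (narrowing i32 to i16 as the
example); the helper names are illustrative only:

#include <stdint.h>

/* vqmovn.s32: signed saturate an i32 into the i16 range. */
static int16_t vqmovn_s32(int32_t v) {
    if (v > INT16_MAX) return INT16_MAX;
    if (v < INT16_MIN) return INT16_MIN;
    return (int16_t)v;
}

/* vqmovn.u32: unsigned saturate a u32 into the u16 range. */
static uint16_t vqmovn_u32(uint32_t v) {
    return v > UINT16_MAX ? UINT16_MAX : (uint16_t)v;
}

/* vqmovun.s32: signed input, unsigned saturating u16 result
 * (the vqmovnsu intrinsics above). */
static uint16_t vqmovun_s32(int32_t v) {
    if (v < 0) return 0;
    if (v > UINT16_MAX) return UINT16_MAX;
    return (uint16_t)v;
}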
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqneg.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqneg.ll
deleted file mode 100644
index 3626559..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqneg.ll
+++ /dev/null
@@ -1,48 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vqneg\\.s8} %t | count 2
-; RUN: grep {vqneg\\.s16} %t | count 2
-; RUN: grep {vqneg\\.s32} %t | count 2
-
-define <8 x i8> @vqnegs8(<8 x i8>* %A) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = call <8 x i8> @llvm.arm.neon.vqneg.v8i8(<8 x i8> %tmp1)
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vqnegs16(<4 x i16>* %A) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vqneg.v4i16(<4 x i16> %tmp1)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vqnegs32(<2 x i32>* %A) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vqneg.v2i32(<2 x i32> %tmp1)
-	ret <2 x i32> %tmp2
-}
-
-define <16 x i8> @vqnegQs8(<16 x i8>* %A) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = call <16 x i8> @llvm.arm.neon.vqneg.v16i8(<16 x i8> %tmp1)
-	ret <16 x i8> %tmp2
-}
-
-define <8 x i16> @vqnegQs16(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <8 x i16> @llvm.arm.neon.vqneg.v8i16(<8 x i16> %tmp1)
-	ret <8 x i16> %tmp2
-}
-
-define <4 x i32> @vqnegQs32(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i32> @llvm.arm.neon.vqneg.v4i32(<4 x i32> %tmp1)
-	ret <4 x i32> %tmp2
-}
-
-declare <8 x i8>  @llvm.arm.neon.vqneg.v8i8(<8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vqneg.v4i16(<4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vqneg.v2i32(<2 x i32>) nounwind readnone
-
-declare <16 x i8> @llvm.arm.neon.vqneg.v16i8(<16 x i8>) nounwind readnone
-declare <8 x i16> @llvm.arm.neon.vqneg.v8i16(<8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vqneg.v4i32(<4 x i32>) nounwind readnone
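
vqneg is the saturating negate: it differs from plain negation only at the
most negative value, which would otherwise wrap back onto itself. A one-lane
scalar C sketch, with an illustrative helper name:

#include <stdint.h>

/* vqneg.s16: -INT16_MIN is pinned to INT16_MAX instead of wrapping. */
static int16_t vqneg_s16(int16_t v) {
    return v == INT16_MIN ? INT16_MAX : (int16_t)-v;
}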
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqrshl.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqrshl.ll
deleted file mode 100644
index e680f93..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqrshl.ll
+++ /dev/null
@@ -1,141 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vqrshl\\.s8} %t | count 2
-; RUN: grep {vqrshl\\.s16} %t | count 2
-; RUN: grep {vqrshl\\.s32} %t | count 2
-; RUN: grep {vqrshl\\.s64} %t | count 2
-; RUN: grep {vqrshl\\.u8} %t | count 2
-; RUN: grep {vqrshl\\.u16} %t | count 2
-; RUN: grep {vqrshl\\.u32} %t | count 2
-; RUN: grep {vqrshl\\.u64} %t | count 2
-
-define <8 x i8> @vqrshls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vqrshifts.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vqrshls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vqrshifts.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vqrshls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vqrshifts.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <1 x i64> @vqrshls64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
-	%tmp1 = load <1 x i64>* %A
-	%tmp2 = load <1 x i64>* %B
-	%tmp3 = call <1 x i64> @llvm.arm.neon.vqrshifts.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
-	ret <1 x i64> %tmp3
-}
-
-define <8 x i8> @vqrshlu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vqrshiftu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vqrshlu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vqrshiftu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vqrshlu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vqrshiftu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <1 x i64> @vqrshlu64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
-	%tmp1 = load <1 x i64>* %A
-	%tmp2 = load <1 x i64>* %B
-	%tmp3 = call <1 x i64> @llvm.arm.neon.vqrshiftu.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
-	ret <1 x i64> %tmp3
-}
-
-define <16 x i8> @vqrshlQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = call <16 x i8> @llvm.arm.neon.vqrshifts.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
-	ret <16 x i8> %tmp3
-}
-
-define <8 x i16> @vqrshlQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vqrshifts.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vqrshlQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vqrshifts.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vqrshlQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i64>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vqrshifts.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-define <16 x i8> @vqrshlQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = call <16 x i8> @llvm.arm.neon.vqrshiftu.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
-	ret <16 x i8> %tmp3
-}
-
-define <8 x i16> @vqrshlQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vqrshiftu.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vqrshlQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vqrshiftu.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vqrshlQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i64>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vqrshiftu.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-declare <8 x i8>  @llvm.arm.neon.vqrshifts.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vqrshifts.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vqrshifts.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-declare <1 x i64> @llvm.arm.neon.vqrshifts.v1i64(<1 x i64>, <1 x i64>) nounwind readnone
-
-declare <8 x i8>  @llvm.arm.neon.vqrshiftu.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vqrshiftu.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vqrshiftu.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-declare <1 x i64> @llvm.arm.neon.vqrshiftu.v1i64(<1 x i64>, <1 x i64>) nounwind readnone
-
-declare <16 x i8> @llvm.arm.neon.vqrshifts.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
-declare <8 x i16> @llvm.arm.neon.vqrshifts.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vqrshifts.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vqrshifts.v2i64(<2 x i64>, <2 x i64>) nounwind readnone
-
-declare <16 x i8> @llvm.arm.neon.vqrshiftu.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
-declare <8 x i16> @llvm.arm.neon.vqrshiftu.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vqrshiftu.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vqrshiftu.v2i64(<2 x i64>, <2 x i64>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqrshrn.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqrshrn.ll
deleted file mode 100644
index bb046fa..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqrshrn.ll
+++ /dev/null
@@ -1,76 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vqrshrn\\.s16} %t | count 1
-; RUN: grep {vqrshrn\\.s32} %t | count 1
-; RUN: grep {vqrshrn\\.s64} %t | count 1
-; RUN: grep {vqrshrn\\.u16} %t | count 1
-; RUN: grep {vqrshrn\\.u32} %t | count 1
-; RUN: grep {vqrshrn\\.u64} %t | count 1
-; RUN: grep {vqrshrun\\.s16} %t | count 1
-; RUN: grep {vqrshrun\\.s32} %t | count 1
-; RUN: grep {vqrshrun\\.s64} %t | count 1
-
-define <8 x i8> @vqrshrns8(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <8 x i8> @llvm.arm.neon.vqrshiftns.v8i8(<8 x i16> %tmp1, <8 x i16> < i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8 >)
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vqrshrns16(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vqrshiftns.v4i16(<4 x i32> %tmp1, <4 x i32> < i32 -16, i32 -16, i32 -16, i32 -16 >)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vqrshrns32(<2 x i64>* %A) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vqrshiftns.v2i32(<2 x i64> %tmp1, <2 x i64> < i64 -32, i64 -32 >)
-	ret <2 x i32> %tmp2
-}
-
-define <8 x i8> @vqrshrnu8(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <8 x i8> @llvm.arm.neon.vqrshiftnu.v8i8(<8 x i16> %tmp1, <8 x i16> < i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8 >)
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vqrshrnu16(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vqrshiftnu.v4i16(<4 x i32> %tmp1, <4 x i32> < i32 -16, i32 -16, i32 -16, i32 -16 >)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vqrshrnu32(<2 x i64>* %A) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vqrshiftnu.v2i32(<2 x i64> %tmp1, <2 x i64> < i64 -32, i64 -32 >)
-	ret <2 x i32> %tmp2
-}
-
-define <8 x i8> @vqrshruns8(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <8 x i8> @llvm.arm.neon.vqrshiftnsu.v8i8(<8 x i16> %tmp1, <8 x i16> < i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8 >)
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vqrshruns16(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vqrshiftnsu.v4i16(<4 x i32> %tmp1, <4 x i32> < i32 -16, i32 -16, i32 -16, i32 -16 >)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vqrshruns32(<2 x i64>* %A) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vqrshiftnsu.v2i32(<2 x i64> %tmp1, <2 x i64> < i64 -32, i64 -32 >)
-	ret <2 x i32> %tmp2
-}
-
-declare <8 x i8>  @llvm.arm.neon.vqrshiftns.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vqrshiftns.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vqrshiftns.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
-
-declare <8 x i8>  @llvm.arm.neon.vqrshiftnu.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vqrshiftnu.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vqrshiftnu.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
-
-declare <8 x i8>  @llvm.arm.neon.vqrshiftnsu.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vqrshiftnsu.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vqrshiftnsu.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
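
The deleted vqrshrn.ll tested the rounding saturating shift-right-and-narrow
family. Note that the IR feeds the vqrshiftn* intrinsics negative shift
vectors such as <i16 -8, ...>: a right shift by n is encoded as a left shift
by -n. Per lane the s16 variant behaves roughly like this scalar C sketch;
the helper name is illustrative only:

#include <stdint.h>

/* vqrshrn.s16 #n for one lane, n in 1..8: round, then saturate to i8. */
static int8_t vqrshrn_s16(int16_t v, int n) {
    int32_t r = ((int32_t)v + (1 << (n - 1))) >> n;  /* round to nearest */
    if (r > INT8_MAX) return INT8_MAX;
    if (r < INT8_MIN) return INT8_MIN;
    return (int8_t)r;
}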
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqshl.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqshl.ll
index bfc4e88..e4d29a3 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqshl.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vqshl.ll
@@ -1,26 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vqshl\\.s8} %t | count 4
-; RUN: grep {vqshl\\.s16} %t | count 4
-; RUN: grep {vqshl\\.s32} %t | count 4
-; RUN: grep {vqshl\\.s64} %t | count 4
-; RUN: grep {vqshl\\.u8} %t | count 4
-; RUN: grep {vqshl\\.u16} %t | count 4
-; RUN: grep {vqshl\\.u32} %t | count 4
-; RUN: grep {vqshl\\.u64} %t | count 4
-; RUN: grep {vqshl\\.s8.*#7} %t | count 2
-; RUN: grep {vqshl\\.s16.*#15} %t | count 2
-; RUN: grep {vqshl\\.s32.*#31} %t | count 2
-; RUN: grep {vqshl\\.s64.*#63} %t | count 2
-; RUN: grep {vqshl\\.u8.*#7} %t | count 2
-; RUN: grep {vqshl\\.u16.*#15} %t | count 2
-; RUN: grep {vqshl\\.u32.*#31} %t | count 2
-; RUN: grep {vqshl\\.u64.*#63} %t | count 2
-; RUN: grep {vqshlu\\.s8} %t | count 2
-; RUN: grep {vqshlu\\.s16} %t | count 2
-; RUN: grep {vqshlu\\.s32} %t | count 2
-; RUN: grep {vqshlu\\.s64} %t | count 2
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vqshls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vqshls8:
+;CHECK: vqshl.s8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <8 x i8> @llvm.arm.neon.vqshifts.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
@@ -28,6 +10,8 @@ define <8 x i8> @vqshls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vqshls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vqshls16:
+;CHECK: vqshl.s16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vqshifts.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
@@ -35,6 +19,8 @@ define <4 x i16> @vqshls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vqshls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vqshls32:
+;CHECK: vqshl.s32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vqshifts.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
@@ -42,6 +28,8 @@ define <2 x i32> @vqshls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vqshls64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vqshls64:
+;CHECK: vqshl.s64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = call <1 x i64> @llvm.arm.neon.vqshifts.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
@@ -49,6 +37,8 @@ define <1 x i64> @vqshls64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <8 x i8> @vqshlu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vqshlu8:
+;CHECK: vqshl.u8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <8 x i8> @llvm.arm.neon.vqshiftu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
@@ -56,6 +46,8 @@ define <8 x i8> @vqshlu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vqshlu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vqshlu16:
+;CHECK: vqshl.u16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vqshiftu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
@@ -63,6 +55,8 @@ define <4 x i16> @vqshlu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vqshlu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vqshlu32:
+;CHECK: vqshl.u32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vqshiftu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
@@ -70,6 +64,8 @@ define <2 x i32> @vqshlu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vqshlu64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vqshlu64:
+;CHECK: vqshl.u64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = call <1 x i64> @llvm.arm.neon.vqshiftu.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
@@ -77,6 +73,8 @@ define <1 x i64> @vqshlu64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vqshlQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vqshlQs8:
+;CHECK: vqshl.s8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <16 x i8> @llvm.arm.neon.vqshifts.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
@@ -84,6 +82,8 @@ define <16 x i8> @vqshlQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vqshlQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vqshlQs16:
+;CHECK: vqshl.s16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = call <8 x i16> @llvm.arm.neon.vqshifts.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
@@ -91,6 +91,8 @@ define <8 x i16> @vqshlQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vqshlQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vqshlQs32:
+;CHECK: vqshl.s32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = call <4 x i32> @llvm.arm.neon.vqshifts.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
@@ -98,6 +100,8 @@ define <4 x i32> @vqshlQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vqshlQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vqshlQs64:
+;CHECK: vqshl.s64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = call <2 x i64> @llvm.arm.neon.vqshifts.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
@@ -105,6 +109,8 @@ define <2 x i64> @vqshlQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vqshlQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vqshlQu8:
+;CHECK: vqshl.u8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <16 x i8> @llvm.arm.neon.vqshiftu.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
@@ -112,6 +118,8 @@ define <16 x i8> @vqshlQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vqshlQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vqshlQu16:
+;CHECK: vqshl.u16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = call <8 x i16> @llvm.arm.neon.vqshiftu.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
@@ -119,6 +127,8 @@ define <8 x i16> @vqshlQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vqshlQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vqshlQu32:
+;CHECK: vqshl.u32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = call <4 x i32> @llvm.arm.neon.vqshiftu.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
@@ -126,6 +136,8 @@ define <4 x i32> @vqshlQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vqshlQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vqshlQu64:
+;CHECK: vqshl.u64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = call <2 x i64> @llvm.arm.neon.vqshiftu.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
@@ -133,144 +145,192 @@ define <2 x i64> @vqshlQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
 }
 
 define <8 x i8> @vqshls_n8(<8 x i8>* %A) nounwind {
+;CHECK: vqshls_n8:
+;CHECK: vqshl.s8{{.*#7}}
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = call <8 x i8> @llvm.arm.neon.vqshifts.v8i8(<8 x i8> %tmp1, <8 x i8> < i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7 >)
 	ret <8 x i8> %tmp2
 }
 
 define <4 x i16> @vqshls_n16(<4 x i16>* %A) nounwind {
+;CHECK: vqshls_n16:
+;CHECK: vqshl.s16{{.*#15}}
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = call <4 x i16> @llvm.arm.neon.vqshifts.v4i16(<4 x i16> %tmp1, <4 x i16> < i16 15, i16 15, i16 15, i16 15 >)
 	ret <4 x i16> %tmp2
 }
 
 define <2 x i32> @vqshls_n32(<2 x i32>* %A) nounwind {
+;CHECK: vqshls_n32:
+;CHECK: vqshl.s32{{.*#31}}
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = call <2 x i32> @llvm.arm.neon.vqshifts.v2i32(<2 x i32> %tmp1, <2 x i32> < i32 31, i32 31 >)
 	ret <2 x i32> %tmp2
 }
 
 define <1 x i64> @vqshls_n64(<1 x i64>* %A) nounwind {
+;CHECK: vqshls_n64:
+;CHECK: vqshl.s64{{.*#63}}
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = call <1 x i64> @llvm.arm.neon.vqshifts.v1i64(<1 x i64> %tmp1, <1 x i64> < i64 63 >)
 	ret <1 x i64> %tmp2
 }
 
 define <8 x i8> @vqshlu_n8(<8 x i8>* %A) nounwind {
+;CHECK: vqshlu_n8:
+;CHECK: vqshl.u8{{.*#7}}
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = call <8 x i8> @llvm.arm.neon.vqshiftu.v8i8(<8 x i8> %tmp1, <8 x i8> < i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7 >)
 	ret <8 x i8> %tmp2
 }
 
 define <4 x i16> @vqshlu_n16(<4 x i16>* %A) nounwind {
+;CHECK: vqshlu_n16:
+;CHECK: vqshl.u16{{.*#15}}
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = call <4 x i16> @llvm.arm.neon.vqshiftu.v4i16(<4 x i16> %tmp1, <4 x i16> < i16 15, i16 15, i16 15, i16 15 >)
 	ret <4 x i16> %tmp2
 }
 
 define <2 x i32> @vqshlu_n32(<2 x i32>* %A) nounwind {
+;CHECK: vqshlu_n32:
+;CHECK: vqshl.u32{{.*#31}}
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = call <2 x i32> @llvm.arm.neon.vqshiftu.v2i32(<2 x i32> %tmp1, <2 x i32> < i32 31, i32 31 >)
 	ret <2 x i32> %tmp2
 }
 
 define <1 x i64> @vqshlu_n64(<1 x i64>* %A) nounwind {
+;CHECK: vqshlu_n64:
+;CHECK: vqshl.u64{{.*#63}}
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = call <1 x i64> @llvm.arm.neon.vqshiftu.v1i64(<1 x i64> %tmp1, <1 x i64> < i64 63 >)
 	ret <1 x i64> %tmp2
 }
 
 define <8 x i8> @vqshlsu_n8(<8 x i8>* %A) nounwind {
+;CHECK: vqshlsu_n8:
+;CHECK: vqshlu.s8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = call <8 x i8> @llvm.arm.neon.vqshiftsu.v8i8(<8 x i8> %tmp1, <8 x i8> < i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7 >)
 	ret <8 x i8> %tmp2
 }
 
 define <4 x i16> @vqshlsu_n16(<4 x i16>* %A) nounwind {
+;CHECK: vqshlsu_n16:
+;CHECK: vqshlu.s16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = call <4 x i16> @llvm.arm.neon.vqshiftsu.v4i16(<4 x i16> %tmp1, <4 x i16> < i16 15, i16 15, i16 15, i16 15 >)
 	ret <4 x i16> %tmp2
 }
 
 define <2 x i32> @vqshlsu_n32(<2 x i32>* %A) nounwind {
+;CHECK: vqshlsu_n32:
+;CHECK: vqshlu.s32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = call <2 x i32> @llvm.arm.neon.vqshiftsu.v2i32(<2 x i32> %tmp1, <2 x i32> < i32 31, i32 31 >)
 	ret <2 x i32> %tmp2
 }
 
 define <1 x i64> @vqshlsu_n64(<1 x i64>* %A) nounwind {
+;CHECK: vqshlsu_n64:
+;CHECK: vqshlu.s64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = call <1 x i64> @llvm.arm.neon.vqshiftsu.v1i64(<1 x i64> %tmp1, <1 x i64> < i64 63 >)
 	ret <1 x i64> %tmp2
 }
 
 define <16 x i8> @vqshlQs_n8(<16 x i8>* %A) nounwind {
+;CHECK: vqshlQs_n8:
+;CHECK: vqshl.s8{{.*#7}}
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = call <16 x i8> @llvm.arm.neon.vqshifts.v16i8(<16 x i8> %tmp1, <16 x i8> < i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7 >)
 	ret <16 x i8> %tmp2
 }
 
 define <8 x i16> @vqshlQs_n16(<8 x i16>* %A) nounwind {
+;CHECK: vqshlQs_n16:
+;CHECK: vqshl.s16{{.*#15}}
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = call <8 x i16> @llvm.arm.neon.vqshifts.v8i16(<8 x i16> %tmp1, <8 x i16> < i16 15, i16 15, i16 15, i16 15, i16 15, i16 15, i16 15, i16 15 >)
 	ret <8 x i16> %tmp2
 }
 
 define <4 x i32> @vqshlQs_n32(<4 x i32>* %A) nounwind {
+;CHECK: vqshlQs_n32:
+;CHECK: vqshl.s32{{.*#31}}
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = call <4 x i32> @llvm.arm.neon.vqshifts.v4i32(<4 x i32> %tmp1, <4 x i32> < i32 31, i32 31, i32 31, i32 31 >)
 	ret <4 x i32> %tmp2
 }
 
 define <2 x i64> @vqshlQs_n64(<2 x i64>* %A) nounwind {
+;CHECK: vqshlQs_n64:
+;CHECK: vqshl.s64{{.*#63}}
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = call <2 x i64> @llvm.arm.neon.vqshifts.v2i64(<2 x i64> %tmp1, <2 x i64> < i64 63, i64 63 >)
 	ret <2 x i64> %tmp2
 }
 
 define <16 x i8> @vqshlQu_n8(<16 x i8>* %A) nounwind {
+;CHECK: vqshlQu_n8:
+;CHECK: vqshl.u8{{.*#7}}
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = call <16 x i8> @llvm.arm.neon.vqshiftu.v16i8(<16 x i8> %tmp1, <16 x i8> < i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7 >)
 	ret <16 x i8> %tmp2
 }
 
 define <8 x i16> @vqshlQu_n16(<8 x i16>* %A) nounwind {
+;CHECK: vqshlQu_n16:
+;CHECK: vqshl.u16{{.*#15}}
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = call <8 x i16> @llvm.arm.neon.vqshiftu.v8i16(<8 x i16> %tmp1, <8 x i16> < i16 15, i16 15, i16 15, i16 15, i16 15, i16 15, i16 15, i16 15 >)
 	ret <8 x i16> %tmp2
 }
 
 define <4 x i32> @vqshlQu_n32(<4 x i32>* %A) nounwind {
+;CHECK: vqshlQu_n32:
+;CHECK: vqshl.u32{{.*#31}}
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = call <4 x i32> @llvm.arm.neon.vqshiftu.v4i32(<4 x i32> %tmp1, <4 x i32> < i32 31, i32 31, i32 31, i32 31 >)
 	ret <4 x i32> %tmp2
 }
 
 define <2 x i64> @vqshlQu_n64(<2 x i64>* %A) nounwind {
+;CHECK: vqshlQu_n64:
+;CHECK: vqshl.u64{{.*#63}}
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = call <2 x i64> @llvm.arm.neon.vqshiftu.v2i64(<2 x i64> %tmp1, <2 x i64> < i64 63, i64 63 >)
 	ret <2 x i64> %tmp2
 }
 
 define <16 x i8> @vqshlQsu_n8(<16 x i8>* %A) nounwind {
+;CHECK: vqshlQsu_n8:
+;CHECK: vqshlu.s8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = call <16 x i8> @llvm.arm.neon.vqshiftsu.v16i8(<16 x i8> %tmp1, <16 x i8> < i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7 >)
 	ret <16 x i8> %tmp2
 }
 
 define <8 x i16> @vqshlQsu_n16(<8 x i16>* %A) nounwind {
+;CHECK: vqshlQsu_n16:
+;CHECK: vqshlu.s16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = call <8 x i16> @llvm.arm.neon.vqshiftsu.v8i16(<8 x i16> %tmp1, <8 x i16> < i16 15, i16 15, i16 15, i16 15, i16 15, i16 15, i16 15, i16 15 >)
 	ret <8 x i16> %tmp2
 }
 
 define <4 x i32> @vqshlQsu_n32(<4 x i32>* %A) nounwind {
+;CHECK: vqshlQsu_n32:
+;CHECK: vqshlu.s32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = call <4 x i32> @llvm.arm.neon.vqshiftsu.v4i32(<4 x i32> %tmp1, <4 x i32> < i32 31, i32 31, i32 31, i32 31 >)
 	ret <4 x i32> %tmp2
 }
 
 define <2 x i64> @vqshlQsu_n64(<2 x i64>* %A) nounwind {
+;CHECK: vqshlQsu_n64:
+;CHECK: vqshlu.s64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = call <2 x i64> @llvm.arm.neon.vqshiftsu.v2i64(<2 x i64> %tmp1, <2 x i64> < i64 63, i64 63 >)
 	ret <2 x i64> %tmp2
@@ -305,3 +365,167 @@ declare <16 x i8> @llvm.arm.neon.vqshiftsu.v16i8(<16 x i8>, <16 x i8>) nounwind
 declare <8 x i16> @llvm.arm.neon.vqshiftsu.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
 declare <4 x i32> @llvm.arm.neon.vqshiftsu.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
 declare <2 x i64> @llvm.arm.neon.vqshiftsu.v2i64(<2 x i64>, <2 x i64>) nounwind readnone
+
+define <8 x i8> @vqrshls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vqrshls8:
+;CHECK: vqrshl.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vqrshifts.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vqrshls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vqrshls16:
+;CHECK: vqrshl.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vqrshifts.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vqrshls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vqrshls32:
+;CHECK: vqrshl.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vqrshifts.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <1 x i64> @vqrshls64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vqrshls64:
+;CHECK: vqrshl.s64
+	%tmp1 = load <1 x i64>* %A
+	%tmp2 = load <1 x i64>* %B
+	%tmp3 = call <1 x i64> @llvm.arm.neon.vqrshifts.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
+	ret <1 x i64> %tmp3
+}
+
+define <8 x i8> @vqrshlu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vqrshlu8:
+;CHECK: vqrshl.u8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vqrshiftu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vqrshlu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vqrshlu16:
+;CHECK: vqrshl.u16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vqrshiftu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vqrshlu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vqrshlu32:
+;CHECK: vqrshl.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vqrshiftu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <1 x i64> @vqrshlu64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vqrshlu64:
+;CHECK: vqrshl.u64
+	%tmp1 = load <1 x i64>* %A
+	%tmp2 = load <1 x i64>* %B
+	%tmp3 = call <1 x i64> @llvm.arm.neon.vqrshiftu.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
+	ret <1 x i64> %tmp3
+}
+
+define <16 x i8> @vqrshlQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vqrshlQs8:
+;CHECK: vqrshl.s8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = call <16 x i8> @llvm.arm.neon.vqrshifts.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
+	ret <16 x i8> %tmp3
+}
+
+define <8 x i16> @vqrshlQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vqrshlQs16:
+;CHECK: vqrshl.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vqrshifts.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vqrshlQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vqrshlQs32:
+;CHECK: vqrshl.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vqrshifts.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vqrshlQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vqrshlQs64:
+;CHECK: vqrshl.s64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i64>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vqrshifts.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+define <16 x i8> @vqrshlQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vqrshlQu8:
+;CHECK: vqrshl.u8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = call <16 x i8> @llvm.arm.neon.vqrshiftu.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
+	ret <16 x i8> %tmp3
+}
+
+define <8 x i16> @vqrshlQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vqrshlQu16:
+;CHECK: vqrshl.u16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vqrshiftu.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vqrshlQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vqrshlQu32:
+;CHECK: vqrshl.u32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vqrshiftu.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vqrshlQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vqrshlQu64:
+;CHECK: vqrshl.u64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i64>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vqrshiftu.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+declare <8 x i8>  @llvm.arm.neon.vqrshifts.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vqrshifts.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vqrshifts.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+declare <1 x i64> @llvm.arm.neon.vqrshifts.v1i64(<1 x i64>, <1 x i64>) nounwind readnone
+
+declare <8 x i8>  @llvm.arm.neon.vqrshiftu.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vqrshiftu.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vqrshiftu.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+declare <1 x i64> @llvm.arm.neon.vqrshiftu.v1i64(<1 x i64>, <1 x i64>) nounwind readnone
+
+declare <16 x i8> @llvm.arm.neon.vqrshifts.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
+declare <8 x i16> @llvm.arm.neon.vqrshifts.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vqrshifts.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vqrshifts.v2i64(<2 x i64>, <2 x i64>) nounwind readnone
+
+declare <16 x i8> @llvm.arm.neon.vqrshiftu.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
+declare <8 x i16> @llvm.arm.neon.vqrshiftu.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vqrshiftu.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vqrshiftu.v2i64(<2 x i64>, <2 x i64>) nounwind readnone
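
The vqshl.ll changes above also cover the immediate forms (vqshls_n8,
vqshlu_n8, vqshlsu_n8, ...), where the shift amount is a constant vector such
as <i8 7, ...>. Roughly, per 8-bit lane and assuming a constant shift n in
0..7, they behave as in this scalar C sketch; the helper names are
illustrative only:

#include <stdint.h>

/* vqshl.s8 #n: signed saturating shift left. */
static int8_t vqshl_s8(int8_t a, int n) {
    int32_t v = (int32_t)a << n;
    if (v > INT8_MAX) return INT8_MAX;
    if (v < INT8_MIN) return INT8_MIN;
    return (int8_t)v;
}

/* vqshlu.s8 #n (the vqshiftsu intrinsics): signed input,
 * unsigned saturating result. */
static uint8_t vqshlu_s8(int8_t a, int n) {
    int32_t v = (int32_t)a << n;
    if (v < 0) return 0;
    if (v > UINT8_MAX) return UINT8_MAX;
    return (uint8_t)v;
}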
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqshrn.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqshrn.ll
index fb53c36..5da7943 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqshrn.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vqshrn.ll
@@ -1,63 +1,72 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vqshrn\\.s16} %t | count 1
-; RUN: grep {vqshrn\\.s32} %t | count 1
-; RUN: grep {vqshrn\\.s64} %t | count 1
-; RUN: grep {vqshrn\\.u16} %t | count 1
-; RUN: grep {vqshrn\\.u32} %t | count 1
-; RUN: grep {vqshrn\\.u64} %t | count 1
-; RUN: grep {vqshrun\\.s16} %t | count 1
-; RUN: grep {vqshrun\\.s32} %t | count 1
-; RUN: grep {vqshrun\\.s64} %t | count 1
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vqshrns8(<8 x i16>* %A) nounwind {
+;CHECK: vqshrns8:
+;CHECK: vqshrn.s16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = call <8 x i8> @llvm.arm.neon.vqshiftns.v8i8(<8 x i16> %tmp1, <8 x i16> < i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8 >)
 	ret <8 x i8> %tmp2
 }
 
 define <4 x i16> @vqshrns16(<4 x i32>* %A) nounwind {
+;CHECK: vqshrns16:
+;CHECK: vqshrn.s32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = call <4 x i16> @llvm.arm.neon.vqshiftns.v4i16(<4 x i32> %tmp1, <4 x i32> < i32 -16, i32 -16, i32 -16, i32 -16 >)
 	ret <4 x i16> %tmp2
 }
 
 define <2 x i32> @vqshrns32(<2 x i64>* %A) nounwind {
+;CHECK: vqshrns32:
+;CHECK: vqshrn.s64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = call <2 x i32> @llvm.arm.neon.vqshiftns.v2i32(<2 x i64> %tmp1, <2 x i64> < i64 -32, i64 -32 >)
 	ret <2 x i32> %tmp2
 }
 
 define <8 x i8> @vqshrnu8(<8 x i16>* %A) nounwind {
+;CHECK: vqshrnu8:
+;CHECK: vqshrn.u16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = call <8 x i8> @llvm.arm.neon.vqshiftnu.v8i8(<8 x i16> %tmp1, <8 x i16> < i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8 >)
 	ret <8 x i8> %tmp2
 }
 
 define <4 x i16> @vqshrnu16(<4 x i32>* %A) nounwind {
+;CHECK: vqshrnu16:
+;CHECK: vqshrn.u32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = call <4 x i16> @llvm.arm.neon.vqshiftnu.v4i16(<4 x i32> %tmp1, <4 x i32> < i32 -16, i32 -16, i32 -16, i32 -16 >)
 	ret <4 x i16> %tmp2
 }
 
 define <2 x i32> @vqshrnu32(<2 x i64>* %A) nounwind {
+;CHECK: vqshrnu32:
+;CHECK: vqshrn.u64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = call <2 x i32> @llvm.arm.neon.vqshiftnu.v2i32(<2 x i64> %tmp1, <2 x i64> < i64 -32, i64 -32 >)
 	ret <2 x i32> %tmp2
 }
 
 define <8 x i8> @vqshruns8(<8 x i16>* %A) nounwind {
+;CHECK: vqshruns8:
+;CHECK: vqshrun.s16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = call <8 x i8> @llvm.arm.neon.vqshiftnsu.v8i8(<8 x i16> %tmp1, <8 x i16> < i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8 >)
 	ret <8 x i8> %tmp2
 }
 
 define <4 x i16> @vqshruns16(<4 x i32>* %A) nounwind {
+;CHECK: vqshruns16:
+;CHECK: vqshrun.s32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = call <4 x i16> @llvm.arm.neon.vqshiftnsu.v4i16(<4 x i32> %tmp1, <4 x i32> < i32 -16, i32 -16, i32 -16, i32 -16 >)
 	ret <4 x i16> %tmp2
 }
 
 define <2 x i32> @vqshruns32(<2 x i64>* %A) nounwind {
+;CHECK: vqshruns32:
+;CHECK: vqshrun.s64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = call <2 x i32> @llvm.arm.neon.vqshiftnsu.v2i32(<2 x i64> %tmp1, <2 x i64> < i64 -32, i64 -32 >)
 	ret <2 x i32> %tmp2
@@ -74,3 +83,87 @@ declare <2 x i32> @llvm.arm.neon.vqshiftnu.v2i32(<2 x i64>, <2 x i64>) nounwind
 declare <8 x i8>  @llvm.arm.neon.vqshiftnsu.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
 declare <4 x i16> @llvm.arm.neon.vqshiftnsu.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
 declare <2 x i32> @llvm.arm.neon.vqshiftnsu.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
+
+define <8 x i8> @vqrshrns8(<8 x i16>* %A) nounwind {
+;CHECK: vqrshrns8:
+;CHECK: vqrshrn.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <8 x i8> @llvm.arm.neon.vqrshiftns.v8i8(<8 x i16> %tmp1, <8 x i16> < i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8 >)
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vqrshrns16(<4 x i32>* %A) nounwind {
+;CHECK: vqrshrns16:
+;CHECK: vqrshrn.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vqrshiftns.v4i16(<4 x i32> %tmp1, <4 x i32> < i32 -16, i32 -16, i32 -16, i32 -16 >)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vqrshrns32(<2 x i64>* %A) nounwind {
+;CHECK: vqrshrns32:
+;CHECK: vqrshrn.s64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vqrshiftns.v2i32(<2 x i64> %tmp1, <2 x i64> < i64 -32, i64 -32 >)
+	ret <2 x i32> %tmp2
+}
+
+define <8 x i8> @vqrshrnu8(<8 x i16>* %A) nounwind {
+;CHECK: vqrshrnu8:
+;CHECK: vqrshrn.u16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <8 x i8> @llvm.arm.neon.vqrshiftnu.v8i8(<8 x i16> %tmp1, <8 x i16> < i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8 >)
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vqrshrnu16(<4 x i32>* %A) nounwind {
+;CHECK: vqrshrnu16:
+;CHECK: vqrshrn.u32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vqrshiftnu.v4i16(<4 x i32> %tmp1, <4 x i32> < i32 -16, i32 -16, i32 -16, i32 -16 >)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vqrshrnu32(<2 x i64>* %A) nounwind {
+;CHECK: vqrshrnu32:
+;CHECK: vqrshrn.u64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vqrshiftnu.v2i32(<2 x i64> %tmp1, <2 x i64> < i64 -32, i64 -32 >)
+	ret <2 x i32> %tmp2
+}
+
+define <8 x i8> @vqrshruns8(<8 x i16>* %A) nounwind {
+;CHECK: vqrshruns8:
+;CHECK: vqrshrun.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <8 x i8> @llvm.arm.neon.vqrshiftnsu.v8i8(<8 x i16> %tmp1, <8 x i16> < i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8 >)
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vqrshruns16(<4 x i32>* %A) nounwind {
+;CHECK: vqrshruns16:
+;CHECK: vqrshrun.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vqrshiftnsu.v4i16(<4 x i32> %tmp1, <4 x i32> < i32 -16, i32 -16, i32 -16, i32 -16 >)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vqrshruns32(<2 x i64>* %A) nounwind {
+;CHECK: vqrshruns32:
+;CHECK: vqrshrun.s64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vqrshiftnsu.v2i32(<2 x i64> %tmp1, <2 x i64> < i64 -32, i64 -32 >)
+	ret <2 x i32> %tmp2
+}
+
+declare <8 x i8>  @llvm.arm.neon.vqrshiftns.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vqrshiftns.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vqrshiftns.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
+
+declare <8 x i8>  @llvm.arm.neon.vqrshiftnu.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vqrshiftnu.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vqrshiftnu.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
+
+declare <8 x i8>  @llvm.arm.neon.vqrshiftnsu.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vqrshiftnsu.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vqrshiftnsu.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vqsub.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vqsub.ll
index bae4ebe..4231fca 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vqsub.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vqsub.ll
@@ -1,14 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vqsub\\.s8} %t | count 2
-; RUN: grep {vqsub\\.s16} %t | count 2
-; RUN: grep {vqsub\\.s32} %t | count 2
-; RUN: grep {vqsub\\.s64} %t | count 2
-; RUN: grep {vqsub\\.u8} %t | count 2
-; RUN: grep {vqsub\\.u16} %t | count 2
-; RUN: grep {vqsub\\.u32} %t | count 2
-; RUN: grep {vqsub\\.u64} %t | count 2
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vqsubs8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vqsubs8:
+;CHECK: vqsub.s8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <8 x i8> @llvm.arm.neon.vqsubs.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
@@ -16,6 +10,8 @@ define <8 x i8> @vqsubs8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vqsubs16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vqsubs16:
+;CHECK: vqsub.s16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vqsubs.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
@@ -23,6 +19,8 @@ define <4 x i16> @vqsubs16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vqsubs32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vqsubs32:
+;CHECK: vqsub.s32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vqsubs.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
@@ -30,6 +28,8 @@ define <2 x i32> @vqsubs32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vqsubs64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vqsubs64:
+;CHECK: vqsub.s64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = call <1 x i64> @llvm.arm.neon.vqsubs.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
@@ -37,6 +37,8 @@ define <1 x i64> @vqsubs64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <8 x i8> @vqsubu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vqsubu8:
+;CHECK: vqsub.u8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <8 x i8> @llvm.arm.neon.vqsubu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
@@ -44,6 +46,8 @@ define <8 x i8> @vqsubu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vqsubu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vqsubu16:
+;CHECK: vqsub.u16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vqsubu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
@@ -51,6 +55,8 @@ define <4 x i16> @vqsubu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vqsubu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vqsubu32:
+;CHECK: vqsub.u32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vqsubu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
@@ -58,6 +64,8 @@ define <2 x i32> @vqsubu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vqsubu64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vqsubu64:
+;CHECK: vqsub.u64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = call <1 x i64> @llvm.arm.neon.vqsubu.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
@@ -65,6 +73,8 @@ define <1 x i64> @vqsubu64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vqsubQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vqsubQs8:
+;CHECK: vqsub.s8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <16 x i8> @llvm.arm.neon.vqsubs.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
@@ -72,6 +82,8 @@ define <16 x i8> @vqsubQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vqsubQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vqsubQs16:
+;CHECK: vqsub.s16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = call <8 x i16> @llvm.arm.neon.vqsubs.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
@@ -79,6 +91,8 @@ define <8 x i16> @vqsubQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vqsubQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vqsubQs32:
+;CHECK: vqsub.s32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = call <4 x i32> @llvm.arm.neon.vqsubs.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
@@ -86,6 +100,8 @@ define <4 x i32> @vqsubQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vqsubQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vqsubQs64:
+;CHECK: vqsub.s64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = call <2 x i64> @llvm.arm.neon.vqsubs.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
@@ -93,6 +109,8 @@ define <2 x i64> @vqsubQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vqsubQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vqsubQu8:
+;CHECK: vqsub.u8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <16 x i8> @llvm.arm.neon.vqsubu.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
@@ -100,6 +118,8 @@ define <16 x i8> @vqsubQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vqsubQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vqsubQu16:
+;CHECK: vqsub.u16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = call <8 x i16> @llvm.arm.neon.vqsubu.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
@@ -107,6 +127,8 @@ define <8 x i16> @vqsubQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vqsubQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vqsubQu32:
+;CHECK: vqsub.u32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = call <4 x i32> @llvm.arm.neon.vqsubu.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
@@ -114,6 +136,8 @@ define <4 x i32> @vqsubQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vqsubQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vqsubQu64:
+;CHECK: vqsub.u64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = call <2 x i64> @llvm.arm.neon.vqsubu.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vraddhn.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vraddhn.ll
deleted file mode 100644
index b3f2f0d..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vraddhn.ll
+++ /dev/null
@@ -1,29 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vraddhn\\.i16} %t | count 1
-; RUN: grep {vraddhn\\.i32} %t | count 1
-; RUN: grep {vraddhn\\.i64} %t | count 1
-
-define <8 x i8> @vraddhni16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vraddhn.v8i8(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vraddhni32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vraddhn.v4i16(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vraddhni64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i64>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vraddhn.v2i32(<2 x i64> %tmp1, <2 x i64> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-declare <8 x i8>  @llvm.arm.neon.vraddhn.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vraddhn.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vraddhn.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vrec.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vrec.ll
new file mode 100644
index 0000000..99989e9
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vrec.ll
@@ -0,0 +1,119 @@
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
+
+define <2 x i32> @vrecpei32(<2 x i32>* %A) nounwind {
+;CHECK: vrecpei32:
+;CHECK: vrecpe.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vrecpe.v2i32(<2 x i32> %tmp1)
+	ret <2 x i32> %tmp2
+}
+
+define <4 x i32> @vrecpeQi32(<4 x i32>* %A) nounwind {
+;CHECK: vrecpeQi32:
+;CHECK: vrecpe.u32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i32> @llvm.arm.neon.vrecpe.v4i32(<4 x i32> %tmp1)
+	ret <4 x i32> %tmp2
+}
+
+define <2 x float> @vrecpef32(<2 x float>* %A) nounwind {
+;CHECK: vrecpef32:
+;CHECK: vrecpe.f32
+	%tmp1 = load <2 x float>* %A
+	%tmp2 = call <2 x float> @llvm.arm.neon.vrecpe.v2f32(<2 x float> %tmp1)
+	ret <2 x float> %tmp2
+}
+
+define <4 x float> @vrecpeQf32(<4 x float>* %A) nounwind {
+;CHECK: vrecpeQf32:
+;CHECK: vrecpe.f32
+	%tmp1 = load <4 x float>* %A
+	%tmp2 = call <4 x float> @llvm.arm.neon.vrecpe.v4f32(<4 x float> %tmp1)
+	ret <4 x float> %tmp2
+}
+
+declare <2 x i32> @llvm.arm.neon.vrecpe.v2i32(<2 x i32>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vrecpe.v4i32(<4 x i32>) nounwind readnone
+
+declare <2 x float> @llvm.arm.neon.vrecpe.v2f32(<2 x float>) nounwind readnone
+declare <4 x float> @llvm.arm.neon.vrecpe.v4f32(<4 x float>) nounwind readnone
+
+define <2 x float> @vrecpsf32(<2 x float>* %A, <2 x float>* %B) nounwind {
+;CHECK: vrecpsf32:
+;CHECK: vrecps.f32
+	%tmp1 = load <2 x float>* %A
+	%tmp2 = load <2 x float>* %B
+	%tmp3 = call <2 x float> @llvm.arm.neon.vrecps.v2f32(<2 x float> %tmp1, <2 x float> %tmp2)
+	ret <2 x float> %tmp3
+}
+
+define <4 x float> @vrecpsQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
+;CHECK: vrecpsQf32:
+;CHECK: vrecps.f32
+	%tmp1 = load <4 x float>* %A
+	%tmp2 = load <4 x float>* %B
+	%tmp3 = call <4 x float> @llvm.arm.neon.vrecps.v4f32(<4 x float> %tmp1, <4 x float> %tmp2)
+	ret <4 x float> %tmp3
+}
+
+declare <2 x float> @llvm.arm.neon.vrecps.v2f32(<2 x float>, <2 x float>) nounwind readnone
+declare <4 x float> @llvm.arm.neon.vrecps.v4f32(<4 x float>, <4 x float>) nounwind readnone
+
+define <2 x i32> @vrsqrtei32(<2 x i32>* %A) nounwind {
+;CHECK: vrsqrtei32:
+;CHECK: vrsqrte.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vrsqrte.v2i32(<2 x i32> %tmp1)
+	ret <2 x i32> %tmp2
+}
+
+define <4 x i32> @vrsqrteQi32(<4 x i32>* %A) nounwind {
+;CHECK: vrsqrteQi32:
+;CHECK: vrsqrte.u32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i32> @llvm.arm.neon.vrsqrte.v4i32(<4 x i32> %tmp1)
+	ret <4 x i32> %tmp2
+}
+
+define <2 x float> @vrsqrtef32(<2 x float>* %A) nounwind {
+;CHECK: vrsqrtef32:
+;CHECK: vrsqrte.f32
+	%tmp1 = load <2 x float>* %A
+	%tmp2 = call <2 x float> @llvm.arm.neon.vrsqrte.v2f32(<2 x float> %tmp1)
+	ret <2 x float> %tmp2
+}
+
+define <4 x float> @vrsqrteQf32(<4 x float>* %A) nounwind {
+;CHECK: vrsqrteQf32:
+;CHECK: vrsqrte.f32
+	%tmp1 = load <4 x float>* %A
+	%tmp2 = call <4 x float> @llvm.arm.neon.vrsqrte.v4f32(<4 x float> %tmp1)
+	ret <4 x float> %tmp2
+}
+
+declare <2 x i32> @llvm.arm.neon.vrsqrte.v2i32(<2 x i32>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vrsqrte.v4i32(<4 x i32>) nounwind readnone
+
+declare <2 x float> @llvm.arm.neon.vrsqrte.v2f32(<2 x float>) nounwind readnone
+declare <4 x float> @llvm.arm.neon.vrsqrte.v4f32(<4 x float>) nounwind readnone
+
+define <2 x float> @vrsqrtsf32(<2 x float>* %A, <2 x float>* %B) nounwind {
+;CHECK: vrsqrtsf32:
+;CHECK: vrsqrts.f32
+	%tmp1 = load <2 x float>* %A
+	%tmp2 = load <2 x float>* %B
+	%tmp3 = call <2 x float> @llvm.arm.neon.vrsqrts.v2f32(<2 x float> %tmp1, <2 x float> %tmp2)
+	ret <2 x float> %tmp3
+}
+
+define <4 x float> @vrsqrtsQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
+;CHECK: vrsqrtsQf32:
+;CHECK: vrsqrts.f32
+	%tmp1 = load <4 x float>* %A
+	%tmp2 = load <4 x float>* %B
+	%tmp3 = call <4 x float> @llvm.arm.neon.vrsqrts.v4f32(<4 x float> %tmp1, <4 x float> %tmp2)
+	ret <4 x float> %tmp3
+}
+
+declare <2 x float> @llvm.arm.neon.vrsqrts.v2f32(<2 x float>, <2 x float>) nounwind readnone
+declare <4 x float> @llvm.arm.neon.vrsqrts.v4f32(<4 x float>, <4 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vrecpe.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vrecpe.ll
deleted file mode 100644
index a97054f..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vrecpe.ll
+++ /dev/null
@@ -1,33 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vrecpe\\.u32} %t | count 2
-; RUN: grep {vrecpe\\.f32} %t | count 2
-
-define <2 x i32> @vrecpei32(<2 x i32>* %A) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vrecpe.v2i32(<2 x i32> %tmp1)
-	ret <2 x i32> %tmp2
-}
-
-define <4 x i32> @vrecpeQi32(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i32> @llvm.arm.neon.vrecpe.v4i32(<4 x i32> %tmp1)
-	ret <4 x i32> %tmp2
-}
-
-define <2 x float> @vrecpef32(<2 x float>* %A) nounwind {
-	%tmp1 = load <2 x float>* %A
-	%tmp2 = call <2 x float> @llvm.arm.neon.vrecpe.v2f32(<2 x float> %tmp1)
-	ret <2 x float> %tmp2
-}
-
-define <4 x float> @vrecpeQf32(<4 x float>* %A) nounwind {
-	%tmp1 = load <4 x float>* %A
-	%tmp2 = call <4 x float> @llvm.arm.neon.vrecpe.v4f32(<4 x float> %tmp1)
-	ret <4 x float> %tmp2
-}
-
-declare <2 x i32> @llvm.arm.neon.vrecpe.v2i32(<2 x i32>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vrecpe.v4i32(<4 x i32>) nounwind readnone
-
-declare <2 x float> @llvm.arm.neon.vrecpe.v2f32(<2 x float>) nounwind readnone
-declare <4 x float> @llvm.arm.neon.vrecpe.v4f32(<4 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vrecps.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vrecps.ll
deleted file mode 100644
index 5ddd60b..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vrecps.ll
+++ /dev/null
@@ -1,19 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vrecps\\.f32} %t | count 2
-
-define <2 x float> @vrecpsf32(<2 x float>* %A, <2 x float>* %B) nounwind {
-	%tmp1 = load <2 x float>* %A
-	%tmp2 = load <2 x float>* %B
-	%tmp3 = call <2 x float> @llvm.arm.neon.vrecps.v2f32(<2 x float> %tmp1, <2 x float> %tmp2)
-	ret <2 x float> %tmp3
-}
-
-define <4 x float> @vrecpsQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
-	%tmp1 = load <4 x float>* %A
-	%tmp2 = load <4 x float>* %B
-	%tmp3 = call <4 x float> @llvm.arm.neon.vrecps.v4f32(<4 x float> %tmp1, <4 x float> %tmp2)
-	ret <4 x float> %tmp3
-}
-
-declare <2 x float> @llvm.arm.neon.vrecps.v2f32(<2 x float>, <2 x float>) nounwind readnone
-declare <4 x float> @llvm.arm.neon.vrecps.v4f32(<4 x float>, <4 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vrhadd.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vrhadd.ll
deleted file mode 100644
index 6fa9f5e..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vrhadd.ll
+++ /dev/null
@@ -1,107 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vrhadd\\.s8} %t | count 2
-; RUN: grep {vrhadd\\.s16} %t | count 2
-; RUN: grep {vrhadd\\.s32} %t | count 2
-; RUN: grep {vrhadd\\.u8} %t | count 2
-; RUN: grep {vrhadd\\.u16} %t | count 2
-; RUN: grep {vrhadd\\.u32} %t | count 2
-
-define <8 x i8> @vrhadds8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vrhadds.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vrhadds16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vrhadds.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vrhadds32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vrhadds.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <8 x i8> @vrhaddu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vrhaddu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vrhaddu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vrhaddu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vrhaddu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vrhaddu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <16 x i8> @vrhaddQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = call <16 x i8> @llvm.arm.neon.vrhadds.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
-	ret <16 x i8> %tmp3
-}
-
-define <8 x i16> @vrhaddQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vrhadds.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vrhaddQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vrhadds.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <16 x i8> @vrhaddQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = call <16 x i8> @llvm.arm.neon.vrhaddu.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
-	ret <16 x i8> %tmp3
-}
-
-define <8 x i16> @vrhaddQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vrhaddu.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vrhaddQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vrhaddu.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-declare <8 x i8>  @llvm.arm.neon.vrhadds.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vrhadds.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vrhadds.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i8>  @llvm.arm.neon.vrhaddu.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vrhaddu.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vrhaddu.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <16 x i8> @llvm.arm.neon.vrhadds.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
-declare <8 x i16> @llvm.arm.neon.vrhadds.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vrhadds.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
-
-declare <16 x i8> @llvm.arm.neon.vrhaddu.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
-declare <8 x i16> @llvm.arm.neon.vrhaddu.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vrhaddu.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vrshl.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vrshl.ll
deleted file mode 100644
index df051e9..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vrshl.ll
+++ /dev/null
@@ -1,245 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vrshl\\.s8} %t | count 2
-; RUN: grep {vrshl\\.s16} %t | count 2
-; RUN: grep {vrshl\\.s32} %t | count 2
-; RUN: grep {vrshl\\.s64} %t | count 2
-; RUN: grep {vrshl\\.u8} %t | count 2
-; RUN: grep {vrshl\\.u16} %t | count 2
-; RUN: grep {vrshl\\.u32} %t | count 2
-; RUN: grep {vrshl\\.u64} %t | count 2
-; RUN: grep {vrshr\\.s8} %t | count 2
-; RUN: grep {vrshr\\.s16} %t | count 2
-; RUN: grep {vrshr\\.s32} %t | count 2
-; RUN: grep {vrshr\\.s64} %t | count 2
-; RUN: grep {vrshr\\.u8} %t | count 2
-; RUN: grep {vrshr\\.u16} %t | count 2
-; RUN: grep {vrshr\\.u32} %t | count 2
-; RUN: grep {vrshr\\.u64} %t | count 2
-
-define <8 x i8> @vrshls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vrshifts.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vrshls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vrshifts.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vrshls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vrshifts.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <1 x i64> @vrshls64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
-	%tmp1 = load <1 x i64>* %A
-	%tmp2 = load <1 x i64>* %B
-	%tmp3 = call <1 x i64> @llvm.arm.neon.vrshifts.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
-	ret <1 x i64> %tmp3
-}
-
-define <8 x i8> @vrshlu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vrshiftu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vrshlu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vrshiftu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vrshlu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vrshiftu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-define <1 x i64> @vrshlu64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
-	%tmp1 = load <1 x i64>* %A
-	%tmp2 = load <1 x i64>* %B
-	%tmp3 = call <1 x i64> @llvm.arm.neon.vrshiftu.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
-	ret <1 x i64> %tmp3
-}
-
-define <16 x i8> @vrshlQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = call <16 x i8> @llvm.arm.neon.vrshifts.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
-	ret <16 x i8> %tmp3
-}
-
-define <8 x i16> @vrshlQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vrshifts.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vrshlQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vrshifts.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vrshlQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i64>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vrshifts.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-define <16 x i8> @vrshlQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = call <16 x i8> @llvm.arm.neon.vrshiftu.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
-	ret <16 x i8> %tmp3
-}
-
-define <8 x i16> @vrshlQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vrshiftu.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vrshlQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vrshiftu.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vrshlQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i64>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vrshiftu.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-define <8 x i8> @vrshrs8(<8 x i8>* %A) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = call <8 x i8> @llvm.arm.neon.vrshifts.v8i8(<8 x i8> %tmp1, <8 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vrshrs16(<4 x i16>* %A) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vrshifts.v4i16(<4 x i16> %tmp1, <4 x i16> < i16 -16, i16 -16, i16 -16, i16 -16 >)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vrshrs32(<2 x i32>* %A) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vrshifts.v2i32(<2 x i32> %tmp1, <2 x i32> < i32 -32, i32 -32 >)
-	ret <2 x i32> %tmp2
-}
-
-define <1 x i64> @vrshrs64(<1 x i64>* %A) nounwind {
-	%tmp1 = load <1 x i64>* %A
-	%tmp2 = call <1 x i64> @llvm.arm.neon.vrshifts.v1i64(<1 x i64> %tmp1, <1 x i64> < i64 -64 >)
-	ret <1 x i64> %tmp2
-}
-
-define <8 x i8> @vrshru8(<8 x i8>* %A) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = call <8 x i8> @llvm.arm.neon.vrshiftu.v8i8(<8 x i8> %tmp1, <8 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vrshru16(<4 x i16>* %A) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vrshiftu.v4i16(<4 x i16> %tmp1, <4 x i16> < i16 -16, i16 -16, i16 -16, i16 -16 >)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vrshru32(<2 x i32>* %A) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vrshiftu.v2i32(<2 x i32> %tmp1, <2 x i32> < i32 -32, i32 -32 >)
-	ret <2 x i32> %tmp2
-}
-
-define <1 x i64> @vrshru64(<1 x i64>* %A) nounwind {
-	%tmp1 = load <1 x i64>* %A
-	%tmp2 = call <1 x i64> @llvm.arm.neon.vrshiftu.v1i64(<1 x i64> %tmp1, <1 x i64> < i64 -64 >)
-	ret <1 x i64> %tmp2
-}
-
-define <16 x i8> @vrshrQs8(<16 x i8>* %A) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = call <16 x i8> @llvm.arm.neon.vrshifts.v16i8(<16 x i8> %tmp1, <16 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
-	ret <16 x i8> %tmp2
-}
-
-define <8 x i16> @vrshrQs16(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <8 x i16> @llvm.arm.neon.vrshifts.v8i16(<8 x i16> %tmp1, <8 x i16> < i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16 >)
-	ret <8 x i16> %tmp2
-}
-
-define <4 x i32> @vrshrQs32(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i32> @llvm.arm.neon.vrshifts.v4i32(<4 x i32> %tmp1, <4 x i32> < i32 -32, i32 -32, i32 -32, i32 -32 >)
-	ret <4 x i32> %tmp2
-}
-
-define <2 x i64> @vrshrQs64(<2 x i64>* %A) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = call <2 x i64> @llvm.arm.neon.vrshifts.v2i64(<2 x i64> %tmp1, <2 x i64> < i64 -64, i64 -64 >)
-	ret <2 x i64> %tmp2
-}
-
-define <16 x i8> @vrshrQu8(<16 x i8>* %A) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = call <16 x i8> @llvm.arm.neon.vrshiftu.v16i8(<16 x i8> %tmp1, <16 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
-	ret <16 x i8> %tmp2
-}
-
-define <8 x i16> @vrshrQu16(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <8 x i16> @llvm.arm.neon.vrshiftu.v8i16(<8 x i16> %tmp1, <8 x i16> < i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16 >)
-	ret <8 x i16> %tmp2
-}
-
-define <4 x i32> @vrshrQu32(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i32> @llvm.arm.neon.vrshiftu.v4i32(<4 x i32> %tmp1, <4 x i32> < i32 -32, i32 -32, i32 -32, i32 -32 >)
-	ret <4 x i32> %tmp2
-}
-
-define <2 x i64> @vrshrQu64(<2 x i64>* %A) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = call <2 x i64> @llvm.arm.neon.vrshiftu.v2i64(<2 x i64> %tmp1, <2 x i64> < i64 -64, i64 -64 >)
-	ret <2 x i64> %tmp2
-}
-
-declare <8 x i8>  @llvm.arm.neon.vrshifts.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vrshifts.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vrshifts.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-declare <1 x i64> @llvm.arm.neon.vrshifts.v1i64(<1 x i64>, <1 x i64>) nounwind readnone
-
-declare <8 x i8>  @llvm.arm.neon.vrshiftu.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vrshiftu.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vrshiftu.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
-declare <1 x i64> @llvm.arm.neon.vrshiftu.v1i64(<1 x i64>, <1 x i64>) nounwind readnone
-
-declare <16 x i8> @llvm.arm.neon.vrshifts.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
-declare <8 x i16> @llvm.arm.neon.vrshifts.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vrshifts.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vrshifts.v2i64(<2 x i64>, <2 x i64>) nounwind readnone
-
-declare <16 x i8> @llvm.arm.neon.vrshiftu.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
-declare <8 x i16> @llvm.arm.neon.vrshiftu.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vrshiftu.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vrshiftu.v2i64(<2 x i64>, <2 x i64>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vrshrn.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vrshrn.ll
deleted file mode 100644
index 3dd21bc..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vrshrn.ll
+++ /dev/null
@@ -1,26 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vrshrn\\.i16} %t | count 1
-; RUN: grep {vrshrn\\.i32} %t | count 1
-; RUN: grep {vrshrn\\.i64} %t | count 1
-
-define <8 x i8> @vrshrns8(<8 x i16>* %A) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = call <8 x i8> @llvm.arm.neon.vrshiftn.v8i8(<8 x i16> %tmp1, <8 x i16> < i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8 >)
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vrshrns16(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i16> @llvm.arm.neon.vrshiftn.v4i16(<4 x i32> %tmp1, <4 x i32> < i32 -16, i32 -16, i32 -16, i32 -16 >)
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vrshrns32(<2 x i64>* %A) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vrshiftn.v2i32(<2 x i64> %tmp1, <2 x i64> < i64 -32, i64 -32 >)
-	ret <2 x i32> %tmp2
-}
-
-declare <8 x i8>  @llvm.arm.neon.vrshiftn.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vrshiftn.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vrshiftn.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vrsqrte.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vrsqrte.ll
deleted file mode 100644
index 5eb9494..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vrsqrte.ll
+++ /dev/null
@@ -1,33 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vrsqrte\\.u32} %t | count 2
-; RUN: grep {vrsqrte\\.f32} %t | count 2
-
-define <2 x i32> @vrsqrtei32(<2 x i32>* %A) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = call <2 x i32> @llvm.arm.neon.vrsqrte.v2i32(<2 x i32> %tmp1)
-	ret <2 x i32> %tmp2
-}
-
-define <4 x i32> @vrsqrteQi32(<4 x i32>* %A) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = call <4 x i32> @llvm.arm.neon.vrsqrte.v4i32(<4 x i32> %tmp1)
-	ret <4 x i32> %tmp2
-}
-
-define <2 x float> @vrsqrtef32(<2 x float>* %A) nounwind {
-	%tmp1 = load <2 x float>* %A
-	%tmp2 = call <2 x float> @llvm.arm.neon.vrsqrte.v2f32(<2 x float> %tmp1)
-	ret <2 x float> %tmp2
-}
-
-define <4 x float> @vrsqrteQf32(<4 x float>* %A) nounwind {
-	%tmp1 = load <4 x float>* %A
-	%tmp2 = call <4 x float> @llvm.arm.neon.vrsqrte.v4f32(<4 x float> %tmp1)
-	ret <4 x float> %tmp2
-}
-
-declare <2 x i32> @llvm.arm.neon.vrsqrte.v2i32(<2 x i32>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vrsqrte.v4i32(<4 x i32>) nounwind readnone
-
-declare <2 x float> @llvm.arm.neon.vrsqrte.v2f32(<2 x float>) nounwind readnone
-declare <4 x float> @llvm.arm.neon.vrsqrte.v4f32(<4 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vrsqrts.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vrsqrts.ll
deleted file mode 100644
index 46a4ce9..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vrsqrts.ll
+++ /dev/null
@@ -1,19 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vrsqrts\\.f32} %t | count 2
-
-define <2 x float> @vrsqrtsf32(<2 x float>* %A, <2 x float>* %B) nounwind {
-	%tmp1 = load <2 x float>* %A
-	%tmp2 = load <2 x float>* %B
-	%tmp3 = call <2 x float> @llvm.arm.neon.vrsqrts.v2f32(<2 x float> %tmp1, <2 x float> %tmp2)
-	ret <2 x float> %tmp3
-}
-
-define <4 x float> @vrsqrtsQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
-	%tmp1 = load <4 x float>* %A
-	%tmp2 = load <4 x float>* %B
-	%tmp3 = call <4 x float> @llvm.arm.neon.vrsqrts.v4f32(<4 x float> %tmp1, <4 x float> %tmp2)
-	ret <4 x float> %tmp3
-}
-
-declare <2 x float> @llvm.arm.neon.vrsqrts.v2f32(<2 x float>, <2 x float>) nounwind readnone
-declare <4 x float> @llvm.arm.neon.vrsqrts.v4f32(<4 x float>, <4 x float>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vrsubhn.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vrsubhn.ll
deleted file mode 100644
index 4691783..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vrsubhn.ll
+++ /dev/null
@@ -1,29 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vrsubhn\\.i16} %t | count 1
-; RUN: grep {vrsubhn\\.i32} %t | count 1
-; RUN: grep {vrsubhn\\.i64} %t | count 1
-
-define <8 x i8> @vrsubhni16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vrsubhn.v8i8(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vrsubhni32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vrsubhn.v4i16(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vrsubhni64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i64>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vrsubhn.v2i32(<2 x i64> %tmp1, <2 x i64> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-declare <8 x i8>  @llvm.arm.neon.vrsubhn.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vrsubhn.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vrsubhn.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vset_lane.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vset_lane.ll
deleted file mode 100644
index bb20ded..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vset_lane.ll
+++ /dev/null
@@ -1,47 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vmov\\.8} %t | count 2
-; RUN: grep {vmov\\.16} %t | count 2
-; RUN: grep {vmov\\.32} %t | count 2
-; RUN: grep {fcpys} %t | count 2
-
-define <8 x i8> @vset_lane8(<8 x i8>* %A, i8 %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = insertelement <8 x i8> %tmp1, i8 %B, i32 1
-	ret <8 x i8> %tmp2
-}
-
-define <4 x i16> @vset_lane16(<4 x i16>* %A, i16 %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = insertelement <4 x i16> %tmp1, i16 %B, i32 1
-	ret <4 x i16> %tmp2
-}
-
-define <2 x i32> @vset_lane32(<2 x i32>* %A, i32 %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = insertelement <2 x i32> %tmp1, i32 %B, i32 1
-	ret <2 x i32> %tmp2
-}
-
-define <16 x i8> @vsetQ_lane8(<16 x i8>* %A, i8 %B) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = insertelement <16 x i8> %tmp1, i8 %B, i32 1
-	ret <16 x i8> %tmp2
-}
-
-define <8 x i16> @vsetQ_lane16(<8 x i16>* %A, i16 %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = insertelement <8 x i16> %tmp1, i16 %B, i32 1
-	ret <8 x i16> %tmp2
-}
-
-define <4 x i32> @vsetQ_lane32(<4 x i32>* %A, i32 %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = insertelement <4 x i32> %tmp1, i32 %B, i32 1
-	ret <4 x i32> %tmp2
-}
-
-define arm_aapcs_vfpcc <2 x float> @test_vset_lanef32(float %arg0_float32_t, <2 x float> %arg1_float32x2_t) nounwind {
-entry:
-  %0 = insertelement <2 x float> %arg1_float32x2_t, float %arg0_float32_t, i32 1 ; <<2 x float>> [#uses=1]
-  ret <2 x float> %0
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vshift.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vshift.ll
index 346d7e2..f3cbec7 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vshift.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vshift.ll
@@ -1,30 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vshl\\.s8} %t | count 2
-; RUN: grep {vshl\\.s16} %t | count 2
-; RUN: grep {vshl\\.s32} %t | count 2
-; RUN: grep {vshl\\.s64} %t | count 2
-; RUN: grep {vshl\\.u8} %t | count 4
-; RUN: grep {vshl\\.u16} %t | count 4
-; RUN: grep {vshl\\.u32} %t | count 4
-; RUN: grep {vshl\\.u64} %t | count 4
-; RUN: grep {vshl\\.i8} %t | count 2
-; RUN: grep {vshl\\.i16} %t | count 2
-; RUN: grep {vshl\\.i32} %t | count 2
-; RUN: grep {vshl\\.i64} %t | count 2
-; RUN: grep {vshr\\.u8} %t | count 2
-; RUN: grep {vshr\\.u16} %t | count 2
-; RUN: grep {vshr\\.u32} %t | count 2
-; RUN: grep {vshr\\.u64} %t | count 2
-; RUN: grep {vshr\\.s8} %t | count 2
-; RUN: grep {vshr\\.s16} %t | count 2
-; RUN: grep {vshr\\.s32} %t | count 2
-; RUN: grep {vshr\\.s64} %t | count 2
-; RUN: grep {vneg\\.s8} %t | count 4
-; RUN: grep {vneg\\.s16} %t | count 4
-; RUN: grep {vneg\\.s32} %t | count 4
-; RUN: grep {vsub\\.i64} %t | count 4
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vshls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vshls8:
+;CHECK: vshl.u8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = shl <8 x i8> %tmp1, %tmp2
@@ -32,6 +10,8 @@ define <8 x i8> @vshls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vshls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vshls16:
+;CHECK: vshl.u16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = shl <4 x i16> %tmp1, %tmp2
@@ -39,6 +19,8 @@ define <4 x i16> @vshls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vshls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vshls32:
+;CHECK: vshl.u32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = shl <2 x i32> %tmp1, %tmp2
@@ -46,6 +28,8 @@ define <2 x i32> @vshls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vshls64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vshls64:
+;CHECK: vshl.u64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = shl <1 x i64> %tmp1, %tmp2
@@ -53,30 +37,40 @@ define <1 x i64> @vshls64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <8 x i8> @vshli8(<8 x i8>* %A) nounwind {
+;CHECK: vshli8:
+;CHECK: vshl.i8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = shl <8 x i8> %tmp1, < i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7 >
 	ret <8 x i8> %tmp2
 }
 
 define <4 x i16> @vshli16(<4 x i16>* %A) nounwind {
+;CHECK: vshli16:
+;CHECK: vshl.i16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = shl <4 x i16> %tmp1, < i16 15, i16 15, i16 15, i16 15 >
 	ret <4 x i16> %tmp2
 }
 
 define <2 x i32> @vshli32(<2 x i32>* %A) nounwind {
+;CHECK: vshli32:
+;CHECK: vshl.i32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = shl <2 x i32> %tmp1, < i32 31, i32 31 >
 	ret <2 x i32> %tmp2
 }
 
 define <1 x i64> @vshli64(<1 x i64>* %A) nounwind {
+;CHECK: vshli64:
+;CHECK: vshl.i64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = shl <1 x i64> %tmp1, < i64 63 >
 	ret <1 x i64> %tmp2
 }
 
 define <16 x i8> @vshlQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vshlQs8:
+;CHECK: vshl.u8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = shl <16 x i8> %tmp1, %tmp2
@@ -84,6 +78,8 @@ define <16 x i8> @vshlQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vshlQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vshlQs16:
+;CHECK: vshl.u16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = shl <8 x i16> %tmp1, %tmp2
@@ -91,6 +87,8 @@ define <8 x i16> @vshlQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vshlQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vshlQs32:
+;CHECK: vshl.u32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = shl <4 x i32> %tmp1, %tmp2
@@ -98,6 +96,8 @@ define <4 x i32> @vshlQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vshlQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vshlQs64:
+;CHECK: vshl.u64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = shl <2 x i64> %tmp1, %tmp2
@@ -105,30 +105,41 @@ define <2 x i64> @vshlQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vshlQi8(<16 x i8>* %A) nounwind {
+;CHECK: vshlQi8:
+;CHECK: vshl.i8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = shl <16 x i8> %tmp1, < i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7 >
 	ret <16 x i8> %tmp2
 }
 
 define <8 x i16> @vshlQi16(<8 x i16>* %A) nounwind {
+;CHECK: vshlQi16:
+;CHECK: vshl.i16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = shl <8 x i16> %tmp1, < i16 15, i16 15, i16 15, i16 15, i16 15, i16 15, i16 15, i16 15 >
 	ret <8 x i16> %tmp2
 }
 
 define <4 x i32> @vshlQi32(<4 x i32>* %A) nounwind {
+;CHECK: vshlQi32:
+;CHECK: vshl.i32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = shl <4 x i32> %tmp1, < i32 31, i32 31, i32 31, i32 31 >
 	ret <4 x i32> %tmp2
 }
 
 define <2 x i64> @vshlQi64(<2 x i64>* %A) nounwind {
+;CHECK: vshlQi64:
+;CHECK: vshl.i64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = shl <2 x i64> %tmp1, < i64 63, i64 63 >
 	ret <2 x i64> %tmp2
 }
 
 define <8 x i8> @vlshru8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vlshru8:
+;CHECK: vneg.s8
+;CHECK: vshl.u8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = lshr <8 x i8> %tmp1, %tmp2
@@ -136,6 +147,9 @@ define <8 x i8> @vlshru8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vlshru16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vlshru16:
+;CHECK: vneg.s16
+;CHECK: vshl.u16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = lshr <4 x i16> %tmp1, %tmp2
@@ -143,6 +157,9 @@ define <4 x i16> @vlshru16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vlshru32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vlshru32:
+;CHECK: vneg.s32
+;CHECK: vshl.u32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = lshr <2 x i32> %tmp1, %tmp2
@@ -150,6 +167,9 @@ define <2 x i32> @vlshru32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vlshru64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vlshru64:
+;CHECK: vsub.i64
+;CHECK: vshl.u64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = lshr <1 x i64> %tmp1, %tmp2
@@ -157,30 +177,41 @@ define <1 x i64> @vlshru64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <8 x i8> @vlshri8(<8 x i8>* %A) nounwind {
+;CHECK: vlshri8:
+;CHECK: vshr.u8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = lshr <8 x i8> %tmp1, < i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8 >
 	ret <8 x i8> %tmp2
 }
 
 define <4 x i16> @vlshri16(<4 x i16>* %A) nounwind {
+;CHECK: vlshri16:
+;CHECK: vshr.u16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = lshr <4 x i16> %tmp1, < i16 16, i16 16, i16 16, i16 16 >
 	ret <4 x i16> %tmp2
 }
 
 define <2 x i32> @vlshri32(<2 x i32>* %A) nounwind {
+;CHECK: vlshri32:
+;CHECK: vshr.u32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = lshr <2 x i32> %tmp1, < i32 32, i32 32 >
 	ret <2 x i32> %tmp2
 }
 
 define <1 x i64> @vlshri64(<1 x i64>* %A) nounwind {
+;CHECK: vlshri64:
+;CHECK: vshr.u64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = lshr <1 x i64> %tmp1, < i64 64 >
 	ret <1 x i64> %tmp2
 }
 
 define <16 x i8> @vlshrQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vlshrQu8:
+;CHECK: vneg.s8
+;CHECK: vshl.u8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = lshr <16 x i8> %tmp1, %tmp2
@@ -188,6 +219,9 @@ define <16 x i8> @vlshrQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vlshrQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vlshrQu16:
+;CHECK: vneg.s16
+;CHECK: vshl.u16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = lshr <8 x i16> %tmp1, %tmp2
@@ -195,6 +229,9 @@ define <8 x i16> @vlshrQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vlshrQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vlshrQu32:
+;CHECK: vneg.s32
+;CHECK: vshl.u32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = lshr <4 x i32> %tmp1, %tmp2
@@ -202,6 +239,9 @@ define <4 x i32> @vlshrQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vlshrQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vlshrQu64:
+;CHECK: vsub.i64
+;CHECK: vshl.u64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = lshr <2 x i64> %tmp1, %tmp2
@@ -209,30 +249,48 @@ define <2 x i64> @vlshrQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vlshrQi8(<16 x i8>* %A) nounwind {
+;CHECK: vlshrQi8:
+;CHECK: vshr.u8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = lshr <16 x i8> %tmp1, < i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8 >
 	ret <16 x i8> %tmp2
 }
 
 define <8 x i16> @vlshrQi16(<8 x i16>* %A) nounwind {
+;CHECK: vlshrQi16:
+;CHECK: vshr.u16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = lshr <8 x i16> %tmp1, < i16 16, i16 16, i16 16, i16 16, i16 16, i16 16, i16 16, i16 16 >
 	ret <8 x i16> %tmp2
 }
 
 define <4 x i32> @vlshrQi32(<4 x i32>* %A) nounwind {
+;CHECK: vlshrQi32:
+;CHECK: vshr.u32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = lshr <4 x i32> %tmp1, < i32 32, i32 32, i32 32, i32 32 >
 	ret <4 x i32> %tmp2
 }
 
 define <2 x i64> @vlshrQi64(<2 x i64>* %A) nounwind {
+;CHECK: vlshrQi64:
+;CHECK: vshr.u64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = lshr <2 x i64> %tmp1, < i64 64, i64 64 >
 	ret <2 x i64> %tmp2
 }
 
+; Example that requires splitting and expanding a vector shift.
+define <2 x i64> @update(<2 x i64> %val) nounwind readnone {
+entry:
+	%shr = lshr <2 x i64> %val, < i64 2, i64 2 >		; <<2 x i64>> [#uses=1]
+	ret <2 x i64> %shr
+}
+
 define <8 x i8> @vashrs8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vashrs8:
+;CHECK: vneg.s8
+;CHECK: vshl.s8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = ashr <8 x i8> %tmp1, %tmp2
@@ -240,6 +298,9 @@ define <8 x i8> @vashrs8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vashrs16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vashrs16:
+;CHECK: vneg.s16
+;CHECK: vshl.s16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = ashr <4 x i16> %tmp1, %tmp2
@@ -247,6 +308,9 @@ define <4 x i16> @vashrs16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vashrs32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vashrs32:
+;CHECK: vneg.s32
+;CHECK: vshl.s32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = ashr <2 x i32> %tmp1, %tmp2
@@ -254,6 +318,9 @@ define <2 x i32> @vashrs32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vashrs64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vashrs64:
+;CHECK: vsub.i64
+;CHECK: vshl.s64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = ashr <1 x i64> %tmp1, %tmp2
@@ -261,30 +328,41 @@ define <1 x i64> @vashrs64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <8 x i8> @vashri8(<8 x i8>* %A) nounwind {
+;CHECK: vashri8:
+;CHECK: vshr.s8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = ashr <8 x i8> %tmp1, < i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8 >
 	ret <8 x i8> %tmp2
 }
 
 define <4 x i16> @vashri16(<4 x i16>* %A) nounwind {
+;CHECK: vashri16:
+;CHECK: vshr.s16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = ashr <4 x i16> %tmp1, < i16 16, i16 16, i16 16, i16 16 >
 	ret <4 x i16> %tmp2
 }
 
 define <2 x i32> @vashri32(<2 x i32>* %A) nounwind {
+;CHECK: vashri32:
+;CHECK: vshr.s32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = ashr <2 x i32> %tmp1, < i32 32, i32 32 >
 	ret <2 x i32> %tmp2
 }
 
 define <1 x i64> @vashri64(<1 x i64>* %A) nounwind {
+;CHECK: vashri64:
+;CHECK: vshr.s64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = ashr <1 x i64> %tmp1, < i64 64 >
 	ret <1 x i64> %tmp2
 }
 
 define <16 x i8> @vashrQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vashrQs8:
+;CHECK: vneg.s8
+;CHECK: vshl.s8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = ashr <16 x i8> %tmp1, %tmp2
@@ -292,6 +370,9 @@ define <16 x i8> @vashrQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vashrQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vashrQs16:
+;CHECK: vneg.s16
+;CHECK: vshl.s16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = ashr <8 x i16> %tmp1, %tmp2
@@ -299,6 +380,9 @@ define <8 x i16> @vashrQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vashrQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vashrQs32:
+;CHECK: vneg.s32
+;CHECK: vshl.s32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = ashr <4 x i32> %tmp1, %tmp2
@@ -306,6 +390,9 @@ define <4 x i32> @vashrQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vashrQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vashrQs64:
+;CHECK: vsub.i64
+;CHECK: vshl.s64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = ashr <2 x i64> %tmp1, %tmp2
@@ -313,24 +400,32 @@ define <2 x i64> @vashrQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vashrQi8(<16 x i8>* %A) nounwind {
+;CHECK: vashrQi8:
+;CHECK: vshr.s8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = ashr <16 x i8> %tmp1, < i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8 >
 	ret <16 x i8> %tmp2
 }
 
 define <8 x i16> @vashrQi16(<8 x i16>* %A) nounwind {
+;CHECK: vashrQi16:
+;CHECK: vshr.s16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = ashr <8 x i16> %tmp1, < i16 16, i16 16, i16 16, i16 16, i16 16, i16 16, i16 16, i16 16 >
 	ret <8 x i16> %tmp2
 }
 
 define <4 x i32> @vashrQi32(<4 x i32>* %A) nounwind {
+;CHECK: vashrQi32:
+;CHECK: vshr.s32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = ashr <4 x i32> %tmp1, < i32 32, i32 32, i32 32, i32 32 >
 	ret <4 x i32> %tmp2
 }
 
 define <2 x i64> @vashrQi64(<2 x i64>* %A) nounwind {
+;CHECK: vashrQi64:
+;CHECK: vshr.s64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = ashr <2 x i64> %tmp1, < i64 64, i64 64 >
 	ret <2 x i64> %tmp2
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vshift_split.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vshift_split.ll
deleted file mode 100644
index f05921f..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vshift_split.ll
+++ /dev/null
@@ -1,8 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=-neon
-
-; Example that requires splitting and expanding a vector shift.
-define <2 x i64> @update(<2 x i64> %val) nounwind readnone {
-entry:
-	%shr = lshr <2 x i64> %val, < i64 2, i64 2 >		; <<2 x i64>> [#uses=1]
-	ret <2 x i64> %shr
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vshiftins.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vshiftins.ll
index 251efdc..3a4f857 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vshiftins.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vshiftins.ll
@@ -1,14 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vsli\\.8} %t | count 2
-; RUN: grep {vsli\\.16} %t | count 2
-; RUN: grep {vsli\\.32} %t | count 2
-; RUN: grep {vsli\\.64} %t | count 2
-; RUN: grep {vsri\\.8} %t | count 2
-; RUN: grep {vsri\\.16} %t | count 2
-; RUN: grep {vsri\\.32} %t | count 2
-; RUN: grep {vsri\\.64} %t | count 2
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vsli8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vsli8:
+;CHECK: vsli.8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <8 x i8> @llvm.arm.neon.vshiftins.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2, <8 x i8> < i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7 >)
@@ -16,6 +10,8 @@ define <8 x i8> @vsli8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vsli16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vsli16:
+;CHECK: vsli.16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vshiftins.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2, <4 x i16> < i16 15, i16 15, i16 15, i16 15 >)
@@ -23,6 +19,8 @@ define <4 x i16> @vsli16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vsli32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vsli32:
+;CHECK: vsli.32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vshiftins.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2, <2 x i32> < i32 31, i32 31 >)
@@ -30,6 +28,8 @@ define <2 x i32> @vsli32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vsli64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vsli64:
+;CHECK: vsli.64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = call <1 x i64> @llvm.arm.neon.vshiftins.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2, <1 x i64> < i64 63 >)
@@ -37,6 +37,8 @@ define <1 x i64> @vsli64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vsliQ8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vsliQ8:
+;CHECK: vsli.8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <16 x i8> @llvm.arm.neon.vshiftins.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2, <16 x i8> < i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7 >)
@@ -44,6 +46,8 @@ define <16 x i8> @vsliQ8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vsliQ16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vsliQ16:
+;CHECK: vsli.16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = call <8 x i16> @llvm.arm.neon.vshiftins.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2, <8 x i16> < i16 15, i16 15, i16 15, i16 15, i16 15, i16 15, i16 15, i16 15 >)
@@ -51,6 +55,8 @@ define <8 x i16> @vsliQ16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vsliQ32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vsliQ32:
+;CHECK: vsli.32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = call <4 x i32> @llvm.arm.neon.vshiftins.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2, <4 x i32> < i32 31, i32 31, i32 31, i32 31 >)
@@ -58,6 +64,8 @@ define <4 x i32> @vsliQ32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vsliQ64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vsliQ64:
+;CHECK: vsli.64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = call <2 x i64> @llvm.arm.neon.vshiftins.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2, <2 x i64> < i64 63, i64 63 >)
@@ -65,6 +73,8 @@ define <2 x i64> @vsliQ64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
 }
 
 define <8 x i8> @vsri8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vsri8:
+;CHECK: vsri.8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <8 x i8> @llvm.arm.neon.vshiftins.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2, <8 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
@@ -72,6 +82,8 @@ define <8 x i8> @vsri8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vsri16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vsri16:
+;CHECK: vsri.16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vshiftins.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2, <4 x i16> < i16 -16, i16 -16, i16 -16, i16 -16 >)
@@ -79,6 +91,8 @@ define <4 x i16> @vsri16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vsri32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vsri32:
+;CHECK: vsri.32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vshiftins.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2, <2 x i32> < i32 -32, i32 -32 >)
@@ -86,6 +100,8 @@ define <2 x i32> @vsri32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vsri64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vsri64:
+;CHECK: vsri.64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = call <1 x i64> @llvm.arm.neon.vshiftins.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2, <1 x i64> < i64 -64 >)
@@ -93,6 +109,8 @@ define <1 x i64> @vsri64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vsriQ8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vsriQ8:
+;CHECK: vsri.8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <16 x i8> @llvm.arm.neon.vshiftins.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2, <16 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
@@ -100,6 +118,8 @@ define <16 x i8> @vsriQ8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vsriQ16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vsriQ16:
+;CHECK: vsri.16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = call <8 x i16> @llvm.arm.neon.vshiftins.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2, <8 x i16> < i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16 >)
@@ -107,6 +127,8 @@ define <8 x i16> @vsriQ16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vsriQ32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vsriQ32:
+;CHECK: vsri.32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = call <4 x i32> @llvm.arm.neon.vshiftins.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2, <4 x i32> < i32 -32, i32 -32, i32 -32, i32 -32 >)
@@ -114,6 +136,8 @@ define <4 x i32> @vsriQ32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vsriQ64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vsriQ64:
+;CHECK: vsri.64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = call <2 x i64> @llvm.arm.neon.vshiftins.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2, <2 x i64> < i64 -64, i64 -64 >)
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vshl.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vshl.ll
index 773b184..818e71b 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vshl.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vshl.ll
@@ -1,26 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vshl\\.s8} %t | count 2
-; RUN: grep {vshl\\.s16} %t | count 2
-; RUN: grep {vshl\\.s32} %t | count 2
-; RUN: grep {vshl\\.s64} %t | count 2
-; RUN: grep {vshl\\.u8} %t | count 2
-; RUN: grep {vshl\\.u16} %t | count 2
-; RUN: grep {vshl\\.u32} %t | count 2
-; RUN: grep {vshl\\.u64} %t | count 2
-; RUN: grep {vshl\\.i8} %t | count 2
-; RUN: grep {vshl\\.i16} %t | count 2
-; RUN: grep {vshl\\.i32} %t | count 2
-; RUN: grep {vshl\\.i64} %t | count 2
-; RUN: grep {vshr\\.s8} %t | count 2
-; RUN: grep {vshr\\.s16} %t | count 2
-; RUN: grep {vshr\\.s32} %t | count 2
-; RUN: grep {vshr\\.s64} %t | count 2
-; RUN: grep {vshr\\.u8} %t | count 2
-; RUN: grep {vshr\\.u16} %t | count 2
-; RUN: grep {vshr\\.u32} %t | count 2
-; RUN: grep {vshr\\.u64} %t | count 2
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vshls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vshls8:
+;CHECK: vshl.s8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <8 x i8> @llvm.arm.neon.vshifts.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
@@ -28,6 +10,8 @@ define <8 x i8> @vshls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vshls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vshls16:
+;CHECK: vshl.s16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vshifts.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
@@ -35,6 +19,8 @@ define <4 x i16> @vshls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vshls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vshls32:
+;CHECK: vshl.s32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vshifts.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
@@ -42,6 +28,8 @@ define <2 x i32> @vshls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vshls64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vshls64:
+;CHECK: vshl.s64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = call <1 x i64> @llvm.arm.neon.vshifts.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
@@ -49,6 +37,8 @@ define <1 x i64> @vshls64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <8 x i8> @vshlu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vshlu8:
+;CHECK: vshl.u8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <8 x i8> @llvm.arm.neon.vshiftu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
@@ -56,6 +46,8 @@ define <8 x i8> @vshlu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vshlu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vshlu16:
+;CHECK: vshl.u16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vshiftu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
@@ -63,6 +55,8 @@ define <4 x i16> @vshlu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vshlu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vshlu32:
+;CHECK: vshl.u32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vshiftu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
@@ -70,6 +64,8 @@ define <2 x i32> @vshlu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vshlu64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vshlu64:
+;CHECK: vshl.u64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = call <1 x i64> @llvm.arm.neon.vshiftu.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
@@ -77,6 +73,8 @@ define <1 x i64> @vshlu64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vshlQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vshlQs8:
+;CHECK: vshl.s8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <16 x i8> @llvm.arm.neon.vshifts.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
@@ -84,6 +82,8 @@ define <16 x i8> @vshlQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vshlQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vshlQs16:
+;CHECK: vshl.s16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = call <8 x i16> @llvm.arm.neon.vshifts.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
@@ -91,6 +91,8 @@ define <8 x i16> @vshlQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vshlQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vshlQs32:
+;CHECK: vshl.s32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = call <4 x i32> @llvm.arm.neon.vshifts.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
@@ -98,6 +100,8 @@ define <4 x i32> @vshlQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vshlQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vshlQs64:
+;CHECK: vshl.s64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = call <2 x i64> @llvm.arm.neon.vshifts.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
@@ -105,6 +109,8 @@ define <2 x i64> @vshlQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vshlQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vshlQu8:
+;CHECK: vshl.u8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <16 x i8> @llvm.arm.neon.vshiftu.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
@@ -112,6 +118,8 @@ define <16 x i8> @vshlQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vshlQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vshlQu16:
+;CHECK: vshl.u16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = call <8 x i16> @llvm.arm.neon.vshiftu.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
@@ -119,6 +127,8 @@ define <8 x i16> @vshlQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vshlQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vshlQu32:
+;CHECK: vshl.u32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = call <4 x i32> @llvm.arm.neon.vshiftu.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
@@ -126,6 +136,8 @@ define <4 x i32> @vshlQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vshlQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vshlQu64:
+;CHECK: vshl.u64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = call <2 x i64> @llvm.arm.neon.vshiftu.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
@@ -136,48 +148,64 @@ define <2 x i64> @vshlQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
 ; Test a mix of both signed and unsigned intrinsics.
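+; A left shift by immediate is the same operation for signed and unsigned
+; values, so both intrinsic variants should select the single vshl.i* form.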
 
 define <8 x i8> @vshli8(<8 x i8>* %A) nounwind {
+;CHECK: vshli8:
+;CHECK: vshl.i8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = call <8 x i8> @llvm.arm.neon.vshifts.v8i8(<8 x i8> %tmp1, <8 x i8> < i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7 >)
 	ret <8 x i8> %tmp2
 }
 
 define <4 x i16> @vshli16(<4 x i16>* %A) nounwind {
+;CHECK: vshli16:
+;CHECK: vshl.i16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = call <4 x i16> @llvm.arm.neon.vshiftu.v4i16(<4 x i16> %tmp1, <4 x i16> < i16 15, i16 15, i16 15, i16 15 >)
 	ret <4 x i16> %tmp2
 }
 
 define <2 x i32> @vshli32(<2 x i32>* %A) nounwind {
+;CHECK: vshli32:
+;CHECK: vshl.i32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = call <2 x i32> @llvm.arm.neon.vshifts.v2i32(<2 x i32> %tmp1, <2 x i32> < i32 31, i32 31 >)
 	ret <2 x i32> %tmp2
 }
 
 define <1 x i64> @vshli64(<1 x i64>* %A) nounwind {
+;CHECK: vshli64:
+;CHECK: vshl.i64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = call <1 x i64> @llvm.arm.neon.vshiftu.v1i64(<1 x i64> %tmp1, <1 x i64> < i64 63 >)
 	ret <1 x i64> %tmp2
 }
 
 define <16 x i8> @vshlQi8(<16 x i8>* %A) nounwind {
+;CHECK: vshlQi8:
+;CHECK: vshl.i8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = call <16 x i8> @llvm.arm.neon.vshifts.v16i8(<16 x i8> %tmp1, <16 x i8> < i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7 >)
 	ret <16 x i8> %tmp2
 }
 
 define <8 x i16> @vshlQi16(<8 x i16>* %A) nounwind {
+;CHECK: vshlQi16:
+;CHECK: vshl.i16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = call <8 x i16> @llvm.arm.neon.vshiftu.v8i16(<8 x i16> %tmp1, <8 x i16> < i16 15, i16 15, i16 15, i16 15, i16 15, i16 15, i16 15, i16 15 >)
 	ret <8 x i16> %tmp2
 }
 
 define <4 x i32> @vshlQi32(<4 x i32>* %A) nounwind {
+;CHECK: vshlQi32:
+;CHECK: vshl.i32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = call <4 x i32> @llvm.arm.neon.vshifts.v4i32(<4 x i32> %tmp1, <4 x i32> < i32 31, i32 31, i32 31, i32 31 >)
 	ret <4 x i32> %tmp2
 }
 
 define <2 x i64> @vshlQi64(<2 x i64>* %A) nounwind {
+;CHECK: vshlQi64:
+;CHECK: vshl.i64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = call <2 x i64> @llvm.arm.neon.vshiftu.v2i64(<2 x i64> %tmp1, <2 x i64> < i64 63, i64 63 >)
 	ret <2 x i64> %tmp2
@@ -186,96 +214,128 @@ define <2 x i64> @vshlQi64(<2 x i64>* %A) nounwind {
 ; Right shift by immediate:
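+; These intrinsics encode a right shift by immediate as a shift with a
+; negative count, which the backend should select to vshr.s*/vshr.u*.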
 
 define <8 x i8> @vshrs8(<8 x i8>* %A) nounwind {
+;CHECK: vshrs8:
+;CHECK: vshr.s8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = call <8 x i8> @llvm.arm.neon.vshifts.v8i8(<8 x i8> %tmp1, <8 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
 	ret <8 x i8> %tmp2
 }
 
 define <4 x i16> @vshrs16(<4 x i16>* %A) nounwind {
+;CHECK: vshrs16:
+;CHECK: vshr.s16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = call <4 x i16> @llvm.arm.neon.vshifts.v4i16(<4 x i16> %tmp1, <4 x i16> < i16 -16, i16 -16, i16 -16, i16 -16 >)
 	ret <4 x i16> %tmp2
 }
 
 define <2 x i32> @vshrs32(<2 x i32>* %A) nounwind {
+;CHECK: vshrs32:
+;CHECK: vshr.s32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = call <2 x i32> @llvm.arm.neon.vshifts.v2i32(<2 x i32> %tmp1, <2 x i32> < i32 -32, i32 -32 >)
 	ret <2 x i32> %tmp2
 }
 
 define <1 x i64> @vshrs64(<1 x i64>* %A) nounwind {
+;CHECK: vshrs64:
+;CHECK: vshr.s64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = call <1 x i64> @llvm.arm.neon.vshifts.v1i64(<1 x i64> %tmp1, <1 x i64> < i64 -64 >)
 	ret <1 x i64> %tmp2
 }
 
 define <8 x i8> @vshru8(<8 x i8>* %A) nounwind {
+;CHECK: vshru8:
+;CHECK: vshr.u8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = call <8 x i8> @llvm.arm.neon.vshiftu.v8i8(<8 x i8> %tmp1, <8 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
 	ret <8 x i8> %tmp2
 }
 
 define <4 x i16> @vshru16(<4 x i16>* %A) nounwind {
+;CHECK: vshru16:
+;CHECK: vshr.u16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = call <4 x i16> @llvm.arm.neon.vshiftu.v4i16(<4 x i16> %tmp1, <4 x i16> < i16 -16, i16 -16, i16 -16, i16 -16 >)
 	ret <4 x i16> %tmp2
 }
 
 define <2 x i32> @vshru32(<2 x i32>* %A) nounwind {
+;CHECK: vshru32:
+;CHECK: vshr.u32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = call <2 x i32> @llvm.arm.neon.vshiftu.v2i32(<2 x i32> %tmp1, <2 x i32> < i32 -32, i32 -32 >)
 	ret <2 x i32> %tmp2
 }
 
 define <1 x i64> @vshru64(<1 x i64>* %A) nounwind {
+;CHECK: vshru64:
+;CHECK: vshr.u64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = call <1 x i64> @llvm.arm.neon.vshiftu.v1i64(<1 x i64> %tmp1, <1 x i64> < i64 -64 >)
 	ret <1 x i64> %tmp2
 }
 
 define <16 x i8> @vshrQs8(<16 x i8>* %A) nounwind {
+;CHECK: vshrQs8:
+;CHECK: vshr.s8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = call <16 x i8> @llvm.arm.neon.vshifts.v16i8(<16 x i8> %tmp1, <16 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
 	ret <16 x i8> %tmp2
 }
 
 define <8 x i16> @vshrQs16(<8 x i16>* %A) nounwind {
+;CHECK: vshrQs16:
+;CHECK: vshr.s16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = call <8 x i16> @llvm.arm.neon.vshifts.v8i16(<8 x i16> %tmp1, <8 x i16> < i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16 >)
 	ret <8 x i16> %tmp2
 }
 
 define <4 x i32> @vshrQs32(<4 x i32>* %A) nounwind {
+;CHECK: vshrQs32:
+;CHECK: vshr.s32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = call <4 x i32> @llvm.arm.neon.vshifts.v4i32(<4 x i32> %tmp1, <4 x i32> < i32 -32, i32 -32, i32 -32, i32 -32 >)
 	ret <4 x i32> %tmp2
 }
 
 define <2 x i64> @vshrQs64(<2 x i64>* %A) nounwind {
+;CHECK: vshrQs64:
+;CHECK: vshr.s64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = call <2 x i64> @llvm.arm.neon.vshifts.v2i64(<2 x i64> %tmp1, <2 x i64> < i64 -64, i64 -64 >)
 	ret <2 x i64> %tmp2
 }
 
 define <16 x i8> @vshrQu8(<16 x i8>* %A) nounwind {
+;CHECK: vshrQu8:
+;CHECK: vshr.u8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = call <16 x i8> @llvm.arm.neon.vshiftu.v16i8(<16 x i8> %tmp1, <16 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
 	ret <16 x i8> %tmp2
 }
 
 define <8 x i16> @vshrQu16(<8 x i16>* %A) nounwind {
+;CHECK: vshrQu16:
+;CHECK: vshr.u16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = call <8 x i16> @llvm.arm.neon.vshiftu.v8i16(<8 x i16> %tmp1, <8 x i16> < i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16 >)
 	ret <8 x i16> %tmp2
 }
 
 define <4 x i32> @vshrQu32(<4 x i32>* %A) nounwind {
+;CHECK: vshrQu32:
+;CHECK: vshr.u32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = call <4 x i32> @llvm.arm.neon.vshiftu.v4i32(<4 x i32> %tmp1, <4 x i32> < i32 -32, i32 -32, i32 -32, i32 -32 >)
 	ret <4 x i32> %tmp2
 }
 
 define <2 x i64> @vshrQu64(<2 x i64>* %A) nounwind {
+;CHECK: vshrQu64:
+;CHECK: vshr.u64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = call <2 x i64> @llvm.arm.neon.vshiftu.v2i64(<2 x i64> %tmp1, <2 x i64> < i64 -64, i64 -64 >)
 	ret <2 x i64> %tmp2
@@ -300,3 +360,295 @@ declare <16 x i8> @llvm.arm.neon.vshiftu.v16i8(<16 x i8>, <16 x i8>) nounwind re
 declare <8 x i16> @llvm.arm.neon.vshiftu.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
 declare <4 x i32> @llvm.arm.neon.vshiftu.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
 declare <2 x i64> @llvm.arm.neon.vshiftu.v2i64(<2 x i64>, <2 x i64>) nounwind readnone
+
+define <8 x i8> @vrshls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vrshls8:
+;CHECK: vrshl.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vrshifts.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vrshls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vrshls16:
+;CHECK: vrshl.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vrshifts.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vrshls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vrshls32:
+;CHECK: vrshl.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vrshifts.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <1 x i64> @vrshls64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vrshls64:
+;CHECK: vrshl.s64
+	%tmp1 = load <1 x i64>* %A
+	%tmp2 = load <1 x i64>* %B
+	%tmp3 = call <1 x i64> @llvm.arm.neon.vrshifts.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
+	ret <1 x i64> %tmp3
+}
+
+define <8 x i8> @vrshlu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vrshlu8:
+;CHECK: vrshl.u8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vrshiftu.v8i8(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vrshlu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vrshlu16:
+;CHECK: vrshl.u16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vrshiftu.v4i16(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vrshlu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vrshlu32:
+;CHECK: vrshl.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vrshiftu.v2i32(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+define <1 x i64> @vrshlu64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vrshlu64:
+;CHECK: vrshl.u64
+	%tmp1 = load <1 x i64>* %A
+	%tmp2 = load <1 x i64>* %B
+	%tmp3 = call <1 x i64> @llvm.arm.neon.vrshiftu.v1i64(<1 x i64> %tmp1, <1 x i64> %tmp2)
+	ret <1 x i64> %tmp3
+}
+
+define <16 x i8> @vrshlQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vrshlQs8:
+;CHECK: vrshl.s8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = call <16 x i8> @llvm.arm.neon.vrshifts.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
+	ret <16 x i8> %tmp3
+}
+
+define <8 x i16> @vrshlQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vrshlQs16:
+;CHECK: vrshl.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vrshifts.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vrshlQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vrshlQs32:
+;CHECK: vrshl.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vrshifts.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vrshlQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vrshlQs64:
+;CHECK: vrshl.s64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i64>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vrshifts.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+define <16 x i8> @vrshlQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vrshlQu8:
+;CHECK: vrshl.u8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = load <16 x i8>* %B
+	%tmp3 = call <16 x i8> @llvm.arm.neon.vrshiftu.v16i8(<16 x i8> %tmp1, <16 x i8> %tmp2)
+	ret <16 x i8> %tmp3
+}
+
+define <8 x i16> @vrshlQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vrshlQu16:
+;CHECK: vrshl.u16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vrshiftu.v8i16(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vrshlQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vrshlQu32:
+;CHECK: vrshl.u32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vrshiftu.v4i32(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vrshlQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vrshlQu64:
+;CHECK: vrshl.u64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i64>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vrshiftu.v2i64(<2 x i64> %tmp1, <2 x i64> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+define <8 x i8> @vrshrs8(<8 x i8>* %A) nounwind {
+;CHECK: vrshrs8:
+;CHECK: vrshr.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = call <8 x i8> @llvm.arm.neon.vrshifts.v8i8(<8 x i8> %tmp1, <8 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vrshrs16(<4 x i16>* %A) nounwind {
+;CHECK: vrshrs16:
+;CHECK: vrshr.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vrshifts.v4i16(<4 x i16> %tmp1, <4 x i16> < i16 -16, i16 -16, i16 -16, i16 -16 >)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vrshrs32(<2 x i32>* %A) nounwind {
+;CHECK: vrshrs32:
+;CHECK: vrshr.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vrshifts.v2i32(<2 x i32> %tmp1, <2 x i32> < i32 -32, i32 -32 >)
+	ret <2 x i32> %tmp2
+}
+
+define <1 x i64> @vrshrs64(<1 x i64>* %A) nounwind {
+;CHECK: vrshrs64:
+;CHECK: vrshr.s64
+	%tmp1 = load <1 x i64>* %A
+	%tmp2 = call <1 x i64> @llvm.arm.neon.vrshifts.v1i64(<1 x i64> %tmp1, <1 x i64> < i64 -64 >)
+	ret <1 x i64> %tmp2
+}
+
+define <8 x i8> @vrshru8(<8 x i8>* %A) nounwind {
+;CHECK: vrshru8:
+;CHECK: vrshr.u8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = call <8 x i8> @llvm.arm.neon.vrshiftu.v8i8(<8 x i8> %tmp1, <8 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vrshru16(<4 x i16>* %A) nounwind {
+;CHECK: vrshru16:
+;CHECK: vrshr.u16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vrshiftu.v4i16(<4 x i16> %tmp1, <4 x i16> < i16 -16, i16 -16, i16 -16, i16 -16 >)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vrshru32(<2 x i32>* %A) nounwind {
+;CHECK: vrshru32:
+;CHECK: vrshr.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vrshiftu.v2i32(<2 x i32> %tmp1, <2 x i32> < i32 -32, i32 -32 >)
+	ret <2 x i32> %tmp2
+}
+
+define <1 x i64> @vrshru64(<1 x i64>* %A) nounwind {
+;CHECK: vrshru64:
+;CHECK: vrshr.u64
+	%tmp1 = load <1 x i64>* %A
+	%tmp2 = call <1 x i64> @llvm.arm.neon.vrshiftu.v1i64(<1 x i64> %tmp1, <1 x i64> < i64 -64 >)
+	ret <1 x i64> %tmp2
+}
+
+define <16 x i8> @vrshrQs8(<16 x i8>* %A) nounwind {
+;CHECK: vrshrQs8:
+;CHECK: vrshr.s8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = call <16 x i8> @llvm.arm.neon.vrshifts.v16i8(<16 x i8> %tmp1, <16 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
+	ret <16 x i8> %tmp2
+}
+
+define <8 x i16> @vrshrQs16(<8 x i16>* %A) nounwind {
+;CHECK: vrshrQs16:
+;CHECK: vrshr.s16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <8 x i16> @llvm.arm.neon.vrshifts.v8i16(<8 x i16> %tmp1, <8 x i16> < i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16 >)
+	ret <8 x i16> %tmp2
+}
+
+define <4 x i32> @vrshrQs32(<4 x i32>* %A) nounwind {
+;CHECK: vrshrQs32:
+;CHECK: vrshr.s32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i32> @llvm.arm.neon.vrshifts.v4i32(<4 x i32> %tmp1, <4 x i32> < i32 -32, i32 -32, i32 -32, i32 -32 >)
+	ret <4 x i32> %tmp2
+}
+
+define <2 x i64> @vrshrQs64(<2 x i64>* %A) nounwind {
+;CHECK: vrshrQs64:
+;CHECK: vrshr.s64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = call <2 x i64> @llvm.arm.neon.vrshifts.v2i64(<2 x i64> %tmp1, <2 x i64> < i64 -64, i64 -64 >)
+	ret <2 x i64> %tmp2
+}
+
+define <16 x i8> @vrshrQu8(<16 x i8>* %A) nounwind {
+;CHECK: vrshrQu8:
+;CHECK: vrshr.u8
+	%tmp1 = load <16 x i8>* %A
+	%tmp2 = call <16 x i8> @llvm.arm.neon.vrshiftu.v16i8(<16 x i8> %tmp1, <16 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
+	ret <16 x i8> %tmp2
+}
+
+define <8 x i16> @vrshrQu16(<8 x i16>* %A) nounwind {
+;CHECK: vrshrQu16:
+;CHECK: vrshr.u16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <8 x i16> @llvm.arm.neon.vrshiftu.v8i16(<8 x i16> %tmp1, <8 x i16> < i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16 >)
+	ret <8 x i16> %tmp2
+}
+
+define <4 x i32> @vrshrQu32(<4 x i32>* %A) nounwind {
+;CHECK: vrshrQu32:
+;CHECK: vrshr.u32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i32> @llvm.arm.neon.vrshiftu.v4i32(<4 x i32> %tmp1, <4 x i32> < i32 -32, i32 -32, i32 -32, i32 -32 >)
+	ret <4 x i32> %tmp2
+}
+
+define <2 x i64> @vrshrQu64(<2 x i64>* %A) nounwind {
+;CHECK: vrshrQu64:
+;CHECK: vrshr.u64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = call <2 x i64> @llvm.arm.neon.vrshiftu.v2i64(<2 x i64> %tmp1, <2 x i64> < i64 -64, i64 -64 >)
+	ret <2 x i64> %tmp2
+}
+
+declare <8 x i8>  @llvm.arm.neon.vrshifts.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vrshifts.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vrshifts.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+declare <1 x i64> @llvm.arm.neon.vrshifts.v1i64(<1 x i64>, <1 x i64>) nounwind readnone
+
+declare <8 x i8>  @llvm.arm.neon.vrshiftu.v8i8(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vrshiftu.v4i16(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vrshiftu.v2i32(<2 x i32>, <2 x i32>) nounwind readnone
+declare <1 x i64> @llvm.arm.neon.vrshiftu.v1i64(<1 x i64>, <1 x i64>) nounwind readnone
+
+declare <16 x i8> @llvm.arm.neon.vrshifts.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
+declare <8 x i16> @llvm.arm.neon.vrshifts.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vrshifts.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vrshifts.v2i64(<2 x i64>, <2 x i64>) nounwind readnone
+
+declare <16 x i8> @llvm.arm.neon.vrshiftu.v16i8(<16 x i8>, <16 x i8>) nounwind readnone
+declare <8 x i16> @llvm.arm.neon.vrshiftu.v8i16(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vrshiftu.v4i32(<4 x i32>, <4 x i32>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vrshiftu.v2i64(<2 x i64>, <2 x i64>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vshll.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vshll.ll
index 5407662..8e85b98 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vshll.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vshll.ll
@@ -1,45 +1,48 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vshll\\.s8} %t | count 1
-; RUN: grep {vshll\\.s16} %t | count 1
-; RUN: grep {vshll\\.s32} %t | count 1
-; RUN: grep {vshll\\.u8} %t | count 1
-; RUN: grep {vshll\\.u16} %t | count 1
-; RUN: grep {vshll\\.u32} %t | count 1
-; RUN: grep {vshll\\.i8} %t | count 1
-; RUN: grep {vshll\\.i16} %t | count 1
-; RUN: grep {vshll\\.i32} %t | count 1
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i16> @vshlls8(<8 x i8>* %A) nounwind {
+;CHECK: vshlls8:
+;CHECK: vshll.s8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = call <8 x i16> @llvm.arm.neon.vshiftls.v8i16(<8 x i8> %tmp1, <8 x i8> < i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7 >)
 	ret <8 x i16> %tmp2
 }
 
 define <4 x i32> @vshlls16(<4 x i16>* %A) nounwind {
+;CHECK: vshlls16:
+;CHECK: vshll.s16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = call <4 x i32> @llvm.arm.neon.vshiftls.v4i32(<4 x i16> %tmp1, <4 x i16> < i16 15, i16 15, i16 15, i16 15 >)
 	ret <4 x i32> %tmp2
 }
 
 define <2 x i64> @vshlls32(<2 x i32>* %A) nounwind {
+;CHECK: vshlls32:
+;CHECK: vshll.s32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = call <2 x i64> @llvm.arm.neon.vshiftls.v2i64(<2 x i32> %tmp1, <2 x i32> < i32 31, i32 31 >)
 	ret <2 x i64> %tmp2
 }
 
 define <8 x i16> @vshllu8(<8 x i8>* %A) nounwind {
+;CHECK: vshllu8:
+;CHECK: vshll.u8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = call <8 x i16> @llvm.arm.neon.vshiftlu.v8i16(<8 x i8> %tmp1, <8 x i8> < i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7, i8 7 >)
 	ret <8 x i16> %tmp2
 }
 
 define <4 x i32> @vshllu16(<4 x i16>* %A) nounwind {
+;CHECK: vshllu16:
+;CHECK: vshll.u16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = call <4 x i32> @llvm.arm.neon.vshiftlu.v4i32(<4 x i16> %tmp1, <4 x i16> < i16 15, i16 15, i16 15, i16 15 >)
 	ret <4 x i32> %tmp2
 }
 
 define <2 x i64> @vshllu32(<2 x i32>* %A) nounwind {
+;CHECK: vshllu32:
+;CHECK: vshll.u32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = call <2 x i64> @llvm.arm.neon.vshiftlu.v2i64(<2 x i32> %tmp1, <2 x i32> < i32 31, i32 31 >)
 	ret <2 x i64> %tmp2
@@ -48,18 +51,24 @@ define <2 x i64> @vshllu32(<2 x i32>* %A) nounwind {
 ; The following tests use the maximum shift count, so the signedness is
 ; irrelevant.  Test both signed and unsigned versions.
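+; At the maximum count every element is shifted left by its full width, so
+; the extension bits fall outside the widened result and the signless
+; vshll.i* encoding is used either way.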
 define <8 x i16> @vshlli8(<8 x i8>* %A) nounwind {
+;CHECK: vshlli8:
+;CHECK: vshll.i8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = call <8 x i16> @llvm.arm.neon.vshiftls.v8i16(<8 x i8> %tmp1, <8 x i8> < i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8 >)
 	ret <8 x i16> %tmp2
 }
 
 define <4 x i32> @vshlli16(<4 x i16>* %A) nounwind {
+;CHECK: vshlli16:
+;CHECK: vshll.i16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = call <4 x i32> @llvm.arm.neon.vshiftlu.v4i32(<4 x i16> %tmp1, <4 x i16> < i16 16, i16 16, i16 16, i16 16 >)
 	ret <4 x i32> %tmp2
 }
 
 define <2 x i64> @vshlli32(<2 x i32>* %A) nounwind {
+;CHECK: vshlli32:
+;CHECK: vshll.i32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = call <2 x i64> @llvm.arm.neon.vshiftls.v2i64(<2 x i32> %tmp1, <2 x i32> < i32 32, i32 32 >)
 	ret <2 x i64> %tmp2
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vshrn.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vshrn.ll
index 26834e7..e2544f4 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vshrn.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vshrn.ll
@@ -1,21 +1,24 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vshrn\\.i16} %t | count 1
-; RUN: grep {vshrn\\.i32} %t | count 1
-; RUN: grep {vshrn\\.i64} %t | count 1
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vshrns8(<8 x i16>* %A) nounwind {
+;CHECK: vshrns8:
+;CHECK: vshrn.i16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = call <8 x i8> @llvm.arm.neon.vshiftn.v8i8(<8 x i16> %tmp1, <8 x i16> < i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8 >)
 	ret <8 x i8> %tmp2
 }
 
 define <4 x i16> @vshrns16(<4 x i32>* %A) nounwind {
+;CHECK: vshrns16:
+;CHECK: vshrn.i32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = call <4 x i16> @llvm.arm.neon.vshiftn.v4i16(<4 x i32> %tmp1, <4 x i32> < i32 -16, i32 -16, i32 -16, i32 -16 >)
 	ret <4 x i16> %tmp2
 }
 
 define <2 x i32> @vshrns32(<2 x i64>* %A) nounwind {
+;CHECK: vshrns32:
+;CHECK: vshrn.i64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = call <2 x i32> @llvm.arm.neon.vshiftn.v2i32(<2 x i64> %tmp1, <2 x i64> < i64 -32, i64 -32 >)
 	ret <2 x i32> %tmp2
@@ -24,3 +27,31 @@ define <2 x i32> @vshrns32(<2 x i64>* %A) nounwind {
 declare <8 x i8>  @llvm.arm.neon.vshiftn.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
 declare <4 x i16> @llvm.arm.neon.vshiftn.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
 declare <2 x i32> @llvm.arm.neon.vshiftn.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
+
+define <8 x i8> @vrshrns8(<8 x i16>* %A) nounwind {
+;CHECK: vrshrns8:
+;CHECK: vrshrn.i16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = call <8 x i8> @llvm.arm.neon.vrshiftn.v8i8(<8 x i16> %tmp1, <8 x i16> < i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8, i16 -8 >)
+	ret <8 x i8> %tmp2
+}
+
+define <4 x i16> @vrshrns16(<4 x i32>* %A) nounwind {
+;CHECK: vrshrns16:
+;CHECK: vrshrn.i32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = call <4 x i16> @llvm.arm.neon.vrshiftn.v4i16(<4 x i32> %tmp1, <4 x i32> < i32 -16, i32 -16, i32 -16, i32 -16 >)
+	ret <4 x i16> %tmp2
+}
+
+define <2 x i32> @vrshrns32(<2 x i64>* %A) nounwind {
+;CHECK: vrshrns32:
+;CHECK: vrshrn.i64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = call <2 x i32> @llvm.arm.neon.vrshiftn.v2i32(<2 x i64> %tmp1, <2 x i64> < i64 -32, i64 -32 >)
+	ret <2 x i32> %tmp2
+}
+
+declare <8 x i8>  @llvm.arm.neon.vrshiftn.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vrshiftn.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vrshiftn.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vsra.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vsra.ll
index 10cefc2..acb672d 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vsra.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vsra.ll
@@ -1,22 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vsra\\.s8} %t | count 2
-; RUN: grep {vsra\\.s16} %t | count 2
-; RUN: grep {vsra\\.s32} %t | count 2
-; RUN: grep {vsra\\.s64} %t | count 2
-; RUN: grep {vsra\\.u8} %t | count 2
-; RUN: grep {vsra\\.u16} %t | count 2
-; RUN: grep {vsra\\.u32} %t | count 2
-; RUN: grep {vsra\\.u64} %t | count 2
-; RUN: grep {vrsra\\.s8} %t | count 2
-; RUN: grep {vrsra\\.s16} %t | count 2
-; RUN: grep {vrsra\\.s32} %t | count 2
-; RUN: grep {vrsra\\.s64} %t | count 2
-; RUN: grep {vrsra\\.u8} %t | count 2
-; RUN: grep {vrsra\\.u16} %t | count 2
-; RUN: grep {vrsra\\.u32} %t | count 2
-; RUN: grep {vrsra\\.u64} %t | count 2
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vsras8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vsras8:
+;CHECK: vsra.s8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = ashr <8 x i8> %tmp2, < i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8 >
@@ -25,6 +11,8 @@ define <8 x i8> @vsras8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vsras16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vsras16:
+;CHECK: vsra.s16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = ashr <4 x i16> %tmp2, < i16 16, i16 16, i16 16, i16 16 >
@@ -33,6 +21,8 @@ define <4 x i16> @vsras16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vsras32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vsras32:
+;CHECK: vsra.s32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = ashr <2 x i32> %tmp2, < i32 32, i32 32 >
@@ -41,6 +31,8 @@ define <2 x i32> @vsras32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vsras64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vsras64:
+;CHECK: vsra.s64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = ashr <1 x i64> %tmp2, < i64 64 >
@@ -49,6 +41,8 @@ define <1 x i64> @vsras64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vsraQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vsraQs8:
+;CHECK: vsra.s8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = ashr <16 x i8> %tmp2, < i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8 >
@@ -57,6 +51,8 @@ define <16 x i8> @vsraQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vsraQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vsraQs16:
+;CHECK: vsra.s16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = ashr <8 x i16> %tmp2, < i16 16, i16 16, i16 16, i16 16, i16 16, i16 16, i16 16, i16 16 >
@@ -65,6 +61,8 @@ define <8 x i16> @vsraQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vsraQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vsraQs32:
+;CHECK: vsra.s32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = ashr <4 x i32> %tmp2, < i32 32, i32 32, i32 32, i32 32 >
@@ -73,6 +71,8 @@ define <4 x i32> @vsraQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vsraQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vsraQs64:
+;CHECK: vsra.s64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = ashr <2 x i64> %tmp2, < i64 64, i64 64 >
@@ -81,6 +81,8 @@ define <2 x i64> @vsraQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
 }
 
 define <8 x i8> @vsrau8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vsrau8:
+;CHECK: vsra.u8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = lshr <8 x i8> %tmp2, < i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8 >
@@ -89,6 +91,8 @@ define <8 x i8> @vsrau8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vsrau16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vsrau16:
+;CHECK: vsra.u16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = lshr <4 x i16> %tmp2, < i16 16, i16 16, i16 16, i16 16 >
@@ -97,6 +101,8 @@ define <4 x i16> @vsrau16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vsrau32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vsrau32:
+;CHECK: vsra.u32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = lshr <2 x i32> %tmp2, < i32 32, i32 32 >
@@ -105,6 +111,8 @@ define <2 x i32> @vsrau32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vsrau64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vsrau64:
+;CHECK: vsra.u64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = lshr <1 x i64> %tmp2, < i64 64 >
@@ -113,6 +121,8 @@ define <1 x i64> @vsrau64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vsraQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vsraQu8:
+;CHECK: vsra.u8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = lshr <16 x i8> %tmp2, < i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8, i8 8 >
@@ -121,6 +131,8 @@ define <16 x i8> @vsraQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vsraQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vsraQu16:
+;CHECK: vsra.u16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = lshr <8 x i16> %tmp2, < i16 16, i16 16, i16 16, i16 16, i16 16, i16 16, i16 16, i16 16 >
@@ -129,6 +141,8 @@ define <8 x i16> @vsraQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vsraQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vsraQu32:
+;CHECK: vsra.u32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = lshr <4 x i32> %tmp2, < i32 32, i32 32, i32 32, i32 32 >
@@ -137,6 +151,8 @@ define <4 x i32> @vsraQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vsraQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vsraQu64:
+;CHECK: vsra.u64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = lshr <2 x i64> %tmp2, < i64 64, i64 64 >
@@ -145,6 +161,8 @@ define <2 x i64> @vsraQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
 }
 
 define <8 x i8> @vrsras8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vrsras8:
+;CHECK: vrsra.s8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <8 x i8> @llvm.arm.neon.vrshifts.v8i8(<8 x i8> %tmp2, <8 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
@@ -153,6 +171,8 @@ define <8 x i8> @vrsras8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vrsras16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vrsras16:
+;CHECK: vrsra.s16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vrshifts.v4i16(<4 x i16> %tmp2, <4 x i16> < i16 -16, i16 -16, i16 -16, i16 -16 >)
@@ -161,6 +181,8 @@ define <4 x i16> @vrsras16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vrsras32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vrsras32:
+;CHECK: vrsra.s32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vrshifts.v2i32(<2 x i32> %tmp2, <2 x i32> < i32 -32, i32 -32 >)
@@ -169,6 +191,8 @@ define <2 x i32> @vrsras32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vrsras64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vrsras64:
+;CHECK: vrsra.s64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = call <1 x i64> @llvm.arm.neon.vrshifts.v1i64(<1 x i64> %tmp2, <1 x i64> < i64 -64 >)
@@ -177,6 +201,8 @@ define <1 x i64> @vrsras64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <8 x i8> @vrsrau8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vrsrau8:
+;CHECK: vrsra.u8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = call <8 x i8> @llvm.arm.neon.vrshiftu.v8i8(<8 x i8> %tmp2, <8 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
@@ -185,6 +211,8 @@ define <8 x i8> @vrsrau8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vrsrau16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vrsrau16:
+;CHECK: vrsra.u16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = call <4 x i16> @llvm.arm.neon.vrshiftu.v4i16(<4 x i16> %tmp2, <4 x i16> < i16 -16, i16 -16, i16 -16, i16 -16 >)
@@ -193,6 +221,8 @@ define <4 x i16> @vrsrau16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vrsrau32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vrsrau32:
+;CHECK: vrsra.u32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = call <2 x i32> @llvm.arm.neon.vrshiftu.v2i32(<2 x i32> %tmp2, <2 x i32> < i32 -32, i32 -32 >)
@@ -201,6 +231,8 @@ define <2 x i32> @vrsrau32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vrsrau64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vrsrau64:
+;CHECK: vrsra.u64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = call <1 x i64> @llvm.arm.neon.vrshiftu.v1i64(<1 x i64> %tmp2, <1 x i64> < i64 -64 >)
@@ -209,6 +241,8 @@ define <1 x i64> @vrsrau64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vrsraQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vrsraQs8:
+;CHECK: vrsra.s8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <16 x i8> @llvm.arm.neon.vrshifts.v16i8(<16 x i8> %tmp2, <16 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
@@ -217,6 +251,8 @@ define <16 x i8> @vrsraQs8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vrsraQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vrsraQs16:
+;CHECK: vrsra.s16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = call <8 x i16> @llvm.arm.neon.vrshifts.v8i16(<8 x i16> %tmp2, <8 x i16> < i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16 >)
@@ -225,6 +261,8 @@ define <8 x i16> @vrsraQs16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vrsraQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vrsraQs32:
+;CHECK: vrsra.s32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = call <4 x i32> @llvm.arm.neon.vrshifts.v4i32(<4 x i32> %tmp2, <4 x i32> < i32 -32, i32 -32, i32 -32, i32 -32 >)
@@ -233,6 +271,8 @@ define <4 x i32> @vrsraQs32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vrsraQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vrsraQs64:
+;CHECK: vrsra.s64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = call <2 x i64> @llvm.arm.neon.vrshifts.v2i64(<2 x i64> %tmp2, <2 x i64> < i64 -64, i64 -64 >)
@@ -241,6 +281,8 @@ define <2 x i64> @vrsraQs64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
 }
 
 define <16 x i8> @vrsraQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vrsraQu8:
+;CHECK: vrsra.u8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = call <16 x i8> @llvm.arm.neon.vrshiftu.v16i8(<16 x i8> %tmp2, <16 x i8> < i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8, i8 -8 >)
@@ -249,6 +291,8 @@ define <16 x i8> @vrsraQu8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vrsraQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vrsraQu16:
+;CHECK: vrsra.u16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = call <8 x i16> @llvm.arm.neon.vrshiftu.v8i16(<8 x i16> %tmp2, <8 x i16> < i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16, i16 -16 >)
@@ -257,6 +301,8 @@ define <8 x i16> @vrsraQu16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vrsraQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vrsraQu32:
+;CHECK: vrsra.u32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = call <4 x i32> @llvm.arm.neon.vrshiftu.v4i32(<4 x i32> %tmp2, <4 x i32> < i32 -32, i32 -32, i32 -32, i32 -32 >)
@@ -265,6 +311,8 @@ define <4 x i32> @vrsraQu32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vrsraQu64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vrsraQu64:
+;CHECK: vrsra.u64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = call <2 x i64> @llvm.arm.neon.vrshiftu.v2i64(<2 x i64> %tmp2, <2 x i64> < i64 -64, i64 -64 >)
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vst2.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vst2.ll
index 587b17d..17d6bee 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vst2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vst2.ll
@@ -32,7 +32,53 @@ define void @vst2f(float* %A, <2 x float>* %B) nounwind {
 	ret void
 }
 
+define void @vst2i64(i64* %A, <1 x i64>* %B) nounwind {
+;CHECK: vst2i64:
+;CHECK: vst1.64
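+; vst2 does not support 64-bit elements, so a pair of <1 x i64> operands is
+; stored with a single vst1.64 instead.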
+	%tmp1 = load <1 x i64>* %B
+	call void @llvm.arm.neon.vst2.v1i64(i64* %A, <1 x i64> %tmp1, <1 x i64> %tmp1)
+	ret void
+}
+
+define void @vst2Qi8(i8* %A, <16 x i8>* %B) nounwind {
+;CHECK: vst2Qi8:
+;CHECK: vst2.8
+	%tmp1 = load <16 x i8>* %B
+	call void @llvm.arm.neon.vst2.v16i8(i8* %A, <16 x i8> %tmp1, <16 x i8> %tmp1)
+	ret void
+}
+
+define void @vst2Qi16(i16* %A, <8 x i16>* %B) nounwind {
+;CHECK: vst2Qi16:
+;CHECK: vst2.16
+	%tmp1 = load <8 x i16>* %B
+	call void @llvm.arm.neon.vst2.v8i16(i16* %A, <8 x i16> %tmp1, <8 x i16> %tmp1)
+	ret void
+}
+
+define void @vst2Qi32(i32* %A, <4 x i32>* %B) nounwind {
+;CHECK: vst2Qi32:
+;CHECK: vst2.32
+	%tmp1 = load <4 x i32>* %B
+	call void @llvm.arm.neon.vst2.v4i32(i32* %A, <4 x i32> %tmp1, <4 x i32> %tmp1)
+	ret void
+}
+
+define void @vst2Qf(float* %A, <4 x float>* %B) nounwind {
+;CHECK: vst2Qf:
+;CHECK: vst2.32
+	%tmp1 = load <4 x float>* %B
+	call void @llvm.arm.neon.vst2.v4f32(float* %A, <4 x float> %tmp1, <4 x float> %tmp1)
+	ret void
+}
+
 declare void @llvm.arm.neon.vst2.v8i8(i8*, <8 x i8>, <8 x i8>) nounwind
 declare void @llvm.arm.neon.vst2.v4i16(i8*, <4 x i16>, <4 x i16>) nounwind
 declare void @llvm.arm.neon.vst2.v2i32(i8*, <2 x i32>, <2 x i32>) nounwind
 declare void @llvm.arm.neon.vst2.v2f32(i8*, <2 x float>, <2 x float>) nounwind
+declare void @llvm.arm.neon.vst2.v1i64(i8*, <1 x i64>, <1 x i64>) nounwind
+
+declare void @llvm.arm.neon.vst2.v16i8(i8*, <16 x i8>, <16 x i8>) nounwind
+declare void @llvm.arm.neon.vst2.v8i16(i8*, <8 x i16>, <8 x i16>) nounwind
+declare void @llvm.arm.neon.vst2.v4i32(i8*, <4 x i32>, <4 x i32>) nounwind
+declare void @llvm.arm.neon.vst2.v4f32(i8*, <4 x float>, <4 x float>) nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vst3.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vst3.ll
index a851d0a..a831a0c 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vst3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vst3.ll
@@ -32,7 +32,57 @@ define void @vst3f(float* %A, <2 x float>* %B) nounwind {
 	ret void
 }
 
+define void @vst3i64(i64* %A, <1 x i64>* %B) nounwind {
+;CHECK: vst3i64:
+;CHECK: vst1.64
+	%tmp1 = load <1 x i64>* %B
+	call void @llvm.arm.neon.vst3.v1i64(i64* %A, <1 x i64> %tmp1, <1 x i64> %tmp1, <1 x i64> %tmp1)
+	ret void
+}
+
+define void @vst3Qi8(i8* %A, <16 x i8>* %B) nounwind {
+;CHECK: vst3Qi8:
+;CHECK: vst3.8
+;CHECK: vst3.8
+	%tmp1 = load <16 x i8>* %B
+	call void @llvm.arm.neon.vst3.v16i8(i8* %A, <16 x i8> %tmp1, <16 x i8> %tmp1, <16 x i8> %tmp1)
+	ret void
+}
+
+define void @vst3Qi16(i16* %A, <8 x i16>* %B) nounwind {
+;CHECK: vst3Qi16:
+;CHECK: vst3.16
+;CHECK: vst3.16
+	%tmp1 = load <8 x i16>* %B
+	call void @llvm.arm.neon.vst3.v8i16(i16* %A, <8 x i16> %tmp1, <8 x i16> %tmp1, <8 x i16> %tmp1)
+	ret void
+}
+
+define void @vst3Qi32(i32* %A, <4 x i32>* %B) nounwind {
+;CHECK: vst3Qi32:
+;CHECK: vst3.32
+;CHECK: vst3.32
+	%tmp1 = load <4 x i32>* %B
+	call void @llvm.arm.neon.vst3.v4i32(i32* %A, <4 x i32> %tmp1, <4 x i32> %tmp1, <4 x i32> %tmp1)
+	ret void
+}
+
+define void @vst3Qf(float* %A, <4 x float>* %B) nounwind {
+;CHECK: vst3Qf:
+;CHECK: vst3.32
+;CHECK: vst3.32
+	%tmp1 = load <4 x float>* %B
+	call void @llvm.arm.neon.vst3.v4f32(float* %A, <4 x float> %tmp1, <4 x float> %tmp1, <4 x float> %tmp1)
+	ret void
+}
+
 declare void @llvm.arm.neon.vst3.v8i8(i8*, <8 x i8>, <8 x i8>, <8 x i8>) nounwind
 declare void @llvm.arm.neon.vst3.v4i16(i8*, <4 x i16>, <4 x i16>, <4 x i16>) nounwind
 declare void @llvm.arm.neon.vst3.v2i32(i8*, <2 x i32>, <2 x i32>, <2 x i32>) nounwind
 declare void @llvm.arm.neon.vst3.v2f32(i8*, <2 x float>, <2 x float>, <2 x float>) nounwind
+declare void @llvm.arm.neon.vst3.v1i64(i8*, <1 x i64>, <1 x i64>, <1 x i64>) nounwind
+
+declare void @llvm.arm.neon.vst3.v16i8(i8*, <16 x i8>, <16 x i8>, <16 x i8>) nounwind
+declare void @llvm.arm.neon.vst3.v8i16(i8*, <8 x i16>, <8 x i16>, <8 x i16>) nounwind
+declare void @llvm.arm.neon.vst3.v4i32(i8*, <4 x i32>, <4 x i32>, <4 x i32>) nounwind
+declare void @llvm.arm.neon.vst3.v4f32(i8*, <4 x float>, <4 x float>, <4 x float>) nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vst4.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vst4.ll
index 8966b62..d92c017 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vst4.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vst4.ll
@@ -32,7 +32,57 @@ define void @vst4f(float* %A, <2 x float>* %B) nounwind {
 	ret void
 }
 
+define void @vst4i64(i64* %A, <1 x i64>* %B) nounwind {
+;CHECK: vst4i64:
+;CHECK: vst1.64
+	%tmp1 = load <1 x i64>* %B
+	call void @llvm.arm.neon.vst4.v1i64(i64* %A, <1 x i64> %tmp1, <1 x i64> %tmp1, <1 x i64> %tmp1, <1 x i64> %tmp1)
+	ret void
+}
+
+define void @vst4Qi8(i8* %A, <16 x i8>* %B) nounwind {
+;CHECK: vst4Qi8:
+;CHECK: vst4.8
+;CHECK: vst4.8
+	%tmp1 = load <16 x i8>* %B
+	call void @llvm.arm.neon.vst4.v16i8(i8* %A, <16 x i8> %tmp1, <16 x i8> %tmp1, <16 x i8> %tmp1, <16 x i8> %tmp1)
+	ret void
+}
+
+define void @vst4Qi16(i16* %A, <8 x i16>* %B) nounwind {
+;CHECK: vst4Qi16:
+;CHECK: vst4.16
+;CHECK: vst4.16
+	%tmp1 = load <8 x i16>* %B
+	call void @llvm.arm.neon.vst4.v8i16(i16* %A, <8 x i16> %tmp1, <8 x i16> %tmp1, <8 x i16> %tmp1, <8 x i16> %tmp1)
+	ret void
+}
+
+define void @vst4Qi32(i32* %A, <4 x i32>* %B) nounwind {
+;CHECK: vst4Qi32:
+;CHECK: vst4.32
+;CHECK: vst4.32
+	%tmp1 = load <4 x i32>* %B
+	call void @llvm.arm.neon.vst4.v4i32(i32* %A, <4 x i32> %tmp1, <4 x i32> %tmp1, <4 x i32> %tmp1, <4 x i32> %tmp1)
+	ret void
+}
+
+define void @vst4Qf(float* %A, <4 x float>* %B) nounwind {
+;CHECK: vst4Qf:
+;CHECK: vst4.32
+;CHECK: vst4.32
+	%tmp1 = load <4 x float>* %B
+	call void @llvm.arm.neon.vst4.v4f32(float* %A, <4 x float> %tmp1, <4 x float> %tmp1, <4 x float> %tmp1, <4 x float> %tmp1)
+	ret void
+}
+
 declare void @llvm.arm.neon.vst4.v8i8(i8*, <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8>) nounwind
 declare void @llvm.arm.neon.vst4.v4i16(i8*, <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16>) nounwind
 declare void @llvm.arm.neon.vst4.v2i32(i8*, <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32>) nounwind
 declare void @llvm.arm.neon.vst4.v2f32(i8*, <2 x float>, <2 x float>, <2 x float>, <2 x float>) nounwind
+declare void @llvm.arm.neon.vst4.v1i64(i8*, <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64>) nounwind
+
+declare void @llvm.arm.neon.vst4.v16i8(i8*, <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8>) nounwind
+declare void @llvm.arm.neon.vst4.v8i16(i8*, <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16>) nounwind
+declare void @llvm.arm.neon.vst4.v4i32(i8*, <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32>) nounwind
+declare void @llvm.arm.neon.vst4.v4f32(i8*, <4 x float>, <4 x float>, <4 x float>, <4 x float>) nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vstlane.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vstlane.ll
index 391b702..3bfb14f 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vstlane.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vstlane.ll
@@ -32,11 +32,39 @@ define void @vst2lanef(float* %A, <2 x float>* %B) nounwind {
 	ret void
 }
 
+define void @vst2laneQi16(i16* %A, <8 x i16>* %B) nounwind {
+;CHECK: vst2laneQi16:
+;CHECK: vst2.16
+	%tmp1 = load <8 x i16>* %B
+	call void @llvm.arm.neon.vst2lane.v8i16(i16* %A, <8 x i16> %tmp1, <8 x i16> %tmp1, i32 1)
+	ret void
+}
+
+define void @vst2laneQi32(i32* %A, <4 x i32>* %B) nounwind {
+;CHECK: vst2laneQi32:
+;CHECK: vst2.32
+	%tmp1 = load <4 x i32>* %B
+	call void @llvm.arm.neon.vst2lane.v4i32(i32* %A, <4 x i32> %tmp1, <4 x i32> %tmp1, i32 2)
+	ret void
+}
+
+define void @vst2laneQf(float* %A, <4 x float>* %B) nounwind {
+;CHECK: vst2laneQf:
+;CHECK: vst2.32
+	%tmp1 = load <4 x float>* %B
+	call void @llvm.arm.neon.vst2lane.v4f32(float* %A, <4 x float> %tmp1, <4 x float> %tmp1, i32 3)
+	ret void
+}
+
 declare void @llvm.arm.neon.vst2lane.v8i8(i8*, <8 x i8>, <8 x i8>, i32) nounwind
 declare void @llvm.arm.neon.vst2lane.v4i16(i8*, <4 x i16>, <4 x i16>, i32) nounwind
 declare void @llvm.arm.neon.vst2lane.v2i32(i8*, <2 x i32>, <2 x i32>, i32) nounwind
 declare void @llvm.arm.neon.vst2lane.v2f32(i8*, <2 x float>, <2 x float>, i32) nounwind
 
+declare void @llvm.arm.neon.vst2lane.v8i16(i8*, <8 x i16>, <8 x i16>, i32) nounwind
+declare void @llvm.arm.neon.vst2lane.v4i32(i8*, <4 x i32>, <4 x i32>, i32) nounwind
+declare void @llvm.arm.neon.vst2lane.v4f32(i8*, <4 x float>, <4 x float>, i32) nounwind
+
 define void @vst3lanei8(i8* %A, <8 x i8>* %B) nounwind {
 ;CHECK: vst3lanei8:
 ;CHECK: vst3.8
@@ -69,11 +97,39 @@ define void @vst3lanef(float* %A, <2 x float>* %B) nounwind {
 	ret void
 }
 
+define void @vst3laneQi16(i16* %A, <8 x i16>* %B) nounwind {
+;CHECK: vst3laneQi16:
+;CHECK: vst3.16
+	%tmp1 = load <8 x i16>* %B
+	call void @llvm.arm.neon.vst3lane.v8i16(i16* %A, <8 x i16> %tmp1, <8 x i16> %tmp1, <8 x i16> %tmp1, i32 6)
+	ret void
+}
+
+define void @vst3laneQi32(i32* %A, <4 x i32>* %B) nounwind {
+;CHECK: vst3laneQi32:
+;CHECK: vst3.32
+	%tmp1 = load <4 x i32>* %B
+	call void @llvm.arm.neon.vst3lane.v4i32(i32* %A, <4 x i32> %tmp1, <4 x i32> %tmp1, <4 x i32> %tmp1, i32 0)
+	ret void
+}
+
+define void @vst3laneQf(float* %A, <4 x float>* %B) nounwind {
+;CHECK: vst3laneQf:
+;CHECK: vst3.32
+	%tmp1 = load <4 x float>* %B
+	call void @llvm.arm.neon.vst3lane.v4f32(float* %A, <4 x float> %tmp1, <4 x float> %tmp1, <4 x float> %tmp1, i32 1)
+	ret void
+}
+
 declare void @llvm.arm.neon.vst3lane.v8i8(i8*, <8 x i8>, <8 x i8>, <8 x i8>, i32) nounwind
 declare void @llvm.arm.neon.vst3lane.v4i16(i8*, <4 x i16>, <4 x i16>, <4 x i16>, i32) nounwind
 declare void @llvm.arm.neon.vst3lane.v2i32(i8*, <2 x i32>, <2 x i32>, <2 x i32>, i32) nounwind
 declare void @llvm.arm.neon.vst3lane.v2f32(i8*, <2 x float>, <2 x float>, <2 x float>, i32) nounwind
 
+declare void @llvm.arm.neon.vst3lane.v8i16(i8*, <8 x i16>, <8 x i16>, <8 x i16>, i32) nounwind
+declare void @llvm.arm.neon.vst3lane.v4i32(i8*, <4 x i32>, <4 x i32>, <4 x i32>, i32) nounwind
+declare void @llvm.arm.neon.vst3lane.v4f32(i8*, <4 x float>, <4 x float>, <4 x float>, i32) nounwind
+
 
 define void @vst4lanei8(i8* %A, <8 x i8>* %B) nounwind {
 ;CHECK: vst4lanei8:
@@ -107,7 +163,35 @@ define void @vst4lanef(float* %A, <2 x float>* %B) nounwind {
 	ret void
 }
 
+define void @vst4laneQi16(i16* %A, <8 x i16>* %B) nounwind {
+;CHECK: vst4laneQi16:
+;CHECK: vst4.16
+	%tmp1 = load <8 x i16>* %B
+	call void @llvm.arm.neon.vst4lane.v8i16(i16* %A, <8 x i16> %tmp1, <8 x i16> %tmp1, <8 x i16> %tmp1, <8 x i16> %tmp1, i32 7)
+	ret void
+}
+
+define void @vst4laneQi32(i32* %A, <4 x i32>* %B) nounwind {
+;CHECK: vst4laneQi32:
+;CHECK: vst4.32
+	%tmp1 = load <4 x i32>* %B
+	call void @llvm.arm.neon.vst4lane.v4i32(i32* %A, <4 x i32> %tmp1, <4 x i32> %tmp1, <4 x i32> %tmp1, <4 x i32> %tmp1, i32 2)
+	ret void
+}
+
+define void @vst4laneQf(float* %A, <4 x float>* %B) nounwind {
+;CHECK: vst4laneQf:
+;CHECK: vst4.32
+	%tmp1 = load <4 x float>* %B
+	call void @llvm.arm.neon.vst4lane.v4f32(float* %A, <4 x float> %tmp1, <4 x float> %tmp1, <4 x float> %tmp1, <4 x float> %tmp1, i32 1)
+	ret void
+}
+
 declare void @llvm.arm.neon.vst4lane.v8i8(i8*, <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8>, i32) nounwind
 declare void @llvm.arm.neon.vst4lane.v4i16(i8*, <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16>, i32) nounwind
 declare void @llvm.arm.neon.vst4lane.v2i32(i8*, <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32>, i32) nounwind
 declare void @llvm.arm.neon.vst4lane.v2f32(i8*, <2 x float>, <2 x float>, <2 x float>, <2 x float>, i32) nounwind
+
+declare void @llvm.arm.neon.vst4lane.v8i16(i8*, <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16>, i32) nounwind
+declare void @llvm.arm.neon.vst4lane.v4i32(i8*, <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32>, i32) nounwind
+declare void @llvm.arm.neon.vst4lane.v4f32(i8*, <4 x float>, <4 x float>, <4 x float>, <4 x float>, i32) nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vsub.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vsub.ll
index 8419a1b..8f0055f 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vsub.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vsub.ll
@@ -1,11 +1,8 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vsub\\.i8} %t | count 2
-; RUN: grep {vsub\\.i16} %t | count 2
-; RUN: grep {vsub\\.i32} %t | count 2
-; RUN: grep {vsub\\.i64} %t | count 2
-; RUN: grep {vsub\\.f32} %t | count 2
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
 define <8 x i8> @vsubi8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vsubi8:
+;CHECK: vsub.i8
 	%tmp1 = load <8 x i8>* %A
 	%tmp2 = load <8 x i8>* %B
 	%tmp3 = sub <8 x i8> %tmp1, %tmp2
@@ -13,6 +10,8 @@ define <8 x i8> @vsubi8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 }
 
 define <4 x i16> @vsubi16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vsubi16:
+;CHECK: vsub.i16
 	%tmp1 = load <4 x i16>* %A
 	%tmp2 = load <4 x i16>* %B
 	%tmp3 = sub <4 x i16> %tmp1, %tmp2
@@ -20,6 +19,8 @@ define <4 x i16> @vsubi16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
 }
 
 define <2 x i32> @vsubi32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vsubi32:
+;CHECK: vsub.i32
 	%tmp1 = load <2 x i32>* %A
 	%tmp2 = load <2 x i32>* %B
 	%tmp3 = sub <2 x i32> %tmp1, %tmp2
@@ -27,6 +28,8 @@ define <2 x i32> @vsubi32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
 }
 
 define <1 x i64> @vsubi64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
+;CHECK: vsubi64:
+;CHECK: vsub.i64
 	%tmp1 = load <1 x i64>* %A
 	%tmp2 = load <1 x i64>* %B
 	%tmp3 = sub <1 x i64> %tmp1, %tmp2
@@ -34,6 +37,8 @@ define <1 x i64> @vsubi64(<1 x i64>* %A, <1 x i64>* %B) nounwind {
 }
 
 define <2 x float> @vsubf32(<2 x float>* %A, <2 x float>* %B) nounwind {
+;CHECK: vsubf32:
+;CHECK: vsub.f32
 	%tmp1 = load <2 x float>* %A
 	%tmp2 = load <2 x float>* %B
 	%tmp3 = sub <2 x float> %tmp1, %tmp2
@@ -41,6 +46,8 @@ define <2 x float> @vsubf32(<2 x float>* %A, <2 x float>* %B) nounwind {
 }
 
 define <16 x i8> @vsubQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
+;CHECK: vsubQi8:
+;CHECK: vsub.i8
 	%tmp1 = load <16 x i8>* %A
 	%tmp2 = load <16 x i8>* %B
 	%tmp3 = sub <16 x i8> %tmp1, %tmp2
@@ -48,6 +55,8 @@ define <16 x i8> @vsubQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
 }
 
 define <8 x i16> @vsubQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vsubQi16:
+;CHECK: vsub.i16
 	%tmp1 = load <8 x i16>* %A
 	%tmp2 = load <8 x i16>* %B
 	%tmp3 = sub <8 x i16> %tmp1, %tmp2
@@ -55,6 +64,8 @@ define <8 x i16> @vsubQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
 }
 
 define <4 x i32> @vsubQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vsubQi32:
+;CHECK: vsub.i32
 	%tmp1 = load <4 x i32>* %A
 	%tmp2 = load <4 x i32>* %B
 	%tmp3 = sub <4 x i32> %tmp1, %tmp2
@@ -62,6 +73,8 @@ define <4 x i32> @vsubQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
 }
 
 define <2 x i64> @vsubQi64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vsubQi64:
+;CHECK: vsub.i64
 	%tmp1 = load <2 x i64>* %A
 	%tmp2 = load <2 x i64>* %B
 	%tmp3 = sub <2 x i64> %tmp1, %tmp2
@@ -69,8 +82,196 @@ define <2 x i64> @vsubQi64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
 }
 
 define <4 x float> @vsubQf32(<4 x float>* %A, <4 x float>* %B) nounwind {
+;CHECK: vsubQf32:
+;CHECK: vsub.f32
 	%tmp1 = load <4 x float>* %A
 	%tmp2 = load <4 x float>* %B
 	%tmp3 = sub <4 x float> %tmp1, %tmp2
 	ret <4 x float> %tmp3
 }
+
+define <8 x i8> @vsubhni16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vsubhni16:
+;CHECK: vsubhn.i16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vsubhn.v8i8(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vsubhni32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vsubhni32:
+;CHECK: vsubhn.i32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vsubhn.v4i16(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vsubhni64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vsubhni64:
+;CHECK: vsubhn.i64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i64>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vsubhn.v2i32(<2 x i64> %tmp1, <2 x i64> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+declare <8 x i8>  @llvm.arm.neon.vsubhn.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vsubhn.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vsubhn.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
+
+define <8 x i8> @vrsubhni16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
+;CHECK: vrsubhni16:
+;CHECK: vrsubhn.i16
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i16>* %B
+	%tmp3 = call <8 x i8> @llvm.arm.neon.vrsubhn.v8i8(<8 x i16> %tmp1, <8 x i16> %tmp2)
+	ret <8 x i8> %tmp3
+}
+
+define <4 x i16> @vrsubhni32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
+;CHECK: vrsubhni32:
+;CHECK: vrsubhn.i32
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i32>* %B
+	%tmp3 = call <4 x i16> @llvm.arm.neon.vrsubhn.v4i16(<4 x i32> %tmp1, <4 x i32> %tmp2)
+	ret <4 x i16> %tmp3
+}
+
+define <2 x i32> @vrsubhni64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
+;CHECK: vrsubhni64:
+;CHECK: vrsubhn.i64
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i64>* %B
+	%tmp3 = call <2 x i32> @llvm.arm.neon.vrsubhn.v2i32(<2 x i64> %tmp1, <2 x i64> %tmp2)
+	ret <2 x i32> %tmp3
+}
+
+declare <8 x i8>  @llvm.arm.neon.vrsubhn.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
+declare <4 x i16> @llvm.arm.neon.vrsubhn.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
+declare <2 x i32> @llvm.arm.neon.vrsubhn.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
+
+define <8 x i16> @vsubls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vsubls8:
+;CHECK: vsubl.s8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vsubls.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vsubls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vsubls16:
+;CHECK: vsubl.s16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vsubls.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vsubls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vsubls32:
+;CHECK: vsubl.s32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vsubls.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+define <8 x i16> @vsublu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vsublu8:
+;CHECK: vsubl.u8
+	%tmp1 = load <8 x i8>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vsublu.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vsublu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vsublu16:
+;CHECK: vsubl.u16
+	%tmp1 = load <4 x i16>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vsublu.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vsublu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vsublu32:
+;CHECK: vsubl.u32
+	%tmp1 = load <2 x i32>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vsublu.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+declare <8 x i16> @llvm.arm.neon.vsubls.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vsubls.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vsubls.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
+
+declare <8 x i16> @llvm.arm.neon.vsublu.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vsublu.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vsublu.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
+
+define <8 x i16> @vsubws8(<8 x i16>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vsubws8:
+;CHECK: vsubw.s8
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vsubws.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vsubws16(<4 x i32>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vsubws16:
+;CHECK: vsubw.s16
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vsubws.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vsubws32(<2 x i64>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vsubws32:
+;CHECK: vsubw.s32
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vsubws.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+define <8 x i16> @vsubwu8(<8 x i16>* %A, <8 x i8>* %B) nounwind {
+;CHECK: vsubwu8:
+;CHECK: vsubw.u8
+	%tmp1 = load <8 x i16>* %A
+	%tmp2 = load <8 x i8>* %B
+	%tmp3 = call <8 x i16> @llvm.arm.neon.vsubwu.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2)
+	ret <8 x i16> %tmp3
+}
+
+define <4 x i32> @vsubwu16(<4 x i32>* %A, <4 x i16>* %B) nounwind {
+;CHECK: vsubwu16:
+;CHECK: vsubw.u16
+	%tmp1 = load <4 x i32>* %A
+	%tmp2 = load <4 x i16>* %B
+	%tmp3 = call <4 x i32> @llvm.arm.neon.vsubwu.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2)
+	ret <4 x i32> %tmp3
+}
+
+define <2 x i64> @vsubwu32(<2 x i64>* %A, <2 x i32>* %B) nounwind {
+;CHECK: vsubwu32:
+;CHECK: vsubw.u32
+	%tmp1 = load <2 x i64>* %A
+	%tmp2 = load <2 x i32>* %B
+	%tmp3 = call <2 x i64> @llvm.arm.neon.vsubwu.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2)
+	ret <2 x i64> %tmp3
+}
+
+declare <8 x i16> @llvm.arm.neon.vsubws.v8i16(<8 x i16>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vsubws.v4i32(<4 x i32>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vsubws.v2i64(<2 x i64>, <2 x i32>) nounwind readnone
+
+declare <8 x i16> @llvm.arm.neon.vsubwu.v8i16(<8 x i16>, <8 x i8>) nounwind readnone
+declare <4 x i32> @llvm.arm.neon.vsubwu.v4i32(<4 x i32>, <4 x i16>) nounwind readnone
+declare <2 x i64> @llvm.arm.neon.vsubwu.v2i64(<2 x i64>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vsubhn.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vsubhn.ll
deleted file mode 100644
index f1eafa8..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vsubhn.ll
+++ /dev/null
@@ -1,29 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vsubhn\\.i16} %t | count 1
-; RUN: grep {vsubhn\\.i32} %t | count 1
-; RUN: grep {vsubhn\\.i64} %t | count 1
-
-define <8 x i8> @vsubhni16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = call <8 x i8> @llvm.arm.neon.vsubhn.v8i8(<8 x i16> %tmp1, <8 x i16> %tmp2)
-	ret <8 x i8> %tmp3
-}
-
-define <4 x i16> @vsubhni32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = call <4 x i16> @llvm.arm.neon.vsubhn.v4i16(<4 x i32> %tmp1, <4 x i32> %tmp2)
-	ret <4 x i16> %tmp3
-}
-
-define <2 x i32> @vsubhni64(<2 x i64>* %A, <2 x i64>* %B) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i64>* %B
-	%tmp3 = call <2 x i32> @llvm.arm.neon.vsubhn.v2i32(<2 x i64> %tmp1, <2 x i64> %tmp2)
-	ret <2 x i32> %tmp3
-}
-
-declare <8 x i8>  @llvm.arm.neon.vsubhn.v8i8(<8 x i16>, <8 x i16>) nounwind readnone
-declare <4 x i16> @llvm.arm.neon.vsubhn.v4i16(<4 x i32>, <4 x i32>) nounwind readnone
-declare <2 x i32> @llvm.arm.neon.vsubhn.v2i32(<2 x i64>, <2 x i64>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vsubl.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vsubl.ll
deleted file mode 100644
index 6cd867f..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vsubl.ll
+++ /dev/null
@@ -1,57 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vsubl\\.s8} %t | count 1
-; RUN: grep {vsubl\\.s16} %t | count 1
-; RUN: grep {vsubl\\.s32} %t | count 1
-; RUN: grep {vsubl\\.u8} %t | count 1
-; RUN: grep {vsubl\\.u16} %t | count 1
-; RUN: grep {vsubl\\.u32} %t | count 1
-
-define <8 x i16> @vsubls8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vsubls.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vsubls16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vsubls.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vsubls32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vsubls.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-define <8 x i16> @vsublu8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vsublu.v8i16(<8 x i8> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vsublu16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vsublu.v4i32(<4 x i16> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vsublu32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vsublu.v2i64(<2 x i32> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-declare <8 x i16> @llvm.arm.neon.vsubls.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vsubls.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vsubls.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
-
-declare <8 x i16> @llvm.arm.neon.vsublu.v8i16(<8 x i8>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vsublu.v4i32(<4 x i16>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vsublu.v2i64(<2 x i32>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vsubw.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vsubw.ll
deleted file mode 100644
index d83b19c..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vsubw.ll
+++ /dev/null
@@ -1,57 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vsubw\\.s8} %t | count 1
-; RUN: grep {vsubw\\.s16} %t | count 1
-; RUN: grep {vsubw\\.s32} %t | count 1
-; RUN: grep {vsubw\\.u8} %t | count 1
-; RUN: grep {vsubw\\.u16} %t | count 1
-; RUN: grep {vsubw\\.u32} %t | count 1
-
-define <8 x i16> @vsubws8(<8 x i16>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vsubws.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vsubws16(<4 x i32>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vsubws.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vsubws32(<2 x i64>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vsubws.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-define <8 x i16> @vsubwu8(<8 x i16>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = call <8 x i16> @llvm.arm.neon.vsubwu.v8i16(<8 x i16> %tmp1, <8 x i8> %tmp2)
-	ret <8 x i16> %tmp3
-}
-
-define <4 x i32> @vsubwu16(<4 x i32>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = call <4 x i32> @llvm.arm.neon.vsubwu.v4i32(<4 x i32> %tmp1, <4 x i16> %tmp2)
-	ret <4 x i32> %tmp3
-}
-
-define <2 x i64> @vsubwu32(<2 x i64>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i64>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = call <2 x i64> @llvm.arm.neon.vsubwu.v2i64(<2 x i64> %tmp1, <2 x i32> %tmp2)
-	ret <2 x i64> %tmp3
-}
-
-declare <8 x i16> @llvm.arm.neon.vsubws.v8i16(<8 x i16>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vsubws.v4i32(<4 x i32>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vsubws.v2i64(<2 x i64>, <2 x i32>) nounwind readnone
-
-declare <8 x i16> @llvm.arm.neon.vsubwu.v8i16(<8 x i16>, <8 x i8>) nounwind readnone
-declare <4 x i32> @llvm.arm.neon.vsubwu.v4i32(<4 x i32>, <4 x i16>) nounwind readnone
-declare <2 x i64> @llvm.arm.neon.vsubwu.v2i64(<2 x i64>, <2 x i32>) nounwind readnone
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vtbl.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vtbl.ll
index 89653b0..9264987 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vtbl.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vtbl.ll
@@ -1,8 +1,8 @@
 ; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
-%struct.__builtin_neon_v8qi2 = type { <8 x i8>, <8 x i8> }
-%struct.__builtin_neon_v8qi3 = type { <8 x i8>,  <8 x i8>, <8 x i8> }
-%struct.__builtin_neon_v8qi4 = type { <8 x i8>,  <8 x i8>,  <8 x i8>, <8 x i8> }
+%struct.__neon_int8x8x2_t = type { <8 x i8>, <8 x i8> }
+%struct.__neon_int8x8x3_t = type { <8 x i8>,  <8 x i8>, <8 x i8> }
+%struct.__neon_int8x8x4_t = type { <8 x i8>,  <8 x i8>,  <8 x i8>, <8 x i8> }
 
 define <8 x i8> @vtbl1(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 ;CHECK: vtbl1:
@@ -13,38 +13,38 @@ define <8 x i8> @vtbl1(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 	ret <8 x i8> %tmp3
 }
 
-define <8 x i8> @vtbl2(<8 x i8>* %A, %struct.__builtin_neon_v8qi2* %B) nounwind {
+define <8 x i8> @vtbl2(<8 x i8>* %A, %struct.__neon_int8x8x2_t* %B) nounwind {
 ;CHECK: vtbl2:
 ;CHECK: vtbl.8
 	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load %struct.__builtin_neon_v8qi2* %B
-        %tmp3 = extractvalue %struct.__builtin_neon_v8qi2 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v8qi2 %tmp2, 1
+	%tmp2 = load %struct.__neon_int8x8x2_t* %B
+        %tmp3 = extractvalue %struct.__neon_int8x8x2_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int8x8x2_t %tmp2, 1
 	%tmp5 = call <8 x i8> @llvm.arm.neon.vtbl2(<8 x i8> %tmp1, <8 x i8> %tmp3, <8 x i8> %tmp4)
 	ret <8 x i8> %tmp5
 }
 
-define <8 x i8> @vtbl3(<8 x i8>* %A, %struct.__builtin_neon_v8qi3* %B) nounwind {
+define <8 x i8> @vtbl3(<8 x i8>* %A, %struct.__neon_int8x8x3_t* %B) nounwind {
 ;CHECK: vtbl3:
 ;CHECK: vtbl.8
 	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load %struct.__builtin_neon_v8qi3* %B
-        %tmp3 = extractvalue %struct.__builtin_neon_v8qi3 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v8qi3 %tmp2, 1
-        %tmp5 = extractvalue %struct.__builtin_neon_v8qi3 %tmp2, 2
+	%tmp2 = load %struct.__neon_int8x8x3_t* %B
+        %tmp3 = extractvalue %struct.__neon_int8x8x3_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int8x8x3_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_int8x8x3_t %tmp2, 2
 	%tmp6 = call <8 x i8> @llvm.arm.neon.vtbl3(<8 x i8> %tmp1, <8 x i8> %tmp3, <8 x i8> %tmp4, <8 x i8> %tmp5)
 	ret <8 x i8> %tmp6
 }
 
-define <8 x i8> @vtbl4(<8 x i8>* %A, %struct.__builtin_neon_v8qi4* %B) nounwind {
+define <8 x i8> @vtbl4(<8 x i8>* %A, %struct.__neon_int8x8x4_t* %B) nounwind {
 ;CHECK: vtbl4:
 ;CHECK: vtbl.8
 	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load %struct.__builtin_neon_v8qi4* %B
-        %tmp3 = extractvalue %struct.__builtin_neon_v8qi4 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v8qi4 %tmp2, 1
-        %tmp5 = extractvalue %struct.__builtin_neon_v8qi4 %tmp2, 2
-        %tmp6 = extractvalue %struct.__builtin_neon_v8qi4 %tmp2, 3
+	%tmp2 = load %struct.__neon_int8x8x4_t* %B
+        %tmp3 = extractvalue %struct.__neon_int8x8x4_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int8x8x4_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_int8x8x4_t %tmp2, 2
+        %tmp6 = extractvalue %struct.__neon_int8x8x4_t %tmp2, 3
 	%tmp7 = call <8 x i8> @llvm.arm.neon.vtbl4(<8 x i8> %tmp1, <8 x i8> %tmp3, <8 x i8> %tmp4, <8 x i8> %tmp5, <8 x i8> %tmp6)
 	ret <8 x i8> %tmp7
 }
@@ -59,40 +59,40 @@ define <8 x i8> @vtbx1(<8 x i8>* %A, <8 x i8>* %B, <8 x i8>* %C) nounwind {
 	ret <8 x i8> %tmp4
 }
 
-define <8 x i8> @vtbx2(<8 x i8>* %A, %struct.__builtin_neon_v8qi2* %B, <8 x i8>* %C) nounwind {
+define <8 x i8> @vtbx2(<8 x i8>* %A, %struct.__neon_int8x8x2_t* %B, <8 x i8>* %C) nounwind {
 ;CHECK: vtbx2:
 ;CHECK: vtbx.8
 	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load %struct.__builtin_neon_v8qi2* %B
-        %tmp3 = extractvalue %struct.__builtin_neon_v8qi2 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v8qi2 %tmp2, 1
+	%tmp2 = load %struct.__neon_int8x8x2_t* %B
+        %tmp3 = extractvalue %struct.__neon_int8x8x2_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int8x8x2_t %tmp2, 1
 	%tmp5 = load <8 x i8>* %C
 	%tmp6 = call <8 x i8> @llvm.arm.neon.vtbx2(<8 x i8> %tmp1, <8 x i8> %tmp3, <8 x i8> %tmp4, <8 x i8> %tmp5)
 	ret <8 x i8> %tmp6
 }
 
-define <8 x i8> @vtbx3(<8 x i8>* %A, %struct.__builtin_neon_v8qi3* %B, <8 x i8>* %C) nounwind {
+define <8 x i8> @vtbx3(<8 x i8>* %A, %struct.__neon_int8x8x3_t* %B, <8 x i8>* %C) nounwind {
 ;CHECK: vtbx3:
 ;CHECK: vtbx.8
 	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load %struct.__builtin_neon_v8qi3* %B
-        %tmp3 = extractvalue %struct.__builtin_neon_v8qi3 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v8qi3 %tmp2, 1
-        %tmp5 = extractvalue %struct.__builtin_neon_v8qi3 %tmp2, 2
+	%tmp2 = load %struct.__neon_int8x8x3_t* %B
+        %tmp3 = extractvalue %struct.__neon_int8x8x3_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int8x8x3_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_int8x8x3_t %tmp2, 2
 	%tmp6 = load <8 x i8>* %C
 	%tmp7 = call <8 x i8> @llvm.arm.neon.vtbx3(<8 x i8> %tmp1, <8 x i8> %tmp3, <8 x i8> %tmp4, <8 x i8> %tmp5, <8 x i8> %tmp6)
 	ret <8 x i8> %tmp7
 }
 
-define <8 x i8> @vtbx4(<8 x i8>* %A, %struct.__builtin_neon_v8qi4* %B, <8 x i8>* %C) nounwind {
+define <8 x i8> @vtbx4(<8 x i8>* %A, %struct.__neon_int8x8x4_t* %B, <8 x i8>* %C) nounwind {
 ;CHECK: vtbx4:
 ;CHECK: vtbx.8
 	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load %struct.__builtin_neon_v8qi4* %B
-        %tmp3 = extractvalue %struct.__builtin_neon_v8qi4 %tmp2, 0
-        %tmp4 = extractvalue %struct.__builtin_neon_v8qi4 %tmp2, 1
-        %tmp5 = extractvalue %struct.__builtin_neon_v8qi4 %tmp2, 2
-        %tmp6 = extractvalue %struct.__builtin_neon_v8qi4 %tmp2, 3
+	%tmp2 = load %struct.__neon_int8x8x4_t* %B
+        %tmp3 = extractvalue %struct.__neon_int8x8x4_t %tmp2, 0
+        %tmp4 = extractvalue %struct.__neon_int8x8x4_t %tmp2, 1
+        %tmp5 = extractvalue %struct.__neon_int8x8x4_t %tmp2, 2
+        %tmp6 = extractvalue %struct.__neon_int8x8x4_t %tmp2, 3
 	%tmp7 = load <8 x i8>* %C
 	%tmp8 = call <8 x i8> @llvm.arm.neon.vtbx4(<8 x i8> %tmp1, <8 x i8> %tmp3, <8 x i8> %tmp4, <8 x i8> %tmp5, <8 x i8> %tmp6, <8 x i8> %tmp7)
 	ret <8 x i8> %tmp8
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vtrn.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vtrn.ll
index be55daa..5122b09 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vtrn.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vtrn.ll
@@ -1,15 +1,5 @@
 ; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
-%struct.__builtin_neon_v8qi2 = type { <8 x i8>,  <8 x i8> }
-%struct.__builtin_neon_v4hi2 = type { <4 x i16>, <4 x i16> }
-%struct.__builtin_neon_v2si2 = type { <2 x i32>, <2 x i32> }
-%struct.__builtin_neon_v2sf2 = type { <2 x float>, <2 x float> }
-
-%struct.__builtin_neon_v16qi2 = type { <16 x i8>, <16 x i8> }
-%struct.__builtin_neon_v8hi2  = type { <8 x i16>, <8 x i16> }
-%struct.__builtin_neon_v4si2  = type { <4 x i32>, <4 x i32> }
-%struct.__builtin_neon_v4sf2  = type { <4 x float>, <4 x float> }
-
 define <8 x i8> @vtrni8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 ;CHECK: vtrni8:
 ;CHECK: vtrn.8
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vtst.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vtst.ll
deleted file mode 100644
index df7fb3e..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vtst.ll
+++ /dev/null
@@ -1,58 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+neon > %t
-; RUN: grep {vtst\\.i8} %t | count 2
-; RUN: grep {vtst\\.i16} %t | count 2
-; RUN: grep {vtst\\.i32} %t | count 2
-
-define <8 x i8> @vtsti8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
-	%tmp1 = load <8 x i8>* %A
-	%tmp2 = load <8 x i8>* %B
-	%tmp3 = and <8 x i8> %tmp1, %tmp2
-	%tmp4 = icmp ne <8 x i8> %tmp3, zeroinitializer
-        %tmp5 = sext <8 x i1> %tmp4 to <8 x i8>
-	ret <8 x i8> %tmp5
-}
-
-define <4 x i16> @vtsti16(<4 x i16>* %A, <4 x i16>* %B) nounwind {
-	%tmp1 = load <4 x i16>* %A
-	%tmp2 = load <4 x i16>* %B
-	%tmp3 = and <4 x i16> %tmp1, %tmp2
-	%tmp4 = icmp ne <4 x i16> %tmp3, zeroinitializer
-        %tmp5 = sext <4 x i1> %tmp4 to <4 x i16>
-	ret <4 x i16> %tmp5
-}
-
-define <2 x i32> @vtsti32(<2 x i32>* %A, <2 x i32>* %B) nounwind {
-	%tmp1 = load <2 x i32>* %A
-	%tmp2 = load <2 x i32>* %B
-	%tmp3 = and <2 x i32> %tmp1, %tmp2
-	%tmp4 = icmp ne <2 x i32> %tmp3, zeroinitializer
-        %tmp5 = sext <2 x i1> %tmp4 to <2 x i32>
-	ret <2 x i32> %tmp5
-}
-
-define <16 x i8> @vtstQi8(<16 x i8>* %A, <16 x i8>* %B) nounwind {
-	%tmp1 = load <16 x i8>* %A
-	%tmp2 = load <16 x i8>* %B
-	%tmp3 = and <16 x i8> %tmp1, %tmp2
-	%tmp4 = icmp ne <16 x i8> %tmp3, zeroinitializer
-        %tmp5 = sext <16 x i1> %tmp4 to <16 x i8>
-	ret <16 x i8> %tmp5
-}
-
-define <8 x i16> @vtstQi16(<8 x i16>* %A, <8 x i16>* %B) nounwind {
-	%tmp1 = load <8 x i16>* %A
-	%tmp2 = load <8 x i16>* %B
-	%tmp3 = and <8 x i16> %tmp1, %tmp2
-	%tmp4 = icmp ne <8 x i16> %tmp3, zeroinitializer
-        %tmp5 = sext <8 x i1> %tmp4 to <8 x i16>
-	ret <8 x i16> %tmp5
-}
-
-define <4 x i32> @vtstQi32(<4 x i32>* %A, <4 x i32>* %B) nounwind {
-	%tmp1 = load <4 x i32>* %A
-	%tmp2 = load <4 x i32>* %B
-	%tmp3 = and <4 x i32> %tmp1, %tmp2
-	%tmp4 = icmp ne <4 x i32> %tmp3, zeroinitializer
-        %tmp5 = sext <4 x i1> %tmp4 to <4 x i32>
-	ret <4 x i32> %tmp5
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vuzp.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vuzp.ll
index 411f59e..e531718 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vuzp.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vuzp.ll
@@ -1,15 +1,5 @@
 ; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
-%struct.__builtin_neon_v8qi2 = type { <8 x i8>,  <8 x i8> }
-%struct.__builtin_neon_v4hi2 = type { <4 x i16>, <4 x i16> }
-%struct.__builtin_neon_v2si2 = type { <2 x i32>, <2 x i32> }
-%struct.__builtin_neon_v2sf2 = type { <2 x float>, <2 x float> }
-
-%struct.__builtin_neon_v16qi2 = type { <16 x i8>, <16 x i8> }
-%struct.__builtin_neon_v8hi2  = type { <8 x i16>, <8 x i16> }
-%struct.__builtin_neon_v4si2  = type { <4 x i32>, <4 x i32> }
-%struct.__builtin_neon_v4sf2  = type { <4 x float>, <4 x float> }
-
 define <8 x i8> @vuzpi8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 ;CHECK: vuzpi8:
 ;CHECK: vuzp.8
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/vzip.ll b/libclamav/c++/llvm/test/CodeGen/ARM/vzip.ll
index a1509b9..32f7e0d 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/vzip.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/vzip.ll
@@ -1,15 +1,5 @@
 ; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
 
-%struct.__builtin_neon_v8qi2 = type { <8 x i8>,  <8 x i8> }
-%struct.__builtin_neon_v4hi2 = type { <4 x i16>, <4 x i16> }
-%struct.__builtin_neon_v2si2 = type { <2 x i32>, <2 x i32> }
-%struct.__builtin_neon_v2sf2 = type { <2 x float>, <2 x float> }
-
-%struct.__builtin_neon_v16qi2 = type { <16 x i8>, <16 x i8> }
-%struct.__builtin_neon_v8hi2  = type { <8 x i16>, <8 x i16> }
-%struct.__builtin_neon_v4si2  = type { <4 x i32>, <4 x i32> }
-%struct.__builtin_neon_v4sf2  = type { <4 x float>, <4 x float> }
-
 define <8 x i8> @vzipi8(<8 x i8>* %A, <8 x i8>* %B) nounwind {
 ;CHECK: vzipi8:
 ;CHECK: vzip.8
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/2007-06-06-CriticalEdgeLandingPad.ll b/libclamav/c++/llvm/test/CodeGen/Generic/2007-06-06-CriticalEdgeLandingPad.ll
index 33a3645..1519fe6 100644
--- a/libclamav/c++/llvm/test/CodeGen/Generic/2007-06-06-CriticalEdgeLandingPad.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Generic/2007-06-06-CriticalEdgeLandingPad.ll
@@ -1,5 +1,4 @@
-; RUN: llc < %s -march=x86 -enable-eh -asm-verbose -o - | \
-; RUN:   grep -A 3 {Llabel138.*Region start} | grep {3.*Action}
+; RUN: llc < %s -march=x86 -enable-eh -asm-verbose -o - | FileCheck %s
 ; PR1422
 ; PR1508
 
@@ -2864,3 +2863,8 @@ declare void @system__img_enum__image_enumeration_8(%struct.string___XUP* sret ,
 declare i32 @memcmp(i8*, i8*, i32, ...)
 
 declare void @report__result()
+
+; CHECK: {{Llabel138.*Region start}}
+; CHECK-NEXT: Region length
+; CHECK-NEXT: Landing pad
+; CHECK-NEXT: {{3.*Action}}
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/2009-11-16-BadKillsCrash.ll b/libclamav/c++/llvm/test/CodeGen/Generic/2009-11-16-BadKillsCrash.ll
new file mode 100644
index 0000000..a51c75d
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Generic/2009-11-16-BadKillsCrash.ll
@@ -0,0 +1,75 @@
+; RUN: llc < %s
+; PR5495
+target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:32:32-n8:16:32"
+target triple = "i386-pc-linux-gnu"
+
+%"struct.std::__ctype_abstract_base<wchar_t>" = type { %"struct.std::locale::facet" }
+%"struct.std::basic_ios<char,std::char_traits<char> >" = type { %"struct.std::ios_base", %"struct.std::basic_ostream<char,std::char_traits<char> >"*, i8, i8, %"struct.std::basic_streambuf<char,std::char_traits<char> >"*, %"struct.std::ctype<char>"*, %"struct.std::__ctype_abstract_base<wchar_t>"*, %"struct.std::__ctype_abstract_base<wchar_t>"* }
+%"struct.std::basic_istream<char,std::char_traits<char> >" = type { i32 (...)**, i32, %"struct.std::basic_ios<char,std::char_traits<char> >" }
+%"struct.std::basic_ostream<char,std::char_traits<char> >" = type { i32 (...)**, %"struct.std::basic_ios<char,std::char_traits<char> >" }
+%"struct.std::basic_streambuf<char,std::char_traits<char> >" = type { i32 (...)**, i8*, i8*, i8*, i8*, i8*, i8*, %"struct.std::locale" }
+%"struct.std::ctype<char>" = type { %"struct.std::locale::facet", i32*, i8, i32*, i32*, i16*, i8, [256 x i8], [256 x i8], i8 }
+%"struct.std::ios_base" = type { i32 (...)**, i32, i32, i32, i32, i32, %"struct.std::ios_base::_Callback_list"*, %"struct.std::ios_base::_Words", [8 x %"struct.std::ios_base::_Words"], i32, %"struct.std::ios_base::_Words"*, %"struct.std::locale" }
+%"struct.std::ios_base::_Callback_list" = type { %"struct.std::ios_base::_Callback_list"*, void (i32, %"struct.std::ios_base"*, i32)*, i32, i32 }
+%"struct.std::ios_base::_Words" = type { i8*, i32 }
+%"struct.std::locale" = type { %"struct.std::locale::_Impl"* }
+%"struct.std::locale::_Impl" = type { i32, %"struct.std::locale::facet"**, i32, %"struct.std::locale::facet"**, i8** }
+%"struct.std::locale::facet" = type { i32 (...)**, i32 }
+%union..0._15 = type { i32 }
+
+declare i8* @llvm.eh.exception() nounwind readonly
+
+declare i8* @__cxa_begin_catch(i8*) nounwind
+
+declare %"struct.std::ctype<char>"* @_ZSt9use_facetISt5ctypeIcEERKT_RKSt6locale(%"struct.std::locale"*)
+
+define %"struct.std::basic_istream<char,std::char_traits<char> >"* @_ZStrsIcSt11char_traitsIcEERSt13basic_istreamIT_T0_ES6_PS3_(%"struct.std::basic_istream<char,std::char_traits<char> >"* %__in, i8* nocapture %__s) {
+entry:
+  %0 = invoke %"struct.std::ctype<char>"* @_ZSt9use_facetISt5ctypeIcEERKT_RKSt6locale(%"struct.std::locale"* undef)
+          to label %invcont8 unwind label %lpad74 ; <%"struct.std::ctype<char>"*> [#uses=0]
+
+invcont8:                                         ; preds = %entry
+  %1 = invoke i32 undef(%"struct.std::basic_streambuf<char,std::char_traits<char> >"* undef)
+          to label %bb26.preheader unwind label %lpad ; <i32> [#uses=0]
+
+bb26.preheader:                                   ; preds = %invcont8
+  br label %invcont38
+
+bb1.i100:                                         ; preds = %invcont38
+  %2 = add nsw i32 1, %__extracted.0  ; <i32> [#uses=3]
+  br i1 undef, label %bb.i97, label %bb1.i
+
+bb.i97:                                           ; preds = %bb1.i100
+  br label %invcont38
+
+bb1.i:                                            ; preds = %bb1.i100
+  %3 = invoke i32 undef(%"struct.std::basic_streambuf<char,std::char_traits<char> >"* undef)
+          to label %invcont38 unwind label %lpad ; <i32> [#uses=0]
+
+invcont24:                                        ; preds = %invcont38
+  %4 = invoke i32 undef(%"struct.std::basic_streambuf<char,std::char_traits<char> >"* undef)
+          to label %_ZNSt15basic_streambufIcSt11char_traitsIcEE6sbumpcEv.exit.i unwind label %lpad ; <i32> [#uses=0]
+
+_ZNSt15basic_streambufIcSt11char_traitsIcEE6sbumpcEv.exit.i: ; preds = %invcont24
+  br i1 undef, label %invcont25, label %bb.i93
+
+bb.i93:                                           ; preds = %_ZNSt15basic_streambufIcSt11char_traitsIcEE6sbumpcEv.exit.i
+  %5 = invoke i32 undef(%"struct.std::basic_streambuf<char,std::char_traits<char> >"* undef)
+          to label %invcont25 unwind label %lpad ; <i32> [#uses=0]
+
+invcont25:                                        ; preds = %bb.i93, %_ZNSt15basic_streambufIcSt11char_traitsIcEE6sbumpcEv.exit.i
+  br label %invcont38
+
+invcont38:                                        ; preds = %invcont25, %bb1.i, %bb.i97, %bb26.preheader
+  %__extracted.0 = phi i32 [ 0, %bb26.preheader ], [ undef, %invcont25 ], [ %2, %bb.i97 ], [ %2, %bb1.i ] ; <i32> [#uses=1]
+  br i1 false, label %bb1.i100, label %invcont24
+
+lpad:                                             ; preds = %bb.i93, %invcont24, %bb1.i, %invcont8
+  %__extracted.1 = phi i32 [ 0, %invcont8 ], [ %2, %bb1.i ], [ undef, %bb.i93 ], [ undef, %invcont24 ] ; <i32> [#uses=0]
+  %eh_ptr = call i8* @llvm.eh.exception() ; <i8*> [#uses=1]
+  %6 = call i8* @__cxa_begin_catch(i8* %eh_ptr) nounwind ; <i8*> [#uses=0]
+  unreachable
+
+lpad74:                                           ; preds = %entry
+  unreachable
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/2009-11-20-NewNode.ll b/libclamav/c++/llvm/test/CodeGen/Generic/2009-11-20-NewNode.ll
new file mode 100644
index 0000000..92d7628
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Generic/2009-11-20-NewNode.ll
@@ -0,0 +1,37 @@
+; RUN: llc -march=msp430 < %s
+; RUN: llc -march=pic16 < %s
+; PR5558
+
+define i64 @_strtoll_r(i16 %base) nounwind {
+entry:
+  br i1 undef, label %if.then, label %if.end27
+
+if.then:                                          ; preds = %do.end
+  br label %if.end27
+
+if.end27:                                         ; preds = %if.then, %do.end
+  %cond66 = select i1 undef, i64 -9223372036854775808, i64 9223372036854775807 ; <i64> [#uses=3]
+  %conv69 = sext i16 %base to i64                 ; <i64> [#uses=1]
+  %div = udiv i64 %cond66, %conv69                ; <i64> [#uses=1]
+  br label %for.cond
+
+for.cond:                                         ; preds = %if.end116, %if.end27
+  br i1 undef, label %if.then152, label %if.then93
+
+if.then93:                                        ; preds = %for.cond
+  br i1 undef, label %if.end116, label %if.then152
+
+if.end116:                                        ; preds = %if.then93
+  %cmp123 = icmp ugt i64 undef, %div              ; <i1> [#uses=1]
+  %or.cond = or i1 undef, %cmp123                 ; <i1> [#uses=0]
+  br label %for.cond
+
+if.then152:                                       ; preds = %if.then93, %for.cond
+  br i1 undef, label %if.end182, label %if.then172
+
+if.then172:                                       ; preds = %if.then152
+  ret i64 %cond66
+
+if.end182:                                        ; preds = %if.then152
+  ret i64 %cond66
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/intrinsics.ll b/libclamav/c++/llvm/test/CodeGen/Generic/intrinsics.ll
index 9a42c3e..29bc499 100644
--- a/libclamav/c++/llvm/test/CodeGen/Generic/intrinsics.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Generic/intrinsics.ll
@@ -14,9 +14,9 @@ define double @test_sqrt(float %F) {
 
 
 ; SIN
-declare float @sinf(float)
+declare float @sinf(float) readonly
 
-declare double @sin(double)
+declare double @sin(double) readonly
 
 define double @test_sin(float %F) {
         %G = call float @sinf( float %F )               ; <float> [#uses=1]
@@ -27,9 +27,9 @@ define double @test_sin(float %F) {
 
 
 ; COS
-declare float @cosf(float)
+declare float @cosf(float) readonly
 
-declare double @cos(double)
+declare double @cos(double) readonly
 
 define double @test_cos(float %F) {
         %G = call float @cosf( float %F )               ; <float> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower-feature-2.ll b/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower-feature-2.ll
index d6e5647..80e0618 100644
--- a/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower-feature-2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower-feature-2.ll
@@ -5,9 +5,9 @@
 ; RUN: grep 1023 %t | count 1
 ; RUN: grep 119  %t | count 1
 ; RUN: grep JTI %t | count 2
-; RUN: grep jg %t | count 1
+; RUN: grep jg %t | count 3
 ; RUN: grep ja %t | count 1
-; RUN: grep js %t | count 1
+; RUN: grep jns %t | count 1
 
 target triple = "i686-pc-linux-gnu"
 
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower.ll b/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower.ll
index eb240ed..1cefe82 100644
--- a/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower.ll
@@ -1,8 +1,22 @@
 ; RUN: llc < %s
-; PR1197
 
 
-define void @exp_attr__expand_n_attribute_reference() {
+; PR5421
+define void @test1() {
+entry:
+  switch i128 undef, label %exit [
+    i128 55340232221128654848, label %exit
+    i128 92233720368547758080, label %exit
+    i128 73786976294838206464, label %exit
+    i128 147573952589676412928, label %exit
+  ]
+exit:
+  unreachable
+}
+
+
+; PR1197
+define void @test2() {
 entry:
 	br i1 false, label %cond_next954, label %cond_true924
 
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/2009-11-15-ProcImpDefsBug.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/2009-11-15-ProcImpDefsBug.ll
new file mode 100644
index 0000000..2d9d16a
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/2009-11-15-ProcImpDefsBug.ll
@@ -0,0 +1,105 @@
+; RUN: llc < %s -mtriple=powerpc-apple-darwin8
+
+define void @gcov_exit() nounwind {
+entry:
+  br i1 undef, label %return, label %bb.nph341
+
+bb.nph341:                                        ; preds = %entry
+  br label %bb25
+
+bb25:                                             ; preds = %read_fatal, %bb.nph341
+  br i1 undef, label %bb49.1, label %bb48
+
+bb48:                                             ; preds = %bb25
+  br label %bb49.1
+
+bb51:                                             ; preds = %bb48.4, %bb49.3
+  switch i32 undef, label %bb58 [
+    i32 0, label %rewrite
+    i32 1734567009, label %bb59
+  ]
+
+bb58:                                             ; preds = %bb51
+  br label %read_fatal
+
+bb59:                                             ; preds = %bb51
+  br i1 undef, label %bb60, label %bb3.i156
+
+bb3.i156:                                         ; preds = %bb59
+  br label %read_fatal
+
+bb60:                                             ; preds = %bb59
+  br i1 undef, label %bb78.preheader, label %rewrite
+
+bb78.preheader:                                   ; preds = %bb60
+  br i1 undef, label %bb62, label %bb80
+
+bb62:                                             ; preds = %bb78.preheader
+  br i1 undef, label %bb64, label %read_mismatch
+
+bb64:                                             ; preds = %bb62
+  br i1 undef, label %bb65, label %read_mismatch
+
+bb65:                                             ; preds = %bb64
+  br i1 undef, label %bb75, label %read_mismatch
+
+read_mismatch:                                    ; preds = %bb98, %bb119.preheader, %bb72, %bb71, %bb65, %bb64, %bb62
+  br label %read_fatal
+
+bb71:                                             ; preds = %bb75
+  br i1 undef, label %bb72, label %read_mismatch
+
+bb72:                                             ; preds = %bb71
+  br i1 undef, label %bb73, label %read_mismatch
+
+bb73:                                             ; preds = %bb72
+  unreachable
+
+bb74:                                             ; preds = %bb75
+  br label %bb75
+
+bb75:                                             ; preds = %bb74, %bb65
+  br i1 undef, label %bb74, label %bb71
+
+bb80:                                             ; preds = %bb78.preheader
+  unreachable
+
+read_fatal:                                       ; preds = %read_mismatch, %bb3.i156, %bb58
+  br i1 undef, label %return, label %bb25
+
+rewrite:                                          ; preds = %bb60, %bb51
+  br i1 undef, label %bb94, label %bb119.preheader
+
+bb94:                                             ; preds = %rewrite
+  unreachable
+
+bb119.preheader:                                  ; preds = %rewrite
+  br i1 undef, label %read_mismatch, label %bb98
+
+bb98:                                             ; preds = %bb119.preheader
+  br label %read_mismatch
+
+return:                                           ; preds = %read_fatal, %entry
+  ret void
+
+bb49.1:                                           ; preds = %bb48, %bb25
+  br i1 undef, label %bb49.2, label %bb48.2
+
+bb49.2:                                           ; preds = %bb48.2, %bb49.1
+  br i1 undef, label %bb49.3, label %bb48.3
+
+bb48.2:                                           ; preds = %bb49.1
+  br label %bb49.2
+
+bb49.3:                                           ; preds = %bb48.3, %bb49.2
+  %c_ix.0.3 = phi i32 [ undef, %bb48.3 ], [ undef, %bb49.2 ] ; <i32> [#uses=1]
+  br i1 undef, label %bb51, label %bb48.4
+
+bb48.3:                                           ; preds = %bb49.2
+  store i64* undef, i64** undef, align 4
+  br label %bb49.3
+
+bb48.4:                                           ; preds = %bb49.3
+  %0 = getelementptr inbounds [5 x i64*]* undef, i32 0, i32 %c_ix.0.3 ; <i64**> [#uses=0]
+  br label %bb51
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/2009-11-15-ReMatBug.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/2009-11-15-ReMatBug.ll
new file mode 100644
index 0000000..54f4b2e
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/2009-11-15-ReMatBug.ll
@@ -0,0 +1,155 @@
+; RUN: llc < %s -mtriple=powerpc-apple-darwin8
+
+%struct.FILE = type { i8*, i32, i32, i16, i16, %struct.__sbuf, i32, i8*, i32 (i8*)*, i32 (i8*, i8*, i32)*, i64 (i8*, i64, i32)*, i32 (i8*, i8*, i32)*, %struct.__sbuf, %struct.__sFILEX*, i32, [3 x i8], [1 x i8], %struct.__sbuf, i32, i64 }
+%struct.__gcov_var = type { %struct.FILE*, i32, i32, i32, i32, i32, i32, [1025 x i32] }
+%struct.__sFILEX = type opaque
+%struct.__sbuf = type { i8*, i32 }
+%struct.gcov_ctr_info = type { i32, i64*, void (i64*, i32)* }
+%struct.gcov_ctr_summary = type { i32, i32, i64, i64, i64 }
+%struct.gcov_fn_info = type { i32, i32, [0 x i32] }
+%struct.gcov_info = type { i32, %struct.gcov_info*, i32, i8*, i32, %struct.gcov_fn_info*, i32, [0 x %struct.gcov_ctr_info] }
+%struct.gcov_summary = type { i32, [1 x %struct.gcov_ctr_summary] }
+
+@__gcov_var = external global %struct.__gcov_var  ; <%struct.__gcov_var*> [#uses=1]
+@__sF = external global [0 x %struct.FILE]        ; <[0 x %struct.FILE]*> [#uses=1]
+@.str = external constant [56 x i8], align 4      ; <[56 x i8]*> [#uses=1]
+@gcov_list = external global %struct.gcov_info*   ; <%struct.gcov_info**> [#uses=1]
+@.str7 = external constant [35 x i8], align 4     ; <[35 x i8]*> [#uses=1]
+@.str8 = external constant [9 x i8], align 4      ; <[9 x i8]*> [#uses=1]
+@.str9 = external constant [10 x i8], align 4     ; <[10 x i8]*> [#uses=1]
+@.str10 = external constant [36 x i8], align 4    ; <[36 x i8]*> [#uses=1]
+
+declare i32 @"\01_fprintf$LDBL128"(%struct.FILE*, i8*, ...) nounwind
+
+define void @gcov_exit() nounwind {
+entry:
+  %gi_ptr.0357 = load %struct.gcov_info** @gcov_list, align 4 ; <%struct.gcov_info*> [#uses=1]
+  %0 = alloca i8, i32 undef, align 1              ; <i8*> [#uses=3]
+  br i1 undef, label %return, label %bb.nph341
+
+bb.nph341:                                        ; preds = %entry
+  %object27 = bitcast %struct.gcov_summary* undef to i8* ; <i8*> [#uses=1]
+  br label %bb25
+
+bb25:                                             ; preds = %read_fatal, %bb.nph341
+  %gi_ptr.1329 = phi %struct.gcov_info* [ %gi_ptr.0357, %bb.nph341 ], [ undef, %read_fatal ] ; <%struct.gcov_info*> [#uses=1]
+  call void @llvm.memset.i32(i8* %object27, i8 0, i32 36, i32 8)
+  br i1 undef, label %bb49.1, label %bb48
+
+bb48:                                             ; preds = %bb25
+  br label %bb49.1
+
+bb51:                                             ; preds = %bb48.4, %bb49.3
+  switch i32 undef, label %bb58 [
+    i32 0, label %rewrite
+    i32 1734567009, label %bb59
+  ]
+
+bb58:                                             ; preds = %bb51
+  %1 = call i32 (%struct.FILE*, i8*, ...)* @"\01_fprintf$LDBL128"(%struct.FILE* getelementptr inbounds ([0 x %struct.FILE]* @__sF, i32 0, i32 2), i8* getelementptr inbounds ([35 x i8]* @.str7, i32 0, i32 0), i8* %0) nounwind ; <i32> [#uses=0]
+  br label %read_fatal
+
+bb59:                                             ; preds = %bb51
+  br i1 undef, label %bb60, label %bb3.i156
+
+bb3.i156:                                         ; preds = %bb59
+  store i8 52, i8* undef, align 1
+  store i8 42, i8* undef, align 1
+  %2 = call i32 (%struct.FILE*, i8*, ...)* @"\01_fprintf$LDBL128"(%struct.FILE* getelementptr inbounds ([0 x %struct.FILE]* @__sF, i32 0, i32 2), i8* getelementptr inbounds ([56 x i8]* @.str, i32 0, i32 0), i8* %0, i8* undef, i8* undef) nounwind ; <i32> [#uses=0]
+  br label %read_fatal
+
+bb60:                                             ; preds = %bb59
+  br i1 undef, label %bb78.preheader, label %rewrite
+
+bb78.preheader:                                   ; preds = %bb60
+  br i1 undef, label %bb62, label %bb80
+
+bb62:                                             ; preds = %bb78.preheader
+  br i1 undef, label %bb64, label %read_mismatch
+
+bb64:                                             ; preds = %bb62
+  br i1 undef, label %bb65, label %read_mismatch
+
+bb65:                                             ; preds = %bb64
+  br i1 undef, label %bb75, label %read_mismatch
+
+read_mismatch:                                    ; preds = %bb98, %bb119.preheader, %bb72, %bb71, %bb65, %bb64, %bb62
+  %3 = icmp eq i32 undef, -1                      ; <i1> [#uses=1]
+  %iftmp.11.0 = select i1 %3, i8* getelementptr inbounds ([10 x i8]* @.str9, i32 0, i32 0), i8* getelementptr inbounds ([9 x i8]* @.str8, i32 0, i32 0) ; <i8*> [#uses=1]
+  %4 = call i32 (%struct.FILE*, i8*, ...)* @"\01_fprintf$LDBL128"(%struct.FILE* getelementptr inbounds ([0 x %struct.FILE]* @__sF, i32 0, i32 2), i8* getelementptr inbounds ([36 x i8]* @.str10, i32 0, i32 0), i8* %0, i8* %iftmp.11.0) nounwind ; <i32> [#uses=0]
+  br label %read_fatal
+
+bb71:                                             ; preds = %bb75
+  %5 = load i32* undef, align 4                   ; <i32> [#uses=1]
+  %6 = getelementptr inbounds %struct.gcov_info* %gi_ptr.1329, i32 0, i32 7, i32 undef, i32 2 ; <void (i64*, i32)**> [#uses=1]
+  %7 = load void (i64*, i32)** %6, align 4        ; <void (i64*, i32)*> [#uses=1]
+  %8 = call i32 @__gcov_read_unsigned() nounwind  ; <i32> [#uses=1]
+  %9 = call i32 @__gcov_read_unsigned() nounwind  ; <i32> [#uses=1]
+  %10 = icmp eq i32 %tmp386, %8                   ; <i1> [#uses=1]
+  br i1 %10, label %bb72, label %read_mismatch
+
+bb72:                                             ; preds = %bb71
+  %11 = icmp eq i32 undef, %9                     ; <i1> [#uses=1]
+  br i1 %11, label %bb73, label %read_mismatch
+
+bb73:                                             ; preds = %bb72
+  call void %7(i64* null, i32 %5) nounwind
+  unreachable
+
+bb74:                                             ; preds = %bb75
+  %12 = add i32 %13, 1                            ; <i32> [#uses=1]
+  br label %bb75
+
+bb75:                                             ; preds = %bb74, %bb65
+  %13 = phi i32 [ %12, %bb74 ], [ 0, %bb65 ]      ; <i32> [#uses=2]
+  %tmp386 = add i32 0, 27328512                   ; <i32> [#uses=1]
+  %14 = shl i32 1, %13                            ; <i32> [#uses=1]
+  %15 = load i32* undef, align 4                  ; <i32> [#uses=1]
+  %16 = and i32 %15, %14                          ; <i32> [#uses=1]
+  %17 = icmp eq i32 %16, 0                        ; <i1> [#uses=1]
+  br i1 %17, label %bb74, label %bb71
+
+bb80:                                             ; preds = %bb78.preheader
+  unreachable
+
+read_fatal:                                       ; preds = %read_mismatch, %bb3.i156, %bb58
+  br i1 undef, label %return, label %bb25
+
+rewrite:                                          ; preds = %bb60, %bb51
+  store i32 -1, i32* getelementptr inbounds (%struct.__gcov_var* @__gcov_var, i32 0, i32 6), align 4
+  br i1 undef, label %bb94, label %bb119.preheader
+
+bb94:                                             ; preds = %rewrite
+  unreachable
+
+bb119.preheader:                                  ; preds = %rewrite
+  br i1 undef, label %read_mismatch, label %bb98
+
+bb98:                                             ; preds = %bb119.preheader
+  br label %read_mismatch
+
+return:                                           ; preds = %read_fatal, %entry
+  ret void
+
+bb49.1:                                           ; preds = %bb48, %bb25
+  br i1 undef, label %bb49.2, label %bb48.2
+
+bb49.2:                                           ; preds = %bb48.2, %bb49.1
+  br i1 undef, label %bb49.3, label %bb48.3
+
+bb48.2:                                           ; preds = %bb49.1
+  br label %bb49.2
+
+bb49.3:                                           ; preds = %bb48.3, %bb49.2
+  br i1 undef, label %bb51, label %bb48.4
+
+bb48.3:                                           ; preds = %bb49.2
+  br label %bb49.3
+
+bb48.4:                                           ; preds = %bb49.3
+  br label %bb51
+}
+
+declare i32 @__gcov_read_unsigned() nounwind
+
+declare void @llvm.memset.i32(i8* nocapture, i8, i32, i32) nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/2009-11-25-ImpDefBug.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/2009-11-25-ImpDefBug.ll
new file mode 100644
index 0000000..9a22a6f
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/2009-11-25-ImpDefBug.ll
@@ -0,0 +1,56 @@
+; RUN: llc < %s -mtriple=powerpc-apple-darwin9.5 -mcpu=g5
+; rdar://7422268
+
+%struct..0EdgeT = type { i32, i32, float, float, i32, i32, i32, float, i32, i32 }
+
+define void @smooth_color_z_triangle(i32 %v0, i32 %v1, i32 %v2, i32 %pv) nounwind {
+entry:
+  br i1 undef, label %return, label %bb14
+
+bb14:                                             ; preds = %entry
+  br i1 undef, label %bb15, label %return
+
+bb15:                                             ; preds = %bb14
+  br i1 undef, label %bb16, label %bb17
+
+bb16:                                             ; preds = %bb15
+  br label %bb17
+
+bb17:                                             ; preds = %bb16, %bb15
+  %0 = fcmp olt float undef, 0.000000e+00         ; <i1> [#uses=2]
+  %eTop.eMaj = select i1 %0, %struct..0EdgeT* undef, %struct..0EdgeT* null ; <%struct..0EdgeT*> [#uses=1]
+  br label %bb69
+
+bb24:                                             ; preds = %bb69
+  br i1 undef, label %bb25, label %bb28
+
+bb25:                                             ; preds = %bb24
+  br label %bb33
+
+bb28:                                             ; preds = %bb24
+  br i1 undef, label %return, label %bb32
+
+bb32:                                             ; preds = %bb28
+  br i1 %0, label %bb38, label %bb33
+
+bb33:                                             ; preds = %bb32, %bb25
+  br i1 undef, label %bb34, label %bb38
+
+bb34:                                             ; preds = %bb33
+  br label %bb38
+
+bb38:                                             ; preds = %bb34, %bb33, %bb32
+  %eRight.08 = phi %struct..0EdgeT* [ %eTop.eMaj, %bb32 ], [ undef, %bb34 ], [ undef, %bb33 ] ; <%struct..0EdgeT*> [#uses=0]
+  %fdgOuter.0 = phi i32 [ %fdgOuter.1, %bb32 ], [ undef, %bb34 ], [ %fdgOuter.1, %bb33 ] ; <i32> [#uses=1]
+  %fz.3 = phi i32 [ %fz.2, %bb32 ], [ 2147483647, %bb34 ], [ %fz.2, %bb33 ] ; <i32> [#uses=1]
+  %1 = add i32 undef, 1                           ; <i32> [#uses=0]
+  br label %bb69
+
+bb69:                                             ; preds = %bb38, %bb17
+  %fdgOuter.1 = phi i32 [ undef, %bb17 ], [ %fdgOuter.0, %bb38 ] ; <i32> [#uses=2]
+  %fz.2 = phi i32 [ undef, %bb17 ], [ %fz.3, %bb38 ] ; <i32> [#uses=2]
+  br i1 undef, label %bb24, label %return
+
+return:                                           ; preds = %bb69, %bb28, %bb14, %entry
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/Frames-alloca.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/Frames-alloca.ll
index 25fc626..aed4fdb 100644
--- a/libclamav/c++/llvm/test/CodeGen/PowerPC/Frames-alloca.ll
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/Frames-alloca.ll
@@ -6,23 +6,23 @@
 ; RUN: llc < %s -march=ppc32 -mtriple=powerpc-apple-darwin8 -enable-ppc32-regscavenger | FileCheck %s -check-prefix=PPC32-RS
 ; RUN: llc < %s -march=ppc32 -mtriple=powerpc-apple-darwin8 -disable-fp-elim -enable-ppc32-regscavenger | FileCheck %s -check-prefix=PPC32-RS-NOFP
 
-; CHECK-PPC32: stw r31, 20(r1)
+; CHECK-PPC32: stw r31, -4(r1)
 ; CHECK-PPC32: lwz r1, 0(r1)
-; CHECK-PPC32: lwz r31, 20(r1)
-; CHECK-PPC32-NOFP: stw r31, 20(r1)
+; CHECK-PPC32: lwz r31, -4(r1)
+; CHECK-PPC32-NOFP: stw r31, -4(r1)
 ; CHECK-PPC32-NOFP: lwz r1, 0(r1)
-; CHECK-PPC32-NOFP: lwz r31, 20(r1)
+; CHECK-PPC32-NOFP: lwz r31, -4(r1)
 ; CHECK-PPC32-RS: stwu r1, -80(r1)
 ; CHECK-PPC32-RS-NOFP: stwu r1, -80(r1)
 
-; CHECK-PPC64: std r31, 40(r1)
-; CHECK-PPC64: stdu r1, -112(r1)
+; CHECK-PPC64: std r31, -8(r1)
+; CHECK-PPC64: stdu r1, -128(r1)
 ; CHECK-PPC64: ld r1, 0(r1)
-; CHECK-PPC64: ld r31, 40(r1)
-; CHECK-PPC64-NOFP: std r31, 40(r1)
-; CHECK-PPC64-NOFP: stdu r1, -112(r1)
+; CHECK-PPC64: ld r31, -8(r1)
+; CHECK-PPC64-NOFP: std r31, -8(r1)
+; CHECK-PPC64-NOFP: stdu r1, -128(r1)
 ; CHECK-PPC64-NOFP: ld r1, 0(r1)
-; CHECK-PPC64-NOFP: ld r31, 40(r1)
+; CHECK-PPC64-NOFP: ld r31, -8(r1)
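+; Note the updated convention being checked: r31 is now saved at a negative
+; offset below the incoming SP rather than at a positive offset into the
+; caller's frame, and the ppc64 frame grows from 112 to 128 bytes accordingly.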
 
 define i32* @f1(i32 %n) {
 	%tmp = alloca i32, i32 %n		; <i32*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/Frames-large.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/Frames-large.ll
index fda2e4f..302d3df 100644
--- a/libclamav/c++/llvm/test/CodeGen/PowerPC/Frames-large.ll
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/Frames-large.ll
@@ -22,13 +22,13 @@ define i32* @f1() nounwind {
 ; PPC32-NOFP: 	blr 
 
 ; PPC32-FP: _f1:
-; PPC32-FP:	stw r31, 20(r1)
+; PPC32-FP:	stw r31, -4(r1)
 ; PPC32-FP:	lis r0, -1
 ; PPC32-FP:	ori r0, r0, 32704
 ; PPC32-FP:	stwux r1, r1, r0
 ; ...
 ; PPC32-FP:	lwz r1, 0(r1)
-; PPC32-FP:	lwz r31, 20(r1)
+; PPC32-FP:	lwz r31, -4(r1)
 ; PPC32-FP:	blr 
 
 
@@ -42,11 +42,11 @@ define i32* @f1() nounwind {
 
 
 ; PPC64-FP: _f1:
-; PPC64-FP:	std r31, 40(r1)
+; PPC64-FP:	std r31, -8(r1)
 ; PPC64-FP:	lis r0, -1
-; PPC64-FP:	ori r0, r0, 32656
+; PPC64-FP:	ori r0, r0, 32640
 ; PPC64-FP:	stdux r1, r1, r0
 ; ...
 ; PPC64-FP:	ld r1, 0(r1)
-; PPC64-FP:	ld r31, 40(r1)
+; PPC64-FP:	ld r31, -8(r1)
 ; PPC64-FP:	blr 
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/Frames-small.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/Frames-small.ll
index 6875704..404fdd0 100644
--- a/libclamav/c++/llvm/test/CodeGen/PowerPC/Frames-small.ll
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/Frames-small.ll
@@ -1,26 +1,26 @@
 ; RUN: llc < %s -march=ppc32 -mtriple=powerpc-apple-darwin8 -o %t1
-; RUN  not grep {stw r31, 20(r1)} %t1
+; RUN: not grep {stw r31, -4(r1)} %t1
 ; RUN: grep {stwu r1, -16448(r1)} %t1
 ; RUN: grep {addi r1, r1, 16448} %t1
 ; RUN: llc < %s -march=ppc32 | \
-; RUN: not grep {lwz r31, 20(r1)}
+; RUN: not grep {lwz r31, -4(r1)}
 ; RUN: llc < %s -march=ppc32 -mtriple=powerpc-apple-darwin8 -disable-fp-elim \
 ; RUN:   -o %t2
-; RUN: grep {stw r31, 20(r1)} %t2
+; RUN: grep {stw r31, -4(r1)} %t2
 ; RUN: grep {stwu r1, -16448(r1)} %t2
 ; RUN: grep {addi r1, r1, 16448} %t2
-; RUN: grep {lwz r31, 20(r1)} %t2
+; RUN: grep {lwz r31, -4(r1)} %t2
 ; RUN: llc < %s -march=ppc64 -mtriple=powerpc-apple-darwin8 -o %t3
-; RUN: not grep {std r31, 40(r1)} %t3
+; RUN: not grep {std r31, -8(r1)} %t3
 ; RUN: grep {stdu r1, -16496(r1)} %t3
 ; RUN: grep {addi r1, r1, 16496} %t3
-; RUN: not grep {ld r31, 40(r1)} %t3
+; RUN: not grep {ld r31, -8(r1)} %t3
 ; RUN: llc < %s -march=ppc64 -mtriple=powerpc-apple-darwin8 -disable-fp-elim \
 ; RUN:   -o %t4
-; RUN: grep {std r31, 40(r1)} %t4
-; RUN: grep {stdu r1, -16496(r1)} %t4
-; RUN: grep {addi r1, r1, 16496} %t4
-; RUN: grep {ld r31, 40(r1)} %t4
+; RUN: grep {std r31, -8(r1)} %t4
+; RUN: grep {stdu r1, -16512(r1)} %t4
+; RUN: grep {addi r1, r1, 16512} %t4
+; RUN: grep {ld r31, -8(r1)} %t4
 
 define i32* @f1() {
         %tmp = alloca i32, i32 4095             ; <i32*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/bswap-load-store.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/bswap-load-store.ll
index 7eb3bbb..4f6bfc7 100644
--- a/libclamav/c++/llvm/test/CodeGen/PowerPC/bswap-load-store.ll
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/bswap-load-store.ll
@@ -1,11 +1,6 @@
-; RUN: llc < %s -march=ppc32 | \
-; RUN:   grep {stwbrx\\|lwbrx\\|sthbrx\\|lhbrx} | count 4
-; RUN: llc < %s -march=ppc32 | not grep rlwinm
-; RUN: llc < %s -march=ppc32 | not grep rlwimi
-; RUN: llc < %s -march=ppc64 | \
-; RUN:   grep {stwbrx\\|lwbrx\\|sthbrx\\|lhbrx} | count 4
-; RUN: llc < %s -march=ppc64 | not grep rlwinm
-; RUN: llc < %s -march=ppc64 | not grep rlwimi
+; RUN: llc < %s -march=ppc32 | FileCheck %s -check-prefix=X32
+; RUN: llc < %s -march=ppc64 | FileCheck %s -check-prefix=X64
+
 
 define void @STWBRX(i32 %i, i8* %ptr, i32 %off) {
         %tmp1 = getelementptr i8* %ptr, i32 %off                ; <i8*> [#uses=1]
@@ -43,3 +38,14 @@ declare i32 @llvm.bswap.i32(i32)
 
 declare i16 @llvm.bswap.i16(i16)
 
+
+; X32: stwbrx
+; X32: lwbrx
+; X32: sthbrx
+; X32: lhbrx
+
+; X64: stwbrx
+; X64: lwbrx
+; X64: sthbrx
+; X64: lhbrx
+
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/indirectbr.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/indirectbr.ll
new file mode 100644
index 0000000..1b302e4
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/indirectbr.ll
@@ -0,0 +1,55 @@
+; RUN: llc < %s -relocation-model=pic -march=ppc32 -mtriple=powerpc-apple-darwin | FileCheck %s -check-prefix=PIC
+; RUN: llc < %s -relocation-model=static -march=ppc32 -mtriple=powerpc-apple-darwin | FileCheck %s -check-prefix=STATIC
+
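+; A short orientation note: the indirectbr below should lower to mtctr/bctr
+; in both PIC and static modes, and storing blockaddress(@foo, %L5) should
+; materialize the LBA4__foo__L5 label via ha16/lo16 pairs (picbase-relative
+; in PIC mode), as the CHECK lines assert.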
+@nextaddr = global i8* null                       ; <i8**> [#uses=2]
+@C.0.2070 = private constant [5 x i8*] [i8* blockaddress(@foo, %L1), i8* blockaddress(@foo, %L2), i8* blockaddress(@foo, %L3), i8* blockaddress(@foo, %L4), i8* blockaddress(@foo, %L5)] ; <[5 x i8*]*> [#uses=1]
+
+define internal i32 @foo(i32 %i) nounwind {
+; PIC: foo:
+; STATIC: foo:
+entry:
+  %0 = load i8** @nextaddr, align 4               ; <i8*> [#uses=2]
+  %1 = icmp eq i8* %0, null                       ; <i1> [#uses=1]
+  br i1 %1, label %bb3, label %bb2
+
+bb2:                                              ; preds = %entry, %bb3
+  %gotovar.4.0 = phi i8* [ %gotovar.4.0.pre, %bb3 ], [ %0, %entry ] ; <i8*> [#uses=1]
+; PIC: mtctr
+; PIC-NEXT: bctr
+; STATIC: mtctr
+; STATIC-NEXT: bctr
+  indirectbr i8* %gotovar.4.0, [label %L5, label %L4, label %L3, label %L2, label %L1]
+
+bb3:                                              ; preds = %entry
+  %2 = getelementptr inbounds [5 x i8*]* @C.0.2070, i32 0, i32 %i ; <i8**> [#uses=1]
+  %gotovar.4.0.pre = load i8** %2, align 4        ; <i8*> [#uses=1]
+  br label %bb2
+
+L5:                                               ; preds = %bb2
+  br label %L4
+
+L4:                                               ; preds = %L5, %bb2
+  %res.0 = phi i32 [ 385, %L5 ], [ 35, %bb2 ]     ; <i32> [#uses=1]
+  br label %L3
+
+L3:                                               ; preds = %L4, %bb2
+  %res.1 = phi i32 [ %res.0, %L4 ], [ 5, %bb2 ]   ; <i32> [#uses=1]
+  br label %L2
+
+L2:                                               ; preds = %L3, %bb2
+  %res.2 = phi i32 [ %res.1, %L3 ], [ 1, %bb2 ]   ; <i32> [#uses=1]
+  %phitmp = mul i32 %res.2, 6                     ; <i32> [#uses=1]
+  br label %L1
+
+L1:                                               ; preds = %L2, %bb2
+  %res.3 = phi i32 [ %phitmp, %L2 ], [ 2, %bb2 ]  ; <i32> [#uses=1]
+; PIC: addis r4, r2, ha16(LBA4__foo__L5-"L1$pb")
+; PIC: li r5, lo16(LBA4__foo__L5-"L1$pb")
+; PIC: add r4, r4, r5
+; PIC: stw r4
+; STATIC: li r2, lo16(LBA4__foo__L5)
+; STATIC: addis r2, r2, ha16(LBA4__foo__L5)
+; STATIC: stw r2
+  store i8* blockaddress(@foo, %L5), i8** @nextaddr, align 4
+  ret i32 %res.3
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/ppc-prologue.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/ppc-prologue.ll
new file mode 100644
index 0000000..e49dcb8
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/ppc-prologue.ll
@@ -0,0 +1,28 @@
+; RUN: llc < %s -mtriple=powerpc-apple-darwin8 -disable-fp-elim | FileCheck %s
+
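+; The CHECK lines below spell out the expected Darwin PPC32 prologue with a
+; frame pointer: grab LR with mflr, spill r31 just below the incoming SP,
+; store LR at 8(r1), allocate the frame with stwu, then copy r1 into r31.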
+define i32 @_Z4funci(i32 %a) ssp {
+; CHECK:       mflr r0
+; CHECK-NEXT:  stw r31, -4(r1)
+; CHECK-NEXT:  stw r0, 8(r1)
+; CHECK-NEXT:  stwu r1, -80(r1)
+; CHECK-NEXT: Llabel1:
+; CHECK-NEXT:  mr r31, r1
+; CHECK-NEXT: Llabel2:
+entry:
+  %a_addr = alloca i32                            ; <i32*> [#uses=2]
+  %retval = alloca i32                            ; <i32*> [#uses=2]
+  %0 = alloca i32                                 ; <i32*> [#uses=2]
+  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
+  store i32 %a, i32* %a_addr
+  %1 = call i32 @_Z3barPi(i32* %a_addr)           ; <i32> [#uses=1]
+  store i32 %1, i32* %0, align 4
+  %2 = load i32* %0, align 4                      ; <i32> [#uses=1]
+  store i32 %2, i32* %retval, align 4
+  br label %return
+
+return:                                           ; preds = %entry
+  %retval1 = load i32* %retval                    ; <i32> [#uses=1]
+  ret i32 %retval1
+}
+
+declare i32 @_Z3barPi(i32*)
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/rlwimi-keep-rsh.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/rlwimi-keep-rsh.ll
new file mode 100644
index 0000000..7bce01c
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/rlwimi-keep-rsh.ll
@@ -0,0 +1,28 @@
+; RUN: llc < %s -march=ppc32 -mtriple=powerpc-apple-darwin | FileCheck %s
+; Formerly dropped the RHS of %tmp6 when constructing rlwimi.
+; rdar://7346117
+
+@foo = external global i32
+
+define void @xxx(i32 %a, i32 %b, i32 %c, i32 %d) nounwind optsize {
+; CHECK: _xxx:
+; CHECK: or
+; CHECK: and
+; CHECK: rlwimi
+entry:
+  %tmp0 = ashr i32 %d, 31
+  %tmp1 = and i32 %tmp0, 255
+  %tmp2 = xor i32 %tmp1, 255
+  %tmp3 = ashr i32 %b, 31
+  %tmp4 = ashr i32 %a, 4
+  %tmp5 = or i32 %tmp3, %tmp4
+  %tmp6 = and i32 %tmp2, %tmp5
+  %tmp7 = shl i32 %c, 8
+  %tmp8 = or i32 %tmp6, %tmp7
+  store i32 %tmp8, i32* @foo, align 4
+  br label %return
+
+return:
+  ret void
+; CHECK: blr
+}
\ No newline at end of file
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/vec_auto_constant.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/vec_auto_constant.ll
new file mode 100644
index 0000000..973f089
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/vec_auto_constant.ll
@@ -0,0 +1,36 @@
+; RUN: llc < %s -march=ppc32 -mtriple=powerpc-apple-darwin -mcpu=g5 | FileCheck %s
+; Formerly produced .long directives; rdar://7320806 (partial).
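+; That is, the <16 x i8> constant below must now be emitted as sixteen
+; individual .byte directives rather than as packed .long words.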
+; CHECK: .byte  22
+; CHECK: .byte  21
+; CHECK: .byte  20
+; CHECK: .byte  3
+; CHECK: .byte  25
+; CHECK: .byte  24
+; CHECK: .byte  23
+; CHECK: .byte  3
+; CHECK: .byte  28
+; CHECK: .byte  27
+; CHECK: .byte  26
+; CHECK: .byte  3
+; CHECK: .byte  31
+; CHECK: .byte  30
+; CHECK: .byte  29
+; CHECK: .byte  3
+@baz = common global <16 x i8> zeroinitializer    ; <<16 x i8>*> [#uses=1]
+
+define void @foo(<16 x i8> %x) nounwind ssp {
+entry:
+  %x_addr = alloca <16 x i8>                      ; <<16 x i8>*> [#uses=2]
+  %temp = alloca <16 x i8>                        ; <<16 x i8>*> [#uses=2]
+  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
+  store <16 x i8> %x, <16 x i8>* %x_addr
+  store <16 x i8> <i8 22, i8 21, i8 20, i8 3, i8 25, i8 24, i8 23, i8 3, i8 28, i8 27, i8 26, i8 3, i8 31, i8 30, i8 29, i8 3>, <16 x i8>* %temp, align 16
+  %0 = load <16 x i8>* %x_addr, align 16          ; <<16 x i8>> [#uses=1]
+  %1 = load <16 x i8>* %temp, align 16            ; <<16 x i8>> [#uses=1]
+  %tmp = add <16 x i8> %0, %1                     ; <<16 x i8>> [#uses=1]
+  store <16 x i8> %tmp, <16 x i8>* @baz, align 16
+  br label %return
+
+return:                                           ; preds = %entry
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/vec_buildvector_loadstore.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/vec_buildvector_loadstore.ll
new file mode 100644
index 0000000..015c086
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/vec_buildvector_loadstore.ll
@@ -0,0 +1,37 @@
+; RUN: llc < %s -march=ppc32 -mtriple=powerpc-apple-darwin -mattr=+altivec  | FileCheck %s
+; Formerly this did byte loads and word stores.
+@a = external global <16 x i8>
+@b = external global <16 x i8>
+@c = external global <16 x i8>
+
+define void @foo() nounwind ssp {
+; CHECK: _foo:
+; CHECK-NOT: stw
+entry:
+  %tmp0 = load <16 x i8>* @a, align 16
+  %tmp180.i = extractelement <16 x i8> %tmp0, i32 0 ; <i8> [#uses=1]
+  %tmp181.i = insertelement <16 x i8> <i8 0, i8 0, i8 undef, i8 undef, i8 undef, i8 undef, i8 undef, i8 undef, i8 undef, i8 undef, i8 undef, i8 undef, i8 undef, i8 undef, i8 undef, i8 undef>, i8 %tmp180.i, i32 2 ; <<16 x i8>> [#uses=1]
+  %tmp182.i = extractelement <16 x i8> %tmp0, i32 1 ; <i8> [#uses=1]
+  %tmp183.i = insertelement <16 x i8> %tmp181.i, i8 %tmp182.i, i32 3 ; <<16 x i8>> [#uses=1]
+  %tmp184.i = insertelement <16 x i8> %tmp183.i, i8 0, i32 4 ; <<16 x i8>> [#uses=1]
+  %tmp185.i = insertelement <16 x i8> %tmp184.i, i8 0, i32 5 ; <<16 x i8>> [#uses=1]
+  %tmp186.i = extractelement <16 x i8> %tmp0, i32 4 ; <i8> [#uses=1]
+  %tmp187.i = insertelement <16 x i8> %tmp185.i, i8 %tmp186.i, i32 6 ; <<16 x i8>> [#uses=1]
+  %tmp188.i = extractelement <16 x i8> %tmp0, i32 5 ; <i8> [#uses=1]
+  %tmp189.i = insertelement <16 x i8> %tmp187.i, i8 %tmp188.i, i32 7 ; <<16 x i8>> [#uses=1]
+  %tmp190.i = insertelement <16 x i8> %tmp189.i, i8 0, i32 8 ; <<16 x i8>> [#uses=1]
+  %tmp191.i = insertelement <16 x i8> %tmp190.i, i8 0, i32 9 ; <<16 x i8>> [#uses=1]
+  %tmp192.i = extractelement <16 x i8> %tmp0, i32 8 ; <i8> [#uses=1]
+  %tmp193.i = insertelement <16 x i8> %tmp191.i, i8 %tmp192.i, i32 10 ; <<16 x i8>> [#uses=1]
+  %tmp194.i = extractelement <16 x i8> %tmp0, i32 9 ; <i8> [#uses=1]
+  %tmp195.i = insertelement <16 x i8> %tmp193.i, i8 %tmp194.i, i32 11 ; <<16 x i8>> [#uses=1]
+  %tmp196.i = insertelement <16 x i8> %tmp195.i, i8 0, i32 12 ; <<16 x i8>> [#uses=1]
+  %tmp197.i = insertelement <16 x i8> %tmp196.i, i8 0, i32 13 ; <<16 x i8>> [#uses=1]
+  %tmp201 = shufflevector <16 x i8> %tmp197.i, <16 x i8> %tmp0, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 28, i32 29> ; <<16 x i8>> [#uses=1]
+  store <16 x i8> %tmp201, <16 x i8>* @c, align 16
+  br label %return
+
+return:                                           ; preds = %entry
+  ret void
+; CHECK: blr
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/vec_splat_constant.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/vec_splat_constant.ll
new file mode 100644
index 0000000..b227794
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/vec_splat_constant.ll
@@ -0,0 +1,24 @@
+; RUN: llc < %s -march=ppc32 -mtriple=powerpc-apple-darwin -mcpu=g5 | FileCheck %s
+; Formerly incorrectly inserted vsldoi (endian confusion)
+
+@baz = common global <16 x i8> zeroinitializer    ; <<16 x i8>*> [#uses=1]
+
+define void @foo(<16 x i8> %x) nounwind ssp {
+entry:
+; CHECK: _foo:
+; CHECK-NOT: vsldoi
+  %x_addr = alloca <16 x i8>                      ; <<16 x i8>*> [#uses=2]
+  %temp = alloca <16 x i8>                        ; <<16 x i8>*> [#uses=2]
+  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
+  store <16 x i8> %x, <16 x i8>* %x_addr
+  store <16 x i8> <i8 0, i8 0, i8 0, i8 14, i8 0, i8 0, i8 0, i8 14, i8 0, i8 0, i8 0, i8 14, i8 0, i8 0, i8 0, i8 14>, <16 x i8>* %temp, align 16
+  %0 = load <16 x i8>* %x_addr, align 16          ; <<16 x i8>> [#uses=1]
+  %1 = load <16 x i8>* %temp, align 16            ; <<16 x i8>> [#uses=1]
+  %tmp = add <16 x i8> %0, %1                     ; <<16 x i8>> [#uses=1]
+  store <16 x i8> %tmp, <16 x i8>* @baz, align 16
+  br label %return
+
+return:                                           ; preds = %entry
+  ret void
+; CHECK: blr
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb/2009-08-20-ISelBug.ll b/libclamav/c++/llvm/test/CodeGen/Thumb/2009-08-20-ISelBug.ll
index 1627f61..c31b65b 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb/2009-08-20-ISelBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb/2009-08-20-ISelBug.ll
@@ -11,7 +11,7 @@
 
 define arm_apcscc i32 @t(%struct.asl_file_t* %s, i64 %off, i64* %out) nounwind optsize {
 ; CHECK: t:
-; CHECK: adds r4, #8
+; CHECK: adds r3, #8
 entry:
   %val = alloca i64, align 4                      ; <i64*> [#uses=3]
   %0 = icmp eq %struct.asl_file_t* %s, null       ; <i1> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb/machine-licm.ll b/libclamav/c++/llvm/test/CodeGen/Thumb/machine-licm.ll
new file mode 100644
index 0000000..dae1412
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb/machine-licm.ll
@@ -0,0 +1,41 @@
+; RUN: llc < %s -mtriple=thumb-apple-darwin -relocation-model=pic -disable-fp-elim | FileCheck %s
+; rdar://7353541
+; rdar://7354376
+
+; The generated code is nowhere near ideal. It does not recognize that the
+; two constantpool entries being loaded can be merged into one.
+
+@GV = external global i32                         ; <i32*> [#uses=2]
+
+define arm_apcscc void @t(i32* nocapture %vals, i32 %c) nounwind {
+entry:
+; CHECK: t:
+  %0 = icmp eq i32 %c, 0                          ; <i1> [#uses=1]
+  br i1 %0, label %return, label %bb.nph
+
+bb.nph:                                           ; preds = %entry
+; CHECK: BB#1
+; CHECK: ldr.n r2, LCPI1_0
+; CHECK: add r2, pc
+; CHECK: ldr r{{[0-9]+}}, [r2]
+; CHECK: LBB1_2
+; CHECK: LCPI1_0:
+; CHECK-NOT: LCPI1_1:
+; CHECK: .section
+  %.pre = load i32* @GV, align 4                  ; <i32> [#uses=1]
+  br label %bb
+
+bb:                                               ; preds = %bb, %bb.nph
+  %1 = phi i32 [ %.pre, %bb.nph ], [ %3, %bb ]    ; <i32> [#uses=1]
+  %i.03 = phi i32 [ 0, %bb.nph ], [ %4, %bb ]     ; <i32> [#uses=2]
+  %scevgep = getelementptr i32* %vals, i32 %i.03  ; <i32*> [#uses=1]
+  %2 = load i32* %scevgep, align 4                ; <i32> [#uses=1]
+  %3 = add nsw i32 %1, %2                         ; <i32> [#uses=2]
+  store i32 %3, i32* @GV, align 4
+  %4 = add i32 %i.03, 1                           ; <i32> [#uses=2]
+  %exitcond = icmp eq i32 %4, %c                  ; <i1> [#uses=1]
+  br i1 %exitcond, label %return, label %bb
+
+return:                                           ; preds = %bb, %entry
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb/pop.ll b/libclamav/c++/llvm/test/CodeGen/Thumb/pop.ll
index c5e86ad..0e1b2e5 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb/pop.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb/pop.ll
@@ -4,7 +4,7 @@
 define arm_apcscc void @t(i8* %a, ...) nounwind {
 ; CHECK:      t:
 ; CHECK:      pop {r3}
-; CHECK-NEXT: add sp, #3 * 4
+; CHECK-NEXT: add sp, #12
 ; CHECK-NEXT: bx r3
 entry:
   %a.addr = alloca i8*
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-07-21-ISelBug.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-07-21-ISelBug.ll
index ec649c3..ef076a4 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-07-21-ISelBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-07-21-ISelBug.ll
@@ -6,7 +6,7 @@
 define arm_apcscc i32 @t(i32, ...) nounwind {
 entry:
 ; CHECK: t:
-; CHECK: add r7, sp, #3 * 4
+; CHECK: add r7, sp, #12
 	%1 = load i8** undef, align 4		; <i8*> [#uses=3]
 	%2 = getelementptr i8* %1, i32 4		; <i8*> [#uses=1]
 	%3 = getelementptr i8* %1, i32 8		; <i8*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-08-04-SubregLoweringBug.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-08-04-SubregLoweringBug.ll
index 3cbb212..7647474 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-08-04-SubregLoweringBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-08-04-SubregLoweringBug.ll
@@ -1,5 +1,5 @@
 ; RUN: llc < %s -mtriple=thumbv7-apple-darwin9 -mattr=+neon -arm-use-neon-fp 
-; RUN: llc < %s -mtriple=thumbv7-apple-darwin9 -mattr=+neon -arm-use-neon-fp | grep fcpys | count 1
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin9 -mattr=+neon -arm-use-neon-fp | not grep fcpys
 ; rdar://7117307
 
 	%struct.Hosp = type { i32, i32, i32, %struct.List, %struct.List, %struct.List, %struct.List }
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-08-06-SpDecBug.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-08-06-SpDecBug.ll
index 03f9fac..4077535 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-08-06-SpDecBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-08-06-SpDecBug.ll
@@ -6,7 +6,7 @@ define hidden arm_aapcscc i32 @__gcov_execlp(i8* %path, i8* %arg, ...) nounwind
 entry:
 ; CHECK: __gcov_execlp:
 ; CHECK: mov sp, r7
-; CHECK: sub sp, #1 * 4
+; CHECK: sub sp, #4
 	call arm_aapcscc  void @__gcov_flush() nounwind
 	br i1 undef, label %bb5, label %bb
 
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-09-28-ITBlockBug.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-09-28-ITBlockBug.ll
index e84e867..8d03b52 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-09-28-ITBlockBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-09-28-ITBlockBug.ll
@@ -6,10 +6,8 @@
 
 define arm_apcscc void @t() nounwind {
 ; CHECK: t:
-; CHECK:      ittt eq
-; CHECK-NEXT: addeq
-; CHECK-NEXT: movweq
-; CHECK-NEXT: movteq
+; CHECK:      it eq
+; CHECK-NEXT: cmpeq
 entry:
   %pix_a.i294 = alloca [4 x %struct.pix_pos], align 4 ; <[4 x %struct.pix_pos]*> [#uses=2]
   br i1 undef, label %land.rhs, label %lor.end
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-10-15-ITBlockBranch.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-10-15-ITBlockBranch.ll
new file mode 100644
index 0000000..b4b6ed9
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-10-15-ITBlockBranch.ll
@@ -0,0 +1,44 @@
+; RUN: llc < %s -mtriple=thumbv7-eabi -mcpu=cortex-a8 -float-abi=hard | FileCheck %s
+
+; A fix for PR5204 will require this check to be changed.
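+; Concretely, the early return should stay predicated inside an "it ne"
+; block (ldmfdne.w) and the subtract-and-return path inside an "itt eq"
+; block, rather than being split out into separate branches.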
+
+%"struct.__gnu_cxx::__normal_iterator<char*,std::basic_string<char, std::char_traits<char>, std::allocator<char> > >" = type { i8* }
+%"struct.__gnu_cxx::new_allocator<char>" = type <{ i8 }>
+%"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >" = type { %"struct.__gnu_cxx::__normal_iterator<char*,std::basic_string<char, std::char_traits<char>, std::allocator<char> > >" }
+%"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >::_Rep" = type { %"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >::_Rep_base" }
+%"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >::_Rep_base" = type { i32, i32, i32 }
+
+
+define weak arm_aapcs_vfpcc i32 @_ZNKSs7compareERKSs(%"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >"* %this, %"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >"* %__str) {
+; CHECK: _ZNKSs7compareERKSs:
+; CHECK:	it ne
+; CHECK-NEXT: ldmfdne.w
+; CHECK-NEXT: itt eq
+; CHECK-NEXT: subeq.w
+; CHECK-NEXT: ldmfdeq.w
+entry:
+  %0 = tail call arm_aapcs_vfpcc  i32 @_ZNKSs4sizeEv(%"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >"* %this) ; <i32> [#uses=3]
+  %1 = tail call arm_aapcs_vfpcc  i32 @_ZNKSs4sizeEv(%"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >"* %__str) ; <i32> [#uses=3]
+  %2 = icmp ult i32 %1, %0                        ; <i1> [#uses=1]
+  %3 = select i1 %2, i32 %1, i32 %0               ; <i32> [#uses=1]
+  %4 = tail call arm_aapcs_vfpcc  i8* @_ZNKSs7_M_dataEv(%"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >"* %this) ; <i8*> [#uses=1]
+  %5 = tail call arm_aapcs_vfpcc  i8* @_ZNKSs4dataEv(%"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >"* %__str) ; <i8*> [#uses=1]
+  %6 = tail call arm_aapcs_vfpcc  i32 @memcmp(i8* %4, i8* %5, i32 %3) nounwind readonly ; <i32> [#uses=2]
+  %7 = icmp eq i32 %6, 0                          ; <i1> [#uses=1]
+  br i1 %7, label %bb, label %bb1
+
+bb:                                               ; preds = %entry
+  %8 = sub i32 %0, %1                             ; <i32> [#uses=1]
+  ret i32 %8
+
+bb1:                                              ; preds = %entry
+  ret i32 %6
+}
+
+declare arm_aapcs_vfpcc i32 @memcmp(i8* nocapture, i8* nocapture, i32) nounwind readonly
+
+declare arm_aapcs_vfpcc i32 @_ZNKSs4sizeEv(%"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >"* %this)
+
+declare arm_aapcs_vfpcc i8* @_ZNKSs7_M_dataEv(%"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >"* %this)
+
+declare arm_aapcs_vfpcc i8* @_ZNKSs4dataEv(%"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >"* %this)
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-11-01-CopyReg2RegBug.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-11-01-CopyReg2RegBug.ll
new file mode 100644
index 0000000..216f3e3
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-11-01-CopyReg2RegBug.ll
@@ -0,0 +1,29 @@
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin -relocation-model=pic -disable-fp-elim -mcpu=cortex-a8
+
+define arm_apcscc void @get_initial_mb16x16_cost() nounwind {
+entry:
+  br i1 undef, label %bb4, label %bb1
+
+bb1:                                              ; preds = %entry
+  br label %bb7
+
+bb4:                                              ; preds = %entry
+  br i1 undef, label %bb7.thread, label %bb5
+
+bb5:                                              ; preds = %bb4
+  br label %bb7
+
+bb7.thread:                                       ; preds = %bb4
+  br label %bb8
+
+bb7:                                              ; preds = %bb5, %bb1
+  br i1 undef, label %bb8, label %bb10
+
+bb8:                                              ; preds = %bb7, %bb7.thread
+  %0 = phi double [ 5.120000e+02, %bb7.thread ], [ undef, %bb7 ] ; <double> [#uses=1]
+  %1 = fdiv double %0, undef                      ; <double> [#uses=0]
+  unreachable
+
+bb10:                                             ; preds = %bb7
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-11-11-ScavengerAssert.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-11-11-ScavengerAssert.ll
new file mode 100644
index 0000000..9f2e399
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-11-11-ScavengerAssert.ll
@@ -0,0 +1,85 @@
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin10
+
+%struct.OP = type { %struct.OP*, %struct.OP*, %struct.OP* ()*, i32, i16, i16, i8, i8 }
+%struct.SV = type { i8*, i32, i32 }
+
+declare arm_apcscc void @Perl_mg_set(%struct.SV*) nounwind
+
+define arm_apcscc %struct.OP* @Perl_pp_complement() nounwind {
+entry:
+  %0 = load %struct.SV** null, align 4            ; <%struct.SV*> [#uses=2]
+  br i1 undef, label %bb21, label %bb5
+
+bb5:                                              ; preds = %entry
+  br i1 undef, label %bb13, label %bb6
+
+bb6:                                              ; preds = %bb5
+  br i1 undef, label %bb8, label %bb7
+
+bb7:                                              ; preds = %bb6
+  %1 = getelementptr inbounds %struct.SV* %0, i32 0, i32 0 ; <i8**> [#uses=1]
+  %2 = load i8** %1, align 4                      ; <i8*> [#uses=1]
+  %3 = getelementptr inbounds i8* %2, i32 12      ; <i8*> [#uses=1]
+  %4 = bitcast i8* %3 to i32*                     ; <i32*> [#uses=1]
+  %5 = load i32* %4, align 4                      ; <i32> [#uses=1]
+  %storemerge5 = xor i32 %5, -1                   ; <i32> [#uses=1]
+  call arm_apcscc  void @Perl_sv_setiv(%struct.SV* undef, i32 %storemerge5) nounwind
+  %6 = getelementptr inbounds %struct.SV* undef, i32 0, i32 2 ; <i32*> [#uses=1]
+  %7 = load i32* %6, align 4                      ; <i32> [#uses=1]
+  %8 = and i32 %7, 16384                          ; <i32> [#uses=1]
+  %9 = icmp eq i32 %8, 0                          ; <i1> [#uses=1]
+  br i1 %9, label %bb12, label %bb11
+
+bb8:                                              ; preds = %bb6
+  unreachable
+
+bb11:                                             ; preds = %bb7
+  call arm_apcscc  void @Perl_mg_set(%struct.SV* undef) nounwind
+  br label %bb12
+
+bb12:                                             ; preds = %bb11, %bb7
+  store %struct.SV* undef, %struct.SV** null, align 4
+  br label %bb44
+
+bb13:                                             ; preds = %bb5
+  %10 = call arm_apcscc  i32 @Perl_sv_2uv(%struct.SV* %0) nounwind ; <i32> [#uses=0]
+  br i1 undef, label %bb.i, label %bb1.i
+
+bb.i:                                             ; preds = %bb13
+  call arm_apcscc  void @Perl_sv_setiv(%struct.SV* undef, i32 undef) nounwind
+  br label %Perl_sv_setuv.exit
+
+bb1.i:                                            ; preds = %bb13
+  br label %Perl_sv_setuv.exit
+
+Perl_sv_setuv.exit:                               ; preds = %bb1.i, %bb.i
+  %11 = getelementptr inbounds %struct.SV* undef, i32 0, i32 2 ; <i32*> [#uses=1]
+  %12 = load i32* %11, align 4                    ; <i32> [#uses=1]
+  %13 = and i32 %12, 16384                        ; <i32> [#uses=1]
+  %14 = icmp eq i32 %13, 0                        ; <i1> [#uses=1]
+  br i1 %14, label %bb20, label %bb19
+
+bb19:                                             ; preds = %Perl_sv_setuv.exit
+  call arm_apcscc  void @Perl_mg_set(%struct.SV* undef) nounwind
+  br label %bb20
+
+bb20:                                             ; preds = %bb19, %Perl_sv_setuv.exit
+  store %struct.SV* undef, %struct.SV** null, align 4
+  br label %bb44
+
+bb21:                                             ; preds = %entry
+  br i1 undef, label %bb23, label %bb22
+
+bb22:                                             ; preds = %bb21
+  unreachable
+
+bb23:                                             ; preds = %bb21
+  unreachable
+
+bb44:                                             ; preds = %bb20, %bb12
+  ret %struct.OP* undef
+}
+
+declare arm_apcscc void @Perl_sv_setiv(%struct.SV*, i32) nounwind
+
+declare arm_apcscc i32 @Perl_sv_2uv(%struct.SV*) nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-11-13-STRDBug.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-11-13-STRDBug.ll
new file mode 100644
index 0000000..8a67bb1
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-11-13-STRDBug.ll
@@ -0,0 +1,20 @@
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin10
+; rdar://7394794
+
+define arm_apcscc void @lshift_double(i64 %l1, i64 %h1, i64 %count, i32 %prec, i64* nocapture %lv, i64* nocapture %hv, i32 %arith) nounwind {
+entry:
+  %..i = select i1 false, i64 0, i64 0            ; <i64> [#uses=1]
+  br i1 undef, label %bb11.i, label %bb6.i
+
+bb6.i:                                            ; preds = %entry
+  %0 = lshr i64 %h1, 0                            ; <i64> [#uses=1]
+  store i64 %0, i64* %hv, align 4
+  %1 = lshr i64 %l1, 0                            ; <i64> [#uses=1]
+  %2 = or i64 0, %1                               ; <i64> [#uses=1]
+  store i64 %2, i64* %lv, align 4
+  br label %bb11.i
+
+bb11.i:                                           ; preds = %bb6.i, %entry
+  store i64 %..i, i64* %lv, align 4
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/cross-rc-coalescing-1.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/cross-rc-coalescing-1.ll
new file mode 100644
index 0000000..572f1e8
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/cross-rc-coalescing-1.ll
@@ -0,0 +1,52 @@
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin9 -mcpu=cortex-a8
+
+%struct.FILE = type { i8*, i32, i32, i16, i16, %struct.__sbuf, i32, i8*, i32 (i8*)*, i32 (i8*, i8*, i32)*, i64 (i8*, i64, i32)*, i32 (i8*, i8*, i32)*, %struct.__sbuf, %struct.__sFILEX*, i32, [3 x i8], [1 x i8], %struct.__sbuf, i32, i64 }
+%struct.__sFILEX = type opaque
+%struct.__sbuf = type { i8*, i32 }
+
+declare arm_apcscc i32 @fgetc(%struct.FILE* nocapture) nounwind
+
+define arm_apcscc i32 @main(i32 %argc, i8** nocapture %argv) nounwind {
+entry:
+  br i1 undef, label %bb, label %bb1
+
+bb:                                               ; preds = %entry
+  unreachable
+
+bb1:                                              ; preds = %entry
+  br i1 undef, label %bb.i1, label %bb1.i2
+
+bb.i1:                                            ; preds = %bb1
+  unreachable
+
+bb1.i2:                                           ; preds = %bb1
+  %0 = call arm_apcscc  i32 @fgetc(%struct.FILE* undef) nounwind ; <i32> [#uses=0]
+  br i1 undef, label %bb2.i3, label %bb3.i4
+
+bb2.i3:                                           ; preds = %bb1.i2
+  br i1 undef, label %bb4.i, label %bb3.i4
+
+bb3.i4:                                           ; preds = %bb2.i3, %bb1.i2
+  unreachable
+
+bb4.i:                                            ; preds = %bb2.i3
+  br i1 undef, label %bb5.i, label %get_image.exit
+
+bb5.i:                                            ; preds = %bb4.i
+  unreachable
+
+get_image.exit:                                   ; preds = %bb4.i
+  br i1 undef, label %bb28, label %bb27
+
+bb27:                                             ; preds = %get_image.exit
+  br label %bb.i
+
+bb.i:                                             ; preds = %bb.i, %bb27
+  %1 = fptrunc double undef to float              ; <float> [#uses=1]
+  %2 = fptoui float %1 to i8                      ; <i8> [#uses=1]
+  store i8 %2, i8* undef, align 1
+  br label %bb.i
+
+bb28:                                             ; preds = %get_image.exit
+  unreachable
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/cross-rc-coalescing-2.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/cross-rc-coalescing-2.ll
new file mode 100644
index 0000000..8f6449e
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/cross-rc-coalescing-2.ll
@@ -0,0 +1,67 @@
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin9 -mcpu=cortex-a8 | grep vmov.f32 | count 7
+
+define arm_apcscc void @fht(float* nocapture %fz, i16 signext %n) nounwind {
+entry:
+  br label %bb5
+
+bb5:                                              ; preds = %bb5, %entry
+  br i1 undef, label %bb5, label %bb.nph
+
+bb.nph:                                           ; preds = %bb5
+  br label %bb7
+
+bb7:                                              ; preds = %bb9, %bb.nph
+  %s1.02 = phi float [ undef, %bb.nph ], [ %35, %bb9 ] ; <float> [#uses=3]
+  %tmp79 = add i32 undef, undef                   ; <i32> [#uses=1]
+  %tmp53 = sub i32 undef, undef                   ; <i32> [#uses=1]
+  %0 = fadd float 0.000000e+00, 1.000000e+00      ; <float> [#uses=2]
+  %1 = fmul float 0.000000e+00, 0.000000e+00      ; <float> [#uses=2]
+  br label %bb8
+
+bb8:                                              ; preds = %bb8, %bb7
+  %tmp54 = add i32 0, %tmp53                      ; <i32> [#uses=0]
+  %fi.1 = getelementptr float* %fz, i32 undef     ; <float*> [#uses=2]
+  %tmp80 = add i32 0, %tmp79                      ; <i32> [#uses=1]
+  %scevgep81 = getelementptr float* %fz, i32 %tmp80 ; <float*> [#uses=1]
+  %2 = load float* undef, align 4                 ; <float> [#uses=1]
+  %3 = fmul float %2, %1                          ; <float> [#uses=1]
+  %4 = load float* null, align 4                  ; <float> [#uses=2]
+  %5 = fmul float %4, %0                          ; <float> [#uses=1]
+  %6 = fsub float %3, %5                          ; <float> [#uses=1]
+  %7 = fmul float %4, %1                          ; <float> [#uses=1]
+  %8 = fadd float undef, %7                       ; <float> [#uses=2]
+  %9 = load float* %fi.1, align 4                 ; <float> [#uses=2]
+  %10 = fsub float %9, %8                         ; <float> [#uses=1]
+  %11 = fadd float %9, %8                         ; <float> [#uses=1]
+  %12 = fsub float 0.000000e+00, %6               ; <float> [#uses=1]
+  %13 = fsub float 0.000000e+00, undef            ; <float> [#uses=2]
+  %14 = fmul float undef, %0                      ; <float> [#uses=1]
+  %15 = fadd float %14, undef                     ; <float> [#uses=2]
+  %16 = load float* %scevgep81, align 4           ; <float> [#uses=2]
+  %17 = fsub float %16, %15                       ; <float> [#uses=1]
+  %18 = fadd float %16, %15                       ; <float> [#uses=2]
+  %19 = load float* undef, align 4                ; <float> [#uses=2]
+  %20 = fsub float %19, %13                       ; <float> [#uses=2]
+  %21 = fadd float %19, %13                       ; <float> [#uses=1]
+  %22 = fmul float %s1.02, %18                    ; <float> [#uses=1]
+  %23 = fmul float 0.000000e+00, %20              ; <float> [#uses=1]
+  %24 = fsub float %22, %23                       ; <float> [#uses=1]
+  %25 = fmul float 0.000000e+00, %18              ; <float> [#uses=1]
+  %26 = fmul float %s1.02, %20                    ; <float> [#uses=1]
+  %27 = fadd float %25, %26                       ; <float> [#uses=1]
+  %28 = fadd float %11, %27                       ; <float> [#uses=1]
+  store float %28, float* %fi.1, align 4
+  %29 = fadd float %12, %24                       ; <float> [#uses=1]
+  store float %29, float* null, align 4
+  %30 = fmul float 0.000000e+00, %21              ; <float> [#uses=1]
+  %31 = fmul float %s1.02, %17                    ; <float> [#uses=1]
+  %32 = fsub float %30, %31                       ; <float> [#uses=1]
+  %33 = fsub float %10, %32                       ; <float> [#uses=1]
+  store float %33, float* undef, align 4
+  %34 = icmp slt i32 undef, undef                 ; <i1> [#uses=1]
+  br i1 %34, label %bb8, label %bb9
+
+bb9:                                              ; preds = %bb8
+  %35 = fadd float 0.000000e+00, undef            ; <float> [#uses=1]
+  br label %bb7
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/ifcvt-neon.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/ifcvt-neon.ll
new file mode 100644
index 0000000..c667909
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/ifcvt-neon.ll
@@ -0,0 +1,29 @@
+; RUN: llc < %s -march=thumb -mcpu=cortex-a8 | FileCheck %s
+; rdar://7368193
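+; The fsub/fadd diamond below should be if-converted into a single "ite lt"
+; block with predicated NEON arithmetic (vsublt.f32 / vaddge.f32), as the
+; CHECK lines in %bb verify.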
+
+@a = common global float 0.000000e+00             ; <float*> [#uses=2]
+@b = common global float 0.000000e+00             ; <float*> [#uses=1]
+
+define arm_apcscc float @t(i32 %c) nounwind {
+entry:
+  %0 = icmp sgt i32 %c, 1                         ; <i1> [#uses=1]
+  %1 = load float* @a, align 4                    ; <float> [#uses=2]
+  %2 = load float* @b, align 4                    ; <float> [#uses=2]
+  br i1 %0, label %bb, label %bb1
+
+bb:                                               ; preds = %entry
+; CHECK:      ite lt
+; CHECK:      vsublt.f32
+; CHECK-NEXT: vaddge.f32
+  %3 = fadd float %1, %2                          ; <float> [#uses=1]
+  br label %bb2
+
+bb1:                                              ; preds = %entry
+  %4 = fsub float %1, %2                          ; <float> [#uses=1]
+  br label %bb2
+
+bb2:                                              ; preds = %bb1, %bb
+  %storemerge = phi float [ %4, %bb1 ], [ %3, %bb ] ; <float> [#uses=2]
+  store float %storemerge, float* @a
+  ret float %storemerge
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/large-stack.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/large-stack.ll
index 865b17b..6f59961 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/large-stack.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/large-stack.ll
@@ -2,7 +2,7 @@
 
 define void @test1() {
 ; CHECK: test1:
-; CHECK: sub sp, #64 * 4
+; CHECK: sub sp, #256
     %tmp = alloca [ 64 x i32 ] , align 4
     ret void
 }
@@ -10,7 +10,7 @@ define void @test1() {
 define void @test2() {
 ; CHECK: test2:
 ; CHECK: sub.w sp, sp, #4160
-; CHECK: sub sp, #2 * 4
+; CHECK: sub sp, #8
     %tmp = alloca [ 4168 x i8 ] , align 4
     ret void
 }
@@ -18,7 +18,7 @@ define void @test2() {
 define i32 @test3() {
 ; CHECK: test3:
 ; CHECK: sub.w sp, sp, #805306368
-; CHECK: sub sp, #4 * 4
+; CHECK: sub sp, #24
     %retval = alloca i32, align 4
     %tmp = alloca i32, align 4
     %a = alloca [805306369 x i8], align 16
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/ldr-str-imm12.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/ldr-str-imm12.ll
new file mode 100644
index 0000000..47d85b1
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/ldr-str-imm12.ll
@@ -0,0 +1,79 @@
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin -mcpu=cortex-a8 -relocation-model=pic -disable-fp-elim | FileCheck %s
+; rdar://7352504
+; Make sure we use "str r9, [sp, #+28]" instead of "sub.w r4, r7, #256" followed by "str r9, [r4, #-32]".
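+; A Thumb-2 str with a positive 12-bit immediate can reach the slot directly
+; off the base register, whereas a negative offset only gets an 8-bit range,
+; so the scratch register (r4 above) should not be needed at all.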
+
+%0 = type { i16, i8, i8 }
+%1 = type { [2 x i32], [2 x i32] }
+%2 = type { %union.rec* }
+%struct.FILE_POS = type { i8, i8, i16, i32 }
+%struct.GAP = type { i8, i8, i16 }
+%struct.LIST = type { %union.rec*, %union.rec* }
+%struct.STYLE = type { %union.anon, %union.anon, i16, i16, i32 }
+%struct.head_type = type { [2 x %struct.LIST], %union.FIRST_UNION, %union.SECOND_UNION, %union.THIRD_UNION, %union.FOURTH_UNION, %union.rec*, %2, %union.rec*, %union.rec*, %union.rec*, %union.rec*, %union.rec*, %union.rec*, %union.rec*, %union.rec*, i32 }
+%union.FIRST_UNION = type { %struct.FILE_POS }
+%union.FOURTH_UNION = type { %struct.STYLE }
+%union.SECOND_UNION = type { %0 }
+%union.THIRD_UNION = type { %1 }
+%union.anon = type { %struct.GAP }
+%union.rec = type { %struct.head_type }
+
+@zz_hold = external global %union.rec*            ; <%union.rec**> [#uses=2]
+@zz_res = external global %union.rec*             ; <%union.rec**> [#uses=1]
+
+define arm_apcscc %union.rec* @Manifest(%union.rec* %x, %union.rec* %env, %struct.STYLE* %style, %union.rec** %bthr, %union.rec** %fthr, %union.rec** %target, %union.rec** %crs, i32 %ok, i32 %need_expand, %union.rec** %enclose, i32 %fcr) nounwind {
+entry:
+; CHECK:       ldr.w	r9, [r7, #+28]
+  %xgaps.i = alloca [32 x %union.rec*], align 4   ; <[32 x %union.rec*]*> [#uses=0]
+  %ycomp.i = alloca [32 x %union.rec*], align 4   ; <[32 x %union.rec*]*> [#uses=0]
+  br i1 false, label %bb, label %bb20
+
+bb:                                               ; preds = %entry
+  unreachable
+
+bb20:                                             ; preds = %entry
+  switch i32 undef, label %bb1287 [
+    i32 11, label %bb119
+    i32 12, label %bb119
+    i32 21, label %bb420
+    i32 23, label %bb420
+    i32 45, label %bb438
+    i32 46, label %bb438
+    i32 55, label %bb533
+    i32 56, label %bb569
+    i32 64, label %bb745
+    i32 78, label %bb1098
+  ]
+
+bb119:                                            ; preds = %bb20, %bb20
+  unreachable
+
+bb420:                                            ; preds = %bb20, %bb20
+; CHECK: bb420
+; CHECK: str r{{[0-7]}}, [sp]
+; CHECK: str r{{[0-7]}}, [sp, #+4]
+; CHECK: str r{{[0-7]}}, [sp, #+8]
+; CHECK: str r{{[0-7]}}, [sp, #+24]
+  store %union.rec* null, %union.rec** @zz_hold, align 4
+  store %union.rec* null, %union.rec** @zz_res, align 4
+  store %union.rec* %x, %union.rec** @zz_hold, align 4
+  %0 = call arm_apcscc  %union.rec* @Manifest(%union.rec* undef, %union.rec* %env, %struct.STYLE* %style, %union.rec** %bthr, %union.rec** %fthr, %union.rec** %target, %union.rec** %crs, i32 %ok, i32 %need_expand, %union.rec** %enclose, i32 %fcr) nounwind ; <%union.rec*> [#uses=0]
+  unreachable
+
+bb438:                                            ; preds = %bb20, %bb20
+  unreachable
+
+bb533:                                            ; preds = %bb20
+  ret %union.rec* %x
+
+bb569:                                            ; preds = %bb20
+  unreachable
+
+bb745:                                            ; preds = %bb20
+  unreachable
+
+bb1098:                                           ; preds = %bb20
+  unreachable
+
+bb1287:                                           ; preds = %bb20
+  unreachable
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/load-global.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/load-global.ll
index 4fd4525..9286670 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/load-global.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/load-global.ll
@@ -14,7 +14,7 @@ define i32 @test1() {
 
 ; PIC: _test1
 ; PIC: add r0, pc
-; PIC: .long L_G$non_lazy_ptr-(LPC0+4)
+; PIC: .long L_G$non_lazy_ptr-(LPC1_0+4)
 
 ; LINUX: test1
 ; LINUX: .long G(GOT)
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/lsr-deficiency.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/lsr-deficiency.ll
new file mode 100644
index 0000000..7b1b57a
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/lsr-deficiency.ll
@@ -0,0 +1,37 @@
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin10 -relocation-model=pic | FileCheck %s
+; rdar://7387640
+
+; FIXME: We still need to rewrite the array-reference IV of stride -4 in
+; terms of the loop-count IV of stride -1.
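+; With that rewrite a single IV would remain: the element could be addressed
+; with a scaled access off the count register (e.g. something like
+; "ldr rX, [rBase, rCnt, lsl #2]") instead of stepping a pointer by -4.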
+
+@G = external global i32                          ; <i32*> [#uses=2]
+@array = external global i32*                     ; <i32**> [#uses=1]
+
+define arm_apcscc void @t() nounwind optsize {
+; CHECK: t:
+; CHECK: mov.w r2, #4000
+; CHECK: movw r3, #1001
+entry:
+  %.pre = load i32* @G, align 4                   ; <i32> [#uses=1]
+  br label %bb
+
+bb:                                               ; preds = %bb, %entry
+; CHECK: LBB1_1:
+; CHECK: subs r3, #1
+; CHECK: cmp r3, #0
+; CHECK: sub.w r2, r2, #4
+  %0 = phi i32 [ %.pre, %entry ], [ %3, %bb ]     ; <i32> [#uses=1]
+  %indvar = phi i32 [ 0, %entry ], [ %indvar.next, %bb ] ; <i32> [#uses=2]
+  %tmp5 = sub i32 1000, %indvar                   ; <i32> [#uses=1]
+  %1 = load i32** @array, align 4                 ; <i32*> [#uses=1]
+  %scevgep = getelementptr i32* %1, i32 %tmp5     ; <i32*> [#uses=1]
+  %2 = load i32* %scevgep, align 4                ; <i32> [#uses=1]
+  %3 = add nsw i32 %2, %0                         ; <i32> [#uses=2]
+  store i32 %3, i32* @G, align 4
+  %indvar.next = add i32 %indvar, 1               ; <i32> [#uses=2]
+  %exitcond = icmp eq i32 %indvar.next, 1001      ; <i1> [#uses=1]
+  br i1 %exitcond, label %return, label %bb
+
+return:                                           ; preds = %bb
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/machine-licm.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/machine-licm.ll
new file mode 100644
index 0000000..9ab19e9
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/machine-licm.ll
@@ -0,0 +1,55 @@
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin -disable-fp-elim                       | FileCheck %s
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin -relocation-model=pic -disable-fp-elim | FileCheck %s --check-prefix=PIC
+; rdar://7353541
+; rdar://7354376
+
+; The generated code is nowhere near ideal. It does not recognize that the
+; two constantpool entries being loaded can be merged into one.
+
+@GV = external global i32                         ; <i32*> [#uses=2]
+
+define arm_apcscc void @t(i32* nocapture %vals, i32 %c) nounwind {
+entry:
+; CHECK: t:
+; CHECK: cbz
+  %0 = icmp eq i32 %c, 0                          ; <i1> [#uses=1]
+  br i1 %0, label %return, label %bb.nph
+
+bb.nph:                                           ; preds = %entry
+; CHECK: BB#1
+; CHECK: ldr.n r2, LCPI1_0
+; CHECK: ldr r3, [r2]
+; CHECK: ldr r3, [r3]
+; CHECK: ldr r2, [r2]
+; CHECK: LBB1_2
+; CHECK: LCPI1_0:
+; CHECK-NOT: LCPI1_1:
+; CHECK: .section
+
+; PIC: BB#1
+; PIC: ldr.n r2, LCPI1_0
+; PIC: add r2, pc
+; PIC: ldr r3, [r2]
+; PIC: ldr r3, [r3]
+; PIC: ldr r2, [r2]
+; PIC: LBB1_2
+; PIC: LCPI1_0:
+; PIC-NOT: LCPI1_1:
+; PIC: .section
+  %.pre = load i32* @GV, align 4                  ; <i32> [#uses=1]
+  br label %bb
+
+bb:                                               ; preds = %bb, %bb.nph
+  %1 = phi i32 [ %.pre, %bb.nph ], [ %3, %bb ]    ; <i32> [#uses=1]
+  %i.03 = phi i32 [ 0, %bb.nph ], [ %4, %bb ]     ; <i32> [#uses=2]
+  %scevgep = getelementptr i32* %vals, i32 %i.03  ; <i32*> [#uses=1]
+  %2 = load i32* %scevgep, align 4                ; <i32> [#uses=1]
+  %3 = add nsw i32 %1, %2                         ; <i32> [#uses=2]
+  store i32 %3, i32* @GV, align 4
+  %4 = add i32 %i.03, 1                           ; <i32> [#uses=2]
+  %exitcond = icmp eq i32 %4, %c                  ; <i1> [#uses=1]
+  br i1 %exitcond, label %return, label %bb
+
+return:                                           ; preds = %bb, %entry
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-add3.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-add3.ll
index 8d472cb..58fc333 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-add3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-add3.ll
@@ -1,6 +1,9 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {addw\\W*r\[0-9\],\\W*r\[0-9\],\\W*#\[0-9\]*} | grep {#4095} | count 1
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 define i32 @f1(i32 %a) {
     %tmp = add i32 %a, 4095
     ret i32 %tmp
 }
+
+; CHECK: f1:
+; CHECK: 	addw	r0, r0, #4095
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-and2.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-and2.ll
index 1e2666f..76c56d0 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-and2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-and2.ll
@@ -1,31 +1,41 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {and\\W*r\[0-9\],\\W*r\[0-9\],\\W*#\[0-9\]*} | grep {#171\\|#1179666\\|#872428544\\|#1448498774\\|#66846720} | count 5
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 ; 171 = 0x000000ab
 define i32 @f1(i32 %a) {
     %tmp = and i32 %a, 171
     ret i32 %tmp
 }
+; CHECK: f1:
+; CHECK: 	and	r0, r0, #171
 
 ; 1179666 = 0x00120012
 define i32 @f2(i32 %a) {
     %tmp = and i32 %a, 1179666
     ret i32 %tmp
 }
+; CHECK: f2:
+; CHECK: 	and	r0, r0, #1179666
 
 ; 872428544 = 0x34003400
 define i32 @f3(i32 %a) {
     %tmp = and i32 %a, 872428544
     ret i32 %tmp
 }
+; CHECK: f3:
+; CHECK: 	and	r0, r0, #872428544
 
 ; 1448498774 = 0x56565656
 define i32 @f4(i32 %a) {
     %tmp = and i32 %a, 1448498774
     ret i32 %tmp
 }
+; CHECK: f4:
+; CHECK: 	and	r0, r0, #1448498774
 
 ; 66846720 = 0x03fc0000
 define i32 @f5(i32 %a) {
     %tmp = and i32 %a, 66846720
     ret i32 %tmp
 }
+; CHECK: f5:
+; CHECK: 	and	r0, r0, #66846720
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-bcc.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-bcc.ll
index e1f9cdb..aae9f5c 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-bcc.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-bcc.ll
@@ -2,8 +2,8 @@
 ; RUN: llc < %s -march=thumb -mattr=+thumb2 | not grep it
 
 define i32 @t1(i32 %a, i32 %b, i32 %c) {
-; CHECK: t1
-; CHECK: beq
+; CHECK: t1:
+; CHECK: cbz
 	%tmp2 = icmp eq i32 %a, 0
 	br i1 %tmp2, label %cond_false, label %cond_true
 
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-bfc.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-bfc.ll
index d33cf7e..b486045 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-bfc.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-bfc.ll
@@ -1,25 +1,32 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep "bfc " | count 3
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 ; 4278190095 = 0xff00000f
 define i32 @f1(i32 %a) {
+; CHECK: f1:
+; CHECK: bfc r
     %tmp = and i32 %a, 4278190095
     ret i32 %tmp
 }
 
 ; 4286578688 = 0xff800000
 define i32 @f2(i32 %a) {
+; CHECK: f2:
+; CHECK: bfc r
     %tmp = and i32 %a, 4286578688
     ret i32 %tmp
 }
 
 ; 4095 = 0x00000fff
 define i32 @f3(i32 %a) {
+; CHECK: f3:
+; CHECK: bfc r
     %tmp = and i32 %a, 4095
     ret i32 %tmp
 }
 
 ; 2147483646 = 0x7ffffffe   not implementable w/ BFC
 define i32 @f4(i32 %a) {
+; CHECK: f4:
     %tmp = and i32 %a, 2147483646
     ret i32 %tmp
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-branch.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-branch.ll
index b46cb5f..1298384 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-branch.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-branch.ll
@@ -1,9 +1,9 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin -mattr=+thumb2 | FileCheck %s
 
 define void @f1(i32 %a, i32 %b, i32* %v) {
 entry:
 ; CHECK: f1:
-; CHECK bne LBB
+; CHECK: bne LBB
         %tmp = icmp eq i32 %a, %b               ; <i1> [#uses=1]
         br i1 %tmp, label %cond_true, label %return
 
@@ -18,7 +18,7 @@ return:         ; preds = %entry
 define void @f2(i32 %a, i32 %b, i32* %v) {
 entry:
 ; CHECK: f2:
-; CHECK bge LBB
+; CHECK: bge LBB
         %tmp = icmp slt i32 %a, %b              ; <i1> [#uses=1]
         br i1 %tmp, label %cond_true, label %return
 
@@ -33,7 +33,7 @@ return:         ; preds = %entry
 define void @f3(i32 %a, i32 %b, i32* %v) {
 entry:
 ; CHECK: f3:
-; CHECK bhs LBB
+; CHECK: bhs LBB
         %tmp = icmp ult i32 %a, %b              ; <i1> [#uses=1]
         br i1 %tmp, label %cond_true, label %return
 
@@ -48,7 +48,7 @@ return:         ; preds = %entry
 define void @f4(i32 %a, i32 %b, i32* %v) {
 entry:
 ; CHECK: f4:
-; CHECK blo LBB
+; CHECK: blo LBB
         %tmp = icmp ult i32 %a, %b              ; <i1> [#uses=1]
         br i1 %tmp, label %return, label %cond_true
 
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-cbnz.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-cbnz.ll
new file mode 100644
index 0000000..0fc6899
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-cbnz.ll
@@ -0,0 +1,33 @@
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin -mcpu=cortex-a8 | FileCheck %s
+; rdar://7354379
+
+declare arm_apcscc double @floor(double) nounwind readnone
+
+define void @t(i1 %a, double %b) {
+entry:
+  br i1 %a, label %bb3, label %bb1
+
+bb1:                                              ; preds = %entry
+  unreachable
+
+bb3:                                              ; preds = %entry
+  br i1 %a, label %bb7, label %bb5
+
+bb5:                                              ; preds = %bb3
+  unreachable
+
+bb7:                                              ; preds = %bb3
+  br i1 %a, label %bb11, label %bb9
+
+bb9:                                              ; preds = %bb7
+; CHECK:      cmp r0, #0
+; CHECK-NEXT: cmp r0, #0
+; CHECK-NEXT: cbnz
+  %0 = tail call arm_apcscc  double @floor(double %b) nounwind readnone ; <double> [#uses=0]
+  br label %bb11
+
+bb11:                                             ; preds = %bb9, %bb7
+  %1 = getelementptr i32* undef, i32 0
+  store i32 0, i32* %1
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-clz.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-clz.ll
index 0bed058..74728bf 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-clz.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-clz.ll
@@ -1,6 +1,8 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2,+v7a | grep "clz " | count 1
+; RUN: llc < %s -march=thumb -mattr=+thumb2,+v7a | FileCheck %s
 
 define i32 @f1(i32 %a) {
+; CHECK: f1:
+; CHECK: clz r
     %tmp = tail call i32 @llvm.ctlz.i32(i32 %a)
     ret i32 %tmp
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-cmn.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-cmn.ll
index 401c56a..eeaaa7f 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-cmn.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-cmn.ll
@@ -1,32 +1,36 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {cmn\\.w\\W*r\[0-9\],\\W*r\[0-9\]$} | count 4
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {cmn\\.w\\W*r\[0-9\],\\W*r\[0-9\],\\W*lsl\\W*#5$} | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {cmn\\.w\\W*r\[0-9\],\\W*r\[0-9\],\\W*lsr\\W*#6$} | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {cmn\\.w\\W*r\[0-9\],\\W*r\[0-9\],\\W*asr\\W*#7$} | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {cmn\\.w\\W*r\[0-9\],\\W*r\[0-9\],\\W*ror\\W*#8$} | count 1
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 define i1 @f1(i32 %a, i32 %b) {
     %nb = sub i32 0, %b
     %tmp = icmp ne i32 %a, %nb
     ret i1 %tmp
 }
+; CHECK: f1:
+; CHECK: 	cmn.w	r0, r1
 
 define i1 @f2(i32 %a, i32 %b) {
     %nb = sub i32 0, %b
     %tmp = icmp ne i32 %nb, %a
     ret i1 %tmp
 }
+; CHECK: f2:
+; CHECK: 	cmn.w	r0, r1
 
 define i1 @f3(i32 %a, i32 %b) {
     %nb = sub i32 0, %b
     %tmp = icmp eq i32 %a, %nb
     ret i1 %tmp
 }
+; CHECK: f3:
+; CHECK: 	cmn.w	r0, r1
 
 define i1 @f4(i32 %a, i32 %b) {
     %nb = sub i32 0, %b
     %tmp = icmp eq i32 %nb, %a
     ret i1 %tmp
 }
+; CHECK: f4:
+; CHECK: 	cmn.w	r0, r1
 
 define i1 @f5(i32 %a, i32 %b) {
     %tmp = shl i32 %b, 5
@@ -34,6 +38,8 @@ define i1 @f5(i32 %a, i32 %b) {
     %tmp1 = icmp eq i32 %nb, %a
     ret i1 %tmp1
 }
+; CHECK: f5:
+; CHECK: 	cmn.w	r0, r1, lsl #5
 
 define i1 @f6(i32 %a, i32 %b) {
     %tmp = lshr i32 %b, 6
@@ -41,6 +47,8 @@ define i1 @f6(i32 %a, i32 %b) {
     %tmp1 = icmp ne i32 %nb, %a
     ret i1 %tmp1
 }
+; CHECK: f6:
+; CHECK: 	cmn.w	r0, r1, lsr #6
 
 define i1 @f7(i32 %a, i32 %b) {
     %tmp = ashr i32 %b, 7
@@ -48,6 +56,8 @@ define i1 @f7(i32 %a, i32 %b) {
     %tmp1 = icmp eq i32 %a, %nb
     ret i1 %tmp1
 }
+; CHECK: f7:
+; CHECK: 	cmn.w	r0, r1, asr #7
 
 define i1 @f8(i32 %a, i32 %b) {
     %l8 = shl i32 %a, 24
@@ -57,3 +67,6 @@ define i1 @f8(i32 %a, i32 %b) {
     %tmp1 = icmp ne i32 %a, %nb
     ret i1 %tmp1
 }
+; CHECK: f8:
+; CHECK: 	cmn.w	r0, r0, ror #8
+
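
cmn ("compare negative") sets the flags for Rn + Op2, so comparing a value against a negation needs no materialized subtraction: the "sub i32 0, %b" feeding the icmp folds away entirely. A minimal sketch under the same RUN line as the test above (the function name is illustrative):

    define i1 @cmn_sketch(i32 %a, i32 %b) {
    ; CHECK: cmn.w r0, r1
        %nb = sub i32 0, %b          ; -%b
        %cmp = icmp eq i32 %a, %nb   ; a == -b, i.e. the flags of a + b
        ret i1 %cmp
    }
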
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-cmn2.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-cmn2.ll
index c1fcac0..c0e19f6 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-cmn2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-cmn2.ll
@@ -1,25 +1,33 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep "cmn\\.w "  | grep {#187\\|#11141290\\|#-872363008\\|#1114112} | count 4
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 ; -0x000000bb = 4294967109
 define i1 @f1(i32 %a) {
+; CHECK: f1:
+; CHECK: cmn.w {{r.*}}, #187
     %tmp = icmp ne i32 %a, 4294967109
     ret i1 %tmp
 }
 
 ; -0x00aa00aa = 4283826006
 define i1 @f2(i32 %a) {
+; CHECK: f2:
+; CHECK: cmn.w {{r.*}}, #11141290
     %tmp = icmp eq i32 %a, 4283826006
     ret i1 %tmp
 }
 
 ; -0xcc00cc00 = 872363008
 define i1 @f3(i32 %a) {
+; CHECK: f3:
+; CHECK: cmn.w {{r.*}}, #-872363008
     %tmp = icmp ne i32 %a, 872363008
     ret i1 %tmp
 }
 
 ; -0x00110000 = 4293853184
 define i1 @f4(i32 %a) {
+; CHECK: f4:
+; CHECK: cmn.w {{r.*}}, #1114112
     %tmp = icmp eq i32 %a, 4293853184
     ret i1 %tmp
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-eor2.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-eor2.ll
index 185634c..6b2e9dc 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-eor2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-eor2.ll
@@ -1,31 +1,41 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep "eor "  | grep {#187\\|#11141290\\|#-872363008\\|#1114112\\|#-572662307} | count 5
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 ; 0x000000bb = 187
 define i32 @f1(i32 %a) {
+; CHECK: f1:
+; CHECK: eor {{.*}}#187
     %tmp = xor i32 %a, 187
     ret i32 %tmp
 }
 
 ; 0x00aa00aa = 11141290
 define i32 @f2(i32 %a) {
+; CHECK: f2:
+; CHECK: eor {{.*}}#11141290
     %tmp = xor i32 %a, 11141290 
     ret i32 %tmp
 }
 
 ; 0xcc00cc00 = 3422604288
 define i32 @f3(i32 %a) {
+; CHECK: f3:
+; CHECK: eor {{.*}}#-872363008
     %tmp = xor i32 %a, 3422604288
     ret i32 %tmp
 }
 
 ; 0xdddddddd = 3722304989
 define i32 @f4(i32 %a) {
+; CHECK: f4:
+; CHECK: eor {{.*}}#-572662307
     %tmp = xor i32 %a, 3722304989
     ret i32 %tmp
 }
 
 ; 0x00110000 = 1114112
 define i32 @f5(i32 %a) {
+; CHECK: f5:
+; CHECK: eor {{.*}}#1114112
     %tmp = xor i32 %a, 1114112
     ret i32 %tmp
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-ifcvt3.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-ifcvt3.ll
index 1d45d3c..496158c 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-ifcvt3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-ifcvt3.ll
@@ -23,7 +23,7 @@ bb52:                                             ; preds = %newFuncRoot
 ; CHECK: movne
 ; CHECK: moveq
 ; CHECK: pop
-; CHECK-NEXT: LBB1_2:
+; CHECK-NEXT: LBB1_1:
   %0 = load i64* @posed, align 4                  ; <i64> [#uses=3]
   %1 = sub i64 %0, %.reload78                     ; <i64> [#uses=1]
   %2 = ashr i64 %1, 1                             ; <i64> [#uses=3]
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-jtb.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-jtb.ll
index 7d093ec..f5a56e5 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-jtb.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-jtb.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | not grep tbb
+; RUN: llc < %s -march=thumb -mattr=+thumb2 -arm-adjust-jump-tables=0 | not grep tbb
 
 ; Do not use tbb / tbh if any destination is before the jumptable.
 ; rdar://7102917
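
Pinning -arm-adjust-jump-tables=0 here presumably keeps this negative test meaningful: left at its default, the jump-table adjustment pass can reorder blocks so that no destination precedes the table any more, at which point tbb becomes legal and "not grep tbb" would start failing even though the rule being tested still holds. Only the flag name comes from the hunk above; this reading of its behavior is an assumption. The resulting RUN line:

    ; RUN: llc < %s -march=thumb -mattr=+thumb2 -arm-adjust-jump-tables=0 | not grep tbb
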
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mla.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mla.ll
index be66425..c4cc749 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mla.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mla.ll
@@ -1,13 +1,17 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {mla\\W*r\[0-9\],\\W*r\[0-9\],\\W*r\[0-9\],\\W*r\[0-9\]} | count 2
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 define i32 @f1(i32 %a, i32 %b, i32 %c) {
     %tmp1 = mul i32 %a, %b
     %tmp2 = add i32 %c, %tmp1
     ret i32 %tmp2
 }
+; CHECK: f1:
+; CHECK: 	mla	r0, r0, r1, r2
 
 define i32 @f2(i32 %a, i32 %b, i32 %c) {
     %tmp1 = mul i32 %a, %b
     %tmp2 = add i32 %tmp1, %c
     ret i32 %tmp2
 }
+; CHECK: f2:
+; CHECK: 	mla	r0, r0, r1, r2
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mls.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mls.ll
index 782def9..fc9e6ba 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mls.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mls.ll
@@ -1,10 +1,12 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {mls\\W*r\[0-9\],\\W*r\[0-9\],\\W*r\[0-9\],\\W*r\[0-9\]} | count 1
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 define i32 @f1(i32 %a, i32 %b, i32 %c) {
     %tmp1 = mul i32 %a, %b
     %tmp2 = sub i32 %c, %tmp1
     ret i32 %tmp2
 }
+; CHECK: f1:
+; CHECK: 	mls	r0, r0, r1, r2
 
 ; sub doesn't commute, so no mls for this one
 define i32 @f2(i32 %a, i32 %b, i32 %c) {
@@ -12,3 +14,6 @@ define i32 @f2(i32 %a, i32 %b, i32 %c) {
     %tmp2 = sub i32 %tmp1, %c
     ret i32 %tmp2
 }
+; CHECK: f2:
+; CHECK: 	muls	r0, r1
+
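
mls computes Ra - Rn*Rm, with the product fixed as the subtrahend; that is why f1 (c - a*b) matches but f2 (a*b - c) stays a multiply followed by a subtract, as the comment in the hunk says. A minimal sketch of the non-matching direction (function name illustrative, CHECK taken from the hunk):

    define i32 @no_mls(i32 %a, i32 %b, i32 %c) {
    ; CHECK: muls r0, r1
        %p = mul i32 %a, %b
        %r = sub i32 %p, %c   ; product on the left, so no mls form exists
        ret i32 %r
    }
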
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mov.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mov.ll
index e9fdec8..1dc3614 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mov.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mov.ll
@@ -5,35 +5,40 @@
 ; var 2.1 - 0x00ab00ab
 define i32 @t2_const_var2_1_ok_1(i32 %lhs) {
 ;CHECK: t2_const_var2_1_ok_1:
-;CHECK: #11206827
+;CHECK: add.w   r0, r0, #11206827
     %ret = add i32 %lhs, 11206827 ; 0x00ab00ab
     ret i32 %ret
 }
 
-define i32 @t2_const_var2_1_fail_1(i32 %lhs) {
-;CHECK: t2_const_var2_1_fail_1:
-;CHECK: movt
+define i32 @t2_const_var2_1_ok_2(i32 %lhs) {
+;CHECK: t2_const_var2_1_ok_2:
+;CHECK: add.w   r0, r0, #11206656
+;CHECK: adds    r0, #187
     %ret = add i32 %lhs, 11206843 ; 0x00ab00bb
     ret i32 %ret
 }
 
-define i32 @t2_const_var2_1_fail_2(i32 %lhs) {
-;CHECK: t2_const_var2_1_fail_2:
-;CHECK: movt
+define i32 @t2_const_var2_1_ok_3(i32 %lhs) {
+;CHECK: t2_const_var2_1_ok_3:
+;CHECK: add.w   r0, r0, #11206827
+;CHECK: add.w   r0, r0, #16777216
     %ret = add i32 %lhs, 27984043 ; 0x01ab00ab
     ret i32 %ret
 }
 
-define i32 @t2_const_var2_1_fail_3(i32 %lhs) {
-;CHECK: t2_const_var2_1_fail_3:
-;CHECK: movt
+define i32 @t2_const_var2_1_ok_4(i32 %lhs) {
+;CHECK: t2_const_var2_1_ok_4:
+;CHECK: add.w   r0, r0, #16777472
+;CHECK: add.w   r0, r0, #11206827
     %ret = add i32 %lhs, 27984299 ; 0x01ab01ab
     ret i32 %ret
 }
 
-define i32 @t2_const_var2_1_fail_4(i32 %lhs) {
-;CHECK: t2_const_var2_1_fail_4:
-;CHECK: movt
+define i32 @t2_const_var2_1_fail_1(i32 %lhs) {
+;CHECK: t2_const_var2_1_fail_1:
+;CHECK: movw    r1, #43777
+;CHECK: movt    r1, #427
+;CHECK: add     r0, r1
     %ret = add i32 %lhs, 28027649 ; 0x01abab01
     ret i32 %ret
 }
@@ -41,35 +46,40 @@ define i32 @t2_const_var2_1_fail_4(i32 %lhs) {
 ; var 2.2 - 0xab00ab00
 define i32 @t2_const_var2_2_ok_1(i32 %lhs) {
 ;CHECK: t2_const_var2_2_ok_1:
-;CHECK: #-1426019584
+;CHECK: add.w   r0, r0, #-1426019584
     %ret = add i32 %lhs, 2868947712 ; 0xab00ab00
     ret i32 %ret
 }
 
-define i32 @t2_const_var2_2_fail_1(i32 %lhs) {
-;CHECK: t2_const_var2_2_fail_1:
-;CHECK: movt
+define i32 @t2_const_var2_2_ok_2(i32 %lhs) {
+;CHECK: t2_const_var2_2_ok_2:
+;CHECK: add.w   r0, r0, #-1426063360
+;CHECK: add.w   r0, r0, #47616
     %ret = add i32 %lhs, 2868951552 ; 0xab00ba00
     ret i32 %ret
 }
 
-define i32 @t2_const_var2_2_fail_2(i32 %lhs) {
-;CHECK: t2_const_var2_2_fail_2:
-;CHECK: movt
+define i32 @t2_const_var2_2_ok_3(i32 %lhs) {
+;CHECK: t2_const_var2_2_ok_3:
+;CHECK: add.w   r0, r0, #-1426019584
+;CHECK: adds    r0, #16
     %ret = add i32 %lhs, 2868947728 ; 0xab00ab10
     ret i32 %ret
 }
 
-define i32 @t2_const_var2_2_fail_3(i32 %lhs) {
-;CHECK: t2_const_var2_2_fail_3:
-;CHECK: movt
+define i32 @t2_const_var2_2_ok_4(i32 %lhs) {
+;CHECK: t2_const_var2_2_ok_4:
+;CHECK: add.w   r0, r0, #-1426019584
+;CHECK: add.w   r0, r0, #1048592
     %ret = add i32 %lhs, 2869996304 ; 0xab10ab10
     ret i32 %ret
 }
 
-define i32 @t2_const_var2_2_fail_4(i32 %lhs) {
-;CHECK: t2_const_var2_2_fail_4:
-;CHECK: movt
+define i32 @t2_const_var2_2_fail_1(i32 %lhs) {
+;CHECK: t2_const_var2_2_fail_1:
+;CHECK: movw    r1, #43792
+;CHECK: movt    r1, #4267
+;CHECK: add     r0, r1
     %ret = add i32 %lhs, 279685904 ; 0x10abab10
     ret i32 %ret
 }
@@ -77,35 +87,43 @@ define i32 @t2_const_var2_2_fail_4(i32 %lhs) {
 ; var 2.3 - 0xabababab
 define i32 @t2_const_var2_3_ok_1(i32 %lhs) {
 ;CHECK: t2_const_var2_3_ok_1:
-;CHECK: #-1414812757
+;CHECK: add.w   r0, r0, #-1414812757
     %ret = add i32 %lhs, 2880154539 ; 0xabababab
     ret i32 %ret
 }
 
 define i32 @t2_const_var2_3_fail_1(i32 %lhs) {
 ;CHECK: t2_const_var2_3_fail_1:
-;CHECK: movt
+;CHECK: movw    r1, #43962
+;CHECK: movt    r1, #43947
+;CHECK: add     r0, r1
     %ret = add i32 %lhs, 2880154554 ; 0xabababba
     ret i32 %ret
 }
 
 define i32 @t2_const_var2_3_fail_2(i32 %lhs) {
 ;CHECK: t2_const_var2_3_fail_2:
-;CHECK: movt
+;CHECK: movw    r1, #47787
+;CHECK: movt    r1, #43947
+;CHECK: add     r0, r1
     %ret = add i32 %lhs, 2880158379 ; 0xababbaab
     ret i32 %ret
 }
 
 define i32 @t2_const_var2_3_fail_3(i32 %lhs) {
 ;CHECK: t2_const_var2_3_fail_3:
-;CHECK: movt
+;CHECK: movw    r1, #43947
+;CHECK: movt    r1, #43962
+;CHECK: add     r0, r1
     %ret = add i32 %lhs, 2881137579 ; 0xabbaabab
     ret i32 %ret
 }
 
 define i32 @t2_const_var2_3_fail_4(i32 %lhs) {
 ;CHECK: t2_const_var2_3_fail_4:
-;CHECK: movt
+;CHECK: movw    r1, #43947
+;CHECK: movt    r1, #47787
+;CHECK: add     r0, r1
     %ret = add i32 %lhs, 3131812779 ; 0xbaababab
     ret i32 %ret
 }
@@ -113,35 +131,136 @@ define i32 @t2_const_var2_3_fail_4(i32 %lhs) {
 ; var 3 - 0x0F000000
 define i32 @t2_const_var3_1_ok_1(i32 %lhs) {
 ;CHECK: t2_const_var3_1_ok_1:
-;CHECK: #251658240
+;CHECK: add.w   r0, r0, #251658240
     %ret = add i32 %lhs, 251658240 ; 0x0F000000
     ret i32 %ret
 }
 
 define i32 @t2_const_var3_2_ok_1(i32 %lhs) {
 ;CHECK: t2_const_var3_2_ok_1:
-;CHECK: #3948544
+;CHECK: add.w   r0, r0, #3948544
     %ret = add i32 %lhs, 3948544 ; 0b00000000001111000100000000000000
     ret i32 %ret
 }
 
-define i32 @t2_const_var3_2_fail_1(i32 %lhs) {
-;CHECK: t2_const_var3_2_fail_1:
-;CHECK: movt
+define i32 @t2_const_var3_2_ok_2(i32 %lhs) {
+;CHECK: t2_const_var3_2_ok_2:
+;CHECK: add.w   r0, r0, #2097152
+;CHECK: add.w   r0, r0, #1843200
     %ret = add i32 %lhs, 3940352 ; 0b00000000001111000010000000000000
     ret i32 %ret
 }
 
 define i32 @t2_const_var3_3_ok_1(i32 %lhs) {
 ;CHECK: t2_const_var3_3_ok_1:
-;CHECK: #258
+;CHECK: add.w   r0, r0, #258
     %ret = add i32 %lhs, 258 ; 0b00000000000000000000000100000010
     ret i32 %ret
 }
 
 define i32 @t2_const_var3_4_ok_1(i32 %lhs) {
 ;CHECK: t2_const_var3_4_ok_1:
-;CHECK: #-268435456
+;CHECK: add.w   r0, r0, #-268435456
     %ret = add i32 %lhs, 4026531840 ; 0xF0000000
     ret i32 %ret
 }
+
+define i32 @t2MOVTi16_ok_1(i32 %a) {
+; CHECK: t2MOVTi16_ok_1:
+; CHECK: movt r0, #1234
+    %1 = and i32 %a, 65535
+    %2 = shl i32 1234, 16
+    %3 = or  i32 %1, %2
+
+    ret i32 %3
+}
+
+define i32 @t2MOVTi16_test_1(i32 %a) {
+; CHECK: t2MOVTi16_test_1:
+; CHECK: movt r0, #1234
+    %1 = shl i32  255,   8
+    %2 = shl i32 1234,   8
+    %3 = or  i32   %1, 255  ; This gives us 0xFFFF in %3
+    %4 = shl i32   %2,   8  ; This gives us (1234 << 16) in %4
+    %5 = and i32   %a,  %3
+    %6 = or  i32   %4,  %5
+
+    ret i32 %6
+}
+
+define i32 @t2MOVTi16_test_2(i32 %a) {
+; CHECK: t2MOVTi16_test_2:
+; CHECK: movt r0, #1234
+    %1 = shl i32  255,   8
+    %2 = shl i32 1234,   8
+    %3 = or  i32   %1, 255  ; This gives us 0xFFFF in %3
+    %4 = shl i32   %2,   6
+    %5 = and i32   %a,  %3
+    %6 = shl i32   %4,   2  ; This gives us (1234 << 16) in %6
+    %7 = or  i32   %5,  %6
+
+    ret i32 %7
+}
+
+define i32 @t2MOVTi16_test_3(i32 %a) {
+; CHECK: t2MOVTi16_test_3:
+; CHECK: movt r0, #1234
+    %1 = shl i32  255,   8
+    %2 = shl i32 1234,   8
+    %3 = or  i32   %1, 255  ; This gives us 0xFFFF in %3
+    %4 = shl i32   %2,   6
+    %5 = and i32   %a,  %3
+    %6 = shl i32   %4,   2  ; This gives us (1234 << 16) in %6
+    %7 = lshr i32  %6,   6
+    %8 = shl i32   %7,   6
+    %9 = or  i32   %5,  %8
+
+    ret i32 %9
+}
+
+; 171 = 0x000000ab
+define i32 @f1(i32 %a) {
+; CHECK: f1:
+; CHECK: movs r0, #171
+    %tmp = add i32 0, 171
+    ret i32 %tmp
+}
+
+; 1179666 = 0x00120012
+define i32 @f2(i32 %a) {
+; CHECK: f2:
+; CHECK: mov.w r0, #1179666
+    %tmp = add i32 0, 1179666
+    ret i32 %tmp
+}
+
+; 872428544 = 0x34003400
+define i32 @f3(i32 %a) {
+; CHECK: f3:
+; CHECK: mov.w r0, #872428544
+    %tmp = add i32 0, 872428544
+    ret i32 %tmp
+}
+
+; 1448498774 = 0x56565656
+define i32 @f4(i32 %a) {
+; CHECK: f4:
+; CHECK: mov.w r0, #1448498774
+    %tmp = add i32 0, 1448498774
+    ret i32 %tmp
+}
+
+; 66846720 = 0x03fc0000
+define i32 @f5(i32 %a) {
+; CHECK: f5:
+; CHECK: mov.w r0, #66846720
+    %tmp = add i32 0, 66846720
+    ret i32 %tmp
+}
+
+define i32 @f6(i32 %a) {
+;CHECK: f6
+;CHECK: movw    r0, #65535
+    %tmp = add i32 0, 65535
+    ret i32 %tmp
+}
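
The ok/fail naming in this file tracks Thumb-2's modified-immediate encoding: a constant is cheap when it is an 8-bit value, that value rotated into place, or that value replicated as 0x00XY00XY, 0xXY00XY00 or 0xXYXYXYXY. A constant that splits into two such pieces costs two adds; anything else falls back to a movw/movt pair, which is what the remaining fail cases check. A sketch of the two-piece case, mirroring t2_const_var2_1_ok_2 above (0x00ab00bb = 0x00ab0000 + 0x000000bb):

    define i32 @split_imm(i32 %lhs) {
    ; CHECK: add.w r0, r0, #11206656
    ; CHECK: adds r0, #187
        %ret = add i32 %lhs, 11206843 ; 0x00ab00bb
        ret i32 %ret
    }
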
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mov2.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mov2.ll
deleted file mode 100644
index a02f4f0..0000000
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mov2.ll
+++ /dev/null
@@ -1,85 +0,0 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
-
-define i32 @t2MOVTi16_ok_1(i32 %a) {
-; CHECK: t2MOVTi16_ok_1:
-; CHECK:      movs r1, #0
-; CHECK-NEXT: movt r1, #1234
-; CHECK:      movw r1, #65535
-; CHECK-NEXT: movt r1, #1234
-    %1 = and i32 %a, 65535
-    %2 = shl i32 1234, 16
-    %3 = or  i32 %1, %2
-
-    ret i32 %3
-}
-
-define i32 @t2MOVTi16_test_1(i32 %a) {
-; CHECK: t2MOVTi16_test_1:
-; CHECK:      movs r1, #0
-; CHECK-NEXT: movt r1, #1234
-; CHECK:      movw r1, #65535
-; CHECK-NEXT: movt r1, #1234
-    %1 = shl i32  255,   8
-    %2 = shl i32 1234,   8
-    %3 = or  i32   %1, 255  ; This give us 0xFFFF in %3
-    %4 = shl i32   %2,   8  ; This gives us (1234 << 16) in %4
-    %5 = and i32   %a,  %3
-    %6 = or  i32   %4,  %5
-
-    ret i32 %6
-}
-
-define i32 @t2MOVTi16_test_2(i32 %a) {
-; CHECK: t2MOVTi16_test_2:
-; CHECK:      movs r1, #0
-; CHECK-NEXT: movt r1, #1234
-; CHECK:      movw r1, #65535
-; CHECK-NEXT: movt r1, #1234
-    %1 = shl i32  255,   8
-    %2 = shl i32 1234,   8
-    %3 = or  i32   %1, 255  ; This give us 0xFFFF in %3
-    %4 = shl i32   %2,   6
-    %5 = and i32   %a,  %3
-    %6 = shl i32   %4,   2  ; This gives us (1234 << 16) in %6
-    %7 = or  i32   %5,  %6
-
-    ret i32 %7
-}
-
-define i32 @t2MOVTi16_test_3(i32 %a) {
-; CHECK: t2MOVTi16_test_3:
-; CHECK:      movs r1, #0
-; CHECK-NEXT: movt r1, #1234
-; CHECK:      movw r1, #65535
-; CHECK-NEXT: movt r1, #1234
-    %1 = shl i32  255,   8
-    %2 = shl i32 1234,   8
-    %3 = or  i32   %1, 255  ; This give us 0xFFFF in %3
-    %4 = shl i32   %2,   6
-    %5 = and i32   %a,  %3
-    %6 = shl i32   %4,   2  ; This gives us (1234 << 16) in %6
-    %7 = lshr i32  %6,   6
-    %8 = shl i32   %7,   6
-    %9 = or  i32   %5,  %8
-
-    ret i32 %9
-}
-
-define i32 @t2MOVTi16_test_nomatch_1(i32 %a) {
-; CHECK: t2MOVTi16_test_nomatch_1:
-; CHECK:      movw r1, #16384
-; CHECK-NEXT: movt r1, #154
-; CHECK:      movw r1, #65535
-; CHECK-NEXT: movt r1, #154
-    %1 = shl i32  255,   8
-    %2 = shl i32 1234,   8
-    %3 = or  i32   %1, 255  ; This give us 0xFFFF in %3
-    %4 = shl i32   %2,   6
-    %5 = and i32   %a,  %3
-    %6 = shl i32   %4,   2  ; This gives us (1234 << 16) in %6
-    %7 = lshr i32  %6,   3
-    %8 = or  i32   %5,  %7
-    ret i32 %8
-}
-
-
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mov3.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mov3.ll
deleted file mode 100644
index 46af6fb..0000000
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mov3.ll
+++ /dev/null
@@ -1,41 +0,0 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
-
-; 171 = 0x000000ab
-define i32 @f1(i32 %a) {
-; CHECK: f1:
-; CHECK: movs r0, #171
-    %tmp = add i32 0, 171
-    ret i32 %tmp
-}
-
-; 1179666 = 0x00120012
-define i32 @f2(i32 %a) {
-; CHECK: f2:
-; CHECK: mov.w r0, #1179666
-    %tmp = add i32 0, 1179666
-    ret i32 %tmp
-}
-
-; 872428544 = 0x34003400
-define i32 @f3(i32 %a) {
-; CHECK: f3:
-; CHECK: mov.w r0, #872428544
-    %tmp = add i32 0, 872428544
-    ret i32 %tmp
-}
-
-; 1448498774 = 0x56565656
-define i32 @f4(i32 %a) {
-; CHECK: f4:
-; CHECK: mov.w r0, #1448498774
-    %tmp = add i32 0, 1448498774
-    ret i32 %tmp
-}
-
-; 66846720 = 0x03fc0000
-define i32 @f5(i32 %a) {
-; CHECK: f5:
-; CHECK: mov.w r0, #66846720
-    %tmp = add i32 0, 66846720
-    ret i32 %tmp
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mov4.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mov4.ll
deleted file mode 100644
index 06fa238..0000000
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-mov4.ll
+++ /dev/null
@@ -1,6 +0,0 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {movw\\W*r\[0-9\],\\W*#\[0-9\]*} | grep {#65535} | count 1
-
-define i32 @f6(i32 %a) {
-    %tmp = add i32 0, 65535
-    ret i32 %tmp
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-orn.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-orn.ll
index d4222c2..97a3fd7 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-orn.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-orn.ll
@@ -1,32 +1,37 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {orn\\W*r\[0-9\]*,\\W*r\[0-9\]*,\\W*r\[0-9\]*$} | count 4
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {orn\\W*r\[0-9\],\\W*r\[0-9\],\\W*r\[0-9\],\\W*lsl\\W*#5$} | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {orn\\W*r\[0-9\],\\W*r\[0-9\],\\W*r\[0-9\],\\W*lsr\\W*#6$} | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {orn\\W*r\[0-9\],\\W*r\[0-9\],\\W*r\[0-9\],\\W*asr\\W*#7$} | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {orn\\W*r\[0-9\],\\W*r\[0-9\],\\W*r\[0-9\],\\W*ror\\W*#8$} | count 1
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
+
 
 define i32 @f1(i32 %a, i32 %b) {
     %tmp = xor i32 %b, 4294967295
     %tmp1 = or i32 %a, %tmp
     ret i32 %tmp1
 }
+; CHECK: f1:
+; CHECK: 	orn	r0, r0, r1
 
 define i32 @f2(i32 %a, i32 %b) {
     %tmp = xor i32 %b, 4294967295
     %tmp1 = or i32 %tmp, %a
     ret i32 %tmp1
 }
+; CHECK: f2:
+; CHECK: 	orn	r0, r0, r1
 
 define i32 @f3(i32 %a, i32 %b) {
     %tmp = xor i32 4294967295, %b
     %tmp1 = or i32 %a, %tmp
     ret i32 %tmp1
 }
+; CHECK: f3:
+; CHECK: 	orn	r0, r0, r1
 
 define i32 @f4(i32 %a, i32 %b) {
     %tmp = xor i32 4294967295, %b
     %tmp1 = or i32 %tmp, %a
     ret i32 %tmp1
 }
+; CHECK: f4:
+; CHECK: 	orn	r0, r0, r1
 
 define i32 @f5(i32 %a, i32 %b) {
     %tmp = shl i32 %b, 5
@@ -34,6 +39,8 @@ define i32 @f5(i32 %a, i32 %b) {
     %tmp2 = or i32 %a, %tmp1
     ret i32 %tmp2
 }
+; CHECK: f5:
+; CHECK: 	orn	r0, r0, r1, lsl #5
 
 define i32 @f6(i32 %a, i32 %b) {
     %tmp = lshr i32 %b, 6
@@ -41,6 +48,8 @@ define i32 @f6(i32 %a, i32 %b) {
     %tmp2 = or i32 %a, %tmp1
     ret i32 %tmp2
 }
+; CHECK: f6:
+; CHECK: 	orn	r0, r0, r1, lsr #6
 
 define i32 @f7(i32 %a, i32 %b) {
     %tmp = ashr i32 %b, 7
@@ -48,6 +57,8 @@ define i32 @f7(i32 %a, i32 %b) {
     %tmp2 = or i32 %a, %tmp1
     ret i32 %tmp2
 }
+; CHECK: f7:
+; CHECK: 	orn	r0, r0, r1, asr #7
 
 define i32 @f8(i32 %a, i32 %b) {
     %l8 = shl i32 %a, 24
@@ -57,3 +68,5 @@ define i32 @f8(i32 %a, i32 %b) {
     %tmp2 = or i32 %a, %tmp1
     ret i32 %tmp2
 }
+; CHECK: f8:
+; CHECK: 	orn	r0, r0, r0, ror #8
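
orn computes Rn | ~Op2, so the bitwise-not (an xor against all-ones) folds into the or and no separate mvn is needed. A minimal sketch under the same RUN line (function name illustrative):

    define i32 @orn_sketch(i32 %a, i32 %b) {
    ; CHECK: orn r0, r0, r1
        %nb = xor i32 %b, -1   ; ~b
        %r = or i32 %a, %nb
        ret i32 %r
    }
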
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-orn2.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-orn2.ll
index 7b01882..34ab3a5 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-orn2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-orn2.ll
@@ -1,5 +1,5 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {orn\\W*r\[0-9\]*,\\W*r\[0-9\]*,\\W*#\[0-9\]*} |\
-; RUN:     grep {#187\\|#11141290\\|#-872363008\\|#1114112} | count 4
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
+
 
 ; 0x000000bb = 187
 define i32 @f1(i32 %a) {
@@ -7,6 +7,8 @@ define i32 @f1(i32 %a) {
     %tmp2 = or i32 %a, %tmp1
     ret i32 %tmp2
 }
+; CHECK: f1:
+; CHECK: 	orn	r0, r0, #187
 
 ; 0x00aa00aa = 11141290
 define i32 @f2(i32 %a) {
@@ -14,6 +16,8 @@ define i32 @f2(i32 %a) {
     %tmp2 = or i32 %a, %tmp1
     ret i32 %tmp2
 }
+; CHECK: f2:
+; CHECK: 	orn	r0, r0, #11141290
 
 ; 0xcc00cc00 = 3422604288
 define i32 @f3(i32 %a) {
@@ -21,6 +25,8 @@ define i32 @f3(i32 %a) {
     %tmp2 = or i32 %a, %tmp1
     ret i32 %tmp2
 }
+; CHECK: f3:
+; CHECK: 	orn	r0, r0, #-872363008
 
 ; 0x00110000 = 1114112
 define i32 @f5(i32 %a) {
@@ -28,3 +34,5 @@ define i32 @f5(i32 %a) {
     %tmp2 = or i32 %a, %tmp1
     ret i32 %tmp2
 }
+; CHECK: f5:
+; CHECK: 	orn	r0, r0, #1114112
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-orr2.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-orr2.ll
index 759a5b8..8f7a3c2 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-orr2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-orr2.ll
@@ -1,31 +1,42 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {orr\\W*r\[0-9\]*,\\W*r\[0-9\]*,\\W*#\[0-9\]*} | grep {#187\\|#11141290\\|#-872363008\\|#1145324612\\|#1114112} | count 5
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
+
 
 ; 0x000000bb = 187
 define i32 @f1(i32 %a) {
     %tmp2 = or i32 %a, 187
     ret i32 %tmp2
 }
+; CHECK: f1:
+; CHECK: 	orr	r0, r0, #187
 
 ; 0x00aa00aa = 11141290
 define i32 @f2(i32 %a) {
     %tmp2 = or i32 %a, 11141290 
     ret i32 %tmp2
 }
+; CHECK: f2:
+; CHECK: 	orr	r0, r0, #11141290
 
 ; 0xcc00cc00 = 3422604288
 define i32 @f3(i32 %a) {
     %tmp2 = or i32 %a, 3422604288
     ret i32 %tmp2
 }
+; CHECK: f3:
+; CHECK: 	orr	r0, r0, #-872363008
 
 ; 0x44444444 = 1145324612
 define i32 @f4(i32 %a) {
     %tmp2 = or i32 %a, 1145324612
     ret i32 %tmp2
 }
+; CHECK: f4:
+; CHECK: 	orr	r0, r0, #1145324612
 
 ; 0x00110000 = 1114112
 define i32 @f5(i32 %a) {
     %tmp2 = or i32 %a, 1114112
     ret i32 %tmp2
 }
+; CHECK: f5:
+; CHECK: 	orr	r0, r0, #1114112
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-ror.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-ror.ll
index 01adb52..0200116 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-ror.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-ror.ll
@@ -1,4 +1,5 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {ror\\.w\\W*r\[0-9\]*,\\W*r\[0-9\]*,\\W*#\[0-9\]*} | grep 22 | count 1
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
+
 
 define i32 @f1(i32 %a) {
     %l8 = shl i32 %a, 10
@@ -6,3 +7,5 @@ define i32 @f1(i32 %a) {
     %tmp = or i32 %l8, %r8
     ret i32 %tmp
 }
+; CHECK: f1:
+; CHECK: 	ror.w	r0, r0, #22
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-rsb.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-rsb.ll
index 4611e94..15185be 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-rsb.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-rsb.ll
@@ -1,30 +1,35 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {rsb\\W*r\[0-9\],\\W*r\[0-9\],\\W*r\[0-9\],\\W*lsl\\W*#5$} | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {rsb\\W*r\[0-9\],\\W*r\[0-9\],\\W*r\[0-9\],\\W*lsr\\W*#6$} | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {rsb\\W*r\[0-9\],\\W*r\[0-9\],\\W*r\[0-9\],\\W*asr\\W*#7$} | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {rsb\\W*r\[0-9\],\\W*r\[0-9\],\\W*r\[0-9\],\\W*ror\\W*#8$} | count 1
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
-define i32 @f2(i32 %a, i32 %b) {
+define i32 @f1(i32 %a, i32 %b) {
     %tmp = shl i32 %b, 5
     %tmp1 = sub i32 %tmp, %a
     ret i32 %tmp1
 }
+; CHECK: f1:
+; CHECK: 	rsb	r0, r0, r1, lsl #5
 
-define i32 @f3(i32 %a, i32 %b) {
+define i32 @f2(i32 %a, i32 %b) {
     %tmp = lshr i32 %b, 6
     %tmp1 = sub i32 %tmp, %a
     ret i32 %tmp1
 }
+; CHECK: f2:
+; CHECK: 	rsb	r0, r0, r1, lsr #6
 
-define i32 @f4(i32 %a, i32 %b) {
+define i32 @f3(i32 %a, i32 %b) {
     %tmp = ashr i32 %b, 7
     %tmp1 = sub i32 %tmp, %a
     ret i32 %tmp1
 }
+; CHECK: f3:
+; CHECK: 	rsb	r0, r0, r1, asr #7
 
-define i32 @f5(i32 %a, i32 %b) {
+define i32 @f4(i32 %a, i32 %b) {
     %l8 = shl i32 %a, 24
     %r8 = lshr i32 %a, 8
     %tmp = or i32 %l8, %r8
     %tmp1 = sub i32 %tmp, %a
     ret i32 %tmp1
 }
+; CHECK: f4:
+; CHECK: 	rsb	r0, r0, r0, ror #8
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-rsb2.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-rsb2.ll
index 84a3796..61fb619 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-rsb2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-rsb2.ll
@@ -1,31 +1,41 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {rsb\\.w\\W*r\[0-9\],\\W*r\[0-9\],\\W*#\[0-9\]*} | grep {#171\\|#1179666\\|#872428544\\|#1448498774\\|#66846720} | count 5
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 ; 171 = 0x000000ab
 define i32 @f1(i32 %a) {
     %tmp = sub i32 171, %a
     ret i32 %tmp
 }
+; CHECK: f1:
+; CHECK: 	rsb.w	r0, r0, #171
 
 ; 1179666 = 0x00120012
 define i32 @f2(i32 %a) {
     %tmp = sub i32 1179666, %a
     ret i32 %tmp
 }
+; CHECK: f2:
+; CHECK: 	rsb.w	r0, r0, #1179666
 
 ; 872428544 = 0x34003400
 define i32 @f3(i32 %a) {
     %tmp = sub i32 872428544, %a
     ret i32 %tmp
 }
+; CHECK: f3:
+; CHECK: 	rsb.w	r0, r0, #872428544
 
 ; 1448498774 = 0x56565656
 define i32 @f4(i32 %a) {
     %tmp = sub i32 1448498774, %a
     ret i32 %tmp
 }
+; CHECK: f4:
+; CHECK: 	rsb.w	r0, r0, #1448498774
 
 ; 66846720 = 0x03fc0000
 define i32 @f5(i32 %a) {
     %tmp = sub i32 66846720, %a
     ret i32 %tmp
 }
+; CHECK: f5:
+; CHECK: 	rsb.w	r0, r0, #66846720
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-select_xform.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-select_xform.ll
index b4274ad..7fc2e2a 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-select_xform.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-select_xform.ll
@@ -1,8 +1,12 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep mov | count 3
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep mvn | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep it  | count 3
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 define i32 @t1(i32 %a, i32 %b, i32 %c) nounwind {
+; CHECK: t1
+; CHECK: sub.w r0, r1, #-2147483648
+; CHECK: cmp r2, #10
+; CHECK: sub.w r0, r0, #1
+; CHECK: it  gt
+; CHECK: movgt r0, r1
         %tmp1 = icmp sgt i32 %c, 10
         %tmp2 = select i1 %tmp1, i32 0, i32 2147483647
         %tmp3 = add i32 %tmp2, %b
@@ -10,6 +14,12 @@ define i32 @t1(i32 %a, i32 %b, i32 %c) nounwind {
 }
 
 define i32 @t2(i32 %a, i32 %b, i32 %c) nounwind {
+; CHECK: t2
+; CHECK: add.w r0, r1, #-2147483648
+; CHECK: cmp r2, #10
+; CHECK: it  gt
+; CHECK: movgt r0, r1
+
         %tmp1 = icmp sgt i32 %c, 10
         %tmp2 = select i1 %tmp1, i32 0, i32 2147483648
         %tmp3 = add i32 %tmp2, %b
@@ -17,6 +27,11 @@ define i32 @t2(i32 %a, i32 %b, i32 %c) nounwind {
 }
 
 define i32 @t3(i32 %a, i32 %b, i32 %c, i32 %d) nounwind {
+; CHECK: t3
+; CHECK: sub.w r0, r1, #10
+; CHECK: cmp r2, #10
+; CHECK: it  gt
+; CHECK: movgt r0, r1
         %tmp1 = icmp sgt i32 %c, 10
         %tmp2 = select i1 %tmp1, i32 0, i32 10
         %tmp3 = sub i32 %b, %tmp2
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-shifter.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-shifter.ll
index 7746cd3..b106ced 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-shifter.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-shifter.ll
@@ -1,22 +1,24 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep lsl
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep lsr
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep asr
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep ror
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | not grep mov
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 define i32 @t2ADDrs_lsl(i32 %X, i32 %Y) {
+; CHECK: t2ADDrs_lsl
+; CHECK: add.w  r0, r0, r1, lsl #16
         %A = shl i32 %Y, 16
         %B = add i32 %X, %A
         ret i32 %B
 }
 
 define i32 @t2ADDrs_lsr(i32 %X, i32 %Y) {
+; CHECK: t2ADDrs_lsr
+; CHECK: add.w  r0, r0, r1, lsr #16
         %A = lshr i32 %Y, 16
         %B = add i32 %X, %A
         ret i32 %B
 }
 
 define i32 @t2ADDrs_asr(i32 %X, i32 %Y) {
+; CHECK: t2ADDrs_asr
+; CHECK: add.w  r0, r0, r1, asr #16
         %A = ashr i32 %Y, 16
         %B = add i32 %X, %A
         ret i32 %B
@@ -24,6 +26,8 @@ define i32 @t2ADDrs_asr(i32 %X, i32 %Y) {
 
 ; i32 ror(n) = (x >> n) | (x << (32 - n))
 define i32 @t2ADDrs_ror(i32 %X, i32 %Y) {
+; CHECK: t2ADDrs_ror
+; CHECK: add.w  r0, r0, r1, ror #16
         %A = lshr i32 %Y, 16
         %B = shl  i32 %Y, 16
         %C = or   i32 %B, %A
@@ -32,6 +36,10 @@ define i32 @t2ADDrs_ror(i32 %X, i32 %Y) {
 }
 
 define i32 @t2ADDrs_noRegShift(i32 %X, i32 %Y, i8 %sh) {
+; CHECK: t2ADDrs_noRegShift
+; CHECK: uxtb r2, r2
+; CHECK: lsls r1, r2
+; CHECK: add  r0, r1
         %shift.upgrd.1 = zext i8 %sh to i32
         %A = shl i32 %Y, %shift.upgrd.1
         %B = add i32 %X, %A
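
LLVM IR has no rotate instruction, so these tests build one from the identity noted in the file, ror(n) = (x >> n) | (x << (32 - n)), and rely on ISel folding the whole shl/lshr/or cluster into a single shifted operand. The noRegShift case shows the limit: Thumb-2 shifted operands only take immediate shift amounts, so a shift by a register stays a separate lsls. A sketch of the fold for n = 16 (function name illustrative):

    define i32 @add_ror16(i32 %X, i32 %Y) {
    ; CHECK: add.w r0, r0, r1, ror #16
        %hi = lshr i32 %Y, 16
        %lo = shl i32 %Y, 16
        %rot = or i32 %lo, %hi    ; ror(Y, 16)
        %sum = add i32 %X, %rot
        ret i32 %sum
    }
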
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-smla.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-smla.ll
index 66cc884..092ec27 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-smla.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-smla.ll
@@ -1,7 +1,8 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | \
-; RUN:   grep smlabt | count 1
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 define i32 @f3(i32 %a, i16 %x, i32 %y) {
+; CHECK: f3
+; CHECK: smlabt r0, r1, r2, r0
         %tmp = sext i16 %x to i32               ; <i32> [#uses=1]
         %tmp2 = ashr i32 %y, 16         ; <i32> [#uses=1]
         %tmp3 = mul i32 %tmp2, %tmp             ; <i32> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-smul.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-smul.ll
index cdbf4ca..16ea85d 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-smul.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-smul.ll
@@ -1,12 +1,11 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | \
-; RUN:   grep smulbt | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | \
-; RUN:   grep smultt | count 1
+; RUN: llc < %s -march=thumb -mattr=+thumb2 |  FileCheck %s
 
 @x = weak global i16 0          ; <i16*> [#uses=1]
 @y = weak global i16 0          ; <i16*> [#uses=0]
 
 define i32 @f1(i32 %y) {
+; CHECK: f1
+; CHECK: smulbt r0, r1, r0
         %tmp = load i16* @x             ; <i16> [#uses=1]
         %tmp1 = add i16 %tmp, 2         ; <i16> [#uses=1]
         %tmp2 = sext i16 %tmp1 to i32           ; <i32> [#uses=1]
@@ -16,6 +15,8 @@ define i32 @f1(i32 %y) {
 }
 
 define i32 @f2(i32 %x, i32 %y) {
+; CHECK: f2
+; CHECK: smultt r0, r1, r0
         %tmp1 = ashr i32 %x, 16         ; <i32> [#uses=1]
         %tmp3 = ashr i32 %y, 16         ; <i32> [#uses=1]
         %tmp4 = mul i32 %tmp3, %tmp1            ; <i32> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-spill-q.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-spill-q.ll
index 0a7221c..aef167b 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-spill-q.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-spill-q.ll
@@ -11,8 +11,9 @@ declare <4 x float> @llvm.arm.neon.vld1.v4f32(i8*) nounwind readonly
 
 define arm_apcscc void @aaa(%quuz* %this, i8* %block) {
 ; CHECK: aaa:
-; CHECK: vstmia sp
-; CHECK: vldmia sp
+; CHECK: bic sp, sp, #15
+; CHECK: vst1.64 {{.*}}sp, :128
+; CHECK: vld1.64 {{.*}}sp, :128
 entry:
   %0 = call <4 x float> @llvm.arm.neon.vld1.v4f32(i8* undef) nounwind ; <<4 x float>> [#uses=1]
   store float 6.300000e+01, float* undef, align 4
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-str_post.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-str_post.ll
index bee5810..bbfb447 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-str_post.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-str_post.ll
@@ -1,9 +1,8 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | \
-; RUN:   grep {strh .*\\\[.*\], #-4} | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | \
-; RUN:   grep {str .*\\\[.*\],} | count 1
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 define i16 @test1(i32* %X, i16* %A) {
+; CHECK: test1:
+; CHECK: strh {{.*}}[{{.*}}], #-4
         %Y = load i32* %X               ; <i32> [#uses=1]
         %tmp1 = trunc i32 %Y to i16             ; <i16> [#uses=1]
         store i16 %tmp1, i16* %A
@@ -13,6 +12,8 @@ define i16 @test1(i32* %X, i16* %A) {
 }
 
 define i32 @test2(i32* %X, i32* %A) {
+; CHECK: test2:
+; CHECK: str {{.*}}[{{.*}}],
         %Y = load i32* %X               ; <i32> [#uses=1]
         store i32 %Y, i32* %A
         %tmp1 = ptrtoint i32* %A to i32         ; <i32> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-str_pre.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-str_pre.ll
index 6c804ee..9af960b 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-str_pre.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-str_pre.ll
@@ -1,7 +1,8 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | \
-; RUN:   grep {str.*\\!} | count 2
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 define void @test1(i32* %X, i32* %A, i32** %dest) {
+; CHECK: test1
+; CHECK: str  r1, [r0, #+16]!
         %B = load i32* %A               ; <i32> [#uses=1]
         %Y = getelementptr i32* %X, i32 4               ; <i32*> [#uses=2]
         store i32 %B, i32* %Y
@@ -10,6 +11,8 @@ define void @test1(i32* %X, i32* %A, i32** %dest) {
 }
 
 define i16* @test2(i16* %X, i32* %A) {
+; CHECK: test2
+; CHECK: strh r1, [r0, #+8]!
         %B = load i32* %A               ; <i32> [#uses=1]
         %Y = getelementptr i16* %X, i32 4               ; <i16*> [#uses=2]
         %tmp = trunc i32 %B to i16              ; <i16> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-sub2.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-sub2.ll
index 6813f76..bb99cbd 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-sub2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-sub2.ll
@@ -1,6 +1,8 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {subw\\W*r\[0-9\],\\W*r\[0-9\],\\W*#\[0-9\]*} | grep {#4095} | count 1
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 define i32 @f1(i32 %a) {
     %tmp = sub i32 %a, 4095
     ret i32 %tmp
 }
+; CHECK: f1:
+; CHECK: 	subw	r0, r0, #4095
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-sxt_rot.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-sxt_rot.ll
index 33ed543..054d5df 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-sxt_rot.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-sxt_rot.ll
@@ -1,16 +1,15 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | \
-; RUN:   grep sxtb | count 2
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | \
-; RUN:   grep sxtb | grep ror | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | \
-; RUN:   grep sxtab | count 1
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 define i32 @test0(i8 %A) {
+; CHECK: test0
+; CHECK: sxtb r0, r0
         %B = sext i8 %A to i32
 	ret i32 %B
 }
 
 define i8 @test1(i32 %A) signext {
+; CHECK: test1
+; CHECK: sxtb.w r0, r0, ror #8
 	%B = lshr i32 %A, 8
 	%C = shl i32 %A, 24
 	%D = or i32 %B, %C
@@ -19,6 +18,9 @@ define i8 @test1(i32 %A) signext {
 }
 
 define i32 @test2(i32 %A, i32 %X) signext {
+; CHECK: test2
+; CHECK: lsrs r0, r0, #8
+; CHECK: sxtab  r0, r1, r0
 	%B = lshr i32 %A, 8
 	%C = shl i32 %A, 24
 	%D = or i32 %B, %C
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-tbh.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-tbh.ll
index c5cb6f3..2cf1d6a 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-tbh.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-tbh.ll
@@ -2,8 +2,6 @@
 
 ; Thumb2 target should reorder the bb's in order to use tbb / tbh.
 
-; XFAIL: *
-
 	%struct.R_flstr = type { i32, i32, i8* }
 	%struct._T_tstr = type { i32, %struct.R_flstr*, %struct._T_tstr* }
 @_C_nextcmd = external global i32		; <i32*> [#uses=3]
@@ -18,7 +16,7 @@ declare arm_apcscc noalias i8* @calloc(i32, i32) nounwind
 
 define arm_apcscc i32 @main(i32 %argc, i8** nocapture %argv) nounwind {
 ; CHECK: main:
-; CHECK: tbh
+; CHECK: tbb
 entry:
 	br label %bb42.i
 
@@ -26,7 +24,7 @@ bb1.i2:		; preds = %bb42.i
 	br label %bb40.i
 
 bb5.i:		; preds = %bb42.i
-	%0 = or i32 %_Y_flags.1, 32		; <i32> [#uses=1]
+	%0 = or i32 %argc, 32		; <i32> [#uses=1]
 	br label %bb40.i
 
 bb7.i:		; preds = %bb42.i
@@ -66,14 +64,10 @@ bb39.i:		; preds = %bb42.i
 	unreachable
 
 bb40.i:		; preds = %bb42.i, %bb5.i, %bb1.i2
-	%_Y_flags.0 = phi i32 [ 0, %bb1.i2 ], [ %0, %bb5.i ], [ %_Y_flags.1, %bb42.i ]		; <i32> [#uses=1]
-	%_Y_eflag.b.0 = phi i1 [ %_Y_eflag.b.1, %bb1.i2 ], [ %_Y_eflag.b.1, %bb5.i ], [ true, %bb42.i ]		; <i1> [#uses=1]
 	br label %bb42.i
 
 bb42.i:		; preds = %bb40.i, %entry
-	%_Y_eflag.b.1 = phi i1 [ false, %entry ], [ %_Y_eflag.b.0, %bb40.i ]		; <i1> [#uses=2]
-	%_Y_flags.1 = phi i32 [ 0, %entry ], [ %_Y_flags.0, %bb40.i ]		; <i32> [#uses=2]
-	switch i32 undef, label %bb39.i [
+	switch i32 %argc, label %bb39.i [
 		i32 67, label %bb33.i
 		i32 70, label %bb35.i
 		i32 77, label %bb37.i
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-teq.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-teq.ll
index 634d318..69f0383 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-teq.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-teq.ll
@@ -1,5 +1,5 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {teq\\.w\\W*r\[0-9\],\\W*#\[0-9\]*} | \
-; RUN:     grep {#187\\|#11141290\\|#-872363008\\|#1114112\\|#-572662307} | count 10
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
+
 
 ; 0x000000bb = 187
 define i1 @f1(i32 %a) {
@@ -7,6 +7,8 @@ define i1 @f1(i32 %a) {
     %tmp1 = icmp ne i32 %tmp, 0
     ret i1 %tmp1
 }
+; CHECK: f1:
+; CHECK: 	teq.w	r0, #187
 
 ; 0x000000bb = 187
 define i1 @f2(i32 %a) {
@@ -14,6 +16,8 @@ define i1 @f2(i32 %a) {
     %tmp1 = icmp eq i32 0, %tmp
     ret i1 %tmp1
 }
+; CHECK: f2:
+; CHECK: 	teq.w	r0, #187
 
 ; 0x00aa00aa = 11141290
 define i1 @f3(i32 %a) {
@@ -21,6 +25,8 @@ define i1 @f3(i32 %a) {
     %tmp1 = icmp eq i32 %tmp, 0
     ret i1 %tmp1
 }
+; CHECK: f3:
+; CHECK: 	teq.w	r0, #11141290
 
 ; 0x00aa00aa = 11141290
 define i1 @f4(i32 %a) {
@@ -28,6 +34,8 @@ define i1 @f4(i32 %a) {
     %tmp1 = icmp ne i32 0, %tmp
     ret i1 %tmp1
 }
+; CHECK: f4:
+; CHECK: 	teq.w	r0, #11141290
 
 ; 0xcc00cc00 = 3422604288
 define i1 @f5(i32 %a) {
@@ -35,6 +43,8 @@ define i1 @f5(i32 %a) {
     %tmp1 = icmp ne i32 %tmp, 0
     ret i1 %tmp1
 }
+; CHECK: f5:
+; CHECK: 	teq.w	r0, #-872363008
 
 ; 0xcc00cc00 = 3422604288
 define i1 @f6(i32 %a) {
@@ -42,6 +52,8 @@ define i1 @f6(i32 %a) {
     %tmp1 = icmp eq i32 0, %tmp
     ret i1 %tmp1
 }
+; CHECK: f6:
+; CHECK: 	teq.w	r0, #-872363008
 
 ; 0xdddddddd = 3722304989
 define i1 @f7(i32 %a) {
@@ -49,6 +61,8 @@ define i1 @f7(i32 %a) {
     %tmp1 = icmp eq i32 %tmp, 0
     ret i1 %tmp1
 }
+; CHECK: f7:
+; CHECK: 	teq.w	r0, #-572662307
 
 ; 0xdddddddd = 3722304989
 define i1 @f8(i32 %a) {
@@ -56,6 +70,8 @@ define i1 @f8(i32 %a) {
     %tmp1 = icmp ne i32 0, %tmp
     ret i1 %tmp1
 }
+; CHECK: f8:
+; CHECK: 	teq.w	r0, #-572662307
 
 ; 0x00110000 = 1114112
 define i1 @f9(i32 %a) {
@@ -63,6 +79,8 @@ define i1 @f9(i32 %a) {
     %tmp1 = icmp ne i32 %tmp, 0
     ret i1 %tmp1
 }
+; CHECK: f9:
+; CHECK: 	teq.w	r0, #1114112
 
 ; 0x00110000 = 1114112
 define i1 @f10(i32 %a) {
@@ -70,3 +88,6 @@ define i1 @f10(i32 %a) {
     %tmp1 = icmp eq i32 0, %tmp
     ret i1 %tmp1
 }
+; CHECK: f10:
+; CHECK: 	teq.w	r0, #1114112
+
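
teq is the xor analogue of tst: it sets the flags for Rn EOR Op2 without writing a register, so "xor, then compare against zero" needs no scratch register for the xor result. A minimal sketch (function name illustrative; the constant and CHECK mirror f1 above):

    define i1 @teq_sketch(i32 %a) {
    ; CHECK: teq.w r0, #187
        %t = xor i32 %a, 187
        %c = icmp ne i32 %t, 0
        ret i1 %c
    }
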
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-teq2.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-teq2.ll
index c6867d9..0f122f2 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-teq2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-teq2.ll
@@ -1,34 +1,40 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {teq\\.w\\W*r\[0-9\],\\W*r\[0-9\]$} | count 4
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {teq\\.w\\W*r\[0-9\],\\W*r\[0-9\],\\W*lsl\\W*#5$} | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {teq\\.w\\W*r\[0-9\],\\W*r\[0-9\],\\W*lsr\\W*#6$} | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {teq\\.w\\W*r\[0-9\],\\W*r\[0-9\],\\W*asr\\W*#7$} | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {teq\\.w\\W*r\[0-9\],\\W*r\[0-9\],\\W*ror\\W*#8$} | count 1
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 define i1 @f1(i32 %a, i32 %b) {
+; CHECK: f1
+; CHECK: teq.w  r0, r1
     %tmp = xor i32 %a, %b
     %tmp1 = icmp ne i32 %tmp, 0
     ret i1 %tmp1
 }
 
 define i1 @f2(i32 %a, i32 %b) {
+; CHECK: f2
+; CHECK: teq.w r0, r1
     %tmp = xor i32 %a, %b
     %tmp1 = icmp eq i32 %tmp, 0
     ret i1 %tmp1
 }
 
 define i1 @f3(i32 %a, i32 %b) {
+; CHECK: f3
+; CHECK: teq.w  r0, r1
     %tmp = xor i32 %a, %b
     %tmp1 = icmp ne i32 0, %tmp
     ret i1 %tmp1
 }
 
 define i1 @f4(i32 %a, i32 %b) {
+; CHECK: f4
+; CHECK: teq.w  r0, r1
     %tmp = xor i32 %a, %b
     %tmp1 = icmp eq i32 0, %tmp
     ret i1 %tmp1
 }
 
 define i1 @f6(i32 %a, i32 %b) {
+; CHECK: f6
+; CHECK: teq.w  r0, r1, lsl #5
     %tmp = shl i32 %b, 5
     %tmp1 = xor i32 %a, %tmp
     %tmp2 = icmp eq i32 %tmp1, 0
@@ -36,6 +42,8 @@ define i1 @f6(i32 %a, i32 %b) {
 }
 
 define i1 @f7(i32 %a, i32 %b) {
+; CHECK: f7
+; CHECK: teq.w  r0, r1, lsr #6
     %tmp = lshr i32 %b, 6
     %tmp1 = xor i32 %a, %tmp
     %tmp2 = icmp eq i32 %tmp1, 0
@@ -43,6 +51,8 @@ define i1 @f7(i32 %a, i32 %b) {
 }
 
 define i1 @f8(i32 %a, i32 %b) {
+; CHECK: f8
+; CHECK: teq.w  r0, r1, asr #7
     %tmp = ashr i32 %b, 7
     %tmp1 = xor i32 %a, %tmp
     %tmp2 = icmp eq i32 %tmp1, 0
@@ -50,6 +60,8 @@ define i1 @f8(i32 %a, i32 %b) {
 }
 
 define i1 @f9(i32 %a, i32 %b) {
+; CHECK: f9
+; CHECK: teq.w  r0, r0, ror #8
     %l8 = shl i32 %a, 24
     %r8 = lshr i32 %a, 8
     %tmp = or i32 %l8, %r8
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-tst.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-tst.ll
index 525a817..d905217 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-tst.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-tst.ll
@@ -1,5 +1,5 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep {tst\\.w\\W*r\[0-9\],\\W*#\[0-9\]*} | \
-; RUN:     grep {#187\\|#11141290\\|#-872363008\\|#1114112\\|#-572662307} | count 10
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
+
 
 ; 0x000000bb = 187
 define i1 @f1(i32 %a) {
@@ -7,6 +7,8 @@ define i1 @f1(i32 %a) {
     %tmp1 = icmp ne i32 %tmp, 0
     ret i1 %tmp1
 }
+; CHECK: f1:
+; CHECK: 	tst.w	r0, #187
 
 ; 0x000000bb = 187
 define i1 @f2(i32 %a) {
@@ -14,6 +16,8 @@ define i1 @f2(i32 %a) {
     %tmp1 = icmp eq i32 0, %tmp
     ret i1 %tmp1
 }
+; CHECK: f2:
+; CHECK: 	tst.w	r0, #187
 
 ; 0x00aa00aa = 11141290
 define i1 @f3(i32 %a) {
@@ -21,6 +25,8 @@ define i1 @f3(i32 %a) {
     %tmp1 = icmp eq i32 %tmp, 0
     ret i1 %tmp1
 }
+; CHECK: f3:
+; CHECK: 	tst.w	r0, #11141290
 
 ; 0x00aa00aa = 11141290
 define i1 @f4(i32 %a) {
@@ -28,6 +34,8 @@ define i1 @f4(i32 %a) {
     %tmp1 = icmp ne i32 0, %tmp
     ret i1 %tmp1
 }
+; CHECK: f4:
+; CHECK: 	tst.w	r0, #11141290
 
 ; 0xcc00cc00 = 3422604288
 define i1 @f5(i32 %a) {
@@ -35,6 +43,8 @@ define i1 @f5(i32 %a) {
     %tmp1 = icmp ne i32 %tmp, 0
     ret i1 %tmp1
 }
+; CHECK: f5:
+; CHECK: 	tst.w	r0, #-872363008
 
 ; 0xcc00cc00 = 3422604288
 define i1 @f6(i32 %a) {
@@ -42,6 +52,8 @@ define i1 @f6(i32 %a) {
     %tmp1 = icmp eq i32 0, %tmp
     ret i1 %tmp1
 }
+; CHECK: f6:
+; CHECK: 	tst.w	r0, #-872363008
 
 ; 0xdddddddd = 3722304989
 define i1 @f7(i32 %a) {
@@ -49,6 +61,8 @@ define i1 @f7(i32 %a) {
     %tmp1 = icmp eq i32 %tmp, 0
     ret i1 %tmp1
 }
+; CHECK: f7:
+; CHECK: 	tst.w	r0, #-572662307
 
 ; 0xdddddddd = 3722304989
 define i1 @f8(i32 %a) {
@@ -56,6 +70,8 @@ define i1 @f8(i32 %a) {
     %tmp1 = icmp ne i32 0, %tmp
     ret i1 %tmp1
 }
+; CHECK: f8:
+; CHECK: 	tst.w	r0, #-572662307
 
 ; 0x00110000 = 1114112
 define i1 @f9(i32 %a) {
@@ -63,6 +79,8 @@ define i1 @f9(i32 %a) {
     %tmp1 = icmp ne i32 %tmp, 0
     ret i1 %tmp1
 }
+; CHECK: f9:
+; CHECK: 	tst.w	r0, #1114112
 
 ; 0x00110000 = 1114112
 define i1 @f10(i32 %a) {
@@ -70,3 +88,5 @@ define i1 @f10(i32 %a) {
     %tmp1 = icmp eq i32 0, %tmp
     ret i1 %tmp1
 }
+; CHECK: f10:
+; CHECK: 	tst.w	r0, #1114112
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-uxt_rot.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-uxt_rot.ll
index 37919dd..75e1d70 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-uxt_rot.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-uxt_rot.ll
@@ -1,13 +1,15 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep uxtb | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep uxtab | count 1
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | grep uxth | count 1
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 define i8 @test1(i32 %A.u) zeroext {
+; CHECK: test1
+; CHECK: uxtb r0, r0
     %B.u = trunc i32 %A.u to i8
     ret i8 %B.u
 }
 
 define i32 @test2(i32 %A.u, i32 %B.u) zeroext {
+; CHECK: test2
+; CHECK: uxtab  r0, r0, r1
     %C.u = trunc i32 %B.u to i8
     %D.u = zext i8 %C.u to i32
     %E.u = add i32 %A.u, %D.u
@@ -15,6 +17,8 @@ define i32 @test2(i32 %A.u, i32 %B.u) zeroext {
 }
 
 define i32 @test3(i32 %A.u) zeroext {
+; CHECK: test3
+; CHECK: uxth.w r0, r0, ror #8
     %B.u = lshr i32 %A.u, 8
     %C.u = shl i32 %A.u, 24
     %D.u = or i32 %B.u, %C.u
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-uxtb.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-uxtb.ll
index 4022d95..4e23f53 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-uxtb.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-uxtb.ll
@@ -1,36 +1,47 @@
-; RUN: llc < %s -march=thumb -mattr=+thumb2 | \
-; RUN:   grep uxt | count 10
+; RUN: llc < %s -march=thumb -mattr=+thumb2 | FileCheck %s
 
 define i32 @test1(i32 %x) {
+; CHECK: test1
+; CHECK: uxtb16.w  r0, r0
 	%tmp1 = and i32 %x, 16711935		; <i32> [#uses=1]
 	ret i32 %tmp1
 }
 
 define i32 @test2(i32 %x) {
+; CHECK: test2
+; CHECK: uxtb16.w  r0, r0, ror #8
 	%tmp1 = lshr i32 %x, 8		; <i32> [#uses=1]
 	%tmp2 = and i32 %tmp1, 16711935		; <i32> [#uses=1]
 	ret i32 %tmp2
 }
 
 define i32 @test3(i32 %x) {
+; CHECK: test3
+; CHECK: uxtb16.w  r0, r0, ror #8
 	%tmp1 = lshr i32 %x, 8		; <i32> [#uses=1]
 	%tmp2 = and i32 %tmp1, 16711935		; <i32> [#uses=1]
 	ret i32 %tmp2
 }
 
 define i32 @test4(i32 %x) {
+; CHECK: test4
+; CHECK: uxtb16.w  r0, r0, ror #8
 	%tmp1 = lshr i32 %x, 8		; <i32> [#uses=1]
 	%tmp6 = and i32 %tmp1, 16711935		; <i32> [#uses=1]
 	ret i32 %tmp6
 }
 
 define i32 @test5(i32 %x) {
+; CHECK: test5
+; CHECK: uxtb16.w  r0, r0, ror #8
 	%tmp1 = lshr i32 %x, 8		; <i32> [#uses=1]
 	%tmp2 = and i32 %tmp1, 16711935		; <i32> [#uses=1]
 	ret i32 %tmp2
 }
 
 define i32 @test6(i32 %x) {
+; CHECK: test6
+; CHECK: uxtb16.w  r0, r0, ror #16
 	%tmp1 = lshr i32 %x, 16		; <i32> [#uses=1]
 	%tmp2 = and i32 %tmp1, 255		; <i32> [#uses=1]
 	%tmp4 = shl i32 %x, 16		; <i32> [#uses=1]
@@ -40,6 +51,8 @@ define i32 @test6(i32 %x) {
 }
 
 define i32 @test7(i32 %x) {
+; CHECK: test7
+; CHECK: uxtb16.w  r0, r0, ror #16
 	%tmp1 = lshr i32 %x, 16		; <i32> [#uses=1]
 	%tmp2 = and i32 %tmp1, 255		; <i32> [#uses=1]
 	%tmp4 = shl i32 %x, 16		; <i32> [#uses=1]
@@ -49,6 +62,8 @@ define i32 @test7(i32 %x) {
 }
 
 define i32 @test8(i32 %x) {
+; CHECK: test8
+; CHECK: uxtb16.w  r0, r0, ror #24
 	%tmp1 = shl i32 %x, 8		; <i32> [#uses=1]
 	%tmp2 = and i32 %tmp1, 16711680		; <i32> [#uses=1]
 	%tmp5 = lshr i32 %x, 24		; <i32> [#uses=1]
@@ -57,6 +72,8 @@ define i32 @test8(i32 %x) {
 }
 
 define i32 @test9(i32 %x) {
+; CHECK: test9
+; CHECK: uxtb16.w  r0, r0, ror #24
 	%tmp1 = lshr i32 %x, 24		; <i32> [#uses=1]
 	%tmp4 = shl i32 %x, 8		; <i32> [#uses=1]
 	%tmp5 = and i32 %tmp4, 16711680		; <i32> [#uses=1]
@@ -65,6 +82,13 @@ define i32 @test9(i32 %x) {
 }
 
 define i32 @test10(i32 %p0) {
+; CHECK: test10
+; CHECK: mov.w r1, #16253176
+; CHECK: and.w r0, r1, r0, lsr #7
+; CHECK: lsrs  r1, r0, #5
+; CHECK: uxtb16.w  r1, r1
+; CHECK: orr.w r0, r1, r0
+
 	%tmp1 = lshr i32 %p0, 7		; <i32> [#uses=1]
 	%tmp2 = and i32 %tmp1, 16253176		; <i32> [#uses=2]
 	%tmp4 = lshr i32 %tmp2, 5		; <i32> [#uses=1]
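
uxtb16 zero-extends bytes 0 and 2 within their halfwords, i.e. it is an and with 0x00ff00ff (16711935), optionally applied after a ror to select a different byte pair; most tests in this file are built around that mask. A minimal sketch of the unrotated case (function name illustrative):

    define i32 @uxtb16_sketch(i32 %x) {
    ; CHECK: uxtb16.w r0, r0
        %m = and i32 %x, 16711935 ; 0x00ff00ff
        ret i32 %m
    }
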
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2006-04-04-CrossBlockCrash.ll b/libclamav/c++/llvm/test/CodeGen/X86/2006-04-04-CrossBlockCrash.ll
index c106f57..3f67097 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2006-04-04-CrossBlockCrash.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2006-04-04-CrossBlockCrash.ll
@@ -11,7 +11,7 @@ target triple = "i686-apple-darwin8.6.1"
 
 declare <4 x float> @llvm.x86.sse.cmp.ps(<4 x float>, <4 x float>, i8)
 
-declare <4 x i32> @llvm.x86.sse2.packssdw.128(<4 x i32>, <4 x i32>)
+declare <8 x i16> @llvm.x86.sse2.packssdw.128(<4 x i32>, <4 x i32>)
 
 declare i32 @llvm.x86.sse2.pmovmskb.128(<16 x i8>)
 
@@ -33,8 +33,8 @@ cond_false183:		; preds = %cond_false, %entry
 	%tmp337 = bitcast <4 x i32> %tmp336 to <4 x float>		; <<4 x float>> [#uses=1]
 	%tmp378 = tail call <4 x float> @llvm.x86.sse.cmp.ps( <4 x float> %tmp337, <4 x float> zeroinitializer, i8 1 )		; <<4 x float>> [#uses=1]
 	%tmp379 = bitcast <4 x float> %tmp378 to <4 x i32>		; <<4 x i32>> [#uses=1]
-	%tmp388 = tail call <4 x i32> @llvm.x86.sse2.packssdw.128( <4 x i32> zeroinitializer, <4 x i32> %tmp379 )		; <<4 x i32>> [#uses=1]
-	%tmp392 = bitcast <4 x i32> %tmp388 to <8 x i16>		; <<8 x i16>> [#uses=1]
+	%tmp388 = tail call <8 x i16> @llvm.x86.sse2.packssdw.128( <4 x i32> zeroinitializer, <4 x i32> %tmp379 )		; <<8 x i16>> [#uses=1]
+	%tmp392 = bitcast <8 x i16> %tmp388 to <8 x i16>		; <<8 x i16>> [#uses=1]
 	%tmp399 = extractelement <8 x i16> %tmp392, i32 7		; <i16> [#uses=1]
 	%tmp423 = insertelement <8 x i16> zeroinitializer, i16 %tmp399, i32 7		; <<8 x i16>> [#uses=1]
 	%tmp427 = bitcast <8 x i16> %tmp423 to <16 x i8>		; <<16 x i8>> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2006-05-01-SchedCausingSpills.ll b/libclamav/c++/llvm/test/CodeGen/X86/2006-05-01-SchedCausingSpills.ll
index 49f3a95..b045329 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2006-05-01-SchedCausingSpills.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2006-05-01-SchedCausingSpills.ll
@@ -17,8 +17,8 @@ define i32 @foo(<4 x float>* %a, <4 x float>* %b, <4 x float>* %c, <4 x float>*
 	%tmp75 = bitcast <4 x float> %tmp74 to <4 x i32>		; <<4 x i32>> [#uses=1]
 	%tmp88 = tail call <4 x float> @llvm.x86.sse.cmp.ps( <4 x float> %tmp44, <4 x float> %tmp61, i8 1 )		; <<4 x float>> [#uses=1]
 	%tmp89 = bitcast <4 x float> %tmp88 to <4 x i32>		; <<4 x i32>> [#uses=1]
-	%tmp98 = tail call <4 x i32> @llvm.x86.sse2.packssdw.128( <4 x i32> %tmp75, <4 x i32> %tmp89 )		; <<4 x i32>> [#uses=1]
-	%tmp102 = bitcast <4 x i32> %tmp98 to <8 x i16>		; <<8 x i16>> [#uses=1]
+	%tmp98 = tail call <8 x i16> @llvm.x86.sse2.packssdw.128( <4 x i32> %tmp75, <4 x i32> %tmp89 )		; <<8 x i16>> [#uses=1]
+	%tmp102 = bitcast <8 x i16> %tmp98 to <8 x i16>		; <<8 x i16>> [#uses=1]
 	%tmp.upgrd.1 = shufflevector <8 x i16> %tmp102, <8 x i16> undef, <8 x i32> < i32 0, i32 1, i32 2, i32 3, i32 6, i32 5, i32 4, i32 7 >		; <<8 x i16>> [#uses=1]
 	%tmp105 = shufflevector <8 x i16> %tmp.upgrd.1, <8 x i16> undef, <8 x i32> < i32 2, i32 1, i32 0, i32 3, i32 4, i32 5, i32 6, i32 7 >		; <<8 x i16>> [#uses=1]
 	%tmp105.upgrd.2 = bitcast <8 x i16> %tmp105 to <4 x float>		; <<4 x float>> [#uses=1]
@@ -32,8 +32,8 @@ define i32 @foo(<4 x float>* %a, <4 x float>* %b, <4 x float>* %c, <4 x float>*
 	%tmp134 = bitcast <4 x float> %tmp133 to <4 x i32>		; <<4 x i32>> [#uses=1]
 	%tmp147 = tail call <4 x float> @llvm.x86.sse.cmp.ps( <4 x float> %tmp44, <4 x float> %tmp120, i8 1 )		; <<4 x float>> [#uses=1]
 	%tmp148 = bitcast <4 x float> %tmp147 to <4 x i32>		; <<4 x i32>> [#uses=1]
-	%tmp159 = tail call <4 x i32> @llvm.x86.sse2.packssdw.128( <4 x i32> %tmp134, <4 x i32> %tmp148 )		; <<4 x i32>> [#uses=1]
-	%tmp163 = bitcast <4 x i32> %tmp159 to <8 x i16>		; <<8 x i16>> [#uses=1]
+	%tmp159 = tail call <8 x i16> @llvm.x86.sse2.packssdw.128( <4 x i32> %tmp134, <4 x i32> %tmp148 )		; <<8 x i16>> [#uses=1]
+	%tmp163 = bitcast <8 x i16> %tmp159 to <8 x i16>		; <<8 x i16>> [#uses=1]
 	%tmp164 = shufflevector <8 x i16> %tmp163, <8 x i16> undef, <8 x i32> < i32 0, i32 1, i32 2, i32 3, i32 6, i32 5, i32 4, i32 7 >		; <<8 x i16>> [#uses=1]
 	%tmp166 = shufflevector <8 x i16> %tmp164, <8 x i16> undef, <8 x i32> < i32 2, i32 1, i32 0, i32 3, i32 4, i32 5, i32 6, i32 7 >		; <<8 x i16>> [#uses=1]
 	%tmp166.upgrd.4 = bitcast <8 x i16> %tmp166 to <4 x float>		; <<4 x float>> [#uses=1]
@@ -47,8 +47,8 @@ define i32 @foo(<4 x float>* %a, <4 x float>* %b, <4 x float>* %c, <4 x float>*
 	%tmp195 = bitcast <4 x float> %tmp194 to <4 x i32>		; <<4 x i32>> [#uses=1]
 	%tmp208 = tail call <4 x float> @llvm.x86.sse.cmp.ps( <4 x float> %tmp44, <4 x float> %tmp181, i8 1 )		; <<4 x float>> [#uses=1]
 	%tmp209 = bitcast <4 x float> %tmp208 to <4 x i32>		; <<4 x i32>> [#uses=1]
-	%tmp220 = tail call <4 x i32> @llvm.x86.sse2.packssdw.128( <4 x i32> %tmp195, <4 x i32> %tmp209 )		; <<4 x i32>> [#uses=1]
-	%tmp224 = bitcast <4 x i32> %tmp220 to <8 x i16>		; <<8 x i16>> [#uses=1]
+	%tmp220 = tail call <8 x i16> @llvm.x86.sse2.packssdw.128( <4 x i32> %tmp195, <4 x i32> %tmp209 )		; <<8 x i16>> [#uses=1]
+	%tmp224 = bitcast <8 x i16> %tmp220 to <8 x i16>		; <<8 x i16>> [#uses=1]
 	%tmp225 = shufflevector <8 x i16> %tmp224, <8 x i16> undef, <8 x i32> < i32 0, i32 1, i32 2, i32 3, i32 6, i32 5, i32 4, i32 7 >		; <<8 x i16>> [#uses=1]
 	%tmp227 = shufflevector <8 x i16> %tmp225, <8 x i16> undef, <8 x i32> < i32 2, i32 1, i32 0, i32 3, i32 4, i32 5, i32 6, i32 7 >		; <<8 x i16>> [#uses=1]
 	%tmp227.upgrd.6 = bitcast <8 x i16> %tmp227 to <4 x float>		; <<4 x float>> [#uses=1]
@@ -62,8 +62,8 @@ define i32 @foo(<4 x float>* %a, <4 x float>* %b, <4 x float>* %c, <4 x float>*
 	%tmp256 = bitcast <4 x float> %tmp255 to <4 x i32>		; <<4 x i32>> [#uses=1]
 	%tmp269 = tail call <4 x float> @llvm.x86.sse.cmp.ps( <4 x float> %tmp44, <4 x float> %tmp242, i8 1 )		; <<4 x float>> [#uses=1]
 	%tmp270 = bitcast <4 x float> %tmp269 to <4 x i32>		; <<4 x i32>> [#uses=1]
-	%tmp281 = tail call <4 x i32> @llvm.x86.sse2.packssdw.128( <4 x i32> %tmp256, <4 x i32> %tmp270 )		; <<4 x i32>> [#uses=1]
-	%tmp285 = bitcast <4 x i32> %tmp281 to <8 x i16>		; <<8 x i16>> [#uses=1]
+	%tmp281 = tail call <8 x i16> @llvm.x86.sse2.packssdw.128( <4 x i32> %tmp256, <4 x i32> %tmp270 )		; <<8 x i16>> [#uses=1]
+	%tmp285 = bitcast <8 x i16> %tmp281 to <8 x i16>		; <<8 x i16>> [#uses=1]
 	%tmp286 = shufflevector <8 x i16> %tmp285, <8 x i16> undef, <8 x i32> < i32 0, i32 1, i32 2, i32 3, i32 6, i32 5, i32 4, i32 7 >		; <<8 x i16>> [#uses=1]
 	%tmp288 = shufflevector <8 x i16> %tmp286, <8 x i16> undef, <8 x i32> < i32 2, i32 1, i32 0, i32 3, i32 4, i32 5, i32 6, i32 7 >		; <<8 x i16>> [#uses=1]
 	%tmp288.upgrd.8 = bitcast <8 x i16> %tmp288 to <4 x float>		; <<4 x float>> [#uses=1]
@@ -73,4 +73,4 @@ define i32 @foo(<4 x float>* %a, <4 x float>* %b, <4 x float>* %c, <4 x float>*
 
 declare <4 x float> @llvm.x86.sse.cmp.ps(<4 x float>, <4 x float>, i8)
 
-declare <4 x i32> @llvm.x86.sse2.packssdw.128(<4 x i32>, <4 x i32>)
+declare <8 x i16> @llvm.x86.sse2.packssdw.128(<4 x i32>, <4 x i32>)
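
The declaration fix-ups in the two X86 tests above follow from what packssdw actually does: it packs eight 32-bit lanes (two <4 x i32> operands) into eight signed-saturated 16-bit lanes, so the intrinsic's result type is <8 x i16>; the old <4 x i32> declarations only held together because every use was immediately bitcast. The same retyping recurs below for the byte-packing variant packuswb (<16 x i8>). A minimal sketch of the corrected typing (function name illustrative):

    declare <8 x i16> @llvm.x86.sse2.packssdw.128(<4 x i32>, <4 x i32>)

    define <8 x i16> @pack_sketch(<4 x i32> %a, <4 x i32> %b) {
        %r = tail call <8 x i16> @llvm.x86.sse2.packssdw.128(<4 x i32> %a, <4 x i32> %b)
        ret <8 x i16> %r
    }
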
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2006-10-19-SwitchUnnecessaryBranching.ll b/libclamav/c++/llvm/test/CodeGen/X86/2006-10-19-SwitchUnnecessaryBranching.ll
index 4fd8072..88e8b4a 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2006-10-19-SwitchUnnecessaryBranching.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2006-10-19-SwitchUnnecessaryBranching.ll
@@ -7,7 +7,7 @@ define i32 @test(i32 %argc, i8** %argv) nounwind {
 entry:
 ; CHECK: cmpl	$2
 ; CHECK-NEXT: je
-; CHECK-NEXT: LBB1_1
+; CHECK-NEXT: %entry
 
 	switch i32 %argc, label %UnifiedReturnBlock [
 		 i32 1, label %bb
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2007-01-08-InstrSched.ll b/libclamav/c++/llvm/test/CodeGen/X86/2007-01-08-InstrSched.ll
index e1bae32..81f0a1d 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2007-01-08-InstrSched.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2007-01-08-InstrSched.ll
@@ -11,9 +11,12 @@ define float @foo(float %x) nounwind {
     %tmp14 = fadd float %tmp12, %tmp7
     ret float %tmp14
 
-; CHECK:      mulss	LCPI1_2(%rip)
+; CHECK:      mulss	LCPI1_3(%rip)
+; CHECK-NEXT: mulss	LCPI1_0(%rip)
+; CHECK-NEXT: mulss	LCPI1_1(%rip)
+; CHECK-NEXT: mulss	LCPI1_2(%rip)
+; CHECK-NEXT: addss
 ; CHECK-NEXT: addss
-; CHECK-NEXT: mulss	LCPI1_3(%rip)
 ; CHECK-NEXT: addss
 ; CHECK-NEXT: ret
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2007-05-17-ShuffleISelBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2007-05-17-ShuffleISelBug.ll
index 989dfc5..b27ef83 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2007-05-17-ShuffleISelBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2007-05-17-ShuffleISelBug.ll
@@ -1,7 +1,7 @@
 ; RUN: llc < %s -march=x86 -mattr=+sse2
 ; RUN: llc < %s -march=x86 -mattr=+sse2 | not grep punpckhwd
 
-declare <8 x i16> @llvm.x86.sse2.packuswb.128(<8 x i16>, <8 x i16>)
+declare <16 x i8> @llvm.x86.sse2.packuswb.128(<8 x i16>, <8 x i16>)
 
 declare <8 x i16> @llvm.x86.sse2.psrl.w(<8 x i16>, <8 x i16>)
 
@@ -13,8 +13,8 @@ define fastcc void @test(i32* %src, i32 %sbpr, i32* %dst, i32 %dbpr, i32 %w, i32
 	%tmp805 = add <4 x i32> %tmp777, zeroinitializer
 	%tmp832 = bitcast <4 x i32> %tmp805 to <8 x i16>
 	%tmp838 = tail call <8 x i16> @llvm.x86.sse2.psrl.w( <8 x i16> %tmp832, <8 x i16> < i16 8, i16 undef, i16 undef, i16 undef, i16 undef, i16 undef, i16 undef, i16 undef > )
-	%tmp1020 = tail call <8 x i16> @llvm.x86.sse2.packuswb.128( <8 x i16> zeroinitializer, <8 x i16> %tmp838 )
-	%tmp1030 = bitcast <8 x i16> %tmp1020 to <4 x i32>
+	%tmp1020 = tail call <16 x i8> @llvm.x86.sse2.packuswb.128( <8 x i16> zeroinitializer, <8 x i16> %tmp838 )
+	%tmp1030 = bitcast <16 x i8> %tmp1020 to <4 x i32>
 	%tmp1033 = add <4 x i32> zeroinitializer, %tmp1030
 	%tmp1048 = bitcast <4 x i32> %tmp1033 to <2 x i64>
 	%tmp1049 = or <2 x i64> %tmp1048, zeroinitializer
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2007-11-30-LoadFolding-Bug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2007-11-30-LoadFolding-Bug.ll
index 0626d28..721d4c9 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2007-11-30-LoadFolding-Bug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2007-11-30-LoadFolding-Bug.ll
@@ -1,6 +1,5 @@
-; RUN: llc < %s -march=x86 -mattr=+sse2 -stats |& \
-; RUN:   grep {1 .*folded into instructions}
-; Increment in loop bb.128.i adjusted to 2, to prevent loop reversal from
+; RUN: llc < %s -march=x86 -mattr=+sse2 | FileCheck %s
+; Increment in loop bb.i28.i adjusted to 2, to prevent loop reversal from
 ; kicking in.
 
 declare fastcc void @rdft(i32, i32, double*, i32*, double*)
@@ -34,6 +33,9 @@ cond_next36.i:		; preds = %cond_next.i
 	br label %bb.i28.i
 
 bb.i28.i:		; preds = %bb.i28.i, %cond_next36.i
+; CHECK: %bb.i28.i
+; CHECK: addl $2
+; CHECK: addl $2
 	%j.0.reg2mem.0.i16.i = phi i32 [ 0, %cond_next36.i ], [ %indvar.next39.i, %bb.i28.i ]		; <i32> [#uses=2]
 	%din_addr.1.reg2mem.0.i17.i = phi double [ 0.000000e+00, %cond_next36.i ], [ %tmp16.i25.i, %bb.i28.i ]		; <double> [#uses=1]
 	%tmp1.i18.i = fptosi double %din_addr.1.reg2mem.0.i17.i to i32		; <i32> [#uses=2]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2008-02-18-TailMergingBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2008-02-18-TailMergingBug.ll
index 9b52c5c..7463a0e 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2008-02-18-TailMergingBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2008-02-18-TailMergingBug.ll
@@ -3,7 +3,7 @@
 
 @.str = internal constant [48 x i8] c"transformed bounds: (%.2f, %.2f), (%.2f, %.2f)\0A\00"		; <[48 x i8]*> [#uses=1]
 
-define void @minmax(float* %result) nounwind  {
+define void @minmax(float* %result) nounwind optsize {
 entry:
 	%tmp2 = load float* %result, align 4		; <float> [#uses=6]
 	%tmp4 = getelementptr float* %result, i32 2		; <float*> [#uses=5]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2008-04-15-LiveVariableBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2008-04-15-LiveVariableBug.ll
index 83eb61a..2aea9c5 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2008-04-15-LiveVariableBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2008-04-15-LiveVariableBug.ll
@@ -1,5 +1,6 @@
 ; RUN: llc < %s -mtriple=x86_64-apple-darwin
 ; RUN: llc < %s -mtriple=x86_64-apple-darwin -relocation-model=pic -disable-fp-elim -O0 -regalloc=local
+; PR5534
 
 	%struct.CGPoint = type { double, double }
 	%struct.NSArray = type { %struct.NSObject }
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2008-05-12-tailmerge-5.ll b/libclamav/c++/llvm/test/CodeGen/X86/2008-05-12-tailmerge-5.ll
index 1f95a24..4852e89 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2008-05-12-tailmerge-5.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2008-05-12-tailmerge-5.ll
@@ -6,7 +6,7 @@ target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f3
 target triple = "x86_64-apple-darwin8"
 	%struct.BoundaryAlignment = type { [3 x i8], i8, i16, i16, i8, [2 x i8] }
 
-define void @passing2(i64 %str.0, i64 %str.1, i16 signext  %s, i32 %j, i8 signext  %c, i16 signext  %t, i16 signext  %u, i8 signext  %d) nounwind  {
+define void @passing2(i64 %str.0, i64 %str.1, i16 signext  %s, i32 %j, i8 signext  %c, i16 signext  %t, i16 signext  %u, i8 signext  %d) nounwind optsize {
 entry:
 	%str_addr = alloca %struct.BoundaryAlignment		; <%struct.BoundaryAlignment*> [#uses=7]
 	%s_addr = alloca i16		; <i16*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2008-07-11-SpillerBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2008-07-11-SpillerBug.ll
index f75e605..88a5fde 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2008-07-11-SpillerBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2008-07-11-SpillerBug.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=x86 -relocation-model=static -disable-fp-elim | FileCheck %s
+; RUN: llc < %s -march=x86 -relocation-model=static -disable-fp-elim -post-RA-scheduler=false | FileCheck %s
 ; PR2536
 
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-03-13-PHIElimBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-03-13-PHIElimBug.ll
index 878fa51..ad7f9f7 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2009-03-13-PHIElimBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-03-13-PHIElimBug.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=x86 | grep -A 2 {call.*f} | grep movl
+; RUN: llc < %s -march=x86 | FileCheck %s
 ; Check the register copy comes after the call to f and before the call to g
 ; PR3784
 
@@ -26,3 +26,7 @@ lpad:		; preds = %cont, %entry
 	%y = phi i32 [ %a, %entry ], [ %aa, %cont ]		; <i32> [#uses=1]
 	ret i32 %y
 }
+
+; CHECK: call{{.*}}f
+; CHECK-NEXT: Llabel1:
+; CHECK-NEXT: movl %eax, %esi
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-03-16-PHIElimInLPad.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-03-16-PHIElimInLPad.ll
index adbd241..11c4101 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2009-03-16-PHIElimInLPad.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-03-16-PHIElimInLPad.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=x86 -asm-verbose | grep -A 1 lpad | grep Llabel
+; RUN: llc < %s -march=x86 -asm-verbose | FileCheck %s
 ; Check that register copies in the landing pad come after the EH_LABEL
 
 declare i32 @f()
@@ -19,3 +19,6 @@ lpad:		; preds = %cont, %entry
 	%v = phi i32 [ %x, %entry ], [ %a, %cont ]		; <i32> [#uses=1]
 	ret i32 %v
 }
+
+; CHECK: lpad
+; CHECK-NEXT: Llabel
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-04-20-LinearScanOpt.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-04-20-LinearScanOpt.ll
index 4d25b0f..d7b9463 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2009-04-20-LinearScanOpt.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-04-20-LinearScanOpt.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -mtriple=x86_64-apple-darwin10.0 -relocation-model=pic -disable-fp-elim -stats |& grep asm-printer | grep 84
+; RUN: llc < %s -mtriple=x86_64-apple-darwin10.0 -relocation-model=pic -disable-fp-elim -stats |& grep asm-printer | grep 83
 ; rdar://6802189
 
 ; Test if linearscan is unfavoring registers for allocation to allow more reuse
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-04-21-NoReloadImpDef.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-04-21-NoReloadImpDef.ll
index c6e6e50..5bd956a 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2009-04-21-NoReloadImpDef.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-04-21-NoReloadImpDef.ll
@@ -1,5 +1,5 @@
 ; RUN: llc -mtriple=i386-apple-darwin10.0 -relocation-model=pic \
-; RUN:     -disable-fp-elim -mattr=-sse41,-sse3,+sse2 < %s | \
+; RUN:     -disable-fp-elim -mattr=-sse41,-sse3,+sse2 -post-RA-scheduler=false < %s | \
 ; RUN:   FileCheck %s
 ; rdar://6808032
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-09-10-SpillComments.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-09-10-SpillComments.ll
new file mode 100644
index 0000000..8c62f4d
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-09-10-SpillComments.ll
@@ -0,0 +1,104 @@
+; RUN: llc < %s -mtriple=x86_64-unknown-linux | grep "Spill"
+; RUN: llc < %s -mtriple=x86_64-unknown-linux | grep "Folded Spill"
+; RUN: llc < %s -mtriple=x86_64-unknown-linux | grep "Reload"
+
+	%struct..0anon = type { i32 }
+	%struct.rtvec_def = type { i32, [1 x %struct..0anon] }
+	%struct.rtx_def = type { i16, i8, i8, [1 x %struct..0anon] }
+@rtx_format = external global [116 x i8*]		; <[116 x i8*]*> [#uses=1]
+@rtx_length = external global [117 x i32]		; <[117 x i32]*> [#uses=1]
+
+declare %struct.rtx_def* @fixup_memory_subreg(%struct.rtx_def*, %struct.rtx_def*, i32)
+
+define %struct.rtx_def* @walk_fixup_memory_subreg(%struct.rtx_def* %x, %struct.rtx_def* %insn) {
+entry:
+	%tmp2 = icmp eq %struct.rtx_def* %x, null		; <i1> [#uses=1]
+	br i1 %tmp2, label %UnifiedReturnBlock, label %cond_next
+
+cond_next:		; preds = %entry
+	%tmp6 = getelementptr %struct.rtx_def* %x, i32 0, i32 0		; <i16*> [#uses=1]
+	%tmp7 = load i16* %tmp6		; <i16> [#uses=2]
+	%tmp78 = zext i16 %tmp7 to i32		; <i32> [#uses=2]
+	%tmp10 = icmp eq i16 %tmp7, 54		; <i1> [#uses=1]
+	br i1 %tmp10, label %cond_true13, label %cond_next32
+
+cond_true13:		; preds = %cond_next
+	%tmp15 = getelementptr %struct.rtx_def* %x, i32 0, i32 3		; <[1 x %struct..0anon]*> [#uses=1]
+	%tmp1718 = bitcast [1 x %struct..0anon]* %tmp15 to %struct.rtx_def**		; <%struct.rtx_def**> [#uses=1]
+	%tmp19 = load %struct.rtx_def** %tmp1718		; <%struct.rtx_def*> [#uses=1]
+	%tmp20 = getelementptr %struct.rtx_def* %tmp19, i32 0, i32 0		; <i16*> [#uses=1]
+	%tmp21 = load i16* %tmp20		; <i16> [#uses=1]
+	%tmp22 = icmp eq i16 %tmp21, 57		; <i1> [#uses=1]
+	br i1 %tmp22, label %cond_true25, label %cond_next32
+
+cond_true25:		; preds = %cond_true13
+	%tmp29 = tail call %struct.rtx_def* @fixup_memory_subreg( %struct.rtx_def* %x, %struct.rtx_def* %insn, i32 1 )		; <%struct.rtx_def*> [#uses=1]
+	ret %struct.rtx_def* %tmp29
+
+cond_next32:		; preds = %cond_true13, %cond_next
+	%tmp34 = getelementptr [116 x i8*]* @rtx_format, i32 0, i32 %tmp78		; <i8**> [#uses=1]
+	%tmp35 = load i8** %tmp34, align 4		; <i8*> [#uses=1]
+	%tmp37 = getelementptr [117 x i32]* @rtx_length, i32 0, i32 %tmp78		; <i32*> [#uses=1]
+	%tmp38 = load i32* %tmp37, align 4		; <i32> [#uses=1]
+	%i.011 = add i32 %tmp38, -1		; <i32> [#uses=2]
+	%tmp12513 = icmp sgt i32 %i.011, -1		; <i1> [#uses=1]
+	br i1 %tmp12513, label %bb, label %UnifiedReturnBlock
+
+bb:		; preds = %bb123, %cond_next32
+	%indvar = phi i32 [ %indvar.next26, %bb123 ], [ 0, %cond_next32 ]		; <i32> [#uses=2]
+	%i.01.0 = sub i32 %i.011, %indvar		; <i32> [#uses=5]
+	%tmp42 = getelementptr i8* %tmp35, i32 %i.01.0		; <i8*> [#uses=2]
+	%tmp43 = load i8* %tmp42		; <i8> [#uses=1]
+	switch i8 %tmp43, label %bb123 [
+		 i8 101, label %cond_true47
+		 i8 69, label %bb105.preheader
+	]
+
+cond_true47:		; preds = %bb
+	%tmp52 = getelementptr %struct.rtx_def* %x, i32 0, i32 3, i32 %i.01.0		; <%struct..0anon*> [#uses=1]
+	%tmp5354 = bitcast %struct..0anon* %tmp52 to %struct.rtx_def**		; <%struct.rtx_def**> [#uses=1]
+	%tmp55 = load %struct.rtx_def** %tmp5354		; <%struct.rtx_def*> [#uses=1]
+	%tmp58 = tail call  %struct.rtx_def* @walk_fixup_memory_subreg( %struct.rtx_def* %tmp55, %struct.rtx_def* %insn )		; <%struct.rtx_def*> [#uses=1]
+	%tmp62 = getelementptr %struct.rtx_def* %x, i32 0, i32 3, i32 %i.01.0, i32 0		; <i32*> [#uses=1]
+	%tmp58.c = ptrtoint %struct.rtx_def* %tmp58 to i32		; <i32> [#uses=1]
+	store i32 %tmp58.c, i32* %tmp62
+	%tmp6816 = load i8* %tmp42		; <i8> [#uses=1]
+	%tmp6917 = icmp eq i8 %tmp6816, 69		; <i1> [#uses=1]
+	br i1 %tmp6917, label %bb105.preheader, label %bb123
+
+bb105.preheader:		; preds = %cond_true47, %bb
+	%tmp11020 = getelementptr %struct.rtx_def* %x, i32 0, i32 3, i32 %i.01.0		; <%struct..0anon*> [#uses=1]
+	%tmp11111221 = bitcast %struct..0anon* %tmp11020 to %struct.rtvec_def**		; <%struct.rtvec_def**> [#uses=3]
+	%tmp11322 = load %struct.rtvec_def** %tmp11111221		; <%struct.rtvec_def*> [#uses=1]
+	%tmp11423 = getelementptr %struct.rtvec_def* %tmp11322, i32 0, i32 0		; <i32*> [#uses=1]
+	%tmp11524 = load i32* %tmp11423		; <i32> [#uses=1]
+	%tmp11625 = icmp eq i32 %tmp11524, 0		; <i1> [#uses=1]
+	br i1 %tmp11625, label %bb123, label %bb73
+
+bb73:		; preds = %bb73, %bb105.preheader
+	%j.019 = phi i32 [ %tmp104, %bb73 ], [ 0, %bb105.preheader ]		; <i32> [#uses=3]
+	%tmp81 = load %struct.rtvec_def** %tmp11111221		; <%struct.rtvec_def*> [#uses=2]
+	%tmp92 = getelementptr %struct.rtvec_def* %tmp81, i32 0, i32 1, i32 %j.019		; <%struct..0anon*> [#uses=1]
+	%tmp9394 = bitcast %struct..0anon* %tmp92 to %struct.rtx_def**		; <%struct.rtx_def**> [#uses=1]
+	%tmp95 = load %struct.rtx_def** %tmp9394		; <%struct.rtx_def*> [#uses=1]
+	%tmp98 = tail call  %struct.rtx_def* @walk_fixup_memory_subreg( %struct.rtx_def* %tmp95, %struct.rtx_def* %insn )		; <%struct.rtx_def*> [#uses=1]
+	%tmp101 = getelementptr %struct.rtvec_def* %tmp81, i32 0, i32 1, i32 %j.019, i32 0		; <i32*> [#uses=1]
+	%tmp98.c = ptrtoint %struct.rtx_def* %tmp98 to i32		; <i32> [#uses=1]
+	store i32 %tmp98.c, i32* %tmp101
+	%tmp104 = add i32 %j.019, 1		; <i32> [#uses=2]
+	%tmp113 = load %struct.rtvec_def** %tmp11111221		; <%struct.rtvec_def*> [#uses=1]
+	%tmp114 = getelementptr %struct.rtvec_def* %tmp113, i32 0, i32 0		; <i32*> [#uses=1]
+	%tmp115 = load i32* %tmp114		; <i32> [#uses=1]
+	%tmp116 = icmp ult i32 %tmp104, %tmp115		; <i1> [#uses=1]
+	br i1 %tmp116, label %bb73, label %bb123
+
+bb123:		; preds = %bb73, %bb105.preheader, %cond_true47, %bb
+	%i.0 = add i32 %i.01.0, -1		; <i32> [#uses=1]
+	%tmp125 = icmp sgt i32 %i.0, -1		; <i1> [#uses=1]
+	%indvar.next26 = add i32 %indvar, 1		; <i32> [#uses=1]
+	br i1 %tmp125, label %bb, label %UnifiedReturnBlock
+
+UnifiedReturnBlock:		; preds = %bb123, %cond_next32, %entry
+	%UnifiedRetVal = phi %struct.rtx_def* [ null, %entry ], [ %x, %cond_next32 ], [ %x, %bb123 ]		; <%struct.rtx_def*> [#uses=1]
+	ret %struct.rtx_def* %UnifiedRetVal
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-09-19-SchedCustomLoweringBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-09-19-SchedCustomLoweringBug.ll
index 646806e..f3cf1d5 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2009-09-19-SchedCustomLoweringBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-09-19-SchedCustomLoweringBug.ll
@@ -9,9 +9,7 @@ entry:
   br label %bb
 
 bb:                                               ; preds = %bb1, %entry
-; CHECK:      movl %e
-; CHECK-NEXT: addl $1
-; CHECK-NEXT: movl %e
+; CHECK:      addl $1
 ; CHECK-NEXT: adcl $0
   %i.0 = phi i64 [ 0, %entry ], [ %0, %bb1 ]      ; <i64> [#uses=1]
   %0 = add nsw i64 %i.0, 1                        ; <i64> [#uses=2]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-10-08-MachineLICMBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-10-08-MachineLICMBug.ll
new file mode 100644
index 0000000..91c5440
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-10-08-MachineLICMBug.ll
@@ -0,0 +1,264 @@
+; RUN: llc < %s -mtriple=i386-apple-darwin -relocation-model=pic -stats |& grep {machine-licm} | grep 2
+; rdar://7274692
+
+%0 = type { [125 x i32] }
+%1 = type { i32 }
+%struct..5sPragmaType = type { i8*, i32 }
+%struct.AggInfo = type { i8, i8, i32, %struct.ExprList*, i32, %struct.AggInfo_col*, i32, i32, i32, %struct.AggInfo_func*, i32, i32 }
+%struct.AggInfo_col = type { %struct.Table*, i32, i32, i32, i32, %struct.Expr* }
+%struct.AggInfo_func = type { %struct.Expr*, %struct.FuncDef*, i32, i32 }
+%struct.AuxData = type { i8*, void (i8*)* }
+%struct.Bitvec = type { i32, i32, i32, %0 }
+%struct.BtCursor = type { %struct.Btree*, %struct.BtShared*, %struct.BtCursor*, %struct.BtCursor*, i32 (i8*, i32, i8*, i32, i8*)*, i8*, i32, %struct.MemPage*, i32, %struct.CellInfo, i8, i8, i8*, i64, i32, i8, i32* }
+%struct.BtLock = type { %struct.Btree*, i32, i8, %struct.BtLock* }
+%struct.BtShared = type { %struct.Pager*, %struct.sqlite3*, %struct.BtCursor*, %struct.MemPage*, i8, i8, i8, i8, i8, i8, i8, i8, i32, i16, i16, i32, i32, i32, i32, i8, i32, i8*, void (i8*)*, %struct.sqlite3_mutex*, %struct.BusyHandler, i32, %struct.BtShared*, %struct.BtLock*, %struct.Btree* }
+%struct.Btree = type { %struct.sqlite3*, %struct.BtShared*, i8, i8, i8, i32, %struct.Btree*, %struct.Btree* }
+%struct.BtreeMutexArray = type { i32, [11 x %struct.Btree*] }
+%struct.BusyHandler = type { i32 (i8*, i32)*, i8*, i32 }
+%struct.CellInfo = type { i8*, i64, i32, i32, i16, i16, i16, i16 }
+%struct.CollSeq = type { i8*, i8, i8, i8*, i32 (i8*, i32, i8*, i32, i8*)*, void (i8*)* }
+%struct.Column = type { i8*, %struct.Expr*, i8*, i8*, i8, i8, i8, i8 }
+%struct.Context = type { i64, i32, %struct.Fifo }
+%struct.CountCtx = type { i64 }
+%struct.Cursor = type { %struct.BtCursor*, i32, i64, i64, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i64, %struct.Btree*, i32, i8*, i64, i8*, %struct.KeyInfo*, i32, i64, %struct.sqlite3_vtab_cursor*, %struct.sqlite3_module*, i32, i32, i32*, i32*, i8* }
+%struct.Db = type { i8*, %struct.Btree*, i8, i8, i8*, void (i8*)*, %struct.Schema* }
+%struct.DbPage = type { %struct.Pager*, i32, %struct.DbPage*, %struct.DbPage*, %struct.PagerLruLink, %struct.DbPage*, i8, i8, i8, i8, i8, i16, %struct.DbPage*, %struct.DbPage*, i8* }
+%struct.Expr = type { i8, i8, i16, %struct.CollSeq*, %struct.Expr*, %struct.Expr*, %struct.ExprList*, %struct..5sPragmaType, %struct..5sPragmaType, i32, i32, %struct.AggInfo*, i32, i32, %struct.Select*, %struct.Table*, i32 }
+%struct.ExprList = type { i32, i32, i32, %struct.ExprList_item* }
+%struct.ExprList_item = type { %struct.Expr*, i8*, i8, i8, i8 }
+%struct.FILE = type { i8*, i32, i32, i16, i16, %struct..5sPragmaType, i32, i8*, i32 (i8*)*, i32 (i8*, i8*, i32)*, i64 (i8*, i64, i32)*, i32 (i8*, i8*, i32)*, %struct..5sPragmaType, %struct.__sFILEX*, i32, [3 x i8], [1 x i8], %struct..5sPragmaType, i32, i64 }
+%struct.FKey = type { %struct.Table*, %struct.FKey*, i8*, %struct.FKey*, i32, %struct.sColMap*, i8, i8, i8, i8 }
+%struct.Fifo = type { i32, %struct.FifoPage*, %struct.FifoPage* }
+%struct.FifoPage = type { i32, i32, i32, %struct.FifoPage*, [1 x i64] }
+%struct.FuncDef = type { i16, i8, i8, i8, i8*, %struct.FuncDef*, void (%struct.sqlite3_context*, i32, %struct.Mem**)*, void (%struct.sqlite3_context*, i32, %struct.Mem**)*, void (%struct.sqlite3_context*)*, [1 x i8] }
+%struct.Hash = type { i8, i8, i32, i32, %struct.HashElem*, %struct._ht* }
+%struct.HashElem = type { %struct.HashElem*, %struct.HashElem*, i8*, i8*, i32 }
+%struct.IdList = type { %struct..5sPragmaType*, i32, i32 }
+%struct.Index = type { i8*, i32, i32*, i32*, %struct.Table*, i32, i8, i8, i8*, %struct.Index*, %struct.Schema*, i8*, i8** }
+%struct.KeyInfo = type { %struct.sqlite3*, i8, i8, i8, i32, i8*, [1 x %struct.CollSeq*] }
+%struct.Mem = type { %struct.CountCtx, double, %struct.sqlite3*, i8*, i32, i16, i8, i8, void (i8*)* }
+%struct.MemPage = type { i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i16, i16, i16, i16, i16, i16, [5 x %struct._OvflCell], %struct.BtShared*, i8*, %struct.DbPage*, i32, %struct.MemPage* }
+%struct.Module = type { %struct.sqlite3_module*, i8*, i8*, void (i8*)* }
+%struct.Op = type { i8, i8, i8, i8, i32, i32, i32, %1 }
+%struct.Pager = type { %struct.sqlite3_vfs*, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, %struct.Bitvec*, %struct.Bitvec*, i8*, i8*, i8*, i8*, %struct.sqlite3_file*, %struct.sqlite3_file*, %struct.sqlite3_file*, %struct.BusyHandler*, %struct.PagerLruList, %struct.DbPage*, %struct.DbPage*, %struct.DbPage*, i64, i64, i64, i64, i64, i32, void (%struct.DbPage*, i32)*, void (%struct.DbPage*, i32)*, i32, %struct.DbPage**, i8*, [16 x i8] }
+%struct.PagerLruLink = type { %struct.DbPage*, %struct.DbPage* }
+%struct.PagerLruList = type { %struct.DbPage*, %struct.DbPage*, %struct.DbPage* }
+%struct.Schema = type { i32, %struct.Hash, %struct.Hash, %struct.Hash, %struct.Hash, %struct.Table*, i8, i8, i16, i32, %struct.sqlite3* }
+%struct.Select = type { %struct.ExprList*, i8, i8, i8, i8, i8, i8, i8, %struct.SrcList*, %struct.Expr*, %struct.ExprList*, %struct.Expr*, %struct.ExprList*, %struct.Select*, %struct.Select*, %struct.Select*, %struct.Expr*, %struct.Expr*, i32, i32, [3 x i32] }
+%struct.SrcList = type { i16, i16, [1 x %struct.SrcList_item] }
+%struct.SrcList_item = type { i8*, i8*, i8*, %struct.Table*, %struct.Select*, i8, i8, i32, %struct.Expr*, %struct.IdList*, i64 }
+%struct.Table = type { i8*, i32, %struct.Column*, i32, %struct.Index*, i32, %struct.Select*, i32, %struct.Trigger*, %struct.FKey*, i8*, %struct.Expr*, i32, i8, i8, i8, i8, i8, i8, i8, %struct.Module*, %struct.sqlite3_vtab*, i32, i8**, %struct.Schema* }
+%struct.Trigger = type { i8*, i8*, i8, i8, %struct.Expr*, %struct.IdList*, %struct..5sPragmaType, %struct.Schema*, %struct.Schema*, %struct.TriggerStep*, %struct.Trigger* }
+%struct.TriggerStep = type { i32, i32, %struct.Trigger*, %struct.Select*, %struct..5sPragmaType, %struct.Expr*, %struct.ExprList*, %struct.IdList*, %struct.TriggerStep*, %struct.TriggerStep* }
+%struct.Vdbe = type { %struct.sqlite3*, %struct.Vdbe*, %struct.Vdbe*, i32, i32, %struct.Op*, i32, i32, i32*, %struct.Mem**, %struct.Mem*, i32, %struct.Cursor**, i32, %struct.Mem*, i8**, i32, i32, i32, %struct.Mem*, i32, i32, %struct.Fifo, i32, i32, %struct.Context*, i32, i32, i32, i32, i32, [25 x i32], i32, i32, i8**, i8*, %struct.Mem*, i8, i8, i8, i8, i8, i8, i32, i64, i32, %struct.BtreeMutexArray, i32, i8*, i32 }
+%struct.VdbeFunc = type { %struct.FuncDef*, i32, [1 x %struct.AuxData] }
+%struct._OvflCell = type { i8*, i16 }
+%struct._RuneCharClass = type { [14 x i8], i32 }
+%struct._RuneEntry = type { i32, i32, i32, i32* }
+%struct._RuneLocale = type { [8 x i8], [32 x i8], i32 (i8*, i32, i8**)*, i32 (i32, i8*, i32, i8**)*, i32, [256 x i32], [256 x i32], [256 x i32], %struct._RuneRange, %struct._RuneRange, %struct._RuneRange, i8*, i32, i32, %struct._RuneCharClass* }
+%struct._RuneRange = type { i32, %struct._RuneEntry* }
+%struct.__sFILEX = type opaque
+%struct._ht = type { i32, %struct.HashElem* }
+%struct.callback_data = type { %struct.sqlite3*, i32, i32, %struct.FILE*, i32, i32, i32, i8*, [20 x i8], [100 x i32], [100 x i32], [20 x i8], %struct.previous_mode_data, [1024 x i8], i8* }
+%struct.previous_mode_data = type { i32, i32, i32, [100 x i32] }
+%struct.sColMap = type { i32, i8* }
+%struct.sqlite3 = type { %struct.sqlite3_vfs*, i32, %struct.Db*, i32, i32, i32, i32, i8, i8, i8, i8, i32, %struct.CollSeq*, i64, i64, i32, i32, i32, %struct.sqlite3_mutex*, %struct.sqlite3InitInfo, i32, i8**, %struct.Vdbe*, i32, void (i8*, i8*)*, i8*, void (i8*, i8*, i64)*, i8*, i8*, i32 (i8*)*, i8*, void (i8*)*, i8*, void (i8*, i32, i8*, i8*, i64)*, void (i8*, %struct.sqlite3*, i32, i8*)*, void (i8*, %struct.sqlite3*, i32, i8*)*, i8*, %struct.Mem*, i8*, i8*, %union.anon, i32 (i8*, i32, i8*, i8*, i8*, i8*)*, i8*, i32 (i8*)*, i8*, i32, %struct.Hash, %struct.Table*, %struct.sqlite3_vtab**, i32, %struct.Hash, %struct.Hash, %struct.BusyHandler, i32, [2 x %struct.Db], i8 }
+%struct.sqlite3InitInfo = type { i32, i32, i8 }
+%struct.sqlite3_context = type { %struct.FuncDef*, %struct.VdbeFunc*, %struct.Mem, %struct.Mem*, i32, %struct.CollSeq* }
+%struct.sqlite3_file = type { %struct.sqlite3_io_methods* }
+%struct.sqlite3_index_constraint = type { i32, i8, i8, i32 }
+%struct.sqlite3_index_constraint_usage = type { i32, i8 }
+%struct.sqlite3_index_info = type { i32, %struct.sqlite3_index_constraint*, i32, %struct.sqlite3_index_constraint_usage*, %struct.sqlite3_index_constraint_usage*, i32, i8*, i32, i32, double }
+%struct.sqlite3_io_methods = type { i32, i32 (%struct.sqlite3_file*)*, i32 (%struct.sqlite3_file*, i8*, i32, i64)*, i32 (%struct.sqlite3_file*, i8*, i32, i64)*, i32 (%struct.sqlite3_file*, i64)*, i32 (%struct.sqlite3_file*, i32)*, i32 (%struct.sqlite3_file*, i64*)*, i32 (%struct.sqlite3_file*, i32)*, i32 (%struct.sqlite3_file*, i32)*, i32 (%struct.sqlite3_file*)*, i32 (%struct.sqlite3_file*, i32, i8*)*, i32 (%struct.sqlite3_file*)*, i32 (%struct.sqlite3_file*)* }
+%struct.sqlite3_module = type { i32, i32 (%struct.sqlite3*, i8*, i32, i8**, %struct.sqlite3_vtab**, i8**)*, i32 (%struct.sqlite3*, i8*, i32, i8**, %struct.sqlite3_vtab**, i8**)*, i32 (%struct.sqlite3_vtab*, %struct.sqlite3_index_info*)*, i32 (%struct.sqlite3_vtab*)*, i32 (%struct.sqlite3_vtab*)*, i32 (%struct.sqlite3_vtab*, %struct.sqlite3_vtab_cursor**)*, i32 (%struct.sqlite3_vtab_cursor*)*, i32 (%struct.sqlite3_vtab_cursor*, i32, i8*, i32, %struct.Mem**)*, i32 (%struct.sqlite3_vtab_cursor*)*, i32 (%struct.sqlite3_vtab_cursor*)*, i32 (%struct.sqlite3_vtab_cursor*, %struct.sqlite3_context*, i32)*, i32 (%struct.sqlite3_vtab_cursor*, i64*)*, i32 (%struct.sqlite3_vtab*, i32, %struct.Mem**, i64*)*, i32 (%struct.sqlite3_vtab*)*, i32 (%struct.sqlite3_vtab*)*, i32 (%struct.sqlite3_vtab*)*, i32 (%struct.sqlite3_vtab*)*, i32 (%struct.sqlite3_vtab*, i32, i8*, void (%struct.sqlite3_context*, i32, %struct.Mem**)**, i8**)*, i32 (%struct.sqlite3_vtab*, i8*)* }
+%struct.sqlite3_mutex = type opaque
+%struct.sqlite3_vfs = type { i32, i32, i32, %struct.sqlite3_vfs*, i8*, i8*, i32 (%struct.sqlite3_vfs*, i8*, %struct.sqlite3_file*, i32, i32*)*, i32 (%struct.sqlite3_vfs*, i8*, i32)*, i32 (%struct.sqlite3_vfs*, i8*, i32)*, i32 (%struct.sqlite3_vfs*, i32, i8*)*, i32 (%struct.sqlite3_vfs*, i8*, i32, i8*)*, i8* (%struct.sqlite3_vfs*, i8*)*, void (%struct.sqlite3_vfs*, i32, i8*)*, i8* (%struct.sqlite3_vfs*, i8*, i8*)*, void (%struct.sqlite3_vfs*, i8*)*, i32 (%struct.sqlite3_vfs*, i32, i8*)*, i32 (%struct.sqlite3_vfs*, i32)*, i32 (%struct.sqlite3_vfs*, double*)* }
+%struct.sqlite3_vtab = type { %struct.sqlite3_module*, i32, i8* }
+%struct.sqlite3_vtab_cursor = type { %struct.sqlite3_vtab* }
+%union.anon = type { double }
+
+@_DefaultRuneLocale = external global %struct._RuneLocale ; <%struct._RuneLocale*> [#uses=2]
+@__stderrp = external global %struct.FILE*        ; <%struct.FILE**> [#uses=1]
+@.str10 = internal constant [16 x i8] c"Out of memory!\0A\00", align 1 ; <[16 x i8]*> [#uses=1]
+@llvm.used = appending global [1 x i8*] [i8* bitcast (void (%struct.callback_data*, i8*)* @set_table_name to i8*)], section "llvm.metadata" ; <[1 x i8*]*> [#uses=0]
+
+define fastcc void @set_table_name(%struct.callback_data* nocapture %p, i8* %zName) nounwind ssp {
+entry:
+  %0 = getelementptr inbounds %struct.callback_data* %p, i32 0, i32 7 ; <i8**> [#uses=3]
+  %1 = load i8** %0, align 4                      ; <i8*> [#uses=2]
+  %2 = icmp eq i8* %1, null                       ; <i1> [#uses=1]
+  br i1 %2, label %bb1, label %bb
+
+bb:                                               ; preds = %entry
+  free i8* %1
+  store i8* null, i8** %0, align 4
+  br label %bb1
+
+bb1:                                              ; preds = %bb, %entry
+  %3 = icmp eq i8* %zName, null                   ; <i1> [#uses=1]
+  br i1 %3, label %return, label %bb2
+
+bb2:                                              ; preds = %bb1
+  %4 = load i8* %zName, align 1                   ; <i8> [#uses=2]
+  %5 = zext i8 %4 to i32                          ; <i32> [#uses=2]
+  %6 = icmp sgt i8 %4, -1                         ; <i1> [#uses=1]
+  br i1 %6, label %bb.i.i, label %bb1.i.i
+
+bb.i.i:                                           ; preds = %bb2
+  %7 = getelementptr inbounds %struct._RuneLocale* @_DefaultRuneLocale, i32 0, i32 5, i32 %5 ; <i32*> [#uses=1]
+  %8 = load i32* %7, align 4                      ; <i32> [#uses=1]
+  %9 = and i32 %8, 256                            ; <i32> [#uses=1]
+  br label %isalpha.exit
+
+bb1.i.i:                                          ; preds = %bb2
+  %10 = tail call i32 @__maskrune(i32 %5, i32 256) nounwind ; <i32> [#uses=1]
+  br label %isalpha.exit
+
+isalpha.exit:                                     ; preds = %bb1.i.i, %bb.i.i
+  %storemerge.in.in.i.i = phi i32 [ %9, %bb.i.i ], [ %10, %bb1.i.i ] ; <i32> [#uses=1]
+  %storemerge.in.i.i = icmp eq i32 %storemerge.in.in.i.i, 0 ; <i1> [#uses=1]
+  br i1 %storemerge.in.i.i, label %bb3, label %bb5
+
+bb3:                                              ; preds = %isalpha.exit
+  %11 = load i8* %zName, align 1                  ; <i8> [#uses=2]
+  %12 = icmp eq i8 %11, 95                        ; <i1> [#uses=1]
+  br i1 %12, label %bb5, label %bb12.preheader
+
+bb5:                                              ; preds = %bb3, %isalpha.exit
+  %.pre = load i8* %zName, align 1                ; <i8> [#uses=1]
+  br label %bb12.preheader
+
+bb12.preheader:                                   ; preds = %bb5, %bb3
+  %13 = phi i8 [ %.pre, %bb5 ], [ %11, %bb3 ]     ; <i8> [#uses=1]
+  %needQuote.1.ph = phi i32 [ 0, %bb5 ], [ 1, %bb3 ] ; <i32> [#uses=2]
+  %14 = icmp eq i8 %13, 0                         ; <i1> [#uses=1]
+  br i1 %14, label %bb13, label %bb7
+
+bb7:                                              ; preds = %bb11, %bb12.preheader
+  %i.011 = phi i32 [ %tmp17, %bb11 ], [ 0, %bb12.preheader ] ; <i32> [#uses=2]
+  %n.110 = phi i32 [ %26, %bb11 ], [ 0, %bb12.preheader ] ; <i32> [#uses=3]
+  %needQuote.19 = phi i32 [ %needQuote.0, %bb11 ], [ %needQuote.1.ph, %bb12.preheader ] ; <i32> [#uses=2]
+  %scevgep16 = getelementptr i8* %zName, i32 %i.011 ; <i8*> [#uses=2]
+  %tmp17 = add i32 %i.011, 1                      ; <i32> [#uses=2]
+  %scevgep18 = getelementptr i8* %zName, i32 %tmp17 ; <i8*> [#uses=1]
+  %15 = load i8* %scevgep16, align 1              ; <i8> [#uses=2]
+  %16 = zext i8 %15 to i32                        ; <i32> [#uses=2]
+  %17 = icmp sgt i8 %15, -1                       ; <i1> [#uses=1]
+  br i1 %17, label %bb.i.i2, label %bb1.i.i3
+
+bb.i.i2:                                          ; preds = %bb7
+  %18 = getelementptr inbounds %struct._RuneLocale* @_DefaultRuneLocale, i32 0, i32 5, i32 %16 ; <i32*> [#uses=1]
+  %19 = load i32* %18, align 4                    ; <i32> [#uses=1]
+  %20 = and i32 %19, 1280                         ; <i32> [#uses=1]
+  br label %isalnum.exit
+
+bb1.i.i3:                                         ; preds = %bb7
+  %21 = tail call i32 @__maskrune(i32 %16, i32 1280) nounwind ; <i32> [#uses=1]
+  br label %isalnum.exit
+
+isalnum.exit:                                     ; preds = %bb1.i.i3, %bb.i.i2
+  %storemerge.in.in.i.i4 = phi i32 [ %20, %bb.i.i2 ], [ %21, %bb1.i.i3 ] ; <i32> [#uses=1]
+  %storemerge.in.i.i5 = icmp eq i32 %storemerge.in.in.i.i4, 0 ; <i1> [#uses=1]
+  br i1 %storemerge.in.i.i5, label %bb8, label %bb11
+
+bb8:                                              ; preds = %isalnum.exit
+  %22 = load i8* %scevgep16, align 1              ; <i8> [#uses=2]
+  %23 = icmp eq i8 %22, 95                        ; <i1> [#uses=1]
+  br i1 %23, label %bb11, label %bb9
+
+bb9:                                              ; preds = %bb8
+  %24 = icmp eq i8 %22, 39                        ; <i1> [#uses=1]
+  %25 = zext i1 %24 to i32                        ; <i32> [#uses=1]
+  %.n.1 = add i32 %n.110, %25                     ; <i32> [#uses=1]
+  br label %bb11
+
+bb11:                                             ; preds = %bb9, %bb8, %isalnum.exit
+  %needQuote.0 = phi i32 [ 1, %bb9 ], [ %needQuote.19, %isalnum.exit ], [ %needQuote.19, %bb8 ] ; <i32> [#uses=2]
+  %n.0 = phi i32 [ %.n.1, %bb9 ], [ %n.110, %isalnum.exit ], [ %n.110, %bb8 ] ; <i32> [#uses=1]
+  %26 = add nsw i32 %n.0, 1                       ; <i32> [#uses=2]
+  %27 = load i8* %scevgep18, align 1              ; <i8> [#uses=1]
+  %28 = icmp eq i8 %27, 0                         ; <i1> [#uses=1]
+  br i1 %28, label %bb13, label %bb7
+
+bb13:                                             ; preds = %bb11, %bb12.preheader
+  %n.1.lcssa = phi i32 [ 0, %bb12.preheader ], [ %26, %bb11 ] ; <i32> [#uses=2]
+  %needQuote.1.lcssa = phi i32 [ %needQuote.1.ph, %bb12.preheader ], [ %needQuote.0, %bb11 ] ; <i32> [#uses=1]
+  %29 = add nsw i32 %n.1.lcssa, 2                 ; <i32> [#uses=1]
+  %30 = icmp eq i32 %needQuote.1.lcssa, 0         ; <i1> [#uses=3]
+  %n.1. = select i1 %30, i32 %n.1.lcssa, i32 %29  ; <i32> [#uses=1]
+  %31 = add nsw i32 %n.1., 1                      ; <i32> [#uses=1]
+  %32 = malloc i8, i32 %31                        ; <i8*> [#uses=7]
+  store i8* %32, i8** %0, align 4
+  %33 = icmp eq i8* %32, null                     ; <i1> [#uses=1]
+  br i1 %33, label %bb16, label %bb17
+
+bb16:                                             ; preds = %bb13
+  %34 = load %struct.FILE** @__stderrp, align 4   ; <%struct.FILE*> [#uses=1]
+  %35 = bitcast %struct.FILE* %34 to i8*          ; <i8*> [#uses=1]
+  %36 = tail call i32 @"\01_fwrite$UNIX2003"(i8* getelementptr inbounds ([16 x i8]* @.str10, i32 0, i32 0), i32 1, i32 15, i8* %35) nounwind ; <i32> [#uses=0]
+  tail call void @exit(i32 1) noreturn nounwind
+  unreachable
+
+bb17:                                             ; preds = %bb13
+  br i1 %30, label %bb23.preheader, label %bb18
+
+bb18:                                             ; preds = %bb17
+  store i8 39, i8* %32, align 4
+  br label %bb23.preheader
+
+bb23.preheader:                                   ; preds = %bb18, %bb17
+  %n.3.ph = phi i32 [ 1, %bb18 ], [ 0, %bb17 ]    ; <i32> [#uses=2]
+  %37 = load i8* %zName, align 1                  ; <i8> [#uses=1]
+  %38 = icmp eq i8 %37, 0                         ; <i1> [#uses=1]
+  br i1 %38, label %bb24, label %bb20
+
+bb20:                                             ; preds = %bb22, %bb23.preheader
+  %storemerge18 = phi i32 [ %tmp, %bb22 ], [ 0, %bb23.preheader ] ; <i32> [#uses=2]
+  %n.37 = phi i32 [ %n.4, %bb22 ], [ %n.3.ph, %bb23.preheader ] ; <i32> [#uses=3]
+  %scevgep = getelementptr i8* %zName, i32 %storemerge18 ; <i8*> [#uses=1]
+  %tmp = add i32 %storemerge18, 1                 ; <i32> [#uses=2]
+  %scevgep15 = getelementptr i8* %zName, i32 %tmp ; <i8*> [#uses=1]
+  %39 = load i8* %scevgep, align 1                ; <i8> [#uses=2]
+  %40 = getelementptr inbounds i8* %32, i32 %n.37 ; <i8*> [#uses=1]
+  store i8 %39, i8* %40, align 1
+  %41 = add nsw i32 %n.37, 1                      ; <i32> [#uses=2]
+  %42 = icmp eq i8 %39, 39                        ; <i1> [#uses=1]
+  br i1 %42, label %bb21, label %bb22
+
+bb21:                                             ; preds = %bb20
+  %43 = getelementptr inbounds i8* %32, i32 %41   ; <i8*> [#uses=1]
+  store i8 39, i8* %43, align 1
+  %44 = add nsw i32 %n.37, 2                      ; <i32> [#uses=1]
+  br label %bb22
+
+bb22:                                             ; preds = %bb21, %bb20
+  %n.4 = phi i32 [ %44, %bb21 ], [ %41, %bb20 ]   ; <i32> [#uses=2]
+  %45 = load i8* %scevgep15, align 1              ; <i8> [#uses=1]
+  %46 = icmp eq i8 %45, 0                         ; <i1> [#uses=1]
+  br i1 %46, label %bb24, label %bb20
+
+bb24:                                             ; preds = %bb22, %bb23.preheader
+  %n.3.lcssa = phi i32 [ %n.3.ph, %bb23.preheader ], [ %n.4, %bb22 ] ; <i32> [#uses=3]
+  br i1 %30, label %bb26, label %bb25
+
+bb25:                                             ; preds = %bb24
+  %47 = getelementptr inbounds i8* %32, i32 %n.3.lcssa ; <i8*> [#uses=1]
+  store i8 39, i8* %47, align 1
+  %48 = add nsw i32 %n.3.lcssa, 1                 ; <i32> [#uses=1]
+  br label %bb26
+
+bb26:                                             ; preds = %bb25, %bb24
+  %n.5 = phi i32 [ %48, %bb25 ], [ %n.3.lcssa, %bb24 ] ; <i32> [#uses=1]
+  %49 = getelementptr inbounds i8* %32, i32 %n.5  ; <i8*> [#uses=1]
+  store i8 0, i8* %49, align 1
+  ret void
+
+return:                                           ; preds = %bb1
+  ret void
+}
+
+declare i32 @"\01_fwrite$UNIX2003"(i8*, i32, i32, i8*)
+
+declare void @exit(i32) noreturn nounwind
+
+declare i32 @__maskrune(i32, i32)
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-10-14-LiveVariablesBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-10-14-LiveVariablesBug.ll
new file mode 100644
index 0000000..c1aa17c
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-10-14-LiveVariablesBug.ll
@@ -0,0 +1,15 @@
+; RUN: llc < %s -mtriple=i386-apple-darwin
+; rdar://7299435
+
+@i = internal global i32 0                        ; <i32*> [#uses=1]
+@llvm.used = appending global [1 x i8*] [i8* bitcast (void (i16)* @foo to i8*)], section "llvm.metadata" ; <[1 x i8*]*> [#uses=0]
+
+define void @foo(i16 signext %source) nounwind ssp {
+entry:
+  %source_addr = alloca i16, align 2              ; <i16*> [#uses=2]
+  store i16 %source, i16* %source_addr
+  store i32 4, i32* @i, align 4
+  call void asm sideeffect "# top of block", "~{dirflag},~{fpsr},~{flags},~{edi},~{esi},~{edx},~{ecx},~{eax}"() nounwind
+  %asmtmp = call i16 asm sideeffect "movw $1, $0", "=={ax},*m,~{dirflag},~{fpsr},~{flags},~{memory}"(i16* %source_addr) nounwind ; <i16> [#uses=0]
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-10-19-EmergencySpill.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-10-19-EmergencySpill.ll
new file mode 100644
index 0000000..ba44a2e
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-10-19-EmergencySpill.ll
@@ -0,0 +1,54 @@
+; RUN: llc < %s -mtriple=x86_64-apple-darwin10 -disable-fp-elim
+; rdar://7291624
+
+%union.RtreeCoord = type { float }
+%struct.RtreeCell = type { i64, [10 x %union.RtreeCoord] }
+%struct.Rtree = type { i32, i32*, i32, i32, i32, i32, i8*, i8* }
+%struct.RtreeNode = type { i32*, i64, i32, i32, i8*, i32* }
+
+define fastcc void @nodeOverwriteCell(%struct.Rtree* nocapture %pRtree, %struct.RtreeNode* nocapture %pNode, %struct.RtreeCell* nocapture %pCell, i32 %iCell) nounwind ssp {
+entry:
+  %0 = load i8** undef, align 8                   ; <i8*> [#uses=2]
+  %1 = load i32* undef, align 8                   ; <i32> [#uses=1]
+  %2 = mul i32 %1, %iCell                         ; <i32> [#uses=1]
+  %3 = add nsw i32 %2, 4                          ; <i32> [#uses=1]
+  %4 = sext i32 %3 to i64                         ; <i64> [#uses=2]
+  %5 = load i64* null, align 8                    ; <i64> [#uses=2]
+  %6 = lshr i64 %5, 48                            ; <i64> [#uses=1]
+  %7 = trunc i64 %6 to i8                         ; <i8> [#uses=1]
+  store i8 %7, i8* undef, align 1
+  %8 = lshr i64 %5, 8                             ; <i64> [#uses=1]
+  %9 = trunc i64 %8 to i8                         ; <i8> [#uses=1]
+  %.sum4 = add i64 %4, 6                          ; <i64> [#uses=1]
+  %10 = getelementptr inbounds i8* %0, i64 %.sum4 ; <i8*> [#uses=1]
+  store i8 %9, i8* %10, align 1
+  %11 = getelementptr inbounds %struct.Rtree* %pRtree, i64 0, i32 3 ; <i32*> [#uses=1]
+  br i1 undef, label %bb.nph, label %bb2
+
+bb.nph:                                           ; preds = %entry
+  %tmp25 = add i64 %4, 11                         ; <i64> [#uses=1]
+  br label %bb
+
+bb:                                               ; preds = %bb, %bb.nph
+  %indvar = phi i64 [ 0, %bb.nph ], [ %indvar.next, %bb ] ; <i64> [#uses=3]
+  %scevgep = getelementptr %struct.RtreeCell* %pCell, i64 0, i32 1, i64 %indvar ; <%union.RtreeCoord*> [#uses=1]
+  %scevgep12 = bitcast %union.RtreeCoord* %scevgep to i32* ; <i32*> [#uses=1]
+  %tmp = shl i64 %indvar, 2                       ; <i64> [#uses=1]
+  %tmp26 = add i64 %tmp, %tmp25                   ; <i64> [#uses=1]
+  %scevgep27 = getelementptr i8* %0, i64 %tmp26   ; <i8*> [#uses=1]
+  %12 = load i32* %scevgep12, align 4             ; <i32> [#uses=1]
+  %13 = lshr i32 %12, 24                          ; <i32> [#uses=1]
+  %14 = trunc i32 %13 to i8                       ; <i8> [#uses=1]
+  store i8 %14, i8* undef, align 1
+  store i8 undef, i8* %scevgep27, align 1
+  %15 = load i32* %11, align 4                    ; <i32> [#uses=1]
+  %16 = shl i32 %15, 1                            ; <i32> [#uses=1]
+  %17 = icmp sgt i32 %16, undef                   ; <i1> [#uses=1]
+  %indvar.next = add i64 %indvar, 1               ; <i64> [#uses=1]
+  br i1 %17, label %bb, label %bb2
+
+bb2:                                              ; preds = %bb, %entry
+  %18 = getelementptr inbounds %struct.RtreeNode* %pNode, i64 0, i32 3 ; <i32*> [#uses=1]
+  store i32 1, i32* %18, align 4
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-10-19-atomic-cmp-eflags.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-10-19-atomic-cmp-eflags.ll
new file mode 100644
index 0000000..d7f0c1a
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-10-19-atomic-cmp-eflags.ll
@@ -0,0 +1,69 @@
+; RUN: llvm-as <%s | llc | FileCheck %s
+; PR5247
+; Check that cmp is not scheduled before the add
+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128"
+target triple = "x86_64-unknown-linux-gnu"
+
+@.str76843 = external constant [45 x i8]          ; <[45 x i8]*> [#uses=1]
+@__profiling_callsite_timestamps_live = external global [1216 x i64] ; <[1216 x i64]*> [#uses=2]
+
+define i32 @cl_init(i32 %initoptions) nounwind {
+entry:
+  %retval.i = alloca i32                          ; <i32*> [#uses=3]
+  %retval = alloca i32                            ; <i32*> [#uses=2]
+  %initoptions.addr = alloca i32                  ; <i32*> [#uses=2]
+  tail call void asm sideeffect "cpuid", "~{ax},~{bx},~{cx},~{dx},~{memory},~{dirflag},~{fpsr},~{flags}"() nounwind
+  %0 = tail call i64 @llvm.readcyclecounter() nounwind ; <i64> [#uses=1]
+  store i32 %initoptions, i32* %initoptions.addr
+  %1 = bitcast i32* %initoptions.addr to { }*     ; <{ }*> [#uses=0]
+  call void asm sideeffect "cpuid", "~{ax},~{bx},~{cx},~{dx},~{memory},~{dirflag},~{fpsr},~{flags}"() nounwind
+  %2 = call i64 @llvm.readcyclecounter() nounwind ; <i64> [#uses=1]
+  %call.i = call i32 @lt_dlinit() nounwind        ; <i32> [#uses=1]
+  %tobool.i = icmp ne i32 %call.i, 0              ; <i1> [#uses=1]
+  br i1 %tobool.i, label %if.then.i, label %if.end.i
+
+if.then.i:                                        ; preds = %entry
+  %call1.i = call i32 @warn_dlerror(i8* getelementptr inbounds ([45 x i8]* @.str76843, i32 0, i32 0)) nounwind ; <i32> [#uses=0]
+  store i32 -1, i32* %retval.i
+  br label %lt_init.exit
+
+if.end.i:                                         ; preds = %entry
+  store i32 0, i32* %retval.i
+  br label %lt_init.exit
+
+lt_init.exit:                                     ; preds = %if.end.i, %if.then.i
+  %3 = load i32* %retval.i                        ; <i32> [#uses=1]
+  call void asm sideeffect "cpuid", "~{ax},~{bx},~{cx},~{dx},~{memory},~{dirflag},~{fpsr},~{flags}"() nounwind
+  %4 = call i64 @llvm.readcyclecounter() nounwind ; <i64> [#uses=1]
+  %5 = sub i64 %4, %2                             ; <i64> [#uses=1]
+  %6 = call i64 @llvm.atomic.load.add.i64.p0i64(i64* getelementptr inbounds ([1216 x i64]* @__profiling_callsite_timestamps_live, i32 0, i32 51), i64 %5) nounwind ; <i64> [#uses=0]
+;CHECK: lock
+;CHECK-NEXT: {{xadd|addq}} %rdx, __profiling_callsite_timestamps_live
+;CHECK-NEXT: cmpl $0,
+;CHECK-NEXT: jne
+  %cmp = icmp eq i32 %3, 0                        ; <i1> [#uses=1]
+  br i1 %cmp, label %if.then, label %if.end
+
+if.then:                                          ; preds = %lt_init.exit
+  call void @cli_rarload()
+  br label %if.end
+
+if.end:                                           ; preds = %if.then, %lt_init.exit
+  store i32 0, i32* %retval
+  %7 = load i32* %retval                          ; <i32> [#uses=1]
+  tail call void asm sideeffect "cpuid", "~{ax},~{bx},~{cx},~{dx},~{memory},~{dirflag},~{fpsr},~{flags}"() nounwind
+  %8 = tail call i64 @llvm.readcyclecounter() nounwind ; <i64> [#uses=1]
+  %9 = sub i64 %8, %0                             ; <i64> [#uses=1]
+  %10 = call i64 @llvm.atomic.load.add.i64.p0i64(i64* getelementptr inbounds ([1216 x i64]* @__profiling_callsite_timestamps_live, i32 0, i32 50), i64 %9) ; <i64> [#uses=0]
+  ret i32 %7
+}
+
+declare void @cli_rarload() nounwind
+
+declare i32 @lt_dlinit()
+
+declare i32 @warn_dlerror(i8*) nounwind
+
+declare i64 @llvm.atomic.load.add.i64.p0i64(i64* nocapture, i64) nounwind
+
+declare i64 @llvm.readcyclecounter() nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-10-25-RewriterBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-10-25-RewriterBug.ll
new file mode 100644
index 0000000..5b4e818
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-10-25-RewriterBug.ll
@@ -0,0 +1,171 @@
+; RUN: llc < %s -mtriple=x86_64-apple-darwin -relocation-model=pic -disable-fp-elim
+
+%struct.DecRefPicMarking_t = type { i32, i32, i32, i32, i32, %struct.DecRefPicMarking_t* }
+%struct.FrameStore = type { i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, %struct.StorablePicture*, %struct.StorablePicture*, %struct.StorablePicture* }
+%struct.StorablePicture = type { i32, i32, i32, i32, i32, [50 x [6 x [33 x i64]]], [50 x [6 x [33 x i64]]], [50 x [6 x [33 x i64]]], [50 x [6 x [33 x i64]]], i32, i32, i32, i32, i32, i32, i32, i32, i32, i16, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i16**, i16***, i8*, i16**, i8***, i64***, i64***, i16****, i8**, i8**, %struct.StorablePicture*, %struct.StorablePicture*, %struct.StorablePicture*, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, [2 x i32], i32, %struct.DecRefPicMarking_t*, i32 }
+
+define fastcc void @insert_picture_in_dpb(%struct.FrameStore* nocapture %fs, %struct.StorablePicture* %p) nounwind ssp {
+entry:
+  %0 = getelementptr inbounds %struct.FrameStore* %fs, i64 0, i32 12 ; <%struct.StorablePicture**> [#uses=1]
+  %1 = icmp eq i32 undef, 0                       ; <i1> [#uses=1]
+  br i1 %1, label %bb.i, label %bb36.i
+
+bb.i:                                             ; preds = %entry
+  br i1 undef, label %bb3.i, label %bb14.preheader.i
+
+bb3.i:                                            ; preds = %bb.i
+  unreachable
+
+bb14.preheader.i:                                 ; preds = %bb.i
+  br i1 undef, label %bb9.i, label %bb20.preheader.i
+
+bb9.i:                                            ; preds = %bb9.i, %bb14.preheader.i
+  br i1 undef, label %bb9.i, label %bb20.preheader.i
+
+bb20.preheader.i:                                 ; preds = %bb9.i, %bb14.preheader.i
+  br i1 undef, label %bb18.i, label %bb29.preheader.i
+
+bb18.i:                                           ; preds = %bb20.preheader.i
+  unreachable
+
+bb29.preheader.i:                                 ; preds = %bb20.preheader.i
+  br i1 undef, label %bb24.i, label %bb30.i
+
+bb24.i:                                           ; preds = %bb29.preheader.i
+  unreachable
+
+bb30.i:                                           ; preds = %bb29.preheader.i
+  store i32 undef, i32* undef, align 8
+  br label %bb67.preheader.i
+
+bb36.i:                                           ; preds = %entry
+  br label %bb67.preheader.i
+
+bb67.preheader.i:                                 ; preds = %bb36.i, %bb30.i
+  %2 = phi %struct.StorablePicture* [ null, %bb36.i ], [ undef, %bb30.i ] ; <%struct.StorablePicture*> [#uses=2]
+  %3 = phi %struct.StorablePicture* [ null, %bb36.i ], [ undef, %bb30.i ] ; <%struct.StorablePicture*> [#uses=2]
+  %4 = phi %struct.StorablePicture* [ null, %bb36.i ], [ undef, %bb30.i ] ; <%struct.StorablePicture*> [#uses=2]
+  %5 = phi %struct.StorablePicture* [ null, %bb36.i ], [ undef, %bb30.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %6 = phi %struct.StorablePicture* [ null, %bb36.i ], [ undef, %bb30.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %7 = phi %struct.StorablePicture* [ null, %bb36.i ], [ undef, %bb30.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %8 = phi %struct.StorablePicture* [ null, %bb36.i ], [ undef, %bb30.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %9 = phi %struct.StorablePicture* [ null, %bb36.i ], [ undef, %bb30.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %10 = phi %struct.StorablePicture* [ null, %bb36.i ], [ undef, %bb30.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %11 = phi %struct.StorablePicture* [ null, %bb36.i ], [ undef, %bb30.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %12 = phi %struct.StorablePicture* [ null, %bb36.i ], [ undef, %bb30.i ] ; <%struct.StorablePicture*> [#uses=1]
+  br i1 undef, label %bb38.i, label %bb68.i
+
+bb38.i:                                           ; preds = %bb66.i, %bb67.preheader.i
+  %13 = phi %struct.StorablePicture* [ %37, %bb66.i ], [ %2, %bb67.preheader.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %14 = phi %struct.StorablePicture* [ %38, %bb66.i ], [ %3, %bb67.preheader.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %15 = phi %struct.StorablePicture* [ %39, %bb66.i ], [ %4, %bb67.preheader.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %16 = phi %struct.StorablePicture* [ %40, %bb66.i ], [ %5, %bb67.preheader.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %17 = phi %struct.StorablePicture* [ %40, %bb66.i ], [ %6, %bb67.preheader.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %18 = phi %struct.StorablePicture* [ %40, %bb66.i ], [ %7, %bb67.preheader.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %19 = phi %struct.StorablePicture* [ %40, %bb66.i ], [ %8, %bb67.preheader.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %20 = phi %struct.StorablePicture* [ %40, %bb66.i ], [ %9, %bb67.preheader.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %21 = phi %struct.StorablePicture* [ %40, %bb66.i ], [ %10, %bb67.preheader.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %22 = phi %struct.StorablePicture* [ %40, %bb66.i ], [ %11, %bb67.preheader.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %23 = phi %struct.StorablePicture* [ %40, %bb66.i ], [ %12, %bb67.preheader.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %indvar248.i = phi i64 [ %indvar.next249.i, %bb66.i ], [ 0, %bb67.preheader.i ] ; <i64> [#uses=3]
+  %storemerge52.i = trunc i64 %indvar248.i to i32 ; <i32> [#uses=1]
+  %24 = getelementptr inbounds %struct.StorablePicture* %23, i64 0, i32 19 ; <i32*> [#uses=0]
+  br i1 undef, label %bb.nph51.i, label %bb66.i
+
+bb.nph51.i:                                       ; preds = %bb38.i
+  %25 = sdiv i32 %storemerge52.i, 8               ; <i32> [#uses=0]
+  br label %bb39.i
+
+bb39.i:                                           ; preds = %bb64.i, %bb.nph51.i
+  %26 = phi %struct.StorablePicture* [ %17, %bb.nph51.i ], [ null, %bb64.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %27 = phi %struct.StorablePicture* [ %18, %bb.nph51.i ], [ null, %bb64.i ] ; <%struct.StorablePicture*> [#uses=0]
+  %28 = phi %struct.StorablePicture* [ %19, %bb.nph51.i ], [ null, %bb64.i ] ; <%struct.StorablePicture*> [#uses=0]
+  %29 = phi %struct.StorablePicture* [ %20, %bb.nph51.i ], [ null, %bb64.i ] ; <%struct.StorablePicture*> [#uses=0]
+  %30 = phi %struct.StorablePicture* [ %21, %bb.nph51.i ], [ null, %bb64.i ] ; <%struct.StorablePicture*> [#uses=0]
+  %31 = phi %struct.StorablePicture* [ %22, %bb.nph51.i ], [ null, %bb64.i ] ; <%struct.StorablePicture*> [#uses=0]
+  br i1 undef, label %bb57.i, label %bb40.i
+
+bb40.i:                                           ; preds = %bb39.i
+  br i1 undef, label %bb57.i, label %bb41.i
+
+bb41.i:                                           ; preds = %bb40.i
+  %storemerge10.i = select i1 undef, i32 2, i32 4 ; <i32> [#uses=1]
+  %32 = zext i32 %storemerge10.i to i64           ; <i64> [#uses=1]
+  br i1 undef, label %bb45.i, label %bb47.i
+
+bb45.i:                                           ; preds = %bb41.i
+  %33 = getelementptr inbounds %struct.StorablePicture* %26, i64 0, i32 5, i64 undef, i64 %32, i64 undef ; <i64*> [#uses=1]
+  %34 = load i64* %33, align 8                    ; <i64> [#uses=1]
+  br label %bb47.i
+
+bb47.i:                                           ; preds = %bb45.i, %bb41.i
+  %storemerge11.i = phi i64 [ %34, %bb45.i ], [ 0, %bb41.i ] ; <i64> [#uses=0]
+  %scevgep246.i = getelementptr i64* undef, i64 undef ; <i64*> [#uses=0]
+  br label %bb64.i
+
+bb57.i:                                           ; preds = %bb40.i, %bb39.i
+  br i1 undef, label %bb58.i, label %bb60.i
+
+bb58.i:                                           ; preds = %bb57.i
+  br label %bb60.i
+
+bb60.i:                                           ; preds = %bb58.i, %bb57.i
+  %35 = load i64*** undef, align 8                ; <i64**> [#uses=1]
+  %scevgep256.i = getelementptr i64** %35, i64 %indvar248.i ; <i64**> [#uses=1]
+  %36 = load i64** %scevgep256.i, align 8         ; <i64*> [#uses=1]
+  %scevgep243.i = getelementptr i64* %36, i64 undef ; <i64*> [#uses=1]
+  store i64 -1, i64* %scevgep243.i, align 8
+  br label %bb64.i
+
+bb64.i:                                           ; preds = %bb60.i, %bb47.i
+  br i1 undef, label %bb39.i, label %bb66.i
+
+bb66.i:                                           ; preds = %bb64.i, %bb38.i
+  %37 = phi %struct.StorablePicture* [ %13, %bb38.i ], [ null, %bb64.i ] ; <%struct.StorablePicture*> [#uses=2]
+  %38 = phi %struct.StorablePicture* [ %14, %bb38.i ], [ null, %bb64.i ] ; <%struct.StorablePicture*> [#uses=2]
+  %39 = phi %struct.StorablePicture* [ %15, %bb38.i ], [ null, %bb64.i ] ; <%struct.StorablePicture*> [#uses=2]
+  %40 = phi %struct.StorablePicture* [ %16, %bb38.i ], [ null, %bb64.i ] ; <%struct.StorablePicture*> [#uses=8]
+  %indvar.next249.i = add i64 %indvar248.i, 1     ; <i64> [#uses=1]
+  br i1 undef, label %bb38.i, label %bb68.i
+
+bb68.i:                                           ; preds = %bb66.i, %bb67.preheader.i
+  %41 = phi %struct.StorablePicture* [ %2, %bb67.preheader.i ], [ %37, %bb66.i ] ; <%struct.StorablePicture*> [#uses=0]
+  %42 = phi %struct.StorablePicture* [ %3, %bb67.preheader.i ], [ %38, %bb66.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %43 = phi %struct.StorablePicture* [ %4, %bb67.preheader.i ], [ %39, %bb66.i ] ; <%struct.StorablePicture*> [#uses=1]
+  br i1 undef, label %bb.nph48.i, label %bb108.i
+
+bb.nph48.i:                                       ; preds = %bb68.i
+  br label %bb80.i
+
+bb80.i:                                           ; preds = %bb104.i, %bb.nph48.i
+  %44 = phi %struct.StorablePicture* [ %42, %bb.nph48.i ], [ null, %bb104.i ] ; <%struct.StorablePicture*> [#uses=1]
+  %45 = phi %struct.StorablePicture* [ %43, %bb.nph48.i ], [ null, %bb104.i ] ; <%struct.StorablePicture*> [#uses=1]
+  br i1 undef, label %bb.nph39.i, label %bb104.i
+
+bb.nph39.i:                                       ; preds = %bb80.i
+  br label %bb81.i
+
+bb81.i:                                           ; preds = %bb102.i, %bb.nph39.i
+  %46 = phi %struct.StorablePicture* [ %44, %bb.nph39.i ], [ %48, %bb102.i ] ; <%struct.StorablePicture*> [#uses=0]
+  %47 = phi %struct.StorablePicture* [ %45, %bb.nph39.i ], [ %48, %bb102.i ] ; <%struct.StorablePicture*> [#uses=0]
+  br i1 undef, label %bb83.i, label %bb82.i
+
+bb82.i:                                           ; preds = %bb81.i
+  br i1 undef, label %bb83.i, label %bb101.i
+
+bb83.i:                                           ; preds = %bb82.i, %bb81.i
+  br label %bb102.i
+
+bb101.i:                                          ; preds = %bb82.i
+  br label %bb102.i
+
+bb102.i:                                          ; preds = %bb101.i, %bb83.i
+  %48 = load %struct.StorablePicture** %0, align 8 ; <%struct.StorablePicture*> [#uses=2]
+  br i1 undef, label %bb81.i, label %bb104.i
+
+bb104.i:                                          ; preds = %bb102.i, %bb80.i
+  br label %bb80.i
+
+bb108.i:                                          ; preds = %bb68.i
+  unreachable
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-11-04-SubregCoalescingBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-04-SubregCoalescingBug.ll
new file mode 100644
index 0000000..d84b63a
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-04-SubregCoalescingBug.ll
@@ -0,0 +1,15 @@
+; RUN: llc < %s -mtriple=x86_64-apple-darwin11 | FileCheck %s
+; rdar://7362871
+
+define void @bar(i32 %b, i32 %a) nounwind optsize ssp {
+entry:
+; CHECK:     leal 15(%rsi), %edi
+; CHECK-NOT: movl
+; CHECK:     call _foo
+  %0 = add i32 %a, 15                             ; <i32> [#uses=1]
+  %1 = zext i32 %0 to i64                         ; <i64> [#uses=1]
+  tail call void @foo(i64 %1) nounwind
+  ret void
+}
+
+declare void @foo(i64)
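
The property being pinned down here: the add/zext pair should fold into the
single leal, with no extra movl to copy the subregister, since writing a
32-bit register on x86-64 already zero-extends into the full 64-bit register.
A minimal standalone sketch of the same pattern (hypothetical, not part of
this commit):

  define i64 @lea_zext(i32 %a) nounwind {
    %t = add i32 %a, 15
    %z = zext i32 %t to i64   ; free: the leal that writes the 32-bit reg zero-extends
    ret i64 %z
  }
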
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-11-13-VirtRegRewriterBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-13-VirtRegRewriterBug.ll
new file mode 100644
index 0000000..5398eef
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-13-VirtRegRewriterBug.ll
@@ -0,0 +1,133 @@
+; RUN: llc < %s -mtriple=i386-apple-darwin -relocation-model=pic -disable-fp-elim
+; rdar://7394770
+
+%struct.JVTLib_100487 = type <{ i8 }>
+
+define i32 @_Z13JVTLib_10335613JVTLib_10266513JVTLib_100579S_S_S_jPhj(i16* nocapture %ResidualX_Array.0, %struct.JVTLib_100487* nocapture byval align 4 %xqp, i16* nocapture %ResidualL_Array.0, i16* %ResidualDCZ_Array.0, i16* nocapture %ResidualACZ_FOArray.0, i32 %useFRextDequant, i8* nocapture %JVTLib_103357, i32 %use_field_scan) ssp {
+bb.nph:
+  %0 = shl i32 undef, 1                           ; <i32> [#uses=2]
+  %mask133.masked.masked.masked.masked.masked.masked = or i640 undef, undef ; <i640> [#uses=1]
+  br label %bb
+
+bb:                                               ; preds = %_ZL13JVTLib_105204PKsPK13JVTLib_105184PsPhjS5_j.exit, %bb.nph
+  br i1 undef, label %bb2, label %bb1
+
+bb1:                                              ; preds = %bb
+  br i1 undef, label %bb.i, label %bb1.i
+
+bb2:                                              ; preds = %bb
+  unreachable
+
+bb.i:                                             ; preds = %bb1
+  br label %_ZL13JVTLib_105204PKsPK13JVTLib_105184PsPhjS5_j.exit
+
+bb1.i:                                            ; preds = %bb1
+  br label %_ZL13JVTLib_105204PKsPK13JVTLib_105184PsPhjS5_j.exit
+
+_ZL13JVTLib_105204PKsPK13JVTLib_105184PsPhjS5_j.exit: ; preds = %bb1.i, %bb.i
+  br i1 undef, label %bb5, label %bb
+
+bb5:                                              ; preds = %_ZL13JVTLib_105204PKsPK13JVTLib_105184PsPhjS5_j.exit
+  %mask271.masked.masked.masked.masked.masked.masked.masked = or i256 0, undef ; <i256> [#uses=2]
+  %mask266.masked.masked.masked.masked.masked.masked = or i256 %mask271.masked.masked.masked.masked.masked.masked.masked, undef ; <i256> [#uses=1]
+  %mask241.masked = or i256 undef, undef          ; <i256> [#uses=1]
+  %ins237 = or i256 undef, 0                      ; <i256> [#uses=1]
+  br i1 undef, label %bb9, label %bb10
+
+bb9:                                              ; preds = %bb5
+  br i1 undef, label %bb12.i, label %_ZL13JVTLib_105255PKsPK13JVTLib_105184Psj.exit
+
+bb12.i:                                           ; preds = %bb9
+  br label %_ZL13JVTLib_105255PKsPK13JVTLib_105184Psj.exit
+
+_ZL13JVTLib_105255PKsPK13JVTLib_105184Psj.exit:   ; preds = %bb12.i, %bb9
+  ret i32 undef
+
+bb10:                                             ; preds = %bb5
+  %1 = sext i16 undef to i32                      ; <i32> [#uses=1]
+  %2 = sext i16 undef to i32                      ; <i32> [#uses=1]
+  %3 = sext i16 undef to i32                      ; <i32> [#uses=1]
+  %4 = sext i16 undef to i32                      ; <i32> [#uses=1]
+  %5 = sext i16 undef to i32                      ; <i32> [#uses=1]
+  %6 = sext i16 undef to i32                      ; <i32> [#uses=1]
+  %tmp211 = lshr i256 %mask271.masked.masked.masked.masked.masked.masked.masked, 112 ; <i256> [#uses=0]
+  %7 = sext i16 undef to i32                      ; <i32> [#uses=1]
+  %tmp208 = lshr i256 %mask266.masked.masked.masked.masked.masked.masked, 128 ; <i256> [#uses=1]
+  %tmp209 = trunc i256 %tmp208 to i16             ; <i16> [#uses=1]
+  %8 = sext i16 %tmp209 to i32                    ; <i32> [#uses=1]
+  %9 = sext i16 undef to i32                      ; <i32> [#uses=1]
+  %10 = sext i16 undef to i32                     ; <i32> [#uses=1]
+  %tmp193 = lshr i256 %mask241.masked, 208        ; <i256> [#uses=1]
+  %tmp194 = trunc i256 %tmp193 to i16             ; <i16> [#uses=1]
+  %11 = sext i16 %tmp194 to i32                   ; <i32> [#uses=1]
+  %tmp187 = lshr i256 %ins237, 240                ; <i256> [#uses=1]
+  %tmp188 = trunc i256 %tmp187 to i16             ; <i16> [#uses=1]
+  %12 = sext i16 %tmp188 to i32                   ; <i32> [#uses=1]
+  %13 = add nsw i32 %4, %1                        ; <i32> [#uses=1]
+  %14 = add nsw i32 %5, 0                         ; <i32> [#uses=1]
+  %15 = add nsw i32 %6, %2                        ; <i32> [#uses=1]
+  %16 = add nsw i32 %7, %3                        ; <i32> [#uses=1]
+  %17 = add nsw i32 0, %8                         ; <i32> [#uses=1]
+  %18 = add nsw i32 %11, %9                       ; <i32> [#uses=1]
+  %19 = add nsw i32 0, %10                        ; <i32> [#uses=1]
+  %20 = add nsw i32 %12, 0                        ; <i32> [#uses=1]
+  %21 = add nsw i32 %17, %13                      ; <i32> [#uses=2]
+  %22 = add nsw i32 %18, %14                      ; <i32> [#uses=2]
+  %23 = add nsw i32 %19, %15                      ; <i32> [#uses=2]
+  %24 = add nsw i32 %20, %16                      ; <i32> [#uses=2]
+  %25 = add nsw i32 %22, %21                      ; <i32> [#uses=2]
+  %26 = add nsw i32 %24, %23                      ; <i32> [#uses=2]
+  %27 = sub i32 %21, %22                          ; <i32> [#uses=1]
+  %28 = sub i32 %23, %24                          ; <i32> [#uses=1]
+  %29 = add nsw i32 %26, %25                      ; <i32> [#uses=1]
+  %30 = sub i32 %25, %26                          ; <i32> [#uses=1]
+  %31 = sub i32 %27, %28                          ; <i32> [#uses=1]
+  %32 = ashr i32 %29, 1                           ; <i32> [#uses=2]
+  %33 = ashr i32 %30, 1                           ; <i32> [#uses=2]
+  %34 = ashr i32 %31, 1                           ; <i32> [#uses=2]
+  %35 = icmp sgt i32 %32, 32767                   ; <i1> [#uses=1]
+  %o0_0.0.i = select i1 %35, i32 32767, i32 %32   ; <i32> [#uses=2]
+  %36 = icmp slt i32 %o0_0.0.i, -32768            ; <i1> [#uses=1]
+  %37 = icmp sgt i32 %33, 32767                   ; <i1> [#uses=1]
+  %o1_0.0.i = select i1 %37, i32 32767, i32 %33   ; <i32> [#uses=2]
+  %38 = icmp slt i32 %o1_0.0.i, -32768            ; <i1> [#uses=1]
+  %39 = icmp sgt i32 %34, 32767                   ; <i1> [#uses=1]
+  %o2_0.0.i = select i1 %39, i32 32767, i32 %34   ; <i32> [#uses=2]
+  %40 = icmp slt i32 %o2_0.0.i, -32768            ; <i1> [#uses=1]
+  %tmp101 = lshr i640 %mask133.masked.masked.masked.masked.masked.masked, 256 ; <i640> [#uses=1]
+  %41 = trunc i32 %o0_0.0.i to i16                ; <i16> [#uses=1]
+  %tmp358 = select i1 %36, i16 -32768, i16 %41    ; <i16> [#uses=2]
+  %42 = trunc i32 %o1_0.0.i to i16                ; <i16> [#uses=1]
+  %tmp347 = select i1 %38, i16 -32768, i16 %42    ; <i16> [#uses=1]
+  %43 = trunc i32 %o2_0.0.i to i16                ; <i16> [#uses=1]
+  %tmp335 = select i1 %40, i16 -32768, i16 %43    ; <i16> [#uses=1]
+  %44 = icmp sgt i16 %tmp358, -1                  ; <i1> [#uses=2]
+  %..i24 = select i1 %44, i16 %tmp358, i16 undef  ; <i16> [#uses=1]
+  %45 = icmp sgt i16 %tmp347, -1                  ; <i1> [#uses=1]
+  %46 = icmp sgt i16 %tmp335, -1                  ; <i1> [#uses=1]
+  %47 = zext i16 %..i24 to i32                    ; <i32> [#uses=1]
+  %tmp = trunc i640 %tmp101 to i32                ; <i32> [#uses=1]
+  %48 = and i32 %tmp, 65535                       ; <i32> [#uses=2]
+  %49 = mul i32 %47, %48                          ; <i32> [#uses=1]
+  %50 = zext i16 undef to i32                     ; <i32> [#uses=1]
+  %51 = mul i32 %50, %48                          ; <i32> [#uses=1]
+  %52 = add i32 %49, %0                           ; <i32> [#uses=1]
+  %53 = add i32 %51, %0                           ; <i32> [#uses=1]
+  %54 = lshr i32 %52, undef                       ; <i32> [#uses=1]
+  %55 = lshr i32 %53, undef                       ; <i32> [#uses=1]
+  %56 = trunc i32 %54 to i16                      ; <i16> [#uses=1]
+  %57 = trunc i32 %55 to i16                      ; <i16> [#uses=1]
+  %vs16Out0_0.0.i = select i1 %44, i16 %56, i16 undef ; <i16> [#uses=1]
+  %vs16Out0_4.0.i = select i1 %45, i16 0, i16 undef ; <i16> [#uses=1]
+  %vs16Out1_0.0.i = select i1 %46, i16 %57, i16 undef ; <i16> [#uses=1]
+  br i1 undef, label %bb129.i, label %_ZL13JVTLib_105207PKsPK13JVTLib_105184Psj.exit
+
+bb129.i:                                          ; preds = %bb10
+  br label %_ZL13JVTLib_105207PKsPK13JVTLib_105184Psj.exit
+
+_ZL13JVTLib_105207PKsPK13JVTLib_105184Psj.exit:   ; preds = %bb129.i, %bb10
+  %58 = phi i16 [ %vs16Out0_4.0.i, %bb129.i ], [ undef, %bb10 ] ; <i16> [#uses=0]
+  %59 = phi i16 [ undef, %bb129.i ], [ %vs16Out1_0.0.i, %bb10 ] ; <i16> [#uses=0]
+  store i16 %vs16Out0_0.0.i, i16* %ResidualDCZ_Array.0, align 2
+  unreachable
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-11-16-MachineLICM.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-16-MachineLICM.ll
new file mode 100644
index 0000000..a7c2020
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-16-MachineLICM.ll
@@ -0,0 +1,42 @@
+; RUN: llc < %s -mtriple=x86_64-apple-darwin | FileCheck %s
+; rdar://7395200
+
+@g = common global [4 x float] zeroinitializer, align 16 ; <[4 x float]*> [#uses=4]
+
+define void @foo(i32 %n, float* nocapture %x) nounwind ssp {
+entry:
+; CHECK: foo:
+  %0 = icmp sgt i32 %n, 0                         ; <i1> [#uses=1]
+  br i1 %0, label %bb.nph, label %return
+
+bb.nph:                                           ; preds = %entry
+; CHECK: movq _g@GOTPCREL(%rip), %rcx
+  %tmp = zext i32 %n to i64                       ; <i64> [#uses=1]
+  br label %bb
+
+bb:                                               ; preds = %bb, %bb.nph
+; CHECK: LBB1_2:
+  %indvar = phi i64 [ 0, %bb.nph ], [ %indvar.next, %bb ] ; <i64> [#uses=2]
+  %tmp9 = shl i64 %indvar, 2                      ; <i64> [#uses=4]
+  %tmp1016 = or i64 %tmp9, 1                      ; <i64> [#uses=1]
+  %scevgep = getelementptr float* %x, i64 %tmp1016 ; <float*> [#uses=1]
+  %tmp1117 = or i64 %tmp9, 2                      ; <i64> [#uses=1]
+  %scevgep12 = getelementptr float* %x, i64 %tmp1117 ; <float*> [#uses=1]
+  %tmp1318 = or i64 %tmp9, 3                      ; <i64> [#uses=1]
+  %scevgep14 = getelementptr float* %x, i64 %tmp1318 ; <float*> [#uses=1]
+  %x_addr.03 = getelementptr float* %x, i64 %tmp9 ; <float*> [#uses=1]
+  %1 = load float* getelementptr inbounds ([4 x float]* @g, i64 0, i64 0), align 16 ; <float> [#uses=1]
+  store float %1, float* %x_addr.03, align 4
+  %2 = load float* getelementptr inbounds ([4 x float]* @g, i64 0, i64 1), align 4 ; <float> [#uses=1]
+  store float %2, float* %scevgep, align 4
+  %3 = load float* getelementptr inbounds ([4 x float]* @g, i64 0, i64 2), align 8 ; <float> [#uses=1]
+  store float %3, float* %scevgep12, align 4
+  %4 = load float* getelementptr inbounds ([4 x float]* @g, i64 0, i64 3), align 4 ; <float> [#uses=1]
+  store float %4, float* %scevgep14, align 4
+  %indvar.next = add i64 %indvar, 1               ; <i64> [#uses=2]
+  %exitcond = icmp eq i64 %indvar.next, %tmp      ; <i1> [#uses=1]
+  br i1 %exitcond, label %return, label %bb
+
+return:                                           ; preds = %bb, %entry
+  ret void
+}
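
The loop-invariant operation here is the load of @g's address from the GOT
(the movq of _g@GOTPCREL). The CHECK lines place it in bb.nph, i.e. the test
verifies that MachineLICM hoists the address materialization into the
preheader instead of redoing it on every iteration of the loop at LBB1_2.
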
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-11-16-UnfoldMemOpBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-16-UnfoldMemOpBug.ll
new file mode 100644
index 0000000..3ce9edb
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-16-UnfoldMemOpBug.ll
@@ -0,0 +1,28 @@
+; RUN: llc < %s -mtriple=x86_64-apple-darwin | FileCheck %s
+; rdar://7396984
+
+@str = private constant [28 x i8] c"xxxxxxxxxxxxxxxxxxxxxxxxxxx\00", align 1
+
+define void @t(i32 %count) ssp nounwind {
+entry:
+; CHECK: t:
+; CHECK: movq ___stack_chk_guard@GOTPCREL(%rip)
+; CHECK: movups L_str(%rip), %xmm0
+  %tmp0 = alloca [60 x i8], align 1
+  %tmp1 = getelementptr inbounds [60 x i8]* %tmp0, i64 0, i64 0
+  br label %bb1
+
+bb1:
+; CHECK: LBB1_1:
+; CHECK: movaps %xmm0, (%rsp)
+  %tmp2 = phi i32 [ %tmp3, %bb1 ], [ 0, %entry ]
+  call void @llvm.memcpy.i64(i8* %tmp1, i8* getelementptr inbounds ([28 x i8]* @str, i64 0, i64 0), i64 28, i32 1)
+  %tmp3 = add i32 %tmp2, 1
+  %tmp4 = icmp eq i32 %tmp3, %count
+  br i1 %tmp4, label %bb2, label %bb1
+
+bb2:
+  ret void
+}
+
+declare void @llvm.memcpy.i64(i8* nocapture, i8* nocapture, i64, i32) nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-11-17-UpdateTerminator.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-17-UpdateTerminator.ll
new file mode 100644
index 0000000..5c1a2bc
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-17-UpdateTerminator.ll
@@ -0,0 +1,52 @@
+; RUN: llc -O3 < %s
+; This test used to fail with:
+; Assertion failed: (!B && "UpdateTerminators requires analyzable predecessors!"), function updateTerminator, MachineBasicBlock.cpp, line 255.
+
+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64"
+target triple = "x86_64-apple-darwin10.2"
+
+%"struct.llvm::InlineAsm::ConstraintInfo" = type { i32, i8, i8, i8, i8, %"struct.std::vector<std::basic_string<char, std::char_traits<char>, std::allocator<char> >,std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >" }
+%"struct.std::_Vector_base<llvm::InlineAsm::ConstraintInfo,std::allocator<llvm::InlineAsm::ConstraintInfo> >" = type { %"struct.std::_Vector_base<llvm::InlineAsm::ConstraintInfo,std::allocator<llvm::InlineAsm::ConstraintInfo> >::_Vector_impl" }
+%"struct.std::_Vector_base<llvm::InlineAsm::ConstraintInfo,std::allocator<llvm::InlineAsm::ConstraintInfo> >::_Vector_impl" = type { %"struct.llvm::InlineAsm::ConstraintInfo"*, %"struct.llvm::InlineAsm::ConstraintInfo"*, %"struct.llvm::InlineAsm::ConstraintInfo"* }
+%"struct.std::_Vector_base<std::basic_string<char, std::char_traits<char>, std::allocator<char> >,std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >" = type { %"struct.std::_Vector_base<std::basic_string<char, std::char_traits<char>, std::allocator<char> >,std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >::_Vector_impl" }
+%"struct.std::_Vector_base<std::basic_string<char, std::char_traits<char>, std::allocator<char> >,std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >::_Vector_impl" = type { %"struct.std::string"*, %"struct.std::string"*, %"struct.std::string"* }
+%"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >::_Alloc_hider" = type { i8* }
+%"struct.std::string" = type { %"struct.std::basic_string<char,std::char_traits<char>,std::allocator<char> >::_Alloc_hider" }
+%"struct.std::vector<llvm::InlineAsm::ConstraintInfo,std::allocator<llvm::InlineAsm::ConstraintInfo> >" = type { %"struct.std::_Vector_base<llvm::InlineAsm::ConstraintInfo,std::allocator<llvm::InlineAsm::ConstraintInfo> >" }
+%"struct.std::vector<std::basic_string<char, std::char_traits<char>, std::allocator<char> >,std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >" = type { %"struct.std::_Vector_base<std::basic_string<char, std::char_traits<char>, std::allocator<char> >,std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >" }
+
+define zeroext i8 @_ZN4llvm9InlineAsm14ConstraintInfo5ParseENS_9StringRefERSt6vectorIS1_SaIS1_EE(%"struct.llvm::InlineAsm::ConstraintInfo"* nocapture %this, i64 %Str.0, i64 %Str.1, %"struct.std::vector<llvm::InlineAsm::ConstraintInfo,std::allocator<llvm::InlineAsm::ConstraintInfo> >"* nocapture %ConstraintsSoFar) nounwind ssp align 2 {
+entry:
+  br i1 undef, label %bb56, label %bb27.outer
+
+bb8:                                              ; preds = %bb27.outer108, %bb13
+  switch i8 undef, label %bb27.outer [
+    i8 35, label %bb56
+    i8 37, label %bb14
+    i8 38, label %bb10
+    i8 42, label %bb56
+  ]
+
+bb27.outer:                                       ; preds = %bb8, %entry
+  %I.2.ph = phi i8* [ undef, %entry ], [ %I.2.ph109, %bb8 ] ; <i8*> [#uses=2]
+  br label %bb27.outer108
+
+bb10:                                             ; preds = %bb8
+  %toBool = icmp eq i8 0, 0                       ; <i1> [#uses=1]
+  %or.cond = and i1 undef, %toBool                ; <i1> [#uses=1]
+  br i1 %or.cond, label %bb13, label %bb56
+
+bb13:                                             ; preds = %bb10
+  br i1 undef, label %bb27.outer108, label %bb8
+
+bb14:                                             ; preds = %bb8
+  ret i8 1
+
+bb27.outer108:                                    ; preds = %bb13, %bb27.outer
+  %I.2.ph109 = getelementptr i8* %I.2.ph, i64 undef ; <i8*> [#uses=1]
+  %scevgep = getelementptr i8* %I.2.ph, i64 undef ; <i8*> [#uses=0]
+  br label %bb8
+
+bb56:                                             ; preds = %bb10, %bb8, %bb8, %entry
+  ret i8 1
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-11-18-TwoAddrKill.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-18-TwoAddrKill.ll
new file mode 100644
index 0000000..0edaa70
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-18-TwoAddrKill.ll
@@ -0,0 +1,29 @@
+; RUN: llc < %s
+; PR5300
+target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:32:32-n8:16:32"
+target triple = "i386-pc-linux-gnu"
+
+@g_296 = external global i8, align 1              ; <i8*> [#uses=1]
+
+define noalias i8** @func_31(i32** nocapture %int8p_33, i8** nocapture %p_34, i8* nocapture %p_35) nounwind {
+entry:
+  %cmp.i = icmp sgt i16 undef, 234                ; <i1> [#uses=1]
+  %tmp17 = select i1 %cmp.i, i16 undef, i16 0     ; <i16> [#uses=2]
+  %conv8 = trunc i16 %tmp17 to i8                 ; <i8> [#uses=3]
+  br i1 undef, label %cond.false.i29, label %land.lhs.true.i
+
+land.lhs.true.i:                                  ; preds = %entry
+  %tobool5.i = icmp eq i32 undef, undef           ; <i1> [#uses=1]
+  br i1 %tobool5.i, label %cond.false.i29, label %bar.exit
+
+cond.false.i29:                                   ; preds = %land.lhs.true.i, %entry
+  %tmp = sub i8 0, %conv8                         ; <i8> [#uses=1]
+  %mul.i = and i8 %conv8, %tmp                    ; <i8> [#uses=1]
+  br label %bar.exit
+
+bar.exit:                                         ; preds = %cond.false.i29, %land.lhs.true.i
+  %call1231 = phi i8 [ %mul.i, %cond.false.i29 ], [ %conv8, %land.lhs.true.i ] ; <i8> [#uses=0]
+  %conv21 = trunc i16 %tmp17 to i8                ; <i8> [#uses=1]
+  store i8 %conv21, i8* @g_296
+  ret i8** undef
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-11-25-ImpDefBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-25-ImpDefBug.ll
new file mode 100644
index 0000000..7606c0e
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-25-ImpDefBug.ll
@@ -0,0 +1,116 @@
+; RUN: llc < %s -mtriple=x86_64-unknown-linux-gnu
+; PR5600
+
+%struct..0__pthread_mutex_s = type { i32, i32, i32, i32, i32, i32, %struct.__pthread_list_t }
+%struct.ASN1ObjHeader = type { i8, %"struct.__gmp_expr<__mpz_struct [1],__mpz_struct [1]>", i64, i32, i32, i32 }
+%struct.ASN1Object = type { i32 (...)**, i32, i32, i64 }
+%struct.ASN1Unit = type { [4 x i32 (%struct.ASN1ObjHeader*, %struct.ASN1Object**)*], %"struct.std::ASN1ObjList" }
+%"struct.__gmp_expr<__mpz_struct [1],__mpz_struct [1]>" = type { [1 x %struct.__mpz_struct] }
+%struct.__mpz_struct = type { i32, i32, i64* }
+%struct.__pthread_list_t = type { %struct.__pthread_list_t*, %struct.__pthread_list_t* }
+%struct.pthread_attr_t = type { i64, [48 x i8] }
+%struct.pthread_mutex_t = type { %struct..0__pthread_mutex_s }
+%struct.pthread_mutexattr_t = type { i32 }
+%"struct.std::ASN1ObjList" = type { %"struct.std::_Vector_base<ASN1Object*,std::allocator<ASN1Object*> >" }
+%"struct.std::_Vector_base<ASN1Object*,std::allocator<ASN1Object*> >" = type { %"struct.std::_Vector_base<ASN1Object*,std::allocator<ASN1Object*> >::_Vector_impl" }
+%"struct.std::_Vector_base<ASN1Object*,std::allocator<ASN1Object*> >::_Vector_impl" = type { %struct.ASN1Object**, %struct.ASN1Object**, %struct.ASN1Object** }
+%struct.xmstream = type { i8*, i64, i64, i64, i8 }
+
+declare void @_ZNSt6vectorIP10ASN1ObjectSaIS1_EE13_M_insert_auxEN9__gnu_cxx17__normal_iteratorIPS1_S3_EERKS1_(%"struct.std::ASN1ObjList"* nocapture, i64, %struct.ASN1Object** nocapture)
+
+declare i32 @_Z17LoadObjectFromBERR8xmstreamPP10ASN1ObjectPPF10ASN1StatusP13ASN1ObjHeaderS3_E(%struct.xmstream*, %struct.ASN1Object**, i32 (%struct.ASN1ObjHeader*, %struct.ASN1Object**)**)
+
+define i32 @_ZN8ASN1Unit4loadER8xmstreamjm18ASN1LengthEncoding(%struct.ASN1Unit* %this, %struct.xmstream* nocapture %stream, i32 %numObjects, i64 %size, i32 %lEncoding) {
+entry:
+  br label %meshBB85
+
+bb5:                                              ; preds = %bb13.fragment.cl135, %bb13.fragment.cl, %bb.i.i.bbcl.disp, %bb13.fragment
+  %0 = invoke i32 @_Z17LoadObjectFromBERR8xmstreamPP10ASN1ObjectPPF10ASN1StatusP13ASN1ObjHeaderS3_E(%struct.xmstream* undef, %struct.ASN1Object** undef, i32 (%struct.ASN1ObjHeader*, %struct.ASN1Object**)** undef)
+          to label %meshBB81.bbcl.disp unwind label %lpad ; <i32> [#uses=0]
+
+bb10.fragment:                                    ; preds = %bb13.fragment.bbcl.disp
+  br i1 undef, label %bb1.i.fragment.bbcl.disp, label %bb.i.i.bbcl.disp
+
+bb1.i.fragment:                                   ; preds = %bb1.i.fragment.bbcl.disp
+  invoke void @_ZNSt6vectorIP10ASN1ObjectSaIS1_EE13_M_insert_auxEN9__gnu_cxx17__normal_iteratorIPS1_S3_EERKS1_(%"struct.std::ASN1ObjList"* undef, i64 undef, %struct.ASN1Object** undef)
+          to label %meshBB81.bbcl.disp unwind label %lpad
+
+bb13.fragment:                                    ; preds = %bb13.fragment.bbcl.disp
+  br i1 undef, label %meshBB81.bbcl.disp, label %bb5
+
+bb.i4:                                            ; preds = %bb.i4.bbcl.disp, %bb1.i.fragment.bbcl.disp
+  ret i32 undef
+
+bb1.i5:                                           ; preds = %bb.i1
+  ret i32 undef
+
+lpad:                                             ; preds = %bb1.i.fragment.cl, %bb1.i.fragment, %bb5
+  %.SV10.phi807 = phi i8* [ undef, %bb1.i.fragment.cl ], [ undef, %bb1.i.fragment ], [ undef, %bb5 ] ; <i8*> [#uses=1]
+  %1 = load i8* %.SV10.phi807, align 8            ; <i8> [#uses=0]
+  br i1 undef, label %meshBB81.bbcl.disp, label %bb13.fragment.bbcl.disp
+
+bb.i1:                                            ; preds = %bb.i.i.bbcl.disp
+  br i1 undef, label %meshBB81.bbcl.disp, label %bb1.i5
+
+meshBB81:                                         ; preds = %meshBB81.bbcl.disp, %bb.i.i.bbcl.disp
+  br i1 undef, label %meshBB81.bbcl.disp, label %bb.i4.bbcl.disp
+
+meshBB85:                                         ; preds = %meshBB81.bbcl.disp, %bb.i4.bbcl.disp, %bb1.i.fragment.bbcl.disp, %bb.i.i.bbcl.disp, %entry
+  br i1 undef, label %meshBB81.bbcl.disp, label %bb13.fragment.bbcl.disp
+
+bb.i.i.bbcl.disp:                                 ; preds = %bb10.fragment
+  switch i8 undef, label %meshBB85 [
+    i8 123, label %bb.i1
+    i8 97, label %bb5
+    i8 44, label %meshBB81
+    i8 1, label %meshBB81.cl
+    i8 51, label %meshBB81.cl141
+  ]
+
+bb1.i.fragment.cl:                                ; preds = %bb1.i.fragment.bbcl.disp
+  invoke void @_ZNSt6vectorIP10ASN1ObjectSaIS1_EE13_M_insert_auxEN9__gnu_cxx17__normal_iteratorIPS1_S3_EERKS1_(%"struct.std::ASN1ObjList"* undef, i64 undef, %struct.ASN1Object** undef)
+          to label %meshBB81.bbcl.disp unwind label %lpad
+
+bb1.i.fragment.bbcl.disp:                         ; preds = %bb10.fragment
+  switch i8 undef, label %bb.i4 [
+    i8 97, label %bb1.i.fragment
+    i8 7, label %bb1.i.fragment.cl
+    i8 35, label %bb.i4.cl
+    i8 77, label %meshBB85
+  ]
+
+bb13.fragment.cl:                                 ; preds = %bb13.fragment.bbcl.disp
+  br i1 undef, label %meshBB81.bbcl.disp, label %bb5
+
+bb13.fragment.cl135:                              ; preds = %bb13.fragment.bbcl.disp
+  br i1 undef, label %meshBB81.bbcl.disp, label %bb5
+
+bb13.fragment.bbcl.disp:                          ; preds = %meshBB85, %lpad
+  switch i8 undef, label %bb10.fragment [
+    i8 67, label %bb13.fragment.cl
+    i8 108, label %bb13.fragment
+    i8 58, label %bb13.fragment.cl135
+  ]
+
+bb.i4.cl:                                         ; preds = %bb.i4.bbcl.disp, %bb1.i.fragment.bbcl.disp
+  ret i32 undef
+
+bb.i4.bbcl.disp:                                  ; preds = %meshBB81.cl141, %meshBB81.cl, %meshBB81
+  switch i8 undef, label %bb.i4 [
+    i8 35, label %bb.i4.cl
+    i8 77, label %meshBB85
+  ]
+
+meshBB81.cl:                                      ; preds = %meshBB81.bbcl.disp, %bb.i.i.bbcl.disp
+  br i1 undef, label %meshBB81.bbcl.disp, label %bb.i4.bbcl.disp
+
+meshBB81.cl141:                                   ; preds = %meshBB81.bbcl.disp, %bb.i.i.bbcl.disp
+  br i1 undef, label %meshBB81.bbcl.disp, label %bb.i4.bbcl.disp
+
+meshBB81.bbcl.disp:                               ; preds = %meshBB81.cl141, %meshBB81.cl, %bb13.fragment.cl135, %bb13.fragment.cl, %bb1.i.fragment.cl, %meshBB85, %meshBB81, %bb.i1, %lpad, %bb13.fragment, %bb1.i.fragment, %bb5
+  switch i8 undef, label %meshBB85 [
+    i8 44, label %meshBB81
+    i8 1, label %meshBB81.cl
+    i8 51, label %meshBB81.cl141
+  ]
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/abi-isel.ll b/libclamav/c++/llvm/test/CodeGen/X86/abi-isel.ll
index a6fd2d8..6d7b2d4 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/abi-isel.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/abi-isel.ll
@@ -1,16 +1,16 @@
-; RUN: llc < %s -asm-verbose=0 -mtriple=i686-unknown-linux-gnu -march=x86 -relocation-model=static -code-model=small | FileCheck %s -check-prefix=LINUX-32-STATIC
-; RUN: llc < %s -asm-verbose=0 -mtriple=i686-unknown-linux-gnu -march=x86 -relocation-model=static -code-model=small | FileCheck %s -check-prefix=LINUX-32-PIC
+; RUN: llc < %s -asm-verbose=0 -mtriple=i686-unknown-linux-gnu -march=x86 -relocation-model=static -code-model=small -post-RA-scheduler=false | FileCheck %s -check-prefix=LINUX-32-STATIC
+; RUN: llc < %s -asm-verbose=0 -mtriple=i686-unknown-linux-gnu -march=x86 -relocation-model=static -code-model=small -post-RA-scheduler=false | FileCheck %s -check-prefix=LINUX-32-PIC
 
-; RUN: llc < %s -asm-verbose=0 -mtriple=x86_64-unknown-linux-gnu -march=x86-64 -relocation-model=static -code-model=small | FileCheck %s -check-prefix=LINUX-64-STATIC
-; RUN: llc < %s -asm-verbose=0 -mtriple=x86_64-unknown-linux-gnu -march=x86-64 -relocation-model=pic -code-model=small | FileCheck %s -check-prefix=LINUX-64-PIC
+; RUN: llc < %s -asm-verbose=0 -mtriple=x86_64-unknown-linux-gnu -march=x86-64 -relocation-model=static -code-model=small -post-RA-scheduler=false | FileCheck %s -check-prefix=LINUX-64-STATIC
+; RUN: llc < %s -asm-verbose=0 -mtriple=x86_64-unknown-linux-gnu -march=x86-64 -relocation-model=pic -code-model=small -post-RA-scheduler=false | FileCheck %s -check-prefix=LINUX-64-PIC
 
-; RUN: llc < %s -asm-verbose=0 -mtriple=i686-apple-darwin -march=x86 -relocation-model=static -code-model=small | FileCheck %s -check-prefix=DARWIN-32-STATIC
-; RUN: llc < %s -asm-verbose=0 -mtriple=i686-apple-darwin -march=x86 -relocation-model=dynamic-no-pic -code-model=small | FileCheck %s -check-prefix=DARWIN-32-DYNAMIC
-; RUN: llc < %s -asm-verbose=0 -mtriple=i686-apple-darwin -march=x86 -relocation-model=pic -code-model=small | FileCheck %s -check-prefix=DARWIN-32-PIC
+; RUN: llc < %s -asm-verbose=0 -mtriple=i686-apple-darwin -march=x86 -relocation-model=static -code-model=small -post-RA-scheduler=false | FileCheck %s -check-prefix=DARWIN-32-STATIC
+; RUN: llc < %s -asm-verbose=0 -mtriple=i686-apple-darwin -march=x86 -relocation-model=dynamic-no-pic -code-model=small -post-RA-scheduler=false | FileCheck %s -check-prefix=DARWIN-32-DYNAMIC
+; RUN: llc < %s -asm-verbose=0 -mtriple=i686-apple-darwin -march=x86 -relocation-model=pic -code-model=small -post-RA-scheduler=false | FileCheck %s -check-prefix=DARWIN-32-PIC
 
-; RUN: llc < %s -asm-verbose=0 -mtriple=x86_64-apple-darwin -march=x86-64 -relocation-model=static -code-model=small | FileCheck %s -check-prefix=DARWIN-64-STATIC
-; RUN: llc < %s -asm-verbose=0 -mtriple=x86_64-apple-darwin -march=x86-64 -relocation-model=dynamic-no-pic -code-model=small | FileCheck %s -check-prefix=DARWIN-64-DYNAMIC
-; RUN: llc < %s -asm-verbose=0 -mtriple=x86_64-apple-darwin -march=x86-64 -relocation-model=pic -code-model=small | FileCheck %s -check-prefix=DARWIN-64-PIC
+; RUN: llc < %s -asm-verbose=0 -mtriple=x86_64-apple-darwin -march=x86-64 -relocation-model=static -code-model=small -post-RA-scheduler=false | FileCheck %s -check-prefix=DARWIN-64-STATIC
+; RUN: llc < %s -asm-verbose=0 -mtriple=x86_64-apple-darwin -march=x86-64 -relocation-model=dynamic-no-pic -code-model=small -post-RA-scheduler=false | FileCheck %s -check-prefix=DARWIN-64-DYNAMIC
+; RUN: llc < %s -asm-verbose=0 -mtriple=x86_64-apple-darwin -march=x86-64 -relocation-model=pic -code-model=small -post-RA-scheduler=false | FileCheck %s -check-prefix=DARWIN-64-PIC
 
 @src = external global [131072 x i32]
 @dst = external global [131072 x i32]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/avoid-loop-align-2.ll b/libclamav/c++/llvm/test/CodeGen/X86/avoid-loop-align-2.ll
index 03e69e7..fc9d1f0 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/avoid-loop-align-2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/avoid-loop-align-2.ll
@@ -1,4 +1,8 @@
-; RUN: llc < %s -march=x86 | grep align | count 3
+; RUN: llc < %s -march=x86 | grep align | count 4
+
+; TODO: Is it a good idea to align inner loops? It's hard to know without
+; knowing what their trip counts are, or other dynamic information. For
+; now, CodeGen aligns all loops.
 
 @x = external global i32*		; <i32**> [#uses=1]
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/avoid-loop-align.ll b/libclamav/c++/llvm/test/CodeGen/X86/avoid-loop-align.ll
index 3e68f94..d4c5c67 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/avoid-loop-align.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/avoid-loop-align.ll
@@ -1,4 +1,11 @@
-; RUN: llc < %s -mtriple=i386-apple-darwin | grep align | count 1
+; RUN: llc < %s -mtriple=i386-apple-darwin | FileCheck %s
+
+; CodeGen should align the top of the loop, which differs from the loop
+; header in this case.
+
+; CHECK: jmp LBB1_2
+; CHECK: .align
+; CHECK: LBB1_1:
 
 @A = common global [100 x i32] zeroinitializer, align 32		; <[100 x i32]*> [#uses=1]
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/bigstructret.ll b/libclamav/c++/llvm/test/CodeGen/X86/bigstructret.ll
new file mode 100644
index 0000000..633995d
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/bigstructret.ll
@@ -0,0 +1,17 @@
+; RUN: llc < %s -march=x86 -o %t
+; RUN: grep "movl	.24601, 12(%ecx)" %t
+; RUN: grep "movl	.48, 8(%ecx)" %t
+; RUN: grep "movl	.24, 4(%ecx)" %t
+; RUN: grep "movl	.12, (%ecx)" %t
+
+%0 = type { i32, i32, i32, i32 }
+
+define internal fastcc %0 @ReturnBigStruct() nounwind readnone {
+entry:
+  %0 = insertvalue %0 zeroinitializer, i32 12, 0
+  %1 = insertvalue %0 %0, i32 24, 1
+  %2 = insertvalue %0 %1, i32 48, 2
+  %3 = insertvalue %0 %2, i32 24601, 3
+  ret %0 %3
+}
+
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/break-anti-dependencies.ll b/libclamav/c++/llvm/test/CodeGen/X86/break-anti-dependencies.ll
index 6b245c1..972b3cd 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/break-anti-dependencies.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/break-anti-dependencies.ll
@@ -1,7 +1,7 @@
-; RUN: llc < %s -march=x86-64 -post-RA-scheduler -break-anti-dependencies=false > %t
+; RUN: llc < %s -march=x86-64 -post-RA-scheduler -break-anti-dependencies=none > %t
 ; RUN:   grep {%xmm0} %t | count 14
 ; RUN:   not grep {%xmm1} %t
-; RUN: llc < %s -march=x86-64 -post-RA-scheduler -break-anti-dependencies > %t
+; RUN: llc < %s -march=x86-64 -post-RA-scheduler -break-anti-dependencies=critical > %t
 ; RUN:   grep {%xmm0} %t | count 7
 ; RUN:   grep {%xmm1} %t | count 7
 
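
(The boolean -break-anti-dependencies option became a mode string in this
LLVM drop: "none" disables anti-dependence breaking and "critical" breaks
anti-dependences only along the critical path, matching what false/true
selected before.)
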
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/cmp0.ll b/libclamav/c++/llvm/test/CodeGen/X86/cmp0.ll
index de89374..4878448 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/cmp0.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/cmp0.ll
@@ -1,7 +1,24 @@
-; RUN: llc < %s -march=x86-64 | grep -v cmp
+; RUN: llc < %s -march=x86-64 | FileCheck %s
 
-define i64 @foo(i64 %x) {
+define i64 @test0(i64 %x) nounwind {
   %t = icmp eq i64 %x, 0
   %r = zext i1 %t to i64
   ret i64 %r
+; CHECK: test0:
+; CHECK: 	testq	%rdi, %rdi
+; CHECK: 	sete	%al
+; CHECK: 	movzbl	%al, %eax
+; CHECK: 	ret
 }
+
+define i64 @test1(i64 %x) nounwind {
+  %t = icmp slt i64 %x, 1
+  %r = zext i1 %t to i64
+  ret i64 %r
+; CHECK: test1:
+; CHECK: 	testq	%rdi, %rdi
+; CHECK: 	setle	%al
+; CHECK: 	movzbl	%al, %eax
+; CHECK: 	ret
+}
+
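
test1 needs no cmp against an immediate because "icmp slt i64 %x, 1" is the
same predicate as signed x <= 0: after testq %rdi, %rdi the OF flag is clear,
so setle reduces to SF=1 or ZF=1, i.e. negative or zero. A sketch of the
equivalent canonical form (hypothetical, for illustration only):

  define i64 @test1_sle(i64 %x) nounwind {
    %t = icmp sle i64 %x, 0    ; same predicate as icmp slt i64 %x, 1
    %r = zext i1 %t to i64
    ret i64 %r
  }
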
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/cmp1.ll b/libclamav/c++/llvm/test/CodeGen/X86/cmp1.ll
deleted file mode 100644
index d4aa399..0000000
--- a/libclamav/c++/llvm/test/CodeGen/X86/cmp1.ll
+++ /dev/null
@@ -1,7 +0,0 @@
-; RUN: llc < %s -march=x86-64 | grep -v cmp
-
-define i64 @foo(i64 %x) {
-  %t = icmp slt i64 %x, 1
-  %r = zext i1 %t to i64
-  ret i64 %r
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/codegen-prepare-extload.ll b/libclamav/c++/llvm/test/CodeGen/X86/codegen-prepare-extload.ll
new file mode 100644
index 0000000..9f57d53
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/codegen-prepare-extload.ll
@@ -0,0 +1,20 @@
+; RUN: llc < %s -march=x86-64 | FileCheck %s
+; rdar://7304838
+
+; CodeGenPrepare should move the zext into the block with the load
+; so that SelectionDAG can select it with the load.
+
+; CHECK: movzbl (%rdi), %eax
+
+define void @foo(i8* %p, i32* %q) {
+entry:
+  %t = load i8* %p
+  %a = icmp slt i8 %t, 20
+  br i1 %a, label %true, label %false
+true:
+  %s = zext i8 %t to i32
+  store i32 %s, i32* %q
+  ret void
+false:
+  ret void
+}
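
The reason the zext has to move: SelectionDAG is built per basic block, so a
zext left in %true cannot be matched together with the load in %entry.
Hoisting it into the load's block lets isel fold the pair into the single
movzbl (%rdi), %eax that the CHECK line asks for.
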
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/constant-pool-sharing.ll b/libclamav/c++/llvm/test/CodeGen/X86/constant-pool-sharing.ll
new file mode 100644
index 0000000..c3e97ad
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/constant-pool-sharing.ll
@@ -0,0 +1,19 @@
+; RUN: llc < %s -march=x86-64 | FileCheck %s
+
+; llc should share constant pool entries between this integer vector
+; and this floating-point vector since they have the same encoding.
+
+; CHECK:  LCPI1_0(%rip), %xmm0
+; CHECK:  movaps        %xmm0, (%rdi)
+; CHECK:  movaps        %xmm0, (%rsi)
+
+define void @foo(<4 x i32>* %p, <4 x float>* %q, i1 %t) nounwind {
+entry:
+  br label %loop
+loop:
+  store <4 x i32><i32 1073741824, i32 1073741824, i32 1073741824, i32 1073741824>, <4 x i32>* %p
+  store <4 x float><float 2.0, float 2.0, float 2.0, float 2.0>, <4 x float>* %q
+  br i1 %t, label %loop, label %ret
+ret:
+  ret void
+}
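
The sharing works because the encodings really are identical: IEEE-754
single-precision 2.0 is 0x40000000 (sign 0, biased exponent 128, zero
mantissa), and 0x40000000 = 1073741824, so the <4 x i32> splat and the
<4 x float> splat are the same 16 bytes. The CHECKs enforce this by
requiring both stores to reuse the one %xmm0 loaded from LCPI1_0.
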
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/convert-2-addr-3-addr-inc64.ll b/libclamav/c++/llvm/test/CodeGen/X86/convert-2-addr-3-addr-inc64.ll
index 2b4b832..337f1b2 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/convert-2-addr-3-addr-inc64.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/convert-2-addr-3-addr-inc64.ll
@@ -2,7 +2,7 @@
 ; RUN:   grep {asm-printer} | grep {Number of machine instrs printed} | grep 5
 ; RUN: grep {leal	1(\%rsi),} %t
 
-define fastcc zeroext i8 @fullGtU(i32 %i1, i32 %i2) nounwind {
+define fastcc zeroext i8 @fullGtU(i32 %i1, i32 %i2) nounwind optsize {
 entry:
   %0 = add i32 %i2, 1           ; <i32> [#uses=1]
   %1 = sext i32 %0 to i64               ; <i64> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/discontiguous-loops.ll b/libclamav/c++/llvm/test/CodeGen/X86/discontiguous-loops.ll
new file mode 100644
index 0000000..479c450
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/discontiguous-loops.ll
@@ -0,0 +1,72 @@
+; RUN: llc -verify-loop-info -verify-dom-info -march=x86-64 < %s
+; PR5243
+
+@.str96 = external constant [37 x i8], align 8    ; <[37 x i8]*> [#uses=1]
+
+define void @foo() nounwind {
+bb:
+  br label %ybb1
+
+ybb1:                                              ; preds = %yybb13, %xbb6, %bb
+  switch i32 undef, label %bb18 [
+    i32 150, label %ybb2
+    i32 151, label %bb17
+    i32 152, label %bb19
+    i32 157, label %ybb8
+  ]
+
+ybb2:                                              ; preds = %ybb1
+  %tmp = icmp eq i8** undef, null                 ; <i1> [#uses=1]
+  br i1 %tmp, label %bb3, label %xbb6
+
+bb3:                                              ; preds = %ybb2
+  unreachable
+
+xbb4:                                              ; preds = %xbb6
+  store i32 0, i32* undef, align 8
+  br i1 undef, label %xbb6, label %bb5
+
+bb5:                                              ; preds = %xbb4
+  call fastcc void @decl_mode_check_failed() nounwind
+  unreachable
+
+xbb6:                                              ; preds = %xbb4, %ybb2
+  %tmp7 = icmp slt i32 undef, 0                   ; <i1> [#uses=1]
+  br i1 %tmp7, label %xbb4, label %ybb1
+
+ybb8:                                              ; preds = %ybb1
+  %tmp9 = icmp eq i8** undef, null                ; <i1> [#uses=1]
+  br i1 %tmp9, label %bb10, label %ybb12
+
+bb10:                                             ; preds = %ybb8
+  %tmp11 = load i8** undef, align 8               ; <i8*> [#uses=1]
+  call void (i8*, ...)* @fatal(i8* getelementptr inbounds ([37 x i8]* @.str96, i64 0, i64 0), i8* %tmp11) nounwind
+  unreachable
+
+ybb12:                                             ; preds = %ybb8
+  br i1 undef, label %bb15, label %ybb13
+
+ybb13:                                             ; preds = %ybb12
+  %tmp14 = icmp sgt i32 undef, 0                  ; <i1> [#uses=1]
+  br i1 %tmp14, label %bb16, label %ybb1
+
+bb15:                                             ; preds = %ybb12
+  call void (i8*, ...)* @fatal(i8* getelementptr inbounds ([37 x i8]* @.str96, i64 0, i64 0), i8* undef) nounwind
+  unreachable
+
+bb16:                                             ; preds = %ybb13
+  unreachable
+
+bb17:                                             ; preds = %ybb1
+  unreachable
+
+bb18:                                             ; preds = %ybb1
+  unreachable
+
+bb19:                                             ; preds = %ybb1
+  unreachable
+}
+
+declare void @fatal(i8*, ...)
+
+declare fastcc void @decl_mode_check_failed() nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/fastcc.ll b/libclamav/c++/llvm/test/CodeGen/X86/fastcc.ll
index c70005b..705ab7b 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/fastcc.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/fastcc.ll
@@ -1,5 +1,6 @@
-; RUN: llc < %s -mtriple=i686-apple-darwin -mattr=+sse2 | grep mov | grep ecx | grep 0
-; RUN: llc < %s -mtriple=i686-apple-darwin -mattr=+sse2 | grep mov | grep xmm0 | grep 8
+; RUN: llc < %s -mtriple=i686-apple-darwin -mattr=+sse2 -post-RA-scheduler=false | FileCheck %s
+; CHECK: movsd %xmm0, 8(%esp)
+; CHECK: xorl %ecx, %ecx
 
 @d = external global double		; <double*> [#uses=1]
 @c = external global double		; <double*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/fp_constant_op.ll b/libclamav/c++/llvm/test/CodeGen/X86/fp_constant_op.ll
index 8e823ed..b3ec538 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/fp_constant_op.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/fp_constant_op.ll
@@ -1,6 +1,4 @@
-; RUN: llc < %s -march=x86 -x86-asm-syntax=intel -mcpu=i486 | \
-; RUN:   grep {fadd\\|fsub\\|fdiv\\|fmul} | not grep -i ST
-
+; RUN: llc < %s -march=x86 -x86-asm-syntax=intel -mcpu=i486 | FileCheck %s
 ; Test that the load of the constant is folded into the operation.
 
 
@@ -8,28 +6,41 @@ define double @foo_add(double %P) {
 	%tmp.1 = fadd double %P, 1.230000e+02		; <double> [#uses=1]
 	ret double %tmp.1
 }
+; CHECK: foo_add:
+; CHECK: fadd DWORD PTR
 
 define double @foo_mul(double %P) {
 	%tmp.1 = fmul double %P, 1.230000e+02		; <double> [#uses=1]
 	ret double %tmp.1
 }
+; CHECK: foo_mul:
+; CHECK: fmul DWORD PTR
 
 define double @foo_sub(double %P) {
 	%tmp.1 = fsub double %P, 1.230000e+02		; <double> [#uses=1]
 	ret double %tmp.1
 }
+; CHECK: foo_sub:
+; CHECK: fadd DWORD PTR
 
 define double @foo_subr(double %P) {
 	%tmp.1 = fsub double 1.230000e+02, %P		; <double> [#uses=1]
 	ret double %tmp.1
 }
+; CHECK: foo_subr:
+; CHECK: fsub QWORD PTR
 
 define double @foo_div(double %P) {
 	%tmp.1 = fdiv double %P, 1.230000e+02		; <double> [#uses=1]
 	ret double %tmp.1
 }
+; CHECK: foo_div:
+; CHECK: fdiv DWORD PTR
 
 define double @foo_divr(double %P) {
 	%tmp.1 = fdiv double 1.230000e+02, %P		; <double> [#uses=1]
 	ret double %tmp.1
 }
+; CHECK: foo_divr:
+; CHECK: fdiv QWORD PTR
+
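
The DWORD PTR checks work because 123.0 is exactly representable in single
precision, so the 8-byte constant-pool entry can be shrunk to 4 bytes and
folded as a 32-bit x87 memory operand. foo_sub is checked as fadd since
subtracting a constant is lowered as adding its negation (-123.0 is also
exactly representable). The reversed forms foo_subr and foo_divr are checked
with the full QWORD PTR constant, so the shrink evidently does not apply to
them here.
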
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/hidden-vis-5.ll b/libclamav/c++/llvm/test/CodeGen/X86/hidden-vis-5.ll
new file mode 100644
index 0000000..88fae37
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/hidden-vis-5.ll
@@ -0,0 +1,30 @@
+; RUN: llc < %s -mtriple=i386-apple-darwin9 -relocation-model=pic -disable-fp-elim -unwind-tables | FileCheck %s
+; <rdar://problem/7383328>
+
+@.str = private constant [12 x i8] c"hello world\00", align 1 ; <[12 x i8]*> [#uses=1]
+
+define hidden void @func() nounwind ssp {
+entry:
+  %0 = call i32 @puts(i8* getelementptr inbounds ([12 x i8]* @.str, i64 0, i64 0)) nounwind ; <i32> [#uses=0]
+  br label %return
+
+return:                                           ; preds = %entry
+  ret void
+}
+
+declare i32 @puts(i8*)
+
+define hidden i32 @main() nounwind ssp {
+entry:
+  %retval = alloca i32                            ; <i32*> [#uses=1]
+  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
+  call void @func() nounwind
+  br label %return
+
+return:                                           ; preds = %entry
+  %retval1 = load i32* %retval                    ; <i32> [#uses=1]
+  ret i32 %retval1
+}
+
+; CHECK: .private_extern _func.eh
+; CHECK: .private_extern _main.eh
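
On Darwin, hidden visibility lowers to the assembler's .private_extern
directive, and -unwind-tables forces EH frame symbols to be emitted; the
CHECKs verify that the synthesized _func.eh and _main.eh symbols inherit
the visibility of the functions they describe.
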
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/inline-asm-R-constraint.ll b/libclamav/c++/llvm/test/CodeGen/X86/inline-asm-R-constraint.ll
new file mode 100644
index 0000000..66c27ac
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/inline-asm-R-constraint.ll
@@ -0,0 +1,18 @@
+; RUN: llc -march=x86-64 < %s | FileCheck %s
+; rdar://7282062
+; ModuleID = '<stdin>'
+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128"
+target triple = "x86_64-apple-darwin10.0"
+
+define void @udiv8(i8* %quotient, i16 zeroext %a, i8 zeroext %b, i8 zeroext %c, i8* %remainder) nounwind ssp {
+entry:
+; CHECK: udiv8:
+; CHECK-NOT: movb %ah, (%r8)
+  %a_addr = alloca i16, align 2                   ; <i16*> [#uses=2]
+  %b_addr = alloca i8, align 1                    ; <i8*> [#uses=2]
+  store i16 %a, i16* %a_addr
+  store i8 %b, i8* %b_addr
+  call void asm "\09\09movw\09$2, %ax\09\09\0A\09\09divb\09$3\09\09\09\0A\09\09movb\09%al, $0\09\0A\09\09movb %ah, ($4)", "=*m,=*m,*m,*m,R,~{dirflag},~{fpsr},~{flags},~{ax}"(i8* %quotient, i8* %remainder, i16* %a_addr, i8* %b_addr, i8* %remainder) nounwind
+  ret void
+; CHECK: ret
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/large-gep-scale.ll b/libclamav/c++/llvm/test/CodeGen/X86/large-gep-scale.ll
new file mode 100644
index 0000000..143294e
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/large-gep-scale.ll
@@ -0,0 +1,12 @@
+; RUN: llc < %s -march=x86 | FileCheck %s
+; PR5281
+
+; After scaling, this type doesn't fit in memory. Codegen should still
+; generate correct addressing.
+
+; CHECK: shll $2, %edx
+
+define fastcc i32* @_ada_smkr([2147483647 x i32]* %u, i32 %t) nounwind {
+  %x = getelementptr [2147483647 x i32]* %u, i32 %t, i32 0
+  ret i32* %x
+}
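
The arithmetic behind the comment: the first gep index scales by
sizeof([2147483647 x i32]) = 2147483647 * 4 = 8589934588 bytes, which does
not fit in 32 bits. Reduced mod 2^32 (= 4294967296), the scale is
8589934588 - 2 * 4294967296 = -4, so a correct 32-bit lowering multiplies
the index by -4; the CHECK pins the shll $2 that forms the x4 magnitude.
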
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/legalize-fmp-oeq-vector-select.ll b/libclamav/c++/llvm/test/CodeGen/X86/legalize-fmp-oeq-vector-select.ll
new file mode 100644
index 0000000..6a8c154
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/legalize-fmp-oeq-vector-select.ll
@@ -0,0 +1,11 @@
+; RUN: llc -march=x86-64 -enable-legalize-types-checking < %s
+; PR5092
+
+define <4 x float> @bug(float %a) nounwind {
+entry:
+  %cmp = fcmp oeq float %a, 0.000000e+00          ; <i1> [#uses=1]
+  %temp = select i1 %cmp, <4 x float> <float 1.000000e+00, float 0.000000e+00,
+float 0.000000e+00, float 0.000000e+00>, <4 x float> zeroinitializer
+  ret <4 x float> %temp
+}
+
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/loop-blocks.ll b/libclamav/c++/llvm/test/CodeGen/X86/loop-blocks.ll
new file mode 100644
index 0000000..ec5236b
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/loop-blocks.ll
@@ -0,0 +1,207 @@
+; RUN: llc < %s -march=x86-64 -mtriple=x86_64-unknown-linux-gnu -asm-verbose=false | FileCheck %s
+
+; These tests check for loop branching structure, and that the loop align
+; directive is placed in the expected place.
+
+; CodeGen should insert a branch into the middle of the loop in
+; order to avoid a branch within the loop.
+
+; CHECK: simple:
+;      CHECK:   jmp   .LBB1_1
+; CHECK-NEXT:   align
+; CHECK-NEXT: .LBB1_2:
+; CHECK-NEXT:   call loop_latch
+; CHECK-NEXT: .LBB1_1:
+; CHECK-NEXT:   call loop_header
+
+define void @simple() nounwind {
+entry:
+  br label %loop
+
+loop:
+  call void @loop_header()
+  %t0 = tail call i32 @get()
+  %t1 = icmp slt i32 %t0, 0
+  br i1 %t1, label %done, label %bb
+
+bb:
+  call void @loop_latch()
+  br label %loop
+
+done:
+  call void @exit()
+  ret void
+}
+
+; CodeGen should move block_a to the top of the loop so that it
+; falls through into the loop, avoiding a branch within the loop.
+
+; CHECK: slightly_more_involved:
+;      CHECK:   jmp .LBB2_1
+; CHECK-NEXT:   align
+; CHECK-NEXT: .LBB2_4:
+; CHECK-NEXT:   call bar99
+; CHECK-NEXT: .LBB2_1:
+; CHECK-NEXT:   call body
+
+define void @slightly_more_involved() nounwind {
+entry:
+  br label %loop
+
+loop:
+  call void @body()
+  %t0 = call i32 @get()
+  %t1 = icmp slt i32 %t0, 2
+  br i1 %t1, label %block_a, label %bb
+
+bb:
+  %t2 = call i32 @get()
+  %t3 = icmp slt i32 %t2, 99
+  br i1 %t3, label %exit, label %loop
+
+block_a:
+  call void @bar99()
+  br label %loop
+
+exit:
+  call void @exit()
+  ret void
+}
+
+; Same as slightly_more_involved, but block_a is now a CFG diamond with
+; fallthrough edges which should be preserved.
+
+; CHECK: yet_more_involved:
+;      CHECK:   jmp .LBB3_1
+; CHECK-NEXT:   align
+; CHECK-NEXT: .LBB3_4:
+; CHECK-NEXT:   call bar99
+; CHECK-NEXT:   call get
+; CHECK-NEXT:   cmpl $2999, %eax
+; CHECK-NEXT:   jg .LBB3_6
+; CHECK-NEXT:   call block_a_true_func
+; CHECK-NEXT:   jmp .LBB3_7
+; CHECK-NEXT: .LBB3_6:
+; CHECK-NEXT:   call block_a_false_func
+; CHECK-NEXT: .LBB3_7:
+; CHECK-NEXT:   call block_a_merge_func
+; CHECK-NEXT: .LBB3_1:
+; CHECK-NEXT:   call body
+
+define void @yet_more_involved() nounwind {
+entry:
+  br label %loop
+
+loop:
+  call void @body()
+  %t0 = call i32 @get()
+  %t1 = icmp slt i32 %t0, 2
+  br i1 %t1, label %block_a, label %bb
+
+bb:
+  %t2 = call i32 @get()
+  %t3 = icmp slt i32 %t2, 99
+  br i1 %t3, label %exit, label %loop
+
+block_a:
+  call void @bar99()
+  %z0 = call i32 @get()
+  %z1 = icmp slt i32 %z0, 3000
+  br i1 %z1, label %block_a_true, label %block_a_false
+
+block_a_true:
+  call void @block_a_true_func()
+  br label %block_a_merge
+
+block_a_false:
+  call void @block_a_false_func()
+  br label %block_a_merge
+
+block_a_merge:
+  call void @block_a_merge_func()
+  br label %loop
+
+exit:
+  call void @exit()
+  ret void
+}
+
+; CodeGen should move the CFG islands that are part of the loop but don't
+; conveniently fit anywhere so that they are at least contiguous with the
+; loop.
+
+; CHECK: cfg_islands:
+;      CHECK:   jmp     .LBB4_1
+; CHECK-NEXT:   align
+; CHECK-NEXT: .LBB4_7:
+; CHECK-NEXT:   call    bar100
+; CHECK-NEXT:   jmp     .LBB4_1
+; CHECK-NEXT: .LBB4_8:
+; CHECK-NEXT:   call    bar101
+; CHECK-NEXT:   jmp     .LBB4_1
+; CHECK-NEXT: .LBB4_9:
+; CHECK-NEXT:   call    bar102
+; CHECK-NEXT:   jmp     .LBB4_1
+; CHECK-NEXT: .LBB4_5:
+; CHECK-NEXT:   call    loop_latch
+; CHECK-NEXT: .LBB4_1:
+; CHECK-NEXT:   call    loop_header
+
+define void @cfg_islands() nounwind {
+entry:
+  br label %loop
+
+loop:
+  call void @loop_header()
+  %t0 = call i32 @get()
+  %t1 = icmp slt i32 %t0, 100
+  br i1 %t1, label %block100, label %bb
+
+bb:
+  %t2 = call i32 @get()
+  %t3 = icmp slt i32 %t2, 101
+  br i1 %t3, label %block101, label %bb1
+
+bb1:
+  %t4 = call i32 @get()
+  %t5 = icmp slt i32 %t4, 102
+  br i1 %t5, label %block102, label %bb2
+
+bb2:
+  %t6 = call i32 @get()
+  %t7 = icmp slt i32 %t6, 103
+  br i1 %t7, label %exit, label %bb3
+
+bb3:
+  call void @loop_latch()
+  br label %loop
+
+exit:
+  call void @exit()
+  ret void
+
+block100:
+  call void @bar100()
+  br label %loop
+
+block101:
+  call void @bar101()
+  br label %loop
+
+block102:
+  call void @bar102()
+  br label %loop
+}
+
+declare void @bar99() nounwind
+declare void @bar100() nounwind
+declare void @bar101() nounwind
+declare void @bar102() nounwind
+declare void @body() nounwind
+declare void @exit() nounwind
+declare void @loop_header() nounwind
+declare void @loop_latch() nounwind
+declare i32 @get() nounwind
+declare void @block_a_true_func() nounwind
+declare void @block_a_false_func() nounwind
+declare void @block_a_merge_func() nounwind
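
The shape all four functions are checked for is loop rotation at the
block-placement level: the unconditional jmp into the middle of the loop runs
once on entry, the latch block then falls through into the header, and each
steady-state iteration pays only the conditional branch at the bottom. It
also puts the .align directive on the loop's real top rather than on a block
that control flow merely falls into.
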
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce2.ll b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce2.ll
index a1f38a7..9b53adb 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce2.ll
@@ -4,7 +4,7 @@
 
 @flags2 = internal global [8193 x i8] zeroinitializer, align 32		; <[8193 x i8]*> [#uses=1]
 
-define void @test(i32 %k, i32 %i) {
+define void @test(i32 %k, i32 %i) nounwind {
 entry:
 	%k_addr.012 = shl i32 %i, 1		; <i32> [#uses=1]
 	%tmp14 = icmp sgt i32 %k_addr.012, 8192		; <i1> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce3.ll b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce3.ll
index e340edd..c45a374 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce3.ll
@@ -1,7 +1,7 @@
 ; RUN: llc < %s -march=x86 | grep cmp | grep 240
 ; RUN: llc < %s -march=x86 | grep inc | count 1
 
-define i32 @foo(i32 %A, i32 %B, i32 %C, i32 %D) {
+define i32 @foo(i32 %A, i32 %B, i32 %C, i32 %D) nounwind {
 entry:
 	%tmp2955 = icmp sgt i32 %C, 0		; <i1> [#uses=1]
 	br i1 %tmp2955, label %bb26.outer.us, label %bb40.split
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce5.ll b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce5.ll
index 4ec2a02..b07eeb6 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce5.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce5.ll
@@ -3,7 +3,7 @@
 @X = weak global i16 0		; <i16*> [#uses=1]
 @Y = weak global i16 0		; <i16*> [#uses=1]
 
-define void @foo(i32 %N) {
+define void @foo(i32 %N) nounwind {
 entry:
 	%tmp1019 = icmp sgt i32 %N, 0		; <i1> [#uses=1]
 	br i1 %tmp1019, label %bb, label %return
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce6.ll b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce6.ll
index 81da82e..bbafcf7 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce6.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce6.ll
@@ -1,6 +1,6 @@
 ; RUN: llc < %s -march=x86-64 | not grep inc
 
-define fastcc i32 @decodeMP3(i32 %isize, i32* %done) {
+define fastcc i32 @decodeMP3(i32 %isize, i32* %done) nounwind {
 entry:
 	br i1 false, label %cond_next191, label %cond_true189
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/negative-stride-fptosi-user.ll b/libclamav/c++/llvm/test/CodeGen/X86/negative-stride-fptosi-user.ll
new file mode 100644
index 0000000..332e0b9
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/negative-stride-fptosi-user.ll
@@ -0,0 +1,25 @@
+; RUN: llc < %s -march=x86-64 | grep cvtsi2sd
+
+; LSR previously eliminated the sitofp by introducing an induction
+; variable which stepped by a bogus ((double)UINT32_C(-1)). It's theoretically
+; possible to eliminate the sitofp using a proper -1.0 step though; this
+; test should be changed if that is done.
+
+define void @foo(i32 %N) nounwind {
+entry:
+  %0 = icmp slt i32 %N, 0                         ; <i1> [#uses=1]
+  br i1 %0, label %bb, label %return
+
+bb:                                               ; preds = %bb, %entry
+  %i.03 = phi i32 [ 0, %entry ], [ %2, %bb ]      ; <i32> [#uses=2]
+  %1 = sitofp i32 %i.03 to double                  ; <double> [#uses=1]
+  tail call void @bar(double %1) nounwind
+  %2 = add nsw i32 %i.03, -1                       ; <i32> [#uses=2]
+  %exitcond = icmp eq i32 %2, %N                  ; <i1> [#uses=1]
+  br i1 %exitcond, label %return, label %bb
+
+return:                                           ; preds = %bb, %entry
+  ret void
+}
+
+declare void @bar(double)
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/object-size.ll b/libclamav/c++/llvm/test/CodeGen/X86/object-size.ll
new file mode 100644
index 0000000..3f90245
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/object-size.ll
@@ -0,0 +1,55 @@
+; RUN: llc -O0 < %s -march=x86-64 | FileCheck %s -check-prefix=X64
+
+; ModuleID = 'ts.c'
+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64"
+target triple = "x86_64-apple-darwin10.0"
+
+@p = common global i8* null, align 8              ; <i8**> [#uses=4]
+@.str = private constant [3 x i8] c"Hi\00"        ; <[3 x i8]*> [#uses=1]
+
+define void @bar() nounwind ssp {
+entry:
+  %tmp = load i8** @p                             ; <i8*> [#uses=1]
+  %0 = call i64 @llvm.objectsize.i64(i8* %tmp, i32 0) ; <i64> [#uses=1]
+  %cmp = icmp ne i64 %0, -1                       ; <i1> [#uses=1]
+; X64: movq    $-1, %rax
+; X64: cmpq    $-1, %rax
+  br i1 %cmp, label %cond.true, label %cond.false
+
+cond.true:                                        ; preds = %entry
+  %tmp1 = load i8** @p                            ; <i8*> [#uses=1]
+  %tmp2 = load i8** @p                            ; <i8*> [#uses=1]
+  %1 = call i64 @llvm.objectsize.i64(i8* %tmp2, i32 1) ; <i64> [#uses=1]
+  %call = call i8* @__strcpy_chk(i8* %tmp1, i8* getelementptr inbounds ([3 x i8]* @.str, i32 0, i32 0), i64 %1) ssp ; <i8*> [#uses=1]
+  br label %cond.end
+
+cond.false:                                       ; preds = %entry
+  %tmp3 = load i8** @p                            ; <i8*> [#uses=1]
+  %call4 = call i8* @__inline_strcpy_chk(i8* %tmp3, i8* getelementptr inbounds ([3 x i8]* @.str, i32 0, i32 0)) ssp ; <i8*> [#uses=1]
+  br label %cond.end
+
+cond.end:                                         ; preds = %cond.false, %cond.true
+  %cond = phi i8* [ %call, %cond.true ], [ %call4, %cond.false ] ; <i8*> [#uses=0]
+  ret void
+}
+
+declare i64 @llvm.objectsize.i64(i8*, i32) nounwind readonly
+
+declare i8* @__strcpy_chk(i8*, i8*, i64) ssp
+
+define internal i8* @__inline_strcpy_chk(i8* %__dest, i8* %__src) nounwind ssp {
+entry:
+  %retval = alloca i8*                            ; <i8**> [#uses=2]
+  %__dest.addr = alloca i8*                       ; <i8**> [#uses=3]
+  %__src.addr = alloca i8*                        ; <i8**> [#uses=2]
+  store i8* %__dest, i8** %__dest.addr
+  store i8* %__src, i8** %__src.addr
+  %tmp = load i8** %__dest.addr                   ; <i8*> [#uses=1]
+  %tmp1 = load i8** %__src.addr                   ; <i8*> [#uses=1]
+  %tmp2 = load i8** %__dest.addr                  ; <i8*> [#uses=1]
+  %0 = call i64 @llvm.objectsize.i64(i8* %tmp2, i32 1) ; <i64> [#uses=1]
+  %call = call i8* @__strcpy_chk(i8* %tmp, i8* %tmp1, i64 %0) ssp ; <i8*> [#uses=1]
+  store i8* %call, i8** %retval
+  %1 = load i8** %retval                          ; <i8*> [#uses=1]
+  ret i8* %1
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/palignr-2.ll b/libclamav/c++/llvm/test/CodeGen/X86/palignr-2.ll
new file mode 100644
index 0000000..116d4c7
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/palignr-2.ll
@@ -0,0 +1,28 @@
+; RUN: llc < %s -march=x86 -mattr=+ssse3 | FileCheck %s
+; rdar://7341330
+
+@a = global [4 x i32] [i32 4, i32 5, i32 6, i32 7], align 16 ; <[4 x i32]*> [#uses=1]
+@c = common global [4 x i32] zeroinitializer, align 16 ; <[4 x i32]*> [#uses=1]
+@b = global [4 x i32] [i32 0, i32 1, i32 2, i32 3], align 16 ; <[4 x i32]*> [#uses=1]
+
+define void @t1(<2 x i64> %a, <2 x i64> %b) nounwind ssp {
+entry:
+; CHECK: t1:
+; palignr $3, %xmm1, %xmm0
+  %0 = tail call <2 x i64> @llvm.x86.ssse3.palign.r.128(<2 x i64> %a, <2 x i64> %b, i8 24) nounwind readnone
+  store <2 x i64> %0, <2 x i64>* bitcast ([4 x i32]* @c to <2 x i64>*), align 16
+  ret void
+}
+
+declare <2 x i64> @llvm.x86.ssse3.palign.r.128(<2 x i64>, <2 x i64>, i8) nounwind readnone
+
+define void @t2() nounwind ssp {
+entry:
+; CHECK: t2:
+; palignr $4, _b, %xmm0
+  %0 = load <2 x i64>* bitcast ([4 x i32]* @b to <2 x i64>*), align 16 ; <<2 x i64>> [#uses=1]
+  %1 = load <2 x i64>* bitcast ([4 x i32]* @a to <2 x i64>*), align 16 ; <<2 x i64>> [#uses=1]
+  %2 = tail call <2 x i64> @llvm.x86.ssse3.palign.r.128(<2 x i64> %1, <2 x i64> %0, i8 32) nounwind readnone
+  store <2 x i64> %2, <2 x i64>* bitcast ([4 x i32]* @c to <2 x i64>*), align 16
+  ret void
+}
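Note that the intrinsic's shift operand here is expressed in bits while the
palignr immediate in the expected assembly is in bytes: i8 24 corresponds to
"palignr $3" (24 / 8 = 3) and i8 32 to "palignr $4" (32 / 8 = 4).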
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/palignr.ll b/libclamav/c++/llvm/test/CodeGen/X86/palignr.ll
new file mode 100644
index 0000000..3812c72
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/palignr.ll
@@ -0,0 +1,58 @@
+; RUN: llc < %s -march=x86 -mcpu=core2 | FileCheck %s
+; RUN: llc < %s -march=x86 -mcpu=yonah | FileCheck --check-prefix=YONAH %s
+
+define <4 x i32> @test1(<4 x i32> %A, <4 x i32> %B) nounwind {
+; CHECK: pshufd
+; CHECK-YONAH: pshufd
+  %C = shufflevector <4 x i32> %A, <4 x i32> undef, <4 x i32> < i32 1, i32 2, i32 3, i32 0 >
+	ret <4 x i32> %C
+}
+
+define <4 x i32> @test2(<4 x i32> %A, <4 x i32> %B) nounwind {
+; CHECK: palignr
+; CHECK-YONAH: shufps
+  %C = shufflevector <4 x i32> %A, <4 x i32> %B, <4 x i32> < i32 1, i32 2, i32 3, i32 4 >
+	ret <4 x i32> %C
+}
+
+define <4 x i32> @test3(<4 x i32> %A, <4 x i32> %B) nounwind {
+; CHECK: palignr
+  %C = shufflevector <4 x i32> %A, <4 x i32> %B, <4 x i32> < i32 1, i32 2, i32 undef, i32 4 >
+	ret <4 x i32> %C
+}
+
+define <4 x i32> @test4(<4 x i32> %A, <4 x i32> %B) nounwind {
+; CHECK: palignr
+  %C = shufflevector <4 x i32> %A, <4 x i32> %B, <4 x i32> < i32 6, i32 7, i32 undef, i32 1 >
+	ret <4 x i32> %C
+}
+
+define <4 x float> @test5(<4 x float> %A, <4 x float> %B) nounwind {
+; CHECK: palignr
+  %C = shufflevector <4 x float> %A, <4 x float> %B, <4 x i32> < i32 6, i32 7, i32 undef, i32 1 >
+	ret <4 x float> %C
+}
+
+define <8 x i16> @test6(<8 x i16> %A, <8 x i16> %B) nounwind {
+; CHECK: palignr
+  %C = shufflevector <8 x i16> %A, <8 x i16> %B, <8 x i32> < i32 3, i32 4, i32 undef, i32 6, i32 7, i32 8, i32 9, i32 10 >
+	ret <8 x i16> %C
+}
+
+define <8 x i16> @test7(<8 x i16> %A, <8 x i16> %B) nounwind {
+; CHECK: palignr
+  %C = shufflevector <8 x i16> %A, <8 x i16> %B, <8 x i32> < i32 undef, i32 6, i32 undef, i32 8, i32 9, i32 10, i32 11, i32 12 >
+	ret <8 x i16> %C
+}
+
+define <8 x i16> @test8(<8 x i16> %A, <8 x i16> %B) nounwind {
+; CHECK: palignr
+  %C = shufflevector <8 x i16> %A, <8 x i16> %B, <8 x i32> < i32 undef, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 0 >
+	ret <8 x i16> %C
+}
+
+define <16 x i8> @test9(<16 x i8> %A, <16 x i8> %B) nounwind {
+; CHECK: palignr
+  %C = shufflevector <16 x i8> %A, <16 x i8> %B, <16 x i32> < i32 5, i32 6, i32 7, i32 undef, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20 >
+	ret <16 x i8> %C
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/peep-test-3.ll b/libclamav/c++/llvm/test/CodeGen/X86/peep-test-3.ll
index 13a69ed..5aaf81b 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/peep-test-3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/peep-test-3.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=x86 | FileCheck %s
+; RUN: llc < %s -march=x86 -post-RA-scheduler=false | FileCheck %s
 ; rdar://7226797
 
 ; LLVM should omit the testl and use the flags result from the orl.
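(The orl already sets ZF from its result, so a following testl of the same
value is redundant; -post-RA-scheduler=false pins the instruction order so
the CHECK lines can reliably verify that the testl is gone.)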
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/pic.ll b/libclamav/c++/llvm/test/CodeGen/X86/pic.ll
index e9218ed..e886ba0 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/pic.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/pic.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -mtriple=i686-pc-linux-gnu -relocation-model=pic | FileCheck %s -check-prefix=LINUX
+; RUN: llc < %s -mtriple=i686-pc-linux-gnu -relocation-model=pic -asm-verbose=false -post-RA-scheduler=false | FileCheck %s -check-prefix=LINUX
 
 @ptr = external global i32* 
 @dst = external global i32 
@@ -12,7 +12,6 @@ entry:
     ret void
     
 ; LINUX:    test1:
-; LINUX: .LBB1_0:
 ; LINUX:	call	.L1$pb
 ; LINUX-NEXT: .L1$pb:
 ; LINUX-NEXT:	popl
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/pre-split11.ll b/libclamav/c++/llvm/test/CodeGen/X86/pre-split11.ll
new file mode 100644
index 0000000..0a9f4e3
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/pre-split11.ll
@@ -0,0 +1,34 @@
+; RUN: llc < %s -mtriple=x86_64-apple-darwin -mattr=+sse2 -pre-alloc-split | FileCheck %s
+
+@.str = private constant [28 x i8] c"\0A\0ADOUBLE            D = %f\0A\00", align 1 ; <[28 x i8]*> [#uses=1]
+@.str1 = private constant [37 x i8] c"double to long    l1 = %ld\09\09(0x%lx)\0A\00", align 8 ; <[37 x i8]*> [#uses=1]
+@.str2 = private constant [35 x i8] c"double to uint   ui1 = %u\09\09(0x%x)\0A\00", align 8 ; <[35 x i8]*> [#uses=1]
+@.str3 = private constant [37 x i8] c"double to ulong  ul1 = %lu\09\09(0x%lx)\0A\00", align 8 ; <[37 x i8]*> [#uses=1]
+
+define i32 @main(i32 %argc, i8** nocapture %argv) nounwind ssp {
+; CHECK: movsd %xmm0, (%rsp)
+entry:
+  %0 = icmp sgt i32 %argc, 4                      ; <i1> [#uses=1]
+  br i1 %0, label %bb, label %bb2
+
+bb:                                               ; preds = %entry
+  %1 = getelementptr inbounds i8** %argv, i64 4   ; <i8**> [#uses=1]
+  %2 = load i8** %1, align 8                      ; <i8*> [#uses=1]
+  %3 = tail call double @atof(i8* %2) nounwind    ; <double> [#uses=1]
+  br label %bb2
+
+bb2:                                              ; preds = %bb, %entry
+  %storemerge = phi double [ %3, %bb ], [ 2.000000e+00, %entry ] ; <double> [#uses=4]
+  %4 = fptoui double %storemerge to i32           ; <i32> [#uses=2]
+  %5 = fptoui double %storemerge to i64           ; <i64> [#uses=2]
+  %6 = fptosi double %storemerge to i64           ; <i64> [#uses=2]
+  %7 = tail call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([28 x i8]* @.str, i64 0, i64 0), double %storemerge) nounwind ; <i32> [#uses=0]
+  %8 = tail call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([37 x i8]* @.str1, i64 0, i64 0), i64 %6, i64 %6) nounwind ; <i32> [#uses=0]
+  %9 = tail call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([35 x i8]* @.str2, i64 0, i64 0), i32 %4, i32 %4) nounwind ; <i32> [#uses=0]
+  %10 = tail call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([37 x i8]* @.str3, i64 0, i64 0), i64 %5, i64 %5) nounwind ; <i32> [#uses=0]
+  ret i32 0
+}
+
+declare double @atof(i8* nocapture) nounwind readonly
+
+declare i32 @printf(i8* nocapture, ...) nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/sink-hoist.ll b/libclamav/c++/llvm/test/CodeGen/X86/sink-hoist.ll
index 0f4e63f..f8d542e 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/sink-hoist.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/sink-hoist.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=x86-64 -asm-verbose=false | FileCheck %s
+; RUN: llc < %s -march=x86-64 -asm-verbose=false -mtriple=x86_64-unknown-linux-gnu | FileCheck %s
 
 ; Currently, floating-point selects are lowered to CFG triangles.
 ; This means that one side of the select is always unconditionally
@@ -6,10 +6,10 @@
 ; that it's conditionally evaluated.
 
 ; CHECK: foo:
-; CHECK-NEXT: divsd
-; CHECK:      testb $1, %dil
-; CHECK-NEXT: jne
 ; CHECK:      divsd
+; CHECK-NEXT: testb $1, %dil
+; CHECK-NEXT: jne
+; CHECK-NEXT: divsd
 
 define double @foo(double %x, double %y, i1 %c) nounwind {
   %a = fdiv double %x, 3.2
@@ -41,3 +41,108 @@ bb:
 return:
   ret void
 }
+
+; Sink instructions with dead EFLAGS defs.
+
+; CHECK: zzz:
+; CHECK:      je
+; CHECK-NEXT: orb
+
+define zeroext i8 @zzz(i8 zeroext %a, i8 zeroext %b) nounwind readnone {
+entry:
+  %tmp = zext i8 %a to i32                        ; <i32> [#uses=1]
+  %tmp2 = icmp eq i8 %a, 0                    ; <i1> [#uses=1]
+  %tmp3 = or i8 %b, -128                          ; <i8> [#uses=1]
+  %tmp4 = and i8 %b, 127                          ; <i8> [#uses=1]
+  %b_addr.0 = select i1 %tmp2, i8 %tmp4, i8 %tmp3 ; <i8> [#uses=1]
+  ret i8 %b_addr.0
+}
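+
+; Note: the orb defines EFLAGS as a side effect, but those flags are dead
+; here; the select is controlled by the icmp on %a. With no flag users,
+; machine sinking may legally move the orb below the compare-and-branch,
+; which is what the "je" followed by "orb" checks above pin down.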
+
+; Codegen should hoist and CSE these constants.
+
+; CHECK: vv:
+; CHECK: LCPI4_0(%rip), %xmm0
+; CHECK: LCPI4_1(%rip), %xmm1
+; CHECK: LCPI4_2(%rip), %xmm2
+; CHECK: align
+; CHECK-NOT: LCPI
+; CHECK: ret
+
+@_minusZero.6007 = internal constant <4 x float> <float -0.000000e+00, float -0.000000e+00, float -0.000000e+00, float -0.000000e+00> ; <<4 x float>*> [#uses=0]
+@twoTo23.6008 = internal constant <4 x float> <float 8.388608e+06, float 8.388608e+06, float 8.388608e+06, float 8.388608e+06> ; <<4 x float>*> [#uses=0]
+
+define void @vv(float* %y, float* %x, i32* %n) nounwind ssp {
+entry:
+  br label %bb60
+
+bb:                                               ; preds = %bb60
+  %0 = bitcast float* %x_addr.0 to <4 x float>*   ; <<4 x float>*> [#uses=1]
+  %1 = load <4 x float>* %0, align 16             ; <<4 x float>> [#uses=4]
+  %tmp20 = bitcast <4 x float> %1 to <4 x i32>    ; <<4 x i32>> [#uses=1]
+  %tmp22 = and <4 x i32> %tmp20, <i32 2147483647, i32 2147483647, i32 2147483647, i32 2147483647> ; <<4 x i32>> [#uses=1]
+  %tmp23 = bitcast <4 x i32> %tmp22 to <4 x float> ; <<4 x float>> [#uses=1]
+  %tmp25 = bitcast <4 x float> %1 to <4 x i32>    ; <<4 x i32>> [#uses=1]
+  %tmp27 = and <4 x i32> %tmp25, <i32 -2147483648, i32 -2147483648, i32 -2147483648, i32 -2147483648> ; <<4 x i32>> [#uses=2]
+  %tmp30 = call <4 x float> @llvm.x86.sse.cmp.ps(<4 x float> %tmp23, <4 x float> <float 8.388608e+06, float 8.388608e+06, float 8.388608e+06, float 8.388608e+06>, i8 5) ; <<4 x float>> [#uses=1]
+  %tmp34 = bitcast <4 x float> %tmp30 to <4 x i32> ; <<4 x i32>> [#uses=1]
+  %tmp36 = xor <4 x i32> %tmp34, <i32 -1, i32 -1, i32 -1, i32 -1> ; <<4 x i32>> [#uses=1]
+  %tmp37 = and <4 x i32> %tmp36, <i32 1258291200, i32 1258291200, i32 1258291200, i32 1258291200> ; <<4 x i32>> [#uses=1]
+  %tmp42 = or <4 x i32> %tmp37, %tmp27            ; <<4 x i32>> [#uses=1]
+  %tmp43 = bitcast <4 x i32> %tmp42 to <4 x float> ; <<4 x float>> [#uses=2]
+  %tmp45 = fadd <4 x float> %1, %tmp43            ; <<4 x float>> [#uses=1]
+  %tmp47 = fsub <4 x float> %tmp45, %tmp43        ; <<4 x float>> [#uses=2]
+  %tmp49 = call <4 x float> @llvm.x86.sse.cmp.ps(<4 x float> %1, <4 x float> %tmp47, i8 1) ; <<4 x float>> [#uses=1]
+  %2 = bitcast <4 x float> %tmp49 to <4 x i32>    ; <<4 x i32>> [#uses=1]
+  %3 = call <4 x float> @llvm.x86.sse2.cvtdq2ps(<4 x i32> %2) nounwind readnone ; <<4 x float>> [#uses=1]
+  %tmp53 = fadd <4 x float> %tmp47, %3            ; <<4 x float>> [#uses=1]
+  %tmp55 = bitcast <4 x float> %tmp53 to <4 x i32> ; <<4 x i32>> [#uses=1]
+  %tmp57 = or <4 x i32> %tmp55, %tmp27            ; <<4 x i32>> [#uses=1]
+  %tmp58 = bitcast <4 x i32> %tmp57 to <4 x float> ; <<4 x float>> [#uses=1]
+  %4 = bitcast float* %y_addr.0 to <4 x float>*   ; <<4 x float>*> [#uses=1]
+  store <4 x float> %tmp58, <4 x float>* %4, align 16
+  %5 = getelementptr float* %x_addr.0, i64 4      ; <float*> [#uses=1]
+  %6 = getelementptr float* %y_addr.0, i64 4      ; <float*> [#uses=1]
+  %7 = add i32 %i.0, 4                            ; <i32> [#uses=1]
+  br label %bb60
+
+bb60:                                             ; preds = %bb, %entry
+  %i.0 = phi i32 [ 0, %entry ], [ %7, %bb ]       ; <i32> [#uses=2]
+  %x_addr.0 = phi float* [ %x, %entry ], [ %5, %bb ] ; <float*> [#uses=2]
+  %y_addr.0 = phi float* [ %y, %entry ], [ %6, %bb ] ; <float*> [#uses=2]
+  %8 = load i32* %n, align 4                      ; <i32> [#uses=1]
+  %9 = icmp sgt i32 %8, %i.0                      ; <i1> [#uses=1]
+  br i1 %9, label %bb, label %return
+
+return:                                           ; preds = %bb60
+  ret void
+}
+
+declare <4 x float> @llvm.x86.sse.cmp.ps(<4 x float>, <4 x float>, i8) nounwind readnone
+
+declare <4 x float> @llvm.x86.sse2.cvtdq2ps(<4 x i32>) nounwind readnone
+
+; CodeGen should use the correct register class when extracting
+; a load from a zero-extending load for hoisting.
+
+; CHECK: default_get_pch_validity:
+; CHECK: movl cl_options_count(%rip), %ecx
+
+@cl_options_count = external constant i32         ; <i32*> [#uses=2]
+
+define void @default_get_pch_validity() nounwind {
+entry:
+  %tmp4 = load i32* @cl_options_count, align 4    ; <i32> [#uses=1]
+  %tmp5 = icmp eq i32 %tmp4, 0                    ; <i1> [#uses=1]
+  br i1 %tmp5, label %bb6, label %bb2
+
+bb2:                                              ; preds = %bb2, %entry
+  %i.019 = phi i64 [ 0, %entry ], [ %tmp25, %bb2 ] ; <i64> [#uses=1]
+  %tmp25 = add i64 %i.019, 1                      ; <i64> [#uses=2]
+  %tmp11 = load i32* @cl_options_count, align 4   ; <i32> [#uses=1]
+  %tmp12 = zext i32 %tmp11 to i64                 ; <i64> [#uses=1]
+  %tmp13 = icmp ugt i64 %tmp12, %tmp25            ; <i1> [#uses=1]
+  br i1 %tmp13, label %bb2, label %bb6
+
+bb6:                                              ; preds = %bb2, %entry
+  ret void
+}
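(On x86-64 a 32-bit movl load implicitly zero-extends into the full 64-bit
register, so when hoisting peels the 32-bit load out of the zero-extending
load it must assign the result to a 32-bit register class; the check on
"movl cl_options_count(%rip), %ecx" verifies exactly that.)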
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/sse2.ll b/libclamav/c++/llvm/test/CodeGen/X86/sse2.ll
index 9f926f2..58fe28b 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/sse2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/sse2.ll
@@ -10,10 +10,10 @@ define void @t1(<2 x double>* %r, <2 x double>* %A, double %B) nounwind  {
         
 ; CHECK: t1:
 ; CHECK: 	movl	8(%esp), %eax
+; CHECK-NEXT: 	movl	4(%esp), %ecx
 ; CHECK-NEXT: 	movapd	(%eax), %xmm0
 ; CHECK-NEXT: 	movlpd	12(%esp), %xmm0
-; CHECK-NEXT: 	movl	4(%esp), %eax
-; CHECK-NEXT: 	movapd	%xmm0, (%eax)
+; CHECK-NEXT: 	movapd	%xmm0, (%ecx)
 ; CHECK-NEXT: 	ret
 }
 
@@ -26,9 +26,9 @@ define void @t2(<2 x double>* %r, <2 x double>* %A, double %B) nounwind  {
         
 ; CHECK: t2:
 ; CHECK: 	movl	8(%esp), %eax
+; CHECK-NEXT: 	movl	4(%esp), %ecx
 ; CHECK-NEXT: 	movapd	(%eax), %xmm0
 ; CHECK-NEXT: 	movhpd	12(%esp), %xmm0
-; CHECK-NEXT: 	movl	4(%esp), %eax
-; CHECK-NEXT: 	movapd	%xmm0, (%eax)
+; CHECK-NEXT: 	movapd	%xmm0, (%ecx)
 ; CHECK-NEXT: 	ret
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/sse3.ll b/libclamav/c++/llvm/test/CodeGen/X86/sse3.ll
index 703635c..21c1a3c 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/sse3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/sse3.ll
@@ -17,8 +17,8 @@ entry:
         
 ; X64: t0:
 ; X64: 	movddup	(%rsi), %xmm0
-; X64:  pshuflw	$0, %xmm0, %xmm0
 ; X64:	xorl	%eax, %eax
+; X64:  pshuflw	$0, %xmm0, %xmm0
 ; X64:	pinsrw	$0, %eax, %xmm0
 ; X64:	movaps	%xmm0, (%rdi)
 ; X64:	ret
@@ -145,7 +145,9 @@ define void @t9(<4 x float>* %r, <2 x i32>* %A) nounwind {
 	ret void
 ; X64: 	t9:
 ; X64: 		movsd	(%rsi), %xmm0
-; X64: 		movhps	%xmm0, (%rdi)
+; X64:	        movaps  (%rdi), %xmm1
+; X64:	        movlhps %xmm0, %xmm1
+; X64:	        movaps  %xmm1, (%rdi)
 ; X64: 		ret
 }
 
@@ -167,18 +169,12 @@ define internal void @t10() nounwind {
         store <4 x i16> %6, <4 x i16>* @g2, align 8
         ret void
 ; X64: 	t10:
-; X64: 		movq	_g1@GOTPCREL(%rip), %rax
-; X64: 		movaps	(%rax), %xmm0
 ; X64: 		pextrw	$4, %xmm0, %eax
-; X64: 		movaps	%xmm0, %xmm1
+; X64: 		pextrw	$6, %xmm0, %edx
 ; X64: 		movlhps	%xmm1, %xmm1
 ; X64: 		pshuflw	$8, %xmm1, %xmm1
 ; X64: 		pinsrw	$2, %eax, %xmm1
-; X64: 		pextrw	$6, %xmm0, %eax
-; X64: 		pinsrw	$3, %eax, %xmm1
-; X64: 		movq	_g2@GOTPCREL(%rip), %rax
-; X64: 		movq	%xmm1, (%rax)
-; X64: 		ret
+; X64: 		pinsrw	$3, %edx, %xmm1
 }
 
 
@@ -189,8 +185,8 @@ entry:
 	ret <8 x i16> %tmp7
 
 ; X64: t11:
-; X64:	movd	%xmm1, %eax
 ; X64:	movlhps	%xmm0, %xmm0
+; X64:	movd	%xmm1, %eax
 ; X64:	pshuflw	$1, %xmm0, %xmm0
 ; X64:	pinsrw	$1, %eax, %xmm0
 ; X64:	ret
@@ -203,8 +199,8 @@ entry:
 	ret <8 x i16> %tmp9
 
 ; X64: t12:
-; X64: 	pextrw	$3, %xmm1, %eax
 ; X64: 	movlhps	%xmm0, %xmm0
+; X64: 	pextrw	$3, %xmm1, %eax
 ; X64: 	pshufhw	$3, %xmm0, %xmm0
 ; X64: 	pinsrw	$5, %eax, %xmm0
 ; X64: 	ret
@@ -256,18 +252,12 @@ entry:
         %tmp9 = shufflevector <16 x i8> %tmp8, <16 x i8> %T0,  <16 x i32> < i32 0, i32 1, i32 2, i32 17,  i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef , i32 undef >
         ret <16 x i8> %tmp9
 ; X64: 	t16:
-; X64: 		movaps	LCPI17_0(%rip), %xmm1
-; X64: 		movd	%xmm1, %eax
 ; X64: 		pinsrw	$0, %eax, %xmm1
 ; X64: 		pextrw	$8, %xmm0, %eax
 ; X64: 		pinsrw	$1, %eax, %xmm1
 ; X64: 		pextrw	$1, %xmm1, %ecx
 ; X64: 		movd	%xmm1, %edx
 ; X64: 		pinsrw	$0, %edx, %xmm1
-; X64: 		movzbl	%cl, %ecx
-; X64: 		andw	$-256, %ax
-; X64: 		orw	%cx, %ax
-; X64: 		movaps	%xmm1, %xmm0
 ; X64: 		pinsrw	$1, %eax, %xmm0
 ; X64: 		ret
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/stack-color-with-reg.ll b/libclamav/c++/llvm/test/CodeGen/X86/stack-color-with-reg.ll
index 672f77e..d762392 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/stack-color-with-reg.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/stack-color-with-reg.ll
@@ -1,6 +1,5 @@
 ; RUN: llc < %s -mtriple=x86_64-apple-darwin10 -relocation-model=pic -disable-fp-elim -color-ss-with-regs -stats -info-output-file - > %t
-; RUN:   grep stackcoloring %t | grep "stack slot refs replaced with reg refs"  | grep 5
-; RUN:   grep asm-printer %t   | grep 179
+; RUN:   grep stackcoloring %t | grep "stack slot refs replaced with reg refs"  | grep 6
 
 	type { [62 x %struct.Bitvec*] }		; type %0
 	type { i8* }		; type %1
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/tail-opts.ll b/libclamav/c++/llvm/test/CodeGen/X86/tail-opts.ll
new file mode 100644
index 0000000..0d86e56
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/tail-opts.ll
@@ -0,0 +1,408 @@
+; RUN: llc < %s -march=x86-64 -mtriple=x86_64-unknown-linux-gnu -asm-verbose=false | FileCheck %s
+
+declare void @bar(i32)
+declare void @car(i32)
+declare void @dar(i32)
+declare void @ear(i32)
+declare void @far(i32)
+declare i1 @qux()
+
+@GHJK = global i32 0
+@HABC = global i32 0
+
+; BranchFolding should tail-merge the stores since they all precede
+; direct branches to the same place.
+
+; CHECK: tail_merge_me:
+; CHECK-NOT:  GHJK
+; CHECK:      movl $0, GHJK(%rip)
+; CHECK-NEXT: movl $1, HABC(%rip)
+; CHECK-NOT:  GHJK
+
+define void @tail_merge_me() nounwind {
+entry:
+  %a = call i1 @qux()
+  br i1 %a, label %A, label %next
+next:
+  %b = call i1 @qux()
+  br i1 %b, label %B, label %C
+
+A:
+  call void @bar(i32 0)
+  store i32 0, i32* @GHJK
+  br label %M
+
+B:
+  call void @car(i32 1)
+  store i32 0, i32* @GHJK
+  br label %M
+
+C:
+  call void @dar(i32 2)
+  store i32 0, i32* @GHJK
+  br label %M
+
+M:
+  store i32 1, i32* @HABC
+  %c = call i1 @qux()
+  br i1 %c, label %return, label %altret
+
+return:
+  call void @ear(i32 1000)
+  ret void
+altret:
+  call void @far(i32 1001)
+  ret void
+}
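+
+; In IR terms the merge is roughly: A, B and C each branch to one shared
+; tail block holding the single "store i32 0, i32* @GHJK" followed by the
+; branch to %M. (BranchFolding actually runs on machine basic blocks, so
+; this is only a sketch of the effect.)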
+
+declare i8* @choose(i8*, i8*);
+
+; BranchFolding should tail-duplicate the indirect jump to avoid
+; redundant branching.
+
+; CHECK: tail_duplicate_me:
+; CHECK:      movl $0, GHJK(%rip)
+; CHECK-NEXT: jmpq *%rbx
+; CHECK:      movl $0, GHJK(%rip)
+; CHECK-NEXT: jmpq *%rbx
+; CHECK:      movl $0, GHJK(%rip)
+; CHECK-NEXT: jmpq *%rbx
+
+define void @tail_duplicate_me() nounwind {
+entry:
+  %a = call i1 @qux()
+  %c = call i8* @choose(i8* blockaddress(@tail_duplicate_me, %return),
+                        i8* blockaddress(@tail_duplicate_me, %altret))
+  br i1 %a, label %A, label %next
+next:
+  %b = call i1 @qux()
+  br i1 %b, label %B, label %C
+
+A:
+  call void @bar(i32 0)
+  store i32 0, i32* @GHJK
+  br label %M
+
+B:
+  call void @car(i32 1)
+  store i32 0, i32* @GHJK
+  br label %M
+
+C:
+  call void @dar(i32 2)
+  store i32 0, i32* @GHJK
+  br label %M
+
+M:
+  indirectbr i8* %c, [label %return, label %altret]
+
+return:
+  call void @ear(i32 1000)
+  ret void
+altret:
+  call void @far(i32 1001)
+  ret void
+}
+
+; BranchFolding shouldn't try to merge the tails of two blocks
+; with only a branch in common, regardless of the fallthrough situation.
+
+; CHECK: dont_merge_oddly:
+; CHECK-NOT:   ret
+; CHECK:        ucomiss %xmm0, %xmm1
+; CHECK-NEXT:   jbe .LBB3_3
+; CHECK-NEXT:   ucomiss %xmm2, %xmm0
+; CHECK-NEXT:   ja .LBB3_4
+; CHECK-NEXT: .LBB3_2:
+; CHECK-NEXT:   movb $1, %al
+; CHECK-NEXT:   ret
+; CHECK-NEXT: .LBB3_3:
+; CHECK-NEXT:   ucomiss %xmm2, %xmm1
+; CHECK-NEXT:   jbe .LBB3_2
+; CHECK-NEXT: .LBB3_4:
+; CHECK-NEXT:   xorb %al, %al
+; CHECK-NEXT:   ret
+
+define i1 @dont_merge_oddly(float* %result) nounwind {
+entry:
+  %tmp4 = getelementptr float* %result, i32 2
+  %tmp5 = load float* %tmp4, align 4
+  %tmp7 = getelementptr float* %result, i32 4
+  %tmp8 = load float* %tmp7, align 4
+  %tmp10 = getelementptr float* %result, i32 6
+  %tmp11 = load float* %tmp10, align 4
+  %tmp12 = fcmp olt float %tmp8, %tmp11
+  br i1 %tmp12, label %bb, label %bb21
+
+bb:
+  %tmp23469 = fcmp olt float %tmp5, %tmp8
+  br i1 %tmp23469, label %bb26, label %bb30
+
+bb21:
+  %tmp23 = fcmp olt float %tmp5, %tmp11
+  br i1 %tmp23, label %bb26, label %bb30
+
+bb26:
+  ret i1 0
+
+bb30:
+  ret i1 1
+}
+
+; Do any-size tail-merging when two candidate blocks will both require
+; an unconditional jump to complete a two-way conditional branch.
+
+; CHECK: c_expand_expr_stmt:
+; CHECK:        jmp .LBB4_7
+; CHECK-NEXT: .LBB4_12:
+; CHECK-NEXT:   movq 8(%rax), %rax
+; CHECK-NEXT:   movb 16(%rax), %al
+; CHECK-NEXT:   cmpb $16, %al
+; CHECK-NEXT:   je .LBB4_6
+; CHECK-NEXT:   cmpb $23, %al
+; CHECK-NEXT:   je .LBB4_6
+; CHECK-NEXT:   jmp .LBB4_15
+; CHECK-NEXT: .LBB4_14:
+; CHECK-NEXT:   cmpb $23, %bl
+; CHECK-NEXT:   jne .LBB4_15
+; CHECK-NEXT: .LBB4_15:
+
+%0 = type { %struct.rtx_def* }
+%struct.lang_decl = type opaque
+%struct.rtx_def = type { i16, i8, i8, [1 x %union.rtunion] }
+%struct.tree_decl = type { [24 x i8], i8*, i32, %union.tree_node*, i32, i8, i8, i8, i8, %union.tree_node*, %union.tree_node*, %union.tree_node*, %union.tree_node*, %union.tree_node*, %union.tree_node*, %union.tree_node*, %union.tree_node*, %union.tree_node*, %struct.rtx_def*, %union..2anon, %0, %union.tree_node*, %struct.lang_decl* }
+%union..2anon = type { i32 }
+%union.rtunion = type { i8* }
+%union.tree_node = type { %struct.tree_decl }
+
+define fastcc void @c_expand_expr_stmt(%union.tree_node* %expr) nounwind {
+entry:
+  %tmp4 = load i8* null, align 8                  ; <i8> [#uses=3]
+  switch i8 %tmp4, label %bb3 [
+    i8 18, label %bb
+  ]
+
+bb:                                               ; preds = %entry
+  switch i32 undef, label %bb1 [
+    i32 0, label %bb2.i
+    i32 37, label %bb.i
+  ]
+
+bb.i:                                             ; preds = %bb
+  switch i32 undef, label %bb1 [
+    i32 0, label %lvalue_p.exit
+  ]
+
+bb2.i:                                            ; preds = %bb
+  br label %bb3
+
+lvalue_p.exit:                                    ; preds = %bb.i
+  %tmp21 = load %union.tree_node** null, align 8  ; <%union.tree_node*> [#uses=3]
+  %tmp22 = getelementptr inbounds %union.tree_node* %tmp21, i64 0, i32 0, i32 0, i64 0 ; <i8*> [#uses=1]
+  %tmp23 = load i8* %tmp22, align 8               ; <i8> [#uses=1]
+  %tmp24 = zext i8 %tmp23 to i32                  ; <i32> [#uses=1]
+  switch i32 %tmp24, label %lvalue_p.exit4 [
+    i32 0, label %bb2.i3
+    i32 2, label %bb.i1
+  ]
+
+bb.i1:                                            ; preds = %lvalue_p.exit
+  %tmp25 = getelementptr inbounds %union.tree_node* %tmp21, i64 0, i32 0, i32 2 ; <i32*> [#uses=1]
+  %tmp26 = bitcast i32* %tmp25 to %union.tree_node** ; <%union.tree_node**> [#uses=1]
+  %tmp27 = load %union.tree_node** %tmp26, align 8 ; <%union.tree_node*> [#uses=2]
+  %tmp28 = getelementptr inbounds %union.tree_node* %tmp27, i64 0, i32 0, i32 0, i64 16 ; <i8*> [#uses=1]
+  %tmp29 = load i8* %tmp28, align 8               ; <i8> [#uses=1]
+  %tmp30 = zext i8 %tmp29 to i32                  ; <i32> [#uses=1]
+  switch i32 %tmp30, label %lvalue_p.exit4 [
+    i32 0, label %bb2.i.i2
+    i32 2, label %bb.i.i
+  ]
+
+bb.i.i:                                           ; preds = %bb.i1
+  %tmp34 = tail call fastcc i32 @lvalue_p(%union.tree_node* null) nounwind ; <i32> [#uses=1]
+  %phitmp = icmp ne i32 %tmp34, 0                 ; <i1> [#uses=1]
+  br label %lvalue_p.exit4
+
+bb2.i.i2:                                         ; preds = %bb.i1
+  %tmp35 = getelementptr inbounds %union.tree_node* %tmp27, i64 0, i32 0, i32 0, i64 8 ; <i8*> [#uses=1]
+  %tmp36 = bitcast i8* %tmp35 to %union.tree_node** ; <%union.tree_node**> [#uses=1]
+  %tmp37 = load %union.tree_node** %tmp36, align 8 ; <%union.tree_node*> [#uses=1]
+  %tmp38 = getelementptr inbounds %union.tree_node* %tmp37, i64 0, i32 0, i32 0, i64 16 ; <i8*> [#uses=1]
+  %tmp39 = load i8* %tmp38, align 8               ; <i8> [#uses=1]
+  switch i8 %tmp39, label %bb2 [
+    i8 16, label %lvalue_p.exit4
+    i8 23, label %lvalue_p.exit4
+  ]
+
+bb2.i3:                                           ; preds = %lvalue_p.exit
+  %tmp40 = getelementptr inbounds %union.tree_node* %tmp21, i64 0, i32 0, i32 0, i64 8 ; <i8*> [#uses=1]
+  %tmp41 = bitcast i8* %tmp40 to %union.tree_node** ; <%union.tree_node**> [#uses=1]
+  %tmp42 = load %union.tree_node** %tmp41, align 8 ; <%union.tree_node*> [#uses=1]
+  %tmp43 = getelementptr inbounds %union.tree_node* %tmp42, i64 0, i32 0, i32 0, i64 16 ; <i8*> [#uses=1]
+  %tmp44 = load i8* %tmp43, align 8               ; <i8> [#uses=1]
+  switch i8 %tmp44, label %bb2 [
+    i8 16, label %lvalue_p.exit4
+    i8 23, label %lvalue_p.exit4
+  ]
+
+lvalue_p.exit4:                                   ; preds = %bb2.i3, %bb2.i3, %bb2.i.i2, %bb2.i.i2, %bb.i.i, %bb.i1, %lvalue_p.exit
+  %tmp45 = phi i1 [ %phitmp, %bb.i.i ], [ false, %bb2.i.i2 ], [ false, %bb2.i.i2 ], [ false, %bb.i1 ], [ false, %bb2.i3 ], [ false, %bb2.i3 ], [ false, %lvalue_p.exit ] ; <i1> [#uses=1]
+  %tmp46 = icmp eq i8 %tmp4, 0                    ; <i1> [#uses=1]
+  %or.cond = or i1 %tmp45, %tmp46                 ; <i1> [#uses=1]
+  br i1 %or.cond, label %bb2, label %bb3
+
+bb1:                                              ; preds = %bb2.i.i, %bb.i, %bb
+  %.old = icmp eq i8 %tmp4, 23                    ; <i1> [#uses=1]
+  br i1 %.old, label %bb2, label %bb3
+
+bb2:                                              ; preds = %bb1, %lvalue_p.exit4, %bb2.i3, %bb2.i.i2
+  br label %bb3
+
+bb3:                                              ; preds = %bb2, %bb1, %lvalue_p.exit4, %bb2.i, %entry
+  %expr_addr.0 = phi %union.tree_node* [ null, %bb2 ], [ %expr, %bb2.i ], [ %expr, %entry ], [ %expr, %bb1 ], [ %expr, %lvalue_p.exit4 ] ; <%union.tree_node*> [#uses=0]
+  unreachable
+}
+
+declare fastcc i32 @lvalue_p(%union.tree_node* nocapture) nounwind readonly
+
+declare fastcc %union.tree_node* @default_conversion(%union.tree_node*) nounwind
+
+
+; If one tail merging candidate falls through into the other,
+; tail merging is likely profitable regardless of how few
+; instructions are involved. This function should have only
+; one ret instruction.
+
+; CHECK: foo:
+; CHECK:        call func
+; CHECK-NEXT: .LBB5_2:
+; CHECK-NEXT:   addq $8, %rsp
+; CHECK-NEXT:   ret
+
+define void @foo(i1* %V) nounwind {
+entry:
+  %t0 = icmp eq i1* %V, null
+  br i1 %t0, label %return, label %bb
+
+bb:
+  call void @func()
+  ret void
+
+return:
+  ret void
+}
+
+declare void @func()
+
+; one - One instruction may be tail-duplicated even with optsize.
+
+; CHECK: one:
+; CHECK: movl $0, XYZ(%rip)
+; CHECK: movl $0, XYZ(%rip)
+
+@XYZ = external global i32
+
+define void @one() nounwind optsize {
+entry:
+  %0 = icmp eq i32 undef, 0
+  br i1 %0, label %bbx, label %bby
+
+bby:
+  switch i32 undef, label %bb7 [
+    i32 16, label %return
+  ]
+
+bb7:
+  volatile store i32 0, i32* @XYZ
+  unreachable
+
+bbx:
+  switch i32 undef, label %bb12 [
+    i32 128, label %return
+  ]
+
+bb12:
+  volatile store i32 0, i32* @XYZ
+  unreachable
+
+return:
+  ret void
+}
+
+; two - Same as one, but with two instructions in the common
+; tail instead of one. This is too much to be merged, given
+; the optsize attribute.
+
+; CHECK: two:
+; CHECK-NOT: XYZ
+; CHECK: movl $0, XYZ(%rip)
+; CHECK: movl $1, XYZ(%rip)
+; CHECK-NOT: XYZ
+; CHECK: ret
+
+define void @two() nounwind optsize {
+entry:
+  %0 = icmp eq i32 undef, 0
+  br i1 %0, label %bbx, label %bby
+
+bby:
+  switch i32 undef, label %bb7 [
+    i32 16, label %return
+  ]
+
+bb7:
+  volatile store i32 0, i32* @XYZ
+  volatile store i32 1, i32* @XYZ
+  unreachable
+
+bbx:
+  switch i32 undef, label %bb12 [
+    i32 128, label %return
+  ]
+
+bb12:
+  volatile store i32 0, i32* @XYZ
+  volatile store i32 1, i32* @XYZ
+  unreachable
+
+return:
+  ret void
+}
+
+; two_nosize - Same as two, but without the optsize attribute.
+; Without optsize, a two-instruction tail is small enough to be duplicated.
+
+; CHECK: two_nosize:
+; CHECK: movl $0, XYZ(%rip)
+; CHECK: movl $1, XYZ(%rip)
+; CHECK: movl $0, XYZ(%rip)
+; CHECK: movl $1, XYZ(%rip)
+
+define void @two_nosize() nounwind {
+entry:
+  %0 = icmp eq i32 undef, 0
+  br i1 %0, label %bbx, label %bby
+
+bby:
+  switch i32 undef, label %bb7 [
+    i32 16, label %return
+  ]
+
+bb7:
+  volatile store i32 0, i32* @XYZ
+  volatile store i32 1, i32* @XYZ
+  unreachable
+
+bbx:
+  switch i32 undef, label %bb12 [
+    i32 128, label %return
+  ]
+
+bb12:
+  volatile store i32 0, i32* @XYZ
+  volatile store i32 1, i32* @XYZ
+  unreachable
+
+return:
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/tailcall-fastisel.ll b/libclamav/c++/llvm/test/CodeGen/X86/tailcall-fastisel.ll
new file mode 100644
index 0000000..d54fb41
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/tailcall-fastisel.ll
@@ -0,0 +1,13 @@
+; RUN: llc < %s -march=x86-64 -tailcallopt -fast-isel | grep TAILCALL
+
+; Fast-isel shouldn't attempt to handle this tail call, and it should
+; cleanly terminate instruction selection in the block after it's
+; done to avoid emitting invalid MachineInstrs.
+
+%0 = type { i64, i32, i8* }
+
+define fastcc i8* @"visit_array_aux<`Reference>"(%0 %arg, i32 %arg1) nounwind {
+fail:                                             ; preds = %entry
+  %tmp20 = tail call fastcc i8* @"visit_array_aux<`Reference>"(%0 %arg, i32 undef) ; <i8*> [#uses=1]
+  ret i8* %tmp20
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/tailcall-stackalign.ll b/libclamav/c++/llvm/test/CodeGen/X86/tailcall-stackalign.ll
index 110472c..0233139 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/tailcall-stackalign.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/tailcall-stackalign.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s  -mtriple=i686-unknown-linux  -tailcallopt | grep -A 1 call | grep -A 1 tailcaller | grep subl | grep 12
+; RUN: llc < %s  -mtriple=i686-unknown-linux  -tailcallopt | FileCheck %s
 ; Linux has 8-byte stack alignment, so the params cause stack size 20 when tailcallopt
 ; is enabled; ensure that a normal fastcc call has a matching stack size.
 
@@ -19,6 +19,5 @@ define i32 @main(i32 %argc, i8** %argv) {
  ret i32 0
 }
 
-
-
-
+; CHECK: call tailcaller
+; CHECK-NEXT: subl $12
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/tailcall1.ll b/libclamav/c++/llvm/test/CodeGen/X86/tailcall1.ll
index a4f87c0..4923df2 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/tailcall1.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/tailcall1.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=x86 -tailcallopt | grep TAILCALL
+; RUN: llc < %s -march=x86 -tailcallopt | grep TAILCALL | count 4
 define fastcc i32 @tailcallee(i32 %a1, i32 %a2, i32 %a3, i32 %a4) {
 entry:
 	ret i32 %a3
@@ -9,3 +9,24 @@ entry:
 	%tmp11 = tail call fastcc i32 @tailcallee( i32 %in1, i32 %in2, i32 %in1, i32 %in2 )		; <i32> [#uses=1]
 	ret i32 %tmp11
 }
+
+declare fastcc i8* @alias_callee()
+
+define fastcc noalias i8* @noalias_caller() nounwind {
+  %p = tail call fastcc i8* @alias_callee()
+  ret i8* %p
+}
+
+declare fastcc noalias i8* @noalias_callee()
+
+define fastcc i8* @alias_caller() nounwind {
+  %p = tail call fastcc noalias i8* @noalias_callee()
+  ret i8* %p
+}
+
+declare fastcc i32 @i32_callee()
+
+define fastcc i32 @ret_undef() nounwind {
+  %p = tail call fastcc i32 @i32_callee()
+  ret i32 undef
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/tailcallstack64.ll b/libclamav/c++/llvm/test/CodeGen/X86/tailcallstack64.ll
index 73c59bb..69018aa 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/tailcallstack64.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/tailcallstack64.ll
@@ -3,19 +3,18 @@
 ; Check that lowered arguments on the stack do not overwrite each other.
 ; Add %in1 and %p1 into a temporary register (%eax).
 ; CHECK: movl  %edi, %eax
-; CHECK: addl  32(%rsp), %eax
 ; Move param %in1 to temp register (%r10d).
 ; CHECK: movl  40(%rsp), %r10d
-; Move result of addition to stack.
-; CHECK: movl  %eax, 40(%rsp)
 ; Move param %in2 to stack.
 ; CHECK: movl  %r10d, 32(%rsp)
+; Move result of addition to stack.
+; CHECK: movl  %eax, 40(%rsp)
 ; Eventually, do a TAILCALL
 ; CHECK: TAILCALL
 
-declare fastcc i32 @tailcallee(i32 %p1, i32 %p2, i32 %p3, i32 %p4, i32 %p5, i32 %p6, i32 %a, i32 %b)
+declare fastcc i32 @tailcallee(i32 %p1, i32 %p2, i32 %p3, i32 %p4, i32 %p5, i32 %p6, i32 %a, i32 %b) nounwind
 
-define fastcc i32 @tailcaller(i32 %p1, i32 %p2, i32 %p3, i32 %p4, i32 %p5, i32 %p6, i32 %in1, i32 %in2) {
+define fastcc i32 @tailcaller(i32 %p1, i32 %p2, i32 %p3, i32 %p4, i32 %p5, i32 %p6, i32 %in1, i32 %in2) nounwind {
 entry:
         %tmp = add i32 %in1, %p1
         %retval = tail call fastcc i32 @tailcallee(i32 %p1, i32 %p2, i32 %p3, i32 %p4, i32 %p5, i32 %p6, i32 %in2,i32 %tmp)
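The reordered CHECK lines encode the fix: the incoming stack argument at
40(%rsp) is read into %r10d before the addition result is stored over that
same slot, so the lowered outgoing arguments can no longer clobber the
incoming ones they are built from.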
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/test-shrink-bug.ll b/libclamav/c++/llvm/test/CodeGen/X86/test-shrink-bug.ll
new file mode 100644
index 0000000..64631ea
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/test-shrink-bug.ll
@@ -0,0 +1,23 @@
+; RUN: llc < %s | FileCheck %s
+
+; Codegen shouldn't reduce the comparison down to testb $-1, %al
+; because that changes the result of the signed test.
+; PR5132
+; CHECK: testw  $255, %ax
+
+target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
+target triple = "i386-apple-darwin10.0"
+
+@g_14 = global i8 -6, align 1                     ; <i8*> [#uses=1]
+
+declare i32 @func_16(i8 signext %p_19, i32 %p_20) nounwind
+
+define i32 @func_35(i64 %p_38) nounwind ssp {
+entry:
+  %tmp = load i8* @g_14                           ; <i8> [#uses=2]
+  %conv = zext i8 %tmp to i32                     ; <i32> [#uses=1]
+  %cmp = icmp sle i32 1, %conv                    ; <i1> [#uses=1]
+  %conv2 = zext i1 %cmp to i32                    ; <i32> [#uses=1]
+  %call = call i32 @func_16(i8 signext %tmp, i32 %conv2) ssp ; <i32> [#uses=1]
+  ret i32 1
+}
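A worked example of the hazard: @g_14 is -6, so the loaded byte is 0xFA and
the zero-extended value is 250. "testw $255, %ax" computes 0x00FA and leaves
SF clear (bit 15 is 0), while a shrunken "testb $-1, %al" would compute 0xFA
and set SF from bit 7, flipping any signed condition derived from the flags.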
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/trunc-to-bool.ll b/libclamav/c++/llvm/test/CodeGen/X86/trunc-to-bool.ll
index 374d404..bfab1ae 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/trunc-to-bool.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/trunc-to-bool.ll
@@ -1,13 +1,13 @@
 ; An integer truncation to i1 should be done with an and instruction to make
 ; sure only the LSBit survives. Test that this is the case both for a returned
 ; value and as the operand of a branch.
-; RUN: llc < %s -march=x86 | grep {\\(and\\)\\|\\(test.*\\\$1\\)} | \
-; RUN:   count 5
+; RUN: llc < %s -march=x86 | FileCheck %s
 
 define i1 @test1(i32 %X) zeroext {
     %Y = trunc i32 %X to i1
     ret i1 %Y
 }
+; CHECK: andl $1, %eax
 
 define i1 @test2(i32 %val, i32 %mask) {
 entry:
@@ -20,6 +20,7 @@ ret_true:
 ret_false:
     ret i1 false
 }
+; CHECK: testb $1, %al
 
 define i32 @test3(i8* %ptr) {
     %val = load i8* %ptr
@@ -30,6 +31,7 @@ cond_true:
 cond_false:
     ret i32 42
 }
+; CHECK: testb $1, %al
 
 define i32 @test4(i8* %ptr) {
     %tmp = ptrtoint i8* %ptr to i1
@@ -39,6 +41,7 @@ cond_true:
 cond_false:
     ret i32 42
 }
+; CHECK: testb $1, %al
 
 define i32 @test6(double %d) {
     %tmp = fptosi double %d to i1
@@ -48,4 +51,4 @@ cond_true:
 cond_false:
     ret i32 42
 }
-
+; CHECK: testb $1
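A minimal illustration of why the "and" is needed (constant chosen for the
example):

    define i1 @lsb_only() nounwind {
    entry:
      ; 6 is 0b110; truncation to i1 keeps only bit 0, so this returns
      ; false even though 6 is nonzero -- hence the explicit and/test $1.
      %t = trunc i32 6 to i1
      ret i1 %t
    }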
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/unaligned-load.ll b/libclamav/c++/llvm/test/CodeGen/X86/unaligned-load.ll
new file mode 100644
index 0000000..7dddcda
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/unaligned-load.ll
@@ -0,0 +1,28 @@
+; RUN: llc < %s -mtriple=x86_64-apple-darwin10.0 -relocation-model=dynamic-no-pic | not grep {movaps\t_.str3}
+; RUN: llc < %s -mtriple=x86_64-apple-darwin10.0 -relocation-model=dynamic-no-pic | FileCheck %s
+
+@.str1 = internal constant [31 x i8] c"DHRYSTONE PROGRAM, SOME STRING\00", align 8
+@.str3 = internal constant [31 x i8] c"DHRYSTONE PROGRAM, 2'ND STRING\00", align 8
+
+define void @func() nounwind ssp {
+entry:
+  %String2Loc = alloca [31 x i8], align 1
+  br label %bb
+
+bb:
+  %String2Loc9 = getelementptr inbounds [31 x i8]* %String2Loc, i64 0, i64 0
+  call void @llvm.memcpy.i64(i8* %String2Loc9, i8* getelementptr inbounds ([31 x i8]* @.str3, i64 0, i64 0), i64 31, i32 1)
+; CHECK: movups _.str3
+  br label %bb
+
+return:
+  ret void
+}
+
+declare void @llvm.memcpy.i64(i8* nocapture, i8* nocapture, i64, i32) nounwind
+
+; CHECK: .align  3
+; CHECK-NEXT: _.str1:
+; CHECK-NEXT: .asciz "DHRYSTONE PROGRAM, SOME STRING"
+; CHECK-NEXT: .align 3
+; CHECK-NEXT: _.str3:
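(On Darwin the .align operand is a power-of-two exponent, so ".align 3"
means 2^3 = 8-byte alignment. That is below the 16 bytes movaps requires,
which is why the copy from _.str3 must use movups.)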
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/vec_ins_extract.ll b/libclamav/c++/llvm/test/CodeGen/X86/vec_ins_extract.ll
index d43b95b..8924362 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/vec_ins_extract.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/vec_ins_extract.ll
@@ -4,6 +4,7 @@
 
 ; This checks that various insert/extract idioms work without going to the
 ; stack.
+target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:32:32"
 
 define void @test(<4 x float>* %F, float %f) {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/vec_shuffle-22.ll b/libclamav/c++/llvm/test/CodeGen/X86/vec_shuffle-22.ll
index 5307ced..1cf37d4 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/vec_shuffle-22.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/vec_shuffle-22.ll
@@ -1,19 +1,15 @@
-; RUN: llc < %s -march=x86 -mcpu=pentium-m -o %t
-; RUN: grep movlhps %t | count 1
-; RUN: grep pshufd %t | count 1
-; RUN: llc < %s -march=x86 -mcpu=core2 -o %t
-; RUN: grep movlhps %t | count 1
-; RUN: grep movddup %t | count 1
+; RUN: llc < %s -march=x86 -mcpu=pentium-m  | FileCheck %s
 
 define <4 x float> @t1(<4 x float> %a) nounwind  {
-entry:
-        %tmp1 = shufflevector <4 x float> %a, <4 x float> undef, <4 x i32> < i32 0, i32 1, i32 0, i32 1 >       ; <<4 x float>> [#uses=1]
-        ret <4 x float> %tmp1
+; CHECK: movlhps
+  %tmp1 = shufflevector <4 x float> %a, <4 x float> undef, <4 x i32> < i32 0, i32 1, i32 0, i32 1 >       ; <<4 x float>> [#uses=1]
+  ret <4 x float> %tmp1
 }
 
 define <4 x i32> @t2(<4 x i32>* %a) nounwind {
-entry:
-        %tmp1 = load <4 x i32>* %a;
+; CHECK: pshufd
+; CHECK: ret
+  %tmp1 = load <4 x i32>* %a;
 	%tmp2 = shufflevector <4 x i32> %tmp1, <4 x i32> undef, <4 x i32> < i32 0, i32 1, i32 0, i32 1 >		; <<4 x i32>> [#uses=1]
 	ret <4 x i32> %tmp2
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/vec_shuffle-3.ll b/libclamav/c++/llvm/test/CodeGen/X86/vec_shuffle-3.ll
index 556f103..f4930b0 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/vec_shuffle-3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/vec_shuffle-3.ll
@@ -18,4 +18,3 @@ entry:
         %tmp4 = shufflevector <4 x float> %tmp3, <4 x float> %tmp, <4 x i32> < i32 2, i32 3, i32 6, i32 7 >           ; <<4 x float>> [#uses=1]
         ret <4 x float> %tmp4
 }
-
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/vec_shuffle-9.ll b/libclamav/c++/llvm/test/CodeGen/X86/vec_shuffle-9.ll
index 2bef24d..fc16a26 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/vec_shuffle-9.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/vec_shuffle-9.ll
@@ -1,9 +1,10 @@
-; RUN: llc < %s -march=x86 -mattr=+sse2 -o %t
-; RUN: grep punpck %t | count 2
-; RUN: not grep pextrw %t
+; RUN: llc < %s -march=x86 -mattr=+sse2 | FileCheck %s
 
 define <4 x i32> @test(i8** %ptr) {
-entry:
+; CHECK: xorps
+; CHECK: punpcklbw
+; CHECK: punpcklwd
+
 	%tmp = load i8** %ptr		; <i8*> [#uses=1]
 	%tmp.upgrd.1 = bitcast i8* %tmp to float*		; <float*> [#uses=1]
 	%tmp.upgrd.2 = load float* %tmp.upgrd.1		; <float> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/vec_zero-2.ll b/libclamav/c++/llvm/test/CodeGen/X86/vec_zero-2.ll
index e42b538..cdb030e 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/vec_zero-2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/vec_zero-2.ll
@@ -12,8 +12,8 @@ bb4743:		; preds = %bb1664
 	%tmp5257 = sub <8 x i16> %tmp5256, zeroinitializer		; <<8 x i16>> [#uses=1]
 	%tmp5258 = bitcast <8 x i16> %tmp5257 to <2 x i64>		; <<2 x i64>> [#uses=1]
 	%tmp5265 = bitcast <2 x i64> %tmp5258 to <8 x i16>		; <<8 x i16>> [#uses=1]
-	%tmp5266 = call <8 x i16> @llvm.x86.sse2.packuswb.128( <8 x i16> %tmp5265, <8 x i16> zeroinitializer ) nounwind readnone 		; <<8 x i16>> [#uses=1]
-	%tmp5267 = bitcast <8 x i16> %tmp5266 to <2 x i64>		; <<2 x i64>> [#uses=1]
+	%tmp5266 = call <16 x i8> @llvm.x86.sse2.packuswb.128( <8 x i16> %tmp5265, <8 x i16> zeroinitializer ) nounwind readnone 		; <<8 x i16>> [#uses=1]
+	%tmp5267 = bitcast <16 x i8> %tmp5266 to <2 x i64>		; <<2 x i64>> [#uses=1]
 	%tmp5294 = and <2 x i64> zeroinitializer, %tmp5267		; <<2 x i64>> [#uses=1]
 	br label %bb5310
 bb5310:		; preds = %bb4743, %bb1664
@@ -21,4 +21,4 @@ bb5310:		; preds = %bb4743, %bb1664
 	ret i32 0
 }
 
-declare <8 x i16> @llvm.x86.sse2.packuswb.128(<8 x i16>, <8 x i16>) nounwind readnone 
+declare <16 x i8> @llvm.x86.sse2.packuswb.128(<8 x i16>, <8 x i16>) nounwind readnone
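(packuswb packs its two <8 x i16> operands into a single <16 x i8> result
with unsigned saturation, so <16 x i8> is the correct return type here; the
old <8 x i16> declaration mismatched the instruction's actual result.)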
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-1.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-1.ll
index 8f607f5..f8d0690 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-1.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-1.ll
@@ -1,14 +1,12 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
-; RUN: grep paddb  %t | count 1
-; RUN: grep pextrb %t | count 1
-; RUN: not grep pextrw %t
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx |  FileCheck %s
 
 ; Widen a v3i8 to v16i8 to use a vector add
 
-target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
-
 define void @update(<3 x i8>* %dst, <3 x i8>* %src, i32 %n) nounwind {
 entry:
+; CHECK-NOT: pextrw
+; CHECK: paddb
+; CHECK: pextrb
 	%dst.addr = alloca <3 x i8>*		; <<3 x i8>**> [#uses=2]
 	%src.addr = alloca <3 x i8>*		; <<3 x i8>**> [#uses=2]
 	%n.addr = alloca i32		; <i32*> [#uses=2]
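There is no legal v3i8 type, so legalization widens both operands to v16i8
(the extra lanes are undefined, which is harmless for an add), performs one
paddb, and extracts the three live lanes with pextrb. A reduced sketch of
the same legalization:

    define <3 x i8> @widen_sketch(<3 x i8> %a, <3 x i8> %b) nounwind {
    entry:
      ; legalized as a single <16 x i8> paddb; only lanes 0-2 are used
      %r = add <3 x i8> %a, %b
      ret <3 x i8> %r
    }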
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-2.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-2.ll
index e2420f0..fdecaa3 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-2.ll
@@ -1,9 +1,8 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
-; RUN: grep paddb  %t | count 1
-; RUN: grep pand %t | count 1
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx  | FileCheck %s
+; CHECK: paddb
+; CHECK: pand
 
 ; widen v8i8 to v16i8 (checks even power of 2 widening with add & and)
-target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
 
 define void @update(i64* %dst_i, i64* %src_i, i32 %n) nounwind {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-3.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-3.ll
index a22d254..a2b8b82 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-3.ll
@@ -1,12 +1,10 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
-; RUN: grep paddw  %t | count 1
-; RUN: grep movd %t | count 2
-; RUN: grep pextrw %t | count 1
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx | FileCheck %s
+; CHECK: paddw
+; CHECK: pextrw
+; CHECK: movd
 
 ; Widen a v3i16 to v8i16 to do a vector add
 
-target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
-target triple = "i686-apple-darwin10.0.0d2"
 @.str = internal constant [4 x i8] c"%d \00"		; <[4 x i8]*> [#uses=1]
 @.str1 = internal constant [2 x i8] c"\0A\00"		; <[2 x i8]*> [#uses=1]
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-4.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-4.ll
index 898bff0..f7506ae 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-4.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-4.ll
@@ -1,11 +1,9 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
-; RUN: grep psubw  %t | count 1
-; RUN: grep pmullw %t | count 1
+; RUN: llc < %s -march=x86-64 -mattr=+sse42 -disable-mmx | FileCheck %s
+; CHECK: psubw
+; CHECK-NEXT: pmullw
 
 ; Widen a v5i16 to v8i16 to do a vector sub and multiply
 
-target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
-
 define void @update(<5 x i16>* %dst, <5 x i16>* %src, i32 %n) nounwind {
 entry:
 	%dst.addr = alloca <5 x i16>*		; <<5 x i16>**> [#uses=2]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-5.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-5.ll
index 1ecf09d..f7f3408 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-5.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-5.ll
@@ -1,10 +1,9 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
-; RUN: grep pmulld  %t | count 1
-; RUN: grep psubd  %t | count 1
-; RUN: grep movaps %t | count 1
+; RUN: llc < %s -march=x86-64 -mattr=+sse42 -disable-mmx  | FileCheck %s
+; CHECK: movaps
+; CHECK: pmulld
+; CHECK: psubd
 
 ; widen a v3i32 to v4i32 to do a vector multiply and a subtraction
-target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
 
 define void @update(<3 x i32>* %dst, <3 x i32>* %src, i32 %n) nounwind {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-6.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-6.ll
index 3583258..538123f 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-6.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-6.ll
@@ -1,9 +1,8 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
-; RUN: grep mulps  %t | count 1
-; RUN: grep addps  %t | count 1
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx | FileCheck %s
+; CHECK: mulps
+; CHECK: addps
 
 ; widen a v3f32 to v4f32 to do a vector multiply and an add
-target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
 
 define void @update(<3 x float>* %dst, <3 x float>* %src, i32 %n) nounwind {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-1.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-1.ll
index 441a360..d4ab174 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-1.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-1.ll
@@ -1,7 +1,7 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
-; RUN: grep paddw  %t | count 1
-; RUN: grep movd  %t | count 1
-; RUN: grep pextrd  %t | count 1
+; RUN: llc -march=x86 -mattr=+sse42 < %s -disable-mmx | FileCheck %s
+; CHECK: paddw
+; CHECK: pextrd
+; CHECK: movd
 
 ; bitcast a v4i16 to v2i32
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-2.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-2.ll
index ded5707..e5d2c6a 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-2.ll
@@ -1,6 +1,11 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
-; RUN: grep pextrd  %t | count 5
-; RUN: grep movd  %t | count 3
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx | FileCheck %s
+; CHECK: pextrd
+; CHECK: pextrd
+; CHECK: movd
+; CHECK: pextrd
+; CHECK: pextrd
+; CHECK: pextrd
+; CHECK: movd
 
 ; bitcast v14i16 to v7i32
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-3.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-3.ll
index 67a760f..02674dd 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-3.ll
@@ -1,6 +1,7 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
-; RUN: grep paddd  %t | count 1
-; RUN: grep pextrd %t | count 2
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx | FileCheck %s
+; CHECK: paddd
+; CHECK: pextrd
+; CHECK: pextrd
 
 ; bitcast v12i8 to v3i32
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-4.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-4.ll
index 614eeed..5f31e56 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-4.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-4.ll
@@ -1,5 +1,12 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
-; RUN: grep sarb  %t | count 8
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx | FileCheck %s
+; CHECK: sarb
+; CHECK: sarb
+; CHECK: sarb
+; CHECK: sarb
+; CHECK: sarb
+; CHECK: sarb
+; CHECK: sarb
+; CHECK: sarb
 
 ; v8i8 that is widened to v16i8, then split
 ; FIXME: This is widened to v16i8 and split into 16 scalar ops, and we then rebuild the vector.
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-5.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-5.ll
index 92618d6..d1d7fec 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-5.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-5.ll
@@ -1,4 +1,6 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx | FileCheck %s
+; CHECK: movl
+; CHECK: movd
 
 ; bitcast an i64 to v2i32
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-6.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-6.ll
index 386f749..08759bf 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-6.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_cast-6.ll
@@ -1,5 +1,5 @@
-; RUN: llc < %s -march=x86 -mattr=+sse41 -disable-mmx -o %t
-; RUN: grep movd  %t | count 1
+; RUN: llc < %s -march=x86 -mattr=+sse41 -disable-mmx | FileCheck %s
+; CHECK: movd
 
 ; Test bit convert that requires widening in the operand.
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-1.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-1.ll
index ccc8b4f..a2029dd 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-1.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-1.ll
@@ -1,6 +1,6 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
-; RUN: grep pshufd %t | count 1
-; RUN: grep paddd  %t | count 1
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx | FileCheck %s
+; CHECK: pshufd
+; CHECK: paddd
 
 ; truncate v2i64 to v2i32
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-2.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-2.ll
index 9b7ab74..b24a9b3 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-2.ll
@@ -1,4 +1,6 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx | FileCheck %s
+; CHECK: movswl
+; CHECK: movswl
 
 ; sign extension v2i16 to v2i32
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-3.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-3.ll
index 4ec76a9..1a40800 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-3.ll
@@ -1,5 +1,6 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
-; grep cvtsi2ss  %t | count 1 
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx | FileCheck %s
+; CHECK: cvtsi2ss
+
 ; sign to float v2i16 to v2f32
 
 define void @convert(<2 x float>* %dst.addr, <2 x i16> %src) nounwind {
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-4.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-4.ll
index 61a26a8..e505b62 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-4.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_conv-4.ll
@@ -1,4 +1,5 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx | FileCheck %s
+; CHECK: cvtsi2ss
 
 ; unsigned to float v7i16 to v7f32
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_extract-1.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_extract-1.ll
new file mode 100644
index 0000000..308e6b8
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_extract-1.ll
@@ -0,0 +1,12 @@
+; RUN: llc < %s -march=x86-64 -mattr=+sse42 -disable-mmx | FileCheck %s
+; widen extract subvector
+
+define void @convert(<2 x double>* %dst.addr, <3 x double> %src)  {
+entry:
+; CHECK: convert:
+; CHECK: unpcklpd {{%xmm[0-7]}}, {{%xmm[0-7]}}
+; CHECK-NEXT: movapd
+  %val = shufflevector <3 x double> %src, <3 x double> undef, <2 x i32> < i32 0, i32 1>
+  store <2 x double> %val, <2 x double>* %dst.addr
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_select-1.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_select-1.ll
index aca0b67..4154433 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_select-1.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_select-1.ll
@@ -1,4 +1,5 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx | FileCheck %s
+; CHECK: jne
 
 ; widening a v6i32 select and then a sub
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_shuffle-1.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_shuffle-1.ll
index 15da870..dd02241 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_shuffle-1.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_shuffle-1.ll
@@ -1,4 +1,6 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx | FileCheck %s
+; CHECK: insertps
+; CHECK: extractps
 
 ; widening shuffle v3float and then an add
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_shuffle-2.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_shuffle-2.ll
index 617cc1d..d097e41 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_shuffle-2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_shuffle-2.ll
@@ -1,4 +1,6 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -o %t
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx | FileCheck %s
+; CHECK: insertps
+; CHECK: extractps
 
 ; widening shuffle v3float and then an add
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-jumps.ll b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-jumps.ll
new file mode 100644
index 0000000..5ed6a23
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-jumps.ll
@@ -0,0 +1,16 @@
+; RUN: llc < %s 
+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128"
+target triple = "x86_64-apple-darwin10.0"
+
+define i8 @test1() nounwind ssp {
+entry:
+  %0 = select i1 undef, i8* blockaddress(@test1, %bb), i8* blockaddress(@test1, %bb6) ; <i8*> [#uses=1]
+  indirectbr i8* %0, [label %bb, label %bb6]
+
+bb:                                               ; preds = %entry
+  ret i8 1
+
+bb6:                                              ; preds = %entry
+  ret i8 2
+}
+
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-10.ll b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-10.ll
index 0f65e57..7baa7e5 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-10.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-pic-10.ll
@@ -3,7 +3,7 @@
 
 @g = alias weak i32 ()* @f
 
-define void @g() {
+define void @h() {
 entry:
 	%tmp31 = call i32 @g()
         ret void
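(The module already defines @g as a weak alias of @f, so a function also
named @g collided with it; renaming the function to @h removes the clash
while its body still calls the alias @g.)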
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-sret-return.ll b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-sret-return.ll
index 3ee1a0b..7b5f189 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/x86-64-sret-return.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/x86-64-sret-return.ll
@@ -1,9 +1,11 @@
-; RUN: llc < %s
+; RUN: llc < %s | FileCheck %s
 
 target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128"
 target triple = "x86_64-apple-darwin8"
 	%struct.foo = type { [4 x i64] }
 
+; CHECK: bar:
+; CHECK: movq %rdi, %rax
 define void @bar(%struct.foo* noalias sret  %agg.result, %struct.foo* %d) nounwind  {
 entry:
 	%d_addr = alloca %struct.foo*		; <%struct.foo**> [#uses=2]
@@ -52,3 +54,10 @@ entry:
 return:		; preds = %entry
 	ret void
 }
+
+; CHECK: foo:
+; CHECK: movq %rdi, %rax
+define void @foo({ i64 }* noalias nocapture sret %agg.result) nounwind {
+  store { i64 } { i64 0 }, { i64 }* %agg.result
+  ret void
+}
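
Background for the new CHECK lines: under the x86-64 SysV ABI, a function
taking an sret pointer must also return that pointer in %rax, which is the
"movq %rdi, %rax" being matched for both the 32-byte %struct.foo and the
one-element struct. A hypothetical C++ shape of such a function, for
orientation only:

    // A by-value aggregate return lowers to a hidden sret parameter; the
    // ABI requires the callee to hand the pointer back in %rax.
    struct foo { long v[4]; };   // cf. %struct.foo = type { [4 x i64] }

    foo make_foo() {
      foo f = { { 0, 0, 0, 0 } };
      return f;                  // stores via sret; returns the ptr in %rax
    }
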
diff --git a/libclamav/c++/llvm/test/ExecutionEngine/stubs.ll b/libclamav/c++/llvm/test/ExecutionEngine/stubs.ll
new file mode 100644
index 0000000..525d135
--- /dev/null
+++ b/libclamav/c++/llvm/test/ExecutionEngine/stubs.ll
@@ -0,0 +1,35 @@
+; RUN: llvm-as < %s | lli -disable-lazy-compilation=false
+
+define i32 @main() nounwind {
+entry:
+	call void @lazily_compiled_address_is_consistent()
+	ret i32 0
+}
+
+; Test PR3043: @test should have the same address before and after
+; it's JIT-compiled.
+ at funcPtr = common global i1 ()* null, align 4
+ at lcaic_failure = internal constant [46 x i8] c"@lazily_compiled_address_is_consistent failed\00"
+
+define void @lazily_compiled_address_is_consistent() nounwind {
+entry:
+	store i1 ()* @test, i1 ()** @funcPtr
+	%pass = tail call i1 @test()		; <i32> [#uses=1]
+	br i1 %pass, label %pass_block, label %fail_block
+pass_block:
+	ret void
+fail_block:
+	call i32 @puts(i8* getelementptr([46 x i8]* @lcaic_failure, i32 0, i32 0))
+	call void @exit(i32 1)
+	unreachable
+}
+
+define i1 @test() nounwind {
+entry:
+	%tmp = load i1 ()** @funcPtr
+	%eq = icmp eq i1 ()* %tmp, @test
+	ret i1 %eq
+}
+
+declare i32 @puts(i8*) noreturn
+declare void @exit(i32) noreturn
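
In API terms, the test checks that the address handed out for @test stays
identical once the body is actually JIT-compiled (PR3043). A hedged C++
sketch of the same invariant, assuming a 2.6-era ExecutionEngine *EE that
was already built for the module (setup elided):

    // Sketch only: the function's address must not move across lazy
    // compilation; checkStableAddress is a hypothetical helper.
    #include "llvm/Function.h"
    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/ExecutionEngine/GenericValue.h"
    #include <cassert>
    #include <vector>
    using namespace llvm;

    void checkStableAddress(ExecutionEngine *EE, Function *F) {
      EE->DisableLazyCompilation(false);          // keep lazy mode on
      void *Before = EE->getPointerToFunction(F);
      std::vector<GenericValue> NoArgs;
      EE->runFunction(F, NoArgs);                 // F is compiled for real
      assert(Before == EE->getPointerToFunction(F) && "PR3043 violated");
    }
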
diff --git a/libclamav/c++/llvm/test/Feature/md_on_instruction.ll b/libclamav/c++/llvm/test/Feature/md_on_instruction.ll
index d765cd8..da9e49e 100644
--- a/libclamav/c++/llvm/test/Feature/md_on_instruction.ll
+++ b/libclamav/c++/llvm/test/Feature/md_on_instruction.ll
@@ -1,5 +1,4 @@
-; RUN: llvm-as < %s -disable-output
-
+; RUN: llvm-as < %s | llvm-dis | grep " !dbg " | count 4
 define i32 @foo() nounwind ssp {
 entry:
   %retval = alloca i32                            ; <i32*> [#uses=2]
diff --git a/libclamav/c++/llvm/test/Feature/md_on_instruction2.ll b/libclamav/c++/llvm/test/Feature/md_on_instruction2.ll
deleted file mode 100644
index da9e49e..0000000
--- a/libclamav/c++/llvm/test/Feature/md_on_instruction2.ll
+++ /dev/null
@@ -1,22 +0,0 @@
-; RUN: llvm-as < %s | llvm-dis | grep " !dbg " | count 4
-define i32 @foo() nounwind ssp {
-entry:
-  %retval = alloca i32                            ; <i32*> [#uses=2]
-  call void @llvm.dbg.func.start(metadata !0)
-  store i32 42, i32* %retval, !dbg !3
-  br label %0, !dbg !3
-
-; <label>:0                                       ; preds = %entry
-  call void @llvm.dbg.region.end(metadata !0)
-  %1 = load i32* %retval, !dbg !3                  ; <i32> [#uses=1]
-  ret i32 %1, !dbg !3
-}
-
-declare void @llvm.dbg.func.start(metadata) nounwind readnone
-
-declare void @llvm.dbg.region.end(metadata) nounwind readnone
-
-!0 = metadata !{i32 458798, i32 0, metadata !1, metadata !"foo", metadata !"foo", metadata !"foo", metadata !1, i32 1, metadata !2, i1 false, i1 true}
-!1 = metadata !{i32 458769, i32 0, i32 12, metadata !"foo.c", metadata !"/tmp", metadata !"clang 1.0", i1 true, i1 false, metadata !"", i32 0}
-!2 = metadata !{i32 458788, metadata !1, metadata !"int", metadata !1, i32 0, i64 32, i64 32, i64 0, i32 0, i32 5}
-!3 = metadata !{i32 1, i32 13, metadata !1, metadata !1}
diff --git a/libclamav/c++/llvm/test/Feature/memorymarkers.ll b/libclamav/c++/llvm/test/Feature/memorymarkers.ll
new file mode 100644
index 0000000..06b8376
--- /dev/null
+++ b/libclamav/c++/llvm/test/Feature/memorymarkers.ll
@@ -0,0 +1,36 @@
+; RUN: llvm-as -disable-output < %s
+
+%"struct.std::pair<int,int>" = type { i32, i32 }
+
+declare void @_Z3barRKi(i32*)
+
+declare void @llvm.lifetime.start(i64, i8* nocapture) nounwind
+declare void @llvm.lifetime.end(i64, i8* nocapture) nounwind
+declare {}* @llvm.invariant.start(i64, i8* nocapture) readonly nounwind
+declare void @llvm.invariant.end({}*, i64, i8* nocapture) nounwind
+
+define i32 @_Z4foo2v() nounwind {
+entry:
+  %x = alloca %"struct.std::pair<int,int>"
+  %y = bitcast %"struct.std::pair<int,int>"* %x to i8*
+
+  ;; Constructor starts here (this isn't needed since it is immediately
+  ;; preceded by an alloca, but shown for completeness).
+  call void @llvm.lifetime.start(i64 8, i8* %y)
+
+  %0 = getelementptr %"struct.std::pair<int,int>"* %x, i32 0, i32 0
+  store i32 4, i32* %0, align 8
+  %1 = getelementptr %"struct.std::pair<int,int>"* %x, i32 0, i32 1
+  store i32 5, i32* %1, align 4
+
+  ;; Constructor has finished here.
+  %inv = call {}* @llvm.invariant.start(i64 8, i8* %y)
+  call void @_Z3barRKi(i32* %0) nounwind
+  %2 = load i32* %0, align 8
+
+  ;; Destructor is run here.
+  call void @llvm.invariant.end({}* %inv, i64 8, i8* %y)
+  ;; Destructor is done here.
+  call void @llvm.lifetime.end(i64 8, i8* %y)
+  ret i32 %2
+}
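
For orientation, the IR above is roughly what a front end could emit for a
local std::pair that is built, read through a const reference, and then
destroyed. A hypothetical C++ source shape (the mangled names match
bar(int const&) and foo2(); this is not part of the commit):

    #include <utility>
    void bar(const int &);          // mangles to _Z3barRKi

    int foo2() {                    // mangles to _Z4foo2v
      std::pair<int, int> x(4, 5);  // lifetime.start, stores, invariant.start
      bar(x.first);
      return x.first;               // load; invariant.end/lifetime.end follow
    }
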
diff --git a/libclamav/c++/llvm/test/Feature/terminators.ll b/libclamav/c++/llvm/test/Feature/terminators.ll
new file mode 100644
index 0000000..1bca2a8
--- /dev/null
+++ b/libclamav/c++/llvm/test/Feature/terminators.ll
@@ -0,0 +1,43 @@
+; RUN: llvm-as < %s | llvm-dis > %t1.ll
+; RUN: llvm-as %t1.ll -o - | llvm-dis > %t2.ll
+; RUN: diff %t1.ll %t2.ll
+
+        %int = type i32
+
+define i32 @squared(i32 %i0) {
+        switch i32 %i0, label %Default [
+                 i32 1, label %Case1
+                 i32 2, label %Case2
+                 i32 4, label %Case4
+        ]
+
+Default:                ; preds = %0
+        ret i32 -1
+
+Case1:          ; preds = %0
+        ret i32 1
+
+Case2:          ; preds = %0
+        ret i32 4
+
+Case4:          ; preds = %0
+        ret i32 16
+}
+
+
+ at Addr = global i8* blockaddress(@indbrtest, %BB1)
+ at Addr3 = global i8* blockaddress(@squared, %Case1)
+
+
+define i32 @indbrtest(i8* %P, i32* %Q) {
+  indirectbr i8* %P, [label %BB1, label %BB2, label %BB3]
+BB1:
+  indirectbr i32* %Q, []
+BB2:
+  %R = bitcast i8* blockaddress(@indbrtest, %BB3) to i8*
+  indirectbr i8* %R, [label %BB1, label %BB2, label %BB3]
+BB3:
+  ret i32 2
+}
+
+
diff --git a/libclamav/c++/llvm/test/Feature/testswitch.ll b/libclamav/c++/llvm/test/Feature/testswitch.ll
deleted file mode 100644
index 417f56b..0000000
--- a/libclamav/c++/llvm/test/Feature/testswitch.ll
+++ /dev/null
@@ -1,26 +0,0 @@
-; RUN: llvm-as < %s | llvm-dis > %t1.ll
-; RUN: llvm-as %t1.ll -o - | llvm-dis > %t2.ll
-; RUN: diff %t1.ll %t2.ll
-
-        %int = type i32
-
-define i32 @squared(i32 %i0) {
-        switch i32 %i0, label %Default [
-                 i32 1, label %Case1
-                 i32 2, label %Case2
-                 i32 4, label %Case4
-        ]
-
-Default:                ; preds = %0
-        ret i32 -1
-
-Case1:          ; preds = %0
-        ret i32 1
-
-Case2:          ; preds = %0
-        ret i32 4
-
-Case4:          ; preds = %0
-        ret i32 16
-}
-
diff --git a/libclamav/c++/llvm/test/Makefile b/libclamav/c++/llvm/test/Makefile
index 4955c2e..e7776f8 100644
--- a/libclamav/c++/llvm/test/Makefile
+++ b/libclamav/c++/llvm/test/Makefile
@@ -75,12 +75,18 @@ ifdef IGNORE_TESTS
 RUNTESTFLAGS += --ignore "$(strip $(IGNORE_TESTS))"
 endif
 
+# ulimits like these are redundantly enforced by the buildbots, so
+# just removing them here won't work.
 # Both AuroraUX & Solaris do not have the -m flag for ulimit
 ifeq ($(HOST_OS),SunOS)
 ULIMIT=ulimit -t 600 ; ulimit -d 512000 ; ulimit -v 512000 ;
-else
+else # !SunOS
+ifeq ($(HOST_OS),AuroraUX)
+ULIMIT=ulimit -t 600 ; ulimit -d 512000 ; ulimit -v 512000 ;
+else # !AuroraUX
 ULIMIT=ulimit -t 600 ; ulimit -d 512000 ; ulimit -m 512000 ; ulimit -v 512000 ;
-endif
+endif # AuroraUX
+endif # SunOS
 
 ifneq ($(RUNTEST),)
 check-local:: site.exp
@@ -94,19 +100,11 @@ endif
 
 check-local-lit:: lit.site.cfg Unit/lit.site.cfg
 	( $(ULIMIT) \
-	  $(LLVM_SRC_ROOT)/utils/lit/lit.py \
-		--path "$(LLVMToolDir)" \
-		--path "$(LLVM_SRC_ROOT)/test/Scripts" \
-		--path "$(LLVMGCCDIR)/bin" \
-		$(LIT_ARGS) $(LIT_TESTSUITE) )
+	  $(LLVM_SRC_ROOT)/utils/lit/lit.py $(LIT_ARGS) $(LIT_TESTSUITE) )
 
 check-local-all:: lit.site.cfg Unit/lit.site.cfg extra-lit-site-cfgs
 	( $(ULIMIT) \
-	  $(LLVM_SRC_ROOT)/utils/lit/lit.py \
-		--path "$(LLVMToolDir)" \
-		--path "$(LLVM_SRC_ROOT)/test/Scripts" \
-		--path "$(LLVMGCCDIR)/bin" \
-		$(LIT_ARGS) $(LIT_ALL_TESTSUITES) )
+	  $(LLVM_SRC_ROOT)/utils/lit/lit.py $(LIT_ARGS) $(LIT_ALL_TESTSUITES) )
 
 ifdef TESTONE
 CLEANED_TESTONE := $(patsubst %/,%,$(TESTONE))
@@ -152,9 +150,8 @@ FORCE:
 
 site.exp: FORCE
 	@echo 'Making a new site.exp file...'
-	@echo '## these variables are automatically generated by make ##' >site.tmp
-	@echo '# Do not edit here.  If you wish to override these values' >>site.tmp
-	@echo '# edit the last section' >>site.tmp
+	@echo '## Autogenerated by LLVM configuration.' > site.tmp
+	@echo '# Do not edit!' >> site.tmp
 	@echo 'set target_triplet "$(TARGET_TRIPLE)"' >> site.tmp
 	@echo 'set TARGETS_TO_BUILD "$(TARGETS_TO_BUILD)"' >> site.tmp
 	@echo 'set llvmgcc_langs "$(LLVMGCC_LANGS)"' >> site.tmp
@@ -198,15 +195,9 @@ lit.site.cfg: site.exp
 
 Unit/lit.site.cfg: $(PROJ_OBJ_DIR)/Unit/.dir FORCE
 	@echo "Making LLVM unittest 'lit.site.cfg' file..."
-	@echo "## Autogenerated by Makefile ##" > $@
-	@echo "# Do not edit!" >> $@
-	@echo >> $@
-	@echo "# Preserve some key paths for use by main LLVM test suite config." >> $@
-	@echo "config.llvm_obj_root = \"\"\"$(LLVM_OBJ_ROOT)\"\"\"" >> $@
-	@echo >> $@
-	@echo "# Remember the build mode." >> $@
-	@echo "config.llvm_build_mode = \"\"\"$(BuildMode)\"\"\"" >> $@
-	@echo >> $@
-	@echo "# Let the main config do the real work." >> $@
-	@echo "lit.load_config(config, \"\"\"$(LLVM_SRC_ROOT)/test/Unit/lit.cfg\"\"\")" >> $@
-
+	@sed -e "s#@LLVM_SOURCE_DIR@#$(LLVM_SRC_ROOT)#g" \
+	     -e "s#@LLVM_BINARY_DIR@#$(LLVM_OBJ_ROOT)#g" \
+	     -e "s#@LLVM_TOOLS_DIR@#$(ToolDir)#g" \
+	     -e "s#@LLVMGCCDIR@#$(LLVMGCCDIR)#g" \
+	     -e "s#@LLVM_BUILD_MODE@#$(BuildMode)#g" \
+	     $(PROJ_SRC_DIR)/Unit/lit.site.cfg.in > $@
diff --git a/libclamav/c++/llvm/test/Other/2003-02-19-LoopInfoNestingBug.ll b/libclamav/c++/llvm/test/Other/2003-02-19-LoopInfoNestingBug.ll
index 267b0e8..13f8351 100644
--- a/libclamav/c++/llvm/test/Other/2003-02-19-LoopInfoNestingBug.ll
+++ b/libclamav/c++/llvm/test/Other/2003-02-19-LoopInfoNestingBug.ll
@@ -3,7 +3,7 @@
 ; and instead nests it just inside loop "Top"
 ;
 ; RUN: opt < %s -analyze -loops | \
-; RUN:   grep {     Loop at depth 3 containing: %Inner<header><latch><exit>}
+; RUN:   grep {     Loop at depth 3 containing: %Inner<header><latch><exiting>}
 ;
 define void @test() {
         br label %Top
diff --git a/libclamav/c++/llvm/test/Scripts/macho-dump b/libclamav/c++/llvm/test/Scripts/macho-dump
index 12ec26d..5b9943a 100755
--- a/libclamav/c++/llvm/test/Scripts/macho-dump
+++ b/libclamav/c++/llvm/test/Scripts/macho-dump
@@ -104,6 +104,9 @@ def dumpLoadCommand(f, i, opts):
       dumpSymtabCommand(f, opts)
    elif cmd == 11:
       dumpDysymtabCommand(f, opts)
+   elif cmd == 27:
+      import uuid
+      print "  ('uuid', %s)" % uuid.UUID(bytes=f.read(16))
    else:
       print >>sys.stderr,"%s: warning: unknown load command: %r" % (sys.argv[0], cmd)
       f.read(cmdSize - 8)
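
Load command 27 is LC_UUID; its payload is the image's 16-byte UUID, which
the dumper now formats through Python's uuid module. The same formatting as
a rough C++ sketch (dumpUUID is a hypothetical helper):

    // Prints a 16-byte LC_UUID payload in the 8-4-4-4-12 form that
    // uuid.UUID(bytes=...) produces.
    #include <cstdio>

    void dumpUUID(const unsigned char u[16]) {
      std::printf("  ('uuid', %02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-"
                  "%02x%02x%02x%02x%02x%02x)\n",
                  u[0], u[1], u[2], u[3], u[4], u[5], u[6], u[7],
                  u[8], u[9], u[10], u[11], u[12], u[13], u[14], u[15]);
    }
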
diff --git a/libclamav/c++/llvm/test/Scripts/notcast b/libclamav/c++/llvm/test/Scripts/notcast
deleted file mode 100755
index 6d56580..0000000
--- a/libclamav/c++/llvm/test/Scripts/notcast
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/bin/sh
-#
-# Program: notcast
-#
-# Synopsis: Returns 0 if the input does not contain a cast operator
-#
-# Syntax:   notcast tailexpr
-#
-#    postpat - optionally allows a regular expression to go at the end
-#    prepat  - optionally allow a regular expression to go at the start
-#
-
-if grep "$2"'\(\([sz]ext\)\|\(trunc\)\|\(fpto[us]i\)\|\([us]itofp\)\|\(bitcast\)\|\(fpext\)\|\(fptrunc\)\|\(ptrtoint\)\|\(inttoptr\)\|\(cast\)\)'"$1"
-then exit 1
-else exit 0
-fi
diff --git a/libclamav/c++/llvm/test/TableGen/UnsetBitInit.td b/libclamav/c++/llvm/test/TableGen/UnsetBitInit.td
new file mode 100644
index 0000000..91342ec
--- /dev/null
+++ b/libclamav/c++/llvm/test/TableGen/UnsetBitInit.td
@@ -0,0 +1,10 @@
+// RUN: tblgen %s
+class x {
+  field bits<32> A;
+}
+
+class y<bits<2> B> : x {
+  let A{21-20} = B;
+}
+
+def z : y<{0,?}>;
diff --git a/libclamav/c++/llvm/test/Unit/lit.cfg b/libclamav/c++/llvm/test/Unit/lit.cfg
index 6fd3998..8321593 100644
--- a/libclamav/c++/llvm/test/Unit/lit.cfg
+++ b/libclamav/c++/llvm/test/Unit/lit.cfg
@@ -7,8 +7,7 @@ import os
 # name: The name of this test suite.
 config.name = 'LLVM-Unit'
 
-# suffixes: A list of file extensions to treat as test files, this is actually
-# set by on_clone().
+# suffixes: A list of file extensions to treat as test files.
 config.suffixes = []
 
 # test_source_root: The root path where tests are located.
diff --git a/libclamav/c++/llvm/test/Unit/lit.site.cfg.in b/libclamav/c++/llvm/test/Unit/lit.site.cfg.in
new file mode 100644
index 0000000..c190ffa
--- /dev/null
+++ b/libclamav/c++/llvm/test/Unit/lit.site.cfg.in
@@ -0,0 +1,10 @@
+## Autogenerated by LLVM/Clang configuration.
+# Do not edit!
+config.llvm_src_root = "@LLVM_SOURCE_DIR@"
+config.llvm_obj_root = "@LLVM_BINARY_DIR@"
+config.llvm_tools_dir = "@LLVM_TOOLS_DIR@"
+config.llvmgcc_dir = "@LLVMGCCDIR@"
+config.llvm_build_mode = "@LLVM_BUILD_MODE@"
+
+# Let the main config do the real work.
+lit.load_config(config, "@LLVM_SOURCE_DIR@/test/Unit/lit.cfg")
diff --git a/libclamav/c++/llvm/test/lit.cfg b/libclamav/c++/llvm/test/lit.cfg
index 7eac5c6..1939792 100644
--- a/libclamav/c++/llvm/test/lit.cfg
+++ b/libclamav/c++/llvm/test/lit.cfg
@@ -22,6 +22,31 @@ llvm_obj_root = getattr(config, 'llvm_obj_root', None)
 if llvm_obj_root is not None:
     config.test_exec_root = os.path.join(llvm_obj_root, 'test')
 
+# Tweak the PATH to include the scripts dir, the tools dir, and the llvm-gcc bin
+# dir (if available).
+if llvm_obj_root is not None:
+    llvm_src_root = getattr(config, 'llvm_src_root', None)
+    if not llvm_src_root:
+        lit.fatal('No LLVM source root set!')
+    path = os.path.pathsep.join((os.path.join(llvm_src_root, 'test',
+                                              'Scripts'),
+                                 config.environment['PATH']))
+    config.environment['PATH'] = path
+
+    llvm_tools_dir = getattr(config, 'llvm_tools_dir', None)
+    if not llvm_tools_dir:
+        lit.fatal('No LLVM tools dir set!')
+    path = os.path.pathsep.join((llvm_tools_dir, config.environment['PATH']))
+    config.environment['PATH'] = path
+
+    llvmgcc_dir = getattr(config, 'llvmgcc_dir', None)
+    if llvmgcc_dir is None:
+        lit.fatal('No llvm-gcc dir set!')
+    if llvmgcc_dir:
+        path = os.path.pathsep.join((os.path.join(llvmgcc_dir, 'bin'),
+                                     config.environment['PATH']))
+        config.environment['PATH'] = path
+
 ###
 
 import os
@@ -76,6 +101,7 @@ for line in open(os.path.join(config.llvm_obj_root, 'test', 'site.exp')):
         site_exp[m.group(1)] = m.group(2)
 
 # Add substitutions.
+config.substitutions.append(('%llvmgcc_only', site_exp['llvmgcc']))
 for sub in ['llvmgcc', 'llvmgxx', 'compile_cxx', 'compile_c',
             'link', 'shlibext', 'ocamlopt', 'llvmdsymutil', 'llvmlibsdir',
             'bugpoint_topts']:
diff --git a/libclamav/c++/llvm/test/site.exp.in b/libclamav/c++/llvm/test/site.exp.in
index 6a74ba8..f88d361 100644
--- a/libclamav/c++/llvm/test/site.exp.in
+++ b/libclamav/c++/llvm/test/site.exp.in
@@ -1,9 +1,10 @@
-## Autogenerated by LLVM/Clang configuration.
+## Autogenerated by LLVM configuration.
 # Do not edit!
-set target_triplet "@target@"
+set target_triplet "@TARGET_TRIPLE@"
 set TARGETS_TO_BUILD "@TARGETS_TO_BUILD@"
 set llvmgcc_langs "@LLVMGCC_LANGS@"
 set llvmgcc_version "@LLVMGCC_VERSION@"
+set llvmtoolsdir "@LLVM_TOOLS_DIR@"
 set llvmlibsdir "@LLVM_LIBS_DIR@"
 set llvm_bindings "@LLVM_BINDINGS@"
 set srcroot "@LLVM_SOURCE_DIR@"
diff --git a/libclamav/c++/llvm/tools/CMakeLists.txt b/libclamav/c++/llvm/tools/CMakeLists.txt
index a253b33..8b5d77e 100644
--- a/libclamav/c++/llvm/tools/CMakeLists.txt
+++ b/libclamav/c++/llvm/tools/CMakeLists.txt
@@ -27,7 +27,6 @@ add_subdirectory(llvm-link)
 add_subdirectory(lli)
 
 add_subdirectory(llvm-extract)
-add_subdirectory(llvm-db)
 
 add_subdirectory(bugpoint)
 add_subdirectory(llvm-bcanalyzer)
diff --git a/libclamav/c++/llvm/tools/Makefile b/libclamav/c++/llvm/tools/Makefile
index caf8b2f..0340c7f 100644
--- a/libclamav/c++/llvm/tools/Makefile
+++ b/libclamav/c++/llvm/tools/Makefile
@@ -19,7 +19,7 @@ DIRS := llvm-config
 PARALLEL_DIRS := opt llvm-as llvm-dis \
                  llc llvm-ranlib llvm-ar llvm-nm \
                  llvm-ld llvm-prof llvm-link \
-                 lli llvm-extract llvm-db \
+                 lli llvm-extract \
                  bugpoint llvm-bcanalyzer llvm-stub \
                  llvm-mc llvmc
 
diff --git a/libclamav/c++/llvm/tools/llc/llc.cpp b/libclamav/c++/llvm/tools/llc/llc.cpp
index b94e5fb..84e6867 100644
--- a/libclamav/c++/llvm/tools/llc/llc.cpp
+++ b/libclamav/c++/llvm/tools/llc/llc.cpp
@@ -298,7 +298,7 @@ int main(int argc, char **argv) {
     return 1;
   case ' ': break;
   case '0': OLvl = CodeGenOpt::None; break;
-  case '1':
+  case '1': OLvl = CodeGenOpt::Less; break;
   case '2': OLvl = CodeGenOpt::Default; break;
   case '3': OLvl = CodeGenOpt::Aggressive; break;
   }
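
This hunk, and the identical one in lli.cpp below, fix the same slip: the
bare "case '1':" had no statement, so control fell through and -O1 silently
behaved like -O2. Reduced illustration (names are made up):

    enum OptLevel { None, Less, Default, Aggressive };

    OptLevel parseOptLevel(char C) {
      OptLevel OLvl = Default;
      switch (C) {
      case '0': OLvl = None; break;
      case '1': OLvl = Less; break;       // was a bare "case '1':" that
      case '2': OLvl = Default; break;    // fell through into the '2' case
      case '3': OLvl = Aggressive; break;
      }
      return OLvl;
    }
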
diff --git a/libclamav/c++/llvm/tools/lli/lli.cpp b/libclamav/c++/llvm/tools/lli/lli.cpp
index e5c1070..218bb93 100644
--- a/libclamav/c++/llvm/tools/lli/lli.cpp
+++ b/libclamav/c++/llvm/tools/lli/lli.cpp
@@ -148,7 +148,7 @@ int main(int argc, char **argv, char * const *envp) {
     return 1;
   case ' ': break;
   case '0': OLvl = CodeGenOpt::None; break;
-  case '1':
+  case '1': OLvl = CodeGenOpt::Less; break;
   case '2': OLvl = CodeGenOpt::Default; break;
   case '3': OLvl = CodeGenOpt::Aggressive; break;
   }
@@ -163,11 +163,9 @@ int main(int argc, char **argv, char * const *envp) {
     exit(1);
   }
 
-  EE->RegisterJITEventListener(createMacOSJITEventListener());
   EE->RegisterJITEventListener(createOProfileJITEventListener());
 
-  if (NoLazyCompilation)
-    EE->DisableLazyCompilation();
+  EE->DisableLazyCompilation(NoLazyCompilation);
 
   // If the user specifically requested an argv[0] to pass into the program,
   // do it now.
diff --git a/libclamav/c++/llvm/tools/llvm-as/llvm-as.cpp b/libclamav/c++/llvm/tools/llvm-as/llvm-as.cpp
index d510297..d39d6c8 100644
--- a/libclamav/c++/llvm/tools/llvm-as/llvm-as.cpp
+++ b/libclamav/c++/llvm/tools/llvm-as/llvm-as.cpp
@@ -50,34 +50,7 @@ static cl::opt<bool>
 DisableVerify("disable-verify", cl::Hidden,
               cl::desc("Do not run verifier on input LLVM (dangerous!)"));
 
-int main(int argc, char **argv) {
-  // Print a stack trace if we signal out.
-  sys::PrintStackTraceOnErrorSignal();
-  PrettyStackTraceProgram X(argc, argv);
-  LLVMContext &Context = getGlobalContext();
-  llvm_shutdown_obj Y;  // Call llvm_shutdown() on exit.
-  cl::ParseCommandLineOptions(argc, argv, "llvm .ll -> .bc assembler\n");
-
-  // Parse the file now...
-  SMDiagnostic Err;
-  std::auto_ptr<Module> M(ParseAssemblyFile(InputFilename, Err, Context));
-  if (M.get() == 0) {
-    Err.Print(argv[0], errs());
-    return 1;
-  }
-
-  if (!DisableVerify) {
-    std::string Err;
-    if (verifyModule(*M.get(), ReturnStatusAction, &Err)) {
-      errs() << argv[0]
-             << ": assembly parsed, but does not verify as correct!\n";
-      errs() << Err;
-      return 1;
-    } 
-  }
-
-  if (DumpAsm) errs() << "Here's the assembly:\n" << *M.get();
-
+static void WriteOutputFile(const Module *M) {
   // Infer the output filename if needed.
   if (OutputFilename.empty()) {
     if (InputFilename == "-") {
@@ -106,12 +79,43 @@ int main(int argc, char **argv) {
                       raw_fd_ostream::F_Binary));
   if (!ErrorInfo.empty()) {
     errs() << ErrorInfo << '\n';
+    exit(1);
+  }
+
+  if (Force || !CheckBitcodeOutputToConsole(*Out, true))
+    WriteBitcodeToFile(M, *Out);
+}
+
+int main(int argc, char **argv) {
+  // Print a stack trace if we signal out.
+  sys::PrintStackTraceOnErrorSignal();
+  PrettyStackTraceProgram X(argc, argv);
+  LLVMContext &Context = getGlobalContext();
+  llvm_shutdown_obj Y;  // Call llvm_shutdown() on exit.
+  cl::ParseCommandLineOptions(argc, argv, "llvm .ll -> .bc assembler\n");
+
+  // Parse the file now...
+  SMDiagnostic Err;
+  std::auto_ptr<Module> M(ParseAssemblyFile(InputFilename, Err, Context));
+  if (M.get() == 0) {
+    Err.Print(argv[0], errs());
     return 1;
   }
 
+  if (!DisableVerify) {
+    std::string Err;
+    if (verifyModule(*M.get(), ReturnStatusAction, &Err)) {
+      errs() << argv[0]
+             << ": assembly parsed, but does not verify as correct!\n";
+      errs() << Err;
+      return 1;
+    }
+  }
+
+  if (DumpAsm) errs() << "Here's the assembly:\n" << *M.get();
+
   if (!DisableOutput)
-    if (Force || !CheckBitcodeOutputToConsole(*Out, true))
-      WriteBitcodeToFile(M.get(), *Out);
+    WriteOutputFile(M.get());
+
   return 0;
 }
-
diff --git a/libclamav/c++/llvm/tools/llvm-config/CMakeLists.txt b/libclamav/c++/llvm/tools/llvm-config/CMakeLists.txt
index 15df21f..8a710ea 100644
--- a/libclamav/c++/llvm/tools/llvm-config/CMakeLists.txt
+++ b/libclamav/c++/llvm/tools/llvm-config/CMakeLists.txt
@@ -36,9 +36,6 @@ foreach(l ${LLVM_SYSTEM_LIBS_LIST})
   set(LLVM_SYSTEM_LIBS ${LLVM_SYSTEM_LIBS} "-l${l}")
 endforeach()
 
-include(GetTargetTriple)
-get_target_triple(target)
-
 foreach(c ${LLVM_TARGETS_TO_BUILD})
   set(TARGETS_BUILT "${TARGETS_BUILT} ${c}")
 endforeach(c)
@@ -136,7 +133,7 @@ set(LLVMLibDeps ${LLVM_MAIN_SRC_DIR}/cmake/modules/LLVMLibDeps.cmake)
 set(LLVMLibDeps_TMP ${CMAKE_CURRENT_BINARY_DIR}/LLVMLibDeps.cmake.tmp)
 
 add_custom_command(OUTPUT ${LLVMLibDeps_TMP}
-  COMMAND sed -e s'@\\.a@@g' -e 's at libLLVM@LLVM at g' -e 's@: @ @' -e 's@\\\(.*\\\)@set\(MSVC_LIB_DEPS_\\1\)@' ${FINAL_LIBDEPS} > ${LLVMLibDeps_TMP}
+  COMMAND sed -e s'@\\.a@@g' -e s'@\\.so@@g' -e 's at libLLVM@LLVM at g' -e 's@: @ @' -e 's@\\\(.*\\\)@set\(MSVC_LIB_DEPS_\\1\)@' ${FINAL_LIBDEPS} > ${LLVMLibDeps_TMP}
   COMMAND ${CMAKE_COMMAND} -E copy_if_different ${LLVMLibDeps_TMP} ${LLVMLibDeps}
   DEPENDS ${FINAL_LIBDEPS}
   COMMENT "Updating cmake library dependencies file ${LLVMLibDeps}"
diff --git a/libclamav/c++/llvm/tools/llvm-config/llvm-config.in.in b/libclamav/c++/llvm/tools/llvm-config/llvm-config.in.in
index 7f93f16..d0edda0 100644
--- a/libclamav/c++/llvm/tools/llvm-config/llvm-config.in.in
+++ b/libclamav/c++/llvm/tools/llvm-config/llvm-config.in.in
@@ -333,6 +333,10 @@ sub build_name_map {
         if (defined $NAME_MAP{$target.'asmparser'}) {
             push @{$NAME_MAP{$target}},$target.'asmparser'
         }
+
+        if (defined $NAME_MAP{$target.'disassembler'}) {
+            push @{$NAME_MAP{$target}},$target.'disassembler'
+        }
     }
 
     # Add virtual entries.
diff --git a/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst b/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst
index 5d3b02a..b92ab69 100644
--- a/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst
+++ b/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst
@@ -349,8 +349,9 @@ separate option groups syntactically.
 
    - ``multi_val n`` - this option takes *n* arguments (can be useful in some
      special cases). Usage example: ``(parameter_list_option "foo", (multi_val
-     3))``. Only list options can have this attribute; you can, however, use
-     the ``one_or_more`` and ``zero_or_one`` properties.
+     3))``; the command-line syntax is '-foo a b c'. Only list options can have
+     this attribute; you can, however, use the ``one_or_more``, ``zero_or_one``
+     and ``required`` properties.
 
    - ``init`` - this option has a default value, either a string (if it is a
      parameter), or a boolean (if it is a switch; boolean constants are called
@@ -430,8 +431,16 @@ use TableGen inheritance instead.
 
 * Possible tests are:
 
-  - ``switch_on`` - Returns true if a given command-line switch is
-    provided by the user. Example: ``(switch_on "opt")``.
+  - ``switch_on`` - Returns true if a given command-line switch is provided by
+    the user. Can also be given a list as argument; in that case ``(switch_on
+    ["foo", "bar", "baz"])`` is equivalent to ``(and (switch_on "foo"),
+    (switch_on "bar"), (switch_on "baz"))``.
+    Example: ``(switch_on "opt")``.
+
+  - ``any_switch_on`` - Given a list of switch options, returns true if any of
+    the switches is turned on.
+    Example: ``(any_switch_on ["foo", "bar", "baz"])`` is equivalent to ``(or
+    (switch_on "foo"), (switch_on "bar"), (switch_on "baz"))``.
 
   - ``parameter_equals`` - Returns true if a command-line parameter equals
     a given value.
@@ -445,18 +454,28 @@ use TableGen inheritance instead.
     belongs to the current input language set.
     Example: ``(input_languages_contain "c++")``.
 
-  - ``in_language`` - Evaluates to true if the input file language
-    equals to the argument. At the moment works only with ``cmd_line``
-    and ``actions`` (on non-join nodes).
+  - ``in_language`` - Evaluates to true if the input file language is equal to
+    the argument. At the moment, it works only with ``cmd_line`` and ``actions``
+    (on non-join nodes).
     Example: ``(in_language "c++")``.
 
-  - ``not_empty`` - Returns true if a given option (which should be
-    either a parameter or a parameter list) is set by the
-    user.
+  - ``not_empty`` - Returns true if a given option (which should be either a
+    parameter or a parameter list) is set by the user. Like ``switch_on``, it
+    can also be given a list as argument.
     Example: ``(not_empty "o")``.
 
+  - ``any_not_empty`` - Returns true if ``not_empty`` returns true for any of
+    the options in the list.
+    Example: ``(any_not_empty ["foo", "bar", "baz"])`` is equivalent to ``(or
+    (not_empty "foo"), (not_empty "bar"), (not_empty "baz"))``.
+
   - ``empty`` - The opposite of ``not_empty``. Equivalent to ``(not (not_empty
-    X))``. Provided for convenience.
+    X))``. Provided for convenience. Can be given a list as argument.
+
+  - ``any_empty`` - Returns true if ``empty`` returns true for any of the
+    options in the list.
+    Example: ``(any_empty ["foo", "bar", "baz"])`` is equivalent to ``(not (and
+    (not_empty "foo"), (not_empty "bar"), (not_empty "baz")))``.
 
   - ``single_input_file`` - Returns true if there was only one input file
     provided on the command-line. Used without arguments:
@@ -509,8 +528,8 @@ The complete list of all currently implemented tool properties follows.
   - ``in_language`` - input language name. Can be either a string or a
     list, in case the tool supports multiple input languages.
 
-  - ``out_language`` - output language name. Tools are not allowed to
-    have multiple output languages.
+  - ``out_language`` - output language name. Multiple output languages are not
+    allowed.
 
   - ``output_suffix`` - output file suffix. Can also be changed
     dynamically, see documentation on actions.
@@ -571,11 +590,13 @@ The list of all possible actions follows.
      Example: ``(case (switch_on "pthread"), (append_cmd
      "-lpthread"))``
 
-   - ``error` - exit with error.
+   - ``error`` - exit with error.
      Example: ``(error "Mixing -c and -S is not allowed!")``.
 
-   - ``forward`` - forward an option unchanged.
-     Example: ``(forward "Wall")``.
+   - ``warning`` - print a warning.
+     Example: ``(warning "Specifying both -O1 and -O2 is meaningless!")``.
+
+   - ``forward`` - forward an option unchanged.  Example: ``(forward "Wall")``.
 
    - ``forward_as`` - Change the name of an option, but forward the
      argument unchanged.
@@ -618,6 +639,36 @@ linked with the root node. Since tools are not allowed to have
 multiple output languages, for nodes "inside" the graph the input and
 output languages should match. This is enforced at compile-time.
 
+Option preprocessor
+===================
+
+It is sometimes useful to run error-checking code before processing the
+compilation graph. For example, if optimization options "-O1" and "-O2" are
+implemented as switches, we might want to output a warning if the user invokes
+the driver with both of these options enabled.
+
+The ``OptionPreprocessor`` feature exists precisely for such
+occasions. Example (adapted from the built-in Base plugin)::
+
+   def Preprocess : OptionPreprocessor<
+   (case (and (switch_on "O3"), (any_switch_on ["O0", "O1", "O2"])),
+              [(unset_option ["O0", "O1", "O2"]),
+               (warning "Multiple -O options specified, defaulted to -O3.")],
+         (and (switch_on "O2"), (any_switch_on ["O0", "O1"])),
+              (unset_option ["O0", "O1"]),
+         (and (switch_on "O1"), (switch_on "O0")),
+              (unset_option "O0"))
+   >;
+
+Here, ``OptionPreprocessor`` is used to unset all spurious optimization options
+(so that they are not forwarded to the compiler).
+
+``OptionPreprocessor`` is basically a single big ``case`` expression, which is
+evaluated only once right after the plugin is loaded. The only allowed actions
+in ``OptionPreprocessor`` are ``error``, ``warning`` and a special action
+``unset_option``, which, as the name suggests, unsets a given option. For
+convenience, ``unset_option`` also works on lists.
+
 
 More advanced topics
 ====================
diff --git a/libclamav/c++/llvm/tools/llvmc/example/Hello/Hello.cpp b/libclamav/c++/llvm/tools/llvmc/example/Hello/Hello.cpp
index 9c96bd0..a7179ea 100644
--- a/libclamav/c++/llvm/tools/llvmc/example/Hello/Hello.cpp
+++ b/libclamav/c++/llvm/tools/llvmc/example/Hello/Hello.cpp
@@ -17,6 +17,10 @@
 
 namespace {
 struct MyPlugin : public llvmc::BasePlugin {
+
+  void PreprocessOptions() const
+  {}
+
   void PopulateLanguageMap(llvmc::LanguageMap&) const
   { outs() << "Hello!\n"; }
 
diff --git a/libclamav/c++/llvm/tools/llvmc/example/mcc16/driver/Main.cpp b/libclamav/c++/llvm/tools/llvmc/example/mcc16/driver/Main.cpp
index f42e17f..5d50f9d 100644
--- a/libclamav/c++/llvm/tools/llvmc/example/mcc16/driver/Main.cpp
+++ b/libclamav/c++/llvm/tools/llvmc/example/mcc16/driver/Main.cpp
@@ -13,18 +13,31 @@
 //
 //===----------------------------------------------------------------------===//
 
+#include "llvm/Config/config.h"
 #include "llvm/CompilerDriver/BuiltinOptions.h"
 #include "llvm/CompilerDriver/ForceLinkage.h"
 #include "llvm/System/Path.h"
+#include <iostream>
 
 namespace llvmc {
   int Main(int argc, char** argv);
 }
 
+// PACKAGE_VERSION is set in the top-level configure file; modify it there
+// to include a build number.
+void PIC16VersionPrinter(void) {
+  std::cout << "MPLAB C16 1.0 " << PACKAGE_VERSION << "\n";
+}
+
 int main(int argc, char** argv) {
 
   // HACK
   SaveTemps.setHiddenFlag(llvm::cl::Hidden);
+  TempDirname.setHiddenFlag(llvm::cl::Hidden);
+  Languages.setHiddenFlag(llvm::cl::Hidden);
+  DryRun.setHiddenFlag(llvm::cl::Hidden);
+
+  llvm::cl::SetVersionPrinter(PIC16VersionPrinter); 
+  
   TempDirname = "tmp-objs";
 
   // Remove the temp dir if already exists.
diff --git a/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td b/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td
index 3d25ab6..df9b99e 100644
--- a/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td
+++ b/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td
@@ -11,85 +11,149 @@ include "llvm/CompilerDriver/Common.td"
 def OptionList : OptionList<[
  (switch_option "g",
     (help "Enable Debugging")),
+ (switch_option "E",
+    (help "Stop after preprocessing, do not compile")),
  (switch_option "S",
     (help "Stop after compilation, do not assemble")),
+ (switch_option "bc",
+    (help "Stop after b-code generation, do not compile")),
  (switch_option "c",
     (help "Stop after assemble, do not link")),
- (parameter_option "I",
+ (prefix_list_option "I",
     (help "Add a directory to include path")),
- (parameter_option "pre-RA-sched",
-    (help "Example of an option that is passed to llc")),
+ (prefix_list_option "L",
+    (help "Add a directory to library path")),
+ (prefix_list_option "K",
+    (help "Add a directory to linker script search path")),
+ (parameter_option "l",
+    (help "Specify a library to link")),
+ (parameter_option "k",
+    (help "Specify a linker script")),
+ (parameter_option "m",
+    (help "Generate linker map file with the given name")),
+ (prefix_list_option "D",
+    (help "Define a macro")),
+ (switch_option "O0",
+    (help "Do not optimize")),
+// (switch_option "O1",
+//    (help "Optimization level 1")),
+// (switch_option "O2",
+//    (help "Optimization level 2. (Default)")),
+// (parameter_option "pre-RA-sched",
+//    (help "Example of an option that is passed to llc")),
  (prefix_list_option "Wa,",
     (help "Pass options to native assembler")),
  (prefix_list_option "Wl,",
-    (help "Pass options to native linker")),
- (prefix_list_option "Wllc,",
-    (help "Pass options to llc")),
- (prefix_list_option "Wo,",
-    (help "Pass options to llvm-ld"))
+    (help "Pass options to native linker"))
+// (prefix_list_option "Wllc,",
+//    (help "Pass options to llc")),
+// (prefix_list_option "Wo,",
+//    (help "Pass options to llvm-ld"))
 ]>;
 
 // Tools
-
-def clang_cc : Tool<[
- (in_language "c"),
+class clang_based<string language, string cmd, string ext_E> : Tool<
+[(in_language language),
  (out_language "llvm-bitcode"),
  (output_suffix "bc"),
- (cmd_line "$CALL(GetBinDir)clang-cc -I $CALL(GetStdHeadersDir) -triple=pic16- -emit-llvm-bc $INFILE -o $OUTFILE"),
- (actions (case
-          (not_empty "I"), (forward "I"))),
+ (cmd_line (case
+           (switch_on "E"),
+           (case 
+              (not_empty "o"), !strconcat(cmd, " -E $INFILE -o $OUTFILE"),
+              (default), !strconcat(cmd, " -E $INFILE")),
+           (default), !strconcat(cmd, " $INFILE -o $OUTFILE"))),
+ (actions (case 
+                (and (multiple_input_files), (or (switch_on "S"), (switch_on "c"))),
+              (error "cannot specify -o with -c or -S with multiple files"),
+                (switch_on "E"), [(stop_compilation), (output_suffix ext_E)],
+                (switch_on "bc"),[(stop_compilation), (output_suffix "bc")],
+                (switch_on "g"), (append_cmd "-g"),
+                (not_empty "D"), (forward "D"),
+                (not_empty "I"), (forward "I"))),
  (sink)
 ]>;
 
+def clang_cc : clang_based<"c", "$CALL(GetBinDir)clang-cc -I $CALL(GetStdHeadersDir) -triple=pic16- -emit-llvm-bc ", "i">;
+
+//def clang_cc : Tool<[
+// (in_language "c"),
+// (out_language "llvm-bitcode"),
+// (output_suffix "bc"),
+// (cmd_line "$CALL(GetBinDir)clang-cc -I $CALL(GetStdHeadersDir) -triple=pic16- -emit-llvm-bc "),
+// (cmd_line kkkkk
+// (actions (case
+//          (switch_on "g"), (append_cmd "g"),
+//          (not_empty "I"), (forward "I"))),
+// (sink)
+//]>;
+
+
+// pre-link-and-lto step.
 def llvm_ld : Tool<[
  (in_language "llvm-bitcode"),
  (out_language "llvm-bitcode"),
  (output_suffix "bc"),
- (cmd_line "$CALL(GetBinDir)llvm-ld -link-as-library $INFILE -o $OUTFILE"),
+ (cmd_line "$CALL(GetBinDir)llvm-ld -L $CALL(GetStdLibsDir) -disable-gvn -instcombine -disable-inlining                   $INFILE -b $OUTFILE -l std"),
  (actions (case
-          (switch_on "g"), (append_cmd "-disable-opt"),
-          (not_empty "Wo,"), (unpack_values "Wo,")))
+          (switch_on "O0"), (append_cmd "-disable-opt"))),
+ (join)
 ]>;
 
-def llvm_ld_lto : Tool<[
+// optimize single file
+def llvm_ld_optimizer : Tool<[
  (in_language "llvm-bitcode"),
  (out_language "llvm-bitcode"),
  (output_suffix "bc"),
- (cmd_line "$CALL(GetBinDir)llvm-ld -L $CALL(GetStdLibsDir) -l std $INFILE -b $OUTFILE"),
+ (cmd_line "$CALL(GetBinDir)llvm-ld -disable-gvn -instcombine -disable-inlining                   $INFILE -b $OUTFILE"),
  (actions (case
-          (switch_on "g"), (append_cmd "-disable-opt"),
-          (not_empty "Wo,"), (unpack_values "Wo,"))),
- (join)
+          (switch_on "O0"), (append_cmd "-disable-opt")))
+]>;
+
+// optimizer step.
+def pic16passes : Tool<[
+ (in_language "llvm-bitcode"),
+ (out_language "llvm-bitcode"),
+ (output_suffix "obc"),
+ (cmd_line "$CALL(GetBinDir)opt -pic16cg -pic16overlay $INFILE -f -o $OUTFILE"),
+ (actions (case
+          (switch_on "O0"), (append_cmd "-disable-opt")))
 ]>;
 
 def llc : Tool<[
  (in_language "llvm-bitcode"),
  (out_language "assembler"),
  (output_suffix "s"),
- (cmd_line "$CALL(GetBinDir)llc -march=pic16 -disable-jump-tables -f $INFILE -o $OUTFILE"),
+ (cmd_line "$CALL(GetBinDir)llc -march=pic16 -disable-jump-tables -pre-RA-sched=list-burr -regalloc=pbqp -f $INFILE -o $OUTFILE"),
  (actions (case
-          (switch_on "S"), (stop_compilation),
-          (not_empty "Wllc,"), (unpack_values "Wllc,"),
-          (not_empty "pre-RA-sched"), (forward "pre-RA-sched")))
+          (switch_on "S"), (stop_compilation)))
+//          (not_empty "Wllc,"), (unpack_values "Wllc,"),
+//         (not_empty "pre-RA-sched"), (forward "pre-RA-sched")))
 ]>;
 
 def gpasm : Tool<[
  (in_language "assembler"),
  (out_language "object-code"),
  (output_suffix "o"),
- (cmd_line "$CALL(GetBinDir)gpasm -r decimal -p p16F1937 -I $CALL(GetStdAsmHeadersDir) -C -c $INFILE -o $OUTFILE"),
+ (cmd_line "$CALL(GetBinDir)gpasm -r decimal -p p16F1937 -I $CALL(GetStdAsmHeadersDir) -C -c -q $INFILE -o $OUTFILE"),
  (actions (case
           (switch_on "c"), (stop_compilation),
+          (switch_on "g"), (append_cmd "-g"),
           (not_empty "Wa,"), (unpack_values "Wa,")))
 ]>;
 
 def mplink : Tool<[
  (in_language "object-code"),
  (out_language "executable"),
- (output_suffix "out"),
- (cmd_line "$CALL(GetBinDir)mplink.exe -k $CALL(GetStdLinkerScriptsDir) -l $CALL(GetStdLibsDir) 16f1937_g.lkr intrinsics.lib devices.lib $INFILE -o $OUTFILE"),
+ (output_suffix "cof"),
+ (cmd_line "$CALL(GetBinDir)mplink.exe -k $CALL(GetStdLinkerScriptsDir) -l $CALL(GetStdLibsDir) -p 16f1937  intrinsics.lib devices.lib $INFILE -o $OUTFILE"),
  (actions (case
-          (not_empty "Wl,"), (unpack_values "Wl,"))),
+          (not_empty "Wl,"), (unpack_values "Wl,"),
+          (not_empty "L"), (forward_as "L", "-l"),
+          (not_empty "K"), (forward_as "K", "-k"),
+          (not_empty "m"), (forward "m"),
+//          (not_empty "l"), [(unpack_values "l"),(append_cmd ".lib")])),
+          (not_empty "k"), (unpack_values "k"),
+          (not_empty "l"), (unpack_values "l"))),
  (join)
 ]>;
 
@@ -103,19 +167,26 @@ def LanguageMap : LanguageMap<[
     LangToSuffixes<"llvm-assembler", ["ll"]>,
     LangToSuffixes<"llvm-bitcode", ["bc"]>,
     LangToSuffixes<"object-code", ["o"]>,
-    LangToSuffixes<"executable", ["out"]>
+    LangToSuffixes<"executable", ["cof"]>
 ]>;
 
 // Compilation graph
 
 def CompilationGraph : CompilationGraph<[
     Edge<"root", "clang_cc">,
-    Edge<"clang_cc", "llvm_ld_lto">,
-    Edge<"llvm_ld_lto", "llc">,
-    OptionalEdge<"clang_cc", "llvm_ld", (case 
+    Edge<"root", "llvm_ld">,
+    OptionalEdge<"root", "llvm_ld_optimizer", (case 
+                                         (switch_on "S"), (inc_weight),
+                                         (switch_on "c"), (inc_weight))>,
+    Edge<"root", "gpasm">,
+    Edge<"root", "mplink">,
+    Edge<"clang_cc", "llvm_ld">,
+    OptionalEdge<"clang_cc", "llvm_ld_optimizer", (case 
                                          (switch_on "S"), (inc_weight),
                                          (switch_on "c"), (inc_weight))>,
-    Edge<"llvm_ld", "llc">,
+    Edge<"llvm_ld", "pic16passes">,
+    Edge<"llvm_ld_optimizer", "pic16passes">,
+    Edge<"pic16passes", "llc">,
     Edge<"llc", "gpasm">,
     Edge<"gpasm", "mplink">
 ]>;
diff --git a/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PluginMain.cpp b/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PluginMain.cpp
index f8492ed..a6d2ff6 100644
--- a/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PluginMain.cpp
+++ b/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PluginMain.cpp
@@ -10,11 +10,13 @@ namespace llvmc {
 }
 
 // Returns the platform specific directory separator via #ifdefs.
+// FIXME: This currently works on Linux and Windows only; it does not
+// work on other Unices.
 static std::string GetDirSeparator() {
-#ifdef _WIN32
-  return "\\";
-#else
+#if __linux__ || __APPLE__
   return "/";
+#else
+  return "\\";
 #endif
 }
 
diff --git a/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in b/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in
index 4b964e3..c26a567 100644
--- a/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in
+++ b/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in
@@ -1,4 +1,4 @@
-//===- Base.td - LLVMC2 toolchain descriptions -------------*- tablegen -*-===//
+//===- Base.td - LLVMC toolchain descriptions --------------*- tablegen -*-===//
 //
 //                     The LLVM Compiler Infrastructure
 //
@@ -7,7 +7,7 @@
 //
 //===----------------------------------------------------------------------===//
 //
-// This file contains compilation graph description used by llvmc2.
+// This file contains compilation graph description used by llvmc.
 //
 //===----------------------------------------------------------------------===//
 
@@ -24,6 +24,14 @@ def OptList : OptionList<[
     (help "Stop after checking the input for syntax errors")),
  (switch_option "opt",
     (help "Enable opt")),
+ (switch_option "O0",
+    (help "Turn off optimization")),
+ (switch_option "O1",
+    (help "Optimization level 1")),
+ (switch_option "O2",
+    (help "Optimization level 2")),
+ (switch_option "O3",
+    (help "Optimization level 3")),
  (switch_option "S",
     (help "Stop after compilation, do not assemble")),
  (switch_option "c",
@@ -32,10 +40,17 @@ def OptList : OptionList<[
     (help "Enable threads")),
  (parameter_option "linker",
     (help "Choose linker (possible values: gcc, g++)")),
+ (parameter_option "MF",
+    (help "Specify a file to write dependencies to"), (hidden)),
+ (parameter_option "MT",
+    (help "Change the name of the rule emitted by dependency generation"),
+    (hidden)),
  (parameter_list_option "include",
     (help "Include the named file prior to preprocessing")),
  (prefix_list_option "I",
     (help "Add a directory to include path")),
+ (prefix_list_option "D",
+    (help "Define a macro")),
  (prefix_list_option "Wa,",
     (help "Pass options to assembler")),
  (prefix_list_option "Wllc,",
@@ -50,6 +65,18 @@ def OptList : OptionList<[
     (help "Pass options to opt"))
 ]>;
 
+// Option preprocessor.
+
+def Preprocess : OptionPreprocessor<
+(case (and (switch_on "O3"), (any_switch_on ["O0", "O1", "O2"])),
+           (unset_option ["O0", "O1", "O2"]),
+      (and (switch_on "O2"), (any_switch_on ["O0", "O1"])),
+           (unset_option ["O0", "O1"]),
+      (and (switch_on "O1"), (switch_on "O0")),
+           (unset_option "O0"))
+>;
+
+
 // Tools
 
 class llvm_gcc_based <string cmd_prefix, string in_lang, string E_ext> : Tool<
@@ -78,13 +105,20 @@ class llvm_gcc_based <string cmd_prefix, string in_lang, string E_ext> : Tool<
          (and (switch_on "emit-llvm"), (switch_on "c")), (stop_compilation),
          (switch_on "fsyntax-only"), (stop_compilation),
          (not_empty "include"), (forward "include"),
-         (not_empty "I"), (forward "I"))),
+         (not_empty "I"), (forward "I"),
+         (not_empty "D"), (forward "D"),
+         (switch_on "O1"), (forward "O1"),
+         (switch_on "O2"), (forward "O2"),
+         (switch_on "O3"), (forward "O3"),
+         (not_empty "MF"), (forward "MF"),
+         (not_empty "MT"), (forward "MT"))),
  (sink)
 ]>;
 
 def llvm_gcc_c : llvm_gcc_based<"@LLVMGCCCOMMAND@ -x c", "c", "i">;
 def llvm_gcc_cpp : llvm_gcc_based<"@LLVMGXXCOMMAND@ -x c++", "c++", "i">;
-def llvm_gcc_m : llvm_gcc_based<"@LLVMGCCCOMMAND@ -x objective-c", "objective-c", "mi">;
+def llvm_gcc_m : llvm_gcc_based<"@LLVMGCCCOMMAND@ -x objective-c",
+                                                  "objective-c", "mi">;
 def llvm_gcc_mxx : llvm_gcc_based<"@LLVMGCCCOMMAND@ -x objective-c++",
                                   "objective-c++", "mi">;
 
@@ -92,7 +126,10 @@ def opt : Tool<
 [(in_language "llvm-bitcode"),
  (out_language "llvm-bitcode"),
  (output_suffix "bc"),
- (actions (case (not_empty "Wo,"), (unpack_values "Wo,"))),
+ (actions (case (not_empty "Wo,"), (unpack_values "Wo,"),
+                (switch_on "O1"), (forward "O1"),
+                (switch_on "O2"), (forward "O2"),
+                (switch_on "O3"), (forward "O3"))),
  (cmd_line "opt -f $INFILE -o $OUTFILE")
 ]>;
 
@@ -100,7 +137,8 @@ def llvm_as : Tool<
 [(in_language "llvm-assembler"),
  (out_language "llvm-bitcode"),
  (output_suffix "bc"),
- (cmd_line "llvm-as $INFILE -o $OUTFILE")
+ (cmd_line "llvm-as $INFILE -o $OUTFILE"),
+ (actions (case (switch_on "emit-llvm"), (stop_compilation)))
 ]>;
 
 def llvm_gcc_assembler : Tool<
@@ -114,12 +152,16 @@ def llvm_gcc_assembler : Tool<
 ]>;
 
 def llc : Tool<
-[(in_language "llvm-bitcode"),
+[(in_language ["llvm-bitcode", "llvm-assembler"]),
  (out_language "assembler"),
  (output_suffix "s"),
  (cmd_line "llc -f $INFILE -o $OUTFILE"),
  (actions (case
           (switch_on "S"), (stop_compilation),
+          (switch_on "O0"), (forward "O0"),
+          (switch_on "O1"), (forward "O1"),
+          (switch_on "O2"), (forward "O2"),
+          (switch_on "O3"), (forward "O3"),
           (not_empty "Wllc,"), (unpack_values "Wllc,")))
 ]>;
 
@@ -134,7 +176,7 @@ class llvm_gcc_based_linker <string cmd_prefix> : Tool<
           (switch_on "pthread"), (append_cmd "-lpthread"),
           (not_empty "L"), (forward "L"),
           (not_empty "l"), (forward "l"),
-          (not_empty "Wl,"), (unpack_values "Wl,")))
+          (not_empty "Wl,"), (forward "Wl,")))
 ]>;
 
 // Default linker
@@ -167,7 +209,6 @@ def CompilationGraph : CompilationGraph<[
     Edge<"root", "llvm_gcc_cpp">,
     Edge<"root", "llvm_gcc_m">,
     Edge<"root", "llvm_gcc_mxx">,
-    Edge<"root", "llvm_as">,
     Edge<"root", "llc">,
 
     Edge<"llvm_gcc_c", "llc">,
@@ -176,6 +217,8 @@ def CompilationGraph : CompilationGraph<[
     Edge<"llvm_gcc_mxx", "llc">,
     Edge<"llvm_as", "llc">,
 
+    OptionalEdge<"root", "llvm_as",
+                         (case (switch_on "emit-llvm"), (inc_weight))>,
     OptionalEdge<"llvm_gcc_c", "opt", (case (switch_on "opt"), (inc_weight))>,
     OptionalEdge<"llvm_gcc_cpp", "opt", (case (switch_on "opt"), (inc_weight))>,
     OptionalEdge<"llvm_gcc_m", "opt", (case (switch_on "opt"), (inc_weight))>,
diff --git a/libclamav/c++/llvm/unittests/ADT/APIntTest.cpp b/libclamav/c++/llvm/unittests/ADT/APIntTest.cpp
index 24a3d9b..0b13aa4 100644
--- a/libclamav/c++/llvm/unittests/ADT/APIntTest.cpp
+++ b/libclamav/c++/llvm/unittests/ADT/APIntTest.cpp
@@ -315,6 +315,17 @@ TEST(APIntTest, StringBitsNeeded16) {
   EXPECT_EQ(9U, APInt::getBitsNeeded("-20", 16));
 }
 
+TEST(APIntTest, Log2) {
+  EXPECT_EQ(APInt(15, 7).logBase2(), 2U);
+  EXPECT_EQ(APInt(15, 7).ceilLogBase2(), 3U);
+  EXPECT_EQ(APInt(15, 7).exactLogBase2(), -1);
+  EXPECT_EQ(APInt(15, 8).logBase2(), 3U);
+  EXPECT_EQ(APInt(15, 8).ceilLogBase2(), 3U);
+  EXPECT_EQ(APInt(15, 8).exactLogBase2(), 3);
+  EXPECT_EQ(APInt(15, 9).logBase2(), 3U);
+  EXPECT_EQ(APInt(15, 9).ceilLogBase2(), 4U);
+  EXPECT_EQ(APInt(15, 9).exactLogBase2(), -1);
+}
 
 #ifdef GTEST_HAS_DEATH_TEST
 TEST(APIntTest, StringDeath) {
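
The expected values follow from the usual definitions: logBase2 is
floor(log2 x), ceilLogBase2 is ceil(log2 x), and exactLogBase2 is log2 x
when x is a power of two and -1 otherwise; hence 7 -> 2/3/-1, 8 -> 3/3/3,
9 -> 3/4/-1. A plain-unsigned sketch of those semantics (assumed behaviour,
not APInt's implementation):

    #include <cassert>

    int floorLog2(unsigned X) { int N = -1; while (X) { X >>= 1; ++N; } return N; }
    int ceilLog2(unsigned X)  { return floorLog2(X) + ((X & (X - 1)) != 0); }
    int exactLog2(unsigned X) { return (X && !(X & (X - 1))) ? floorLog2(X) : -1; }

    int main() {
      assert(floorLog2(7) == 2 && ceilLog2(7) == 3 && exactLog2(7) == -1);
      assert(floorLog2(8) == 3 && ceilLog2(8) == 3 && exactLog2(8) == 3);
      assert(floorLog2(9) == 3 && ceilLog2(9) == 4 && exactLog2(9) == -1);
    }
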
diff --git a/libclamav/c++/llvm/unittests/ADT/DenseMapTest.cpp b/libclamav/c++/llvm/unittests/ADT/DenseMapTest.cpp
index 15a5379..afac651 100644
--- a/libclamav/c++/llvm/unittests/ADT/DenseMapTest.cpp
+++ b/libclamav/c++/llvm/unittests/ADT/DenseMapTest.cpp
@@ -164,4 +164,16 @@ TEST_F(DenseMapTest, IterationTest) {
   }
 }
 
+// const_iterator test
+TEST_F(DenseMapTest, ConstIteratorTest) {
+  // Check conversion from iterator to const_iterator.
+  DenseMap<uint32_t, uint32_t>::iterator it = uintMap.begin();
+  DenseMap<uint32_t, uint32_t>::const_iterator cit(it);
+  EXPECT_TRUE(it == cit);
+
+  // Check copying of const_iterators.
+  DenseMap<uint32_t, uint32_t>::const_iterator cit2(cit);
+  EXPECT_TRUE(cit == cit2);
+}
+
 }
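
The point of the new conversion: code holding a mutable iterator can hand
it to read-only code expecting a const_iterator, just as with the standard
containers. A minimal sketch (lookupOr0 is a hypothetical helper):

    #include "llvm/ADT/DenseMap.h"
    using namespace llvm;

    unsigned lookupOr0(DenseMap<unsigned, unsigned> &M, unsigned K) {
      DenseMap<unsigned, unsigned>::iterator I = M.find(K);
      if (I == M.end()) return 0;
      DenseMap<unsigned, unsigned>::const_iterator CI(I); // the new conversion
      return CI->second;            // read-only access through the const view
    }
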
diff --git a/libclamav/c++/llvm/unittests/ADT/StringMapTest.cpp b/libclamav/c++/llvm/unittests/ADT/StringMapTest.cpp
index 8ee166b..3dcdc39 100644
--- a/libclamav/c++/llvm/unittests/ADT/StringMapTest.cpp
+++ b/libclamav/c++/llvm/unittests/ADT/StringMapTest.cpp
@@ -9,7 +9,7 @@
 
 #include "gtest/gtest.h"
 #include "llvm/ADT/StringMap.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 using namespace llvm;
 
 namespace {
diff --git a/libclamav/c++/llvm/unittests/ADT/StringRefTest.cpp b/libclamav/c++/llvm/unittests/ADT/StringRefTest.cpp
index cdc476e..dfa208a 100644
--- a/libclamav/c++/llvm/unittests/ADT/StringRefTest.cpp
+++ b/libclamav/c++/llvm/unittests/ADT/StringRefTest.cpp
@@ -9,6 +9,7 @@
 
 #include "gtest/gtest.h"
 #include "llvm/ADT/StringRef.h"
+#include "llvm/ADT/SmallVector.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
@@ -110,6 +111,86 @@ TEST(StringRefTest, Split) {
             Str.rsplit('o'));
 }
 
+TEST(StringRefTest, Split2) {
+  SmallVector<StringRef, 5> parts;
+  SmallVector<StringRef, 5> expected;
+
+  expected.push_back("ab"); expected.push_back("c");
+  StringRef(",ab,,c,").split(parts, ",", -1, false);
+  EXPECT_TRUE(parts == expected);
+
+  expected.clear(); parts.clear();
+  expected.push_back(""); expected.push_back("ab"); expected.push_back("");
+  expected.push_back("c"); expected.push_back("");
+  StringRef(",ab,,c,").split(parts, ",", -1, true);
+  EXPECT_TRUE(parts == expected);
+
+  expected.clear(); parts.clear();
+  expected.push_back("");
+  StringRef("").split(parts, ",", -1, true);
+  EXPECT_TRUE(parts == expected);
+
+  expected.clear(); parts.clear();
+  StringRef("").split(parts, ",", -1, false);
+  EXPECT_TRUE(parts == expected);
+
+  expected.clear(); parts.clear();
+  StringRef(",").split(parts, ",", -1, false);
+  EXPECT_TRUE(parts == expected);
+
+  expected.clear(); parts.clear();
+  expected.push_back(""); expected.push_back("");
+  StringRef(",").split(parts, ",", -1, true);
+  EXPECT_TRUE(parts == expected);
+
+  expected.clear(); parts.clear();
+  expected.push_back("a"); expected.push_back("b");
+  StringRef("a,b").split(parts, ",", -1, true);
+  EXPECT_TRUE(parts == expected);
+
+  // Test MaxSplit
+  expected.clear(); parts.clear();
+  expected.push_back("a,,b,c");
+  StringRef("a,,b,c").split(parts, ",", 0, true);
+  EXPECT_TRUE(parts == expected);
+
+  expected.clear(); parts.clear();
+  expected.push_back("a,,b,c");
+  StringRef("a,,b,c").split(parts, ",", 0, false);
+  EXPECT_TRUE(parts == expected);
+
+  expected.clear(); parts.clear();
+  expected.push_back("a"); expected.push_back(",b,c");
+  StringRef("a,,b,c").split(parts, ",", 1, true);
+  EXPECT_TRUE(parts == expected);
+
+  expected.clear(); parts.clear();
+  expected.push_back("a"); expected.push_back(",b,c");
+  StringRef("a,,b,c").split(parts, ",", 1, false);
+  EXPECT_TRUE(parts == expected);
+
+  expected.clear(); parts.clear();
+  expected.push_back("a"); expected.push_back(""); expected.push_back("b,c");
+  StringRef("a,,b,c").split(parts, ",", 2, true);
+  EXPECT_TRUE(parts == expected);
+
+  expected.clear(); parts.clear();
+  expected.push_back("a"); expected.push_back("b,c");
+  StringRef("a,,b,c").split(parts, ",", 2, false);
+  EXPECT_TRUE(parts == expected);
+
+  expected.clear(); parts.clear();
+  expected.push_back("a"); expected.push_back(""); expected.push_back("b");
+  expected.push_back("c");
+  StringRef("a,,b,c").split(parts, ",", 3, true);
+  EXPECT_TRUE(parts == expected);
+
+  expected.clear(); parts.clear();
+  expected.push_back("a"); expected.push_back("b"); expected.push_back("c");
+  StringRef("a,,b,c").split(parts, ",", 3, false);
+  EXPECT_TRUE(parts == expected);
+}
+
 TEST(StringRefTest, StartsWith) {
   StringRef Str("hello");
   EXPECT_TRUE(Str.startswith("he"));
@@ -125,6 +206,8 @@ TEST(StringRefTest, Find) {
   EXPECT_EQ(0U, Str.find("hello"));
   EXPECT_EQ(1U, Str.find("ello"));
   EXPECT_EQ(StringRef::npos, Str.find("zz"));
+  EXPECT_EQ(2U, Str.find("ll", 2));
+  EXPECT_EQ(StringRef::npos, Str.find("ll", 3));
 
   EXPECT_EQ(3U, Str.rfind('l'));
   EXPECT_EQ(StringRef::npos, Str.rfind('z'));
@@ -132,6 +215,14 @@ TEST(StringRefTest, Find) {
   EXPECT_EQ(0U, Str.rfind("hello"));
   EXPECT_EQ(1U, Str.rfind("ello"));
   EXPECT_EQ(StringRef::npos, Str.rfind("zz"));
+
+  EXPECT_EQ(2U, Str.find_first_of('l'));
+  EXPECT_EQ(1U, Str.find_first_of("el"));
+  EXPECT_EQ(StringRef::npos, Str.find_first_of("xyz"));
+
+  EXPECT_EQ(1U, Str.find_first_not_of('h'));
+  EXPECT_EQ(4U, Str.find_first_not_of("hel"));
+  EXPECT_EQ(StringRef::npos, Str.find_first_not_of("hello"));
 }
 
 TEST(StringRefTest, Count) {
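
The contract the Split2 tests nail down, assuming the signature used there,
split(Result, Separator, MaxSplit = -1, KeepEmpty = true): at most MaxSplit
cuts are made (-1 means unlimited), and KeepEmpty decides whether empty
pieces are appended. Condensed to three representative cases:

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/ADT/StringRef.h"
    using namespace llvm;

    void splitDemo() {
      SmallVector<StringRef, 4> P;
      StringRef("a,,b,c").split(P, ",", 1, true);   // {"a", ",b,c"}
      P.clear();
      StringRef("a,,b,c").split(P, ",", -1, false); // {"a", "b", "c"}
      P.clear();
      StringRef(",").split(P, ",", -1, true);       // {"", ""}
    }
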
diff --git a/libclamav/c++/llvm/unittests/ADT/ValueMapTest.cpp b/libclamav/c++/llvm/unittests/ADT/ValueMapTest.cpp
new file mode 100644
index 0000000..451e30a
--- /dev/null
+++ b/libclamav/c++/llvm/unittests/ADT/ValueMapTest.cpp
@@ -0,0 +1,294 @@
+//===- llvm/unittest/ADT/ValueMapTest.cpp - ValueMap unit tests -*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/ADT/ValueMap.h"
+#include "llvm/Instructions.h"
+#include "llvm/LLVMContext.h"
+#include "llvm/ADT/OwningPtr.h"
+#include "llvm/Config/config.h"
+
+#include "gtest/gtest.h"
+
+using namespace llvm;
+
+namespace {
+
+// Test fixture
+template<typename T>
+class ValueMapTest : public testing::Test {
+protected:
+  Constant *ConstantV;
+  OwningPtr<BitCastInst> BitcastV;
+  OwningPtr<BinaryOperator> AddV;
+
+  ValueMapTest() :
+    ConstantV(ConstantInt::get(Type::getInt32Ty(getGlobalContext()), 0)),
+    BitcastV(new BitCastInst(ConstantV, Type::getInt32Ty(getGlobalContext()))),
+    AddV(BinaryOperator::CreateAdd(ConstantV, ConstantV)) {
+  }
+};
+
+// Run everything on Value*, a subtype to make sure that casting works as
+// expected, and a const subtype to make sure we cast const correctly.
+typedef ::testing::Types<Value, Instruction, const Instruction> KeyTypes;
+TYPED_TEST_CASE(ValueMapTest, KeyTypes);
+
+TYPED_TEST(ValueMapTest, Null) {
+  ValueMap<TypeParam*, int> VM1;
+  VM1[NULL] = 7;
+  EXPECT_EQ(7, VM1.lookup(NULL));
+}
+
+TYPED_TEST(ValueMapTest, FollowsValue) {
+  ValueMap<TypeParam*, int> VM;
+  VM[this->BitcastV.get()] = 7;
+  EXPECT_EQ(7, VM.lookup(this->BitcastV.get()));
+  EXPECT_EQ(0, VM.count(this->AddV.get()));
+  this->BitcastV->replaceAllUsesWith(this->AddV.get());
+  EXPECT_EQ(7, VM.lookup(this->AddV.get()));
+  EXPECT_EQ(0, VM.count(this->BitcastV.get()));
+  this->AddV.reset();
+  EXPECT_EQ(0, VM.count(this->AddV.get()));
+  EXPECT_EQ(0, VM.count(this->BitcastV.get()));
+  EXPECT_EQ(0U, VM.size());
+}
+
+TYPED_TEST(ValueMapTest, OperationsWork) {
+  ValueMap<TypeParam*, int> VM;
+  ValueMap<TypeParam*, int> VM2(16);
+  typename ValueMapConfig<TypeParam*>::ExtraData Data;
+  ValueMap<TypeParam*, int> VM3(Data, 16);
+  EXPECT_TRUE(VM.empty());
+
+  VM[this->BitcastV.get()] = 7;
+
+  // Find:
+  typename ValueMap<TypeParam*, int>::iterator I =
+    VM.find(this->BitcastV.get());
+  ASSERT_TRUE(I != VM.end());
+  EXPECT_EQ(this->BitcastV.get(), I->first);
+  EXPECT_EQ(7, I->second);
+  EXPECT_TRUE(VM.find(this->AddV.get()) == VM.end());
+
+  // Const find:
+  const ValueMap<TypeParam*, int> &CVM = VM;
+  typename ValueMap<TypeParam*, int>::const_iterator CI =
+    CVM.find(this->BitcastV.get());
+  ASSERT_TRUE(CI != CVM.end());
+  EXPECT_EQ(this->BitcastV.get(), CI->first);
+  EXPECT_EQ(7, CI->second);
+  EXPECT_TRUE(CVM.find(this->AddV.get()) == CVM.end());
+
+  // Insert:
+  std::pair<typename ValueMap<TypeParam*, int>::iterator, bool> InsertResult1 =
+    VM.insert(std::make_pair(this->AddV.get(), 3));
+  EXPECT_EQ(this->AddV.get(), InsertResult1.first->first);
+  EXPECT_EQ(3, InsertResult1.first->second);
+  EXPECT_TRUE(InsertResult1.second);
+  EXPECT_EQ(true, VM.count(this->AddV.get()));
+  std::pair<typename ValueMap<TypeParam*, int>::iterator, bool> InsertResult2 =
+    VM.insert(std::make_pair(this->AddV.get(), 5));
+  EXPECT_EQ(this->AddV.get(), InsertResult2.first->first);
+  EXPECT_EQ(3, InsertResult2.first->second);
+  EXPECT_FALSE(InsertResult2.second);
+
+  // Erase:
+  VM.erase(InsertResult2.first);
+  EXPECT_EQ(false, VM.count(this->AddV.get()));
+  EXPECT_EQ(true, VM.count(this->BitcastV.get()));
+  VM.erase(this->BitcastV.get());
+  EXPECT_EQ(false, VM.count(this->BitcastV.get()));
+  EXPECT_EQ(0U, VM.size());
+
+  // Range insert:
+  SmallVector<std::pair<Instruction*, int>, 2> Elems;
+  Elems.push_back(std::make_pair(this->AddV.get(), 1));
+  Elems.push_back(std::make_pair(this->BitcastV.get(), 2));
+  VM.insert(Elems.begin(), Elems.end());
+  EXPECT_EQ(1, VM.lookup(this->AddV.get()));
+  EXPECT_EQ(2, VM.lookup(this->BitcastV.get()));
+}
+
+template<typename ExpectedType, typename VarType>
+void CompileAssertHasType(VarType) {
+  typedef char assert[is_same<ExpectedType, VarType>::value ? 1 : -1];
+}
+
+TYPED_TEST(ValueMapTest, Iteration) {
+  ValueMap<TypeParam*, int> VM;
+  VM[this->BitcastV.get()] = 2;
+  VM[this->AddV.get()] = 3;
+  size_t size = 0;
+  for (typename ValueMap<TypeParam*, int>::iterator I = VM.begin(), E = VM.end();
+       I != E; ++I) {
+    ++size;
+    std::pair<TypeParam*, int> value = *I;
+    CompileAssertHasType<TypeParam*>(I->first);
+    if (I->second == 2) {
+      EXPECT_EQ(this->BitcastV.get(), I->first);
+      I->second = 5;
+    } else if (I->second == 3) {
+      EXPECT_EQ(this->AddV.get(), I->first);
+      I->second = 6;
+    } else {
+      ADD_FAILURE() << "Iterated through an extra value.";
+    }
+  }
+  EXPECT_EQ(2U, size);
+  EXPECT_EQ(5, VM[this->BitcastV.get()]);
+  EXPECT_EQ(6, VM[this->AddV.get()]);
+
+  size = 0;
+  // Cast to const ValueMap to avoid a bug in DenseMap's iterators.
+  const ValueMap<TypeParam*, int>& CVM = VM;
+  for (typename ValueMap<TypeParam*, int>::const_iterator I = CVM.begin(),
+         E = CVM.end(); I != E; ++I) {
+    ++size;
+    std::pair<TypeParam*, int> value = *I;
+    CompileAssertHasType<TypeParam*>(I->first);
+    if (I->second == 5) {
+      EXPECT_EQ(this->BitcastV.get(), I->first);
+    } else if (I->second == 6) {
+      EXPECT_EQ(this->AddV.get(), I->first);
+    } else {
+      ADD_FAILURE() << "Iterated through an extra value.";
+    }
+  }
+  EXPECT_EQ(2U, size);
+}
+
+TYPED_TEST(ValueMapTest, DefaultCollisionBehavior) {
+  // By default, when a RAUW makes two keys collide, the entry for the
+  // replaced key is dropped and the existing entry for the new key wins.
+  ValueMap<TypeParam*, int> VM;
+  VM[this->BitcastV.get()] = 7;
+  VM[this->AddV.get()] = 9;
+  this->BitcastV->replaceAllUsesWith(this->AddV.get());
+  EXPECT_EQ(0, VM.count(this->BitcastV.get()));
+  EXPECT_EQ(9, VM.lookup(this->AddV.get()));
+}
+
+TYPED_TEST(ValueMapTest, ConfiguredCollisionBehavior) {
+  // TODO: Implement this when someone needs it.
+}
+
+template<typename KeyT>
+struct LockMutex : ValueMapConfig<KeyT> {
+  struct ExtraData {
+    sys::Mutex *M;
+    bool *CalledRAUW;
+    bool *CalledDeleted;
+  };
+  static void onRAUW(const ExtraData &Data, KeyT Old, KeyT New) {
+    *Data.CalledRAUW = true;
+    EXPECT_FALSE(Data.M->tryacquire()) << "Mutex should already be locked.";
+  }
+  static void onDelete(const ExtraData &Data, KeyT Old) {
+    *Data.CalledDeleted = true;
+    EXPECT_FALSE(Data.M->tryacquire()) << "Mutex should already be locked.";
+  }
+  static sys::Mutex *getMutex(const ExtraData &Data) { return Data.M; }
+};
+#if ENABLE_THREADS
+TYPED_TEST(ValueMapTest, LocksMutex) {
+  sys::Mutex M(false);  // Not recursive.
+  bool CalledRAUW = false, CalledDeleted = false;
+  typename LockMutex<TypeParam*>::ExtraData Data =
+    {&M, &CalledRAUW, &CalledDeleted};
+  ValueMap<TypeParam*, int, LockMutex<TypeParam*> > VM(Data);
+  VM[this->BitcastV.get()] = 7;
+  this->BitcastV->replaceAllUsesWith(this->AddV.get());
+  this->AddV.reset();
+  EXPECT_TRUE(CalledRAUW);
+  EXPECT_TRUE(CalledDeleted);
+}
+#endif
+
+template<typename KeyT>
+struct NoFollow : ValueMapConfig<KeyT> {
+  enum { FollowRAUW = false };
+};
+
+TYPED_TEST(ValueMapTest, NoFollowRAUW) {
+  ValueMap<TypeParam*, int, NoFollow<TypeParam*> > VM;
+  VM[this->BitcastV.get()] = 7;
+  EXPECT_EQ(7, VM.lookup(this->BitcastV.get()));
+  EXPECT_EQ(0, VM.count(this->AddV.get()));
+  this->BitcastV->replaceAllUsesWith(this->AddV.get());
+  EXPECT_EQ(7, VM.lookup(this->BitcastV.get()));
+  EXPECT_EQ(0, VM.lookup(this->AddV.get()));
+  this->AddV.reset();
+  EXPECT_EQ(7, VM.lookup(this->BitcastV.get()));
+  EXPECT_EQ(0, VM.lookup(this->AddV.get()));
+  this->BitcastV.reset();
+  EXPECT_EQ(0, VM.lookup(this->BitcastV.get()));
+  EXPECT_EQ(0, VM.lookup(this->AddV.get()));
+  EXPECT_EQ(0U, VM.size());
+}
+
+template<typename KeyT>
+struct CountOps : ValueMapConfig<KeyT> {
+  struct ExtraData {
+    int *Deletions;
+    int *RAUWs;
+  };
+
+  static void onRAUW(const ExtraData &Data, KeyT Old, KeyT New) {
+    ++*Data.RAUWs;
+  }
+  static void onDelete(const ExtraData &Data, KeyT Old) {
+    ++*Data.Deletions;
+  }
+};
+
+TYPED_TEST(ValueMapTest, CallsConfig) {
+  int Deletions = 0, RAUWs = 0;
+  typename CountOps<TypeParam*>::ExtraData Data = {&Deletions, &RAUWs};
+  ValueMap<TypeParam*, int, CountOps<TypeParam*> > VM(Data);
+  VM[this->BitcastV.get()] = 7;
+  this->BitcastV->replaceAllUsesWith(this->AddV.get());
+  EXPECT_EQ(0, Deletions);
+  EXPECT_EQ(1, RAUWs);
+  this->AddV.reset();
+  EXPECT_EQ(1, Deletions);
+  EXPECT_EQ(1, RAUWs);
+  this->BitcastV.reset();
+  EXPECT_EQ(1, Deletions);
+  EXPECT_EQ(1, RAUWs);
+}
+
+template<typename KeyT>
+struct ModifyingConfig : ValueMapConfig<KeyT> {
+  // We'll put a pointer here back to the ValueMap this key is in, so
+  // that we can modify it (and clobber *this) before the ValueMap
+  // tries to do the same modification.  In previous versions of
+  // ValueMap, that exploded.
+  typedef ValueMap<KeyT, int, ModifyingConfig<KeyT> > **ExtraData;
+
+  static void onRAUW(ExtraData Map, KeyT Old, KeyT New) {
+    (*Map)->erase(Old);
+  }
+  static void onDelete(ExtraData Map, KeyT Old) {
+    (*Map)->erase(Old);
+  }
+};
+TYPED_TEST(ValueMapTest, SurvivesModificationByConfig) {
+  ValueMap<TypeParam*, int, ModifyingConfig<TypeParam*> > *MapAddress;
+  ValueMap<TypeParam*, int, ModifyingConfig<TypeParam*> > VM(&MapAddress);
+  MapAddress = &VM;
+  // Now the ModifyingConfig can modify the Map inside a callback.
+  VM[this->BitcastV.get()] = 7;
+  this->BitcastV->replaceAllUsesWith(this->AddV.get());
+  EXPECT_FALSE(VM.count(this->BitcastV.get()));
+  EXPECT_FALSE(VM.count(this->AddV.get()));
+  VM[this->AddV.get()] = 7;
+  this->AddV.reset();
+  EXPECT_FALSE(VM.count(this->AddV.get()));
+}
+
+}
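
For orientation: every config-driven test above instantiates the same
pattern - a policy struct derived from ValueMapConfig supplying hooks
(onRAUW, onDelete, getMutex) plus an ExtraData payload threaded through to
them.  A minimal client-side sketch, with hypothetical names and not part
of this commit:

    // Counts deletions observed by the map.  DeletionCounter is an
    // illustrative name; the hook signature matches ValueMapConfig.
    template<typename KeyT>
    struct DeletionCounter : llvm::ValueMapConfig<KeyT> {
      struct ExtraData { int *Count; };
      static void onDelete(const ExtraData &D, KeyT Old) { ++*D.Count; }
    };
    // Usage:
    //   int Deleted = 0;
    //   DeletionCounter<llvm::Value*>::ExtraData Data = { &Deleted };
    //   llvm::ValueMap<llvm::Value*, int,
    //                  DeletionCounter<llvm::Value*> > VM(Data);
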
diff --git a/libclamav/c++/llvm/unittests/ExecutionEngine/ExecutionEngineTest.cpp b/libclamav/c++/llvm/unittests/ExecutionEngine/ExecutionEngineTest.cpp
index 2106e86..904ee2b 100644
--- a/libclamav/c++/llvm/unittests/ExecutionEngine/ExecutionEngineTest.cpp
+++ b/libclamav/c++/llvm/unittests/ExecutionEngine/ExecutionEngineTest.cpp
@@ -93,4 +93,37 @@ TEST_F(ExecutionEngineTest, ReverseGlobalMapping) {
     << " now-free address.";
 }
 
+TEST_F(ExecutionEngineTest, ClearModuleMappings) {
+  GlobalVariable *G1 =
+      NewExtGlobal(Type::getInt32Ty(getGlobalContext()), "Global1");
+
+  int32_t Mem1 = 3;
+  Engine->addGlobalMapping(G1, &Mem1);
+  EXPECT_EQ(G1, Engine->getGlobalValueAtAddress(&Mem1));
+
+  Engine->clearGlobalMappingsFromModule(M);
+
+  EXPECT_EQ(NULL, Engine->getGlobalValueAtAddress(&Mem1));
+
+  GlobalVariable *G2 =
+      NewExtGlobal(Type::getInt32Ty(getGlobalContext()), "Global2");
+  // After clearing the module mappings, we can assign a new GV to the
+  // same address.
+  Engine->addGlobalMapping(G2, &Mem1);
+  EXPECT_EQ(G2, Engine->getGlobalValueAtAddress(&Mem1));
+}
+
+TEST_F(ExecutionEngineTest, DestructionRemovesGlobalMapping) {
+  GlobalVariable *G1 =
+    NewExtGlobal(Type::getInt32Ty(getGlobalContext()), "Global1");
+  int32_t Mem1 = 3;
+  Engine->addGlobalMapping(G1, &Mem1);
+  // Make sure the reverse mapping is enabled.
+  EXPECT_EQ(G1, Engine->getGlobalValueAtAddress(&Mem1));
+  // When the GV goes away, the ExecutionEngine should remove any
+  // mappings that refer to it.
+  G1->eraseFromParent();
+  EXPECT_EQ(NULL, Engine->getGlobalValueAtAddress(&Mem1));
+}
+
 }
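
The two tests above pin down the global-mapping contract: addGlobalMapping
installs both the forward and the reverse mapping,
clearGlobalMappingsFromModule drops every mapping owned by a module, and
erasing a GlobalValue removes its mapping automatically.  Condensed into a
sketch using the same calls and names as the tests:

    int32_t Mem1 = 3;
    Engine->addGlobalMapping(G1, &Mem1);                  // forward + reverse
    assert(Engine->getGlobalValueAtAddress(&Mem1) == G1); // reverse lookup
    Engine->clearGlobalMappingsFromModule(M);             // both directions gone
    assert(Engine->getGlobalValueAtAddress(&Mem1) == NULL);
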
diff --git a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITEventListenerTest.cpp b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITEventListenerTest.cpp
index 87e3280..dda86fb 100644
--- a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITEventListenerTest.cpp
+++ b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITEventListenerTest.cpp
@@ -37,7 +37,6 @@ struct FunctionEmittedEvent {
 };
 struct FunctionFreedEvent {
   unsigned Index;
-  const Function *F;
   void *Code;
 };
 
@@ -56,8 +55,8 @@ struct RecordingJITEventListener : public JITEventListener {
     EmittedEvents.push_back(Event);
   }
 
-  virtual void NotifyFreeingMachineCode(const Function &F, void *OldPtr) {
-    FunctionFreedEvent Event = {NextIndex++, &F, OldPtr};
+  virtual void NotifyFreeingMachineCode(void *OldPtr) {
+    FunctionFreedEvent Event = {NextIndex++, OldPtr};
     FreedEvents.push_back(Event);
   }
 };
@@ -116,11 +115,9 @@ TEST_F(JITEventListenerTest, Simple) {
       << " contain some bytes.";
 
   EXPECT_EQ(2U, Listener.FreedEvents[0].Index);
-  EXPECT_EQ(F1, Listener.FreedEvents[0].F);
   EXPECT_EQ(F1_addr, Listener.FreedEvents[0].Code);
 
   EXPECT_EQ(3U, Listener.FreedEvents[1].Index);
-  EXPECT_EQ(F2, Listener.FreedEvents[1].F);
   EXPECT_EQ(F2_addr, Listener.FreedEvents[1].Code);
 
   F1->eraseFromParent();
@@ -164,7 +161,6 @@ TEST_F(JITEventListenerTest, MultipleListenersDontInterfere) {
       << " contain some bytes.";
 
   EXPECT_EQ(1U, Listener1.FreedEvents[0].Index);
-  EXPECT_EQ(F2, Listener1.FreedEvents[0].F);
   EXPECT_EQ(F2_addr, Listener1.FreedEvents[0].Code);
 
   // Listener 2.
@@ -186,7 +182,6 @@ TEST_F(JITEventListenerTest, MultipleListenersDontInterfere) {
       << " contain some bytes.";
 
   EXPECT_EQ(2U, Listener2.FreedEvents[0].Index);
-  EXPECT_EQ(F2, Listener2.FreedEvents[0].F);
   EXPECT_EQ(F2_addr, Listener2.FreedEvents[0].Code);
 
   // Listener 3.
@@ -201,7 +196,6 @@ TEST_F(JITEventListenerTest, MultipleListenersDontInterfere) {
       << " contain some bytes.";
 
   EXPECT_EQ(1U, Listener3.FreedEvents[0].Index);
-  EXPECT_EQ(F2, Listener3.FreedEvents[0].F);
   EXPECT_EQ(F2_addr, Listener3.FreedEvents[0].Code);
 
   F1->eraseFromParent();
@@ -228,7 +222,6 @@ TEST_F(JITEventListenerTest, MatchesMachineCodeInfo) {
   EXPECT_EQ(MCI.size(), Listener.EmittedEvents[0].Size);
 
   EXPECT_EQ(1U, Listener.FreedEvents[0].Index);
-  EXPECT_EQ(F, Listener.FreedEvents[0].F);
   EXPECT_EQ(F_addr, Listener.FreedEvents[0].Code);
 }
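
The change running through this file: NotifyFreeingMachineCode no longer
receives the Function, only the code address, since the Function may
already be gone by free time.  A listener that needs per-function data has
to record it at emission; a rough sketch (AddressKeyedListener is a
hypothetical name, and <map>/<string> are assumed included):

    class AddressKeyedListener : public llvm::JITEventListener {
      std::map<void*, std::string> NamesByAddr;  // keyed on emitted address
    public:
      virtual void NotifyFunctionEmitted(const llvm::Function &F, void *Code,
                                         size_t Size,
                                         const EmittedFunctionDetails &D) {
        NamesByAddr[Code] = F.getName().str();   // capture while F exists
      }
      virtual void NotifyFreeingMachineCode(void *OldPtr) {
        NamesByAddr.erase(OldPtr);               // only the address remains
      }
    };
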
 
diff --git a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITMemoryManagerTest.cpp b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITMemoryManagerTest.cpp
index 89a4be7..aa0c41d 100644
--- a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITMemoryManagerTest.cpp
+++ b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITMemoryManagerTest.cpp
@@ -13,6 +13,7 @@
 #include "llvm/DerivedTypes.h"
 #include "llvm/Function.h"
 #include "llvm/GlobalValue.h"
+#include "llvm/LLVMContext.h"
 
 using namespace llvm;
 
@@ -32,37 +33,36 @@ TEST(JITMemoryManagerTest, NoAllocations) {
   OwningPtr<JITMemoryManager> MemMgr(
       JITMemoryManager::CreateDefaultMemManager());
   uintptr_t size;
-  uint8_t *start;
   std::string Error;
 
   // Allocate the functions.
   OwningPtr<Function> F1(makeFakeFunction());
   size = 1024;
-  start = MemMgr->startFunctionBody(F1.get(), size);
-  memset(start, 0xFF, 1024);
-  MemMgr->endFunctionBody(F1.get(), start, start + 1024);
+  uint8_t *FunctionBody1 = MemMgr->startFunctionBody(F1.get(), size);
+  memset(FunctionBody1, 0xFF, 1024);
+  MemMgr->endFunctionBody(F1.get(), FunctionBody1, FunctionBody1 + 1024);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
 
   OwningPtr<Function> F2(makeFakeFunction());
   size = 1024;
-  start = MemMgr->startFunctionBody(F2.get(), size);
-  memset(start, 0xFF, 1024);
-  MemMgr->endFunctionBody(F2.get(), start, start + 1024);
+  uint8_t *FunctionBody2 = MemMgr->startFunctionBody(F2.get(), size);
+  memset(FunctionBody2, 0xFF, 1024);
+  MemMgr->endFunctionBody(F2.get(), FunctionBody2, FunctionBody2 + 1024);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
 
   OwningPtr<Function> F3(makeFakeFunction());
   size = 1024;
-  start = MemMgr->startFunctionBody(F3.get(), size);
-  memset(start, 0xFF, 1024);
-  MemMgr->endFunctionBody(F3.get(), start, start + 1024);
+  uint8_t *FunctionBody3 = MemMgr->startFunctionBody(F3.get(), size);
+  memset(FunctionBody3, 0xFF, 1024);
+  MemMgr->endFunctionBody(F3.get(), FunctionBody3, FunctionBody3 + 1024);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
 
   // Deallocate them out of order, in case that matters.
-  MemMgr->deallocateMemForFunction(F2.get());
+  MemMgr->deallocateFunctionBody(FunctionBody2);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
-  MemMgr->deallocateMemForFunction(F1.get());
+  MemMgr->deallocateFunctionBody(FunctionBody1);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
-  MemMgr->deallocateMemForFunction(F3.get());
+  MemMgr->deallocateFunctionBody(FunctionBody3);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
 }
 
@@ -72,7 +72,6 @@ TEST(JITMemoryManagerTest, TestCodeAllocation) {
   OwningPtr<JITMemoryManager> MemMgr(
       JITMemoryManager::CreateDefaultMemManager());
   uintptr_t size;
-  uint8_t *start;
   std::string Error;
 
   // Big functions are a little less than the largest block size.
@@ -83,26 +82,26 @@ TEST(JITMemoryManagerTest, TestCodeAllocation) {
   // Allocate big functions
   OwningPtr<Function> F1(makeFakeFunction());
   size = bigFuncSize;
-  start = MemMgr->startFunctionBody(F1.get(), size);
+  uint8_t *FunctionBody1 = MemMgr->startFunctionBody(F1.get(), size);
   ASSERT_LE(bigFuncSize, size);
-  memset(start, 0xFF, bigFuncSize);
-  MemMgr->endFunctionBody(F1.get(), start, start + bigFuncSize);
+  memset(FunctionBody1, 0xFF, bigFuncSize);
+  MemMgr->endFunctionBody(F1.get(), FunctionBody1, FunctionBody1 + bigFuncSize);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
 
   OwningPtr<Function> F2(makeFakeFunction());
   size = bigFuncSize;
-  start = MemMgr->startFunctionBody(F2.get(), size);
+  uint8_t *FunctionBody2 = MemMgr->startFunctionBody(F2.get(), size);
   ASSERT_LE(bigFuncSize, size);
-  memset(start, 0xFF, bigFuncSize);
-  MemMgr->endFunctionBody(F2.get(), start, start + bigFuncSize);
+  memset(FunctionBody2, 0xFF, bigFuncSize);
+  MemMgr->endFunctionBody(F2.get(), FunctionBody2, FunctionBody2 + bigFuncSize);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
 
   OwningPtr<Function> F3(makeFakeFunction());
   size = bigFuncSize;
-  start = MemMgr->startFunctionBody(F3.get(), size);
+  uint8_t *FunctionBody3 = MemMgr->startFunctionBody(F3.get(), size);
   ASSERT_LE(bigFuncSize, size);
-  memset(start, 0xFF, bigFuncSize);
-  MemMgr->endFunctionBody(F3.get(), start, start + bigFuncSize);
+  memset(FunctionBody3, 0xFF, bigFuncSize);
+  MemMgr->endFunctionBody(F3.get(), FunctionBody3, FunctionBody3 + bigFuncSize);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
 
   // Check that each large function took its own slab.
@@ -111,43 +110,46 @@ TEST(JITMemoryManagerTest, TestCodeAllocation) {
   // Allocate small functions
   OwningPtr<Function> F4(makeFakeFunction());
   size = smallFuncSize;
-  start = MemMgr->startFunctionBody(F4.get(), size);
+  uint8_t *FunctionBody4 = MemMgr->startFunctionBody(F4.get(), size);
   ASSERT_LE(smallFuncSize, size);
-  memset(start, 0xFF, smallFuncSize);
-  MemMgr->endFunctionBody(F4.get(), start, start + smallFuncSize);
+  memset(FunctionBody4, 0xFF, smallFuncSize);
+  MemMgr->endFunctionBody(F4.get(), FunctionBody4,
+                          FunctionBody4 + smallFuncSize);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
 
   OwningPtr<Function> F5(makeFakeFunction());
   size = smallFuncSize;
-  start = MemMgr->startFunctionBody(F5.get(), size);
+  uint8_t *FunctionBody5 = MemMgr->startFunctionBody(F5.get(), size);
   ASSERT_LE(smallFuncSize, size);
-  memset(start, 0xFF, smallFuncSize);
-  MemMgr->endFunctionBody(F5.get(), start, start + smallFuncSize);
+  memset(FunctionBody5, 0xFF, smallFuncSize);
+  MemMgr->endFunctionBody(F5.get(), FunctionBody5,
+                          FunctionBody5 + smallFuncSize);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
 
   OwningPtr<Function> F6(makeFakeFunction());
   size = smallFuncSize;
-  start = MemMgr->startFunctionBody(F6.get(), size);
+  uint8_t *FunctionBody6 = MemMgr->startFunctionBody(F6.get(), size);
   ASSERT_LE(smallFuncSize, size);
-  memset(start, 0xFF, smallFuncSize);
-  MemMgr->endFunctionBody(F6.get(), start, start + smallFuncSize);
+  memset(FunctionBody6, 0xFF, smallFuncSize);
+  MemMgr->endFunctionBody(F6.get(), FunctionBody6,
+                          FunctionBody6 + smallFuncSize);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
 
   // Check that the small functions didn't allocate any new slabs.
   EXPECT_EQ(3U, MemMgr->GetNumCodeSlabs());
 
   // Deallocate them out of order, in case that matters.
-  MemMgr->deallocateMemForFunction(F2.get());
+  MemMgr->deallocateFunctionBody(FunctionBody2);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
-  MemMgr->deallocateMemForFunction(F1.get());
+  MemMgr->deallocateFunctionBody(FunctionBody1);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
-  MemMgr->deallocateMemForFunction(F4.get());
+  MemMgr->deallocateFunctionBody(FunctionBody4);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
-  MemMgr->deallocateMemForFunction(F3.get());
+  MemMgr->deallocateFunctionBody(FunctionBody3);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
-  MemMgr->deallocateMemForFunction(F5.get());
+  MemMgr->deallocateFunctionBody(FunctionBody5);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
-  MemMgr->deallocateMemForFunction(F6.get());
+  MemMgr->deallocateFunctionBody(FunctionBody6);
   EXPECT_TRUE(MemMgr->CheckInvariants(Error)) << Error;
 }
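
The mechanical edits above track an API change: a function body is now
deallocated via the pointer returned from startFunctionBody rather than
via the Function*.  The full cycle, condensed from the tests:

    uintptr_t Size = 1024;                                // in: requested size
    uint8_t *Body = MemMgr->startFunctionBody(F1.get(), Size);  // out: Size >= request
    memset(Body, 0xFF, 1024);                             // ... emit code ...
    MemMgr->endFunctionBody(F1.get(), Body, Body + 1024);
    MemMgr->deallocateFunctionBody(Body);                 // keyed on Body now
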
 
diff --git a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp
index 9e70a54..12c6b67 100644
--- a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp
+++ b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp
@@ -9,6 +9,8 @@
 
 #include "gtest/gtest.h"
 #include "llvm/ADT/OwningPtr.h"
+#include "llvm/ADT/SmallPtrSet.h"
+#include "llvm/Assembly/Parser.h"
 #include "llvm/BasicBlock.h"
 #include "llvm/Constant.h"
 #include "llvm/Constants.h"
@@ -22,9 +24,13 @@
 #include "llvm/Module.h"
 #include "llvm/ModuleProvider.h"
 #include "llvm/Support/IRBuilder.h"
+#include "llvm/Support/SourceMgr.h"
+#include "llvm/Support/TypeBuilder.h"
 #include "llvm/Target/TargetSelect.h"
 #include "llvm/Type.h"
 
+#include <vector>
+
 using namespace llvm;
 
 namespace {
@@ -44,6 +50,163 @@ Function *makeReturnGlobal(std::string Name, GlobalVariable *G, Module *M) {
   return F;
 }
 
+std::string DumpFunction(const Function *F) {
+  std::string Result;
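+  // Streaming "" first binds the temporary raw_string_ostream as an lvalue
+  // reference, so the free operator<< overload for Function can be selected.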
+  raw_string_ostream(Result) << "" << *F;
+  return Result;
+}
+
+class RecordingJITMemoryManager : public JITMemoryManager {
+  const OwningPtr<JITMemoryManager> Base;
+public:
+  RecordingJITMemoryManager()
+    : Base(JITMemoryManager::CreateDefaultMemManager()) {
+    stubsAllocated = 0;
+  }
+
+  virtual void setMemoryWritable() { Base->setMemoryWritable(); }
+  virtual void setMemoryExecutable() { Base->setMemoryExecutable(); }
+  virtual void setPoisonMemory(bool poison) { Base->setPoisonMemory(poison); }
+  virtual void AllocateGOT() { Base->AllocateGOT(); }
+  virtual uint8_t *getGOTBase() const { return Base->getGOTBase(); }
+  struct StartFunctionBodyCall {
+    StartFunctionBodyCall(uint8_t *Result, const Function *F,
+                          uintptr_t ActualSize, uintptr_t ActualSizeResult)
+      : Result(Result), F(F), F_dump(DumpFunction(F)),
+        ActualSize(ActualSize), ActualSizeResult(ActualSizeResult) {}
+    uint8_t *Result;
+    const Function *F;
+    std::string F_dump;
+    uintptr_t ActualSize;
+    uintptr_t ActualSizeResult;
+  };
+  std::vector<StartFunctionBodyCall> startFunctionBodyCalls;
+  virtual uint8_t *startFunctionBody(const Function *F,
+                                     uintptr_t &ActualSize) {
+    uintptr_t InitialActualSize = ActualSize;
+    uint8_t *Result = Base->startFunctionBody(F, ActualSize);
+    startFunctionBodyCalls.push_back(
+      StartFunctionBodyCall(Result, F, InitialActualSize, ActualSize));
+    return Result;
+  }
+  int stubsAllocated;
+  virtual uint8_t *allocateStub(const GlobalValue* F, unsigned StubSize,
+                                unsigned Alignment) {
+    stubsAllocated++;
+    return Base->allocateStub(F, StubSize, Alignment);
+  }
+  struct EndFunctionBodyCall {
+    EndFunctionBodyCall(const Function *F, uint8_t *FunctionStart,
+                        uint8_t *FunctionEnd)
+      : F(F), F_dump(DumpFunction(F)),
+        FunctionStart(FunctionStart), FunctionEnd(FunctionEnd) {}
+    const Function *F;
+    std::string F_dump;
+    uint8_t *FunctionStart;
+    uint8_t *FunctionEnd;
+  };
+  std::vector<EndFunctionBodyCall> endFunctionBodyCalls;
+  virtual void endFunctionBody(const Function *F, uint8_t *FunctionStart,
+                               uint8_t *FunctionEnd) {
+    endFunctionBodyCalls.push_back(
+      EndFunctionBodyCall(F, FunctionStart, FunctionEnd));
+    Base->endFunctionBody(F, FunctionStart, FunctionEnd);
+  }
+  virtual uint8_t *allocateSpace(intptr_t Size, unsigned Alignment) {
+    return Base->allocateSpace(Size, Alignment);
+  }
+  virtual uint8_t *allocateGlobal(uintptr_t Size, unsigned Alignment) {
+    return Base->allocateGlobal(Size, Alignment);
+  }
+  struct DeallocateFunctionBodyCall {
+    DeallocateFunctionBodyCall(const void *Body) : Body(Body) {}
+    const void *Body;
+  };
+  std::vector<DeallocateFunctionBodyCall> deallocateFunctionBodyCalls;
+  virtual void deallocateFunctionBody(void *Body) {
+    deallocateFunctionBodyCalls.push_back(DeallocateFunctionBodyCall(Body));
+    Base->deallocateFunctionBody(Body);
+  }
+  struct DeallocateExceptionTableCall {
+    DeallocateExceptionTableCall(const void *ET) : ET(ET) {}
+    const void *ET;
+  };
+  std::vector<DeallocateExceptionTableCall> deallocateExceptionTableCalls;
+  virtual void deallocateExceptionTable(void *ET) {
+    deallocateExceptionTableCalls.push_back(DeallocateExceptionTableCall(ET));
+    Base->deallocateExceptionTable(ET);
+  }
+  struct StartExceptionTableCall {
+    StartExceptionTableCall(uint8_t *Result, const Function *F,
+                            uintptr_t ActualSize, uintptr_t ActualSizeResult)
+      : Result(Result), F(F), F_dump(DumpFunction(F)),
+        ActualSize(ActualSize), ActualSizeResult(ActualSizeResult) {}
+    uint8_t *Result;
+    const Function *F;
+    std::string F_dump;
+    uintptr_t ActualSize;
+    uintptr_t ActualSizeResult;
+  };
+  std::vector<StartExceptionTableCall> startExceptionTableCalls;
+  virtual uint8_t* startExceptionTable(const Function* F,
+                                       uintptr_t &ActualSize) {
+    uintptr_t InitialActualSize = ActualSize;
+    uint8_t *Result = Base->startExceptionTable(F, ActualSize);
+    startExceptionTableCalls.push_back(
+      StartExceptionTableCall(Result, F, InitialActualSize, ActualSize));
+    return Result;
+  }
+  struct EndExceptionTableCall {
+    EndExceptionTableCall(const Function *F, uint8_t *TableStart,
+                          uint8_t *TableEnd, uint8_t* FrameRegister)
+      : F(F), F_dump(DumpFunction(F)),
+        TableStart(TableStart), TableEnd(TableEnd),
+        FrameRegister(FrameRegister) {}
+    const Function *F;
+    std::string F_dump;
+    uint8_t *TableStart;
+    uint8_t *TableEnd;
+    uint8_t *FrameRegister;
+  };
+  std::vector<EndExceptionTableCall> endExceptionTableCalls;
+  virtual void endExceptionTable(const Function *F, uint8_t *TableStart,
+                                 uint8_t *TableEnd, uint8_t* FrameRegister) {
+    endExceptionTableCalls.push_back(
+      EndExceptionTableCall(F, TableStart, TableEnd, FrameRegister));
+    Base->endExceptionTable(F, TableStart, TableEnd, FrameRegister);
+  }
+};
+
+class JITTest : public testing::Test {
+ protected:
+  virtual void SetUp() {
+    M = new Module("<main>", Context);
+    MP = new ExistingModuleProvider(M);
+    RJMM = new RecordingJITMemoryManager;
+    RJMM->setPoisonMemory(true);
+    std::string Error;
+    TheJIT.reset(EngineBuilder(MP).setEngineKind(EngineKind::JIT)
+                 .setJITMemoryManager(RJMM)
+                 .setErrorStr(&Error).create());
+    ASSERT_TRUE(TheJIT.get() != NULL) << Error;
+  }
+
+  void LoadAssembly(const char *assembly) {
+    SMDiagnostic Error;
+    bool success = NULL != ParseAssemblyString(assembly, M, Error, Context);
+    std::string errMsg;
+    raw_string_ostream os(errMsg);
+    Error.Print("", os);
+    ASSERT_TRUE(success) << os.str();
+  }
+
+  LLVMContext Context;
+  Module *M;  // Owned by MP.
+  ModuleProvider *MP;  // Owned by ExecutionEngine.
+  RecordingJITMemoryManager *RJMM;
+  OwningPtr<ExecutionEngine> TheJIT;
+};
+
 // Regression test for a bug.  The JIT used to allocate globals inside the same
 // memory block used for the function, and when the function code was freed,
 // the global was left in the same place.  This test allocates a function
@@ -83,9 +246,8 @@ TEST(JIT, GlobalInFunction) {
 
   // Get the pointer to the native code to force it to JIT the function and
   // allocate space for the global.
-  void (*F1Ptr)();
-  // Hack to avoid ISO C++ warning about casting function pointers.
-  *(void**)(void*)&F1Ptr = JIT->getPointerToFunction(F1);
+  void (*F1Ptr)() =
+      reinterpret_cast<void(*)()>((intptr_t)JIT->getPointerToFunction(F1));
 
   // Since F1 was codegen'd, a pointer to G should be available.
   int32_t *GPtr = (int32_t*)JIT->getPointerToGlobalIfAvailable(G);
@@ -99,9 +261,8 @@ TEST(JIT, GlobalInFunction) {
   // Make a second function identical to the first, referring to the same
   // global.
   Function *F2 = makeReturnGlobal("F2", G, M);
-  // Hack to avoid ISO C++ warning about casting function pointers.
-  void (*F2Ptr)();
-  *(void**)(void*)&F2Ptr = JIT->getPointerToFunction(F2);
+  void (*F2Ptr)() =
+      reinterpret_cast<void(*)()>((intptr_t)JIT->getPointerToFunction(F2));
 
   // F2() should increment G.
   F2Ptr();
@@ -115,6 +276,264 @@ TEST(JIT, GlobalInFunction) {
   EXPECT_EQ(3, *GPtr);
 }
 
+int PlusOne(int arg) {
+  return arg + 1;
+}
+
+TEST_F(JITTest, FarCallToKnownFunction) {
+  // x86-64 can only make direct calls to functions within 32 bits of
+  // the current PC.  To call anything farther away, we have to load
+  // the address into a register and call through the register.  The
+  // current JIT does this by allocating a stub for any far call.
+  // There was a bug in which the JIT tried to emit a direct call when
+  // the target was already in the JIT's global mappings and lazy
+  // compilation was disabled.
+
+  Function *KnownFunction = Function::Create(
+      TypeBuilder<int(int), false>::get(Context),
+      GlobalValue::ExternalLinkage, "known", M);
+  TheJIT->addGlobalMapping(KnownFunction, (void*)(intptr_t)PlusOne);
+
+  // int test() { return known(7); }
+  Function *TestFunction = Function::Create(
+      TypeBuilder<int(), false>::get(Context),
+      GlobalValue::ExternalLinkage, "test", M);
+  BasicBlock *Entry = BasicBlock::Create(Context, "entry", TestFunction);
+  IRBuilder<> Builder(Entry);
+  Value *result = Builder.CreateCall(
+      KnownFunction,
+      ConstantInt::get(TypeBuilder<int, false>::get(Context), 7));
+  Builder.CreateRet(result);
+
+  TheJIT->DisableLazyCompilation(true);
+  int (*TestFunctionPtr)() = reinterpret_cast<int(*)()>(
+      (intptr_t)TheJIT->getPointerToFunction(TestFunction));
+  // This used to crash in trying to call PlusOne().
+  EXPECT_EQ(8, TestFunctionPtr());
+}
+
+// Test a function (func1) that calls two functions (func2 and func3)
+// which call each other.
+TEST_F(JITTest, NonLazyCompilationStillNeedsStubs) {
+  TheJIT->DisableLazyCompilation(true);
+
+  const FunctionType *Func1Ty =
+      cast<FunctionType>(TypeBuilder<void(void), false>::get(Context));
+  std::vector<const Type*> arg_types;
+  arg_types.push_back(Type::getInt1Ty(Context));
+  const FunctionType *FuncTy = FunctionType::get(
+      Type::getVoidTy(Context), arg_types, false);
+  Function *Func1 = Function::Create(Func1Ty, Function::ExternalLinkage,
+                                     "func1", M);
+  Function *Func2 = Function::Create(FuncTy, Function::InternalLinkage,
+                                     "func2", M);
+  Function *Func3 = Function::Create(FuncTy, Function::InternalLinkage,
+                                     "func3", M);
+  BasicBlock *Block1 = BasicBlock::Create(Context, "block1", Func1);
+  BasicBlock *Block2 = BasicBlock::Create(Context, "block2", Func2);
+  BasicBlock *True2 = BasicBlock::Create(Context, "cond_true", Func2);
+  BasicBlock *False2 = BasicBlock::Create(Context, "cond_false", Func2);
+  BasicBlock *Block3 = BasicBlock::Create(Context, "block3", Func3);
+  BasicBlock *True3 = BasicBlock::Create(Context, "cond_true", Func3);
+  BasicBlock *False3 = BasicBlock::Create(Context, "cond_false", Func3);
+
+  // Make Func1 call Func2(true) and Func3(true).
+  IRBuilder<> Builder(Block1);
+  Builder.CreateCall(Func2, ConstantInt::getTrue(Context));
+  Builder.CreateCall(Func3, ConstantInt::getTrue(Context));
+  Builder.CreateRetVoid();
+
+  // void Func2(bool b) { if (b) { Func3(false); return; } return; }
+  Builder.SetInsertPoint(Block2);
+  Builder.CreateCondBr(Func2->arg_begin(), True2, False2);
+  Builder.SetInsertPoint(True2);
+  Builder.CreateCall(Func3, ConstantInt::getFalse(Context));
+  Builder.CreateRetVoid();
+  Builder.SetInsertPoint(False2);
+  Builder.CreateRetVoid();
+
+  // void Func3(bool b) { if (b) { Func2(false); return; } return; }
+  Builder.SetInsertPoint(Block3);
+  Builder.CreateCondBr(Func3->arg_begin(), True3, False3);
+  Builder.SetInsertPoint(True3);
+  Builder.CreateCall(Func2, ConstantInt::getFalse(Context));
+  Builder.CreateRetVoid();
+  Builder.SetInsertPoint(False3);
+  Builder.CreateRetVoid();
+
+  // Compile the function to native code
+  void (*F1Ptr)() =
+     reinterpret_cast<void(*)()>((intptr_t)TheJIT->getPointerToFunction(Func1));
+
+  F1Ptr();
+}
+
+// Regression test for PR5162.  This used to trigger an assertion from an
+// AssertingVH inside the JIT's Function-to-stub mapping.
+TEST_F(JITTest, NonLazyLeaksNoStubs) {
+  TheJIT->DisableLazyCompilation(true);
+
+  // Create two functions with a single basic block each.
+  const FunctionType *FuncTy =
+      cast<FunctionType>(TypeBuilder<int(), false>::get(Context));
+  Function *Func1 = Function::Create(FuncTy, Function::ExternalLinkage,
+                                     "func1", M);
+  Function *Func2 = Function::Create(FuncTy, Function::InternalLinkage,
+                                     "func2", M);
+  BasicBlock *Block1 = BasicBlock::Create(Context, "block1", Func1);
+  BasicBlock *Block2 = BasicBlock::Create(Context, "block2", Func2);
+
+  // The first function calls the second and returns the result
+  IRBuilder<> Builder(Block1);
+  Value *Result = Builder.CreateCall(Func2);
+  Builder.CreateRet(Result);
+
+  // The second function just returns a constant
+  Builder.SetInsertPoint(Block2);
+  Builder.CreateRet(ConstantInt::get(TypeBuilder<int, false>::get(Context),42));
+
+  // Compile the function to native code
+  (void)TheJIT->getPointerToFunction(Func1);
+
+  // Free the JIT state for the functions
+  TheJIT->freeMachineCodeForFunction(Func1);
+  TheJIT->freeMachineCodeForFunction(Func2);
+
+  // Delete the first function (and show that it has no users)
+  EXPECT_EQ(Func1->getNumUses(), 0u);
+  Func1->eraseFromParent();
+
+  // Delete the second function (and show that it has no users - it had one,
+  // func1, but that's gone now)
+  EXPECT_EQ(Func2->getNumUses(), 0u);
+  Func2->eraseFromParent();
+}
+
+TEST_F(JITTest, ModuleDeletion) {
+  TheJIT->DisableLazyCompilation(false);
+  LoadAssembly("define void @main() { "
+               "  call i32 @computeVal() "
+               "  ret void "
+               "} "
+               " "
+               "define internal i32 @computeVal()  { "
+               "  ret i32 0 "
+               "} ");
+  Function *func = M->getFunction("main");
+  TheJIT->getPointerToFunction(func);
+  TheJIT->deleteModuleProvider(MP);
+
+  SmallPtrSet<const void*, 2> FunctionsDeallocated;
+  for (unsigned i = 0, e = RJMM->deallocateFunctionBodyCalls.size();
+       i != e; ++i) {
+    FunctionsDeallocated.insert(RJMM->deallocateFunctionBodyCalls[i].Body);
+  }
+  for (unsigned i = 0, e = RJMM->startFunctionBodyCalls.size(); i != e; ++i) {
+    EXPECT_TRUE(FunctionsDeallocated.count(
+                  RJMM->startFunctionBodyCalls[i].Result))
+      << "Function leaked: \n" << RJMM->startFunctionBodyCalls[i].F_dump;
+  }
+  EXPECT_EQ(RJMM->startFunctionBodyCalls.size(),
+            RJMM->deallocateFunctionBodyCalls.size());
+
+  SmallPtrSet<const void*, 2> ExceptionTablesDeallocated;
+  unsigned NumTablesDeallocated = 0;
+  for (unsigned i = 0, e = RJMM->deallocateExceptionTableCalls.size();
+       i != e; ++i) {
+    ExceptionTablesDeallocated.insert(
+        RJMM->deallocateExceptionTableCalls[i].ET);
+    if (RJMM->deallocateExceptionTableCalls[i].ET != NULL) {
+        // If JITEmitDebugInfo is off, we'll "deallocate" NULL, which doesn't
+        // appear in startExceptionTableCalls.
+        NumTablesDeallocated++;
+    }
+  }
+  for (unsigned i = 0, e = RJMM->startExceptionTableCalls.size(); i != e; ++i) {
+    EXPECT_TRUE(ExceptionTablesDeallocated.count(
+                  RJMM->startExceptionTableCalls[i].Result))
+      << "Function's exception table leaked: \n"
+      << RJMM->startExceptionTableCalls[i].F_dump;
+  }
+  EXPECT_EQ(RJMM->startExceptionTableCalls.size(),
+            NumTablesDeallocated);
+}
+
+// ARM and PPC still emit stubs for calls since the target may be too far away
+// to call directly.  This #if can probably be removed when
+// http://llvm.org/PR5201 is fixed.
+#if !defined(__arm__) && !defined(__powerpc__) && !defined(__ppc__)
+typedef int (*FooPtr) ();
+
+TEST_F(JITTest, NoStubs) {
+  LoadAssembly("define void @bar() {"
+	       "entry: "
+	       "ret void"
+	       "}"
+	       " "
+	       "define i32 @foo() {"
+	       "entry:"
+	       "call void @bar()"
+	       "ret i32 undef"
+	       "}"
+	       " "
+	       "define i32 @main() {"
+	       "entry:"
+	       "%0 = call i32 @foo()"
+	       "call void @bar()"
+	       "ret i32 undef"
+	       "}");
+  Function *foo = M->getFunction("foo");
+  uintptr_t tmp = (uintptr_t)(TheJIT->getPointerToFunction(foo));
+  FooPtr ptr = (FooPtr)(tmp);
+
+  (ptr)();
+
+  // We should allocate no more stubs from here on: we already have the code
+  // for foo and the existing stub for bar.
+  int stubsBefore = RJMM->stubsAllocated;
+  Function *func = M->getFunction("main");
+  TheJIT->getPointerToFunction(func);
+
+  Function *bar = M->getFunction("bar");
+  TheJIT->getPointerToFunction(bar);
+
+  ASSERT_EQ(stubsBefore, RJMM->stubsAllocated);
+}
+#endif  // !ARM && !PPC
+
+TEST_F(JITTest, FunctionPointersOutliveTheirCreator) {
+  TheJIT->DisableLazyCompilation(true);
+  LoadAssembly("define i8()* @get_foo_addr() { "
+               "  ret i8()* @foo "
+               "} "
+               " "
+               "define i8 @foo() { "
+               "  ret i8 42 "
+               "} ");
+  Function *F_get_foo_addr = M->getFunction("get_foo_addr");
+
+  typedef char(*fooT)();
+  fooT (*get_foo_addr)() = reinterpret_cast<fooT(*)()>(
+      (intptr_t)TheJIT->getPointerToFunction(F_get_foo_addr));
+  fooT foo_addr = get_foo_addr();
+
+  // Now free get_foo_addr.  This should not free the machine code for foo or
+  // any call stub returned as foo's canonical address.
+  TheJIT->freeMachineCodeForFunction(F_get_foo_addr);
+
+  // Check by calling the reported address of foo.
+  EXPECT_EQ(42, foo_addr());
+
+  // The reported address should also be the same as the result of a subsequent
+  // getPointerToFunction(foo).
+#if 0
+  // Fails until PR5126 is fixed:
+  Function *F_foo = M->getFunction("foo");
+  fooT foo = reinterpret_cast<fooT>(
+      (intptr_t)TheJIT->getPointerToFunction(F_foo));
+  EXPECT_EQ((intptr_t)foo, (intptr_t)foo_addr);
+#endif
+}
+
 // This code is copied from JITEventListenerTest, but it only runs once for all
 // the tests in this directory.  Everything seems fine, but that's strange
 // behavior.
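
A note on the cast idiom adopted throughout this file:
getPointerToFunction returns void*, and ISO C++ has no direct conversion
from an object pointer to a function pointer, so the tests round-trip
through intptr_t instead of the old *(void**)&Ptr aliasing hack:

    void *Raw = TheJIT->getPointerToFunction(F);     // object pointer
    int (*Fn)() = reinterpret_cast<int (*)()>((intptr_t)Raw);
    int Result = Fn();                               // call the JITed code
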
diff --git a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/Makefile b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/Makefile
index 0069c76..048924a 100644
--- a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/Makefile
+++ b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/Makefile
@@ -9,7 +9,7 @@
 
 LEVEL = ../../..
 TESTNAME = JIT
-LINK_COMPONENTS := core support jit native
+LINK_COMPONENTS := asmparser core support jit native
 
 include $(LEVEL)/Makefile.config
 include $(LLVM_SRC_ROOT)/unittests/Makefile.unittest
diff --git a/libclamav/c++/llvm/unittests/Makefile.unittest b/libclamav/c++/llvm/unittests/Makefile.unittest
index 214a1e8..e417435 100644
--- a/libclamav/c++/llvm/unittests/Makefile.unittest
+++ b/libclamav/c++/llvm/unittests/Makefile.unittest
@@ -19,13 +19,13 @@ include $(LEVEL)/Makefile.common
 LLVMUnitTestExe = $(BuildMode)/$(TESTNAME)Tests$(EXEEXT)
 
 CPP.Flags += -I$(LLVM_SRC_ROOT)/utils/unittest/googletest/include/
-CPP.Flags += -Wno-variadic-macros
-LIBS += -lGoogleTest -lUnitTestMain
+CPP.Flags += $(NO_VARIADIC_MACROS)
+TESTLIBS = -lGoogleTest -lUnitTestMain
 
 $(LLVMUnitTestExe): $(ObjectsO) $(ProjLibsPaths) $(LLVMLibsPaths)
 	$(Echo) Linking $(BuildMode) unit test $(TESTNAME) $(StripWarnMsg)
 	$(Verb) $(Link) -o $@ $(TOOLLINKOPTS) $(ObjectsO) $(ProjLibsOptions) \
-	$(LIBS) $(LLVMLibsOptions) $(ExtraLibs) $(TOOLLINKOPTSB)
+	$(TESTLIBS) $(LLVMLibsOptions) $(ExtraLibs) $(TOOLLINKOPTSB) $(LIBS)
 	$(Echo) ======= Finished Linking $(BuildMode) Unit test $(TESTNAME) \
           $(StripWarnMsg)
 
diff --git a/libclamav/c++/llvm/unittests/Support/TypeBuilderTest.cpp b/libclamav/c++/llvm/unittests/Support/TypeBuilderTest.cpp
index fae8907..a5c5e67 100644
--- a/libclamav/c++/llvm/unittests/Support/TypeBuilderTest.cpp
+++ b/libclamav/c++/llvm/unittests/Support/TypeBuilderTest.cpp
@@ -20,7 +20,7 @@ TEST(TypeBuilderTest, Void) {
   EXPECT_EQ(Type::getVoidTy(getGlobalContext()), (TypeBuilder<void, true>::get(getGlobalContext())));
   EXPECT_EQ(Type::getVoidTy(getGlobalContext()), (TypeBuilder<void, false>::get(getGlobalContext())));
   // Special case for C compatibility:
-  EXPECT_EQ(PointerType::getUnqual(Type::getInt8Ty(getGlobalContext())),
+  EXPECT_EQ(Type::getInt8PtrTy(getGlobalContext()),
             (TypeBuilder<void*, false>::get(getGlobalContext())));
 }
 
@@ -64,21 +64,21 @@ TEST(TypeBuilderTest, Float) {
 }
 
 TEST(TypeBuilderTest, Derived) {
-  EXPECT_EQ(PointerType::getUnqual(PointerType::getUnqual(Type::getInt8Ty(getGlobalContext()))),
+  EXPECT_EQ(PointerType::getUnqual(Type::getInt8PtrTy(getGlobalContext())),
             (TypeBuilder<int8_t**, false>::get(getGlobalContext())));
   EXPECT_EQ(ArrayType::get(Type::getInt8Ty(getGlobalContext()), 7),
             (TypeBuilder<int8_t[7], false>::get(getGlobalContext())));
   EXPECT_EQ(ArrayType::get(Type::getInt8Ty(getGlobalContext()), 0),
             (TypeBuilder<int8_t[], false>::get(getGlobalContext())));
 
-  EXPECT_EQ(PointerType::getUnqual(PointerType::getUnqual(Type::getInt8Ty(getGlobalContext()))),
+  EXPECT_EQ(PointerType::getUnqual(Type::getInt8PtrTy(getGlobalContext())),
             (TypeBuilder<types::i<8>**, false>::get(getGlobalContext())));
   EXPECT_EQ(ArrayType::get(Type::getInt8Ty(getGlobalContext()), 7),
             (TypeBuilder<types::i<8>[7], false>::get(getGlobalContext())));
   EXPECT_EQ(ArrayType::get(Type::getInt8Ty(getGlobalContext()), 0),
             (TypeBuilder<types::i<8>[], false>::get(getGlobalContext())));
 
-  EXPECT_EQ(PointerType::getUnqual(PointerType::getUnqual(Type::getInt8Ty(getGlobalContext()))),
+  EXPECT_EQ(PointerType::getUnqual(Type::getInt8PtrTy(getGlobalContext())),
             (TypeBuilder<types::i<8>**, true>::get(getGlobalContext())));
   EXPECT_EQ(ArrayType::get(Type::getInt8Ty(getGlobalContext()), 7),
             (TypeBuilder<types::i<8>[7], true>::get(getGlobalContext())));
@@ -107,7 +107,7 @@ TEST(TypeBuilderTest, Derived) {
   EXPECT_EQ(Type::getInt8Ty(getGlobalContext()),
             (TypeBuilder<const volatile types::i<8>, true>::get(getGlobalContext())));
 
-  EXPECT_EQ(PointerType::getUnqual(Type::getInt8Ty(getGlobalContext())),
+  EXPECT_EQ(Type::getInt8PtrTy(getGlobalContext()),
             (TypeBuilder<const volatile int8_t*const volatile, false>::get(getGlobalContext())));
 }
 
diff --git a/libclamav/c++/llvm/unittests/Support/ValueHandleTest.cpp b/libclamav/c++/llvm/unittests/Support/ValueHandleTest.cpp
index c6b5356..6a6528f 100644
--- a/libclamav/c++/llvm/unittests/Support/ValueHandleTest.cpp
+++ b/libclamav/c++/llvm/unittests/Support/ValueHandleTest.cpp
@@ -11,6 +11,8 @@
 
 #include "llvm/Constants.h"
 #include "llvm/Instructions.h"
+#include "llvm/LLVMContext.h"
+#include "llvm/ADT/OwningPtr.h"
 
 #include "gtest/gtest.h"
 
@@ -327,4 +329,84 @@ TEST_F(ValueHandle, CallbackVH_DeletionCanRAUW) {
             BitcastUser->getOperand(0));
 }
 
+TEST_F(ValueHandle, DestroyingOtherVHOnSameValueDoesntBreakIteration) {
+  // When a CallbackVH modifies other ValueHandles in its callbacks,
+  // that shouldn't interfere with non-modified ValueHandles receiving
+  // their appropriate callbacks.
+  //
+  // We create the active CallbackVH in the middle of a palindromic
+  // arrangement of other VHs so that the bad behavior would be
+  // triggered in whichever order callbacks run.
+
+  class DestroyingVH : public CallbackVH {
+  public:
+    OwningPtr<WeakVH> ToClear[2];
+    DestroyingVH(Value *V) {
+      ToClear[0].reset(new WeakVH(V));
+      setValPtr(V);
+      ToClear[1].reset(new WeakVH(V));
+    }
+    virtual void deleted() {
+      ToClear[0].reset();
+      ToClear[1].reset();
+      CallbackVH::deleted();
+    }
+    virtual void allUsesReplacedWith(Value *) {
+      ToClear[0].reset();
+      ToClear[1].reset();
+    }
+  };
+
+  {
+    WeakVH ShouldBeVisited1(BitcastV.get());
+    DestroyingVH C(BitcastV.get());
+    WeakVH ShouldBeVisited2(BitcastV.get());
+
+    BitcastV->replaceAllUsesWith(ConstantV);
+    EXPECT_EQ(ConstantV, static_cast<Value*>(ShouldBeVisited1));
+    EXPECT_EQ(ConstantV, static_cast<Value*>(ShouldBeVisited2));
+  }
+
+  {
+    WeakVH ShouldBeVisited1(BitcastV.get());
+    DestroyingVH C(BitcastV.get());
+    WeakVH ShouldBeVisited2(BitcastV.get());
+
+    BitcastV.reset();
+    EXPECT_EQ(NULL, static_cast<Value*>(ShouldBeVisited1));
+    EXPECT_EQ(NULL, static_cast<Value*>(ShouldBeVisited2));
+  }
+}
+
+TEST_F(ValueHandle, AssertingVHCheckedLast) {
+  // If a CallbackVH exists to clear out a group of AssertingVHs on
+  // Value deletion, the CallbackVH should get a chance to do so
+  // before the AssertingVHs assert.
+
+  class ClearingVH : public CallbackVH {
+  public:
+    AssertingVH<Value> *ToClear[2];
+    ClearingVH(Value *V,
+               AssertingVH<Value> &A0, AssertingVH<Value> &A1)
+      : CallbackVH(V) {
+      ToClear[0] = &A0;
+      ToClear[1] = &A1;
+    }
+
+    virtual void deleted() {
+      *ToClear[0] = 0;
+      *ToClear[1] = 0;
+      CallbackVH::deleted();
+    }
+  };
+
+  AssertingVH<Value> A1, A2;
+  A1 = BitcastV.get();
+  ClearingVH C(BitcastV.get(), A1, A2);
+  A2 = BitcastV.get();
+  // C.deleted() should run first, clearing the two AssertingVHs,
+  // which should prevent them from asserting.
+  BitcastV.reset();
+}
+
 }
diff --git a/libclamav/c++/llvm/unittests/Support/raw_ostream_test.cpp b/libclamav/c++/llvm/unittests/Support/raw_ostream_test.cpp
index bd2e95c..2b797b4 100644
--- a/libclamav/c++/llvm/unittests/Support/raw_ostream_test.cpp
+++ b/libclamav/c++/llvm/unittests/Support/raw_ostream_test.cpp
@@ -127,4 +127,20 @@ TEST(raw_ostreamTest, TinyBuffer) {
   EXPECT_EQ("hello1world", OS.str());
 }
 
+TEST(raw_ostreamTest, WriteEscaped) {
+  std::string Str;
+
+  Str = "";
+  raw_string_ostream(Str).write_escaped("hi");
+  EXPECT_EQ("hi", Str);
+
+  Str = "";
+  raw_string_ostream(Str).write_escaped("\\\t\n\"");
+  EXPECT_EQ("\\\\\\t\\n\\\"", Str);
+
+  Str = "";
+  raw_string_ostream(Str).write_escaped("\1\10\200");
+  EXPECT_EQ("\\001\\010\\200", Str);
+}
+
 }
diff --git a/libclamav/c++/llvm/unittests/Transforms/Utils/Cloning.cpp b/libclamav/c++/llvm/unittests/Transforms/Utils/Cloning.cpp
index 7c93f6f..17047e7 100644
--- a/libclamav/c++/llvm/unittests/Transforms/Utils/Cloning.cpp
+++ b/libclamav/c++/llvm/unittests/Transforms/Utils/Cloning.cpp
@@ -10,6 +10,7 @@
 #include "gtest/gtest.h"
 #include "llvm/Argument.h"
 #include "llvm/Instructions.h"
+#include "llvm/LLVMContext.h"
 
 using namespace llvm;
 
@@ -21,58 +22,58 @@ TEST(CloneInstruction, OverflowBits) {
   BinaryOperator *Sub = BinaryOperator::Create(Instruction::Sub, V, V);
   BinaryOperator *Mul = BinaryOperator::Create(Instruction::Mul, V, V);
 
-  EXPECT_FALSE(Add->clone()->hasNoUnsignedWrap());
-  EXPECT_FALSE(Add->clone()->hasNoSignedWrap());
-  EXPECT_FALSE(Sub->clone()->hasNoUnsignedWrap());
-  EXPECT_FALSE(Sub->clone()->hasNoSignedWrap());
-  EXPECT_FALSE(Mul->clone()->hasNoUnsignedWrap());
-  EXPECT_FALSE(Mul->clone()->hasNoSignedWrap());
+  EXPECT_FALSE(cast<BinaryOperator>(Add->clone())->hasNoUnsignedWrap());
+  EXPECT_FALSE(cast<BinaryOperator>(Add->clone())->hasNoSignedWrap());
+  EXPECT_FALSE(cast<BinaryOperator>(Sub->clone())->hasNoUnsignedWrap());
+  EXPECT_FALSE(cast<BinaryOperator>(Sub->clone())->hasNoSignedWrap());
+  EXPECT_FALSE(cast<BinaryOperator>(Mul->clone())->hasNoUnsignedWrap());
+  EXPECT_FALSE(cast<BinaryOperator>(Mul->clone())->hasNoSignedWrap());
 
   Add->setHasNoUnsignedWrap();
   Sub->setHasNoUnsignedWrap();
   Mul->setHasNoUnsignedWrap();
 
-  EXPECT_TRUE(Add->clone()->hasNoUnsignedWrap());
-  EXPECT_FALSE(Add->clone()->hasNoSignedWrap());
-  EXPECT_TRUE(Sub->clone()->hasNoUnsignedWrap());
-  EXPECT_FALSE(Sub->clone()->hasNoSignedWrap());
-  EXPECT_TRUE(Mul->clone()->hasNoUnsignedWrap());
-  EXPECT_FALSE(Mul->clone()->hasNoSignedWrap());
+  EXPECT_TRUE(cast<BinaryOperator>(Add->clone())->hasNoUnsignedWrap());
+  EXPECT_FALSE(cast<BinaryOperator>(Add->clone())->hasNoSignedWrap());
+  EXPECT_TRUE(cast<BinaryOperator>(Sub->clone())->hasNoUnsignedWrap());
+  EXPECT_FALSE(cast<BinaryOperator>(Sub->clone())->hasNoSignedWrap());
+  EXPECT_TRUE(cast<BinaryOperator>(Mul->clone())->hasNoUnsignedWrap());
+  EXPECT_FALSE(cast<BinaryOperator>(Mul->clone())->hasNoSignedWrap());
 
   Add->setHasNoSignedWrap();
   Sub->setHasNoSignedWrap();
   Mul->setHasNoSignedWrap();
 
-  EXPECT_TRUE(Add->clone()->hasNoUnsignedWrap());
-  EXPECT_TRUE(Add->clone()->hasNoSignedWrap());
-  EXPECT_TRUE(Sub->clone()->hasNoUnsignedWrap());
-  EXPECT_TRUE(Sub->clone()->hasNoSignedWrap());
-  EXPECT_TRUE(Mul->clone()->hasNoUnsignedWrap());
-  EXPECT_TRUE(Mul->clone()->hasNoSignedWrap());
+  EXPECT_TRUE(cast<BinaryOperator>(Add->clone())->hasNoUnsignedWrap());
+  EXPECT_TRUE(cast<BinaryOperator>(Add->clone())->hasNoSignedWrap());
+  EXPECT_TRUE(cast<BinaryOperator>(Sub->clone())->hasNoUnsignedWrap());
+  EXPECT_TRUE(cast<BinaryOperator>(Sub->clone())->hasNoSignedWrap());
+  EXPECT_TRUE(cast<BinaryOperator>(Mul->clone())->hasNoUnsignedWrap());
+  EXPECT_TRUE(cast<BinaryOperator>(Mul->clone())->hasNoSignedWrap());
 
   Add->setHasNoUnsignedWrap(false);
   Sub->setHasNoUnsignedWrap(false);
   Mul->setHasNoUnsignedWrap(false);
 
-  EXPECT_FALSE(Add->clone()->hasNoUnsignedWrap());
-  EXPECT_TRUE(Add->clone()->hasNoSignedWrap());
-  EXPECT_FALSE(Sub->clone()->hasNoUnsignedWrap());
-  EXPECT_TRUE(Sub->clone()->hasNoSignedWrap());
-  EXPECT_FALSE(Mul->clone()->hasNoUnsignedWrap());
-  EXPECT_TRUE(Mul->clone()->hasNoSignedWrap());
+  EXPECT_FALSE(cast<BinaryOperator>(Add->clone())->hasNoUnsignedWrap());
+  EXPECT_TRUE(cast<BinaryOperator>(Add->clone())->hasNoSignedWrap());
+  EXPECT_FALSE(cast<BinaryOperator>(Sub->clone())->hasNoUnsignedWrap());
+  EXPECT_TRUE(cast<BinaryOperator>(Sub->clone())->hasNoSignedWrap());
+  EXPECT_FALSE(cast<BinaryOperator>(Mul->clone())->hasNoUnsignedWrap());
+  EXPECT_TRUE(cast<BinaryOperator>(Mul->clone())->hasNoSignedWrap());
 }
 
 TEST(CloneInstruction, Inbounds) {
   LLVMContext context;
-  Value *V = new Argument(Type::getInt32Ty(context)->getPointerTo());
+  Value *V = new Argument(Type::getInt32PtrTy(context));
   Constant *Z = Constant::getNullValue(Type::getInt32Ty(context));
   std::vector<Value *> ops;
   ops.push_back(Z);
   GetElementPtrInst *GEP = GetElementPtrInst::Create(V, ops.begin(), ops.end());
-  EXPECT_FALSE(GEP->clone()->isInBounds());
+  EXPECT_FALSE(cast<GetElementPtrInst>(GEP->clone())->isInBounds());
 
   GEP->setIsInBounds();
-  EXPECT_TRUE(GEP->clone()->isInBounds());
+  EXPECT_TRUE(cast<GetElementPtrInst>(GEP->clone())->isInBounds());
 }
 
 TEST(CloneInstruction, Exact) {
@@ -80,8 +81,8 @@ TEST(CloneInstruction, Exact) {
   Value *V = new Argument(Type::getInt32Ty(context));
 
   BinaryOperator *SDiv = BinaryOperator::Create(Instruction::SDiv, V, V);
-  EXPECT_FALSE(SDiv->clone()->isExact());
+  EXPECT_FALSE(cast<BinaryOperator>(SDiv->clone())->isExact());
 
   SDiv->setIsExact(true);
-  EXPECT_TRUE(SDiv->clone()->isExact());
+  EXPECT_TRUE(cast<BinaryOperator>(SDiv->clone())->isExact());
 }
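
The pattern behind every change in this file: clone() hands back a plain
Instruction* (hence the casts added above), so flag accessors that live on
a subclass need an explicit cast<> first:

    llvm::Instruction *I = Add->clone();          // no longer the derived type
    bool NUW = llvm::cast<llvm::BinaryOperator>(I)->hasNoUnsignedWrap();
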
diff --git a/libclamav/c++/llvm/unittests/VMCore/MetadataTest.cpp b/libclamav/c++/llvm/unittests/VMCore/MetadataTest.cpp
index b92b068..4bd777b 100644
--- a/libclamav/c++/llvm/unittests/VMCore/MetadataTest.cpp
+++ b/libclamav/c++/llvm/unittests/VMCore/MetadataTest.cpp
@@ -10,6 +10,7 @@
 #include "gtest/gtest.h"
 #include "llvm/Constants.h"
 #include "llvm/Instructions.h"
+#include "llvm/LLVMContext.h"
 #include "llvm/Metadata.h"
 #include "llvm/Module.h"
 #include "llvm/Type.h"
diff --git a/libclamav/c++/llvm/utils/FileCheck/FileCheck.cpp b/libclamav/c++/llvm/utils/FileCheck/FileCheck.cpp
index b4d1f84..101ff24 100644
--- a/libclamav/c++/llvm/utils/FileCheck/FileCheck.cpp
+++ b/libclamav/c++/llvm/utils/FileCheck/FileCheck.cpp
@@ -23,6 +23,7 @@
 #include "llvm/Support/SourceMgr.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/System/Signals.h"
+#include "llvm/ADT/SmallString.h"
 #include "llvm/ADT/StringMap.h"
 #include <algorithm>
 using namespace llvm;
@@ -82,10 +83,21 @@ public:
   /// variables and is updated if this match defines new values.
   size_t Match(StringRef Buffer, size_t &MatchLen,
                StringMap<StringRef> &VariableTable) const;
-  
+
+  /// PrintFailureInfo - Print additional information about a failure to match
+  /// involving this pattern.
+  void PrintFailureInfo(const SourceMgr &SM, StringRef Buffer,
+                        const StringMap<StringRef> &VariableTable) const;
+
 private:
   static void AddFixedStringToRegEx(StringRef FixedStr, std::string &TheStr);
   bool AddRegExToRegEx(StringRef RegExStr, unsigned &CurParen, SourceMgr &SM);
+
+  /// ComputeMatchDistance - Compute an arbitrary estimate for the quality of
+  /// matching this pattern at the start of \arg Buffer; a distance of zero
+  /// should correspond to a perfect match.
+  unsigned ComputeMatchDistance(StringRef Buffer,
+                               const StringMap<StringRef> &VariableTable) const;
 };
 
 
@@ -140,7 +152,7 @@ bool Pattern::ParsePattern(StringRef PatternStr, SourceMgr &SM) {
     // Named RegEx matches.  These are of two forms: [[foo:.*]] which matches .*
     // (or some other regex) and assigns it to the FileCheck variable 'foo'. The
     // second form is [[foo]] which is a reference to foo.  The variable name
-    // itself must be of the form "[a-zA-Z][0-9a-zA-Z]*", otherwise we reject
+    // itself must be of the form "[a-zA-Z_][0-9a-zA-Z_]*", otherwise we reject
     // it.  This is to catch some common errors.
     if (PatternStr.size() >= 2 &&
         PatternStr[0] == '[' && PatternStr[1] == '[') {
@@ -167,7 +179,8 @@ bool Pattern::ParsePattern(StringRef PatternStr, SourceMgr &SM) {
 
       // Verify that the name is well formed.
       for (unsigned i = 0, e = Name.size(); i != e; ++i)
-        if ((Name[i] < 'a' || Name[i] > 'z') &&
+        if (Name[i] != '_' &&
+            (Name[i] < 'a' || Name[i] > 'z') &&
             (Name[i] < 'A' || Name[i] > 'Z') &&
             (Name[i] < '0' || Name[i] > '9')) {
           SM.PrintMessage(SMLoc::getFromPointer(Name.data()+i),
@@ -275,9 +288,15 @@ size_t Pattern::Match(StringRef Buffer, size_t &MatchLen,
     
     unsigned InsertOffset = 0;
     for (unsigned i = 0, e = VariableUses.size(); i != e; ++i) {
+      StringMap<StringRef>::iterator it =
+        VariableTable.find(VariableUses[i].first);
+      // If the variable is undefined, return an error.
+      if (it == VariableTable.end())
+        return StringRef::npos;
+
       // Look up the value and escape it so that we can plop it into the regex.
       std::string Value;
-      AddFixedStringToRegEx(VariableTable[VariableUses[i].first], Value);
+      AddFixedStringToRegEx(it->second, Value);
       
       // Plop it into the regex at the adjusted offset.
       TmpStr.insert(TmpStr.begin()+VariableUses[i].second+InsertOffset,
@@ -309,6 +328,86 @@ size_t Pattern::Match(StringRef Buffer, size_t &MatchLen,
   return FullMatch.data()-Buffer.data();
 }
 
+unsigned Pattern::ComputeMatchDistance(StringRef Buffer,
+                              const StringMap<StringRef> &VariableTable) const {
+  // Just count the number of mismatched characters. For regular expressions,
+  // we just compare against the regex itself and hope for the best.
+  //
+  // FIXME: One easy improvement here is have the regex lib generate a single
+  // example regular expression which matches, and use that as the example
+  // string.
+  StringRef ExampleString(FixedStr);
+  if (ExampleString.empty())
+    ExampleString = RegExStr;
+
+  unsigned Distance = 0;
+  for (unsigned i = 0, e = ExampleString.size(); i != e; ++i)
+    if (Buffer.substr(i, 1) != ExampleString.substr(i, 1))
+      ++Distance;
+
+  return Distance;
+}
+
+void Pattern::PrintFailureInfo(const SourceMgr &SM, StringRef Buffer,
+                               const StringMap<StringRef> &VariableTable) const{
+  // If this was a regular expression using variables, print the current
+  // variable values.
+  if (!VariableUses.empty()) {
+    for (unsigned i = 0, e = VariableUses.size(); i != e; ++i) {
+      StringRef Var = VariableUses[i].first;
+      StringMap<StringRef>::const_iterator it = VariableTable.find(Var);
+      SmallString<256> Msg;
+      raw_svector_ostream OS(Msg);
+
+      // Check for undefined variable references.
+      if (it == VariableTable.end()) {
+        OS << "uses undefined variable \"";
+        OS.write_escaped(Var) << "\"";
+      } else {
+        OS << "with variable \"";
+        OS.write_escaped(Var) << "\" equal to \"";
+        OS.write_escaped(it->second) << "\"";
+      }
+
+      SM.PrintMessage(SMLoc::getFromPointer(Buffer.data()), OS.str(), "note",
+                      /*ShowLine=*/false);
+    }
+  }
+
+  // Attempt to find the closest/best fuzzy match.  Usually an error happens
+  // because some string in the output didn't exactly match. In these cases, we
+  // would like to show the user a best guess at what "should have" matched, to
+  // save them having to actually check the input manually.
+  size_t NumLinesForward = 0;
+  size_t Best = StringRef::npos;
+  double BestQuality = 0;
+
+  // Use an arbitrary 4k limit on how far we will search.
+  for (size_t i = 0, e = std::min(4096, int(Buffer.size())); i != e; ++i) {
+    if (Buffer[i] == '\n')
+      ++NumLinesForward;
+
+    // Compute the "quality" of this match as an arbitrary combination of the
+    // match distance and the number of lines skipped to get to this match.
+    unsigned Distance = ComputeMatchDistance(Buffer.substr(i), VariableTable);
+    double Quality = Distance + (NumLinesForward / 100.);
+
+    if (Quality < BestQuality || Best == StringRef::npos) {
+      Best = i;
+      BestQuality = Quality;
+    }
+  }
+
+  if (BestQuality < 50) {
+    // Print the "possible intended match here" line if we found something
+    // reasonable.
+    SM.PrintMessage(SMLoc::getFromPointer(Buffer.data() + Best),
+                    "possible intended match here", "note");
+
+    // FIXME: If we wanted to be really friendly we would show why the match
+    // failed, as it can be hard to spot simple one character differences.
+  }
+}
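
To make the scoring concrete (values hypothetical): a candidate position
whose example string differs in two characters and that sits three lines
below the scan start scores

    unsigned Distance = 2;                  // mismatched characters
    double Quality = Distance + (3 / 100.); // 2.03
    // A perfect match ten lines away scores 0.10 and is still preferred.
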
 
 //===----------------------------------------------------------------------===//
 // Check Strings.
@@ -477,7 +576,8 @@ static bool ReadCheckFile(SourceMgr &SM,
 }
 
 static void PrintCheckFailed(const SourceMgr &SM, const CheckString &CheckStr,
-                             StringRef Buffer) {
+                             StringRef Buffer,
+                             StringMap<StringRef> &VariableTable) {
   // Otherwise, we have an error, emit an error message.
   SM.PrintMessage(CheckStr.Loc, "expected string not found in input",
                   "error");
@@ -488,6 +588,9 @@ static void PrintCheckFailed(const SourceMgr &SM, const CheckString &CheckStr,
   
   SM.PrintMessage(SMLoc::getFromPointer(Buffer.data()), "scanning from here",
                   "note");
+
+  // Allow the pattern to print additional information if desired.
+  CheckStr.Pat.PrintFailureInfo(SM, Buffer, VariableTable);
 }
 
 /// CountNumNewlinesBetween - Count the number of newlines in the specified
@@ -558,7 +661,7 @@ int main(int argc, char **argv) {
     
     // If we didn't find a match, reject the input.
     if (Buffer.empty()) {
-      PrintCheckFailed(SM, CheckStr, SearchFrom);
+      PrintCheckFailed(SM, CheckStr, SearchFrom, VariableTable);
       return 1;
     }
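
The fuzzy "possible intended match" heuristic added above is easiest to
see in miniature. The following Python sketch is illustrative only -- the
names are ours, not part of the patch -- but mirrors the same scoring:
one point per mismatching character, plus 1/100 per line skipped, with a
guess reported only when the best quality is under 50.

    def match_distance(buffer, example):
        # Count positions where the buffer disagrees with the example
        # string; running off the end of the buffer is a mismatch, as in
        # Pattern::ComputeMatchDistance.
        return sum(1 for i in range(len(example))
                   if i >= len(buffer) or buffer[i] != example[i])

    def best_guess(buffer, example):
        # Scan up to 4k forward, as PrintFailureInfo does, scoring each
        # offset by its distance plus a small penalty per line skipped.
        best, best_quality, lines_forward = None, None, 0
        for i in range(min(4096, len(buffer))):
            if buffer[i] == '\n':
                lines_forward += 1
            quality = match_distance(buffer[i:], example) + lines_forward / 100.0
            if best is None or quality < best_quality:
                best, best_quality = i, quality
        return best if best is not None and best_quality < 50 else None

For example, best_guess('int x;\nadd i32 %a', 'add i32') returns 7, the
offset just past the newline, which is where the "possible intended match
here" note would point.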
 
diff --git a/libclamav/c++/llvm/utils/Misc/zkill b/libclamav/c++/llvm/utils/Misc/zkill
new file mode 100755
index 0000000..bc0bfd5
--- /dev/null
+++ b/libclamav/c++/llvm/utils/Misc/zkill
@@ -0,0 +1,276 @@
+#!/usr/bin/env python
+
+import os
+import re
+import sys
+
+def _write_message(kind, message):
+    import inspect, os, sys
+
+    # Get the file/line where this message was generated.
+    f = inspect.currentframe()
+    # Step out of _write_message, and then out of wrapper.
+    f = f.f_back.f_back
+    file,line,_,_,_ = inspect.getframeinfo(f)
+    location = '%s:%d' % (os.path.basename(file), line)
+
+    print >>sys.stderr, '%s: %s: %s' % (location, kind, message)
+
+note = lambda message: _write_message('note', message)
+warning = lambda message: _write_message('warning', message)
+error = lambda message: (_write_message('error', message), sys.exit(1))
+
+def re_full_match(pattern, str):
+    m = re.match(pattern, str)
+    if m and m.end() != len(str):
+        m = None
+    return m
+
+def parse_time(value):
+    minutes,value = value.split(':',1)
+    if '.' in value:
+        seconds,fseconds = value.split('.',1)
+    else:
+        seconds,fseconds = value,'0'
+    return int(minutes) * 60 + int(seconds) + float('.'+fseconds)
+
+def extractExecutable(command):
+    """extractExecutable - Given a string representing a command line, attempt
+    to extract the executable path, even if it includes spaces."""
+
+    # Split into potential arguments.
+    args = command.split(' ')
+
+    # Scanning from the beginning, try to see if the first N args, when joined,
+    # exist. If so that's probably the executable.
+    for i in range(1,len(args)+1):
+        cmd = ' '.join(args[:i])
+        if os.path.exists(cmd):
+            return cmd
+
+    # Otherwise give up and return the first "argument".
+    return args[0]
+
+class Struct:
+    def __init__(self, **kwargs):
+        self.fields = kwargs.keys()
+        self.__dict__.update(kwargs)
+
+    def __repr__(self):
+        return 'Struct(%s)' % ', '.join(['%s=%r' % (k,getattr(self,k))
+                                         for k in self.fields])
+
+kExpectedPSFields = [('PID', int, 'pid'),
+                     ('USER', str, 'user'),
+                     ('COMMAND', str, 'command'),
+                     ('%CPU', float, 'cpu_percent'),
+                     ('TIME', parse_time, 'cpu_time'),
+                     ('VSZ', int, 'vmem_size'),
+                     ('RSS', int, 'rss')]
+def getProcessTable():
+    import subprocess
+    p = subprocess.Popen(['ps', 'aux'], stdout=subprocess.PIPE,
+                         stderr=subprocess.PIPE)
+    out,err = p.communicate()
+    res = p.wait()
+    if res:
+        error('unable to get process table')
+    elif err.strip():
+        error('unable to get process table: %s' % err)
+
+    lns = out.split('\n')
+    it = iter(lns)
+    header = it.next().split()
+    numRows = len(header)
+
+    # Make sure we have the expected fields.
+    indexes = []
+    for field in kExpectedPSFields:
+        try:
+            indexes.append(header.index(field[0]))
+        except:
+            if opts.debug:
+                raise
+            error('unable to get process table, no %r field.' % field[0])
+
+    table = []
+    for i,ln in enumerate(it):
+        if not ln.strip():
+            continue
+
+        fields = ln.split(None, numRows - 1)
+        if len(fields) != numRows:
+            warning('unable to process row: %r' % ln)
+            continue
+
+        record = {}
+        for field,idx in zip(kExpectedPSFields, indexes):
+            value = fields[idx]
+            try:
+                record[field[2]] = field[1](value)
+            except:
+                if opts.debug:
+                    raise
+                warning('unable to process %r in row: %r' % (field[0], ln))
+                break
+        else:
+            # Add our best guess at the executable.
+            record['executable'] = extractExecutable(record['command'])
+            table.append(Struct(**record))
+
+    return table
+
+def getSignalValue(name):
+    import signal
+    if name.startswith('SIG'):
+        value = getattr(signal, name, None)
+        if value and isinstance(value, int):
+            return value
+    error('unknown signal: %r' % name)
+
+import signal
+kSignals = {}
+for name in dir(signal):
+    if name.startswith('SIG') and name == name.upper() and name.isalpha():
+        kSignals[name[3:]] = getattr(signal, name)
+
+def main():
+    global opts
+    from optparse import OptionParser, OptionGroup
+    parser = OptionParser("usage: %prog [options] {pid}*")
+
+    # FIXME: Add -NNN and -SIGNAME options.
+
+    parser.add_option("-s", "", dest="signalName",
+                      help="Name of the signal to use (default=%default)",
+                      action="store", default='INT',
+                      choices=kSignals.keys())
+    parser.add_option("-l", "", dest="listSignals",
+                      help="List known signal names",
+                      action="store_true", default=False)
+
+    parser.add_option("-n", "--dry-run", dest="dryRun",
+                      help="Only print the actions that would be taken",
+                      action="store_true", default=False)
+    parser.add_option("-v", "--verbose", dest="verbose",
+                      help="Print more verbose output",
+                      action="store_true", default=False)
+    parser.add_option("", "--debug", dest="debug",
+                      help="Enable debugging output",
+                      action="store_true", default=False)
+    parser.add_option("", "--force", dest="force",
+                      help="Perform the specified commands, even if it seems like a bad idea",
+                      action="store_true", default=False)
+
+    inf = float('inf')
+    group = OptionGroup(parser, "Process Filters")
+    group.add_option("", "--name", dest="execName", metavar="REGEX",
+                      help="Kill processes whose name matches the given regexp",
+                      action="store", default=None)
+    group.add_option("", "--exec", dest="execPath", metavar="REGEX",
+                      help="Kill processes whose executable matches the given regexp",
+                      action="store", default=None)
+    group.add_option("", "--user", dest="userName", metavar="REGEX",
+                      help="Kill processes whose user matches the given regexp",
+                      action="store", default=None)
+    group.add_option("", "--min-cpu", dest="minCPU", metavar="PCT",
+                      help="Kill processes with CPU usage >= PCT",
+                      action="store", type=float, default=None)
+    group.add_option("", "--max-cpu", dest="maxCPU", metavar="PCT",
+                      help="Kill processes with CPU usage <= PCT",
+                      action="store", type=float, default=inf)
+    group.add_option("", "--min-mem", dest="minMem", metavar="N",
+                      help="Kill processes with virtual size >= N (MB)",
+                      action="store", type=float, default=None)
+    group.add_option("", "--max-mem", dest="maxMem", metavar="N",
+                      help="Kill processes with virtual size <= N (MB)",
+                      action="store", type=float, default=inf)
+    group.add_option("", "--min-rss", dest="minRSS", metavar="N",
+                      help="Kill processes with RSS >= N",
+                      action="store", type=float, default=None)
+    group.add_option("", "--max-rss", dest="maxRSS", metavar="N",
+                      help="Kill processes with RSS <= N",
+                      action="store", type=float, default=inf)
+    group.add_option("", "--min-time", dest="minTime", metavar="N",
+                      help="Kill processes with CPU time >= N (seconds)",
+                      action="store", type=float, default=None)
+    group.add_option("", "--max-time", dest="maxTime", metavar="N",
+                      help="Kill processes with CPU time <= N (seconds)",
+                      action="store", type=float, default=inf)
+    parser.add_option_group(group)
+
+    (opts, args) = parser.parse_args()
+
+    if opts.listSignals:
+        items = [(v,k) for k,v in kSignals.items()]
+        items.sort()
+        for i in range(0, len(items), 4):
+            print '\t'.join(['%2d) SIG%s' % (k,v)
+                             for k,v in items[i:i+4]])
+        sys.exit(0)
+
+    # Figure out the signal to use.
+    signal = kSignals[opts.signalName]
+    signalValueName = str(signal)
+    if opts.verbose:
+        name = dict((v,k) for k,v in kSignals.items()).get(signal,None)
+        if name:
+            signalValueName = name
+            note('using signal %d (SIG%s)' % (signal, name))
+        else:
+            note('using signal %d' % signal)
+
+    # Get the pid list to consider.
+    pids = set()
+    for arg in args:
+        try:
+            pids.add(int(arg))
+        except:
+            parser.error('invalid positional argument: %r' % arg)
+
+    filtered = ps = getProcessTable()
+
+    # Apply filters.
+    if pids:
+        filtered = [p for p in filtered
+                    if p.pid in pids]
+    if opts.execName is not None:
+        filtered = [p for p in filtered
+                    if re_full_match(opts.execName,
+                                     os.path.basename(p.executable))]
+    if opts.execPath is not None:
+        filtered = [p for p in filtered
+                    if re_full_match(opts.execPath, p.executable)]
+    if opts.userName is not None:
+        filtered = [p for p in filtered
+                    if re_full_match(opts.userName, p.user)]
+    filtered = [p for p in filtered
+                if opts.minCPU <= p.cpu_percent <= opts.maxCPU]
+    filtered = [p for p in filtered
+                if opts.minMem <= float(p.vmem_size) / (1<<20) <= opts.maxMem]
+    filtered = [p for p in filtered
+                if opts.minRSS <= p.rss <= opts.maxRSS]
+    filtered = [p for p in filtered
+                if opts.minTime <= p.cpu_time <= opts.maxTime]
+
+    if len(filtered) == len(ps):
+        if not opts.force and not opts.dryRun:
+            error('refusing to kill all processes without --force')
+
+    if not filtered:
+        warning('no processes selected')
+
+    for p in filtered:
+        if opts.verbose or opts.dryRun:
+            note('kill(%r, %s) # (user=%r, executable=%r, CPU=%2.2f%%, time=%r, vmem=%r, rss=%r)' %
+                 (p.pid, signalValueName, p.user, p.executable, p.cpu_percent, p.cpu_time, p.vmem_size, p.rss))
+        if not opts.dryRun:
+            try:
+                os.kill(p.pid, signal)
+            except OSError:
+                if opts.debug:
+                    raise
+                warning('unable to kill PID: %r' % p.pid)
+
+if __name__ == '__main__':
+    main()
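
As a quick illustration of the helpers in the zkill script above, a
hypothetical Python 2 session (the inputs are made up; the definitions
from the script are assumed to be in scope):

    print parse_time('1:23.45')          # 83.45 seconds (1 min + 23.45 s)
    print re_full_match('fire.*', 'firefox') is not None  # True: full match
    print re_full_match('fire', 'firefox')                # None: partial
                                                          #   matches rejected
    print extractExecutable('/bin/ls -l /tmp')   # '/bin/ls', if it exists

A dry run such as "zkill -n --user edwin --min-cpu 50" prints the kill()
calls that would be made without sending any signal.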
diff --git a/libclamav/c++/llvm/utils/NewNightlyTest.pl b/libclamav/c++/llvm/utils/NewNightlyTest.pl
index 477df8f..a8cf8de 100755
--- a/libclamav/c++/llvm/utils/NewNightlyTest.pl
+++ b/libclamav/c++/llvm/utils/NewNightlyTest.pl
@@ -15,19 +15,50 @@ use Socket;
 # Syntax:   NightlyTest.pl [OPTIONS] [CVSROOT BUILDDIR WEBDIR]
 #   where
 # OPTIONS may include one or more of the following:
+#
+# MAIN OPTIONS:
+#  -config LLVMPATH If specified, use an existing LLVM build and only run and
+#                   report the test information. The LLVMPATH argument should
+#                   be the path to the llvm-config executable in the LLVM build.
+#                   This should be the first argument if given. NOT YET
+#                   IMPLEMENTED.
+#  -nickname NAME   The NAME argument specifies the nickname this script
+#                   will submit to the nightlytest results repository.
+#  -submit-server   Specifies a server to submit the test results to. If this
+#                   option is not specified, it defaults to
+#                   llvm.org. This is basically just the address of the
+#                   webserver.
+#  -submit-script   Specifies which script to call on the submit server. If
+#                   this option is not specified, it defaults to
+#                   /nightlytest/NightlyTestAccept.php. This is basically
+#                   everything after www.yourserver.org.
+#  -submit-aux      If specified, an auxiliary script to run in addition to the
+#                   normal submit script. The script will be passed the path to
+#                   the "sentdata.txt" file as its sole argument.
+#  -nosubmit        Do not report the test results back to a submit server.
+#
+#
+# BUILD OPTIONS (not used with -config):
 #  -nocheckout      Do not create, checkout, update, or configure
 #                   the source tree.
 #  -noremove        Do not remove the BUILDDIR after it has been built.
 #  -noremoveresults Do not remove the WEBDIR after it has been built.
+#  -noclean         Do not run 'make clean' before building.
 #  -nobuild         Do not build llvm. If tests are enabled, perform them
 #                   on the llvm build specified in the build directory.
-#  -notest          Do not even attempt to run the test programs.
-#  -nodejagnu       Do not run feature or regression tests
-#  -parallel        Run parallel jobs with GNU Make (see -parallel-jobs).
-#  -parallel-jobs   The number of parallel Make jobs to use (default is two).
-#  -with-clang      Checkout Clang source into tools/clang.
 #  -release         Build an LLVM Release version
 #  -release-asserts Build an LLVM ReleaseAsserts version
+#  -disable-bindings     Disable building LLVM bindings.
+#  -with-clang      Checkout Clang source into tools/clang.
+#  -compileflags    Next argument specifies extra options passed to make when
+#                   building LLVM.
+#  -use-gmake       Use gmake instead of the default make command to build
+#                   llvm and run tests.
+#  -llvmgccdir      Next argument specifies the llvm-gcc install prefix.
+#
+# TESTING OPTIONS:
+#  -notest          Do not even attempt to run the test programs.
+#  -nodejagnu       Do not run feature or regression tests
 #  -enable-llcbeta  Enable testing of beta features in llc.
 #  -enable-lli      Enable testing of lli (interpreter) features, default is off
 #  -disable-pic	    Disable building with Position Independent Code.
@@ -35,19 +66,25 @@ use Socket;
 #  -disable-jit     Disable JIT tests in the nightly tester.
 #  -disable-cbe     Disable C backend tests in the nightly tester.
 #  -disable-lto     Disable link time optimization.
-#  -disable-bindings     Disable building LLVM bindings.
+#  -test-cflags     Next argument specifies the C compilation options that
+#                   override the default when running the testsuite.
+#  -test-cxxflags   Next argument specifies the C++ compilation options that
+#                   override the default when running the testsuite.
+#  -extraflags      Next argument specifies extra options that are passed to
+#                   compile the tests.
+#  -noexternals     Do not run the external tests (for cases where povray
+#                   or SPEC are not installed).
+#  -with-externals  Specify a directory where the external tests are located.
+#
+# OTHER OPTIONS:
+#  -parallel        Run parallel jobs with GNU Make (see -parallel-jobs).
+#  -parallel-jobs   The number of parallel Make jobs to use (default is two).
+#  -parallel-test   Allow parallel execution of llvm-test
 #  -verbose         Turn on some debug output
-#  -debug           Print information useful only to maintainers of this script.
 #  -nice            Checkout/Configure/Build with "nice" to reduce impact
 #                   on busy servers.
 #  -f2c             Next argument specifies path to F2C utility
-#  -nickname        The next argument specifieds the nickname this script
-#                   will submit to the nightlytest results repository.
 #  -gccpath         Path to gcc/g++ used to build LLVM
-#  -cvstag          Check out a specific CVS tag to build LLVM (useful for
-#                   testing release branches)
-#  -usecvs          Check code out from the (old) CVS Repository instead of from
-#                   the standard Subversion repository.
 #  -target          Specify the target triplet
 #  -cflags          Next argument specifies the C compilation options that
 #                   override the default.
@@ -55,40 +92,11 @@ use Socket;
 #                   override the default.
 #  -ldflags         Next argument specifies the linker options that override
 #                   the default.
-#  -test-cflags     Next argument specifies that C compilation options that
-#                   override the default when running the testsuite.
-#  -test-cxxflags   Next argument specifies that C++ compilation options that
-#                   override the default when running the testsuite.
-#  -compileflags    Next argument specifies extra options passed to make when
-#                   building LLVM.
-#  -use-gmake       Use gmake instead of the default make command to build
-#                   llvm and run tests.
 #
-#  ---------------- Options to configure llvm-test ----------------------------
-#  -extraflags      Next argument specifies extra options that are passed to
-#                   compile the tests.
-#  -noexternals     Do not run the external tests (for cases where povray
-#                   or SPEC are not installed)
-#  -with-externals  Specify a directory where the external tests are located.
-#  -submit-server   Specifies a server to submit the test results too. If this
-#                   option is not specified it defaults to
-#                   llvm.org. This is basically just the address of the
-#                   webserver
-#  -submit-script   Specifies which script to call on the submit server. If
-#                   this option is not specified it defaults to
-#                   /nightlytest/NightlyTestAccept.php. This is basically
-#                   everything after the www.yourserver.org.
-#  -submit-aux      If specified, an auxiliary script to run in addition to the
-#                   normal submit script. The script will be passed the path to
-#                   the "sentdata.txt" file as its sole argument.
-#  -nosubmit        Do not report the test results back to a submit server.
-#
-# CVSROOT is the CVS repository from which the tree will be checked out,
-#  specified either in the full :method:user@host:/dir syntax, or
-#  just /dir if using a local repo.
+# CVSROOT is ignored; it is accepted only for backwards compatibility.
 # BUILDDIR is the directory where sources for this test run will be checked out
 #  AND objects for this test run will be built. This directory MUST NOT
-#  exist before the script is run; it will be created by the cvs checkout
+#  exist before the script is run; it will be created by the svn checkout
 #  process and erased (unless -noremove is specified; see above.)
 # WEBDIR is the directory into which the test results web page will be written,
 #  AND in which the "index.html" is assumed to be a symlink to the most recent
@@ -106,37 +114,27 @@ my $SVNURL     = $ENV{"SVNURL"};
 $SVNURL        = 'http://llvm.org/svn/llvm-project' unless $SVNURL;
 my $TestSVNURL = $ENV{"TestSVNURL"};
 $TestSVNURL    = 'http://llvm.org/svn/llvm-project' unless $TestSVNURL;
-my $CVSRootDir = $ENV{'CVSROOT'};
-$CVSRootDir    = "/home/vadve/shared/PublicCVS" unless $CVSRootDir;
 my $BuildDir   = $ENV{'BUILDDIR'};
-$BuildDir      = "$HOME/buildtest" unless $BuildDir;
 my $WebDir     = $ENV{'WEBDIR'};
-$WebDir        = "$HOME/cvs/testresults-X86" unless $WebDir;
-
-my $LLVMSrcDir   = $ENV{'LLVMSRCDIR'};
-$LLVMSrcDir    = "$BuildDir/llvm" unless $LLVMSrcDir;
-my $LLVMObjDir   = $ENV{'LLVMOBJDIR'};
-$LLVMObjDir    = "$BuildDir/llvm" unless $LLVMObjDir;
-my $LLVMTestDir   = $ENV{'LLVMTESTDIR'};
-$LLVMTestDir    = "$BuildDir/llvm/projects/llvm-test" unless $LLVMTestDir;
 
 ##############################################################
 #
 # Calculate the date prefix...
 #
 ##############################################################
+use POSIX;
 @TIME = localtime;
-my $DATE = sprintf "%4d-%02d-%02d_%02d-%02d", $TIME[5]+1900, $TIME[4]+1, $TIME[3], $TIME[1], $TIME[0];
+my $DATE = strftime("%Y-%m-%d_%H-%M-%S", localtime());
 
 ##############################################################
 #
 # Parse arguments...
 #
 ##############################################################
+$CONFIG_PATH="";
 $CONFIGUREARGS="";
 $nickname="";
 $NOTEST=0;
-$USESVN=1;
 $MAKECMD="make";
 $SUBMITSERVER = "llvm.org";
 $SUBMITSCRIPT = "/nightlytest/NightlyTestAccept.php";
@@ -145,27 +143,36 @@ $SUBMIT = 1;
 $PARALLELJOBS = "2";
 my $TESTFLAGS="";
 
+if ($ENV{'LLVMGCCDIR'}) {
+  $CONFIGUREARGS .= " --with-llvmgccdir=" . $ENV{'LLVMGCCDIR'};
+  $LLVMGCCPATH = $ENV{'LLVMGCCDIR'} . '/bin';
+}
+else {
+  $LLVMGCCPATH = "";
+}
+
 while (scalar(@ARGV) and ($_ = $ARGV[0], /^[-+]/)) {
   shift;
   last if /^--$/;  # Stop processing arguments on --
 
   # List command line options here...
+  if (/^-config$/)         { $CONFIG_PATH = "$ARGV[0]"; shift; next; }
   if (/^-nocheckout$/)     { $NOCHECKOUT = 1; next; }
-  if (/^-nocvsstats$/)     { $NOCVSSTATS = 1; next; }
+  if (/^-noclean$/)        { $NOCLEAN = 1; next; }
   if (/^-noremove$/)       { $NOREMOVE = 1; next; }
   if (/^-noremoveatend$/)  { $NOREMOVEATEND = 1; next; }
   if (/^-noremoveresults$/){ $NOREMOVERESULTS = 1; next; }
   if (/^-notest$/)         { $NOTEST = 1; next; }
   if (/^-norunningtests$/) { next; } # Backward compatibility, ignored.
   if (/^-parallel-jobs$/)  { $PARALLELJOBS = "$ARGV[0]"; shift; next;}
-  if (/^-parallel$/)       { $MAKEOPTS = "$MAKEOPTS -j$PARALLELJOBS -l3.0"; next; }
+  if (/^-parallel$/)       { $MAKEOPTS = "$MAKEOPTS -j$PARALLELJOBS"; next; }
+  if (/^-parallel-test$/)  { $PROGTESTOPTS .= " ENABLE_PARALLEL_REPORT=1"; next; }
   if (/^-with-clang$/)     { $WITHCLANG = 1; next; }
   if (/^-release$/)        { $MAKEOPTS = "$MAKEOPTS ENABLE_OPTIMIZED=1 ".
-                             "OPTIMIZE_OPTION=-O2"; $BUILDTYPE="release"; next;}
+                             "OPTIMIZE_OPTION=-O2"; next;}
   if (/^-release-asserts$/){ $MAKEOPTS = "$MAKEOPTS ENABLE_OPTIMIZED=1 ".
                              "DISABLE_ASSERTIONS=1 ".
-                             "OPTIMIZE_OPTION=-O2";
-                             $BUILDTYPE="release-asserts"; next;}
+                             "OPTIMIZE_OPTION=-O2"; next;}
   if (/^-enable-llcbeta$/) { $PROGTESTOPTS .= " ENABLE_LLCBETA=1"; next; }
   if (/^-disable-pic$/)    { $CONFIGUREARGS .= " --enable-pic=no"; next; }
   if (/^-enable-lli$/)     { $PROGTESTOPTS .= " ENABLE_LLI=1";
@@ -180,7 +187,6 @@ while (scalar(@ARGV) and ($_ = $ARGV[0], /^[-+]/)) {
   if (/^-test-opts$/)      { $PROGTESTOPTS .= " $ARGV[0]"; shift; next; }
   if (/^-verbose$/)        { $VERBOSE = 1; next; }
   if (/^-teelogs$/)        { $TEELOGS = 1; next; }
-  if (/^-debug$/)          { $DEBUG = 1; next; }
   if (/^-nice$/)           { $NICE = "nice "; next; }
   if (/^-f2c$/)            { $CONFIGUREARGS .= " --with-f2c=$ARGV[0]";
                              shift; next; }
@@ -197,9 +203,6 @@ while (scalar(@ARGV) and ($_ = $ARGV[0], /^[-+]/)) {
                              " CC=$ARGV[0]/gcc CXX=$ARGV[0]/g++";
                              $GCCPATH=$ARGV[0]; shift;  next; }
   else                     { $GCCPATH=""; }
-  if (/^-cvstag/)          { $CVSCOOPT .= " -r $ARGV[0]"; shift; next; }
-  else                     { $CVSCOOPT="";}
-  if (/^-usecvs/)          { $USESVN = 0; }
   if (/^-target/)          { $CONFIGUREARGS .= " --target=$ARGV[0]";
                              shift; next; }
   if (/^-cflags/)          { $MAKEOPTS = "$MAKEOPTS C.Flags=\'$ARGV[0]\'";
@@ -213,96 +216,86 @@ while (scalar(@ARGV) and ($_ = $ARGV[0], /^[-+]/)) {
   if (/^-test-cxxflags/)   { $TESTFLAGS = "$TESTFLAGS CXXFLAGS=\'$ARGV[0]\'";
                              shift; next; }
   if (/^-compileflags/)    { $MAKEOPTS = "$MAKEOPTS $ARGV[0]"; shift; next; }
+  if (/^-llvmgccdir/)      { $CONFIGUREARGS .= " --with-llvmgccdir=\'$ARGV[0]\'";
+                             $LLVMGCCPATH = $ARGV[0] . '/bin';
+                             shift; next;}
+  if (/^-noexternals$/)    { $NOEXTERNALS = 1; next; }
   if (/^-use-gmake/)       { $MAKECMD = "gmake"; shift; next; }
   if (/^-extraflags/)      { $CONFIGUREARGS .=
                              " --with-extra-options=\'$ARGV[0]\'"; shift; next;}
   if (/^-noexternals$/)    { $NOEXTERNALS = 1; next; }
-  if (/^-nodejagnu$/)      { $NODEJAGNU = 1; next; }
+  if (/^-nodejagnu$/)      { next; }
   if (/^-nobuild$/)        { $NOBUILD = 1; next; }
   print "Unknown option: $_ : ignoring!\n";
 }
 
-if ($ENV{'LLVMGCCDIR'}) {
-  $CONFIGUREARGS .= " --with-llvmgccdir=" . $ENV{'LLVMGCCDIR'};
-  $LLVMGCCPATH = $ENV{'LLVMGCCDIR'} . '/bin';
-}
-else {
-  $LLVMGCCPATH = "";
-}
-
 if ($CONFIGUREARGS !~ /--disable-jit/) {
   $CONFIGUREARGS .= " --enable-jit";
 }
 
-if (@ARGV != 0 and @ARGV != 3 and $VERBOSE) {
-  foreach $x (@ARGV) {
-    print "$x\n";
-  }
-  print "Must specify 0 or 3 options!";
+if (@ARGV != 0 and @ARGV != 3) {
+  die "error: must specify 0 or 3 options!";
 }
 
 if (@ARGV == 3) {
-  $CVSRootDir = $ARGV[0];
+  if ($CONFIG_PATH ne "") {
+      die "error: arguments are unsupported in -config mode,";
+  }
+
+  # ARGV[0] used to be the CVS root, ignored for backward compatibility.
   $BuildDir   = $ARGV[1];
   $WebDir     = $ARGV[2];
 }
 
-if ($CVSRootDir eq "" or
-    $BuildDir   eq "" or
-    $WebDir     eq "") {
-  die("please specify a cvs root directory, a build directory, and a ".
-       "web directory");
- }
+if ($CONFIG_PATH ne "") {
+  $BuildDir = "";
+  $SVNURL = $TestSVNURL = "";
+  if ($WebDir     eq "") {
+    die("please specify a web directory");
+  }
+} else {
+  if ($BuildDir   eq "" or
+      $WebDir     eq "") {
+    die("please specify a build directory, and a web directory");
+  }
+}
 
 if ($nickname eq "") {
   die ("Please invoke NewNightlyTest.pl with command line option " .
        "\"-nickname <nickname>\"");
 }
 
-if ($BUILDTYPE ne "release" && $BUILDTYPE ne "release-asserts") {
-  $BUILDTYPE = "debug";
-}
+my $LLVMSrcDir   = $ENV{'LLVMSRCDIR'};
+$LLVMSrcDir    = "$BuildDir/llvm" unless $LLVMSrcDir;
+my $LLVMObjDir   = $ENV{'LLVMOBJDIR'};
+$LLVMObjDir    = "$BuildDir/llvm" unless $LLVMObjDir;
+my $LLVMTestDir   = $ENV{'LLVMTESTDIR'};
+$LLVMTestDir    = "$BuildDir/llvm/projects/llvm-test" unless $LLVMTestDir;
 
 ##############################################################
 #
-#define the file names we'll use
+# Define the file names we'll use
 #
 ##############################################################
+
 my $Prefix = "$WebDir/$DATE";
-my $BuildLog = "$Prefix-Build-Log.txt";
-my $COLog = "$Prefix-CVS-Log.txt";
 my $SingleSourceLog = "$Prefix-SingleSource-ProgramTest.txt.gz";
 my $MultiSourceLog = "$Prefix-MultiSource-ProgramTest.txt.gz";
 my $ExternalLog = "$Prefix-External-ProgramTest.txt.gz";
-my $DejagnuLog = "$Prefix-Dejagnu-testrun.log";
-my $DejagnuSum = "$Prefix-Dejagnu-testrun.sum";
-my $DejagnuTestsLog = "$Prefix-DejagnuTests-Log.txt";
-if (! -d $WebDir) {
-  mkdir $WebDir, 0777;
-  if($VERBOSE){
-    warn "$WebDir did not exist; creating it.\n";
-  }
-}
 
-if ($VERBOSE) {
-  print "INITIALIZED\n";
-  if ($USESVN) {
-    print "SVN URL  = $SVNURL\n";
-  } else {
-    print "CVS Root = $CVSRootDir\n";
-  }
-  print "COLog    = $COLog\n";
-  print "BuildDir = $BuildDir\n";
-  print "WebDir   = $WebDir\n";
-  print "Prefix   = $Prefix\n";
-  print "BuildLog = $BuildLog\n";
-}
+# These are only valid in non-config mode.
+my $ConfigureLog = "", $BuildLog = "", $COLog = "";
+my $DejagnuLog = "", $DejagnuSum = "", $DejagnuLog = "";
+
+# Are we in config mode?
+my $ConfigMode = 0;
 
 ##############################################################
 #
 # Helper functions
 #
 ##############################################################
+
 sub GetDir {
   my $Suffix = shift;
   opendir DH, $WebDir;
@@ -349,61 +342,25 @@ sub RunAppendingLoggedCommand {
   }
 }
 
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#
-# DiffFiles - Diff the current version of the file against the last version of
-# the file, reporting things added and removed.  This is used to report, for
-# example, added and removed warnings.  This returns a pair (added, removed)
-#
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-sub DiffFiles {
-  my $Suffix = shift;
-  my @Others = GetDir $Suffix;
-  if (@Others == 0) {  # No other files?  We added all entries...
-    return (`cat $WebDir/$DATE$Suffix`, "");
-  }
-# Diff the files now...
-  my @Diffs = split "\n", `diff $WebDir/$DATE$Suffix $WebDir/$Others[0]`;
-  my $Added   = join "\n", grep /^</, @Diffs;
-  my $Removed = join "\n", grep /^>/, @Diffs;
-  $Added =~ s/^< //gm;
-  $Removed =~ s/^> //gm;
-  return ($Added, $Removed);
-}
-
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 sub GetRegex {   # (Regex with ()'s, value)
-  $_[1] =~ /$_[0]/m;
-  return $1
-    if (defined($1));
+  if ($_[1] =~ /$_[0]/m) {
+    return $1;
+  }
   return "0";
 }
 
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-sub GetRegexNum {
-  my ($Regex, $Num, $Regex2, $File) = @_;
-  my @Items = split "\n", `grep '$Regex' $File`;
-  return GetRegex $Regex2, $Items[$Num];
-}
-
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 sub ChangeDir { # directory, logical name
   my ($dir,$name) = @_;
   chomp($dir);
   if ( $VERBOSE ) { print "Changing To: $name ($dir)\n"; }
   $result = chdir($dir);
   if (!$result) {
-    print "ERROR!!! Cannot change directory to: $name ($dir) because $!";
+    print "ERROR!!! Cannot change directory to: $name ($dir) because $!\n";
     return 0;
   }
   return 1;
 }
 
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 sub ReadFile {
   if (open (FILE, $_[0])) {
     undef $/;
@@ -417,16 +374,12 @@ sub ReadFile {
   }
 }
 
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 sub WriteFile {  # (filename, contents)
   open (FILE, ">$_[0]") or die "Could not open file '$_[0]' for writing!\n";
   print FILE $_[1];
   close FILE;
 }
 
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 sub CopyFile { #filename, newfile
   my ($file, $newfile) = @_;
   chomp($file);
@@ -435,74 +388,15 @@ sub CopyFile { #filename, newfile
 }
 
 #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-sub AddRecord {
-  my ($Val, $Filename,$WebDir) = @_;
-  my @Records;
-  if (open FILE, "$WebDir/$Filename") {
-    @Records = grep !/$DATE/, split "\n", <FILE>;
-    close FILE;
-  }
-  push @Records, "$DATE: $Val";
-  WriteFile "$WebDir/$Filename", (join "\n", @Records) . "\n";
-}
-
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#
-# FormatTime - Convert a time from 1m23.45 into 83.45
-#
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-sub FormatTime {
-  my $Time = shift;
-  if ($Time =~ m/([0-9]+)m([0-9.]+)/) {
-    $Time = sprintf("%7.4f", $1*60.0+$2);
-  }
-  return $Time;
-}
-
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#
-# This function is meant to read in the dejagnu sum file and
-# return a string with only the results (i.e. PASS/FAIL/XPASS/
-# XFAIL).
-#
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-sub GetDejagnuTestResults { # (filename, log)
-    my ($filename, $DejagnuLog) = @_;
-    my @lines;
-    $/ = "\n"; #Make sure we're going line at a time.
-
-    if( $VERBOSE) { print "DEJAGNU TEST RESULTS:\n"; }
-
-    if (open SRCHFILE, $filename) {
-        # Process test results
-        while ( <SRCHFILE> ) {
-            if ( length($_) > 1 ) {
-                chomp($_);
-                if ( m/^(PASS|XPASS|FAIL|XFAIL): .*\/llvm\/test\/(.*)$/ ) {
-                    push(@lines, "$1: test/$2");
-                }
-            }
-        }
-    }
-    close SRCHFILE;
-
-    my $content = join("\n", @lines);
-    return $content;
-}
-
-
-
-#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 #
 # This function acts as a mini web browser submitting data
 # to our central server via the post method
 #
 #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-sub SendData{
+sub SendData {
     $host = $_[0];
     $file = $_[1];
-    $variables=$_[2];
+    $variables = $_[2];
 
     # Write out the "...-sentdata.txt" file.
 
@@ -517,7 +411,7 @@ sub SendData{
         system "$SUBMITAUX \"$Prefix-sentdata.txt\"";
     }
 
-    if (!$SUBMIT) { 
+    if (!$SUBMIT) {
         return "Skipped standard submit.\n";
     }
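
For reference, the POST that SendData performs over a raw socket below is
roughly equivalent to the following Python 2 sketch (illustrative only;
the form fields shown are invented, and urllib does the socket work the
Perl code does by hand):

    import urllib
    # Default server and script from this file: llvm.org and
    # /nightlytest/NightlyTestAccept.php.
    url = 'http://llvm.org/nightlytest/NightlyTestAccept.php'
    variables = {'nickname': 'mybot', 'buildstatus': 'OK'}  # made-up fields
    response = urllib.urlopen(url, urllib.urlencode(variables))  # POST
    print response.read()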
 
@@ -531,9 +425,9 @@ sub SendData{
     }
 
     # Send the data to the server.
-    # 
+    #
     # FIXME: This code should be more robust?
-    
+
     $port=80;
     $socketaddr= sockaddr_in $port, inet_aton $host or die "Bad hostname\n";
     socket SOCK, PF_INET, SOCK_STREAM, getprotobyname('tcp') or
@@ -561,17 +455,13 @@ sub SendData{
 
 ##############################################################
 #
-# Getting Start timestamp
+# Individual Build & Test Functions
 #
 ##############################################################
-$starttime = `date "+20%y-%m-%d %H:%M:%S"`;
 
-##############################################################
-#
-# Create the CVS repository directory
-#
-##############################################################
-if (!$NOCHECKOUT) {
+# Create the source repository directory.
+sub CheckoutSource {
+  die "Invalid call!" unless $ConfigMode == 0;
   if (-d $BuildDir) {
     if (!$NOREMOVE) {
       if ( $VERBOSE ) {
@@ -587,341 +477,41 @@ if (!$NOCHECKOUT) {
   } else {
     mkdir $BuildDir or die "Could not create checkout directory $BuildDir!";
   }
-}
 
-
-##############################################################
-#
-# Check out the llvm tree, using either SVN or CVS
-#
-##############################################################
-if (!$NOCHECKOUT) {
   ChangeDir( $BuildDir, "checkout directory" );
-  if ($USESVN) {
-      my $SVNCMD = "$NICE svn co --non-interactive $SVNURL";
-      my $SVNCMD2 = "$NICE svn co --non-interactive $TestSVNURL";
-      RunLoggedCommand("( time -p $SVNCMD/llvm/trunk llvm; cd llvm/projects ; " .
-                       "$SVNCMD2/test-suite/trunk llvm-test )", $COLog,
-                       "CHECKOUT LLVM");
-      if ($WITHCLANG) {
-        my $SVNCMD = "$NICE svn co --non-interactive $SVNURL/cfe/trunk";
-        RunLoggedCommand("( time -p cd llvm/tools ; $SVNCMD clang )", $COLog,
-                         "CHECKOUT CLANG");
-      }
-  } else {
-    my $CVSOPT = "";
-    $CVSOPT = "-z3" # Use compression if going over ssh.
-      if $CVSRootDir =~ /^:ext:/;
-    my $CVSCMD = "$NICE cvs $CVSOPT -d $CVSRootDir co -P $CVSCOOPT";
-    RunLoggedCommand("( time -p $CVSCMD llvm; cd llvm/projects ; " .
-                     "$CVSCMD llvm-test )", $COLog,
-                     "CHECKOUT LLVM-TEST");
+  my $SVNCMD = "$NICE svn co --non-interactive";
+  RunLoggedCommand("( time -p $SVNCMD $SVNURL/llvm/trunk llvm; cd llvm/projects ; " .
+                   "  $SVNCMD $TestSVNURL/test-suite/trunk llvm-test )", $COLog,
+                   "CHECKOUT LLVM");
+  if ($WITHCLANG) {
+      RunLoggedCommand("( cd llvm/tools ; " .
+                       "  $SVNCMD $SVNURL/cfe/trunk clang )", $COLog,
+                       "CHECKOUT CLANG");
   }
 }
-ChangeDir( $LLVMSrcDir , "llvm source directory") ;
-
-##############################################################
-#
-# Get some static statistics about the current state of CVS
-#
-# This can probably be put on the server side
-#
-##############################################################
-my $CheckoutTime_Wall = GetRegex "([0-9.]+)", `grep '^real' $COLog`;
-my $CheckoutTime_User = GetRegex "([0-9.]+)", `grep '^user' $COLog`;
-my $CheckoutTime_Sys = GetRegex "([0-9.]+)", `grep '^sys' $COLog`;
-my $CheckoutTime_CPU = $CVSCheckoutTime_User + $CVSCheckoutTime_Sys;
-
-my $NumFilesInCVS = 0;
-my $NumDirsInCVS  = 0;
-if ($USESVN) {
-  $NumFilesInCVS = `egrep '^A' $COLog | wc -l` + 0;
-  $NumDirsInCVS  = `sed -e 's#/[^/]*\$##' $COLog | sort | uniq | wc -l` + 0;
-} else {
-  $NumFilesInCVS = `egrep '^U' $COLog | wc -l` + 0;
-  $NumDirsInCVS  = `egrep '^cvs (checkout|server|update):' $COLog | wc -l` + 0;
-}
-
-##############################################################
-#
-# Extract some information from the CVS history... use a hash so no duplicate
-# stuff is stored. This gets the history from the previous days worth
-# of cvs activity and parses it.
-#
-##############################################################
-
-# This just computes a reasonably accurate #of seconds since 2000. It doesn't
-# have to be perfect as its only used for comparing date ranges within a couple
-# of days.
-sub ConvertToSeconds {
-  my ($sec, $min, $hour, $day, $mon, $yr) = @_;
-  my $Result = ($yr - 2000) * 12;
-  $Result += $mon;
-  $Result *= 31;
-  $Result += $day;
-  $Result *= 24;
-  $Result += $hour;
-  $Result *= 60;
-  $Result += $min;
-  $Result *= 60;
-  $Result += $sec;
-  return $Result;
-}
-
-my (%AddedFiles, %ModifiedFiles, %RemovedFiles, %UsersCommitted, %UsersUpdated);
-
-if (!$NOCVSSTATS) {
-  if ($VERBOSE) { print "CHANGE HISTORY ANALYSIS STAGE\n"; }
-
-  if ($USESVN) {
-    @SVNHistory = split /<logentry/, `svn log --non-interactive --xml --verbose -r{$DATE}:HEAD`;
-    # Skip very first entry because it is the XML header cruft
-    shift @SVNHistory;
-    my $Now = time();
-    foreach $Record (@SVNHistory) {
-      my @Lines = split "\n", $Record;
-      my ($Author, $Date, $Revision);
-      # Get the date and see if its one we want to process.
-      my ($Year, $Month, $Day, $Hour, $Min, $Sec);
-      if ($Lines[3] =~ /<date>(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})/){
-        $Year = $1; $Month = $2; $Day = $3; $Hour = $4; $Min = $5; $Sec = $6;
-      }
-      my $Then = ConvertToSeconds($Sec, $Min, $Hour, $Day, $Month, $Year);
-      # Get the current date and compute when "yesterday" is.
-      my ($NSec, $NMin, $NHour, $NDay, $NMon, $NYear) = gmtime();
-      my $Now = ConvertToSeconds( $NSec, $NMin, $NHour, $NDay, $NMon, $NYear);
-      if (($Now - 24*60*60) > $Then) {
-        next;
-      }
-      if ($Lines[1] =~ /   revision="([0-9]*)">/) {
-        $Revision = $1;
-      }
-      if ($Lines[2] =~ /<author>([^<]*)<\/author>/) {
-        $Author = $1;
-      }
-      $UsersCommitted{$Author} = 1;
-      $Date = $Year . "-" . $Month . "-" . $Day;
-      $Time = $Hour . ":" . $Min . ":" . $Sec;
-      print "Rev: $Revision, Author: $Author, Date: $Date, Time: $Time\n";
-      for ($i = 6; $i < $#Lines; $i += 2 ) {
-        if ($Lines[$i] =~ /^   action="(.)">([^<]*)</) {
-          if ($1 == "A") {
-            $AddedFiles{$2} = 1;
-          } elsif ($1 == 'D') {
-            $RemovedFiles{$2} = 1;
-          } elsif ($1 == 'M' || $1 == 'R' || $1 == 'C') {
-            $ModifiedFiles{$2} = 1;
-          } else {
-            print "UNMATCHABLE: $Lines[$i]\n";
-          }
-        }
-      }
-    }
-  } else {
-    @CVSHistory = split "\n", `cvs history -D '1 day ago' -a -xAMROCGUW`;
-#print join "\n", @CVSHistory; print "\n";
-
-    my $DateRE = '[-/:0-9 ]+\+[0-9]+';
-
-# Loop over every record from the CVS history, filling in the hashes.
-    foreach $File (@CVSHistory) {
-        my ($Type, $Date, $UID, $Rev, $Filename);
-        if ($File =~ /([AMRUGC]) ($DateRE) ([^ ]+) +([^ ]+) +([^ ]+) +([^ ]+)/) {
-            ($Type, $Date, $UID, $Rev, $Filename) = ($1, $2, $3, $4, "$6/$5");
-        } elsif ($File =~ /([W]) ($DateRE) ([^ ]+)/) {
-            ($Type, $Date, $UID, $Rev, $Filename) = ($1, $2, $3, "", "");
-        } elsif ($File =~ /([O]) ($DateRE) ([^ ]+) +([^ ]+)/) {
-            ($Type, $Date, $UID, $Rev, $Filename) = ($1, $2, $3, "", "$4/");
-        } else {
-            print "UNMATCHABLE: $File\n";
-            next;
-        }
-        # print "$File\nTy = $Type Date = '$Date' UID=$UID Rev=$Rev File = '$Filename'\n";
-
-        if ($Filename =~ /^llvm/) {
-            if ($Type eq 'M') {        # Modified
-                $ModifiedFiles{$Filename} = 1;
-                $UsersCommitted{$UID} = 1;
-            } elsif ($Type eq 'A') {   # Added
-                $AddedFiles{$Filename} = 1;
-                $UsersCommitted{$UID} = 1;
-            } elsif ($Type eq 'R') {   # Removed
-                $RemovedFiles{$Filename} = 1;
-                $UsersCommitted{$UID} = 1;
-            } else {
-                $UsersUpdated{$UID} = 1;
-            }
-        }
-    }
 
-    my $TestError = 1;
-  } #$USESVN
-}#!NOCVSSTATS
-
-my $CVSAddedFiles = join "\n", sort keys %AddedFiles;
-my $CVSModifiedFiles = join "\n", sort keys %ModifiedFiles;
-my $CVSRemovedFiles = join "\n", sort keys %RemovedFiles;
-my $UserCommitList = join "\n", sort keys %UsersCommitted;
-my $UserUpdateList = join "\n", sort keys %UsersUpdated;
-
-##############################################################
-#
-# Build the entire tree, saving build messages to the build log
-#
-##############################################################
-if (!$NOCHECKOUT && !$NOBUILD) {
+# Build the entire tree, saving build messages to the build log. Returns false
+# on build failure.
+sub BuildLLVM {
+  die "Invalid call!" unless $ConfigMode == 0;
   my $EXTRAFLAGS = "--enable-spec --with-objroot=.";
   RunLoggedCommand("(time -p $NICE ./configure $CONFIGUREARGS $EXTRAFLAGS) ",
-                   $BuildLog, "CONFIGURE");
+                   $ConfigureLog, "CONFIGURE");
   # Build the entire tree, capturing the output into $BuildLog
-  RunAppendingLoggedCommand("(time -p $NICE $MAKECMD clean)", $BuildLog, "BUILD CLEAN");
+  if (!$NOCLEAN) {
+      RunAppendingLoggedCommand("($NICE $MAKECMD $MAKEOPTS clean)", $BuildLog, "BUILD CLEAN");
+  }
   RunAppendingLoggedCommand("(time -p $NICE $MAKECMD $MAKEOPTS)", $BuildLog, "BUILD");
-}
 
-##############################################################
-#
-# Get some statistics about the build...
-#
-##############################################################
-#this can de done on server
-#my @Linked = split '\n', `grep Linking $BuildLog`;
-#my $NumExecutables = scalar(grep(/executable/, @Linked));
-#my $NumLibraries   = scalar(grep(!/executable/, @Linked));
-#my $NumObjects     = `grep ']\: Compiling ' $BuildLog | wc -l` + 0;
-
-# Get the number of lines of source code. Must be here after the build is done
-# because countloc.sh uses the llvm-config script which must be built.
-my $LOC = `utils/countloc.sh -topdir $LLVMSrcDir`;
-
-# Get the time taken by the configure script
-my $ConfigTimeU = GetRegexNum "^user", 0, "([0-9.]+)", "$BuildLog";
-my $ConfigTimeS = GetRegexNum "^sys", 0, "([0-9.]+)", "$BuildLog";
-my $ConfigTime  = $ConfigTimeU+$ConfigTimeS;  # ConfigTime = User+System
-my $ConfigWallTime = GetRegexNum "^real", 0,"([0-9.]+)","$BuildLog";
-
-$ConfigTime=-1 unless $ConfigTime;
-$ConfigWallTime=-1 unless $ConfigWallTime;
-
-my $BuildTimeU = GetRegexNum "^user", 1, "([0-9.]+)", "$BuildLog";
-my $BuildTimeS = GetRegexNum "^sys", 1, "([0-9.]+)", "$BuildLog";
-my $BuildTime  = $BuildTimeU+$BuildTimeS;  # BuildTime = User+System
-my $BuildWallTime = GetRegexNum "^real", 1, "([0-9.]+)","$BuildLog";
-
-$BuildTime=-1 unless $BuildTime;
-$BuildWallTime=-1 unless $BuildWallTime;
-
-my $BuildError = 0, $BuildStatus = "OK";
-if ($NOBUILD) {
-  $BuildStatus = "Skipped by user";
-}
-elsif (`grep '^$MAKECMD\[^:]*: .*Error' $BuildLog | wc -l` + 0 ||
-  `grep '^$MAKECMD: \*\*\*.*Stop.' $BuildLog | wc -l`+0) {
-  $BuildStatus = "Error: compilation aborted";
-  $BuildError = 1;
-  if( $VERBOSE) { print  "\n***ERROR BUILDING TREE\n\n"; }
-}
-if ($BuildError) { $NODEJAGNU=1; }
-
-my $a_file_sizes="";
-my $o_file_sizes="";
-if (!$BuildError) {
-  print "Organizing size of .o and .a files\n"
-    if ( $VERBOSE );
-  ChangeDir( "$LLVMObjDir", "Build Directory" );
-
-  my @dirs = ('utils', 'lib', 'tools');
-  if($BUILDTYPE eq "release"){
-    push @dirs, 'Release';
-  } elsif($BUILDTYPE eq "release-asserts") {
-    push @dirs, 'Release-Asserts';
-  } else {
-    push @dirs, 'Debug';
+  if (`grep '^$MAKECMD\[^:]*: .*Error' $BuildLog | wc -l` + 0 ||
+      `grep '^$MAKECMD: \*\*\*.*Stop.' $BuildLog | wc -l` + 0) {
+    return 0;
   }
 
-  find(sub {
-      $a_file_sizes .= (-s $_)." $File::Find::name $BUILDTYPE\n" if /\.a$/i;
-      $o_file_sizes .= (-s $_)." $File::Find::name $BUILDTYPE\n" if /\.o$/i;
-    }, @dirs);
-} else {
-  $a_file_sizes="No data due to a bad build.";
-  $o_file_sizes="No data due to a bad build.";
-}
-
-##############################################################
-#
-# Running dejagnu tests
-#
-##############################################################
-my $DejangnuTestResults=""; # String containing the results of the dejagnu
-my $dejagnu_output = "$DejagnuTestsLog";
-if (!$NODEJAGNU) {
-  #Run the feature and regression tests, results are put into testrun.sum
-  #Full log in testrun.log
-  RunLoggedCommand("(time -p $MAKECMD $MAKEOPTS check)", $dejagnu_output, "DEJAGNU");
-
-  #Copy the testrun.log and testrun.sum to our webdir
-  CopyFile("test/testrun.log", $DejagnuLog);
-  CopyFile("test/testrun.sum", $DejagnuSum);
-  #can be done on server
-  $DejagnuTestResults = GetDejagnuTestResults($DejagnuSum, $DejagnuLog);
-  $unexpfail_tests = $DejagnuTestResults;
+  return 1;
 }
 
-#Extract time of dejagnu tests
-my $DejagnuTimeU = GetRegexNum "^user", 0, "([0-9.]+)", "$dejagnu_output";
-my $DejagnuTimeS = GetRegexNum "^sys", 0, "([0-9.]+)", "$dejagnu_output";
-$DejagnuTime  = $DejagnuTimeU+$DejagnuTimeS;  # DejagnuTime = User+System
-$DejagnuWallTime = GetRegexNum "^real", 0,"([0-9.]+)","$dejagnu_output";
-$DejagnuTestResults =
-  "Dejagnu skipped by user choice." unless $DejagnuTestResults;
-$DejagnuTime     = "0.0" unless $DejagnuTime;
-$DejagnuWallTime = "0.0" unless $DejagnuWallTime;
-
-##############################################################
-#
-# Get warnings from the build
-#
-##############################################################
-if (!$NODEJAGNU) {
-  if ( $VERBOSE ) { print "BUILD INFORMATION COLLECTION STAGE\n"; }
-  my @Warn = split "\n", `egrep 'warning:|Entering dir' $BuildLog`;
-  my @Warnings;
-  my $CurDir = "";
-
-  foreach $Warning (@Warn) {
-    if ($Warning =~ m/Entering directory \`([^\`]+)\'/) {
-      $CurDir = $1;                 # Keep track of directory warning is in...
-      # Remove buildir prefix if included
-      if ($CurDir =~ m#$LLVMSrcDir/(.*)#) { $CurDir = $1; }
-    } else {
-      push @Warnings, "$CurDir/$Warning";     # Add directory to warning...
-    }
-  }
-  my $WarningsFile =  join "\n", @Warnings;
-  $WarningsFile =~ s/:[0-9]+:/::/g;
-
-  # Emit the warnings file, so we can diff...
-  WriteFile "$WebDir/$DATE-Warnings.txt", $WarningsFile . "\n";
-  my ($WarningsAdded, $WarningsRemoved) = DiffFiles "-Warnings.txt";
-
-  # Output something to stdout if something has changed
-  #print "ADDED   WARNINGS:\n$WarningsAdded\n\n" if (length $WarningsAdded);
-  #print "REMOVED WARNINGS:\n$WarningsRemoved\n\n" if (length $WarningsRemoved);
-
-  #my @TmpWarningsAdded = split "\n", $WarningsAdded; ~PJ on upgrade
-  #my @TmpWarningsRemoved = split "\n", $WarningsRemoved; ~PJ on upgrade
-
-} #endif !NODEGAGNU
-
-##############################################################
-#
-# If we built the tree successfully, run the nightly programs tests...
-#
-# A set of tests to run is passed in (i.e. "SingleSource" "MultiSource"
-# "External")
-#
-##############################################################
-
+# Run the named tests (i.e. "SingleSource" "MultiSource" "External")
 sub TestDirectory {
   my $SubDir = shift;
   ChangeDir( "$LLVMTestDir/$SubDir",
@@ -929,55 +519,45 @@ sub TestDirectory {
 
   my $ProgramTestLog = "$Prefix-$SubDir-ProgramTest.txt";
 
-  # Run the programs tests... creating a report.nightly.csv file
-  if (!$NOTEST) {
-    if( $VERBOSE) {
-      print "$MAKECMD -k $MAKEOPTS $PROGTESTOPTS report.nightly.csv ".
-            "$TESTFLAGS TEST=nightly > $ProgramTestLog 2>&1\n";
-    }
-    RunLoggedCommand("$MAKECMD -k $MAKEOPTS $PROGTESTOPTS report.nightly.csv ".
-                     "$TESTFLAGS TEST=nightly",
-                     $ProgramTestLog, "TEST DIRECTORY $SubDir");
-    $llcbeta_options=`$MAKECMD print-llcbeta-option`;
-  }
+  # Make sure to clean the test results.
+  RunLoggedCommand("$MAKECMD -k $MAKEOPTS $PROGTESTOPTS clean $TESTFLAGS",
+                   $ProgramTestLog, "TEST DIRECTORY $SubDir");
+
+  # Run the programs tests... creating a report.nightly.csv file.
+  my $LLCBetaOpts = "";
+  RunLoggedCommand("$MAKECMD -k $MAKEOPTS $PROGTESTOPTS report.nightly.csv ".
+                   "$TESTFLAGS TEST=nightly",
+                   $ProgramTestLog, "TEST DIRECTORY $SubDir");
+  $LLCBetaOpts = `$MAKECMD print-llcbeta-option`;
 
   my $ProgramsTable;
   if (`grep '^$MAKECMD\[^:]: .*Error' $ProgramTestLog | wc -l` + 0) {
-    $TestError = 1;
     $ProgramsTable="Error running test $SubDir\n";
     print "ERROR TESTING\n";
   } elsif (`grep '^$MAKECMD\[^:]: .*No rule to make target' $ProgramTestLog | wc -l` + 0) {
-    $TestError = 1;
     $ProgramsTable="Makefile error running tests $SubDir!\n";
     print "ERROR TESTING\n";
   } else {
-    $TestError = 0;
-  #
-  # Create a list of the tests which were run...
-  #
-  system "egrep 'TEST-(PASS|FAIL)' < $ProgramTestLog ".
-         "| sort > $Prefix-$SubDir-Tests.txt";
+    # Create a list of the tests which were run...
+    system "egrep 'TEST-(PASS|FAIL)' < $ProgramTestLog ".
+           "| sort > $Prefix-$SubDir-Tests.txt";
   }
   $ProgramsTable = ReadFile "report.nightly.csv";
 
   ChangeDir( "../../..", "Programs Test Parent Directory" );
-  return ($ProgramsTable, $llcbeta_options);
-} #end sub TestDirectory
+  return ($ProgramsTable, $LLCBetaOpts);
+}
 
-##############################################################
-#
-# Calling sub TestDirectory
-#
-##############################################################
-if (!$BuildError) {
-  ($SingleSourceProgramsTable, $llcbeta_options) =
-    TestDirectory("SingleSource");
-  WriteFile "$Prefix-SingleSource-Performance.txt", $SingleSourceProgramsTable;
-  ($MultiSourceProgramsTable, $llcbeta_options) = TestDirectory("MultiSource");
-  WriteFile "$Prefix-MultiSource-Performance.txt", $MultiSourceProgramsTable;
+# Run all the nightly tests and return the program tables and the list of tests,
+# passes, fails, and xfails.
+sub RunNightlyTest() {
+  ($SSProgs, $llcbeta_options) = TestDirectory("SingleSource");
+  WriteFile "$Prefix-SingleSource-Performance.txt", $SSProgs;
+  ($MSProgs, $llcbeta_options) = TestDirectory("MultiSource");
+  WriteFile "$Prefix-MultiSource-Performance.txt", $MSProgs;
   if ( ! $NOEXTERNALS ) {
-    ($ExternalProgramsTable, $llcbeta_options) = TestDirectory("External");
-    WriteFile "$Prefix-External-Performance.txt", $ExternalProgramsTable;
+    ($ExtProgs, $llcbeta_options) = TestDirectory("External");
+    WriteFile "$Prefix-External-Performance.txt", $ExtProgs;
     system "cat $Prefix-SingleSource-Tests.txt " .
                "$Prefix-MultiSource-Tests.txt ".
                "$Prefix-External-Tests.txt | sort > $Prefix-Tests.txt";
@@ -985,7 +565,7 @@ if (!$BuildError) {
                "$Prefix-MultiSource-Performance.txt ".
                "$Prefix-External-Performance.txt | sort > $Prefix-Performance.txt";
   } else {
-    $ExternalProgramsTable = "External TEST STAGE SKIPPED\n";
+    $ExtProgs = "External TEST STAGE SKIPPED\n";
     if ( $VERBOSE ) {
       print "External TEST STAGE SKIPPED\n";
     }
@@ -997,46 +577,116 @@ if (!$BuildError) {
                " | sort > $Prefix-Performance.txt";
   }
 
-  ##############################################################
-  #
-  #
-  # gathering tests added removed broken information here
-  #
-  #
-  ##############################################################
-  my $dejagnu_test_list = ReadFile "$Prefix-Tests.txt";
-  my @DEJAGNU = split "\n", $dejagnu_test_list;
-  my ($passes, $fails, $xfails) = "";
-
-  if(!$NODEJAGNU) {
-    for ($x=0; $x<@DEJAGNU; $x++) {
-      if ($DEJAGNU[$x] =~ m/^PASS:/) {
-        $passes.="$DEJAGNU[$x]\n";
-      }
-      elsif ($DEJAGNU[$x] =~ m/^FAIL:/) {
-        $fails.="$DEJAGNU[$x]\n";
-      }
-      elsif ($DEJAGNU[$x] =~ m/^XFAIL:/) {
-        $xfails.="$DEJAGNU[$x]\n";
-      }
+  # Compile passes, fails, xfails.
+  my $All = (ReadFile "$Prefix-Tests.txt");
+  my @TestSuiteResultLines = split "\n", $All;
+  my ($Passes, $Fails, $XFails) = "";
+
+  for ($x=0; $x < @TestSuiteResultLines; $x++) {
+    if ($TestSuiteResultLines[$x] =~ m/^PASS:/) {
+      $Passes .= "$TestSuiteResultLines[$x]\n";
+    }
+    elsif ($TestSuiteResultLines[$x] =~ m/^FAIL:/) {
+      $Fails .= "$TestSuiteResultLines[$x]\n";
     }
+    elsif ($TestSuiteResultLines[$x] =~ m/^XFAIL:/) {
+      $XFails .= "$TestSuiteResultLines[$x]\n";
+    }
+  }
+
+  return ($SSProgs, $MSProgs, $ExtProgs, $All, $Passes, $Fails, $XFails);
+}
+
+##############################################################
+#
+# Initialize filenames
+#
+##############################################################
+
+if (! -d $WebDir) {
+  mkdir $WebDir, 0777 or die "Unable to create web directory: '$WebDir'.";
+  if($VERBOSE){
+    warn "$WebDir did not exist; creating it.\n";
   }
+}
+
+if ($CONFIG_PATH ne "") {
+  $ConfigMode = 1;
+  $LLVMSrcDir = GetRegex "^(.*)\\s+", `$CONFIG_PATH --src-root`;
+  $LLVMObjDir = GetRegex "^(.*)\\s+", `$CONFIG_PATH --obj-root`;
+  # FIXME: Add llvm-config hook for this?
+  $LLVMTestDir = $LLVMObjDir . "/projects/test-suite";
+} else {
+  $ConfigureLog = "$Prefix-Configure-Log.txt";
+  $BuildLog = "$Prefix-Build-Log.txt";
+  $COLog = "$Prefix-CVS-Log.txt";
+}
 
-} #end if !$BuildError
+if ($VERBOSE) {
+  if ($CONFIG_PATH ne "") {
+    print "INITIALIZED (config mode)\n";
+    print "WebDir    = $WebDir\n";
+    print "Prefix    = $Prefix\n";
+    print "LLVM Src  = $LLVMSrcDir\n";
+    print "LLVM Obj  = $LLVMObjDir\n";
+    print "LLVM Test = $LLVMTestDir\n";
+  } else {
+    print "INITIALIZED\n";
+    print "SVN URL  = $SVNURL\n";
+    print "COLog    = $COLog\n";
+    print "BuildDir = $BuildDir\n";
+    print "WebDir   = $WebDir\n";
+    print "Prefix   = $Prefix\n";
+    print "BuildLog = $BuildLog\n";
+  }
+}
 
 ##############################################################
 #
-# Getting end timestamp
+# The actual NewNightlyTest logic.
 #
 ##############################################################
+
+$starttime = `date "+20%y-%m-%d %H:%M:%S"`;
+
+my ($BuildError, $BuildStatus) = (0, "OK");
+if ($ConfigMode == 0) {
+  if (!$NOCHECKOUT) {
+    CheckoutSource();
+  }
+
+  # Build LLVM.
+  ChangeDir( $LLVMSrcDir , "llvm source directory") ;
+  if ($NOCHECKOUT || $NOBUILD) {
+    $BuildStatus = "Skipped by user";
+  } else {
+    if (!BuildLLVM()) {
+      if( $VERBOSE) { print  "\n***ERROR BUILDING TREE\n\n"; }
+      $BuildError = 1;
+      $BuildStatus = "Error: compilation aborted";
+    }
+  }
+}
+
+# Run the llvm-test tests.
+my ($SingleSourceProgramsTable, $MultiSourceProgramsTable, $ExternalProgramsTable,
+    $all_tests, $passes, $fails, $xfails) = ("") x 7;
+if (!$NOTEST && !$BuildError) {
+  ($SingleSourceProgramsTable, $MultiSourceProgramsTable, $ExternalProgramsTable,
+   $all_tests, $passes, $fails, $xfails) = RunNightlyTest();
+}
+
 $endtime = `date "+20%y-%m-%d %H:%M:%S"`;
 
+# The last bit of logic is to remove the build and web dirs, after sending data
+# to the server.
 
 ##############################################################
 #
-# Place all the logs neatly into one humungous file
+# Accumulate the information to send to the server.
 #
 ##############################################################
+
 if ( $VERBOSE ) { print "PREPARING LOGS TO BE SENT TO SERVER\n"; }
 
 $machine_data = "uname: ".`uname -a`.
@@ -1046,102 +696,105 @@ $machine_data = "uname: ".`uname -a`.
                 "date: ".`date \"+20%y-%m-%d\"`.
                 "time: ".`date +\"%H:%M:%S\"`;
 
-my @CVS_DATA;
-my $cvs_data;
- at CVS_DATA = ReadFile "$COLog";
-$cvs_data = join("\n", @CVS_DATA);
-
-my @BUILD_DATA;
-my $build_data;
- at BUILD_DATA = ReadFile "$BuildLog";
-$build_data = join("\n", @BUILD_DATA);
-
-my (@DEJAGNU_LOG, @DEJAGNU_SUM, @DEJAGNULOG_FULL, @GCC_VERSION);
-my ($dejagnutests_log ,$dejagnutests_sum, $dejagnulog_full) = "";
-my ($gcc_version, $gcc_version_long) = "";
-
-$gcc_version_long="";
+# Get gcc version.
+my $gcc_version_long = "";
 if ($GCCPATH ne "") {
-	$gcc_version_long = `$GCCPATH/gcc --version`;
+  $gcc_version_long = `$GCCPATH/gcc --version`;
 } elsif ($ENV{"CC"}) {
-	$gcc_version_long = `$ENV{"CC"} --version`;
+  $gcc_version_long = `$ENV{"CC"} --version`;
 } else {
-	$gcc_version_long = `gcc --version`;
+  $gcc_version_long = `gcc --version`;
 }
- at GCC_VERSION = split '\n', $gcc_version_long;
-$gcc_version = $GCC_VERSION[0];
+my $gcc_version = (split '\n', $gcc_version_long)[0];
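+# ($gcc_version is the first line of `gcc --version`, typically something
+# like "gcc (GCC) 4.2.4".)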
 
-$llvmgcc_version_long="";
+# Get llvm-gcc target triple.
+#
+# FIXME: This shouldn't be hardwired to llvm-gcc.
+my $llvmgcc_version_long = "";
 if ($LLVMGCCPATH ne "") {
   $llvmgcc_version_long = `$LLVMGCCPATH/llvm-gcc -v 2>&1`;
 } else {
   $llvmgcc_version_long = `llvm-gcc -v 2>&1`;
 }
- at LLVMGCC_VERSION = split '\n', $llvmgcc_version_long;
-$llvmgcc_versionTarget = $LLVMGCC_VERSION[1];
-$llvmgcc_versionTarget =~ /Target: (.+)/;
-$targetTriple = $1;
-
-if(!$BuildError){
-  @DEJAGNU_LOG = ReadFile "$DejagnuLog";
-  @DEJAGNU_SUM = ReadFile "$DejagnuSum";
-  $dejagnutests_log = join("\n", @DEJAGNU_LOG);
-  $dejagnutests_sum = join("\n", @DEJAGNU_SUM);
-
-  @DEJAGNULOG_FULL = ReadFile "$DejagnuTestsLog";
-  $dejagnulog_full = join("\n", @DEJAGNULOG_FULL);
+(split '\n', $llvmgcc_version_long)[1] =~ /Target: (.+)/;
+my $targetTriple = $1;
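+# (For example, a second line of "Target: x86_64-unknown-linux-gnu" yields
+# the target triple "x86_64-unknown-linux-gnu".)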
+
+# Logs.
+my ($ConfigureLogData, $BuildLogData, $CheckoutLogData) = "";
+if ($ConfigMode == 0) {
+  $ConfigureLogData = ReadFile $ConfigureLog;
+  $BuildLogData = ReadFile $BuildLog;
+  $CheckoutLogData = ReadFile $COLog;
 }
 
-##############################################################
-#
-# Send data via a post request
-#
-##############################################################
+# Checkout info.
+my $CheckoutTime_Wall = GetRegex "^real ([0-9.]+)", $CheckoutLogData;
+my $CheckoutTime_User = GetRegex "^user ([0-9.]+)", $CheckoutLogData;
+my $CheckoutTime_Sys = GetRegex "^sys ([0-9.]+)", $CheckoutLogData;
+my $CheckoutTime_CPU = $CheckoutTime_User + $CheckoutTime_Sys;
+
+# Configure info.
+my $ConfigTimeU = GetRegex "^user ([0-9.]+)", $ConfigureLogData;
+my $ConfigTimeS = GetRegex "^sys ([0-9.]+)", $ConfigureLogData;
+my $ConfigTime  = $ConfigTimeU+$ConfigTimeS;  # ConfigTime = User+System
+my $ConfigWallTime = GetRegex "^real ([0-9.]+)",$ConfigureLogData;
+$ConfigTime=-1 unless $ConfigTime;
+$ConfigWallTime=-1 unless $ConfigWallTime;
+
+# Build info.
+my $BuildTimeU = GetRegex "^user ([0-9.]+)", $BuildLogData;
+my $BuildTimeS = GetRegex "^sys ([0-9.]+)", $BuildLogData;
+my $BuildTime  = $BuildTimeU+$BuildTimeS;  # BuildTime = User+System
+my $BuildWallTime = GetRegex "^real ([0-9.]+)", $BuildLogData;
+$BuildTime=-1 unless $BuildTime;
+$BuildWallTime=-1 unless $BuildWallTime;
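+# (The logs are assumed to end with POSIX `time -p`-style lines, e.g.
+#   real 12.07
+#   user 9.01
+#   sys 1.10
+# which the GetRegex calls above extract; -1 marks a missing measurement.)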
 
 if ( $VERBOSE ) { print "SEND THE DATA VIA THE POST REQUEST\n"; }
 
 my %hash_of_data = (
   'machine_data' => $machine_data,
-  'build_data' => $build_data,
+  'build_data' => $ConfigureLogData . $BuildLogData,
   'gcc_version' => $gcc_version,
   'nickname' => $nickname,
-  'dejagnutime_wall' => $DejagnuWallTime,
-  'dejagnutime_cpu' => $DejagnuTime,
+  'dejagnutime_wall' => "0.0",
+  'dejagnutime_cpu' => "0.0",
   'cvscheckouttime_wall' => $CheckoutTime_Wall,
   'cvscheckouttime_cpu' => $CheckoutTime_CPU,
   'configtime_wall' => $ConfigWallTime,
   'configtime_cpu'=> $ConfigTime,
   'buildtime_wall' => $BuildWallTime,
   'buildtime_cpu' => $BuildTime,
-  'warnings' => $WarningsFile,
-  'cvsusercommitlist' => $UserCommitList,
-  'cvsuserupdatelist' => $UserUpdateList,
-  'cvsaddedfiles' => $CVSAddedFiles,
-  'cvsmodifiedfiles' => $CVSModifiedFiles,
-  'cvsremovedfiles' => $CVSRemovedFiles,
-  'lines_of_code' => $LOC,
-  'cvs_file_count' => $NumFilesInCVS,
-  'cvs_dir_count' => $NumDirsInCVS,
   'buildstatus' => $BuildStatus,
   'singlesource_programstable' => $SingleSourceProgramsTable,
   'multisource_programstable' => $MultiSourceProgramsTable,
   'externalsource_programstable' => $ExternalProgramsTable,
-  'llcbeta_options' => $multisource_llcbeta_options,
-  'warnings_removed' => $WarningsRemoved,
-  'warnings_added' => $WarningsAdded,
+  'llcbeta_options' => $llcbeta_options,
   'passing_tests' => $passes,
   'expfail_tests' => $xfails,
   'unexpfail_tests' => $fails,
-  'all_tests' => $dejagnu_test_list,
-  'new_tests' => "",
-  'removed_tests' => "",
-  'dejagnutests_results' => $DejagnuTestResults,
-  'dejagnutests_log' => $dejagnulog_full,
+  'all_tests' => $all_tests,
+  'dejagnutests_results' => "Dejagnu skipped by user choice.",
+  'dejagnutests_log' => "",
   'starttime' => $starttime,
   'endtime' => $endtime,
-  'o_file_sizes' => $o_file_sizes,
-  'a_file_sizes' => $a_file_sizes,
-  'target_triple' => $targetTriple
+  'target_triple' => $targetTriple,
+
+  # Unused, but left around for backwards compatibility.
+  'warnings' => "",
+  'cvsusercommitlist' => "",
+  'cvsuserupdatelist' => "",
+  'cvsaddedfiles' => "",
+  'cvsmodifiedfiles' => "",
+  'cvsremovedfiles' => "",
+  'lines_of_code' => "",
+  'cvs_file_count' => 0,
+  'cvs_dir_count' => 0,
+  'warnings_removed' => "",
+  'warnings_added' => "",
+  'new_tests' => "",
+  'removed_tests' => "",
+  'o_file_sizes' => "",
+  'a_file_sizes' => ""
 );
 
 if ($SUBMIT || !($SUBMITAUX eq "")) {
@@ -1156,7 +809,7 @@ if ($SUBMIT || !($SUBMITAUX eq "")) {
 
 ##############################################################
 #
-# Remove the cvs tree...
+# Remove the source tree...
 #
 ##############################################################
 system ( "$NICE rm -rf $BuildDir")
diff --git a/libclamav/c++/llvm/utils/TableGen/AsmWriterEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/AsmWriterEmitter.cpp
index 84a647b..ff83c76 100644
--- a/libclamav/c++/llvm/utils/TableGen/AsmWriterEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/AsmWriterEmitter.cpp
@@ -538,6 +538,29 @@ FindUniqueOperandCommands(std::vector<std::string> &UniqueOperandCommands,
 }
 
 
+static void UnescapeString(std::string &Str) {
+  for (unsigned i = 0; i != Str.size(); ++i) {
+    if (Str[i] == '\\' && i != Str.size()-1) {
+      switch (Str[i+1]) {
+      default: continue;  // Don't execute the code after the switch.
+      case 'a': Str[i] = '\a'; break;
+      case 'b': Str[i] = '\b'; break;
+      case 'e': Str[i] = 27; break;
+      case 'f': Str[i] = '\f'; break;
+      case 'n': Str[i] = '\n'; break;
+      case 'r': Str[i] = '\r'; break;
+      case 't': Str[i] = '\t'; break;
+      case 'v': Str[i] = '\v'; break;
+      case '"': Str[i] = '\"'; break;
+      case '\'': Str[i] = '\''; break;
+      case '\\': Str[i] = '\\'; break;
+      }
+      // Nuke the second character.
+      Str.erase(Str.begin()+i+1);
+    }
+  }
+}
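+// For example, UnescapeString rewrites the four characters  a \ t b  in
+// place into the three characters  a <TAB> b ; escapes not listed in the
+// switch above (e.g. "\q") are left untouched.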
+
 /// EmitPrintInstruction - Generate the code for the "printInstruction" method
 /// implementation.
 void AsmWriterEmitter::EmitPrintInstruction(raw_ostream &O) {
@@ -672,7 +695,6 @@ void AsmWriterEmitter::EmitPrintInstruction(raw_ostream &O) {
   O << "\n#ifndef NO_ASM_WRITER_BOILERPLATE\n";
   
   O << "  if (MI->getOpcode() == TargetInstrInfo::INLINEASM) {\n"
-    << "    O << \"\\t\";\n"
     << "    printInlineAsm(MI);\n"
     << "    return;\n"
     << "  } else if (MI->isLabel()) {\n"
@@ -682,6 +704,7 @@ void AsmWriterEmitter::EmitPrintInstruction(raw_ostream &O) {
     << "    printImplicitDef(MI);\n"
     << "    return;\n"
     << "  } else if (MI->getOpcode() == TargetInstrInfo::KILL) {\n"
+    << "    printKill(MI);\n"
     << "    return;\n"
     << "  }\n\n";
 
@@ -763,7 +786,6 @@ void AsmWriterEmitter::EmitPrintInstruction(raw_ostream &O) {
     O << "  return;\n";
   }
 
-  O << "  return;\n";
   O << "}\n";
 }
 
diff --git a/libclamav/c++/llvm/utils/TableGen/CMakeLists.txt b/libclamav/c++/llvm/utils/TableGen/CMakeLists.txt
index e568c62..daf8676 100644
--- a/libclamav/c++/llvm/utils/TableGen/CMakeLists.txt
+++ b/libclamav/c++/llvm/utils/TableGen/CMakeLists.txt
@@ -8,11 +8,13 @@ add_executable(tblgen
   CodeGenInstruction.cpp
   CodeGenTarget.cpp
   DAGISelEmitter.cpp
+  DisassemblerEmitter.cpp
   FastISelEmitter.cpp
   InstrEnumEmitter.cpp
   InstrInfoEmitter.cpp
   IntrinsicEmitter.cpp
   LLVMCConfigurationEmitter.cpp
+  OptParserEmitter.cpp
   Record.cpp
   RegisterInfoEmitter.cpp
   SubtargetEmitter.cpp
diff --git a/libclamav/c++/llvm/utils/TableGen/ClangDiagnosticsEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/ClangDiagnosticsEmitter.cpp
index c127afd..6f1080e 100644
--- a/libclamav/c++/llvm/utils/TableGen/ClangDiagnosticsEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/ClangDiagnosticsEmitter.cpp
@@ -52,15 +52,12 @@ void ClangDiagsDefsEmitter::run(raw_ostream &OS) {
     
     // Description string.
     OS << ", \"";
-    std::string S = R.getValueAsString("Text");
-    EscapeString(S);
-    OS << S << "\"";
+    OS.write_escaped(R.getValueAsString("Text")) << '"';
     
     // Warning associated with the diagnostic.
     if (DefInit *DI = dynamic_cast<DefInit*>(R.getValueInit("Group"))) {
-      S = DI->getDef()->getValueAsString("GroupName");
-      EscapeString(S);
-      OS << ", \"" << S << "\"";
+      OS << ", \"";
+      OS.write_escaped(DI->getDef()->getValueAsString("GroupName")) << '"';
     } else {
       OS << ", 0";
     }
@@ -151,11 +148,10 @@ void ClangDiagGroupsEmitter::run(raw_ostream &OS) {
   OS << "\n#ifdef GET_DIAG_TABLE\n";
   for (std::map<std::string, GroupInfo>::iterator
        I = DiagsInGroup.begin(), E = DiagsInGroup.end(); I != E; ++I) {
-    std::string S = I->first;
-    EscapeString(S);
     // Group option string.
-    OS << "  { \"" << S << "\","
-      << std::string(MaxLen-I->first.size()+1, ' ');
+    OS << "  { \"";
+    OS.write_escaped(I->first) << "\","
+                               << std::string(MaxLen-I->first.size()+1, ' ');
     
     // Diagnostics in the group.
     if (I->second.DiagsInGroup.empty())
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.cpp b/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.cpp
index 6b8ceae..fab41c5 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.cpp
@@ -915,7 +915,6 @@ bool TreePatternNode::ApplyTypeConstraints(TreePattern &TP, bool NotRegisters) {
     bool MadeChange = false;
     MadeChange |= getChild(0)->ApplyTypeConstraints(TP, NotRegisters);
     MadeChange |= getChild(1)->ApplyTypeConstraints(TP, NotRegisters);
-    MadeChange |= UpdateNodeType(getChild(1)->getTypeNum(0), TP);
     return MadeChange;
   } else if (const CodeGenIntrinsic *Int = getIntrinsicInfo(CDP)) {
     bool MadeChange = false;
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.h b/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.h
index 9b53ecc..398764b 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.h
+++ b/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.h
@@ -584,6 +584,8 @@ public:
     return intrinsic_wo_chain_sdnode;
   }
   
+  bool hasTargetIntrinsics() { return !TgtIntrinsics.empty(); }
+
 private:
   void ParseNodeInfo();
   void ParseNodeTransforms();
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.cpp b/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.cpp
index d421fd0..8520d9e 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.cpp
@@ -94,7 +94,7 @@ CodeGenInstruction::CodeGenInstruction(Record *R, const std::string &AsmStr)
   isTerminator = R->getValueAsBit("isTerminator");
   isReMaterializable = R->getValueAsBit("isReMaterializable");
   hasDelaySlot = R->getValueAsBit("hasDelaySlot");
-  usesCustomDAGSchedInserter = R->getValueAsBit("usesCustomDAGSchedInserter");
+  usesCustomInserter = R->getValueAsBit("usesCustomInserter");
   hasCtrlDep   = R->getValueAsBit("hasCtrlDep");
   isNotDuplicable = R->getValueAsBit("isNotDuplicable");
   hasSideEffects = R->getValueAsBit("hasSideEffects");
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.h b/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.h
index 04506e9..d22ac3e 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.h
+++ b/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.h
@@ -97,7 +97,7 @@ namespace llvm {
     bool isTerminator;
     bool isReMaterializable;
     bool hasDelaySlot;
-    bool usesCustomDAGSchedInserter;
+    bool usesCustomInserter;
     bool isVariadic;
     bool hasCtrlDep;
     bool isNotDuplicable;
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeGenTarget.h b/libclamav/c++/llvm/utils/TableGen/CodeGenTarget.h
index e763795..da4b1cc 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeGenTarget.h
+++ b/libclamav/c++/llvm/utils/TableGen/CodeGenTarget.h
@@ -229,7 +229,7 @@ class ComplexPattern {
   unsigned Properties; // Node properties
   unsigned Attributes; // Pattern attributes
 public:
-  ComplexPattern() : NumOperands(0) {};
+  ComplexPattern() : NumOperands(0) {}
   ComplexPattern(Record *R);
 
   MVT::SimpleValueType getValueType() const { return Ty; }
diff --git a/libclamav/c++/llvm/utils/TableGen/DAGISelEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/DAGISelEmitter.cpp
index dcf64e4..66debe2 100644
--- a/libclamav/c++/llvm/utils/TableGen/DAGISelEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/DAGISelEmitter.cpp
@@ -114,7 +114,7 @@ static unsigned getResultPatternCost(TreePatternNode *P,
   if (Op->isSubClassOf("Instruction")) {
     Cost++;
     CodeGenInstruction &II = CGP.getTargetInfo().getInstruction(Op->getName());
-    if (II.usesCustomDAGSchedInserter)
+    if (II.usesCustomInserter)
       Cost += 10;
   }
   for (unsigned i = 0, e = P->getNumChildren(); i != e; ++i)
@@ -1359,7 +1359,7 @@ public:
       Pat->setTypes(Other->getExtTypes());
       // The top level node type is checked outside of the select function.
       if (!isRoot)
-        emitCheck(Prefix + ".getNode()->getValueType(0) == " +
+        emitCheck(Prefix + ".getValueType() == " +
                   getName(Pat->getTypeNum(0)));
       return true;
     }
@@ -1786,11 +1786,7 @@ void DAGISelEmitter::EmitInstructionSelector(raw_ostream &OS) {
         }
 
         CallerCode += ");";
-        CalleeCode += ") ";
-        // Prevent emission routines from being inlined to reduce selection
-        // routines stack frame sizes.
-        CalleeCode += "DISABLE_INLINE ";
-        CalleeCode += "{\n";
+        CalleeCode += ") {\n";
 
         for (std::vector<std::string>::const_reverse_iterator
                I = AddedInits.rbegin(), E = AddedInits.rend(); I != E; ++I)
@@ -1811,6 +1807,9 @@ void DAGISelEmitter::EmitInstructionSelector(raw_ostream &OS) {
         } else {
           EmitFuncNum = EmitFunctions.size();
           EmitFunctions.insert(std::make_pair(CalleeCode, EmitFuncNum));
+          // Prevent emission routines from being inlined to reduce selection
+          // routines stack frame sizes.
+          OS << "DISABLE_INLINE ";
           OS << "SDNode *Emit_" << utostr(EmitFuncNum) << CalleeCode;
         }
 
@@ -1917,40 +1916,6 @@ void DAGISelEmitter::EmitInstructionSelector(raw_ostream &OS) {
     }
   }
   
-  // Emit boilerplate.
-  OS << "SDNode *Select_INLINEASM(SDValue N) {\n"
-     << "  std::vector<SDValue> Ops(N.getNode()->op_begin(), N.getNode()->op_end());\n"
-     << "  SelectInlineAsmMemoryOperands(Ops);\n\n"
-    
-     << "  std::vector<EVT> VTs;\n"
-     << "  VTs.push_back(MVT::Other);\n"
-     << "  VTs.push_back(MVT::Flag);\n"
-     << "  SDValue New = CurDAG->getNode(ISD::INLINEASM, N.getDebugLoc(), "
-                 "VTs, &Ops[0], Ops.size());\n"
-     << "  return New.getNode();\n"
-     << "}\n\n";
-
-  OS << "SDNode *Select_UNDEF(const SDValue &N) {\n"
-     << "  return CurDAG->SelectNodeTo(N.getNode(), TargetInstrInfo::IMPLICIT_DEF,\n"
-     << "                              N.getValueType());\n"
-     << "}\n\n";
-
-  OS << "SDNode *Select_DBG_LABEL(const SDValue &N) {\n"
-     << "  SDValue Chain = N.getOperand(0);\n"
-     << "  unsigned C = cast<LabelSDNode>(N)->getLabelID();\n"
-     << "  SDValue Tmp = CurDAG->getTargetConstant(C, MVT::i32);\n"
-     << "  return CurDAG->SelectNodeTo(N.getNode(), TargetInstrInfo::DBG_LABEL,\n"
-     << "                              MVT::Other, Tmp, Chain);\n"
-     << "}\n\n";
-
-  OS << "SDNode *Select_EH_LABEL(const SDValue &N) {\n"
-     << "  SDValue Chain = N.getOperand(0);\n"
-     << "  unsigned C = cast<LabelSDNode>(N)->getLabelID();\n"
-     << "  SDValue Tmp = CurDAG->getTargetConstant(C, MVT::i32);\n"
-     << "  return CurDAG->SelectNodeTo(N.getNode(), TargetInstrInfo::EH_LABEL,\n"
-     << "                              MVT::Other, Tmp, Chain);\n"
-     << "}\n\n";
-
   OS << "// The main instruction selector code.\n"
      << "SDNode *SelectCode(SDValue N) {\n"
      << "  MVT::SimpleValueType NVT = N.getNode()->getValueType(0).getSimpleVT().SimpleTy;\n"
@@ -1967,6 +1932,7 @@ void DAGISelEmitter::EmitInstructionSelector(raw_ostream &OS) {
      << "  case ISD::TargetConstantPool:\n"
      << "  case ISD::TargetFrameIndex:\n"
      << "  case ISD::TargetExternalSymbol:\n"
+     << "  case ISD::TargetBlockAddress:\n"
      << "  case ISD::TargetJumpTable:\n"
      << "  case ISD::TargetGlobalTLSAddress:\n"
      << "  case ISD::TargetGlobalAddress:\n"
@@ -1981,7 +1947,6 @@ void DAGISelEmitter::EmitInstructionSelector(raw_ostream &OS) {
      << "    return NULL;\n"
      << "  }\n"
      << "  case ISD::INLINEASM: return Select_INLINEASM(N);\n"
-     << "  case ISD::DBG_LABEL: return Select_DBG_LABEL(N);\n"
      << "  case ISD::EH_LABEL: return Select_EH_LABEL(N);\n"
      << "  case ISD::UNDEF: return Select_UNDEF(N);\n";
 
@@ -2054,22 +2019,6 @@ void DAGISelEmitter::EmitInstructionSelector(raw_ostream &OS) {
      << "  }\n"
      << "  return NULL;\n"
      << "}\n\n";
-
-  OS << "void CannotYetSelect(SDValue N) DISABLE_INLINE {\n"
-     << "  std::string msg;\n"
-     << "  raw_string_ostream Msg(msg);\n"
-     << "  Msg << \"Cannot yet select: \";\n"
-     << "  N.getNode()->print(Msg, CurDAG);\n"
-     << "  llvm_report_error(Msg.str());\n"
-     << "}\n\n";
-
-  OS << "void CannotYetSelectIntrinsic(SDValue N) DISABLE_INLINE {\n"
-     << "  errs() << \"Cannot yet select: \";\n"
-     << "  unsigned iid = cast<ConstantSDNode>(N.getOperand("
-     << "N.getOperand(0).getValueType() == MVT::Other))->getZExtValue();\n"
-     << " llvm_report_error(\"Cannot yet select: intrinsic %\" +\n"
-     << "Intrinsic::getName((Intrinsic::ID)iid));\n"
-     << "}\n\n";
 }
 
 void DAGISelEmitter::run(raw_ostream &OS) {
diff --git a/libclamav/c++/llvm/utils/TableGen/DisassemblerEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/DisassemblerEmitter.cpp
new file mode 100644
index 0000000..cc13125
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/DisassemblerEmitter.cpp
@@ -0,0 +1,30 @@
+//===- DisassemblerEmitter.cpp - Generate a disassembler ------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#include "DisassemblerEmitter.h"
+#include "CodeGenTarget.h"
+#include "Record.h"
+using namespace llvm;
+
+void DisassemblerEmitter::run(raw_ostream &OS) {
+  CodeGenTarget Target;
+
+  OS << "/*===- TableGen'erated file "
+     << "---------------------------------------*- C -*-===*\n"
+     << " *\n"
+     << " * " << Target.getName() << " Disassembler\n"
+     << " *\n"
+     << " * Automatically generated file, do not edit!\n"
+     << " *\n"
+     << " *===---------------------------------------------------------------"
+     << "-------===*/\n";
+
+  throw TGError(Target.getTargetRecord()->getLoc(),
+                "Unable to generate disassembler for this target");
+}
diff --git a/libclamav/c++/llvm/utils/TableGen/DisassemblerEmitter.h b/libclamav/c++/llvm/utils/TableGen/DisassemblerEmitter.h
new file mode 100644
index 0000000..7229d81
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/DisassemblerEmitter.h
@@ -0,0 +1,28 @@
+//===- DisassemblerEmitter.h - Disassembler Generator -----------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef DISASSEMBLEREMITTER_H
+#define DISASSEMBLEREMITTER_H
+
+#include "TableGenBackend.h"
+
+namespace llvm {
+
+  class DisassemblerEmitter : public TableGenBackend {
+    RecordKeeper &Records;
+  public:
+    DisassemblerEmitter(RecordKeeper &R) : Records(R) {}
+
+    /// run - Output the disassembler.
+    void run(raw_ostream &o);
+  };
+
+} // end llvm namespace
+
+#endif
diff --git a/libclamav/c++/llvm/utils/TableGen/InstrInfoEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/InstrInfoEmitter.cpp
index 3a104ea..adb98fb 100644
--- a/libclamav/c++/llvm/utils/TableGen/InstrInfoEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/InstrInfoEmitter.cpp
@@ -275,8 +275,7 @@ void InstrInfoEmitter::emitRecord(const CodeGenInstruction &Inst, unsigned Num,
   if (Inst.isReMaterializable) OS << "|(1<<TID::Rematerializable)";
   if (Inst.isNotDuplicable)    OS << "|(1<<TID::NotDuplicable)";
   if (Inst.hasOptionalDef)     OS << "|(1<<TID::HasOptionalDef)";
-  if (Inst.usesCustomDAGSchedInserter)
-    OS << "|(1<<TID::UsesCustomDAGSchedInserter)";
+  if (Inst.usesCustomInserter) OS << "|(1<<TID::UsesCustomInserter)";
   if (Inst.isVariadic)         OS << "|(1<<TID::Variadic)";
   if (Inst.hasSideEffects)     OS << "|(1<<TID::UnmodeledSideEffects)";
   if (Inst.isAsCheapAsAMove)   OS << "|(1<<TID::CheapAsAMove)";
diff --git a/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
index f9a447a..546988a 100644
--- a/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
@@ -28,7 +28,6 @@
 
 using namespace llvm;
 
-namespace {
 
 //===----------------------------------------------------------------------===//
 /// Typedefs
@@ -40,24 +39,32 @@ typedef std::vector<std::string> StrVector;
 /// Constants
 
 // Indentation.
-unsigned TabWidth = 4;
-unsigned Indent1  = TabWidth*1;
-unsigned Indent2  = TabWidth*2;
-unsigned Indent3  = TabWidth*3;
+static const unsigned TabWidth = 4;
+static const unsigned Indent1  = TabWidth*1;
+static const unsigned Indent2  = TabWidth*2;
+static const unsigned Indent3  = TabWidth*3;
 
 // Default help string.
-const char * DefaultHelpString = "NO HELP MESSAGE PROVIDED";
+static const char * const DefaultHelpString = "NO HELP MESSAGE PROVIDED";
 
 // Name for the "sink" option.
-const char * SinkOptionName = "AutoGeneratedSinkOption";
+static const char * const SinkOptionName = "AutoGeneratedSinkOption";
+
+namespace {
 
 //===----------------------------------------------------------------------===//
 /// Helper functions
 
 /// Id - An 'identity' function object.
 struct Id {
-  template<typename T>
-  void operator()(const T&) const {
+  template<typename T0>
+  void operator()(const T0&) const {
+  }
+  template<typename T0, typename T1>
+  void operator()(const T0&, const T1&) const {
+  }
+  template<typename T0, typename T1, typename T2>
+  void operator()(const T0&, const T1&, const T2&) const {
   }
 };
 
@@ -81,16 +88,24 @@ const DagInit& InitPtrToDag(const Init* ptr) {
   return val;
 }
 
+const std::string GetOperatorName(const DagInit* D) {
+  return D->getOperator()->getAsString();
+}
+
+const std::string GetOperatorName(const DagInit& D) {
+  return GetOperatorName(&D);
+}
+
 // checkNumberOfArguments - Ensure that the number of args in d is
 // greater than or equal to min_arguments, otherwise throw an exception.
 void checkNumberOfArguments (const DagInit* d, unsigned min_arguments) {
   if (!d || d->getNumArgs() < min_arguments)
-    throw d->getOperator()->getAsString() + ": too few arguments!";
+    throw (d ? GetOperatorName(d) : std::string("<null dag>"))
+      + ": too few arguments!";
 }
 
 // isDagEmpty - is this DAG marked with an empty marker?
 bool isDagEmpty (const DagInit* d) {
-  return d->getOperator()->getAsString() == "empty_dag_marker";
+  return GetOperatorName(d) == "empty_dag_marker";
 }
 
 // EscapeVariableName - Escape commas and other symbols not allowed
@@ -132,6 +147,18 @@ void checkedIncrement(I& P, I E, S ErrorString) {
     throw ErrorString;
 }
 
+// apply is needed because C++'s syntax doesn't let us construct a function
+// object and call it in the same statement.
+template<typename F, typename T0>
+void apply(F Fun, T0& Arg0) {
+  return Fun(Arg0);
+}
+
+template<typename F, typename T0, typename T1>
+void apply(F Fun, T0& Arg0, T1& Arg1) {
+  return Fun(Arg0, Arg1);
+}
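+// Usage sketch: apply(EmitSwitchOn(OptDescs), OptName, O) constructs the
+// temporary function object and immediately invokes it with (OptName, O),
+// as done in EmitCaseTest1ArgStr below.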
+
 //===----------------------------------------------------------------------===//
 /// Back-end specific code
 
@@ -143,6 +170,10 @@ namespace OptionType {
   enum OptionType { Alias, Switch, Parameter, ParameterList,
                     Prefix, PrefixList};
 
+  bool IsAlias(OptionType t) {
+    return (t == Alias);
+  }
+
   bool IsList (OptionType t) {
     return (t == ParameterList || t == PrefixList);
   }
@@ -231,12 +262,12 @@ struct OptionDescription {
   bool isReallyHidden() const;
   void setReallyHidden();
 
-  bool isParameter() const
-  { return OptionType::IsParameter(this->Type); }
-
   bool isSwitch() const
   { return OptionType::IsSwitch(this->Type); }
 
+  bool isParameter() const
+  { return OptionType::IsParameter(this->Type); }
+
   bool isList() const
   { return OptionType::IsList(this->Type); }
 
@@ -258,7 +289,7 @@ void OptionDescription::Merge (const OptionDescription& other)
 }
 
 bool OptionDescription::isAlias() const {
-  return Type == OptionType::Alias;
+  return OptionType::IsAlias(this->Type);
 }
 
 bool OptionDescription::isMultiVal() const {
@@ -352,6 +383,14 @@ public:
   /// FindOption - exception-throwing wrapper for find().
   const OptionDescription& FindOption(const std::string& OptName) const;
 
+  // Wrappers for FindOption that throw an exception in case the option has a
+  // wrong type.
+  const OptionDescription& FindSwitch(const std::string& OptName) const;
+  const OptionDescription& FindParameter(const std::string& OptName) const;
+  const OptionDescription& FindList(const std::string& OptName) const;
+  const OptionDescription&
+  FindListOrParameter(const std::string& OptName) const;
+
   /// insertDescription - Insert new OptionDescription into
   /// OptionDescriptions list
   void InsertDescription (const OptionDescription& o);
@@ -363,8 +402,7 @@ public:
 };
 
 const OptionDescription&
-OptionDescriptions::FindOption(const std::string& OptName) const
-{
+OptionDescriptions::FindOption(const std::string& OptName) const {
   const_iterator I = Descriptions.find(OptName);
   if (I != Descriptions.end())
     return I->second;
@@ -372,8 +410,40 @@ OptionDescriptions::FindOption(const std::string& OptName) const
     throw OptName + ": no such option!";
 }
 
-void OptionDescriptions::InsertDescription (const OptionDescription& o)
-{
+const OptionDescription&
+OptionDescriptions::FindSwitch(const std::string& OptName) const {
+  const OptionDescription& OptDesc = this->FindOption(OptName);
+  if (!OptDesc.isSwitch())
+    throw OptName + ": incorrect option type - should be a switch!";
+  return OptDesc;
+}
+
+const OptionDescription&
+OptionDescriptions::FindList(const std::string& OptName) const {
+  const OptionDescription& OptDesc = this->FindOption(OptName);
+  if (!OptDesc.isList())
+    throw OptName + ": incorrect option type - should be a list!";
+  return OptDesc;
+}
+
+const OptionDescription&
+OptionDescriptions::FindParameter(const std::string& OptName) const {
+  const OptionDescription& OptDesc = this->FindOption(OptName);
+  if (!OptDesc.isParameter())
+    throw OptName + ": incorrect option type - should be a parameter!";
+  return OptDesc;
+}
+
+const OptionDescription&
+OptionDescriptions::FindListOrParameter(const std::string& OptName) const {
+  const OptionDescription& OptDesc = this->FindOption(OptName);
+  if (!OptDesc.isList() && !OptDesc.isParameter())
+    throw OptName
+      + ": incorrect option type - should be a list or parameter!";
+  return OptDesc;
+}
+
+void OptionDescriptions::InsertDescription (const OptionDescription& o) {
   container_type::iterator I = Descriptions.find(o.Name);
   if (I != Descriptions.end()) {
     OptionDescription& D = I->second;
@@ -409,7 +479,7 @@ public:
   /// handler.
   void operator() (Init* i) {
     const DagInit& property = InitPtrToDag(i);
-    const std::string& property_name = property.getOperator()->getAsString();
+    const std::string& property_name = GetOperatorName(property);
     typename HandlerMap::iterator method = Handlers_.find(property_name);
 
     if (method != Handlers_.end()) {
@@ -558,7 +628,7 @@ public:
     checkNumberOfArguments(&d, 1);
 
     const OptionType::OptionType Type =
-      stringToOptionType(d.getOperator()->getAsString());
+      stringToOptionType(GetOperatorName(d));
     const std::string& Name = InitPtrToString(d.getArg(0));
 
     OptionDescription OD(Type, Name);
@@ -678,7 +748,7 @@ private:
     checkNumberOfArguments(d, 1);
     Init* Case = d->getArg(0);
     if (typeid(*Case) != typeid(DagInit) ||
-        static_cast<DagInit*>(Case)->getOperator()->getAsString() != "case")
+        GetOperatorName(static_cast<DagInit*>(Case)) != "case")
       throw
         std::string("The argument to (actions) should be a 'case' construct!");
     toolDesc_.Actions = Case;
@@ -775,11 +845,17 @@ void FillInEdgeVector(RecordVector::const_iterator B,
 /// CalculatePriority - Calculate the priority of this plugin.
 int CalculatePriority(RecordVector::const_iterator B,
                       RecordVector::const_iterator E) {
-  int total = 0;
-  for (; B!=E; ++B) {
-    total += static_cast<int>((*B)->getValueAsInt("priority"));
+  int priority = 0;
+
+  if (B != E) {
+    priority  = static_cast<int>((*B)->getValueAsInt("priority"));
+
+    if (++B != E)
+      throw std::string("More than one 'PluginPriority' instance found: "
+                        "most probably an error!");
   }
-  return total;
+
+  return priority;
 }
 
 /// NotInGraph - Helper function object for FilterNotInGraph.
@@ -874,22 +950,60 @@ void TypecheckGraph (const RecordVector& EdgeVector,
 /// WalkCase - Walks the 'case' expression DAG and invokes
 /// TestCallback on every test, and StatementCallback on every
 /// statement. Handles 'case' nesting, but not the 'and' and 'or'
-/// combinators.
-// TODO: Re-implement EmitCaseConstructHandler on top of this function?
+/// combinators (that is, they are passed directly to TestCallback).
+/// TestCallback must have type 'void TestCallback(const DagInit*, unsigned
+/// IndentLevel, bool FirstTest)'.
+/// StatementCallback must have type 'void StatementCallback(const Init*,
+/// unsigned IndentLevel)'.
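+/// For example, walking a (hypothetical) expression such as
+///   (case (switch_on "O2"), "opt", (default), "gcc")
+/// invokes TestCallback on (switch_on "O2") and on (default), and
+/// StatementCallback on "opt" and on "gcc".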
 template <typename F1, typename F2>
-void WalkCase(Init* Case, F1 TestCallback, F2 StatementCallback) {
+void WalkCase(const Init* Case, F1 TestCallback, F2 StatementCallback,
+              unsigned IndentLevel = 0)
+{
   const DagInit& d = InitPtrToDag(Case);
+
+  // Error checks.
+  if (GetOperatorName(d) != "case")
+    throw std::string("WalkCase should be invoked only on 'case' expressions!");
+
+  if (d.getNumArgs() < 2)
+    throw "There should be at least one clause in the 'case' expression:\n"
+      + d.getAsString();
+
+  // Main loop.
   bool even = false;
+  const unsigned numArgs = d.getNumArgs();
+  unsigned i = 1;
   for (DagInit::const_arg_iterator B = d.arg_begin(), E = d.arg_end();
        B != E; ++B) {
     Init* arg = *B;
-    if (even && dynamic_cast<DagInit*>(arg)
-        && static_cast<DagInit*>(arg)->getOperator()->getAsString() == "case")
-      WalkCase(arg, TestCallback, StatementCallback);
-    else if (!even)
-      TestCallback(arg);
+
+    if (!even)
+    {
+      // Handle test.
+      const DagInit& Test = InitPtrToDag(arg);
+
+      if (GetOperatorName(Test) == "default" && (i+1 != numArgs))
+        throw std::string("The 'default' clause should be the last in the"
+                          "'case' construct!");
+      if (i == numArgs)
+        throw "Case construct handler: no corresponding action "
+          "found for the test " + Test.getAsString() + '!';
+
+      TestCallback(&Test, IndentLevel, (i == 1));
+    }
     else
-      StatementCallback(arg);
+    {
+      if (dynamic_cast<DagInit*>(arg)
+          && GetOperatorName(static_cast<DagInit*>(arg)) == "case") {
+        // Nested 'case'.
+        WalkCase(arg, TestCallback, StatementCallback, IndentLevel + Indent1);
+      }
+
+      // Handle statement.
+      StatementCallback(arg, IndentLevel);
+    }
+
+    ++i;
     even = !even;
   }
 }
@@ -901,7 +1015,7 @@ class ExtractOptionNames {
 
   void processDag(const Init* Statement) {
     const DagInit& Stmt = InitPtrToDag(Statement);
-    const std::string& ActionName = Stmt.getOperator()->getAsString();
+    const std::string& ActionName = GetOperatorName(Stmt);
     if (ActionName == "forward" || ActionName == "forward_as" ||
         ActionName == "unpack_values" || ActionName == "switch_on" ||
         ActionName == "parameter_equals" || ActionName == "element_in_list" ||
@@ -932,6 +1046,13 @@ public:
       this->processDag(Statement);
     }
   }
+
+  void operator()(const DagInit* Test, unsigned, bool) {
+    this->operator()(Test);
+  }
+  void operator()(const Init* Statement, unsigned) {
+    this->operator()(Statement);
+  }
 };
 
 /// CheckForSuperfluousOptions - Check that there are no side
@@ -990,51 +1111,137 @@ bool EmitCaseTest0Args(const std::string& TestName, raw_ostream& O) {
   return false;
 }
 
+/// EmitListTest - Helper function used by EmitCaseTest1ArgList().
+template <typename F>
+void EmitListTest(const ListInit& L, const char* LogicOp,
+                  F Callback, raw_ostream& O)
+{
+  // This is a lot like EmitLogicalOperationTest, but works on ListInits instead
+  // of Dags...
+  bool isFirst = true;
+  for (ListInit::const_iterator B = L.begin(), E = L.end(); B != E; ++B) {
+    if (isFirst)
+      isFirst = false;
+    else
+      O << " || ";
+    Callback(InitPtrToString(*B), O);
+  }
+}
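+// E.g., with LogicOp == "&&" and L == ["foo", "bar"] (illustrative names),
+// this emits Callback("foo") and Callback("bar") joined by " && ".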
 
-/// EmitCaseTest1Arg - Helper function used by EmitCaseConstructHandler().
-bool EmitCaseTest1Arg(const std::string& TestName,
-                      const DagInit& d,
-                      const OptionDescriptions& OptDescs,
-                      raw_ostream& O) {
-  checkNumberOfArguments(&d, 1);
+// Callbacks for use with EmitListTest.
+
+class EmitSwitchOn {
+  const OptionDescriptions& OptDescs_;
+public:
+  EmitSwitchOn(const OptionDescriptions& OptDescs) : OptDescs_(OptDescs)
+  {}
+
+  void operator()(const std::string& OptName, raw_ostream& O) const {
+    const OptionDescription& OptDesc = OptDescs_.FindSwitch(OptName);
+    O << OptDesc.GenVariableName();
+  }
+};
+
+class EmitEmptyTest {
+  bool EmitNegate_;
+  const OptionDescriptions& OptDescs_;
+public:
+  EmitEmptyTest(bool EmitNegate, const OptionDescriptions& OptDescs)
+    : EmitNegate_(EmitNegate), OptDescs_(OptDescs)
+  {}
+
+  void operator()(const std::string& OptName, raw_ostream& O) const {
+    const char* Neg = (EmitNegate_ ? "!" : "");
+    if (OptName == "o") {
+      O << Neg << "OutputFilename.empty()";
+    }
+    else {
+      const OptionDescription& OptDesc = OptDescs_.FindListOrParameter(OptName);
+      O << Neg << OptDesc.GenVariableName() << ".empty()";
+    }
+  }
+};
+
+
+/// EmitCaseTest1ArgList - Helper function used by EmitCaseTest1Arg().
+bool EmitCaseTest1ArgList(const std::string& TestName,
+                          const DagInit& d,
+                          const OptionDescriptions& OptDescs,
+                          raw_ostream& O) {
+  const ListInit& L = *static_cast<ListInit*>(d.getArg(0));
+
+  if (TestName == "any_switch_on") {
+    EmitListTest(L, "||", EmitSwitchOn(OptDescs), O);
+    return true;
+  }
+  else if (TestName == "switch_on") {
+    EmitListTest(L, "&&", EmitSwitchOn(OptDescs), O);
+    return true;
+  }
+  else if (TestName == "any_not_empty") {
+    EmitListTest(L, "||", EmitEmptyTest(true, OptDescs), O);
+    return true;
+  }
+  else if (TestName == "any_empty") {
+    EmitListTest(L, "||", EmitEmptyTest(false, OptDescs), O);
+    return true;
+  }
+  else if (TestName == "not_empty") {
+    EmitListTest(L, "&&", EmitEmptyTest(true, OptDescs), O);
+    return true;
+  }
+  else if (TestName == "empty") {
+    EmitListTest(L, "&&", EmitEmptyTest(false, OptDescs), O);
+    return true;
+  }
+
+  return false;
+}
+
+/// EmitCaseTest1ArgStr - Helper function used by EmitCaseTest1Arg().
+bool EmitCaseTest1ArgStr(const std::string& TestName,
+                         const DagInit& d,
+                         const OptionDescriptions& OptDescs,
+                         raw_ostream& O) {
   const std::string& OptName = InitPtrToString(d.getArg(0));
 
   if (TestName == "switch_on") {
-    const OptionDescription& OptDesc = OptDescs.FindOption(OptName);
-    if (!OptDesc.isSwitch())
-      throw OptName + ": incorrect option type - should be a switch!";
-    O << OptDesc.GenVariableName();
+    apply(EmitSwitchOn(OptDescs), OptName, O);
     return true;
-  } else if (TestName == "input_languages_contain") {
+  }
+  else if (TestName == "input_languages_contain") {
     O << "InLangs.count(\"" << OptName << "\") != 0";
     return true;
-  } else if (TestName == "in_language") {
+  }
+  else if (TestName == "in_language") {
     // This works only for single-argument Tool::GenerateAction. Join
     // tools can process several files in different languages simultaneously.
 
     // TODO: make this work with Edge::Weight (if possible).
     O << "LangMap.GetLanguage(inFile) == \"" << OptName << '\"';
     return true;
-  } else if (TestName == "not_empty" || TestName == "empty") {
-    const char* Test = (TestName == "empty") ? "" : "!";
-
-    if (OptName == "o") {
-      O << Test << "OutputFilename.empty()";
-      return true;
-    }
-    else {
-      const OptionDescription& OptDesc = OptDescs.FindOption(OptName);
-      if (OptDesc.isSwitch())
-        throw OptName
-          + ": incorrect option type - should be a list or parameter!";
-      O << Test << OptDesc.GenVariableName() << ".empty()";
-      return true;
-    }
+  }
+  else if (TestName == "not_empty" || TestName == "empty") {
+    bool EmitNegate = (TestName == "not_empty");
+    apply(EmitEmptyTest(EmitNegate, OptDescs), OptName, O);
+    return true;
   }
 
   return false;
 }
 
+/// EmitCaseTest1Arg - Helper function used by EmitCaseConstructHandler().
+bool EmitCaseTest1Arg(const std::string& TestName,
+                      const DagInit& d,
+                      const OptionDescriptions& OptDescs,
+                      raw_ostream& O) {
+  checkNumberOfArguments(&d, 1);
+  if (typeid(*d.getArg(0)) == typeid(ListInit))
+    return EmitCaseTest1ArgList(TestName, d, OptDescs, O);
+  else
+    return EmitCaseTest1ArgStr(TestName, d, OptDescs, O);
+}
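+// E.g. (switch_on "opt") is dispatched to the *Str version, while
+// (switch_on ["opt", "E"]) is dispatched to the *List version (option
+// names are illustrative).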
+
 /// EmitCaseTest2Args - Helper function used by EmitCaseConstructHandler().
 bool EmitCaseTest2Args(const std::string& TestName,
                        const DagInit& d,
@@ -1044,17 +1251,14 @@ bool EmitCaseTest2Args(const std::string& TestName,
   checkNumberOfArguments(&d, 2);
   const std::string& OptName = InitPtrToString(d.getArg(0));
   const std::string& OptArg = InitPtrToString(d.getArg(1));
-  const OptionDescription& OptDesc = OptDescs.FindOption(OptName);
 
   if (TestName == "parameter_equals") {
-    if (!OptDesc.isParameter())
-      throw OptName + ": incorrect option type - should be a parameter!";
+    const OptionDescription& OptDesc = OptDescs.FindParameter(OptName);
     O << OptDesc.GenVariableName() << " == \"" << OptArg << "\"";
     return true;
   }
   else if (TestName == "element_in_list") {
-    if (!OptDesc.isList())
-      throw OptName + ": incorrect option type - should be a list!";
+    const OptionDescription& OptDesc = OptDescs.FindList(OptName);
     const std::string& VarName = OptDesc.GenVariableName();
     O << "std::find(" << VarName << ".begin(),\n";
     O.indent(IndentLevel + Indent1)
@@ -1106,7 +1310,7 @@ void EmitLogicalNot(const DagInit& d, unsigned IndentLevel,
 void EmitCaseTest(const DagInit& d, unsigned IndentLevel,
                   const OptionDescriptions& OptDescs,
                   raw_ostream& O) {
-  const std::string& TestName = d.getOperator()->getAsString();
+  const std::string& TestName = GetOperatorName(d);
 
   if (TestName == "and")
     EmitLogicalOperationTest(d, "&&", IndentLevel, OptDescs, O);
@@ -1124,59 +1328,79 @@ void EmitCaseTest(const DagInit& d, unsigned IndentLevel,
     throw TestName + ": unknown edge property!";
 }
 
-// Emit code that handles the 'case' construct.
-// Takes a function object that should emit code for every case clause.
-// Callback's type is
-// void F(Init* Statement, unsigned IndentLevel, raw_ostream& O).
-template <typename F>
-void EmitCaseConstructHandler(const Init* Dag, unsigned IndentLevel,
-                              F Callback, bool EmitElseIf,
-                              const OptionDescriptions& OptDescs,
-                              raw_ostream& O) {
-  const DagInit* d = &InitPtrToDag(Dag);
-  if (d->getOperator()->getAsString() != "case")
-    throw std::string("EmitCaseConstructHandler should be invoked"
-                      " only on 'case' expressions!");
 
-  unsigned numArgs = d->getNumArgs();
-  if (d->getNumArgs() < 2)
-    throw "There should be at least one clause in the 'case' expression:\n"
-      + d->getAsString();
+/// EmitCaseTestCallback - Callback used by EmitCaseConstructHandler.
+class EmitCaseTestCallback {
+  bool EmitElseIf_;
+  const OptionDescriptions& OptDescs_;
+  raw_ostream& O_;
+public:
 
-  for (unsigned i = 0; i != numArgs; ++i) {
-    const DagInit& Test = InitPtrToDag(d->getArg(i));
+  EmitCaseTestCallback(bool EmitElseIf,
+                       const OptionDescriptions& OptDescs, raw_ostream& O)
+    : EmitElseIf_(EmitElseIf), OptDescs_(OptDescs), O_(O)
+  {}
 
-    // Emit the test.
-    if (Test.getOperator()->getAsString() == "default") {
-      if (i+2 != numArgs)
-        throw std::string("The 'default' clause should be the last in the"
-                          "'case' construct!");
-      O.indent(IndentLevel) << "else {\n";
+  void operator()(const DagInit* Test, unsigned IndentLevel, bool FirstTest)
+  {
+    if (GetOperatorName(Test) == "default") {
+      O_.indent(IndentLevel) << "else {\n";
     }
     else {
-      O.indent(IndentLevel) << ((i != 0 && EmitElseIf) ? "else if (" : "if (");
-      EmitCaseTest(Test, IndentLevel, OptDescs, O);
-      O << ") {\n";
+      O_.indent(IndentLevel)
+        << ((!FirstTest && EmitElseIf_) ? "else if (" : "if (");
+      EmitCaseTest(*Test, IndentLevel, OptDescs_, O_);
+      O_ << ") {\n";
     }
+  }
+};
 
-    // Emit the corresponding statement.
-    ++i;
-    if (i == numArgs)
-      throw "Case construct handler: no corresponding action "
-        "found for the test " + Test.getAsString() + '!';
-
-    Init* arg = d->getArg(i);
-    const DagInit* nd = dynamic_cast<DagInit*>(arg);
-    if (nd && (nd->getOperator()->getAsString() == "case")) {
-      // Handle the nested 'case'.
-      EmitCaseConstructHandler(nd, (IndentLevel + Indent1),
-                               Callback, EmitElseIf, OptDescs, O);
-    }
-    else {
-      Callback(arg, (IndentLevel + Indent1), O);
+/// EmitCaseStatementCallback - Callback used by EmitCaseConstructHandler.
+template <typename F>
+class EmitCaseStatementCallback {
+  F Callback_;
+  raw_ostream& O_;
+public:
+
+  EmitCaseStatementCallback(F Callback, raw_ostream& O)
+    : Callback_(Callback), O_(O)
+  {}
+
+  void operator() (const Init* Statement, unsigned IndentLevel) {
+
+    // Ignore nested 'case' DAG.
+    if (!(dynamic_cast<const DagInit*>(Statement) &&
+          GetOperatorName(static_cast<const DagInit*>(Statement)) == "case")) {
+      if (typeid(*Statement) == typeid(ListInit)) {
+        const ListInit& DagList = *static_cast<const ListInit*>(Statement);
+        for (ListInit::const_iterator B = DagList.begin(), E = DagList.end();
+             B != E; ++B)
+          Callback_(*B, (IndentLevel + Indent1), O_);
+      }
+      else {
+        Callback_(Statement, (IndentLevel + Indent1), O_);
+      }
     }
-    O.indent(IndentLevel) << "}\n";
+    O_.indent(IndentLevel) << "}\n";
   }
+
+};
+
+/// EmitCaseConstructHandler - Emit code that handles the 'case'
+/// construct. Takes a function object that should emit code for every case
+/// clause. Implemented on top of WalkCase.
+/// Callback's type is void F(Init* Statement, unsigned IndentLevel,
+/// raw_ostream& O).
+/// EmitElseIf parameter controls the type of condition that is emitted ('if
+/// (..) {..} else if (..) {} .. else {..}' vs. 'if (..) {..} if(..)  {..}
+/// .. else {..}').
+template <typename F>
+void EmitCaseConstructHandler(const Init* Case, unsigned IndentLevel,
+                              F Callback, bool EmitElseIf,
+                              const OptionDescriptions& OptDescs,
+                              raw_ostream& O) {
+  WalkCase(Case, EmitCaseTestCallback(EmitElseIf, OptDescs, O),
+           EmitCaseStatementCallback<F>(Callback, O), IndentLevel);
 }
 
 /// TokenizeCmdline - converts from "$CALL(HookName, 'Arg1', 'Arg2')/path" to
@@ -1352,12 +1576,15 @@ void EmitCmdLineVecFill(const Init* CmdLine, const std::string& ToolName,
     ++I;
   }
 
+  bool hasINFILE = false;
+
   for (; I != E; ++I) {
     const std::string& cmd = *I;
     assert(!cmd.empty());
     O.indent(IndentLevel);
     if (cmd.at(0) == '$') {
       if (cmd == "$INFILE") {
+        hasINFILE = true;
         if (IsJoin) {
           O << "for (PathVector::const_iterator B = inFiles.begin()"
             << ", E = inFiles.end();\n";
@@ -1369,7 +1596,8 @@ void EmitCmdLineVecFill(const Init* CmdLine, const std::string& ToolName,
         }
       }
       else if (cmd == "$OUTFILE") {
-        O << "vec.push_back(out_file);\n";
+        O << "vec.push_back(\"\");\n";
+        O.indent(IndentLevel) << "out_file_index = vec.size()-1;\n";
       }
       else {
         O << "vec.push_back(";
@@ -1381,8 +1609,10 @@ void EmitCmdLineVecFill(const Init* CmdLine, const std::string& ToolName,
       O << "vec.push_back(\"" << cmd << "\");\n";
     }
   }
-  O.indent(IndentLevel) << "cmd = ";
+  if (!hasINFILE)
+    throw "Tool '" + ToolName + "' doesn't take any input!";
 
+  O.indent(IndentLevel) << "cmd = ";
   if (StrVec[0][0] == '$')
     SubstituteSpecialCommands(StrVec.begin(), StrVec.end(), O);
   else
@@ -1468,17 +1698,40 @@ void EmitForwardOptionPropertyHandlingCode (const OptionDescription& D,
   }
 }
 
-/// EmitActionHandler - Emit code that handles actions. Used by
-/// EmitGenerateActionMethod() as an argument to
-/// EmitCaseConstructHandler().
-class EmitActionHandler {
+/// ActionHandlingCallbackBase - Base class of EmitActionHandlersCallback and
+/// EmitPreprocessOptionsCallback.
+struct ActionHandlingCallbackBase {
+
+  void onErrorDag(const DagInit& d,
+                  unsigned IndentLevel, raw_ostream& O) const
+  {
+    O.indent(IndentLevel)
+      << "throw std::runtime_error(\"" <<
+      (d.getNumArgs() >= 1 ? InitPtrToString(d.getArg(0))
+       : "Unknown error!")
+      << "\");\n";
+  }
+
+  void onWarningDag(const DagInit& d,
+                    unsigned IndentLevel, raw_ostream& O) const
+  {
+    checkNumberOfArguments(&d, 1);
+    O.indent(IndentLevel) << "llvm::errs() << \""
+                          << InitPtrToString(d.getArg(0)) << "\";\n";
+  }
+
+};
+
+/// EmitActionHandlersCallback - Emit code that handles actions. Used by
+/// EmitGenerateActionMethod() as an argument to EmitCaseConstructHandler().
+class EmitActionHandlersCallback : ActionHandlingCallbackBase {
   const OptionDescriptions& OptDescs;
 
   void processActionDag(const Init* Statement, unsigned IndentLevel,
                         raw_ostream& O) const
   {
     const DagInit& Dag = InitPtrToDag(Statement);
-    const std::string& ActionName = Dag.getOperator()->getAsString();
+    const std::string& ActionName = GetOperatorName(Dag);
 
     if (ActionName == "append_cmd") {
       checkNumberOfArguments(&Dag, 1);
@@ -1491,10 +1744,10 @@ class EmitActionHandler {
         O.indent(IndentLevel) << "vec.push_back(\"" << *B << "\");\n";
     }
     else if (ActionName == "error") {
-      O.indent(IndentLevel) << "throw std::runtime_error(\"" <<
-        (Dag.getNumArgs() >= 1 ? InitPtrToString(Dag.getArg(0))
-         : "Unknown error!")
-        << "\");\n";
+      this->onErrorDag(Dag, IndentLevel, O);
+    }
+    else if (ActionName == "warning") {
+      this->onWarningDag(Dag, IndentLevel, O);
     }
     else if (ActionName == "forward") {
       checkNumberOfArguments(&Dag, 1);
@@ -1548,25 +1801,69 @@ class EmitActionHandler {
     }
   }
  public:
-  EmitActionHandler(const OptionDescriptions& OD)
+  EmitActionHandlersCallback(const OptionDescriptions& OD)
     : OptDescs(OD) {}
 
-  void operator()(const Init* Statement, unsigned IndentLevel,
-                  raw_ostream& O) const
+  void operator()(const Init* Statement,
+                  unsigned IndentLevel, raw_ostream& O) const
   {
-    if (typeid(*Statement) == typeid(ListInit)) {
-      const ListInit& DagList = *static_cast<const ListInit*>(Statement);
-      for (ListInit::const_iterator B = DagList.begin(), E = DagList.end();
-           B != E; ++B)
-        this->processActionDag(*B, IndentLevel, O);
-    }
-    else {
-      this->processActionDag(Statement, IndentLevel, O);
-    }
+    this->processActionDag(Statement, IndentLevel, O);
   }
 };
 
-// EmitGenerateActionMethod - Emit one of two versions of the
+bool IsOutFileIndexCheckRequiredStr (const Init* CmdLine) {
+  StrVector StrVec;
+  TokenizeCmdline(InitPtrToString(CmdLine), StrVec);
+
+  for (StrVector::const_iterator I = StrVec.begin(), E = StrVec.end();
+       I != E; ++I) {
+    if (*I == "$OUTFILE")
+      return false;
+  }
+
+  return true;
+}
+
+class IsOutFileIndexCheckRequiredStrCallback {
+  bool* ret_;
+
+public:
+  IsOutFileIndexCheckRequiredStrCallback(bool* ret) : ret_(ret)
+  {}
+
+  void operator()(const Init* CmdLine) {
+    // Ignore nested 'case' DAG.
+    if (typeid(*CmdLine) == typeid(DagInit))
+      return;
+
+    if (IsOutFileIndexCheckRequiredStr(CmdLine))
+      *ret_ = true;
+  }
+
+  void operator()(const DagInit* Test, unsigned, bool) {
+    this->operator()(Test);
+  }
+  void operator()(const Init* Statement, unsigned) {
+    this->operator()(Statement);
+  }
+};
+
+bool IsOutFileIndexCheckRequiredCase (Init* CmdLine) {
+  bool ret = false;
+  WalkCase(CmdLine, Id(), IsOutFileIndexCheckRequiredStrCallback(&ret));
+  return ret;
+}
+
+/// IsOutFileIndexCheckRequired - Should we emit an "out_file_index != -1" check
+/// in EmitGenerateActionMethod() ?
+bool IsOutFileIndexCheckRequired (Init* CmdLine) {
+  if (typeid(*CmdLine) == typeid(StringInit))
+    return IsOutFileIndexCheckRequiredStr(CmdLine);
+  else
+    return IsOutFileIndexCheckRequiredCase(CmdLine);
+}
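+// E.g., a (hypothetical) cmd_line of "cp $INFILE $OUTFILE" names $OUTFILE,
+// so out_file_index is always assigned and needs no check; a cmd_line that
+// never names $OUTFILE leaves out_file_index at -1, hence the check.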
+
+// EmitGenerateActionMethod - Emit either a normal or a "join" version of the
 // Tool::GenerateAction() method.
 void EmitGenerateActionMethod (const ToolDescription& D,
                                const OptionDescriptions& OptDescs,
@@ -1586,27 +1883,39 @@ void EmitGenerateActionMethod (const ToolDescription& D,
   O.indent(Indent2) << "bool stop_compilation = !HasChildren;\n";
   O.indent(Indent2) << "const char* output_suffix = \""
                     << D.OutputSuffix << "\";\n";
-  O.indent(Indent2) << "std::string out_file;\n\n";
+
+  if (!D.CmdLine)
+    throw "Tool " + D.Name + " has no cmd_line property!";
+
+  bool IndexCheckRequired = IsOutFileIndexCheckRequired(D.CmdLine);
+  O.indent(Indent2) << "int out_file_index"
+                    << (IndexCheckRequired ? " = -1" : "")
+                    << ";\n\n";
+
+  // Process the cmd_line property.
+  if (typeid(*D.CmdLine) == typeid(StringInit))
+    EmitCmdLineVecFill(D.CmdLine, D.Name, IsJoin, Indent2, O);
+  else
+    EmitCaseConstructHandler(D.CmdLine, Indent2,
+                             EmitCmdLineVecFillCallback(IsJoin, D.Name),
+                             true, OptDescs, O);
 
   // For every understood option, emit handling code.
   if (D.Actions)
-    EmitCaseConstructHandler(D.Actions, Indent2, EmitActionHandler(OptDescs),
+    EmitCaseConstructHandler(D.Actions, Indent2,
+                             EmitActionHandlersCallback(OptDescs),
                              false, OptDescs, O);
 
   O << '\n';
   O.indent(Indent2)
-    << "out_file = OutFilename(" << (IsJoin ? "sys::Path(),\n" : "inFile,\n");
+    << "std::string out_file = OutFilename("
+    << (IsJoin ? "sys::Path(),\n" : "inFile,\n");
   O.indent(Indent3) << "TempDir, stop_compilation, output_suffix).str();\n\n";
 
-  // cmd_line is either a string or a 'case' construct.
-  if (!D.CmdLine)
-    throw "Tool " + D.Name + " has no cmd_line property!";
-  else if (typeid(*D.CmdLine) == typeid(StringInit))
-    EmitCmdLineVecFill(D.CmdLine, D.Name, IsJoin, Indent2, O);
-  else
-    EmitCaseConstructHandler(D.CmdLine, Indent2,
-                             EmitCmdLineVecFillCallback(IsJoin, D.Name),
-                             true, OptDescs, O);
+  if (IndexCheckRequired)
+    O.indent(Indent2) << "if (out_file_index != -1)\n";
+  O.indent(IndexCheckRequired ? Indent3 : Indent2)
+    << "vec[out_file_index] = out_file;\n";
 
   // Handle the Sink property.
   if (D.isSink()) {
@@ -1812,10 +2121,93 @@ void EmitOptionDefinitions (const OptionDescriptions& descs,
   O << '\n';
 }
 
-/// EmitPopulateLanguageMap - Emit the PopulateLanguageMap() function.
+/// EmitPreprocessOptionsCallback - Helper function passed to
+/// EmitCaseConstructHandler() by EmitPreprocessOptions().
+class EmitPreprocessOptionsCallback : ActionHandlingCallbackBase {
+  const OptionDescriptions& OptDescs_;
+
+  void onUnsetOption(Init* i, unsigned IndentLevel, raw_ostream& O) {
+    const std::string& OptName = InitPtrToString(i);
+    const OptionDescription& OptDesc = OptDescs_.FindOption(OptName);
+
+    if (OptDesc.isSwitch()) {
+      O.indent(IndentLevel) << OptDesc.GenVariableName() << " = false;\n";
+    }
+    else if (OptDesc.isParameter()) {
+      O.indent(IndentLevel) << OptDesc.GenVariableName() << " = \"\";\n";
+    }
+    else if (OptDesc.isList()) {
+      O.indent(IndentLevel) << OptDesc.GenVariableName() << ".clear();\n";
+    }
+    else {
+      throw "Can't apply 'unset_option' to alias option '" + OptName + "'";
+    }
+  }
+
+  void processDag(const Init* I, unsigned IndentLevel, raw_ostream& O)
+  {
+    const DagInit& d = InitPtrToDag(I);
+    const std::string& OpName = GetOperatorName(d);
+
+    if (OpName == "warning") {
+      this->onWarningDag(d, IndentLevel, O);
+    }
+    else if (OpName == "error") {
+      this->onErrorDag(d, IndentLevel, O);
+    }
+    else if (OpName == "unset_option") {
+      checkNumberOfArguments(&d, 1);
+      Init* I = d.getArg(0);
+      if (typeid(*I) == typeid(ListInit)) {
+        const ListInit& DagList = *static_cast<const ListInit*>(I);
+        for (ListInit::const_iterator B = DagList.begin(), E = DagList.end();
+             B != E; ++B)
+          this->onUnsetOption(*B, IndentLevel, O);
+      }
+      else {
+        this->onUnsetOption(I, IndentLevel, O);
+      }
+    }
+    else {
+      throw "Unknown operator in the option preprocessor: '" + OpName + "'!"
+        "\nOnly 'warning', 'error' and 'unset_option' are allowed.";
+    }
+  }
+
+public:
+
+  void operator()(const Init* I, unsigned IndentLevel, raw_ostream& O) {
+      this->processDag(I, IndentLevel, O);
+  }
+
+  EmitPreprocessOptionsCallback(const OptionDescriptions& OptDescs)
+  : OptDescs_(OptDescs)
+  {}
+};
+
+/// EmitPreprocessOptions - Emit the PreprocessOptionsLocal() function.
+void EmitPreprocessOptions (const RecordKeeper& Records,
+                            const OptionDescriptions& OptDescs, raw_ostream& O)
+{
+  O << "void PreprocessOptionsLocal() {\n";
+
+  const RecordVector& OptionPreprocessors =
+    Records.getAllDerivedDefinitions("OptionPreprocessor");
+
+  for (RecordVector::const_iterator B = OptionPreprocessors.begin(),
+         E = OptionPreprocessors.end(); B!=E; ++B) {
+    DagInit* Case = (*B)->getValueAsDag("preprocessor");
+    EmitCaseConstructHandler(Case, Indent1,
+                             EmitPreprocessOptionsCallback(OptDescs),
+                             false, OptDescs, O);
+  }
+
+  O << "}\n\n";
+}
+
+/// EmitPopulateLanguageMap - Emit the PopulateLanguageMapLocal() function.
 void EmitPopulateLanguageMap (const RecordKeeper& Records, raw_ostream& O)
 {
-  // Generate code
   O << "void PopulateLanguageMapLocal(LanguageMap& langMap) {\n";
 
   // Get the relevant field out of RecordKeeper
@@ -1849,7 +2241,7 @@ void EmitPopulateLanguageMap (const RecordKeeper& Records, raw_ostream& O)
 void IncDecWeight (const Init* i, unsigned IndentLevel,
                    raw_ostream& O) {
   const DagInit& d = InitPtrToDag(i);
-  const std::string& OpName = d.getOperator()->getAsString();
+  const std::string& OpName = GetOperatorName(d);
 
   if (OpName == "inc_weight") {
     O.indent(IndentLevel) << "ret += ";
@@ -1858,17 +2250,16 @@ void IncDecWeight (const Init* i, unsigned IndentLevel,
     O.indent(IndentLevel) << "ret -= ";
   }
   else if (OpName == "error") {
-    O.indent(IndentLevel)
-      << "throw std::runtime_error(\"" <<
-      (d.getNumArgs() >= 1 ? InitPtrToString(d.getArg(0))
-       : "Unknown error!")
-      << "\");\n";
+    checkNumberOfArguments(&d, 1);
+    O.indent(IndentLevel) << "throw std::runtime_error(\""
+                          << InitPtrToString(d.getArg(0))
+                          << "\");\n";
     return;
   }
-
-  else
-    throw "Unknown operator in edge properties list: " + OpName + '!' +
+  else {
+    throw "Unknown operator in edge properties list: '" + OpName + "'!"
       "\nOnly 'inc_weight', 'dec_weight' and 'error' are allowed.";
+  }
 
   if (d.getNumArgs() > 0)
     O << InitPtrToInt(d.getArg(0)) << ";\n";
@@ -1917,7 +2308,7 @@ void EmitEdgeClasses (const RecordVector& EdgeVector,
   }
 }
 
-/// EmitPopulateCompilationGraph - Emit the PopulateCompilationGraph()
+/// EmitPopulateCompilationGraph - Emit the PopulateCompilationGraphLocal()
 /// function.
 void EmitPopulateCompilationGraph (const RecordVector& EdgeVector,
                                    const ToolDescriptions& ToolDescs,
@@ -1966,6 +2357,11 @@ public:
 
   void operator()(const Init* CmdLine) {
     StrVector cmds;
+
+    // Ignore nested 'case' DAG.
+    if (typeid(*CmdLine) == typeid(DagInit))
+      return;
+
     TokenizeCmdline(InitPtrToString(CmdLine), cmds);
     for (StrVector::const_iterator B = cmds.begin(), E = cmds.end();
          B != E; ++B) {
@@ -1995,6 +2391,13 @@ public:
       }
     }
   }
+
+  void operator()(const DagInit* Test, unsigned, bool) {
+    this->operator()(Test);
+  }
+  void operator()(const Init* Statement, unsigned) {
+    this->operator()(Statement);
+  }
 };
 
 /// FillInHookNames - Actually extract the hook names from all command
@@ -2046,6 +2449,8 @@ void EmitRegisterPlugin(int Priority, raw_ostream& O) {
   O << "struct Plugin : public llvmc::BasePlugin {\n\n";
   O.indent(Indent1) << "int Priority() const { return "
                     << Priority << "; }\n\n";
+  O.indent(Indent1) << "void PreprocessOptions() const\n";
+  O.indent(Indent1) << "{ PreprocessOptionsLocal(); }\n\n";
   O.indent(Indent1) << "void PopulateLanguageMap(LanguageMap& langMap) const\n";
   O.indent(Indent1) << "{ PopulateLanguageMapLocal(langMap); }\n\n";
   O.indent(Indent1)
@@ -2065,7 +2470,8 @@ void EmitIncludes(raw_ostream& O) {
     << "#include \"llvm/CompilerDriver/Tool.h\"\n\n"
 
     << "#include \"llvm/ADT/StringExtras.h\"\n"
-    << "#include \"llvm/Support/CommandLine.h\"\n\n"
+    << "#include \"llvm/Support/CommandLine.h\"\n"
+    << "#include \"llvm/Support/raw_ostream.h\"\n\n"
 
     << "#include <cstdlib>\n"
     << "#include <stdexcept>\n\n"
@@ -2165,8 +2571,11 @@ void EmitPluginCode(const PluginData& Data, raw_ostream& O) {
 
   O << "namespace {\n\n";
 
-  // Emit PopulateLanguageMap() function
-  // (a language map maps from file extensions to language names).
+  // Emit PreprocessOptionsLocal() function.
+  EmitPreprocessOptions(Records, Data.OptDescs, O);
+
+  // Emit PopulateLanguageMapLocal() function
+  // (the language map maps file extensions to language names).
   EmitPopulateLanguageMap(Records, O);
 
   // Emit Tool classes.
@@ -2177,7 +2586,7 @@ void EmitPluginCode(const PluginData& Data, raw_ostream& O) {
   // Emit Edge# classes.
   EmitEdgeClasses(Data.Edges, Data.OptDescs, O);
 
-  // Emit PopulateCompilationGraph() function.
+  // Emit PopulateCompilationGraphLocal() function.
   EmitPopulateCompilationGraph(Data.Edges, Data.ToolDescs, O);
 
   // Emit code for plugin registration.
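
To make the new OptionPreprocessor support concrete: for a hypothetical
plugin whose preprocessor dag contains (unset_option ["O0", "include"]),
where -O0 is a switch and -include is a list, EmitPreprocessOptionsCallback
would emit, inside PreprocessOptionsLocal() and under whatever 'case' tests
the dag specifies, code of roughly this shape (the variable names come from
GenVariableName(), which this patch does not show, so they are illustrative
only):

    O0 = false;        // 'unset_option' on a switch resets it to false
    include.clear();   // on a list it clears; a parameter would become ""
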
diff --git a/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.h b/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.h
index 347f6f1..b37b83f 100644
--- a/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.h
+++ b/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.h
@@ -21,9 +21,8 @@ namespace llvm {
   /// LLVMCConfigurationEmitter - TableGen backend that generates
   /// configuration code for LLVMC.
   class LLVMCConfigurationEmitter : public TableGenBackend {
-    RecordKeeper &Records;
   public:
-    explicit LLVMCConfigurationEmitter(RecordKeeper &R) : Records(R) {}
+    explicit LLVMCConfigurationEmitter(RecordKeeper&) {}
 
     // run - Output the asmwriter, returning true on failure.
     void run(raw_ostream &o);
diff --git a/libclamav/c++/llvm/utils/TableGen/OptParserEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/OptParserEmitter.cpp
new file mode 100644
index 0000000..ce1aef5
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/OptParserEmitter.cpp
@@ -0,0 +1,199 @@
+//===- OptParserEmitter.cpp - Table Driven Command Line Parsing -----------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#include "OptParserEmitter.h"
+#include "Record.h"
+#include "llvm/ADT/STLExtras.h"
+using namespace llvm;
+
+static int StrCmpOptionName(const char *A, const char *B) {
+  char a = *A, b = *B;
+  while (a == b) {
+    if (a == '\0')
+      return 0;
+
+    a = *++A;
+    b = *++B;
+  }
+
+  if (a == '\0') // A is a prefix of B.
+    return 1;
+  if (b == '\0') // B is a prefix of A.
+    return -1;
+
+  // Otherwise lexicographic.
+  return (a < b) ? -1 : 1;
+}
+
+static int CompareOptionRecords(const void *Av, const void *Bv) {
+  const Record *A = *(Record**) Av;
+  const Record *B = *(Record**) Bv;
+
+  // Sentinel options precede all others and are only ordered by precedence.
+  bool ASent = A->getValueAsDef("Kind")->getValueAsBit("Sentinel");
+  bool BSent = B->getValueAsDef("Kind")->getValueAsBit("Sentinel");
+  if (ASent != BSent)
+    return ASent ? -1 : 1;
+
+  // Compare options by name, unless they are sentinels.
+  if (!ASent)
+    if (int Cmp = StrCmpOptionName(A->getValueAsString("Name").c_str(),
+                                   B->getValueAsString("Name").c_str()))
+      return Cmp;
+
+  // Then by the kind precedence.
+  int APrec = A->getValueAsDef("Kind")->getValueAsInt("Precedence");
+  int BPrec = B->getValueAsDef("Kind")->getValueAsInt("Precedence");
+  assert(APrec != BPrec && "Options are equivalent!");
+  return APrec < BPrec ? -1 : 1;
+}
+
+static const std::string getOptionName(const Record &R) {
+  // Use the record name unless EnumName is defined.
+  if (dynamic_cast<UnsetInit*>(R.getValueInit("EnumName")))
+    return R.getName();
+
+  return R.getValueAsString("EnumName");
+}
+
+static raw_ostream &write_cstring(raw_ostream &OS, llvm::StringRef Str) {
+  OS << '"';
+  OS.write_escaped(Str);
+  OS << '"';
+  return OS;
+}
+
+void OptParserEmitter::run(raw_ostream &OS) {
+  // Get the option groups and options.
+  const std::vector<Record*> &Groups =
+    Records.getAllDerivedDefinitions("OptionGroup");
+  std::vector<Record*> Opts = Records.getAllDerivedDefinitions("Option");
+
+  if (GenDefs) {
+    OS << "\
+//=== TableGen'erated File - Option Parsing Definitions ---------*- C++ -*-===//\n \
+//\n\
+// Option Parsing Definitions\n\
+//\n\
+// Automatically generated file, do not edit!\n\
+//\n\
+//===----------------------------------------------------------------------===//\n";
+  } else {
+    OS << "\
+//=== TableGen'erated File - Option Parsing Table ---------------*- C++ -*-===//\n \
+//\n\
+// Option Parsing Table\n\
+//\n\
+// Automatically generated file, do not edit!\n\
+//\n\
+//===----------------------------------------------------------------------===//\n";
+  }
+  OS << "\n";
+
+  array_pod_sort(Opts.begin(), Opts.end(), CompareOptionRecords);
+  if (GenDefs) {
+    OS << "#ifndef OPTION\n";
+    OS << "#error \"Define OPTION prior to including this file!\"\n";
+    OS << "#endif\n\n";
+
+    OS << "/////////\n";
+    OS << "// Groups\n\n";
+    for (unsigned i = 0, e = Groups.size(); i != e; ++i) {
+      const Record &R = *Groups[i];
+
+      // Start a single option entry.
+      OS << "OPTION(";
+
+      // The option string.
+      OS << '"' << R.getValueAsString("Name") << '"';
+
+      // The option identifier name.
+      OS << ", " << getOptionName(R);
+
+      // The option kind.
+      OS << ", Group";
+
+      // The containing option group (if any).
+      OS << ", ";
+      if (const DefInit *DI = dynamic_cast<DefInit*>(R.getValueInit("Group")))
+        OS << getOptionName(*DI->getDef());
+      else
+        OS << "INVALID";
+
+      // The other option arguments (unused for groups).
+      OS << ", INVALID, 0, 0, 0, 0)\n";
+    }
+    OS << "\n";
+
+    OS << "//////////\n";
+    OS << "// Options\n\n";
+    for (unsigned i = 0, e = Opts.size(); i != e; ++i) {
+      const Record &R = *Opts[i];
+
+      // Start a single option entry.
+      OS << "OPTION(";
+
+      // The option string.
+      write_cstring(OS, R.getValueAsString("Name"));
+
+      // The option identifier name.
+      OS << ", " << getOptionName(R);
+
+      // The option kind.
+      OS << ", " << R.getValueAsDef("Kind")->getValueAsString("Name");
+
+      // The containing option group (if any).
+      OS << ", ";
+      if (const DefInit *DI = dynamic_cast<DefInit*>(R.getValueInit("Group")))
+        OS << getOptionName(*DI->getDef());
+      else
+        OS << "INVALID";
+
+      // The option alias (if any).
+      OS << ", ";
+      if (const DefInit *DI = dynamic_cast<DefInit*>(R.getValueInit("Alias")))
+        OS << getOptionName(*DI->getDef());
+      else
+        OS << "INVALID";
+
+      // The option flags.
+      const ListInit *LI = R.getValueAsListInit("Flags");
+      if (LI->empty()) {
+        OS << ", 0";
+      } else {
+        OS << ", ";
+        for (unsigned i = 0, e = LI->size(); i != e; ++i) {
+          if (i)
+            OS << " | ";
+          OS << dynamic_cast<DefInit*>(LI->getElement(i))->getDef()->getName();
+        }
+      }
+
+      // The option parameter field.
+      OS << ", " << R.getValueAsInt("NumArgs");
+
+      // The option help text.
+      if (!dynamic_cast<UnsetInit*>(R.getValueInit("HelpText"))) {
+        OS << ",\n";
+        OS << "       ";
+        write_cstring(OS, R.getValueAsString("HelpText"));
+      } else
+        OS << ", 0";
+
+      // The option meta-variable name.
+      OS << ", ";
+      if (!dynamic_cast<UnsetInit*>(R.getValueInit("MetaVarName")))
+        write_cstring(OS, R.getValueAsString("MetaVarName"));
+      else
+        OS << "0";
+
+      OS << ")\n";
+    }
+  }
+}
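
Note that StrCmpOptionName() is deliberately not plain strcmp(): when one
name is a strict prefix of the other, the longer name sorts first, so an
extended spelling such as "foo=" lands ahead of "foo" in the sorted option
table. A quick sanity check, assuming the (file-static) function above is
pasted into the same translation unit:

    #include <cassert>

    int main() {
      assert(StrCmpOptionName("foo", "foobar") > 0); // prefix sorts after
      assert(StrCmpOptionName("foobar", "foo") < 0); // extension sorts first
      assert(StrCmpOptionName("abc", "abd") < 0);    // otherwise lexicographic
      assert(StrCmpOptionName("opt", "opt") == 0);   // equal names
      return 0;
    }
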
diff --git a/libclamav/c++/llvm/utils/TableGen/OptParserEmitter.h b/libclamav/c++/llvm/utils/TableGen/OptParserEmitter.h
new file mode 100644
index 0000000..241a3f2
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/OptParserEmitter.h
@@ -0,0 +1,34 @@
+//===- OptParserEmitter.h - Table Driven Command Line Parsing ---*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef UTILS_TABLEGEN_OPTPARSEREMITTER_H
+#define UTILS_TABLEGEN_OPTPARSEREMITTER_H
+
+#include "TableGenBackend.h"
+
+namespace llvm {
+  /// OptParserEmitter - This tablegen backend takes an input .td file
+  /// describing a list of options and emits a data structure for parsing and
+  /// working with those options when given an input command line.
+  class OptParserEmitter : public TableGenBackend {
+    RecordKeeper &Records;
+    bool GenDefs;
+
+  public:
+    OptParserEmitter(RecordKeeper &R, bool _GenDefs)
+      : Records(R), GenDefs(_GenDefs) {}
+
+    /// run - Output the option parsing information.
+    ///
+    /// \param GenDefs - Generate the header describing the option IDs.
+    void run(raw_ostream &OS);
+  };
+}
+
+#endif
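
The emitted definitions file is an X-macro table: the "#error" above forces
the includer to define OPTION(...) first, and every record expands to one
OPTION(name, id, kind, group, alias, flags, param, helptext, metavar) entry.
A minimal sketch of a consumer building an ID enum from it ("Options.inc" is
an assumed name for the generated output, not one this patch defines):

    enum OptionID {
      OPT_INVALID = 0,  // entries use INVALID when there is no group/alias
    #define OPTION(NAME, ID, KIND, GROUP, ALIAS, FLAGS, PARAM, HELP, METAVAR) \
      OPT_##ID,
    #include "Options.inc"
    #undef OPTION
      OPT_LAST
    };
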
diff --git a/libclamav/c++/llvm/utils/TableGen/Record.cpp b/libclamav/c++/llvm/utils/TableGen/Record.cpp
index a551166..53f9014 100644
--- a/libclamav/c++/llvm/utils/TableGen/Record.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/Record.cpp
@@ -12,7 +12,7 @@
 //===----------------------------------------------------------------------===//
 
 #include "Record.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/Support/Format.h"
 #include "llvm/ADT/StringExtras.h"
 
@@ -274,7 +274,7 @@ bool RecordRecTy::baseClassOf(const RecordRecTy *RHS) const {
 }
 
 
-/// resolveTypes - Find a common type that T1 and T2 convert to.  
+/// resolveTypes - Find a common type that T1 and T2 convert to.
 /// Return 0 if no such type exists.
 ///
 RecTy *llvm::resolveTypes(RecTy *T1, RecTy *T2) {
@@ -284,7 +284,8 @@ RecTy *llvm::resolveTypes(RecTy *T1, RecTy *T2) {
       RecordRecTy *RecTy1 = dynamic_cast<RecordRecTy*>(T1);
       if (RecTy1) {
         // See if T2 inherits from a type T1 also inherits from
-        const std::vector<Record *> &T1SuperClasses = RecTy1->getRecord()->getSuperClasses();
+        const std::vector<Record *> &T1SuperClasses =
+          RecTy1->getRecord()->getSuperClasses();
         for(std::vector<Record *>::const_iterator i = T1SuperClasses.begin(),
               iend = T1SuperClasses.end();
             i != iend;
@@ -302,8 +303,9 @@ RecTy *llvm::resolveTypes(RecTy *T1, RecTy *T2) {
       RecordRecTy *RecTy2 = dynamic_cast<RecordRecTy*>(T2);
       if (RecTy2) {
         // See if T1 inherits from a type T2 also inherits from
-        const std::vector<Record *> &T2SuperClasses = RecTy2->getRecord()->getSuperClasses();
-        for(std::vector<Record *>::const_iterator i = T2SuperClasses.begin(),
+        const std::vector<Record *> &T2SuperClasses =
+          RecTy2->getRecord()->getSuperClasses();
+        for (std::vector<Record *>::const_iterator i = T2SuperClasses.begin(),
               iend = T2SuperClasses.end();
             i != iend;
             ++i) {
@@ -344,10 +346,6 @@ Init *BitsInit::convertInitializerBitRange(const std::vector<unsigned> &Bits) {
 }
 
 std::string BitsInit::getAsString() const {
-  //if (!printInHex(OS)) return;
-  //if (!printAsVariable(OS)) return;
-  //if (!printAsUnset(OS)) return;
-
   std::string Result = "{ ";
   for (unsigned i = 0, e = getNumBits(); i != e; ++i) {
     if (i) Result += ", ";
@@ -359,51 +357,6 @@ std::string BitsInit::getAsString() const {
   return Result + " }";
 }
 
-bool BitsInit::printInHex(raw_ostream &OS) const {
-  // First, attempt to convert the value into an integer value...
-  int64_t Result = 0;
-  for (unsigned i = 0, e = getNumBits(); i != e; ++i)
-    if (BitInit *Bit = dynamic_cast<BitInit*>(getBit(i))) {
-      Result |= Bit->getValue() << i;
-    } else {
-      return true;
-    }
-
-  OS << format("0x%x", Result);
-  return false;
-}
-
-bool BitsInit::printAsVariable(raw_ostream &OS) const {
-  // Get the variable that we may be set equal to...
-  assert(getNumBits() != 0);
-  VarBitInit *FirstBit = dynamic_cast<VarBitInit*>(getBit(0));
-  if (FirstBit == 0) return true;
-  TypedInit *Var = FirstBit->getVariable();
-
-  // Check to make sure the types are compatible.
-  BitsRecTy *Ty = dynamic_cast<BitsRecTy*>(FirstBit->getVariable()->getType());
-  if (Ty == 0) return true;
-  if (Ty->getNumBits() != getNumBits()) return true; // Incompatible types!
-
-  // Check to make sure all bits are referring to the right bits in the variable
-  for (unsigned i = 0, e = getNumBits(); i != e; ++i) {
-    VarBitInit *Bit = dynamic_cast<VarBitInit*>(getBit(i));
-    if (Bit == 0 || Bit->getVariable() != Var || Bit->getBitNum() != i)
-      return true;
-  }
-
-  Var->print(OS);
-  return false;
-}
-
-bool BitsInit::printAsUnset(raw_ostream &OS) const {
-  for (unsigned i = 0, e = getNumBits(); i != e; ++i)
-    if (!dynamic_cast<UnsetInit*>(getBit(i)))
-      return true;
-  OS << "?";
-  return false;
-}
-
 // resolveReferences - If there are any field references that refer to fields
 // that have been filled in, we can propagate the values now.
 //
@@ -486,12 +439,15 @@ Init *ListInit::resolveReferences(Record &R, const RecordVal *RV) {
 }
 
 Init *ListInit::resolveListElementReference(Record &R, const RecordVal *IRV,
-                                           unsigned Elt) {
+                                            unsigned Elt) {
   if (Elt >= getSize())
     return 0;  // Out of range reference.
   Init *E = getElement(Elt);
-  if (!dynamic_cast<UnsetInit*>(E))  // If the element is set
-    return E;                        // Replace the VarListElementInit with it.
+  // If the element is set to some value, or if we are resolving a reference
+  // to a specific variable and that variable is explicitly unset, then
+  // replace the VarListElementInit with it.
+  if (IRV || !dynamic_cast<UnsetInit*>(E))
+    return E;
   return 0;
 }
 
@@ -505,30 +461,30 @@ std::string ListInit::getAsString() const {
 }
 
 Init *OpInit::resolveBitReference(Record &R, const RecordVal *IRV,
-                                   unsigned Bit) {
+                                  unsigned Bit) {
   Init *Folded = Fold(&R, 0);
 
   if (Folded != this) {
     TypedInit *Typed = dynamic_cast<TypedInit *>(Folded);
     if (Typed) {
       return Typed->resolveBitReference(R, IRV, Bit);
-    }    
+    }
   }
-  
+
   return 0;
 }
 
 Init *OpInit::resolveListElementReference(Record &R, const RecordVal *IRV,
-                                           unsigned Elt) {
+                                          unsigned Elt) {
   Init *Folded = Fold(&R, 0);
 
   if (Folded != this) {
     TypedInit *Typed = dynamic_cast<TypedInit *>(Folded);
     if (Typed) {
       return Typed->resolveListElementReference(R, IRV, Elt);
-    }    
+    }
   }
-  
+
   return 0;
 }
 
@@ -546,8 +502,7 @@ Init *UnOpInit::Fold(Record *CurRec, MultiClass *CurMultiClass) {
       if (LHSd) {
         return new StringInit(LHSd->getDef()->getName());
       }
-    }
-    else {
+    } else {
       StringInit *LHSs = dynamic_cast<StringInit*>(LHS);
       if (LHSs) {
         std::string Name = LHSs->getValue();
@@ -579,15 +534,15 @@ Init *UnOpInit::Fold(Record *CurRec, MultiClass *CurMultiClass) {
           if (CurMultiClass->Rec.isTemplateArg(MCName)) {
             const RecordVal *RV = CurMultiClass->Rec.getValue(MCName);
             assert(RV && "Template arg doesn't exist??");
-            
+
             if (RV->getType() != getType()) {
               throw "type mismatch in nameconcat";
             }
-            
+
             return new VarInit(MCName, RV->getType());
           }
         }
-        
+
         if (Record *D = Records.getDef(Name))
           return new DefInit(D);
 
@@ -616,7 +571,8 @@ Init *UnOpInit::Fold(Record *CurRec, MultiClass *CurMultiClass) {
         assert(0 && "Empty list in cdr");
         return 0;
       }
-      ListInit *Result = new ListInit(LHSl->begin()+1, LHSl->end(), LHSl->getType());
+      ListInit *Result = new ListInit(LHSl->begin()+1, LHSl->end(),
+                                      LHSl->getType());
       return Result;
     }
     break;
@@ -626,8 +582,7 @@ Init *UnOpInit::Fold(Record *CurRec, MultiClass *CurMultiClass) {
     if (LHSl) {
       if (LHSl->getSize() == 0) {
         return new IntInit(1);
-      }
-      else {
+      } else {
         return new IntInit(0);
       }
     }
@@ -635,12 +590,11 @@ Init *UnOpInit::Fold(Record *CurRec, MultiClass *CurMultiClass) {
     if (LHSs) {
       if (LHSs->getValue().empty()) {
         return new IntInit(1);
-      }
-      else {
+      } else {
         return new IntInit(0);
       }
     }
-    
+
     break;
   }
   }
@@ -649,7 +603,7 @@ Init *UnOpInit::Fold(Record *CurRec, MultiClass *CurMultiClass) {
 
 Init *UnOpInit::resolveReferences(Record &R, const RecordVal *RV) {
   Init *lhs = LHS->resolveReferences(R, RV);
-  
+
   if (LHS != lhs)
     return (new UnOpInit(getOpcode(), lhs, getType()))->Fold(&R, 0);
   return Fold(&R, 0);
@@ -739,7 +693,7 @@ Init *BinOpInit::Fold(Record *CurRec, MultiClass *CurMultiClass) {
           }
           return new VarInit(Name, RV->getType());
         }
-        
+
         std::string TemplateArgName = CurRec->getName()+":"+Name;
         if (CurRec->isTemplateArg(TemplateArgName)) {
           const RecordVal *RV = CurRec->getValue(TemplateArgName);
@@ -762,7 +716,7 @@ Init *BinOpInit::Fold(Record *CurRec, MultiClass *CurMultiClass) {
           if (RV->getType() != getType()) {
             throw "type mismatch in nameconcat";
           }
-          
+
           return new VarInit(MCName, RV->getType());
         }
       }
@@ -801,7 +755,7 @@ Init *BinOpInit::Fold(Record *CurRec, MultiClass *CurMultiClass) {
 Init *BinOpInit::resolveReferences(Record &R, const RecordVal *RV) {
   Init *lhs = LHS->resolveReferences(R, RV);
   Init *rhs = RHS->resolveReferences(R, RV);
-  
+
   if (LHS != lhs || RHS != rhs)
     return (new BinOpInit(getOpcode(), lhs, rhs, getType()))->Fold(&R, 0);
   return Fold(&R, 0);
@@ -815,7 +769,7 @@ std::string BinOpInit::getAsString() const {
   case SRA: Result = "!sra"; break;
   case SRL: Result = "!srl"; break;
   case STRCONCAT: Result = "!strconcat"; break;
-  case NAMECONCAT: 
+  case NAMECONCAT:
     Result = "!nameconcat<" + getType()->getAsString() + ">"; break;
   }
   return Result + "(" + LHS->getAsString() + ", " + RHS->getAsString() + ")";
@@ -837,8 +791,7 @@ static Init *EvaluateOperation(OpInit *RHSo, Init *LHS, Init *Arg,
                                  CurRec, CurMultiClass);
     if (Result != 0) {
       return Result;
-    }
-    else {
+    } else {
       return 0;
     }
   }
@@ -851,15 +804,12 @@ static Init *EvaluateOperation(OpInit *RHSo, Init *LHS, Init *Arg,
                                        Type, CurRec, CurMultiClass);
       if (Result != 0) {
         NewOperands.push_back(Result);
-      }
-      else {
+      } else {
         NewOperands.push_back(Arg);
       }
-    }
-    else if (LHS->getAsString() == RHSo->getOperand(i)->getAsString()) {
+    } else if (LHS->getAsString() == RHSo->getOperand(i)->getAsString()) {
       NewOperands.push_back(Arg);
-    }
-    else {
+    } else {
       NewOperands.push_back(RHSo->getOperand(i));
     }
   }
@@ -939,8 +889,7 @@ static Init *ForeachHelper(Init *LHS, Init *MHS, Init *RHS, RecTy *Type,
           // First, replace the foreach variable with the list item
           if (LHS->getAsString() == RHSo->getOperand(i)->getAsString()) {
             NewOperands.push_back(Item);
-          }
-          else {
+          } else {
             NewOperands.push_back(RHSo->getOperand(i));
           }
         }
@@ -1007,7 +956,7 @@ Init *TernOpInit::Fold(Record *CurRec, MultiClass *CurMultiClass) {
       }
     }
     break;
-  }  
+  }
 
   case FOREACH: {
     Init *Result = ForeachHelper(LHS, MHS, RHS, getType(),
@@ -1023,8 +972,7 @@ Init *TernOpInit::Fold(Record *CurRec, MultiClass *CurMultiClass) {
     if (LHSi) {
       if (LHSi->getValue()) {
         return MHS;
-      }
-      else {
+      } else {
         return RHS;
       }
     }
@@ -1044,15 +992,16 @@ Init *TernOpInit::resolveReferences(Record &R, const RecordVal *RV) {
       // Short-circuit
       if (Value->getValue()) {
         Init *mhs = MHS->resolveReferences(R, RV);
-        return (new TernOpInit(getOpcode(), lhs, mhs, RHS, getType()))->Fold(&R, 0);
-      }
-      else {
+        return (new TernOpInit(getOpcode(), lhs, mhs,
+                               RHS, getType()))->Fold(&R, 0);
+      } else {
         Init *rhs = RHS->resolveReferences(R, RV);
-        return (new TernOpInit(getOpcode(), lhs, MHS, rhs, getType()))->Fold(&R, 0);
+        return (new TernOpInit(getOpcode(), lhs, MHS,
+                               rhs, getType()))->Fold(&R, 0);
       }
     }
   }
-  
+
   Init *mhs = MHS->resolveReferences(R, RV);
   Init *rhs = RHS->resolveReferences(R, RV);
 
@@ -1065,10 +1014,10 @@ std::string TernOpInit::getAsString() const {
   std::string Result;
   switch (Opc) {
   case SUBST: Result = "!subst"; break;
-  case FOREACH: Result = "!foreach"; break; 
-  case IF: Result = "!if"; break; 
+  case FOREACH: Result = "!foreach"; break;
+  case IF: Result = "!if"; break;
  }
-  return Result + "(" + LHS->getAsString() + ", " + MHS->getAsString() + ", " 
+  return Result + "(" + LHS->getAsString() + ", " + MHS->getAsString() + ", "
     + RHS->getAsString() + ")";
 }
 
@@ -1109,15 +1058,18 @@ Init *VarInit::resolveBitReference(Record &R, const RecordVal *IRV,
   if (IRV && IRV->getName() != getName()) return 0;
 
   RecordVal *RV = R.getValue(getName());
-  assert(RV && "Reference to a non-existant variable?");
+  assert(RV && "Reference to a non-existent variable?");
   assert(dynamic_cast<BitsInit*>(RV->getValue()));
   BitsInit *BI = (BitsInit*)RV->getValue();
 
   assert(Bit < BI->getNumBits() && "Bit reference out of range!");
   Init *B = BI->getBit(Bit);
 
-  if (!dynamic_cast<UnsetInit*>(B))  // If the bit is not set...
-    return B;                        // Replace the VarBitInit with it.
+  // If the bit is set to some value, or if we are resolving a reference to a
+  // specific variable and that variable is explicitly unset, then replace the
+  // VarBitInit with it.
+  if (IRV || !dynamic_cast<UnsetInit*>(B))
+    return B;
   return 0;
 }
 
@@ -1127,19 +1079,22 @@ Init *VarInit::resolveListElementReference(Record &R, const RecordVal *IRV,
   if (IRV && IRV->getName() != getName()) return 0;
 
   RecordVal *RV = R.getValue(getName());
-  assert(RV && "Reference to a non-existant variable?");
+  assert(RV && "Reference to a non-existent variable?");
   ListInit *LI = dynamic_cast<ListInit*>(RV->getValue());
   if (!LI) {
     VarInit *VI = dynamic_cast<VarInit*>(RV->getValue());
     assert(VI && "Invalid list element!");
     return new VarListElementInit(VI, Elt);
   }
-  
+
   if (Elt >= LI->getSize())
     return 0;  // Out of range reference.
   Init *E = LI->getElement(Elt);
-  if (!dynamic_cast<UnsetInit*>(E))  // If the element is set
-    return E;                        // Replace the VarListElementInit with it.
+  // If the element is set to some value, or if we are resolving a reference
+  // to a specific variable and that variable is explicitly unset, then
+  // replace the VarListElementInit with it.
+  if (IRV || !dynamic_cast<UnsetInit*>(E))
+    return E;
   return 0;
 }
 
@@ -1165,7 +1120,7 @@ Init *VarInit::getFieldInit(Record &R, const std::string &FieldName) const {
 }
 
 /// resolveReferences - This method is used by classes that refer to other
-/// variables which may not be defined at the time they expression is formed.
+/// variables which may not be defined at the time the expression is formed.
 /// If a value is set for the variable later, this method will be called on
 /// users of the value to allow the value to propagate out.
 ///
@@ -1246,8 +1201,11 @@ Init *FieldInit::resolveListElementReference(Record &R, const RecordVal *RV,
       if (Elt >= LI->getSize()) return 0;
       Init *E = LI->getElement(Elt);
 
-      if (!dynamic_cast<UnsetInit*>(E))  // If the bit is set...
-        return E;                  // Replace the VarListElementInit with it.
+      // If the element is set to some value, or if we are resolving a
+      // reference to a specific variable and that variable is explicitly
+      // unset, then replace the VarListElementInit with it.
+      if (RV || !dynamic_cast<UnsetInit*>(E))
+        return E;
     }
   return 0;
 }
@@ -1271,12 +1229,12 @@ Init *DagInit::resolveReferences(Record &R, const RecordVal *RV) {
   std::vector<Init*> NewArgs;
   for (unsigned i = 0, e = Args.size(); i != e; ++i)
     NewArgs.push_back(Args[i]->resolveReferences(R, RV));
-  
+
   Init *Op = Val->resolveReferences(R, RV);
-  
+
   if (Args != NewArgs || Op != Val)
     return new DagInit(Op, "", NewArgs, ArgNames);
-    
+
   return this;
 }
 
@@ -1445,7 +1403,7 @@ ListInit *Record::getValueAsListInit(StringRef FieldName) const {
 /// its value as a vector of records, throwing an exception if the field does
 /// not exist or if the value is not the right type.
 ///
-std::vector<Record*> 
+std::vector<Record*>
 Record::getValueAsListOfDefs(StringRef FieldName) const {
   ListInit *List = getValueAsListInit(FieldName);
   std::vector<Record*> Defs;
@@ -1480,7 +1438,7 @@ int64_t Record::getValueAsInt(StringRef FieldName) const {
 /// its value as a vector of integers, throwing an exception if the field does
 /// not exist or if the value is not the right type.
 ///
-std::vector<int64_t> 
+std::vector<int64_t>
 Record::getValueAsListOfInts(StringRef FieldName) const {
   ListInit *List = getValueAsListInit(FieldName);
   std::vector<int64_t> Ints;
@@ -1548,7 +1506,7 @@ std::string Record::getValueAsCode(StringRef FieldName) const {
   if (R == 0 || R->getValue() == 0)
     throw "Record `" + getName() + "' does not have a field named `" +
       FieldName.str() + "'!\n";
-  
+
   if (const CodeInit *CI = dynamic_cast<const CodeInit*>(R->getValue()))
     return CI->getValue();
   throw "Record `" + getName() + "', field `" + FieldName.str() +
@@ -1559,7 +1517,7 @@ std::string Record::getValueAsCode(StringRef FieldName) const {
 void MultiClass::dump() const {
   errs() << "Record:\n";
   Rec.dump();
-  
+
   errs() << "Defs:\n";
   for (RecordVector::const_iterator r = DefPrototypes.begin(),
          rend = DefPrototypes.end();
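
All three resolve*Reference() hunks above apply the same new rule: when a
reference is resolved on behalf of a specific variable (IRV/RV is non-null),
an explicitly unset element ('?') now substitutes as well, instead of
leaving the reference in place. Distilled into a standalone sketch (Init and
UnsetInit here are stand-ins, not the real TableGen classes):

    struct Init { virtual ~Init() {} };
    struct UnsetInit : Init {};

    // Return the element if it is set, or if we resolve for a specific
    // variable -- in that case even an explicit '?' replaces the reference.
    Init *resolveElement(Init *E, const void *IRV) {
      if (IRV || !dynamic_cast<UnsetInit*>(E))
        return E;
      return 0;  // leave unresolved for a later pass
    }
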
diff --git a/libclamav/c++/llvm/utils/TableGen/Record.h b/libclamav/c++/llvm/utils/TableGen/Record.h
index 1b33743..278c349 100644
--- a/libclamav/c++/llvm/utils/TableGen/Record.h
+++ b/libclamav/c++/llvm/utils/TableGen/Record.h
@@ -16,13 +16,13 @@
 #define RECORD_H
 
 #include "llvm/Support/SourceMgr.h"
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include "llvm/Support/raw_ostream.h"
 #include <map>
 
 namespace llvm {
 class raw_ostream;
-  
+
 // RecTy subclasses.
 class BitRecTy;
 class BitsRecTy;
@@ -443,7 +443,7 @@ public:
   virtual bool baseClassOf(const RecordRecTy *RHS) const;
 };
 
-/// resolveTypes - Find a common type that T1 and T2 convert to.  
+/// resolveTypes - Find a common type that T1 and T2 convert to.
 /// Return 0 if no such type exists.
 ///
 RecTy *resolveTypes(RecTy *T1, RecTy *T2);
@@ -508,7 +508,7 @@ struct Init {
   }
 
   /// resolveReferences - This method is used by classes that refer to other
-  /// variables which may not be defined at the time they expression is formed.
+  /// variables which may not be defined at the time the expression is formed.
   /// If a value is set for the variable later, this method will be called on
   /// users of the value to allow the value to propagate out.
   ///
@@ -611,12 +611,6 @@ public:
   virtual std::string getAsString() const;
 
   virtual Init *resolveReferences(Record &R, const RecordVal *RV);
-
-  // printXX - Print this bitstream with the specified format, returning true if
-  // it is not possible.
-  bool printInHex(raw_ostream &OS) const;
-  bool printAsVariable(raw_ostream &OS) const;
-  bool printAsUnset(raw_ostream &OS) const;
 };
 
 
@@ -731,7 +725,7 @@ public:
   }
 
   Record *getElementAsRecord(unsigned i) const;
-  
+
   Init *convertInitListSlice(const std::vector<unsigned> &Elements);
 
   virtual Init *convertInitializerTo(RecTy *Ty) {
@@ -792,7 +786,7 @@ public:
   virtual Init *convertInitializerTo(RecTy *Ty) {
     return Ty->convertValue(this);
   }
-  
+
   virtual Init *resolveBitReference(Record &R, const RecordVal *RV,
                                     unsigned Bit);
   virtual Init *resolveListElementReference(Record &R, const RecordVal *RV,
@@ -825,7 +819,7 @@ public:
     assert(i == 0 && "Invalid operand id for unary operator");
     return getOperand();
   }
-  
+
   UnaryOp getOpcode() const { return Opc; }
   Init *getOperand() const { return LHS; }
 
@@ -834,7 +828,7 @@ public:
   Init *Fold(Record *CurRec, MultiClass *CurMultiClass);
 
   virtual Init *resolveReferences(Record &R, const RecordVal *RV);
-  
+
   /// getFieldType - This method is used to implement the FieldInit class.
   /// Implementors of this method should return the type of the named field if
   /// they are of record type.
@@ -856,7 +850,7 @@ public:
   BinOpInit(BinaryOp opc, Init *lhs, Init *rhs, RecTy *Type) :
       OpInit(Type), Opc(opc), LHS(lhs), RHS(rhs) {
   }
-  
+
   // Clone - Clone this operator, replacing arguments with the new list
   virtual OpInit *clone(std::vector<Init *> &Operands) {
     assert(Operands.size() == 2 &&
@@ -869,8 +863,7 @@ public:
     assert((i == 0 || i == 1) && "Invalid operand id for binary operator");
     if (i == 0) {
       return getLHS();
-    }
-    else {
+    } else {
       return getRHS();
     }
   }
@@ -884,7 +877,7 @@ public:
   Init *Fold(Record *CurRec, MultiClass *CurMultiClass);
 
   virtual Init *resolveReferences(Record &R, const RecordVal *RV);
-  
+
   virtual std::string getAsString() const;
 };
 
@@ -900,7 +893,7 @@ public:
   TernOpInit(TernaryOp opc, Init *lhs, Init *mhs, Init *rhs, RecTy *Type) :
       OpInit(Type), Opc(opc), LHS(lhs), MHS(mhs), RHS(rhs) {
   }
-  
+
   // Clone - Clone this operator, replacing arguments with the new list
   virtual OpInit *clone(std::vector<Init *> &Operands) {
     assert(Operands.size() == 3 &&
@@ -915,11 +908,9 @@ public:
            "Invalid operand id for ternary operator");
     if (i == 0) {
       return getLHS();
-    }
-    else if (i == 1) {
+    } else if (i == 1) {
       return getMHS();
-    }
-    else {
+    } else {
       return getRHS();
     }
   }
@@ -932,9 +923,9 @@ public:
   // Fold - If possible, fold this to a simpler init.  Return this if not
   // possible to fold.
   Init *Fold(Record *CurRec, MultiClass *CurMultiClass);
-  
+
   virtual Init *resolveReferences(Record &R, const RecordVal *RV);
-  
+
   virtual std::string getAsString() const;
 };
 
@@ -1106,7 +1097,7 @@ class DagInit : public TypedInit {
   std::vector<Init*> Args;
   std::vector<std::string> ArgNames;
 public:
-  DagInit(Init *V, std::string VN, 
+  DagInit(Init *V, std::string VN,
           const std::vector<std::pair<Init*, std::string> > &args)
     : TypedInit(new DagRecTy), Val(V), ValName(VN) {
     Args.reserve(args.size());
@@ -1116,11 +1107,11 @@ public:
       ArgNames.push_back(args[i].second);
     }
   }
-  DagInit(Init *V, std::string VN, const std::vector<Init*> &args, 
+  DagInit(Init *V, std::string VN, const std::vector<Init*> &args,
           const std::vector<std::string> &argNames)
-  : TypedInit(new DagRecTy), Val(V), ValName(VN), Args(args), ArgNames(argNames) {
-  }
-  
+    : TypedInit(new DagRecTy), Val(V), ValName(VN), Args(args),
+      ArgNames(argNames) { }
+
   virtual Init *convertInitializerTo(RecTy *Ty) {
     return Ty->convertValue(this);
   }
@@ -1143,7 +1134,7 @@ public:
     assert(Num < Args.size() && "Arg number out of range!");
     Args[Num] = I;
   }
-  
+
   virtual Init *resolveReferences(Record &R, const RecordVal *RV);
 
   virtual std::string getAsString() const;
@@ -1174,13 +1165,12 @@ public:
     assert(0 && "Illegal bit reference off dag");
     return 0;
   }
-  
+
   virtual Init *resolveListElementReference(Record &R, const RecordVal *RV,
                                             unsigned Elt) {
     assert(0 && "Illegal element reference off dag");
     return 0;
   }
-  
 };
 
 //===----------------------------------------------------------------------===//
@@ -1231,17 +1221,17 @@ class Record {
   std::vector<Record*> SuperClasses;
 public:
 
-  explicit Record(const std::string &N, SMLoc loc) : 
+  explicit Record(const std::string &N, SMLoc loc) :
     ID(LastID++), Name(N), Loc(loc) {}
   ~Record() {}
-  
+
   unsigned getID() const { return ID; }
 
   const std::string &getName() const { return Name; }
   void setName(const std::string &Name);  // Also updates RecordKeeper.
-  
+
   SMLoc getLoc() const { return Loc; }
-  
+
   const std::vector<std::string> &getTemplateArgs() const {
     return TemplateArgs;
   }
@@ -1276,13 +1266,12 @@ public:
   }
 
   void removeValue(StringRef Name) {
-    assert(getValue(Name) && "Cannot remove an entry that does not exist!");
     for (unsigned i = 0, e = Values.size(); i != e; ++i)
       if (Values[i].getName() == Name) {
         Values.erase(Values.begin()+i);
         return;
       }
-    assert(0 && "Name does not exist in record!");
+    assert(0 && "Cannot remove an entry that does not exist!");
   }
 
   bool isSubClassOf(const Record *R) const {
@@ -1354,7 +1343,7 @@ public:
   /// not exist or if the value is not the right type.
   ///
   std::vector<int64_t> getValueAsListOfInts(StringRef FieldName) const;
-  
+
   /// getValueAsDef - This method looks up the specified field and returns its
   /// value as a Record, throwing an exception if the field does not exist or if
   /// the value is not the right type.
@@ -1378,7 +1367,7 @@ public:
   /// the value is not the right type.
   ///
   DagInit *getValueAsDag(StringRef FieldName) const;
-  
+
   /// getValueAsCode - This method looks up the specified field and returns
   /// its value as the string data in a CodeInit, throwing an exception if the
   /// field does not exist or if the value is not a code object.
@@ -1442,7 +1431,7 @@ public:
     assert(Defs.count(Name) && "Def does not exist!");
     Defs.erase(Name);
   }
-  
+
   //===--------------------------------------------------------------------===//
   // High-level helper methods, useful for tablegen backends...
 
@@ -1464,7 +1453,7 @@ struct LessRecord {
   }
 };
 
-/// LessRecordFieldName - Sorting predicate to sort record pointers by their 
+/// LessRecordFieldName - Sorting predicate to sort record pointers by their
 /// name field.
 ///
 struct LessRecordFieldName {
@@ -1479,19 +1468,18 @@ class TGError {
   std::string Message;
 public:
   TGError(SMLoc loc, const std::string &message) : Loc(loc), Message(message) {}
-  
+
   SMLoc getLoc() const { return Loc; }
   const std::string &getMessage() const { return Message; }
 };
-  
-  
+
+
 raw_ostream &operator<<(raw_ostream &OS, const RecordKeeper &RK);
 
 extern RecordKeeper Records;
 
 void PrintError(SMLoc ErrorLoc, const std::string &Msg);
 
-  
 } // End llvm namespace
 
 #endif
diff --git a/libclamav/c++/llvm/utils/TableGen/RegisterInfoEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/RegisterInfoEmitter.cpp
index 3c7b44a..bf0721e 100644
--- a/libclamav/c++/llvm/utils/TableGen/RegisterInfoEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/RegisterInfoEmitter.cpp
@@ -66,6 +66,7 @@ void RegisterInfoEmitter::runHeader(raw_ostream &OS) {
      << "  virtual bool needsStackRealignment(const MachineFunction &) const\n"
      << "     { return false; }\n"
      << "  unsigned getSubReg(unsigned RegNo, unsigned Index) const;\n"
+     << "  unsigned getSubRegIndex(unsigned RegNo, unsigned SubRegNo) const;\n"
      << "};\n\n";
 
   const std::vector<CodeGenRegisterClass> &RegisterClasses =
@@ -831,6 +832,23 @@ void RegisterInfoEmitter::run(raw_ostream &OS) {
   OS << "  };\n";
   OS << "  return 0;\n";
   OS << "}\n\n";
+
+  OS << "unsigned " << ClassName 
+     << "::getSubRegIndex(unsigned RegNo, unsigned SubRegNo) const {\n"
+     << "  switch (RegNo) {\n"
+     << "  default:\n    return 0;\n";
+  for (std::map<Record*, std::vector<std::pair<int, Record*> > >::iterator 
+        I = SubRegVectors.begin(), E = SubRegVectors.end(); I != E; ++I) {
+    OS << "  case " << getQualifiedName(I->first) << ":\n";
+    for (unsigned i = 0, e = I->second.size(); i != e; ++i)
+      OS << "    if (SubRegNo == "
+         << getQualifiedName((I->second)[i].second)
+         << ")  return " << (I->second)[i].first << ";\n";
+    OS << "    return 0;\n";
+  }
+  OS << "  };\n";
+  OS << "  return 0;\n";
+  OS << "}\n\n";
   
   // Emit the constructor of the class...
   OS << ClassName << "::" << ClassName
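
The new getSubRegIndex() is the inverse of the existing getSubReg(): given a
register and one of its sub-registers, it recovers the sub-register index.
For a single SubRegVectors entry the loop above generates code of this shape
(class name, registers, and index values are illustrative, not taken from a
real target description):

    unsigned FooGenRegisterInfo::getSubRegIndex(unsigned RegNo,
                                                unsigned SubRegNo) const {
      switch (RegNo) {
      default:
        return 0;
      case Foo::R0_R1:                  // a hypothetical register pair
        if (SubRegNo == Foo::R0)  return 1;
        if (SubRegNo == Foo::R1)  return 2;
        return 0;
      };
      return 0;
    }
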
diff --git a/libclamav/c++/llvm/utils/TableGen/StringToOffsetTable.h b/libclamav/c++/llvm/utils/TableGen/StringToOffsetTable.h
index d9d7cf4..ac9422c 100644
--- a/libclamav/c++/llvm/utils/TableGen/StringToOffsetTable.h
+++ b/libclamav/c++/llvm/utils/TableGen/StringToOffsetTable.h
@@ -10,9 +10,10 @@
 #ifndef TBLGEN_STRING_TO_OFFSET_TABLE_H
 #define TBLGEN_STRING_TO_OFFSET_TABLE_H
 
+#include "llvm/ADT/SmallString.h"
 #include "llvm/ADT/StringMap.h"
-#include "llvm/Support/raw_ostream.h"
 #include "llvm/ADT/StringExtras.h"
+#include "llvm/Support/raw_ostream.h"
 
 namespace llvm {
 
@@ -38,9 +39,13 @@ public:
   }
   
   void EmitString(raw_ostream &O) {
+    // Escape the string.
+    SmallString<256> Str;
+    raw_svector_ostream(Str).write_escaped(AggregateString);
+    AggregateString = Str.str();
+
     O << "    \"";
     unsigned CharsPrinted = 0;
-    EscapeString(AggregateString);
     for (unsigned i = 0, e = AggregateString.size(); i != e; ++i) {
       if (CharsPrinted > 70) {
         O << "\"\n    \"";
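
The replacement escapes the aggregate once up front with
raw_ostream::write_escaped() -- the same helper the new OptParserEmitter
uses for help strings -- instead of the old in-place EscapeString() call. A
small standalone sketch of the effect:

    #include "llvm/ADT/SmallString.h"
    #include "llvm/Support/raw_ostream.h"
    using namespace llvm;

    int main() {
      SmallString<64> Str;
      raw_svector_ostream(Str).write_escaped("a\tb \"c\"");
      // Str now holds the C-escaped text: a\tb \"c\"
      outs() << Str.str() << '\n';
      return 0;
    }
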
diff --git a/libclamav/c++/llvm/utils/TableGen/SubtargetEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/SubtargetEmitter.cpp
index c8cf234..0dbfcbe 100644
--- a/libclamav/c++/llvm/utils/TableGen/SubtargetEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/SubtargetEmitter.cpp
@@ -519,6 +519,8 @@ void SubtargetEmitter::ParseFeaturesFunction(raw_ostream &OS) {
   OS << Target;
   OS << "Subtarget::ParseSubtargetFeatures(const std::string &FS,\n"
      << "                                  const std::string &CPU) {\n"
+     << "  DEBUG(errs() << \"\\nFeatures:\" << FS);\n"
+     << "  DEBUG(errs() << \"\\nCPU:\" << CPU);\n"
      << "  SubtargetFeatures Features(FS);\n"
      << "  Features.setCPUIfNone(CPU);\n"
      << "  uint32_t Bits =  Features.getBits(SubTypeKV, SubTypeKVSize,\n"
@@ -558,6 +560,8 @@ void SubtargetEmitter::run(raw_ostream &OS) {
 
   EmitSourceFileHeader("Subtarget Enumeration Source Fragment", OS);
 
+  OS << "#include \"llvm/Support/Debug.h\"\n";
+  OS << "#include \"llvm/Support/raw_ostream.h\"\n";
   OS << "#include \"llvm/Target/SubtargetFeature.h\"\n";
   OS << "#include \"llvm/Target/TargetInstrItineraries.h\"\n\n";
   
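
The DEBUG() macro pulled in by the new Debug.h include expands to nothing in
NDEBUG builds and is otherwise guarded by the -debug command-line flag, so
the feature/CPU dump in the generated ParseSubtargetFeatures() only appears
on request. The same pattern outside generated code, as a minimal sketch:

    #include <string>
    #include "llvm/Support/Debug.h"
    #include "llvm/Support/raw_ostream.h"
    using namespace llvm;

    // Prints only in assert-enabled builds run with -debug.
    void dumpSubtargetInputs(const std::string &FS, const std::string &CPU) {
      DEBUG(errs() << "\nFeatures:" << FS);
      DEBUG(errs() << "\nCPU:" << CPU);
    }
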
diff --git a/libclamav/c++/llvm/utils/TableGen/TGLexer.h b/libclamav/c++/llvm/utils/TableGen/TGLexer.h
index 80405ac..6790208 100644
--- a/libclamav/c++/llvm/utils/TableGen/TGLexer.h
+++ b/libclamav/c++/llvm/utils/TableGen/TGLexer.h
@@ -14,7 +14,7 @@
 #ifndef TGLEXER_H
 #define TGLEXER_H
 
-#include "llvm/Support/DataTypes.h"
+#include "llvm/System/DataTypes.h"
 #include <vector>
 #include <string>
 #include <cassert>
diff --git a/libclamav/c++/llvm/utils/TableGen/TGParser.cpp b/libclamav/c++/llvm/utils/TableGen/TGParser.cpp
index 7122265..b095a6c 100644
--- a/libclamav/c++/llvm/utils/TableGen/TGParser.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/TGParser.cpp
@@ -44,9 +44,9 @@ struct SubMultiClassReference {
 
 void SubMultiClassReference::dump() const {
   errs() << "Multiclass:\n";
- 
+
   MC->dump();
- 
+
   errs() << "Template args:\n";
   for (std::vector<Init *>::const_iterator i = TemplateArgs.begin(),
          iend = TemplateArgs.end();
@@ -61,13 +61,13 @@ void SubMultiClassReference::dump() const {
 bool TGParser::AddValue(Record *CurRec, SMLoc Loc, const RecordVal &RV) {
   if (CurRec == 0)
     CurRec = &CurMultiClass->Rec;
-  
+
   if (RecordVal *ERV = CurRec->getValue(RV.getName())) {
     // The value already exists in the class, treat this as a set.
     if (ERV->setValue(RV.getValue()))
       return Error(Loc, "New definition of '" + RV.getName() + "' of type '" +
                    RV.getType()->getAsString() + "' is incompatible with " +
-                   "previous definition of type '" + 
+                   "previous definition of type '" +
                    ERV->getType()->getAsString() + "'");
   } else {
     CurRec->addValue(RV);
@@ -77,7 +77,7 @@ bool TGParser::AddValue(Record *CurRec, SMLoc Loc, const RecordVal &RV) {
 
 /// SetValue -
 /// Return true on error, false on success.
-bool TGParser::SetValue(Record *CurRec, SMLoc Loc, const std::string &ValName, 
+bool TGParser::SetValue(Record *CurRec, SMLoc Loc, const std::string &ValName,
                         const std::vector<unsigned> &BitList, Init *V) {
   if (!V) return false;
 
@@ -93,7 +93,7 @@ bool TGParser::SetValue(Record *CurRec, SMLoc Loc, const std::string &ValName,
     if (VarInit *VI = dynamic_cast<VarInit*>(V))
       if (VI->getName() == ValName)
         return false;
-  
+
   // If we are assigning to a subset of the bits in the value... then we must be
   // assigning to a field of BitsRecTy, which must have a BitsInit
   // initializer.
@@ -109,7 +109,7 @@ bool TGParser::SetValue(Record *CurRec, SMLoc Loc, const std::string &ValName,
       V->convertInitializerTo(new BitsRecTy(BitList.size()));
       return Error(Loc, "Initializer is not compatible with bit range");
     }
-                   
+
     // We should have a BitsInit type now.
     BitsInit *BInit = dynamic_cast<BitsInit*>(BI);
     assert(BInit != 0);
@@ -133,8 +133,8 @@ bool TGParser::SetValue(Record *CurRec, SMLoc Loc, const std::string &ValName,
   }
 
   if (RV->setValue(V))
-   return Error(Loc, "Value '" + ValName + "' of type '" + 
-                RV->getType()->getAsString() + 
+   return Error(Loc, "Value '" + ValName + "' of type '" +
+                RV->getType()->getAsString() +
                 "' is incompatible with initializer '" + V->getAsString() +"'");
   return false;
 }
@@ -154,25 +154,25 @@ bool TGParser::AddSubClass(Record *CurRec, SubClassReference &SubClass) {
   // Ensure that an appropriate number of template arguments are specified.
   if (TArgs.size() < SubClass.TemplateArgs.size())
     return Error(SubClass.RefLoc, "More template args specified than expected");
-  
+
   // Loop over all of the template arguments, setting them to the specified
   // value or leaving them as the default if necessary.
   for (unsigned i = 0, e = TArgs.size(); i != e; ++i) {
     if (i < SubClass.TemplateArgs.size()) {
       // If a value is specified for this template arg, set it now.
-      if (SetValue(CurRec, SubClass.RefLoc, TArgs[i], std::vector<unsigned>(), 
+      if (SetValue(CurRec, SubClass.RefLoc, TArgs[i], std::vector<unsigned>(),
                    SubClass.TemplateArgs[i]))
         return true;
-      
+
       // Resolve it next.
       CurRec->resolveReferencesTo(CurRec->getValue(TArgs[i]));
-      
+
       // Now remove it.
       CurRec->removeValue(TArgs[i]);
 
     } else if (!CurRec->getValue(TArgs[i])->getValue()->isComplete()) {
       return Error(SubClass.RefLoc,"Value not specified for template argument #"
-                   + utostr(i) + " (" + TArgs[i] + ") of subclass '" + 
+                   + utostr(i) + " (" + TArgs[i] + ") of subclass '" +
                    SC->getName() + "'!");
     }
   }
@@ -186,7 +186,7 @@ bool TGParser::AddSubClass(Record *CurRec, SubClassReference &SubClass) {
                    "Already subclass of '" + SCs[i]->getName() + "'!\n");
     CurRec->addSuperClass(SCs[i]);
   }
-  
+
   if (CurRec->isSubClassOf(SC))
     return Error(SubClass.RefLoc,
                  "Already subclass of '" + SC->getName() + "'!\n");
@@ -291,7 +291,7 @@ bool TGParser::AddSubMultiClass(MultiClass *CurMC,
 /// isObjectStart - Return true if this is a valid first token for an Object.
 static bool isObjectStart(tgtok::TokKind K) {
   return K == tgtok::Class || K == tgtok::Def ||
-         K == tgtok::Defm || K == tgtok::Let || K == tgtok::MultiClass; 
+         K == tgtok::Defm || K == tgtok::Let || K == tgtok::MultiClass;
 }
 
 /// ParseObjectName - If an object name is specified, return it.  Otherwise,
@@ -305,7 +305,7 @@ std::string TGParser::ParseObjectName() {
     Lex.Lex();
     return Ret;
   }
-  
+
   static unsigned AnonCounter = 0;
   return "anonymous."+utostr(AnonCounter++);
 }
@@ -321,11 +321,11 @@ Record *TGParser::ParseClassID() {
     TokError("expected name for ClassID");
     return 0;
   }
-  
+
   Record *Result = Records.getClass(Lex.getCurStrVal());
   if (Result == 0)
     TokError("Couldn't find class '" + Lex.getCurStrVal() + "'");
-  
+
   Lex.Lex();
   return Result;
 }
@@ -354,17 +354,16 @@ Record *TGParser::ParseDefmID() {
     TokError("expected multiclass name");
     return 0;
   }
-  
+
   MultiClass *MC = MultiClasses[Lex.getCurStrVal()];
   if (MC == 0) {
     TokError("Couldn't find multiclass '" + Lex.getCurStrVal() + "'");
     return 0;
   }
-  
+
   Lex.Lex();
   return &MC->Rec;
-}  
-
+}
 
 
 /// ParseSubClassReference - Parse a reference to a subclass or to a templated
@@ -377,37 +376,37 @@ SubClassReference TGParser::
 ParseSubClassReference(Record *CurRec, bool isDefm) {
   SubClassReference Result;
   Result.RefLoc = Lex.getLoc();
-  
+
   if (isDefm)
     Result.Rec = ParseDefmID();
   else
     Result.Rec = ParseClassID();
   if (Result.Rec == 0) return Result;
-  
+
   // If there is no template arg list, we're done.
   if (Lex.getCode() != tgtok::less)
     return Result;
   Lex.Lex();  // Eat the '<'
-  
+
   if (Lex.getCode() == tgtok::greater) {
     TokError("subclass reference requires a non-empty list of template values");
     Result.Rec = 0;
     return Result;
   }
-  
+
   Result.TemplateArgs = ParseValueList(CurRec, Result.Rec);
   if (Result.TemplateArgs.empty()) {
     Result.Rec = 0;   // Error parsing value list.
     return Result;
   }
-    
+
   if (Lex.getCode() != tgtok::greater) {
     TokError("expected '>' in template value list");
     Result.Rec = 0;
     return Result;
   }
   Lex.Lex();
-  
+
   return Result;
 }
 
@@ -464,12 +463,12 @@ bool TGParser::ParseRangePiece(std::vector<unsigned> &Ranges) {
   }
   int64_t Start = Lex.getCurIntVal();
   int64_t End;
-  
+
   if (Start < 0)
     return TokError("invalid range, cannot be negative");
-  
+
   switch (Lex.Lex()) {  // eat first character.
-  default: 
+  default:
     Ranges.push_back(Start);
     return false;
   case tgtok::minus:
@@ -483,10 +482,10 @@ bool TGParser::ParseRangePiece(std::vector<unsigned> &Ranges) {
     End = -Lex.getCurIntVal();
     break;
   }
-  if (End < 0) 
+  if (End < 0)
     return TokError("invalid range, cannot be negative");
   Lex.Lex();
-  
+
   // Add to the range.
   if (Start < End) {
     for (; Start <= End; ++Start)
@@ -504,7 +503,7 @@ bool TGParser::ParseRangePiece(std::vector<unsigned> &Ranges) {
 ///
 std::vector<unsigned> TGParser::ParseRangeList() {
   std::vector<unsigned> Result;
-  
+
   // Parse the first piece.
   if (ParseRangePiece(Result))
     return std::vector<unsigned>();
@@ -524,14 +523,14 @@ std::vector<unsigned> TGParser::ParseRangeList() {
 bool TGParser::ParseOptionalRangeList(std::vector<unsigned> &Ranges) {
   if (Lex.getCode() != tgtok::less)
     return false;
-  
+
   SMLoc StartLoc = Lex.getLoc();
   Lex.Lex(); // eat the '<'
-  
+
   // Parse the range list.
   Ranges = ParseRangeList();
   if (Ranges.empty()) return true;
-  
+
   if (Lex.getCode() != tgtok::greater) {
     TokError("expected '>' at end of range list");
     return Error(StartLoc, "to match this '<'");
@@ -546,14 +545,14 @@ bool TGParser::ParseOptionalRangeList(std::vector<unsigned> &Ranges) {
 bool TGParser::ParseOptionalBitList(std::vector<unsigned> &Ranges) {
   if (Lex.getCode() != tgtok::l_brace)
     return false;
-  
+
   SMLoc StartLoc = Lex.getLoc();
   Lex.Lex(); // eat the '{'
-  
+
   // Parse the range list.
   Ranges = ParseRangeList();
   if (Ranges.empty()) return true;
-  
+
   if (Lex.getCode() != tgtok::r_brace) {
     TokError("expected '}' at end of bit list");
     return Error(StartLoc, "to match this '{'");
@@ -610,7 +609,7 @@ RecTy *TGParser::ParseType() {
     Lex.Lex();  // Eat '<'
     RecTy *SubType = ParseType();
     if (SubType == 0) return 0;
-    
+
     if (Lex.getCode() != tgtok::greater) {
       TokError("expected '>' at end of list<ty> type");
       return 0;
@@ -618,7 +617,7 @@ RecTy *TGParser::ParseType() {
     Lex.Lex();  // Eat '>'
     return new ListRecTy(SubType);
   }
-  }      
+  }
 }
 
 /// ParseIDValue - Parse an ID as a value and decode what it means.
@@ -639,12 +638,12 @@ Init *TGParser::ParseIDValue(Record *CurRec) {
 
 /// ParseIDValue - This is just like ParseIDValue above, but it assumes the ID
 /// has already been read.
-Init *TGParser::ParseIDValue(Record *CurRec, 
+Init *TGParser::ParseIDValue(Record *CurRec,
                              const std::string &Name, SMLoc NameLoc) {
   if (CurRec) {
     if (const RecordVal *RV = CurRec->getValue(Name))
       return new VarInit(Name, RV->getType());
-    
+
     std::string TemplateArgName = CurRec->getName()+":"+Name;
     if (CurRec->isTemplateArg(TemplateArgName)) {
       const RecordVal *RV = CurRec->getValue(TemplateArgName);
@@ -652,7 +651,7 @@ Init *TGParser::ParseIDValue(Record *CurRec,
       return new VarInit(TemplateArgName, RV->getType());
     }
   }
-  
+
   if (CurMultiClass) {
     std::string MCName = CurMultiClass->Rec.getName()+"::"+Name;
     if (CurMultiClass->Rec.isTemplateArg(MCName)) {
@@ -661,7 +660,7 @@ Init *TGParser::ParseIDValue(Record *CurRec,
       return new VarInit(MCName, RV->getType());
     }
   }
-  
+
   if (Record *D = Records.getDef(Name))
     return new DefInit(D);
 
@@ -748,7 +747,7 @@ Init *TGParser::ParseOperation(Record *CurRec) {
          TokError("expected list type argument in unary operator");
           return 0;
         }
-        
+
         if (LHSl && LHSl->getSize() == 0) {
           TokError("empty list argument in unary operator");
           return 0;
@@ -762,12 +761,10 @@ Init *TGParser::ParseOperation(Record *CurRec) {
           }
           if (Code == UnOpInit::CAR) {
             Type = Itemt->getType();
-          }
-          else {
+          } else {
             Type = new ListRecTy(Itemt->getType());
           }
-        }
-        else {
+        } else {
           assert(LHSt && "expected list type argument in unary operator");
           ListRecTy *LType = dynamic_cast<ListRecTy*>(LHSt->getType());
           if (LType == 0) {
@@ -776,8 +773,7 @@ Init *TGParser::ParseOperation(Record *CurRec) {
           }
           if (Code == UnOpInit::CAR) {
             Type = LType->getElementType();
-          }
-          else {
+          } else {
             Type = LType;
           }
         }
@@ -793,7 +789,7 @@ Init *TGParser::ParseOperation(Record *CurRec) {
   }
 
   case tgtok::XConcat:
-  case tgtok::XSRA: 
+  case tgtok::XSRA:
   case tgtok::XSRL:
   case tgtok::XSHL:
   case tgtok::XStrConcat:
@@ -804,32 +800,32 @@ Init *TGParser::ParseOperation(Record *CurRec) {
 
     switch (Lex.getCode()) {
     default: assert(0 && "Unhandled code!");
-    case tgtok::XConcat:     
+    case tgtok::XConcat:
       Lex.Lex();  // eat the operation
       Code = BinOpInit::CONCAT;
       Type = new DagRecTy();
       break;
-    case tgtok::XSRA:        
+    case tgtok::XSRA:
       Lex.Lex();  // eat the operation
       Code = BinOpInit::SRA;
       Type = new IntRecTy();
       break;
-    case tgtok::XSRL:        
+    case tgtok::XSRL:
       Lex.Lex();  // eat the operation
       Code = BinOpInit::SRL;
       Type = new IntRecTy();
       break;
-    case tgtok::XSHL:        
+    case tgtok::XSHL:
       Lex.Lex();  // eat the operation
       Code = BinOpInit::SHL;
       Type = new IntRecTy();
       break;
-    case tgtok::XStrConcat:  
+    case tgtok::XStrConcat:
       Lex.Lex();  // eat the operation
       Code = BinOpInit::STRCONCAT;
       Type = new StringRecTy();
       break;
-    case tgtok::XNameConcat: 
+    case tgtok::XNameConcat:
       Lex.Lex();  // eat the operation
       Code = BinOpInit::NAMECONCAT;
 
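A hypothetical snippet using the binary operators dispatched above (!strconcat yields a string, the shift operators yield ints; names invented):

    class Mnemonic<string base> {
      string AsmString = !strconcat(base, " $dst, $src");
      int Shifted = !shl(1, 4);  // evaluates to 16
    }
    def AddAsm : Mnemonic<"add">;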
@@ -856,7 +852,7 @@ Init *TGParser::ParseOperation(Record *CurRec) {
       return 0;
     }
     Lex.Lex();  // eat the ','
-    
+
     Init *RHS = ParseValue(CurRec);
     if (RHS == 0) return 0;
 
@@ -903,7 +899,7 @@ Init *TGParser::ParseOperation(Record *CurRec) {
       return 0;
     }
     Lex.Lex();  // eat the ','
-    
+
     Init *MHS = ParseValue(CurRec);
     if (MHS == 0) return 0;
 
@@ -912,7 +908,7 @@ Init *TGParser::ParseOperation(Record *CurRec) {
       return 0;
     }
     Lex.Lex();  // eat the ','
-    
+
     Init *RHS = ParseValue(CurRec);
     if (RHS == 0) return 0;
 
@@ -933,11 +929,9 @@ Init *TGParser::ParseOperation(Record *CurRec) {
       }
       if (MHSt->getType()->typeIsConvertibleTo(RHSt->getType())) {
         Type = RHSt->getType();
-      }
-      else if (RHSt->getType()->typeIsConvertibleTo(MHSt->getType())) {
+      } else if (RHSt->getType()->typeIsConvertibleTo(MHSt->getType())) {
         Type = MHSt->getType();
-      }
-      else {
+      } else {
         TokError("inconsistent types for !if");
         return 0;
       }
@@ -962,7 +956,8 @@ Init *TGParser::ParseOperation(Record *CurRec) {
       break;
     }
     }
-    return (new TernOpInit(Code, LHS, MHS, RHS, Type))->Fold(CurRec, CurMultiClass);
+    return (new TernOpInit(Code, LHS, MHS, RHS, Type))->Fold(CurRec,
+                                                             CurMultiClass);
   }
   }
   TokError("could not parse operation");
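For reference, the !if handling above unifies the types of its two result operands; in this hedged sketch both arms are int, so the whole !if is int:

    class OpSize<bit is64> {
      int Bytes = !if(is64, 8, 4);  // MHS/RHS types must be convertible
    }
    def Op64 : OpSize<1>;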
@@ -1025,25 +1020,25 @@ Init *TGParser::ParseSimpleValue(Record *CurRec, RecTy *ItemType) {
   case tgtok::StrVal: {
     std::string Val = Lex.getCurStrVal();
     Lex.Lex();
-    
+
     // Handle multiple consecutive concatenated strings.
     while (Lex.getCode() == tgtok::StrVal) {
       Val += Lex.getCurStrVal();
       Lex.Lex();
     }
-    
+
     R = new StringInit(Val);
     break;
   }
   case tgtok::CodeFragment:
-      R = new CodeInit(Lex.getCurStrVal()); Lex.Lex(); break;
+    R = new CodeInit(Lex.getCurStrVal()); Lex.Lex(); break;
   case tgtok::question: R = new UnsetInit(); Lex.Lex(); break;
   case tgtok::Id: {
     SMLoc NameLoc = Lex.getLoc();
     std::string Name = Lex.getCurStrVal();
     if (Lex.Lex() != tgtok::less)  // consume the Id.
       return ParseIDValue(CurRec, Name, NameLoc);    // Value ::= IDValue
-    
+
     // Value ::= ID '<' ValueListNE '>'
     if (Lex.Lex() == tgtok::greater) {
       TokError("expected non-empty value list");
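Returning to the StrVal case at the top of this hunk: adjacent string literals are concatenated by that loop, so in a .td file:

    def Banner {
      string Text = "Hello, " "world";  // parsed as "Hello, world"
    }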
@@ -1061,13 +1056,13 @@ Init *TGParser::ParseSimpleValue(Record *CurRec, RecTy *ItemType) {
 
     std::vector<Init*> ValueList = ParseValueList(CurRec, Class);
     if (ValueList.empty()) return 0;
-    
+
     if (Lex.getCode() != tgtok::greater) {
       TokError("expected '>' at end of value list");
       return 0;
     }
     Lex.Lex();  // eat the '>'
-    
+
     // Create the new record, set it as CurRec temporarily.
     static unsigned AnonCounter = 0;
     Record *NewRec = new Record("anonymous.val."+utostr(AnonCounter++),NameLoc);
@@ -1080,15 +1075,15 @@ Init *TGParser::ParseSimpleValue(Record *CurRec, RecTy *ItemType) {
       return 0;
     NewRec->resolveReferences();
     Records.addDef(NewRec);
-    
+
     // The result of the expression is a reference to the new record.
     return new DefInit(NewRec);
-  }    
+  }
   case tgtok::l_brace: {           // Value ::= '{' ValueList '}'
     SMLoc BraceLoc = Lex.getLoc();
     Lex.Lex(); // eat the '{'
     std::vector<Init*> Vals;
-    
+
     if (Lex.getCode() != tgtok::r_brace) {
       Vals = ParseValueList(CurRec);
       if (Vals.empty()) return 0;
@@ -1098,7 +1093,7 @@ Init *TGParser::ParseSimpleValue(Record *CurRec, RecTy *ItemType) {
       return 0;
     }
     Lex.Lex();  // eat the '}'
-    
+
     BitsInit *Result = new BitsInit(Vals.size());
     for (unsigned i = 0, e = Vals.size(); i != e; ++i) {
       Init *Bit = Vals[i]->convertInitializerTo(new BitRecTy());
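The l_brace case above builds a BitsInit from an explicit bit list, as in this minimal example (name invented):

    def Enc {
      bits<4> Fixed = { 1, 0, 1, 1 };  // each element converted to bit
    }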
@@ -1114,23 +1109,24 @@ Init *TGParser::ParseSimpleValue(Record *CurRec, RecTy *ItemType) {
   case tgtok::l_square: {          // Value ::= '[' ValueList ']'
     Lex.Lex(); // eat the '['
     std::vector<Init*> Vals;
-    
+
     RecTy *DeducedEltTy = 0;
     ListRecTy *GivenListTy = 0;
-    
+
     if (ItemType != 0) {
       ListRecTy *ListType = dynamic_cast<ListRecTy*>(ItemType);
       if (ListType == 0) {
         std::stringstream s;
-        s << "Type mismatch for list, expected list type, got " 
+        s << "Type mismatch for list, expected list type, got "
           << ItemType->getAsString();
         TokError(s.str());
       }
       GivenListTy = ListType;
-    }    
+    }
 
     if (Lex.getCode() != tgtok::r_square) {
-      Vals = ParseValueList(CurRec, 0, GivenListTy ? GivenListTy->getElementType() : 0);
+      Vals = ParseValueList(CurRec, 0,
+                            GivenListTy ? GivenListTy->getElementType() : 0);
       if (Vals.empty()) return 0;
     }
     if (Lex.getCode() != tgtok::r_square) {
@@ -1173,8 +1169,7 @@ Init *TGParser::ParseSimpleValue(Record *CurRec, RecTy *ItemType) {
           TokError("Incompatible types in list elements");
           return 0;
         }
-      }
-      else {
+      } else {
         EltTy = TArg->getType();
       }
     }
@@ -1196,8 +1191,7 @@ Init *TGParser::ParseSimpleValue(Record *CurRec, RecTy *ItemType) {
         return 0;
       }
       DeducedEltTy = GivenListTy->getElementType();
-   }
-    else {
+    } else {
       // Make sure the deduced type is compatible with the given type
       if (GivenListTy) {
         if (!EltTy->typeIsConvertibleTo(GivenListTy->getElementType())) {
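These checks deduce a list's element type and verify it against any type the context supplies. A hypothetical input that stays within both rules:

    def ListVals {
      list<int> Ints = [1, 2, 3];            // elements deduced as int
      list<list<int>> Nest = [[1], [2, 3]];  // inner lists unify to list<int>
    }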
@@ -1207,7 +1201,7 @@ Init *TGParser::ParseSimpleValue(Record *CurRec, RecTy *ItemType) {
       }
       DeducedEltTy = EltTy;
     }
-    
+
     return new ListInit(Vals, DeducedEltTy);
   }
   case tgtok::l_paren: {         // Value ::= '(' IDValue DagArgList ')'
@@ -1218,13 +1212,12 @@ Init *TGParser::ParseSimpleValue(Record *CurRec, RecTy *ItemType) {
       TokError("expected identifier in dag init");
       return 0;
     }
-    
+
     Init *Operator = 0;
     if (Lex.getCode() == tgtok::Id) {
       Operator = ParseIDValue(CurRec);
       if (Operator == 0) return 0;
-    }
-    else {
+    } else {
       Operator = ParseOperation(CurRec);
       if (Operator == 0) return 0;
     }
@@ -1239,29 +1232,29 @@ Init *TGParser::ParseSimpleValue(Record *CurRec, RecTy *ItemType) {
       OperatorName = Lex.getCurStrVal();
       Lex.Lex();  // eat the VarName.
     }
-    
+
     std::vector<std::pair<llvm::Init*, std::string> > DagArgs;
     if (Lex.getCode() != tgtok::r_paren) {
       DagArgs = ParseDagArgList(CurRec);
       if (DagArgs.empty()) return 0;
     }
-    
+
     if (Lex.getCode() != tgtok::r_paren) {
       TokError("expected ')' in dag init");
       return 0;
     }
     Lex.Lex();  // eat the ')'
-    
+
     return new DagInit(Operator, OperatorName, DagArgs);
     break;
   }
- 
+
   case tgtok::XCar:
   case tgtok::XCdr:
   case tgtok::XNull:
   case tgtok::XCast:  // Value ::= !unop '(' Value ')'
   case tgtok::XConcat:
-  case tgtok::XSRA: 
+  case tgtok::XSRA:
   case tgtok::XSRL:
   case tgtok::XSHL:
   case tgtok::XStrConcat:
@@ -1273,7 +1266,7 @@ Init *TGParser::ParseSimpleValue(Record *CurRec, RecTy *ItemType) {
     break;
   }
   }
-  
+
   return R;
 }
 
@@ -1287,7 +1280,7 @@ Init *TGParser::ParseSimpleValue(Record *CurRec, RecTy *ItemType) {
 Init *TGParser::ParseValue(Record *CurRec, RecTy *ItemType) {
   Init *Result = ParseSimpleValue(CurRec, ItemType);
   if (Result == 0) return 0;
-  
+
   // Parse the suffixes now if present.
   while (1) {
     switch (Lex.getCode()) {
@@ -1297,7 +1290,7 @@ Init *TGParser::ParseValue(Record *CurRec, RecTy *ItemType) {
       Lex.Lex(); // eat the '{'
       std::vector<unsigned> Ranges = ParseRangeList();
       if (Ranges.empty()) return 0;
-      
+
       // Reverse the bitlist.
       std::reverse(Ranges.begin(), Ranges.end());
       Result = Result->convertInitializerBitRange(Ranges);
@@ -1305,7 +1298,7 @@ Init *TGParser::ParseValue(Record *CurRec, RecTy *ItemType) {
         Error(CurlyLoc, "Invalid bit range for value");
         return 0;
       }
-      
+
       // Eat the '}'.
       if (Lex.getCode() != tgtok::r_brace) {
         TokError("expected '}' at end of bit range list");
@@ -1319,13 +1312,13 @@ Init *TGParser::ParseValue(Record *CurRec, RecTy *ItemType) {
       Lex.Lex(); // eat the '['
       std::vector<unsigned> Ranges = ParseRangeList();
       if (Ranges.empty()) return 0;
-      
+
       Result = Result->convertInitListSlice(Ranges);
       if (Result == 0) {
         Error(SquareLoc, "Invalid range for list slice");
         return 0;
       }
-      
+
       // Eat the ']'.
       if (Lex.getCode() != tgtok::r_square) {
         TokError("expected ']' at end of list slice");
@@ -1355,14 +1348,14 @@ Init *TGParser::ParseValue(Record *CurRec, RecTy *ItemType) {
 ///
 ///    ParseDagArgList ::= Value (':' VARNAME)?
 ///    ParseDagArgList ::= ParseDagArgList ',' Value (':' VARNAME)?
-std::vector<std::pair<llvm::Init*, std::string> > 
+std::vector<std::pair<llvm::Init*, std::string> >
 TGParser::ParseDagArgList(Record *CurRec) {
   std::vector<std::pair<llvm::Init*, std::string> > Result;
-  
+
   while (1) {
     Init *Val = ParseValue(CurRec);
     if (Val == 0) return std::vector<std::pair<llvm::Init*, std::string> >();
-    
+
     // If the variable name is present, add it.
     std::string VarName;
     if (Lex.getCode() == tgtok::colon) {
@@ -1373,13 +1366,13 @@ TGParser::ParseDagArgList(Record *CurRec) {
       VarName = Lex.getCurStrVal();
       Lex.Lex();  // eat the VarName.
     }
-    
+
     Result.push_back(std::make_pair(Val, VarName));
-    
+
     if (Lex.getCode() != tgtok::comma) break;
-    Lex.Lex(); // eat the ','    
+    Lex.Lex(); // eat the ','
   }
-  
+
   return Result;
 }
 
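ParseDagArgList above accepts each dag operand with an optional ':$name' suffix. A self-contained hypothetical example (the dag operator must itself be a value, hence the bare defs):

    def ops;
    def GPR;
    def AddPat {
      dag Operands = (ops GPR:$dst, GPR:$src1, GPR:$src2);
    }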
@@ -1390,7 +1383,8 @@ TGParser::ParseDagArgList(Record *CurRec) {
 ///
 ///   ValueList ::= Value (',' Value)*
 ///
-std::vector<Init*> TGParser::ParseValueList(Record *CurRec, Record *ArgsRec, RecTy *EltTy) {
+std::vector<Init*> TGParser::ParseValueList(Record *CurRec, Record *ArgsRec,
+                                            RecTy *EltTy) {
   std::vector<Init*> Result;
   RecTy *ItemType = EltTy;
   unsigned int ArgN = 0;
@@ -1403,16 +1397,16 @@ std::vector<Init*> TGParser::ParseValueList(Record *CurRec, Record *ArgsRec, Rec
   }
   Result.push_back(ParseValue(CurRec, ItemType));
   if (Result.back() == 0) return std::vector<Init*>();
-  
+
   while (Lex.getCode() == tgtok::comma) {
     Lex.Lex();  // Eat the comma
-    
+
     if (ArgsRec != 0 && EltTy == 0) {
       const std::vector<std::string> &TArgs = ArgsRec->getTemplateArgs();
       if (ArgN >= TArgs.size()) {
         TokError("too many template arguments");
         return std::vector<Init*>();
-      }        
+      }
       const RecordVal *RV = ArgsRec->getValue(TArgs[ArgN]);
       assert(RV && "Template argument record not found??");
       ItemType = RV->getType();
@@ -1421,12 +1415,11 @@ std::vector<Init*> TGParser::ParseValueList(Record *CurRec, Record *ArgsRec, Rec
     Result.push_back(ParseValue(CurRec, ItemType));
     if (Result.back() == 0) return std::vector<Init*>();
   }
-  
+
   return Result;
 }
 
 
-
 /// ParseDeclaration - Read a declaration, returning the name of field ID, or an
 /// empty string on error.  This can happen in a number of different contexts,
 /// including within a def or in the template args for a def (in which case
@@ -1437,24 +1430,24 @@ std::vector<Init*> TGParser::ParseValueList(Record *CurRec, Record *ArgsRec, Rec
 ///
 ///  Declaration ::= FIELD? Type ID ('=' Value)?
 ///
-std::string TGParser::ParseDeclaration(Record *CurRec, 
+std::string TGParser::ParseDeclaration(Record *CurRec,
                                        bool ParsingTemplateArgs) {
   // Read the field prefix if present.
   bool HasField = Lex.getCode() == tgtok::Field;
   if (HasField) Lex.Lex();
-  
+
   RecTy *Type = ParseType();
   if (Type == 0) return "";
-  
+
   if (Lex.getCode() != tgtok::Id) {
     TokError("Expected identifier in declaration");
     return "";
   }
-  
+
   SMLoc IdLoc = Lex.getLoc();
   std::string DeclName = Lex.getCurStrVal();
   Lex.Lex();
-  
+
   if (ParsingTemplateArgs) {
     if (CurRec) {
       DeclName = CurRec->getName() + ":" + DeclName;
@@ -1464,11 +1457,11 @@ std::string TGParser::ParseDeclaration(Record *CurRec,
     if (CurMultiClass)
       DeclName = CurMultiClass->Rec.getName() + "::" + DeclName;
   }
-  
+
   // Add the value.
   if (AddValue(CurRec, IdLoc, RecordVal(DeclName, Type, HasField)))
     return "";
-  
+
   // If a value is present, parse it.
   if (Lex.getCode() == tgtok::equal) {
     Lex.Lex();
@@ -1478,7 +1471,7 @@ std::string TGParser::ParseDeclaration(Record *CurRec,
         SetValue(CurRec, ValLoc, DeclName, std::vector<unsigned>(), Val))
       return "";
   }
-  
+
   return DeclName;
 }
 
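Declarations parsed above optionally carry an initializer; a small invented example covering the three surface forms:

    class Widget {
      int Size;            // Type ID
      string Name = "w";   // Type ID '=' Value
      field bits<8> Enc;   // FIELD-prefixed form
    }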
@@ -1488,30 +1481,30 @@ std::string TGParser::ParseDeclaration(Record *CurRec,
 /// these are the template args for a multiclass.
 ///
 ///    TemplateArgList ::= '<' Declaration (',' Declaration)* '>'
-/// 
+///
 bool TGParser::ParseTemplateArgList(Record *CurRec) {
   assert(Lex.getCode() == tgtok::less && "Not a template arg list!");
   Lex.Lex(); // eat the '<'
-  
+
   Record *TheRecToAddTo = CurRec ? CurRec : &CurMultiClass->Rec;
-  
+
   // Read the first declaration.
   std::string TemplArg = ParseDeclaration(CurRec, true/*templateargs*/);
   if (TemplArg.empty())
     return true;
-  
+
   TheRecToAddTo->addTemplateArg(TemplArg);
-  
+
   while (Lex.getCode() == tgtok::comma) {
     Lex.Lex(); // eat the ','
-    
+
     // Read the following declarations.
     TemplArg = ParseDeclaration(CurRec, true/*templateargs*/);
     if (TemplArg.empty())
       return true;
     TheRecToAddTo->addTemplateArg(TemplArg);
   }
-  
+
   if (Lex.getCode() != tgtok::greater)
     return TokError("expected '>' at end of template argument list");
   Lex.Lex(); // eat the '>'.
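Template argument lists reuse the declaration syntax, including defaults. Hypothetically:

    class Proc<string name, int width = 1> {
      string Name = name;
      int IssueWidth = width;
    }
    def Core2 : Proc<"core2", 4>;
    def I486  : Proc<"i486">;  // 'width' falls back to its default of 1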
@@ -1525,9 +1518,9 @@ bool TGParser::ParseTemplateArgList(Record *CurRec) {
 ///   BodyItem ::= LET ID OptionalBitList '=' Value ';'
 bool TGParser::ParseBodyItem(Record *CurRec) {
   if (Lex.getCode() != tgtok::Let) {
-    if (ParseDeclaration(CurRec, false).empty()) 
+    if (ParseDeclaration(CurRec, false).empty())
       return true;
-    
+
     if (Lex.getCode() != tgtok::semi)
       return TokError("expected ';' after declaration");
     Lex.Lex();
@@ -1537,33 +1530,33 @@ bool TGParser::ParseBodyItem(Record *CurRec) {
   // LET ID OptionalRangeList '=' Value ';'
   if (Lex.Lex() != tgtok::Id)
     return TokError("expected field identifier after let");
-  
+
   SMLoc IdLoc = Lex.getLoc();
   std::string FieldName = Lex.getCurStrVal();
   Lex.Lex();  // eat the field name.
-  
+
   std::vector<unsigned> BitList;
-  if (ParseOptionalBitList(BitList)) 
+  if (ParseOptionalBitList(BitList))
     return true;
   std::reverse(BitList.begin(), BitList.end());
-  
+
   if (Lex.getCode() != tgtok::equal)
     return TokError("expected '=' in let expression");
   Lex.Lex();  // eat the '='.
-  
+
   RecordVal *Field = CurRec->getValue(FieldName);
   if (Field == 0)
     return TokError("Value '" + FieldName + "' unknown!");
 
   RecTy *Type = Field->getType();
-  
+
   Init *Val = ParseValue(CurRec, Type);
   if (Val == 0) return true;
-  
+
   if (Lex.getCode() != tgtok::semi)
     return TokError("expected ';' after let expression");
   Lex.Lex();
-  
+
   return SetValue(CurRec, IdLoc, FieldName, BitList, Val);
 }
 
@@ -1580,12 +1573,12 @@ bool TGParser::ParseBody(Record *CurRec) {
     Lex.Lex();
     return false;
   }
-  
+
   if (Lex.getCode() != tgtok::l_brace)
     return TokError("Expected ';' or '{' to start body");
   // Eat the '{'.
   Lex.Lex();
-  
+
   while (Lex.getCode() != tgtok::r_brace)
     if (ParseBodyItem(CurRec))
       return true;
@@ -1608,17 +1601,17 @@ bool TGParser::ParseObjectBody(Record *CurRec) {
   // If there is a baseclass list, read it.
   if (Lex.getCode() == tgtok::colon) {
     Lex.Lex();
-    
+
     // Read all of the subclasses.
     SubClassReference SubClass = ParseSubClassReference(CurRec, false);
     while (1) {
       // Check for error.
       if (SubClass.Rec == 0) return true;
-     
+
       // Add it.
       if (AddSubClass(CurRec, SubClass))
         return true;
-      
+
       if (Lex.getCode() != tgtok::comma) break;
       Lex.Lex(); // eat ','.
       SubClass = ParseSubClassReference(CurRec, false);
@@ -1631,7 +1624,7 @@ bool TGParser::ParseObjectBody(Record *CurRec) {
       if (SetValue(CurRec, LetStack[i][j].Loc, LetStack[i][j].Name,
                    LetStack[i][j].Bits, LetStack[i][j].Value))
         return true;
-  
+
   return ParseBody(CurRec);
 }
 
@@ -1644,14 +1637,14 @@ bool TGParser::ParseObjectBody(Record *CurRec) {
 llvm::Record *TGParser::ParseDef(MultiClass *CurMultiClass) {
   SMLoc DefLoc = Lex.getLoc();
   assert(Lex.getCode() == tgtok::Def && "Unknown tok");
-  Lex.Lex();  // Eat the 'def' token.  
+  Lex.Lex();  // Eat the 'def' token.
 
   // Parse ObjectName and make a record for it.
   Record *CurRec = new Record(ParseObjectName(), DefLoc);
-  
+
   if (!CurMultiClass) {
     // Top-level def definition.
-    
+
     // Ensure redefinition doesn't happen.
     if (Records.getDef(CurRec->getName())) {
       Error(DefLoc, "def '" + CurRec->getName() + "' already defined");
@@ -1668,13 +1661,13 @@ llvm::Record *TGParser::ParseDef(MultiClass *CurMultiClass) {
       }
     CurMultiClass->DefPrototypes.push_back(CurRec);
   }
-  
+
   if (ParseObjectBody(CurRec))
     return 0;
-  
+
   if (CurMultiClass == 0)  // Def's in multiclasses aren't really defs.
     CurRec->resolveReferences();
-  
+
   // If ObjectBody has template arguments, it's an error.
   assert(CurRec->getTemplateArgs().empty() && "How'd this get template args?");
   return CurRec;
@@ -1688,10 +1681,10 @@ llvm::Record *TGParser::ParseDef(MultiClass *CurMultiClass) {
 bool TGParser::ParseClass() {
   assert(Lex.getCode() == tgtok::Class && "Unexpected token!");
   Lex.Lex();
-  
+
   if (Lex.getCode() != tgtok::Id)
     return TokError("expected class name after 'class' keyword");
-  
+
   Record *CurRec = Records.getClass(Lex.getCurStrVal());
   if (CurRec) {
     // If the body was previously defined, this is an error.
@@ -1705,7 +1698,7 @@ bool TGParser::ParseClass() {
     Records.addClass(CurRec);
   }
   Lex.Lex(); // eat the name.
-  
+
   // If there are template args, parse them.
   if (Lex.getCode() == tgtok::less)
     if (ParseTemplateArgList(CurRec))
@@ -1723,7 +1716,7 @@ bool TGParser::ParseClass() {
 ///
 std::vector<LetRecord> TGParser::ParseLetList() {
   std::vector<LetRecord> Result;
-  
+
   while (1) {
     if (Lex.getCode() != tgtok::Id) {
       TokError("expected identifier in let definition");
@@ -1731,29 +1724,29 @@ std::vector<LetRecord> TGParser::ParseLetList() {
     }
     std::string Name = Lex.getCurStrVal();
     SMLoc NameLoc = Lex.getLoc();
-    Lex.Lex();  // Eat the identifier. 
+    Lex.Lex();  // Eat the identifier.
 
     // Check for an optional RangeList.
     std::vector<unsigned> Bits;
-    if (ParseOptionalRangeList(Bits)) 
+    if (ParseOptionalRangeList(Bits))
       return std::vector<LetRecord>();
     std::reverse(Bits.begin(), Bits.end());
-    
+
     if (Lex.getCode() != tgtok::equal) {
       TokError("expected '=' in let expression");
       return std::vector<LetRecord>();
     }
     Lex.Lex();  // eat the '='.
-    
+
     Init *Val = ParseValue(0);
     if (Val == 0) return std::vector<LetRecord>();
-    
+
     // Now that we have everything, add the record.
     Result.push_back(LetRecord(Name, Bits, Val, NameLoc));
-    
+
     if (Lex.getCode() != tgtok::comma)
       return Result;
-    Lex.Lex();  // eat the comma.    
+    Lex.Lex();  // eat the comma.
   }
 }
 
@@ -1766,7 +1759,7 @@ std::vector<LetRecord> TGParser::ParseLetList() {
 bool TGParser::ParseTopLevelLet() {
   assert(Lex.getCode() == tgtok::Let && "Unexpected token");
   Lex.Lex();
-  
+
   // Add this entry to the let stack.
   std::vector<LetRecord> LetInfo = ParseLetList();
   if (LetInfo.empty()) return true;
@@ -1775,7 +1768,7 @@ bool TGParser::ParseTopLevelLet() {
   if (Lex.getCode() != tgtok::In)
     return TokError("expected 'in' at end of top-level 'let'");
   Lex.Lex();
-  
+
   // If this is a scalar let, just handle it now
   if (Lex.getCode() != tgtok::l_brace) {
     // LET LetList IN Object
@@ -1785,18 +1778,18 @@ bool TGParser::ParseTopLevelLet() {
     SMLoc BraceLoc = Lex.getLoc();
     // Otherwise, this is a group let.
     Lex.Lex();  // eat the '{'.
-    
+
     // Parse the object list.
     if (ParseObjectList())
       return true;
-    
+
     if (Lex.getCode() != tgtok::r_brace) {
       TokError("expected '}' at end of top level let command");
       return Error(BraceLoc, "to match this '{'");
     }
     Lex.Lex();
   }
-  
+
   // Outside this let scope, this let block is not active.
   LetStack.pop_back();
   return false;
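A top-level let applies its bindings to one object or to a braced group of objects, as sketched here with invented names:

    class I {
      bit isPredicated = 0;
    }
    let isPredicated = 1 in {
      def A : I;
      def B : I;  // both defs end up with isPredicated = 1
    }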
@@ -1807,15 +1800,15 @@ bool TGParser::ParseTopLevelLet() {
 ///  MultiClassDef ::= DefInst
 ///
 bool TGParser::ParseMultiClassDef(MultiClass *CurMC) {
-  if (Lex.getCode() != tgtok::Def) 
+  if (Lex.getCode() != tgtok::Def)
     return TokError("expected 'def' in multiclass body");
 
   Record *D = ParseDef(CurMC);
   if (D == 0) return true;
-  
+
   // Copy the template arguments for the multiclass into the def.
   const std::vector<std::string> &TArgs = CurMC->Rec.getTemplateArgs();
-  
+
   for (unsigned i = 0, e = TArgs.size(); i != e; ++i) {
     const RecordVal *RV = CurMC->Rec.getValue(TArgs[i]);
     assert(RV && "Template arg doesn't exist?");
@@ -1837,13 +1830,13 @@ bool TGParser::ParseMultiClass() {
   if (Lex.getCode() != tgtok::Id)
     return TokError("expected identifier after multiclass for name");
   std::string Name = Lex.getCurStrVal();
-  
+
   if (MultiClasses.count(Name))
     return TokError("multiclass '" + Name + "' already defined");
-  
+
   CurMultiClass = MultiClasses[Name] = new MultiClass(Name, Lex.getLoc());
   Lex.Lex();  // Eat the identifier.
-  
+
   // If there are template args, parse them.
   if (Lex.getCode() == tgtok::less)
     if (ParseTemplateArgList(0))
@@ -1877,23 +1870,21 @@ bool TGParser::ParseMultiClass() {
   if (Lex.getCode() != tgtok::l_brace) {
     if (!inherits)
       return TokError("expected '{' in multiclass definition");
+    else if (Lex.getCode() != tgtok::semi)
+      return TokError("expected ';' in multiclass definition");
     else
-      if (Lex.getCode() != tgtok::semi)
-        return TokError("expected ';' in multiclass definition");
-      else
-        Lex.Lex();  // eat the ';'.
-  }
-  else {
+      Lex.Lex();  // eat the ';'.
+  } else {
     if (Lex.Lex() == tgtok::r_brace)  // eat the '{'.
       return TokError("multiclass must contain at least one def");
-  
+
     while (Lex.getCode() != tgtok::r_brace)
       if (ParseMultiClassDef(CurMultiClass))
         return true;
-  
+
     Lex.Lex();  // eat the '}'.
   }
-  
+
   CurMultiClass = 0;
   return false;
 }
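Putting ParseMultiClass and ParseDefm together: each def prototype in a multiclass is re-instantiated per defm, with the defm prefix prepended to the prototype's name (or substituted for #NAME#, per the code below). A hedged sketch:

    multiclass Arith<string suffix> {
      def _rr { string Asm = !strconcat("add", suffix); }
      def _ri { string Asm = !strconcat("addi", suffix); }
    }
    defm ADD : Arith<".w">;  // instantiates ADD_rr and ADD_ri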
@@ -1906,12 +1897,12 @@ bool TGParser::ParseDefm() {
   assert(Lex.getCode() == tgtok::Defm && "Unexpected token!");
   if (Lex.Lex() != tgtok::Id)  // eat the defm.
     return TokError("expected identifier after defm");
-  
+
   SMLoc DefmPrefixLoc = Lex.getLoc();
   std::string DefmPrefix = Lex.getCurStrVal();
   if (Lex.Lex() != tgtok::colon)
     return TokError("expected ':' after defm identifier");
-  
+
   // eat the colon.
   Lex.Lex();
 
@@ -1926,7 +1917,7 @@ bool TGParser::ParseDefm() {
     // template parameters.
     MultiClass *MC = MultiClasses[Ref.Rec->getName()];
     assert(MC && "Didn't lookup multiclass correctly?");
-    std::vector<Init*> &TemplateVals = Ref.TemplateArgs;   
+    std::vector<Init*> &TemplateVals = Ref.TemplateArgs;
 
     // Verify that the correct number of template arguments were specified.
     const std::vector<std::string> &TArgs = MC->Rec.getTemplateArgs();
@@ -1943,8 +1934,7 @@ bool TGParser::ParseDefm() {
       std::string::size_type idx = DefName.find("#NAME#");
       if (idx != std::string::npos) {
         DefName.replace(idx, 6, DefmPrefix);
-      }
-      else {
+      } else {
         // Add the suffix to the defm name to get the new name.
         DefName = DefmPrefix + DefName;
       }
@@ -1991,8 +1981,8 @@ bool TGParser::ParseDefm() {
 
       // Ensure redefinition doesn't happen.
       if (Records.getDef(CurRec->getName()))
-        return Error(DefmPrefixLoc, "def '" + CurRec->getName() + 
-                     "' already defined, instantiating defm with subdef '" + 
+        return Error(DefmPrefixLoc, "def '" + CurRec->getName() +
+                     "' already defined, instantiating defm with subdef '" +
                      DefProto->getName() + "'");
       Records.addDef(CurRec);
       CurRec->resolveReferences();
@@ -2008,7 +1998,7 @@ bool TGParser::ParseDefm() {
   if (Lex.getCode() != tgtok::semi)
     return TokError("expected ';' at end of defm");
   Lex.Lex();
-  
+
   return false;
 }
 
@@ -2040,15 +2030,14 @@ bool TGParser::ParseObjectList() {
   return false;
 }
 
-
 bool TGParser::ParseFile() {
   Lex.Lex(); // Prime the lexer.
   if (ParseObjectList()) return true;
-  
+
   // If we have unread input at the end of the file, report it.
   if (Lex.getCode() == tgtok::Eof)
     return false;
-  
+
   return TokError("Unexpected input at top level");
 }
 
diff --git a/libclamav/c++/llvm/utils/TableGen/TableGen.cpp b/libclamav/c++/llvm/utils/TableGen/TableGen.cpp
index c6d7502..7c8d288 100644
--- a/libclamav/c++/llvm/utils/TableGen/TableGen.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/TableGen.cpp
@@ -15,21 +15,23 @@
 //
 //===----------------------------------------------------------------------===//
 
-#include "Record.h"
-#include "TGParser.h"
+#include "AsmMatcherEmitter.h"
+#include "AsmWriterEmitter.h"
 #include "CallingConvEmitter.h"
+#include "ClangDiagnosticsEmitter.h"
 #include "CodeEmitterGen.h"
-#include "RegisterInfoEmitter.h"
-#include "InstrInfoEmitter.h"
-#include "InstrEnumEmitter.h"
-#include "AsmWriterEmitter.h"
-#include "AsmMatcherEmitter.h"
 #include "DAGISelEmitter.h"
+#include "DisassemblerEmitter.h"
 #include "FastISelEmitter.h"
-#include "SubtargetEmitter.h"
+#include "InstrEnumEmitter.h"
+#include "InstrInfoEmitter.h"
 #include "IntrinsicEmitter.h"
 #include "LLVMCConfigurationEmitter.h"
-#include "ClangDiagnosticsEmitter.h"
+#include "OptParserEmitter.h"
+#include "Record.h"
+#include "RegisterInfoEmitter.h"
+#include "SubtargetEmitter.h"
+#include "TGParser.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/FileUtilities.h"
 #include "llvm/Support/MemoryBuffer.h"
@@ -45,11 +47,13 @@ enum ActionType {
   GenEmitter,
   GenRegisterEnums, GenRegister, GenRegisterHeader,
   GenInstrEnums, GenInstrs, GenAsmWriter, GenAsmMatcher,
+  GenDisassembler,
   GenCallingConv,
   GenClangDiagsDefs,
   GenClangDiagGroups,
   GenDAGISel,
   GenFastISel,
+  GenOptParserDefs, GenOptParserImpl,
   GenSubtarget,
   GenIntrinsic,
   GenTgtIntrinsic,
@@ -78,12 +82,18 @@ namespace {
                                "Generate calling convention descriptions"),
                     clEnumValN(GenAsmWriter, "gen-asm-writer",
                                "Generate assembly writer"),
+                    clEnumValN(GenDisassembler, "gen-disassembler",
+                               "Generate disassembler"),
                     clEnumValN(GenAsmMatcher, "gen-asm-matcher",
                                "Generate assembly instruction matcher"),
                     clEnumValN(GenDAGISel, "gen-dag-isel",
                                "Generate a DAG instruction selector"),
                     clEnumValN(GenFastISel, "gen-fast-isel",
                                "Generate a \"fast\" instruction selector"),
+                    clEnumValN(GenOptParserDefs, "gen-opt-parser-defs",
+                               "Generate option definitions"),
+                    clEnumValN(GenOptParserImpl, "gen-opt-parser-impl",
+                               "Generate option parser implementation"),
                     clEnumValN(GenSubtarget, "gen-subtarget",
                                "Generate subtarget enumerations"),
                     clEnumValN(GenIntrinsic, "gen-intrinsic",
@@ -221,7 +231,16 @@ int main(int argc, char **argv) {
       break;
     case GenClangDiagGroups:
       ClangDiagGroupsEmitter(Records).run(*Out);
-      break;        
+      break;
+    case GenDisassembler:
+      DisassemblerEmitter(Records).run(*Out);
+      break;
+    case GenOptParserDefs:
+      OptParserEmitter(Records, true).run(*Out);
+      break;
+    case GenOptParserImpl:
+      OptParserEmitter(Records, false).run(*Out);
+      break;
     case GenDAGISel:
       DAGISelEmitter(Records).run(*Out);
       break;
diff --git a/libclamav/c++/llvm/utils/UpdateCMakeLists.pl b/libclamav/c++/llvm/utils/UpdateCMakeLists.pl
index 3aa2f88..6d24d90 100755
--- a/libclamav/c++/llvm/utils/UpdateCMakeLists.pl
+++ b/libclamav/c++/llvm/utils/UpdateCMakeLists.pl
@@ -68,7 +68,7 @@ sub UpdateCMake {
   while(<IN>) {
     if (!$foundLibrary) {
       print OUT $_;
-      if (/^add_clang_library\(/ || /^add_llvm_library\(/) {
+      if (/^add_clang_library\(/ || /^add_llvm_library\(/ || /^add_llvm_target\(/) {
         $foundLibrary = 1;
         EmitCMakeList($dir);
       }
diff --git a/libclamav/c++/llvm/utils/buildit/GNUmakefile b/libclamav/c++/llvm/utils/buildit/GNUmakefile
index e3b334a..9971883 100644
--- a/libclamav/c++/llvm/utils/buildit/GNUmakefile
+++ b/libclamav/c++/llvm/utils/buildit/GNUmakefile
@@ -59,7 +59,7 @@ endif
 # NOTE : Always put version numbers at the end because they are optional.
 install: $(OBJROOT) $(SYMROOT) $(DSTROOT)
 	cd $(OBJROOT) && \
-	  $(SRC)/build_llvm "$(RC_ARCHS)" "$(TARGETS)" \
+	  $(SRC)/utils/buildit/build_llvm "$(RC_ARCHS)" "$(TARGETS)" \
 	    $(SRC) $(PREFIX) $(DSTROOT) $(SYMROOT) \
 	    $(LLVM_ASSERTIONS) $(LLVM_OPTIMIZED) \
 	    $(RC_ProjectSourceVersion) $(RC_ProjectSourceSubversion) 
diff --git a/libclamav/c++/llvm/utils/buildit/build_llvm b/libclamav/c++/llvm/utils/buildit/build_llvm
index 91fbe15..9168d1a 100755
--- a/libclamav/c++/llvm/utils/buildit/build_llvm
+++ b/libclamav/c++/llvm/utils/buildit/build_llvm
@@ -59,6 +59,10 @@ echo DARWIN_VERS = $DARWIN_VERS
 if [ "x$RC_ProjectName" = "xllvmCore_Embedded" ]; then
     DT_HOME=$DEST_DIR/Developer/Platforms/iPhoneOS.platform/Developer/usr
     DEST_ROOT="/Developer/Platforms/iPhoneOS.platform/Developer$DEST_ROOT"
+elif [ "x$RC_ProjectName" = "xllvmCore_EmbeddedHosted" ]; then
+    DT_HOME=$DEST_DIR/usr
+    DEST_ROOT="/Developer$DEST_ROOT"
+    HOST_SDKROOT=$SDKROOT
 else
     DT_HOME=$DEST_DIR/Developer/usr
     DEST_ROOT="/Developer$DEST_ROOT"
@@ -82,29 +86,78 @@ SRC_DIR=$DIR/src
 rm -rf $SRC_DIR || exit 1
 mkdir $SRC_DIR || exit 1
 ln -s $ORIG_SRC_DIR/* $SRC_DIR/ || exit 1
+# We can't use the top-level Makefile as-is.  Remove the soft link:
+rm $SRC_DIR/Makefile || exit 1
+# Now create our own by editing the top-level Makefile, deleting every line marked "Apple-style":
+sed -e '/[Aa]pple-style/d' -e '/include.*GNUmakefile/d' $ORIG_SRC_DIR/Makefile > $SRC_DIR/Makefile || exit 1
 
 # Build the LLVM tree universal.
 mkdir -p $DIR/obj-llvm || exit 1
 cd $DIR/obj-llvm || exit 1
 
-# If the user has set CC or CXX, respect their wishes.  If not,
-# compile with LLVM-GCC/LLVM-G++ if available; if LLVM is not
-# available, fall back to usual GCC/G++ default.
-savedPATH=$PATH ; PATH="$PATH:/Developer/usr/bin"
-XTMPCC=$(which llvm-gcc)
-if [ x$CC  = x -a x$XTMPCC != x ] ; then export CC=$XTMPCC  ; fi
-XTMPCC=$(which llvm-g++)
-if [ x$CXX = x -a x$XTMPCC != x ] ; then export CXX=$XTMPCC ; fi
-PATH=$savedPATH
-unset XTMPCC savedPATH
-
-if [ \! -f Makefile.config ]; then
-    $SRC_DIR/configure --prefix=$DT_HOME/local \
-        --enable-targets=arm,x86,powerpc,cbe \
-        --enable-assertions=$LLVM_ASSERTIONS \
-        --enable-optimized=$LLVM_OPTIMIZED \
-        --disable-bindings \
-        || exit 1
+
+if [ "x$RC_ProjectName" = "xllvmCore_EmbeddedHosted" ]; then
+  # The cross-tools' build process expects to find an existing cross toolchain
+  # under names like 'arm-apple-darwin$DARWIN_VERS-as'; so make them.
+  rm -rf $DIR/bin || exit 1
+  mkdir $DIR/bin || exit 1
+  for prog in ar nm ranlib strip lipo ld as ; do
+    P=$DIR/bin/arm-apple-darwin$DARWIN_VERS-${prog}
+    T=`xcrun -sdk $SDKROOT -find ${prog}`
+    echo '#!/bin/sh' > $P || exit 1
+    echo 'exec '$T' "$@"' >> $P || exit 1
+    chmod a+x $P || exit 1
+  done
+  # Try to use the platform llvm-gcc. Fall back to gcc if it's not available.
+  for prog in gcc g++ ; do
+    P=$DIR/bin/arm-apple-darwin$DARWIN_VERS-${prog}
+# FIXME: Uncomment once llvm-gcc works for this
+#    T=`xcrun -find llvm-${prog}`
+#    if [ "x$T" = "x" ] ; then
+      T=`xcrun -sdk $SDKROOT -find ${prog}`
+#    fi
+    echo '#!/bin/sh' > $P || exit 1
+    echo 'exec '$T' -arch armv6 -isysroot '${SDKROOT}' "$@"' >> $P || exit 1
+    chmod a+x $P || exit 1
+  done
+
+  PATH=$DIR/bin:$PATH
+# otherwise, try to use llvm-gcc if it's available
+elif [ $DARWIN_VERS -gt 9 ]; then
+  # If the user has set CC or CXX, respect their wishes.  If not,
+  # compile with LLVM-GCC/LLVM-G++ if available; if LLVM is not
+  # available, fall back to usual GCC/G++ default.
+  savedPATH=$PATH ; PATH="/Developer/usr/bin:$PATH"
+  XTMPCC=$(which llvm-gcc)
+  if [ x$CC  = x -a x$XTMPCC != x ] ; then export CC=$XTMPCC  ; fi
+  XTMPCC=$(which llvm-g++)
+  if [ x$CXX = x -a x$XTMPCC != x ] ; then export CXX=$XTMPCC ; fi
+  PATH=$savedPATH
+  unset XTMPCC savedPATH
+fi
+
+
+if [ "x$RC_ProjectName" = "xllvmCore_EmbeddedHosted" ]; then
+  if [ \! -f Makefile.config ]; then
+      $SRC_DIR/configure --prefix=$DT_HOME \
+          --enable-targets=arm \
+          --host=arm-apple-darwin10 \
+          --target=arm-apple-darwin10 \
+          --build=i686-apple-darwin10 \
+          --enable-assertions=$LLVM_ASSERTIONS \
+          --enable-optimized=$LLVM_OPTIMIZED \
+          --disable-bindings \
+          || exit 1
+  fi
+else
+  if [ \! -f Makefile.config ]; then
+      $SRC_DIR/configure --prefix=$DT_HOME/local \
+          --enable-targets=arm,x86,powerpc,cbe \
+          --enable-assertions=$LLVM_ASSERTIONS \
+          --enable-optimized=$LLVM_OPTIMIZED \
+          --disable-bindings \
+          || exit 1
+  fi
 fi
 
 SUBVERSION=`echo $RC_ProjectSourceVersion | sed -e 's/[^.]*\.\([0-9]*\).*/\1/'`
@@ -154,6 +207,7 @@ if [ "x$MAJ_VER" != "x4" -o "x$MIN_VER" != "x0" ]; then
 fi
 
 make $JOBS_FLAG $OPTIMIZE_OPTS UNIVERSAL=1 UNIVERSAL_ARCH="$TARGETS" \
+    UNIVERSAL_SDK_PATH=$HOST_SDKROOT \
     NO_RUNTIME_LIBS=1 \
     LLVM_SUBMIT_VERSION=$LLVM_SUBMIT_VERSION \
     LLVM_SUBMIT_SUBVERSION=$LLVM_SUBMIT_SUBVERSION \
diff --git a/libclamav/c++/llvm/utils/emacs/tablegen-mode.el b/libclamav/c++/llvm/utils/emacs/tablegen-mode.el
index 08f7f25..833c16c 100644
--- a/libclamav/c++/llvm/utils/emacs/tablegen-mode.el
+++ b/libclamav/c++/llvm/utils/emacs/tablegen-mode.el
@@ -112,6 +112,8 @@
         )
 
   (set-syntax-table tablegen-mode-syntax-table)
+  (make-local-variable 'comment-start)
+  (setq comment-start "//")
   (run-hooks 'tablegen-mode-hook))       ; Finally, this permits the user to
                                          ;   customize the mode with a hook.
 
diff --git a/libclamav/c++/llvm/utils/findoptdiff b/libclamav/c++/llvm/utils/findoptdiff
index 36620d9..4f8d08d 100755
--- a/libclamav/c++/llvm/utils/findoptdiff
+++ b/libclamav/c++/llvm/utils/findoptdiff
@@ -70,7 +70,7 @@ dis2="$llvm2/Debug/bin/llvm-dis"
 opt1="$llvm1/Debug/bin/opt"
 opt2="$llvm2/Debug/bin/opt"
 
-all_switches="-verify -lowersetjmp -raiseallocs -simplifycfg -mem2reg -globalopt -globaldce -ipconstprop -deadargelim -instcombine -simplifycfg -prune-eh -inline -simplify-libcalls -argpromotion -tailduplicate -simplifycfg -scalarrepl -instcombine -predsimplify -condprop -tailcallelim -simplifycfg -reassociate -licm -loop-unswitch -instcombine -indvars -loop-unroll -instcombine -load-vn -gcse -sccp -instcombine -condprop -dse -dce -simplifycfg -deadtypeelim -constmerge -internalize -ipsccp -globalopt -constmerge -deadargelim -inline -prune-eh -globalopt -globaldce -argpromotion -instcombine -predsimplify -scalarrepl -globalsmodref-aa -licm -load-vn -gcse -dse -instcombine -simplifycfg -verify"
+all_switches="-verify -lowersetjmp -simplifycfg -mem2reg -globalopt -globaldce -ipconstprop -deadargelim -instcombine -simplifycfg -prune-eh -inline -simplify-libcalls -argpromotion -tailduplicate -simplifycfg -scalarrepl -instcombine -predsimplify -condprop -tailcallelim -simplifycfg -reassociate -licm -loop-unswitch -instcombine -indvars -loop-unroll -instcombine -load-vn -gcse -sccp -instcombine -condprop -dse -dce -simplifycfg -deadtypeelim -constmerge -internalize -ipsccp -globalopt -constmerge -deadargelim -inline -prune-eh -globalopt -globaldce -argpromotion -instcombine -predsimplify -scalarrepl -globalsmodref-aa -licm -load-vn -gcse -dse -instcombine -simplifycfg -verify"
 
 #counter=0
 function tryit {
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests.ObjDir/lit.site.cfg b/libclamav/c++/llvm/utils/lit/ExampleTests.ObjDir/lit.site.cfg
new file mode 100644
index 0000000..14b6e01
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests.ObjDir/lit.site.cfg
@@ -0,0 +1,15 @@
+# -*- Python -*-
+
+# Site specific configuration file.
+#
+# Typically this will be generated by the build system to automatically set
+# certain configuration variables which cannot be autodetected, so that 'lit'
+# can easily be used on the command line.
+
+import os
+
+# Preserve the obj_root, for use by the main lit.cfg.
+config.example_obj_root = os.path.dirname(__file__)
+
+lit.load_config(config, os.path.join(config.test_source_root,
+                                     'lit.cfg'))
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/Clang/fsyntax-only.c b/libclamav/c++/llvm/utils/lit/ExampleTests/Clang/fsyntax-only.c
new file mode 100644
index 0000000..a4a064b
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/Clang/fsyntax-only.c
@@ -0,0 +1,4 @@
+// RUN: clang -fsyntax-only -Xclang -verify %s
+
+int f0(void) {} // expected-warning {{control reaches end of non-void function}}
+
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/Clang/lit.cfg b/libclamav/c++/llvm/utils/lit/ExampleTests/Clang/lit.cfg
new file mode 100644
index 0000000..114ac60
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/Clang/lit.cfg
@@ -0,0 +1,80 @@
+# -*- Python -*-
+
+# Configuration file for the 'lit' test runner.
+
+# name: The name of this test suite.
+config.name = 'Clang'
+
+# testFormat: The test format to use to interpret tests.
+#
+# For now we require '&&' between commands, until they get globally killed and
+# the test runner updated.
+config.test_format = lit.formats.ShTest(execute_external = True)
+
+# suffixes: A list of file extensions to treat as test files.
+config.suffixes = ['.c', '.cpp', '.m', '.mm']
+
+# target_triple: Used by ShTest and TclTest formats for XFAIL checks.
+config.target_triple = 'foo'
+
+###
+
+# Discover the 'clang' and 'clangcc' to use.
+
+import os
+
+def inferClang(PATH):
+    # Determine which clang to use.
+    clang = os.getenv('CLANG')
+
+    # If the user set clang in the environment, definitely use that and don't
+    # try to validate.
+    if clang:
+        return clang
+
+    # Otherwise look in the path.
+    clang = lit.util.which('clang', PATH)
+
+    if not clang:
+        lit.fatal("couldn't find 'clang' program, try setting "
+                  "CLANG in your environment")
+
+    return clang
+
+def inferClangCC(clang, PATH):
+    clangcc = os.getenv('CLANGCC')
+
+    # If the user set clang in the environment, definitely use that and don't
+    # try to validate.
+    if clangcc:
+        return clangcc
+
+    # Otherwise try adding -cc since we expect to be looking in a build
+    # directory.
+    if clang.endswith('.exe'):
+        clangccName = clang[:-4] + '-cc.exe'
+    else:
+        clangccName = clang + '-cc'
+    clangcc = lit.util.which(clangccName, PATH)
+    if not clangcc:
+        # Otherwise ask clang.
+        res = lit.util.capture([clang, '-print-prog-name=clang-cc'])
+        res = res.strip()
+        if res and os.path.exists(res):
+            clangcc = res
+
+    if not clangcc:
+        lit.fatal("couldn't find 'clang-cc' program, try setting "
+                  "CLANGCC in your environment")
+
+    return clangcc
+
+clang = inferClang(config.environment['PATH'])
+if not lit.quiet:
+    lit.note('using clang: %r' % clang)
+config.substitutions.append( (' clang ', ' ' + clang + ' ') )
+
+clang_cc = inferClangCC(clang, config.environment['PATH'])
+if not lit.quiet:
+    lit.note('using clang-cc: %r' % clang_cc)
+config.substitutions.append( (' clang-cc ', ' ' + clang_cc + ' ') )
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/Bar/bar-test.ll b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/Bar/bar-test.ll
new file mode 100644
index 0000000..3017b13
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/Bar/bar-test.ll
@@ -0,0 +1,3 @@
+; RUN: true
+; XFAIL: *
+; XTARGET: darwin
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/Bar/dg.exp b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/Bar/dg.exp
new file mode 100644
index 0000000..2bda07a
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/Bar/dg.exp
@@ -0,0 +1,6 @@
+load_lib llvm.exp
+
+if { [llvm_supports_target X86] } {
+  RunLLVMTests [lsort [glob -nocomplain $srcdir/$subdir/*.{ll}]]
+}
+
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/lit.cfg b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/lit.cfg
new file mode 100644
index 0000000..e7ef037
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/lit.cfg
@@ -0,0 +1,151 @@
+# -*- Python -*-
+
+# Configuration file for the 'lit' test runner.
+
+import os
+
+# name: The name of this test suite.
+config.name = 'LLVM'
+
+# testFormat: The test format to use to interpret tests.
+config.test_format = lit.formats.TclTest()
+
+# suffixes: A list of file extensions to treat as test files; this is actually
+# set by on_clone().
+config.suffixes = []
+
+# test_source_root: The root path where tests are located.
+config.test_source_root = os.path.dirname(__file__)
+
+# test_exec_root: The root path where tests should be run.
+llvm_obj_root = getattr(config, 'llvm_obj_root', None)
+if llvm_obj_root is not None:
+    config.test_exec_root = os.path.join(llvm_obj_root, 'test')
+
+###
+
+import os
+
+# Check that the object root is known.
+if config.test_exec_root is None:
+    # Otherwise, we haven't loaded the site specific configuration (the user is
+    # probably trying to run on a test file directly, and either the site
+    # configuration hasn't been created by the build system, or we are in an
+    # out-of-tree build situation).
+
+    # Try to detect the situation where we are using an out-of-tree build by
+    # looking for 'llvm-config'.
+    #
+    # FIXME: I debated (i.e., wrote and threw away) adding logic to
+    # automagically generate the lit.site.cfg if we are in some kind of fresh
+    # build situation. This means knowing how to invoke the build system
+    # though, and I decided it was too much magic.
+
+    llvm_config = lit.util.which('llvm-config', config.environment['PATH'])
+    if not llvm_config:
+        lit.fatal('No site specific configuration available!')
+
+    # Get the source and object roots.
+    llvm_src_root = lit.util.capture(['llvm-config', '--src-root']).strip()
+    llvm_obj_root = lit.util.capture(['llvm-config', '--obj-root']).strip()
+
+    # Validate that we got a tree which points to here.
+    this_src_root = os.path.dirname(config.test_source_root)
+    if os.path.realpath(llvm_src_root) != os.path.realpath(this_src_root):
+        lit.fatal('No site specific configuration available!')
+
+    # Check that the site specific configuration exists.
+    site_cfg = os.path.join(llvm_obj_root, 'test', 'lit.site.cfg')
+    if not os.path.exists(site_cfg):
+        lit.fatal('No site specific configuration available!')
+
+    # Okay, that worked. Notify the user of the automagic, and reconfigure.
+    lit.note('using out-of-tree build at %r' % llvm_obj_root)
+    lit.load_config(config, site_cfg)
+    raise SystemExit
+
+###
+
+# Load site data from DejaGNU's site.exp.
+import re
+site_exp = {}
+# FIXME: Implement lit.site.cfg.
+for line in open(os.path.join(config.llvm_obj_root, 'test', 'site.exp')):
+    m = re.match('set ([^ ]+) "([^"]*)"', line)
+    if m:
+        site_exp[m.group(1)] = m.group(2)
+
+# Add substitutions.
+for sub in ['prcontext', 'llvmgcc', 'llvmgxx', 'compile_cxx', 'compile_c',
+            'link', 'shlibext', 'ocamlopt', 'llvmdsymutil', 'llvmlibsdir',
+            'bugpoint_topts']:
+    if sub in ('llvmgcc', 'llvmgxx'):
+        config.substitutions.append(('%' + sub,
+                                     site_exp[sub] + ' -emit-llvm -w'))
+    else:
+        config.substitutions.append(('%' + sub, site_exp[sub]))
+
+excludes = []
+
+# Provide target_triple for use in XFAIL and XTARGET.
+config.target_triple = site_exp['target_triplet']
+
+# Provide llvm_supports_target for use in local configs.
+targets = set(site_exp["TARGETS_TO_BUILD"].split())
+def llvm_supports_target(name):
+    return name in targets
+
+langs = set(site_exp['llvmgcc_langs'].split(','))
+def llvm_gcc_supports(name):
+    return name in langs
+
+# Provide on_clone hook for reading 'dg.exp'.
+import os
+simpleLibData = re.compile(r"""load_lib llvm.exp
+
+RunLLVMTests \[lsort \[glob -nocomplain \$srcdir/\$subdir/\*\.(.*)\]\]""",
+                           re.MULTILINE)
+conditionalLibData = re.compile(r"""load_lib llvm.exp
+
+if.*\[ ?(llvm[^ ]*) ([^ ]*) ?\].*{
+ *RunLLVMTests \[lsort \[glob -nocomplain \$srcdir/\$subdir/\*\.(.*)\]\]
+\}""", re.MULTILINE)
+def on_clone(parent, cfg, for_path):
+    def addSuffixes(match):
+        if match[0] == '{' and match[-1] == '}':
+            cfg.suffixes = ['.' + s for s in match[1:-1].split(',')]
+        else:
+            cfg.suffixes = ['.' + match]
+
+    libPath = os.path.join(os.path.dirname(for_path),
+                           'dg.exp')
+    if not os.path.exists(libPath):
+        cfg.unsupported = True
+        return
+
+    # Reset unsupported, in case we inherited it.
+    cfg.unsupported = False
+    lib = open(libPath).read().strip()
+
+    # Check for a simple library.
+    m = simpleLibData.match(lib)
+    if m:
+        addSuffixes(m.group(1))
+        return
+
+    # Check for a conditional test set.
+    m = conditionalLibData.match(lib)
+    if m:
+        funcname,arg,match = m.groups()
+        addSuffixes(match)
+
+        func = globals().get(funcname)
+        if not func:
+            lit.error('unsupported predicate %r' % funcname)
+        elif not func(arg):
+            cfg.unsupported = True
+        return
+    # Otherwise, give up.
+    lit.error('unable to understand %r:\n%s' % (libPath, lib))
+
+config.on_clone = on_clone
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/lit.site.cfg b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/lit.site.cfg
new file mode 100644
index 0000000..3bfee54
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/lit.site.cfg
@@ -0,0 +1,10 @@
+# -*- Python -*-
+
+## Autogenerated by Makefile ##
+# Do not edit!
+
+# Preserve some key paths for use by main LLVM test suite config.
+config.llvm_obj_root = os.path.dirname(os.path.dirname(__file__))
+
+# Let the main config do the real work.
+lit.load_config(config, os.path.join(config.llvm_obj_root, 'test/lit.cfg'))
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/site.exp b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/site.exp
new file mode 100644
index 0000000..1d9c743
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.InTree/test/site.exp
@@ -0,0 +1,30 @@
+## these variables are automatically generated by make ##
+# Do not edit here.  If you wish to override these values
+# edit the last section
+set target_triplet "x86_64-apple-darwin10"
+set TARGETS_TO_BUILD "X86 Sparc PowerPC Alpha ARM Mips CellSPU PIC16 XCore MSP430 SystemZ Blackfin CBackend MSIL CppBackend"
+set llvmgcc_langs "c,c++,objc,obj-c++"
+set llvmgcc_version "4.2.1"
+set prcontext "/usr/bin/tclsh8.4 /Volumes/Data/ddunbar/llvm/test/Scripts/prcontext.tcl"
+set llvmtoolsdir "/Users/ddunbar/llvm.obj.64/Debug/bin"
+set llvmlibsdir "/Users/ddunbar/llvm.obj.64/Debug/lib"
+set srcroot "/Volumes/Data/ddunbar/llvm"
+set objroot "/Volumes/Data/ddunbar/llvm.obj.64"
+set srcdir "/Volumes/Data/ddunbar/llvm/test"
+set objdir "/Volumes/Data/ddunbar/llvm.obj.64/test"
+set gccpath "/usr/bin/gcc -arch x86_64"
+set gxxpath "/usr/bin/g++ -arch x86_64"
+set compile_c " /usr/bin/gcc -arch x86_64 -I/Users/ddunbar/llvm.obj.64/include -I/Users/ddunbar/llvm.obj.64/test -I/Volumes/Data/ddunbar/llvm.obj.64/include -I/Volumes/Data/ddunbar/llvm/include -I/Volumes/Data/ddunbar/llvm/test -D_DEBUG -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS -m64 -pedantic -Wno-long-long -Wall -W -Wno-unused-parameter -Wwrite-strings -c "
+set compile_cxx " /usr/bin/g++ -arch x86_64 -I/Users/ddunbar/llvm.obj.64/include -I/Users/ddunbar/llvm.obj.64/test -I/Volumes/Data/ddunbar/llvm.obj.64/include -I/Volumes/Data/ddunbar/llvm/include -I/Volumes/Data/ddunbar/llvm/test -D_DEBUG -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS -g -fno-exceptions -fno-common -Woverloaded-virtual -m64 -pedantic -Wno-long-long -Wall -W -Wno-unused-parameter -Wwrite-strings -c "
+set link " /usr/bin/g++ -arch x86_64 -I/Users/ddunbar/llvm.obj.64/include -I/Users/ddunbar/llvm.obj.64/test -I/Volumes/Data/ddunbar/llvm.obj.64/include -I/Volumes/Data/ddunbar/llvm/include -I/Volumes/Data/ddunbar/llvm/test -D_DEBUG -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS -g -fno-exceptions -fno-common -Woverloaded-virtual -m64 -pedantic -Wno-long-long -Wall -W -Wno-unused-parameter -Wwrite-strings -g -L/Users/ddunbar/llvm.obj.64/Debug/lib -L/Volumes/Data/ddunbar/llvm.obj.64/Debug/lib "
+set llvmgcc "/Users/ddunbar/llvm-gcc/install/bin/llvm-gcc -m64 "
+set llvmgxx "/Users/ddunbar/llvm-gcc/install/bin/llvm-gcc -m64 "
+set llvmgccmajvers "4"
+set bugpoint_topts "-gcc-tool-args -m64"
+set shlibext ".dylib"
+set ocamlopt "/sw/bin/ocamlopt -cc \"g++ -Wall -D_FILE_OFFSET_BITS=64 -D_REENTRANT\" -I /Users/ddunbar/llvm.obj.64/Debug/lib/ocaml"
+set valgrind ""
+set grep "/usr/bin/grep"
+set gas "/usr/bin/as"
+set llvmdsymutil "dsymutil"
+## All variables above are generated by configure. Do Not Edit ## 
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/lit.local.cfg b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/lit.local.cfg
new file mode 100644
index 0000000..80d0c7e
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/lit.local.cfg
@@ -0,0 +1 @@
+config.excludes = ['src']
diff --git a/contrib/entitynorm/ChangeLog b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/obj/test/Foo/lit.local.cfg
similarity index 100%
copy from contrib/entitynorm/ChangeLog
copy to libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/obj/test/Foo/lit.local.cfg
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/obj/test/lit.site.cfg b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/obj/test/lit.site.cfg
new file mode 100644
index 0000000..bdcc35e
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/obj/test/lit.site.cfg
@@ -0,0 +1,11 @@
+# -*- Python -*-
+
+## Autogenerated by Makefile ##
+# Do not edit!
+
+# Preserve some key paths for use by main LLVM test suite config.
+config.llvm_obj_root = os.path.dirname(os.path.dirname(__file__))
+
+# Let the main config do the real work.
+lit.load_config(config, os.path.join(config.llvm_obj_root,
+                                     '../src/test/lit.cfg'))
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/obj/test/site.exp b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/obj/test/site.exp
new file mode 100644
index 0000000..1d9c743
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/obj/test/site.exp
@@ -0,0 +1,30 @@
+## these variables are automatically generated by make ##
+# Do not edit here.  If you wish to override these values
+# edit the last section
+set target_triplet "x86_64-apple-darwin10"
+set TARGETS_TO_BUILD "X86 Sparc PowerPC Alpha ARM Mips CellSPU PIC16 XCore MSP430 SystemZ Blackfin CBackend MSIL CppBackend"
+set llvmgcc_langs "c,c++,objc,obj-c++"
+set llvmgcc_version "4.2.1"
+set prcontext "/usr/bin/tclsh8.4 /Volumes/Data/ddunbar/llvm/test/Scripts/prcontext.tcl"
+set llvmtoolsdir "/Users/ddunbar/llvm.obj.64/Debug/bin"
+set llvmlibsdir "/Users/ddunbar/llvm.obj.64/Debug/lib"
+set srcroot "/Volumes/Data/ddunbar/llvm"
+set objroot "/Volumes/Data/ddunbar/llvm.obj.64"
+set srcdir "/Volumes/Data/ddunbar/llvm/test"
+set objdir "/Volumes/Data/ddunbar/llvm.obj.64/test"
+set gccpath "/usr/bin/gcc -arch x86_64"
+set gxxpath "/usr/bin/g++ -arch x86_64"
+set compile_c " /usr/bin/gcc -arch x86_64 -I/Users/ddunbar/llvm.obj.64/include -I/Users/ddunbar/llvm.obj.64/test -I/Volumes/Data/ddunbar/llvm.obj.64/include -I/Volumes/Data/ddunbar/llvm/include -I/Volumes/Data/ddunbar/llvm/test -D_DEBUG -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS -m64 -pedantic -Wno-long-long -Wall -W -Wno-unused-parameter -Wwrite-strings -c "
+set compile_cxx " /usr/bin/g++ -arch x86_64 -I/Users/ddunbar/llvm.obj.64/include -I/Users/ddunbar/llvm.obj.64/test -I/Volumes/Data/ddunbar/llvm.obj.64/include -I/Volumes/Data/ddunbar/llvm/include -I/Volumes/Data/ddunbar/llvm/test -D_DEBUG -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS -g -fno-exceptions -fno-common -Woverloaded-virtual -m64 -pedantic -Wno-long-long -Wall -W -Wno-unused-parameter -Wwrite-strings -c "
+set link " /usr/bin/g++ -arch x86_64 -I/Users/ddunbar/llvm.obj.64/include -I/Users/ddunbar/llvm.obj.64/test -I/Volumes/Data/ddunbar/llvm.obj.64/include -I/Volumes/Data/ddunbar/llvm/include -I/Volumes/Data/ddunbar/llvm/test -D_DEBUG -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS -g -fno-exceptions -fno-common -Woverloaded-virtual -m64 -pedantic -Wno-long-long -Wall -W -Wno-unused-parameter -Wwrite-strings -g -L/Users/ddunbar/llvm.obj.64/Debug/lib -L/Volumes/Data/ddunbar/llvm.obj.64/Debug/lib "
+set llvmgcc "/Users/ddunbar/llvm-gcc/install/bin/llvm-gcc -m64 "
+set llvmgxx "/Users/ddunbar/llvm-gcc/install/bin/llvm-gcc -m64 "
+set llvmgccmajvers "4"
+set bugpoint_topts "-gcc-tool-args -m64"
+set shlibext ".dylib"
+set ocamlopt "/sw/bin/ocamlopt -cc \"g++ -Wall -D_FILE_OFFSET_BITS=64 -D_REENTRANT\" -I /Users/ddunbar/llvm.obj.64/Debug/lib/ocaml"
+set valgrind ""
+set grep "/usr/bin/grep"
+set gas "/usr/bin/as"
+set llvmdsymutil "dsymutil"
+## All variables above are generated by configure. Do Not Edit ## 
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/data.txt b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/data.txt
new file mode 100644
index 0000000..45b983b
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/data.txt
@@ -0,0 +1 @@
+hi
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/dg.exp b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/dg.exp
new file mode 100644
index 0000000..2bda07a
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/dg.exp
@@ -0,0 +1,6 @@
+load_lib llvm.exp
+
+if { [llvm_supports_target X86] } {
+  RunLLVMTests [lsort [glob -nocomplain $srcdir/$subdir/*.{ll}]]
+}
+
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/pct-S.ll b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/pct-S.ll
new file mode 100644
index 0000000..4e8a582
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/Foo/pct-S.ll
@@ -0,0 +1 @@
+; RUN: grep "hi" %S/data.txt
\ No newline at end of file
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/lit.cfg b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/lit.cfg
new file mode 100644
index 0000000..e7ef037
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/LLVM.OutOfTree/src/test/lit.cfg
@@ -0,0 +1,151 @@
+# -*- Python -*-
+
+# Configuration file for the 'lit' test runner.
+
+import os
+
+# name: The name of this test suite.
+config.name = 'LLVM'
+
+# testFormat: The test format to use to interpret tests.
+config.test_format = lit.formats.TclTest()
+
+# suffixes: A list of file extensions to treat as test files; this is actually
+# set by on_clone().
+config.suffixes = []
+
+# test_source_root: The root path where tests are located.
+config.test_source_root = os.path.dirname(__file__)
+
+# test_exec_root: The root path where tests should be run.
+llvm_obj_root = getattr(config, 'llvm_obj_root', None)
+if llvm_obj_root is not None:
+    config.test_exec_root = os.path.join(llvm_obj_root, 'test')
+
+###
+
+import os
+
+# Check that the object root is known.
+if config.test_exec_root is None:
+    # Otherwise, we haven't loaded the site specific configuration (the user is
+    # probably trying to run on a test file directly, and either the site
+    # configuration hasn't been created by the build system, or we are in an
+    # out-of-tree build situation).
+
+    # Try to detect the situation where we are using an out-of-tree build by
+    # looking for 'llvm-config'.
+    #
+    # FIXME: I debated (i.e., wrote and threw away) adding logic to
+    # automagically generate the lit.site.cfg if we are in some kind of fresh
+    # build situation. This means knowing how to invoke the build system
+    # though, and I decided it was too much magic.
+
+    llvm_config = lit.util.which('llvm-config', config.environment['PATH'])
+    if not llvm_config:
+        lit.fatal('No site specific configuration available!')
+
+    # Get the source and object roots.
+    llvm_src_root = lit.util.capture(['llvm-config', '--src-root']).strip()
+    llvm_obj_root = lit.util.capture(['llvm-config', '--obj-root']).strip()
+
+    # Validate that we got a tree which points to here.
+    this_src_root = os.path.dirname(config.test_source_root)
+    if os.path.realpath(llvm_src_root) != os.path.realpath(this_src_root):
+        lit.fatal('No site specific configuration available!')
+
+    # Check that the site specific configuration exists.
+    site_cfg = os.path.join(llvm_obj_root, 'test', 'lit.site.cfg')
+    if not os.path.exists(site_cfg):
+        lit.fatal('No site specific configuration available!')
+
+    # Okay, that worked. Notify the user of the automagic, and reconfigure.
+    lit.note('using out-of-tree build at %r' % llvm_obj_root)
+    lit.load_config(config, site_cfg)
+    raise SystemExit
+
+###
+
+# Load site data from DejaGNU's site.exp.
+import re
+site_exp = {}
+# FIXME: Implement lit.site.cfg.
+for line in open(os.path.join(config.llvm_obj_root, 'test', 'site.exp')):
+    m = re.match('set ([^ ]+) "([^"]*)"', line)
+    if m:
+        site_exp[m.group(1)] = m.group(2)
+
+# Add substitutions.
+for sub in ['prcontext', 'llvmgcc', 'llvmgxx', 'compile_cxx', 'compile_c',
+            'link', 'shlibext', 'ocamlopt', 'llvmdsymutil', 'llvmlibsdir',
+            'bugpoint_topts']:
+    if sub in ('llvmgcc', 'llvmgxx'):
+        config.substitutions.append(('%' + sub,
+                                     site_exp[sub] + ' -emit-llvm -w'))
+    else:
+        config.substitutions.append(('%' + sub, site_exp[sub]))
+
+excludes = []
+
+# Provide target_triple for use in XFAIL and XTARGET.
+config.target_triple = site_exp['target_triplet']
+
+# Provide llvm_supports_target for use in local configs.
+targets = set(site_exp["TARGETS_TO_BUILD"].split())
+def llvm_supports_target(name):
+    return name in targets
+
+langs = set(site_exp['llvmgcc_langs'].split(','))
+def llvm_gcc_supports(name):
+    return name in langs
+
+# Provide on_clone hook for reading 'dg.exp'.
+import os
+simpleLibData = re.compile(r"""load_lib llvm.exp
+
+RunLLVMTests \[lsort \[glob -nocomplain \$srcdir/\$subdir/\*\.(.*)\]\]""",
+                           re.MULTILINE)
+conditionalLibData = re.compile(r"""load_lib llvm.exp
+
+if.*\[ ?(llvm[^ ]*) ([^ ]*) ?\].*{
+ *RunLLVMTests \[lsort \[glob -nocomplain \$srcdir/\$subdir/\*\.(.*)\]\]
+\}""", re.MULTILINE)
+def on_clone(parent, cfg, for_path):
+    def addSuffixes(match):
+        if match[0] == '{' and match[-1] == '}':
+            cfg.suffixes = ['.' + s for s in match[1:-1].split(',')]
+        else:
+            cfg.suffixes = ['.' + match]
+
+    libPath = os.path.join(os.path.dirname(for_path),
+                           'dg.exp')
+    if not os.path.exists(libPath):
+        cfg.unsupported = True
+        return
+
+    # Reset unsupported, in case we inherited it.
+    cfg.unsupported = False
+    lib = open(libPath).read().strip()
+
+    # Check for a simple library.
+    m = simpleLibData.match(lib)
+    if m:
+        addSuffixes(m.group(1))
+        return
+
+    # Check for a conditional test set.
+    m = conditionalLibData.match(lib)
+    if m:
+        funcname,arg,match = m.groups()
+        addSuffixes(match)
+
+        func = globals().get(funcname)
+        if not func:
+            lit.error('unsupported predicate %r' % funcname)
+        elif not func(arg):
+            cfg.unsupported = True
+        return
+    # Otherwise, give up.
+    lit.error('unable to understand %r:\n%s' % (libPath, lib))
+
+config.on_clone = on_clone
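For reference, conditionalLibData above is shaped to match dg.exp files like the Foo/dg.exp added earlier in this patch. The check below uses the same pattern, rewritten with explicit \n escapes so the sketch can be indented freely:

    import re

    pat = re.compile(
        r'load_lib llvm.exp\n\n'
        r'if.*\[ ?(llvm[^ ]*) ([^ ]*) ?\].*{\n'
        r' *RunLLVMTests \[lsort \[glob -nocomplain \$srcdir/\$subdir/\*\.(.*)\]\]\n'
        r'\}', re.MULTILINE)

    dg = ('load_lib llvm.exp\n'
          '\n'
          'if { [llvm_supports_target X86] } {\n'
          '  RunLLVMTests [lsort [glob -nocomplain $srcdir/$subdir/*.{ll}]]\n'
          '}')

    m = pat.match(dg)
    assert m.groups() == ('llvm_supports_target', 'X86', '{ll}')
    # on_clone() then resolves group 1 through globals() to gate the
    # directory, and addSuffixes turns '{ll}' into cfg.suffixes == ['.ll'].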
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/ShExternal/lit.local.cfg b/libclamav/c++/llvm/utils/lit/ExampleTests/ShExternal/lit.local.cfg
new file mode 100644
index 0000000..1061da6
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/ShExternal/lit.local.cfg
@@ -0,0 +1,6 @@
+# -*- Python -*-
+
+config.test_format = lit.formats.ShTest(execute_external = True)
+
+config.suffixes = ['.c']
+
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/ShInternal/lit.local.cfg b/libclamav/c++/llvm/utils/lit/ExampleTests/ShInternal/lit.local.cfg
new file mode 100644
index 0000000..448eaa4
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/ShInternal/lit.local.cfg
@@ -0,0 +1,6 @@
+# -*- Python -*-
+
+config.test_format = lit.formats.ShTest(execute_external = False)
+
+config.suffixes = ['.c']
+
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/lit.local.cfg b/libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/lit.local.cfg
new file mode 100644
index 0000000..6a37129
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/lit.local.cfg
@@ -0,0 +1,5 @@
+# -*- Python -*-
+
+config.test_format = lit.formats.TclTest()
+
+config.suffixes = ['.ll']
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/stderr-pipe.ll b/libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/stderr-pipe.ll
new file mode 100644
index 0000000..6c55fe8
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/stderr-pipe.ll
@@ -0,0 +1 @@
+; RUN: gcc -### > /dev/null |& grep {gcc version}
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/tcl-redir-1.ll b/libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/tcl-redir-1.ll
new file mode 100644
index 0000000..61240ba
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/TclTest/tcl-redir-1.ll
@@ -0,0 +1,7 @@
+; RUN: echo 'hi' > %t.1 | echo 'hello' > %t.2
+; RUN: not grep 'hi' %t.1
+; RUN: grep 'hello' %t.2
+
+
+
+
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/fail.c b/libclamav/c++/llvm/utils/lit/ExampleTests/fail.c
new file mode 100644
index 0000000..84db41a
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/fail.c
@@ -0,0 +1,2 @@
+// RUN: echo 'I am some stdout'
+// RUN: false
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/lit.cfg b/libclamav/c++/llvm/utils/lit/ExampleTests/lit.cfg
new file mode 100644
index 0000000..dbd574f
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/lit.cfg
@@ -0,0 +1,23 @@
+# -*- Python -*-
+
+# Configuration file for the 'lit' test runner.
+
+# name: The name of this test suite.
+config.name = 'Examples'
+
+# suffixes: A list of file extensions to treat as test files.
+config.suffixes = ['.c', '.cpp', '.m', '.mm', '.ll']
+
+# testFormat: The test format to use to interpret tests.
+config.test_format = lit.formats.ShTest()
+
+# test_source_root: The path where tests are located (default is the test suite
+# root).
+config.test_source_root = None
+
+# test_exec_root: The path where tests should be run (default is the test suite
+# root).
+config.test_exec_root = None
+
+# target_triple: Used by ShTest and TclTest formats for XFAIL checks.
+config.target_triple = 'foo'
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/pass.c b/libclamav/c++/llvm/utils/lit/ExampleTests/pass.c
new file mode 100644
index 0000000..5c1031c
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/pass.c
@@ -0,0 +1 @@
+// RUN: true
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/xfail.c b/libclamav/c++/llvm/utils/lit/ExampleTests/xfail.c
new file mode 100644
index 0000000..b36cd99
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/xfail.c
@@ -0,0 +1,2 @@
+// RUN: false
+// XFAIL: *
diff --git a/libclamav/c++/llvm/utils/lit/ExampleTests/xpass.c b/libclamav/c++/llvm/utils/lit/ExampleTests/xpass.c
new file mode 100644
index 0000000..ad84990
--- /dev/null
+++ b/libclamav/c++/llvm/utils/lit/ExampleTests/xpass.c
@@ -0,0 +1,2 @@
+// RUN: true
+// XFAIL: *
diff --git a/libclamav/c++/llvm/utils/lit/LitConfig.py b/libclamav/c++/llvm/utils/lit/LitConfig.py
index 4fb0ccc..0e0a493 100644
--- a/libclamav/c++/llvm/utils/lit/LitConfig.py
+++ b/libclamav/c++/llvm/utils/lit/LitConfig.py
@@ -17,7 +17,8 @@ class LitConfig:
     def __init__(self, progname, path, quiet,
                  useValgrind, valgrindArgs,
                  useTclAsSh,
-                 noExecute, debug, isWindows):
+                 noExecute, debug, isWindows,
+                 params):
         # The name of the test runner.
         self.progname = progname
         # The items to add to the PATH environment variable.
@@ -29,6 +30,8 @@ class LitConfig:
         self.noExecute = noExecute
         self.debug = debug
         self.isWindows = bool(isWindows)
+        self.params = dict(params)
+        self.bashPath = None
 
         self.numErrors = 0
         self.numWarnings = 0
@@ -41,6 +44,27 @@ class LitConfig:
                                       mustExist = True,
                                       config = config)
 
+    def getBashPath(self):
+        """getBashPath - Get the path to 'bash'"""
+        import os, Util
+
+        if self.bashPath is not None:
+            return self.bashPath
+
+        self.bashPath = Util.which('bash', os.pathsep.join(self.path))
+        if self.bashPath is None:
+            # Check some known paths.
+            for path in ('/bin/bash', '/usr/bin/bash'):
+                if os.path.exists(path):
+                    self.bashPath = path
+                    break
+
+        if self.bashPath is None:
+            self.warning("Unable to find 'bash', running Tcl tests internally.")
+            self.bashPath = ''
+
+        return self.bashPath
+
     def _write_message(self, kind, message):
         import inspect, os, sys
 
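getBashPath() above relies on Util.which, which isn't part of this hunk. A rough standalone equivalent, assuming a straightforward PATH-style scan for an executable file (sketch only; lit's real Util.which also knows about Windows conventions such as PATHEXT):

    import os

    def which(command, paths=None):
        # Scan a PATH-style string for an executable file named 'command'.
        if paths is None:
            paths = os.environ.get('PATH', '')
        for p in paths.split(os.pathsep):
            candidate = os.path.join(p, command)
            if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
                return candidate
        return None

    # which('bash') -> '/bin/bash' on most Unix systems, None if absent;
    # getBashPath() then falls back to probing /bin/bash and /usr/bin/bash.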
diff --git a/libclamav/c++/llvm/utils/lit/LitFormats.py b/libclamav/c++/llvm/utils/lit/LitFormats.py
index 9b8250d..270f087 100644
--- a/libclamav/c++/llvm/utils/lit/LitFormats.py
+++ b/libclamav/c++/llvm/utils/lit/LitFormats.py
@@ -1,2 +1,3 @@
-from TestFormats import GoogleTest, ShTest, TclTest, SyntaxCheckTest
+from TestFormats import GoogleTest, ShTest, TclTest
+from TestFormats import SyntaxCheckTest, OneCommandPerFileTest
 
diff --git a/libclamav/c++/llvm/utils/lit/Test.py b/libclamav/c++/llvm/utils/lit/Test.py
index d3f6274..1f6556b 100644
--- a/libclamav/c++/llvm/utils/lit/Test.py
+++ b/libclamav/c++/llvm/utils/lit/Test.py
@@ -54,6 +54,14 @@ class Test:
         self.output = None
         # The wall time to execute this test, if timing and once complete.
         self.elapsed = None
+        # The repeat index of this test, or None.
+        self.index = None
+
+    def copyWithIndex(self, index):
+        import copy
+        res = copy.copy(self)
+        res.index = index
+        return res
 
     def setResult(self, result, output, elapsed):
         assert self.result is None, "Test result already set!"
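copyWithIndex exists for the --repeat option added to lit.py later in this patch: each test is shallow-copied N times, and the copy's index keeps their temporary outputs apart. A minimal sketch of that expansion, with a stand-in test object (FakeTest is illustrative, not lit's Test):

    import copy

    class FakeTest(object):
        # Only the pieces copyWithIndex touches: a name and an index slot.
        def __init__(self, name):
            self.name = name
            self.index = None
        def copyWithIndex(self, index):
            res = copy.copy(self)   # shallow: suite/config stay shared
            res.index = index
            return res

    tests = [FakeTest('a'), FakeTest('b')]
    tests = [t.copyWithIndex(i) for t in tests for i in range(3)]
    # 6 entries; parseIntegratedTestScript appends '_%d' % index to tmpBase,
    # so repeated runs of one test don't clobber each other's Output files.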
diff --git a/libclamav/c++/llvm/utils/lit/TestFormats.py b/libclamav/c++/llvm/utils/lit/TestFormats.py
index 61bdb18..7305c79 100644
--- a/libclamav/c++/llvm/utils/lit/TestFormats.py
+++ b/libclamav/c++/llvm/utils/lit/TestFormats.py
@@ -53,6 +53,11 @@ class GoogleTest(object):
 
     def execute(self, test, litConfig):
         testPath,testName = os.path.split(test.getSourcePath())
+        while not os.path.exists(testPath):
+            # Handle GTest parametrized and typed tests, whose name includes
+            # some '/'s.
+            testPath, namePrefix = os.path.split(testPath)
+            testName = os.path.join(namePrefix, testName)
 
         cmd = [testPath, '--gtest_filter=' + testName]
         out, err, exitCode = TestRunner.executeCommand(cmd)
@@ -77,14 +82,12 @@ class FileBasedTest(object):
                                     localConfig)
 
 class ShTest(FileBasedTest):
-    def __init__(self, execute_external = False, require_and_and = False):
+    def __init__(self, execute_external = False):
         self.execute_external = execute_external
-        self.require_and_and = require_and_and
 
     def execute(self, test, litConfig):
         return TestRunner.executeShTest(test, litConfig,
-                                        self.execute_external,
-                                        self.require_and_and)
+                                        self.execute_external)
 
 class TclTest(FileBasedTest):
     def execute(self, test, litConfig):
@@ -95,16 +98,20 @@ class TclTest(FileBasedTest):
 import re
 import tempfile
 
-class SyntaxCheckTest:
+class OneCommandPerFileTest:
     # FIXME: Refactor into generic test for running some command on a directory
     # of inputs.
 
-    def __init__(self, compiler, dir, recursive, pattern, extra_cxx_args=[]):
-        self.compiler = str(compiler)
+    def __init__(self, command, dir, recursive=False,
+                 pattern=".*", useTempInput=False):
+        if isinstance(command, str):
+            self.command = [command]
+        else:
+            self.command = list(command)
         self.dir = str(dir)
         self.recursive = bool(recursive)
         self.pattern = re.compile(pattern)
-        self.extra_cxx_args = list(extra_cxx_args)
+        self.useTempInput = useTempInput
 
     def getTestsInDirectory(self, testSuite, path_in_suite,
                             litConfig, localConfig):
@@ -112,6 +119,10 @@ class SyntaxCheckTest:
             if not self.recursive:
                 subdirs[:] = []
 
+            subdirs[:] = [d for d in subdirs
+                          if (d != '.svn' and
+                              d not in localConfig.excludes)]
+
             for filename in filenames:
                 if (not self.pattern.match(filename) or
                     filename in localConfig.excludes):
@@ -128,17 +139,46 @@ class SyntaxCheckTest:
                 test.source_path = path
                 yield test
 
+    def createTempInput(self, tmp, test):
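+        # 'abstract' is deliberately undefined: evaluating it raises
+        # NameError, so subclasses that use temp input (e.g. SyntaxCheckTest
+        # below) must override this to fill in the temporary file.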
+        abstract
+
     def execute(self, test, litConfig):
-        tmp = tempfile.NamedTemporaryFile(suffix='.cpp')
-        print >>tmp, '#include "%s"' % test.source_path
-        tmp.flush()
+        if test.config.unsupported:
+            return (Test.UNSUPPORTED, 'Test is unsupported')
+
+        cmd = list(self.command)
+
+        # If using temp input, create a temporary file and hand it to the
+        # subclass.
+        if self.useTempInput:
+            tmp = tempfile.NamedTemporaryFile(suffix='.cpp')
+            self.createTempInput(tmp, test)
+            tmp.flush()
+            cmd.append(tmp.name)
+        else:
+            cmd.append(test.source_path)
 
-        cmd = [self.compiler, '-x', 'c++', '-fsyntax-only', tmp.name]
-        cmd.extend(self.extra_cxx_args)
         out, err, exitCode = TestRunner.executeCommand(cmd)
 
         diags = out + err
         if not exitCode and not diags.strip():
             return Test.PASS,''
 
-        return Test.FAIL, diags
+        # Try to include some useful information.
+        report = """Command: %s\n""" % ' '.join(["'%s'" % a
+                                                 for a in cmd])
+        if self.useTempInput:
+            report += """Temporary File: %s\n""" % tmp.name
+            report += "--\n%s--\n""" % open(tmp.name).read()
+        report += """Output:\n--\n%s--""" % diags
+
+        return Test.FAIL, report
+
+class SyntaxCheckTest(OneCommandPerFileTest):
+    def __init__(self, compiler, dir, extra_cxx_args=[], *args, **kwargs):
+        cmd = [compiler, '-x', 'c++', '-fsyntax-only'] + extra_cxx_args
+        OneCommandPerFileTest.__init__(self, cmd, dir,
+                                       useTempInput=1, *args, **kwargs)
+
+    def createTempInput(self, tmp, test):
+        print >>tmp, '#include "%s"' % test.source_path
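SyntaxCheckTest is the one subclass in this patch; any run-one-command-per-file checker fits the same mold. A hypothetical further subclass (AssembleTest and its llvm-as invocation are illustrative only, not part of this patch):

    class AssembleTest(OneCommandPerFileTest):
        # Hypothetical: run 'llvm-as' over every .ll file under 'dir'; any
        # diagnostics or a nonzero exit mark the file as failing.
        def __init__(self, dir, **kwargs):
            OneCommandPerFileTest.__init__(
                self, ['llvm-as', '-o', '/dev/null'], dir,
                pattern=r'.*\.ll$', **kwargs)

    # in a lit.cfg:
    #   config.test_format = AssembleTest(dir='.')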
diff --git a/libclamav/c++/llvm/utils/lit/TestRunner.py b/libclamav/c++/llvm/utils/lit/TestRunner.py
index 7b549ac..20fbc6c 100644
--- a/libclamav/c++/llvm/utils/lit/TestRunner.py
+++ b/libclamav/c++/llvm/utils/lit/TestRunner.py
@@ -15,6 +15,10 @@ class InternalShellError(Exception):
 
 # Don't use close_fds on Windows.
 kUseCloseFDs = platform.system() != 'Windows'
+
+# Use temporary files to replace /dev/null on Windows.
+kAvoidDevNull = platform.system() == 'Windows'
+
 def executeCommand(command, cwd=None, env=None):
     p = subprocess.Popen(command, cwd=cwd,
                          stdin=subprocess.PIPE,
@@ -63,21 +67,30 @@ def executeShCmd(cmd, cfg, cwd, results):
     # output. This is null until we have seen some output using
     # stderr.
     for i,j in enumerate(cmd.commands):
+        # Apply the redirections; we use (N,) as a sentinel to indicate stdin,
+        # stdout, stderr for N equal to 0, 1, or 2 respectively. Redirects to or
+        # from a file are represented with a list [file, mode, file-object]
+        # where file-object is initially None.
         redirects = [(0,), (1,), (2,)]
         for r in j.redirects:
             if r[0] == ('>',2):
                 redirects[2] = [r[1], 'w', None]
+            elif r[0] == ('>>',2):
+                redirects[2] = [r[1], 'a', None]
             elif r[0] == ('>&',2) and r[1] in '012':
                 redirects[2] = redirects[int(r[1])]
             elif r[0] == ('>&',) or r[0] == ('&>',):
                 redirects[1] = redirects[2] = [r[1], 'w', None]
             elif r[0] == ('>',):
                 redirects[1] = [r[1], 'w', None]
+            elif r[0] == ('>>',):
+                redirects[1] = [r[1], 'a', None]
             elif r[0] == ('<',):
                 redirects[0] = [r[1], 'r', None]
             else:
                 raise NotImplementedError,"Unsupported redirect: %r" % (r,)
 
+        # Map from the final redirections to something subprocess can handle.
         final_redirects = []
         for index,r in enumerate(redirects):
             if r == (0,):
@@ -95,7 +108,13 @@ def executeShCmd(cmd, cfg, cwd, results):
                 result = subprocess.PIPE
             else:
                 if r[2] is None:
-                    r[2] = open(r[0], r[1])
+                    if kAvoidDevNull and r[0] == '/dev/null':
+                        r[2] = tempfile.TemporaryFile(mode=r[1])
+                    else:
+                        r[2] = open(r[0], r[1])
+                    # Work around a Win32 and/or subprocess bug when appending.
+                    if r[1] == 'a':
+                        r[2].seek(0, 2)
                 result = r[2]
             final_redirects.append(result)
 
@@ -237,7 +256,9 @@ def executeTclScriptInternal(test, litConfig, tmpBase, commands, cwd):
     for c in cmds[1:]:
         cmd = ShUtil.Seq(cmd, '&&', c)
 
-    if litConfig.useTclAsSh:
+    # FIXME: This is lame, we shouldn't need bash. See PR5240.
+    bashPath = litConfig.getBashPath()
+    if litConfig.useTclAsSh and bashPath:
         script = tmpBase + '.script'
 
         # Write script file
@@ -252,7 +273,7 @@ def executeTclScriptInternal(test, litConfig, tmpBase, commands, cwd):
             print >>sys.stdout
             return '', '', 0
 
-        command = ['/bin/bash', script]
+        command = [litConfig.getBashPath(), script]
         out,err,exitCode = executeCommand(command, cwd=cwd,
                                           env=test.config.environment)
 
@@ -315,7 +336,24 @@ def executeScript(test, litConfig, tmpBase, commands, cwd):
 
     return executeCommand(command, cwd=cwd, env=test.config.environment)
 
-def parseIntegratedTestScript(test, xfailHasColon, requireAndAnd):
+def isExpectedFail(xfails, xtargets, target_triple):
+    # Check if any xfail matches this target.
+    for item in xfails:
+        if item == '*' or item in target_triple:
+            break
+    else:
+        return False
+
+    # If so, see if it is expected to pass on this target.
+    #
+    # FIXME: Rename XTARGET to something that makes sense, like XPASS.
+    for item in xtargets:
+        if item == '*' or item in target_triple:
+            return False
+
+    return True
+
+def parseIntegratedTestScript(test):
     """parseIntegratedTestScript - Scan an LLVM/Clang style integrated test
     script and extract the lines to 'RUN' as well as 'XFAIL' and 'XTARGET'
     information. The RUN lines also will have variable substitution performed.
@@ -329,6 +367,8 @@ def parseIntegratedTestScript(test, xfailHasColon, requireAndAnd):
     execpath = test.getExecPath()
     execdir,execbase = os.path.split(execpath)
     tmpBase = os.path.join(execdir, 'Output', execbase)
+    if test.index is not None:
+        tmpBase += '_%d' % test.index
 
     # We use #_MARKER_# to hide %% while we do the other substitutions.
     substitutions = [('%%', '#_MARKER_#')]
@@ -359,12 +399,9 @@ def parseIntegratedTestScript(test, xfailHasColon, requireAndAnd):
                 script[-1] = script[-1][:-1] + ln
             else:
                 script.append(ln)
-        elif xfailHasColon and 'XFAIL:' in ln:
+        elif 'XFAIL:' in ln:
             items = ln[ln.index('XFAIL:') + 6:].split(',')
             xfails.extend([s.strip() for s in items])
-        elif not xfailHasColon and 'XFAIL' in ln:
-            items = ln[ln.index('XFAIL') + 5:].split(',')
-            xfails.extend([s.strip() for s in items])
         elif 'XTARGET:' in ln:
             items = ln[ln.index('XTARGET:') + 8:].split(',')
             xtargets.extend([s.strip() for s in items])
@@ -390,20 +427,8 @@ def parseIntegratedTestScript(test, xfailHasColon, requireAndAnd):
     if script[-1][-1] == '\\':
         return (Test.UNRESOLVED, "Test has unterminated run lines (with '\\')")
 
-    # Validate interior lines for '&&', a lovely historical artifact.
-    if requireAndAnd:
-        for i in range(len(script) - 1):
-            ln = script[i]
-
-            if not ln.endswith('&&'):
-                return (Test.FAIL,
-                        ("MISSING \'&&\': %s\n"  +
-                         "FOLLOWED BY   : %s\n") % (ln, script[i + 1]))
-
-            # Strip off '&&'
-            script[i] = ln[:-2]
-
-    return script,xfails,xtargets,tmpBase,execdir
+    isXFail = isExpectedFail(xfails, xtargets, test.suite.config.target_triple)
+    return script,isXFail,tmpBase,execdir
 
 def formatTestOutput(status, out, err, exitCode, script):
     output = StringIO.StringIO()
@@ -426,11 +451,11 @@ def executeTclTest(test, litConfig):
     if test.config.unsupported:
         return (Test.UNSUPPORTED, 'Test is unsupported')
 
-    res = parseIntegratedTestScript(test, True, False)
+    res = parseIntegratedTestScript(test)
     if len(res) == 2:
         return res
 
-    script, xfails, xtargets, tmpBase, execdir = res
+    script, isXFail, tmpBase, execdir = res
 
     if litConfig.noExecute:
         return (Test.PASS, '')
@@ -442,19 +467,6 @@ def executeTclTest(test, litConfig):
     if len(res) == 2:
         return res
 
-    isXFail = False
-    for item in xfails:
-        if item == '*' or item in test.suite.config.target_triple:
-            isXFail = True
-            break
-
-    # If this is XFAIL, see if it is expected to pass on this target.
-    if isXFail:
-        for item in xtargets:
-            if item == '*' or item in test.suite.config.target_triple:
-                isXFail = False
-                break
-
     out,err,exitCode = res
     if isXFail:
         ok = exitCode != 0
@@ -468,15 +480,15 @@ def executeTclTest(test, litConfig):
 
     return formatTestOutput(status, out, err, exitCode, script)
 
-def executeShTest(test, litConfig, useExternalSh, requireAndAnd):
+def executeShTest(test, litConfig, useExternalSh):
     if test.config.unsupported:
         return (Test.UNSUPPORTED, 'Test is unsupported')
 
-    res = parseIntegratedTestScript(test, False, requireAndAnd)
+    res = parseIntegratedTestScript(test)
     if len(res) == 2:
         return res
 
-    script, xfails, xtargets, tmpBase, execdir = res
+    script, isXFail, tmpBase, execdir = res
 
     if litConfig.noExecute:
         return (Test.PASS, '')
@@ -492,7 +504,7 @@ def executeShTest(test, litConfig, useExternalSh, requireAndAnd):
         return res
 
     out,err,exitCode = res
-    if xfails:
+    if isXFail:
         ok = exitCode != 0
         status = (Test.XPASS, Test.XFAIL)[ok]
     else:
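The new XFAIL/XTARGET resolution is plain substring matching against the target triple. The function below is copied from the hunk above so its behavior can be checked in isolation:

    def isExpectedFail(xfails, xtargets, target_triple):
        # Any matching XFAIL entry ('*' or a substring of the triple)
        # marks the test expected-to-fail...
        for item in xfails:
            if item == '*' or item in target_triple:
                break
        else:
            return False
        # ...unless an XTARGET entry also matches, which flips it back.
        for item in xtargets:
            if item == '*' or item in target_triple:
                return False
        return True

    assert isExpectedFail(['*'], [], 'x86_64-apple-darwin10')
    assert not isExpectedFail(['arm'], [], 'x86_64-apple-darwin10')
    assert not isExpectedFail(['*'], ['darwin'], 'x86_64-apple-darwin10')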
diff --git a/libclamav/c++/llvm/utils/lit/TestingConfig.py b/libclamav/c++/llvm/utils/lit/TestingConfig.py
index e4874d7..1f5067c 100644
--- a/libclamav/c++/llvm/utils/lit/TestingConfig.py
+++ b/libclamav/c++/llvm/utils/lit/TestingConfig.py
@@ -12,6 +12,7 @@ class TestingConfig:
             environment = {
                 'PATH' : os.pathsep.join(litConfig.path +
                                          [os.environ.get('PATH','')]),
+                'PATHEXT' : os.environ.get('PATHEXT',''),
                 'SYSTEMROOT' : os.environ.get('SYSTEMROOT',''),
                 'LLVM_DISABLE_CRT_DEBUG' : '1',
                 }
diff --git a/libclamav/c++/llvm/utils/lit/Util.py b/libclamav/c++/llvm/utils/lit/Util.py
index e62a8ed..66c5e46 100644
--- a/libclamav/c++/llvm/utils/lit/Util.py
+++ b/libclamav/c++/llvm/utils/lit/Util.py
@@ -15,7 +15,7 @@ def detectCPUs():
             return int(os.popen2("sysctl -n hw.ncpu")[1].read())
     # Windows:
     if os.environ.has_key("NUMBER_OF_PROCESSORS"):
-        ncpus = int(os.environ["NUMBER_OF_PROCESSORS"]);
+        ncpus = int(os.environ["NUMBER_OF_PROCESSORS"])
         if ncpus > 0:
             return ncpus
     return 1 # Default
diff --git a/libclamav/c++/llvm/utils/lit/lit.py b/libclamav/c++/llvm/utils/lit/lit.py
index 5b24286..dcdce7d 100755
--- a/libclamav/c++/llvm/utils/lit/lit.py
+++ b/libclamav/c++/llvm/utils/lit/lit.py
@@ -16,9 +16,13 @@ from TestingConfig import TestingConfig
 import LitConfig
 import Test
 
+# Configuration files to look for when discovering test suites. These can be
+# overridden with --config-prefix.
+#
 # FIXME: Rename to 'config.lit', 'site.lit', and 'local.lit' ?
-kConfigName = 'lit.cfg'
-kSiteConfigName = 'lit.site.cfg'
+gConfigName = 'lit.cfg'
+gSiteConfigName = 'lit.site.cfg'
+
 kLocalConfigName = 'lit.local.cfg'
 
 class TestingProgressDisplay:
@@ -134,10 +138,10 @@ class Tester(threading.Thread):
         self.display.update(test)
 
 def dirContainsTestSuite(path):
-    cfgpath = os.path.join(path, kSiteConfigName)
+    cfgpath = os.path.join(path, gSiteConfigName)
     if os.path.exists(cfgpath):
         return cfgpath
-    cfgpath = os.path.join(path, kConfigName)
+    cfgpath = os.path.join(path, gConfigName)
     if os.path.exists(cfgpath):
         return cfgpath
 
@@ -232,8 +236,8 @@ def getTests(path, litConfig, testSuiteCache, localConfigCache):
         litConfig.note('resolved input %r to %r::%r' % (path, ts.name,
                                                         path_in_suite))
 
-    return getTestsInSuite(ts, path_in_suite, litConfig,
-                           testSuiteCache, localConfigCache)
+    return ts, getTestsInSuite(ts, path_in_suite, litConfig,
+                               testSuiteCache, localConfigCache)
 
 def getTestsInSuite(ts, path_in_suite, litConfig,
                     testSuiteCache, localConfigCache):
@@ -268,24 +272,29 @@ def getTestsInSuite(ts, path_in_suite, litConfig,
         file_sourcepath = os.path.join(source_path, filename)
         if not os.path.isdir(file_sourcepath):
             continue
-        
+
         # Check for nested test suites, first in the execpath in case there is a
         # site configuration and then in the source path.
         file_execpath = ts.getExecPath(path_in_suite + (filename,))
         if dirContainsTestSuite(file_execpath):
-            subiter = getTests(file_execpath, litConfig,
-                               testSuiteCache, localConfigCache)
+            sub_ts, subiter = getTests(file_execpath, litConfig,
+                                       testSuiteCache, localConfigCache)
         elif dirContainsTestSuite(file_sourcepath):
-            subiter = getTests(file_sourcepath, litConfig,
-                               testSuiteCache, localConfigCache)
+            sub_ts, subiter = getTests(file_sourcepath, litConfig,
+                                       testSuiteCache, localConfigCache)
         else:
             # Otherwise, continue loading from inside this test suite.
             subiter = getTestsInSuite(ts, path_in_suite + (filename,),
                                       litConfig, testSuiteCache,
                                       localConfigCache)
-        
+            sub_ts = None
+
+        N = 0
         for res in subiter:
+            N += 1
             yield res
+        if sub_ts and not N:
+            litConfig.warning('test suite %r contained no tests' % sub_ts.name)
 
 def runTests(numThreads, litConfig, provider, display):
     # If only using one testing thread, don't use threads at all; this lets us
@@ -314,6 +323,13 @@ def main():
     parser.add_option("-j", "--threads", dest="numThreads", metavar="N",
                       help="Number of testing threads",
                       type=int, action="store", default=None)
+    parser.add_option("", "--config-prefix", dest="configPrefix",
+                      metavar="NAME", help="Prefix for 'lit' config files",
+                      action="store", default=None)
+    parser.add_option("", "--param", dest="userParameters",
+                      metavar="NAME=VAL",
+                      help="Add 'NAME' = 'VAL' to the user defined parameters",
+                      type=str, action="append", default=[])
 
     group = OptionGroup(parser, "Output Format")
     # FIXME: I find these names very confusing, although I like the
@@ -372,6 +388,9 @@ def main():
     group.add_option("", "--no-tcl-as-sh", dest="useTclAsSh",
                       help="Don't run Tcl scripts using 'sh'",
                       action="store_false", default=True)
+    group.add_option("", "--repeat", dest="repeatTests", metavar="N",
+                      help="Repeat tests N times (for timing)",
+                      action="store", default=None, type=int)
     parser.add_option_group(group)
 
     (opts, args) = parser.parse_args()
@@ -379,11 +398,25 @@ def main():
     if not args:
         parser.error('No inputs specified')
 
+    if opts.configPrefix is not None:
+        global gConfigName, gSiteConfigName
+        gConfigName = '%s.cfg' % opts.configPrefix
+        gSiteConfigName = '%s.site.cfg' % opts.configPrefix
+
     if opts.numThreads is None:
         opts.numThreads = Util.detectCPUs()
 
     inputs = args
 
+    # Create the user defined parameters.
+    userParams = {}
+    for entry in opts.userParameters:
+        if '=' not in entry:
+            name,val = entry,''
+        else:
+            name,val = entry.split('=', 1)
+        userParams[name] = val
+
     # Create the global config object.
     litConfig = LitConfig.LitConfig(progname = os.path.basename(sys.argv[0]),
                                     path = opts.path,
@@ -393,7 +426,8 @@ def main():
                                     useTclAsSh = opts.useTclAsSh,
                                     noExecute = opts.noExecute,
                                     debug = opts.debug,
-                                    isWindows = (platform.system()=='Windows'))
+                                    isWindows = (platform.system()=='Windows'),
+                                    params = userParams)
 
     # Load the tests from the inputs.
     tests = []
@@ -402,7 +436,7 @@ def main():
     for input in inputs:
         prev = len(tests)
         tests.extend(getTests(input, litConfig,
-                              testSuiteCache, localConfigCache))
+                              testSuiteCache, localConfigCache)[1])
         if prev == len(tests):
             litConfig.warning('input %r contained no tests' % input)
 
@@ -413,15 +447,16 @@ def main():
 
     if opts.showSuites:
         suitesAndTests = dict([(ts,[])
-                               for ts,_ in testSuiteCache.values()])
+                               for ts,_ in testSuiteCache.values()
+                               if ts])
         for t in tests:
             suitesAndTests[t.suite].append(t)
 
         print '-- Test Suites --'
         suitesAndTests = suitesAndTests.items()
         suitesAndTests.sort(key = lambda (ts,_): ts.name)
-        for ts,tests in suitesAndTests:
-            print '  %s - %d tests' %(ts.name, len(tests))
+        for ts,ts_tests in suitesAndTests:
+            print '  %s - %d tests' %(ts.name, len(ts_tests))
             print '    Source Root: %s' % ts.source_root
             print '    Exec Root  : %s' % ts.exec_root
 
@@ -440,6 +475,11 @@ def main():
     header = '-- Testing: %d%s tests, %d threads --'%(len(tests),extra,
                                                       opts.numThreads)
 
+    if opts.repeatTests:
+        tests = [t.copyWithIndex(i)
+                 for t in tests
+                 for i in range(opts.repeatTests)]
+
     progressBar = None
     if not opts.quiet:
         if opts.succinct and opts.useProgressBar:
@@ -492,11 +532,16 @@ def main():
         print
 
     if opts.timeTests:
-        byTime = list(tests)
-        byTime.sort(key = lambda t: t.elapsed)
+        # Collate, in case we repeated tests.
+        times = {}
+        for t in tests:
+            key = t.getFullName()
+            times[key] = times.get(key, 0.) + t.elapsed
+
+        byTime = list(times.items())
+        byTime.sort(key = lambda (name,elapsed): elapsed)
         if byTime:
-            Util.printHistogram([(t.getFullName(), t.elapsed) for t in byTime],
-                                title='Tests')
+            Util.printHistogram(byTime, title='Tests')
 
     for name,code in (('Expected Passes    ', Test.PASS),
                       ('Expected Failures  ', Test.XFAIL),
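The --param plumbing above is a plain NAME=VAL split, with a bare NAME mapping to the empty string; the resulting dict lands on LitConfig.params for configs to read. The same parse in isolation:

    def parseUserParams(entries):
        # Mirrors the loop in main(): split once on '=', default value ''.
        userParams = {}
        for entry in entries:
            if '=' not in entry:
                name, val = entry, ''
            else:
                name, val = entry.split('=', 1)
            userParams[name] = val
        return userParams

    assert parseUserParams(['build_mode=Release', 'long_tests']) == \
           {'build_mode': 'Release', 'long_tests': ''}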
diff --git a/libclamav/c++/llvm/utils/unittest/UnitTestMain/Makefile b/libclamav/c++/llvm/utils/unittest/UnitTestMain/Makefile
index aadff21..7b49191 100644
--- a/libclamav/c++/llvm/utils/unittest/UnitTestMain/Makefile
+++ b/libclamav/c++/llvm/utils/unittest/UnitTestMain/Makefile
@@ -10,8 +10,6 @@
 LEVEL = ../../..
 
 include $(LEVEL)/Makefile.config
-NO_MISSING_FIELD_INITIALIZERS := $(shell $(CXX) -Wno-missing-field-initializers -fsyntax-only -xc /dev/null 2>/dev/null && echo -Wno-missing-field-initializers)
-NO_VARIADIC_MACROS := $(shell $(CXX) -Wno-variadic-macros -fsyntax-only -xc /dev/null 2>/dev/null && echo -Wno-variadic-macros)
 
 LIBRARYNAME = UnitTestMain
 BUILD_ARCHIVE = 1
diff --git a/libclamav/c++/llvm/utils/unittest/googletest/Makefile b/libclamav/c++/llvm/utils/unittest/googletest/Makefile
index 29fe679..2d2c282 100644
--- a/libclamav/c++/llvm/utils/unittest/googletest/Makefile
+++ b/libclamav/c++/llvm/utils/unittest/googletest/Makefile
@@ -10,8 +10,6 @@
 LEVEL := ../../..
 
 include $(LEVEL)/Makefile.config
-NO_MISSING_FIELD_INITIALIZERS := $(shell $(CXX) -Wno-missing-field-initializers -fsyntax-only -xc /dev/null 2>/dev/null && echo -Wno-missing-field-initializers)
-NO_VARIADIC_MACROS := $(shell $(CXX) -Wno-variadic-macros -fsyntax-only -xc /dev/null 2>/dev/null && echo -Wno-variadic-macros)
 
 LIBRARYNAME = GoogleTest
 BUILD_ARCHIVE = 1
diff --git a/libclamav/c++/llvm/utils/unittest/googletest/include/gtest/internal/gtest-port.h b/libclamav/c++/llvm/utils/unittest/googletest/include/gtest/internal/gtest-port.h
index 6a1593e..3e49993 100644
--- a/libclamav/c++/llvm/utils/unittest/googletest/include/gtest/internal/gtest-port.h
+++ b/libclamav/c++/llvm/utils/unittest/googletest/include/gtest/internal/gtest-port.h
@@ -185,6 +185,8 @@
 #define GTEST_OS_ZOS
 #elif defined(__sun) && defined(__SVR4)
 #define GTEST_OS_SOLARIS
+#elif defined(__HAIKU__)
+#define GTEST_OS_HAIKU
 #endif  // _MSC_VER
 
 // Determines whether ::std::string and ::string are available.
@@ -225,7 +227,7 @@
 // TODO(wan at google.com): uses autoconf to detect whether ::std::wstring
 //   is available.
 
-#if defined(GTEST_OS_CYGWIN) || defined(GTEST_OS_SOLARIS)
+#if defined(GTEST_OS_CYGWIN) || defined(GTEST_OS_SOLARIS) || defined(GTEST_OS_HAIKU)
 // At least some versions of cygwin don't support ::std::wstring.
 // Solaris' libc++ doesn't support it either.
 #define GTEST_HAS_STD_WSTRING 0
diff --git a/libclamav/c++/llvm/utils/vim/llvm.vim b/libclamav/c++/llvm/utils/vim/llvm.vim
index 2cc266b..451013e 100644
--- a/libclamav/c++/llvm/utils/vim/llvm.vim
+++ b/libclamav/c++/llvm/utils/vim/llvm.vim
@@ -34,7 +34,7 @@ syn keyword llvmStatement phi call select shl lshr ashr va_arg
 syn keyword llvmStatement trunc zext sext
 syn keyword llvmStatement fptrunc fpext fptoui fptosi uitofp sitofp
 syn keyword llvmStatement ptrtoint inttoptr bitcast
-syn keyword llvmStatement ret br switch invoke unwind unreachable
+syn keyword llvmStatement ret br indirectbr switch invoke unwind unreachable
 syn keyword llvmStatement malloc alloca free load store getelementptr
 syn keyword llvmStatement extractelement insertelement shufflevector
 syn keyword llvmStatement extractvalue insertvalue
@@ -56,6 +56,7 @@ syn keyword llvmKeyword noredzone noimplicitfloat naked
 syn keyword llvmKeyword module asm align tail to
 syn keyword llvmKeyword addrspace section alias sideeffect c gc
 syn keyword llvmKeyword target datalayout triple
+syn keyword llvmKeyword blockaddress
 
 " Obsolete keywords.
 syn keyword llvmError  uninitialized implementation
diff --git a/libclamav/c++/llvm/lib/Target/MSP430/AsmPrinter/Makefile b/llvm/lib/Target/MSP430/AsmPrinter/Makefile
similarity index 100%
copy from libclamav/c++/llvm/lib/Target/MSP430/AsmPrinter/Makefile
copy to llvm/lib/Target/MSP430/AsmPrinter/Makefile
diff --git a/libclamav/c++/llvm/lib/Target/MSP430/AsmPrinter/Makefile b/llvm/lib/Target/PIC16/AsmPrinter/Makefile
similarity index 100%
copy from libclamav/c++/llvm/lib/Target/MSP430/AsmPrinter/Makefile
copy to llvm/lib/Target/PIC16/AsmPrinter/Makefile

-- 
Debian repository for ClamAV


